OpenStack Juno Scripted Install with Neutron on CentOS 7

By Brian Seltzer | November 24, 2014

Note: there are newer versions of this article: OpenStack Newton – Quick Install on CentOS 7 and also Scripted Installation of OpenStack Mitaka

This article provides scripts to simplify the installation of OpenStack Juno, including the Neutron networking component, onto a minimum number of servers for testing. If you’ve done a manual install by following the official documentation, you know there’s a large number of steps, and a basic install can take hours. Given the complexity of even a basic environment, there are a lot of places where you can make a typo or enter the wrong IP address in a configuration file. The goal of these scripts is to make it quick and easy to stand up a new stack in a repeatable fashion.

In my previous article, OpenStack Juno Scripted Installation on CentOS 7, I used the legacy nova-network networking component. This is about as simple as it gets, requiring only two servers and two subnets. There’s also a network performance benefit for the instances, since they are “plugged into” a virtual bridge that is uplinked directly to the physical network.

Now, using neutron, we need three servers and three subnets. Instances are “plugged into” an Open vSwitch (OVS) bridge on the compute node. The instance traffic is then encapsulated in a GRE tunnel and shipped over to the network node, where it is routed (by the neutron L3 agent) to the physical network. The network performance in this case is dramatically worse than with the nova-network scenario. I understand that neutron is required for a multi-tenant, secure and scalable cloud. Just keep this in mind if your goal is to build a simple private cloud that performs well. Nuff said.

OK, so we need three servers and three subnets. The basic design is shown in the figure below:

[Figure: neutron three-node network diagram]

We’ve got three servers: a controller node, a network node, and a compute node. All three have a NIC connected to our management network (192.168.1.0/24 in my case), which will provide SSH access to the servers as well as web and API access to the stack. In a customer/public-facing cloud scenario, you’d probably want to separate the SSH traffic onto its own NIC, or otherwise limit SSH traffic to the inside.

The network node will require two additional NICs, one for the external instance traffic (192.168.2.0/24 in my case) where floating IPs will be applied, and one NIC for the tunnel network (10.0.0.0/24 in my case). The tunnel network will carry software-defined networks between the network node and the compute nodes.

The compute node will require one additional NIC for the tunnel network, and will also require a second hard disk for our cinder-volumes.

Let’s allocate some IP addresses for the nodes. Don’t kill yourself configuring all this by hand; we’ve got a script for this! I will assign the following IP addresses for the first NIC on the management network (192.168.1.0/24):

  • juno-controller: 192.168.1.232
  • juno-network: 192.168.1.231
  • juno-compute: 192.168.1.230

I will assign the following tunnel endpoint addresses for the second NIC on the tunnel network (10.0.0.0/24):

  • juno-network: 10.0.0.231
  • juno-compute: 10.0.0.230

The third NIC on the network node will not have an address of its own; floating IPs will use this NIC to reach the outside world (192.168.2.0/24). The instances themselves will be on a software-defined network (10.0.1.0/24) that we’ll define later with neutron.

The Scripts

OK, all of the scripts here rely on a config file, which defines various details of the stack: the IP address of the controller, some key passwords, and the IP configuration of the node that we’re configuring. We’ll use this config file for all three nodes, adjusting the local IP info for each one, while the stack details remain the same. Save the text below as a file named config and copy it to all three nodes.

#stack information
CONTROLLER_IP=192.168.1.232
ADMIN_TOKEN=ADMIN123
SERVICE_PWD=Service123
ADMIN_PWD=password
META_PWD=meta123

#this host IP info
THISHOST_NAME=juno-controller
THISHOST_IP=192.168.1.232
THISHOST_NETMASK=255.255.255.0
THISHOST_GATEWAY=192.168.1.1
THISHOST_DNS=192.168.1.1
THISHOST_TUNNEL_IP=na
THISHOST_TUNNEL_NETMASK=255.255.255.0

The first script we’ll use configures the IP addresses on the node we’re working on. Remember to adjust THISHOST_NAME, THISHOST_IP and THISHOST_TUNNEL_IP in the config file to match the addresses we allocated for each node. Save the text below as ipsetup.sh, copy it to each node, and make it executable (chmod +x ipsetup.sh).

#!/bin/bash

#get config info
source config

#walk the NICs and configure each one by ifindex (2=management, 3=tunnel, 4=external)
for i in $(ls /sys/class/net); do
    NIC=$i
    MY_MAC=$(cat /sys/class/net/$i/address)
    if [ "$(cat /sys/class/net/$i/ifindex)" == '2' ]; then
        #setup the IP configuration for management NIC
        sed -i.bak "s/dhcp/none/g" /etc/sysconfig/network-scripts/ifcfg-$NIC
        sed -i "s/HWADDR/#HWADDR/g" /etc/sysconfig/network-scripts/ifcfg-$NIC
        sed -i "/#HWADDR/a HWADDR=\"$MY_MAC\"" /etc/sysconfig/network-scripts/ifcfg-$NIC
        sed -i "s/UUID/#UUID/g" /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "IPADDR=\"$THISHOST_IP\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "NETMASK=\"$THISHOST_NETMASK\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "GATEWAY=\"$THISHOST_GATEWAY\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "DNS1=\"$THISHOST_DNS\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        mv /etc/sysconfig/network-scripts/ifcfg-$NIC.bak .
    fi
    if [ "$(cat /sys/class/net/$i/ifindex)" == '3' ]; then
        #create config file for Tunnel NIC
        echo "HWADDR=\"$MY_MAC\"" > /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "TYPE=\"Ethernet\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "BOOTPROTO=\"none\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "IPV4_FAILURE_FATAL=\"no\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "NAME=\"$NIC\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "ONBOOT=\"yes\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "IPADDR=\"$THISHOST_TUNNEL_IP\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "NETMASK=\"$THISHOST_TUNNEL_NETMASK\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC

    fi        
    if [ "$(cat /sys/class/net/$i/ifindex)" == '4' ]; then
        #create config file for External NIC
        echo "HWADDR=\"$MY_MAC\"" > /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "TYPE=\"Ethernet\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "BOOTPROTO=\"none\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "IPV4_FAILURE_FATAL=\"no\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "NAME=\"$NIC\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "ONBOOT=\"yes\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
    fi        
done

#setup hostname
echo "$THISHOST_NAME" > /etc/hostname
echo "$THISHOST_IP    $THISHOST_NAME" >> /etc/hosts

After running the ipsetup.sh script on each node, reboot the node. Next, we’ve got a script for each node which will install and configure the OpenStack packages.
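For reference, the ipsetup.sh step on each node boils down to something like this (this assumes config and ipsetup.sh sit together in your working directory, and that you’ve already edited config for the node you’re on):

chmod +x ipsetup.sh
./ipsetup.sh    #reads ./config and rewrites the ifcfg files, hostname and /etc/hosts
reboot          #pick up the new hostname and IP settings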

The Controller Node

The following script installs the basic controller stack, which includes MariaDB, RabbitMQ, Keystone, Glance, the Horizon dashboard, and the API/scheduler components of Nova, Neutron and Cinder. Save the text below to a file named controller-node.sh, make it executable, and run it on the controller node.

#!/bin/bash

#get the configuration info
source config

#install ntp
yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd.service

#openstack repos
yum -y install yum-plugin-priorities
yum -y install epel-release
yum -y install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum -y upgrade
#yum -y install openstack-selinux

#loosen things up
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/enforcing/disabled/g' /etc/selinux/config
echo 0 > /sys/fs/selinux/enforce

#install database server
yum -y install mariadb mariadb-server MySQL-python

#edit /etc/my.cnf
sed -i.bak "10i\\
bind-address = $CONTROLLER_IP\n\
default-storage-engine = innodb\n\
innodb_file_per_table\n\
collation-server = utf8_general_ci\n\
init-connect = 'SET NAMES utf8'\n\
character-set-server = utf8\n\
" /etc/my.cnf

#start database server
systemctl enable mariadb.service
systemctl start mariadb.service

echo 'now run through the mysql_secure_installation'
mysql_secure_installation

#create databases
echo 'Enter the new MySQL root password'
mysql -u root -p <<EOF
CREATE DATABASE nova;
CREATE DATABASE cinder;
CREATE DATABASE glance;
CREATE DATABASE keystone;
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$SERVICE_PWD';
FLUSH PRIVILEGES;
EOF

#install messaging service
yum -y install rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

#install keystone
yum -y install openstack-keystone python-keystoneclient

#edit /etc/keystone.conf
sed -i.bak "s/#admin_token=ADMIN/admin_token=$ADMIN_TOKEN/g" /etc/keystone/keystone.conf

sed -i "/\[database\]/a \
connection = mysql://keystone:$SERVICE_PWD@$CONTROLLER_IP/keystone" /etc/keystone/keystone.conf

sed -i "/\[token\]/a \
provider = keystone.token.providers.uuid.Provider\n\
driver = keystone.token.persistence.backends.sql.Token\n" /etc/keystone/keystone.conf

#finish keystone setup
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
su -s /bin/sh -c "keystone-manage db_sync" keystone

#start keystone
systemctl enable openstack-keystone.service
systemctl start openstack-keystone.service

#schedule token purge
(crontab -l -u keystone 2>&1 | grep -q token_flush) || \
  echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
  >> /var/spool/cron/keystone
  
#create users and tenants
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://$CONTROLLER_IP:35357/v2.0
keystone tenant-create --name admin --description "Admin Tenant"
keystone user-create --name admin --pass $ADMIN_PWD
keystone role-create --name admin
keystone user-role-add --tenant admin --user admin --role admin
keystone role-create --name _member_
keystone user-role-add --tenant admin --user admin --role _member_
keystone tenant-create --name demo --description "Demo Tenant"
keystone user-create --name demo --pass password
keystone user-role-add --tenant demo --user demo --role _member_
keystone tenant-create --name service --description "Service Tenant"
keystone service-create --name keystone --type identity \
  --description "OpenStack Identity"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ identity / {print $2}') \
  --publicurl http://$CONTROLLER_IP:5000/v2.0 \
  --internalurl http://$CONTROLLER_IP:5000/v2.0 \
  --adminurl http://$CONTROLLER_IP:35357/v2.0 \
  --region regionOne
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

#create credentials file
echo "export OS_TENANT_NAME=admin" > creds
echo "export OS_USERNAME=admin" >> creds
echo "export OS_PASSWORD=$ADMIN_PWD" >> creds
echo "export OS_AUTH_URL=http://$CONTROLLER_IP:35357/v2.0" >> creds
source creds

#create keystone entries for glance
keystone user-create --name glance --pass $SERVICE_PWD
keystone user-role-add --user glance --tenant service --role admin
keystone service-create --name glance --type image \
  --description "OpenStack Image Service"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ image / {print $2}') \
  --publicurl http://$CONTROLLER_IP:9292 \
  --internalurl http://$CONTROLLER_IP:9292 \
  --adminurl http://$CONTROLLER_IP:9292 \
  --region regionOne

#install glance
yum -y install openstack-glance python-glanceclient

#edit /etc/glance/glance-api.conf
sed -i.bak "/\[database\]/a \
connection = mysql://glance:$SERVICE_PWD@$CONTROLLER_IP/glance" /etc/glance/glance-api.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = glance\n\
admin_password = $SERVICE_PWD" /etc/glance/glance-api.conf

sed -i "/\[paste_deploy\]/a \
flavor = keystone" /etc/glance/glance-api.conf

sed -i "/\[glance_store\]/a \
default_store = file\n\
filesystem_store_datadir = /var/lib/glance/images/" /etc/glance/glance-api.conf

#edit /etc/glance/glance-registry.conf
sed -i.bak "/\[database\]/a \
connection = mysql://glance:$SERVICE_PWD@$CONTROLLER_IP/glance" /etc/glance/glance-registry.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = glance\n\
admin_password = $SERVICE_PWD" /etc/glance/glance-registry.conf

sed -i "/\[paste_deploy\]/a \
flavor = keystone" /etc/glance/glance-registry.conf

#start glance
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

#upload the cirros image to glance
yum -y install wget
wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --is-public True --progress
  
#create the keystone entries for nova
keystone user-create --name nova --pass $SERVICE_PWD
keystone user-role-add --user nova --tenant service --role admin
keystone service-create --name nova --type compute \
  --description "OpenStack Compute"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl http://$CONTROLLER_IP:8774/v2/%\(tenant_id\)s \
  --internalurl http://$CONTROLLER_IP:8774/v2/%\(tenant_id\)s \
  --adminurl http://$CONTROLLER_IP:8774/v2/%\(tenant_id\)s \
  --region regionOne

#install the nova controller components
yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
  python-novaclient

#edit /etc/nova/nova.conf
sed -i.bak "/\[database\]/a \
connection = mysql://nova:$SERVICE_PWD@$CONTROLLER_IP/nova" /etc/nova/nova.conf

sed -i "/\[DEFAULT\]/a \
rpc_backend = rabbit\n\
rabbit_host = $CONTROLLER_IP\n\
auth_strategy = keystone\n\
my_ip = $CONTROLLER_IP\n\
vncserver_listen = $CONTROLLER_IP\n\
vncserver_proxyclient_address = $CONTROLLER_IP\n\
network_api_class = nova.network.neutronv2.api.API\n\
security_group_api = neutron\n\
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver\n\
firewall_driver = nova.virt.firewall.NoopFirewallDriver" /etc/nova/nova.conf

sed -i "/\[keystone_authtoken\]/i \
[database]\nconnection = mysql://nova:$SERVICE_PWD@$CONTROLLER_IP/nova" /etc/nova/nova.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = nova\n\
admin_password = $SERVICE_PWD" /etc/nova/nova.conf

sed -i "/\[glance\]/a host = $CONTROLLER_IP" /etc/nova/nova.conf

sed -i "/\[neutron\]/a \
url = http://$CONTROLLER_IP:9696\n\
auth_strategy = keystone\n\
admin_auth_url = http://$CONTROLLER_IP:35357/v2.0\n\
admin_tenant_name = service\n\
admin_username = neutron\n\
admin_password = $SERVICE_PWD\n\
service_metadata_proxy = True\n\
metadata_proxy_shared_secret = $META_PWD" /etc/nova/nova.conf

#start nova
su -s /bin/sh -c "nova-manage db sync" nova

systemctl enable openstack-nova-api.service openstack-nova-cert.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

#create keystone entries for neutron
keystone user-create --name neutron --pass $SERVICE_PWD
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network \
  --description "OpenStack Networking"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ network / {print $2}') \
  --publicurl http://$CONTROLLER_IP:9696 \
  --internalurl http://$CONTROLLER_IP:9696 \
  --adminurl http://$CONTROLLER_IP:9696 \
  --region regionOne

#install neutron
yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient which

#edit /etc/neutron/neutron.conf
sed -i.bak "/\[database\]/a \
connection = mysql://neutron:$SERVICE_PWD@$CONTROLLER_IP/neutron" /etc/neutron/neutron.conf

SERVICE_TENANT_ID=$(keystone tenant-list | awk '/ service / {print $2}')

sed -i '0,/\[DEFAULT\]/s//\[DEFAULT\]\
rpc_backend = rabbit\
rabbit_host = '"$CONTROLLER_IP"'\
auth_strategy = keystone\
core_plugin = ml2\
service_plugins = router\
allow_overlapping_ips = True\
notify_nova_on_port_status_changes = True\
notify_nova_on_port_data_changes = True\
nova_url = http:\/\/'"$CONTROLLER_IP"':8774\/v2\
nova_admin_auth_url = http:\/\/'"$CONTROLLER_IP"':35357\/v2.0\
nova_region_name = regionOne\
nova_admin_username = nova\
nova_admin_tenant_id = '"$SERVICE_TENANT_ID"'\
nova_admin_password = '"$SERVICE_PWD"'/' /etc/neutron/neutron.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = neutron\n\
admin_password = $SERVICE_PWD" /etc/neutron/neutron.conf

#edit /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i "/\[ml2\]/a \
type_drivers = flat,gre\n\
tenant_network_types = gre\n\
mechanism_drivers = openvswitch" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[ml2_type_gre\]/a \
tunnel_id_ranges = 1:1000" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[securitygroup\]/a \
enable_security_group = True\n\
enable_ipset = True\n\
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver" /etc/neutron/plugins/ml2/ml2_conf.ini

#start neutron
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service
systemctl enable neutron-server.service
systemctl start neutron-server.service

#install dashboard
yum -y install openstack-dashboard httpd mod_wsgi memcached python-memcached

#edit /etc/openstack-dashboard/local_settings
sed -i.bak "s/ALLOWED_HOSTS = \['horizon.example.com', 'localhost'\]/ALLOWED_HOSTS = ['*']/" /etc/openstack-dashboard/local_settings
sed -i 's/OPENSTACK_HOST = "127.0.0.1"/OPENSTACK_HOST = "'"$CONTROLLER_IP"'"/' /etc/openstack-dashboard/local_settings

#start dashboard
setsebool -P httpd_can_network_connect on
chown -R apache:apache /usr/share/openstack-dashboard/static
systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service

#create keystone entries for cinder
keystone user-create --name cinder --pass $SERVICE_PWD
keystone user-role-add --user cinder --tenant service --role admin
keystone service-create --name cinder --type volume \
  --description "OpenStack Block Storage"
keystone service-create --name cinderv2 --type volumev2 \
  --description "OpenStack Block Storage"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl http://$CONTROLLER_IP:8776/v1/%\(tenant_id\)s \
  --internalurl http://$CONTROLLER_IP:8776/v1/%\(tenant_id\)s \
  --adminurl http://$CONTROLLER_IP:8776/v1/%\(tenant_id\)s \
  --region regionOne
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl http://$CONTROLLER_IP:8776/v2/%\(tenant_id\)s \
  --internalurl http://$CONTROLLER_IP:8776/v2/%\(tenant_id\)s \
  --adminurl http://$CONTROLLER_IP:8776/v2/%\(tenant_id\)s \
  --region regionOne

#install cinder controller
yum -y install openstack-cinder python-cinderclient python-oslo-db

#edit /etc/cinder/cinder.conf
sed -i.bak "/\[database\]/a connection = mysql://cinder:$SERVICE_PWD@$CONTROLLER_IP/cinder" /etc/cinder/cinder.conf

sed -i "/\[DEFAULT\]/a \
rpc_backend = rabbit\n\
rabbit_host = $CONTROLLER_IP\n\
auth_strategy = keystone\n\
my_ip = $CONTROLLER_IP" /etc/cinder/cinder.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = cinder\n\
admin_password = $SERVICE_PWD" /etc/cinder/cinder.conf

#start cinder controller
su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Now reboot the controller and it should be up and running.
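Before moving on, a quick sanity check from the controller doesn’t hurt. Something along these lines (using the creds file the script wrote in the working directory) should show the registered services, the cirros image, and the nova control services in an up state:

source creds
keystone service-list   #expect identity, image, compute, network, volume and volumev2
glance image-list       #expect cirros-0.3.3-x86_64
nova service-list       #expect nova-cert, consoleauth, scheduler and conductor up on juno-controller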

The Network Node

The next script configures the network node, which runs the majority of the neutron services and carries the network traffic of the instances, coming in over the tunnel network from the compute node(s) and being routed out to the external network. Save the text below as network-node.sh, make it executable, and run it on the network node.

#!/bin/bash

source config

#install ntp
yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd.service

#openstack repos
yum -y install yum-plugin-priorities
yum -y install epel-release
yum -y install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum -y upgrade
#yum -y install openstack-selinux

#loosen things up
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/enforcing/disabled/g' /etc/selinux/config
echo 0 > /sys/fs/selinux/enforce

#get tunnel NIC info
for i in $(ls /sys/class/net); do
    if [ "$(cat /sys/class/net/$i/ifindex)" == '3' ]; then
        NIC=$i
        MY_MAC=$(cat /sys/class/net/$i/address)
        echo "$i ($MY_MAC)"
    fi
done

echo 'export OS_TENANT_NAME=admin' > creds
echo 'export OS_USERNAME=admin' >> creds
echo 'export OS_PASSWORD='"$ADMIN_PWD" >> creds
echo 'export OS_AUTH_URL=http://'"$CONTROLLER_IP"':35357/v2.0' >> creds
source creds

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
echo 'net.ipv4.conf.all.rp_filter=0' >> /etc/sysctl.conf
echo 'net.ipv4.conf.default.rp_filter=0' >> /etc/sysctl.conf
sysctl -p

#install neutron
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

sed -i '0,/\[DEFAULT\]/s//\[DEFAULT\]\
rpc_backend = rabbit\
rabbit_host = '"$CONTROLLER_IP"'\
auth_strategy = keystone\
core_plugin = ml2\
service_plugins = router\
allow_overlapping_ips = True/' /etc/neutron/neutron.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = neutron\n\
admin_password = $SERVICE_PWD" /etc/neutron/neutron.conf

#edit /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i "/\[ml2\]/a \
type_drivers = flat,gre\n\
tenant_network_types = gre\n\
mechanism_drivers = openvswitch" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[ml2_type_flat\]/a \
flat_networks = external" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[ml2_type_gre\]/a \
tunnel_id_ranges = 1:1000" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[securitygroup\]/a \
enable_security_group = True\n\
enable_ipset = True\n\
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver\n\
[ovs]\n\
local_ip = $THISHOST_TUNNEL_IP\n\
enable_tunneling = True\n\
bridge_mappings = external:br-ex\n\
[agent]\n\
tunnel_types = gre" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[DEFAULT\]/a \
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver\n\
use_namespaces = True\n\
external_network_bridge = br-ex" /etc/neutron/l3_agent.ini

sed -i "/\[DEFAULT\]/a \
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver\n\
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq\n\
use_namespaces = True" /etc/neutron/dhcp_agent.ini

sed -i "s/auth_url/#auth_url/g" /etc/neutron/metadata_agent.ini
sed -i "s/auth_region/#auth_region/g" /etc/neutron/metadata_agent.ini
sed -i "s/admin_tenant_name/#admin_tenant_name/g" /etc/neutron/metadata_agent.ini
sed -i "s/admin_user/#admin_user/g" /etc/neutron/metadata_agent.ini
sed -i "s/admin_password/#admin_password/g" /etc/neutron/metadata_agent.ini

sed -i "/\[DEFAULT\]/a \
auth_url = http://$CONTROLLER_IP:5000/v2.0\n\
auth_region = regionOne\n\
admin_tenant_name = service\n\
admin_user = neutron\n\
admin_password = $SERVICE_PWD\n\
nova_metadata_ip = $CONTROLLER_IP\n\
metadata_proxy_shared_secret = $META_PWD" /etc/neutron/metadata_agent.ini

#get external NIC info
for i in $(ls /sys/class/net); do
    if [ "$(cat /sys/class/net/$i/ifindex)" == '4' ]; then
        NIC=$i
        MY_MAC=$(cat /sys/class/net/$i/address)
        echo "$i ($MY_MAC)"
    fi
done

systemctl enable openvswitch.service
systemctl start openvswitch.service
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex $NIC
ethtool -K $NIC gro off

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
  /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
  /usr/lib/systemd/system/neutron-openvswitch-agent.service

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service \
  neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service

After the script has run, reboot the network node.
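Once the network node is back up, you can check from the controller that its agents have registered with neutron, something like:

source creds
neutron agent-list   #expect Open vSwitch, L3, DHCP and Metadata agents on juno-network, all alive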

The Compute Node

Next we configure the compute node, which will run the QEMU/KVM hypervisor, nova-compute, cinder-volume, and the neutron Open vSwitch (OVS) agent that connects the instances to the network node for routing to the external network. Save the text below to a file named compute-node.sh, make it executable, and run it on the compute node.

#!/bin/bash

source config

#install ntp
yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd.service

#openstack repos
yum -y install yum-plugin-priorities
yum -y install epel-release
yum -y install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum -y upgrade
#yum -y install openstack-selinux

#loosen things up
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/enforcing/disabled/g' /etc/selinux/config
echo 0 > /sys/fs/selinux/enforce

echo 'net.ipv4.conf.all.rp_filter=0' >> /etc/sysctl.conf
echo 'net.ipv4.conf.default.rp_filter=0' >> /etc/sysctl.conf
sysctl -p

#get tunnel NIC info
for i in $(ls /sys/class/net); do
    if [ "$(cat /sys/class/net/$i/ifindex)" == '3' ]; then
        NIC=$i
        MY_MAC=$(cat /sys/class/net/$i/address)
        echo "$i ($MY_MAC)"
    fi
done

#nova compute
yum -y install openstack-nova-compute sysfsutils libvirt-daemon-config-nwfilter

sed -i.bak "/\[DEFAULT\]/a \
rpc_backend = rabbit\n\
rabbit_host = $CONTROLLER_IP\n\
auth_strategy = keystone\n\
my_ip = $THISHOST_IP\n\
vnc_enabled = True\n\
vncserver_listen = 0.0.0.0\n\
vncserver_proxyclient_address = $THISHOST_IP\n\
novncproxy_base_url = http://$CONTROLLER_IP:6080/vnc_auto.html\n\
network_api_class = nova.network.neutronv2.api.API\n\
security_group_api = neutron\n\
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver\n\
firewall_driver = nova.virt.firewall.NoopFirewallDriver" /etc/nova/nova.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = nova\n\
admin_password = $SERVICE_PWD" /etc/nova/nova.conf

sed -i "/\[glance\]/a host = $CONTROLLER_IP" /etc/nova/nova.conf

#if compute node is virtual - change virt_type to qemu
if [ $(egrep -c '(vmx|svm)' /proc/cpuinfo) == "0" ]; then
    sed -i '/\[libvirt\]/a virt_type = qemu' /etc/nova/nova.conf
fi

#install neutron
yum -y install openstack-neutron-ml2 openstack-neutron-openvswitch

sed -i '0,/\[DEFAULT\]/s//\[DEFAULT\]\
rpc_backend = rabbit\
rabbit_host = '"$CONTROLLER_IP"'\
auth_strategy = keystone\
core_plugin = ml2\
service_plugins = router\
allow_overlapping_ips = True/' /etc/neutron/neutron.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = neutron\n\
admin_password = $SERVICE_PWD" /etc/neutron/neutron.conf

#edit /etc/neutron/plugins/ml2/ml2_conf.ini
sed -i "/\[ml2\]/a \
type_drivers = flat,gre\n\
tenant_network_types = gre\n\
mechanism_drivers = openvswitch" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[ml2_type_gre\]/a \
tunnel_id_ranges = 1:1000" /etc/neutron/plugins/ml2/ml2_conf.ini

sed -i "/\[securitygroup\]/a \
enable_security_group = True\n\
enable_ipset = True\n\
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver\n\
[ovs]\n\
local_ip = $THISHOST_TUNNEL_IP\n\
enable_tunneling = True\n\
[agent]\n\
tunnel_types = gre" /etc/neutron/plugins/ml2/ml2_conf.ini

systemctl enable openvswitch.service
systemctl start openvswitch.service

sed -i "/\[neutron\]/a \
url = http://$CONTROLLER_IP:9696\n\
auth_strategy = keystone\n\
admin_auth_url = http://$CONTROLLER_IP:35357/v2.0\n\
admin_tenant_name = service\n\
admin_username = neutron\n\
admin_password = $SERVICE_PWD" /etc/nova/nova.conf

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
  /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
  /usr/lib/systemd/system/neutron-openvswitch-agent.service

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service
systemctl start openstack-nova-compute.service
systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service

#cinder storage node
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

yum -y install openstack-cinder targetcli python-oslo-db MySQL-python

sed -i.bak "/\[database\]/a connection = mysql://cinder:$SERVICE_PWD@$CONTROLLER_IP/cinder" /etc/cinder/cinder.conf
sed -i '0,/\[DEFAULT\]/s//\[DEFAULT\]\
rpc_backend = rabbit\
rabbit_host = '"$CONTROLLER_IP"'\
auth_strategy = keystone\
my_ip = '"$THISHOST_IP"'\
iscsi_helper = lioadm/' /etc/cinder/cinder.conf
sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = cinder\n\
admin_password = $SERVICE_PWD" /etc/cinder/cinder.conf

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

echo 'export OS_TENANT_NAME=admin' > creds
echo 'export OS_USERNAME=admin' >> creds
echo 'export OS_PASSWORD='"$ADMIN_PWD" >> creds
echo 'export OS_AUTH_URL=http://'"$CONTROLLER_IP"':35357/v2.0' >> creds
source creds

When the script is complete, reboot the compute node.
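Once the compute node is back up, a quick check from the controller should show the new hypervisor and its services, for example:

source creds
nova service-list       #nova-compute on juno-compute should show up
nova hypervisor-list    #juno-compute should be listed
cinder service-list     #cinder-volume on juno-compute should show up
neutron agent-list      #an Open vSwitch agent should now be listed for juno-compute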

Defining Neutron Networks

OK, just when you thought we had way too many networks, we need to create another one. After all, the goal of neutron is to create a scalable network infrastructure for your cloud, so that tenants can create their own complex cloud networks. So for our admin user, we will create one network (10.0.1.0/24). Instances running on the compute node will be “plugged” into this network; their traffic will be tunneled over to the network node, where a neutron router will route it to the external network (192.168.2.0/24). There, a floating IP can be assigned to the instance so that you can reach it from the outside.

Note that ext-net and ext-subnet are shared items that will be used by all tenants to access the external network. By contrast, admin-net, admin-subnet and admin-router are private and will be used only by the admin user. To create these private networks for another user, you would have to change the creds file to contain the credentials for that user before running the script, or create these items in the GUI while logged on as that user (worthy of a separate post I think).
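For example, a creds file for the demo user might look like the one below (I’m assuming the demo password of password set in controller-node.sh and my example controller IP; adjust both to match your stack):

export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.168.1.232:5000/v2.0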

We also have to define the external network itself, so that neutron understands everything end to end. This last script creates the external network, the tenant network, and sets up a router between the two. Save the text below as make-network.sh, make it executable, and run it on any node.

#!/bin/bash
source creds

neutron net-create ext-net --shared --router:external True \
--provider:physical_network external --provider:network_type flat

neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=192.168.2.200,end=192.168.2.220 \
--disable-dhcp --gateway 192.168.2.254 192.168.2.0/24

neutron net-create admin-net

neutron subnet-create admin-net --name admin-subnet \
--dns-nameserver 192.168.1.1 \
--gateway 10.0.1.1 10.0.1.0/24

neutron router-create admin-router

neutron router-interface-add admin-router admin-subnet

neutron router-gateway-set admin-router ext-net

It’s important to understand what’s going on here. First we create the external network, called ext-net. Next, we define a subnet on ext-net (192.168.2.0/24) and allocate a range of addresses (192.168.2.200 through 192.168.2.220 in my case) for floating IPs. Next, we create a tenant network called admin-net and a subnet on that network (10.0.1.0/24). Finally, we create a neutron router with two interfaces: one gets the gateway address of the tenant subnet (10.0.1.1), and the other gets the first address allocated on the external network (192.168.2.200).
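To prove the whole thing out (this is not part of the scripts above, just a rough sketch), you can open up the default security group, boot a cirros instance on admin-net, and attach a floating IP; the floating IP shown at the end is only an example of what the allocation might return, so substitute the address you actually get:

source creds
#allow ping and ssh in the default security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
#boot a test instance on the admin-net tenant network
ADMIN_NET_ID=$(neutron net-list | awk '/ admin-net / {print $2}')
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=$ADMIN_NET_ID test1
#allocate a floating IP on ext-net, then associate it with the instance
neutron floatingip-create ext-net
nova floating-ip-associate test1 192.168.2.201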

Growing the Stack

You can simply add more compute nodes: just build another server with two NICs and two disks, give it its own entries in the config file (an example follows below), and run ipsetup.sh and compute-node.sh on the new node. You can also create as many tenant networks as you like.
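For example, the config file for a hypothetical second compute node (call it juno-compute2, with made-up addresses following the pattern above) could look like this; the stack details stay the same and only the host-specific values change:

#stack information
CONTROLLER_IP=192.168.1.232
ADMIN_TOKEN=ADMIN123
SERVICE_PWD=Service123
ADMIN_PWD=password
META_PWD=meta123

#this host IP info
THISHOST_NAME=juno-compute2
THISHOST_IP=192.168.1.229
THISHOST_NETMASK=255.255.255.0
THISHOST_GATEWAY=192.168.1.1
THISHOST_DNS=192.168.1.1
THISHOST_TUNNEL_IP=10.0.0.229
THISHOST_TUNNEL_NETMASK=255.255.255.0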

Good luck stackers!


70 thoughts on “OpenStack Juno Scripted Install with Neutron on CentOS 7”

  1. Debs

    Hi Brian, first of thanks very much for putting all this together. I am an extremely new stacker – like started two days ago. I ran thru your entire dialog above and it worked to the point where I can attempt to deploy an instance. I checked nova, glance, keystone and neutron using basic commands such as image-list or user-list and all looks good. However I can’t seem to deploy an instance.

    I keep getting the following error

    Error: Failed to launch instance “test3”: Please try again later [Error: No valid host was found. ].

    and having googled around I found this in the log file

    ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s) 2014-11-29 11:36:58.939 4732 TRACE nova.compute.manager Traceback (most recent call last):

    Can you possibly shed some light?

    Many thanks, Debbie

    Reply
    1. Brian Seltzer Post author

      I’m assuming you created a network? If so, it may be that somewhere along the line your neutron database got into an inconsistent state. Try repopulating your neutron database (run this command on the controller node):

      su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

      Reply
      1. Debbie Harris

        Hi Brian, thanks for responding. I worked out what it was after a fair amount of googling and checking the log file more thoroughly.

        A few lines below the error showed something like can’t add to external interface – or something like that. Which made me think realise I can’t see my demo network when I looked in the network tab of the provisioning

        So I realised the network wasn’t shared and therefore I could not choose it. I added this to the script and all worked fine… I hope it was the right thing to do?

        # neutron net-create --shared demo-net

        Thanks again,

        Debbie

        Reply
        1. Brian Seltzer Post author

          You can do that, but it sort of defeats the purpose of neutron. Sharing the user network means that multiple tenants can put their instances on the same network, thus creating a security concern. The demo-net I create in the script is for use by the admin user. Sorry I guess it was poorly named. To create a network for the “demo” user, you can either create a creds file for the demo user, source it and run the network creation script at the command line, or create the various network elements through the GUI interface while logged on as demo. I just updated the script to use the name admin-net to clarify.

          Reply
  2. Gerwin

    Hi Brian,

    Thanks for the scripts.
    It was very helpful. I found some typo’s tho 🙂

    You might want to change the mysql connection string in the controller_node script.
    Right now it sais:
    mysql://nova:Service123@$CONTROLLER_IP/nova

    Note that the password is included. If you change the password in the config, you’ll get an error after the installation. It can’t connect to the db anymore.

    Also the epel-release-7-2.noarch.rpm isn’t available anymore.
    You might want to change it to: epel-release-7-5.noarch.rpm

    Regards,
    Gerwin

    Reply
  3. galvezjavier

    Hi Brian, amazin blog. I have tried your guied HA Openstack ( Ubuntu icehouse) deployment, and lookign at this post, with Neutron, do you think is possible to build a HA network node ? I think following your previous post and looking at your scripts, it is possible to deploy a HA controller, but not sure about the network node.

    Reply
    1. Brian Seltzer Post author

      As far as I know, you can’t make Neutron highly available, at least not in an Active-active way. You may be able to do it active-passive, with a solution like pacemaker-corosync.

      Reply
  4. Dheeraj

    Hi Brian Seltzer,

    First of all thanks for your script and explaining the openstack in easiest way.
    i am getting one error in keystone end point creation, the error is looks like :
    "keystone endpoint-create: error: argument --service/--service-id/--service_id: expected one argument"

    I am stack here.
    How can i fix this problem.

    Many many thanks,

    Regards,
    Dheeraj

    Reply
    1. Brian Seltzer Post author

      Assuming that you didn’t make any changes with the script, and you started from a clean CentOS 7 build, I would suspect that your config file isn’t setup correctly. If there was a problem with the script, I would expect more readers to be complaining. In any case, to troubleshoot endpoint creation, I would first check that the service (that you’re trying to create an endpoint for) exists. Use the command keystone service-list and see if that service exists. If not, then try creating it by hand, and look back in the script to the place where the service is created and see if there’s a problem with the command.

      Reply
      1. Dheeraj

        Thanks Brian,

        I resolve the problem.
        My problem is at MySQL table;
        In your script you are installing “yum -y install mariadb mariadb-server MySQL-python”
        It create problem with mariadb and MySQL file and it shows service load failed.
        to come out this problem first i install “yum -y install mysql*”
        then i install “yum -y install mariadb mariadb-server MySQL-python” using your script.

        And instead of using this update
        “yum -y install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch
        you can use simply
        “yum -y install epel-release”
        this will install new version of epel (no need to thing about version of epel-release).

        Reply
  5. martin

    Hello Brian, I’m trying to put together a openstack (i actually build two), and both gives me the same error. I’ve tried at first to build a stack with the document (juno documentation with centos 7) and then recreated another one with the sames nodes (compute/network/controller) like the one you are suggesting and followed your procedure (scripts). When I try to lunch an instance (cirros image), I get the same error on both stack:

    NovaException: Unexpected vif_type=binding_failed

    I’ve passed a lot of hours trying to find anything but I can’t find anything.
    Would I have missed anything on both stack? Something network related?

    Thanks a lot for any help that you can provide me.

    Reply
    1. Brian Seltzer Post author

      Sorry, I recently updated the scripts due to the upgrade of elrepo from version 7.2 to 7.5, but I forgot to update the script for the network node. Once I fixed that, everything worked flawlessly. Please try rebuilding your network node now that the script is fixed. Hope that helps!

      Reply
  6. martin

    My error was that I was not using the good network.
    Choosing ext-net gives me the error.

    Other that that, I had to specified the glance_host=controller in the cinder conf on the compute node. If not, i wasn’t able to create a volume with the image when creating an instance.

    Reply
  7. John

    Your scripts carped all over the place when I first tired them….seems the ‘epel-release-7.2.noarch.rpm’ doesn’t exist anymore — its now 7.5. Once I made that change to the scripts, everything looked like it installed correctly. Tired firing up the cirros image you so thoughtfully added in, but no joy — no IP assigned, and even after I manually added it, can’t ping or talk to anything.

    I keep running into networking problems with neutron. It just doesn’t seem to work without a lot of threats and cursing. I can talk to the servers just fine, but try and get anything at all out of a VM just doesn’t seem to happen. Juno now has a ‘default’ security group that looks to allow everything, but I added specific ports on ingress and still no joy 🙁

    Any pointers about troubleshooting neutron networking?

    Thanks for contributing directions to the masses 🙂

    Reply
    1. Brian Seltzer Post author

      Yeah, someone told me that 7.2 went away, so I fixed that. In order to get into your instance via ssh, you do have to add a security rule for ssh ingress. You’ll also have to assign a floating IP address to the instance after its built. As for troubleshooting, the first thing to do is use the console screen and log onto the instance (most images don’t allow password logon, but the cirros image does). You can then check to see if you got an IP address, or if you can ping out to the network node, the router, etc. You can also try to ssh to the instance (the floating IP) from the network node, just to eliminate any routing issues. Beyond that, I’d need to know more about your network diagram to see what the problem might be. I just fixed the neutron scripts yesterday, so you might want to rebuild your network node unless you just built it today.

      Reply
  8. martin

    Hello Brian, I’ve rebuild all my nodes again (third time) and I’m again stuck at setting the network. Doing it like you gives me an error (No valid host). I did the setup as admin user but creating another network with demo credentials gives me the same error.
    Here is my config if you can shed some light.

    [root@controller ~]# neutron net-list
    +--------------------------------------+---------+---------------------------------------------------+
    | id                                   | name    | subnets                                           |
    +--------------------------------------+---------+---------------------------------------------------+
    | 29243e5d-3f19-455a-af79-aa3f9a55d6be | ext-net | d4d82b8c-ace9-4387-83ae-9bb018ec060b 10.3.14.0/24 |
    +--------------------------------------+---------+---------------------------------------------------+
    [root@controller ~]# neutron subnet-list
    +--------------------------------------+------------+--------------+------------------------------------------------+
    | id                                   | name       | cidr         | allocation_pools                               |
    +--------------------------------------+------------+--------------+------------------------------------------------+
    | d4d82b8c-ace9-4387-83ae-9bb018ec060b | ext-subnet | 10.3.14.0/24 | {"start": "10.3.14.200", "end": "10.3.14.250"} |
    +--------------------------------------+------------+--------------+------------------------------------------------+
    [root@controller ~]# neutron port-list
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
    | id                                   | name | mac_address       | fixed_ips                                                                          |
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
    | 1a653091-8678-45f2-b134-026a060d82d3 |      | fa:16:3e:90:70:66 | {"subnet_id": "d4d82b8c-ace9-4387-83ae-9bb018ec060b", "ip_address": "10.3.14.203"} |
    +--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+

    Reply
    1. Brian Seltzer Post author

      You shouldn’t try to connect an instance to ext-net. The compute node has no connectivity to that network, except through the GRE tunnel to the network node. You should always connect your instance to the virtual network.

      Reply
    1. Brian Seltzer Post author

      A few lines above this point, you were prompted to enter a new password for the MySQL root user. Now, you must use that new password to login to create the OpenStack databases.

      Reply
  9. dheerajchitara

    Hi Brian Seltzer,

    I installed all the three nodes using the script and all node and juno components are running fine.
    My physical machine having two NIC card, on both card i assign these ip respectively: 172.17.16.20 and 172.17.15.172.
    And all the nodes are running on VM.

    My Controller ip is 172.17.15.200 .
    My Network node ip is 172.17.15.201, 10.0.0.231, 172.17.16.50 .
    My Compute node ip is 172.17.15.202 .
    My subnet 1 having ip 172.17.15.0/24 (Web and SSH).
    My subnet 2 having ip 10.0.0.0/24 (GRE Tunnel).
    My subnet 3 having ip 172.17.16.0/24 (Floating IP).

    I create my public network(172.17.16.0/24) and private network(172.17.15.0/24) with router connected to both public and private interface but my route shows internal(private) network status is UP and External(public) network status is DOWN (router gateway and floating ip status DOWN).
    Because of this my instance console is not opening, it’s showing error server disconnected.

    I also try/create ifcfg-br-ex file and add port to OVS bridge but is’s also not works.
    I am totally confused in my network cardinales.

    Thanks,
    Dheeraj

    Reply
    1. Brian Seltzer Post author

      I don’t think your network design is correct. 172.17.15.0/24 is your management network (Web/SSH). We do not use this network within neutron at all. So you can’t use that as your private network. Your external floating IP network 172.17.16.0/24 is fine (let’s call it ext-net). Now you need to create a private network, say 10.0.1.0/24 (let’s call it demo-net) and create a neutron router to route demo-net to ext-net. So where you mentioned 3 networks in your design, you really need 4 (management, external, GRE tunnel, and private). Hope that helps!

      Reply
  10. compendius

    Hi, this is very helpful. I have got it working.
    I also tried with a custom ‘packstack’ answer file , which worked, but I noticed that it does not implement ML2 like you do on the compute/network node (only the controller node). You also include the ‘fix’ to point to ML2 rather that openvswitch (alter /usr/lib/systemd/system/neutron-openvswitch-agent.service), which I have seen referenced elsewhere.
    So this brings me onto DVR. I cannot get distributed virtual routing working. I can create the DVR and see the new namespace on the compute node but I cannot reach my instances, (my instances do not get dhcp)
    Have you managed to get DVR working in a three node cluster as above. If so do you have any tips?
    I think for DVR to work you need the l3 agent and ML2 running on the compute/network node?
    I followed this –

    https://kimizhang.wordpress.com/2014/11/25/building-redundant-and-distributed-l3-network-in-juno/

    Thanks

    Reply
  11. compendius

    Hi,

    Thanks for the quick reply.
    That may be a great spot by you. Indeed it would look like you need a dhcp agent on each compute node which would explain my issue. I will test.

    Cheers

    Reply
  12. compendius

    Hi,

    I got it working (dvr). There is not need to have a dhcp agent on the compute nodes. What is needed however is meaningful config in the l3agent.ini and ml2_conf.ini files referring to dvr on the controller node aswell as the network node. These nodes both need roughly the same parameters in both files then all worked. Thanks

    Reply
  13. compendius

    I will clean it up and post a link to the entire setup here. May take a bit of time

    Reply
  14. Daniel Ruiz

    Hi,

    Your scripts are really good work!!!! …But I’m a bit confused about how I must reconfigure your scripts to apply in my scenario. I’m going to explain it to you:
    I have a HPC with 33 computers: 1 server with 3 physical NICs and 32 computes with 2 physical NICs. Controller and network server are the same server. I think that controller-network server must have configured all of three NICs (IPs too): eth0 as MGMT (192.168.1.0/24), eth1 as DATA (10.0.0.0/24) and eth2 as EXT (192.168.2.0), but I think that EXT must have configured its IP address, because that interface connects internal cluster with external world. However, you say that eth2 doesn’t have to have an IP address.
    Am I wrong with this scenario?
    If you want, I could write you outside this forum for explaining you better my scenario…

    Thanks.

    Reply
    1. Brian Seltzer Post author

      I haven’t tested the scripts with the controller and the network node being on the same server. It may work, but the scripts were created with the idea of having a separate controller and network node. As for the EXT interface needing an IP address, it does not. When you assign a floating IPs to the instances, they will be available via the EXT interface. The interface itself doesn’t need one.

      I would recommend that you use your server with 3 nics as your network node, a 2 nic server as your controller node (only one nic needed), and the remaining 2 nic servers as your compute nodes, then the scripts should work as intended. Hope that helps!

      Reply
  15. martin

    I just created myself another stack with one controller doing also the network and a compute node.
    I must have done something not right on the network configuration of the OS. I have all my bridges DOWN everytime I reboot. If i “ifconfig up” them all, everything is working,

    [root@controller network-scripts]# ip a|grep '^[0-9]'
    1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    2: eno16780032: mtu 1500 qdisc mq state UP qlen 1000
    3: eno33559296: mtu 1500 qdisc mq state UP qlen 1000
    4: eno50338560: mtu 1500 qdisc noop master ovs-system state DOWN qlen 1000
    5: ovs-system: mtu 1500 qdisc noop state DOWN
    6: br-ex: mtu 1500 qdisc noop state DOWN
    7: br-int: mtu 1500 qdisc noop state DOWN
    9: br-tun: mtu 1500 qdisc noop state DOWN

    I’m not sure what to look for.

    Reply
  16. Heinrich

    Thanks for the scripts

    I am getting the following when logged on to the GUI web interface and I don’t know where to start my investigation

    Error: Unable to retrieve container list.
    Error: Unauthorized: Unable to retrieve usage information.
    Error: Unauthorized: Unable to retrieve instances.

    Reply
  17. Arshad

    Hi Brian – really useful posts. Do you have scripts using Neutron with VLAN segmentation instead of GRE tunnels. If not – how much of a change would it be for your scripts? thanks

    Reply
  18. dheerajchitara

    hi Brian Seltzer,
    I am going to implement the Docker on openstack from this blog.

    http://blog.oddbit.com/2015/02/06/installing-nova-docker-on-fedora-21/

    but i am facing some problem can you help me to solve this problem.

    I am installing docker on compute node (CentOS 7) (Open-stack 3 node architecture).
    It’s install properly and running fine and also showing on controller.
    But when I want to lunch the instance from docker image it showing error.
    And In Nova services the host nova-docker status is down.

    == Glance images ==
    +--------------------------------------+--------+-------------+------------------+-----------+--------+
    | ID                                   | Name   | Disk Format | Container Format | Size      | Status |
    +--------------------------------------+--------+-------------+------------------+-----------+--------+
    | 0c5c6867-e72c-4d9d-b8f5-92ef0605137f | centos | raw         | docker           | 232420352 | active |
    | 6e79d693-779b-44b7-b4a6-541f9dd78d59 | cirros | qcow2       | bare             | 13200896  | active |
    +--------------------------------------+--------+-------------+------------------+-----------+--------+

    == Nova managed services ==
    +----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
    | Id | Binary           | Host            | Zone     | Status  | State | Updated_at                 | Disabled Reason |
    +----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+
    | 1  | nova-consoleauth | juno-controller | internal | enabled | up    | 2015-04-01T04:17:22.000000 | -               |
    | 2  | nova-conductor   | juno-controller | internal | enabled | up    | 2015-04-01T04:17:22.000000 | -               |
    | 3  | nova-cert        | juno-controller | internal | enabled | up    | 2015-04-01T04:17:22.000000 | -               |
    | 4  | nova-scheduler   | juno-controller | internal | enabled | up    | 2015-04-01T04:17:22.000000 | -               |
    | 5  | nova-compute     | juno-compute    | nova     | enabled | up    | 2015-04-01T04:17:25.000000 | -               |
    | 6  | nova-compute     | nova-docker     | nova     | enabled | down  | 2015-03-28T06:52:21.000000 | -               |
    +----+------------------+-----------------+----------+---------+-------+----------------------------+-----------------+

    And the error showed in log file is
    Error nova.virt.driver Enable to load virtulization driver.

    Import Error No module named oslo_utils

    Reply
        1. dheerajchitara

          Thanks for your support or suggestion Brian and hojin,

          controller network and compute nodes and services works fine.
          But i am stacked at nova-docker that service status shows “DOWN” that i mentioned above in ==Nova managed services== id:6.
          And oslo_utils error is resolved using provided link by hojin kim.

          Reply
          1. Brian Seltzer Post author

            Does nova-docker write any logs, perhaps in a subdirectory of /var/log? You might find an error that would point to the problem.

          2. dheerajchitara

            Hi Brian Seltzer

            i was install openstack using your script. I can ping all the nodes and openstack services are running OK.
            But I cannot execute the command glance image-list, neutron net-list or other command from
            compute node and these commands executed from controller.
            I thought that syncing between controller and compute node is not proper. So that the docker is not running.
            But without the docker installation other openstack instances are working properly.

            Or in the log file of docker and nova i didn’t find oslo_utils error of virtulization of docker image.
            that error I was already resolve by ppt provided by hojin kim but again its not working and doesn’t show any error.

            I can’t find any solution for this problem. suggest me solution.

            Thanks !! Once Again …!!

          3. Brian Seltzer Post author

            Hi Dheeraj,

            What errors do you get when you try to execute commands from the compute node?

          4. dheerajchitara

            HI Brian,

            I resolved that problem that error generated by virtual box network (Promiscuous Mode).

            But the error was generated by nova boot is
            “ERROR (BadRequest): Multiple possible networks found, use a Network ID to be more pecific.”

            And the log file gives error regarding “nova.compute.manager [-] Bandwidth usage not supported by hypervisor.”

            Thanks Brain for your support.

  19. aouimar mustapha

    It works fine after installation: the dashboard appears on all three nodes, and also on the host device, and I can log in correctly. But after a reboot, the dashboard appears only on the controller node and I can’t log in; it says: “Une erreur s’est produite durant l’authentification. Veuillez recommencer plus tard.” (meaning: an error occurred during authentication, please try again later). Any help please, I will be grateful.

    Reply
  20. Shirley

    Hi Brian,

    I’m a beginner with OpenStack. Thanks for sharing your scripts to build OpenStack; they are very helpful for building the system.

    After I deployed OpenStack with 1 controller, 1 network and 2 compute nodes, I cannot ping the tenant router gateway from a system outside of OpenStack. I’m supposed to be able to ping it, right?

    Do you know what the root cause could be and how I can debug this issue? Which logs can I look at to find the details?

    I checked some configurations in neutron and got a 404 when listing the net-gateway. In your script, you have the command to add the gateway:

    neutron router-gateway-set admin-router ext-net

    So, I don’t know why we don’t have the gateway set. Any idea? And how can I debug it?

    (neutron) gateway-device-list
    Not Found (HTTP 404) (Request-ID: req-67e5c7de-153c-44b5-97e1-e0b0c3298048)
    (neutron) net-external-list
    +--------------------------------------+---------+----------------------------------------------------+
    | id                                   | name    | subnets                                            |
    +--------------------------------------+---------+----------------------------------------------------+
    | 68beffe0-c28a-4b90-9957-f4652a58621f | ext-net | 02674e4f-7200-4a78-afe1-32ceda776395 10.30.33.0/24 |
    +--------------------------------------+---------+----------------------------------------------------+
    (neutron)
    (neutron) net-gateway-list
    Not Found (HTTP 404) (Request-ID: req-ff48109b-b7e8-4cb6-a836-72d72d7799bc)

    (neutron) net-show ext-net
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 68beffe0-c28a-4b90-9957-f4652a58621f |
    | name                      | ext-net                              |
    | provider:network_type     | flat                                 |
    | provider:physical_network | external                             |
    | provider:segmentation_id  |                                      |
    | router:external           | True                                 |
    | shared                    | True                                 |
    | status                    | ACTIVE                               |
    | subnets                   | 02674e4f-7200-4a78-afe1-32ceda776395 |
    | tenant_id                 | 57e5e6b278ed4aecafd4ab9506f55eb5     |
    +---------------------------+--------------------------------------+
    (neutron) net-list
    +--------------------------------------+-----------+-----------------------------------------------------+
    | id                                   | name      | subnets                                             |
    +--------------------------------------+-----------+-----------------------------------------------------+
    | 68beffe0-c28a-4b90-9957-f4652a58621f | ext-net   | 02674e4f-7200-4a78-afe1-32ceda776395 10.30.33.0/24  |
    | 6dec6e69-7385-4044-8c29-040d3ca73fca | admin-net | 8400d030-bbc1-47b5-aac5-a7ada9b685ba 192.168.2.0/24 |
    +--------------------------------------+-----------+-----------------------------------------------------+
    (neutron) net-show admin-net
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 6dec6e69-7385-4044-8c29-040d3ca73fca |
    | name                      | admin-net                            |
    | provider:network_type     | gre                                  |
    | provider:physical_network |                                      |
    | provider:segmentation_id  | 1                                    |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   | 8400d030-bbc1-47b5-aac5-a7ada9b685ba |
    | tenant_id                 | 57e5e6b278ed4aecafd4ab9506f55eb5     |
    +---------------------------+--------------------------------------+
    (neutron) agent-list
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
    | id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
    | 172cc485-4147-4054-b04d-01d6641b404a | DHCP agent         | network  | :-)   | True           | neutron-dhcp-agent        |
    | 58b9d538-d576-4a4d-8bd4-058302fb01d6 | Metadata agent     | network  | :-)   | True           | neutron-metadata-agent    |
    | 681743da-b835-48a6-bbab-16fd3759c6f3 | Open vSwitch agent | network  | :-)   | True           | neutron-openvswitch-agent |
    | ca568a91-77e2-40de-b457-92aab86b46f4 | Open vSwitch agent | compute2 | :-)   | True           | neutron-openvswitch-agent |
    | ce6e37c1-a84b-41c6-a816-34cc51c393be | L3 agent           | network  | :-)   | True           | neutron-l3-agent          |
    | fba2dda0-818e-4d3e-98e7-ba710931f505 | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
    +--------------------------------------+--------------------+----------+-------+----------------+---------------------------+

    Reply
    1. Brian Seltzer Post author

      Don’t forget to modify the security group rules for your admin project to include ICMP ingress, and SSH ingress. By default, you get no access to the VM. You can do this within the web dashboard under project – compute – access & security – security groups – default – manage rules. Hope that helps.
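
      For reference, roughly the same rules can be added from the command line. This is just a sketch (it assumes the default security group and opens the rules to any source with 0.0.0.0/0, so tighten the CIDR if you need to), run with your admin credentials sourced:

      # allow ICMP (ping) and SSH into instances that use the default security group
      nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
      nova secgroup-add-rule default tcp 22 22 0.0.0.0/0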

      Reply
      1. Shirley

        Hi Brian,
        Thanks for the quick reply. You are right; after I added the rules for ICMP and SSH, I am able to access the instances from outside now.

        I have also installed the Orchestration module, but when I tried to create a stack, heat stack-create hung. In the heat log I see the following message:

        2015-04-30 02:56:08.473 8307 ERROR oslo.messaging._drivers.impl_rabbit [req-bc7e9d03-cba7-45a9-b9fa-b7db273a910f ] AMQP server controller:5672 closed the connection. Check login credentials: Socket closed

        Is this the password for the RabbitMQ user? It seems your script doesn’t create this user? Can I add this user in?

        Thanks,
        -Shirley

        Reply
        1. Brian Seltzer Post author

          Correct, I leave the Rabbit password as the default, so I don’t include anything for that in my scripts. If you do set the password, then you’ll have to configure that in the conf files of all of your services.
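
          If you do want to change it, a rough sketch would look something like the following (RABBIT_PASS is a placeholder, and the exact list of conf files depends on which services you installed):

          # on the controller: change the password of the default RabbitMQ account
          rabbitmqctl change_password guest RABBIT_PASS

          # then set the matching value in each service's conf file
          # (nova.conf, neutron.conf, cinder.conf, heat.conf, ...), e.g.:
          # rabbit_password = RABBIT_PASS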

          Reply
      2. Shirley

        Got it. I corrected the password in the config file. It works now. Thanks very much!

        One more question 🙂 Is there an easy way to destroy all the configurations for an Openstack setup? For example, after I deployed the OpenStack, I want to change some of the configurations or reconfigure all the modules. I want to destroy all the configurations and re-run your scripts with some changes to redeploy them. But I don’t want to rebuild the base OS and the network configuration.

        Reply
        1. Brian Seltzer Post author

          That’s a toughie. It’s easy enough to delete the MySQL databases, re-create them and re-create all of the users, services and endpoints, but removing all of the binaries and config files might be a bit of a challenge. In my lab, I use virtual images to quickly create pristine base OS instances. With a little forethought, you could also reserve some space in your LVM volume groups for snapshots, and use them to revert the servers back to their pre-install states. The snapshot space would need to be large enough for all of the space used during the OpenStack installation. Hope that helps.
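
          For the database piece alone, a rough sketch (the database names and the db sync commands here are assumptions based on a typical Juno install; the service users, grants, keystone services and endpoints would also have to be re-created afterwards):

          # drop and re-create the OpenStack databases (run on the controller)
          mysql -u root -p -e "DROP DATABASE IF EXISTS keystone; CREATE DATABASE keystone;"
          mysql -u root -p -e "DROP DATABASE IF EXISTS glance; CREATE DATABASE glance;"
          mysql -u root -p -e "DROP DATABASE IF EXISTS nova; CREATE DATABASE nova;"
          mysql -u root -p -e "DROP DATABASE IF EXISTS neutron; CREATE DATABASE neutron;"
          mysql -u root -p -e "DROP DATABASE IF EXISTS cinder; CREATE DATABASE cinder;"
          # re-grant the service users on each database, then re-run the db syncs, e.g.:
          su -s /bin/sh -c "keystone-manage db_sync" keystone
          su -s /bin/sh -c "nova-manage db sync" nova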

          Reply
  21. Daniel Ruiz

    Hi,

    I’m trying to install Juno in a testing environment with only 2 computers: one will act as controller and network node and the other will act as compute node. Both have 2 network interfaces: eth0 has (and needs to have) a public IP and eth1 has no address.
    I’m using your scripts, after some modifications, but I have the following question: how can I collapse the three interfaces on the network node into only 2 interfaces? Which networks should share an interface: DATA+MGMT or EXT+MGMT?

    Could you help me?

    Thanks.

    Reply
      1. Daniel Ruiz

        Hi,

        With my configuration, instances start OK, but although the dashboard shows each instance has a different IP address, if you connect to the instances (via the dashboard console), eth0 has no IP address and there is no connectivity between instances. I think the problem is that I want eth0 as the public+mgmt interface and eth1 as the data interface (for VMs)… it seems all network traffic is going through eth0 when my purpose is to use eth1.

        Thanks.

        Reply
        1. Brian Seltzer Post author

          You must have a router on your network and three separate IP subnets, one for management, one for external VM traffic, and one for tunnel traffic. If you try to mix and match, you’re going to run into trouble.

          Reply
  22. Shekar

    Brian, this is very helpful. I deployed an OpenStack system (Juno, CentOS 7) on a single node. Everything looks good, but I can’t ping the instances from my OpenStack VM.

    My OpenStack VM’s IP is 172.16.31.160 on 172.16.31.0/24. I’ve created a cirros instance with 10.0.1.2 and assigned floating IP 172.16.31.201 from the allocation pool (172.16.31.200 – 172.16.31.220, external network 172.16.31.0/24). I’ve added security rules for ping and SSH. Can you please help me debug the issue? Here are some of the configs.

    [root@centosstack ~]# ifconfig br-ex
    br-ex: flags=4163 mtu 1500
    inet 172.16.31.160 netmask 255.255.255.0 broadcast 172.16.31.255
    inet6 fe80::20c:29ff:fec9:9a92 prefixlen 64 scopeid 0x20
    ether 00:0c:29:c9:9a:92 txqueuelen 0 (Ethernet)
    RX packets 22124 bytes 2465546 (2.3 MiB)
    RX errors 0 dropped 0 overruns 0 frame 0
    TX packets 14686 bytes 4657802 (4.4 MiB)
    TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    [root@centosstack ~]# nova list
    +--------------------------------------+-----------+--------+------------+-------------+----------------------------------+
    | ID                                   | Name      | Status | Task State | Power State | Networks                         |
    +--------------------------------------+-----------+--------+------------+-------------+----------------------------------+
    | 58ed1045-4f73-4dbe-881b-4c04a94f036b | firstinst | ACTIVE | -          | Running     | demo-net=10.0.1.2, 172.16.31.201 |
    +--------------------------------------+-----------+--------+------------+-------------+----------------------------------+

    [root@centosstack ~]# neutron subnet-list
    +--------------------------------------+-------------+----------------+----------------------------------------------------+
    | id                                   | name        | cidr           | allocation_pools                                   |
    +--------------------------------------+-------------+----------------+----------------------------------------------------+
    | ef6917a9-e563-4471-8ac3-66e7967ce7b6 | demo-subnet | 10.0.1.0/24    | {"start": "10.0.1.2", "end": "10.0.1.254"}         |
    | fb0cc267-8c3b-4549-8c26-a481243786eb | ext-subnet  | 172.16.31.0/24 | {"start": "172.16.31.200", "end": "172.16.31.220"} |
    +--------------------------------------+-------------+----------------+----------------------------------------------------+

    [root@centosstack ~]# neutron subnet-show ext-subnet
    +-------------------+----------------------------------------------------+
    | Field             | Value                                              |
    +-------------------+----------------------------------------------------+
    | allocation_pools  | {"start": "172.16.31.200", "end": "172.16.31.220"} |
    | cidr              | 172.16.31.0/24                                     |
    | dns_nameservers   |                                                    |
    | enable_dhcp       | False                                              |
    | gateway_ip        | 172.16.31.254                                      |
    | host_routes       |                                                    |
    | id                | fb0cc267-8c3b-4549-8c26-a481243786eb               |
    | ip_version        | 4                                                  |
    | ipv6_address_mode |                                                    |
    | ipv6_ra_mode      |                                                    |
    | name              | ext-subnet                                         |
    | network_id        | 1eac04f0-ac68-432b-95c2-ac3173067031               |
    | tenant_id         | 355645d1bca44f4d84f663b269ba821d                   |
    +-------------------+----------------------------------------------------+

    [root@centosstack ~]# nova secgroup-list-rules default
    +-------------+-----------+---------+-----------+--------------+
    | IP Protocol | From Port | To Port | IP Range  | Source Group |
    +-------------+-----------+---------+-----------+--------------+
    |             |           |         |           | default      |
    | tcp         | 22        | 22      | 0.0.0.0/0 |              |
    | icmp        | -1        | -1      | 0.0.0.0/0 |              |
    |             |           |         |           | default      |
    +-------------+-----------+---------+-----------+--------------+
    [root@centosstack ~]# ping 172.16.31.200
    PING 172.16.31.200 (172.16.31.200) 56(84) bytes of data.
    64 bytes from 172.16.31.200: icmp_seq=1 ttl=64 time=2.53 ms
    64 bytes from 172.16.31.200: icmp_seq=2 ttl=64 time=0.062 ms
    ^C
    --- 172.16.31.200 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1000ms
    rtt min/avg/max/mdev = 0.062/1.300/2.539/1.239 ms
    [root@centosstack ~]# ping 172.16.31.201
    PING 172.16.31.201 (172.16.31.201) 56(84) bytes of data.
    From 172.16.31.201 icmp_seq=1 Destination Host Unreachable
    From 172.16.31.201 icmp_seq=2 Destination Host Unreachable
    From 172.16.31.201 icmp_seq=3 Destination Host Unreachable
    From 172.16.31.201 icmp_seq=4 Destination Host Unreachable
    ^C
    --- 172.16.31.201 ping statistics ---

    Reply
    1. Brian Seltzer Post author

      Does the cirros instance actually have an IP address bound to its virtual NIC? You can review the instance logs to see if it received its IP.
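
      For example, something like this (just a sketch; firstinst is the instance name from your nova list output, and cirros prints its DHCP negotiation to the console):

      # dump the instance's console log and look for a DHCP lease
      nova console-log firstinst | grep -i -A2 dhcp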

      Reply
  23. Ian

    Hi

    Tried running your scripts and they implode badly while setting up credentials. Basically just error after error like the one below, pages of them. So I guess something changed in OpenStack and broke everything… **again**

    /usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
    'python-keystoneclient.', DeprecationWarning)
    WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
    Invalid OpenStack Identity credentials.

    Just thought I’d let you know.

    http://specs.openstack.org/openstack/keystone-specs/specs/keystoneclient/deprecate-cli.html

    It’s a shame as this is the only coherent example I’ve found to date that might actually help someone get a working instance up with multiple nodes and see how it is glued together. Very frustrating the way they keep breaking useful stuff ..

    –Ian

    Reply
    1. Brian Seltzer Post author

      If you used the raw text link to the controller script, I just noticed I had an old version of epel-release in there. Fixed now. Give it another try.

      Reply
      1. Ian

        Hi

        Thanks. I’ll wipe it all and run through it again tomorrow.

        Reply
  24. marthamcenteno

    Thank you, Brian, for breaking down this complex setup. Any thoughts on how much of this would apply to a Kilo installation?

    Regards,
    Martha

    Reply
    1. Brian Seltzer Post author

      I know some changes were made in kilo that would make this juno-specific script obsolete. I plan to update this article for kilo when I get a chance.

      Reply
  25. Krishnan G

    Hi Brian,
    The previous article works fine with a 2-node VM setup for me, but it fails for the multi-node setup using OVS.
    I have simulated the 3 VMs using VMware Workstation on my system, and it has a single NIC.
    I am not sure whether it is possible to configure a single NIC in VMware Workstation and connect the GRE tunnel for all 3 VMs to use (6 interfaces with OVS bridges and ports). There is some issue assigning the ports: when I try to associate an IP from ext-net, it gets one, but the port is failing.

    After installing all the nodes with all the scripts, I finally configured the networks and routers. But it fails to launch a VM instance because it fails to get a port. I can see the bridge interfaces are down on the network node as well as the compute node.

    Error in the nova-compute log (compute node):
    ExternalNetworkAttachForbidden: It is not allowed to create an interface on external network 12180680-2450-49a8-8028-884c303458af

    Controller Node
    VM – Used NAT network adapter from vmware
    1NIC – 192.168.42.0/24(vmnet8 -NAT)

    Network Node
    VM – 3 Network Adapters
    1 NIC from 192.168.42.0/24(vmnet8 -NAT)
    1 NIC from 10.0.0.0/24 custom net(host only – vmnet9 for Tunnel IP)-10.0.0.128
    1 NIC from 192.168.43.0/24 custom net (host only – vmnet11 for floating IPs), no IP configured
    Compute Node
    VM – 2 Network Adapters
    1 NIC from 192.168.42.0/24(vmnet8 -NAT)
    1 NIC from 10.0.0.0/24 custom net(host only – vmnet9 for Tunnel IP) 10.0.0.129

    #!/bin/bash
    source creds

    neutron net-create ext-net --shared --router:external True \
    --provider:physical_network external --provider:network_type flat

    neutron subnet-create ext-net --name ext-subnet \
    --allocation-pool start=192.168.43.200,end=192.168.43.220 \
    --disable-dhcp --gateway 192.168.43.2 192.168.43.0/24

    neutron net-create admin-net

    neutron subnet-create admin-net --name admin-subnet \
    --dns-nameserver 192.168.42.2 \
    --gateway 10.0.1.1 10.0.1.0/24

    neutron router-create admin-router

    neutron router-interface-add admin-router admin-subnet

    neutron router-gateway-set admin-router ext-net

    Reply
    1. Brian Seltzer Post author

      It looks like you’re trying to connect the instance to the external network (ext-net), which is not correct. You should connect your instance to your virtual network (admin-net).
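
      If nova then complains that multiple networks were found, you can pass the network ID explicitly when booting. A quick sketch (the image and flavor names are just examples):

      # look up the admin-net ID and attach the instance to it explicitly
      NET_ID=$(neutron net-list | awk '/ admin-net / {print $2}')
      nova boot --flavor m1.tiny --image cirros --nic net-id=$NET_ID test-instance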

      Reply
  26. Krishnan G

    admin-net (10.0.1.x/24) is not shared with the demo user. The only way I can get an instance on admin-net up and running is as the admin user… not as the demo user.

    1. Is that the correct behaviour?

    2. Also, after I get into the console, the OS fails to install. I tried another ISO image and it fails to detect the NIC card, and I get a “no usable disk” error during the VM install.

    Based on your procedure, we are supposed to use a floating IP, which should be in the ext-net range (192.168.43.200 to 192.168.43.220).

    Instance Name: vm1 | Image Name: cirros-0.3.3-x86_64 | IP Address: 10.0.1.3 | Size: m1.tiny | Key Pair: – | Status: Active | Availability Zone: nova | Task: None | Power State: Running | Time since created: 10 minutes

    The output below is from the compute node. br-tun is using 10.0.1.130.

    3: eno33554960: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:69:8a:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.130/24 brd 10.0.0.255 scope global eno33554960
    valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe69:8a97/64 scope link
    valid_lft forever preferred_lft forever

    [root@juno-compute ~]# ovs-vsctl show
    e0a2562d-97a8-4e40-8832-6f98a10686bb
        Bridge br-int
            fail_mode: secure
            Port patch-tun
                Interface patch-tun
                    type: patch
                    options: {peer=patch-int}
            Port "qvo3e3e3a68-0c"
                tag: 1
                Interface "qvo3e3e3a68-0c"
            Port br-int
                Interface br-int
                    type: internal
        Bridge br-tun
            fail_mode: secure
            Port "gre-0a000081"
                Interface "gre-0a000081"
                    type: gre
                    options: {df_default="true", in_key=flow, local_ip="10.0.0.130", out_key=flow, remote_ip="10.0.0.129"}
            Port patch-int
                Interface patch-int
                    type: patch
                    options: {peer=patch-tun}
            Port br-tun
                Interface br-tun
                    type: internal
        ovs_version: "2.3.1"

    Reply
    1. Brian Seltzer Post author

      If you want to use the demo user, then you have to create a network for the demo tenant. Like so:

      TENANT='demo'
      TENANT_ID=$(keystone tenant-list | awk '/ '"$TENANT"' / {print $2}')
      echo $TENANT_ID

      neutron net-create --tenant-id $TENANT_ID demo-net

      neutron subnet-create demo-net \
      --name demo-subnet \
      --tenant-id $TENANT_ID \
      --dns-nameserver 192.168.1.1 \
      --gateway 10.0.1.1 \
      10.0.1.0/24

      neutron router-create --tenant-id $TENANT_ID demo-router
      neutron router-interface-add demo-router demo-subnet
      neutron router-gateway-set demo-router ext-net

      Reply
  27. Krishnan G

    Hi Brian,
    Thanks for your clarification. I am a bit confused and not sure how to log in to the instances with SSH, either from the compute node or from another node. After launching the instance, I can see the addresses below for the instance in the dashboard:
    10.0.1.10 – IP address assigned to the VM
    192.168.43.209 – floating IP associated manually

    I am able to log in to the VM through the console, but no IP address is assigned inside the VM, and I am not able to ping the VM’s IP address from the compute node either.

    Below is the ip addr output from the compute node. May I know what the issue is here?

    [root@juno-compute ~]# ip addr
    1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:69:8a:8d brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.135/24 brd 192.168.42.255 scope global eno16777736
    valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe69:8a8d/64 scope link
    valid_lft forever preferred_lft forever
    3: eno33554960: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:69:8a:97 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.130/24 brd 10.0.0.255 scope global eno33554960
    valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe69:8a97/64 scope link
    valid_lft forever preferred_lft forever
    4: ovs-system: mtu 1500 qdisc noop state DOWN
    link/ether 82:f3:c6:5a:f4:ac brd ff:ff:ff:ff:ff:ff
    5: br-int: mtu 1500 qdisc noop state DOWN
    link/ether a2:3b:eb:50:20:45 brd ff:ff:ff:ff:ff:ff
    7: br-tun: mtu 1500 qdisc noop state DOWN
    link/ether ee:f8:e8:a5:81:4d brd ff:ff:ff:ff:ff:ff
    8: qbrd1ee9292-63: mtu 1500 qdisc noqueue state UP
    link/ether b2:a3:27:6d:b6:aa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b0a3:27ff:fe6d:b6aa/64 scope link
    valid_lft forever preferred_lft forever
    9: qvod1ee9292-63: mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 8e:36:8f:dc:b2:65 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8c36:8fff:fedc:b265/64 scope link
    valid_lft forever preferred_lft forever
    10: qvbd1ee9292-63: mtu 1500 qdisc pfifo_fast master qbrd1ee9292-63 state UP qlen 1000
    link/ether b2:a3:27:6d:b6:aa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b0a3:27ff:fe6d:b6aa/64 scope link
    valid_lft forever preferred_lft forever
    11: tapd1ee9292-63: mtu 1500 qdisc pfifo_fast master qbrd1ee9292-63 state UNKNOWN qlen 500
    link/ether fe:16:3e:69:a3:a1 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe69:a3a1/64 scope link
    valid_lft forever preferred_lft forever

    Reply
    1. Brian Seltzer Post author

      It’s a little complicated. The 10.0.1.0 network is a virtual network. There’s no physical instantiation of that network, its ports or its IPs. You will only see the 10.0.1.10 address internally within the instance. The instance’s virtual NIC is attached to a virtual switch port on your open vswitch. To see this port, on the compute node type ‘ovs-vsctl show’. You will see a bridge called br-int and a port qv-xxxx. That’s your instance’s port. The br-int is patched to br-tun, which has a GRE tunnel port over to your network node. On the network node, ‘ovs-vsctl show’ will show you the ports on your virtual router for the 10.0.1.0 subnet: the internal port (10.0.1.0) and the external port (192.168.43.0). The router runs in its own network namespace. To see the namespaces, type ‘ip netns list’. You will see a qrouter namespace with the same id as your router. Type ‘neutron router-list’ and get the id of your router. Then type ‘ip netns exec qrouter-xxxxxxxx ip a’ (where xxxxxxxx is the id of your router) and you will see the IP addresses for both sides of the router as well as the floating IP assigned to the instance. Got that? Haha it’s complicated!
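
      Put together as a sketch (replace the router id and floating IP with your own values):

      # on the compute node: the instance's port hangs off br-int
      ovs-vsctl show

      # on the network node: find the router's namespace and look inside it
      neutron router-list
      ip netns list
      ip netns exec qrouter-<router-id> ip a
      ip netns exec qrouter-<router-id> ping -c 3 <floating-ip>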

      Reply
