OpenStack Havana – Flat Networking

By Brian Seltzer | December 28, 2013

This post shows how to deploy OpenStack using a flat networking model.  The OpenStack documentation does a fair job of showing how to deploy the various services of OpenStack; however, the details of service placement and networking in a multi-node installation are unclear at best, and the examples provided are usually incomplete or lacking critical details.

I spent much of 2013 working with OpenStack, trying to figure out the best design for my use case.  I’ve ended up with a flat networking model that minimizes complexity and maximizes performance.  The design uses two networks, one for management and control of OpenStack, and one for virtual machine traffic.  This is about as basic a network design as can be achieved with OpenStack.  Yes, you could probably squeeze this down to one network, but it actually makes the configuration more complicated in the end.

To support two networks, our compute nodes (the servers that run the hypervisor that will host the virtual machines) will require two network cards, although you can also use two VLANs on a single network card.  If you want more information on how to configure VLANs, read my article: OpenStack – I need how many NICs?

[Diagram: two-network layout. The controller and compute nodes connect to the management network via eth0; the compute nodes also connect to the virtual machine network via eth1.]

Now there are certainly caveats to this design.  This design is for a private cloud, not a public cloud.  From a security perspective, note that the web portal and API access to this cloud is via the management interface, where ssh access is also enabled.  This would not be considered secure if web/API access from the Internet were required.  This design also does not use software-defined networking.  The VMs will be deployed on one big flat network segment that is accessible to the customers, so no NAT and no floating IPs are required.  Both subnets are fully routed to the larger network and so are reachable by the customer.

Software-defined networking (SDN) is a powerful feature of cloud computing.  It enables your customers to create their own virtual networks, each possibly reusing the same IP address ranges.  The virtual machines on these virtual networks are then made available on the physical network via software-based NAT routers, and traffic between virtual machines residing on different hosts is tunneled over the physical network using tunneling protocols such as GRE or VXLAN.

SDN is an important feature in that customers may want to define their own networks for security reasons, to isolate environments for testing, or to reuse IP address ranges.  However, if these features aren’t required, then the flat networking model eliminates the complexity as well as the performance degradation introduced by software-defined NAT, routing and tunneling.

As you can see in the diagram above, we’ve got two types of computers (nodes) in our deployment: a controller node and one or more compute nodes.  The controller node will host the core components of OpenStack, which include:

  • Database Server (MySQL)
  • Messaging Service (RabbitMQ)
  • Identity Service (Keystone)
  • Image Service (Glance)
  • Block Storage (Cinder)
  • Web Portal (Horizon)
  • Compute API (Nova API)

That’s a lot of stuff to cram into one server, so you may want to spread these out onto multiple servers.  Perhaps your organization has a team of database admins who maintain dedicated MySQL servers that you can deploy your databases to.  From a performance perspective, the most important service to think about is Cinder.  Cinder provides block storage volumes to virtual machines via iSCSI.  This means that the network connection to your Cinder service could get very busy, and the disks could get hammered as well.  So you might want to deploy Cinder onto a dedicated server, or indeed multiple servers.  There are also alternative storage solutions, but I’ll save all that for another post.

The compute nodes will run the rest of the components that we need:

  • Hypervisor (KVM)
  • Compute Components (Nova-compute)
  • Networking (Nova-network)
  • Metadata Service (Nova-api-metadata)

All of the compute nodes will be identical.  You can add as many compute nodes as needed to scale out your cloud.

My Setup

OK, so I’ve got two network segments, one for management and one for virtual machines.  Each has an associated IP range.  My management network is shared with other existing infrastructure in my data center, however, the virtual machine segment must be dedicated to OpenStack, otherwise we’ll probably create IP address conflicts.  My networks are:

  • Management (192.168.1.0/24 – router address: 192.168.1.254)
  • Virtual Machines (192.168.2.0/24 – router address: 192.168.2.254)

All of my nodes (controller and compute) will be connected to the management network using their first NIC (eth0) and have an IP address assigned.  The compute nodes will also be connected to the virtual machine network using their second NIC (eth1), however no IP address will be assigned.  Instead, a virtual switch (actually a bridge) will be linked to eth1, and virtual machines will be “plugged” into this virtual switch.  The virtual machines will be given 192.168.2.x IP addresses so that they can communicate over the virtual machine network.

All of my nodes will run Ubuntu 12.04 LTS x64.

Building the Controller Node

I will not cover the installation of Ubuntu onto the servers; there’s plenty of documentation out there already.  However, we do need to mention that our controller node will need an extra disk partition or an extra disk to hold the Cinder storage volumes.  So when installing Ubuntu onto the controller node, make sure you leave room on your disk or provide a second disk for Cinder.

After the OS is installed, we can configure the network.  On the controller node there’s nothing special here: eth0 just needs a static IP address on the management network.  This is done by editing the /etc/network/interfaces file.  My controller will have the address 192.168.1.128.


auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
	address 192.168.1.128
	netmask 255.255.255.0
	gateway 192.168.1.254
	dns-nameservers 192.168.1.2 192.168.1.3
	dns-search behindtheracks.com

Of course, your addresses, DNS servers and suffix will be different.  Now we can install the OpenStack components.  The following steps were scraped off of the OpenStack documentation, so not much original thinking here.  First we install some base components and add the OpenStack package repository.


apt-get update
apt-get install ntp
apt-get install python-mysqldb mysql-server
apt-get install python-software-properties
add-apt-repository cloud-archive:havana
apt-get update && apt-get dist-upgrade
apt-get install rabbitmq-server
apt-get install keystone

Then we edit /etc/mysql/my.cnf and change the bind-address to 0.0.0.0 to enable MySQL access over the network, then restart MySQL (service mysql restart).
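
For reference, the setting lives in the [mysqld] section of my.cnf:

[mysqld]
...
bind-address = 0.0.0.0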

Keystone (Identity Service)

Next we configure keystone by editing /etc/keystone/keystone.conf.  We must define an admin_token and a database connection string.  I’ve set my admin_token to ADMIN123.  The database connection string contains the username, password, IP address and the name of the database.  We’ll create the database and the user in a moment.

[DEFAULT]
...
admin_token = ADMIN123
...
[sql]
connection = mysql://keystone:Service123@192.168.1.128/keystone

Now we will create the keystone database.  While we’re at it, we might as well create the databases for the other OpenStack services.  You can paste the following into a terminal window of your controller node (or your MySQL server):


mysql -u root -p <<END
CREATE DATABASE nova;
CREATE DATABASE cinder;
CREATE DATABASE glance;
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'Service123';
FLUSH PRIVILEGES;
END

Each database is created and a corresponding user account is granted full access to the database with a password of Service123.  Next, we will populate the keystone database with its tables, and restart the keystone service.


keystone-manage db_sync
service keystone restart

The keystone identity service is now ready to begin defining users and services.

We need to create an admin user, service users, service definitions, and service endpoint definitions in keystone.  If you follow the OpenStack documentation, this is a laborious process that is prone to error.  Luckily, someone (sorry, I forgot where I found this) created a nice bash script to do the work.  The script below defines all of the stuff that we need.  Notice that near the top of the file we’re setting user names, passwords, and IP addresses specific to my environment; change these as needed.  Also, if you deploy the various services to separate hosts, you’ll need to tweak the service endpoint URLs.  Finally, notice that I’ve commented out the creation of the network service because we’re not using it (we’re using nova-network instead of quantum/neutron).


#!/bin/bash

# Modify these variables as needed
ADMIN_PASSWORD=password
SERVICE_PASSWORD=Service123
DEMO_PASSWORD=demo
export OS_SERVICE_TOKEN=ADMIN123
export OS_SERVICE_ENDPOINT="http://localhost:35357/v2.0"
SERVICE_TENANT_NAME=service
#
MYSQL_USER=keystone
MYSQL_DATABASE=keystone
MYSQL_HOST=localhost
MYSQL_PASSWORD=Service123
#
KEYSTONE_REGION=regionOne
KEYSTONE_HOST=192.168.1.128

# Shortcut function to get a newly generated ID
function get_field() {
    while read data; do
        if [ "$1" -lt 0 ]; then
            field="(\$(NF$1))"
        else
            field="\$$(($1 + 1))"
        fi
        echo "$data" | awk -F'[ \t]*\\|[ \t]*' "{print $field}"
    done
}

# Tenants
ADMIN_TENANT=$(keystone tenant-create --name=admin | grep " id " | get_field 2)
DEMO_TENANT=$(keystone tenant-create --name=demo | grep " id " | get_field 2)
SERVICE_TENANT=$(keystone tenant-create --name=$SERVICE_TENANT_NAME | grep " id " | get_field 2)

# Users
ADMIN_USER=$(keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com | grep " id " | get_field 2)
DEMO_USER=$(keystone user-create --name=demo --pass="$DEMO_PASSWORD" --email=demo@domain.com --tenant-id=$DEMO_TENANT | grep " id " | get_field 2)
NOVA_USER=$(keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com | grep " id " | get_field 2)
GLANCE_USER=$(keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com | grep " id " | get_field 2)
#QUANTUM_USER=$(keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com | grep " id " | get_field 2)
CINDER_USER=$(keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com | grep " id " | get_field 2)

# Roles
ADMIN_ROLE=$(keystone role-create --name=admin | grep " id " | get_field 2)
MEMBER_ROLE=$(keystone role-create --name=Member | grep " id " | get_field 2)

# Add Roles to Users in Tenants
keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
#keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $DEMO_TENANT --user-id $DEMO_USER --role-id $MEMBER_ROLE

# Create services
COMPUTE_SERVICE=$(keystone service-create --name nova --type compute --description 'OpenStack Compute Service' | grep " id " | get_field 2)
VOLUME_SERVICE=$(keystone service-create --name cinder --type volume --description 'OpenStack Volume Service' | grep " id " | get_field 2)
IMAGE_SERVICE=$(keystone service-create --name glance --type image --description 'OpenStack Image Service' | grep " id " | get_field 2)
IDENTITY_SERVICE=$(keystone service-create --name keystone --type identity --description 'OpenStack Identity' | grep " id " | get_field 2)
EC2_SERVICE=$(keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service' | grep " id " | get_field 2)
#NETWORK_SERVICE=$(keystone service-create --name quantum --type network --description 'OpenStack Networking service' | grep " id " | get_field 2)

# Create endpoints
keystone endpoint-create --region $KEYSTONE_REGION --service-id $COMPUTE_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$KEYSTONE_HOST"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$KEYSTONE_HOST"':8774/v2/$(tenant_id)s'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $VOLUME_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$KEYSTONE_HOST"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$KEYSTONE_HOST"':8776/v1/$(tenant_id)s'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $IMAGE_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':9292' --adminurl 'http://'"$KEYSTONE_HOST"':9292' --internalurl 'http://'"$KEYSTONE_HOST"':9292'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $IDENTITY_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':5000/v2.0' --adminurl 'http://'"$KEYSTONE_HOST"':35357/v2.0' --internalurl 'http://'"$KEYSTONE_HOST"':5000/v2.0'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $EC2_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8773/services/Cloud' --adminurl 'http://'"$KEYSTONE_HOST"':8773/services/Admin' --internalurl 'http://'"$KEYSTONE_HOST"':8773/services/Cloud'
#keystone endpoint-create --region $KEYSTONE_REGION --service-id $NETWORK_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':9696/' --adminurl 'http://'"$KEYSTONE_HOST"':9696/' --internalurl 'http://'"$KEYSTONE_HOST"':9696/'

Run the script on the controller.
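
If you save the script to a file (I’ll call it keystone-setup.sh, but the name is arbitrary), you can run it like this:

bash keystone-setup.sh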

OK, now we can do a quick test to show that keystone is working.  All of the OpenStack command line tools require a user name, password and auth URL to run successfully, making for some very long commands.  Rather than enter this info for every command, we can set some environment variables.  Create a file called creds and enter the following information:


export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.128:35357/v2.0

Change the IP address, user name and password to match your environment.  Type source creds to load the information into your terminal environment.  Then, to test keystone, type keystone user-list and you should see something like this:


+----------------------------------+--------+---------+-------------------+
|                id                |  name  | enabled |       email       |
+----------------------------------+--------+---------+-------------------+
| 7ec72d54885f4819812dfe99fe0a41c0 | admin  |   True  |  admin@domain.com |
| 2a1a51042481437dae9e6b69557dee0e | cinder |   True  | cinder@domain.com |
| 10ea230a871a49eeb73bc7ae6f05355e |  demo  |   True  |  demo@domain.com  |
| 319c83b145784c1fabf11ed47fd8eadf | glance |   True  | glance@domain.com |
| a2f07689eb5e462f9acd069433495583 |  nova  |   True  |  nova@domain.com  |
+----------------------------------+--------+---------+-------------------+

OK, on to the next service…

Glance (Image Service)

Glance provides storage for disk images.  You upload images for your favorite operating systems that can be used to deploy virtual machines.  We’ll install Glance on the controller.


apt-get install glance python-glanceclient

Next we edit both the /etc/glance/glance-api.conf and the /etc/glance/glance-registry.conf and add the sql connection and the keystone authentication settings:


...
sql_connection = mysql://glance:Service123@192.168.1.128/glance
...
[keystone_authtoken]
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Service123

Next, edit /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini and set the following keystone settings:


[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=192.168.1.128
admin_user=glance
admin_tenant_name=service
admin_password=Service123
flavor=keystone

Next, we can populate the glance database and restart the services:


glance-manage db_sync
service glance-registry restart
service glance-api restart

Now we can test the service. Let’s download some images to put in our image store. We’ll get Ubuntu and Cirros (the Cirros image is useful for troubleshooting). The following commands will download the images and upload them into Glance:


wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
glance image-create --name="Cirros 0.3.1" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img

wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
glance image-create --name="Ubuntu 12.04" --disk-format=qcow2 --container-format=bare --is-public=true < precise-server-cloudimg-amd64-disk1.img

Finally, we can show the images we have stored, by typing glance image-list. The results should look like this:


+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| f36ea6ee-1eea-4f18-af1c-c6a502f468ec | Cirros 0.3.1 | qcow2       | bare             | 13147648  | active |
| a3e923a2-ae2b-4230-9bf9-d12d66bdd1be | Ubuntu 12.04 | qcow2       | bare             | 255066112 | active |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+

Nova (Compute Services)

Now we will configure the compute services that control the deployment of virtual machines.  These are just the services that run on the controller node; the remaining compute services run on the compute nodes.


apt-get install nova-novncproxy novnc nova-api \
  nova-ajax-console-proxy nova-cert nova-conductor \
  nova-consoleauth nova-doc nova-scheduler \
  python-novaclient

Next, we edit /etc/nova/nova.conf and add the following to the file:


...
auth_strategy=keystone
my_ip=192.168.1.128
vncserver_listen=192.168.1.128
vncserver_proxyclient_address=192.168.1.128
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 192.168.1.128

[database]
connection = mysql://nova:Service123@192.168.1.128/nova

[keystone_authtoken]
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Service123

Also edit /etc/nova/api-paste.ini and configure the filter:authtoken section:


[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.1.128:5000/v2.0
admin_tenant_name = service
admin_user = nova
admin_password = Service123

Then we can populate the database and restart the services:


nova-manage db sync
service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart

Finally we can test by typing nova image-list. The list of glance images should be displayed.

Horizon (web portal)

The web portal is installed using the following commands:


apt-get install memcached libapache2-mod-wsgi openstack-dashboard
apt-get remove --purge openstack-dashboard-ubuntu-theme

Then you can access the web portal by pointing your web browser to http://192.168.1.128/horizon

You can log on as admin, using the same password that you used in the keystone script and the creds file (password in my example).  Although the web portal will function, we haven’t yet set up enough services to create any virtual machines.  So on we go…

Cinder (Block Storage)

First we install the Cinder control services:


apt-get install cinder-api cinder-scheduler

Then we edit the /etc/cinder/cinder.conf and add the following:


...
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = 192.168.1.128
rabbit_port = 5672

[database]
sql_connection = mysql://cinder:Service123@192.168.1.128/cinder

and we edit /etc/cinder/api-paste.ini and configure the filter:authtoken section:


[filter:authtoken]
paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
auth_host=192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name=service
admin_user=cinder
admin_password=Service123

Now we can populate the database and restart the services:


cinder-manage db sync
service cinder-scheduler restart
service cinder-api restart

Next we’ll set up an LVM volume group for Cinder to use and install the Cinder volume service.  I’ve got a second hard disk (/dev/sdb) to use, so I’ll create my volume group there:


pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
apt-get install cinder-volume

At this point, the controller configuration is complete.
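
As a quick sanity check (assuming you’ve sourced the creds file on the controller), you can create and list a small test volume; it should reach the available status within a few seconds:

cinder create --display-name test 1
cinder list

You can delete the test volume afterwards from the web portal or with cinder delete.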

Compute Nodes

Now onto the compute nodes.  Again we do a fresh install of Ubuntu 12.04 LTS x64.  Next we’ll set up our networking.  Edit /etc/network/interfaces; it should look something like this:


auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
	address 192.168.1.129
	netmask 255.255.255.0
	gateway 192.168.1.254
	dns-nameservers 192.168.1.2 192.168.1.3
	dns-search behindtheracks.com

auto eth1
iface eth1 inet manual

auto br100
iface br100 inet manual
	up ip link set $IFACE promisc on
	up ip link set eth1 promisc on
	bridge_ports eth1

Notice that eth0 has an IP address assigned; this is our management interface.  However, eth1 has no IP address; it’s attached to the bridge interface br100, which nova-network will also use (via the flat_network_bridge setting below).  eth1 is the physical network card that virtual machines plugged into br100 will use to reach the physical network.
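
Once bridge-utils is available (install it with apt-get install bridge-utils if it isn’t pulled in for you by the packages below), you can verify the bridge after a networking restart:

brctl show

br100 should be listed with eth1 as one of its interfaces; as instances are launched, their virtual network interfaces will be added to the same bridge.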

Now we can install the required software:


apt-get update
apt-get install ntp
apt-get install python-software-properties
add-apt-repository cloud-archive:havana
apt-get update && apt-get dist-upgrade
apt-get install python-mysqldb
apt-get install python-novaclient
apt-get install nova-compute-kvm python-guestfs
apt-get install nova-api-metadata
apt-get install nova-network
# make the kernel readable so python-guestfs (libguestfs) can inspect instance images
chmod 0644 /boot/vmlinuz*

Next, edit the /etc/nova/nova.conf file and add the following:


dnsmasq_config_file=/etc/dnsmasq-nova.conf

auth_strategy=keystone
rpc_backend = nova.rpc.impl_kombu
rabbit_host = 192.168.1.128

my_ip=192.168.1.129
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.129
novncproxy_base_url=http://192.168.1.128:6080/vnc_auto.html

glance_host=192.168.1.128

network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_network_bridge=br100
flat_interface=eth1
public_interface=eth1

[database]
connection = mysql://nova:Service123@192.168.1.128/nova

Notice that the compute node’s own IP address is used for my_ip and the VNC settings.  Make sure this file is updated with the correct IP address on each compute node.  Next, edit /etc/nova/api-paste.ini and configure the filter:authtoken section:


[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Service123

Now reboot the compute node, and it should be about ready to deploy virtual machines.  After the reboot, it’s a good idea to create and source the creds file (the same one we created on the controller node) so that we can execute commands on the compute node.  However, the next commands can just as well be executed on the controller node.  We need to define the network for the VMs.  Enter the following command (adjust for your subnet information):


nova network-create vmnet --fixed-range-v4=192.168.2.0/24 \
  --bridge-interface=br100 --multi-host=T --dns1=192.168.1.1
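
A quick check that the network was created (run wherever you’ve sourced the creds file):

nova network-list

The vmnet network should be listed with a CIDR of 192.168.2.0/24.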

One last thing: dnsmasq is the service that will provide DHCP addressing and will also act as the default router for the virtual machines.  This is fine and dandy if you want a Linux process routing all of your VM traffic.  I don’t.  Since each compute node will have nova-network installed, we can tell dnsmasq to set the VMs’ default gateway to our physical router, which will provide better performance and eliminate dnsmasq as a single point of failure.  We create the file /etc/dnsmasq-nova.conf and add the following line:


dhcp-option=3,192.168.2.254

Adjust for your subnet of course.  And we’re done! Now we can test. If you want to add more compute nodes, just repeat the procedure above, and assign a unique IP address where appropriate.
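
Before launching anything, it’s also worth confirming that the controller can see the compute node’s services.  On the controller, run:

nova-manage service list

Each service should show a :-) in the State column; an XXX there means the service isn’t checking in, which usually points to a RabbitMQ or nova.conf problem.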

Testing

OK, we’re ready to start deploying virtual machines.  Before we begin, we’ll need an ssh key pair to log on to our VMs over the network.  If you don’t already have a key pair, type ssh-keygen at a Linux command prompt to create one.  Then type cat ~/.ssh/id_rsa.pub (or id_dsa.pub, depending on the key type you generated) and copy the resulting gibberish.  Now log on to the web portal as admin, and in the left-hand pane, select the Project tab.  Click the Access and Security link, and then in the right-hand pane select the Keypairs tab.

Now click the Import Keypair button, enter your name and paste the gibberish into the public key field, and click Import Keypair.
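
If you prefer the command line, the nova client can do the same import.  Here I’m assuming your public key is at ~/.ssh/id_rsa.pub and naming the keypair mykey:

nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova keypair-list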

Next, we need to allow ssh and ping traffic to reach our VMs.  Click on the Security Groups tab, then click the Edit Rules button for the default group.  Click Add Rule, add a Custom TCP Rule for port 22 (ssh), and add an All ICMP rule.
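
The same two rules can be added from the command line:

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0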

Now, finally, in the left-hand pane, click the Instances link and click Launch Instance.  Give your instance an Instance Name (how about test1), select a flavor (flavors define the number of virtual CPUs, the amount of RAM, and the disk size), and select an Instance Boot Source of ‘boot from image’, then select your Ubuntu image.  If you’ve got more than one key pair defined, click the Access and Security tab and select your key pair, then click Launch.
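
The command line equivalent looks something like this (the flavor and image names are from my setup and the keypair is the one imported above; adjust to match yours):

nova boot --flavor m1.small --image "Ubuntu 12.04" --key-name mykey test1
nova list

nova list shows the instance status and the IP address assigned on the vmnet network.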

Unless you royally mucked up the config files (and you probably did), your instance should be launched on your compute node, an IP address should be allocated, and within a minute or two, you should be able to ssh into your new instance at the given IP address.  Note that the default user account on the Ubuntu image is ubuntu, so you should type ssh ubuntu@ipaddress.  Your key should have been imported during the deployment so you shouldn’t need to enter a password.

Having trouble?  I’m not surprised; we covered a lot of configuration.  If anything is amiss, you may get some errors, and OpenStack doesn’t provide much feedback in the web portal.  If your instance fails to launch or fails to connect to the network, then it’s time to start looking at the various logs on the compute node and the controller node.  However, that’s a topic for future posts.  I hope this was helpful!

8 thoughts on “OpenStack Havana – Flat Networking”

  1. learner

    Thank you very much.  This is the best of the 100+ posts I have read to date.  However, I still have some gaps in my knowledge.

    What happens if you want to use a SAN or NAS for Cinder?
    Does Cinder now support FC as well?  I have been seeing mixed information on this.

    1. Michael Petersen

      I was able to get Cinder working with NFS on Grizzly.  It seemed to have issues with NFSv4, so I reverted back to v3.  Here is the relevant info from my cinder.conf:

      # NFS Setup
      volume_driver=cinder.volume.drivers.nfs.NfsDriver
      nfs_shares_config=/etc/cinder/nfs_shares
      nfs_mount_point_base=$state_path/mnt
      nfs_mount_options = nfsvers=3
      nfs_disk_util=df
      nfs_sparsed_volumes=True
      state_path = /var/lib/cinder
      auth_strategy = keystone

      Then create the file nfs_shares with this info:

      {ipaddressOfSan}:/exports/{blah}

      I don’t know how you’d do it over Fibre Channel; most of my setup was over 10G interfaces.  Hopefully that is helpful if you are looking to use a NAS with NFS.  Most of the driver documentation doesn’t have a lot of data to walk you through the setup, just options that you aren’t sure whether you need or not.

      Like Brian says though, as long as you can get your storage device mounted on the server that is running cinder then you should be able to change the state path and volume directory to make it work:

      state_path = /var/lib/cinder
      lock_path = /var/lock/cinder
      volumes_dir = /var/lib/cinder/volumes

      Make sure you are mounting the device to /var/lib/cinder/volumes and you should be good to go. Verify permissions and all of that good stuff as well.

  2. Brian Seltzer

    Well no, to my knowledge, Cinder can’t control storage provisioning on an FC array, but you can still use FC storage (or NAS) underneath LVM.  In other words, if you provision a SAN or NAS LUN to your cinder host as, let’s say, /dev/sdb, then you can create your cinder-volumes volume group the same as if it were local storage.  As for NAS, I do believe there is a Cinder plug-in for NetApp, but since I don’t have a NetApp, I haven’t looked into it much…

  3. Tri

    I installed following your instructions, but at this step:
    #nova network-list
    +--------------------------------------+-------+----------------+
    | ID                                   | Label | Cidr           |
    +--------------------------------------+-------+----------------+
    | c0c7a71d-956b-41b4-8b3b-51fe315a3bb8 | vmnet | 192.168.2.0/24 |
    +--------------------------------------+-------+----------------+

    #nova image-list
    +--------------------------------------+--------------+--------+--------+
    | ID                                   | Name         | Status | Server |
    +--------------------------------------+--------------+--------+--------+
    | ef57d0ea-176e-463a-bd88-55dd49421fa6 | Cirros.0.3.1 | ACTIVE |        |
    | 2d3108d5-db9c-4466-9cdf-cb8c6df1213c | Ubuntu12.04  | ACTIVE |        |
    +--------------------------------------+--------------+--------+--------+
    #nova list
    +--------------------------------------+--------------+--------+------------+-------------+----------+
    | ID                                   | Name         | Status | Task State | Power State | Networks |
    +--------------------------------------+--------------+--------+------------+-------------+----------+
    | 16b21377-11d9-4d8d-bd8f-756b31e0d970 | Cirros       | ERROR  | None       | NOSTATE     |          |
    | 219f2174-e7c9-4323-bda1-3da557af7149 | Cirros       | ERROR  | None       | NOSTATE     |          |
    | e496b99e-73d5-45e3-8417-43e4beac12cf | Cirros       | ERROR  | None       | NOSTATE     |          |
    | 29564c3f-f4ed-4d18-9cdc-d17624027148 | Ubuntu_Saucy | ERROR  | None       | NOSTATE     |          |
    | a47cef32-cf7a-4442-8396-c91b1821e120 | Ubuntu_Saucy | ERROR  | None       | NOSTATE     |          |
    +--------------------------------------+--------------+--------+------------+-------------+----------+
    I checked the logs:
    tail -f nova-scheduler.log
    INFO nova.scheduler.filter_scheduler [req-25b754dd-bdda-4e54-a708-6dbe44d349bb 649561b5ee694ef28538b7c192769ab4 2742f5167e3a4d57980a3004355e0f31] Attempting to build 1 instance(s) uuids: [u'219f2174-e7c9-4323-bda1-3da557af7149']
    2014-07-28 16:23:03.762 16591 WARNING nova.scheduler.driver [req-25b754dd-bdda-4e54-a708-6dbe44d349bb 649561b5ee694ef28538b7c192769ab4 2742f5167e3a4d57980a3004355e0f31] [instance: 219f2174-e7c9-4323-bda1-3da557af7149] Setting instance to ERROR state.

    Would you please tell me about my error?

    Best regards,
    Tri.

    1. Brian Seltzer Post author

      You need to find the error in another log. You should check the logs on the compute nodes, especially nova-network and nova-compute logs, to see what caused the error noted by the scheduler.

  4. Tri

    Thank you for your reply.
    I have checked the logs on the compute nodes:
    #nova network-list
    +--------------------------------------+-------+----------------+
    | ID                                   | Label | Cidr           |
    +--------------------------------------+-------+----------------+
    | 736a5c40-c6bc-4c02-95cc-b819769244d8 | vmnet | 192.168.1.0/24 |
    +--------------------------------------+-------+----------------+

    My logs:
    #tail -f /var/log/nova/nova-network.log
    ERROR nova.network.manager [req-1a0bd0f1-3328-4269-94fc-170618648a64 641fafd8ca934c42b30753564ec9dcbd eaf4414e5c24451d904122a0cecc0940] Unable to release 192.168.1.3 because vif doesn’t exist.

    Would you please tell me about my error?

    Best regards,
    Tri.

  5. Alexander

    Hello, nice tutorial.
    In my project I’m in the same situation as you, but I’m wondering how I should manage routing for VMs that require external access (e.g. Internet, legacy networks…).

    Do you define the IP of the external router (192.168.2.254) as the default gateway?  On which interface: the bridge, or the physical interface eth1?

    Regards,

    1. Brian Seltzer Post author

      The external router address is defined in the /etc/dnsmasq-nova.conf file as shown above. We add

      dhcp-option=3,192.168.2.254

      to the file. That sets the gateway address within each VM when it gets a dynamic address from dnsmasq. The hosts do not need to have this gateway defined on any interfaces.

