This post shows how to deploy OpenStack using a flat networking model. The OpenStack documentation does a fair job of showing how to deploy the various services, but the details of service placement and networking in a multi-node installation are unclear at best, and the examples provided are usually incomplete or missing critical details.
I spent much of 2013 working with OpenStack, trying to figure out the best design for my use case. I’ve ended up with a flat networking model that minimizes complexity and maximizes performance. The design uses two networks, one for management and control of OpenStack, and one for virtual machine traffic. This is about as basic a network design as can be achieved with OpenStack. Yes, you could probably squeeze this down to one network, but it actually makes the configuration more complicated in the end.
To support two networks, our compute nodes (the servers that run the hypervisor that will host the virtual machines) will require two network cards, although you can also use two VLANs on a single network card. If you want more information on how to configure VLANs, read my article: OpenStack – I need how many NICs?
Now there are certainly caveats to this design. This design is for a private cloud, not a public cloud. From a security perspective, note that web portal and API access to this cloud is via the management interface, where ssh access is also enabled. This would not be considered secure if web/API access from the Internet were required. This design also does not support software-defined networks. The VMs will be deployed on one big flat network segment that is accessible to the customers, so no NAT and no floating IPs are required. Both subnets are fully routed to the larger network and so are accessible by the customer.
Software-defined networking (SDN) is a powerful feature of cloud computing. It enables your customers to create their own virtual networks, each possibly reusing the same IP address ranges. The virtual machines on these virtual networks are then made available on the physical network via software-based NAT routers, and traffic between virtual machines residing on different hosts is tunneled over the physical network using tunneling protocols such as GRE or VXLAN.
SDN is an important feature in that customers may want to define their own networks for security reasons, to isolate environments for testing, or to reuse IP address ranges. However, if these features aren’t required, then the flat networking model eliminates both the complexity and the performance degradation introduced by software-defined NAT, routing and tunneling.
As you can see in the diagram above, we’ve got two types of computers (nodes) in our deployment: the controller node and one or more compute nodes. The controller node will host the core components of OpenStack, which include:
- Database Server (MySQL)
- Messaging Service (RabbitMQ)
- Identity Service (Keystone)
- Image Service (Glance)
- Block Storage (Cinder)
- Web Portal (Horizon)
- Compute API (Nova API)
That’s a lot of stuff to cram into one server, so you may want to spread these out onto multiple servers. Perhaps your organization has a team of database admins who maintain dedicated MySQL servers that you can deploy your databases to. From a performance perspective, the most important service to think about is Cinder. Cinder provides block storage volumes to virtual machines via iSCSI. This means that the network connection to your Cinder service could get very busy, and the disks could get hammered as well. So you might want to deploy Cinder onto a dedicated server, or indeed multiple servers. There are also alternative storage solutions, but I’ll save all that for another post.
The compute nodes will run the rest of the components that we need:
- Hypervisor (KVM)
- Compute Components (Nova-compute)
- Networking (Nova-network)
- Metadata Service (Nova-api-metadata)
All of the compute nodes will be identical. You can add as many compute nodes as needed to scale out your cloud.
OK, so I’ve got two network segments, one for management and one for virtual machines. Each has an associated IP range. My management network is shared with other existing infrastructure in my data center, however, the virtual machine segment must be dedicated to OpenStack, otherwise we’ll probably create IP address conflicts. My networks are:
- Management (192.168.1.0/24 – router address: 192.168.1.254)
- Virtual Machines (192.168.2.0/24 – router address: 192.168.2.254)
All of my nodes (controller and compute) will be connected to the management network using their first NIC (eth0) and have an IP address assigned. The compute nodes will also be connected to the virtual machine network using their second NIC (eth1), however no IP address will be assigned. Instead, a virtual switch (actually a bridge) will be linked to eth1, and virtual machines will be “plugged” into this virtual switch. The virtual machines will be given 192.168.2.x IP addresses so that they can communicate over the virtual machine network.
All of my nodes will run Ubuntu 12.04 LTS x64.
Building the Controller Node
I will not cover the installation of Ubuntu onto the servers; there’s plenty of documentation out there already. However, we do need to mention that our controller node will need an extra disk partition or a second disk to hold the Cinder storage volumes. So when installing Ubuntu onto the controller node, make sure you leave room on your disk or provide a second disk for Cinder.
After the OS is installed, we can then configure the network. On the controller node there’s nothing special here: eth0 just needs a static IP address on the management network, which is set by editing the /etc/network/interfaces file. My controller will have the address 192.168.1.128.
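The file ends up looking like this (the DNS entries are placeholders for your site’s values):

```
# /etc/network/interfaces (controller)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.128
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 192.168.1.254    # placeholder - use your DNS servers
    dns-search example.com           # placeholder - use your DNS suffix
```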
Of course, your addresses, DNS servers and suffix will be different. Now we can install the OpenStack components. The following steps were scraped off of the OpenStack documentation, so not much original thinking here. First we install some base components and add the OpenStack package repository.
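I’m assuming the Grizzly packages from the Ubuntu Cloud Archive here; adjust the repository line for the release you’re deploying:

```bash
apt-get update && apt-get -y upgrade
apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" \
    > /etc/apt/sources.list.d/cloud-archive.list
apt-get update
apt-get install -y ntp mysql-server python-mysqldb rabbitmq-server
```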
Then we edit /etc/mysql/my.cnf and change the bind-address to 0.0.0.0 to enable MySQL access over the network, then restart MySQL (service mysql restart).
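The relevant stanza:

```
# /etc/mysql/my.cnf
[mysqld]
bind-address = 0.0.0.0
```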
Keystone (Identity Service)
Next we configure keystone. We must define an admin_token and a database connection string. I’ve set my admin_token to ADMIN123. The database connection string shows the username, password, IP address and the name of the database. We’ll create the database and the user in a moment.
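Install the package, then set both values in /etc/keystone/keystone.conf (the [sql] section name is from the Grizzly-era packaging; newer releases call it [database]):

```bash
apt-get install -y keystone
```

```
# /etc/keystone/keystone.conf
[DEFAULT]
admin_token = ADMIN123

[sql]
connection = mysql://keystone:Service123@192.168.1.128/keystone
```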
Now we will create the keystone database. While we’re at it, we might as well create the databases for the other OpenStack services. You can paste the following into a terminal window of your controller node (or your MySQL server):
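(The passwords and database names here match what the service config files later in this post expect.)

```bash
mysql -u root -p <<EOF
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'Service123';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'Service123';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Service123';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'Service123';
FLUSH PRIVILEGES;
EOF
```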
Each database is created and a corresponding user account is granted full access to the database with a password of Service123. Next, we will populate the keystone database with its tables, and restart the keystone service.
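Both are one-liners:

```bash
keystone-manage db_sync
service keystone restart
```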
The keystone identity service is now ready to begin defining users and services.
We need to create an admin user, as well as the service users, services and service endpoint definitions in keystone. If you look at the OpenStack documentation, this is a laborious process that is prone to error. Luckily, someone (sorry, I forgot where I found this) created a nice bash script to do the work. The script below defines all of the stuff that we need. Notice near the top of the file, we’re setting user names, passwords, and IP addresses specific to my environment; change these as needed. Also, if you deploy the various services to separate hosts, you’ll need to tweak the service endpoint URLs. Finally, notice that I’ve commented out the creation of the network service because we’re not using it (we’re using nova-network instead of quantum/neutron).
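Here’s a condensed sketch that does the same job; it covers only the services used in this design (identity, image, compute, volume), using the user names, passwords, and IP address from my environment:

```bash
#!/bin/bash
# Condensed keystone bootstrap - adjust these for your environment.
HOST_IP=192.168.1.128
ADMIN_PASSWORD=password
SERVICE_PASSWORD=Service123

# Authenticate with the admin token defined in keystone.conf.
export SERVICE_TOKEN=ADMIN123
export SERVICE_ENDPOINT=http://$HOST_IP:35357/v2.0

# Helper: extract the id column from keystone's table output.
get_id () { awk '/ id / { print $4 }'; }

# Tenants, the admin user, and the admin role.
ADMIN_TENANT=$(keystone tenant-create --name admin | get_id)
SERVICE_TENANT=$(keystone tenant-create --name service | get_id)
ADMIN_USER=$(keystone user-create --name admin --pass "$ADMIN_PASSWORD" \
    --tenant-id "$ADMIN_TENANT" | get_id)
ADMIN_ROLE=$(keystone role-create --name admin | get_id)
keystone user-role-add --user-id "$ADMIN_USER" --role-id "$ADMIN_ROLE" \
    --tenant-id "$ADMIN_TENANT"

# One service account per component, all in the service tenant.
for SVC in glance nova cinder; do
    USER_ID=$(keystone user-create --name $SVC --pass "$SERVICE_PASSWORD" \
        --tenant-id "$SERVICE_TENANT" | get_id)
    keystone user-role-add --user-id "$USER_ID" --role-id "$ADMIN_ROLE" \
        --tenant-id "$SERVICE_TENANT"
done

# Services and endpoints - everything lives on the controller in this design.
define_endpoint () {  # args: name type publicurl adminurl internalurl
    ID=$(keystone service-create --name "$1" --type "$2" | get_id)
    keystone endpoint-create --region RegionOne --service-id "$ID" \
        --publicurl "$3" --adminurl "$4" --internalurl "$5"
}
define_endpoint keystone identity "http://$HOST_IP:5000/v2.0" \
    "http://$HOST_IP:35357/v2.0" "http://$HOST_IP:5000/v2.0"
define_endpoint glance image "http://$HOST_IP:9292" \
    "http://$HOST_IP:9292" "http://$HOST_IP:9292"
define_endpoint nova compute "http://$HOST_IP:8774/v2/%(tenant_id)s" \
    "http://$HOST_IP:8774/v2/%(tenant_id)s" "http://$HOST_IP:8774/v2/%(tenant_id)s"
define_endpoint cinder volume "http://$HOST_IP:8776/v1/%(tenant_id)s" \
    "http://$HOST_IP:8776/v1/%(tenant_id)s" "http://$HOST_IP:8776/v1/%(tenant_id)s"
# Network (quantum/neutron) service creation deliberately omitted;
# this design uses nova-network.
```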
Run the script on the controller.
OK, now we can do a quick test to show that keystone is working. All of the OpenStack command line tools require a user name, password and auth URL to run successfully, making for some very long commands. Rather than enter this info for every command, we set some environment variables. Create a file called creds, and enter the following information:
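Mine matches the names and passwords used in the script above:

```bash
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL="http://192.168.1.128:5000/v2.0/"
```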
Change the IP address, user name and password to match your environment. Now type source creds to set the information into your terminal environment. Now to test keystone, type keystone user-list and you should see something like this:
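(The IDs are placeholders; yours will be real UUIDs.)

```
+----------------------------------+--------+---------+-------+
|                id                |  name  | enabled | email |
+----------------------------------+--------+---------+-------+
| (uuid)                           | admin  |   True  |       |
| (uuid)                           | cinder |   True  |       |
| (uuid)                           | glance |   True  |       |
| (uuid)                           | nova   |   True  |       |
+----------------------------------+--------+---------+-------+
```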
OK, on to the next service…
Glance (Image Service)
Glance provides storage for disk images. You upload images of your favorite operating systems, which can then be used to deploy virtual machines. We’ll install Glance on the controller.
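One package does it:

```bash
apt-get install -y glance
```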
Next we edit both the /etc/glance/glance-api.conf and the /etc/glance/glance-registry.conf and add the sql connection and the keystone authentication settings:
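(In both files: sql_connection goes in the [DEFAULT] section, and the authtoken and paste_deploy sections follow the Grizzly-era layout.)

```
sql_connection = mysql://glance:Service123@192.168.1.128/glance

[keystone_authtoken]
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Service123

[paste_deploy]
flavor = keystone
```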
Next, edit /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini and set the following keystone settings:
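(In both files:)

```
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Service123
```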
Next, we can populate the glance database and restart the services:
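```bash
glance-manage db_sync
service glance-api restart
service glance-registry restart
```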
Now we can test the service. Let’s download some images to put in our image store. We’ll get Ubuntu and Cirros (the Cirros image is useful for troubleshooting). The following commands will download the images and upload them into Glance:
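(The URLs below were current when I wrote this; substitute whatever image versions are current for you.)

```bash
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

glance image-create --name "Ubuntu 12.04 LTS" --is-public true \
    --container-format bare --disk-format qcow2 \
    --file precise-server-cloudimg-amd64-disk1.img
glance image-create --name "CirrOS 0.3.1" --is-public true \
    --container-format bare --disk-format qcow2 \
    --file cirros-0.3.1-x86_64-disk.img
```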
Finally, we can show the images we have stored, by typing glance image-list. The results should look like this:
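(IDs and sizes are placeholders.)

```
+--------------------------------------+------------------+-------------+------------------+--------+--------+
| ID                                   | Name             | Disk Format | Container Format | Size   | Status |
+--------------------------------------+------------------+-------------+------------------+--------+--------+
| (uuid)                               | CirrOS 0.3.1     | qcow2       | bare             | (size) | active |
| (uuid)                               | Ubuntu 12.04 LTS | qcow2       | bare             | (size) | active |
+--------------------------------------+------------------+-------------+------------------+--------+--------+
```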
Nova (Compute Services)
Now we will configure the compute services that control the deployment of virtual machines. These are just the services that run on the controller node; the rest run on the compute nodes.
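On the controller that means:

```bash
apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth \
    nova-novncproxy nova-scheduler python-novaclient
```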
Next, we edit /etc/nova/nova.conf and add the following to the file:
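(A sketch of the additions; everything points back at the controller’s own address.)

```
# /etc/nova/nova.conf (controller additions)
[DEFAULT]
sql_connection = mysql://nova:Service123@192.168.1.128/nova
rabbit_host = 192.168.1.128
auth_strategy = keystone
my_ip = 192.168.1.128
glance_host = 192.168.1.128
network_manager = nova.network.manager.FlatDHCPManager
multi_host = True
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.1.128
novncproxy_base_url = http://192.168.1.128:6080/vnc_auto.html
```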
also edit /etc/nova/api-paste.ini and configure the filter:authtoken section:
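(Same pattern as Glance, with the nova service account this time:)

```
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Service123
```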
Then we can populate the database and restart the services:
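```bash
nova-manage db sync
for SVC in nova-api nova-cert nova-conductor nova-consoleauth \
        nova-novncproxy nova-scheduler; do
    service $SVC restart
done
```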
Finally we can test by typing nova image-list. The list of glance images should be displayed.
Horizon (Web Portal)
The web portal is installed using the following commands:
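(Two packages plus Apache’s WSGI module; removing the Ubuntu theme is optional, but it had rendering quirks at the time.)

```bash
apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard
# Optional: drop the Ubuntu theme if it misbehaves
apt-get remove --purge -y openstack-dashboard-ubuntu-theme
```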
Then you can access the web portal by pointing your web browser to http://192.168.1.128/horizon
You can log on as admin, using the same password that you used in the keystone script and the creds file (password in my example). Although the web portal will function, we haven’t yet set up enough services to create any virtual machines. So on we go…
Cinder (Block Storage)
First we install the Cinder control services:
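(Just the API and scheduler here; the volume service comes after we’ve set up its storage.)

```bash
apt-get install -y cinder-api cinder-scheduler
```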
Then we edit the /etc/cinder/cinder.conf and add the following:
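(A sketch based on the Grizzly-era defaults; volume_group names the LVM volume group we’ll create below.)

```
# /etc/cinder/cinder.conf
[DEFAULT]
sql_connection = mysql://cinder:Service123@192.168.1.128/cinder
rabbit_host = 192.168.1.128
iscsi_helper = tgtadm
volume_group = cinder-volumes
```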
and we edit the /etc/cinder/api-paste.ini and configure the filter:authtoken section:
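(Same pattern again, with the cinder service account:)

```
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_host = 192.168.1.128
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = Service123
```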
Now we can populate the database and restart the services:
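```bash
cinder-manage db sync
service cinder-api restart
service cinder-scheduler restart
```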
Next we’ll set up an LVM volume group for Cinder to use and install the Cinder volume service. I’ve got a second hard disk (/dev/sdb) to use, so I’ll create my volume group there:
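(Assuming /dev/sdb is dedicated to Cinder; tgt is the iSCSI target daemon matching the iscsi_helper setting above.)

```bash
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
apt-get install -y cinder-volume tgt
service tgt restart
service cinder-volume restart
```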
At this point, the controller configuration is complete.
Building the Compute Nodes
Now on to the compute nodes. Again, we do a fresh install of Ubuntu 12.04 LTS x64. Next we’ll set up our networking. Edit /etc/network/interfaces. It should look something like this:
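(A sketch; the 192.168.1.131 management address is just an example, give each compute node its own, and use your real DNS servers.)

```
# /etc/network/interfaces (compute node)
auto lo
iface lo inet loopback

# Management interface
auto eth0
iface eth0 inet static
    address 192.168.1.131
    netmask 255.255.255.0
    gateway 192.168.1.254
    dns-nameservers 192.168.1.254

# VM traffic interface: up, but with no IP address
auto eth1
iface eth1 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    down ifconfig $IFACE down
```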
Notice that we’ve got an IP address assigned to eth0; this is our management interface. However, eth1 has no IP address; it will be enslaved to a bridge interface, br100, which will be created later. eth1 is the physical network card that virtual machines attached to br100 will use to get to the physical network.
Now we can install the required software:
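(The kvm flavor of nova-compute pulls in the hypervisor bits.)

```bash
apt-get install -y nova-compute-kvm nova-network nova-api-metadata
```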
Next, edit the /etc/nova/nova.conf file and add the following:
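(A sketch; 192.168.1.131 stands in for this compute node’s own address, and dnsmasq_config_file points at a file we’ll create in a moment.)

```
# /etc/nova/nova.conf (compute node additions)
[DEFAULT]
sql_connection = mysql://nova:Service123@192.168.1.128/nova
rabbit_host = 192.168.1.128
auth_strategy = keystone
glance_host = 192.168.1.128
my_ip = 192.168.1.131
network_manager = nova.network.manager.FlatDHCPManager
flat_network_bridge = br100
flat_interface = eth1
public_interface = br100
multi_host = True
dnsmasq_config_file = /etc/dnsmasq-nova.conf
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.1.131
novncproxy_base_url = http://192.168.1.128:6080/vnc_auto.html
```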
Notice that the compute node’s IP address is used in the vnc section. Make sure this file is updated with the correct IP address on each compute node. Next, edit /etc/nova/api-paste.ini and configure the filter:authtoken section:
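(Identical to the controller’s:)

```
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.128
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Service123
```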
Now reboot the compute node, and it should be about ready to deploy virtual machines. After the reboot it’s a good idea to create and source the creds file (as we created on the controller node earlier), so that we can execute commands on the compute node. However, the next commands can just as well be executed on the controller node. We need to define the network for the VMs. Enter the following command (adjust for your subnet information):
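(The label vmnet is arbitrary; --multi_host=T tells nova that each compute node handles networking for its own VMs.)

```bash
nova-manage network create vmnet \
    --fixed_range_v4=192.168.2.0/24 \
    --num_networks=1 --network_size=256 \
    --bridge=br100 --bridge_interface=eth1 \
    --multi_host=T
```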
One last thing. dnsmasq is the service that will provide DHCP addressing as well as act as the default router for the virtual machines. This is fine and dandy if you want a Linux process routing all of your VM traffic. I don’t. Since each compute node will have the nova-network installed, we can tell dnsmasq to set the VM’s default gateway to our physical router, which will provide better performance and eliminate dnsmasq as a single point of failure. We create the file /etc/dnsmasq-nova.conf and add the following line:
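(192.168.2.254 is the physical router on the VM subnet.)

```
dhcp-option=option:router,192.168.2.254
```

Then restart nova-network (service nova-network restart) so dnsmasq picks up the change.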
Adjust for your subnet of course. And we’re done! Now we can test. If you want to add more compute nodes, just repeat the procedure above, and assign a unique IP address where appropriate.
OK, we’re ready to start deploying virtual machines. Before we begin, we’ll need an ssh key pair to log on to our VMs over the network. If you don’t already have a key pair, at a Linux command prompt type ssh-keygen to create one. Then type cat .ssh/id_rsa.pub and highlight and copy the resulting gibberish. Now log on to the web portal as admin, and in the left-hand pane, select the Project tab. Click the Access and Security link and then in the right-hand pane, select the Keypairs tab.
Now click the Import Keypair button, enter a name for the key, paste the gibberish into the public key field, and click Import Keypair.
Next, we should enable ssh traffic and ping to get to our VMs. Click on the Security Groups tab, and click the Edit Rules button for the default group. Click Add Rule, and add a Custom TCP Rule for port 22 (ssh) and add an All ICMP rule.
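If you prefer the command line, the keypair import and the two rules look roughly like this (run after sourcing the creds file; mykey is whatever name you like):

```bash
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
```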
Now, finally, in the left-hand pane, click the Instances link and click Launch Instance. Give your instance an Instance Name (how about test1), select a flavor (flavors define the number of virtual CPUs, the RAM, and the disk size), and select an Instance Boot Source of ‘boot from image’, then select your Ubuntu image. If you’ve got more than one key pair defined, click the Access and Security tab and select your key pair, then click Launch.
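The CLI equivalent, assuming the stock m1.small flavor and the key name from above:

```bash
nova boot --flavor m1.small --image "Ubuntu 12.04 LTS" --key-name mykey test1
nova list    # watch for the instance to go ACTIVE and pick up an IP
```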
Unless you royally mucked up the config files (and you probably did), your instance should be launched on your compute node, an IP address should be allocated, and within a minute or two, you should be able to ssh into your new instance at the given IP address. Note that the default user account on the Ubuntu image is ubuntu, so you should type ssh ubuntu@ipaddress. Your key should have been imported during the deployment so you shouldn’t need to enter a password.
Having trouble? I’m not surprised. We covered a lot of configuration. If anything is amiss, you may get some errors, and OpenStack doesn’t provide much feedback in the web portal. If your instance fails to launch or fails to connect to the network, then it’s time to start looking at the various logs on the compute and controller nodes. However, that’s a topic for future posts. I hope this was helpful!