OpenStack: My Test Lab

May 16, 2014

This article is for those of you struggling to build OpenStack with only one or two machines and only one network. Many of us don’t have a dozen machines at work to test various OpenStack configurations. I certainly don’t. For a while, I had five physical servers. On two of them I installed the KVM hypervisor and built various VMs for my OpenStack controller nodes and such. The other three I used as compute nodes. These servers were uplinked to Cisco switches via trunk ports with a number of VLANs provisioned on them. This made it possible to bring in the three or four networks required to set up Neutron properly, build back-end storage networks, separate the management and API networks, and all that good stuff. Sadly, those servers were scarfed up when we went into production, and the budget isn’t there right now to buy more.

So I’ve built my own lab at home. It certainly doesn’t have all the bells and whistles that I had at work, but it’s enough to test various deployment scenarios. My series on OpenStack High Availability was completely built in my basement on two physical servers (but could have been built with only one if need be).

The Servers

I’ve built two servers using Dell Precision workstations that I bought on eBay. Both have two quad-core Xeon processors and 32GB of RAM. That makes a nice hypervisor host, and happens to be the maximum number of cores and amount of RAM that the free version of VMware ESXi will allow. Each server has a 450GB SSD installed, so disk IOPS are no problem. I think I managed to put these servers together for around $700 each. Between the two servers, I can run around 30 or so VMs with an average of 2GB of vRAM each. Not a bad-sized test lab.
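Just to show where that estimate comes from, here is a quick back-of-the-envelope sketch; the per-host overhead figure is an assumption on my part, not a measured number.

```python
# Rough capacity math for the two-host lab.
hosts = 2
ram_per_host_gb = 32           # what the free ESXi license allowed at the time
hypervisor_overhead_gb = 2     # assumed headroom reserved for ESXi itself
avg_vram_per_vm_gb = 2         # average vRAM per lab VM

usable_gb = hosts * (ram_per_host_gb - hypervisor_overhead_gb)
print("Usable RAM across both hosts:", usable_gb, "GB")             # 60 GB
print("Approximate VM capacity:", usable_gb // avg_vram_per_vm_gb)  # about 30 VMs
```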

The Network

Like most home networks, mine is built around a cable modem and a wireless router. My wireless router is one of those with a handful of hard-wired Ethernet ports on the back that run at 1Gb/sec. That would normally be enough, but several of those ports are already taken by a NAS appliance, a cable TV tuner, and a media PC. I needed more ports, so I bought an additional 8-port Ethernet switch and uplinked it to the wireless router. End result: I’ve got enough ports.

The wireless router instantiates the 192.168.1.0/24 subnet in my house, and provides DHCP services on that subnet. Anything on the network, wired or wireless, will get an address from the router, unless you define the address statically.

For the most part, this is all you need. As far as OpenStack is concerned, most services require only one subnet. For instance, a controller node can get by with a single network connection for management, API access, web access, and so on. Really, the only role that requires a second network is the compute node. Even the compute node could conceivably use just a single network, but the problem is that OpenStack wants to serve up IP addresses on its own, and so wants to usurp an entire subnet.

If you let OpenStack use your management network for VMs, it won’t be long before you end up with an IP address conflict. In my case, given that my wireless router is at 192.168.1.1, as soon as the nova-compute service starts and spawns a dnsmasq process, dnsmasq takes 192.168.1.1, and down goes my route to the Internet! Now there may be various ways to avoid this, but it’s just as easy to build a second network, and the resulting configuration will be closer to a real-world deployment.
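To make the conflict concrete, here is a small sketch using Python’s standard ipaddress module. It simply walks the bottom of the 192.168.1.0/24 range, which is where dnsmasq started handing out addresses in my case, and flags the collision with the router; the number of addresses shown is arbitrary.

```python
import ipaddress

mgmt_net = ipaddress.ip_network("192.168.1.0/24")   # home / management subnet
gateway = ipaddress.ip_address("192.168.1.1")       # my wireless router

# dnsmasq starts allocating from the bottom of the subnet, which is
# exactly where the default gateway already lives.
for addr in list(mgmt_net.hosts())[:5]:             # 192.168.1.1 .. 192.168.1.5
    if addr == gateway:
        print(addr, "-> CONFLICT: this is already my router!")
    else:
        print(addr, "-> would be handed to an OpenStack instance")
```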

My extra Ethernet switch does support VLAN configuration, but I’ve found that I can just have two IP subnets coexist on the same wire without VLAN tagging. I’m sure you network pros are rolling your eyes right now, but remember that this is my basement, and security and scalability are not concerns here. Simple is better.
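For what it’s worth, the reason this works at all is that the two subnets don’t overlap, so a host only answers for addresses in its own range even though both share the same wire. A trivial sanity check, purely for illustration:

```python
import ipaddress

home_net = ipaddress.ip_network("192.168.1.0/24")      # existing home/management subnet
instance_net = ipaddress.ip_network("192.168.2.0/24")  # new subnet for OpenStack instances

# Same physical switch, no VLAN tags; safe only because the ranges are disjoint.
print("Subnets overlap:", home_net.overlaps(instance_net))               # False
print("Usable addresses for instances:", instance_net.num_addresses - 2) # 254
```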

The second network is 192.168.2.0/24, and I instantiated this by building a CentOS VM as a router. You can read my article CentOS as a Router for more information on that. I also added a static route for this new network on my wireless router, pointing to the CentOS router’s NIC on the 192.168.1 side. The CentOS router’s second NIC is connected to a second port group in VMware, where the 192.168.2.0/24 network lives. I set that second NIC to 192.168.2.254, so that dnsmasq can freely use the bottom of the subnet without causing any conflicts.
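Here is a small sketch of why parking the router’s NIC at the top of the subnet works; the size of the dnsmasq allocation pool is hypothetical (OpenStack decides the real range), and the CentOS router’s 192.168.1-side address is left as a placeholder.

```python
import ipaddress

instance_net = ipaddress.ip_network("192.168.2.0/24")
router_nic = ipaddress.ip_address("192.168.2.254")    # CentOS router, instance-network side

# Hypothetical dnsmasq pool growing up from the bottom of the subnet.
pool = list(instance_net.hosts())[:200]               # 192.168.2.1 .. 192.168.2.200
print("Router NIC inside the pool?", router_nic in pool)   # False, so no conflict

# Conceptually, the static route added on the wireless router is:
#   192.168.2.0/24  via  <the CentOS router's 192.168.1.x address>
```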

As for the OpenStack compute nodes: yes, they are VMs, and as such performance is not great, but it’s adequate for testing. Each compute node is simply given two NICs, and the NIC for OpenStack VMs is plugged into that second port group (the routed 192.168.2.0/24 network). The management NIC is plugged into the direct (192.168.1.0/24) network, as are any single-NIC VMs (such as controller nodes). The result is that all traffic is direct on 192.168.1.0/24 except for traffic to and from OpenStack instances, which has to traverse the CentOS router. Hopefully the diagram below will clarify all of this.

[Diagram: myTestLab network layout]
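To go along with the diagram, here’s a rough, purely illustrative sketch of the traffic paths; the host addresses are hypothetical examples.

```python
import ipaddress

home_net = ipaddress.ip_network("192.168.1.0/24")      # management / home network
instance_net = ipaddress.ip_network("192.168.2.0/24")  # OpenStack instance network

def path(src: str, dst: str) -> str:
    """Describe how traffic flows in this lab (illustration only)."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if s in home_net and d in home_net:
        return "switched directly on 192.168.1.0/24"
    if s in instance_net or d in instance_net:
        return "traverses the CentOS router"
    return "leaves via the wireless router"

print(path("192.168.1.50", "192.168.1.60"))   # controller to compute management NIC
print(path("192.168.1.50", "192.168.2.10"))   # a home machine to an OpenStack instance
print(path("192.168.2.10", "8.8.8.8"))        # an instance reaching the Internet
```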
