I’m currently working on an OpenStack deployment. Looking at the documentation and the various deployment scenarios, each host appears to require at least two if not three or four network interface cards, depending on the OpenStack services running on that particular host. For me, and I suspect for a lot of enterprises, this just isn’t an option. I’m using blade servers, with two built-in NICs, and I want to team (bond) them for high availability, so essentially, I’ve only got one NIC per host.
OpenStack deployments utilize four separate networks. These include:
- API Network: through which your users access the OpenStack dashboard and the REST API.
- External Network: through which your users’ VMs communicate with the rest of the network/Internet.
- Data Network: through which traffic is switched between external-network-connected hosts and compute hosts.
- Management Network: through which VMs connect to iSCSI storage, admins ssh into hosts, etc.
It’s possible to combine some of these networks; for example, the API and Management networks could be combined. However, this presents a security concern, especially if you are offering access to your API and dashboard over the Internet. So, it’s probably best to keep the four networks separate if possible.
What’s the answer? Well, VLANs, of course. The switch ports to which the OpenStack hosts are connected need to be configured as trunk ports (as opposed to access ports) that carry the VLANs for each of the networks. Then, the hosts’ network interfaces need to be configured with the corresponding VLAN IDs.
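As a rough sketch of the switch side (assuming Cisco IOS; the interface name and VLAN IDs here are placeholders for your own), a trunk port carrying the four networks might look something like this:

```
interface GigabitEthernet1/0/1
 description OpenStack host, NIC 1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40
```

Other switch vendors have equivalent trunk/tagged-port settings; the point is that all four VLANs must be tagged on every port facing an OpenStack host NIC.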
VLAN configuration is fairly simple on the popular Linux distributions you’ll probably use for OpenStack, such as CentOS and Ubuntu. CentOS has the required VLAN support package pre-installed by default, but Ubuntu does not. No big deal, a simple sudo apt-get install vlan after the initial Ubuntu install takes care of that. Well, that is, if you have connectivity to install packages from a repository, which you probably don’t at this point, given that your connectivity depends on those VLANs!
CentOS or Ubuntu
OK, so you’ve got a catch-22 for Ubuntu. One solution is to put the vlan package onto a thumb drive or into an ISO image, something that you can sneaker-net over to the host and install by hand, or present via the blade’s out-of-band interface (e.g. the HP iLO virtual CD/DVD drive). In any case, you can get it done; it’s just a bit of a pain. CentOS, on the other hand, just works out of the box.
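A minimal sketch of the offline install, assuming you’ve copied the vlan .deb onto media mounted at /media/usb (the path and package filename are hypothetical, and will vary by Ubuntu release):

```
# Install the vlan package from local media (no repository access needed)
sudo dpkg -i /media/usb/vlan_*.deb

# Load the 802.1Q kernel module now, and have it load on every boot
sudo modprobe 8021q
echo 8021q | sudo tee -a /etc/modules
```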
It’s kind of a shame that Ubuntu doesn’t simply include the vlan package in the base install, since a lot of the better-documented OpenStack installation examples are done on Ubuntu, while Red Hat/CentOS examples are fewer and less complete. There’s another reason I’m favoring CentOS: it automatically discovers my multipath SAN storage, whereas Ubuntu requires manual multipath configuration (yucky). I’m a big fan of Ubuntu on the desktop, and as a virtual server, but for a physical host in the enterprise, it looks like CentOS is the way to go.
Anyway, back to the networking. My blades have two NICs, which are detected by CentOS as eth0 and eth1. I’ll configure these as members of a NIC team using the Linux bonding module. A virtual bonding interface, bond0, will be created, and eth0 and eth1 will be slaves of bond0. Then, I will create an interface for each VLAN that uses bond0 as its trunk.
The bonding interface, bond0, will load-balance traffic across eth0 and eth1 using one of several load-balancing/failover techniques. Which one to use depends on what your network switches support. In my case, I’m using Cisco switches that support LACP port-channeling, which provides good distribution of traffic across the NICs and rapid failover in the event of a link failure.
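For completeness, here’s a hypothetical sketch of the matching switch-side configuration (Cisco IOS syntax; the interface names are assumptions). The two host-facing ports are bundled into an LACP port-channel, and the trunk settings then go on the port-channel interface rather than on the individual ports:

```
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
```

Here `mode active` means the switch actively negotiates LACP, which pairs with mode 4 bonding on the Linux side.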
In CentOS, you will find your network interface definitions under /etc/sysconfig/network-scripts. In our example, let’s say we’ve got four VLANs with the IDs 10, 20, 30 and 40. In this case, we’ll need to create or modify the following config files:

- ifcfg-eth0
- ifcfg-eth1
- ifcfg-bond0
- ifcfg-bond0.10
- ifcfg-bond0.20
- ifcfg-bond0.30
- ifcfg-bond0.40
Of these config files, only the VLAN interfaces will be configured with IP addresses; eth0, eth1 and bond0 are simply brought up and joined into a bonded set. So, their config files are very simple:
ifcfg-eth0:

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```
ifcfg-eth1:

```
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```
ifcfg-bond0:

```
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=4 lacp_rate=1"
IPV6INIT=no
```
Note that my BONDING_OPTS line is specific to LACP port-channeling. There are seven different load-balancing modes (0 through 6) to choose from, which I won’t get into here; mode 4 (802.3ad) is the one to use with LACP.
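Once the bond is up, you can sanity-check it from the kernel’s bonding status file (a quick sketch; the exact output varies by kernel version):

```
# Shows the bonding mode, MII status of each slave, and LACP aggregator info
cat /proc/net/bonding/bond0

# For mode 4 you should see a line like:
#   Bonding Mode: IEEE 802.3ad Dynamic link aggregation
```

If the slaves show an MII Status of anything other than "up", check the cabling and the switch-side port-channel before going any further.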
Finally, here’s where we actually give the server an IP address and enable communication over the network: we define the VLAN interfaces. I’ll show one here, the interface for VLAN 10. Let’s assume that VLAN 10 is addressed using the 10.0.10.0/24 subnet, that this host’s IP address will be 10.0.10.5, and that the router at 10.0.10.1 will be your default gateway. Then you would create the following config file:
ifcfg-bond0.10:

```
DEVICE=bond0.10
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
IPADDR=10.0.10.5
NETMASK=255.255.255.0
GATEWAY=10.0.10.1
```

The VLAN=yes line tells the CentOS network scripts to treat this as an 802.1Q sub-interface, with the parent device (bond0) and VLAN ID (10) taken from the device name.
Now, setting aside the fact that we’ve got three more VLANs to configure, we restart networking with sudo service network restart, and the interfaces should come up. Finish the other three VLANs and you’ve got yourself four networks trunked over redundant NICs.
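To confirm that the VLAN sub-interfaces were actually created, a couple of quick checks (a sketch; the output format varies by distribution and kernel):

```
# The kernel's view of configured VLAN devices and their IDs
cat /proc/net/vlan/config

# Addressing and state of the VLAN 10 interface
ip addr show bond0.10
```

If bond0.10 is missing here, the 8021q module or the ifcfg file is the first place to look.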
Of course, you’ll likely deploy OpenStack across numerous hosts. Depending on their role, not every host will require access to every VLAN. For example, a cloud controller node will require access to the API and Management VLANs. A network node will require access to the External, Data, and Management VLANs. A compute node will require access to the Data and Management VLANs. Plan out your deployment and configure your VLANs as appropriate. But that’s a topic for another day…