OpenStack – I need how many NICs?

May 15, 2013

I’m currently working on an OpenStack deployment. Looking at the documentation and the various deployment scenarios, I see that each host requires at least two, if not three or four, network interface cards, depending on the OpenStack services running on that particular host. For me, and I suspect for a lot of enterprises, this just isn’t an option. I’m using blade servers with two built-in NICs, and I want to team (bond) them for high availability, so essentially, I’ve only got one NIC per host.

OpenStack Networks

OpenStack deployments use four separate networks:

  • API Network: through which your users access the OpenStack dashboard and the REST API.
  • External Network: through which your users’ VMs communicate with the rest of the network/Internet.
  • Data Network: through which VM traffic is switched between the compute hosts and the external-network-connected hosts.
  • Management Network: through which VMs connect to iSCSI storage, admins ssh into hosts, and so on.

It’s possible to combine some of these networks; for example, the API and Management networks could be combined. But this presents a security concern, especially if you are offering access to your API and dashboard over the Internet, so it’s probably best to keep the four networks separate if possible.

What’s the answer? VLANs, of course. The switch ports to which the OpenStack hosts are connected need to be configured as trunk ports (as opposed to access ports), which carry the VLANs for each of the networks. Then, the hosts’ network interfaces need to be configured with the corresponding VLAN IDs.
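
As a rough sketch of the switch side, here’s what a trunk port might look like in Cisco IOS syntax (the interface name and VLAN IDs are placeholders for your environment, and some switch models also require a switchport trunk encapsulation dot1q line):

interface GigabitEthernet1/0/1
 description openstack-host-nic
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40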

VLAN configuration is fairly simple on the popular Linux distributions that you’ll probably use for OpenStack, such as CentOS and Ubuntu. CentOS has the required VLAN support package pre-installed by default, but Ubuntu does not. No big deal: a simple sudo apt-get install vlan after the initial Ubuntu install takes care of that. Well, that is, if you have connectivity to install packages from a repository, which you probably don’t at this point, given that your connectivity depends on those very VLANs!

CentOS or Ubuntu

OK, so you’ve got a catch-22 for Ubuntu. One solution is to put the vlan package onto a thumb drive or into an ISO image, something that you can sneaker-net over to the host and install by hand, or present via the blade’s out-of-band interface (e.g., the HP iLO virtual CD/DVD drive). In any case, you can get it done; it’s just a bit of a pain. CentOS, on the other hand, just works out of the box.
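
For example, assuming you’ve copied the package onto a thumb drive mounted at /mnt/usb (the mount point and package filename here are illustrative):

# install the vlan package from local media, no repository needed
sudo dpkg -i /mnt/usb/vlan_*.deb
# load the 802.1Q kernel module
sudo modprobe 8021q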

It’s kind of a shame that Ubuntu doesn’t simply include the VLAN package in the base install, since a lot of the better-documented OpenStack installation examples are done on Ubuntu, while Red Hat/CentOS examples are fewer and less complete. There’s another reason I’m favoring CentOS: it automatically discovers my multipath SAN storage, where Ubuntu requires manual multipath configuration (yucky). I’m a big fan of Ubuntu on the desktop and as a virtual server, but as a physical host in the enterprise, it looks like CentOS is the way to go.

NIC Bonding

Anyway, back to the networking. My blades have two NICs, which are detected by CentOS as eth0 and eth1. I’ll configure these to be members of a NIC team using the Linux bonding module. The virtual bonding interface, bond0, will be created, and eth0 and eth1 will be slaves to bond0. Then, I’ll create an interface for each VLAN that uses bond0 as its trunk.
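
One prerequisite worth noting: on CentOS 6, Red Hat’s documentation has you alias the bond interface to the kernel bonding module so it loads automatically. A one-line file takes care of it (the filename is conventional; any .conf file in that directory works):

# /etc/modprobe.d/bonding.conf
alias bond0 bonding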

The bonding interface, bond0, will load-balance traffic across eth0 and eth1 using one of several load-balancing/failover techniques. Which one to use depends on what your network switches support. In my case, I’m using Cisco switches that support LACP port-channeling. This provides good distribution of traffic across the NICs and rapid failover in the event of a link failure.
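
On the switch side, the two ports connected to the blade’s NICs need to be joined into an LACP port-channel, with the trunk configuration shown earlier applied to the channel rather than the individual ports. A minimal sketch in Cisco IOS syntax (the interface names and channel-group number are assumptions for your environment):

interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30,40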

Config Files

In CentOS, you will find your network interface definitions under /etc/sysconfig/network-scripts. In our example, let’s say we’ve got four VLANs with IDs 10, 20, 30, and 40. In this case, we’ll need to create or modify the following config files:

  • ifcfg-eth0
  • ifcfg-eth1
  • ifcfg-bond0
  • ifcfg-bond0.10
  • ifcfg-bond0.20
  • ifcfg-bond0.30
  • ifcfg-bond0.40

Of these config files, only the VLAN interfaces will be configured with IP addresses. eth0, eth1, and bond0 are simply brought up and joined as a bonded set. So, these config files are very simple:

ifcfg-eth0

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

ifcfg-eth1

DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

ifcfg-bond0

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="miimon=100 mode=4 lacp_rate=1"
IPV6INIT=no

Note that my BONDING_OPTS is specific to LACP port-channeling. There are seven different load-balancing modes to choose from, which I won’t get into here. Mode 4 (802.3ad) is the appropriate one for LACP.
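
If your switches don’t support LACP, or your two NICs connect to separate, un-stacked switches, mode 1 (active-backup) is a safe fallback that requires no switch-side configuration. Only the BONDING_OPTS line changes:

BONDING_OPTS="miimon=100 mode=1"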

Finally, here’s where we give the server an IP address and enable communication over the network: we define the VLAN interfaces. I’ll show one here. Let’s configure the interface for VLAN 10, and assume that VLAN 10 is addressed using the 10.0.10.0/24 subnet, that the host’s IP address will be 10.0.10.5, and that the router at 10.0.10.1 will be your default gateway. Then you would create the following config file:

ifcfg-bond0.10

DEVICE=bond0.10
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
IPADDR=10.0.10.5
NETMASK=255.255.255.0
GATEWAY=10.0.10.1

Ignoring for the moment that we’ve got three more VLANs to configure, we restart networking using the sudo service network restart command, and the interfaces should come up. Finish the other three VLANs and you’ve got yourself four networks trunked over redundant NICs.
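
Before moving on, it’s worth verifying that the bond negotiated correctly and that the VLAN interface came up with its address (the exact output varies by kernel version):

# check the bonding mode, LACP partner status, and slave link states
cat /proc/net/bonding/bond0
# confirm the VLAN interface is up and addressed
ip addr show bond0.10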

Of course, you’ll likely deploy OpenStack across numerous hosts, and depending on its role, not every host will require access to every VLAN. For example, a cloud controller node will require access to the API and Management VLANs. A network node will require access to the External, Data, and Management VLANs. A compute node will require access to the Data and Management VLANs. Plan out your deployment and configure your VLANs as appropriate; a rough sketch follows below, but the details are a topic for another day…
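
Sticking with our example VLAN IDs (the mapping of networks to IDs here is an assumption, not a rule), the per-role interface files might break down like this:

Controller node: ifcfg-bond0.10 (API), ifcfg-bond0.40 (Management)
Network node: ifcfg-bond0.20 (External), ifcfg-bond0.30 (Data), ifcfg-bond0.40 (Management)
Compute node: ifcfg-bond0.30 (Data), ifcfg-bond0.40 (Management)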

9 thoughts on “OpenStack – I need how many NICs?”

  1. Dorian Carpentier De Changy

    Hi, I read your how-to with interest. Thank you for the alternative to the two or three NICs usually proposed.
    You mention machines with one and two NICs respectively. I’ve got two machines with one NIC each, both of them behind a home router.
    First, would it be possible to manage the traffic over the VLANs using a similar configuration based on a single NIC per node?

    I figure this would be possible with your article’s approach by simply not using the second NIC, is that right?

    Second, I have seen on my box/router that VLANs are configured, but I don’t think I can manage them, so I will have to purchase one or two cheap switches.

    For an internship I need to play with the OpenStack services. Maybe one node running at home while, for demo purposes, a second laptop sits behind a router/switch on campus.

    Can you tell me if that is possible, and what the issues are (to how many services would I be limited, given only a single NIC)?

    Thank you, looking forward to your answer.
    Dorian

    Reply
  2. Brian Seltzer

    Home routers often don’t provide VLAN capability. Mine doesn’t. I ended up buying an inexpensive 8-port Cisco switch (the SG200-08, available on Amazon), which does allow me to create VLANs. Then I was able to create a trunk port with several VLANs on it, and created the VLAN interfaces as shown above. Hope that helps!

    Reply
  3. Ian Daniel

    Hi

    The bonding side of this on Linux is pretty easy. I have multiple VLANs all working fine on 3 hosts. OpenStack deploys OK and the GUI works, etc. etc.

    Making this stuff work properly with Neutron and OpenStack… that’s not so easy. I have searched high and low; the OpenStack docs are, to be frank, pretty poor for this kind of thing (well OK, they’re just poor, full stop), and what looked like a sensible idea (5 x GbE NICs bonded and then VLANs for GRE, External and Mgmt) is fast becoming a total nightmare.

    Do you happen to have anything that clearly shows how to make OpenStack work with bonds and VLANs? Because if you spend a few hours on Google you can find a whole host of posts that *don’t* help you at all.

    Your other post (scripted install with Neutron and CentOS) is really good and helpful. But obviously I’m missing something with regard to VLANs and getting things to work. I’m half tempted to just rip it all out and use single interfaces to prove it all works using that post, then maybe go back to bonding if I ever find a coherent example that works. I’m trying to use Packstack for it and am most of the way there, but the networking is just eluding me.

    Thanks for the posts.
    Ian

    Reply
      1. Ian Daniel

        Hi,

        Yes, sort of. It worked, but there were a few cosmetic issues with the router, i.e. the ports showed as down even though they weren’t. I’m actually playing with getting Liberty to work with a similar config now. If you’re having issues with it, I can try and help.

        Reply
        1. Jordan Olin (OLINSolutions, Inc.)

          Hi Ian,

          Thanks for the replies and the link. Your write-up is really helpful.

          I am going to re-install my Controller node with CentOS 7. Right now it is running Oracle Linux 7, but there is a compatibility issue between their OpenStack 2.0 distribution and Oracle Linux 7.1’s UEK (R3, but it needs R4).

          I am still trying to understand what I need to do on my TP-Link switches with respect to both VLAN and LACP configuration. It’s not clear whether I need to use tagged or untagged VLANs, and if tagged, what PVID I need to use. Also, since I have multiple ports on my servers (anywhere from two to six depending on the server), should I create a standard Linux bond (server-side teaming), or use active LACP between the two?

          Regardless, thanks again for the information.

          Cheers,

          Jordan.

          Reply
          1. Ian Daniel

            Hi Jordan

            VLAN 10 is a flat network, no tagging; the PVID for the ports in that VLAN is set to 10, and I use a separate GbE port for that VLAN on all servers. For the LACP bond I use the remaining ports on the compute and network servers. That bond has tagging enabled on the switches, so those ports are members of both VLAN 20 and VLAN 30 using tagged frames, and the PVID for those ports I left at the default (1).

            Regards,

            –Ian
