This article provides scripts to simplify the installation of OpenStack Juno, including the Neutron networking component, onto a minimum number of servers for testing. If you’ve done a manual install by following the official documentation, you know there are a large number of steps, and a basic install can take hours. Given the complexity of even a basic environment, there are a lot of places where you can make a typo or enter the wrong IP address in a configuration file. The goal of these scripts is to make it quick and easy to stand up a new stack in a repeatable fashion.
In my previous article, OpenStack Juno Scripted Installation on CentOS 7, I used the nova-network legacy networking component. This is about as simple as it gets, requiring only two servers and two subnets. There’s also a network performance benefit, since the instances are “plugged into” a virtual bridge that is uplinked directly to the physical network.
Now, using neutron, we need three servers and three subnets. Instances are “plugged into” an Open vSwitch (OVS) bridge on the compute node. The instance traffic is then encapsulated in a GRE tunnel and shipped over to the network node, where it is routed (by the neutron L3 agent) to the physical network. The network performance in this case is dramatically worse than with the nova-network scenario. I understand that neutron is required for a multi-tenant, secure and scalable cloud. Just keep this in mind if your goal is to build a simple private cloud that performs well. Nuff said.
OK, so we need three servers and three subnets. The basic design is shown in the figure below:
We’ve got three servers: a controller node, a network node and a compute node. All three have a NIC connected to our management network (192.168.1.0/24 in my case), which will provide SSH access to the servers as well as web and API access to the stack. In a customer/public facing cloud scenario, you’d probably want to separate the SSH traffic onto its own NIC, or otherwise limit SSH traffic to the inside.
The network node will require two additional NICs, one for the external instance traffic (192.168.2.0/24 in my case) where floating IPs will be applied, and one NIC for the tunnel network (10.0.0.0/24 in my case). The tunnel network will carry software-defined networks between the network node and the compute nodes.
The compute node will require one additional NIC for the tunnel network, and will also require a second hard disk for our cinder-volumes.
Let’s allocate some IP addresses for the nodes. Don’t kill yourself configuring all this by hand; we’ve got a script for this! I will assign the following IP addresses for the first NIC on the management network (192.168.1.0/24):
- juno-controller: 192.168.1.232
- juno-network: 192.168.1.231
- juno-compute: 192.168.1.230
I will assign the following tunnel endpoint addresses for the second NIC on the tunnel network (10.0.0.0/24):
- juno-network: 10.0.0.231
- juno-compute: 10.0.0.230
The third NIC on the network node will not have an address of its own; floating IPs will use this NIC to access the outside world (192.168.2.0/24). The instances themselves will be on a software-defined network (10.0.1.0/24) that we’ll define later with neutron.
OK, all of the scripts here rely on a config file, which defines various details of the stack: the IP address of the controller, some key passwords, and the IP configuration of the node that we’re configuring. We’ll use this config file for all three nodes, adjusting the local IP info for each one, while the stack details remain the same. Save the text below as a file named config and copy it to all three nodes.
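As a rough sketch of the kind of settings the config file carries (the THISHOST_* variables are the ones the scripts reference; the controller and password variable names, and the values shown, are placeholders of mine rather than the real file's contents):

```bash
# config - shared stack details plus per-node IP settings (illustrative sketch only)

# Stack-wide settings, identical on all three nodes
CONTROLLER_IP=192.168.1.232      # management address of the controller
ADMIN_PASS=ChangeMe123           # placeholder name for the admin password
SERVICE_PASS=ChangeMe456         # placeholder name for the shared service password

# Per-node settings - adjust these on each node before running the scripts
THISHOST_NAME=juno-compute
THISHOST_IP=192.168.1.230        # address on the management network
THISHOST_TUNNEL_IP=10.0.0.230    # address on the tunnel network (network/compute nodes only)
```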
The first script that we’ll use will configure the IP addresses on the node we’re working on. Remember to adjust THISHOST_NAME, THISHOST_IP and THISHOST_TUNNEL_IP in the config file to match the addresses we allocated for each node. Save the text below as ipsetup.sh, copy it to each node and make it executable (chmod +x ipsetup.sh).
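If you’re curious what a script like this does under the hood on CentOS 7, here’s a minimal sketch of the idea (the NIC device names, gateway and DNS addresses are assumptions for my lab; the real script may differ):

```bash
#!/bin/bash
# ipsetup.sh - minimal sketch: set the hostname and write static IP configs
source ./config

hostnamectl set-hostname $THISHOST_NAME

# Management NIC (device name eth0 is an assumption)
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=$THISHOST_IP
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1
EOF

# Tunnel NIC, only on the network and compute nodes
if [ -n "$THISHOST_TUNNEL_IP" ]; then
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=$THISHOST_TUNNEL_IP
PREFIX=24
EOF
fi
```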
After running the ipsetup.sh script on each node, reboot the node. Next, we’ve got a script for each node which will install and configure the OpenStack packages.
The Controller Node
The following script installs the basic controller stack, which includes MariaDB, RabbitMQ, Glance, and the API/Scheduler components of Nova, Neutron and Cinder. Save the text below to a file named controller-node.sh, make it executable, and run it on the controller node.
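To give a feel for what it does, here’s an excerpt-style sketch of a few representative steps (package names are from the RDO Juno repository, the openstack-config tool comes from the openstack-utils package, and the variable names match the config sketch above; the real script does much more):

```bash
#!/bin/bash
# controller-node.sh - excerpt-style sketch, not the complete script
source ./config

# Enable the RDO Juno repository and pull in the controller packages
yum install -y https://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum install -y mariadb mariadb-server MySQL-python rabbitmq-server \
    openstack-keystone python-keystoneclient \
    openstack-glance python-glanceclient \
    openstack-nova-api openstack-nova-conductor openstack-nova-scheduler \
    openstack-nova-novncproxy openstack-neutron openstack-neutron-ml2 \
    openstack-cinder openstack-utils openstack-selinux

# One example of the many openstack-config calls the script makes:
# point nova at RabbitMQ on the controller and set its own management IP
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host $CONTROLLER_IP
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $THISHOST_IP
```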
Now reboot the controller and it should be up and running.
The Network Node
The next script configures the network node, which runs the majority of the neutron services and carries the network traffic of the instances, coming in over the tunnel network from the compute node(s) and routing it out to the external network. Save the text below as network-node.sh, make it executable, and run it on the network node.
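Here too, a rough excerpt of the kind of work the script does (the external NIC name is an assumption):

```bash
#!/bin/bash
# network-node.sh - excerpt-style sketch, not the complete script
source ./config

# Kernel settings the neutron L3 agent relies on
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
sysctl -p

# The neutron agents that run on the network node
yum install -y openstack-neutron openstack-neutron-ml2 \
    openstack-neutron-openvswitch openstack-utils

# Build GRE tunnels from this node's tunnel address
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip $THISHOST_TUNNEL_IP
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True

# External bridge for the L3 agent, with the third NIC (name assumed) plugged in
systemctl enable openvswitch.service
systemctl start openvswitch.service
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2
```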
After the script has run, reboot the network node.
The Compute Node
Next we configure the compute node, which will run the QEMU/KVM hypervisor, nova-compute, cinder-volume, and the neutron Open vSwitch (OVS) agent, which connects the instances to the network node for routing to the external network. Save the text below to a file named compute-node.sh, make it executable and run it on the compute node.
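A rough excerpt of what this one takes care of (the second disk is assumed to show up as /dev/sdb):

```bash
#!/bin/bash
# compute-node.sh - excerpt-style sketch, not the complete script
source ./config

# Hypervisor and compute agent, OVS agent, and the cinder volume service
yum install -y openstack-nova-compute sysfsutils \
    openstack-neutron-ml2 openstack-neutron-openvswitch \
    openstack-cinder targetcli openstack-utils lvm2

# Carve the second disk into the cinder-volumes volume group
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

# GRE tunnel endpoint for this node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip $THISHOST_TUNNEL_IP
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
```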
When the script is complete, reboot the compute node.
Defining Neutron Networks
OK, just when you thought we had way too many networks, we need to create another one. After all, the goal of neutron is to create a scalable network infrastructure for your cloud, so that tenants can create their own complex cloud networks. So for our admin user, we will create one network (10.0.1.0/24). Instances running on the compute node will be “plugged” into this network; their traffic will be tunneled over to the network node, where a neutron router will route it to the external network (192.168.2.0/24). Here, a floating IP can be assigned to the instance so that you can reach it from the outside.
Note that ext-net and ext-subnet are shared items that will be used by all tenants to access the external network. By contrast, admin-net, admin-subnet and admin-router are private and will be used only by the admin user. To create these private networks for another user, you would have to change the creds file to contain the credentials for that user before running the script, or create these items in the GUI while logged on as that user (worthy of a separate post I think).
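For reference, a creds file is just the usual OS_* environment variables; for a hypothetical demo user it might look like this (the values are placeholders):

```bash
# demo-creds - credentials for a hypothetical "demo" user (placeholder values)
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo_pass
export OS_AUTH_URL=http://192.168.1.232:5000/v2.0
```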
We also have to define the external network itself, so that neutron understands everything end to end. This last script creates the external network, the tenant network, and sets up a router between the two. Save the text below as make-network.sh, make it executable, and run it on any node.
It’s important to understand what’s going on here. First we’re creating the external network called ext-net. Next, we define a subnet on that ext-net network (192.168.2.0/24), and we’re allocating a range of addresses (20 in my case) for floating IPs. Next, we create a tenant network called admin-net and create a subnet on that network (10.0.1.0/24). Finally we create a neutron router, which has two interfaces. One will have the gateway address for the tenant subnet (10.0.1.1), and the other will have the first address allocated on the external network (192.168.2.200).
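Putting that together, a make-network.sh along these lines might look roughly like the following (Juno-era neutron CLI; the allocation pool end address and the external gateway address are assumptions based on my ranges):

```bash
#!/bin/bash
# make-network.sh - sketch of the network setup using the Juno-era neutron CLI
source ./creds   # admin credentials

# Shared external network, with 20 addresses set aside for floating IPs
neutron net-create ext-net --router:external True \
    --provider:network_type flat --provider:physical_network external
neutron subnet-create ext-net --name ext-subnet --disable-dhcp \
    --allocation-pool start=192.168.2.200,end=192.168.2.219 \
    --gateway 192.168.2.1 192.168.2.0/24

# Private tenant network and subnet for the admin user
neutron net-create admin-net
neutron subnet-create admin-net --name admin-subnet --gateway 10.0.1.1 10.0.1.0/24

# Router between the two: one interface gets the tenant gateway (10.0.1.1),
# the other takes the first external address (192.168.2.200)
neutron router-create admin-router
neutron router-interface-add admin-router admin-subnet
neutron router-gateway-set admin-router ext-net
```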
Growing the Stack
You can simply add more compute nodes: just build another server with two NICs and two disks, and run the scripts on the new node. You can also create as many tenant networks as you like.
Good luck stackers!