OpenStack Mitaka – Scripted Installation on CentOS 7

September 22, 2016

Note: There is a newer version of this article: OpenStack Newton – Quick Install on CentOS 7

This article shows how to deploy OpenStack Mitaka on CentOS 7 using a single Python script. The script follows the steps in the official OpenStack installation guide; the idea is simply to automate those steps to avoid the time, effort, confusion, and mistakes of performing them by hand. The script installs the core OpenStack services (Keystone, Glance, Nova, Neutron, Cinder, and the Dashboard) as well as MySQL, RabbitMQ, Memcached, and Chrony.

The intention is to build as simple an OpenStack environment as possible, consisting of one controller node and one or more compute nodes. For testing purposes, these nodes may be built as virtual machines, though be aware that performance of instances running on a virtual compute node will be poor. That said, we need two CentOS 7 machines with the following minimum recommended requirements:

Controller Node:

2 CPU
4 GB RAM
2 network adapters (one for management, one for the provider network)

Compute Node:

4 CPU
8 GB RAM
2 network adapters (one for management, one for the provider network)
2nd hard disk (for cinder volumes)

Networking:

We will need two networks: a management network and a provider network. The 1st network adapter of both nodes will be attached to the management network. This network can be shared with other hosts on your network; we just need to allocate an IP address for each node. In my example, I'll use a private network (192.168.1.0/24) and allocate 192.168.1.79 for the controller node and 192.168.1.80 for the compute node. There's no need to configure these IP addresses by hand; the script will do that.

The provider network should be a new, empty subnet (192.168.2.0/24) – OpenStack will control the provisioning of IP addresses on this network, so it is not recommended to share this network with other systems.

These two networks will communicate with each other via a router. In an enterprise data center, you’ll have routers in place, and if you ask nicely, your network administrator will take care of this for you. If you are working on a home network, you can build yourself a router. For more information, see this article: Build a Router on CentOS 7.

The resulting environment will look like the diagram below:

[Diagram: one controller node and one compute node, each with a management NIC on 192.168.1.0/24 and a provider NIC on 192.168.2.0/24, the two networks joined by a router]

Note that the 2nd NIC on each OpenStack node doesn't have an IP address assigned. OpenStack will orchestrate the IP addressing for the provider network at runtime.

Steps for Deployment

First, of course, you'll need to build the two CentOS 7 hosts mentioned above. I won't get into how that's done, but a basic OS installation on virtual or physical servers that meet or exceed the requirements above will suffice.

Download the script onto the servers

You can download the script from http://pastebin.com/raw/2QUrU93v. To download the file from the Linux command line and make it executable, type the following commands:

curl -O http://pastebin.com/raw/2QUrU93v
tr -d '\r' < 2QUrU93v > openstack-deploy.py
chmod +x openstack-deploy.py
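
The `tr -d '\r'` step strips carriage returns, since the raw pastebin download may arrive with DOS (CRLF) line endings, and a carriage return at the end of the shebang line stops the script from executing. A quick sketch of the problem and the fix (the file names below are just illustrative):

```shell
# A script saved with CRLF line endings fails to run, because the kernel
# looks for an interpreter literally named "/bin/sh\r".
printf '#!/bin/sh\r\necho hello\r\n' > crlf-demo.sh
chmod +x crlf-demo.sh

# Stripping the carriage returns makes it runnable:
tr -d '\r' < crlf-demo.sh > clean-demo.sh
chmod +x clean-demo.sh
./clean-demo.sh    # prints: hello
```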

Now a quick check that the script is ready to run:

# ./openstack-deploy.py -h
usage: openstack-deploy.py [-h] {hostconfig,controller,compute} ...

positional arguments:
  {hostconfig,controller,compute}

optional arguments:
  -h, --help            show this help message and exit

As you can see, the script has three positional arguments: hostconfig, controller and compute. We will start with hostconfig, which sets up the networking on the host.

Configure Networking on the Controller Node

To configure networking on the controller node, we’ll need to provide the network information for the management network, including hostname, IP address, subnet mask, default gateway, and two DNS server addresses. In my example, my controller will use the following:

  hostname: controller
IP Address: 192.168.1.79
   Netmask: 255.255.255.0
   Gateway: 192.168.1.1
     DNS 1: 192.168.1.253
     DNS 2: 192.168.1.252

We can configure the network using the script like so:

sudo ./openstack-deploy.py hostconfig -n controller -i 192.168.1.79 \
  -m 255.255.255.0 -g 192.168.1.1 -d 192.168.1.253 -d2 192.168.1.252

The script will configure the network interfaces and reboot the host.

Configure Networking on the Compute Node

In my example, my compute node will use the following:

  hostname: compute1
IP Address: 192.168.1.80
   Netmask: 255.255.255.0
   Gateway: 192.168.1.1
     DNS 1: 192.168.1.253
     DNS 2: 192.168.1.252

We can configure the network using the script like so:

sudo ./openstack-deploy.py hostconfig -n compute1 -i 192.168.1.80 \
  -m 255.255.255.0 -g 192.168.1.1 -d 192.168.1.253 -d2 192.168.1.252

The script will configure the network interfaces and reboot the host.

Deploying OpenStack on the Controller Node

Next we will deploy the OpenStack controller components onto the controller node. For this, we must ssh into the controller node and run the script, providing the admin token, admin password, service password and demo password, like so:

sudo ./openstack-deploy.py controller -t tokenstring \
  -a adminpasswd -s servicepasswd -d demopasswd

The script will install and configure the necessary components. This will take a few minutes. When the script has finished, you should be able to point your browser at the controller’s IP address to launch the dashboard. In my case, the URL would be http://192.168.1.79/dashboard

Deploying OpenStack on the Compute Node

Next we will deploy the OpenStack compute components onto the compute node. For this, we ssh into the compute node and run the script. We must provide the controller IP address, admin password, and service password, like so:

sudo ./openstack-deploy.py compute -c controllerip -a adminpasswd \
  -s servicepasswd

The script will install and configure the components. When it has finished, there are a few more things we need to do before we can deploy an instance.

Authentication for Command Line Tools

One of the things the script does is create a client environment script named adminrc in the directory from which the script was executed. To enable authenticated access for the OpenStack command line tools, log onto the controller via ssh and simply source this file. This sets environment variables for the life of the shell session.

source adminrc
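
The exact contents depend on the script, but a Mitaka-era admin RC file generally looks something like the sketch below. The password and IP address shown are the example values from this walkthrough, and the variable values are assumptions; check the generated adminrc for the actual contents.

```shell
# Hypothetical sketch of an adminrc file -- your generated file may differ.
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminpasswd
export OS_AUTH_URL=http://192.168.1.79:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```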

Deploying a Glance Image

I’m a fan of the Debian OpenStack image. Lean and mean. With an ssh session open to the controller node, we can deploy the Debian image into our Glance service.

wget http://cdimage.debian.org/cdimage/openstack/current/debian-8.6.0-openstack-amd64.qcow2

openstack image create "debian" --file debian-8.6.0-openstack-amd64.qcow2 \
  --disk-format qcow2 --container-format bare --public

Note that by the time you read this, the current version of Debian may have changed. Go to http://cdimage.debian.org/cdimage/openstack/current/ to see what the latest version is.

Setup the Network

The script has set up the Neutron networking service to support a flat provider network configuration. We now need to create that network, as well as the subnet our instances will use. In my example, I’m going to use the 192.168.2.0/24 subnet for my instances, which we can set up like so:

neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat mynet

neutron subnet-create --name mysubnet \
  --allocation-pool start=192.168.2.100,end=192.168.2.200 \
  --dns-nameserver 192.168.1.253 --gateway 192.168.2.254 \
  mynet 192.168.2.0/24

Here I’ve allocated a pool of 101 IP addresses (192.168.2.100 through 192.168.2.200, inclusive) to be consumed by instances on the 192.168.2.0/24 network. I’ve also provided the default gateway and DNS address.

Enable ssh Access to Your Instances

The following command will add a rule to the default security group to allow port 22 (ssh) inbound to the instances.

openstack security group rule create --proto tcp --dst-port 22 default

Import Your ssh Key

The next command will import your ssh key:

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

Create a Flavor 

A flavor is a set of sizing parameters for an instance. The default flavors are somewhat undersized or oversized for a small test environment, so I like to create a flavor that is sized just right, using this command:

openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 2 m1.myflavor

Finally! Create an Instance

Let’s create an instance called instance1 using our new flavor, our debian image, our default security group, and our ssh key:

openstack server create --flavor m1.myflavor --image debian \
  --security-group default --key-name mykey instance1

It will take a few moments for the instance to boot, after which you can discover its IP address by looking at the instance list:

openstack server list
+--------------------------------------+-----------+--------+---------------------+
| ID                                   | Name      | Status | Networks            |
+--------------------------------------+-----------+--------+---------------------+
| 1bb1eebd-7c31-492d-aa44-da869d5cfdbf | instance1 | ACTIVE | mynet=192.168.2.101 |
+--------------------------------------+-----------+--------+---------------------+

and we can ssh into the instance:

bash:~$ ssh debian@192.168.2.101
The authenticity of host '192.168.2.101 (192.168.2.101)' can't be established.
ECDSA key fingerprint is SHA256:nJnn827v7QYnSpne90KsZJTnLQF55kHzsQTKmIQ1F+U.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.101' (ECDSA) to the list of known hosts.

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
debian@instance1:~$

Source Code for the Script

The full source is available at the pastebin link given earlier: http://pastebin.com/raw/2QUrU93v
