OpenStack Newton – Quick Install – CentOS 7

By Brian Seltzer | December 26, 2016

This article shows how to quickly deploy OpenStack Newton on CentOS 7 using a single Python script. The script follows the steps in the official OpenStack documentation; the idea is simply to automate those steps to avoid the time, effort, confusion, and mistakes of performing them by hand. The script installs the core OpenStack services, including Keystone, Glance, Nova, Neutron, Cinder, and the Dashboard, as well as MariaDB, RabbitMQ, and Memcached.

The intention is to build as simple an OpenStack environment as possible, consisting of one controller node and one or more compute nodes. For testing purposes, these nodes may be built as virtual machines, though be aware that performance of instances running on a virtualized compute node will be pretty poor. That said, we need two CentOS 7 machines with the following minimum recommended requirements:

Controller Node:

  • 2 CPU
  • 4 GB RAM
  • 2 network adapters (one for management, one for the provider network)

Compute Node:

  • 4 CPU
  • 8 GB RAM
  • 2 network adapters (one for management, one for the provider network)
  • 2nd hard disk (for cinder volumes)

Networking:

We will need two networks: a management network and a provider network. The first network adapter of each node will be attached to the management network. This network can be shared with other hosts on your network; we just need to allocate an IP address for each node. In my example, I'll use a private network (192.168.1.0/24) and allocate 192.168.1.79 for the controller node and 192.168.1.80 for the compute node. There's no need to configure these IP addresses by hand; the script will do that.

The provider network should be a new, empty subnet (192.168.2.0/24) – OpenStack will control the provisioning of IP addresses on this network, so it is not recommended to share this network with other systems.

These two networks will communicate with each other via a router. In an enterprise data center, you’ll have routers in place, and if you ask nicely, your network administrator will take care of this for you. If you are working on a home network, you can build yourself a router. For more information, see this article: Build a Router on CentOS 7.
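
For reference, if you do build your own router, the essential pieces on a CentOS 7 box with an interface in each subnet look something like the sketch below. This is only a rough outline under assumed defaults (firewalld in use); the linked article covers the full configuration.

sudo sysctl -w net.ipv4.ip_forward=1                                      # enable IP forwarding immediately
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-router.conf    # persist forwarding across reboots
sudo firewall-cmd --permanent --add-masquerade                            # NAT outbound traffic (only needed if instances should reach the internet)
sudo firewall-cmd --reload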

The resulting environment will look like the diagram below:

[Diagram: controller and compute nodes, each with one NIC on the management network (192.168.1.0/24) and one on the provider network (192.168.2.0/24), with a router connecting the two networks]

Note that the 2nd NIC on each OpenStack node doesn't have an IP address assigned. OpenStack will orchestrate the IP addressing for the provider network at runtime.
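
For illustration, an interface left unaddressed for the provider network typically ends up with an ifcfg file along these lines. The device name eth1 here is just an example; the script writes the actual files based on the NICs it finds.

# /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none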

Steps for Deployment

First of course, you'll need to build yourself the two CentOS 7 hosts mentioned above. I won't get into how that's done, but a basic OS installation on virtual or physical servers that meet or exceed the requirements above will suffice.

Download the Script onto the Servers

You can view the script at: http://pastebin.com/raw/1qfj2FQp. To download the file from the Linux command line and make it executable, type the following commands:

curl -O http://pastebin.com/raw/1qfj2FQp            # download the raw script from pastebin
tr -d '\r' < 1qfj2FQp > openstack-deploy.py          # strip carriage returns from the downloaded file
chmod +x openstack-deploy.py                         # make the script executable

Now a quick check that the script is ready to run:

# ./openstack-deploy.py -h
usage: openstack-deploy.py [-h] {hostconfig,controller,compute} ...

positional arguments:
  {hostconfig,controller,compute}

optional arguments:
  -h, --help            show this help message and exit

As you can see, the script has three positional arguments: hostconfig, controller and compute. We will start by using hostconfig, which will set up the networking on the host.

Configure Networking on the Controller Node

To configure networking on the controller node, we’ll need to provide the network information for the management network, including hostname, IP address, subnet mask, default gateway, and two DNS server addresses. In my example, my controller will use the following:

  hostname: controller
IP Address: 192.168.1.79
   Netmask: 255.255.255.0
   Gateway: 192.168.1.1
     DNS 1: 192.168.1.253
     DNS 2: 192.168.1.252

We can configure the network using the script like so:

sudo ./openstack-deploy.py hostconfig -n controller -i 192.168.1.79 \
  -m 255.255.255.0 -g 192.168.1.1 -d 192.168.1.253 -d2 192.168.1.252

The script will configure the network interfaces and reboot the host.
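
Once the node comes back up, it's worth a quick sanity check that the hostname and management address took effect before moving on, for example:

hostname                 # should report the name you passed with -n
ip addr show             # the first NIC should carry the management IP
ping -c 3 192.168.1.1    # confirm the default gateway is reachable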

Configure Networking on the Compute Node

In my example, my compute node will use the following:

  hostname: compute1
IP Address: 192.168.1.80
   Netmask: 255.255.255.0
   Gateway: 192.168.1.1
     DNS 1: 192.168.1.253
     DNS 2: 192.168.1.252

We can configure the network using the script like so:

sudo ./openstack-deploy.py hostconfig -n compute1 -i 192.168.1.80 \
  -m 255.255.255.0 -g 192.168.1.1 -d 192.168.1.253 -d2 192.168.1.252

The script will configure the network interfaces and reboot the host.

Deploying OpenStack on the Controller Node

Next we will deploy the OpenStack controller components onto the controller node. For this, ssh into the controller node and run the script, providing the admin password, service password, and demo password, like so:

sudo ./openstack-deploy.py controller -a adminpasswd \
  -s servicepasswd -d demopasswd

The script will install and configure the necessary components. This will take a few minutes. When the script has finished, you should be able to point your browser at the controller's IP address to launch the dashboard. In my case, the URL would be http://192.168.1.79/dashboard.
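
Before moving on, you can also confirm that the supporting services came up. The service names below are the stock ones for a Newton install on CentOS 7; your output may vary slightly:

systemctl status mariadb rabbitmq-server memcached httpd
openstack-service status    # only if the openstack-utils package is installed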

Deploying OpenStack on the Compute Node

Next we will deploy the OpenStack compute components onto the compute node. For this, we ssh into the compute node and run the script. We must provide the controller IP address, admin password, and service password, like so:

sudo ./openstack-deploy.py compute -c controllerip -a adminpasswd \
  -s servicepasswd

The script will install and configure the components. When the script has finished, there are a few more things we need to do before we can deploy an instance.

Authentication for Command Line Tools

One of the things the script does is create a client environment file named adminrc in the directory from which the script was executed. To enable authenticated access for the OpenStack command line tools, log onto the controller via ssh and simply source this file. This sets environment variables for the life of the shell session.

source adminrc
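
For reference, an admin environment file for a Newton-era deployment typically looks something like the following. The exact contents depend on the passwords you supplied and your controller's IP, so check the adminrc the script actually wrote rather than copying this verbatim:

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminpasswd
export OS_AUTH_URL=http://192.168.1.79:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Once the variables are set, running openstack token issue is a quick way to confirm that authentication against Keystone is working.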

Deploying a Glance Image

I'm a fan of the Debian OpenStack image: lean and mean. With an ssh session open to the controller node, we can deploy the Debian image into our Glance service.

wget http://cdimage.debian.org/cdimage/openstack/archive/8.6.2/debian-8.6.2-openstack-amd64.qcow2

openstack image create "debian" --file debian-8.6.2-openstack-amd64.qcow2 \
  --disk-format qcow2 --container-format bare --public

Note that by the time you read this, the current version of Debian may have changed. Go to http://cdimage.debian.org/cdimage/openstack/current/ to see what the latest version is.
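
Once the upload completes, the new image should appear with an active status:

openstack image list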

Set Up the Network

The script has set up the Neutron networking service to support a flat provider network configuration. We now need to create that network, as well as the subnet that our instances will use. In my example, I'm going to use the 192.168.2.0/24 subnet for my instances, which we can set up like so:

neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat mynet

neutron subnet-create --name mysubnet \
  --allocation-pool start=192.168.2.100,end=192.168.2.200 \
  --dns-nameserver 192.168.1.253 --gateway 192.168.2.254 \
  mynet 192.168.2.0/24

Here I've allocated a pool of addresses (192.168.2.100 through 192.168.2.200) to be consumed by instances on the 192.168.2.0/24 network. I've also provided the default gateway and a DNS server address.
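
You can confirm the network and subnet were created before going further. The neutron CLI is used here to match the commands above, though openstack network list works on Newton as well:

neutron net-list
neutron subnet-list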

Enable ssh Access to Your Instances

The following command will add a rule to the default security group to allow port 22 (ssh) inbound to the instances.

openstack security group rule create --proto tcp --dst-port 22 default
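
If you'd also like to be able to ping your instances, an ICMP rule can be added to the default security group in the same way (optional; nothing later in this walkthrough depends on it):

openstack security group rule create --proto icmp default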

Import Your ssh Key

The next command will import your ssh key:

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
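
If you don't already have a key pair on the controller, generate one first, then confirm the key was imported:

ssh-keygen -q -N ""        # only needed if ~/.ssh/id_rsa doesn't already exist
openstack keypair list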

Create a Flavor 

A flavor is a set of sizing parameters for an instance. The default flavors tend to be either undersized or oversized for a small test environment, so I like to create a flavor that is sized just right, using this command:

openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 2 m1.myflavor
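
A quick listing confirms the new flavor exists alongside the defaults and shows its parameters:

openstack flavor list
openstack flavor show m1.myflavor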

Finally! Create an Instance

Let’s create an instance called instance1 using our new flavor, our debian image, our default security group, and our ssh key:

openstack server create --flavor m1.myflavor --image debian \
  --security-group default --key-name mykey instance1

It will take a few moments for the instance to boot, after which you can discover its IP address by looking at the instance list:

openstack server list
+--------------------------------------+-----------+--------+---------------------+
| ID                                   | Name      | Status | Networks            |
+--------------------------------------+-----------+--------+---------------------+
| 1bb1eebd-7c31-492d-aa44-da869d5cfdbf | instance1 | ACTIVE | mynet=192.168.2.101 |
+--------------------------------------+-----------+--------+---------------------+

and after we’ve waited a minute or two for the instance to finish initializing, we can ssh into the instance:

bash:~$ ssh debian@192.168.2.101
The authenticity of host '192.168.2.101 (192.168.2.101)' can't be established.
ECDSA key fingerprint is SHA256:nJnn827v7QYnSpne90KsZJTnLQF55kHzsQTKmIQ1F+U.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.101' (ECDSA) to the list of known hosts.

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
debian@instance1:~$

Source Code for the Script

The full script can be viewed and downloaded at http://pastebin.com/raw/1qfj2FQp (the same URL used in the download step above).

15 thoughts on “OpenStack Newton – Quick Install – CentOS 7”

  1. martin dumont

    What if my NICs aren't what your script expects?
    [root@node-4 network-scripts]# ls /sys/class/net
    eno1 eno2 eno3 eno4 enp6s0 enp6s0.16 lo virbr0 virbr0-nic

    The only valid NICs are enp6s0 and enp6s0.16.

    Thanks a lot for your wonderful script. If I can get past this step, it would be awesome.

  2. Brian Seltzer Post author

    Hi Martin, it shouldn't matter what your NICs are named. The script (on line 42) enumerates the NICs found in /sys/class/net, so whatever it finds, it uses.

  3. Raj

    Hi Brian,

    I'm facing a small error; if you can help, I would appreciate it. I'm running CentOS 7.3.

    sudo ./openstack-deploy.py compute -c controllerip -a adminpasswd -s servicepasswd

    Traceback (most recent call last):
      File "./openstack-deploy.py", line 287, in <module>
        conn = pymysql.connect(host='localhost', port=3306, user='root', password = '')
      File "/usr/lib/python2.7/site-packages/pymysql/__init__.py", line 90, in Connect
        return Connection(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 694, in __init__
        self.connect()
      File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 916, in connect
        self._request_authentication()
      File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1124, in _request_authentication
        auth_packet = self._read_packet()
      File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 991, in _read_packet
        packet.check_error()
      File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check_error
        err.raise_mysql_exception(self._data)
      File "/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_exception
        raise errorclass(errno, errval)
    pymysql.err.OperationalError: (1045, u"Access denied for user 'root'@'localhost' (using password: NO)")

        1. Raj

          The command which I ran was ./openstack-deploy.py controller -a adminpasswd \
          -s servicepasswd -d demopasswd

  4. Raj

    Hi Brian,

    I have successfully completed installation and while launching instance, I’m facing a problem.

    The instance is stuck at "probing edd (edd=off to disable)... ok" and not moving ahead. I tried the suggested fix of adding edd=off to the end of the kernel line, but it's still not moving ahead.

    Kindly help.

        1. Brian Seltzer Post author

          I just tried that image and it worked fine for me. However I did notice that the image requires an 8GB root volume, which is rather large. My initial attempt failed when I tried to make the root volume 2GB.

          1. Raj

            I'm trying a flavor with 2 CPUs, 4 GB RAM, and a 128 GB disk, but it still fails.

          2. Raj

            Hi Brian,

            I'm able to boot the instance, but it's not getting an IP address. DHCP is assigning an IP to the instance, but eth0 has no IP allocated.
            -----------
            Ref:

            localhost login: [ 305.251735] cloud-init[715]: Cloud-init v. 0.7.5 running 'init' at Thu, 25 May 2017 22:54:20 +0000. Up 305.17 seconds.
            [ 305.287394] cloud-init[715]: ci-info: +++++++++++++++++++++++Net device info+++++++++++++++++++++++
            [ 305.287720] cloud-init[715]: ci-info: +--------+------+-----------+-----------+-------------------+
            [ 305.288385] cloud-init[715]: ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
            [ 305.288860] cloud-init[715]: ci-info: +--------+------+-----------+-----------+-------------------+
            [ 305.289510] cloud-init[715]: ci-info: |  lo:   | True | 127.0.0.1 | 255.0.0.0 |         .         |
            [ 305.289982] cloud-init[715]: ci-info: | eth0:  | True |     .     |     .     | fa:16:3e:b4:7d:6f |
            [ 305.290483] cloud-init[715]: ci-info: +--------+------+-----------+-----------+-------------------+
            [ 305.290948] cloud-init[715]: ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
            [ 305.539236] cloud-init[715]: 2017-05-25 22:54:20,463 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: unexpected error ['NoneType' object has no attribute 'status_code']

          3. Brian Seltzer Post author

            Take a look at the logs located in /var/log/nova on both the compute node and the controller node. Sometimes it’s necessary to restart the nova metadata api service.
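
            For example, on a stock Newton RDO install the relevant restarts would look something like this (service names assume the standard packages; adjust for your setup):

            sudo systemctl restart openstack-nova-api.service neutron-metadata-agent.service    # on the controller
            sudo systemctl restart openstack-nova-compute.service                               # on the compute node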
