OpenStack Juno Scripted Installation on CentOS 7

November 9, 2014

Note: there are newer versions of this article: OpenStack Newton – Quick Install on CentOS 7 and Scripted Installation of OpenStack Mitaka

This article shows how to install OpenStack Juno on CentOS 7 with a couple of scripts. If you’ve manually installed OpenStack by following the official documentation, you know that there are many steps: adding repositories, installing services, modifying configuration files. It can take all day just to get the core services up and running. Tiring and tedious. After many test installations, I decided to write a few scripts to build the basic environment.

These scripts were built to deploy OpenStack on VMware virtual machines, but they should also work on physical machines. The scripts are basically a list of commands – no elaborate error checking. However, by keeping the scripts simple, they’re easier to read and understand.

The goal of the scripts is to deploy a single OpenStack controller, which runs the core controller services: MariaDB, RabbitMQ, Keystone, Glance, Nova, Cinder, and the Horizon dashboard; and to deploy one or more compute nodes, which run nova-compute, nova-network, nova-metadata-api, and cinder-volume. These core services are enough to start deploying instances.

Note: this article uses the nova-network networking component which is simple and performs well. If you want to deploy the more complex neutron networking component, see my new article: OpenStack Juno Scripted Install with Neutron on CentOS 7.

The controller requires one network interface (NIC), connected to the management network. Compute nodes require two NICs, one connected to the management network, and the other connected to another subnet that we’ll use for OpenStack instances (virtual machines). The compute node also needs a second hard disk to be used for cinder (/dev/sdb). The resulting configuration will look something like this.

[Diagram: a single controller with one NIC on the management network, plus one or more compute nodes with two NICs (management and VM networks) and a second disk (/dev/sdb) for cinder.]

So to get started, we need two 64-bit CentOS 7 machines. The basic text-only installation is all we need; I won’t go into detail here. As stated above, the controller needs one NIC. The compute node needs two NICs and an extra hard disk for cinder.

Once these machines are built, we need to copy the scripts (shown below) to the machines, then we’ll customize the configuration and run the scripts.

Building the Controller

The first file is simply called config. This file will contain the specifics of the stack, and the IP configuration info for the node we are building.


#stack information
CONTROLLER_IP=192.168.1.237
ADMIN_TOKEN=ADMIN123
SERVICE_PWD=Service123
ADMIN_PWD=password

#this host IP info
THISHOST_NAME=juno-controller
THISHOST_IP=192.168.1.237
THISHOST_NETMASK=255.255.255.0
THISHOST_GATEWAY=192.168.1.1
THISHOST_DNS=192.168.1.1

Save this file on the controller (in your home directory) and give it the filename: config

Adjust the information accordingly. I’m using the IP address 192.168.1.237 for my controller. So both CONTROLLER_IP and THISHOST_IP are set to that address. I’ve also set the hostname, netmask, gateway, and DNS address accordingly. I’m also defining the admin token to be used in Keystone, the password to be used by the OpenStack services, and the password to be used by the OpenStack admin user.

This next script simply writes the IP information from the config file to the ifcfg file for the primary NIC. Note that the script assumes that the primary NIC has an interface index of 2 (the local loopback adapter is 1). If you’re building a VM, this assumption is probably correct. However, if your machine has multiple NICs, or you’re using aliases or VLANs, your interface indexes may be different. You should inspect the NICs that your system sees by listing /sys/class/net and checking each NIC’s ifindex, then adjust the script accordingly.
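
For example, a quick way to list each interface name along with its kernel ifindex:

#list each NIC with its ifindex (a quick check before running the script)
for n in /sys/class/net/*; do
    echo "$(basename $n): ifindex=$(cat $n/ifindex)"
done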


#!/bin/bash

#get config info
source config

#enumerate NICs
for i in $(ls /sys/class/net); do
    NIC=$i
    MY_MAC=$(cat /sys/class/net/$i/address)
    if [ "$(cat /sys/class/net/$i/ifindex)" == '2' ]; then
        #setup the IP configuration for 1st NIC
        sed -i.bak "s/dhcp/none/g" /etc/sysconfig/network-scripts/ifcfg-$NIC
        sed -i "s/HWADDR/#HWADDR/g" /etc/sysconfig/network-scripts/ifcfg-$NIC
        sed -i "/#HWADDR/a HWADDR=\"$MY_MAC\"" /etc/sysconfig/network-scripts/ifcfg-$NIC
        sed -i "s/UUID/#UUID/g" /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "IPADDR=\"$THISHOST_IP\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "NETMASK=\"$THISHOST_NETMASK\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "GATEWAY=\"$THISHOST_GATEWAY\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "DNS1=\"$THISHOST_DNS\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        #move the sed backup out of network-scripts so the network service won't parse it
        mv /etc/sysconfig/network-scripts/ifcfg-$NIC.bak .
    fi
    if [ "$(cat /sys/class/net/$i/ifindex)" == '3' ]; then
        #create config file for 2nd NIC
        echo "HWADDR=\"$MY_MAC\"" > /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "TYPE=\"Ethernet\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "BOOTPROTO=\"none\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "IPV4_FAILURE_FATAL=\"no\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "NAME=\"$NIC\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
        echo "ONBOOT=\"yes\"" >> /etc/sysconfig/network-scripts/ifcfg-$NIC
    fi        
done

#setup hostname
echo "$THISHOST_NAME" > /etc/hostname
echo "$THISHOST_IP    $THISHOST_NAME" >> /etc/hosts

reboot

Save this file as ipconfig.sh and grant it execute permissions (chmod +x ipconfig.sh). Notice that if the system has a second NIC (ifindex = 3), the script will create a basic ifcfg file for that NIC, with no IP address. This won’t do anything on our controller (which has only one NIC), but it will be used when we build the compute node. When you run this script, the server will reboot and come up with the new IP address.


sudo ./ipconfig.sh

Now, we install and configure the OpenStack controller services. In the middle of this next script, the mysql_secure_installation routine will be called. You will be prompted for a password. The initial password is blank (just press enter). Then you can enter a new MySQL root password of your choosing and select the security defaults that you desire. Then the script will continue. Here’s the script:
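
If mysql_secure_installation complains that it can’t connect to the local MySQL server through the socket, MariaDB probably failed to start. A quick check you can run in a second terminal while the script waits at the prompt:

systemctl status mariadb.service
journalctl -u mariadb.service -n 50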


#!/bin/bash

#get the configuration info
source config

#install ntp
yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd.service

#openstack repos
yum -y install yum-plugin-priorities
yum -y install epel-release
yum -y install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum -y upgrade
#yum -y install openstack-selinux

#loosen things up
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/enforcing/disabled/g' /etc/selinux/config
echo 0 > /sys/fs/selinux/enforce

#install database server
yum -y install mariadb mariadb-server MySQL-python

#edit /etc/my.cnf
sed -i.bak "10i\\
bind-address = $CONTROLLER_IP\n\
default-storage-engine = innodb\n\
innodb_file_per_table\n\
collation-server = utf8_general_ci\n\
init-connect = 'SET NAMES utf8'\n\
character-set-server = utf8\n\
" /etc/my.cnf

#start database server
systemctl enable mariadb.service
systemctl start mariadb.service

echo 'now run through the mysql_secure_installation'
mysql_secure_installation

#create databases
echo 'Enter the new MySQL root password'
mysql -u root -p <<EOF
CREATE DATABASE nova;
CREATE DATABASE cinder;
CREATE DATABASE glance;
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$SERVICE_PWD';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$SERVICE_PWD';
FLUSH PRIVILEGES;
EOF

#install messaging service
yum -y install rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

#install keystone
yum -y install openstack-keystone python-keystoneclient

#edit /etc/keystone.conf
sed -i.bak "s/#admin_token=ADMIN/admin_token=$ADMIN_TOKEN/g" /etc/keystone/keystone.conf

sed -i "/\[database\]/a \
connection = mysql://keystone:$SERVICE_PWD@$CONTROLLER_IP/keystone" /etc/keystone/keystone.conf

sed -i "/\[token\]/a \
provider = keystone.token.providers.uuid.Provider\n\
driver = keystone.token.persistence.backends.sql.Token\n" /etc/keystone/keystone.conf

#finish keystone setup
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
su -s /bin/sh -c "keystone-manage db_sync" keystone

#start keystone
systemctl enable openstack-keystone.service
systemctl start openstack-keystone.service

#schedule token purge
(crontab -l -u keystone 2>&1 | grep -q token_flush) || \
  echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
  >> /var/spool/cron/keystone
  
#create users and tenants
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://$CONTROLLER_IP:35357/v2.0
keystone tenant-create --name admin --description "Admin Tenant"
keystone user-create --name admin --pass $ADMIN_PWD
keystone role-create --name admin
keystone user-role-add --tenant admin --user admin --role admin
keystone role-create --name _member_
keystone user-role-add --tenant admin --user admin --role _member_
keystone tenant-create --name demo --description "Demo Tenant"
keystone user-create --name demo --pass password
keystone user-role-add --tenant demo --user demo --role _member_
keystone tenant-create --name service --description "Service Tenant"
keystone service-create --name keystone --type identity \
  --description "OpenStack Identity"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ identity / {print $2}') \
  --publicurl http://$CONTROLLER_IP:5000/v2.0 \
  --internalurl http://$CONTROLLER_IP:5000/v2.0 \
  --adminurl http://$CONTROLLER_IP:35357/v2.0 \
  --region regionOne
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

#create credentials file
echo "export OS_TENANT_NAME=admin" > creds
echo "export OS_USERNAME=admin" >> creds
echo "export OS_PASSWORD=$ADMIN_PWD" >> creds
echo "export OS_AUTH_URL=http://$CONTROLLER_IP:35357/v2.0" >> creds
source creds

#create keystone entries for glance
keystone user-create --name glance --pass $SERVICE_PWD
keystone user-role-add --user glance --tenant service --role admin
keystone service-create --name glance --type image \
  --description "OpenStack Image Service"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ image / {print $2}') \
  --publicurl http://$CONTROLLER_IP:9292 \
  --internalurl http://$CONTROLLER_IP:9292 \
  --adminurl http://$CONTROLLER_IP:9292 \
  --region regionOne

#install glance
yum -y install openstack-glance python-glanceclient

#edit /etc/glance/glance-api.conf
sed -i.bak "/\[database\]/a \
connection = mysql://glance:$SERVICE_PWD@$CONTROLLER_IP/glance" /etc/glance/glance-api.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = glance\n\
admin_password = $SERVICE_PWD" /etc/glance/glance-api.conf

sed -i "/\[paste_deploy\]/a \
flavor = keystone" /etc/glance/glance-api.conf

sed -i "/\[glance_store\]/a \
default_store = file\n\
filesystem_store_datadir = /var/lib/glance/images/" /etc/glance/glance-api.conf

#edit /etc/glance/glance-registry.conf
sed -i.bak "/\[database\]/a \
connection = mysql://glance:$SERVICE_PWD@$CONTROLLER_IP/glance" /etc/glance/glance-registry.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = glance\n\
admin_password = $SERVICE_PWD" /etc/glance/glance-registry.conf

sed -i "/\[paste_deploy\]/a \
flavor = keystone" /etc/glance/glance-registry.conf

#start glance
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

#upload the cirros image to glance
yum -y install wget
wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --is-public True --progress
  
#create the keystone entries for nova
keystone user-create --name nova --pass $SERVICE_PWD
keystone user-role-add --user nova --tenant service --role admin
keystone service-create --name nova --type compute \
  --description "OpenStack Compute"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ compute / {print $2}') \
  --publicurl http://$CONTROLLER_IP:8774/v2/%\(tenant_id\)s \
  --internalurl http://$CONTROLLER_IP:8774/v2/%\(tenant_id\)s \
  --adminurl http://$CONTROLLER_IP:8774/v2/%\(tenant_id\)s \
  --region regionOne

#install the nova controller components
yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler \
  python-novaclient

#edit /etc/nova/nova.conf
sed -i.bak "/\[database\]/a \
connection = mysql://nova:$SERVICE_PWD@$CONTROLLER_IP/nova" /etc/nova/nova.conf

sed -i "/\[DEFAULT\]/a \
rpc_backend = rabbit\n\
rabbit_host = $CONTROLLER_IP\n\
auth_strategy = keystone\n\
my_ip = $CONTROLLER_IP\n\
vncserver_listen = $CONTROLLER_IP\n\
vncserver_proxyclient_address = $CONTROLLER_IP\n\
network_api_class = nova.network.api.API\n\
security_group_api = nova" /etc/nova/nova.conf

sed -i "/\[keystone_authtoken\]/i \
[database]\nconnection = mysql://nova:Service123@$CONTROLLER_IP/nova" /etc/nova/nova.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = nova\n\
admin_password = $SERVICE_PWD" /etc/nova/nova.conf

sed -i "/\[glance\]/a host = $CONTROLLER_IP" /etc/nova/nova.conf

#start nova
su -s /bin/sh -c "nova-manage db sync" nova

systemctl enable openstack-nova-api.service openstack-nova-cert.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

#install dashboard
yum -y install openstack-dashboard httpd mod_wsgi memcached python-memcached

#edit /etc/openstack-dashboard/local_settings
sed -i.bak "s/ALLOWED_HOSTS = \['horizon.example.com', 'localhost'\]/ALLOWED_HOSTS = ['*']/" /etc/openstack-dashboard/local_settings
sed -i 's/OPENSTACK_HOST = "127.0.0.1"/OPENSTACK_HOST = "'"$CONTROLLER_IP"'"/' /etc/openstack-dashboard/local_settings

#start dashboard
setsebool -P httpd_can_network_connect on
chown -R apache:apache /usr/share/openstack-dashboard/static
systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service

#create keystone entries for cinder
keystone user-create --name cinder --pass $SERVICE_PWD
keystone user-role-add --user cinder --tenant service --role admin
keystone service-create --name cinder --type volume \
  --description "OpenStack Block Storage"
keystone service-create --name cinderv2 --type volumev2 \
  --description "OpenStack Block Storage"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volume / {print $2}') \
  --publicurl http://$CONTROLLER_IP:8776/v1/%\(tenant_id\)s \
  --internalurl http://$CONTROLLER_IP:8776/v1/%\(tenant_id\)s \
  --adminurl http://$CONTROLLER_IP:8776/v1/%\(tenant_id\)s \
  --region regionOne
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
  --publicurl http://$CONTROLLER_IP:8776/v2/%\(tenant_id\)s \
  --internalurl http://$CONTROLLER_IP:8776/v2/%\(tenant_id\)s \
  --adminurl http://$CONTROLLER_IP:8776/v2/%\(tenant_id\)s \
  --region regionOne

#install cinder controller
yum -y install openstack-cinder python-cinderclient python-oslo-db

#edit /etc/cinder/cinder.conf
sed -i.bak "/\[database\]/a connection = mysql://cinder:$SERVICE_PWD@$CONTROLLER_IP/cinder" /etc/cinder/cinder.conf

sed -i "/\[DEFAULT\]/a \
rpc_backend = rabbit\n\
rabbit_host = $CONTROLLER_IP\n\
auth_strategy = keystone\n\
my_ip = $CONTROLLER_IP" /etc/cinder/cinder.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = cinder\n\
admin_password = $SERVICE_PWD" /etc/cinder/cinder.conf

#start cinder controller
su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

That’s a lot of code, I know, but it’s fairly straightforward, and each section is commented to describe what it’s doing. Save the file as controller-node.sh, grant it execute permissions (chmod +x controller-node.sh), and run the script like so:


sudo ./controller-node.sh

Your controller should now be ready to go.
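
To confirm that everything registered correctly, source the creds file that the script wrote and run a few read-only commands (a quick sanity check; your output will vary):

source creds
keystone service-list     #should list identity, image, compute, volume, and volumev2
nova service-list         #the controller services should report as 'up'
glance image-list         #should list the cirros image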

Building the Compute Node

Again, our compute node should have two NICs and two hard disks. The second hard disk will be used for the cinder-volumes LVM volume group. If you have a different hard disk configuration, you can modify the script to deploy the cinder-volumes volume group onto a different location.
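
If you don’t have a spare disk, one workaround (not part of these scripts, and note that loop devices don’t persist across reboots without extra setup) is to back the cinder-volumes volume group with a file, then skip the pvcreate/vgcreate lines in the compute script:

#hypothetical alternative to /dev/sdb: a 20 GB file-backed volume group
dd if=/dev/zero of=/var/lib/cinder-volumes.img bs=1M count=20480
LOOP=$(losetup -f)                         #find a free loop device
losetup $LOOP /var/lib/cinder-volumes.img
pvcreate $LOOP
vgcreate cinder-volumes $LOOP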

As before, the first thing we need to do is modify the config file. In this case, the stack information remains the same as before, but we modify the IP address and hostname for this host as shown here.


#stack information
CONTROLLER_IP=192.168.1.237
ADMIN_TOKEN=ADMIN123
SERVICE_PWD=Service123
ADMIN_PWD=password

#this host IP info
THISHOST_NAME=juno-compute
THISHOST_IP=192.168.1.236
THISHOST_NETMASK=255.255.255.0
THISHOST_GATEWAY=192.168.1.1
THISHOST_DNS=192.168.1.1

Again, save this with the filename: config

Next, run the ipconfig.sh script on the compute node. In this case, the ifcfg files for both NICs will be modified. The second NIC will not receive an IP address. Instead, OpenStack will create a virtual bridge and connect it to the second NIC. When instances (VMs) are created, they will be connected to this bridge.

After the IP configuration is set and the system has been rebooted, we can run the script that installs the OpenStack compute node services.


#!/bin/bash

source config

#install ntp
yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd.service

#openstack repos
yum -y install yum-plugin-priorities
yum -y install epel-release
yum -y install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
yum -y upgrade
#yum -y install openstack-selinux

#loosen things up
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/enforcing/disabled/g' /etc/selinux/config
echo 0 > /sys/fs/selinux/enforce

#get name of 2nd NIC
for i in $(ls /sys/class/net); do
    if [ "$(cat /sys/class/net/$i/ifindex)" == '3' ]; then
        NIC=$i
        MY_MAC=$(cat /sys/class/net/$i/address)
        echo "$i ($MY_MAC)"
    fi
done

#nova compute
yum -y install openstack-nova-compute sysfsutils libvirt-daemon-config-nwfilter

sed -i.bak "/\[DEFAULT\]/a \
rpc_backend = rabbit\n\
rabbit_host = $CONTROLLER_IP\n\
auth_strategy = keystone\n\
my_ip = $THISHOST_IP\n\
vnc_enabled = True\n\
vncserver_listen = 0.0.0.0\n\
vncserver_proxyclient_address = $THISHOST_IP\n\
novncproxy_base_url = http://$CONTROLLER_IP:6080/vnc_auto.html" /etc/nova/nova.conf

sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = nova\n\
admin_password = $SERVICE_PWD" /etc/nova/nova.conf

sed -i "/\[glance\]/a host = $CONTROLLER_IP" /etc/nova/nova.conf

#if compute node is virtual - change virt_type to qemu
if [ $(egrep -c '(vmx|svm)' /proc/cpuinfo) == "0" ]; then
    sed -i '/\[libvirt\]/a virt_type = qemu' /etc/nova/nova.conf
fi

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service
systemctl start openstack-nova-compute.service

yum -y install openstack-nova-network openstack-nova-api

sed -i "/\[DEFAULT\]/a \
network_api_class = nova.network.api.API\n\
security_group_api = nova\n\
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver\n\
network_manager = nova.network.manager.FlatDHCPManager\n\
network_size = 254\n\
allow_same_net_traffic = True\n\
multi_host = True\n\
send_arp_for_ha = True\n\
share_dhcp_address = True\n\
force_dhcp_release = True\n\
flat_network_bridge = br100\n\
flat_interface = $NIC\n\
public_interface = $NIC" /etc/nova/nova.conf

systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service
systemctl start openstack-nova-network.service openstack-nova-metadata-api.service

#cinder storage node
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

yum -y install openstack-cinder targetcli python-oslo-db MySQL-python

sed -i.bak "/\[database\]/a connection = mysql://cinder:$SERVICE_PWD@$CONTROLLER_IP/cinder" /etc/cinder/cinder.conf
sed -i '0,/\[DEFAULT\]/s//\[DEFAULT\]\
rpc_backend = rabbit\
rabbit_host = '"$CONTROLLER_IP"'\
auth_strategy = keystone\
my_ip = '"$THISHOST_IP"'\
iscsi_helper = lioadm/' /etc/cinder/cinder.conf
sed -i "/\[keystone_authtoken\]/a \
auth_uri = http://$CONTROLLER_IP:5000/v2.0\n\
identity_uri = http://$CONTROLLER_IP:35357\n\
admin_tenant_name = service\n\
admin_user = cinder\n\
admin_password = $SERVICE_PWD" /etc/cinder/cinder.conf

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

echo 'export OS_TENANT_NAME=admin' > creds
echo 'export OS_USERNAME=admin' >> creds
echo 'export OS_PASSWORD='"$ADMIN_PWD" >> creds
echo 'export OS_AUTH_URL=http://'"$CONTROLLER_IP"':35357/v2.0' >> creds
source creds

Save this file as compute-node.sh and grant it execute permissions (chmod +x compute-node.sh). Then run it:


sudo ./compute-node.sh

Notice that the script created a file called creds. Source this file before running any OpenStack command line utilities, as shown below.

Creating the VM Network

Finally, we need to create an OpenStack network object for the instances to connect to. In my case, I’m using the subnet 192.168.2.0/24. My router address on that subnet is 192.168.2.254.


source creds

nova network-create demo-net --bridge br100 --multi-host T \
  --fixed-range-v4 192.168.2.0/24 --gateway 192.168.2.254 \
  --dhcp-server 192.168.2.1 --dns1 192.168.1.1

Now reboot the compute node and you should be ready to deploy instances. Enjoy!
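
As a quick end-to-end test (using the cirros image uploaded earlier and the demo-net network we just created), boot a tiny instance and watch it come up:

source creds
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 test1
nova list              #status should go from BUILD to ACTIVE
ip link show br100     #the bridge appears once nova-network wires it up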


51 thoughts on “OpenStack Juno Scripted Installation on CentOS 7”

  1. Brad Laue

    Better question – feel free to delete those last two. What does the disk/partition layout look like on the controller and compute node? How much space is set aside for ephemeral disks and glance storage?

    1. Brian Seltzer Post author

      The default disk layout on CentOS allocates most of the primary disk to the root file system (the rest going to a swap partition and a small boot partition). So the size of the disk will dictate how much storage is available. Ephemeral instance storage is located at /var/lib/nova/instances on the compute nodes. Glance images are stored at /var/lib/glance/images on the controller node. In either case, you could provision another local disk, SAN or NAS storage and mount it to that path, making sure to copy the existing contents to the new storage (while the services are shut down) and resetting the permissions if necessary. However, if I was building the environment to scale, I’d probably look to avoid ephemeral storage and store images and volumes on shared, scalable storage like Ceph.

  2. George

    How is the tenant traffic going to leave the cloud? I see that you’re using nova-network but there is no external attached interface on any node.


    1. Brian Seltzer Post author

      This is a simple as possible design from a networking standpoint. Management, API and Dashboard access are all on the first NIC of the hosts. VM traffic is on the second NIC of the compute nodes. Presumably then, cloud tenants must be able to access both the management and the VM subnets. This is acceptable for a test environment, which is the goal of the article. Surely a secure cloud would have a custom NIC, VLAN and firewall settings.

  3. Brad Laue

    This ties into my next question actually – I didn’t get a br100 interface created when nova-network starts. Does it not appear until I define a network with nova-manage?

  4. Robert Sam

    I am facing an issue with MySQL. When it asked for the root password during the first mysql_secure_installation, I entered blank. It threw this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    To get out of it, I pressed Ctrl+C; then it asked for a new password. I entered mysql123 as the password and then got the same error. How do I fix it?

    1. Brian Seltzer Post author

      If you get an error from MySQL when you press enter at the first password prompt, I would open a new terminal window, SSH into the host again and troubleshoot the issue. In your case it sounds like MySQL didn’t start for some reason.

  5. Robert Sam

    Installed:
    MySQL-python.x86_64 0:1.2.3-11.el7 mariadb.x86_64 1:5.5.37-1.el7_0
    mariadb-server.x86_64 1:5.5.37-1.el7_0

    Complete!
    ln -s '/usr/lib/systemd/system/mariadb.service' '/etc/systemd/system/multi-user.target.wants/mariadb.service'
    Job for mariadb.service failed. See 'systemctl status mariadb.service' and 'journalctl -xn' for details.
    now run through the mysql_secure_installation
    /usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found

    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
    SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

    In order to log into MariaDB to secure it, we’ll need the current
    password for the root user. If you’ve just installed MariaDB, and
    you haven’t set the root password yet, the password will be blank,
    so you should just press enter here.

    Enter current password for root (enter for none):
    ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
    Enter current password for root (enter for none):

  6. Ryan

    How do I add another compute node? I executed the script you gave for deploying the (first) compute node again.
    Now when I launch an image, it goes to spawning status forever…

    1. Brian Seltzer Post author

      Assuming you gave the 2nd compute node a unique hostname and IP address, and used the correct controller IP address in the config file, it should work fine. Take a look at the logs in /var/log/nova on the compute nodes and the controller node to see if you can spot any errors.

  7. Daniel Ruiz

    Hi Brian,

    Would it be possible to “upgrade” these scripts to use neutron instead of nova-network?

    Thanks.

    1. Brian Seltzer Post author

      OK, I’ve published my scripted Juno installation with neutron. Use the site’s nav bar and select OpenStack – Juno, and you’ll find the new version of this article that uses Neutron.

  8. basava

    Hello, I am new to OpenStack and CentOS 7. I have installed OpenStack using Packstack. Now I want to configure the compute and controller nodes on different machines. How should I proceed? Please tell me the procedure.

    1. Brian Seltzer Post author

      Hi Basava, all you need to do is build two new CentOS 7 machines and then follow the instructions in this article; the result will be separate controller and compute nodes. Hope that helps.

  9. 1mike@live.com

    I appreciate your hard work to make other users’ work easier. After completing the whole process, I tried to access the dashboard at http://10.10.10.10:35357/v3/ but received the error below:

    Error report:
    This XML file does not appear to have any style information associated with it. The document tree is shown below.

    Can you please tell me where I made a mistake? As far as I understand, I might have done something wrong in the last section (Creating the VM Network). Can you please explain it in the same way you did above, step by step? I am new to OpenStack with basic knowledge of Linux.
    Thanks for your help.

  10. 1mike@live.com

    Thanks for your advice. It works now and I can access it. I tried to create a CentOS 7 virtual system but it failed every time with an error:

    Error: Failed to launch instance “Centos7”: Please try again later [Error: No valid host was found. ].

    Please advice me.

    1. Brian Seltzer Post author

      That could mean that none of your compute nodes has enough memory or disk space available to deploy the CentOS image. What flavor did you select?

  11. Mike

    Thanks Brian. I already fixed those issues. Now I’m trying to test the SAN storage system. If you have any articles for that, please share the link.

    Your articles are very nice, simple, and they work.

    Regard,
    Mike

  12. Mike

    Hi Mr. Brian,
    I am trying to install the OpenStack server (controller) again on another server in a new environment, using the same script as above, but it always ends with an error. Previously I completed the installation and it worked fine, but the new installation gives me an error. I need your advice because I have tried 10 times and every attempt fails with the error at the link below:
    http://1drv.ms/1Ddy5Rx
    https://onedrive.live.com/redir?resid=AD1E66A14389A8B%21212
    Hope to hear from you soon, because after this installation I have to move on to the production environment.

    Regards,
    MIKE

  13. Mike

    According to your advice, I downloaded several CentOS 7 ISO files from the link below but still have the same error.

    https://onedrive.live.com/redir?resid=AD1E66A14389A8B!570&authkey=!AJDEW1lqVxqni7c&ithint=file%2cpdf

    I am not an expert like you, but I guess it might be an OpenStack package error, because the package reports public key: NO KEY. Please visit the link below to check the installation process; I highlighted the NO KEY lines. Once they show up, the installation fails because it is unable to create keystone. That is my idea.

    https://onedrive.live.com/redir?resid=AD1E66A14389A8B!567&authkey=!AIqz8wBfw9d1FYM&ithint=file%2cdocx

    I need your advice on this, because I have downloaded and installed several CentOS 7 builds and tested OpenStack on them. Unfortunately the results are the same.

    1. Brian Seltzer Post author

      I just fixed the epel-release line. Hope that solves it for you. I just tested the script again and it’s working fine.

      1. Mike

        Now the script is working fine after your update, and I was finally able to install OpenStack. But at the last step I got the error below while creating the VM network.

        ./admin-openrc.sh

        ERROR (ClientException): The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-c731b854-c512-45e4-b754-7548b077e802)

        I uploaded the full script which I use to install on both servers.

        http://1drv.ms/1zx6FEt

        Please advise me how I can fix it. I also want to change demo-net to a company-name net. Can I do that also?

        1. Brian Seltzer Post author

          Hi Mike,

          It looks like your config file for the compute node is pointing to the wrong controller IP address. Your controller IP address is 172.20.1.23 but your config file is set to point to 172.20.1.24. So the compute node probably can’t communicate with the controller at all. The controller IP should remain the same for all servers in your stack.

  14. Mike

    A big thanks to you, Mr. Brian; because of you I successfully installed the OpenStack servers in a production environment.

    After installation everything was working fine, but then I had to restart the servers (controller and 2 compute nodes). Services do not start after that. I am sharing the logs and error image with you. Please help me finish this task.
    http://1drv.ms/1Dw3FtQ

    I have other questions too.

    1. How can I change the admin password?
    2. How can I create new users?

    1. Brian Seltzer Post author

      It looks like your rabbitmq service isn’t running on the controller node. Check the status, on the controller, with this command:

      service rabbitmq-server status

      To manage users, you can use the web dashboard. Update the currently logged on user’s password using the profile menu at the upper right. To manage other users and projects, use the Identity item at left.

      1. MIKE

        Yesterday I had to reinstall OpenStack again because it was not stable at all. After a fresh installation it seems to be working fine, but when I try to launch instances, it gives the errors at the link below.

        http://1drv.ms/1EWVmsy

        Please advise how I can fix it.

        1. Brian Seltzer Post author

          Those errors seem to indicate that glance and cinder can’t be reached. Either those services aren’t running on your controller, or the service endpoints aren’t created in keystone. Recommend that you go to the command line on the controller and try the following commands:

          systemctl list-units | grep openstack
          keystone service-list
          keystone endpoint-list

          1. Brian Seltzer Post author

            Authentication is handled by the keystone service. As I mentioned before, we need to determine the status of the keystone services and endpoints. Also you should look in the keystone log for any errors. Good luck.

    1. Brian Seltzer Post author

      Yeah it looks like Keystone isn’t working right. All of the other services rely on Keystone. I would recommend that you start over with fresh Linux builds. I would not recommend re-running the scripts on servers where you had already run them.

  15. Keith Hui

    Hi Brian,

    Thanks for the script; it saved me a lot of time editing the .conf files and setting up the database. I want to explore adding a storage network on the compute node with an additional NIC so the compute node can access external storage like Ceph. Do you have any suggestions on how to approach that with the basic OpenStack already deployed? Do you have any script to install the Ceph client and the keyring for OpenStack?

    I am going to try your Neutron script next; keep up the good work.

    Keith

    1. Brian Seltzer Post author

      I haven’t had a chance to script the ceph configuration. As for a dedicated storage network, that should be easy enough. Just define a storage network subnet and configure your extra NIC to be part of that subnet, and also place your Ceph servers on that subnet. The routing table of your compute node will automatically include an entry that sends traffic out the new NIC for servers on that subnet, so your Ceph traffic will use that NIC.

  16. Keith Hui

    Thanks again Brian, I will try your recommendation for the storage network setup. Please update me if you feel like doing a ceph deployment script.

    Keith

  17. Mike

    Hi Brian, how can I create 2 controllers by following your scripts above? I need 1 more controller for failover. Please advise me.

  18. Sinethra

    Hi Brian ,

    I need your help with these last commands

    “ nova network-create demo-net --bridge br100 --multi-host T \
    --fixed-range-v4 192.168.2.0/24 --gateway 192.168.2.254 \
    --dhcp-server 192.168.2.1 --dns1 192.168.1.1 ”

    In this, can you tell me whether these are physical networks or networks created inside OpenStack to connect the VMs: --fixed-range-v4 192.168.2.0/24, --gateway 192.168.2.254, --dhcp-server 192.168.2.1

    1. Brian Seltzer Post author

      192.168.2.0/24 is the physical network that NIC 2 of the compute node is connected to. The VMs (instances) will connect to this network, through the linux bridge br100. We have to define this network within OpenStack and select it when we create instances.

      1. Sinethra

        Thank you for replying, but I only have one single network, 192.168.0.0/24… Are there any modifications I should make to those final commands?

          1. Brad Laue

            FWIW, I’ve used Brian’s scripted method to set OpenStack up on a single node with two network cards – one of which is not connected. That second NIC becomes the storage / tunnel network.

            If you’re doing this in a lab environment you can easily use a single switch with no router to provide the fabric over which the tenant/storage networks can send traffic – just note that it will perform more poorly the more virtual machine instances you spin up.

            It does require a second NIC, but not a third – and an Intel gigE network card is pretty cheap these days…

  19. Angel

    Hi Brian

    First of all, thank you for sharing your OpenStack setup with us; it’s been really helpful, especially as I have a similar one at home, and I’m trying to figure something out…

    On your setup (as on mine), you have an external router whose job is providing access from/to the instances, and you don’t use the internal routing dnsmasq provides. But then… I’m curious how you fixed access to the nova-api-metadata service. Did you inject a static route within the instance to reach it?

    I tried tons of things, but none of them worked, and without the metadata server, as you know, you cannot inject the SSH keys.

    1. Brian Seltzer Post author

      In this setup, the nova-metadata-api service is installed on every compute node. When you build and start an instance, it comes up and tries to reach the metadata service at the link-local address 169.254.169.254. Meanwhile, the nova-network service creates an iptables entry to forward requests to this address to the local address/port where the metadata service is running. There’s nothing special you need to do to make this work, other than make sure that nova-metadata-api and nova-network are both running on all of the compute nodes. I’ve seen issues where the instances couldn’t connect to the metadata service, and it can be a challenge to troubleshoot. I’ve found that rebooting the compute node caused problems, and restarting services in the correct order solved the problem. I think restarting the nova-network service may help you, as it will recreate all the necessary iptables entries. Hope that helps.
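
      A quick way to check on a compute node (assuming the standard nova-network setup) is to look for the metadata forwarding rule in iptables:

      iptables -t nat -S | grep 169.254.169.254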

  20. rohit

    Hi Brian

    I need to set up Juno on CentOS, but when I run through the standard install instructions I get the Mitaka version, 13.0.0-1.el7. I think that is because Juno is EOL. Do you know of a way to install Juno on CentOS 7.x?

    thanks in advance

