OpenStack High Availability – Keystone and RabbitMQ

By Brian Seltzer | May 5, 2014

In this article, we’ll build a redundant Keystone identity service and RabbitMQ message queue service for our OpenStack Icehouse controller stack. In the last two articles, we built redundant load balancers using HAProxy and Keepalived, and we built redundant database servers using MySQL and Galera. Now, with those pieces in place, we can start building the OpenStack services that make up the controller stack.

For this step in the process, we’ll build two new Ubuntu 14.04 servers with the following names and IP addresses:

icehouse1 (192.168.1.35)
icehouse2 (192.168.1.36)

We’ll want the two nodes to be able to resolve each other by name, so let’s add each node to the other’s /etc/hosts file like so:

/etc/hosts on icehouse1:

127.0.0.1	localhost
127.0.1.1	icehouse1
192.168.1.36	icehouse2

/etc/hosts on icehouse2:

127.0.0.1	localhost
127.0.1.1	icehouse2
192.168.1.35	icehouse1
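
As a quick sanity check, you can confirm that each node can resolve the other by name:

# From icehouse1
ping -c 1 icehouse2

# From icehouse2
ping -c 1 icehouse1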

RabbitMQ Cluster

Now we can install RabbitMQ on both nodes:

apt-get update
apt-get install ntp rabbitmq-server

After the install has completed, we should stop RabbitMQ on both nodes:

service rabbitmq-server stop

Then we copy the Erlang cookie from node 1 to node 2. You’ll either need to enable root SSH login on node 2 for this step, or copy the file to your home directory on node 2 and then move it into place. To enable root login on node 2, simply type:

passwd root

and enter a new password for root when prompted. Then back on node 1, we can copy the cookie to node 2:

scp /var/lib/rabbitmq/.erlang.cookie root@192.168.1.36:/var/lib/rabbitmq/.erlang.cookie
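
If the ownership or permissions change along the way (for example, if you had to stage the file in a home directory first), make sure the cookie on node 2 ends up owned by the rabbitmq user and readable only by its owner before starting the service; Erlang will refuse to use a cookie that other users can read:

chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie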

Then, we can start both nodes again:

service rabbitmq-server start

Now, we can tell RabbitMQ to form a cluster. On node 2, type the following:

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@icehouse1
rabbitmqctl start_app
rabbitmqctl cluster_status

You should see a response like this, showing two nodes in the cluster:

Cluster status of node rabbit@icehouse2 ...
[{nodes,[{disc,[rabbit@icehouse1,rabbit@icehouse2]}]},
 {running_nodes,[rabbit@icehouse1,rabbit@icehouse2]},
 {partitions,[]}]
...done.
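
Note that clustering by itself only joins the nodes; queues still live on a single node unless you mirror them. If you also want the OpenStack queues mirrored across both servers, a policy along the following lines (using the RabbitMQ 3.x policy syntax that ships with Ubuntu 14.04) can be set from either node:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'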

Keystone Service

Now we can install Keystone on both nodes:

apt-get install keystone python-mysqldb

After the install has completed, we edit /etc/keystone/keystone.conf on both nodes to define the admin_token, the rabbit_hosts, and the database connection:

/etc/keystone/keystone.conf:

[DEFAULT]
admin_token=ADMIN123
...
...
rabbit_hosts=192.168.1.35:5672,192.168.1.36:5672
...
...
[database]
connection = mysql://keystone:Service123@192.168.1.32/keystone
...

Notice that we’re defining rabbit_hosts, not rabbit_host as you would if you only had one RabbitMQ server. Next, we should create our database for Keystone. On any host with the mysql client installed, we can type:

mysql -h 192.168.1.32 -u root -p

mysql> create database keystone;
mysql> grant all on keystone.* to keystone@'%' identified by 'Service123';
mysql> flush privileges;
mysql> exit;

The IP address shown here happens to be our VIP on the load balancer that is pointing to our MySQL nodes (but you knew that since you read the previous two articles, right?). Then, on node 1, we restart Keystone and populate the database tables:

service keystone restart
keystone-manage db_sync
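
To confirm that db_sync created the schema, you can list the Keystone tables through the VIP (a quick check using the keystone database credentials defined above):

mysql -h 192.168.1.32 -u keystone -pService123 keystone -e 'show tables;'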

The keystone install on each node creates a self-signed certificate for signing authentication tokens.  We need these certificates to match so that a token signed on one node will be readable by the other.  Before we start the Keystone service on node 2, we should copy the PKI certificates from node 1 to node 2:

scp -r /etc/keystone/ssl root@192.168.1.36:/etc/keystone
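
Since the copy was made as root, it’s worth making sure the files on node 2 are still owned by the keystone service account (the user the Ubuntu packages run Keystone as) before starting the service:

chown -R keystone:keystone /etc/keystone/ssl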

Then we can restart Keystone on node 2:

service keystone restart

OK, now that we’ve got our two Keystone servers built, we should go back to our HAProxy load balancers and add some configuration settings for Keystone (we built our HAProxy nodes in the article: Redundant Load Balancers – HAProxy and Keepalived).  We need to add a stanza for the admin endpoint as well as the public/internal API endpoint.  Add the following to /etc/haproxy/haproxy.cfg:

/etc/haproxy/haproxy.cfg

listen keystone_admin 192.168.1.32:35357
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server node1 192.168.1.35:35357 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:35357 check inter 2000 rise 2 fall 5

listen keystone_api 192.168.1.32:5000
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server node1 192.168.1.35:5000 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:5000 check inter 2000 rise 2 fall 5

Notice that we’re listening on the virtual IP address (VIP) that we established in the load balancer article (192.168.1.32), and we’re pointing to the IP addresses of our Keystone nodes.  Reload the configuration on both HAProxy nodes:

service haproxy reload
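
Before populating Keystone, you can optionally confirm that the VIP is answering on both ports. The version document is returned without authentication, so a simple curl against each frontend works:

curl http://192.168.1.32:5000/v2.0/
curl http://192.168.1.32:35357/v2.0/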

Now we can access our redundant Keystone instances through the VIP. At this point, we have a redundant but empty Keystone service, so we need to take the usual steps to create users, roles, tenants, and so forth.  I use a script like the following to do this:

keystone_populate.sh

#!/bin/bash

# Modify these variables as needed
ADMIN_PASSWORD=password
SERVICE_PASSWORD=Service123
DEMO_PASSWORD=demo
export OS_SERVICE_TOKEN=ADMIN123
export OS_SERVICE_ENDPOINT="http://192.168.1.32:35357/v2.0"
SERVICE_TENANT_NAME=service
#
MYSQL_USER=keystone
MYSQL_DATABASE=keystone
MYSQL_HOST=localhost
MYSQL_PASSWORD=Service123
#
KEYSTONE_REGION=regionOne
KEYSTONE_HOST=192.168.1.32

# Shortcut function to get a newly generated ID
function get_field() {
    while read data; do
        if [ "$1" -lt 0 ]; then
            field="(\$(NF$1))"
        else
            field="\$$(($1 + 1))"
        fi
        echo "$data" | awk -F'[ \t]*\\|[ \t]*' "{print $field}"
    done
}

# Tenants
ADMIN_TENANT=$(keystone tenant-create --name=admin | grep " id " | get_field 2)
DEMO_TENANT=$(keystone tenant-create --name=demo | grep " id " | get_field 2)
SERVICE_TENANT=$(keystone tenant-create --name=$SERVICE_TENANT_NAME | grep " id " | get_field 2)

# Users
ADMIN_USER=$(keystone user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com | grep " id " | get_field 2)
DEMO_USER=$(keystone user-create --name=demo --pass="$DEMO_PASSWORD" --email=demo@domain.com --tenant-id=$DEMO_TENANT | grep " id " | get_field 2)
NOVA_USER=$(keystone user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com | grep " id " | get_field 2)
GLANCE_USER=$(keystone user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com | grep " id " | get_field 2)
#QUANTUM_USER=$(keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@domain.com | grep " id " | get_field 2)
CINDER_USER=$(keystone user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com | grep " id " | get_field 2)
SWIFT_USER=$(keystone user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=swift@domain.com | grep " id " | get_field 2)

# Roles
ADMIN_ROLE=$(keystone role-create --name=admin | grep " id " | get_field 2)
MEMBER_ROLE=$(keystone role-create --name=Member | grep " id " | get_field 2)

# Add Roles to Users in Tenants
keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE
#keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE
keystone user-role-add --tenant-id $DEMO_TENANT --user-id $DEMO_USER --role-id $MEMBER_ROLE

# Create services
COMPUTE_SERVICE=$(keystone service-create --name nova --type compute --description 'OpenStack Compute Service' | grep " id " | get_field 2)
VOLUME_SERVICE=$(keystone service-create --name cinder --type volume --description 'OpenStack Volume Service' | grep " id " | get_field 2)
OBJECT_SERVICE=$(keystone service-create --name swift --type object-store --description 'OpenStack Object Storage Service' | grep " id " | get_field 2)
IMAGE_SERVICE=$(keystone service-create --name glance --type image --description 'OpenStack Image Service' | grep " id " | get_field 2)
IDENTITY_SERVICE=$(keystone service-create --name keystone --type identity --description 'OpenStack Identity' | grep " id " | get_field 2)
EC2_SERVICE=$(keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service' | grep " id " | get_field 2)
#NETWORK_SERVICE=$(keystone service-create --name quantum --type network --description 'OpenStack Networking service' | grep " id " | get_field 2)

# Create endpoints
keystone endpoint-create --region $KEYSTONE_REGION --service-id $COMPUTE_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$KEYSTONE_HOST"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$KEYSTONE_HOST"':8774/v2/$(tenant_id)s'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $VOLUME_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$KEYSTONE_HOST"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$KEYSTONE_HOST"':8776/v1/$(tenant_id)s'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $OBJECT_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8080/v1/AUTH_%(tenant_id)s' --adminurl 'http://'"$KEYSTONE_HOST"':8080' --internalurl 'http://'"$KEYSTONE_HOST"':8080/v1/AUTH_%(tenant_id)s'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $IMAGE_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':9292' --adminurl 'http://'"$KEYSTONE_HOST"':9292' --internalurl 'http://'"$KEYSTONE_HOST"':9292'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $IDENTITY_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':5000/v2.0' --adminurl 'http://'"$KEYSTONE_HOST"':35357/v2.0' --internalurl 'http://'"$KEYSTONE_HOST"':5000/v2.0'
keystone endpoint-create --region $KEYSTONE_REGION --service-id $EC2_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':8773/services/Cloud' --adminurl 'http://'"$KEYSTONE_HOST"':8773/services/Admin' --internalurl 'http://'"$KEYSTONE_HOST"':8773/services/Cloud'
#keystone endpoint-create --region $KEYSTONE_REGION --service-id $NETWORK_SERVICE --publicurl 'http://'"$KEYSTONE_HOST"':9696/' --adminurl 'http://'"$KEYSTONE_HOST"':9696/' --internalurl 'http://'"$KEYSTONE_HOST"':9696/'
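
Run the script once, from any machine that has the keystone client installed and can reach the VIP; the users, tenants, and roles it creates are written to the shared database, so both Keystone nodes will see them:

chmod +x keystone_populate.sh
./keystone_populate.sh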

Notice that all of the endpoints point to our VIP.  Now we can test.  To make testing easier, we can create a text file with our OpenStack command line environment variables:

credentials

export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.32:35357/v2.0

and then load these with the command:

source credentials

Now we should be able to use the Keystone command line to test our installation.

keystone user-list

should return a list of users.  To test redundancy, you can stop the Keystone service (or reboot) one node at a time and confirm that the command still works; a minimal failover check, run from a client with the credentials file sourced, might look like this:
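
# On icehouse1, stop Keystone to simulate a failure
service keystone stop

# From the client, the request should now be answered by icehouse2 through the VIP
keystone user-list

# Restore the service on icehouse1 when you're done
service keystone start

In our next article, we’ll build redundant Swift and Glance services.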

13 thoughts on “OpenStack High Availability – Keystone and RabbitMQ”

  1. bob

    Hi Brian,
    Should the keystone_populate.sh script be run on both nodes, or is it just populating the DB so both will automatically see it?
    Boris

    1. Brian Seltzer

      Hi Bob. You only need to run it once. Yeah, the users, tenants and roles get written to the DB, so both nodes will see them.

  2. bob

    Hi Brian,
    Again thank you so much for your assistance.
    As far as the credentials file goes, should that be created on both nodes, and should it be the root user or the default admin user that creates and owns it?
    Thanks,
    Bobby

    1. Brian Seltzer

      The credentials file is only needed for you to run the command line utilities (so that you don’t have to enter all those things as arguments to each command). The server doesn’t need it at all. You’ll probably find that you’ll want it on every node that has OpenStack command line tools installed, such as keystone, nova, cinder, and glance servers, but it’s only a convenience for you as an admin.

  3. bob

    Hi Brian,

    When I run keystone user-list on either node I get the error “The request you have made requires authentication. (HTTP 401)” – any idea?
    Looking over the keystone_populate.sh script, I do not see a keystone user being created; is that the cause?

    Bobby

  4. Brian Seltzer

    There is no keystone user. Did you create the credentials file and source it? You have to source it again for every shell session…

    1. bob

      Yes, I did. I also noticed that in the database’s user table, the admin user does not have a project associated with it.

    2. bob

      Got it to work. For some reason I had to set the values below; I thought they were set by the script. The only difference I see is that I did things in a different order. Do I need to set these values in the file or someplace else?
      export OS_SERVICE_ENDPOINT=http://:35357/v2.0
      export OS_SERVICE_TOKEN=ADMIN123

      1. Brian Seltzer

        You shouldn’t need those after the keystone_populate.sh has been run. You should only need the exports in the credentials file. If those don’t successfully authenticate, then I suspect that the script didn’t run completely (maybe you ended up with a typo in your script). Otherwise you might have a typo in your credentials file, or if you are using haproxy, maybe a bug in your haproxy.cfg files. What’s your credentials file look like?

        1. bob

          export OS_USERNAME=admin
          export OS_PASSWORD=password
          export OS_TENANT_NAME=admin
          export OS_AUTH_URL=http://10.1.0.2:35357/v2.0

        2. bob

          The populate script looks like:

          ADMIN_PASSWORD=password
          SERVICE_PASSWORD=keystone_password
          DEMO_PASSWORD=demo_password
          export OS_SERVICE_TOKEN=ADMIN_TOKEN
          export OS_SERVICE_ENDPOINT="http://10.1.0.2:35357/v2.0"
          SERVICE_TENANT_NAME=service

  5. Erik Andersen (Azendale)

    I found that if you use passwords with weird characters in them (for example, randomly generated passwords), you will probably want to enclose them in quotes; otherwise the script won’t run right.

    I think that is the cause of the 401 errors I am now getting trying to set up swift. (When the keystone populate script failed, I fixed it and ran it again, but that seems to have made a mess. So I dropped the keystone database, then redid the section of this article where we created the database and gave permissions, and then the db_sync part again. Then I tried running the fixed script. Still having swift trouble.)

