In this article, we’ll build a redundant Keystone identity service and RabbitMQ message queue service for our OpenStack Icehouse controller stack. In the last two articles, we built redundant load balancers using HAProxy and Keepalived, and we built redundant database servers using MySQL and Galera. Now, with those pieces in place, we can start building the OpenStack services that make up the controller stack.
For this step in the process, we’ll build two new Ubuntu 14.04 servers with the following names and IP addresses:
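    icehouse1    192.168.1.40
    icehouse2    192.168.1.41

(The hostnames are used throughout the rest of this article; the addresses shown are placeholders, so substitute the real IPs of your own nodes.)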
We’ll want the two nodes to be able to resolve each other by name, so let’s add each node to the other’s /etc/hosts file like so:
/etc/hosts on icehouse1:
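    192.168.1.41    icehouse2    # placeholder IP; use your node's real address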
/etc/hosts on icehouse2:
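    192.168.1.40    icehouse1    # placeholder IP; use your node's real address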
Now we can install RabbitMQ on both nodes:
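    apt-get install -y rabbitmq-server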
After the install has completed, we should stop RabbitMQ on both nodes:
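    service rabbitmq-server stop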
Then we copy the Erlang cookie from node 1 to node 2. You may need to enable root SSH login on node 2 for this step; otherwise, you can copy the file to your home directory on node 2 and then move it into place. To enable root login on node 2, simply type:
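    sudo passwd root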
and enter a new password for root when prompted. Then back on node 1, we can copy the cookie to node 2:
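    scp /var/lib/rabbitmq/.erlang.cookie root@icehouse2:/var/lib/rabbitmq/.erlang.cookie
    # make sure the copied cookie is still owned by rabbitmq with mode 400
    ssh root@icehouse2 "chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie; chmod 400 /var/lib/rabbitmq/.erlang.cookie"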
Then, we can start both nodes again:
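    service rabbitmq-server start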
Now, we can tell RabbitMQ to form a cluster. On node 2, type the following:
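    rabbitmqctl stop_app
    rabbitmqctl join_cluster rabbit@icehouse1
    rabbitmqctl start_app
    rabbitmqctl cluster_status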
You should see a response like this, showing two nodes in the cluster:
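    Cluster status of node rabbit@icehouse2 ...
    [{nodes,[{disc,[rabbit@icehouse1,rabbit@icehouse2]}]},
     {running_nodes,[rabbit@icehouse1,rabbit@icehouse2]},
     {partitions,[]}]
    ...done.

(The exact formatting varies with the RabbitMQ version; the important part is that both nodes appear under nodes and running_nodes.)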
Now we can install Keystone on both nodes:
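    apt-get install -y keystone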
After the install has completed, we edit /etc/keystone/keystone.conf on both nodes to define the admin_token, the rabbit_hosts, and the database connection. The relevant settings look something like this (the token and password shown are placeholders; choose your own):
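    [DEFAULT]
    # ADMIN_TOKEN is a placeholder; use a long random string
    admin_token = ADMIN_TOKEN
    rabbit_hosts = icehouse1:5672,icehouse2:5672

    [database]
    # 192.168.1.32 is the load balancer VIP from the previous articles;
    # KEYSTONE_DBPASS is a placeholder password
    connection = mysql://keystone:KEYSTONE_DBPASS@192.168.1.32/keystone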
Notice that we’re defining rabbit_hosts, not rabbit_host as you would if you only had one RabbitMQ server. Next, we should create our database for Keystone. On any host with the mysql client installed, we can type:
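    mysql -h 192.168.1.32 -u root -p

    CREATE DATABASE keystone;
    -- '%' lets the keystone user connect from any host; use the same
    -- KEYSTONE_DBPASS placeholder you put in keystone.conf
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';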
The IP address shown here happens to be our VIP on the load balancer that is pointing to our MySQL nodes (but you knew that since you read the previous two articles, right?). Then, on node 1, we restart Keystone and populate the database tables:
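    service keystone restart
    keystone-manage db_sync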
The keystone install on each node creates a self-signed certificate for signing authentication tokens. We need these certificates to match so that a token signed on one node will be readable by the other. Before we start the Keystone service on node 2, we should copy the PKI certificates from node 1 to node 2:
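    # /etc/keystone/ssl is the default location for the signing certs;
    # adjust the path if your install differs
    scp -r /etc/keystone/ssl root@icehouse2:/etc/keystone/
    ssh root@icehouse2 "chown -R keystone:keystone /etc/keystone/ssl"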
Then we can restart Keystone on node 2:
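    service keystone restart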
OK, now that we’ve got our two Keystone servers built, we should go back to our HAProxy load balancers and add some configuration settings for Keystone (we built our HAProxy nodes in the article Redundant Load Balancers – HAProxy and Keepalived). We need to add a stanza for the admin endpoint as well as the public/internal API endpoint. Add something like the following to /etc/haproxy/haproxy.cfg (the server addresses are the placeholder node IPs from earlier):
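    listen keystone_admin 192.168.1.32:35357
        balance source
        option tcpka
        server icehouse1 192.168.1.40:35357 check inter 2000 rise 2 fall 5
        server icehouse2 192.168.1.41:35357 check inter 2000 rise 2 fall 5

    listen keystone_api 192.168.1.32:5000
        balance source
        option tcpka
        server icehouse1 192.168.1.40:5000 check inter 2000 rise 2 fall 5
        server icehouse2 192.168.1.41:5000 check inter 2000 rise 2 fall 5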
Notice that we’re listening on the virtual IP address (VIP) that we established in the load balancer article (192.168.1.32), and we’re pointing to the IP addresses of our Keystone nodes. Reload the configuration on both HAProxy nodes:
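    service haproxy reload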
Now we can access our redundant Keystone instances through the VIP. At this point, we have a redundant but empty Keystone service, so we need to take the usual steps to create users, roles, tenants, services, and endpoints. I use a script to do this; a trimmed sketch looks like so (the token, password, and addresses reflect the placeholders used throughout this article):
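    #!/bin/bash
    # Bootstrap Keystone through the admin endpoint on the VIP, using the
    # admin_token placeholder from keystone.conf
    export OS_SERVICE_TOKEN=ADMIN_TOKEN
    export OS_SERVICE_ENDPOINT=http://192.168.1.32:35357/v2.0

    # tenants, users, and roles
    keystone tenant-create --name admin --description "Admin Tenant"
    keystone user-create --name admin --pass ADMIN_PASS
    keystone role-create --name admin
    keystone user-role-add --user admin --tenant admin --role admin
    keystone tenant-create --name service --description "Service Tenant"

    # register the identity service itself, with every URL on the VIP
    keystone service-create --name keystone --type identity --description "OpenStack Identity"
    keystone endpoint-create \
      --service-id $(keystone service-list | awk '/ identity / {print $2}') \
      --publicurl http://192.168.1.32:5000/v2.0 \
      --internalurl http://192.168.1.32:5000/v2.0 \
      --adminurl http://192.168.1.32:35357/v2.0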
Notice that all of the endpoints point to our VIP. Now we can test. To make testing easier, we can create a text file with our OpenStack command line environment variables:
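    # saved as, say, ~/openrc; ADMIN_PASS is the placeholder password from above
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://192.168.1.32:5000/v2.0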
and then load these with the command:
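    source ~/openrc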
Now we should be able to use the Keystone command line to test our installation. Running:
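    keystone user-list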
should return a list of users. To test redundancy, you can stop the Keystone service (or reboot) on one node at a time and confirm that the command still works. In our next article, we’ll build redundant Swift and Glance services.