In this article, we’ll build a highly available OpenStack Icehouse Glance image service backed by highly available Swift object storage. To make Glance highly available, our two (or more) Glance servers must store their images on shared storage. We can use the Swift object store for the Glance storage, so we’ll kill two birds with one stone here, build both services, and configure Glance to use Swift as its storage back end. This is part four of our OpenStack High Availability Controller Stack series.
In the last article, OpenStack High Availability – Keystone and RabbitMQ, we built two Ubuntu 14.04 LTS servers for our OpenStack controllers and installed Keystone and RabbitMQ on them. The host names and IP addresses were:
- icehouse1 (192.168.1.35)
- icehouse2 (192.168.1.36)
We’ll install Glance on these in a few minutes, but first we should build our Swift object storage service. For this, I’ll build three more servers:
- swift1 (192.168.1.37)
- swift2 (192.168.1.38)
- swift3 (192.168.1.42)
On these servers, I’ve added a second hard disk (/dev/sdb) to use as the object storage. If you don’t have a second hard disk available, you could reserve some disk space during the OS installation and create a separate partition. Since I’m building these as virtual machines, adding another virtual disk is no big deal. Also note that I’m going to combine the swift proxy and swift storage roles onto the same servers (I only have so much capacity in my lab).
First, we’ll install the Swift storage components.
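On Ubuntu 14.04 this is a straightforward apt install; the package list below follows the standard Icehouse packaging:

```
apt-get install swift swift-account swift-container swift-object xfsprogs
```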
Once the installation is complete, we create the configuration file for Swift, where we define a unique hash for this group of Swift servers. This file should be copied to all other Swift servers so they match.
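A minimal /etc/swift/swift.conf looks something like this; generate your own random suffix (the value below is only a placeholder) and keep it private:

```
# /etc/swift/swift.conf
[swift-hash]
# Random, secret value shared by every node in this Swift cluster
swift_hash_path_suffix = REPLACE_WITH_A_RANDOM_STRING
```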
Next, we’ll create the disk partition to use for Swift storage and format it as an xfs file system:
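Assuming the whole second disk is dedicated to Swift, something like this does the job:

```
# Create a single partition spanning /dev/sdb (interactive: n, p, 1, accept defaults, w)
fdisk /dev/sdb

# Format the new partition as XFS
mkfs.xfs /dev/sdb1
```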
Then we’ll create a directory to mount it in and grant ownership to the swift user:
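Something along these lines will do it; the mount options are the ones commonly recommended for Swift object storage:

```
mkdir -p /srv/node/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mount /srv/node/sdb1
chown -R swift:swift /srv/node
```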
We should also create a few other directories for Swift:
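These are the cache and recon directories the Swift services expect to be able to write to:

```
mkdir -p /var/cache/swift /var/swift/recon
chown -R swift:swift /var/cache/swift /var/swift/recon
```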
Swift uses rsync to replicate data between the storage nodes, so we need to configure rsync:
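A typical /etc/rsyncd.conf for a Swift storage node looks like this (shown for swift1):

```
# /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.37

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/run/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/run/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/run/object.lock
```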
Notice that we’ve specified the local IP address of the swift1 node in the example. On swift2 use the swift2 address (192.168.1.38), on swift3 use the swift3 address (192.168.1.42). We also need to enable rsync to start on boot. To do this, we modify /etc/default/rsync and change RSYNC_ENABLE to true:
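```
# /etc/default/rsync
RSYNC_ENABLE=true
```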
and then start the rsync service:
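```
service rsync start
```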
Complete the steps above on all three Swift nodes.
Now we’ll install the Swift proxy components. Again, we’re installing this on the same three servers that we just installed the storage components on, but you can install the proxy on separate servers if you wish.
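The proxy packages on Ubuntu 14.04:

```
apt-get install swift-proxy memcached python-keystoneclient python-swiftclient
```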
We need to configure memcached to listen on the local IP address. Change the listener line in /etc/memcached.conf to use the local IP address of the node:
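```
# /etc/memcached.conf  (shown for swift1; use the node's own IP on swift2 and swift3)
-l 192.168.1.37
```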
Then restart the service:
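```
service memcached restart
```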
Next, we create the proxy configuration file:
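A proxy-server.conf along these lines works against an Icehouse Keystone; substitute your own swift service password for SWIFT_PASS:

```
# /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.32
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.1.32:5000/
admin_tenant_name = service
admin_user = swift
admin_password = SWIFT_PASS
delay_auth_decision = 10
cache = swift.cache
include_service_catalog = False

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.1.37:11211,192.168.1.38:11211,192.168.1.42:11211

[filter:healthcheck]
use = egg:swift#healthcheck
```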
Notice that we’ve specified the IP address of our load balanced Keystone service (192.168.1.32), the IP addresses of our memcache servers, and the username and password of our swift user.
Swift Ring Configuration
Now, we’ll create our Swift ring configuration and add our storage locations. We’ll create the necessary files in the /etc/swift directory on one node, and then copy the files to the other nodes. These files should be copied to all proxy and storage nodes.
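The ring builder commands look like this: three replicas spread across our three nodes, one device per node, with 6002/6001/6000 being the default account, container, and object server ports:

```
cd /etc/swift

swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1

swift-ring-builder account.builder   add z1-192.168.1.37:6002/sdb1 100
swift-ring-builder container.builder add z1-192.168.1.37:6001/sdb1 100
swift-ring-builder object.builder    add z1-192.168.1.37:6000/sdb1 100

swift-ring-builder account.builder   add z2-192.168.1.38:6002/sdb1 100
swift-ring-builder container.builder add z2-192.168.1.38:6001/sdb1 100
swift-ring-builder object.builder    add z2-192.168.1.38:6000/sdb1 100

swift-ring-builder account.builder   add z3-192.168.1.42:6002/sdb1 100
swift-ring-builder container.builder add z3-192.168.1.42:6001/sdb1 100
swift-ring-builder object.builder    add z3-192.168.1.42:6000/sdb1 100

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
```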
Then we copy the files to the other nodes:
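The ring files (and swift.conf, if you haven’t already copied it) need to land in /etc/swift on every node:

```
scp /etc/swift/*.ring.gz /etc/swift/swift.conf swift2:/etc/swift/
scp /etc/swift/*.ring.gz /etc/swift/swift.conf swift3:/etc/swift/
```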
Then on all three nodes, we make sure we grant ownership of the config files to the swift user and restart the services:
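```
chown -R swift:swift /etc/swift
swift-init all restart
```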
Load Balancing Swift
The last thing we need to do with Swift is to add it to our load balancer configuration. In an earlier article, Redundant Load Balancers – HAProxy and Keepalived, we built redundant HAProxy servers. Now we’ll go back to those servers and add a stanza to /etc/haproxy/haproxy.cfg that points to our Swift proxies:
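The stanza looks something like this, with the server lines pointing at our three combined proxy/storage nodes:

```
listen swift_proxy_cluster
  bind 192.168.1.32:8080
  balance source
  option tcpka
  option tcplog
  server swift1 192.168.1.37:8080 check inter 2000 rise 2 fall 5
  server swift2 192.168.1.38:8080 check inter 2000 rise 2 fall 5
  server swift3 192.168.1.42:8080 check inter 2000 rise 2 fall 5
```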
Notice that we’re binding to our VIP (192.168.1.32) and pointing to our three swift proxy nodes. To make the change take effect, we reload the HAProxy configuration:
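```
service haproxy reload
```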
OK, we now have a load balanced, redundant Swift service. To check it, we can use the swift stat command. As usual when working with OpenStack commands, we should source our environment file, which looks like this:
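Mine is a simple shell file (here assumed to be called openrc); substitute your own admin password:

```
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.32:5000/v2.0
```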
and we source it like so:
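```
source openrc
```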
Then we can use the command:
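```
swift stat
```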
The output should look something like this:
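On a fresh cluster the counters will all be zero; the values below are purely illustrative:

```
       Account: AUTH_1234567890abcdef1234567890abcdef
    Containers: 0
       Objects: 0
         Bytes: 0
  Content-Type: text/plain; charset=utf-8
   X-Timestamp: 1405012345.67890
    X-Trans-Id: tx123456789abcdef12345-0053c01234
```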
Glance Image Service
In this next section, we’ll install Glance on our controller nodes and use Swift as the high availability storage back end for Glance. However, an even better option (in my opinion) is to use Ceph as the back end for Glance. To do that, read the next article, OpenStack High Availability – Ceph Storage for Cinder and Glance; otherwise, proceed with this article.
Remember, we built these two nodes in the previous article:
- icehouse1 (192.168.1.35)
- icehouse2 (192.168.1.36)
We install glance on both nodes:
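```
apt-get install glance python-glanceclient
```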
Glance has several config files that we need to modify. The first is /etc/glance/glance-api.conf. We’ll find and modify the following lines:
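The relevant settings look roughly like this; GLANCE_DBPASS, GLANCE_PASS and SWIFT_PASS are placeholders for your own passwords:

```
# /etc/glance/glance-api.conf
[DEFAULT]
# Store images in Swift instead of on the local filesystem
default_store = swift
swift_store_auth_version = 2
swift_store_auth_address = http://192.168.1.32:5000/v2.0/
swift_store_user = service:swift
swift_store_key = SWIFT_PASS
swift_store_create_container_on_put = True

[database]
connection = mysql://glance:GLANCE_DBPASS@192.168.1.32/glance

[keystone_authtoken]
auth_uri = http://192.168.1.32:5000
auth_host = 192.168.1.32
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone
```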
Notice that we’re pointing to our VIP for authentication, the database, and Swift. The next configuration file is /etc/glance/glance-registry.conf. Again, we modify the following lines:
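```
# /etc/glance/glance-registry.conf
[database]
connection = mysql://glance:GLANCE_DBPASS@192.168.1.32/glance

[keystone_authtoken]
auth_uri = http://192.168.1.32:5000
auth_host = 192.168.1.32
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS

[paste_deploy]
flavor = keystone
```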
And the third config file is /etc/glance/glance-cache.conf:
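The cache manager needs its own copy of the Keystone credentials; something like this, again with your own password in place of GLANCE_PASS:

```
# /etc/glance/glance-cache.conf
[DEFAULT]
auth_url = http://192.168.1.32:5000/v2.0/
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
```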
Now we create the glance database:
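Connect to MySQL (through the VIP, or on one of the database nodes) and create the database and user; GLANCE_DBPASS must match the connection string used in the config files above:

```
mysql -u root -p

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
```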
Then on one node, we populate the database tables:
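Running db_sync as the glance user avoids permission problems with the Glance log files:

```
su -s /bin/sh -c "glance-manage db_sync" glance
```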
and restart the glance services on both nodes:
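```
service glance-api restart
service glance-registry restart
```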
Load Balancing Glance
Back on our load balancers, we add stanzas for the glance services:
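Two stanzas, one for the Glance API (port 9292) and one for the registry (port 9191), both pointing at our controller nodes:

```
listen glance_api_cluster
  bind 192.168.1.32:9292
  balance source
  option tcpka
  option tcplog
  server icehouse1 192.168.1.35:9292 check inter 2000 rise 2 fall 5
  server icehouse2 192.168.1.36:9292 check inter 2000 rise 2 fall 5

listen glance_registry_cluster
  bind 192.168.1.32:9191
  balance source
  option tcpka
  option tcplog
  server icehouse1 192.168.1.35:9191 check inter 2000 rise 2 fall 5
  server icehouse2 192.168.1.36:9191 check inter 2000 rise 2 fall 5
```

As before, reload HAProxy to pick up the change:

```
service haproxy reload
```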
We now have a load balanced, redundant glance service backed by redundant swift storage. To test, we can add a new image to glance. Again, make sure you’ve sourced your credentials file:
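Any image will do for the test; here I’m using the small CirrOS test image as an example (the URL and version are just a suggestion, grab whichever is current):

```
wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

glance image-create --name "cirros-0.3.2-x86_64" \
  --disk-format qcow2 --container-format bare --is-public True \
  --file cirros-0.3.2-x86_64-disk.img
```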
Here are the locations of the Ubuntu 14.04 and 12.04 cloud images as well. Repeat the step above, replacing the URL, image name, and file name to upload these into Glance.
Ubuntu 14.04: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
Ubuntu 12.04: https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img