OpenStack High Availability – Nova and Horizon

May 13, 2014

In this article, we’ll build highly available OpenStack Icehouse Nova services and the Horizon dashboard. In previous articles, we built the highly available HAProxy load balancers, database servers, and the other basic OpenStack controller services.

In the previous articles, we built two OpenStack Icehouse controller nodes:

  • icehouse1 (192.168.1.35)
  • icehouse2 (192.168.1.36)

Now we will add Nova to these controllers.

Nova Controller Services

First we install the Nova Controller packages:

apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient

Next, we modify /etc/nova/nova.conf and add the following settings:

/etc/nova/nova.conf

[DEFAULT]
rpc_backend = rabbit
rabbit_hosts = 192.168.1.35,192.168.1.36
my_ip = 192.168.1.35

vncserver_listen = 192.168.1.35
vncserver_proxyclient_address = 192.168.1.32
auth_strategy = keystone

[database]
connection = mysql://nova:Service123@192.168.1.32/nova

[keystone_authtoken]
auth_uri = http://192.168.1.32:5000
auth_host = 192.168.1.32
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Service123

Notice that I’ve set my_ip and vncserver_listen to the IP address of icehouse1; on icehouse2, use the icehouse2 address instead (see the sketch below). The rabbit_hosts setting points to the addresses of both controller nodes. All other IP addresses point to our load balancer VIP (192.168.1.32).
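For example, on icehouse2 the node-specific lines would look like this (the VNC proxy client address stays on the VIP, as noted above):

my_ip = 192.168.1.36
vncserver_listen = 192.168.1.36
vncserver_proxyclient_address = 192.168.1.32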

Next, we create the nova database:

mysql -h 192.168.1.32 -u root -p
create database nova;
grant all on nova.* to nova@'%' identified by 'Service123';
exit
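Before populating it, we can confirm that the nova user can reach the new database through the VIP; a quick test using the password set above looks like this:

mysql -h 192.168.1.32 -u nova -pService123 -e 'show databases;'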

Then we can populate the database (on one node only) and restart the services (on both nodes):

nova-manage db sync

service nova-api restart
service nova-cert restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
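With the services restarted on both nodes, a quick sanity check is to list the Nova services that have registered; each controller service should show up once per node:

nova-manage service list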

Now, we need to load balance our Nova services. If you recall, we built our HAProxy load balancers:

  • haproxy1 (192.168.1.30)
  • haproxy2 (192.168.1.31)

On both of these load balancers, we edit /etc/haproxy/haproxy.cfg and add the following stanzas:

/etc/haproxy/haproxy.cfg

listen nova_ec2 192.168.1.32:8773
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server node1 192.168.1.35:8773 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:8773 check inter 2000 rise 2 fall 5

listen nova_osapi 192.168.1.32:8774
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server node1 192.168.1.35:8774 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:8774 check inter 2000 rise 2 fall 5

listen nova_metadata 192.168.1.32:8775
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server node1 192.168.1.35:8775 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:8775 check inter 2000 rise 2 fall 5

listen novnc 192.168.1.32:6080
        balance source
        option tcpka
        maxconn 10000
        server node1 192.168.1.35:6080 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:6080 check inter 2000 rise 2 fall 5

Then we reload the HAProxy configuration:

service haproxy reload
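If the reload complains, the configuration syntax can be checked first; HAProxy’s -c flag just parses the file without starting the service:

haproxy -c -f /etc/haproxy/haproxy.cfg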

At this point, you have highly available Nova services. For a quick check, we source our credentials file (a sketch of which appears after the output below) and try a command:

# source credentials
# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| be5c4f9e-b8c8-4ed8-91e7-cef6eaf64e0a | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
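For reference, the credentials file sourced above is just a set of OpenStack environment variables. A minimal sketch, assuming the admin user and password from the earlier articles and the Keystone endpoint behind the VIP, might look like this:

# assumed values; adjust the user, password, and tenant to your environment
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.32:5000/v2.0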

Horizon Dashboard

To install the dashboard, we install the packages on both controller nodes:

apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
apt-get remove --purge openstack-dashboard-ubuntu-theme

Then we edit /etc/openstack-dashboard/local_settings.py and change any references to 127.0.0.1 so that they point to our VIP address (192.168.1.32):

/etc/openstack-dashboard/local_settings.py

...
CACHES = {
   'default': {
       'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
       'LOCATION' : '192.168.1.32:11211',
   }
}
...
OPENSTACK_HOST = "192.168.1.32"
...

Next, we edit /etc/memcached.conf and change the listening address from 127.0.0.1 to the address of the controller node (192.168.1.35 for icehouse1, 192.168.1.36 for icehouse2):

/etc/memcached.conf

...
-l 192.168.1.35
...

and restart the services:

service apache2 restart
service memcached restart
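To confirm that memcached is now bound to the node address rather than the loopback, a quick check should show port 11211 listening on 192.168.1.35 (or 192.168.1.36):

netstat -lnt | grep 11211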

Now back to our load balancers. We again add a few stanzas to our /etc/haproxy/haproxy.cfg:

/etc/haproxy/haproxy.cfg

listen dashboard 192.168.1.32:80
        balance  source
        capture  cookie vgnvisitor= len 32
        cookie  SERVERID insert indirect nocache
        mode  http
        option  forwardfor
        option  httpchk
        option  httpclose
        rspidel  ^Set-cookie:\ IP=
        server node1 192.168.1.35:80 cookie control01 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:80 cookie control02 check inter 2000 rise 2 fall 5

listen memcached 192.168.1.32:11211
        balance source
        option tcpka
        option httpchk
        maxconn 10000
        server node1 192.168.1.35:11211 check inter 2000 rise 2 fall 5
        server node2 192.168.1.36:11211 check inter 2000 rise 2 fall 5

and reload the configuration:

service haproxy reload
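Before opening a browser, a quick header check through the VIP (curl -I sends a HEAD request) confirms that HAProxy is forwarding dashboard traffic to Apache on the controllers:

curl -I http://192.168.1.32/horizon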

That’s it. You should be able to access the dashboard via the URL http://192.168.1.32/horizon and log on with admin/password.

12 thoughts on “OpenStack High Availability – Nova and Horizon”

  1. Anil Dhingra

    Hey Brian .. thanks for sharing such wonderful info .. only thing I am missing here is neutron HA .. which I think active/active not possible & need corosync/pacemaker cluster .. any idea how to integrate neutron HA with above setup

    1. Brian Seltzer

      Hi Anil, I haven’t found a good way to integrate Neutron into my HA setup, nor for that matter make it perform well compared to using the nova-network flat model and pointing the instances to the physical router. I’m hopeful that the situation with Neutron will improve over the next few releases (and as vendors bring more plug-ins). In the meantime I’m still researching the problem, and I plan to look into the wiki page that you mentioned. Thanks for that.

  2. bgyako

    Hi Brian,
    I’m on my final steps, thanks for all your help.
    My HAproxy config is giving me a little trouble.
    I have 2 NICs on the HAProxy server, 1 external, 1 internal.
    I added these lines to haproxy.cfg:

    bind publicIP:80
    default_backend dashboard

    But I think I need to modify the listen portion of the dashboard stanza you have in your instructions. I’m just not sure what the config should look like. Thanks again for your feedback.

    1. Brian Seltzer Post author

      The listen address should be the public VIP for the load balancers. I’m not sure what that default_backend statement is all about.

      1. bgyako

        I got it to work. I added a line to bind the public IP.
        The default_backend was from the HAProxy instructions; that was completely not needed.
        It looks like this:
        listen dashboard
        bind 130.245.183.131:80
        The rest is the same as yours.

        Thanks for the help

  3. bgyako

    Hi Brian,
    So I noticed you set up memcached for the dashboard. My understanding is that nova-consoleauth does not work well with HAProxy and clustering unless you disable nova-consoleauth on all but one node, or use memcached, in which case you would need to add memcached configuration to nova.conf. Is this correct?

  4. Rakan Bakir

    I have installed the Glance and Swift parts, then followed them with the Nova and Horizon parts. I can’t create an instance and I keep getting the error below in the apache2 error log:

    Recoverable error: Unable to establish connection: HTTPConnectionPool(host='10.0.0.10', port=8776): Max retries exceeded with url: /v1/6f0655190a2d4616987c9140de6614a6/types (Caused by: [Errno 111] Connection refused)

    I checked port 8776 and it belongs to Cinder, but following the sequence of the tutorials I skipped the Cinder part because it belongs to the Ceph part, so where is the problem exactly?

    After each part I made sure that everything was working:
    swift
    nova
    glance
    keystone
    mysql

    everything

    can you please help me?

  5. Brian Seltzer Post author

    I’m not sure you can run instances without having a Cinder service available; even though you may not be using Cinder volumes, the APIs will complain as you are seeing. There may be some way to tell Nova that there is no Cinder service, but I’m not aware of it.

  6. Rakan Bakir

    Thank you very much for your quick response. I will build OpenStack again with Ceph and will get back to the Swift issue later.

  7. S

    Hey Brian, have you seen Horizon throw a “Something went wrong!” error? I am able to access all Nova APIs from the CLI (nova list and so on). By tracing the Apache logs I see the following: [Fri Jan 01 02:55:21.338884 2016] [:error] [pid 20547:tid 140498760050432] Unauthorized: Unauthorized (HTTP 401). However, I verified the credentials that are being used and they are legit.

    1. Brian Seltzer Post author

      Yes, I’ve seen that. With Juno and prior, the Horizon dashboard would throw an error if you didn’t have Nova, Cinder, and Glance deployed. I just installed Liberty last week and it appears that the dashboard will work without Cinder.

