OpenStack High Availability – Heat Orchestration Service

May 18, 2014

In the last few articles, we built the highly available OpenStack Icehouse core services. Now we will add the Heat orchestration service, continuing our high availability series.

In the previous article, we built two controllers:

  • icehouse1 (192.168.1.35)
  • icehouse2 (192.168.1.36)

We will now add Heat to these controllers. First, install the packages on both nodes:

apt-get install heat-api heat-api-cfn heat-engine
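
The heat command-line client is used for testing at the end of this article; if it is not already pulled in as a dependency of the packages above, you may want to install it as well (python-heatclient is the usual Ubuntu package name):

apt-get install python-heatclient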

Then we modify /etc/heat/heat.conf and add or set the following lines:

/etc/heat/heat.conf

[DEFAULT]
verbose = True
log_dir=/var/log/heat

heat_metadata_server_url = http://192.168.1.32:8000
heat_waitcondition_server_url = http://192.168.1.32:8000/v1/waitcondition

rabbit_hosts=192.168.1.35:5672,192.168.1.36:5672

[database]
connection = mysql://heat:Service123@192.168.1.32/heat

[ec2authtoken]
auth_uri = http://192.168.1.32:5000/v2.0

[keystone_authtoken]
auth_host = 192.168.1.32
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.1.32:5000/v2.0
admin_tenant_name = service
admin_user = heat
admin_password = Service123

Notice that we’re using our load balancer VIP (192.168.1.32) for all service endpoints and our two controller addresses for RabbitMQ. We can simply copy the config file from one node to the other:

scp /etc/heat/heat.conf root@192.168.1.36:/etc/heat
ssh root@192.168.1.36 chown heat:heat /etc/heat/heat.conf

In the article where we installed Keystone, we only defined users, roles, services, and endpoints for the core services, so we’ll now add them for Heat. Before we do that, we need to source our credentials file:

source credentials
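
For reference, this is simply a file exporting the standard OS_* environment variables against the Keystone endpoint on the VIP, along these lines (the admin password is whatever you chose when setting up Keystone):

export OS_USERNAME=admin
export OS_PASSWORD=<your admin password>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.32:5000/v2.0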

Then we can create the various Keystone objects for Heat:

keystone user-create --name=heat --pass=Service123 --email=heat@example.com
keystone user-role-add --user=heat --tenant=service --role=admin
keystone service-create --name=heat --type=orchestration --description="Orchestration"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ orchestration / {print $2}') \
  --publicurl=http://192.168.1.32:8004/v1/%\(tenant_id\)s \
  --internalurl=http://192.168.1.32:8004/v1/%\(tenant_id\)s \
  --adminurl=http://192.168.1.32:8004/v1/%\(tenant_id\)s
keystone service-create --name=heat-cfn --type=cloudformation --description="Orchestration CloudFormation"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') \
  --publicurl=http://192.168.1.32:8000/v1 \
  --internalurl=http://192.168.1.32:8000/v1 \
  --adminurl=http://192.168.1.32:8000/v1
keystone role-create --name heat_stack_user
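
To confirm that everything was registered correctly, list the services and endpoints and check that the two orchestration entries point at the VIP:

keystone service-list
keystone endpoint-list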

We also need to create the database:

mysql -h 192.168.1.32 -u root -p
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'Service123';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'Service123';
FLUSH PRIVILEGES;
exit
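
A quick way to verify the grants (and that MySQL is reachable through the load balancer) is to connect as the heat user from each controller:

mysql -h 192.168.1.32 -u heat -pService123 -e 'SHOW DATABASES;'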

Then populate the database tables (this only needs to be run from one of the nodes, since both share the same database):

heat-manage db_sync

And finally restart the services on both nodes:

service heat-api restart
service heat-api-cfn restart
service heat-engine restart
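
Before moving on to the load balancers, it is worth checking on each controller that the Heat APIs are actually listening on their ports:

netstat -ntlp | grep -E ':(8000|8004) '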

Now we need to add the new services to our load balancers. Remember, we built a pair of load balancers:

  • haproxy1 (192.168.1.30)
  • haproxy2 (192.168.1.31)

Add the following lines to the configuration file on both nodes:

/etc/haproxy/haproxy.cfg

listen heat_api_cluster
  bind 192.168.1.32:8004
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server icehouse1 192.168.1.35:8004  check inter 2000 rise 2 fall 5
  server icehouse2 192.168.1.36:8004  check inter 2000 rise 2 fall 5

listen heat_cf_api_cluster
  bind 192.168.1.32:8000
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server icehouse1 192.168.1.35:8000  check inter 2000 rise 2 fall 5
  server icehouse2 192.168.1.36:8000  check inter 2000 rise 2 fall 5

and then reload the configuration:

service haproxy reload
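
As a quick sanity check, the VIP should now accept connections on both Heat ports; heat-api, for example, answers its root URL with a version document:

curl -s http://192.168.1.32:8004/
curl -s http://192.168.1.32:8000/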

You should now have a highly available orchestration service. To verify it, we’ll run a quick test. Create a test template:

~/test-stack.yml

heat_template_version: 2013-05-23

description: Test Template

parameters:
  ImageID:
    type: string
    description: Image used to boot the server
  NetID:
    type: string
    description: Network ID for the server

resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: "Test server"
      image: { get_param: ImageID }
      flavor: "m1.tiny"
      networks:
      - network: { get_param: NetID }

outputs:
  server1_private_ip:
    description: IP address of the server in the private network
    value: { get_attr: [ server1, first_address ] }

and then create the test stack:

NET_ID=$(nova net-list | awk '/ vmnet / { print $2 }')
heat stack-create -f test-stack.yml -P "ImageID=cirros;NetID=$NET_ID" testStack

The stack creation should begin. The following command will show the status:

# heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 50935e5b-cdfb-4e80-90ad-7b80d420684a | testStack  | CREATE_COMPLETE | 2014-05-18T12:47:09Z |
+--------------------------------------+------------+-----------------+----------------------+
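
To exercise the high availability aspect specifically, you can stop heat-api on one controller, confirm that requests through the VIP still succeed, and then clean up the test stack:

ssh root@192.168.1.35 service heat-api stop
sleep 15    # give HAProxy time to mark the backend down (inter 2000 / fall 5)
heat stack-list
ssh root@192.168.1.35 service heat-api start
heat stack-delete testStack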

That’s it. Stay tuned for more, as we build the remaining services…

 

One thought on “OpenStack High Availability – Heat Orchestration Service”

  1. ZhanHan

    I configured Heat HA behind nginx. heat-api works fine, but when a scale-up or scale-down policy is triggered, heat-api-cfn logs “AWS authentication failure” and the POST request gets a 403, even though the request looks the same as ones that authenticate successfully. Has anyone seen the same problem? Please mail me your advice.
