In this article, we’ll build our first OpenStack Icehouse compute node that uses the high availability controller stack that we built in the last few articles. This same process can be used to build additional compute nodes.
We’ll build an Ubuntu 14.04 LTS server for our compute node:
- icehouse-compute1 (192.168.1.45)
This server requires two network adapters. The first (eth0) carries the management IP address above. The second (eth1) sits on a separate network, 192.168.2.0/24; it will be attached to a Linux bridge and will not have an IP address assigned to it.
Remember, our architecture will ultimately look like the diagram from the previous articles: a load-balanced pair of controllers in front of one or more compute nodes.
To configure the network, we’ll edit the /etc/network/interfaces file and configure it like this:
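The original listing isn't reproduced here, but based on the description above it would have looked roughly like this (the gateway, nameserver, and DNS domain are assumptions; adjust them to your environment):

```
# The loopback interface
auto lo
iface lo inet loopback

# Management network (192.168.1.0/24)
auto eth0
iface eth0 inet static
    address 192.168.1.45
    netmask 255.255.255.0
    gateway 192.168.1.1          # assumed gateway
    dns-nameservers 192.168.1.1  # assumed DNS server
    dns-search example.com       # adjust to your DNS domain

# VM network (192.168.2.0/24) -- no IP address; attached to the Linux bridge
auto eth1
iface eth1 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down
```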
Adjust your DNS domain name and addresses as appropriate. Next, we install and configure the packages:
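The original package list isn't shown here; for an Icehouse compute node using nova-network on Ubuntu 14.04, the installation would have looked something like this (the exact package set is an assumption based on the services configured later in this article):

```shell
# Hypervisor driver plus nova-network and the metadata API on each compute node
sudo apt-get update
sudo apt-get install -y nova-compute-kvm nova-network nova-api-metadata
```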
Note: There’s an issue with kernel permissions going on at the time of this writing. If you find that you can’t run any instances, you can try typing:
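The original command isn't reproduced here. If I recall the issue correctly, the kernel image in /boot was installed without world-read permission, so qemu/libvirt couldn't read it; the workaround would have been something like this (the exact command is an assumption based on that bug):

```shell
# Allow qemu/libvirt to read the kernel image (installed 0600 on some 14.04 builds)
sudo chmod 0644 /boot/vmlinuz-$(uname -r)
```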
This will relax the permissions on the kernel image. I believe a fix is in the works, so you may or may not run into this problem.
Now we configure Nova. Edit /etc/nova/nova.conf and add the following lines:
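The original configuration listing isn't reproduced here; based on the description below, it would have looked roughly like this. The VIP (192.168.1.40), controller addresses (192.168.1.41/192.168.1.42), and passwords are hypothetical placeholders:

```
[DEFAULT]
# Load-balancer VIP (hypothetical: 192.168.1.40) for shared services
glance_host = 192.168.1.40
novncproxy_base_url = http://192.168.1.40:6080/vnc_auto.html

# This compute node's own address
my_ip = 192.168.1.45
vncserver_listen = 192.168.1.45
vncserver_proxyclient_address = 192.168.1.45

# RabbitMQ on both controller nodes (hypothetical: .41 and .42)
rpc_backend = rabbit
rabbit_hosts = 192.168.1.41:5672,192.168.1.42:5672

[database]
# Database reached through the load-balancer VIP
connection = mysql://nova:NOVA_DBPASS@192.168.1.40/nova

[keystone_authtoken]
auth_uri = http://192.168.1.40:5000
auth_host = 192.168.1.40
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
```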
Notice that I’m pointing to the load balancer VIP for glance, novncproxy, database and keystone. I’m using this compute node’s IP address for my_ip and vncserver, and I’m using the IP addresses of the two controller nodes for rabbit.
A quick note for those who are testing: if your compute node is itself a VM (or is otherwise incapable of hardware-accelerated virtualization), set the following in /etc/nova/nova-compute.conf (only for virtual compute nodes!):
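The original snippet isn't shown here; for Icehouse on Ubuntu, switching the hypervisor to software emulation would look roughly like this:

```
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
# qemu (software emulation) instead of kvm -- needed when the compute
# node is itself a VM without nested hardware virtualization
virt_type = qemu
```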
One more tip: if you want to use the router on your 192.168.2.0 network as the gateway, rather than dnsmasq, add your router’s IP address to the file /etc/dnsmasq-nova.conf. This greatly improves the network performance of your VMs.
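For example, with a router at 192.168.2.1 (a hypothetical address), the dnsmasq directive would be:

```
# dhcp-option 3 is the default gateway handed out to instances
dhcp-option=3,192.168.2.1
```

This assumes nova.conf points at the file via dnsmasq_config_file=/etc/dnsmasq-nova.conf, which is how nova-network passes extra options to its dnsmasq processes.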
At this point a reboot is advised, or at least a restart of the nova services:
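The original commands aren't shown here; assuming nova-network-style networking on the compute node, the restarts would be something like:

```shell
sudo service nova-compute restart
sudo service nova-network restart
sudo service nova-api-metadata restart
```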
Once the reboot or service restart is complete, one last step is to define the network for the instances to use:
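The original command isn't reproduced here; with nova-network this is a one-time nova network-create run with admin credentials. The network label and bridge name below are assumptions:

```shell
# Create the fixed (instance) network on 192.168.2.0/24, attached to bridge br100
nova network-create vmnet \
  --fixed-range-v4 192.168.2.0/24 \
  --bridge br100 \
  --multi-host T
```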
That’s it! You should now be able to log on to the Horizon dashboard and start provisioning VMs. Of course, you should do the usual housekeeping (create or import SSH keys, enable SSH access in your security groups, etc.). Enjoy!
Note: To add more compute nodes, simply allocate another management IP address for the new node (e.g. 192.168.1.46), then repeat this process, replacing any instance of 192.168.1.45 with 192.168.1.46 as you go.