Redundant Load Balancers – HAProxy and Keepalived

By Brian Seltzer | April 26, 2014

In this article, we’ll build a pair of HAProxy load balancers running on Ubuntu 14.04 LTS, and use keepalived to provide failover of our virtual IP address(es).  This article is part of our series on building a highly available OpenStack controller stack, but it can also be used on its own.  We’ll install the services and set up a base configuration that we’ll return to as we build the various OpenStack services.

For this configuration, I’ve created two Ubuntu 14.04 servers (mine are virtual servers with two virtual CPUs and 1 GB of RAM).  You’d probably want to make these bigger in a production environment, depending on the number of concurrent connections you expect.  I’ve given them hostnames and IP addresses:

  • haproxy1 (192.168.1.30)
  • haproxy2 (192.168.1.31)

We’ll also need to allocate a third IP address to use as the virtual IP address (VIP).  We’ll use 192.168.1.32.  This will ultimately be the endpoint used to access the OpenStack services that we’ll build later.

The first thing we need to do is to let the kernel know that we intend to bind additional IP addresses that won’t be defined in the interfaces file.  To do that we edit /etc/sysctl.conf and add the following line:

/etc/sysctl.conf

net.ipv4.ip_nonlocal_bind=1

Then we run the following command to make this take effect without rebooting:

sysctl -p
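You can confirm that the setting took effect:

# sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1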

Next we can install the software we need (HAProxy and keepalived):

apt-get update && apt-get install keepalived haproxy -y
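If you want to see which versions you ended up with, both tools will report themselves:

haproxy -v
keepalived --version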

Next, we define the keepalived configuration by creating the following file:
/etc/keepalived/keepalived.conf

global_defs {
  router_id haproxy1
}
vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
vrrp_instance 50 {
  virtual_router_id 50
  advert_int 1
  priority 101
  state MASTER
  interface eth0
  virtual_ipaddress {
    192.168.1.32 dev eth0
  }
  track_script {
    haproxy
  }
}

Notice there are a few items we need to set here.  I’ve set the router_id to the node’s hostname, and I’ve specified the VIP as 192.168.1.32.  The vrrp_script runs killall -0 haproxy every two seconds; while HAProxy is running, the check passes and adds the weight of 2 to the node’s priority, so if the HAProxy process dies on the active node, the other node wins the next election and takes over the VIP.  When you create this file on the second node, make sure to use the second node’s hostname; a sketch of that file is shown below.
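Here’s a sketch of what the second node’s file might look like.  Note that the article’s approach keeps state MASTER and priority 101 on both nodes and changes only the router_id; setting the second node to BACKUP with a slightly lower priority is an optional adjustment, not part of the original setup, that simply makes the initial election deterministic:

/etc/keepalived/keepalived.conf (on haproxy2)

global_defs {
  router_id haproxy2
}
vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
vrrp_instance 50 {
  virtual_router_id 50
  advert_int 1
  priority 100      # optional: lower than node 1 (the original keeps 101 on both)
  state BACKUP      # optional: the original keeps MASTER on both nodes
  interface eth0
  virtual_ipaddress {
    192.168.1.32 dev eth0
  }
  track_script {
    haproxy
  }
}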

Next, we will define the HAProxy configuration:

/etc/haproxy/haproxy.cfg

global
	chroot /var/lib/haproxy
	user haproxy
	group haproxy
	daemon
	log 192.168.1.30 local0
	stats socket /var/lib/haproxy/stats
	maxconn 4000

defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
	timeout connect 5000
	timeout client 50000
	timeout server 50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http

listen stats 192.168.1.30:80
        mode http
        stats enable
        stats uri /stats
        stats realm HAProxy\ Statistics
        stats auth admin:password

Notice that I’ve used the local IP address in two places in this file: in the global section for the log destination, and in the stats listener.  When you set up the second node, make sure to use its IP address.  Also notice the username and password in the stats auth line.  Set these to whatever you want.  You’ll then be able to reach the stats page in your browser at http://<node IP>/stats.
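Before enabling the service, you can also have HAProxy validate the configuration file without starting anything:

haproxy -c -f /etc/haproxy/haproxy.cfg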

Now we need to enable HAProxy.  To do this, edit the file /etc/default/haproxy and change ENABLED from 0 to 1:

/etc/default/haproxy

# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
# Add extra flags here.
#EXTRAOPTS="-de -m 16"
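If you’d rather not open an editor, a one-line sed does the same thing (this assumes the file still contains the default ENABLED=0):

sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/haproxy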

Now we can restart the services:

service keepalived restart
service haproxy restart
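A quick way to confirm that HAProxy is answering is to pull the stats page with curl (or a browser), using the address and credentials you configured above:

curl -u admin:password http://192.168.1.30/stats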

Once you’ve completed all of these steps on both nodes, you should have a highly available load balancer pair.  At this point, the VIP should be active on one node (if you built node 1 first, it should be active there).  To confirm, we can use the ip command:

# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.1.30/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.32/32 scope global eth0

Notice that both the local IP and the VIP are shown. If we now reboot node 1, node 2 will quickly pick up the VIP.
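If you want to watch the failover happen, keepalived logs its VRRP state transitions to syslog on Ubuntu, so something like the following on node 2 should show it entering the MASTER state (the exact message wording varies by keepalived version):

grep -i vrrp /var/log/syslog | tail -n 5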

That’s it!  We have our high availability load balancer complex.  We’re not load balancing anything yet, but as we build the services of our OpenStack controller stack, we’ll return to the haproxy.cfg and add things as we go.  In the next article, we’ll create a Highly Available MySQL service for our OpenStack deployment, and place it behind these load balancers.
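To give a rough idea of what those additions will look like, here’s a hypothetical listener bound to the VIP.  The service name, port, and backend server addresses below are placeholders for illustration only, not part of this article’s configuration:

# hypothetical example - the service name, port, and server IPs are placeholders
listen example-service 192.168.1.32:3306
	mode tcp
	balance roundrobin
	option tcpka
	server backend1 192.168.1.34:3306 check
	server backend2 192.168.1.35:3306 check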


12 thoughts on “Redundant Load Balancers – HAProxy and Keepalived”

  1. Poonam

    I am using the above keepalived config but am facing a problem with both nodes transitioning to the master state. The VIP is running on Node A. If I reboot Node B, it comes up with the VIP as well.

    1. Brian Seltzer Post author

      This problem will occur if your network switch is filtering multicast traffic. You have to set the multicast filter to forward the multicast traffic.

      1. Poonam

        Hello Brian,
        Thanks for your prompt response.
        However we do not have any multicast filtering in the switches at all. The VIP is on the same subnet as the haproxy nodes.

        1. Brian Seltzer Post author

          The filtering is at the switch level, not the router level, so even though everything is on the same subnet, the switch may still filter the keepalive traffic. Switches often default to multicast filter-all, which will block the flow. If you can’t configure your switch to pass multicast traffic, then you can configure keepalived to use unicast instead. Check out this blog post and take note of the vrrp_unicast_ lines in the config file: http://blog.miketoscano.com/?p=383

          1. Tytus Kurek

            Another option is to use unicast traffic instead of multicast. Just add the following lines to the “vrrp_instance” section:

            unicast_src_ip [local IP]
            unicast_peer {
              [remote IP1]
              [remote IP2]
            }

            P.S.

            @Brian: Thank you very much for sharing all of this information. I found your blog very useful!

  2. Serkan

    Great article. I’ve configured 3 servers in 30 minutes without any problem. Thank you very much.

  3. Marks

    Great article. I’ve set up 4 Ubuntu 14.04 VMs with failover (virtual IP 10.200.10.121). I’m a little bit confused about keepalived. In the article, the keepalived configuration sets up the first node as master, but it doesn’t specify whether the rest of the nodes should be slaves. Also, in the haproxy configuration I have the 4 nodes with the listen IP:PORT set to the virtual IP.

    haproxy.cfg:
    #…
    frontend http-in
    bind 10.200.10.121:80
    #…

    keepalived.conf (all machines):

    global_defs {
    router_id {{MACHINE_HOSTNAME}} #this is the only thing that is different on all nodes
    }
    vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
    }
    vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 101
    state MASTER
    interface eth0
    virtual_ipaddress {
    10.200.10.121 dev eth0
    }
    track_script {
    haproxy
    }
    }

    I guess what I’m trying to say is that I don’t understand why this works, since when I check with ip a | grep eth0, all nodes output the same thing:

    node1:
    2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.200.10.44/24 brd 10.200.10.255 scope global eth0
    inet 10.200.10.121/32 scope global eth0

    node2:
    2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.200.10.45/24 brd 10.200.10.255 scope global eth0
    inet 10.200.10.121/32 scope global eth0

    node3:
    2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.200.10.46/24 brd 10.200.10.255 scope global eth0
    inet 10.200.10.121/32 scope global eth0

    node4:
    2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.200.10.146/24 brd 10.200.10.255 scope global eth0
    inet 10.200.10.121/32 scope global eth0

    I can’t exactly figure out which node is answering requests to the virtual IP, since all of them report having it (unless I start rebooting nodes until a hiccup is noticed on the haproxy stats page and then another takes over).

    Thank you for an excellent article.

    1. Brian Seltzer Post author

      I’m not sure I understand it 100% either, but it is keepalived’s job to monitor the active node (which responds to requests on the virtual IP) and tell the other nodes not to respond, until the active node becomes unresponsive, at which point another node is activated. Only one active node is servicing requests on the virtual IP at any given time. I’m not exactly sure of the best way to tell which node is active…

    2. Igor

      The secret is called ARP. Keepalived sends a gratuitous ARP packet for the VIP to the upstream router ONLY from the node that is currently in the master state. That way the router updates its ARP table with the MAC address of the master node for the VIP. When a request for the VIP comes in, the router looks in its ARP table, finds the MAC of the master node there, and the connection ends up on that node’s network interface.

      Consequently, in case of failover, keepalived elects a new master and sends a new gratuitous ARP packet with the MAC of the new master’s network interface.

  4. Marin Biberović

    Hello,

    I don’t understand how this virtual IP is supposed to be set up from the networking side. I have four servers, each with its own public IP. What do I ask of my hosting company to be able to set up two load balancers?

    If someone could clarify/explain how this works I’d be most grateful.

    Thank you!

