MySQL High Availability with Galera and HAProxy

By Brian Seltzer | April 26, 2014

In this article, we’ll build a highly available MySQL database service. We’ll actually use MariaDB (a fork of MySQL) and build two database servers. Then we’ll use Galera for bidirectional replication, and we’ll put the whole thing behind the HAProxy pair that we built in the last article, Redundant Load Balancers – HAProxy and Keepalived. We can then use this cluster for our high availability OpenStack deployment. I’ve chosen MariaDB because a Galera-enabled build is available in an apt repo for Ubuntu 14.04.

So, to start, I’ve built two Ubuntu 14.04 servers and given them the following hostnames and IP addresses:

  • MySQL1 (
  • MySQL2 (

Next, we will add the MariaDB repo and install the MariaDB and Galera packages:

sudo apt-get install python-software-properties
sudo apt-key adv --recv-keys --keyserver hkp:// 0xcbcb082a1bb943db
sudo add-apt-repository 'deb trusty main'
sudo apt-get update
sudo apt-get install mariadb-galera-server galera

Note that there are a number of mirrors for this repo. You can find the best mirror for your location using the MariaDB repository configuration tool. Once the software has been installed, we can create the Galera configuration file:



# Galera Provider Configuration

# Galera Cluster Configuration

# Galera Synchronization Configuration

# Galera Node Configuration

Notice that we need to set the IP addresses in wsrep_cluster_address to match the addresses of our two nodes; then, on each node, we use the local hostname and IP address in the last two lines.
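The listing above lost its settings during formatting; only the comment headers survived. Commenters below identify the file as cluster.cnf (e.g. under /etc/mysql/conf.d/). As a rough sketch of what it might contain, with hypothetical node IPs ( and — the provider path and option values here are assumptions to check against your own install:

```
[mysqld]
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2

# Galera Cluster Configuration
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://,"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address=""
wsrep_node_name="MySQL1"
```

On MySQL2, the last two lines would carry that node’s own address and hostname instead.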

Next, we need to edit the MySQL configuration to remove the bind-address setting. By default this binds MySQL to the loopback interface only, so HAProxy can’t connect to it remotely and its health checks fail. Just comment out the following line using a #:


#bind-address           =

Complete all of the above steps on both nodes, and then we should stop the mysql service on both nodes:

service mysql stop

Now we need to copy the contents of /etc/mysql/debian.cnf from node 1 to node 2 (there are passwords in that file that need to match across both nodes once they’re clustered). Once that’s complete, we can create the cluster. On the first node, we start the mysql service with a special argument to create the new cluster:

service mysql start --wsrep-new-cluster

And on the second node, we start the usual way:

service mysql start
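One operational note before moving on: if the entire cluster is ever shut down, the first node must again be started with --wsrep-new-cluster (a commenter confirms this below), and it should be the node with the highest committed transaction number, which Galera records as seqno in /var/lib/mysql/grastate.dat. A self-contained sketch of reading that value — the sample file, uuid, and seqno below are made up for illustration:

```shell
# Read the last committed seqno from a Galera state file.
# On a real node the file is /var/lib/mysql/grastate.dat; here we
# write a sample copy to a temp dir so the sketch runs standalone.
tmp=$(mktemp -d)
cat > "$tmp/grastate.dat" <<'EOF'
# GALERA saved state
version: 2.1
uuid:    8bcf4a34-aa11-11e4-ab7c-67d968dcbdd3
seqno:   42
EOF
seqno=$(awk '/^seqno:/ {print $2}' "$tmp/grastate.dat")
echo "seqno=$seqno"
```

The node reporting the highest seqno is the safest bootstrap candidate after a full outage.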

That’s that. We’ve got two MySQL nodes replicating, but we need to do some configuration to put our load balancers out front. If you’ll recall from the HAProxy article, we built two HAProxy nodes and created a virtual IP address (VIP). HAProxy will check that MySQL is up and running by logging on to the service, so we’ll need to create a MySQL user that can log on from either HAProxy node. On one of the MySQL nodes, log on to MySQL as root and enter the following:

mysql -u root -p
grant all on *.* to root@'%' identified by 'password' with grant option;
insert into mysql.user (Host,User) values ('','haproxy');
insert into mysql.user (Host,User) values ('','haproxy');
flush privileges;

Notice that I specified the IP addresses of my HAProxy nodes, and I used the username ‘haproxy’.
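A caveat raised by a commenter (javier) further down: Galera replicates DDL and InnoDB DML, but mysql.user is a MyISAM table, so the insert statements above take effect only on the node where you run them. CREATE USER is DDL and does replicate, so an alternative sketch — the HAProxy node IPs here are hypothetical, since the originals were lost — would be:

```sql
-- CREATE USER is DDL, which Galera replicates to every node
-- (hypothetical HAProxy node IPs)
CREATE USER 'haproxy'@'';
CREATE USER 'haproxy'@'';
FLUSH PRIVILEGES;
```

Either way, the check user needs no privileges beyond the ability to connect.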

Now we need to head over to our HAProxy nodes and do some work over there. The first thing we need to do is install the mysql client. Remember, HAProxy will log on to MySQL to check that the service is running, so we need the client.

apt-get update
apt-get install mysql-client

Then, we need to add a stanza to our haproxy.cfg to proxy MySQL traffic. Remember, we’ve already got global, defaults, and stats stanzas in the file from the HAProxy article. Just add the following stanza to the end of the file:


listen galera
        balance source
        mode tcp
        option tcpka
        option mysql-check user haproxy
        server MySQL1 check weight 1
        server MySQL2 check weight 1

Notice that I’m using the VIP for the listener, and I’m pointing at my two MySQL servers. Also notice that the mysql-check user is haproxy, the user I added to MySQL earlier.
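The stanza above lost its bind line and server addresses in formatting. With a hypothetical VIP of and MySQL nodes at and (the .33/.34 endings echo a fragment that survives in the text, but the full addresses are assumptions), the complete block might look like this:

```
listen galera
        bind
        balance source
        mode tcp
        option tcpka
        option mysql-check user haproxy
        server MySQL1 check weight 1
        server MySQL2 check weight 1
```

The bind address must be the VIP managed by Keepalived so that whichever HAProxy node holds the VIP answers on port 3306.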

We then just have to reload the HAProxy configuration on both nodes for it to take effect:

service haproxy reload

Now, you should be able to access MySQL through the load balancer:

mysql -h -u root -p
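If you script against the VIP, keep in mind that a node failure takes a health-check interval to detect, so a short retry loop is useful. Here is a sketch with the real check stubbed out so it runs standalone; in practice check() would run something like mysql -h <VIP> -u root -p… -e 'SELECT 1':

```shell
# Retry until MySQL answers through the VIP.
# The check is stubbed with `true` for illustration; substitute a
# real mysql client call against your VIP in practice.
check() { true; }
status="down"
for i in 1 2 3 4 5; do
    if check; then status="reachable"; break; fi
    sleep 1
done
echo "mysql is $status"
```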

That’s that. We now have a load balanced, redundant database service for our OpenStack deployment.


34 thoughts on “MySQL High Availability with Galera and HAProxy”

    1. Brian Seltzer

      Bob, it looks like MariaDB has released their Trusty repo since I wrote this article. I guess I’ll go back and upgrade my cluster (and the article) to use 14.04. Thanks for the heads up.

      1. bob

Brian, thanks for getting back to me. Can you also confirm the link in your article? It seems it’s pointing at the basic MariaDB repo, correct? Shouldn’t the path be /mariadb/repo/5.5-galera, not /mariadb/repo/5.5?

        Great write up by the way.

        1. Brian Seltzer

          I will. I’ll run through the install again, hopefully over the weekend, and update the article to use 14.04 and verify the link

        2. Brian Seltzer

          OK, I ran through the install on 14.04, and I’ve updated the article. That’s awesome, now my whole stack is on 14.04.

      2. bob

        Can you confirm if the path in instructions is correct for MariaDB with galera?
It seems that is the path for regular MariaDB. I’m pretty new to Linux and your tutorial has been very helpful. Thanks so much

  1. bob

Hi Brian, in the process of configuring the cluster, I do not see the file cluster.cnf. Do I need to create it, or did I install the incorrect version of MariaDB? Is there a command I can run to make sure I downloaded and installed Galera?

    1. Brian Seltzer

Yes, you need to create the file from scratch. To verify that the galera package is installed, you could use: dpkg -l | grep galera, or something like that…

      1. bob

Yes, it is running:

        dpkg -l | grep galera

        ii galera 25.3.5-wheezy amd64 Galera Replication Framework

ii mariadb-galera-server 5.5.37+maria-1~trusty all MariaDB database server with Galera cluster (metapackage depending on the latest version)

        ii mariadb-galera-server-5.5 5.5.37+maria-1~trusty amd64 MariaDB database server with Galera cluster binaries

  2. bob

Completed the setup, but when I try to connect I get the error “Lost connection to MySQL server at ‘reading initial communication packet’, system error: 0”

    Noticed on cluster.cnf the bind-address is set to, is that correct?

Aha!!!! I figured it out.. your instructions say to use the root user.. there were no privileges for root from the HAProxy box.. I tried with the haproxy user and it worked. My only concern is that no password was required? Is this correct? Can you also confirm the bind address question above.

    1. Brian Seltzer

The packet error indicates that haproxy can’t determine that the mysql service is up on the nodes. Ensure that you’ve run the commands that grant access to the haproxy user from the haproxy nodes: insert into mysql.user (Host,User) values ('','haproxy');
insert into mysql.user (Host,User) values ('','haproxy');

Yes, that value means listen on all IP addresses.

notice the mysql command in the instructions: grant all on *.* to root@'%' identified by 'password' with grant option; that gives root access from all hosts using the password 'password'.

      1. bob

So this means I will need to grant access to MySQL from the haproxy servers for every OpenStack service user (e.g. nova, keystone)?

  3. bob

    Hi Brian,

    Again thank you for all your help.
I found a doc on for HAProxy, and the config for galera shows
bind :3306; it also shows the option httpchk, not tcpka. Can you clarify and help me understand? Lastly, shouldn’t the bind-address be set to the VIP on HAProxy to allow compute nodes access through the controller?

    1. Brian Seltzer

I’m using mysql-check, which connects to the mysql service using the mysql client to see if it’s alive. The OpenStack way uses a Galera check service that responds to HTTP requests. httpchk is only used if you’re using the http-style Galera check. tcpka is unrelated to the check; it’s for TCP keep-alives.

  4. Pravin

    The last part is where i am stuck.

I did a service reload for haproxy and encountered the error below.

* Reloading haproxy haproxy [WARNING] 188/162710 (5292) : config : 'option httplog' not usable with proxy 'galera' (needs 'mode http'). Falling back to 'option tcplog'.

Later, in the mysql log, I am getting the error below:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0

    I would appreciate your help on this

    1. ralf

      Hi …
I came across this while trying to run through this tutorial 🙂
Within the haproxy.cfg we need to change it like this …

      [defaults-section->comment out the following lines]
      # mode http
      # option httplog

      [add the following line to the stats-section]
      option httplog

      afterwards reloading the haproxy should not give you any pain any more.

  5. Pedro Sousa

    Hi Brian,

Nice article. A question: how do you handle split brain in a 2-node Galera scenario? My understanding is that you need a 3-node setup.


        1. Martins

Pedro, you are right. First tests quickly showed that without an arbitrator the cluster is pretty unpredictable, so, for serious usage, please use an arbitrator as well, or a 3rd mysql server.
And don’t forget about ntp and precise time on the galera hosts!!!

  6. Josef ten Koppel

This is a great tutorial!! I installed a MariaDB 10 cluster with 3 nodes this morning! I had an issue like Pravin had (ERROR 2013 (HY000): Lost connection to MySQL server at ‘reading initial …’) at first. I had to put a line in my.cnf in the [mysqld] section, skip-name-resolve, and that solved the issue for me. I still have one problem left: the MySQL services don’t start on any node when all 3 nodes were down. I have to start the first node manually with ‘service mysql start --wsrep-new-cluster’ and then start the other nodes.
Maybe I missed something. Help would be appreciated.
Greetings Josef

    1. Brian Seltzer Post author

That’s correct, if all nodes are down, you have to start the first node with --wsrep-new-cluster, but hopefully you don’t have all three nodes down very often. I suppose you could write a script to automate the startup, but it could get a bit complicated.

  7. iphenk

    Hi brian,

When I try reloading haproxy, it shows:
* Reloading haproxy haproxy
[ALERT] 172/101436 (24899) : Starting proxy galera: cannot bind socket [fail]
How do I fix this?

    1. Brian Seltzer Post author

      type the following command on the haproxy server:

      netstat -ntlp | grep 3306

      The program listening on port 3306 should be haproxy. You don’t have MySQL/Galera running on the same server as haproxy do you?

  8. huyupeng

In the article, you said “bidirectional replication”, but when I insert data into the db via the VIP, the data is inserted into only one mysql server’s db. Why?

    1. Brian Seltzer Post author

Galera should replicate the data to the other node. Did you start node 1 with the --wsrep-new-cluster switch?

  9. Steven LI

Did you ever try network HA mode? There are a few options, but none is good enough. We can use DVR for load balancing, but the network node is still a single point of failure; if we use L3 HA for network node redundancy, load balancing cannot be achieved.

  10. javier

    Hi Brian,

First of all, great tutorial. Now, I found a bug in your procedure, specifically where you create the haproxy user in the database.

    MySQL2 node appears unreachable in haproxy

1. HAProxy/stats error message “Access denied for user 'haproxy'@' 30)”
2. MySQL2 syslog error message “Access denied for user 'haproxy'@'' (or 30) (using password: NO)”

Galera replicates only DDL (data definition language) and InnoDB DML (data manipulation language), so if you create a user with “insert into mysql.user ….” it is MyISAM DML and is not replicated.

Create the users using these statements instead:
create user 'haproxy'@'';
create user 'haproxy'@'';

    I hope this reply is helpful

  11. javier

Another problem I faced while configuring was that the cluster does not start on the first try.

    Ubuntu Server 15.04 (vivid)
    MariaDB 10.0.21

    Syslog Error shows –>

/usr/bin/mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111 "Connection refused")'
Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!

1. There seems to be a bug in my Galera version causing the process on the primary node to fail because there is no mysqld process alive (on any node)

2. After restarting the mysql process, it fails because a mysqld process remains alive but unresponsive (zombie), so port 3306 stays locked

The workarounds:

1. In the cluster.cnf file on the first node, set the variable wsrep_cluster_address="gcomm://"
2. Kill the unresponsive mysqld process with the "kill -9" command

    I hope this reply is helpful too


  12. jibrann

    hi brian,
    thanks for posting such a nice bundle of articles on HA
Can you please confirm something you say in every HA article: “So, to start, I’ve built two Ubuntu 14.04 servers and given them the following hostnames and IP addresses”.
Does that mean that for every article in the HA Controller Series I need to install Ubuntu 14.04 LTS again and again for each server?

I’m a bit confused about it; please advise.

    1. Brian Seltzer Post author

      Hi Jibran,

One of the goals of the articles is to show how to build a scalable infrastructure. Maximum scale is achieved by spreading services across many nodes, so yes, each article suggests that you build more Ubuntu servers. However, you can certainly colocate services on fewer nodes. For example, you can put MySQL and the controller services on the same pair of servers. You can put Swift proxies on the same servers as the Swift storage nodes. You should, however, keep the compute nodes separate from the controllers because of network complexity.

      Hope that helps.

