If you followed my previous post about high availability MySQL your application now has one less single point of failure. That's good, but what happens when your MySQL cluster begins to get overloaded? By load-balancing MySQL connections between hosts you can more easily accommodate a larger volume of queries.

A load balanced database cluster


This article will build on the MySQL cluster introduced in my previous post. If you haven't already, set that up. You'll also need another two virtual machines, each with one IP address.

  • lb-db-01 (lb-db-01.vm.xeriom.net)
  • lb-db-02 (lb-db-02.vm.xeriom.net)
  • db-01 (db-01.vm.xeriom.net) *
  • db-02 (db-02.vm.xeriom.net) *
  • virtual IP address *

IP addresses marked with a * are brought over from the previous article.

All boxes have been firewalled. It's just plain common sense.
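The -I INPUT 4 used in the commands below inserts each new rule at position 4, which assumes a baseline of three earlier rules, as in the previous post. As an iptables-save fragment, a hypothetical baseline of that shape might look like this - loopback, established traffic and SSH accepted, everything else dropped:

```
*filter
:INPUT DROP [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

If your baseline differs, adjust the insert position accordingly.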

We have the technology

Install Heartbeat and MySQL Proxy on both load balancer boxes.

sudo apt-get install heartbeat mysql-proxy --yes

Configure and run MySQL Proxy

Open the firewall on the database boxes to allow the load balancing boxes to connect.

# On db-01 and db-02
sudo iptables -I INPUT 4 -p tcp \
  --dport mysql -s lb-db-01.vm.xeriom.net -j ACCEPT
sudo iptables -I INPUT 4 -p tcp \
  --dport mysql -s lb-db-02.vm.xeriom.net -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"

If you followed the previous post you'll probably also want to remove the rule that allowed MySQL access from the test box to the floating IP address on the backend boxes. It's not hugely important at the moment, but it's nice to be neat. When you put this into production it will become much more important to control access to the database boxes.

# On db-01 and db-02
sudo iptables -D INPUT -p tcp --dport mysql \
  -s <test-box-ip> -d <floating-ip> -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"

Remember to swap <test-box-ip> for your test box IP address and <floating-ip> for your floating IP address or you'll get a "bad rule" error.

You'll also need to open the MySQL Proxy port on the load balancer boxes. Note that MySQL Proxy listens on port 4040, not the regular MySQL port 3306. The test box appears as <test-box-ip> below - it should be whichever IP address outside the database cluster you're going to connect from to test that the proxy works.

# On lb-db-01
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d lb-db-01.vm.xeriom.net -s <test-box-ip> -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
# On lb-db-02
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d lb-db-02.vm.xeriom.net -s <test-box-ip> -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"

Run the proxy on both boxes, telling it the address of the real database servers, then try to connect from the test box.

sudo /usr/sbin/mysql-proxy \
  --proxy-backend-addresses=db-01.vm.xeriom.net:3306 \
  --proxy-backend-addresses=db-02.vm.xeriom.net:3306
# On the test box
mysql -u some_user -p'some_other_password' -h lb-db-01.vm.xeriom.net --port 4040
mysql> \q
mysql -u some_user -p'some_other_password' -h lb-db-02.vm.xeriom.net --port 4040
mysql> \q
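To see which backend a given proxied connection actually landed on, ask the server for its hostname; hostname is a read-only MySQL system variable, so from either proxy connection:

```
mysql> show variables like 'hostname';
```

The value reported tells you which database box answered the query.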

You may be told that your load balancer hosts don't have access to the MySQL server. If this happens, log in to the MySQL hosts, grant access for the user from the hostname that failed, and try again.

ERROR 1130 (00000): Host 'lb-db-01' is not allowed to connect to this MySQL server
# On db-01 and db-02
mysql -u root -p
Enter password: [Enter your MySQL root password]
mysql> grant all on my_application.* to 'some_user'@'lb-db-01' 
  identified by 'some_other_password';
mysql> grant all on my_application.* to 'some_user'@'lb-db-02' 
  identified by 'some_other_password';
mysql> \q

If you got MySQL prompts both times then both proxies are working. Remove the firewall rules that let your test box talk directly to each node and add rules that allow access only to the floating IP address.

# On lb-db-01
sudo iptables -D INPUT -p tcp \
  --dport 4040 -d lb-db-01.vm.xeriom.net -s <test-box-ip> -j ACCEPT
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d <floating-ip> -s <test-box-ip> -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
# On lb-db-02
sudo iptables -D INPUT -p tcp \
  --dport 4040 -d lb-db-02.vm.xeriom.net -s <test-box-ip> -j ACCEPT
sudo iptables -I INPUT 4 -p tcp \
  --dport 4040 -d <floating-ip> -s <test-box-ip> -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"

Configure and run Heartbeat

Now it's time to configure Heartbeat on both boxes. Open up the firewall and then populate Heartbeat's configuration files.

# On lb-db-01
sudo iptables -I INPUT 4 -p udp \
  --dport 694 -s lb-db-02.vm.xeriom.net -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
# On lb-db-02
sudo iptables -I INPUT 4 -p udp \
  --dport 694 -s lb-db-01.vm.xeriom.net -j ACCEPT
sudo sh -c "iptables-save -c > /etc/iptables.rules"
# On both load balancer boxes
sudo cp /usr/share/doc/heartbeat/authkeys /etc/ha.d/
sudo sh -c "zcat /usr/share/doc/heartbeat/ha.cf.gz > /etc/ha.d/ha.cf"
sudo sh -c "zcat /usr/share/doc/heartbeat/haresources.gz > /etc/ha.d/haresources"

The authkeys file should be readable only by root because it's going to contain a valuable password.

sudo chmod go-wrx /etc/ha.d/authkeys
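For the curious, go-wrx strips the read, write and execute bits from group and other, leaving the file accessible only to its owner. A quick self-contained demonstration on a scratch file (GNU stat assumed):

```shell
# Create a scratch file with typical 644 permissions, then apply the same
# chmod used on authkeys and show the resulting mode.
f=$(mktemp)
chmod 644 "$f"
chmod go-wrx "$f"
stat -c '%a' "$f"   # 644 with the group and other bits removed leaves 600
```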

Edit /etc/ha.d/authkeys and add a password of your choice so that it looks like the following.

auth 2
2 sha1 your-password-here

Configure ha.cf according to your network. In this case the nodes are lb-db-01.vm.xeriom.net and lb-db-02.vm.xeriom.net. To find your node names, run uname -n on each node; these must match the values used in the node directives in the configuration file.

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node lb-db-01.vm.xeriom.net
node lb-db-02.vm.xeriom.net
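A mismatch between uname -n and the node directives is a common source of silent Heartbeat failures, so it's worth checking. Here's a small self-contained sketch of that check; on a real box you'd point HA_CF at /etc/ha.d/ha.cf and set NODE from uname -n:

```shell
# Write a sample ha.cf so the sketch runs anywhere, then check that the
# node name appears as an exact "node" line in it.
HA_CF=$(mktemp)
cat > "$HA_CF" <<'EOF'
node lb-db-01.vm.xeriom.net
node lb-db-02.vm.xeriom.net
EOF
NODE=lb-db-01.vm.xeriom.net   # on a real box: NODE=$(uname -n)
if grep -qx "node ${NODE}" "$HA_CF"; then
  result="ok: ${NODE} is listed"
else
  result="missing: add 'node ${NODE}' to ha.cf"
fi
echo "$result"
```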

Tell Heartbeat that it will be managing the floating IP address with lb-db-01 being the preferred node by editing /etc/ha.d/haresources. Remember that this file must be identical on both boxes.
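The haresources format puts the preferred node first, followed by the resources it manages, and a bare IP address is shorthand for the IPaddr resource. A minimal sketch, with <floating-ip> standing in for your floating IP address:

```
lb-db-01.vm.xeriom.net <floating-ip>
```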


If you've had Heartbeat running on the database boxes (as will be the case from the last article) then nuke it now.

# On the database boxes
sudo apt-get remove heartbeat --yes

Then remove the alias from eth0 on both boxes.

# On the database boxes
sudo ifconfig eth0:0 down

Now we're ready to fire up Heartbeat on the load balancer boxes.

# On lb-db-01 then lb-db-02
sudo /etc/init.d/heartbeat restart

Testing, testing, testing

Fire up mysql on the test box and connect to the floating IP address. You should get the MySQL command prompt.

mysql -u some_user -p'some_other_password' -h <floating-ip> --port 4040

Typing out every test in full would take a long time and, largely, be a waste of space, so here's a summary of the procedure. At every stage you should get a result from your query.

  1. Run a query such as show processlist;
  2. Shutdown db-01
  3. Run the query again
  4. Start db-01
  5. Shutdown db-02
  6. Run the query again
  7. Start db-02
  8. Shutdown lb-db-01
  9. Run the query again
  10. Shutdown db-01
  11. Run the query again
  12. Start db-01
  13. Shutdown db-02
  14. Run the query again
  15. Start db-02
  16. Start lb-db-01
  17. Run the query again
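Rather than retyping the query between failover steps, a tiny probe helper saves effort. The sketch below is exercised with stand-in commands so it runs anywhere; on the test box you'd call it with the real client invocation, e.g. probe mysql -u some_user -p'some_other_password' -h <floating-ip> --port 4040 -e 'show processlist;' (placeholder IP as before):

```shell
# probe runs whatever command it's given and reports whether the query
# round-trip succeeded; run it after each shutdown/start step above.
probe() {
  if "$@" >/dev/null 2>&1; then
    echo "query ok"
  else
    echo "query FAILED"
  fi
}
probe true    # stands in for a reachable cluster; prints "query ok"
probe false   # stands in for a dead cluster; prints "query FAILED"
```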

If your query ran successfully each time then congratulations, you've now got a load balanced, highly available, MySQL instance.

Where now?

Being highly available and load balanced doesn't protect you from mistakes. Backup often, and check you can restore from your backups. You may be interested in building a MySQL binlog-only server to get point-in-time recovery.

MySQL Proxy talks Lua. Consider learning how to write it.

I've not yet documented how to take the cluster beyond two load balancers and two database nodes. It's possible, but it shouldn't be treated as a way to scale the setup I've described without some research first. Instead of expanding beyond two nodes in a master-master cluster it may be more suitable to set up several master-master clusters and shard or federate your data. It may be that you need to rearrange your schema, or play with master-slave replication and do some tricks on the slaves to make reads faster. How you scale your database depends on your data and how you use it. Do your homework... and be sure to blog about it and let me know how it goes.

Thinking of a title is the hardest part

If you found this article useful, give me some love over at Working With Rails. If I get 100 points then I get to live.

Disagree? Found a typo? Got a question?
If you'd like to have a conversation about this post, email craig@barkingiguana.com. I don't bite.
You can verify that I've written this post by following the verification instructions:
curl -LO http://barkingiguana.com/2008/07/20/load-balanced-highly-available-mysql-on-ubuntu-804.html.orig
curl -LO http://barkingiguana.com/2008/07/20/load-balanced-highly-available-mysql-on-ubuntu-804.html.orig.asc
gpg --verify load-balanced-highly-available-mysql-on-ubuntu-804.html.orig{.asc,}