This post is the final chapter of a LAMP Tutorial series in which we will show you how to set up a fully redundant, high-availability LAMP stack in the cloud with ElasticHosts. Find the other parts here:
- LAMP Tutorials (1/6): Set Up a LAMP Stack on a Cloud Server
- LAMP Tutorials (2/6): Move MySQL to a Separate Cloud Database Server
- LAMP Tutorials (3/6): Create a Second MySQL Cloud Database Server
- LAMP Tutorials (4/6): Add a Second Cloud Web Server with Round-Robin DNS Load Balancing
- LAMP Tutorials (5/6): Add a Front-End Apache Cloud Load Balancer
Add a Second High-Availability Cloud Load Balancer
The final step towards our fully redundant, high-availability LAMP stack is adding a second load balancer. The two load balancers use the heartbeat package to monitor each other and check if the other node is still alive. We configure our two load balancers in an active/passive setup, which means we have one active load balancer, and the other one is a “hot standby” and becomes active if the active one fails. The steps are as follows:
- Step 1: Add a second load balancer
- Step 2: Set up hostnames
- Step 3: Install and configure heartbeat
- Step 4: Test heartbeat and total failover
Step 1: Add a second load balancer
Set up a new server called haddock and do the following things:
- Give it a static IP address (in this tutorial, we’ll use 18.104.22.168)
- Give it the private address 10.0.0.6 and attach it to the VLAN – check you can ping it from herring
- Follow the steps in part 5, Add a Front-End Apache Cloud Load Balancer, to install Apache as a load balancer
- Check that visiting its static IP in a browser now shows our site
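Before moving on, it is worth confirming that both load balancers answer HTTP on their static IPs. A minimal sketch you could run from your own machine (check_lb is a hypothetical helper, and the commented-out IP is the example address used in this tutorial — substitute your own):

```shell
# Report whether a load balancer answers HTTP on the given IP
check_lb() {
  # -s: silent, -o /dev/null: discard the page, -m 5: give up after 5 seconds
  if curl -s -o /dev/null -m 5 "http://$1/"; then
    echo "$1 OK"
  else
    echo "$1 FAIL"
  fi
}
# check_lb 18.104.22.168   # haddock's static IP from this tutorial
```

Run it once per load balancer; any FAIL means that server is not serving the site yet.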
Step 2: Set up hostnames
Next, we need to set up hostnames on both herring and haddock. We’ll call herring (10.0.0.5) loadb1, and haddock (10.0.0.6) loadb2. Start by editing the /etc/hosts file on both machines:
vi /etc/hosts
And replace the content of the file with the following:
127.0.0.1 localhost
10.0.0.5 loadb1
10.0.0.6 loadb2
On herring, now run:
echo loadb1 > /etc/hostname
service hostname start
Then check the result with:
hostname
hostname -f
On herring, both commands should show loadb1. You should now do the same on haddock, substituting loadb2 for loadb1.
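With the hostnames and /etc/hosts entries in place, a quick way to confirm the names actually resolve is to look them up with getent. A sketch (check_name is a hypothetical helper; loadb1 and loadb2 are the names set up above):

```shell
# Report whether a hostname resolves (via /etc/hosts or DNS)
check_name() {
  if getent hosts "$1" > /dev/null; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve"
  fi
}
# Run on both machines:
# check_name loadb1
# check_name loadb2
```

If either name does not resolve, re-check the /etc/hosts edits before continuing.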
Step 3: Install and configure heartbeat
We now install a package called heartbeat on both servers, which lets the two nodes monitor each other and check if the other node is still alive. First, add the universe and multiverse repositories to your sources file:
vi /etc/apt/sources.list
deb http://gb.archive.ubuntu.com/ubuntu lucid universe
deb http://gb.archive.ubuntu.com/ubuntu lucid multiverse
Then, update your sources:
apt-get update
Now, installing heartbeat is as simple as:
apt-get install heartbeat
We can now set up a simple configuration by creating just three files, all of which are stored under /etc/ha.d:
- ha.cf, the main configuration file
- haresources, resource configuration file
- authkeys, authentication information
These three configuration files must be identical on both haddock and herring. Firstly, create ha.cf:
vi /etc/ha.d/ha.cf
The file should consist of this:
use_logd on
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
udpport 694
ucast eth0 22.214.171.124
ucast eth0 126.96.36.199
ucast eth1 10.0.0.5
ucast eth1 10.0.0.6
node loadb1
node loadb2
auto_failback on
The first four options are logging directives. Of the remaining options:
- keepalive specifies that heartbeat should check the status of the other server every 2 seconds.
- deadtime specifies that it should assume the other server has gone down if there is no response after 10 seconds.
- ucast specifies the IP addresses of the two servers (unicast directives to the machine’s own IP addresses are ignored, which is why the file can be the same on both herring and haddock).
- auto_failback specifies that if the master server goes down and then comes back up again, heartbeat should automatically return control to the master server when it reappears.
Note that the names of the two nodes at the end of the above file must match the names returned by uname -n on each server.
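A quick way to verify this requirement on each machine is to compare the node lines in ha.cf against the local hostname. A sketch (check_node is a hypothetical helper, not part of heartbeat):

```shell
# check_node FILE NAME: succeed if NAME is listed as a node in the given ha.cf
check_node() {
  grep '^node ' "$1" | awk '{print $2}' | grep -qx "$2"
}
# On each load balancer:
# check_node /etc/ha.d/ha.cf "$(uname -n)" && echo "node name OK"
```

If the check fails on either machine, fix either the node lines or the hostname before starting heartbeat.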
Next, create haresources:
vi /etc/ha.d/haresources
On both servers, the file should look like this:
loadb1 188.8.131.52
The file specifies the node that this resource group would prefer to run on and the IP address it serves. We specify our virtual IP, which is 188.8.131.52 (you should substitute yours). And finally, we need to set up authorisation keys:
vi /etc/ha.d/authkeys
The content of the file should be as follows, on both servers:
auth 1
1 sha1 somerandomstring
somerandomstring is a password that the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms – we use SHA1.
/etc/ha.d/authkeys should be readable by root only, so finally, run:
chmod 600 /etc/ha.d/authkeys
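If you would rather not invent the password by hand, you can generate a random one. A minimal sketch — it writes authkeys.example in the current directory rather than /etc/ha.d/authkeys, so it can be tried without root:

```shell
# Generate a random 40-character hex key and write an authkeys file using it
KEY=$(head -c 32 /dev/urandom | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > authkeys.example
chmod 600 authkeys.example
```

Copy the resulting file to /etc/ha.d/authkeys on both servers — remember the two copies must be identical.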
Step 4: Test heartbeat and total failover
On both herring and haddock, run the following:
/etc/init.d/heartbeat stop
/etc/init.d/heartbeat start
Now, if you visit your static IP, you should see the site running. After a few seconds, try ifconfig on herring to check that it is the master node:
You should see a new entry for eth0:0 as follows:
eth0:0    Link encap:Ethernet  HWaddr 02:00:2e:14:79:79
          inet addr:184.108.40.206  Bcast:220.127.116.11  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
There should be no equivalent entry in the output of ifconfig on haddock – if there is, check that your haresources file is identical on both servers.
Finally, the acid test:
In your ElasticHosts control panel, shut down herring. This is our master server, so when it is shut down, the site should automatically fail over to haddock. After you have shut down herring, re-visit the static IP address in a browser and check the site continues to work. You should also run ifconfig on haddock and check that the new eth0:0 entry has appeared, as above. If this is the case, then our high-traffic, scalable, redundant web application is now complete, with no single point of failure anywhere in the system.
This was the last chapter in the LAMP Tutorial series. Follow this link to read more tutorials.