LAMP Tutorials (3/6): Create a Second MySQL Cloud Database Server

This post is the third chapter of a LAMP Tutorial series in which we will show you how to set up a fully redundant, high-availability LAMP stack in the cloud with ElasticHosts.

We have now set up a LAMP stack and moved MySQL to a separate server. The next step in creating our scalable, redundant web application is to add a second MySQL server. This will help the application scale and provide redundancy – if one MySQL server fails for any reason, your application will simply switch to the other one.

In general, there are two kinds of MySQL replication:

  • Master-slave replication means one database is the master (handling both read and write requests) and one the slave (handling read requests only).
  • Master-master replication means both servers act as master and slave at once: each accepts write requests and replicates them to the other.

For this tutorial, we’re going to use master-master replication: that means each of our database servers functions as both a master and a slave.

Step 1: Create a third server

First we create a third server, on which we will create our second MySQL database. Following our naming convention for database servers, this server will be called crocodile. So, create crocodile and install MySQL and rsync, as described in the Set Up a LAMP Stack tutorial. Then set it up to use our private VLAN, with the IP address 10.0.0.3.

[Figure: Three Servers]
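For reference, if crocodile runs the same Debian/Ubuntu image as the servers in the earlier parts, the install and network steps look roughly like this (a sketch only; package names and the VLAN interface name, assumed here to be eth1, may differ on your setup):

crocodile$ apt-get update
crocodile$ apt-get install mysql-server rsync
crocodile$ vi /etc/network/interfaces

auto eth1
iface eth1 inet static
    address 10.0.0.3
    netmask 255.255.255.0

crocodile$ ifup eth1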

Step 2: Create a second database

The next thing we do is dump and copy the MySQL database on beagle over to crocodile so that we have two databases in the same state.

This is similar to what we did in the Move MySQL to a Separate Cloud Database Server tutorial, but we reproduce the instructions here for convenience:

beagle$ mysqldump -u root -p -c prices > prices.dump  
beagle$ rsync prices.dump root@10.0.0.3:~  

On crocodile, set up a new MySQL database called prices:

crocodile$ mysql -u root -p  
> CREATE DATABASE prices;
> EXIT;

And then import the database dump:

crocodile$ mysql -u root -p -D prices < prices.dump  

Check it has copied okay:

crocodile$ mysql -u root -p  
> USE prices;
> SELECT * FROM metals;
> EXIT;

The next thing to do is to grant the correct permissions on the new MySQL database on crocodile:

crocodile$ mysql -u root -p  
> GRANT ALL PRIVILEGES ON prices.* TO elastic1@10.0.0.1 IDENTIFIED BY 'oag4Chai';
> FLUSH PRIVILEGES;
> EXIT;

We also need to edit the MySQL configuration on crocodile to allow connections from other IP addresses:

crocodile$ vi /etc/mysql/my.cnf  

And change the bind-address line so that MySQL listens on crocodile's private VLAN address:

bind-address = 10.0.0.3  

Then restart MySQL:

crocodile$ /etc/init.d/mysql restart  
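If you want a quick sanity check that MySQL is now listening on the VLAN address rather than only on localhost, something like this should show port 3306 bound to 10.0.0.3 (assuming netstat is available on your image):

crocodile$ netstat -ntl | grep 3306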

Step 3: Create a slave user on both servers

Next, on each server, we need to allow a user from the other server to connect to the database as a replication slave. In each GRANT we use the IP address of the other server, plus a username and password that will go into that other server's configuration in the next step. You may want to use more secure usernames and passwords than these:

beagle$ mysql -u root -p  
> GRANT REPLICATION SLAVE ON *.* TO user_crocodile@10.0.0.3 IDENTIFIED BY 'pwd_crocodile';
> FLUSH PRIVILEGES;
> EXIT;
crocodile$ mysql -u root -p  
> GRANT REPLICATION SLAVE ON *.* TO user_beagle@10.0.0.2 IDENTIFIED BY 'pwd_beagle';
> FLUSH PRIVILEGES;
> EXIT;

Step 4: Configure our servers

Next, stop the MySQL server on beagle:

beagle$ /etc/init.d/mysql stop  

Open /etc/mysql/my.cnf:

beagle$ vi /etc/mysql/my.cnf  

And uncomment/edit the following lines to enable binary logging, give the server a unique ID, and specify which database we want to replicate:

server-id = 1  
log_bin = /var/log/mysql/mysql-bin.log  
binlog_do_db = prices  

Then, go to the end of the file, and add these extra lines:

replicate-same-server-id = 0  
auto-increment-increment = 2  
auto-increment-offset = 1  
master-host = 10.0.0.3  
master-user = user_beagle  
master-password = pwd_beagle  
master-connect-retry = 60  
replicate-do-db = prices  

And do the same on crocodile, as follows:

crocodile$ /etc/init.d/mysql stop  
crocodile$ vi /etc/mysql/my.cnf

server-id = 2  
log_bin = /var/log/mysql/mysql-bin.log  
binlog_do_db = prices  

And add these lines to the end of the file:

replicate-same-server-id = 0  
auto-increment-increment = 2  
auto-increment-offset = 2  
master-host = 10.0.0.2  
master-user = user_crocodile  
master-password = pwd_crocodile  
master-connect-retry = 60  
replicate-do-db = prices  

Note that the value of auto-increment-offset is different on the different machines – that’s very important to prevent primary key duplication!
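To see why, note what these settings do to AUTO_INCREMENT: with auto-increment-increment = 2, beagle (offset 1) only ever hands out the keys 1, 3, 5, ... while crocodile (offset 2) hands out 2, 4, 6, ..., so inserts on the two masters can never produce the same primary key. Once MySQL is back up after the restart below, you can confirm the effective values on each server (shown here for beagle; this is just a read-only check):

beagle$ mysql -u root -p
> SELECT @@auto_increment_increment, @@auto_increment_offset;
> EXIT;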

Now restart MySQL on both machines:

beagle$ /etc/init.d/mysql restart  
crocodile$ /etc/init.d/mysql restart  

Then check the log files for any errors (depending on your MySQL packaging, errors may instead be written to /var/log/mysql/error.log or to syslog):

beagle$ vi /var/log/mysql.log  
crocodile$ vi /var/log/mysql.log  

Step 5: Synchronize the servers

Almost there. Now we need to find the current position in each server's binary log; the binary log is where replication operations are written on the master and read from by the slave.

beagle$ mysql -u root -p  
> FLUSH TABLES WITH READ LOCK;
> STOP SLAVE;
> SHOW MASTER STATUS;

Note down the file and position. Leave the lock in place for now to prevent the database changing; we'll release it once replication is started.

+------------------+----------+------------+----------------+
| File             | Position | BinlogDoDB | BinlogIgnoreDB |
+------------------+----------+------------+----------------+
| mysql-bin.000002 |      106 |            |                |
+------------------+----------+------------+----------------+

And do the same on crocodile:

crocodile$ mysql -u root -p
> FLUSH TABLES WITH READ LOCK;
> STOP SLAVE;
> SHOW MASTER STATUS;

+------------------+----------+------------+----------------+
| File             | Position | BinlogDoDB | BinlogIgnoreDB |
+------------------+----------+------------+----------------+
| mysql-bin.000002 |      106 |            |                |
+------------------+----------+------------+----------------+

Next, on beagle, type the following (inserting the file and position values from the SHOW MASTER STATUS output on crocodile above). You're telling it the address of the master host, the login details to use when connecting, and the position in the master's binary log to start replicating from:

> CHANGE MASTER TO MASTER_HOST='10.0.0.3', MASTER_USER='user_beagle', MASTER_PASSWORD='pwd_beagle', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=106;
> START SLAVE;
> UNLOCK TABLES;

And on crocodile (this time using the file and position values from the query on beagle):

> CHANGE MASTER TO MASTER_HOST='10.0.0.2', MASTER_USER='user_crocodile', MASTER_PASSWORD='pwd_crocodile', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=106;
> START SLAVE;
> UNLOCK TABLES;

To confirm that replication is working, first try this command on both servers:

> SHOW SLAVE STATUS\G
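The output is verbose; the key fields to look for are roughly these (abridged; the full output has many more fields):

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0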

In the results, both Slave_IO_Running and Slave_SQL_Running should say “Yes”. If either says “No”, something went wrong; check the error log. You can now try issuing an actual command on beagle, and seeing if it gets copied to crocodile:

beagle$ mysql -u root -p  
> USE prices;
> INSERT INTO metals (name, price_usd_lb) VALUES ('antimony', 155.5);
crocodile$ mysql -u root -p  
> USE prices;
> SELECT * FROM metals;

Now we have two databases running, and we can just tell our front-end web application to use both.

Step 6: Round-robin database selection

A simple way to have our front-end web application alternate between the two databases is to pick one at random on each request, falling back to the other if the connection fails. On puffin, our web server, open our PHP page:

puffin$ vi /var/www/index.php  

And replace the existing first line of PHP (connecting to the database) with this:

// pick one of the two database servers at random (roughly 50/50),
// keeping the other as a fallback
$ran = rand(1, 10);
if ($ran <= 5) {
    $db = '10.0.0.2:3306';
    $backup = '10.0.0.3:3306';
} else {
    $db = '10.0.0.3:3306';
    $backup = '10.0.0.2:3306';
}
// connect to the chosen database, falling back to the other on failure
$con = mysql_connect($db, 'elastic1', 'oag4Chai');
if (!$con) {
    $con = mysql_connect($backup, 'elastic1', 'oag4Chai') or die('Could not connect to the database server!');
}
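Note that the mysql_* functions used throughout this series are the old MySQL extension, which was removed in PHP 7. If your puffin image ships a newer PHP, a roughly equivalent version using mysqli (a sketch with the same assumed credentials, not part of the original tutorial) would be:

// choose a primary and a backup server at random
$hosts = (rand(1, 10) <= 5)
    ? array('10.0.0.2', '10.0.0.3')
    : array('10.0.0.3', '10.0.0.2');

// connect with mysqli, selecting the prices database; fall back on failure
$con = @mysqli_connect($hosts[0], 'elastic1', 'oag4Chai', 'prices', 3306);
if (!$con) {
    $con = mysqli_connect($hosts[1], 'elastic1', 'oag4Chai', 'prices', 3306)
        or die('Could not connect to the database server!');
}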

Restart the web server, and check that the front page looks okay:

puffin$ /etc/init.d/apache2 restart  

You now have a (very) simple round-robin load balancer for your database.
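If you want to confirm that requests really are being spread across both servers, one quick check (purely illustrative, using the legacy mysql_* functions like the rest of this series) is to print the server-id of whichever server answered; reloading the page a few times should show both 1 and 2:

// show which MySQL server handled this request (server-id 1 = beagle, 2 = crocodile)
$result = mysql_query('SELECT @@server_id', $con);
$row = mysql_fetch_row($result);
echo 'Served by MySQL server-id: ' . $row[0];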

You're finished



In the next tutorial, we’ll look at Adding a Second Cloud Web Server with Round-Robin DNS Load Balancing.