Category Archives: Data Backup

MySQL: How do you set up master-slave replication in MySQL? (CentOS, RHEL, Fedora)

Before we go into how to set up master-slave replication in MySQL, let us talk about some of the reasons I have set it up in the past.

1) Offload some of the queries from one server to another and spread the load: One of the biggest advantages of having a master-slave setup in MySQL is being able to use the master for all of the inserts and send some, if not all, select queries to the slave. This will most probably speed up your application without having to dive into optimizing all the queries or buying more hardware.

2) Do backups from the slave: One advantage people often overlook is that you can use a MySQL slave to do backups from. That way the site is not affected at all when doing backups. This becomes a big deal when your database has grown to multiple gigs and every time you do backups using mysqldump, the site lags while tables are locked. For some sites, this could mean the site goes down for a few seconds to minutes. If you have a slave, you just take the slave out of rotation (this should be built into your code) and run backups off the slave, as sketched below. You can even stop the slave MySQL instance and copy the data directory instead of doing a mysqldump.
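A minimal sketch of that backup flow, assuming the slave is already out of rotation (the file name and credentials are illustrative): STOP SLAVE SQL_THREAD pauses applying changes so the data holds still for the dump, while the IO thread keeps collecting events from the master in the background:

mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases > slave_backup.sql
mysql -e "START SLAVE SQL_THREAD;"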

Ok, let us dive into how to set up master-slave replication in MySQL. There are many configuration changes you can make to optimize your MySQL setup; I will just touch on the very basic ones needed to get replication working. Here are some assumptions:

Master server ip: 10.0.0.1
Slave server ip: 10.0.0.2
Slave username: slaveuser
Slave pw: slavepw
Your data directory is: /usr/local/mysql/var/

Put the following in your master's my.cnf file under the [mysqld] section:

# changes made to do master
server-id = 1   # must be unique among all servers in the replication setup
relay-log = /usr/local/mysql/var/mysql-relay-bin
relay-log-index = /usr/local/mysql/var/mysql-relay-bin.index
log-error = /usr/local/mysql/var/mysql.err
master-info-file = /usr/local/mysql/var/mysql-master.info
relay-log-info-file = /usr/local/mysql/var/mysql-relay-log.info
datadir = /usr/local/mysql/var
log-bin = /usr/local/mysql/var/mysql-bin   # binary logging must be on for replication
# end master
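Restart MySQL after saving these changes. To confirm that binary logging is actually on (the master cannot replicate without it), you can run:

mysql> SHOW MASTER STATUS;

It should report the current binary log file (something like mysql-bin.000001) and position; if it comes back empty, the log-bin setting did not take effect.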

Copy the following to the slave's my.cnf under the [mysqld] section:

# changes made to do slave
server-id = 2   # must be different from the master's server-id
relay-log = /usr/local/mysql/var/mysql-relay-bin
relay-log-index = /usr/local/mysql/var/mysql-relay-bin.index
log-error = /usr/local/mysql/var/mysql.err
master-info-file = /usr/local/mysql/var/mysql-master.info
relay-log-info-file = /usr/local/mysql/var/mysql-relay-log.info
datadir = /usr/local/mysql/var
# end slave setup

Create the replication user on the master:
mysql> grant replication slave on *.* to slaveuser@'10.0.0.2' identified by 'slavepw';
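Before going further, it is worth testing from the slave box that this user can actually reach the master; a quick check like the following (using the IPs and credentials assumed above) saves you from debugging firewall or grant problems through replication errors later:

mysql -h 10.0.0.1 -u slaveuser -pslavepw -e "SELECT 1;"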

Do a dump of the data to move to the slave:
mysqldump -u root --all-databases --single-transaction --master-data=1 > masterdump.sql
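The --master-data=1 option is what ties the dump to replication: it records a CHANGE MASTER TO statement with the master's binary log file name and position at the top of the dump, so the slave knows exactly where to start reading. You can see it near the top of the file:

head -n 30 masterdump.sql | grep 'CHANGE MASTER'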

Copy the dump over to the slave (scp works fine for this) and import it:
mysql < masterdump.sql

After the dump is imported, go into the MySQL client by typing mysql. Let us tell the slave which master to connect to and what login/password to use (we do not need to specify a log file and position here, because --master-data=1 already recorded them during the import):
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_USER='slaveuser', MASTER_PASSWORD='slavepw';

Let us start the slave:
mysql> start slave;

You can check the status of the slave by typing
mysql> show slave status\G

The last row, Seconds_Behind_Master, tells you how many seconds the slave is behind the master. Don't worry if it doesn't say 0; the number should go down over time until the slave catches up with the master (at which point it will show Seconds_Behind_Master: 0). If it shows NULL, either the slave is not running (you can start it by typing: start slave) or it has run into an error (which shows up in Last_Errno: and Last_Error: under show slave status\G).
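For reference, in a healthy setup the key fields of show slave status\G look something like this (abbreviated; the exact layout varies a bit by version):

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                   Last_Errno: 0
        Seconds_Behind_Master: 0

If either of the _Running fields says No, look at Last_Errno/Last_Error and the MySQL error log.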


MySQL: How do I dump all tables in a database into separate files?

There have been numerous occasions where I needed to make backups of individual tables from a selected database. Usually I can achieve this by typing:

mysqldump database_name table1 > table1.sql
mysqldump database_name table2 > table2.sql

This can be very painful if you have tens or hundreds of tables. Until today, I never ran into a situation where I had to dump more than a few tables at a time. Today I had to do a dump of 181 tables. I was not going to sit there and type in that command with 181 table names. It is not just time consuming, it is also stupid. So I wrote this script to help me with the task. We still use the mysqldump command as described above, except we do it programmatically to make it easier on us:

#!/bin/bash
# Dump every table in the given database into its own .sql file.
db=$1
if [ "$db" = "" ]; then
    echo "Usage: $0 db_name"
    exit 1
fi
# work inside a scratch directory named after this shell's PID
mkdir "$$"
cd "$$" || exit 1
for table in `mysql "$db" -e 'show tables' | grep -v 'Tables_in_'`; do
    echo "Dumping $table"
    mysqldump --opt -Q "$db" "$table" > "$table.sql"
done
if [ "$table" = "" ]; then
    echo "No tables found in db: $db"
fi
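Save the script as something like dump_tables.sh (the name is up to you), make it executable, and point it at a database:

chmod +x dump_tables.sh
./dump_tables.sh my_database

The per-table .sql files land in a directory named after the script's process ID ($$), so repeated runs will not overwrite each other.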

You can also compress the files by adding bzip2, zip or any other compression command after the mysqldump command. Here is the same script with a bzip2 command added:
#!/bin/bash
# Same as above, but compress each dump with bzip2 as we go.
db=$1
if [ "$db" = "" ]; then
    echo "Usage: $0 db_name"
    exit 1
fi
mkdir "$$"
cd "$$" || exit 1
for table in `mysql "$db" -e 'show tables' | grep -v 'Tables_in_'`; do
    echo "Dumping $table"
    mysqldump --opt -Q "$db" "$table" > "$table.sql"
    bzip2 "$table.sql"
done
if [ "$table" = "" ]; then
    echo "No tables found in db: $db"
fi

I do not recommend doing compression on a production server, since most compression programs put a decent amount of load on the server. Also note that compressing will slow your dump down considerably. You may also want to use different parameters when running mysqldump; type man mysqldump in your shell to read more.
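If you must compress on a busy production box, one way to soften the blow (my suggestion, not part of the original script) is to run the compressor at the lowest CPU priority with nice:

nice -n 19 bzip2 "$table.sql"

The dump will take even longer, but bzip2 only gets CPU time when nothing more important wants it.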


MySQL: InnoDB: ERROR: the age of the last checkpoint is [number]

One of the MySQL database servers I manage started to have issues with backups yesterday. mysqldump was running but nothing was happening on the backup side. I started to investigate why our full backups were failing. I opened up the MySQL error log file (mine is at: /usr/local/mysql/var/hostname.err) and noticed many instances of the following error:

070815 15:31:46 InnoDB: ERROR: the age of the last checkpoint is 9433957,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

I poked around and found out our log files were set to the default (5 MB) and needed to be increased. After doing some calculations and research, I decided that going to 50 MB was a good way to go at this point. So I made the change in the my.cnf file and added: innodb_log_file_size = 50M

I then stopped our MySQL server and restarted it. And to my horror, I saw the following error messages show up in the MySQL error log:

070815 17:37:40 [ERROR] /usr/local/mysql/libexec/mysqld: Incorrect information in file: './dbname/table_name.frm'
070815 17:37:40 [ERROR] /usr/local/mysql/libexec/mysqld: Incorrect information in file: './dbname/table_name.frm'

I stopped the MySQL server right away. I then remembered, with the help of a friend, that I have to move the old log files out of the way whenever I change their size through innodb_log_file_size. So from the data directory I issued the command: mv ib_logfile? ..

This command moves both ib_logfile0 and ib_logfile1 down one directory. I didn't want to just remove 'em, so instead I moved them. After that, I restarted MySQL again and, to my comfort, everything came back up without errors. So lesson learned: if you change the ib_logfile size using the innodb_log_file_size setting, make sure you move the existing log files out of the way before you start MySQL again.
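For the record, here is the whole sequence in one place, as a sketch (the mysqld_safe path and the old_iblogs directory name are just examples; adjust them to your install, and keep the old files around until you have verified the restart):

mysqladmin shutdown
cd /usr/local/mysql/var
mkdir old_iblogs
mv ib_logfile? old_iblogs/
# now set innodb_log_file_size = 50M in my.cnf
/usr/local/mysql/bin/mysqld_safe &

On startup InnoDB will notice the log files are missing and create fresh ones at the new size.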


Apache: How do you set up log rotation and archiving for Apache logs?

If you have a website and have never archived your Apache logs, you may be surprised one day to run out of space because the logs have eaten all the free space on a partition. Most people don't think about archiving until it is too late. Here we set up a simple log rotation script which works for both Apache 1.x and 2.x. I have included comments in the script itself to help you understand what is going on throughout.

#!/bin/sh
#
# Created by: Sunny Walia
# Date created: 5/24/03
# Date Modified: 5/29/07
# Purpose: do any type of preprocessing before log.php can do its work.
# Also does log rotation and backup.
#
# Modification history:
# 5/25/03: added comments.
# Gzip and move to backup logs folder
# added config info on top
#
#
# configuration info
SERVERNAME="svr1"
WORKINGDIR="/admin/backups/$SERVERNAME/logs/"
BACKUPLOGSDIR="/admin/backups/$SERVERNAME/logs/archived/"
BACKUP_DIR="/admin/backups/$SERVERNAME/"
LOGDIR="/usr/local/apache2/logs/"
DATE=`date +%m%d%y%H`
APACHECTL_LOC="/usr/local/apache2/bin/"
#
mkdir -p "$WORKINGDIR"
mkdir -p "$BACKUPLOGSDIR"
mkdir -p "$BACKUP_DIR"
#
# move the logs into their temp name so apache can continue to write to it
mv ${LOGDIR}access_log ${LOGDIR}${SERVERNAME}_access_log.old
mv ${LOGDIR}error_log ${LOGDIR}${SERVERNAME}_error_log.old
#
# restart apache gracefully so it finishes serving current requests
# and starts a new log file when done
${APACHECTL_LOC}apachectl graceful
#
# give some time to apache to finish serving pending requests (in secs)
sleep 15
#
# copy the files to our directory for processing
cp ${LOGDIR}${SERVERNAME}_access_log.old ${WORKINGDIR}${SERVERNAME}_access_log
cp ${LOGDIR}${SERVERNAME}_error_log.old ${WORKINGDIR}${SERVERNAME}_error_log
#
mv ${LOGDIR}${SERVERNAME}_access_log.old ${LOGDIR}${SERVERNAME}_access_log.$DATE
mv ${LOGDIR}${SERVERNAME}_error_log.old ${LOGDIR}${SERVERNAME}_error_log.$DATE
gzip ${LOGDIR}${SERVERNAME}_access_log.$DATE
gzip ${LOGDIR}${SERVERNAME}_error_log.$DATE
#
# we move instead of copy-then-delete: if mv fails (e.g. because the target
# location is full), we still have the originals
mv ${LOGDIR}${SERVERNAME}_access_log.$DATE.gz $BACKUPLOGSDIR
mv ${LOGDIR}${SERVERNAME}_error_log.$DATE.gz $BACKUPLOGSDIR

Let us make the script executable:
chmod +x script_name.sh

You can automate this to happen every day by putting it in your crontab:
crontab -e

1 0 * * * /path/to/this/script

This will rotate your logs at 12:01 AM every day.
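If you would like a record of each nightly run, redirect the script's output in the crontab entry (the log path here is just an example):

1 0 * * * /path/to/this/script >> /var/log/apache_log_rotation.log 2>&1

The 2>&1 sends errors to the same file, which makes it much easier to figure out why a rotation failed while you were asleep.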


Rsync: Using rsync to backup data from one server to another over SSH. Quick rsync tutorial.

Rsync is a great tool which can be used for many tasks that involve copying/moving data. If privacy/security is a concern, which it always should be, you can have rsync do all the copying/moving of data over SSH. Read through "man rsync" to get a deeper understanding of rsync. Here is my attempt at a short tutorial on rsync. Let us start with the simplest example of using rsync over ssh.

rsync -ae ssh server1:/home /home/backups/server1_home_backup/

This command downloads all the files/directories from /home on server1 and copies them to /home/backups/server1_home_backup/
-a = archive mode. This preserves permissions, timestamps, etc.
-e = specify which remote shell to use. In our case we want ssh, which follows right after the "e".

Let us improve on this and add a couple more parameters:

rsync -zave ssh --progress server1:/home /home/backups/server1_home_backup/
-z = adds compression (handy over slow links).
-v = verbose
--progress = my favorite parameter when I am doing rsync manually, not so useful when you have it in cron. This shows progress (how_many_files_left/how_many_files_total) and transfer speed along with some other useful data.

Great.. we are moving along pretty well. Let us add a cleanup parameter so the backup stays an exact mirror of the source.

rsync --delete-after -zave ssh --progress server1:/home /home/backups/server1_home_backup/

--delete-after = this deletes files on the backup server which are missing from the source, after ALL syncing is done. If you don't mind having extra files on your backup server and have plenty of disk space to spare, do not use this parameter.

Lastly, one of the VERY handy parameters:

rsync --delete-after -zave ssh --progress server1:/home /home/backups/server1_home_backup/ -n

The -n (or --dry-run) parameter is great for testing. It will not transfer or delete any files; instead it reports what it would have done if it had been run without the -n parameter. This way you can test without destroying or transferring data only to find out that is not what you wanted.
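Once a command has passed the -n test, it is a natural candidate for cron. Here is a sketch of a nightly entry (the 2 AM schedule and log path are arbitrary, and this assumes passwordless SSH keys are set up between the two servers, since cron cannot type a password):

0 2 * * * rsync -aze ssh --delete-after server1:/home /home/backups/server1_home_backup/ >> /var/log/rsync_backup.log 2>&1

--progress is left out on purpose; there is no terminal attached to a cron job to watch it on.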

For further reading: man rsync