
Oh dear MySQL slave, where did you put those rows?

I need help from my fellow MySQL users. I know some of the people who read this are a lot better than me with MySQL, so hopefully you can help 🙂

So today we decided to migrate one of our master database servers to new hardware. Since we got the hardware this morning and wanted to move onto it ASAP, we decided to take our slave down, copy the data from it, and bring it up on the future master server. At that point, we would let it run as a slave to the current master until it was time to take the old master down. The reason we did this instead of a mysqldump/import was to avoid the lag mysqldump creates on our server.

After we did all this and brought up the new master server, we started to notice odd issues. After looking around and comparing the old database with the new one, we found that the new database was missing data. How it happened is beyond me, and it is the reason I am writing this. We never had issues with the slave that would cause data to be lost; so what happened to those missing rows? Is this something that is common? Can we not trust our slave enough to promote it if the master died? Can we not run backups off the slave, confident that our data is protected and up to date, to keep the load down on our master? All these questions keep me awake and wondering…
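(For anyone in a similar spot: one check I could have run before the cutover is MySQL's built-in CHECKSUM TABLE, executed on both master and slave while writes are paused, to confirm the two sides actually match. The database and table names below are made up for illustration.)

-- run on the master, then on the slave; the Checksum values should match
-- (pause writes, or stop the slave SQL thread first, so both are at the same point)
mysql> CHECKSUM TABLE mydb.orders, mydb.customers;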

MySQL: How do you set up master-slave replication in MySQL? (CentOS, RHEL, Fedora)

Before we go into how to set up master-slave replication in MySQL, let us talk about some of the reasons I have set it up.

1) Offload some of the queries from one server to another and spread the load: One of the biggest advantages of having a master-slave setup in MySQL is being able to use the master for all of the inserts and send some, if not all, select queries to the slave. This will most probably speed up your application without you having to dive into optimizing all the queries or buying more hardware.

2) Do backups from the slave: One advantage people overlook is that you can use a MySQL slave to do backups from. That way the site is not affected at all when doing backups. This becomes a big deal when your database has grown to multiple gigs and every time you do backups using mysqldump, the site lags while tables are locked. For some sites, this could mean the site goes down for a few seconds to minutes. If you have a slave, you just take the slave out of rotation (this should be built into your code) and run backups off the slave. You can even stop the slave MySQL instance and copy the var folder instead of doing a mysqldump, as sketched below.
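Here is a minimal sketch of that cold-copy backup, assuming the data directory used later in this post (/usr/local/mysql/var/), an init script at /etc/init.d/mysqld, and a /backups directory; all three are assumptions to adjust for your install:

#!/bin/bash
# pause replication, shut the slave down cleanly, archive its data directory, bring it back
mysql -u root -e 'STOP SLAVE;'
/etc/init.d/mysqld stop
tar czf /backups/mysql-$(date +%Y%m%d).tar.gz -C /usr/local/mysql var
/etc/init.d/mysqld start
mysql -u root -e 'START SLAVE;'   # the slave then catches back up from the master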

Ok, let us dive into how to set up master-slave replication in MySQL. There are many configuration changes you can make to optimize your MySQL setup; I will just touch on the very basic ones needed to get replication working. Here are some assumptions:

Master server ip: 10.0.0.1
Slave server ip: 10.0.0.2
Slave username: slaveuser
Slave pw: slavepw
Your data directory is: /usr/local/mysql/var/

Put the following in your master my.cnf file under [mysqld] section:

# changes made to do master
server-id = 1
relay-log = /usr/local/mysql/var/mysql-relay-bin
relay-log-index = /usr/local/mysql/var/mysql-relay-bin.index
log-error = /usr/local/mysql/var/mysql.err
master-info-file = /usr/local/mysql/var/mysql-master.info
relay-log-info-file = /usr/local/mysql/var/mysql-relay-log.info
datadir = /usr/local/mysql/var
log-bin = /usr/local/mysql/var/mysql-bin
# end master

Copy the following to slave’s my.cnf under [mysqld] section:

# changes made to do slave
server-id = 2
relay-log = /usr/local/mysql/var/mysql-relay-bin
relay-log-index = /usr/local/mysql/var/mysql-relay-bin.index
log-error = /usr/local/mysql/var/mysql.err
master-info-file = /usr/local/mysql/var/mysql-master.info
relay-log-info-file = /usr/local/mysql/var/mysql-relay-log.info
datadir = /usr/local/mysql/var
# end slave setup

Restart MySQL on both servers so the new settings take effect, then create the replication user on the master:
mysql> grant replication slave on *.* to slaveuser@'10.0.0.2' identified by 'slavepw';

Do a dump of the data to move to the slave:
mysqldump -u root --all-databases --single-transaction --master-data=1 > masterdump.sql
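Because of --master-data=1, the dump records the master's binary log coordinates as a CHANGE MASTER TO statement near the top of the file. You can peek at them like this (a quick sketch):

head -n 50 masterdump.sql | grep 'CHANGE MASTER TO'

Note the MASTER_LOG_FILE and MASTER_LOG_POS values; we will want them in a moment.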

Import the dump on the slave:
mysql < masterdump.sql

After the dump is imported, go into the MySQL client by typing mysql. Let us tell the slave which master to connect to and what login/password to use. One caveat: specifying MASTER_HOST resets any binary log coordinates the dump import just set, so include the MASTER_LOG_FILE and MASTER_LOG_POS values recorded in your dump (the two values below are placeholders):
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_USER='slaveuser', MASTER_PASSWORD='slavepw', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

Let us start the slave:
mysql> start slave;

You can check the status of the slave by typing
mysql> show slave status\G

The Seconds_Behind_Master row tells you how many seconds the slave is behind the master. Don't worry if it doesn't say 0; the number should go down over time until the slave catches up with the master (at which point it will show Seconds_Behind_Master: 0). If it shows NULL, it could be that the slave is not started (you can start it by typing: start slave), or it could be that it ran into an error (which shows up in Last_Errno: and Last_Error: under show slave status\G).
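For reference, a healthy slave looks something like this (output abbreviated; your values will differ):

mysql> show slave status\G
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
                   Last_Errno: 0
                   Last_Error:

Both of the _Running fields should say Yes; a No there is the first thing to chase down.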


————————————-
DISCLAIMER: Please be smart and use code found on internet carefully. Make backups often. And yeah.. last but not least.. I am not responsible for any damage caused by this posting. Use at your own risk.

MySQL: How do I import individual table dump files in to MySQL using shell script?

After I wrote the post How do I dump all tables in a database into separate files?, I got emails from a couple of people asking how to import the individual table files back into MySQL. The first way is to import each file individually by typing:

mysql db_name < table1.sql

This works as long as you are only importing a few files. But if you need to import all of the files in a directory, which could number in the hundreds, this method does not scale well. To achieve this, I wrote a shell script that does the work for me. Of course, there are other ways to do this, and I am only showing you one of them. This works for me, so here it is:

#!/bin/bash
# import every .sql file in the current directory into the given database,
# moving each file into ./done once it has imported successfully
db=$1
if [ "$db" = "" ]; then
    echo "Usage: $0 db_name"
    exit 1
fi
mkdir -p done
clear
for sql_file in *.sql; do
    echo "Importing $sql_file"
    mysql "$db" < "$sql_file" && mv "$sql_file" done/
done
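If you saved this as, say, import_tables.sh (the file name is my own choice for illustration), run it from the directory holding the dumps:

chmod +x import_tables.sh
./import_tables.sh db_name

Anything that fails to import stays out of the done folder, so you can spot it and retry.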



MySQL: How do I dump all tables in a database into separate files?

There have been numerous occasions where I needed to make backups of individual tables from a selected database. Usually I can achieve this by typing:

mysqldump database_name table1 > table1.sql
mysqldump database_name table2 > table2.sql

This can be very painful if you have tens or hundreds of tables. Until today, I never ran into a situation where I had to dump more than a few tables at a time. Today I had to do a dump of 181 tables. I was not going to sit there and type that command with 181 table names; it is not just time consuming, it is also stupid. So I wrote this script to help me with the task. We still use the mysqldump command described above, except we run it programmatically to make it easier on us:

#!/bin/bash
# dump every table in the given database into its own .sql file,
# inside a scratch directory named after this script's PID ($$)
db=$1
if [ "$db" = "" ]; then
    echo "Usage: $0 db_name"
    exit 1
fi
mkdir $$
cd $$
clear
for table in `mysql $db -e 'show tables' | egrep -v 'Tables_in_'`; do
    echo "Dumping $table"
    mysqldump --opt -Q "$db" "$table" > "$table.sql"
done
if [ "$table" = "" ]; then
    echo "No tables found in db: $db"
fi
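As an aside, the mysql client has a -N (--skip-column-names) flag that suppresses the Tables_in_ header row, so the egrep filter can be dropped if you prefer; both forms should behave the same here:

# equivalent loop using -N instead of filtering the header with egrep
for table in `mysql -N $db -e 'show tables'`; do
    echo "Dumping $table"
    mysqldump --opt -Q "$db" "$table" > "$table.sql"
done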

You can also compress the files by adding bzip2, zip, or any other compression command after the mysqldump command. Here is the same script with bzip2 added:
#!/bin/bash
# same as above, but compress each dump with bzip2 as it is written
db=$1
if [ "$db" = "" ]; then
    echo "Usage: $0 db_name"
    exit 1
fi
mkdir $$
cd $$
clear
for table in `mysql $db -e 'show tables' | egrep -v 'Tables_in_'`; do
    echo "Dumping $table"
    mysqldump --opt -Q "$db" "$table" > "$table.sql"
    bzip2 "$table.sql"
done
if [ "$table" = "" ]; then
    echo "No tables found in db: $db"
fi

I do not recommend doing compression on a production server, since most compression programs put a decent amount of load on the server. Also note that this will delay your dump considerably. You may also want to use different parameters when running mysqldump; type man mysqldump in your shell to read more.
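If you must compress on a busy box, one small mitigation is to run the compressor at the lowest CPU scheduling priority after the dump loop finishes (a sketch using standard nice semantics):

# compress all the dumps after the fact, at minimum CPU priority
nice -n 19 bzip2 *.sql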



MySQL: InnoDB: ERROR: the age of the last checkpoint is [number]

One of the MySQL database servers I manage started having issues with backups yesterday. mysqldump was running, but nothing was happening on the backup side. I started to investigate why our full backups were failing. I opened up the MySQL error log file (mine is at /usr/local/mysql/var/hostname.err) and noticed many instances of the following error:

070815 15:31:46 InnoDB: ERROR: the age of the last checkpoint is 9433957,
InnoDB: which exceeds the log group capacity 9433498.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

I poked around and found out our log files were set to the default (5 MB) and needed to be increased. After doing some calculations and research, I decided that going to 50 MB would be a good choice at this point. So I made the change in the my.cnf file and added: innodb_log_file_size = 50M

I then stopped our MySQL server and restarted it. And to my horror, I saw the following error messages show up in the MySQL error logs:

070815 17:37:40 [ERROR] /usr/local/mysql/libexec/mysqld: Incorrect information in file: './dbname/table_name.frm'
070815 17:37:40 [ERROR] /usr/local/mysql/libexec/mysqld: Incorrect information in file: './dbname/table_name.frm'

I stopped the MySQL server right away. I then remembered, with the help of a friend, that I have to remove the old log files when changing their size via innodb_log_file_size. So I issued the command: mv ib_logfile? ..

This command moves both ib_logfile0 and ib_logfile1 down one directory. I didn't want to just remove them, so instead I moved them. After that, I restarted MySQL again and, to my comfort, everything came back up without errors. So, lesson learned: if you change the log file size via the innodb_log_file_size setting, make sure you move the existing log files before you start MySQL again.
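Put together, the safe sequence looks roughly like this (a sketch assuming the paths from this post and an init script at /etc/init.d/mysqld; adjust for your install):

# 1) add innodb_log_file_size = 50M under [mysqld] in my.cnf, then:
/etc/init.d/mysqld stop                  # clean shutdown so InnoDB flushes everything
cd /usr/local/mysql/var
mv ib_logfile? ..                        # park the old logs one directory up, just in case
/etc/init.d/mysqld start                 # mysqld recreates the logs at the new 50M size
tail /usr/local/mysql/var/hostname.err   # confirm it came up without errors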
