MySQL backups using mysqldump

MySQL backups are essential when running a site with a MySQL backend. Generally you can get away with nightly backups, but on our site, due to a couple of issues we had in the past, we are forced to do hourly backups of our db.

Initially I was doing the backup by running mysqldump dbname > weekdayHour.dbname.sql every hour. This gave us a week's worth of hourly backups, with old backups automatically overwritten. Eventually we added stored procedures and triggers to the mix, and all of a sudden this dump wasn't capturing all the stored procedures and triggers. So we started using mysqldump with the -R parameter, which the man page describes as:

Dump stored routines (functions and procedures) from the dumped databases.
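A quick sanity check that routines actually made it into a dump is to grep the output for routine definitions, since mysqldump writes them out as CREATE ... PROCEDURE/FUNCTION statements. This is just a sketch; dump.sql stands in for your real backup file:

```shell
# Fake one line of "mysqldump -R" output so this check is runnable as-is;
# in practice you would grep your real dump file instead.
printf 'CREATE DEFINER=`root`@`localhost` PROCEDURE `sp_demo`()\n' > dump.sql

# Count routine definitions; 0 means -R was missing (or the db has no routines).
grep -Ec 'CREATE .*(PROCEDURE|FUNCTION)' dump.sql
```

If the count is zero after a backup of a database you know has routines, the -R flag probably got dropped from the command.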

Recently, as we started to see more and more traffic, we noticed that our server was under heavy load on the hour. Of course we quickly found it was mysqldump running on the hour that was causing the lag. Eventually we had enough traffic on the site that this was causing connection problems with MySQL. So I went through man mysqldump again to find a solution. And thanks to those smart people on the mysqldump dev team, there I found a couple more parameters to make this backup process a little less painful. At this point we are using: mysqldump -R -q --single-transaction dbname > weekdayHour.dbname.sql. This seems to have decreased the load on the server, and we haven't had errors connecting to MySQL since. Because we use InnoDB tables in this particular db, we could use the --single-transaction parameter. Here is what our friendly man mysqldump has to say about these two parameters:

-q …This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out…

--single-transaction …This option issues a BEGIN SQL statement before dumping data from the server. It is useful only with transactional tables such as InnoDB and BDB, because then it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications…

If you are thinking about using these two parameters, please spend a couple of minutes reading through man mysqldump and look at other notes that might pertain to your setup.

One last but very important thing we do after our backups run is to move the new backup files off the server, just in case the server dies. I achieved this using the ncftp package, which includes the ncftpput command-line utility. With ncftp, you can store the host/login/password in a file and tell ncftp to use that file for login information. Let's look at the script as a whole:

NOTE: the comments in this script are for information purposes only. You can remove them if you like.

#!/bin/sh
DATE=`date '+%u%H'` # this sets up weekly rotation of backup files
BACKUP_DIR="/admin/backups/" # this dir will be created if it doesn't exist
HOST=`hostname` #you may want to hard code this if you hostname returns wrong information
mkdir -p ${BACKUP_DIR}mysql/
/usr/local/mysql/bin/mysqldump -R -q --single-transaction --databases dbname1 dbname2 -ppassword > ${BACKUP_DIR}mysql/$HOST$DATE.sql
rm ${BACKUP_DIR}mysql/$HOST$DATE.sql.gz > /dev/null 2>&1
gzip -9 ${BACKUP_DIR}mysql/$HOST$DATE.sql > /dev/null 2>&1
/usr/bin/ncftpput -R -f ${BACKUP_DIR}hostinfo / ${BACKUP_DIR}mysql/$HOST$DATE.sql.gz
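The weekly rotation hinges on that date '+%u%H' format string: %u is the ISO weekday (1-7) and %H is the zero-padded hour (00-23), so each key recurs exactly once a week and the file with that name simply gets overwritten. A small sketch of what it produces:

```shell
# %u%H -> three characters: weekday digit followed by zero-padded hour,
# e.g. 314 for Wednesday at 14:00. That gives 168 distinct keys, i.e. one
# full week of hourly backup files before names start repeating.
DATE=`date '+%u%H'`
echo "$DATE"
```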

This is what your hostinfo file should contain (just make sure you edit it and put your own server IP, username, and password):
host serverip
user loginname
pass loginpassword
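To actually run this hourly, a crontab entry along these lines would do it (the script path here is an assumption; point it at wherever you saved the script above):

```shell
# m h dom mon dow  command  -- runs at the top of every hour
0 * * * *  /admin/backups/mysql_backup.sh > /dev/null 2>&1
```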

To read more about mysqldump, see man mysqldump.

13 thoughts on “MySQL backups using mysqldump”

  1. Pingback: mysqldump tips by crazytoon - Rusty Razor Blade

  2. Phil

    Two questions from a newbie:
    1) I'm dumping from MySQL 5.0.45 and restoring to 4.0. Do you foresee any issues?

    2) Can individual tables be dumped?

  3. Phil

    I was able to restore to the 4.0 database and to individual tables. Thanks.

    Is there a way to get the dump files to break into multiple files if they are over 10 MB? That's the limit my ISP allows for imports.

    I manually broke up the larger table dumps and it worked but the process was painful.


    [source server is win2003 with iis.]

  4. Steve

    I use the CMS PHP-Fusion on my current website. It has an old version of PHP. I am trying to move my database to my new host which has PHP 5.

    I am getting the following error…

    Error at the line 27: ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=83 ;

    Query: CREATE TABLE `fusion_admin` (
    `admin_id` tinyint(2) unsigned NOT NULL auto_increment,
    `admin_rights` char(2) NOT NULL default '',
    `admin_image` varchar(50) NOT NULL default '',
    `admin_title` varchar(50) NOT NULL default '',
    `admin_link` varchar(100) NOT NULL default 'reserved',
    `admin_page` tinyint(1) unsigned NOT NULL default '1',
    PRIMARY KEY (`admin_id`)

    MySQL: You have an error in your SQL syntax. Check the manual that corresponds to your MySQL server version for the right syntax to use near 'ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=83' at line

    Since I do not know squat about databases etc., is there any quick answer or fix to this situation? I have no idea what this means.

  5. Pingback: MySQL: How do I dump all tables in a database into separate files? | Technology: Learn and Share

  6. Zerd

    Sure you don’t want to use a pipe? Saves space and time.

    /usr/local/mysql/bin/mysqldump -R -q --single-transaction --databases dbname1 dbname2 -ppassword | gzip -q -9 - > ${BACKUP_DIR}mysql/$HOST$
    rm ${BACKUP_DIR}mysql/$HOST$DATE.sql.gz > /dev/null 2>&1
    mv ${BACKUP_DIR}mysql/$HOST$ ${BACKUP_DIR}mysql/$HOST$DATE.sql.gz

