Oh dear MySQL slave, where did you put those rows?

I need help from my fellow MySQL users. I know some of the people who read this are a lot better at MySQL than I am, so hopefully you can help :)

So today we decided to migrate one of our master database servers to new hardware. Since the hardware arrived this morning and we wanted to move onto it as soon as possible, we decided to take our slave down, copy its data, and bring it up on the future master server. At that point, we would let it run as a slave to the current master until it was time to cut over. The reason we did this instead of a mysqldump/import was to avoid the lag mysqldump creates on our server.
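For context, the promotion steps we followed looked roughly like this. This is a sketch, not our exact commands; the host name, binlog file, and position are placeholders you would read off `SHOW SLAVE STATUS` / `SHOW MASTER STATUS` on your own boxes:

```sql
-- On the slave being taken down for the copy:
STOP SLAVE;                          -- stop replication so the data files stop changing
FLUSH TABLES WITH READ LOCK;         -- quiesce writes before copying the data directory

-- (copy the data directory to the new hardware, then on the new server:)
CHANGE MASTER TO
  MASTER_HOST='old-master.example.com',  -- placeholder host
  MASTER_LOG_FILE='mysql-bin.000123',    -- placeholder; from SHOW SLAVE STATUS on the old slave
  MASTER_LOG_POS=4567;                   -- placeholder position
START SLAVE;                         -- catch up with the current master until cutover
```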

After we did all this and brought up the new master server, we started to notice odd issues. After looking around and comparing the old database with the new one, we found that the new database was missing data. How it happened is beyond me, and that is why I am writing this. We never had issues with the slave that would explain lost data; so what happened to those missing rows? Is this something common? Can we not trust a slave enough to promote it to master if the master dies? Can we not run backups off the slave, confident that our data is protected and up to date, in order to keep load off our master? All these questions keep me awake and wondering…

6 thoughts on “Oh dear MySQL slave, where did you put those rows?”

  1. Mark Robson

You MUST monitor MySQL slaves all the time using something such as mk-table-checksum; otherwise there is no guarantee they will stay in sync, even if they start out that way.
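For example, a periodic checksum run with Maatkit's mk-table-checksum might look something like the following (option names and DSN syntax are from Maatkit's documentation; verify them against your installed version, and the user/password values here are placeholders):

```sh
# Checksum tables on the master, storing results in a table that
# replicates down to the slaves:
mk-table-checksum --replicate test.checksum h=master-host,u=check_user

# Afterwards, compare the replicated checksum results to find slaves
# that have drifted out of sync:
mk-table-checksum --replicate test.checksum --replicate-check 1 h=master-host
```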

There are many possible reasons for this; the MySQL manual explains some of them. Most of them are to do with the application’s design, while others are shortcomings in MySQL itself.

Using row-based replication on 5.1+ might be better, but it still won’t be guaranteed, so you’ll want to continue checking.
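For reference, switching to row-based replication on 5.1 is a configuration change on the master (a minimal my.cnf fragment; the setting name is standard, everything else about your config is assumed):

```ini
# my.cnf on the master (MySQL 5.1+)
[mysqld]
binlog_format = ROW   # log the actual row changes instead of the SQL statements
```

Row-based logging sidesteps many of the non-deterministic-statement problems that make statement-based replication drift, but as noted above it is not a substitute for checksumming.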

    Mark

  2. Sunny Walia Post author

    Frank: what do you use for large databases if that tool is not suitable for it?

Xaprb/Mark: Thanks for your comments. Good to know it is not something I misconfigured and that it is a known issue.

  3. Xaprb

Well… define small. That means different things to different people. mk-table-checksum is suitable for 100GB or so of data, depending on your performance needs.

  4. Sunny Walia Post author

From all the people I have talked to (mostly MySQL consultants), none have ever had issues with mk-table-checksum on any of the databases they used it on. I really don’t think I need to be concerned with size at this time. Since Baron mentioned 100GB of data and none of my databases are near that (the biggest is close to 40GB), I will use this tool until I start to see performance issues.
