I have a scenario where there are multiple MySQL 4.x servers. These databases were supposed to be replicating to another server. After checking things out on a slave, it appears that this slave has not replicated any databases in some time.
Some of these databases are > 4G in size and one is 43G (which resides on another server). Has anyone out there replicated databases without creating a snapshot to copy over to a slave? I cannot shut down the master server because of the downtime; it would probably take over an hour and 40 minutes to create a snapshot, so that is out of the question.
I was going to perform a LOAD DATA FROM MASTER on the slave to pull everything from scratch. Any idea how long this will take on databases ranging from 1-4G? The 43G database will be a job for another day. All of the tables on the master are MyISAM, so I don't think I will have a problem with the LOAD DATA FROM MASTER method.
What are the best methods on the slave to clean things up or reset things so I can just start from a clean slate?
Any suggestions?
Thanks in advance
You need a snapshot to start replication. Taking a snapshot requires the database to be locked, or at least read-only, so that you have a consistent point to start from.
Downtime is a necessary thing; customers usually understand it as long as it doesn't happen too often.
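For what it's worth, if you do end up taking that snapshot, the usual way to capture it together with the matching replication coordinates in one step is something like the following. This is only a sketch; check that your 4.x mysqldump actually supports these options before relying on it.

# --master-data=1 embeds a CHANGE MASTER TO statement with the binlog coordinates,
# --lock-all-tables holds a global read lock for the duration of the dump (needed for MyISAM)
mysqldump --all-databases --lock-all-tables --master-data=1 > snapshot.sql

Load snapshot.sql on the slave afterwards and run START SLAVE; the embedded coordinates tell it where to pick up.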
Magento is creating 700+ connections, leading to a database breakdown whenever the cache is flushed or indexing is triggered. The production site stays down for about 20 minutes until all the connections clear.
All the connections are firing the same query and remain in the "creating sort index" state, even though we are using a very generous database configuration.
The DB is on Amazon RDS. Any help is appreciated; this is breaking our production site.
This is exactly what a load balancer in front of a MySQL master-slave architecture is for. Let me explain how it works.
1) There is a master database.
2) There are multiple slave (replica) databases connected to the master.
Whenever there is a write on the master database, it is also applied to the slaves, so the slaves stay up to date with the same data as the master. Whenever you want to perform any maintenance, or the master goes down, you can switch over to one of the slaves and avoid the downtime. When the master is up and running again, you can switch back to it at any time.
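As a rough illustration, promoting a replica when the master goes down might look something like this (assuming MySQL 5.6+, where RESET SLAVE ALL exists; adapt for your version):

# on the replica chosen as the new master
mysql -e "STOP SLAVE; RESET SLAVE ALL; SET GLOBAL read_only = OFF;"
# then point the application or load balancer at this host instead of the failed master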
Check this link:
https://severalnines.com/blog/how-cluster-magento-nginx-and-mysql-multiple-servers-high-availability
Hope this helps you.
We've got a database-driven (MySQL) application which contains business-critical information. We're looking at building a system that will allow us to back up the DB frequently (say every 15 minutes), essentially so that we mitigate the danger of any data loss. We're torn between two setups:
Adding a backup job to a queue every 15 minutes via cron and storing these backups on another server. (To save space we would then delete most of these backups after 3 days, but keep the 06:00, 12:00 and 18:00 versions.) A rough sketch of this option is below, after the list.
or
Is there a RAID-like setup where all our data would be automatically copied to another hard drive, or in this case another server? If so, what would happen if we lost data, would the loss be carried over to the other server? (We would also run standard daily backups for our archives in addition to this.)
or
Is there another established method for creating frequent backups?
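For reference, the cron option above might look something like this; the paths, the remote host name, and the retention rules are placeholders, not a tested setup.

# crontab entry -- run a small dump script every 15 minutes
*/15 * * * * /usr/local/bin/db-backup.sh

# /usr/local/bin/db-backup.sh
#!/bin/sh
ts=$(date +%Y%m%d-%H%M)
mysqldump --all-databases | gzip > /backups/db-$ts.sql.gz
rsync -a /backups/ backuphost:/backups/        # copy off this server
# prune copies older than 3 days, keeping the 06:00/12:00/18:00 runs
find /backups -name 'db-*.sql.gz' -mtime +3 \
  ! -name '*-0600.sql.gz' ! -name '*-1200.sql.gz' ! -name '*-1800.sql.gz' -delete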
In my opinion, the optimal backup scheme would be the following.
A delayed slave. It allows you to quickly restore your database in case of a master failure. It can also help in the case of a DROP DATABASE or other bad SQL, but only if you catch the mistake within the delay window, so you need something in addition.
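On MySQL 5.6+ the delay can be configured directly; on older versions pt-slave-delay from Percona Toolkit achieves the same effect. A minimal sketch (the one-hour delay is just an example):

# configure the slave to stay one hour behind the master
mysql -e "STOP SLAVE; CHANGE MASTER TO MASTER_DELAY = 3600; START SLAVE;"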
Incremental backups every day with XtraBackup, taken from the delayed slave. Optionally you could also check TwinDB for incremental backups.
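With a reasonably recent Percona XtraBackup the incremental cycle looks roughly like this (directories are placeholders; older releases use the innobackupex wrapper instead):

# full base backup once
xtrabackup --backup --target-dir=/backups/base
# daily incremental relative to the base (or to the previous incremental)
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/base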
Since you need 15-minute granularity, you can pull the binary logs from the master with mysqlbinlog from MySQL 5.6 (even if the master is 5.5 or 5.1). mysqlbinlog runs on a remote host and continuously pulls the logs from the master.
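Something along these lines, using the 5.6 mysqlbinlog options for live binlog streaming (host, user, and starting log file are placeholders):

# stream binlogs continuously from the master to local files
mysqlbinlog --read-from-remote-server --host=master.example.com --user=repl --password \
            --raw --stop-never mysql-bin.000001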
If you need to restore the database, you have two ways.
If you can restore from the delayed slave, you promote that slave and use it as the new master.
If for some reason you can't use the delayed slave (for example, you missed the DROP within the delay window), then you restore last night's copy from the incremental backup and apply the binary logs from the time of the backup up to the moment of the accident (again, if the accident was a bad DROP TABLE, you replay the logs up to the last event before the DROP).
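The replay step might look roughly like this; the log file name and cut-off timestamp are placeholders for whatever matches your incident.

# after restoring last night's backup, replay binlogs up to just before the accident
mysqlbinlog --stop-datetime="2015-06-01 09:59:00" mysql-bin.000042 | mysql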
This scheme is optimal from a performance standpoint (no impact on the application) and allows essentially no data loss.
If you're doing backups more often than once an hour, what you need is replication. Setting up a secondary database server that can serve as a hot standby is a lot better than abusing your database with repeated full reads.
If you're backing up your database frequently, look at innobackupex to snapshot your tables, or possibly LVM snapshots.
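If you go the innobackupex route, the basic snapshot cycle is roughly the following; the backup path and the timestamped directory name are placeholders.

# take a snapshot into a timestamped directory under /backups
innobackupex /backups
# make the copy consistent before it can be restored
innobackupex --apply-log /backups/2015-06-01_02-00-00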
I have several slave DBs replicating from the same master DB. However, I would like to keep one of the slaves as a backup DB, one which never has rows updated or deleted.
Basically the purpose is to have a backup DB with all rows preserved, populated via replication (mysqldump is way too slow for the backup): no UPDATE or DELETE queries should be replicated, INSERT queries only. I know there will no doubt be some conflicts, but I still wonder if there are any statement/query filtering options on the slave end, or any other solutions.
You should never run a production database without a working backup scheme in place, at least as long as you value your data. If you fear that a wrong SQL statement can ruin your database, then you may want to try point-in-time recovery.
If you already use replication, your master server logs all write/update operations to its binlog, which it sends to the slave servers for replication. You can, for example, do nightly backups of your complete database. If you destroy your database in the morning, you can import the backup from the night and re-apply the statements from the binlog from after the backup up to just before the statement that killed your database.
You can then skip that statement and apply the statements that came afterwards. Note that this can cause consistency issues, as the statements after the skipped one may see different data in the database than they did when they were originally executed.
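A rough sketch of that replay-and-skip step; the binlog file name and the positions around the bad statement are placeholders you would look up with mysqlbinlog first.

mysqlbinlog --stop-position=123456 mysql-bin.000099 | mysql    # replay up to the bad statement
mysqlbinlog --start-position=123789 mysql-bin.000099 | mysql   # skip it and continue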
I have a similar problem. I know it's an old thread, but it may help others:
link: mysql replication works only if I choose database by USE database
I maintain a big MySQL database. I need to back it up every night, but the DB is active all the time; there are queries from users.
Right now I just disable the website and then do the backup, but this is very bad, as the service is unavailable and users don't like it.
What is a good way to back up the data if data is changed during the backup?
What is best practice for this?
I've implemented this scheme using a read-only replication slave of my database server.
MySQL database replication is pretty easy to set up and monitor. You can set it up so the slave receives all changes made to your production database, then take the slave offline nightly to make a backup.
The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.
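As a rough sketch of the nightly routine on such a read-only replica (paths are placeholders; pausing just the SQL thread is one way to "take it off-line" without losing the replication stream):

# in the replica's my.cnf, make it read-only for normal users:  read_only = 1
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases --quick | gzip > /backups/nightly.sql.gz
mysql -e "START SLAVE SQL_THREAD;"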
There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
Here's a link to the docs on MySQL Replication.
If you have a really large (50G+, like mine) MyISAM-only MySQL database, you can use locks and rsync. According to the MySQL documentation you can safely copy the raw files while a read lock is active; you cannot do this with InnoDB.
So if the goal is zero downtime and you have extra disk space, create a script:
rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
Then do the following:
Run FLUSH TABLES;
Run the script
Run FLUSH TABLES WITH READ LOCK;
Run the script again
Run UNLOCK TABLES;
On the first run, rsync will copy a lot of data without stopping MySQL. The second run will be very short; it only delays write queries, so it is close to a real zero-downtime solution.
Then do another rsync from /tmp/mysql/sync to a remote server, compress it, keep incremental versions, whatever you like.
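One detail worth spelling out: FLUSH TABLES WITH READ LOCK is only held while the session that issued it stays open, so the second rsync has to run from inside that same session. A minimal sketch, assuming the mysql client's system command is honored when reading from a heredoc and that credentials come from ~/.my.cnf:

mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system rsync -aP --delete /var/lib/mysql/* /tmp/mysql/sync
UNLOCK TABLES;
SQL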
This partly depends on whether you use InnoDB or MyISAM. For InnoDB, MySQL has its own paid solution for this (InnoDB Hot Backup), but there is an open-source alternative from Percona you may want to look at:
http://www.percona.com/doc/percona-xtrabackup/
What you want to do is called "online backup". Here's a pointer to a matrix of possible options with more information:
http://www.zmanda.com/blogs/?p=19
It essentially boils down to the storage backend that you are using and how much hardware you have available.
I'm currently using mysqldump to back up databases that are growing rapidly in size. Though I run it late at night, there have been occasional problems when it happens to run during a moment of high traffic (which does happen at night sometimes). For example, last night one of my sites locked up just after the time of the database backup, with a completely full (and non-clearing) process list.
Does anyone have a suggestion for a better way to approach this? Putting the site into a temporary maintenance state during the backup is not an option, as the goal is to maximize availability (some SQL dumps take a while). One idea that comes to mind is to run both master and slave copies and shut down + back up the slave copy, leaving the master alone during the process. Hopefully there is a simpler solution though; I'd rather not run a slave copy for backup purposes only unless absolutely necessary. Any suggestions?
Thanks.
Two thoughts:
Run the slave. If nothing else, it gives you a warm spare for your production traffic in case of failure. You can also run reports and tools against it, freeing up cycles on your production server.
Move to InnoDB and use mysqldump --single-transaction (see the man page).
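Something like the following; the output path is a placeholder, and --single-transaction only gives a consistent snapshot for InnoDB tables, which is why the switch matters:

# consistent dump of InnoDB tables without blocking writers during the backup
mysqldump --single-transaction --quick --all-databases > /backups/nightly.sql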
Good luck!
I use Percona XtraBackup, which is similar to InnoDB Hot Backup but has more functionality and is distributed for free. XtraBackup takes snapshots without locking InnoDB tables and records the current master log file info and, if requested, the slave info if you are taking the backup from a slave.
I would recommend running a slave and doing a backup like this, or with mysqldump. The slave gives you a hot backup that you can quickly switch over to, so you can be up and running within minutes if your master blows up due to a hardware issue or the various software or user errors that can take out a server. The backup with xtrabackup or mysqldump gives you something you can use to restore data if you accidentally drop a table or delete rows you shouldn't have, since the replicated server won't save you there.
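Taking the backup on the slave with the master coordinates recorded might look like this (backup directory is a placeholder; check the option names against your XtraBackup version):

# --slave-info records the master binlog position the slave had applied, in xtrabackup_slave_info
innobackupex --slave-info /backups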