I have a master MySQL server on server X.
I want it to be backed up every N hours to another MySQL server (let's call it Y).
I don't know if it matters, but X is a Windows server and Y is Ubuntu Linux.
I do like the idea of replication, but can I make it work not in real time, but once every, let's say, 4 hours?
I worked at a place that was afraid of replication due to a previously-botched installation.
They still had binary logging in place, so I would FLUSH the binary logs, copy them to the second server, extract the statements with mysqlbinlog, and apply them to the second database.
You can control how often all of this happens, how much of your bandwidth the file copy consumes, etc.
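A minimal sketch of that cycle, assuming hypothetical host names, user names, and log file names, with Linux-style paths for illustration:

    # 1. Close the current binary log on the master and start a new one:
    mysql -h master_x -u repl_user -p -e "FLUSH LOGS"
    # 2. Copy the closed log to the second server; rsync's --bwlimit
    #    (in KB/s) caps how much bandwidth the transfer uses:
    rsync --bwlimit=5000 /var/lib/mysql/mysql-bin.000042 server_y:/tmp/
    # 3. On server Y, turn the log back into SQL statements and apply them:
    mysqlbinlog /tmp/mysql-bin.000042 | mysql -u root -p

Run the whole thing from cron (or Task Scheduler on Windows) every N hours and you have your schedule.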
Plus, if you want to switch to "real" replication, it's easy!
Good luck.
Is there a way to replicate MySQL when the master server already has a lot of data? I tried the normal way, but I had difficulty getting the MASTER_LOG_POS value. How can the slave server replicate data that already existed on the master?
Generally you start with an exact full copy of your existing database. This means making a real copy of your MySQL data directory (while the server is off), taking a (consistent) snapshot, or using a tool like Percona XtraBackup.
Only after you have two identical MySQL servers can you start replicating. Note that a plain mysqldump is not a good way to get a consistent snapshot.
If you have a relatively small amount of data, you could use mysqldump --master-data=1 --single-transaction. This will create a snapshot with the correct master binlog file and position recorded. It should not be used for production environments or large amounts of data.
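For example, a sketch of that small-data path (host and user names are placeholders):

    # On the master: dump everything in one consistent transaction.
    # --master-data=1 writes an active CHANGE MASTER TO line with the
    # correct binlog file and position at the top of the dump.
    mysqldump -u root -p --single-transaction --master-data=1 --all-databases > snapshot.sql
    # On the slave: load the dump (this also sets the binlog coordinates),
    # then point the slave at the master and start replicating.
    mysql -u root -p < snapshot.sql
    mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master_host',
        MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"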
I have a MySQL install on a shared server and have access through phpMyAdmin. I want to make a continuous, real-time clone of that database to a cloud MySQL database (we have created an Nginx-ready MySQL server specially for this database). I want to create a real-time clone of the old one, then update code to point to the new database...
I think you will have difficulty doing real-time replication of a MySQL database in a shared-server environment. Since you appear to be moving database servers, I would be inclined to do a hot copy of your data and install that on the new server. At the same time as taking that copy, you should switch on query logging in your application.
Your switch over would then consist of running logged queries against the new database (faster than they were logged!) and finally, at a point that all logged queries have been run, switching the configuration of the app so that the new db is used.
Edit: the problem with a hot copy is that data is being written to the db at the same time as it is being copied. That means that the 'last updated' time will be different for each table. On that basis, is it possible in your application to set up a 'last_updated' column for each row? If so you will be able to tell for each table which logged queries still need to be copied.
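If the schema allows it, a column like this (table name hypothetical) gets MySQL to maintain the timestamp for you:

    ALTER TABLE orders
      ADD COLUMN last_updated TIMESTAMP
        DEFAULT CURRENT_TIMESTAMP
        ON UPDATE CURRENT_TIMESTAMP;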
What you're looking for is replication. It has far too many options to cover here in a single post.
http://dev.mysql.com/doc/refman/5.5/en/replication.html
If you're going to do replication over the internet, you'll want to secure it. Your host might allow a virtual local area network, so the replication traffic doesn't use up your bandwidth allowance.
A great set of tools from Percona that you should look at is Maatkit (since folded into Percona Toolkit):
https://launchpad.net/percona-toolkit
Documentation and usage examples
http://www.maatkit.org/doc/
It's good for other tasks, but it also lets you replicate a live database quickly.
When you're working with live databases, make sure your backups are up to date.
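As one example of the kind of thing the toolkit can do, pt-table-sync can bring a table on one server in line with the same table on another (host, database, and table names here are placeholders; check the docs above before running anything with --execute):

    pt-table-sync --execute h=source_host,D=mydb,t=mytable h=dest_host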
I have a massive MySQL database (around 10 GB), and I need to copy it to a different server (Slicehost). I don't want to do a DB dump and reimport because I think that would take forever. Is it possible to just move the raw data files from one machine to the next, set up an identical MySQL server, and flip the switch?
Generally, yes. It's preferable to have the same underlying architecture and server version, but neither is strictly necessary. Make sure you stop the source server so that the raw files are a consistent copy.
I do this all the time when overwriting my dev database. We have backups on a replica that are made by tarring up /var/mysql while the server is stopped. I move those to another machine, overwrite the ib_logfile and ibdata files, then overwrite all the directories in data except for mysql and test.
It should work.
This is the principle that the mysqlhotcopy tool uses, although this tool is meant to be run while the server is operating.
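A rough sketch of the raw-copy approach, assuming a stopped source server, Linux-style paths, the same MySQL version on both ends, and a placeholder host name:

    # On the source machine:
    sudo service mysql stop
    sudo tar czf mysql-data.tar.gz -C /var/lib/mysql .
    scp mysql-data.tar.gz newhost:/tmp/
    # On the destination machine:
    sudo service mysql stop
    sudo tar xzf /tmp/mysql-data.tar.gz -C /var/lib/mysql
    sudo chown -R mysql:mysql /var/lib/mysql
    sudo service mysql start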
You don't have a "massive" database; you have a smallish database at 10G. So dump/restore should not be a problem.
Copying the files directly might work in a subset of circumstances, but dump/restore is much better (i.e. less chance of problems).
Clearly, try it on a non-production system with the same version(s) of MySQL and data size first to ensure that it's going to work on production.
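If disk space for an intermediate dump file is a concern at 10G, you can stream the dump straight into the new server (this assumes credentials are set up in ~/.my.cnf on both machines; the host name is a placeholder):

    mysqldump --single-transaction --all-databases | gzip | ssh newhost "gunzip | mysql"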
I need to set up 4 MySQL servers. Each of them needs to support both reads and writes, so a master server (that accepts only writes) is out of the question. I need the data between these 4 servers to be synchronized. It does not matter to me if they have a constant connection open between themselves or if they each connect periodically. I looked at the MySQL replication page but did not find it useful for what I need. What is the best way to do this?
It's called multi-master replication; there are tools like MMM (Multi-Master Replication Manager for MySQL) to help accomplish this.
Be warned, though: recovering from replication failures can be a very lengthy manual process, as you'll end up in situations where you're unsure which copy of the data is more up to date.
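Whatever tool you use, the usual first step is to keep the masters from fighting over auto-increment values. A sketch of the relevant my.cnf settings for four masters (server IDs and offsets are illustrative):

    # On server 1:
    [mysqld]
    server-id                = 1
    log-bin                  = mysql-bin
    auto_increment_increment = 4   # total number of masters
    auto_increment_offset    = 1   # this server's position (1-4)
    # Servers 2-4 are identical except server-id and
    # auto_increment_offset, which become 2, 3, and 4.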
When I have two MySQL servers that have different jobs (holding different databases), but I want to be able to use one of them as a stand-in when the other fails, how would you suggest I keep the data on both of them equal, "close to real time"?
Obviously it's not possible to make a full database dump every x minutes.
I've read about the Binary Log; is that the way I need to go? Will that not slow down the fallback server a lot? Is there a way to exclude some tables from the binary log, where it doesn't matter that the data has changed?
You may want to consider the master-master replication scenario, but with a slight twist. You can specify which databases to replicate and limit the replication for each server.
For server1 I would add --replicate-do-db=server_2_db and on server2 --replicate-do-db=server_1_db to your my.cnf (or my.ini on Windows). This means that only statements for server_1_db would be replicated to server2, and vice versa.
Please also make sure that you perform full backups on a regular basis and don't just rely on replication, as it provides no safety from an accidental DROP DATABASE statement or the like.
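Putting the pieces together, the relevant my.cnf sections might look like this (server IDs are illustrative, and each server acts as both a master and a slave of the other):

    # On server1 (applies only server_2_db from server2):
    [mysqld]
    server-id       = 1
    log-bin         = mysql-bin
    replicate-do-db = server_2_db
    # On server2 (applies only server_1_db from server1):
    [mysqld]
    server-id       = 2
    log-bin         = mysql-bin
    replicate-do-db = server_1_db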
Binary log is definitely the way to go. However, you should be aware that with MySQL you can't just flip back and forth between servers like that.
One server will be the master and the other will be the slave. You write/read to the master, but can only read from the slave server. If you ever write to the slave, they'll be out of sync and there's no easy way to get them to sync up again (basically, you have to swap them so the master is the new slave, but this is a tedious manual process).
If you need true hot-swappable backup databases you might have to go to a system other than MySQL. If all you want is a read-only live backup that you can use instantly in the worst-case scenario (master is permanently destroyed), Binary Log will suit you just fine.