mysql sync two tables from 2 databases - mysql

I have a question. I have two tables on two different servers, and both tables have the same structure. The master table on one server gets updated on a daily basis, so I want a cron job or a PHP script run from cron to update the second, slave table on the other server.
I have seen a lot of scripts, but none resolved my requirements.

I can't believe you didn't find a suitable script to do this. Depending on server-to-server bandwidth and connectivity, and table data size, you can:
directly transfer the whole table:
mysqldump [options] sourcedatabase tablename \
| mysql [options] --host remoteserver --user username ...
transfer the table with MySQL compression:
# same as above, but add the "-C" flag to the mysql command
transfer using SSH encryption and compression; mysql is executed remotely
mysqldump [options] sourcedatabase tablename \
| ssh -C user@remoteserver 'mysql [options]'
transfer using an intermediate SQL file and rsync, so that only the modifications travel over the wire:
mysqldump [options] sourcedb tbl > dump.sql
rsync [-z] dump.sql user@remoteserver:/path/to/remote/dump.sql
ssh user@remoteserver "mysql [options] < /path/to/remote/dump.sql"
The above are all simple table overwrites: remote data is LOST and replaced by the master copy. The mysqldump-plus-rsync-plus-ssh approach runs in a time roughly proportional to the modifications, which means that if you have a 10-GB SQL dump and add a dozen INSERTs, the transfer stage will need at most a couple of seconds to synchronize the two SQL files.
To optimize the insert stage too, either you go for full MySQL replication, or you will need a way to identify operations on the table in order to replicate them manually at sync time. This may require alterations to the table structure, e.g. adding "last-synced-on" and "needs-deleting" columns, or even introduction of ancillary tables.
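To make that last point concrete, here is a minimal sketch of the change-tracking idea, assuming a hypothetical last_modified column and a placeholder cutoff timestamp that your cron script would record between runs; it propagates inserts and updates, but not deletes:
# One-off: add a change-tracking timestamp on the master
# (the column name last_modified is made up; adapt it to your schema)
mysql sourcedatabase -e "ALTER TABLE tablename
  ADD COLUMN last_modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP
  ON UPDATE CURRENT_TIMESTAMP"
# At each sync: ship only rows modified since the previous run
# (the cutoff below is a placeholder your cron script should track);
# --replace turns INSERTs into REPLACEs so existing rows get updated
mysqldump --no-create-info --replace \
  --where="last_modified > '2024-05-01 00:00:00'" \
  sourcedatabase tablename \
| mysql [options] --host remoteserver --user username sourcedatabase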

You can use one simple solution: the Data Synchronization tool in dbForge Studio for MySQL.
Create a Data Comparison project that compares and synchronizes the two tables on the two different MySQL servers.
Run the application in command-line mode using the created Data Comparison project document (a *.dcomp file).


Mysql Db sync options

Question on MySQL replication (mainly).
So I have 2 MySQL databases with identical schemas, but not connected by a network of any kind. I need one-way (only) syncing of data, i.e. Db1 always needs to be copied/dumped down and synced with Db2. There is update/insert/delete activity on both.
I have ensured that DB2 (which is the receiver, always) has sequences in a very high range - so records created on or 'owned' by DB2 won't conflict with DB1 records when synced. There is also a rule that down on DB2, we won't edit any data that was created on DB1 (we can tell by the sequence number, but also by the kinds of data being inputted on each DB).
I've already got this working via a mysqldump (from Db1), modifying the dump to use "REPLACE INTO" instead of "INSERT INTO", and running this modified mysqldump output as a SQL script. Volumes are not too high; it works fine.
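(Side note: mysqldump also has a --replace option that writes REPLACE statements instead of INSERTs, so the dump would not need editing afterwards; a rough sketch with placeholder names:)
# --replace writes REPLACE INTO instead of INSERT INTO;
# --no-create-info skips CREATE TABLE statements so existing Db2 data is kept
mysqldump --replace --no-create-info -u user -p Db1 > db1_replace_dump.sql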
Is it possible to do this (simply enough) via replication? MySQL can create snapshot dumps, and I would copy them over and run a replication command; is that feasible?
I would use auto_increment_increment & auto_increment_offset to make sure the records don't overlap. I know you said you set the records on DB2 to a high range, but eventually they may still overlap. Just making DB1 use odd values and DB2 use even values makes sure that will never happen.
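For example, a minimal my.cnf sketch of that odd/even split (the values go in the [mysqld] section of each server; numbers shown are illustrative):
# DB1: generates 1, 3, 5, ...
auto_increment_increment = 2
auto_increment_offset    = 1
# DB2: generates 2, 4, 6, ...
auto_increment_increment = 2
auto_increment_offset    = 2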
You can't use replication if the two instances are not connected by a network. Replication requires that the replica is able to make a connection to its master.
But you can use binary logs, which is one part of how replication works. On DB1, enable binary logging, which will record every change made to the database. Periodically, copy these logs to the server for DB2 (I assume you have some way of doing this if you're currently using mysqldump). Use the mysqlbinlog tool to convert the logs into SQL commands to replay against your DB2 instance.
Example:
mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p
It's up to you to keep track of which binlogs you have copied and executed. Just before you copy a set of binlog files, run FLUSH LOGS on DB1. This causes DB1 to close the binlog file it's currently writing to, and open a new binlog file. This way you can work with whole files at a time without worrying about partial files.
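A rough sketch of one cycle, assuming the binlogs live in /var/lib/mysql and that you move the files the same way you currently move your mysqldump output (file names below are examples):
# On DB1: close the current binlog so completed files can be shipped whole
mysql -u root -p -e "FLUSH LOGS"
# Move every completed binlog you have not shipped yet to the DB2 host,
# using whatever transport you already use for the mysqldump files
cp /var/lib/mysql/binlog.000001 /var/lib/mysql/binlog.000002 /path/to/transfer/media/
# On DB2: replay the shipped logs, in order
mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p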

Restore data from sql.gz file, but skip a single table

I took a backup of my database from Engine Yard, which was downloaded as an sql.gz file. Since one of the tables in my database is too large, I want to skip it while restoring on my local system.
I use the command gunzip < file_name.sql.gz | mysql -u user_name -ppassword database_name to restore the backup.
You may have identified a solution for this by now but I figured I'd add some insight. The restore operation of MySQL does not currently offer an easy way to exclude a single table from the restore operation, but there are a few options you could consider:
If your local MySQL server offers the 'Blackhole' engine you could use gawk to alter the ENGINE definition for that table when it is created. This would be something like:
gunzip < file_name.sql.gz | gawk -v RS='' '{print gensub(/(CREATE TABLE `[table_to_be_skipped]`.*) ENGINE=InnoDB/, "\\1 ENGINE=Blackhole", 1)}' | mysql -u user_name -ppassword database_name
This instructs the database to just pass through the row inserts against this table during the reload. Once the load completes you could modify this back to the InnoDB engine with alter table [table_to_be_skipped] engine=innodb;. The drawback to this would be that you are still downloading and parsing through a larger backup.
Another option would be to use the Awk method described here to create two backup files, one representing the tables and data prior to the excluded table, and another representing everything that follows it. The tricky part here is that if you add tables on either side of the excluded table you would have to update this script. This also has the consequence of having to download and parse a larger backup file but tends to be a bit easier to remember from a syntax perspective.
By far the best option for addressing this would be simply to do a manual backup of your source database that ignores this table. Use a replica if possible, and use the --single-transaction option to mysqldump if you are using all InnoDB tables or if consistency of non-InnoDB tables is of minimal importance to your local environment. The following should do the trick:
mysqldump -u user_name -p --single-transaction --ignore-table=database_name.[table_to_be_skipped] database_name | gzip > file_name.sql.gz
This has the obvious benefit of not requiring any complex parsing or larger file downloads.
Not sure if this will help, but we have documentation available for dealing with database backups - also, you may want to talk to someone from support through a ticket or in #engineyard on IRC freenode.

MYSQLDUMP failing: Couldn't execute 'SHOW TRIGGERS LIKE ...' errors like (Errcode: 13) (6) and (1036) [duplicate]

This question already has answers here:
mysqldump doing a partial backup - incomplete table dump
(4 answers)
Closed 9 years ago.
Does anyone know why MYSQLDUMP would only perform a partial backup of a database when run with the following instruction:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" databaseSchema -u root --password=rootPassword > c:\backups\daily\mySchema.dump
Sometimes a full backup is performed, at other times the backup will stop after including only a fraction of the database. This fraction is variable.
The database does have several thousand tables totalling about 11 GB. But most of these tables are quite small, with only about 1,500 records; many have only 150-200 records. The column counts of these tables can be in the hundreds, though, because of the frequency data stored.
But I am informed that the number of tables in a schema in MySQL is not an issue. There are also no performance issues during normal operation.
And the alternative of using a single table is not really viable because all of these tables have different column name signatures.
I should add that the database is in use during the backup.
Well, after running the backup with this instruction:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump
I get this:
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'C:\Windows\TEMP\#sql67c_10_8c5.MYI' (Errcode: 13) (6)
Which I think is a permissions problem.
I doubt any one table in my schema is in the 2GB range.
I am using MySQL Server 5.5 on a Windows 7 64 bit server with 8 Gb of memory.
Any ideas?
I am aware that changing the number of files which MySQL can open, the open_files_limit parameter, may resolve this.
Another possibility is interference from antivirus products, as described here:
How To Fix Intermittent MySQL Errcode 13 Errors On Windows
There are a few possibilities for this issue that I have run into, and here is my workup:
First: enable error/debug logging and/or verbose output; otherwise we won't see an error that could be causing the issue:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump
So long as debug is enabled in your distribution, you should now be able both to log errors to a file and to view output on the console. The issue is not always clear here, but it is a great first step.
Have you reviewed your error or general logs? There is not often useful information for this issue, but sometimes there is, and every little bit helps with tracking these problems down.
Also watch SHOW PROCESSLIST while you are running this. See whether you are seeing status values like WAITING FOR ... LOCK or WAITING FOR METADATA LOCK, which would indicate the operation is unable to acquire a lock because of another operation.
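For example, from a second console while the dump is running (credentials are placeholders):
mysql -u root -p -e "SHOW FULL PROCESSLIST"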
Depending on the info gathered above: assuming I found nothing and had to shoot blind, here is what I would do next, covering some common cases I have experienced:
Max packet size errors: if you receive an error regarding max_allowed_packet, you can add --max_allowed_packet=160M to your parameters to see if you can get it large enough:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --max_allowed_packet=160M > c:\backup\visualRSS.dump
Try to reduce run time and file size using the --compact flag. mysqldump will add everything you need to create the schema and insert the data, along with other information: you can significantly reduce run time and file size by requiring that the dump contain only the INSERTs into your schema, avoiding all statements that create the schema and other non-critical info within each insert. This can mitigate a lot of problems where it is appropriate, but you will want to use a separate dump with --no-data to export your schema each run, to allow you to create all the empty tables etc.
Create a raw data dump, excluding add/drop table, comment, lock, and key-check statements:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualRSS.dump
Create a schema dump with no data:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --no-data > c:\backup\visualRSS.dump
Locking issues: by default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while dumping it, and it wants to acquire a read lock on the table; DDL operations and your global lock type may create this situation. Without seeing the hung query, you will typically see a small backup file size as you described, and usually the mysqldump operation will sit until you kill it or the server closes the idle connection. You can use the --single-transaction flag to set REPEATABLE READ isolation for the transaction and essentially take a snapshot of the table without blocking operations or being blocked, save for some older server versions that have issues with ALTER/TRUNCATE TABLE while in this mode.
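For instance, the earlier dump command with that flag added (a sketch; the database name is given positionally):
"c:\path\to\mysqldump" yourdb -u root -pRootPasswd -v --debug-check --single-transaction --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump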
File size issues: if I have read incorrectly and this backup has NOT successfully run before, indicating a potential 2GB file-size issue, you can try piping the mysqldump output straight into something like 7-Zip on the fly:
mysqldump | 7z.exe a -si[name_in_outfile] output_path_and_filename
If you continue to have issues, or there is an unavoidable problem prohibiting mysqldump from being used: Percona XtraBackup is what I prefer, or there is Enterprise Backup for MySQL from Oracle. XtraBackup is open source, far more versatile than mysqldump, has a very reliable group of developers working on it, and has many great features that mysqldump does not have, like streaming/hot backups, etc. Unfortunately the Windows build is old, unless you can compile it from source or run a local Linux VM to handle it for you.
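For reference, a minimal XtraBackup run looks roughly like this (the directory and credentials are placeholders; check the Percona documentation for the exact options in your version):
# Take the backup, then "prepare" it so the copied data files are consistent
xtrabackup --backup  --user=root --password=RootPasswd --target-dir=/data/backups/base
xtrabackup --prepare --target-dir=/data/backups/base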
Very important: I noticed that you are not backing up your information_schema database; it needs to be named explicitly if it is of significance to your backup scheme.

mysqlrepair --all-databases AND specific table? Or, What is the best way to be checking/repairing MySQL tables frequently?

A server has multiple mysql databases with identical schemas. There is a single table within each database that tends to crash and require repairing. We would like to run something like below on a cron.
mysqlrepair --auto-repair --all-databases --force --silent
This is for a single table (not large in size) in roughly 100 databases.
Reading http://dev.mysql.com/doc/refman/5.0/en/mysqlcheck.html, it states that the tables being processed are locked from other processes. So, am I correct that this should only be something that runs late at night?
The command above is missing the option to specify a single table. How can I add this and not have it override the --all-databases flag? The documentation for the --tables flag states that it
Overrides the `--databases` or `-B` option.
And a related question, short of modifying our application code, is there a better way to be checking/repairing our mysql tables?
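In the meantime, one way to hit just that one table in every database is to loop over the schemas that contain it and call mysqlcheck per database; a minimal sketch, assuming the table is called problem_table and using placeholder credentials:
# Find every database containing the table, then check/repair only that table
for db in $(mysql -N -u root -pRootPasswd -e \
    "SELECT table_schema FROM information_schema.tables WHERE table_name = 'problem_table'"); do
  mysqlcheck --auto-repair --force -u root -pRootPasswd "$db" problem_table
done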

Migrating a MySQL server from one box to another

The databases are prohibitively large (> 400MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use ssh to pipe your data directly over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption then you can use netcat but it's probably not worth it
The SQL text data goes over the wire compressed!
Obviously, change db_user to your user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to use --password the long way, because having mysql prompt you would be a lot of headache.
You could set up MySQL slave replication, let MySQL copy the data, and then make the slave the new master.
400 MB is really not a large database; transferring it to another machine will take only a few minutes over a 100 Mbit network. If you do not have a 100 Mbit network between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
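A rough sketch of that copy, assuming the default data directory /var/lib/mysql and placeholder host/user names (stop mysqld on both machines first):
# With mysqld stopped on BOTH machines, copy the data directory across
rsync -av /var/lib/mysql/ someuser@newserver:/var/lib/mysql/
# On the new server, fix ownership before starting mysqld again
ssh someuser@newserver 'chown -R mysql:mysql /var/lib/mysql'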
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs; please check that you don't mess up any slave hosts if you have them). Then we copy the dump and play it. Afterwards we flush the logs again (mysqladmin flush-logs), take the most recent incremental log (which shouldn't be very large) and play only that. Keep doing this until the last incremental log is very small, so that you can stop the database on the original machine, copy the last incremental log and then play it; it should take only a few minutes.
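A condensed sketch of that cycle, with placeholder host names and paths (the binlog file name is an example, and credentials are assumed to come from ~/.my.cnf on both hosts to avoid password prompts):
# 1. Full dump on the old server, recording the binlog position and rotating the logs
mysqldump --all-databases --master-data=2 --flush-logs | gzip > full.sql.gz
scp full.sql.gz someuser@newhost:/tmp/
ssh someuser@newhost 'gunzip < /tmp/full.sql.gz | mysql'
# 2. Repeat until the increments are tiny: rotate, copy, and replay only the new log
mysqladmin flush-logs
scp /var/lib/mysql/mysql-bin.000002 someuser@newhost:/tmp/
ssh someuser@newhost 'mysqlbinlog /tmp/mysql-bin.000002 | mysql'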
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --databases yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
Try importing the dump on the new server using the mysql console, not auxiliary software.
I have no experience with doing this with MySQL, but to me it seems the bottleneck is transferring the actual data?
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the DB server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network, or burning to disk and manually transferring the data.
Compressing such a dump will most likely give you quite a good compression rate, since there is most likely a lot of repetitive data.
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server:
create the empty database in the new server, set up the permissions for users etc.
copy the folder for the database in /data/databasename to the new server /data/databasename
I like to use BigDump: Staggered MySQL Dump Importer after exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note, though: if you don't set the export options (namely the maximum length of created queries) according to the load your new server can handle, it'll just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.