Download only changed tables in MySQL using checksum - mysql

I'd like to download a copy of a MySQL database (InnoDB) to use it locally. Since the database is growing rapidly, I want to find out a way to speed up this process and save bandwidth.
I'm using this command to copy the database to my local computer (Ubuntu):
ssh myserver 'mysqldump mydatabase --add-drop-database | gzip' | zcat | mysql mydatabase
I've added multiple --ignore-table options to skip tables that don't need to be up to date.
I already have an (outdated) version of the database, so there is no need to download all tables (some tables hardly change). I'm thinking of computing a checksum for each table and adding unchanged tables to --ignore-table.
Since I can't find many examples of using checksums with mysqldump, either I'm brilliant (not very likely) or there is an even better way to download (or better: one-way sync) the database in a smart way.
Database replication is not what I'm looking for, since that requires a binary log. That's a bit overkill.
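Roughly, this is the kind of thing I have in mind (untested sketch; it assumes password-less mysql logins via ~/.my.cnf on both ends and table names without spaces):
# compare CHECKSUM TABLE on the server and locally, and skip tables that match
IGNORE=""
for TABLE in $(mysql -N -e "SHOW TABLES" mydatabase); do
    REMOTE=$(ssh myserver "mysql -N -e \"CHECKSUM TABLE $TABLE\" mydatabase" | awk '{print $2}')
    LOCAL=$(mysql -N -e "CHECKSUM TABLE $TABLE" mydatabase | awk '{print $2}')
    [ "$REMOTE" = "$LOCAL" ] && IGNORE="$IGNORE --ignore-table=mydatabase.$TABLE"
done
# dump only the tables that changed; unchanged local tables are left untouched
ssh myserver "mysqldump mydatabase $IGNORE | gzip" | zcat | mysql mydatabase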
What's the best way to one-way sync a database, ignoring tables that haven't been changed?

One solution could be using the mysqldump --tab option, which writes a delimited dump per table:
mkdir /tmp/dbdump
chmod 777 /tmp/dbdump
mysqldump --user=xxx --password=xxx --skip-dump-date --tab=/tmp/dbdump database
Then use rsync with --checksum to send only the changed files to the destination. Run the create scripts, then load the data using LOAD DATA INFILE (see the sketch below).
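A rough sketch of that flow, run from the destination machine (paths and the table name mytable are placeholders; assumes ssh access to the source server):
# on the source server: one .sql (schema) and one .txt (data) file per table
ssh myserver 'mysqldump --user=xxx --password=xxx --skip-dump-date --tab=/tmp/dbdump database'
# pull only the files whose contents actually changed
rsync -av --checksum myserver:/tmp/dbdump/ /tmp/dbdump/
# for each changed table: recreate it, then reload its data
mysql database < /tmp/dbdump/mytable.sql
mysql --local-infile=1 -e "LOAD DATA LOCAL INFILE '/tmp/dbdump/mytable.txt' INTO TABLE mytable" database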

Related

Some file lost in MySQL database. How to re-create it in proper way?

The problem is that one .MYI and one .MYD file from a MySQL database have been accidentally deleted. The only file left intact is the .frm one. Only one table in the whole database is damaged that way; all other tables are OK and the database generally works fine, except for the table with the deleted files, which is obviously inaccessible.
There's a full database dump in pure SQL format available.
The question is, how do I re-create these files and table in safe and proper manner?
My first idea was to extract the full CREATE TABLE command from the dump and run it on the live database. It's not so easy, as the whole dump file is over 10GB, so any operation on its contents is a real pain. Yes, I know about sed and know how to use it - but I consider it the last option to choose.
The second and current idea is to create a copy of this database on an independent server, make a dump of the table in question, and then use the resulting SQL file to create the table again on the production server. I'm not very experienced with MySQL administration tasks (well, just basic ones), but to me this option seems safe and reasonable.
Will the second option work as I expect?
Is it the best option, or are there any more recommendable solutions?
Thank you in advance for your help.
The simplest solution is to copy the table whose files you deleted. There's a chance mysqld still has open file handles to the data files you deleted. On UNIX/Linux/OS X, a file isn't truly deleted while some process still has an open file handle to it.
So you might be able to do this:
mysql> CREATE TABLE mytable_copy LIKE mytable;
mysql> INSERT INTO mytable_copy SELECT * FROM mytable;
If you've restarted MySQL Server since you deleted the files, this won't work. If the server has closed its file handle to the data file, this won't work. If you're on Windows, I have no idea.
The next simplest solution is to restore your existing 10GB dump file to a temporary instance of MySQL Server, as you said. I'd use MySQL Sandbox but some people would use a virtual machine, or if you're using an AWS environment, launch a spot EC2 instance or a small RDS instance.
Then dump just the table you need:
mysqldump -h tempserver mydatabase mytable > mytable.sql
Then restore it to your real server.
mysql -h realserver mydatabase < mytable.sql
(I'm omitting the user and password options; I prefer to put those in .my.cnf anyway.)

Duplicating PostgreSQL database on one server to MySQL database on another server

I have a PostgreSQL database with 4-5 tables (some of them have more than 20 million rows). I have to replicate this entire database onto another machine. However, I have MySQL on that machine (and for some reason cannot install PostgreSQL there).
The database is static and is not updated or refreshed. There is no need to sync between the databases once the replication is done. So basically, I am trying to back up the data.
There is a utility called pg_dump which will dump the contents to a file. I can zip and FTP this to the other server. However, I do not have psql on the other machine to reload this into a database. Is there a possibility that mysql might parse and decode this file into a consistent database?
Postgres is version 9.1.9 and mysql is version 5.5.32-0ubuntu0.12.04.1.
Is there any other simple way to do this without installing any services?
Depends on what you consider "simple". Since it's only a small number of tables, the way I'd do it is like this:
dump individual tables with pg_dump -t table_name --column-inserts
edit the individual files, changing the schema definitions to be compatible with MySQL (e.g. using auto_increment instead of serial, etc. - like this guide, http://www.xach.com/aolserver/mysql-to-postgresql.html, only in reverse)
load the files into the mysql utility like you would any other mysql script.
If the files are too large for step #2, use the -s and -a arguments to pg_dump to dump the schema and the data separately, then edit only the schema file and load both files into MySQL, as sketched below.
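For example, something along these lines (database and table names are placeholders; the schema file still needs hand-editing for MySQL before loading):
# schema only, one file per table
pg_dump -s -t big_table mydb > big_table_schema.sql
# data only, as portable INSERT statements with column names
pg_dump -a -t big_table --column-inserts mydb > big_table_data.sql
# after editing the schema file for MySQL compatibility:
mysql -e "CREATE DATABASE mydb"
mysql mydb < big_table_schema.sql
mysql mydb < big_table_data.sql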

mysql import/export

I am trying to create an automatic process which will synchronize the databases of two servers. One site is live, and I need the testing environment to sync up with the live site every so often (I am thinking of a cron job for that).
How can I implement this?
You can keep the systems up to date with MySQL replication:
http://dev.mysql.com/doc/refman/5.0/en/replication.html
You are basically looking at a Master-Slave configuration.
If you'd like something a little simpler, you can use mysqldump to dump your database, then ssh to ship it over the wire, and mysql to load it in again.
mysqldump mydatabase | ssh the_test_server "mysql mytestdatabase"
You will have to purge mytestdatabase before doing the transfer, but if you are looking for a single command to 'synchronize' the database, this will do it.
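If you want this to happen automatically, a crontab entry on the live server along these lines would cover the "every so often" part (schedule and names are hypothetical):
# refresh the test database every night at 02:00
0 2 * * * mysqldump --add-drop-table mydatabase | ssh the_test_server "mysql mytestdatabase"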

How do I register a MySQL database?

Sorry for a noob question regarding MySQL. I downloaded FlightStats to learn about MySQL, but I can't figure out how to register it with my localhost MySQL DB. I know that in MS SQL you can simply register any SQL DB using SQL Studio. I tried to Google it but came up with no result. Perhaps my search phrase is wrong; I'm searching with "how to register a mysql database, register a mysql database... etc.". How do you register or set up a database from an existing database like FlightStats? I'm using DBVisualizer. Is there a way in dbVis that I'm not aware of to register a database?
Thanks
edit: sorry for the bad wording. I found this. I have the .myd, .myi and .frm files and I want to restore(?) them into my local MySQL instance. I looked at all the answers but I'm still confused as to how you restore the database from those 3 files.
A little background first. The FlightStats download page linked to in the original question appears to provide zipped tarballs of the binary table storage files from the MySQL data directory. Given that this is considered a viable means of distribution, and combined with the use of MERGE tables, I would surmise that this tarball contains a bunch of MyISAM data files (.myi, .myd). Jack's edit confirms that this is the situation.
This is an atypical means of distributing a MySQL data set, although not at all uncommon when backing up MyISAM storage, and probably not all that unheard of for moving large data sets around; it likely works out considerably more space-efficient than a corresponding dump file. Of course, in SQL Server land, it's pretty common to attach database files into an instance.
Broadly speaking, you'd recover the database as follows:
Locate the MySQL data directory; typically /var/lib/mysql or similar
Create a new directory with the desired database name e.g. flightdata
Extract the .myi, .myd and other files from the tarball into this directory
Make sure the entire directory is owned by the user MySQL runs as (usually mysql) - use chown -R to make sure you get everything
Open a MySQL console
USE <database-name>
SHOW TABLES
You should see some tables listed. In addition, the downloads page linked includes a couple of SQL scripts, which contain SQL commands that you need to run against your database once it's in place. These will cause the merge definitions and table indexes to be rebuilt. You can pipe these into the command-line client, e.g. mysql -u<username> -p<password> <database-name> < <sql-file>.
It may be a good idea to shut down the MySQL server while you're doing this; use e.g. /etc/init.d/mysql stop or similar, and restart once the files are extracted in place.
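Put together, the recovery looks roughly like this (paths and the tarball name are assumptions; adjust to your installation):
/etc/init.d/mysql stop
mkdir /var/lib/mysql/flightdata
tar -xzf flightstats-tables.tar.gz -C /var/lib/mysql/flightdata
chown -R mysql:mysql /var/lib/mysql/flightdata
/etc/init.d/mysql start
# rebuild the merge definitions and indexes using the SQL scripts from the download page
mysql -u<username> -p<password> flightdata < <sql-file>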
There's generally a way to import sql files using a GUI database tool. I'm not familiar with DBVisualizer, but as long as you have a MySQL command line client installed you can do it there as well. It's pretty easy:
Create a blank schema. You can do this in your GUI tool or on the command line client. Just use CREATE DATABASE flightstats;, or whatever name you want.
Use the following command line syntax to import/run an sql file on the new schema: mysql -u <username> -p flightstats < /path/to/file.sql
The -p option prompts for a password. I generally set up the database using step 1 as the root user, then GRANT some permissions on it to a new user id, then use that user id to run the SQL file.
This process is pretty much what a GUI tool will do in the background.
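In other words, the two steps boil down to this (assuming the root user for both; flightstats is just an example name):
mysql -u root -p -e "CREATE DATABASE flightstats"
mysql -u root -p flightstats < /path/to/file.sql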
Registering a database? I don't know what that means, but the MySQL GUI tools can help you create a database. Have a look at them, or better, download phpMyAdmin.
Google WAMP for Windows.
Google MAMP for Mac.
Google LAMP for Linux.
Any questions?

mysql restore for files on another server

I have a test database on a separate remote server from my production DB. Every once in a while, I want to test things by uploading a copy of my production DB to my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way to use the mysql restore command between servers? Also, is there another way to move large files that I'm not considering? Half a gig doesn't seem that big; I would imagine people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
0. Please consider whether you really need production data (especially if it contains sensitive information).
1. The simplest solution is to compress the backup on the source server (usually with gzip), transfer it across the wire, then decompress it on the target server (see the example after this list).
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need application logs, errors, or other technical stuff), you can consider taking a backup, restoring it on the source server under a different DB name, deleting all unnecessary data, and THEN taking the backup that you will actually use.
3. Restore a full backup once on a reference server in your dev environment and then copy the transaction logs only (to replay them on the reference server). Depending on the usage pattern, the transaction logs may take a lot less space than the whole database.
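For option 1, the whole round trip is just a few commands (hypothetical names and paths):
# on the production server
mysqldump proddb | gzip > /tmp/proddb.sql.gz
scp /tmp/proddb.sql.gz testserver:/tmp/
# on the test server
gunzip < /tmp/proddb.sql.gz | mysql testdb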
MySQL allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump and ask mysql to connect to the remote database server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
Use an efficient transfer method, rather than ftp.
If you have a dump file created by mysqldump on the test DB server, and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. rsync will use ssh and compress data for the transfer, but I think both the local and remote files should/could be uncompressed.
Rsync will only transfer the changed portion of a file.
It may take some time to decide what, precisely, has changed in a dump file, but the transfer should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.
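For what it's worth, the transfer itself would look something like this (hypothetical paths; -z compresses on the wire, so the dump files themselves can stay uncompressed):
rsync -avz --partial user@prodserver:/backups/proddb.sql /backups/proddb.sql
mysql testdb < /backups/proddb.sql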