How to import a large database into a local database - MySQL

I'm going to import a database from the server to my local machine.
The dumped file's size is 5 GB.
How can I export and import the database quickly?

Oxymoron: "quickly" versus "shoveling 5GB around".
If not already dumped:
mysqldump --opt ... | mysql ...
Depending on where you execute that pipe, one side or the other needs -h to point at the remote machine.
If already dumped via mysqldump to, say, db.dump, then
mysql ... < db.dump
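For a dump this size, the import itself can often be sped up by disabling per-row checks while the data loads. A sketch of that idea; the `wrap_dump` helper, the /tmp paths, and the credentials in the usage line below are all hypothetical:

```shell
# wrap_dump: hypothetical helper that wraps a mysqldump file with
# statements that speed up a large InnoDB import, writing to stdout.
wrap_dump() {
  echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat "$1"                     # the original dump, unchanged
  echo "COMMIT;"
}

# tiny stand-in dump so the sketch is self-contained
printf 'INSERT INTO t VALUES (1);\n' > /tmp/mini.dump
wrap_dump /tmp/mini.dump > /tmp/import.sql
```

On a real dump this would be something like `wrap_dump db.dump | mysql -u user -p dbname`; skipping unique and foreign key checks avoids per-row validation during the bulk load, at the cost of trusting the dump's integrity.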

A simple dump using mysqldump will create a text file with instructions to recreate the database. This process is slow, locks tables, and usually results in a large file.
If you need to back up and restore large databases, I recommend taking a look at Percona XtraBackup.
XtraBackup works by copying the database files directly from the MySQL data directory, which results in data that is internally inconsistent, but it then performs crash recovery on the files to make them a consistent, usable database again.
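A sketch of a typical XtraBackup cycle in recent versions (the target directory is hypothetical, and this needs a running server; check the Percona documentation for your server version):

```shell
xtrabackup --backup --target-dir=/data/backup    # raw copy of data files + redo log
xtrabackup --prepare --target-dir=/data/backup   # crash recovery: make the copy consistent
# to restore: stop mysqld, empty the data directory, then
# xtrabackup --copy-back --target-dir=/data/backup
```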

Related

How to create a logical backup of a relational table in mysql workbench using mysqldump?

How do I run these commands in MySQL Workbench to create a logical backup of a BOOK table without using the command line? I put the commands below in Workbench and it shows an error: "mysqldump" is not valid at this position.
mysqldump [arguments] > file-name
mysqldump csit115 BOOK --user csit115 --password
--verbose --lock_tables > book.bak
Physical backups consist of raw copies of the directories and files that store database contents. This type of backup is suitable for large, important databases that need to be recovered quickly when problems occur.
Logical backups save information represented as logical database structure (CREATE DATABASE, CREATE TABLE statements) and content (INSERT statements or delimited-text files). This type of backup is suitable for smaller amounts of data where you might edit the data values or table structure, or recreate the data on a different machine architecture.

Download only changed tables in MySQL using checksum

I'd like to download a copy of a MySQL database (InnoDB) to use it locally. Since the database is growing rapidly, I want to find out a way to speed up this process and save bandwidth.
I'm using this command to copy the database to my local computer (Ubuntu):
ssh myserver 'mysqldump mydatabase --add-drop-database | gzip' | zcat | mysql mydatabase
I've added multiple --ignore-table options to skip tables that don't need to be up to date.
I've already got an (outdated) version of the database, so there is no need to download all tables (some tables hardly change). I'm thinking of using the checksum for each table and adding unchanged tables to the --ignore-table list.
Since I can't find many examples of using checksums with mysqldump, either I'm brilliant (not very likely) or there is an even better way to download (or better: one-way sync) the database in a smart way.
Database replication is not what I'm looking for, since that requires a binary log. That's a bit overkill.
What's the best way to one-way sync a database, ignoring tables that haven't been changed?
One solution could be using the mysqldump --tab option, which writes a per-table schema file and a tab-delimited data file:
mkdir /tmp/dbdump
chmod 777 /tmp/dbdump
mysqldump --user=xxx --password=xxx --skip-dump-date --tab=/tmp/dbdump database
Then use rsync with --checksum to send only the changed files to the destination. Run the create scripts, then load the data using LOAD DATA INFILE.
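The asker's checksum idea can also be scripted directly: save the output of CHECKSUM TABLE from the previous sync, rerun it, and turn every table whose checksum is unchanged into an --ignore-table flag. A self-contained sketch with stand-in checksum files (the table names, checksums, and /tmp paths are all hypothetical):

```shell
# Two runs of CHECKSUM TABLE, saved as "db.table<TAB>checksum" lines.
# Stand-in data so the sketch runs without a server:
printf 'mydb.users\t111\nmydb.logs\t222\n' > /tmp/old.sums
printf 'mydb.users\t111\nmydb.logs\t999\n' > /tmp/new.sums

sort /tmp/old.sums > /tmp/old.sorted
sort /tmp/new.sums > /tmp/new.sorted

# Lines present in both files = unchanged tables; turn them into flags.
IGNORE=$(comm -12 /tmp/old.sorted /tmp/new.sorted \
         | cut -f1 | sed 's/^/--ignore-table=/')
echo "$IGNORE"
```

Only mydb.users is unchanged in this stand-in data, so $IGNORE would then be passed to the real dump, e.g. `mysqldump $IGNORE mydb | ...`.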

Percona backup and restore

I'm trying to use Percona XtraBackup to back up a MySQL database. For restoring the database, according to the documentation:
rsync -avrP /data/backup/ /var/lib/mysql/
This will copy ibdata1 as well.
What if I want to restore the backup into an existing MySQL instance with some existing databases? Would this corrupt my other databases? Clearly this will overwrite the existing ibdata1.
I suppose you have a local HTTP/PHP server, so in case you don't need to batch import or export information, I suggest you use a database manager app that can import or export SQL, CSV, or TSV files.
I use a web-based admin tool called Adminer and it works great (plus, it's just a single PHP file). It has options to export or import a whole database or just certain tables, and even specific records. Its usage is pretty straightforward.

Duplicating PostgreSQL database on one server to MySQL database on another server

I have a PostgreSQL database with 4-5 tables (some of which have more than 20 million rows). I have to replicate this entire database onto another machine. However, I have MySQL (and for some reason cannot install PostgreSQL) on that machine.
The database is static and is not updated or refreshed. No need to sync between the databases once replication is done. So basically, I am trying to backup the data.
There is a utility called pg_dump which will dump the contents to a file. I can zip and FTP this to the other server. However, I do not have psql on the other machine to reload it into a database. Is there any possibility that MySQL might parse and decode this file into a consistent database?
Postgres is version 9.1.9 and mysql is version 5.5.32-0ubuntu0.12.04.1.
Is there any other simple way to do this without installing any services?
Depends on what you consider "simple". Since it's only a small number of tables, the way I'd do it is like this:
1. dump individual tables with pg_dump -t table_name --column-inserts
2. edit the individual files, changing the schema definitions to be compatible with MySQL (e.g. AUTO_INCREMENT instead of serial, etc.; like this guide, only in reverse: http://www.xach.com/aolserver/mysql-to-postgresql.html)
3. load the files into the mysql client like you would any other MySQL script.
If the files are too large for step #2, use the -s and -a arguments to pg_dump to dump the data and the schema separately, then edit only the schema file and load both files in mysql.
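Step #2 can often be scripted for the common cases. A minimal sed pass over the schema file; the two type mappings shown are just examples (a real dump needs more rules) and the /tmp file names are hypothetical:

```shell
# A pg_dump-style schema as stand-in input:
printf 'CREATE TABLE t (\n  id serial PRIMARY KEY,\n  flag boolean\n);\n' > /tmp/pg_schema.sql

# Rewrite a couple of PostgreSQL idioms into MySQL syntax:
sed -e 's/serial/INT AUTO_INCREMENT/' \
    -e 's/boolean/TINYINT(1)/' /tmp/pg_schema.sql > /tmp/my_schema.sql
```

The converted file would then be loaded with something like `mysql mydb < /tmp/my_schema.sql` on the target machine.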

mysql restore for files on another server

I have a test database on a separate remote server from my production DB. Every once in a while, I want to test things by uploading a copy of my production DB to my testing DB. Unfortunately, the backup file is now half a gig and I'm having trouble transferring it via FTP or SSH. Is there an easy way to use the mysql restore command between servers? Also, is there another way to move large files that I'm not considering? Half a gig doesn't seem that big; I would imagine that people run into this issue frequently.
Thanks!
Are the servers accessible to each other?
If so, you can just pipe the data from one db to another without using a file.
ex: mysqldump [options] | mysql -h test -u username -ppasswd
0. Please consider whether you really need production data (especially if it contains sensitive information).
1. The simplest solution is to compress the backup on the source server (usually gzip), transfer it across the wire, then decompress on the target server.
http://www.techiecorner.com/44/how-to-backup-mysql-database-in-command-line-with-compression/
2. If you don't need an exact replica of the production data (e.g. you don't need application logs, errors, or other technical stuff), you can consider creating a backup, restoring it on the source server under a different DB name, deleting all unnecessary data, and THEN taking the backup that you will use.
3. Restore the full backup once on a reference server in your dev environment and then copy transaction logs only (to replay them on the reference server). Depending on the usage pattern, transaction logs may take a lot less space than the whole database.
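The compress-transfer-decompress cycle in point 1 is lossless and easy to verify end to end. A self-contained sketch of the round trip (the file names are hypothetical, and on real servers the middle step would be scp or rsync):

```shell
# Stand-in for a mysqldump output file:
printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' > /tmp/db.dump

gzip -c /tmp/db.dump > /tmp/db.dump.gz            # compress on the source
# ... transfer /tmp/db.dump.gz across the wire (scp, rsync, ...) ...
gunzip -c /tmp/db.dump.gz > /tmp/restored.dump    # decompress on the target

cmp -s /tmp/db.dump /tmp/restored.dump && echo "round trip OK"
```

Text dumps compress very well (often 5-10x), so this alone usually makes the half-gig transfer manageable.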
MySQL allows you to connect to a remote database server to run SQL commands. Using this feature, we can pipe the output from mysqldump into a mysql client connected to the remote server to populate the new database.
mysqldump -u root -prootpass SalesDb | mysql --host=185.32.31.96 -C SalesDb
Use an efficient transfer method, rather than ftp.
If you have a dump file created by mysqldump on the test db server and you update it every so often, I think you could save time (if not disk space) by using rsync to transfer it. Rsync will use ssh and compress data for the transfer, but both the local and remote files should be uncompressed.
Rsync will only transfer the changed portion of a file.
It may take some time to decide what, precisely, has changed in a dump file, but the transfer should be quick.
I must admit though, I've never done it with a half-gigabyte dump file.