Prevent verbose output when undumping from mysqldump - mysql

I've dumped a table on a remote server from one database (MySQL 5.5) to a file. It took the server about 2 seconds to perform the operation. Now I'm trying to undump the data from the file into another DB (same version) on the server.
The server prints the data being processed to the screen even though I didn't specify the --verbose parameter. How can I prevent this output?
It takes the server some 10 minutes to perform the operation. Is that time reasonable, or can I make it much faster? If so, how?

Loading (undumping) is via the mysql commandline tool:
mysql -u user -p dbname < mydump.sql
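(If the echoed rows are simply going to standard output - an assumption, since the source of the output isn't shown here - one blunt way to silence them is to redirect stdout; warnings and errors written to stderr will still appear:)
mysql -u user -p dbname < mydump.sql > /dev/null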

Related

Best method to transfer/clone large MySQL databases to another server

I am running a web application in a shared hosting environment that uses a MySQL database of about 3 GB.
For testing purposes I have set up a XAMPP environment on my local macOS machine. To copy the online DB to my local machine I used mysqldump on the server and then imported the dump file directly into mysql:
// Server
$ mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u dbUser -p'dbPass' --extended-insert dbName > dbDump.sql
// Local machine
$ mysql -h 127.0.0.3 -u dbUser -p'dbPass' dbName < dbDump.sql
The only optimization here is the use of extended-insert. However, the import takes about 10 hours!
After some searching I found that disabling foreign key constraint checks during the import should speed up the process, so I added the following line at the beginning of the dump file:
// dbDump.sql
SET FOREIGN_KEY_CHECKS=0;
...
However, this did not make any significant difference... The import now took about 8 hours. Faster, but still pretty long.
Why does it take so much time to import the data? Is there a better/faster way to do this?
The server is not the fastest (shared hosting...), but it takes only about 2 minutes to export/dump the data. That exporting is faster than importing (no syntax checks, no parsing, just writing...) is not surprising, but 300 times faster (10 hours vs. 2 minutes)? This is a huge difference...
Isn't there any other solution that would be faster? Copying the binary DB files instead, for example? Anything would be better than using a text file as the transfer medium.
This is not just about transferring the data to another machine for testing purposes. I also create daily backups of the database. If it became necessary to restore the DB, it would be pretty bad if the site were down for 10 hours...
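For reference, a sketch of the fuller set of session settings often suggested for large imports, extending the SET FOREIGN_KEY_CHECKS=0 idea above (these are standard MySQL session variables, but how much they help depends on the data and the server):
// dbDump.sql
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
...
-- original INSERT statements here
...
COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;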

Foolproof methods for mysql -> mysql migration

I've been migrating databases (a few GB in size) in MySQL Workbench 6.1, from one MySQL server to another. Never having done this before, I thought it would be 99% reliable. Instead, 2 out of 3 tries have failed.
My databases don't have complex features (triggers, stored procedures & functions, ...). The errors, though, are difficult to interpret, almost always about tables failing to export, reason unknown. There might occasionally be a duplicate key index in the source, but that shouldn't prevent an export from happening, should it?
I've tried all the different methods available in the interface:
1) Server > Data Export > Data Import
2) Migration wizard
3) Schema transfer wizard
4) Reverse engineer
but no real difference.
Also, all the methods seem to be variants of the same thing. Do these menu options rely on the same procedure internally? How different are they really?
My questions are generic:
1) Is there a foolproof method that is relaxed about errors? E.g. is mysqldbcopy from MySQL Utilities much better than the Workbench wizards? (See the sketch after these questions.)
2) Does the MySQL wizards' configuration make any difference (e.g. a checkbox that causes errors by being too demanding if the source DB has a problem)? I just want to transfer the DB, not perfection on the target server. I've switched SSL=NO, but it's still not working.
3) What is the single most important cause of errors in migration, e.g. server overload, insufficient memory, table structure?
Thanks in advance,
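For reference, a mysqldbcopy invocation (from the MySQL Utilities package mentioned in question 1) looks roughly like the sketch below; the hosts, credentials, and database name are placeholders, and the tool has to be installed separately:
mysqldbcopy --source=root:pass@old_server:3306 --destination=root:pass@new_server:3306 dbname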
There might be occasionally a duplicated key index in source, but that shouldn't prevent an export from happening?
Yeah, it shouldn't prevent the export operation.
I've tried all the different methods available in the interface:
All the interfaces you have used might have some timeout configured, so they don't execute fully because your database is BIG.
So how do you migrate a MySQL database from one server to another?
To do it properly, I suggest you use the command line like this:
Step 1: Create a backup file on the old server.
mysqldump -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
Step 2: Transfer the backup file to the new server.
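For example, with scp (the destination host and path here are placeholders):
scp db_backup.sql [[user_name]]@[[new_server_ip]]:/path/on/new/server/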
Step 3: Import the backup file on the new server.
mysql -u [[user_name]] -p[[password]] [[db_name]] < db_backup.sql
Pro tip:
You can combine steps 1 & 2 if you have remote MySQL access enabled on the old server. Just execute this command on the new server and the backup file will be written to the current directory of the new server.
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
where [[xxx.xx.xxx.xxx]] represents the IP address/hostname of the old server.
Extra note:
Please note that there is no space between -p and [[password]]. You can also omit [[password]] entirely if you think it's a security issue to include the password in the command; you will then be prompted for it.
If you have access to a terminal you can try using mysqldump, and you could also try the Percona XtraBackup tool.
MySQL dump (if your DB is too large, I suggest you run it inside screen):
Back up all DBs: mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
Back up specific tables: mysqldump -u root -pxxxx DatabaseName table1 table2 > tables.sql
Back up individual databases: mysqldump -u root -pxxx --databases DB1 DB2 > Only_DB.sql
To import: sync all the files to the other server and try importing as shown below.
mysql -u root -pxxxx < all_db_backup.sql (use screen for large databases)
Individual DB: mysql -u root -pxxx DBName < DB.sql
(Note: Before you import, make sure your backup file already has CREATE DATABASE IF NOT EXISTS statements, or create those databases before importing; see the sketch below.)
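A sketch of pre-creating the database and running the import inside screen, as suggested above (names and credentials are placeholders):
screen -S dbimport
mysql -u root -pxxxx -e "CREATE DATABASE IF NOT EXISTS DBName"
mysql -u root -pxxxx DBName < DB.sql
(detach with Ctrl-A d, reattach later with screen -r dbimport)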

MySQL database dump from remote host without temporary file

I'm trying to implement a database backup cron job (other solutions welcome) at my job, but I have a small problem:
I have a large database that is over 10 GB in size, and the current VM doesn't have enough space to store the temporary file that mysqldump creates.
I know I can use mysqldump with a host parameter, but my question is: when doing that, does the temporary file generated by mysqldump stay on the machine that is running it, or does it stay on the database server?
UPDATE:
I forgot to mention that I'm trying to back up a network of websites and that some of them are behind a firewall (needing VPN access), and some need server hopping to get to the database server.
You can run a shell script from an archive host, where you've exchanged password-less SSH keys with the database server. This lets you transfer the dump directly over SSH, without creating any temp files on the remote database server:
ssh -C myhost.com mysqldump -u my_user --password=bigsecret \
--skip-lock-tables --opt database_name > local_backup_file.sql
Obviously there are ways to secure that password on the command line, but this is a method that could accomplish what you want. One advantage of this method is that it doesn't require the archive host to have access to port 3306 on the remote host.
This guy's version is cool because it also compresses the data on-the-fly before transferring it over the network, and then he uncompresses it before loading it into a local database.
ssh me@remoteserver 'mysqldump -u user -psecret production_database | \
gzip -9' | gzip -d | mysql local_database
But that's why my version uses ssh -C, which enables its own compression algorithm and avoids extra gzip pipes.
Depending on the circumstances, it might be a better idea to use MySQL replication. Set up MySQL on your backup server and configure it as a slave of your production database (see http://dev.mysql.com/doc/refman/5.7/en/replication-howto.html). You can then dump the slave database easily.
An advantage of this approach is you're not transferring 10GB each time you want to backup, you're only transferring any changes to the database as and when they occur.
You'll need to keep an eye on the replication though, because if it fails your slave database will become stale.
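A minimal sketch of what that slave setup involves (host names, credentials, and log coordinates are placeholders; see the linked how-to for the full procedure):
# on the production (master) server, after enabling server-id and log-bin in my.cnf:
CREATE USER 'repl'@'backup_host' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'backup_host';
# on the backup (slave) server:
CHANGE MASTER TO MASTER_HOST='production_host', MASTER_USER='repl', MASTER_PASSWORD='slavepass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;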

upload large database file to the server

I have a 5 GB database that needs to be uploaded through phpMyAdmin, on a shared server where I cannot access the shell. Is there any solution that takes less time to upload? Please help me by providing the steps to upload the SQL file. I have searched the internet but could not find an answer.
Do not use phpmyadmin.
Assuming you have shell access, upload the file and feed it directly to the mysql command.
Your shell command will look like:
cat file.sql | mysql -uuser -ppassword database
or you can do gzipped file:
zcat file.sql.gz | mysql -uuser -ppassword database
Before doing this, check:
database connection works (correct database, user and password)
database is empty :)
mysql max packet size (max_allowed_packet) is OK - see the sketch after this list
you have enough disk space
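A sketch of checking the packet size and, if needed, raising it for the import (the 256 MB value is just an example; SET GLOBAL needs the SUPER privilege and resets on server restart):
mysql -uuser -ppassword -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
mysql -uuser -ppassword -e "SET GLOBAL max_allowed_packet=268435456"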
* UPDATE *
You said you do not have shell access.
Then you have the following options:
upload the file and contact support; let them do it for you.
feed it in remotely; cPanel has a special menu where you can get remote access, and other panels have the same ability too.
In this case the command will be executed on your computer and look like:
cat file.sql | mysql -uroot -phipopodil -hwebsite.com
or for windows:
/path/to/mysql -uroot -phipopodil -hwebsite.com < file.sql
do some "hack" - feed it in through crontab, at, or via PHP's system() command (see the cron sketch after these notes).
If you choose the "hack" option, note the following:
PHP has max_execution_time - even if you set it to zero, there could be some limit imposed by the hosting.
hosts usually limit the number of mysql updates per hour.
there could be some ulimit restrictions.
if you feed 5 GB into a shared server, the server will slow down and the administrator will check what you are doing.
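If you go the crontab route, the entry could look something like this sketch (paths, credentials, and timing are placeholders; remove the entry after it has run):
# run once at 03:00 and keep a log of any output/errors
0 3 * * * mysql -uuser -ppassword database < /home/user/file.sql > /home/user/import.log 2>&1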
This depends on your database; you tagged the question with 3 different database types: mysql, sql-server, and postgresql. I know mysql and postgresql have import features, and I'd be surprised if SQL Server didn't as well. You could import the database file via the command line instead of having to use phpmyadmin.
Incidentally, the phpmyadmin tool also has an import feature, but that again depends on the format of your database. If it's a compatible SQL file, you could upload it to phpmyadmin and import it there, but I'd recommend the previous method I mentioned: upload it to your host, then use whatever database tool fits (mysqlimport for mysql; or, if it's the result of a pg_dump command, you can just run):
psql <dbname> < <yourfile>
i.e.
psql mydatabase < inputfile.sql

Quickest way to transfer data from MySQL DB 1 to MySQL DB 2

I have a new database, similar to the old one but with more columns and tables. So the data in the old tables is still usable and needs transferring.
The old database is on a different server to the new one. I want to transfer the data from one database to the other.
I have Navicat, but it seems to take forever using its host-to-host data transfer. Downloading a SQL file and then executing it also takes too long (it executes about 4 inserts per second).
The downloaded SQL file is about 40 MB (with complete insert statements). The final one would probably be 60 MB to 80 MB.
What's the best way to transfer this data? (a process I will need to repeat a few times for testing)
Doing a mysqldump on the source machine and then slurping it in on the other side, even with a 40-100 MB file, is well within reason. Do it from the command line (note: no space between -p and the password).
(source machine)
mysqldump -u user -p'password' database > database.sql
...transfer file to recipient machine...
(recipient machine)
mysql -u user -p'password' database < database.sql
Can you not transfer only a portion of the data for testing first? Then, later, transfer the entire thing when you're satisfied with test-results?
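If dumping only a portion helps, mysqldump can restrict the dump to specific tables, or to matching rows via --where; a sketch (the table name and condition are made up):
mysqldump -u user -p'password' --where="id < 1000" database table1 > subset.sql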
(it executes about 4 inserts per second)
That sounds more like there's something wrong with your database... Are you sure it's all right?
Cody, thank you for the direction. For some reason it did not work for me, but the below did on my Red Hat Linux server:
(recipient machine)
mysql -u [username] -p -h localhost database < database.sql
(source machine)
I just used phpMyAdmin.
Is there a command that can be run to pull the DB from another server? Something along the lines of: mysqldump -u [username] -p -h [host address] [dbname] > [filename].sql
Thanks