Export table using SSH - MySQL

I want to export a table from a MySQL database using SSH. How can I do this?

Export the table to a file, using MySQLDump, PHPMyAdmin or your favorite tool.
Transfer the file to the host of your liking, using scp or your favorite tool.
What do you mean "export using SSH?" SSH is a protocol to securely talk to a server. SCP is a way to copy files between hosts using SSH, so I assume you mean that.
mysqldump db_name tbl_name > dumpfile
scp dumpfile 127.0.0.1:.
If you dislike mysqldump, you can always just SELECT INTO OUTFILE from your favorite MySQL client (or a PHP program, or just the command-line mysql). After that you transfer the file to the other host and run LOAD DATA INFILE to load the table. You'll have to recreate the table as well.
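As a minimal sketch of that approach (the table and file names are placeholders; the output path must be writable by the MySQL server):

-- on the source server: write the table contents to a tab-separated file
SELECT * FROM tbl_name INTO OUTFILE '/tmp/tbl_name.txt';
-- on the destination server, after recreating the table and copying the file over:
LOAD DATA INFILE '/tmp/tbl_name.txt' INTO TABLE tbl_name;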

mysqldump -uuser -ppass dbname tablename > dump_table.sql

If the dump is going to be very large, you'll probably want to run it in the background, so that losing your SSH connection doesn't cause a problem.
Also, you could use nice if it is a live server and you don't want to affect its performance too much:
nohup nice mysqldump -uuser -ppassword [other flags] database tablename > dumpfile.sql &
You can then use SSH File Transfer to download it to your computer, or use scp to send it to another server:
nohup nice scp dumpfile.sql user@IP:/path/ &
Then you can load it into MySQL:
nohup nice mysql -uuser -ppassword database < dumpfile.sql &
**Use nohup if it's going to take a long time and losing the connection could be a problem; nohup keeps the command running after you disconnect, and the trailing & runs it in the background.
**Use nice to lower the priority of the process, i.e. in case it is a live server and performance is important. Be aware that nice doesn't stop your MySQL server from being slowed down by a query that takes a long time to finish.

Does the command mysqldump generate the dump locally before transferring?

I want to dump a specific table from my remote server's database, which works fine, but one of the tables has 9M rows and I get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2002359
So after reading online I understood I need to increase max_allowed_packet, and that it's possible to add it to my command.
So I'm running the following command to dump my table:
mysqldump -uroot -h my.host -p'mypassword' --max_allowed_packet=512M db_name table_name | gzip > dump_test.sql.gz
and for some reason, I still get:
Lost connection to MySQL server during query when dumping table `table_name` at row: 2602499
Am I doing something wrong?
It's weird; only 9M records is not that big.
Try adding the --quick option to your mysqldump command; it works better with large tables. It streams the rows from the result set to the output rather than slurping the whole table into memory and then writing it out.
mysqldump -uroot -h my.host -p'mypassword' --quick --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz
You can also try adding the --compress option to your mysqldump command. That makes it use the more network-friendly compressed connection protocol to your MySQL server. Notice that you still need the gzip pipe; MySQL's compressed protocol doesn't cause the dump to come out of mysqldump compressed.
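For example, the same command as above with --compress added:

mysqldump -uroot -h my.host -p'mypassword' --quick --compress --max_allowed_packet=512M db_name table_name | \
gzip > dump_test.sql.gz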
It's also possible the server is timing out its connection to the mysqldump client. You can try resetting the timeout durations. Connect to your server via some other means and issue these queries, then run your mysqldump job.
These set the timeouts to one calendar day.
SET GLOBAL wait_timeout=86400;
SET GLOBAL interactive_timeout=86400;
Finally, if your server is far away from your machine (through routers and firewalls) something may be disrupting mysqldump's connection. Some inferior routers and firewalls have time limits on NAT (network address translation) sessions. They're supposed to keep those sessions alive while they are in use, but some don't. Or maybe you're hitting a time or size limit configured by your company for external connections.
Try logging into a machine closer to the server and running mysqldump on it.
Then use some other means (sftp?) to copy your gz file to your own machine.
Or, you may have to segment the dump of this file. You can do something like this (not debugged).
mysqldump -uroot -h my.host -p'mypassword' \
db_name table_name --skip-create-options --skip-add-drop-table \
--where="id>=0 AND id < 1000000" | \
gzip....
Then repeat that with these lines.
--where="id>=1000000 AND id < 2000000" | \
--where="id>=2000000 AND id < 3000000" | \
...
until you get all the rows. Pain in the neck, but it will work.
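If you want to script that segmented dump, here is a rough sketch, assuming an integer id column with roughly 10M rows and 1M-row chunks (the ranges, credentials and file names are placeholders):

# dump the table in 1M-row id ranges, one gzipped file per chunk
for start in $(seq 0 1000000 9000000); do
  end=$((start + 1000000))
  mysqldump -uroot -h my.host -p'mypassword' \
    --skip-create-options --skip-add-drop-table \
    --where="id >= $start AND id < $end" \
    db_name table_name | gzip > "dump_${start}.sql.gz"
done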
For me, everything worked fine when I skipped locking the tables:
mysqldump -u xxxxx --password=xxxxx --quick --max_allowed_packet=512M --skip-lock-tables --verbose -h xxx.xxx.xxx.xxx > db.sql
It may create consistency problems, but it allowed me to back up a 5GB database without any issue.
Another option to try:
net_read_timeout=3600
net_write_timeout=3600
in my.ini / my.cnf, or via SET GLOBAL as shown below.
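For example, the SET GLOBAL form (this takes effect for new connections; the my.cnf form persists across server restarts):

SET GLOBAL net_read_timeout = 3600;
SET GLOBAL net_write_timeout = 3600;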
Following JohnBigs' comment above, the --compress flag was what worked for me.
I had previously tried --single-transaction, --skip-extended-insert, and --quick without success.
Also, make sure your mysql.exe client is the same version as your MySQL server.
So, if your MySQL server version is 8.0.23 but your client version is 8.0.17 or 8.0.25, you may have issues. I ran into this problem using a version 8.0.17 client against a MySQL 8.0.23 server; changing the client version to match the server version resolved the issue.
I had a similar problem on my server, where MySQL would apparently restart during the nightly backups. It was always the same database, but the actual table sometimes varied.
I tried several things from the other answers here, but in the end it was just some cron job executing queries that didn't finish. This didn't cause enough CPU and RAM usage to trigger the monitoring, but apparently enough that compressing the dump caused the OOM killer to become active. I fixed the cron job and the next backup was OK again.
Things to look for:
OOM? dmesg | grep invoked
Process killed? grep killed /var/log/kern.log
If none of the others work, you can use mysqldump's --where feature to break your huge dump into multiple smaller queries.
It might be tedious but it would most likely work.
e.g.
"C:\Program Files\MySQL\MySQL Workbench 8.0 CE\mysqldump.exe" --defaults-file="C:\...\my_password.cnf"
--host=localhost --protocol=tcp --user=mydbuser --compress=TRUE --port=16861 --default-character-set=utf8 --quick --complete-insert --replace
--where="last_modify > '2022-01-01 00:00:00'"
> "C:\...\dump.txt"
my_password.cnf
[client]
password=xxxxxxxx
[mysqldump]
ignore-table=db.table1
ignore-table=db.table2
Then you just adjust the last_modify range in the --where clause for each run, moving the window forward in time, and your huge table is split into many small dumps.

Foolproof methods for mysql -> mysql migration

I've been migrating databases (a few GB in size) in MySQL Workbench 6.1, from one MySQL server to another. Never having done this before, I thought it would be 99% reliable. Instead, 2 out of 3 tries have failed.
My databases don't have complex features (triggers, stored procedures and functions, ...). The errors, though, are difficult to interpret: almost always about tables failing to export, reason unknown. There might occasionally be a duplicated key index in the source, but that shouldn't prevent an export from happening, should it?
I've tried all the different methods available in the interface:
1) Server > Data Export > Data Import
2) Migration wizard
3) Schema transfer wizard
4) Reverse engineer
but no real difference.
Also, all the methods seem to be variants of the same thing. Do these menu options rely on the same procedure internally? How different are they really?
My questions are generic:
1) Is there a foolproof method that is relaxed about errors? E.g., is mysqldbcopy from MySQL Utilities much better than the Workbench wizards?
2) Does the configuration of the MySQL wizards make any difference (e.g. a checkbox that causes errors by being too demanding when the source DB has a problem)? I just want to transfer the DB, not achieve perfection on the target server. I've switched SSL=NO, but it's still not working.
3) What is the single most important cause of errors in migration, e.g. server overload, not enough memory, table structure?
Thanks in advance,
There might be occasionally a duplicated key index in source, but that shouldn't prevent an export from happening?
Yeah, it shouldn't prevent the export operation.
I've tried all the different methods available in the interface:
All the interfaces you have used might have some timeout configured, so they don't really execute fully because your database is BIG.
So how to migrate MySQL database from one server to another?
To do it properly, I suggest you use the command line, like this:
Step 1: create backup file on old server
mysqldump -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
Step 2: Transfer backup file to new server.
Step 3: Import backup file in new server.
mysql -u [[user_name]] -p[[password]] [[db_name]] < db_backup.sql
Pro tip:
You can combine steps 1 & 2 if you have remote MySQL access enabled on the old server. Just execute this command on the new server and it will create the backup file in the current directory of the new server.
mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p[[password]] [[db_name]] > db_backup.sql
where [[xxx.xx.xxx.xxx]] represents ip address/hostname for old server.
Extra note:
Please note that there is no space between -p and [[password]]. You can also omit [[password]] if you think it's a security issue to include the password in the command; you will then be prompted for it.
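For example, the prompt-for-password form of the same command:

mysqldump -h [[xxx.xx.xxx.xxx]] -u [[user_name]] -p [[db_name]] > db_backup.sql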
If you have access to a terminal you can try using mysqldump, and you could also try the Percona XtraBackup tool.
MySQL dump (if your DB is too large, I suggest you use screen; see the sketch after this list):
Backup all DB : mysqldump -u root -pxxxx --all-databases > all_db_backup.sql
Backup Tables : mysqldump -u root -pxxxx DatabaseName table1 table2 > tables.sql
Backup Individual databases : mysqldump -u root -pxxx --databases DB1 DB2 > Only_DB.sql
To import: sync all the files to the other server and try importing as shown below.
mysql -u root -pxxxx < all_db_backup.sql (Use Screen for large Databases)
Individual DB : mysql -u root -pxxx DBName < DB.sql
(Note: before you import, make sure your backup file already has CREATE DATABASE IF NOT EXISTS statements, or create those databases before importing.)
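As a sketch of the screen approach mentioned above (the session name is arbitrary):

screen -S db_backup                 # start a named screen session
mysqldump -u root -p --all-databases > all_db_backup.sql
# detach with Ctrl+A then D; the dump keeps running on the server
screen -r db_backup                 # reattach later to check progress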

Export a large MySQL table as multiple smaller files

I have a very large MySQL table on my local dev server: over 8 million rows of data. I loaded the table successfully using LOAD DATA INFILE.
I now wish to export this data and import it onto a remote host.
I tried LOAD DATA LOCAL INFILE to the remote host. However, after around 15 minutes the connection to the remote host fails. I think that the only solution is for me to export the data into a number of smaller files.
The tools at my disposal are PhpMyAdmin, HeidiSQL and MySQL Workbench.
I know how to export as a single file, but not multiple files. How can I do this?
I just did an import/export of a (partitioned) table with 50 million records; it took just 2 minutes to export it from a reasonably fast machine and 15 minutes to import it on my slower desktop. There was no need to split the file.
mysqldump is your friend, and since you have a lot of data it's better to compress it:
#host1:~ $ mysqldump -u <username> -p <database> <table> | gzip > output.sql.gz
#host1:~ $ scp output.sql.gz host2:~/
#host1:~ $ rm output.sql.gz
#host1:~ $ ssh host2
#host2:~ $ gunzip < output.sql.gz | mysql -u <username> -p <database>
#host2:~ $ rm output.sql.gz
Take a look at mysqldump
Your lines should be (from terminal):
export to backupfile.sql from db_name in your mysql:
mysqldump -u user -p db_name > backupfile.sql
import from backupfile to db_name in your mysql:
mysql -u user -p db_name < backupfile.sql
You have two options in order to split the information:
Split the output text file into smaller files (as many as you need; there are many tools to do this, e.g. split; see the sketch after this list).
Export one table each time using the option to add a table name after the db_name, like so:
mysqldump -u user -p db_name table_name > backupfile_table_name.sql
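For the split option, a minimal sketch (the chunk size and file names are arbitrary):

# split the dump into files of 1,000,000 lines each: backup_part_aa, backup_part_ab, ...
split -l 1000000 backupfile.sql backup_part_
# on the remote host, import the pieces in order
cat backup_part_* | mysql -u user -p db_name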
Compressing the file(s) (they are plain text) is very efficient and can shrink them to about 20%-30% of their original size.
Copying the files to remote servers should be done with scp (secure copy) and interaction should take place with ssh (usually).
Good luck.
I found that the advanced options in phpMyAdmin allow me to select how many rows to export, plus the start point. This allows me to create as many dump files as required to get the table onto the remote host.
I had to adjust my php.ini settings, plus the phpMyAdmin 'ExecTimeLimit' config setting, as generating the dump files takes some time (500,000 rows in each).
I use HeidiSQL to do the imports.
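For reference, the settings involved look roughly like this (the values are illustrative, not recommendations):

; php.ini
max_execution_time = 600
memory_limit = 512M

// phpMyAdmin config.inc.php (0 disables the limit entirely)
$cfg['ExecTimeLimit'] = 600;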
As an example of the mysqldump approach for a single table
mysqldump -u root -ppassword yourdb yourtable > table_name.sql
Importing is then as simple as
mysql -u username -ppassword yourotherdb < table_name.sql
Use mysqldump to dump the table into a file.
Then use tar with the -z option to gzip the file.
Transfer it to your remote server (with ftp, sftp or another file transfer utility).
Then untar the file on the remote server.
Use mysql to import the file.
There is no reason to split the original file or to export it in multiple files; a sketch of these steps follows.
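A sketch of those steps (user, database, table and host names are placeholders):

mysqldump -u user -p db_name table_name > table_name.sql
tar -czf table_name.tar.gz table_name.sql
scp table_name.tar.gz user@remote_host:~/
# then, on the remote server:
tar -xzf table_name.tar.gz
mysql -u user -p db_name < table_name.sql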
If you are not comfortable with using the mysqldump command line tool, here are two GUI tools that can help you with that problem, although you have to be able to upload them to the server via FTP!
Adminer is a slim and very efficient DB Manager tool that is at least as powerful as PHPMyAdmin and has only ONE SINGLE FILE that has to be uploaded to the server which makes it extremely easy to install. It works way better with large tables / DB than PMA does.
MySQLDumper is a tool developed especially to export / import large tables / DBs, so it will have no problem with the situation you describe. The only downside is that it is a bit more tedious to install as there are more files and folders (~350 files in ~1.5MB), but it shouldn't be a problem to upload it via FTP either, and it will definitely get the job done :)
So my advice would be to first try Adminer and if that one also fails go the MySQLDumper route.

How do I split a large MySQL backup file into multiple files?

You can use mysql_export_explode
https://github.com/barinascode/mysql-export-explode
<?php
#Including the class
include 'mysql_export_explode.php';
$export = new mysql_export_explode;
$export->db = 'dataBaseName'; # -- Set your database name
$export->connect('host','user','password'); # -- Connecting to database
$export->rows = array('Id','firstName','Telephone','Address'); # -- Set which fields you want to export
$export->exportTable('myTableName',15); # -- Table name and how many parts to split the table into
?>
At the end, SQL files are created in the directory where the script is executed, in the following format:
---------------------------------------
myTableName_0.sql
myTableName_1.sql
myTableName_2.sql
...

mysqldump compression

I am trying to understand how mysqldump works:
if I execute mysqldump on my pc and connect to a remote server:
mysqldump -u mark -h 34.32.23.23 -pxxx --quick | gzip > dump.sql.gz
will the server compress it and send it over to me as gzip or will my computer receive all the data first and then compress it?
Because I have a very large remote db to export, and I would like to know the fastest way to do it over a network!
You should make use of ssh + scp,
because the dump on localhost is faster,
and you only need to scp over the gzip (less network overhead).
Likely you can do this:
ssh $username@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"
scp $username@34.32.23.23:/tmp/dump.sql.gz .
(The /tmp directory is optional; change it to whatever directory you are comfortable with.)
Have you tried the --compress parameter?
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compress
This is how I do it:
Do a partial export using SELECT INTO OUTFILE and create the files on the same server.
If your table contains 10 million rows, do a partial export of 1 million rows at a time, each in a separate file.
Once the 1st file is ready you can compress and transfer it. In the meantime MySQL can continue exporting data to the next file.
On the other server you can start loading the file into the new database.
BTW, a lot of this can be scripted.
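A rough sketch of scripting the partial export (the table name, key column, chunk size and paths are hypothetical, and the output directory must be writable by the MySQL server):

# export 10 million rows in 1M-row chunks; each file can be compressed and transferred as soon as it is done
for i in $(seq 0 9); do
  offset=$((i * 1000000))
  mysql -u user -p'password' db_name -e \
    "SELECT * FROM big_table ORDER BY id LIMIT $offset, 1000000 INTO OUTFILE '/tmp/big_table_$i.txt'"
  gzip "/tmp/big_table_$i.txt"
done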

In MySQL, what are the best practices to back up databases?

I am using MySQL.
How often do you back up your database?
How do you normally back up your database?
Do you export all data into SQL or CSV format and keep it in a folder?
Setting up a cron job to run a script that does a mysqldump and stores the dump on a separate disk from the database itself (or on a remote server) is quite an easy and efficient way to back up a database, in my opinion. You could even have it dump every database with the --all-databases switch.
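As a sketch, a crontab entry along those lines (the schedule, paths and options are placeholders; credentials are assumed to be in root's ~/.my.cnf):

# /etc/cron.d/mysql-backup: nightly dump of all databases to a separate disk, gzipped and dated
30 2 * * * root mysqldump --all-databases --single-transaction | gzip > /mnt/backups/all-$(date +\%F).sql.gz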
If you have more than one MySQL server, you could also use replication
Frequency of backups depends on how much data you are willing to lose in case of a failure.
How Often - Depends on the activity and traffic you have on your Database.
I use phpMyAdmin and then click on the export tab and export the database as .sql
You can reach phpMyAdmin either at localhost/phpmyadmin if you are running XAMPP/WAMP/LAMP etc., or by clicking on the 'phpMyAdmin' link in your webhost's cPanel if you are hosting your website online.
I backup databases daily, using 'mysqldump [options] db_name [tables]', and export all data into sql.
Of course, this depend on the size of your databases.
I use mysqldump:
mysqldump -uroot -p database_name > ~/backups/database_name-$(date +%s).sql
1. If you want the easiest way to back up the database, you can opt for phpMyAdmin.
2. If you prefer the command line, follow this way.
You would simply modify the path and filename to reflect the specifics of your backup file, like so:
C:\ mysql -uroot -p [DatabaseName] < C:/backups/[backUpDatabaseName.sql]
3. mysqldump:
C:\ mysqldump [myDatabaseName] > [ExportDatabaseName]
You can also look into hot backup solutions (e.g. Percona XtraBackup).