How can I back up a MySQL database that is running on a remote server? I need to store the backup file on my local PC.
Try it with mysqldump:
mysqldump --host=the.remotedatabase.com -u yourusername -p yourdatabasename > /User/backups/adump.sql
Have you got access to SSH?
You can use this command in shell to backup an entire database:
mysqldump -u [username] -p[password] [databasename] > [filename.sql]
This is actually one command followed by the > operator, which says, "take the output of the previous command and store it in this file."
Note: The lack of a space between -p and the MySQL password is not a typo. However, if you keep the -p flag but leave the actual password off, you will be prompted for it. This is sometimes recommended to keep passwords out of your bash history.
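For example, with hypothetical credentials, both of these forms work; the first embeds the password, the second prompts for it:
# Password inline, no space after -p (this ends up in your shell history):
mysqldump -u myuser -pS3cret mydb > dump.sql
# Password omitted, so mysqldump prompts for it interactively:
mysqldump -u myuser -p mydb > dump.sql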
No one has mentioned the --single-transaction option yet. People should use it by default for InnoDB tables to ensure data consistency. In that case:
mysqldump --single-transaction -h [remoteserver.com] -u [username] -p[password] [yourdatabase] > [dump_file.sql]
This makes sure the dump runs in a single transaction that is isolated from all others, preventing the backup from capturing a partially completed transaction.
For instance, consider a game server where players can purchase gear with their account credits. There are essentially two operations against the database:
Deduct the amount from their credits
Add the gear to their arsenal
Now if the dump happens between these two operations, restoring that backup would result in the user losing the purchased item, because the second operation never made it into the SQL dump file.
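For illustration, here is a minimal sketch of that purchase as a single transaction (the database, table, and column names are made up for this example):
mysql -u [username] -p game_db <<'SQL'
START TRANSACTION;
-- Deduct the purchase price from the player's credits
UPDATE accounts SET credits = credits - 100 WHERE user_id = 42;
-- Add the purchased gear to the player's arsenal
INSERT INTO arsenal (user_id, gear_id) VALUES (42, 7);
COMMIT;
SQL
With --single-transaction, the dump sees either both of these changes or neither, never just the first one.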
While it's just an option, there is really not much reason not to use it with mysqldump.
This topic shows up on the first page of my Google results, so here's a useful little tip for newcomers.
You could also dump the sql and gzip it in one line:
mysqldump -u [username] -p[password] [database_name] | gzip > [filename.sql.gz]
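And to restore straight from the compressed file without unpacking it first, the reverse pipeline works too:
gunzip < [filename.sql.gz] | mysql -u [username] -p[password] [database_name]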
mysqldump -h [domain name/ip] -u [username] -p[password] [databasename] > [filename.sql]
Tried all the combinations here, but this worked for me:
mysqldump -u root -p --default-character-set=utf8mb4 [DATABASE TO BE COPIED NAME] > [OUTPUT FILE NAME].sql
(Note that the redirect target is a file on disk, not a new database name.)
If you haven't installed the MySQL client yet and are using a Docker container instead:
sudo docker exec MySQL_CONTAINER_NAME /usr/bin/mysqldump --host=192.168.1.1 -u username --password=password db_name > dump.sql
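To restore that dump later through the same container, the counterpart might look like this (the container name and credentials are the same placeholders as above; the -i flag makes docker pass the file on stdin):
sudo docker exec -i MySQL_CONTAINER_NAME /usr/bin/mysql --host=192.168.1.1 -u username --password=password db_name < dump.sql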
You can directly pipe it to the remote server where you wish to copy your data to:
mysqldump -u your_db_user_name -p --set-gtid-purged=OFF --triggers --routines --events --compress --skip-lock-tables --verbose your_local_sql_db_name | mysql -u your_db_user_name -p -h your_remote_server_ip your_remote_server_db_name
You need to have created the database on your remote SQL server first.
Using the above command, I was able to copy from my local MySQL server version 8.0.23 to my remote MySQL server running 8.0.25.
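If you still need to create that target database, one way to do it (reusing the placeholders from the command above) is:
mysql -u your_db_user_name -p -h your_remote_server_ip -e "CREATE DATABASE your_remote_server_db_name"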
This is how you restore a backup after you have successfully backed up your .sql file:
mysql -u [username] -p [databasename]
Then, at the mysql> prompt, load your SQL file with this command:
source MY-BACKED-UP-DATABASE-FILE.sql
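Alternatively, you can skip the interactive session entirely and redirect the file in from the shell:
mysql -u [username] -p [databasename] < MY-BACKED-UP-DATABASE-FILE.sql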
How can I take a backup of all tables and stored procedures of a project in MySQL using the command-line client?
From the OS shell, not from inside the MySQL shell:
mysqldump --routines -u<username> -p <db name> > backup.sql
If you want to save all databases:
mysqldump --routines --all-databases -u<username> -p > backup.sql
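As a quick sanity check that the routines actually made it into the dump (the exact output format varies between mysqldump versions, so treat this as a rough count):
grep -c 'CREATE.*PROCEDURE\|CREATE.*FUNCTION' backup.sql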
Just as a side note: for security reasons, avoid storing plain-text passwords in DB tables, and avoid publishing your actual database online ;)
I know how mysqldump works, but I don't know where to use it.
If I execute this command after starting the mysql program, it gives an error.
I am using Ubuntu. So how can I use this utility?
You can also back up your database this way:
mysqldump -u root -p DB_NAME > db_name_backup.sql
If you want to back up all databases, simply run this:
mysqldump -u root -p --all-databases > mysql_db_backup.sql
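To load such a full dump back in later, feed it to the mysql client:
mysql -u root -p < mysql_db_backup.sql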
You can learn more about mysql and mysqldump here:
Guide:
mysqldump and mysql
MySQL Database Backup using mysqldump
shell> mysqldump --opt db_name > backup-file.sql
You can read the dump file back into the server like this:
shell> mysql db_name < backup-file.sql
Or like this:
shell> mysql -e "source /path-to-backup/backup-file.sql" db_name
mysqldump is also very useful for populating databases by copying data
from one MySQL server to another:
shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name
It is possible to dump several databases with one command:
shell> mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql
If you want to dump all databases, use the --all-databases option:
shell> mysqldump --all-databases > all_databases.sql
If tables are stored in the InnoDB storage engine, mysqldump provides a
way of making an online backup of these (see command below). This
backup just needs to acquire a global read lock on all tables (using
FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as
this lock has been acquired, the binary log coordinates are read and the
lock is released. If a long-running updating statement is executing when
the FLUSH ... is issued, the MySQL server may stall until that statement
finishes, and only then does the dump become lock-free. So if the MySQL
server receives only short (in the sense of "short execution time")
updating statements, even if there are plenty of them, the initial lock
period should not be noticeable.
shell> mysqldump --all-databases --single-transaction > all_databases.sql
For point-in-time recovery (also known as “roll-forward”, when you need
to restore an old backup and replay the changes which happened since
that backup), it is often useful to rotate the binary log (see
Section 8.4, “The Binary Log”) or at least know the binary log
coordinates to which the dump corresponds:
shell> mysqldump --all-databases --master-data=2 > all_databases.sql
or
shell> mysqldump --all-databases --flush-logs --master-data=2 > all_databases.sql
The simultaneous use of --master-data and --single-transaction works as
of MySQL 4.1.8. It provides a convenient way to make an online backup
suitable for point-in-time recovery if tables are stored in the InnoDB
storage engine.
For more information on making backups, see Section 6.1, “Database
Backups”.
mysqldump -u MYSQL_USER -h MYSQL_SERVER -pMYSQL_PASS --all-databases > "dbs.sql"
You use it directly from the terminal, just like mysql itself, and pass the parameters directly to it.
mysqldump -u [user] -p[password] [database name] > dumpfilename.sql
Yes, you can.
See http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html for more information on the tool.
If it's an entire DB, then:
$ mysqldump -u [uname] -p[pass] db_name > db_backup.sql
If it's all DBs, then:
$ mysqldump -u [uname] -p[pass] --all-databases > all_db_backup.sql
If it's specific tables within a DB, then:
$ mysqldump -u [uname] -p[pass] db_name table1 table2 > table_backup.sql
You can even go as far as auto-compressing the output using gzip (if your DB is very big):
$ mysqldump -u [uname] -p[pass] db_name | gzip > db_backup.sql.gz
If you want to do this remotely and you have the access to the server in question, then the following would work (presuming the MySQL server is on port 3306):
$ mysqldump -P 3306 -h [ip_address] -u [uname] -p[pass] db_name > db_backup.sql
To IMPORT:
Type the following command to import the SQL data file:
$ mysql -u username -p -h localhost DATA-BASE-NAME < data.sql
In this example, we import the 'data.sql' file into the 'blog' database using vivek as the username:
$ mysql -u vivek -p -h localhost blog < data.sql
If you have a dedicated database server, replace the localhost hostname with the actual server name or IP address as follows:
$ mysql -u username -p -h 202.54.1.10 databasename < data.sql
Or use a hostname such as mysql.cyberciti.biz:
$ mysql -u username -p -h mysql.cyberciti.biz database-name < data.sql
If you do not know the database name, or if the database name is included in the SQL dump, you can try something like this:
$ mysql -u username -p -h 202.54.1.10 < data.sql
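After the import, you can quickly verify that the database arrived:
$ mysql -u username -p -h 202.54.1.10 -e "SHOW DATABASES"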
Refer: http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
I created a database using MySQL Workbench. Now I want to export this database to my home PC.
How can I do this if the two PCs have no network connection?
I use mysqldump to export the database. You can use something like
mysqldump -u [username] -p [database name] > backup.sql
to store it in a file. After that you can import into another database via
mysql -u [username] -p [database name] < backup.sql
Since my edit was rejected, I'm posting it as an answer; I hope it will be helpful.
Following up on the answer given by "Marc Hauptmann":
A few quick tips for common issues you may face while performing a DB dump and restore:
As "Marc" correctly mentioned above, it is always advisable not to provide the DB password on the command line when exporting (if you do, it can easily be sniffed from your shell history or a reverse search).
If you are transferring a large dump file, it is advisable to compress it before transferring (and uncompress it before the restore).
If you want to export the data for use under a new database name, that can also be done (it requires the new DB to be created before using it in the import).
Also, if you are exporting data from a production server and want to make sure the dump doesn't impact performance, export from another server using the additional "-h [hostname]" option:
mysqldump -h [hostname] -u [username] -p [database name] > backup.sql
Using gzip is pretty painless and really shrinks these files.
mysqldump -u [uname] -p[pass] [dbname] | gzip -9 > [backupfile.sql.gz]
gunzip < [backupfile.sql.gz] | mysql -u [uname] -p[pass] [dbname]
But man, it is 2014 - this stuff is also easy to do via a secure shell connection.
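For example, a remote dump over SSH might look like this (the hostname and credentials are placeholders); the compression happens on the remote side, so only gzipped data crosses the network:
ssh user@remote.host "mysqldump -u dbuser -pdbpass dbname | gzip" > dbname_backup.sql.gz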
I am writing a batch file to back up a MySQL database. When I typed mysqldump --help at the command line, it showed a long list of options. I want to know what default parameters MySQL Administrator uses to create a full backup.
I am using Windows XP.
mysqldump -u [username] -p[password] [databasename] > [backupfile.sql]
[username] - this is your database username
[password] - this is the password for your database (note there is no space after -p)
[databasename] - the name of your database
[backupfile.sql] - the file to which the backup should be written
Example:
mysqldump -u root -pasd dbname > dbbackup.sql
Note that the above command will not back up stored procedures or triggers, if the underlying database has any. A better command would be:
mysqldump -uroot -ppassword --routines --triggers dbname > dbbackup.sql
If there are no stored procedures or triggers, the additional switches do no harm.
I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysqldump command. All the examples on the internet suggest doing something like
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
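For what it's worth, the option in question is --hex-blob, a real mysqldump option that emits binary columns as hexadecimal literals so they survive the round trip through a text file; a sketch:
mysqldump -u user -ppass --hex-blob myDBName > NewDBName.out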
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump to another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large data sets recently. Here is what I found on a 200 MB database with 10,000+ records in many of the tables. I used the Linux 'time' command to measure actual time.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally, I think this is much faster than doing a dump remotely; then you can compress the .sql file and transfer it manually.