I'm trying to dump my Cloud SQL instance's database from my local computer.
I know I should use gcloud commands, but in the project where I'll be using this it would be a real pain to rewrite all the mysqldump instructions.
I can connect to Cloud SQL via the MySQL client, but when I try to use mysqldump I get the following:
mysqldump --databases testdb -h 130.211.xxx.xxx -u root -p > testdump.sql
mysqldump: Got error: 1227: Access denied; you need (at least one of) the SUPER privilege(s) for this operation when using LOCK TABLES
And of course Cloud SQL doesn't support SUPER privileges... :/
Does anyone know if there's a way around this?
Yes, it works, but you must first use the cloud_sql_proxy and have the right permissions. This is not in the documentation at the moment, neither as a warning nor as an official method. Still, using an intermediary bucket for dumps is not to my liking.
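For reference, a rough sketch of the TCP proxy setup I mean (the instance connection name PROJECT:REGION:INSTANCE is a placeholder):
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:3306 &
# mysqldump then talks to the proxy on 127.0.0.1:3306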
On macOS with the latest mysqldump as of this moment (posted as an example of the problems I encountered; yours may vary with OS and mysqldump version):
mysqldump --column-statistics=0 -h 127.0.0.1 -u <user> -p <db> --set-gtid-purged=OFF > <dumpFile>
# this is because I use the TCP connection sample for the Cloud SQL proxy
mysql -h 127.0.0.1 -u <user> -p -D <database> < DBs/mysqldump100519.sql
According to their documentation, it seems you have two options.
The first, which you do not like, is to use a gcloud command.
The second is to use the RESTful API that the gcloud commands use under the hood. You can make the same request from inside your code. Take a look here.
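As a rough sketch of that second option (project, instance and bucket names are placeholders), the export call looks something like the following. Note that it still writes the dump to a Cloud Storage bucket:
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"exportContext": {"fileType": "SQL", "uri": "gs://your-bucket/testdb.sql", "databases": ["testdb"]}}' \
  "https://sqladmin.googleapis.com/sql/v1beta4/projects/your-project/instances/your-instance/export"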
Hi All,
I am trying to restore a nearly 8 GB database onto a remote server using the mysql command at the command prompt. It has been 8 hours since I started the process, and it is still restoring. I used this command:
mysql -h hostname -u username -p dbname < /path/to/dumpfile.sql
My questions are,
Does it really take this many hours to restore a database of this size?
Is it possible to restore an 8 GB database?
Am I doing it the correct way?
Is there a better way to restore the DB?
In my opinion the answer from @Ferri is good; in cases like this the CLI is always the best option.
The only improvement I'd suggest is to use gzip to reduce the size of the dump.
Dump the db like so:
mysqldump --host yourhost -u root --port 3306 -p yourdb | gzip -9 > yourdb.sql.gz
Restore the db like so:
gzip -cd yourdb.sql.gz | mysql -h yourhost -u root -p yourdb
Command
mysql -h IP -u Username -p schema < file
Example
mysql -h 192.168.10.122 -u root -p mydatabase < /tmp/20160628_test_minificated.sql
Does it really take this many hours to restore a database of this size?
That depends on the size of the dump file and the connection speed.
Is it possible to restore an 8 GB database?
Yes, you can restore big databases this way.
Am I doing it the correct way?
For me this is the best way when you are working from a command line interface and the destination is also a command line interface.
Is there a better way to restore the DB?
Yes, you have multiple options, like phpMyAdmin, Workbench, HeidiSQL and many others, but each one has its own limitations.
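One more tip for a long restore like this: if you have pv installed, pipe the dump through it to get a progress bar, so you can at least see that it is moving. A sketch, assuming your dump file path:
pv /path/to/dumpfile.sql | mysql -h hostname -u username -p dbname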
Is it possible to dump a database from a remote host through an ssh connection and have the backup file end up on my local computer?
If so, how can this be achieved?
I am assuming it will be some combination of piping output from ssh to the dump or vice versa, but I can't figure it out.
This will dump, compress and stream over ssh into your local file:
ssh -l user remoteserver "mysqldump [mysqldump options] database | gzip -3 -c" > /localpath/localfile.sql.gz
Starting from @MichelFeldheim's solution, I'd use:
$ ssh user@host "mysqldump -u user -p database | gzip -c" | gunzip > db.sql
ssh -f user@server.com -L 3306:server.com:3306 -N
then:
mysqldump -h 127.0.0.1 <database> > backup.sql
assuming you do not also have mysql running locally. If you do, you can adjust the tunnel port to something else.
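If you do have a local MySQL on 3306, here is a sketch of the same idea on a spare port (user, host and database are placeholders). --protocol=TCP forces the client through the tunnel instead of the local socket:
ssh -f -N -L 3307:127.0.0.1:3306 user@server.com
mysqldump -h 127.0.0.1 -P 3307 --protocol=TCP -u dbuser -p mydb > backup.sql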
I have created a script to make it easier to automate mysqldump commands on remote hosts using the answer provided by Michel Feldheim as a starting point:
mysqldump-remote
The script allows you to fetch a database dump from a remote host with or without SSH and optionally using a .env file containing environment variables.
I plan to use the script for automated database backups. Feel free to create issues / contribute - hope this helps others as well!
I would like to know the command to perform a mysqldump of a database without the prompt for the password.
REASON:
I would like to run a cron job which takes a mysqldump of the database once every day. Therefore, I won't be able to enter the password when prompted.
How can I solve this?
Since you are using Ubuntu, all you need to do is add a file in your home directory and it will disable the mysqldump password prompting. This is done by creating the file ~/.my.cnf (permissions need to be 600).
Add this to the .my.cnf file
[mysqldump]
user=mysqluser
password=secret
This lets you connect as a MySQL user who requires a password without having to actually enter the password. You don't even need the -p or --password.
Very handy for scripting mysql & mysqldump commands.
The steps to achieve this can be found in this link.
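Put together, a minimal sketch of the whole setup (user, password and database names are placeholders):
cat > ~/.my.cnf <<'EOF'
[mysqldump]
user=mysqluser
password=secret
EOF
chmod 600 ~/.my.cnf
# no password prompt needed now:
mysqldump mydb > mydb.sql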
Alternatively, you could use the following command:
mysqldump -u [user name] -p[password] [database name] > [dump file]
but be aware that it is inherently insecure, as the entire command (including the password) can be viewed by any other user on the system while the dump is running, with a simple ps ax command.
Adding to @Frankline's answer:
The -p option must be excluded from the command in order to use the password in the config file.
Correct:
mysqldump -u my_username my_db > my_db.sql
Wrong:
mysqldump -u my_username -p my_db > my_db.sql
.my.cnf can omit the username.
[mysqldump]
password=my_password
If your .my.cnf file is not in a default location and mysqldump doesn't see it, specify it using --defaults-file.
mysqldump --defaults-file=/path-to-file/.my.cnf -u my_username my_db > my_db.sql
A few answers mention putting the password in a configuration file.
Alternatively, from your script you can export MYSQL_PWD=yourverysecretpassword.
The upside of this method over using a configuration file is that you do not need a separate configuration file to keep in sync with your script. You only have the script to maintain.
There is no downside to this method.
The password is not visible to other users on the system (it would be visible if it is on the command line). The environment variables are only visible to the user running the mysql command, and root.
The password will also be visible to anyone who can read the script itself, so make sure the script itself is protected. This is in no way different than protecting a configuration file. You can still source the password from a separate file if you want to have the script publicly readable (export MYSQL_PWD=$(cat /root/mysql_password) for example). It is still easier to export a variable than to build a configuration file.
E.g.,
$ export MYSQL_PWD=$(>&2 read -s -p "Input password (will not echo): "; echo "$REPLY")
$ mysqldump -u root mysql | head
-- MySQL dump 10.13 Distrib 5.6.23, for Linux (x86_64)
--
-- Host: localhost Database: mysql
-- ------------------------------------------------------
-- Server version 5.6.23
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
MariaDB
MariaDB documents the use of MYSQL_PWD as:
Default password when connecting to mysqld. It is strongly recommended to use a more secure method of sending the password to the server.
The page makes no mention of what a "more secure" method might be.
MySQL
This method is still supported in the latest documented version of MySQL: https://dev.mysql.com/doc/refman/8.0/en/environment-variables.html though it comes with the following warning:
Use of MYSQL_PWD to specify a MySQL password must be considered extremely insecure and should not be used. Some versions of ps include an option to display the environment of running processes. On some systems, if you set MYSQL_PWD, your password is exposed to any other user who runs ps. Even on systems without such a version of ps, it is unwise to assume that there are no other methods by which users can examine process environments.
The security of environment variables is covered in much detail at https://security.stackexchange.com/a/14009/10002 and that answer also addresses the concerns mentioned in the comments. TL;DR: irrelevant for over a decade.
Having said that, the MySQL documentation also warns:
MYSQL_PWD is deprecated as of MySQL 8.0; expect it to be removed in a future version of MySQL.
To which I'll leave you with maxschlepzig's comment from below:
funny though how Oracle doesn't deprecate passing the password on the command line which in fact is extremely insecure
Final thoughts
Connecting to a system using a single factor of authentication (password) is indeed insecure. If you are worried about security, you should consider adding mutual TLS on top of the regular connection so both the server and the client are properly identified as being authorized.
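For illustration only, here is a sketch of what client-certificate TLS looks like with mysqldump (host, user, database and certificate file names are placeholders, and exact flags vary by client version). The server also has to be configured to require client certificates for this to be mutual rather than one-way TLS:
mysqldump --ssl-mode=VERIFY_IDENTITY --ssl-ca=ca.pem --ssl-cert=client-cert.pem --ssl-key=client-key.pem -h db.example.com -u backup_user mydb > mydb.sql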
To use a file located anywhere on the filesystem, use --defaults-extra-file, e.g.:
mysqldump --defaults-extra-file=/path/.sqlpwd [database] > [desiredoutput].sql
Note: .sqlpwd is just an example filename. You can use whatever you desire.
Note: MySQL will automatically check for ~/.my.cnf which can be used instead of --defaults-extra-file
If you're using cron like me, try this! (Note the escaped %, which cron otherwise treats specially.)
mysqldump --defaults-extra-file=/path/.sqlpwd [database] > "$(date '+\%F').sql"
Required Permission and Recommended Ownership
sudo chmod 600 /path/.sqlpwd && sudo chown $USER:nogroup /path/.sqlpwd
.sqlpwd contents:
[mysqldump]
user=username
password=password
Other example sections you can put in a .cnf or .sqlpwd file:
[mysql]
user=username
password=password
[mysqldiff]
user=username
password=password
[client]
user=username
password=password
If you wanted to log into a database automatically, for instance, you would need the [mysql] entry.
You could now make an alias that auto-connects you to the DB:
alias whateveryouwant="mysql --defaults-extra-file=/path/.sqlpwd [database]"
You can also put only the password inside .sqlpwd and pass the username via the script/CLI. I'm not sure if this would improve security or not; that would be a different question altogether.
For completeness' sake, I will state that you can do the following, but it is extremely insecure and should never be used in a production environment:
mysqldump -u [user_name] -p[password] [database] > [desiredoutput].sql
Note: There is NO SPACE between -p and the password.
E.g. -pPassWord is correct, while -p PassWord is incorrect.
Yeah, it is very easy... just one magical command line, no more:
mysqldump --user='myusername' --password='mypassword' -h MyUrlOrIPAddress databasename > myfile.sql
and done :)
For me, using MariaDB, I had to do this: add the file ~/.my.cnf and change its permissions with chmod 600 ~/.my.cnf, then add your credentials to the file. The magic piece I was missing was that the password needs to be under the [client] block (ref: docs), like so:
[client]
password = "my_password"
[mysqldump]
user = root
host = localhost
If you happen to come here looking for how to do a mysqldump with MariaDB: place the password under a [client] block, and the user under a [mysqldump] block.
You can achieve this in 4 easy steps:
create a directory to store the script and the DB backups
create ~/.my.cnf
create a ~/backup/script.sh shell script to run the mysqldump
add a cron job to run the mysql dump
Below are the detailed steps
Step 1
create a directory in your home directory using mkdir ~/backup
Step 2
In your home directory, run nano ~/.my.cnf, add the text below, and save:
[mysqldump]
#use this if your password has special characters (!@#$%^&, etc.) in it
password="YourPasswordWithSpecialCharactersInIt"
#use this if it has no special characters
password=myPassword
Step 3
cd into ~/backup and create another file, script.sh, then add the following text to it:
#!/bin/bash
# where to write the dump, which database to dump, and which user to connect as
# (USER is avoided as a variable name because the shell already defines it)
BACKUP_DIR=~/backup
DATABASE=dbname
DB_USER=myUsername
mysqldump --defaults-file=~/.my.cnf -u "${DB_USER}" "${DATABASE}" | gzip > "${BACKUP_DIR}/${DATABASE}_$(date +%Y%m%d_%H%M).sql.gz"
Step 4
In your console, type crontab -e to open the cron table where the auto-backup job will be scheduled
add the text below to the bottom of the file
0 0 * * * bash $HOME/backup/script.sh
The entry above assumes that your backup shall run daily at midnight.
That's all you need, folks
;)
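Before relying on the cron job, a quick manual test run is worth it (using the paths from the steps above):
chmod +x ~/backup/script.sh
bash ~/backup/script.sh && ls -lh ~/backup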
Here is a solution for Docker, in a /bin/sh script:
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec echo "[client]" > /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec echo "user=root" >> /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec echo "password=$MYSQL_ROOT_PASSWORD" >> /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec mysqldump --defaults-extra-file=/root/mysql-credentials.cnf --all-databases'
Replace [MYSQL_CONTAINER_NAME] and be sure that the environment variable MYSQL_ROOT_PASSWORD is set in your container.
Hope it helps you as it helped me!
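A sketch of the same idea condensed with printf, and with the dump redirected to a file on the host (the output file name here is arbitrary):
docker exec [MYSQL_CONTAINER_NAME] sh -c 'printf "[client]\nuser=root\npassword=%s\n" "$MYSQL_ROOT_PASSWORD" > /root/mysql-credentials.cnf'
docker exec [MYSQL_CONTAINER_NAME] sh -c 'exec mysqldump --defaults-extra-file=/root/mysql-credentials.cnf --all-databases' > all-databases.sql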
Check your password!
Took me a while to notice that I was not using the correct user name and password in ~/.my.cnf
Check the user/pass basics before adding extra options to crontab backup entries.
If you specify --defaults-extra-file for mysqldump, it has to be the first option.
A cron job works fine with .my.cnf in the home folder, so there is no need to specify --defaults-extra-file.
If you are using mysqlpump (not mysqldump), amend .my.cnf accordingly.
The ~/.my.cnf needs permissions set so only the owner has read/write access with:
chmod 600 ~/.my.cnf
Here is an example .my.cnf:
[mysql]
host = localhost
port = 3306
user = BACKUP_USER
password = CORRECTBATTERYHORSESTAPLE
[mysqldump]
host = localhost
port = 3306
user = BACKUP_USER
password = CORRECTBATTERYHORSESTAPLE
[mysqlpump]
host = localhost
port = 3306
user = BACKUP_USER
password = CORRECTBATTERYHORSESTAPLE
The host and port entries are not required for localhost.
If your Linux user name is the same as the one used for your backups, then user is not required either.
Another tip, while you are creating the cronjob entry for mysqldump: you can set it to be a low-priority task with ionice -c 3 nice -n 19. Combined with the --single-transaction option for InnoDB, you can run backups that will not lock tables or lock out resources that might be needed elsewhere.
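For illustration, a hypothetical crontab entry combining those ideas, assuming credentials come from ~/.my.cnf as above (schedule and paths are placeholders; note the escaped %, which cron otherwise treats specially):
0 2 * * * ionice -c 3 nice -n 19 mysqldump --single-transaction --all-databases | gzip > /backups/all_$(date +\%F).sql.gz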
I have the following.
/etc/mysqlpwd
[mysql]
user=root
password=password
With the following alias:
alias mysqlp='mysql --defaults-extra-file=/etc/mysqlpwd'
To do a restore I simply use:
mysqlp [database] < [file.sql]
This is how I'm backing up a MariaDB database using an expanding variable.
I'm using a "secrets" file in a Docker-Compose setup to keep passwords out of Git, so I just cat that in an expanding variable in the script.
NOTE: The below command is executed from the Docker host itself:
mysqldump -h192.168.1.2 -p"$(cat /docker-compose-directory/mariadb_root_password.txt)" -uroot DB-Name > /backupsDir/DB-Name_`date +%Y%m%d-%H:%M:%S`.sql
This is tested and known to work correctly in Ubuntu 20.04 LTS with mariadb-client.
I'm doing mine a different way, using Plink (the PuTTY command line tool) to connect to the remote host. The command below is in the plink file that runs on the remote server; then I use rsync from Windows to fetch the dump and back it up to an on-prem NAS.
sudo mysqldump -u root --all-databases --events --routines --single-transaction > dump.sql
I have keys set up on the remote host, and use PowerShell scheduled via Task Scheduler to run this weekly.
What about --password="" ?
It worked for me running on 5.1.51:
mysqldump -h localhost -u <user> --password="<password>"
I definitely think it would be better and safer to place the full command line in the root crontab, with credentials.
At least the crontab edit is restricted (readable only by someone who already has root access), so there's no worry about showing the password in plain text...
If you need more than a simple mysqldump, just place a bash script that accepts the credentials as params and performs everything inside...
The bash file is simple:
#!/bin/bash
mysqldump -u"$1" -p"$2" yourdbname > /your/path/save.sql
In the Crontab:
0 0 * * * bash /path/to/above/bash/file.sh root secretpwd >> /var/log/mycustomMysqlDump.log 2>&1
You can specify the password on the command line as follows:
mysqldump -h <host> -u <user> -p<password> <database> > dumpfile
The options for mysqldump are case-sensitive!
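The classic trap is -p versus -P, sketched below: lowercase -p is the password option, uppercase -P is the port.
mysqldump -h <host> -P 3306 -u <user> -p <database> > dumpfile.sql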
What's the easiest way to move mysql schemas (tables, data, everything) from one server to another?
Is there an easy method to move all of this from one server running MySQL to another that is also already running MySQL?
If you are using SSH keys:
$ mysqldump --all-databases -u[user] -p[pwd] | ssh [host/IP] mysql -u[user] -p[pwd]
If you are NOT using SSH keys:
$ mysqldump --all-databases -u[user] -p[pwd] | ssh user@[host/IP] mysql -u[user] -p[pwd]
WARNING: You'll want to clear your history after this to avoid anyone finding your passwords.
$ history -c
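If the link between the servers is slow, here is a variant of the same pipe with compression in the middle (a sketch; same placeholders as above):
$ mysqldump --all-databases -u[user] -p[pwd] | gzip | ssh user@[host/IP] "gunzip | mysql -u[user] -p[pwd]"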
Dump the database using mysqldump, or if you are using phpMyAdmin, export the structure and data.
For mysqldump you will need a console; use the following command:
mysqldump -u <user> -p -h <host> <dbname> > /path/to/dump.sql
Then in the other server:
mysql -u <user> -p <dbname> < /path/to/dump.sql
If you're moving between servers of the same architecture (x86 -> x86, x86_64 -> x86_64), you can just rsync your MySQL datadir from one server to the other. Obviously, you should not run this while the old MySQL daemon is running.
If your databases are InnoDB-based, then you will want to make sure that the InnoDB log files have been purged and their contents merged to disk before you copy the files. You can do this by setting innodb_fast_shutdown to 0 (the default is 1, which does not flush the logs to disk), which will cause the logs to be flushed on the next server shutdown. Log in to MySQL as root and, in the MySQL shell, run:
SET GLOBAL innodb_fast_shutdown=0;
Or by setting the option in your my.cnf and restarting the server to pull in the change, then shutting down to flush the log.
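The my.cnf equivalent is simply:
[mysqld]
innodb_fast_shutdown = 0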
Do something like:
#On old server (notice the ending slash and lack thereof, it's very important)
rsync -vrplogDtH /var/mysql root@other.server:/var/mysql/
#Get your my.cnf
scp /etc/my.cnf root@other.server:/etc/my.cnf
After that you might want to run mysql_upgrade [-p your_root_password] to make sure the databases are up-to-date.
I will say it has worked for me in the (very recent) past (moving from an old server to a new one, both running FreeBSD 8.x), but YMMV depending on how many versions behind you were.