I have a gpg-encrypted dump of a MySQL database.
I'm trying to restore it, decrypting in a single command, but it doesn't work, I think, because I need to enter two passwords, one for decrypting and one for accessing the database, and that seems to mess things up.
What I do is:
gpg --decrypt dump.sql.gpg | mysql -u user -p db_name
It asks me for the DB password and the gpg passphrase "together", so that I can't type both.
Is it possible to have the two password prompts separated?
Thanks
Some parameters are missing:
mute gpg output to terminal: --quiet --no-tty
batch mode: --batch
--decrypt outputs the decrypted data to the console instead of to a file
remove the spaces between -u and the username and between -p and the password
provide the database you want the data restored into (it may already be specified in the dump, though)
Something like:
gpg --quiet --no-tty --batch --decrypt dump.sql.gpg | mysql -uUSER -pPASSWORD [database]
Hope it helps!
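If you want only the MySQL password prompt to stay interactive, one option is to let gpg read its passphrase non-interactively. A sketch, assuming you store the passphrase in a file only you can read (the file name here is made up; with GnuPG 2.1+ you may also need --pinentry-mode loopback):
gpg --quiet --batch --pinentry-mode loopback --passphrase-file ~/.dump_passphrase --decrypt dump.sql.gpg | mysql -u user -p db_name
That way gpg never touches the terminal and mysql can prompt for the DB password alone.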
I'm writing a bash script to do some db stuff. New to MySQL. I'm on Mac and have MySQL installed via homebrew.
Am using username "root" right now and there isn't a pw set. I included the pw syntax below just to help others out that may have a pw.
My goal is to have mysql commands be as "clean" as possible in my bash script
Not a huge deal, but I would like to do this if possible.
Example
# If I can do it without logging in (*ideal)
mysql CREATE DATABASE dbname;
# Or by logging in with - mysql -u root -pPassword
CREATE DATABASE dbname;
# Instead of
mysql -u root -pPassword -e"CREATE DATABASE dbname";
Tried to simplify it. I have a handful of things to do, so I would rather keep my code cleaner if possible. I tried logging in with the bash script, but the script stopped once logged into MySQL and didn't run any commands.
Another option I was considering (but don't really like) would be to keep the username and pw string in a variable and call it for every command, like so:
# Set the login string variable
login_details="-u root -p password -e"
# example command
mysql $login_details"CREATE DATABASE dbname";
So any ideas?
Write a new bash script file and run it after putting all your commands into it. Don't forget to give the right username and password in your bash script.
For bash script:
#!/bin/bash
mysql -u root -pSeCrEt << EOF
use mysql;
show tables;
EOF
If you want to run a single mysql command:
mysql -u [user] -p[pass] -e "[mysql commands]"
Example:
mysql -h 192.168.1.10 -u root -pSeCrEt -e "show databases"
To execute multiple mysql commands:
mysql -u $user -p$password -Bse "command1;command2;....;commandn"
Note:
-B is for batch: print results using tab as the column separator, with each row on a new line. With this option, mysql does not use the history file. Batch mode results in nontabular output format and escaping of special characters.
-s is silent mode: produce less output.
-e is to execute the statement and quit.
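If the goal is keeping the commands in the script as clean as possible, another common approach is a ~/.my.cnf file that the mysql client reads automatically. A sketch, assuming you are OK with credentials in a plain-text file (chmod it to 600):
# ~/.my.cnf
[client]
user=root
password=SeCrEt
With that in place, the script lines shrink to:
mysql -e "CREATE DATABASE dbname;"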
I am trying to create a bash script that uses mysqldump to create a backup of the database that is specified as parameter. However mysqldump fails with an access denied error. Using the same command directly (copying it to the shell an executing it) works without any problem.
#!/bin/bash
# ... use parameters to get db name and password
# build the mysqldump command and execute it...
command="mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u ${database} -p'${pw}' --extended-insert ${database} | gzip > ${path}"
echo "$command"
echo ""
$command
This gives me the following output:
$ ./dbbak DBUSER DBNAME PASSWORD
mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u DBUSER -p'PASSWORD' --extended-insert DBNAME | gzip > /path/to/backup/backup.sql.gz
Warning: Using a password on the command line interface can be insecure.
-- Connecting to 127.0.0.3...
mysqldump: Got error: 1045: Access denied for user 'DBUSER'@'localhost' (using password: YES) when trying to connect
As said before: When I copy the echoed mysqldump command and execute it directly, the backup works just fine.
What is the problem here? Since the command is executed correctly when used manually, all parameters (password, username, etc.) seem to be correct. Additionally, the bash script is executed with the same user account as the manual command.
So why does the manual execution work while the bash script fails?
EDIT:
As Jens pointed out in his comment, removing the quotes from the password will solve the problem: ...-p${pw}... will work. BUT this leads to a new problem if the password contains special characters like $ < > ...
I assume that the problem with the quotes is how bash parses the string. Meanwhile I found some docs saying that it is a bad habit to store commands in variables and execute them; instead, one should execute commands directly. However, the following does not work either:
result=$(mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u ${database} -p'${pw}' --extended-insert ${database} | gzip > ${path})
When executing this with bash -x dbbak the output shows the problem:
...
++ mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u DBUSER '-p'\''DBPASS'\''' --extended-insert DBNAME
While I do understand why the quotes around DBPASS are added ('DBPASS' --> \''DBPASS'\'), I do not understand why there are also quotes around -p.
How do I get rid of these quotes when executing the command?
You can either:
store the password in an environment variable MYSQL_PWD
store the password in a plain-text file .my.cnf which you need to put into the home directory of the user that executes the script
use the mysql_config_editor utility to store the password in an encrypted file
The first one is the easiest to use/implement but obviously the least secure.
I recommend taking a look at the documentation where all the possibilities are described. ;)
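For the third option, a minimal sketch (the login-path name backup is arbitrary; mysql_config_editor prompts for the password and stores it encrypted in ~/.mylogin.cnf):
mysql_config_editor set --login-path=backup --host=127.0.0.3 --user=DBUSER --password
mysqldump --login-path=backup -alv --default-character-set=utf8 --extended-insert DBNAME | gzip > /path/to/backup/backup.sql.gz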
Configure it by .cnf file and provide it in --defaults-file
mysqldump --defaults-file=~/my_mysql.cnf db table > table.sql
In ~/my_mysql.cnf:
[mysqldump]
user=user_name
password=my_password
host=my_host
This is also safe if you keep the script under version control: you can save a different my_mysql.cnf per environment.
Removing the single quotes around the password solved it for me.
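Regarding the EDIT about the unwanted quotes: a sketch that sidesteps the problem entirely is to store the arguments in a bash array instead of a flat string, so each word stays a separate argument (special characters included) and the pipe stays outside the variable:
# every array element becomes exactly one argument to mysqldump
dump_args=(-alv -h127.0.0.3 --default-character-set=utf8 -u "${database}" -p"${pw}" --extended-insert "${database}")
mysqldump "${dump_args[@]}" | gzip > "${path}"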
I found this code but don't quite understand what the command is doing.
sudo -u test-user mysql -U test_traffic traffic < ./phoenix/data/sql/lib.model.schema.sql
I know the last part is using lib.model.schema.sql to create the tables and fields.
The first part I don't quite understand: sudo -u test-user mysql -U test_traffic traffic
I know the sudo and mysql commands.
Please explain?
Thanks
Let's look at it bit by bit. Firstly the format
sudo -u username command
is an instruction to run command (which might be simple or complex) as the user username. So in your example, you are running the mysql command as the user test-user. You should note that this includes all the parameters to the mysql command - that's the entire rest of the line.
The command
mysql -U test_traffic traffic < ./phoenix/data/sql/lib.model.schema.sql
appears corrupt (certainly running it on 5.0.51a fails). It would make sense if the -U were a -u, which would indicate that the command is to be executed as the mysql user test_traffic. In that case you would have an instruction to import the sql file into the traffic database.
So the combined instruction says, import the lib.model.schema.sql file into the database test_traffic using the mysql user test_traffic and executing the entire command as if you were logged-in as the user test-user.
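In other words, the corrected command would presumably read (add -p if test_traffic has a password):
sudo -u test-user mysql -u test_traffic traffic < ./phoenix/data/sql/lib.model.schema.sql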
Try the steps below for mysql:
mysql -h hostname -u username -p
mysql> use databasename;
mysql> source path/to/scriptfile
If you want to inject the schema.sql file into your database with a shell script, simply use:
mysql -h [host] -u [username] -p[password] -D [database] < your_file
If you want to dynamically choose which file should be loaded, replace your_file by $1 and pass the name of the file as an argument to your script.
Also take care with the -p option: there is no space between -p and your password.
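A minimal sketch of such a script (all names are made up; save it as load_sql.sh and run ./load_sql.sh your_file.sql):
#!/bin/bash
# usage: ./load_sql.sh <sql_file>
mysql -h localhost -u root -pMyPassword -D my_database < "$1"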
How can I back up a MySQL database which is running on a remote server? I need to store the backup file on the local PC.
Try it with mysqldump:
mysqldump --host=the.remotedatabase.com -u yourusername -p yourdatabasename > /User/backups/adump.sql
Have you got access to SSH?
You can use this command in shell to backup an entire database:
mysqldump -u [username] -p[password] [databasename] > [filename.sql]
This is actually one command followed by the > operator, which says, "take the output of the previous command and store it in this file."
Note: The lack of a space between -p and the mysql password is not a typo. However, if you leave the -p flag present but the actual password blank, you will be prompted for it. Sometimes this is recommended to keep passwords out of your bash history.
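So the prompting variant, which keeps the password out of the process list and your history, is simply:
mysqldump -u [username] -p [databasename] > [filename.sql]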
No one mentions anything about the --single-transaction option. People should use it by default for InnoDB tables to ensure data consistency. In this case:
mysqldump --single-transaction -h [remoteserver.com] -u [username] -p[password] [yourdatabase] > [dump_file.sql]
This makes sure the dump is run in a single transaction that's isolated from the others, preventing backup of a partial transaction.
For instance, consider you have a game server where people can purchase gears with their account credits. There are essentially 2 operations against the database:
Deduct the amount from their credits
Add the gear to their arsenal
Now if the dump happens between these operations, restoring the backup would leave the user without the purchased item, because the second operation isn't in the SQL dump file.
While it's just an option, there is basically no reason not to use it with mysqldump.
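In SQL terms (table and column names are made up for illustration), the two operations from the example belong together; a dump taken with --single-transaction sees either both rows or neither:
-- deduct the credits and add the gear atomically
START TRANSACTION;
UPDATE accounts SET credits = credits - 100 WHERE user_id = 42;
INSERT INTO arsenal (user_id, gear_id) VALUES (42, 7);
COMMIT;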
This topic shows up on the first page of my Google results, so here's a useful little tip for newcomers.
You could also dump the sql and gzip it in one line:
mysqldump -u [username] -p[password] [database_name] | gzip > [filename.sql.gz]
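To restore such a compressed dump later, the reverse one-liner would be:
gunzip < [filename.sql.gz] | mysql -u [username] -p[password] [database_name]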
mysqldump -h [domain name/ip] -u [username] -p[password] [databasename] > [filename.sql]
Tried all the combinations here, but this worked for me:
mysqldump -u root -p --default-character-set=utf8mb4 [DATABASE TO BE COPIED NAME] > [DUMP FILE NAME].sql
If you haven't installed mysql-client yet and are using a Docker container instead:
sudo docker exec MySQL_CONTAINER_NAME /usr/bin/mysqldump --host=192.168.1.1 -u username --password=password db_name > dump.sql
You can directly pipe it to the remote server where you wish to copy your data to:
mysqldump -u your_db_user_name -p --set-gtid-purged=OFF --triggers --routines --events --compress --skip-lock-tables --verbose your_local_sql_db_name | mysql -u your_db_user_name -p -h your_remote_server_ip your_remote_server_db_name
You need to have created the database on your remote SQL server already.
Using the above command, I was able to copy from my local MySQL server version 8.0.23 to my remote MySQL server running 8.0.25.
This is how you would restore a backup after you have successfully backed up your .sql file:
mysql -u [username] [databasename]
And choose your sql file with this command:
source MY-BACKED-UP-DATABASE-FILE.sql
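Equivalently, you can skip the interactive source step and redirect the file in from the shell:
mysql -u [username] [databasename] < MY-BACKED-UP-DATABASE-FILE.sql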
I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysql dump command. All the examples on the internet suggest doing something like
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
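If that applies, the dump line with that option would be something like:
mysqldump --hex-blob -u user -ppass myDBName > NewDBName.out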
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump to another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found on a 200 MB database with 10,000+ records in many of the tables. I used the Linux 'time' command to get the actual times.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally I think this is much faster than doing a dump remotely; then you can compress the .sql file and transfer it manually.
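For the manual compress-and-transfer route mentioned above, a sketch (host and paths are made up):
gzip db-backups.sql
scp db-backups.sql.gz user@remote_host:/path/to/backups/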