Creating a cron job for mysqldump

I'm trying to create a cron job for database backup.
This is what I have so far:
mysqldump.sh
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
echo "Finished mysqldump $(date)" >> dump.log
Cron job:
32 18 * * * /db-backup/mysqldump.sh
The problem I'm having is that the job doesn't execute through cron, or when I run the script from outside its directory.
Can someone please advise. Are my paths incorrect?
Also, I'm not sure the following line will output errors to dump.log:
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
What worked:
mysqldump -u root -ptest --all-databases | gzip > "../db-backup/backup/backup-$(date).sql.gz" 2> ../db-backup/dump.log
echo "Finished mysqldump $(date)" >> ../db-backup/dump.log

There are a couple of things you can check, though more information is always more helpful (permissions and location of file, entire file contents, etc).
It can never hurt to start the mysqldump.sh file with a shebang line for your environment. I would venture to guess #!/bin/bash would be sufficient.
Instead of mysqldump -u .... use the absolute path /usr/bin/mysqldump (or wherever it is on your system). Absolute paths are always a good idea in scripting, and doubly so under cron, which runs with a minimal environment whose PATH usually doesn't match your interactive shell's.
As for storing the errors in dump.log, I don't believe your syntax is correct: you're redirecting the errors from gzip into dump.log, not the errors from mysqldump. This is a fairly common question, and the usual answer is to redirect mysqldump's stderr before the pipe: mysqldump $PARAMS 2> dump.log | gzip -c > dump-$(date +%F).sql.gz
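Putting these pieces together, here is a minimal sketch of the script (assuming mysqldump is at /usr/bin/mysqldump as suggested above; date +%F is used so the filename contains no spaces):

#!/bin/bash
# Dump all databases, compressing on the fly.
# Absolute paths everywhere, since cron runs with a minimal environment.
BACKUP_DIR=/db-backup/backup
LOG=/db-backup/dump.log

# Redirect mysqldump's stderr before the pipe so its errors,
# not gzip's, end up in the log.
/usr/bin/mysqldump -u root -ptest --all-databases 2>> "$LOG" \
  | gzip > "$BACKUP_DIR/backup-$(date +%F).sql.gz"

echo "Finished mysqldump $(date)" >> "$LOG"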

Related

MYSQLDUMP over Powershell SSH (POSH)

I've been struggling with this for a day now. Basically I want to back up a MySQL database on our webspace with a PowerShell script which runs daily on my Windows computer.
When I use Putty and enter the following command, a backup file is created:
mysqldump XXXX --add-drop-table -u XXXX -p******* > backup/backup.sql
But when I run it from powershell, it will not create the backup file, even when I invoke the exact same command:
$sshsession = New-SSHSession -ComputerName $sshserver -Credential $Creds -Force -Verbose
[string]$backupcmd = "mysqldump XXXX --add-drop-table -u XXXX -p******* > backup/backup.sql"
Write-Output $backupcmd
$backupdb = Invoke-SSHCommand -SSHSession $sshsession -Command "$backupcmd"
It seems like Posh-SSH has problems with the ">" operator; maybe it does not have enough time to execute, I don't know. I also tried things like the Timeout parameter on Invoke-SSHCommand, but nothing has worked yet.
I can't set up cron jobs on the remote server; it's just a webspace with limited functionality. Starting a bash script doesn't work either, since I have no rights to execute scripts on the remote server.
If your need is specifically about the mysqldump command, you can use the --result-file (or just -r) parameter, which makes mysqldump write the output file itself instead of relying on the shell's > redirection.
In this case it would look like this:
$backupcmd = "mysqldump XXXX --add-drop-table -u XXXX -p******* -r backup/backup.sql"
I did not perform tests because I do not have POSH available at this time, but you can refer to the documentation: https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_result-file
Let us know if it works out this way. Hope this helps.

mysqldump - Dump multiple databases from separate mysql accounts to one file

The standard mysqldump command that I use is
mysqldump --opt --databases $dbname --host=$dbhost --user=$dbuser --password=$dbpass | gzip > $filename
To dump multiple databases
mysqldump --opt --databases $dbname1 $dbname2 $dbname3 $dbname_etc --host=$dbhost --user=$dbuser --password=$dbpass | gzip > $filename
My question is how do you dump multiple databases from different MySQL accounts into just one file?
UPDATE: When I say 1 file, I mean 1 gzipped file with the different SQL dumps for the different sites inside it.
Nobody seems to have clarified this, so I'm going to give my 2 cents.
Going to note here that my experience is with Bash, and may be exclusive to it, so variables and looping might work differently in your environment.
The best way to achieve an archive with separate files inside it is to use either zip or tar; I prefer tar due to its simplicity and availability.
Tar itself doesn't do compression, but bundled with bzip2 or gzip it can provide excellent results. Since your example uses gzip I'll use that in my demonstration.
First, let's attack the problem of the MySQL dumps themselves: the mysqldump command does not split its output into separate files (to my knowledge, anyway), so let's make a small workaround that creates one file per database.
mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; done
So now we have a one-liner that dumps each database to its own file; to change where the files are written, simply edit the part after the > symbol.
Next, let's look at the syntax for tar:
tar -czf <output-file> <input-file-1> <input-file-2>
This form lets us specify any number of input files to archive.
The options are broken down as follows.
c - create an archive
z - gzip compression
f - output to file
j - bzip2 compression (use in place of z)
Our next problem is keeping a list of all the newly created files. We'll expand our loop to append each file name to a variable as it runs through the databases found inside MySQL. Note that the loop below reads from process substitution instead of a pipe: a piped while loop runs in a subshell, so any variable set inside it would be lost.
DBLIST=""; mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB"; done
Now we have a DBLIST variable listing every file that was created, and we can extend the one-liner to run the tar command after everything has been dumped.
DBLIST=""; mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB"; done && tar -czf $filename "$DBLIST"
This is a very rough approach and doesn't let you specify databases manually. To do that, the following command will create a tar file containing only the databases you list (the list is left unquoted so the loop iterates over each name, and $DBLIST is left unquoted so tar sees each file separately):
DBLIST=""; for db in "<database1-name> <database2-name>"; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB.sql"; done && tar -czf $filename "$DBLIST"
The technique of looping through the databases reported by MySQL comes from the following stackoverflow.com question, "mysqldump with db in a separate file", which was simply modified to fit your needs.
And to have the one-liner clean up after itself, simply add the following at the end of the command ($DBLIST is again left unquoted so rm sees each file separately):
&& rm $DBLIST
making the command look like this
DBLIST=""; for db in "<database1-name> <database2-name>"; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB.sql"; done && tar -czf $filename "$DBLIST" && rm "$DBLIST"
For every MySQL server account, dump the databases into separate files.
Then concatenate the dump files and compress the result:
cat dump_user1.sql dump_user2.sql | gzip > super_dump.gz
There is a similar post on Superuser.com website: https://superuser.com/questions/228878/how-can-i-concatenate-two-files-in-unix
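For example, a minimal sketch (the account names, passwords, and database names are placeholders):

mysqldump --opt --databases db1 --user=account1 --password=pass1 > dump_user1.sql
mysqldump --opt --databases db2 --user=account2 --password=pass2 > dump_user2.sql
cat dump_user1.sql dump_user2.sql | gzip > super_dump.gz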
Just in case "multiple db" literally means "all db" for you:
mysqldump -u root -p --all-databases > all.sql

MySQL: Why does basic MySQLdump on db table fail with "Permission denied"

This should be quick and simple, but after researching on Google quite a bit I am still stumped. I am mostly a newbie with server admin, the CLI, and MySQL.
I am developing my PHP site locally, and now need to move some new MySQL tables from my local dev setup to the remote testing site. First step for me is just to dump the tables, one at a time.
I successfully login to my local MySQL like so:
Govind% /usr/local/mysql/bin/mysql -uroot
but while in this dir (and NOT logged into MySQL):
/usr/local/mysql/bin
...when I try this
mysqldump -uroot -p myDBname myTableName > myTestDumpedTable.sql
..then I keep getting this:
"myTestDumpedTable.sql: Permission denied."
Same result if I do any variation on that (try to dump the whole db, drop the '-p', etc.)
I am embarrassed as I am sure this is going to be incredibly simple, or just reveal a gaping (basic) hole in my knowledge... but please help ;-)
The answer came from a helpful person on the MySQL list:
As you guys (Anson and krazybean) were thinking, I did not have permission to write to the /usr/local/mysql/bin/ dir. But starting from any other directory, calls to mysqldump were failing because my shell PATH variable (if I said that right) was not yet set up to handle mysqldump from another dir. Also, for some reason I do not really understand yet, I needed to use a full path on the output, even if I was calling mysqldump effectively, and even if I had permission to write to the output dir (e.g. ~/myTestDumpedTable.sql). So here was my ticket, for now (quick answer):
Govind% /usr/local/mysql/bin/mysqldump -uroot -p myDBname myTableName > /Users/Govind/myTestDumpedTable.sql
You can write to wherever your shell user has permission to do so. I just chose my user's home dir.
Hope this helps someone someday.
Cheers.
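As a follow-up to the PATH point above: to make mysqldump callable from any directory, you can put the MySQL bin directory on your PATH (a sketch for a bash-style shell; the Govind% prompt suggests tcsh, where the equivalent would be set path = (/usr/local/mysql/bin $path)):

# Add to your shell startup file (e.g. ~/.bash_profile), then reload it:
export PATH="/usr/local/mysql/bin:$PATH"

# Now this works from any directory:
mysqldump -uroot -p myDBname myTableName > ~/myTestDumpedTable.sql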
Generally I stick with defining the hostname anyway, but since you being root doesn't seem like the problem, I would question where you are writing this to. What happens when you dump to > ~/myTestDumpedTable.sql?
I was facing the same problem; the issue is the user's access to the root-owned bin directory.
Switch your user to root:
sudo -s
Then execute the command
mysqldump -uroot -p homestayadvisorDB > homestayadvisor_backup.sql
This will resolve the issue. Let me know if this doesn't work.
Take a look at the man page for mysqldump for correct argument usage. You can put a space between the -u flag and the username, like so:
mysqldump -u root -p myDBname myTableName > myTestDumpedTable.sql
Alternatively you can do
mysqldump --user=root -p myDBname myTableName > myTestDumpedTable.sql
Since you're not providing a password in the list of arguments, you should be prompted for one. You can always provide the password in the list of arguments, but the downside to that is it appears in cleartext and will show up in the shell's command history.
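If you want to keep the password off the command line entirely, one common alternative (a sketch, not from the answer above; the password value is a placeholder) is a client options file that mysqldump reads automatically:

# Create ~/.my.cnf with credentials for the mysqldump client group:
cat > ~/.my.cnf <<'EOF'
[mysqldump]
user=root
password=yourpasswordhere
EOF
chmod 600 ~/.my.cnf   # restrict it to your user

# No -u/-p needed now, and nothing sensitive enters shell history:
mysqldump myDBname myTableName > ~/myTestDumpedTable.sql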
In my case I'd created the directory with $ sudo mkdir /directory/to/store/sql/files. The owner of that directory is root. So changing the owner by using $ sudo chown me:me /directory/to/store/sql/files and also changing permissions to maybe $ sudo chmod 744 /directory/to/store/sql/files did the trick for me.
mysqldump doesn't work with sudo. If you are using sudo mysqldump, then try the solution below:
sudo su
mysqldump -u[username] -p[password] db_name > newbackupfile.bkp
You should provide a full path for the SQL backup file, such as:
mysqldump -u root -p databasexxx > /Users/yourusername/Sites/yoursqlfile.sql
I think you're missing the ./ from the command, try:
being inside
/usr/local/mysql/bin$ ./mysqldump -u root -p myDBname > "/Users/yourUserName/Documents/myTestDumpedTable.sql"
So you are running an executable, and in Linux you run an executable in the current directory with ./myprogram.
I found this just today. For me, on my Mac (OS X), I didn't use -p, maybe because no password was needed, I don't know. Try also:
./mysqldump -u root myDBname > "/Users/yourUserName/Documents/myTestDumpedTable.sql"

what's the difference between -C and gzipping a mysqldump?

I want to do a mysqldump directly to my remote host. I've seen suggestions to use the -C switch or to use gzip to compress the data on the fly (and not in a file). What's the difference between the two? How do I know if both machines support the -C switch? How would I do a gzip on the fly? I am using Linux on both machines.
mysqldump -C -u root -p database_name | mysql -h other-host.com database_name
The -C option uses compression that may be present in the MySQL client-server protocol. Gzip'ing would use the gzip utility in a pipeline. I'm pretty sure that the latter would not do any good since the compression and uncompression would occur on the same machine in this case. If the machine that you are dumping from is local, then the -C option is probably just wasting CPU cycles - it compresses the protocol messages between mysqldump and the mysqld daemon.
The only command pipeline that might make sense here is something like:
mysqldump -u root -p database_name | mysql -C -h other-host -Ddatabase_name -B
The output of mysqldump is going to the pipeline which the mysql command-line client will read. The -C option tells mysql to compress the messages that it is sending to other-host. The -B option disables buffering and interactive behavior in the mysql client which might speed things up a little more.
It would probably be faster to do something like:
mysqldump -u root -p database_name | gzip > dump.gz
scp dump.gz user@other-host:/tmp
ssh user@other-host "gunzip -c /tmp/dump.gz | mysql -Ddatabase_name -B; rm /tmp/dump.gz"
Provided that you have SSH running on the other machine anyway.
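To avoid the temporary file entirely, you could also stream the compressed dump through SSH in a single pipeline (a sketch, under the same assumption that SSH is available and database_name exists on the remote side):

mysqldump -u root -p database_name | gzip -c | ssh user@other-host "gunzip -c | mysql -Ddatabase_name -B"

gzip compresses the stream locally, it travels compressed over the network, and gunzip -c decompresses it on the remote side before feeding it to mysql.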
I always read the man pages for these types of things. If you look at the man page for mysqldump you can see that the -C (that's a capital C) flag makes mysqldump compress all data in transit only. This makes it stream compressed but arrive, as you will see it, uncompressed. You could also dump the file to the local system and then transfer a gzip of everything at once.
from the man page:
o --compress, -C
Compress all information sent between the client and the server if
both support compression.
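As for checking whether both ends can use it, a quick sketch (the exact --help wording varies by version):

# If --compress / -C shows up, the client tool supports protocol compression:
mysqldump --help | grep -i compress
mysql --help | grep -i compress

Compression is negotiated during the connection handshake, which is what the man page's "if both support compression" is about; if one side lacks it, the connection simply proceeds uncompressed.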

Importing all MySQL databases

I run mysqldump --all-databases nightly as a backup. But on importing this dump into a clean installation, I obviously run into a couple of issues.
I obviously can't (and don't want to) overwrite the new information_schema.
All my users and permissions settings are lost, unless I overwrite the mysql database.
What is standard practice in this situation? Parse information_schema out of the .sql file before importing? And do I overwrite the mysql database or not?
You will not have problems with the information schema:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
mysqldump does not dump the INFORMATION_SCHEMA database. If you name that database explicitly on the command line, mysqldump silently ignores it.
For excluding databases, try this bash script:
for DB in $(echo "show databases" | mysql -u <username> -p'<password>' | grep -v Database | grep -v <some_db_to_exclude>)
do
mysqldump -u <username> -p'<password>' ${DB}
done
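Note that this writes every dump to standard output, so redirect the script's output when you run it (e.g. ./backup.sh > backup.sql). A variant sketch that skips the schema databases as well and writes each remaining database to its own file (<username>, <password>, and the excluded database are placeholders, as above):

for DB in $(echo "show databases" | mysql -u <username> -p'<password>' | grep -v Database | grep -v information_schema | grep -v performance_schema | grep -v <some_db_to_exclude>)
do
  mysqldump -u <username> -p'<password>' ${DB} > "${DB}.sql"
done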