We're running a CentOS server with a lot of MySQL databases at the moment, and what I need is a really easy way for us to back those up. Since many of them are under a couple of megabytes, dumping them, zipping them up, and then sending them to a secure Google Apps account sounds like a pretty good idea.
So what I need is a script that will dump and zip the database, then email it somewhere; if it fails, email somewhere else.
I use the following script to send a small dump to a dedicated mail account.
This of course assumes you can send mails from your machine using the mail command.
#!/bin/bash
gzdate=`/bin/date +%Y-%m-%d_%H%M`;
gzfile=dump_${gzdate}.sql.gz
mailrecpt=recipient@domain.com
dumpuser=username
dbname=mydb
mysqldump --single-transaction --opt -u ${dumpuser} ${dbname} | gzip > ${gzfile}
if [ $? -eq 0 ]; then
( echo "Database Backup from ${gzdate}:"; uuencode ${gzfile} ${gzfile} ) | mail -s "Database Backup ${gzdate}" ${mailrecpt};
else
( echo "Database Backup from ${gzdate} failed." ) | mail -s "FAILED: Database Backup ${gzdate}" ${mailrecpt};
fi
You just need to adapt the variables at the top.
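To run it unattended, a crontab entry along these lines does the job (the script path and the nightly 02:00 schedule here are assumptions):
0 2 * * * /usr/local/bin/mysql_mail_backup.sh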
I have some MySQL databases that I back up nightly from cron, with just your standard mysqldump command. I'd like to feed only the errors from mysqldump and/or aws to a script that will then send the error into Slack. I'm struggling to figure out how to do that in the middle of the command, though. I want to send stderr to slacktee, but stdout to gzip and on to aws s3 cp. So this works fine:
* * * * * mysqldump --host=mysql.example.com --user=mysql | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz
That's just the usual plain ol' backup thing. But I'm trying to squeeze in stderr redirects for when the mysqldump command fails, and I've tried every combination of 2>&1 I could think of; none of them does the trick. Each one either ends with an empty gzip file or stops everything from running.
* * * * * mysqldump --host=mysql.example.com --user=mysql dbname 2>&1 >/dev/null | /usr/local/bin/slacktee | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz
So: if there's an error on the mysqldump command, send just the error to /usr/local/bin/slacktee; if there's no error, just send the mysqldump output down the pipe to gzip.
I want the same thing with aws s3 cp, but that seems to be easier, since I can just put the redirect at the end.
Edited to add: ideally I'm hoping to avoid a separate script for this and to keep it all in one line in cron.
Also adding another edit: 2>&1 >/dev/null was just for this example. I've also tried pointing that redirect at /path/to/slacktee, as well as different combinations of 2> and 1> and some | in different places, and every other way I could think of, and none of that worked either.
I would create a separate script (mysqlbackup.sh) and change the crontab to:
* * * * * mysqlbackup.sh
Your script could look like this (untested):
#!/bin/bash
mysqldump --host=mysql.example.com --user=mysql dbname 2>/tmp/errors | gzip > /tmp/mysqldump.gz
if [ -s /tmp/errors ]   # true if the error file is non-empty
then
    echo "Something went wrong"
else
    echo "OK"
fi
This, of course, needs to be expanded with the aws s3 cp... stuff...
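For completeness, here is a hedged sketch of that expansion, keeping the error-file approach and reusing the slacktee path and S3 target from the question (untested):
#!/bin/bash
ERRFILE=/tmp/mysqldump.errors
# Dump the database: stderr goes to the error file, stdout is gzipped.
mysqldump --host=mysql.example.com --user=mysql dbname 2>"$ERRFILE" | gzip > /tmp/mysqldump.gz
if [ -s "$ERRFILE" ]   # a non-empty error file means the dump had problems
then
    # Send only the errors to Slack.
    /usr/local/bin/slacktee < "$ERRFILE"
else
    # Clean dump: upload it, reporting any aws errors the same way.
    aws s3 cp /tmp/mysqldump.gz s3://backs/backs/whatever.sql.gz 2>"$ERRFILE"
    [ -s "$ERRFILE" ] && /usr/local/bin/slacktee < "$ERRFILE"
fi
And if a single cron line really is a hard requirement, bash process substitution can route stderr alone to slacktee; note the crontab then needs a SHELL=/bin/bash line, because cron's default sh won't parse it:
* * * * * mysqldump --host=mysql.example.com --user=mysql dbname 2> >(/usr/local/bin/slacktee) | gzip | aws s3 cp - s3://backs/backs/whatever.sql.gz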
I have a database server (MySQL) that does not have enough free space to store a dump of MySQL. I want to create the dump on it and send it straight to an archive server, without saving it on the local host, because of the lack of space. How can I do this on my local database server?
All of my servers are Linux-based.
Thanks in advance.
You can do it like this:
mysqldump -u MYSQL_USER -p'MYSQL_PASSWORD' YOUR_DATABASE | gzip -c | ssh you@your_host 'cat > ~/backup.sql.gz'
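The same idea also works in reverse, run from the archive server, pulling the dump over SSH (hostnames and credentials are placeholders):
ssh you@your_db_host "mysqldump -u MYSQL_USER -p'MYSQL_PASSWORD' YOUR_DATABASE | gzip -c" > ~/backup.sql.gz
Either way, the dump never touches the database server's disk; it is compressed and streamed straight over the SSH connection.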
I'm looking for a kind of remote database backup automation.
Then I came across a scripting language commonly used for administrative tasks, Expect, and I believe it could serve my purpose very well.
What I'd like to do is log in to a remote server from my local Linux box using the following Expect script (suppose everything has been set up properly: SSH authentication via a generated key pair, so no password is required).
For the most important part, I'd like to send a mysqldump command to back up my database on that server.
#!/usr/bin/expect
set login "root"
set addr "192.168.1.1"
spawn ssh $login@$addr
expect "#"
send "cd /tmp\r"
expect "#"
send "mysqldump -u root -ppassword my_database > my_database.sql\r"
expect "#"
send "exit\r"
The only problem I found was after the line send "mysqldump -u root ...".
It never waits for that process to finish; it immediately exits the shell via the send "exit\r" line.
What do I do to make it wait until the mysqldump command finishes and then log off the SSH session properly?
I don't know the answer to your question: add exp_internal 1 to the top of the program to see what's going on.
However, since you have ssh keys set up, you don't really need expect at all:
ssh $login@$addr 'cd /tmp && mysqldump -u root -ppassword my_database > my_database.sql'
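One more thing worth ruling out (an educated guess, not a confirmed diagnosis): Expect's default timeout is 10 seconds, and when an expect pattern times out the script simply carries on to the next line, which would produce exactly the behavior described once mysqldump runs long. Disabling the timeout makes the script wait indefinitely for the prompt:
#!/usr/bin/expect
set timeout -1   ;# wait forever instead of the 10-second default
set login "root"
set addr "192.168.1.1"
spawn ssh $login@$addr
expect "#"
send "cd /tmp\r"
expect "#"
send "mysqldump -u root -ppassword my_database > my_database.sql\r"
expect "#"
send "exit\r"
expect eof
The trailing expect eof also gives the SSH session a chance to shut down cleanly before the script exits.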
I want to set up a cron job so that it will automatically back up my MySQL database while the database is running, and then FTP that backup to my backup server.
I assume I can do this using a bash script.
Anyone know of a good way to accomplish this?
Thanks in advance.
This is a very simple approach using the lftp command-line FTP client:
backup.sh:
mysqldump -f [database] | gzip > /backup/[database].dump.gz
lftp -f /backup/lftp.script
lftp.script:
open backup.ftp.example.com
user [username] [password]
cd /backup
mv [database].dump.gz.8 [database].dump.gz.9
mv [database].dump.gz.7 [database].dump.gz.8
mv [database].dump.gz.6 [database].dump.gz.7
mv [database].dump.gz.5 [database].dump.gz.6
mv [database].dump.gz.4 [database].dump.gz.5
mv [database].dump.gz.3 [database].dump.gz.4
mv [database].dump.gz.2 [database].dump.gz.3
mv [database].dump.gz.1 [database].dump.gz.2
mv [database].dump.gz [database].dump.gz.1
put /backup/[database].dump.gz
Note: This approach has a number of issues:
FTP is unencrypted, so anyone able to sniff the network can see both the password and the database data. Piping the dump through gpg -e [key] can be used to encrypt it, but the FTP password stays unencrypted (sftp and scp are better alternatives).
If someone hacks the database server, they can use the credentials in this script to access the FTP server and, depending on permissions, delete the backups (this has happened in the real world: http://seclists.org/fulldisclosure/2009/Jun/0048.html)
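For the gpg option mentioned above, the encryption step slots into the existing pipeline like this (a sketch; the recipient key ID is a placeholder):
mysqldump -f [database] | gzip | gpg -e -r [key] > /backup/[database].dump.gz.gpg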
If the database is very large, the backup file may not fit on the server's hard drive. In this case I suggest the following approach, which uses pipes and ncftpput:
mysqldump -u <db_user> -p<db_password> <db_name> | gzip -c | ncftpput -u <ftp_user> -p <ftp_password> -c <ftp_url> <remote_file_name>
It works fine for me.
I was looking for a portable script or command-line program that can synchronize the schemas of two MySQL databases. I am not looking for a GUI-based solution, because that can't be automated or run with the build/deployment tool.
Basically, what it should do is scan database1 and database2, check the schema differences (tables and indexes), and propose a bunch of SQL statements to run on one so that it ends up with a structure similar to the other, minimizing data damage as much as possible.
If someone can point me to a PHP, Python or Ruby package where this type of solution is implemented, I can try to copy the code from there.
A lot of MySQL GUI tools probably can do this, but I am looking for a scriptable solution.
Edit: sorry for not being clearer: what I am looking for is synchronization of table structure while keeping the data intact as far as possible, not data replication.
More info:
Why replication won't work.
The installation bases are spread around the state.
We want the installer to perform dynamic fixes on the DB based on changes made in the latest version, regardless of which older version the end user might be using.
Changes are mostly things like adding a new column to a table, creating or dropping indexes, or adding or dropping tables used internally by the system (we don't drop user data tables).
If it's a GUI: no, it can't be used. We don't want to bundle a 20 MB app with our installer just for a DB diff, especially when the original installer is less than 1 MB.
Have you considered using MySQL replication?
SQLyog does that and it is awesome. We use it in production often.
I know it's an old question, but it was the first result on Google for what I was searching for (the exact same thing as the initial question).
I found the answer elsewhere, but I don't remember the URL.
It's a script that started from:
mysqldump --skip-comments --skip-extended-insert -u root -p dbName1>file1.sql
mysqldump --skip-comments --skip-extended-insert -u root -p dbName2>file2.sql
diff file1.sql file2.sql
and ended up more like this:
#!/bin/sh
echo "Usage: dbdiff [user1:pass1#dbname1] [user2:pass2#dbname2] [ignore_table1:ignore_table2...]"
dump () {
up=${1%%@*}; user=${up%%:*}; pass=${up##*:}; dbname=${1##*@};
mysqldump --opt --compact --skip-extended-insert -u $user -p$pass $dbname $table > $2
}
rm -f /tmp/db.diff
# Compare
up=${1%%@*}; user=${up%%:*}; pass=${up##*:}; dbname=${1##*@};
for table in `mysql -u $user -p$pass $dbname -N -e "show tables" --batch`; do
if [ "`echo $3 | grep $table`" = "" ]; then
echo "Comparing '$table'..."
dump $1 /tmp/file1.sql
dump $2 /tmp/file2.sql
diff -up /tmp/file1.sql /tmp/file2.sql >> /tmp/db.diff
else
echo "Ignored '$table'..."
fi
done
less /tmp/db.diff
rm -f /tmp/file1.sql /tmp/file2.sql
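An example invocation, matching the usage line above (credentials and database names are placeholders):
./dbdiff root:secret@db_production root:secret@db_staging cache_table
Per the grep in the loop, any table whose name appears in the optional third argument is skipped during the comparison.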
For a long-term, professional solution, you should keep an eye on Schemamatic (http://versabanq.com/products/schemamatic.php). That link shows a GUI app, but all it does is drive a command-line tool. On that page there is a link to its Google Code site, where the C# .NET version of Schemamatic can be found. Your perfect solution would be to add support for MySQL to Schemamatic. For SQL Server it's perfect and does exactly what you mentioned.
Now, for a short-term solution, I would suggest dumping the data you want with MySQL's command-line tools, like:
mysqldump -A -c -uroot -ppassword >bkpmysql.sql
and playing with it, although it may take quite some time to achieve what you want. Schemamatic really seems to me to be your best choice. Let me know if you need any clarification when/if trying Schemamatic.
You might want to look at tools such as dbdeploy (there are Java and .NET versions) and Liquibase, among others.
Most of these, I think, apply sets of changes to a DB in a controlled manner, though; I don't know if they can reverse-engineer existing schemas and compare them.
E.
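For what it's worth, Liquibase does document a diff command that compares two live databases, which covers the reverse-engineer-and-compare case. A rough sketch of an invocation (JDBC URLs and credentials are placeholders, and exact flag spellings vary between Liquibase versions):
liquibase diff \
  --url=jdbc:mysql://localhost/database1 \
  --username=root --password=secret \
  --referenceUrl=jdbc:mysql://localhost/database2 \
  --referenceUsername=root --referencePassword=secret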
Check this one; it is a CodeIgniter database diff script generator:
https://github.com/vaimeo/ci-database-diff-generator