Executing MySQL commands in a shell script?

I’m looking to create a deploy script that I can run from a terminal and it automatically deploys my site from a repository. The steps I’ve identified are:
Connect to remote server via SSH
Fetch latest version of site from remote repository
Run any SQL patches
Clean up and exit
I’ve placed the SSH connection and git pull commands in my shell file, but what I’m stuck on is MySQL, since it is an (interactive?) shell itself. So in my file I have:
#!/bin/bash
# connect to remote server via SSH
ssh $SSH_USER@$SSH_HOST
# update code via Git
git pull origin $GIT_BRANCH
# connect to the database
mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME
# run any database patches
# disconnect from the database
# TODO
exit 0
As you can see, I’m connecting to the database, but not sure how to then execute any MySQL statements.
At the moment, I have a directory containing SQL patches in numerical order: 1.sql, 2.sql, and so on. Then in my database, I have a table that simply records the last patch that was run. So I’d need to do a SELECT statement, read the last patch that was applied, and then run any necessary patches.
How do I issue the SELECT statement to the mysql prompt in my shell script?
Then what would be the normal flow? Close the connection and re-open it, passing a patch file as the input? Or run all required patches in one connection?
I assume I’ll check the last patch applied and loop over any patches in between?
Help here would be greatly appreciated.

Assuming you want to do all the business on the remote side:
ssh $SSH_USER@$SSH_HOST << END_SSH
git pull origin $GIT_BRANCH
mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME << END_SQL
<sql statements go here>
END_SQL
END_SSH
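Note that because the here-document delimiters are unquoted, variables such as $GIT_BRANCH and $MYSQL_PASSWORD are expanded on your local machine before the commands are sent to the remote host, which is usually what you want here.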

You could capture the output from mysql using Perl or similar and use it to drive your control flow.
Put your mysql commands into a file as you would enter them.
Then run as: mysql -u <user> -p -h <host> < file.sqlcommands.
You can also put queries on the mysql command line using '-e'. Put your 'select max(patch) from …' query there and read the output in your script.
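Tying those pieces together, a rough sketch of the patch loop might look like this (the patch_history table and patch_id column are assumed names; adjust to your schema):
#!/bin/bash
# Read the last applied patch number; -N suppresses the column header,
# COALESCE covers the case where no patch has been recorded yet.
# (patch_history / patch_id are assumed names.)
LAST=$(mysql -N --user=$MYSQL_USER --password=$MYSQL_PASSWORD \
  --database=$MYSQL_DBNAME -e "SELECT COALESCE(MAX(patch_id), 0) FROM patch_history")

# Apply every numbered patch above the last one, in numeric order
for FILE in $(ls patches | sort -n); do
  NUM=${FILE%.sql}
  if [ "$NUM" -gt "$LAST" ]; then
    echo "Applying patch $NUM..."
    mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD \
      --database=$MYSQL_DBNAME < patches/$FILE
    mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD \
      --database=$MYSQL_DBNAME -e "UPDATE patch_history SET patch_id = $NUM"
  fi
done
This opens a fresh connection per patch, which also answers the flow question: either way works, but one connection per patch keeps the bookkeeping simple.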

cat *.sql | mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME
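One caveat: the shell expands *.sql in lexical order, so 10.sql sorts before 2.sql. With purely numeric names you can force numeric order:
cat $(ls *.sql | sort -n) | mysql --user $MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DBNAME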

Related

How to copy a SQL file from one server to another server's MySQL database

I tried to copy a SQL file from one server into another server's MySQL database:
ssh -i keylocation user@host 'mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' | < databasedump.sql
databasedump.sql is a file on server A. I want to copy the data from that dump file into a database on server B. I tried to connect via SSH to that server (I need the key file for that) and then copy the data, but when I run this command in the console, nothing happens. Any help?
Are you able to secure-copy the file over to server B first, and then SSH in for the MySQL import? Example:
scp databasedump.sql user@server-B:/path/to/databasedump.sql
ssh -i keylocation user@host 'mysql --user=root --password="pass" --database=db_name < /path/to/databasedump.sql'
Edit: typo
I'm not entirely sure how mysql handles stdin, so one thing you can do that should work in one command is
ssh -i keylocation user@host 'cat - | mysql --user=root --password="pass" --host=ipaddress additional_content Additional_Content' < databasedump.sql
However, it's better to copy the file over first with scp and then import it with mysql. See Dan's answer.

MySQLdump backup script no longer works, getting "mysqldump: unknown variable 'local-infile=0'"

I've recently upgraded a server to Debian 9 and MySQL to the latest version. I have a simple backup script that I run before performing any work on a production site, but this time, when running my script, I encounter the following:
mysqldump: unknown variable 'local-infile=0'
Here is my script. What's going on?
#!/bin/bash
# [skipping commentary]
SITE=prod
# Set the directory that the Drupal root is IN, no trailing slashes
DROOT=[website_root]
# Set the directory for storing backups, no trailing slashes
BUD=/$DROOT/notes/backups
# Don't edit; End of defining variables
echo Doing a full back up...
echo Prepare to enter MySQL password...
# tar -czf $BUD/$SITE-files-$(date +'%Y%m%d%H%M%S').tgz $DROOT/docroot
mysqldump -u mysql_user -p drupal > $BUD/$SITE-drupal-$(date +'%Y%m%d%H%M%S').sql
mysqldump -u mysql_user -p civicrm > $BUD/$SITE-civicrm-$(date +'%Y%m%d%H%M%S').sql
ls -lh $BUD
pwd
echo Finished with backups...
MySQL version 10.1.37-MariaDB-0+deb9u1 Debian 9.6
Edit: When I SSH in and run mysqldump with the correct permissions, I get the same issue. Weirdest thing: the cron job that runs a similar process is backing up my databases as ordered.
The best way to solve this is simply to rename the variable to:
loose-local-infile=1
This will allow mysqldump to merely throw a warning, rather than a fatal error.
The suggestion to comment out the variable is not an option if you want LOAD DATA INFILE functionality out of the box, and MySQL 8+ for security reasons requires you to set this variable for both server (mysqld) and client. It is the [client] variable grouping in your config that chokes mysqldump if you don't add the "loose-" prefix to local-infile.
It seems the new version you installed was compiled without support for the local-infile parameter. And because the package management system (usually) keeps your current configuration file, you can try to find this parameter in the my.ini file and comment it out.
This parameter controls the LOAD DATA LOCAL functionality, but it seems to have some potential security issues (more here).
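For reference, the relevant part of my.cnf (or my.ini) would then look something like this; whether you keep the value at 0 or 1 is up to you:
[client]
# was: local-infile=0, which mysqldump rejects as an unknown variable
loose-local-infile=1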

Crontab command not running

I have a database backup command that takes a MySQL dump and then uploads that dump file to AWS S3. When I run the command as a normal user it works perfectly, but when I use the same command in a cron job it fails.
I have checked the syslog and there is no error message after the job saying there was a problem. There is only a line saying the job was run, and then it goes on to run the next cron job.
The command is as follows, I have removed the sensitive parts:
mysqldump -u {{ db_user }} -p{{ db_password }} {{ db_name }} > /home/db_backup.sql | aws s3 cp /home/db_backup.sql s3://{{ s3_url }}/$(date --iso-8601=seconds)_db.sql --profile backupprofile
When this command is run by a normal user, there is a warning output about using the MySQL password on the command line, but this is essential for the command to work without interaction. There is also a second line of output from S3 saying that the upload worked. Could these outputs be affecting the cron job in some way?
You will need to use full paths in your cron jobs; I see you've left them off mysqldump and also aws. Run whereis mysqldump and whereis aws to find the full paths you need.
Try checking the environment variables; cron passes a minimal set of environment variables to your jobs. You can set the PATH easily inside the crontab:
PATH=/usr/local/bin:/usr/sbin
Also, many cron implementations execute commands using sh, and you might be using another shell in your script. You can tell cron to run all commands in bash by setting the shell at the top of your crontab:
SHELL=/bin/bash
Cron treats the % symbol specially; you need to escape it if it appears anywhere in your command.
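Putting those points together, the top of a crontab might look like this (binary locations and the schedule are illustrative). Note also that the original command chains the upload with | after a redirect, so aws can start before the dump has finished; using && fixes that.
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin

# % is special to cron, hence \% ; && makes the upload wait for the dump
0 2 * * * /usr/bin/mysqldump -u db_user -pdb_password db_name > /home/db_backup.sql && /usr/local/bin/aws s3 cp /home/db_backup.sql s3://bucket/$(date +\%F)_db.sql --profile backupprofile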
If the first output after running your command is interactive, that is, if it asks you to hit enter or something like that, then that is the issue; otherwise there shouldn't be any problem with it.

What is wrong with this bash script (cron + mysql)

I'm using a bash script (sync.sh), run by cron, that is supposed to sync a file to a MySQL database. It works by copying a file from an automatic upload location, parsing it by calling an SQL script which in turn calls other stored procedures inside MySQL, and at the end it emails a report text file as an attachment.
But it seems something is not working, as nothing happens to the MySQL database. All the other commands are executed (the first and last lines: copying the initial file and sending the e-mail).
MySQL command when run separately works perfectly.
Server is Ubuntu 16.04.
Cron job is run as root user and script is part of crontab for root user.
Here is the script:
#!/bin/bash
cp -u /home/admin/web/mydomain.com/public_html/dailyxchng/warehouse.txt /var/lib/mysql-files
mysql_pwd=syncit4321
cd /home/admin/web/mydomain.com/sync
mysql -u sync -p$mysql_pwd --database=database_name -e "call sp_sync_report();" > results.txt
echo "<h2>Report date $(date '+%d/%m/%Y %H:%M:%S')</h2><br/><br/> <strong>results.txt</strong> is an attached file which contains sync report." | mutt -e "set content_type=text/html" -s "Report date $(date '+%d/%m/%Y %H:%M:%S')" -a results.txt -- recipient#mydomain.com
Cron will execute the script using a very stripped-down environment. You probably want to add the full path to the mysql command to the cron script.
You can find the full path by running
which mysql
at the prompt, or you can add an expanded PATH to the cron invocation:
1 2 * * * PATH=/usr/local/bin:$PATH scriptname
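So if which mysql prints /usr/bin/mysql (verify on your own system), the mysql line in the script becomes:
/usr/bin/mysql -u sync -p$mysql_pwd --database=database_name -e "call sp_sync_report();" > results.txt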

How to schedule a MySQL database backup on a remote Ubuntu server to a Dropbox folder on a Windows PC?

I have a MySQL database that I want to back up daily to my Dropbox folder on my Windows PC.
How can I do that automatically from Windows 7?
One of the simplest ways to back up a MySQL database is by creating a dump file, and that is what mysqldump is for. Please read the documentation for mysqldump.
In its simplest syntax, you can create a dump with the following command:
mysqldump [connection parameters] database_name > dump_file.sql
where the [connection parameters] are those you need to connect your local client to the MySQL server where the database resides.
mysqldump will create a dump file: a plain-text file containing the SQL instructions needed to create and populate the tables of a database. The > character redirects the output of mysqldump to a file (in this example, dump_file.sql). You can, of course, compress this file to make it easier to handle.
You can move that file wherever you want.
To restore a dump file:
Create an empty database (let's say restore) in the destination server
Load the dump:
mysql [connection parameters] restore < dump_file.sql
There are, of course, some other "switches" you can use with mysqldump. I frequently use these:
-d: this will tell mysqldump to create an "empty" backup: the tables and views will be exported, but without data (useful if all you want is a database "template")
-R: include the stored routines (procedures and functions) in the dump file
--delayed-insert: uses insert delayed instead of insert for populating tables
--disable-keys: Encloses the insert statements for each table between alter table ... disable keys and alter table ... enable keys; this can make inserts faster
You can include the mysqldump command and any other compression and copy / move command in a batch file.
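For example, a dump that includes stored routines, compressed on the fly (user and database names are placeholders):
mysqldump -u backup_user -p -R mydb | gzip > mydb_$(date +%F).sql.gz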
My solution for extracting a backup and pushing it onto Dropbox is below.
A sample Ubuntu script can be downloaded here.
In brief
Prepare a batch script backup.sh
Run backup.sh to create a backup version e.g. backup.sql
Copy backup.sql to Dropbox folder
Schedule an Ubuntu/Windows task to run backup.sh, e.g. every day at night
Detailed steps
All about backing up and restoring a MySQL database can be found here.
Back up to compressed file
mysqldump -u [uname] -p [dbname] | gzip -9 > [backupfile.sql.gz]
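The compressed dump can later be restored without unpacking it first:
gunzip < [backupfile.sql.gz] | mysql -u [uname] -p [dbname]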
How to connect remotely from Windows to execute the 'backup' command can be found here.
plink.exe -ssh -pw -i "Path\to\private-key\key.ppk" -noagent username@server-ip
How to bring the file to Dropbox can be found here
Create an app
https://www2.dropbox.com/developers/apps
Add an app and choose Dropbox API App. Note the created app key and app secret
Install Dropbox API in Ubuntu; use app key and app secret above
$ wget https://raw.github.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh
$ chmod +x dropbox_uploader.sh
Follow the instructions to authorize access for the app, e.g.
http://www2.dropbox.com/1/oauth/authorize?oauth_token=XXXXXXX
Test whether the app is working right; it should be OK:
$ ./dropbox_uploader.sh info
The app is created, and the folder associated with it is YourDropbox\Apps\<app name>
Commands to use
List files
$ ./dropbox_uploader.sh list
Upload file
$ ./dropbox_uploader.sh upload <filename> <dropbox location>
e.g.
$ ./dropbox_uploader.sh upload backup.sql .
This will store file backup.sql to YourDropbox\Apps\<app name>\backup.sql
Done
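Putting the pieces together, backup.sh might look roughly like this (credentials, database name, and paths are all assumptions to adapt):
#!/bin/bash
# Dump and compress the database
BACKUP=backup_$(date +%F).sql.gz
mysqldump -u backup_user -pYourPassword your_db | gzip -9 > /home/userName/backups/$BACKUP

# Push the compressed dump into the Dropbox app folder
/home/userName/dropbox_uploader.sh upload /home/userName/backups/$BACKUP .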
How to schedule a task on Ubuntu using crontab can be viewed here.
Call the command:
sudo crontab -e
Insert a line to run the backup.sh script every day, as below:
0 0 * * * /home/userName/pathTo/backup.sh
Explanation:
minute (0-59), hour (0-23, 0 = midnight), day (1-31), month (1-12), weekday (0-6, 0 = Sunday), command
Or we can simply use
@daily /home/userName/pathTo/backup.sh
Note:
To monitor crontab tasks, here is a very good guide.