command misinterpreted by bash - mysql

I'm writing a script that I plan to schedule via cron for 1 AM each morning to back up a MySQL DB.
Normally I use this to dump the database:
mysqldump --no-create-db --single-transaction myDB | gzip > ~/my_backup.sql.gz
In my head, what I have written should:
Dump the DB, writing any errors to database.err
Pipe the output to gzip, which zips it up and writes it to disk
Read the return code; assuming success, write the file to an S3 bucket
For the purposes of testing, write the current state to the shell
#!/bin/bash
# This script will run each night to backup
# the mySql DB then will upload to Amazon S3
DB_NAME="myDB"
S3_BUCKET="my.s3.bucket"
BACKUP_PATH="~/backups/${DB_NAME}.sql.gz"
mysqldump --no-create-db --single-transaction ${DB_NAME} 2> database.err | gzip > ${BACKUP_PATH}
if [ "$?" -eq 0 ]
then
echo "Database dump complted sucessuflly... wtiting to S3"
aws s3 cp ${BACKUP_PATH} s3://${S3_BUCKET}/
if [ "$?" -eq 0 ]
then
echo "Backup sucessfully written to S3"
else
echo "Error writing to S3"
fi
else
echo "Mysqldump encountered a problem look in database.err for information"
fi
What it looks like the script is doing is getting to the mysqldump line, but it is unable to differentiate between the parameter where I specify the DB and the 2> (file descriptor redirection, I think, is the term). This is the error:
./backup-script: line 12: ~/backups/myDB.sql.gz: No such file or directory
mysqldump: Got error: 1049: Unknown database 'myDB 2' when selecting the database
Mysqldump encountered a problem look in database.err for information
Can anyone suggest what is happening here / what I'm doing wrong?

Try putting the database name first
mysqldump "${DB_NAME}" --no-create-db --single-transaction

Related

How can I detect if mysqldump fails in a bash script?

I have a small down-and-dirty script to dump one of the tables from all of a client's databases nightly:
#!/bin/bash
DB_BACKUP="/backups/mysql_backup/`date +%Y-%m-%d`"
DB_USER="dbuser"
DB_PASSWD="dbpass"
# Create the backup directory
mkdir -p $DB_BACKUP
# Remove backups older than 10 days
find /backups/mysql_backup/ -maxdepth 1 -type d -mtime +10 -exec rm -rf {} \;
# Backup each database on the system
for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e 'show databases' -s --skip-column-names|grep -viE '(staging|performance_schema|information_schema)');
do echo "dumping $db-uploads"; mysqldump --user=$DB_USER --password=$DB_PASSWD --events --opt --single-transaction $db uploads > "$DB_BACKUP/mysqldump-$db-uploads-$(date +%Y-%m-%d).sql";
done
Recently we've had some issues where some of the tables get corrupted, and mysqldump fails with the following message:
mysqldump: Got error: 145: Table './myDBname/myTable1' is marked as crashed and should be repaired when using LOCK TABLES
Is there a way for me to check if this happens in the bash script, and log the errors if so?
Also, as written would such an error halt the script, or would it continue to backup the rest of the databases normally? If it would halt execution is there a way around that?
Every program has an exit status. The exit status of each program is assigned to the $? builtin bash variable. By convention, this is 0 if the command was successful, or some other value 1-255 if the command was not successful. The exact value depends on the code in that program.
You can see the exit codes that mysqldump might issue here: https://github.com/mysql/mysql-server/blob/8.0/client/mysqldump.cc#L65-L72
You can check for this and log it, output an error message of your choosing, exit the bash script, whatever you want.
mysqldump ...
if [[ $? != 0 ]] ; then
...do something...
fi
You can alternatively write this which does the same thing:
mysqldump ... || {
...do something...
}
The || means to execute the following statement or code block if the exit status of the preceding command is nonzero.
By default, commands that return errors do not cause the bash script to exit. You can optionally make that the behavior of the script by using this statement, and all following commands will cause the script to exit if they fail:
set -e
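Applied to the nightly loop from the question, a minimal sketch (the per-database .err files and the backup.log name are illustrative) that logs failures and carries on with the remaining databases:
for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e 'show databases' -s --skip-column-names | grep -viE '(staging|performance_schema|information_schema)'); do
    out="$DB_BACKUP/mysqldump-$db-uploads-$(date +%Y-%m-%d).sql"
    if mysqldump --user=$DB_USER --password=$DB_PASSWD --events --opt --single-transaction "$db" uploads > "$out" 2> "$out.err"; then
        rm -f "$out.err"     # dump succeeded; discard the error file (it may only contain warnings)
    else
        echo "$(date): dump of $db failed, see $out.err" >> "$DB_BACKUP/backup.log"
    fi
done
Because the failed mysqldump is inside the if, it does not stop the loop, and the remaining databases are still backed up.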

Import SQL dumps through bash script

I'm trying to import gzipped MySQL databases listed in a folder.
The gzipped files are located in .mysqldumps/.
$NAME tries to extract the database name (as the files are always named database_name.sql.gz) and pass it to the mysql command line.
Also, as the username and database name are the same, the same argument is passed ($NAME).
As the files are gzipped, we zcat them (i.e. gunzip -c) before piping them to mysql.
The full script is:
#!/bin/bash
FILES='.mysqldumps/*'
PASSWORD='MyPassword'
for f in $FILES
do
NAME=dbprefix_`basename $f .sql.gz`
echo "Processing $f"
set -x
zcat $f | mysql -u "$NAME" -p$PASSWORD "$NAME"
done
But, when I run the script it outputs:
./.mysqlimport
Processing .mysqldumps/first_database.sql.gz
+ mysql -u dbprefix_first_database -pMyPassword dbprefix_first_database
+ zcat .mysqldumps/first_database.sql.gz
ERROR 1044 (42000) at line 22: Access denied for user 'dbprefix_first_database'@'localhost' to database 'first_database'
As you can see, the selected database is 'first_database' instead of 'dbprefix_first_database', which of course just throws an error, and I can't understand why $NAME is not correctly parsed as the database name.
What am I doing wrong?
After some investigation, it turned out the problem comes from the dump and not from the script.
The dumps had been created with mysqldump's --databases option, which embeds a USE `dbname`; statement, so when importing, that name was used instead of $NAME.
Problem solved!
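If regenerating the dumps is an option, a hedged sketch of the fix (the credentials here are just placeholders): dumping without --databases leaves out the embedded USE statement, so the target database is then taken from the mysql command line:
# No --databases, so the dump contains no "USE `first_database`;" line
mysqldump -u some_user -psome_password first_database | gzip > .mysqldumps/first_database.sql.gz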

Outputting MySQL processes in bash script

Looking for help from the bash experts here.
What I am trying to do is probably incredibly simple, but I cannot find anything on Google that clearly explains what to do. There is a lot of other stuff in the script, but this is the block I need help with:
# import the extracted sql into mysql
for sql_file in $(find -maxdepth 1 -type f -iname "*$1*.sql" | sort); do
echo "Importing: $sql_file"
if mysql -u [USER] -p[PASSWORD] -h [HOST] $db < $sql_file
then
echo "Database $sql_file imported successfully. $db has been updated"
else
echo "ERROR: Database importing $sql_file into $db"
fi
done
If you cannot tell, I am basically trying to import multiple db.sql files into a given destination within a loop. What I would like to do is add the output of the mysql command to display any MySql errors that may be generated at this step. This output would come directly after this line:
echo "ERROR: Database importing $sql_file into $db"
If the import failed at this step, I would like it to output something like this, but not exit the script:
MySQL Import Failed: [Output what mysql would output to command line right here.]
This would inform the person performing the import, what exactly went wrong so they can tell the developers exactly what they encountered.
I don't know if this is too vague or not, but any help would be greatly appreciated.
How about generating individual log and error files as follows:
for sql_file in $(find -maxdepth 1 -type f -iname "*$1*.sql" | sort); do
echo "Importing: $sql_file"
log=$sql_file.log
err=$sql_file.err
if mysql -u [USER] -p[PASSWORD] -h [HOST] $db < $sql_file > $log 2> $err
then
echo "Database $sql_file imported successfully. $db has been updated"
else
echo "ERROR: Database importing $sql_file into $db"
echo "<<<<<"
cat $err
echo ">>>>>"
fi
done
This way you keep track of the processing of each script. For your original question it would suffice to just generate the error file. You can even delete it after the cat if you want.
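And if you'd rather not keep the error file around after reporting, the optional cleanup mentioned above could go right after the cat (matching the variable names used in the loop):
rm -f "$err"    # the error text has already been shown, so the file need not be kept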

mysqldump - Dump multiple databases from separate mysql accounts to one file

The standard mysqldump command that I use is
mysqldump --opt --databases $dbname --host=$dbhost --user=$dbuser --password=$dbpass | gzip > $filename
To dump multiple databases
mysqldump --opt --databases $dbname1 $dbname2 $dbname3 $dbname_etc --host=$dbhost --user=$dbuser --password=$dbpass | gzip > $filename
My question is how do you dump multiple databases from different MySQL accounts into just one file?
UPDATE: By 1 file, I mean 1 gzipped file with the different SQL dumps for the different sites inside it.
Nobody seems to have clarified this, so I'm going to give my 2 cents.
Going to note here: my experience is with bash and may be exclusive to it, so variables and looping might work differently in your environment.
The best way to achieve an archive with separate files inside of it is to use either ZIP or TAR; I prefer to use tar due to its simplicity and availability.
Tar itself doesn't do compression, but bundled with bzip2 or gzip it can provide excellent results. Since your example uses gzip I'll use that in my demonstration.
First, let's attack the problem of the MySQL dumps: the mysqldump command does not separate the files (to my knowledge, anyway), so let's make a small workaround to create one file per database.
mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; done
So now we have a one-liner that dumps each database to its own file; to export those files wherever you need them, simply edit the part after the > symbol.
Next, let's look at the syntax for tar:
tar -czf <output-file> <input-file-1> <input-file-2>
because of this configuration it allows us to specify a great number of files to archive.
The options are broken down as follows.
c - Create archive
z - gzip compression
f - Output to file
j - bzip2 compression (use instead of z)
Our next problem is keeping a list of all the newly created files; we'll expand our while statement to append to a variable while running through each database found inside MySQL (reading the database list via process substitution rather than a pipe, so the DBLIST variable survives the loop).
DBLIST=""; mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB"; done
Now we have a DBLIST variable that we can use to have an output of all our files that will be created, we can then modify our 1 line statement to run the tar command after everything has been handled.
DBLIST=""; mysql -s -r -p$dbpass --user=$dbuser -e 'show databases' | while read db; do mysqldump p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB"; done && tar -czf $filename "$DBLIST"
This is a very rough approach and doesn't allow you to manually specify databases; to achieve that, the following command will create a tar file containing all of the databases you specify.
DBLIST=""; for db in "<database1-name> <database2-name>"; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB.sql"; done && tar -czf $filename "$DBLIST"
The looping through MySQL databases from the MySQL database comes from the following stackoverflow.com question "mysqldump with db in a separate file" which was simply modified in order to fit your needs.
And to have the script automatically clean up after itself in the same one-liner, simply add the following at the end of the command
&& rm $DBLIST
making the command look like this
DBLIST=""; for db in "<database1-name> <database2-name>"; do mysqldump -p$dbpass --user=$dbuser $db > ${db}.sql; DBLIST="$DBLIST $DB.sql"; done && tar -czf $filename "$DBLIST" && rm "$DBLIST"
For every MySQL server account, dump the databases into separate files.
Then concatenate the dump files and compress the result:
cat dump_user1.sql dump_user2.sql | gzip > super_dump.gz
There is a similar post on Superuser.com website: https://superuser.com/questions/228878/how-can-i-concatenate-two-files-in-unix
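A minimal end-to-end sketch of that approach, assuming two hypothetical accounts each owning one database (the names and passwords are placeholders):
mysqldump --user=user1 --password=pass1 --opt db_one > dump_user1.sql
mysqldump --user=user2 --password=pass2 --opt db_two > dump_user2.sql
cat dump_user1.sql dump_user2.sql | gzip > super_dump.gz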
just in case "multiple db" is literally "all db" for you
mysqldump -u root -p --all-databases > all.sql

How do I split the output from mysqldump into smaller files?

I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB.
Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server.
Or is there another solution I have missed?
Edit
The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.
This bash script splits a dump file of one database into separate files, one per table, using csplit, and names them accordingly:
#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on https://gist.github.com/jasny/1608062
####
#adjust this to your case:
START="/-- Table structure for table/"
# or
#START="/DROP TABLE IF EXISTS/"
if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
echo "USAGE: extract all tables:"
echo " $0 DUMP_FILE"
echo "extract one table:"
echo " $0 DUMP_FILE [TABLE]"
exit
fi
if [ $# -ge 2 ] ; then
#extract one table $2
csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=#OLD_TIME_ZONE%1"
else
#extract all tables
csplit -s -ftable $1 "$START" {*}
fi
[ $? -eq 0 ] || exit
mv table00 head
FILE=`ls -1 table* | tail -n 1`
if [ $# -ge 2 ] ; then
# cut off all other tables
mv $FILE foot
else
# cut off the end of each file
csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
mv ${FILE}1 foot
fi
for FILE in `ls -1 table*`; do
NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
cat head $FILE foot > "$NAME.sql"
done
rm head foot table*
based on https://gist.github.com/jasny/1608062
and https://stackoverflow.com/a/16840625/1069083
First dump the schema (it surely fits in 2Mb, no?)
mysqldump -d --all-databases
and restore it.
Afterwards dump only the data in separate insert statements, so you can split the files and restore them without having to concatenate them on the remote server
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
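A rough sketch of the whole round trip under that approach (the 50000-line chunk size is an assumption; tune it so each compressed piece stays under the 2MB upload limit):
mysqldump -d --all-databases | gzip > schema.sql.gz                                  # structure only, restore this first
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE > data.sql   # data only, one INSERT per row
split -l 50000 data.sql data_part_                                                   # each line is a complete statement, so splitting on lines is safe
for f in data_part_*; do gzip "$f"; done                                             # import each data_part_*.gz via phpMyAdmin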
There is this excellent mysqldumpsplitter script which comes with a ton of options for extracting pieces from a mysqldump.
I'll copy the recipes here so you can choose your case:
1) Extract a single database from the mysqldump:
sh mysqldumpsplitter.sh --source filename --extract DB --match_str database-name
The above command will create the sql for the specified database from the specified "filename" sql file and store it in compressed format as database-name.sql.gz.
2) Extract a single table from the mysqldump:
sh mysqldumpsplitter.sh --source filename --extract TABLE --match_str table-name
The above command will create the sql for the specified table from the specified "filename" mysqldump file and store it in compressed format as table-name.sql.gz.
3) Extract tables matching a regular expression from the mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str regular-expression
The above command will create the sql for tables matching the specified regular expression from the specified "filename" mysqldump file and store it in compressed format as individual table-name.sql.gz files.
4) Extract all databases from the mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLDBS
The above command will extract all databases from the specified "filename" mysqldump file and store them in compressed format as individual database-name.sql.gz files.
5) Extract all tables from the mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES
The above command will extract all tables from the specified "filename" mysqldump file and store them in compressed format as individual table-name.sql.gz files.
6) Extract a list of tables from the mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str '(table1|table2|table3)'
The above command will extract the listed tables from the specified "filename" mysqldump file and store them in compressed format as individual table-name.sql.gz files.
7) Extract a database from a compressed mysqldump:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip
The above command will decompress filename.sql.gz using gzip, extract the database named "dbname" from "filename.sql.gz" and store it as out/dbname.sql.gz.
8) Extract a database from a compressed mysqldump in uncompressed format:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip --compression none
The above command will decompress filename.sql.gz using gzip, extract the database named "dbname" from "filename.sql.gz" and store it as plain sql in out/dbname.sql.
9) Extract all tables from the mysqldump into a different folder:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES --output_dir /path/to/extracts/
The above command will extract all tables from the specified "filename" mysqldump file into individual compressed files, table-name.sql.gz, stored under /path/to/extracts/. The script will create the folder /path/to/extracts/ if it does not exist.
10) Extract one or more tables from one database in a full dump:
Consider that you have a full dump with multiple databases and you want to extract a few tables from one database.
Extract the single database: sh mysqldumpsplitter.sh --source filename --extract DB --match_str DBNAME --compression none
Extract all tables: sh mysqldumpsplitter.sh --source out/DBNAME.sql --extract REGEXP --match_str "(tbl1|tbl2)", though we can use another option to do this in a single command as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1|tbl2)" --compression none
The above command will extract both tbl1 and tbl2 from the DBNAME database in sql format under the folder "out" in the current directory.
You can extract a single table as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1)" --compression none
11) Extract all tables from a specific database:
mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.*" --compression none
The above command will extract all tables from the DBNAME database in sql format and store them under the "out" directory.
12) List the contents of the mysqldump file:
mysqldumpsplitter.sh --source filename --desc
The above command will list the databases and tables in the dump file.
You may later choose to load the files: zcat filename.sql.gz | mysql -uUSER -p -hHOSTNAME
Also, once you have extracted a single table that you think is still too big, you can use the Linux split command with a number of lines to further split the dump.
split -l 10000 filename.sql
That said, if this is a recurring need, you might consider using mydumper, which creates individual dumps you won't need to split!
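For reference, a hedged sketch of what that could look like with mydumper (flag names should be checked against the installed version; the database name, output directory, and row count are only illustrative):
mydumper --database mydb --outputdir /backups/mydb_chunks --rows 50000   # one file per chunk of ~50000 rows per table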
You say that you don't have access to the second server. But if you have shell access to the first server, where the tables are, you can split your dump by table:
for T in `mysql -N -B -e 'show tables from dbname'`; \
do echo $T; \
mysqldump [connecting_options] dbname $T \
| gzip -c > dbname_$T.dump.gz ; \
done
This will create a gzip file for each table.
Another way of splitting the output of mysqldump into separate files is using the --tab option.
mysqldump [connecting options] --tab=directory_name dbname
where directory_name is the name of an empty directory.
This command creates a .sql file for each table, containing the CREATE TABLE statement, and a .txt file, containing the data, to be restored using LOAD DATA INFILE. I am not sure if phpMyAdmin can handle these files with your particular restriction, though.
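For completeness, restoring a single table from a --tab dump might look roughly like this (the table and directory names are illustrative); the .sql file recreates the table, and mysqlimport, which wraps LOAD DATA INFILE and derives the table name from the file name, loads the .txt file:
mysql dbname < directory_name/mytable.sql            # recreate the table structure
mysqlimport --local dbname directory_name/mytable.txt   # load the data from the client host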
Late reply, but I was looking for the same solution and came across the following code on the website below:
for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done
http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files
I wrote a new version of the SQLDumpSplitter, this time with a proper parser, which allows nice things like INSERTs with many values to be split over files, and it is multi-platform now: https://philiplb.de/sqldumpsplitter3/
You don't need ssh access to either of your servers. Just a mysql[dump] client is fine.
With the mysql[dump], you can dump your database and import it again.
In your PC, you can do something like:
$ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase
and you're done. :-)
hope this helps
You can split an existing dump file with awk. It's very quick and simple.
Let's split the dump by table:
cat dump.sql | awk 'BEGIN {output = "comments"; }
$0 ~ /^CREATE TABLE/ {close(output); output = substr($3,2,length($3)-2); }
{ print $0 >> output }'
Or you can split the dump by database:
cat backup.sql | awk 'BEGIN {output="comments";} $0 ~ /Current Database/ {close(output);output=$4;} {print $0>>output}'
You can dump individual tables with mysqldump by running mysqldump database table1 table2 ... tableN
If none of the tables are too large, that will be enough. Otherwise, you'll have to start splitting the data in the larger tables.
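As an illustration (the table names and id ranges here are only hypothetical), a few tables can be dumped together and a large table can be sliced with mysqldump's --where option:
mysqldump mydb table1 table2 | gzip > small_tables.sql.gz
mysqldump mydb big_table --where="id < 100000"  | gzip > big_table_part1.sql.gz
mysqldump mydb big_table --where="id >= 100000" | gzip > big_table_part2.sql.gz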
I would recommend the utility bigdump; you can grab it here: http://www.ozerov.de/bigdump.php
It staggers the execution of the dump, in chunks as close to your limit as it can manage, executing whole lines at a time.
Try this: https://github.com/shenli/mysqldump-hugetable
It will dump data into many small files. Each file contains at most MAX_RECORDS records. You can set this parameter in env.sh.
I wrote a Python script to split a single large sql dump file into separate files, one for each CREATE TABLE statement. It writes the files to a new folder that you specify. If no output folder is specified, it creates a new folder with the same name as the dump file, in the same directory. It works line by line, without reading the whole file into memory first, so it is great for large files.
https://github.com/kloddant/split_sql_dump_file
import sys, re, os

if sys.version_info[0] < 3:
    raise Exception("""Must be using Python 3. Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")

sqldump_path = input("Enter the path to the sql dump file: ")
if not os.path.exists(sqldump_path):
    raise Exception("Invalid sql dump path. {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))
# Default the output folder to the dump file's name without its extension
output_folder_path = input("Enter the path to the output folder: ") or os.path.splitext(sqldump_path)[0]
if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)

table_name = None
output_file_path = None
smallfile = None
with open(sqldump_path, 'rb') as bigfile:
    for line_number, line in enumerate(bigfile):
        line_string = line.decode("utf-8")
        if 'CREATE TABLE' in line_string.upper():
            match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
            if match:
                table_name = match.group('table')
                print(table_name)
                output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                # Start a new per-table file whenever a CREATE TABLE statement is found
                if smallfile:
                    smallfile.close()
                smallfile = open(output_file_path, 'wb')
        if not table_name:
            continue
        smallfile.write(line)
if smallfile:
    smallfile.close()
Try csplit(1) to cut up the output into the individual tables based on regular expressions (matching the table boundary I would think).
This script should do it:
#!/bin/sh
#edit these
USER=""
PASSWORD=""
MYSQLDIR="/path/to/backupdir"
MYSQLDUMP="/usr/bin/mysqldump"
MYSQL="/usr/bin/mysql"
echo - Dumping tables for each DB
databases=`$MYSQL --user=$USER --password=$PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases; do
echo - Creating "$db" DB
mkdir $MYSQLDIR/$db
chmod -R 777 $MYSQLDIR/$db
for tb in `$MYSQL --user=$USER --password=$PASSWORD -N -B -e "use $db ;show tables"`
do
echo -- Creating table $tb
$MYSQLDUMP --opt --delayed-insert --insert-ignore --user=$USER --password=$PASSWORD $db $tb | bzip2 -c > $MYSQLDIR/$db/$tb.sql.bz2
done
echo
done
Check out SQLDumpSplitter 2; I just used it to split a 40MB dump with success. You can get it at the link below:
sqldumpsplitter.com
Hope this helps.
I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's available here: https://github.com/Verace/MySQLDumpSplitter.
A clarification on the answer of @Vérace:
I especially like the interactive method; you can split a large file in Eclipse. I have successfully tried a 105GB file on Windows:
Just add the MySQLDumpSplitter library to your project:
http://dl.bintray.com/verace/MySQLDumpSplitter/jar/
Quick note on how to import:
- In Eclipse, Right click on your project --> Import
- Select "File System" and then "Next"
- Browse the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in Package Explorer in Eclipse
- Double click on the jar file in Eclipse (in Package Explorer)
- The "MySQL Dump file splitter" window opens which you can specify the address of your dump file and proceed with split.