I have a script to dump my databases, like this:
#!/usr/bin/env sh
PATH=/usr/bin:/bin
LOG="mybackup/log/backup_$(date +%Y-%m-%d_%H:%M:%S).log"
# some other command
# backup mysql database
echo "Backing up database" > ~/$LOG
mysqldump -u myusername -pmypassword --ignore-table={db1.table1,db1.table2,db1.table3,db1.table4} db1 -r ~/mybackup/db/db1_$(date +%Y-%m-%d_%H:%M:%S).sql >> ~/$LOG 2>&1
# some other command
When I run the command in a terminal, it successfully dumps my database without the ignored tables. But when I run the command through the script, it dumps all tables in the database.
I have tried to escape the curly braces:
mysqldump -u myusername -pmypassword --ignore-table=\{db1.table1,db1.table2,db1.table3,db1.table4\} db1 -r ~/mybackup/db/db1_$(date +%Y-%m-%d_%H:%M:%S).sql
but it still dumps all tables.
My question is: how do I escape braces/brackets properly in a shell script?
I can't answer why it DOES work on the command line, but the documentation says to use the option multiple times:
mysqldump --ignore-table documentation
man mysqldump
--ignore-table=db_name.tbl_name
Do not dump the given table, which must be specified using both the
database and table names. To ignore multiple tables, use this option
multiple times. This option also can be used to ignore views.
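For reference, a sketch of the same backup line with the option repeated, as the manual suggests (it reuses the question's own credentials and paths). Repeating the option also sidesteps brace expansion entirely; brace expansion is a bash feature that a POSIX sh (as in #!/usr/bin/env sh) does not perform, which is likely why the one-liner behaves differently in an interactive shell:
mysqldump -u myusername -pmypassword \
  --ignore-table=db1.table1 \
  --ignore-table=db1.table2 \
  --ignore-table=db1.table3 \
  --ignore-table=db1.table4 \
  db1 -r ~/mybackup/db/db1_$(date +%Y-%m-%d_%H:%M:%S).sql >> ~/$LOG 2>&1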
I'm writing a script that I plan to schedule via cron for 1 AM each morning to back up a MySQL DB.
Normally I use this to dump the database:
mysqldump --no-create-db --single-transaction myDB | gzip > ~/my_backup.sql.gz
In my head, what I have written should:
Dump the DB, writing any errors to database.err
Pipe the output to gzip, which then zips it up and writes it to disk
Read the return code and, assuming success, write the file to an S3 bucket
For the purposes of testing, write the current state to the shell
#!/bin/bash
# This script will run each night to backup
# the mySql DB then will upload to Amazon S3
DB_NAME="myDB"
S3_BUCKET="my.s3.bucket"
BACKUP_PATH="~/backups/${DB_NAME}.sql.gz"
mysqldump --no-create-db --single-transaction ${DB_NAME} 2> database.err | gzip > ${BACKUP_PATH}
if [ "$?" -eq 0 ]
then
echo "Database dump complted sucessuflly... wtiting to S3"
aws s3 cp ${BACKUP_PATH} s3://${S3_BUCKET}/
if [ "$?" -eq 0 ]
then
echo "Backup sucessfully written to S3"
else
echo "Error writing to S3"
fi
else
echo "Mysqldump encountered a problem look in database.err for information"
fi
It looks like the script gets to the mysqldump line but is unable to differentiate between the parameter where I specify the DB and the 2> redirection (file descriptor, I think, is the term). This is the error:
./backup-script: line 12: ~/backups/myDB.sql.gz: No such file or directory
mysqldump: Got error: 1049: Unknown database 'myDB 2' when selecting the database
Mysqldump encountered a problem; look in database.err for information
Can anyone suggest what is happening here / what I'm doing wrong?
Try putting the database name first:
mysqldump "${DB_NAME}" --no-create-db --single-transaction
I have a folder with a lot of sql scripts. I want to run all of them without specifying their names, just the folder name. Is that possible?
You cannot do that natively, but here's a simple bash command:
for sql_file in `ls -d /path/to/directory/*`; do mysql -uUSER -pPASSWORD DATABASE < $sql_file ; done
Here USER, PASSWORD and DATABASE are the corresponding credentials, and /path/to/directory is the full path to the folder that contains your files.
If you want to filter only sql files, for example, then:
for sql_file in `ls /path/to/directory/*.sql`; do mysql -uUSER -pPASSWORD DATABASE < $sql_file ; done
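If the file names may contain spaces, a glob avoids parsing the output of ls; a minimal variant using the same placeholder credentials:
for sql_file in /path/to/directory/*.sql; do
    mysql -uUSER -pPASSWORD DATABASE < "$sql_file"
done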
This is what worked for me:
1. I created a shell script in the folder containing my scripts:
for f in *.sql
do
echo "Processing $f file..."
mysql -u user "-pPASSWORD" -h HOST DATABASE < $f
done
I'm setting up automated database backups, starting by testing the commands. I've found that performing an action on each line of a text file via the Bash CLI goes something to the effect of:
# while read line; do
COMMAND $line
done
I've created a file listing the database names:
# mysql -uroot -e "show databases" > databases
Then I tried the following against the file to see if it would work correctly:
# while read line; do
"mysqldump -uroot $line > /dbbackups/$line.sql"
done
Seemingly this should work correctly, but I am met with the following error(s):
[04:58:46] [root@theia database-backup-testing]# cat databases | while read line ; do "mysqldump -uroot $line > $line.sql" ; done
-bash: mysqldump -uroot Database > Database.sql: command not found
-bash: mysqldump -uroot information_schema > information_schema.sql: command not found
-bash: mysqldump -uroot cphulkd > cphulkd.sql: command not found
I am not sure why it is giving "command not found" when the generated commands obviously look correct. I have also tried using the absolute path of mysqldump (/usr/bin/mysqldump), but it gives the same error(s).
Can anyone fill me in on why this is happening?
EDIT: I found a fix:
The script works if the quotes are removed:
# cat databases |
while read line; do
mysqldump -uroot $line > $line.sql
done
Apparently, the quotes cause the whole line to be treated as a single command name rather than a command with arguments.
why it is giving command not found
Your quotes are not correct. Try something like this:
while read line ; do mysqldump -uroot "$line" > "$line".sql ; done
I haven't used mysqldump, so I can't help with the syntax of that specific command.
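Putting the pieces together, a minimal sketch that feeds the database list straight into the loop; -N suppresses the column header line, and the backup path is the one from the question:
mysql -uroot -N -e 'show databases' | while read -r line; do
    mysqldump -uroot "$line" > "/dbbackups/$line.sql"
done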
Here's a bash script that solves this:
https://github.com/jeevandongre/backup-restore
I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB.
Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server.
Or is there another solution I have missed?
Edit
The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.
This bash script splits a dump file of one database into a separate file for each table, using csplit, and names them accordingly:
#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on https://gist.github.com/jasny/1608062
####
#adjust this to your case:
START="/-- Table structure for table/"
# or
#START="/DROP TABLE IF EXISTS/"
if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
    echo "USAGE: extract all tables:"
    echo " $0 DUMP_FILE"
    echo "extract one table:"
    echo " $0 DUMP_FILE [TABLE]"
    exit
fi

if [ $# -ge 2 ] ; then
    # extract one table $2
    csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
    # extract all tables
    csplit -s -ftable $1 "$START" {*}
fi
[ $? -eq 0 ] || exit
mv table00 head
FILE=`ls -1 table* | tail -n 1`
if [ $# -ge 2 ] ; then
    # cut off all other tables
    mv $FILE foot
else
    # cut off the end of each file
    csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
    mv ${FILE}1 foot
fi

for FILE in `ls -1 table*`; do
    NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
    cat head $FILE foot > "$NAME.sql"
done
rm head foot table*
based on https://gist.github.com/jasny/1608062
and https://stackoverflow.com/a/16840625/1069083
First dump the schema (it surely fits in 2MB, no?):
mysqldump -d --all-databases
and restore it.
Afterwards, dump only the data in separate insert statements, so you can split the files and restore them without having to concatenate them on the remote server:
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
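A minimal sketch of the whole sequence under a 2MB upload limit; the chunk size and file names are assumptions to adjust, and split is safe here because one-row-per-INSERT dumps keep each statement on its own line:
mysqldump -d --all-databases | gzip > schema.sql.gz                                  # structure only
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE > data.sql   # data only
split -l 40000 data.sql data_part_   # tune -l until each compressed part stays under 2MB
gzip data_part_*                     # upload schema.sql.gz first, then each data_part_*.gz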
There is this excellent mysqldumpsplitter script, which comes with tons of options for extracting from a mysqldump.
I'll copy the recipes here so you can choose your case from them:
1) Extract single database from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract DB --match_str
database-name
Above command will create sql for specified database from specified
"filename" sql file and store it in compressed format to
database-name.sql.gz.
2) Extract single table from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract TABLE --match_str
table-name
Above command will create sql for specified table from specified
"filename" mysqldump file and store it in compressed format to
table-name.sql.gz.
3) Extract tables matching regular expression from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP
--match_str regular-expression
Above command will create sqls for tables matching specified regular
expression from specified "filename" mysqldump file and store it in
compressed format to individual table-name.sql.gz.
4) Extract all databases from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLDBS
Above command will extract all databases from specified "filename"
mysqldump file and store it in compressed format to individual
database-name.sql.gz.
5) Extract all tables from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES
Above command will extract all tables from specified "filename"
mysqldump file and store it in compressed format to individual
table-name.sql.gz.
6) Extract list of tables from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP
--match_str '(table1|table2|table3)'
Above command will extract tables from the specified "filename"
mysqldump file and store them in compressed format to individual
table-name.sql.gz.
7) Extract a database from compressed mysqldump:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB
--match_str 'dbname' --decompression gzip
Above command will decompress filename.sql.gz using gzip, extract
database named "dbname" from "filename.sql.gz" & store it as
out/dbname.sql.gz
8) Extract a database from compressed mysqldump in an uncompressed
format:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB
--match_str 'dbname' --decompression gzip --compression none
Above command will decompress filename.sql.gz using gzip and extract
database named "dbname" from "filename.sql.gz" & store it as plain sql
out/dbname.sql
9) Extract all tables from mysqldump into a different folder:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES
--output_dir /path/to/extracts/
Above command will extract all tables from the specified "filename"
mysqldump file and store them in compressed format as individual
files, table-name.sql.gz, under /path/to/extracts/. The script
will create the folder /path/to/extracts/ if it does not exist.
10) Extract one or more tables from one database in a full-dump:
Consider you have a full dump with multiple databases and you want to
extract a few tables from one database.
Extract the single database first: sh mysqldumpsplitter.sh --source filename
--extract DB --match_str DBNAME --compression none
Then extract the tables from it: sh mysqldumpsplitter.sh --source out/DBNAME.sql
--extract REGEXP --match_str "(tbl1|tbl2)"
Though we can use another option to do this in a single command, as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE
--match_str "DBNAME.(tbl1|tbl2)" --compression none
Above command will extract both tbl1 and tbl2 from DBNAME database in
sql format under folder "out" in current directory.
You can extract single table as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE
--match_str "DBNAME.(tbl1)" --compression none
11) Extract all tables from specific database:
mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str
"DBNAME.*" --compression none
Above command will extract all tables from DBNAME database in sql
format and store it under "out" directory.
12) List the contents of the mysqldump file:
mysqldumpsplitter.sh --source filename --desc
Above command will list databases and tables from the dump file.
You may later choose to load the files: zcat filename.sql.gz | mysql -uUSER -p -hHOSTNAME
Also, once you have extracted a single table that you think is still too big, you can use the Linux split command with a line count to split the dump further.
split -l 10000 filename.sql
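For example, assuming the extracted dump has one INSERT per line (--extended-insert=FALSE) so no statement straddles a chunk boundary, the pieces can then be imported one after another:
split -l 10000 filename.sql part_
for f in part_*; do
    mysql -uUSER -p DATABASE < "$f"
done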
That said, if this is a recurring need, you might consider using mydumper, which actually creates individual dumps you won't need to split!
You say that you don't have access to the second server. But if you have shell access to the first server, where the tables are, you can split your dump by table:
for T in `mysql -N -B -e 'show tables from dbname'`; \
do echo $T; \
mysqldump [connecting_options] dbname $T \
| gzip -c > dbname_$T.dump.gz ; \
done
This will create a gzip file for each table.
Another way of splitting the output of mysqldump into separate files is to use the --tab option:
mysqldump [connecting options] --tab=directory_name dbname
where directory_name is the name of an empty directory.
This command creates a .sql file for each table, containing the CREATE TABLE statement, and a .txt file, containing the data, to be restored using LOAD DATA INFILE. I am not sure if phpMyAdmin can handle these files with your particular restriction, though.
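On the receiving side, a minimal restore sketch with placeholder names; mysqlimport is the command-line front end to LOAD DATA INFILE and derives the table name from the file name:
mysql dbname < directory_name/tablename.sql               # recreate the table structure
mysqlimport --local dbname directory_name/tablename.txt   # load the rows via LOAD DATA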
Late reply, but I was looking for the same solution and came across the following code from the website below:
for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done
http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files
I wrote a new version of the SQLDumpSplitter, this time with a proper parser, which allows nice things like splitting INSERTs with many values across files; it is multi-platform now: https://philiplb.de/sqldumpsplitter3/
You don't need ssh access to either of your servers. Just a mysql[dump] client is fine.
With mysql and mysqldump, you can dump your database and import it again.
On your PC, you can do something like:
$ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase
and you're done. :-)
Hope this helps.
You can split an existing dump file with awk. It's very quick and simple.
Let's split the dump by table:
awk 'BEGIN { output = "comments" }
/^CREATE TABLE/ { close(output); output = substr($3, 2, length($3)-2) }
{ print $0 >> output }' dump.sql
Or you can split the dump by database:
awk 'BEGIN { output = "comments" } /Current Database/ { close(output); output = $4 } { print $0 >> output }' backup.sql
You can dump individual tables with mysqldump by running mysqldump database table1 table2 ... tableN
If none of the tables are too large, that will be enough. Otherwise, you'll have to start splitting the data in the larger tables.
I would recommend the utility BigDump; you can grab it here: http://www.ozerov.de/bigdump.php
It staggers the execution of the dump, staying as close to your limit as it can manage, executing whole lines at a time.
Try this: https://github.com/shenli/mysqldump-hugetable
It will dump the data into many small files. Each file contains at most MAX_RECORDS records. You can set this parameter in env.sh.
I wrote a Python script to split a single large sql dump file into separate files, one for each CREATE TABLE statement. It writes the files to a new folder that you specify. If no output folder is specified, it creates a new folder with the same name as the dump file, in the same directory. It works line by line, without reading the whole file into memory first, so it is great for large files.
https://github.com/kloddant/split_sql_dump_file
import sys, re, os

if sys.version_info[0] < 3:
    raise Exception("""Must be using Python 3. Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")

sqldump_path = input("Enter the path to the sql dump file: ")
if not os.path.exists(sqldump_path):
    raise Exception("Invalid sql dump path. {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))

# Default the output folder to the dump file's path without its extension
output_folder_path = input("Enter the path to the output folder: ") or os.path.splitext(sqldump_path)[0]
if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)

table_name = None
output_file_path = None
smallfile = None

with open(sqldump_path, 'rb') as bigfile:
    for line_number, line in enumerate(bigfile):
        line_string = line.decode("utf-8")
        if 'CREATE TABLE' in line_string.upper():
            match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
            if match:
                table_name = match.group('table')
                print(table_name)
                output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                # Close the previous table's file before starting the next one
                if smallfile:
                    smallfile.close()
                smallfile = open(output_file_path, 'wb')
        # Skip everything that precedes the first CREATE TABLE statement
        if not table_name:
            continue
        smallfile.write(line)

if smallfile:
    smallfile.close()
Try csplit(1) to cut up the output into the individual tables based on regular expressions (matching the table boundary, I would think).
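For example, with GNU csplit something along these lines should give one file per table (the prefix and boundary pattern are assumptions; adjust them to whatever your dump uses):
csplit -s -z -f table_ dump.sql '/-- Table structure for table/' '{*}'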
This script should do it:
#!/bin/sh
#edit these
USER=""
PASSWORD=""
MYSQLDIR="/path/to/backupdir"
MYSQLDUMP="/usr/bin/mysqldump"
MYSQL="/usr/bin/mysql"
echo - Dumping tables for each DB
databases=`$MYSQL --user=$USER --password=$PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases; do
    echo - Creating "$db" DB
    mkdir $MYSQLDIR/$db
    chmod -R 777 $MYSQLDIR/$db
    for tb in `$MYSQL --user=$USER --password=$PASSWORD -N -B -e "use $db ;show tables"`
    do
        echo -- Creating table $tb
        $MYSQLDUMP --opt --delayed-insert --insert-ignore --user=$USER --password=$PASSWORD $db $tb | bzip2 -c > $MYSQLDIR/$db/$tb.sql.bz2
    done
    echo
done
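To restore a single table produced by this script, a minimal sketch following the same variable names:
bunzip2 -c $MYSQLDIR/$db/$tb.sql.bz2 | mysql --user=$USER --password=$PASSWORD $db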
Check out SQLDumpSplitter 2; I just used it to split a 40MB dump with success. You can get it at the link below:
sqldumpsplitter.com
Hope this helps.
I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's
available here https://github.com/Verace/MySQLDumpSplitter.
A clarification on the answer from @Vérace:
I especially like the interactive method; you can split a large file in Eclipse. I have tried a 105GB file on Windows successfully:
Just add the MySQLDumpSplitter library to your project:
http://dl.bintray.com/verace/MySQLDumpSplitter/jar/
Quick note on how to import:
- In Eclipse, right-click on your project --> Import
- Select "File System" and then "Next"
- Browse to the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in the Package Explorer in Eclipse
- Double-click on the jar file in Eclipse (in the Package Explorer)
- The "MySQL Dump file splitter" window opens, where you can specify the location of your dump file and proceed with the split.