I download a backup of my database from my hosting every week. It generates a .sql file that is currently around 800 MB and contains 44 tables.
Is there any way, through some software or script, to split that .sql file so that each table is exported individually?
That way, if I ever had to restore from the backup, I could do it table by table instead of restoring the entire database.
I wouldn't split it afterwards. If you have SSH access to the server yourself, you could (and in my eyes should) do something like this:
for table in `mysql -u [USER] -p[PASSWORD] -N -B -e 'show tables from [DATABASE]'`;
do
  mysqldump --skip-comments --compact -u [USER] -p[PASSWORD] [DATABASE] $table > $table.sql \
    && tar -czvf $table.tar.gz $table.sql && rm $table.sql
done;
That should generate one archive per table. Then just tar/gzip whatever directory you put the files in and you have your backup in the form you want; see the sketch below.
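For example, a rough sketch of the whole routine in one go (the dated directory name is only an assumption for illustration; the connection placeholders are the same as above):
# sketch: dump each table into a dated directory, then archive the directory once
BACKUP_DIR=backup-$(date +%F)
mkdir -p "$BACKUP_DIR"
for table in `mysql -u [USER] -p[PASSWORD] -N -B -e 'show tables from [DATABASE]'`; do
  mysqldump --skip-comments --compact -u [USER] -p[PASSWORD] [DATABASE] $table > "$BACKUP_DIR/$table.sql"
done
tar -czf "$BACKUP_DIR.tar.gz" "$BACKUP_DIR"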
I have a bunch of JPEG images stored as BLOBs in a MySQL database which I need to download to my local machine. The following does not work; can someone please advise?
Note: I know the code below just overwrites the same file, but for the purpose of this exercise that does not matter.
IFS=$'\n'
for i in `mysql -sN -u******* -p******** -h****** -e "select my_images from mutable"`; do
echo $i > myimage.jpg
done
I'm not really sure what is not working with your code, but you should be able to fetch all data and save each image like this:
#!/bin/bash
counter=0;
for i in `mysql -s -N -u******* -p******** -h****** -e "select my_images from mutable"`; do
    echo $i > "image${counter}.jpg";
    counter=$((counter+1));
done
#!/bin/bash
counter=0;
for i in `mysql -N -u * -p* -e "SELECT id from multable"`; do
    mysql -N -u * -p* -e "SELECT my_images FROM multable WHERE id=$i INTO DUMPFILE '/var/lib/mysql-files/tmp.jpg';"
    mv /var/lib/mysql-files/tmp.jpg "image${counter}.jpg"
    counter=$((counter+1));
done
This code extracts the images from the BLOB column and the resulting files will not be corrupt.
The key point is to use INTO DUMPFILE, which makes the server write the file into the folder indicated by @@secure_file_priv.
You can see the @@secure_file_priv value with the following command:
$ echo "SELECT @@secure_file_priv" | mysql -u * -p
And you can change the @@secure_file_priv value by setting it in my.cnf.
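For example, a minimal my.cnf excerpt (the path is only an assumption; use whatever directory suits your setup and restart the server afterwards):
[mysqld]
# directory the server is allowed to write SELECT ... INTO DUMPFILE output to
secure_file_priv = /var/lib/mysql-files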
There can be several my.cnf files, and you can check the loading sequence with the following command:
$ mysqld --help --verbose | grep cnf -A2 -B2
But I still suggest writing the image files to /var/lib/mysql-files/ first and then copying them, because some versions of MySQL can only use the /var/lib/mysql-files folder even if you have changed @@secure_file_priv to point to another folder.
I'm trying to create a cron job for database backup.
This is what I have so far:
mysqldump.sh
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
echo "Finished mysqldump $(date)" >> dump.log
Cron job:
32 18 * * * /db-backup/mysqldump.sh
The problem I am having is that the job does not execute through cron, or when I run it while not in the script's directory.
Can someone please advise. Are my paths incorrect?
Also, I'm not sure the following line will write errors to dump.log:
mysqldump -u root -ptest --all-databases | gzip > "/db-backup/backup/backup-$(date)" 2> dump.log
What worked:
mysqldump -u root -ptest --all-databases | gzip > "../db-backup/backup/backup-$(date).sql.gz" 2> ../db-backup/dump.log
echo "Finished mysqldump $(date)" >> ../db-backup/dump.log
There are a couple of things you can check, though more information is always more helpful (permissions and location of file, entire file contents, etc).
It can never hurt to preface the mysqldump.sh file with the Shebang syntax for your environment. I would venture to guess #!/bin/bash would be sufficient.
Instead of mysqldump -u ...., use the absolute path /usr/bin/mysqldump (or wherever it is on your system). Absolute paths are always a good idea in any form of scripting, since it's difficult to say whether the user has the same environment as you do.
As for storing the errors in dump.log, I don't believe your syntax is correct. I'm fairly sure you're redirecting the errors from gzip into dump.log, not the errors from mysqldump. This seems like a fairly common question, and the usual answer is something along the lines of mysqldump $PARAMS 2> dump.log | gzip -c > dump-$(date).sql.gz, i.e. redirect stderr on the mysqldump side of the pipe. A full sketch is below.
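Putting those points together, a minimal sketch of mysqldump.sh (the paths and the date format are assumptions; adjust them to your layout):
#!/bin/bash
# sketch: absolute paths everywhere, mysqldump errors captured, dated file name
BACKUP_DIR="/db-backup/backup"
LOG="/db-backup/dump.log"
/usr/bin/mysqldump -u root -ptest --all-databases 2>> "$LOG" \
  | /usr/bin/gzip > "$BACKUP_DIR/backup-$(date +%F).sql.gz"
echo "Finished mysqldump $(date)" >> "$LOG"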
I have a big MySQL database dump named forum.sql. I want to restore only one table, but when I restore the full database, it takes a long time to import the "post" table.
Is there any option to restore this database skipping the "post" table?
If you are restoring from a dump file, you can easily build a new dump file without this table, just by noting down the relevant line numbers.
Initial line
> grep dumpfile.sql -ne "Dumping data for table \`avoid_tablename\`" -m 1
43:-- Dumping data for table `avoid_tablename`
Total lines
> wc -l dumpfile.sql
63 dumpfile.sql
Make a new file
> head -n 43 dumpfile.sql > dumpfile-lite.sql
> tail -n 20 dumpfile.sql >> dumpfile-lite.sql
20 comes from subtracting 43 from 63.
Not clean, but useful.
Alternatively, extract the table(s) that need to be restored from fulldump.sql using sed:
sed -n -e '/CREATE TABLE.*tableName1/,/CREATE TABLE/p' fulldump.sql > temp.sql
sed -n -e '/CREATE TABLE.*tableName2/,/CREATE TABLE/p' fulldump.sql >> temp.sql
...etc
Now restore from temp.sql.
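For example (user, password and database name are placeholders):
mysql -u <user> -p<pw> <dbname> < temp.sql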
Restore a single table
First, do the restore with zero inserts:
cat dump.sql | grep -v '^INSERT INTO' | mysql -u <user> -p<pw> <dbname>
Using grep -v here will exclude any statements matching the pattern. The pattern in this case uses ^ to match at the beginning of a line. The result should be a restored schema with zero data.
Next, restore only your desired INSERT statements:
cat dump.sql | grep '^INSERT INTO \\\`<table>\\\`' | mysql -u <user> -p<pw> <dbname>
That will restore data only for the table named <table>. Note the triple backslashes. You need a backslash to escape a backtick and then you need to escape the backslash with 2 more backslashes.
Restore everything except one table
Another technique I use all the time when I want to restore an entire database but exclude the data from a table or two is this... You can filter out any unwanted INSERT statements by passing your dump through a filter before pushing into the db. Here's an example using grep as the filter:
nohup sh -c "cat dump.sql | grep -v 'INSERT INTO \\\`<table>\\\`' | mysql -u <user> -p<pw> <dbname>" &
The nohup command will keep the sh command running even if you log out of your shell. That can be pretty handy if you have a large dump file that will take quite some time to restore.
The -v flag for grep will exclude anything matching the pattern.
The & at the end will send the command to the background.
As far as I know, no.
You would have to manually edit the CREATE and INSERT statements of the undesired table out of the dump file.
I don't think you can do it. But you can dump tables separately when necessary, using the --tables mysqldump option. So, you can generate a dump for the post table and another dump for the remaining tables.
Example:
mysqldump -u USERNAME -pPASSWORD --tables TABLE_NAME database_name > TABLE_NAME.sql
You could alter your dump statement so that it uses the --ignore-table option: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_ignore-table
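For example, something like this should produce a dump without the "post" table (user, password and database name are placeholders):
mysqldump -u USERNAME -pPASSWORD database_name --ignore-table=database_name.post > dump_without_post.sql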
I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB.
Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server.
Or is there another solution I have missed?
Edit
The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.
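A rough sketch of that calculation, assuming GNU split and bzip2 (file names are placeholders):
# aim for ~40 MB of uncompressed SQL per piece, i.e. ~2 MB after ~20x bzip2 compression
lines=$(wc -l < dump.sql)
bytes=$(wc -c < dump.sql)
target_lines=$(( lines * 40 * 1024 * 1024 / bytes ))
split --lines=$target_lines dump.sql dump_part_
for f in dump_part_*; do bzip2 "$f"; done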
This bash script splits a dumpfile of one database into separate files for each table with csplit and names them accordingly:
#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on https://gist.github.com/jasny/1608062
####
#adjust this to your case:
START="/-- Table structure for table/"
# or
#START="/DROP TABLE IF EXISTS/"
if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
    echo "USAGE: extract all tables:"
    echo " $0 DUMP_FILE"
    echo "extract one table:"
    echo " $0 DUMP_FILE [TABLE]"
    exit
fi

if [ $# -ge 2 ] ; then
    #extract one table $2
    csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
    #extract all tables
    csplit -s -ftable $1 "$START" {*}
fi

[ $? -eq 0 ] || exit

mv table00 head
FILE=`ls -1 table* | tail -n 1`

if [ $# -ge 2 ] ; then
    # cut off all other tables
    mv $FILE foot
else
    # cut off the end of each file
    csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
    mv ${FILE}1 foot
fi

for FILE in `ls -1 table*`; do
    NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
    cat head $FILE foot > "$NAME.sql"
done

rm head foot table*
based on https://gist.github.com/jasny/1608062
and https://stackoverflow.com/a/16840625/1069083
First dump the schema only (it surely fits in 2 MB, no?):
mysqldump -d --all-databases
and restore it.
Afterwards, dump only the data as separate INSERT statements, so you can split the files and restore them without having to concatenate them on the remote server:
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
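Put together, a rough sketch of the workflow (the file names and line count are assumptions):
mysqldump -d --all-databases > schema.sql
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE > data.sql
split --lines=5000 data.sql data_part_
# import schema.sql first, then each data_part_* piece, through phpMyAdmin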
There is this excellent mysqldumpsplitter script, which comes with a ton of options for extracting data from a mysqldump.
I will copy the recipes here so you can choose your case:
1) Extract single database from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract DB --match_str database-name
Above command will create sql for specified database from specified
"filename" sql file and store it in compressed format to
database-name.sql.gz.
2) Extract single table from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract TABLE --match_str table-name
Above command will create sql for specified table from specified
"filename" mysqldump file and store it in compressed format to
table-name.sql.gz.
3) Extract tables matching regular expression from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str regular-expression
Above command will create sqls for tables matching specified regular
expression from specified "filename" mysqldump file and store it in
compressed format to individual table-name.sql.gz.
4) Extract all databases from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLDBS
Above command will extract all databases from specified "filename"
mysqldump file and store it in compressed format to individual
database-name.sql.gz.
5) Extract all tables from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES
Above command will extract all tables from specified "filename"
mysqldump file and store it in compressed format to individual
table-name.sql.gz.
6) Extract list of tables from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP --match_str '(table1|table2|table3)'
Above command will extract tables from the specified "filename"
mysqldump file and store them in compressed format to individual
table-name.sql.gz.
7) Extract a database from compressed mysqldump:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip
Above command will decompress filename.sql.gz using gzip, extract
database named "dbname" from "filename.sql.gz" & store it as
out/dbname.sql.gz
8) Extract a database from compressed mysqldump in an uncompressed
format:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB --match_str 'dbname' --decompression gzip --compression none
Above command will decompress filename.sql.gz using gzip and extract
database named "dbname" from "filename.sql.gz" & store it as plain sql
out/dbname.sql
9) Extract all tables from mysqldump into a different folder:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES --output_dir /path/to/extracts/
Above command will extract all tables from specified "filename"
mysqldump file and extracts tables in compressed format to individual
files, table-name.sql.gz stored under /path/to/extracts/. The script
will create the folder /path/to/extracts/ if not exists.
10) Extract one or more tables from one database in a full-dump:
Consider that you have a full dump with multiple databases and you want to extract a few tables from one database.
Extract the single database first:
sh mysqldumpsplitter.sh --source filename --extract DB --match_str DBNAME --compression none
Then extract the tables from it:
sh mysqldumpsplitter.sh --source out/DBNAME.sql --extract REGEXP --match_str "(tbl1|tbl2)"
Though we can use another option to do this in a single command, as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1|tbl2)" --compression none
Above command will extract both tbl1 and tbl2 from DBNAME database in
sql format under folder "out" in current directory.
You can extract single table as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.(tbl1)" --compression none
11) Extract all tables from specific database:
mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str "DBNAME.*" --compression none
Above command will extract all tables from DBNAME database in sql
format and store it under "out" directory.
12) List content of the mysqldump file
mysqldumpsplitter.sh --source filename --desc
Above command will list databases and tables from the dump file.
You may later choose to load the files: zcat filename.sql.gz | mysql -uUSER -p -hHOSTNAME
Also, once you have extracted a single table that you think is still too big, you can use the Linux split command with a number of lines to further split the dump.
split -l 10000 filename.sql
That said, if this is a recurring need, you might consider using mydumper, which actually creates individual dumps that you won't need to split! A rough sketch follows.
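A rough mydumper sketch, assuming a reasonably recent version (check mydumper --help for the exact flags on yours; host, credentials and paths are placeholders):
# sketch: one file per table, plus chunked files for big tables via --rows
mydumper --host=localhost --user=USER --password=PASS \
         --database=DBNAME --rows=500000 --outputdir=/path/to/dumpdir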
You say that you don't have access to the second server. But if you have shell access to the first server, where the tables are, you can split your dump by table:
for T in `mysql -N -B -e 'show tables from dbname'`; \
do echo $T; \
mysqldump [connecting_options] dbname $T \
| gzip -c > dbname_$T.dump.gz ; \
done
This will create a gzip file for each table.
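On the receiving side you can then restore them one at a time; a minimal sketch (connection options are placeholders):
for f in dbname_*.dump.gz; do
  zcat "$f" | mysql [connecting_options] dbname
done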
Another way of splitting the output of mysqldump in separate files is using the --tab option.
mysqldump [connecting options] --tab=directory_name dbname
where directory_name is the name of an empty directory.
This command creates a .sql file for each table, containing the CREATE TABLE statement, and a .txt file, containing the data, to be restored using LOAD DATA INFILE. I am not sure if phpMyAdmin can handle these files with your particular restriction, though.
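For completeness, restoring the --tab output typically looks roughly like this (paths and names are placeholders); mysqlimport is the command-line client for LOAD DATA INFILE:
mysql dbname < directory_name/table_name.sql               # recreate the table
mysqlimport --local dbname directory_name/table_name.txt   # load the data via LOAD DATA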
Late reply, but I was looking for the same solution and came across the following code from the website below:
for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done
http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files
I wrote a new version of the SQLDumpSplitter, this time with a proper parser, allowing nice things like splitting INSERTs with many values over several files, and it is multi-platform now: https://philiplb.de/sqldumpsplitter3/
You don't need ssh access to either of your servers. Just a mysql[dump] client is fine.
With the mysql[dump], you can dump your database and import it again.
On your PC, you can do something like:
$ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase
and you're done. :-)
hope this helps
You can split an existing file with AWK. It's very quick and simple.
Let's split the dump by tables:
cat dump.sql | awk 'BEGIN { output = "comments"; }
$0 ~ /^CREATE TABLE/ { close(output); output = substr($3,2,length($3)-2); }
{ print $0 >> output }'
Or you can split the dump by database:
cat backup.sql | awk 'BEGIN {output="comments";} $0 ~ /Current Database/ {close(output);output=$4;} {print $0>>output}'
You can dump individual tables with mysqldump by running mysqldump database table1 table2 ... tableN
If none of the tables are too large, that will be enough. Otherwise, you'll have to start splitting the data in the larger tables.
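For splitting a single large table, one option is mysqldump's --where flag to dump it in ranges (the id column and the boundaries are only assumptions about your schema):
mysqldump database bigtable --where="id <= 100000" > bigtable_part1.sql
mysqldump database bigtable --where="id > 100000" --no-create-info > bigtable_part2.sql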
I would recommend the utility BigDump; you can grab it here: http://www.ozerov.de/bigdump.php
It staggers the execution of the dump, staying as close as it can to your limit, executing whole lines at a time.
Try this: https://github.com/shenli/mysqldump-hugetable
It will dump the data into many small files. Each file contains at most MAX_RECORDS records. You can set this parameter in env.sh.
I wrote a Python script to split a single large sql dump file into separate files, one for each CREATE TABLE statement. It writes the files to a new folder that you specify. If no output folder is specified, it creates a new folder with the same name as the dump file, in the same directory. It works line-by-line, without writing the file to memory first, so it is great for large files.
https://github.com/kloddant/split_sql_dump_file
import sys, re, os

if sys.version_info[0] < 3:
    raise Exception("""Must be using Python 3. Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")

sqldump_path = input("Enter the path to the sql dump file: ")
if not os.path.exists(sqldump_path):
    raise Exception("Invalid sql dump path. {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))

# os.path.splitext avoids the rstrip('.sql') pitfall of also stripping trailing 's', 'q' or 'l' characters
output_folder_path = input("Enter the path to the output folder: ") or os.path.splitext(sqldump_path)[0]
if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)

table_name = None
output_file_path = None
smallfile = None

with open(sqldump_path, 'rb') as bigfile:
    for line_number, line in enumerate(bigfile):
        line_string = line.decode("utf-8")
        if 'CREATE TABLE' in line_string.upper():
            match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
            if match:
                table_name = match.group('table')
                print(table_name)
                output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                if smallfile:
                    smallfile.close()
                smallfile = open(output_file_path, 'wb')
        if not table_name:
            continue
        smallfile.write(line)

if smallfile:
    smallfile.close()
Try csplit(1) to cut up the output into the individual tables based on regular expressions (matching the table boundary I would think).
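For instance, a rough one-liner splitting on the comment header mysqldump writes before each table (the output prefix and file name are assumptions):
csplit -s -f table_ dump.sql '/-- Table structure for table/' '{*}'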
This script should do it:
#!/bin/sh
#edit these
USER=""
PASSWORD=""
MYSQLDIR="/path/to/backupdir"
MYSQLDUMP="/usr/bin/mysqldump"
MYSQL="/usr/bin/mysql"
echo - Dumping tables for each DB
databases=`$MYSQL --user=$USER --password=$PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases; do
    echo - Creating "$db" DB
    mkdir $MYSQLDIR/$db
    chmod -R 777 $MYSQLDIR/$db
    for tb in `$MYSQL --user=$USER --password=$PASSWORD -N -B -e "use $db; show tables"`
    do
        echo -- Creating table $tb
        $MYSQLDUMP --opt --delayed-insert --insert-ignore --user=$USER --password=$PASSWORD $db $tb | bzip2 -c > $MYSQLDIR/$db/$tb.sql.bz2
    done
    echo
done
Check out SQLDumpSplitter 2; I just used it to split a 40 MB dump with success. You can get it at the link below:
sqldumpsplitter.com
Hope this helps.
I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's available here: https://github.com/Verace/MySQLDumpSplitter.
A clarification on the answer of @Vérace:
I especially like the interactive method; you can split a large file in Eclipse. I have tried a 105 GB file on Windows successfully:
Just add the MySQLDumpSplitter library to your project:
http://dl.bintray.com/verace/MySQLDumpSplitter/jar/
Quick note on how to import:
- In Eclipse, Right click on your project --> Import
- Select "File System" and then "Next"
- Browse the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in Package Explorer in Eclipse
- Double click on the jar file in Eclipse (in Package Explorer)
- The "MySQL Dump file splitter" window opens which you can specify the address of your dump file and proceed with split.