How can I export a MySQL #temp table to a .csv file within a script? - mysql

I'd like to write a script that exports 15-20 temporary tables I created to .csv files, instead of having to copy and paste each result into a separate .csv file and save it by hand.
:!!sqlcmd -S server -d database -E -Q "SET NOCOUNT ON
SELECT * FROM TABLE" -o "C:\Users\name\Documents\folder\filename.csv"
-W -w 1024 -s ","
I've tried this, which works (though the formatting isn't right), but it doesn't seem to work at all for a temp table; the .csv file contains only this:
Msg 208 Level 16 State 1 Server SERVERNAME
Invalid object name '#TEMPTABLE'.
I cannot obtain "elevated privileges" to be able to use BCP export, because I cannot write a stored procedure, create a new database, or access the command line. Is there a workaround for this?

Temp tables are ephemeral; they do not persist across sessions. Instead of creating temp tables, create actual tables, either in the database that you're working with or in tempdb, and then export the data from tempdb.
An example:
sqlcmd -S server -d database -E -Q "If Exists (select * FROM tempdb.sys.tables WHERE name = 'Tmp_DataExport1') drop TABLE tempdb..Tmp_DataExport1;"
sqlcmd -S server -d database -E -Q "SELECT TOP 5 * INTO tempdb..Tmp_DataExport1 FROM T_SourceTable"
sqlcmd -S server -d database -E -Q "SELECT * FROM tempdb..Tmp_DataExport1" -o "c:\temp\filename.csv" -W -w 1024 -s ","

Related

Oneliner pipe to PostgreSQL from MySQL with bash

So I am piping quite a lot of data using bash every day between 3 servers:
Server A is MySQL (connection over SSH).
Server B is just a CentOS server where I run the bash script.
Server C is PostgreSQL 9.6.
All was good until one table got one row with a double quote in the middle of a varchar. This is breaking my pipe at the insertion level (on pg side).
Indeed, when getting the data this way from Mysql, it is not quoted. So, I believe in the end it's because of the basic behaviour of COPY and its QUOTE parameter.
Here is the bash code:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" 'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA "' | \
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' NULL AS 'NULL';"
I tried playing with the COPY parameter QUOTE but unsuccessfully.
Should I put some sed in the middle of the pipeline?
I also tried double quoting when getting the data out of mysql but could not find the relevant parameter when mysql is used in a pipe like this.
I'd like to keep things in one pipe (no MySQL->CSV then CSV->PG, please).
Thanks!
Here's a working sample of importing CSV into Postgres:
t=# create table so10 (i int,t text);
CREATE TABLE
t=# \q
postgres@vao-VirtualBox:~$ echo "1,Bro" | psql -d t -c "copy so10 from stdin with csv"
COPY 1
postgres@vao-VirtualBox:~$ psql t -c "select * from so10"
i | t
---+-----
1 | Bro
(1 row)
You can open an SSH tunnel to MySQL and run mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA" locally (instead of the echo in my example).
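One way to sidestep the quote problem entirely (a sketch, untested against your schema, reusing the placeholders from the question): use COPY's default text format instead of CSV. The output of mysql -e is tab-separated and unquoted, which matches the text format's defaults, so embedded double quotes become harmless and only backslashes need escaping.
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" \
'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA"' |
tail -n +2 |        # drop the header row that mysql -e prints
sed 's/\\/\\\\/g' | # escape literal backslashes for COPY's text format
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH (FORMAT text, NULL 'NULL');"
Embedded tabs or newlines in the data would still need their own escaping; if those occur, a proper CSV-quoting pass in the middle of the pipe is unavoidable.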

How to ignore certain MySQL tables when importing a database?

I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport to import that database; however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables, but ignore some of them? Thank you.
The accepted answer by RandomSeed could take a long time! Importing the table (just to drop it later) could be very wasteful depending on size.
For a file created using
mysqldump -u user -ppasswd --opt --routines DBname > DBdump.sql
I currently get a file about 7GB, 6GB of which is data for a log table that I don't 'need' to be there; reloading this file takes a couple of hours. If I need to reload (for development purposes, or if ever required for a live recovery) I skim the file thus:
sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql
And reload with:
mysql -u user -ppasswd DBname < reduced.sql
This gives me a complete database, with the "unwanted" table created but empty. If you really don't want the tables at all, simply drop the empty tables after the load finishes.
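For that last step, dropping a leftover empty table is a one-liner (a sketch reusing the placeholder names above):
mysql -u user -ppasswd DBname -e 'DROP TABLE IF EXISTS `TABLE_TO_SKIP`;'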
For multiple tables you could do something like this:
sed '/INSERT INTO `TABLE1_TO_SKIP`/d' DBdump.sql | \
sed '/INSERT INTO `TABLE2_TO_SKIP`/d' | \
sed '/INSERT INTO `TABLE3_TO_SKIP`/d' > reduced.sql
There IS a 'gotcha' - watch out for procedures in your dump that might contain "INSERT INTO TABLE_TO_SKIP".
mysqlimport is not the right tool for importing SQL statements. This tool is meant to import formatted text files such as CSV. What you want to do is feed your sql dump directly to the mysql client with a command like this one:
bash > mysql -D your_database < your_sql_dump.sql
Neither mysql nor mysqlimport provide the feature you need. Your best chance would be importing the whole dump, then dropping the tables you do not want.
If you have access to the server where the dump comes from, then you could create a new dump with mysqldump --ignore-table=database.table_you_dont_want1 --ignore-table=database.table_you_dont_want2 ....
Check out this answer for a workaround to skip importing some tables.
For anyone working with .sql.gz files: I found the following solution to be very useful. Our database was 25GB+ and I had to remove the log tables.
gzip -cd "./mydb.sql.gz" | sed -r '/INSERT INTO `(log_table_1|log_table_2|log_table_3|log_table_4)`/d' | gzip > "./mydb2.sql.gz"
Thanks to Don's answer, Xosofox's comment, and this related post:
Use zcat and sed or awk to edit compressed .gz text file
Little old, but figure it might still come in handy...
I liked @Don's answer (https://stackoverflow.com/a/26379517/1446005) but found it very annoying that you'd have to write to another file first...
In my particular case this would take too much time and disk space.
So I wrote a little bash script:
#!/bin/bash
tables=(table1_to_skip table2_to_skip ... tableN_to_skip)
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}
grepExp="INSERT INTO \`($trimmed)\`"
zcat $1 | grep -vE "$grepExp" | mysql -uroot -p
This does not generate a new SQL script but pipes it directly to the database.
Also, it does create the tables; it just doesn't import the data (which was the problem I had with huge log tables).
Unless you have ignored the tables during the dump with mysqldump --ignore-table=database.unwanted_table, you have to use some script or tool to filter out the data you don't want to import from the dump file before passing it to mysql client.
Here is a bash/sh function that would exclude the unwanted tables from a SQL dump on the fly (through pipe):
# Accepts one argument, the list of tables to exclude (case-insensitive).
# Eg. filt_exclude '%session% action_log %_cache'
filt_exclude() {
local excl_tns;
if [ -n "$1" ]; then
# trim & replace /[,;\s]+/ with '|' & replace '%' with '[^`]*'
excl_tns=$(echo "$1" | sed -r 's/^[[:space:]]*//g; s/[[:space:]]*$//g; s/[[:space:]]+/|/g; s/[,;]+/|/g; s/%/[^\`]\*/g');
grep -viE "(^INSERT INTO \`($excl_tns)\`)|(^DROP TABLE (IF EXISTS )?\`($excl_tns)\`)|^LOCK TABLES \`($excl_tns)\` WRITE" | \
sed 's/^CREATE TABLE `/CREATE TABLE IF NOT EXISTS `/g'
else
cat
fi
}
Suppose you have a dump created like so:
MYSQL_PWD="my-pass" mysqldump -u user --hex-blob db_name | \
pigz -9 > dump.sql.gz
And want to exclude some unwanted tables before importing:
pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
MYSQL_PWD="my-pass" mysql -u user db_name
Or you could pipe into a file or any other tool before importing to DB.
If desired, you can do this one table at a time:
mysqldump -p sourceDatabase tableName > tableName.sql
mysql -p -D targetDatabase < tableName.sql
Here is my script to exclude some tables from a MySQL dump.
I use it to restore a DB when I need to keep the orders and payments data.
exclude_tables_from_dump.sh
#!/bin/bash
if [ ! -f "$1" ];
then
echo "Usage: $0 mysql_dump.sql"
exit
fi
declare -a TABLES=(
user
order
order_product
order_status
payments
)
CMD="cat $1"
for TBL in "${TABLES[@]}";do
CMD+="|sed 's/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXISTS \`${TBL}\`/g'"
CMD+="|sed 's/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g'"
CMD+="|sed -r '/INSERT INTO \`${TBL}\`/d'"
CMD+="|sed '/DELIMITER\ \;\;/,/DELIMITER\ \;/d'"
done
eval $CMD
It avoids dropping and re-creating these tables and inserting data into them.
It also strips all FUNCTIONS and PROCEDURES stored between DELIMITER ;; and DELIMITER ;.
I would not use it in production, but if I had to quickly import a backup that contains many smaller tables and one big monster table that might take hours to import, I would most probably run grep -v unwanted_table_name original.sql > reduced.sql
and then mysql -f < reduced.sql.

MySQL - mysqldump --routines to only export 1 stored procedure (by name) and not every routine

So we have a lot of routines that come out from exporting. We often need to get these out in CLI, make changes, and bring them back in. Yes, some of these are managed by different folks and a better change control is required, but for now this is the situation.
If I do:
mysqldump --routines --no-create-info --no-data --no-create-db
then great, I have 200 functions. I need to go through a file to find just the one or set I want.
Is there any way to mysqldump just the routines that I want, like there is for tables?
Another way to go about this would be the following. Do note, however, that you have to have root privileges in the target database in order to import rows into mysql.proc:
mysqldump --compact --no-create-info --where="db='yourdatabasename' AND type='PROCEDURE' AND name IN ('yoursp1', 'yoursp2')" --databases mysql --tables proc
To answer your exact question: no.
But this will probably give you what you want.
Take a look at SHOW CREATE PROCEDURE and SHOW CREATE FUNCTION:
http://dev.mysql.com/doc/refman/5.0/en/show-create-procedure.html
http://dev.mysql.com/doc/refman/5.0/en/show-create-function.html
Those commands allow you to dump the code for one routine at a time.
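For example, to grab the definition of a single routine from the command line (a sketch; the database and routine names are placeholders):
mysql -u user -p mydatabase -e 'SHOW CREATE PROCEDURE `myprocedure`\G'
mysql -u user -p mydatabase -e 'SHOW CREATE FUNCTION `myfunction`\G'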
It is possible to dump a single function or procedure using the command that Ike Walker mentioned, but the SHOW CREATE PROCEDURE and SHOW CREATE FUNCTION commands don't allow you to select only a few columns from the output.
Here is an example of a Windows batch command line to dump a single procedure, using the system table mysql.proc:
mysql --defaults-extra-file=myconfig.cnf --skip-column-names --raw --batch mydatabase -e "SELECT CONCAT('DELIMITER $$\nCREATE PROCEDURE `', specific_name, '`(', param_list, ') AS \n', body_utf8, ' $$\nDELIMITER ;\n') AS `stmt` FROM `mysql`.`proc` WHERE `db` = 'mydatabase' AND specific_name = 'myprocedure';" 1> myprocedure.sql
This will redirect the output of mysql into the file myprocedure.sql.
The --batch option tells the mysql client to remove the table borders from the output.
The --skip-column-names option removes the column headers from the output.
The --raw option tells MySQL to not escape special characters on the output, keeping new lines as is instead of replacing them with \n.
And if you want to dump ALL the procedures in different files, this example in batch should work:
dump-procedures.bat
@echo off
REM set the target database
set database=mydatabase
REM set the connection configuration file
set auth=--defaults-extra-file=myconfig.cnf
REM set the routine type you want to dump
set routine_type=PROCEDURE
set list_file=%routine_type%S.csv
if "%routine_type%"=="PROCEDURE" (
set ending=AS
)
if "%routine_type%"=="FUNCTION" (
set ending=RETURNS ', `returns`, '
)
echo Dumping %routine_type% list to %list_file%
mysql %auth% --skip-column-names --raw %database% -e "SELECT routine_name FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_SCHEMA = DATABASE() AND ROUTINE_TYPE = '%routine_type%';" 1> %list_file%
for /f "tokens=*" %%a in (%list_file%) do (
echo Dumping %routine_type% %%a
mysql %auth% --skip-column-names --raw --batch %database% -e "SELECT CONCAT('DELIMITER $$\nCREATE PROCEDURE `', specific_name, '`(', param_list, ') %ending% \n', body_utf8, ' $$\nDELIMITER ;\n') AS `stmt` FROM `mysql`.`proc` WHERE `db` = '%database%' AND specific_name = '%%a';" 1> %%a.sql
)
It works in two steps: it first dumps the list of all procedures into the list file (PROCEDURES.csv in this case), and then iterates over each line, using the procedure names to dump each procedure into a different file.
In this example, I am also using the --defaults-extra-file option, which reads some configuration parameters from a separate file and lets you invoke the command without typing the password each time or writing the password inside the batch file itself. I created a file with this content:
myconfig.cnf
[client]
host=localhost
port=3306
user=myusername
password=mypassword
This solution also works with functions, by setting the routine_type variable to:
set routine_type=FUNCTION
I have written a solution in bash.
For one procedure
mysql -NB -u root my_database -e 'show create procedure `myProcedure`' |\
xargs -n 1 -d '\t' echo |\
egrep '^CREATE' |\
xargs --null echo -e
For all procedures
mysql -NB -u root -e 'SELECT ROUTINE_NAME FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_SCHEMA = "my_database" AND ROUTINE_TYPE = "PROCEDURE"' |\
xargs -n 1 -I {} mysql -NB -u root my_database -e 'show create procedure `{}`' |\
xargs -n 1 -d '\t' echo |\
egrep '^CREATE' |\
xargs --null echo -e
Here's a trick:
Create a database locally.
Dump your database, including your stored procedures, to the local one,
or
just use SQLyog and copy the database, ticking the stored procedure(s) you want (for example, a single stored procedure), over to the local server.
Then dump your local database. Now you have just that one stored procedure in your dump. Remember to use --routines if you only want stored procedures/events.
The trick is fun; it isn't lazy, just tricky.
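The final dump of the local copy might then look something like this (a sketch that simply reuses the flags from the question; local_db is a placeholder):
mysqldump --routines --no-create-info --no-data --no-create-db -u root -p local_db > routines_only.sql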

Dump a mysql database to a plaintext (CSV) backup from the command line

I'd like to avoid mysqldump since that outputs in a form that is only convenient for mysql to read. CSV seems more universal (one file per table is fine). But if there are advantages to mysqldump, I'm all ears. Also, I'd like something I can run from the command line (linux). If that's a mysql script, pointers to how to make such a thing would be helpful.
If you can cope with table-at-a-time, and your data is not binary, use the -B option to the mysql command. With this option it'll generate TSV (tab-separated) files which you can import into Excel, etc., quite easily:
% echo 'SELECT * FROM table' | mysql -B -uxxx -pyyy database
Alternatively, if you've got direct access to the server's file system, use SELECT INTO OUTFILE which can generate real CSV files:
SELECT * INTO OUTFILE 'table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table
In MySQL itself, you can specify CSV output like:
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/
You can dump a whole database in one go with mysqldump's --tab option. You supply a directory path and it creates one .sql file per table with the DROP TABLE IF EXISTS / CREATE TABLE statements and a .txt file with the contents, tab separated. To create comma-separated files you could use the following:
mysqldump --password --fields-optionally-enclosed-by='"' --fields-terminated-by=',' --tab /tmp/path_to_dump/ database_name
That path needs to be writable by both the mysql user and the user running the command, so for simplicity I recommend chmod 777 /tmp/path_to_dump/ first.
The SELECT INTO OUTFILE option wouldn't work for me, but the roundabout way below of piping the tab-delimited output through sed did:
mysql -uusername -ppassword -e "SELECT * from tablename" dbname | sed 's/\t/","/g;s/^/"/;s/$/"/' > /path/to/file/filename.csv
Here is the simplest command for it:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' | sed 's/\t/,/g' > output.csv
If there is a comma in a column value then we can generate a .tsv instead of a .csv with the following command:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' > output.tsv
If you really need a "backup" then you also need the database schema: table definitions, view definitions, stored procedures and so on. A backup of a database isn't just the data.
The value of the mysqldump format for backup is specifically that it is very EASY to use it to restore mysql databases. A backup that isn't easily restored is far less useful. If you are looking for a method to reliably backup mysql data to so you can restore to a mysql server then I think you should stick with the mysqldump tool.
MySQL is free and runs on many different platforms. Setting up a new MySQL server that I can restore to is simple. I am not at all worried about not being able to set up MySQL so I can do a restore.
I would be far more worried about a custom backup/restore based on a fragile format like csv/tsv failing. Are you sure that all your quotes, commas, or tabs that are in your data would get escaped correctly and then parsed correctly by your restore tool?
If you are looking for a method to extract the data then see several in the other answers.
You can use the script below to get the output into CSV files, one file per table, with headers.
for tn in `mysql --batch --skip-pager --skip-column-names --raw -uuser -ppassword -e"show tables from mydb"`
do
mysql -uuser -ppassword mydb -B -e "select * from \`$tn\`;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $tn.csv
done
Here user is your user name, password is the password (so you don't have to keep typing it for each table), and mydb is the database name.
Explanation of the script: the first expression in sed replaces the tabs with "," so you have fields enclosed in double quotes and separated by commas. The second one inserts a double quote at the beginning, the third one inserts a double quote at the end, and the final one takes care of the \n.
If you want to dump the entire DB as CSV:
#!/bin/bash
host=hostname
uname=username
pass=password
port=portnr
db=db_name
s3_url=s3://buckera/db_dump/
DATE=`date +%Y%m%d`
rm -rf $DATE
echo 'show tables' | mysql -B -h${host} -u${uname} -p${pass} -P${port} ${db} > tables.txt
awk 'NR>1' tables.txt > tables_new.txt
while IFS= read -r line
do
mkdir -p $DATE/$line
echo "select * from $line" | mysql -B -h"${host}" -u"${uname}" -p"${pass}" -P"${port}" "${db}" > $DATE/$line/dump.tsv
done < tables_new.txt
touch $DATE/$DATE.fin
rm -rf tables_new.txt tables.txt
Check out mk-parallel-dump which is part of the ever-useful maatkit suite of tools. This can dump comma-separated files with the --csv option.
This can do your whole db without specifying individual tables, and you can specify groups of tables in a backupset table.
Note that it also dumps table definitions, views and triggers into separate files. In addition to providing a complete backup in a more universally accessible form, it is also immediately restorable with mk-parallel-restore.
Two line PowerShell answer:
# Store in variable
$Global:csv = (mysql -uroot -p -hlocalhost -Ddatabase_name -B -e "SELECT * FROM some_table") `
| ConvertFrom-Csv -Delimiter "`t"
# Out to csv
$Global:csv | Export-Csv "C:\temp\file.csv" -NoTypeInformation
Boom-bata-boom
-D = the name of your database
-e = query
-B = tab-delimited
There's a slightly simpler way to get all the tables into tab-delimited files fast:
#!/bin/bash
tablenames=$(mysql your_database -e "show tables;" -B |sed "1d")
IFS=$'\n'
tables=($tablenames)
for table in ${tables[@]}; do
mysql your_database -e "select * from ${table}" -B > "${table}.tsv"
done
Here's a basic Python script that does the work! You can choose to export only the headers (column names) or both headers and data.
Just change the database credentials and run the script. It will output all the data to the output folder.
To run the script -
Run: pip install mysql-connector-python
Change database credentials in the "INPUT" section
Run: python filename.py
import mysql.connector
from pathlib import Path
import csv
#========INPUT===========
databaseHost=""
databaseUsername=""
databasePassword=""
databaseName=""
outputDirectory="./WITH-DATA/"
exportTableData=True #MAKING THIS FIELD FALSE WILL STORE ONLY THE TABLE HEADERS (COLUMN NAMES) IN THE CSV FILE
#========INPUT END===========
Path(outputDirectory).mkdir(parents=True, exist_ok=True)
mydb = mysql.connector.connect(
host=databaseHost,
user=databaseUsername,
password=databasePassword
)
mycursor = mydb.cursor()
mycursor.execute("USE "+databaseName)
mycursor.execute("SHOW TABLES")
tables = mycursor.fetchall()
tableNames=[table[0] for table in tables]
print("================================")
print("Total number of tables: "+ str(len(tableNames)))
print(tableNames)
print("================================")
for tableName in tableNames:
    print("================================")
    print("Processing: "+ str(tableName))
    mydb = mysql.connector.connect(
        host=databaseHost,
        user=databaseUsername,
        password=databasePassword
    )
    mycursor = mydb.cursor()
    mycursor.execute("USE "+databaseName)
    if exportTableData:
        mycursor.execute("SELECT * FROM "+tableName)
    else:
        mycursor.execute("SELECT * FROM "+tableName+" LIMIT 1")
    print(mycursor.column_names)
    with open(outputDirectory+tableName+".csv", 'w', newline='') as csvfile:
        csvwriter = csv.writer(csvfile)
        csvwriter.writerow(mycursor.column_names)
        if exportTableData:
            myresult = mycursor.fetchall()
            csvwriter.writerows(myresult)

How do I split the output from mysqldump into smaller files?

I need to move entire tables from one MySQL database to another. I don't have full access to the second one, only phpMyAdmin access. I can only upload (compressed) sql files smaller than 2MB. But the compressed output from a mysqldump of the first database's tables is larger than 10MB.
Is there a way to split the output from mysqldump into smaller files? I cannot use split(1) since I cannot cat(1) the files back on the remote server.
Or is there another solution I have missed?
Edit
The --extended-insert=FALSE option to mysqldump suggested by the first poster yields a .sql file that can then be split into importable files, provided that split(1) is called with a suitable --lines option. By trial and error I found that bzip2 compresses the .sql files by a factor of 20, so I needed to figure out how many lines of sql code correspond roughly to 40MB.
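Putting the edit together, the workflow might look like this (a sketch; the line count is a placeholder you would tune by trial and error as described above):
# one INSERT per row, so the file can be cut at any line boundary
mysqldump --extended-insert=FALSE -u user -p dbname > dump.sql
# split into pieces small enough that each one compresses to under 2MB
split --lines=20000 dump.sql dump_part_
# compress each piece for upload through phpMyAdmin
for f in dump_part_*; do bzip2 "$f"; done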
This bash script splits a dump file of one database into separate files for each table, using csplit, and names them accordingly:
#!/bin/bash
####
# Split MySQL dump SQL file into one file per table
# based on https://gist.github.com/jasny/1608062
####
#adjust this to your case:
START="/-- Table structure for table/"
# or
#START="/DROP TABLE IF EXISTS/"
if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]] ; then
echo "USAGE: extract all tables:"
echo " $0 DUMP_FILE"
echo "extract one table:"
echo " $0 DUMP_FILE [TABLE]"
exit
fi
if [ $# -ge 2 ] ; then
#extract one table $2
csplit -s -ftable $1 "/-- Table structure for table/" "%-- Table structure for table \`$2\`%" "/-- Table structure for table/" "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
#extract all tables
csplit -s -ftable $1 "$START" {*}
fi
[ $? -eq 0 ] || exit
mv table00 head
FILE=`ls -1 table* | tail -n 1`
if [ $# -ge 2 ] ; then
# cut off all other tables
mv $FILE foot
else
# cut off the end of each file
csplit -b '%d' -s -f$FILE $FILE "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
mv ${FILE}1 foot
fi
for FILE in `ls -1 table*`; do
NAME=`head -n1 $FILE | cut -d$'\x60' -f2`
cat head $FILE foot > "$NAME.sql"
done
rm head foot table*
based on https://gist.github.com/jasny/1608062
and https://stackoverflow.com/a/16840625/1069083
First dump the schema (it surely fits in 2Mb, no?)
mysqldump -d --all-databases
and restore it.
Afterwards, dump only the data in separate insert statements, so you can split the files and restore them without having to concatenate them on the remote server:
mysqldump --all-databases --extended-insert=FALSE --no-create-info=TRUE
There is this excellent mysqldumpsplitter script which comes with tons of options for extracting from a mysqldump.
I'll copy the recipe here so you can choose your case from it:
1) Extract single database from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract DB --match_str
database-name
Above command will create sql for specified database from specified
"filename" sql file and store it in compressed format to
database-name.sql.gz.
2) Extract single table from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract TABLE --match_str
table-name
Above command will create sql for specified table from specified
"filename" mysqldump file and store it in compressed format to
table-name.sql.gz.
3) Extract tables matching regular expression from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP
--match_str regular-expression
Above command will create sqls for tables matching specified regular
expression from specified "filename" mysqldump file and store it in
compressed format to individual table-name.sql.gz.
4) Extract all databases from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLDBS
Above command will extract all databases from specified "filename"
mysqldump file and store it in compressed format to individual
database-name.sql.gz.
5) Extract all table from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES
Above command will extract all tables from specified "filename"
mysqldump file and store it in compressed format to individual
table-name.sql.gz.
6) Extract list of tables from mysqldump:
sh mysqldumpsplitter.sh --source filename --extract REGEXP
--match_str '(table1|table2|table3)'
Above command will extract tables from the specified "filename"
mysqldump file and store them in compressed format to individual
table-name.sql.gz.
7) Extract a database from compressed mysqldump:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB
--match_str 'dbname' --decompression gzip
Above command will decompress filename.sql.gz using gzip, extract
database named "dbname" from "filename.sql.gz" & store it as
out/dbname.sql.gz
8) Extract a database from compressed mysqldump in an uncompressed
format:
sh mysqldumpsplitter.sh --source filename.sql.gz --extract DB
--match_str 'dbname' --decompression gzip --compression none
Above command will decompress filename.sql.gz using gzip and extract
database named "dbname" from "filename.sql.gz" & store it as plain sql
out/dbname.sql
9) Extract alltables from mysqldump in different folder:
sh mysqldumpsplitter.sh --source filename --extract ALLTABLES
--output_dir /path/to/extracts/
Above command will extract all tables from specified "filename"
mysqldump file and extracts tables in compressed format to individual
files, table-name.sql.gz stored under /path/to/extracts/. The script
will create the folder /path/to/extracts/ if not exists.
10) Extract one or more tables from one database in a full-dump:
Consider you have a full dump with multiple databases and you want to
extract few tables from one database.
Extract single database: sh mysqldumpsplitter.sh --source filename
--extract DB --match_str DBNAME --compression none
Extract all tables: sh mysqldumpsplitter.sh --source out/DBNAME.sql
--extract REGEXP --match_str "(tbl1|tbl2)". Though we can use another option to do this in a single command as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE
--match_str "DBNAME.(tbl1|tbl2)" --compression none
Above command will extract both tbl1 and tbl2 from DBNAME database in
sql format under folder "out" in current directory.
You can extract single table as follows:
sh mysqldumpsplitter.sh --source filename --extract DBTABLE
--match_str "DBNAME.(tbl1)" --compression none
11) Extract all tables from specific database:
mysqldumpsplitter.sh --source filename --extract DBTABLE --match_str
"DBNAME.*" --compression none
Above command will extract all tables from DBNAME database in sql
format and store it under "out" directory.
12) List content of the mysqldump file
mysqldumpsplitter.sh --source filename --desc
Above command will list databases and tables from the dump file.
You may later choose to load the files: zcat filename.sql.gz | mysql -uUSER -p -hHOSTNAME
Also, once you extract a single table that you think is still too big, you can use the Linux split command with a number of lines to further split the dump.
split -l 10000 filename.sql
That said, if this is a recurring need, you might consider using mydumper, which actually creates individual dumps you won't need to split!
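A minimal mydumper invocation might look like this (a sketch; I have not verified the flag spellings against your version, so check mydumper --help):
# dump each table of dbname into its own file under /path/to/dumpdir, chunked every 50000 rows
mydumper -u user -p password -B dbname -o /path/to/dumpdir --rows 50000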
You say that you don't have access to the second server. But if you have shell access to the first server, where the tables are, you can split your dump by table:
for T in `mysql -N -B -e 'show tables from dbname'`; \
do echo $T; \
mysqldump [connecting_options] dbname $T \
| gzip -c > dbname_$T.dump.gz ; \
done
This will create a gzip file for each table.
Another way of splitting the output of mysqldump in separate files is using the --tab option.
mysqldump [connecting options] --tab=directory_name dbname
where directory_name is the name of an empty directory.
This command creates a .sql file for each table, containing the CREATE TABLE statement, and a .txt file, containing the data, to be restored using LOAD DATA INFILE. I am not sure if phpMyAdmin can handle these files with your particular restriction, though.
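Restoring one table from that directory could then be done with the mysql client and mysqlimport (a sketch; directory and table names are placeholders):
# recreate the table from the .sql file, then load the tab-separated data
mysql -u user -p dbname < directory_name/tablename.sql
mysqlimport -u user -p --local dbname directory_name/tablename.txt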
Late reply, but I was looking for the same solution and came across the following code from the website below:
for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $I | gzip > "$I.sql.gz"; done
http://www.commandlinefu.com/commands/view/2916/backup-all-mysql-databases-to-individual-files
I wrote a new version of the SQLDumpSplitter, this time with a proper parser, allowing nice things like INSERTs with many values to be split across files, and it is multi-platform now: https://philiplb.de/sqldumpsplitter3/
You don't need ssh access to either of your servers. Just a mysql[dump] client is fine.
With the mysql[dump], you can dump your database and import it again.
On your PC, you can do something like:
$ mysqldump -u originaluser -poriginalpassword -h originalhost originaldatabase | mysql -u newuser -pnewpassword -h newhost newdatabase
and you're done. :-)
hope this helps
You can split an existing file with AWK. It's very quick and simple.
Let's split the dump by table:
cat dump.sql | awk 'BEGIN {output = "comments"; }
$0 ~ /^CREATE TABLE/ {close(output); output = substr($3,2,length($3)-2); }
{ print $0 >> output }';
Or you can split the dump by database:
cat backup.sql | awk 'BEGIN {output="comments";} $0 ~ /Current Database/ {close(output);output=$4;} {print $0>>output}';
You can dump individual tables with mysqldump by running mysqldump database table1 table2 ... tableN
If none of the tables are too large, that will be enough. Otherwise, you'll have to start splitting the data in the larger tables.
I would recommend the utility bigdump; you can grab it here: http://www.ozerov.de/bigdump.php
It staggers the execution of the dump, staying as close as it can manage to your limit, executing whole lines at a time.
Try this: https://github.com/shenli/mysqldump-hugetable
It will dump data into many small files. Each file contains at most MAX_RECORDS records. You can set this parameter in env.sh.
I wrote a Python script to split a single large SQL dump file into separate files, one for each CREATE TABLE statement. It writes the files to a new folder that you specify. If no output folder is specified, it creates a new folder with the same name as the dump file, in the same directory. It works line by line, without reading the whole file into memory first, so it is good for large files.
https://github.com/kloddant/split_sql_dump_file
import sys, re, os
if sys.version_info[0] < 3:
    raise Exception("""Must be using Python 3. Try running "C:\\Program Files (x86)\\Python37-32\\python.exe" split_sql_dump_file.py""")
sqldump_path = input("Enter the path to the sql dump file: ")
if not os.path.exists(sqldump_path):
    raise Exception("Invalid sql dump path. {sqldump_path} does not exist.".format(sqldump_path=sqldump_path))
output_folder_path = input("Enter the path to the output folder: ") or sqldump_path.rstrip('.sql')
if not os.path.exists(output_folder_path):
    os.makedirs(output_folder_path)
table_name = None
output_file_path = None
smallfile = None
with open(sqldump_path, 'rb') as bigfile:
    for line_number, line in enumerate(bigfile):
        line_string = line.decode("utf-8")
        if 'CREATE TABLE' in line_string.upper():
            match = re.match(r"^CREATE TABLE (?:IF NOT EXISTS )?`(?P<table>\w+)` \($", line_string)
            if match:
                table_name = match.group('table')
                print(table_name)
                output_file_path = "{output_folder_path}/{table_name}.sql".format(output_folder_path=output_folder_path.rstrip('/'), table_name=table_name)
                if smallfile:
                    smallfile.close()
                smallfile = open(output_file_path, 'wb')
        if not table_name:
            continue
        smallfile.write(line)
    smallfile.close()
Try csplit(1) to cut up the output into the individual tables based on regular expressions (matching the table boundary I would think).
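For example, splitting on the table-structure comment that mysqldump writes before each table (a sketch):
# produces table_00 (dump header), table_01, ... one chunk per table definition plus its data
csplit -s -f table_ dump.sql '/-- Table structure for table/' '{*}'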
This script should do it:
#!/bin/sh
#edit these
USER=""
PASSWORD=""
MYSQLDIR="/path/to/backupdir"
MYSQLDUMP="/usr/bin/mysqldump"
MYSQL="/usr/bin/mysql"
echo - Dumping tables for each DB
databases=`$MYSQL --user=$USER --password=$PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases; do
echo - Creating "$db" DB
mkdir $MYSQLDIR/$db
chmod -R 777 $MYSQLDIR/$db
for tb in `$MYSQL --user=$USER --password=$PASSWORD -N -B -e "use $db ;show tables"`
do
echo -- Creating table $tb
$MYSQLDUMP --opt --delayed-insert --insert-ignore --user=$USER --password=$PASSWORD $db $tb | bzip2 -c > $MYSQLDIR/$db/$tb.sql.bz2
done
echo
done
Check out SQLDumpSplitter 2; I just used it to split a 40MB dump with success. You can get it at the link below:
sqldumpsplitter.com
Hope this helps.
I've created MySQLDumpSplitter.java which, unlike bash scripts, works on Windows. It's
available here https://github.com/Verace/MySQLDumpSplitter.
A clarification on the answer of @Vérace:
I especially like the interactive method; you can split a large file in Eclipse. I have tried a 105GB file on Windows successfully:
Just add the MySQLDumpSplitter library to your project:
http://dl.bintray.com/verace/MySQLDumpSplitter/jar/
Quick note on how to import:
- In Eclipse, Right click on your project --> Import
- Select "File System" and then "Next"
- Browse the path of the jar file and press "Ok"
- Select (tick) the "MySQLDumpSplitter.jar" file and then "Finish"
- It will be added to your project and shown in the project folder in Package Explorer in Eclipse
- Double click on the jar file in Eclipse (in Package Explorer)
- The "MySQL Dump file splitter" window opens which you can specify the address of your dump file and proceed with split.