Dump a mysql database to a plaintext (CSV) backup from the command line - mysql

I'd like to avoid mysqldump since that outputs in a form that is only convenient for mysql to read. CSV seems more universal (one file per table is fine). But if there are advantages to mysqldump, I'm all ears. Also, I'd like something I can run from the command line (linux). If that's a mysql script, pointers to how to make such a thing would be helpful.

If you can cope with table-at-a-time output, and your data is not binary, use the -B option to the mysql command. With this option it generates TSV (tab-separated) files which can be imported into Excel, etc., quite easily:
% echo 'SELECT * FROM table' | mysql -B -uxxx -pyyy database
Alternatively, if you've got direct access to the server's file system, use SELECT INTO OUTFILE which can generate real CSV files:
SELECT * INTO OUTFILE 'table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table
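To restore such a file later, the matching LOAD DATA INFILE statement would look roughly like this (a sketch, assuming the same delimiters and the same table):
LOAD DATA INFILE 'table.csv'
INTO TABLE table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';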

In MySQL itself, you can specify CSV output like:
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/

You can dump a whole database in one go with mysqldump's --tab option. You supply a directory path and it creates one .sql file per table with the DROP TABLE IF EXISTS / CREATE TABLE statements, and a .txt file with the contents, tab separated. To create comma-separated files instead you could use the following:
mysqldump --password --fields-optionally-enclosed-by='"' --fields-terminated-by=',' --tab /tmp/path_to_dump/ database_name
That path needs to be writable by both the mysql user and the user running the command, so for simplicity I recommend chmod 777 /tmp/path_to_dump/ first.
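To restore from a --tab dump, load the schema first and then the data, roughly like this (a sketch; table_name is a placeholder, and mysqlimport derives the table name from the .txt file name):
mysql database_name < /tmp/path_to_dump/table_name.sql
mysqlimport --fields-optionally-enclosed-by='"' --fields-terminated-by=',' database_name /tmp/path_to_dump/table_name.txt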

The SELECT INTO OUTFILE option wouldn't work for me, but the roundabout way below, piping the tab-delimited output through sed, did:
mysql -uusername -ppassword -e "SELECT * from tablename" dbname | sed 's/\t/","/g;s/^/"/;s/$/"/' > /path/to/file/filename.csv

Here is the simplest command for it:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' | sed 's/\t/,/g' > output.csv
If a column value can contain a comma, generate a .tsv instead of a .csv with the following command:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' > output.tsv

If you really need a "backup", then you also need the database schema: table definitions, view definitions, stored procedures and so on. A backup of a database isn't just the data.
The value of the mysqldump format for backup is specifically that it is very EASY to use it to restore mysql databases. A backup that isn't easily restored is far less useful. If you are looking for a method to reliably back up mysql data so you can restore to a mysql server then I think you should stick with the mysqldump tool.
Mysql is free and runs on many different platforms. Setting up a new mysql server that I can restore to is simple. I am not at all worried about not being able to setup mysql so I can do a restore.
I would be far more worried about a custom backup/restore based on a fragile format like csv/tsv failing. Are you sure that all your quotes, commas, or tabs that are in your data would get escaped correctly and then parsed correctly by your restore tool?
If you are looking for a method to extract the data then see several in the other answers.
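For reference, the mysqldump round trip this answer advocates is just two commands (--routines also includes stored procedures and functions):
mysqldump -u user -p --routines dbname > backup.sql
mysql -u user -p dbname < backup.sql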

You can use the script below to get the output into CSV files, one file per table, with headers.
for tn in `mysql --batch --skip-pager --skip-column-names --raw -uuser -ppassword -e"show tables from mydb"`
do
    mysql -uuser -ppassword mydb -B -e "select * from \`$tn\`;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > "$tn.csv"
done
Here user is your user name, password is the password (so you don't have to keep typing it for each table), and mydb is the database name.
Explanation of the script: the first sed expression replaces the tabs with "," so you get fields enclosed in double quotes and separated by commas; the second inserts a double quote at the beginning of the line and the third at the end; the final one removes any stray \n. For example, a row like 1<TAB>foo comes out as "1","foo".

If you want to dump the entire db as TSV, one file per table:
#!/bin/bash
host=hostname
uname=username
pass=password
port=portnr
db=db_name
s3_url=s3://buckera/db_dump/
DATE=`date +%Y%m%d`
rm -rf $DATE
echo 'show tables' | mysql -B -h${host} -u${uname} -p${pass} -P${port} ${db} > tables.txt
awk 'NR>1' tables.txt > tables_new.txt
while IFS= read -r line
do
mkdir -p $DATE/$line
echo "select * from $line" | mysql -B -h"${host}" -u"${uname}" -p"${pass}" -P"${port}" "${db}" > $DATE/$line/dump.tsv
done < tables_new.txt
touch $DATE/$DATE.fin
rm -rf tables_new.txt tables.txt
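Note that the script defines s3_url but never uses it; presumably the dump directory is uploaded afterwards, for example with the AWS CLI (an assumption on my part, not part of the original script):
aws s3 cp --recursive "$DATE/" "${s3_url}${DATE}/"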

Check out mk-parallel-dump which is part of the ever-useful maatkit suite of tools. This can dump comma-separated files with the --csv option.
This can do your whole db without specifying individual tables, and you can specify groups of tables in a backupset table.
Note that it also dumps table definitions, views and triggers into separate files. In addition to providing a complete backup in a more universally accessible form, it is also immediately restorable with mk-parallel-restore.

Two line PowerShell answer:
# Store in variable
$Global:csv = (mysql -uroot -p -hlocalhost -Ddatabase_name -B -e "SELECT * FROM some_table") `
| ConvertFrom-Csv -Delimiter "`t"
# Out to csv
$Global:csv | Export-Csv "C:\temp\file.csv" -NoTypeInformation
Boom-bata-boom
-D = the name of your database
-e = query
-B = tab-delimited

There's a slightly simpler way to get all the tables into tab-delimited files fast:
#!/bin/bash
tablenames=$(mysql your_database -e "show tables;" -B |sed "1d")
IFS=$'\n'
tables=($tablenames)
for table in "${tables[@]}"; do
mysql your_database -e "select * from ${table}" -B > "${table}.tsv"
done

Here's a basic python script that does the work! You can choose to export only the headers (column names) or both headers and data.
Just change the database credentials and run the script. It will output all the data to the output folder.
To run the script -
Run: pip install mysql-connector-python
Change database credentials in the "INPUT" section
Run: python filename.py
import mysql.connector
from pathlib import Path
import csv

#========INPUT===========
databaseHost=""
databaseUsername=""
databasePassword=""
databaseName=""
outputDirectory="./WITH-DATA/"
exportTableData=True  # making this False stores only the table headers (column names) in the CSV file
#========INPUT END===========

Path(outputDirectory).mkdir(parents=True, exist_ok=True)

mydb = mysql.connector.connect(
    host=databaseHost,
    user=databaseUsername,
    password=databasePassword
)
mycursor = mydb.cursor()
mycursor.execute("USE "+databaseName)
mycursor.execute("SHOW TABLES")
tables = mycursor.fetchall()
tableNames = [table[0] for table in tables]

print("================================")
print("Total number of tables: "+ str(len(tableNames)))
print(tableNames)
print("================================")

for tableName in tableNames:
    print("================================")
    print("Processing: "+ str(tableName))
    mydb = mysql.connector.connect(
        host=databaseHost,
        user=databaseUsername,
        password=databasePassword
    )
    mycursor = mydb.cursor()
    mycursor.execute("USE "+databaseName)
    if exportTableData:
        mycursor.execute("SELECT * FROM "+tableName)
    else:
        mycursor.execute("SELECT * FROM "+tableName+" LIMIT 1")
    print(mycursor.column_names)
    with open(outputDirectory+tableName+".csv", 'w', newline='') as csvfile:
        csvwriter = csv.writer(csvfile)
        csvwriter.writerow(mycursor.column_names)
        if exportTableData:
            myresult = mycursor.fetchall()
            csvwriter.writerows(myresult)


Exporting data from mysql RDS for import into questDb

I've got a big table (~500m rows) in mysql RDS and I need to export specific columns from it to csv, to enable import into questDb.
Normally I'd use into outfile but this isn't supported on RDS as there is no access to the file system.
I've tried using workbench to do the export but due to size of the table, I keep getting out-of-memory issues.
Finally figured it out with help from this: Exporting a table from Amazon RDS into a CSV file
This solution works well as long as you have a sequential column of some kind, e.g. an auto incrementing integer PK or a date column. Make sure you have your date column indexed if you have a lot of data!
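For example (a hypothetical table and date column; adjust to your schema):
CREATE INDEX idx_mytable_datecol ON mytable (datecol);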
#!/bin/bash
# Maximum number of rows to export/total rows in table, set a bit higher if live data being written
MAX=500000000
# Size of each export batch
STEP=1000000
mkdir -p parts
for (( c=0; c<=$MAX; c=c+$STEP ))
do
    mysql --port 3306 --protocol=TCP -h <rdshostname> -u <username> -p<password> --quick --database=<db> -e "select column1, column2, column3 from <table> order by <timestamp> ASC limit $STEP offset $c" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > export$c.csv
    # split down into chunks under questdb's 65k line limit
    split -d -l 64999 --additional-suffix=.csv export$c.csv ./parts/export$c
done
# print out import statements to a file
for i in $(ls -v ./parts); do echo "COPY reading from '$i';" >> import.sql; done;
A slightly different approach, which may be faster depending on the indexing you have in place, is to step through the data month by month:
#!/bin/bash
START_YEAR=2020
END_YEAR=2022
mkdir -p parts
for (( YEAR=$START_YEAR; YEAR<=$END_YEAR; YEAR++ ))
do
for (( MONTH=1; MONTH<=12; MONTH++ ))
do
NEXT_MONTH=1
let NEXT_YEAR=$YEAR+1
if [ $MONTH -lt 12 ]
then
let NEXT_MONTH=$MONTH+1
NEXT_YEAR=$YEAR
fi
FILE_NAME="export-$YEAR-$MONTH-to-$NEXT_YEAR-$NEXT_MONTH"
mysql --port 3306 --protocol=TCP -h <rdshost> -u app -p<password> --quick --database=<database> -e "select <column1>, <column2>, round(UNIX_TIMESTAMP(<dateColumn>)) * 1000000 as date from <table> where <table>.<dateColumn> >= '$YEAR-$MONTH-01 00:00:00' and <table>.<dateColumn> < '$NEXT_YEAR-$NEXT_MONTH-01 00:00:00' order by <table>.<dateColumn> ASC" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $FILE_NAME.csv
# split down into chunks under questdb's 65k line limit
split -d -l 64999 --additional-suffix=.csv $FILE_NAME.csv ./parts/$FILE_NAME
done
done
# print out import statements to a file
for i in $(ls -v ./parts); do echo "COPY reading from '$i';" >> import.sql; done;
The above scripts will output an import.sql containing all the SQL statements you need to import your data. See: https://questdb.io/docs/guides/importing-data/
Edit: this solution would work only if exporting the whole table, not when exporting specific columns
You could try using mysqldump with extra params for CSV conversion. AWS documents how to use mysqldump with RDS and you can see at this stackoverflow question how to use extra params to convert into CSV.
I am quoting here the relevant part from that last link (since there are a lot of answers and comments)
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
You can use the SELECT ... INTO OUTFILE syntax to export the data to a file on the server.
You can then use the mysql command line client to connect to the RDS instance and retrieve the file from the server.
The only slight snag is that mysql won't connect to the RDS instance unless the instance is in a VPC, so if it isn't you'll need to connect to a bastion host first, then connect to the RDS instance from there.
SELECT * FROM mydb.mytable INTO OUTFILE '/tmp/mytable.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';
You can then get the file from the server:
mysql -uusername -p -hmyrds.rds.amazonaws.com -P3306
When you have a prompt from the mysql command line client you can retrieve the file using the SELECT command:
SELECT LOAD_FILE('/tmp/mytable.csv');
You can then pipe the output to a file using:
SELECT LOAD_FILE('/tmp/mytable.csv') INTO OUTFILE '/tmp/mytable_out.csv';
You can then use the mysql command line client to connect to your questDB instance and load the data.
If you want to retrieve a specific column then you can specify the column name in the SELECT command when creating the file on the RDS server:
SELECT column1, column2, column3 FROM mydb.mytable INTO OUTFILE '/tmp/mytable.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';

Export as csv in beeline hive

I am trying to export my hive table as a csv in beeline hive. When I run the command !sql select * from database1 > /user/bob/output.csv it gives me a syntax error.
I have successfully connected to the database at this point using the below command. The query outputs the correct results on console.
beeline -u 'jdbc:hive2://[databaseaddress]' --outputformat=csv
Also, it's not very clear where the file ends up. It should be a file path in HDFS, correct?
If your hive version is at least 0.11.0, you can execute:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/directoryWhereToStoreData'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY "\n"
SELECT * FROM yourTable;
from hive/beeline to store the table into a directory on the local filesystem.
Alternatively, with beeline, save your SELECT query in yourSQLFile.sql and run:
beeline -u 'jdbc:hive2://[databaseaddress]' --outputformat=csv2 -f yourSQlFile.sql > theFileWhereToStoreTheData.csv
Also this will store the result into a file in the local file system.
From hive, to store the data somewhere into HDFS:
CREATE EXTERNAL TABLE output
LIKE yourTable
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 'hdfs://WhereDoYou/Like';
INSERT OVERWRITE TABLE output SELECT * from yourTable;
then you can collect the data to a local file using:
hdfs dfs -getmerge /WhereDoYou/Like output.csv
This is another option to get the data using beeline only:
env HADOOP_CLIENT_OPTS="-Ddisable.quoting.for.sv=false" beeline -u "jdbc:hive2://your.hive.server.address:10000/" --incremental=true --outputformat=csv2 -e "select * from youdatabase.yourtable"
Working on:
Connected to: Apache Hive (version 1.1.0-cdh5.10.1)
Driver: Hive JDBC (version 1.1.0-cdh5.10.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.10.1 by Apache Hive
You can use this command to save output in CSV format from beeline:
beeline -u 'jdbc:hive2://bigdataplatform-dev.nam.nsroot.net:10000/;principal=hive/bigdataplatform-dev.net@NAMUXDEV.NET;ssl=true' --outputformat=csv2 --verbose=false --fastConnect=true --silent=true -f $query_file>out.csv
Save your SQL query file into $query_file.
Result will be in out.csv.
I have a complete example here: hivehoney
The following worked for me:
hive --silent=true --verbose=false --outputformat=csv2 -e "use <db_name>; select * from <table_name>" > table_name.csv
One advantage over using beeline is that you don't have to provide a hostname or user/password if you are running on the hive node.
When some of the columns have string values containing commas, TSV (tab-separated) works better:
hive --silent=true --verbose=false --outputformat=tsv -e "use <db_name>; select * from <table_name>" > table_name.tsv
Output format in CSV:
$ beeline -u jdbc:hive2://192.168.0.41:10000/test_db -n user1 -p password --outputformat=csv2 -e "select * from t1"
Output format with a custom delimiter:
$ beeline -u jdbc:hive2://192.168.0.41:10000/test_db -n user1 -p password --outputformat=dsv --delimiterForDSV='|' -e "select * from t1"
Running the command in the background and redirecting output to a file:
$ nohup beeline -u jdbc:hive2://192.168.0.41:10000/test_db -n user1 -p password --outputformat=csv2 -e "select * from t1" > output.csv 2> log &
Reference URLs:
https://dwgeek.com/export-hive-table-into-csv-format-using-beeline-client-example.html/
https://dwgeek.com/hiveserver2-beeline-command-line-shell-options-examples.html/
From Beeline
beeline -u 'jdbc:hive2://123.12.4132:345/database_name' --outputformat=csv2 -e "select col1, col2, col3 from table_name" > /path/to/dump.csv

How to ignore certain MySQL tables when importing a database?

I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport to import that database, however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables, but ignore some of them? Thank you.
The accepted answer by RandomSeed could take a long time! Importing the table (just to drop it later) could be very wasteful depending on size.
For a file created using
mysqldump -u user -ppasswd --opt --routines DBname > DBdump.sql
I currently get a file about 7GB, 6GB of which is data for a log table that I don't 'need' to be there; reloading this file takes a couple of hours. If I need to reload (for development purposes, or if ever required for a live recovery) I skim the file thus:
sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql
And reload with:
mysql -u user -ppasswd DBname < reduced.sql
This gives me a complete database, with the "unwanted" table created but empty. If you really don't want the tables at all, simply drop the empty tables after the load finishes.
For multiple tables you could do something like this:
sed '/INSERT INTO `TABLE1_TO_SKIP`/d' DBdump.sql | \
sed '/INSERT INTO `TABLE2_TO_SKIP`/d' | \
sed '/INSERT INTO `TABLE3_TO_SKIP`/d' > reduced.sql
There IS a 'gotcha' - watch out for procedures in your dump that might contain "INSERT INTO TABLE_TO_SKIP".
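A quick way to check for leftovers before reloading is to confirm the reduced file no longer references the table (the backticks must be escaped inside double quotes):
grep -c "INSERT INTO \`TABLE_TO_SKIP\`" reduced.sql   # should print 0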
mysqlimport is not the right tool for importing SQL statements. This tool is meant to import formatted text files such as CSV. What you want to do is feed your sql dump directly to the mysql client with a command like this one:
bash > mysql -D your_database < your_sql_dump.sql
Neither mysql nor mysqlimport provide the feature you need. Your best chance would be importing the whole dump, then dropping the tables you do not want.
If you have access to the server where the dump comes from, then you could create a new dump with mysqldump --ignore-table=database.table_you_dont_want1 --ignore-table=database.table_you_dont_want2 ....
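Spelled out in full, that looks something like this (placeholder names):
mysqldump -u user -p database \
  --ignore-table=database.table_you_dont_want1 \
  --ignore-table=database.table_you_dont_want2 > dump.sql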
Check out this answer for a workaround to skip importing some tables.
For anyone working with .sql.gz files; I found the following solution to be very useful. Our database was 25GB+ and I had to remove the log tables.
gzip -cd "./mydb.sql.gz" | sed -r '/INSERT INTO `(log_table_1|log_table_2|log_table_3|log_table_4)`/d' | gzip > "./mydb2.sql.gz"
Thanks to the answer of Don and comment of Xosofox and this related post:
Use zcat and sed or awk to edit compressed .gz text file
A little old, but this might still come in handy...
I liked @Don's answer (https://stackoverflow.com/a/26379517/1446005) but found it very annoying that you'd have to write to another file first...
In my particular case this would take too much time and disk space.
So I wrote a little bash script:
#!/bin/bash
tables=(table1_to_skip table2_to_skip ... tableN_to_skip)
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}
grepExp="INSERT INTO \`($trimmed)\`"
zcat $1 | grep -vE "$grepExp" | mysql -uroot -p
This does not generate a new sql script but pipes it directly to the database.
Also, it does create the tables, it just doesn't import the data (which was the problem I had with huge log tables).
Unless you have ignored the tables during the dump with mysqldump --ignore-table=database.unwanted_table, you have to use some script or tool to filter out the data you don't want to import from the dump file before passing it to mysql client.
Here is a bash/sh function that would exclude the unwanted tables from a SQL dump on the fly (through pipe):
# Accepts one argument, the list of tables to exclude (case-insensitive).
# Eg. filt_exclude '%session% action_log %_cache'
filt_exclude() {
local excl_tns;
if [ -n "$1" ]; then
# trim & replace /[,;\s]+/ with '|' & replace '%' with '[^`]*'
excl_tns=$(echo "$1" | sed -r 's/^[[:space:]]*//g; s/[[:space:]]*$//g; s/[[:space:]]+/|/g; s/[,;]+/|/g; s/%/[^\`]\*/g');
grep -viE "(^INSERT INTO \`($excl_tns)\`)|(^DROP TABLE (IF EXISTS )?\`($excl_tns)\`)|^LOCK TABLES \`($excl_tns)\` WRITE" | \
sed 's/^CREATE TABLE `/CREATE TABLE IF NOT EXISTS `/g'
else
cat
fi
}
Suppose you have a dump created like so:
MYSQL_PWD="my-pass" mysqldump -u user --hex-blob db_name | \
pigz -9 > dump.sql.gz
And want to exclude some unwanted tables before importing:
pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
MYSQL_PWD="my-pass" mysql -u user db_name
Or you could pipe into a file or any other tool before importing to DB.
If desired, you can do this one table at a time:
mysqldump -p sourceDatabase tableName > tableName.sql
mysql -p -D targetDatabase < tableName.sql
Here is my script to exclude some tables from a mysql dump.
I use it to restore a DB when I need to keep orders and payments data:
exclude_tables_from_dump.sh
#!/bin/bash
if [ ! -f "$1" ];
then
echo "Usage: $0 mysql_dump.sql"
exit
fi
declare -a TABLES=(
user
order
order_product
order_status
payments
)
CMD="cat $1"
for TBL in "${TABLES[@]}"; do
CMD+="|sed 's/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXIST \`${TBL}\`/g'"
CMD+="|sed 's/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g'"
CMD+="|sed -r '/INSERT INTO \`${TBL}\`/d'"
CMD+="|sed '/DELIMITER\ \;\;/,/DELIMITER\ \;/d'"
done
eval $CMD
It avoids DROP and re-CREATE of those tables and inserting data into them.
It also strips all FUNCTIONS and PROCEDURES stored between DELIMITER ;; and DELIMITER ;.
I would not use it in production, but if I had to quickly import a backup that contains many smaller tables and one big monster table that might take hours to import, I would most probably use grep -v unwanted_table_name original.sql > reduced.sql
and then mysql -f < reduced.sql

Dump all tables in CSV format using 'mysqldump'

I need to dump all tables in MySQL in CSV format.
Is there a command using mysqldump to just output every row for every table in CSV format?
First, I can give you the answer for one table:
The trouble with all these INTO OUTFILE or --tab=tmpfile (and -T/path/to/directory) answers is that they require running mysqldump on the same server as the MySQL server, and having those access rights.
My solution was simply to use mysql (not mysqldump) with the -B parameter, inline the SELECT statement with -e, then massage the ASCII output with sed, and wind up with CSV including a header row:
Example:
mysql -B -u username -ppassword database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
"id","login","password","folder","email"
"8","mariana","xxxxxxxxxx","mariana",""
"3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki#squaredesign.com"
"4","miedziak","xxxxxxxxxx","miedziak","miedziak#mail.com"
"5","Sarko","xxxxxxxxx","Sarko",""
"6","Logitrans
Poland","xxxxxxxxxxxxxx","LogitransPoland",""
"7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
"9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
"11","Brandfathers and
Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
"12","Imagine
Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
"13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
"101","tmp","xxxxxxxxxxxxxxxxxxxxx","_","WOBC-14.squaredesign.atlassian.net#yoMama.com"
Add a > outfile.csv at the end of that one-liner, to get your CSV file for that table.
Next, get a list of all your tables with
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
Between the do and ; done, insert the long command I wrote in Part 1 above, substituting $tb for the table name; the full loop is sketched below.
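Put together, the whole thing might look like this (a sketch; substitute your own credentials, host and database, and note the simplified sed compared to Part 1):
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    mysql -B -u username -ppassword dbname -h dbhost -e "SELECT * FROM \`$tb\`;" \
      | sed "s/\"/\"\"/g;s/\t/\",\"/g;s/^/\"/;s/$/\"/" > "$tb.csv"
done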
This command will create two files in /path/to/directory: table_name.sql and table_name.txt.
The .sql file will contain the table creation schema and the .txt file will contain the records of the table, with fields delimited by commas.
mysqldump -u username -p -t -T/path/to/directory dbname table_name --fields-terminated-by=','
If you are using MySQL or MariaDB, the easiest and most performant way to dump CSV for a single table is:
SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;
Now you can use other techniques to repeat this command for multiple tables. See more details here:
https://mariadb.com/kb/en/the-mariadb-library/select-into-outfile/
https://dev.mysql.com/doc/refman/5.7/en/select-into.html
mysqldump has options for CSV formatting:
--fields-terminated-by=name
    Fields in the output file are terminated by the given string, e.g. '\t' or ','.
--fields-enclosed-by=name
    Fields in the output file are enclosed by the given character, e.g. '"'.
--lines-terminated-by=name
    Lines in the output file are terminated by the given string, e.g. '\r', '\n', or '\r\n'.
Naturally you should mysqldump each table individually.
I suggest you gather all table names in a text file. Then, iterate through all tables running mysqldump. Here is a script that will dump and gzip 10 tables at a time:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT CONCAT(table_schema,'.',table_name)"
SQLSTMT="${SQLSTMT} FROM information_schema.tables WHERE table_schema NOT IN "
SQLSTMT="${SQLSTMT} ('information_schema','performance_schema','mysql')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/DBTB.txt
COMMIT_COUNT=0
COMMIT_LIMIT=10
TARGET_FOLDER=/path/to/csv/files
for DBTB in `cat /tmp/DBTB.txt`
do
    DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
    TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
    DUMPFILE=${DB}-${TB}.csv.gz
    mysqldump ${MYSQL_CONN} -T ${TARGET_FOLDER} --fields-terminated-by="," --fields-enclosed-by="\"" --lines-terminated-by="\r\n" ${DB} ${TB} | gzip > ${DUMPFILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
This worked well for me:
mysqldump <DBNAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
Or if you want to only dump a specific table:
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
I'm dumping to /var/lib/mysql-files/ to avoid this error:
mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
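You can check which directory (if any) the server allows for import/export with:
mysql> SHOW VARIABLES LIKE 'secure_file_priv';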
It looks like others have had this problem too, and there is now a simple Python script for converting mysqldump output into CSV files.
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
You can also do it using the Data Export tool in dbForge Studio for MySQL.
It will allow you to select some or all tables and export them into CSV format.

How do I store a MySQL query result into a local CSV file?

How do I store a MySQL query result into a local CSV file? I don't have access to the remote machine.
You're going to have to use the command line execution '-e' flag.
$> /usr/local/mysql/bin/mysql -u user -p -h remote.example.com -e "select t1.a,t1.b from db_schema.table1 t1 limit 10;" > hello.txt
Will generate a local hello.txt file in your current working directory with the output from the query.
Use the concat function of mysql
mysql -u <username> -p -h <hostname> -e "select concat(userid,',',surname,',',firstname,',',midname) as user from dbname.tablename;" > user.csv
You can delete the first line which contains the column name "user".
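Or strip it non-interactively, for example:
tail -n +2 user.csv > user_noheader.csv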
We can use the command line execution '-e' flag and a simple Python script to generate the result in CSV format.
Create a python file (let's name it tab2csv) and write the following code in it:
#!/usr/bin/env python
import csv
import sys
tab_in = csv.reader(sys.stdin, dialect=csv.excel_tab)
comma_out = csv.writer(sys.stdout, dialect=csv.excel)
for row in tab_in:
    comma_out.writerow(row)
Run the following command, updating the mysql credentials accordingly:
mysql -u orch -p -h database_ip -e "select * from database_name.table_name limit 10;" | python tab2csv > outfile.csv
Result will be stored in outfile.csv.
I haven't had a chance to test it against content with difficult characters yet, but the fantastic mycli may be a solution for many.
Command line
mycli --csv -e "select * from table;" mysql://user@host:port/db > file.csv
Interactive mode:
\n \T csv ; \o ~/file.csv ; select * from table1; \P
\n disables pager that requires pressing space to display each page
\T csv ; - sets the output format to csv
\o <filename> ; - appends next output to a file
<query> ;
\P turns the pager back on
I was facing this problem too, and I spent some time reading about solutions: importing into Excel, importing into Access, saving as a text file...
I think the best solution on Windows is the following:
use the command insert...select to create a "result" table. The ideal scenario would be to automatically create the fields of this result table, but this is not possible in mysql
create an ODBC connection to the database
use Access or Excel to extract the data and then save or process it the way you want
For Unix/Linux I think the best solution might be using the -e option tmarthal mentioned before and processing the output through a processor like awk to get a proper format (CSV, XML, whatever), as sketched below.
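For instance, a minimal awk pass that turns the tab-separated -e output into comma-separated values (a sketch with no quoting or escaping, so only safe for data without embedded commas or tabs):
mysql -u user -p -h remote.example.com -e "select t1.a,t1.b from db_schema.table1 t1;" \
  | awk 'BEGIN { FS="\t"; OFS="," } { $1=$1; print }' > out.csv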
Run the MySQL query to generate the CSV from your app, like below:
SELECT order_id,product_name,qty FROM orders INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
It will create the csv file in the /tmp folder on the database server.
Then you can add logic to send the file through headers.
Make sure the database user has permission to write to the server's file system.
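Writing with INTO OUTFILE also requires the FILE privilege on the server, which can be granted like so (assuming you have admin rights):
GRANT FILE ON *.* TO 'user'@'host';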
You can try doing:
SELECT a,b,c
FROM table_name
INTO OUTFILE '/tmp/file.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
We use the OUTFILE clause here to store the output into a CSV file. We enclose the fields in double quotes to handle field values that contain commas, separate the fields with commas, and separate individual lines with newlines.