Easily import a MySQL --tab dump - mysql

I have dumped a MySQL database with the --tab option, which creates two files per table (a .sql file with the CREATE TABLE statement and a .txt tab-separated-values file with the data).
Is there an easy way to import this directory of files back into a MySQL server? I can't find the option in mysqlimport.

for sql_file in *.sql; do
    table_name=${sql_file%.sql}
    # create the table from the dumped structure file
    mysql -u root database_name < "$sql_file"
    # then load the matching tab-separated data file into it
    echo "LOAD DATA LOCAL INFILE '$table_name.txt' INTO TABLE $table_name" | mysql -u root database_name
done

You can do this several ways. The most direct would be
mysql db < sql_structure_file
This creates the tables. Then do (from the mysql client)
LOAD DATA LOCAL INFILE 'tab_delimited_file' INTO TABLE table_name;
(with appropriate names, delimiters, etc.)
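For instance, a concrete two-step restore for a single table might look like this (a sketch: the users table, database name, and delimiters are assumptions, not taken from the answer above):
# create the table from the dumped structure file
mysql mydb < users.sql
# load the matching data file; --local-infile=1 enables LOAD DATA LOCAL
mysql --local-infile=1 mydb -e "LOAD DATA LOCAL INFILE 'users.txt' INTO TABLE users FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';"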

I'm using this bash script which first imports all the sql files to build the tables, then the txt files. The data is loaded using background processes in parallel — basically emulating the multi-thread option in mysqlimport. Usage is like this:
./import_table.sh database_name /path/to/dump/files
SCRIPT:
#!/bin/bash
DIR=$(echo "$2" | sed 's/\/$//')

function import_sql() {
    mysql "$1" < "$2"
    echo "mysql $1 < '$2'"
}

function import_txt() {
    mysqlimport --silent "$1" "$2"
    echo "mysqlimport --silent $1 '$2'"
}

for filename in "$DIR"/*.sql; do
    [ -e "$filename" ] || continue
    import_sql "$1" "$filename" &
done
wait
echo 'ALL SQL IMPORTED'

for filename in "$DIR"/*.txt; do
    [ -e "$filename" ] || continue
    import_txt "$1" "$filename" &
done
wait
echo 'ALL TXT IMPORTED'
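For reference, mysqlimport itself can parallelize the data loads via its --use-threads option, so a hedged alternative to the background-process trick (the thread count and paths are placeholders) is:
# mysqlimport derives the target table from each file's base name
mysqlimport --local --use-threads=4 database_name /path/to/dump/files/*.txt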

How can I output MySQL query results in CSV format?

Is there an easy way to run a MySQL query from the Linux command line and output the results in CSV format?
Here's what I'm doing now:
mysql -u uid -ppwd -D dbname << EOQ | sed -e 's/\t/,/g' | tee list.csv
select id, concat("\"",name,"\"") as name
from students
EOQ
It gets messy when there are a lot of columns that need to be surrounded by quotes, or if there are quotes in the results that need to be escaped.
From Save MySQL query results into a text or CSV file:
SELECT order_id,product_name,qty
FROM orders
WHERE foo = 'bar'
INTO OUTFILE '/var/lib/mysql-files/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
Note: That syntax may need to be reordered to
SELECT order_id,product_name,qty
INTO OUTFILE '/var/lib/mysql-files/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM orders
WHERE foo = 'bar';
in more recent versions of MySQL.
Using this command, column names will not be exported.
Also note that /var/lib/mysql-files/orders.csv will be on the server that is running MySQL. The user that the MySQL process is running under must have permissions to write to the directory chosen, or the command will fail.
If you want to write output to your local machine from a remote server (especially a hosted or virtualized machine such as Heroku or Amazon RDS), this solution is not suitable.
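If you do need a header row, a common workaround (a sketch, not part of the answer above; table and column names are placeholders) is to UNION the literal column names onto the data:
mysql mydb <<'EOF'
SELECT 'order_id', 'product_name', 'qty'
UNION ALL
SELECT order_id, product_name, qty FROM orders
INTO OUTFILE '/var/lib/mysql-files/orders_with_header.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
EOF
Without an ORDER BY, the header row is not strictly guaranteed to come first, though in practice it does.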
mysql your_database --password=foo < my_requests.sql > out.tsv
This produces a tab-separated format. If you are certain that commas do not appear in any of the column data (and neither do tabs), you can use this pipe command to get a true CSV (thanks to user John Carter):
... .sql | sed 's/\t/,/g' > out.csv
mysql --batch, -B
Print results using tab as the column separator, with each row on a
new line. With this option, mysql does not use the history file.
Batch mode results in non-tabular output format and escaping of
special characters. Escaping may be disabled by using raw mode; see
the description for the --raw option.
This will give you a tab-separated file. Since commas (or strings containing commas) are not escaped, it is not straightforward to change the delimiter to a comma.
Here's a fairly gnarly way of doing it[1]:
mysql --user=wibble --password mydatabasename -B -e "select * from vehicle_categories;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > vehicle_categories.csv
It works pretty well. Once again, though, a regular expression proves write-only.
Regex Explanation:
s/// means substitute what's between the first // with what's between the second //
the "g" at the end is a modifier that means "all instances, not just the first"
^ (in this context) means beginning of line
$ (in this context) means end of line
So, putting it all together:
s/'/\'/ Replace ' with \'
s/\t/\",\"/g Replace all \t (tab) with ","
s/^/\"/ at the beginning of the line place a "
s/$/\"/ At the end of the line, place a "
s/\n//g Replace all \n (newline) with nothing
[1] I found it somewhere and can't take any credit.
Pipe it through 'tr' (Unix/Cygwin only):
mysql <database> -e "<query here>" | tr '\t' ',' > data.csv
N.B.: This handles neither embedded commas, nor embedded tabs.
This saved me a couple of times. It is fast and it works!
--batch
Print results using tab as the column separator, with each row on a
new line.
--raw disables character escaping (\n, \t, \0, and \)
Example:
mysql -udemo_user -p -h127.0.0.1 --port=3306 \
--default-character-set=utf8mb4 --database=demo_database \
--batch --raw < /tmp/demo_sql_query.sql > /tmp/demo_csv_export.tsv
For completeness you could convert to CSV (but be careful because tabs could be inside field values - e.g., text fields)
tr '\t' ',' < file.tsv > file.csv
The OUTFILE solution given by Paul Tomblin causes a file to be written on the MySQL server itself, so this will work only if you have FILE access, as well as login access or other means for retrieving the file from that box.
If you don't have such access, and tab-delimited output is a reasonable substitute for CSV (e.g., if your end goal is to import to Excel), then serbaut's solution (using mysql --batch and optionally --raw) is the way to go.
MySQL Workbench can export recordsets to CSV, and it seems to handle commas in fields very well. The CSV opens up in OpenOffice Calc fine.
All of the solutions here to date, except the MySQL Workbench one, are incorrect and quite possibly unsafe (i.e., security issues) for at least some possible content in the MySQL database.
MySQL Workbench (and similarly phpMyAdmin) provide a formally correct solution, but they are designed for downloading the output to a user's location. They're not so useful for things like automating data export.
It is not possible to generate reliably correct CSV content from the output of mysql -B -e 'SELECT ...' because that cannot encode carriage returns and white space in fields. The '-s' flag to mysql does do backslash escaping, and might lead to a correct solution. However, using a scripting language (one with decent internal data structures that is, not Bash), and libraries where the encoding issues have already been carefully worked out is far safer.
I thought about writing a script for this, but as soon as I thought about what I'd call it, it occurred to me to search for preexisting work by the same name. While I haven't gone over it thoroughly, mysql2csv looks promising. Depending on your application, the YAML approach to specifying the SQL commands might or might not appeal though. I'm also not thrilled with the requirement for a more recent version of Ruby than comes as standard with my Ubuntu 12.04 (Precise Pangolin) laptop or Debian 6.0 (Squeeze) servers. Yes, I know I could use RVM, but I'd rather not maintain that for such a simple purpose.
Use:
mysql your_database -p < my_requests.sql | awk '{print $1","$2}' > out.csv
Many of the answers on this page are weak, because they don't handle the general case of what can occur in CSV format. E.g., commas and quotes embedded in fields and other conditions that always come up eventually. We need a general solution that works for all valid CSV input data.
Here's a simple and strong solution in Python:
#!/usr/bin/env python
import csv
import sys

tab_in = csv.reader(sys.stdin, dialect=csv.excel_tab)
comma_out = csv.writer(sys.stdout, dialect=csv.excel)
for row in tab_in:
    comma_out.writerow(row)
Name that file tab2csv, put it on your path, give it execute permissions, then use it like this:
mysql OTHER_OPTIONS --batch --execute='select * from whatever;' | tab2csv > outfile.csv
The Python CSV-handling functions cover corner cases for CSV input format(s).
This could be improved to handle very large files via a streaming approach.
From your command line, you can do this:
mysql -h *hostname* -P *port number* --database=*database_name* -u *username* -p -e *your SQL query* | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > *output_file_name.csv*
Credits: Exporting table from Amazon RDS into a CSV file
This answer uses Python and a popular third party library, PyMySQL. I'm adding it because Python's csv library is powerful enough to correctly handle many different flavors of .csv and no other answers are using Python code to interact with the database.
import contextlib
import csv
import datetime
import os

# https://github.com/PyMySQL/PyMySQL
import pymysql

SQL_QUERY = """
SELECT * FROM my_table WHERE my_attribute = 'my_attribute';
"""

# embedding passwords in code gets nasty when you use version control
# the environment is not much better, but this is an example
# https://stackoverflow.com/questions/12461484
SQL_USER = os.environ['SQL_USER']
SQL_PASS = os.environ['SQL_PASS']

connection = pymysql.connect(host='localhost',
                             user=SQL_USER,
                             password=SQL_PASS,
                             db='dbname')

with contextlib.closing(connection):
    with connection.cursor() as cursor:
        cursor.execute(SQL_QUERY)
        # Hope you have enough memory :)
        results = cursor.fetchall()

output_file = 'my_query-{}.csv'.format(datetime.datetime.today().strftime('%Y-%m-%d'))
with open(output_file, 'w', newline='') as csvfile:
    # http://stackoverflow.com/a/17725590/2958070 about lineterminator
    csv_writer = csv.writer(csvfile, lineterminator='\n')
    csv_writer.writerows(results)
I encountered the same problem, and Paul's answer wasn't an option since it was Amazon RDS. Replacing the tab with commas did not work, as the data had embedded commas and tabs. I found that mycli, which is a drop-in alternative for the mysql client, supports CSV output out of the box with the --csv flag:
mycli db_name --csv -e "select * from flowers" > flowers.csv
This is simple, and it works on anything without needing batch mode or output files:
select concat_ws(',',
    concat('"', replace(field1, '"', '""'), '"'),
    concat('"', replace(field2, '"', '""'), '"'),
    concat('"', replace(field3, '"', '""'), '"'))
from your_table where etc;
Explanation:
Replace " with "" in each field --> replace(field1, '"', '""')
Surround each result in quotation marks --> concat('"', result1, '"')
Place a comma between each quoted result --> concat_ws(',', quoted1, quoted2, ...)
That's it!
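To capture that output from the shell, a sketch (same hypothetical field and table names as above; -N suppresses the header row and --raw avoids tab/newline escaping):
mysql -B -N --raw mydb -e "select concat_ws(',',
    concat('\"', replace(field1, '\"', '\"\"'), '\"'),
    concat('\"', replace(field2, '\"', '\"\"'), '\"'))
from your_table;" > out.csv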
Also, if you're performing the query on the Bash command line, I believe the tr command can be used to substitute the default tabs to arbitrary delimiters.
$ echo "SELECT * FROM Table123" | mysql Database456 | tr "\t" ,
You can have a MySQL table that uses the CSV engine.
Then you will have a file on your hard disk that will always be in a CSV format which you could just copy without processing it.
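A sketch of that approach (table and column names are assumptions; note the CSV storage engine requires every column to be NOT NULL and supports no indexes):
mysql mydb <<'EOF'
CREATE TABLE orders_csv (
    order_id INT NOT NULL,
    product_name VARCHAR(100) NOT NULL,
    qty INT NOT NULL
) ENGINE=CSV;
INSERT INTO orders_csv SELECT order_id, product_name, qty FROM orders;
EOF
# the backing file is already plain CSV and can be copied as-is, e.g.
# cp /var/lib/mysql/mydb/orders_csv.CSV /tmp/orders.csv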
To expand on previous answers, the following one-liner exports a single table as a tab-separated file. It's suitable for automation, exporting the database every day or so.
mysql -B -D mydatabase -e 'select * from mytable'
Conveniently, we can use the same technique to list out MySQL's tables, and to describe the fields on a single table:
mysql -B -D mydatabase -e 'show tables'
mysql -B -D mydatabase -e 'desc users'
Field      Type          Null  Key  Default  Extra
id         int(11)       NO    PRI  NULL     auto_increment
email      varchar(128)  NO    UNI  NULL
lastName   varchar(100)  YES        NULL
title      varchar(128)  YES   UNI  NULL
userName   varchar(128)  YES   UNI  NULL
firstName  varchar(100)  YES        NULL
Here's what I do:
echo $QUERY | \
mysql -B $MYSQL_OPTS | \
perl -F"\t" -lane 'print join ",", map {s/"/""/g; /^[\d.]+$/ ? $_ : qq("$_")} @F' | \
mail -s 'report' person@address
The Perl one-liner (snipped from elsewhere) does a nice job of converting the tab-separated fields to CSV.
Building on user7610's answer, here is, in my opinion, the best way to do it. With mysql OUTFILE I fought file-ownership and overwriting problems for 60 minutes.
It's not elegant, but it worked in 5 minutes.
php csvdump.php localhost root password database tablename > whatever-you-like.csv
<?php
$server = $argv[1];
$user = $argv[2];
$password = $argv[3];
$db = $argv[4];
$table = $argv[5];

mysql_connect($server, $user, $password) or die(mysql_error());
mysql_select_db($db) or die(mysql_error());

// fetch the data
$rows = mysql_query('SELECT * FROM ' . $table);
$rows || die(mysql_error());

// create a file pointer connected to the output stream
$output = fopen('php://output', 'w');

// output the column headings
$fields = [];
for ($i = 0; $i < mysql_num_fields($rows); $i++) {
    $field_info = mysql_fetch_field($rows, $i);
    $fields[] = $field_info->name;
}
fputcsv($output, $fields);

// loop over the rows, outputting them
while ($row = mysql_fetch_assoc($rows)) fputcsv($output, $row);
?>
Not exactly CSV format, but the tee command from the MySQL client can be used to save the output into a local file:
tee foobar.txt
SELECT foo FROM bar;
You can disable it using notee.
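An interactive session sketch (the file name is a placeholder):
mysql> tee /tmp/foobar.txt
mysql> SELECT foo FROM bar;
mysql> notee
Everything printed between tee and notee, including the result table, ends up in /tmp/foobar.txt.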
The problem with SELECT … INTO OUTFILE …; is that it requires permission to write files at the server.
In my case, putting FROM table_name ..... before INTO OUTFILE ..... gives an error:
Unexpected ordering of clauses. (near "FROM" at position 10)
What works for me:
SELECT *
INTO OUTFILE '/Volumes/Development/sql/sql/enabled_contacts.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table_name
WHERE column_name = 'value'
What worked for me:
SELECT *
FROM students
WHERE foo = 'bar'
LIMIT 0,1200000
INTO OUTFILE './students-1200000.csv'
FIELDS TERMINATED BY ',' ESCAPED BY '"'
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
None of the solutions on this thread worked for my particular case. I had pretty-printed JSON data inside one of the columns, which would get messed up in my CSV output. For those with a similar problem, try lines terminated by \r\n instead.
Another problem for those trying to open the CSV with Microsoft Excel: keep in mind there is a limit of 32,767 characters that a single cell can hold; above that, it overflows to the rows below. To identify which records in a column have the issue, use the query below. You can then truncate those records or handle them as you'd like.
SELECT id,name,CHAR_LENGTH(json_student_description) AS 'character length'
FROM students
WHERE CHAR_LENGTH(json_student_description)>32767;
Using the solution posted by Tim Harding, I created this Bash script to facilitate the process (root password is requested, but you can modify the script easily to ask for any other user):
#!/bin/bash
if [ "$1" == "" ]; then
    echo "Usage: $0 DATABASE TABLE [MYSQL EXTRA COMMANDS]"
    exit
fi
DBNAME=$1
TABLE=$2
FNAME=$1.$2.csv
MCOMM=$3
echo "MySQL password: "
stty -echo
read PASS
stty echo
mysql -uroot -p$PASS $MCOMM $DBNAME -B -e "SELECT * FROM $TABLE;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > $FNAME
It will create a file named: database.table.csv
If you have PHP set up on the server, you can use mysql2csv to export an (actually valid) CSV file for an arbitrary MySQL query. See my answer at MySQL - SELECT * INTO OUTFILE LOCAL ? for a little more context/info.
I tried to maintain the option names from mysql so it should be sufficient to provide the --file and --query options:
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
"Install" mysql2csv via
wget https://gist.githubusercontent.com/paslandau/37bf787eab1b84fc7ae679d1823cf401/raw/29a48bb0a43f6750858e1ddec054d3552f3cbc45/mysql2csv -O mysql2csv -q && (sha256sum mysql2csv | cmp <(echo "b109535b29733bd596ecc8608e008732e617e97906f119c66dd7cf6ab2865a65 mysql2csv") || (echo "ERROR comparing hash, Found:" ;sha256sum mysql2csv) ) && chmod +x mysql2csv
(Download content of the gist, check checksum and make it executable.)
The following produces tab-delimited and valid CSV output. Unlike most of the other answers, this technique correctly handles escaping of tabs, commas, quotes, and new lines without any stream filter like sed, AWK, or tr.
The example shows how to pipe a remote MySQL table directly into a local SQLite database using streams. This works without FILE permission or SELECT INTO OUTFILE permission. I have added new lines for readability.
mysql -B -C --raw -u 'username' --password='password' --host='hostname' 'databasename'
-e 'SELECT
CONCAT('\''"'\'',REPLACE(`id`,'\''"'\'', '\''""'\''),'\''"'\'') AS '\''id'\'',
CONCAT('\''"'\'',REPLACE(`value`,'\''"'\'', '\''""'\''),'\''"'\'') AS '\''value'\''
FROM sampledata'
2>/dev/null | sqlite3 -csv -separator $'\t' mydb.db '.import /dev/stdin mycsvtable'
The 2>/dev/null is needed to suppress the warning about the password on the command line.
If your data has NULLs, you can use the IFNULL() function in the query.
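For example (a sketch reusing the hypothetical value column from above), wrapping the column in IFNULL() turns NULLs into empty strings before the quoting is applied:
mysql -B -C --raw mydb <<'EOF'
SELECT CONCAT('"', REPLACE(IFNULL(`value`, ''), '"', '""'), '"') AS value
FROM sampledata;
EOF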
A simple solution in Python that writes a standard-format CSV file with headers and writes data as a stream (low memory use):
import csv

def export_table(connection, table_name, output_filename):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM " + table_name)
    # thanks to https://gist.github.com/madan712/f27ac3b703a541abbcd63871a4a56636 for this hint
    header = [descriptor[0] for descriptor in cursor.description]
    with open(output_filename, 'w') as csvfile:
        csv_writer = csv.writer(csvfile, dialect='excel')
        csv_writer.writerow(header)
        for row in cursor:
            csv_writer.writerow(row)
You could use it like:
import mysql.connector as mysql
# (or https://github.com/PyMySQL/PyMySQL should work but I haven't tested it)

db = mysql.connect(
    host="localhost",
    user="USERNAME",
    db="DATABASE_NAME",
    port=9999)

for table_name in ['table1', 'table2']:
    export_table(db, table_name, table_name + '.csv')

db.close()
For simplicity, this intentionally doesn't include some fancier stuff from another answer like using an environment variable for credentials, contextlib, etc. There is a subtlety mentioned there about line endings that I haven't evaluated.
A tiny Bash script for dumping simple queries to CSV, inspired by Tim Harding's answer.
#!/bin/bash
# $1 = query to execute
# $2 = outfile
# $3 = mysql database name
# $4 = mysql username

if [ -z "$1" ]; then
    echo "Query not given"
    exit 1
fi

if [ -z "$2" ]; then
    echo "Outfile not given"
    exit 1
fi

MYSQL_DB=""
MYSQL_USER="root"

if [ ! -z "$3" ]; then
    MYSQL_DB=$3
fi

if [ ! -z "$4" ]; then
    MYSQL_USER=$4
fi

if [ -z "$MYSQL_DB" ]; then
    echo "Database name not given"
    exit 1
fi

if [ -z "$MYSQL_USER" ]; then
    echo "Database user not given"
    exit 1
fi

mysql -u $MYSQL_USER -p -D $MYSQL_DB -B -s -e "$1" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > $2
echo "Written to $2"
If you are getting a secure-file-priv error even after moving your destination file into C:\ProgramData\MySQL\MySQL Server 8.0\Uploads, and the query
SELECT * FROM attendance INTO OUTFILE 'C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\FileName.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
is still not working, just change the \ (backslash) characters in the path to / (forward slash).
And that works!
Example:
SELECT * FROM attendance INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/FileName.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Every successful run of the query generates a new CSV file.
Cool, right?
The following Bash script works for me. It optionally also gets the schema for the requested tables.
#!/bin/bash
#
# Export MySQL data to CSV
#https://stackoverflow.com/questions/356578/how-to-output-mysql-query-results-in-csv-format
#
# ANSI colors
#http://www.csc.uvic.ca/~sae/seng265/fall04/tips/s265s047-tips/bash-using-colors.html
blue='\033[0;34m'
red='\033[0;31m'
green='\033[0;32m' # '\e[1;32m' is too bright for white bg.
endColor='\033[0m'
#
# A colored message
# params:
# 1: l_color - the color of the message
# 2: l_msg - the message to display
#
color_msg() {
    local l_color="$1"
    local l_msg="$2"
    echo -e "${l_color}$l_msg${endColor}"
}
#
# Error
#
# Show the given error message on standard error and exit
#
# Parameters:
# 1: l_msg - the error message to display
#
error() {
    local l_msg="$1"
    # Use ANSI red for error
    color_msg $red "Error:" 1>&2
    color_msg $red "\t$l_msg" 1>&2
    usage
}
#
# Display usage
#
usage() {
    echo "usage: $0 [-h|--help]" 1>&2
    echo " -o | --output csvdirectory" 1>&2
    echo " -d | --database database" 1>&2
    echo " -t | --tables tables" 1>&2
    echo " -p | --password password" 1>&2
    echo " -u | --user user" 1>&2
    echo " -hs | --host host" 1>&2
    echo " -gs | --get-schema" 1>&2
    echo "" 1>&2
    echo " output: output CSV directory to export MySQL data into" 1>&2
    echo "" 1>&2
    echo " user: MySQL user" 1>&2
    echo " password: MySQL password" 1>&2
    echo "" 1>&2
    echo " database: target database" 1>&2
    echo " tables: tables to export" 1>&2
    echo " host: host of target database" 1>&2
    echo "" 1>&2
    echo " -h|--help: show help" 1>&2
    exit 1
}
#
# show help
#
help() {
    echo "$0 Help" 1>&2
    echo "===========" 1>&2
    echo "$0 exports a CSV file from a MySQL database optionally limiting to a list of tables" 1>&2
    echo " example: $0 --database=cms --user=scott --password=tiger --tables=person --output person.csv" 1>&2
    echo "" 1>&2
    usage
}
domysql() {
    mysql --host $host -u$user --password=$password $database
}

getcolumns() {
    local l_table="$1"
    echo "describe $l_table" | domysql | cut -f1 | grep -v "Field" | grep -v "Warning" | paste -sd "," - 2>/dev/null
}
host="localhost"
mysqlfiles="/var/lib/mysql-files/"
# Parse command line options
while true; do
    #echo "option $1"
    case "$1" in
        # Options without arguments
        -h|--help) usage;;
        -d|--database) database="$2" ; shift ;;
        -t|--tables) tables="$2" ; shift ;;
        -o|--output) csvoutput="$2" ; shift ;;
        -u|--user) user="$2" ; shift ;;
        -hs|--host) host="$2" ; shift ;;
        -p|--password) password="$2" ; shift ;;
        -gs|--get-schema) option="getschema";;
        (--) shift; break;;
        (-*) echo "$0: error - unrecognized option $1" 1>&2; usage;;
        (*) break;;
    esac
    shift
done
# Checks
if [ "$csvoutput" == "" ]; then
    error "output CSV directory is not set"
fi
if [ "$database" == "" ]; then
    error "MySQL database is not set"
fi
if [ "$user" == "" ]; then
    error "MySQL user is not set"
fi
if [ "$password" == "" ]; then
    error "MySQL password is not set"
fi
color_msg $blue "exporting tables of database $database"
if [ "$tables" = "" ]; then
    tables=$(echo "show tables" | domysql)
fi
case $option in
    getschema)
        rm $csvoutput$database.schema
        for table in $tables
        do
            color_msg $blue "getting schema for $table"
            echo -n "$table:" >> $csvoutput$database.schema
            getcolumns $table >> $csvoutput$database.schema
        done
        ;;
    *)
        for table in $tables
        do
            color_msg $blue "exporting table $table"
            cols=$(grep "$table:" $csvoutput$database.schema | cut -f2 -d:)
            if [ "$cols" = "" ]; then
                cols=$(getcolumns $table)
            fi
            ssh $host rm $mysqlfiles/$table.csv
            cat <<EOF | mysql --host $host -u$user --password=$password $database
SELECT $cols FROM $table INTO OUTFILE '$mysqlfiles$table.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
EOF
            scp $host:$mysqlfiles/$table.csv $csvoutput$table.csv.raw
            (echo "$cols"; cat $csvoutput$table.csv.raw) > $csvoutput$table.csv
            rm $csvoutput$table.csv.raw
        done
        ;;
esac
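An invocation might look like this (a sketch; the script name, credentials, tables, and paths are placeholders; the first run records the schema, the second exports the data):
./export-mysql-csv.sh --database cms --user scott --password tiger --tables "person address" --output /tmp/csv/ --get-schema
./export-mysql-csv.sh --database cms --user scott --password tiger --tables "person address" --output /tmp/csv/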

Access Design View field descriptions

I have an Access database with field descriptions that (theoretically) are visible in Design View. I don't have a copy of Access. I can export the data and schema using mdbtools, but those don't come with the descriptions. Are there ways to programmatically extract those descriptions?
Turns out there is an un/under-documented mdbtools command that will give metadata for a table: mdb-prop. Here's a shell script that lists the metadata of every field, adapted from a script whose provenance I have forgotten:
#!/usr/bin/env bash
# Usage: mdb-export-all.sh full-path-to-db
command -v mdb-tables >/dev/null 2>&1 || {
    echo >&2 "I require mdb-tables but it's not installed. Aborting."
    exit 1
}
command -v mdb-export >/dev/null 2>&1 || {
    echo >&2 "I require mdb-export but it's not installed. Aborting."
    exit 1
}
fullfilename=$1
filename=$(basename "$fullfilename")
dbname=${filename%.*}
mkdir "$dbname"

IFS=$'\n'
for table in $(mdb-tables -1 "$fullfilename"); do
    echo "Check table $table"
    # Save a file with all metadata for every field
    mdb-prop "$fullfilename" "$table" > "$dbname/$table.txt"
    # Save a file with just the descriptions
    cat "$dbname/$table.txt" | grep -E 'name|Description' > "$dbname/info_$table.txt"
done

Store mysql result in a bash array variable

I am trying to store a MySQL result in a global bash array variable, but I don't know how to do it.
Should I save the MySQL command's result in a file and read the file line by line in my for loop for further processing?
Example:
user    password
Pierre  aaa
Paul    bbb
Command:
$results = $( mysql -uroot -ppwd -se "SELECT * from users" );
I want results to contain the two rows.
Mapfile: store the whole table in one bash variable
You could try this:
mapfile result < <(mysql -uroot -ppwd -se "SELECT * from users;")
Then
echo ${result[0]%$'\t'*}
echo ${result[0]#*$'\t'}
or
for row in "${result[@]}"; do
    echo Name: ${row%$'\t'*} pass: ${row#*$'\t'}
done
Note: this works fine while there are only two fields per row. More is possible, but it becomes tricky; see the sketch below.
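A sketch for rows with more than two fields, splitting each tab-separated row into its own array (the column positions are assumptions):
while IFS=$'\t' read -r -a fields; do
    printf 'user=%s pass=%s\n' "${fields[0]}" "${fields[1]}"
done < <(mysql -uroot -ppwd -se "SELECT * from users;")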
Read: reading the table row by row
while IFS=$'\t' read name pass; do
    echo name:$name pass:$pass
done < <(mysql -uroot -ppwd -se "SELECT * from users;")
Read in a loop to hold the whole table in many variables:
i=0
while IFS=$'\t' read name[i] pass[i++]; do
    :
done < <(mysql -uroot -ppwd -se "SELECT * from users;")
echo ${name[0]} ${pass[0]}
echo ${name[1]} ${pass[1]}
New (Feb 2018) shell connector
There is a little tool (on GitHub, or on my own site: shell_connector.sh) you could use:
Some preparation:
cd /tmp/
wget -q http://f-hauri.ch/vrac/shell_connector.sh
. shell_connector.sh
newSqlConnector /usr/bin/mysql '-uroot -ppwd'
The following is just for the demo; skip ahead to the operational test for a quick run.
That's all. Now, create a temporary table for the demo:
echo $SQLIN
3
cat >&3 <<eof
CREATE TEMPORARY TABLE users (
id bigint(20) unsigned NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(30), date DATE)
eof
myMysql myarray ';'
declare -p myarray
bash: declare: myarray: not found
The command myMysql myarray ';' will send ; and then execute the inline command,
but as mysql won't answer anything, the variable $myarray won't exist.
cat >&3 <<eof
INSERT INTO users VALUES (1,'alice','2015-06-09 22:15:01'),
(2,'bob','2016-08-10 04:13:21'),(3,'charlie','2017-10-21 16:12:11')
eof
myMysql myarray ';'
declare -p myarray
bash: declare: myarray: not found
Operational Test:
OK, now:
myMysql myarray "SELECT * from users;"
printf "%s\n" "${myarray[@]}"
1 alice 2015-06-09
2 bob 2016-08-10
3 charlie 2017-10-21
declare -p myarray
declare -a myarray=([0]=$'1\talice\t2015-06-09' [1]=$'2\tbob\t2016-08-10' [2]=$'3\tcharlie\t2017-10-21')
This tool is at an early stage of development... You have to manually clear your variables before reusing them:
unset myarray
myMysql myarray "SELECT name from users where id=2;"
echo $myarray
bob
declare -p myarray
declare -a myarray=([0]="bob")
If you're looking to get a global variable inside your script you can simply assign a value to a varname:
VARNAME=('var' 'name') # no space between the variable name and value
Doing this you'll be able to access VARNAME's value anywhere in your script after you initialize it.
If you want your variable to be shared between multiple scripts you have to use export:
script1.sh:
export VARNAME=('var' 'name')
echo ${VARNAME[0]} # will echo 'var'
script2.sh
echo ${VARNAME[1]} # will echo 'name', provided that
# script1.sh was executed prior to this one
NOTE that export will work only when running scripts in the same shell instance. If you want it to work cross-instance you would have to put the export variable code somewhere in .bashrc or .bash_profile
The answer from @F. Hauri seems really complicated.
https://stackoverflow.com/a/38052768/470749 helped me realize that I needed to use parentheses () wrapped around the query result to treat it as an array.
# You can ignore this function since you'll do something different.
function showTbl {
    echo $1;
}
MOST_TABLES=$(ssh -vvv -t -i ~/.ssh/myKey ${SERVER_USER_AND_IP} "cd /app/ && docker exec laradock_mysql_1 mysql -u ${DB} -p${REMOTE_PW} -e 'SELECT table_name FROM information_schema.tables WHERE table_schema = \"${DB}\" AND table_name NOT LIKE \"pma_%\" AND table_name NOT IN (\"mail_webhooks\");'")
#Do some string replacement to get rid of the query result header and warning. https://stackoverflow.com/questions/13210880/replace-one-substring-for-another-string-in-shell-script
warningToIgnore="mysql\: \[Warning\] Using a password on the command line interface can be insecure\."
MOST_TABLES=${MOST_TABLES/$warningToIgnore/""}
headerToIgnore="table_name"
MOST_TABLES=${MOST_TABLES/$headerToIgnore/""}
#HERE WAS THE LINE THAT I NEEDED TO ADD! Convert the string to array:
MOST_TABLES=($MOST_TABLES)
for i in ${MOST_TABLES[@]}; do
    if [[ $i = *[![:space:]]* ]]
    then
        # Remove whitespace from value https://stackoverflow.com/a/3232433/470749
        i="$(echo -e "${i}" | tr -d '[:space:]')"
        TBL_ARR+=("$i")
    fi
done
for t in ${TBL_ARR[@]}; do
    showTbl $t
done
This successfully shows me that ${TBL_ARR[@]} has all the values from the query result.
results=($( mysql -uroot -ppwd -se "SELECT * from users" ))
if [ "$?" -ne 0 ]; then
    echo fail
    exit
fi

dump all mysql tables into separate files automatically?

I'd like to get dumps of each mysql table into separate files. The manual indicates that the syntax for this is
mysqldump [options] db_name [tbl_name ...]
Which indicates that you know the table names beforehand. I could set up the script so that it knows each table name now, but say I add a new table down the road and forget to update the dump script. Then I'm missing dumps for one or more tables.
Is there a way to automagically dump each existing table into a separate file? Or am I going to have to do some script-fu; query the database, get all the table names, and dump them by name.
If I go the script-fu route, what scripting languages can access a MySQL database?
Here's a script that dumps table data as SQL commands into separate, compressed files. It does not require being on the MySQL server host, doesn't hard-code the password in the script, and is just for a specific db, not all db's on the server:
#!/bin/bash
# dump-tables-mysql.sh
# Descr: Dump MySQL table data into separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: @Trutane
# Ref: http://stackoverflow.com/q/3669121/138325
# Notes:
# * Script will prompt for password for db access.
# * Output files are compressed and saved in the current working dir, unless DIR is
# specified on command-line.
[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1
DB_host=$1
DB_user=$2
DB=$3
DIR=$4
[ -n "$DIR" ] || DIR=.
test -d $DIR || mkdir -p $DIR
echo -n "DB password: "
read -s DB_pass
echo
echo "Dumping tables into separate SQL command files for database '$DB' into dir=$DIR"
tbl_count=0
for t in $(mysql -NBA -h $DB_host -u $DB_user -p$DB_pass -D $DB -e 'show tables')
do
    echo "DUMPING TABLE: $DB.$t"
    mysqldump -h $DB_host -u $DB_user -p$DB_pass $DB $t | gzip > $DIR/$DB.$t.sql.gz
    tbl_count=$(( tbl_count + 1 ))
done
echo "$tbl_count tables dumped from database '$DB' into dir=$DIR"
The mysqldump command line program does this for you - although the docs are very unclear about this.
One thing to note is that ~/output/dir has to be writable by the user that owns mysqld. On Mac OS X:
sudo chown -R _mysqld:_mysqld ~/output/dir
mysqldump --user=dbuser --password --tab=~/output/dir dbname
After running the above, you will have one tablename.sql file containing each table's schema (create table statement) and tablename.txt file containing the data.
If you want a dump with schema only, add the --no-data flag:
mysqldump --user=dbuser --password --no-data --tab=~/output/dir dbname
You can accomplish this by:
Get the list of databases in mysql
dump each database with mysqldump
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="something"
BACKUP_DIR=/srv/backup/$(date +%Y-%m-%dT%H_%M_%S);
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
    # dump each database in a separate file
    mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
Here is the corresponding import.
#!/bin/bash
# import-files-mysql.sh
# Descr: Import separate SQL files for a specified database.
# Usage: Run without args for usage info.
# Author: Will Rubel
# Notes:
# * Script will prompt for password for db access.
[ $# -lt 3 ] && echo "Usage: $(basename $0) <DB_HOST> <DB_USER> <DB_NAME> [<DIR>]" && exit 1
DB_host=$1
DB_user=$2
DB=$3
DIR=$4
DIR=$DIR/*
echo -n "DB password: "
read -s DB_pass
echo
echo "Importing separate SQL command files for database '$DB' into '$DB'"
file_count=0
for f in $DIR
do
    echo "IMPORTING FILE: $f"
    gunzip -c $f | mysql -h $DB_host -u $DB_user -p$DB_pass $DB
    (( file_count++ ))
done
echo "$file_count files imported to database '$DB'"
#!/bin/bash
for i in $(mysql -uUSER -pPASSWORD DATABASE -e "show tables;" | grep -v Tables_in_); do mysqldump -uUSER -pPASSWORD DATABASE $i > /backup/dir/$i".sql"; done
tar -cjf "backup_mysql_"$(date +'%Y%m%d')".tar.bz2" /backup/dir/*.sql
I recently had the need to back up a big database (more than 250 GB uncompressed dump file) and I found the answers to this question really helpful.
I started with @Trutane's approach and it worked like a charm. But I was concerned about dumping tables in different mysql sessions because that could, at some point, lead to an inconsistent backup.
After some research and testing, I developed a different solution based on gawk. The basic idea is to create a dump of the whole database using mysqldump with --single-transaction=true and then process the output with gawk to produce a different file for every table.
So I can call:
mysqldump --single-transaction=true -u DBUSERNAME -p DBNAME | \
gawk -v 'database=DBNAME' -f 'backup.awk' -
And it produces, in the current folder, a bunch of $database.$table.sql files with the schema of every table and $database.$table.sql.gz files with the content of every table. Thanks to --single-transaction=true, the whole dump happens in a single transaction and data consistency is ensured.
The content of backup.awk is:
# Split mysqldump output in different files, two per table:
# * First file is named $database.$table.sql and it contains the table schema
# * Second file is named $database.$table.sql.gz and it contains the table data
# The 'database' variable is expected to be provided in command-line
BEGIN {
    insert=0
    filename=sprintf("%s.header.sql", database);
}

# A line starting with "INSERT INTO" activates inserting mode
/INSERT INTO/ { insert=1 }

# A line containing "-- Table structure for table `name-of-table`" finishes inserting mode
# It is also used to detect the table name and change file names accordingly
match($0, /-- Table structure for table `(.*)`/, m) {
    insert=0;
    table=m[1];
    filename=sprintf("%s.%s.sql", database, table);
    print sprintf("Dumping table %s\n", table);
}

# If in inserting mode, the line is piped to a gzipped file;
# if not, it is redirected to an uncompressed schema file
{
    if (insert == 1) {
        output = sprintf("gzip > %s.gz", filename);
        print | output
    } else {
        print > filename;
    }
}
It looks like everybody here forgot about SET autocommit=0, SET unique_checks=0, and SET foreign_key_checks=0, which are supposed to speed up the import process...
#!/bin/bash
MYSQL_USER="USER"
MYSQL_PASS="PASS"
if [ -z "$1" ]
then
    echo "Dumping all DB ... in separate files"
    for I in $(mysql -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' -s --skip-column-names);
    do
        echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$I.sql"
        mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $I >> "$I.sql";
        echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$I.sql"
        gzip "$I.sql"
    done
    echo "END."
else
    echo "Dumping $1 ..."
    echo "SET autocommit=0;SET unique_checks=0;SET foreign_key_checks=0;" > "$1.sql"
    mysqldump -u $MYSQL_USER --password=$MYSQL_PASS $1 >> "$1.sql";
    echo "SET autocommit=1;SET unique_checks=1;SET foreign_key_checks=1;commit;" >> "$1.sql"
    gzip "$1.sql"
fi
If you want to dump all tables from all databases, just combine Elias Torres Arroyo's and Trutane's answers:
And if you don't want to give your password on the terminal, just store your password in an extra config file (chmod 0600); see Mysqldump launched by cron and password security.
#!/bin/bash
# this file
# a) gets all databases from mysql
# b) gets all tables from all databases in a)
# c) creates subfolders for every database in a)
# d) dumps every table from b) in a single file
# this is a mixture of scripts from Trutane (http://stackoverflow.com/q/3669121/138325)
# and Elias Torres Arroyo (https://stackoverflow.com/a/14711298/8398149)
# usage:
# sk-db.bash parameters
# where parameters are:
# d "dbs to leave"
# t "tables to leave"
# u "user who connects to database"
# h "db host"
# f "/backup/folder"
user='root'
host='localhost'
backup_folder=''
leave_dbs=(information_schema mysql)
leave_tables=()
while getopts ":d:t:u:h:f:" opt; do
    case $opt in
        d) leave_dbs=( $OPTARG )
            ;;
        t) leave_tables=( $OPTARG )
            ;;
        u) user=$OPTARG
            ;;
        h) host=$OPTARG
            ;;
        f) backup_folder=$OPTARG
            ;;
        \?) echo "Invalid option -$OPTARG" >&2
            ;;
    esac
done
echo '****************************************'
echo "Database Backup with these options"
echo "Host $host"
echo "User $user"
echo "Backup in $backup_folder"
echo '----------------------------------------'
echo "Databases to omit:"
printf "%s\n" "${leave_dbs[@]}"
echo '----------------------------------------'
echo "Tables to omit:"
printf "%s\n" "${leave_tables[@]}"
echo '----------------------------------------'
BACKUP_DIR=$backup_folder/$(date +%Y-%m-%dT%H_%M_%S);
CONFIG_FILE=/root/db-config.cnf
function contains() {
    local n=$#
    local value=${!n}
    for ((i=1; i < $#; i++)) {
        if [ "${!i}" == "${value}" ]; then
            echo "y"
            return 0
        fi
    }
    echo "n"
    return 1
}
test -d "$BACKUP_DIR" || mkdir -p "$BACKUP_DIR"
# Get the database list, exclude information_schema
database_count=0
tbl_count=0
for db in $(mysql --defaults-extra-file=$CONFIG_FILE -B -s -u $user -e 'show databases' )
do
    if [ $(contains "${leave_dbs[@]}" "$db") == "y" ]; then
        echo "leave database $db as requested"
    else
        # dump each database in a separate file
        (( database_count++ ))
        DIR=$BACKUP_DIR/$db
        [ -n "$DIR" ] || DIR=.
        test -d $DIR || mkdir -p $DIR
        echo
        echo "Dumping tables into separate SQL command files for database '$db' into dir=$DIR"
        for t in $(mysql --defaults-extra-file=$CONFIG_FILE -NBA -h $host -u $user -D $db -e 'show tables')
        do
            if [ $(contains "${leave_tables[@]}" "$db.$t") == "y" ]; then
                echo "leave table $db.$t as requested"
            else
                echo "DUMPING TABLE: $db.$t"
                mysqldump --defaults-extra-file=$CONFIG_FILE -h $host -u $user $db $t > $DIR/$db.$t.sql
                tbl_count=$(( tbl_count + 1 ))
            fi
        done
        echo "Database $db is finished"
        echo '----------------------------------------'
    fi
done
echo '----------------------------------------'
echo "Backup completed"
echo '**********************************************'
Also, these helped:
Check if bash array contains value
arrays in bash
named arguments in script
I'm no bash master, but I'd just do it with a bash script. Without hitting MySQL, given knowledge of the data directory and database name, you could just scan for all .frm files (one for every table in that db/directory) to get a list of tables.
I'm sure there are ways to make it slicker and accept arguments or whatnot, but this worked well for me.
tables_in_a_db_to_sql.sh
#!/bin/bash
database="this_is_my_database"
datadir="/var/lib/mysql/"
datadir_escaped="\/var\/lib\/mysql\/"
all_tables=($(ls $datadir$database/*.frm | sed s/"$datadir_escaped$database\/"/""/g | sed s/.frm//g))
for t in "${all_tables[@]}"; do
    outfile=$database.$t.sql
    echo "-- backing up $t to $outfile"
    echo "mysqldump [options] $database $t > $outfile"
    # mysqldump [options] $database $t > $outfile
done
Fill in the [options] and desired outfile convention as you need, and uncomment the last mysqldump line.
For Windows Servers, you can use a batch file like so:
set year=%DATE:~10,4%
set day=%DATE:~7,2%
set mnt=%DATE:~4,2%
set hr=%TIME:~0,2%
set min=%TIME:~3,2%
IF %day% LSS 10 SET day=0%day:~1,1%
IF %mnt% LSS 10 SET mnt=0%mnt:~1,1%
IF %hr% LSS 10 SET hr=0%hr:~1,1%
IF %min% LSS 10 SET min=0%min:~1,1%
set backuptime=%year%-%mnt%-%day%-%hr%-%min%
set backupfldr=C:\inetpub\wwwroot\backupfiles\
set datafldr="C:\Program Files\MySQL\MySQL Server 5.5\data"
set zipper="C:\inetpub\wwwroot\backupfiles\zip\7za.exe"
set retaindays=21
:: Switch to the data directory to enumerate the folders
pushd %datafldr%
:: Get all table names and save them in a temp file
mysql --skip-column-names --user=root --password=mypassword mydatabasename -e "show tables" > tables.txt
:: Loop through all tables in temp file so that we can save one backup file per table
for /f "skip=3 delims=|" %%i in (tables.txt) do (
    set tablename=%%i
    mysqldump --user=root --password=mypassword mydatabasename %%i > "%backupfldr%mydatabasename.%backuptime%.%%i.sql"
)
del tables.txt
:: Zip all files ending in .sql in the folder
%zipper% a -tzip "%backupfldr%backup.mydatabasename.%backuptime%.zip" "%backupfldr%*.sql"
echo "Deleting all the files ending in .sql only"
del "%backupfldr%*.sql"
echo "Deleting zip files older than 21 days now"
Forfiles /p %backupfldr% /m *.zip /d -%retaindays% /c "cmd /c del /q #path"
Then schedule it using Windows Task Scheduler.
Also, if you want to exclude certain tables in your backup, note that you can use a where clause on the "show tables" statement, but the column name depends on your database name.
So for example, if your database name is "blah" then your column name in the "show tables" result set will be "tables_in_blah". Which means you could add a where clause something similar to this:
show tables where tables_in_blah <> 'badtable'
or
show tables where tables_in_blah like '%goodtable%'
Fill in the path where the backups should be stored. In this case we create one per day of the week, so there are seven days of backups that get recycled.
The script checks how many databases there are and then how many tables each database has, and creates one file per table named db.tablename.sql,
which can then be restored.
Regards.
#!/bin/bash
USER="root"
MYSQL_PASSWORD="password"
RUTA=/hdd/backup/mysql
diasemana=$(date +\%w)
mkdir -m 7777 $RUTA
mkdir -m 7777 $RUTA/infodb
mkdir -m 7777 $RUTA/$diasemana
mysql -u$USER -p$MYSQL_PASSWORD -e "SHOW DATABASES where \`Database\` <> 'information_schema' and \`Database\` <> 'mysql' and \`Database\` <> 'sys' and \`Database\` <> 'performance_schema';" -N > $RUTA/infodb/db.txt;
for i in $(cat $RUTA/infodb/db.txt);
do
    mysql -u$USER -p$MYSQL_PASSWORD -e "USE $i;show tables;" -N > $RUTA/infodb/$i.txt;
    for j in $(cat $RUTA/infodb/$i.txt);
    do
        mysqldump -u$USER -p$MYSQL_PASSWORD $i $j > $RUTA/$diasemana/$i"_"$j".sql";
        echo $RUTA/$diasemana/$i"_"$j".sql"
    done
done
See the following article by Pauli Marcus:
Howto split a SQL database dump into table-wise files
Splitting a sql file containing a whole database into per-table files is quite easy: grep the .sql for any occurrence of DROP TABLE. Generate the file name from the table name that is included in the DROP TABLE statement. Echo the output to a file. Here is a little script that expects a .sql file as input:
#!/bin/bash
file=$1                      # the input file
directory="$file-splitted"   # the output directory
output="$directory/header"   # the first file containing the header
GREP="DROP TABLE"            # what we are looking for

mkdir $directory             # create the output directory

while read line
do
    # if the current line contains the wanted statement
    if [ $(echo "$line" | grep -c "$GREP") == "1" ]
    then
        # extract the file name
        myfile=$(echo $line | awk '{print $5}' | sed -e 's/`//g' -e 's/;//g')
        # set the new file name
        output="$directory/$myfile"
    fi
    echo "$line" >> $output   # write to file
done < $file
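A usage sketch (split_dump.sh is a placeholder name for the script above; the output lands in a <dumpfile>-splitted directory):
./split_dump.sh full_dump.sql
ls full_dump.sql-splitted/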

How can I output MySQL query results in CSV format?

Is there an easy way to run a MySQL query from the Linux command line and output the results in CSV format?
Here's what I'm doing now:
mysql -u uid -ppwd -D dbname << EOQ | sed -e 's/ /,/g' | tee list.csv
select id, concat("\"",name,"\"") as name
from students
EOQ
It gets messy when there are a lot of columns that need to be surrounded by quotes, or if there are quotes in the results that need to be escaped.
From Save MySQL query results into a text or CSV file:
SELECT order_id,product_name,qty
FROM orders
WHERE foo = 'bar'
INTO OUTFILE '/var/lib/mysql-files/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
Note: That syntax may need to be reordered to
SELECT order_id,product_name,qty
INTO OUTFILE '/var/lib/mysql-files/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM orders
WHERE foo = 'bar';
in more recent versions of MySQL.
Using this command, columns names will not be exported.
Also note that /var/lib/mysql-files/orders.csv will be on the server that is running MySQL. The user that the MySQL process is running under must have permissions to write to the directory chosen, or the command will fail.
If you want to write output to your local machine from a remote server (especially a hosted or virtualize machine such as Heroku or Amazon RDS), this solution is not suitable.
mysql your_database --password=foo < my_requests.sql > out.tsv
This produces a tab-separated format. If you are certain that commas do not appear in any of the column data (and neither do tabs), you can use this pipe command to get a true CSV (thanks to user John Carter):
... .sql | sed 's/\t/,/g' > out.csv
mysql --batch, -B
Print results using tab as the column separator, with each row on a
new line. With this option, mysql does not use the history file.
Batch mode results in non-tabular output format and escaping of
special characters. Escaping may be disabled by using raw mode; see
the description for the --raw option.
This will give you a tab-separated file. Since commas (or strings containing comma) are not escaped, it is not straightforward to change the delimiter to comma.
Here's a fairly gnarly way of doing it[1]:
mysql --user=wibble --password mydatabasename -B -e "select * from vehicle_categories;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > vehicle_categories.csv
It works pretty well. Once again, though, a regular expression proves write-only.
Regex Explanation:
s/// means substitute what's between the first // with what's between the second //
the "g" at the end is a modifier that means "all instance, not just first"
^ (in this context) means beginning of line
$ (in this context) means end of line
So, putting it all together:
s/'/\'/ Replace ' with \'
s/\t/\",\"/g Replace all \t (tab) with ","
s/^/\"/ at the beginning of the line place a "
s/$/\"/ At the end of the line, place a "
s/\n//g Replace all \n (newline) with nothing
[1] I found it somewhere and can't take any credit.
Pipe it through 'tr' (Unix/Cygwin only):
mysql <database> -e "<query here>" | tr '\t' ',' > data.csv
N.B.: This handles neither embedded commas, nor embedded tabs.
This saved me a couple of times. It is fast and it works!
--batch
Print results using tab as the column separator, with each row on a
new line.
--raw disables character escaping (\n, \t, \0, and \)
Example:
mysql -udemo_user -p -h127.0.0.1 --port=3306 \
--default-character-set=utf8mb4 --database=demo_database \
--batch --raw < /tmp/demo_sql_query.sql > /tmp/demo_csv_export.tsv
For completeness you could convert to CSV (but be careful because tabs could be inside field values - e.g., text fields)
tr '\t' ',' < file.tsv > file.csv
The OUTFILE solution given by Paul Tomblin causes a file to be written on the MySQL server itself, so this will work only if you have FILE access, as well as login access or other means for retrieving the file from that box.
If you don't have such access, and tab-delimited output is a reasonable substitute for CSV (e.g., if your end goal is to import to Excel), then serbaut's solution (using mysql --batch and optionally --raw) is the way to go.
MySQL Workbench can export recordsets to CSV, and it seems to handle commas in fields very well. The CSV opens up in OpenOffice Calc fine.
All of the solutions here to date, except the MySQL Workbench one, are incorrect and quite possibly unsafe (i.e., security issues) for at least some possible content in the MySQL database.
MySQL Workbench (and similarly phpMyAdmin) provide a formally correct solution, but they are designed for downloading the output to a user's location. They're not so useful for things like automating data export.
It is not possible to generate reliably correct CSV content from the output of mysql -B -e 'SELECT ...' because that cannot encode carriage returns and white space in fields. The '-s' flag to mysql does do backslash escaping, and might lead to a correct solution. However, using a scripting language (one with decent internal data structures that is, not Bash), and libraries where the encoding issues have already been carefully worked out is far safer.
I thought about writing a script for this, but as soon as I thought about what I'd call it, it occurred to me to search for preexisting work by the same name. While I haven't gone over it thoroughly, mysql2csv looks promising. Depending on your application, the YAML approach to specifying the SQL commands might or might not appeal though. I'm also not thrilled with the requirement for a more recent version of Ruby than comes as standard with my Ubuntu 12.04 (Precise Pangolin) laptop or Debian 6.0 (Squeeze) servers. Yes, I know I could use RVM, but I'd rather not maintain that for such a simple purpose.
Use:
mysql your_database -p < my_requests.sql | awk '{print $1","$2}' > out.csv
Many of the answers on this page are weak, because they don't handle the general case of what can occur in CSV format. E.g., commas and quotes embedded in fields and other conditions that always come up eventually. We need a general solution that works for all valid CSV input data.
Here's a simple and strong solution in Python:
#!/usr/bin/env python
import csv
import sys
tab_in = csv.reader(sys.stdin, dialect=csv.excel_tab)
comma_out = csv.writer(sys.stdout, dialect=csv.excel)
for row in tab_in:
comma_out.writerow(row)
Name that file tab2csv, put it on your path, give it execute permissions, then use it like this:
mysql OTHER_OPTIONS --batch --execute='select * from whatever;' | tab2csv > outfile.csv
The Python CSV-handling functions cover corner cases for CSV input format(s).
This could be improved to handle very large files via a streaming approach.
From your command line, you can do this:
mysql -h *hostname* -P *port number* --database=*database_name* -u *username* -p -e *your SQL query* | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > *output_file_name.csv*
Credits: Exporting table from Amazon RDS into a CSV file
This answer uses Python and a popular third party library, PyMySQL. I'm adding it because Python's csv library is powerful enough to correctly handle many different flavors of .csv and no other answers are using Python code to interact with the database.
import contextlib
import csv
import datetime
import os
# https://github.com/PyMySQL/PyMySQL
import pymysql
SQL_QUERY = """
SELECT * FROM my_table WHERE my_attribute = 'my_attribute';
"""
# embedding passwords in code gets nasty when you use version control
# the environment is not much better, but this is an example
# https://stackoverflow.com/questions/12461484
SQL_USER = os.environ['SQL_USER']
SQL_PASS = os.environ['SQL_PASS']
connection = pymysql.connect(host='localhost',
user=SQL_USER,
password=SQL_PASS,
db='dbname')
with contextlib.closing(connection):
with connection.cursor() as cursor:
cursor.execute(SQL_QUERY)
# Hope you have enough memory :)
results = cursor.fetchall()
output_file = 'my_query-{}.csv'.format(datetime.datetime.today().strftime('%Y-%m-%d'))
with open(output_file, 'w', newline='') as csvfile:
# http://stackoverflow.com/a/17725590/2958070 about lineterminator
csv_writer = csv.writer(csvfile, lineterminator='\n')
csv_writer.writerows(results)
I encountered the same problem and Paul's Answer wasn't an option since it was Amazon RDS. Replacing the tab with the commas did not work as the data had embedded commas and tabs. I found that mycli, which is a drop-in alternative for the mysql-client, supports CSV output out of the box with the --csv flag:
mycli db_name --csv -e "select * from flowers" > flowers.csv
This is simple, and it works on anything without needing batch mode or output files:
select concat_ws(',',
concat('"', replace(field1, '"', '""'), '"'),
concat('"', replace(field2, '"', '""'), '"'),
concat('"', replace(field3, '"', '""'), '"'))
from your_table where etc;
Explanation:
Replace " with "" in each field --> replace(field1, '"', '""')
Surround each result in quotation marks --> concat('"', result1, '"')
Place a comma between each quoted result --> concat_ws(',', quoted1, quoted2, ...)
That's it!
Also, if you're performing the query on the Bash command line, I believe the tr command can be used to substitute the default tabs to arbitrary delimiters.
$ echo "SELECT * FROM Table123" | mysql Database456 | tr "\t" ,
You can have a MySQL table that uses the CSV engine.
Then you will have a file on your hard disk that will always be in a CSV format which you could just copy without processing it.
To expand on previous answers, the following one-liner exports a single table as a tab-separated file. It's suitable for automation, exporting the database every day or so.
mysql -B -D mydatabase -e 'select * from mytable'
Conveniently, we can use the same technique to list out MySQL's tables, and to describe the fields on a single table:
mysql -B -D mydatabase -e 'show tables'
mysql -B -D mydatabase -e 'desc users'
Field Type Null Key Default Extra
id int(11) NO PRI NULL auto_increment
email varchar(128) NO UNI NULL
lastName varchar(100) YES NULL
title varchar(128) YES UNI NULL
userName varchar(128) YES UNI NULL
firstName varchar(100) YES NULL
Here's what I do:
echo $QUERY | \
mysql -B $MYSQL_OPTS | \
perl -F"\t" -lane 'print join ",", map {s/"/""/g; /^[\d.]+$/ ? $_ : qq("$_")} #F ' | \
mail -s 'report' person#address
The Perl script (snipped from elsewhere) does a nice job of converting the tab spaced fields to CSV.
Building on user7610, here is the best way to do it. With mysql outfile there were 60 mins of file ownership and overwriting problems.
It's not cool, but it worked in 5 mins.
php csvdump.php localhost root password database tablename > whatever-you-like.csv
<?php
$server = $argv[1];
$user = $argv[2];
$password = $argv[3];
$db = $argv[4];
$table = $argv[5];
mysql_connect($server, $user, $password) or die(mysql_error());
mysql_select_db($db) or die(mysql_error());
// fetch the data
$rows = mysql_query('SELECT * FROM ' . $table);
$rows || die(mysql_error());
// create a file pointer connected to the output stream
$output = fopen('php://output', 'w');
// output the column headings
$fields = [];
for($i = 0; $i < mysql_num_fields($rows); $i++) {
$field_info = mysql_fetch_field($rows, $i);
$fields[] = $field_info->name;
}
fputcsv($output, $fields);
// loop over the rows, outputting them
while ($row = mysql_fetch_assoc($rows)) fputcsv($output, $row);
?>
Not exactly as a CSV format, but the tee command from the MySQL client can be used to save the output into a local file:
tee foobar.txt
SELECT foo FROM bar;
You can disable it using notee.
The problem with SELECT … INTO OUTFILE …; is that it requires permission to write files at the server.
In my case from table_name ..... before INTO OUTFILE ..... gives an error:
Unexpected ordering of clauses. (near "FROM" at position 10)
What works for me:
SELECT *
INTO OUTFILE '/Volumes/Development/sql/sql/enabled_contacts.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table_name
WHERE column_name = 'value'
What worked for me:
SELECT *
FROM students
WHERE foo = 'bar'
LIMIT 0,1200000
INTO OUTFILE './students-1200000.csv'
FIELDS TERMINATED BY ',' ESCAPED BY '"'
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
None of the solutions on this thread worked for my particular case. I had pretty JSON data inside one of the columns, which would get messed up in my CSV output. For those with a similar problem, try lines terminated by \r\n instead.
Another problem, for those trying to open the CSV with Microsoft Excel: keep in mind there is a limit of 32,767 characters that a single cell can hold; above that, the content overflows into the rows below. To identify which records in a column have this issue, use the query below. You can then truncate those records or handle them as you'd like.
SELECT id,name,CHAR_LENGTH(json_student_description) AS 'character length'
FROM students
WHERE CHAR_LENGTH(json_student_description)>32767;
Using the solution posted by Tim Harding, I created this Bash script to facilitate the process (the root password is requested, but you can easily modify the script to ask for any other user):
#!/bin/bash
if [ "$1" == "" ];then
echo "Usage: $0 DATABASE TABLE [MYSQL EXTRA COMMANDS]"
exit
fi
DBNAME=$1
TABLE=$2
FNAME=$1.$2.csv
MCOMM=$3
echo "MySQL password: "
stty -echo
read PASS
stty echo
mysql -uroot -p$PASS $MCOMM $DBNAME -B -e "SELECT * FROM $TABLE;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > $FNAME
It will create a file named: database.table.csv
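For example, assuming the script is saved as dump_table.sh (the file name is an assumption):

./dump_table.sh mydatabase users
# prompts for the root password, then writes mydatabase.users.csv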
If you have PHP set up on the server, you can use mysql2csv to export an (actually valid) CSV file for an arbitrary MySQL query. See my answer at MySQL - SELECT * INTO OUTFILE LOCAL ? for a little more context/info.
I tried to maintain the option names from mysql so it should be sufficient to provide the --file and --query options:
./mysql2csv --file="/tmp/result.csv" --query='SELECT 1 as foo, 2 as bar;' --user="username" --password="password"
"Install" mysql2csv via
wget https://gist.githubusercontent.com/paslandau/37bf787eab1b84fc7ae679d1823cf401/raw/29a48bb0a43f6750858e1ddec054d3552f3cbc45/mysql2csv -O mysql2csv -q && (sha256sum mysql2csv | cmp <(echo "b109535b29733bd596ecc8608e008732e617e97906f119c66dd7cf6ab2865a65 mysql2csv") || (echo "ERROR comparing hash, Found:" ;sha256sum mysql2csv) ) && chmod +x mysql2csv
(Download content of the gist, check checksum and make it executable.)
The following produces tab-delimited and valid CSV output. Unlike most of the other answers, this technique correctly handles escaping of tabs, commas, quotes, and new lines without any stream filter like sed, AWK, or tr.
The example shows how to pipe a remote MySQL table directly into a local SQLite database using streams. This works without the FILE privilege that SELECT ... INTO OUTFILE requires. I have added line breaks for readability.
mysql -B -C --raw -u 'username' --password='password' --host='hostname' 'databasename'
-e 'SELECT
CONCAT('\''"'\'',REPLACE(`id`,'\''"'\'', '\''""'\''),'\''"'\'') AS '\''id'\'',
CONCAT('\''"'\'',REPLACE(`value`,'\''"'\'', '\''""'\''),'\''"'\'') AS '\''value'\''
FROM sampledata'
2>/dev/null | sqlite3 -csv -separator $'\t' mydb.db '.import /dev/stdin mycsvtable'
The 2>/dev/null is needed to suppress the warning about the password on the command line.
If your data has NULLs, you can use the IFNULL() function in the query.
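For example, a sketch of the SELECT expression with NULL handling added (same sample table as above, shown with plain double-quote escaping for readability):

mysql -B -N databasename -e \
  "SELECT CONCAT('\"', REPLACE(IFNULL(value, ''), '\"', '\"\"'), '\"') FROM sampledata"

IFNULL turns NULLs into empty strings, so they arrive in the CSV as "" rather than as the literal string NULL.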
A simple solution in Python that writes a standard-format CSV file with headers and writes data as a stream (low memory use):
import csv

def export_table(connection, table_name, output_filename):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM " + table_name)
    # thanks to https://gist.github.com/madan712/f27ac3b703a541abbcd63871a4a56636 for this hint
    header = [descriptor[0] for descriptor in cursor.description]
    # newline='' lets the csv module control line endings itself
    with open(output_filename, 'w', newline='') as csvfile:
        csv_writer = csv.writer(csvfile, dialect='excel')
        csv_writer.writerow(header)
        for row in cursor:
            csv_writer.writerow(row)
You could use it like:
import mysql.connector as mysql
# (or https://github.com/PyMySQL/PyMySQL should work but I haven't tested it)

db = mysql.connect(
    host="localhost",
    user="USERNAME",
    db="DATABASE_NAME",
    port=9999)

for table_name in ['table1', 'table2']:
    export_table(db, table_name, table_name + '.csv')

db.close()
For simplicity, this intentionally doesn't include some fancier stuff from another answer, like using an environment variable for credentials, contextlib, etc. There is a subtlety mentioned there about line endings; opening the file with newline='' (as above) is the csv module's documented way of handling them.
Tiny Bash script for doing simple query to CSV dumps, inspired by Tim Harding's answer.
#!/bin/bash
# $1 = query to execute
# $2 = outfile
# $3 = mysql database name
# $4 = mysql username
if [ -z "$1" ]; then
echo "Query not given"
exit 1
fi
if [ -z "$2" ]; then
echo "Outfile not given"
exit 1
fi
MYSQL_DB=""
MYSQL_USER="root"
if [ ! -z "$3" ]; then
MYSQL_DB=$3
fi
if [ ! -z "$4" ]; then
MYSQL_USER=$4
fi
if [ -z "$MYSQL_DB" ]; then
echo "Database name not given"
exit 1
fi
if [ -z "$MYSQL_USER" ]; then
echo "Database user not given"
exit 1
fi
mysql -u "$MYSQL_USER" -p -D "$MYSQL_DB" -B -s -e "$1" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > "$2"
echo "Written to $2"
If you are getting a secure-file-priv error even after moving your destination file into C:\ProgramData\MySQL\MySQL Server 8.0\Uploads, and the query
SELECT * FROM attendance INTO OUTFILE 'C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\FileName.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
still does not work, just change the backslashes (\) in the path to forward slashes (/).
And that works!
Example:
SELECT * FROM attendance INTO OUTFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/FileName.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Each time the query runs successfully, it generates a new CSV file.
Cool, right?
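To see which directory (if any) the server allows INTO OUTFILE to write to, you can check the variable directly from any shell with the mysql client:

mysql -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"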
The following Bash script works for me. It optionally also gets the schema for the requested tables.
#!/bin/bash
#
# Export MySQL data to CSV
#https://stackoverflow.com/questions/356578/how-to-output-mysql-query-results-in-csv-format
#
# ANSI colors
#http://www.csc.uvic.ca/~sae/seng265/fall04/tips/s265s047-tips/bash-using-colors.html
blue='\033[0;34m'
red='\033[0;31m'
green='\033[0;32m' # '\e[1;32m' is too bright for white bg.
endColor='\033[0m'
#
# A colored message
# params:
# 1: l_color - the color of the message
# 2: l_msg - the message to display
#
color_msg() {
local l_color="$1"
local l_msg="$2"
echo -e "${l_color}$l_msg${endColor}"
}
#
# Error
#
# Show the given error message on standard error and exit
#
# Parameters:
# 1: l_msg - the error message to display
#
error() {
local l_msg="$1"
# Use ANSI red for error
color_msg $red "Error:" 1>&2
color_msg $red "\t$l_msg" 1>&2
usage
}
#
# Display usage
#
usage() {
echo "usage: $0 [-h|--help]" 1>&2
echo " -o | --output csvdirectory" 1>&2
echo " -d | --database database" 1>&2
echo " -t | --tables tables" 1>&2
echo " -p | --password password" 1>&2
echo " -u | --user user" 1>&2
echo " -hs | --host host" 1>&2
echo " -gs | --get-schema" 1>&2
echo "" 1>&2
echo " output: output CSV directory to export MySQL data into" 1>&2
echo "" 1>&2
echo " user: MySQL user" 1>&2
echo " password: MySQL password" 1>&2
echo "" 1>&2
echo " database: target database" 1>&2
echo " tables: tables to export" 1>&2
echo " host: host of target database" 1>&2
echo "" 1>&2
echo " -h|--help: show help" 1>&2
exit 1
}
#
# show help
#
help() {
echo "$0 Help" 1>&2
echo "===========" 1>&2
echo "$0 exports a CSV file from a MySQL database optionally limiting to a list of tables" 1>&2
echo " example: $0 --database=cms --user=scott --password=tiger --tables=person --output person.csv" 1>&2
echo "" 1>&2
usage
}
domysql() {
mysql --host $host -u$user --password=$password $database
}
getcolumns() {
local l_table="$1"
echo "describe $l_table" | domysql | cut -f1 | grep -v "Field" | grep -v "Warning" | paste -sd "," - 2>/dev/null
}
host="localhost"
mysqlfiles="/var/lib/mysql-files/"
# Parse command line options
while true; do
#echo "option $1"
case "$1" in
# Options without arguments
-h|--help) usage;;
-d|--database) database="$2" ; shift ;;
-t|--tables) tables="$2" ; shift ;;
-o|--output) csvoutput="$2" ; shift ;;
-u|--user) user="$2" ; shift ;;
-hs|--host) host="$2" ; shift ;;
-p|--password) password="$2" ; shift ;;
-gs|--get-schema) option="getschema";;
(--) shift; break;;
(-*) echo "$0: error - unrecognized option $1" 1>&2; usage;;
(*) break;;
esac
shift
done
# Checks
if [ "$csvoutput" == "" ]
then
error "output CSV directory is not set"
fi
if [ "$database" == "" ]
then
error "MySQL database is not set"
fi
if [ "$user" == "" ]
then
error "MySQL user is not set"
fi
if [ "$password" == "" ]
then
error "MySQL password is not set"
fi
color_msg $blue "exporting tables of database $database"
if [ "$tables" = "" ]
then
tables=$(echo "show tables" | domysql)
fi
case $option in
getschema)
rm -f $csvoutput$database.schema
for table in $tables
do
color_msg $blue "getting schema for $table"
echo -n "$table:" >> $csvoutput$database.schema
getcolumns $table >> $csvoutput$database.schema
done
;;
*)
for table in $tables
do
color_msg $blue "exporting table $table"
cols=$(grep "$table:" $csvoutput$database.schema | cut -f2 -d:)
if [ "$cols" = "" ]
then
cols=$(getcolumns $table)
fi
ssh $host rm -f $mysqlfiles/$table.csv
cat <<EOF | mysql --host $host -u$user --password=$password $database
SELECT $cols FROM $table INTO OUTFILE '$mysqlfiles$table.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
EOF
scp $host:$mysqlfiles/$table.csv $csvoutput$table.csv.raw
(echo "$cols"; cat $csvoutput$table.csv.raw) > $csvoutput$table.csv
rm $csvoutput$table.csv.raw
done
;;
esac
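A sample pair of runs (the script name export_csv.sh and all values are illustrative): the first captures the schema, the second exports the tables using that schema's columns as CSV headers:

./export_csv.sh -d cms -u scott -p tiger -t "person address" -o /tmp/ -gs
./export_csv.sh -d cms -u scott -p tiger -t "person address" -o /tmp/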