MySQL query to print output as CSV to standard output

I want to do the following:
mysql -uuser -ppass -h remote.host.tld database < script.sql
where script.sql contains the following:
SELECT *
FROM webrecord_wr25mfz_20101011_175524
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
I want CSV output directed to standard out. The reason is that running this query with an INTO OUTFILE 'blah.csv' will save the file on the remote host; I want the file saved on the local host.
If I could just redirect the standard output to a file, that would be dandy.

The answers above don't seem to fully answer the original question, and I'm not sure whether this does either, but hopefully it helps someone:
See How to output MySQL query results in CSV format? for a lot of discussion of sed-based approaches. For example, based on the original parameters, the following might be sufficient:
mysql --batch -u user -h remote.host.tld database --port 3306 -ppass -e "SELECT * FROM webrecord_wr25mfz_20101011_175524;" | sed 's/\t/,/g' 2>&1
This is similar to the answer above, but redirecting to stdout instead of blah.csv.
However, I've used https://stackoverflow.com/a/2543226/2178980 to correctly escape double quotes and convert the output to comma-separated values (I'm not sure this works if you need to preserve literal tabs, though there are many ways to address that):
mysql --batch -u user -h remote.host.tld database --port 3306 -ppass -e "SELECT * FROM webrecord_wr25mfz_20101011_175524;" | perl -lpe 's/"/\\"/g; s/^|$/"/g; s/\t/","/g' 2>&1
Execute the SQL "SELECT * FROM webrecord_wr25mfz_20101011_175524;" via mysql (this output will be tab-separated)
Convert to comma-separated by piping to perl -lpe 's/"/\\"/g; s/^|$/"/g; s/\t/","/g'
Merge any error messages into the same stream by appending 2>&1 (the query results already go to stdout)
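Since the CSV now arrives on standard output, saving it on the local host is just a shell redirection (a sketch; local.csv is an arbitrary file name):
mysql --batch -u user -h remote.host.tld database --port 3306 -ppass -e "SELECT * FROM webrecord_wr25mfz_20101011_175524;" | perl -lpe 's/"/\\"/g; s/^|$/"/g; s/\t/","/g' > local.csv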

This is in part a duplicate of MySQL SELECT INTO OUTFILE to a different server. FIELDS TERMINATED BY can't be used without INTO OUTFILE.
A (not so elegant) alternative is using the --batch option to produce tab-separated output and piping the stdout through sed. Something like this:
mysql --batch -uuser -ppass -h remote.host.tld database < stack.sql | sed 's/\t/,/g' > blah.csv
Be aware that --batch escapes special characters, so depending on the data you have and its predictability, you might need to adjust the sed.

Try this: mysql -uuser -ppass -h remote.host.tld database < script.sql > blah.csv
This redirects standard output (the query results) to blah.csv; error messages still go to stderr, which you can capture separately with 2> errors.log.

Related

connect to mysql db and execute query and export result to variable - bash script

I want to connect to a MySQL database, execute some queries, and export the result to a variable, and all of this needs to be done entirely in a bash script.
I have a code snippet, but it does not work.
#!/bin/bash
BASEDIR=$(dirname $0)
cd $BASEDIR
mysqlUser=n_userdb
mysqlPass=d2FVR0NA3
mysqlDb=n_datadb
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1")
echo "${result}" >> a.txt
What's the problem?
The issue was resolved in the chat by using the correct password.
If you further want to get only the data, use mysql with -NB (or --skip-column-names and --batch).
Also, the script needs to quote the variable expansions, or there will be issues with usernames/passwords containing characters that are special to the shell. Additionally, uppercase variable names are usually reserved for system variables.
#!/bin/sh
basedir=$(dirname "$0")
mysqlUser='n_userdb'
mysqlPass='d2FVR0NA3'
mysqlDb='n_datadb'
cd "$basedir" &&
mysql -NB -u "$mysqlUser" -p"$mysqlPass" -D "$mysqlDb" \
-e 'select * from confs limit 1' >a.txt 2>a-err.txt
Ideally though, you'd use a my.cnf file to configure the username and password.
See e.g.
MySQL Utilities - ~/.my.cnf option file
mysql .my.cnf not reading credentials properly?
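For example, a minimal ~/.my.cnf sketch (using the credentials from the question; keep the file private with chmod 600 ~/.my.cnf):
[client]
user=n_userdb
password=d2FVR0NA3
With that in place, the credentials can be dropped from the command line:
mysql -D n_datadb -NB -e 'select * from confs limit 1'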
Do this:
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1" | grep '^\|' | tail -1)
Bash's $() captures multi-line output (border lines, header, and data), which is awkward to handle, so the hack above greps out only the interesting part: the data.
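A simpler sketch that avoids the grep/tail post-processing: ask mysql for bare data directly (-s suppresses the table formatting, -N the column headers):
result=$(mysql -u "$mysqlUser" -p"$mysqlPass" -D "$mysqlDb" -sN -e 'select * from confs limit 1')
echo "$result"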

gnu parallel mysql LOAD DATA LOCAL INFILE

I am trying to use GNU parallel to execute several LOAD DATA LOCAL INFILE mysql commands where:
{1} is the name of the file which I obtain from a UNIX find command pipe
{2} is the result of a chop.pl script that prints out a certain token from the file string according to certain rules
It seems that I am calling GNU parallel the correct way, except that it does not keep the double quotes around the mysql command after the -e, which causes it not to work.
E.g.
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "LOAD DATA LOCAL INFILE '{2}' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='{1}', col6='foo'"
The command it is attempting, lacking the double quotes after -e, is like so:
mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e LOAD DATA LOCAL INFILE '/my/file/name/yadda_yadda-12345678.txt' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='yadda_yadda', col6='foo'
Any ideas how to add back the double-quotes after the -e?
The lazy and effective way: put the mysql command in a function, then have parallel call that, passing {1} and {2}.
Using functions is actually suggested by the parallel man pages:
https://www.gnu.org/software/parallel/man.html#QUOTING
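A minimal sketch of that approach (credentials, table and column names are the placeholders from the question; export -f makes the function visible to the shells parallel spawns, assuming bash):
loadfile() {
    token=$1
    file=$2
    mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "
        LOAD DATA LOCAL INFILE '$file' IGNORE INTO TABLE tblname
        IGNORE 1 LINES (col1,col2,col3,col4)
        SET col5='$token', col6='foo'"
}
export -f loadfile
find /my/folder/ -name "*.txt" | while read i; do chop.pl "$i"; echo "$i"; done | parallel -t -N 2 loadfile {1} {2}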
Answering my own question, the solution was to escape all double quotes, single quotes, and parentheses:
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e \"LOAD DATA LOCAL INFILE \'{2}\' IGNORE INTO TABLE tblname IGNORE 1 LINES \(col1,col2,col3,col4\) set col5=\'{1}\', col6=\'foo\'\"

Export MySQL Table to .tsv from a remote server without using `INTO INFILE`

I have a very large table on an Amazon RDS instance that I need to export as a .tsv with specific settings. I cannot use INTO OUTFILE on the RDS instance, so I must export the table onto the local drive of the machine from which I connect to the RDS MySQL instance.
I have specific settings I need to specify for the .tsv. They are:
terminate with \t
wrap with nothing
escape with a backslash
null values are blank
How do I do this from the command line?
You can do this:
mysql -uroot -proot -h "mysql.host.url" -N -B -e "select * from world.city" | sed 's/NULL/ /g' > test.tsv
-N tells it not to print column headers. -B is "batch mode", and uses tabs to separate fields. NULL is replaced by space.
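Note that s/NULL/ /g also rewrites the literal string NULL wherever it appears inside real data. If that is a concern, a stricter sketch (assuming perl is available) blanks only whole NULL fields:
mysql -uroot -proot -h "mysql.host.url" -N -B -e "select * from world.city" | perl -pe 's/(^|\t)NULL(?=\t|$)/$1/g' > test.tsv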
For 100 GB databases, this may help:
mysql -uroot -proot -h "mysql.host.url" -N -B -e "select * from world.city" > test.tsv
sed 's/NULL/ /g' < test.tsv > new.tsv
But for huge databases, I recommend you use ETL tools.

Dump all tables in CSV format using 'mysqldump'

I need to dump all tables in MySQL in CSV format.
Is there a command using mysqldump to just output every row for every table in CSV format?
First, I can give you the answer for one table:
The trouble with all these INTO OUTFILE or --tab=tmpfile (and -T/path/to/directory) answers is that they require running mysqldump on the same server as the MySQL server, and having those access rights.
My solution was simply to use mysql (not mysqldump) with the -B parameter, inline the SELECT statement with -e, then massage the ASCII output with sed, and wind up with CSV including a header field row:
Example:
mysql -B -u username -ppassword database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
"id","login","password","folder","email"
"8","mariana","xxxxxxxxxx","mariana",""
"3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki#squaredesign.com"
"4","miedziak","xxxxxxxxxx","miedziak","miedziak#mail.com"
"5","Sarko","xxxxxxxxx","Sarko",""
"6","Logitrans
Poland","xxxxxxxxxxxxxx","LogitransPoland",""
"7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
"9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
"11","Brandfathers and
Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
"12","Imagine
Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
"13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
"101","tmp","xxxxxxxxxxxxxxxxxxxxx","_","WOBC-14.squaredesign.atlassian.net#yoMama.com"
Add a > outfile.csv at the end of that one-liner to get your CSV file for that table.
Next, get a list of all your tables with
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    echo .....;
done
Between the do and ; done, insert the long command from Part 1 above, but substitute $tb for the table name.
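Putting the two parts together, a sketch of the full loop (credentials and host are placeholders, as above):
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    mysql -B -u username -ppassword dbname -h dbhost -e "SELECT * FROM \`$tb\`;" \
    | sed "s/\"/\"\"/g;s/\t/\",\"/g;s/^/\"/;s/$/\"/" > "$tb.csv"
done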
mysqldump -u username -p -t -T/path/to/directory dbname table_name --fields-terminated-by=','
This command will create two files in /path/to/directory: table_name.sql and table_name.txt.
The SQL file will contain the table creation schema, and the txt file will contain the records of the table_name table with fields delimited by a comma.
If you are using MySQL or MariaDB, the easiest and most performant way to dump a single table to CSV is:
SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;
Now you can use other techniques to repeat this command for multiple tables. See more details here:
https://mariadb.com/kb/en/the-mariadb-library/select-into-outfile/
https://dev.mysql.com/doc/refman/5.7/en/select-into.html
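For instance, a sketch of a shell loop that repeats the statement for every table (credentials and the /exportdata path are placeholders; the server writes the files, so the directory must be writable by mysqld and permitted by secure_file_priv):
for tb in $(mysql -uuser -ppass -sN -e "SHOW TABLES FROM mydb;"); do
    mysql -uuser -ppass mydb -e "SELECT * INTO OUTFILE '/exportdata/$tb.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
        LINES TERMINATED BY '\n' FROM \`$tb\`;"
done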
mysqldump has options for CSV formatting:
--fields-terminated-by=name
    Fields in the output file are terminated by the given string (e.g. \t or ",").
--fields-enclosed-by=name
    Fields in the output file are enclosed by the given character (e.g. "\"").
--lines-terminated-by=name
    Lines in the output file are terminated by the given string (\r, \n, or \r\n).
Naturally you should mysqldump each table individually.
I suggest you gather all table names in a text file. Then, iterate through all tables running mysqldump. Here is a script that will dump and gzip 10 tables at a time:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT CONCAT(table_schema,'.',table_name)"
SQLSTMT="${SQLSTMT} FROM information_schema.tables WHERE table_schema NOT IN "
SQLSTMT="${SQLSTMT} ('information_schema','performance_schema','mysql')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/DBTB.txt
COMMIT_COUNT=0
COMMIT_LIMIT=10
TARGET_FOLDER=/path/to/csv/files
for DBTB in `cat /tmp/DBTB.txt`
do
    DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
    TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
    DUMPFILE=${DB}-${TB}.csv.gz
    # run each dump in the background so up to COMMIT_LIMIT of them run at once
    mysqldump ${MYSQL_CONN} -T ${TARGET_FOLDER} --fields-terminated-by="," --fields-enclosed-by="\"" --lines-terminated-by="\r\n" ${DB} ${TB} | gzip > ${DUMPFILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
This worked well for me:
mysqldump <DBNAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
Or if you want to only dump a specific table:
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
I'm dumping to /var/lib/mysql-files/ to avoid this error:
mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
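To see which directory (if any) the server permits for INTO OUTFILE and --tab dumps, you can check the variable first:
mysql -u username -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"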
It looks like others have had this problem too, and there is now a simple Python script for converting the output of mysqldump into CSV files.
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
You can also do it using the Data Export tool in dbForge Studio for MySQL.
It will allow you to select some or all tables and export them into CSV format.

Dump a mysql database to a plaintext (CSV) backup from the command line

I'd like to avoid mysqldump since that outputs in a form that is only convenient for mysql to read. CSV seems more universal (one file per table is fine). But if there are advantages to mysqldump, I'm all ears. Also, I'd like something I can run from the command line (linux). If that's a mysql script, pointers to how to make such a thing would be helpful.
If you can cope with table-at-a-time, and your data is not binary, use the -B option to the mysql command. With this option it'll generate TSV (tab-separated) files which can be imported into Excel, etc., quite easily:
% echo 'SELECT * FROM table' | mysql -B -uxxx -pyyy database
Alternatively, if you've got direct access to the server's file system, use SELECT INTO OUTFILE which can generate real CSV files:
SELECT * INTO OUTFILE 'table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table
In MySQL itself, you can specify CSV output like:
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/
You can dump a whole database in one go with mysqldump's --tab option. You supply a directory path and it creates one .sql file per table with the DROP TABLE IF EXISTS / CREATE TABLE statements, and a .txt file with the contents, tab-separated. To create comma-separated files instead, you could use the following:
mysqldump --password --fields-optionally-enclosed-by='"' --fields-terminated-by=',' --tab /tmp/path_to_dump/ database_name
That path needs to be writable by both the mysql user and the user running the command, so for simplicity I recommend chmod 777 /tmp/path_to_dump/ first.
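For completeness, a dump made this way can be restored with the schema file plus mysqlimport, which derives the table name from the file name (a sketch, using the same placeholders):
mysql database_name < /tmp/path_to_dump/table_name.sql
mysqlimport --fields-optionally-enclosed-by='"' --fields-terminated-by=',' database_name /tmp/path_to_dump/table_name.txt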
The SELECT INTO OUTFILE option wouldn't work for me, but the roundabout way below, piping the tab-delimited output through sed, did:
mysql -uusername -ppassword -e "SELECT * from tablename" dbname | sed 's/\t/","/g;s/^/"/;s/$/"/' > /path/to/file/filename.csv
Here is the simplest command for it:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' | sed 's/\t/,/g' > output.csv
If there is a comma in a column value, then we can generate a .tsv instead of a .csv with the following command:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' > output.tsv
If you really need a "Backup" then you also need database schema, like table definitions, view definitions, store procedures and so on. A backup of a database isn't just the data.
The value of the mysqldump format for backup is specifically that it is very EASY to use it to restore mysql databases. A backup that isn't easily restored is far less useful. If you are looking for a method to reliably back up mysql data so you can restore to a mysql server, then I think you should stick with the mysqldump tool.
Mysql is free and runs on many different platforms. Setting up a new mysql server that I can restore to is simple. I am not at all worried about not being able to setup mysql so I can do a restore.
I would be far more worried about a custom backup/restore based on a fragile format like csv/tsv failing. Are you sure that all your quotes, commas, or tabs that are in your data would get escaped correctly and then parsed correctly by your restore tool?
If you are looking for a method to extract the data then see several in the other answers.
You can use the script below to get the output as CSV files, one file per table, with headers.
for tn in `mysql --batch --skip-pager --skip-column-names --raw -uuser -ppassword -e "show tables from mydb"`
do
    mysql -uuser -ppassword mydb -B -e "select * from \`$tn\`;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $tn.csv
done
Here user is your user name, password is the password (supplying it inline saves you from typing it for each table), and mydb is the database name.
Explanation of the script: the first sed expression replaces the tabs with ",", so the fields end up enclosed in double quotes and separated by commas; the second inserts a double quote at the beginning of each line and the third at the end. The final one, s/\n//g, is meant to take care of the \n, though it is effectively a no-op since sed processes one line at a time.
If you want to dump the entire db, one delimited file per table:
#!/bin/bash
host=hostname
uname=username
pass=password
port=portnr
db=db_name
s3_url=s3://buckera/db_dump/
DATE=`date +%Y%m%d`
rm -rf $DATE

echo 'show tables' | mysql -B -h${host} -u${uname} -p${pass} -P${port} ${db} > tables.txt
awk 'NR>1' tables.txt > tables_new.txt

while IFS= read -r line
do
    mkdir -p $DATE/$line
    echo "select * from $line" | mysql -B -h"${host}" -u"${uname}" -p"${pass}" -P"${port}" "${db}" > $DATE/$line/dump.tsv
done < tables_new.txt

touch $DATE/$DATE.fin
rm -rf tables_new.txt tables.txt
Check out mk-parallel-dump which is part of the ever-useful maatkit suite of tools. This can dump comma-separated files with the --csv option.
This can do your whole db without specifying individual tables, and you can specify groups of tables in a backupset table.
Note that it also dumps table definitions, views and triggers into separate files. In addition to providing a complete backup in a more universally accessible form, it is also immediately restorable with mk-parallel-restore.
Two line PowerShell answer:
# Store in variable
$Global:csv = (mysql -uroot -p -hlocalhost -Ddatabase_name -B -e "SELECT * FROM some_table") `
| ConvertFrom-Csv -Delimiter "`t"
# Out to csv
$Global:csv | Export-Csv "C:\temp\file.csv" -NoTypeInformation
Boom-bata-boom
-D = the name of your database
-e = query
-B = tab-delimited
There's a slightly simpler way to get all the tables into tab-delimited files fast:
#!/bin/bash
tablenames=$(mysql your_database -e "show tables;" -B | sed "1d")
IFS=$'\n'
tables=($tablenames)
for table in "${tables[@]}"; do
    mysql your_database -e "select * from ${table}" -B > "${table}.tsv"
done
Here's a basic python script that does the work! You can also choose to export only the headers (column names), or both headers and data.
Just change the database credentials and run the script. It will output all the data to the output folder.
To run the script:
Run: pip install mysql-connector-python
Change database credentials in the "INPUT" section
Run: python filename.py
import mysql.connector
from pathlib import Path
import csv

#========INPUT===========
databaseHost = ""
databaseUsername = ""
databasePassword = ""
databaseName = ""
outputDirectory = "./WITH-DATA/"
exportTableData = True  #MAKING THIS FIELD FALSE WILL STORE ONLY THE TABLE HEADERS (COLUMN NAMES) IN THE CSV FILE
#========INPUT END===========

Path(outputDirectory).mkdir(parents=True, exist_ok=True)

mydb = mysql.connector.connect(
    host=databaseHost,
    user=databaseUsername,
    password=databasePassword
)
mycursor = mydb.cursor()
mycursor.execute("USE " + databaseName)
mycursor.execute("SHOW TABLES")
tables = mycursor.fetchall()
tableNames = [table[0] for table in tables]

print("================================")
print("Total number of tables: " + str(len(tableNames)))
print(tableNames)
print("================================")

for tableName in tableNames:
    print("================================")
    print("Processing: " + str(tableName))
    # reconnect per table so each query starts from a clean cursor
    mydb = mysql.connector.connect(
        host=databaseHost,
        user=databaseUsername,
        password=databasePassword
    )
    mycursor = mydb.cursor()
    mycursor.execute("USE " + databaseName)
    if exportTableData:
        mycursor.execute("SELECT * FROM " + tableName)
    else:
        mycursor.execute("SELECT * FROM " + tableName + " LIMIT 1")
    print(mycursor.column_names)
    with open(outputDirectory + tableName + ".csv", 'w', newline='') as csvfile:
        csvwriter = csv.writer(csvfile)
        csvwriter.writerow(mycursor.column_names)
        if exportTableData:
            myresult = mycursor.fetchall()
            csvwriter.writerows(myresult)