How to export a MySQL database to CSV

What is the best way to export a MySQL database to a CSV file without including indexes, table structures etc?
I just need to get all the data; I have a lot of tables, so I don't want to do it one by one.
I'm using 0xdbe and Workbench running on Linux.
Thanks!

mysqldump has a mode to dump tab-separated files, one per table.
mysqldump -u <username> -p<password> -T <output_directory> --no-create-info <database_name>
With a bit of tweaking, this can be made to look like a CSV file.
mysqldump -u <username> -p<password> -T <output_directory> --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info <database_name>
Note that the file is written by the database, so whatever user your database is running as needs to have write access to the output directory!
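A minimal sketch of preparing such a directory (assuming the server runs as the mysql user and the path is allowed by secure_file_priv; both are setup-specific):
sudo mkdir -p /tmp/dumpdir
sudo chown mysql:mysql /tmp/dumpdir
mysqldump -u <username> -p<password> -T /tmp/dumpdir --no-create-info <database_name>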

This worked well for me:
mysqldump DBNAME TABLENAME --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
I'm dumping to /var/lib/mysql-files/ to avoid this error:
mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
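You can ask the server which directory it allows before picking one; secure_file_priv is a read-only server variable:
mysql -u <username> -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"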

mysqldump might be useful in this case. Even though it isn't CSV output, it contains all the indexes and table structures as well as the data.
mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql

Related

How to use a file to migrate data from mysql to a clickhouse?

I need to migrate data from MySQL to ClickHouse and do some testing. The two databases cannot reach each other over the network, so I have to transfer the data using files. The first thing I think of is using the mysqldump tool to export .sql files.
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 -uroot -proot database_name table_name > test.sql
Then I found that there are 120 million rows in the MySQL table, and the INSERT statements in a .sql file exported this way are very long. How can I avoid this, for example by exporting 1,000 rows per INSERT statement?
In addition, this .sql file is too big. Can it be split into smaller files, and what would that take?
mysqldump has an option to turn multi-value inserts on or off. You can do either of the following, depending on which you prefer:
Separate Insert statements per value:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert -uroot -proot database_name table_name > test.sql
Multi-value insert statements:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --extended-insert -uroot -proot database_name table_name > test.sql
So what you can do is dump the schema first with the following:
mysqldump -h192.168.212.128 -P3306 --default-character-set=utf8 --no-data -uroot -proot database_name > dbschema.sql
Then dump the data as individual insert statements by themselves:
mysqldump -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert --no-create-info -uroot -proot database_name table_name > test.sql
You can then split the INSERT file into as many pieces as you like. If you're on UNIX, use the split command, for example:
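Because --skip-extended-insert writes one complete INSERT per line, a line-based split won't cut a statement in half. A minimal sketch (the chunk size is arbitrary):
split -l 1000000 test.sql test_part_
This produces test_part_aa, test_part_ab, and so on, each of which can be imported separately.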
And if you're worried about how long the import takes, you might also want to add the --disable-keys option to speed up inserts as well.
BUT my recommendation is not to worry about this so much. mysqldump should not produce a statement that exceeds MySQL's ability to import it, and it should run faster than individual inserts. As to file size, one nice thing about SQL is that it compresses beautifully: that multi-gigabyte SQL dump will turn into a nicely compact gzip, bzip2, or zip file.
EDIT: If you really want to adjust the number of values per INSERT in a multi-value dump, you can add the --max_allowed_packet option, e.g. --max_allowed_packet=24M. The packet size bounds the size of a single data packet (e.g. one INSERT), so if you set it low enough it should reduce the number of values per insert. Still, I'd try it as is before you start messing with that.
On the ClickHouse side, an import command looks like:
clickhouse-client --host="localhost" --port="9000" --max_threads="1" --query="INSERT INTO database_name.table_name FORMAT Native" < clickhouse_dump.sql

How to take a mysqldump with UTF8?

I am trying to take a MySQL dump with the command:
mysqldump -u xxxx -p dbxxx > xxxx270613.sql
What is the command to take a mysqldump with UTF8?
Hi, please try the following.
mysqldump -u [username] -p[password] --default-character-set=utf8 -N --routines --skip-triggers --databases [database_name] > [dump_file.sql]
I had the problem that, even with the UTF-8 flags applied when creating the dump, I could not avoid broken characters when importing a dump created from a DB whose many text columns used latin1.
Some googling and especially this site helped me to finally figure it out.
1. mysqldump with the --skip-set-charset --default-character-set=latin1 flags, to avoid MySQL attempting a reconversion and setting a charset.
2. Fix the dump by replacing the charset strings, using sed in the terminal:
sed -i 's/latin1_swedish_ci/utf8mb4_0900_ai_ci/g' mysqlfile.sql
sed -i 's/latin1/utf8mb4/g' mysqlfile.sql
To make sure you don't miss anything, you can run grep -i 'latin1' mysqlfile.sql before step 2 and then come up with more sed commands. Introduction to sed here.
3. Create a clean DB:
CREATE DATABASE mydatabase CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
4. Apply the fixed dump.
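Putting those four steps together, a minimal end-to-end sketch (all database and file names are placeholders):
mysqldump --skip-set-charset --default-character-set=latin1 -u root -p olddb > mysqlfile.sql
sed -i 's/latin1_swedish_ci/utf8mb4_0900_ai_ci/g' mysqlfile.sql
sed -i 's/latin1/utf8mb4/g' mysqlfile.sql
mysql -u root -p -e "CREATE DATABASE mydatabase CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"
mysql -u root -p mydatabase < mysqlfile.sql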
--default-character-set=utf8 is the option you are looking for; it can be used together with these others:
mysqldump --events \
--routines \
--triggers \
--add-drop-database \
--compress \
--hex-blob \
--opt \
--skip-comments \
--single-transaction \
--skip-set-charset \
--default-character-set=utf8 \
--databases dbname > my.dump
Also, check out --hex-blob: it dumps binary strings in hexadecimal format, which makes the dump more portable and helps guarantee that the import works.
The --databases option causes all names on the command line to be treated as database names. Without this option, mysqldump treats the first name as a database name and those following as table names.
With --all-databases or --databases, mysqldump writes CREATE DATABASE and USE statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, mysqldump writes a DROP DATABASE statement preceding each CREATE DATABASE statement.
This helps to restore using:
# mysql < dump.sql
Instead of:
# mysql dbname < dump.sql
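To confirm the dump really carries those statements before restoring, a quick sanity check:
grep -E '^(CREATE DATABASE|USE)' dump.sql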

mysqldump won't dump my data

here is the command I'm using:
mysqldump.exe -u root -d capstone -verbse --skip-quote-names > capstone.sql
and the output I get
mysqldump: Warning: Can't set SQL_QUOTE_SHOW_CREATE option ()
-- Skipping dump data for table 'users', --no-data was used
Any ideas? If I dump to XML it works, but the system I'm importing into doesn't handle XML, and somehow my data ruins the CSV output too.
The -d option is an alias of --no-data; see https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_no-data
Perhaps you intended to say "use database capstone", but in that case it wouldn't be -d capstone; the database name doesn't need any switch/option, just put it in there:
shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases
https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#mysqldump-syntax
I think you mean to use either -B / --databases (which allows you to name multiple databases to dump, instead of one database followed by table names) or no such argument at all. I think you also mistyped --verbose.
Note that if you include --databases a CREATE DATABASE statement is also included. This could be important depending up on how you intend to use the data.
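Putting both fixes together, the corrected command would presumably look like:
mysqldump.exe -u root --verbose --skip-quote-names capstone > capstone.sql
If the SQL_QUOTE_SHOW_CREATE warning persists, try dropping --skip-quote-names as well.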

Dump all tables in CSV format using 'mysqldump'

I need to dump all tables in MySQL in CSV format.
Is there a command using mysqldump to just output every row for every table in CSV format?
First, I can give you the answer for one table:
The trouble with all these INTO OUTFILE or --tab=tmpfile (and -T/path/to/directory) answers is that they require running mysqldump on the same machine as the MySQL server, and having the corresponding access rights.
My solution was simply to use mysql (not mysqldump) with the -B parameter, inline the SELECT statement with -e, then massage the ASCII output with sed, and wind up with CSV including a header field row:
Example:
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
"id","login","password","folder","email"
"8","mariana","xxxxxxxxxx","mariana",""
"3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki#squaredesign.com"
"4","miedziak","xxxxxxxxxx","miedziak","miedziak#mail.com"
"5","Sarko","xxxxxxxxx","Sarko",""
"6","Logitrans
Poland","xxxxxxxxxxxxxx","LogitransPoland",""
"7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
"9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
"11","Brandfathers and
Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
"12","Imagine
Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
"13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
"101","tmp","xxxxxxxxxxxxxxxxxxxxx","_","WOBC-14.squaredesign.atlassian.net#yoMama.com"
Add a > outfile.csv at the end of that one-liner, to get your CSV file for that table.
Next, get a list of all your tables with
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
Between the do and ; done, insert the long command I wrote in Part 1 above, substituting $tb for your table name; the assembled loop is shown below.
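For reference, the assembled loop might look like this (same placeholder credentials as above):
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
  mysql -B -u username -ppassword dbname -h dbhost -e "SELECT * FROM \`$tb\`;" \
  | sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > "$tb.csv"
done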
The following command will create two files in /path/to/directory: table_name.sql and table_name.txt.
The SQL file will contain the table-creation schema, and the txt file will contain the records of the table_name table, with fields delimited by commas.
mysqldump -u username -p -t -T/path/to/directory dbname table_name --fields-terminated-by=','
If you are using MySQL or MariaDB, the easiest and most performant way to dump a single table to CSV is:
SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;
Now you can use other techniques to repeat this command for multiple tables. See more details here:
https://mariadb.com/kb/en/the-mariadb-library/select-into-outfile/
https://dev.mysql.com/doc/refman/5.7/en/select-into.html
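To repeat it for every table from the shell, here is a sketch that assumes the server may write to /exportdata and uses placeholder credentials:
for tb in $(mysql -u username -p'password' dbname -sN -e "SHOW TABLES;"); do
  mysql -u username -p'password' dbname -e "SELECT * INTO OUTFILE '/exportdata/$tb.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n' FROM \`$tb\`;"
done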
mysqldump has options for CSV formatting:
--fields-terminated-by=name
Fields in the output file are terminated by the given string, e.g. \t or ",".
--fields-enclosed-by=name
Fields in the output file are enclosed by the given character, e.g. "\"".
--lines-terminated-by=name
Lines in the output file are terminated by the given string: \r, \n, or \r\n.
Naturally you should mysqldump each table individually.
I suggest you gather all table names in a text file. Then, iterate through all tables running mysqldump. Here is a script that will dump and gzip 10 tables at a time:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
# Collect every db.table name outside the system schemas
SQLSTMT="SELECT CONCAT(table_schema,'.',table_name)"
SQLSTMT="${SQLSTMT} FROM information_schema.tables WHERE table_schema NOT IN "
SQLSTMT="${SQLSTMT} ('information_schema','performance_schema','mysql')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/DBTB.txt
COMMIT_COUNT=0
COMMIT_LIMIT=10
TARGET_FOLDER=/path/to/csv/files
for DBTB in `cat /tmp/DBTB.txt`
do
    DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
    TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
    DUMPFILE=${DB}-${TB}.csv.gz
    # Dump in the background so up to COMMIT_LIMIT tables run in parallel
    mysqldump ${MYSQL_CONN} -T ${TARGET_FOLDER} --fields-terminated-by="," --fields-enclosed-by="\"" --lines-terminated-by="\r\n" ${DB} ${TB} | gzip > ${DUMPFILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
This worked well for me:
mysqldump <DBNAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
Or if you want to only dump a specific table:
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
I'm dumping to /var/lib/mysql-files/ to avoid this error:
mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
It looks like others have had this problem too, and there is now a simple Python script for converting mysqldump output into CSV files.
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
You can also do it using the Data Export tool in dbForge Studio for MySQL.
It allows you to select some or all tables and export them to CSV format.

Dump only the data with mysqldump without any table information?

I am looking for the syntax for dumping all data in my mysql database. I don't want any table information.
mysqldump --no-create-info ...
Also, you may use:
--skip-triggers: if you are using triggers
--no-create-db: if you are using the --databases ... option
--compact: if you want to get rid of extra comments
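Combined, a data-only dump of a whole database might look like this ([user] and mydb are placeholders):
mysqldump -u [user] -p --no-create-info --skip-triggers --no-create-db --compact --databases mydb > mydb-data.sql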
This should work:
# To export to file (data only)
mysqldump -u [user] -p[pass] --no-create-info mydb > mydb.sql
# To export to file (structure only)
mysqldump -u [user] -p[pass] --no-data mydb > mydb.sql
# To import to database
mysql -u [user] -p[pass] mydb < mydb.sql
NOTE: there is no space between -p and [pass]
If you just want the INSERT queries, use the following:
mysqldump --skip-triggers --compact --no-create-info
Run man mysqldump in the terminal and you will find the explanation below:
--no-create-info, -t
Do not write CREATE TABLE statements that re-create each dumped table.
Note: This option does not exclude statements creating log file
groups or tablespaces from mysqldump output; however, you can use the
--no-tablespaces option for this purpose.
--no-data, -d
Do not write any table row information (that is, do not dump table
contents). This is useful if you want to dump only the CREATE TABLE
statement for the table (for example, to create an empty copy of the
table by loading the dump file).
# To export to file (data only)
mysqldump -t -u [user] -p[pass] mydb > mydb_data.sql
# To export to file (structure only)
mysqldump -d -u [user] -p[pass] mydb > mydb_structure.sql
It is best to dump to a compressed file:
mysqldump --no-create-info -u username -hhostname -p dbname | gzip > /backupsql.gz
and to restore, use pv (apt-get install pv) to monitor progress:
pv backupsql.gz | gunzip | mysql -uusername -hhostip -p dbname
I would suggest using the following snippet. It works fine even with huge tables (otherwise you'd open the dump in an editor and strip out the unneeded stuff, right? ;)
mysqldump --no-create-info --skip-triggers --extended-insert --lock-tables --quick DB TABLE > dump.sql
At least MySQL 5.x is required, but who runs old stuff nowadays... :)
Just dump the data in delimited-text format. Try dumping to a delimited file:
mysqldump -u [username] -p -t -T/path/to/directory [database] --fields-enclosed-by=\" --fields-terminated-by=,
When attempting to export data using the accepted answer I got an error:
ERROR 1235 (42000) at line 3367: This version of MySQL doesn't yet support 'multiple triggers with the same action time and event for one table'
As mentioned above:
mysqldump --no-create-info
This will export the data, but it will also export the CREATE TRIGGER statements. If, like me, you're outputting the database structure (which also includes triggers) with one command and then using the above command to get the data, you should also use '--skip-triggers'.
So if you want JUST the data:
mysqldump --no-create-info --skip-triggers
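For example, with placeholder credentials and database name:
mysqldump --no-create-info --skip-triggers -u [user] -p[pass] mydb > mydb-data-only.sql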