How can I use mysqldump to dump only specified columns from a specified table in a database?
I need something like this:
mysqldump --skip-lock-tables -q -Q -c -e -h localhost -u username -pPassword DatabaseName TableName Field1 Field5 | gzip > /tmp/dump.sql.gz
But I only get errors.
This isn't possible with mysqldump right now, but you can use SELECT ... INTO OUTFILE to get the desired output. In your case the query would look like this:
SELECT col1, col2 FROM DatabaseName.TableName INTO OUTFILE "c:/output.txt" FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY "\n";
Later you can load this file into another table, say TableName2, with just the two columns (i.e. col1 and col2), using the following SQL:
LOAD DATA LOCAL INFILE 'c:/output.txt' INTO TABLE TableName2
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
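If the goal is just a compressed two-column extract like the command above, you can also skip INTO OUTFILE (which writes the file on the database server host) and pipe a batch-mode SELECT through gzip on the client. A minimal sketch reusing the question's placeholder credentials and columns; note the output is tab-separated text, not SQL:
mysql --batch --skip-column-names -h localhost -u username -pPassword -e "SELECT Field1, Field5 FROM TableName" DatabaseName | gzip > /tmp/dump.tsv.gz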
Related
I need a shell script to load data into a MySQL DB. The script is as follows:
# !bin/bash
qry="DROP TABLE IF EXISTS tmp_x;
CREATE TEMPORARY TABLE tmp_x AS SELECT * FROM x.y LIMIT 0;
LOAD DATA INFILE 'path/xxx.csv'
INTO TABLE tmp_x
FIELDS TERMINATED BY "\,"
ENCLOSED BY "\""
LINES TERMINATED BY "\\n"
IGNORE 1 ROWS;"
mysql --host=xxx --user=xxx --password=xxx db << EOF
$qry
EOF
I get the following error message:
ERROR 1064 (42000) at line 3: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the
right syntax to use near '
ENCLOSED BY "
LINES TERMINATED BY \n
IGNORE 1 ROWS' at line 3
I think it has something to do with escaping some character; I tried changing to single quotes, but that does not work either.
I am working on Ubuntu 18.
Any help would be greatly appreciated.
Try this:
#!/bin/bash
mysql --host=xxx --user=xxx --password=xxx db << EOF
DROP TABLE IF EXISTS tmp_x;
CREATE TEMPORARY TABLE tmp_x AS SELECT * FROM x.y LIMIT 0;
LOAD DATA INFILE 'path/xxx.csv'
INTO TABLE tmp_x
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\\n'
IGNORE 1 ROWS;
EOF
If you really must use a variable, you'll need to play with quoting:
#!/bin/bash
qry="DROP TABLE IF EXISTS tmp_x;
CREATE TEMPORARY TABLE tmp_x AS SELECT * FROM x.y LIMIT 0;
LOAD DATA INFILE 'path/xxx.csv'
INTO TABLE tmp_x
FIELDS TERMINATED BY \",\"
ENCLOSED BY \"\\\"\"
LINES TERMINATED BY \"\\n\"
IGNORE 1 ROWS;"
mysql --host=xxx --user=xxx --password=xxx db << EOF
$qry
EOF
It can be troublesome to use double-quoted strings in your SQL, since you're using double-quotes as the string delimiter in bash. In other words, which is the double-quote that ends the bash string, and which should be treated as a literal double-quote character in the SQL?
To resolve this, use single-quotes for string delimiters in the SQL.
Another issue: There's no need to put a backslash before the comma for the field terminator.
Another issue: The \n needs another backslash.
Here's what I tried and it seems to work:
qry="DROP TABLE IF EXISTS tmp_x;
CREATE TEMPORARY TABLE tmp_x AS SELECT * FROM x.y LIMIT 0;
LOAD DATA INFILE 'path/xxx.csv'
INTO TABLE tmp_x
FIELDS TERMINATED BY ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\\\n'
IGNORE 1 ROWS;"
I only printed the query, I haven't tested running it.
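If you want to see exactly what MySQL will receive before running anything, print the expanded variable; the unquoted heredoc passes the expansion through unchanged, so this shows the literal SQL text:
printf '%s\n' "$qry"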
Related
I'm new to MySQL. I'm trying to create maintable by combining two existing tables. I used the following command, but it throws the error below.
create table maintable as select * from table1 union select * from table2;
Error 126 (HY000): incorrect key file for table 'c:\temp'; try to repair it
I googled around and increased tmp_table_size to 2G.
My configuration file looks like this.
[client]
port=3306
[mysql]
default-character-set=UTF8
[mysqld]
port=3306
max_allowed_packet=128M
basedir="C:/Program Files/MySQL/MySQL Server 5.5/"
datadir="C:/ProgramData/MySQL/MySQL Server 5.5/Data/"
character-set-server=UTF8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
max_connections=100
query_cache_size=0
table_cache=2G
tmp_table_size=2G
max_heap_table_size=2G
thread_cache_size=32
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=126M
read_buffer_size=128K
read_rnd_buffer_size=612K
sort_buffer_size=566K
innodb_additional_mem_pool_size=512M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=50M
innodb_buffer_pool_size=127M
innodb_log_file_size=24M
But nothing seems to resolve the error. Your help is really appreciated. Thank you.
Since you are using UNION, this can only work if both tables have the same number of columns in the same order. If they do, you can use either of the approaches below; both will be faster than a plain row-by-row insert.
Through dump:
Step 1: take a dump of table1 with structure and data.
Step 2: take a dump of table2 with data only.
Step 3: restore table1.
Step 4: restore the table2 data into table1.
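A sketch of those four steps with mysqldump; user and db are placeholders, --no-create-info produces a data-only dump, and the sed renames assume the backtick-quoted table names never occur inside the data itself:
mysqldump -u user -p db table1 > t1.sql                      # Step 1: structure + data
mysqldump -u user -p --no-create-info db table2 > t2.sql     # Step 2: data only
sed 's/`table1`/`maintable`/g' t1.sql | mysql -u user -p db  # Step 3: restore as maintable
sed 's/`table2`/`maintable`/g' t2.sql | mysql -u user -p db  # Step 4: append table2's rows
One caveat: plain UNION removes duplicate rows, while both copy approaches behave like UNION ALL, so add a de-duplication step afterwards if that matters.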
Through export/import method:
Step 1: back up both tables to CSV.
select * INTO OUTFILE 'd:\\backup\\table1.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\r\n' FROM table1;
select * INTO OUTFILE 'd:\\backup\\table2.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\r\n' FROM table2;
Step 2: now import both CSV files into the new table. Create the table first (see the sketch after these commands), then:
LOAD DATA LOCAL INFILE 'd:\\backup\\table1.csv' INTO TABLE mytable FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
LOAD DATA LOCAL INFILE 'd:\\backup\\table2.csv' INTO TABLE mytable FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
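The create step can be as simple as cloning one table's structure; a minimal sketch, assuming the combined table should mirror table1 (mytable is the placeholder name used above):
CREATE TABLE mytable LIKE table1;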
Related
Below is my Perl script code:
#!/usr/bin/perl -w
..
# db connection
..
$sth = $dbh->prepare("SELECT * FROM Table_nm") or warn $dbh->errstr;
$sth->execute or die "can't execute the query: " . $sth->errstr;
I want to export the result of the MySQL query into a CSV file.
Please help. Thanks!
Unless you need to process the result set before writing it out, it is more efficient to export directly to a CSV file by adjusting the SQL statement.
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
13.2.9.1 SELECT ... INTO Syntax
EDIT
To get a header line, add a second SELECT statement that supplies the column names, and UNION it with the original SELECT.
Example:
mysql> select 'OTOBJID','OTTRANSID'
union
select * into outfile 'd:/test.csv'
fields terminated by ','
optionally enclosed by '"'
from objecttransports;
D:\>type test.csv
"OTOBJID","OTTRANSID"
"0","0"
"0","1"
"0","2"
"0","3"
"0","4"
"0","5"
"1","0"
"1","1"
"1","2"
"1","4"
"1","5"
Related
Consider this code:
mysql> select * into outfile 'atmout12.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\n' from atm_atm;
ERROR 1086 (HY000): File 'atmout12.csv' already exists
mysql> select * into outfile 'atmout1.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\n' from atm_atm;
Query OK, 2822 rows affected (0.02 sec)
I used the above snippet to export table data to a CSV file. As you can see, the query ran fine, but I am unable to locate the file.
I ran ls in the folder and can't find it. I am using Ubuntu 11.04.
The file will be located in your server's data directory.
example: datadir=/opt/data/db_name.
The .csv file will be inside the folder of the particular database (db_name) under that directory.
Alternatively, you can write the output file to a particular location; to do that, the MySQL user needs the FILE privilege.
example :
mysql> use db_name
mysql> select * into outfile 'atmout1.csv' from atm_atm;
or
mysql> select * into outfile '/opt/example.csv' from atm_atm;
NOTE: with the first form above, the output file is created in the db_name folder inside the data directory.
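If you are unsure where the server puts the file, you can ask the server itself: datadir and (on newer versions) secure_file_priv are standard server variables, and when secure_file_priv is set, INTO OUTFILE may only write under that directory:
mysql -u username -p -e "SHOW VARIABLES WHERE Variable_name IN ('datadir', 'secure_file_priv')"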
Related
Say I have a view in my database, and I want to send a file to someone so they can create that view's output as a table in their database.
mysqldump of course only exports the 'create view...' statement (well, okay, it includes the create table, but no data).
What I have done is simply duplicate the view as a real table and dump that. But for a big table it's slow and wasteful:
create table tmptable select * from myview
Short of creating a script that mimics the behaviour of mysqldump and does this, is there a better way?
One option would be to do a query into a CSV file and import that. To select into a CSV file:
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
OK, so based on your CSV failure comment, start with Paul's answer. Make the following change to it:
- FIELDS TERMINATED BY ','
+ FIELDS TERMINATED BY ',' ESCAPED BY '\\'
When you're done with that, on the import side you'll do a "load data infile" and use the same terminated / enclosed / escaped statements.
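A matching import on the receiving side could look like the sketch below; orders_copy is a hypothetical target table, and the doubled backslash is what MySQL's string parser reduces to the single escape character:
LOAD DATA INFILE '/tmp/orders.csv' INTO TABLE orders_copy
FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';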
Same problem here. I want to export a view definition (84 fields and millions of records) as a "create table" statement, because the view can change over time and I want an automatic process. So this is what I did:
Create a table from the view, but with no records:
mysql -uxxxx -pxxxxxx my_db -e "create table if not exists my_view_def as select * from my_view limit 0;"
Export the new table definition. I'm adding a sed command to change the table name my_view_def back to the original view name ("my_view"):
mysqldump -uxxxx -pxxxxxx my_db my_view_def | sed s/my_view_def/my_view/g > /tmp/my_view.sql
Drop the temporary table:
mysql -uxxxx -pxxxxxx my_db -e "drop table my_view_def;"
Export the data as a CSV file:
SELECT * from my_view into outfile "/tmp/my_view.csv" fields terminated BY ";" ENCLOSED BY '"' LINES TERMINATED BY '\n';
Then you'll have two files, one with the definition and another with the data in CSV format.
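On the recipient's side, they would replay the definition and then load the CSV with the same separators used in the export. A sketch with the same placeholder credentials; use LOAD DATA LOCAL INFILE instead if the CSV sits on the client rather than on the server:
mysql -uxxxx -pxxxxxx my_db < /tmp/my_view.sql
mysql -uxxxx -pxxxxxx my_db -e "LOAD DATA INFILE '/tmp/my_view.csv' INTO TABLE my_view FIELDS TERMINATED BY ';' ENCLOSED BY '\"' LINES TERMINATED BY '\n';"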