I have a table with a nullable column:
CREATE TABLE test (id INT, value INT);
INSERT INTO test VALUES (1, null);
I export it to CSV using:
SELECT * FROM test
INTO OUTFILE '/tmp/test.csv'
CHARACTER SET 'utf8'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '"';
The output file looks like this:
1,"N
Then I try to import it:
LOAD DATA LOCAL INFILE '/tmp/test.csv'
INTO TABLE `test`.`test`
CHARACTER SET 'utf8'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '"';
Query OK, 1 row affected, 1 warning (0.01 sec)
Records: 1 Deleted: 0 Skipped: 0 Warnings: 1
It doesn't work, and generates the following warning:
SHOW WARNINGS;
Warning 1366: Incorrect integer value: '"N' for column 'value' at row 1
And it doesn't insert this row. For some reason the escaping doesn't work in this case; it does work if fields are escaped and enclosed by different characters.
But I can't use different characters, because I can't change the export query: I'm using Google Cloud SQL, and the CSV export can only be run from the Google web console, with no way to edit the query.
I've seen multiple examples on the internet saying that using a double quote for both should work. Am I doing something wrong?
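One possible workaround on the import side (a sketch, assuming the field really arrives as the literal two characters "N, as the warning above suggests; @raw_value is a hypothetical helper variable):
LOAD DATA LOCAL INFILE '/tmp/test.csv'
INTO TABLE `test`.`test`
CHARACTER SET 'utf8'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '"'
(id, @raw_value)
-- NULLIF turns the literal "N (the mangled NULL marker) back into NULL:
SET `value` = NULLIF(@raw_value, '"N');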
mysql> LOAD DATA LOCAL INFILE 'students.csv' INTO TABLE students FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' (studentid,fname,mname,lname,psuid,state,country,major);
Query OK, 997 rows affected, 2276 warnings (0.02 sec)
Records: 997 Deleted: 0 Skipped: 0 Warnings: 2276
[students table image]
After loading my data into the table, the first part of the data is missing: the first name is gone, which is why the major column is empty and everything is shifted over.
Please help! Thank you :)
[CSV image]
I assume the studentid column is defined with the AUTO_INCREMENT attribute, so the first CSV field (the first name) is being loaded into studentid and every later value is shifted over by one column. You should remove studentid from the column list in your LOAD DATA INFILE statement and let MySQL auto-generate it.
Try this:
LOAD DATA LOCAL INFILE 'students.csv'
INTO TABLE students
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(fname,mname,lname,psuid,state,country,major);
Your data in the file does not seem to be in the correct order.
I am trying to import one big CSV file, encoded in UCS-2 Big Endian, into a MySQL table.
This is the MySQL code:
DROP TABLE if exists PAPERS;
CREATE TABLE `PAPERS` (
ID_RESEARCHER VARCHAR(20),
PAPER_ACCESSOR_NUMBER VARCHAR(20),
primary key(ID_RESEARCHER,PAPER_ACCESSOR_NUMBER)
) ENGINE=InnoDB DEFAULT CHARSET=ucs2;
load data local infile '...dump_all_papers_test_2.csv'
into table PAPERS
CHARACTER SET ucs2
fields terminated by '\t' enclosed by '"'
lines terminated by '\n'
(ID_RESEARCHER, PAPER_ACCESSOR_NUMBER);
And here is the content of the CSV (UCS-2 Big Endian format, according to Notepad++):
"100" "A1974U626600001"
"100" "A1974U626600001"
"100" "A1974U626600001"
A copy of the CSV sample:
http://pastebin.com/HMssuxCf
And the error is:
1 row(s) affected, 2 warning(s):
1265 Data truncated for column 'ID_RESEARCHER' at row 1
1261 Row 1 doesn't contain data for all columns
Records: 1 Deleted: 0 Skipped: 0 Warnings: 2
What is happening here?
The load only fills the first column. Does MySQL not support ucs2?
OK, the solution is in the documentation:
Note that it is currently not possible to load data files that use the ucs2 character set.
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
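A common workaround (a sketch, not covered by that documentation note) is to convert the file to UTF-8 first, e.g. with iconv or Notepad++'s encoding conversion, and then load it with a supported character set. The _utf8 filename below is a placeholder for the converted file:
-- Load the file after converting it from UCS-2 Big Endian to UTF-8:
LOAD DATA LOCAL INFILE 'dump_all_papers_test_2_utf8.csv'
INTO TABLE PAPERS
CHARACTER SET utf8
FIELDS TERMINATED BY '\t' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(ID_RESEARCHER, PAPER_ACCESSOR_NUMBER);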
I tried the following script:
LOAD DATA LOCAL INFILE 'myfile.csv'
REPLACE INTO TABLE `mydb`.`mytable`
CHARACTER SET latin1 FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES (`field1`, `field2`, `field3`, `field4`, `field5`, `field6`);
When I use a file of 500K records it works, but when I try a CSV file of 4 million records it returns:
Query OK, 0 rows affected (2.79 sec)
Records: 0 Deleted: 0 Skipped: 0 Warnings: 0
And of course nothing could have been loaded in 2.79 seconds!
My RAM is 4GB and my input file (the large one) is 370MB.
Can anyone suggest a solution?
It's possible that the line endings in the large file are not '\r\n'.
Change LINES TERMINATED BY '\r\n' to LINES TERMINATED BY '\n'.
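That would explain the symptom: if the terminator never matches, MySQL reads the whole file as a single unterminated line, and IGNORE 1 LINES then skips it, giving Records: 0 with no warnings. For reference, the same statement with Unix line endings (a sketch; nothing else changed):
LOAD DATA LOCAL INFILE 'myfile.csv'
REPLACE INTO TABLE `mydb`.`mytable`
CHARACTER SET latin1 FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES (`field1`, `field2`, `field3`, `field4`, `field5`, `field6`);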
Hi, I am trying to load data from a file into a MySQL DB using Hibernate.
Here is the query:
session.createSQLQuery("LOAD DATA INFILE E:/uploaded/NumSerie/NS/NumSerie.txt INTO TABLE prod CHARACTER SET latin1 FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n' IGNORE 1 LINES;").executeUpdate();
But I get the following error:
org.hibernate.QueryException: Space is not allowed after parameter prefix ':' [LOAD DATA INFILE E:/uploaded/NumSerie/NS/NumSerie.txt INTO TABLE prod CHARACTER SET latin1 FIELDS TERMINATED BY ';' LINES TERMINATED BY '
' IGNORE 1 LINES;]
at org.hibernate.engine.query.ParameterParser.parse(ParameterParser.java:92)
at org.hibernate.engine.query.ParamLocationRecognizer.parseLocations(ParamLocationRecognizer.java:75)
How can I rewrite this query so that it executes properly?
Thanks in advance!
Hibernate is treating the ':' in the path 'E:/uploaded/...' as the start of a named parameter, which is exactly what the error message is complaining about. Try creating a parameterised query instead.
I'm no Hibernate guru, but this could work:
session.createSQLQuery("LOAD DATA INFILE :file INTO TABLE prod CHARACTER SET latin1 FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n' IGNORE 1 LINES;")
.setString("file", "E:/uploaded/NumSerie/NS/NumSerie.txt")
.executeUpdate();
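For reference, the statement that ultimately reaches MySQL should look something like this (note the quotes around the file path, which the original unparameterised query was also missing):
LOAD DATA INFILE 'E:/uploaded/NumSerie/NS/NumSerie.txt'
INTO TABLE prod
CHARACTER SET latin1
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;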
Consider this code:
mysql> select * into outfile 'atmout12.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\n' from atm_atm;
ERROR 1086 (HY000): File 'atmout12.csv' already exists
mysql> select * into outfile 'atmout1.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\n' from atm_atm;
Query OK, 2822 rows affected (0.02 sec)
I used the above snippet to export a table's data to a CSV file. As you can see, the query ran fine, but I am unable to locate the file: an ls in the folder doesn't show it. I am using Ubuntu 11.04.
The file will be located in the server's data directory, inside the folder for the database you were using.
For example, with datadir=/opt/data and database db_name, look for the .csv file in /opt/data/db_name/.
Alternatively, you can write the output file to a specific location by giving an absolute path; to do that, the MySQL user needs the FILE privilege.
For example:
mysql> use db_name;
mysql> select * into outfile 'atmout1.csv' from atm_atm;
or
mysql> select * into outfile '/opt/example.csv' from atm_atm;
NOTE: the first statement writes the file to the db_name folder under the data directory; the second writes it to the absolute path /opt/example.csv.
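If in doubt about where the server will put the file, you can ask it directly (a quick check; if secure_file_priv is set, INTO OUTFILE may only write under that directory):
-- Relative output files go under datadir, in the current database's folder:
SHOW VARIABLES LIKE 'datadir';
-- If non-empty, output files may only be written under this path:
SHOW VARIABLES LIKE 'secure_file_priv';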