I've been searching for a solution to my problem but found none.
I have a tab-delimited text file (>150K rows).
Numbers use a comma as the decimal separator.
My fields are defined as FLOAT.
The problem is that MySQL expects a dot as the decimal separator, not a comma, so everything after the comma is silently dropped.
I tried different character sets, and even tried replacing the comma with a dot during import, but with no luck.
My import statement:
USE stat;
LOAD DATA LOCAL INFILE 'tbl_t1.txt' INTO TABLE `tbl_t1`
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r\n' IGNORE 1 LINES;
My raw data
Partno grossw netw ... up to 132 columns
Item1 1,4753 1,1325
Item2 0,1673 0,0184
Item3 2,1357 2,0361
Once imported I get
Item1 1 1 .........
Item2 0 0
Item3 2 2
When I try the code below I get zero rows; my table ends up empty:
USE stat;
LOAD DATA LOCAL INFILE 'tbl_t1.txt' INTO TABLE `tbl_t1`
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r\n' IGNORE 1 LINES
(@col1) SET grossw = REPLACE(@col1, ',' , '.');
UPDATE:
I found out why the table is emptied: I have 132 columns in the table, and (@col1) maps just one column, so the other 131 are left unset. The code above was just an example.
I need to investigate further.
Any help is very much appreciated.
Thanks.
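Since listing all 132 columns as user variables in the LOAD DATA statement is unwieldy, one alternative is to pre-process the file before loading. Here is a minimal sketch in Python (the regex pattern and output file name are my assumptions, not from the question) that turns decimal commas into dots while leaving text columns alone:

```python
import re

# Only a comma sitting between two digits is treated as a decimal
# separator; tabs and text columns such as "Item1" are untouched.
DECIMAL_COMMA = re.compile(r"(?<=\d),(?=\d)")

def fix_decimal_commas(line: str) -> str:
    """Turn '1,4753' into '1.4753' within one tab-delimited row."""
    return DECIMAL_COMMA.sub(".", line)

row = "Item1\t1,4753\t1,1325"
print(fix_decimal_commas(row))  # Item1	1.4753	1.1325
```

Run it over tbl_t1.txt line by line into a converted copy, then point LOAD DATA LOCAL INFILE at that copy; no per-column SET clauses are needed. (Caveat: a comma between digits inside a genuine text field would also be converted.)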
Related
I have a csv file with over a million lines, and I need to write only the lines starting with '01' into the database.
.csv file looks like this
01;104;5586;20;1000;
01;105;5586;80;1000;
01;106;5586;80;1000;
04;104;5586;20;1000;
06;105;5586;80;1000;
05;106;5586;80;1000;
The SQL looks like this:
LOAD DATA LOCAL INFILE '$uploadfile'
REPLACE INTO TABLE mytable
FIELDS TERMINATED BY ';' ENCLOSED BY ''
IGNORE 1 LINES
(`a`, `b`, `c`, `d`, `e`);
So this works to import all lines. But how can I import only the lines which start with 01;...?
You can try this one -
LOAD DATA LOCAL INFILE 'filename.csv'
REPLACE INTO TABLE mytable
FIELDS TERMINATED BY ';' ENCLOSED BY ''
LINES STARTING BY '01' TERMINATED BY '\r\n'
(`a`, `b`, `c`, `d`, `e`)
SET `a` = '01';
and you will get something like this -
01 104 5586 20 1000
01 105 5586 80 1000
01 106 5586 80 1000
Check which line separator you use in the TERMINATED BY clause - '\r\n' or '\n'.
Devart's solution only works if the data in the line never contains "01" anywhere else.
I found out that LINES STARTING BY does not work as expected, according to this bug report:
https://bugs.mysql.com/bug.php?id=3632
"Another problem was that LINES STARTING BY xxx means that MySQL will assume the line starts at the next occurrence of xxx."
So if my data also includes 01 elsewhere, not just at the start of a line, MySQL reads some data from those lines too. So this input will insert bad data:
01;104;5586;01;1000
01;105;8586;80;1000
01;106;5586;80:0123
I wonder why there is still no solution along the lines of the suggestions in that report from 2012:
(1) let LINES STARTING BY 'X' continue to mean "line contains 'X' anywhere and data prior to 'X' in the record is skipped" and document this as such.
(2) add LINES STARTING BY 'X' POSITION N which means "line contains 'X' beginning at character position N relative to the beginning of the record.
I ended up splitting the files, or adding a prefix (X) at the start of each line.
The .csv then looks like this:
X01;104;5586;20;1000;
X01;105;5586;01;1000;
X01;106;5586;80;1000;
X04;104;5586;01;0123;
X06;105;5586;80;1000;
X05;106;5586;80;1000;
And the code like this:
LOAD DATA LOCAL INFILE 'filename.csv'
REPLACE INTO TABLE mytable
FIELDS TERMINATED BY ';' ENCLOSED BY ''
LINES STARTING BY 'X01' TERMINATED BY '\r\n'
(`a`, `b`, `c`, `d`, `e`) SET `a` = '01';
Maybe someone else out there had the same misunderstanding of LINES STARTING BY.
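The prefixing step itself doesn't have to be done by hand. A small sketch in Python (the 'X' sentinel comes from the workaround above; everything else is illustrative):

```python
def prefix_lines(text: str, sentinel: str = "X") -> str:
    """Prepend the sentinel to every line so that LINES STARTING BY
    'X01' can only ever match at a true line start."""
    return "".join(sentinel + line
                   for line in text.splitlines(keepends=True))

sample = "01;104;5586;01;1000;\n04;104;5586;20;1000;\n"
print(prefix_lines(sample), end="")
```

Read the raw .csv, write `prefix_lines(...)` to a new file, and load that file with the LINES STARTING BY 'X01' statement shown above.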
I am trying to import a CSV file into a MySQL table, but I need to remove the first two characters in a particular column before importing.
This is my statement:
string strLoadData = "LOAD DATA LOCAL INFILE 'E:/park/Export.csv' INTO TABLE tickets FIELDS terminated by ',' ENCLOSED BY '\"' lines terminated by '\n' IGNORE 1 LINES (SiteId,DateTime,Serial,DeviceId,AgentAID,VehicleRegistration,CarPark,SpaceNumber,GpsAddress,VehicleType,VehicleMake,VehicleModel,VehicleColour,IssueReasonCode,IssueReason,NoticeLocation,Points,Notes)";
The column IssueReasonCode has data like 'LU12', but I need to remove the first two characters so that it contains only the numeric part, not alphanumerics.
I need to remove 'LU' from that column.
Is it possible to write something like left(IssueReasonCode +' '2)? This column is varchar(45) and can't be changed now because of the large amount of data in it.
Thanks
LOAD DATA INFILE can apply a function to the data in each column as it is read in (see the MySQL LOAD DATA documentation). In your case, to remove the first two characters from the IssueReasonCode column, you could use:
RIGHT(IssueReasonCode, CHAR_LENGTH(IssueReasonCode) - 2)
to remove the first two characters. You specify such column mappings at the end of the LOAD DATA statement using SET. Your statement should look something like the following:
LOAD DATA LOCAL INFILE 'E:/park/Export.csv' INTO TABLE tickets
FIELDS terminated by ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(SiteId, DateTime, Serial, DeviceId, AgentAID, VehicleRegistration, CarPark, SpaceNumber,
GpsAddress, VehicleType, VehicleMake, VehicleModel, VehicleColour, IssueReasonCode,
IssueReason, NoticeLocation, Points, Notes)
SET IssueReasonCode = RIGHT(IssueReasonCode, CHAR_LENGTH(IssueReasonCode) - 2)
Referencing the MySQL LOAD DATA documentation and quoting its example, you can try the below to see if it works.
User variables in the SET clause can be used in several ways. The
following example uses the first input column directly for the value
of t1.column1, and assigns the second input column to a user variable
that is subjected to a division operation before being used for the
value of t1.column2:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, @var1) SET
column2 = @var1/100;
string strLoadData = "LOAD DATA LOCAL INFILE 'E:/park/Export.csv' INTO TABLE tickets FIELDS terminated by ',' ENCLOSED BY '\"' lines terminated by '\n' IGNORE 1 LINES (SiteId,DateTime,Serial,DeviceId,AgentAID,VehicleRegistration,CarPark,SpaceNumber,GpsAddress,VehicleType,VehicleMake,VehicleModel,VehicleColour,@IRC,IssueReason,NoticeLocation,Points,Notes) SET IssueReasonCode = SUBSTR(@IRC, 3);";
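One detail worth double-checking: MySQL's SUBSTR(str, pos) is 1-based, so removing the first two characters means starting at position 3, not 2. A quick Python mirror of the same operation (a sketch, not the asker's code) makes the off-by-one easy to see:

```python
def drop_lead(code: str, n: int = 2) -> str:
    """Drop the first n characters: 'LU12' -> '12'.
    Equivalent to MySQL's SUBSTR(code, n + 1), since SUBSTR is 1-based."""
    return code[n:]

print(drop_lead("LU12"))  # 12
```

With SUBSTR(@IRC, 2) only the first character would be removed, leaving 'U12'.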
I have two columns, q1 and q2, that I'd like to sum together and put in the destination column, q.
The way I do it now, I put the data in an intermediate table, then sum during loading, but I'm wondering if it's possible to do it during extraction instead?
Here's my script:
LOAD DATA INFILE 'C:\temp\foo.csv'
INTO TABLE new_foo
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(q1,q2)
INSERT INTO foo (q) SELECT q1+q2 AS q
FROM foo_temp;
Try:
LOAD DATA INFILE 'C:\temp\foo.csv'
INTO TABLE `foo`
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(@`q1`, @`q2`)
SET `q` = @`q1` + @`q2`;
I am importing my data from a csv file like this:
LOAD DATA LOCAL INFILE "c:\myfile.csv" INTO TABLE historic FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
This ALMOST works perfectly. My table has one extra column the CSV lacks: the first, an auto-incremented id column. Otherwise the CSV file and the MySQL table match perfectly. When I run the command above, it puts the first column from the CSV into the id field. I want it to actually go into the second.
Is there a way to modify the statement above to either specify the columns or just offset by one (skipping the first table column on import)?
You can optionally name the columns to be populated by LOAD DATA, and simply omit your id column:
LOAD DATA LOCAL INFILE "c:\myfile.csv" INTO TABLE historic
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
(column2, column3, column4, column5, ...)
You can load your data specifying the order of the columns you're going to use in your table:
LOAD DATA LOCAL INFILE '/tmp/test.txt' INTO TABLE historic
(name_field1, name_field2, name_field3...);
Or, if you prefer, you can load it into a temporary table first and use an INSERT ... SELECT statement to load it into your final table (it's slower).
I have the following code to load CSVs into MySQL.
SET @count = 0;
LOAD DATA INFILE 'TELE_AFC_03.03.2014.csv'
INTO TABLE failure_report.master_entry
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 2 LINES
(@sno,@var1,line,section,station,system,sub_system,equipment,sub_equipment,failure_time,rectified_time,duration,failure_description,rectification,lmd,origin,attended_by,reported_by,urr)
SET id = (@count := @count+1), failure_date = STR_TO_DATE(@var1, '%c/%e/%Y');
It works fine for some files, but for some CSVs it gives an error:
Row 1 was truncated; it contained more data than there were input columns
although all files have the same number of columns.
This CSV gives the error:
http://speedy.sh/bSDTE/TELE-AFC-05.03.2014.csv
while this CSV works fine, although both have the same columns:
http://speedy.sh/h4cVt/TELE-AFC-03.03.2014.csv
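When LOAD DATA reports "Row N was truncated", a quick way to find the offending rows is to count the fields per line and compare against the header. A small diagnostic sketch (the delimiter and path are placeholders to adapt to your file):

```python
import csv

def odd_rows(path: str, delimiter: str = ","):
    """Yield (line_number, field_count) for every row whose field
    count differs from the first (header) row of the file."""
    with open(path, newline="", encoding="utf-8") as fh:
        reader = csv.reader(fh, delimiter=delimiter)
        expected = len(next(reader))          # header defines the shape
        for lineno, row in enumerate(reader, start=2):
            if len(row) != expected:
                yield lineno, len(row)
```

An unescaped quote or a stray delimiter inside a field is the usual culprit when two files look identical on eyeball inspection.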