How to LOAD a CSV file with extra empty rows? - mysql

I've got a CSV file which I need to load into a MySQL database. The problem is that every row of data is followed by an empty row:
open_test_uuid|open_uuid|asn|bytes_download

O0037c645-0c7b-4bd0-a6dc-1983e6d0f814|Pf13e1f22-92f6-4a49-9bd3-2882373d0266|25255|11704458

O0037c645-0c7b-4bd0-a6dc-1983e6d0f814|Pf13e1f22-92f6-4a49-9bd3-2882373d0266|25255|11704458
I tried different combinations of the LOAD command but nothing works.
LOAD DATA LOCAL INFILE '/Users/BauchMAC/Desktop/details_201301.csv'
INTO TABLE netztest
FIELDS TERMINATED BY '|'
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n\r' <-- HERE should be some magic line
IGNORE 1 ROWS;
I have no idea how to solve that...
Thank you for any ideas.

You could try
LINES TERMINATED BY '\n\n'
or
LINES TERMINATED BY '\r\n\r\n'
and if that doesn't work you could pre-process the file before feeding it to MySQL.
If you're on Linux you could run this:
grep -v '^\s*$' file.csv > fixed.csv
UPDATE
Perhaps a little Perl will do the trick:
perl -0777 -pe 's/[\r\n]+/\n/g' file.csv > fixed.csv
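If neither grep nor Perl is handy, the same cleanup can be sketched in Python. This is just an illustration of the idea (the sample data is shortened from the question); for a real file you would read it in, filter, and write the result back out:

```python
# Drop blank rows so every remaining line is a real data row
# that LOAD DATA can parse.
def strip_blank_rows(text):
    return ''.join(line for line in text.splitlines(keepends=True)
                   if line.strip())

sample = 'open_test_uuid|open_uuid|asn|bytes_download\n\nO0037c645|Pf13e1f22|25255|11704458\n\n'
print(strip_blank_rows(sample))
```

After the blank rows are gone, the original LOAD DATA statement with LINES TERMINATED BY '\n' should work unchanged.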

Related

Importing CSV file to HeidiSQL, error 1148

I'm a complete novice with MySQL, and I'm trying to load some CSV sheets to the server using HeidiSQL -> Tools -> Import CSV. But it keeps giving me this error:
"Error 1148: The used command is not allowed with this MySQL version."
Is there a way to fix that, or maybe another way to load a CSV?
For query-based CSV import you can use LOAD DATA ... to load CSV files.
For more info refer to the MySQL documentation for LOAD DATA.
Example:
To skip the first line of the CSV (the header) you can use ignore 1 lines:
load data local infile 'D:std_table.csv' into table local.student
fields terminated by ','
enclosed by '"'
lines terminated by '\r\n'
ignore 1 lines;
On Windows-based systems use \r\n for line termination, and on Linux use \n.
Side note:
You could also try MySQL Workbench, available from the official MySQL site.
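To see the \r\n versus \n difference in practice, here is a small Python sketch (the function name and sample rows are made up for illustration) that writes the same rows with either terminator, matching what you would then pass to LINES TERMINATED BY:

```python
import csv
import io

# Build CSV text with an explicit line terminator: '\r\n' matches files
# produced on Windows, '\n' matches files produced on Linux.
def make_csv(rows, terminator='\r\n'):
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator=terminator)
    writer.writerows(rows)
    return buf.getvalue()

print(repr(make_csv([['id', 'name'], ['1', 'Ann']])))
```

Inspecting the result with repr() makes the invisible terminator characters explicit, which is exactly what you need to know before writing the LOAD DATA statement.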
Try this one:
LOAD DATA INFILE 'c:/tmp/discounts.csv'
INTO TABLE discounts
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;

Merging types of EOL (End of Line)

I have some files. They're text files, and I imported them via the LOAD DATA MySQL command into a database table. But I have a problem with some of them.
All of them are 6236 lines:
$ wc -l ber.mensur.txt
6236 ber.mensur.txt
When I import ber.mensur.txt, I only get 1611 records in my table. But the other files give 6236 rows.
My LOAD DATA command is :
LOAD DATA INFILE '/home/mohsen/codes/haq/translation-tmp/ber.mensur.txt'
INTO TABLE tr_tmp
FIELDS TERMINATED BY ''
ENCLOSED BY '"'
LINES TERMINATED BY '\n' (aya);
I use Linux, so I'm forced to use \n for the end of line (EOL).
When I examine my database, some records contain more than one line. I think my line endings have a problem.
Do you have any solution?
UPDATE:
My file is here
By the way, vim counts my txt file as 6236 lines.
You can inspect the lines via:
fd = open(YOURFILE, 'r')
lines = fd.readlines()
It works well.
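Since the symptom here (fewer records than lines, with multi-line values merged into one record) usually means the file mixes line-ending styles, one approach is to normalize every ending to \n before importing. A minimal sketch of that idea, assuming the file is small enough to read as bytes:

```python
import re

# Collapse any style of line ending (\r\n, bare \r, or \n) into a single \n
# so LINES TERMINATED BY '\n' sees every record boundary.
def normalize_eol(data: bytes) -> bytes:
    return re.sub(rb'\r\n?|\n', b'\n', data)

print(normalize_eol(b'line1\r\nline2\rline3\n'))
```

After normalizing, the LOAD DATA statement with LINES TERMINATED BY '\n' should see all 6236 records.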

MySQL import csv only gets half the lines

I use the following command to read in a csv:
LOAD DATA INFILE '/Users/Tyler/Desktop/players_escaped.txt'
INTO TABLE players
FIELDS TERMINATED BY ','
ENCLOSED BY '^'
LINES TERMINATED BY '\n';
The csv looks like this:
^1^,^False^,^False^,^Ovie^,^Soko^,^6^,^8^,^210^,^^,^M^,^London^,^^,^^,^^,^^,^0^,^2009^,^^,^False^,^False^,^{299F909C-88D9-4D26-8ADC-3EC1A66168BB}^,^844^,^2013^,^^,^^,^0^,^^,^^,^2011-02-16 20:53:34.877000000^,^^,^2011-02-16 20:53:34.877000000^,^^,^^,^^,^^,^^,^^,^^,^^,^^,^1^,^2^,^^,^^,^^,^^,^^,^^,^^,^^,^^,^^
^2^,^False^,^False^,^Jordan^,^Swing^,^6^,^6^,^200^,^^,^M^,^Birmingham^,^AL^,^35218^,^^,^^,^0^,^2009^,^^,^False^,^False^,^{299F909C-88D9-4D26-8ADC-3EC1A66168BB}^,^844^,^2013^,^^,^^,^0^,^^,^^,^2011-02-16 20:53:34.877000000^,^^,^2011-02-16 20:53:34.877000000^,^^,^^,^^,^^,^^,^^,^^,^^,^^,^1^,^2^,^^,^^,^^,^^,^^,^^,^^,^^,^^,^^
I also tried \ as a delimiter and got the same results.
I'm only getting the odd numbered rows.
There are 250k records in the csv.
Thanks
Have you checked the line endings of the .txt file?
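One way to check them is to read the file in binary mode and tally the terminator styles. This is a small sketch (function name is made up); whichever count dominates is what LINES TERMINATED BY should be:

```python
# Tally \r\n, bare \n, and bare \r terminators in raw file bytes.
def count_terminators(data: bytes):
    crlf = data.count(b'\r\n')
    lf = data.count(b'\n') - crlf   # \n not preceded by \r
    cr = data.count(b'\r') - crlf   # \r not followed by \n
    return {'\\r\\n': crlf, '\\n': lf, '\\r': cr}

with_sample = b'a\r\nb\r\nc\n'
print(count_terminators(with_sample))
```

If the file turns out to use \r\n, changing the statement to LINES TERMINATED BY '\r\n' would explain (and fix) getting only every other row: with '\n', each stray \r gets glued onto the previous field.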

Mysql: How can I use RTRIM in my LOAD DATA INFILE query?

In my code I have a query that looks like this:
$load_query = "LOAD DATA LOCAL INFILE '{$file}' INTO TABLE `{$table}`
FIELDS TERMINATED BY ',' ENCLOSED BY '\"';";
Here is an example row included in the file that I am trying to load:
"MC318199","06160","1","P","00750","00000","TN598792","04/16/2009","91X"
You will notice that at the end of the example row there are quite a few spaces. These spaces all occur before the new line character "\n" that terminates the line. The problem is that these spaces get entered into the database.
How can I remove the additional spaces when running the LOAD DATA INFILE command? Is there a way to use RTRIM?
Edit
As ajreal suggested I had to re-prepare the file. After the file was prepared it was inserted into the database correctly. I modified the bash script found at: http://gabeanderson.com/2008/02/01/unixlinux-find-replace-in-multiple-files/ to accomplish this. The code is shown below:
#!/bin/bash
for fl in *.txt; do
    mv "$fl" "$fl.old"
    sed 's/[ \t]*$//' "$fl.old" > "$fl"
    rm -f "$fl.old"
done
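The same trailing-whitespace strip can be sketched in Python if a shell isn't available (the function name is made up; this assumes \n line endings, so run any EOL fix first):

```python
import re

# Remove trailing spaces/tabs from every line, mirroring the
# sed 's/[ \t]*$//' command in the bash script above.
def rtrim_lines(text: str) -> str:
    return re.sub(r'[ \t]+$', '', text, flags=re.M)

print(repr(rtrim_lines('"MC318199","91X"     \nnext line\t\n')))
```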
$load_query = "LOAD DATA LOCAL INFILE '{$file}' INTO TABLE `{$table}`
FIELDS TERMINATED BY ',' ENCLOSED BY '\"' (col1, col2, ... , @col_with_spaces)
SET col_with_spaces = RTRIM(@col_with_spaces);";
That way you won't need to pre-process the file or run a separate UPDATE on the table.
Don't overthink it. Right after you load the data, run:
update $table set $last_column = rtrim($last_column);
or manually remove the spaces in $file using vi (or any editor), or re-prepare the $file.
How about simply trimming the imported data in a second step?
$fix_query = "UPDATE `{$table}` SET LastFieldName = RTRIM(LastFieldName);";
use LINES TERMINATED BY '\n'
$load_query = "LOAD DATA LOCAL INFILE '{$file}' INTO TABLE `{$table}`
FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n' " ;

LOAD DATA INFILE only 1 record inserted

I have a csv file that I'm trying to import via the command line. But only 1 row is being inserted.
They are comma-separated values. I'm on a Mac using Excel for Mac, and I save as a CSV file. How can I tell if the lines are terminated by \r or \n or both? Here is the code I used:
LOAD DATA LOCAL INFILE '/Users/eric/Documents/contacts_test.csv' INTO TABLE `contacts_tmp` FIELDS TERMINATED BY ',' ESCAPED BY '\\' LINES TERMINATED BY '\n' (clientid,contactid,title,fname,mname,lname,suffixname,salutation,occupation,employer,home_addr1,home_addr2,home_city,home_state,home_zip,home_county,primary_addr1,primary_addr2,primary_city,primary_state,primary_zip,primary_county,work_addr1,work_addr2,work_city,work_state,work_zip,work_county,email,phone_home,phone_mobile,phone_work,fax,phone_other,codes);
thanks
I would just recommend trying the same command with ... LINES TERMINATED BY '\r\n' ... and see if you have better luck.
Had the same issue, LINES TERMINATED BY '\r' did the job.
If the lines in your file end with a comma, then you should use LINES TERMINATED BY ',\r\n'.
This solved the issue for me, as it should for you.
Mac text files usually end in \r, but you can find out by opening the file in a hex editor and seeing what the lines end with.
Just try removing LINES TERMINATED BY '\n' from your query altogether.
If you don't specify a line delimiter, MySQL will figure it out automatically.