Under what conditions does MySQL skip rows?

I'm reading a data.csv file into MySQL via the LOAD DATA command. My SQL code goes like this:
LOAD DATA LOCAL INFILE 'C:/DATA FILES/foo.csv' INTO TABLE foo FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
This works fine, but out of the 366,000-odd lines, it skips 138 of them. There are no duplicate entries or anything; it just skips them out of the blue. When I try to manually insert one of those rows into the table, it works fine. But I can't identify all 138 of the lines, so I'm wondering how exactly to pinpoint the possible reasons MySQL would skip these lines of data.
Any ideas?
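One way to narrow down which lines MySQL is choking on is to scan the file client-side for rows whose field count doesn't match the table. A minimal sketch in Python; the sample data here is hypothetical, standing in for foo.csv:

```python
import csv
import io

# Hypothetical stand-in for foo.csv: line 2 is missing a field and
# line 3 has an extra one -- the kinds of rows LOAD DATA tends to skip.
sample = (
    '1,alice,10\n'
    '2,bob\n'
    '3,carol,30,xx\n'
    '4,dave,40\n'
)

def find_bad_lines(text, expected_fields):
    """Return (line_number, field_count) for every row whose field
    count differs from expected_fields."""
    bad = []
    for lineno, row in enumerate(csv.reader(io.StringIO(text)), start=1):
        if len(row) != expected_fields:
            bad.append((lineno, len(row)))
    return bad

print(find_bad_lines(sample, 3))  # -> [(2, 2), (3, 4)]
```

Running the same check over the real file (`open('C:/DATA FILES/foo.csv', newline='')` instead of the sample string) should surface the 138 offenders, or at least rule out field-count problems.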

Related

How to stop mysql from exiting query on error?

I'm loading a lot of data from long files into my database with the LOAD DATA INFILE command, however whenever there's a line that either doesn't conform to the column amount, uses incorrect string values, or is too long for the column, the entire query simply exits.
Is there a way to force the query to just skip the erroring line whenever there's an error? I tried the --force argument, but that doesn't seem to do anything to solve it.
My query:
LOAD DATA CONCURRENT INFILE 'file_name' INTO TABLE table_name FIELDS TERMINATED BY ':DELIMITER:' LINES TERMINATED BY '\n' (#col1,#col2) SET column_1=#col1,column_2=#col2
I don't mind dropping the lines with errors in them since they're not useful data anyway, but I do care about all the data after the erroring line.
Adding IGNORE to your command will skip the erroneous lines.
Example:
LOAD DATA CONCURRENT INFILE 'file_name' IGNORE INTO TABLE table_name FIELDS TERMINATED BY ':DELIMITER:' LINES TERMINATED BY '\n' (#col1,#col2) SET column_1=#col1,column_2=#col2
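With IGNORE in place, the dropped or coerced rows still surface as warnings, so you can see what was skipped. A hedged sketch using the names from the question (note that MySQL user variables are written `@col1`, not `#col1`):

```sql
LOAD DATA CONCURRENT INFILE 'file_name' IGNORE INTO TABLE table_name
  FIELDS TERMINATED BY ':DELIMITER:' LINES TERMINATED BY '\n'
  (@col1, @col2)
  SET column_1 = @col1, column_2 = @col2;

-- Immediately afterwards, in the same session:
SHOW COUNT(*) WARNINGS;  -- how many rows were affected
SHOW WARNINGS;           -- per-row detail, e.g. which row lacked data for all columns
```

Note that SHOW WARNINGS only reports up to max_error_count entries, so for very dirty files you may want to raise that variable first.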

LOAD DATA INFILE not working on my server

I have a csv with 400k records that I am trying to import into an existing database table, but it is not working. I get no errors; it simply says MySQL returned an empty result.
Here is the query I am executing.
LOAD DATA INFILE 'C:\\xampp\\htdocs\\projects\\csvTest\\newDataList.csv'
INTO TABLE peeps
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(first, last,address,city,state,zip,imb,email,startDate,endDate,jobName)
SET id = NULL
Funny thing is I run this exact code on my local host machine and it works, but if I run it on my server machine it does not work.
Start and End are reserved words, so you either need to enclose the column names in backticks or, better yet, rename those columns.
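A sketch of the backtick fix applied to the query above (quoting every column is harmless and sidesteps any reserved-word or keyword collision, whichever column is the culprit):

```sql
LOAD DATA INFILE 'C:\\xampp\\htdocs\\projects\\csvTest\\newDataList.csv'
INTO TABLE peeps
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(`first`, `last`, `address`, `city`, `state`, `zip`, `imb`, `email`,
 `startDate`, `endDate`, `jobName`)
SET id = NULL;
```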

Loading data into Mysql table

I'm quite sure this question has probably been posted here somewhere, but I can't find it, and my problem seems a bit different. I tried loading a .txt file from my local computer into a MySQL table, but the output comes out really scrambled. The data are flying all over the place and not in the order I want.
Could anyone please explain why it is so and a way out?
I used the query; LOAD DATA LOCAL INFILE \home\minilab\desktop\data.txt1.txt INTO clientrecord;
Make sure the values in each line of the text file match the fields of the table. If the data are separated by ',' use FIELDS TERMINATED BY ','; if separated by tabs, use FIELDS TERMINATED BY '\t'; and add LINES TERMINATED BY '\n'.
Also, please share some sample data from your infile.
Ex:
LOAD DATA LOCAL INFILE '/home/minilab/desktop/data.txt1.txt' INTO TABLE
clientrecord FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
If you do not use the proper formatting, the values will be scattered across the table.
Reference: http://dev.mysql.com/doc/refman/5.7/en/load-data.html

MySQL skips some rows when importing .csv file

I wanted to import a .csv file with ~14k rows into MySQL. However, I only got a table in MySQL with about 13k rows. I checked and found out that MySQL skips some rows in the middle of my .csv file.
I used LOAD DATA INFILE and I really cannot understand why MySQL skips those rows. I would really appreciate it if someone could help me with this.
Here is my query:
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;
There is no warning message at all
I assume that the difference in row counts is caused by unique-key duplicates.
From the reference:
With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key
errors become warnings and the operation continues
Actually, no warnings are produced on duplicates. To check which rows were skipped, you can load the data into another, similar table without unique keys and compare the two tables.
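The comparison suggested above can be sketched like this; the staging-table name, the unique-key name, and the key column `id` are assumptions, so substitute your own:

```sql
-- 1. Clone the structure, then drop the unique key so nothing is skipped.
CREATE TABLE q.hmm_staging LIKE q.hmm;
ALTER TABLE q.hmm_staging DROP INDEX some_unique_key;  -- key name assumed

-- 2. Load the same file into the keyless staging table.
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm_staging
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;

-- 3. Key values that occur more than once in the staging table are
--    exactly the rows the original load silently collapsed.
SELECT id, COUNT(*) AS copies
FROM q.hmm_staging
GROUP BY id
HAVING COUNT(*) > 1;
```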

How to import large csv into mysql table?

For smaller csv files, the following has always worked for me...
LOAD DATA LOCAL INFILE 'C:/store_emails2.csv' INTO TABLE
mytable FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Now, I am trying to do this with a file that has 100K rows. MySQL only takes about 50K rows, though. The odd part is that it takes random rows, not just the first 50K. I have no idea what the problem could be, as the file is formatted correctly and I've imported data this way many times. Any ideas as to what might be the problem?
EDIT: including what part of the csv looks like...
CUST_NUM,LAST_NAME,FIRST_NAME,MIDDLE_INIT,PREFIX,SUFFIX,ADDRESS_1,ADDRESS_2,CITY,STATE,ZIP,HOME_TEL,BUS,MAILING,MOD_DATE,SIGN_UP_DATE,BIRTHDAY_DATE,CASHIER,STR,EMAIL
1234556767,name,name,,,,,,,,,9999999999,,Y,6007,607,0101,341,71,email
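One thing worth ruling out when rows vanish seemingly at random is mixed line endings: a file containing both \r\n and bare \n terminators interacts badly with LINES TERMINATED BY '\n', since stray \r characters end up glued to the last field. A quick check in Python, with a hypothetical byte string standing in for store_emails2.csv:

```python
# Hypothetical stand-in for the raw bytes of store_emails2.csv,
# deliberately mixing three line-ending styles.
data = b'1234,name\r\n5678,name\n9012,name\r'

def count_terminators(raw):
    """Count each line-ending style in a raw byte string."""
    crlf = raw.count(b'\r\n')
    lf = raw.count(b'\n') - crlf   # bare \n (not part of \r\n)
    cr = raw.count(b'\r') - crlf   # bare \r (not part of \r\n)
    return {'crlf': crlf, 'lf': lf, 'cr': cr}

print(count_terminators(data))  # -> {'crlf': 1, 'lf': 1, 'cr': 1}
```

If the real file (read with `open(path, 'rb')`) reports a mix, re-export it with consistent endings, or match the dominant style with LINES TERMINATED BY '\r\n'.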