How to import a large CSV into a MySQL table?

For smaller CSV files, the following has always worked for me:
LOAD DATA LOCAL INFILE 'C:/store_emails2.csv' INTO TABLE mytable
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Now I am trying to do this with a file that has 100K rows. MySQL only imports about 50K of them, though. The odd part is that it imports seemingly random rows, not just the first 50K. I have no idea what the problem could be, as the file is formatted correctly and I've imported data this way many times. Any ideas as to what the problem may be?
EDIT: here is what part of the CSV looks like...
CUST_NUM,LAST_NAME,FIRST_NAME,MIDDLE_INIT,PREFIX,SUFFIX,ADDRESS_1,ADDRESS_2,CITY,STATE,ZIP,HOME_TEL,BUS,MAILING,MOD_DATE,SIGN_UP_DATE,BIRTHDAY_DATE,CASHIER,STR,EMAIL
1234556767,name,name,,,,,,,,,9999999999,,Y,6007,607,0101,341,71,email
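One thing worth ruling out first: since the file lives on Windows (C:/...), its lines may end with \r\n rather than \n, which can make rows merge or drop silently. A minimal diagnostic sketch, assuming the table and path from the question:
TRUNCATE TABLE mytable;        -- start from an empty table before re-testing
LOAD DATA LOCAL INFILE 'C:/store_emails2.csv' INTO TABLE mytable
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';    -- try the Windows line ending
SHOW WARNINGS;                 -- run immediately after the load, in the same session
SELECT COUNT(*) FROM mytable;  -- compare against the file's 100K lines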

Related

How to upload more than 1000 entries using LOAD DATA INFILE?

I'm trying to import a CSV file into my database table; the file originated from a previous database with the same structure. My issue is that it imports only 1,000 rows instead of the whole 62k+ file. The script I'm using is:
LOAD DATA INFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/covid19.csv'
INTO TABLE covid19.covid19
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id,date,iso2,iso3,country,continent,cases,deaths,population);
Some clients have an option that limits the number of returned rows with a LIMIT 1000, so the import may be complete even though you only see 1,000 rows.
You should check how many rows you actually have with:
SELECT COUNT(*) FROM covid19.covid19;
You should see the actual number of inserted rows there, since the command didn't show any warnings or errors.
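If the count really is short, the warning buffer is also worth checking explicitly, right after the LOAD DATA statement and in the same session (standard MySQL commands):
SHOW COUNT(*) WARNINGS;   -- how many warnings the last statement produced
SHOW WARNINGS LIMIT 20;   -- the first few messages, if any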

MySQL importing large CSV

I have a large CSV file I am trying to import into MySQL (around 45 GB, around 150 million rows; most columns are small, but one holds variable-length text that can reach several KB). I am using LOAD DATA LOCAL INFILE to import it, but the server always times out my connection before the import finishes. I have tried modifying the global connection timeout variables, but it already runs for some hours before it times out. Is there another way to import a file this large, or am I doing something wrong with this approach?
LOAD DATA LOCAL INFILE 'filename.csv' INTO TABLE `table` CHARACTER SET latin1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\r\n';
I am executing this command on Windows 10, with the MySQL command line. My MySQL version is 8.0.
The way I have handled this in the past is by writing a PHP script that reads the file, writes the top 50% into a new file, and then deletes those rows from the original. Then perform two LOAD DATA INFILE runs, one for the original and one for the new file; a sketch follows below.
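As a rough sketch of that two-pass idea (part1.csv and part2.csv are hypothetical names for the split halves, and the timeout values are assumptions to tune for your server):
-- raise the limits that most often cut off long loads
SET GLOBAL net_read_timeout = 7200;          -- seconds; value is an assumption
SET GLOBAL net_write_timeout = 7200;
SET GLOBAL max_allowed_packet = 1073741824;  -- 1 GB
-- then load each half separately
LOAD DATA LOCAL INFILE 'part1.csv' INTO TABLE `table` CHARACTER SET latin1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\r\n';
LOAD DATA LOCAL INFILE 'part2.csv' INTO TABLE `table` CHARACTER SET latin1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\r\n';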

Loading data into a MySQL table

I'm quite sure this question has probably been posted here somewhere, but I can't find it, and my problem seems a bit different. I tried loading a .txt file from my local computer into a MySQL table, but the output comes out really scrambled. The data end up all over the place, not in the order I want.
Could anyone please explain why this happens and a way out?
I used the query: LOAD DATA LOCAL INFILE \home\minilab\desktop\data.txt1.txt INTO clientrecord;
Hopefully the values on each line of the .txt file match the fields in the table. If the data are separated by commas, use FIELDS TERMINATED BY ','; if separated by tabs, use FIELDS TERMINATED BY '\t'; and add LINES TERMINATED BY '\n'.
Also, please share sample data from your input file.
Ex:
LOAD DATA LOCAL INFILE '/home/minilab/desktop/data.txt1.txt'
INTO TABLE clientrecord
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
If you do not use the proper format options, the values will be scattered across the table.
Reference: http://dev.mysql.com/doc/refman/5.7/en/load-data.html
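For a tab-separated file, the same pattern applies with '\t' (still assuming the corrected path and table name above):
LOAD DATA LOCAL INFILE '/home/minilab/desktop/data.txt1.txt'
INTO TABLE clientrecord
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';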

MySQL skips some rows when importing a .csv file

I wanted to import a .csv file with ~14k rows into MySQL. However, I only got a table in MySQL with about 13k rows. I checked and found that MySQL skips some rows in the middle of my .csv file.
I used LOAD DATA INFILE and I really cannot understand why MySQL skips those rows. I would really appreciate it if someone could help me with this.
Here is my query:
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;
There is no warning message at all.
I assume that the difference in row counts is caused by unique-key duplicates.
From the reference:
With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key
errors become warnings and the operation continues
In practice, though, no warnings are produced for the duplicates. To check which rows were skipped, you can load the data into a similar table without unique keys and compare the two tables; a sketch follows below.
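A minimal sketch of that comparison, assuming q.hmm has a key column named id and a unique index whose real name you would substitute:
-- staging copy without the unique constraint
CREATE TABLE q.hmm_staging LIKE q.hmm;
ALTER TABLE q.hmm_staging DROP INDEX uniq_idx;  -- uniq_idx is a placeholder name
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm_staging
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;
-- rows that made it into staging but not into the real table were the skipped ones
SELECT s.*
FROM q.hmm_staging AS s
LEFT JOIN q.hmm AS t ON t.id = s.id  -- join column is an assumption
WHERE t.id IS NULL;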

Under what conditions does MySQL skip rows?

I'm reading a data.csv file into MySQL via the LOAD DATA statement. My SQL code goes like this:
LOAD DATA LOCAL INFILE 'C:/DATA FILES/foo.csv' INTO TABLE foo FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
This works fine, but out of the 366,000-odd lines, it skips 138 of them. There are no duplicate entries or anything; it just skips them out of the blue. When I try to manually input a skipped row into the table, it works fine. But I can't identify all 138 of the lines, so I'm wondering how exactly to pinpoint the possible reasons MySQL would skip these lines of data.
Any ideas?
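One way to start narrowing this down (a sketch only; the foo_raw table and the comma count are assumptions to adapt): load every raw line into a one-column staging table with a line counter, then flag lines whose delimiter count looks wrong.
-- one row per raw line of the file
CREATE TABLE foo_raw (
  line_no INT AUTO_INCREMENT PRIMARY KEY,
  raw TEXT
);
LOAD DATA LOCAL INFILE 'C:/DATA FILES/foo.csv' INTO TABLE foo_raw
FIELDS TERMINATED BY '\t'  -- assumes the file contains no tabs, so each whole line lands in raw
LINES TERMINATED BY '\n'
(raw);
-- lines with an unexpected number of commas are the usual suspects;
-- rough filter only: commas inside quoted fields will miscount
SELECT line_no, raw
FROM foo_raw
WHERE LENGTH(raw) - LENGTH(REPLACE(raw, ',', '')) <> 19;  -- 19 commas = 20 columns; set to your column count minus one
Running SHOW WARNINGS immediately after the original LOAD DATA is also worth a try: the messages usually name the offending row numbers.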