LOAD DATA INFILE not working on my server - mysql

I have a CSV with 400k records that I am trying to import into an existing database table, but it is not working. I get no errors; it simply says MySQL returned an empty result.
Here is the query I am executing.
LOAD DATA INFILE 'C:\\xampp\\htdocs\\projects\\csvTest\\newDataList.csv'
INTO TABLE peeps
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(first, last,address,city,state,zip,imb,email,startDate,endDate,jobName)
SET id = NULL
The funny thing is that I run this exact code on my localhost machine and it works, but if I run it on my server machine it does not.

Start and End are reserved words, so you either need to enclose the column names in backticks or, better yet, rename those columns.
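For illustration, a minimal sketch of the same statement with every column name quoted in backticks:
LOAD DATA INFILE 'C:\\xampp\\htdocs\\projects\\csvTest\\newDataList.csv'
INTO TABLE peeps
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(`first`, `last`, `address`, `city`, `state`, `zip`, `imb`, `email`, `startDate`, `endDate`, `jobName`)
SET id = NULL;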

How to stop mysql from exiting query on error?

I'm loading a lot of data from long files into my database with the LOAD DATA INFILE command. However, whenever there's a line that doesn't match the column count, uses an incorrect string value, or is too long for its column, the entire query simply exits.
Is there a way to force the query to just skip the offending line whenever there's an error? I tried the --force argument, but that doesn't seem to do anything to solve it.
My query:
LOAD DATA CONCURRENT INFILE 'file_name' INTO TABLE table_name FIELDS TERMINATED BY ':DELIMITER:' LINES TERMINATED BY '\n' (@col1, @col2) SET column_1=@col1, column_2=@col2
I don't mind dropping the lines with errors in them since they're not useful data anyway, but I do care about all the data after the erroring line.
Adding IGNORE to your command will skip the erroneous lines.
Example:
LOAD DATA CONCURRENT INFILE 'file_name' IGNORE INTO TABLE table_name FIELDS TERMINATED BY ':DELIMITER:' LINES TERMINATED BY '\n' (@col1, @col2) SET column_1=@col1, column_2=@col2
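If you want to see what was skipped, the per-line problems are kept as warnings; a quick check right after the load (plain SHOW WARNINGS statements, nothing specific to this table):
-- how many lines produced warnings during the load
SHOW COUNT(*) WARNINGS;
-- inspect the first few of them
SHOW WARNINGS LIMIT 20;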

MySQL importing large csv

I have a large CSV file I am trying to import into MySQL (around 45 GB, around 150 million rows; most columns are small, but one holds variable-length text that can reach several KB). I am using LOAD DATA LOCAL INFILE to import it, but the server always times out my connection before it finishes. I have tried modifying the global connection timeout variables to fix this, but they are already set to several hours before it times out. Is there another way to import a file this large, or am I doing something wrong with this approach?
LOAD DATA LOCAL INFILE 'filename.csv' INTO TABLE `table` CHARACTER SET latin1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\r\n';
I am executing this command on Windows 10, with the MySQL command line. My MySQL version is 8.0.
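For reference, the server variables that usually govern this kind of timeout are the ones below; the question does not say which were changed, so treat this list as an assumption:
-- inspect the current limits
SHOW VARIABLES LIKE '%timeout%';
SHOW VARIABLES LIKE 'max_allowed_packet';
-- raise the ones that typically interrupt a long LOAD DATA LOCAL INFILE (values are illustrative)
SET GLOBAL net_read_timeout = 86400;
SET GLOBAL net_write_timeout = 86400;
SET GLOBAL wait_timeout = 86400;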
The way I have handled this in the past is by writing a PHP script that reads the file, writes the top 50% into a new file, and then deletes those rows from the original. Then perform two LOAD DATA INFILE runs, one for the original file and one for the new one.
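A sketch of that two-step load, assuming the split produced filename_part1.csv and filename_part2.csv (hypothetical names):
-- first half of the split file
LOAD DATA LOCAL INFILE 'filename_part1.csv' INTO TABLE `table` CHARACTER SET latin1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\r\n';
-- second half, run as a separate statement so each connection stays well under the timeout
LOAD DATA LOCAL INFILE 'filename_part2.csv' INTO TABLE `table` CHARACTER SET latin1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\r\n';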

Table `generic_table_name` doesn't exist using 'LOAD DATA LOCAL INFILE' for MySQL

I'm under the impression that this command will create a table for you. Is this the wrong idea? I have this:
LOAD DATA LOCAL INFILE '/Users/michael/WebstormProjects/d3_graphs_with_node/public/DFData/ADWORDS_AD_DATA.csv'
INTO TABLE ADWORDS_AD_DATA
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(campaign_id,adgroup_id,ad_group,campaign,account,week,keyword,keyword_state,match_type,max_cpc,clicks,impressions,ctr,avg_cpc,avg_cpm,cost,avg_position,quality_score,first_page_cpc,first_position_cpc,converted_clicks,click_conversion_rate,cost_converted_click,cost_all_conv,total_conv_value,conversions,cost_conv,conv_rate)
I counted the columns and there are 28, and I made exactly 28 entries after IGNORE 1 LINES. I looked at these SO posts and tried to follow the syntax outlined in the MySQL documentation. Any ideas?
mysql-load-local-infile
import-csv-to-mysql-table
Okay, I manually created the table and set the column names, and it worked!
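LOAD DATA does not create the table for you, so it has to exist first. A minimal sketch of creating it by hand; the question only gives the column names, so every type below is a guess:
CREATE TABLE ADWORDS_AD_DATA (
    campaign_id           BIGINT,         -- types are assumptions; adjust to the real data
    adgroup_id            BIGINT,
    ad_group              VARCHAR(255),
    campaign              VARCHAR(255),
    account               VARCHAR(255),
    week                  VARCHAR(32),
    keyword               VARCHAR(255),
    keyword_state         VARCHAR(64),
    match_type            VARCHAR(32),
    max_cpc               DECIMAL(12,2),
    clicks                INT,
    impressions           INT,
    ctr                   VARCHAR(16),
    avg_cpc               DECIMAL(12,2),
    avg_cpm               DECIMAL(12,2),
    cost                  DECIMAL(12,2),
    avg_position          DECIMAL(6,2),
    quality_score         INT,
    first_page_cpc        DECIMAL(12,2),
    first_position_cpc    DECIMAL(12,2),
    converted_clicks      INT,
    click_conversion_rate VARCHAR(16),
    cost_converted_click  DECIMAL(12,2),
    cost_all_conv         DECIMAL(12,2),
    total_conv_value      DECIMAL(14,2),
    conversions           DECIMAL(12,2),
    cost_conv             DECIMAL(12,2),
    conv_rate             VARCHAR(16)
);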
As a side note, I had two versions of MySQL running, which was also causing confusion about why the table/database I created in phpMyAdmin wasn't showing up in the MySQL shell on the command line.
MAMP MySQL vs. system MySQL

MySQL skips some rows when importing .csv file

I wanted to import a .csv file with ~14k rows into MySQL. However, I only got a table with about 13k rows. I checked and found that MySQL skips some rows in the middle of my .csv file.
I used LOAD DATA INFILE and I really cannot understand why MySQL skips those rows. I would really appreciate it if someone could help me with this.
Here is my query:
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;
There is no warning message at all.
I assume that the difference in the number of rows is caused by unique-key duplicates.
From the reference:
With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key errors become warnings and the operation continues.
In practice, though, no warnings are produced for these duplicates. To check which rows were skipped, you can load the data into another, similar table without unique keys and compare the two tables.
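A sketch of that comparison, assuming the unique constraint is the primary key and using hypothetical names (q.hmm_check for the key-less copy, id for the key column):
-- copy the table structure, then drop the key so every line in the file survives the load
CREATE TABLE q.hmm_check LIKE q.hmm;
ALTER TABLE q.hmm_check DROP PRIMARY KEY;

LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm_check
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;

-- key values that appear more than once are the rows that were silently dropped
SELECT id, COUNT(*) AS copies
FROM q.hmm_check
GROUP BY id
HAVING COUNT(*) > 1;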

MySQL append/insert from a different server

I have a table on the development box with exactly the same format as one on the production server. The data on the development box needs to be appended/inserted into the production table, where I don't have permission to create a table.
I was thinking about doing something like insert into production_table select * from develop_table; however, since I cannot create a new table develop_table on production, this is impossible.
I am using Sequel Pro, and I don't know whether there is a way to export my development table to a file (CSV/SQL) so that I can run some command on my client side to load from that file into production without overwriting the production table.
Assuming your production table has primary/unique key(s), you can export the data from your development server as a .csv file and load it into your production server with load data, specifying whether you want to replace or ignore the duplicated rows.
Example:
On your development server, export the data to a .csv file. You can use select ... into outfile to do that:
select *
into outfile '/home/user_dev/your_table.csv'
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n'
from your_table;
In your production server, copy the your_table.csv file and load it using load data...:
load data infile '/home/user_prod/your_table.csv'
replace -- This will replace any rows with duplicated primary | unique key values.
-- If you don't want to replace the rows, use "ignore" instead of "replace"
into table your_table
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n';
Read the reference manual sections on select ... into outfile and load data for additional information.