sqlldr - how to load multiple CSVs into multiple tables - sql-loader

1st question: I am trying to load test1.csv, test2.csv and test3.csv into table1, table2 and table3 respectively using SQLLDR. Please bear with my lack of knowledge in this area; I couldn't get the .ctl file definition quite right. All I can think of is the code below, but it is not correct. So my question is: how can I make this right, and is it even possible?
OPTIONS (SKIP=1)
LOAD DATA
INFILE 'test1.csv'
INFILE 'test2.csv'
INFILE 'test3.csv'
TRUNCATE
INTO TABLE table1
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
Col1 "TRIM(:Col1)",
Col2 "TRIM(:Col2)"
)
INTO TABLE table2
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
Colx "TRIM(:Colx)",
Coly "TRIM(:Coly)"
)
INTO TABLE table3
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
Colp "TRIM(:Colp)",
Colq "TRIM(:Colq)"
)
2nd question: This is an alternative to the first question. Since I couldn't figure out the first one, what I have done is split the load for each table into its own .ctl file and call all three from a .bat file. This works, at least, but my question is: is there a way to process all three .ctl files in one session without mentioning user/password 3 times, as below?
sqlldr userid=user/pass@server control=test1.ctl
sqlldr userid=user/pass@server control=test2.ctl
sqlldr userid=user/pass@server control=test3.ctl

If there is a field that indicates which table that file's data is intended for, you can keep the multiple INFILE statements and do something like this. Let's say the first field is that indicator and it will not be loaded (define it as a FILLER so sqlldr will ignore it):
...
INTO TABLE table1
WHEN (01) = 'TBL1'
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
rec_skip filler POSITION(1),
Col1 "TRIM(:Col1)",
Col2 "TRIM(:Col2)"
)
INTO TABLE table2
WHEN (01) = 'TBL2'
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
rec_skip filler POSITION(1),
Colx "TRIM(:Colx)",
Coly "TRIM(:Coly)"
)
INTO TABLE table3
WHEN (01) = 'TBL3'
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
rec_skip filler POSITION(1),
Colp "TRIM(:Colp)",
Colq "TRIM(:Colq)"
)
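Each record would then carry the indicator in its first field; made-up rows for illustration:
TBL1,value1,value2
TBL2,valuex,valuey
TBL3,valuep,valueq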
So logically, each INTO TABLE ... WHEN section handles one file. Not that flexible though, and arguably harder to maintain. For ease of maintenance you may just want to have a control file for each file. If all files and tables have the same layout, you could also load them all into one staging table (with the indicator) for ease of loading, then split them out programmatically into the separate tables afterwards. The pros of that method are faster, easier loading and more control over the splitting part of the process. Just some other ideas; I have done each method, and it depends on the requirements and what you are able to change.
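Regarding the 2nd question: each sqlldr run is a separate login, so there is no way to share one session across the three control files, but a small .bat wrapper at least keeps the connect string in one place. A sketch, with placeholder credentials:
@echo off
rem store the connect string once and reuse it for each load
set CONN=user/pass@server
sqlldr userid=%CONN% control=test1.ctl
sqlldr userid=%CONN% control=test2.ctl
sqlldr userid=%CONN% control=test3.ctl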

Related

MySQL: Load Data Infile from CSV that contains a comma in a VARCHAR field

Below is the code that I use to import a CSV file into a MySQL database. It works well to split each field and its record.
LOAD DATA INFILE 'file.csv'
INTO TABLE customer FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n'
(
ID, name, salary, address, status
);
However, when there is a VARCHAR or TEXT field which contains a comma (','), it works improperly, because the FIELDS TERMINATED BY ',' clause treats that comma as a field separator.
So, for example, with a customer salary of 50,000 (double), the fields split normally. But if the customer address is Java Road 15, Hong Kong (varchar/text), then Java Road 15 will be saved in the address field, while Hong Kong will be saved to the status field. This basically overwrites whatever should be in the status field. Any clue for this problem? Thanks in advance.
Are the fields enclosed by double quotes or something else? If so, you can add ENCLOSED BY to your query.
FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\r\n';
"Enclosed by" specifies the character to identify the start and end of a field. In your case, field is enclosed by a double quote such as "Java Road 15, Hong Kong". It helps MYSQL to extract the field correctly even if there is a field delimiter in the field.
MYSQL manual: https://dev.mysql.com/doc/refman/5.7/en/load-data.html
LOAD DATA INFILE 'file.csv'
INTO TABLE customer FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(
ID, name, salary, address, status
);
Try this one.
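For reference, this is the kind of input row it parses correctly (made-up data, with quoted fields containing commas):
1,"John","50,000","Java Road 15, Hong Kong","active"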
If anyone happens to stumble upon this answer, I wanted to share because this took me way too long to figure out - using Mac, DataGrip, MySQL 5.7+, and survey response questions:
DROP TABLE IF EXISTS surveyQuestion_ID;
#create table
CREATE TABLE surveyQuestion_ID
(
surveyQuestion_ID INT(11) NOT NULL,
surveyDescription TEXT,
surveyResponse VARCHAR(25) DEFAULT NULL,
PRIMARY KEY (surveyQuestion_ID)
);
#load query
LOAD DATA LOCAL INFILE 'file_location/file.csv' INTO TABLE surveyQuestion_ID
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(surveyQuestion_ID, surveyDescription, surveyResponse);
Hope this helps.

Ignore first two characters in a column while importing CSV to MySQL

I am trying to import a CSV file into a MySQL table, but I need to remove the first two characters in a particular column before importing.
This is my statement:
string strLoadData = "LOAD DATA LOCAL INFILE 'E:/park/Export.csv' INTO TABLE tickets FIELDS terminated by ',' ENCLOSED BY '\"' lines terminated by '\n' IGNORE 1 LINES (SiteId,DateTime,Serial,DeviceId,AgentAID,VehicleRegistration,CarPark,SpaceNumber,GpsAddress,VehicleType,VehicleMake,VehicleModel,VehicleColour,IssueReasonCode,IssueReason,NoticeLocation,Points,Notes)";
The column IssueReasonCode has data like 'LU12', but I need to remove the first 2 characters so that it holds only the integers, not the alphanumeric prefix.
I need to remove 'LU' from that column.
Is it possible to write something like left(IssueReasonCode +' '2)? This column is varchar(45) and can't be changed now because of the large amount of data in it.
Thanks
LOAD DATA INFILE has the ability to perform a function on the data for each column as you read it in (see the MySQL manual: https://dev.mysql.com/doc/refman/5.7/en/load-data.html). In your case, if you wanted to remove the first two characters from the IssueReasonCode column, you could use:
RIGHT(IssueReasonCode, CHAR_LENGTH(IssueReasonCode) - 2)
to remove the first two characters. You specify such column mappings at the end of the LOAD DATA statement using SET. Your statement should look something like the following:
LOAD DATA LOCAL INFILE 'E:/park/Export.csv' INTO TABLE tickets
FIELDS terminated by ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(SiteId, DateTime, Serial, DeviceId, AgentAID, VehicleRegistration, CarPark, SpaceNumber,
GpsAddress, VehicleType, VehicleMake, VehicleModel, VehicleColour, IssueReasonCode,
IssueReason, NoticeLocation, Points, Notes)
SET IssueReasonCode = RIGHT(IssueReasonCode, CHAR_LENGTH(IssueReasonCode) - 2)
Referencing the documentation and quoting its example, you can try the statement below to see if it works.
User variables in the SET clause can be used in several ways. The following example uses the first input column directly for the value of t1.column1, and assigns the second input column to a user variable that is subjected to a division operation before being used for the value of t1.column2:
LOAD DATA INFILE 'file.txt' INTO TABLE t1 (column1, @var1)
SET column2 = @var1/100;
string strLoadData = "LOAD DATA LOCAL INFILE 'E:/park/Export.csv' INTO TABLE tickets FIELDS terminated by ',' ENCLOSED BY '\"' lines terminated by '\n' IGNORE 1 LINES (SiteId,DateTime,Serial,DeviceId,AgentAID,VehicleRegistration,CarPark,SpaceNumber,GpsAddress,VehicleType,VehicleMake,VehicleModel,VehicleColour,@IRC,IssueReason,NoticeLocation,Points,Notes) SET IssueReasonCode = SUBSTR(@IRC, 3);";
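Note that SUBSTR is 1-based, so dropping the first two characters means starting at position 3. A quick check against a sample value:
SELECT SUBSTR('LU12', 3);                      -- returns '12'
SELECT RIGHT('LU12', CHAR_LENGTH('LU12') - 2); -- also returns '12'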

Tell sqlldr control file to load missing values as NULL

I have a CSV file. How can I tell the sqlldr control file to load missing values as NULL? (i.e. the table schema allows NULL for certain columns)
Example of CSV
1,Name1
2,Name2
3,
4,Name3
Could you help me edit my control file here so that at line 3 the missing value is inserted as NULL in my table?
Table
Create table test
( id Number(2), name Varchar(10), primary key (id) );
Control file
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ','
(id CHAR,
name CHAR
)
I believe all you should have to do is this:
name CHAR(10) NULLIF(name=BLANKS)
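Placed in the context of the original control file, that looks like this (a sketch based on the question's file):
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ','
(id CHAR,
name CHAR(10) NULLIF(name=BLANKS)
)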
You would have to hint to SQL*Loader that there might be nulls in your data. There are 2 ways to give that hint.
1. Use the TRAILING NULLCOLS option:
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(id CHAR,
name CHAR
)
2. Recreate your CSV files with enclosed fields and then use OPTIONALLY ENCLOSED BY '"', which lets SQL*Loader clearly see the nulls in your data (nothing between the quotes), e.g. "abcd",""
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id CHAR,
name CHAR
)
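The re-created file for the sample data would then make the null on line 3 explicit:
1,"Name1"
2,"Name2"
3,""
4,"Name3"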
I found that using TRAILING NULLCOLS will do the job, BUT it only applies to "blanks" at the end of the record line.
LOAD DATA INFILE '{path}\Your_File'
INSERT INTO TABLE Your_Table
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
... your fields
)

Is it possible to perform functions during LOAD DATA INFILE without creating a stored procedure?

I have two columns, q1 and q2, that I'd like to sum together and put in the destination column, q.
The way I do it now, I put the data into an intermediate table and sum while inserting into the destination, but I'm wondering if it's possible to do it during the load itself?
Here's my script:
LOAD DATA INFILE 'C:/temp/foo.csv'
INTO TABLE foo_temp
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(q1,q2);
INSERT INTO foo (q) SELECT q1+q2 AS q
FROM foo_temp;
Try:
LOAD DATA INFILE 'C:/temp/foo.csv'
INTO TABLE `foo`
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(@`q1`, @`q2`)
SET `q` = @`q1` + @`q2`;
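If the file can contain \N (NULL) markers in q1 or q2 (an assumption about the data), the sum would come out NULL; wrapping the variables in IFNULL in the last line guards against that:
SET `q` = IFNULL(@`q1`, 0) + IFNULL(@`q2`, 0);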

Different number of rows exporting a table to a csv file in MySQL

I have loaded a table in MySQL (XAMPP) with around 40,000,000 rows. From it I created another table with around 6,000,000 rows, and I exported that table to a CSV file using:
(SELECT ...)
UNION
(SELECT ...
FROM ctr_train0
INTO OUTFILE 'C:/.../file.csv'
FIELDS ENCLOSED BY '"' TERMINATED BY ',' ESCAPED BY '"'
LINES TERMINATED BY '\n');
No errors, but this command creates a CSV file with around 200,000 fewer rows than the original table. What happened? How can I export all 6,000,000 rows? Thanks in advance.
My best guess, given the limited information, is the use of UNION. It removes duplicates from the output, both between the combined SELECTs and within them. So, if your data has duplicates, this removes them.
Try running the query with union all instead:
(SELECT ...)
UNION ALL
(SELECT ...
FROM ctr_train0
INTO OUTFILE 'C:/.../file.csv'
FIELDS ENCLOSED BY '"' TERMINATED BY ',' ESCAPED BY '"'
LINES TERMINATED BY '\n');
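To check whether duplicates account for the gap, you can compare total and distinct row counts in the source first; col1 and col2 below stand in for your real column list:
SELECT COUNT(*) AS total_rows,
COUNT(DISTINCT col1, col2) AS distinct_rows
FROM ctr_train0;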