SQL*Loader script to load data into multiple tables not working, records are getting discarded - sql-loader

I need help with the SQL*Loader script below. I have created this test script to load the data into 3 tables based on the Work_Location and Work_Type columns, but the data is not getting inserted correctly into the 3 tables. I am not able to understand how to do multiple comparisons on the data in the WHEN clause. Please help.
LOAD DATA
INFILE *
REPLACE
INTO TABLE xx_dumy_test_emp_table
WHEN (WORK_TYPE='EMP') AND (Work_Location = 'IND')
FIELDS TERMINATED BY ';'
TRAILING NULLCOLS
(
Work_Type
,Employee_Role
,First_name
,Last_Name
,Work_Location
)
INTO TABLE xx_dumy_test_oversea_employess
WHEN (WORK_TYPE='EMP') AND (Work_Location <> 'IND')
FIELDS TERMINATED BY ';'
TRAILING NULLCOLS
(
Work_Type POSITION(1:3)
,Employee_Role
,First_name
,Last_Name
,Work_Location
)
INTO TABLE xx_dumy_test_mgr_table
WHEN (WORK_TYPE='MGR') AND (Work_Location = 'IND')
FIELDS TERMINATED BY ';'
TRAILING NULLCOLS
(
Work_Type POSITION(1:3)
,Employee_Role
,First_name
,Last_Name
,Work_Location
)
--DATA OF THE SCRIPT
EMP;Test Ops1;Gautam;Hoshing;IND;
MGR;Test Ops2;Steve;Tyler;IND;
EMP;Test Ops3;Hyana;Motler;JPY;
Regards,
Gautam.
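For reference, a commonly suggested adjustment for multi-table loads of delimited data like this is to reset the scan position with POSITION(1) in every INTO TABLE clause after the first, because each INTO TABLE clause otherwise continues scanning the record from where the previous one stopped. A minimal, untested sketch of the second INTO TABLE clause (table, column names and WHEN conditions taken from the question) might look like this:
INTO TABLE xx_dumy_test_oversea_employess
WHEN (WORK_TYPE='EMP') AND (Work_Location <> 'IND')
FIELDS TERMINATED BY ';'
TRAILING NULLCOLS
(
Work_Type POSITION(1) CHAR TERMINATED BY ';' -- reset to the start of the record, then read up to the next ';'
,Employee_Role
,First_name
,Last_Name
,Work_Location
)
Note also that with INFILE *, the data lines need to be preceded by a BEGINDATA line in the control file.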

Related

mysql table formatting and infile issue

Each time I go and create my table and then add the .csv file, it does not fully load all of the data. Below is my code:
CREATE TABLE Parts_Maintenance (
Vehicle_ID BIGINT(20),
State VARCHAR(255),
Repair VARCHAR(255),
Reason VARCHAR(255),
Year YEAR,
Make VARCHAR(255),
Body_Type VARCHAR(255),
PRIMARY KEY (Vehicle_ID)
);
LOAD DATA INFILE '/home/codio/workspace/FleetMaintenanceRecords.csv'
INTO TABLE Parts_Maintenance
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\\n'
IGNORE 1 ROWS;
SELECT * FROM Parts_Maintenance;
Here is a photo of what it looks like in Codio:
And here is a photo of some of the data being brought in:
Could someone please help me pinpoint what I am doing wrong?
I tried to create the table and bring in a .csv file. The table was created, but the data is not all there and the table looks messed up.
I agree with @barmer. Your LINES TERMINATED BY is probably wrong; the file's line endings probably include '\r'.
You may use:
LOAD DATA INFILE '/home/codio/workspace/FleetMaintenanceRecords.csv'
INTO TABLE Parts_Maintenance
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 ROWS;
Alternatively, you can use another method with MySQL Workbench:
- Right-click on the table
- Table Data Import Wizard
- Choose the file path
- Select the destination table
- Check and configure import settings
- Click Next to execute
This is the best and simplest solution.

Update specific column in MySQL from CSV

I am looking to import data from a CSV file into MySQL. The CSV is roughly 100 lines and looks as follows.
data1,num1
data2,num2
The MySQL database looks as follows.
aaa  | b | c | ... | ggg
-----|---|---|-----|----
data | * | * | ... | num
What I am looking to do is find data1 in the MySQL table and replace num with num1 on that line, and to do this for each line in the CSV.
My database knowledge is on the lower end so any help would be appreciated.
UPDATE:
Here is where I am at now. I cannot seem to get this to update more than the first line from the .csv file.
DROP TEMPORARY TABLE IF EXISTS tempTable;
CREATE TEMPORARY TABLE tempTable
(`data` varchar(20) NOT NULL,
`num` varchar(20) NOT NULL
);
LOAD DATA LOCAL INFILE 'file.csv'
INTO TABLE tempTable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n';
UPDATE DBTable t1
JOIN tempTable t2 ON t1.data = t2.data
SET t1.num = t2.num;
DROP TEMPORARY TABLE tempTable;
You could load the CSV file into a new table, then extract the data with a JOIN.
Create a temporary table; try to use the same data types that you have in your table.
CREATE TABLE IF NOT EXISTS `temp_table_csv` (
`data` char(20) NOT NULL,
`num` bigint(20) NOT NULL
);
Then load the CSV into this new temporary table; we assume your CSV file has a header row.
LOAD DATA INFILE 'csvfile.csv'
INTO TABLE `temp_table_csv`
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
Then make a JOIN and write the result into a file.
SELECT `sx`.`data`, `dx`.`num`
FROM `temp_table_csv` AS `sx` INNER JOIN `datatable` AS `dx`
ON `dx`.`data` = `sx`.`data`
INTO OUTFILE 'newcsv.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
Finally, you can remove the temporary table created previously.
DROP TABLE `temp_table_csv`;
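If the goal is to update the existing table in place rather than write a merged file, the same temporary table can also drive an UPDATE with a JOIN, much like the attempt shown in the question. A sketch, assuming the target is the question's hypothetical DBTable with columns data and num:
UPDATE `DBTable` AS `dx`
INNER JOIN `temp_table_csv` AS `sx` ON `dx`.`data` = `sx`.`data`
SET `dx`.`num` = `sx`.`num`;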

LOAD DATA INFILE - Two Default Columns

I want to load data from a CSV file into a table in my database. However, the first two columns are an ID (AUTO INCREMENT, PRIMARY KEY) and CURRENT DATE (CURRENT DATE ON UPDATE) field. The source file will therefore not have these fields.
How would I get the file to load? The below code doesn't work. I have also tried leaving out id and action_date from the fields in the brackets.
LOAD DATA INFILE '/upliftments//UPLIFTMENTS_20160901.csv'
INTO TABLE database.return_movements
CHARACTER SET latin1
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
IGNORE 1 LINES
( id
, action_date
, current_code
, serial_number
, new_code
);
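One thing worth knowing here: in MySQL LOAD DATA you can list only the columns that actually appear in the file; columns left out of the list are given their default values, and an AUTO_INCREMENT id is generated automatically. A sketch (untested, reusing the statement from the question):
LOAD DATA INFILE '/upliftments//UPLIFTMENTS_20160901.csv'
INTO TABLE database.return_movements
CHARACTER SET latin1
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
IGNORE 1 LINES
( current_code
, serial_number
, new_code
)
SET action_date = CURRENT_TIMESTAMP;  -- only needed if action_date has no default of its own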

Tell sqlldr control file to load missing values as NULL

I have a CSV file. How can I tell the sqlldr control file to load missing values as NULL? (i.e. the table schema allows NULL for certain columns)
Example of CSV
1,Name1
2,Name2
3,
4,Name3
Could you help me edit my control file here so that at line 3, the missing value is inserted as NULL in my table?
Table
Create table test
( id Number(2), name Varchar(10), primary key (id) );
Control file
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ','
(id CHAR,
name CHAR
)
I believe all you should have to do is this:
name CHAR(10) NULLIF(name=BLANKS)
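For context, that NULLIF clause sits in the column list of the control file from the question; an untested sketch of the whole file:
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ','
(id CHAR,
name CHAR(10) NULLIF (name=BLANKS)
)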
You would have to hint to SQL*Loader that there might be nulls in your data.
There are 2 ways to give that hint to SQL*Loader.
1. Use the TRAILING NULLCOLS option:
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(id CHAR,
name CHAR
)
2. Recreate your CSV files with enclosed fields, so that nulls look like "abcd","", and then use OPTIONALLY ENCLOSED BY '"', which lets SQL*Loader clearly see the nulls in your data (nothing between the quotes):
LOAD DATA INFILE '{path}\CSVfile.txt'
INSERT INTO TABLE test
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id CHAR,
name CHAR
)
I found that using TRAILING NULLCOLS will do the job, BUT only for "blanks" at the end of the record line.
LOAD DATA INFILE {path}\Your_File
INSERT INTO TABLE Your_Table
FIELDS TERMINATED BY ","
TRAILING NULLCOLS
(
... your fields
)

if exists update else insert csv data MySQL

I am populating a MySQL table with a csv file pulled from a third-party source. Every day the csv is updated, and I want to update rows in the MySQL table if an occurrence of columns a, b and c already exists, else insert the row. I used LOAD DATA INFILE for the initial load, but I want to update against a daily csv pull. I am familiar with INSERT ... ON DUPLICATE KEY UPDATE, but not in the context of a csv import. Any advice on how to nest LOAD DATA LOCAL INFILE within INSERT ... ON DUPLICATE KEY UPDATE on a, b, c - or whether that is even the best approach - would be greatly appreciated.
LOAD DATA LOCAL INFILE 'C:\\Users\\nick\\Desktop\\folder\\file.csv'
INTO TABLE db.tbl
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 lines;
Since you use LOAD DATA LOCAL INFILE, it is equivalent to specifying IGNORE: i.e. duplicates would be skipped.
But
If you specify REPLACE, input rows replace existing rows; in other words, an input row replaces any existing row that has the same value for a primary key or unique index.
So your update-import could be:
LOAD DATA LOCAL INFILE 'C:\\Users\\nick\\Desktop\\folder\\file.csv'
REPLACE
INTO TABLE db.tbl
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 lines;
https://dev.mysql.com/doc/refman/5.6/en/load-data.html
If you need more complicated merge logic, you could import the CSV into a temp table and then issue INSERT ... SELECT ... ON DUPLICATE KEY UPDATE, as sketched below.
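A sketch of that temp-table route, reusing the LOAD DATA statement from the question; the column names a, b, c (assumed to be covered by a unique key on db.tbl) and d (a value column) are hypothetical, since the real schema is not shown:
CREATE TEMPORARY TABLE tmp_csv LIKE db.tbl;  -- tmp_csv is a hypothetical staging table
LOAD DATA LOCAL INFILE 'C:\\Users\\nick\\Desktop\\folder\\file.csv'
INTO TABLE tmp_csv
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 lines;
INSERT INTO db.tbl (a, b, c, d)
SELECT a, b, c, d FROM tmp_csv
ON DUPLICATE KEY UPDATE d = VALUES(d);  -- update the non-key column when a, b, c already exist
DROP TEMPORARY TABLE tmp_csv;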
I found that the best way to do this is to load the file with the standard LOAD DATA LOCAL INFILE:
LOAD DATA LOCAL INFILE
INTO TABLE db.table
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 lines;
And use the following to delete duplicates. Note that the below command is comparing db.table to itself by defining it as both a and b.
delete a.* from db.table a, db.table b
where a.id > b.id
and a.field1 = b.field1
and a.field2 = b.field2
and a.field3 = b.field3;
To use this method it is essential that the id field is an auto-increment primary key. The above command then deletes rows that contain duplication on field1 AND field2 AND field3. In this case it deletes the row with the higher of the two auto-increment ids; it works just as well if we use < instead of >, in which case the row with the lower id is deleted.