How to load data into 1 table with 3 columns from a CSV file that has 5 columns? - sql-loader

My CSV data:
10,ABC,10000,101,DEPARTMENT
11,XYZ,,,DEPT2
I want to insert it into a table with 3 columns:
EMPID,EMPNAME,DEPARTMENT

In the control file, give the fields you don't want a name that does not match any column in the table and mark them FILLER. FILLER causes sqlldr to ignore that field. You should already have identified which fields in the CSV map to which columns, but I will assume fields 1, 2 and 5 map to id, name and dept:
...
(
empid,
empname,
x1 FILLER,
x2 FILLER,
department
)
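For reference, a complete control file around that field list might look like the sketch below; the table name emp, the data file name emps.csv and the APPEND option are illustrative assumptions, since only the field list was shown above.
LOAD DATA
INFILE 'emps.csv'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(
empid,
empname,
x1 FILLER,  -- field 3 (e.g. 10000) is ignored
x2 FILLER,  -- field 4 (e.g. 101) is ignored
department
)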

Related

Reporting Services - show multiple values for a column horizontally rather than vertically

I have a report where row data can have the same data, apart from the data in the last column. Just adding the data to a table results in this:
Column 1   Column 2   Column 3   Column 4
1          abc        1111       234345
1          def        2222       435656
1          def        2222       423233
1          xyz        1234       145423
I want to show the data like this, where if a set of rows shares the first three columns but has multiple Column 4 values, the additional Column 4 values are added horizontally:
Column 1   Column 2   Column 3   Column 4   Column 4
1          abc        1111       234345
1          def        2222       435656     423233
1          xyz        1234       145423
I've tried adding a Parent Group to Column 4, which is close to what I want, but every row is given its own column for its Column 4 value, so it ends up like this:
Column 1   Column 2   Column 3   Column 4   Column 4   Column 4   Column 4
1          abc        1111       234345
1          def        2222                  435656     423233
1          xyz        1234                                        145423
etc...
Is there a way to achieve the layout I require?
You can do this with a small change to your dataset query.
Here I have recreated your sample data as a table variable called @t. Then I query the table and add a column which gives us a unique index for each Column4 value within each Column1-Column3 group:
DECLARE @t TABLE (Column1 int, Column2 varchar(10), Column3 int, Column4 int)
INSERT INTO @t VALUES
(1, 'abc', 1111, 234345),
(1, 'def', 2222, 435656),
(1, 'def', 2222, 423233),
(1, 'xyz', 1234, 145423)
SELECT
*
, ROW_NUMBER() OVER(PARTITION BY Column1, Column2, Column3 ORDER BY Column4) as Col4Index
FROM @t
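For reference, the query returns the original columns plus Col4Index, which restarts at 1 within each Column1-Column3 group (with ORDER BY Column4, 423233 gets index 1 and 435656 gets index 2):
Column1  Column2  Column3  Column4  Col4Index
1        abc      1111     234345   1
1        def      2222     423233   1
1        def      2222     435656   2
1        xyz      1234     145423   1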
Now in your report, add a matrix with one row group. This will group on Column1, Column2 and Column3.
Now add a column group that is grouped on Col4Index.
Add your first 3 columns to the matrix, making sure they are all in the single row group (just add additional columns inside the group first and then select the correct field for each).
Drop the Column4 field into the [Data] placeholder and finally set the header for this column to an expression (optional) like this:
="Column4_" & Fields!Col4Index.Value

How can I copy rows from one table to another with different column data

I have two tables, Table 1 and Table 2, as shown here.
Table 1:
ID    IMG_PATH   CAT_ID
166   hfhbf      1
164   jgj        2
162   ggd        1
160   mfnf       1
158   dbd        2
Table 2:
ID    IMG_PARENT_ID
166   165
164   163
162   161
160   159
158   157
Here the ID column holds Table 1's ID values (e.g. 166) and IMG_PARENT_ID holds ID - 1 (e.g. 165). These are the values I need, as shown in Table 2.
And don't suggest this manual method:
INSERT INTO tabla2
SELECT * FROM tabla1
WHERE id = 1 -- here we write the condition
I want this done in one query, because around 10,000 rows are inserted in this table.
Lots of tries, but I didn't get it.
Based on the info you provided in your question, this is what I understand.
Assuming that table 1 is auto_increment with IDs of 1-10,000, you can use this to select the even IDs from table 1 and insert them into table 2:
insert into table2 (ID) select ID from table1 group by ID having mod(ID, 2) = 0;
To select the odd IDs from table 1 and insert them into table 2, you can use this:
insert into table2 (IMG_PARENT_ID) select ID from table1 group by ID having mod(ID, 2) = 1;
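Given the desired output, where IMG_PARENT_ID is always the Table 1 ID minus 1, a single set-based insert is a more direct sketch (assuming table2 has both columns):
insert into table2 (ID, IMG_PARENT_ID)
select ID, ID - 1 from table1;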

How to pick up a date from a long string Name column in Oracle

I have a table with columns ID and File_Name:
Table:
ID   File_Name
123  ROSE1234_LLDAtIInstance_03012014_04292014_190038.zip
456  ROSE1234_LLDAtIInstance_08012014_04292014_190038.zip
All I need is to pick up the first date given in the file name.
Required:
ID   Date
123  03012014
456  08012014
Here's one method, assuming the date is always the 8 characters after the 2nd underscore.
It finds the position of the first underscore, then looks for the 2nd underscore starting from the position of the first underscore + 1, and then takes the 8 characters after the 2nd underscore:
SELECT Id
, substr(File_name, instr(File_name,'_',instr(File_name,'_')+1)+1, 8) as "Date"
FROM Table
or
a more elegant way would be to use the REGEXP_INSTR function, which eliminates the need for nesting instr:
SELECT Id, substr(File_name, REGEXP_INSTR(File_name,'_',1,2)+1, 8) as "Date"
FROM Table;
Why don't you simply put the date in a separate column? E.g. you can then query the (indexed) date. The theory says the date is a property of the file. It's about avoiding errors, maintainability and so on. What's in the zip files? Excel sheets, I suppose :-)
Use a much simplified call to REGEXP_SUBSTR( ):
SQL> with tbl(ID, File_name) as (
  2    select 123, 'ROSE1234_LLDAtIInstance_03012014_04292014_190038.zip' from dual
  3    union
  4    select 456, 'ROSE1234_LLDAtIInstance_08012014_04292014_190038.zip' from dual
  5  )
  6  select ID,
  7         REGEXP_SUBSTR(File_name, '_(\d{8})_', 1, 1, NULL, 1) "Date"
  8  from tbl;

        ID Date
---------- ----------------------------------------------------
       123 03012014
       456 08012014

SQL>
For 11g, see the Oracle documentation for the parameters to REGEXP_SUBSTR().
EDIT: Making this a virtual column would be another way to handle it. Thanks to Epicurist's post for the idea. The virtual column will contain a date value holding the filename date once the ID and filename are committed. Add it like this:
alter table X_TEST add (filedate date generated always as (TO_DATE(REGEXP_SUBSTR(Filename, '_(\d{8})_', 1, 1, NULL, 1), 'MMDDYYYY')) virtual);
So now just insert the ID and Filename, commit, and there's your filedate. Note that it's read-only.
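A quick usage sketch, assuming X_TEST has ID and Filename columns as implied by the ALTER above:
-- insert only the ID and the filename; filedate is derived automatically
INSERT INTO x_test (id, filename)
VALUES (123, 'ROSE1234_LLDAtIInstance_03012014_04292014_190038.zip');
COMMIT;
-- filedate now holds 01-MAR-2014 (the MMDDYYYY mask applied to '03012014')
SELECT id, filedate FROM x_test;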

How can I get missing values recorded as NULL when importing from CSV

I have multiple large CSV files, each of which has missing values in many places. When I import a CSV file into SQLite, I would like the missing values to be recorded as NULL, because another application expects missing data to be indicated by NULL. My current method does not produce the desired result.
An example CSV file (test.csv) is:
12|gamma|17|delta
67||19|zeta
96|eta||theta
98|iota|29|
The first line is complete; each of the other lines has (or is meant to show!) a single missing item. When I import using:
.headers on
.mode column
.nullvalue NULL
CREATE TABLE t (
id1 INTEGER PRIMARY KEY,
a1 TEXT,
n1 INTEGER,
a2 TEXT
);
.import test.csv t
SELECT
id1, typeof(id1),
a1, typeof(a1),
n1, typeof(n1),
a2, typeof(a2)
FROM t;
the result is
id1   typeof(id1)  a1      typeof(a1)  n1  typeof(n1)  a2      typeof(a2)
----  -----------  ------  ----------  --  ----------  ------  ----------
12    integer      gamma   text        17  integer     delta   text
67    integer              text        19  integer     zeta    text
96    integer      eta     text            text        theta   text
98    integer      iota    text        29  integer             text
so the missing values have become text. I would appreciate some guidance on how to ensure that all missing values become NULL.
sqlite3 imports values as text and there does not seem to be a way to make it treat empty values as nulls.
However, you can update the tables yourself after import, setting empty strings to nulls, like
UPDATE t SET a1=NULL WHERE a1='';
Repeat for each column.
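Alternatively, NULLIF lets you clean every column in a single statement (a sketch for the example table t):
UPDATE t SET
a1 = NULLIF(a1, ''),
n1 = NULLIF(n1, ''),
a2 = NULLIF(a2, '');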
You can also create a trigger for such updates:
CREATE TRIGGER trig_a1 AFTER INSERT ON t WHEN new.a1='' BEGIN
UPDATE t SET a1=NULL WHERE rowid=new.rowid;
END;
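The same pattern covers the remaining columns; for example, a sketch of the corresponding trigger for the integer column n1 (empty imported values arrive as empty text, so the comparison with '' still applies):
CREATE TRIGGER trig_n1 AFTER INSERT ON t WHEN new.n1='' BEGIN
UPDATE t SET n1=NULL WHERE rowid=new.rowid;
END;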
For cases where you cannot update after import, because the import itself will fail when an empty string (text columns) or 0 (integer columns) is inserted instead of NULL, see my answer to this other stackoverflow question.

SQL Loader - data not uploaded to the table. All went to .bad

I tried to upload some records into my table ABC. None of the records went through and they all showed up in the .bad file.
I am pretty new to sqlldr and not quite sure where I messed up. Let me show you the steps I took.
First, I created an empty table called ABC.
create table abc
(
location_id varchar2(10),
sold_month date,
item_name varchar2(30),
company_id varchar2(10),
qty_sold number(10),
total_revenue number(14,3),
promotional_code varchar2(10)
);
Here is my flat file abcflat.dat. The columns correspond to the columns in the table above.
"1000","02/01/1957","Washing Machine","200011","10","10000","ABCDE"
"1000","05/02/2013","Computer","200012","5","5000","ABCDE"
"1000","05/01/2013","Bolt","200010","100","500","ABCDE"
"1000","05/03/2013","Coca Cola","200011","1000","1000","ABCDE"
Here is my control file abc.ctl
LOAD DATA
INFILE 'C:\Users\Public\abcflat.dat'
INTO TABLE ABC
FIELDS TERMINATED BY ","
enclosed by '"'
(
Location_ID
, Sold_month
, item_name
, Company_id
, QTY_Sold
, Total_revenue
, Promotional_Code
)
And my last step
sqlldr hr/open#xe control=c:\users\public\abc.ctl
It says
Commit point reached - logical record count 3
Commit point reached - logical record count 4
but none of the records showed up in my ABC table.
Thank You
It's most probably the date format, try this:
LOAD DATA
INFILE 'C:\Users\Public\abcflat.dat'
INTO TABLE ABC
FIELDS TERMINATED BY ","
enclosed by '"'
(
Location_ID
, Sold_month DATE "DD/MM/YYYY"
, item_name
, Company_id
, QTY_Sold
, Total_revenue
, Promotional_Code
)
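After reloading with this control file, a quick sanity check (a sketch; run in SQL*Plus or similar) shows whether the rows arrived and the dates parsed as intended:
SELECT location_id,
       TO_CHAR(sold_month, 'YYYY-MM-DD') AS sold_month,
       item_name,
       qty_sold
FROM abc;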