File path:
D:\TEST\Sales Data-20Nov2017.csv
Table1
EMP   NAME   SALES Till Date   SALES 2015-16
=============================================
A     Sam    50                30
B     Bob    40                60
C     Cat    30                20
D     Doll   20                50
E     Eric   10                25
F     Fed    15                10
How can I import the CSV file from the above path, with the date part of the file name supplied dynamically, using a SQL query?
The CSV file contains the data above.
Kindly suggest a SQL query that uses the above path with a dynamic date.
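Since the DBMS isn't specified, here is a minimal sketch assuming SQL Server (the Windows path suggests it); the target table SalesData and the ddMMMyyyy date pattern are assumptions inferred from the sample file name. BULK INSERT does not accept a variable for the path, so the statement is built dynamically:

-- Build today's file name (e.g. 'Sales Data-20Nov2017.csv'), then bulk insert it.
-- FIRSTROW = 2 skips the header row.
DECLARE @path NVARCHAR(260) =
    N'D:\TEST\Sales Data-' + FORMAT(GETDATE(), 'ddMMMyyyy', 'en-US') + N'.csv';
DECLARE @sql NVARCHAR(MAX) =
    N'BULK INSERT SalesData FROM ''' + @path + N'''
      WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
EXEC sp_executesql @sql;

On MySQL, the analogue would be LOAD DATA INFILE with the statement assembled via PREPARE/EXECUTE.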
I am using MySQL to run my SQL queries.
Below is the structure of the table.
Table-1: job_data
job_id: unique identifier of jobs
actor_id: unique identifier of actor
event: decision/skip/transfer
language: language of the content
time_spent: time spent to review the job in seconds
org: organization of the actor
ds: date, stored as text (described as yyyy/mm/dd, though the sample data below uses mm/dd/yyyy); we use Presto to run it, so no need for a date function
Dataset:
ds           job_id   actor_id   event      language   time_spent   org
========================================================================
11/30/2020   21       1001       skip       English    15           A
11/30/2020   22       1006       transfer   Arabic     25           B
11/29/2020   23       1003       decision   Persian    20           C
11/28/2020   23       1005       transfer   Persian    22           D
11/28/2020   25       1002       decision   Hindi      11           B
11/27/2020   11       1007       decision   French     104          D
11/26/2020   23       1004       skip       Persian    56           A
11/25/2020   20       1003       transfer   Italian    45           C
I am trying to find the percentage share of each language across contents: calculate the percentage share of each language in the last 30 days.
My Query
SELECT language,
ROUND(100.0 * SUM(IF(event IN ('transfer', 'decision'), 1, 0)) / COUNT(job_id), 2) AS percentage_share
FROM job_data
WHERE ds BETWEEN DATE_SUB(CURDATE(), INTERVAL 7 DAY) AND CURDATE()
GROUP BY language;
0 rows returned; I am not getting any result whatsoever.
You need to parse the dates since they're not stored in the format that MySQL can parse automatically.
WHERE STR_TO_DATE(ds, '%m/%d/%Y') BETWEEN ...
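Put together, the corrected query might look like this (the window is also widened to 30 days to match the stated requirement; note that with CURDATE() as the anchor, the 2020 sample rows will only appear if they fall inside that window):

SELECT language,
       ROUND(100.0 * SUM(IF(event IN ('transfer', 'decision'), 1, 0)) / COUNT(job_id), 2) AS percentage_share
FROM job_data
WHERE STR_TO_DATE(ds, '%m/%d/%Y')
      BETWEEN DATE_SUB(CURDATE(), INTERVAL 30 DAY) AND CURDATE()
GROUP BY language;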
I need to convert the data of two sheets from one Excel workbook into CSV files. The data section starts at the 8th row and 2nd column of each sheet; the column header is on the 7th row, followed by the data. How can this be done in Unix shell scripting?
https://linoxide.com/linux-how-to/methods-convert-xlsx-format-files-csv-linux-cli/
I read a couple of articles, but none gives any idea how to start reading/converting a sheet from a certain column and row.
The Excel data in the sheet is as follows:
This is the information of employee in company FRDN
This is data of year 2019
EMPLOYEE_ID  FIRST_NAME  EMAIL     PHONE_NUMBER  HIRE_DATE  JOB_ID   SALARY  COMMISSION_PCT  MANAGER_ID  DEPARTMENT_ID  LAST_NAME
100          Steven      SKING     515.123.4567  6/17/1987  AD_PRES  24000                               90             King
101          Neena       NKOCHHAR  515.123.4568  9/21/1989  AD_VP    17000                   100         90             Kochhar
102          Lex         LDEHAAN   515.123.4569  1/13/1993  AD_VP    17000                   100         90             De Haan
103          Alexander   AHUNOLD   590.423.4567  1/3/1990   IT_PROG  9000                    102         60             Hunold
104          Bruce       BERNST    590.423.4568  5/21/1991  IT_PROG  6000                    103         60             Ernst
A CSV file is needed for each Excel sheet, with the data starting from a certain row and column.
This command will help you convert your Excel file into CSV (note it converts only the first sheet of the workbook by default):
libreoffice -display :0 --headless --convert-to csv --outdir "/path/" "/path/FileName.xls"
To then keep the header row (row 7) and everything below it, and drop the first column, a plain text pipeline works (output file name illustrative; this assumes no embedded commas in the data):
tail -n +7 "/path/FileName.csv" | cut -d, -f2- > "/path/FileName_trimmed.csv"
So I think I am having a simple problem.
I have a dataframe (my_data) that looks like this:
Treatment Amount Duration
a 5 3000
b 8 2000
c 6 1000
d 2 5000
Now I want to create a new dataframe (my_data_1) that adds a new column based on a simple function, Duration/Amount.
my_data_1 should look like this:
Treatment Amount Duration Mean duration
a 5 3000 600
b 8 2000 250
c 6 1000 167
d 2 5000 2500
I tried to write a function and apply it to my dataframe:
mean_duration <- function(md){my_data$Duration / my_data$Amount}
my_data_1$md <- with(my_data, ave(Duration, Amount, FUN = mean_duration))
Where did I go wrong?
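For what it's worth, a minimal sketch of a fix in base R (names taken from the question, with Mean_duration standing in for the "Mean duration" column): the helper ignores its argument and always returns the full my_data$Duration / my_data$Amount vector, ave() treats Amount as a grouping variable rather than a divisor, and my_data_1 does not exist yet when its md column is assigned. A plain vectorized division avoids all three problems:

# Duration / Amount per row, rounded to whole numbers as in the expected output
my_data_1 <- transform(my_data, Mean_duration = round(Duration / Amount))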
I'm kinda new to all the SSIS stuff, and I'm stuck with it. I want to combine multiple CSV files and then put them into a database. The files all have the same layout. Examples:
File 1
Week Text1
22-10-2018 58
29-10-2018 12
File 2
Week Text2
22-10-2018 55
29-10-2018 48
File 3
Week Text3
22-10-2018 14
29-10-2018 99
Expected result:
Result in DB
Week Text1 Text2 Text3
22-10-2018 58 55 14
29-10-2018 12 48 99
I got this far by selecting the documents, using a sort, and then a merge join. For 3 documents this took me 3 sorts and 2 merge joins. I have to do this for about 86 documents; there has to be an easier way.
Thanks in advance.
I agree with KeithL; I recommend that your final table look like this:
Week Outcome Value DateModified
=======================================================
22-10-2018 AI 58 2018-10-23 20:49
29-10-2018 AI 32 2018-10-23 20:49
22-10-2018 Agile 51 2018-10-23 20:49
29-10-2018 Agile 22 2018-10-23 20:49
If you want to pivot Weeks or outcomes, do it in your reporting tool.
Don't create tables with dynamically named columns; that's a bad idea.
Anyway, here is an approach that uses a staging table.
Create a staging table that your file will be inserted into:
Script 1:
CREATE TABLE Staging (
    [Week] VARCHAR(50),   -- holds the literal text 'Week' on the header row, dates elsewhere
    Value VARCHAR(50),    -- holds the heading name on the header row, values elsewhere
    DateModified DATETIME2(0) DEFAULT(GETDATE())
)
Import the entire file, including headings. In other words, when defining the file format, don't tick 'columns in first row'.
We do this for two reasons:
SSIS can't import files with different heading names using the same data flow
We need to capture the heading name in our staging table
After you import a file your staging table looks like this:
Week Value DateModified
=======================================
Week Agile 2018-10-23 20:49
22-10-2018 58 2018-10-23 20:49
29-10-2018 32 2018-10-23 20:49
Now select out the data in the shape we want to load it in. Run this in your database after importing the data to check:
Script 2:
SELECT [Week], Value,
       (SELECT TOP 1 Value FROM Staging WHERE [Week] = 'Week') AS Outcome  -- the header row supplies the outcome name
FROM Staging
WHERE [Week] <> 'Week'  -- exclude the header row itself
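Run against the example staging rows above, Script 2 would return:

Week         Value   Outcome
============================
22-10-2018   58      Agile
29-10-2018   32      Agile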
Now add an INSERT and some logic to stop duplicates. Put this into an Execute SQL Task after the data import.
Script 3:
WITH SRC AS (
    SELECT [Week], Value,
           (SELECT TOP 1 Value FROM Staging WHERE [Week] = 'Week') AS Outcome
    FROM Staging
    WHERE [Week] <> 'Week'
)
INSERT INTO FinalTable ([Week], Value, Outcome)
SELECT [Week], Value, Outcome
FROM SRC
WHERE NOT EXISTS (
    SELECT * FROM FinalTable TGT
    WHERE TGT.[Week] = SRC.[Week]
    AND TGT.Outcome = SRC.Outcome
)
Now you wrap this up in a Foreach File loop that repeats this for each file in the folder. Don't forget that you need to TRUNCATE TABLE Staging before importing each file.
In Summary:
Set up a for each file iterator
Inside this goes:
A SQL Task with TRUNCATE TABLE Staging;
A data flow to import the text file from the iterator into the staging table
A SQL Task with Script 3 in it
I've put the DateModified columns in the tables to help you troubleshoot.
Good thing: you can run this over and over, reimporting the same file, and you won't get duplicates.
Bad thing: possibility of cast failures when inserting VARCHAR into DATE or INT.
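One hedged way to surface those cast failures before inserting (SQL Server 2012+; column names as in the staging table above): TRY_CONVERT returns NULL instead of raising an error when a cast fails.

-- Rows where either converted column is NULL would have failed the cast.
SELECT [Week], Value,
       TRY_CONVERT(DATE, [Week], 105) AS WeekAsDate,  -- style 105 = dd-mm-yyyy
       TRY_CONVERT(INT, Value) AS ValueAsInt
FROM Staging
WHERE [Week] <> 'Week';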
You can read your file(s) using a simple C# script component (Source).
You need to add your 3 columns to Output 0:
Week as DT_DATE
Type as DT_STR
Value as DT_I4
string[] lines = System.IO.File.ReadAllLines([filename]); // [filename]: placeholder for your file path variable
int ctr = 0;
string type = string.Empty; // initialize so the compiler treats it as definitely assigned
foreach (string line in lines)
{
    string[] col = line.Split(',');
    if (ctr == 0) // first line is the header; capture its second heading (Text1/Text2/...)
    {
        type = col[1];
    }
    else
    {
        Output0Buffer.AddRow();
        // dates in the files look like 22-10-2018, so parse them explicitly
        Output0Buffer.Week = DateTime.ParseExact(col[0], "dd-MM-yyyy", System.Globalization.CultureInfo.InvariantCulture);
        Output0Buffer.Type = type;
        Output0Buffer.Value = int.Parse(col[1]);
    }
    ctr++;
}
After you load into a table, you can always create a view with a dynamic pivot.
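A sketch of that dynamic pivot, assuming the FinalTable([Week], Outcome, Value) shape from the earlier scripts and SQL Server 2017+ for STRING_AGG:

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build the column list from the distinct outcome names (Text1, Text2, ...).
SELECT @cols = STRING_AGG(QUOTENAME(Outcome), ',')
FROM (SELECT DISTINCT Outcome FROM FinalTable) AS o;

SET @sql = N'SELECT [Week], ' + @cols + N'
FROM (SELECT [Week], Outcome, Value FROM FinalTable) AS src
PIVOT (MAX(Value) FOR Outcome IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql;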
In MySQL I have a table:
SELECT * FROM Dummy;
The columns are like below:
id   name   salary   binary_col
================================
1    a      100      A.pdf
2    b      200      b.pdf
3    c      300      c.pdf
4    d      400      d.pdf
5    e      500      e.pdf
I am using SQLyog 7.01.
In total I have 200 records in my MySQL database table. Now I want to EXPORT all 200 records and IMPORT them into an Oracle database table. (My binary_col datatype is LONGBLOB.)
Thanks
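One hedged approach (file path illustrative): hex-encode the BLOB on the MySQL side so the export stays plain text, then load the file into Oracle in a separate step (e.g. SQL*Loader or SQL Developer's migration wizard), decoding the hex back into the BLOB column there.

-- Export with the BLOB hex-encoded; requires the FILE privilege and a
-- writable secure_file_priv location.
SELECT id, name, salary, HEX(binary_col) AS binary_hex
FROM Dummy
INTO OUTFILE '/tmp/dummy_export.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';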