I have an Excel source and a SQL Server table.
The Excel source columns are:
Mno Price1 Price2
111 10 20
222 30 25
333 40 30
444 34 09
555 23 abc
SQL Server table (table name: Product):
PId Mno Sprice BPrice
1 111 3 50
2 222 14 23
3 444 32 34
4 555 43 45
5 666 21 67
I want to compare the Excel source Mno (model number) with the SQL Server Product table Mno, and where they match I want to update the Product table's SPrice and BPrice.
Please tell me what steps I need to take.
I also want to validate the Excel sheet, because the Price2 column contains string values; if a value is a string, I want to send a mail saying which rows are wrong.
I am new to SSIS, so please give me the details.
Read your new data in a source and use a Lookup component against the existing data. Direct matched rows to an OLE DB Command for the update, and non-matches to a destination for inserts (if you want to enter new products).
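For the OLE DB Command, a parameterized update along these lines should work (just a sketch using the table and column names from the question, and assuming Price1 feeds SPrice and Price2 feeds BPrice; SSIS maps each ? marker to an input column on the Column Mappings tab, in order):

UPDATE Product
SET SPrice = ?,   -- map to Price1 from the Excel source
    BPrice = ?    -- map to Price2
WHERE Mno = ?;    -- map to Mno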
Personally I think the simplest way to do this is to use a data flow to bring the Excel file into a staging table and do any clean-up if need be. Then, as the next step in the control flow, have an Execute SQL Task that does the update. Or, if you need either an update or an insert when the record is new, use a MERGE statement in the Execute SQL Task.
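As a sketch, assuming the staging table is named StagingProduct, keeps the Excel column names, and that Price1 feeds SPrice and Price2 feeds BPrice (PId is assumed to be an identity column), the MERGE in the Execute SQL Task might look like this:

MERGE Product AS tgt
USING StagingProduct AS src
    ON tgt.Mno = src.Mno
WHEN MATCHED THEN
    UPDATE SET tgt.SPrice = src.Price1,
               tgt.BPrice = src.Price2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Mno, SPrice, BPrice)
    VALUES (src.Mno, src.Price1, src.Price2);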
You can use a Merge Join transformation with a full outer join (remember to sort both datasets before they enter the Merge Join), then send the output to a Conditional Split transformation. The Conditional Split can determine whether a row needs to be updated, inserted, or deleted, and direct the flow to the appropriate component.
This was off the top of my head, and there may be a simpler transform for this. I haven't had the opportunity to work with SSIS in almost a year, so I might be getting a bit rusty.
I'm kinda new to all the SSIS stuff, and I'm stuck. I want to combine multiple CSV files and then put them into a database. The files all have the same layout (only the value column's heading differs). Examples:
File 1
Week Text1
22-10-2018 58
29-10-2018 12
File 2
Week Text2
22-10-2018 55
29-10-2018 48
File 3
Week Text3
22-10-2018 14
29-10-2018 99
Expected result:
Result in DB
Week Text1 Text2 Text3
22-10-2018 58 55 14
29-10-2018 12 48 99
I got this far by selecting the documents, using a Sort and then a Merge Join. For 3 documents this took me 3 Sorts and 2 Merge Joins. I have to do this for about 86 documents... there has to be an easier way.
Thanks in advance.
I agree with KeithL; I recommend that your final table look like this:
Week Outcome Value DateModified
=======================================================
22-10-2018 AI 58 2018-10-23 20:49
29-10-2018 AI 32 2018-10-23 20:49
22-10-2018 Agile 51 2018-10-23 20:49
29-10-2018 Agile 22 2018-10-23 20:49
If you want to pivot Weeks or outcomes, do it in your reporting tool.
Don't create tables with dynamically named columns - that's a bad idea.
Anyway, here is an approach that uses a staging table.
Create a staging table that your file will be inserted into:
Script 1:
CREATE TABLE Staging (
    [Week] VARCHAR(50),
    Value VARCHAR(50),
    DateModified DATETIME2(0) DEFAULT(GETDATE())
);
Import the entire file, including headings. In other words, when defining the file format, don't tick 'column names in the first data row'.
We do this for two reasons:
SSIS can't import files with different heading names using the same data flow
We need to capture the heading name in our staging table
After you import a file your staging table looks like this:
Week Value DateModified
=======================================
Week Agile 2018-10-23 20:49
22-10-2018 58 2018-10-23 20:49
29-10-2018 32 2018-10-23 20:49
Now select out the data in the shape we want to load it in. Run this in your database after importing the data to check:
Script 2:
SELECT [Week], Value,
       (SELECT TOP 1 Value FROM Staging WHERE [Week] = 'Week') AS Outcome
FROM Staging
WHERE [Week] <> 'Week'
Now add an INSERT and some logic to stop duplicates. Put this into an Execute SQL Task after the data import:
Script 3:
WITH SRC AS (
    SELECT [Week], Value,
           (SELECT TOP 1 Value FROM Staging WHERE [Week] = 'Week') AS Outcome
    FROM Staging
    WHERE [Week] <> 'Week'
)
INSERT INTO FinalTable ([Week], Value, Outcome)
SELECT [Week], Value, Outcome
FROM SRC
WHERE NOT EXISTS (
    SELECT * FROM FinalTable AS TGT
    WHERE TGT.[Week] = SRC.[Week]
      AND TGT.Outcome = SRC.Outcome
);
Now you wrap this up in a Foreach Loop container that repeats this for each file in the folder. Don't forget that you need to TRUNCATE TABLE Staging before importing each file.
In Summary:
Set up a Foreach Loop container as a file iterator
Inside this goes:
An Execute SQL Task with TRUNCATE TABLE Staging;
A data flow to import the text file from the iterator into the staging table
An Execute SQL Task with Script 3 in it
I've put the DateModified columns in the tables to help you troubleshoot.
Good thing: you can run this over and over, reimporting the same file, and you won't get duplicates
Bad thing: Possibility of cast failures when inserting VARCHAR into DATE or INT
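If you're on SQL Server 2012 or later, one way to soften that is TRY_CAST, which returns NULL instead of raising an error, so bad rows can be filtered out or reported (a sketch against the staging table above):

SELECT TRY_CAST([Week] AS DATE) AS [Week], -- NULL when the value isn't a valid date
       TRY_CAST(Value AS INT) AS Value     -- NULL when the value isn't a valid int
FROM Staging
WHERE [Week] <> 'Week';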
You can read your file(s) using a simple C# Script Component (configured as a source).
You need to add your 3 columns to Output0:
Week as DT_Date
Type as DT_STR
Value as DT_I4
// Read the whole file into memory; the first line is the header.
string[] lines = System.IO.File.ReadAllLines([filename]);
int ctr = 0;
string type = string.Empty; // column heading (Text1/Text2/...), captured from the header

foreach (string line in lines)
{
    string[] col = line.Split(',');
    if (ctr == 0) // first line is the header; remember the value column's name
    {
        type = col[1];
    }
    else // data line: emit a row to the output buffer
    {
        Output0Buffer.AddRow();
        Output0Buffer.Week = DateTime.Parse(col[0]);
        Output0Buffer.Type = type;
        Output0Buffer.Value = int.Parse(col[1]);
    }
    ctr++;
}
After you load to a table you can always create a view with a dynamic pivot.
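For example, a minimal dynamic-pivot sketch (T-SQL; STRING_AGG needs SQL Server 2017+, and the table name WeekValues is an assumption for wherever you loaded the rows):

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build the column list from the distinct Type values that were loaded
SELECT @cols = STRING_AGG(QUOTENAME(Type), ', ')
FROM (SELECT DISTINCT Type FROM WeekValues) AS t;

SET @sql = N'SELECT [Week], ' + @cols + N'
FROM WeekValues
PIVOT (MAX([Value]) FOR [Type] IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql;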
I'm working in MS Access 2016, on a table that holds student results, so the fields are simply StudentID, Test, and TestScore. For reporting purposes, I need to generate a CSV file that has a student's TestScore values all in one row. So if I had:
StudentID: Test: TestScore:
A123 TestA 80
A123 TestB 90
B123 TestA 70
B123 TestB 95
How do I generate a table for export that looks like:
StudentID: TestA: TestB:
A123 80 90
B123 70 95
I don't think crosstabs would work, because not all students in the table have taken the same tests, and there are several thousand cases. I have also come to understand that this may not be possible via SQL in MS Access.
Many thanks in advance for any helpful advice.
You can set the ColumnHeadings property of the crosstab query to include all tests - "TestA";"TestB";...
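In SQL view, setting ColumnHeadings is equivalent to adding an IN list to the PIVOT clause, something like this sketch (using the Output table name from the solution below; extend the list to name every test you want as a column):

TRANSFORM Last(Output.[TestScore]) AS LastOfTestScore
SELECT Output.[StudentID]
FROM [Output]
GROUP BY Output.[StudentID]
PIVOT Output.[Test] IN ("TestA", "TestB");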
Are there only a set number or could this grow?
So thank you to Andre and Alex for guiding me to this solution, which works for the original question:
Table name here is "Output".
TRANSFORM Last(Output.[TestScore]) AS LastOfTestScore
SELECT Output.[StudentID]
FROM [Output]
GROUP BY Output.[StudentID]
PIVOT Output.[Test];
For someone else searching for a similar solution, note that I used "Last" instead of something else (you might want First, Count, etc.).
I am trying to make a calculation in MS Access that uses the previous period's values. My approach is to make a second field for each variable I am working with and to shift the table.
If my query qry1 is:
aa bb cc
--- --- ---
12 34 56
78 91 01
My result should be:
aa0 bb0 cc0 aa1 bb1 cc1
---- ---- ---- ---- ---- ----
12 34 56 NULL NULL NULL
78 91 01 12 34 56
NULL NULL NULL 78 91 01
I am encountering two problems:
I tried adding the empty rows at the top and at the bottom in two extra queries, and somehow rows get lost.
When I combine these two queries using a JOIN, I get a Cartesian product, i.e. the rows are not side by side.
You are trying to use MS Access like you would use Excel. It might be easier to do what you want using an Excel macro plus formulae.
The power of Access is that it is a SQL system (which is why you get a Cartesian product when you JOIN; that's the idea). It sounds like what you want could be done pretty easily with a flat spreadsheet. You could write VBA to add rows for you if you want to automate that part.
If that doesn't help, then maybe describe in more detail what you actually want to accomplish.
EDIT:
I missed the part in your post where you said you were doing a calculation using previous values. This is how I would accomplish that within a single Access query:
qry_Data: (based on tbl_Data)
id aa bb cc calc
--- --- --- --- ---
01 12 34 56 =Dlookup("aa","tbl_Data","id = " & [id] - 1)
02 78 91 01 =Dlookup("aa","tbl_Data","id = " & [id] - 1)
You shouldn't add columns to your table as part of a normal operation. Access tables should be set up to hold all the data they will ever need to (until your model changes). If you are designing an operation in Access and it seems like it should add columns, then you should be adding a linked table or using a calculated field. In the above example, the field calc will be equal to the value of "aa" from the record with an ID# one less than the current record. This will only work if there is a record with that ID#, so if there can be gaps in the ID numbers you shouldn't use this exact method. Since you said that the table is sorted on specific criteria, you might need a different method to determine the previous record. One way to do that in VBA would be like this:
Dim RS As Recordset
Set RS = CurrentDb.QueryDefs("myQuery").OpenRecordset()
RS.MoveLast
RS.MovePrevious
At that point you are on the second to last record and can access any values from that record with:
RS.Fields("aa")
Without knowing exactly what you're trying to do, I can't make any more specific recommendations, except that if you familiarize yourself with basic SQL concepts you will find working in Access much easier.
I have 2 tables with different numbers of columns, and I need to export the data to a text file using SSIS. For example, I have a customer table, tblCustomers, and an order table, tblOrders:
tblCustomers (id, name, address, state, zip)
id name address state zip
100 custA address1 NY 12345
99 custB address2 FL 54321
and
tblOrders(id, cust_id, name, quantity, total, date)
id cust_id name quantity total date
1 100 candy 10 100.00 04/01/2014
2 99 veg 1 2.00 04/01/2014
3 99 fruit 2 0.99 04/01/2014
4 100 veg 1 3.99 04/05/2014
The result file would be as follows:
"custA", "100", "recordtypeA", "address1", "NY", "12345"
"custA", "100", "recordtypeB", "candy", "10", "100.00", "04/01/2014"
"custA", "100", "recordtypeB", "veg", "1", "3.99", "04/05/2014"
"custB", "99", "recordtypeA", "address2", "FL", "54321"
"custB", "99", "recordtypeB", "veg", "1", "2.00", "04/01/2014"
"custB", "99", "recordtypeB", "fruit", "2", "0.99", "04/01/2014"
Can anyone please guide me on how to do this?
I would create a Data Flow Task in an SSIS package. In it, I would first add an OLE DB Source pointed at tblOrders, then a Lookup to add the data from tblCustomers, matching tblOrders.cust_id to tblCustomers.id.
I would use a SQL query that joins the tables and shapes the data, use that as the source, and export the result.
Note that the first row has 6 columns and the second has 7. It's generally difficult (well, not as easy as a standard file) to import these kinds of header/detail files. How is this file being used once created? If it needs to be imported somewhere, you'd be better off just joining the data up into 10 columns, or exporting the tables separately.
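If you do want the mixed-width file, one hedged way is to build each output line as a single text column with a sort key, so detail rows follow their header row (a T-SQL sketch; the CASTs assume numeric id/quantity/total columns and text for everything else):

SELECT line
FROM (
    SELECT c.name AS cust_name, 0 AS rec_order,
           '"' + c.name + '", "' + CAST(c.id AS VARCHAR(10)) + '", "recordtypeA", "'
           + c.address + '", "' + c.state + '", "' + c.zip + '"' AS line
    FROM tblCustomers AS c
    UNION ALL
    SELECT c.name, 1,
           '"' + c.name + '", "' + CAST(o.cust_id AS VARCHAR(10)) + '", "recordtypeB", "'
           + o.name + '", "' + CAST(o.quantity AS VARCHAR(10)) + '", "'
           + CAST(o.total AS VARCHAR(20)) + '", "' + CONVERT(CHAR(10), o.[date], 101) + '"'
    FROM tblOrders AS o
    JOIN tblCustomers AS c ON c.id = o.cust_id
) AS x
ORDER BY cust_name, rec_order, line;

Point a data flow at that query and write it to a Flat File Destination with a single text column.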
I would like help with SQL query code to push the data in a specific column down by one row.
For example, in a table like the following,
x column y column
6 6
9 4
89 30
34 15
the results should be "pushed" down a row, meaning
x column y column
6 null or 0 (preferably)
9 6
89 4
34 30
SQL tables have no inherent concept of ordering, so the concept of a "next row" does not make sense.
Your example has no column that specifies the order of the rows; there is no definition of "next". So, what you want to do cannot be done.
I am not aware of a simple way to do this with the table formatted the way you show it. If you added two consecutively numbered integer fields providing row number and row number + 1 values, you could join the table to itself and get that information.
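For example, with such an id column in place, the self-join might look like this (a T-SQL sketch; the table name MyTable and the columns id, x, and y are assumptions):

SELECT cur.id,
       cur.x,
       COALESCE(prev.y, 0) AS y  -- 0, as preferred, when there is no previous row
FROM MyTable AS cur
LEFT JOIN MyTable AS prev
    ON prev.id = cur.id - 1
ORDER BY cur.id;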
After taking a backup of your table:
Make a PHP function that will:
- Load all values of Y into an array
- Set Y = 0 (MySQL UPDATE)
- Load the values back from the PHP array into MySQL