Send email task on error in SQL - SSIS

Hi all,
I have a simple Lookup transformation which finds matched and unmatched records and writes them to separate tables. Now I want to send an email when the package finds any unmatched records, and the package should stop right there.
Thanks
Nick

Here are a few solutions:
1. In the Lookup Transformation Editor screen you can set the "Specify how to handle rows with no matching entries" field to Fail component and set the job that runs the package to send a notification on job failure. The downsides of this approach are that you must run the package from a scheduled job, the email that is generated is non-descriptive, and you can only define one operator for the job, which may need to be different from your normal operator list (if you are notifying business users instead of IT folks).
2. In the Lookup Transformation Editor screen you can set the component to fail as described in 1 and set up a precedence constraint on the control flow that leads to sending the email on failure. You can also set the MaximumErrorCount for the parent object to 2. Downsides to this approach include that other errors may occur in the package and the package will still succeed, or there may be an error on the source part of the data flow that you may want to handle separately.
3. In the Lookup Transformation Editor screen you can choose Redirect rows to no match output. Create a variable called RowCount of data type Int32, direct the Lookup No Match Output to a Row Count transformation, and send the Lookup Match Output to the normal destination. In the Control Flow, set a precedence constraint to the next step where the Evaluation operation is Expression and Constraint, Value is Success, and the Expression is @[User::RowCount] == 0. Then add a Send Mail Task to the Control Flow with a precedence constraint from the previous step where Value is Success and the Expression is @[User::RowCount] > 0. This will allow you to let the package succeed and send only one email. The downside of this approach is that the data flow destination will still be populated by the matched source data and it won't immediately stop once a no-match row is detected; it will only stop once the data flow itself completes.
I hope this meets your business needs. Let me know if you need any further assistance.

Related

How to skip irregular header information of a Flat File in SSIS?

I have a file like the one below (just an example):
kwqif h;wehf uhfeqi f ef
fekjfnkenfekfh ijferihfq eiuh qfe iwhuq fbweq
fjqlbflkjqfh iufhquwhfe hued liuwfe
jewbkfb flkeb l jdqj jvfqjwv yjwfvjyvdfe
enjkfne khef kurehf2 kuh fkuwh lwefglu
gjghjgyuhhh jhkvv vytvgyvyv vygvyvv
gldw nbb ouyyu buyuy bjbuy
ID Name Address
1 Andrew UK
2 John US
3 Kate AUS
I want to dynamically skip the header information and load the flat file to the DB
Like below:
ID Name Address
1 Andrew UK
2 John US
3 Kate AUS
The header information may vary from file to file (it is not a fixed number of rows).
Any help is appreciated. Thanks in advance.
The generic SSIS components cannot meet this requirement. You need to code for this e.g. in an SSIS Script task.
I would code that script to read through the file looking for that header row ID Name Address, and then write that line and the rest of the file out to a new file.
Then I would load that new file using the SSIS Flat File Source component.
You might be able to avoid a script task if you'd prefer not to use one. I'll offer a few ideas here as it's not entirely clear which will be best from your example data. To some extent it's down to personal preference anyway, and also the different ideas might help other people in future:
Convert ID and ignore failures: Set the file source so that it expects however many columns you're forced into having by the header rows, and simply pull everything in as string data. In the data flow - immediately after the source component - add a data conversion component or conditional split component. Try to convert the first column (with the ID) into a number. Add a row count component and set the error output of the data conversion or conditional split to be redirected to that row count rather than causing a failure. Send the rest of the data on its way through the rest of your data flow.
This should mean you only get the rows which have a numeric value in the ID column - but if there's any chance you might get real failures (i.e. the file comes in with invalid ID values on rows you otherwise would want to load), then this might be a bad idea. You could drop your failed rows into a table where you can check for anything unexpected going on.
Check for known header values/header value attributes: If your header rows have other identifying features then you could avoid relying on the error output by simply setting up the conditional split to check for various different things: exact string matches if the header rows always start with certain values, strings over a certain length if you know they're always much longer than the ID column can ever be, etc.
Check for configurable header values: You could also put a list of unacceptable ID values into a table, and then do a lookup onto this table, throwing out the rows which match the lookup - then if you need to update the list of header values, you just have to update the table and not your whole SSIS package.
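As a rough sketch of that supporting table (all object names here are made up, and the sample values are simply the first tokens of the header lines in the example file):

    -- Hypothetical exclusion list of values that identify header rows.
    CREATE TABLE dbo.HeaderIdValues
    (
        HeaderValue VARCHAR(100) NOT NULL PRIMARY KEY
    );

    -- Seed it with the first token of each known header line, plus the column-name row.
    INSERT INTO dbo.HeaderIdValues (HeaderValue)
    VALUES ('kwqif'), ('fekjfnkenfekfh'), ('fjqlbflkjqfh'), ('jewbkfb'),
           ('enjkfne'), ('gjghjgyuhhh'), ('gldw'), ('ID');

The Lookup would match the first column of each incoming row against HeaderValue, and the rows that match (the header lines) get thrown out.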
Check for acceptable ID values: You could set up a table like the above, but populate it with numbers. This isn't great if you have no idea how many rows might be coming in or whether the IDs are actually unique each time, but if you're only loading a few rows each time and they always start at 1, you could chuck the numbers 1 - 100 into a table and throw away any rows you load which don't match when doing a lookup onto this table.
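For that whitelist variant, one way to generate the numbers 1 to 100 (the table name is an assumption, and sys.all_objects is just used as a convenient row source):

    -- Hypothetical whitelist of acceptable ID values.
    CREATE TABLE dbo.AcceptableIds (Id INT NOT NULL PRIMARY KEY);

    -- Insert the numbers 1 through 100.
    INSERT INTO dbo.AcceptableIds (Id)
    SELECT n
    FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_objects
    ) AS numbered
    WHERE n <= 100;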
Staging table: This is probably the way I'd deal with it if I didn't want to use a script component, but in part that's because I tend to implement initial staging tables like this anyway, and I'm comfortable working in SQL - so your mileage may vary.
Pick up the file in a data flow and drop it into a staging table as-is. Set your staging table data types to all be large strings which you know will hold the file data - you can always add a derived column which truncates things or set the destination to ignore truncation if you think there's a risk of sometimes getting abnormally large values. In a separate data flow which runs after that, use SQL to pick up the rows where ID is numeric, and carry on with the rest of your processing.
This has the added bonus that you can pick up only the columns which you know will contain data you care about (i.e. columns 1 through 3), you can do any conversions you need in SQL rather than in SSIS, and you can make sure your columns have sensible names to be used in SSIS.
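As a rough illustration of the "pick up the rows where ID is numeric" step, assuming a staging table called dbo.FileStaging with string columns Col1, Col2 and Col3 (these names are placeholders), the query could look something like this; note that TRY_CAST needs SQL Server 2012 or later, otherwise ISNUMERIC or a LIKE pattern would do a similar job:

    -- Keep only rows whose first column is a valid integer ID;
    -- TRY_CAST returns NULL for the header lines, so they drop out.
    SELECT
        TRY_CAST(Col1 AS INT) AS ID,
        Col2                  AS Name,
        Col3                  AS Address
    FROM dbo.FileStaging
    WHERE TRY_CAST(Col1 AS INT) IS NOT NULL;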

SSIS trigger failure if match found

I need to prevent loading data into my fact table if any of the incoming data has a [DateId] that already exists in the fact table. The field [DateId] is an integer value.
The Lookup action in SSIS allows you to fail on non-matches, but I actually need a failure if any match is found. How can I get the package to fail when there's a match?
If you just want non-matches to flow through the lookup, just use the "Lookup No Match Output" to connect to the next component in your data flow.
Since the Lookup Match Output isn't hooked up to anything, all that data will just "stop" there. This is the equivalent of the SQL pattern LEFT JOIN WHERE --some left column-- IS NULL.
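For reference, that pattern looks roughly like this in T-SQL (the table and column names are assumptions based on the question):

    -- Anti-join: keep only the incoming rows whose DateId is not already in the fact table.
    SELECT s.*
    FROM dbo.StagingRows AS s
    LEFT JOIN dbo.FactTable AS f
        ON f.DateId = s.DateId
    WHERE f.DateId IS NULL;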
Use either a Merge Join (with a Conditional Split) or a Lookup with a no-match output (without hooking the match output up to anything).
No answer after 8 years. I had the same issue today and I can't think of anything other than this workaround.
Context:
Writing from Excel to SQL Server. The load needs to fail if the date is already present. This is also the parent in the hierarchy of data flow tasks in the control flow, so I wanted the child transformations not to execute if the parent fails.
Workaround:
Created a dummy table with one row and 2 columns, one for a date and one for a description. Entered the value "1900-01-02 12:34:56.000" in the date column and put a description in the other column.
Passed the Lookup Match Output to another Lookup which looks the dates from Excel up against this dummy table; since they never match that sentinel value, the second Lookup fails.
At least I'm able to fail the package this way and avoid writing duplicate data to the 6 other tables that are written by the 6 other data flow tasks, which would otherwise execute if the parent task did not fail.
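For anyone wanting to reproduce the workaround, the dummy table could be set up with something like the following; the object names are made up, and only the sentinel date comes from the description above:

    -- One-row sentinel table. Incoming dates will never equal this value,
    -- so a Lookup against it (left at the default Fail Component setting) fails the package.
    CREATE TABLE dbo.LookupFailDummy
    (
        SentinelDate DATETIME     NOT NULL,
        Description  VARCHAR(200) NOT NULL
    );

    INSERT INTO dbo.LookupFailDummy (SentinelDate, Description)
    VALUES ('1900-01-02 12:34:56.000', 'Forces the second Lookup to fail when duplicate dates reach it');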

SSIS Precedence Constraint Expression not working

I have an int variable User::FileLineCount scoped to a For Loop container, and for a task within the loop I want to proceed depending on this variable's value.
In the Precedence Constraint Editor I have chosen the Evaluation operation Expression and the Expression @FileLineCount != 0. There is another constraint to some other task with @FileLineCount == 0. When I debug, I can see that the User::FileLineCount value is 0, but when I Step Over the task I get an "Unable to step. Not implemented." error.
Thanks for the help
EDIT: Apparently the debugger could not step over, so that was the reason for the error, but the conditions still do not work properly.
EDIT 2:
The other expression is @FileLineCount == 0. It doesn't work unless the constraints are combined with a logical OR.
I had two paths leaving a data flow task: one went to a SQL task and the other went to another task. I was struggling with this until I realized that two paths leaving the same data flow task act as an OR if they are different paths. I assume you would use AND if you had multiple constraints going into one task and you needed them all to be true for it to proceed. I'm not sure if this is exactly what you are asking.
In my case this would be OR, because I want one path or the other.
In the other case, where all three incoming constraints must be true before continuing on to send the email, it would be AND.

Controlling the flow in SSIS package based on a condition

Is there a way to conditionally control the flow of a program in SSIS (through a Script Task or anything else)?
Currently I have a package that creates 5 different Excel sheets dynamically (through an Execute SQL Task). There may be times when all 5 have data, or only 1 may have data. When it's just 1 that has data, it is fine. But the real problem arises when there are 5 DFTs trying to write data simultaneously to the same workbook (albeit to different sheets inside it). The package fails with an OLEDB error.
After a lot of head-scratching, I finally figured out that it was a concurrency control issue that wasn't allowing me to write to the Excel file simultaneously. To take the solution further, I used expressions on precedence constraints to control whether the sheets get created or not.
But the real trouble is that after creating the sheets, the package fails when trying to write data to 2 different sheets simultaneously.
Is there a way I can assign an 'execution order' to the DFTs? This is why I am looking at a Script Task, so that when a particular sheet's count is 0 it does no work and control moves to another branch.
I hope I have not confused you here. But if I have, I'll be glad to provide more details on this question. Thanks for reading.
My first thought is to have a bunch of sequence containers, one per possible Excel sheet, each of which holds three tasks:
A script task to figure out whether or not to create the sheet, and set a boolean package variable accordingly
An SQL task to create the worksheet (a sketch of such a statement appears below)
A data flow task to populate the worksheet
The precedence constraint between tasks 1 and 2 would be an expression requiring the boolean variable to be true.
The precedence constraint between tasks 2 and 3 would be a success constraint, as would the precedence constraints between the sequence containers.
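For the SQL task in step 2, running against an Excel connection manager, the statement would be the sort of CREATE TABLE DDL the Excel Destination editor generates when you click New; the sheet and column names below are placeholders, so adjust them to your own layout:

    -- Creates a new worksheet (here called Sheet1) in the workbook the Excel connection manager points to.
    -- Column names and types are illustrative; match them to the columns in your data flow.
    CREATE TABLE `Sheet1` (
        `ID` INTEGER,
        `Name` NVARCHAR(50),
        `Amount` FLOAT
    )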

SSIS Lookup - ignore certain records

I am doing an SSIS Lookup transformation, looking up against a voyages table; however, some of my records don't have voyages, so I get errors. Is there any way I can skip the lookup for those records?
To expand on unclepaul84's answer, you can configure your lookup component to perform one of three actions on a failed lookup.
Fail Component (the default, and the behavior you have now based on your question; fails the job step, and possibly the entire package, when a lookup attempt finds no match for a row)
Ignore Failure (doesn't fail your job step; leaves a null in the field you brought in from the lookup, i.e. the voyage name)
Redirect Row (doesn't fail your job step; allows you to direct rows with no voyage to a different processing flow for handling, e.g. if you want to put a default 'No Voyages' message in your voyage name field)
Alternatively, as John Saunders mentioned in his comment, you could test the VoyageID column and split your data flow into two paths depending on whether the VoyageID column is null. Since the Lookup component can handle this, I prefer using the single Lookup rather than a Conditional Split followed by a Lookup on one of the paths.
You could tell the lookup component to ignore lookup failures.