I have two questions about SSIS buffers.
1) In my SSIS package I have one Data Flow Task containing two OLE DB sources and two OLE DB destinations. The two paths are independent, and all settings are left at their defaults, so DefaultBufferMaxRows is 10,000 rows. If I run this package, does each path get 10,000 rows per buffer, or 5,000 each?
2) I have tried to use SSIS logging (XML and database), but it isn't showing anything. It creates the XML file, but there is no useful information in it (other than some XML tags), and no table is created.
The Log Events window also doesn't display anything. Can you please help me?
I'll address the second question, as I'd need to do research on the first.
I find SQL Server to be my preferred logging destination. The other options (file, event viewer, trace file, etc.) are fine and dandy, but to sift through that data, a query is my tool.
You state it isn't showing anything useful; what did you expect it to show, and which events have you selected?
I generally select the following events: OnInformation, OnError, OnWarning, OnPreExecute, OnPostExecute. The first three provide information about what is wrong, potentially wrong, or could be improved in my packages. The last two I use to establish the duration of various tasks.
Check them only at the top level. I had a coworker who checked the above events at the package level but then, on each subtask, checked a single event. They had anticipated it would inherit the logging established at the root and add in the single event selected at that level. The reverse was true: only the innermost checked items were logged.
Where does everything get logged? In 2005, it'll be found in dbo.sysdtslog90. From 2008 forward, it can be found in dbo.sysssislog. The master copy of this table lives in msdb, but if you point your OLE DB connection to a different catalog (SALES, AdventureWorks, etc.), the first invocation of the package will copy that table down into the target catalog.
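Once events are landing in that table, a query along these lines pairs up the OnPreExecute/OnPostExecute rows to get task durations. This is a minimal sketch, assuming SQL Server 2008+ and the default dbo.sysssislog table:

    -- Sketch: derive task durations from paired OnPreExecute/OnPostExecute events
    SELECT pre.source                                    AS task_name,
           pre.starttime,
           post.endtime,
           DATEDIFF(SECOND, pre.starttime, post.endtime) AS duration_sec
    FROM dbo.sysssislog AS pre
    JOIN dbo.sysssislog AS post
      ON  post.executionid = pre.executionid
      AND post.sourceid    = pre.sourceid
      AND post.event       = 'OnPostExecute'
    WHERE pre.event = 'OnPreExecute'
    ORDER BY pre.starttime;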
I'm really new to SSIS and I would like to automate some of my workflow. However, most of my tasks require that other jobs/packages in SQL Server Agent have already been run for the day/week by other team members. My question is: how can I direct my workflow based on the status of these other packages? The Control Flow doesn't have a Conditional Split tool, and I can't kick off a separate job/package from the Data Flow. I can easily write a SQL statement to determine whether any or all of the related packages have been run, and even have it return a boolean value if needed, but I'm at a loss for how to make this direct the workflow to either proceed to the next step or fail the entire package if any of the others haven't been run (the desired result).
I really appreciate any help in this. Thank you!
In SSIS:
You can set conditions in the Control Flow by double-clicking the arrow that links one task to another. If you set up an Execute SQL Task to run a query that checks whether a package has run and returns the result into a variable, you can then set up a condition on that variable and only continue to the next task if it is (or isn't) a certain value.
Here's an example of how you might set up a precedence constraint so that the next step only happens if the previous one was successful and a particular variable does not have a value of C:
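As a sketch (the variable name here is hypothetical), you would set the constraint's Evaluation operation to "Expression and Constraint", the Value to Success, and the Expression to something like:

    @[User::PackageStatus] != "C"

The next task then runs only when the previous task succeeded and the variable holds anything other than "C".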
In SQL Server Agent:
You could also - if you want to avoid running an entire package - set something up in the SQL Server Agent job instead. You could add an initial step which checks on the status of a package, then quits the job reporting success. This does mean that if the step fails for another reason (say the database it's running the select statement against is down), it will still quit and report success, so be careful; you might want to set up some other specific steps first which check for such situations.
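A minimal sketch of such a first step, assuming a hypothetical dbo.PackageRunLog table that you maintain yourself; set the step's "On failure action" to "Quit the job reporting success":

    -- Fails (and thereby quits the job) if the prerequisite hasn't run today
    IF NOT EXISTS (SELECT 1
                   FROM dbo.PackageRunLog
                   WHERE PackageName = N'PrerequisitePackage'
                     AND RunDate = CAST(GETDATE() AS date))
        RAISERROR('Prerequisite package has not run today.', 16, 1);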
Here's an article I used a while back when I wanted to understand how to set up a SQL Server Agent job in this way:
http://sqlactions.com/2012/08/05/how-to-create-custom-schedule-for-sql-server-agent-job/
I have a package that essentially copies 26 tables from Oracle to SQL Server.
It's not a complete table copy; we are looking for records that belong to certain 'Regions' of our company.
I pull the data from Oracle.
I started just doing this with elbow grease, but each of the 26 tables required several variables to do the deletes, the fetches, etc.
Long story short, I decided to use variables to represent the table names (source, temp and target).
This allowed me to copy/paste one sequence and effectively bypass a lot of clicking around in BIDS.
The problem I am running into is that the metadata seems to be very fragile. The sequences all run fine on their own, but when I run the whole package, it breaks, and never in the same place.
Is this approach just a bad idea with SSIS?
So just to take this off the board....
Each sequence container had the following operations:
Script Task - set variables
Execute SQL Task - delete from temp
Data Flow SourceToTemp:
OLE DB Source - a generic select * from tbl into temp_tbl
Derived Column - insert a timestamp column
OLE DB Destination - map all the columns into a temp table (**THIS IS THE BIG PROBLEM CHILD)
Execute SQL Task - delete from target
Execute SQL Task - insert into target, select from temp (the variable-driven statements are sketched below)
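For illustration, the variable-driven statements were built with SSIS expressions along these lines (the variable names are hypothetical; set the Execute SQL Task's SqlStatementSource via an expression, or point SqlSourceType at a variable):

    "DELETE FROM " + @[User::TempTable]
    "DELETE FROM " + @[User::TargetTable]
    "INSERT INTO " + @[User::TargetTable] + " SELECT * FROM " + @[User::TempTable]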
The OLE DB Destination is the piece that kept breaking.
Since it references variables, I had to be very careful at design time to set the variables correctly before opening one of the data flows.
I am pretty sure this is the problem. Since I cannot say with certainty when SSIS refreshes metadata in the design environment, I can't be sure if/when sequence X refreshed while the variables were set to support sequence Y.
So while it conceptually should work at run time, dev time is a change-control nightmare.
I have changed all the OLE DB Destinations to point to a hard-coded table name. This is really a small concession, since there are four SQL statements that are still driven by variables (saving me a lot of clicking and typing).
This small change has eliminated the 'shifting sands' problem.
Take-away lesson: don't base an OLE DB Destination on a variable.
Thanks for the comments.
I have an SSIS project with 3 SSIS packages: one is a parent package which calls the other 2 packages based on some condition. In the parent package I have a Foreach Loop Container which reads multiple .csv files from a location; based on the file name, one of the two child packages is executed and the data is uploaded into tables in MS SQL Server 2008. Since multiple files are read, if any file generates an error in the child packages, I have to log the details of the error (like the filename, error message, row number, etc.) in a custom database table, delete all the records that got uploaded into the table, and read the next file; the package should not stop for files which are valid and don't generate any error when they are read.
Say a file has 100 rows and there is a problem at row number 50: then we need to log the error details in a table, delete rows 1 to 49 which got uploaded into the database table, and have the package start executing the next file.
How can I achieve this in SSIS?
You will have to set TransactionOption=*Required* on your Foreach Loop Container and TransactionOption=*Supported* on the control flow items within it. This will allow your transactions to be rolled back if any complications happen in your child packages. More information on the 'TransactionOption' property can be found at http://msdn.microsoft.com/en-us/library/ms137690.aspx
Custom logging can be performed within the child packages by redirecting the error output of your destination to your preferred error destination. However, this redirection logging only occurs on insertion errors, so if you wish to catch errors that occur anywhere in your child package, you will have to set up an 'OnError' event handler or use the built-in SSIS logging (SSIS -> Logging...).
I suggest you try creating two data flows in your loop container. The main idea here is to use a set of three tables to handle the error situations more easily. In the same flow you do the following:
1st dataflow:
This should read the .csv file and load the data into a temp table. If the file is processed with errors, you simply truncate the temp table. In addition, you should configure the flat file source output to redirect error rows to an error log table.
2nd dataflow:
On the other hand, if processing was error-free, you transfer the rows from the temp table into the destination table. So here the OLE DB source is the temp table and the OLE DB destination is the final table (see the sketch below).
Don't forget to truncate the temp table in both cases, as the next file will need an empty table.
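A minimal sketch of that promote-then-reset step, with hypothetical table and column names:

    -- Promote clean rows to the final table, then reset the temp table for the next file
    INSERT INTO dbo.FinalTable (Col1, Col2, LoadedAt)
    SELECT Col1, Col2, LoadedAt
    FROM dbo.TempTable;

    TRUNCATE TABLE dbo.TempTable;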
Let's break this down a bit.
I assume that you have a data flow that processes an individual file at a time. The data flow would read the input file via a source connection, transform it and then load the data into the destination. You would basically need to implement the Error Handler flow in your transformations by choosing "Redirect Row". Details on the Error Flow are available here: https://learn.microsoft.com/en-us/sql/integration-services/data-flow/error-handling-in-data.
If you need to skip an entire file due to a bad format, you will need to implement a Precedence Constraint for failure on the file system task.
My suggestion would be to get a copy of the exam preparation book for exam 70-463 - it has great practice examples on exactly the kind of scenarios that you have run into.
We do something similar with Excel files.
We have an ErrorsFound variable which is reset each time a new file is read within the Foreach Loop.
A script component validates each row of the data and sets the ErrorsFound variable to true if an error is found, and builds up a string containing any error details.
Then - based on the ErrorsFound variable - either the data is imported or the error is recorded in a log table.
It gets a bit trickier when the Excel files are filled in badly enough that the process can't read them at all, for example when text is entered in a date, number, or currency field. In this case we use the OnError event handler of the Data Flow Task to record an error in the log, but we won't know which row(s) caused the problem.
I am running an SSIS package using a SQL Server 2008 Job. The package crashes at some point while running. I have created my own mechanism to grab the error and record it in a table, so I can see that there is an error with a specific task, but I cannot find out what the error is.
When I run the same package from BIDS, it works perfectly; no error.
What I want to do is write that error string, the one shown in the "Execution Results" tab, to my own table.
So the question is: which system variable holds the error string in SSIS?
The error is stored in the ErrorDescription system variable. See Handling Errors in the Data Flow for an example of how to get the error description.
Also, if you want to capture error information into a table, SSIS supports logging to a table using the SQL Server Log Provider. You can also customize the logging.
Too easy.
Left-click (highlight) the object for which you want to capture the error event (a Script Task, Data Flow Task, etc.).
Click on 'Event Handlers'; a screen should open with Executable = the object you clicked and Event Handler = OnError.
Click the URL ('Click here to create...').
Drag an Execute SQL Task from the SSIS Toolbox.
Configure it against the database/table where you want to house the error message.
Write the statement:

    INSERT INTO DB.Schema.Table (DBName, SchemaName, TableName, ErrorMessage, DateAdded)
    VALUES (?, ?, ?, 'I am smart', GETDATE())
Click Parameters and select the User:: variables for the ?'s (plus my literal comment).
Since this is run at the database server, it will pass in the ?'s. My comment ('I am smart') is already in the statement as a literal value, but you will have selected System::ErrorDescription as parameter 3. Remember, this array is 0-based. DO NOT TRY TO NAME THE PARAMETERS; instead, number them 0 to n. The data types are based on what you have going in; mine are all VARCHAR, so... :)
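As a sketch, the Parameter Mapping tab might look like this if you replace the literal comment with a fourth ? (the User:: variable names are hypothetical; the parameter names must be the 0-based ordinals, not real names):

    -- Parameter Mapping of the Execute SQL Task
    -- Parameter Name 0 -> User::DBName              (VARCHAR)
    -- Parameter Name 1 -> User::SchemaName          (VARCHAR)
    -- Parameter Name 2 -> User::TableName           (VARCHAR)
    -- Parameter Name 3 -> System::ErrorDescription  (VARCHAR)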
This is a much better solution than just logging whatever the server allows you to.
I can also add a counter variable and adjust it wherever I like, then pass it to the OnError event. This lets me pinpoint exactly where the last successful object completed; it works best in scripting objects but is also available in other areas.
I'm using this so I can process thousands of cycles without actually failing the package. If a table doesn't exist or a column doesn't exist, I simply log it for further review later. Oh yeah, I'm cycling through hundreds of databases, capturing their architecture and the maximum column size actually used; not to be confused with the declared maximum column size.
Example: TelephoneNumber comes from a source column of char(500) (definitely bad programming, but you can't change everything, so...). I capture the maximum length used in that column and adjust the destination column to accommodate that size, plus or minus a certain percentage.
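Capturing that maximum used length is a one-liner; a minimal sketch with hypothetical table and column names:

    -- Longest value actually stored, as opposed to the declared char(500)
    SELECT MAX(LEN(TelephoneNumber)) AS MaxLenUsed
    FROM dbo.Customers;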
If a table or column doesn't exist anymore, I log the error and keep churning. At the end, I can evaluate those entries and see if I can actually remove them from my warehouse. This happens more in the TEST and STAGE environments than in PROD. However, when a change goes through to PROD, I will most definitely identify it as it comes into the warehouse.
Everything is configured; this includes dynamic MERGE/JOINs, INSERT, SELECT, ELEMENTS, SIZES, USAGESIZE, IDENTITY, SOURCEORDER, etc., with conversions of data to the destination data types.
All that because the built-in logging will not provide the granularity you might need for this type of operation. This OnError event handler can, if set up properly.
Check this out! He explains, with a step-by-step process, how to configure SSIS logging, including the error message parameter.
I am in the process of creating an SSIS package that needs to do the following, in this order:
Process some data
Move that data to some other tables
Get some data and push it into a plain text file.
I have created 3 stored procedures for these; I have 2 Execute SQL Tasks for steps 1 and 2 and a Data Flow Task for the 3rd.
Now when I run the package, I can see all 3 steps complete (no errors), but they are not running in the correct order.
I see step 3 run first, then steps 1 and 2, and I think step 3 then runs again. Normally I could ignore it, but as the data in the text file can be 700 MB, I need to find a way to get SSIS to run these tasks in sequence.
I have tried "Sequence Container" but no luck.
Can someone help me with this, please?
KA
You need to use precedence constraints to tell SSIS what order your tasks need to be executed in.
Drag the green arrow from task one to task two, and from task two to task three.
You could connect them as:
first Execute SQL Task
precedence constraint on success
second Execute SQL Task
precedence constraint on success
Data Flow Task
SSIS will then follow the sequence as required.
thanks
prav
I had exactly this problem. Tasks were being executed in something like the order I'd created them rather than the sequence I specified later. It turned out that I'd managed to get a task that belonged to the first sequence container to appear in the last sequence container without losing its allegiance to the first. I discovered this by taking a backup and deleting sequence containers; the rogue task disappeared when I deleted the first sequence container.
The fix was to cut and paste the task into the desired sequence container.
I encountered an issue on SQL Server Denali where individual components ran out of sequence even though they were joined by success constraints. The problem seemed to occur when I had cut and pasted the components and the constraint. By deleting and reapplying the constraints, the package then ran in the correct order.
In my case, when I want to control execution order inside sequence containers, I use sub-sequence containers between the Execute SQL Task and the Data Flow Task. Hope this is useful for you.
The best option is to use Sequence Containers; basically, they help in creating a sequence.
But since that does not work in your case, create child packages for each of your different processes,
and then create a master package which links to those child packages using the Execute Package Task.