I'm using SSIS on SQL Server 2008. I have a data flow with a Lookup component whose "no matching entries" option is set to "Fail component". I'm looking at the log of a previous execution of the package and I can see the following error message from the data flow:
Row yielded no match during lookup.
Later error messages indicate this is from my lookup component. However, after that I can see an information message (from the same data flow and the same execution) saying that the destination component wrote several thousand rows:
"component "OLE_DST ..." (578)" wrote 9924 rows.
An execution on another environment resulted in the same "Row yielded no match during lookup" error but then wrote zero rows to the destination.
The SSIS package is exactly the same in both environments. The data was slightly different but had the same characteristics: many source rows, a small number of which had no matching lookup entry.
Is this behaviour allowed? Can the data flow begin writing an arbitrary number of rows before a lookup fails and then stop writing rows?
Tom,
Yeah, this behaviour is plausible. However, I think (best to check this) it can be affected by FastLoadMaxInsertCommitSize, because that property determines how many rows are inserted before being committed.
Read more: Default value for OLE DB Destination FastLoadMaxInsertCommitSize in SQL Server 2008
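As a rough T-SQL analogy of that batching behaviour (the table name, file path and batch size below are assumptions, not from your package), a bulk load with an explicit batch size commits each batch as its own transaction, so batches committed before a failing batch stay in the destination:

-- Each 10,000-row batch is committed separately; if a later batch fails,
-- the earlier batches remain in dbo.DestinationTable
BULK INSERT dbo.DestinationTable
FROM 'C:\data\source_file.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);

The OLE DB Destination in fast load mode behaves similarly, which is consistent with one environment having thousands of rows already committed when the lookup failed while the other had none.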
cheers
JT
I have used a Lookup in my data flow task. When I use Full Cache mode, the data flow task runs fine. But when I use Partial Cache or no Cache in my lookup, the records do not go past the lookup task and it keeps running for hours. I have checked for errors but there aren't any errors displayed. Could anyone please help me on this?
A Lookup is not appropriate for your task. Instead:
Add an OLE DB Source to pull in the data.
Sort the records from both the incoming source and the OLE DB Source.
Perform a Merge Join (full outer).
Add a Derived Column transformation to check for ISNULL on the two joining columns. Create a new output column called Action. Where the target-side column is NULL, tag the row as an INSERT record.
Add a Conditional Split to send the INSERT records to an OLE DB Destination to insert the new records.
You can also check to see if there are matches between the two populations and perform updates, or look for NULLs in the source and DELETE in the destination.
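A hedged T-SQL sketch of the logic those steps implement (the SourceTable, TargetTable and BusinessKey names are assumptions):

-- Full outer join of source and target; NULL on the target side marks an INSERT,
-- NULL on the source side marks a DELETE candidate, anything else is a potential UPDATE
SELECT s.BusinessKey,
       CASE
           WHEN t.BusinessKey IS NULL THEN 'INSERT'
           WHEN s.BusinessKey IS NULL THEN 'DELETE'
           ELSE 'UPDATE'
       END AS Action
FROM dbo.SourceTable AS s
FULL OUTER JOIN dbo.TargetTable AS t
    ON s.BusinessKey = t.BusinessKey;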
I have an SSIS package that imports a flat CSV file; there are approx. 200,000 records in the file. I've set up the table that the data imports into with a unique primary key on the account number. There shouldn't be any duplicates in the source data (application controlled - outside of my influence).
However, there is 1 duplicate row in the CSV, and when I add the primary key it redirects 7k rows... these aren't duplicate rows; it just appears to redirect a load of rows for no reason?
If I manually remove the single duplicate row it works perfectly. There is nothing special about the data or the files; it should just import the data and redirect the error row.
This behavior is due to the OLE DB Destination and the fast load insert mode being used.
With fast load mode, the OLE DB Destination issues an INSERT BULK command and inserts in batches. If one of the rows within a batch violates a table constraint, the whole batch fails and gets redirected to the Error Output. This accounts for the behavior that looks strange at first glance - rejecting more than 1 row.
What you can do about it depends on your goal and limitations:
If you simply want to filter out the duplicate rows - switch the OLE DB Destination to regular insert mode, at the cost of a significant performance decrease. This is the simplest way.
If the performance drop is not an option and you need to keep it simple - use a Sort component in the data flow and tick the flag to remove rows with duplicate sort values. Caveat - you have no control over which row will be discarded.
If you need to implement business rules about which data should pass - then you have to implement a scoring column and use it to filter rows (a T-SQL sketch follows below). See Todd McDermid's article on this.
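For that last option, a minimal sketch of a scoring approach in T-SQL, assuming a hypothetical staging table and a rule that keeps the most recently loaded row per account (StagingTable, AccountNumber and LoadDate are illustrative names):

-- Score duplicates per account and keep only the top-ranked row
WITH Scored AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY AccountNumber
                              ORDER BY LoadDate DESC) AS Score
    FROM dbo.StagingTable
)
SELECT *
FROM Scored
WHERE Score = 1;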
I am trying to import the data from an Excel file into a SQL Server database. I am unable to do so because I am getting the following errors in the log file. Please help. The log errors are as follows:
[OLE DB Destination [42]] Error: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
[OLE DB Destination [42]] Error: There was an error with input column "Copy of F2" (5164) on input "OLE DB Destination Input" (55). The column status returned was: "The value violated the integrity constraints for the column.".
[OLE DB Destination [42]] Error: The "input "OLE DB Destination Input" (55)" failed because error code 0xC020907D occurred, and the error row disposition on "input "OLE DB Destination Input" (55)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The ProcessInput method on component "OLE DB Destination" (42) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
[DTS.Pipeline] Error: Thread "WorkThread0" has exited with error code 0xC0209029.
[Excel Source [174]] Error: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
[DTS.Pipeline] Error: The PrimeOutput method on component "Excel Source" (174) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
It usually happens when the Allow Nulls option is unchecked.
Solution:
Look at the name of the column in the error/warning.
Go to SSMS and find the table.
Allow NULL for that column (a T-SQL sketch follows below).
Save the table.
Rerun the SSIS package.
Try these steps. It worked for me.
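A minimal T-SQL sketch of the Allow NULL step, assuming hypothetical table and column definitions (the data type must match the column's existing definition):

-- Allow NULLs in the offending column
ALTER TABLE dbo.TargetTable
ALTER COLUMN [Copy of F2] NVARCHAR(255) NULL;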
Delete empty rows from Excel after your last row of data!
Sometimes empty rows in Excel are still considered data, so trying to import them into a table with one or more non-nullable columns violates the constraints of those columns.
Solution: select all of the empty rows on your sheet, even those after your last row of data, and click delete rows.
Obviously, if some of your data really does violate any of your table's constraints, then just fix your data to match the rules of your database.
As a slight alternative to FazianMubasher's answer, instead of allowing NULL for the specified column (which may, for many reasons, not be possible), you could also add a Conditional Split task to branch NULL values to an error file, or just to ignore them.
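In plain T-SQL terms, what that Conditional Split does is roughly the following (the staging table name is an assumption):

-- Rows where [Copy of F2] is NULL are branched off to an error file or discarded;
-- only the remaining rows continue on to the destination
SELECT *
FROM dbo.ExcelStagingTable
WHERE [Copy of F2] IS NOT NULL;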
It's as the error message says: "The value violated the integrity constraints for the column" for column "Copy of F2".
Make it so the value doesn't violate the constraints of the target table. What the allowable values are, the data types, etc. are not provided in your question, so we cannot be more specific in answering.
To address the downvote: no, really, it's as it says: you are putting something into a column that is not allowed. It could be, as Faizan points out, that you're putting a NULL into a NOT NULLable column, but it could be a whole host of other things, and as the original poster never provided any update, we're left to guess. Was there a foreign key constraint that the insert violated? Maybe there's a check constraint that got blown? Maybe the source column in Excel has a date value that is valid for Excel but not for the target column's date/time data type.
Thus, barring concrete information, the best possible answer is "don't do the thing that breaks it." In this case, something about "Copy of F2" is bad for the target column. Give us table definitions, supplied values, etc., and then you can get specific answers.
Telling people to make a NOT NULLable column into a NULLable one might be the right answer. It might also be the most horrific answer known to mankind. If an existing process expects there to always be a value in column "Copy of F2", changing the constraint to allow NULL can wreak havoc on existing queries. For example:
SELECT * FROM ArbitraryTable AS T WHERE T.[Copy of F2] = '';
Currently, that query retrieves everything that was freshly imported because Copy of F2 is a poorly named status indicator. That data needs to get fed into the next system so... bills can get paid. As soon as you make it such that unprocessed rows can have a NULL value, the above query no longer satisfies that. Bills don't get paid, collections repossesses your building and now you're out of a job, all because you didn't do impact analysis, etc., etc.
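To make that concrete, a sketch of the kind of rewrite every downstream consumer would then need (same hypothetical ArbitraryTable as above):

-- After allowing NULLs, the original filter silently misses the NULL rows;
-- every query that relied on the empty-string convention has to change
SELECT *
FROM ArbitraryTable AS T
WHERE T.[Copy of F2] = '' OR T.[Copy of F2] IS NULL;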
I've found that this can happen for a number of reasons.
In my case, when I scroll to the end of the SQL import "Report", under the "Post-execute (Success)" heading it tells me how many rows were copied, and it's usually the next row in the sheet which has the issue. You can also tell which column from the import messages (in your case it was "Copy of F2"), so you can generally find the offending cell in Excel.
I've seen this happen for very silly reasons, such as the date format in Excel being different from previous rows. For example, cell A2 might be "05/02/2017" while A3 is "5/2/2017" or even "05-02-2017". It seems the import wants things to be perfectly consistent.
It even happens if the Excel cell formats are different: if B2 is "512" in Excel's "Number" format and B3 is "512" in Excel's "Text" format, then the cell will cause an error.
I've also had situations where I literally had to delete all the "empty" rows below my data rows in the Excel sheet. Sometimes they appear empty, but Excel considers them to contain "blank" data or something like that, so the import tries to import them as well. This usually happens if you've had previous data in your Excel sheet which you've cleared but whose rows you haven't properly deleted.
And then there are the obvious reasons of trying to import a text value into an integer column or inserting a NULL into a NOT NULL column, as mentioned by the others.
A Teradata table or view represents NULL as "?", and SQL Server treats that as a character or string. This is the main reason for the error "The value violated the integrity constraints for the column." when data is ported from a Teradata source to a SQL Server destination.
Solution 1: Allow the destination table to hold NULL
Solution 2: Convert the '?' character to an appropriate value before it is stored in the destination table.
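For Solution 2, a minimal T-SQL sketch of the conversion, assuming a hypothetical staging query over the ported data (the table, column and default value are all illustrative):

-- Swap the literal '?' placeholder for a default value before it reaches the destination;
-- for Solution 1, NULLIF(SourceColumn, '?') would turn it back into NULL instead
SELECT AccountId,
       CASE WHEN SourceColumn = '?' THEN N'Unknown' ELSE SourceColumn END AS SourceColumn
FROM dbo.TeradataStaging;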
You can also replace the "null" values in the original file's field/column.
The problem can also be that you are not using a valid login for the linked server; in that case the problem is on the destination server side.
There are a few steps to try:
Align the database user and login on the destination server:
ALTER USER [DBUSER_of_linkedserverlogin] WITH LOGIN = [linkedserverlogin];
Recreate the login used by the linked server on the destination server (a sketch follows this list).
Back up the table and recreate it.
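A hedged sketch of the second step, recreating the login and re-mapping the orphaned database user (the login name, user name and password are assumptions carried over from the statement above):

-- Recreate the login used by the linked server on the destination server
CREATE LOGIN [linkedserverlogin] WITH PASSWORD = N'StrongPassword!1';
-- Re-map the orphaned database user to the recreated login
ALTER USER [DBUSER_of_linkedserverlogin] WITH LOGIN = [linkedserverlogin];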
The second step resolved my issue with "The value violated the integrity constraints for the column.".
I have an SSIS project with 3 SSIS packages: one is a parent package which calls the other 2 packages based on some condition. In the parent package I have a Foreach Loop container which reads multiple .csv files from a location, and based on the file name one of the two child packages is executed and the data is uploaded into tables in MS SQL Server 2008. Since multiple files are read, if any of the files generates an error in the child packages, I have to log the details of the error (like the filename, error message, row number etc.) in a custom database table, delete all the records that got uploaded to the table, and read the next file; the package should not stop for files which are valid and don't generate any errors when they are read.
Say a file has 100 rows and there is a problem at row number 50: then we need to log the error details in a table, delete rows 1 to 49 which were uploaded to the database table, and have the package start executing the next file.
How can I achieve this in SSIS?
You will have to set TransactionOption=*Required* on your Foreach Loop container and TransactionOption=*Supported* on the control flow items within it. This will allow your transactions to be rolled back if any complications happen in your child packages. More information on the 'TransactionOption' property can be found at http://msdn.microsoft.com/en-us/library/ms137690.aspx
Custom logging can be performed within the child packages by redirecting the error output of your destination to your preferred error destination. However, this redirection logging only occurs on insertion errors. So if you wish to catch errors that occur anywhere in your child package, you will have to set up an 'OnError' event handler or utilize the built-in error logging for SSIS (SSIS -> Logging..)
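As a hedged sketch of the custom logging piece (the table and column names here are assumptions, not from the original packages), the error output or the OnError handler could write into something like:

-- Hypothetical custom error log table for the child packages
CREATE TABLE dbo.PackageErrorLog
(
    LogId        INT IDENTITY(1, 1) PRIMARY KEY,
    FileName     NVARCHAR(260)  NOT NULL,
    RowNumber    INT            NULL,
    ErrorMessage NVARCHAR(4000) NULL,
    LoggedAt     DATETIME       NOT NULL DEFAULT GETDATE()
);

-- An Execute SQL Task in the OnError handler could then run a parameterized insert such as:
-- INSERT INTO dbo.PackageErrorLog (FileName, RowNumber, ErrorMessage) VALUES (?, ?, ?);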
I suggest you try creating two data flows in your loop container. The main idea here is to have a set of three tables (temp, error log and final) to handle the error situations better and more easily. In the same loop you do the following:
1st dataflow:
It should read the .csv file and load the data into a temp table. If the file is processed with errors, you simply truncate the temp table. In addition, you should also configure the flat file source output to redirect error rows to an error log table.
2nd dataflow:
On the other hand, if processing was error-free, you need to transfer the rows from the temp table into the destination table. So here the OLE DB source is the temp table and the OLE DB destination is the final table.
DonĀ“t forget to truncate the temp table in both cases, as the next file will need an empty table.
Let's break this down a bit.
I assume that you have a data flow that processes an individual file at a time. The data flow would read the input file via a source connection, transform it and then load the data into the destination. You would basically need to implement the Error Handler flow in your transformations by choosing "Redirect Row". Details on the Error Flow are available here: https://learn.microsoft.com/en-us/sql/integration-services/data-flow/error-handling-in-data.
If you need to skip an entire file due to a bad format, you will need to implement a Precedence Constraint for failure on the file system task.
My suggestion would be to get a copy of the exam preparation book for exam 70-463 - it has great practice examples on exactly the kind of scenarios that you have run into.
We do something similar with Excel files.
We have an ErrorsFound variable which is reset each time a new file is read within the for each loop.
A script component validates each row of the data and sets the ErrorsFound variable to true if an error is found, and builds up a string containing any error details.
Then - based on the ErrorsFound variable - either the data is imported or the error is recorded in a log table.
It gets a bit more tricky when the Excel files are filled in badly enough for the process not to be able to read them at all - for example when text is entered in a date, number or currency field. In this case we use the OnError event handler of the Data Flow task to record an error in the log, but we won't know which row(s) caused the problem.
I have SSIS packages to extract fact tables into staging tables. I have a control table which contains the last extract date for each table, so the package extracts rows where the source date > the control table date. The problem I have is that I want to redirect rows with errors to an error file in the data flow task of the package. If I do that, the package will not fail (so I can't roll back) and some rows might actually go through, which, if I continue with the process, will ultimately get to my fact table. Now, next time I run the package, if I had updated the control table I will miss the rows which had errors. If I had not updated the control table with the date, I will re-extract the rows which went through. What is the best practice for this?
How about adding a Row Count transformation onto the error branch? It sounds like you are using the transaction option in SSIS, so put the Data Flow in a Sequence Container and, after the Data Flow, evaluate the value of your row count variable. If it's greater than zero, roll back/abort processing.
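A minimal sketch of how the control table update could then be guarded, assuming the row count from the error branch is mapped into a parameter (dbo.ExtractControl and all variable names here are assumptions):

-- Only advance the last extract date when no rows were redirected to the error branch;
-- @ErrorRowCount, @CurrentExtractDate and @TableName would be mapped from SSIS variables
IF @ErrorRowCount = 0
BEGIN
    UPDATE dbo.ExtractControl
    SET LastExtractDate = @CurrentExtractDate
    WHERE TableName = @TableName;
END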