I have a JSON file that comes with around 125 columns, and I need to load it into a DB table. I'm using an SSIS package: after dumping all the JSON file contents into a DB DUMP table, I need to validate the data, load only the valid rows into the MASTER table, and send the rest to a failure table. The failure table has 250 columns, with an ERROR column for each data column. If the first column fails validation, I need to write the error message to the corresponding error column and continue with the validation of the second column, and so on. Is there some utility in SSIS that helps in achieving this requirement?
I've tried using Conditional Split, but it doesn't appear to fit the bill.
Thanks,
Vijay
I agree with Alleman's suggestion of getting this done via a stored procedure. In terms of implementation, there are various ways you could go about it; I am listing one here.
In the database, you can create around 10 stored procedures along these lines:
dbo.usp_ValidateData_Columns1_To_Columns25
dbo.usp_ValidateData_Columns26_To_Columns50
....
....
dbo.usp_ValidateData_Columns226_To_Columns250
In each of these procedures you can validate your data in bulk across its block of columns. If a validation fails, you write the error message into the respective error column.
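For illustration, here is a minimal sketch of what one of these procedures could look like, assuming (hypothetically) that the DUMP table carries the error columns alongside the data columns; all table and column names below are placeholders:

    CREATE PROCEDURE dbo.usp_ValidateData_Columns1_To_Columns25
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Example rules only: Column1 must be a valid integer, Column2 must not be blank.
        -- A failing rule writes a message into the matching error column; a passing rule leaves it NULL.
        UPDATE d
        SET    d.Error_Column1 = CASE WHEN TRY_CONVERT(int, d.Column1) IS NULL
                                      THEN 'Column1 is not a valid integer' END,
               d.Error_Column2 = CASE WHEN LTRIM(RTRIM(d.Column2)) = ''
                                      THEN 'Column2 is empty' END
        FROM   dbo.DUMP AS d;

        -- ...same pattern for Column3 through Column25...
    END

A row whose error columns are all still NULL is then a candidate for MASTER; anything else goes to the failure table.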
Once you have this in place, you can call all of the above procedures in parallel as part of your SSIS package.
After that, you would need one more DFT to pick up all the records that are good and transfer them to MASTER.
Basically you are modularizing the whole setup.
Related
I have a requirement where I have to execute a MySQL procedure (the procedure name is the same across all the sites) for different sites. I have a Foreach Loop that gets the server and database details, and within the Foreach Loop container I have a Data Flow Task. In the Data Flow Task I created a source with a MySQL connection and I call the stored procedure like below:
CALL spITProd( '2022-09-25 20:04:22.847000000', '2022-10-25 20:04:22.847000000' );
This returns different metadata for different sites. To be precise, the first 16 columns are the same for all the sites, but after that the number of columns and the column names can vary. My requirement is to store the first 16 columns in one destination table and all the other columns in a different table by unpivoting them. I was able to achieve this for one site with the SSIS data flow task shown below.
I want to automate this process for all the sites. I know I can create multiple Data Flow Tasks, but that's not what I want to do: when the source connects to a new site it fails because it encounters new metadata (as I said, the number of columns and the column names change after the first 16 columns). Please suggest any ideas on how to dynamically read the columns returned by the procedure. I don't have any permissions on the MySQL database; it is managed by the vendor and I only have permission to call the procedure. I tried to store the data in a System.Object variable, but I am not sure how to use it later, as I can't create a temp table on the destination from it. I appreciate any ideas on how to handle this requirement.
I am implementing an SSIS package and am currently trying to do the following:
Truncate the destination table
Fetch the data by executing the stored procedure and insert it into the destination table.
I have created an Execute SQL Task to address step 1 and a data flow with an OLE DB source and OLE DB destination to address step 2. It has been working successfully so far, but it isn't working for one of my stored procedures, which uses temp tables.
When I edit the OLE DB source and click the Preview button, I get the error "no column returned".
I know that SSIS has an issue generating columns when executing stored procedures that depend on temp tables. I converted the stored proc to use table variables instead, and it is now able to return columns in SSIS when I do a preview. The only downside is that the stored procedure now takes much longer to execute: 1 hour 15 minutes compared to 15 minutes when using temp tables.
I did see a suggestion to use SET FMTONLY before executing the stored procedure as an alternative to switching to table variables, but that didn't seem to work; I get a syntax or permission-denied error.
Could somebody suggest a solution to my problem that does not compromise on performance?
Sounds like you've already read all the approaches to using Temp tables in SSIS, including the IF 1=0... trick? If you haven't seen that one yet, google it.
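For anyone who hasn't seen it, the IF 1=0 trick usually looks something like this at the top of the procedure (the proc, table, and column names here are just placeholders):

    CREATE PROCEDURE dbo.usp_MyTempTableProc
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Never executed, but gives the metadata probe a result-set shape to work with;
        -- the column list must match the real SELECT at the bottom.
        IF 1 = 0
        BEGIN
            SELECT CAST(NULL AS int)         AS OrderID,
                   CAST(NULL AS varchar(50)) AS CustomerName;
        END;

        CREATE TABLE #work (OrderID int, CustomerName varchar(50));
        -- ...populate and transform #work here...

        SELECT OrderID, CustomerName FROM #work;
    END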
You say that using table variables causes your stored procedure to take about five times longer than using temp tables. The most likely reason is that you are indexing your temp tables but not your table variables. If you didn't know that table variables can be indexed: they can. You might try that.
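As a quick sketch (names are made up), a keyed and indexed table variable looks like this; the inline INDEX syntax needs SQL Server 2014 or later, while the PRIMARY KEY works on older versions too:

    DECLARE @work TABLE
    (
        OrderID      int PRIMARY KEY,                       -- implicit clustered index
        CustomerName varchar(50),
        INDEX IX_CustomerName NONCLUSTERED (CustomerName)   -- SQL Server 2014+ inline index
    );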
Finally, a solution that you haven't mentioned is that you can replace your temporary table with a real table that gets truncated when you're done using it.
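That option could be as simple as the following, with placeholder names and the truncate done either before or after each run:

    IF OBJECT_ID('dbo.Work_Staging') IS NULL
        CREATE TABLE dbo.Work_Staging (OrderID int, CustomerName varchar(50));

    TRUNCATE TABLE dbo.Work_Staging;
    -- ...populate dbo.Work_Staging, do the heavy processing, SELECT from it...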
Short comment:
Try EXEC WITH RESULT SETS and specify the metadata yourself for a proc with temp tables; or use the Script Component as a source and specify the Output columns yourself.
Long comment:
Technically speaking, it is the driver/database you are using in SSIS that would decide the behavior when working with temp tables.
Metadata is an important factor when using SSIS's pipeline components. By metadata, I mean the names of the columns, their data types etc that a pipeline component uses. When designing a data flow, someone/something should provide this metadata to the components that require it.
In most cases, SSIS automatically retrieves the metadata. Components that do not connect to an external data source, like Conditional Split, get their metadata from the other components they are connected to. For the pipeline components that do connect to an external data source (like OLE DB Source, OLE DB Destination, Lookup, etc.), SSIS provides a mechanism to get this metadata without human involvement: the driver connects to the database and retrieves the metadata of the output. If the driver/database is capable of returning the metadata, then that metadata is used. If the driver/database is incapable, then you get the errors you are seeing. The rest of my comments assume that you are using a SQL Server database.
When working with a SQL Server database in SSIS, we typically use the native client drivers provided by Microsoft. When trying to get the metadata, these drivers try to do so without actually executing the SQL statement (actual execution can have side effects, and might take more than a few seconds/minutes/hours; you don't want side effects and long waits at package design time). To get the metadata, the driver relies on the metadata of the actual objects used in the SQL command. If the command uses a physical table or view, SQL Server already has the metadata available and can supply it to the driver. If it is a temp table, SQL Server does not have the metadata until it actually creates the temp table. With the older SET FMTONLY option you could arrange for the temp tables to be created while any heavy processing/side effects were avoided, and thus retrieve the metadata without penalties. From SQL Server 2012 on, the native client drivers rely on newer functionality: the driver uses the sp_describe_first_result_set proc to retrieve metadata. So whether you can get the metadata or not is determined by what sp_describe_first_result_set can describe.
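If you want to see what the driver will see, you can call that proc yourself against your command; a hypothetical example:

    EXEC sys.sp_describe_first_result_set
         @tsql = N'EXEC dbo.usp_MyTempTableProc;';

If this call errors out or returns nothing for your procedure, SSIS will not be able to build the metadata automatically either.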
So while SSIS can automatically get the metadata in most cases (because of the driver/database), it cannot in some cases (again because of the driver/database). In the second scenario, some other process (typically a human) has to help the driver infer the metadata, or provide the metadata to the component directly.
To help the driver, in SQL Server 2012 and later, you can use the WITH RESULT SETS clause to specify the output metadata. When this clause is present, the driver uses it and doesn't try to query the metadata from system objects, thus avoiding the error you would otherwise get. If you are using the drivers that came with SQL Server 2008, you can use SET FMTONLY. This option works at the driver/database level.
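A rough example, with a hypothetical procedure and made-up column definitions:

    EXEC dbo.usp_MyTempTableProc
    WITH RESULT SETS
    ((
        OrderID      int,
        CustomerName varchar(50)
    ));

With the result-set shape declared up front, the driver never has to work out the temp-table metadata on its own.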
Another option could be to use a Script Component as the Source and in the Output columns, you can specify the columns/metadata. SSIS would not try to retrieve metadata from the datasource in this case, but would rely on the definitions you provided in the Output section of the Script Component.
As you can see, both options involve a human (or some other process) specifying the metadata instead of SSIS trying to retrieve the metadata in an automated fashion. I would prefer the first option if working with SQL Server and the second option if working with databases like MySql.
I have created many SSIS packages in the past, though the need for this one is a bit different than the others which I have written.
Here's the quick description of the business need:
We have a small database on our end sourced from a 3rd party vendor, and this needs to be overwritten nightly.
The source of this data is a bunch of flat files (CSV) from the 3rd party vendor.
Current setup: we truncate the tables of this database, and we then insert the new data from the files, all via SSIS.
Problem: There are times when the files fail to arrive, and we end up truncating the old data even though we don't have the fresh data set. This leaves us with no data at all, whereas we would prefer to have yesterday's data over no data.
Desired Solution: I would like some sort of mechanism to see if the new data truly exists (these files) prior to truncating our current data.
What I have tried: I tried to capture the data from the files, add it to an ADO recordset, and only proceed if that part succeeded. This doesn't seem to work for me, as I have all the data-capture activities in one data flow and I don't see a way to reuse that data. It would also seem wasteful of resources to do that and let the in-memory tables just sit there.
What have you done in a similar situation?
If the files are not present, set flags such as IsFile1Found to false and pass those flags to a stored procedure that truncates on a conditional basis.
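A minimal sketch of such a procedure, assuming one bit flag per file (names are hypothetical):

    CREATE PROCEDURE dbo.usp_TruncateIfFilesFound
        @IsFile1Found bit,
        @IsFile2Found bit
    AS
    BEGIN
        SET NOCOUNT ON;

        IF @IsFile1Found = 1
            TRUNCATE TABLE dbo.Table1;

        IF @IsFile2Found = 1
            TRUNCATE TABLE dbo.Table2;
    END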
If a file might be empty, you can use PowerShell through an Execute Process Task to extract its first two rows; if there are two rows (header + data row), the data file is not empty. Then you can truncate the table and import the data.
Another approach could be:
You can load the data into staging tables, insert the data from the staging tables into the destination tables using a SQL stored procedure, and truncate the staging tables after the data has been moved to all the destination tables. This way, before truncating a destination table, you can check whether its staging table is empty or not.
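As a sketch, with made-up table names, the swap inside that stored procedure could look like this:

    IF EXISTS (SELECT 1 FROM dbo.Staging_Customers)
    BEGIN
        TRUNCATE TABLE dbo.Customers;

        INSERT INTO dbo.Customers (CustomerID, CustomerName)
        SELECT CustomerID, CustomerName
        FROM   dbo.Staging_Customers;

        TRUNCATE TABLE dbo.Staging_Customers;
    END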
I looked around and found that some others were struggling with the same issue, though none of them had a very elegant solution, nor do I.
What I ended up doing was to create a flat file connection to each file of interest and have a task count the records and save the count to a variable. If a file isn't there, the package fails and you can stop execution at that point. There are some files whose actual count is interesting to me, though for the most part I don't care. If you don't care what the counts are, you can keep recycling the same variable; this reduces the number of variables you have to create (I needed 31). In order to preserve resources (read: reduce package execution time), I excluded all but one of the columns in each data source; it made a tremendous difference.
I have a simple data flow with two objects: the source, which is an .mdb file, and the destination, which is an MSSQL database.
The idea is to migrate the data from one to another.
The problem is that the data is extracted from an Access query, and one column has ~1000 characters, but in SSIS the external column has the default length of 255 in the advanced properties, so when I execute the task it tries to truncate the value. Disabling the truncation error is not an option, and the length of the external column cannot be modified; it throws an error regarding the metadata.
First of all, can anyone explain WHY?
Second of all, I need a resolution, and I need it fast, because it's kinda driving me crazy.
This kind of problem occurs because the SSIS task "guesses" the length of the column by inspecting the first 100 (AFAIK) rows. So if all rows from 1 to 100 have a length of 10 and row 101 has a length of 11, the task will fail, because the length was "guessed" as 10.
Modifying the length throws an error because you have ValidateExternalMetadata set to true. To solve this, go to the advanced options of your import task (Access) and set that value to false.
This means the task will accept the modified values you entered without checking them.
Did you try the SSIS Import and Export Wizard to import the data from within the BI development environment? That is the easiest way with MS Access, as it not only imports the data but also saves the package. If you get an error during the import (using the wizard), please post it, as that helps with further investigation. Also, as @stb suggested, try having the first record be over 1000 characters.
Access supports queries which are the equivalent to views in MSSQL.
The column size is defined not by looking at a few results but by the default column length of the column data type.
I created another table with the desired data types, and before the data flow I put two SQL scripts in the package: one to delete all the data in that table and one to execute the query against it, so as to treat it as a temporary table.
Then the actual data flow is executed against this pseudo-temporary table.
This solved my problem.
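For what it's worth, the two SQL scripts mentioned above can be as simple as the following (names are placeholders; both statements run against the Access connection):

    DELETE FROM StagingTable;

    INSERT INTO StagingTable
    SELECT * FROM MyAccessQuery;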
I have a scenario where I have to check that the first column of the Excel/CSV file has valid data, not the other columns. If the data is not present in the first column, my SSIS package should log an exception.
Can anyone help me with this scenario, please?
Thanks,
Sateesh.
In SSIS you can use a Conditional Split task to do this: send the good data to where you want it and the bad data to an exception table.
Personally, I always prefer to start by putting the data for any import into two tables: one with the raw, unchanged data and one that will contain the cleaned-up data before the import to the prod tables. This makes it easier to see the cause when you inevitably have to research why some bad data got into the database (if you are doing your job right, 90+% of the time it's bad data you were sent; you can't know the contract expires on 4/12/2012 when you were sent 4/12/2011, to pick a not-so-random example). Also, always make sure to save the input file to an archive location. Trust me, you will need one or more of those archived files some day.