SSIS Copy table if not empty

I would like to make a package that copies data from a table only if the table is not empty. I know how to do a count and how to make a package that copies data, but the problem is that a data flow Source can't have any inputs, so I don't know how to do it. Any suggestions?

I don't understand your comment about dragging a "green line from a package to a source" but instead of trying to determine in advance if the table is empty, just do your copy anyway and then see how many rows were copied:
Create a package variable for the rowcount
Populate the variable using the rowcount transformation
Use an expression in the precedence constraint to check the variable: if it's greater than zero, continue executing the rest of your package (see the example expression below)
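For example, assuming the rowcount variable is called User::RowCount (the name is yours to choose), the expression on the precedence constraint leaving the Data Flow Task would simply be:

    @[User::RowCount] > 0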

@Pondlife I don't think you can use a precedence constraint inside the data flow task, can you?
I believe you can use it only in the control flow.
I would add a "Execute SQL Task" with the count, sending the result to a variable and from this task, I would drag the green arrow to the Data Flow task that makes the copy and on this arrow I would add the expression on the precedence constraint.

As you have correctly noted, a data flow source does not accept input so one cannot perform logic in the dataflow to determine whether this task should run.
Cannot create connector.
The destination component does not have any available inputs for use in creating a path.
However, there's nothing stopping you from setting up this logic in your control flow. I would use a query that hits the DMVs for a fast rowcount on the destination system, filtered to only the tables I wished to replicate.
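A sketch of the kind of DMV query I mean; this version lists the empty user tables, and you would filter it further to only the tables being replicated:

    -- Fast row counts from metadata, without scanning the tables themselves
    SELECT s.name AS schema_name,
           t.name AS table_name,
           SUM(p.rows) AS row_count
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
    GROUP BY s.name, t.name
    HAVING SUM(p.rows) = 0;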
Armed with the list of empty tables, how I'd handle it would depend on the number of tables. For a small number, I'd define N data flows, each with a do-nothing Script Task as a precedent, and then use an expression on the table name to enable a path, much like I did on this question.
If there are many tables, I'd define a package per table and then invoke an Execute Package Task with the package name built dynamically from the empty table's name.
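With hypothetical names, the dynamic package name could come from an expression along the lines of

    "Load_" + @[User::EmptyTableName] + ".dtsx"

applied to whatever property selects the package for your deployment model (for example, the ConnectionString of the package's file connection manager).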

State/Execution ID in SSIS

I have a problem with multiple executions of the same SSIS package.
I would like to allow parallel executions each of which handles a subset of data.
So far, I am thinking of using some state variable, but I don't know where to store it.
One option is to keep the connection open and use temp tables to coordinate the task load. However, temp tables cause lots of compilation issues, and they are not maintainable.
Are there any other ways to identify the current execution id of the package or scope of execution? I have found no state (either in memory or stored elsewhere) in SSIS so far which I can use to partition/isolate each execution.
So based on my comments above, you can try this. I don't know if it is quite what you're looking for, but maybe it can give you a hint to get further.
In the example I am calling the package with workflowid 1. This is the value you can change in your SQL Server Agent job steps: change the parameter on each step, so for example you could add two steps executing workflowid 1 and workflowid 4. Only the sequence container whose precedence constraint evaluates to success will then run.
Create a package variable
Create your package flow
Edit your SQL Task Get WorkflowID
Add a parameter mapping to your package variable
Get the result set into a local variable called WorkflowIDrun
Set up your precedence constraints so that only one id is allowed through (see the sketch after these steps)
Note: you could add parent workflow ids so that you can branch the flow inside the sequence container if some of the logic is shared
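A minimal sketch of the Get WorkflowID query, assuming an OLE DB connection: the simplest form just echoes the mapped package variable back as a single-row result set, which is then stored in WorkflowIDrun (a real implementation might instead look the id up in a control table):

    -- Parameter 0 is mapped to the package variable passed in by the job step
    SELECT ? AS WorkflowIDrun;

Each precedence constraint then tests that variable, for example @[User::WorkflowIDrun] == 1 on the truncate container.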
End result when package is run with workflowid 1
Create a new job in SQL Server Agent and add the needed steps. Note: I created two steps, one for workflowid 1 (truncate) and one for workflowid 2 (delete).
I then edit each step and set the variable to the right value: workflowid 1 for truncate and workflowid 2 for delete.
This could of course also be done in another job; that depends on your needs.

Store a SQL query result in a Pentaho variable

I am new to PDI (coming from SSIS) and I am having some trouble handling variables.
I would like to perform this:
From a SQL select query I would like to save the result into a variable.
For that reason I have created one job and two transformations, given that in Pentaho every step is executed in parallel.
The first transformation is in charge of setting the variable and the second transformation is going to use this result as an input.
But in the first transformation I am having trouble setting the variable: I do not understand where I have to instantiate this variable in order to implement the "set season variable" step, and then how to get this result in the next transformation.
If anyone knows about this, or could recommend a link with a good example, I'd really appreciate it.
This can indeed be confusing for SSIS users. In PDI, you don't create a recordset variable as you do in SSIS. Simply creating a job creates one for you. Each job has two different types of "Results". One for recordset rows and one for filenames.
These variables are not directly accessible; they are just part of the job. There are steps that interact with them directly. For example under the "Job" branch when you're creating a transform, there is a Get rows from results step and a Copy rows to results step. They work directly with the job's row results.
Be aware that you must manually manage the metadata for the results. This is a pain, but overall I find PDI's method of doing this more intuitive and easier than SSIS's, even though SSIS is more flexible in this regard.
There are also Get files from result and Set files in result. These interact with the job's built in file results. This is simply a list of every file touched by any step configured in the job. On the job tab there are tasks that deal with it directly such as Process result filenames, Add filenames to result and Delete filenames from results. These tasks operate on the built in file results list for the job and provide an easy way to, say, archive all the files loaded by the transform you just ran.
Be aware when using these steps that they record EVERY file touched by EVERY step in the job. If you look through most of the steps in transformations (data flows) that deal with files, there's usually an "Add files to results" checkbox that is checked by default. If you uncheck this, it will not add the file names to the job's file results. You can also delete specific files from the file results with the Delete filenames from result step.
From your Job, start a Transformation that sets the variable.
Then expose that transformation variable as a job-level (global) variable in your Job and use it.
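A rough sketch of the pattern with made-up names: the first transformation runs a Table input feeding a Set Variables step (with a job-level scope), and the second transformation can then reference the variable, for example in its own Table input with "Replace variables in script?" enabled:

    -- Second transformation, Table input step
    -- ${CURRENT_SEASON} is a hypothetical variable set by the first transformation
    SELECT *
    FROM sales
    WHERE season = '${CURRENT_SEASON}';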

SSIS PACKAGE, Only want Derived Columns

I have an SSIS package with a For Each Loop that imports multiple txt files into a SQL Server table. That runs fine.
What I am trying to accomplish is to store the distinct filename and date it was imported into a separate table. I created a separate For Each Loop for this and then archive the txt file after it's complete with a File System Task.
The issue I am having is I put an event handler to invoke a SQL Task and Send Email task if there is a warning (I was hoping for a warning only if there were no files in the directory where the package is importing from).
However, I get a warning that a column in the Data Flow task is not being used and should be removed if not needed. But the Data Flow task requires at least one input column for me to add a Derived Column transformation:
Derived Column Field1 pulls @[User::CurrentFile] from the ForEachLoop Container.
Field2 pulls the current date.
Is there a way to perform this without the warning?
It sounds like you're over-complicating things.
You have a ForEach loop, so you're already assigning a value into some variable to contain the file name, @[User::CurrentFile]. You can get the date it was loaded through either a call to GETDATE() or a reference to the system-scoped variable @[System::StartTime].
The most straightforward option would be to add an Execute SQL Task wired to the OnSuccess precedence constraint from your Data Flow Task. The Execute SQL Task would then have a statement like INSERT INTO dbo.MyLog(FileName, InsertDate) SELECT ?, ? (assuming an OLE DB Connection Manager), and then you map in your two variables.
Easy, clean, no warnings fired about unused columns in your data flow.
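For reference, the Execute SQL Task statement written out; with an OLE DB connection manager, parameter 0 maps to @[User::CurrentFile] and parameter 1 to @[System::StartTime]:

    -- Logs one row per file once the Data Flow Task succeeds
    INSERT INTO dbo.MyLog (FileName, InsertDate)
    SELECT ?, ?;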
What I think you have is something like this, based on "I created a separate For Each Loop for this":

Transfer multiple tables from one database to another using SSIS

Can somebody please help me transfer around 15 tables from one database to another? At present I can do this one by one using a Data Flow task, but then I need to repeat that task 15 times, which is very time consuming.
Why don't you just use a task? Maybe tasks->export is what you're looking for.
Otherwise you'll need to create separate blocks for each table, or:
Create a variable of type Object
Script Task: add all the table names to that Object variable
Iterate over the Object variable with a Foreach Loop container
Inside the loop, create a source that reads from a variable, and build that variable dynamically depending on the current loop value (the table name), as sketched below.
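A minimal sketch with assumed names: give a string variable such as User::SourceQuery an expression like

    "SELECT * FROM " + @[User::CurrentTable]

and set the OLE DB Source's data access mode to "SQL command from variable", pointing at User::SourceQuery.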
You can also use the Transfer SQL Server Objects task from the SSIS toolbox. In the task, specify the source and destination servers and databases; set CopyAllObjects to false; and under ObjectsToCopy either set CopyAllTables to true, or leave it false and pick the tables you want to copy from the list.

SSIS OLE DB conditional "insert"

I have no idea whether this can be done or not, but basically, I have the following data flow:
Extracts the data from an XML file (works fine)
Simply splits the records based on an enclosed condition (works fine)
Had to add a Derived Column object due to some character set issues (there might be better methods, but it works)
Now "Step 4" is where I'm running into a scenario where I'd only like to insert the values that have a corresponding match in my database, for instance, the XML has about 6000 records, and from those, I have maybe 10 of them that I need to match back against and insert them instead of inserting all 6000 of them and doing the compare after the fact (which I could also do, but was hoping there'd be another method). I was thinking that I might be able to perform a sql insert command within the OLE DB DESTINATION object where the ID value in the file matches, but that's what I'm not 100% clear on or if it's even possible for that matter. Should I simply go the temp table route and scrub the data after the fact, or can I do this directly in the destination piece? Any suggestions would be greatly appreciated.
EDIT
Thanks to the last comment from billinkc, I managed to get a bit closer: I can identify the matches and use that result set, but somehow it seems to be running the data flow twice, which is strange. I took the Lookup object out to see whether it was causing it, and it seems to be the case. Is there any reason why it would run this entire flow twice with the addition of the Lookup? I should have a total of 8 matches, which I confirmed with the data viewer output, but then it seems to run a second time for the same file.
Is there a reason you can't use a Lookup transformation to find existing records? Configure it so that it routes non-match records to the no-match output, and then only connect the match-found connector to the "Navigator Staging Manager Funds" destination.
I believe that answers what you've asked, but I wonder if you're expressing the right desire. My assumption is that the lookup goes against the existing destination, so the lookup returns the id 10 for a row. All of the out-of-the-box destinations in SSIS only perform inserts, so the row that found a match would now get doubled. As you are looking for existing rows, that usually implies you want to perform an update to an existing row. If that's the case, there is a specially designed transformation, the OLE DB Command; it is the component that allows updates. There is a performance problem with that component: it issues a single update statement per row flowing through it. For 10 rows, I think it'd be fine. Otherwise, the pattern you'd use is to write all the new rows (inserts) into your destination table, write all of your changed rows (updates) into a second staging-type table, and then, after the data flow is complete, use an Execute SQL Task to perform a set-based update statement, sketched below.
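A sketch of that set-based update, with hypothetical table and column names:

    -- Run from an Execute SQL Task after the data flow completes
    UPDATE d
    SET    d.SomeColumn = s.SomeColumn
    FROM   dbo.DestinationTable AS d
    JOIN   dbo.DestinationStaging AS s
        ON s.Id = d.Id;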
There are third party options that handle combined upserts. I know Pragmatic Works has an option and there are probably others on the tasks and components site.