How does SSIS choose which data flow tasks to execute in parallel?

I have an SSIS package with many (more than 50) data flow tasks. When I execute the package, I notice that at most 6 data flow tasks run in parallel. What I want to know is: how does SSIS choose which tasks to execute first? Note that the data flow tasks are not connected. If anyone knows of any documentation regarding this, please share.

There's no official/supported flag anywhere that enforces the order.
You can define precedence constraints to force an order, but then you lose some of the flexibility of parallelism.
You can increase the number of concurrent tasks by setting MaxConcurrentExecutables. Its default value of -1 means the number of logical processors plus 2, which is the 6 you observed.
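For what it's worth, the property can also be set programmatically through the SSIS runtime API. A minimal sketch, assuming a reference to Microsoft.SqlServer.ManagedDTS (the package path is illustrative; normally you would just change the value in the package's property grid):

    // Sketch: raising MaxConcurrentExecutables via the SSIS runtime API
    using Microsoft.SqlServer.Dts.Runtime;

    class SetConcurrency
    {
        static void Main()
        {
            var app = new Application();
            // Package path is illustrative
            Package pkg = app.LoadPackage(@"C:\packages\MyPackage.dtsx", null);

            // Default is -1, which resolves to (logical processors + 2) at run time
            pkg.MaxConcurrentExecutables = 10;

            app.SaveToXml(@"C:\packages\MyPackage.dtsx", pkg, null);
        }
    }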
You can refer to the following link for more information:
https://www.sqlservercentral.com/blogs/parallel-execution-in-ssis-with-maxconcurrentexecutables

Related

Getting the maximum concurrent executables at runtime in SSIS

I have an SSIS package that executes a SQL task against a large list of servers. Since the number is quite big, I am trying to split the workload and process it in parallel. The problem is that I need to know exactly how many parts I can split it into, depending on the number of logical processors of the machine that runs it.
Is there any way to get the number of logical processors in SSIS so the work can be organized based on that?
A C# script task can return System.Environment.ProcessorCount; see https://msdn.microsoft.com/en-us/library/system.environment.processorcount.aspx.
Or if you want more specific details (physical cores versus logical processors), it looks like you need to execute WMI queries; see How to find the Number of CPU Cores via .NET/C#?.
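For example, a minimal Script Task body along those lines, assuming a package variable User::LogicalProcessors (Int32, hypothetical name) has been added to the task's ReadWriteVariables:

    // Inside the Script Task's generated ScriptMain class
    public void Main()
    {
        // Write the logical processor count into a package variable
        Dts.Variables["User::LogicalProcessors"].Value = System.Environment.ProcessorCount;
        Dts.TaskResult = (int)ScriptResults.Success;
    }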

SSIS Parallel loop

I am trying to design an SSIS package where the first step gets data from a table and, for each record returned, executes a VB script via an Execute Process task, in parallel, based on the output from step 1.
I understand SSIS supports For Loop containers and parallel processing for repetitive tasks, but I cannot use a For Loop because it is not parallel, and I cannot design the parallel tasks in advance because they depend on the input data. Step 1 could return 0, 1, or 10 records (which have to be executed in parallel).
We don't have the ability to use the Script component.
Any suggestions are much appreciated.
thanks
SSIS is pretty restricted when it comes to parallel execution. If you can't use Script components / Script tasks, it's even worse.
However, you can still create a fixed number of Execute Process tasks and steer, via parameters/variables, how many of them are executed and which values are passed to each. But as you might guess, this leaves the bitter aftertaste of the question: "What if I need several more tasks?"
You might want to consider purchasing a third-party component; there are several available on the net.

SSIS Best Practice - Do 1 of 2 dozen things

I have an SSIS package that is processing a queue.
I currently have a single package that is broken into 3 containers:
1. gather some metadata
2. do the work
3. re-examine the metadata, update the queue with what we think happened (success or some flavor of failure)
I am not super happy with the speed; part of it is that I am running on a hamster-powered server, but that is out of my control.
The middle piece may offer an opportunity for an improvement...
There are 20 tables that may need to be updated.
Each queue item will update 1 table.
I currently have a sequence that contains 20 sequence containers.
They all do essentially the same thing, but I couldn't figure out a way to abstract them.
The first box in each is an empty script task. There is a conditional flow to "the guts" if there is a match on the table name.
So I open up 20 sequence containers, 20 empty script tasks, and do 20 true/false checks.
Watching the yellow/green light show, this seems to be slow.
Is there a more efficient way? The only way I can think of to make it better is to have the 20 empty scripts outside the sequence containers; what that would save is opening the container. I can't believe that opening a sequence container is all that expensive. Does it possibly re-verify every task in the container every time?
Just fishing, if anyone has any thoughts I would be very happy to hear them.
Thanks
Greg
Your main issue right now is that you are running this in BIDS. BIDS is designed to make development and debugging of packages easy, so yes, to your point, it validates all of the objects as it runs. Plus, the "yellow/green light show" is more overhead to show you what is happening in the package as it runs. You will get much better performance when you run the package with DTExec or as a scheduled job on SQL Server. Are you logging your packages? If so, run from the server and look at the logs to verify how long the process actually takes there. If it is still taking too long at that point, then you can implement some of @registered user's ideas.
Are you running each of the tasks in parallel? If it has to cycle through all 60 objects serially, then your major room for improvement is running them in parallel. If you are trying to parallelize the processes, here are a few possible solutions:
Create all 60 objects, in 20 chains of 3 objects each. This is labor-intensive to set up, but it is the easiest to troubleshoot and allows you to customize it when necessary. Obviously this does not abstract anything away!
Create a parent package and a child package. The child package would contain the structure of what you want to execute. The parent package contains 20 Execute Package tasks. This is similar to option 1, but it offers the advantage that you only have one set of code to maintain for the 3-task sequence container. This likely means you will move to a table-driven metadata model. It works well in SSIS with the CozyRoc Data Flow Plus task if you are transferring data from one server to another. If you are doing everything on the same server, then you are probably just orchestrating stored procedure executions, which would be easy to do with this model.
Create a package that uses the CozyRoc Parallel Task and Data Flow Plus. This can allow you to encapsulate all the logic in one package and execute everything in parallel. Warning: I tried this approach in SQL Server 2008 R2 with great success. However, when SQL Server 2012 was released, the CozyRoc Parallel Task did not behave the way it did in previous versions for me, due to some under-the-covers changes in SSIS. I logged this as a bug with CozyRoc, but as best I know the issue has not been resolved (as of 4/1/2013). Also, this model may abstract away too much of the ETL and make initial loads and troubleshooting individual table loads more difficult in the future.
Personally, I use solution 1, since any of my team members can implement this code successfully. Metadata-driven solutions are sexy, but much harder to code correctly.
May I suggest wrapping your 20 updates in a single stored procedure? Not knowing how variable your input data is, I don't know how suitable this is, but it is my first reaction.
Well, here is what I did...
I added a dummy task at the "top" of the parent sequence container. From that I added 20 precedence constraints (flow links), one to each of the child sequence containers (CSC). Now each CSC gets opened only if necessary.
My throughput did increase by about 30% (26 rpm → 34 rpm on minimal sampling).
I could go with either zman's answer or registered user's; both were helpful. I chose zman's because the real answer always starts with looking at the log to see exactly how long something takes (the green/yellow show is not very reliable, in my experience).
thanks

SSIS - Limiting Concurrent Connections

I am using SSIS to connect to a legacy mainframe database, which allows only 5 concurrent connections at a time.
I have a data flow task with many tables to transfer, and it kicks out because of this limitation.
I have split the data flow task into separate data flows and this is working for the moment, but it is not optimal, as they need to be sequenced and one large transfer in a flow holds up subsequent transfers.
Does anyone have any idea how to limit the number of connections in a single data flow? I had a look at using EngineThreads, but this did not make any difference.
Any help much appreciated.
The connection object you are using for your tasks should have a property named RetainSameConnection. Setting it to True causes the same connection to be reused across all tasks. At least this is true for OLE DB connection types; I don't know if ADO.NET connections have the same property, but they probably do.
Here is an article for more information: http://munishbansal.wordpress.com/2009/04/01/how-to-retain-same-data-connection-across-multiple-tasks-in-ssis/
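If you would rather set it programmatically than in the designer, a rough sketch using the SSIS runtime API might look like this (the package path and connection manager name are illustrative):

    // Sketch: enabling RetainSameConnection on a connection manager
    using Microsoft.SqlServer.Dts.Runtime;

    class RetainConnection
    {
        static void Main()
        {
            var app = new Application();
            Package pkg = app.LoadPackage(@"C:\packages\Transfer.dtsx", null);

            // "Mainframe" is an illustrative connection manager name
            ConnectionManager cm = pkg.Connections["Mainframe"];
            cm.Properties["RetainSameConnection"].SetValue(cm, true);

            app.SaveToXml(@"C:\packages\Transfer.dtsx", pkg, null);
        }
    }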

How to make this SSIS scenario more parallel

I have a million rows in a database table. For each row I have to run a custom EXE, parse its output, and update another database table.
How can I process multiple rows in parallel?
I now have a simple data flow task: Get Data → Run Script (run process, parse output) → Store Data.
For 6,000 rows it took 3 hours. Way too much.
There is a single bottleneck here: running the process for each row. Increasing EngineThreads would not help at all, as there will only ever be one thread running this particular script transform. The time spent in the other transforms probably does not matter at all. Processes are heavyweight objects, and running thousands of them will never be cheap.
I can think of the following ideas to make it better:
1) The best way to fix it is to convert your custom EXE into an assembly and call it from the script transform, avoiding the overhead of creating processes, parsing the output, etc.
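For instance, after moving the EXE's logic into a class library, the per-row work in the script transform could collapse to something like this sketch (LegacyTool.Transform and the column names are illustrative, not a real API):

    // Sketch: calling the converted assembly in-process, once per row
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // No process creation, no stdout parsing: just a method call
        Row.OutputValue = LegacyTool.Transform(Row.InputValue);
    }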
2) If you have to use separate processes, you can try to run them in parallel. This will help if the process mostly waits for input/output (i.e., it is I/O-bound). If the processes are memory-bound or CPU-bound, you will not win much by running them in parallel.
2A) Complex script, simple package.
To run them in parallel, modify the ProcessInput method in your script to start the process asynchronously and not wait for it to complete; move on to the next row and create the next process. Subscribe to the process output and the process Exited event so you know when it has finished. Limit the number of processes running in parallel, otherwise you'll run out of memory. Wait until all the processes are done before returning from the ProcessInput call.
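A rough sketch of this idea inside the script component (the executable path, column names, and the cap of 8 are all illustrative, and production code would need real error handling; the using directives go at the top of the script file):

    // Sketch of 2A: launch the per-row process asynchronously, with a cap
    using System.Collections.Generic;
    using System.Diagnostics;

    private const int MaxParallel = 8;
    private readonly List<Process> running = new List<Process>();

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Throttle: never let more than MaxParallel processes run at once
        while (running.Count >= MaxParallel)
        {
            running.RemoveAll(p => p.HasExited);
            System.Threading.Thread.Sleep(50);
        }

        var psi = new ProcessStartInfo(@"C:\tools\custom.exe", Row.InputValue.ToString())
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        Process proc = Process.Start(psi);
        proc.EnableRaisingEvents = true;
        proc.OutputDataReceived += (s, e) => { /* collect and parse output here */ };
        proc.BeginOutputReadLine();
        running.Add(proc);
    }

    public override void PostExecute()
    {
        // Wait for all outstanding processes before the component finishes
        foreach (Process p in running)
            p.WaitForExit();
        base.PostExecute();
    }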
2B) Simple script, complex package.
Keep the current sequential script, but partition the data using SSIS. Add a Conditional Split transform and split the input stream into multiple streams based on some hash expression, something that will make each output receive approximately the same amount of data (for example, with a numeric key column the expressions could be Id % 3 == 0, Id % 3 == 1, and Id % 3 == 2 for three outputs; the column name here is illustrative). The number of streams equals the number of process instances you want to run in parallel. Add a copy of your script transform to each output of the Conditional Split. Now you should also increase the EngineThreads property :) and these transforms will run in parallel. (Note: based on the tag, I assume you use SSIS 2008. You'll need to insert additional Union All transforms to make this work in SSIS 2005.)
This should make it perform better, but millions of processes is a lot. You'll hardly get really good performance here.
If you are executing this process using the data flow task, then there is a property on it called EngineThreads, which defaults to a value of 5. You can set it to a higher number, like 20, which will devote more threads to processing those rows.
That is just a performance tweak or optimisation; if your SSIS package is still running really slowly, I would look at the architecture and design of the package itself.
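For what it's worth, EngineThreads can also be set programmatically via the runtime API. A sketch, assuming a reference to Microsoft.SqlServer.ManagedDTS (the package path and task name are illustrative; normally you would change it in the data flow task's property grid):

    // Sketch: raising EngineThreads on a data flow task
    using Microsoft.SqlServer.Dts.Runtime;

    class SetEngineThreads
    {
        static void Main()
        {
            var app = new Application();
            Package pkg = app.LoadPackage(@"C:\packages\RowProcessor.dtsx", null);

            // "Data Flow Task" is an illustrative task name
            TaskHost dft = (TaskHost)pkg.Executables["Data Flow Task"];
            dft.Properties["EngineThreads"].SetValue(dft, 20);

            app.SaveToXml(@"C:\packages\RowProcessor.dtsx", pkg, null);
        }
    }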