SSIS: Package Sequence and Performance

I have a main SSIS package that runs all my other packages. Even though some packages are not dependent on each other, is it always better for performance to put them in a sequence, or is it better to run them at the same time (no sequence)?

As Eric has mentioned, it truly depends on what the packages do, but if the packages work against different tables, then in my (limited) experience running them in parallel gives better results. I would advise you to go by the dependencies and arrange the packages in sequence containers based on which ones can run in parallel. The SSIS engine does a pretty good job of running parallel tasks.

DTS/SSIS vs. Informatica Power Center

I'm sure that this is a pretty vague question that is difficult to answer, but I would be grateful for any general thoughts on the subject.
Let me give you some quick background.
A decade ago, we wrote data loads that read input flat files from legacy applications and loaded them into our data mart. Originally, our load programs were written in VB6 and cursored through the flat file, performing this general process for each record:
1) Look up the record. If found, update it.
2) Else, insert a new record.
Then we changed this process to use SQL Server: DTS the flat file into a temp table, then perform a massive set-based join between the temp table and the target production table, using the data from the temp table to update the target table. Records that didn't join were inserted.
This is a simplification of the process, but essentially it went from an iterative approach to a "set-based" one, no longer performing updates one record at a time. As a result, we got huge performance gains.
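To make that concrete, here is a minimal sketch of the set-based pattern described above (the table and column names are hypothetical):

    -- Flat file has already been bulk-loaded into #StagedCustomer.
    -- Step 1: update target rows that join to staged rows.
    UPDATE t
    SET    t.CustomerName = s.CustomerName,
           t.Balance      = s.Balance
    FROM   dbo.TargetCustomer AS t
    JOIN   #StagedCustomer    AS s
        ON s.CustomerKey = t.CustomerKey;

    -- Step 2: insert staged rows that found no match.
    INSERT INTO dbo.TargetCustomer (CustomerKey, CustomerName, Balance)
    SELECT s.CustomerKey, s.CustomerName, s.Balance
    FROM   #StagedCustomer AS s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.TargetCustomer AS t
                       WHERE  t.CustomerKey = s.CustomerKey);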
Then we created what was in my opinion a powerful set of shared functions in a DLL to perform common functions/update patterns using this approach. It greatly abstracted the development and really cut down on the development time.
Then Informatica PowerCenter, an ETL tool, came along, and management wanted to standardize on it and rewrite the old VB loads that used DTS.
I heard that PowerCenter processes records iteratively, but I know that it does do some optimization tricks, so I am curious how Informatica would perform.
Does anyone have enough experience with DTS or SSIS to make a gut prediction as to which would generally perform better?
I joined an organization that used Informatica PowerCenter 8.1.1. Although I can't speak for Informatica setups in general, I can say that at this company Informatica was exceedingly inefficient. The main problem was that Informatica generated some really heinous SQL code on the back end. When I watched what it was doing with Profiler and reviewed the text logs, I saw that it generated separate insert, update, and delete statements for each row that needed to be inserted/updated/deleted. Instead of trying to fix the Informatica implementation, I simply replaced it with SSIS 2008.
Another problem I had with Informatica was managing parallelization. In both DTS and SSIS, parallelizing tasks was pretty simple -- don't define precedence constraints and your tasks will run in parallel. In Informatica, you define a starting point and then define the branches for running processes in parallel. I couldn't find a way for it to limit the number of parallel processes unless I explicitly defined them by chaining the worklets or tasks.
In my case, SSIS substantially outperformed Informatica. Our load process with Informatica took about 8-12 hours; with SSIS and SQL Server Agent jobs it was about 1-2 hours. I am certain that, had we properly tuned Informatica, we could have reduced the load to 3-4 hours, but I still don't think it would have done much better.

SSIS Handling External Issues

I have an SSIS package that works fine. It runs every night and takes about 4 hours to complete. I am new to SSIS, so I want to see what my options are. I am not finding anything on the web about these two issues, so any advice is greatly appreciated.
1. What should I do when there is an external issue such as a power failure or an accidental restart? Is there a way to alert someone, or to have the package begin again on restart?
2. A couple of weeks ago, a process got hung and locked a table, preventing the package from executing. What is the best way to ensure I have the proper access before starting and, if not, to get that access? I am OK with killing the offending processes, etc.
Looking for best-practice info. Thanks.
For #1 - there is no inherent "restart" mechanism in SSIS, since to start with, there is no inherent "start" mechanism. You'll have to look at the process that you've got managing the scheduled execution of your packages, which I assume could be SQL Agent.
Given that, your options for determining if a SQL Agent job failed, and/or restarting that job are the same whether the contents of the job are SSIS packages or not. There are quite a few stored procedures for monitoring and querying job execution and results. You could also implement your own mechanism for recording job/package status.
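As a sketch, the most recent outcome of a job can be pulled straight from the msdb system tables (the job name here is hypothetical):

    -- Last overall outcome of a job (run_status: 0 = failed, 1 = succeeded).
    SELECT TOP (1)
           j.name, h.run_date, h.run_time, h.run_status
    FROM   msdb.dbo.sysjobs       AS j
    JOIN   msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE  j.name = N'Nightly Load'   -- hypothetical job name
      AND  h.step_id = 0              -- step 0 records the job-level outcome
    ORDER  BY h.instance_id DESC;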
SSIS does offer "checkpoints" to help you restart packages from certain points, but the general consensus on that feature is that it is limited in its applicability - your mileage may vary.
Personally, I always include a failure route in my jobs to email someone when a job fails, and I configure my jobs and packages to be idempotent - that is, they can be re-run without fear of improperly performing the same operations twice. They either "reset" the environment (delete and reload), or they can detect exactly where they left off.
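As a rough sketch of the "reset" style (the table and parameter names are made up), the first step of the package removes whatever a previous, possibly interrupted, run loaded, so the load that follows can safely run any number of times:

    -- Hypothetical reset step at the start of the package:
    -- clear anything an earlier run for this load date left behind.
    DELETE FROM dbo.FactSales
    WHERE  LoadDate = @LoadDate;   -- value supplied by the package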
Item #2 is a difficult question and depends greatly on your environment and scenario. You can use simple tasks like an Execute SQL Task to run "test" commands that are designed to fail if sufficient privileges are missing or blocking locks exist. Or you may be able to inquire directly through stored procedures or other mechanisms to determine whether you need to take remedial action before you attempt to run the meat of your package.
Using Precedence Constraints "on failure" can assist with that kind of logic. So can Event Handlers.
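For example, a "test" Execute SQL Task along these lines would fail fast (rather than hang) if the target table is locked, letting an "on failure" constraint route the package to remedial logic - a sketch only, with a hypothetical table name:

    -- Give up after 5 seconds instead of waiting on a blocking lock.
    SET LOCK_TIMEOUT 5000;

    BEGIN TRAN;
        -- Try to take an update lock on the table; raises error 1222
        -- ("Lock request time out period exceeded") if it is blocked.
        SELECT TOP (1) 1
        FROM   dbo.TargetTable WITH (UPDLOCK, HOLDLOCK);
    COMMIT TRAN;

    -- Privileges can be probed up front the same way:
    SELECT HAS_PERMS_BY_NAME('dbo.TargetTable', 'OBJECT', 'UPDATE');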

Run 100+ SSIS packages in parallel from a parent package

I have 100+ child packages and I need to run them in parallel from a parent package. For this I will have to create 100+ Execute Package Tasks and then 100+ file connections. This doesn't look appealing to me, and it is repetitive and error prone. Is there any other way to do this? Keep two things in mind:
Child package execution should be in parallel (so no For Loop and the like).
I am using checkpoint-based restartability and hence need the control flow items at compile time (no Script-component-based solutions either).
UPDATE: Even if you have massive hardware, Windows limits the number of concurrent tasks you can start simultaneously due to an inherent design issue. Though I achieved parallel execution using jobs, I had to limit it to 25 parallel packages at a time to avoid random failures caused by this Windows issue.
Does it have to be file connections? Have you looked at the option of storing the packages in the SSIS package store and referencing them from there?
You would still have your 100+ components, but not your 100+ file connections.
I give up. There is no other way, AFAIK. I decided to create 100+ jobs, one job per package, all on the same schedule. Creating the jobs was easier using dynamic SQL.
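Roughly, the job-creation loop can look like this (the driver table, package path, and schedule name are placeholders):

    -- Sketch: create one SQL Server Agent job per child package.
    DECLARE @pkg sysname, @cmd nvarchar(4000);

    DECLARE pkgs CURSOR LOCAL FAST_FORWARD FOR
        SELECT PackageName FROM dbo.ChildPackageList;  -- hypothetical package list

    OPEN pkgs;
    FETCH NEXT FROM pkgs INTO @pkg;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @cmd = N'/FILE "C:\Packages\' + @pkg + N'.dtsx"';

        EXEC msdb.dbo.sp_add_job       @job_name = @pkg;
        EXEC msdb.dbo.sp_add_jobstep   @job_name = @pkg,
                                       @step_name = N'Run package',
                                       @subsystem = N'SSIS',
                                       @command   = @cmd;
        EXEC msdb.dbo.sp_add_jobserver @job_name = @pkg;  -- target the local server
        EXEC msdb.dbo.sp_attach_schedule @job_name = @pkg,
                                         @schedule_name = N'NightlyLoad';  -- shared schedule

        FETCH NEXT FROM pkgs INTO @pkg;
    END
    CLOSE pkgs;
    DEALLOCATE pkgs;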
You could create the package dynamically with EzAPI.
http://blogs.msdn.com/b/mattm/archive/2008/12/30/ezapi-alternative-package-creation-api.aspx

Scheduling identical SSIS packages to run in parallel

We've got an architecture where we intend to use SSIS as a data-loading engine for incoming batches. The intent is to reduce the need for manual intervention and configuration and to automate the function as much as possible, so we're looking at setting up our "batch monitoring" package to run as scheduled SQL Server Agent jobs.
Is it possible to schedule several SQL Server Agent jobs using the same package, possibly looking at different folders or working on different chunks of data (grouped by batch IDs)?
We might also have 3 or 4 “jobs” all running the same package and all monitoring the same folder for incoming files, but at slightly different intervals to avoid file contention issues.
I don't know of any reason you couldn't do this. You could launch the packages each with a different configuration (or configurations) pointing to different working directories, input folders, etc.
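As a sketch of one way to do that, each job can run the same package but push a different value into a package variable via dtexec's /SET option (the job, package, and variable names here are made up):

    -- Assumes a job 'Batch Monitor A' already created with sp_add_job.
    -- A second job, 'Batch Monitor B', would point at a different folder.
    DECLARE @cmd nvarchar(4000);
    SET @cmd = N'/FILE "C:\Packages\BatchMonitor.dtsx" '
             + N'/SET \Package.Variables[User::InputFolder].Properties[Value];"\\server\incoming\A"';

    EXEC msdb.dbo.sp_add_jobstep
         @job_name  = N'Batch Monitor A',
         @step_name = N'Run batch monitor',
         @subsystem = N'SSIS',
         @command   = @cmd;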

How do I ensure that only one of a certain category of job runs at once in Hudson?

I use Hudson to automate the testing of a very large, important product. I want my testing hosts to be able to run as many concurrent builds as they can theoretically support, with one exception: any number of non-Excel tests can run concurrently, but at most one Excel test at a time may run per machine.
Background:
Most of my tests are normal unit tests - the sort of thing that I can easily run in parallel. Unfortunately, a substantial and time-consuming part of my unit-testing plan consists of tests which have been implemented in Excel.
You might think it crazy to implement a test in Excel - actually there's an important reason: most of our users access our system via Excel. Excel has its own quirky ways of handling data, so the only way to guarantee that our stuff works for Excel users is to literally implement our regression tests of the application in Excel.
I've written a test-runner tool which allows me to easily fire off a group of Excel tests: each test is a single .xls file, and each group is a folder full of Excel files. I've got about 30 groups which need to be run for an end-to-end test. My tool converts the result of each test into JUnit-style XML which Hudson is able to understand. The tests use pywin32's win32com to automate Excel. When run on their own they are reliable.
I've got a group of computers which are dedicated to running tests. Each machine is quad-core and can theoretically run quite a lot of stuff at once. Unfortunately, I've found that COM cannot be used to safely control more than one Excel instance per machine at a time.
That is to say, if a second build starts and tries to talk to Excel via COM, it might interfere with the one already running and cause both tests to fail.
I can run as many other non-Excel processes as the machine will allow, but I need a way to ensure that Hudson never launches more than one Excel-dependent process on any one machine concurrently.
Sounds like the Locks and Latches plugin might help you.
http://hudson.gotdns.com/wiki/display/HUDSON/Locks+and+Latches+plugin
Isn't Hudson Java?
Since you've tagged this post python, I'll point out that Buildbot has slave locks to limit individual steps on individual slaves (or you can use them as coarser locks if you'd like).