I am using SSIS 2017 and part of what I am doing involves running several (30ish) SQL scripts whose output goes to flat files in the same folder. My question is: to do this, do I have to create 30 new file connections, or is there a way to define the folder I want all the outputs to go to and have them saved there?
I am mainly thinking of keeping a tidy Connection Managers tab. If there's a more efficient way to do it than thirty-something file connections, that would be great.
A data flow is tightly bound to the columns and types defined within it, for performance reasons.
If your use case is "I need to generate an extract of sales by year for the past 30ish years", then yes, you can make do with a single Flat File Connection Manager, because the columns and data types will not change - you're simply segmenting the data.
However, if your use case is "I need to extract Sales, Employees, Addresses, etc" then you will need a Flat File Connection Manager (and preferably a data flow) per entity/data shape.
It's my experience that you would be nicely served by designing this as 30ish packages (SQL Source -> Flat File Destination) with an overall orchestrator package that uses Execute Package Tasks to run the dependent processes. Top benefits:
You can have a team of developers work on the individual packages
Packages can be re-run individually in the event of failure
Better performance
Being me, I'd also look at Biml and see whether you can't just script all that out.
Addressing comments
To future-proof the location info, I'd define a project parameter named something like BaseFilePath (assuming the most probable change is the path itself: in dev I use paths like C:\ssisdata\input\file1.txt and C:\ssisdata\input\file3.csv, while production would be \\server\share\input\file1.txt or E:\someplace\for\data\file1.txt). I would populate it with the dev value C:\ssisdata\input and then assign the production value \\server\share\input to the project via configuration.
The crucial piece is to ensure that an Expression exists on the Flat File Connection Manager's ConnectionString property that is driven, in part, by the parameter's value. Again, being a programmatically lazy person, I have a Variable named CurrentFilePath with an expression like @[$Project::BaseFilePath] + "\\file1.csv"
The FFCM then uses @[User::CurrentFilePath] to ensure I write the file to the correct location. And since I create one package per extract, I don't have to worry about creating a Variable per Flat File Connection Manager, as it's all the same pattern.
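Concretely, the wiring is a three-piece sketch, using the illustrative names and paths above:

```
Project parameter BaseFilePath:
    dev value:        C:\ssisdata\input
    production value: \\server\share\input   (assigned via configuration)

Variable User::CurrentFilePath (EvaluateAsExpression = True), Expression:
    @[$Project::BaseFilePath] + "\\file1.csv"

Flat File Connection Manager -> Expressions -> ConnectionString:
    @[User::CurrentFilePath]
```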
Related
If I have to load multiple flat files, say 3 files each with a different structure, into a SQL database, should I create 3 different Flat File Connection Managers in SSIS?
Yes.
A Flat File Connection manager provides the contract between the shape of the source data and the reader component within your Data Flow Task.
The exact file name and the path to find said file can be changed on the fly during execution. But an additional or missing column will cause the data flow to fail validation, and the package will blow up.
If you had two files with the exact same layout but they served different purposes, I'd encourage you to make separate connection managers just to avoid complications - DistributorSales vs RetailSales. They might have the same columns (date, amount, item, purchaser), but the type of file implies an additional data point.
First time posting a question, please forgive if I don't have enough information.
I have a tiered BIML Script that has the following tiers:
10-Connection – creates the connection nodes
20-Model – loops through the connections to build the database and table nodes
30-Create/Drop Staging Tables – included because these packages need to be run prior to the remainder of the creation process
30-Flat File – loops through the table objects to create flat file formats and connections
40-Packages – loops through the table objects and creates the extract and load packages
45-Project Params & Connections – attaches the project params and connections (using named connections and GUIDs from 10-Connection). The project params are manually created in SSIS.
The process successfully connects to the source SQL Server database, generates the Create/Drop Staging Tables packages with correct metadata, and will create the extract packages successfully the first time.
Upon a second attempt to process the same BIML scripts with no changes made to the files, the process fails with “Object reference not set to an instance of an object.” & “Unable to Query on Connection” on the OleDBSource Node.
The BIML files generated in preview and output debugging have valid queries and source metadata that indicate a positive connection and a proper model. I have run the emitted queries in SSMS without error. When I move the BIML files to a new project, the process succeeds the first time and fails subsequently.
I have tried the following:
Connection Managers
Delete project connection managers prior to package re-generation
GUIDs annotated and used in PackageProject and Packages Nodes.
Delay Validation/Validate External Metadata – I have tried with both true and false on Package, DFT and OleDBSource
Project
Delete .proj files from directory
Direct PackageProject to new ProjectSubpath
I also tried simply hard coding the BimlScript to simplify and remove any variables with the same result.
The most maddening point is that the metadata and queries all indicate the process can connect and query this exact table and it functions, but only on initial creation. Adding or re-generating during testing fails. Anyone ever come across this before?
Many thanks and a shout-out to cathrine-wilhelmsen and billinkc, whose posts and tutorials have been very helpful. Any and all help would be greatly appreciated.
I changed the driver from SQLNCLI11 to SQLOLEDB with no other changes to the code - that is, swapping Provider=SQLNCLI11 for Provider=SQLOLEDB in the connection string. I tested different drivers after seeing a few example connection strings that used them.
I wish I could explain why.
I'm trying to build a large package with different source providers. A lot of processes have common tasks, like lookups and data conversions. So far the only way I have found to reuse them is copy/paste, but if I suddenly decide to change something, I have to change it across all the data flow tasks.
In general I want to connect several sources to the same lookup or transformation task.
I have a feeling that I'm missing something, thanks.
OK, basically you need to store your connection strings in a variable (object type), then in a Foreach Loop take one connection at a time, map it to a string variable, and drive your data flow's connection manager from that variable through an expression. So in the end you get the same package but with different connections (database name, etc.).
http://www.codeproject.com/Articles/14341/Using-the-Foreach-ADO-Enumerator-in-SSIS
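Spelled out, the moving parts look something like this (the table holding the connection strings and the variable names are placeholders for this sketch):

```
Execute SQL Task:
    ResultSet     = Full result set
    SQLStatement  = SELECT ConnectionString FROM dbo.SourceConnections
    Result Set    : Result Name 0 -> User::ConnectionList (Object)

Foreach Loop Container:
    Enumerator                 = Foreach ADO Enumerator
    ADO object source variable = User::ConnectionList
    Variable Mappings          : Index 0 -> User::CurrentConnectionString

OLE DB Connection Manager:
    Expression on ConnectionString = @[User::CurrentConnectionString]
```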
I have an SSIS data flow task that reads a CSV file with certain fields, tweaks it a little and inserts results into a table. The source file name is a package parameter. All is good and fine there.
Now, I need to process a slightly different kind of CSV file with an extra field. This extra field can be safely ignored, so the processing is essentially the same. The only difference is in the column mapping of the data source.
I could, of course, create a copy of the whole package and tweak the data source to match the second file format. However, this "solution" seems like terrible duplication: if there are any changes in the course of processing, I will have to do them twice. I'd rather pass another parameter to the package that would tell it what kind of file to process.
The trouble is, I don't know how to make SSIS read from one data source or another depending on parameter, hence the question.
I would duplicate the Connection Manager (CSV definition) and Data Flow in the SSIS package and tweak them for the new file format. Then I would use the parameter you described to enable/disable either Data Flow - for example, via an expression on each Data Flow Task's Disable property, something like @[$Package::FileFormat] != "extended" (the parameter name and value are just illustrative).
In essence, SSIS doesn't work with variable metadata. If this is going to be a recurring pattern, I would deal with it upstream of SSIS by building a VB/C# command-line app to shred the files into SQL tables.
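A minimal sketch of such a shredder, assuming a simple comma-delimited file with no quoted fields and a staging table dbo.RawRows(Col1...Col10) wide enough for the widest format (the connection string, table name, and column count are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

class ShredCsv
{
    static void Main(string[] args)
    {
        // args[0] is the path to the incoming file.
        string path = args[0];
        const string connectionString =
            @"Data Source=.;Initial Catalog=Staging;Integrated Security=SSPI;";
        const int maxColumns = 10; // widest format we expect to see

        // All-string staging shape; unused trailing columns stay NULL.
        var table = new DataTable();
        for (int i = 1; i <= maxColumns; i++)
            table.Columns.Add("Col" + i, typeof(string));

        foreach (string line in File.ReadLines(path))
        {
            // Naive split: no quoted-field handling in this sketch.
            string[] fields = line.Split(',');
            DataRow row = table.NewRow();
            for (int i = 0; i < fields.Length && i < maxColumns; i++)
                row[i] = fields[i];
            table.Rows.Add(row);
        }

        // Bulk-load the shredded rows into the wide staging table.
        using (var bulk = new SqlBulkCopy(connectionString))
        {
            bulk.DestinationTableName = "dbo.RawRows"; // placeholder
            bulk.WriteToServer(table);
        }
    }
}
```

Downstream SQL can then decide, per row, which of the columns are meaningful.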
You could make the connection manager push all the data into 1 column. Then use a script transformation component to parse out the data to the output, depending on the number of fields in the row.
You can split the data based on a delimiter into, say, a string array (I googled for help when I needed to do this). The size of the array then tells you what type of file you have connected to.
Then, your mapping to the destination can remain the same. No need to duplicate any components either.
I had to do something similar myself once: although the files I was using were meant to always be in the same format, it could change depending on the version of the system sending the file, and handling it in a script transformation this way let me absorb the minor variations in the file format. If the files are 99% the same, that is fine; if they were radically different, you would be better off with separate file connection managers.
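As an illustration, here is a minimal sketch of the core parsing logic you would put inside the Script Component's ProcessInputRow override, written as a plain method so it can be tested on its own (the 4-vs-5 field counts and the idea of ignoring the extra trailing field are assumptions for this example):

```csharp
using System;

static class RowParser
{
    // Splits a raw single-column line and maps it to a fixed set of
    // output fields. A 5-field row is treated as the extended format
    // whose extra trailing field is ignored; anything else is rejected.
    public static string[] Parse(string rawLine)
    {
        string[] fields = rawLine.Split(',');

        switch (fields.Length)
        {
            case 4: // original format: use as-is
            case 5: // extended format: ignore the 5th field
                return new[] { fields[0], fields[1], fields[2], fields[3] };
            default:
                throw new FormatException(
                    "Unexpected field count: " + fields.Length);
        }
    }

    static void Main()
    {
        // Both rows map to the same four output columns.
        Console.WriteLine(string.Join("|", Parse("2023-01-01,42.50,Widget,Acme")));
        Console.WriteLine(string.Join("|", Parse("2023-01-01,42.50,Widget,Acme,ignored")));
    }
}
```

Because the decision is made per row in code, the destination mapping in the data flow never changes.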
This question is a purely organizational one about SSIS project best practices for medium-sized imports.
So I have a source database which is continuously being enriched with new data. Then I have a staging database into which I sometimes load the data from the source database, so I can work on a copy of the source database and migrate the current system. I am actually using an SSIS Visual Studio project to import this data.
My issue is that I have realised the actual design of my project is not really optimal, and now I would like to move this project to SQL Server so I can schedule the import instead of running the Visual Studio project manually. That means the actual project needs to be cleaned up and optimized.
So basically, for each table, the process is simple: truncate table, extract from source and load into destination. And I have about 200 tables. Extractions cannot be parallelized as the source database only accepts one connection at a time. So how would you design such a project?
I read in the Microsoft documentation that they recommend using one Data Flow per package, but managing 200 different packages seems quite impractical, especially since I will have to chain them for the scheduled import. On the other hand, a single package with 200 Data Flows seems unmanageable too...
Edit 21/11:
The first approach I wanted to use when starting this project was to extract my tables automatically by iterating over a list of table names. This could have worked out well if my source and destination tables had all had the same schema object names, but since the source and destination databases are from different vendors (BTrieve and Oracle), they also have different naming restrictions. For example, BTrieve does not reserve names and allows names longer than 30 characters, which Oracle does not. So that is how I ended up manually creating 200 data flows with semi-automatic column mapping (most were automatic).
When generating the CREATE TABLE queries for the destination database, I created a reusable C# library containing the methods to generate the new schema object names, just in case the methodology could be automated. If there were a custom package-generation tool that could use an external .NET library, then this might do the trick.
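As an illustration, here is a minimal sketch of the kind of method such a library might expose; the 30-character limit is Oracle's classic identifier cap, but the reserved-word list and the suffixing scheme shown here are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;

static class OracleNameMapper
{
    // Tiny illustrative subset; a real library would carry the full list.
    static readonly HashSet<string> Reserved =
        new HashSet<string> { "DATE", "LEVEL", "SIZE", "USER" };

    // Maps a BTrieve object name onto a legal Oracle identifier:
    // uppercase, reserved words suffixed, truncated to 30 characters.
    public static string ToOracleName(string sourceName)
    {
        string name = sourceName.Trim().Replace(' ', '_').ToUpperInvariant();

        if (Reserved.Contains(name))
            name += "_COL";

        const int oracleMaxLength = 30;
        if (name.Length > oracleMaxLength)
            name = name.Substring(0, oracleMaxLength);

        return name;
    }

    static void Main()
    {
        Console.WriteLine(ToOracleName("date"));                 // DATE_COL
        Console.WriteLine(ToOracleName("a very long btrieve column name")); // truncated
    }
}
```

A real implementation would also need to de-duplicate names that collide after truncation.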
Have you looked into BIDS Helper's BIML (Business Intelligence Markup Language) as a package generation tool? I've used it to create multiple packages that all follow the same basic truncate-extract-load pattern. If you need slightly more cleverness than what's built into BIML, there's BimlScript, which adds the ability to embed C# code into the processing.
From your problem description, I believe you'd be able to write one BIML file and have that generate two hundred individual packages. You could probably use it to generate one package with two hundred data flow tasks, but I've never tried pushing SSIS that hard.
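For a flavor of what that looks like, here is a minimal BimlScript sketch that stamps out one truncate-extract-load package per table; the hard-coded table list and the connection names Source and Destination are placeholders (a real script would query the source's metadata, and the connections would be defined elsewhere in the Biml project):

```xml
<#@ template language="C#" #>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Packages>
        <# foreach (var table in new[] { "Sales", "Employees", "Addresses" }) { #>
        <Package Name="Load_<#=table#>" ConstraintMode="Linear">
            <Tasks>
                <ExecuteSQL Name="Truncate <#=table#>" ConnectionName="Destination">
                    <DirectInput>TRUNCATE TABLE dbo.<#=table#>;</DirectInput>
                </ExecuteSQL>
                <Dataflow Name="Load <#=table#>">
                    <Transformations>
                        <OleDbSource Name="Source" ConnectionName="Source">
                            <DirectInput>SELECT * FROM dbo.<#=table#>;</DirectInput>
                        </OleDbSource>
                        <OleDbDestination Name="Destination" ConnectionName="Destination">
                            <ExternalTableOutput Table="dbo.<#=table#>" />
                        </OleDbDestination>
                    </Transformations>
                </Dataflow>
            </Tasks>
        </Package>
        <# } #>
    </Packages>
</Biml>
```

Swapping the hard-coded array for a metadata query is what takes this from 3 packages to 200 with no extra effort.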
You can basically create 10 child packages, each having 20 Data Flow Tasks, and create a master package which triggers these child packages. Using parent-to-child configuration, create a single XML configuration file. Define precedence constraints in the master package so that the child packages execute serially. This way, maintainability will be better compared to having 200 packages or a single package with 200 Data Flow Tasks.
The following link may be useful to you.
Single SSIS Package for Staging Process
Hope this helps!