I'm using the Foreach Loop Container in my SSIS job, and I have a folder where we add new files every day, each file name containing a name + date.
I need the package to run using the oldest file first; after that we load the next one.
Can we change this bulk insert in SSIS jobs?
FYI: I used variables to load them (FolderPath and FolderName), but the files all get processed at the same time; I'd prefer a solution other than using a script.
Your question is confusing, but I'll try to answer.
If you don't want to use the Script Component, I suggest the following:
1. Using a Foreach Loop Container (folder as source), read the file names into a two-column table: FileName (full path) and the date part parsed from the file name.
2. Using another Foreach Loop Container (ADO object as source), select from that table in the desired order to fill the ADO object.
3. Load the file that matches each path.
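If a little code turns out to be acceptable just for extracting the date part in step 1, a minimal Script Task sketch could feed the second column (the file-name pattern here is an assumption):

using System;
using System.Globalization;
using System.IO;

// Hypothetical helper: parse "name_20150625.csv"-style names so the
// (FileName, DatePart) row can be written to the ordering table.
static DateTime GetFileDate(string fullPath)
{
    string name = Path.GetFileNameWithoutExtension(fullPath); // "name_20150625"
    string stamp = name.Substring(name.LastIndexOf('_') + 1); // "20150625"
    return DateTime.ParseExact(stamp, "yyyyMMdd", CultureInfo.InvariantCulture);
}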
I have an issue regarding the deletion of files based on a selection.
So, I have an SQL table in which there is a column with the file name.
What I have to do is take the file name from there and delete the physical file with that name from a specified path.
What I have tried: an Execute SQL Task that selects the full file paths (including file names) of the files to be deleted and stores them in an Object variable, followed by a Foreach Loop Container with a From Variable enumerator that takes those results and deletes the files using a File System Task. But I don't think that's the right approach.
The package execution ends in success, but it stops at the Foreach Loop Container without ever reaching (executing) the File System Task, so the files are not deleted.
Select statement (I also added the full path of the file as a column so the File System Task knows where the physical file is stored):
select file_full_path from t1
where flag = 'YES'
Do you guys know a solution to this?
What you can do is set the loop to the Foreach ADO Enumerator, so your process would be:
1: SQL Task - map file_full_path to an object variable, say 'filesdel'
2: Foreach Loop set to Foreach ADO Enumerator
3: Map the loop items to a string variable, say 'filename'.
4: Use the string variable in the File System Task
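If the File System Task still never fires, the same loop can be collapsed into a single Script Task for debugging. A sketch, assuming the Execute SQL Task used an OLE DB connection (so 'filesdel' holds a COM recordset), placed in the Script Task's Main:

using System.Data;
using System.Data.OleDb;
using System.IO;

// Shred the recordset stored in the object variable into a DataTable.
DataTable files = new DataTable();
new OleDbDataAdapter().Fill(files, Dts.Variables["User::filesdel"].Value);

foreach (DataRow row in files.Rows)
{
    string path = row["file_full_path"].ToString();
    if (File.Exists(path))   // skip rows whose file is already gone
        File.Delete(path);
}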
I'm trying to get all the file names from a folder directory along with their row counts. (Also file size in bytes if possible) I am using Microsoft Visual Studio 2010 Shell. Here's what I've done so far:
I have created a Foreach Loop Container, set the Enumerator to Foreach File Enumerator and Expressions to a variable to the folder I want to loop over. I left the Files section with *.* and asked to retrieve Name Only. I have changed the Variable Mappings to a New Variable called FullFilePath, Container is Package, Value type is String and Value: is blank.
I then added a Data Flow to the Loop. Added a flat file source, row count, and OLE DB Destination. I changed the Flat file Source properties expression to the same Folder Variable in the Foreach Loop Container Expression. I added the Variable RecordCount to the Row Count function (Int32, value 0). The OLE DB Destination creates a new table with the name OLE DB Destination.
The next step is an Execute SQL Task that does an INSERT INTO dbo.FileData (FileName, RowCount) VALUES (?,?). I set 2 parameter mappings: 1) the variable from the Foreach Loop Container, FullFilePath, with data type VarChar; 2) the variable from the Row Count, RecordCount, with data type Long.
I then have another Execute SQL Task that drops the table created by the Data Flow Task. The problem is that with all of these steps the package still does not complete. It actually gets hung up and fails on the pre-execute. It says:
Warning: Access is denied. Error: Cannot open the datafile 'FullFilePath' Error: Flat File Source failed the pre-execute phase and returned error code 0xC020200E.
Anything you see I could be doing wrong? Let me know if pictures would help.
So I finally figured this out. In order to loop over all of the files with varying headers and column counts, I unselected "File contains headers" in the Flat File Source. Doing this gave all the files the same first column, which by default is Column 0 (the first column in all of my files is some sort of numeric field or ID). I was able to map this through the Row Count and insert into a SQL table. Then I was able to finish the Foreach Loop and write the file name and row count into another SQL table to record the counts. It is, however, taking a really long time: it has been running for over 14 hours and has only counted through 13 files. Granted, some files are 250K+ rows, but I wouldn't think it would take this long.
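For what it's worth, if the data flow stays that slow, a Script Task inside the loop could produce the same numbers directly (plus the file size in bytes that was asked about). A sketch, assuming one record per line:

using System.IO;
using System.Linq;

// Count lines without a data flow; subtract any header lines if present.
string path = Dts.Variables["User::FullFilePath"].Value.ToString();
long rows = File.ReadLines(path).LongCount();   // row count
long bytes = new FileInfo(path).Length;         // file size in bytes
Dts.Variables["User::RecordCount"].Value = (int)rows; // RecordCount is Int32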
I have files in a folder as below:
text_20150625_142434.csv (text_yyyymmdd_hhmmss)
text_20150626_184023.csv
text_20150623_174312.csv
temp_20150419_203908.csv
Here I want to load the data from files whose names begin with the string "text", and I also want to sort those files (only the ones starting with "text") by the date and time in the file name. After that I have to loop through the sorted files with a Foreach Loop Container and load each file's data into the destination.
The Foreach Loop Container does not actually sort the files by date and time; is there any way to sort them?
You can do this with custom C# scripting.
Start by defining your own custom sort: create a C# script that lists the files, sorts them by your own method, and then loads the sorted list into a collection data type or an ADO recordset (as an SSIS "Object" type variable).
Then you can use a Foreach ADO Enumerator or Foreach From Variable Enumerator (similar to what is done here and here) to iterate through each item in the collection, each time setting the source file name from the string variable (or defining your connection to use a variable that the Foreach Loop updates on each pass), processing the file, and so on.
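A minimal sketch of that script inside a Script Task (the variable names and the text_yyyyMMdd_HHmmss pattern are assumptions based on the question):

using System;
using System.Globalization;
using System.IO;
using System.Linq;

string folder = Dts.Variables["User::FolderPath"].Value.ToString();

// List only the "text" files and sort them by the timestamp in the name.
string[] sorted = Directory.GetFiles(folder, "text_*.csv")
    .OrderBy(f => DateTime.ParseExact(
        Path.GetFileNameWithoutExtension(f).Substring(5), // drop "text_"
        "yyyyMMdd_HHmmss",
        CultureInfo.InvariantCulture))
    .ToArray();

// Hand the ordered list to a Foreach From Variable enumerator.
Dts.Variables["User::SortedFiles"].Value = sorted;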
I have a complex task that I need to complete. It worked well before, since there was only one file, but this is now changing. Each file has one long row that is first bulk inserted into a staging table. From there I'm supposed to save the file name into another table and then insert the broken-up parts of the staging table data. This is not the problem. We might have just one file or multiple files to load at once. What needs to happen is this:
The first SSIS task is a script task that does some checks. The second task prepares the file list.
The staging table is truncated.
The third task is currently a Foreach loop container task that takes each file from the file list and processes it:
File is loaded into table using Bulk Insert task.
The file name needs to be passed as a variable to the next process. This was done with a C# task before but it is now a bit more complex since there could be more than one file and each file name needs to be saved separately.
The last task is a SQL task that executes a stored procedure with the file name as input variable.
My problem is that before it was only one file. This was easy enough. What would the best way be to go about it now?
In the Data Flow Task which imports your file, create a derived column. Populate it with the file name from the system variable. Load the file name into the same table.
Use an Execute SQL Task to retrieve the distinct list of file names into a recordset (Object type variable).
Use a For Each Loop container to loop through the recordset. Place your code inside the container. The code will receive the file name from the loop as the value of a variable and process the file.
Use an Execute SQL Task in the For Each Loop container to call the SP. Pass the file name as a parameter, like:
Exec sp_MyCode param1, param2, ?
where ? will pass the file name as a string INPUT parameter.
EDIT
To make the Flat File Connection pick up the file specified by a variable, use the ConnectionString property of the Flat File Connection:
Select the FF Connection, right-click, and select Properties.
Click the empty field next to Expressions, then click the ellipsis that appears. With Expressions you can define every property of the object listed there using variables; many objects in SSIS can have Expressions specified.
Add an Expression, select the ConnectionString property, and define an expression with the absolute path to the file (just to be on the safe side; it can be a UNC path too).
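For example, the expression can simply be the string variable the loop populates on each iteration, such as @[User::FileName] (variable name assumed).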
All the above can be accomplished using C# code in the script task itself. You can loop through all the files one by one, and for each file:
1. Bulk Copy the data to the staging
2. Insert the filename to the other table
You can modify the logic as per your requirement and desired execution flow.
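A sketch of that loop (connection string, folder, and table names are all assumptions; note that BULK INSERT resolves the file path on the SQL Server, so the files must be visible to it):

using System.Data.SqlClient;
using System.IO;

string connStr = Dts.Variables["User::ConnStr"].Value.ToString();

foreach (string file in Directory.GetFiles(@"C:\InputFolder", "*.csv"))
{
    using (SqlConnection conn = new SqlConnection(connStr))
    {
        conn.Open();

        // 1. Bulk load the file into the staging table.
        using (SqlCommand bulk = new SqlCommand(
            "BULK INSERT dbo.Staging FROM '" + file + "' WITH (FIELDTERMINATOR = ',')", conn))
        {
            bulk.ExecuteNonQuery();
        }

        // 2. Save the file name to the other table.
        using (SqlCommand track = new SqlCommand(
            "INSERT INTO dbo.LoadedFiles (FileName) VALUES (@f)", conn))
        {
            track.Parameters.AddWithValue("@f", Path.GetFileName(file));
            track.ExecuteNonQuery();
        }
    }
}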
Add a column to your staging table - FileName.
Capture the file name in an SSIS variable (using expressions), then run something like this on each loop iteration:
UPDATE StagingTable SET FileName=? WHERE FileName IS NULL
Why are you messing about with C#? From your description it's totally unnecessary.
I have a source folder which contains 4 CSV files, with a different number of columns in each file. I need to fetch only 3 columns from each CSV (the metadata for these 3 columns is the same across all 4 files) and load those columns into a Raw Destination from all the files available in the source folder. The Raw Destination output file name has to be the input file name we are fetching + a time stamp.
At the next level, I need to read this output RAW file through a Raw Source and insert the records into an OLE DB Destination, and the destination table also has to be dynamic.
For example, I have 4 CSV files: test1.csv (10 columns), test2.csv (8), test3.csv (6), and test4.csv (10), along with time stamps.
All 4 files have the columns position_id, asofdate, and sumassured in common, and I want to load only these 3 columns to the Raw Destination. If I load test1.csv, then my Raw Destination output file name has to be RW_test1_20120119_222222.RW; similarly, if I load the second file, its file name becomes the Raw Destination output.
Thanks
Satish
As always, decompose your problem until you've got something you can manage.
Processing CSVs via queries
Following the two questions and answers below will result in a package with an OLEDB Connection Manager configured to operate on CSVs in the folder @[User::InputFolder]. Three variables, CurrentFileName, InputFolder, and Query, have been defined, with an expression set on Query.
The expression for your @[User::Query] would look like "SELECT position_id, asofdate, sumassured FROM " + @[User::CurrentFileName]
Reference answers
SSIS FlatFile Access via Jet
SSIS Task for inconsistent column count import?
At this point, verify that you can correctly enumerate all of the CSVs in the folder and that the OLEDB query piece works.
RAW files
I'm not an expert on RAW file usage, so there may be better ways of interacting with them. This will use the fourth variable, RawFileName. Set an expression on it like @[User::InputFolder] + "RawFile.raw", which would result in the file being written to C:\ssisdata\so\satishkumar\RawFile.raw.
My general approach is to have a data flow with a script component that sends no rows into a RAW File Destination.
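In practice that is just a Script Component configured as a source, with the output columns defined to match your three fields and a row-creation method that never adds a row:

// The RAW destination still writes the file's column layout, but with
// zero data rows, because no rows are ever created here.
public override void CreateNewOutputRows()
{
    // Intentionally empty.
}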
Configure your destination as
Access mode: File name from variable
Variable name: User::RawFileName
Write option: Create Always
Process CSVs
The concept here is to append all the data into the RAW file that was created in the initial step.
Your source should already be configured as
OLE DB connection manager: FlatFile
Data access mode: SQL command from variable
Variable name: User::Query
Configure your destination as
Access mode: File name from variable
Variable name: User::RawFileName
Write option: Append
Extract from RAW
At this point, the foreach enumerator has completed and all the data has been loaded into the staging file. Now it is time to consume that and send data on to the destination.
Drag a Raw File Source onto your data flow. Unsurprisingly, you will configure it as
Access mode: File name from variable
Variable name: User::RawFileName
Instead of the Simulate destination placeholder, wire it up to the correct data destination.
Caveat
Be careful when using an expression with GETDATE/GETUTCDATE to define file names, as they are constantly re-evaluated. In 2005, we used FileName_HHMMSS and had issues because processing didn't complete within the same second between the creation of a file and the next task that consumed it. Instead, I have had better success using a dynamic but fixed starting point; generally, that is the system variable StartTime, @[System::StartTime].
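For example, "RawFile_" + (DT_WSTR, 4)YEAR(@[System::StartTime]) + RIGHT("0" + (DT_WSTR, 2)MONTH(@[System::StartTime]), 2) + RIGHT("0" + (DT_WSTR, 2)DAY(@[System::StartTime]), 2) + ".raw" produces the same name every time it is evaluated within a single execution (a sketch; adjust to your own naming convention).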
You can use a Foreach Loop Container on the Control Flow to iterate over .txt and .csv files.