I have around 500 SSIS packages. I want to get the list of SSIS packages where a linked server is used. The reason we need this is that we are now removing the linked server. I don't want to open every SSIS package and check every task to see whether a linked server is used.
Is there any way we can do this?
If you don't want to use PowerShell, I use a tool called FnR.EXE, which you can google and download. Again, you just search through the XML, which is just a text file. If you know the names of your linked servers, that's good. If you don't know the names, you'll have to search for something of the form %.%.%.% (the four-part naming used with linked servers). It would be much more reliable to know all the linked server names (it should be quicker to work that out than to go through all of your packages).
You also need to consider whether your package uses a view which in turn references a linked server; in that case the linked server name won't actually appear in the package.
This isn't really an answer, but it's too long for a comment. You could simply search for the given text in the SSIS packages; they are nothing more than XML files.
You could, for example, use PowerShell:
Get-ChildItem -recurse | Select-String -pattern "YOUR_LINKED_SERVER" | group path | select name
This will at least give you a list of packages that reference the linked server. Then, depending on where your linked server is used, you might want to do the following (a rough sketch of the search and the replace follows after this list):
If it's in SQL strings, just replace the linked server name with an empty string (with PowerShell or something else).
If it's in other components, you might want to look into the Microsoft.SqlServer.Dts.Runtime namespace and write either a PowerShell script or a .NET app and alter the files from code.
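For illustration, here is a slightly fuller sketch of the same PowerShell idea. LinkedServerA, LinkedServerB, and C:\SSIS\Packages are placeholders for your actual linked server names and package folder:

# List every .dtsx package that mentions any of the known linked server names
$linkedServers = @('LinkedServerA', 'LinkedServerB')
$pattern = ($linkedServers | ForEach-Object { [regex]::Escape($_) }) -join '|'
Get-ChildItem -Path 'C:\SSIS\Packages' -Filter *.dtsx -Recurse |
    Select-String -Pattern $pattern |
    Group-Object Path |
    Select-Object Name, Count

# Optional: strip a four-part "LinkedServerA." prefix out of the package XML
# (back up the packages first)
Get-ChildItem -Path 'C:\SSIS\Packages' -Filter *.dtsx -Recurse | ForEach-Object {
    (Get-Content $_.FullName -Raw) -replace 'LinkedServerA\.', '' |
        Set-Content $_.FullName
}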
Since the Excel source has constant problems with truncating either numbers or text (I can't get it to work properly with mixed data in one column), I figured the Power Query source would be the answer.
I managed to import one file.
Now I'm trying to iterate over all the files in the folder.
The problem is in the Description of the connection manager: can I somehow use wildcards for all files? Otherwise it crashes with an error about incorrect credentials.
As for the connection manager itself, that's no problem, as I can use expressions with variables.
As far as I know, the Power Query source is still in preview and is very limited compared with all the functions available in, for example, the Power BI Desktop version.
In your case, build the query using Power BI Desktop: select New Source > From Folder, do the transformations, and then copy and paste the resulting code into the SSIS Power Query source. That way you don't have to resort to using wildcards in the SSIS flow to iterate over files in the same folder.
Using Microsoft Visual Studio Community 2015.
Goal of project
- create the "*\temp\email" directory
- start a program to extract all emails that include xls attachments to the previously created folder
- use a foreach loop to cycle through each file in the folder, process it, and shift it to a SQL table
The problem I am running into is caused either by a blank Excel document (which is occasionally sent from a remote location) or by some of the original xls reports containing only 5 columns instead of the 6 that I have mapped now. Is there any way to separate the files that include the correct columns from those that do not match?
** As long as these two problems do not exist, I can run the SSIS package and everything runs without issue.
Control flow:
File System Task (creates directory) ---> Execute Process Task (xls extraction) --> ForEach Loop (Data Flow Task "email2Sql")
Data Flow:
Excel Source (ExcelFilePath set by an expression to the user variable filepath), DelayValidation == true
(Columns are initially set to F1-F6 and are mapped to, for example, a, b, c, d, e, f. The older files that get mixed in only include a, b, c, d, e.) This is where I want to be able to separate the xls files.
Conditional Split transformation (column names are not in row 1; this helps remove "null" values)
OLE DB Destination (SQL table)
Sorry for the amount of reading, but for a first post I tried to include anything that I thought might be relevant.
There are some tools out there that would allow you to open the Excel document and read it (a rough sketch of that approach appears after the steps below). However, I think the simplest thing to do would be to use SSIS out of the box:
1 - Add a File System Task after the data flow that reads the file.
2 - Make the precedence constraint from the data flow to the File System Task "Failure". This will cause it to fire only when the data flow task fails.
3 - Set the File System Task to move the "bad" files to another folder.
This will allow you to loop through all the files and move the failed ones. Ultimately, the package will end in failure. If you don't want that behavior, you can change the ForceExecutionResult property to Success. However, it might be good to know that there were problems with some files so that they can be addressed.
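If you'd rather pre-screen the workbooks (the "tools" route mentioned above) instead of letting the data flow fail, here is a rough PowerShell sketch using the Excel COM object. It assumes Excel is installed on the machine running the package; the folder paths and the expected column count of 6 are placeholders:

# Move any .xls file whose first sheet does not have the expected 6 columns
# (a blank workbook will also fail this check and be moved)
$source = 'C:\temp\email'
$badFolder = 'C:\temp\email\bad'
New-Item -ItemType Directory -Path $badFolder -Force | Out-Null
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false
$excel.DisplayAlerts = $false
Get-ChildItem -Path $source -Filter *.xls | ForEach-Object {
    $workbook = $excel.Workbooks.Open($_.FullName, 0, $true)   # open read-only
    $columnCount = $workbook.Worksheets.Item(1).UsedRange.Columns.Count
    $workbook.Close($false)
    if ($columnCount -ne 6) {
        Move-Item -Path $_.FullName -Destination $badFolder
    }
}
$excel.Quit()

Something like this could run from an Execute Process Task before the ForEach Loop, so only well-formed files ever reach the Excel Source.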
I'm using Talend Open Studio for Big Data and I have a job where I use tFileInputDelimited to load a CSV file and use it as a lookup with a tMap.
Currently the file is loaded from the disk using an absolute path (C:\work\jobs\lookup.csv) and everything works fine locally.
The issue is that when I deploy the task, it obviously doesn't take the lookup.csv file with it.
Which raises a question:
Is there any way to "bundle" this file (lookup.csv) into the job so I can later deploy them together?
With static data such as this, your best bet is to hard-code the data into the job using a tFixedFlowInput instead.
As an example, if you want to use a list of country names and their ISO2 and ISO3 codes, you might have these in a CSV that you'd normally access with a tFileInputDelimited. However, to save bundling this CSV with every build (which could be done with Ant/Maven), you can just hard-code this data into a tFixedFlowInput.
You then just need to make sure your schema is set up the same as your delimited file's would have been (so in this case we have 3 columns: Country_Name, ISO2 and ISO3).
I developed an SSIS package that creates several .txt files. These files are zipped, and then the .txt files need to be removed. Using a foreach file enumerator, I loop through all the .txt files in a specific folder. The folder is retrieved from a variable in the configuration and looks something like: C:\Folder\
The foreach loop uses *.txt to gather all .txt files, does not traverse subfolders, and uses the fully qualified name.
In the Variable Mappings, the "FileName" variable gets filled at index 0.
Within the foreach loop I use a File System Task.
This task removes the .txt files that were generated before, using the FileName variable that is filled in the loop.
On the development machine this runs like a charm: all green, no problem at all. Then I copied the package and the configuration file to the test environment. A basic version without the file removal had been running perfectly fine there. I replaced the package. Nothing big.
Now I run the SQL Server Agent job and it starts running. I can see all the text files appearing, and disappearing after the zip files are created. However, when all files are removed, the package ends with errors, namely the error shown in the title.
I tried looking for the connection manager that might have been removed.
I looked for connection managers named in the config that don't exist in the package.
No such thing was found. The annoying part is that the package is fully functional but still ends with the error.
EDIT: I noticed that if I run the package using the Execute Package Utility with the dev config, it gives the same errors.
Hopefully someone is able to help me out.
Thanks in advance!
I managed to "fix" the issue: remove the File System Task responsible for deleting the files, then add it back and configure it again.
I think this happens if you accidentally change the General parameters before changing the Operation parameter. The task keeps metadata for parameters that are no longer relevant and, upon execution, effectively says: "Wait, you defined this parameter, but I don't need it; I'm checking for it anyway, and it's not there!"
It's a bug for sure.
Is there an iSeries command to export the data in a table to CSV format?
I know about the Windows utilities, but since this needs to be run automatically I need to run this from a CL program.
You can use CPYTOIMPF and specify the TOSTMF option to place a CSV file on the IFS.
Example:
CPYTOIMPF FROMFILE(DBFILE) TOSTMF('/outputfile.csv') STMFCODPAG(*PCASCII) RCDDLM(*CRLF)
If you want the data to be downloaded directly to a PC, you can use the "Data Transfer from iSeries" function of IBM iSeries Client Access to create a .CSV file. In the file output details dialog, set the file type to Comma Separated Variable (CSV).
You can save the transfer description to be reused later.
You could use a trigger. The iSeries Client Access software won't do, since that is a Windows application; what I understand is that you need the data to be exported each time the file is written. Check this link to learn more about triggers.
You are going to need FTP to perform that action.
If your iSeries shop uses ZMOD/FTP, your shortest solution is a few lines of code away -- three lines, to be exact: start FTP, put the DBF, and finally end FTP.
If you don't use ZMOD/FTP:
- You could use native FTP/400 to accomplish what you need to do, but it is quite involved.
- You will probably need an RPGLE program to parse, format, and move the data into a "flat file", then use native FTP/400 to FTP the file out.
- And yes, a CL program will be needed as a wrapper!
You can do it all in one very simple CL program (a rough sketch follows below):
CPYTOIMPF the file TOSTMF -> the CSV file will be in the IFS
FTP the file elsewhere (to a server or a PC)
It works like a charm
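As a rough, untested outline of such a CL program (library, file, path, and remote system names are placeholders; the FTP subcommands USER, PASS, PUT, and QUIT would sit in the source member pointed to by the INPUT override):

PGM
  /* 1. Export the database file to a CSV stream file on the IFS */
  CPYTOIMPF  FROMFILE(MYLIB/DBFILE) TOSTMF('/home/export/outputfile.csv') +
             STMFCODPAG(*PCASCII) RCDDLM(*CRLF)
  /* 2. Point the FTP client's input and output at a script member and a log member */
  OVRDBF     FILE(INPUT)  TOFILE(MYLIB/QFTPSRC) MBR(PUTCSV)
  OVRDBF     FILE(OUTPUT) TOFILE(MYLIB/QFTPLOG) MBR(PUTCSV)
  /* 3. Run the FTP session against the target system */
  FTP        RMTSYS('target.example.com')
  DLTOVR     FILE(INPUT OUTPUT)
ENDPGM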