Pipeline Out in a MAP step doesn't always contain the output (reference doc) of the flow service - webMethods

Using the webMethods 9.7 Designer, when I create a flow service with a MAP step, the output pipeline of the MAP step doesn't always contain the output of the flow service.
My questions are:
Is this a Designer bug, or something I have not understood (I assume it's the latter, but I can't see what)?
How can I add my reference doc (already declared as an output reference) to the Pipeline Out of the MAP step?

Designer only shows the service's output arguments on the right Pipeline Out side of the very last step of your flow service, and it only does this as a hint that you need to create them (by mapping something to them) if they don't already exist. Move your MAP step to be the last step in your flow service to see what I mean.
You can declare any variables you want in a map step's Pipeline Out, and then map values as required from the left Pipeline In side to the right Pipeline Out side.
The easiest way to get a variable with the same name and type as the service's output argument is to copy (CTRL-C) SiebelMessage from the Input/Output tab and paste (CTRL-V) it into the right Pipeline Out side of the step in which you want to create it. You will then need to either map values to it or set values on it: copying it into the Pipeline Out of a MAP step doesn't create the variable, it just creates a placeholder in the Designer UI into which you need to map or set values before it actually exists in the pipeline.
The long way is to manually create a variable with the same name and type as the service's output argument. Right-click on the right Pipeline Out side of the step's pipeline (or click in the Pipeline Out area to give it focus and then choose the Insert a Variable action on the pipeline toolbar), insert a new document reference variable named SiebelMessage, and choose the same document reference that you used when you created the service's output argument.

Related

What is the best way to parse CSV data in a logic app without using a custom connector?

I have an SFTP trigger in a logic app which fires when a file is added to a certain file area. It is a CSV-formatted file and I want the rows to be parsed and converted into JSON. What is the best way to convert CSV data into JSON without using any custom connectors?
I cannot find any built-in connector that does this job, and as far as I know there are no Logic Apps functions that do it either.
Right now, there is no connector/action in Logic Apps that provides an out-of-the-box solution for your requirement. You could loop through the array and perform the transformation yourself, but I would not suggest using the loop and variables actions, as they take time and will cost you more.
The alternative is to use the Inline Code (JavaScript) action to do the transformation. Please note that you will need an Integration Account to run inline code.
Please refer to the JavaScript code and modify it as needed. I have used '_' to differentiate the nested objects. For more details you can refer to the previous discussion here.
For complex transformations you can offload this functionality to an Azure Function, write your code in one of the supported languages, and call the Azure Function from the logic app.
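The inline JavaScript itself was not included above, so here is a minimal sketch of what such an Inline Code action could look like, assuming the CSV text comes from a prior action named Get_blob_content_(V2) and the file is a plain comma-separated table with a header row (no quoted fields or nested objects). Adjust the action name and the parsing to match your flow.

```javascript
// Minimal Inline Code sketch: convert CSV text into a JSON array of objects.
// Assumption: the CSV body is produced by an earlier action named 'Get_blob_content_(V2)'.
const csv = workflowContext.actions['Get_blob_content_(V2)'].outputs.body;

// Split into lines and drop empty trailing lines.
const lines = csv.split(/\r?\n/).filter(line => line.length > 0);

// The first line holds the field names; the remaining lines hold the values.
const fieldNames = lines[0].split(',');

const rows = lines.slice(1).map(line => {
  const values = line.split(',');
  const row = {};
  fieldNames.forEach((name, index) => {
    row[name] = values[index];
  });
  return row;
});

// The returned value becomes the output of the Inline Code action.
return rows;
```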
1. Created a logic app as shown below.
2. Created a container in a storage account and uploaded a CSV file to the container.
3. Used a Compose action to split the contents of the CSV file on every new line into an array.
a. Here is the expression used in the SplitLines Compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
b. See the Microsoft documentation on writing workflow expressions.
4. Removed the last (empty) line from the previous output using another Compose action (RemoveLastLine), as shown below:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
5. Separated the field names using a Compose action (SplitFieldName):
split(first(outputs('SplitLines')), ',')
6. Formed the JSON using a Select action as shown below:
From: `skip(outputs('RemoveLastLine'), 1)`
Map:
`outputs('SplitFieldName')[0]` → `split(item(), ',')?[0]`
`outputs('SplitFieldName')[1]` → `split(item(), ',')?[1]`
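To make the steps above concrete, here is a small stand-alone JavaScript sketch that mirrors what the SplitLines, RemoveLastLine, SplitFieldName, and Select steps do. The two-column CSV content is purely hypothetical and used only for illustration.

```javascript
// Hypothetical CSV content, standing in for the output of 'Get blob content (V2)'.
const csv = "Name,Age\r\nAlice,30\r\nBob,25\r\n";

// SplitLines: split(body('Get_blob_content_(V2)'), decodeUriComponent('%0D%0A'))
const splitLines = csv.split("\r\n");            // ["Name,Age", "Alice,30", "Bob,25", ""]

// RemoveLastLine: take(outputs('SplitLines'), add(length(outputs('SplitLines')), -1))
const removeLastLine = splitLines.slice(0, -1);  // drops the trailing empty entry

// SplitFieldName: split(first(outputs('SplitLines')), ',')
const splitFieldName = removeLastLine[0].split(",");

// Select: From = skip(outputs('RemoveLastLine'), 1), Map as in step 6
const json = removeLastLine.slice(1).map(item => ({
  [splitFieldName[0]]: item.split(",")[0],
  [splitFieldName[1]]: item.split(",")[1],
}));

console.log(JSON.stringify(json));
// [{"Name":"Alice","Age":"30"},{"Name":"Bob","Age":"25"}]
```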
Tested the logic app and it runs successfully: the CSV content is parsed and formatted as JSON.
Reference: Use data operations in Power Automate (contains video) - Power Automate | Microsoft Docs
Credit: @Iason Koulas

Download files from Object Storage in a visual app in Oracle Visual Builder: how to use data from an ATP database to dynamically get the file names

I am trying to download specific files from the Object Storage in the Oracle Cloud in my Oracle Visual Builder App (my visual builder is inside OIC, Oracle Integration Cloud).
I'd like to use the value of the "File URL" column (see the picture above) as the name of the file to download from Object Storage, but this file name should be different for every download button (again in the picture above, you can see that every download button should download the file whose name is the value in the "File URL" column). The "File URL" column is a field of a business object that is linked to the SDP variable, and the data comes from an ATP database inside Oracle Cloud Infrastructure. The "First File" column contains the download buttons. In the properties of this button there is an ojAction event which is linked to an action chain (see the picture below).
I followed this guide (the "Download from OCI Storage" section) to download one file, but I mapped the "filename" input parameter to a fixed value (the name of an existing file in the object storage). Now I'd like to make the filename value dynamic, but I don't know how to create a variable that gathers all the values of the specific column (File URL) in the DB, nor how to pass a single value from this column to the filename parameter. I have tried to create an SDP-type variable that gets only the File URL values, but it does not return the file names. Do you have suggestions, or have you seen a guide that might help solve this issue?
If I understood the problem correctly, you can't retrieve the specific file URL linked to the row button the user clicks in order to download the right file.
If that's the case, then instead of using a button you could use the "first-selected-row" event linked to the table itself; that will pass all the values of the selected row as a parameter and will allow you to map "$variables.rowData.fileURL" (or whatever the field is called) to the REST call.

Totally new to Talend ESB

I'm completely brand new to Talend ESB (not so much Talend for data integration, but totally new to ESB).
That being said, I'm trying to build a simple route that watches a specific file path and gets the filename of any file dropped into it. Then it should pass that filename to the child job (cTalendJob), and the child job will do something to the file.
I'm able to watch the directory, get the filename itself, and System.out.println the filename, but I can't seem to pass it down to the child job. When it runs, the route goes into an endless loop.
Any help is GREATLY appreciated.
You must add a context parameter to your Talend job, and then pass the filename from the route to the job by assigning it to the parameter.
In my example I added a parameter named "Param" to my job. In the Context Param view of cTalendJob, click the + button, select the parameter from the list of available parameters, and assign a value to it.
You can then do context.Param in your child job to use the filename.
I think you are making this more difficult than you need...
I don't think you need your cProcessor or cSetBody steps.
In your tRouteInput, if you want the filename, map "${header.CamelFileName}" to a field in your schema and you will get the filename. Mapping "${in.body}" would give you the file contents, but if you don't need those you can just map the required header. If your job needs to read the file as a whole, you could skip that step and just map the message body.
Also, check the default behaviour of the Camel file component: it is intended to put the contents of the file into a message, moving the file to a .camel subdirectory once complete. If your job writes to the directory cFile is monitoring, the route will keep running indefinitely, as it keeps finding a "new" file; you would want to write any updated files to a different directory, or use a filename mask that isn't monitored by the cFile component.

Skip a Data Flow Component on Error

I want to skip a component of my data flow task when this component throws a specific error.
To be precise, I read data from different source files/connections in my dataflow and process them.
The problem is that I can't be sure if all source files/connections will be found.
Instead of checking beforehand whether I can connect to each source, I want to continue the execution of the data flow by skipping the component that reads data from the failing source.
Is there any possibility of continuing the data flow after the component that originally threw the error, by jumping back from the OnError event handler (of the data flow task) into the next component? Or is there any other way to continue the data flow task execution by skipping the component?
As @praveen observed, out of the box you cannot disable data flow components.
That said, I could see a use case for this, perhaps a secondary source that augments existing data and may or may not be available. If I had that specific need, I'd write a script component which performs the data reading, parsing, casting of data types, etc. when a file is present, and sends nothing, but keeps the metadata intact, when no source is available.
You can do the following, based on what I understand:
1) Create a script component that checks which source it should go to and read from
2) Based on that source connection, assign the source

Iterate Through Rows of CSV File and Assign Particular Row Value to Package Variable

I am currently using SSIS 2008 and am fairly new to it. I have a programming background with some Java, VBA, and VB.NET.
I have a connection to a csv file that contains a list of URLs.
There are about a thousand rows in the file and, for each row, I want to add the URL to a package variable that will be used to see whether the most current link has already been downloaded and updated or not.
I've set up a Foreach Loop Container that is intended to loop through each row of the csv file.
However, I cannot figure out how to "look at" each row. Once I can do that I know it will be no problem to assign the URL to the variable but I am stuck mid-way. Does anyone have any suggestions?
You want to do something to each row from a given source. That's usually a data flow type of activity. Drop a Data Flow Task onto your Control Flow. Inside that data flow, add a Flat File Source. In the Flat File Connection Manager, click New and fill out the details for your file. I assume it's just one data element (url) per line. Click OK through the dialogs and then you should have a working data source.
Great, now all you need to do is that "something" to the data coming in, which in your case is "see if the most current link has already been downloaded and updated or not." I'm not sure exactly what that translates to, but whatever you attach (Lookup task, Script task, etc.) to the output from the Flat File Source will perform that operation for every row flowing through it.