ESQL to read value from cache property file - message-queue

I am new to ESQL. In my message flow I have a lookup file which contains some values used for forking messages. I now have a new requirement to read a value from the lookup cache file and search it for a string: if it contains a particular string, duplicate the message and fork it to multiple queues; if the string doesn't exist, fork it to a single queue. Can someone help with this?
Thanks,
Vinoth

You should not read the file for every message, but instead cache the contents of the file in SHARED variables.
For this your message flow should have two input queues: one for the messages to route, and a second, technical queue that receives messages to trigger a reload of the file into the cache.
This second part of the flow should look like this: MQ Input -> File Read -> Compute
Put the logic for storing the file contents in SHARED variables into the Compute node.
So, as you can see, you don't read the file in ESQL; you do that with the File Read node in your flow and use ESQL only to process the file contents. You can then access the values stored in the SHARED variables in the first part of the flow, where you do the routing.
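A minimal ESQL sketch of both Compute nodes, assuming the whole lookup file boils down to a single character string; the module names, field names, search string and queue names are placeholders, the routing Compute node's "Compute mode" must include LocalEnvironment, and the MQ Output node must be set to destination list mode:

    -- Declared at schema level (outside any module) so that both Compute nodes
    -- in the flow see the same shared variable.
    DECLARE CacheRow SHARED ROW;

    -- Compute node of the reload path (MQ Input -> File Read -> Compute)
    CREATE COMPUTE MODULE ReloadLookupCache
        CREATE FUNCTION Main() RETURNS BOOLEAN
        BEGIN
            -- The exact input path depends on how the File Read node parses the
            -- file; here the file is read as a BLOB and cast to CHARACTER.
            CACHE_WRITE : BEGIN ATOMIC
                SET CacheRow.LookupValue = CAST(InputRoot.BLOB.BLOB AS CHARACTER
                    CCSID InputRoot.Properties.CodedCharSetId);
            END CACHE_WRITE;
            RETURN FALSE;   -- nothing needs to be propagated
        END;
    END MODULE;

    -- Compute node of the routing path
    CREATE COMPUTE MODULE RouteOnLookupValue
        CREATE FUNCTION Main() RETURNS BOOLEAN
        BEGIN
            DECLARE lookupValue CHARACTER;
            CACHE_READ : BEGIN ATOMIC
                SET lookupValue = COALESCE(CacheRow.LookupValue, '');
            END CACHE_READ;

            IF POSITION('PARTICULAR_STRING' IN lookupValue) > 0 THEN
                -- string found: duplicate the message to two queues
                SET OutputLocalEnvironment.Destination.MQ.DestinationData[1].queueName = 'QUEUE.A';
                SET OutputLocalEnvironment.Destination.MQ.DestinationData[2].queueName = 'QUEUE.B';
            ELSE
                -- string not found: fork to a single queue
                SET OutputLocalEnvironment.Destination.MQ.DestinationData[1].queueName = 'QUEUE.A';
            END IF;

            SET OutputRoot = InputRoot;
            RETURN TRUE;
        END;
    END MODULE;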

Related

How to take "access token" value inside an output json file and pass the same "access token" to another REST GET request in Azure Data Factory?

I have got access token and expiry time as two columns in a JSON file after doing a POST request and stored the file in Blob storage.
Now I need to look inside the JSON file that I stored before, take the value of the access token, and use it as a parameter in another REST request.
Please help...
Depending on your scenario, there are a couple ways you can do this. I assume that you need the access token for a completely different pipeline since you are storing the Get access token output to a file in Blob.
So in order to reference the values within the json Blob file, you can just use a lookup activity in Azure Data Factory. Within this Lookup Activity you will use a dataset for a json file referencing a linked service connection to your Azure Blob Storage.
Here is an illustration with a json file in my Blob container:
The above screenshot uses a Lookup activity with the JSON file dataset on a Blob Storage linked service to get the contents of the file. It then saves the output contents of the file to variables, one for the access token and another for the expiration time. You don't have to save them to variables; you can instead call the output of the activity directly in the subsequent web activity. Here are the details of the outputs and settings:
Hopefully this helps, and let me know if you need clarification on anything.
EDIT:
I forgot to mention: if you get the access token using a web activity and then just need to use it again in another web activity in the same pipeline, you can simply take the access token value from the first web activity's output and reference that output in the next web activity, just like I showed with the Lookup activity, except that you'd be using the response from the first web activity that retrieves the access token. Apologies if that's hard to follow, so here is an illustration of what I mean:
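As a sketch, if the first web activity is named 'Get Token' (a placeholder name) and the token endpoint returns a JSON body with an access_token property, the second web activity can reference it with an expression, for example in an Authorization header:

    @activity('Get Token').output.access_token

    Authorization: Bearer @{activity('Get Token').output.access_token}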
A simple way to read JSON files into a pipeline is to use the Lookup activity.
Here is a test JSON file loaded into Blob Storage in a container named json:
Create a JSON dataset that just points to the container. You won't need to configure or parameterize the folder or file name values, although you certainly can if that suits your purpose:
Use a Lookup activity that references the Dataset. Populate the Wildcard folder and file name fields. [This example leaves the "Wildcard folder path" blank, because the file is in the container root.] To read a single JSON file, leave "First row only" checked.
This will load the file contents into the Lookup Activity's output:
The Lookup activity's output.firstRow property will become your JSON root. Process the object as you would any other JSON structure:
NOTE: the Lookup activity has a limit of 5,000 rows and 4MB.
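As a sketch, assuming the Lookup activity is named 'Lookup JSON File' and the JSON file has top-level properties named access_token and expires_in (placeholders for whatever your file actually contains), a later Set Variable or Web activity can reference the values like this:

    @activity('Lookup JSON File').output.firstRow.access_token
    @activity('Lookup JSON File').output.firstRow.expires_in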

KNIME - Execute a EXE program in a Workflow

I have a KNIME workflow, and in the middle of it I must execute an external program that creates an Excel file.
Is there a node that allows me to achieve this? I don't need to pass any input or output; I only need to execute the program and wait for it to generate the Excel file (I need to use this Excel file in the next nodes).
There are (at least) two “External Tool” nodes which allow running executables on the command line:
External Tool
External Tool (Labs)
In case that should not be enough, you can always go for a Java Snippet node. The java.lang.Runtime class should be your entry point.
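A minimal Java sketch of the kind of code that could go into a Java Snippet node (only the body lines would go into the snippet itself; the path to the executable is a placeholder). It uses ProcessBuilder rather than java.lang.Runtime directly, simply because waiting for the process and checking its exit code is a little more convenient:

    import java.io.IOException;

    public class RunExternalTool {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Path to the external program is a placeholder
            ProcessBuilder pb = new ProcessBuilder("C:\\tools\\create_excel.exe");
            pb.inheritIO();                    // forward the tool's console output
            Process process = pb.start();
            int exitCode = process.waitFor(); // block until the Excel file has been written
            if (exitCode != 0) {
                throw new IllegalStateException("External tool failed with exit code " + exitCode);
            }
        }
    }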
You could use the External Tool node. The node requires inputs and outputs, but you can use a Table Creator node for the input:
This creates an empty table.
In the External Tool node you must specify an input file and an output file; depending on your use case this configuration may be meaningless, but the node requires it in order to work.
In this case, the external app writes a text file with the result of the execution, so that file can then be read back and its information brought into KNIME.

Totally new to Talend ESB

I'm completely brand new to Talend ESB (not so much Talend for data integration, but the ESB side is totally new to me).
That being said, I'm trying to build a simple route that watches a specific file path and gets the filename of any file dropped into it. It should then pass that filename to the child job (cTalendJob), and the child job will do something with the file.
I'm able to watch the directory, get the filename itself and System.out.println the filename, but I can't seem to pass it down to the child job. When it runs, the route goes into an endless loop.
Any help is GREATLY appreciated.
You must add a context parameter to your Talend job, and then pass the filename from the route to the job by assigning it to the parameter.
In my example I added a parameter named "Param" to my job. In the Context Param view of cTalendJob, click the + button and select it from the list of available parameters, and assign a value to it.
You can then do context.Param in your child job to use the filename.
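As a sketch, assuming the context parameter is named "Param" as above: the value assigned in the Context Param view is the Camel simple expression for the incoming file name, and the child job then reads it through the generated context object (for example in a tJava component):

    // Value entered for "Param" in the Context Param view of cTalendJob:
    //   "${header.CamelFileName}"

    // Inside the child job, e.g. in a tJava component:
    String incomingFileName = context.Param;
    System.out.println("Processing file: " + incomingFileName);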
I think you are making this more difficult than you need...
I don't think you need your cProcessor or cSetBody steps.
In your tRouteInput, if you want the filename, map "${header.CamelFileName}" to a field in your schema and you will get the filename. Mapping "${in.body}" would give you the file contents, but if you don't need those you can just map the required header. If your job reads the file as a whole, you could skip that step and just map the message body.
Also, check the default behaviour of the Camel file component: it is intended to put the contents of the file into a message, moving the file to a .camel subdirectory once complete. If your job writes to the directory cFile is monitoring, the route will keep running indefinitely, as it keeps finding a "new" file; you would want to write any updated files to a different directory, or use a filename mask that isn't monitored by the cFile component.
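As a rough illustration of that behaviour in Camel's Java DSL (the directory path and the filter regex are placeholders; cFile is configured graphically but maps onto the same file endpoint options): consumed files are moved to a .camel subdirectory by default, and an include filter keeps the consumer from picking up files written back into the directory:

    import org.apache.camel.builder.RouteBuilder;

    public class WatchDirectoryRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Only pick up .csv files; anything else written to the directory is ignored.
            // Files that have been consumed are moved to .camel by default.
            from("file:C:/data/in?include=.*\\.csv")
                .log("Picked up ${header.CamelFileName}");
        }
    }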

Skip a Data Flow Component on Error

I want to skip a component of my data flow task, when this component throws a specific error.
To be precise, I read data from different source files/connections in my dataflow and process them.
The problem is that I can't be sure if all source files/connections will be found.
Instead of checking each source that I can connect to, I want to continue the execution of the data flow by skipping the component that reads data from the source.
Is there any possibility to continue the data flow after the component that originally threw the error, by jumping back from the OnError event handler (of the data flow task) into the next component? Or is there any other way to continue the data flow task execution by skipping the component?
As @praveen observed, out of the box you cannot disable data flow components.
That said, I can see a use case for this, perhaps a secondary source that augments existing data and may or may not be available. If I had that specific need, I'd write a script component that performs the data reading, parsing, casting of data types, etc. when a file is present, and sends nothing but keeps the metadata intact when no source is available.
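A minimal sketch of that script component, configured as a source, assuming a single output ("Output 0") with an Id (int) and Name (string) column defined on the Inputs and Outputs page, and a package variable User::SecondarySourcePath exposed as a read-only variable; all of these names, and the semicolon-delimited file format, are placeholders:

    using System.IO;

    public class ScriptMain : UserComponent
    {
        public override void CreateNewOutputRows()
        {
            string path = Variables.SecondarySourcePath;

            // If the optional source is missing, emit no rows but keep the
            // output metadata intact so the rest of the data flow still runs.
            if (!File.Exists(path))
                return;

            foreach (string line in File.ReadLines(path))
            {
                string[] fields = line.Split(';');
                Output0Buffer.AddRow();
                Output0Buffer.Id = int.Parse(fields[0]);
                Output0Buffer.Name = fields[1];
            }
        }
    }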
Based on what I understand, you can do the following:
1) Create a Script Component that checks which source is available
2) Based on that source connection, assign the source accordingly

SSIS - "switchable" file output for debug?

In an SSIS data-flow task, I'm using a Multicast transform at a key part of the flow which I want to hang a File Output destination off.
This, in itself, is no problem to do. However I only want output in the file if I enable it; i.e., I'd be using it for debugging the data if the flow fails unexpectedly and it's not immediately obvious from the default log message output why this occurred.
My initial thought was to create a File Output whose output file was obtained from a variable, and by default, the variable would contain 'nul' - i.e., the Windows bit-bucket - which I could override through configuration in the event of needing to dig further.
Unfortunately this isn't working: the File Output complains, saying that "The filename is a device or contains invalid characters". So it looks like I can't use the bit-bucket.
Is anyone aware of a way to make output "switchable"? This would make enabling debug a less risky proposition than editing the package and dropping a File Output in directly.
I suppose I could have a Conditional Split off the Multicast which only sends output if a variable is set to some given value, but this seems overly messy. I'll be poking at other options, but if anyone has any suggestions/solutions, they'd be welcome.
I'd go for the Conditional Split, redirecting rows to the Konesans Trash Destination adapter if your variable isn't set, and otherwise sending them to your file.
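As a sketch, the Conditional Split condition could test an Int32 package variable (User::EnableDebugOutput is a placeholder name); rows matching the condition go to the file destination, and the default output goes to the Trash Destination:

    @[User::EnableDebugOutput] == 1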