Assistance with a simple WSO2 ESB project - esb

I'm very new to this and need some help writing an ESB script to take an event posted via HTTPS on port 9090 in WSO2 and transform it into a message to be appended to an XML file on the server.
The HTTPS data will contain: "ID=Servername|Severity=Sevtype" (where Servername is a device name and Sevtype is either "WARNING" or "OK", depending on whether the server is down or up).
This then needs to be transformed and appended to an existing XML file in the following format:
<event>
<componentID>Servername</componentID>
<timestamp>2012-04-27 01:37:10</timestamp> ***(Date and time the event was received)***
<severity>NORMAL</severity> ***(If original is WARNING then severity = SEVERE else it = NORMAL)***
<eti>NodeStatus</eti><etivalue>Up</etivalue> ***(If original is WARNING then etivalue = Down else it = Up)***
</event>
Please could someone assist me? I'm really floundering with what seems like it should be a simple thing.
Many Many Thanks
Simon

You can write a simple task to poll your data into the server and do an XSLT transformation to construct that particular XML message format; a sketch of such a transformation follows the references below.
Some references for writing a task:
http://wso2.org/project/esb/java/4.0.3/docs/configuration_language.html#TaskConcept
http://docs.wso2.org/wiki/display/ESB403/Writing+Tasks
http://wso2.org/library/2900
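For illustration, a minimal XSLT sketch of that transformation might look like the one below. It assumes the incoming payload has already been parsed into a small XML document such as <rawEvent><ID>...</ID><Severity>...</Severity></rawEvent>; those element names are placeholders, not something the ESB gives you automatically, and current-dateTime() needs an XSLT 2.0 capable processor.

<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- map the raw ID/Severity pair onto the target <event> structure -->
  <xsl:template match="/rawEvent">
    <event>
      <componentID><xsl:value-of select="ID"/></componentID>
      <timestamp><xsl:value-of select="current-dateTime()"/></timestamp>
      <severity>
        <xsl:choose>
          <xsl:when test="Severity = 'WARNING'">SEVERE</xsl:when>
          <xsl:otherwise>NORMAL</xsl:otherwise>
        </xsl:choose>
      </severity>
      <eti>NodeStatus</eti>
      <etivalue>
        <xsl:choose>
          <xsl:when test="Severity = 'WARNING'">Down</xsl:when>
          <xsl:otherwise>Up</xsl:otherwise>
        </xsl:choose>
      </etivalue>
    </event>
  </xsl:template>
</xsl:stylesheet>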

How do you get the ID and Severity? Are they HTTP headers?
Generally you can use the payload factory mediator [1] to build payload messages from some input parameter data; a minimal sketch follows the link below.
[1] http://wso2.org/project/esb/java/4.0.3/docs/samples/message_mediation_samples.html#Sample17
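As a small sketch (not a complete configuration), assuming the two values have already been captured into properties named serverName and severity somewhere earlier in the sequence, a payloadFactory mediator could build the core of the event like this:

<payloadFactory>
  <format>
    <event>
      <componentID>$1</componentID>
      <severity>$2</severity>
      <eti>NodeStatus</eti>
    </event>
  </format>
  <args>
    <!-- the property names are placeholders; adapt them to however the
         ID and Severity are actually extracted from the request -->
    <arg expression="get-property('serverName')"/>
    <arg expression="get-property('severity')"/>
  </args>
</payloadFactory>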


Nifi RestLookupService in LookupRecord gives JsonParseException

I have a basic NiFi flow with one GenerateFlowFile processor and one LookupRecord processor using a RestLookupService.
I need to do a REST lookup and enrich the incoming flow file with the REST response.
I am getting errors that the lookup service is unable to look up coordinates with the value I am extracting from the incoming flow file.
GenerateFlowFile is configured with a simple JSON payload.
LookupRecord is configured to extract the key from the JSON and populate it into the RestLookupService. Also, a JsonReader and JsonSetWriter are configured to read the incoming flow file and to write the response back to the flow file.
The RestLookupService itself fails with a JsonParseException about an unexpected character '<'.
RestLookupService is configured with my API running in the cloud, in which I am trying to use the key extracted from the incoming flow file.
The most interesting part is that when I configure the URL to point, for example, to mocky.io, everything works correctly, which means the issue is tied to the API URL I am using (http://iotosk.ddns.net:3006/devices/${key}/getParsingData). I have also tried removing the ${key}, using the exact URL, and using different URLs.
Of course the API works fine over Postman/curl and everything else. I have checked the logs on the container that the API is running on and there are no requests in the logs, which means that NiFi is failing even before reaching the API, at least at the application level...
I am absolutely out of options, without any clue how to solve this, and with NiFi Google is not helpful either.
Does anybody see any issue in the configuration or can point me in some direction what can cause this issue?
After some additional digging, the issue turned out to be connected with authentication logic applied even before the API receives the request, which cripples the request and returns XML, as Bryan Bende suggested in the comment.
But better logging in NiFi would definitely help solve this kind of thing much faster...

Generate Schema XML returning headers only

I'm facing some issues when generating a schema using an RFC connection to SAP while calling the function "BAPI_COMPANYCODE_GETLIST", which then creates a schema in my integration account using the content generated previously. However, after the schema is generated, when I access it, all it has are the headers that are on the SAP table and none of its content. In this case the headers are COMP_NAME and COMP_CODE, and it should return 122 rows, but it does not; it also doesn't return any error, so I cannot understand why it can retrieve the headers of the table but not its content.
I've tried enabling safe typing, but after that the SAP connection doesn't work anymore. I've also tried calling different functions, but the results are the same with different headers. Since this connector is recent, I'm not able to find any solutions for this issue at the moment.
The flow first receives an HTTP request; afterwards it calls the BAPI function to generate the schema content, which is then used to create the schema in the integration account with the following properties:
{
  "Content": "#{base64ToString(items('For_each')?['Content'])} ",
  "ContentType": "application/xml",
  "SchemaType": "Xml"
}
The schema is just that: the metadata describing the structure of an XML document. It is not the XML document itself.
The schema will contain two parts: the request message structure and the response message structure. You need to use the request message structure to form the BAPI "get list" request, and then you can use the response message structure to parse the response. Either the generic "Send message to SAP" action or the targeted "Call BAPI" action of the SAP connector can be used to send the request message; a rough sketch of such a request follows.
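As a rough illustration only, the request message for this BAPI ends up shaped something like the snippet below; the exact element names and namespace are assumptions here and must be taken from the generated request schema rather than from this sketch:

<BAPI_COMPANYCODE_GETLIST xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
  <COMPANYCODE_LIST/>
</BAPI_COMPANYCODE_GETLIST>

The response message, parsed against the response schema, is where the COMP_CODE/COMP_NAME rows actually come back.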

Parse JSON in Logic App from Stream Analytics -> Service Bus -> Logic Apps

I'm trying to build a logic app that inserts data into a SQL database. The data comes from a Stream Analytics job that outputs to a Service Bus topic, which is consumed in Logic Apps via the Service Bus trigger.
To populate the properties of the inserted row (let's say it only has one column, 'Name'), I've found that this should work using the following syntax:
"body": {
"Name": "#{json(decodeBase64(triggerBody()['ContentData'])).Name}"
},
Provided the message body contains a 'Name' property.
However, I get the following error message when running this:
{"code":"InvalidTemplate","message":"Unable to process template language expressions in action 'Insert_row' inputs at line '1' and column '2017': 'The template language function 'json' parameter is not valid. The provided value '#\u0006string\b3http://schemas.microsoft.com/2003/10/Serialization/��{\"time\":\"2016-05-25T10:29:17.4953250Z\",\"Name\":\"Y-Axis\",\"Value\":81.0,\"Date\":\"2016-05-25T10:29:17.4953250\",\"EventProcessedUtcTime\":\"2016-05-25T10:29:17.5525449Z\",\"PartitionId\":2,\"EventEnqueuedUtcTime\":\"2016-05-25T10:29:17.2220000Z\"}\u0001' cannot be parsed: 'Unexpected character encountered while parsing value: #. Path '', line 0, position 0.'. Please see https://aka.ms/logicexpressions#json for usage details.'."}
So it seems that the content is enclosed in another envelope that prevents JSON parsing from working.
1) Is there any simple way to get around this?
2) Isn't such an integration, all within the Microsoft stack, just supposed to work without this mucking around?
Thanks,
Stefan
With the limited string functions available in the workflow definition language, I had to use a long-winded way of removing the additional strings:
#{json(substring(replace(decodeBase64(triggerBody()['ContentData']),'#string3http://schemas.microsoft.com/2003/10/Serialization/��', ''),0,sub(length(replace(decodeBase64(triggerBody()['ContentData']),'#string3http://schemas.microsoft.com/2003/10/Serialization/��', '')),1))).fieldname}
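A slightly shorter variant (just an untested sketch, and it assumes the real JSON is always delimited by the first '{' and the last '}' in the decoded string) would be to compute the start index and length with indexOf/lastIndexOf instead of hard-coding the wrapper text:

#{json(substring(decodeBase64(triggerBody()['ContentData']), indexOf(decodeBase64(triggerBody()['ContentData']), '{'), add(sub(lastIndexOf(decodeBase64(triggerBody()['ContentData']), '}'), indexOf(decodeBase64(triggerBody()['ContentData']), '{')), 1))).fieldname}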
Is there a better way to do this?
Thanks for reporting this; you're right, it should simply work. There is a known issue where ASA Service Bus output JSON is being wrapped in an XML header. It will be addressed in the near future, but I can't specify a particular date. Could you please work around it (maybe using substring/replace) until then?
cheers,
Chetan
Sorry for the delay in getting back; this issue is now fixed. It's exposed under a new compatibility level (1.1) to avoid breaking existing solutions. Please use this URL to configure your job with compatibility level 1.1 and give it a try.
cheers!

How can I display an XML page instead of JSON for a dataset

I am using the pycsw extension to produce a CSW file. I have harvested data from one CKAN instance [1] into another [2], and am now looking to run the pycsw 'paster load' command:
paster ckan-pycsw load -p /etc/ckan/default/pycsw.cfg -u [CKAN INSTANCE]
I get the error:
Could not pass xml doc from [ID], Error: Start tag expected, '<' not found, line 1, column 1
I think it is because when I visit this url:
[CKAN INSTANCE 2]/harvest/object/[ID]
it comes up with a JSON file as opposed to the XML it is expecting.
I have run the pycsw load command on other CKAN instances and have had no problems with them. They also display an XML file at the URL stated above, so I wanted to know how to get CKAN to serve an XML file instead of JSON.
Thanks in advance for any help!
As you've worked out, your datasets need to be in ISO(XML) format to load into a CSW server. A CKAN instance only has a copy of a dataset in ISO(XML) format if it harvested it from a CSW.
If you use the CKAN(-to-CKAN) harvester in the chain, then the ISO(XML) record doesn't get transferred with it. So you'd either need to add this functionality to the CKAN(-to-CKAN) harvester, or get rid of the CKAN-to-CKAN harvest step.
Alternatively, if the record originated in a CKAN, then it has no ISO(XML) version anyway, and you'd need to create that somehow.

HTTP request from XQuery code inside MarkLogic pipe

I want to send text using the POST method to another HTTP server and receive a JSON file in response, from the XQuery code of a pipe in MarkLogic. The idea is that whenever an XML or JSON document is inserted into MarkLogic, it triggers a pipe to read it and send one element to another web server; in my situation, I want to send it to a Rosoka server to do natural language processing, and after that I want to store the returned data (a JSON file) in MarkLogic.
I'd appreciate it if you could help.
The MarkLogic documentation only mentions that this is possible, but gives no further help.
There are many ways to do this.
A start is to look at "Triggers"
https://docs.marklogic.com/guide/app-dev/triggers
There is also the Content Processing Framework ("CPF"), which is a higher-level workflow framework based on triggers.
https://docs.marklogic.com/guide/cpf
If you read those guides you should be able to ask more specific questions if needed; a rough sketch of a trigger module doing the HTTP call follows below.
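To make that concrete, a post-commit trigger module might look roughly like the sketch below. The element name text-to-analyse, the Rosoka endpoint URL, and the output URI are all placeholders, and the exact request/response format Rosoka expects is assumed rather than taken from its documentation; depending on the response content type, the body returned by xdmp:http-post may also need to be parsed (e.g. with xdmp:unquote) before it can be inserted.

xquery version "1.0-ml";
import module namespace trgr = "http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

declare variable $trgr:uri as xs:string external;
declare variable $trgr:trigger as node() external;

(: pull the element to analyse out of the newly inserted document :)
let $text := fn:string(fn:doc($trgr:uri)//text-to-analyse)

(: POST it to the NLP service; xdmp:http-post returns the response
   metadata as the first item and the response body after it :)
let $response :=
  xdmp:http-post("http://rosoka.example.com/api/process",
    <options xmlns="xdmp:http">
      <headers>
        <content-type>text/plain</content-type>
      </headers>
    </options>,
    text { $text })

(: store the returned JSON next to the original document :)
return xdmp:document-insert(fn:concat($trgr:uri, ".enriched.json"), $response[2])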