Logic App not parsing JSON body from ADF anymore

I'm triggering around 30 Logic Apps from Data Factory V2, passing a JSON body to each HTTP trigger. The body is different for almost every Logic App.
Last week an issue appeared where the 'When a HTTP request is received' trigger does not process the body from Data Factory correctly.
Please note that both the Logic Apps and the Data Factory haven't changed in months and were working without any problems up to last week.
This also happened earlier last week, but then it resolved 'itself', suggesting an issue on the Logic App side. Currently all Logic Apps keep failing, and I've rerun them many times. @AzureSupport redirected me to our CSP, but they are not really helping at the moment.
Body in the ADF pipeline (URL sanitized):
"typeProperties": {
"url": "https://prod-50.westeurope.logic.azure.com:443 /<....>",
"method": "POST",
"body": {
"customer": "#pipeline().parameters.customer",
"token": "#pipeline().parameters.token",
"tennant": "#pipeline().parameters.tennant",
"baseuri": "#pipeline().parameters.baseuri",
"connectorTrans": "#pipeline().parameters.connectorTrans",
"connectorNonTrans": "#pipeline().parameters.connectorNonTrans",
"datum": "#formatDateTime(adddays(utcnow(),-1),'s')"
}
}
The last successful run parsed the body from the Data Factory as follows (sanitized, of course):
"body": {
"customer": "<customerName>",
"token": "<token>",
"tennant": null,
"baseuri": "<baseUri>",
"connectorTrans": "<connectorName>",
"connectorNonTrans": "<connectorName2>",
"datum": "<date>"
}
The runs that are failing all show the same problem; the body is not being parsed correctly:
"body": "{\r\n \"customer\": \"<customerName>\",\r\n \"token\": \"<token>\",\r\n \"tennant\": null,\r\n \"baseuri\": \"<baseUri>\",\r\n \"connectorTrans\": \"<connectorName>\",\r\n \"connectorNonTrans\": \"<connectorName2>\",\r\n \"datum\": \"<date>\"\r\n}"
The whole body arrives as one single escaped string, including \r\n sequences and escape characters.
As a result, the Logic App cannot use the values in the fields passed on by the Data Factory.
All help or pointers are much appreciated.
Running the Logic App from Postman with the exact same body as from the Data Factory works without any problems.

I faced the same issue. You need to add a Content-Type: application/json header to the Web activity in ADF that calls the Logic App.
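A minimal sketch of what the Web activity's typeProperties could look like with the header added (reusing the asker's sanitized URL and two of the parameters; only the "headers" block is new):

"typeProperties": {
    "url": "https://prod-50.westeurope.logic.azure.com:443/<....>",
    "method": "POST",
    "headers": {
        "Content-Type": "application/json"
    },
    "body": {
        "customer": "@pipeline().parameters.customer",
        "datum": "@formatDateTime(adddays(utcnow(),-1),'s')"
    }
}

With the header present, the Logic App trigger should receive the payload as JSON rather than as an escaped string, so the individual fields resolve again.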


What is a useful Azure IoT Hub JSON message structure for consumption in Time Series Insights

The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
    "event": {
        "origin": "raspberrypi-zero-wh",
        "payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
    }
}
Issue
The data seems fine, except that the payload looks strange here. BUT, the payload is literally what I send from the device using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model, so I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?
You should add the following two lines to your message in your Python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
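Put together with the SDK's Message class, the relevant part could look roughly like this (a sketch only; the connection string and payload values are placeholders):

import json
from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")
client.connect()

payload = {"locationDescription": "Attic", "temperature": 21.941, "relhumidity": 71.602}
msg = Message(json.dumps(payload))
msg.content_encoding = "utf-8"          # tell IoT Hub how the body is encoded
msg.content_type = "application/json"   # so downstream services treat the payload as JSON, not an opaque string

client.send_message(msg)

With these two properties set, the payload should show up in az iot hub monitor-events as a nested JSON object rather than an escaped string.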
I ended up using the tip by @elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Cloud Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key changes were:
Including the message source time in ISO format:
from datetime import datetime
timestampIso = datetime.now().isoformat()
message.custom_properties["iothub-creation-time-utc"] = timestampIso
Using locationDescription as the Time Series ID property; see https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid (maybe I could also have taken iothub-connection-device-id, but I did not test that alone specifically).
I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of "locationDescription" as the TSID; Attic becomes the time series instance name, and temperature and humidity will be your variables.

Forge conversion to OBJ only returning SVF

I'm following the step-by-step Extract Geometry tutorial, and everything seems to work fine, except that when I check the manifest after posting the job, it always returns the manifest for the initial conversion to SVF.
The tutorial specifically states that you must convert to SVF first. This takes a few seconds to a few minutes, starting at 0% and going up to 100%. I await completion, and then I post the second job with the following payload (verifying that the payload is as requested):
let objPayload = {
    "input": {
        "urn": job.urn // urn retrieved from the file upload / SVF conversion
    },
    "output": {
        "formats": [
            {
                "type": "obj",
                "advanced": {
                    "modelGuid": metaData[0].guid,
                    "objectIds": [-1]
                }
            }
        ]
    }
}
(where metaData[0].guid is the GUID returned by Step 1's call to /modelderivative/v2/designdata/${urn}/metadata)
The job then actually starts at about 99%. It sometimes takes a few moments to complete, but when it does, the call to retrieve the manifest returns the previous manifest, where the output format is marked as "svf".
The POST Job page states that
Derivatives are stored in a manifest that is updated each time this endpoint is used on a source file.
So I would expect the returned manifest to be updated to include the requested 'obj'. But it is not.
What am I missing here?
As Cyrille pointed out, the translate job only works consistently when translating to SVF. If translating to OBJ, you can only do so from specific formats, listed in this table.
At the time of this writing, if you request a job outside that table (e.g. IFC -> OBJ), the service will still accept your job and simply not do it. So if you're following the "Extract Geometry" tutorial, when you request the manifest it is still pointing to the original SVF translation.
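One way to check up front whether your source file type supports an OBJ output is to query the Model Derivative formats endpoint before posting the job. A rough sketch (runs inside an async function; accessToken and the source extension are placeholders, and the exact response shape should be verified against the Forge documentation):

// GET the map of target formats to the source formats they can be produced from
const res = await fetch("https://developer.api.autodesk.com/modelderivative/v2/designdata/formats", {
    headers: { "Authorization": `Bearer ${accessToken}` }
});
const { formats } = await res.json();

const sourceExtension = "ifc"; // extension of the file you originally uploaded
console.log(`OBJ from ${sourceExtension} supported:`, (formats.obj || []).includes(sourceExtension));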

Cannot find field: fullfillmentText in message, while it is present

I started using API.AI and Dialogflow in its first versions for some small time projects.
Recently I wanted to try and dive into the new V2 of Dialogflow and see how I can continue to build nice Google Assistant apps with that.
When trying to formulate a response (based on the documentation here: https://dialogflow.com/docs/reference/api-v2/rest/v2beta1/WebhookResponse), I am unable to actually render responses of any kind. Every time I try, it just gives me a webhook error back.
The intent that I'm using in my demo project is (still fairly simple, as I'm just trying to get a response back):
My Webhook (Elixir based) returns the following response (actual production response):
When inspecting the "Show JSON" After doing the test on the right-hand side of the Dialogflow screen I receive:
I must be doing something wrong, should the whole response that I send now be wrapped in something?
Update:
When removing "fullfillmentText" and just keeping "fullfillmentMessages" I seem to get the same error, but then for fullfillmentMessages. It looks like DialogFlow doesn't understand the JSON parameters I'm sending to it. example:
Man, what a typo here... I managed to fix it in the end by writing "fulfillmentMessages" instead of "fullfillmentMessages".
Protip for everyone starting with this and wanting to know the structure of the data:
Make a simple intent, just as a test.
Add some Google or other responses through the GUI.
Save the intent.
Trigger the intent from the "Try it now" panel on the right-hand side.
Click SHOW JSON to inspect how a response would need to look.
Final result, code sample:
{
    "fulfillmentMessages": [
        {
            "platform": "ACTIONS_ON_GOOGLE",
            "simpleResponses": {
                "simpleResponses": [
                    {
                        "displayText": "Sorry, something went wrong",
                        "ssml": "<speak>Sorry, Something went wrong <break time=\"200ms\"/> Please try again later</speak>"
                    }
                ]
            }
        }
    ]
}

Logic App designer removes foreach loop without error or exception

I have a logic app that uses a webhook trigger on a Service Bus message queue to post selected messages to an API app using an HTTP+Swagger action.
(Screenshot: Azure Logic App designer)
Here's the JSON array (DBChanges) from the triggerBody that the foreach is supposed to iterate over:
"DBChanges":[{"Key":"ItemID","Value":"101"},{"Key":"Description","Value":"Decript the message"},{"Key":"Owner","Value":"Samuel"}]
This is the logic app code for the DBChanges POST. The foreach loop is supposed to iterate over all the elements of the DBChanges array, which is a key/value pair in the Swagger metadata.
(Screenshot: Azure Logic App code view)
When I switch to design view after adding the foreach loop, the designer strips the foreach code out even though it appears syntactically correct.
Does anyone know why the logic app designer strips out the foreach when switching between Design and Code views?
Try updating your condition to "expression": "@equals(triggerBody()['Description'], 'Create')".
Within the condition, you want to have a for-each first and the HTTP action inside it, not the other way around. (I'm using Compose as an example, but you can substitute HTTP+Swagger in your case.) Note how I use @item() to reference each value inside the DBChanges array.
"actions": {
"For_each": {
"actions": {
"Compose": {
"inputs": "#item()",
"runAfter": {},
"type": "Compose"
}
},
"foreach": "#triggerBody()['DBChanges']",
"runAfter": {},
"type": "Foreach"
}
}
To help you reference the right value in a JSON blob, we recently added a Parse JSON action. You can add it after your HTTP webhook trigger, provide the body as input, and include a schema of the JSON it will be returning. You will then be able to use friendly tokens from the picker instead of having to handcraft them. :)
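In code view, a Parse JSON action placed right after the trigger could look roughly like this (a sketch only; the schema covers just the DBChanges array from the question):

"Parse_JSON": {
    "type": "ParseJson",
    "runAfter": {},
    "inputs": {
        "content": "@triggerBody()",
        "schema": {
            "type": "object",
            "properties": {
                "DBChanges": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "Key": { "type": "string" },
                            "Value": { "type": "string" }
                        }
                    }
                }
            }
        }
    }
}

The outputs of this action, e.g. body('Parse_JSON')?['DBChanges'], then show up as selectable tokens in the designer.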
Hope this helps, feel free to reach out to me at Derek.Li (at) Microsoft dot com.

Slow response while parsing large JSON response

I have developed a web application based on a RESTful design, where the application takes a JSON response from a Java-based web service, displays it in the UI, and refreshes the data every 5 seconds.
The application uses Bootstrap for the UI design, and Backbone and require.js to implement an MVC structure where the JSON response is parsed into a Backbone collection.
When an admin is using this application, the JSON response is very large (from 800 to 1100 objects).
This is where things get messy. As per my analysis, the browser is taking up too many resources, so the rest of the application is very slow. For example, if I try to open a modal, the system freezes for some time and opens it slowly, giving a very poor user experience.
As per my analysis, the time is being taken in parsing the data.
As a remedy, I am removing all comments in the code and trying to implement gzip compression for the JSON/HTML/CSS/JS files.
A sample of the JSON object is pasted below:
{
    "name": "TEST",
    "state": "Lunch",
    "time": "00:00:09",
    "manager": "TEST",
    "site": "C",
    "skill": "TEST",
    "center": "TEST",
    "teamLead": "TEST",
    "workGroup": "TEST",
    "lanId": "TEST",
    "dbID": "TETS",
    "loginId": "TEST",
    "avgAcwTime": "nn",
    "avgHandleTime": "nn",
    "avgTalkTime": "nn",
    "callsAnswered": "nn",
    "dispSkill": "-",
    "errCode": null,
    "errDesc": null,
    "avgAcwTimeth": "medium",
    "avgHandleTimeth": "high",
    "avgTalkTimeTh": "medium",
    "callsAnsweredTh": "medium",
    "stateTh": "high"
}
Pagination can't be done due to some requirements.
Can anyone suggest something to improve the performance?
Also, I am fetching the data using Backbone.Collection.fetch():
getAgentMetric() {
    this.metrices.fetch({
        url: (isLocal) ? ('http://localhost:8080/jsons/agent.json') : (prev_this.url + '/agentstat'),
        data: JSON.stringify(param),
        type: "POST",
        dataType: "JSON",
        contentType: "application/json",
    })
    .done(function() {
        // passing the datasource from the ajax call
        prev_this.agentLoacalSource.localdata = prev_this.metrices.toJSON();
    });
    timeout = setTimeout(_.bind(this.getAgentMetric, this), 5000);
},
Browsers can handle a heck of a lot more than a thousand objects without any strain, so I don't think it's the fact that you are simply requesting a large amount of data from the backend. It's more likely that some of your parsing or rendering code is slow.
A few possibilities without seeing any more of your code:
It really depends on what you're doing here, but I'm going to assume that you aren't using a templating library (Hogan.js, Handlebars.js, etc.). You should definitely look into using one, as they speed things up quite a bit and make generating HTML a lot easier.
Are you running .append() for each individual model that you render? This will really slow things down. You should generate all of the HTML that needs to be generated and then run .append() once (see the sketch after this list).
What kind of event listeners are you adding for each model (if any)? Listening to scroll events without a debounce ends up slowing down your browser, especially if you add a bunch of them.
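As an illustration of the single-append idea, here is a rough sketch using Underscore's _.template (which Backbone already depends on); the template markup, the table selector, and the rendered fields are made up for the example:

// Build all of the row markup in memory, then touch the DOM once.
var rowTemplate = _.template(
    '<tr><td><%- name %></td><td><%- state %></td><td><%- callsAnswered %></td></tr>'
);

var html = this.metrices.map(function (model) {
    return rowTemplate(model.toJSON());
}).join('');

// One append for ~1000 rows instead of ~1000 appends.
this.$('#agent-table tbody').empty().append(html);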
Unrelated to your slowness issues, there are a few problems that I see with this code:
Your timeout should be called from an .always() function in ajax to prevent concurrent requests from going out if for whatever reason a request is slow.
this.metrices.fetch(...)
    .always(function() {
        timeout = setTimeout(...);
    }.bind(this));
Requests that are simply fetching data should use a GET instead of a POST request type. You can see https://stackoverflow.com/a/3477374/5780021 for more info about this.
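If the backend can accept it, the same fetch could be issued as a GET with the parameters in the query string, roughly like this (a sketch; param and prev_this come from the question's code):

this.metrices.fetch({
    url: prev_this.url + '/agentstat',
    data: $.param(param),   // serialized into the query string instead of a JSON POST body
    type: 'GET',
    dataType: 'json'
});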
I would recommend timing some of your code to see where the slowness is actually happening. This will allow you to determine how long things are taking between two points in the code, as in the sketch below.
See the console.time documentation for Firefox, Chrome, and IE.
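A minimal sketch of bracketing the suspected hot spots with console.time / console.timeEnd (the labels and the parsing/rendering steps are placeholders for whatever you want to measure):

console.time('parse');
var data = this.metrices.toJSON();   // or whatever parsing step you suspect
console.timeEnd('parse');            // logs something like "parse: 12.34ms"

console.time('render');
// ... build and append the HTML here ...
console.timeEnd('render');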