Schema update validation fails Azure logic apps - json

In my Logic App I am using an HTTP trigger that fires every 3 hours and makes a GET request. After the API responds, a Parse JSON action performs schema validation. With the HTTP trigger and Parse JSON, I don't know of a way to disable JSON validation. I know the 'When a HTTP request is received' trigger has an option to disable validation, but I don't need that trigger in my case; I need an HTTP trigger that runs on a recurrence basis.
Here are my two questions
1) Is there a way to disable schema validation when using an HTTP trigger that fires on a recurrence basis?
2)
We make API calls to another company, and it seems the company frequently updates the schema of the JSON it returns. On Monday the calls were going through correctly; on Tuesday they were not. When we asked them about it, this was their response:
"But almost all changes are "adding", rather than "removing" or "renaming" endpoints and information. I.e. the changes are backward compatible."
My question is about additions to the schema: is validation supposed to fail when fields are added? The Logic App is clearly failing validation after their additions, whereas their representative makes it sound like "adding" should be backwards compatible. Do additions to a schema cause validation to fail?
I am learning Azure Logic Apps and am not familiar with all the components involved.
What is the best way to handle JSON parsing when the schema changes frequently, and how do I turn validation off if that is a solution?

Judging by your second screenshot, this has nothing to do with the HTTP trigger. The problem is caused by the schema in the "Parse JSON" (Parse Organization and Group Information) action. You need to confirm the structure of the JSON data with the company and agree on a stable structure with them.
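As for whether "adding" fields should break validation: Parse JSON validates against a JSON Schema, and in standard JSON Schema terms extra properties only fail validation when the schema sets additionalProperties to false, while removing or renaming a required property always fails. I can't see your generated schema, so here is only a rough illustration outside Logic Apps, using the ajv validator in TypeScript with made-up field names:

    import Ajv from "ajv";

    const ajv = new Ajv();

    // Hypothetical schema for the original response shape.
    const schema = {
      type: "object",
      properties: { id: { type: "string" }, name: { type: "string" } },
      required: ["id", "name"],
      additionalProperties: true, // the default: extra fields are tolerated
    };
    const validate = ajv.compile(schema);

    console.log(validate({ id: "1", name: "a", extra: 42 })); // true - an "adding" change passes
    console.log(validate({ id: "1" }));                       // false - a removed required field fails

So purely additive changes should, in principle, pass a validator; it is worth checking whether the failing runs really involve only added fields, or also fields that were moved, removed, or retyped.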
As for your first screenshot, I think you need to confirm with the company whether the array is present in the response JSON. Apart from that, we can check whether the array exists and only then run the "Select" action, as shown below:
The fx expression in the "Condition" is:
empty(body('Parse_JSON')?['array'])
Hope it helps~

Azure common alert schema sets customProperties to "null"

Well, I will explain my case.
I'm trying to set up Azure alerts that send custom mails. To do so, I need a Logic App that parses the information about those alerts.
The problem is that even though I enable the common alert schema and fill in the custom properties field, as you can see in the image, what the alert sends to my Logic App in the customProperties field is a null value, and I don't understand why.
On top of that, if I disable the common alert schema, the custom properties field is sent without problems.
I don't understand whether the common alert schema doesn't allow customProperties or whether I'm doing something wrong. I need help.
Thanks for reading, and please ask if anything in this post is badly explained.
I have just confirmed this issue with Microsoft support.
If I point an Activity Log Alert Rule to an Action Group Webhook with Common Schema enabled then the Custom Properties don't appear in the JSON payload. If I disable the Common Schema then the property does appear in the payload.
If I do the same for a Metric alert or Log Query alert, the Custom Properties do appear at the Webhook endpoint regardless of whether the Common Schema is enabled or not.
Microsoft pointed out that the schemas for each type are documented (there is no custom property on the Activity Log common schema) and that this is not a bug. Well... the Alert Rule form does allow you to configure Custom Properties for each type of alert, so... ah well, never mind.
They also said "There are plans to align the behaviour on all alert types including activity logs, although there is no definite ETA though. For now, the best option for you to be able to customize the payloads of activity log alerts is by using logic app as an action group."

What is the use of GET, PUT and DELETE when anything can be done by POST, the most secure way of communication, in REST API calls?

I have read a lot about each of the methods mentioned in the title,
but I still have doubts about the primary use of each individual one. Can someone explain it to me in detail? Thank you.
What is the use of all GET, PUT, DELETE when anything can be done by POST
This is a pretty important question.
Note that, historically, SOAP essentially did everything by POST; effectively reducing HTTP from an application protocol to a transport protocol.
The big advantage of GET/PUT/DELETE is that the additional semantics that they promise (meaning, the semantics that are part of the uniform interface agreed to by all resources) allow us to build general purpose components that can do interesting things with the meta data alone, without needing to understand anything specific about the body of the message.
The most important of these is GET, which promises that the action of the request is safe. What this means is that, for any resource in the world, we can just try a GET request to "see what happens".
In other words, because GET is safe, we can have web crawlers, and automated document indexing, and eventually Google.
(Another example - today, I can send you a bare URI, like https://www.google.com and it "just works" because GET is understood uniformly, and does not require that we share any further details about a payload or metadata.)
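A small sketch of what that uniformity buys in practice: a completely generic client can probe arbitrary URIs, because GET promises not to change anything on the server. This is hypothetical TypeScript using the global fetch API; the URLs are just placeholders.

    // Safe by contract: a GET may be sent "just to see what happens",
    // so a crawler/indexer needs no knowledge of the resources it visits.
    async function probe(urls: string[]): Promise<void> {
      for (const url of urls) {
        const res = await fetch(url);
        console.log(url, res.status, res.headers.get("content-type"));
      }
    }

    probe(["https://www.google.com", "https://example.org/some-unknown-resource"]).catch(console.error);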
Similarly, PUT and DELETE have additional semantic constraints that allow general purpose components to do interesting things like automatically retry lost requests when the network is unreliable.
POST, however, is effectively unconstrained, and this greatly restricts the actions of general purpose components.
That doesn't mean that POST is the wrong choice; if the semantics of a request aren't worth standardizing, then POST is fine.
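To make the retry point concrete, here is a rough TypeScript sketch (an illustration, not anything from the service in question) of a generic helper that retries only the methods whose contract makes a repeat harmless:

    // GET, HEAD, PUT and DELETE are defined as idempotent, so a lost request
    // can be resent blindly; POST makes no such promise and is never retried here.
    const IDEMPOTENT = new Set(["GET", "HEAD", "PUT", "DELETE"]);

    async function send(url: string, init: RequestInit = {}, retries = 3): Promise<Response> {
      const method = (init.method ?? "GET").toUpperCase();
      for (let attempt = 0; ; attempt++) {
        try {
          return await fetch(url, init);
        } catch (err) {
          // Network failure: only retry when the method's semantics allow it.
          if (!IDEMPOTENT.has(method) || attempt >= retries) throw err;
        }
      }
    }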
From an API perspective:
GET - used to retrieve records/data from a source. The API needs no data from the client/UI to return all records, or it takes a query param / path param to filter records by what is required - either the record with a particular ID or records matching other properties.
POST - used to store a new record at a source. The API receives that record from the client/UI in the request body and stores it.
PUT - used to update an existing record at a source. The API receives the updated record along with an ID and replaces the existing record whose ID matches the one passed from the UI.
DELETE - used to delete a record present at a source. The UI sends nothing to delete all records at the source, or sends an ID to remove a particular record.
Here "source" refers to any database. (A minimal Express sketch of these mappings follows.)
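This is a hypothetical /records resource in TypeScript with Express; the handlers are stubs, and any database could sit behind them.

    import express from "express";

    const app = express();
    app.use(express.json());

    // GET: retrieve all records, optionally filtered via query params.
    app.get("/records", (req, res) => res.json([]));
    // GET: retrieve one record by id (path param).
    app.get("/records/:id", (req, res) => res.json({ id: req.params.id }));
    // POST: store the new record sent in the request body.
    app.post("/records", (req, res) => res.status(201).json(req.body));
    // PUT: update/replace the existing record whose id matches.
    app.put("/records/:id", (req, res) => res.json({ id: req.params.id, ...req.body }));
    // DELETE: remove a particular record.
    app.delete("/records/:id", (req, res) => res.status(204).end());

    app.listen(3000);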

Non-standard JSON and Azure Logic Apps

I have an API that produces JSON like this:
)]}',
{
//JSON DATA
}
The //JSON DATA is valid JSON, but the )]}', up top is not.
When I try to GET this data via a Logic App, I get:
BadRequest. Http request failed: the content was not a valid JSON.
So, a few related questions:
1) Can I tell the logic app to return the invalid JSON anyway?
2) How can I debug the issue better? I happen to know that the response is invalid, but what if I didn't? Can I see the raw data somewhere?
3) This is all done via the Azure web portal. Are there better tools? Visual Studio?
I should also mention that if I call a route on the same API that returns XML instead of JSON, then the Logic App works fine. So it definitely doesn't like the JSON response in particular.
Thanks!
First of all, please do not post three questions as a single question.
Question 1). The best thing you can do is make the API return a valid JSON object. This is good for million reasons. Here're a few:
it's pretty much a standard (either valid JSON or XML -- yeah, old school way);
therefore, no users of this API (including you) will need to struggle and guess what's going on and why;
your Logic App's step will just work without adding extra complexity;
you will make this world and your karma better.
If API-side changes are not within your reach, I don't think you can do much. If you're lucky and the HTTP action is successful (Status Code 2xx), you can try to use a Query Action with a function that truncates the first characters. It will look something like this (I don't know the exact syntax): #Substring(body('myHttpGet'), 4, length(body('myHttpGet')) - 4) where myHttpGet is the id of the Http Get action.
However, once again, if possible, I strongly recommend fixing up the API which is the root cause of the problem, instead of dealing with garbage response after that.
UPDATE Another thing you can do is wrap the dirty API. For example, you could create a trivial Azure Function that invokes the API you don't directly control and sanitizes the response to your consumption requirements. This Azure Function should be easy to call from the Logic App. It costs almost nothing (unless we're talking millions of requests/month). The only drawback here is the added latency, which may not be an issue at all -- test it and see whether it adds less than 100ms or so... Oh, and don't forget to file a ticket with the API owner, they make our world a bad place!
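Whatever hosts the wrapper (an Azure Function, a tiny Express app, etc.), the sanitizing step itself is only a few lines. A hedged TypeScript sketch, with a placeholder upstream URL:

    // Fetch the upstream response as text, drop the ")]}'," prefix
    // (commonly an anti-JSON-hijacking guard), then parse the remainder.
    async function getSanitized(upstreamUrl: string): Promise<unknown> {
      const res = await fetch(upstreamUrl);
      const raw = await res.text();
      const start = raw.indexOf("{");
      if (start < 0) throw new Error("Upstream response contained no JSON object");
      return JSON.parse(raw.slice(start));
    }

The Logic App then calls the wrapper instead of the original API and gets clean JSON back.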
Question 2) In the Azure Logic Apps web UI you can look into the run's execution details, and the error will definitely be there.
Question 3) You're asking for a tool recommendation which is by definition a highly subjective thing and is off-topic on StackOverflow.
TL/DR: The other app is not producing valid JSON.
Meaning, this is not a problem for you to solve. The other app has to return valid JSON if the owner claims it should.
If they cannot or will not produce valid JSON, then the first thing you need to do is inform your management that you will have to spend a lot of extra time accommodating their non-standard format.

How can I tell whether a web service is "Restful" (as it claims to be)?

I am trying to work with a service whose creators describe it as "restful".
To make a request to this service I have to post some JSON, e.g.:
{
  "#type" : "Something",
  "$value" : 1
}
This is posted to a URL similar to this;
https://someSite.com/api/query/execute
No matter what the nature of the request, whether I am retrieving info, adding or updating it I must always use this URL (along with some header values to verify my credentials). The effects of posting to this service are determined by the JSON I send.
Depending on the nature of the call, I will receive some JSON very similar to the sample above. This JSON never includes another URL (or part of one); it is always a "data object", i.e. a set of properties and their values. Sometimes I receive an empty response but know that the request has had an effect, because I can view those effects through a website provided by the service provider.
I have particular issues with the enum values that I must send, because I have no idea what the allowed values are (they are always passed as strings).
No documentation has been provided for this service.
I am relatively new to RESTful services and JSON and would like to know whether this is truly a RESTful service, and if not, why not?
Due to my lack of experience in this area I may have omitted some important information that would be required to properly answer this question. I will watch the comments closely and try to provide any additional clarification requested
know whether this is truly a restful service, and if not why not?
It isn't.
One of the main principles of REST is that "things" are identified by URLs. Having a single URL for all interaction with the API violates that principle.
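For contrast, a resource-oriented design would let the URI and the method carry the meaning instead of encoding everything in a POST body. A hypothetical TypeScript sketch (these URIs are invented, not part of the service described above):

    async function demo() {
      // Read, replace and delete a specific resource, identified by its own URI.
      await fetch("https://someSite.com/api/orders/42"); // GET: retrieve order 42
      await fetch("https://someSite.com/api/orders/42", {
        method: "PUT",
        headers: { "content-type": "application/json" },
        body: JSON.stringify({ status: "shipped" }),
      }); // replace it
      await fetch("https://someSite.com/api/orders/42", { method: "DELETE" }); // remove it
    }
    demo().catch(console.error);

The service described in the question instead funnels every operation through POST https://someSite.com/api/query/execute, which is RPC tunneled over HTTP rather than REST.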

Review before writing to database from UI

This is more of a question on design approach. I have an application which has the following details:
UI in Angular
UI uses an api which is in Node/Express
Database is just a JSON file for now.
I want to move from the JSON file to MongoDB. What I'd like is that whenever anyone uses the UI to make changes to the database, I can review the changes before they are applied. What is the best way to achieve this?
This was easier for me with the JSON file because I was creating a pull request on git where I would review all the changes and then update.
Things that I have thought:
Let the UI write to a separate clone collection (table), then review the changes and update the main collection accordingly. I'm not sure if this is the right way to do it.
Are you yourself wanting to review changes, or wanting an end user to review before saving? If it's you, you have a few options:
You can create a MongoDB collection of pending objects that get moved to a different collection once they're approved. This is OK, but not great, because you'll end up shuttling objects around; it's probably more reasonable to use a flag and group via aggregation instead of delineating by collection.
You can simply use a property on each object as a flag and send objects that are pending review to your db with that flag enabled (using some value like true, 1, or another way of saying "this is pending/enabled"); see the sketch after this list.
If you want an end-user to be able to save, you can use mongoose hooks/middleware to fire off validators or whatever you want and return a response with meaningful data back to your angular UI. From there, you can have a user 'review' what they're saving. This doesn't persist or get saved, it's only saved once they send everything back up again (if that's how you choose to build the save process).
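If it is you doing the reviewing, here is a rough mongoose sketch of the flag approach from the second option above (all names are made up): UI writes land as "pending", a reviewer later flips them to "approved", and normal reads only look at approved documents.

    import mongoose from "mongoose";

    const recordSchema = new mongoose.Schema({
      payload: mongoose.Schema.Types.Mixed,
      status: { type: String, enum: ["pending", "approved", "rejected"], default: "pending" },
    });
    const Record = mongoose.model("Record", recordSchema);

    // Called from the Express API when the Angular UI submits a change.
    async function submitChange(payload: unknown) {
      return Record.create({ payload }); // lands with status "pending"
    }

    // Review step: approve (or reject) after inspecting the pending document.
    async function approve(id: string) {
      return Record.updateOne({ _id: id }, { $set: { status: "approved" } });
    }

    // Normal reads only see approved data.
    async function listApproved() {
      return Record.find({ status: "approved" });
    }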