How to prevent deserialisation of JSON containing duplicate keys in Dropwizard - json

I'm working with a dropwizard app and one of the endpoints accepts JSON.
Currently, if a JSON object like this is sent in a POST or PUT request (using Postman, for example), only the last value for each duplicate key is kept.
{
  "key": "value1",
  "key": "value2"
}
It gets deserialised into an object, and if a GET is then performed, the response looks like this:
{
  "key": "value2"
}
Dropwizard seems to deal with this by overwriting the map entry with the latest value.
I was wondering if there is a way to prevent JSON with duplicate keys from being accepted as part of the PUT request, and just give an error back to the user.
Any help much appreciated. Thanks
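One possible approach (not from the original thread): Jackson, which Dropwizard uses for JSON, has a strict duplicate-detection feature that makes parsing fail when a field name is repeated. A minimal sketch, assuming a Dropwizard 1.x/2.x-style Application; MyApplication and MyConfiguration are placeholder names:

import com.fasterxml.jackson.core.JsonParser;
import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

public class MyApplication extends Application<MyConfiguration> {
    @Override
    public void run(MyConfiguration config, Environment environment) {
        // Make Jackson throw a JsonParseException when a request body
        // contains the same field name twice, instead of silently
        // keeping the last value.
        environment.getObjectMapper()
                .enable(JsonParser.Feature.STRICT_DUPLICATE_DETECTION);
    }
}

With this enabled, Dropwizard's standard JSON exception mapping should turn the parse failure into a 400 Bad Request for the client.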

Related

How to convert a Json to Graphql mutation format [duplicate]

I have a JSON object and I want to convert it to a GraphQL mutation query in order to invoke a GraphQL API.
My requirement is to create a Spring Boot project and expose a POST API which takes a JSON input and converts it into GraphQL mutation query format,
then consumes a GraphQL API with that mutation query. Can anyone please provide any pointers, as I am not able to proceed?
Pass your object as a variable. You can put it, in JSON format, in the GraphQL variables definition.
If you look at the official tutorial on mutations, the query passes the ReviewInput as a variable, $review:
mutation CreateReviewForEpisode($ep: Episode!, $review: ReviewInput!) {
  createReview(episode: $ep, review: $review) {
    stars
    commentary
  }
}
As the ReviewInput is, at least in my use case, a standard object from the model, used for both REST and GraphQL, the JSON can be pasted directly into the corresponding review field of the GraphQL variables:
{
  "ep": "JEDI",
  "review": {
    "stars": 5,
    "commentary": "This is a great movie!"
  }
}
{ "stars": 5, "commentary": "This is a great movie!" } being your JSON.
So, in short, no conversion is required, you just need your API to put the JSON in the correct mutation template.
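To show the Spring Boot side, here is a minimal sketch (my own, not from the original answer) of posting that mutation with the caller's JSON passed through untouched as the variables; the endpoint URL and class name are placeholders:

import java.util.Map;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

public class GraphqlClient {
    private static final String MUTATION =
        "mutation CreateReviewForEpisode($ep: Episode!, $review: ReviewInput!) {"
      + "  createReview(episode: $ep, review: $review) { stars commentary }"
      + "}";

    public String createReview(Map<String, Object> reviewJson) {
        // A GraphQL request is itself just JSON: the mutation text plus
        // a variables object, which carries the incoming JSON as-is.
        Map<String, Object> body = Map.of(
            "query", MUTATION,
            "variables", Map.of("ep", "JEDI", "review", reviewJson));

        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);

        // "https://example.com/graphql" is a placeholder endpoint.
        return new RestTemplate().postForObject(
            "https://example.com/graphql",
            new HttpEntity<>(body, headers), String.class);
    }
}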

escaping dots in json evaluation in WSO2 Datamapper

I have a JSON object as payload, which contains a dot (".") in one of the identifier names and I want to map this object to another JSON object using the datamapper mediator.
The problem I am facing is that the JSON evaluation uses the dot notation for nested elements. The field "example":
{ "a": { "b": "example"} }
is evaluated by asking for a.b
My object however looks like:
{ "a": { "b.c": "example"} }
I cannot evaluate a.b.c, because it treats b and c as two separate nested elements.
Escaping this identifier name in the datamapper.dmc javascript code does not seem to work. No matter what I try ('', "", [''], [""]) I get the error:
Error while reading input stream. Script engine unable to execute the script javax.script.ScriptException: <eval>:8:43 Expected ident but found [
This may not be the exact solution, as I did not specifically try it for the Data Mapper, but I had a similar problem in a WSO2 Property Mediator while parsing incoming JSON to get a value and set it on a property. I was able to parse such JSON using the following syntax:
json-eval($.A.['b.c'])
Where 'A' is the JSON object containing the 'b.c' JSON element.
I saw you mentioned that you already tried something similar, but just wanted to give my working example in case it helps.
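For what it's worth, the same bracket-notation trick works in standalone JSONPath libraries outside WSO2. A minimal sketch using Jayway's json-path (my own illustration, not anything WSO2-specific):

import com.jayway.jsonpath.JsonPath;

public class DottedKeyExample {
    public static void main(String[] args) {
        String json = "{ \"a\": { \"b.c\": \"example\" } }";

        // Dot notation ($.a.b.c) would treat b and c as separate levels;
        // bracket notation quotes the whole key, dot included.
        String value = JsonPath.read(json, "$.a['b.c']");
        System.out.println(value); // prints "example"
    }
}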

Can Loggly parse (derive values from) a JSON array (received using HTTP/S Bulk Endpoint)

I have a Loggly free trial account set up and receiving events.
The JSON is an array of objects, delivered over a webhook HTTPS POST. Each object is an event (as would seem reasonable for a bulk-load interface). Simplified example:
[
  {
    "msys": {
      "track_event": {
        "event_id": "319115158633969504",
        "friendly_from": "traffic.gen@example.com"
      }
    }
  },
  {
    "msys": {
      "track_event": {
        "event_id": "319115158633970211",
        "friendly_from": "traffic2.gen@example.com"
      }
    }
  }
]
Can Loggly parse the contents of the JSON and extract values (e.g. event_id)? I've tried using the "Create Derived Fields" dialog, but this seems to be regex/line based rather than having JSON awareness.
Logically, I thought passing a JSON array would work. However, after much trial and error, I found that if you pass multiple separate JSON objects separated by a newline character, they get parsed as separate events:
curl -H "content-type:application/json" -d $'{"timestamp":"2018-09-02T17:16:15.003123Z", "message":"test1"}\n{"timestamp":"2018-09-02T17:17:15.003123Z", "message":"test2"}' http://logs-01.loggly.com/bulk/<your-token>
A couple of things bother me with this method (e.g. it's not strictly valid application/json you are passing, despite the content-type, and it's not a JSON array), but this may get you over the hump in the problems you are having. I'm hoping there is a more elegant answer than this.
See the following for more information: Loggly - Bulk Endpoint and Loggly - Automated Parsing
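If you control the sender, one way to bridge the gap (my own sketch, not from the original answer) is to split the incoming JSON array into the newline-delimited form the bulk endpoint parses as one event per line, using Jackson:

import java.util.stream.Collectors;
import java.util.stream.StreamSupport;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class LogglyBulkFormatter {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Turn a JSON array into newline-delimited JSON objects.
    public static String toNewlineDelimited(String jsonArray) throws Exception {
        JsonNode array = MAPPER.readTree(jsonArray);
        return StreamSupport.stream(array.spliterator(), false)
                .map(JsonNode::toString)   // each element compact, on one line
                .collect(Collectors.joining("\n"));
    }
}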

sparkjobserver adds a [ in front of every { and [

I am using the sparkjobserver RESTful service. Everything works fine except that the returned JSON string has an extra [] around every object and array. Each array becomes
[[.......]] and each object becomes
[{.....}].
has anyone seen this problem before? Any solutions?
I would suggest you return a JSON string as the result. If you return a Scala object, it is marshalled by Spray's complete API.
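As an illustration of the suggestion (my own sketch; spark-jobserver jobs are typically Scala, so treat this as the general idea rather than jobserver-specific code), serialize the result yourself so the framework's marshalling layer has nothing left to wrap:

import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ResultSerialization {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Return a pre-serialized JSON string instead of a raw object,
    // so no additional marshalling (and no extra brackets) is applied.
    public static String asJsonString(Map<String, List<Integer>> result) throws Exception {
        return MAPPER.writeValueAsString(result);
    }
}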

Source of documentation for a standard JSON document structure?

I am working on a (.NET) REST API which is returning some JSON data. The consumer of the API is an embedded client. We have been trying to establish the structure of the JSON we will be working with. The format the embedded client wants to use is something I have not seen before in working with JSON. I suggested that it is not "typical" JSON. I was met with the question "Where is 'typical' JSON format documented?"
As an example of JSON I "typically" see:
{
  "item": {
    "users": [ ... list of user objects ... ],
    "times": [ ... list of time objects ... ]
  }
}
An example of the non-typical JSON:
{
  "item": [
    {
      "users": [ ... list of user objects ... ]
    },
    {
      "times": [ ... list of time objects ... ]
    }
  ]
}
In the second example, item contains an array of objects, which each contain a property whose value is an array of entities. This is valid JSON. However, I have not encountered another instance of JSON that is structured this way when it is not an arbitrary array of objects but is in fact a set list of properties on the "item" object.
In searching json.org, stackoverflow.com and other places on the interwebs I have not found any guidelines on why the structure of JSON follows the "typical" example above rather than the second example.
Can you provide links to documentation that would provide recommendations for one format or the other above?
Not a link, but just a straightforward answer: items are either indexed (0, 1, 2, ...) or keyed (users, times). No matter what software you use, you can get at indexed or keyed data equally easily and quickly. But not with what you call "non-typical" JSON: to get at the users, I have to iterate through the array and find a dictionary that has the key "users". But there might be two or more dictionaries with that key. So what am I supposed to do then? If you use JSON Schema, the "non-typical" JSON is impossible to check.
In iOS, in the typical case I write
NSArray* users = itemDict[@"users"];
For the non-typical JSON I have to write
NSArray* users = nil;
for (NSDictionary* dict in itemArray) {
    if (dict[@"users"] != nil) {
        users = dict[@"users"];
    }
}
but that still has no error checking for multiple dicts with the key "users", an error that in the first case isn't even possible. So just tell them that what they are asking for is rubbish and creates nothing but unnecessary work. With other software, you probably have the same problems.
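For comparison (my own sketch, not from the original answer), the same contrast in Java with Jackson, assuming the two payload shapes above:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ShapeComparison {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Typical shape: "item" is an object, so the lookup is direct.
    static JsonNode usersFromTypical(String json) throws Exception {
        return MAPPER.readTree(json).get("item").get("users");
    }

    // Non-typical shape: "item" is an array, so we must scan it and
    // hope exactly one element carries the "users" key.
    static JsonNode usersFromNonTypical(String json) throws Exception {
        JsonNode users = null;
        for (JsonNode element : MAPPER.readTree(json).get("item")) {
            if (element.has("users")) {
                users = element.get("users");
            }
        }
        return users; // may be null; duplicates silently overwrite
    }
}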