Camunda REST engine: parsing JSON numbers as BigDecimal

I've downloaded the Camunda BPM platform in order to evaluate DMN decisions using the REST API.
For that I deploy a decision table and send this JSON request:
{
  "variables": {
    "a": {
      "value": 100.01
    },
    "b": {
      "value": 10.01
    }
  }
}
I receive the following response:
[
  {
    "result": {
      "type": "Double",
      "value": 110.02000000000001,
      "valueInfo": {}
    }
  }
]
I expected the value of "result" to be "110.02", but instead it is "110.02000000000001".
The issue is that Camunda "engine-rest" receives numbers as "Double", so when it computes the sum it loses precision.
Is there a way to make Camunda "engine-rest" parse numbers from JSON as "BigDecimal" instead of "Double", so that no precision is lost?

This thread discusses your problem. Looks like it's an open bug.
"Below, you can find a test class which evaluates a simple decision table with a decimal result type, showcasing it being rounded to the nearest double value if it's not surrounded by quotes."
Sounds just like what you're looking for.
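Going by that remark about quotes, a possible workaround until the bug is fixed is to send the values surrounded by quotes, so that they are not parsed as Double. A hedged sketch of such a request (untested; whether it works depends on how your decision table handles string-typed inputs):
{
  "variables": {
    "a": {
      "value": "100.01"
    },
    "b": {
      "value": "10.01"
    }
  }
}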
Best regards
Sebastian

Related

Is returning only IDs for a JSON API collection allowed?

So let's say I have a resource called articles. These have a numeric id, and you can access them under something such as:
GET /articles/1 for a specific article.
And let's say that returns something like:
{
  "data": {
    "type": "articles",
    "id": "1",
    "attributes": {
      "title": "JSON:API paints my bikeshed!",
      "body": "A bunch of text here"
    }
  }
}
Now my question is how to handle a request to GET /articles. I.e. how to deal with the request to the collection.
You see, accessing the body of the article is slow and painful. The last thing I want this REST API to do is actually try to get all that information. Yet as far as I can tell the JSON API schema seems to assume that you can always return full resources.
Is there any "allowed" way to return just the IDs (or partial attributes, like "title") under JSON API while actively not providing the ability to get the full resource?
Something like:
GET /articles returning:
{
  "data": [
    {
      "type": "article_snubs",
      "id": 1,
      "attributes": {
        "title": "JSON:API paints my bikeshed!"
      }
    },
    {
      "type": "article_snubs",
      "id": 2,
      "attributes": {
        "title": "Some second thing here"
      }
    }
  ]
}
Maybe with links to the full articles?
Basically, is this at all possible while following JSON API or a REST standard? Because there is absolutely no way that GET /articles is ever going to return full resources, due to the associated cost of getting the data, which I do not think is a rare situation to be in.
As far as I understand the JSON API specification, there is no requirement that an API must return all fields (attributes and relationships) of a resource by default. The only MUST statement regarding field inclusion that I'm aware of is related to Sparse Fieldsets (the fields query param):
Sparse Fieldsets
[...]
If a client requests a restricted set of fields for a given resource type, an endpoint MUST NOT include additional fields in resource objects of that type in its response.
https://jsonapi.org/format/#fetching-sparse-fieldsets
Even though this is not forbidden by the spec, I would not recommend that approach. Returning only a subset of fields makes consuming your API much harder, as consumers have to consult the documentation to get a list of all supported fields. It's much more within the meaning of the spec to let the client decide which information (and which related resources) should be included.
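For reference, sparse fieldsets put the trimming in the client's hands, e.g.:
GET /articles?fields[articles]=title
Per the MUST quoted above, the response to this request may then only contain the title attribute for each article.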
The "attributes" object of a JSON-API doc does not need to be a complete representation:
attributes: an attributes object representing some of the resource’s data.
You can provide a "self" link to get the full representation, or perhaps even a "body" link to get just the body:
links: a links object containing links related to the resource.
E.g.
{
  "data": [
    {
      "type": "article_snubs",
      "id": "1",
      "attributes": {
        "title": "JSON:API paints my bikeshed!"
      },
      "links": {
        "self": "/articles/1",
        "body": "/articles/1/body"
      }
    },
    {
      "type": "article_snubs",
      "id": "2",
      "attributes": {
        "title": "Some second thing here"
      },
      "links": {
        "self": "/articles/2",
        "body": "/articles/2/body"
      }
    }
  ]
}

Need documentation for *.analysis.windows.net/public/reports/querydata

I am reverse engineering an app that sends queries to
SOMESERVERNAME.analysis.windows.net/public/reports/querydata via an HTTP POST of a JSON-structured query.
Some initial lines of a sample query are at the end of this message.
I can't find any documentation on this anywhere. I don't know if this is some secret API or what. I ultimately would like to just ignore the aggregations altogether and just dump the raw data, which seems to sit in some flat-file type container on the back-end, but without some API documentation I'm stuck with just re-running the super basic handful of queries I've been able to intercept.
Note: this app is an embedded analytics page created with Power BI, but the only REST API I can find for Power BI has nothing to do with querying, just basic object management.
Thanks!
{
  "version": "1.0.0",
  "queries": [
    {
      "Query": {
        "Commands": [
          {
            "SemanticQueryDataShapeCommand": {
              "Query": {
                "Version": 2,
                "From": [
                  {
                    "Name": "s",
                    "Entity": "Sheet1"
                  }
                ],
                "Select": [
                  {
                    "Aggregation": {
                      "Expression": {
                        "Column": {
                          "Expression": {
                            "SourceRef": {
                              "Source": "s"
                            }
                          },
                          "Property": "Total"
                        }
                      },
                      "Function": 0
                    },
                    "Name": "Sum(Sheet1.Total)"
                  }
                ],
                "Where": [
                  {
                    "Condition": {
                      "In": {
                        "Expressions": [
                          {
                            "Column": {
                              "Expression": {
                                "SourceRef": {
                                  "Source": "s"
                                }
                              },
                              "Property": "Year"
                            }
                          }
                        ],
                        "Values": [
                          [
                            {
                              "Literal": {
                                "Value": "'2018'"
                              }
                            }
                          ]
                        ]
                      }
                    }
                  },
............
I have built a client that scrapes data off a specific Power BI report using the same API; you'll probably be able to adapt it to your use case. Maybe we can even abstract the code into a more generalized Power BI client!
Having tinkered with the API for two days, I realised that there are many ways the data can be formatted:
"nested"/multidimensional data can be unflattened, flattened by 1 degree, etc.
a primary "table" of a result dataset (in data.PH) can reference others (in data.SH)
The basics are as follows:
A dataset is structured like a multidimensional table, with cells containing values.
In a set of cells, the first always has a field S that contains the schema of itself and all subsequent cells.
The schema maps a field of each cell's object to a selection from your query, e.g. the G0 field to the queried column age (see the sketch below).
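To make this concrete, here is a purely illustrative sketch of a set of cells (S and G0 are the only names taken from the description above; the exact encoding in real responses differs):
[
  { "S": { "G0": "age" }, "G0": 23 },
  { "G0": 31 },
  { "G0": 47 }
]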
My client seems to work only with a specific type of query (SemanticQueryDataShapeCommand), a specific number of dimensions, and a specific column marked as primary (via Binding.Primary). But maybe it helps! https://github.com/derhuerst/fetch-bvg-occupancy/blob/1ebb864b1ff7130f9d2f0ab031c6d78bcabdd633/lib/parse-dataset.js
The only documented way to use this API is through the ADOMD.NET or OleDb provider.
If you want to send a DAX/MDX query and retrieve data programmatically, there's a sample of how to front-end the service with a simple REST API here.

Rename invalid keys from JSON

I have the following flow in NiFi; the JSON has 1000+ objects in it.
InvokeHTTP -> SplitJson -> PutMongo
The flow works fine until I receive keys in the JSON with "." in the name, e.g. "spark.databricks.acl.dfAclsEnabled".
My current solution is not optimal: I have jotted down the bad keys and am using multiple ReplaceText processors to replace "." with "_". I am not using regex, I am using string-literal find/replace. So each time I get a failure in the PutMongo processor, I insert a new ReplaceText processor.
This is not maintainable. I am wondering if I can use JOLT for this? A couple of notes regarding the input JSON:
1) There is no set structure; the only thing that is confirmed is that everything will be in an events array, but the event objects themselves are free-form.
2) Maximum list size = 1000.
3) It is third-party JSON, so I can't ask for a change in format.
Also, keys with "." can appear at any level, so I am looking for a JOLT spec that can cleanse and rename them at all levels.
{
  "events": [
    {
      "cluster_id": "0717-035521-puny598",
      "timestamp": 1531896847915,
      "type": "EDITED",
      "details": {
        "previous_attributes": {
          "cluster_name": "Kylo",
          "spark_version": "4.1.x-scala2.11",
          "spark_conf": {
            "spark.databricks.acl.dfAclsEnabled": "true",
            "spark.databricks.repl.allowedLanguages": "python,sql"
          },
          "node_type_id": "Standard_DS3_v2",
          "driver_node_type_id": "Standard_DS3_v2",
          "autotermination_minutes": 10,
          "enable_elastic_disk": true,
          "cluster_source": "UI"
        },
        "attributes": {
          "cluster_name": "Kylo",
          "spark_version": "4.1.x-scala2.11",
          "node_type_id": "Standard_DS3_v2",
          "driver_node_type_id": "Standard_DS3_v2",
          "autotermination_minutes": 10,
          "enable_elastic_disk": true,
          "cluster_source": "UI"
        },
        "previous_cluster_size": {
          "autoscale": {
            "min_workers": 1,
            "max_workers": 8
          }
        },
        "cluster_size": {
          "autoscale": {
            "min_workers": 1,
            "max_workers": 8
          }
        },
        "user": ""
      }
    },
    {
      "cluster_id": "0717-035521-puny598",
      "timestamp": 1535540053785,
      "type": "TERMINATING",
      "details": {
        "reason": {
          "code": "INACTIVITY",
          "parameters": {
            "inactivity_duration_min": "15"
          }
        }
      }
    },
    {
      "cluster_id": "0717-035521-puny598",
      "timestamp": 1535537117300,
      "type": "EXPANDED_DISK",
      "details": {
        "previous_disk_size": 29454626816,
        "disk_size": 136828809216,
        "free_space": 17151311872,
        "instance_id": "6cea5c332af94d7f85aff23e5d8cea37"
      }
    }
  ]
}
I created a template using ReplaceText and RouteOnContent to perform this task. The loop is required because the regex only replaces the first "." in a JSON key on each pass. You might be able to refine this to perform all substitutions in a single pass, but after fuzzing the regex with look-ahead and look-behind groups for a few minutes, re-routing was faster. I verified this works with the JSON you provided, and also with JSON where the keys and values are on different lines (with ":" on either line):
...
"spark_conf": {
"spark.databricks.acl.dfAclsEnabled":
"true",
"spark.databricks.repl.allowedLanguages"
: "python,sql"
},
...
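For reference, the pattern involved has roughly this shape (a hypothetical reconstruction, not the exact template from my flow):
Search Value: "([^".]+)\.([^"]+)"(\s*:)
Replacement Value: "$1_$2"$3
Because the first capture group cannot contain a ".", each pass collapses exactly one dot per key, which is why the flow loops through RouteOnContent until no dotted keys remain.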
You could also use an ExecuteScript processor with Groovy to ingest the JSON, quickly filter for all JSON keys that contain ".", perform a collect operation to do the replacement, and re-insert the keys into the JSON data, if you want a single processor to do this in a single pass.
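Whatever the scripting language, the core of that single-pass approach is a recursive walk over the object tree. A minimal sketch of the logic in plain JavaScript (renameKeys is a made-up helper, not any NiFi API; a Groovy version inside ExecuteScript would additionally need to read and write the flow file):

// Recursively rewrite every key containing "." to use "_" instead.
function renameKeys(node) {
  if (Array.isArray(node)) return node.map(renameKeys);
  if (node !== null && typeof node === "object") {
    const out = {};
    for (const [key, value] of Object.entries(node)) {
      out[key.replace(/\./g, "_")] = renameKeys(value);
    }
    return out;
  }
  return node; // primitives pass through unchanged
}

const raw = '{"spark_conf": {"spark.databricks.acl.dfAclsEnabled": "true"}}';
console.log(JSON.stringify(renameKeys(JSON.parse(raw))));
// -> {"spark_conf":{"spark_databricks_acl_dfAclsEnabled":"true"}}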

API response: difference between string and numeric type

I have an API used by a mobile app. It has an endpoint named /feed
It returns a collection of objects with different types.
[
  {
    "type": "USER",
    "value": 5632
  },
  {
    "type": "IMAGE",
    "value": 1412
  }
]
I am wondering whether the type should be a string or a number, like this:
[
  {
    "type": 100,
    "value": 5632
  },
  {
    "type": 200,
    "value": 1412
  }
]
Is there a significant difference between the two? The iOS developer of the app stated that numbers are easier to compare than strings.
I found a similar question, but it does not have any answers.
Yes, numbers are faster to compare than strings; that's a fact. Whether it is a significant gain or not will depend on the algorithm that consumes this JSON, the quantity of data it contains, etc. Furthermore, sacrificing code design for performance is often a bad option. It really depends on your project.
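To illustrate the design side of the trade-off (all names below are made up): string types are self-describing, while numeric codes usually need a lookup table to stay readable:

// String types compare directly and read naturally:
const item = { type: "USER", value: 5632 };
if (item.type === "USER") { /* handle user entries */ }

// Numeric codes compare just as easily, but need a mapping for readability:
const FeedType = { USER: 100, IMAGE: 200 };
const item2 = { type: 100, value: 5632 };
if (item2.type === FeedType.USER) { /* handle user entries */ }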

Obtain a different JSON object structure in AngularJS

I'm working with AngularJS.
In this part of the project my goal is to obtain a JSON structure after filling a form with some particular values.
Here's the fiddle of my simple form: Fiddle
With the form I will query KairosDB, my NoSQL database; I query data from it with a JSON object. The form is structured this way:
a Name
a certain number of Tags, with a tag id ("ch" for example) and a tag value ("932" for example)
a certain number of Aggregators to manipulate the data coming from the DB
a Start Timestamp and an End Timestamp (for now they are static and simply included in the final JSON object)
After filling this form, my code produces, for example, this JSON object:
{
  "metrics": [
    {
      "tags": [
        {
          "id": "ch",
          "value": "932"
        },
        {
          "id": "ch",
          "value": "931"
        }
      ],
      "aggregators": {
        "name": "sum",
        "sampling": [
          {
            "value": "1",
            "unit": "milliseconds",
            "type": "SUM"
          }
        ]
      }
    }
  ],
  "cache_time": 0,
  "start_absolute": 123,
  "end_absolute": 1234
}
Unfortunately, KairosDB expects a different structure. As you can see, the tag id "ch" is not preceded by an "id" key, and tag values coming from the same tag id are grouped together:
{
  "metrics": [
    {
      "tags": {
        "ch": [
          "932",
          "931"
        ]
      },
      "name": "AIENR",
      "aggregators": [
        {
          "name": "sum",
          "sampling": {
            "value": "1",
            "unit": "milliseconds"
          }
        }
      ]
    }
  ],
  "cache_time": 0,
  "start_absolute": 1367359200000,
  "end_absolute": 1386025200000
}
My question is: is there a way to obtain a JSON structure like the one accepted by KairosDB from an AngularJS form? Thanks to everyone.
I've seen this topic as the one most similar to mine, but it isn't about AngularJS.
Personally, I'd do the refactoring work in the backend: have whatever server interface sends and receives the data do the manipulation. Otherwise you'll end up needing to refactor your data inside Angular anywhere you want to use that dataset, whereas doing it in the backend puts it in a single access point.
Of course, you could do it in Angular: just replace userString in the submitData method with a transformed copy of the array, replacing the tags section with data in the new format, and likewise refactor the returned result into the expected format when you get a reply.
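A minimal sketch of that transformation in plain JavaScript (assuming the form model has the shape of the first JSON object above; toKairosQuery and form.name are made up for illustration):

// Convert the form's tag array into KairosDB's grouped-tags object.
function toKairosQuery(form) {
  var metric = form.metrics[0];
  var tags = {};
  metric.tags.forEach(function (t) {
    (tags[t.id] = tags[t.id] || []).push(t.value); // {ch: ["932", "931"]}
  });
  return {
    metrics: [{
      tags: tags,
      name: form.name, // e.g. "AIENR"; not present in the form JSON above
      aggregators: [{
        name: metric.aggregators.name,
        sampling: {
          value: metric.aggregators.sampling[0].value,
          unit: metric.aggregators.sampling[0].unit
        }
      }]
    }],
    cache_time: form.cache_time,
    start_absolute: form.start_absolute,
    end_absolute: form.end_absolute
  };
}

The inverse mapping for replies would follow the same pattern in the opposite direction.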