JSON schema help, array of objects

I am trying to write a JSON schema where the key "pStock" is the total stock of an array of bike sizes ("size"). Each size has an inventory, or "count". I have two versions of the same code; the first one returns an error message even though the syntax looks correct to my eye.
"pStock": [
{
"size": {
"type": "string",
"count": {
"type": "number"
}
}
}
}
]
Here is the second version, which returns no errors, but I'm not quite sure it's saying what I want it to say.
"pStock": {
"type": ["object"],
"size": {
"type": "string",
"count": {
"type": "number"
}
}
}
EDIT 1
I appreciate all of these responses. I made a silly error in posting. Below is the correct "wrong" code that isn't working; it still fails with the error 'Error, schema is invalid: data/properties/pStock should be object,boolean at Ajv.validateSchema'.
"pStock": [
{
"size": {
"type": "string",
"count": {
"type": "number"
}
}
}
]
Any help would be greatly appreciated.

Count the opening and closing curly braces in your first JSON. It has 3 opening and 4 closing.
"pStock": [
{ // Open 1
"size": { // Open 2
"type": "string",
"count": { // Open 3
"type": "number"
} // Close 3
} // Close 2
} // Close 1
} // Close what?
]
Just remove the last one and it will work.

The closing square bracket ] on the pStock array doesn't match up because you have an extra brace }, i.e.
"pStock": [
{
"size": {
"type": "string",
"count": {
"type": "number"
}
}
}
} <--- this is wrong
]
should be
{
"pStock":[
{
"size":{
"type":"string",
"count":{
"type":"number"
}
}
}
]
}

The first version should look like this:
"pStock": [
{
"size": {
"type": "string",
"count": {
"type": "number"
}
}
}
]
You had one } too many (line 7).
The second version does not represent what you wanted; it does not contain the array of sizes.
But you could structure it like this instead (pStock with one key per size, and the inventory/count inside each size):
"pStock": {
"size1": {
inventory: "5",
count: 4
},
"size2": {
inventory: "5",
count: 4
}
}
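For what it's worth, the EDIT 1 error happens because, in JSON Schema, every value under properties must itself be a schema (an object or a boolean), never an array; that is exactly what Ajv's "should be object,boolean" message is complaining about. If the goal is "pStock is an array of size objects, each with a count", a sketch of the schema plus an Ajv check could look like this (only the property names are taken from the question):
// Sketch: pStock as an array of { size, count } objects, checked with Ajv.
const Ajv = require("ajv");
const ajv = new Ajv();

const schema = {
  type: "object",
  properties: {
    pStock: {
      type: "array",                  // pStock is a list of sizes
      items: {
        type: "object",
        properties: {
          size: { type: "string" },   // e.g. "M"
          count: { type: "number" }   // inventory for that size
        },
        required: ["size", "count"]
      }
    }
  }
};

const validate = ajv.compile(schema);
console.log(validate({ pStock: [{ size: "M", count: 4 }] })); // true
console.log(validate({ pStock: { size: "M" } }));             // false: pStock must be an array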

Related

In Logic Apps, parsing a JSON array throws an error for a single object but works fine for multiple objects

While parsing JSON in an Azure Logic App, my array can contain a single object or multiple objects ("Box", as shown in the example below).
Both types of input are valid, but when only a single object comes in, it throws the error "Invalid type. Expected Array but got Object".
Input 1 (throws an error):
{
"MyBoxCollection":
{
"Box":{
"BoxName": "Box 1"
}
}
}
Input 2 (works fine):
{
"MyBoxCollection":
{
"Box": [
{
"BoxName": "Box 1"
},
{
"BoxName": "Box 2"
}
]
}
}
JSON Schema :
"MyBoxCollection": {
"type": "object",
"properties": {
"box": {
"type": "array",
items": {
"type": "object",
"properties": {
"BoxName": {
"type": "string"
},
......
.....
..
}
Error Details :-
[
{
"message": "Invalid type. Expected Array but got Object .",
"lineNumber": 0,
"linePosition": 0,
"path": "Order.MyBoxCollection.Box",
"schemaId": "#/properties/Root/properties/MyBoxCollection/properties/Box",
"errorType": "type",
"childErrors": []
}
]
I used to use the trick of injecting a couple of dummy rows in the resultset as suggested by the other posts, but I recently found a better way. Kudos to Thomas Prokov for providing the inspiration in his NETWORG blog post.
The Parse JSON schema accepts multiple choices for type, so simply replace
"type": "array"
with
"type": ["array","object"]
and your parse step will happily parse either an array or a single value (or no value at all).
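Applied to the schema in the question, the Box fragment would then look roughly like this (only the relevant part is shown; items kicks in when an array arrives, properties when a single object does):
"Box": {
"type": ["array", "object"],
"items": {
"type": "object",
"properties": {
"BoxName": { "type": "string" }
}
},
"properties": {
"BoxName": { "type": "string" }
}
}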
You may then need to identify which scenario you're in: 0, 1 or multiple records in the resultset. Below is how you can create a variable (ResultsetSize) which takes one of 3 values (rs_0, rs_1 or rs_n) for your switch:
"Initialize_ResultsetSize": {
"inputs": {
"variables": [
{
"name": "ResultsetSize",
"type": "string",
"value": "rs_n"
}
]
},
"runAfter": {
"<replace_with_name_of_previous_action>": [
"Succeeded"
]
},
"type": "InitializeVariable"
},
"Check_if_resultset_is_0_or_1_records": {
"actions": {
"Set_ResultsetSize_to_0": {
"inputs": {
"name": "ResultsetSize",
"value": "rs_0"
},
"runAfter": {},
"type": "SetVariable"
}
},
"else": {
"actions": {
"Set_ResultsetSize_to_1": {
"inputs": {
"name": "ResultsetSize",
"value": "rs_1"
},
"runAfter": {},
"type": "SetVariable"
}
}
},
"expression": {
"and": [
{
"equals": [
"#string(body('<replace_with_name_of_Parse_JSON_action>')?['<replace_with_name_of_root_element>']?['<replace_with_name_of_list_container_element>']?['<replace_with_name_of_item_element>']?['<replace_with_non_null_element_or_attribute>'])",
""
]
}
]
},
"runAfter": {
"Initialize_ResultsetSize": [
"Succeeded"
]
},
"type": "If"
},
"Process_resultset_depending_on_ResultsetSize": {
"cases": {
"Case_no_record": {
"actions": {
},
"case": "rs_0"
},
"Case_one_record_only": {
"actions": {
},
"case": "rs_1"
}
},
"default": {
"actions": {
}
},
"expression": "#variables('ResultsetSize')",
"runAfter": {
"Check_if_resultset_is_0_or_1_records": [
"Succeeded",
"Failed",
"Skipped",
"TimedOut"
]
},
"type": "Switch"
}
For this problem, I found another Stack Overflow post describing a similar issue. When there is only one "Box", it is converted as a {key/value pair} rather than an [array] when converting to JSON format. I think this is by design, so maybe we can just add a dummy "Box" record at the source of your XML data, such as:
<Box>specific_test</Box>
and then filter out the "specific_test" entry in the next steps.
Another workaround for your reference:
If your JSON data has only one array, we can use that to make a decision. We can check whether the JSON data contains the "[" character: if it does, indexOf returns the index of the "[" character; if it doesn't, it returns -1.
The expression looks like this:
indexOf('{"MyBoxCollection":{"Box":[aaa,bbb]}}', '[')
When the data doesn't contain "[", the expression returns -1.
Then we can add an "If" condition: if the result is greater than 0, do "Parse JSON" with one schema; if it is -1, do "Parse JSON" with the other schema.
Hope this helps with your problem~
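As a rough sketch of that condition (the action name is a placeholder, in the same style as the answer above), the If expression could be something like:
@greater(indexOf(string(body('<replace_with_name_of_previous_action>')), '['), 0)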
We faced a similar issue. The only solution we found was to manipulate the XML before conversion: we updated the XML nodes that need to be an array even when there is only a single element. We used an Azure Function to update the required XML attributes and then returned the XML for conversion in Logic Apps. Hope this helps someone.
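The Azure Function itself isn't shown in the answer; purely as an illustration of the same normalization idea, applied to the converted JSON rather than to the XML (so not the poster's actual code), a small helper could look like this:
// Illustration only: force a property to always be an array,
// so downstream steps can rely on a consistent shape.
function ensureArray(value) {
  if (value === undefined || value === null) return [];
  return Array.isArray(value) ? value : [value];
}

// 'payload' stands in for the JSON string produced from the XML (placeholder).
const payload = '{"MyBoxCollection":{"Box":{"BoxName":"Box 1"}}}';
const converted = JSON.parse(payload);
converted.MyBoxCollection.Box = ensureArray(converted.MyBoxCollection.Box);
console.log(JSON.stringify(converted)); // {"MyBoxCollection":{"Box":[{"BoxName":"Box 1"}]}}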

GraphQL Query returns null objects both in GraphiQL and App from JSON data source

I'm trying to get my mocked JSON data via GraphQL in Gatsby. The response shows the correct data, but also two null objects. Why is this happening?
I'm using the gatsby-transformer-json plugin to query my data and gatsby-source-filesystem to point the transformer to my JSON files.
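For context, the relevant part of my gatsby-config.js looks roughly like this (the source path and name are placeholders, not my exact values):
module.exports = {
  plugins: [
    `gatsby-transformer-json`,
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        // placeholder path: wherever the JSON mock files live
        path: `${__dirname}/src/data`,
        name: `data`,
      },
    },
  ],
}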
categories.json
the mock file I'm trying to get to work :)
{
"categories": [
{
"title": "DEZERTY",
"path": "/dezerty",
"categoryItems": [
{
"categoryName": "CUKRIKY",
"image": "../../../../static/img/dessertcategories/cukriky.jpg"
},
{
"categoryName": "NAHODNE",
"image": "../../../../static/img/dessertcategories/nahodne.jpg"
},
]
},
{
"title": "CANDY BAR",
"path": "/candy-bar",
"categoryItems": [
{
"categoryName": "CHEESECAKY",
"image": "../../../../static/img/dessertcategories/cheesecaky.jpg"
},
{
"categoryName": "BEZLEPKOVÉ TORTY",
"image": "../../../../static/img/dessertcategories/bezlepkove-torty.jpg"
},
]
}
]
}
GraphQL query in GraphiQL
query Collections {
allMockJson {
edges {
node {
categories {
categoryItems {
categoryName
image
}
title
path
}
}
}
}
}
And the response GraphiQL gives me
{
"data": {
"allMockJson": {
"edges": [
{
"node": {
"categories": null
}
},
{
"node": {
"categories": null
}
},
{
"node": {
"categories": [
{
"categoryItems": [
{
"categoryName": "CHEESECAKY",
"image": "../../../../static/img/dessertcategories/cheesecaky.jpg"
},
{
"categoryName": "BEZLEPKOVÉ TORTY",
"image": "../../../../static/img/dessertcategories/bezlepkove-torty.jpg"
}
],
"title": "DEZERTY",
"path": "/dezerty"
},
{
"categoryItems": [
{
"categoryName": "CUKRIKY",
"image": "../../../../static/img/dessertcategories/CUKRIKY.jpg"
},
{
"categoryName": "NAHODNE",
"image": "../../../../static/img/dessertcategories/NAHODNE.jpg"
}
],
"title": "CANDY BAR",
"path": "/candy-bar"
}
]
}
}
]
}
}
}
I expected only to get the DEZERTY and CANDY BAR sections. Why are there null categories and how do I fix it?
Thanks in advance
Your JSON contains syntax errors in the objects DEZERTY and CANDY BAR; it silently fails without telling you. Run it through a JSON linter and you'll get:
Error: Parse error on line 12: },
Error: Parse error on line 25: },
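Both errors point at the trailing comma after the last element of each categoryItems array; each array needs to end without it, like this:
"categoryItems": [
{
"categoryName": "CUKRIKY",
"image": "../../../../static/img/dessertcategories/cukriky.jpg"
},
{
"categoryName": "NAHODNE",
"image": "../../../../static/img/dessertcategories/nahodne.jpg"
}
]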
Try again. Your query should work now.
You should look into an IDE that highlights these types of errors and saves you time and frustration.

ElasticSearch multiple terms search json

I have a ton of items in a Db with many columns. I need to search across two of these columns to get one data set.
The first column, genericCode, would group together any of the rows that have that code.
The second column, genericId, calls out a specific row to add because it doesn't have one of the genericCodes I'm looking for.
The back-end C# sets up my json for me as follows, but it returns nothing.
{
"from": 0,
"size": 50,
"aggs": {
"paramA": {
"nested": {
"path": "paramA"
},
"aggs": {
"names": {
"terms": {
"field": "paramA.name",
"size": 10
},
"aggs": {
"specValues": {
"terms": {
"field": "paramA.value",
"size": 10
}
}
}
}
}
}
},
"query": {
"bool": {
"must": [
{
"term": {
"locationId": {
"value": 1
}
}
},
{
"terms": {
"genericCode": [
"10A",
"20B"
]
}
},
{
"terms": {
"genericId": [
11223344
]
}
}
]
}
}
}
I get an empty result set. If I remove either of the "terms" I get what I would expect, so I just need to combine those terms into one search.
I've gone through a lot of the documentation here: https://www.elastic.co/guide/en/elasticsearch/reference/current/search.html
and still can't seem to find what I'm looking for.
Let's say I'm Jelly Belly and I want to create a bag of jelly beans with all the Star Wars and Disney jelly beans, and I also want to add all beans of the color green. That is basically what I'm trying to do.
EDIT: Changing the "must" to "should" isn't quite right either. I need it to be (in pseudo-SQL):
SELECT *
FROM myTable
Where locationId = 1
AND (
genericCode = "this", "that"
OR
genericId = 1234, 5678
)
The locationId separates our data in an important way.
I found this post: elasticsearch bool query combine must with OR
and it has gotten me closer, but not all the way there...
I've tried several iterations of should > must > should > must building this query and get varying results, but nothing accurate.
Here is the query that is working. It helped when I realized I was passing in the wrong data for one of my parameters. Doh!
Nest the should inside the must, as @khachik noted in the comment above. I had this some time ago, but it wasn't working due to the above blunder.
{
"from": 0,
"size": 10,
"aggs": {
"specs": {
"nested": {
"path": "paramA"
},
"aggs": {
"names": {
"terms": {
"field": "paramA.name",
"size": 10
},
"aggs": {
"specValues": {
"terms": {
"field": "paramA.value",
"size": 10
}
}
}
}
}
}
},
"query": {
"bool": {
"must": [
{
"term": {
"locationId": {
"value": 1
}
}
},
{
"bool": {
"should": [
{
"terms": {
"genericCode": [
"101",
"102"
]
}
},
{
"terms": {
"genericId": [
3078711,
3119430
]
}
}
]
}
}
]
}
}
}
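One small note: if you want to be explicit that at least one of the two terms clauses has to match, you can spell that out with minimum_should_match on the inner bool (only that fragment shown):
{
"bool": {
"should": [
{ "terms": { "genericCode": ["101", "102"] } },
{ "terms": { "genericId": [3078711, 3119430] } }
],
"minimum_should_match": 1
}
}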

mongoDB delete from array

I am wondering how I can delete certain elements from an array in mongoDB.
I have the following json saved in the collection geojsons:
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"ID": "1753242",
"TYPE": "8003"
}
},
{
"type": "Feature",
"properties": {
"ID": "4823034",
"TYPE": "7005"
}
},
{
"type": "Feature",
"properties": {
"ID": "4823034",
"TYPE": "8003"
}
}
]
}
And I want to delete every element in the array features, where properties.TYPE equals 8003.
I tried it with the following statement, but it does not delete anything.
db.geojsons.aggregate([
{$match: {'features': {$elemMatch : {"properties.TYPE": '8003' }}}},
{$project: {
features: {$filter: {
input: '$features',
as: 'feature',
cond: {$eq: ['$$feature.properties.TYPE', '8003']}
}}
}}
])
.forEach(function(doc) {
doc.features.forEach(function(feature) {
db.collection.deleteOne(
{ "feature.properties.TYPE": "8003" }
);
print( "in ")
});
});
Does anybody know why this does not delete anything, or how to delete the matching elements?
To me it seems the failure is inside the forEach, since the print statement gets executed as expected.
There's no need for aggregation here; you can use $pull with update:
db.geojsons.update(
{ "type": "FeatureCollection" }, //or any matching criteria
{$pull: { "features": { "properties.TYPE": "8003"} }}
);
You can use $pull to delete specific elements from the array
db.geojsons.update({}, {
$pull: {features: {'properties.TYPE':'8003'}}
}, {multi:true})
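On newer MongoDB versions (3.2+), the same thing is usually written with updateMany, which updates all matching documents without needing the multi flag:
db.geojsons.updateMany({}, {
$pull: { features: { 'properties.TYPE': '8003' } }
})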
I guess in this case the correct solution is to read the matching documents, modify them, and save them again: create a query that matches the documents you want, remove the unwanted elements from the array, and save the documents back.
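A sketch of that read-modify-save approach in the shell (collection and field names taken from the question) could look like this:
db.geojsons.find({ "features.properties.TYPE": "8003" }).forEach(function (doc) {
  // keep only the features that are NOT of TYPE 8003
  doc.features = doc.features.filter(function (feature) {
    return feature.properties.TYPE !== "8003";
  });
  db.geojsons.updateOne({ _id: doc._id }, { $set: { features: doc.features } });
});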
Moreover, this statement is not doing what you want:
db.collection.deleteOne(
{ "feature.properties.TYPE": "8003" }
);
In fact it is searching for an entire document whose structure allows feature.properties.TYPE to match, i.e. something like this:
{
"type": "FeatureCollection",
"features": {
"properties": {
"TYPE": "8003"
}
}
}
And this is definitely not the structure you have.

How do Elasticsearch's "include_in_parent" / "include_in_root" work? Should it show in '_source'?

In a simple Elasticsearch mapping like this:
{
"personal_document": {
"analyzer": "standard",
"_timestamp": {
"enabled": true
},
"properties": {
"description": {
"type": "multi_field",
"fields": {
"sort": {
"type": "string",
"index": "not_analyzed"
},
"description": {
"type": "string",
"include_in_root": true
}
}
},
"my_nested": {
"type": "nested",
"include_in_root": true,
"properties": {
"description": {
"type": "string"
}
}
}
}
}
}
...isn't "include_in_root": true supposed to add the field my_nested.description to the root document?
And during a query, am I not supposed to see THAT field in the _source field?
And would specifying a highlight directive on the field 'my_nested.description' automatically retrieve the include-in-root value instead of the nested field?
(something like this)
"highlight": {
"fields": {
"description": {},
"my_nested.description": {}
}
}
Or do I have some misunderstanding of the official nested type documentation?
(that is not really clear)
If the include_in_parent or include_in_root options are enabled on nested documents, Elasticsearch internally indexes the data with the nested fields flattened onto the parent document. However, this is purely internal to Elasticsearch and you'll never see those fields in _source.
If the user field is of type object, this document would be indexed internally something like this...
as described in the Elasticsearch documentation.
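Roughly speaking (this is an illustration of the idea, not the literal on-disk format), with include_in_root enabled a document like the one below is indexed twice: once as the hidden nested document, and once with the nested fields copied onto the root document in flattened form, which is what you query and highlight against but which never shows up in _source:
// what you send (and what _source will always show)
{ "my_nested": [ { "description": "lorem ipsum" } ] }

// what is additionally indexed on the root document (never visible in _source)
{ "my_nested.description": "lorem ipsum" }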
Thus, you continue to perform actions (like the highlights that you mention) by referring to the nested document's fields. The highlight syntax you refer to should look like this:
"highlight": {
"fields": {
"my_nested.description": {}
}
}
and not
"highlight": {
"fields": {
"description": {}
}
}
You can use a wildcard to specify the highlight field:
POST /test-1/page/_search
{
"query": {
"query_string": {
"query": "Lorem ipsum"
}
},
"highlight" : {
"fields" : {
"*" : {}
}
}
}
Whether it's a good idea, I don't know; I guess it depends on your application.
This also works with nested documents, by the way, but it seems to hiccup when doing attachments on nested documents without include_in_root.