JSON structure with missing fields - Advice needed - Is it good practice?

I have this JSON structure:
{
  "object1": [
    {
      "field1": {
        "first": null,
        "last": ""
      },
      "array1": [
        {
          "title": "1"
        },
        {
          "title": "2"
        }
      ]
    },
    {
      "array1": [
        {
          "title": "4"
        },
        {
          "title": "5"
        }
      ]
    },
    ...
  ]
}
Here field1 is missing in the second object, and I also save it this way in a MongoDB database. The reason for this decision is that I only have, and need, field1 on the first object. Is this okay, or should I add field1 to the other objects as well and just leave it blank?

It appears as if field1 might be some kind of pointer (or reference) to a first and last document/item. Personally (and without knowing exactly what you need to achieve), I would store field1 in each element, with null values where appropriate, as you suggest.
For instance, what happens if elements can be deleted and the first one gets wiped? What would that mean for your structure?
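For illustration, keeping the shape consistent would mean giving every element an explicit field1, with null where there is no data; i.e. the second element of your example would become:
{
  "field1": null,
  "array1": [
    {
      "title": "4"
    },
    {
      "title": "5"
    }
  ]
}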

Retrieve specific value from a JSON blob in MS SQL Server, using a property value?

In my DB I have a column storing JSON. The JSON looks like this:
{
  "views": [
    {
      "id": "1",
      "sections": [
        {
          "id": "1",
          "isToggleActive": false,
          "components": [
            {
              "id": "1",
              "values": [
                "02/24/2021"
              ]
            },
            {
              "id": "2",
              "values": []
            },
            {
              "id": "3",
              "values": [
                "5393",
                "02/26/2021 - Weekly"
              ]
            },
            {
              "id": "5",
              "values": [
                ""
              ]
            }
          ]
        }
      ]
    }
  ]
}
I want to create a migration script that will extract a value from this JSON and store it in its own column.
In the JSON above, in that components array, I want to extract the second value from the component with an ID of "3" (among other things, but this is a good example). So, I want to extract the value "02/26/2021 - Weekly" to store in its own column.
I was looking at the JSON_VALUE docs, but I only see examples for specifying indexes for the JSON properties. I can't figure out what kind of JSON path I'd need. Is this even possible to do with JSON_VALUE?
EDIT: To clarify, the views and sections arrays can be addressed with static array indexes, so I can use views[0].sections[0] for them. Currently, this is all I have for my SQL query:
SELECT *
FROM OPENJSON(@jsonInfo, '$.views[0].sections[0]')
You need to use OPENJSON to break out the inner array, then filter it with a WHERE, and finally select the correct value with JSON_VALUE:
SELECT JSON_VALUE(components.value, '$.values[1]')
FROM OPENJSON(@jsonInfo, '$.views[0].sections[0].components') components
WHERE JSON_VALUE(components.value, '$.id') = '3'
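As a side note, JSON_VALUE and OPENJSON use lax path mode by default, so if a particular row has no component with "id": "3", or that component has fewer than two entries in values, the expression above simply returns NULL instead of raising an error, which is usually what you want in a migration script. To actually populate the new column, you can run the same OPENJSON/JSON_VALUE expression per row via a CROSS APPLY against the table that holds the JSON column and use the result in an UPDATE (table and column names depend on your schema).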

Compare 2 JSON-files and create a new key if values match

I have two JSON files that look like the examples below, data-A.json and data-B.json.
I need to compare the key URL in data-A.json with the same key in data-B.json. Where there is a match, take the value of the key Position from data-A.json and write it to a new key PreviousPosition in data-B.json. If there is no matching URL, write a null value for this new key in data-B.json.
Please see examples:
data-A.json
[
  {
    "Position": "1",
    "TrackName": "One hit wonder",
    "URL": "https://domain.local/xyz123"
  },
  {
    "Position": "2",
    "TrackName": "Random song",
    "URL": "https://domain.local/123qwe"
  },
  {
    "Position": "3",
    "TrackName": "Dueling banjos",
    "URL": "https://domain.local/asd456"
  }
]
data-B.json
[
  {
    "Position": "1",
    "TrackName": "Rocket",
    "URL": "https://domain.local/nbs678"
  },
  {
    "Position": "2",
    "TrackName": "Dueling banjos",
    "URL": "https://domain.local/asd456"
  },
  {
    "Position": "3",
    "TrackName": "One hit wonder",
    "URL": "https://domain.local/xyz123"
  }
]
(desired) data-B.json
[
  {
    "Position": "1",
    "TrackName": "Rocket",
    "URL": "https://domain.local/nbs678",
    "PreviousPosition": null
  },
  {
    "Position": "2",
    "TrackName": "Dueling banjos",
    "URL": "https://domain.local/asd456",
    "PreviousPosition": "3"
  },
  {
    "Position": "3",
    "TrackName": "One hit wonder",
    "URL": "https://domain.local/xyz123",
    "PreviousPosition": "1"
  }
]
I have made some mediocre attempts to solve this using jq, with no luck. Also tried some PowerShell and Python, but I just can't figure it out.
Any suggestions?
If a straightforward, two-line solution is what you're looking for, then jq is a good choice:
(INDEX($A[]; .URL) | map_values(.Position)) as $dict
| map( .PreviousPosition = $dict[ .URL ] )
This is perhaps more straightforward than it looks, as the expression in the first line is a commonly found idiom (namely INDEX(...) | map_values(...)) for creating a dictionary. In the first line, it is assumed that $A holds the JSON in data-A.json.
The second line just applies the lookup rule specified in the question.
The only tricky bit here is getting the command-line invocation right. The following will suffice:
jq --argfile A data-A.json -f program.jq data-B.json
where program.jq contains the above two-line program.
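Two small notes. First, newer jq releases document --argfile as deprecated; the equivalent with --slurpfile A data-A.json makes $A an array containing the file's contents, so the first line would use $A[0][] instead of $A[]. Second, since you mention having tried Python, here is a minimal sketch of the same lookup in plain Python (standard library only; file names taken from your example):
import json

# Load both files
with open("data-A.json") as f:
    data_a = json.load(f)
with open("data-B.json") as f:
    data_b = json.load(f)

# Build a URL -> Position lookup table from data-A.json
previous_positions = {entry["URL"]: entry["Position"] for entry in data_a}

# Add PreviousPosition to every entry of data-B.json; a missing URL yields None,
# which json.dump writes out as null
for entry in data_b:
    entry["PreviousPosition"] = previous_positions.get(entry["URL"])

with open("data-B.json", "w") as f:
    json.dump(data_b, f, indent=2)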

Why does a numeric key in the JSON structure always get displayed first?

(I cannot summarize the problem in a single statement, hence the ambiguous title.)
I create a JSON structure via Angular/TypeScript; when a user interacts with certain parts of the component, the JSON structure gets updated.
Steps
Initially, the JSON under consideration is by default set to the following:
{
  "keyword": {
    "value": "product",
    "type": "main"
  }
}
For example, a user chooses some parameter Name. Once the user completes certain steps in the UI, the JSON structure gets updated to the following:
{
  "keyword": {
    "value": "product",
    "type": "main"
  },
  "Name": {
    "value": " <hasProperty> Name",
    "type": "dataprop"
  }
}
Once the user selects a numeric value for a parameter like dryTime, the JSON gets updated to the following:
{
  "20": { // WHY WOULD 20 be here?
    "value": "<hasValue> 20",
    "type": "fValue"
  },
  "keyword": {
    "value": "Varnish",
    "type": "main"
  },
  "Name": {
    "value": " <hasProperty> Name",
    "type": "dataprop"
  },
  "dryingTime": {
    "value": " <hasProperty> dryingTime",
    "type": "dataprop"
  }
}
I understand that a JSON object is an unordered data structure. But a previous implementation of something similar actually worked well, i.e., the value 20 here used to be 20.0, and it was displayed after dryingTime in my JSON.
The order is critical for me, as I iterate over all the keys in the above JSON with a for loop and store them in an array. This array needs to list the keys in the order of the user interaction.
Where am I going wrong here if I decide to stay with a JSON object rather than an array to store such interactions?
Yes, JSON object fields are unordered; a JSON array is ordered.
If you want to keep the order in which elements were inserted, you could build your JSON like so:
{
  "keyword": {
    "value": "Varnish",
    "type": "main"
  },
  "props": [
    {
      "name": "dryingTime",
      "value": 20
    },
    {
      "name": "anotherOrderedField",
      "value": "fieldValue"
    }
  ]
}
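For what it's worth, the reordering you are seeing comes from JavaScript rather than from JSON itself: when object keys are enumerated (Object.keys, for...in, JSON.stringify), keys that look like array indices, such as "20", are listed first in ascending numeric order, and only then the remaining string keys in insertion order. "20.0" does not qualify as an array index, which is why your earlier implementation appeared to preserve the insertion order. Relying on key order is therefore fragile; the array-based structure above is the safer representation.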

Elastic Search + JSON import (ELK Stack)

I'm currently trying to do a basic JSON file import into my ELK stack. I tried importing it directly via a POST request like this:
curl -XPOST http://localhost:9200/kwd_results/TS_Cart -d @/home/local/TS_Cart.json
ES says ok for the import, but when I try to view the logs in Kibana, they are not indexed by the nodes of the JSON file. I'm guessing I need something like a template mapping to view it properly.
My JSON file looks like this:
{
  "testResults": {
    "FitNesseVersion": "v20160618",
    "rootPath": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart",
    "result": [
      {
        "counts": {
          "right": "16",
          "wrong": "2",
          "ignores": "3",
          "exceptions": "1"
        },
        "date": "2017-05-10T00:01:11+02:00",
        "runTimeInMillis": "117242",
        "relativePageName": "TestCase_1",
        "pageHistoryLink": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart.B2CFreeCatalogueOrder?pageHistory&resultDate=20170510000111",
        "tags": "de, at"
      },
      {
        "counts": {
          "right": "16",
          "wrong": "0",
          "ignores": "0",
          "exceptions": "0"
        },
        "date": "2017-05-10T00:03:08+02:00",
        "runTimeInMillis": "85680",
        "relativePageName": "TestCase_2",
        "pageHistoryLink": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart.B2CGiftCardOrderWithAdvancePayment?pageHistory&resultDate=20170510000308",
        "tags": "at, de"
      }
    ],
    "finalCounts": {
      "right": "4",
      "wrong": "1",
      "ignores": "0",
      "exceptions": "0"
    },
    "totalRunTimeInMillis": "482346"
  }
}
Basically I would need rootPath to be used as an index, with the following children: counts, relativePageName, date and tags. Notice that I have two nodes that are children of the result[] array.
Any help would be greatly appreciated!
Thank you.
Well, it's one JSON document so Elasticsearch treats it as such.
You'll need to (programmatically) split up the document into the right documents, and then you can store them (potentially with one _bulk request); a sketch of this is shown after the notes below.
For the index name:
It must be lowercase, so you'll need to convert that value.
Will you have many different root paths with just a few docs each? Then you shouldn't make each of them an index, since there is an overhead for every index (more precisely, for its underlying shards).
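To make the "split and bulk-index" idea concrete, here is a rough Python sketch. It is only an illustration: the file path comes from your curl command, the index name is derived from the last segment of rootPath, the requests library is assumed, and the indexed fields are the ones you listed; older Elasticsearch versions also expect a _type in the action line.
import json
import requests

with open("/home/local/TS_Cart.json") as f:
    doc = json.load(f)

test_results = doc["testResults"]
# Index names must be lowercase, so derive one from rootPath
index_name = test_results["rootPath"].split(".")[-1].lower()  # -> "ts_cart"

# One bulk action line + one document per element of the result[] array
bulk_lines = []
for result in test_results["result"]:
    bulk_lines.append(json.dumps({"index": {"_index": index_name, "_type": "result"}}))
    bulk_lines.append(json.dumps({
        "date": result["date"],
        "relativePageName": result["relativePageName"],
        "counts": result["counts"],
        "tags": result["tags"],
    }))

# The bulk body is newline-delimited JSON and must end with a newline
response = requests.post(
    "http://localhost:9200/_bulk",
    data="\n".join(bulk_lines) + "\n",
    headers={"Content-Type": "application/x-ndjson"},
)
print(response.json())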

JSON: is it best practice to give each element in an array an id attribute?

Is it best practice in JSON to give objects in an array an id, similar to below? I'm trying to decide on a JSON format for a RESTful service I'm implementing, and I can't decide whether to include it or not... If the data is to be modified by CRUD operations, is it a good idea?
{
  "tables": [
    {
      "id": 1,
      "tablename": "Table1",
      "columns": [
        {
          "name": "Col1",
          "data": "-5767703747778052096"
        },
        {
          "name": "Col2",
          "data": "-5803732544797016064"
        }
      ]
    },
    {
      "id": 2,
      "tablename": "Table2",
      "columns": [
        {
          "name": "Col1",
          "data": "-333333"
        },
        {
          "name": "Col2",
          "data": "-44444"
        }
      ]
    }
  ]
}
Client-Generated IDs
A server MAY accept a client-generated ID along with a request to
create a resource. An ID MUST be specified with an "id" key, the value
of which MUST be a universally unique identifier. The client SHOULD
use a properly generated and formatted UUID as described in RFC 4122
[RFC4122].
jsonapi.org
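In short: if the elements of the array are going to be created, updated or deleted individually through your REST API, giving each one a stable id (ideally a UUID, as the JSON:API excerpt above recommends) is generally a good idea, because it lets clients address a single element unambiguously (for example in a URL such as /tables/{id}) instead of relying on its position in the array.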