JSON structure question: multiple and different JSON entries in one txt file

I am trying to do some work with log visualization tools (Elastic and/or Splunk), but first I need to produce and format the log files from a simulation I am writing. My questions, which I can't seem to find clear guidance on, are:
How to store multiple (what I believe are root-level) JSON entries in a single text file
How to work with nested JSON structures
I am ultimately trying to have every entry follow the same form:
{"entry_id": 1,
"TIME": "12:00:12Z012/01/2022",
"LOG_TYPE":"ERROR_REPORT",
"DATA": {
"FIELD A" : "ABC",
"FIELD B" : "DEF"
}
},
{"entry_id": 2,
"TIME": "12:15:12Z012/01/2022",
"LOG_TYPE":"STATUS_REPORT",
"DATA": {
"FIELD C" : "HIJ",
"FIELD D" : 123
}
}
Some options I saw
Use an array []
Use NDJSON
Use some log template??
Any insight would be helpful

A JSON file must contain a single top-level value (an object or an array); a file holding several root-level objects side by side is not valid JSON.
Option 1: Create a separate file for each object, using a numeric naming scheme for the files, then iterate over the files in your code.
Option 2: Create a single file but have each entry contained in an array, e.g.:
{
  "entries": [
    {
      "entry_id": 1,
      "TIME": "12:00:12Z012/01/2022",
      "LOG_TYPE": "ERROR_REPORT",
      "DATA": {
        "FIELD A": "ABC",
        "FIELD B": "DEF"
      }
    },
    {
      "entry_id": 2,
      "TIME": "12:15:12Z012/01/2022",
      "LOG_TYPE": "STATUS_REPORT",
      "DATA": {
        "FIELD C": "HIJ",
        "FIELD D": 123
      }
    }
  ]
}
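Since the question also lists NDJSON: that is usually the most natural fit for log files, because each entry is a complete JSON object on its own line and the file as a whole is deliberately not one JSON document. Both Elastic (e.g. via Filebeat's JSON decoding) and Splunk can ingest line-delimited events. The two entries above in NDJSON form:
{"entry_id": 1, "TIME": "12:00:12Z012/01/2022", "LOG_TYPE": "ERROR_REPORT", "DATA": {"FIELD A": "ABC", "FIELD B": "DEF"}}
{"entry_id": 2, "TIME": "12:15:12Z012/01/2022", "LOG_TYPE": "STATUS_REPORT", "DATA": {"FIELD C": "HIJ", "FIELD D": 123}}
Nested structures are no problem here, since each line may contain arbitrarily deep objects. As a minimal sketch of the producing side (the file name and the write_entry helper are illustrative, not from the question), the simulation could append one serialized entry per line:
import json

def write_entry(path, entry):
    # One JSON object per line; json.dumps never emits raw newlines,
    # so every line stays a valid, self-contained JSON document.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

write_entry("simulation.ndjson", {
    "entry_id": 1,
    "TIME": "12:00:12Z012/01/2022",
    "LOG_TYPE": "ERROR_REPORT",
    "DATA": {"FIELD A": "ABC", "FIELD B": "DEF"},
})
Reading it back is then one json.loads() per line, which also scales to large logs because you never hold the whole file in memory.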

Related

Validate JSON schema for expected values

I have a JSON which I would like to validate.
There are objects inside an array; within each object there is a property called name.
I want to first validate that there are 3 objects,
and I want to validate the value of each property.
{
  "hello": [
    {
      "world": "value 1"
    },
    {
      "world": "value 2"
    },
    {
      "world": "value 3"
    }
  ]
}
I want to validate that the JSON has value 1, value 2, value 3 using a JSON Schema.
Using the language of JSON Extended Structural Schemas (JESS), the three requirements could be written in JSON as follows (assuming that you meant world rather than name):
["&",
{ "hello": [ {"world": "string"} ] },
{"forall": ".[hello]|length", "equal": 3 },
{"setof": ".[hello][]|.[world]", "supersetof": ["value 1", "value 2", "value 3" ]}
]
This may not be exactly what you want, e.g. perhaps you want the constraints to be written without reference to the name of the top-level key. This could be accomplished as follows:
["&",
{"forall": ".[]", "schema": [ {"world": "string"} ] },
{"forall": ".[]|length", "equal": 3 },
{"setof": ".[][]|.[world]", "supersetof": ["value 1", "value 2", "value 3" ]}
]
Also you could modify the above to express the requirements without preventing the objects from having additional keys. It all depends on what you really want.
Note that the JESS checker requires jq to run. There is a Ruby gem for jq.
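If you would rather use standard JSON Schema than JESS, the same three requirements can be approximated with draft-07 tuple validation. This is my own sketch, and note that it pins the three values in order, which is slightly stricter than the set-based JESS version:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "required": ["hello"],
  "properties": {
    "hello": {
      "type": "array",
      "minItems": 3,
      "maxItems": 3,
      "items": [
        { "type": "object", "required": ["world"], "properties": { "world": { "const": "value 1" } } },
        { "type": "object", "required": ["world"], "properties": { "world": { "const": "value 2" } } },
        { "type": "object", "required": ["world"], "properties": { "world": { "const": "value 3" } } }
      ]
    }
  }
}
(In draft 2020-12 the tuple keyword is prefixItems rather than an array-valued items.) As written this still allows the objects to carry additional keys, matching the remark above.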

Elastic Search + JSON import (ELK Stack)

I'm currently trying to do a basic JSON file import into my ELK stack. I tried importing it directly via a POST request like this:
curl -XPOST http://localhost:9200/kwd_results/TS_Cart -d @/home/local/TS_Cart.json
ES says ok for the import, but when I try to view the logs in Kibana, they are not indexed by the fields of the JSON file. I'm guessing I need something like a template mapping to view it properly.
My JSON file looks like this:
{
  "testResults": {
    "FitNesseVersion": "v20160618",
    "rootPath": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart",
    "result": [
      {
        "counts": {
          "right": "16",
          "wrong": "2",
          "ignores": "3",
          "exceptions": "1"
        },
        "date": "2017-05-10T00:01:11+02:00",
        "runTimeInMillis": "117242",
        "relativePageName": "TestCase_1",
        "pageHistoryLink": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart.B2CFreeCatalogueOrder?pageHistory&resultDate=20170510000111",
        "tags": "de, at"
      },
      {
        "counts": {
          "right": "16",
          "wrong": "0",
          "ignores": "0",
          "exceptions": "0"
        },
        "date": "2017-05-10T00:03:08+02:00",
        "runTimeInMillis": "85680",
        "relativePageName": "TestCase_2",
        "pageHistoryLink": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart.B2CGiftCardOrderWithAdvancePayment?pageHistory&resultDate=20170510000308",
        "tags": "at, de"
      }
    ],
    "finalCounts": {
      "right": "4",
      "wrong": "1",
      "ignores": "0",
      "exceptions": "0"
    },
    "totalRunTimeInMillis": "482346"
  }
}
Basically I would need rootPath to be used as an index, while having the following children: counts, relativePageName, date and tags. Notice that I have two nodes that are children of the result[] array.
Any help would be greatly appreciated!
Thank you.
Well, it's one JSON document, so Elasticsearch treats it as such.
You'll need to split the document up programmatically into the right documents; then you can store them, potentially with one _bulk request, as sketched below.
For the index name:
It must be lowercase, so you'll need to lowercase that value.
Will you have many different root paths with just a few docs each? Then you shouldn't turn each of them into an index, since every index (really its underlying shards) carries overhead.
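As a hedged sketch of the split-and-bulk step (the index name ts_cart is just an assumed, lowercased fragment of rootPath, and which fields you copy into each document is your choice): the _bulk body is itself newline-delimited, one action line followed by one document line per entry of the result array:
{"index": {"_index": "ts_cart", "_type": "result"}}
{"rootPath": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart", "relativePageName": "TestCase_1", "date": "2017-05-10T00:01:11+02:00", "counts": {"right": "16", "wrong": "2", "ignores": "3", "exceptions": "1"}, "tags": "de, at"}
{"index": {"_index": "ts_cart", "_type": "result"}}
{"rootPath": "K1System.CountryDe.DriverFirefox.TestCases.MainFolder.TestVariants.SmokeTests_B2C.TS_Cart", "relativePageName": "TestCase_2", "date": "2017-05-10T00:03:08+02:00", "counts": {"right": "16", "wrong": "0", "ignores": "0", "exceptions": "0"}, "tags": "at, de"}
You would send it with something like the following (note --data-binary rather than -d, so the newlines survive; recent Elasticsearch versions also require the ndjson content type, and 7.x and later drop _type from the action line):
curl -H 'Content-Type: application/x-ndjson' -XPOST http://localhost:9200/_bulk --data-binary @bulk.ndjson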

How to load JSON to D3 in Tree Diagram

I am a real beginner with HTML and JavaScript, so please excuse any dumb questions.
I am using the D3 Tree Diagram, but I need to load a JSON file instead of writing the data inside the JS script, and the name of the file to be loaded will be chosen by the user in a select tag. Here's the D3 code.
First, how can I load/read a JSON file, let's say exampleNodes.json?
And then, how can I pass the value of the selected option so that it reads the appropriate JSON?
Thanks for your patience and help.
In your code you have:
var treeData = [
  {
    "name": "Top Level",
    "parent": "null",
    "children": [
      {
        "name": "Level 2: A",
        "parent": "Top Level",
        "children": [
          {
            "name": "Son of A",
            "parent": "Level 2: A"
          },
          {
            "name": "Daughter of A",
            "parent": "Level 2: A"
          }
        ]
      },
      {
        "name": "Level 2: B",
        "parent": "Top Level"
      }
    ]
  }
];
You have to save it in a data.json file like:
{
  "treeData": [ ... your data array ... ]
}
After that, the d3.json() callback will receive this object:
d3.json("data.json", function(json) {
  var treeData = json.treeData; // the array you saved above
  // do your coding here, or put all the code inside one
  // function and call it after the data has loaded
});
If you are using Google Chrome it will give you an error when reading data from the JSON file, because for security reasons Chrome does not allow reading files from the local file system; you can get the data in Firefox. To make it run everywhere, serve your code from a local server, e.g. WampServer or Apache Tomcat.
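For the second half of the question, reading the file name from the select tag, a sketch along these lines should work (the fileSelect id and the option values are made up for the example; with D3 v3 the callback signature is function(error, json)):
<select id="fileSelect">
  <option value="exampleNodes.json">Example nodes</option>
  <option value="otherNodes.json">Other nodes</option>
</select>

d3.select("#fileSelect").on("change", function() {
  var file = this.value;            // file name from the chosen option
  d3.json(file, function(json) {
    var treeData = json.treeData;   // same wrapper object as above
    // remove the old diagram and redraw it from treeData here
  });
});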

JSON Slurper Offsets

I have a large JSON file that I'm trying to parse with JSON Slurper. The JSON file consists of information about bugs so it has things like issue keys, descriptions, and comments. Not every issue has a comment though. For example, here is a sample of what the JSON input looks like:
{
  "projects": [
    {
      "name": "Test Project",
      "key": "TEST",
      "issues": [
        {
          "key": "BUG-1",
          "priority": "Major",
          "comments": [
            {
              "author": "a1",
              "created": "d1",
              "body": "comment 1"
            },
            {
              "author": "a2",
              "created": "d2",
              "body": "comment 2"
            }
          ]
        },
        {
          "key": "BUG-2",
          "priority": "Major"
        },
        {
          "key": "BUG-3",
          "priority": "Major",
          "comments": [
            {
              "author": "a3",
              "created": "d3",
              "body": "comment 3"
            }
          ]
        }
      ]
    }
  ]
}
I have a method that creates Issue objects from the parsed JSON. Everything works well when every issue has at least one comment, but once an issue comes up that has no comments, the rest of the issues get the wrong comments. I am currently looping over the total number of issues and then looking up comments by that same running index. So, for example,
parsedData.issues.comments.body[0][0][0]
returns "comment 1". However,
parsedData.issues.comments.body[0][1][0]
returns "comment 3", which is incorrect. Is there a way I can see if a particular issue has any comments? I'd rather not have to edit the JSON file to add empty comment fields, but would that even help?
You can do this:
parsedData.issues.comments.collect { it?.body ?: [] }
So it checks for a body and, if none exists, returns an empty list.
UPDATE
Based on the update to the question, you can do:
parsedData.projects.collectMany { it.issues.comments.collect { it?.body ?: [] } }
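Putting it together, here is a Groovy sketch of the whole loop with the null-safe handling, so each issue only ever sees its own comments (the bugs.json file name is assumed, and the println stands in for however you construct your Issue objects):
import groovy.json.JsonSlurper

def parsedData = new JsonSlurper().parseText(new File('bugs.json').text)

parsedData.projects.each { project ->
    project.issues.each { issue ->
        // issue.comments is null when the key is absent; ?: swaps in an empty list
        def bodies = (issue.comments ?: []).collect { it.body }
        println "${issue.key}: ${bodies}"
    }
}
Because comments are looked up per issue rather than through a running offset, BUG-2 simply yields an empty list and BUG-3 keeps "comment 3".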

jqGrid: How to handle the server response before it is passed to the grid

I have a common structure of JSON data that comes back from the server; it contains some additional info about errors etc. How can I handle this data (check the error info) and then pass only the required data to the grid?
This is JSON data structure:
{
  "errorinfo": "foo",
  "errormsg": "foo",
  "errorCode": "foo"
  "jqgridData": [
    {
      "total": "xxx",
      "page": "yyy",
      "records": "zzz",
      "rows": [
        {"id": "1", "cell": ["cell11", "cell12", "cell13"]},
        {"id": "2", "cell": ["cell21", "cell22", "cell23"]},
        ...
      ]
    }
  ]
}
So I want to process this JSON data and pass only "jqgridData" to the grid.
Thanks for help.
First of all, the JSON data has one small error. The string
{ "errorinfo": "foo", "errormsg": "foo", "errorCode": "foo" "jqgridData": [ {
must be changed to
{ "errorinfo": "foo", "errormsg": "foo", "errorCode": "foo", "jqgridData": [ {
(a comma must be inserted between "errorCode": "foo" and "jqgridData"). I hope the problem only crept in while posting the data in the question text.
To your main question: the jsonReader allows you to read practically any data. Your data can be read with the following jsonReader:
jsonReader: {
root: "jqgridData.0.rows",
page: "jqgridData.0.page",
total: "jqgridData.0.total",
records: "jqgridData.0.records"
}
(where the '0' element is needed as an index because jqgridData is additionally an array).
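If you also want to inspect the error fields before the grid consumes anything, jqGrid's beforeProcessing callback fires on the raw server response before it is processed, so you can check or even rewrite the data there. A sketch, where the error-handling branch is an assumption about how you want to surface errors:
$("#grid").jqGrid({
    url: "your/server/url",
    datatype: "json",
    jsonReader: {
        root: "jqgridData.0.rows",
        page: "jqgridData.0.page",
        total: "jqgridData.0.total",
        records: "jqgridData.0.records"
    },
    beforeProcessing: function (data) {
        if (data.errorCode) {
            alert(data.errormsg); // or your own error display
            return false;         // returning false skips processing of the response
        }
    }
    // ... colModel and the rest of your options
});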