Include JSON object multiple times in a loop - json

I'm a newbie to JSON and am currently using a *.json file to define some actions that fetch system-level information. For example, I'm trying to monitor the CPU temperature and check the active application in a loop.
Code snippet:
{
    "Action": "Check_System",
    "duration": 600
},
{
    "Action": "Check_CPU_temp",
    "duration": 60
},
{
    "Action": "Check_top",
    "duration": 30
}
Each JSON object contains two values: (1) Action, which internally invokes my C++ libraries to perform specific functionality, and (2) duration, the time to sleep before performing the next activity.
In my code snippet above, I would like to perform Actions 2 and 3 'N' times. By which I mean:
{
    "Action": "Check_System",
    "duration": 600
},
-------------------------
{
    "Action": "Check_CPU_temp",
    "duration": 60
},
{
    "Action": "Check_top",
    "duration": 30
},
-------------------------
{
    "Action": "Check_CPU_temp",
    "duration": 60
},
{
    "Action": "Check_top",
    "duration": 30
},
-------------------------
{
    "Action": "Check_CPU_temp",
    "duration": 60
},
{
    "Action": "Check_top",
    "duration": 30
},
-------------------------
.....
Rather than copy/pasting objects, is there a way of creating this loop within JSON, something like:
{
    "Action": "Check_System",
    "duration": 600
},
{perform:
    "times": 1000,
    {
        "Action": "Check_CPU_temp",
        "duration": 60
    },
    {
        "Action": "Check_top",
        "duration": 30
    }
}
Any inputs are appreciated. Thanks in advance.
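
Note: JSON itself has no loop or macro construct, so a repeat block like the hypothetical "perform"/"times" wrapper above has to be expanded by whatever consumes the file. Below is a minimal sketch in Python of such a consumer; the "perform", "times" and "actions" keys, the actions.json file name, and run_action are all assumptions of mine (run_action stands in for the C++ bindings):

import json
import time

def run_action(name):
    # Placeholder: would dispatch into the C++ libraries that do the real work.
    print("running", name)

def execute(items):
    for item in items:
        if "perform" in item:
            # Hypothetical repeat block: run the nested actions "times" times.
            for _ in range(item["perform"]["times"]):
                execute(item["perform"]["actions"])
        else:
            run_action(item["Action"])
            time.sleep(item["duration"])

with open("actions.json") as fh:  # assumed file name
    execute(json.load(fh))

With an interpreter like this, the repeated block is written once, e.g. {"perform": {"times": 1000, "actions": [...]}}, and the expansion happens at run time rather than in the JSON.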

Related

How to create separate objects out of JSON using JSONata

I want to transform a given JSON structure into another JSON format using the JSONata API.
Basically, I need to break the hierarchical structure into separate records.
Input JSON:
{
    "2022-09-22": [
        {
            "name": "modules/dynatrace",
            "count": 60
        },
        {
            "name": "modules/dynatrace/monitors/http-monitors/basic",
            "count": 4
        },
        {
            "name": "modules/splunk/hec-token",
            "count": 14
        },
        {
            "name": "modules/aws/lambda/logs_streaming_splunk",
            "count": 29
        }
    ]
}
Output:
[
    {
        "date": "2022-09-22",
        "name": "modules/dynatrace",
        "count": 60
    },
    {
        "date": "2022-09-22",
        "name": "modules/dynatrace/monitors/http-monitors/basic",
        "count": 4
    },
    {
        "date": "2022-09-22",
        "name": "modules/splunk/hec-token",
        "count": 14
    },
    {
        "date": "2022-09-22",
        "name": "modules/aws/lambda/logs_streaming_splunk",
        "count": 29
    }
]
You can use the $each function to convert the object into an array:
$each($$, function($entries, $date) {
    $entries.($merge([{ "date": $date }, $]))
})
Interactive link: https://stedi.link/ZBoBY2F
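
For readers who would rather do the same flattening outside JSONata, here is a rough Python equivalent of the expression above (the input_doc and records names are my own):

import json

input_doc = {
    "2022-09-22": [
        {"name": "modules/dynatrace", "count": 60},
        {"name": "modules/dynatrace/monitors/http-monitors/basic", "count": 4}
    ]
}

# One output record per entry, with the enclosing key folded in as "date".
records = [
    {"date": date, **entry}
    for date, entries in input_doc.items()
    for entry in entries
]
print(json.dumps(records, indent=2))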

How do I return the "text" field of this JSON file with jq?

I am looking to extract the cat facts from this JSON file:
[
    {
        "status": {
            "verified": true,
            "feedback": "",
            "sentCount": 1
        },
        "_id": "5887e1d85c873e0011036889",
        "user": "5a9ac18c7478810ea6c06381",
        "text": "Cats make about 100 different sounds. Dogs make only about 10.",
        "__v": 0,
        "source": "user",
        "updatedAt": "2020-09-03T16:39:39.578Z",
        "type": "cat",
        "createdAt": "2018-01-15T21:20:00.003Z",
        "deleted": false,
        "used": true
    },
    {
        "status": {
            "verified": true,
            "sentCount": 1
        },
        "_id": "588e746706ac2b00110e59ff",
        "user": "588e6e8806ac2b00110e59c3",
        "text": "Domestic cats spend about 70 percent of the day sleeping and 15 percent of the day grooming.",
        "__v": 0,
        "source": "user",
        "updatedAt": "2020-08-26T20:20:02.359Z",
        "type": "cat",
        "createdAt": "2018-01-14T21:20:02.750Z",
        "deleted": false,
        "used": true
    },
    {
        "status": {
            "verified": true,
            "sentCount": 1
        },
        "_id": "58923f2fc3878c0011784c79",
        "user": "5887e9f65c873e001103688d",
        "text": "I don't know anything about cats.",
        "__v": 0,
        "source": "user",
        "updatedAt": "2020-08-23T20:20:01.611Z",
        "type": "cat",
        "createdAt": "2018-02-25T21:20:03.060Z",
        "deleted": false,
        "used": false
    },
    {
        "status": {
            "verified": true,
            "sentCount": 1
        },
        "_id": "5894af975cdc7400113ef7f9",
        "user": "5a9ac18c7478810ea6c06381",
        "text": "The technical term for a cat’s hairball is a bezoar.",
        "__v": 0,
        "source": "user",
        "updatedAt": "2020-11-25T21:20:03.895Z",
        "type": "cat",
        "createdAt": "2018-02-27T21:20:02.854Z",
        "deleted": false,
        "used": true
    },
    {
        "status": {
            "verified": true,
            "sentCount": 1
        },
        "_id": "58e007cc0aac31001185ecf5",
        "user": "58e007480aac31001185ecef",
        "text": "Cats are the most popular pet in the United States: There are 88 million pet cats and 74 million dogs.",
        "__v": 0,
        "source": "user",
        "updatedAt": "2020-08-23T20:20:01.611Z",
        "type": "cat",
        "createdAt": "2018-03-01T21:20:02.713Z",
        "deleted": false,
        "used": false
    }
]
Its URL is https://cat-fact.herokuapp.com/facts. I know access to the information is not a problem, because when I run curl 'https://cat-fact.herokuapp.com/facts' | jq '.' I get the entire file back.
After running curl 'https://cat-fact.herokuapp.com/facts' | jq '. | {text}',
I get the error jq: error (at <stdin>:0): Cannot index array with string "text".
After running curl 'https://cat-fact.herokuapp.com/facts' | jq '. | {.text}',
this is returned: (23) Failed writing body
After running curl 'https://cat-fact.herokuapp.com/facts' | jq '.[] | {text: .commit.text}',
this is returned:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1675  100  1675    0     0   9463      0 --:--:-- --:--:-- --:--:--  9463
{
    "text": null
}
{
    "text": null
}
{
    "text": null
}
{
    "text": null
}
{
    "text": null
}
I'd use the array/object value iterator .[] and then ["text"] to filter:
jq '.[]["text"]'
Given your example file as input, this produces the following output:
"Cats make about 100 different sounds. Dogs make only about 10."
"Domestic cats spend about 70 percent of the day sleeping and 15 percent of the day grooming."
"I don't know anything about cats."
"The technical term for a cat’s hairball is a bezoar."
"Cats are the most popular pet in the United States: There are 88 million pet cats and 74 million dogs."
The above is also the exact output you'd (currently) get from:
curl -s 'https://cat-fact.herokuapp.com/facts' | jq '.[]["text"]'
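
For comparison, here is a small Python sketch doing the same extraction with only the standard library (assuming the endpoint still serves the array shown above):

import json
from urllib.request import urlopen

with urlopen("https://cat-fact.herokuapp.com/facts") as resp:
    facts = json.load(resp)

# The top level is an array, so iterate it and pick each object's "text" field.
for fact in facts:
    print(fact["text"])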

How to calculate max in sth-comet?

sth-comet offers the possibility of calculating max, min and other functions, as described here: https://github.com/telefonicaid/fiware-sth-comet/blob/master/doc/manuals/aggregated-data-retrieval.md
But I have tried different types and it doesn't give the aggregated result.
A simplified version of my entity (I use the attribute temperature in this example) is:
{
    "id": "Beach:27",
    "type": "Beach",
    "flag": {
        "type": "Property",
        "value": "Verde"
    },
    "temperature": {
        "type": "Number",
        "value": 45
    }
}
I have made this query, which should give the maximum value:
http://{{sth-comet}}/STH/v1/contextEntities/type/Beach/id/Beach:27/attributes/temperature?aggrMethod=max&hLimit=100&hOffset=0
but the result is not the max, but rather all the changes of the attribute:
{
    "contextResponses": [
        {
            "contextElement": {
                "attributes": [
                    {
                        "name": "temperature",
                        "values": [
                            {
                                "recvTime": "2019-09-15T18:32:18.166Z",
                                "attrType": "Number",
                                "attrValue": "43"
                            },
                            {
                                "recvTime": "2019-09-15T18:32:24.645Z",
                                "attrType": "Number",
                                "attrValue": "44"
                            },
                            {
                                "recvTime": "2019-09-15T18:32:28.931Z",
                                "attrType": "Number",
                                "attrValue": "45"
                            }
                        ]
                    }
                ],
                "id": "Beach:27",
                "isPattern": false,
                "type": "Beach"
            },
            "statusCode": {
                "code": "200",
                "reasonPhrase": "OK"
            }
        }
    ]
}
What type must the property have for this to work correctly? I tried "Number", "integer", "string" and "Property", but I don't obtain the "max" value.
Thank you for your time
The requests for aggregated time series context information can use the following query parameters:
aggrMethod: (mandatory) you can use max, min, sum and sum2.
aggrPeriod: (mandatory) you can use month, day, hour, minute and second.
dateFrom and dateTo: (optional) the date range for the data you query.
Your request is missing the mandatory aggrPeriod parameter. Can you test something like this instead:
http://{{sth-comet}}/STH/v1/contextEntities/type/Beach/id/Beach:27/attributes/temperature?aggrMethod=max&aggrPeriod=second&dateFrom=2019-09-15T00:00:00.000Z&dateTo=2019-09-15T23:59:59.999Z
https://github.com/telefonicaid/fiware-sth-comet/blob/master/doc/manuals/aggregated-data-retrieval.md
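
As a sketch, the same request issued from Python with the requests library; the host, port and FIWARE service headers below are placeholders of mine and are deployment-specific:

import requests

STH = "http://sth-comet:8666"  # placeholder host and port

resp = requests.get(
    STH + "/STH/v1/contextEntities/type/Beach/id/Beach:27/attributes/temperature",
    params={
        "aggrMethod": "max",
        "aggrPeriod": "second",
        "dateFrom": "2019-09-15T00:00:00.000Z",
        "dateTo": "2019-09-15T23:59:59.999Z",
    },
    headers={
        "fiware-service": "myservice",    # placeholder tenant
        "fiware-servicepath": "/mypath",  # placeholder scope
    },
)
print(resp.json())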

Multi Object JSON containing static and dynamic data

I have more than 1 million rows of data in Excel and I want to convert it to JSON so I can visualize it using D3.js and other web-based applications. The data comprises two subsets:
General information about each data point, including geographical location, ID, ... (static data that does not change once written)
Monthly measurements at each data point. This data set updates monthly, once new data arrives.
This is how data looks like:
ID: 2411976, State: Texas, County: DEWITT, Latitude: 29, Longitude: -96,
Data: 11/1/2013 27.516; 12/1/2013 15.3566; 1/1/2014 27.6418;
2/1/2014 13.45; 3/1/2014 11.21; 4/1/2014 20
ID: 2321771, State: Texas, County: DEWITT, Latitude: 29, Longitude: -96,
Data: 11/1/2014 19; 12/1/2014 21; 1/1/2015 30; 2/1/2015 50; 3/1/2015 10;
4/1/2015 5
.....
Is it possible to wrap all data points in one JSON document that contains both temporal data and static data?
This is indeed possible, as you can represent arrays/objects in a nested structure, like this:
{
    "locations": [{
        "id": 2411976,
        "state": "Texas",
        "county": "DEWITT",
        "latitude": 29,
        "longitude": -96,
        "data": [{
            "date": "2013-11-01T00:00:00.000Z",
            "value": 27.516
        }, {
            "date": "2013-12-01T00:00:00.000Z",
            "value": 15.3566
        }, {
            "date": "2014-01-01T00:00:00.000Z",
            "value": 27.6418
        }, {
            "date": "2014-02-01T00:00:00.000Z",
            "value": 13.45
        }, {
            "date": "2014-03-01T00:00:00.000Z",
            "value": 11.21
        }, {
            "date": "2014-04-01T00:00:00.000Z",
            "value": 20
        }]
    }, {
        "id": 2321771,
        "state": "Texas",
        "county": "DEWITT",
        "latitude": 29,
        "longitude": -96,
        "data": [{
            "date": "2014-11-01T00:00:00.000Z",
            "value": 19
        }, {
            "date": "2014-12-01T00:00:00.000Z",
            "value": 21
        }, {
            "date": "2015-01-01T00:00:00.000Z",
            "value": 30
        }, {
            "date": "2015-02-01T00:00:00.000Z",
            "value": 50
        }, {
            "date": "2015-03-01T00:00:00.000Z",
            "value": 10
        }, {
            "date": "2015-04-01T00:00:00.000Z",
            "value": 5
        }]
    }]
}
This is just one way of doing it. Depending on what the consumer of this data expects as input, you could adapt accordingly.
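
As a sketch of how that structure could be produced programmatically, here is a Python version; the rows list is a stand-in for whatever you export from Excel (e.g. via csv or pandas.read_excel):

import json

# Hypothetical flat rows, one per (location, month) measurement.
rows = [
    {"id": 2411976, "state": "Texas", "county": "DEWITT", "latitude": 29,
     "longitude": -96, "date": "2013-11-01T00:00:00.000Z", "value": 27.516},
    {"id": 2411976, "state": "Texas", "county": "DEWITT", "latitude": 29,
     "longitude": -96, "date": "2013-12-01T00:00:00.000Z", "value": 15.3566},
]

# Group the static columns once per ID and append each measurement to "data".
locations = {}
for row in rows:
    loc = locations.setdefault(row["id"], {
        "id": row["id"], "state": row["state"], "county": row["county"],
        "latitude": row["latitude"], "longitude": row["longitude"],
        "data": [],
    })
    loc["data"].append({"date": row["date"], "value": row["value"]})

print(json.dumps({"locations": list(locations.values())}, indent=2))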

ElasticSearch: exact has lower score than partial match

I am trying to implement address autocomplete using ElasticSearch.
Suppose I have three fields that I would like to implement search on:
{
    "address_name": "George st.",
    "number": "1",
    "city_name": "London"
}
According to this article, I have configured my index and type like this:
{
    "settings": {
        "analysis": {
            "filter": {
                "nGram_filter": {
                    "type": "nGram",
                    "min_gram": 1,
                    "max_gram": 20,
                    "token_chars": [
                        "letter",
                        "digit",
                        "punctuation",
                        "symbol"
                    ]
                }
            },
            "analyzer": {
                "nGram_analyzer": {
                    "type": "custom",
                    "tokenizer": "whitespace",
                    "filter": [
                        "lowercase",
                        "asciifolding",
                        "nGram_filter"
                    ]
                },
                "whitespace_analyzer": {
                    "type": "custom",
                    "tokenizer": "whitespace",
                    "filter": [
                        "lowercase",
                        "asciifolding"
                    ]
                }
            }
        }
    },
    "mappings": {
        "address": {
            "_all": {
                "analyzer": "nGram_analyzer",
                "search_analyzer": "whitespace_analyzer"
            },
            "properties": {
                "address_name": {
                    "type": "string"
                },
                "number": {
                    "type": "string",
                    "boost": 2
                },
                "city_name": {
                    "type": "string"
                },
                "local": {
                    "type": "integer",
                    "include_in_all": false,
                    "index": "no"
                },
                "place_id": {
                    "type": "integer",
                    "include_in_all": false,
                    "index": "no"
                },
                "has_number": {
                    "type": "integer",
                    "include_in_all": false,
                    "index": "no"
                }
            }
        }
    }
}
Full search query:
{
    "size": 100,
    "query": {
        "match": {
            "_all": {
                "query": "George st. 1 London",
                "operator": "and"
            }
        }
    }
}
When I search for George st. 1 London, Elasticsearch first returns George st. 19 London, George st. 17 London, etc., and the exact match George st. 1 London shows up only in X-th place, with a lower score than the first ones.
I tried to understand why this happens by adding explain to the search URL, but it didn't help.
Is there any way to solve this problem?
Thank you.
Basically, since you're running all fields through an nGram token filter at indexing time, it means that for the number field,
17 will be tokenized as 1 and 17 and
19 will be tokenized as 1 and 19
Hence, all three documents you mention will have the token 1 indexed for their number field.
Then at query time, you're using the whitespace analyzer, which means that George st. 1 London will be tokenized into the following tokens: George, st, 1 and London.
From there, we can draw two conclusions:
all three documents will match no matter what (since all tokens match a given field)
there's no way with the current settings and mapping that you can give more weight to the document George st. 1 London than to the others.
The easiest way out of this is to not apply nGram to the number field so that the street number needs to be matched exactly and not with prefixes.
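
A sketch of what that could look like, expressed here as Python dicts using the ES 2.x-era syntax from the question. The mapping fragment gives number the plain whitespace_analyzer so it is indexed without nGrams, and the cross_fields multi_match (my suggestion, not part of the original answer) queries the concrete fields instead of _all, letting each term match in whichever field contains it:

# Mapping fragment: "number" keeps exact tokens instead of 1..20-grams.
number_mapping = {
    "number": {
        "type": "string",
        "boost": 2,
        "analyzer": "whitespace_analyzer",
    }
}

# Query the concrete fields rather than the nGram-analyzed _all field.
query = {
    "size": 100,
    "query": {
        "multi_match": {
            "query": "George st. 1 London",
            "type": "cross_fields",
            "operator": "and",
            "fields": ["address_name", "number", "city_name"],
        }
    }
}

With this, "1" only matches documents whose number field is exactly 1, so George st. 1 London no longer competes with George st. 19 London on the street number.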