Here is a sample JSON file:
sample.json
{
"apps": [
{
"name": "app1"
},
{
"name": "app2"
},
{
"name": "app3"
}
],
"test": [
{
"name": "test1"
},
{
"name": "test2"
}
]
}
I want to divide the above JSON file into the following two files. The idea is to manage the entire configuration in one JSON file, and split it up when necessary to feed each piece to a tool.
apps.json
{
"apps": [
{
"name": "app1"
},
{
"name": "app2"
},
{
"name": "app3"
}
]
}
test.json
{
"test": [
{
"name": "test1"
},
{
"name": "test1"
}
]
}
Running jq .apps sample.json outputs only the value:
[
// does not contain the key
{
"name": "app1"
},
{
"name": "app2"
},
{
"name": "app3"
}
]
Do you have any ideas?
Construct a new object using {x}, which is shorthand for {x: .x}:
jq '{apps}' sample.json
{
"apps": [
{
"name": "app1"
},
{
"name": "app2"
},
{
"name": "app3"
}
]
}
Demo
And likewise with {test}.
To produce both objects in a single invocation, you can do
{apps}, {test}
Demo: https://jqplay.org/s/P_9cc2uANV
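For readers without jq handy, the per-key split above can be sketched in plain Python (standard library only; the file-writing part is illustrative):

```python
import json

def split_by_key(doc):
    """Return {key: {key: value}} for each top-level key,
    mirroring jq's {apps} / {test} shorthand."""
    return {key: {key: value} for key, value in doc.items()}

sample = {
    "apps": [{"name": "app1"}, {"name": "app2"}, {"name": "app3"}],
    "test": [{"name": "test1"}, {"name": "test2"}],
}

parts = split_by_key(sample)
# Each part could then be written to "<key>.json":
# for key, part in parts.items():
#     with open(f"{key}.json", "w") as f:
#         json.dump(part, f, indent=2)
print(json.dumps(parts["apps"], indent=2))
```

Like the jq version, the key is preserved in each output object, so each fragment stays a valid standalone configuration file.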
I have two JSON objects in two files (result_0.json and result_1.json) which look like this
{
"data": {
"pools": [
{
"id": "1"
},
{
"id": "2"
}
]
}
}
and like this:
{
"data": {
"pools": [
{
"id": "3"
},
{
"id": "4"
}
]
}
}
What I would like to get looks like this:
{
"data": {
"pools": [
{
"id": "1"
},
{
"id": "2"
},
{
"id": "3"
},
{
"id": "4"
}
]
}
}
How can it be done? I tried
jq -s add result_0.json result_1.json
but it just overwrites the values in result_0.json with the values of result_1.json.
If .data and .pools are the only keys in the JSON files, you can use
jq -n '{ data: { pools: [inputs.data.pools] | add } }' result0 result1
This will create the desired output:
{
"data": {
"pools": [
{
"id": "1"
},
{
"id": "2"
},
{
"id": "3"
},
{
"id": "4"
}
]
}
}
Regarding the inputs keyword, see jq's documentation on input and inputs.
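The same aggregation can be sketched in plain Python for comparison (stdlib only) — each input's .data.pools list is concatenated, mirroring what [inputs.data.pools] | add does in jq:

```python
import json

def merge_pools(docs):
    """Concatenate .data.pools across all inputs, like
    jq -n '{data: {pools: [inputs.data.pools] | add}}'."""
    pools = []
    for doc in docs:
        pools += doc["data"]["pools"]
    return {"data": {"pools": pools}}

result_0 = {"data": {"pools": [{"id": "1"}, {"id": "2"}]}}
result_1 = {"data": {"pools": [{"id": "3"}, {"id": "4"}]}}
print(json.dumps(merge_pools([result_0, result_1]), indent=2))
```

This makes the difference from jq -s add visible: add on two objects overwrites the shared "data" key, whereas here only the inner pools arrays are combined.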
I want to perform two operations on my JSON file. I tried to do it with jq and the shell.
First: I want to transform the parent keys into plain-text header lines.
Second: I want to remove one specific level in the JSON tree.
Input :
{
"template_first": {
"order": 0,
"index_patterns": [
"first"
],
"settings": {
"index": {
"codec": "best_compression",
"refresh_interval": "30s",
"analysis": {
"normalizer": {
"norm_case_insensitive": {
"filter": "lowercase",
"type": "custom"
}
}
},
"number_of_shards": "1",
"number_of_replicas": "1"
}
},
"mappings": {
"_doc": {
"dynamic": true,
"dynamic_templates": [
{
"strings": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"log.id": {
"type": "keyword"
},
"host.indexer.hostname": {
"type": "keyword"
},
"ts_indexer": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
}
}
}
},
"template_second": {
"order": 0,
"index_patterns": [
"second"
],
"settings": {
"index": {
"codec": "best_compression",
"refresh_interval": "30s",
"analysis": {
"normalizer": {
"norm_case_insensitive": {
"filter": "lowercase",
"type": "custom"
}
}
},
"number_of_shards": "1",
"number_of_replicas": "1"
}
},
"mappings": {
"_doc": {
"dynamic": true,
"dynamic_templates": [
{
"strings": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"log.id": {
"type": "keyword"
},
"host.indexer.hostname": {
"type": "keyword"
},
"ts_indexer": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
}
}
}
}
}
As you can see, there are two JSON objects in the file:
{
"template_first" : { ...},
"template_second" : { ... }
}
The first modification turns each top-level key into a command line of the form
PUT _template/template_name
placed in front of the corresponding object.
So the expected result
PUT _template/template_first
{...}
PUT _template/template_second
{...}
The second change is the removal of the _doc level.
Before:
"mappings": {
"_doc": {
"dynamic": true,
"dynamic_templates": [
{
"strings": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"log.id": {
"type": "keyword"
},
"host.indexer.hostname": {
"type": "keyword"
},
"ts_indexer": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
}
}
}
Expected result
"mappings": {
"dynamic": true,
"dynamic_templates": [
{
"strings": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"log.id": {
"type": "keyword"
},
"host.indexer.hostname": {
"type": "keyword"
},
"ts_indexer": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
}
}
So the complete expected result looks like this:
PUT _template/template_first
{
"order": 0,
"index_patterns": [
"first"
],
"settings": {
"index": {
"codec": "best_compression",
"refresh_interval": "30s",
"analysis": {
"normalizer": {
"norm_case_insensitive": {
"filter": "lowercase",
"type": "custom"
}
}
},
"number_of_shards": "1",
"number_of_replicas": "1"
}
},
"mappings": {
"dynamic": true,
"dynamic_templates": [
{
"strings": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"log.id": {
"type": "keyword"
},
"host.indexer.hostname": {
"type": "keyword"
},
"ts_indexer": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
}
}
}
PUT _template/template_second
{
"order": 0,
"index_patterns": [
"second"
],
"settings": {
"index": {
"codec": "best_compression",
"refresh_interval": "30s",
"analysis": {
"normalizer": {
"norm_case_insensitive": {
"filter": "lowercase",
"type": "custom"
}
}
},
"number_of_shards": "1",
"number_of_replicas": "1"
}
},
"mappings": {
"dynamic": true,
"dynamic_templates": [
{
"strings": {
"mapping": {
"type": "keyword"
},
"match_mapping_type": "string"
}
}
],
"properties": {
"log.id": {
"type": "keyword"
},
"host.indexer.hostname": {
"type": "keyword"
},
"ts_indexer": {
"format": "strict_date_optional_time||epoch_millis",
"type": "date"
}
}
}
}
I managed to do the second change (deleting one level of the JSON tree) by using the command
jq 'keys[] as $k | map(.mappings = .mappings._doc)' template.json
But I don't know how to do the first change and the second change at the same time.
I tried to loop over the keys like this, without success:
for row in $(jq 'keys[] as $k | "\($k)"' template.json); do
_jq() {
echo ${row}
}
echo $(_jq '.name')
done
Calling jq just once, and having it write a NUL-delimited list of template-name / modified-template-content pairs (which a bash while read loop can then iterate over):
while IFS= read -r -d '' template_name && IFS= read -r -d '' template_content; do
echo "We want to do PUT the following to _template/$template_name"
printf '%s\n' "$template_content"
done < <(
jq -j '
to_entries[] |
.key as $template_name |
.value as $template_content |
($template_name, "\u0000",
($template_content | (.mappings = .mappings._doc) | tojson), "\u0000")
' <infile.json
)
I had some trouble with the done < <( part, which caused a syntax error in my shell (probably because process substitution is a bash feature, not available in plain sh). So I modified your script like this:
jq -j 'to_entries[] | .key as $template_name | .value as $template_content | ($template_name, "\u0000", ($template_content | (.mappings = .mappings._doc) | tojson), "\u0000")' < infile.json |
while IFS= read -r -d '' template_name && IFS= read -r -d '' template_content; do
echo "PUT _template/$template_name"
printf '%s\n' "$template_content"
done
This does the job perfectly! Thanks, Charles.
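For reference, the whole transformation (flattening .mappings._doc and emitting a PUT line per template) can also be sketched in plain Python; the template body here is abbreviated for illustration:

```python
import json

def render_templates(doc):
    """For each top-level template, drop the _doc level under
    .mappings and emit 'PUT _template/<name>' plus the body."""
    chunks = []
    for name, body in doc.items():
        body = dict(body)  # shallow copy so the input is not mutated
        body["mappings"] = body["mappings"]["_doc"]
        chunks.append(f"PUT _template/{name}\n{json.dumps(body, indent=2)}")
    return "\n".join(chunks)

templates = {
    "template_first": {
        "order": 0,
        "mappings": {"_doc": {"dynamic": True}},
    },
}
print(render_templates(templates))
```

The structure matches the jq pipeline: to_entries[] corresponds to iterating over doc.items(), and .mappings = .mappings._doc corresponds to the reassignment inside the loop.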
I am trying to merge the files using --argjson, but it throws
argument list too long
Is there any other way I can merge these files?
I tried to use --slurpfile but can't get the desired output.
[
{
"path": "1"
},
{
"path": "a/1"
},
{
"path": "a/2"
}
]
jq --argjson groupInfo "$(jq .data file1.json)" '.records[].version.data+=$groupInfo' file2.json
File 1:
{
"id": "test",
"data": [
{
"path": "a/1"
},
{
"path": "a/2"
}
],
"information": {
"id": "1"
}
}
File 2:
{
"records": [
{
"version": {
"data": [
{
"path": "1"
}
]
}
}
]
}
Output File:
{
"records": [
{
"version": {
"data": [
{
"path": "1"
},
{
"path": "a/1"
},
{
"path": "a/2"
}
]
}
}
]
}
The --arg and --argjson options are intended for small bits of JSON. Although the --argfile option is technically deprecated, it fits nicely here with your approach:
jq --argfile groupInfo <(jq .data file1.json) '
.records[].version.data+=$groupInfo' file2.json
There are other options. E.g.
jq -s '
.[0].data as $groupInfo
| .[1]
| .records[].version.data+=$groupInfo
' file1.json file2.json
I'll let you figure out how to use --slurpfile :-)
I have different language files like these:
file1
{
"Pack": [
{
"id": "item1",
"lang": {
"en": {
}
}
},
{
"id": "item2",
"lang": {
"en": {
}
}
}
]
}
file2
{
"Pack": [
{
"id": "item1",
"lang": {
"sp": {
}
}
}
]
}
and I need to merge entries that share the same id by their lang field. The final file should look like:
{
"Pack": [
{
"id": "item1",
"lang": {
"en": {
},
"sp": {
}
}
},
{
"id": "item2",
"lang": {
"en": {
}
}
}
]
}
I think I need a more complex command, but my starting point is:
jq -s '{ attributes: map(.attributes[0]) }' file*.json
First you'll want to read in all files as input, then combine all Pack items, grouping them by id, and finally arrange those groups into the result you need.
$ jq -n '
{Pack: ([inputs.Pack[]] | group_by(.id) | map({id: .[0].id, lang: (map(.lang) | add)}))}
' file*.json
This results in:
{
"Pack": [
{
"id": "item1",
"lang": {
"en": {},
"sp": {}
}
},
{
"id": "item2",
"lang": {
"en": {}
}
}
]
}
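The group-and-merge step can be sketched in plain Python as well (stdlib only); itertools.groupby plays the role of jq's group_by, which is why the items are sorted by id first:

```python
import json
from itertools import groupby

def merge_packs(docs):
    """Group all Pack items by id and merge their lang objects,
    mirroring group_by(.id) | map({id, lang: (map(.lang) | add)})."""
    items = [item for doc in docs for item in doc["Pack"]]
    items.sort(key=lambda item: item["id"])  # groupby needs sorted input
    merged = []
    for item_id, group in groupby(items, key=lambda item: item["id"]):
        lang = {}
        for item in group:
            lang.update(item["lang"])  # later files win on conflicts, like jq's add
        merged.append({"id": item_id, "lang": lang})
    return {"Pack": merged}

file1 = {"Pack": [{"id": "item1", "lang": {"en": {}}},
                  {"id": "item2", "lang": {"en": {}}}]}
file2 = {"Pack": [{"id": "item1", "lang": {"sp": {}}}]}
print(json.dumps(merge_packs([file1, file2]), indent=2))
```

As with the jq filter, merging the lang objects with a dict update means a key appearing in several files keeps the last value seen, which matches jq's add semantics for objects.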