How can I copy a JSON block to another file?

I have two JSON files with the following content:
foo.json:
{
"name": "foo",
"block": {
"one": 1,
"two": "2"
},
"otherData": {
"two": 1,
"one": "2"
}
}
bar.json:
{
"name": "bar"
}
I want to copy the block from foo.json to bar.json in one line so bar.json looks like this:
{
"name": "bar",
"block": {
"one": 1,
"two": "2"
}
}
I tried:
jq --argjson block '{"block": "$(jq '.block' ./foo.json)"}' '. += [$block]' ./bar.json | sponge ./bar.json

The + operator combines two objects. The {block} construct is shorthand for {block: .block}, so it builds a new object containing only the block key, and that is what gets added.
jq ' . + ( input | {block} )' bar.json foo.json | sponge bar.json
Note: sponge is a utility from the moreutils package, which needs to be installed separately on your system; see the moreutils page for setup instructions.
Exercise caution when using it, because it overwrites the target file with whatever arrives on standard input. If you are not completely sure, verify the result by writing to standard output before piping to sponge.
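If sponge is not available, a temporary file achieves the same in-place update (a minimal sketch; the .tmp name is arbitrary):
jq '. + (input | {block})' bar.json foo.json > bar.json.tmp && mv bar.json.tmp bar.json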

Related

Transfer or merge only some properties from one JSON file to another with jq

I have two JSON files:
$ jq . a.json b.json
{
"id": "ZGVhZGJlZWY=",
"name": "first file",
"version": 1,
"description": "just a simple json file"
}
{
"version": 2,
"name": "fake name",
"dependencies": [
4,
2
],
"comment": "I'm just sitting here, ignore me"
}
and want to merge them into a single file (think of file 1 as "template" and file 2 as "actual values"). I don't want to merge all properties, I only want to transfer some properties of the second file (specifically only version and dependencies). version should overwrite the value in the original file and dependencies should be added to the new file. name must not be overwritten and the original name must be kept.
This is the expected result:
{
"id": "ZGVhZGJlZWY=",
"name": "first file",
"version": 2,
"description": "just a simple json file",
"dependencies": [
4,
2
]
}
I know that jq supports the + and * operators to merge or merge recursively, but how can I apply those to only some properties and not all? How can I access both files in my jq program; do I have to preprocess the file and then use --arg in a second jq call?
Obviously, jq '. + {version, dependencies}' a.json b.json does not work. What is the correct program here?
What would the solution look like if description should also be dropped from the output?
If you want simplicity, brevity, and efficiency, consider:
jq '. + (input|{version, dependencies})' a.json b.json
If the first file might have .dependencies, and if in that case you want to add in the second file's:
jq '. as $a | input as $b | $a + ($b|{version}) | .dependencies += $b.dependencies' a.json b.json
To drop .description, you could append | del(.description) to either of these filters.
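For the last point, a quick sketch combining the first filter with that del step:
jq '. + (input|{version, dependencies}) | del(.description)' a.json b.json
which produces the expected result without the description field.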
+ or * can be used here, correct. Let's first see how + works:
$ jq -n '{a:1,b:2} + {b:3,c:4}'
{
"a": 1,
"b": 3,
"c": 4
}
Properties only present in the left object are kept
Properties of the right object overwrite properties of the left object
Properties only present in the right object are added
Perfect, now how to get the objects from two unrelated files? --slurpfile can be used, which reads all JSON entities in the file into an array and puts it into a variable.
$ jq --slurpfile b b.json '. + $b[0]' a.json
{
"id": "ZGVhZGJlZWY=",
"name": "fake name",
"version": 2,
"description": "just a simple json file",
"dependencies": [
4,
2
],
"comment": "I'm just sitting here, ignore me"
}
We are getting closer, but are not quite there yet: name is overwritten and comment is added, neither of which we want. To solve this, we can transform the slurped object into a new object that contains only the properties we care about:
$ jq --slurpfile b b.json '. + ($b[0] | {version,dependencies})' a.json
{
"id": "ZGVhZGJlZWY=",
"name": "first file",
"version": 2,
"description": "just a simple json file",
"dependencies": [
4,
2
]
}
Now let's address part two of the question: "can some properties of the first file be dropped?"
There are basically two options:
Creating a new object containing only the required properties and then adding the second object (any property part of the second file can be ignored, since it will be added anyway): {id,name} + ($b[0] | {version,dependencies})
Deleting the unneeded properties: del(.description) + ($b[0] | {version,dependencies}) or . + ($b[0] | {version,dependencies}) | del(.description)
Depending on the number of properties you want to keep/drop, one or the other solution might be simpler to use. Creating a new object has the advantage of being able to rename properties in one go.
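For example, a sketch of the first option that also renames a property while picking (the title key is purely illustrative):
jq --slurpfile b b.json '{id, title: .name} + ($b[0] | {version,dependencies})' a.json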
Executing solution 2:
$ jq --slurpfile b b.json 'del(.description) + ($b[0] | {version,dependencies})' a.json
{
"id": "ZGVhZGJlZWY=",
"name": "first file",
"version": 2,
"dependencies": [
4,
2
]
}

How to update a json file by the contents read from other files using jq?

I have several text files, each of which has a title inside. For example:
echo 'title: hello' > 1.txt
echo 'title: world' > 2.txt
echo 'title: good' > 3.txt
And I have a JSON file called abc.json generated by a shell script like this:
{
"": [
{
"title": "",
"file": "1"
},
{
"title": "",
"file": "2"
},
{
"title": "",
"file": "3"
}
]
}
What I want is to update the title value in the abc.json by the title in the respective text file, like this:
{
"": [
{
"title": "hello",
"file": "1"
},
{
"title": "world",
"file": "2"
},
{
"title": "good",
"file": "3"
}
]
}
The text files and the JSON files are in the same directory like this:
➜ tmp.uFtH6hMC ls
1.txt 2.txt 3.txt abc.json
Thank you very much!
Updated requirement
Sorry, guys. All your answers are perfect for the requirement above.
But I missed some important details:
The filenames of the text files may contain spaces, so the current directory actually looks like this:
➜ $ gfind . -maxdepth 1 -type f -printf '%P\n'
The text file contain one title line and more content.txt
The title identifier in the text file is fixed.txt
The filename of text file may contain space.txt
abc.json
The text files include one title line which contains the title value to be extracted into abc.json, i.e. ## hello means that "hello" needs to be put into the title field in abc.json. The title line can be any line in the file and looks like ## <title-value>. The title identifier ## is fixed and is separated from the title value by a single whitespace, which is the first whitespace in the title line. So the text files' content could look like this:
The text file contain one title line and more content.txt:
## hello world
some more content below...
...
The title identifier in the text file is fixed.txt:
## How are you?
some more content below...
...
The filename of text file may contain space.txt:
some pre-content...
...
## I'm fine, thank you.
some more content below...
...
Before updating, the abc.json looks like this:
{
"": [
{
"title": "",
"file": "The filename of text file may contain space"
},
{
"title": "",
"file": "The text file contain one title line and more content"
},
{
"title": "",
"file": "The title identifier in the text file is fixed"
}
]
}
After updating, the abc.json should be like this:
{
"": [
{
"title": "I'm fine, thank you.",
"file": "The filename of text file may contain space"
},
{
"title": "hello world",
"file": "The text file contain one title line and more content"
},
{
"title": "How are you?",
"file": "The title identifier in the text file is fixed"
}
]
}
Sorry again...thank you for your patience and great help!
You can use a shell loop to iterate over your files, extract the title (everything after the first space), create each array element, and then transform the stream of array elements into your final object:
for f in *.txt; do
cut -d' ' -f2- "$f" | jq -R --arg file "$f" '{title:.,file:($file/"."|first)}';
done | jq -s '{"":.}'
It is also possible to remove the file extension in shell directly, which makes the jq filter a little bit simpler:
for f in *.txt; do
cut -d' ' -f2- "$f" | jq -R --arg file "${f%.txt}" '{title:.,$file}';
done | jq -s '{"":.}'
cut extracts the title value and must be adapted if the files are structured differently, e.g. by using grep, sed, or awk to extract the title and then feeding it to jq.
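For the updated requirement, where the title line starts with ## and may appear anywhere in the file, a sketch using grep to pick that line (otherwise the same pattern as above):
for f in *.txt; do
grep -m1 '^## ' "$f" | cut -d' ' -f2- | jq -R --arg file "${f%.txt}" '{title:.,$file}';
done | jq -s '{"":.}'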
Since the .file value carries the same number as the corresponding text file's name, we can use that number to index into the input.
So using cut we can read all the *.txt files, split on spaces, and take everything from the second field onward; this gives:
cat *.txt | cut -d ' ' -f 2-
hello
world
good
(titles with spaces will work due to the -f 2-)
Using --arg we pass that to jq, which we then parse into an array:
($inputs | split("\n")) as $parsed
Now that $parsed looks like:
[
"hello",
"world",
"good"
]
To update the value, loop over each object in the "" array, then get the matching value from $parsed by using .file | tonumber - 1 (since arrays are 0-indexed):
jq --arg inputs "$(cat *.txt | cut -d ' ' -f 2-)" \
'($inputs | split("\n")) as $parsed
| .""[]
|= (.title = $parsed[.file | tonumber - 1])' \
abc.json
Output:
{
"": [
{
"title": "hello",
"file": "1"
},
{
"title": "world",
"file": "2"
},
{
"title": "good",
"file": "3"
}
]
}
Use input_filename to get the input files' names, read their raw content with the -R flag set, and use select to find the right item to update; all in one go:
jq -Rn --argfile base abc.json '
reduce (inputs | [
ltrimstr("title: "),
(input_filename | rtrimstr(".txt"))
]) as [$title, $file] ($base;
(.[""][] | select(.file == $file)).title = $title
)
' *.txt
If the left part of the text files' contents ("title" in the samples) should be a dynamic field name, you could capture it as well:
jq -Rn --argfile base abc.json '
reduce (inputs | [
capture("^(?<key>.*): (?<value>.*)$"),
(input_filename | rtrimstr(".txt"))
]) as [{$key, $value}, $file] ($base;
(.[""][] | select(.file == $file))[$key] = $value
)
' *.txt
Output:
{
"": [
{
"title": "hello",
"file": "1"
},
{
"title": "world",
"file": "2"
},
{
"title": "good",
"file": "3"
}
]
}
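The --argfile option is deprecated in newer jq releases; --slurpfile is a drop-in alternative, except that the variable is then bound to an array of entities, so the reduce starts from $base[0]. A sketch of the first variant under that assumption:
jq -Rn --slurpfile base abc.json '
reduce (inputs | [
ltrimstr("title: "),
(input_filename | rtrimstr(".txt"))
]) as [$title, $file] ($base[0];
(.[""][] | select(.file == $file)).title = $title
)
' *.txt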

jq merge json via dynamic sub keys

I think I'm a step off from figuring out how to jq reduce via filter a key to another object's sub-key.
I'm trying to combine files (simplified from Elasticsearch's ILM Explain & ILM Policy API responses):
$ echo '{".siem-signals-default": {"modified_date": "siem", "version": 1 }, "kibana-event-log-policy": {"modified_date": "kibana", "version": 1 } }' > ip1.json
$ echo '{"indices": {".siem-signals-default-000001": {"action": "complete", "index": ".siem-signals-default-000001", "policy" : ".siem-signals-default"} } }' > ie1.json
Such that the resulting JSON is:
{
".siem-signals-default-000001": {
"modified_date": "siem",
"version": 1
"action": "complete",
"index": ".siem-signals-default-000001",
"policy": ".siem-signals-default"
}
}
Where ie1 is the base JSON, and for each child object, its sub-element policy should line up with ip1's key so that ip1's sub-elements are copied into it. I've been trying to build off several related StackOverflow answers and external posts. I'll list various rabbit-hole attempts building off these, but they're all insufficient:
$ ((cat ie1.json | jq '.indices') && cat ip1.json) | jq -s 'map(to_entries)|flatten|from_entries' | jq '. as $v| reduce keys[] as $k({}; if true then .[$k] += $v[$k] else . end)'
{
".siem-signals-default": {
"modified_date": "siem",
"version": 1
},
".siem-signals-default-000001": {
"action": "complete",
"index": ".siem-signals-default-000001",
"policy": ".siem-signals-default"
},
"kibana-event-log-policy": {
"modified_date": "kibana",
"version": 1
}
}
$ jq --slurpfile ip1 ip1.json '.indices as $ie1|$ie1+{ilm: $ip1 }' ie1.json
{
".siem-signals-default-000001": {
"action": "complete",
"index": ".siem-signals-default-000001",
"policy": ".siem-signals-default"
},
"ilm": [
{
".siem-signals-default": {
"modified_date": "siem",
"version": 1
},
"kibana-event-log-policy": {
"modified_date": "kibana",
"version": 1
}
}
]
}
I also expected something like this to work, but it fails with a compile error:
$ jq -s ip1 ip1.json '. as $ie1|$ie1 + {ilm:(keys[] as $k; $ip1 | select(.policy == $ie1[$k]) | $ie1[$k] )}' ie1.json
jq: error: ip1/0 is not defined at <top-level>, line 1:
ip1
jq: 1 compile error
From this you can see I've found various ways to join the separate files, but the code I thought would handle the filtering isn't correct / isn't taking effect. Does anyone have an idea how to get the filter part working? TIA
This assumes you are trying to combine the .indices object stored in ie1.json with an object nested inside the object stored in ip1.json. As the keys to match on differ, I further assumed that you want to match each field name from the .indices object, reduced by cutting off everything after the last dash -, against the corresponding key in the object from ip1.json.
To this end, ip1.json is read in via input as $ip (alternatively you can use jq --argfile ip ip1.json), then the .indices object is taken from the first input ie1.json, and each inner object, accessed via with_entries(.value …), has added to it the result of a lookup in $ip at the accordingly reduced .key.
jq '
input as $ip | .indices | with_entries(.value += $ip[.key | sub("-[^-]*$";"")])
' ie1.json ip1.json
{
".siem-signals-default-000001": {
"action": "complete",
"index": ".siem-signals-default-000001",
"policy": ".siem-signals-default",
"modified_date": "siem",
"version": 1
}
}
If, instead of the .indices object's inner field name, you want to use the content of the field .index as the reference (which in your sample data has the same value), you can go with map_values instead of with_entries, as you don't need the field's name anymore.
jq '
input as $ip | .indices | map_values(. += $ip[.index | sub("-[^-]*$";"")])
' ie1.json ip1.json
Note: I used sub with a regex to manipulate the key name, which you can easily adjust to your liking if in reality it is more complicated. If, however, the pattern is in fact as simple as cutting off everything after the last dash, then using .[:rindex("-")] instead will also get the job done.
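For completeness, a sketch of the first command rewritten with that slice (assuming each key contains at least one dash):
jq '
input as $ip | .indices | with_entries(.value += $ip[.key | .[:rindex("-")]])
' ie1.json ip1.json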
I also received offline feedback with a simpler answer that is "workable for my use case" but not exact:
$ jq '.indices | map(. * input[.policy])' ie1.json ip1.json
[
{
"action": "complete",
"index": ".siem-signals-default-000001",
"policy": ".siem-signals-default",
"modified_date": "siem",
"version": 1
}
]
Posting in case someone runs into something similar, but the other answer is better.

Jq: appending an object from 1 file into another file

Using jq, how can I take a JSON object from a file (input_02.json) and append it to output.json, while retaining everything already in output.json (e.g. an object originating from file input_01.json)?
The object to be appended in both cases is literally the entire contents of the file, with the file's "id" field as the object's key.
I'm taking a large list of input files (all with the same syntax) and essentially combining them like that.
The command i'm using to create the object to be appended is as follows:
jq '{(.id):(.)} ' input_01.json
which gives me:
{
"input1_id": {
}
}
input_1.json:
{
"id": "input1_id",
"val": "testVal1"
}
input2.json:
{
"id": "input2_id",
"val": "testVal2"
}
desired output:
{
"input1_id": {
"id": "input1_id",
"val": "testVal1"
},
"input2_id": {
"id": "input2_id",
"val": "testVal2"
}
}
You’re on the right track with {(.id):(.)}. The following should handle the case you mentioned, and might give you some ideas about similar cases:
program.jq: map({(.id):(.)}) | add
Invocation:
jq -s -f program.jq input_01.json input_02.json
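If you prefer not to slurp, a streaming sketch with the same result (assuming one JSON object per file):
jq -n 'reduce inputs as $obj ({}; . + {($obj.id): $obj})' input_01.json input_02.json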
You could use "jf" for this from https://pypi.python.org/pypi/jf
$ pip install jf
$ jf 'chain(), {y["id"]: y for y in x}' input1.json input2.json
{
"input2_id": {
"id": "input2_id",
"val": "testVal2"
},
"input1_id": {
"id": "input1_id",
"val": "testVal1"
}
}

json jq add same element to each object/array

I have files with json structure like this:
[
{
"uid": 11111,
"something": {
(...)
}
},
{
"uid": 22222,
"something": {
(...)
}
}
]
I'll read all files at once (cat *) and I'd like to know which part is from which file, so I need to group them in some way.
So my idea is to move the content of each file into a higher (parent) object with its own members:
[
{
"var1": "val1"
"var2": "val2"
{
"uid": 11111,
"something": {
(...)
}
},
{
"uid": 22222,
"something": {
(...)
}
}
}
How to do that with jq?
#!/bin/bash
# For simplicity, assume each file in FILELIST contains a single JSON entity.
# Then instead of using cat FILELIST, use mycat FILELIST, e.g. mycat *.json
function mycat {
for file
do
jq --arg file "$file" '{"file": $file, "contents": .}' "$file"
done
}
If you have a sufficiently recent version of jq (e.g. jq 1.5) then one alternative would be:
jq '{file: input_filename, contents: .}' FILELIST
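Either way, the per-file objects can be gathered into one array by slurping the result, e.g. building on the mycat helper above:
mycat *.json | jq -s .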