Replacing specific fields in JSON from text file

I have a JSON structure and would like to replace the strings in 2 fields with values that are in a separate text file.
Here is the json file with 2 records:
{
"events" : {
"-KKQQIUR7FAVxBOPOFhr" : {
"dateAdded" : 1487592568926,
"owner" : "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type" : "boycott"
},
"-KKjjM-pAXvTuEjDjoj_" : {
"dateAdded" : 1487933370561,
"owner" : "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type" : "boycott"
}
},
"geo" : {
"-KKQQIUR7FAVxBOPOFhr" : {
".priority" : "qw3yttz1k9",
"g" : "qw3yttz1k9",
"l" : [ 40.762632, -73.973837 ]
},
"-KKjjM-pAXvTuEjDjoj_" : {
".priority" : "qw3yttx6bv",
"g" : "qw3yttx6bv",
"l" : [ 41.889019, -87.626291 ]
}
},
"log" : "null",
"users" : {
"62e6aaa0-a50c-4448-a381-f02efde2316d" : {
"events" : {
"-KKQQIUR7FAVxBOPOFhr" : {
"type" : "boycott"
},
"-KKjjM-pAXvTuEjDjoj_" : {
"type" : "boycott"
}
}
}
}
}
And here is the txt file whose values I want to substitute in:
49.287130, -123.124026
36.129770, -115.172811
There are lots more records but I kept this to 2 for brevity.
Any help would be appreciated. Thank you.

The problem description seems to assume that the ordering of the key-value pairs within a JSON object is fixed. Different JSON-oriented tools (and indeed different versions of jq) have different takes on this. In any case, the following assumes a version of jq that respects the ordering (e.g. jq 1.5); it also assumes that inputs is available, though that is inessential.
The key to the following solution is the helper function, map_nth_value/2, which modifies the value of the nth key in a JSON object:
def map_nth_value(n; filter):
  to_entries
  | (.[n] |= {"key": .key, "value": (.value | filter)} )
  | from_entries ;

[inputs | select(length > 0) | split(",") | map(tonumber)] as $lists
| reduce range(0; $lists|length) as $i
    ( $object;
      .geo |= map_nth_value($i; .l = $lists[$i] ) )
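As a quick sanity check of the helper on its own (my illustration, not part of the original answer), applying map_nth_value to a toy object updates just the nth entry:

echo '{"a": 1, "b": 2}' | jq 'def map_nth_value(n; filter): to_entries | (.[n] |= {"key": .key, "value": (.value | filter)}) | from_entries; map_nth_value(1; . + 10)'

which yields {"a": 1, "b": 12}.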
With the above jq program in a file (say program.jq), the text lines in a file (say input.txt), and the JSON object in a file (say object.json), the following invocation:
jq -R -n --argfile object object.json -f program.jq input.txt
produces:
{
"events": {
"-KKQQIUR7FAVxBOPOFhr": {
"dateAdded": 1487592568926,
"owner": "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type": "boycott"
},
"-KKjjM-pAXvTuEjDjoj_": {
"dateAdded": 1487933370561,
"owner": "62e6aaa0-a50c-4448-a381-f02efde2316d",
"type": "boycott"
}
},
"geo": {
"-KKQQIUR7FAVxBOPOFhr": {
".priority": "qw3yttz1k9",
"g": "qw3yttz1k9",
"l": [
49.28713,
-123.124026
]
},
"-KKjjM-pAXvTuEjDjoj_": {
".priority": "qw3yttx6bv",
"g": "qw3yttx6bv",
"l": [
36.12977,
-115.172811
]
}
},
"log": "null",
"users": {
"62e6aaa0-a50c-4448-a381-f02efde2316d": {
"events": {
"-KKQQIUR7FAVxBOPOFhr": {
"type": "boycott"
},
"-KKjjM-pAXvTuEjDjoj_": {
"type": "boycott"
}
}
}
}
}
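If your jq marks --argfile as deprecated (newer releases point to --slurpfile instead), a minimally adapted invocation might look like this (a sketch under that assumption; $object becomes $object[0] because --slurpfile binds the variable to an array of the file's JSON texts):

jq -R -n --slurpfile object object.json -f program.jq input.txt

with the reduce in program.jq starting from $object[0] rather than $object.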

Related

Fill arrays in the first input with elements from the second based on common field

I have two files and I need to merge the elements of the second file into an object array in the first file by matching on the reference field.
The first file:
[
{
"reference": 25422,
"order_number": "10_1",
"details" : []
},
{
"reference": 25423,
"order_number": "10_2",
"details" : []
}
]
The second file:
[
{
"record_id" : 1,
"reference": 25422,
"row_description": "descr_1_0"
},
{
"record_id" : 2,
"reference": 25422,
"row_description": "descr_1_1"
},
{
"record_id" : 3,
"reference": 25423,
"row_description": "descr_2_0"
}
]
I would like to get:
[
{
"reference": 25422,
"order_number": "10_1",
"details" : [
{
"record_id" : 1,
"reference": 25422,
"row_description": "descr_1_0"
},
{
"record_id" : 2,
"reference": 25422,
"row_description": "descr_1_1"
}
]
},
{
"reference": 25423,
"order_number": "10_2",
"details" :[
{
"record_id" : 3,
"reference": 25423,
"row_description": "descr_2_0"
}
]
}
]
Below is my code in the es_func.jq file, launched by this command:
jq -n --argfile f1 es_file1.json --argfile f2 es_file2.json -f es_func.jq
INDEX($f2[] ; .reference) as $details
| $f1
| map( ($details[.reference|tostring]| .row_description) as $vn
| if $vn then .details = [{"row_description" : $vn}] else . end)
I only get the last record for reference 25422, the one with "row_description": "descr_1_1"; "row_description": "descr_1_0" is missing:
[
{
"reference": 25422,
"order_number": "10_1",
"details": [
{
"row_description": "descr_1_1"
}
]
},
{
"reference": 25423,
"order_number": "10_2",
"details": [
{
"row_description": "descr_2_0"
}
]
}
]
I think I'm close to the solution but something is still missing. Thank you.
This would be way easier if you used reduce instead.
jq 'reduce inputs[] as $rec (INDEX(.reference);
      .[$rec.reference | tostring].details += [$rec]
    ) | map(.)' es_file1.json es_file2.json
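To see why this works, it helps to look at the object that INDEX(.reference) builds from the first file before the reduce starts appending (an illustration based on the sample data; the keys become strings because INDEX applies tostring):

{
  "25422": { "reference": 25422, "order_number": "10_1", "details": [] },
  "25423": { "reference": 25423, "order_number": "10_2", "details": [] }
}

Each record streamed from the second file is then appended to the details of its own bucket, and the final map(.) turns the object back into an array of its values.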
Here's a straightforward, reduce-free solution:
jq '
group_by(.reference)
| INDEX(.[]; .[0]|.reference|tostring) as $dict
| input
| map_values(. + {details: $dict[.reference|tostring]})
' 2.json 1.json
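Here the first input is the second file, so group_by(.reference) buckets its records and $dict ends up looking roughly like this (again only an illustration):

{
  "25422": [
    { "record_id": 1, "reference": 25422, "row_description": "descr_1_0" },
    { "record_id": 2, "reference": 25422, "row_description": "descr_1_1" }
  ],
  "25423": [
    { "record_id": 3, "reference": 25423, "row_description": "descr_2_0" }
  ]
}

input then reads the first file, and each of its objects picks up its whole group as details. One caveat (my assumption, not exercised by the sample data): a reference present only in the first file would end up with details set to null rather than [] under this approach.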

Using jq to extract multiple fields and create a new object

I have this particular json object,
[
{
"userid" : "fe2e48b7-858b-4a0d-964a-efb8483a00c4",
"lastupdateddate" : "84798000-13cd-11ea-8080-808080808080",
"transactionid" : "10383117.2216238756",
"accountid" : "10383117.10921962",
"misctransactiondata" : null,
"rawtransactiondata" : "{\"id\":\"1234567\",\"account_id\":\"456451962\"}",
"source" : "gateway",
"transactiondatajson" : "{\"version\":\"v1\",\"transactionId\":\"4234234.2216238756\",\"accountId\":\"345345345.10921962\"}",
"version" : "v1"
}
]
which I'd like to transform into,
{
"transactions": [
{
"version": "v1",
"transactionId": "4234234.2216238756",
"accountId": "345345345.10921962",
"rawData": {
"id": "1234567",
"account_id": "456451962"
}
}
]
}
Currently I have,
jq '{transactions: [.[0] | (.transactiondatajson|fromjson) ]}'
which creates the transactions array of objects; however, I'm not entirely sure how to create the rawData nested object from .rawtransactiondata.
How do I best append that object with jq?
One of many possibilities:
.[]
| {transactions:
    [ (.transactiondatajson|fromjson)
      + {rawData: (.rawtransactiondata|fromjson)} ] }
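If the input array can hold more than one record (my assumption; the sample has only one), a small variant collects everything into a single transactions array instead of emitting one object per record:

jq '{transactions: [ .[]
      | (.transactiondatajson|fromjson)
        + {rawData: (.rawtransactiondata|fromjson)} ]}' input.json

where input.json is a hypothetical filename for the array shown in the question.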

JQ add properties to nested object in nested array

I have the following json:
{
"first": {
"second" : "A"
},
"array": [
{
"name" : "AAA",
"something": {
"hola": "hi"
}
},
{
"name" : "BBB",
"something": {
"hola": "hi"
}
}
]
}
I would like to transform it by adding a property to the something object, using the value from the name property of the parent, like this:
{
"first": {
"second" : "A"
},
"array": [
{
"name" : "AAA",
"something": {
"hola": "hi",
"NEW_PROPERTY": "AAA"
}
},
{
"name" : "BBB",
"something": {
"hola": "hi",
"NEW_PROPERTY": "BBB"
}
}
]
}
Which jq expression can do this?
Try this jq script:
<file jq '.array = [ .array[] | .something.NEW_PROPERTY = .name ]'
This replaces the array with a new one that is the same as the original, except that each element's something object gains the additional key NEW_PROPERTY.
You could simply use the filter:
.array |= map(.something.NEW_PROPERTY = .name)
or if map's not your thing (or if you want to save typing one character):
.array[] |= (.something.NEW_PROPERTY = .name)
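Assuming the document above is saved as input.json (a hypothetical filename), any of these forms can be run directly and yields the output shown in the question, for example:

jq '.array |= map(.something.NEW_PROPERTY = .name)' input.json

The last two differ only in where the update happens: .array |= map(...) rewrites the array as a whole, while .array[] |= (...) updates each element in place; the result here is the same.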

Using jq to parse keys present in two lists (even though it might not exist in one of those)

(It was hard to come up with a title that summarizes the issue, so feel free to improve it).
I have a JSON file with the following content:
{
"Items": [
{
"ID": {
"S": "ID_Complete"
},
"oldProperties": {
"L": [
{
"S": "[property_A : value_A_old]"
},
{
"S": "[property_B : value_B_old]"
}
]
},
"newProperties": {
"L": [
{
"S": "[property_A : value_A_new]"
},
{
"S": "[property_B : value_B_new]"
}
]
}
},
{
"ID": {
"S": "ID_Incomplete"
},
"oldProperties": {
"L": [
{
"S": "[property_B : value_B_old]"
}
]
},
"newProperties": {
"L": [
{
"S": "[property_A : value_A_new]"
},
{
"S": "[property_B : value_B_new]"
}
]
}
}
]
}
I would like to manipulate the data using jq in such a way that, for each item in Items[] that has a value for property_A under the newProperties list, it generates an output with the corresponding id, old, and new fields (see desired output below), regardless of the value that property has in the oldProperties list. Moreover, if property_A does not exist in the oldProperties, I still need the old field to be populated with null (or any fixed string for what it's worth).
Desired output:
{
"id": "id_Complete",
"old": "[property_A : value_A_old]",
"new": "[property_A : value_A_new]"
}
{
"id": "ID_Incomplete",
"old": null,
"new": "[property_A : value_A_new]"
}
Note: Even though property_A doesn't exist in the oldProperties list, other properties may (and will) exist.
The problem I am facing is that I am not able to get an output when the desired property does not exist in the oldProperties list. My current jq command looks like this:
jq -r '.Items[] |
{ id:.ID.S,
old:.oldProperties.L[].S | select(. | contains("property_A")),
new:.newProperties.L[].S | select(. | contains("property_A")) }'
This renders only the ID_Complete case, while I need the other one as well.
Is there any way to achieve this using this tool?
Thanks in advance.
Your lists of properties appear to be the values of some object. You could map each list out into an object, diff the two objects, then report on the results.
You could do something like this:
def make_object_from_properties:
  [.L[].S | capture("\\[(?<key>\\w+) : (?<value>\\w+)\\]")]
  | from_entries
  ;

def diff_objects($old; $new):
  def _prop($key): select(has($key))[$key];
  ([($old | keys[]), ($new | keys[])] | unique) as $keys
  | [ $keys[] as $k
      | ({ value: $old | _prop($k) } // { none: true }) as $o
      | ({ value: $new | _prop($k) } // { none: true }) as $n
      | (if $o.none then "add"
         elif $n.none then "remove"
         elif $o.value != $n.value then "change"
         else "same"
         end) as $s
      | { key: $k, status: $s, old: $o.value, new: $n.value }
    ]
  ;

def diff_properties:
  (.oldProperties | make_object_from_properties) as $old
  | (.newProperties | make_object_from_properties) as $new
  | diff_objects($old; $new) as $diff
  | foreach $diff[] as $d ({ id: .ID.S };
      select($d.status != "same")
      | .old = ((select(any("remove", "change"; . == $d.status)) | "[\($d.key) : \($d.old)]") // null)
      | .new = ((select(any("add", "change"; . == $d.status)) | "[\($d.key) : \($d.new)]") // null)
    )
  ;

[.Items[] | diff_properties]
This yields the following output:
[
{
"id": "ID_Complete",
"old": "[property_A : value_A_old]",
"new": "[property_A : value_A_new]"
},
{
"id": "ID_Complete",
"old": "[property_B : value_B_old]",
"new": "[property_B : value_B_new]"
},
{
"id": "ID_Incomplete",
"old": null,
"new": "[property_A : value_A_new]"
},
{
"id": "ID_Incomplete",
"old": "[property_B : value_B_old]",
"new": "[property_B : value_B_new]"
}
]
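Since the question only asks about property_A, one way to narrow this down (a sketch built on top of diff_properties, not part of the original answer) is to keep just the diff entries whose old or new text mentions that property:

[.Items[] | diff_properties]
| map(select(((.old // "") + (.new // "")) | contains("property_A")))[]

which, for the sample input, emits exactly the two objects shown in the desired output.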
It seems like your data is in some kind of encoded format, too. For a more robust solution, you should consider defining some functions to decode (and re-encode) those property strings.
This filter produces the desired output.
def parse: capture("(?<key>\\w+)\\s*:\\s*(?<value>\\w+)") ;
def print: "[\(.key) : \(.value)]";
def norm: [.[][][] | parse | select(.key=="property_A") | print][0];
.Items
| map({id:.ID.S, old:.oldProperties|norm, new:.newProperties|norm})[]
Sample Run (assumes filter in filter.jq and data in data.json)
$ jq -M -f filter.jq data.json
{
"id": "ID_Complete",
"old": "[property_A : value_A_old]",
"new": "[property_A : value_A_new]"
}
{
"id": "ID_Incomplete",
"old": null,
"new": "[property_A : value_A_new]"
}
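The reason the ID_Incomplete case comes out as null rather than an error is the trailing [0] in norm: when property_A is absent, the collected array is empty, and indexing an empty array yields null. For instance (an illustration):

jq -n '[ "x" | select(. == "y") ] | .[0]'

prints null.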

jq: group and key by property

I have a list of objects that look like this:
[
{
"ip": "1.1.1.1",
"component": "name1"
},
{
"ip": "1.1.1.2",
"component": "name1"
},
{
"ip": "1.1.1.3",
"component": "name2"
},
{
"ip": "1.1.1.4",
"component": "name2"
}
]
Now I'd like to group and key that by the component and assign a list of ips to each of the components:
{
"name1": [
"1.1.1.1",
"1.1.1.2"
]
},{
"name2": [
"1.1.1.3",
"1.1.1.4"
]
}
I figured it out myself. I first group by .component and then just create new lists of ips that are indexed by the component of the first object of each group:
jq ' group_by(.component)[] | {(.[0].component): [.[] | .ip]}'
The accepted answer doesn't produce valid JSON; it produces:
{
"name1": [
"1.1.1.1",
"1.1.1.2"
]
}
{
"name2": [
"1.1.1.3",
"1.1.1.4"
]
}
Each of the name1 and name2 objects is valid JSON on its own, but the output as a whole isn't.
The following jq statement results in the desired output as specified in the question:
group_by(.component) | map({ key: (.[0].component), value: [.[] | .ip] }) | from_entries
Output:
{
"name1": [
"1.1.1.1",
"1.1.1.2"
],
"name2": [
"1.1.1.3",
"1.1.1.4"
]
}
Suggestions for simpler approaches are welcome.
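One such simplification (a sketch; as far as I can tell it produces the same single object) is to build the result directly with reduce and skip group_by entirely:

jq 'reduce .[] as $e ({}; .[$e.component] += [$e.ip])'

This relies on += creating each array on first use, so every component key accumulates its IPs in input order.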
If human readability is preferred over valid json, I'd suggest something like ...
jq -r 'group_by(.component)[] | "IPs for " + .[0].component + ": " + (map(.ip) | tostring)'
... which results in ...
IPs for name1: ["1.1.1.1","1.1.1.2"]
IPs for name2: ["1.1.1.3","1.1.1.4"]
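A close variant (a matter of taste) uses string interpolation and join to drop the JSON-style brackets from the readable output:

jq -r 'group_by(.component)[] | "IPs for \(.[0].component): \(map(.ip) | join(", "))"'

which prints lines like IPs for name1: 1.1.1.1, 1.1.1.2.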
As a further example of #replay's technique, after many failures using other methods, I finally built a filter that condenses this Wazuh report (excerpted for brevity):
{
"took" : 228,
"timed_out" : false,
"hits" : {
"total" : {
"value" : 2806,
"relation" : "eq"
},
"hits" : [
{
"_source" : {
"agent" : {
"name" : "100360xx"
},
"data" : {
"vulnerability" : {
"severity" : "High",
"package" : {
"condition" : "less than 78.0",
"name" : "Mozilla Firefox 68.11.0 ESR (x64 en-US)"
}
}
}
}
},
{
"_source" : {
"agent" : {
"name" : "100360xx"
},
"data" : {
"vulnerability" : {
"severity" : "High",
"package" : {
"condition" : "less than 78.0",
"name" : "Mozilla Firefox 68.11.0 ESR (x64 en-US)"
}
}
}
}
},
...
Here is the jq filter I use to produce a series of objects, each consisting of an agent name followed by an array of names of the agent's vulnerable packages:
jq '.hits.hits |= unique_by(._source.agent.name, ._source.data.vulnerability.package.name)
    | .hits.hits
    | group_by(._source.agent.name)[]
    | { (.[0]._source.agent.name): [ .[]._source.data.vulnerability.package | .name ] }'
Here is an excerpt of the output produced by the filter:
{
"100360xx": [
"Mozilla Firefox 68.11.0 ESR (x64 en-US)",
"VLC media player",
"Windows 10"
]
}
{
"WIN-KD5C4xxx": [
"Windows Server 2019"
]
}
{
"fridxxx": [
"java-1.8.0-openjdk",
"kernel",
"kernel-headers",
"kernel-tools",
"kernel-tools-libs",
"python-perf"
]
}
{
"mcd-xxx-xxx": [
"dbus",
"fribidi",
"gnupg2",
"graphite2",
...