How to access the value of an unknown key? - json

I have the following JSON file
{
"https://test.com/gg": [
"msg",
"popup",
"url"
]
}
What I want to achieve is to parse the values to output the following
https://test.com/gg?msg=gg
https://test.com/gg?popup=gg
https://test.com/gg?url=gg
I'm assuming it can be done using jq but I'm not sure how.
The way I know is if the elements were named like below:
{
"url":"https://test.com/gg": [
"p1":"msg",
]
}
I would pull the elements like:
cat json | jq "url.[p1]"
But in my case it is not named.

jq --raw-output 'to_entries[0] | .key as $url | .value[] | "\($url)?\(.)=gg"' <your json file here>
Where
to_entries[0] yields {"key":"https://test.com/gg","value":["msg","popup","url"]}
(Save .key as $url for later)
Then "emit" all values with .value[]
For each "emitted" value, produce the string "\($url)?\(.)=gg" where . is the current value

Related

output arrays values as single object

I need to be able to produce the following without explicitly specifying the array indexes, so that I don't need to know the input array's length:
echo '[{"name":"John", "age":30, "car":null},{"name":"Marc", "age":32, "car":null}]' | jq -r '{(.[0].name):.[0].age,(.[1].name):.[1].age}'
Produces:
{ "John": 30, "Marc": 32}
Use add to merge the objects.
jq '[ .[] | { (.name) : .age } ] | add'
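For example, with the sample input from the question:
echo '[{"name":"John", "age":30, "car":null},{"name":"Marc", "age":32, "car":null}]' | jq '[ .[] | { (.name) : .age } ] | add'
produces:
{
  "John": 30,
  "Marc": 32
}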

Filter in jq based on an array of string

This is close to this question: jq filter JSON array based on value in list, the only difference being the condition: I don't want equality as the condition but a "contains", and I am struggling to do it correctly.
The problem:
I want to filter elements of an input file based on the presence of a specific value in an entry.
The input file:
[{
"id":"type 1 is great"
},{
"id":"type 2"
},
{
"id":"this is another type 2"
},
{
"id":"type 4"
}]
The filter file:
ids.txt:
type 2
type 1
The select.jq file:
def isin($a): . as $in | any($a[]; contains($in));
map( select( .id | contains($ids) ) )
The jq command:
jq --argjson ids "$(jq -R . ids.txt | jq -s .)" -f select.jq test.json
The expected result should be something like:
[{
"id":"type 1 is great"
},{
"id":"type 2"
},
{
"id":"this is another type 2"
}]
The problem is obviously in the "isin" function of the select.jq file, so how do I write it correctly to check whether an entry contains one of the strings of another array?
Here is a solution that illustrates that there is no need to invoke jq more than once. In accordance with the jq program shown in the question, it also assumes that the selection is to be based on the values of the "id" key.
< ids.txt jq -nR --argfile json input.json '
# Is the input an acceptable value?
def acceptable($array): any(contains($array[]); .);
[inputs] as $substrings
| $json
| map( select(.id | acceptable($substrings)) )
'
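For the input file shown above, this yields the expected selection (shown compactly here):
[{"id":"type 1 is great"},{"id":"type 2"},{"id":"this is another type 2"}]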
Note on --argfile
--argfile has been deprecated, so you may wish to use --slurpfile instead, in which case you'd have to write $json[0].
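The equivalent --slurpfile invocation would look like this (note $json[0] in place of $json):
< ids.txt jq -nR --slurpfile json input.json '
  def acceptable($array): any(contains($array[]); .);
  [inputs] as $substrings
  | $json[0]
  | map( select(.id | acceptable($substrings)) )
'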

Convert JSON file to CSV file using jq

I have a JSON file called myresponse.json:
"status":"CONTENT",
"valid":true,
"success":true,
"failure":false,
"content":{
"id":0,"resources":[{
"id":0,"value":52.51742935180664
},
{
"id":1,"value":13.392845153808594
},
{
"id":5,"value":"2021-02-09T13:15:15Z"
},
{
"id":6,"value":20.754192352294922
}]}}}
"status":"CONTENT",
"valid":true,
"success":true,
"failure":false,
"content":{
"id":0,"resources":[{
"id":0,"value":52.51742935180664
},
{
"id":1,"value":13.392845153808594
},
{
"id":5,"value":"2021-02-09T13:15:15Z"
},
{
"id":6,"value":20.754192352294922
}]}}}
The file was obtained with a curl request.
How can I use jq to convert the JSON to a CSV file where "0,1,5,6" are the columns and the corresponding values of "0,1,5,6" occupy each row of the CSV file, like this:
0,1,5,6
52.51742935180664, 13.392845153808594, "2021-02-09T13:15:15Z", 20.754192352294922
52.51742935180664, 13.392845153808594, "2021-02-09T13:15:15Z", 20.754192352294922
Thanks for your help!
The following assumes the input consists of a stream of valid JSON objects along the lines shown in the question.
If the relevant values of .id are known beforehand, then generating the header row is trivial, and a solution implemented with flexibility in mind is as follows:
# Turn one response object into a dictionary mapping each resource id (as a string) to its value,
# e.g. {"0": 52.51742935180664, "1": 13.392845153808594, "5": "2021-02-09T13:15:15Z", "6": 20.754192352294922}
def oneline:
  .content.resources
  | INDEX(.[]; .id) | map_values(.value);

# Extract the values for the given keys, in order.
# INDEX stores the keys as strings, so convert the requested keys with tostring.
def emit($keys):
  [ .[ ($keys[] | tostring) ] ];

[0,1,5,6] as $keys
| $keys,
  (inputs | oneline | emit($keys))
| join(",")
Since this relies on inputs to read the input, jq should be invoked with the -r and -n options, e.g. jq -rn -f program.jq myresponse.json.
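For the two-object stream shown above, this produces output along the lines of (note that join emits strings without quotes; if quoted fields are required, @csv could be used in place of join(",")):
0,1,5,6
52.51742935180664,13.392845153808594,2021-02-09T13:15:15Z,20.754192352294922
52.51742935180664,13.392845153808594,2021-02-09T13:15:15Z,20.754192352294922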
Using the first object to determine the relevant .id values
If the relevant values of .id are determined by the first JSON object in the stream, the above defs can be reused with the following:
(input | oneline) as $first
| ($first | keys) as $keys
| $keys,
($first | emit($keys)),
(inputs | oneline | emit($keys))
| join(",")
This solution would be used with jq's -r and -n options.

jq - parsing& replacement based on key-value pairs within json

I have a json file in the form of a key-value map. For example:
{
"users":[
{
"key1":"user1",
"key2":"user2"
}
]
}
I have another JSON file. The values in the second file have to be replaced based on the keys in the first file.
For example 2nd file is:
{
"info":
{
"users":["key1","key2","key3","key4"]
}
}
This second file should be replaced with
{
"info":
{
"users":["user1","user2","key3","key4"]
}
}
Because the value of key1 in the first file is user1. This could be done with any Python program, but I am learning jq and would like to see whether it is possible with jq itself. I tried different combinations of reading the file using --slurpfile, then select and walk etc., but couldn't arrive at the required solution.
Any suggestions for the same will be appreciated.
Since .users[0] is a JSON dictionary, it would make sense to use it as such (e.g. for efficiency):
Invocation:
jq -c --slurpfile users users.json -f program.jq input.json
program.jq:
$users[0].users[0] as $dict
| .info.users |= map($dict[.] // .)
Output:
{"info":{"users":["user1","user2","key3","key4"]}}
Note: the above assumes that the dictionary contains no null or false values, or rather that any such values in the dictionary should be ignored. This avoids the double lookup that would otherwise be required. If this assumption is invalid, then a solution using has or in (e.g. as provided by RomanPerekhrest) would be appropriate.
Solution to supplemental problem
(See "comments".)
$users[0].users[0] as $dict
| second
| .info.users |= (map($dict[.] | select(. != null)))
sponge
It is highly inadvisable to use redirection to overwrite an input file.
If you have or can install sponge, then it would be far better to use it. For further details, see e.g. "What is jq's equivalent of sed -i?" in the jq FAQ.
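For example, assuming sponge (from moreutils) is installed, an in-place update could be written along these lines:
jq --slurpfile users users.json -f program.jq input.json | sponge input.json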
jq solution:
jq --slurpfile users 1st.json '$users[0].users[0] as $users
| .info.users |= map(if in($users) then $users[.] else . end)' 2nd.json
The output:
{
"info": {
"users": [
"user1",
"user2",
"key3",
"key4"
]
}
}

Bash jq modify json: get and set

I use jq to parse and modify a cURL response, and it works perfectly for all of my requirements except one. I wish to modify a key's value in the JSON, like:
A) Input json
[
{
"id": 169,
"path": "dir1/dir2"
}
]
B) Output json
[
{
"id": 169,
"path": "dir1"
}
]
So the last directory is removed from the path. I use the script:
curl --header -X GET -k "${URL}" | jq '[.[] | {id: .id, path: .path_with_namespace}]' | jq '(.[] | .path) = "${.path%/*}"'
The last pipe is of course not correct, and this is where I am stuck. The point is to get the path value and modify it. Any help is appreciated.
One way to do this is to use split and join to process the path, and use |= to bind the correct expression to the .path attribute.
... | jq '.[] | .path |= (split("/")[:-1] | join("/"))'
split("/") takes a string and returns an array
x[:-1] returns an array consisting of all but the last element of x
join("/") combines the elements of the incoming array with / to return a single string.
.path|=x takes the value of .path, feeds it through the filter x, and assigns the resulting value to .path again.
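Putting it together with the sample input from the question (using map rather than .[] so that the surrounding array is preserved; the literal input below stands in for the curl response):
echo '[{"id":169,"path":"dir1/dir2"}]' | jq 'map(.path |= (split("/")[:-1] | join("/")))'
produces:
[
  {
    "id": 169,
    "path": "dir1"
  }
]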