Adding a JSON field to a JSON file with jq and bash

I'm stuck on a jq input problem. I have a JSON file that looks like this:
{
  "main_object": {
    "child1": ["banana", "apple", "orange"]
  }
}
I need to add another child object and rewrite this file. The problem is that the child object needs to be generated dynamically, so I'm doing this:
added_string=$(printf '.main_object += {%s: %s}' "$child_name" "$fruits")
Then I wrote this line, which worked well in my Mac shell:
edited_json=$(cat $json_variable_file | jq $added_string)
When I tried to run all of this from a bash script, I got this error:
jq: error: Could not open file +=: No such file or directory
jq: error: Could not open file {"child2":: No such file or directory
jq: error: Could not open file ["orange","potato","watermelon"]}: No such file or directory
I have tried many things so far; most of them still give me the same error. I also tried this:
edited_json=$(cat $json_variable_file | jq <<< $added_string)
The error I got is this:
parse error: Invalid numeric literal at line 1, column 23
I really appreciate your time. The weird thing here is that it works completely fine and generates the needed JSON in zsh, but it does not work in bash.

With bash and zsh:
child_name="child2"
fruits='["orange","potato","watermelon"]'
added_string=$(printf '.main_object += {%s: %s}' "$child_name" "$fruits")
cat file | jq "$added_string" # quotes are important
Without the quotes, bash word-splits $added_string, so jq treats each resulting word as a filename; that is exactly why you saw the "Could not open file" errors. zsh does not word-split unquoted parameter expansions by default, which is why the same line happened to work in your zsh session.
Output:
{
  "main_object": {
    "child1": [
      "banana",
      "apple",
      "orange"
    ],
    "child2": [
      "orange",
      "potato",
      "watermelon"
    ]
  }
}
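As an aside, you can avoid building the filter string with printf entirely by passing the dynamic values in through jq's --arg and --argjson options. A minimal sketch, assuming the same child_name and fruits variables as above:
child_name="child2"
fruits='["orange","potato","watermelon"]'
# --arg binds a string variable, --argjson parses its value as JSON
jq --arg name "$child_name" --argjson value "$fruits" '.main_object[$name] = $value' file
Because the values travel as jq variables rather than as pieces of filter text, no amount of shell quoting inside them can break the filter.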

Related

How can I pretty-print JSON on the command line, but allow invalid JSON objects to pass through?

I'm currently tailing some logs in bash that are half JSON, half text like below:
{"response":{"message":"asdfasdf"}}
{"log":{"example":"asdfasdf"}}
here is some text
{"another":{"example":"asdfasdf"}}
more text
Each line is either a full valid JSON object or some text that would fail a JSON parser.
I've looked at jq and underscore-cli to see if they have options to return the invalid object in the case of failure, but I'm not seeing any.
I've also tried to use a || operator to cat the piped input, but I'm losing the value somehow. Maybe I should read up on pipes more? Example: getLogs -t | (underscore print || cat)
I think I could write a script that stores the input, formats it, and returns the output if successful, or returns the stored value if formatting fails. I feel like there should be a simpler way, though. Any thoughts?
You can use the js-beautify Node library.
Install it with:
$ npm install -g js-beautify
Here is what I did:
$ js-beautify -r test.js
beautified test.js
I tested it with an incomplete JSON file and it worked.
jq can check for invalid JSON:
#!/bin/bash
# IFS= and -r keep each line intact (no trimming, no backslash mangling)
while IFS= read -r p; do
  if jq -e . >/dev/null 2>&1 <<<"$p"; then
    # valid JSON: pretty-print it
    jq . <<<"$p"
  else
    echo 'Skipping invalid json'
  fi
done < /tmp/tst.txt
{
  "response": {
    "message": "asdfasdf"
  }
}
{
  "log": {
    "example": "asdfasdf"
  }
}
Skipping invalid json
{
  "another": {
    "example": "asdfasdf"
  }
}
Skipping invalid json
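If you don't need the "Skipping" marker, the same filtering fits in a single jq invocation. A sketch against the same /tmp/tst.txt: -R reads each line as a raw string, and fromjson? parses it as JSON while silently dropping lines that fail to parse:
jq -R 'fromjson?' /tmp/tst.txt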

Convert filtered JSON into CSV with jq

I have a file that looks like this:
$ cat sample-test.json |jq .
{
  "logRef": "c4fa4367-23f6-462f-b5fd-f972d0916a30",
  "timestamp": 1563268297545,
  "someOtherField": "nonImportantValue"
}
{
  "logRef": "c4fa4367-23f6-462f-b5fd-f972d0916a31",
  "timestamp": 1563268297595,
  "someOtherField2": "nonImportantValue3"
}
And I would like to convert it to csv like this:
logRef;timestamp
c4fa4367-23f6-462f-b5fd-f972d0916a30;1563268297545
c4fa4367-23f6-462f-b5fd-f972d0916a31;1563268297595
I was trying
$ cat sample-test.json | jq '.logRef, .timestamp | @csv'
jq: error (at <stdin>:1): string ("c4fa4367-2...) cannot be csv-formatted, only array
jq: error (at <stdin>:2): string ("c4fa4367-2...) cannot be csv-formatted, only array
Your input is fine (it's a JSON stream).
The problem with your filter is that @csv expects an array. So this will work:
[.logRef,.timestamp] | @csv
However, @csv quotes strings, so if you want your strings unquoted (which may mean the result is no longer valid CSV), you could use string interpolation instead:
"\(.logRef),\(.timestamp)"
In all cases, you'll need to use jq's -r command-line option.
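Putting that together, a sketch that reproduces the exact semicolon-separated output asked for, header row included (-n suppresses the automatic first read, and inputs then streams every object):
jq -nr '(["logRef","timestamp"], (inputs | [.logRef, .timestamp])) | map(tostring) | join(";")' sample-test.json
Output:
logRef;timestamp
c4fa4367-23f6-462f-b5fd-f972d0916a30;1563268297545
c4fa4367-23f6-462f-b5fd-f972d0916a31;1563268297595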
The problem is in your JSON file: it looks like it has an incorrect format (no root array element [] and no commas between the documents). If you fix it, jq will work as expected.
> cat sample-test.json
[{
  "logRef": "c4fa4367-23f6-462f-b5fd-f972d0916a30",
  "timestamp": 1563268297545,
  "someOtherField": "nonImportantValue"
},
{
  "logRef": "c4fa4367-23f6-462f-b5fd-f972d0916a31",
  "timestamp": 1563268297595,
  "someOtherField2": "nonImportantValue3"
}]
cat sample-test.json | jq -r 'map(.logRef), map(.timestamp) | @csv'
"c4fa4367-23f6-462f-b5fd-f972d0916a30","c4fa4367-23f6-462f-b5fd-f972d0916a31"
1563268297545,1563268297595
I've also fixed the command with the map() function.
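Note that map() emits one line per field, so the fields come out as rows rather than columns. To get one CSV row per record from the array-wrapped file, a sketch:
jq -r '.[] | [.logRef, .timestamp] | @csv' sample-test.json
"c4fa4367-23f6-462f-b5fd-f972d0916a30",1563268297545
"c4fa4367-23f6-462f-b5fd-f972d0916a31",1563268297595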

Invalid numeric literal with jq

I have a large amount of JSON from a 3rd-party system which I would like to pre-process with jq, but I am having difficulty composing the query. Test case follows:
$ cat test.json
{
  "a": "b",
  "c": "d",
  "e": {
    "1": {
      "f": "g",
      "h": "i"
    }
  }
}
$ cat test.json|jq .e.1.f
jq: error: Invalid numeric literal at EOF at line 1, column 3 (while parsing '.1.') at <top-level>, line 1:
.e.1.f
How would I get "g" as my output here? Or how do I cast that 1 to a "1" so it is handled correctly?
From the jq manual:
You can also look up fields of an object using syntax like .["foo"]
(.foo above is a shorthand version of this, but only for
identifier-like strings).
You also need quotes around the filter, and -r if you want raw output:
jq -r '.e["1"].f' test.json
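jq also accepts a quoted-field shorthand, so the same lookup can be written inline:
jq -r '.e."1".f' test.json
Both commands print g.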
I wrote a shell script function that calls the curl command, and pipes it into the jq command.
function getName {
curl http://localhost:123/getname/$1 | jq;
}
export -f getName
When I ran this from the CLI,
getName jarvis
I was getting this response:
parse error: Invalid numeric literal at line 1, column 72
I tried removing the | jq from the curl command, and I got back the result without jq parsing:
<Map><timestamp>1234567890</timestamp><status>404</status><error>Not Found</error><message>....
I first thought that I had a bad character in the curl command, or that I was using the function parameter $1 incorrectly.
Then I counted the number of characters in the result string, and I noticed that the 72nd character was the space in "Not Found".
The underlying issue was that I didn't yet have a method named getname in my Spring REST controller, so the response came back as 404 Not Found, and the body was XML rather than JSON. jq failed to parse it, and the space at column 72 was simply where the parse gave up.
I'm new to jq, so maybe there is a way to guard against this, but that's for another day.
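One way to guard against it, as a sketch (the endpoint and function name are the hypothetical ones from the anecdote above): have curl fail on HTTP errors so error pages never reach jq, and only pretty-print a body that was actually returned:
function getName {
  local body
  # -s silences the progress meter; -f makes curl exit nonzero on
  # HTTP errors such as 404 instead of passing the error page along
  body=$(curl -sf "http://localhost:123/getname/$1") || { echo "request failed" >&2; return 1; }
  jq . <<<"$body"
}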

jq: read a .txt file and write the values to a JSON file

I want to use jq to parse a .txt file containing a list of country codes and write them into the country array of a JSON object.
Here is what I have so far:
cat myfile.json |
jq -R -f test_id.txt 'select(.country == []).country = "test_id.txt"' > newfile.json
where the .txt file looks like this:
"NSC"
"KZC"
"KCC"
"KZL"
"NZG"
"VRU"
"ESM"
"KZF"
"SFU"
"EWF"
"KQY"
"KQV"
and my JSON looks like this:
{
  "scsRequestId": null,
  "includeMetadata": true,
  "includeHoldings": true,
  "country": [],
  "region": [],
  "oclcSymbol": []
}
Here is the error I am getting:
jq: error: syntax error, unexpected QQSTRING_START, expecting $end (Unix shell quoting issues?) at <top-level>, line 2:
"KZC"
jq: 1 compile error
I want the list of country codes to go into the country array.
The argument to -f is the file to read the filter itself from. If you want to read data from a file, that's a job for --slurpfile, not -f.
Thus:
jq --slurpfile countries test_id.txt '.country=$countries' <myfile.json >newfile.json
When run with your provided inputs, the resulting contents in newfile.json are:
{
  "scsRequestId": null,
  "includeMetadata": true,
  "includeHoldings": true,
  "country": [
    "NSC",
    "KZC",
    "KCC",
    "KZL",
    "NZG",
    "VRU",
    "ESM",
    "KZF",
    "SFU",
    "EWF",
    "KQY",
    "KQV"
  ],
  "region": [],
  "oclcSymbol": []
}
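--slurpfile works here because each line of test_id.txt is already a valid JSON string. If the codes were bare (NSC, KZC, and so on, with no quotes), that hypothetical variation could be handled with --rawfile (jq 1.6+), splitting the raw file contents into lines; a sketch:
jq --rawfile raw test_id.txt '.country = ($raw | split("\n") | map(select(length > 0)))' <myfile.json >newfile.json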

Need help! - Unable to load JSON using COPY command

Need your expertise here!
I am trying to load a JSON file (generated by json.dumps) into Redshift using the COPY command. The file is in the following format:
[
  {
    "cookieId": "cb2278",
    "environment": "STAGE",
    "errorMessages": [
      "70460"
    ]
  },
  {
    "cookieId": "cb2271",
    "environment": "STG",
    "errorMessages": [
      "70460"
    ]
  }
]
We ran into the error "Invalid JSONPath format: Member is not an object."
When I get rid of the square brackets [] and remove the comma separators between the JSON dicts, it loads perfectly fine:
{
  "cookieId": "cb2278",
  "environment": "STAGE",
  "errorMessages": [
    "70460"
  ]
}
{
  "cookieId": "cb2271",
  "environment": "STG",
  "errorMessages": [
    "70460"
  ]
}
But in reality, most JSON files from APIs have this formatting.
I could do a string replace or use a regex to get rid of the , and [], but I am wondering if there is a better way to load into Redshift seamlessly, without modifying the file.
One way to convert a JSON array into a stream of the array's elements is to pipe it through jq '.[]'.
If the JSON array is in a file named input.json, then the following command will produce a stream of the array's elements on stdout:
$ jq ".[]" input.json
If you want the output in jsonlines format, then use the -c switch (i.e. jq -c ......).
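Concretely, a sketch of the whole conversion, producing the bracket-and-comma-free format you confirmed loads fine (-c keeps each object on a single line):
$ jq -c ".[]" input.json > output.json
Then point the COPY command at output.json instead of the original file.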
For more on jq, see https://stedolan.github.io/jq