Parse JSON output without having to save it in a file

I have a command that returns a JSON dump that won't get saved into any file.
I have to parse a particular field from the JSON response without saving the output.
I am able to achieve it if I save the output of the command and then parse it using jq and grep like this:
platform json_dump platform_id >resp.json
jq . resp.json | grep elbName
But I do not want to write the output of my command platform json_dump platform_id, which is a JSON dump, into any file. I want to parse the elbName directly from the output of the command.
Is there a way I can do that?

Just pipe the program's output to jq:
platform json_dump platform_id | jq .elbName
or whatever.
PS: Use jq to get the value you want, not grep. Example of doing that:
$ echo '{"elbName":"foo"}' | jq .elbName
"foo"

You can add another pipe to pass the result to the jq command:
platform json_dump platform_id | jq .elbName

I'm assuming you have python :)
platform json_dump platform_id | python -c 'import sys,json; print(json.load(sys.stdin)["elbName"])' # a bit long ? :)

Related

Using jq with a large JSON file

I have an extremely large JSON file that I am working with, on my Linux box. When I jq the file I get an output in this format, which is perfect:
{
  "ID": 12345,
  "Name": "joe",
  "Address": "123 first street",
  "Email": "joe@example.com"
}
My goal is to be able to grep for a particular field but get all related fields returned. So if I did a grep for "123 first street" I would also get the ID, name, and email that went with that group of data.
Thus far, I have gotten here:
jq . Myfile.json | grep "123 first street"
Can anyone help with me with getting this query right? I would like to stay with this JSON format and stay in the Linux box.
jq . Myfile.json | grep "123 first street"
This should return all JSON objects that contain the key "field":
jq '.[] | select(has("field"))'
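If the goal is to match on a field's value rather than on the presence of a key, a select on the value does it. This sketch assumes the file is an array of objects; the data is illustrative:

```shell
echo '[{"ID":1,"Address":"123 first street"},{"ID":2,"Address":"456 oak ave"}]' \
  | jq '.[] | select(.Address == "123 first street")'
```

This prints only the object whose Address matches, with all of its sibling fields (ID, Name, Email, etc.) intact, which avoids the grep step entirely.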

Fetch json data using Regex on linux

I know we should use jq for parsing JSON data, but I want to parse it using regex. I want to fetch the value of a JSON key into a variable in my shell script. As of now, I am using jq for parsing.
So my abc.json is
{"value1":5.0,"value2":2.5,"value3":"2019-10-24T15:26:00.000Z","modifier":[],"value4":{"value41":{"value411":5}}}
Currently, my XYZ.sh has this line to fetch the data:
data1 =$(cat abc.json | jq -r '.value4.value41.value411')
I want data1 to have value of value411. How can I achieve this?
PS: the JSON is mutable. The JSON above is just part of the JSON file that I want to fetch from.
Is your JSON structure fixed? If you must avoid jq, consider the following:
┌──[root@vms83.liruilongs.github.io]-[~]
└─$ cat abc.json | awk -F: '{print $NF}' | grep -o '[[:digit:]]'
5
I think your problem was that you had a space between data1 and =. There can't be a space there.
This works as you want it to (I removed the unnecessary cat):
data1=$(jq -r '.value4.value41.value411' abc.json)
echo "$data1"
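Since the question explicitly asked for a regex approach, here is a grep-based sketch as well. It is fragile by design: it assumes the key name "value411" appears exactly once in the file and that its value is a plain number, which is exactly why jq is the better tool:

```shell
# Fragile regex alternative; assumes "value411" occurs once and its
# value is a bare number (no nested quoting).
data1=$(grep -o '"value411":[0-9.]*' abc.json | cut -d: -f2)
echo "$data1"
```

If the JSON is reformatted (whitespace around the colon, the key appearing twice, a string value), this silently breaks, whereas the jq version keeps working.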

How to convert the simplest JSON to CSV or Text using jq?

My JSON, from a command like curl https://sm.ms/api/upload, looks like this:
{
"code": "error",
"msg": "No files were uploaded."
}
Now I want to get a text like this (all values of this json):
"error", "No files were uploaded."
I use the command curl https://sm.ms/api/upload | jq -r . but I only get the JSON content back. The documentation does not seem to support this function.
I have searched the web for a long time without finding anything helpful, so I hope you can help me. Thank you!
The output can be formatted by building the line with string interpolation and printing it raw via the --raw-output option:
jq --raw-output '"\"\(.code)\", \"\(.msg)\""'
You might like to consider a generic solution:
jq -r '[.[]] | @csv'
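A quick check of the generic @csv form against the sample object (jq preserves the insertion order of object keys, so code comes before msg):

```shell
echo '{"code":"error","msg":"No files were uploaded."}' | jq -r '[.[]] | @csv'
# "error","No files were uploaded."
```

Here `[.[]]` collects all the object's values into an array, and @csv renders that array as one CSV row.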

How to parse JSON from stdin at Native Messaging host?

Utilizing the code at How do I use a shell-script as Chrome Native Messaging host application as a template and given the file file.json which contains
{"text":"abc"}
following the code at Iterate over json with jq and the jq documentation
$ cat file.json | jq --raw-output '.text'
outputs
abc
I am not certain how to incorporate the pattern from this answer
while read -r id name date; do
  echo "Do whatever with ${id} ${name} ${date}"
done < <(api-producing-json | jq --raw-output '.newList[] | "\(.id) \(.name) \(.create.date)"')
into the template at the former answer, in order to capture the single property "text" ("abc") from the JSON within the loop using jq, pass that text to another system call, and then printf the message to the client.
What we are trying to achieve is
json=$(<bash program> <captured JSON property>)
message='{"message": "'$json'"}'
where the {"text":"abc"} is sent to the Native Messaging host from client (Chromium app).
How to use jq within the code at the former Answer to get the JSON property as a variable?
Assuming that file.json contains the JSON as indicated, I believe all you will need is:
json=$(jq '{message: .}' file.json)
If you then echo "$json", the result will be:
{
"message": {
"text": "abc"
}
}
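If the captured property really does need to pass through another command before being wrapped, jq's --arg re-embeds the result safely (it handles quoting and escaping). The intermediate tr call below is only a stand-in for whatever system call the question has in mind:

```shell
text=$(jq -r '.text' file.json)           # capture the "text" property
text=$(printf '%s' "$text" | tr a-z A-Z)  # stand-in for the other system call
message=$(jq -n --arg m "$text" '{message: $m}')
echo "$message"
```

Building the final object with jq -n --arg, rather than splicing $text into a string by hand, keeps the result valid JSON even if the value contains quotes or backslashes.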

Ignore Unparseable JSON with jq

I'm using jq to parse some of my logs, but some of the log lines can't be parsed for various reasons. Is there a way to have jq ignore those lines? I can't seem to find a solution. I tried to use the --seq argument that was recommended by some people, but --seq ignores all the lines in my file.
Assuming that each log entry is exactly one line, you can use the -R or --raw-input option to tell jq to leave the lines unparsed, after which you can prepend fromjson? | to your filter to make jq try to parse each line as JSON and throw away the ones that error.
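A minimal demonstration of that -R plus fromjson? combination on a mixed stream (the log lines here are made up):

```shell
printf '%s\n' '{"level":"info"}' 'plain text line' '{"level":"error"}' \
  | jq -R 'fromjson? | .level'
# "info"
# "error"
```

The unparseable middle line is silently dropped, while the valid JSON lines flow through the rest of the filter.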
I have log stream where some messages are in json format.
I want to pipe the json messages through jq, and just echo the rest.
The json messages are on a single line.
Solution: use grep and tee to split the lines into two streams: pipe the lines starting with "{" through jq, and just echo the rest to the terminal.
kubectl logs -f web-svjkn | tee >(grep -v "^{") | grep "^{" | jq .
or
cat logs | tee >(grep -v "^{") | grep "^{" | jq .
Explanation:
tee creates the second stream; grep -v prints the non-JSON lines, and the second grep passes only lines that start with a JSON opening brace to jq.
This is an old thread, but here's another solution fully in jq. It lets you both process proper JSON lines and print out non-JSON lines.
jq -R '. as $line | try (fromjson | <further processing for proper JSON lines>) catch $line'
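A runnable sketch of that pattern, with a placeholder filter (.ok) standing in for the further processing step:

```shell
printf '%s\n' '{"ok":true}' 'garbage line' \
  | jq -R '. as $line | try (fromjson | .ok) catch $line'
# true
# "garbage line"
```

Lines that parse go through the filter; lines that do not are emitted verbatim (as JSON strings) by the catch branch.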
There are several Q&As on the FAQ page dealing with the topic of "invalid JSON", but see in particular the Q:
Is there a way to have jq keep going after it hits an error in the input file?
In particular, this shows how to use --seq.
However, from the sparse details you've given (SO recommends giving a minimal example), it would seem it might be better simply to use inputs. The idea is to process one JSON entity at a time, using try/catch, e.g.:
def handle: inputs | [., "length is \(length)"] ;
def process: try handle catch ("Failed", process) ;
process
Don't forget to use the -n option when invoking jq.
See also Processing not-quite-valid JSON.
If JSON in curly braces {}:
grep -Pzo '\{(?>[^\{\}]|(?R))*\}' | jq 'objects'
If JSON in square brackets []:
grep -Pzo '\[(?>[^\[\]]|(?R))*\]' | jq 'arrays'
This works as long as there are no []{} characters in the non-JSON lines.