How to parse JSON from stdin at Native Messaging host?

Utilizing the code at How do I use a shell-script as Chrome Native Messaging host application as a template, and given the file file.json which contains
{"text":"abc"}
then, following the code at Iterate over json with jq and the jq documentation,
$ cat file.json | jq --raw-output '.text'
outputs
abc
I am not certain how to incorporate the pattern at this Answer
while read -r id name date; do
  echo "Do whatever with ${id} ${name} ${date}"
done < <(api-producing-json | jq --raw-output '.newList[] | "\(.id) \(.name) \(.create.date)"')
into the template at the former Answer, in order to capture the single property "text" (abc) from the JSON within the loop using jq, so that the text can be passed to another system call and the resulting message then printf'd back to the client.
What we are trying to achieve is
json=$(<bash program> <captured JSON property>)
message='{"message": "'$json'"}'
where the {"text":"abc"} is sent to the Native Messaging host from client (Chromium app).
How to use jq within the code at the former Answer to get the JSON property as a variable?

Assuming that file.json contains the JSON as indicated, I believe all you will need is:
json=$(jq '{message: .}' file.json)
If you then echo "$json", the result will be:
{
  "message": {
    "text": "abc"
  }
}
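To splice this into the Native Messaging template, here is a minimal sketch, assuming Chrome's documented framing (each message is UTF-8 JSON preceded by a 32-bit length in native byte order, taken as little-endian below), with some-other-program standing in for the question's <bash program>:
#!/bin/bash
# Read the 4-byte length prefix, then exactly that many bytes of JSON.
read_message() {
  local len
  len=$(( $(head -c 4 | od -An -tu4) ))   # native byte order
  head -c "$len"
}

# Prefix the reply with its length as 4 little-endian bytes, then send it.
send_message() {
  local msg=$1 len=${#msg}                # byte count; ASCII payload assumed
  printf '%b' "$(printf '\\x%02x\\x%02x\\x%02x\\x%02x' \
    $(( len & 0xFF )) $(( (len >> 8) & 0xFF )) \
    $(( (len >> 16) & 0xFF )) $(( (len >> 24) & 0xFF )))"
  printf '%s' "$msg"
}

text=$(read_message | jq --raw-output '.text')   # "abc" for {"text":"abc"}
json=$(some-other-program "$text")               # hypothetical stand-in for <bash program>
send_message "$(jq -cn --arg m "$json" '{message: $m}')"
Building the reply with jq -n --arg avoids the quoting pitfalls of splicing $json into a string by hand.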

Related

Transform file content into json string with jq doesn't work in command substitution

This is working as expected:
$ cat /etc/tfe-config/sources/fluent-bit.conf.tpl | jq -R -s
$ "[OUTPUT]\n Name cloudwatch_logs\n Match *\n region eu-central-1\n log_group_name TFE-LogForwarding\n log_stream_name TFE-AllLogs"
However, assignment to a variable does not work:
$ MY_VARIABLE=$(cat /etc/tfe-config/sources/fluent-bit.conf.tpl | jq -R -s)
$ echo $MY_VARIABLE
jq - commandline JSON processor [version 1.5]
Usage: jq [options] <jq filter> [file...]
jq is a tool for processing JSON inputs, applying the
given filter to its JSON text inputs and producing the
filter's results as JSON on standard output.
The simplest filter is ., which is the identity filter,
copying jq's input to its output unmodified (except for
formatting).
For more advanced filters see the jq(1) manpage ("man jq")
and/or https://stedolan.github.io/jq
Some of the options include:
-c compact instead of pretty-printed output;
.... trimmed
I am on AWS EC2 machine with the latest Amazon Linux 2 image.
What is going on here?
The file looks like this:
[OUTPUT]
Name cloudwatch_logs
Match *
region eu-central-1
log_group_name TFE-LogForwarding
log_stream_name TFE-AllLogs
Two things:
1. You have to specify a filter for jq – just . to get the entire input.
2. Once the string with its whitespace is in a variable, you must quote the expansion when you print it, otherwise word splitting makes it show up differently.
var=$(cat /etc/tfe-config/sources/fluent-bit.conf.tpl | jq -R -s '.')
echo "$var"
Relevant Q&A:
How to use `jq` in a shell pipeline? (and also this GitHub issue)
I just assigned a variable, but echo $variable shows something else

Jq parse error: Invalid numeric literal at line 1, column 9

I am trying to fetch a JSON file from a URL using curl and then print specific keys (e.g. company) with jq. Unfortunately, when I use:
jq '.[] | .company' JB.json
I get the error:
parse error: Invalid numeric literal at line 1, column 9. I have checked the downloaded file with less and it looks exactly like it does at the URL.
Some people suggested using the -R option, but then it prints:
jq: error: Cannot iterate over string
The url of the file is: https://jobs.github.com/positions.json?description=python&location=new+york
If you use curl -sS to retrieve the file, the '//'-style comments are skipped:
curl -Ss 'https://jobs.github.com/positions.json?description=python&location=new+york' | jq '.[] | .company'
"BentoBox"
"Aon Cyber Solutions"
"Sesame"
"New York University"
So presumably your JB.json contains the "//"-style comments. The simplest workaround would probably be to filter out those first two lines (e.g. using sed (or jq!)) first.
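For instance, a sed-based sketch of that workaround, assuming the comments occupy whole lines at the top of the file:
sed '/^\/\//d' JB.json | jq '.[] | .company'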
Here's a jq-only solution:
< JB.json jq -Rr 'select( test("^//")|not)' |
  jq '.[] | .company'

Is there a better solution to cURL this JSON?

I have this function that iterates over the keys of a given JSON document, using curl and jq as the JSON processor. My code (bash) looks like this:
function getJSONContent {
  option=""
  for option in "header" "developer" "category" "description"
  do
    content=$(curl -s $extension_url | jq ".[\"$extension\"][\"$option\"]")
    printf "$content\n"
  done
}
But the problem is that it calls curl 4 times, and I haven't found a better solution that doesn't produce an error.
Is doing this okay? Or is there just a better solution to do this in Bash / Shellscript?
is there just a better solution ... ?
Yes!
You evidently only need one call to curl and one to jq; at the very least, you should avoid calling curl more than once.
Avoid constructing the jq filter "on the fly". Instead, pass shell (or environment) variables in on the command line, e.g. using --arg or --argjson.
In this specific case, it looks like you can avoid calling jq more than once by simply using jq's ',' operator.
In brief, try something along the following lines:
curl -s "$extension_url" |
jq --arg extension "$extension" '
.[$extension]["header","developer","category","description"]'

Ignore Unparseable JSON with jq

I'm using jq to parse some of my logs, but some of the log lines can't be parsed for various reasons. Is there a way to have jq ignore those lines? I can't seem to find a solution. I tried to use the --seq argument that was recommended by some people, but --seq ignores all the lines in my file.
Assuming that each log entry is exactly one line, you can use the -R or --raw-input option to tell jq to leave the lines unparsed, after which you can prepend fromjson? | to your filter to make jq try to parse each line as JSON and throw away the ones that error.
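For example, to extract a field from the parseable lines only (app.log and the .message field are illustrative names):
jq -R 'fromjson? | .message' app.log
Lines that fail to parse produce no output at all, so only valid JSON entries survive.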
I have a log stream where some messages are in JSON format.
I want to pipe the json messages through jq, and just echo the rest.
The json messages are on a single line.
Solution: use grep and tee to split the lines into two streams; pipe those starting with "{" through jq, and just echo the rest to the terminal.
kubectl logs -f web-svjkn | tee >(grep -v "^{") | grep "^{" | jq .
or
cat logs | tee >(grep -v "^{") | grep "^{" | jq .
Explanation:
tee duplicates the stream into a process substitution; there, grep -v prints the non-JSON lines, while the second grep passes only lines that start with an opening brace on to jq.
This is an old thread, but here's another solution, fully in jq. It allows you to both process proper JSON lines and print out non-JSON lines.
jq -R '. as $line | try (fromjson | <further processing for proper JSON lines>) catch $line'
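A concrete (illustrative) instance, with .level standing in for the further processing:
cat logs | jq -Rr '. as $line | try (fromjson | .level) catch $line'
JSON lines print their .level value; anything unparseable is echoed back unchanged.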
There are several Q&As on the FAQ page dealing with the topic of "invalid JSON", but see in particular the Q:
Is there a way to have jq keep going after it hits an error in the input file?
In particular, this shows how to use --seq.
However, from the sparse details you've given (SO recommends giving a minimal example), it would seem it might be better simply to use inputs. The idea is to process one JSON entity at a time, using "try/catch", e.g.
def handle: inputs | [., "length is \(length)"] ;
def process: try handle catch ("Failed", process) ;
process
Don't forget to use the -n option when invoking jq.
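An illustrative invocation, assuming the filter above is saved as process.jq and the input is in data.txt (both names hypothetical):
jq -n -f process.jq data.txt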
See also Processing not-quite-valid JSON.
If JSON in curly braces {}:
grep -Pzo '\{(?>[^\{\}]|(?R))*\}' | jq 'objects'
If JSON in square brackets []:
grep -Pzo '\[(?>[^\[\]]|(?R))*\]' | jq 'arrays'
This works if there are no []{} in non-JSON lines.

Read a json file with bash

I would like to read the JSON file from http://freifunk.in-kiel.de/alfred.json in bash and separate it into files named after the hostname of each element in that JSON string.
How do I read json with bash?
How do I read json with bash?
You can use jq for that. The first thing to do is extract the list of hostnames. Looping over that list, you then run a second query per hostname to extract its element, and save the data via redirection to a file named after the hostname.
The easiest way to do this is with two instances of jq -- one listing hostnames, and another (inside the loop) extracting individual entries.
This is, alas, a bit inefficient (since it means rereading the file from the top for each record to extract).
while read -r hostname; do
  [[ $hostname = */* ]] && continue  # paranoia; see comments
  jq --arg hostname "$hostname" \
    '.[] | select(.hostname == $hostname)' <alfred.json >"out-${hostname}.json"
done < <(jq -r '.[] | .hostname' <alfred.json)
(The out- prefix prevents alfred.json from being overwritten if it includes an entry for a host named alfred).
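If the rereading bothers you, here is a single-pass sketch (untested against that feed; it assumes hostnames contain no tabs, and it writes compact rather than pretty-printed JSON):
while IFS=$'\t' read -r hostname entry; do
  [[ $hostname = */* ]] && continue
  printf '%s\n' "$entry" >"out-${hostname}.json"
done < <(jq -r '.[] | "\(.hostname)\t\(tojson)"' <alfred.json)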
You can use a Python one-liner in a similar way (I haven't checked):
curl -s http://freifunk.in-kiel.de/alfred.json | python -c '
import json, sys
tbl = json.load(sys.stdin)
for t in tbl:
    with open(tbl[t]["hostname"], "w") as fp:  # text mode; json.dump writes str
        json.dump(tbl[t], fp)
'