Convert JSON object to Prometheus metrics format using jq

Consider a JSON object like
{
"foo": 42,
"baz": -12,
"bar{label1=\"value1\"}": 12.34
}
constructed by jq from some data source. The actual key names and their number may vary, but the result will always be an object with numbers (int or float) as values. The keys may contain quotation marks, but no whitespace.
Can I use jq to format the object into a Prometheus-compatible format so I can just use the output to push the data to a Prometheus Pushgateway?
The required result would look like
foo 42
bar{label1="value1"} 12.34
baz -12
i.e. space-separated with newlines (no \r) and without quotes except for the label value.
I can't use bash for post-processing and would therefore prefer a pure jq solution if possible.

Use keys_unsorted to get the object keys (keys works as well, but keys_unsorted is faster because it skips sorting), then generate the desired output with string interpolation.
$ jq -r 'keys_unsorted[] as $k | "\($k) \(.[$k])"' file
foo 42
baz -12
bar{label1="value1"} 12.34
And, by adding the -j option and printing the line feed manually, as peak suggested below, you can make this portable.

On a Windows platform, jq will normally use CR-LF for newlines; to prevent this, use the -j command-line option and manually insert the desired 'newline' characters like so:
jq -rj 'to_entries[] | "\(.key) \(.value)\n"' file
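If the goal is to push the result straight to a Pushgateway, the output can be piped into curl; a sketch, where host, port and job name are placeholders:
# POST the metrics in Prometheus exposition format to a (hypothetical) Pushgateway endpoint
jq -rj 'to_entries[] | "\(.key) \(.value)\n"' file |
  curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job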

Cannot match integer value using regex - error (at <stdin>:6): number not a string or array

I have multiple JSON files with a person's age and I want to match specific ages using regex; however, I cannot match even a single integer in a file.
I can select the age using the following jq:
jq -r .details.Age
I can match Name using the following jq:
jq -r 'select(.details.Name | match("r.*"))'
But when I try to use test or match with Age, I get the following error:
jq -r 'select(.details.Age | match(32))'
jq: error (at <stdin>:6): number not a string or array
Here is the code:
{
"details": {
"Age": 32,
"Name": "reverent"
}
}
I want to be able to match Age using jq, something like this:
jq -r 'select(.details.Age | match(\d))'
Your .Age value is a number, but regexes work on strings, so if you really want to use regexes, you will have to convert the number to a string first. This can be done with tostring, but bear in mind that the tostring representation of a JSON number might not always be what you expect.
–––
p.s. That should be match("\\d")
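For instance, a minimal sketch combining tostring with test (the file name person.json and the pattern are just placeholders):
# stringify the number before applying the regex
jq -r 'select(.details.Age | tostring | test("\\d"))' person.json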

How to pass a key to a jq file

I would like to write a simple jq file that allows me to count items grouped by a specified key.
I expect the script contents to be something similar to:
group_by($group) | map({group: $group, cnt: length})
and to invoke it something like
cat my.json | jq --from-file count_by.jq --args group .header.messageType
Whatever I've tried, the argument always ends up as a string and is not usable as a key.
Since you have not followed the minimal complete verifiable example
guidelines, it's a bit difficult to know what the best approach to your problem will be, but whatever approach you take, it is important to bear in mind that --arg always passes in a JSON string. It cannot be used to pass in a jq program fragment unless the fragment is a JSON string.
So let's consider one option: passing in a JSON array representing a path that you can use in your program.
So the invocation could be:
jq -f count_by.jq --argjson group '["header", "messageType"]'
and the program would begin with:
group_by(getpath($group)) | ...
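Putting it together, count_by.jq could then look something like this (a sketch, assuming the input is an array of objects):
# group by the path passed via --argjson, then count each group
group_by(getpath($group))
| map({group: (.[0] | getpath($group)), cnt: length})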
Having your cake ...
If you really want to pass in arguments such as .header.messageType, there is a way: convert the string $group into a jq path:
($group|split(".")|map(select(length>0))) as $path
So your jq filter would look like this:
($group|split(".")|map(select(length>0))) as $path
| group_by(getpath($path)) | map({group: $group, cnt: length})
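The invocation can then pass the dotted string directly as a plain --arg value (my.json is just a placeholder file name):
jq -f count_by.jq --arg group '.header.messageType' my.json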
Shell string interpolation
If you want a quick bash solution that comes with many caveats:
group=".header.messageType"
jq 'group_by('"$group"') | map({group: "'"$group"'", cnt: length})'

map, but with newline characters between keypairs

Say I have input like
{"DESCRIPTION": "Need to run script to do stuff", "PRIORITY": "Medium"}
but also get input like
{"STACK_NAME": "applecakes", "BACKEND_OR_INTEGRATIONS": "integrations", "PRIORITY": "Medium"}
ie, the parameters can be completely different.
I need to get the output in a format more friendly to send to Jira to make tickets. Specifically, I would like to strip the json formatting away, and insert a \n between each keypair. Here's what the above samples should look like:
DESCRIPTION: Need to run script to do stuff\nPRIORITY: Medium
STACK_NAME: applecakes\nBACKEND_OR_INTEGRATIONS: integrations\nPRIORITY: Medium
There can be a little flexibility in that if, for example, more spaces were needed or whatever.
So far I've got this worked out (assuming my input is stored in a variable called description):
echo $description | jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]"
This works to strip away the JSON formatting, but doesn't handle newlines. I'm stumped on how to make sure I split only on each keypair, not on say every space or anything equally messy. What do I need to add to include newlines? Is a map even my best choice?
Just join the array of strings with \\n (the two-character sequence of the \ character, which we need to escape, followed by n) and use raw output:
jq --raw-output 'to_entries | map("\(.key) : \(.value)") | join("\\n")'
Or, more simply and more efficiently:
jq -r 'to_entries[] | "\(.key) : \(.value)"'
This produces one line per key-value pair.
The two-character sequence \n as a join-string
With your sample JSON, the invocation:
jq -j -r 'to_entries[] | "\(.key) : \(.value)", "\\n" '
would produce:
STACK_NAME : applecakes\nBACKEND_OR_INTEGRATIONS : integrations\nPRIORITY : Medium\n
Notice the trailing "\n".

I cannot get jq to give me the value I'm looking for.

I'm trying to use jq to get a value from the JSON that cURL returns.
This is the JSON cURL passes to jq (and, FTR, I want jq to return "VALUE-I-WANT" without the quotation marks):
[
{
"success":{
"username":"VALUE-I-WANT"
}
}
]
I initially tried this:
jq ' . | .success | .username'
and got
jq: error (at <stdin>:0): Cannot index array with string "success"
I then tried a bunch of variations, with no luck.
After a bunch of searching the web, I found this SE entry and thought it might have been my saviour (spoiler: it wasn't). But it led me to try these:
jq -r '.[].success.username'
jq -r '.[].success'
They didn't return an error, they returned "null". Which may or may not be an improvement.
Can anybody tell me what I'm doing wrong here? And why it's wrong?
You need to pipe the output of .[] into the next filter.
jq -r '.[] | .success.username' tmp.json
tl;dr
# Extract .success.username from ALL array elements.
# .[] enumerates all array elements
# -r produces raw (unquoted) output
jq -r '.[].success.username' file.json
# Extract .success.username only from the 1st array element.
jq -r '.[0].success.username' file.json
Your input is an array, so in order to access its elements you need .[], the array/object-value iterator (as the name suggests, it can also enumerate the properties of an object):
Just . | sends the input (.) array as a whole through the pipeline, and an array only has numerical indices, so the attempt to index (access) it with .success.username fails.
Thus, simply replacing . | with .[] | in your original attempt, combined with -r to get raw (unquoted output), should solve your problem, as shown in chepner's helpful answer.
However, peak points out that since at least jq 1.3 (current as of this writing is jq 1.5) you don't strictly need a pipeline, as demonstrated in the commands at the top.
So the 2nd command in your question should work with your sample input, unless you're using an older version.

Convert bash output to JSON

I am running the following command:
sudo clustat | grep primary | awk 'NF{print $1",""server:"$2 ",""status:"$3}'
Results are:
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
My desired result is:
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
I can't seem to put the quotation marks in without screwing up my output.
Use jq:
sudo clustat | grep primary |
jq -R 'split(" ")|{service:.[0], server:.[1], status:.[2]}'
The input is read as raw text, not JSON. Each line is split on a space (the argument to split may need to be adjusted depending on the actual input). jq ensures that values are properly quoted when constructing the output objects.
Don't do this: instead, use chepner's answer, which is guaranteed to generate valid JSON as output with all possible inputs (or fail with a nonzero exit status if no JSON representation is possible).
The below is only tested to generate valid JSON with the specific inputs shown in the question, and will quite certainly generate output that is not valid JSON with numerous possible inputs (strings with literal quotes, strings ending in literal backslashes, etc).
sudo clustat |
awk '/primary/ {
print "{\"service\":\"" $1 "\",\"server\":\"" $2 "\",\"status\":\""$3"\"}"
}'
For JSON conversion of common shell commands, a good option is jc (JSON Convert).
There is no parser for clustat yet, though.
clustat output does look table-like, so you may be able to use the --asciitable parser with jc.
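A sketch of that approach (assuming a jc version that ships the --asciitable parser; the resulting keys depend on clustat's actual column headers):
# generic ASCII-table parsing, then pretty-print with jq
sudo clustat | jc --asciitable | jq .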