How can I extract the value using a sed or awk command, where jq or any JSON parser is not allowed?
{"test":{"components":[{"metric":"complexity","value":"90"}]}}
Output:
90
I tried with the below command:
def value=sh (script: 'grep -Po \'(?<="value":")[^"\\]*(?:\\.[^"\\]*)*\' sample.txt')
But I am getting the below error from the Jenkins script:
grep: missing terminating ] for character class
Though parsing JSON content should really be done by a JSON parser, since the OP said the jq tool is not allowed, I am adding this solution here; it is strictly written for and tested with the shown samples only.
awk 'match($0, /"value":"[0-9]+/){print substr($0, RSTART+9, RLENGTH-9)}' Input_file
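Since the question also allows sed, an equivalent sed one-liner (likewise tested only against the shown sample) would be:

sed -n 's/.*"value":"\([0-9]*\)".*/\1/p' sample.txt

The -n together with the trailing p prints only lines where the substitution actually succeeded.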
I'm doing the following to capture some ADO JSON data:
iteration="$(az boards iteration team list --team Test --project Test --timeframe current)"
Normally, the output of that command contains a JSON key/value pair like the following:
"path": "Test\\Sprint1"
But after capturing the STDOUT into that iteration variable, if I do
echo "$iteration"
That key/value pair becomes
"path": "Test\Sprint1"
And if I attempt to use jq on that output, it breaks because it's not recognized as valid JSON any longer. I'm very unfamiliar with Bash. How can I get that JSON to remain valid all the way through?
As already commented by markp-fuso:
It looks like your echo command is interpreting the backslashes. You can confirm this by running echo 'a\\b' and looking at the output.
The portable way to deal with such problems is to use printf instead of echo:
printf %s\\n "$iteration"
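You can see the difference in a shell whose echo expands escape sequences (e.g. dash, or bash with xpg_echo enabled):

echo 'Test\\Sprint1'            # may print Test\Sprint1 (backslashes collapsed)
printf '%s\n' 'Test\\Sprint1'   # always prints Test\\Sprint1 (argument untouched)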
I'm using jq to parse some of my logs, but some of the log lines can't be parsed for various reasons. Is there a way to have jq ignore those lines? I can't seem to find a solution. I tried to use the --seq argument that was recommended by some people, but --seq ignores all the lines in my file.
Assuming that each log entry is exactly one line, you can use the -R or --raw-input option to tell jq to leave the lines unparsed, after which you can prepend fromjson? | to your filter to make jq try to parse each line as JSON and throw away the ones that error.
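A minimal sketch, assuming each valid line is an object with a .message field you want (a hypothetical field name):

jq -R 'fromjson? | .message' mylogs.txt

Lines that fail to parse as JSON produce no output and are skipped silently.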
I have log stream where some messages are in json format.
I want to pipe the json messages through jq, and just echo the rest.
The json messages are on a single line.
Solution: use grep and tee to split the lines into two streams: lines starting with { are piped through jq, and the rest are just echoed to the terminal.
kubectl logs -f web-svjkn | tee >(grep -v "^{") | grep "^{" | jq .
or
cat logs | tee >(grep -v "^{") | grep "^{" | jq .
Explanation:
tee creates the second stream; grep -v prints the non-JSON info, and the second grep pipes only the lines that start with a JSON opening brace to jq.
This is an old thread, but here's another solution fully in jq. This allows you to both process proper json lines and also print out non-json lines.
jq -R '. as $line | try (fromjson | <further processing for proper json lines>) catch $line'
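For instance, to pull a msg field out of the JSON lines and pass everything else through raw (a sketch; .msg is a hypothetical field name):

kubectl logs -f web-svjkn | jq -Rr '. as $line | try (fromjson | .msg) catch $line'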
There are several Q&As on the FAQ page dealing with the topic of "invalid JSON", but see in particular the Q:
Is there a way to have jq keep going after it hits an error in the input file?
In particular, this shows how to use --seq.
However, from the sparse details you've given (SO recommends a minimal example be given), it would seem it might be better simply to use inputs. The idea is to process one JSON entity at a time, using "try/catch", e.g.
def handle: inputs | [., "length is \(length)"] ;
def process: try handle catch ("Failed", process) ;
process
Don't forget to use the -n option when invoking jq.
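For example, if the program above is saved as program.jq (a hypothetical file name):

jq -n -f program.jq input.json

The -n prevents jq from consuming the first input itself before inputs has a chance to read it.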
See also Processing not-quite-valid JSON.
If JSON in curly braces {}:
grep -Pzo '\{(?>[^\{\}]|(?R))*\}' | jq 'objects'
If JSON in square brackets []:
grep -Pzo '\[(?>[^\[\]]|(?R))*\]' | jq 'arrays'
This works if there are no []{} in non-JSON lines.
I am running the following command:
sudo clustat | grep primary | awk 'NF{print $1",""server:"$2 ",""status:"$3}'
Results are:
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
service:servicename,server:servername,status:started
My desired result is:
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
{"service":"servicename","server":"servername","status":"started"}
I can't seem to put the quotation marks in without screwing up my output.
Use jq:
sudo clustat | grep primary |
jq -R 'split(" ")|{service:.[0], server:.[1], status:.[2]}'
The input is read as raw text, not JSON. Each line is split on a space (the argument to split may need to be adjusted depending on the actual input). jq ensures that values are properly quoted when constructing the output objects.
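Note that jq pretty-prints by default; to get each object on a single line, as in the desired output, add the -c (compact) flag:

sudo clustat | grep primary |
jq -Rc 'split(" ")|{service:.[0], server:.[1], status:.[2]}'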
Don't do this: instead, use @chepner's answer, which is guaranteed to generate valid JSON as output with all possible inputs (or fail with a nonzero exit status if no JSON representation is possible).
The below is only tested to generate valid JSON with the specific inputs shown in the question, and will quite certainly generate output that is not valid JSON with numerous possible inputs (strings with literal quotes, strings ending in literal backslashes, etc).
sudo clustat |
awk '/primary/ {
print "{\"service\":\"" $1 "\",\"server\":\"" $2 "\",\"status\":\""$3"\"}"
}'
For JSON conversion of common shell commands, a good option is jc (JSON Convert)
There is no parser for clustat yet though.
clustat output does look table-like, so you may be able to use the --asciitable parser with jc.
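Untested against real clustat output (a sketch only), but if the output is a plain ASCII table it would look something like:

sudo clustat | jc --asciitable | jq .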
Wanted to parse json:
{"FileStatus":"accessTime":1472892839430,"blockSize":134217728,"childrenNum":0,"fileId":17226,"group":"admin","length":115714,"modificationTime":1469649837471,"owner":"admin","pathSuffix":"","permission":"755","replication":2,"storagePolicy":0,"type":"FILE"}}
I tried something like this but not able to get it.
$ {"FileStatus":{"accessTime":1472892839430}} | jq '.FileStatus.accessTime'
Error:
-bash: {FileStatus:{accessTime:1472892839430}}: command not found
Can someone help me to parse this whole JSON?
To make a command read a string on stdin in bash, use a "here string" like this:
$ jq '.FileStatus.accessTime' <<<'{"FileStatus":{"accessTime":1472892839430}}'
1472892839430
Also, you need to properly quote the text, so that bash doesn't try to interpret in some way you don't intend. When you want to preserve it literally, use single quotes (').
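Equivalently, you can pipe the string to jq's stdin, here using printf to avoid any echo quirks:

printf '%s\n' '{"FileStatus":{"accessTime":1472892839430}}' | jq '.FileStatus.accessTime'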
I have a json file including the sample lines of code below:
[{"tarih":"20130824","tarihView":"24-08-2013"},{"tarih":"20130817","tarihView":"17-08-2013"},{"tarih":"20130810","tarihView":"10-08-2013"},{"tarih":"20130803","tarihView":"03-08-2013"},{"tarih":"20130727","tarihView":"27-07-2013"},{"tarih":"20130720","tarihView":"20-07-2013"},{"tarih":"20130713","tarihView":"13-07-2013"},{"tarih":"20130706","tarihView":"06-07-2013"}]
I need to extract all the dates, which are in yyyymmdd format, into plain text with one date per line:
20130824
20130817
20130810
20130803
...
20130706
How can I do this by using sed or similar console utility?
Many thanks for your help.
this line works for your example:
grep -Po '\d{8}' file
or with BRE:
grep -o '[0-9]\{8\}' file
it outputs:
20130824
20130817
20130810
20130803
20130727
20130720
20130713
20130706
If you want to extract only the string after "tarih":", you could use:
grep -Po '"tarih":"\K\d{8}' file
It gives the same output.
Note that regex won't do date string validation.
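For completeness, if jq is available after all, the same extraction is simpler and does real JSON parsing:

jq -r '.[].tarih' file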
This is VERY easy in Python:
#!/bin/bash
python3 -c "vals=$(cat jsonfile)
for curVal in vals: print(curVal['tarih'])"
If I paste your example to jsonfile I get this output
20130824
20130817
20130810
20130803
20130727
20130720
20130713
20130706
Which is exactly what you need, right?
This works because in Python [] is a list and {} is a dictionary, so it is very easy to get any data from that structure. This method is also safer than a regex approach, because it won't fail if some field in your data contains {, ", or any other character that a sed pattern would trip over. It also does not depend on the field position or the number of fields.
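A safer variant of the same idea (a sketch) is to let Python's json module parse the file instead of substituting its text into the -c string, which avoids any shell-quoting edge cases:

python3 -c '
import json, sys
# Parse stdin as JSON and print the tarih value of each record.
for item in json.load(sys.stdin):
    print(item["tarih"])
' < jsonfile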