Convert ps aux to JSON

I am trying to convert the output of ps aux into JSON format without using Perl or Python. For this I have read about jq, but I have had no success converting the command-line output into JSON.
How do I convert a simple ps aux to JSON?

ps aux | awk '
  BEGIN { ORS = ""; print " [ " }
  {
    printf "%s{\"user\": \"%s\", \"pid\": \"%s\", \"cpu\": \"%s\"}",
      separator, $1, $2, $3
    separator = ", "
  }
  END { print " ] " }'
Just adjust the columns you need from the ps aux output.
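For example, here is a sketch of the same script with the memory column added ($4 in the default ps aux output):
ps aux | awk '
  BEGIN { ORS = ""; print " [ " }
  {
    # emit one object per process; separator is empty on the first record
    printf "%s{\"user\": \"%s\", \"pid\": \"%s\", \"cpu\": \"%s\", \"mem\": \"%s\"}",
      separator, $1, $2, $3, $4
    separator = ", "
  }
  END { print " ] " }'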

jq can read non-JSON input. You'll want to pre-process the input with awk first:
ps aux |
awk -v OFS=, '{print $1, $2}' |
jq -R 'split(",") | {user: .[0], pid: .[1]}'
If you want an array instead of a sequence of objects, pipe the output through a second jq invocation with --slurp (jq -s '.'). (I swear there's a way to do that without an extra call to jq, but it escapes me at the moment.)
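In fact, the extra call can be avoided by letting a single jq invocation collect its own raw input with -n and inputs. A minimal sketch of that idea, with the same awk pre-processing as above:
ps aux |
awk -v OFS=, '{print $1, $2}' |
jq -Rn '[inputs | split(",") | {user: .[0], pid: .[1]}]'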

Here's a jq-only solution based on tokenization.
Tokenization can be done using:
def tokens:
def trim: sub("^ +";"") | sub(" +$";"");
trim | splits(" +");
For illustration and brevity, let's consider only the first nine tokens:
[tokens] | .[0:9]
Invocation:
$ ps aux | jq -c -R -f tokens.jq
Or as a one-liner, you could get away with:
$ ps aux | jq -cR '[splits(" +")] | .[0:9]'
First few lines of output:
["USER","PID","%CPU","%MEM","VSZ","RSS","TT","STAT","STARTED"]
["p","1595","55.9","0.4","2593756","32832","??","R","24Jan17"]
["p","12472","26.6","12.6","4951848","1058864","??","R","Sat01AM"]
["p","13239","10.9","1.5","4073756","128324","??","R","Sun12AM"]
["p","12482","7.8","1.2","3876628","101736","??","R","Sat01AM"]
["p","32039","7.7","1.4","4786968","118424","??","R","12Feb17"]
["_windowserver","425","7.6","0.8","3445536","65052","??","Ss","24Jan17"]
Using the headers as object keys
See e.g.
https://github.com/stedolan/jq/wiki/Cookbook#convert-a-csv-file-with-headers-to-json
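For this particular ps aux output, a self-contained sketch of that technique (not the Cookbook recipe verbatim) could look like this; note that it truncates the COMMAND column so every row lines up with the header:
ps aux | jq -cRn '
  def tokens: sub("^ +";"") | sub(" +$";"") | [splits(" +")];
  [inputs | tokens]
  | .[0] as $header                          # first line holds the column names
  | .[1:][]
  | [$header, .[0:($header|length)]]         # pair names with values
  | transpose
  | map({key: .[0], value: .[1]})
  | from_entries'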

I have a gist that converts ps output to JSON. It uses jq under the covers, so you need to install that, but you do not need to know jq.

Dumps the fields specified by the -o flag as an array of PID objects:
ps ax -o "stat,euid,ruid,tty,tpgid,sess,pgrp,ppid,pid,wchan,sz,pcpu,command" \
| jq -sRr ' sub("\n$";"") | split("\n") | ([.[0]|splits(" +")]) as $header | .[1:] | [.[] | [. as $x | range($header|length) | {"key": $header[.], "value": (if .==($header|length-1) then ([$x|splits(" +")][.:]|join(" ")|tojson|.[1:length-1]) else ([$x|splits(" +")][.]) end) } ] | from_entries]'
This builds an array of the header fields, maps each output line to an array of {key, value} objects, and then uses the built-in from_entries filter to aggregate these into one object per process.
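The same idea written out over multiple lines, as a sketch (with a shorter -o list for brevity; like the one-liner, it assumes only the trailing command field may contain spaces):
ps ax -o "stat,euid,ruid,tty,pid,pcpu,command" | jq -sRr '
  def tokens: sub("^ +";"") | sub(" +$";"") | [splits(" +")];
  split("\n") | map(select(length > 0))
  | (.[0] | tokens) as $header
  | [ .[1:][] | tokens as $cols
      | [ range($header | length) as $i
          | { key:   $header[$i],
              value: (if $i == ($header | length) - 1
                      then ($cols[$i:] | join(" "))   # re-join the command field
                      else $cols[$i]
                      end) } ]
      | from_entries ]'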


get files from directory in bash and build JSON object using jq

I am trying to build a list of JSON objects from the files in a particular directory. I am currently looping through the files and creating the expected output object as a string. I am sure there is a better way of doing this using jq.
Can someone please help me out here?
# input
files=($( ls * ))
prefix="myawesomeprefix"
# expected output
{
"listoffiles": [
{"file":"myawesomeprefix/file1.txt"},
{"file":"myawesomeprefix/file2.txt"},
{"file":"myawesomeprefix/file3.txt"},
]
}
If you don't have any "problematic" file names, e.g. ones that have newlines as part of their name, the following should work:
ls -1 | jq -Rn '{ listoffiles: [inputs | { file: "prefix/\(.)" }] }'
It reads each line as a string through the inputs filter (which must be combined with -n, null input), and then builds your object.
$ cat <<LS | jq -Rn '{ listoffiles: [inputs | {file:"prefix/\(.)"}] }'
file1
file2
file with spaces
LS
{
"listoffiles": [
{
"file": "prefix/file1"
},
{
"file": "prefix/file2"
},
{
"file": "prefix/file with spaces"
}
]
}
You could use a for loop with a glob, which should handle newlines in file names as well, but it requires chaining two jq commands:
for f in *; do
printf '%s' "$f" | jq -Rs '{file:"prefix/\(.)"}';
done | jq -s '{listoffiles:.}'
To specify the prefix as a variable from the outside, use --arg, e.g.
jq --arg prefix "yourprefixvalue" '$prefix + .'
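Applied to the first one-liner above, that could look like this (a sketch):
ls -1 | jq -Rn --arg prefix "myawesomeprefix" \
  '{ listoffiles: [inputs | { file: "\($prefix)/\(.)" }] }'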
You can try the nice little command line tool jc:
ls | jc --ls
It converts the output of many shell commands to JSON. For reference, have a look at the GitHub repo: https://github.com/kellyjonbrazil/jc
Then you can transform the result using jq:
ls | jc --ls | jq "{ listoffiles: [.[] | { file: (\"$prefix/\" + .filename) }] }"
You shouldn't parse the output of ls. If installed, you could use tree with the -J option to produce a JSON listing, which you can transform to your needs using jq:
tree -aJL 1 | jq '
{listoffiles: first.contents | map({file: ("myawesomeprefix/" + .name)})}
'
Or more comfortably using --arg:
tree -aJL 1 | jq --arg prefix myawesomeprefix '
{listoffiles: first.contents | map({file: "\($prefix)/\(.name)"})}
'
This is another alternative:
jq -n --arg prefix "myawesomeprefix" \
  '.listoffiles = ($ARGS.positional |
                   map({file: ($prefix + "/" + .)}))' \
  --args *

What is returned through a jq filter on a json object in bash? [duplicate]

I parsed a JSON file with jq like this:
# cat test.json | jq '.logs' | jq '.[]' | jq '._id' | jq -s
It returns an array like this: [34,235,436,546,.....]
Using a bash script I declared an array:
# declare -a msgIds = ...
This array uses () instead of [], so when I pass the array given above to it, it won't work: ([324,32,45..]) causes a problem. If I remove the jq -s, an array forms with only 1 member in it.
Is there a way to solve this issue?
We can solve this problem in two ways:
Input string:
// test.json
{
"keys": ["key1","key2","key3"]
}
Approach 1:
1) Use jq -r (output raw strings, not JSON texts).
KEYS=$(jq -r '.keys' test.json)
echo $KEYS
# Output: [ "key1", "key2", "key3" ]
2) Use @sh (converts the input to a series of space-separated, shell-quoted strings). It removes the square brackets [] and commas (,) from the string.
KEYS=$(<test.json jq -r '.keys | @sh')
echo $KEYS
# Output: 'key1' 'key2' 'key3'
3) Use tr to remove the single quotes from the string output. To delete specific characters, use the -d option of tr.
KEYS=$( (<test.json jq -r '.keys | @sh') | tr -d \' )
echo $KEYS
# Output: key1 key2 key3
4) We can convert the space-separated string to an array by placing the string output in round brackets ().
This is also called compound assignment, where we declare the array with a bunch of values:
ARRAYNAME=(value1 value2 .... valueN)
#!/bin/bash
KEYS=( $( (<test.json jq -r '.keys | @sh') | tr -d \'\" ) )
echo "Array size: " ${#KEYS[@]}
echo "Array elements: " ${KEYS[@]}
# Output:
# Array size: 3
# Array elements: key1 key2 key3
Approach 2:
1) Use jq -r to get the string output, then use tr to delete characters like square brackets, double quotes, and commas.
#!/bin/bash
KEYS=$(jq -r '.keys' test.json | tr -d '[],"')
echo $KEYS
# Output: key1 key2 key3
2) Then we can convert the resulting whitespace-separated string to an array by placing the string output in round brackets ().
#!/bin/bash
KEYS=($(jq -r '.keys' test.json | tr -d '[]," '))
echo "Array size: " ${#KEYS[#]}
echo "Array elements: "${KEYS[#]}
# Output:
# Array size: 3
# Array elements: key1 key2 key3
To correctly parse values that have spaces, newlines (or any other arbitrary characters), just use jq's @sh filter and bash's declare -a. (No need for a while read loop or any other pre-processing.)
// foo.json
{"data": ["A B", "C'D", ""]}
str=$(jq -r '.data | @sh' foo.json)
declare -a arr="($str)" # must be quoted like this
$ declare -p arr
declare -a arr=([0]="A B" [1]="C'D" [2]="")
The reason that this works correctly is that @sh will produce a space-separated list of shell-quoted words:
$ echo "$str"
'A B' 'C'\''D' ''
and this is exactly the format that declare expects for an array definition.
Use jq -r to output a string "raw", without JSON formatting, and use the @sh formatter to format your results as a string for shell consumption. Per the jq docs:
@sh:
The input is escaped suitable for use in a command-line for a POSIX shell. If the input is an array, the output will be a series of space-separated strings.
So you can do e.g.
msgids=($(<test.json jq -r '.logs[]._id | @sh'))
and get the result you want.
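For example, assuming test.json contains just the first four IDs shown in the question:
$ msgids=($(<test.json jq -r '.logs[]._id | @sh'))
$ declare -p msgids
declare -a msgids=([0]="34" [1]="235" [2]="436" [3]="546")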
From the jq FAQ (https://github.com/stedolan/jq/wiki/FAQ):
Q: How can a stream of JSON texts produced by jq be converted into a bash array of corresponding values?
A: One option would be to use mapfile (aka readarray), for example:
mapfile -t array <<< $(jq -c '.[]' input.json)
An alternative that might be indicative of what to do in other shells is to use read -r within a while loop. The following bash script populates an array, x, with JSON texts. The key points are the use of the -c option, and the use of the bash idiom while read -r value; do ... done < <(jq .......):
#!/bin/bash
x=()
while read -r value
do
x+=("$value")
done < <(jq -c '.[]' input.json)
++ To resolve this, we can use a very simple approach:
++ Since I am not aware of your input file, I am creating a file input.json with the following contents:
input.json:
{
"keys": ["key1","key2","key3"]
}
++ Use jq to get the value from the above file input.json:
Command: cat input.json | jq -r '.keys | @sh'
Output: 'key1' 'key2' 'key3'
Explanation: | @sh removes the [ ] and " characters
++ To remove the ' quotes as well, we use tr:
Command: cat input.json | jq -r '.keys | @sh' | tr -d \'
Explanation: use tr with the -d (delete) option to remove '
++ To store this in a bash array, we use () together with backticks `` and print it:
command:
KEYS=(`cat input.json | jq -r '.keys | @sh' | tr -d \'`)
To print all the entries of the array: echo "${KEYS[*]}"

How to ignore particular keys inside .properties files while converting to json

I have a .properties file which I'm trying to convert to a JSON file using bash command(s), and I want to exclude particular keys from showing up in the JSON file. Below are the properties inside my property file; I want to exclude properties 4 and 5 from being converted to JSON:
app.database.address=127.0.0.70
app.database.host=database.myapp.com
app.database.port=5432
app.database.user=dev-user-name
app.database.pass=dev-password
app.database.main=dev-database
Here's the bash command I use for converting to JSON, but it converts all of the properties:
cat fle.properties | jq -R -s 'split("\n") | map(split("=")) | map({(.[0]): .[1]}) | add' > zppprop.json
Is there any way to exclude these properties from being converted to JSON?
With xidel:
XPath + JSONiq solution
$ xidel -s fle.properties -e '
{|
x:lines($raw)[not(position() = (4,5))] ! {
substring-before(.,"="):substring-after(.,"=")
}
|}
'
{
"app.database.address": "127.0.0.70",
"app.database.host": "database.myapp.com",
"app.database.port": "5432",
"app.database.main": "dev-database"
}
x:lines($raw) is a shorthand for tokenize($raw,'\r\n?|\n') and turns $raw, the raw input, into a sequence where every new line is another item.
[not(position() = (4,5))] if it's always the 4th and 5th line you want to exclude. Otherwise, use [not(contains(.,"user") or contains(.,"pass"))] as seen below.
XQuery solution
$ xidel -s --xquery '
map:merge(
for $x in file:read-text-lines("fle.properties")[not(contains(.,"user") or contains(.,"pass"))]
let $kv:=tokenize($x,"=")
return
{$kv[1]:$kv[2]}
)
'
{
"app.database.address": "127.0.0.70",
"app.database.host": "database.myapp.com",
"app.database.port": "5432",
"app.database.main": "dev-database"
}
You can use file:read-text-lines() to do everything "in-query".
You may filter out unneeded lines with grep:
cat fle.properties | grep -v -E "user|pass" | jq -R -s 'split("\n") | map(select(length > 0)) | map(split("=")) | map({(.[0]): .[1]}) | add'
It is also necessary to remove the empty string at the end of the array returned by the split function; that is what map(select(length > 0)) does.
You can do the exclusion within the jq script:
properties2json
#!/usr/bin/env -S jq -sRf
split("\n") |
map(split("=")) |
map(
if .[0] | test(".*\\.(user|pass)";"i")
then
{}
else
{(.[0]): .[1]}
end
) |
add
# Make it executable
chmod +x properties2json
# Run it
./properties2json file.properties >file.json
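Run against the properties from the question, the resulting JSON should match the earlier answers:
{
  "app.database.address": "127.0.0.70",
  "app.database.host": "database.myapp.com",
  "app.database.port": "5432",
  "app.database.main": "dev-database"
}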

Use jq to parse json into environment variable format

I would like the below code to output:
KNOCK_KNOCK="Who is there?"
TEST_FILE='{
"KNOCK_KNOCK": "Who is there?"
}'
for s in $(echo $TEST_FILE | jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" ); do
echo $s
done
I got the loop from this post: Exporting JSON to environment variables, and cannot figure out how to modify it to get my expected output. The problem seems to be the spaces in the .value.
For the following test case I get the expected results:
TEST_FILE='{
"KNOCK_KNOCK": "Whoisthere?",
"POSTGRES_URI": "postgress://user:pass#testdb.com",
"WILL": "I.AM"
}'
for s in $(echo $TEST_FILE | jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" ); do
echo $s
done
KNOCK_KNOCK=Whoisthere?
POSTGRES_URI=postgress://user:pass@testdb.com
WILL=I.AM
I went with the following solution, which works for me, but the accepted answer is ok.
TEST_FILE='{
"KNOCK_KNOCK": "Who is there?"
}'
echo $TEST_FILE | sed 's_[{}]__'| sed 's_: _=_' | sed 's_ _export _'
One of many possibilities:
jq -r 'to_entries[] | [.key,.value] | join("=")' <<< "$TEST_FILE"
It would probably be advisable to use @sh as well:
$ echo $'{"KNOCK_KNOCK": "Who \'is\' there?"}'
| jq -r 'to_entries[] | [.key,(.value|#sh)] | join("=")'
KNOCK_KNOCK='Who '\''is'\'' there?'
$ KNOCK_KNOCK='Who '\''is'\'' there?'
$ echo "$KNOCK_KNOCK"
Who 'is' there?
$
Note also that if you are going to use $TEST_FILE, the double-quotes might be necessary.
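For instance, with the same TEST_FILE as above (quoted so the shell passes the JSON through unmangled):
$ echo "$TEST_FILE" | jq -r 'to_entries[] | [.key,(.value|@sh)] | join("=")'
KNOCK_KNOCK='Who is there?'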
You almost got it, but you can do this without a loop with this line:
$ echo '{"KNOCK_KNOCK": "Who is there?"}' | jq -r 'to_entries[] | .key + "=\"" + (.value|tostring) + "\""'
KNOCK_KNOCK="Who is there?"
Explanation
to_entries transforms your object to an array of objects like this:
$ echo '{"KNOCK_KNOCK": "Who is there?"}' | jq -r 'to_entries'
[
{
"key": "KNOCK_KNOCK",
"value": "Who is there?"
}
]
You can then take each array element with [] and use the key and value fields to do simple string concatenation using the + operator.

How do I print the line number with the output of jq for a JSON file

I know there is a -n option and have tried many combinations, but I couldn't get it to work. I'd like to print the line number and the length of each line of the JSON file:
cat -n traffictest.json | jq '. |length'
jq -C . | cat -n traffictest.json | jq '. |length'
jq has a built-in filter, input_line_number, which emits the line number of the input being read. For example, given this input:
[1,2]

"abcd"

{"a":1,"b":1,"c":1,"d":1, "e":1}
the invocation:
jq -r "\(input_line_number): \(length)"
yields:
1: 2
3: 4
5: 5
If you are just interested in line-number and length, I would use awk instead, e.g.:
awk '{ print NR, length, $0 }' traffictest.json
Or if you want to keep the syntax highlighting:
paste <(jq . traffictest.json | awk '{ print NR, length }' OFS='\t') \
<(jq -C . traffictest.json)