Unix command-line JSON parser? [closed] - json

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
Can anyone recommend a Unix (choose your flavor) JSON parser that could be used to introspect values from a JSON response in a pipeline?

I prefer python -m json.tool, which seems to be available by default on most *nix operating systems.
$ echo '{"foo":1, "bar":2}' | python -m json.tool
{
"bar": 2,
"foo": 1
}
Note: Depending on your version of Python, all keys might get sorted alphabetically, which may or may not be what you want. Python 2 sorted the keys by default, while in Python 3.5+ they are no longer sorted automatically, but you have the option to sort by key explicitly:
$ echo '{"foo":1, "bar":2}' | python3 -m json.tool --sort-keys
{
"bar": 2,
"foo": 1
}
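json.tool can also read from a file argument instead of stdin; a minimal sketch, assuming a file example.json containing the same object as above:
$ python3 -m json.tool example.json
{
    "foo": 1,
    "bar": 2
}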

If you're looking for a portable compiled tool written in C:
https://stedolan.github.io/jq/
From the website:
jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data with the same ease that sed, awk, grep and friends let you play with text.
jq can mangle the data format that you have into the one that you want with very little effort, and the program to do so is often shorter and simpler than you’d expect.
Tutorial: https://stedolan.github.io/jq/tutorial/
Manual: https://stedolan.github.io/jq/manual/
Download: https://stedolan.github.io/jq/download/
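For a quick feel of what that looks like, here are a couple of minimal jq invocations (the field names are only illustrative):
$ echo '{"foo":1, "bar":2}' | jq .
{
  "foo": 1,
  "bar": 2
}
$ echo '{"user": {"name": "alice"}}' | jq -r '.user.name'
alice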

I have created a module specifically designed for command-line JSON manipulation:
https://github.com/ddopson/underscore-cli
FLEXIBLE - THE "swiss-army-knife" tool for processing JSON data - can be used as a simple pretty-printer, or as a full-powered Javascript command-line
POWERFUL - Exposes the full power and functionality of underscore.js (plus underscore.string)
SIMPLE - Makes it simple to write JS one-liners similar to using "perl -pe"
CHAINED - Multiple command invocations can be chained together to create a data processing pipeline
MULTI-FORMAT - Rich support for input / output formats - pretty-printing, strict JSON, etc [coming soon]
DOCUMENTED - Excellent command-line documentation with multiple examples for every command
It allows you to do powerful things really easily:
cat earthporn.json | underscore select '.data .title'
# [ 'Fjaðrárgljúfur canyon, Iceland [OC] [683x1024]',
# 'New town, Edinburgh, Scotland [4320 x 3240]',
# 'Sunrise in Bryce Canyon, UT [1120x700] [OC]',
# ...
# 'Kariega Game Reserve, South Africa [3584x2688]',
# 'Valle de la Luna, Chile [OS] [1024x683]',
# 'Frosted trees after a snowstorm in Laax, Switzerland [OC] [1072x712]' ]
cat earthporn.json | underscore select '.data .title' | underscore count
# 25
underscore map --data '[1, 2, 3, 4]' 'value+1'
# prints: [ 2, 3, 4, 5 ]
underscore map --data '{"a": [1, 4], "b": [2, 8]}' '_.max(value)'
# [ 4, 8 ]
echo '{"foo":1, "bar":2}' | underscore map -q 'console.log("key = ", key)'
# key = foo
# key = bar
underscore pluck --data "[{name : 'moe', age : 40}, {name : 'larry', age : 50}, {name : 'curly', age : 60}]" name
# [ 'moe', 'larry', 'curly' ]
underscore keys --data '{name : "larry", age : 50}'
# [ 'name', 'age' ]
underscore reduce --data '[1, 2, 3, 4]' 'total+value'
# 10
And it has one of the best "smart-whitespace" JSON formatters available.
If you have any feature requests, comment on this post or add an issue in github. I'd be glad to prioritize features that are needed by members of the community.

You can use this command-line parser (which you could put into a bash alias if you like), using modules built into the Perl core:
perl -MData::Dumper -MJSON::PP=from_json -ne'print Dumper(from_json($_))'
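A quick sketch of what that looks like in a pipeline (Data::Dumper prints Perl hash syntax, and hash key order is not guaranteed):
$ echo '{"foo":1, "bar":2}' | perl -MData::Dumper -MJSON::PP=from_json -ne'print Dumper(from_json($_))'
$VAR1 = {
          'bar' => 2,
          'foo' => 1
        };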

Checkout TickTick.
It's a true Bash JSON parser.
#!/bin/bash
. /path/to/ticktick.sh
# File
DATA=`cat data.json`
# cURL
#DATA=`curl http://foobar3000.com/echo/request.json`
tickParse "$DATA"
echo ``pathname``
echo ``headers["user-agent"]``

There is also the JSON command-line processing toolkit if you happen to have Node.js and npm in your stack.
And another "json" command for massaging JSON on your Unix command line.
And here are some other alternatives:
jq: http://stedolan.github.io/jq/
fx: https://github.com/antonmedv/fx
json:select: https://github.com/dominictarr/json-select
json-command: https://github.com/zpoley/json-command
JSONPath: http://goessner.net/articles/JsonPath/, http://code.google.com/p/jsonpath/wiki/Javascript
jsawk: https://github.com/micha/jsawk
jshon: http://kmkeen.com/jshon/
json2: https://github.com/vi/json2
Related: Command line tool for parsing JSON input for Unix?

Anyone mentioned Jshon or JSON.sh?
https://github.com/keenerd/jshon
Pipe JSON to it, and it traverses the JSON objects and prints out the path to the current object (as a JSON array) and then the object, without whitespace.
http://kmkeen.com/jshon/
Jshon loads json text from stdin, performs actions, then displays the last action on stdout and also was made to be part of the usual text processing pipeline.
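A minimal jshon sketch, assuming jshon is installed (each -e extracts one key, so repeated -e flags walk deeper; -u strips the quotes from a string result):
$ echo '{"a": {"b": "hello"}}' | jshon -e a -e b
"hello"
$ echo '{"a": {"b": "hello"}}' | jshon -e a -e b -u
hello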

You could try jsawk as suggested in this answer.
Really, you could whip up a quick Python script to do this, though.

For Bash/Python, here is a basic wrapper around python's simplejson:
json_parser() {
    local jsonfile="my_json_file.json"
    local tc="import simplejson,sys; myjsonstr=sys.stdin.read(); myjson=simplejson.loads(myjsonstr);"
    # Build python print command based on $#
    local printcmd="print myjson"
    for (( argn=1; argn<=$#; argn++ )); do
        printcmd="$printcmd['${!argn}']"
    done
    local result=$(python -c "$tc $printcmd.keys()" <"$jsonfile" 2>/dev/null \
        || python -c "$tc $printcmd" <"$jsonfile" 2>/dev/null)
    # For returning space-separated values
    echo $result | sed -e "s/[]|[|,|']//g"
    #echo $result
}
It really only handles the nested-dictionary style of data, but it works for what I needed, and is useful for walking through the json. It could probably be adapted to taste.
Anyway, something homegrown for those not wanting to source in yet another external dependency. Except for python, of course.
For example, json_parser field1 field2 would run print myjson['field1']['field2'], yielding either the keys or the value associated with field2, space-separated.
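As a concrete (hypothetical) illustration, if my_json_file.json contained {"db": {"host": "localhost", "port": 5432}}, then (assuming Python 2 with simplejson installed; the key order returned by .keys() is not guaranteed):
$ json_parser db
host port
$ json_parser db host
localhost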

I made jkid, a small command-line JSON explorer designed to make it easy to explore big JSON objects. Objects can be explored "transversally" and a "preview" option is there to avoid console overflow.
$ echo '{"john":{"size":20, "eyes":"green"}, "bob":{"size":30, "eyes":"brown"}}' > test3.json
$ jkid . eyes test3.json
object[.]["eyes"]
{
"bob": "brown",
"john": "green"
}

Related

How do I get and clean up a value in a JSON variable with only grep? [duplicate]

I'm trying to parse JSON returned from a curl request, like so:
curl 'http://twitter.com/users/username.json' |
sed -e 's/[{}]/''/g' |
awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
The above splits the JSON into fields, for example:
% ...
"geo_enabled":false
"friends_count":245
"profile_text_color":"000000"
"status":"in_reply_to_screen_name":null
"source":"web"
"truncated":false
"text":"My status"
"favorited":false
% ...
How do I print a specific field (denoted by the -v k=text)?
There are a number of tools specifically designed for the purpose of manipulating JSON from the command line, and they will be a lot easier and more reliable than doing it with Awk, such as jq:
curl -s 'https://api.github.com/users/lambda' | jq -r '.name'
You can also do this with tools that are likely already installed on your system, like Python using the json module, and so avoid any extra dependencies, while still having the benefit of a proper JSON parser. The following assume you want to use UTF-8, which the original JSON should be encoded in and is what most modern terminals use as well:
Python 3:
curl -s 'https://api.github.com/users/lambda' | \
python3 -c "import sys, json; print(json.load(sys.stdin)['name'])"
Python 2:
export PYTHONIOENCODING=utf8
curl -s 'https://api.github.com/users/lambda' | \
python2 -c "import sys, json; print json.load(sys.stdin)['name']"
Frequently Asked Questions
Why not a pure shell solution?
The standard POSIX/Single Unix Specification shell is a very limited language which doesn't contain facilities for representing sequences (list or arrays) or associative arrays (also known as hash tables, maps, dicts, or objects in some other languages). This makes representing the result of parsing JSON somewhat tricky in portable shell scripts. There are somewhat hacky ways to do it, but many of them can break if keys or values contain certain special characters.
Bash 4 and later, zsh, and ksh have support for arrays and associative arrays, but these shells are not universally available (macOS stopped updating Bash at Bash 3, due to a change from GPLv2 to GPLv3, while many Linux systems don't have zsh installed out of the box). It's possible that you could write a script that would work in either Bash 4 or zsh, one of which is available on most macOS, Linux, and BSD systems these days, but it would be tough to write a shebang line that worked for such a polyglot script.
Finally, writing a full fledged JSON parser in shell would be a significant enough dependency that you might as well just use an existing dependency like jq or Python instead. It's not going to be a one-liner, or even small five-line snippet, to do a good implementation.
Why not use awk, sed, or grep?
It is possible to use these tools to do some quick extraction from JSON with a known shape and formatted in a known way, such as one key per line. There are several examples of suggestions for this in other answers.
However, these tools are designed for line based or record based formats; they are not designed for recursive parsing of matched delimiters with possible escape characters.
So these quick and dirty solutions using awk/sed/grep are likely to be fragile, and break if some aspect of the input format changes, such as collapsing whitespace, or adding additional levels of nesting to the JSON objects, or an escaped quote within a string. A solution that is robust enough to handle all JSON input without breaking will also be fairly large and complex, and so not too much different than adding another dependency on jq or Python.
I have had to deal with large amounts of customer data being deleted due to poor input parsing in a shell script before, so I never recommend quick and dirty methods that may be fragile in this way. If you're doing some one-off processing, see the other answers for suggestions, but I still highly recommend just using an existing tested JSON parser.
Historical notes
This answer originally recommended jsawk, which should still work, but is a little more cumbersome to use than jq, and depends on a standalone JavaScript interpreter being installed which is less common than a Python interpreter, so the above answers are probably preferable:
curl -s 'https://api.github.com/users/lambda' | jsawk -a 'return this.name'
This answer also originally used the Twitter API from the question, but that API no longer works, making it hard to copy the examples to test out, and the new Twitter API requires API keys, so I've switched to using the GitHub API which can be used easily without API keys. The first answer for the original question would be:
curl 'http://twitter.com/users/username.json' | jq -r '.text'
To quickly extract the values for a particular key, I personally like to use "grep -o", which only returns the regex's match. For example, to get the "text" field from tweets, something like:
grep -Po '"text":.*?[^\\]",' tweets.json
This regex is more robust than you might think; for example, it deals fine with strings having embedded commas and escaped quotes inside them. I think with a little more work you could make one that is actually guaranteed to extract the value, if it's atomic. (If it has nesting, then a regex can't do it of course.)
And to further clean (albeit keeping the string's original escaping) you can use something like: | perl -pe 's/"text"://; s/^"//; s/",$//'. (I did this for this analysis.)
To all the haters who insist you should use a real JSON parser -- yes, that is essential for correctness, but:
1. To do a really quick analysis, like counting values to check on data cleaning bugs or get a general feel for the data, banging out something on the command line is faster. Opening an editor to write a script is distracting.
2. grep -o is orders of magnitude faster than the Python standard json library, at least when doing this for tweets (which are ~2 KB each). I'm not sure if this is just because json is slow (I should compare to yajl sometime); but in principle, a regex should be faster since it's finite state and much more optimizable, instead of a parser that has to support recursion, and in this case, spends lots of CPU building trees for structures you don't care about. (If someone wrote a finite state transducer that did proper (depth-limited) JSON parsing, that would be fantastic! In the meantime we have "grep -o".)
To write maintainable code, I always use a real parsing library. I haven't tried jsawk, but if it works well, that would address point #1.
One last, wackier, solution: I wrote a script that uses Python json and extracts the keys you want, into tab-separated columns; then I pipe through a wrapper around awk that allows named access to columns. In here: the json2tsv and tsvawk scripts. So for this example it would be:
json2tsv id text < tweets.json | tsvawk '{print "tweet " $id " is: " $text}'
This approach doesn't address #2, is more inefficient than a single Python script, and it's a little brittle: it forces normalization of newlines and tabs in string values, to play nice with awk's field/record-delimited view of the world. But it does let you stay on the command line, with more correctness than grep -o.
On the basis that some of the recommendations here (especially in the comments) suggested the use of Python, I was disappointed not to find an example.
So, here's a one-liner to get a single value from some JSON data. It assumes that you are piping the data in (from somewhere) and so should be useful in a scripting context.
echo '{"hostname":"test","domainname":"example.com"}' | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["hostname"]'
Following martinr's and Boecko's lead:
curl -s 'http://twitter.com/users/username.json' | python -mjson.tool
That will give you an extremely grep-friendly output. Very convenient:
curl -s 'http://twitter.com/users/username.json' | python -mjson.tool | grep my_key
You could just download the jq binary for your platform and run it (chmod +x jq):
$ curl 'https://twitter.com/users/username.json' | ./jq -r '.name'
It extracts the "name" attribute from the JSON object.
jq homepage says it is like sed for JSON data.
Using Node.js
If the system has Node.js installed, it's possible to use the -p print and -e evaluate script flags with JSON.parse to pull out any value that is needed.
A simple example using the JSON string { "foo": "bar" } and pulling out the value of "foo":
node -pe 'JSON.parse(process.argv[1]).foo' '{ "foo": "bar" }'
Output:
bar
Because we have access to cat and other utilities, we can use this for files:
node -pe 'JSON.parse(process.argv[1]).foo' "$(cat foobar.json)"
Output:
bar
Or any other source, such as a URL that returns JSON:
node -pe 'JSON.parse(process.argv[1]).name' "$(curl -s https://api.github.com/users/trevorsenior)"
Output:
Trevor Senior
Use Python's JSON support instead of using AWK!
Something like this:
curl -s http://twitter.com/users/username.json | \
python -c "import json,sys;obj=json.load(sys.stdin);print(obj['name']);"
macOS v12.3 (Monterey) removed /usr/bin/python, so we must use /usr/bin/python3 for macOS v12.3 and later.
curl -s http://twitter.com/users/username.json | \
python3 -c "import json,sys;obj=json.load(sys.stdin);print(obj['name']);"
You've asked how to shoot yourself in the foot and I'm here to provide the ammo:
curl -s 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v RS=',"' -F: '/^text/ {print $2}'
You could use tr -d '{}' instead of sed. But leaving them out completely seems to have the desired effect as well.
If you want to strip off the outer quotes, pipe the result of the above through sed 's/\(^"\|"$\)//g'
I think others have sounded sufficient alarm. I'll be standing by with a cell phone to call an ambulance. Fire when ready.
Using Bash with Python
Create a Bash function in your .bashrc file:
function getJsonVal () {
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1))";
}
Then
curl 'http://twitter.com/users/username.json' | getJsonVal "['text']"
Output:
My status
Here is the same function, but with error checking.
function getJsonVal() {
if [ \( $# -ne 1 \) -o \( -t 0 \) ]; then
cat <<EOF
Usage: getJsonVal 'key' < /tmp/
-- or --
cat /tmp/input | getJsonVal 'key'
EOF
return;
fi;
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1))";
}
Where $# -ne 1 makes sure exactly one argument is supplied, and -t 0 makes sure you are redirecting input from a pipe or file rather than typing at a terminal.
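So calling it without piping anything in should just print the usage text, roughly:
$ getJsonVal
Usage: getJsonVal 'key' < /tmp/
 -- or --
cat /tmp/input | getJsonVal 'key'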
The nice thing about this implementation is that you can access nested JSON values and get JSON content in return! =)
Example:
echo '{"foo": {"bar": "baz", "a": [1,2,3]}}' | getJsonVal "['foo']['a'][1]"
Output:
2
If you want to be really fancy, you could pretty print the data:
function getJsonVal () {
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1, sort_keys=True, indent=4))";
}
echo '{"foo": {"bar": "baz", "a": [1,2,3]}}' | getJsonVal "['foo']"
{
"a": [
1,
2,
3
],
"bar": "baz"
}
Update (2020)
My biggest issue with external tools (e.g., Python) was that you have to deal with package managers and dependencies to install them.
However, now that we have jq as a standalone, static tool that's easy to install cross-platform via GitHub Releases and Webi (webinstall.dev/jq), I'd recommend that:
Mac, Linux:
curl -sS https://webi.sh/jq | bash
Windows 10:
curl.exe -A MS https://webi.ms/jq | powershell
Cheat Sheet: https://webinstall.dev/jq
Original (2011)
TickTick is a JSON parser written in bash (less than 250 lines of code).
Here's the author's snippet from his article, Imagine a world where Bash supports JSON:
#!/bin/bash
. ticktick.sh
``
people = {
"Writers": [
"Rod Serling",
"Charles Beaumont",
"Richard Matheson"
],
"Cast": {
"Rod Serling": { "Episodes": 156 },
"Martin Landau": { "Episodes": 2 },
"William Shatner": { "Episodes": 2 }
}
}
``
function printDirectors() {
echo " The ``people.Directors.length()`` Directors are:"
for director in ``people.Directors.items()``; do
printf " - %s\n" ${!director}
done
}
`` people.Directors = [ "John Brahm", "Douglas Heyes" ] ``
printDirectors
newDirector="Lamont Johnson"
`` people.Directors.push($newDirector) ``
printDirectors
echo "Shifted: "``people.Directors.shift()``
printDirectors
echo "Popped: "``people.Directors.pop()``
printDirectors
This is using standard Unix tools available on most distributions. It also works well with backslashes (\) and quotes (").
Warning: This doesn't come close to the power of jq and will only work with very simple JSON objects. It's an attempt to answer to the original question and in situations where you can't install additional tools.
function parse_json()
{
echo $1 | \
sed -e 's/[{}]/''/g' | \
sed -e 's/", "/'\",\"'/g' | \
sed -e 's/" ,"/'\",\"'/g' | \
sed -e 's/" , "/'\",\"'/g' | \
sed -e 's/","/'\"---SEPERATOR---\"'/g' | \
awk -F=':' -v RS='---SEPERATOR---' "\$1~/\"$2\"/ {print}" | \
sed -e "s/\"$2\"://" | \
tr -d "\n\t" | \
sed -e 's/\\"/"/g' | \
sed -e 's/\\\\/\\/g' | \
sed -e 's/^[ \t]*//g' | \
sed -e 's/^"//' -e 's/"$//'
}
parse_json '{"username":"john, doe","email":"john#doe.com"}' username
parse_json '{"username":"john doe","email":"john#doe.com"}' email
--- outputs ---
john, doe
john#doe.com
Parsing JSON with PHP CLI
It is arguably off-topic, but since precedence reigns, this question remains incomplete without a mention of our trusty and faithful PHP, am I right?
It is using the same example JSON, but let’s assign it to a variable to reduce obscurity.
export JSON='{"hostname":"test","domainname":"example.com"}'
Now for PHP goodness, it is using file_get_contents and the php://stdin stream wrapper.
echo $JSON | php -r 'echo json_decode(file_get_contents("php://stdin"))->hostname;'
Or as pointed out using fgets and the already opened stream at CLI constant STDIN.
echo $JSON | php -r 'echo json_decode(fgets(STDIN))->hostname;'
If someone just wants to extract values from simple JSON objects without the need for nested structures, it is possible to use regular expressions without even leaving Bash.
Here is a function I defined using bash regular expressions based on the JSON standard:
function json_extract() {
local key=$1
local json=$2
local string_regex='"([^"\]|\\.)*"'
local number_regex='-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?'
local value_regex="${string_regex}|${number_regex}|true|false|null"
local pair_regex="\"${key}\"[[:space:]]*:[[:space:]]*(${value_regex})"
if [[ ${json} =~ ${pair_regex} ]]; then
echo $(sed 's/^"\|"$//g' <<< "${BASH_REMATCH[1]}")
else
return 1
fi
}
Caveats: objects and arrays are not supported as values, but all other value types defined in the standard are supported. Also, a pair will be matched no matter how deep in the JSON document it is as long as it has exactly the same key name.
Using the OP's example:
$ json_extract text "$(curl 'http://twitter.com/users/username.json')"
My status
$ json_extract friends_count "$(curl 'http://twitter.com/users/username.json')"
245
Unfortunately, the top-voted answer that uses grep returns the full match, which didn't work in my scenario; but if you know the JSON format will remain constant, you can use lookbehind and lookahead to extract just the desired values.
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="FooBar":")(.*?)(?=",)'
he\"llo
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="TotalPages":)(.*?)(?=,)'
33
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="anotherValue":)(.*?)(?=})'
100
Version which uses Ruby and http://flori.github.com/json/
< file.json ruby -e "require 'rubygems'; require 'json'; puts JSON.pretty_generate(JSON[STDIN.read]);"
Or more concisely:
< file.json ruby -r rubygems -r json -e "puts JSON.pretty_generate(JSON[STDIN.read]);"
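And if you only need a single field rather than a pretty-printed document, the same Ruby approach works; a minimal sketch (the key name is purely illustrative):
echo '{"name":"lambda","id":42}' | ruby -r json -e 'puts JSON.parse(STDIN.read)["name"]'
# lambda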
This is yet another Bash and Python hybrid answer. I posted it because I wanted to process more complex JSON output while reducing the complexity of my Bash application. I want to crack open the following JSON object from http://www.arcgis.com/sharing/rest/info?f=json in Bash:
{
"owningSystemUrl": "http://www.arcgis.com",
"authInfo": {
"tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
"isTokenBasedSecurity": true
}
}
In the following example, I created my own implementation of jq and unquote leveraging Python. You'll note that once we import the Python object from json to a Python dictionary we can use Python syntax to navigate the dictionary. To navigate the above, the syntax is:
data
data[ "authInfo" ]
data[ "authInfo" ][ "tokenServicesUrl" ]
By using magic in Bash, we omit data and only supply the Python text to the right of data, i.e.
jq
jq '[ "authInfo" ]'
jq '[ "authInfo" ][ "tokenServicesUrl" ]'
Note, with no parameters, jq acts as a JSON prettifier. With parameters, we can use Python syntax to extract anything we want from the dictionary including navigating subdictionaries and array elements.
Here are the Bash Python hybrid functions:
#!/bin/bash -xe
jq_py() {
cat <<EOF
import json, sys
data = json.load( sys.stdin )
print( json.dumps( data$1, indent = 4 ) )
EOF
}
jq() {
python -c "$( jq_py "$1" )"
}
unquote_py() {
cat <<EOF
import json,sys
print( json.load( sys.stdin ) )
EOF
}
unquote() {
python -c "$( unquote_py )"
}
Here's a sample usage of the Bash Python functions:
curl http://www.arcgis.com/sharing/rest/info?f=json | tee arcgis.json
# {"owningSystemUrl":"https://www.arcgis.com","authInfo":{"tokenServicesUrl":"https://www.arcgis.com/sharing/rest/generateToken","isTokenBasedSecurity":true}}
cat arcgis.json | jq
# {
# "owningSystemUrl": "https://www.arcgis.com",
# "authInfo": {
# "tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
# "isTokenBasedSecurity": true
# }
# }
cat arcgis.json | jq '[ "authInfo" ]'
# {
# "tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
# "isTokenBasedSecurity": true
# }
cat arcgis.json | jq '[ "authInfo" ][ "tokenServicesUrl" ]'
# "https://www.arcgis.com/sharing/rest/generateToken"
cat arcgis.json | jq '[ "authInfo" ][ "tokenServicesUrl" ]' | unquote
# https://www.arcgis.com/sharing/rest/generateToken
There is an easier way to get a property from a JSON string. Using a package.json file as an example, try this:
#!/usr/bin/env bash
my_val="$(json=$(<package.json) node -pe "JSON.parse(process.env.json)['version']")"
We're using process.env, because this gets the file's contents into Node.js as a string without any risk of malicious contents escaping their quoting and being parsed as code.
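The same pattern works for nested properties too; a minimal sketch, assuming a hypothetical package.json that happens to have a repository.url field:
my_repo="$(json=$(<package.json) node -pe "JSON.parse(process.env.json)['repository']['url']")"
echo "$my_repo"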
Now that PowerShell is cross-platform, I thought I'd throw it into the mix, since I find it to be fairly intuitive and extremely simple.
curl -s 'https://api.github.com/users/lambda' | ConvertFrom-Json
ConvertFrom-Json converts the JSON into a PowerShell custom object, so you can easily work with the properties from that point forward. If you only wanted the 'id' property for example, you'd just do this:
curl -s 'https://api.github.com/users/lambda' | ConvertFrom-Json | select -ExpandProperty id
If you wanted to invoke the whole thing from within Bash, then you'd have to call it like this:
powershell 'curl -s "https://api.github.com/users/lambda" | ConvertFrom-Json'
Of course, there's a pure PowerShell way to do it without curl, which would be:
Invoke-WebRequest 'https://api.github.com/users/lambda' | select -ExpandProperty Content | ConvertFrom-Json
Finally, there's also ConvertTo-Json which converts a custom object to JSON just as easily. Here's an example:
(New-Object PsObject -Property @{ Name = "Tester"; SomeList = @('one','two','three')}) | ConvertTo-Json
Which would produce nice JSON like this:
{
"Name": "Tester",
"SomeList": [
"one",
"two",
"three"
]
}
Admittedly, using a Windows shell on Unix is somewhat sacrilegious, but PowerShell is really good at some things, and parsing JSON and XML are a couple of them. This is the GitHub page for the cross platform version: PowerShell
I cannot use any of the answers here. None of jq, shell arrays, declare, grep -P, lookbehind, lookahead, Python, Perl, Ruby, or even Bash is available.
The remaining answers simply do not work well. JavaScript sounded familiar, but the tin says Nescaffe - so it is a no go, too :) Even if available, for my simple needs - they would be overkill and slow.
Yet, it is extremely important for me to get many variables from the JSON-formatted reply of my modem. I am doing it in Bourne shell (sh) with a very trimmed-down BusyBox on my routers! There aren't any problems using AWK alone: just set delimiters and read the data. For a single variable, that is all!
awk 'BEGIN { FS="\""; RS="," }; { if ($2 == "login") {print $4} }' test.json
Remember I don't have any arrays? I had to assign the AWK-parsed data to the 11 variables which I need in a shell script. Wherever I looked, that was said to be an impossible mission. No problem with that, either.
My solution is simple. This code will:
parse the .json file from the question (actually, I have borrowed a working data sample from the most upvoted answer) and pick out the quoted data, plus
create shell variables from within awk, assigning freely named shell variable names.
eval $( curl -s 'https://api.github.com/users/lambda' |
awk ' BEGIN { FS="\""; RS="," };
{
    if ($2 == "login") { print "Login=\""$4"\"" }
    if ($2 == "name") { print "Name=\""$4"\"" }
    if ($2 == "updated_at") { print "Updated=\""$4"\"" }
}' )
echo "$Login, $Name, $Updated"
There aren't any problems with blanks within. In my use, the same command parses a long single line output. As eval is used, this solution is suited for trusted data only.
It is simple to adapt it to pick up unquoted data. For a huge number of variables, a marginal speed gain can be achieved using else if. Lack of arrays obviously means: no multiple records without extra fiddling. But where arrays are available, adapting this solution is a simple task.
Maikel's sed answer almost works (but I cannot comment on it). For my nicely formatted data - it works. Not so much with the example used here (missing quotes throw it off). It is complicated and difficult to modify. Plus, I do not like having to make 11 calls to extract 11 variables. Why? I timed 100 loops extracting 9 variables: the sed function took 48.99 seconds and my solution took 0.91 seconds! Not fair? Doing just a single extraction of 9 variables: 0.51 vs. 0.02 seconds.
Someone who also has XML files might want to look at my Xidel. It is a command-line interface, dependency-free JSONiq processor. (I.e., it also supports XQuery for XML or JSON processing.)
The example in the question would be:
xidel -e 'json("http://twitter.com/users/username.json")("name")'
Or with my own, nonstandard extension syntax:
xidel -e 'json("http://twitter.com/users/username.json").name'
You can try something like this -
curl -s 'http://twitter.com/users/jaypalsingh.json' |
awk -F=":" -v RS="," '$1~/"text"/ {print}'
One interesting tool that hasn't been covered in the existing answers is gron, written in Go, with a tagline that says Make JSON greppable! - which is exactly what it does.
So essentially gron breaks down your JSON into discrete assignments so you can see the absolute 'path' to each value. The primary advantage it has over other tools like jq is that it lets you search for a value without knowing how deeply nested the record is, and without breaking the original JSON structure.
e.g., if I want to search for the 'twitter_username' field from the following link, I just do:
% gron 'https://api.github.com/users/lambda' | fgrep 'twitter_username'
json.twitter_username = "unlambda";
% gron 'https://api.github.com/users/lambda' | fgrep 'twitter_username' | gron -u
{
"twitter_username": "unlambda"
}
As simple as that. Note how the gron -u (short for ungron) reconstructs the JSON back from the search path. The need for fgrep is just to filter your search to the paths needed and not let the search expression be evaluated as a regex, but as a fixed string (which is essentially grep -F)
Another example to search for a string to see where in the nested structure the record is under
% echo '{"foo":{"bar":{"zoo":{"moo":"fine"}}}}' | gron | fgrep "fine"
json.foo.bar.zoo.moo = "fine";
It also supports streaming JSON with its -s command line flag, where you can continuously gron the input stream for a matching record. Also gron has zero runtime dependencies. You can download a binary for Linux, Mac, Windows or FreeBSD and run it.
More usage examples and tips can be found at the official GitHub page - Advanced Usage
As for why one might use gron over other JSON parsing tools, see the author's note on the project page.
Why shouldn't I just use jq?
jq is awesome, and a lot more powerful than gron, but with that power comes complexity. gron aims to make it easier to use the tools you already know, like grep and sed.
You can use jshon:
curl 'http://twitter.com/users/username.json' | jshon -e text
Here's one way you can do it with AWK:
curl -sL 'http://twitter.com/users/username.json' | awk -F"," -v k="text" '{
gsub(/{|}/,"")
for(i=1;i<=NF;i++){
if ( $i ~ k ){
print $i
}
}
}'
Here is a good reference. In this case:
curl 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) { where = match(a[i], /\"text\"/); if(where) {print a[i]} } }'
Parsing JSON is painful in a shell script. With a more appropriate language, create a tool that extracts JSON attributes in a way consistent with shell scripting conventions. You can use your new tool to solve the immediate shell scripting problem and then add it to your kit for future situations.
For example, consider a tool jsonlookup such that if I say jsonlookup access token id it will return the attribute id defined within the attribute token defined within the attribute access from standard input, which is presumably JSON data. If the attribute doesn't exist, the tool returns nothing (exit status 1). If the parsing fails, exit status 2 and a message to standard error. If the lookup succeeds, the tool prints the attribute's value.
Having created a Unix tool for the precise purpose of extracting JSON values you can easily use it in shell scripts:
access_token=$(curl <some horrible crap> | jsonlookup access token id)
Any language will do for the implementation of jsonlookup. Here is a fairly concise Python version:
#!/usr/bin/python
import sys
import json
try:
    rep = json.loads(sys.stdin.read())
except:
    sys.stderr.write(sys.argv[0] + ": unable to parse JSON from stdin\n")
    sys.exit(2)
for key in sys.argv[1:]:
    if key not in rep:
        sys.exit(1)
    rep = rep[key]
print rep
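With that script saved as jsonlookup and made executable, a hypothetical run might look like this (the sample JSON is made up for illustration):
$ echo '{"access": {"token": {"id": "abc123"}}}' | jsonlookup access token id
abc123
$ echo '{"access": {}}' | jsonlookup access token id; echo "exit: $?"
exit: 1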
A two-liner which uses Python. It works particularly well if you're writing a single .sh file and you don't want to depend on another .py file. It also leverages the usage of pipe |. echo "{\"field\": \"value\"}" can be replaced by anything printing a JSON file to standard output.
echo "{\"field\": \"value\"}" | python -c 'import sys, json
print(json.load(sys.stdin)["field"])'
If you have the PHP interpreter installed:
php -r 'var_export(json_decode(`curl http://twitter.com/users/username.json`, 1));'
For example:
We have a resource that provides JSON content with countries' ISO codes: http://country.io/iso3.json and we can easily see it in a shell with curl:
curl http://country.io/iso3.json
But it does not look very convenient or readable. Better to parse the JSON content and print a readable structure:
php -r 'var_export(json_decode(`curl http://country.io/iso3.json`, 1));'
This code will print something like:
array (
'BD' => 'BGD',
'BE' => 'BEL',
'BF' => 'BFA',
'BG' => 'BGR',
'BA' => 'BIH',
'BB' => 'BRB',
'WF' => 'WLF',
'BL' => 'BLM',
...
If you have nested arrays, this output will look much better...
There is also a very simple, but powerful, JSON CLI processing tool, fx.
Examples
Use an anonymous function:
echo '{"key": "value"}' | fx "x => x.key"
Output:
value
If you don't pass an anonymous function (param => ...), the code will automatically be transformed into an anonymous function, and you can access the JSON via the this keyword:
$ echo '[1,2,3]' | fx "this.map(x => x * 2)"
[2, 4, 6]
Or just use dot syntax too:
echo '{"items": {"one": 1}}' | fx .items.one
Output:
1
You can pass any number of anonymous functions for reducing JSON:
echo '{"items": ["one", "two"]}' | fx "this.items" "this[1]"
Output:
two
You can update existing JSON using spread operator:
echo '{"count": 0}' | fx "{...this, count: 1}"
Output:
{"count": 1}
Just plain JavaScript. There isn't any need to learn new syntax.
Later versions of fx have an interactive mode!
I needed something in Bash that was short, would run without dependencies beyond vanilla Linux LSB and macOS (with either Python 2.7 or 3), and would handle errors, e.g. report JSON parse errors and missing-property errors without spewing Python exceptions:
json-extract () {
if [[ "$1" == "" || "$1" == "-h" || "$1" == "-?" || "$1" == "--help" ]] ; then
echo 'Extract top level property value from json document'
echo ' Usage: json-extract <property> [ <file-path> ]'
echo ' Example 1: json-extract status /tmp/response.json'
echo ' Example 2: echo $JSON_STRING | json-extract status'
echo ' Status codes: 0 - success, 1 - json parse error, 2 - property missing'
else
python -c $'import sys, json;\ntry: obj = json.load(open(sys.argv[2])); \nexcept: sys.exit(1)\ntry: print(obj[sys.argv[1]])\nexcept: sys.exit(2)' "$1" "${2:-/dev/stdin}"
fi
}
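A couple of hypothetical runs of the function (the property names and JSON are made up for illustration):
$ echo '{"status": "ok", "count": 3}' | json-extract status
ok
$ echo '{"status": "ok"}' | json-extract missing; echo "exit code: $?"
exit code: 2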

Trying to write a shell script that will parse thru a json and fetch me ids [duplicate]

I'm trying to parse JSON returned from a curl request, like so:
curl 'http://twitter.com/users/username.json' |
sed -e 's/[{}]/''/g' |
awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
The above splits the JSON into fields, for example:
% ...
"geo_enabled":false
"friends_count":245
"profile_text_color":"000000"
"status":"in_reply_to_screen_name":null
"source":"web"
"truncated":false
"text":"My status"
"favorited":false
% ...
How do I print a specific field (denoted by the -v k=text)?
There are a number of tools specifically designed for the purpose of manipulating JSON from the command line, and will be a lot easier and more reliable than doing it with Awk, such as jq:
curl -s 'https://api.github.com/users/lambda' | jq -r '.name'
You can also do this with tools that are likely already installed on your system, like Python using the json module, and so avoid any extra dependencies, while still having the benefit of a proper JSON parser. The following assume you want to use UTF-8, which the original JSON should be encoded in and is what most modern terminals use as well:
Python 3:
curl -s 'https://api.github.com/users/lambda' | \
python3 -c "import sys, json; print(json.load(sys.stdin)['name'])"
Python 2:
export PYTHONIOENCODING=utf8
curl -s 'https://api.github.com/users/lambda' | \
python2 -c "import sys, json; print json.load(sys.stdin)['name']"
Frequently Asked Questions
Why not a pure shell solution?
The standard POSIX/Single Unix Specification shell is a very limited language which doesn't contain facilities for representing sequences (list or arrays) or associative arrays (also known as hash tables, maps, dicts, or objects in some other languages). This makes representing the result of parsing JSON somewhat tricky in portable shell scripts. There are somewhat hacky ways to do it, but many of them can break if keys or values contain certain special characters.
Bash 4 and later, zsh, and ksh have support for arrays and associative arrays, but these shells are not universally available (macOS stopped updating Bash at Bash 3, due to a change from GPLv2 to GPLv3, while many Linux systems don't have zsh installed out of the box). It's possible that you could write a script that would work in either Bash 4 or zsh, one of which is available on most macOS, Linux, and BSD systems these days, but it would be tough to write a shebang line that worked for such a polyglot script.
Finally, writing a full fledged JSON parser in shell would be a significant enough dependency that you might as well just use an existing dependency like jq or Python instead. It's not going to be a one-liner, or even small five-line snippet, to do a good implementation.
Why not use awk, sed, or grep?
It is possible to use these tools to do some quick extraction from JSON with a known shape and formatted in a known way, such as one key per line. There are several examples of suggestions for this in other answers.
However, these tools are designed for line based or record based formats; they are not designed for recursive parsing of matched delimiters with possible escape characters.
So these quick and dirty solutions using awk/sed/grep are likely to be fragile, and break if some aspect of the input format changes, such as collapsing whitespace, or adding additional levels of nesting to the JSON objects, or an escaped quote within a string. A solution that is robust enough to handle all JSON input without breaking will also be fairly large and complex, and so not too much different than adding another dependency on jq or Python.
I have had to deal with large amounts of customer data being deleted due to poor input parsing in a shell script before, so I never recommend quick and dirty methods that may be fragile in this way. If you're doing some one-off processing, see the other answers for suggestions, but I still highly recommend just using an existing tested JSON parser.
Historical notes
This answer originally recommended jsawk, which should still work, but is a little more cumbersome to use than jq, and depends on a standalone JavaScript interpreter being installed which is less common than a Python interpreter, so the above answers are probably preferable:
curl -s 'https://api.github.com/users/lambda' | jsawk -a 'return this.name'
This answer also originally used the Twitter API from the question, but that API no longer works, making it hard to copy the examples to test out, and the new Twitter API requires API keys, so I've switched to using the GitHub API which can be used easily without API keys. The first answer for the original question would be:
curl 'http://twitter.com/users/username.json' | jq -r '.text'
To quickly extract the values for a particular key, I personally like to use "grep -o", which only returns the regex's match. For example, to get the "text" field from tweets, something like:
grep -Po '"text":.*?[^\\]",' tweets.json
This regex is more robust than you might think; for example, it deals fine with strings having embedded commas and escaped quotes inside them. I think with a little more work you could make one that is actually guaranteed to extract the value, if it's atomic. (If it has nesting, then a regex can't do it of course.)
And to further clean (albeit keeping the string's original escaping) you can use something like: | perl -pe 's/"text"://; s/^"//; s/",$//'. (I did this for this analysis.)
To all the haters who insist you should use a real JSON parser -- yes, that is essential for correctness, but
To do a really quick analysis, like counting values to check on data cleaning bugs or get a general feel for the data, banging out something on the command line is faster. Opening an editor to write a script is distracting.
grep -o is orders of magnitude faster than the Python standard json library, at least when doing this for tweets (which are ~2 KB each). I'm not sure if this is just because json is slow (I should compare to yajl sometime); but in principle, a regex should be faster since it's finite state and much more optimizable, instead of a parser that has to support recursion, and in this case, spends lots of CPU building trees for structures you don't care about. (If someone wrote a finite state transducer that did proper (depth-limited) JSON parsing, that would be fantastic! In the meantime we have "grep -o".)
To write maintainable code, I always use a real parsing library. I haven't tried jsawk, but if it works well, that would address point #1.
One last, wackier, solution: I wrote a script that uses Python json and extracts the keys you want, into tab-separated columns; then I pipe through a wrapper around awk that allows named access to columns. In here: the json2tsv and tsvawk scripts. So for this example it would be:
json2tsv id text < tweets.json | tsvawk '{print "tweet " $id " is: " $text}'
This approach doesn't address #2, is more inefficient than a single Python script, and it's a little brittle: it forces normalization of newlines and tabs in string values, to play nice with awk's field/record-delimited view of the world. But it does let you stay on the command line, with more correctness than grep -o.
On the basis that some of the recommendations here (especially in the comments) suggested the use of Python, I was disappointed not to find an example.
So, here's a one-liner to get a single value from some JSON data. It assumes that you are piping the data in (from somewhere) and so should be useful in a scripting context.
echo '{"hostname":"test","domainname":"example.com"}' | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["hostname"]'
Following martinr's and Boecko's lead:
curl -s 'http://twitter.com/users/username.json' | python -mjson.tool
That will give you an extremely grep-friendly output. Very convenient:
curl -s 'http://twitter.com/users/username.json' | python -mjson.tool | grep my_key
You could just download jq binary for your platform and run (chmod +x jq):
$ curl 'https://twitter.com/users/username.json' | ./jq -r '.name'
It extracts "name" attribute from the json object.
jq homepage says it is like sed for JSON data.
Using Node.js
If the system has Node.js installed, it's possible to use the -p print and -e evaluate script flags with JSON.parse to pull out any value that is needed.
A simple example using the JSON string { "foo": "bar" } and pulling out the value of "foo":
node -pe 'JSON.parse(process.argv[1]).foo' '{ "foo": "bar" }'
Output:
bar
Because we have access to cat and other utilities, we can use this for files:
node -pe 'JSON.parse(process.argv[1]).foo' "$(cat foobar.json)"
Output:
bar
Or any other format such as an URL that contains JSON:
node -pe 'JSON.parse(process.argv[1]).name' "$(curl -s https://api.github.com/users/trevorsenior)"
Output:
Trevor Senior
Use Python's JSON support instead of using AWK!
Something like this:
curl -s http://twitter.com/users/username.json | \
python -c "import json,sys;obj=json.load(sys.stdin);print(obj['name']);"
macOS v12.3 (Monterey) removed /usr/bin/python, so we must use /usr/bin/python3 for macOS v12.3 and later.
curl -s http://twitter.com/users/username.json | \
python3 -c "import json,sys;obj=json.load(sys.stdin);print(obj['name']);"
You've asked how to shoot yourself in the foot and I'm here to provide the ammo:
curl -s 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v RS=',"' -F: '/^text/ {print $2}'
You could use tr -d '{}' instead of sed. But leaving them out completely seems to have the desired effect as well.
If you want to strip off the outer quotes, pipe the result of the above through sed 's/\(^"\|"$\)//g'
I think others have sounded sufficient alarm. I'll be standing by with a cell phone to call an ambulance. Fire when ready.
Using Bash with Python
Create a Bash function in your .bashrc file:
function getJsonVal () {
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1))";
}
Then
curl 'http://twitter.com/users/username.json' | getJsonVal "['text']"
Output:
My status
Here is the same function, but with error checking.
function getJsonVal() {
if [ \( $# -ne 1 \) -o \( -t 0 \) ]; then
cat <<EOF
Usage: getJsonVal 'key' < /tmp/
-- or --
cat /tmp/input | getJsonVal 'key'
EOF
return;
fi;
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1))";
}
Where $# -ne 1 makes sure at least 1 input, and -t 0 make sure you are redirecting from a pipe.
The nice thing about this implementation is that you can access nested JSON values and get JSON content in return! =)
Example:
echo '{"foo": {"bar": "baz", "a": [1,2,3]}}' | getJsonVal "['foo']['a'][1]"
Output:
2
If you want to be really fancy, you could pretty print the data:
function getJsonVal () {
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1, sort_keys=True, indent=4))";
}
echo '{"foo": {"bar": "baz", "a": [1,2,3]}}' | getJsonVal "['foo']"
{
"a": [
1,
2,
3
],
"bar": "baz"
}
Update (2020)
My biggest issue with external tools (e.g., Python) was that you have to deal with package managers and dependencies to install them.
However, now that we have jq as a standalone, static tool that's easy to install cross-platform via GitHub Releases and Webi (webinstall.dev/jq), I'd recommend that:
Mac, Linux:
curl -sS https://webi.sh/jq | bash
Windows 10:
curl.exe -A MS https://webi.ms/jq | powershell
Cheat Sheet: https://webinstall.dev/jq
Original (2011)
TickTick is a JSON parser written in bash (less than 250 lines of code).
Here's the author's snippet from his article, Imagine a world where Bash supports JSON:
#!/bin/bash
. ticktick.sh
``
people = {
"Writers": [
"Rod Serling",
"Charles Beaumont",
"Richard Matheson"
],
"Cast": {
"Rod Serling": { "Episodes": 156 },
"Martin Landau": { "Episodes": 2 },
"William Shatner": { "Episodes": 2 }
}
}
``
function printDirectors() {
echo " The ``people.Directors.length()`` Directors are:"
for director in ``people.Directors.items()``; do
printf " - %s\n" ${!director}
done
}
`` people.Directors = [ "John Brahm", "Douglas Heyes" ] ``
printDirectors
newDirector="Lamont Johnson"
`` people.Directors.push($newDirector) ``
printDirectors
echo "Shifted: "``people.Directors.shift()``
printDirectors
echo "Popped: "``people.Directors.pop()``
printDirectors
This is using standard Unix tools available on most distributions. It also works well with backslashes (\) and quotes (").
Warning: This doesn't come close to the power of jq and will only work with very simple JSON objects. It's an attempt to answer to the original question and in situations where you can't install additional tools.
function parse_json()
{
echo $1 | \
sed -e 's/[{}]/''/g' | \
sed -e 's/", "/'\",\"'/g' | \
sed -e 's/" ,"/'\",\"'/g' | \
sed -e 's/" , "/'\",\"'/g' | \
sed -e 's/","/'\"---SEPERATOR---\"'/g' | \
awk -F=':' -v RS='---SEPERATOR---' "\$1~/\"$2\"/ {print}" | \
sed -e "s/\"$2\"://" | \
tr -d "\n\t" | \
sed -e 's/\\"/"/g' | \
sed -e 's/\\\\/\\/g' | \
sed -e 's/^[ \t]*//g' | \
sed -e 's/^"//' -e 's/"$//'
}
parse_json '{"username":"john, doe","email":"john#doe.com"}' username
parse_json '{"username":"john doe","email":"john#doe.com"}' email
--- outputs ---
john, doe
johh#doe.com
Parsing JSON with PHP CLI
It is arguably off-topic, but since precedence reigns, this question remains incomplete without a mention of our trusty and faithful PHP, am I right?
It is using the same example JSON, but let’s assign it to a variable to reduce obscurity.
export JSON='{"hostname":"test","domainname":"example.com"}'
Now for PHP goodness, it is using file_get_contents and the php://stdin stream wrapper.
echo $JSON | php -r 'echo json_decode(file_get_contents("php://stdin"))->hostname;'
Or as pointed out using fgets and the already opened stream at CLI constant STDIN.
echo $JSON | php -r 'echo json_decode(fgets(STDIN))->hostname;'
If someone just wants to extract values from simple JSON objects without the need for nested structures, it is possible to use regular expressions without even leaving Bash.
Here is a function I defined using bash regular expressions based on the JSON standard:
function json_extract() {
local key=$1
local json=$2
local string_regex='"([^"\]|\\.)*"'
local number_regex='-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?'
local value_regex="${string_regex}|${number_regex}|true|false|null"
local pair_regex="\"${key}\"[[:space:]]*:[[:space:]]*(${value_regex})"
if [[ ${json} =~ ${pair_regex} ]]; then
echo $(sed 's/^"\|"$//g' <<< "${BASH_REMATCH[1]}")
else
return 1
fi
}
Caveats: objects and arrays are not supported as values, but all other value types defined in the standard are supported. Also, a pair will be matched no matter how deep in the JSON document it is as long as it has exactly the same key name.
Using the OP's example:
$ json_extract text "$(curl 'http://twitter.com/users/username.json')"
My status
$ json_extract friends_count "$(curl 'http://twitter.com/users/username.json')"
245
Unfortunately the top voted answer that uses grep returns the full match that didn't work in my scenario, but if you know the JSON format will remain constant you can use lookbehind and lookahead to extract just the desired values.
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="FooBar":")(.*?)(?=",)'
he\"llo
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="TotalPages":)(.*?)(?=,)'
33
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="anotherValue":)(.*?)(?=})'
100
Version which uses Ruby and http://flori.github.com/json/
< file.json ruby -e "require 'rubygems'; require 'json'; puts JSON.pretty_generate(JSON[STDIN.read]);"
Or more concisely:
< file.json ruby -r rubygems -r json -e "puts JSON.pretty_generate(JSON[STDIN.read]);"
This is yet another Bash and Python hybrid answer. I posted this answer, because I wanted to process more complex JSON output, but, reducing the complexity of my bash application. I want to crack open the following JSON object from http://www.arcgis.com/sharing/rest/info?f=json in Bash:
{
"owningSystemUrl": "http://www.arcgis.com",
"authInfo": {
"tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
"isTokenBasedSecurity": true
}
}
In the following example, I created my own implementation of jq and unquote leveraging Python. You'll note that once we import the Python object from json to a Python dictionary we can use Python syntax to navigate the dictionary. To navigate the above, the syntax is:
data
data[ "authInfo" ]
data[ "authInfo" ][ "tokenServicesUrl" ]
By using magic in Bash, we omit data and only supply the Python text to the right of data, i.e.
jq
jq '[ "authInfo" ]'
jq '[ "authInfo" ][ "tokenServicesUrl" ]'
Note, with no parameters, jq acts as a JSON prettifier. With parameters, we can use Python syntax to extract anything we want from the dictionary including navigating subdictionaries and array elements.
Here are the Bash Python hybrid functions:
#!/bin/bash -xe
jq_py() {
cat <<EOF
import json, sys
data = json.load( sys.stdin )
print( json.dumps( data$1, indent = 4 ) )
EOF
}
jq() {
python -c "$( jq_py "$1" )"
}
unquote_py() {
cat <<EOF
import json,sys
print( json.load( sys.stdin ) )
EOF
}
unquote() {
python -c "$( unquote_py )"
}
Here's a sample usage of the Bash Python functions:
curl http://www.arcgis.com/sharing/rest/info?f=json | tee arcgis.json
# {"owningSystemUrl":"https://www.arcgis.com","authInfo":{"tokenServicesUrl":"https://www.arcgis.com/sharing/rest/generateToken","isTokenBasedSecurity":true}}
cat arcgis.json | jq
# {
# "owningSystemUrl": "https://www.arcgis.com",
# "authInfo": {
# "tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
# "isTokenBasedSecurity": true
# }
# }
cat arcgis.json | jq '[ "authInfo" ]'
# {
# "tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
# "isTokenBasedSecurity": true
# }
cat arcgis.json | jq '[ "authInfo" ][ "tokenServicesUrl" ]'
# "https://www.arcgis.com/sharing/rest/generateToken"
cat arcgis.json | jq '[ "authInfo" ][ "tokenServicesUrl" ]' | unquote
# https://www.arcgis.com/sharing/rest/generateToken
There is an easier way to get a property from a JSON string. Using a package.json file as an example, try this:
#!/usr/bin/env bash
my_val="$(json=$(<package.json) node -pe "JSON.parse(process.env.json)['version']")"
We're using process.env, because this gets the file's contents into Node.js as a string without any risk of malicious contents escaping their quoting and being parsed as code.
Now that PowerShell is cross platform, I thought I'd throw its way out there, since I find it to be fairly intuitive and extremely simple.
curl -s 'https://api.github.com/users/lambda' | ConvertFrom-Json
ConvertFrom-Json converts the JSON into a PowerShell custom object, so you can easily work with the properties from that point forward. If you only wanted the 'id' property for example, you'd just do this:
curl -s 'https://api.github.com/users/lambda' | ConvertFrom-Json | select -ExpandProperty id
If you wanted to invoke the whole thing from within Bash, then you'd have to call it like this:
powershell 'curl -s "https://api.github.com/users/lambda" | ConvertFrom-Json'
Of course, there's a pure PowerShell way to do it without curl, which would be:
Invoke-WebRequest 'https://api.github.com/users/lambda' | select -ExpandProperty Content | ConvertFrom-Json
Finally, there's also ConvertTo-Json which converts a custom object to JSON just as easily. Here's an example:
(New-Object PsObject -Property #{ Name = "Tester"; SomeList = #('one','two','three')}) | ConvertTo-Json
Which would produce nice JSON like this:
{
"Name": "Tester",
"SomeList": [
"one",
"two",
"three"
]
}
Admittedly, using a Windows shell on Unix is somewhat sacrilegious, but PowerShell is really good at some things, and parsing JSON and XML are a couple of them. This is the GitHub page for the cross platform version: PowerShell
I can not use any of the answers here. Neither jq, shell arrays, declare, grep -P, lookbehind, lookahead, Python, Perl, Ruby, or even Bash, is available.
The remaining answers simply do not work well. JavaScript sounded familiar, but the tin says Nescaffe - so it is a no go, too :) Even if available, for my simple needs - they would be overkill and slow.
Yet, it is extremely important for me to get many variables from the JSON formatted reply of my modem. I am doing it in Bourne shell (sh) with a very trimmed down BusyBox at my routers! There aren't any problems using AWK alone: just set delimiters and read the data. For a single variable, that is all!
awk 'BEGIN { FS="\""; RS="," }; { if ($2 == "login") {print $4} }' test.json
Remember, I don't have any arrays? I had to assign the AWK-parsed data to the 11 variables which I need in a shell script. Wherever I looked, that was said to be an impossible mission. No problem with that, either.
My solution is simple. This code will:
parse the JSON data from the question (actually, I have borrowed a working data sample from the most upvoted answer) and pick out the quoted data, plus
create shell variables from within AWK, assigning freely named shell variable names.
eval $( curl -s 'https://api.github.com/users/lambda' |
awk ' BEGIN { FS="\""; RS="," };
{
if ($2 == "login") { print "Login=\"" $4 "\"" }
if ($2 == "name") { print "Name=\"" $4 "\"" }
if ($2 == "updated_at") { print "Updated=\"" $4 "\"" }
}' )
echo "$Login, $Name, $Updated"
There aren't any problems with blanks within the values. In my use, the same command parses a long single-line output. As eval is used, this solution is suited for trusted data only.
It is simple to adapt it to pick up unquoted data (see the sketch just below). For a huge number of variables, a marginal speed gain can be achieved using else if. The lack of arrays obviously means: no multiple records without extra fiddling. But where arrays are available, adapting this solution is a simple task.
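For example, a minimal sketch of the unquoted case (the id key is only an illustration): with FS set to the double quote, a record like "id":12345 puts the key in $2 and ":12345" in $3, so the leading colon has to be stripped before the assignment is printed.
eval $( curl -s 'https://api.github.com/users/lambda' |
awk ' BEGIN { FS="\""; RS="," };
{
# unquoted value: drop the leading colon from $3 before printing the assignment
if ($2 == "id") { sub(/^:/, "", $3); print "Id=" $3 }
}' )
echo "$Id"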
@maikel's sed answer almost works (but I cannot comment on it). For my nicely formatted data it works - not so much with the example used here (missing quotes throw it off). It is complicated and difficult to modify. Plus, I do not like having to make 11 calls to extract 11 variables. Why? I timed 100 loops extracting 9 variables: the sed function took 48.99 seconds and my solution took 0.91 seconds! Not fair? Doing just a single extraction of 9 variables: 0.51 vs. 0.02 seconds.
Someone who also has XML files might want to look at my Xidel. It is a dependency-free, command-line JSONiq processor. (I.e., it also supports XQuery for XML or JSON processing.)
The example in the question would be:
xidel -e 'json("http://twitter.com/users/username.json")("name")'
Or with my own, nonstandard extension syntax:
xidel -e 'json("http://twitter.com/users/username.json").name'
You can try something like this -
curl -s 'http://twitter.com/users/jaypalsingh.json' |
awk -F":" -v RS="," '$1~/"text"/ {print}'
One interesting tool that hasn't been covered in the existing answers is gron, written in Go, whose tagline is Make JSON greppable! - which is exactly what it does.
Essentially, gron breaks your JSON down into discrete assignments so you can see the absolute 'path' to each value. Its primary advantage over other tools like jq is that it lets you search for a value without knowing how deeply nested the record is, without breaking the original JSON structure.
E.g., if I want to search for the 'twitter_username' field from the following link, I just do:
% gron 'https://api.github.com/users/lambda' | fgrep 'twitter_username'
json.twitter_username = "unlambda";
% gron 'https://api.github.com/users/lambda' | fgrep 'twitter_username' | gron -u
{
"twitter_username": "unlambda"
}
As simple as that. Note how gron -u (short for ungron) reconstructs the JSON back from the search path. fgrep is needed only to filter your search to the paths you want and to keep the search expression from being evaluated as a regex; it is treated as a fixed string instead (fgrep is essentially grep -F).
Another example: searching for a string to see where in the nested structure the record sits:
% echo '{"foo":{"bar":{"zoo":{"moo":"fine"}}}}' | gron | fgrep "fine"
json.foo.bar.zoo.moo = "fine";
It also supports streaming JSON with its -s command-line flag, where you can continuously gron the input stream for matching records. Also, gron has zero runtime dependencies. You can download a binary for Linux, macOS, Windows, or FreeBSD and run it.
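As a minimal sketch of stream mode (the sample log lines are invented), -s parses every input line as its own JSON object, so the resulting assignments can be grepped as they are produced:
# two line-delimited JSON objects piped through gron in stream mode
printf '%s\n' '{"level":"info"}' '{"level":"error"}' | gron -s | fgrep 'level'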
More usage examples and tips can be found at the official GitHub page - Advanced Usage
As for why one might use gron over other JSON parsing tools, see the author's note on the project page:
Why shouldn't I just use jq?
jq is awesome, and a lot more powerful than gron, but with that power comes complexity. gron aims to make it easier to use the tools you already know, like grep and sed.
You can use jshon:
curl 'http://twitter.com/users/username.json' | jshon -e text
Here's one way you can do it with AWK:
curl -sL 'http://twitter.com/users/username.json' | awk -F"," -v k="text" '{
    gsub(/{|}/, "")
    for (i = 1; i <= NF; i++) {
        if ($i ~ k) {
            print $i
        }
    }
}'
Here is a good reference. In this case:
curl 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) { where = match(a[i], /\"text\"/); if(where) {print a[i]} } }'
Parsing JSON is painful in a shell script. With a more appropriate language, create a tool that extracts JSON attributes in a way consistent with shell scripting conventions. You can use your new tool to solve the immediate shell scripting problem and then add it to your kit for future situations.
For example, consider a tool jsonlookup such that if I say jsonlookup access token id it will return the attribute id defined within the attribute token defined within the attribute access from standard input, which is presumably JSON data. If the attribute doesn't exist, the tool returns nothing (exit status 1). If the parsing fails, exit status 2 and a message to standard error. If the lookup succeeds, the tool prints the attribute's value.
Having created a Unix tool for the precise purpose of extracting JSON values you can easily use it in shell scripts:
access_token=$(curl <some horrible crap> | jsonlookup access token id)
Any language will do for the implementation of jsonlookup. Here is a fairly concise Python version:
#!/usr/bin/python
import sys
import json
try: rep = json.loads(sys.stdin.read())
except:
    sys.stderr.write(sys.argv[0] + ": unable to parse JSON from stdin\n")
    sys.exit(2)
for key in sys.argv[1:]:
    if key not in rep:
        sys.exit(1)
    rep = rep[key]
print rep
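A quick demo, assuming the script above is saved as jsonlookup somewhere on your PATH and made executable:
echo '{"access": {"token": {"id": "abc123"}}}' | jsonlookup access token id
# abc123
echo '{"access": {}}' | jsonlookup access token id; echo "exit status: $?"
# exit status: 1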
A two-liner which uses Python. It works particularly well if you're writing a single .sh file and you don't want to depend on another .py file. It also takes advantage of the pipe |. echo "{\"field\": \"value\"}" can be replaced by anything that prints a JSON document to standard output.
echo "{\"field\": \"value\"}" | python -c 'import sys, json
print(json.load(sys.stdin)["field"])'
If you have the PHP interpreter installed:
php -r 'var_export(json_decode(`curl http://twitter.com/users/username.json`, 1));'
For example:
We have a resource that provides JSON content with countries' ISO codes, http://country.io/iso3.json, and we can easily view it in a shell with curl:
curl http://country.io/iso3.json
But that is not very convenient or readable. Better to parse the JSON content and print a readable structure:
php -r 'var_export(json_decode(`curl http://country.io/iso3.json`, 1));'
This code will print something like:
array (
'BD' => 'BGD',
'BE' => 'BEL',
'BF' => 'BFA',
'BG' => 'BGR',
'BA' => 'BIH',
'BB' => 'BRB',
'WF' => 'WLF',
'BL' => 'BLM',
...
If you have nested arrays, this output will look much better...
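And if you only need a single value rather than the whole dump, here is a hedged one-liner in the same spirit (the 'DE' key is just an example):
php -r 'echo json_decode(`curl -s http://country.io/iso3.json`, true)["DE"], "\n";'
# DEU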
There is also a very simple, but powerful, JSON CLI processing tool, fx.
Examples
Use an anonymous function:
echo '{"key": "value"}' | fx "x => x.key"
Output:
value
If you don't pass an anonymous function as the parameter, the code will automatically be transformed into an anonymous function, and you can access the JSON via the this keyword:
$ echo '[1,2,3]' | fx "this.map(x => x * 2)"
[2, 4, 6]
Or just use dot syntax too:
echo '{"items": {"one": 1}}' | fx .items.one
Output:
1
You can pass any number of anonymous functions for reducing JSON:
echo '{"items": ["one", "two"]}' | fx "this.items" "this[1]"
Output:
two
You can update existing JSON using the spread operator:
echo '{"count": 0}' | fx "{...this, count: 1}"
Output:
{"count": 1}
Just plain JavaScript. There isn't any need to learn new syntax.
Later versions of fx have an interactive mode!
I needed something in Bash that was short and would run without dependencies beyond vanilla Linux LSB and macOS, for both Python 2.7 and 3, and that would handle errors, e.g., report JSON parse errors and missing-property errors without spewing Python exceptions:
json-extract () {
    if [[ "$1" == "" || "$1" == "-h" || "$1" == "-?" || "$1" == "--help" ]] ; then
        echo 'Extract top level property value from json document'
        echo ' Usage: json-extract <property> [ <file-path> ]'
        echo ' Example 1: json-extract status /tmp/response.json'
        echo ' Example 2: echo $JSON_STRING | json-extract status'
        echo ' Status codes: 0 - success, 1 - json parse error, 2 - property missing'
    else
        python -c $'import sys, json;\ntry: obj = json.load(open(sys.argv[2])); \nexcept: sys.exit(1)\ntry: print(obj[sys.argv[1]])\nexcept: sys.exit(2)' "$1" "${2:-/dev/stdin}"
    fi
}
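A quick check of the documented status codes, assuming the function above has been sourced (the sample JSON is made up):
echo '{"status": "ok", "count": 3}' | json-extract status
# ok
echo '{"status": "ok"}' | json-extract missing; echo "exit: $?"
# exit: 2
echo 'not json' | json-extract status; echo "exit: $?"
# exit: 1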


bash shell script getting substring from particular line in file that is delimited by double quotes [duplicate]

I'm trying to parse JSON returned from a curl request, like so:
curl 'http://twitter.com/users/username.json' |
sed -e 's/[{}]/''/g' |
awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
The above splits the JSON into fields, for example:
% ...
"geo_enabled":false
"friends_count":245
"profile_text_color":"000000"
"status":"in_reply_to_screen_name":null
"source":"web"
"truncated":false
"text":"My status"
"favorited":false
% ...
How do I print a specific field (denoted by the -v k=text)?
There are a number of tools specifically designed for the purpose of manipulating JSON from the command line, and will be a lot easier and more reliable than doing it with Awk, such as jq:
curl -s 'https://api.github.com/users/lambda' | jq -r '.name'
You can also do this with tools that are likely already installed on your system, like Python using the json module, and so avoid any extra dependencies, while still having the benefit of a proper JSON parser. The following assume you want to use UTF-8, which the original JSON should be encoded in and is what most modern terminals use as well:
Python 3:
curl -s 'https://api.github.com/users/lambda' | \
python3 -c "import sys, json; print(json.load(sys.stdin)['name'])"
Python 2:
export PYTHONIOENCODING=utf8
curl -s 'https://api.github.com/users/lambda' | \
python2 -c "import sys, json; print json.load(sys.stdin)['name']"
Frequently Asked Questions
Why not a pure shell solution?
The standard POSIX/Single Unix Specification shell is a very limited language which doesn't contain facilities for representing sequences (list or arrays) or associative arrays (also known as hash tables, maps, dicts, or objects in some other languages). This makes representing the result of parsing JSON somewhat tricky in portable shell scripts. There are somewhat hacky ways to do it, but many of them can break if keys or values contain certain special characters.
Bash 4 and later, zsh, and ksh have support for arrays and associative arrays, but these shells are not universally available (macOS stopped updating Bash at Bash 3, due to a change from GPLv2 to GPLv3, while many Linux systems don't have zsh installed out of the box). It's possible that you could write a script that would work in either Bash 4 or zsh, one of which is available on most macOS, Linux, and BSD systems these days, but it would be tough to write a shebang line that worked for such a polyglot script.
Finally, writing a full fledged JSON parser in shell would be a significant enough dependency that you might as well just use an existing dependency like jq or Python instead. It's not going to be a one-liner, or even small five-line snippet, to do a good implementation.
Why not use awk, sed, or grep?
It is possible to use these tools to do some quick extraction from JSON with a known shape and formatted in a known way, such as one key per line. There are several examples of suggestions for this in other answers.
However, these tools are designed for line based or record based formats; they are not designed for recursive parsing of matched delimiters with possible escape characters.
So these quick and dirty solutions using awk/sed/grep are likely to be fragile, and break if some aspect of the input format changes, such as collapsing whitespace, or adding additional levels of nesting to the JSON objects, or an escaped quote within a string. A solution that is robust enough to handle all JSON input without breaking will also be fairly large and complex, and so not too much different than adding another dependency on jq or Python.
I have had to deal with large amounts of customer data being deleted due to poor input parsing in a shell script before, so I never recommend quick and dirty methods that may be fragile in this way. If you're doing some one-off processing, see the other answers for suggestions, but I still highly recommend just using an existing tested JSON parser.
Historical notes
This answer originally recommended jsawk, which should still work, but is a little more cumbersome to use than jq, and depends on a standalone JavaScript interpreter being installed which is less common than a Python interpreter, so the above answers are probably preferable:
curl -s 'https://api.github.com/users/lambda' | jsawk -a 'return this.name'
This answer also originally used the Twitter API from the question, but that API no longer works, making it hard to copy the examples to test out, and the new Twitter API requires API keys, so I've switched to using the GitHub API which can be used easily without API keys. The first answer for the original question would be:
curl 'http://twitter.com/users/username.json' | jq -r '.text'
To quickly extract the values for a particular key, I personally like to use "grep -o", which only returns the regex's match. For example, to get the "text" field from tweets, something like:
grep -Po '"text":.*?[^\\]",' tweets.json
This regex is more robust than you might think; for example, it deals fine with strings having embedded commas and escaped quotes inside them. I think with a little more work you could make one that is actually guaranteed to extract the value, if it's atomic. (If it has nesting, then a regex can't do it of course.)
And to further clean (albeit keeping the string's original escaping) you can use something like: | perl -pe 's/"text"://; s/^"//; s/",$//'. (I did this for this analysis.)
To all the haters who insist you should use a real JSON parser -- yes, that is essential for correctness, but
To do a really quick analysis, like counting values to check on data cleaning bugs or get a general feel for the data, banging out something on the command line is faster. Opening an editor to write a script is distracting.
grep -o is orders of magnitude faster than the Python standard json library, at least when doing this for tweets (which are ~2 KB each). I'm not sure if this is just because json is slow (I should compare to yajl sometime); but in principle, a regex should be faster since it's finite state and much more optimizable, instead of a parser that has to support recursion, and in this case, spends lots of CPU building trees for structures you don't care about. (If someone wrote a finite state transducer that did proper (depth-limited) JSON parsing, that would be fantastic! In the meantime we have "grep -o".)
To write maintainable code, I always use a real parsing library. I haven't tried jsawk, but if it works well, that would address point #1.
One last, wackier, solution: I wrote a script that uses Python json and extracts the keys you want, into tab-separated columns; then I pipe through a wrapper around awk that allows named access to columns. In here: the json2tsv and tsvawk scripts. So for this example it would be:
json2tsv id text < tweets.json | tsvawk '{print "tweet " $id " is: " $text}'
This approach doesn't address #2, is more inefficient than a single Python script, and it's a little brittle: it forces normalization of newlines and tabs in string values, to play nice with awk's field/record-delimited view of the world. But it does let you stay on the command line, with more correctness than grep -o.
On the basis that some of the recommendations here (especially in the comments) suggested the use of Python, I was disappointed not to find an example.
So, here's a one-liner to get a single value from some JSON data. It assumes that you are piping the data in (from somewhere) and so should be useful in a scripting context.
echo '{"hostname":"test","domainname":"example.com"}' | python -c 'import json,sys;obj=json.load(sys.stdin);print obj["hostname"]'
Following martinr's and Boecko's lead:
curl -s 'http://twitter.com/users/username.json' | python -mjson.tool
That will give you an extremely grep-friendly output. Very convenient:
curl -s 'http://twitter.com/users/username.json' | python -mjson.tool | grep my_key
You could just download jq binary for your platform and run (chmod +x jq):
$ curl 'https://twitter.com/users/username.json' | ./jq -r '.name'
It extracts "name" attribute from the json object.
jq homepage says it is like sed for JSON data.
Using Node.js
If the system has Node.js installed, it's possible to use the -p print and -e evaluate script flags with JSON.parse to pull out any value that is needed.
A simple example using the JSON string { "foo": "bar" } and pulling out the value of "foo":
node -pe 'JSON.parse(process.argv[1]).foo' '{ "foo": "bar" }'
Output:
bar
Because we have access to cat and other utilities, we can use this for files:
node -pe 'JSON.parse(process.argv[1]).foo' "$(cat foobar.json)"
Output:
bar
Or any other format such as an URL that contains JSON:
node -pe 'JSON.parse(process.argv[1]).name' "$(curl -s https://api.github.com/users/trevorsenior)"
Output:
Trevor Senior
Use Python's JSON support instead of using AWK!
Something like this:
curl -s http://twitter.com/users/username.json | \
python -c "import json,sys;obj=json.load(sys.stdin);print(obj['name']);"
macOS v12.3 (Monterey) removed /usr/bin/python, so we must use /usr/bin/python3 for macOS v12.3 and later.
curl -s http://twitter.com/users/username.json | \
python3 -c "import json,sys;obj=json.load(sys.stdin);print(obj['name']);"
You've asked how to shoot yourself in the foot and I'm here to provide the ammo:
curl -s 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v RS=',"' -F: '/^text/ {print $2}'
You could use tr -d '{}' instead of sed. But leaving them out completely seems to have the desired effect as well.
If you want to strip off the outer quotes, pipe the result of the above through sed 's/\(^"\|"$\)//g'
I think others have sounded sufficient alarm. I'll be standing by with a cell phone to call an ambulance. Fire when ready.
Using Bash with Python
Create a Bash function in your .bashrc file:
function getJsonVal () {
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1))";
}
Then
curl 'http://twitter.com/users/username.json' | getJsonVal "['text']"
Output:
My status
Here is the same function, but with error checking.
function getJsonVal() {
if [ \( $# -ne 1 \) -o \( -t 0 \) ]; then
cat <<EOF
Usage: getJsonVal 'key' < /tmp/
-- or --
cat /tmp/input | getJsonVal 'key'
EOF
return;
fi;
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1))";
}
Where $# -ne 1 makes sure at least 1 input, and -t 0 make sure you are redirecting from a pipe.
The nice thing about this implementation is that you can access nested JSON values and get JSON content in return! =)
Example:
echo '{"foo": {"bar": "baz", "a": [1,2,3]}}' | getJsonVal "['foo']['a'][1]"
Output:
2
If you want to be really fancy, you could pretty print the data:
function getJsonVal () {
python -c "import json,sys;sys.stdout.write(json.dumps(json.load(sys.stdin)$1, sort_keys=True, indent=4))";
}
echo '{"foo": {"bar": "baz", "a": [1,2,3]}}' | getJsonVal "['foo']"
{
"a": [
1,
2,
3
],
"bar": "baz"
}
Update (2020)
My biggest issue with external tools (e.g., Python) was that you have to deal with package managers and dependencies to install them.
However, now that we have jq as a standalone, static tool that's easy to install cross-platform via GitHub Releases and Webi (webinstall.dev/jq), I'd recommend that:
Mac, Linux:
curl -sS https://webi.sh/jq | bash
Windows 10:
curl.exe -A MS https://webi.ms/jq | powershell
Cheat Sheet: https://webinstall.dev/jq
Original (2011)
TickTick is a JSON parser written in bash (less than 250 lines of code).
Here's the author's snippet from his article, Imagine a world where Bash supports JSON:
#!/bin/bash
. ticktick.sh
``
people = {
"Writers": [
"Rod Serling",
"Charles Beaumont",
"Richard Matheson"
],
"Cast": {
"Rod Serling": { "Episodes": 156 },
"Martin Landau": { "Episodes": 2 },
"William Shatner": { "Episodes": 2 }
}
}
``
function printDirectors() {
echo " The ``people.Directors.length()`` Directors are:"
for director in ``people.Directors.items()``; do
printf " - %s\n" ${!director}
done
}
`` people.Directors = [ "John Brahm", "Douglas Heyes" ] ``
printDirectors
newDirector="Lamont Johnson"
`` people.Directors.push($newDirector) ``
printDirectors
echo "Shifted: "``people.Directors.shift()``
printDirectors
echo "Popped: "``people.Directors.pop()``
printDirectors
This is using standard Unix tools available on most distributions. It also works well with backslashes (\) and quotes (").
Warning: This doesn't come close to the power of jq and will only work with very simple JSON objects. It's an attempt to answer to the original question and in situations where you can't install additional tools.
function parse_json()
{
echo $1 | \
sed -e 's/[{}]/''/g' | \
sed -e 's/", "/'\",\"'/g' | \
sed -e 's/" ,"/'\",\"'/g' | \
sed -e 's/" , "/'\",\"'/g' | \
sed -e 's/","/'\"---SEPERATOR---\"'/g' | \
awk -F=':' -v RS='---SEPERATOR---' "\$1~/\"$2\"/ {print}" | \
sed -e "s/\"$2\"://" | \
tr -d "\n\t" | \
sed -e 's/\\"/"/g' | \
sed -e 's/\\\\/\\/g' | \
sed -e 's/^[ \t]*//g' | \
sed -e 's/^"//' -e 's/"$//'
}
parse_json '{"username":"john, doe","email":"john#doe.com"}' username
parse_json '{"username":"john doe","email":"john#doe.com"}' email
--- outputs ---
john, doe
johh#doe.com
Parsing JSON with PHP CLI
It is arguably off-topic, but since precedence reigns, this question remains incomplete without a mention of our trusty and faithful PHP, am I right?
It is using the same example JSON, but let’s assign it to a variable to reduce obscurity.
export JSON='{"hostname":"test","domainname":"example.com"}'
Now for PHP goodness, it is using file_get_contents and the php://stdin stream wrapper.
echo $JSON | php -r 'echo json_decode(file_get_contents("php://stdin"))->hostname;'
Or as pointed out using fgets and the already opened stream at CLI constant STDIN.
echo $JSON | php -r 'echo json_decode(fgets(STDIN))->hostname;'
If someone just wants to extract values from simple JSON objects without the need for nested structures, it is possible to use regular expressions without even leaving Bash.
Here is a function I defined using bash regular expressions based on the JSON standard:
function json_extract() {
local key=$1
local json=$2
local string_regex='"([^"\]|\\.)*"'
local number_regex='-?(0|[1-9][0-9]*)(\.[0-9]+)?([eE][+-]?[0-9]+)?'
local value_regex="${string_regex}|${number_regex}|true|false|null"
local pair_regex="\"${key}\"[[:space:]]*:[[:space:]]*(${value_regex})"
if [[ ${json} =~ ${pair_regex} ]]; then
echo $(sed 's/^"\|"$//g' <<< "${BASH_REMATCH[1]}")
else
return 1
fi
}
Caveats: objects and arrays are not supported as values, but all other value types defined in the standard are supported. Also, a pair will be matched no matter how deep in the JSON document it is as long as it has exactly the same key name.
Using the OP's example:
$ json_extract text "$(curl 'http://twitter.com/users/username.json')"
My status
$ json_extract friends_count "$(curl 'http://twitter.com/users/username.json')"
245
Unfortunately the top voted answer that uses grep returns the full match that didn't work in my scenario, but if you know the JSON format will remain constant you can use lookbehind and lookahead to extract just the desired values.
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="FooBar":")(.*?)(?=",)'
he\"llo
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="TotalPages":)(.*?)(?=,)'
33
# echo '{"TotalPages":33,"FooBar":"he\"llo","anotherValue":100}' | grep -Po '(?<="anotherValue":)(.*?)(?=})'
100
Version which uses Ruby and http://flori.github.com/json/
< file.json ruby -e "require 'rubygems'; require 'json'; puts JSON.pretty_generate(JSON[STDIN.read]);"
Or more concisely:
< file.json ruby -r rubygems -r json -e "puts JSON.pretty_generate(JSON[STDIN.read]);"
This is yet another Bash and Python hybrid answer. I posted this answer, because I wanted to process more complex JSON output, but, reducing the complexity of my bash application. I want to crack open the following JSON object from http://www.arcgis.com/sharing/rest/info?f=json in Bash:
{
"owningSystemUrl": "http://www.arcgis.com",
"authInfo": {
"tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
"isTokenBasedSecurity": true
}
}
In the following example, I created my own implementation of jq and unquote leveraging Python. You'll note that once we import the Python object from json to a Python dictionary we can use Python syntax to navigate the dictionary. To navigate the above, the syntax is:
data
data[ "authInfo" ]
data[ "authInfo" ][ "tokenServicesUrl" ]
By using magic in Bash, we omit data and only supply the Python text to the right of data, i.e.
jq
jq '[ "authInfo" ]'
jq '[ "authInfo" ][ "tokenServicesUrl" ]'
Note, with no parameters, jq acts as a JSON prettifier. With parameters, we can use Python syntax to extract anything we want from the dictionary including navigating subdictionaries and array elements.
Here are the Bash Python hybrid functions:
#!/bin/bash -xe
jq_py() {
cat <<EOF
import json, sys
data = json.load( sys.stdin )
print( json.dumps( data$1, indent = 4 ) )
EOF
}
jq() {
python -c "$( jq_py "$1" )"
}
unquote_py() {
cat <<EOF
import json,sys
print( json.load( sys.stdin ) )
EOF
}
unquote() {
python -c "$( unquote_py )"
}
Here's a sample usage of the Bash Python functions:
curl http://www.arcgis.com/sharing/rest/info?f=json | tee arcgis.json
# {"owningSystemUrl":"https://www.arcgis.com","authInfo":{"tokenServicesUrl":"https://www.arcgis.com/sharing/rest/generateToken","isTokenBasedSecurity":true}}
cat arcgis.json | jq
# {
# "owningSystemUrl": "https://www.arcgis.com",
# "authInfo": {
# "tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
# "isTokenBasedSecurity": true
# }
# }
cat arcgis.json | jq '[ "authInfo" ]'
# {
# "tokenServicesUrl": "https://www.arcgis.com/sharing/rest/generateToken",
# "isTokenBasedSecurity": true
# }
cat arcgis.json | jq '[ "authInfo" ][ "tokenServicesUrl" ]'
# "https://www.arcgis.com/sharing/rest/generateToken"
cat arcgis.json | jq '[ "authInfo" ][ "tokenServicesUrl" ]' | unquote
# https://www.arcgis.com/sharing/rest/generateToken
There is an easier way to get a property from a JSON string. Using a package.json file as an example, try this:
#!/usr/bin/env bash
my_val="$(json=$(<package.json) node -pe "JSON.parse(process.env.json)['version']")"
We're using process.env, because this gets the file's contents into Node.js as a string without any risk of malicious contents escaping their quoting and being parsed as code.
Now that PowerShell is cross platform, I thought I'd throw its way out there, since I find it to be fairly intuitive and extremely simple.
curl -s 'https://api.github.com/users/lambda' | ConvertFrom-Json
ConvertFrom-Json converts the JSON into a PowerShell custom object, so you can easily work with the properties from that point forward. If you only wanted the 'id' property for example, you'd just do this:
curl -s 'https://api.github.com/users/lambda' | ConvertFrom-Json | select -ExpandProperty id
If you wanted to invoke the whole thing from within Bash, then you'd have to call it like this:
powershell 'curl -s "https://api.github.com/users/lambda" | ConvertFrom-Json'
Of course, there's a pure PowerShell way to do it without curl, which would be:
Invoke-WebRequest 'https://api.github.com/users/lambda' | select -ExpandProperty Content | ConvertFrom-Json
Finally, there's also ConvertTo-Json which converts a custom object to JSON just as easily. Here's an example:
(New-Object PsObject -Property @{ Name = "Tester"; SomeList = @('one','two','three')}) | ConvertTo-Json
Which would produce nice JSON like this:
{
"Name": "Tester",
"SomeList": [
"one",
"two",
"three"
]
}
Admittedly, using a Windows shell on Unix is somewhat sacrilegious, but PowerShell is really good at some things, and parsing JSON and XML are a couple of them. This is the GitHub page for the cross platform version: PowerShell
I cannot use any of the answers here. jq, shell arrays, declare, grep -P, lookbehind, lookahead, Python, Perl, Ruby, and even Bash are all unavailable.
The remaining answers simply do not work well. JavaScript sounded familiar, but the tin says Nescaffe, so it is a no-go too :) Even if available, for my simple needs, they would be overkill and slow.
Yet, it is extremely important for me to get many variables from the JSON-formatted reply of my modem. I am doing it in Bourne shell (sh) with a very trimmed-down BusyBox on my routers! There aren't any problems using AWK alone: just set the delimiters and read the data. For a single variable, that is all!
awk 'BEGIN { FS="\""; RS="," }; { if ($2 == "login") {print $4} }' test.json
Remember I don't have any arrays? I had to assign the AWK-parsed data to the 11 variables I need in a shell script. Wherever I looked, that was said to be an impossible mission. No problem with that, either.
My solution is simple. This code will:
parse the JSON from the question (actually, I have borrowed a working data sample from the most upvoted answer) and pick out the quoted data, plus
create shell variables from within awk, giving them freely chosen shell variable names.
eval $( curl -s 'https://api.github.com/users/lambda' |
    awk ' BEGIN { FS="\""; RS="," };
    {
        if ($2 == "login")      { print "Login=\""$4"\"" }
        if ($2 == "name")       { print "Name=\""$4"\"" }
        if ($2 == "updated_at") { print "Updated=\""$4"\"" }
    }' )
echo "$Login, $Name, $Updated"
There aren't any problems with blanks within. In my use, the same command parses a long single line output. As eval is used, this solution is suited for trusted data only.
It is simple to adapt it to pick up unquoted data. For a huge number of variables, a marginal speed gain can be achieved by using else if. The lack of arrays obviously means no multiple records without extra fiddling, but where arrays are available, adapting this solution is a simple task.
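For instance, an unquoted numeric value such as "TotalPages":33 ends up in $3 with a leading colon when FS is the double quote, so one possible adaptation (a sketch, not the code I actually run) is:
awk 'BEGIN { FS="\""; RS="," };
     { if ($2 == "TotalPages") { v = substr($3, 2); gsub(/[ }]/, "", v); print v } }' test.json
The gsub also strips a trailing } in case the value happens to be the last one in the object.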
#maikel's sed answer almost works (but I cannot comment on it). For my nicely formatted data it works; not so much with the example used here (missing quotes throw it off). It is complicated and difficult to modify. Plus, I do not like having to make 11 calls to extract 11 variables. Why? I timed 100 loops extracting 9 variables: the sed function took 48.99 seconds and my solution took 0.91 seconds! Not fair? Doing just a single extraction of 9 variables: 0.51 vs. 0.02 seconds.
Someone who also has XML files might want to look at my Xidel. It is a command-line, dependency-free JSONiq processor. (I.e., it also supports XQuery for XML or JSON processing.)
The example in the question would be:
xidel -e 'json("http://twitter.com/users/username.json")("name")'
Or with my own, nonstandard extension syntax:
xidel -e 'json("http://twitter.com/users/username.json").name'
You can try something like this:
curl -s 'http://twitter.com/users/jaypalsingh.json' |
awk -F":" -v RS="," '$1~/"text"/ {print}'
One interesting tool that hasn't been covered in the existing answers is gron, written in Go, whose tagline is "Make JSON greppable!", which is exactly what it does.
Essentially, gron breaks down your JSON into discrete assignments so you can see the absolute 'path' to each value. Its primary advantage over tools like jq is that it lets you search for a value without knowing how deeply nested the record is, and without breaking the original JSON structure.
For example, to search for the 'twitter_username' field from the following link, I just do:
% gron 'https://api.github.com/users/lambda' | fgrep 'twitter_username'
json.twitter_username = "unlambda";
% gron 'https://api.github.com/users/lambda' | fgrep 'twitter_username' | gron -u
{
"twitter_username": "unlambda"
}
As simple as that. Note how gron -u (short for ungron) reconstructs the JSON from the search path. fgrep is needed only to filter the output to the paths of interest and to treat the search expression as a fixed string rather than a regex (fgrep is essentially grep -F).
Another example to search for a string to see where in the nested structure the record is under
% echo '{"foo":{"bar":{"zoo":{"moo":"fine"}}}}' | gron | fgrep "fine"
json.foo.bar.zoo.moo = "fine";
It also supports streaming JSON with its -s command line flag, where you can continuously gron the input stream for a matching record. Also gron has zero runtime dependencies. You can download a binary for Linux, Mac, Windows or FreeBSD and run it.
More usage examples and tips can be found on the official GitHub page under Advanced Usage.
As for why one would use gron over other JSON parsing tools, see the author's note on the project page:
Why shouldn't I just use jq?
jq is awesome, and a lot more powerful than gron, but with that power comes complexity. gron aims to make it easier to use the tools you already know, like grep and sed.
You can use jshon:
curl 'http://twitter.com/users/username.json' | jshon -e text
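If you want the value without its JSON quotes, jshon also has a -u (unstring) option that can be appended, if my memory of its man page is right; verify on your version:
curl 'http://twitter.com/users/username.json' | jshon -e text -u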
Here's one way you can do it with AWK:
curl -sL 'http://twitter.com/users/username.json' | awk -F"," -v k="text" '{
gsub(/{|}/,"")
for(i=1;i<=NF;i++){
if ( $i ~ k ){
print $i
}
}
}'
Here is a good reference. In this case:
curl 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v k="text" '{n=split($0,a,","); for (i=1; i<=n; i++) { where = match(a[i], /\"text\"/); if(where) {print a[i]} } }'
Parsing JSON is painful in a shell script. With a more appropriate language, create a tool that extracts JSON attributes in a way consistent with shell scripting conventions. You can use your new tool to solve the immediate shell scripting problem and then add it to your kit for future situations.
For example, consider a tool jsonlookup such that jsonlookup access token id returns the attribute id defined within the attribute token defined within the attribute access, read from standard input (which is presumably JSON data). If the attribute doesn't exist, the tool returns nothing (exit status 1). If parsing fails, it exits with status 2 and writes a message to standard error. If the lookup succeeds, the tool prints the attribute's value.
Having created a Unix tool for the precise purpose of extracting JSON values you can easily use it in shell scripts:
access_token=$(curl <some horrible crap> | jsonlookup access token id)
Any language will do for the implementation of jsonlookup. Here is a fairly concise Python version:
#!/usr/bin/python
import sys
import json

# Read and parse the JSON document from standard input.
try:
    rep = json.loads(sys.stdin.read())
except:
    sys.stderr.write(sys.argv[0] + ": unable to parse JSON from stdin\n")
    sys.exit(2)
# Walk the attribute names given on the command line, one level at a time.
for key in sys.argv[1:]:
    if key not in rep:
        sys.exit(1)
    rep = rep[key]
print(rep)
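For instance, with a small made-up document on standard input, the keys are walked left to right:
echo '{"access": {"token": {"id": "abc123"}}}' | jsonlookup access token id
# abc123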
A two-liner which uses Python. It works particularly well if you're writing a single .sh file and you don't want to depend on another .py file. It also works in a pipeline: echo "{\"field\": \"value\"}" can be replaced by anything that prints a JSON document to standard output.
echo "{\"field\": \"value\"}" | python -c 'import sys, json
print(json.load(sys.stdin)["field"])'
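The same pattern extends to nested fields; for example (the document here is made up):
echo '{"outer": {"inner": 42}}' | python -c 'import sys, json
print(json.load(sys.stdin)["outer"]["inner"])'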
If you have the PHP interpreter installed:
php -r 'var_export(json_decode(`curl http://twitter.com/users/username.json`, 1));'
For example:
We have a resource that provides JSON content with countries' ISO codes: http://country.io/iso3.json and we can easily see it in a shell with curl:
curl http://country.io/iso3.json
But that's not very convenient or readable. It's better to parse the JSON content and see a readable structure:
php -r 'var_export(json_decode(`curl http://country.io/iso3.json`, 1));'
This code will print something like:
array (
'BD' => 'BGD',
'BE' => 'BEL',
'BF' => 'BFA',
'BG' => 'BGR',
'BA' => 'BIH',
'BB' => 'BRB',
'WF' => 'WLF',
'BL' => 'BLM',
...
If you have nested arrays, this output will look much better...
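To pull out a single value instead of dumping the whole structure, the same one-liner style works; here is a sketch (it assumes the 'US' key exists in that feed and PHP 5.4+ for array dereferencing of a function call):
curl -s http://country.io/iso3.json | php -r 'echo json_decode(file_get_contents("php://stdin"), true)["US"], "\n";'
# USA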
There is also a very simple, but powerful, JSON CLI processing tool, fx.
Examples
Use an anonymous function:
echo '{"key": "value"}' | fx "x => x.key"
Output:
value
If you don't pass an anonymous function (param => ...), the code will automatically be transformed into one, and you can access the JSON via the this keyword:
$ echo '[1,2,3]' | fx "this.map(x => x * 2)"
[2, 4, 6]
Or just use dot syntax too:
echo '{"items": {"one": 1}}' | fx .items.one
Output:
1
You can pass any number of anonymous functions for reducing JSON:
echo '{"items": ["one", "two"]}' | fx "this.items" "this[1]"
Output:
two
You can update existing JSON using spread operator:
echo '{"count": 0}' | fx "{...this, count: 1}"
Output:
{"count": 1}
Just plain JavaScript. There isn't any need to learn new syntax.
Later versions of fx have an interactive mode!
I needed something in Bash that was short, would run without dependencies beyond vanilla Linux LSB and Mac OS for both Python 2.7 and 3, and would handle errors, e.g. report JSON parse errors and missing property errors without spewing Python exceptions:
json-extract () {
if [[ "$1" == "" || "$1" == "-h" || "$1" == "-?" || "$1" == "--help" ]] ; then
echo 'Extract top level property value from json document'
echo ' Usage: json-extract <property> [ <file-path> ]'
echo ' Example 1: json-extract status /tmp/response.json'
echo ' Example 2: echo $JSON_STRING | json-extract status'
echo ' Status codes: 0 - success, 1 - json parse error, 2 - property missing'
else
python -c $'import sys, json;\ntry: obj = json.load(open(sys.argv[2])); \nexcept: sys.exit(1)\ntry: print(obj[sys.argv[1]])\nexcept: sys.exit(2)' "$1" "${2:-/dev/stdin}"
fi
}
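Usage then follows the built-in help; for example (the file name and keys are illustrative):
echo '{"status": "ok", "count": 3}' | json-extract status
# ok
json-extract count /tmp/response.json
# 3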

What are good CLI tools for JSON?

General Problem
Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e.
These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.
XML Analogues
I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:
<foo user="me">
<baz key="zoidberg" value="squid" />
<baz key="leela" value="cyclops" />
<baz key="fry" value="rube" />
</foo>
And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable.
Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us.
Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future.
JSON Solutions?
So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:
{
"firstName": "Bender",
"lastName": "Robot",
"age": 200,
"address": {
"streetAddress": "123",
"city": "New York",
"state": "NY",
"postalCode": "1729"
},
"phoneNumber": [
{ "type": "home", "number": "666 555-1234" },
{ "type": "fax", "number": "666 555-4567" }
]
}
And wanted to find the average number of phone numbers each person had, I could do something like this with XPath:
fn:avg(/fn:count(phoneNumber))
Questions
Are there any command-line tools that can "query" JSON files in this way?
If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
Heck, is there even work being done to make a query language like this for JSON?
If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?
I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed.
Related Questions
Grep and Sed Equivalent for XML Command Line Processing
Is there a query language for JSON?
JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
I just found this:
http://stedolan.github.io/jq/
"jq is a lightweight and flexible command-line JSON processor."
2014 update:
#user456584 mentioned:
There's also the 'json' command (e.g. 'jsontool'). I tend to prefer it over jq. Very UNIX-y. Here's a link to the project: github.com/trentm/json
In the json README at http://github.com/trentm/json there is a long list of similar things:
jq: http://stedolan.github.io/jq/
json:select: http://jsonselect.org/
jsonpipe: https://github.com/dvxhouse/jsonpipe
json-command: https://github.com/zpoley/json-command
JSONPath: http://goessner.net/articles/JsonPath/, http://code.google.com/p/jsonpath/wiki/Javascript
jsawk: https://github.com/micha/jsawk
jshon: http://kmkeen.com/jshon/
json2: https://github.com/vi/json2
fx: https://github.com/antonmedv/fx
I have created underscore-cli (https://github.com/ddopson/underscore-cli), a module designed specifically for command-line JSON manipulation; see the examples in my earlier answer. It has a very nice command-line help system and is extremely flexible. It is well tested and ready for use; however, I'm still building out a few of the features, like alternatives for input/output formats, and merging in my template handling tool (see TODO.md). If you have any feature requests, comment on this post or add an issue on GitHub. I've designed a pretty extensive feature set, but I'd be glad to prioritize features that are needed by members of the community.
One way you could do it is to convert it to XML. The following uses two Perl modules (JSON and XML::Simple) to do a fly-by conversion:
cat test.json | perl -MJSON -MXML::Simple -e 'print XMLout(decode_json(do{local$/;<>}),RootName=>"json")'
which for your example json ends up as:
<json age="200" firstName="Bender" lastName="Robot">
<address city="New York" postalCode="1729" state="NY" streetAddress="123" />
<phoneNumber number="666 555-1234" type="home" />
<phoneNumber number="666 555-4567" type="fax" />
</json>
Have a look at this insane project, jsawk. It is designed to filter JSON input from the command line. Check out resty as well, a command-line REST client you can use in pipelines; it may come in handy.
There is also an interactive terminal tool, fx.
Pipe any JSON into fx along with an anonymous function for reducing it.
$ echo '{...}' | fx [code ...]
Start interactive mode without passing any arguments:
$ curl ... | fx
https://github.com/antonmedv/fx
Recently I discovered that JSON can easily be eval-ed with Python:
$ python -c "json=eval(open('/json.txt').read()); print len(json['phoneNumber'])"
2
Though the method will fail if the JSON input contains null, true, or false (these are not valid Python literals), and since it relies on eval, it should only be used on trusted input.
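A safer variant of the same one-liner (a sketch using the same hypothetical file path) goes through the json module instead, which handles null/true/false and does not execute the input:
$ python -c "import json; print(len(json.load(open('/json.txt'))['phoneNumber']))"
2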
Have a look at the f:json-document() from the FXSL 2.x library.
Using this function, it is extremely easy to incorporate JSON and use it just as... XML.
For example, one can just write the following XPath expression:
f:json-document($vstrParam)/Students/*[sex = 'Female']
and get all children of Students with sex = 'Female'
Here is the complete example:
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:f="http://fxsl.sf.net/"
exclude-result-prefixes="f xs"
>
<xsl:import href="../f/func-json-document.xsl"/>
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:variable name="vstrParam" as="xs:string">
{
"teacher":{
"name":
"Mr Borat",
"age":
"35",
"Nationality":
"Kazakhstan"
},
"Class":{
"Semester":
"Summer",
"Room":
null,
"Subject":
"Politics",
"Notes":
"We're happy, you happy?"
},
"Students":
{
"Smith":
{"First Name":"Mary","sex":"Female"},
"Brown":
{"First Name":"John","sex":"Male"},
"Jackson":
{"First Name":"Jackie","sex":"Female"}
}
,
"Grades":
{
"Test":
[
{"grade":"A","points":68,"grade":"B","points":25,"grade":"C","points":15},
{"grade":"C","points":2, "grade":"B","points":29, "grade":"A","points":55},
{"grade":"C","points":2, "grade":"A","points":72, "grade":"A","points":65}
]
}
}
</xsl:variable>
<xsl:template match="/">
<xsl:sequence select=
"f:json-document($vstrParam)/Students/*[sex = 'Female']"/>
</xsl:template>
</xsl:stylesheet>
When the above transformation is applied on any XML document (ignored), the correct result is produced:
<Smith>
<First_Name>Mary</First_Name>
<sex>Female</sex>
</Smith>
<Jackson>
<First_Name>Jackie</First_Name>
<sex>Female</sex>
</Jackson>
Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us.
[...]
So this leads me to my question: are there any tools like this for JSON?
If you ask me, xidel is exactly what you're looking for.
Xidel is a command line tool to download and extract data from HTML/XML pages or JSON-APIs, using CSS, XPath 3.0, XQuery 3.0, JSONiq or pattern templates. It can also create new or transformed XML/HTML/JSON documents.
$ xidel -s "input.json" -e '
$json/avg(
count((phoneNumber)())
)
'
2