Retrieve JSON object key values using the sed command

I have JSON like the below coming from a curl command, and it is present in an output.txt file. I want to retrieve the JIRA status, which here is "In Progress":
{
"self": "https://jira.com/jira/rest/api/2/issue/1",
"fields": {
"status": {
"self": "https://jira.com/jira/rest/api/2/status/10170",
"description": "",
"name": "In Progress",
"id": "10170"
}
}
}
I am restricted to using only sed. I tried the command below, but it does not work, and I am not sure how to navigate to the name value. Can you please suggest how to print the JIRA status?
sed -n 's|.*"fields":{"status":{"name":"\([^"]*\)".*|\1|p' output.txt

You can use
sed -n 's/^[[:space:]]*"name": "\(.*\)",/\1/p' output.txt
# With GNU sed:
sed -n 's/^\s*"name":\s*"\(.*\)",/\1/p' output.txt
Details:
-n - suppresses the default line output
^\s*"name":\s*"\(.*\)", - matches
^ - start of string
\s* - zero or more whitespaces
"name": - a literal string
\s* - zero or more whitespaces
" - a " char
\(.*\) - a POSIX BRE capturing group matching any text up to
", - the last occurrence of ", (they are at the end of the targeted line anyway).
\1 - replaces the whole match with Group 1 value
p - only prints the replacement result.
With GNU sed, you can also use the -z option to read the file as a single string and then use a more specific pattern:
sed -z 's/.*"fields":\s*{\s*"status":\s*{.*"name":\s*"\([^"]*\)".*/\1/' output.txt

Bulk update values in json files (writing files)

I have a set of JSON files in a local folder. What I want to do is change a particular string value in it, permanently. That means, deleting or modifying the old entry, writing a new one, and saving it.
Below is the format of the file:
{
"name": "ABC #1",
"description": "This is the description",
"image": "ipfs://NewUriToReplace/1.png",
"dna": "a56c520f57ba2a861de8c78099b4691f9dad6e87",
"edition": 1,
"date": 1641634646966,
"creator": "Team Dreamlabs",
"attributes": [
{
I want to change ABC #1 to ABC #9501 in this file, ABC #2 to ABC #9502 in the next file, and so on. How do I do that on macOS in one go?
As I understand from the example, you are adding 9500 to the integer that follows the # symbol.
Because this replacement is a string operation, a loop with the sed command can be used:
for f in *.json; do sed -i.bak 's/\("name": "ABC #\)\([0-9]\)",/\1950\2",/' $f; done
It just replaces a single digit with the new composition... Although it handles the example, it obviously would not work for numbers beyond #9.
Then we need to use a bash function:
function add_number() { old_number=$(cat $1 | sed -n 's/[ ]*"name": "ABC #\([0-9]*\)",/\1/p'); new_number=$(($old_number+9500)); sed -i.bak "s/\(\"name\": \"ABC #\)\([0-9]*\)\",/\1${new_number}\",/" $1; }; for f in *.json; do add_number $f ; done
The function add_number extracts the integer value, adds the desired number to it, and then replaces the content of the file.
sed is used again for both the extraction and the replacement.
For the extraction, the -n flag suppresses sed's default output and the p flag prints only the result of the replacement; leading spaces are also kept out of the assignment.
For the replacement, double quotes are used so that bash can expand the variable inside the sed expression, which is why the literal quotes are escaped.
Regarding the addition from the comment below: to make the same replacement in the line with the edition tag (using the same number), just add another sed operation with the regular expression amended to fit that line.
Finally, the overall code in a cleaner form:
function add_number() {
old_number=$(cat $1 | sed -n 's/[ ]*"name": "ABC #\([0-9]*\)",/\1/p')
new_number=$(($old_number+9500))
sed -i.bak "s/\(\"name\": \"ABC #\)[0-9]*\",/\1${new_number}\",/" $1
sed -i.bak "s/\(\"edition\": \)[0-9]*,/\1${new_number},/" $1
}
for f in *.json
do add_number $f
done
Those previous answers helped me to write this code:
using variables inside of sed
assigning the variable
If you are going to manipulate your JSON files on more than just this one occasion, then you might want to consider using tools that are designed to accomplish such tasks with ease.
One popular choice could be jq which is a "lightweight and flexible command-line JSON processor" that "has zero runtime dependencies" and is also available for OS X. By using jq within your shell, the following would be one way to accomplish what you have asked for.
Adding the numeric value 9500 to the number sitting in the field called edition:
jq '.edition += 9500' file.json
Interpreting part of the string as a number, again adding 9500 to it, and recomposing the string:
jq '.name |= ((./"#" | .[1] |= "\(tonumber + 9500)") | join("#"))' file.json
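To see what that filter does, it can be fed a minimal object on the command line: the name is split on "#", the numeric part is increased by 9500, and the pieces are joined again:
echo '{"name": "ABC #1"}' | jq '.name |= ((./"#" | .[1] |= "\(tonumber + 9500)") | join("#"))'
{
  "name": "ABC #9501"
}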
On the whole, iterating over your files, making both changes at once, writing to a temporary file and replacing the original on success, while having the value to be added as an external variable:
v=9500
for f in *.json; do jq --argjson v $v '
.edition += $v | .name |= ((./"#" | .[1] |= "\(tonumber + $v)") | join("#"))
' "$f" > "$f.new" && mv "$f.new" "$f"
done
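A quick spot check on one of the processed files (file.json is just a placeholder name here) should show both fields carrying the +9500 offset:
jq '{name, edition}' file.json
{
  "name": "ABC #9501",
  "edition": 9501
}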

Replace tags in text file using key-value pairs from JSON file

I am trying to write a shell script that can read a JSON string, decode it into an array, iterate over the array, and use each key/value pair to replace strings in another file.
If this were PHP, then I would write something like this.
$array = json_decode($jsonString, true);
foreach($array as $key => $value)
{
str_replace($key, $value, $rawString);
}
I need this to be converted to a Bash script.
Here is the example JSON string.
{
"login": "lambda",
"id": 37398,
"avatar_url": "https://avatars.githubusercontent.com/u/37398?v=3",
"gravatar_id": "",
"url": "https://api.github.com/users/lambda",
"html_url": "https://github.com/lambda",
"followers_url": "https://api.github.com/users/lambda/followers",
"following_url": "https://api.github.com/users/lambda/following{/other_user}",
"gists_url": "https://api.github.com/users/lambda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambda/subscriptions",
"organizations_url": "https://api.github.com/users/lambda/orgs",
"repos_url": "https://api.github.com/users/lambda/repos",
"events_url": "https://api.github.com/users/lambda/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambda/received_events",
"type": "User",
"site_admin": false,
"name": "Brian Campbell",
"company": null,
"blog": null,
"location": null,
"email": null,
"hireable": null,
"bio": null,
"public_repos": 27,
"public_gists": 23,
"followers": 8,
"following": 2,
"created_at": "2008-11-30T21:03:27Z",
"updated_at": "2016-12-21T23:53:11Z"
}
I have this file:
Lamba login name is %login%, and avatar url is %avatar_url%
I am using jq
jq -c '.[]' /tmp/json | while read i; do
echo $i
done
This outputs only the values. How do I loop through the keys and also get the values?
Also, I've found that the keys of the json string can be returned using
jq 'keys' /tmp/params
However, I am still trying to figure out how to loop through the keys and return the data.
The whole thing can be done quite simply (and very efficiently) in jq.
For the sake of illustration, suppose we have defined dictionary to be the dictionary object given in the question, and template to be the template string:
def dictionary: { ...... };
def template:
"Lamba login name is %login%, and avatar url is %avatar_url%";
Then the required interpolation can be performed as follows:
dictionary
| reduce to_entries[] as $pair (template; gsub("%\($pair.key)%"; $pair.value))
The above produces:
"Lamba login name is lambda, and avatar url is https://avatars.githubusercontent.com/u/37398?v=3"
There are of course many other ways in which the dictionary and template string can be presented.
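As one concrete possibility, assuming the dictionary lives in infile.json and the template text in infile.txt (the same names the next answer uses), the same idea can be run directly from the command line; the value is piped through tostring here so that numeric and null entries can also be substituted:
jq -r --arg template "$(cat infile.txt)" '
  reduce to_entries[] as $pair ($template;
    gsub("%\($pair.key)%"; $pair.value | tostring))
' infile.json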
I'm assuming your JSON is in infile.json and the text with the tags to be replaced in infile.txt.
Here is an entirely unreadable one-liner that does it:
$ sed -f <(jq -r 'to_entries[] | [.key, .value] | @tsv' < infile.json | sed 's~^~s|%~;s~\t~%|~;s~$~|g~') infile.txt
Lamba login name is lambda, and avatar url is https://avatars.githubusercontent.com/u/37398?v=3
Now, to decipher what this does. First, a few linebreaks for readability:
sed -f <(
jq -r '
to_entries[] |
[.key, .value] |
@tsv
' < infile.json |
sed '
s~^~s|%~
s~\t~%|~
s~$~|g~
'
) infile.txt
We're basically using a sed command that takes its instructions from a file; instead of an actual file, we use process substitution to generate the sed commands:
jq -r 'to_entries[] | [.key, .value] | @tsv' < infile.json |
sed 's~^~s|%~;s~\t~%|~;s~$~|g~'
Some processing with jq, followed by some sed substitutions.
This is what the jq command does:
Generate raw output (no quotes, actual tabs instead of \t) with the -r option
Turn the input JSON object into an array of key-value pairs with the to_entries function, resulting in
[
{
"key": "login",
"value": "lambda"
},
{
"key": "id",
"value": 37398
},
...
]
Get all elements of the array with []:
{
"key": "login",
"value": "lambda"
}
{
"key": "id",
"value": 37398
}
...
Get a list of arrays with key/value in each using [.key, .value], resulting in
[
"login",
"lambda"
]
[
"id",
37398
]
...
Finally, use the @tsv filter to get the key-value pairs as a tab-separated list:
login lambda
id 37398
...
Now, we pipe this to sed, which performs three substitutions:
s~^~s|%~ – add s|% to the beginning of each line
s~\t~%|~ – replace the tab with %|
s~$~|g~ – add |g to the end of each line
This gives us a sed file that looks as follows:
s|%login%|lambda|g
s|%id%|37398|g
s|%avatar_url%|https://avatars.githubusercontent.com/u/37398?v=3|g
Notice that for these substitutions, we used ~ as the delimiter, and for the substitution commands we generated, we used | – mostly to avoid running into problems with strings containing /.
If this sed file were stored as commands.sed, the overall command would correspond to
sed -f commands.sed infile.txt
Remarks
If your shell doesn't support process substitution, you could make sed read from standard input instead, using sed -f -:
jq -r 'to_entries[] | [.key, .value] | @tsv' < infile.json |
sed 's~^~s|%~;s~\t~%|~;s~$~|g~' |
sed -f - infile.txt
If infile.json contained | or ~, you would have to choose different delimiters for the sed substitutions (see for example this answer about using a non-printable character as a delimiter) or even perform additional substitutions to get rid of the delimiting characters first and put them back in at the end (see this and this Q&A).
Some seds (such as BSD sed found in MacOS) have trouble with \t used in the pattern to substitute. If that is the case, the command s~\t~%|~ has to be replaced by s~'$'\t''~%|~ to "splice in" the tab character, or (if the shell doesn't support ANSI-C quoting) even with s~'"$(printf '\t')"'~%|~.
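Putting those two remarks together, a variant that needs neither process substitution nor a literal \t in the sed script might look like this (a sketch along the lines of the commands above, not tested against every sed flavor):
jq -r 'to_entries[] | [.key, .value] | @tsv' < infile.json |
sed 's~^~s|%~;s~'"$(printf '\t')"'~%|~;s~$~|g~' |
sed -f - infile.txt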
Here's a simple sed solution. Assume that the JSON object is in x.json and that the file where the replacements should be done is f.txt.
The following x.sed program, called as
sed -n -f x.sed x.json <(echo FILE_DELIM) f.txt
does the job.
x.sed:
1,$H
$ {
x
:b
s/\("\([^"]\+\)" *: *\(\("\([^"]*\)"\)\|\(\(\w\|\.\)\+\)\).*FILE_DELIM.*\)%\2%\(.*\)/\1\3\8/
tb
s/.*FILE_DELIM\n//
p
}
The trick is to collect the two files (separated by the string FILE_DELIM) in sed's hold space and then repeatedly replace the keys (e.g. %login%) found after the FILE_DELIM with their values.
The crucial point is to define the pattern which matches a key-value pair in the JSON object. Here I used:
" followed by non-" characters followed by " followed by blanks followed by a colon (*1) followed by blanks followed by (again a quoted string, or a string consisting of word characters or .) (*2)
The backreference \2 in the search pattern matches the key, and the %\2% placeholder is replaced with \3, which matches the value.
*1): Up to here this matches a key like "login"
*2): The values are allowed to be "xyz", "", abc, 0.1, ...
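Assuming the script behaves as described (it relies on GNU sed extensions such as \w and \+) and the example data from the question is stored in x.json and f.txt, a run would look roughly like this:
$ sed -n -f x.sed x.json <(echo FILE_DELIM) f.txt
Lamba login name is lambda, and avatar url is https://avatars.githubusercontent.com/u/37398?v=3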

Use Sed to find and replace json field

I have a set of JSON files where, after the last key-value pair, there is a comma that needs to be removed.
{
"RepetitionTime": 0.72,
"TaskName":"WM",
"Manufacturer": "Siemens",
"ManufacturerModelName": "Skyra",
"MagneticFieldStrength": 3.0,
"EchoTime":"0.033",
}
It should look like:
{
"RepetitionTime": 0.72,
"TaskName":"WM",
"Manufacturer": "Siemens",
"ManufacturerModelName": "Skyra",
"MagneticFieldStrength": 3.0,
"EchoTime": 0.033
}
How can I achieve this using sed?
Edit: Changed the output - there should not be any "" around 0.033.
sed -i \'7i'\\t'\"EchoTime\": \0.033\' sub-285345_task-WM_acq-RL_bold.json
is not helping me. I have tried a few other options but with no success.
I also tried using the simplejson and json packages in Python, but given that the files are incorrect JSON, json.loads(file) throws errors.
I would prefer sed over Python for now.
sed -Ei.bak 's/^([[:blank:]]*"EchoTime[^"]*":)"([^"]*)",$/\1\2/' file.json
will do it
Sample Output
{
"RepetitionTime": 0.72,
"TaskName":"WM",
"Manufacturer": "Siemens",
"ManufacturerModelName": "Skyra",
"MagneticFieldStrength": 3.0,
"EchoTime":0.033
}
Notes
-E to enable extended regular expressions.
-i to enable in-place editing; a backup file with a .bak extension is created.
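Since the question mentions a whole set of files, the same command can be wrapped in a loop; the *.json glob is an assumption about how the files are named:
for f in *.json; do
  sed -Ei.bak 's/^([[:blank:]]*"EchoTime[^"]*":)"([^"]*)",$/\1\2/' "$f"
done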
Please try the following command.
sed -i 's#\(.*\)EchoTime":"\(.*\)",$#\1EchoTime":\2#' sub-285345_task-WM_acq-RL_bold.json
In case you are not limited to sed and are open to awk, the following can be used:
awk ' BEGIN{FS=OFS=":"}/EchoTime/ {gsub(/\"|\,/,"",$2)}1' file.json
{
"RepetitionTime": 0.72,
"TaskName":"WM",
"Manufacturer": "Siemens",
"ManufacturerModelName": "Skyra",
"MagneticFieldStrength": 3.0,
"EchoTime":0.033
}
Explanation:
FS=OFS=":" : This will set input and o/p field separator as ":"
/EchoTime/ : Search for the line containing EchoTime.
/EchoTime/{gsub(/\"|\,/,"",$2)}: Once echo time is found use global sub to replace , double quotes and comma in second field of that line.
1 : awk's default action is to print.
For making changes in original file:
awk ' BEGIN{FS=OFS=":"}/EchoTime/ {gsub(/\"|\,/,"",$2)}1' file.json >json.tmp && mv json.tmp file.json

Use grep to parse a key from a json file and get the value

Can someone suggest how I can get the value 45 after parsing an example JSON text as shown below:
....
"test": 12
"job": 45
"task": 11
.....
Please note that I am aware of tools like jq and others, but those would need to be installed.
I am hoping to get this executed using grep, awk or sed command.
awk -F'[[:space:]]*:[[:space:]]*' '/^[[:space:]]*"job"/{ print $2 }'
sed -n 's/^[[:space:]]*"job"[[:space:]]*:[[:space:]]*//p'
You can use grep -oP (PCRE):
grep -oP '"job"\s*:\s*\K\d+' file
45
\K is used for resetting the previously matched data.
Using awk, if you just want to print it:
awk -F ':[ \t]*' '/^.*"job"/ {print $2}' filename
The above command matches any line containing "job" and then prints the second column of that line. The awk option -F is used to set the column separator to : followed by any number of spaces or tabs.
If you want to store this value in bash variable job_val:
job_val=$(awk -F ':[ \t]*' '/^.*"job"/ {print $2}' filename)
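The earlier sed command can be captured into a variable the same way, assuming the lines look exactly like the snippet in the question (no trailing commas); filename is the same placeholder as above:
job_val=$(sed -n 's/^[[:space:]]*"job"[[:space:]]*:[[:space:]]*//p' filename)
echo "$job_val"    # prints 45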
Use specialized tools like jq for the task:
Had your file looked like
[
{
"test": 12,
"job": 45,
"task": 11
}
]
below stuff would get you home
jq ".[].job" file
Had your file looked like
{
"stuff" :{
.
.
"test": 12,
"job": 45,
"task": 11
.
.
}
}
below
jq ".stuff.job" file
would get you home.

Find and edit a Json file using bash

I have multiple files in the following format with different categories like:
{
"id": 1,
"flags": ["a", "b", "c"],
"name": "test",
"category": "video",
"notes": ""
}
Now I want to append the string d to the flags of every file whose category is video. So my final file should look like the file below:
{
"id": 1,
"flags": ["a", "b", "c", "d"],
"name": "test",
"category": "video",
"notes": ""
}
Using the following command I am able to find the files of interest, but now I am stuck on the editing part, which I am unable to do by hand since there are hundreds of files, e.g.
find . -name * | xargs grep "\"category\": \"video\"" | awk '{print $1}' | sed 's/://g'
You can do this
find . -type f | xargs grep -l '"category": "video"' | xargs sed -i -e '/flags/ s/]/, "d"]/'
This will find all the filenames which contain a line with "category": "video", and then add the "d" flag.
Details:
find . -type f
=> Will get all the filenames in your directory
xargs grep -l '"category": "video"'
=> Will get those filenames which contain the line "category": "video"
xargs sed -i -e '/flags/ s/]/, "d"]/'
=> Will add the "d" entry to the flags line.
"TWEET!!" ... (yellow flag thown to the ground) ... Time Out!
What you have, here, is "a JSON file." You also have, at your #!shebang command, your choice of(!) full-featured programming languages ... with intimate and thoroughly-knowledgeale support for JSON ... with which you can very-speedily write your command-file.
Even if it is "theoretically possible" to do this using "bash scripts," this is roughly equivalent to "putting a beautiful stone archway over the front-entrance to a supermarket." Therefore, "waste ye no time" in such an utterly-profitless pursuit. Write a script, using a language that "honest-to-goodness knows about(!) JSON," to decode the contents of the file, then manipulate it (as a data-structure), then re-encode it again.
Here is a more appropriate approach using PHP in shell:
FILE=foo2.json php -r '$file = $_SERVER["FILE"]; $arr = json_decode(file_get_contents($file)); if ($arr->category == "video") { $arr->flags[] = "d"; file_put_contents($file,json_encode($arr)); }'
This loads the file, decodes it into an object, adds "d" to the flags property only when the category is video, and then writes it back to the file in JSON format.
To run this for every JSON file, you can use the find command, e.g.
find . -name "*.json" -print0 | while IFS= read -r -d '' file; do
FILE=$file
# run above PHP command in here
done
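Spelled out, that loop might look like the sketch below, which simply inlines the one-liner from above and passes FILE through the environment for each file:
find . -name "*.json" -print0 | while IFS= read -r -d '' file; do
  FILE="$file" php -r '
    $file = $_SERVER["FILE"];
    $arr = json_decode(file_get_contents($file));
    if ($arr->category == "video") {
      $arr->flags[] = "d";
      file_put_contents($file, json_encode($arr));
    }
  '
done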
If the files are in the same format, this command may help (version for a single file):
ex +':/category.*video/norm kkf]i, "d"' -scwq file1.json
or:
ex +':/flags/,/category/s/"c"/"c", "d"/' -scwq file1.json
which is basically using Ex editor (now part of Vim).
Explanation:
+ - executes Vim command (man ex)
:/pattern_or_range/cmd - find pattern, if successful execute another Vim commands (:h :/)
norm kkf]i - executes keystrokes in normal mode
kk - move cursor up twice
f] - find ]
i, "d" - insert , "d"
-s - silent mode
-cwq - executes wq (write & quit)
For multiple files, use find and -execdir or extend above ex command to:
ex +'bufdo!:/category.*video/norm kkf]i, "d"' -scxa *.json
Where bufdo! executes command for every file, and -cxa saves every file. Add -V1 for extra verbose messages.
If the flags line is not 2 lines above, then you may perform a backward search instead. Or use an approach similar to #sps's, replacing ] with , "d"].
See also: How to change previous line when the pattern is found? at Vim.SE.
Using jq:
find . -type f | xargs cat | jq 'select(.category=="video") | .flags |= . + ["d"]'
Explanation:
jq 'select(.category=="video") | .flags |= . + ["d"]'
# select(.category=="video") => filters by category field
# .flags |= . + ["d"] => Updates the flags array
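Note that this only prints the updated documents to stdout. To rewrite the files in place, the per-file temp-file pattern used earlier in this section can be reused; if/else is used instead of select so that files in other categories are written back unchanged (a sketch, assuming each file holds a single JSON object and is named *.json):
find . -type f -name '*.json' -print0 | while IFS= read -r -d '' f; do
  jq 'if .category == "video" then .flags += ["d"] else . end' "$f" > "$f.new" \
    && mv "$f.new" "$f"
done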