Delete unused key:value properties in JSON file

I have a key:value JSON object that is used in my JavaScript project. The values are strings, and the object looks like this:
{
  key1: {
    someKey: "Some text",
    someKey2: "Some text2"
  },
  key2: {
    someKey3: {
      someKey4: "Some text3",
      someKey5: "Some text4"
    }
  }
}
I use it in the project like this: key1.someKey and key2.someKey3.someKey4. Do you have an idea how to delete unused properties? Let's say we don't use key2.someKey3.someKey5 in any file in the project, so I want it deleted from the JSON file. To the people in the comments: I didn't say I want to use JavaScript for this. I don't want to run it in a browser or on a server. I just want a script that can do that on my local computer.

If you live within JavaScript and Node, you can use something like this to get all the paths, using some modified code from here: https://stackoverflow.com/a/70763473/999943
var lodash = require('lodash') // use this if calling from the node REPL
// import lodash from 'lodash'; // use this if calling from a script

const allPaths = (o, prefix = '', out = []) => {
  if (lodash.isObject(o) || lodash.isArray(o)) Object.entries(o).forEach(([k, v]) => allPaths(v, prefix === '' ? k : `${prefix}.${k}`, out));
  else out.push(prefix);
  return out;
};

let j = {
  key1: { someKey: 'Some text', someKey2: 'Some text2' },
  key2: { someKey3: { someKey4: 'Some text3', someKey5: 'Some text4' } }
}
allPaths(j)
[
  'key1.someKey',
  'key1.someKey2',
  'key2.someKey3.someKey4',
  'key2.someKey3.someKey5'
]
That's all well and good, but now you want to take that list and look through your codebase for usage.
The main choices available are text searching with grep or awk or ag, or parsing the language and looking through its symbolic representation after it's loaded into your project. Tree-shaking can do this for libraries... I haven't looked into how to do tree-shaking for dictionary keys, or some other undefined-reference check like a linter might do for a language.
Then once you have all the instances found, you either manually modify your list or use a JSON library to modify it.
My weapons of choice in this instance are jq, bash, and grep. It's not infallible, but it's a start (use with caution).
setup_test.sh
#!/usr/bin/env bash
mkdir src
echo "key2.someKey3.someKey4" > src/a.js
echo "key1.someKey2" > src/b.js
echo "key3.otherKey" > src/c.js
test.json
{
  "key1": {
    "someKey": "Some text",
    "someKey2": "Some text2"
  },
  "key2": {
    "someKey3": {
      "someKey4": "Some text3",
      "someKey5": "Some text4"
    }
  }
}
check_for_dict_references.sh
#!/usr/bin/env bash
json_input=$1
code_path=$2

cat << HEREDOC
json_input=$json_input
code_path=$code_path
HEREDOC

echo "Paths found in json"
# note: jq's 'paths' emits every path, including intermediate ones;
# use 'paths(scalars)' if you only want the leaves
paths="$(jq -r 'paths | join(".")' "$json_input")"

no_refs=
for path in $paths; do
  escaped_path=$(echo "$path" | sed -e "s|\.|\\\\.|g")
  if ! grep -r "$escaped_path" "$code_path" ; then
    no_refs="$no_refs $path"
  fi
done

echo "Missing paths..."
echo "$no_refs"

echo "Creating a new json file without the unused paths"
del_paths_list=
for path in $no_refs; do
  del_paths_list+=".$path, "
done
del_paths_list=${del_paths_list:0:-2} # remove trailing comma and space
# double quotes so the whole del() list stays a single jq argument
# even when it contains spaces
jq -r "del($del_paths_list)" "$json_input" > "${json_input}.new.json"
After running setup_test.sh, we can test the jq + grep solution:
$ ./check_for_dict_references.sh test.json src
json_input=test.json
code_path=src
Paths found in json
src/b.js:key1.someKey2
src/b.js:key1.someKey2
src/b.js:key1.someKey2
src/a.js:key2.someKey3.someKey4
src/a.js:key2.someKey3.someKey4
src/a.js:key2.someKey3.someKey4
Missing paths...
key2.someKey3.someKey5
Creating a new json file without the unused paths
If you look closely, you would want it to also report key1.someKey as missing, but that path got "found" in the middle of the name key1.someKey2. There are some fancier regex things you can do, but for the purpose of this script this may be enough.
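One of those fancier regex things: a word boundary keeps key1.someKey from being "found" inside key1.someKey2. A minimal tweak to the grep in check_for_dict_references.sh (a sketch, assuming GNU grep, where \b is supported):

if ! grep -r "${escaped_path}\b" "$code_path" ; then
  no_refs="$no_refs $path"
fi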
Now look in your directory for the new json file:
$ cat test.json.new.json
{
  "key1": {
    "someKey": "Some text",
    "someKey2": "Some text2"
  },
  "key2": {
    "someKey3": {
      "someKey4": "Some text3"
    }
  }
}
Hope that helps.

Related

How to add commas in between JSON objects using Linux Shell and SnowSQL?

While there are several posts about this topic on Stack Overflow, none match my exact use case. I am using a Linux shell script to run SnowSQL to generate a json file.
My json file needs to have a comma between json objects.
This:
{
  "CAMPAIGN": "Welcome_New",
  "UUID": "fe881781-bdc2-41b2-95f2-e0e8c19dc597"
}
{
  "CAMPAIGN": "Welcome_Existing",
  "UUID": "77a41c02-beb9-48bf-ada4-b2074c1a78cb"
}
...needs to look like this:
{
  "CAMPAIGN": "Welcome_New",
  "UUID": "fe881781-bdc2-41b2-95f2-e0e8c19dc597"
},
{
  "CAMPAIGN": "Welcome_Existing",
  "UUID": "77a41c02-beb9-48bf-ada4-b2074c1a78cb"
}
Here is my complete ksh script:
#!/usr/bin/ksh
. /appl/.snf_logon
export SNOW_PKEY_FILE=$(mktemp ./pkey-XXXXXX)
trap "rm -f ${SNOW_PKEY_FILE}" EXIT
LibGetSnowCred
{
outFile=JSON_FILE_TYPE_TEST.json
inDir=/testing
outFileNm=@my_db.my_schema.my_file_stage/${outFile}
snowsql \
--private-key-path $SNOW_PKEY_FILE \
-o exit_on_error=true \
-o friendly=false \
-o timing=false \
-o log_level=ERROR \
-o echo=true <<!
COPY INTO ${outFileNm}
FROM (SELECT object_construct(
'UUID',UUID
,'CAMPAIGN',CAMPAIGN)
FROM my_db.my_schema.JSON_Test_Table
LIMIT 2)
FILE_FORMAT=(
TYPE=JSON
COMPRESSION=NONE
)
OVERWRITE=True
HEADER=False
SINGLE=True
MAX_FILE_SIZE=4900000000
;
get ${outFileNm} file://${inDir}/;
rm ${outFileNm};
!
if [ $? -eq 0 ]; then
echo "Export successful"
else
echo "ERROR in export"
fi
}
Is the best practice to add the comma during the SELECT, or after the file is generated, and how?
With or without that comma, the text is still not JSON, just random text that looks like JSON. You export several rows, each row as an independent object. You need to gather all these objects into an array to produce valid JSON.
A JSON that encodes an array of rows looks like this:
[
  {
    "CAMPAIGN": "Welcome_New",
    "UUID": "fe881781-bdc2-41b2-95f2-e0e8c19dc597"
  },
  {
    "CAMPAIGN": "Welcome_Existing",
    "UUID": "77a41c02-beb9-48bf-ada4-b2074c1a78cb"
  }
]
The easiest way to produce this output would be to ask the database, if it supports this option, to wrap all the records into a list before generating the JSON, rather than exporting each record as a separate JSON.
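Since the database here is Snowflake, one way to do that (a sketch, untested; it assumes ARRAY_AGG can hold your row volume in a single value) is to aggregate the rows into one array inside the COPY INTO:

COPY INTO ${outFileNm}
FROM (SELECT array_agg(object_construct('UUID', UUID, 'CAMPAIGN', CAMPAIGN))
      FROM (SELECT UUID, CAMPAIGN
            FROM my_db.my_schema.JSON_Test_Table
            LIMIT 2))
FILE_FORMAT=(TYPE=JSON COMPRESSION=NONE)
OVERWRITE=True SINGLE=True MAX_FILE_SIZE=4900000000;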
If this is not possible then you have a file that contains multiple JSONs. You can use jq to convert these individual JSONs into a JSON similar to the one described above (encoding an array of objects).
It is as simple as that:
jq --slurp '.' input_file > output_file
The option --slurp tells jq to read all the JSONs from the file input_file in memory, to parse them and to put them into an array. That is the program input.
'.' is the jq program. It says "dump the current object". It does not do any processing to the input data. The current object is the array.
After it executes the program (which, in this case doesn't do anything), jq dumps the modified value (as JSON, of course) to the standard output (by default, on screen).
The > output_file part redirects this output to a file (named output_file) instead of showing it on screen.
You can see how it works on the jq playground.
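For example, with the two objects from this question saved in input_file:

$ jq --slurp '.' input_file
[
  {
    "CAMPAIGN": "Welcome_New",
    "UUID": "fe881781-bdc2-41b2-95f2-e0e8c19dc597"
  },
  {
    "CAMPAIGN": "Welcome_Existing",
    "UUID": "77a41c02-beb9-48bf-ada4-b2074c1a78cb"
  }
]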

How can I prettyprint JSON on the command line, but allow invalid JSON objects to pass though?

I'm currently tailing some logs in bash that are half JSON, half text like below:
{"response":{"message":"asdfasdf"}}
{"log":{"example":"asdfasdf"}}
here is some text
{"another":{"example":"asdfasdf"}}
more text
Each line is either a full valid JSON object or some text that would fail a JSON parser.
I've looked at jq and underscore-cli to see if they have options to return the invalid object in the case of failure, but I'm not seeing any.
I've also tried to use a || operator to cat the piped input, but I'm losing the value somehow. Maybe I should read up on pipes more? Example: getLogs -t | (underscore print || cat)
I think I could write a script that stores the input, formats it, and returns the output if successful, and if it fails returns the stored value. I feel like there should be a simpler way, though. Any thoughts?
You can use the js-beautify node package.
Install it with:
$ npm install -g js-beautify
Here is what I did:
$ js-beautify -r test.js
beautified test.js
I tested it with an incomplete JSON file and it worked.
jq can check for invalid JSON:
#!/bin/bash
while read -r p; do
  if jq -e . >/dev/null 2>&1 <<<"$p"; then
    echo "$p" | jq .
  else
    echo 'Skipping invalid json'
  fi
done < /tmp/tst.txt
{
  "response": {
    "message": "asdfasdf"
  }
}
{
  "log": {
    "example": "asdfasdf"
  }
}
Skipping invalid json
{
  "another": {
    "example": "asdfasdf"
  }
}
Skipping invalid json
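If you have jq 1.5 or newer, the loop isn't strictly necessary. A sketch of a single-pass variant that pretty-prints valid lines and passes everything else through untouched:

jq -Rr '. as $line | try fromjson catch $line' /tmp/tst.txt

-R reads each input line as a raw string; fromjson parses it, and on failure the catch hands back the original line, which -r prints as-is.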

unix command to filter the json

[
  {
    "name":"sandboxserver.tar.gz.part-aa",
    "hash":"010d126f8ccf199f3cd5f468a90d5ae1",
    "bytes":4294967296,
    "last_modified":"2018-10-10T01:32:00.069000",
    "content_type":"binary/octet-stream"
  },
  {
    "name":"sandboxserver.tar.gz.part-ab",
    "hash":"49a6f22068228f51488559c096aa06ce",
    "bytes":397973601,
    "last_modified":"2018-10-10T01:32:22.395000",
    "content_type":"binary/octet-stream"
  },
  {
    "name":"sandboxserver.tar.gz.part-ac",
    "hash":"2c5e845f46357e203214592332774f4c",
    "bytes":5179281858,
    "last_modified":"2018-10-11T08:20:11.566000",
    "content_type":"binary/octet-stream"
  }
]
I am getting the above JSON as a response while listing the objects in cloud object storage using curl -l -X GET. How can I get the object "name" values assigned to an array while looping through all the objects?
for example
array[1]="sandboxserver.tar.gz.part-aa"
array[2]="sandboxserver.tar.gz.part-ab"
array[3]="sandboxserver.tar.gz.part-ac"
You can use jq.
jq is a powerful tool that lets you read, filter, and write JSON in bash.
You might need to install it first.
Try this:
I've pasted your json into a file:
~$ cat n1.json
[
  {
    "name":"sandboxserver.tar.gz.part-aa",
    "hash":"010d126f8ccf199f3cd5f468a90d5ae1",
    "bytes":4294967296,
    "last_modified":"2018-10-10T01:32:00.069000",
    "content_type":"binary/octet-stream"
  },
  {
    "name":"sandboxserver.tar.gz.part-ab",
    "hash":"49a6f22068228f51488559c096aa06ce",
    "bytes":397973601,
    "last_modified":"2018-10-10T01:32:22.395000",
    "content_type":"binary/octet-stream"
  },
  {
    "name":"sandboxserver.tar.gz.part-ac",
    "hash":"2c5e845f46357e203214592332774f4c",
    "bytes":5179281858,
    "last_modified":"2018-10-11T08:20:11.566000",
    "content_type":"binary/octet-stream"
  }
]
And then used jq to find the names:
~$ jq -r '.[].name' n1.json
sandboxserver.tar.gz.part-aa
sandboxserver.tar.gz.part-ab
sandboxserver.tar.gz.part-ac
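To get those names into an actual bash array, as asked, one option (assuming bash 4+ for mapfile) is:

~$ mapfile -t array < <(jq -r '.[].name' n1.json)
~$ echo "${array[0]}"
sandboxserver.tar.gz.part-aa

Note that bash arrays are zero-indexed.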
If you don't want to depend on an external utility like jq, you can use a python + bash combo to do the trick.
response="$(cat data.json)"
declare -a array
array=($(python -c "import json,sys; data=[arr['name'] for arr in json.loads(sys.argv[1])]; print('\n'.join(data));" "$response"))
echo "${array[@]}"
Advice: writing embedded python code may soon become unreadable, so you may want to put the python code in a separate script and run that script.
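Following that advice, a sketch of the same thing with the python kept readable and the JSON read from stdin, which sidesteps the argv quoting entirely:

declare -a array
mapfile -t array < <(python -c '
import json, sys
for item in json.load(sys.stdin):
    print(item["name"])
' < data.json)
echo "${array[@]}"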

Parsing JSON from shell script using JSON.sh

I'm working on parsing JSON data using JSON.sh, and I want to read data from a JSON file (test.json) whose content is something like:
{
  "/home/ukrishnan/projects/test.yml": {
    "LOG_DRIVER": "syslog",
    "IMAGE": "mysql:5.6"
  },
  "/home/ukrishnan/projects/mysql/app.xml": {
    "ENV_ACCOUNT_BRIDGE_ENDPOINT": "/u01/src/test/sample.txt"
  }
}
I try to parse this JSON using JSON.sh with:
test_parser=`sh ./lib/JSON.sh < test/test.json`
echo $test_parser
It prints,
["/home/ukrishnan/projects/test.yml","LOG_DRIVER"] "syslog" ["/home/ukrishnan/projects/test.yml","IMAGE"] "mysql:5.6" ["/home/ukrishnan/projects/test.yml"] {"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"} ["/home/ukrishnan/projects/mysql/app.xml","ENV_ACCOUNT_BRIDGE_ENDPOINT"] "/u01/src/test/sample.txt" ["/home/ukrishnan/projects/mysql/app.xml"] {"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"} [] {"/home/ukrishnan/projects/test.yml":{"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"},"/home/ukrishnan/projects/mysql/app.xml":{"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"}}
Whereas if I run the same command (sh ./lib/JSON.sh < test/test.json) directly in the terminal, it prints with line breaks:
["/home/ukrishnan/projects/test.yml","LOG_DRIVER"] "syslog"
["/home/ukrishnan/projects/test.yml","IMAGE"] "mysql:5.6"
["/home/ukrishnan/projects/test.yml"] {"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"}
["/home/ukrishnan/projects/mysql/app.xml","ENV_ACCOUNT_BRIDGE_ENDPOINT"] "/u01/src/test/sample.txt"
["/home/ukrishnan/projects/mysql/app.xml"] {"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"}
[] {"/home/ukrishnan/projects/test.yml":{"LOG_DRIVER":"syslog","IMAGE":"mysql:5.6"},"/home/ukrishnan/projects/mysql/app.xml":{"ENV_ACCOUNT_BRIDGE_ENDPOINT":"/u01/src/test/sample.txt"}}
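(The line breaks are actually present in both cases: echo $test_parser without quotes lets the shell split the value and rejoin it with spaces, while echo "$test_parser" would print the same line breaks you see in the terminal.)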
I wanted to read this and assign to bash variables like,
file_name='/home/ukrishnan/projects/test.yml'
key='LOG_DRIVER'
value='syslog'
As I'm almost completely new to shell script and grep or awk, I don't have much idea of how to achieve this. Any help on this would be greatly appreciated.
I wrote a JSON serializer / deserializer for gawk, if you're interested. Save that script and modify it, replacing everything above # === FUNCTIONS === with the following:
#!/usr/bin/gawk -f

# capture JSON string from beginning to end into a scalar variable
{ json = json ORS $0 }

END {
  # objectify JSON string to the multilevel array "obj"
  deserialize(json, obj)

  for (filename in obj) {
    print "file_name=" quote(filename)
    for (key in obj[filename]) {
      # print key="value"
      print key "=" quote(obj[filename][key])
    }
  }
}
Do chmod 755 json.awk and execute it. Output will resemble this:
$ ./json.awk test5.json
file_name="/home/ukrishnan/projects/mysql/app.xml"
ENV_ACCOUNT_BRIDGE_ENDPOINT="/u01/src/test/sample.txt"
file_name="/home/ukrishnan/projects/test.yml"
LOG_DRIVER="syslog"
IMAGE="mysql:5.6"
Hopefully the logic is reasonably easy to follow. If you prefer to output filename=, key=, and value= on every loop iteration, modify the nested for loops accordingly:
for (filename in obj) {
  for (key in obj[filename]) {
    print "file_name=" quote(filename)
    print "key=" quote(key)
    print "value=" quote(obj[filename][key])
  }
}
That change will result in the following output:
$ ./json.awk test5.json
file_name="/home/ukrishnan/projects/mysql/app.xml"
key="ENV_ACCOUNT_BRIDGE_ENDPOINT"
value="/u01/src/test/sample.txt"
file_name="/home/ukrishnan/projects/test.yml"
key="LOG_DRIVER"
value="syslog"
file_name="/home/ukrishnan/projects/test.yml"
key="IMAGE"
value="mysql:5.6"
Anyway, with that output, you can do something silly in BASH like this to populate and act upon the variables:
#!/bin/bash
./json.awk test5.json | while read -r line; do {
  eval $line
  [ "${line/=*/}" = "value" ] && {
    echo "bash: file_name=$file_name"
    echo "bash: key=$key"
    echo "bash: value=$value"
    echo "------"
  }
}; done
It'd probably be more graceful just to do all processing within gawk from start to finish and not mess with the polyglot handoff, though.
Getting back to json.awk, if you prefer to keep json.awk modular for easy reuse in future projects, you could remove everything above # === FUNCTIONS ===, create a separate main.awk containing the code block at the top of this answer, and @include "json.awk" as a helper library pretty much anywhere outside of END {...} (just below the shebang, for example).
JSON.sh (from http://json.org) offers a nice bash-friendly means of flattening out a JSON file; you've already shown in your question what that looks like. The flattened form is the format:
[node] tab value
You have to think in UNIX scripting terms when extracting the information you want; you'll note the lines you're interested in actually follow this pattern:
["filename","key"] tab ["value"]
In regex notation, we replace:
filename with (.*)
key with (.*)
tab with \t
value with (.*)
We can retrieve the first, second and third matching groups with \1, \2, \3 respectively.
When used in sed we also note that these symbols []() need to be escaped with a backslash \, resulting in the following script:
./lib/JSON.sh < test/test.json | sed 's/\["\(.*\)","\(.*\)\"]\t"\(.*\)"/\1,\2,\3/;t;d'
/home/ukrishnan/projects/test.yml,LOG_DRIVER,syslog
/home/ukrishnan/projects/test.yml,IMAGE,mysql:5.6
/home/ukrishnan/projects/mysql/app.xml,ENV_ACCOUNT_BRIDGE_ENDPOINT,/u01/src/test/sample.txt
Now we put the lines in a loop and for each line, we can extract out filename,key,value:
for line in $(./lib/JSON.sh < test/test.json | sed 's/\["\(.*\)","\(.*\)\"]\t"\(.*\)"/\1,\2,\3/;t;d')
do
  IFS="," read -ra arr <<< "$line"
  filename=${arr[0]}
  key=${arr[1]}
  value=${arr[2]}
  cat <<EOF
filename : $filename
key : $key
value : $value
EOF
done
Which outputs:
filename : /home/ukrishnan/projects/test.yml
key : LOG_DRIVER
value : syslog
filename : /home/ukrishnan/projects/test.yml
key : IMAGE
value : mysql:5.6
filename : /home/ukrishnan/projects/mysql/app.xml
key : ENV_ACCOUNT_BRIDGE_ENDPOINT
value : /u01/src/test/sample.txt
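One caveat with the loop above: for line in $(...) splits on whitespace, so a value containing a space would be torn across iterations. A sketch of a while-read variant that avoids this:

./lib/JSON.sh < test/test.json \
| sed 's/\["\(.*\)","\(.*\)\"]\t"\(.*\)"/\1,\2,\3/;t;d' \
| while IFS=, read -r filename key value; do
  printf 'filename : %s\nkey : %s\nvalue : %s\n' "$filename" "$key" "$value"
done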

Changing values in a JSON data file from shell

I have created a JSON file which in this case contains:
{"ipaddr":"10.1.1.2","hostname":"host2","role":"http","status":"active"},
{"ipaddr":"10.1.1.3","hostname":"host3","role":"sql","status":"active"},
{"ipaddr":"10.1.1.4","hostname":"host4","role":"quad","status":"active"},
On the other side I have a variable with values, for example:
arr="10.1.1.2 10.1.1.3"
which comes, for example, from a subsequent check of the server status. For those values I want to change the status field to "inactive"; in other words, to grep the host and change its "status" value.
Expected output:
{"ipaddr":"10.1.1.2","hostname":"host2","role":"http","status":"inactive"},
{"ipaddr":"10.1.1.3","hostname":"host3","role":"sql","status":"inactive"},
{"ipaddr":"10.1.1.4","hostname":"host4","role":"quad","status":"active"},
$ arr="10.1.1.2 10.1.1.3"
$ awk -v arr="$arr" -F, 'BEGIN { gsub(/\./,"\\.",arr); gsub(/ /,"|",arr) }
$1 ~ "\"(" arr ")\"" { sub(/active/,"in&") } 1' file
{"ipaddr":"10.1.1.2","hostname":"host2","role":"http","status":"inactive"},
{"ipaddr":"10.1.1.3","hostname":"host3","role":"sql","status":"inactive"},
{"ipaddr":"10.1.1.4","hostname":"host4","role":"quad","status":"active"},
Here is a quick perl "wrap-around one-liner" that uses the JSON module and slurps with the -0 switch:
perl -MJSON -n0E '$j = decode_json($_);
for (@{$j->{hosts}}){$_->{status}="inactive" if $_->{ipaddr}=~/2|3/} ;
say to_json( $j->{hosts}, {pretty=>1} )' status_data.json
This might be nicer, or might violate PBP recommendations for map:
perl -MJSON -n0E '$j = decode_json($_);
map { $_->{status}="inactive" if $_->{ipaddr}=~/2|3/ } @{ $j->{hosts} } ;
say to_json( $j->{hosts} )' status_data.json
A shell script that resets status using jq would also be possible. Here's a quick way to parse and output changes to JSON using jq:
cat status_data.json | jq -r '.hosts | .[] |
select(.ipaddr == "10.1.1.2" // .ipaddr == "10.1.1.3")' | jq '.status = "inactive"'
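Since the data is wrapped in a hosts array (see status_data.json further down), a one-pass variant that keeps the rest of the document intact is also possible; a sketch, assuming jq 1.5+ and the space-separated list from the question:

arr="10.1.1.2 10.1.1.3"
jq --arg ips "$arr" '
($ips | split(" ")) as $down
| .hosts |= map(.ipaddr as $ip
    | if ($down | index($ip)) then .status = "inactive" else . end)
' status_data.json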
EDIT: In an earlier comment I was uncertain whether the OP was more interested in an application than in a quick search and replace (something about the phrases "On other side..." and "check of the server status"). Here is a (still simple) perl approach in script form:
use v5.16; # strict, warnings, say
use JSON;
use IO::All;

my $status_data < io 'status_data.json';
my $network = JSON->new->utf8->decode($status_data);
my @changed_hosts = qw/10.1.1.2 10.1.1.3/;

sub status_report {
  foreach my $host ( @{ $network->{hosts} } ) {
    say "$host->{hostname} is $host->{status}";
  }
}

sub change_status {
  foreach my $host ( @{ $network->{hosts} } ) {
    foreach (@changed_hosts) {
      $host->{status} = "inactive" if $host->{ipaddr} eq $_;
    }
  }
  status_report;
}

defined $ENV{CHANGE_HAPPENED} ? change_status : status_report;
The script reads the JSON file status_data.json (using IO::All, which is great fun), then decodes it with JSON into a hash. It is hard to tell if this is a complete solution, because if you are "monitoring" host status then we should check the JSON data file periodically, compare it to our hash, and run the main body of the script only when changes have occurred.
To simulate changes occurring you can define/undefine CHANGE_HAPPENED in your environment with export CHANGE_HAPPENED=1 (or setenv if in tcsh) and unset CHANGE_HAPPENED, and the script will then either update the messages and the hash or just "report". For this to be complete, the data in our hash should be updated to match the data file, either periodically or when an event occurs. The status_report() subroutine could be changed so that it builds arrays of @inactive_hosts and @active_hosts when update_status() told it to do so: if ( something_happened() ) { update_status() }, etc.
Hope that helps.
status_data.json
{
  "hosts": [
    {"ipaddr":"10.1.1.2","hostname":"host2","role":"http","status":"active"},
    {"ipaddr":"10.1.1.3","hostname":"host3","role":"sql","status":"active"},
    {"ipaddr":"10.1.1.4","hostname":"host4","role":"quad","status":"active"}
  ]
}
output:
~/ % perl network_status_json.pl
host2 is active
host3 is active
host4 is active
~/ % export CHANGE_HAPPENED=1
~/ % perl network_status_json.pl
host2 is inactive
host3 is inactive
host4 is active
Version 1:
Using a simple regex-based transformation. This can be done in several ways. From the initial question, the list of ipaddr values is in the variable arr. Example using a Bash env variable:
$ export arr="... ..."
It would also be possible to provide this information via command-line parameters.
#!/usr/bin/perl
my %inact; # ipaddr to inactivate
my $arr = $ENV{arr}; # from external var (export arr=...)
## $arr = shift; # from command line arg
for ( split(/\s+/, $arr) ) { $inact{$_} = 1 }
while (<>) { # one "json" line at a time
  if (/"ipaddr":"(.*?)"/ and $inact{$1}) {
    s/"active"/"inactive"/
  }
  print $_;
}
Version 2:
Using a JSON parser we can do more complex transformations; as the input is not real JSON, we will process one line of "almost JSON" at a time:
use JSON;
use strict;

my ($line, %inact);
my $arr = $ENV{arr};
for ( split(/\s+/, $arr) ) { $inact{$_} = 1 }

while (<>) { # one "json" line at a time
  if (/^\{.*\},/) {
    s/,\n//;
    $line = from_json($_);
    if ($inact{$line->{ipaddr}}) {
      $line->{status} = "inactive";
    }
    print to_json($line), ",\n";
  } else {
    print $_;
  }
}
#!/bin/ksh
# your "array" of IP
arr="10.1.1.2 10.1.1.3"

# create and prepare temporary file for sed action
SedAction=/tmp/Action.sed

# --- for/do generating SedAction --------
echo "#sed action" > ${SedAction}
# take each IP from the arr variable one by one
for IP in ${arr}
do
  # prepare the IP for use as a search pattern
  IP_RE="$( echo "${IP}" | sed 's/\./\\./g' )"
  # generate sed action in temporary file.
  # final action will be like:
  # s/\("ipaddr":"10\.1\.1\.2".*\)"active"}/\1"inactive"}/;t
  # escape (double) \ for the in-file escape, escape (simple) " for this line's interpretation
  echo "s/\\\(\"ipaddr\":\"${IP_RE}\".*\\\)\"active\"}/\\\1\"inactive\"}/;t" >> ${SedAction}
done

# --- sed generating sed action ---------------
echo "${arr}" \
| tr " " "\n" \
| sed 's/\./\\./g
s#.*#s/\\("ipaddr":"&".*\\)"active"}/\\1"inactive"}/;t#
' \
> ${SedAction}

# core of the process (use -i for inline editing or "double" redirection for non-GNU sed)
sed -f ${SedAction} YourFile

# clean temporary file
rm ${SedAction}
Self-commented; tested in ksh on AIX.
There are two ways shown above to generate the SedAction file, depending on what you want to do; you only need one of them to work. I prefer the second.
This is very simple indeed in Perl, using the JSON module.
use strict;
use warnings;
use JSON qw/ from_json to_json /;

my $json = JSON->new;
my $data = from_json(do { local $/; <DATA> });

my $arr = "10.1.1.2 10.1.1.3";
my %arr = map { $_ => 1 } split ' ', $arr;

for my $item (@$data) {
  $item->{status} = 'inactive' if $arr{$item->{ipaddr}};
}

print to_json($data, { pretty => 1 }), "\n";

__DATA__
[
  {"ipaddr":"10.1.1.2","hostname":"host2","role":"http","status":"active"},
  {"ipaddr":"10.1.1.3","hostname":"host3","role":"sql","status":"active"},
  {"ipaddr":"10.1.1.4","hostname":"host4","role":"quad","status":"active"}
]
output
[
  {
    "role" : "http",
    "hostname" : "host2",
    "status" : "inactive",
    "ipaddr" : "10.1.1.2"
  },
  {
    "hostname" : "host3",
    "role" : "sql",
    "ipaddr" : "10.1.1.3",
    "status" : "inactive"
  },
  {
    "ipaddr" : "10.1.1.4",
    "status" : "active",
    "hostname" : "host4",
    "role" : "quad"
  }
]