CSV to JSON using BASH

I am trying to convert the below CSV into JSON format.
Africa,Kenya,NAI,281
Africa,Kenya,NAI,281
Asia,India,NSI,100
Asia,India,BSE,160
Asia,Pakistan,ISE,100
Asia,Pakistan,ANO,100
European Union,United Kingdom,LSE,100
This is the desired JSON format, and I just cannot manage to create it. I will post my work in progress below. Any help or direction would be appreciated.
{"name":"Africa",
"children":[
{"name":"Kenya",
"children":[
{"name":"NAI","size":"109"},
{"name":"NAA","size":"160"}]}]},
{"name":"Asia",
"children":[
{"name":"India",
"children":[
{"name":"NSI","size":"100"},
{"name":"BSE","size":"60"}]},
{"name":"Pakistan",
"children":[
{"name":"ISE","size":"120"},
{"name":"ANO","size":"433"}]}]},
{"name":"European Union",
"children":[
{"name":"United Kingdom",
"children":[
{"name":"LSE","size":"550"},
{"name":"PLU","size":"123"}]}]}
Work in Progress.
$1 is the file with the csv values pasted above.
#!/bin/bash
pcountry=$(head -1 $1 | cut -d, -f2)
cat $1 | while read line ; do
    region=$(echo $line|cut -d, -f1)
    country=$(echo $line|cut -d, -f2)
    code=$(echo $line|cut -d, -f3-)
    size=$(echo $line|cut -d, -f4)
    if test "$pcountry" == "$country" ; then
        echo -e {\"name\":\"$region\", '\n' \"children\": [ '\n'{\"name\":\"$country\",'\n'\"children\": [ '\n' \{\"name\":\"NAI\",\"size\":\"$size\"\}
    else
        if test "$pregion" == "$region" ; then
            :
        else
            echo -e ,'\n'{\"name\":\"$region\", '\n' \"children\": [ '\n'{\"name\":\"$country\",'\n'\"children\": [ '\n' \{\"name\":\"NAI\",\"size\":\"$size\"\},
        fi
        pcountry=$country
        pregion=$region
    fi
done
The problem is that I cannot seem to find a way to detect when a country's values end.

As a number of the commenters have said, using the shell for this kind of conversion is a horrible idea. It would be nigh impossible with just bash builtins, and shell scripts mainly exist to combine standard Unix commands like sed, awk, and cut anyway. You should choose a language built for this kind of iterative parsing and processing to solve your problem.
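To make that point concrete, here is a minimal sketch in Python (one of those better-suited languages). The rows are inlined for illustration, taken from the sample in the question; grouping with nested dictionaries means the "where does a country end" question never comes up:

```python
import csv
import json

def csv_to_tree(lines):
    # region -> country -> list of {"name": code, "size": size}
    tree = {}  # plain dicts preserve insertion order in Python 3.7+
    for region, country, code, size in csv.reader(lines):
        tree.setdefault(region, {}).setdefault(country, []).append(
            {"name": code, "size": size}
        )
    # reshape the nested dicts into the desired name/children form
    return [
        {
            "name": region,
            "children": [
                {"name": country, "children": codes}
                for country, codes in countries.items()
            ],
        }
        for region, countries in tree.items()
    ]

rows = [
    "Africa,Kenya,NAI,281",
    "Africa,Kenya,NAA,281",
    "Asia,India,NSI,100",
]
print(json.dumps(csv_to_tree(rows), indent=1))
```

The trailing-comma problem disappears too, because json.dumps serializes the whole structure at once.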
However, because it's late and I've had too much coffee, I threw together a bash script (with a few bits of sed thrown in for parsing help) that takes the example .csv data you have and outputs the JSON in the format you noted. Here's the script:
#! /bin/bash
# Initial input file format:
#
#   Africa,Kenya,NAI,281
#   Africa,Kenya,NAA,281
#   Asia,India,NSI,100
#   Asia,India,BSE,160
#   Asia,Pakistan,ISE,100
#   Asia,Pakistan,ANO,100
#   European Union,United Kingdom,LSE,100
#
# Intermediate file format for parsing to JSON:
#
#   Africa|Kenya:NAI=281&NAA=281
#   Asia|India:NSI=100&BSE=160|Pakistan:ISE=100&ANO=100
#   European Union|United Kingdom:LSE=100
#
# Call as:
#
#   $ ./script INPUTFILE.csv >OUTPUTFILE.json

# temporary files for output/parsing
TMP="./tmp.dat"
TMP2="./tmp2.dat"
>$TMP
>$TMP2

# read through initial file and output intermediate format
while read line
do
    region=$(echo $line | cut -d, -f1)
    country=$(echo $line | cut -d, -f2)
    code=$(echo $line | cut -d, -f3)
    size=$(echo $line | cut -d, -f4)
    if grep "^$region" $TMP >/dev/null 2>&1; then
        # region record already started
        >$TMP2
        while read rec
        do
            if echo $rec | grep "^$region" >/dev/null 2>&1
            then
                # country already present in this region's record?
                if echo "$rec" | grep "|$country:" >/dev/null 2>&1
                then
                    # append this code=size pair to the existing country
                    echo "$rec" | sed -e 's/\('"$country"':[^|][^|]*\)/\1\&'"$code"'='"$size"'/' >>$TMP2
                else
                    # start a new country within the region
                    echo "$rec|$country:$code=$size" >>$TMP2
                fi
            else
                echo $rec >>$TMP2
            fi
        done < $TMP
        mv $TMP2 $TMP
    else
        # new region
        echo "$region|$country:$code=$size" >>$TMP
    fi
done < $1

# Parse through our intermediate format and output JSON to standard out.
# NOTE: in this half the variable names are shifted one level: "country"
# holds the region name and "region" holds a country record.
echo "["
country_count=$(cat $TMP | wc -l)
while read line
do
    country=$(echo $line | cut -d\| -f1)
    echo "{ \"name\": \"$country\", "
    echo "  \"children\": ["
    region_count=$(echo $line | cut -d\| -f2- | sed -e 's/|/\n/g' | wc -l)
    echo $line | cut -d\| -f2- | sed -e 's/|/\n/g' |
    while read region
    do
        name=$(echo $region | cut -d: -f1)
        echo "  { \"name\": \"$name\", "
        echo "    \"children\": ["
        code_count=$(echo $region | sed -e 's/^'"$name"'://' -e 's/&/\n/g' | wc -l)
        echo $region | sed -e 's/^'"$name"'://' -e 's/&/\n/g' |
        while read code_size
        do
            code=$(echo $code_size | cut -d= -f1)
            size=$(echo $code_size | cut -d= -f2)
            code_count=$((code_count - 1))
            COMMA=""
            if [ $code_count -gt 0 ]; then
                COMMA=","
            fi
            echo "      { \"name\": \"$code\", \"size\": \"$size\" }$COMMA "
        done
        echo "    ]"
        region_count=$((region_count - 1))
        if [ $region_count -gt 0 ]; then
            echo "  },"
        else
            echo "  }"
        fi
    done
    echo "  ]"
    country_count=$((country_count - 1))
    COMMA=""
    if [ $country_count -gt 0 ]; then
        COMMA=","
    fi
    echo "}$COMMA"
done < $TMP
echo "]"
exit 0
And, here's the resulting output from the above script:
[
{ "name": "Africa",
"children": [
{ "name": "Kenya",
"children": [
{ "name": "NAI", "size": "281" },
{ "name": "NAA", "size": "281" }
]
}
]
},
{ "name": "Asia",
"children": [
{ "name": "India",
"children": [
{ "name": "NSI", "size": "100" },
{ "name": "BSE", "size": "160" }
]
},
{ "name": "Pakistan",
"children": [
{ "name": "ISE", "size": "100" },
{ "name": "ANO", "size": "100" }
]
}
]
},
{ "name": "European Union",
"children": [
{ "name": "United Kingdom",
"children": [
{ "name": "LSE", "size": "100" }
]
}
]
}
]
Please don't use code like the above in any production environment.

Here is a solution using jq.
If filter.jq contains the following filter
reduce (
split("\n")[] # split string into lines
| split(",") # split data
| select(length>0) # eliminate blanks
) as [$c1,$c2,$c3,$c4] ( # convert to object
{} # e.g. "Africa": { "Kenya": {
; setpath([$c1,$c2,"name"];$c3) # "name": "NAI",
| setpath([$c1,$c2,"size"];$c4) # "size": "281"
) # }, }
| [ # then build final array of objects format:
keys[] as $k1 # [ {
| {name: $k1, children: ( # "name": "Africa",
.[$k1] # "children": {
| keys[] as $k2 # "name": "Kenya",
| {name: $k2, children:.[$k2]} # "children": { "name": "NAI", "size": "281" }
)} # ...
]
and data contains the sample data, then the command
$ jq -M -Rsr -f filter.jq data
produces
[
{
"name": "Africa",
"children": {
"name": "Kenya",
"children": {
"name": "NAI",
"size": "281"
}
}
},
{
"name": "Asia",
"children": {
"name": "India",
"children": {
"name": "BSE",
"size": "160"
}
}
},
{
"name": "Asia",
"children": {
"name": "Pakistan",
"children": {
"name": "ANO",
"size": "100"
}
}
},
{
"name": "European Union",
"children": {
"name": "United Kingdom",
"children": {
"name": "LSE",
"size": "100"
}
}
}
]

You'd be much better off using a tool like xidel that can manipulate csv / raw text and understands JSON:
I'm going to assume so_24300508.csv :
Africa,Kenya,NAI,109
Africa,Kenya,NAA,160
Asia,India,NSI,100
Asia,India,BSE,60
Asia,Pakistan,ISE,120
Asia,Pakistan,ANO,433
European Union,United Kingdom,LSE,550
European Union,United Kingdom,PLU,123
(this is extracted from your JSON sample instead of the CSV sample you provided)
xidel -s so_24300508.csv --json-mode=deprecated --xquery '
  [
    let $csv:=x:lines($raw)
    for $region in distinct-values($csv ! tokenize(.,",")[1])
    return {
      "name":$region,
      "children":[
        for $country in distinct-values($csv[starts-with(.,$region)] ! tokenize(.,",")[2]) return {
          "name":$country,
          "children":for $data in $csv[starts-with(.,$region) and contains(.,$country)]
            let $value:=tokenize($data,",")
            return {
              "name":$value[3],
              "size":$value[4]
            }
        }
      ]
    }
  ]
'
(without --json-mode=deprecated, replace [ ] with array{ })
See this code snippet for intermediate steps leading to this query.
Also see this online xidelcgi demo.
Output:
[
{
"name": "Africa",
"children": [
{
"name": "Kenya",
"children": [
{
"name": "NAI",
"size": "109"
},
{
"name": "NAA",
"size": "160"
}
]
}
]
},
{
"name": "Asia",
"children": [
{
"name": "India",
"children": [
{
"name": "NSI",
"size": "100"
},
{
"name": "BSE",
"size": "60"
}
]
},
{
"name": "Pakistan",
"children": [
{
"name": "ISE",
"size": "120"
},
{
"name": "ANO",
"size": "433"
}
]
}
]
},
{
"name": "European Union",
"children": [
{
"name": "United Kingdom",
"children": [
{
"name": "LSE",
"size": "550"
},
{
"name": "PLU",
"size": "123"
}
]
}
]
}
]

Related

How do I convert my txt file from samba-tool to json file in bash script

At the beginning I would like to say that I am just learning to write scripts. I have a test domain "universum.local" in VBox, set up on Ubuntu 22.04 with Samba AD DC. I would like to query the domain controller for a list of domain users (10) with a bash script and save data about them to a JSON file. At the moment I was able to get the necessary information and save it to a txt file.
Here is my scripts code:
#!/bin/bash
clear
ldapuserslistfilename="ldapuserslist.txt"
ldapuserslistfile="$tmp/$ldapuserslistfilename"
ldapusersinfofilename="ldapusersinfo.txt"
ldapusersinfofile="$tmp/$ldapusersinfofilename"

# main code
touch $ldapuserslistfile
touch $ldapusersinfofile
samba-tool user list > $ldapuserslistfile
while read -r line ; do
    for user in $line ; do
        samba-tool user show $user >> $ldapusersinfofile
    done
done < $ldapuserslistfile

# copying txt files for tests
cp $ldapuserslistfile /mnt
cp $ldapusersinfofile /mnt

# deleting files
if [ -f $ldapuserslistfile ] ; then rm -f $ldapuserslistfile ; fi
if [ -f $ldapusersinfofile ] ; then rm -f $ldapusersinfofile ; fi
There is output, all users are saved in the txt file in the form below:
dn: CN=Bruce Banner,OU=Users,OU=MARVEL,OU=UNIVERSUM,DC=universum,DC=local
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Bruce Banner
sn: Banner
givenName: Bruce
instanceType: 4
whenCreated: 20220926075536.0Z
whenChanged: 20220926075536.0Z
displayName: Bruce Banner
uSNCreated: 4128
name: Bruce Banner
objectGUID: d1fb86d4-17bc-43f2-af83-ca06fa733e9e
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 0
lastLogoff: 0
lastLogon: 0
primaryGroupID: 513
objectSid: S-1-5-21-2846706046-4262971904-2743650290-1109
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: hulk
sAMAccountType: 805306368
userPrincipalName: hulk@universum.local
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=universum,DC=local
pwdLastSet: 0
userAccountControl: 512
uSNChanged: 4132
memberOf: CN=Avengers,OU=Groups,OU=MARVEL,OU=UNIVERSUM,DC=universum,DC=local
distinguishedName: CN=Bruce Banner,OU=Users,OU=MARVEL,OU=UNIVERSUM,DC=universum,DC=local
I would like to have this data in json format like
{
"users": [
{
"cn" : "Bruce Banner",
"sn" : "Banner",
"givenName" : "Bruce",
"whenCreated" : "20220926075536.0Z",
"<objectname>" : "<value>",
"<objectname>" : "<value>",
},
{
<next user info>
},
{
<next user info>
}
]
}
objectname is the next user attribute, like lastLogon, lastLogoff, etc. I would like to save all users in the JSON file so that I can read them with another PowerShell script on my computer.
UPDATE:
I added the lines below
# conversion fron txt to json
jsonfilename="jsontestfile.json"
json="./$jsonfilename"
touch $json
ed -s $ldapusersinfofile << 'EOF' > $json
v/^cn:\|^sn:\|^givenName:\|^displayName:\|^name:\|^whenCreated:/d
,s/^\(.*[^:]*\): \(.*\)/"\1": "\2"/
g/cn\|sn\|givenName\|displayName\|name\|whenCreated/s/$/,/
,s/^/ /
g/lastLogon/t. \
s/.*/},/g
1,$-1g/}/t. \
s/.*/{/g
0a
{
.
$s/,//
,p
Q
EOF
between the # main code section and the # copying txt files for tests section, and I get output to the json file like
{
"cn": "James Rhodes",
"sn": "Rhodes",
"givenName": "James",
"whenCreated": "20220926075852.0Z",
"displayName": "James Rhodes",
"name": "James Rhodes",
"lastLogon": "0"
},
{
"cn": "T'Chala",
"givenName": "T'Chala",
"whenCreated": "20220926081521.0Z",
"displayName": "T'Chala",
"name": "T'Chala",
"lastLogon": "0"
},
{
"cn": "Stephen Strange",
"sn": "Strange",
"givenName": "Stephen",
"whenCreated": "20220926080942.0Z",
"displayName": "Stephen Strange",
"name": "Stephen Strange",
"lastLogon": "0"
}
For my PowerShell script to be able to read the json file, the following is missing:
{
"users": [
at the beginning of the data, and
]
}
at the end of the data, to have a file like
{
"users": [
{
"cn": "James Rhodes",
"sn": "Rhodes",
"givenName": "James",
"whenCreated": "20220926075852.0Z",
"displayName": "James Rhodes",
"name": "James Rhodes",
"lastLogon": "0"
},
{
"cn": "T'Chala",
"givenName": "T'Chala",
"whenCreated": "20220926081521.0Z",
"displayName": "T'Chala",
"name": "T'Chala",
"lastLogon": "0"
},
{
"cn": "Stephen Strange",
"sn": "Strange",
"givenName": "Stephen",
"whenCreated": "20220926080942.0Z",
"displayName": "Stephen Strange",
"name": "Stephen Strange",
"lastLogon": "0"
}
]
}
which is read by this PS script:
Clear
$json = Get-Content <pathToFile>\jsontestfile.json -Raw | ConvertFrom-Json
foreach ($user in $json.users){
echo $user.cn
echo $user.sn
echo $user.givenName
echo "----------"
}
how to add missing characters?
You need 'jq' and then parse the output of samba-tool, like this:
sudo samba-tool user show $USERNAME -H ldap://<DC_SHORT_HOSTNAME> -d0 | jq -R -s -c 'split("\n")'
You can easily use a 'for loop' around a list of usernames.
I changed my code to this:
# add prefix
printf '{\n"users" :\n[\n' > $json
# add converted users data
ed -s $data << 'EOF' >> $json
v/^cn:\|^sn:\|^givenName:\|^whenCreated:\|^displayName:\|^name:\|^badPwdCount:\|^badPasswordTime:\|^lastLogoff:\|^primaryGroupID:\|^accountExpires:\|^sAMAccountName:\|^userPrincipalName:\|^pwdLastSet:\|^userAccountControl:\|^lastLogonTimestamp:\|^whenChanged:\|^lastLogon:\|^logonCount:\|^distinguishedName:/d
,s/^\(.*[^:]*\): \(.*\)/"\1" : "\2"/
g/cn\|sn\|givenName\|whenCreated\|displayName\|name\|badPwdCount\|badPasswordTime\|lastLogoff\|primaryGroupID\|accountExpires\|sAMAccountName\|userPrincipalName\|pwdLastSet\|userAccountControl\|lastLogonTimestamp\|whenChanged\|lastLogon\|logonCount/s/$/,/
,s/^/ /
g/distinguishedName/t. \
s/.*/},/g
1,$-1g/}/t. \
s/.*/{/g
0a
{
.
$s/,//
,p
Q
EOF
# add suffix
printf ']\n}' >> $json
It now works and Powershell can read my file.
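For comparison, the same txt-to-JSON conversion can also be done without ed. The following is a hedged Python sketch, not the script above: the attribute allow-list and the inline sample record are illustrative, and it assumes each user record starts with a dn: line, as in the samba-tool output:

```python
import json

# illustrative subset of the attributes kept by the ed script
KEEP = {"cn", "sn", "givenName", "whenCreated", "displayName", "name", "lastLogon"}

def records_to_users(text):
    users, current = [], None
    for line in text.splitlines():
        if line.startswith("dn:"):          # a new user record starts here
            if current:
                users.append(current)
            current = {}
        elif current is not None and ": " in line:
            key, _, value = line.partition(": ")
            if key in KEEP:
                current[key] = value
    if current:
        users.append(current)
    return {"users": users}

# trimmed-down stand-in for ldapusersinfo.txt
sample = """dn: CN=Bruce Banner,OU=Users
cn: Bruce Banner
sn: Banner
givenName: Bruce
lastLogon: 0
dn: CN=Tony Stark,OU=Users
cn: Tony Stark
givenName: Tony
lastLogon: 0"""
print(json.dumps(records_to_users(sample), indent=2))
```

Because the whole structure is serialized in one json.dumps call, the prefix, suffix, and comma handling all come for free.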

Columnar CSV Output from nested JSON

[
{
"name": "Metadata:MER-2.0-ver AGYW_PREV-Results (Semi Annual)",
"id": "XOPEXepA7zg",
"categoryOptions.name": [
"0 -2 month",
">2months-<1 year",
"< 1 year",
"(1 - 4) Years",
"(1-9) Years"
],
"categoryOptions.id": [
"wfvXckoyaE9",
"Yi2K2FUDa3B",
"kKt6hryCX75",
"A0B8w6HoZvV",
"upbvx1IvICR"
]
},
{
"name": "Metadata:MER-2.0-ver KP-Results (Semi Annual)",
"id": "k9p3Ghbi6eW",
"categoryOptions.name": [
"Sex Workers",
"People in prisons and other enclosed settings (Incarcerated Population) ",
"PWID..",
"MSM",
"Transgender"
],
"categoryOptions.id": [
"mwTwhESK21T",
"eQjIwsDqbPy",
"zYaPQA3uTiH",
"vu0dG7psM5W",
"Jyo9XWumVtZ"
]
},
{
"name": "Metadata:MER-2.0-ver PP-Results (Semi Annual)",
"id": "rkExsSSc3yI",
"categoryOptions.name": [
"Adolescents (10-24)",
"Clients of Sex Workers",
"Displaced Persons",
"Fishing communities",
"Military and other Uniform Services"
],
"categoryOptions.id": [
"yWwp6xnt0pw",
"jlKwW6DC023",
"wF42hb47Z7J",
"qkIUghy30Vl",
"Vcuw6LkdAkk"
]
},
{
"name": "Metadata:MER-2.0-ver PREP_CURR-and-TX_ML (Semi Annual)",
"id": "ZYdO3FqQgo1",
"categoryOptions.name": [
"Adolescents (10-24)",
"Clients of Sex Workers",
"Displaced Persons",
"Fishing communities",
"Military and other Uniform Services"
],
"categoryOptions.id": [
"yWwp6xnt0pw",
"jlKwW6DC023",
"wF42hb47Z7J",
"qkIUghy30Vl",
"Vcuw6LkdAkk"
]
},
{
"name": "Metadata:MER-2.0-ver SupplyChain-Results (Semi Annual)",
"id": "Cub0DEVWs3P",
"categoryOptions.name": [
"TLD 30-count bottles",
"TLD 90-count bottles",
"TLD 180-count bottles",
"TLE/400 30-count bottles",
"TLE/400 90-count bottles"
],
"categoryOptions.id": [
"dtmTsLvH2dk",
"sOLj1z1XRxh",
"SnkZTF4kThV",
"sNnXSKiPvb5",
"t3iPChPFIcd"
]
}
]
Expected Output should be in csv format as below:
key,name,id,"categoryOptions.name","categoryOptions.id"
0,Metadata:MER-2.0-ver AGYW_PREV-Results (Semi Annual),XOPEXepA7zg,0 -2 month,wfvXckoyaE9
0,Metadata:MER-2.0-ver AGYW_PREV-Results (Semi Annual),XOPEXepA7zg,>2months-<1 year,Yi2K2FUDa3B
1,Metadata:MER-2.0-ver KP-Results (Semi Annual),k9p3Ghbi6eW,Sex Workers,mwTwhESK21T
1,Metadata:MER-2.0-ver KP-Results (Semi Annual),k9p3Ghbi6eW,People in prisons and other enclosed settings (Incarcerated Population),eQjIwsDqbPy
2,Metadata:MER-2.0-ver PP-Results (Semi Annual),rkExsSSc3yI,Adolescents (10-24),yWwp6xnt0pw
2,Metadata:MER-2.0-ver PP-Results (Semi Annual),rkExsSSc3yI,Clients of Sex Workers,jlKwW6DC023
and so on, up to key 4.
The above input JSON came from:
cat /home/fred/Downloads/metadata/multiple-dataset-metadata.json \
| jq '[.dataSets[]
       | {name: .name,
          id: .id,
          "categoryOptions.name": [.dataSetElements[].dataElement.categoryCombo.categories[].categoryOptions[].name],
          "categoryOptions.id": [.dataSetElements[].dataElement.categoryCombo.categories[].categoryOptions[].id]}]'
Here is one solution to the problem as I understand it:
range(0;length) as $i
| .[$i]
| [$i, .name, .id] +
  ( range(0; .["categoryOptions.name"]|length) as $j
    | [ .["categoryOptions.name"][$j], .["categoryOptions.id"][$j] ] )
| @csv
This produces everything except the header row, the production of which is left as an exercise.
Invocation
... would be along the lines of:
jq -r -f program.jq input.json
To add onto @peak's solution:
The final invocation ( with CSV header) may look like this:
jq -r -f program.jq input.json > output.csv && sed -i '1i "key","name","id","categoryOptions.name","categoryOptions.id"' output.csv
The sed solution is picked from here
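If Python is an option, the same flattening can be done in a few lines; this sketch assumes the structure shown above (parallel categoryOptions.name / categoryOptions.id lists) and inlines a trimmed-down sample:

```python
import csv
import io
import json

def to_rows(datasets):
    # one row per (name, id) pair in the parallel categoryOptions lists
    for key, entry in enumerate(datasets):
        for opt_name, opt_id in zip(entry["categoryOptions.name"],
                                    entry["categoryOptions.id"]):
            yield [key, entry["name"], entry["id"], opt_name, opt_id]

# trimmed-down stand-in for the jq output shown above
datasets = json.loads("""[
  {"name": "DS-A", "id": "XOP",
   "categoryOptions.name": ["0 -2 month", ">2months-<1 year"],
   "categoryOptions.id": ["wfv", "Yi2"]}
]""")

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["key", "name", "id", "categoryOptions.name", "categoryOptions.id"])
writer.writerows(to_rows(datasets))
print(buf.getvalue())
```

Using the csv module also takes care of quoting fields that contain commas, which the names in the real data do.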

Convert a txt file into JSON

I need to convert a simple txt list into a specific json format.
My list looks like this :
server1
server2
server3
server4
I need to have a JSON output that would look like this :
{ "data": [
{ "{SERVER}":"server1" },
{ "{SERVER}":"server2" },
{ "{SERVER}":"server3" },
{ "{SERVER}":"server4" }
]}
I was able to generate this with a bash script, but I don't know how to avoid the comma on the last line. The list is dynamic and can contain a different number of servers every time the script is run.
Any tip please ?
EDIT : my current code :
echo "{ \"data\": [" > /tmp/json_output
for srv in `cat /tmp/list`; do
echo "{ \"{SERVER}\":\"$srv\" }," >> /tmp/json_output
done
echo "]}" >> /tmp/json_output
I'm very new at this, sorry if I sound noobish.
This is very easy for Xidel:
xidel -s input.txt -e '{"data":[x:lines($raw) ! {"{SERVER}":.}]}'
{
"data": [
{
"{SERVER}": "server1"
},
{
"{SERVER}": "server2"
},
{
"{SERVER}": "server3"
},
{
"{SERVER}": "server4"
}
]
}
x:lines($raw) is a shorthand for tokenize($raw,'\r\n?|\n'). It splits the raw input into a sequence of lines.
In human terms you can read x:lines($raw) ! {"{SERVER}":.} as "create a JSON object for every line".
See also this xidelcgi demo.
I would use jq for this.
$ jq -nR 'reduce inputs as $i ([]; .+[$i]) | map ({"{SERVER}": .}) | {data: .}' tmp.txt
{
"data": [
{
"{SERVER}": "server1"
},
{
"{SERVER}": "server2"
},
{
"{SERVER}": "server3"
},
{
"{SERVER}": "server4"
}
]
}
(It seems to me there should be an easier way to produce the array ["server1", "server2", "server3", "server4"] to feed to the map filter, but this is functional.)
Breaking this down piece by piece, we first create an array of your server names.
$ jq -nR 'reduce inputs as $item ([]; .+[$item])' tmp.txt
[
"server1",
"server2",
"server3",
"server4"
]
This array is fed to a map filter that creates an array of objects with the {SERVER} key:
$ jq -nR 'reduce inputs as $item ([]; .+[$item]) | map ({"{SERVER}": .})' tmp.txt
[
{
"{SERVER}": "server1"
},
{
"{SERVER}": "server2"
},
{
"{SERVER}": "server3"
},
{
"{SERVER}": "server4"
}
]
And finally, this is used to create an object that maps the key data to the array of objects, as shown at the top.
As others have said, a tool like jq would be better suited to this.
Alternatively, you could use an array and determine whether the element you are processing is the last one. For example, your code could be:
declare -a servers
servers=($(cat /tmp/list))
pos=$(( ${#servers[*]} - 1 ))
last=${servers[$pos]}
echo "{ \"data\": [" > /tmp/json_output
for srv in "${servers[@]}"; do
    if [[ $srv == $last ]]
    then
        echo "{ \"{SERVER}\":\"$srv\" }" >> /tmp/json_output
    else
        echo "{ \"{SERVER}\":\"$srv\" }," >> /tmp/json_output
    fi
done
echo "]}" >> /tmp/json_output
You can do this using Python 3:
#!/usr/local/bin/python3
import json

d = []
with open('listofservers.txt', 'r', encoding='utf-8') as f:
    for server in f:
        d.append({'{Server}': server.rstrip("\n")})
print(json.dumps({'data': d}, indent=" "))
Which will print :
{
"data": [
{
"{Server}": "server1"
},
{
"{Server}": "server2"
},
{
"{Server}": "server3"
},
{
"{Server}": "server4"
}
]
}

How to merge objects with InstanceId unique in bash shell?

I have two json files as below:
I want to merge the objects in tmp1.json and tmp2.json on the unique InstanceId value, in a bash shell.
I have tried jq with the --argjson option, but my jq 1.4 does not support it. Sorry, I am unable to update jq to 1.5.
#cat tmp1.json
{
"VolumeId": "vol-046e0be08ac95095a",
"Instances": [
{
"InstanceId": "i-020ce1b2ad08fa6bd"
}
]
}
{
"VolumeId": "vol-007253a7d24c1c668",
"Instances": [
{
"InstanceId": "i-0c0650c15b099b993"
}
]
}
#cat tmp2.json
{
"InstanceId": "i-0c0650c15b099b993",
"InstanceName": "Test1"
}
{
"InstanceId": "i-020ce1b2ad08fa6bd",
"InstanceName": "Test"
}
My desired output is:
{
"VolumeId": "vol-046e0be08ac95095a",
"Instances": [
{
"InstanceId": "i-020ce1b2ad08fa6bd",
"InstanceName": "Test"
}
]
}
{
"VolumeId": "vol-007253a7d24c1c668",
"Instances": [
{
"InstanceId": "i-0c0650c15b099b993",
"InstanceName": "Test1"
}
]
}
#!/bin/bash
JQ=jq-1.4

# For ease of understanding, the following is a bit more verbose than
# necessary.
# One way to get around the constraints of using jq 1.4 is
# to use the "slurp" option so that the contents of the two files can
# be kept separate.
# Note that jq 1.6 includes the following def of INDEX, but we can use it with jq 1.4.

($JQ -s . tmp1.json ; $JQ -s . tmp2.json) | $JQ -s '
  def INDEX(stream; idx_expr):
    reduce stream as $row ({};
      .[$row|idx_expr|
         if type != "string" then tojson
         else .
         end] |= $row);

  .[0] as $tmp1
  | .[1] as $tmp2
  | INDEX($tmp2[]; .InstanceId) as $dict
  | $tmp1
  | map( .Instances |= map(.InstanceName = $dict[.InstanceId].InstanceName))
  | .[]
'
Streamlined
INDEX(.[1][]; .InstanceId) as $dict
| .[0][]
| .Instances |= map(.InstanceName = $dict[.InstanceId].InstanceName)
Minify the two json files, then try the following command:
cat tmp2.json|jq -r '"\(.InstanceId) \(.InstanceName)"'|xargs -n2 sh -c 'cat tmp1.json|jq "if .Instances[0].InstanceId==\"$0\" then .Instances[0].InstanceName=\"$1\" else empty end"'
Here is the output:
{
"VolumeId": "vol-007253a7d24c1c668",
"Instances": [
{
"InstanceId": "i-0c0650c15b099b993",
"InstanceName": "Test1"
}
]
}
{
"VolumeId": "vol-046e0be08ac95095a",
"Instances": [
{
"InstanceId": "i-020ce1b2ad08fa6bd",
"InstanceName": "Test"
}
]
}
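If neither jq approach is available, the join can also be done in Python. This is a sketch under the assumption that both files hold streams of concatenated JSON objects (as shown above), with trimmed-down inline data standing in for tmp1.json and tmp2.json:

```python
import json

def load_stream(text):
    # the files hold concatenated objects, not a JSON array,
    # so decode them one at a time with raw_decode
    decoder = json.JSONDecoder()
    objs, idx = [], 0
    while idx < len(text):
        obj, end = decoder.raw_decode(text, idx)
        objs.append(obj)
        idx = end
        while idx < len(text) and text[idx].isspace():
            idx += 1
    return objs

def merge(volumes_text, names_text):
    # index InstanceName by InstanceId, then inject it into each volume
    names = {o["InstanceId"]: o["InstanceName"] for o in load_stream(names_text)}
    volumes = load_stream(volumes_text)
    for vol in volumes:
        for inst in vol["Instances"]:
            inst["InstanceName"] = names[inst["InstanceId"]]
    return volumes

# stand-ins for tmp1.json and tmp2.json
tmp1 = '{"VolumeId": "vol-1", "Instances": [{"InstanceId": "i-a"}]}\n' \
       '{"VolumeId": "vol-2", "Instances": [{"InstanceId": "i-b"}]}'
tmp2 = '{"InstanceId": "i-b", "InstanceName": "Test1"}\n' \
       '{"InstanceId": "i-a", "InstanceName": "Test"}'
for vol in merge(tmp1, tmp2):
    print(json.dumps(vol, indent=2))
```

Building a dictionary keyed on InstanceId is the same idea as the INDEX helper in the jq answer above.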

jq update contents of one file to another as key value

I am trying to update the branch2 values in branches.json from the Employes values in branch2.json.
Using jq, how can I merge the content of one file into another file?
Below are the files.
I have tried this, but it didn't work; it just prints the original data without updating the details.
#!/bin/sh
#call file with branch name for example ./update.sh branch2
set -xe
branchName=$1
fullPath=`pwd`/$1".json"
list=$(cat ${fullPath})
branchDetails=$(echo ${list} | /usr/local/bin/jq -r '.Employes')
newJson=$(cat branches.json |
  jq --arg updateKey "$1" --arg updateValue "$branchDetails" 'to_entries |
    map(if .key == "$updateKey"
        then . + {"value":"$updateValue"}
        else .
        end) |
    from_entries')
echo $newJson &> results.json
branch1.json
{
"Employes": [
{
"Name": "Ikon",
"age": "30"
},
{
"Name": "Lenon",
"age": "35"
}
]
}
branch2.json
{
"Employes": [
{
"Name": "Ken",
"age": "40"
},
{
"Name": "Frank",
"age": "23"
}
]
}
branches.json / results.json format
{
"branch1": [
{
"Name": "Ikon",
"age": "30"
},
{
"Name": "Lenon",
"age": "35"
}
],
"branch2": [
{
"Name": "Ken",
"age": "40"
},
{
"Name": "Frank",
"age": "23"
}
]
}
Note: I don't have the list of all the branch files at any given point, so the script is only responsible for updating that one branch's details.
If the file name is the name of the property you want to update, you could utilize input_filename to select the files. No testing needed, just pass in the files you want to update. Just be aware of the order in which you pass the input files.
Merge the contents of the file as you see fit. To simply replace, just do a plain assignment.
$ jq 'reduce inputs as $i (.;
.[input_filename|rtrimstr(".json")] = $i.Employes
)' branches.json branch{1,2}.json
Your script would just need to be:
#!/bin/sh
#call file with branch name for example ./update.sh branch2
set -xe
branchName=$1
newJson=$(jq 'reduce inputs as $i (.; .[input_filename|rtrimstr(".json")] = $i.Employes)' branches.json "$branchName.json")
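For completeness, the same single-branch update is straightforward in Python as well; a sketch where the demo files are throwaways created on the fly, and the Employes spelling matches the files in the question:

```python
import json
import os
import tempfile

def update_branch(branches_path, branch_name, branch_path):
    with open(branch_path) as f:
        employees = json.load(f)["Employes"]   # key spelled as in the files
    with open(branches_path) as f:
        branches = json.load(f)
    branches[branch_name] = employees          # replace only this branch's list
    with open(branches_path, "w") as f:
        json.dump(branches, f, indent=2)

# demo with throwaway files
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "branch2.json"), "w") as f:
    json.dump({"Employes": [{"Name": "Ken", "age": "40"}]}, f)
with open(os.path.join(tmpdir, "branches.json"), "w") as f:
    json.dump({"branch1": [], "branch2": []}, f)
update_branch(os.path.join(tmpdir, "branches.json"), "branch2",
              os.path.join(tmpdir, "branch2.json"))
with open(os.path.join(tmpdir, "branches.json")) as f:
    print(f.read())
```

Because the whole branches.json is loaded and rewritten, any branches not named are left untouched, matching the note above.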