Get keys and values from JSON file using GitLab CI

I'm creating a gitlab-ci.yml file where I need to iterate over a JSON file and extract the keys and values. I cannot use jq, so I am trying to do it with a plain shell read loop. This is my script:
script:
while read -r line
do
echo "$line"
done < "$myfile"
And this is myfile:
{"var1":"test1",
"other_var":"test2"}
Since I am not able to use jq (it is not installed and my customer doesn't allow me to install it), how can I print something like this:
"Key is var1 and value is test1"

Related

json2csv output multiple jsons to one csv

I am using json2csv to convert multiple json files structured like
{
"address": "0xe9f6191596bca549e20431978ee09d3f8db959a9",
"copyright": "None",
"created_at": "None"
...
}
The problem is that I need to put multiple JSON files into one CSV file.
In my code I iterate through a hash file, call curl with each hash, and output the data to a JSON file. Then I use json2csv to convert each JSON to CSV.
mkdir -p curl_outs
{ cat hashes.hash; echo; } | while read h; do
echo "Downloading $h"
curl -L https://main.net955305.contentfabric.io/s/main/q/$h/meta/public/nft > curl_outs/$h.json;
node index.js $h;
json2csv -i curl_outs/$h.json -o main.csv;
done
I use -o to output the converted JSON into the CSV, however each run just overwrites the previous data, so I end up with only one row.
I have used >>, and this does append to the csv file.
json2csv -i "curl_outs/${h}.json" >> main.csv
But for some reason it also appends each file's keys (the header row) to the end of the CSV file.
I've also tried
cat csv_outs/*.csv > main.csv
However I get the same output.
How do I append multiple json files to one main csv file?
It's not entirely clear from the image and your description what's wrong with >>, but it looks like maybe the CSV file doesn't have a trailing line break, so appending the next file (>>) starts writing directly at the end of the previous file's last row and column (cell), gluing the two together.
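If that's the diagnosis, you can test it by making sure main.csv ends with a newline before each append. A small sketch: tail -c 1 prints the file's last byte, and the command substitution is empty only when that byte is already a newline.
# Before appending, add a newline if main.csv doesn't already end with one
[ -s main.csv ] && [ -n "$(tail -c 1 main.csv)" ] && echo >> main.csv
json2csv -i "curl_outs/${h}.json" >> main.csv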
I deal with CSVs almost daily and love the GoCSV tool. Its stack subcommand will do just what the name implies: stack multiple CSVs, one on top of the other.
In your case, you could download each JSON and convert it to an individual (intermediate) CSV. Then, at the end, stack all the intermediate CSVs and delete them.
mkdir -p curl_outs
{ cat hashes.hash; echo; } | while read h; do
echo "Downloading $h"
curl -L https://main.net955305.contentfabric.io/s/main/q/$h/meta/public/nft > curl_outs/$h.json;
node index.js $h;
json2csv -i curl_outs/$h.json -o curl_outs/$h.csv;
done
gocsv stack curl_outs/*.csv > main.csv;
# I suggested deleting the intermediate CSVs
# rm curl_outs/*.csv
# ...
I changed the last line of your loop to json2csv -i curl_outs/$h.json -o curl_outs/$h.csv; to create the intermediate CSVs mentioned above. Now gocsv's stack subcommand can take the list of those intermediate CSVs and give you main.csv.
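If installing GoCSV isn't an option, a rough equivalent with plain coreutils is sketched below; it assumes every intermediate CSV has identical columns in the same order and exactly one header row:
# Header from the first file, then data rows (skipping each header) from all files
set -- curl_outs/*.csv
head -n 1 "$1" > main.csv
for f in "$@"; do
tail -n +2 "$f" >> main.csv
done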

Get key and value from JSON string on OpenWrt with a limited shell

I'm working on OpenWrt, which has very few shell commands available, and I am checking whether it is possible to filter a JSON string to get at its values.
For example
{"address":"192.168.2.2","user":"user1","groups":"permissions"}
I receive the string from curl and I need to separate the values to pass them as variables to other commands.
For now I'm trying some examples, but they don't work:
#!/bin/sh
. /usr/share/libubox/jshn.sh
json_init
json_load '$(cat $STRING)'
json_get_keys keys
for k in $keys; do
json_get_var v "$k"
echo "$k : $v"
done
But it produces the error "Failed to parse message data".
My problem is precisely that I can't use jq or python to pick out the data, so the only solution is to separate it first.
Suggestions?
I found a cleaner way to do the same thing:
eval $(jsonfilter -s "$STRING" -e 'ADDRESS=#.address' -e 'USER=#.user')
echo "address=$ADDRESS user=$USER"
This way I can extract every value into a shell parameter, without jq or any python helper.
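For what it's worth, the original jshn.sh attempt fails because the single quotes around '$(cat $STRING)' stop the shell from expanding anything, so jshn is asked to parse that literal text. Passing the variable directly makes the loop work; a sketch, assuming $STRING holds the JSON itself rather than a filename:
#!/bin/sh
. /usr/share/libubox/jshn.sh
STRING='{"address":"192.168.2.2","user":"user1","groups":"permissions"}'
json_init
json_load "$STRING"   # double quotes: expand the variable as one word
json_get_keys keys
for k in $keys; do
json_get_var v "$k"
echo "$k : $v"
done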

How to split text file into multiple files and extract filename from line prefix?

I have a simple log file with content like:
1504007980.039:{"key":"valueA"}
1504007990.359:{"key":"valueB", "key2": "valueC"}
...
That I'd like to output to multiple files that each have as content the JSON part that comes after the timestamp. So I would get as a result the files:
1504007980039.json
1504007990359.json
...
This is similar to How to split one text file into multiple *.txt files? but the name of each file should be extracted from its line (with the extra dot removed), not generated from an index.
Preferably I'd want a one-liner that can be executed in bash.
Since you aren't using GNU awk, you need to close output files as you go to avoid the "too many open files" error. To avoid that, along with issues around specific values in your JSON and undefined behavior during output redirection, this is what you need:
awk '{
fname = $0
sub(/\./,"",fname)        # drop the dot in the timestamp
sub(/:.*/,".json",fname)  # replace the colon and everything after it with ".json"
sub(/[^:]+:/,"")          # strip the timestamp prefix from the record itself
print >> fname
close(fname)
}' file
You can of course squeeze it onto 1 line if you see some benefit to that:
awk '{f=$0;sub(/\./,"",f);sub(/:.*/,".json",f);sub(/[^:]+:/,"");print>>f;close(f)}' file
awk solution:
awk '{ idx=index($0,":"); fn=substr($0,1,idx-1)".json"; sub(/\./,"",fn);
print substr($0,idx+1) > fn; close(fn) }' input.log
idx=index($0,":") - capturing index of the 1st :
fn=substr($0,1,idx-1)".json" - preparing filename
Viewing results (for 2 sample lines from the question):
for f in *.json; do echo "$f"; cat "$f"; echo; done
The output (filename -> content):
1504007980039.json
{"key":"valueA"}
1504007990359.json
{"key":"valueB"}

Dynamically create and update json using bash

In my hypothetical folder /hd/log/, I have two dozen folders, and each folder has log files in the format foldername.2017.07.09.log. I have a crontab that gzips the last log file every night, so there is a new log file with a new name every day.
I am trying to create a dynamic JSON file whose output looks like this:
[
{
"Foldername": "foldername",
"lastmodifiedfile": "/hd/log/foldername/foldername.2017.07.09.log"
},
{
"Foldername": "foldername2",
"lastmodifiedfile": "/hd/log/foldername2/foldername2.2017.07.09.log"
}
]
The bash script should dynamically create an entry for each subfolder name (in case more folders are added or names are changed) and also give a direct link to the last modified file.
I already have a PHP program to parse the JSON file, but no sane way to create this JSON file dynamically.
Any help or pointers are appreciated.
printf "%s" "["
for var in $(find /hd/log -type d)
do
path=$("ls -1t $var" | head -1)
echo $var"/"$path | awk -F\/ '{ printf "%s","\n\t{\n\t\t\"Foldername\":\""$(NF-1)"\",\n\t\tlastmodifiedfile\":\""$0"\"\n\t},"}'
done
printf "%s" "]"
Here we find all directories under /hd/log in a loop, taking each directory in turn and using ls -1t | head -1 to get the most recently modified file in it. The path and filename are then passed through awk to produce the desired output. We first set awk's field delimiter to / with the -F flag, then print the JSON syntax as required, using the next-to-last /-delimited field for the directory ($(NF-1), where NF is the number of fields) and the complete line ($0) for the last modified file. The first flag prints a comma before every entry except the first, so the array has no trailing comma and stays valid JSON.
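If jq does happen to be available, a sketch that sidesteps the comma bookkeeping and the string escaping entirely (same /hd/log layout assumed; -R reads each line as a raw string):
for d in /hd/log/*/; do
printf '%s\n' "${d%/}/$(ls -1t "$d" | head -1)"
done | jq -Rn '[inputs | {Foldername: (split("/")[-2]), lastmodifiedfile: .}]'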

Read a json file with bash

I would like to read the JSON file from http://freifunk.in-kiel.de/alfred.json in bash and split it into files named by the hostname of each element in that JSON string.
How do I read json with bash?
How do I read json with bash?
You can use jq for that. First extract the list of hostnames, then loop over it, running a second jq query per hostname to extract the matching element and redirecting it to a file named after that hostname.
The easiest way to do this is with two instances of jq -- one listing hostnames, and another (inside the loop) extracting individual entries.
This is, alas, a bit inefficient, since it means rereading the file from the top for each record extracted.
while read -r hostname; do
[[ $hostname = */* ]] && continue # paranoia; see comments
jq --arg hostname "$hostname" \
'.[] | select(.hostname == $hostname)' <alfred.json >"out-${hostname}.json"
done < <(jq -r '.[] | .hostname' <alfred.json)
(The out- prefix prevents alfred.json from being overwritten if it includes an entry for a host named alfred).
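When the file is large, a single-pass variant avoids rereading it per host: have jq emit each hostname and its record on alternating lines, then pair the lines up in the shell. A sketch, with the same caveat that hostnames must not contain newlines or slashes:
while read -r hostname && read -r record; do
[[ $hostname = */* ]] && continue # same paranoia as above
printf '%s\n' "$record" > "out-${hostname}.json"
done < <(jq -r '.[] | .hostname, tojson' <alfred.json)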
You can use a python one-liner in a similar way (I haven't checked):
curl -s http://freifunk.in-kiel.de/alfred.json | python -c '
import json, sys
tbl = json.load(sys.stdin)
for t in tbl:
    with open(tbl[t]["hostname"], "w") as fp:  # "w", not "wb": json.dump writes text
        json.dump(tbl[t], fp)
'