Extract JSON object from wget request

I want to do with PHP exactly the opposite of what James Nine was asking in PHP: How to extract JSON strings out of a string dump.
That is, to strip the JSON object(s) out of a large output from a wget request and keep the rest. I can't figure out what the regex should be; basically I need to negate
~\{(?:[^{}]|(?R))*\}~
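
One way to sidestep the negation entirely (a sketch, not from the original thread): feed the same recursive pattern to preg_replace and delete every match, which leaves only the non-JSON text. The URL below is a placeholder:
# Sketch: remove every JSON object from the fetched page instead of
# trying to negate the pattern. http://example.com/page is hypothetical.
wget -qO- 'http://example.com/page' \
  | php -r 'echo preg_replace("~\{(?:[^{}]|(?R))*\}~", "", stream_get_contents(STDIN));'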

Related

Iterating JSON to store key-value pairs using shell script

I have a JSON file that is created at runtime by a shell script within Groovy code. The JSON file has the contents below.
cat.json
{
"user1":"pass1",
"user2":"pass2",
"user3":"pass3"
}
Now I want to create a file at runtime which stores the key-value pairs in the format below:
test
user1:pass1
user2:pass2
user3:pass3
Can someone help me out with shell code for writing this?
You have literally a dozen ways to convert that JSON document to a tabular data file (pretty much like CSV/colon-SV), since you mentioned Java and Groovy, including Java-driven scripting engines (BeanShell, JavaScript, Groovy itself). But if you can use jq, then you can extract key/value pairs, at least for simple values that do not require any escaping:
#!/bin/sh
# Emit every top-level key/value pair as a "key:value" line,
# writing the result to the file "test" requested above.
jq -r 'to_entries[] | "\(.key):\(.value)"' \
    < cat.json > test
This answer was inspired by searching for how to extract entries using jq (or convert a JSON file to a CSV file), and especially by the answer https://stackoverflow.com/a/50496145/12232870 by @peak.
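If jq is not available, a plain-shell sketch can handle this particular file; note the heavy assumptions (one "key":"value" pair per line, no embedded or escaped quotes, which does hold for the cat.json shown above):
# Fallback sketch without jq. Assumes one "key":"value" pair per line
# and no escaped quotes.
sed -n 's/^[[:space:]]*"\([^"]*\)"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1:\2/p' \
    < cat.json > test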

How can you get CSV instead of JSON from the HTTP API of InfluxDB?

I want to use InfluxDB within the context of business intelligence:
ETL, joining data from other databases, creating live dashboards.
Right now we are using standard BI tools such as Qlik or Microsoft Power BI.
According to the documentation, the HTTP API should be used for querying (https://docs.influxdata.com/influxdb/v1.2/guides/querying_data/).
My problem is that the output of the API seems to be JSON only. That means each analyst first has to figure out how to transform the JSON into table format before joining other data, etc.
Is it possible to tell the API to produce CSV-like table output?
Do you have recommendations for which tools to use to produce good dashboards? I tried Grafana, but that seemed to fall short when joining other data.
You can use -H "Accept: application/csv" in your curl to get the response in CSV. For instance:
$ curl -G 'http://localhost:8086/query' --data-urlencode "db=my_db" --data-urlencode "q=SELECT * FROM \"cpu\"" -H "Accept: application/csv"
name,tags,time,host,region,value
cpu,,1493031640435991638,serverA,us_west,0.64
You can use jq to convert the JSON output to CSV as follows, which also allows you to get RFC3339-formatted timestamps:
jq -r '(.results[0].series[0].columns), (.results[0].series[0].values[]) | @csv'
which gives the output
"time","ppm","T"
"2019-01-17T19:45:00Z",864.5,18.54
"2019-01-17T19:50:00Z",861.4,18.545
"2019-01-17T19:55:00Z",866.2,18.5
"2019-01-17T20:00:00Z",863.9,18.47
and works because:
(.results[0].series[0].columns) gets the column names as an array
, concatenates the two outputs
(.results[0].series[0].values[]) gets each row of data values as an array
| @csv pipes them through the jq CSV formatter
-r is used to get raw output
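Putting the two answers together, an end-to-end sketch (the database and measurement names are taken from the curl example above):
# Sketch: query InfluxDB over HTTP and convert the JSON response to CSV.
# db=my_db and measurement "cpu" come from the earlier example.
curl -sG 'http://localhost:8086/query' \
    --data-urlencode "db=my_db" \
    --data-urlencode 'q=SELECT * FROM "cpu"' \
  | jq -r '(.results[0].series[0].columns), (.results[0].series[0].values[]) | @csv'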
Further resources:
Use https://jqplay.org/ to build queries
Other examples: Convert JSON array into CSV using jq
https://unix.stackexchange.com/questions/429241/convert-json-to-csv
How to convert arbitrary simple JSON to CSV using jq?

curl command json response value without jq

My curl command returns a JSON response:
{"token":"abcd"}
How do I get the token value into a variable in the shell script?
I cannot use jq, which most posts have suggested in the past. The pattern of this response is also fixed (there will be only one key-value pair), so if this response can in any way be converted to a string, then using a substring could be an option.
I found a blog that did exactly what I want. Works wonders.
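The blog link did not survive here, but for a fixed single-pair response like the one above, a sed substring extraction is enough (a sketch; it assumes the exact {"token":"..."} shape and no escaped quotes):
# Sketch: pull the token out of a fixed {"token":"..."} response
# without jq. Assumes exactly one key/value pair.
response='{"token":"abcd"}'    # e.g. response=$(curl -s ...) in the real script
token=$(printf '%s' "$response" | sed -n 's/.*"token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "$token"    # prints: abcd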

Ways to parse JSON using KornShell

I have working code for parsing JSON output using KornShell by treating it as a string of characters. The issue I have is that the vendor keeps changing the position of the field that I am interested in. I understand that in JSON we can parse it by key-value pairs.
Is there something out there that can do this? I am interested in a specific field and I would like to use it to run checks on the status of another REST API call.
My sample json output is like this:
JSONDATA value :
{
"status": "success",
"job-execution-id": 396805,
"job-execution-user": "flexapp",
"job-execution-trigger": "RESTAPI"
}
I would need the job-execution-id value to monitor this job through the rest of the script.
I am using the following command to parse it:
RUNJOB=$(print ${DATA} |cut -f3 -d':'|cut -f1 -d','| tr -d [:blank:]) >> ${LOGDIR}/${LOGFILE}
The problem with this is that it is field-delimited by :. The field position has been known to change between vendor releases.
So I am trying to see if there is a utility out there that would always give me the key-value pair "job-execution-id": 396805, no matter where it is in the JSON output.
I started looking at jsawk, but it requires the js interpreter to be installed on our machines, which I don't want. Any hint on how to find which RPM I need to solve this?
I am using RHEL5.5.
Any help is greatly appreciated.
The ast-open project has libdss (and a dss wrapper) which supposedly could be used with ksh. Documentation is sparse and is limited to a few messages on the ast-user mailing list.
The regression tests for libdss contain some json and xml examples.
I'll try to find more info.
Python is included by default with CentOS, so one thing you could do is pass your JSON string to a Python script and use Python's JSON parser. You can then grab the value written out by the script. An example you could modify to meet your needs is below.
Note that by specifying other dictionary keys in the Python script, you can get any of the values you need without having to worry about the order changing.
Python script:
# get_job_execution_id.py
# The try/except is because you'll probably have Python 2.4 on CentOS 5.5,
# and a plain "import json" won't work unless you have Python 2.6+.
try:
    import json
except ImportError:
    import simplejson as json

import sys

# The JSON document is passed as the first command-line argument.
json_data = sys.argv[1]
data = json.loads(json_data)

# Look the value up by key, so the field's position in the output
# no longer matters.
job_execution_id = data['job-execution-id']
sys.stdout.write(str(job_execution_id))
KornShell script that executes it:
#!/bin/ksh
# get_job_execution_id.sh
JSON_DATA='{"status":"success","job-execution-id":396805,"job-execution-user":"flexapp","job-execution-trigger":"RESTAPI"}'
EXECUTION_ID=$(python get_job_execution_id.py "$JSON_DATA")
echo "$EXECUTION_ID"
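Running the ksh script prints 396805, and it keeps working no matter where job-execution-id appears in the vendor's output, since the lookup is by key rather than by field position.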

Parsing JSON DATA in AWS

I still cannot parse JSON data on Linux.
I need a Linux command to parse JSON data into a readable string.
Someone told me to use underscore-cli (https://npmjs.org/package/underscore-cli).
I installed and used it, but the result is still unreadable.
my data:
"2005\u5e7405\u670812\u65e5(\u6728) 02\u664216\u5206"
According to this link,
http://json.parser.online.fr/
the result is
"2005年05月12日(木) 02時16分"
Is there any other way to parse this JSON data?
Please help.
Try jq: http://stedolan.github.com/jq/
echo '"2005\u5e7405\u670812\u65e5(\u6728) 02\u664216\u5206"' | ./jq .
"2005年05月12日(木) 02時16分"
jq takes escaped Unicode and outputs it as UTF-8.
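If the surrounding quotes are unwanted, jq's raw-output flag (the same -r used in the answers above) strips them:
# -r prints the decoded string without the JSON quoting.
echo '"2005\u5e7405\u670812\u65e5(\u6728) 02\u664216\u5206"' | ./jq -r .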