How to read a JSON file in Prolog - json

I found a few SO posts on related issues, but they were unhelpful. I finally figured it out, so here's how to read the contents of a .json file. Say the path is /home/xxx/dnns/test/params.json and I want to turn the JSON object in the file into a Prolog dict:
{
"type": "lenet_1d",
"input_channel": 1,
"output_size": 130,
"batch_norm": 1,
"use_pooling": 1,
"pooling_method": "max",
"conv1_kernel_size": 17,
"conv1_num_kernels": 45,
"conv1_stride": 1,
"conv1_dropout": 0.0,
"pool1_kernel_size": 2,
"pool1_stride": 2,
"conv2_kernel_size": 12,
"conv2_num_kernels": 35,
"conv2_stride": 1,
"conv2_dropout": 0.514948804688646,
"pool2_kernel_size": 2,
"pool2_stride": 2,
"fcs_hidden_size": 109,
"fcs_num_hidden_layers": 2,
"fcs_dropout": 0.8559119274655482,
"cost_function": "SmoothL1",
"optimizer": "Adam",
"learning_rate": 0.0001802763794651928,
"momentum": null,
"data_is_target": 0,
"data_train": "/home/xxx/data/20180402_L74_70mm/train_2.h5",
"data_val": "/home/xxx/data/20180402_L74_70mm/val_2.h5",
"batch_size": 32,
"data_noise_gaussian": 1,
"weight_decay": 0,
"patience": 20,
"cuda": 1,
"save_initial": 0,
"k": 4,
"save_dir": "DNNs/20181203090415_11_created/k_4"
}

To read a JSON file with SWI-Prolog, query
?- use_module(library(http/json)). % to enable json_read_dict/2
?- FPath = '/home/xxx/dnns/test/params.json', open(FPath, read, Stream), json_read_dict(Stream, Dicty).
You'll get
FPath = '/home/xxx/dnns/test/params.json',
Stream = <stream>(0x7fa664401750),
Dicty = _12796{batch_norm:1, batch_size:32, conv1_dropout:0.0, conv1_kernel_size:17,
    conv1_num_kernels:45, conv1_stride:1, conv2_dropout:0.514948804688646,
    conv2_kernel_size:12, conv2_num_kernels:35, conv2_stride:1, cost_function:"SmoothL1",
    cuda:1, data_is_target:0, data_noise_gaussian:1,
    data_train:"/home/xxx/data/20180402_L74_70mm/train_2.h5",
    data_val:"/home/xxx/data/20180402_L74_70mm/val_2.h5",
    fcs_dropout:0.8559119274655482, fcs_hidden_size:109, fcs_num_hidden_layers:2,
    input_channel:1, k:4, learning_rate:0.0001802763794651928, momentum:null,
    optimizer:"Adam", output_size:130, patience:20, pool1_kernel_size:2, pool1_stride:2,
    pool2_kernel_size:2, pool2_stride:2, pooling_method:"max",
    save_dir:"DNNs/20181203090415_11_created/k_4", save_initial:0, type:"lenet_1d",
    use_pooling:1, weight_decay:0}.
where Dicty is the desired dictionary.
If you want to define this as a predicate, you could do:
:- use_module(library(http/json)).
get_dict_from_json_file(FPath, Dicty) :-
    open(FPath, read, Stream),
    json_read_dict(Stream, Dicty),
    close(Stream).

Even DEC-10 Prolog, released 40 years ago, could handle JSON as a normal term. There should be no need for a specialized library or parser for JSON, because Prolog can parse it directly.
?- X={"a":3,"b":"hello","c":undefined,"d":null} .
X = {"a":3, "b":"hello", "c":undefined, "d":null}.
?-

Related

Is this JSON data parsed into Python dict correctly?

Cannot extract components of data parsed from JSON to Python dictionary.
I attempted to print the value corresponding to a dictionary entry, but I get an error.
import urllib, json, requests
url = "https://storage.googleapis.com/osbuddy-exchange/summary.json"
response = urllib.urlopen(url)
data = json.loads(response.read())
print type(data)
for key, value in data.iteritems():
    print value
    print ''
print "data['entry']: ", data['99']
print "name: ", data['name']
I was hoping I could get attributes of an entry, say the 'buy_average' for a specific key, but instead I get an error when referencing specific components.
<type 'dict'>
22467 {u'sell_average': 3001, u'buy_average': 0, u'name': u'Bastion potion(2)', u'overall_average': 3001, u'sp': 180, u'overall_quantity': 2, u'members': True, u'sell_quantity': 2, u'buy_quantity': 0, u'id': 22467}
22464 {u'sell_average': 4014, u'buy_average': 0, u'name': u'Bastion potion(3)', u'overall_average': 4014, u'sp': 270, u'overall_quantity': 612, u'members': True, u'sell_quantity': 612, u'buy_quantity': 0, u'id': 22464}
5745 {u'sell_average': 0, u'buy_average': 0, u'name': u'Dragon bitter(m)', u'overall_average': 0, u'sp': 2, u'overall_quantity': 0, u'members': True, u'sell_quantity': 0, u'buy_quantity': 0, u'id': 5745}
...
data['entry']: {u'sell_average': 7843, u'buy_average': 7845, u'name': u'Ranarr potion (unf)', u'overall_average': 7844, u'sp': 25, u'overall_quantity': 23838, u'members': True, u'sell_quantity': 15090, u'buy_quantity': 8748, u'id': 99}
name:
Traceback (most recent call last):
File "C:/Users/Michael/PycharmProjects/osrsGE/osrsGE.py", line 16, in <module>
print "name: ", data['name']
KeyError: 'name'
Process finished with exit code 1
There is no key named 'name' in the dict named data.
The first-level keys are numbers as strings, like "6", "2", "8", etc.
The second-level objects have a key named 'name', so code like:
print(data['2']['name']) # Cannonball
should work
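If the goal is the 'buy_average' for a specific item, indexing by one of the first-level ids works the same way. A minimal sketch (written for Python 3, using the requests package the question already imports, and assuming the URL above still serves the same JSON layout):
import requests

url = "https://storage.googleapis.com/osbuddy-exchange/summary.json"
data = requests.get(url).json()  # top-level keys are item ids as strings

item_id = "22467"  # one of the ids visible in the output above
item = data[item_id]
print(item["buy_average"])  # 0 in the sample output
print(item["name"])         # Bastion potion(2)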

AWS X-Ray Python SDK get_service_graph

I am trying to get JSON using get_service_graph() provided by the AWS X-Ray Python SDK in an AWS Lambda function.
import boto3
from datetime import datetime
def lambda_handler(event, context):
    client = boto3.client('xray')
    response1 = client.get_service_graph(
        StartTime=datetime(2017, 5, 20, 12, 0),
        EndTime=datetime(2017, 5, 20, 18, 0)
    )
    return response1
However, when I pass the StartTime and EndTime parameters, the stack trace reports that the datetime type is not JSON serializable. I even tried the following:
response1 = client.get_service_graph(
    StartTime="2017-05-20 00:00:00",
    EndTime="2017-05-20 02:00:00"
)
What's weird is that if EndTime is set to "2017-05-20 01:00:00", no error is generated; otherwise, the same error occurs.
{
"stackTrace": [
[
"/usr/lib64/python2.7/json/__init__.py",
251,
"dumps",
"sort_keys=sort_keys, **kw).encode(obj)"
],
[
"/usr/lib64/python2.7/json/encoder.py",
207,
"encode",
"chunks = self.iterencode(o, _one_shot=True)"
],
[
"/usr/lib64/python2.7/json/encoder.py",
270,
"iterencode",
"return _iterencode(o, 0)"
],
[
"/var/runtime/awslambda/bootstrap.py",
104,
"decimal_serializer",
"raise TypeError(repr(o) + \" is not JSON serializable\")"
]
],
"errorType": "TypeError",
"errorMessage": "datetime.datetime(2017, 5, 20, 1, 53, 13, tzinfo=tzlocal()) is not JSON serializable"
}
I did try using only a date, like datetime(2017, 5, 20). However, if I use two consecutive days as StartTime and EndTime, the runtime complains that the interval can't be more than 6 hours. If I use the same date for both, it only returns empty JSON. I don't know how to control the granularity of get_service_graph().
I think the Python SDK for AWS X-Ray might be immature, but I'd still like to hear from someone who has had the same experience. Thanks!
The right way is to use datetime(2017, 5, 20), not a string... but can you try using only a date, without a time? At least the AWS docs show an example exactly like yours, but with only yyyy-mm-dd and no time.
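For what it's worth, the decimal_serializer frame in the traceback suggests the error happens when the Lambda runtime tries to JSON-serialize the returned response, which itself contains datetime values. A minimal sketch of one way around that, assuming that reading of the traceback is right (json.dumps with default=str is a standard-library feature, not something specific to X-Ray), offered as an illustration rather than the accepted fix:
import json
import boto3
from datetime import datetime

def lambda_handler(event, context):
    client = boto3.client('xray')
    # Pass real datetime objects, keeping the window within the 6-hour limit
    # mentioned in the question.
    response = client.get_service_graph(
        StartTime=datetime(2017, 5, 20, 12, 0),
        EndTime=datetime(2017, 5, 20, 18, 0)
    )
    # Convert datetime values to strings so the runtime can serialize the result.
    return json.loads(json.dumps(response, default=str))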

Get JSON's attribute value in Chatterbot and Django integration

statement.text in the ChatterBot and Django integration returns
{'text': u'How are you doing?', 'created_at': datetime.datetime(2017, 2, 20, 7, 37, 30, 746345, tzinfo=<UTC>), 'extra_data': {}, 'in_response_to': [{'text': u'Hi', 'occurrence': 3}]}
I want the value of the text attribute so that it prints How are you doing?
ChatterBot returns the statement as a JSON object (a dict), so you can use dictionary operations like the following:
[1]: data = {'text': u'How are you doing?', 'created_at': datetime.datetime(2017, 2, 20, 7, 37, 30, 746345, tzinfo=<UTC>), 'extra_data': {}, 'in_response_to': [{'text': u'Hi', 'occurrence': 3}]}
[2]: data['text'] or data.get('text') (this approach is preferable).
What you got is a dictionary. A value can be obtained with the get() method. You can also use data['text'], but that does not perform error checking; get() returns None if the key is not present.
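A quick sketch of the difference (dropping the datetime field from the example above so it runs as plain Python):
data = {'text': u'How are you doing?', 'extra_data': {}, 'in_response_to': [{'text': u'Hi', 'occurrence': 3}]}

print(data['text'])             # How are you doing?  (raises KeyError if the key is missing)
print(data.get('text'))         # How are you doing?  (returns None if the key is missing)
print(data.get('missing', ''))  # ''  -- an explicit default instead of None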

convert simple perl string into JSON in perl

I am very new to JSON. I ran some commands and stored their output in a string. Now I want to convert it into JSON. How can I convert it into a Perl hash reference and then convert it into JSON? My output is like this, but it is in string format:
{"limits": {"rate": [], "absolute": {"maxServerMeta": 128, "maxPersonality": 5, "maxImageMeta": 128, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxTotalKeypairs": 100, "totalRAMUsed": 6144, "totalInstancesUsed": 3, "maxSecurityGroups": 10, "totalFloatingIpsUsed": 0, "maxTotalCores": 20, "totalSecurityGroupsUsed": 0, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "totalCoresUsed": 6, "maxTotalRAMSize": 51200}}}
I am using this code:
my %hash_ref = split /[,:]/, $curl_cmd3_output;
my $h = from_json( $hash_ref ); #<-- $h is a perl hash reference
print $h;
$max= $h->{'limits'}{'absolute'}{'maxSecurityGroupRules'}, "\n"; #<-- 20
print $max;
But I am getting this error:
hash- or arrayref expected (not a simple scalar, use allow_nonref to allow this)
How do I solve it?
Your $curl_cmd3_output is a string representation of a JSON object. First you have to transform it into a Perl hash, and then read the key you are looking for:
use strict;
use warnings;
use JSON;
my $curl_cmd3_output = q!{"limits": {"rate": [], "absolute": {"maxServerMeta": 128, "maxPersonality": 5, "maxImageMeta": 128, "maxPersonalitySize": 10240, "maxSecurityGroupRules": 20, "maxTotalKeypairs": 100, "totalRAMUsed": 6144, "totalInstancesUsed": 3, "maxSecurityGroups": 10, "totalFloatingIpsUsed": 0, "maxTotalCores": 20, "totalSecurityGroupsUsed": 0, "maxTotalFloatingIps": 10, "maxTotalInstances": 10, "totalCoresUsed": 6, "maxTotalRAMSize": 51200}}}!;
my $h = from_json($curl_cmd3_output ); #<-- $h is a perl hash reference
print $h->{limits}->{absolute}->{maxSecurityGroupRules}, "\n"; #<-- 20

pm3d in gnuplot with binary data

I have some data files with content
a1 b1 c1 d1
a1 b2 c2 d2
...
[blank line]
a2 b1 c1 d1
a2 b2 c2 d2
...
I plot this with gnuplot using
splot 'file' u 1:2:3:4 w pm3d.
Now, I want to use a binary file. I created the file with Fortran using unformatted stream-access (direct or sequential access did not work directly). By using gnuplot with
splot 'file' binary format='%float%float%float%float' u 1:2:3
I get a normal 3D-plot. However, the pm3d-command does not work as I don't have the blank lines in the binary file. I get the error message:
>splot 'file' binary format='%float%float%float%float' u 1:2:3:4 w pm3d
Warning: Single isoline (scan) is not enough for a pm3d plot.
Hint: Missing blank lines in the data file? See 'help pm3d' and FAQ.
According to the demo script at http://gnuplot.sourceforge.net/demo/image2.html, I have to specify the record length (which I still don't fully understand). However, using the script from the demo page and the command with pm3d yields the same error message:
splot 'scatter2.bin' binary record=30:30:29:26 u 1:2:3 w pm3d
So how is it possible to plot this four dimensional data from a binary file correctly?
Edit: Thanks, mgilson. Now it works fine. Just for the record, my Fortran code snippet:
open(unit=83, file=fname, action='write', status='replace', access='stream', form='unformatted')
a = 0.d0
b = 0.d0
do i = 1, 200
   do j = 1, 100
      write(83) real(a), real(b), c(i,j), d(i,j)
      b = b + db
   end do
   a = a + da
   b = 0.d0
end do
close(83)
The gnuplot commands:
set pm3d map
set contour
set cntrparam levels 20
set cntrparam bspline
unset clabel
splot 'fname' binary record=(100,-1) format='%float' u 1:2:3:4 t 'd as pm3d-projection, c as contour'
Great question, and thanks for posting it. This is a corner of gnuplot I hadn't spent much time with before. First, I need to generate a little test data -- I used python, but you could use fortran just as easily:
Note that my input array (b) is just a 10x10 array. The first two "columns" in the datafile are just the index (i,j), but you could use anything.
>>> import struct
>>> import numpy as np
>>> a = np.arange(10)
>>> b = a[None,:]+a[:,None]
>>> b
array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
[ 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
[ 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],
[ 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
[ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
[ 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
[ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17],
[ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]])
>>> with open('foo.dat','wb') as foo:
...     for (i,j),dat in np.ndenumerate(b):
...         s = struct.pack('4f',i,j,dat,dat)
...         foo.write(s)
...
So here I just write 4 floating-point values to the file for each data point. Again, this is what you've already done using Fortran. Now for plotting it:
splot 'foo.dat' binary record=(10,-1) format='%float' u 1:2:3:4 w pm3d
I believe this specifies that each "scan" is a "record". Since I know that each scan will be 10 data points long, that becomes the first index in the record list. The -1 indicates that gnuplot should keep reading records until it finds the end of the file.