Convert yaml -> json in Racket

I was looking to convert some yaml documents to json using Racket and the yaml and json libraries. Both seem to work very well, but don't necessarily work well together. At the risk of this question being a little meta (I am interested in an idiomatic solution), can someone point me in the right direction?
Example yaml:
Title: Example
Description: An example
Content:
  Type1:
    - foo
    - bar
    - baz
  Type2:
    - chocolate
    - vanilla
    - strawberry
My quick attempt at converting a yaml:
#lang racket/base
(require json
         yaml)

; reading is easy
(define example-yaml (file->yaml "./example.yaml"))

; writing doesn't like the keys-as-strings... why not?
; (write-json example-yaml)
; write-json: expected argument of type <legal JSON key value>; given: "Description"

; keys-as-symbols seems to be fine
(define example-yaml-2
  #hash((Content . #hash((Type1 . ("foo" "bar" "baz"))
                         (Type2 . ("chocolate" "vanilla" "strawberry"))))
        (Description . "An example")
        (Title . "Example")))
(write-json example-yaml-2)
; {"Content":{"Type2":["chocolate","vanilla","strawberry"],"Type1":["foo","bar","baz"]},"Description":"An example","Title":"Example"}
I gather that the issue is that the json package doesn't see strings as a valid key in a jsexpr. The docs give the following example:
> (jsexpr? #hasheq(("turnip" . 82)))
#f
From where I sit the options seem to be:
1. Change the behavior of the yaml package to emit keys as symbols rather than as strings.
2. Change the behavior of the json package to treat (jsexpr? #hasheq(("turnip" . 82))) as #t.
3. Parse my yamls, then munge the resulting data structure such that keys are symbols.
I guess I don't entirely understand the implications (or have a solid handle on the implementation) of these options. I also am not entirely sure why keys as strings aren't valid jsexprs, given that the json it emits uses strings as keys as well. Thank you for any insight you can provide!

For method 3, just changing hash-table keys from strings to symbols might not be enough. It depends on how much you know about the format of your data.
For example, the yaml package allows all sorts of things as "keys": not just strings but also binary data, numbers, hash-maps, or any other Yaml objects (keys may be arbitrary nodes).
So you must either:
1. Know beforehand that all keys in all of your Yaml data are simple strings,
2. Or be able to sanely convert any arbitrary Yaml value into a symbol,
3. Or convert Yaml maps into some Json structure other than a Json map.
For now I'm going to assume (1), that you know beforehand that all keys are strings.
;; yaml-key->symbol
;; In my Yaml data, I know beforehand that all keys are strings
(define (yaml-key->symbol key)
  (cond
    [(string? key) (string->symbol key)]
    [else
     (error 'yaml-key->symbol
            "expected all Yaml keys to be strings, but got: ~v"
            key)]))
There are other potential mismatches between Yaml and Json that you might have to consider.
How do you convert yaml byte-strings? As lists of bytes? Hex strings?
How do you convert yaml sets? As lists?
How do you convert yaml timestamps / dates? As Json maps mapping fields to numbers? Date strings? Number of seconds since the unix epoch?
For each of these questions, make a decision and document it. Or if you know ahead of time that your Yaml data definitely doesn't include any of these, document that too, and validate with an error similar to yaml-key->symbol above.
Once you know how to convert everything you might see in your data, you can traverse the Yaml recursively and convert it to Json.
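For example, here is a minimal sketch of that recursive traversal, assuming option (1) holds and that values are the usual strings, numbers, booleans, lists, and maps; the name yaml->jsexpr is made up for illustration:

;; yaml->jsexpr: a sketch, assuming all keys are strings and all values
;; are strings, booleans, integers, inexact reals, lists, or hashes.
(define (yaml->jsexpr v)
  (cond
    ;; Yaml maps become hasheq tables with symbol keys
    [(hash? v)
     (for/hasheq ([(k val) (in-hash v)])
       (values (yaml-key->symbol k) (yaml->jsexpr val)))]
    ;; Yaml sequences become lists
    [(list? v) (map yaml->jsexpr v)]
    ;; atoms that are already legal jsexpr values pass through
    [(or (string? v) (boolean? v) (exact-integer? v)
         (and (real? v) (inexact? v)))
     v]
    [else (error 'yaml->jsexpr "cannot convert Yaml value: ~v" v)]))

;; usage: (write-json (yaml->jsexpr example-yaml))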

Related

How to marshal a predicate from JSON in Prolog?

In Python it is common to marshal objects from JSON. I am seeking similar functionality in Prolog, either swi-prolog or scryer.
For instance, if we have JSON stating
{'predicate':
    {'mortal(X)', ':-', 'human(X)'}
}
I'm hoping to find something like load_predicates(j) and have that data immediately consulted. A version of json.dumps() and loads() would also be extremely useful.
EDIT: For clarity, this will allow interoperability with client applications which will be collecting rules from users. That application is probably not in Prolog, but something like React.js.
I agree with the commenters that it would be easier to convert the JSON data to a .pl file in the proper format first and then load that.
However, you can load the predicates from JSON directly, convert them to a representation that Prolog understands, and use assertz to add them to the knowledge base.
If indeed the data contains all the syntax needed for a predicate (as is the case in the example data in the question) then converting the representation is fairly simple as you just need to concatenate the elements of the list into a string and then create a term out of the string. Note that this assumption skips step 2 in the first comment by Guy Coder.
Note that the Prolog JSON library is rather strict in which format it accepts: only double quotes are valid as string delimiters, and lists with singleton values (i.e., not key-value pairs) need to use the notation [a,b,c] instead of {a,b,c}. So first the example data needs to be rewritten:
{"predicate":
["mortal(X)", ":-", "human(X)"]
}
Then you can load it in SWI-Prolog. Minimal working example:
:- use_module(library(http/json)).

% example fact for testing
human(aristotle).

load_predicate(J) :-
    % open the file
    open(J, read, JSONstream, []),
    % parse the JSON data
    json_read(JSONstream, json(L)),
    % check for an occurrence of the predicate key with value L2
    member(predicate=L2, L),
    % concatenate the list into a string
    atomics_to_string(L2, S),
    % create a term from the string
    term_string(T, S),
    % add to knowledge base
    assertz(T).
Example run:
?- consult('mwe.pl').
true.
?- load_predicate('example_predicate.json').
true.
?- mortal(X).
X = aristotle.
Detailed explanation:
The predicate json_read stores the data in the following form:
json([predicate=['mortal(X)', :-, 'human(X)']])
This is a list inside a json term with one element for each key-value pair. The element has the syntax key=value. In the call to json_read you can already strip the json() term and store the list directly in the variable L.
Then member/2 is used to search for the compound term predicate=L2. If you have more than one predicate in the JSON file then you should turn this into a foreach (e.g. forall/2) or a recursive call to process all predicates in the list; see the sketch below.
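For example, a hedged sketch of the multi-predicate case using forall/2; the name load_predicates and the assumption that every pair uses the predicate key are mine:

load_predicates(J) :-
    open(J, read, JSONstream, []),
    json_read(JSONstream, json(L)),
    close(JSONstream),
    % build and assert a term for every predicate=Value pair in the list
    forall(member(predicate=L2, L),
           ( atomics_to_string(L2, S),
             term_string(T, S),
             assertz(T) )).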
Since the list L2 already contains a syntactically well-formed Prolog predicate it can just be concatenated, turned into a term using term_string/2 and asserted. Note that in case the predicate is not yet in the required format, you can construct a predicate out of the various pieces using built-in predicate manipulation functionality, see https://www.swi-prolog.org/pldoc/doc_for?object=copy_predicate_clauses/2 for some pointers.

TCL Dict to JSON

I am trying to convert a dict into JSON format and am not seeing any easy method using the Tcllib JSON package. Say I have defined a dict as follows:
set countryDict [dict create USA {population 300 capital DC} Canada {population 30 capital Ottawa}]
I want to convert this to json format as shown below:
{
    "USA": {
        "population": 300,
        "captial": "DC"
    },
    "Canada": {
        "population": 30,
        "captial": "Ottawa"
    }
}
("population" is number and capital is string). I am using TclLib json package (https://wiki.tcl-lang.org/page/Tcllib+JSON) . Any help would be much appreciated.
There are two problems with the “go straight there” approach that you appear to be hoping for:
Tcl's type system is extremely different from JSON's; in Tcl, every value is a (subtype of) string, but JSON expects objects, arrays, numbers and strings to be wholly different things.
The capital becomes captial. For bonus fun. (Hopefully that's just a typo on your part, but we'll cope.)
I'd advise using rl_json for this; it's a much more capable package that treats JSON as a fundamental type. (It's even better at it when it comes to querying into the JSON structure.)
package require rl_json

set result {{}}; # Literal empty JSON object
dict for {countryID data} $countryDict {
    rl_json::json set result $countryID [rl_json::json template {{
        "population": "~N:population",
        "captial": "~S:capital"
    }} $data]
    # Yes, that was {{ … }}; the outer braces are for Tcl and the inner ones for a JSON object
}
puts [rl_json::json pretty $result]
That produces almost exactly the output you asked for, except with different indentation. $result is the “production” version of the output that you can work with for further processing, but which has no excess whitespace at all (which is a great choice when you're dealing with documents over 100MB long).
Notes:
The initial JSON object could have been done like this:
set result "{}"
that would have worked just as well (and been the same Tcl bytecode).
json set puts an item into an object or array; that's exactly what we want here (in a dict for to go over the input data).
json template takes an optional dictionary for mapping substitution names in the template to values; that's perfect for your use case. Otherwise we'd have had to do dict with data {} to map the contents of the dictionary into variables, and that's less than perfect when the input data isn't strictly controlled.
The template argument to json template is itself JSON. The ~N: prefix in a leaf string value says “replace this with a number from the substitution called…”, and ~S: says “replace this with a string from the substitution called…”. There are others.
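As a follow-up, the generated document can be queried in place; a small sketch using rl_json's json get subcommand, with paths matching the data built above:

# Reading values back out of the generated JSON document
puts [rl_json::json get $result USA population]    ;# -> 300
puts [rl_json::json get $result Canada captial]    ;# -> Ottawa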

Excessive use of map[string]interface{} in go development?

The majority of my development experience has been with dynamically typed languages like PHP and Javascript. I've been practicing with Golang for about a month now by re-creating some of my old PHP/Javascript REST APIs in Golang. I feel like I'm not doing things the Golang way most of the time. Or, more generally, I'm not used to working with strongly typed languages. I feel like I'm making excessive use of map[string]interface{} and slices of them to box up data as it comes in from http requests or when it gets shipped out as json http output. So what I'd like to know is whether what I'm about to describe goes against the philosophy of golang development, or whether I'm breaking the principles of developing with strongly typed languages.
Right now, about 90% of the program flow for REST Apis I've rewritten with Golang can be described by these 5 steps.
STEP 1 - Receive Data
I receive http form data from http.Request.ParseForm() as formvals := map[string][]string. Sometimes I will store serialized JSON objects that need to be unmarshaled like jsonUserInfo := json.Unmarshal(formvals["user_information"][0]) /* gives some complex json object */.
STEP 2 - Validate Data
I do validation on formvals to make sure all the data values are what I expect before using them in SQL queries. I treat everything as a string, then use Regex to determine if the string format and business logic are valid (eg. IsEmail, IsNumeric, IsFloat, IsCASLCompliant, IsEligibleForVoting, IsLibraryCardExpired etc...). I've written my own Regex and custom functions for these types of validations.
STEP 3 - Bind Data to SQL Queries
I use golang's database/sql.DB to take my formvals and bind them to my Query and Exec functions like this: Query("SELECT * FROM tblUser WHERE user_id = ? AND user_birthday > ?", formvals["user_id"][0], jsonUserInfo["birthday"]). I never care about the data types I'm supplying as arguments to be bound, so they're all probably strings. I trust that the validation in the step immediately above has determined they are acceptable for SQL use.
STEP 4 - Bind SQL results to []map[string]interface{}{}
I Scan() the results of my queries into a sqlResult := []map[string]interface{}{} because I don't care if the value types are null, strings, float, ints or whatever. So the schema of an sqlResult might look like:
sqlResult =>
    [0] {
        "user_id":"1"
        "user_name":"Bob Smith"
        "age":"45"
        "weight":"34.22"
    },
    [1] {
        "user_id":"2"
        "user_name":"Jane Do"
        "age":nil
        "weight":"22.22"
    }
I wrote my own eager-load function so that I can bind more information, like so: EagerLoad("tblAddress", "JOIN ON tblAddress.user_id", &sqlResult), which then populates sqlResult with more information of type []map[string]interface{}{}, so that it looks like this:
sqlResult =>
    [0] {
        "user_id":"1"
        "user_name":"Bob Smith"
        "age":"45"
        "weight":"34.22"
        "addresses" =>
            [0] {
                "type":"home"
                "address1":"56 Front Street West"
                "postal":"L3L3L3"
                "lat":"34.3422242"
                "lng":"34.5523422"
            }
            [1] {
                "type":"work"
                "address1":"5 Kennedy Avenue"
                "postal":"L3L3L3"
                "lat":"34.3422242"
                "lng":"34.5523422"
            }
    },
    [1] {
        "user_id":"2"
        "user_name":"Jane Do"
        "age":nil
        "weight":"22.22"
        "addresses" =>
            [0] {
                "type":"home"
                "address1":"56 Front Street West"
                "postal":"L3L3L3"
                "lat":"34.3422242"
                "lng":"34.5523422"
            }
    }
STEP 5 - JSON Marshal and send HTTP Response
Then I do an http.ResponseWriter.Write(json.Marshal(sqlResult)) and output the data for my REST API.
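Spelled out with error handling, that one-liner looks roughly like this sketch (the handler name and route are placeholders, since json.Marshal actually returns (bytes, error)):

package main

import (
	"encoding/json"
	"net/http"
)

func usersHandler(w http.ResponseWriter, r *http.Request) {
	sqlResult := []map[string]interface{}{
		{"user_id": "1", "user_name": "Bob Smith"},
	}
	w.Header().Set("Content-Type", "application/json")
	// json.NewEncoder avoids the intermediate byte slice from json.Marshal
	if err := json.NewEncoder(w).Encode(sqlResult); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/users", usersHandler)
	http.ListenAndServe(":8080", nil)
}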
Recently, I've been revisiting articles with code samples that use structs in places where I would have used map[string]interface{}. For example, I wanted to refactor Step 2 with a more standard approach that other golang developers would use. So I found https://godoc.org/gopkg.in/go-playground/validator.v9, except all of its examples use structs. I also noticed that most blogs that talk about database/sql scan their SQL results into typed variables or structs with typed properties, as opposed to my Step 4, which just puts everything into map[string]interface{}.
Hence, I started writing this question. I feel map[string]interface{} is so useful because the majority of the time I don't really care what the data is, and it gives me the freedom in Step 4 to construct any data schema on the fly before I dump it as a JSON http response. I do all this with as little code verbosity as possible. But this means my code is not as ready to leverage Go's validation tools, and it doesn't seem to comply with the golang community's way of doing things.
So my question is, what do other golang developers do with regards to Step 2 and Step 4? Especially in Step 4... do Golang developers really encourage specifying the schema of the data through structs and strongly typed properties? Do they also specify structs with strongly typed properties along with every eager loading call they make? Doesn't that seem like so much more code verbosity?
It really depends on the requirements. As you have said, if you don't need to process the JSON that comes in from the request or out of the SQL results, then you can simply unmarshal into interface{} and marshal the SQL results back out as JSON.
For Step 2
Golang has a library for validating structs (the same structs you use to unmarshal JSON), using tags on the fields:
https://github.com/go-playground/validator
type Test struct {
    Field string `validate:"max=10,min=1"`
}
// max will be checked, then min
You can also look at the godoc for the validation library; it is a very good implementation of validation for JSON values using struct tags.
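For illustration, a minimal self-contained sketch of that struct-tag validation using the v9 API linked above; the User type and its rules are hypothetical:

package main

import (
	"fmt"

	validator "gopkg.in/go-playground/validator.v9"
)

type User struct {
	Email string `json:"email" validate:"required,email"`
	Age   int    `json:"age" validate:"gte=0,lte=130"`
}

func main() {
	validate := validator.New()
	u := User{Email: "not-an-email", Age: 200}
	if err := validate.Struct(u); err != nil {
		// err lists each field that failed and the rule it violated
		fmt.Println(err)
	}
}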
For STEP 4
Most of the time, we use structs when we know the format of our JSON, because they give us more control over the data types and other functionality. For example, if you want to drop a JSON field that you don't need, you should use a struct with the "-" json tag.
Now, you have said that you don't care whether the result coming from SQL is empty or not. But if you do care, it again comes down to using a struct: you can scan the result into a struct using the sql.Null* types, and you can also provide the omitempty json tag if you want to omit a field when marshaling the data for a response.
Struct values encode as JSON objects. Each exported struct field becomes a member of the object, using the field name as the object key, unless the field is omitted for one of the reasons given below.

The encoding of each struct field can be customized by the format string stored under the "json" key in the struct field's tag. The format string gives the name of the field, possibly followed by a comma-separated list of options. The name may be empty in order to specify options without overriding the default field name.

The "omitempty" option specifies that the field should be omitted from the encoding if the field has an empty value, defined as false, 0, a nil pointer, a nil interface value, and any empty array, slice, map, or string.

As a special case, if the field tag is "-", the field is always omitted. Note that a field with name "-" can still be generated using the tag "-,".
Example of json tags:

// Field appears in JSON as key "myName".
Field int `json:"myName"`

// Field appears in JSON as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `json:"myName,omitempty"`

// Field appears in JSON as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `json:",omitempty"`

// Field is ignored by this package.
Field int `json:"-"`

// Field appears in JSON as key "-".
Field int `json:"-,"`
As you can see from the information above, taken from the Go documentation for JSON marshaling, structs provide a great deal of control over JSON; that's why Golang developers usually use them.
As for map[string]interface{}: use it when you don't know the structure of the JSON coming from the server, or the types of its fields. Most Golang developers stick to structs wherever they can.
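To make the STEP 4 advice concrete, here is a hedged sketch: scanning a row into typed fields with sql.NullInt64, and using a pointer plus omitempty so a SQL NULL disappears from the JSON. The table and column names come from the question; everything else is illustrative:

package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
)

// User mirrors the columns used in the question's queries.
type User struct {
	UserID   int64   `json:"user_id"`
	UserName string  `json:"user_name"`
	Age      *int64  `json:"age,omitempty"` // pointer: a SQL NULL drops the field
	Weight   float64 `json:"weight"`
}

// loadUser scans one row into typed fields, using sql.NullInt64 for the
// nullable age column.
func loadUser(db *sql.DB, id int64) (*User, error) {
	var u User
	var age sql.NullInt64
	row := db.QueryRow("SELECT user_id, user_name, age, weight FROM tblUser WHERE user_id = ?", id)
	if err := row.Scan(&u.UserID, &u.UserName, &age, &u.Weight); err != nil {
		return nil, err
	}
	if age.Valid {
		u.Age = &age.Int64
	}
	return &u, nil
}

func main() {
	// Without a live database, just show how omitempty behaves:
	u := User{UserID: 2, UserName: "Jane Do", Weight: 22.22} // Age stays nil
	out, _ := json.Marshal(u)
	fmt.Println(string(out)) // {"user_id":2,"user_name":"Jane Do","weight":22.22}
}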

Delimiter for multiple json strings

I'd like to save multiple json strings to a file and separate them by a delimiter, such that it will be easy to read this list in, split on the delimiter and work with each json doc separately.
Serializing using a json array is not an option due to external reasons.
I would like to use a delimiter that is illegal in JSON (e.g. delimiting using a comma would be a bad idea since there are commas within the json strings).
Are there any characters that are not considered legal in JSON serialized strings?
I know it's not exactly what you needed, but you can use this SO answer to write the json string to a CSV, then read it on the other side using a good streaming CSV reader such as this one.
NDJSON
Have a look at NDJSON (Newline delimited JSON).
http://ndjson.org/
It seems to me to be exactly how you should do things, though it's not exactly what you asked for. (If you can't flatten your JSON objects into single lines then it's not for you though!) You asked for a delimiter that is not allowed in JSON. Newlines are allowed inside JSON documents, but they are never necessary, so any document can be written without them.
The format is used for log files amongst other things. I discovered it when looking at the Lichess API documentation.
You can start listening in to a broadcast stream of NDJSON data part way through, wait for the next newline character and then start processing objects as and when they arrive.
If you go for NDJSON, you are at least following a standard and I think you'd be hard pressed to find an alternative standard to follow.
Example NDJSON
{"some":"thing"}
{"foo":17,"bar":false,"quux":true}
{"may":{"include":"nested","objects":["and","arrays"]}}
An old question, but hopefully this answer will be useful.
Most JSON readers crash on the control character U+001E (ASCII 30, record separator, also called information separator two). They declare it an "unexpected token", so I guess it would have to be escaped to pass; since raw control characters like this must be escaped inside JSON strings anyway, it makes a workable delimiter.

How to convert between BSON and JSON, especially for those special objects?

I am not asking for any libraries to do this; I am writing the bson_to_json and json_to_bson code myself.
so here is the BSON specification.
For the regular double, document, array, and string types it is fine, and it is easy to convert between BSON and JSON.
However, for the special objects, such as:
Timestamp and UTC datetime:
If converting from JSON to BSON, how can I know they are a timestamp and a UTC datetime?
Regex (string, string), JavaScript code with scope (string, doc):
Their structures have multiple parts; how can I present those structures in JSON?
Binary data (generic, function, etc.):
How can I present the subtype of the binary data in JSON?
int32 and int64:
How can I present them in JSON, so BSON can know which is 32-bit and which is 64-bit?
Thanks
As we know, JSON cannot express these objects natively, so you will need to decide how you want the stringified versions of the BSON field types to be represented within the output of your ocaml driver.
Some of the data types are easy: Timestamp is not needed, since it is internal to sharding only, and JavaScript blocks are best left out, because they are best used only within system.js as saved functions for use in MapReduce jobs.
You also have to consider that some of these field types flow both in and out. What I mean by in and out is that some are used to specify input documents to be serialised to BSON, and some are part of output documents that need deserialising from BSON into JSON.
Regex is one field type you will most likely send down. As such, you will need to serialise your ocaml object to the BSON equivalent of {$regex: 'd', '$options': 'ig'} from the /d/ig PCRE representation.
Dates can be represented in JSON either as an ISODate string or as a timestamp. The output will be something like {$sec:556675,$usec:6787}, and you can convert $sec to whatever display you need.
Binary data in JSON can be represented by taking the data property (if I remember right) from the output document, encoding it to base64, and storing it as a string in the field.
int32 and int64 have no real distinction in JSON, except that 64-bit ints can be bigger than 2147483647, so I am unsure whether you can keep the two data types distinct there.
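If you would rather follow an existing convention than invent your own, MongoDB's Extended JSON wraps the ambiguous types in tagged objects, roughly along these lines (the exact spellings are from memory and worth double-checking against the current spec):

{"$regularExpression": {"pattern": "d", "options": "ig"}}
{"$binary": {"base64": "c2FtcGxl", "subType": "00"}}
{"$numberInt": "42"}
{"$numberLong": "42"}
{"$date": {"$numberLong": "1356351330000"}}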
That should help get you started.