Using Python I can do the following:
r = requests.get(url_base + url)
jsonObj = json.loads(r.content.decode('raw_unicode_escape'))
print(jsonObj["PartDetails"]["ManufacturerPartNumber"])
Is there any way to perform the same thing using Golang?
Currently I need the following:
json.Unmarshal(body, &part_number_json)
fmt.Println("\r\nPartDetails: ", part_number_json.(map[string]interface{})["PartDetails"].(map[string]interface{})["ManufacturerPartNumber"])
That is to say, I need a type assertion for each JSON field, which is tiresome and makes the code unreadable.
I tried this using reflection, but it is not comfortable either.
EDIT:
Currently I use the following function:
func jso(json interface{}, fields ...string) interface{} {
    res := json
    for _, v := range fields {
        res = res.(map[string]interface{})[v]
    }
    return res
}
and call it like this:
fmt.Println("PartDetails: ", jso( part_number_json, "PartDetails", "ManufacturerPartNumber") )
There are third-party packages like gjson that can help you do that.
That said, note that Go is Go, and Python is Python. Go is statically typed, for better and worse. It takes more code to write simple JSON manipulation, but that code should be easier to maintain later, since it is more strictly typed and the compiler helps you check for errors. Types also serve as documentation; simply nesting dicts and arrays is completely arbitrary.
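For example, here is a minimal sketch using gjson (the sample JSON body and part number are made up for illustration):

package main

import (
    "fmt"

    "github.com/tidwall/gjson"
)

func main() {
    body := []byte(`{"PartDetails":{"ManufacturerPartNumber":"ABC-123"}}`)
    // A dotted path walks the nested objects without any type assertions.
    pn := gjson.GetBytes(body, "PartDetails.ManufacturerPartNumber")
    fmt.Println("PartDetails:", pn.String())
}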
I have found the following resource very helpful in creating a struct from JSON. Unmarshaling should only match the fields you have defined in the struct, so take what you need and leave the rest if you like.
https://mholt.github.io/json-to-go/
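As a sketch of that approach (the struct shape is inferred from the JSON in the question; unknown fields such as Extra are simply ignored):

package main

import (
    "encoding/json"
    "fmt"
)

// Only the fields declared here are filled in; everything else in the
// payload is ignored by Unmarshal.
type PartResponse struct {
    PartDetails struct {
        ManufacturerPartNumber string `json:"ManufacturerPartNumber"`
    } `json:"PartDetails"`
}

func main() {
    body := []byte(`{"PartDetails":{"ManufacturerPartNumber":"ABC-123","Extra":"ignored"}}`)
    var resp PartResponse
    if err := json.Unmarshal(body, &resp); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(resp.PartDetails.ManufacturerPartNumber)
}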
I'm trying to encode and decode structs. I've searched around quite a bit, and a lot of the questions regarding this topic are from people who want to encode primitives or simple structs. What I want is to encode a struct that could look like this:
type Person struct {
    Name string
    Id   int
    file *os.File
    keys *ecdsa.PrivateKey
}
The name and the ID are no problem, and I can encode them using either gob or JSON marshalling. However, when I want to encode the file, for example using gob with gob.Register(os.File{}), I get an error that file has no exported fields, because the fields in the os.File struct are lower case. I would use a function like this:
func encode(p Person) []byte {
    buf := bytes.Buffer{}
    enc := gob.NewEncoder(&buf)
    gob.Register(big.Int{})
    ...
    err := enc.Encode(&p)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("uncompressed size (bytes): ", len(buf.Bytes()))
    return buf.Bytes()
}
I'm not sure if it's correct to register within the encode function; however, it seems odd that I have to register every struct that is referenced by the one specific struct I want to encode. For a file, for example, I would have to register a ton of types, which doesn't seem like the correct way to do it. Is there a simple way to encode and decode structs that have a bit more complexity?
If I use json marshalling to do this it will always return nil if I use a pointer to another struct. Is there a way to get all the information I want?
Thanks!
Imagine your struct points to a file at /foo/bar/baz.txt and you serialize your struct. Then you send it to another computer (perhaps on a different operating system) and re-create the struct. What do you expect?
What if you serialize, delete the file (or update the content) and re-create the struct in the same computer?
One solution is to store the content of the file.
Another solution is to store the path to the file; when you deserialize the struct, you can try to reopen the file. You can add a safety layer by storing a hash of the content, the size, and other metadata to check whether the file is the same.
The answers to these questions will guide you to the best implementation.
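As a minimal sketch of that second approach (FileRef and NewFileRef are made-up names for illustration; they stand in for the unexported *os.File field):

package main

import (
    "bytes"
    "crypto/sha256"
    "encoding/gob"
    "fmt"
    "os"
)

// FileRef is a hypothetical serializable stand-in for *os.File: it stores
// the path plus content metadata so the file can be reopened and verified
// after decoding.
type FileRef struct {
    Path string
    Hash [32]byte
    Size int64
}

// NewFileRef reads the file once to record its hash and size.
func NewFileRef(path string) (FileRef, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return FileRef{}, err
    }
    return FileRef{Path: path, Hash: sha256.Sum256(data), Size: int64(len(data))}, nil
}

type Person struct {
    Name string
    Id   int
    File FileRef // exported, so gob can encode it without Register calls
}

func main() {
    p := Person{Name: "Alice", Id: 1}
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(&p); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("encoded size (bytes):", buf.Len())
}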
For a project of mine I have to deal with XML files over 2GB. I would like to store the data in MongoDB, and I have decided to give it a try using the Go language. But I have a bit of trouble figuring out the best way to do this in Go.
I've seen a lot of examples with a fixed XML structure, but the data structure I get is dynamic, so using some kind of predefined struct isn't going to work for me.
Now I stumbled upon this package: https://github.com/basgys/goxml2json which looks very promising, but there are a few things I don't get:
The example given in the readme uses an XML string, but I don't see anything in the code that accepts a file.
Given that I have 2GB XML files, I cannot simply load the whole XML file into memory; this would flood my server.
It is worth saying that I only have to convert the XML data to its JSON form once, so that I can store it in MongoDB.
Do any of you have ideas on how to efficiently parse XML files to JSON using Go?
Go provides a built-in XML stream parser, encoding/xml.Decoder.
A typical usage pattern is to read tokens until you find something of interest and then unmarshal the token into an XML tagged struct, then handle that data accordingly. This way you're only loading into memory what is required for a single XML token or to unmarshal an interesting bit of data.
For example (Go Playground):
package main

import (
    "encoding/json"
    "encoding/xml"
    "fmt"
    "io"
    "log"
    "strings"
)

// Person matches the elements of interest in the stream.
type Person struct {
    Id   string `xml:"id,attr"`
    Name string `xml:"name"`
    Age  int    `xml:"age"`
}

func check(err error) {
    if err != nil {
        log.Fatal(err)
    }
}

func main() {
    xmlStream := strings.NewReader(
        `<people><person id="123"><name>Alice</name><age>30</age></person></people>`)
    d := xml.NewDecoder(xmlStream)
    for {
        // Decode the next token from the stream...
        token, err := d.Token()
        if err == io.EOF {
            break
        }
        check(err)
        // Switch behavior based on the token type.
        switch el := token.(type) {
        case xml.StartElement:
            // Handle "person" start elements by unmarshaling from XML...
            if el.Name.Local == "person" {
                var p Person
                err := d.DecodeElement(&p, &el)
                check(err)
                // ...then marshal to JSON...
                jsonbytes, err := json.Marshal(p)
                check(err)
                // ...then take other action (e.g. insert into database).
                fmt.Printf("OK: %s\n", string(jsonbytes))
                // OK: {"Id":"123","Name":"Alice","Age":30}
            }
        }
    }
}
The majority of my development experience has been with dynamically typed languages like PHP and JavaScript. I've been practicing with Golang for about a month now by re-creating some of my old PHP/JavaScript REST APIs in Golang. I feel like I'm not doing things the Golang way most of the time, or, more generally, that I'm not used to working with strongly typed languages. I feel like I'm making excessive use of map[string]interface{} and slices of them to box up data as it comes in from HTTP requests or when it gets shipped out as JSON HTTP output. So what I'd like to know is: does what I'm about to describe go against the philosophy of Golang development? Or am I breaking the principles of developing with strongly typed languages?
Right now, about 90% of the program flow for the REST APIs I've rewritten in Golang can be described by these 5 steps.
STEP 1 - Receive Data
I receive HTTP form data from http.Request.ParseForm() as formvals := map[string][]string. Sometimes I will store serialized JSON objects that need to be unmarshaled, like jsonUserInfo := json.Unmarshal(formvals["user_information"][0]) /* gives some complex json object */.
STEP 2 - Validate Data
I do validation on formvals to make sure all the data values are what I expect before using them in SQL queries. I treat everything as a string, then use regex to determine whether the string format and business logic are valid (e.g. IsEmail, IsNumeric, IsFloat, IsCASLCompliant, IsEligibleForVoting, IsLibraryCardExpired, etc.). I've written my own regexes and custom functions for these types of validations.
STEP 3 - Bind Data to SQL Queries
I use Golang's database/sql.DB to take my formvals and bind them to my Query and Exec functions, like this: Query("SELECT * FROM tblUser WHERE user_id = ? AND user_birthday > ?", formvals["user_id"][0], jsonUserInfo["birthday"]). I never care about the data types I'm supplying as arguments to be bound, so they're probably all strings. I trust that the validation in the step immediately above has determined they are acceptable for SQL use.
STEP 4 - Bind SQL results to []map[string]interface{}{}
I Scan() the results of my queries into sqlResult := []map[string]interface{}{} because I don't care whether the value types are null, strings, floats, ints, or whatever. So the schema of an sqlResult might look like:
sqlResult =>
    [0] {
        "user_id":"1"
        "user_name":"Bob Smith"
        "age":"45"
        "weight":"34.22"
    },
    [1] {
        "user_id":"2"
        "user_name":"Jane Do"
        "age":nil
        "weight":"22.22"
    }
I wrote my own eager-load function so that I can bind more information, like so: EagerLoad("tblAddress", "JOIN ON tblAddress.user_id", &sqlResult), which then populates sqlResult with more information of type []map[string]interface{}{}, such that it looks like this:
sqlResult =>
    [0] {
        "user_id":"1"
        "user_name":"Bob Smith"
        "age":"45"
        "weight":"34.22"
        "addresses"=>
            [0] {
                "type":"home"
                "address1":"56 Front Street West"
                "postal":"L3L3L3"
                "lat":"34.3422242"
                "lng":"34.5523422"
            }
            [1] {
                "type":"work"
                "address1":"5 Kennedy Avenue"
                "postal":"L3L3L3"
                "lat":"34.3422242"
                "lng":"34.5523422"
            }
    },
    [1] {
        "user_id":"2"
        "user_name":"Jane Do"
        "age":nil
        "weight":"22.22"
        "addresses"=>
            [0] {
                "type":"home"
                "address1":"56 Front Street West"
                "postal":"L3L3L3"
                "lat":"34.3422242"
                "lng":"34.5523422"
            }
    }
STEP 5 - JSON Marshal and send HTTP Response
Then I do http.ResponseWriter.Write(json.Marshal(sqlResult)) and output the data for my REST API.
Recently, I've been revisiting articles with code samples that use structs in places where I would have used map[string]interface{}. For example, I wanted to refactor Step 2 with a more standard approach that other Golang developers would use, so I found https://godoc.org/gopkg.in/go-playground/validator.v9, except all its examples are with structs. I also noticed that most blogs that talk about database/sql scan their SQL results into typed variables or structs with typed properties, as opposed to my Step 4, which just puts everything into map[string]interface{}.
Hence, I started writing this question. I feel that map[string]interface{} is so useful because, the majority of the time, I don't really care what the data is, and it gives me the freedom in Step 4 to construct any data schema on the fly before I dump it as a JSON HTTP response. I do all this with as little code verbosity as possible. But this means my code is not as ready to leverage Go's validation tools, and it doesn't seem to comply with the Golang community's way of doing things.
So my question is: what do other Golang developers do with regard to Step 2 and Step 4? Especially in Step 4: do Golang developers really encourage specifying the schema of the data through structs and strongly typed properties? Do they also specify structs with strongly typed properties for every eager-loading call they make? Doesn't that seem like so much more code verbosity?
It really depends on the requirements. As you have said, if you don't need to process the JSON that comes from the request or from the SQL results, then you can easily unmarshal it into interface{} and marshal the JSON coming from the SQL results.
For Step 2
Golang has a library for validating structs, which works with the tagged struct fields you use to unmarshal JSON:
https://github.com/go-playground/validator
type Test struct {
    Field string `validate:"max=10,min=1"`
}
// max will be checked, then min
You can also consult the godoc for the validator library. It is a very good implementation of validation for JSON values using struct tags.
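A minimal sketch of how it is typically wired up (the User type and its rules are made up for illustration):

package main

import (
    "fmt"

    "gopkg.in/go-playground/validator.v9"
)

// The validate tags declare the rules checked by validator.
type User struct {
    Email string `json:"email" validate:"required,email"`
    Age   int    `json:"age" validate:"gte=0,lte=130"`
}

func main() {
    validate := validator.New()
    u := User{Email: "not-an-email", Age: 200}
    if err := validate.Struct(u); err != nil {
        // The error lists every field that failed its rule.
        fmt.Println(err)
    }
}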
For STEP 4
Most of the time, we use structs when we know the format and data of our JSON, because they provide more control over the data types and other functionality. For example, if you want to drop a field from your JSON output entirely, you can use a struct with the "-" json tag.
Now, you have said that you don't care whether the result coming from SQL is empty or not. But if you do, it again comes down to using structs: you can scan the result into a struct using the sql.Null* types. With those you can also provide the omitempty json tag if you want to omit the field when marshaling the data for a response.
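Here is a minimal sketch of that pattern (userRow, scanUser, and the column names are made up, mirroring the example data above):

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
)

// userRow scans nullable columns through sql.Null* helpers; the pointer
// fields carry the values into JSON, and omitempty drops the NULLs.
type userRow struct {
    UserID   int64    `json:"user_id"`
    UserName string   `json:"user_name"`
    Age      *int64   `json:"age,omitempty"`
    Weight   *float64 `json:"weight,omitempty"`
}

func scanUser(rows *sql.Rows) (userRow, error) {
    var u userRow
    var age sql.NullInt64
    var weight sql.NullFloat64
    if err := rows.Scan(&u.UserID, &u.UserName, &age, &weight); err != nil {
        return u, err
    }
    if age.Valid {
        u.Age = &age.Int64
    }
    if weight.Valid {
        u.Weight = &weight.Float64
    }
    return u, nil
}

func main() {
    // Illustration only: marshal a row where age and weight were NULL.
    u := userRow{UserID: 2, UserName: "Jane Do"}
    b, _ := json.Marshal(u)
    fmt.Println(string(b)) // {"user_id":2,"user_name":"Jane Do"}
}

The encoding/json documentation describes the tag behavior this relies on: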
Struct values encode as JSON objects. Each exported struct field
becomes a member of the object, using the field name as the object
key, unless the field is omitted for one of the reasons given below.
The encoding of each struct field can be customized by the format
string stored under the "json" key in the struct field's tag. The
format string gives the name of the field, possibly followed by a
comma-separated list of options. The name may be empty in order to
specify options without overriding the default field name.
The "omitempty" option specifies that the field should be omitted from
the encoding if the field has an empty value, defined as false, 0, a
nil pointer, a nil interface value, and any empty array, slice, map,
or string.
As a special case, if the field tag is "-", the field is always
omitted. Note that a field with name "-" can still be generated using
the tag "-,".
Example of json tags
// Field appears in JSON as key "myName".
Field int `json:"myName"`
// Field appears in JSON as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `json:"myName,omitempty"`
// Field appears in JSON as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `json:",omitempty"`
// Field is ignored by this package.
Field int `json:"-"`
// Field appears in JSON as key "-".
Field int `json:"-,"`
As you can see from the information above, taken from the Go documentation for JSON marshaling, structs provide a great deal of control over JSON. That's why Golang developers most often use structs.
As for map[string]interface{}: you should use it when you don't know the structure of the JSON coming from the server or the types of its fields. Most Golang developers stick to structs wherever they can.
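For completeness, a minimal sketch of the map[string]interface{} route for a payload whose structure is not known in advance (the sample body is made up):

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Unknown structure: fall back to map[string]interface{}.
    body := []byte(`{"user_id":2,"tags":["a","b"],"meta":{"source":"api"}}`)
    var payload map[string]interface{}
    if err := json.Unmarshal(body, &payload); err != nil {
        fmt.Println(err)
        return
    }
    // Every access needs a type assertion; that is the trade-off
    // discussed above.
    if meta, ok := payload["meta"].(map[string]interface{}); ok {
        fmt.Println(meta["source"]) // api
    }
}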
I want to parse and validate (custom) JSON configuration files within Go. I would like to be able to parse the file into a struct and validate that:
no unexpected keys are present in the JSON file (in particular to detect typos)
certain keys are present and have non-empty values
In case the validation fails (or in case of a syntax error), I want to print an error message to the user that explains as detailed as possible where in the file the error happened (e.g. by stating the line number if possible).
The JSON parser built into Go seems to just silently ignore unexpected keys. I also tried using jsonpb (Protobuf) to deserialize the JSON, which returns an error in case of an unexpected key, but does not report the position.
To check for non-empty values, I could use an existing validation library, but I haven't seen any that reports detailed error messages. Alternatively, I could write custom code that validates the data returned by the built-in JSON parser, but it would be nice if there was a generic way.
Is there a simple way to get the desired behaviour?
Have you looked at JSON schema?
JSON Schema describes your JSON data format.
I believe it is in Draft stage, but a lot of languages have validation libraries. Here's a Go implementation:
https://github.com/xeipuuv/gojsonschema
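A minimal sketch using gojsonschema (the schema and document are made up; "required" and "additionalProperties": false cover the two checks you listed):

package main

import (
    "fmt"

    "github.com/xeipuuv/gojsonschema"
)

func main() {
    // "required" enforces presence; "additionalProperties": false rejects
    // unexpected keys such as typos.
    schema := gojsonschema.NewStringLoader(`{
        "type": "object",
        "required": ["expected_key"],
        "additionalProperties": false,
        "properties": {"expected_key": {"type": "string", "minLength": 1}}
    }`)
    doc := gojsonschema.NewStringLoader(`{"expected_key":"a", "unexpected_keyy":"b"}`)
    result, err := gojsonschema.Validate(schema, doc)
    if err != nil {
        fmt.Println(err)
        return
    }
    if result.Valid() {
        fmt.Println("document is valid")
        return
    }
    for _, desc := range result.Errors() {
        fmt.Println("-", desc)
    }
}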
You can also use the encoding/json JSON Decoder and force errors when unexpected keys are found. It won't tell you the line number, but it's a start and you don't require any external package.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
)

type MyType struct {
    ExpectedKey string `json:"expected_key"`
}

func main() {
    jsonBytes := []byte(`{"expected_key":"a", "unexpected_key":"b"}`)
    var typePlaceholder MyType

    // Create JSON decoder
    dec := json.NewDecoder(bytes.NewReader(jsonBytes))
    // Force errors when unexpected keys are present
    dec.DisallowUnknownFields()

    if err := dec.Decode(&typePlaceholder); err != nil {
        fmt.Println(err.Error())
    }
}
You can see that working in playground here
Is it possible to reconstruct a JSON object in XQuery? Using XML, it's possible to use computed constructors to rebuild an element:
element { node-name($some-element) } {
  (: Do stuff with $some-element/(@*|node()) :)
}
But using JSON objects, it seems that it's not possible to reconstruct properties. I would like to do something like this, but this throws a syntax error:
object-node {
for $p in $some-json-object/*
return node-name($p) : $p
}
It looks like it's possible to work around that by mutating the JSON object:
let $obj := json:object(document{xdmp:from-json($json)}/*)
let $_put := map:put($obj, 'prop-name', $prop-val)
return xdmp:to-json($obj)/node()
But this has some obvious limitations.
I'm afraid using json:object really is the way to go here. It could be worse, though: you only need a few lines to copy all the JSON properties. You also don't need that document{} constructor, nor the extra type cast to json:object; xdmp:from-json already returns a json:object:
let $org := xdmp:from-json($json)
let $new := json:object()
let $_ :=
  for $key in map:keys($org)
  return map:put($new, $key, map:get($org, $key))
return xdmp:to-json($new)/node()
HTH!
This may be helpful for you: http://docs.marklogic.com/guide/app-dev/json
However, I often take a different approach in XQuery (being comfortable with XML). This may get some push-back from people here, but it is my approach:
Construct what you like in XML and then transform it. If you build your XML in the http://marklogic.com/xdmp/json/basic namespace, then you can transform it to whatever complex JSON you desire using json:transform-to-json, since all of the hints about data types are in the attributes of the XML. The nice thing about this approach is that it is a good middle format: I can transform to JSON, or I can apply an XSLT transformation and get other XML if I desire.
It should be noted that json:transform-to-json has other modes of operation and can get data-type hints from your own schema as well, but I prefer the built-in schema.
I stumbled across this blog post by paxstonhare that uses a non-functional approach, rebuilding new JSON objects during the tree walk by mutating them with map:put():
http://developer.marklogic.com/blog/walking-among-the-json-trees