There appear to be few options for validating the source JSON used when unmarshalling to a struct. By "validate" I mean three main things:
a required field exists in the JSON
the field is the correct type (e.g. don't force a string into an integer)
the field contains a valid value (value range / enum)
By "nested structs", I simply mean that an attribute in one struct has the type of another struct:
type Example struct {
    Attr1 int        `json:"attr1"`
    Attr2 ExampleToo `json:"attr2"`
}

type ExampleToo struct {
    Attr3 int `json:"attr3"`
}
And this JSON would be valid:
{"attr1": 5, "attr2": {"attr3": 0}}
To keep this simple, I'll focus on integers. The concept of "zero values" is the first issue. I could create an UnmarshalJSON method, which is detected by JSON packages, including the standard encoding/json package. The problem with this approach is that it does not handle nested structs well: once Example defines its own UnmarshalJSON, the decoding of nested fields is no longer done for you, so ExampleToo.UnmarshalJSON() is not called automatically when unmarshalling to an Example object. It would be possible to write an Example.UnmarshalJSON() that recursively handled validation, but that seems extremely complex, especially if ExampleToo is reused in many places.
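For illustration, a minimal sketch of one way to check that required keys exist and then hand the bytes back to the library. The alias type below is a common trick to avoid infinite recursion; with it, encoding/json does the real decoding and still calls UnmarshalJSON on nested types such as ExampleToo:

import (
    "encoding/json"
    "fmt"
)

func (e *Example) UnmarshalJSON(data []byte) error {
    // First pass: look only at which keys are present.
    var raw map[string]json.RawMessage
    if err := json.Unmarshal(data, &raw); err != nil {
        return err
    }
    for _, key := range []string{"attr1", "attr2"} {
        if _, ok := raw[key]; !ok {
            return fmt.Errorf("missing required field %q", key)
        }
    }
    // Second pass: alias has no UnmarshalJSON method, so this call does
    // not recurse, and nested types are still decoded by the library.
    type alias Example
    return json.Unmarshal(data, (*alias)(e))
}

Range and enum checks can then run after the second pass, once the typed values are in place.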
There also appear to be some packages, like go-playground/validator, where validation can be specified both as functions and as tags. However, these work on the resulting struct, not on the JSON itself. So if an integer field is tagged validate:"required" and the value is 0, validation returns an error, because 0 is both a valid value and the "zero value" for integers.
An example of the latter here: https://go.dev/play/p/zqSUksPzUiq
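For reference, a minimal reproduction of that zero-value problem, assuming go-playground/validator/v10:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/go-playground/validator/v10"
)

type Payload struct {
    Count int `json:"count" validate:"required"`
}

func main() {
    var p Payload
    // The field is present in the JSON...
    _ = json.Unmarshal([]byte(`{"count": 0}`), &p)
    // ...but "required" still fails, because 0 is the zero value for int.
    fmt.Println(validator.New().Struct(p))
}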
I could also use pointers for everything, checking for nil to detect missing values. The main problem with that is that it requires dereferencing on each use, and it is a fairly uncommon practice for primitives like integers and strings.
One thing I have also considered is a "sister struct" that uses pointers to validate required fields. The process would basically be: write a validation method for each struct, validate the sister struct, and, if that succeeds, deserialize into the main struct (without pointers). I haven't started on this; it's just a concept I've thought about, but I'm hoping there are better validation options.
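For what it's worth, a rough sketch of that sister-struct idea (all names here are illustrative):

import (
    "encoding/json"
    "errors"
)

type exampleCheck struct {
    Attr1 *int             `json:"attr1"`
    Attr2 *exampleTooCheck `json:"attr2"`
}

type exampleTooCheck struct {
    Attr3 *int `json:"attr3"`
}

func (c *exampleCheck) validate() error {
    switch {
    case c.Attr1 == nil:
        return errors.New("attr1 is required")
    case c.Attr2 == nil:
        return errors.New("attr2 is required")
    case c.Attr2.Attr3 == nil:
        return errors.New("attr2.attr3 is required")
    }
    return nil
}

// decodeExample validates via the pointer-based twin, then decodes the
// same bytes a second time into the pointer-free struct.
func decodeExample(data []byte) (Example, error) {
    var check exampleCheck
    if err := json.Unmarshal(data, &check); err != nil {
        return Example{}, err
    }
    if err := check.validate(); err != nil {
        return Example{}, err
    }
    var out Example
    err := json.Unmarshal(data, &out)
    return out, err
}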
So... is there a better way to do JSON/YAML input validation on nested structs? I'm happy to mix approaches, where, say, UnmarshalJSON is used for work like verifying fields exist, but I'd like to pass control back to the library so it continues to call UnmarshalJSON on subsequent nested structs. I'd also rather defer to the JSON library for casting values into the struct, etc.
I'm writing a library to deserialize a subset of JSON into predefined Python types.
I want to deserialize arbitrary JSON into an object that quacks like serde-json's Value. However, I don't want it to deserialize into Strings, Numbers, and Bools; instead, when the deserializer hits one of these, I would prefer it simply keep a reference to the respective byte string, so I can parse the byte strings into the correct arbitrary Python types efficiently (i.e. without the additional type conversion). Something like this:
use serde::Deserialize;
use serde_json::value::RawValue;
use serde_json::Map;

#[derive(Deserialize)]
pub enum MyValue<'a> {
    Null,
    Bytes(&'a RawValue),
    Array(Vec<MyValue<'a>>),
    Object(Map<String, MyValue<'a>>),
}
This will require writing a lot of traits so that it behaves like Value, and I'm not even sure if it won't just ignore deserializing the structural parts and put everything into a RawValue.
What is the cleanest way to do this?
Setup
I started a project using MySQL, and as such my project has some helper types that assist with dealing with nulls: both when unmarshalling incoming data on the API and inputting data into the DB, and then the inverse of that, pulling data out of the database and responding to the API with said data.
For the purposes of this question, we'll deal with a struct I have that represents a Character.
type Character struct {
    MongoID          primitive.ObjectID `bson:"_id" json:"-"`
    ID               uint64             `bson:"id" json:"id"`
    Name             string             `bson:"name" json:"name"`
    CorporationID    uint               `bson:"corporation_id" json:"corporation_id"`
    AllianceID       null.Uint          `bson:"alliance_id" json:"alliance_id,omitempty"`
    FactionID        null.Uint          `bson:"faction_id" json:"faction_id,omitempty"`
    SecurityStatus   float64            `bson:"security_status" json:"security_status"`
    NotModifiedCount uint               `bson:"not_modified_count" json:"not_modified_count"`
    UpdatePriority   uint               `bson:"update_priority" json:"update_priority"`
    Etag             null.String        `bson:"etag" json:"etag"`
    CachedUntil      time.Time          `bson:"cached_until" json:"cached_until"`
    CreatedAt        time.Time          `bson:"created_at" json:"created_at"`
    UpdatedAt        time.Time          `bson:"updated_at" json:"updated_at"`
}
I want to specifically concentrate on the AllianceID property of type null.Uint which is represented with the following struct:
// Uint is a nullable uint.
type Uint struct {
    Uint  uint
    Valid bool
}
In an API setup using JSON and MySQL (i.e. my setup, but this is not exclusive), this structure allows me to easily deal with values that are "nullable" without having to deal with pointers. I've always heard that it is best to avoid pointers, with the exception of pointers to structures (slices, slices of structs, maps of structs, etc.). If you have a primitive type (int, bool, float, etc.), try to avoid using a pointer to that primitive type.
This type has functions like MarshalJSON, UnmarshalJSON, Scan, and Value, with logic inside those functions that leverages the Valid property to determine what type of value to return. This works really well with this setup.
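For context, a rough sketch of what the Value and Scan halves of such a type usually look like (the real null package may differ in detail):

import (
    "database/sql/driver"
    "fmt"
)

// Value implements driver.Valuer: report NULL when Valid is false.
func (u Uint) Value() (driver.Value, error) {
    if !u.Valid {
        return nil, nil
    }
    return int64(u.Uint), nil
}

// Scan implements sql.Scanner: a nil source means the column was NULL.
func (u *Uint) Scan(src interface{}) error {
    if src == nil {
        u.Uint, u.Valid = 0, false
        return nil
    }
    n, ok := src.(int64)
    if !ok {
        return fmt.Errorf("null: cannot scan %T into Uint", src)
    }
    u.Uint, u.Valid = uint(n), true
    return nil
}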
Question
After some research, I've come to realize that Mongo would suit me better than a relational database, but due to the fluidity of a Mongo document (schemaless), I'm having a hard time understanding how to handle scenarios where a field may be missing, or how a property that in MySQL would normally be null (and that I can easily unmarshal on top of this struct and work with via the helper functions) should be handled. Also, when I set up a connection to Mongo and pulled a couple of rows from MySQL to create documents in Mongo from them, the BSON layer marshalled the entire type for AllianceID and stuck it in the DB.
Example:
"alliance_id" : {
"uint" : NumberLong(99007760),
"valid" : true
},
Whereas in MySQL, the Value function implementing the Valuer interface would be called and return 99007760, and that is the value in the DB.
Another scenario would be if Valid were false. In MySQL this would mean a null value: when the Value function is called, it would return nil, and the mysql driver would populate the field with NULL.
So my question is: how do I do this? Do I need to start from scratch, rebuild my models, and redo some of the logic in my application that leverages the Valid property, using pointers, or can I do what I am attempting to do with these helper types?
I do want to say that I have tried implementing the Marshaler and Unmarshaler interfaces from the bson package, and the alliance_id in the document is still set to the JSON-encoded version of this type as I outlined above. I wanted to point this out to rule out any suggestions of implementing those interfaces. If what I am attempting to achieve is counterintuitive to Mongo, please link some guides that can help me achieve what I'm attempting to do.
Thank you to all who can assist with this.
The easiest way to deal with optional fields like this is to use a pointer:
type Character struct {
    ID   *uint64 `bson:"id,omitempty" json:"id"`
    Name string  `bson:"name" json:"name"`
    ...
}
Above, the ID field will be written if it is non-nil. When unmarshalling, it will be set to a non-nil value if the database record has a value for it. If you omit the omitempty flag, marshalling this struct will write null to the database.
For strings, you may use omitempty to omit the field completely if it is empty. If you want to store empty strings, omit omitempty.
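A quick sketch of that behaviour with the mongo-driver bson package (assuming go.mongodb.org/mongo-driver/bson):

package main

import (
    "fmt"

    "go.mongodb.org/mongo-driver/bson"
)

type Doc struct {
    ID   *uint64 `bson:"id,omitempty"`
    Name string  `bson:"name"`
}

func main() {
    id := uint64(42)

    withID, _ := bson.Marshal(Doc{ID: &id, Name: "a"})
    withoutID, _ := bson.Marshal(Doc{Name: "b"})

    fmt.Println(bson.Raw(withID))    // the "id" element is present
    fmt.Println(bson.Raw(withoutID)) // "id" is absent thanks to omitempty
}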
The majority of my development experience has been with dynamically typed languages like PHP and JavaScript. I've been practicing with Golang for about a month now by re-creating some of my old PHP/JavaScript REST APIs in Golang. I feel like I'm not doing things the Golang way most of the time; or, more generally, I'm not used to working with strongly typed languages. I feel like I'm making excessive use of map[string]interface{}, and slices of them, to box up data as it comes in from HTTP requests or when it gets shipped out as JSON HTTP output. So what I'd like to know is: does what I'm about to describe go against the philosophy of Golang development? Or am I breaking the principles of developing with strongly typed languages?
Right now, about 90% of the program flow for the REST APIs I've rewritten in Golang can be described by these 5 steps.
STEP 1 - Receive Data
I receive HTTP form data from http.Request.ParseForm() as formvals := map[string][]string. Sometimes I store serialized JSON objects that need to be unmarshalled, e.g. json.Unmarshal([]byte(formvals["user_information"][0]), &jsonUserInfo) /* gives some complex json object */.
STEP 2 - Validate Data
I do validation on formvals to make sure all the data values are what I expect before using them in SQL queries. I treat everything as a string, then use regex to determine whether the string format is valid and the business logic holds (e.g. IsEmail, IsNumeric, IsFloat, IsCASLCompliant, IsEligibleForVoting, IsLibraryCardExpired, etc.). I've written my own regexes and custom functions for these types of validations.
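By way of example, such helpers might look like this sketch (the patterns are simplified placeholders, not the actual implementations):

import (
    "regexp"
    "strconv"
)

var emailRe = regexp.MustCompile(`^[^@\s]+@[^@\s]+\.[^@\s]+$`)

// IsEmail reports whether s looks like an email address.
func IsEmail(s string) bool { return emailRe.MatchString(s) }

// IsNumeric reports whether s parses as an integer.
func IsNumeric(s string) bool {
    _, err := strconv.Atoi(s)
    return err == nil
}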
STEP 3 - Bind Data to SQL Queries
I use Golang's database/sql.DB to take my formvals and bind them to my Query and Exec calls, like this: Query("SELECT * FROM tblUser WHERE user_id = ? AND user_birthday > ?", formvals["user_id"][0], jsonUserInfo["birthday"]). I never care about the data types I'm supplying as arguments to be bound, so they're all probably strings. I trust that the validation in the step immediately above has determined they are acceptable for SQL use.
STEP 4 - Bind SQL results to []map[string]interface{}{}
I Scan() the results of my queries into a sqlResult := []map[string]interface{}{} because I don't care whether the value types are null, strings, floats, ints, or whatever (a sketch of such a generic scan follows the example below). So the schema of an sqlResult might look like:
sqlResult =>
[0] {
"user_id":"1"
"user_name":"Bob Smith"
"age":"45"
"weight":"34.22"
},
[1] {
"user_id":"2"
"user_name":"Jane Do"
"age":nil
"weight":"22.22"
}
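For reference, the generic scan mentioned above usually looks something like this sketch (not the author's actual code):

import "database/sql"

// scanToMaps reads every row into a map keyed by column name.
func scanToMaps(rows *sql.Rows) ([]map[string]interface{}, error) {
    cols, err := rows.Columns()
    if err != nil {
        return nil, err
    }
    var out []map[string]interface{}
    for rows.Next() {
        vals := make([]interface{}, len(cols))
        ptrs := make([]interface{}, len(cols))
        for i := range vals {
            ptrs[i] = &vals[i] // Scan needs pointers to fill
        }
        if err := rows.Scan(ptrs...); err != nil {
            return nil, err
        }
        m := make(map[string]interface{}, len(cols))
        for i, c := range cols {
            m[c] = vals[i]
        }
        out = append(out, m)
    }
    return out, rows.Err()
}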
I wrote my own eager load function so that I can bind more information like so EagerLoad("tblAddress", "JOIN ON tblAddress.user_id",&sqlResult) which then populates sqlResult with more information of the type []map[string]interface{}{} such that it looks like this:
sqlResult =>
[0] {
"user_id":"1"
"user_name":"Bob Smith"
"age":"45"
"weight":"34.22"
"addresses"=>
[0] {
"type":"home"
"address1":"56 Front Street West"
"postal":"L3L3L3"
"lat":"34.3422242"
"lng":"34.5523422"
}
[1] {
"type":"work"
"address1":"5 Kennedy Avenue"
"postal":"L3L3L3"
"lat":"34.3422242"
"lng":"34.5523422"
}
},
[1] {
"user_id":"2"
"user_name":"Jane Do"
"age":nil
"weight":"22.22"
"addresses"=>
[0] {
"type":"home"
"address1":"56 Front Street West"
"postal":"L3L3L3"
"lat":"34.3422242"
"lng":"34.5523422"
}
}
STEP 5 - JSON Marshal and send HTTP Response
Then I do an http.ResponseWriter.Write() of the json.Marshal(sqlResult) output, and that is the response for my REST API.
Recently, I've been revisiting articles with code samples that use structs in places where I would have used map[string]interface{}. For example, I wanted to refactor Step 2 with a more standard approach that other Golang developers would use, so I found https://godoc.org/gopkg.in/go-playground/validator.v9, except all its examples are with structs. I also noticed that most blogs that talk about database/sql scan their SQL results into typed variables or structs with typed properties, as opposed to my Step 4, which just puts everything into map[string]interface{}.
Hence, I started writing this question. I find map[string]interface{} so useful because the majority of the time I don't really care what the data is, and it gives me the freedom in Step 4 to construct any data schema on the fly before I dump it as a JSON HTTP response, all with as little code verbosity as possible. But this means my code is not as ready to leverage Go's validation tools, and it doesn't seem to comply with the Golang community's way of doing things.
So my question is: what do other Golang developers do with regard to Step 2 and Step 4? Especially in Step 4... do Golang developers really encourage specifying the schema of the data through structs and strongly typed properties? Do they also specify structs with strongly typed properties along with every eager-loading call they make? Doesn't that seem like so much more code verbosity?
It really depends on the requirements. As you have said, if you don't need to process the JSON as it comes from the request or from the SQL results, then you can easily unmarshal into interface{} and marshal the JSON coming from the SQL results.
For Step 2
Golang has a library that validates structs (used to unmarshal JSON) via tags on the fields:
https://github.com/go-playground/validator
type Test struct {
    Field string `validate:"max=10,min=1"`
}

// max will be checked, then min
You can also see the godoc for the validator library; it is a very good implementation of validation for JSON values using struct tags.
For Step 4
Most of the time, we use structs when we know the format and data of our JSON, because they provide more control over the data types and other functionality. For example, if you want a field left out of your JSON output entirely, you can use a struct field with the json:"-" tag.
Now, you have said that you don't care whether the result coming from SQL is empty or not. But if you do care, it again comes down to using a struct: you can scan the result into a struct using the sql.Null* types, and map those to response fields (pointers, for example) tagged with omitempty if you want the value omitted when marshalling a response.
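A rough sketch of that flow, with illustrative names borrowed from the question (the encoding/json documentation quoted just after explains the omitempty behaviour):

import (
    "database/sql"
    "encoding/json"
)

type userRow struct {
    ID   int64
    Name string
    Age  sql.NullInt64 // nullable column
}

type userJSON struct {
    ID   int64  `json:"user_id"`
    Name string `json:"user_name"`
    Age  *int64 `json:"age,omitempty"` // omitted when the column was NULL
}

func fetchUser(db *sql.DB, id int64) ([]byte, error) {
    var r userRow
    err := db.QueryRow("SELECT user_id, user_name, age FROM tblUser WHERE user_id = ?", id).
        Scan(&r.ID, &r.Name, &r.Age)
    if err != nil {
        return nil, err
    }
    out := userJSON{ID: r.ID, Name: r.Name}
    if r.Age.Valid {
        out.Age = &r.Age.Int64
    }
    return json.Marshal(out)
}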
Struct values encode as JSON objects. Each exported struct field
becomes a member of the object, using the field name as the object
key, unless the field is omitted for one of the reasons given below.
The encoding of each struct field can be customized by the format
string stored under the "json" key in the struct field's tag. The
format string gives the name of the field, possibly followed by a
comma-separated list of options. The name may be empty in order to
specify options without overriding the default field name.
The "omitempty" option specifies that the field should be omitted from
the encoding if the field has an empty value, defined as false, 0, a
nil pointer, a nil interface value, and any empty array, slice, map,
or string.
As a special case, if the field tag is "-", the field is always
omitted. Note that a field with name "-" can still be generated using
the tag "-,".
Example of json tags
// Field appears in JSON as key "myName".
Field int `json:"myName"`
// Field appears in JSON as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `json:"myName,omitempty"`
// Field appears in JSON as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `json:",omitempty"`
// Field is ignored by this package.
Field int `json:"-"`
// Field appears in JSON as key "-".
Field int `json:"-,"`
As you can see from the above information in the encoding/json documentation, structs provide a great deal of control over JSON; that's why Golang developers most often use structs.
As for map[string]interface{}: use it when you don't know the structure of the JSON coming from the server, or the types of its fields. Most Golang developers stick to structs wherever they can.
Given: I have two structs of the same type, conforming to the Codable protocol.
The structs can be multi-level (with nested properties, which of course also conform to Codable). The type is not known at the time of implementation, so I treat it as a generic conforming to Codable.
One object is the "base" (say, received from the server); the second (actually a copy of the "base") is modified inside the application.
The intention is to send a request for saving the new data, but sending only the "diff" of the two structs. So only the fields that differ should be present in the resulting JSON.
The straightforward way, getting JSON strings for both structs and manipulating them, is understandable, but seems like a last-chance approach...
I've tried the approach with Mirror and recursion, but so far have managed to make it work only for the first level: on the second level of nesting I lose the type of the nested property (whether it's a struct or an array) and cannot cast it correctly...
I wonder if it can be done somehow with a custom encoder?
P.S.: the generic type should have all properties as Optionals, so it should not need to provide any explicit initializers.
Instead of your "last-chance approach" of matching JSON strings, you could use JSONSerialization.jsonObject to convert the JSON data to Foundation objects and perform your comparison on that higher level of abstraction (if that's what you meant in your question in the first place, then sorry, never mind).
Of course you'd pay an extra penalty of converting your Codable objects to data and then parsing that data into an object hierarchy.
In decode.go, it mentions:
// To unmarshal JSON into a value implementing the Unmarshaler interface,
// Unmarshal calls that value's UnmarshalJSON method, including
// when the input is a JSON null.
// Otherwise, if the value implements encoding.TextUnmarshaler
// and the input is a JSON quoted string, Unmarshal calls that value's
// UnmarshalText method with the unquoted form of the string.
What are the differences between UnmarshalText and UnmarshalJSON? Which one is preferred?
Simply:
UnmarshalText unmarshals a text-encoded value.
UnmarshalJSON unmarshals a JSON-encoded value.
Which is preferred depends on what you're doing.
JSON encoding is defined by RFC 7159. If you're consuming or producing JSON documents, you should use JSON encoding.
Text encoding has no standard, and is entirely implementation-dependent. Go implements Text-(un)marshalers for a few types, but there's no guarantee that any other application will understand these formats.
Text-encoding is most commonly used for things like URL query parameters, HTML forms, or other loosely-defined formats.
If you have a choice in the matter, using JSON is probably a better way to go. But again, it depends on what you're doing what makes the most sense.
As it relates to Go's JSON unmarshaler, the JSON unmarshaler will call a type's UnmarshalJSON method, if it's defined, and fall back to UnmarshalText if that is defined.
If you know you'll be using JSON, you should absolutely define an UnmarshalJSON function.
You would generally create an UnmarshalText only if you expected it to be used in non-JSON contexts, with the added benefit that the JSON unmarshaler would also use it, without having to duplicate it (if indeed the same implementation would work for JSON).
Per the documentation:
To unmarshal JSON into a value implementing the Unmarshaler interface,
Unmarshal calls that value's UnmarshalJSON method, including when the
input is a JSON null. Otherwise, if the value implements
encoding.TextUnmarshaler and the input is a JSON quoted string,
Unmarshal calls that value's UnmarshalText method with the unquoted
form of the string.
Meaning: if you want to take some JSON and unmarshal it with some custom logic, you would use UnmarshalJSON. If you want to take the text in a string field of a JSON document and decode that in some special way (i.e. parse it rather than just write it into a string-typed field), you would use UnmarshalText. For example, net.IP implements UnmarshalText so that you can provide a string value like "ipAddress": "1.2.3.4" and unmarshal it into a net.IP field. If net.IP did not implement UnmarshalText, you would only be able to unmarshal the JSON representation of the underlying type ([]byte).
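To make the fallback concrete, a small sketch with a hypothetical Celsius type: it implements only UnmarshalText, yet encoding/json still uses it when the input is a quoted string:

package main

import (
    "encoding/json"
    "fmt"
    "strconv"
    "strings"
)

type Celsius float64

// UnmarshalText parses values like "21.5 °C".
func (c *Celsius) UnmarshalText(text []byte) error {
    s := strings.TrimSpace(strings.TrimSuffix(string(text), "°C"))
    v, err := strconv.ParseFloat(s, 64)
    if err != nil {
        return err
    }
    *c = Celsius(v)
    return nil
}

func main() {
    var reading struct {
        Temp Celsius `json:"temp"`
    }
    // No UnmarshalJSON is defined, so the decoder sees a quoted string and
    // falls back to UnmarshalText with the unquoted form.
    if err := json.Unmarshal([]byte(`{"temp": "21.5 °C"}`), &reading); err != nil {
        panic(err)
    }
    fmt.Println(reading.Temp) // 21.5
}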