Golang serialize/deserialize an Empty Array not as null - json

Is there a way to serialize an empty array attribute (not null) of a struct and deserialize it back to an empty array (not null again)?
Considering that an empty array is actually a pointer to null, is the perceptible initial difference between an empty array and a null pointer completely lost after serialization/deserialization?
The worst practical scenario is that I show an empty array attribute to my REST client as the JSON "att":[] the first time, but after caching the record to Redis and recovering it, the same attribute is shown to my client as "att":null, breaking the contract and causing a lot of confusion.
Summing up: is it possible to show Customer 2's addresses as an empty JSON array after serialize/deserialize? => https://play.golang.org/p/TVwvTWDyHZ

I am pretty sure the easiest way you can do it is to change your line
var cust1_recovered Customer
to
cust1_recovered := Customer{Addresses: []Address{}}
Unless I am reading your question incorrectly, I believe this is your desired output:
ORIGINAL Customer 2 {
"Name": "Customer number 2",
"Addresses": []
}
RECOVERED Customer 2 {
"Name": "Customer number 2",
"Addresses": []
}
Here is a playground to verify with: https://play.golang.org/p/T9K1VSTAM0
The limitation here, as @mike pointed out, is that if Addresses is truly nil before you encode, then once you decode you do not get the JSON equivalent null; you instead end up with an empty list.
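A minimal sketch of when the pre-initialized destination matters (type names are assumed to match the playground): json.Unmarshal leaves any field that is absent from the input untouched, so a pre-initialized empty slice survives the decode, whereas the zero value (nil) would marshal back as null.
package main

import (
    "encoding/json"
    "fmt"
)

type Address struct{ Street string }

type Customer struct {
    Name      string
    Addresses []Address
}

func main() {
    // Cached JSON without the Addresses field (e.g. dropped by omitempty).
    cached := []byte(`{"Name":"Customer number 2"}`)

    // Zero-value destination: Addresses stays nil and re-encodes as null.
    var plain Customer
    _ = json.Unmarshal(cached, &plain)
    out1, _ := json.Marshal(plain)
    fmt.Println(string(out1)) // {"Name":"Customer number 2","Addresses":null}

    // Pre-initialized destination: Addresses stays an empty, non-nil slice.
    recovered := Customer{Addresses: []Address{}}
    _ = json.Unmarshal(cached, &recovered)
    out2, _ := json.Marshal(recovered)
    fmt.Println(string(out2)) // {"Name":"Customer number 2","Addresses":[]}
}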

No, it's not possible. To understand why, let's look at the Go spec. For it to output two different results for empty vs. nil, any serialization method would need to be able to tell the difference between the two. However, according to the Go spec,
Two array types are identical if they have identical element types and
the same array length.
Since neither contains any elements and both have the same element type, the only difference could be in length, but the spec also states that
The length of a nil slice, map or channel is 0
So through comparison, it would be unable to tell. Of course, there are methods other than comparison, so to really put the nail in the coffin, here's the portion that shows they have the same underlying representation. The spec also guarantees that
A struct or array type has size zero if it contains no fields (or
elements, respectively) that have a size greater than zero.
so the actual allocated structure of a zero length array has to be of size zero. If it's of size zero, it can't store any information about whether it's empty or nil, so the object itself can't know either. In short, there is no difference between a nil array and a zero length array.
The "perceptible initial difference between an empty array and pointer to null" is not lost during serialization/deserialization, it's lost from the moment initial assignment is complete.

For another solution, we have forked encoding/json to add a new method called MarshalSafeCollections(). This method marshals slices/arrays/maps as their respective empty values ([]/{}). Since most of our instantiation happens in the data layer, we did not want to add code that fixes issues in our HTTP response layer. The changes to the library are minimal and follow Go releases.
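If forking encoding/json is not an option, a similar effect can be sketched with a custom MarshalJSON method on the struct itself (this assumes the Customer/Address types from the question and is not the forked library's API):
// MarshalJSON normalizes a nil Addresses slice to an empty one before
// delegating to the default encoder.
func (c Customer) MarshalJSON() ([]byte, error) {
    type alias Customer // the alias type avoids recursing into this method
    out := alias(c)
    if out.Addresses == nil {
        out.Addresses = []Address{}
    }
    return json.Marshal(out)
}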

Related

Swift unable to preserve order in String made from JSON for hash verification

We receive a JSON object from the network along with a hash value of that object. In order to verify the hash we need to turn the JSON into a string and then make a hash out of it, preserving the order of the elements the way they are in the JSON.
Say we have:
[
{"site1":
{"url":"https://this.is.site.com/",
"logoutURL":"",
"loadStart":[],
"loadStop":[{"someMore":"smthelse"}],
"there's_more": ... }
},
{"site2":
....
}
]
The Android app is able to get the same hash value, and while debugging it we fed the same simple string into both algorithms and got the same hash out of it.
The difference happens because dictionaries are an unordered structure.
While debugging we see that just before feeding the string into the hash algorithm, the string looks like the original JSON, just without the indentation, which means it preserves the order of items (on Android, that is):
[{"site1":{"url":"https://this.is.site.com/", ...
I have tried many approaches by now, but I am not able to achieve the same: the string I get has a different order and therefore results in a different hash. Is there a way to achieve this?
UPDATE
It appears the problem is slightly different - thanks to @Rob Napier's answer below: I need a hash of only a part of the incoming string (the part that has JSON in it), which means that to get that part I first need to parse it into JSON or a struct, and after that - when getting the string value of it - the order of items is lost.
Using JSONSerialization and JSONDecoder (which uses JSONSerialization), it's not possible to reproduce the input data. But this isn't needed. What you're receiving is a string in the first place (as an NSData). Just don't get rid of it. You can parse the data into JSON without throwing away the data.
It is possible to create JSON parsers from scratch in Swift that maintain round-trip support (I have a sketch of such a thing at RNJSON). JSON isn't really that hard to parse. But what you're describing is a hash of "the thing you received." Not a hash of "the re-serialized JSON."

Excessive use of map[string]interface{} in go development?

The majority of my development experience has been in dynamically typed languages like PHP and JavaScript. I've been practicing with Golang for about a month now by re-creating some of my old PHP/JavaScript REST APIs in Golang. I feel like I'm not doing things the Golang way most of the time; or, more generally, I'm not used to working with strongly typed languages. I feel like I'm making excessive use of map[string]interface{} and slices of them to box up data as it comes in from HTTP requests or when it gets shipped out as JSON HTTP output. So what I'd like to know is whether what I'm about to describe goes against the philosophy of Golang development, or whether I'm breaking the principles of developing with strongly typed languages.
Right now, about 90% of the program flow for the REST APIs I've rewritten in Golang can be described by these 5 steps.
STEP 1 - Receive Data
I receive http form data from http.Request.ParseForm() as formvals := map[string][]string. Sometimes I will store serialized JSON objects that need to be unmarshaled like jsonUserInfo := json.Unmarshal(formvals["user_information"][0]) /* gives some complex json object */.
STEP 2 - Validate Data
I do validation on formvals to make sure all the data values are what I expect before using them in SQL queries. I treat everything as a string, then use regex to determine whether the string format and business logic are valid (e.g. IsEmail, IsNumeric, IsFloat, IsCASLCompliant, IsEligibleForVoting, IsLibraryCardExpired, etc.). I've written my own regexes and custom functions for these types of validations.
STEP 3 - Bind Data to SQL Queries
I use golang's database/sql.DB to take my formvals and bind them to my Query and Exec functions like this: Query("SELECT * FROM tblUser WHERE user_id = ? AND user_birthday > ?", formvals["user_id"][0], jsonUserInfo["birthday"]). I never care about the data types I'm supplying as arguments to be bound, so they're all probably strings. I trust that the validation in the step immediately above has determined they are acceptable for SQL use.
STEP 4 - Bind SQL results to []map[string]interface{}{}
I Scan() the results of my queries into sqlResult := []map[string]interface{}{} because I don't care whether the value types are null, strings, floats, ints or whatever. So the schema of an sqlResult might look like:
sqlResult =>
[0] {
"user_id":"1"
"user_name":"Bob Smith"
"age":"45"
"weight":"34.22"
},
[1] {
"user_id":"2"
"user_name":"Jane Do"
"age":nil
"weight":"22.22"
}
I wrote my own eager-load function so that I can bind more information, like so: EagerLoad("tblAddress", "JOIN ON tblAddress.user_id", &sqlResult), which then populates sqlResult with more information of type []map[string]interface{}{} such that it looks like this:
sqlResult =>
[0] {
"user_id":"1"
"user_name":"Bob Smith"
"age":"45"
"weight":"34.22"
"addresses"=>
[0] {
"type":"home"
"address1":"56 Front Street West"
"postal":"L3L3L3"
"lat":"34.3422242"
"lng":"34.5523422"
}
[1] {
"type":"work"
"address1":"5 Kennedy Avenue"
"postal":"L3L3L3"
"lat":"34.3422242"
"lng":"34.5523422"
}
},
[1] {
"user_id":"2"
"user_name":"Jane Do"
"age":nil
"weight":"22.22"
"addresses"=>
[0] {
"type":"home"
"address1":"56 Front Street West"
"postal":"L3L3L3"
"lat":"34.3422242"
"lng":"34.5523422"
}
}
STEP 5 - JSON Marshal and send HTTP Response
Then I do http.ResponseWriter.Write(json.Marshal(sqlResult)) and output the data for my REST API.
Recently, I've been revisiting articles with code samples that use structs in places where I would have used map[string]interface{}. For example, I wanted to refactor Step 2 with a more standard approach that other golang developers would use. So I found https://godoc.org/gopkg.in/go-playground/validator.v9, except all of its examples are with structs. I also noticed that most blogs that talk about database/sql scan their SQL results into typed variables or structs with typed properties, as opposed to my Step 4, which just puts everything into map[string]interface{}.
Hence, I started writing this question. I feel map[string]interface{} is so useful because, the majority of the time, I don't really care what the data is, and it gives me the freedom in Step 4 to construct any data schema on the fly before I dump it as a JSON HTTP response. I do all this with as little code verbosity as possible. But this means my code is not as ready to leverage Go's validation tools, and it doesn't seem to comply with the golang community's way of doing things.
So my question is, what do other golang developers do with regards to Step 2 and Step 4? Especially in Step 4... do Golang developers really encourage specifying the schema of the data through structs and strongly typed properties? Do they also specify structs with strongly typed properties along with every eager-loading call they make? Doesn't that seem like so much more code verbosity?
It really depends on the requirements. As you have said, if you don't need to process the JSON that comes from the request or from the SQL results, then you can easily unmarshal into interface{} and marshal the SQL results straight back out.
For Step 2
Golang has a library for validating the structs you unmarshal JSON into, using tags on the fields:
https://github.com/go-playground/validator
type Test struct {
    Field string `validate:"max=10,min=1"`
}
// max will be checked then min
You can also check the godoc for the validator library; it is a very good implementation of validation for JSON values using struct tags.
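A minimal runnable sketch (assuming validator v10; the field names and rules here are illustrative, not from the question):
package main

import (
    "fmt"

    "github.com/go-playground/validator/v10"
)

// SignupForm is the struct the request JSON would be unmarshaled into.
type SignupForm struct {
    Email string `json:"email" validate:"required,email"`
    Age   int    `json:"age"   validate:"gte=18,lte=130"`
}

func main() {
    validate := validator.New()

    form := SignupForm{Email: "not-an-email", Age: 12}
    if err := validate.Struct(form); err != nil {
        // ValidationErrors lists every field/rule that failed.
        for _, fieldErr := range err.(validator.ValidationErrors) {
            fmt.Println(fieldErr.Field(), "failed on the", fieldErr.Tag(), "rule")
        }
    }
}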
For STEP 4
Most of the time, we use structs when we know the format and data of our JSON, because they give us more control over the data types and other functionality. For example, if you want to drop a field from the JSON because you don't need it, you should use a struct with the "-" json tag.
Now, you have said that you don't care whether the result coming from SQL is empty or not. But if you do care, it again comes down to using a struct: you can scan the result into a struct using the sql.Null* types. You can also provide the omitempty json tag if you want to omit the field when marshaling the data for a response.
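A sketch of that combination (the table and column names are assumed from the question's example); the omitempty behavior it relies on is quoted from the encoding/json documentation just below:
import "database/sql"

// userRow mirrors the table: nullable columns use database/sql's Null* types.
type userRow struct {
    UserID   int64
    UserName string
    Age      sql.NullInt64
    Weight   sql.NullFloat64
}

// userResponse is what gets marshaled: pointer fields plus omitempty drop
// NULL columns from the JSON instead of emitting "age":null.
type userResponse struct {
    UserID   int64    `json:"user_id"`
    UserName string   `json:"user_name"`
    Age      *int64   `json:"age,omitempty"`
    Weight   *float64 `json:"weight,omitempty"`
}

func scanUsers(rows *sql.Rows) ([]userResponse, error) {
    var out []userResponse
    for rows.Next() {
        var r userRow
        if err := rows.Scan(&r.UserID, &r.UserName, &r.Age, &r.Weight); err != nil {
            return nil, err
        }
        resp := userResponse{UserID: r.UserID, UserName: r.UserName}
        if r.Age.Valid {
            resp.Age = &r.Age.Int64
        }
        if r.Weight.Valid {
            resp.Weight = &r.Weight.Float64
        }
        out = append(out, resp)
    }
    return out, rows.Err()
}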
Struct values encode as JSON objects. Each exported struct field
becomes a member of the object, using the field name as the object
key, unless the field is omitted for one of the reasons given below.
The encoding of each struct field can be customized by the format
string stored under the "json" key in the struct field's tag. The
format string gives the name of the field, possibly followed by a
comma-separated list of options. The name may be empty in order to
specify options without overriding the default field name.
The "omitempty" option specifies that the field should be omitted from
the encoding if the field has an empty value, defined as false, 0, a
nil pointer, a nil interface value, and any empty array, slice, map,
or string.
As a special case, if the field tag is "-", the field is always
omitted. Note that a field with name "-" can still be generated using
the tag "-,".
Example of json tags
// Field appears in JSON as key "myName".
Field int `json:"myName"`
// Field appears in JSON as key "myName" and
// the field is omitted from the object if its value is empty,
// as defined above.
Field int `json:"myName,omitempty"`
// Field appears in JSON as key "Field" (the default), but
// the field is skipped if empty.
// Note the leading comma.
Field int `json:",omitempty"`
// Field is ignored by this package.
Field int `json:"-"`
// Field appears in JSON as key "-".
Field int `json:"-,"`
As you can see from the encoding/json documentation above, structs give you a great deal of control over JSON. That's why Golang developers most often use structs.
As for map[string]interface{}: use it when you don't know the structure of the JSON coming from the server or the types of its fields. Most Golang developers stick to structs wherever they can.
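To make the trade-off concrete, here is a minimal contrast (the payload and field names are made up):
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    payload := []byte(`{"user_id":1,"user_name":"Bob Smith"}`)

    // Unknown shape: decode into a generic map and inspect types at runtime.
    var generic map[string]interface{}
    _ = json.Unmarshal(payload, &generic)
    fmt.Println(generic["user_name"]) // Bob Smith (an interface{} holding a string)

    // Known shape: decode into a struct and get typed, compile-time-checked fields.
    type user struct {
        UserID   int    `json:"user_id"`
        UserName string `json:"user_name"`
    }
    var typed user
    _ = json.Unmarshal(payload, &typed)
    fmt.Println(typed.UserName) // Bob Smith (a string)
}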

Accessing information in a JSON nested in object

I'm trying to access the roomName, but so far I am unable to. I don't get how to get past the barrier of info.[long ID with dashes].roomName.
At most I can get back the object under the long id, or undefined.
I have tried info[0].roomName, trying to get the first object in info and go on from there. The long id is also in list.id; I don't know if that can help.
I would have made info an array like list is, but this is not my JSON, only one that I am working with.
{
"list":[ IGNORE, can access code here ],
"info":{
"e5eb1ccf-bd45-4d01-8e2a":{
"id":"e5eb1ccf-bd45-4d01-8e2a",
"name":"Lucy",
"roomName":"Arts" <<I need to get to this.
}
}
}
I hope this makes sense; first post, and this is just a boiled-down version of what I have. Putting the id into info.e5eb1ccf-bd45-4d01-8e2a.roomName breaks after the first -.
Once you've parsed the JSON (assuming you even have JSON*) and you have an object, you'd use brackets notation:
var room = theObject.info["e5eb1ccf-bd45-4d01-8e2a"].roomName;
console.log(room); // Arts
* Remember, JSON is a textual notation for data exchange. (More here.) If you're dealing with JavaScript source code, and not dealing with a string, you're not dealing with JSON.

Deserialize an anonymous JSON array?

I got an anonymous array which I want to deserialize; here is an example showing the first array object:
[
{ "time":"08:55:54",
"date":"2016-05-27",
"timestamp":1464332154807,
"level":3,
"message":"registerResourcePath ('', '/sap/bc/ui5_ui5/ui2/ushell/resources/')",
"details":"","component":"sap.ui.ModuleSystem"},
{"time":"08:55:54","date":"2016-05-27","timestamp":1464332154808,"level":3,"message":"URL prefixes set to:","details":"","component":"sap.ui.ModuleSystem"},
{"time":"08:55:54","date":"2016-05-27","timestamp":1464332154808,"level":3,"message":" (default) : /sap/bc/ui5_ui5/ui2/ushell/resources/","details":"","component":"sap.ui.ModuleSystem"}
]
I tried deserializing using CL_TREX_JSON_SERIALIZER, but it is broken and does not work with my JSON (here is why).
Then I tried /UI2/CL_JSON, but it needs a "structure" that perfectly fits the object given by the JSON object. "Structure" in my case means an internal table of objects with the attributes time, date, timestamp, level, message and details. And there was the problem: it does not properly handle references and uses the class description to describe the field assigned to the field symbol. Since I cannot have a list of objects, only a list of references to objects, that solution also doesn't work.
As a third attempt I tried CALL TRANSFORMATION as described by Horst Keller, but with this method I was not able to read in an anonymous array (here is why).
My major points:
I do not want to change the JSON, since that is what I get from sap.ui.log
I prefer to use built-in functionality and not a third-party framework.
Your problem comes not from the anonymity of the array, but from the awkwardness of the SAP JSON (de)serializer, which doesn't respect the double quotes that enclose JSON attributes. The issue is thoroughly described in this answer.
If you don't want to change your JSON on the fly, the only way you have is to change the CL_TREX_JSON_DESERIALIZER class like this.
/UI5/CL_JSON_PARSER parses JSONs with unknown format.
Note that it has "for internal use" written on it so many times that you should probably take that seriously and clone its code to freeze it.

JSON oData.metadata

I have questions about the JSON returned from the server using the Microsoft OData API.
Cannot figure it out.
Query1:
http://localhost:63717/odata/City(1)
Fiddler returns the raw data below.
Everything is in its own brackets.
{
"odata.metadata":"http://localhost:63717/odata/$metadata#City/#Element","CityID":1,"CityName":"Minnetonka","CityAddr1":null,"CityAddr2":null,"CityCity":null,"CityState":null,"CityZip":null,"CityPhone":null,"CityFAX":null,"CityExtent":"-93.53,44.88,-93.39,44.93","CityHeaderImage":null
}
Query2:
http://localhost:63717/odata/City?$filter=CityName eq 'Minnetonka'
Fiddler returns the raw data below.
The data is in two sets of brackets:
{
"odata.metadata":"http://localhost:63717/odata/$metadata#City","value":[
{
"CityID":1,"CityName":"Minnetonka","CityAddr1":null,"CityAddr2":null,"CityCity":null,"CityState":null,"CityZip":null,"CityPhone":null,"CityFAX":null,"CityExtent":"-93.53,44.88,-93.39,44.93","CityHeaderImage":null
}
]
}
What do I have to do to format my JSON coming back for $filters in the oData request?
That odata.metadata is killing me in Query2.
Please explain what I am doing wrong.
In the first example, you have just one City element (denoted by City(1) in the request and #City/#Element in the result path).
In the second example, the value property in result is showing an array of City types (a listing of one or more objects). [ ... ] denotes an array in JavaScript. For a $filter type query, this is what I would expect. You can also see that the response path is less specific (#City instead of #City/#Element).
The path shown in the odata.metadata property value describes the structure of the element being returned, as the two examples above show. The format of the returned data will change depending on how you request it.
If you're having trouble parsing the JSON returned, consider using a library to do the heavy lifting for you. For example:
datajs
JayData
Breeze.js
[Source]
You are not doing anything wrong; the two formats actually represent two different forms of result.
In the first, you are requesting a single item, since you are specifying the key for the entity.
In the second, you are potentially asking for a list of entities. The odata.metadata is separate in this response; otherwise it would be repeated for every item returned, which would be a waste in terms of content length.
This is because of the way that you are addressing the entity.
With //localhost:63717/odata/City(1) you are addressing one entity ("/entityset/key"). You will always return back one City (if one exists). There is no need for it to return an array because it will never return more than one.
With //localhost:63717/odata/City you are addressing a collection of entities ("/entityset"). 0 to n City entities could be returned, hence the need for a collection.
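For illustration only (in Go, since the client language isn't stated in the question), the two shapes map to two different response models; the City fields below are abridged from the sample payloads:
package main

import (
    "encoding/json"
    "fmt"
)

// City models the entity fields shown in the raw responses (abridged).
type City struct {
    CityID   int    `json:"CityID"`
    CityName string `json:"CityName"`
}

// Single entity: /odata/City(1) returns the City properties at the top level.
type singleResponse struct {
    Metadata string `json:"odata.metadata"`
    City            // embedded, so CityID etc. sit alongside odata.metadata
}

// Collection: /odata/City?$filter=... wraps the entities in a "value" array.
type collectionResponse struct {
    Metadata string `json:"odata.metadata"`
    Value    []City `json:"value"`
}

func main() {
    single := []byte(`{"odata.metadata":"...","CityID":1,"CityName":"Minnetonka"}`)
    list := []byte(`{"odata.metadata":"...","value":[{"CityID":1,"CityName":"Minnetonka"}]}`)

    var s singleResponse
    var c collectionResponse
    _ = json.Unmarshal(single, &s)
    _ = json.Unmarshal(list, &c)
    fmt.Println(s.CityName, c.Value[0].CityName) // Minnetonka Minnetonka
}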