I am trying to decode incoming JSON in my REST API written in Go. I am using the decoder.Decode() function, and my problem is that I need to apply certain rules to decide which struct should be used when decoding, because sometimes the JSON contains:
"type": {
"type" : "string",
"maxLength" : 30
},
and sometimes:
"type": {
"type" : "integer",
"max" : 30,
"min" : 10
},
I somehow need to tell Go: "If type.type is string, use this struct (type Type_String struct), and if type.type is integer, use another struct (type Type_Integer struct)". I am not really sure how to do it. One solution on my mind is to make a universal struct with all the possible properties, use it for any kind of object, and then filter the properties based on the type property, but this is just so dirty. I guess I could also write my own decoder, but that also seems a bit strange.
I am new to Go and I am pretty much used to the freedom JavaScript offers.
First of all, if the fields of "type" depend on "type.type", in my opinion it's better to move it one level up. Something like:
...
"type" : "integer",
"intOptions": {
"max" : 30,
"min" : 10
},
....
Then you can create a struct with only one field:
type Type struct {
    Type string
}
and do something like:
myType := new(Type)
json.Unmarshal([]byte(yourJsonString), myType)
And now, depending on myType's value, you can use different structs for decoding your JSON.
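For example, a runnable sketch of that two-pass approach (the struct layouts and the strOptions/intOptions names are assumptions for illustration, not from the question):

package main

import (
    "encoding/json"
    "fmt"
)

// Hypothetical concrete types for the restructured JSON shown above.
type Type_String struct {
    Type       string `json:"type"`
    StrOptions struct {
        MaxLength int `json:"maxLength"`
    } `json:"strOptions"`
}

type Type_Integer struct {
    Type       string `json:"type"`
    IntOptions struct {
        Max int `json:"max"`
        Min int `json:"min"`
    } `json:"intOptions"`
}

type Type struct {
    Type string `json:"type"`
}

func main() {
    data := []byte(`{"type": "integer", "intOptions": {"max": 30, "min": 10}}`)

    // First pass: only look at the discriminating field.
    myType := new(Type)
    if err := json.Unmarshal(data, myType); err != nil {
        panic(err)
    }

    // Second pass: decode into the matching struct.
    switch myType.Type {
    case "string":
        var s Type_String
        if err := json.Unmarshal(data, &s); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", s)
    case "integer":
        var i Type_Integer
        if err := json.Unmarshal(data, &i); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", i)
    default:
        panic("unknown type")
    }
}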
You can always decode to interface{}, as mentioned here: How to access interface fields on json decode?
http://play.golang.org/p/3z8-unhsH4
package main

import (
    "encoding/json"
    "fmt"
)

var one string = `{"type": {"type": "string", "maxLength":30}}`
var two string = `{"type": {"type": "integer", "max":30, "min":10}}`

func f(data map[string]interface{}) {
    t := data["type"]
    typemap := t.(map[string]interface{})
    t2 := typemap["type"].(string)
    switch t2 {
    case "string":
        fmt.Println("maxlength:", typemap["maxLength"].(float64))
    case "integer":
        fmt.Println("max:", typemap["max"].(float64))
    default:
        panic("oh no!")
    }
}

func main() {
    var jsonR map[string]interface{}
    err := json.Unmarshal([]byte(one), &jsonR)
    if err != nil {
        panic(err)
    }
    f(jsonR)

    json.Unmarshal([]byte(two), &jsonR)
    f(jsonR)
}
The idea is to unmarshal into a map[string]interface{} and then type-assert and compare before accessing values.
In the code above, the f function does the assertion and comparison. Given this awkward JSON, I used equally awkward variable names, t and t2, to represent the JSON values of "type" at the different depths. Once t2 has the value, the switch statement does something with the "string" or the "integer" case; here it just prints the maxLength or the max value.
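If the incoming JSON can't be trusted to have the right shape, the comma-ok form of the type assertion avoids the panics. A small variant of f along those lines, which could be dropped into the program above:

func fSafe(data map[string]interface{}) {
    typemap, ok := data["type"].(map[string]interface{})
    if !ok {
        fmt.Println(`"type" is missing or not an object`)
        return
    }
    t2, ok := typemap["type"].(string)
    if !ok {
        fmt.Println(`"type.type" is missing or not a string`)
        return
    }
    switch t2 {
    case "string":
        if maxLen, ok := typemap["maxLength"].(float64); ok {
            fmt.Println("maxlength:", maxLen)
        }
    case "integer":
        if max, ok := typemap["max"].(float64); ok {
            fmt.Println("max:", max)
        }
    }
}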
Related
In our code base we have a function which merges two structs, something like below.
func CombineStruct(s1 interface{}, s2 interface{}) error {
    data, err := json.Marshal(s1)
    if err != nil {
        return err
    }
    return json.Unmarshal(data, s2)
}
We use the above func to combine two structs, something like below.
m := model.SomeModel{}
CombineStruct(someStruct, &m)
//above line merges two structs
Also, currently all our structs have only json tags, not bson tags yet. Should we add bson tags in all the places?
For example:
type someStruct struct {
    Field1 string      `json:"field1"`
    Field2 string      `json:"field2"`
    Field3 interface{} `json:"field3"`
}
In the above someStruct we have fields of type interface{} too!
Now the issue that I'm facing is that wherever we combine structs, I see that object's data in MongoDB as an array of key-value pairs, something like below:
"studentDetails" : [
{
"Key" : "Details",
"Value" : [
[
{
"Key" : "Name",
"Value" : "Bob"
},
{
"Value" : "21",
"Key" : "Age"
}
]
]
},
{
"Key" : "Enrolled",
"Value" : false
}
],
But I want this to be displayed something like below, not as key-value pairs:
"studentDetails" : {
"Details" : [
{
"name" : "serverdr",
"age" : 21
},
{
"Enrolled" : false
}
],
It was displaying objects the above way with our old globalsign mgo driver. But using the new mongo-go driver, when we combine two structs using the CombineStruct() function, it displays them as an array of key-value pairs.
I tried something like below and that worked like a charm :)
So basically the problem is that the mongo-go driver defaults to unmarshalling interface{} values as bson.D, whereas the mgo driver defaults to bson.M.
So we have to add the code below when establishing the connection with MongoDB, passing SetRegistry() on the client options to mimic the old mgo behaviour. With it, the mongo-go driver defaults to bson.M when unmarshalling values of type interface{}, and the data is no longer displayed as an array of key-value pairs.
// Map BSON embedded documents to bson.M (the old mgo default) instead of bson.D.
tM := reflect.TypeOf(bson.M{})
reg := bson.NewRegistryBuilder().RegisterTypeMapEntry(bsontype.EmbeddedDocument, tM).Build()

// Pass the custom registry in the client options when connecting.
clientOpts := options.Client().ApplyURI(SOMEURI).SetAuth(authVal).SetRegistry(reg)
client, err := mongo.Connect(ctx, clientOpts)
Let's say I have a JSON response like this; as you can see, sometimes the email key exists and sometimes it doesn't.
Now I need to check whether the email key exists or not and pretty-print the JSON response accordingly.
How can I do this?
[
    {"name": "name1", "mobile": "123", "email": "email1#example.com", "carrier": "carrier1", "city": "city1"},
    {"name": "name2", "mobile": "1234", "carrier": "carrier2", "city": "city2"},
    ...
]
Here I need to check whether p.Email exists or not; if it exists, assign the email value, if not, assign an empty string.
for i, p := range jsonbody.Data {
    a := p.Name
    b := p.Phone[i].Mobile
    c := p.INTaddress[i].Email // here i need to check
    d := p.Phone[i].Carrier
    e := p.Address[i].City
    ..........
}
I tried searching but didn't find any answers for Go.
Here I need to check whether p.Email exists or not; if it exists, assign the email value, if not, assign an empty string.
Note that when you define the field as Email string and the incoming JSON provides no "email" entry, the Email field will remain an empty string, so you could simply use that as is. No additional checks are necessary.
If you want to allow for null, use Email *string and simply check against nil with an if condition, as suggested by 072's answer.
And when you need to differentiate between undefined/null/empty, use a custom unmarshaler as shown below:
type String struct {
    IsDefined bool
    Value     string
}

// This method will be automatically invoked by json.Unmarshal,
// but only for values that were provided in the json, regardless
// of whether they were null or not.
func (s *String) UnmarshalJSON(d []byte) error {
    s.IsDefined = true
    if string(d) != "null" {
        return json.Unmarshal(d, &s.Value)
    }
    return nil
}
https://go.dev/play/p/gs9G4v32HWL
Then you can use the custom String instead of the builtin string for the fields that you need to check whether they were provided or not. And to do the checking, you'd obviously inspect the IsDefined field after the unmarshal happened.
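For example, a sketch of how the custom String type could be used (the Person struct and its field names are just for illustration; this assumes "encoding/json" and "fmt" are imported alongside the String type above):

type Person struct {
    Name  string `json:"name"`
    Email String `json:"email"` // the custom String type from above
}

func main() {
    var p Person
    if err := json.Unmarshal([]byte(`{"name": "name2"}`), &p); err != nil {
        panic(err)
    }
    if !p.Email.IsDefined {
        fmt.Println("email key was not present in the JSON")
    } else if p.Email.Value == "" {
        fmt.Println("email was null or empty")
    } else {
        fmt.Println("email:", p.Email.Value)
    }
}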
You can use a pointer, then check against nil:
package main

import (
    "encoding/json"
    "fmt"
)

var input = []byte(`
[
    {"name" : "name1", "mobile": "123", "email": "email1#example.com", "carrier": "carrier1", "city": "city1"},
    {"name" : "name2", "mobile": "1234", "carrier": "carrier2", "city": "city2"}
]
`)

type contact struct {
    Name  string
    Email *string
}

func main() {
    var contacts []contact
    json.Unmarshal(input, &contacts)

    // [{Name:name1 Email:0xc00004a340} {Name:name2 Email:<nil>}]
    fmt.Printf("%+v\n", contacts)
}
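To get the behaviour asked for in the question (use the value if present, otherwise fall back to an empty string), a nil check on the pointer is enough; for example, continuing the program above:

for _, c := range contacts {
    email := ""
    if c.Email != nil {
        email = *c.Email
    }
    fmt.Println(c.Name, email)
}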
I'm experimenting with rewriting parts of our system in Go. They're currently written in Python. Some of the data that I want to serve lives in Elasticsearch.
Across our users, we have a few standard fields, but also allow people to create a number of custom fields specific to their environment. E.g., we have a product object that has some common fields like name and price, but we let someone create a field like discount_price or requires_freight to agree with their use case.
In Python, this is easy to accommodate. The JSON is read in, our chosen JSON parser does some reasonable type inference, and we can then return the data after it's processed.
In Go, any data we want to deal with from the Elasticsearch JSON response has to be mapped to a type. Or at least that's my understanding. For example:
import (
    "encoding/json"
)
...
type Product struct {
    Name  string  `json:"name"`
    Price float64 `json:"price"`
}
Here's a simplified example of what the data might look like. I've prefixed the names of the nonstandard fields I'd want to pass through with custom:
{
    "id": "ABC123",
    "name": "Great Product",
    "price": 10.99,
    "custom_alternate_names": ["Great Product"],
    "custom_sellers": [{
        "id": "ABC123",
        "name": "Great Product LLC",
        "emails": ["great#stuff.com"]
    }]
}
It would be fine to special-case the places where a route actually needs to process or manipulate a custom field. But in most cases, passing the JSON data through unchanged is fine, so imposing type mappings for safety isn't adding anything.
Is there a way to set up the struct with an interface (?) that could act as a passthrough for any unmapped fields? Or a way to take the unparsed JSON data and recombine it with the mapped object before the data are returned?
You can do something like this
package main

import (
    "encoding/json"
    "log"
)

type Product struct {
    // Embed KnownFields to be able to pull the properties without
    // requiring a nested object in the JSON.
    KnownFields
    OtherStuff json.RawMessage
}

// Custom unmarshaller
func (p *Product) UnmarshalJSON(b []byte) error {
    var k KnownFields
    // Unmarshal the known fields
    err := json.Unmarshal(b, &k)
    if err != nil {
        return err
    }
    p.KnownFields = k
    // You can use json.RawMessage or map[string]interface{}
    p.OtherStuff = json.RawMessage(b)
    return nil
}

type KnownFields struct {
    Name  string      `json:"name"`
    Price json.Number `json:"price"`
}

const JSON = `
{
    "id": "ABC123",
    "name": "Great Product",
    "price": 10.99,
    "custom_alternate_names": ["Great Product"],
    "custom_sellers": [{
        "id": "ABC123",
        "name": "Great Product LLC",
        "emails": ["great#stuff.com"]
    }]
}`

func main() {
    var p Product
    err := json.Unmarshal([]byte(JSON), &p)
    if err != nil {
        log.Panic(err)
    }
    log.Printf("%v", p)
}
If you are going to mutate and marshal the Product, you will also have to implement a custom marshaller, and you would need to use a map[string]interface{} as the OtherStuff to avoid duplicate entries for the known fields; a sketch follows below.
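A rough sketch of that variant, with OtherStuff switched to a map[string]interface{} and a custom marshaller that overwrites the raw entries with the typed known fields (an illustration under those assumptions, not drop-in code for the program above):

type Product struct {
    KnownFields
    OtherStuff map[string]interface{}
}

func (p *Product) UnmarshalJSON(b []byte) error {
    if err := json.Unmarshal(b, &p.KnownFields); err != nil {
        return err
    }
    return json.Unmarshal(b, &p.OtherStuff)
}

func (p Product) MarshalJSON() ([]byte, error) {
    // Copy the unknown fields, then overwrite the known ones so there
    // are no duplicate or stale entries in the output.
    m := make(map[string]interface{}, len(p.OtherStuff)+2)
    for k, v := range p.OtherStuff {
        m[k] = v
    }
    m["name"] = p.Name
    m["price"] = p.Price
    return json.Marshal(m)
}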
I'm new to Go and I have a question.
I have 5 structs that I use with JSON, but the JSON file can have more structs than the ones I have predetermined, BUT... the structures in the JSON satisfy the structures in my program (let's say I have 5 structs "struct1", 6 structs "struct2", 1 struct "struct3", and so on...).
My question is, I want to make a function where I take the JSON file, read the structs in it, and output the number of structs in the JSON file.
I think I could use map[string]interface{} but I don't understand it.
I hope I have explained myself.
Thank you very much!
Without example JSON or structs, the exact question you are asking is a bit hard to decipher, specifically the "output the number of structs" bit in the question, as it could be interpreted several different ways. I will do my best to answer what I think are the most probable questions you are asking.
Interfaces
First off, some basic Go knowledge that might be useful, but sits outside JSON marshaling itself. The interface{} type appears special, but it is not a hardwired keyword as it might first appear. What the interface keyword does is describe the requirements (the method set) that a type must have to fulfill that interface. Because interface{} has no requirements, and because interfaces in Go are satisfied implicitly, every type satisfies the interface{} type.
Because of this, map[string]interface{} is effectively a map from string to any value. This allows the JSON un/marshaling code to not care about what is on the value side of the map, which lines up exactly with the format of JSON, where you have a string key on one side and a value that could be any of the JSON datatypes on the other.
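For instance, after unmarshaling into such a map, JSON strings come back as string, numbers as float64, objects as nested map[string]interface{}, and arrays as []interface{} (a tiny sketch, not from the original answer):

var m map[string]interface{}
if err := json.Unmarshal([]byte(`{"debug": "on", "width": 500, "tags": ["a", "b"]}`), &m); err != nil {
    panic(err)
}
// m["debug"] is a string, m["width"] is a float64, m["tags"] is a []interface{}.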
How many different objects are in the base JSON object?
Let us take an example JSON:
{
    "debug": "on",
    "window": {
        "title": "Sample Konfabulator Widget",
        "name": "main_window",
        "width": 500,
        "height": 500
    },
    "image": {
        "src": "Images/Sun.png",
        "name": "sun1",
        "hOffset": 250,
        "vOffset": 250,
        "alignment": "center"
    },
    "text": {
        "data": "Click Here",
        "size": 36,
        "style": "bold",
        "name": "text1",
        "hOffset": 250,
        "vOffset": 100,
        "alignment": "center",
        "onMouseUp": "sun1.opacity = (sun1.opacity / 100) * 90;"
    }
}
The answer to the question in this circumstance would be four: debug, window, image, and text.
The process for determining the number would then be:
1. Load the JSON into a byte array.
2. Unmarshal it into an interface{}.
3. Determine the type (array vs object etc.) using a type switch; see this A Tour of Go page. (If you already know the type, you can skip this step.)
4. Convert to the desired type.
5. Get the length, or perform any other operation as desired.
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    myJSON := `<see above>`
    var outStruct *interface{}
    json.Unmarshal([]byte(myJSON), &outStruct)
    outMap := (*outStruct).(map[string]interface{})
    fmt.Printf("Num Structs: %d", len(outMap))
}
Go Playground
How many json objects that I do not have structs for are present?
This question has a very similar answer to the first one, and is really about manipulating the output map and the struct.
Carrying almost all of the code over from the first example, let us assume that you have the following structs set up:
type Image struct {
    Name string
    //etc
}

type Text struct {
    Name string
    //etc
}

type Window struct {
    Name string
    //etc
}

type Other struct {
    Name string
    //etc
}

type Base struct {
    Image  Image
    Window Window
    Text   Text
    Other  Other
}
In this case, in addition to the previous steps, you would have to:
6. Unmarshal the JSON into a Base object.
7. Go through the map[string]interface{} and, for each key, determine whether it is one of the objects in your base struct.
total := 0
for k := range outMap {
    if k != "image" && k != "text" && k != "window" && k != "other" {
        total++
    }
}
fmt.Printf("Total unknown structs: %d\n", total)
How many of my structs are empty?
This last question is also rather simple, and could be done by checking the map for a value given the input key, but for completion's sake, the example code unmarshals the JSON into a struct and uses that.
Unmarshal the JSON into a Base value.
For each of Window, Image, Text, and Other in Base, determine whether it is empty.
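For the unmarshal-into-Base step, a minimal sketch (reusing myJSON and the Base struct from above) could be:

var outBase Base
if err := json.Unmarshal([]byte(myJSON), &outBase); err != nil {
    panic(err)
}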
total = 0
if (Image{}) == outBase.Image {
    total++
}
if (Window{}) == outBase.Window {
    total++
}
if (Text{}) == outBase.Text {
    total++
}
if (Other{}) == outBase.Other {
    total++
}
fmt.Printf("Total empty structs: %d\n", total)
Go Playground
See this go blog post for more information on golang JSON.
So I have a project with lots of incoming data, about 15 sources in total, and of course there are inconsistencies in how each one labels the data available in their REST APIs. I need to change some of their field names to be consistent with the others, but I am at a loss on how to do this when the data sources are JSON object arrays. A working example of what I am trying to do is found here (playground) and below;
however, I seem to lack the knowledge to make this work when the data is not a single JSON object but instead an array of objects that I am unmarshaling.
Another approach is using maps, as in this example, but the result is the same: it works great as is for single objects, but I cannot seem to get it to work with JSON object arrays. Iterating through the arrays is not a possibility, as I am collecting about 8,000 records every few minutes.
package main

import (
    "encoding/json"
    "os"
)

type omit bool
type Value interface{}

type CacheItem struct {
    Key    string `json:"key"`
    MaxAge int    `json:"cacheAge"`
    Value  Value  `json:"cacheValue"`
}

func NewCacheItem() (*CacheItem, error) {
    i := &CacheItem{}
    return i, json.Unmarshal([]byte(`{
        "key": "foo",
        "cacheAge": 1234,
        "cacheValue": {
            "nested": true
        }
    }`), i)
}

func main() {
    item, _ := NewCacheItem()
    json.NewEncoder(os.Stdout).Encode(struct {
        *CacheItem
        // Omit bad keys
        OmitMaxAge omit `json:"cacheAge,omitempty"`
        OmitValue  omit `json:"cacheValue,omitempty"`
        // Add nice keys
        MaxAge int    `json:"max_age"`
        Value  *Value `json:"value"`
    }{
        CacheItem: item,
        // Set the int by value:
        MaxAge: item.MaxAge,
        // Set the nested struct by reference, avoid making a copy:
        Value: &item.Value,
    })
}
It appears your desired output is JSON. You can accomplish the conversion by unmarshaling into a slice of structs, then iterating through each of those to convert it to the second struct type (your anonymous struct above), appending the results to a slice, and finally marshaling that slice back to JSON:
package main

import (
    "encoding/json"
    "fmt"
)

type omit bool
type Value interface{}

type CacheItem struct {
    Key    string `json:"key"`
    MaxAge int    `json:"cacheAge"`
    Value  Value  `json:"cacheValue"`
}

type OutGoing struct {
    // Omit bad keys
    OmitMaxAge omit `json:"cacheAge,omitempty"`
    OmitValue  omit `json:"cacheValue,omitempty"`
    // Add nice keys
    Key    string `json:"key"`
    MaxAge int    `json:"max_age"`
    Value  *Value `json:"value"`
}

func main() {
    objects := make([]CacheItem, 0)
    sample := []byte(`[
        {
            "key": "foo",
            "cacheAge": 1234,
            "cacheValue": {
                "nested": true
            }
        },
        {
            "key": "baz",
            "cacheAge": 123,
            "cacheValue": {
                "nested": true
            }
        }]`)
    json.Unmarshal(sample, &objects)

    out := make([]OutGoing, 0, len(objects))
    // Index into the slice so each Value pointer refers to a distinct element,
    // not to a reused loop variable.
    for i := range objects {
        out = append(out, OutGoing{Key: objects[i].Key, MaxAge: objects[i].MaxAge, Value: &objects[i].Value})
    }
    s, _ := json.Marshal(out)
    fmt.Println(string(s))
}
This outputs
[{"key":"foo","max_age":1234,"value":{"nested":true}},{"key":"baz","max_age":123,"value":{"nested":true}}]
You could probably skip this iteration and conversion code if you wrote custom MarshalJSON and UnmarshalJSON methods for your CacheItem type, instead of relying on struct field tags. Then you could pass the same slice to both Unmarshal and Marshal.
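A sketch of what those methods could look like for CacheItem (one possible shape, reusing the field names from the example above):

func (c CacheItem) MarshalJSON() ([]byte, error) {
    // Emit the "nice" outgoing names instead of the incoming ones.
    return json.Marshal(map[string]interface{}{
        "key":     c.Key,
        "max_age": c.MaxAge,
        "value":   c.Value,
    })
}

func (c *CacheItem) UnmarshalJSON(b []byte) error {
    // Decode the incoming field names into a local helper struct,
    // then copy the values into the CacheItem.
    var in struct {
        Key    string `json:"key"`
        MaxAge int    `json:"cacheAge"`
        Value  Value  `json:"cacheValue"`
    }
    if err := json.Unmarshal(b, &in); err != nil {
        return err
    }
    *c = CacheItem{Key: in.Key, MaxAge: in.MaxAge, Value: in.Value}
    return nil
}

With these in place you could unmarshal the sample directly into a []CacheItem and marshal the same slice back out with the renamed fields.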
To me there's no obvious performance mistake with these approaches (contrast with building a string in a loop using the + operator), and when that's the case it's often best to just get the software working and then test for performance, rather than ruling out a solution based on fears of performance issues without actually testing.
If there is a performance problem with the above approach, and you really want to avoid marshal and unmarshal completely, you could look into byte replacement in the JSON data (e.g. with regexp). I'm not recommending this approach, but if your changes are very simple and the inputs are very consistent it could work, and it would give you another approach to performance-test and compare.