Golang - How does the Go memory profiler count allocs/op?

I'm writing a custom JSON marshal function and comparing it to the built-in json.Marshal method.
My understanding is that when bytes.Buffer reaches its capacity, it has to double its size, and that growth costs one allocation.
However, the benchmark results seem to indicate that json.Marshal does NOT pay an allocation whenever it grows the underlying buffer, whereas my implementation pays an extra allocation every time the buffer doubles.
Why would MarshalCustom (code below) need to allocate more than json.Marshal?
$ go test -benchmem -run=^$ -bench ^BenchmarkMarshalText$ test
BenchmarkMarshalText/Marshal_JSON-10 79623 13545 ns/op 3123 B/op 2 allocs/op
BenchmarkMarshalText/Marshal_Custom-10 142296 8378 ns/op 12464 B/op 8 allocs/op
PASS
ok test 2.356s
Full code:
type fakeStruct struct {
    Names []string `json:"names"`
}

var ResultBytes []byte

func BenchmarkMarshalText(b *testing.B) {
    names := randomNames(1000)
    b.Run("Marshal JSON", func(b *testing.B) {
        fs := fakeStruct{
            Names: names,
        }
        b.ReportAllocs()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            ResultBytes, _ = json.Marshal(fs)
        }
    })
    b.Run("Marshal Custom", func(b *testing.B) {
        fs := fakeStruct{
            Names: names,
        }
        b.ReportAllocs()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            ResultBytes = MarshalCustom(fs)
        }
    })
}
func MarshalCustom(fs fakeStruct) []byte {
    var b bytes.Buffer
    b.WriteByte('{')
    // Names
    b.WriteString(`"names":[`)
    for i := 0; i < len(fs.Names); i++ {
        if i > 0 {
            b.WriteByte(',')
        }
        b.WriteByte('"')
        b.WriteString(fs.Names[i])
        b.WriteByte('"')
    }
    b.WriteByte(']')
    b.WriteByte('}')
    // copy, so the returned slice doesn't alias the buffer's memory
    buf := append([]byte(nil), b.Bytes()...)
    return buf
}
func randomNames(num int) []string {
    const letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    const maxLen = 5
    rand.Seed(time.Now().UnixNano())
    res := make([]string, rand.Intn(num))
    for i := range res {
        l := rand.Intn(maxLen) + 1 // cannot be empty
        s := make([]byte, l)
        for j := range s {
            s[j] = letters[rand.Intn(len(letters))]
        }
        res[i] = string(s)
    }
    return res
}

@oakad is correct. If I force a GC run in every iteration of the benchmark, the allocs/op numbers are much closer, or even the same.
for i := 0; i < b.N; i++ {
    // marshal here
    runtime.GC()
}
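If I understand the accepted explanation correctly (hedged, since oakad's answer isn't quoted above): json.Marshal reuses its internal encode buffer through a sync.Pool, so once the pool is warm, later iterations don't pay for buffer growth at all; forcing a GC empties the pool, which is why the numbers converge. You can get a similar effect in MarshalCustom by pre-sizing the buffer so it never has to grow. A minimal sketch, assuming a rough upper bound on the output size (MarshalCustomPrealloc is my name, not from the question):
func MarshalCustomPrealloc(fs fakeStruct) []byte {
    var b bytes.Buffer
    // Rough estimate: per name, two quotes + comma + up to 5 letters;
    // the exact bound doesn't matter as long as it avoids regrowth.
    b.Grow(len(fs.Names)*8 + 16)
    b.WriteString(`{"names":[`)
    for i, name := range fs.Names {
        if i > 0 {
            b.WriteByte(',')
        }
        b.WriteByte('"')
        b.WriteString(name)
        b.WriteByte('"')
    }
    b.WriteString(`]}`)
    // one final allocation for the returned copy
    return append([]byte(nil), b.Bytes()...)
}
With the buffer pre-sized, the allocations per call should come down to the single Grow plus the final copy, independent of how many times the output would otherwise have doubled.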

Related

Memory consumption with large array with Echo or Gin framework

I have a memory problem when I try to send a large array with Echo (and Gin too).
After the request, the memory is not freed.
package main

import (
    "net/http"
    "strconv"

    "github.com/labstack/echo"
)

type User struct {
    Username  string
    Password  string
    Lastname  string
    Firstname string
}

func main() {
    e := echo.New()
    e.GET("/", func(c echo.Context) error {
        var user User
        users := make([]User, 0)
        for i := 0; i < 100000; i++ {
            user = User{
                Username:  "ffgfgfghhfghfhgfgfhgfghfghfhgfhgfh" + strconv.Itoa(i),
                Password:  "gjgjghjgjhgjhghjfrserhkhjhklljjkbhjvftxersgdghjjkhkljkbhftd",
                Lastname:  "njuftydfhgjkjlkjlkjlkhjkhu",
                Firstname: "jkggkjkl,,lm,kljkvgf",
            }
            users = append(users, user)
        }
        defer func() {
            users = nil
        }()
        return c.JSON(http.StatusOK, users)
    })
    e.Logger.Fatal(e.Start(":1323"))
}
To test, I run requests in parallel and get these results:
1 request: 300 MB
5 requests: 1.5 GB
10 requests: 3.1 GB
more: my PC freezes :)
How can I reduce memory consumption?
EDIT
It works well if I don't have to process the data.
But if, for example, I get 100,000 rows from the database and then need to process them to return a JSON document with several levels, I am obliged to build an intermediate array or map.
The problem is that this memory is never released; it increases with each request, and it is even worse with parallel requests.
Here is an example:
import (
    "strconv"
    "time"
)

type sqlDataType struct {
    ApplicationID        int
    ApplicationName      string
    ApplicationCreatedAt time.Time
    ApplicationUpdatedAt time.Time
    ModuleID             int
    ModuleName           string
    ModuleCreatedAt      time.Time
    ModuleUpdatedAt      time.Time
    ActionID             int
    ActionName           string
    ActionCreatedAt      time.Time
    ActionUpdatedAt      time.Time
}

type DataApplicationType struct {
    Name      string
    CreatedAt time.Time
    UpdatedAt time.Time
    Modules   map[int]dataModuleType
}

type dataModuleType struct {
    Name      string
    CreatedAt time.Time
    UpdatedAt time.Time
    Actions   map[int]dataActionType
}

type dataActionType struct {
    Name      string
    CreatedAt time.Time
    UpdatedAt time.Time
}
// InitData inits data for test
func InitData() map[int]DataApplicationType {
    data := make(map[int]DataApplicationType)
    const nbApplications = 10
    const nbModules = 1000
    const nbActions = 100000
    sqlData := make([]sqlDataType, 0)
    for i := 0; i < nbActions; i++ {
        line := sqlDataType{
            ApplicationID:        (i % nbApplications) + 1,
            ApplicationName:      "Application " + strconv.Itoa((i%nbApplications)+1),
            ApplicationCreatedAt: time.Now(),
            ApplicationUpdatedAt: time.Now(),
            ModuleID:             (i % nbModules) + 1,
            ModuleName:           "Module " + strconv.Itoa((i%nbModules)+1),
            ModuleCreatedAt:      time.Now(),
            ModuleUpdatedAt:      time.Now(),
            ActionID:             i + 1,
            ActionName:           "Action " + strconv.Itoa(i+1),
            ActionCreatedAt:      time.Now(),
            ActionUpdatedAt:      time.Now(),
        }
        sqlData = append(sqlData, line)
    }
    nbData := len(sqlData)
    for i := 0; i < nbData; i++ {
        if _, ok := data[sqlData[i].ApplicationID]; !ok {
            dac := new(dataActionType)
            dac.Name = sqlData[i].ActionName
            dac.CreatedAt = sqlData[i].ActionCreatedAt
            dac.UpdatedAt = sqlData[i].ActionUpdatedAt
            dmo := new(dataModuleType)
            dmo.Name = sqlData[i].ModuleName
            dmo.CreatedAt = sqlData[i].ModuleCreatedAt
            dmo.UpdatedAt = sqlData[i].ModuleUpdatedAt
            dmo.Actions = make(map[int]dataActionType)
            dmo.Actions[sqlData[i].ActionID] = *dac
            dap := new(DataApplicationType)
            dap.Name = sqlData[i].ApplicationName
            dap.CreatedAt = sqlData[i].ApplicationCreatedAt
            dap.UpdatedAt = sqlData[i].ApplicationUpdatedAt
            dap.Modules = make(map[int]dataModuleType)
            dap.Modules[sqlData[i].ModuleID] = *dmo
            data[sqlData[i].ApplicationID] = *dap
        }
        if _, ok := data[sqlData[i].ApplicationID].Modules[sqlData[i].ModuleID]; !ok {
            dac := new(dataActionType)
            dac.Name = sqlData[i].ActionName
            dac.CreatedAt = sqlData[i].ActionCreatedAt
            dac.UpdatedAt = sqlData[i].ActionUpdatedAt
            dmo := new(dataModuleType)
            dmo.Name = sqlData[i].ModuleName
            dmo.CreatedAt = sqlData[i].ModuleCreatedAt
            dmo.UpdatedAt = sqlData[i].ModuleUpdatedAt
            dmo.Actions = make(map[int]dataActionType)
            dmo.Actions[sqlData[i].ActionID] = *dac
            data[sqlData[i].ApplicationID].Modules[sqlData[i].ModuleID] = *dmo
        }
        if _, ok := data[sqlData[i].ApplicationID].Modules[sqlData[i].ModuleID].Actions[sqlData[i].ActionID]; !ok {
            dac := new(dataActionType)
            dac.Name = sqlData[i].ActionName
            dac.CreatedAt = sqlData[i].ActionCreatedAt
            dac.UpdatedAt = sqlData[i].ActionUpdatedAt
            data[sqlData[i].ApplicationID].Modules[sqlData[i].ModuleID].Actions[sqlData[i].ActionID] = *dac
        }
    }
    return data
}
In main.go:
func main() {
    // Launch Cobra
    // commands.Execute()
    go issues.InitData()
    go issues.InitData()
    go issues.InitData()
    go issues.InitData()
    go issues.InitData()
    time.Sleep(60 * time.Second)
}
This script needs around 500 MB of memory and does not release it, even though the map hasn't even been transformed into JSON.
How can I reduce memory consumption, and/or get a stable memory footprint across many calls?
Thanks for your help
Allocated memory isn't immediately returned back to the OS; see Cannot free memory once occupied by bytes.Buffer and Freeing unused memory?
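To see the difference between "freed by the GC" and "returned to the OS", here is a small sketch (my addition, not from the original thread): HeapInuse drops once the data becomes unreachable and a GC runs, while the OS-level resident size may stay high until the runtime hands spans back; debug.FreeOSMemory forces that hand-back eagerly.
package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func main() {
    var m runtime.MemStats

    data := make([]byte, 500<<20) // ~500 MB, standing in for the big slice
    _ = data

    runtime.ReadMemStats(&m)
    fmt.Printf("HeapInuse with data live: %d MB\n", m.HeapInuse>>20)

    data = nil
    runtime.GC()
    debug.FreeOSMemory() // runs a GC and returns as much memory to the OS as possible

    runtime.ReadMemStats(&m)
    fmt.Printf("HeapInuse after GC:       %d MB\n", m.HeapInuse>>20)
}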
Your answer brings little improvement, because you are still building the whole (Go) array (or rather slice) in memory, and only once it's done do you proceed to marshal it into the response. You also create a new encoder for each item, marshal a single item with it, and then throw it away; you can use a single json.Encoder to marshal multiple items. And you flush the response after each item, which is terribly inefficient and defeats the purpose of all the internal buffering.
Instead, marshal each item (User) as soon as it is ready, so you don't have to keep them all in memory. And don't flush after every user: it's enough to do it once at the end, and even that isn't necessary, because the server flushes all buffered data when the handler returns.
Do something like this:
e.GET("/", func(c echo.Context) error {
c.Response().WriteHeader(http.StatusOK)
enc := json.NewEncoder(c.Response())
for i := 0; i < 100000; i++ {
user := User{
Username: "ffgfgfghhfghfhgfgfhgfghfghfhgfhgfh" + strconv.Itoa(i),
Password: "gjgjghjgjhgjhghjfrserhkhjhklljjkbhjvftxersgdghjjkhkljkbhfd",
Lastname: "njuftydfhgjkjlkjlkjlkhjkhu",
Firstname: "jkggkjkl,,lm,kljkvgf",
}
if err := enc.Encode(user); err != nil {
return err
}
}
return nil
})
One thing to note here: the above code does not send a JSON array to the output, it sends a series of JSON objects. If that is not suitable for you and you do need a single JSON array, simply "frame" the data and insert a comma between items:
e.GET("/", func(c echo.Context) error {
resp := c.Response()
resp.WriteHeader(http.StatusOK)
if _, err := io.WriteString(resp, "["); err != nil {
return err
}
enc := json.NewEncoder(resp)
for i := 0; i < 100000; i++ {
if i > 0 {
if _, err := io.WriteString(resp, ","); err != nil {
return err
}
}
user := User{
Username: "ffgfgfghhfghfhgfgfhgfghfghfhgfhgfh" + strconv.Itoa(i),
Password: "gjgjghjgjhgjhghjfrserhkhjhklljjkbhjvftxersgdghjjkhkljkbhft",
Lastname: "njuftydfhgjkjlkjlkjlkhjkhu",
Firstname: "jkggkjkl,,lm,kljkvgf",
}
if err := enc.Encode(user); err != nil {
return err
}
}
if _, err := io.WriteString(resp, "]"); err != nil {
return err
}
return nil
})

How to retrieve form-data as map (like PHP and Ruby) in Go (Golang)

I'm a PHP dev, currently moving to Go... I'm trying to retrieve data from a form (POST method):
<!-- A really SIMPLE form -->
<form class="" action="/Contact" method="post">
<input type="text" name="Contact[Name]" value="Something">
<input type="text" name="Contact[Email]" value="Else">
<textarea name="Contact[Message]">For this message</textarea>
<button type="submit">Submit</button>
</form>
In PHP I would simply use this to get the data:
<?php
print_r($_POST["Contact"])
?>
// Output would be something like this:
Array
(
    [Name] => Something
    [Email] => Else
    [Message] => For this message
)
BUT in Go, I either get the values one by one or the whole form, not just the Contact[] array as in PHP.
I thought about two solutions:
1) Get the values one by one:
// r := *http.Request
err := r.ParseForm()
if err != nil {
    w.Write([]byte(err.Error()))
    return
}
contact := make(map[string]string)
contact["Name"] = r.PostFormValue("Contact[Name]")
contact["Email"] = r.PostFormValue("Contact[Email]")
contact["Message"] = r.PostFormValue("Contact[Message]")
fmt.Println(contact)
// Output
map[Name:Something Email:Else Message:For this Message]
Note that I still have to pass the full keys, like "Contact[Name]", to retrieve each value.
2) Range over the whole r.Form map, pick the values whose keys have the prefix "Contact[", then replace "Contact[" and "]" with the empty string so I'm left with just the inner key, as in the PHP example.
I went with this workaround on my own, but ranging over the whole form may not be a good idea (?)
// ContactPost processes the form sent by the user
func ContactPost(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
    err := r.ParseForm()
    if err != nil {
        w.Write([]byte(err.Error()))
        return
    }
    contact := make(map[string]string)
    for i := range r.Form {
        if strings.HasPrefix(i, "Contact[") {
            rp := strings.NewReplacer("Contact[", "", "]", "")
            contact[rp.Replace(i)] = r.Form.Get(i)
        }
    }
    w.Write([]byte(fmt.Sprint(contact)))
}
// Output
map[Name:Something Email:Else Message:For this Message]
Both solutions give me the same output, but in the 2nd example I don't need to know in advance which keys are inside "Contact[]".
I know... I could just forget about that "form array", use name="Email" on my inputs, and retrieve the values one by one, but I've run into scenarios where ONE form contains more than two arrays of data and each one is handled differently, as with ORMs.
Question 1: Is there an easier way to get my form array as an actual map in Go, like PHP does?
Question 2: Should I retrieve the data one by one (tedious, and I may change the form data at some point and have to recompile...) or iterate over the whole thing as I've done in the 2nd example?
Sorry for my bad English... Thanks in advance!
Is there an easier way to get my form array as an actual map in Go, like PHP does?
You can use the PostForm field of the http.Request type. It is of type url.Values, which is actually (ta-da) a map[string][]string, and you can treat it as such. You'll still need to call req.ParseForm() first, though.
if err := req.ParseForm(); err != nil {
    // handle error
}
for key, values := range req.PostForm {
    // [...]
}
Note that PostForm is a map of lists of strings. That's because, in theory, each field can be present multiple times in the POST body. The PostFormValue() method handles this by implicitly returning the first of multiple values (meaning that when your POST body is &foo=bar&foo=baz, req.PostFormValue("foo") will always return "bar").
Also note that PostForm will never contain nested structures like the ones you are used to from PHP. As Go is statically typed, a POST form value will always be a mapping of string (name) to []string (value/s).
Personally, I wouldn't use the bracket syntax (contact[email]) for POST field names in Go applications; that's a PHP-specific construct anyway, and as you've already noticed, Go does not support it very well.
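To make the "first value wins" behavior above concrete, here is a tiny sketch (my addition, not from the original answer) using url.Values directly; req.PostForm has exactly this type after req.ParseForm():
package main

import (
    "fmt"
    "net/url"
)

func main() {
    // the same shape as req.PostForm after req.ParseForm()
    form, err := url.ParseQuery("foo=bar&foo=baz")
    if err != nil {
        panic(err)
    }
    fmt.Println(form["foo"])     // [bar baz] - every submitted value
    fmt.Println(form.Get("foo")) // bar - first value only, like PostFormValue
}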
Should I retrieve the data one by one (tedious, and I may change the form data at some point and have to recompile...) or iterate over the whole thing as I've done in the 2nd example?
There's probably no correct answer for that. If you are mapping your POST fields to a struct with static fields, you'll have to explicitly map them at some point (or use reflect to implement some magical auto-mapping).
I had a similar problem, so I wrote this function
func ParseFormCollection(r *http.Request, typeName string) []map[string]string {
    var result []map[string]string
    r.ParseForm()
    // e.g. Contacts[0][Name] -> captures the index and the field name;
    // compile the regex once, outside the loop
    re := regexp.MustCompile(typeName + `\[([0-9]+)\]\[([a-zA-Z]+)\]`)
    for key, values := range r.Form {
        matches := re.FindStringSubmatch(key)
        if len(matches) >= 3 {
            index, _ := strconv.Atoi(matches[1])
            for index >= len(result) {
                result = append(result, map[string]string{})
            }
            result[index][matches[2]] = values[0]
        }
    }
    return result
}
It turns a collection of form key-value pairs into a list of string maps. For example, if I have form data like this:
Contacts[0][Name] = Alice
Contacts[0][City] = Seattle
Contacts[1][Name] = Bob
Contacts[1][City] = Boston
I can call my function passing the typeName of "Contacts":
for _, contact := range ParseFormCollection(r, "Contacts") {
    // ...
}
And it will return a list of two map objects, each map containing keys for "Name" and "City". In JSON notation, it would look like this:
[
    {
        "Name": "Alice",
        "City": "Seattle"
    },
    {
        "Name": "Bob",
        "City": "Boston"
    }
]
Which, incidentally, is exactly how I'm posting the data up to the server in an ajax request:
$.ajax({
    method: "PUT",
    url: "/api/example/",
    dataType: "json",
    data: {
        Contacts: [
            {
                "Name": "Alice",
                "City": "Seattle"
            },
            {
                "Name": "Bob",
                "City": "Boston"
            }
        ]
    }
})
If your form data's key structure doesn't quite match mine, you could probably adapt the regex I'm using to suit your needs.
I had the same question. The submission of array form params is also idiomatic in the Ruby/Rails world I'm coming from. But after some research, it looks like this is not really the "Go way".
I've been using the dot prefix convention: contact.name, contact.email, etc.
func parseFormHandler(writer http.ResponseWriter, request *http.Request) {
    request.ParseForm()
    userParams := make(map[string]string)
    for key := range request.Form {
        if strings.HasPrefix(key, "contact.") {
            userParams[key[8:]] = request.Form.Get(key) // strip the "contact." prefix
        }
    }
    fmt.Fprintf(writer, "%#v\n", userParams)
}

func main() {
    server := http.Server{Addr: ":8088"}
    http.HandleFunc("/", parseFormHandler)
    server.ListenAndServe()
}
Running this server and then curling it:
$ curl -id "contact.name=Jeffrey%20Lebowski&contact.email=thedude#example.com&contact.message=I%20hate%20the%20Eagles,%20man." http://localhost:8088
Results in:
HTTP/1.1 200 OK
Date: Thu, 12 May 2016 16:41:44 GMT
Content-Length: 113
Content-Type: text/plain; charset=utf-8
map[string]string{"name":"Jeffrey Lebowski", "email":"thedude#example.com", "message":"I hate the Eagles, man."}
Using the Gorilla Toolkit
You can also use the Gorilla Toolkit's Schema Package to parse the form params into a struct, like so:
type Submission struct {
    Contact Contact
}

type Contact struct {
    Name    string
    Email   string
    Message string
}

func parseFormHandler(writer http.ResponseWriter, request *http.Request) {
    request.ParseForm()
    decoder := schema.NewDecoder()
    submission := new(Submission)
    err := decoder.Decode(submission, request.Form)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Fprintf(writer, "%#v\n", submission)
}
Running this server and then curling it:
$ curl -id "Contact.Name=Jeffrey%20Lebowski&Contact.Email=thedude#example.com&Contact.Message=I%20hate%20the%20Eagles,%20man." http://localhost:8088
Results in:
HTTP/1.1 200 OK
Date: Thu, 12 May 2016 17:03:38 GMT
Content-Length: 128
Content-Type: text/plain; charset=utf-8
&main.Submission{Contact:main.Contact{Name:"Jeffrey Lebowski", Email:"thedude@example.com", Message:"I hate the Eagles, man."}}
I decided to leave a script here so people don't have to spend so much time writing their own custom parser.
Here is a simple script that traverses the form data and puts the values into a struct that follows a format close to PHP's and Ruby's.
package formparser

import (
    "mime/multipart"
    "strings"
)

type NestedFormData struct {
    Value *ValueNode
    File  *FileNode
}

type ValueNode struct {
    Value    []string
    Children map[string]*ValueNode
}

type FileNode struct {
    Value    []*multipart.FileHeader
    Children map[string]*FileNode
}

func (fd *NestedFormData) ParseValues(m map[string][]string) {
    n := &ValueNode{
        Children: make(map[string]*ValueNode),
    }
    for key, val := range m {
        keys := strings.Split(key, ".")
        fd.nestValues(n, &keys, val)
    }
    fd.Value = n
}

func (fd *NestedFormData) ParseFiles(m map[string][]*multipart.FileHeader) {
    n := &FileNode{
        Children: make(map[string]*FileNode),
    }
    for key, val := range m {
        keys := strings.Split(key, ".")
        fd.nestFiles(n, &keys, val)
    }
    fd.File = n
}

func (fd *NestedFormData) nestValues(n *ValueNode, k *[]string, v []string) {
    var key string
    key, *k = (*k)[0], (*k)[1:]
    if len(*k) == 0 {
        if _, ok := n.Children[key]; ok {
            n.Children[key].Value = append(n.Children[key].Value, v...)
        } else {
            cn := &ValueNode{
                Value:    v,
                Children: make(map[string]*ValueNode),
            }
            n.Children[key] = cn
        }
    } else {
        if _, ok := n.Children[key]; ok {
            fd.nestValues(n.Children[key], k, v)
        } else {
            cn := &ValueNode{
                Children: make(map[string]*ValueNode),
            }
            n.Children[key] = cn
            fd.nestValues(cn, k, v)
        }
    }
}

func (fd *NestedFormData) nestFiles(n *FileNode, k *[]string, v []*multipart.FileHeader) {
    var key string
    key, *k = (*k)[0], (*k)[1:]
    if len(*k) == 0 {
        if _, ok := n.Children[key]; ok {
            n.Children[key].Value = append(n.Children[key].Value, v...)
        } else {
            cn := &FileNode{
                Value:    v,
                Children: make(map[string]*FileNode),
            }
            n.Children[key] = cn
        }
    } else {
        if _, ok := n.Children[key]; ok {
            fd.nestFiles(n.Children[key], k, v)
        } else {
            cn := &FileNode{
                Children: make(map[string]*FileNode),
            }
            n.Children[key] = cn
            fd.nestFiles(cn, k, v)
        }
    }
}
Then you can use the package like so:
package main

import (
    "fmt"

    "MODULE_PATH/formparser"
)

func main() {
    formdata := map[string][]string{
        "contact.name":   {"John Doe"},
        "avatars.0.type": {"water"},
        "avatars.0.name": {"Korra"},
        "avatars.1.type": {"air"},
        "avatars.1.name": {"Aang"},
    }
    f := &formparser.NestedFormData{}
    f.ParseValues(formdata)
    // then access form values like so
    fmt.Println(f.Value.Children["contact"].Children["name"].Value)
    fmt.Println(f.Value.Children["avatars"].Children["0"].Children["name"].Value)
    fmt.Println(f.Value.Children["avatars"].Children["0"].Children["type"].Value)
    fmt.Println(f.Value.Children["avatars"].Children["1"].Children["name"].Value)
    fmt.Println(f.Value.Children["avatars"].Children["1"].Children["type"].Value)
    // or traverse the Children in a loop
    for key, child := range f.Value.Children {
        fmt.Println("Key:", key, "Value:", child.Value)
        if child.Children != nil {
            for k, c := range child.Children {
                fmt.Println(key+"'s child key:", k, "Value:", c.Value)
            }
        }
    }
    // if you want to access files, do not forget to call f.ParseFiles()
}
I wrote some code that transforms a form-data array into a JSON string.
package phprubyformdatatojson

import (
    "bytes"
    "io"
    "net/url"
    "strconv"
    "strings"

    "github.com/gin-gonic/gin" // needed by the middleware below
)

type Node struct {
    Name       string
    Value      string
    Subnodes   []*Node
    ArrayValue []*Node
}

func getJsonFromNode(rootNode *Node) string {
    return "{" + nodeToJson(rootNode) + "}"
}

func nodeToJson(n *Node) string {
    if len(n.Subnodes) == 0 && len(n.ArrayValue) == 0 {
        return "\"" + n.Name + "\"" + ": " + "\"" + n.Value + "\""
    }
    if len(n.Subnodes) > 0 {
        var parts []string
        for _, subnode := range n.Subnodes {
            parts = append(parts, nodeToJson(subnode))
        }
        if len(n.Name) > 0 {
            return "\"" + n.Name + "\"" + ": {" + strings.Join(parts, ", ") + "}"
        } else {
            return strings.Join(parts, ", ")
        }
    }
    if len(n.ArrayValue) > 0 {
        var parts []string
        for _, arrayPart := range n.ArrayValue {
            parts = append(parts, "{"+nodeToJson(arrayPart)+"}")
        }
        return "\"" + n.Name + "\"" + ": [" + strings.Join(parts, ", ") + "]"
    }
    return "{}"
}

func addNode(nodeMap map[string]*Node, key string, value string) map[string]*Node {
    keys := splitKeyToParts(key)
    var lastNode *Node
    previousKey := "rootNode"
    totalKey := ""
    for index, keyPart := range keys {
        if totalKey == "" {
            totalKey += keyPart
        } else {
            totalKey += "|||" + keyPart
        }
        isNumber := false
        if _, err := strconv.Atoi(keyPart); err == nil {
            isNumber = true
        }
        if index < len(keys)-1 {
            // intermediate key part: create the node if it doesn't exist yet
            if _, ok := nodeMap[totalKey]; !ok {
                node := &Node{}
                nodeMap[totalKey] = node
                lastNode = node
                if prevNode, ok := nodeMap[previousKey]; ok {
                    if isNumber {
                        prevNode.ArrayValue = append(prevNode.ArrayValue, node)
                    } else {
                        node.Name = keyPart
                        prevNode.Subnodes = append(prevNode.Subnodes, node)
                    }
                }
            }
        } else {
            // last key part: attach the leaf value
            lastNode = nodeMap[previousKey]
            newNode := &Node{Name: keyPart, Value: value}
            if isNumber {
                lastNode.ArrayValue = append(lastNode.ArrayValue, newNode)
            } else {
                lastNode.Subnodes = append(lastNode.Subnodes, newNode)
            }
        }
        previousKey = totalKey
    }
    return nodeMap
}

func splitKeyToParts(key string) []string {
    const DELIMITER = "|||||"
    key = strings.Replace(key, "][", DELIMITER, -1)
    key = strings.Replace(key, "[", DELIMITER, -1)
    key = strings.Replace(key, "]", DELIMITER, -1)
    key = strings.Trim(key, DELIMITER)
    return strings.Split(key, DELIMITER)
}

func TransformMapToJsonString(source map[string][]string) string {
    nodesMap := map[string]*Node{}
    nodesMap["rootNode"] = &Node{}
    for key, value := range source {
        nodesMap = addNode(nodesMap, key, strings.Join(value, ""))
    }
    return getJsonFromNode(nodesMap["rootNode"])
}
Then you can manually transform your request body and json.Unmarshal it, or write a gin middleware:
func PhpRubyArraysToJsonMiddleware(c *gin.Context) {
    body, _ := c.GetRawData()
    m, _ := url.ParseQuery(string(body))
    parsedJson := TransformMapToJsonString(m)
    newBody := []byte(parsedJson)
    c.Request.Body = io.NopCloser(bytes.NewBuffer(newBody))
    c.Next()
}
and use it like this
func handleUpdate(c *gin.Context) {
    req := &YourJsonStruct{}
    if err := c.BindJSON(req); err != nil {
        c.Status(http.StatusBadRequest)
        return
    }
    // your code
}

func main() {
    router := gin.Default()
    router.Use(PhpRubyArraysToJsonMiddleware)
    router.POST("/update", handleUpdate)
    router.Run(":8080") // start the server; the address is arbitrary here
}

Passing different type's parameters to the function

I have this function, and I would like to make it able to receive all types of slices, not only []string but also []int and so on. Is there some way to abstract the element type in the function signature, or should I do something else to accomplish that?
package removeDuplicate

// RemoveDuplicate removes duplicate items from a slice and returns the
// deduplicated slice.
func RemoveDuplicate(arr []string) []string {
    arr2 := arr[:1]
Loop:
    for i := 1; i < len(arr); {
        for j := 0; j < len(arr2); {
            if arr[i] != arr[j] {
                j++
            } else {
                i++
                continue Loop
            }
        }
        arr2 = append(arr2, arr[i])
        i++
    }
    return arr2
}
Thanks in advance =]
If you alter the function signature to accept []interface{}, you get something that works on the built-in types.
package main

import "fmt"

func main() {
    x := []interface{}{"bob", "doug", "bob"}
    fmt.Println(RemoveDuplicate(x))
    y := []interface{}{1, 3, 1}
    fmt.Println(RemoveDuplicate(y))
    z := []interface{}{"bob", "2", "doug", 3, 2, "bob"}
    fmt.Println(RemoveDuplicate(z))
}

func RemoveDuplicate(arr []interface{}) []interface{} {
    arr2 := arr[:1]
Loop:
    for i := 1; i < len(arr); {
        for j := 0; j < len(arr2); {
            if arr[i] != arr[j] {
                j++
            } else {
                i++
                continue Loop
            }
        }
        arr2 = append(arr2, arr[i])
        i++
    }
    return arr2
}
Have a look at the FAQ entry Can I convert a []T to an []interface{}? (and the one before it) for more information.
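The catch that FAQ entry describes: a []string cannot be passed or converted directly to []interface{}; you have to copy it element by element. A minimal sketch:
// toInterfaceSlice copies a []string into a []interface{}; a direct
// conversion does not compile because the two have different memory layouts.
func toInterfaceSlice(ss []string) []interface{} {
    out := make([]interface{}, len(ss))
    for i, s := range ss {
        out[i] = s
    }
    return out
}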
Any kind of generic algorithm in Go can be implemented with either of two mechanisms: interfaces and reflection. With interfaces, you can do it similarly to the sort package:
type Slice interface {
    Len() int
    Swap(i, j int)
    Eq(i, j int) bool
    SubSlice(i, j int) Slice
}

func RemoveDuplicate(s Slice) Slice {
    n := 1
Loop:
    for i := 1; i < s.Len(); i++ {
        for j := 0; j < n; j++ {
            if s.Eq(i, j) {
                continue Loop
            }
        }
        s.Swap(n, i)
        n++
    }
    return s.SubSlice(0, n)
}
Playground with ints and strings: http://play.golang.org/p/WwC27eP72n.
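And for completeness, a sketch of the reflection mechanism mentioned above (my illustration, not code from the original answer); it accepts any slice whose element type is comparable with ==:
// RemoveDuplicateReflect returns a new slice of the same type as its
// argument with duplicate elements removed.
func RemoveDuplicateReflect(slice interface{}) interface{} {
    v := reflect.ValueOf(slice)
    out := reflect.MakeSlice(v.Type(), 0, v.Len())
Loop:
    for i := 0; i < v.Len(); i++ {
        for j := 0; j < out.Len(); j++ {
            if v.Index(i).Interface() == out.Index(j).Interface() {
                continue Loop
            }
        }
        out = reflect.Append(out, v.Index(i))
    }
    return out.Interface()
}
Usage: deduped := RemoveDuplicateReflect([]int{1, 3, 1}).([]int) — the caller type-asserts back to the concrete slice type. (On modern Go you would reach for type parameters instead; interfaces and reflection were the two options when this was written.)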

Don't read unneeded JSON key-values into memory

I have a JSON file with a single field that takes a huge amount of space when loaded into memory. The other fields are reasonable, but I'm trying to take care not to load that particular field unless I absolutely have to.
{
    "Field1": "value1",
    "Field2": "value2",
    "Field3": "a very very long string that potentially takes a few GB of memory"
}
When reading that file into memory, I'd want to ignore Field3 (because loading it could crash my app). Here's some code that I would assume does that, because it uses io streams rather than passing a []byte to Unmarshal:
package main

import (
    "encoding/json"
    "os"
)

func main() {
    type MyStruct struct {
        Field1 string
        Field2 string
    }
    fi, err := os.Open("myJSONFile.json")
    if err != nil {
        os.Exit(2)
    }
    // create an instance and populate
    var mystruct MyStruct
    err = json.NewDecoder(fi).Decode(&mystruct)
    if err != nil {
        os.Exit(2)
    }
    // do some other stuff
}
The issue is that the built-in json.Decoder type reads the entire file into memory on Decode before throwing away key-values that don't match the struct's fields (as has been pointed out on StackOverflow before: link).
Are there any ways of decoding JSON in Go without keeping the entire JSON object in memory?
You could write a custom io.Reader that you feed to json.Decoder and that pre-reads your JSON file and skips that specific field.
The other option is to write your own decoder; that's more complicated and messy.
Edit: it seemed like a fun exercise, so here goes:
type IgnoreField struct {
    io.Reader
    Field string
    buf   bytes.Buffer
}

func NewIgnoreField(r io.Reader, field string) *IgnoreField {
    return &IgnoreField{
        Reader: r,
        Field:  field,
    }
}

func (iF *IgnoreField) Read(p []byte) (n int, err error) {
    if n, err = iF.Reader.Read(p); err != nil {
        return
    }
    s := string(p[:n]) // only the bytes actually read
    fl := `"` + iF.Field + `"`
    if i := strings.Index(s, fl); i != -1 {
        // the field name appears in this chunk: keep everything before it
        // (minus the preceding comma), then consume input until the
        // closing quote of the field's value
        l := strings.LastIndex(s[0:i], ",")
        if l == -1 {
            l = i
        }
        iF.buf.WriteString(s[0:l])
        s = s[i+1+len(fl):]
        i = strings.Index(s, `"`) // opening quote of the value
        if i != -1 {
            s = s[i+1:]
        }
        for {
            i = strings.Index(s, `"`) // end quote
            if i != -1 {
                s = s[i+1:]
                fmt.Println("Skipped")
                break
            } else {
                if n, err = iF.Reader.Read(p); err != nil {
                    return
                }
                s = string(p[:n])
            }
        }
        iF.buf.WriteString(s)
    } else {
        // field not in this chunk: pass the data through via the buffer
        iF.buf.WriteString(s)
    }
    ln := iF.buf.Len()
    if ln >= len(p) {
        tmp := iF.buf.Bytes()
        iF.buf.Reset()
        copy(p, tmp[0:len(p)])
        iF.buf.Write(tmp[len(p):]) // re-buffer whatever didn't fit in p
        ln = len(p)
    } else {
        copy(p, iF.buf.Bytes())
        iF.buf.Reset()
    }
    return ln, nil
}
func main() {
    type MyStruct struct {
        Field1 string
        Field2 string
    }
    fi, err := os.Open("myJSONFile.json")
    if err != nil {
        os.Exit(2)
    }
    // create an instance and populate
    var mystruct MyStruct
    err = json.NewDecoder(NewIgnoreField(fi, "Field3")).Decode(&mystruct)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(mystruct)
}
playground

Go- Copy all common fields between structs

I have a database that stores JSON, and a server that provides an external API whereby, through an HTTP POST, values in this database can be changed. The database is used by different processes internally, and as such has a common naming scheme.
The keys the customer sees are different, but map 1:1 with the keys in the database (there are also unexposed keys). For example:
This is in the database:
{ "bit_size": 8, "secret_key": false }
And this is presented to the client:
{ "num_bits": 8 }
The API can change with respect to field names, but the database always has consistent keys.
I have named the fields the same in both structs, with different tags for the JSON encoder:
type DB struct {
    NumBits int  `json:"bit_size"`
    Secret  bool `json:"secret_key"`
}

type User struct {
    NumBits int `json:"num_bits"`
}
I'm using encoding/json to do the Marshal/Unmarshal.
Is reflect the right tool for this? Is there an easier way, since all of the keys are the same? I was thinking of some kind of memcpy (if I kept the user fields in the same order).
Couldn't struct embedding be useful here?
package main

import (
    "fmt"
)

type DB struct {
    User
    Secret bool `json:"secret_key"`
}

type User struct {
    NumBits int `json:"num_bits"`
}

func main() {
    db := DB{User{10}, true}
    fmt.Printf("Hello, DB: %+v\n", db)
    fmt.Printf("Hello, DB.NumBits: %+v\n", db.NumBits)
    fmt.Printf("Hello, User: %+v\n", db.User)
}
http://play.golang.org/p/9s4bii3tQ2
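To spell out why the embedding trick works (my illustration, not part of the original answer): encoding/json inlines the fields of an embedded struct into the enclosing object, so db marshals with all fields while db.User marshals with only the API-facing ones. Note that the embedded tag wins, so DB would now serialize NumBits as num_bits rather than bit_size:
package main

import (
    "encoding/json"
    "fmt"
)

type User struct {
    NumBits int `json:"num_bits"`
}

type DB struct {
    User
    Secret bool `json:"secret_key"`
}

func main() {
    db := DB{User{10}, true}

    full, _ := json.Marshal(db)         // embedded fields are inlined
    partial, _ := json.Marshal(db.User) // just the API-facing part

    fmt.Println(string(full))    // {"num_bits":10,"secret_key":true}
    fmt.Println(string(partial)) // {"num_bits":10}
}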
Another option is to round-trip through encoding/gob, which matches fields by name (DbVar here stands for your DB value):
buf := bytes.Buffer{}
err := gob.NewEncoder(&buf).Encode(&DbVar)
if err != nil {
    return err
}
u := User{}
err = gob.NewDecoder(&buf).Decode(&u)
if err != nil {
    return err
}
Here's a solution using reflection. You'd have to develop it further if you need more complex structures with embedded struct fields and such.
http://play.golang.org/p/iTaDgsdSaI
package main

import (
    "encoding/json"
    "fmt"
    "reflect"
)

type M map[string]interface{} // just an alias

var Record = []byte(`{ "bit_size": 8, "secret_key": false }`)

type DB struct {
    NumBits int  `json:"bit_size"`
    Secret  bool `json:"secret_key"`
}

type User struct {
    NumBits int `json:"num_bits"`
}

func main() {
    d := new(DB)
    e := json.Unmarshal(Record, d)
    if e != nil {
        panic(e)
    }
    m := mapFields(d)
    fmt.Println("Mapped fields: ", m)
    u := new(User)
    o := applyMap(u, m)
    fmt.Println("Applied map: ", o)
    j, e := json.Marshal(o)
    if e != nil {
        panic(e)
    }
    fmt.Println("Output JSON: ", string(j))
}

func applyMap(u *User, m M) M {
    t := reflect.TypeOf(u).Elem()
    o := make(M)
    for i := 0; i < t.NumField(); i++ {
        f := t.FieldByIndex([]int{i})
        // skip unexported fields
        if f.PkgPath != "" {
            continue
        }
        if x, ok := m[f.Name]; ok {
            k := f.Tag.Get("json")
            o[k] = x
        }
    }
    return o
}

func mapFields(x *DB) M {
    o := make(M)
    v := reflect.ValueOf(x).Elem()
    t := v.Type()
    for i := 0; i < v.NumField(); i++ {
        f := t.FieldByIndex([]int{i})
        // skip unexported fields
        if f.PkgPath != "" {
            continue
        }
        o[f.Name] = v.FieldByIndex([]int{i}).Interface()
    }
    return o
}
Using struct tags, the following would sure be nice,
package main

import (
    "fmt"
    "log"

    "hacked/json"
)

var dbj = `{ "bit_size": 8, "secret_key": false }`

type User struct {
    NumBits int `json:"bit_size" api:"num_bits"`
}

func main() {
    fmt.Println(dbj)
    // unmarshal from full db record to User struct
    var u User
    if err := json.Unmarshal([]byte(dbj), &u); err != nil {
        log.Fatal(err)
    }
    // remarshal User struct using api field names
    api, err := json.MarshalTag(u, "api")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(api))
}
Adding MarshalTag requires just a small patch to encode.go:
106c106,112
< e := &encodeState{}
---
> return MarshalTag(v, "json")
> }
>
> // MarshalTag is like Marshal but marshalls fields with
> // the specified tag key instead of the default "json".
> func MarshalTag(v interface{}, tag string) ([]byte, error) {
> e := &encodeState{tagKey: tag}
201a208
> tagKey string
328c335
< for _, ef := range encodeFields(v.Type()) {
---
> for _, ef := range encodeFields(v.Type(), e.tagKey) {
509c516
< func encodeFields(t reflect.Type) []encodeField {
---
> func encodeFields(t reflect.Type, tagKey string) []encodeField {
540c547
< tv := f.Tag.Get("json")
---
> tv := f.Tag.Get(tagKey)
The following function uses reflection to copy fields between two structs. A src field is copied to a dest field if they have the same field name.
// CopyCommonFields copies src fields into dest fields. A src field is copied
// to a dest field if they have the same field name.
// Dest and src must be pointers to structs.
func CopyCommonFields(dest, src interface{}) {
    srcType := reflect.TypeOf(src).Elem()
    destType := reflect.TypeOf(dest).Elem()
    destFieldsMap := map[string]int{}
    for i := 0; i < destType.NumField(); i++ {
        destFieldsMap[destType.Field(i).Name] = i
    }
    for i := 0; i < srcType.NumField(); i++ {
        if j, ok := destFieldsMap[srcType.Field(i).Name]; ok {
            reflect.ValueOf(dest).Elem().Field(j).Set(
                reflect.ValueOf(src).Elem().Field(i),
            )
        }
    }
}
Usage:
func main() {
    type T struct {
        A string
        B int
    }
    type U struct {
        A string
    }
    src := T{
        A: "foo",
        B: 5,
    }
    dest := U{}
    CopyCommonFields(&dest, &src)
    fmt.Printf("%+v\n", dest)
    // output: {A:foo}
}
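One caveat (my addition, not from the original answer): reflect.Value.Set panics if two structs share a field name but not a type. A defensive variant checks assignability first:
// CopyCommonFieldsSafe is like CopyCommonFields but silently skips fields
// whose types don't match, instead of panicking inside reflect.Value.Set.
func CopyCommonFieldsSafe(dest, src interface{}) {
    destVal := reflect.ValueOf(dest).Elem()
    srcVal := reflect.ValueOf(src).Elem()
    for i := 0; i < srcVal.NumField(); i++ {
        name := srcVal.Type().Field(i).Name
        if f, ok := destVal.Type().FieldByName(name); ok {
            if srcVal.Field(i).Type().AssignableTo(f.Type) {
                destVal.FieldByName(name).Set(srcVal.Field(i))
            }
        }
    }
}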
You can convert between structures that have the same field names and types; the conversion effectively reassigns the field tags:
package main

import "encoding/json"

type DB struct {
    dbNumBits
    Secret bool `json:"secret_key"`
}

type dbNumBits struct {
    NumBits int `json:"bit_size"`
}

type User struct {
    NumBits int `json:"num_bits"`
}

var Record = []byte(`{ "bit_size": 8, "secret_key": false }`)

func main() {
    d := new(DB)
    e := json.Unmarshal(Record, d)
    if e != nil {
        panic(e)
    }
    var u User = User(d.dbNumBits)
    println(u.NumBits)
}
https://play.golang.org/p/uX-IIgL-rjc
Here's a solution without reflection, unsafe, or a function per struct. The example is a little convoluted, and maybe you wouldn't need to do it just like this, but the key is using a map[string]interface{} to get away from a struct with field tags. You might be able to use the idea in a similar solution.
package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// example full database record
var dbj = `{ "bit_size": 8, "secret_key": false }`

// User type has only the fields going to the API
type User struct {
    // tag still specifies internal name, not API name
    NumBits int `json:"bit_size"`
}

// mapping from internal field names to API field names.
// (you could have more than one mapping, or even construct this
// at run time)
var ApiField = map[string]string{
    // internal: API
    "bit_size": "num_bits",
    // ...
}

func main() {
    fmt.Println(dbj)
    // select user fields from full db record by unmarshalling
    var u User
    if err := json.Unmarshal([]byte(dbj), &u); err != nil {
        log.Fatal(err)
    }
    // remarshal from User struct back to json
    exportable, err := json.Marshal(u)
    if err != nil {
        log.Fatal(err)
    }
    // unmarshal into a map this time, to shrug off field tags
    type jmap map[string]interface{}
    mInternal := jmap{}
    if err := json.Unmarshal(exportable, &mInternal); err != nil {
        log.Fatal(err)
    }
    // translate field names
    mExportable := jmap{}
    for internalField, v := range mInternal {
        mExportable[ApiField[internalField]] = v
    }
    // marshal final result with API field names
    if exportable, err = json.Marshal(mExportable); err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(exportable))
}
Output:
{ "bit_size": 8, "secret_key": false }
{"num_bits":8}
Edit: More explanation. As Tom notes in a comment, there's reflection going on behind the code. The goal here is to keep the code simple by using the available capabilities of the library. Package json currently offers two ways to work with data: struct tags and maps of type map[string]interface{}. The struct tags let you select fields, but force you to statically pick a single JSON field name. The maps let you pick field names at run time, but not which fields to Marshal. It would be nice if the json package let you do both at once, but it doesn't. The answer here just shows the two techniques and how they can be composed in a solution to the example problem in the OP.
"Is reflect the right tool for this?" A better question might be, "Are struct tags the right tool for this?" and the answer might be no.
package main

import (
    "encoding/json"
    "fmt"
    "log"
)

var dbj = `{ "bit_size": 8, "secret_key": false }`

// translation from internal field name to api field name
type apiTrans struct {
    db, api string
}

var User = []apiTrans{
    {db: "bit_size", api: "num_bits"},
}

func main() {
    fmt.Println(dbj)
    type jmap map[string]interface{}
    // unmarshal full db record
    mdb := jmap{}
    if err := json.Unmarshal([]byte(dbj), &mdb); err != nil {
        log.Fatal(err)
    }
    // build result
    mres := jmap{}
    for _, t := range User {
        if v, ok := mdb[t.db]; ok {
            mres[t.api] = v
        }
    }
    // marshal result
    exportable, err := json.Marshal(mres)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(exportable))
}
An efficient way to achieve your goal is to use the gob package.
Here is an example, with a playground link:
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type DB struct {
    NumBits int
    Secret  bool
}

type User struct {
    NumBits int
}

func main() {
    db := DB{10, true}
    user := User{}
    buf := bytes.Buffer{}
    err := gob.NewEncoder(&buf).Encode(&db)
    if err != nil {
        panic(err)
    }
    err = gob.NewDecoder(&buf).Decode(&user)
    if err != nil {
        panic(err)
    }
    fmt.Println(user)
}
Here is the official blog post: https://blog.golang.org/gob