Generate JSON output from MySQL using GORM in Go

I am trying to get data from my MySQL database. I'm able to access the database and fetch the data, but the problem is that I want the output in JSON format. I did some research but didn't find a solution, so can anyone guide or help me with getting the MySQL data as JSON using GORM?
Here is a sample of the code I've written.
package main

import (
	"fmt"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/mysql" // the MySQL dialect must be imported
)

type k_movie struct {
	Id         uint32
	Title      string `gorm:"default:''"`
	Url_name   string `gorm:"default:''"`
	K_score    string ``
	Poster_url string ``
}

func main() {
	db, errDb := gorm.Open("mysql", "root:xyz#123@(127.0.0.1)/dbdump?charset=utf8mb4&loc=Local")
	if errDb != nil {
		fmt.Println(errDb)
	}
	defer db.Close() // close the database connection when done
	db.LogMode(true) // enable SQL debug logging

	// SELECT * FROM `k_movies` WHERE (id>0 and id<.....)
	var movies []k_movie
	db.Where("id>? and id<?", 0, 103697).Limit(3).Find(&movies)
	fmt.Println(movies)

	// Get the row count
	total := 0
	db.Model(&k_movie{}).Count(&total)
	fmt.Println(total)

	var infos []k_movie // a slice to receive multiple results
	db.Where("Id in (?)", []uint32{1, 2, 3, 4, 5, 6, 7, 8}).Find(&infos)
	fmt.Println(infos)
	fmt.Println(len(infos)) // number of results

	var notValue []k_movie
	db.Where("id=?", 3).Find(&notValue)
	if len(notValue) == 0 {
		fmt.Println("No data found!")
	} else {
		fmt.Println(notValue)
	}
}
And this is the format of the output I'm getting:
kumardivyanshu@Divyanshus-MacBook-Air ~/myproject/src/github.com/gorm_mysql % go run test.go
(/Users/kumardivyanshu/myproject/src/github.com/gorm_mysql/test.go:31)
[2021-05-13 08:59:45] [3.89ms] SELECT * FROM `k_movies` WHERE (id>0 and id<103697) LIMIT 3
[3 rows affected or returned ]
[{1 Golmaal: Fun Unlimited golmaal-fun-unlimited 847 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/32b7385e1e616d7ba3d11e1bee255ecce638a136} {2 Dabangg 2 dabangg-2 425 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/1420c4d6f817d2b923cd8b55c81bdb9d9fd1eca0} {3 Force force 519 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/cd1dc247da9d16e194f4bfb09d99f4dedfb2de00}]
(/Users/kumardivyanshu/myproject/src/github.com/gorm_mysql/test.go:36)
[2021-05-13 08:59:45] [20.22ms] SELECT count(*) FROM `k_movies`
[0 rows affected or returned ]
103697
(/Users/kumardivyanshu/myproject/src/github.com/gorm_mysql/test.go:40)
[2021-05-13 08:59:45] [2.32ms] SELECT * FROM `k_movies` WHERE (Id in (1,2,3,4,5,6,7,8))
[8 rows affected or returned ]
[{1 Golmaal: Fun Unlimited golmaal-fun-unlimited 847 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/32b7385e1e616d7ba3d11e1bee255ecce638a136} {2 Dabangg 2 dabangg-2 425 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/1420c4d6f817d2b923cd8b55c81bdb9d9fd1eca0} {3 Force force 519 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/cd1dc247da9d16e194f4bfb09d99f4dedfb2de00} {4 Eega eega 906 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/08aef7d961d4699bf2d12a7c854b6b32d1445247} {5 Fukrey fukrey 672 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/5d14bd2fb0166f4bb9ab919e31b69f2605f366aa} {6 London Paris New York london-paris-new-york 323 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/222d8a6b5c76b1d3cfa0b93d4bcf1a1f16f5e199} {7 Bhaag Milkha Bhaag bhaag-milkha-bhaag 963 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/efa8b86c753ae0110cc3e82006fadabb06f1486c} {8 Bobby Jasoos bobby-jasoos 244 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/0e9540d4c962ec33d8b63c0c563e7b64169122e0}]
8
(/Users/kumardivyanshu/myproject/src/github.com/gorm_mysql/test.go:45)
[2021-05-13 08:59:45] [1.56ms] SELECT * FROM `k_movies` WHERE (id=3)
[1 rows affected or returned ]
[{3 Force force 519 https://movieassetsdigital.sgp1.cdn.digitaloceanspaces.com/thumb/cd1dc247da9d16e194f4bfb09d99f4dedfb2de00}]

You need to define json tags on your struct, so you can use json.Marshal to get a []byte slice that represents a JSON object.
Example taken from Go by Example:
type Response2 struct {
	Page   int      `json:"page"`
	Fruits []string `json:"fruits"`
}

res2D := &Response2{
	Page:   1,
	Fruits: []string{"apple", "peach", "pear"}}
res2B, _ := json.Marshal(res2D)
fmt.Println(string(res2B))
That would print:
{"page":1,"fruits":["apple","peach","pear"]}

Related

Why is data bytes size different when serialized with JSONSerialization

Why does data serialized using JSONSerialization differ from the data serialized with the extension below?
let uint8Array: [UInt8] = [123, 234, 255]
let data1 = uint8Array.data // 3 bytes
let data2 = try! JSONSerialization.data(withJSONObject: uint8Array) // 13 bytes

extension Data {
    var bytes: [UInt8] {
        return [UInt8](self)
    }
}

extension Array where Element == UInt8 {
    var data: Data {
        return Data(self)
    }
}
JSON, and so JSONSerialization, transforms an object (here an array of Int) into Data according to some rules (adding "[", "]" and "," in our case; with other input values and options it can also add ":", "{", "}", "\n", etc.) and using UTF-8 encoding (or a kind of UTF-16, but I'll skip that part for clarity; see the encoding section on Wikipedia).
So for the value 123, it becomes the character 1 in UTF-8 encoding, then 2, then 3: just for that one number, it's 3 bytes.
So you see that for your 3 Int values it will be 9 bytes; then we need to add the "[", the "]", and the two commas that separate them, which makes 9 + 2 + 2 = 13 bytes.
To illustrate, let's add:
extension Data {
    func hexRepresentation(separator: String) -> String {
        map { String(format: "%02hhX", $0) }.joined(separator: separator)
    }

    func intRepresentation(separator: String) -> String {
        map { String(format: "%d", $0) }.joined(separator: separator)
    }
}
And use it to see the values inside data1 & data2:
print(data1.hexRepresentation(separator: "-"))
print(data1.intRepresentation(separator: "-"))
print(data2.hexRepresentation(separator: "-"))
print(data2.intRepresentation(separator: "-"))
$>7B-EA-FF
$>123-234-255
$>5B-31-32-33-2C-32-33-34-2C-32-35-35-5D
$>91-49-50-51-44-50-51-52-44-50-53-53-93
I'll let you choose whichever you prefer, the Int or the hex raw value interpretation.
We see that using the first method, we get 3 bytes, with the initial values of your array.
But for the JSON one, we can split it as such:
$>91 -> [
$>49-50-51 -> 1, 2 & 3
$>44 -> ,
$>50-51-52 -> 2, 3, 4
$>44 -> ,
$>50-53-53 -> 2, 5, 5
$>93 -> ]
You can check any UTF-8 table (like https://www.utf8-chartable.de, etc.): 91 is for [, 49 is for 1, and so on.
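For comparison, the same 3-byte versus 13-byte difference shows up in Go, since the JSON text is just the UTF-8 digits plus the brackets and commas (a quick sketch, not part of the original Swift code):
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte{123, 234, 255}                     // the raw bytes themselves: 3 bytes
	encoded, _ := json.Marshal([]int{123, 234, 255}) // the JSON text "[123,234,255]"
	fmt.Println(len(raw), len(encoded))              // prints: 3 13
}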

How can I consume messages from Kafka in order?

Background
A producer produces some data and sends it to Kafka in order, like:
{uuid: 123 status: 1}
{uuid: 123 status: 3}
status 1 means begin
status 3 means succeed
I use sarama.NewConsumerGroup(xx, xx, config).Consume(xx, xx, myhandler) to consume with the code:
func (h myhandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		key := fmt.Sprintf("%q-%d-%d", msg.Topic, msg.Partition, msg.Offset)
		_, err := rdb.RedisClient.Get(h.ctx, key).Result()
		if err == redis.Nil {
			msgQueue <- msg.Value
			sess.MarkMessage(msg, "")
			rdb.RedisClient.Set(h.ctx, key, none, 12*time.Hour)
		} else if err != nil {
			log.Errorln("RedisClient get key error : ", err)
			return err
		} else {
			continue
		}
	}
	return nil
}

msgQueue := make(chan interface{}, 1000)
And then I decode the value in msgQueue to a struct and insert a record into mysql.
Question
Normally the final status of the data is '3', but I find that sometimes it ends up as '1'.
And I find that the order of the messages in the channel msgQueue is not fixed.
So how can I ensure the final status of the data is 3?
How to fix
I have come up with a method, but it is not good enough; I'd like to see how it can be optimized.
conn := &gorm.DB{}
data := &Log{}
if data.Status != 1 {
	conn = conn.Clauses(clause.OnConflict{
		Columns:   []clause.Column{{Name: "uuid"}},
		DoUpdates: clause.AssignmentColumns([]string{"status"}),
	})
}
conn.Create(data)
return conn.Error
And mysql has a unique constraint index for uuid.
When the data arrives in the order {uuid: 123 status: 1}, {uuid: 123 status: 3}, the result is right.
When the data arrives in the order {uuid: 123 status: 3}, {uuid: 123 status: 1}, the final status is also right, but it returns the error Error 1062: Duplicate entry '123' for key 'unique_index_uuid'.
It's not elegant. How can I optimize it, or are there other ways to do it?
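One possible refinement of the snippet above (just a sketch, assuming db is the open *gorm.DB handle and that the status value only ever increases for a given uuid) is to always apply the conflict clause and let MySQL keep the larger status, so neither arrival order errors out:
err := db.Clauses(clause.OnConflict{
	Columns: []clause.Column{{Name: "uuid"}},
	DoUpdates: clause.Assignments(map[string]interface{}{
		// MySQL's VALUES() refers to the value being inserted; keep whichever status is larger,
		// so a late status-1 message cannot overwrite an already stored status 3.
		"status": gorm.Expr("GREATEST(status, VALUES(status))"),
	}),
}).Create(data).Error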
That depends on the topic partitions. Kafka does not provide ordering guarantees within a topic, only within a partition.
In other words, if you send message A and then message B to partition 0, the order will be: first A, then B. But if they end up on different partitions, it can happen that B is written to its partition before A is written to its.
Here's a quote from Confluent's web site:
Kafka only provides a total order over records within a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
Link
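If per-uuid ordering is enough, the usual approach is to key every message with the uuid on the producer side, so all messages for the same uuid are hashed to the same partition and arrive in order. A rough sketch with sarama (broker address and topic name are placeholders):
package main

import (
	"log"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by SyncProducer

	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	// The default partitioner hashes the key, so both messages for uuid 123
	// land on the same partition and keep their order.
	for _, payload := range []string{`{"uuid": 123, "status": 1}`, `{"uuid": 123, "status": 3}`} {
		if _, _, err := producer.SendMessage(&sarama.ProducerMessage{
			Topic: "status-events",
			Key:   sarama.StringEncoder("123"),
			Value: sarama.StringEncoder(payload),
		}); err != nil {
			log.Fatal(err)
		}
	}
}
On the consumer side, messages from a single partition are delivered to a single ConsumeClaim loop in offset order, so the status-1 record is processed before the status-3 record as long as they share a key.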

Rebuild JSON string to a different structure, parsing specific fields

I receive a JSON like this
{
"raw_content":"very long string"
"mode":"ML",
"user_id":"4000008367",
"user_description":"John Doe",
"model":3,
"dest_contact":"test#email.it",
"order_details":[
"ART.: 214883 PELL GRANI 9 ESPR.BAR SKGR 1000 SGOC.: 1000 GR\nVS.ART: 305920132 COMPOS. PALLET\n36 COLLI PEZ.RA: 6 TOT.PEZZI: 216 B: 12 T: 6\nEU C.L.: 24,230\nCO SCAP- : 16,500 CA SCAP- : 15,000 CO SCCP- : 0,000\nCO SAGV- : 0,00000\nC.N. : 17,200SCAD.MIN.: 25/01/22\nCONDIZIONI PAGAMENTO : 60GG B.B. DT RIC FT FINEMESE ART62\n",
"ART.: 287047 PELLINI BIO100%ARABICALTGR 250 SGOC.: 250 GR\nVS.ART: 315860176 COMPOS. PALLET\n36 COLLI PEZ.RA: 6 TOT.PEZZI: 216 B: 12 T: 3\nEU C.L.: 8,380\nCO SCAP- : 16,500 PR SCAP- : 15,000 CO SCCP- : 0,000\nCO SAGV- : 0,00000\nC.N. : 5,950SCAD.MIN.: 25/01/22\nCONDIZIONI PAGAMENTO : 60GG B.B. DT RIC FT FINEMESE ART62\n",
"ART.: 3137837 CAFFE PELLINI TOP LTGR 250 SGOC.: 250 GR\nVS.ART: 315850175 COMPOS. PALLET\n30 COLLI PEZ.RA: 12 TOT.PEZZI: 360 B: 6 T: 5\nEU C.L.: 6,810\nCO SCAP- : 16,500 PR SCAP- : 12,000 CO SCCP- : 0,000\nCO SAGV- : 0,00000\nC.N. : 5,000SCAD.MIN.: 18/08/21\nCONDIZIONI PAGAMENTO : 60GG B.B. DT RIC FT FINEMESE ART62\n",
"ART.: 7748220 ESPRES.SUP.TRADIZ. MOKPKGR 500 SGOC.: 500 GR\nVS.ART: 315930114 COMPOS. PALLET\n80 COLLI PEZ.RA: 10 TOT.PEZZI: 800 B: 10 T: 6\nEU C.L.: 7,580\nCO SCAP- : 16,500 PR SCAP- : 27,750 CO SCCP- : 0,000\nCO SAGV- : 0,00000\nC.N. : 4,570SCAD.MIN.: 25/01/22\nCONDIZIONI PAGAMENTO : 60GG B.B. DT RIC FT FINEMESE ART62\n"
],
"order_footer":"\nPALLET DA CM. 80X120\nT O T A L E C O L L I 182\n\n- EX D.P.R. 322 - 18/05/82 NON SI ACCETTANO TERMINI MINIMI DI\nCONSERVAZIONE INFERIORI A QUELLI INDICATI\n- CONSEGNA FRANCO BANCHINA, PALLET MONOPRODOTTO\nCOME DA PALLETTIZZAZIONE SPECIFICATA\nCONDIZIONI DI PAGAMENTO : COME DA ACCORDI\n"
}
and I want to restructure it to this
{
id: "32839ds8a32jjdas93193snkkk32jhds-k2j1", // generated, see my implementation
rawContent: "very long string",
parsedContent: {"mode":"ML", "user_id":"4000008367", "user_description":"John Doe", "order_details":[ "....." ], ... } // basically all the fields except raw content
}
How can I do this? I'm trying to do it with maps:
var output map[string]interface{}
var message processing.Document // message is of type struct {ID string, Raw string, Parsed string}

err = json.Unmarshal([]byte(doc), &output)
if err != nil {
	// error handling
}

message.ParsedContent = "{"
for key, data := range output {
	if key == "raw_content" {
		hash := md5.Sum([]byte(data.(string)))
		message.ID = hex.EncodeToString(hash[:])
		message.RawContent = base64.RawStdEncoding.EncodeToString([]byte(data.(string)))
	} else {
		temp := fmt.Sprintf("\"%s\": \"%s\", ", key, data)
		message.ParsedContent = message.ParsedContent + temp
	}
}
message.ParsedContent = message.ParsedContent + "}"

msg, err := json.Marshal(message)
if err != nil {
	// error handling
}
fmt.Println(string(msg))
There are a few problems with this. If it were only strings it would be OK, but there are integers and Sprintf doesn't handle them (the output I get, for example on the field "model", is "model": "%!s(float64=3)"). I could add an if key == "model" and parse it as an int, but as I said the fields are not always the same, and there are other integers that are sometimes present and sometimes not.
Also, the field "order_footer" has escaped newlines which are somehow lost in my parsing, and this breaks the validity of the resulting JSON.
How can I solve these issues?
EDIT: As suggested, hand-building JSON is a bad idea. I could parse it into a struct; the field "model" actually tells me which struct to use. The struct for "model": 3, for example, is:
type MOD3 struct {
	Raw                     string   `json:"raw_content"`
	Mode                    string   `json:"mode"`
	UserID                  string   `json:"user_id"`
	UserDes                 string   `json:"user_description"`
	Model                   int      `json:"model"`
	Heading                 string   `json:"legal_heading"`
	DestContact             string   `json:"dest_contact"`
	VendorID                string   `json:"vendor_id"`
	VendorLegal             string   `json:"vendor_legal"`
	OrderID                 string   `json:"order_id"`
	OrderDate               int64    `json:"order_date"`
	OrderReference          string   `json:"order_reference"`
	DeliveryDate            int64    `json:"delivery_date"`
	OrderDestination        string   `json:"order_destination"`
	OrderDestinationAddress string   `json:"order_destination_address"`
	Items                   []string `json:"order_details"`
	OrderFooter             string   `json:"order_footer"`
}
At this point, how can I parse specific fields to the output format?
You should never, ever, ever try to generate JSON by hand. The steps should be:
Get JSON and parse it in a model object.
Create a copy of the model object with all the changes you want.
Convert the copied model object to JSON.
You don't know enough about JSON to modify it on the fly. I know enough, and the only reasonable way is complete parsing, and writing back the complete changes.
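A rough sketch of those steps for this case, using a generic map so optional fields and numbers keep their JSON types (the Output struct and the rebuild function are names made up for illustration):
import (
	"crypto/md5"
	"encoding/base64"
	"encoding/hex"
	"encoding/json"
)

type Output struct {
	ID            string          `json:"id"`
	RawContent    string          `json:"rawContent"`
	ParsedContent json.RawMessage `json:"parsedContent"` // kept as a real JSON object, not a string
}

func rebuild(doc []byte) ([]byte, error) {
	// Parse the whole document generically; integers, strings and arrays all keep their types.
	var in map[string]interface{}
	if err := json.Unmarshal(doc, &in); err != nil {
		return nil, err
	}
	raw, _ := in["raw_content"].(string)
	delete(in, "raw_content") // everything except raw_content becomes parsedContent

	parsed, err := json.Marshal(in)
	if err != nil {
		return nil, err
	}
	sum := md5.Sum([]byte(raw))
	return json.Marshal(Output{
		ID:            hex.EncodeToString(sum[:]),
		RawContent:    base64.RawStdEncoding.EncodeToString([]byte(raw)),
		ParsedContent: parsed,
	})
}
If the model field has to select a concrete struct such as MOD3, the same shape works: unmarshal into MOD3, copy the fields you need into the output struct, and marshal once at the end, never concatenating JSON by hand.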

Using Microsoft.FSharpLu to serialize JSON to a stream

I've been using the Newtonsoft.Json and Newtonsoft.Json.Fsharp libraries to create a new JSON serializer and stream to a file. I like the ability to stream to a file because I'm handling large files and, prior to streaming, often ran into memory issues.
I stream with a simple fx:
open Newtonsoft.Json
open Newtonsoft.Json.FSharp
open System.IO
let writeToJson (path: string) (obj: 'a) : unit =
    let serialized = JsonConvert.SerializeObject(obj)
    let fileStream = new StreamWriter(path)
    let serializer = new JsonSerializer()
    serializer.Serialize(fileStream, obj)
    fileStream.Close()
This works great. My problem is that the JSON string is then absolutely cluttered with stuff I don't need. For example,
let m =
    [
        (1.0M, None)
        (2.0M, Some 3.0M)
        (4.0M, None)
    ]

let makeType (tup: decimal * decimal option) = {FieldA = fst tup; FieldB = snd tup}
let y = List.map makeType m

Default.serialize y
val it : string =
"[{"FieldA": 1.0},
{"FieldA": 2.0,
"FieldB": {
"Case": "Some",
"Fields": [3.0]
}},
{"FieldA": 4.0}]"
If this is written to a JSON and read into R, there are nested dataframes and any of the Fields associated with a Case end up being a list:
library(jsonlite)
library(dplyr)
q <- fromJSON("default.json")
x <-
q %>%
flatten()
x
> x
FieldA FieldB.Case FieldB.Fields
1 1 <NA> NULL
2 2 Some 3
3 4 <NA> NULL
> sapply(x, class)
FieldA FieldB.Case FieldB.Fields
"numeric" "character" "list"
I don't want to have to handle these things in R. I can do it but it's annoying and, if there are files with many, many columns, it's silly.
This morning, I started looking at the Microsoft.FSharpLu.Json documentation. This library has a Compact.serialize function. Quick tests suggest that this library will eliminate the need for nested dataframes and the lists associated with any Case and Field columns. For example:
Compact.serialize y
val it : string =
"[{
"FieldA": 1.0
},
{
"FieldA": 2.0,
"FieldB": 3.0
},
{
"FieldA": 4.0
}
]"
When this string is read into R,
q <- fromJSON("compact.json")
x <- q
x
> x
FieldA FieldB
1 1 NA
2 2 3
3 4 NA
> sapply(x, class)
FieldA FieldB
"numeric" "numeric
This is much simpler to handle in R, and I'd like to start using this library.
However, I don't know if I can get the Compact serializer to serialize to a stream. I see .serializeToFile, .deserializeStream, and .tryDeserializeStream, but nothing that can serialize to a stream. Does anyone know if Compact can handle writing to a stream? How can I make that work?
The helper to serialize to a stream is missing from the Compact module in FSharpLu.Json, but you should be able to do it by following the C# example from
http://www.newtonsoft.com/json/help/html/SerializingJSON.htm. Something along the lines of:
let writeToJson (path: string) (obj: 'a) : unit =
    let serializer = new JsonSerializer()
    serializer.Converters.Add(new Microsoft.FSharpLu.Json.CompactUnionJsonConverter())
    use sw = new StreamWriter(path)
    use writer = new JsonTextWriter(sw)
    serializer.Serialize(writer, obj)

Sending JSON with Go

I'm trying to send a JSON message with Go.
This is the server code:
func (network *Network) Join(
	w http.ResponseWriter,
	r *http.Request) {
	// the request is not interesting
	// the response will be a message with just the clientId value set
	log.Println("client wants to join")
	message := Message{-1, -1, -1, ClientId(len(network.Clients)), -1, -1}

	var buffer bytes.Buffer
	enc := json.NewEncoder(&buffer)
	err := enc.Encode(message)
	if err != nil {
		fmt.Println("error encoding the response to a join request")
		log.Fatal(err)
	}
	fmt.Printf("the json: %s\n", buffer.Bytes())
	fmt.Fprint(w, buffer.Bytes())
}
Network is a custom struct. In the main function, I'm creating a network object and registering its methods as callbacks with http.HandleFunc(...)
func main() {
	runtime.GOMAXPROCS(2)

	var network = new(Network)
	var clients = make([]Client, 0, 10)
	network.Clients = clients

	log.Println("starting the server")
	http.HandleFunc("/request", network.Request)
	http.HandleFunc("/update", network.GetNews)
	http.HandleFunc("/join", network.Join)
	log.Fatal(http.ListenAndServe("localhost:5000", nil))
}
Message is a struct, too. It has six fields, all of a type alias for int.
When a client sends an HTTP GET request to the URL "localhost:5000/join", this should happen:
The method Join on the network object is called
A new Message object with an Id for the client is created
This Message is encoded as JSON
To check if the encoding is correct, the encoded message is printed on the cmd
The message is written to the ResponseWriter
The client is rather simple. It has the exact same code for the Message struct. In the main function it just sends a GET request to "localhost:5000/join" and tries to decode the response. Here's the code
func main() {
	// try to join
	var clientId ClientId
	start := time.Now()
	var message Message

	resp, err := http.Get("http://localhost:5000/join")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)

	dec := json.NewDecoder(resp.Body)
	err = dec.Decode(&message)
	if err != nil {
		fmt.Println("error decoding the response to the join request")
		log.Fatal(err)
	}
	fmt.Println(message)

	duration := time.Since(start)
	fmt.Println("connected after: ", duration)
	fmt.Println("with clientId", message.ClientId)
}
I've started the server, waited a few seconds and then ran the client. This is the result
The server prints "client wants to join"
The server prints "the json: {"What":-1,"Tag":-1,"Id":-1,"ClientId":0,"X":-1,"Y":-1}"
The client prints "200 OK"
The client crashes "error decoding the response to the join request"
The error is "invalid character '3' after array element"
This error message really confused me. After all, nowhere in my JSON is there a stray 3. So I imported io/ioutil on the client and just printed the response with this code:
b, _ := ioutil.ReadAll(resp.Body)
fmt.Printf("the json: %s\n", b)
Please note that the print statement is the same as on the server. I expected to see my encoded JSON. Instead I got this
"200 OK"
"the json: [123 34 87 104 97 116 ....]" the list went on for a long time
I'm new to Go and don't know if I did this correctly, but it seems as if the above code just printed the slice of bytes. Strangely, on the server the output was converted to a string.
My guess is that somehow I'm reading the wrong data or that the message was corrupted on the way between server and client. But honestly these are just wild guesses.
In your server, instead of
fmt.Fprint(w, buffer.Bytes())
you need to use:
w.Write(buffer.Bytes())
The fmt package will format the Bytes() into a human-readable slice with the bytes represented as integers, like so:
[123 34 87 104 97 116 ... etc
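A minimal version of the fixed handler, sketched with json.NewEncoder writing straight to the ResponseWriter (reusing the Message and ClientId types from the question), so the intermediate buffer is not needed either:
func (network *Network) Join(w http.ResponseWriter, r *http.Request) {
	log.Println("client wants to join")
	message := Message{-1, -1, -1, ClientId(len(network.Clients)), -1, -1}

	w.Header().Set("Content-Type", "application/json")
	// Encode writes the JSON text directly to w, so no fmt formatting is involved.
	if err := json.NewEncoder(w).Encode(message); err != nil {
		log.Println("error encoding the response to a join request:", err)
	}
}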
You don't want to use fmt.Print (or fmt.Fprint) to write stuff to the response. For example:
package main

import (
	"fmt"
	"os"
)

func main() {
	bs := []byte("Hello, playground")
	fmt.Fprint(os.Stdout, bs)
}
(playground link)
Produces
[72 101 108 108 111 44 32 112 108 97 121 103 114 111 117 110 100]
Use the Write() method of the ResponseWriter instead
You could have found this out by telnetting to your server as an experiment - always a good idea when you aren't sure what is going on!