I need your help.
I wanted to build a simple API and got stuck on a problem.
I chose gin and database/sql with the postgres driver:
package main
import (
"database/sql"
"fmt"
"github.com/gin-gonic/gin"
_ "github.com/lib/pq"
)
func main() {
router := gin.Default()
router.GET("/search/:text", SearchWord)
router.Run(":8080")
}
I need to query the DB and turn the result into JSON.
func checkErr(err error) {
if err != nil {
panic(err)
}
}
type Message struct {
ticket_id int `json:"ticket_id"`
event string `json:"event"`
}
func SearchWord(c *gin.Context) {
word := c.Params.ByName("text")
db, err := sql.Open("postgres", "host=8.8.8.8 user= password= dbname=sample")
defer db.Close()
checkErr(err)
rows, err2 := db.Query("SELECT ticket_id,event FROM .... $1", word)
checkErr(err2)
for rows.Next() {
var ticket_id int
var event string
err = rows.Scan(&ticket_id, &event)
checkErr(err)
fmt.Printf("%d | %s \n\n", ticket_id, event)
}
}
This code works fine, but now I need to produce JSON.
I need to make a struct for a row:
type Message struct {
ticket_id int `json:"ticket_id"`
event string `json:"event"`
}
and then I need to create a slice, append to it on every rows.Next() iteration, and then respond to the browser with JSON...
c.JSON(200, messages)
But I don't know how to do that... :(
Disclaimer: I am brand new to Go.
Since you scanned your column data into variables, you should be able to initialize a struct with their values:
m := &Message{ticket_id: ticket_id, event: event}
You could initialize a slice with
s := make([]*Message, 0)
And then append each of your message structs after instantiation:
s = append(s, m)
Because I'm not too familiar with Go, there are a couple of things I'm not sure about:
After copying data from the query into your vars using rows.Scan, does initializing the Message struct copy the current iteration's values as expected?
If there is a way to get the total number of rows from your query, it might be slightly more performant to initialize a fixed-length array instead of a slice.
I think #inf's deleted answer about marshalling your Message to JSON down the line might need to be addressed, and Message's fields might need to be capitalized.
copied from #inf:
The names of the members of your struct need be capitalized so that
they get exported and can be accessed.
type Message struct {
Ticket_id int `json:"ticket_id"`
Event string `json:"event"`
}
I'm going to cheat a little here and fix a few things along the way:
First: open your database connection pool once at program start-up (and not on every request).
Second: we'll use sqlx to make it easier to marshal our database rows into our struct.
package main

import (
"log"

"github.com/gin-gonic/gin"
"github.com/jmoiron/sqlx"
_ "github.com/lib/pq"
)

var db *sqlx.DB
func main() {
var err error
// sqlx.Connect also checks that the connection works.
// sql.Open only "establishes" a pool, but doesn't ping the DB.
db, err = sqlx.Connect("postgres", "postgres:///...")
if err != nil {
log.Fatal(err)
}
router := gin.Default()
router.GET("/search/:text", SearchWord)
router.Run(":8080")
}
// in_another_file.go
type Message struct {
TicketID int `json:"ticket_id" db:"ticket_id"`
Event string `json:"event" db:"event"`
}
func SearchWord(c *gin.Context) {
word := c.Params.ByName("text")
// We create a slice of structs to marshal our rows into
messages := []*Message{}
// Our DB connection pool is safe to use concurrently from here
err := db.Select(&messages, "SELECT ticket_id,event FROM .... $1", word)
if err != nil {
http.Error(c.Writer, err.Error(), 500)
return
}
// Write it out using gin-gonic's JSON writer.
c.JSON(200, messages)
}
I hope that's clear. sqlx also takes care of calling rows.Close() for you, which will otherwise leave connections hanging.
Related
I am working on deserializing JSON into a struct as shown below, and it works fine.
type DataConfigs struct {
ClientMetrics []Client `json:"ClientMetrics"`
}
type Client struct {
ClientId int `json:"clientId"`
.....
.....
}
const (
ConfigFile = "clientMap.json"
)
func ReadConfig(path string) (*DataConfigs, error) {
files, err := utilities.FindFiles(path, ConfigFile)
// check for error here
var dataConfig DataConfigs
body, err := ioutil.ReadFile(files[0])
// check for error here
err = json.Unmarshal(body, &dataConfig)
// check for error here
return &dataConfig, nil
}
Now I am trying to build a map from int to Client using the DataConfigs object created above. So I wrote a function to do the job, as shown below, and modified the ReadConfig method accordingly.
func ReadConfig(path string, logger log.Logger) (*DataConfigs, error) {
files, err := utilities.FindFiles(path, ConfigFile)
// check for error here
var dataConfig DataConfigs
body, err := ioutil.ReadFile(files[0])
// check for error here
err = json.Unmarshal(body, &dataConfig)
// check for error here
idx := BuildIndex(dataConfig)
// now how to add this idx and dataConfig object in one struct?
return &dataConfig, nil
}
func BuildIndex(dataConfig DataConfigs) map[int]Client {
m := make(map[int]Client)
for _, dataConfig := range dataConfig.ClientMetrics {
m[dataConfig.ClientId] = dataConfig
}
return m
}
My confusion is: should I modify the DataConfigs struct to also hold the idx map and return that from ReadConfig, or should I create a new struct for it?
Basically, I want to return the DataConfigs struct, which has the ClientMetrics array, along with the idx map. How can I do this? I am slightly confused because I only started with Go recently.
This is basically a design question with multiple options. First, I would avoid adding the map to your original DataConfigs type since it does not match the json representation. This could lead to confusion down the road.
Which option to choose depends on your requirements and preferences. Some ideas off the top of my head:
Have you considered returning the map only? After all, you've got every Client in your map. If you need to iterate all Clients you can iterate all values of your map.
Second option is to return the map in addition to DataConfigs. Go allows to return multiple values from a function as you already do for error handling.
Finally, you could wrap DataConfigs and your map in a new simple struct type as you already guessed.
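For the last option, here is a minimal sketch of such a wrapper type. The ConfigIndex name, its fields, and the constructor are made up for illustration; only DataConfigs and Client come from the question.

```go
package main

import "fmt"

type Client struct {
	ClientId int `json:"clientId"`
}

type DataConfigs struct {
	ClientMetrics []Client `json:"ClientMetrics"`
}

// ConfigIndex (hypothetical name) keeps the decoded DataConfigs and the
// derived lookup map together, without touching the JSON representation
// of DataConfigs itself.
type ConfigIndex struct {
	Configs *DataConfigs
	ByID    map[int]Client
}

// NewConfigIndex builds the map once, up front.
func NewConfigIndex(dc *DataConfigs) *ConfigIndex {
	m := make(map[int]Client, len(dc.ClientMetrics))
	for _, c := range dc.ClientMetrics {
		m[c.ClientId] = c
	}
	return &ConfigIndex{Configs: dc, ByID: m}
}

func main() {
	dc := &DataConfigs{ClientMetrics: []Client{{ClientId: 7}, {ClientId: 9}}}
	idx := NewConfigIndex(dc)
	fmt.Println(idx.ByID[9].ClientId) // 9
}
```

ReadConfig would then return *ConfigIndex instead of *DataConfigs, and callers get both the slice and the map from one value.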
First of all, I'm new here and I'm trying to learn Go. I would like to read my CSV file (which has 3 values: type, maker, model), create a new one, and after a filter operation write the new (filtered) data to the created CSV file. Here is my code so you can understand me more clearly.
package main
import (
"encoding/csv"
"fmt"
"os"
)
func main() {
//openning my csv file which is vehicles.csv
recordFile, err := os.Open("vehicles.csv")
if err != nil{
fmt.Println("An error encountered ::", err)
}
//reading it
reader := csv.NewReader(recordFile)
vehicles, _ := reader.ReadAll()
//creating a new csv file
newRecordFile, err := os.Create("newCsvFile.csv")
if err != nil{
fmt.Println("An error encountered ::", err)
}
//writing vehicles.csv into the new csv
writer := csv.NewWriter(newRecordFile)
err = writer.WriteAll(vehicles)
if err != nil {
fmt.Println("An error encountered ::", err)
}
}
After I build it, it works: it reads and writes all the data to the newly created CSV file. But the problem is that I want to filter duplicates out of the CSV I read (vehicles). I am writing another function (outside of main) to filter duplicates, but I can't manage it because vehicles' type is [][]string; I searched the internet for filtering duplicates, but all I found covered int or string types. What I want to do is create a function and call it before the WriteAll operation, so WriteAll writes the correct (duplicate-filtered) data into the new CSV file. Help me please!
I appreciate any answer.
Happy coding!
This depends on how you define "uniqueness", but in general there are a few parts of this problem.
What is unique?
All fields must be equal
Only some fields must be equal
Normalize some or all fields before comparing
You have a few approaches for applying your uniqueness, including:
You can use a map, keyed by the "pieces" of uniqueness, requires O(N) state
You can sort the records and compare with the prior record as you iterate, requires O(1) state but is more complicated
You have two approaches for filtering and outputting:
You can build a new slice based on the old one using a loop and write all at once, this requires O(N) space
You can write the records out to the file as you go if you don't need to sort, this requires O(1) space
I think a reasonably simple and performant approach would be to pick (1) from the first, (1) from the second, and (2) from the third, which together would look like:
package main
import (
"encoding/csv"
"errors"
"io"
"log"
"os"
)
func main() {
input, err := os.Open("vehicles.csv")
if err != nil {
log.Fatalf("opening input file: %s", err)
}
output, err := os.Create("vehicles_filtered.csv")
if err != nil {
log.Fatalf("creating output file: %s", err)
}
defer func() {
// Ensure the file is closed at the end of the program
if err := output.Close(); err != nil {
log.Fatalf("finalizing output file: %s", err)
}
}()
reader := csv.NewReader(input)
writer := csv.NewWriter(output)
seen := make(map[[3]string]bool)
for {
// Read in one record
record, err := reader.Read()
if errors.Is(err, io.EOF) {
break
}
if err != nil {
log.Fatalf("reading record: %s", err)
}
if len(record) != 3 {
log.Printf("bad record %q", record)
continue
}
// Check if the record has been seen before, skipping if so
key := [3]string{record[0], record[1], record[2]}
if seen[key] {
continue
}
seen[key] = true
// Write the record
if err := writer.Write(record); err != nil {
log.Fatalf("writing record %d: %s", len(seen), err)
}
}
}
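For completeness, here is a sketch of the sort-and-compare alternative (option 2 above). After sorting, duplicates are adjacent, so the comparison pass only needs the previously kept record instead of a map; note the records themselves must still all be in memory for the sort, so this mainly saves the map's extra state.

```go
package main

import (
	"fmt"
	"sort"
)

// equalRecords reports whether two CSV records are identical.
func equalRecords(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// dedupSorted sorts the records lexicographically, then drops
// consecutive duplicates in a single pass.
func dedupSorted(records [][]string) [][]string {
	sort.Slice(records, func(i, j int) bool {
		a, b := records[i], records[j]
		for k := 0; k < len(a) && k < len(b); k++ {
			if a[k] != b[k] {
				return a[k] < b[k]
			}
		}
		return len(a) < len(b)
	})
	var out [][]string
	for _, r := range records {
		// Only compare against the last record we kept.
		if len(out) > 0 && equalRecords(out[len(out)-1], r) {
			continue
		}
		out = append(out, r)
	}
	return out
}

func main() {
	records := [][]string{
		{"car", "toyota", "corolla"},
		{"car", "honda", "civic"},
		{"car", "toyota", "corolla"},
	}
	for _, r := range dedupSorted(records) {
		fmt.Println(r)
	}
	// [car honda civic]
	// [car toyota corolla]
}
```

One trade-off: sorting reorders the output, so use the map approach from the main answer if the original row order must be preserved.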
I need to get one param from posted json.
And I don't want to make struct for only this.
This is what I have tried
type NewTask struct {
Price uint64 `json:"price"`
}
func (pc TaskController) Create(c *gin.Context) {
var service Service
if err := c.BindJSON(&service); err != nil {
log.Println(err) // this works
}
var u NewTask
if err := c.BindJSON(&u); err != nil {
log.Println(err) // this return EOF error
}
fmt.Println(u.Price)
}
Requested Json data have many other fields including price
{
...other fields
price: 30
}
But this doesn't work. I think it's because I am binding twice. How can I bind multiple times successfully?
Thanks
Try ShouldBindJSON instead. BindJSON reads the request body directly, so you are at EOF once the body has been read; a second read finds nothing.
ShouldBindJSON stores the request body in the context and reuses it when called again.
In the following example from Web Development with Go by Shiju Varghese, which implements an HTTP server using a new MongoDB session for each HTTP request:
Why is json package's Decode method used in PostCategory function?
Why is json package's Marshal method used in GetCategories function?
At first I thought that Decode in PostCategory and Marshal in GetCategories were opposites of each other, but later I found that there is an Unmarshal method, and maybe an Encode one, in the json package. So I asked a question earlier.
Here is the program
package main
import (
"encoding/json"
"log"
"net/http"
"github.com/gorilla/mux"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
var session *mgo.Session
type (
Category struct {
Id bson.ObjectId `bson:"_id,omitempty"`
Name string
Description string
}
DataStore struct {
session *mgo.Session
}
)
//Close mgo.Session
func (d *DataStore) Close() {
d.session.Close()
}
//Returns a collection from the database.
func (d *DataStore) C(name string) *mgo.Collection {
return d.session.DB("taskdb").C(name)
}
//Create a new DataStore object for each HTTP request
func NewDataStore() *DataStore {
ds := &DataStore{
session: session.Copy(),
}
return ds
}
//Insert a record
func PostCategory(w http.ResponseWriter, r *http.Request) {
var category Category
// Decode the incoming Category json
err := json.NewDecoder(r.Body).Decode(&category)
if err != nil {
panic(err)
}
ds := NewDataStore()
defer ds.Close()
//Getting the mgo.Collection
c := ds.C("categories")
//Insert record
err = c.Insert(&category)
if err != nil {
panic(err)
}
w.WriteHeader(http.StatusCreated)
}
//Read all records
func GetCategories(w http.ResponseWriter, r *http.Request) {
var categories []Category
ds := NewDataStore()
defer ds.Close()
//Getting the mgo.Collection
c := ds.C("categories")
iter := c.Find(nil).Iter()
result := Category{}
for iter.Next(&result) {
categories = append(categories, result)
}
w.Header().Set("Content-Type", "application/json")
j, err := json.Marshal(categories)
if err != nil {
panic(err)
}
w.WriteHeader(http.StatusOK)
w.Write(j)
}
func main() {
var err error
session, err = mgo.Dial("localhost")
if err != nil {
panic(err)
}
r := mux.NewRouter()
r.HandleFunc("/api/categories", GetCategories).Methods("GET")
r.HandleFunc("/api/categories", PostCategory).Methods("POST")
server := &http.Server{
Addr: ":8080",
Handler: r,
}
log.Println("Listening...")
server.ListenAndServe()
}
I think the main reason for using json.NewDecoder here is to read directly from the request body (r.Body), since NewDecoder takes an io.Reader as input.
You could have used json.Unmarshal, but then you'd have to first read the request body into a []byte and pass that value to json.Unmarshal. NewDecoder is more convenient here.
TL;DR — Marshal/Unmarshal take and return byte slices, while Encode/Decode do the same thing, but read the bytes from a stream such as a network connection (readers and writers).
The encoding/json package uses the Encoder and Decoder types to act on streams of data, that is, io.Reader's and io.Writer's. This means that you can take data directly from a network socket (or an HTTP body in this case which implements io.Reader) and transform it to JSON as the bytes come in. Doing it this way, we can go ahead and start processing that JSON as soon as any data is available but before we've received the whole document (on a slow network connection with a big document this could save us a lot of time, and for some streaming protocols with "infinitely sized" document streams this is absolutely necessary!)
Marshal and Unmarshal, however, operate on byte slices, which means you have to have the entire JSON document in memory before you can use them. In your example, the author uses Marshal because they want the whole document as a []byte before writing it out: there's no point in constructing a buffer, making an encoder that uses that buffer, and then calling Encode, when Marshal does all of that for them.
In reality, Marshal/Unmarshal are just convenience functions on top of Encoders and Decoders. If we look at the source for Marshal, we see that under the hood it's just constructing an encoder (or the internal representation of one; they're the same thing, and for proof you can look at the Encode method's source and see that it also creates an encodeState) and then returning the output bytes:
func Marshal(v interface{}) ([]byte, error) {
e := &encodeState{}
err := e.marshal(v)
if err != nil {
return nil, err
}
return e.Bytes(), nil
}
I am new to BoltDB and Golang, and trying to get your help.
So, I understand that I can only store byte slices ([]byte) for keys and values in BoltDB. If I have a struct of user as below, and the key will be the username, what would be the best way to store the data in BoltDB where it expects byte slices?
Serializing it or JSON? Or better way?
type User struct {
name string
age int
location string
password string
address string
}
Thank you so much, have a good evening
Yes, I would recommend marshaling the User struct to JSON and then use a unique key []byte slice. Don't forget that marshaling to JSON only includes the exported struct fields, so you'll need to change your struct as shown below.
For another example, see the BoltDB GitHub page.
type User struct {
Name string
Age int
Location string
Password string
Address string
}
// usersBucket must be defined somewhere, e.g.:
var usersBucket = []byte("users")

func (user *User) save(db *bolt.DB) error {
// Store the user model in the user bucket using the username as the key.
err := db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists(usersBucket)
if err != nil {
return err
}
encoded, err := json.Marshal(user)
if err != nil {
return err
}
return b.Put([]byte(user.Name), encoded)
})
return err
}
A good option is the Storm package, which allows for exactly what you are wanting to do:
package main
import (
"fmt"
"github.com/asdine/storm/v3"
)
type user struct {
ID int `storm:"increment"`
// Fields must be exported: Storm's default JSON codec only
// persists exported fields, just like encoding/json.
Address string
Age int
}
func main() {
db, e := storm.Open("storm.db")
if e != nil {
panic(e)
}
defer db.Close()
u := user{Address: "123 Main St", Age: 18}
if e := db.Save(&u); e != nil {
panic(e)
}
fmt.Printf("%+v\n", u) // {ID:1 Address:123 Main St Age:18}
}
As you can see, you don't have to worry about marshalling; Storm takes care of it for you. By default it uses JSON, but you can configure it to use GOB or others as well:
https://github.com/asdine/storm