monitoring new users

I am a Go student writing a simple API that needs analytics.
I want to monitor new users, i.e. see how many users registered in a specific period. So I pass a date range (a start date and an end date) and return the number of new users.
Here is my func only:
package db

import (
	"time"

	"github.com/sirupsen/logrus"
)

func NewUsersByPeriod(start time.Time, end time.Time) (count int) {
	Qselect := `SELECT COUNT(*) FROM "User" WHERE datereg BETWEEN $1 AND $2;`
	row := connectionVar.QueryRowx(Qselect, start, end)
	err := row.Scan(&count)
	if err != nil {
		logrus.Fatal(err)
	}
	return count
}
My question is how to implement this correctly, and what frameworks could I use?
Any recommendations are welcome.

I use gin-gonic for every RESTful API I make ( https://gin-gonic.github.io/gin/ ). Whether it's 1 QPS or 100,000 QPS, it's a solid performer, simple to use, with great documentation.
You wouldn't want to use logrus.Fatal, mind you, as that would terminate the API. Return the error from your db function instead, and let gin handle it: output it as JSON (or similar) with the correct HTTP status code, obviously.
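For illustration, here is a minimal sketch of how that could fit together. The handler name, route, and date format are my own inventions, and it assumes connectionVar is a package-level *sqlx.DB plus the usual gin, net/http, and time imports:

// db package: return the error instead of terminating the process.
func NewUsersByPeriod(start, end time.Time) (int, error) {
	const q = `SELECT COUNT(*) FROM "User" WHERE datereg BETWEEN $1 AND $2;`
	var count int
	if err := connectionVar.QueryRowx(q, start, end).Scan(&count); err != nil {
		return 0, err
	}
	return count, nil
}

// hypothetical gin handler, e.g. GET /stats/new-users?start=2023-01-01&end=2023-01-31
func newUsersHandler(c *gin.Context) {
	start, err1 := time.Parse("2006-01-02", c.Query("start"))
	end, err2 := time.Parse("2006-01-02", c.Query("end"))
	if err1 != nil || err2 != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "dates must be YYYY-MM-DD"})
		return
	}
	count, err := db.NewUsersByPeriod(start, end)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	c.JSON(http.StatusOK, gin.H{"new_users": count})
}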


go mysql LAST_INSERT_ID() returns 0

I have a MySQL database where I need to add records with a Go program, and I need to retrieve the id of the last added record to add it to another table.
When I run INSERT INTO table1 VALUES("test",1); SELECT LAST_INSERT_ID(); in MySQL Workbench, it returns the last id, which is auto-incremented, with no issues.
If I run my Go code, however, it always prints 0. The code:
_, err := db_client.DBClient.Query("insert into table1 values(?,?)", name, 1)
var id string
err = db_client.DBClient.QueryRow("SELECT LAST_INSERT_ID()").Scan(&id)
if err != nil {
	panic(err.Error())
}
fmt.Println("id: ", id)
To narrow down the problem scope further, I tried this variation: err = db_client.DBClient.QueryRow("SELECT id from table1 where name=\"pleasejustwork\";").Scan(&id), which works perfectly fine; Go returns the actual id.
Why is it not working with LAST_INSERT_ID()?
I'm a newbie in Go, so please don't go hard on me if I'm making silly Go mistakes that lead to this error :D
Thank you in advance.
The MySQL protocol returns the LAST_INSERT_ID() value in its response to an INSERT statement, and the Go driver exposes that value on the sql.Result returned by Exec. So you don't need the extra round trip to get it. These ID values are usually unsigned 64-bit integers.
Try something like this.
res, err := db_client.DBClient.Exec("insert into table1 values(?,?)", name, 1)
if err != nil {
	panic(err.Error())
}
id, err := res.LastInsertId()
if err != nil {
	panic(err.Error())
}
fmt.Println("id: ", id)
I confess I'm not 100% sure why your code didn't work, but the usual culprit is the connection pool: database/sql may run your separate SELECT LAST_INSERT_ID() query on a different pooled connection than the INSERT, and on that connection the value is 0. Whenever you successfully issue a single-row INSERT statement, the next statement on the same database connection always has access to a useful LAST_INSERT_ID() value. This is true whether or not you use explicit transactions.
But if your INSERT is not successful, you must treat the last insert ID value as unpredictable. (That's a technical term for "garbage", trash, rubbish, basura, etc.)
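If you ever do need to issue SELECT LAST_INSERT_ID() yourself, here is a minimal sketch of pinning both statements to one connection via database/sql's Conn, keeping the asker's db_client.DBClient handle (assumes a context import):

conn, err := db_client.DBClient.Conn(context.Background())
if err != nil {
	panic(err.Error())
}
defer conn.Close()

// Both statements now run on the same underlying connection,
// so LAST_INSERT_ID() sees the value set by the INSERT.
if _, err := conn.ExecContext(context.Background(), "insert into table1 values(?,?)", name, 1); err != nil {
	panic(err.Error())
}
var id string
if err := conn.QueryRowContext(context.Background(), "SELECT LAST_INSERT_ID()").Scan(&id); err != nil {
	panic(err.Error())
}
fmt.Println("id: ", id)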

GORM 2.0 get last insert ID

I'm operating on a MySQL database using GORM v2.0. I'm inserting rows into the database using a GORM transaction (tx := db.Begin()). In previous GORM versions, Begin() returned an sql.Tx object that allowed using the LastInsertId() method on the query's return value.
To do that in GORM v2.0, can I simply call db.Last() after inserting the row, or is there a smarter method?
Thank you.
In v2.0 the GetLastInsertId method was removed. As @rustyx says, the ID is populated in the model you pass to the Create function. I wouldn't bother calling db.Last(&...), as this is a bit of a waste when the model will already have it.
type User struct {
	gorm.Model
	Name string
}

user1 := User{Name: "User One"}
_ = db.Transaction(func(tx *gorm.DB) error {
	// return the Create error so the transaction rolls back on failure
	return tx.Create(&user1).Error
})
// This is unnecessary:
// db.Last(&user1)
fmt.Printf("User one ID: %d\n", user1.ID)

How to extract raw query from dbr golang query builder

I'm new to the golang dbr library ( https://godoc.org/github.com/gocraft/dbr )
and I did not find any information about how to get the raw query using this library.
I need something similar to get_compiled_select() from PHP CodeIgniter. I need it to combine multiple complex queries with UNION.
The following will dump the query...
stmt := session.Select("*").From(table).Where("id = ?", ...)

buf := dbr.NewBuffer()
_ = stmt.Build(stmt.Dialect, buf)
fmt.Println(buf.String())

// print the where conditions (with their values) separately
for _, v := range stmt.WhereCond {
	fmt.Println(v)
}
Note that buf.String() will contain placeholders, not the interpolated values; that is why the loop prints the conditions separately.
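If you need the full SQL with the values inlined, dbr also has an InterpolateForDialect helper; a sketch from memory (treat the exact signature and the dialect import path, github.com/gocraft/dbr/dialect, as assumptions):

// buf.Value() holds the values collected while building the statement
raw, err := dbr.InterpolateForDialect(buf.String(), buf.Value(), dialect.PostgreSQL)
if err != nil {
	// handle the error
}
fmt.Println(raw) // the query with the values interpolated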
I'm not so sure the other answer (making the struct field public) is the wise solution, even if it works.
IMO, a better solution would be creating a new getter function inside select.go:
func (sel *SelectStmt) GetRaw() string {
	return sel.raw.Query
}
With this method, it should be easier to maintain.
You can set the raw struct from expr as public.
I hope it helps.

Retrieve relation one to many into JSON sql pure, Golang, Performance

Suppose that I have the following structs, which map the tables:
type Publisher struct {
	ID    int    `db:"id"`
	Name  string `db:"name"`
	Books []*Book
}

type Book struct {
	ID          int    `db:"id"`
	Name        string `db:"name"`
	PublisherID int    `db:"publisher_id"`
}
Now, what if I want to retrieve all the publishers with all their related books? I would like to get JSON like this:
[
  // Publisher 1
  {
    "id": "10001",
    "name": "Publisher1",
    "books": [
      { "id": 321, "name": "Book1" },
      { "id": 333, "name": "Book2" }
    ]
  },
  // Publisher 2
  {
    "id": "10002",
    "name": "Slytherin Publisher",
    "books": [
      { "id": 4021, "name": "Harry Potter and the Chamber of Secrets" },
      { "id": 433, "name": "Harry Potter and the Order of the Phoenix" }
    ]
  }
]
So I have the following struct that I use for every query related to Publisher:
type PublisherRepository struct {
	Connection *sql.DB
}

// GetBooks returns all the books related to a publisher.
func (r *PublisherRepository) GetBooks(idPublisher int) []*Book {
	bs := make([]*Book, 0)
	query := "SELECT * FROM books b WHERE b.publisher_id = $1"
	rows, err := r.Connection.Query(query, idPublisher)
	if err != nil {
		// log
	}
	defer rows.Close()
	for rows.Next() {
		b := &Book{}
		rows.Scan(&b.ID, &b.Name, &b.PublisherID)
		bs = append(bs, b)
	}
	return bs
}
func (r *PublisherRepository) GetAllPublishers() []*Publisher {
	query := "SELECT * FROM publishers"
	ps := make([]*Publisher, 0)
	rows, err := r.Connection.Query(query)
	if err != nil {
		// log
	}
	defer rows.Close()
	for rows.Next() {
		p := &Publisher{}
		rows.Scan(&p.ID, &p.Name)
		// Is this the best way?
		p.Books = r.GetBooks(p.ID)
		ps = append(ps, p)
	}
	return ps
}
So, here are my questions:
What is the best approach to retrieve all the publishers with the best performance? A query inside a loop is not the best solution; what if I have 200 publishers and each publisher has 100 books on average?
Is PublisherRepository idiomatic in Go, or is there another way to manage the queries and transactions of an entity with pure SQL?
1) The bad part about this is the SQL request per iteration. So here is a solution that does not make an extra request per publisher:
func (r *PublisherRepository) GetAllPublishers() []*Publisher {
	query := "SELECT * FROM publishers"
	ps := make(map[int]*Publisher)
	rows, err := connection.Query(query)
	if err != nil {
		// log
	}
	for rows.Next() {
		p := &Publisher{}
		rows.Scan(&p.ID, &p.Name)
		ps[p.ID] = p
	}

	query = "SELECT * FROM books"
	rows, err = connection.Query(query)
	if err != nil {
		// log
	}
	for rows.Next() {
		b := &Book{}
		rows.Scan(&b.ID, &b.Name, &b.PublisherID)
		ps[b.PublisherID].Books = append(ps[b.PublisherID].Books, b)
	}

	// you might choose to keep the map as a return value, but otherwise:
	// preallocate memory for the slice
	publishers := make([]*Publisher, 0, len(ps))
	for _, p := range ps {
		publishers = append(publishers, p)
	}
	return publishers
}
2) Unless you create the PublisherRepository only once, this might be a bad idea, as you would be creating and closing loads of connections. Depending on your SQL client implementation, I would suggest (and have also seen this in many other Go database clients) having one connection for the entire server. Pooling is done internally by many of the SQL clients, which is why you should check yours.
If your SQL client library does pooling internally, use a global variable for the "connection" (it's not really one connection if pooling is done internally):
var connection *sql.DB

func New() *PublisherRepository {
	repo := &PublisherRepository{}
	return repo.connect()
}

type PublisherRepository struct {
}

func (r *PublisherRepository) connect() *PublisherRepository {
	// open a new connection if connection is nil
	// or not open (if there is such a state);
	// you can also check "once.Do" if that suits your needs better
	if connection == nil {
		// ...
	}
	return r
}
So each time you create a new PublisherRepository, it will only check whether the connection already exists. If you use once.Do, Go will create the "connection" only once and you are done with it.
If you have other structs that will use the connection as well, you need a global place for your connection variable or (even better) you can write a little wrapper package for your SQL client, which is in turn used in all your structs.
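For illustration, a minimal sketch of such a wrapper package using once.Do (the package name, driver, and DSN are made up):

package db // hypothetical wrapper package

import (
	"database/sql"
	"sync"

	_ "github.com/lib/pq" // assumed driver; use whichever you actually use
)

var (
	connection *sql.DB
	once       sync.Once
)

// Conn opens the shared pool exactly once and returns it.
func Conn() *sql.DB {
	once.Do(func() {
		var err error
		connection, err = sql.Open("postgres", "postgres://user:pass@localhost/mydb?sslmode=disable")
		if err != nil {
			panic(err) // sketch only; handle properly in real code
		}
	})
	return connection
}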
In your case, the simplest way would be to use json_agg in the query, like here: http://sqlfiddle.com/#!15/97c41/4 (sqlfiddle is slow, so here is a screenshot: http://i.imgur.com/hxMPkUa.png ). It's not very Go-friendly (you need to unmarshal the query result data if you want to do something with the books), but you get all the books in one query, as you wanted, without for loops.
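A sketch of how that could look against the tables above, assuming PostgreSQL (the placeholders in the question are $1-style); the method name and exact query are my own guesses, not the fiddle's, and it needs an encoding/json import:

func (r *PublisherRepository) GetAllPublishersAgg() []*Publisher {
	rows, err := r.Connection.Query(`
		SELECT p.id, p.name,
		       json_agg(json_build_object('id', b.id, 'name', b.name)) AS books
		FROM publishers p
		JOIN books b ON b.publisher_id = p.id
		GROUP BY p.id, p.name`)
	if err != nil {
		// log
	}
	defer rows.Close()
	ps := make([]*Publisher, 0)
	for rows.Next() {
		p := &Publisher{}
		var booksJSON []byte
		rows.Scan(&p.ID, &p.Name, &booksJSON)
		// unmarshal the aggregated JSON into the Books slice
		// (field names match case-insensitively, so no json tags are needed)
		json.Unmarshal(booksJSON, &p.Books)
		ps = append(ps, p)
	}
	return ps
}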
As @TehSphinX said, it is better to have a single global db connection.
But before implementing strange queries, I really suggest you think about why you need to return the full list of publishers and their books in one API query. I can't imagine a situation in a web or mobile app where that would be a good decision. Usually you just show users the list of publishers; then a user chooses one and you show them the list of books by this publisher. This is a "win-win" situation for you and your users: you can make simple queries, and your users get the small sets of data they actually need, without paying for unnecessary traffic or wasting browser memory. As you said, there can be 200 publishers with 100 books each, and I'm sure your users don't need 20,000 books loaded in one request. Unless, of course, you are trying to make your API more data-theft friendly.
Even if you have something like a short preview-like list of books for each publisher, you should think about pagination for publishers and/or denormalisation of the books data for this case (add a column to the publishers table with the short list of books in JSON format). A pagination sketch follows below.
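For example, publisher pagination could be as simple as this sketch (the method name and the page/pageSize parameters are illustrative, reusing the global connection from the answer above):

// GetPublishersPage fetches one page of publishers; page is zero-based.
func (r *PublisherRepository) GetPublishersPage(page, pageSize int) ([]*Publisher, error) {
	rows, err := connection.Query(
		"SELECT id, name FROM publishers ORDER BY id LIMIT $1 OFFSET $2",
		pageSize, page*pageSize)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	ps := make([]*Publisher, 0, pageSize)
	for rows.Next() {
		p := &Publisher{}
		if err := rows.Scan(&p.ID, &p.Name); err != nil {
			return nil, err
		}
		ps = append(ps, p)
	}
	return ps, rows.Err()
}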

Golang query multiple databases with a JOIN

Using the Go example below, how can I query (JOIN) across multiple databases?
For example, I want to have the relation db1.username.id = db2.comments.username_id.
id := 123
var username string
err := db.QueryRow("SELECT username FROM users WHERE id=?", id).Scan(&username)
switch {
case err == sql.ErrNoRows:
	log.Printf("No user with that ID.")
case err != nil:
	log.Fatal(err)
default:
	fmt.Printf("Username is %s\n", username)
}
As you are using MySQL, you can select fields across databases; see this related question for details. For example, you should be able to do this:
err := db.QueryRow(`
	SELECT db1.users.username
	FROM db1.users
	JOIN db2.comments
	  ON db1.users.id = db2.comments.username_id
`).Scan(&username)
You could, of course, simply fetch all entries from db2.comments using a second database connection and use those values in a query against db1.users. But that is not recommended: joining is the job of the database server, and it can most likely do it better than you.