How to get id of last inserted row from sqlx? - mysql

I'd like to get back the id of the last post inserted into a MySQL database using sqlx:
resultPost, err := shared.Dbmap.Exec("INSERT INTO post (user_id, description, link) VALUES (?, ?, ?)", userID, title, destPath)
if err != nil {
    log.Println(err)
    c.JSON(
        http.StatusInternalServerError,
        gin.H{"error": "internal server error"})
    return // without this, the handler falls through and prints a nil result
}
fmt.Println("resultPost is:", resultPost)
The problem is that the resultPost is printed as an object:
resultPost is: {0xc420242000 0xc4202403a0}
So I'm wondering what is the correct way to extract the id of the row just inserted?

The Result value returned by Exec is not meant to be printed or accessed directly; it is an interface with two methods, LastInsertId() and RowsAffected(), and the first one is what you want here.
lastId, err := resultPost.LastInsertId()
if err != nil {
    panic(err)
}
fmt.Println("LastInsertId: ", lastId)

Looks like you just need:
resultPost.LastInsertId()
For more information, search for LastInsertId in the database/sql package documentation.
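Putting the two answers together, a minimal sketch of the handler body, reusing the question's shared.Dbmap, gin context c, and variables (the success response shape is just an assumption for illustration):

resultPost, err := shared.Dbmap.Exec(
    "INSERT INTO post (user_id, description, link) VALUES (?, ?, ?)",
    userID, title, destPath)
if err != nil {
    log.Println(err)
    c.JSON(http.StatusInternalServerError, gin.H{"error": "internal server error"})
    return
}
// LastInsertId returns the AUTO_INCREMENT value MySQL generated for this INSERT.
postID, err := resultPost.LastInsertId()
if err != nil {
    log.Println(err)
    c.JSON(http.StatusInternalServerError, gin.H{"error": "internal server error"})
    return
}
c.JSON(http.StatusOK, gin.H{"id": postID})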

Related

go mysql LAST_INSERT_ID() returns 0

I have this MySQL database where I need to add records with a Go program and retrieve the id of the last added record, to add that id to another table.
When I run INSERT INTO table1 VALUES("test",1); SELECT LAST_INSERT_ID(); in MySQL Workbench, it returns the last id, which is auto incremented, with no issues.
If I run my go code however, it always prints 0. The code:
_, err := db_client.DBClient.Query("insert into table1 values(?,?)", name, 1)
var id string
err = db_client.DBClient.QueryRow("SELECT LAST_INSERT_ID()").Scan(&id)
if err != nil {
    panic(err.Error())
}
fmt.Println("id: ", id)
I tried this variation to narrow down the problem scope: err = db_client.DBClient.QueryRow("SELECT id from table1 where name=\"pleasejustwork\";").Scan(&id), which works perfectly fine; Go returns the actual id.
Why is it not working with the LAST_INSERT_ID()?
I'm a newbie in Go, so please don't go hard on me if I'm making stupid Go mistakes that lead to this error :D
Thank you in advance.
The MySQL protocol returns the LAST_INSERT_ID() value in its response to an INSERT statement, and the Go MySQL driver exposes that value, so you don't need an extra round trip to fetch it. These ID values are usually unsigned 64-bit integers.
Try something like this.
res, err := db_client.DBClient.Exec("insert into table1 values(?,?)", name, 1)
if err != nil {
    panic(err.Error())
}
id, err := res.LastInsertId()
if err != nil {
    panic(err.Error())
}
fmt.Println("id: ", id)
Whenever you successfully issue a single-row INSERT statement, the next statement on the same database connection always has access to a useful LAST_INSERT_ID() value, with or without explicit transactions. The catch in your code is the word "same": database/sql manages a pool of connections, so your Query and the following QueryRow are not guaranteed to run on the same connection, and LAST_INSERT_ID() is tracked per connection. Using Query for an INSERT also leaves an unclosed sql.Rows pinning its connection, which makes it even more likely the SELECT lands on a fresh connection where LAST_INSERT_ID() is 0.
But if your INSERT is not successful, you must treat the last insert ID value as unpredictable. (That's a technical term for "garbage", trash, rubbish, basura, etc.)
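If you do need to run SELECT LAST_INSERT_ID() as a separate statement, you can pin a single connection so both statements are guaranteed to share it; a sketch using database/sql's Conn, with the table and variables from the question:

ctx := context.Background()

// Reserve one connection from the pool; LAST_INSERT_ID() is per-connection
// state, so both statements must run on this same connection.
conn, err := db_client.DBClient.Conn(ctx)
if err != nil {
    panic(err.Error())
}
defer conn.Close()

if _, err := conn.ExecContext(ctx, "insert into table1 values(?,?)", name, 1); err != nil {
    panic(err.Error())
}

var id uint64
if err := conn.QueryRowContext(ctx, "SELECT LAST_INSERT_ID()").Scan(&id); err != nil {
    panic(err.Error())
}
fmt.Println("id:", id)

That said, res.LastInsertId() above remains the simpler route.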

Safely perform DB migrations with Go

Let's say I have a web app that shows a list of posts. The post struct is:
type Post struct {
    Id    int64 `sql:",primary"`
    Title string
    Body  string
}
It retrieves the posts with:
var posts []*Post
rows, err := db.QueryContext(ctx, "select * from posts;")
if err != nil {
    return nil, oops.Wrapf(err, "could not get posts")
}
defer rows.Close()
for rows.Next() {
    p := &Post{}
    err := rows.Scan(
        &p.Id,
        &p.Title,
        &p.Body,
    )
    if err != nil {
        return nil, oops.Wrapf(err, "could not scan row")
    }
    posts = append(posts, p)
}
return posts, nil
All works fine. Now, I want to alter the table schema by adding a column:
ALTER TABLE posts ADD author varchar(62);
Suddenly, the requests to get posts result in:
sql: expected 4 destination arguments in Scan, not 3
which makes sense since the table now has 4 columns instead of the 3 stipulated by the retrieval logic.
I can then update the struct to be:
type Post struct {
    Id     int64 `sql:",primary"`
    Title  string
    Body   string
    Author string
}
and the retrieval logic to be:
for rows.Next() {
    p := &Post{}
    err := rows.Scan(
        &p.Id,
        &p.Title,
        &p.Body,
        &p.Author, // note the trailing comma Go requires in a multi-line call
    )
    if err != nil {
        return nil, oops.Wrapf(err, "could not scan row")
    }
    posts = append(posts, p)
}
which solves this. However, this implies there is always a period of downtime between migration and logic update + deploy. How to avoid that downtime?
I have tried swapping the order of the above changes but this does not work, with that same request resulting in:
sql: expected 3 destination arguments in Scan, not 4
(which makes sense, since the table only has 3 columns at that point as opposed to 4);
and other requests resulting in:
Error 1054: Unknown column 'author' in 'field list'
(which makes sense, because at that point the posts table does not have an author column just yet)
You should be able to achieve the desired behavior by adapting the SQL query to return exactly the fields you want to populate, instead of select *:
SELECT Id, Title, Body FROM posts;
This way, even after you add the Author column, the query result still contains only 3 values, so the old code keeps working during the migration.
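In the retrieval code from the question that looks like the sketch below; because the column list and the Scan targets now move in lockstep, the ALTER TABLE can run before or after the deploy without breaking either version:

rows, err := db.QueryContext(ctx, "SELECT id, title, body FROM posts;")
if err != nil {
    return nil, oops.Wrapf(err, "could not get posts")
}
defer rows.Close()

var posts []*Post
for rows.Next() {
    p := &Post{}
    // Scan exactly the columns named in the SELECT, in the same order.
    if err := rows.Scan(&p.Id, &p.Title, &p.Body); err != nil {
        return nil, oops.Wrapf(err, "could not scan row")
    }
    posts = append(posts, p)
}
return posts, nil

Once the column exists in every environment, a later deploy can extend the column list and the struct together.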

How to avoid race conditions in GORM

I am developing a system to enable patient registration with an incremental queue number. I am using Go, GORM, and MySQL.
An issue arises when more than one patient registers at the same time: they tend to get the same queue number, which should not happen.
I attempted to use transactions and hooks to achieve this, but I still got duplicate queue numbers. I have not found any resource on how to lock the database while a transaction is in progress.
func (r repository) CreatePatient(pat *model.Patient) error {
    tx := r.db.Begin()
    defer func() {
        if r := recover(); r != nil {
            tx.Rollback()
        }
    }()
    err := tx.Error
    if err != nil {
        return err
    }

    // 1. get latest queue number and assign it to patient object
    var queueNum int64
    err = tx.Model(&model.Patient{}).Where("registration_id", pat.RegistrationID).Select("queue_number").Order("created_at desc").First(&queueNum).Error
    if err != nil && err != gorm.ErrRecordNotFound {
        tx.Rollback()
        return err
    }
    pat.QueueNumber = queueNum + 1

    // 2. write patient data into the db
    err = tx.Create(pat).Error
    if err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit().Error
}
As stated by @O. Jones, transactions don't save you here because you're extracting the largest value of a column, incrementing it outside the db, and then saving that new value. From the database's point of view the updated value has no dependence on the queried value.
You could try doing the update in a single query, which would make the dependence obvious:
UPDATE patient AS p
JOIN (
    SELECT max(queue_number) AS queue_number FROM patient WHERE registration_id = ?
) maxp
SET p.queue_number = maxp.queue_number + 1
WHERE id = ?
In gorm you can't run a complex update like this, so you'll need to make use of Exec.
I'm not 100% certain the above will work because I'm less familiar with MySQL transaction isolation guarantees.
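With GORM that raw statement would be issued via Exec, roughly like this (a sketch; pat.ID and the column names are taken from the question's model and may need adjusting):

err := r.db.Exec(`
    UPDATE patient AS p
    JOIN (
        SELECT MAX(queue_number) AS queue_number
        FROM patient
        WHERE registration_id = ?
    ) maxp
    SET p.queue_number = maxp.queue_number + 1
    WHERE p.id = ?`,
    pat.RegistrationID, pat.ID).Error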
A cleaner way
Overall, it'd be cleaner to keep a table of queues (keyed by registration_id) with a counter that you update atomically, as sketched after these steps:
Start a transaction, then
SELECT queue_number FROM queues WHERE registration_id = ? FOR UPDATE;
Increment the queue number in your app code, then
UPDATE queues SET queue_number = ? WHERE registration_id = ?;
Now you can use the incremented queue number in your patient creation/update before transaction commit.
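A sketch of that flow using GORM's transaction helper, assuming a queues table with registration_id and queue_number columns that is seeded when the registration is created:

func (r repository) CreatePatient(pat *model.Patient) error {
    return r.db.Transaction(func(tx *gorm.DB) error {
        // FOR UPDATE row-locks this registration's counter, so concurrent
        // registrations serialize here instead of reading the same value.
        var queueNum int64
        row := tx.Raw(
            "SELECT queue_number FROM queues WHERE registration_id = ? FOR UPDATE",
            pat.RegistrationID).Row()
        if err := row.Scan(&queueNum); err != nil {
            return err
        }

        queueNum++
        err := tx.Exec(
            "UPDATE queues SET queue_number = ? WHERE registration_id = ?",
            queueNum, pat.RegistrationID).Error
        if err != nil {
            return err
        }

        pat.QueueNumber = queueNum
        return tx.Create(pat).Error // a nil return commits the transaction
    })
}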

Avoid loops in a recursive, self-referencing m2m relation

This is less a question about Go or MySQL and more a general one; I hope I am still in the right place and that someone can help me wrap my head around this.
I have a struct Role which can have multiple child roles.
type Role struct {
    Name     string
    Children []Role
}
So let's say Role A has a child Role B and Role B has a Child Role C.
In my frontend, the m2m relation is displayed as a multi-select HTML field.
To avoid an infinite loop (A-B-C-A...) I want the user to be unable to select any of the related Roles.
For example, Role C should not display Roles A and B, because if a user selected those, an infinite loop would result.
The database in the backend is looking like this:
roles table (main table)
id, name, ...
role_roles (junction table)
role_id, child_id
I created this helper method to detect the ids that should be excluded from display. It checks whether Role C appears anywhere in the child_id column, then takes the role_id of each matching row and repeats the check for it. This works, but it looks really unprofessional, and I was wondering how it could be solved more elegantly, with fewer SQL queries...
// whereIDLoop returns the ids which should get excluded
func whereIDLoop(id int) ([]int, error) {
    ids := []int{}
    b := builder.GlobalBuilder

    rows, err := b.Select("role_roles").Columns("role_id").Where("child_id = ?", id).All()
    if err != nil {
        return nil, err
    }
    for rows.Next() {
        var id int
        if err := rows.Scan(&id); err != nil {
            return nil, err
        }
        ids = append(ids, id)

        id2, err := whereIDLoop(id)
        if err != nil {
            return nil, err
        }
        if id2 != nil {
            ids = append(ids, id2...)
        }
    }
    err = rows.Close()
    if err != nil {
        return nil, err
    }
    return ids, nil
}
Thanks for any help.
Cheers Pat
I can't say what best practice is; my suggestion is to put the validation logic in the application layer:
a helper function in JS that filters the options in the multi-select,
validation logic in the API,
validation logic at the repository layer.
Track the roles already seen when applying a change, in case there is a cycle.
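On the "fewer SQL queries" wish: if the server is MySQL 8.0+, the whole descendant walk in whereIDLoop can collapse into a single recursive CTE. A sketch with database/sql directly, since I don't know the builder package's raw-query API (the db *sql.DB handle is assumed):

// excludedIDs returns every role that can reach the given role through
// role_roles, i.e. the ids that must not be offered in the multi-select.
func excludedIDs(db *sql.DB, id int) ([]int, error) {
    rows, err := db.Query(`
        WITH RECURSIVE ancestors (role_id) AS (
            SELECT role_id FROM role_roles WHERE child_id = ?
            UNION
            SELECT rr.role_id
            FROM role_roles rr
            JOIN ancestors a ON rr.child_id = a.role_id
        )
        SELECT role_id FROM ancestors`, id)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var ids []int
    for rows.Next() {
        var rid int
        if err := rows.Scan(&rid); err != nil {
            return nil, err
        }
        ids = append(ids, rid)
    }
    return ids, rows.Err()
}

The UNION (as opposed to UNION ALL) de-duplicates rows, which also stops the recursion if a cycle already exists in the data.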

Golang query multiple databases with a JOIN

Using the golang example below, how can I query (JOIN) multiple databases.
For example, I want to have the relation db1.username.id = db2.comments.username_id.
id := 123
var username string
err := db.QueryRow("SELECT username FROM users WHERE id=?", id).Scan(&username)
switch {
case err == sql.ErrNoRows:
    log.Printf("No user with that ID.")
case err != nil:
    log.Fatal(err)
default:
    fmt.Printf("Username is %s\n", username)
}
As you are using MySQL, you can select fields across databases; see this related question for details. For example, you should be able to do this:
err := db.QueryRow(`
    SELECT
        db1.users.username
    FROM
        db1.users
    JOIN
        db2.comments ON db1.users.id = db2.comments.username_id
`).Scan(&username)
You could of course simply fetch all entries from db2.comments over a second database connection and feed those values into a query against db1.users, but that is not recommended: the join is the database server's job, and it can most likely do it better than you.
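For completeness, a runnable sketch of the single-connection JOIN. The DSN, schema, and column names are placeholders; with go-sql-driver/mysql the default schema can even be left out of the DSN because every table in the query is qualified:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // One connection sees both schemas as long as the MySQL user
    // has rights on db1 and db2; no default schema is selected here.
    db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var username string
    err = db.QueryRow(`
        SELECT db1.users.username
        FROM db1.users
        JOIN db2.comments ON db1.users.id = db2.comments.username_id
        LIMIT 1`).Scan(&username)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Username:", username)
}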