I have a MySQL database where I need to add records from a Go program, and I need to retrieve the id of the last added record so I can add that id to another table.
When I run INSERT INTO table1 values("test",1); SELECT LAST_INSERT_ID(); in MySQL Workbench, it returns the last id, which is auto-incremented, with no issues.
If I run my Go code, however, it always prints 0. The code:
_, err := db_client.DBClient.Query("insert into table1 values(?,?)", name, 1)
var id string
err = db_client.DBClient.QueryRow("SELECT LAST_INSERT_ID()").Scan(&id)
if err != nil {
    panic(err.Error())
}
fmt.Println("id: ", id)
I tried this variation to narrow down the problem further: err = db_client.DBClient.QueryRow("SELECT id from table1 where name=\"pleasejustwork\";").Scan(&id), which works perfectly fine; Go returns the actual id.
Why is it not working with LAST_INSERT_ID()?
I'm a newbie in Go, so please don't go hard on me if I'm making stupid Go mistakes that lead to this error :D
Thank you in advance.
The MySQL protocol returns the LAST_INSERT_ID() value in its response to an INSERT statement, and the Go driver exposes that value, so you don't need the extra round trip to get it. These id values are usually unsigned 64-bit integers.
Try something like this.
res, err := db_client.DBClient.Exec("insert into table1 values(?,?)", name, 1)
if err != nil {
    panic(err.Error())
}
id, err := res.LastInsertId()
if err != nil {
    panic(err.Error())
}
fmt.Println("id: ", id)
I confess I'm not sure why your code didn't work, but the connection pool is a likely culprit. Whenever you successfully issue a single-row INSERT statement, the next statement on the same database connection always has access to a useful LAST_INSERT_ID() value, whether or not you use explicit transactions. Your code, however, runs the INSERT and the SELECT LAST_INSERT_ID() through database/sql's pool, so the two statements may well run on different connections.
But if your INSERT is not successful, you must treat the last insert ID value as unpredictable. (That's a technical term for "garbage", trash, rubbish, basura, etc.)
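If you really do want to issue SELECT LAST_INSERT_ID() yourself, both statements must run on the same connection. Here's a minimal sketch using database/sql's Conn to pin one connection from the pool (this assumes db_client.DBClient is a *sql.DB):

ctx := context.Background()
conn, err := db_client.DBClient.Conn(ctx) // pin a single connection from the pool
if err != nil {
    panic(err.Error())
}
defer conn.Close() // returns the connection to the pool

// run the INSERT on the pinned connection
if _, err := conn.ExecContext(ctx, "insert into table1 values(?,?)", name, 1); err != nil {
    panic(err.Error())
}

// LAST_INSERT_ID() is per-connection, so this now sees the INSERT above
var id uint64
if err := conn.QueryRowContext(ctx, "SELECT LAST_INSERT_ID()").Scan(&id); err != nil {
    panic(err.Error())
}
fmt.Println("id: ", id)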
Related
I am developing a system to enable patient registration with an incremental queue number. I am using Go, GORM, and MySQL.
An issue happens when more than one patient registers at the same time: they tend to get the same queue number, which should not happen.
I attempted to use transactions and hooks to fix this, but I still got duplicate queue numbers. I have not found any resources about how to lock the database while a transaction is happening.
func (r repository) CreatePatient(pat *model.Patient) error {
    tx := r.db.Begin()
    defer func() {
        if r := recover(); r != nil {
            tx.Rollback()
        }
    }()
    err := tx.Error
    if err != nil {
        return err
    }
    // 1. get latest queue number and assign it to patient object
    var queueNum int64
    err = tx.Model(&model.Patient{}).Where("registration_id", pat.RegistrationID).Select("queue_number").Order("created_at desc").First(&queueNum).Error
    if err != nil && err != gorm.ErrRecordNotFound {
        tx.Rollback()
        return err
    }
    pat.QueueNumber = queueNum + 1
    // 2. write patient data into the db
    err = tx.Create(pat).Error
    if err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit().Error
}
As stated by @O. Jones, transactions don't save you here, because you're extracting the largest value of a column, incrementing it outside the db, and then saving that new value. From the database's point of view, the updated value has no dependence on the queried value.
You could try doing the update in a single query, which would make the dependence obvious:
UPDATE patient AS p
JOIN (
    SELECT max(queue_number) AS queue_number FROM patient WHERE registration_id = ?
) maxp
SET p.queue_number = maxp.queue_number + 1
WHERE id = ?
In gorm you can't run a complex update like this, so you'll need to make use of Exec.
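For example, something like the following sketch (the parameter order follows the query above; pat.ID is an assumed field name, not code from the question):

// Sketch: running the raw UPDATE through GORM's Exec.
err := tx.Exec(`
    UPDATE patient AS p
    JOIN (
        SELECT max(queue_number) AS queue_number FROM patient WHERE registration_id = ?
    ) maxp
    SET p.queue_number = maxp.queue_number + 1
    WHERE p.id = ?
`, pat.RegistrationID, pat.ID).Error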
I'm not 100% certain the above will work because I'm less familiar with MySQL transaction isolation guarantees.
A cleaner way
Overall, it'd be cleaner to keep a table of queues (keyed by registration_id) with a counter that you update atomically:
Start a transaction, then
SELECT queue_number FROM queues WHERE registration_id = ? FOR UPDATE;
Increment the queue number in your app code, then
UPDATE queues SET queue_number = ? WHERE registration_id = ?;
Now you can use the incremented queue number in your patient creation/update before transaction commit.
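A hypothetical sketch of that approach with GORM v2 (the Queue model, its queues table, and the gorm.io/gorm/clause locking are assumptions layered on the question's code, not a definitive implementation):

import (
    "gorm.io/gorm"
    "gorm.io/gorm/clause"
)

// Queue is a hypothetical counter table: one row per registration_id.
type Queue struct {
    RegistrationID int64
    QueueNumber    int64
}

func (r repository) CreatePatient(pat *model.Patient) error {
    return r.db.Transaction(func(tx *gorm.DB) error {
        var q Queue
        // SELECT ... FOR UPDATE: concurrent registrations block here
        // until the transaction holding the lock commits.
        if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
            Where("registration_id = ?", pat.RegistrationID).
            First(&q).Error; err != nil {
            return err
        }
        q.QueueNumber++
        if err := tx.Model(&Queue{}).
            Where("registration_id = ?", q.RegistrationID).
            Update("queue_number", q.QueueNumber).Error; err != nil {
            return err
        }
        pat.QueueNumber = q.QueueNumber
        return tx.Create(pat).Error
    })
}

Because the counter row stays locked until commit, two simultaneous registrations for the same registration_id are serialized and can't read the same number.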
The problem
I wrote an application which synchronizes data from BigQuery into a MySQL database. Every 3 hours it inserts roughly 10-20k rows, in batches of up to 10 items each. For some reason I receive the following error when it tries to upsert these rows into MySQL:
Error 1461: Can't create more than max_prepared_stmt_count statements (current value: 2000)
My "relevant code"
// ProcessProjectSkuCost receives the given sku cost entries and sends them in batches to upsertProjectSkuCosts()
func ProcessProjectSkuCost(done <-chan bigquery.SkuCost) {
    var skuCosts []bigquery.SkuCost
    var rowsAffected int64
    for skuCostRow := range done {
        skuCosts = append(skuCosts, skuCostRow)
        if len(skuCosts) == 10 {
            rowsAffected += upsertProjectSkuCosts(skuCosts)
            skuCosts = []bigquery.SkuCost{}
        }
    }
    if len(skuCosts) > 0 {
        rowsAffected += upsertProjectSkuCosts(skuCosts)
    }
    log.Infof("Completed upserting project sku costs. Affected rows: '%d'", rowsAffected)
}

// upsertProjectSkuCosts inserts or updates ProjectSkuCosts into SQL in batches
func upsertProjectSkuCosts(skuCosts []bigquery.SkuCost) int64 {
    // properties are table fields
    tableFields := []string{"project_name", "sku_id", "sku_description", "usage_start_time", "usage_end_time",
        "cost", "currency", "usage_amount", "usage_unit", "usage_amount_in_pricing_units", "usage_pricing_unit",
        "invoice_month"}
    tableFieldString := fmt.Sprintf("(%s)", strings.Join(tableFields, ","))

    // placeholder string for all values to be inserted
    placeholderString := createPlaceholderString(tableFields)
    valuePlaceholderString := ""
    values := []interface{}{}
    for _, row := range skuCosts {
        valuePlaceholderString += fmt.Sprintf("(%s),", placeholderString)
        values = append(values, row.ProjectName, row.SkuID, row.SkuDescription, row.UsageStartTime,
            row.UsageEndTime, row.Cost, row.Currency, row.UsageAmount, row.UsageUnit,
            row.UsageAmountInPricingUnits, row.UsagePricingUnit, row.InvoiceMonth)
    }
    valuePlaceholderString = strings.TrimSuffix(valuePlaceholderString, ",")

    // put together SQL string
    sqlString := fmt.Sprintf(`INSERT INTO
project_sku_cost %s VALUES %s ON DUPLICATE KEY UPDATE invoice_month=invoice_month`, tableFieldString, valuePlaceholderString)
    sqlString = strings.TrimSpace(sqlString)

    stmt, err := db.Prepare(sqlString)
    if err != nil {
        log.Warn("Error while preparing SQL statement to upsert project sku costs. ", err)
        return 0
    }

    // execute query
    res, err := stmt.Exec(values...)
    if err != nil {
        log.Warn("Error while executing statement to upsert project sku costs. ", err)
        return 0
    }

    rowsAffected, err := res.RowsAffected()
    if err != nil {
        log.Warn("Error while trying to access affected rows ", err)
        return 0
    }
    return rowsAffected
}

// createPlaceholderString creates a string which will be used for the prepared statement (output looks like "(?,?,?)")
func createPlaceholderString(tableFields []string) string {
    placeHolderString := ""
    for range tableFields {
        placeHolderString += "?,"
    }
    placeHolderString = strings.TrimSuffix(placeHolderString, ",")
    return placeHolderString
}
My question:
Why do I hit the max_prepared_stmt_count when I immediately execute the prepared statement (see function upsertProjectSkuCosts)?
I can only imagine it's some sort of concurrency that creates tons of prepared statements between preparing and executing all these statements. On the other hand, I don't understand why there would be so much concurrency, as the channel in ProcessProjectSkuCost is a buffered channel with a size of 20.
You need to close the statement inside upsertProjectSkuCosts() (or re-use it - see the end of this post).
When you call db.Prepare(), a connection is taken from the internal connection pool (or a new connection is created, if there aren't any free connections). The statement is then prepared on that connection (if that connection isn't free when stmt.Exec() is called, the statement is then also prepared on another connection).
So this creates a statement inside your database for that connection. This statement will not magically disappear; having multiple prepared statements on a connection is perfectly valid. Go could notice that stmt goes out of scope, see that it requires some sort of cleanup, and do that cleanup, but it doesn't (just like it doesn't close files for you, and things like that). So you'll need to do that yourself using stmt.Close(). When you call stmt.Close(), the driver sends a command to the database server, telling it the statement is no longer needed.
The easiest way to do this is by adding defer stmt.Close() after the err check following db.Prepare().
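In the code from the question, that would look like:

stmt, err := db.Prepare(sqlString)
if err != nil {
    log.Warn("Error while preparing SQL statement to upsert project sku costs. ", err)
    return 0
}
defer stmt.Close() // tells the server to deallocate this prepared statement when the function returns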
What you can also do is prepare the statement once and make it available to upsertProjectSkuCosts (either by passing the stmt into upsertProjectSkuCosts, or by making upsertProjectSkuCosts a method on a struct that holds the stmt). If you do this, you should not call stmt.Close(), because you aren't creating new statements anymore; you are re-using an existing statement.
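A hypothetical sketch of the struct variant (note that in your code the SQL text depends on the batch size, so one prepared statement only covers one batch size, e.g. the full 10-row batches):

// skuCostUpserter prepares the upsert statement once and re-uses it.
type skuCostUpserter struct {
    stmt *sql.Stmt // server-side statement, kept alive deliberately
}

func newSkuCostUpserter(db *sql.DB, sqlString string) (*skuCostUpserter, error) {
    stmt, err := db.Prepare(sqlString) // prepared exactly once
    if err != nil {
        return nil, err
    }
    return &skuCostUpserter{stmt: stmt}, nil
}

func (u *skuCostUpserter) upsert(values ...interface{}) (int64, error) {
    res, err := u.stmt.Exec(values...) // re-uses the existing statement; no new prepare
    if err != nil {
        return 0, err
    }
    return res.RowsAffected()
}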
Also see Should we also close DB's .Prepare() in Golang? and https://groups.google.com/forum/#!topic/golang-nuts/ISh22XXze-s
Hi, I am using gorp and want to run a select query on any table without actually knowing its schema.
For that I am using this query:
db, err := sql.Open("mysql", "root:1234@tcp(localhost:3306)/information_schema")
checkErr(err, "sql.Open failed")
dbmap := &gorp.DbMap{Db: db, Dialect: gorp.MySQLDialect{}}

var data []interface{}
_, err = dbmap.Select(&data, "select * from collations")
checkErr(err, "select query failed")
fmt.Println(data)
However, this results in an error, because I can only pass a struct as the first parameter to Select.
It returns this error:
select query failed gorp: select into non-struct slice requires 1 column, got 6
Please suggest a correction, or any other alternative, so that I can run a select query on any table name dynamically selected by the user.
If you don't know the schema, don't use gorp. Why? Because gorp is a mapper: it needs a source and a target field to know how to process data, and if you don't pass a target, gorp really doesn't know what to do.
However, you can do this using the standard SQL package. See this answer for more information: sql: scan row(s) with unknown number of columns (select * from ...)
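The core of that technique, adapted to your collations example (a sketch; checkErr is your helper from the question, and error handling is kept minimal):

rows, err := db.Query("select * from collations")
checkErr(err, "query failed")
defer rows.Close()

cols, err := rows.Columns() // column names, discovered at runtime
checkErr(err, "columns failed")

for rows.Next() {
    // one sql.RawBytes cell per column, wrapped in interface{} for Scan
    cells := make([]sql.RawBytes, len(cols))
    args := make([]interface{}, len(cells))
    for i := range cells {
        args[i] = &cells[i]
    }
    checkErr(rows.Scan(args...), "scan failed")
    for i, col := range cols {
        fmt.Printf("%s=%s ", col, cells[i])
    }
    fmt.Println()
}
checkErr(rows.Err(), "rows iteration failed")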
I am trying to fetch multiple columns from a MySQL database using Go. I could modify this script so that it works perfectly well for fetching one column and printing it using the http print function. However, when it fetches two things the script no longer works, and I don't have any idea what I need to do to fix it. I know that the SQL is fine, as I have tested it in the mysql terminal and it gave me the expected result.
conn, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/database")
statement, err := conn.Prepare("select first,second from table")
rows, err := statement.Query()
for rows.Next() {
    var first string
    rows.Scan(&first)
    var second string
    rows.Scan(&second)
    fmt.Fprintf(w, "Title of first is :"+first+"The second is"+second)
}
conn.Close()
You are using the wrong variable names but I assume that's just a "typo" in your sample code.
Then the correct syntax is:
var first, second string
rows.Scan(&first, &second)
See this on what Scan(dest ...interface{}) means and how to use it.
You should also handle and not ignore the errors.
Finally, you should use Fprintf as it's intended:
fmt.Fprintf(w, "Title of first is : %s\nThe second is: %s", first, second)
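Put together, the loop might look like this (a sketch; log.Fatal error handling is illustrative, not a recommendation for an HTTP handler):

rows, err := statement.Query()
if err != nil {
    log.Fatal(err)
}
defer rows.Close()

for rows.Next() {
    var first, second string
    // Scan fills one destination per selected column, in order
    if err := rows.Scan(&first, &second); err != nil {
        log.Fatal(err)
    }
    fmt.Fprintf(w, "Title of first is : %s\nThe second is: %s", first, second)
}
if err := rows.Err(); err != nil {
    log.Fatal(err)
}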
Using the Go example below, how can I query (JOIN) across multiple databases?
For example, I want to have the relation db1.username.id = db2.comments.username_id.
id := 123
var username string
err := db.QueryRow("SELECT username FROM users WHERE id=?", id).Scan(&username)
switch {
case err == sql.ErrNoRows:
    log.Printf("No user with that ID.")
case err != nil:
    log.Fatal(err)
default:
    fmt.Printf("Username is %s\n", username)
}
As you are using MySQL, you can select fields across databases; see this related question for details. For example, you should be able to do this:
err := db.QueryRow(`
    SELECT
        db1.users.username
    FROM
        db1.users
    JOIN
        db2.comments
    ON db1.users.id = db2.comments.username_id
`).Scan(&username)
You could of course simply fetch all entries from db2.comments over a second database connection and use those values in a query against db1.users. This is, of course, not recommended: joining is the database server's job, and the server can most likely do it better than you.