Golang Join array interface - mysql

I am trying to do a bulk insert. I use gorm (github.com/jinzhu/gorm):
import (
    "fmt"
    dB "github.com/edwinlab/api/repositories"
)

func Update() error {
    tx := dB.GetWriteDB().Begin()
    sqlStr := "INSERT INTO city(code, name) VALUES (?, ?),(?, ?)"
    vals := []interface{}{}
    vals = append(vals, "XX1", "Jakarta")
    vals = append(vals, "XX2", "Bandung")
    tx.Exec(sqlStr, vals)
    tx.Commit()
    return nil
}
But I got an error:
Error 1136: Column count doesn't match value count at row 1
because it generates the wrong query:
INSERT INTO city(code, name) VALUES ('XX1','Jakarta','XX2','Bandung', %!v(MISSING)),(%!v(MISSING), %!v(MISSING))
If I use manual query it works:
tx.Exec(sqlStr, "XX1", "Jakarta", "XX2", "Bandung")
It will generate:
INSERT INTO city(code, name) VALUES ('XX1', 'Jakarta'),('XX2', 'Bandung')
The problem is how to make the []interface{} expand into individual arguments like "XX1", "Jakarta", ...
Thanks for the help.

If you want to pass the elements of a slice to a function with a variadic parameter, you have to use ... to tell the compiler to pass all elements individually instead of passing the slice value as a single argument. So simply do:
tx.Exec(sqlStr, vals...)
This is detailed in the spec: Passing arguments to ... parameters.
Tx.Exec() has the signature of:
func (tx *Tx) Exec(query string, args ...interface{}) (Result, error)
So you have to pass vals.... Also, don't forget to check the returned error, e.g.:
res, err := tx.Exec(sqlStr, vals...)
if err != nil {
    // handle error
}
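For completeness, a minimal sketch of the corrected Update() with the fix applied (the error handling via .Error is illustrative, based on gorm v1's chainable API):
func Update() error {
    tx := dB.GetWriteDB().Begin()
    sqlStr := "INSERT INTO city(code, name) VALUES (?, ?),(?, ?)"
    vals := []interface{}{}
    vals = append(vals, "XX1", "Jakarta")
    vals = append(vals, "XX2", "Bandung")
    // Expand the slice so each element becomes one placeholder argument.
    if err := tx.Exec(sqlStr, vals...).Error; err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit().Error
}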

Related

Golang, MySQL, Can't append query data into struct list [duplicate]

When I try to scan data into a struct and then append it to a slice, I get nothing. But if I run the same query in MySQL Workbench, I get values.
query, err := db.Query("SELECT 'description','is_done' FROM tasks WHERE 'user_id' = ?;", userId)
if err != nil {
    return nil, err
}
defer query.Close()

var tasks []TodoUserDTO
var currentTask TodoUserDTO
for query.Next() {
    err = query.Scan(&currentTask.Description, &currentTask.IsDone)
    if err != nil {
        panic(err)
    }
    tasks = append(tasks, currentTask)
}
The TodoUserDTO struct looks like this:
type TodoUserDTO struct {
    Description string `json:"desc"`
    IsDone      bool   `json:"done"`
}
Based on the code, you're quoting the identifiers in the query with single quotes. In MySQL, single-quoted names are string literals, not column references, so SELECT 'description','is_done' returns those literal strings for every row, and WHERE 'user_id' = ? compares the string 'user_id' against userId, which matches nothing. Use the actual column names, unquoted (or backtick-quoted), rather than string literals.
Try changing the SELECT statement to this:
"SELECT description, is_done FROM tasks WHERE user_id = ?"

Golang: cannot use sql.NamedArg in prepared statement

I have a query that is using NamedArgs from sql.DB, and I'm getting an error when building:
cannot use args (type []sql.NamedArg) as type []interface {} in argument to stmt.Exec
The example in the sql package documentation shows it being used like this:
db.ExecContext(ctx, `
    delete from Invoice
    where
        TimeCreated < @end
        and TimeCreated >= @start;`,
    sql.Named("start", startTime),
    sql.Named("end", endTime),
)
The only difference is that I'm using a prepared statement stmt and calling the Exec method on that. I've created a slice of NamedArg with my values and I'm using the ... expander.
res, err := stmt.Exec(args...)
What exactly is wrong when the example shows the sql.Named() method call directly in the code? Why wouldn't an expanded slice work?
That's just how passing arguments to a variadic function works in Go. You either pass individual values, which can be of any type, or you pass a slice followed by ..., in which case the slice's element type must exactly match that of the variadic parameter.
I.e. you can either do:
res, err := stmt.Exec(
    sql.Named("start", startTime),
    sql.Named("end", endTime),
)
or you can do:
args := []interface{}{
    sql.Named("start", startTime),
    sql.Named("end", endTime),
}
res, err := stmt.Exec(args...)
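If you already have a []sql.NamedArg, a minimal sketch of the conversion is to copy its elements into a []interface{} (namedArgs here stands for the slice from the question):
args := make([]interface{}, len(namedArgs))
for i, a := range namedArgs {
    args[i] = a // each sql.NamedArg is boxed into an interface{} element
}
res, err := stmt.Exec(args...)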

Sort data according to sequence of keys in a slice Go

I work with Go and a MySQL database. Assume I have a slice of strings like []string{"b", "c", "a"} and I want the final data to look like this:
[]Student{
    Student{ID: "b", Name: "Ben"},
    Student{ID: "c", Name: "Carl"},
    Student{ID: "a", Name: "Alexander"},
}
When I build the MySQL query, is using ORDER BY FIELD(id,'b','c','a') an efficient way? If I don't use it, I end up with code like this:
keys := []string{"b", "c", "a"}
...
students := make([]Student, 0)
for rows.Next() {
    s := Student{}
    err := rows.Scan(&s.ID, &s.Name)
    if err != nil {
        log.Fatal(err)
    }
    students = append(students, s)
}

mStudents := make(map[string]Student, 0)
for _, v := range students {
    mStudents[v.ID] = v
}

finalData := make([]Student, 0)
for _, v := range keys {
    if _, ok := mStudents[v]; ok {
        finalData = append(finalData, mStudents[v])
    }
}
But I think that's a very inefficient way. So, is there another way?
Thank you.
Using MySQL's ORDER BY FIELD(id,'b','c','a') is efficient and there's nothing wrong with it if you don't mind having to extend the query and having your logic in the query.
If you want to do it in Go: Go's standard library provides sort.Slice() to sort any slice. You pass it a less() function which reports how two elements of the slice compare, i.e. whether one is less than the other.
You want an order defined by a separate keys slice, so to tell whether one student is "less" than another, you compare the indices of their IDs in keys.
To avoid a linear search of the keys slice on every comparison, build a map of them first:
m := map[string]int{}
for i, k := range keys {
    m[k] = i
}
And so the index that is the base of the "less" logic is a simple map lookup:
sort.Slice(students, func(i, j int) bool {
    return m[students[i].ID] < m[students[j].ID]
})
Try this on the Go Playground.
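For reference, here is a minimal, self-contained sketch of the whole approach (the Student type and the sample data are assumptions based on the question):
package main

import (
    "fmt"
    "sort"
)

type Student struct {
    ID   string
    Name string
}

func main() {
    keys := []string{"b", "c", "a"}
    students := []Student{
        {ID: "a", Name: "Alexander"},
        {ID: "b", Name: "Ben"},
        {ID: "c", Name: "Carl"},
    }

    // Map each key to its position so the less() function is a cheap lookup.
    m := map[string]int{}
    for i, k := range keys {
        m[k] = i
    }

    sort.Slice(students, func(i, j int) bool {
        return m[students[i].ID] < m[students[j].ID]
    })

    fmt.Println(students) // [{b Ben} {c Carl} {a Alexander}]
}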

Dynamic SQL select query in Golang

I am trying to build an API, with database/sql and the MySQL driver, that will read data based on URL parameters.
Something like this:
myapi.com/users?columns=id,first_name,last_name,country&sort=desc&sortColumn=last_name&limit=10&offset=20
I know how to get all columns, or specific columns when they are defined in a struct. But I want to know: is it possible to take the columns from the URL and, instead of using a predefined struct, save the result to a map and scan just those columns?
I have working code that will get data from the above endpoint, but only if the number of requested columns is the same as in the struct. If I remove country, for example, I get an error that Scan expects 4 destination parameters but 3 are given.
I don't need specific code, just some direction, since I am learning Go and my background is PHP where this is easier to do.
Update
Thanks to the answers, I have a partly working solution.
Here is the code:
cols := []string{"id", "first_name", "last_name"}
vals := make([]interface{}, len(cols))

w := map[string]interface{}{"id": 105}
var whereVal []interface{}
var whereCol []string
for k, v := range w {
    whereVal = append(whereVal, v)
    whereCol = append(whereCol, fmt.Sprintf("%s = ?", k))
}

for i := range cols {
    vals[i] = new(interface{})
}

err := db.QueryRow("SELECT "+strings.Join(cols, ",")+" FROM users WHERE "+strings.Join(whereCol, " AND "), whereVal...).Scan(vals...)
if err != nil {
    fmt.Println(err)
}

b, _ := json.Marshal(vals)
fmt.Println(string(b))
This should run the query SELECT id, first_name, last_name FROM users WHERE id = 105;
But how do I get the data out into a proper JSON object? Right now it prints strings encoded in base64, like this:
[105,"Sm9obm55","QnJhdm8="]
From what I know (I'm also not very experienced in Go), if you don't assign a real type to a destination, Scan stores the raw value as []byte, and json.Marshal encodes []byte as a base64 string. So you have to assign a type to your columns, and if you want proper JSON, assign keys to the values. In your example it can be done something like this:
cols := []string{"id", "first_name", "last_name"}
vals := make([]interface{}, len(cols))
result := make(map[string]interface{}, len(cols))

for i, key := range cols {
    switch key {
    case "id", "status":
        vals[i] = new(int)
    default:
        vals[i] = new(string)
    }
    result[key] = vals[i]
}

b, _ := json.Marshal(result)
fmt.Println(string(b))
So, instead of looping over cols and creating a new interface{} for each column, we now create key/value pairs and assign a type based on the column name.
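Put together, a hedged sketch of the full flow, assuming the db handle and users table from the question:
cols := []string{"id", "first_name", "last_name"}
vals := make([]interface{}, len(cols))
result := make(map[string]interface{}, len(cols))
for i, key := range cols {
    switch key {
    case "id":
        vals[i] = new(int)
    default:
        vals[i] = new(string)
    }
    result[key] = vals[i]
}

// Scan writes through the typed pointers; json.Marshal then
// dereferences the pointers stored in the map.
err := db.QueryRow("SELECT "+strings.Join(cols, ",")+" FROM users WHERE id = ?", 105).Scan(vals...)
if err != nil {
    fmt.Println(err)
}
b, _ := json.Marshal(result)
fmt.Println(string(b)) // e.g. {"first_name":"Johnny","id":105,"last_name":"Bravo"}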
Also, if you have nullable columns in the table (and you probably do), you'll get an error because nil can't be scanned into a string. I suggest the package gopkg.in/guregu/null.v3; assign types like null.String and you'll get null back as the value.
For example:
for i, key := range cols {
    switch key {
    case "id", "status":
        vals[i] = new(int)
    case "updated_at", "created_at":
        vals[i] = new(null.Time)
    default:
        vals[i] = new(null.String)
    }
    result[key] = vals[i]
}
Here is an option I found for returning a dynamic result set: you still need a []interface{}, but each element must be set to new(interface{}) so that Scan gets a pointer it can write through.
//...
types, _ := rows.ColumnTypes()
for rows.Next() {
    row := make([]interface{}, len(types))
    for i := range types {
        row[i] = new(interface{})
    }
    if err := rows.Scan(row...); err != nil {
        // handle error
    }
}
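To read the scanned values back out, dereference each element through its *interface{} while still inside the rows.Next() loop (a small sketch using the row and types variables above):
for i, col := range types {
    v := *(row[i].(*interface{})) // unwrap the pointer Scan wrote through
    fmt.Printf("%s = %v\n", col.Name(), v)
}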
You must first fetch the result's column count and make sure your destination slice doesn't exceed it. If you mean the query fields, you need to build the query string dynamically, and the number of scan destinations must match the number of selected columns.
I would create the query statement with the dynamic fields (using placeholders for values, to avoid SQL injection):
rows, err := db.Query("SELECT {{YOUR_FIELDS}} FROM table_tbl")
Then create a value carrier with the same size as the column list:
cols, _ := rows.Columns()
vals := make([]interface{}, len(cols))
Use sql.RawBytes as the field type if you don't need type checking or can't know the types; otherwise use the actual type of each field.
for i := range cols {
    vals[i] = new(sql.RawBytes)
    // check the column name; if it is id and you know it is an integer:
    // vals[i] = new(int)
}
Iterate rows and scan
for rows.Next() {
    err = rows.Scan(vals...)
}

Limit max prepared statement count

The problem
I wrote an application which synchronizes data from BigQuery into a MySQL database. I insert roughly 10-20k rows in batches (up to 10 items per batch) every 3 hours. For some reason I receive the following error when it upserts these rows into MySQL:
Error 1461: Can't create more than max_prepared_stmt_count statements (current value: 2000)
My "relevant code"
// ProcessProjectSkuCost receives the given sku cost entries and sends them in batches to upsertProjectSkuCosts()
func ProcessProjectSkuCost(done <-chan bigquery.SkuCost) {
    var skuCosts []bigquery.SkuCost
    var rowsAffected int64
    for skuCostRow := range done {
        skuCosts = append(skuCosts, skuCostRow)
        if len(skuCosts) == 10 {
            rowsAffected += upsertProjectSkuCosts(skuCosts)
            skuCosts = []bigquery.SkuCost{}
        }
    }
    if len(skuCosts) > 0 {
        rowsAffected += upsertProjectSkuCosts(skuCosts)
    }
    log.Infof("Completed upserting project sku costs. Affected rows: '%d'", rowsAffected)
}

// upsertProjectSkuCosts inserts or updates ProjectSkuCosts into SQL in batches
func upsertProjectSkuCosts(skuCosts []bigquery.SkuCost) int64 {
    // properties are table fields
    tableFields := []string{"project_name", "sku_id", "sku_description", "usage_start_time", "usage_end_time",
        "cost", "currency", "usage_amount", "usage_unit", "usage_amount_in_pricing_units", "usage_pricing_unit",
        "invoice_month"}
    tableFieldString := fmt.Sprintf("(%s)", strings.Join(tableFields, ","))

    // placeholder string for all values to be inserted
    placeholderString := createPlaceholderString(tableFields)
    valuePlaceholderString := ""
    values := []interface{}{}
    for _, row := range skuCosts {
        valuePlaceholderString += fmt.Sprintf("(%s),", placeholderString)
        values = append(values, row.ProjectName, row.SkuID, row.SkuDescription, row.UsageStartTime,
            row.UsageEndTime, row.Cost, row.Currency, row.UsageAmount, row.UsageUnit,
            row.UsageAmountInPricingUnits, row.UsagePricingUnit, row.InvoiceMonth)
    }
    valuePlaceholderString = strings.TrimSuffix(valuePlaceholderString, ",")

    // put together the SQL string
    sqlString := fmt.Sprintf(`INSERT INTO
        project_sku_cost %s VALUES %s ON DUPLICATE KEY UPDATE invoice_month=invoice_month`, tableFieldString, valuePlaceholderString)
    sqlString = strings.TrimSpace(sqlString)

    stmt, err := db.Prepare(sqlString)
    if err != nil {
        log.Warn("Error while preparing SQL statement to upsert project sku costs. ", err)
        return 0
    }

    // execute query
    res, err := stmt.Exec(values...)
    if err != nil {
        log.Warn("Error while executing statement to upsert project sku costs. ", err)
        return 0
    }

    rowsAffected, err := res.RowsAffected()
    if err != nil {
        log.Warn("Error while trying to access affected rows ", err)
        return 0
    }
    return rowsAffected
}

// createPlaceholderString creates a string which will be used for the prepared statement (output looks like "(?,?,?)")
func createPlaceholderString(tableFields []string) string {
    placeHolderString := ""
    for range tableFields {
        placeHolderString += "?,"
    }
    placeHolderString = strings.TrimSuffix(placeHolderString, ",")
    return placeHolderString
}
My question:
Why do I hit the max_prepared_stmt_count when I immediately execute the prepared statement (see function upsertProjectSkuCosts)?
I can only imagine it's some sort of concurrency creating tons of prepared statements between the Prepare and Exec calls. On the other hand, I don't understand why there would be so much concurrency, as the channel in ProcessProjectSkuCost is a buffered channel with a size of 20.
You need to close the statement inside upsertProjectSkuCosts() (or re-use it - see the end of this post).
When you call db.Prepare(), a connection is taken from the internal connection pool (or a new connection is created if none are free). The statement is then prepared on that connection (and if that connection isn't free when stmt.Exec() is called, the statement is also prepared on another connection).
So this creates a statement inside your database for that connection. The statement does not magically disappear; having multiple prepared statements on a connection is perfectly valid. Go could notice that stmt goes out of scope and requires some cleanup, but it doesn't (just like it doesn't close files for you). So you need to do that yourself with stmt.Close(). When you call stmt.Close(), the driver sends a command to the database server telling it the statement is no longer needed.
The easiest way to do this is by adding defer stmt.Close() after the err check following db.Prepare().
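Applied to the code above, a minimal sketch of the placement:
stmt, err := db.Prepare(sqlString)
if err != nil {
    log.Warn("Error while preparing SQL statement to upsert project sku costs. ", err)
    return 0
}
// Release the server-side statement when this function returns,
// so each batch no longer leaks one towards max_prepared_stmt_count.
defer stmt.Close()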
What you can also do, is prepare the statement once and make that available for upsertProjectSkuCosts (either by passing the stmt into upsertProjectSkuCosts or by making upsertProjectSkuCosts a func of a struct, so the struct can have a property for the stmt). If you do this, you should not call stmt.Close() - because you aren't creating new statements anymore, you are re-using an existing statement.
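A hedged sketch of the re-use variant (the Upserter type is an illustration, not part of the original code; note it only works as-is if every batch has the same size, because the number of placeholders is baked into the prepared statement):
// Upserter prepares the statement once and re-uses it for every batch.
type Upserter struct {
    stmt *sql.Stmt
}

func NewUpserter(db *sql.DB, sqlString string) (*Upserter, error) {
    stmt, err := db.Prepare(sqlString) // prepared once, not per batch
    if err != nil {
        return nil, err
    }
    return &Upserter{stmt: stmt}, nil
}

func (u *Upserter) upsertProjectSkuCosts(values []interface{}) (int64, error) {
    res, err := u.stmt.Exec(values...) // no Prepare, no Close per call
    if err != nil {
        return 0, err
    }
    return res.RowsAffected()
}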
Also see Should we also close DB's .Prepare() in Golang? and https://groups.google.com/forum/#!topic/golang-nuts/ISh22XXze-s