Check if database table exists using golang - mysql

I am trying to do a simple thing: check whether a table exists, and if not, create that table in the database.
Here is the logic I used.
test := "June_2019"
sql_query := `select * from ` + test + `;`
read_err := db.QueryRow(sql_query, 5)
error_returned := read_err.Scan(read_err)
defer db.Close()
if error_returned == nil {
fmt.Println("table is there")
} else {
fmt.Println("table not there")
}
In my database I have a June_2019 table, but this code still returns a non-nil value. I used db.QueryRow(sql_query, 5) with 5 because I have five columns in my table.
What am I missing here? I am still learning golang.
Thanks in advance.

I have solved the problem using golang and MySQL.
_, table_check := db.Query("select * from " + table + ";")
if table_check == nil {
fmt.Println("table is there")
} else {
fmt.Println("table not there")
}
I used db.Query(), which returns rows and an error; here I only check the error.
I think most people thought I wanted to do it the MySQL way; I just wanted to learn how to use golang for MySQL operations.
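Note that when db.Query succeeds it also returns a *sql.Rows value that should be closed. For what it's worth, a slightly more defensive variant of the check (my own sketch, not part of the answer above) asks information_schema whether the table exists, so the table name can go through a placeholder and no table data has to be read; it assumes db is an open *sql.DB and table holds the table name:
var name string
check_err := db.QueryRow("SELECT table_name FROM information_schema.tables WHERE table_schema = DATABASE() AND table_name = ?", table).Scan(&name)
if check_err == sql.ErrNoRows {
fmt.Println("table not there") // safe to CREATE TABLE here
} else if check_err != nil {
fmt.Println("query failed:", check_err)
} else {
fmt.Println("table is there")
}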

Related

go mysql LAST_INSERT_ID() returns 0

I have a MySQL database where I need to add records with a Go program, and I need to retrieve the id of the last added record so I can add it to another table.
When I run INSERT INTO table1 VALUES("test",1); SELECT LAST_INSERT_ID(); in MySQL Workbench, it returns the last id, which is auto-incremented, with no issues.
If I run my Go code, however, it always prints 0. The code:
_, err := db_client.DBClient.Query("insert into table1 values(?,?)", name, 1)
var id string
err = db_client.DBClient.QueryRow("SELECT LAST_INSERT_ID()").Scan(&id)
if err != nil {
panic(err.Error())
}
fmt.Println("id: ", id)
To narrow the problem down further, I tried this variation: err = db_client.DBClient.QueryRow("SELECT id from table1 where name=\"pleasejustwork\";").Scan(&id), which works perfectly fine; Go returns the actual id.
Why is it not working with LAST_INSERT_ID()?
I'm a newbie in Go, so please don't go hard on me if I'm making silly Go mistakes that lead to this error :D
Thank you in advance.
The MySQL protocol returns the LAST_INSERT_ID() value in its response to an INSERT statement, and the Go driver exposes that returned value, so you don't need the extra round trip to get it. These ID values are usually unsigned 64-bit integers.
Try something like this.
res, err := db_client.DBClient.Exec("insert into table1 values(?,?)", name, 1)
if err != nil {
panic (err.Error())
}
id, err := res.LastInsertId()
if err != nil {
panic (err.Error())
}
fmt.Println("id: ", id)
I confess I'm not certain why your code didn't work; the most likely explanation is that database/sql manages a pool of connections, so the follow-up QueryRow can run on a different connection than the one that performed the INSERT. Whenever you successfully issue a single-row INSERT statement, the next statement on the same database connection always has access to a useful LAST_INSERT_ID() value. This is true whether or not you use explicit transactions.
But if your INSERT is not successful, you must treat the last insert ID value as unpredictable. (That's a technical term for "garbage", trash, rubbish, basura, etc.)
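If you ever do need to read LAST_INSERT_ID() yourself, both statements must run on the same connection; one way to guarantee that is a transaction, which pins a single connection for its duration. A minimal sketch, assuming db_client.DBClient is a *sql.DB:
tx, err := db_client.DBClient.Begin()
if err != nil {
panic(err.Error())
}
if _, err := tx.Exec("insert into table1 values(?,?)", name, 1); err != nil {
tx.Rollback()
panic(err.Error())
}
var id string
if err := tx.QueryRow("SELECT LAST_INSERT_ID()").Scan(&id); err != nil {
tx.Rollback()
panic(err.Error())
}
if err := tx.Commit(); err != nil {
panic(err.Error())
}
fmt.Println("id: ", id)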

How do you use URL.Query().Get() to fill in a SELECT statement using Golang

func startHandler(w http.ResponseWriter, r *http.Request) {
conn_str := dbuser + ":" + dbpass + "@tcp(" + dbhost + ":" + dbport + ")/" + dbdb
log.Println(conn_str)
db, err := sql.Open("mysql", conn_str)
if err != nil {
log.Println("DB Error - Unable to connect:", err)
}
defer db.Close()
table := r.URL.Query().Get("table")
rows, _ := db.Query("SELECT * FROM "+ table) //selects all columns from table
cols, _ := rows.Columns()
fmt.Fprintf(w, "%s\n", cols)
When I try this, it does not fill in the value that I entered on my website. If I log.Println(table), it does show in my terminal, but it will not display on the website or fill in the SELECT statement with table...
Assuming you are calling this from an API, there are a few things I would add, not to mention avoiding wildcard (*) selects.
EDIT: I think I may have misunderstood your question; I will see if I can give a better answer.
You're saying that you have a printed value for table, but no response from the DB? The other commenter is correct: instead of "_", capture the real error and fmt.Println(err.Error()).
EDIT 2: Another good point made by @jub0bs is that this is a huge SQL injection vulnerability. Note that MySQL placeholders (?) only work for values, not for identifiers such as table names, so you cannot write db.Query("SELECT * FROM ?", table); instead, validate the table name against an allow-list before concatenating it into the query (see the sketch at the end of this answer).
I just ran the following code:
results, err := publicDB.Query("SELECT * FROM " + r.URL.Query().Get("name") + " LIMIT 1")
if err != nil {
fmt.Println(err.Error())
}
for results.Next(){
fmt.Println(results.Columns())
}
and it worked. I called the URL www.mysite.com/endpoint?name=tablename
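Since the table name comes straight from the URL, it should never be dropped into the SQL as-is. A rough sketch of one way to guard it inside the original handler (the allowedTables map and the names in it are made up for illustration):
// Hypothetical allow-list of table names this endpoint may expose.
var allowedTables = map[string]bool{"users": true, "comments": true}
table := r.URL.Query().Get("table")
if !allowedTables[table] {
http.Error(w, "unknown table", http.StatusBadRequest)
return
}
rows, err := db.Query("SELECT * FROM " + table) // safe: table was validated above
if err != nil {
log.Println("DB Error:", err)
return
}
defer rows.Close()
cols, _ := rows.Columns()
fmt.Fprintf(w, "%s\n", cols)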

Limit max prepared statement count

The problem
I wrote an application which synchronizes data from BigQuery into a MySQL database. I try to insert roughly 10-20k rows in batches (up to 10 items each batch) every 3 hours. For some reason I receive the following error when it tries to upsert these rows into MySQL:
Can't create more than max_prepared_stmt_count statements:
Error 1461: Can't create more than max_prepared_stmt_count statements
(current value: 2000)
My "relevant code"
// ProcessProjectSkuCost receives the given sku cost entries and sends them in batches to upsertProjectSkuCosts()
func ProcessProjectSkuCost(done <-chan bigquery.SkuCost) {
var skuCosts []bigquery.SkuCost
var rowsAffected int64
for skuCostRow := range done {
skuCosts = append(skuCosts, skuCostRow)
if len(skuCosts) == 10 {
rowsAffected += upsertProjectSkuCosts(skuCosts)
skuCosts = []bigquery.SkuCost{}
}
}
if len(skuCosts) > 0 {
rowsAffected += upsertProjectSkuCosts(skuCosts)
}
log.Infof("Completed upserting project sku costs. Affected rows: '%d'", rowsAffected)
}
// upsertProjectSkuCosts inserts or updates ProjectSkuCosts into SQL in batches
func upsertProjectSkuCosts(skuCosts []bigquery.SkuCost) int64 {
// properties are table fields
tableFields := []string{"project_name", "sku_id", "sku_description", "usage_start_time", "usage_end_time",
"cost", "currency", "usage_amount", "usage_unit", "usage_amount_in_pricing_units", "usage_pricing_unit",
"invoice_month"}
tableFieldString := fmt.Sprintf("(%s)", strings.Join(tableFields, ","))
// placeholderstring for all to be inserted values
placeholderString := createPlaceholderString(tableFields)
valuePlaceholderString := ""
values := []interface{}{}
for _, row := range skuCosts {
valuePlaceholderString += fmt.Sprintf("(%s),", placeholderString)
values = append(values, row.ProjectName, row.SkuID, row.SkuDescription, row.UsageStartTime,
row.UsageEndTime, row.Cost, row.Currency, row.UsageAmount, row.UsageUnit,
row.UsageAmountInPricingUnits, row.UsagePricingUnit, row.InvoiceMonth)
}
valuePlaceholderString = strings.TrimSuffix(valuePlaceholderString, ",")
// put together SQL string
sqlString := fmt.Sprintf(`INSERT INTO
project_sku_cost %s VALUES %s ON DUPLICATE KEY UPDATE invoice_month=invoice_month`, tableFieldString, valuePlaceholderString)
sqlString = strings.TrimSpace(sqlString)
stmt, err := db.Prepare(sqlString)
if err != nil {
log.Warn("Error while preparing SQL statement to upsert project sku costs. ", err)
return 0
}
// execute query
res, err := stmt.Exec(values...)
if err != nil {
log.Warn("Error while executing statement to upsert project sku costs. ", err)
return 0
}
rowsAffected, err := res.RowsAffected()
if err != nil {
log.Warn("Error while trying to access affected rows ", err)
return 0
}
return rowsAffected
}
// createPlaceholderString creates a string which will be used for prepare statement (output looks like "(?,?,?)")
func createPlaceholderString(tableFields []string) string {
placeHolderString := ""
for range tableFields {
placeHolderString += "?,"
}
placeHolderString = strings.TrimSuffix(placeHolderString, ",")
return placeHolderString
}
My question:
Why do I hit the max_prepared_stmt_count when I immediately execute the prepared statement (see function upsertProjectSkuCosts)?
I could only imagine it's some sort of concurrency which creates tons of prepared statements in the meantime between preparing and executing all these statements. On the other hand I don't understand why there would be so much concurrency as the channel in the ProcessProjectSkuCost is a buffered channel with a size of 20.
You need to close the statement inside upsertProjectSkuCosts() (or re-use it - see the end of this post).
When you call db.Prepare(), a connection is taken from the internal connection pool (or a new connection is created, if there aren't any free connections). The statement is then prepared on that connection (if that connection isn't free when stmt.Exec() is called, the statement is then also prepared on another connection).
So this creates a statement inside your database for that connection. This statement will not magically disappear - having multiple prepared statements on a connection is perfectly valid. Go could notice that stmt goes out of scope, see that it requires some sort of cleanup, and do that cleanup for you, but it doesn't (just as it doesn't close files for you and things like that). So you need to do that yourself using stmt.Close(). When you call stmt.Close(), the driver sends a command to the database server telling it the statement is no longer needed.
The easiest way to do this is by adding defer stmt.Close() after the err check following db.Prepare().
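Applied to upsertProjectSkuCosts above, that looks roughly like this:
stmt, err := db.Prepare(sqlString)
if err != nil {
log.Warn("Error while preparing SQL statement to upsert project sku costs. ", err)
return 0
}
defer stmt.Close() // tells the MySQL server to deallocate the prepared statement when the function returns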
What you can also do is prepare the statement once and make it available to upsertProjectSkuCosts (either by passing the stmt into upsertProjectSkuCosts, or by making upsertProjectSkuCosts a method on a struct that holds the stmt as a field). If you do this, you should not close the statement after every call - you are re-using an existing statement - but you should still close it once when you are completely done with it.
Also see Should we also close DB's .Prepare() in Golang? and https://groups.google.com/forum/#!topic/golang-nuts/ISh22XXze-s
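A rough sketch of the re-use variant (my own illustration, not a drop-in replacement): it assumes every batch has exactly 10 rows, because the prepared statement has a fixed number of placeholders, so the smaller left-over batch at the end would still need its own statement.
// Prepare once, outside the loop; sqlString is assumed to be built for batches of exactly 10 rows.
stmt, err := db.Prepare(sqlString)
if err != nil {
log.Warn("Error while preparing upsert statement. ", err)
return
}
defer stmt.Close() // close once, when the whole sync run is finished
for skuCostRow := range done {
skuCosts = append(skuCosts, skuCostRow)
if len(skuCosts) == 10 {
rowsAffected += upsertProjectSkuCosts(stmt, skuCosts) // upsertProjectSkuCosts now receives the prepared stmt
skuCosts = []bigquery.SkuCost{}
}
}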

How to get database tables list from MySQL (SHOW TABLES)

I have a problem getting the database table list (SHOW TABLES) in Go.
I use these packages:
database/sql
gopkg.in/gorp.v1
github.com/ziutek/mymysql/godrv
and connect to MySQL with this code:
db, err := sql.Open(
"mymysql",
"tcp:127.0.0.1:3306*test/root/root")
if err != nil {
panic(err)
}
dbmap := &DbMap{Conn:&gorp.DbMap{Db: db}}
And I use this code to get the list of tables:
result, _ := dbmap.Exec("SHOW TABLES")
But result is empty!
Exec is meant for statements that don't return rows, which is why you see nothing back; use Query (or gorp's Select, as in the answer below) for SHOW TABLES. I use the classic go-sql-driver/mysql:
db, _ := sql.Open("mysql", "root:qwerty@/dbname")
res, _ := db.Query("SHOW TABLES")
var table string
for res.Next() {
res.Scan(&table)
fmt.Println(table)
}
PS don't ignore errors! This is only an example
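With the errors actually handled, the same example might look like this:
db, err := sql.Open("mysql", "root:qwerty@/dbname")
if err != nil {
log.Fatal(err)
}
defer db.Close()
res, err := db.Query("SHOW TABLES")
if err != nil {
log.Fatal(err)
}
defer res.Close()
var table string
for res.Next() {
if err := res.Scan(&table); err != nil {
log.Fatal(err)
}
fmt.Println(table)
}
if err := res.Err(); err != nil {
log.Fatal(err)
}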
I tried this code and it works successfully. I create a list of strings and use a Select query to get the list of database tables.
tables := []string{}
dbmap.Select(&tables, "SHOW TABLES")
fmt.Println(tables)

Golang query multiple databases with a JOIN

Using the golang example below, how can I query (JOIN) across multiple databases?
For example, I want to have the relation db1.username.id = db2.comments.username_id.
id := 123
var username string
err := db.QueryRow("SELECT username FROM users WHERE id=?", id).Scan(&username)
switch {
case err == sql.ErrNoRows:
log.Printf("No user with that ID.")
case err != nil:
log.Fatal(err)
default:
fmt.Printf("Username is %s\n", username)
}
As you are using MySQL, you can select fields across databases; see this related question for details. For example, you should be able to do this:
err := db.QueryRow(`
SELECT
db1.users.username
FROM
db1.users
JOIN
db2.comments
ON db1.users.id = db2.comments.username_id
`).Scan(&username)
You could, of course, simply fetch all entries from db2.comments over a second database connection and then use those values in a query against db1.users. That is not recommended, though: joining is the database server's job, and it can most likely do it better than you.
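For completeness, a rough sketch of reading several joined rows in Go (the comment_text column is made up for illustration; adjust it to your schema):
rows, err := db.Query(`SELECT db1.users.username, db2.comments.comment_text FROM db1.users JOIN db2.comments ON db1.users.id = db2.comments.username_id WHERE db1.users.id = ?`, id)
if err != nil {
log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
var username, comment string
if err := rows.Scan(&username, &comment); err != nil {
log.Fatal(err)
}
fmt.Printf("%s: %s\n", username, comment)
}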