MySQL returning "Too many connections" error

I'm doing something like this:
import (
    "database/sql"
    _ "github.com/go-sql-driver/mysql" // driver imported for its side effects
)

var db *sql.DB

func main() {
    var err error
    db, err = sql.Open(...)
    if err != nil {
        panic(err)
    }
    for j := 0; j < 8000; j++ {
        _, err := db.Query("QUERY...")
        if err != nil {
            logger.Println("Error " + err.Error())
            return
        }
    }
}
It works for the first ~150 queries (those are made through another function), but after that I get this error:
mysqli_real_connect(): (HY000/1040): Too many connections
So clearly I'm doing something wrong, but I can't find what it is. I don't know whether I should open and close a new connection for each query.
Error in the log file :
"reg: 2020/06/28 03:35:34 Errores Error 1040: Too many connections"
(it is printed only once)
Error in mysql php my admin:
"mysqli_real_connect(): (HY000/1040): Too many connections"
"La conexión para controluser, como está definida en su configuración, fracasó."
(translated: "the connection for controluser, as defined in its configuration, failed.")
"mysqli_real_connect(): (08004/1040): Too many connections"

Every time you call Query(), you create a new *sql.Rows handle. Each active handle holds a dedicated database connection. Since you never call Close, that handle, and thus its connection, remains open until the program exits.
Solve your problem by calling rows.Close() after you're done with each query:
for j := 0; j < 8000; j++ {
    rows, err := db.Query("QUERY...")
    if err != nil {
        logger.Println("Error " + err.Error())
        return
    }
    // Your main logic here
    rows.Close()
}
This Close() call is often made in a defer statement, but that doesn't work inside a for loop (a defer only executes when the function returns), so you may want to move your main logic to a new function:
for j := 0; j < 8000; j++ {
    doStuff()
}

// later
func doStuff() {
    rows, err := db.Query("QUERY...")
    if err != nil {
        logger.Println("Error " + err.Error())
        return
    }
    defer rows.Close()
    // Your main logic here
}
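Separately (this is not part of the original answer), database/sql lets you cap the pool so that even a handle leak like this hits a local ceiling instead of the server's max_connections. A minimal sketch, placed right after sql.Open and assuming a time import; the exact numbers are arbitrary:

// Hypothetical guard rails on the connection pool:
db.SetMaxOpenConns(50)                 // block callers rather than open a 51st connection
db.SetMaxIdleConns(10)                 // keep a few idle connections for reuse
db.SetConnMaxLifetime(5 * time.Minute) // recycle long-lived connections

With a cap in place, a leak like the one above makes the program stall (waiting for a free connection) instead of tipping the server over, which is usually easier to diagnose.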

Related

How to set up a loop that retries until it connects to the MySQL DB

I have an init function in my Go program. I need the connection to loop until it connects to the database: if it can't connect, it idles and retries again and again for up to 60 seconds, and if it still can't connect at the end, it exits. Also, there should be an environment variable that overrides the default 60 seconds so the user can set their own timeout. My code is below; I've looked around and there's no solid solution on the web.
var override string = os.Getenv("OVERRIDE")

func dsn() string {
    return fmt.Sprintf("%s:%s@tcp(%s:%s)/%s", username, password, ipdb, portdb, dbName)
}

func init() {
    var db, err = sql.Open("mysql", dsn())
    if err != nil {
        fmt.Println(err.Error())
    } else {
        fmt.Printf("Connection established to MySQL server\n%s\n", dsn())
    }
    // Check if the table exists in the database
    rows, table_check := db.Query("select * from " + "Url" + ";")
    if table_check == nil {
        fmt.Println("\nTable exists in the Database")
        count := 0
        for rows.Next() {
            count++
        }
        fmt.Printf("Number of records are %d \n", count)
    } else {
        fmt.Printf("\nTable does not exist in Database %s", table_check)
        _, status := db.Exec("CREATE TABLE Url (LongUrl varchar(255), ShortUrl varchar(32));")
        if status == nil {
            fmt.Println("\nNew table created in the Database")
        } else {
            fmt.Printf("\nThe table was not created %s", status)
        }
    }
    defer db.Close()
}

func main() {
    ......
}
You could use a select with a timeout channel.
The connection is attempted every 500 ms, and if the select has not resolved within 60 seconds, the timeout channel breaks out of the loop.
For example:
func main() {
    doneCh := make(chan bool)
    db, err := sql.Open("mysql", dsn())
    if err != nil {
        fmt.Printf("Could not open connection: %v\n", err)
        os.Exit(1)
    }
    // Start polling for the connection
    go func() {
        ticker := time.NewTicker(500 * time.Millisecond)
        timeoutTimer := time.After(time.Second * 60)
        for {
            select {
            case <-timeoutTimer:
                err = errors.New("connection to db timed out")
                doneCh <- true
                return
            case <-ticker.C:
                // Simulate some strange wait ... :)
                if rand.Intn(10) == 2 {
                    err = db.Ping()
                    if err == nil {
                        doneCh <- true
                        return
                    }
                }
                fmt.Println("Retrying connection ... ")
            }
        }
    }()
    // Start waiting for timeout or success
    fmt.Println("Waiting for DB connection ...")
    <-doneCh
    if err != nil {
        fmt.Printf("Could not connect to DB: %v \n", err)
        os.Exit(1)
    }
    defer db.Close()
    // Do some work
    fmt.Printf("Done! Do work now. %+v", db.Stats())
}
Note
Handle your errors as necessary. db.Ping() may return a specific error that tells you whether a retry makes sense; some errors are not worth retrying.
For timers and channels see:
Timers
Channels
Edit: a supposedly simpler way, without channels, using a plain for loop:
func main() {
    db, err := sql.Open("mysql", dsn())
    if err != nil {
        fmt.Printf("Could not open connection: %v\n", err)
        os.Exit(1)
    }
    defer db.Close()
    timeOut := time.Second * 60
    idleDuration := time.Millisecond * 500
    start := time.Now()
    for time.Since(start) < timeOut {
        if err = db.Ping(); err == nil {
            break
        }
        time.Sleep(idleDuration)
        fmt.Println("Retrying to connect ...")
    }
    if err != nil { // err is still set if every Ping within the window failed
        fmt.Printf("Could not connect to DB, timeout after %s\n", timeOut)
        os.Exit(1)
    }
    // Do some work
    fmt.Printf("Done! \nDo work now. %+v", db.Stats())
}
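The question also asked for an environment variable that overrides the default 60 seconds; neither variant above covers it. A minimal sketch (assuming OVERRIDE holds a Go duration string such as "90s") that slots in where timeOut is defined:

// Hypothetical override: read the timeout from the OVERRIDE environment
// variable (assumed to hold a duration string such as "90s"), falling back
// to the default 60 seconds when it is unset or invalid.
timeOut := time.Second * 60
if override := os.Getenv("OVERRIDE"); override != "" {
    if d, err := time.ParseDuration(override); err == nil {
        timeOut = d
    }
}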

Bulk insert with Golang and Gorm deadlock in concurrency goroutines

I'm trying to bulk insert many records using Gorm, Golang and MySQL. My code looks like this:
package main

import (
    "fmt"
    "sync"

    "gorm.io/driver/mysql"
    "gorm.io/gorm"
)

type Article struct {
    gorm.Model
    Code string `gorm:"size:255;uniqueIndex"`
}

func main() {
    db, err := gorm.Open(mysql.Open("root@tcp(127.0.0.1:3306)/q_test"), nil)
    if err != nil {
        panic(err)
    }
    db.AutoMigrate(&Article{})

    // err = db.Exec("TRUNCATE articles").Error
    err = db.Exec("DELETE FROM articles").Error
    if err != nil {
        panic(err)
    }

    // Build some articles
    n := 10000
    var articles []Article
    for i := 0; i < n; i++ {
        article := Article{Code: fmt.Sprintf("code_%d", i)}
        articles = append(articles, article)
    }

    // // Save articles
    // err = db.Create(&articles).Error
    // if err != nil {
    //     panic(err)
    // }

    // Save articles with goroutines
    chunkSize := 100
    var wg sync.WaitGroup
    wg.Add(n / chunkSize)
    for i := 0; i < n; i += chunkSize {
        go func(i int) {
            defer wg.Done()
            chunk := articles[i:(i + chunkSize)]
            err := db.Create(&chunk).Error
            if err != nil {
                panic(err)
            }
        }(i)
    }
    wg.Wait()
}
When I run this code sometimes (about one in three times) I get this error:
panic: Error 1213: Deadlock found when trying to get lock; try restarting transaction
If I run the code without goroutines (commented lines), I get no deadlock. Also, I've noticed that if I remove the unique index on the code field the deadlock doesn't happen anymore. And if I replace the DELETE FROM articles statement with TRUNCATE articles the deadlock doesn't seem to happen anymore.
I've also run the same code with Postgresql and it works, with no deadlocks.
Any idea why the deadlock happens only with the unique index on MySQL and how to avoid it?
A DELETE statement is executed using row locks: each row in the table is locked for deletion. TRUNCATE TABLE locks the table and pages, but not each row.
Source: https://stackoverflow.com/a/20559931/18012302
I think MySQL needs time to finish the DELETE query: the delete-marked index entries are only cleaned up later by the purge thread, and concurrent inserts into the unique index take gap locks next to them, which is likely why the deadlock shows up only with the unique index. Try adding a time.Sleep after the delete query:
err = db.Exec("DELETE FROM articles").Error
if err != nil {
    panic(err)
}
time.Sleep(time.Second)
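Since Error 1213 itself says "try restarting transaction", another option (not from the original answer) is to treat the deadlock as retryable instead of sleeping. A minimal sketch, assuming the go-sql-driver/mysql driver that gorm.io/driver/mysql wraps:

import (
    "errors"
    "time"

    gomysql "github.com/go-sql-driver/mysql" // the driver underneath gorm.io/driver/mysql
    "gorm.io/gorm"
)

// Hypothetical helper: retry a chunked insert when MySQL reports error 1213,
// which the server itself labels "try restarting transaction".
func createWithRetry(db *gorm.DB, chunk []Article, attempts int) error {
    var err error
    for i := 0; i < attempts; i++ {
        err = db.Create(&chunk).Error
        var myErr *gomysql.MySQLError
        if err == nil || !errors.As(err, &myErr) || myErr.Number != 1213 {
            return err
        }
        time.Sleep(time.Duration(i+1) * 50 * time.Millisecond) // simple linear backoff
    }
    return err
}

The goroutine body would then call createWithRetry(db, chunk, 3) instead of db.Create(&chunk) directly.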

Deadlock error occurred when querying MySQL in Go, not in the query itself but after leaving the "rows.Next()" loop

When I use Go to query MySQL, I sometimes find a deadlock error in my code.
My question is not "why did the deadlock occur", but why the deadlock error shows up at "err = rows.Err()". In my mind, if a deadlock occurred, I should get it from "tx.Query"'s returned err.
This is demo code; "point 2" is where the deadlock error occurred:
func demoFunc(tx *sql.Tx, arg1, arg2 int) ([]outItem, error) {
    var ret []outItem
    var err error
    var rows *sql.Rows
    // xxxLockSql may deadlock, so try up to 3 times
    for i := 0; i < 3; i++ {
        // ------ point 1
        rows, err = tx.Query(xxxLockSql, arg1, arg2)
        if err == nil {
            break
        }
        log.Printf("[ERROR] xxxLockSql failed, err %s, retry %d", err.Error(), i)
        time.Sleep(time.Millisecond * 10)
    }
    // if querying xxxLockSql failed 3 times, give up and return
    if err != nil {
        log.Printf("[ERROR] xxxLockSql failed, err %s", err.Error())
        return ret, err
    }
    defer rows.Close()
    for rows.Next() {
        var a1, a2 int // field types assumed; the original snippet appended an undeclared variable here
        err = rows.Scan(&a1, &a2)
        if err != nil {
            return ret, err
        }
        ret = append(ret, outItem{a1, a2})
    }
    // ------ point 2
    if err = rows.Err(); err != nil {
        // I found the deadlock err in this "if" block.
        // err content is "Error 1213: Deadlock found when trying to get lock; try restarting transaction"
        log.Printf("[ERROR] loop rows failed, err %s", err.Error())
        return ret, err
    }
    return ret, nil
}
I cannot be sure about the reason, since you did not mention your database driver (and which sql package you are using). But I think this is because sql.Query is lazy: rows are streamed from the server as you iterate, so the error can be postponed until actual use, i.e., rows.Next() - that is why the deadlock error occurs there.
As for why it shows up outside the loop: when an error occurs, rows.Next() returns false and breaks the loop, and the pending error is then reported by rows.Err().
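One caveat worth adding (this is not from the original answer): Error 1213 means InnoDB has already rolled back the whole transaction, so retrying tx.Query on the same tx, as at point 1, cannot succeed; the retry has to restart the transaction. A minimal sketch, reusing xxxLockSql from the question and assuming outItem simply holds the two scanned ints:

// Hypothetical restructuring: retry the whole transaction, because after a
// deadlock the server has rolled the transaction back.
func queryWithRetry(db *sql.DB, arg1, arg2 int) ([]outItem, error) {
    var ret []outItem
    var err error
    for i := 0; i < 3; i++ {
        err = func() error {
            tx, err := db.Begin()
            if err != nil {
                return err
            }
            defer tx.Rollback() // harmless no-op once Commit has succeeded
            rows, err := tx.Query(xxxLockSql, arg1, arg2)
            if err != nil {
                return err
            }
            defer rows.Close()
            ret = ret[:0] // reset any partial result from a failed attempt
            for rows.Next() {
                var a1, a2 int // field types assumed for the sketch
                if err := rows.Scan(&a1, &a2); err != nil {
                    return err
                }
                ret = append(ret, outItem{a1, a2})
            }
            if err := rows.Err(); err != nil { // the deadlock can surface here
                return err
            }
            return tx.Commit()
        }()
        if err == nil {
            return ret, nil
        }
        time.Sleep(time.Millisecond * 10)
    }
    return nil, err
}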

How to fix leaking MySQL conns with Go?

I have data on my local computer in 2 MySQL databases (dbConnOuter and dbConnInner) that I want to process and collate into a 3rd database (dbConnTarget).
The code runs for about 17000 cycles, then stops with these error messages:
[mysql] 2018/08/06 18:20:57 packets.go:72: unexpected EOF
[mysql] 2018/08/06 18:20:57 packets.go:405: busy buffer
As far as I can tell I'm properly closing the connections where I'm reading from, and I'm using Exec for writing, which I believe handles its own resources. I've also tried prepared statements, but that didn't help; the result was the same.
Below is the relevant part of my code, and it does similar database operations prior to this without any issues.
As this is one of my first experiments with Go, I can't yet see where I might be wasting my resources.
import (
    "database/sql"
    "fmt"

    _ "github.com/go-sql-driver/mysql"
)

var dbConnOuter *sql.DB
var dbConnInner *sql.DB
var dbConnTarget *sql.DB

func main() {
    dbConnOuter = connectToDb(dbUserDataOne)
    dbConnInner = connectToDb(dbUserDataTwo)
    dbConnTarget = connectToDb(dbUserDataThree)
    // execute various db processing functions
    doStuff()
}

func connectToDb(dbUser dbUser) *sql.DB {
    dbConn, err := sql.Open("mysql", fmt.Sprintf("%v:%v@tcp(127.0.0.1:3306)/%v", dbUser.username, dbUser.password, dbUser.dbname))
    if err != nil {
        panic(err)
    }
    dbConn.SetMaxOpenConns(500)
    return dbConn
}

// omitted similar db processing functions that work just fine

func doStuff() {
    var data1, data2, data3 string // types assumed; not shown in the question
    outerRes, err := dbConnOuter.Query("SELECT some outer data")
    if err != nil {
        panic(err)
    }
    defer outerRes.Close()
    for outerRes.Next() {
        outerRes.Scan(&data1)
        innerRes, err := dbConnInner.Query("SELECT some inner data using", data1)
        if err != nil {
            panic(err)
        }
        innerRes.Scan(&data2, &data3)
        innerRes.Close()
        dbConnTarget.Exec("REPLACE INTO whatever", data1, data2, data3)
    }
}
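No answer was captured for this question, but one detail stands out in doStuff: innerRes.Scan is called without a preceding innerRes.Next(), so that Scan always fails and its error is silently discarded. For single-row lookups, QueryRow is the safer shape, since it releases the underlying connection as soon as Scan returns. A minimal sketch of the inner lookup (the placeholder query is illustrative, not from the thread):

// Hypothetical rewrite of the inner lookup: QueryRow returns at most one
// row and frees its connection once Scan is called, so nothing is leaked.
err := dbConnInner.QueryRow("SELECT some inner data WHERE key = ?", data1).Scan(&data2, &data3)
if err != nil && err != sql.ErrNoRows {
    panic(err)
}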

DB calls in goroutine failing without error

I wrote a script to migrate lots of data from one DB to another and got it working fine, but now I want to try using goroutines to speed up the script with concurrent DB calls. Since changing the call to go processBatch(offset) instead of just processBatch(offset), I can see that a few goroutines are started, but the script finishes almost instantly and nothing is actually done. Also, the number of started goroutines varies every time I run the script. There are no errors (that I can see).
I'm still new to goroutines and Go in general, so any pointers as to what I might be doing wrong are much appreciated. I have removed all logic from the code below that is not related to concurrency or DB access, as it runs fine without the changes. I also left a comment where I believe it fails, as nothing below that line is run (a Println there shows no output). I also tried using sync.WaitGroup to stagger DB calls, but it didn't seem to change anything.
var (
    legacyDB *sql.DB
    v2DB     *sql.DB
)

func main() {
    var total, loops int
    var err error
    legacyDB, err = sql.Open("mysql", "...")
    if err != nil {
        panic(err)
    }
    defer legacyDB.Close()
    v2DB, err = sql.Open("mysql", "...")
    if err != nil {
        panic(err)
    }
    defer v2DB.Close()
    err = legacyDB.QueryRow("SELECT count(*) FROM users").Scan(&total)
    checkErr(err)
    loops = int(math.Ceil(float64(total) / float64(batchsize)))
    fmt.Println("Total: " + strconv.Itoa(total))
    fmt.Println("Loops: " + strconv.Itoa(loops))
    for i := 0; i < loops; i++ {
        offset := i * batchsize
        go processBatch(offset)
    }
    legacyDB.Close()
    v2DB.Close()
}

func processBatch(offset int) {
    query := namedParameterQuery.NewNamedParameterQuery(`
        SELECT ...
        LIMIT :offset,:batchsize
    `)
    query.SetValue(...)
    rows, err := legacyDB.Query(query.GetParsedQuery(), (query.GetParsedParameters())...)
    // nothing after this line gets done (Println here does not show output)
    checkErr(err)
    defer rows.Close()
    ....
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    log.Printf("\nAlloc = %v\nTotalAlloc = %v\nSys = %v\nNumGC = %v\n\n", m.Alloc/1024/1024, m.TotalAlloc/1024/1024, m.Sys/1024/1024, m.NumGC)
}

func checkErr(err error) {
    if err != nil {
        panic(err)
    }
}
As Nadh mentioned in a comment, that is because the program exits when the main function returns, regardless of whether other goroutines are still running. To fix this, a *sync.WaitGroup will suffice. A WaitGroup is used for cases where you have multiple concurrent operations and would like to wait until they have all completed. Documentation can be found here: https://golang.org/pkg/sync/#WaitGroup.
An example implementation for your program without the use of global variables would look like replacing
fmt.Println("Total: " + strconv.Itoa(total))
fmt.Println("Loops: " + strconv.Itoa(loops))
for i := 0; i < loops; i++ {
offset := i * batchsize
go processBatch(offset)
}
with
fmt.Println("Total: " + strconv.Itoa(total))
fmt.Println("Loops: " + strconv.Itoa(loops))
wg := new(sync.WaitGroup)
wg.Add(loops)
for i := 0; i < loops; i++ {
offset := i * batchsize
go func(offset int) {
defer wg.Done()
processBatch(offset)
}(offset)
}
wg.Wait()
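One follow-up worth noting (not part of the original answer): this starts one goroutine per batch, so all loops batches query the database at once, which can run into the "Too many connections" limits discussed at the top of this page. A buffered channel used as a semaphore caps the concurrency; a minimal sketch with an arbitrary limit of 8:

// Hypothetical variant: cap how many batches run at once so the migration
// cannot exhaust the MySQL connection limit.
sem := make(chan struct{}, 8) // at most 8 batches in flight
wg := new(sync.WaitGroup)
wg.Add(loops)
for i := 0; i < loops; i++ {
    offset := i * batchsize
    go func(offset int) {
        defer wg.Done()
        sem <- struct{}{}        // acquire a slot
        defer func() { <-sem }() // release the slot when done
        processBatch(offset)
    }(offset)
}
wg.Wait()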