How can we run queries concurrently, using goroutines? - mysql

I am using gorm v1 (ORM), go version 1.14
The DB connection is created at the start of my app,
and that DB object is passed throughout the app.
I have a complex, long-running piece of functionality.
Let's say I have 10 sets of queries to run, and the order doesn't matter.
So, what I did was
go queryset1(DB)
go queryset2(DB)
...
go queryset10(DB)
// here I have a wait, maybe via channel or WaitGroup.
Inside queryset1:
func queryset1(db *gorm.DB /*, wg or errChannel */) {
    db.Count() // basic count query
    wg.Done() // or errChannel <- nil
}
Now, the problem is that I encounter MySQL ERROR 1040: "Too many connections".
Why is this happening? Does every goroutine create a new connection?
If so, is there a way to check this and see the "live connections" in MySQL
(not the SHOW STATUS variables like Connections)?
How can I concurrently query the DB?
Edit:
This guy has the same problem

The error is not directly related to go-gorm, but to the underlying MySQL configuration and to your initial connection configuration. In your code, you can manage the following parameters when you first connect to the database:
maximum open connections (SetMaxOpenConns function)
maximum idle connections (SetMaxIdleConns function)
maximum lifetime of a connection before it is recycled (SetConnMaxLifetime function)
For more details, check the official docs or this article on how to get the maximum performance from your connection configuration.
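For illustration, a minimal sketch of that configuration at startup (gorm v1 API; the DSN and the numbers are placeholders, not recommendations):
db, err := gorm.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/mydb?parseTime=true")
if err != nil {
    log.Fatal(err)
}
db.DB().SetMaxOpenConns(20)                  // hard cap on connections to MySQL
db.DB().SetMaxIdleConns(10)                  // keep some warm connections around
db.DB().SetConnMaxLifetime(30 * time.Minute) // recycle before server-side timeouts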
If you want to prevent a situation where each goroutine uses a separate connection, you can do something like this:
// restrict goroutines to be executed 5 at a time
connCh := make(chan bool, 5)
go queryset1(DB, &wg, connCh)
go queryset2(DB, &wg, connCh)
...
go queryset10(DB, &wg, connCh)
wg.Wait()
close(connCh)
Inside your queryset functions:
func queryset1(db *gorm.DB, wg *sync.WaitGroup, connCh chan bool) {
    connCh <- true
    db.Count() // basic count query
    <-connCh
    wg.Done()
}
The connCh will allow the first 5 goroutines to write to it and will block the rest of the goroutines until one of the first 5 takes a value back out of the channel. This prevents the situation where every goroutine starts its own connection. Some of the connections should be reused, but that also depends on the initial connection configuration.
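To make the pattern fully concrete, here is a self-contained sketch (the runLimited helper and the job signature are illustrative additions, not part of the original answer). It also logs db.Stats(), which is a convenient way to watch live connections from the Go side:

package main

import (
    "database/sql"
    "log"
    "sync"

    _ "github.com/go-sql-driver/mysql"
)

// runLimited runs each job in its own goroutine but allows at most
// `limit` of them to touch the database at the same time.
func runLimited(db *sql.DB, jobs []func(*sql.DB), limit int) {
    var wg sync.WaitGroup
    sem := make(chan struct{}, limit) // counting semaphore
    for _, job := range jobs {
        wg.Add(1)
        go func(job func(*sql.DB)) {
            defer wg.Done()
            sem <- struct{}{}        // acquire a slot
            defer func() { <-sem }() // release it when done
            job(db)
            log.Printf("%+v", db.Stats()) // observe open/in-use connections
        }(job)
    }
    wg.Wait()
}

Setting SetMaxOpenConns to the same limit achieves a similar cap at the pool level: excess queries simply block waiting for a free connection instead of opening new ones.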

Related

go-sql-driver: get invalid connection when wait_timeout is 8h as default

One Sentence
Got a MySQL "invalid connection" issue even though MaxOpenConns is abundant and wait_timeout is 8h as default.
Detailed
I have a script that reads all records from table A, applies some transformation, and writes the resulting records to table B. The code works this way (sketched after the list below):
One goroutine scans table A, putting the records into a channel;
Four other goroutines (number configurable) concurrently consume from that channel, each accumulating 50 rows (batch size configurable), inserting them into table B, then accumulating another 50 rows, and so on.
The scanner goroutine holds one *sql.DB, and the inserter goroutines share another *sql.DB.
go-sql-driver: either Version 1.4.1 (2018-11-14) or Version 1.5 (2020-01-07)
(problem encountered with 1.4.1, and reproducible demo, see below, uses 1.5)
Go version: go1.13.15 darwin/amd64
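For orientation, a hedged sketch of that pipeline (Record, scanTableA, insertBatch, scannerDB, and inserterDB are hypothetical names; the real code is linked in the Update below):

recordCh := make(chan Record, 100000)

// scanner: one goroutine pages through table A, 1000 rows at a time
go func() {
    defer close(recordCh)
    scanTableA(scannerDB, recordCh)
}()

// inserters: four goroutines batch 50 records per INSERT into table B
var wg sync.WaitGroup
for i := 0; i < 4; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        batch := make([]Record, 0, 50)
        for rec := range recordCh {
            batch = append(batch, rec)
            if len(batch) == 50 {
                insertBatch(inserterDB, batch)
                batch = batch[:0]
            }
        }
        if len(batch) > 0 {
            insertBatch(inserterDB, batch) // flush the remainder
        }
    }()
}
wg.Wait()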
The invalid connection issue is almost steadily reproducible.
In a specific running case, table A has 67227 records, the channel size is set to 100000, the table A scanner (1 goroutine) reads 1000 at a time, and the table B inserters (4 goroutines) write 50 at a time. It ends up with 67127 records in table B (2*50 lost), and 2 lines of error output in the console:
[mysql] 2020/12/11 21:54:18 packets.go:36: read tcp x.x.x.x:64062->x.x.x.x:3306: read: operation timed out
[mysql] 2020/12/11 21:54:21 packets.go:36: read tcp x.x.x.x:64070->x.x.x.x:3306: read: operation timed out
(The number of error lines varies when I reproduce, it's usually 1, 2 or 3. N error lines coincide with N*50 records insertion failure into table B.)
And from my log file, it prints invalid connection:
2020/12/11 21:54:18 main.go:135: [goroutine 56] BatchExecute: BatchInsertPlace(): SqlDb.ExecContext(): invalid connection
Stats={MaxOpenConnections:0 OpenConnections:4 InUse:3 Idle:1 WaitCount:0 WaitDuration:0s MaxIdleClosed:14 MaxLifetimeClosed:0}
2020/12/11 21:54:21 main.go:135: [goroutine 55] BatchExecute: BatchInsertPlace(): SqlDb.ExecContext(): invalid connection
Stats={MaxOpenConnections:0 OpenConnections:4 InUse:3 Idle:1 WaitCount:0 WaitDuration:0s MaxIdleClosed:14 MaxLifetimeClosed:0}
Trials and observations
By printing each successful/failed write operation with the goroutine id in the log, it appears that the error always happens when any one of the 4 inserting goroutines has an interval of over ~45 seconds between 2 consecutive writes. I think it simply takes that long to accumulate 50 records before inserting them into table B.
In contrast, when I happened to make a change so that the 4 inserting goroutines write more evenly (i.e., no goroutine has a much longer writing interval than the others), the error is not seen. Repeated 3 times.
It looks like one error only affects one batch write operation, and the following batches work fine. So why not retry the errored batch? I supposed a single retry would get it through. Still, I don't mind retrying until success:
var retryExecTillSucc = func(goroutineId int, records []*MyDto) {
    err := inserter.BatchInsert(records)
    for { // retry until success. This is a workaround for the 'invalid connection' issue
        if err == nil {
            break
        }
        logger.Printf("[goroutine %v] BatchExecute: %v \nStats=%+v\n", goroutineId, err, inserter.RdsClient.SqlDb.Stats())
        err = inserter.retryBatchInsert(records)
    }
    logger.Printf("[goroutine %v] BatchExecute: Success \nStats=%+v\n", goroutineId, inserter.RdsClient.SqlDb.Stats())
}
Surprisingly, with this change, retries of the errored batch keep getting the same error and never succeed...
Summary
It looks obvious that one (idle) connection was broken when the error occurred, but my questions are:
MySQL wait_timeout is set to 8h, so why does the connection time out so quickly?
Since MaxOpenConns is not set, it shouldn't be a limitation, especially considering that the log shows merely 4 OpenConnections.
What else to check as potential root cause?
(Too long, but just hope to put it clearly and get some advice~)
Update
Minimal, reproducible example, including:
Code
One sample log file
MySQL error log
Are you using a Context? I suspect the read timeout is caused by a context timeout, or by the readTimeout DSN parameter.
MySQL doesn't provide a safe and efficient cancellation mechanism. When the context is cancelled or readTimeout is reached, DB.ExecContext returns without terminating the connection it was using. This causes "invalid connection" the next time that connection is used.
If you want to limit the execution time of a long query, you can use the MAX_EXECUTION_TIME optimizer hint instead of a context.
See https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time for reference.
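For example, a hedged sketch of the hint (the 5000 ms value and the query are placeholders; MySQL applies this hint to SELECT statements):

// cap the query at 5000 ms server-side instead of using a context deadline
rows, err := db.Query("SELECT /*+ MAX_EXECUTION_TIME(5000) */ id, name FROM users")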

Does gorm.Open() create a new connection pool every time it's called?

I'm working on a piece of code that is making calls to the database from several different places. In this code I have been using the GORM library, and calling gorm.Open() every time I need to interact with the database.
What I'm wondering is what is happening under the hood when I call this? Is a new connection pool created every time I call it or is each call to gorm.Open() sharing the same connection pool?
TLDR: yes, try to reuse the returned DB object.
gorm.Open does the following (more or less):
lookup the driver for the given dialect
call sql.Open to return a DB object
call DB.Ping() to force it to talk to the database
This means that one sql.DB object is created for every gorm.Open. Per the doc, this means one connection pool for each DB object.
This means that the recommendations for sql.Open apply to gorm.Open:
The returned DB is safe for concurrent use by multiple goroutines and
maintains its own pool of idle connections. Thus, the Open function
should be called just once. It is rarely necessary to close a DB.
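In practice that means opening once at startup and sharing the handle; a minimal sketch (GORM v2 API assumed; dsn is a placeholder):

var db *gorm.DB // one handle, one pool, for the whole process

func main() {
    var err error
    db, err = gorm.Open(mysql.Open(dsn), &gorm.Config{})
    if err != nil {
        log.Fatal(err)
    }
    // pass db (or inject it) everywhere instead of calling gorm.Open again
}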
Yes. Also note that the connection pool can be configured in both GORM v1 and v2 (v1 syntax shown here; a v2 example follows below):
// SetMaxIdleConns sets the maximum number of connections in the idle connection pool.
db.DB().SetMaxIdleConns(10)
// SetMaxOpenConns sets the maximum number of open connections to the database.
db.DB().SetMaxOpenConns(100)
// SetConnMaxLifetime sets the maximum amount of time a connection may be reused.
db.DB().SetConnMaxLifetime(time.Hour)
Calling the DB() function on the *gorm.DB instance returns the underlying *sql.DB instance.
For those who are just starting with gorm, here is a more complete example (GORM v2):
db, err := gorm.Open(mysql.Open(url))
if err != nil {
    // handle the error
}
sqlDB, err := db.DB()
if err != nil {
    // handle the error
}
sqlDB.SetMaxIdleConns(10)
sqlDB.SetMaxOpenConns(100)
sqlDB.SetConnMaxLifetime(time.Hour)

Golang RESTful API load testing causing too many database connections

I think I am having a serious issue managing the database connection pool in Go. I built a RESTful API using the Gorilla web toolkit, which works great when only a few requests are sent to the server. But now I have started load testing using the loader.io site. I apologize for the long post, but I wanted to give you the full picture.
Before going further, here are some info on the server running the API and MySQL:
Dedicated Hosting Linux
8GB RAM
Go version 1.1.1
Database connectivity using go-sql-driver
MySQL 5.1
Using loader.io I can send 1000 GET requests/15 seconds without problems. But when I send 1000 POST requests/15 seconds I get lots of errors, all of which are ERROR 1040: too many database connections. Many people have reported similar issues online. Note that I am only testing one specific POST request for now. For this POST request I ensured the following (which was also suggested by many others online):
I made sure not to Open and Close the *sql.DB in short-lived functions. Instead, I created a single global variable for the connection pool, as you see in the code below, although I am open to suggestions here because I do not like using global variables.
I made sure to use db.Exec when possible and to only use db.Query and db.QueryRow when results are expected.
Since the above did not solve my problem, I tried setting db.SetMaxIdleConns(1000), which solved the problem for 1000 POST requests/15 seconds: no more 1040 errors. Then I increased the load to 2000 POST requests/15 seconds and started getting ERROR 1040 again. I tried increasing the value passed to db.SetMaxIdleConns(), but that did not make a difference.
Here are some connection statistics I get from the MySQL database by running SHOW STATUS WHERE variable_name = 'Threads_connected';
For 1000 POST requests/15 seconds: observed #threads_connected ~= 100
For 2000 POST requests/15 seconds: observed #threads_connected ~= 600
I also increased the maximum connections for MySQL in my.cnf, but that did not make a difference. What do you suggest? Does the code look fine? If yes, then it is probably just that the connections are limited.
You will find a simplified version of the code below.
var db *sql.DB

func main() {
    db = DbConnect()
    db.SetMaxIdleConns(1000)
    http.Handle("/", r)
    err := http.ListenAndServe(fmt.Sprintf("%s:%s", API_HOST, API_PORT), nil)
    if err != nil {
        fmt.Println(err)
    }
}

func DbConnect() *sql.DB {
    db, err := sql.Open("mysql", connectionString)
    if err != nil {
        fmt.Printf("Connection error: %s\n", err.Error())
        return nil
    }
    return db
}

func PostBounce(w http.ResponseWriter, r *http.Request) {
    userId, err := AuthRequest(r)
    // error checking
    // read the request body and use json.Unmarshal
    bounceId, err := CreateBounce(userId, b)
    // return HTTP status code here
}

func AuthRequest(r *http.Request) (id int, err error) {
    // parse header and get username and password
    query := "SELECT Id FROM Users WHERE Username=? AND Password=PASSWORD(?)"
    err = db.QueryRow(query, username, password).Scan(&id)
    // error checking and return
}

func CreateBounce(userId int, bounce NewBounce) (bounceId int64, err error) {
    // initialize some variables
    query := "INSERT INTO Bounces (.....) VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
    result, err := db.Exec(query, ......)
    // error checking
    bounceId, _ = result.LastInsertId()
    // return
}
Go's database/sql doesn't prevent you from creating an infinite number of connections to the database. If there is an idle connection in the pool, it will be used; otherwise a new connection is created.
So, under load, your request handlers' sql.DB is probably finding no idle connections, and a new connection is created when needed. This churns for a bit (reusing idle connections when possible and creating new ones when needed), ultimately reaching the max connections for the database. And, unfortunately, in Go 1.1 there isn't a convenient way (e.g. SetMaxOpenConns) to limit open connections.
Upgrade to a newer version of Go. In Go 1.2+ you get SetMaxOpenConns. Check the MySQL docs for a starting setting and then tune from there.
db.SetMaxOpenConns(100) //tune this
If you must use Go 1.1, you'll need to ensure in your code that the *sql.DB is only used by N clients at a time, for example with a worker pool as sketched below.
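A hedged sketch of one way to do that on Go 1.1 (the Query type and worker wiring are hypothetical):

// bound DB usage to n concurrent workers, even without SetMaxOpenConns
type Query struct {
    SQL  string
    Args []interface{}
}

jobs := make(chan Query)
for i := 0; i < n; i++ {
    go func() {
        for q := range jobs { // at most n queries touch the DB at once
            db.Exec(q.SQL, q.Args...)
        }
    }()
}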
@MattSelf's proposed solution is correct, but I ran into other issues. Here is exactly what I did to solve the problem (by the way, the server is running CentOS).
Since I have a dedicated server, I increased max_connections for MySQL:
In /etc/my.cnf I added the line max_connections=10000. Although that is more connections than I need.
Restart MySQL: service mysql restart
Changed ulimit -n, that is, increased the number of open file descriptors allowed.
To do that I made changes to two files:
In /etc/sysctl.conf I added the line
fs.file-max = 65536
In /etc/security/limits.conf I added the following lines:
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
Reboot your server.
Upgraded Go to 1.3.3, as suggested by @MattSelf.
Set
db.SetMaxOpenConns(10000)
Again, the number is larger than what I need, but this proved to me that things work.
I ran a test using loader.io consisting of 5000 clients, each sending a POST request, all within 15 seconds. All went through without errors.
Something else to note is setting back_log to a higher value in your my.cnf file, something like a few hundred or 1000. This will help handle more connections per second. See High connections per second.

Go write unix /tmp/mysql.sock: broken pipe when sending a lot of requests

I have a Go API endpoint that makes several MySQL queries. When the endpoint receives a small number of requests, it works just fine. However, I am now testing it using apache bench with 100 requests. The first 100 all went through. However, the 2nd 100 caused this error to appear:
2014/01/15 12:08:03 http: panic serving 127.0.0.1:58602: runtime error: invalid memory address or nil pointer dereference
goroutine 973 [running]:
net/http.func·009()
/usr/local/Cellar/go/1.2/libexec/src/pkg/net/http/server.go:1093 +0xae
runtime.panic(0x402960, 0x9cf419)
/usr/local/Cellar/go/1.2/libexec/src/pkg/runtime/panic.c:248 +0x106
database/sql.(*Rows).Close(0x0, 0xc2107af540, 0x69)
/usr/local/Cellar/go/1.2/libexec/src/pkg/database/sql/sql.go:1576 +0x1e
store.findProductByQuery(0xc2107af540, 0x69, 0x0, 0xb88e80, 0xc21000ac70)
/Users/dennis.suratna/workspace/session-go/src/store/product.go:83 +0xe3
store.FindProductByAppKey(0xc210337748, 0x7, 0x496960, 0x6, 0xc2105eb1b0)
/Users/dennis.suratna/workspace/session-go/src/store/product.go:28 +0x11c
api.SessionHandler(0xb9eff8, 0xc2108ee200, 0xc2108f5750, 0xc2103285a0, 0x0, ...)
/Users/dennis.suratna/workspace/session-go/src/api/session_handler.go:31 +0x2fb
api.func·001(0xb9eff8, 0xc2108ee200, 0xc2108f5750, 0xc2103285a0)
/Users/dennis.suratna/workspace/session-go/src/api/api.go:81 +0x4f
reflect.Value.call(0x3ad9a0, 0xc2101ffdb0, 0x130, 0x48d520, 0x4, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:474 +0xe0b
reflect.Value.Call(0x3ad9a0, 0xc2101ffdb0, 0x130, 0xc2103c4a00, 0x3, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:345 +0x9d
github.com/codegangsta/inject.(*injector).Invoke(0xc2103379c0, 0x3ad9a0, 0xc2101ffdb0, 0x4311a0, 0x1db94e, ...)
It looks like it's not caused by the number of concurrent requests but, rather, by something that is not properly closed. I am already closing every prepared statement that I create in my code. I am wondering if anyone has seen this before.
Edit:
This is how I am initializing my MySQL connection:
func InitStore(environment string) error {
    db, err := sql.Open("mysql", connStr(environment))
    ....
    S = &Store{
        Mysql:       db,
        Environment: environment,
    }
}
This happens only once, when I start the server.
OK, so I was able to solve this problem, and now I can send ~500 requests with concurrency 10 with no more "broken pipe" or "Too many connections" errors.
I think it all comes down to following best practices. When you don't expect multiple rows to be returned, use QueryRow instead of Query, and chain it with Scan:
db.QueryRow(...).Scan(...)
If you don't expect rows to be returned, and you're not going to reuse your statements, use Exec, not Prepare.
If you have a prepared statement or are querying multiple rows, don't forget to Close() it.
Got all of the above from
https://github.com/go-sql-driver/mysql/issues/111
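Putting those practices together, a minimal sketch (table and column names are hypothetical):

// one row expected: QueryRow chained with Scan
var id int
err := db.QueryRow("SELECT Id FROM Users WHERE Username = ?", username).Scan(&id)

// no rows expected, statement not reused: Exec directly, no Prepare
_, err = db.Exec("UPDATE Users SET LastSeen = NOW() WHERE Id = ?", id)

// multiple rows: always close the result set
rows, err := db.Query("SELECT Id FROM Sessions WHERE UserId = ?", id)
if err != nil {
    return err
}
defer rows.Close()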
If you use Go 1.2.x you can use db.SetMaxOpenConns to tell the sql package not to open more than X connections. Queries that need a database connection after X connections are already open (and busy) will block until a connection is available.
That being said: what are the next lines of the stack trace? Line ~1093 in http/server.go is the recover code for when your serve function fails. It looks more like you are mishandling some data and that makes it fail, or you are missing an error check and then trying to process data when you were actually returned an error.

all pooled connections were in use and max pool size was reached

I am writing a .NET 4.0 console app that:
Opens up a connection
Uses a DataReader to cursor through a list of keys
For each key read, calls a web service
Stores the result of the web service in the database
I then spawn multiple threads of this process in order to improve the maximum number of records that I can process per second.
When I up the process beyond about 30 or so threads, I get the following error:
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Is there a server- or client-side option to tweak to allow me to obtain more connections from the connection pool?
I am calling a SQL Server 2008 R2 database.
Thx
This sounds like a design issue. What's your total record count from the database? Iterating through the reader will be really fast. Even if you have hundreds of thousands of rows, going through that reader will be quick. Here's a different approach you could take:
Iterate through the reader and store the data in a list of objects. Then iterate through your list of objects at a number of your choice (e.g. two at a time, three at a time, etc) and spawn that number of threads to make calls to your web service in parallel.
This way you won't be opening multiple connections to the database, and you're dealing with what is likely the true bottleneck (the HTTP call to the web service) in parallel.
Here's an example:
List<SomeObject> yourObjects = new List<SomeObject>();
if (yourReader.HasRows) {
    while (yourReader.Read()) {
        SomeObject foo = new SomeObject();
        foo.SomeProperty = yourReader.GetInt32(0);
        yourObjects.Add(foo);
    }
}

// Assumes an even count; guard the i+1 access for a trailing odd element.
for (int i = 0; i + 1 < yourObjects.Count; i = i + 2) {
    // Kick off your web service calls in parallel. You will likely want to do something with the result.
    Task[] tasks = new Task[2] {
        Task.Factory.StartNew(() => yourService.MethodName(yourObjects[i].SomeProperty)),
        Task.Factory.StartNew(() => yourService.MethodName(yourObjects[i+1].SomeProperty)),
    };
    Task.WaitAll(tasks);
}
// Now do your database INSERT.
Opening up a new connection for every request is incredibly inefficient. If you simply want to use the same connection to keep requesting things, that is more than possible. You can open a connection and then run as many SqlCommands as you like through that one connection. Simply keep the ONE connection around, and dispose of it after all your threading is done.
Please restart IIS and you will be able to connect.