pymysql lost connection after a long period - mysql

Using pymysql to connect to MySQL, I leave the program running for a long time, for example, leaving the office at night and coming back the next morning. During this period there is no activity on the application. Now doing a database commit gives this error:
File "/usr/local/lib/python3.3/site-packages/pymysql/cursors.py", line 117, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python3.3/site-packages/pymysql/connections.py", line 189, in defaulterrorhandler
raise errorclass(errorvalue)
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
Restarting the web server (Tornado) fixes it. Why does leaving it idle for a long time cause this error?

The wait_timeout exists for a reason: long-idle connections waste limited server resources. You are correct: increasing it is not the right approach.
Fortunately, this is Python, which has a robust exception mechanism. I'm not familiar with pymysql, but presumably you've got an "open_connection" method somewhere which allows something like:
try:
    cursor.do_something()
except pymysql.err.OperationalError as e:
    if e.args[0] == 2013:  # Lost connection to MySQL server during query
        connection = open_connection()  # redo open_connection ...
        cursor = connection.cursor()
        cursor.do_something()           # ... and do_something
    else:
        raise
Since you didn't post any calling code, I can't structure this example to match your application. There are a couple of things worth noting about the except clause. The first is that it should always be as narrow as possible: there are (presumably) many OperationalErrors, and you only know how to deal with 'Lost connection'.
In case it isn't a lost connection exception, you should re-raise it so it doesn't get swallowed. Unless you know how to handle other OperationalErrors, this will get passed up the stack and result in an informative error message which is a reasonable thing to do since your cursor is probably useless anyway by then.
Restarting the web server only fixes the lost connection as an accidental side effect of reinitializing everything; handling the exception within the code is a much gentler way of accomplishing your goal.
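If you would rather not wrap every query in try/except, another hedged option (a sketch only; the host, credentials, and the run_query helper are made up for illustration and assume a single long-lived connection named conn) is to ping the connection before each unit of work. pymysql's Connection.ping(reconnect=True) reopens the link transparently if the server has dropped it:

import pymysql

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", database="mydb")

def run_query(sql, params=None):
    # Reconnect transparently if the server closed the idle connection.
    conn.ping(reconnect=True)
    with conn.cursor() as cursor:
        cursor.execute(sql, params)
    conn.commit()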

Related

go-sql-driver: get invalid connection when wait_timeout is 8h as default

One Sentence
Got a MySQL "invalid connection" error even though MaxOpenConns is ample and wait_timeout is the default 8h.
Detailed
I have a script that reads all records from table A, applies some transformation, and writes the resulting records to table B. The code works this way:
One goroutine scans table A, putting the records into a channel;
Four other goroutines (number configurable) concurrently consume from the above channel, accumulating 50 rows (batch size configurable) to insert into table B, then accumulating another 50 rows, and so on.
The scanner goroutine holds one *sql.DB, and the inserter goroutines share another *sql.DB.
go-sql-driver: either Version 1.4.1 (2018-11-14) or Version 1.5 (2020-01-07)
(the problem was encountered with 1.4.1; the reproducible demo below uses 1.5)
Go version: go1.13.15 darwin/amd64
The invalid connection issue is almost always reproducible.
In a specific run, table A has 67227 records, the channel size is set to 100000, the table A scanner (1 goroutine) reads 1000 rows at a time, and the table B inserters (4 goroutines) write 50 rows at a time. It ends up with 67127 records in table B (2*50 lost), and 2 lines of error output in the console:
[mysql] 2020/12/11 21:54:18 packets.go:36: read tcp x.x.x.x:64062->x.x.x.x:3306: read: operation timed out
[mysql] 2020/12/11 21:54:21 packets.go:36: read tcp x.x.x.x:64070->x.x.x.x:3306: read: operation timed out
(The number of error lines varies when I reproduce; it's usually 1, 2 or 3. N error lines coincide with N*50 records failing to be inserted into table B.)
And my log file shows invalid connection:
2020/12/11 21:54:18 main.go:135: [goroutine 56] BatchExecute: BatchInsertPlace(): SqlDb.ExecContext(): invalid connection
Stats={MaxOpenConnections:0 OpenConnections:4 InUse:3 Idle:1 WaitCount:0 WaitDuration:0s MaxIdleClosed:14 MaxLifetimeClosed:0}
2020/12/11 21:54:21 main.go:135: [goroutine 55] BatchExecute: BatchInsertPlace(): SqlDb.ExecContext(): invalid connection
Stats={MaxOpenConnections:0 OpenConnections:4 InUse:3 Idle:1 WaitCount:0 WaitDuration:0s MaxIdleClosed:14 MaxLifetimeClosed:0}
Trials and observations
By printing each successful/failed write operation with the goroutine id in the log, it appears that the error always happens when any one of the 4 inserting goroutines has an interval of more than ~45 seconds between 2 consecutive writes. I think it simply takes that long to accumulate 50 records before inserting them into table B.
In contrast, when I happened to make a change so that the 4 inserting goroutines write at roughly even rates (i.e. no goroutine has a much longer writing interval than the others), the error is not seen. Repeated 3 times.
It looks like one error only affects one batch write operation, and the following batches work fine. So why not retry the failed batch? I supposed one retry would get it through; still, I don't mind retrying until success:
var retryExecTillSucc = func(goroutineId int, records []*MyDto) {
    err := inserter.BatchInsert(records)
    for { // retry until success. This is a workaround for the 'invalid connection' issue
        if err == nil {
            break
        }
        logger.Printf("[goroutine %v] BatchExecute: %v \nStats=%+v\n", goroutineId, err, inserter.RdsClient.SqlDb.Stats())
        err = inserter.retryBatchInsert(records)
    }
    logger.Printf("[goroutine %v] BatchExecute: Success \nStats=%+v\n", goroutineId, inserter.RdsClient.SqlDb.Stats())
}
Surprisingly, with this change, retries of the failed batch keep getting the error and never succeed...
Summary
It seems obvious that an (idle) connection was broken when the error occurred, but my questions are:
MySQL wait_timeout is set to 8h, so why does the connection time out so quickly?
Since MaxOpenConns is not set, it shouldn't be a limitation, especially considering there are only 4 OpenConnections in the log.
What else should I check as a potential root cause?
(Too long, but just hope to put it clearly and get some advice~)
Update
Minimal, reproducible example, including:
Code
One sample log file
MySQL error log
Do you use Context? I suspect the read timeout is caused by a context timeout, or by the readTimeout DSN parameter.
MySQL doesn't provide a safe and efficient cancellation mechanism. When the context is cancelled or readTimeout is reached, DB.ExecContext returns without terminating the connection it was using. That causes "invalid connection" the next time the connection is used.
If you want to limit execution time of long query, you can use MAX_EXECUTION_TIME hint instead of context.
See https://dev.mysql.com/doc/refman/5.7/en/optimizer-hints.html#optimizer-hints-execution-time for reference.
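A minimal sketch of the hint (the DSN, table name, and the 5-second limit are placeholders, not taken from the demo code). MAX_EXECUTION_TIME is given in milliseconds and only applies to SELECT statements; because the server aborts the statement itself, the connection stays in a known state:

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/mydb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Cap the query at 5 seconds on the server side instead of via a context deadline.
    rows, err := db.Query("SELECT /*+ MAX_EXECUTION_TIME(5000) */ id, name FROM table_a")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    for rows.Next() {
        var id int64
        var name string
        if err := rows.Scan(&id, &name); err != nil {
            log.Fatal(err)
        }
    }
}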

Cause of boto3 error "An error occurred (Unavailable)...Tags could not be retrieved"?

I just started getting this error on a script that hasn't changed. Updating to boto3==1.4.3 had no effect. It doesn't look like throttling, does it? Not sure where to go with this one; any suggestions would be much appreciated.
File "/Library/Python/2.7/site-packages/botocore/client.py", line 251, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Python/2.7/site-packages/botocore/client.py", line 537, in _make_api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (Unavailable)
when calling the DescribeSecurityGroups operation (reached max retries: 4):
Tags could not be retrieved.
Edited to add: As noted in comment below, this is coming from a seemingly innocuous call to describe_security_groups().
Another update: OK, this was transient and is no longer happening. Le sigh. Maybe some timeout due to network issues or something. I'll leave it up on SO for now, in case someone sees this and has a flash of insight.

Go write unix /tmp/mysql.sock: broken pipe when sending a lot of requests

I have a Go API endpoint that makes several MySQL queries. When the endpoint receives a small number of requests, it works just fine. However, I am now testing it using apache bench with 100 requests. The first 100 all went through; the second 100, however, caused this error to appear:
2014/01/15 12:08:03 http: panic serving 127.0.0.1:58602: runtime error: invalid memory address or nil pointer dereference
goroutine 973 [running]:
net/http.func·009()
/usr/local/Cellar/go/1.2/libexec/src/pkg/net/http/server.go:1093 +0xae
runtime.panic(0x402960, 0x9cf419)
/usr/local/Cellar/go/1.2/libexec/src/pkg/runtime/panic.c:248 +0x106
database/sql.(*Rows).Close(0x0, 0xc2107af540, 0x69)
/usr/local/Cellar/go/1.2/libexec/src/pkg/database/sql/sql.go:1576 +0x1e
store.findProductByQuery(0xc2107af540, 0x69, 0x0, 0xb88e80, 0xc21000ac70)
/Users/dennis.suratna/workspace/session-go/src/store/product.go:83 +0xe3
store.FindProductByAppKey(0xc210337748, 0x7, 0x496960, 0x6, 0xc2105eb1b0)
/Users/dennis.suratna/workspace/session-go/src/store/product.go:28 +0x11c
api.SessionHandler(0xb9eff8, 0xc2108ee200, 0xc2108f5750, 0xc2103285a0, 0x0, ...)
/Users/dennis.suratna/workspace/session-go/src/api/session_handler.go:31 +0x2fb
api.func·001(0xb9eff8, 0xc2108ee200, 0xc2108f5750, 0xc2103285a0)
/Users/dennis.suratna/workspace/session-go/src/api/api.go:81 +0x4f
reflect.Value.call(0x3ad9a0, 0xc2101ffdb0, 0x130, 0x48d520, 0x4, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:474 +0xe0b
reflect.Value.Call(0x3ad9a0, 0xc2101ffdb0, 0x130, 0xc2103c4a00, 0x3, ...)
/usr/local/Cellar/go/1.2/libexec/src/pkg/reflect/value.go:345 +0x9d
github.com/codegangsta/inject.(*injector).Invoke(0xc2103379c0, 0x3ad9a0, 0xc2101ffdb0, 0x4311a0, 0x1db94e, ...)
It looks like it's not caused by the number of concurrent requests but, rather, by something that is not properly closed. I am already closing every prepared statement that I create in my code. I am wondering if anyone has ever seen this before.
Edit:
This is how I am initializing my MySQL connection:
func InitStore(environment string) error {
    db, err := sql.Open("mysql", connStr(environment))
    ....
    S = &Store{
        Mysql:       db,
        Environment: environment,
    }
}
This happens only once, when I start the server.
OK, so I was able to solve this problem, and now I can send ~500 requests with 10 concurrency with no more "broken pipe" or "too many connections" errors.
I think it all comes down to following best practices. When you don't expect multiple rows to be returned, use QueryRow instead of Query and chain it with Scan:
db.QueryRow(...).Scan(...)
If you don't expect rows to be returned and you're not going to re-use your statements, use Exec, not Prepare.
If you have a prepared statement or are querying multiple rows, don't forget to Close().
Got all of the above from
https://github.com/go-sql-driver/mysql/issues/111
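A short sketch putting the three points together (the DSN, the products table, and the app_key column are placeholders, not from the original code):

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/mydb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Single row expected: QueryRow chained with Scan, so there is no Rows object to leak.
    var name string
    if err := db.QueryRow("SELECT name FROM products WHERE app_key = ?", "abc").Scan(&name); err != nil {
        log.Fatal(err)
    }

    // No rows expected and no statement reuse: Exec, not Prepare.
    if _, err := db.Exec("UPDATE products SET hits = hits + 1 WHERE app_key = ?", "abc"); err != nil {
        log.Fatal(err)
    }

    // Multiple rows: always close the Rows (and the Stmt, if you prepared one).
    rows, err := db.Query("SELECT name FROM products")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        if err := rows.Scan(&name); err != nil {
            log.Fatal(err)
        }
    }
}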
If you use Go 1.2.x you can use db.SetMaxOpenConns to tell the sql package to not open more than X connections. Queries that need a database connection after X connections are already open (and busy) will block until there's an available connection.
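A hedged sketch of where that call could go in the InitStore function shown above (the limit of 20 is an arbitrary example, not a recommendation):

func InitStore(environment string) error {
    db, err := sql.Open("mysql", connStr(environment))
    if err != nil {
        return err
    }
    // At most 20 connections are ever opened; queries that need a 21st
    // block until one frees up, instead of exhausting MySQL's
    // max_connections under load.
    db.SetMaxOpenConns(20)
    S = &Store{
        Mysql:       db,
        Environment: environment,
    }
    return nil
}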
That being said: what are the next lines of the "stack trace"? Line ~1093 in http/server.go is the recover code that runs when your serve function fails. It looks more like you are mishandling some data, or you are missing an error check and then trying to process data when you were actually returned an error.

python: sqlalchemy - how do I ensure connection not stale using new event system

I am using the sqlalchemy package in python. I have an operation that takes some time to execute after I perform an autoload on an existing table. This causes the following error when I attempt to use the connection:
sqlalchemy.exc.OperationalError: (OperationalError) (2006, 'MySQL server has gone away')
I have a simple utility function that performs an insert many:
def insert_data(data_2_insert, table_name):
    engine = create_engine('mysql://blah:blah123@localhost/dbname')
    # Metadata is a Table catalog.
    metadata = MetaData()
    table = Table(table_name, metadata, autoload=True, autoload_with=engine)
    for c in table.c:
        print(c)
    column_names = tuple(c.name for c in table.c)
    final_data = [dict(zip(column_names, x)) for x in data_2_insert]
    ins = table.insert()
    conn = engine.connect()
    conn.execute(ins, final_data)
    conn.close()
It is the following line that takes a long time to execute, since 'data_2_insert' has 677,161 rows:
final_data = [dict(zip(column_names, x)) for x in data_2_insert]
I came across this question, which refers to a similar problem. However, I am not sure how to implement the connection management suggested by the accepted answer, because robots.jpg pointed this out in a comment:
Note for SQLAlchemy 0.7 - PoolListener is deprecated, but the same solution can be implemented using the new event system.
If someone can please show me a couple of pointers on how I could go about integrating the suggestions into the way I use sqlalchemy I would be very appreciative. Thank you.
I think you are looking for something like this:
from sqlalchemy import exc, event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def check_connection(dbapi_con, con_record, con_proxy):
    '''Listener for Pool checkout events that pings every connection before using it.
    Implements the pessimistic disconnect-handling strategy. See also:
    http://docs.sqlalchemy.org/en/rel_0_8/core/pooling.html#disconnect-handling-pessimistic'''
    cursor = dbapi_con.cursor()
    try:
        cursor.execute("SELECT 1")  # could also be dbapi_con.ping(),
                                    # not sure which is better
    except exc.OperationalError as ex:
        if ex.args[0] in (2006,   # MySQL server has gone away
                          2013,   # Lost connection to MySQL server during query
                          2055):  # Lost connection to MySQL server at '%s', system error: %d
            # caught by pool, which will retry with a new connection
            raise exc.DisconnectionError()
        else:
            raise
If you wish to trigger this strategy conditionally, you should avoid using the decorator and instead register the listener with the listen() function:
# somewhere during app initialization
if config.check_connection_on_checkout:
    event.listen(Pool, "checkout", check_connection)
More info:
Connection Pool Events
Events API
There is a better way to handle it right now - pool_recycle
engine = create_engine('mysql://...', pool_recycle=3600)
MySQL has a default timeout of 8 hours.
This causes the connection to be closed by MySQL without the engine above it (such as SQLAlchemy) knowing about it.
There are 2 ways to solve it:
Optimistic - using pool_recycle
Pessimistic - using pool_pre_ping=True
I prefer to go with pool_recycle, as it doesn't emit a SELECT 1 on every connection checkout, causing less stress on the DB.
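For comparison, the pessimistic variant is also a one-line change (the DSN here is a placeholder); pool_pre_ping issues a lightweight ping whenever a connection is checked out of the pool and transparently replaces it if the ping fails:

from sqlalchemy import create_engine

# Pessimistic disconnect handling: each connection is tested when it is
# checked out of the pool and replaced transparently if it has gone stale.
engine = create_engine('mysql://user:secret@localhost/dbname', pool_pre_ping=True)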

Django connection pooling and time fields

Has anyone got connection pooling working with Django, SQLAlchemy, and MySQL?
I used this tutorial (http://node.to/wordpress/2008/09/30/another-database-connection-pool-solution-for-django-mysql/), which worked great, but the issue I'm having is that whenever I bring back a time field it is converted to a timedelta, since the Django-specific conversions are not being used.
Conversion code from django/db/backends/mysql/base.py
django_conversions = conversions.copy()
django_conversions.update({
    FIELD_TYPE.TIME: util.typecast_time,
    FIELD_TYPE.DECIMAL: util.typecast_decimal,
    FIELD_TYPE.NEWDECIMAL: util.typecast_decimal,
})
Connection code from article:
if settings.DATABASE_HOST.startswith('/'):
    self.connection = Database.connect(port=kwargs['port'],
                                       unix_socket=kwargs['unix_socket'],
                                       user=kwargs['user'],
                                       db=kwargs['db'],
                                       passwd=kwargs['passwd'],
                                       use_unicode=kwargs['use_unicode'],
                                       charset='utf8')
else:
    self.connection = Database.connect(host=kwargs['host'],
                                       port=kwargs['port'],
                                       user=kwargs['user'],
                                       db=kwargs['db'],
                                       passwd=kwargs['passwd'],
                                       use_unicode=kwargs['use_unicode'],
                                       charset='utf8')
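One hedged sketch of a fix for the time-field issue (assuming django_conversions can be imported from django/db/backends/mysql/base.py in this pooled backend): pass Django's conversion table to MySQLdb through the conv argument, so TIME and DECIMAL columns are typecast the same way Django's own backend typecasts them:

from django.db.backends.mysql.base import django_conversions

self.connection = Database.connect(host=kwargs['host'],
                                   port=kwargs['port'],
                                   user=kwargs['user'],
                                   db=kwargs['db'],
                                   passwd=kwargs['passwd'],
                                   use_unicode=kwargs['use_unicode'],
                                   conv=django_conversions,  # restore Django's TIME/DECIMAL typecasting
                                   charset='utf8')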
In Django trunk, edit django/db/__init__.py and comment out the line:
signals.request_finished.connect(close_connection)
This signal handler causes it to disconnect from the database after every request. I don't know what all of the side-effects of doing this will be, but it doesn't make any sense to start a new connection after every request; it destroys performance, as you've noticed.
Another necessary change is in django/middleware/transaction.py; remove the two transaction.is_dirty() tests and always call commit() or rollback(). Otherwise, it won't commit a transaction if it only read from the database, which will leave locks open that should be closed.