work-around for (2006, 'MySQL server has gone away')? - mysql

This question is my exact issue
Django - OperationalError: (2006, 'MySQL server has gone away')
An apparent workaround to this otherwise unresolved problem is to increase the wait_timeout for the execution.
Background
I have a celery task which runs at a specific time once a day. Initially it was working fine, but since last week I have started getting:
Exception_ocoured_: (2013, 'Lost connection to MySQL server during
query')
This celery task simply fetches some details from the DB (at most 4000 rows) and mails them to the end user.
Question:
Is there any way to increase this timeout only for the specific celery task that is facing this issue, in the Django environment? I don't want to disturb the native setup.
I am looking for a Django-ish solution whose lifetime is only as long as this celery task executes.
For example:
@task
def doSomething():
    try:
        set_timeout_for_mysql = 20000  # <== main agenda for this question
        # OR
        ping_resp = somehow_test_mysql_con()
        while not ping_resp:
            # keep trying to connect, or create a new connection
            ping_resp = somehow_test_mysql_con()
        # do_operations
    except Exception, e:
        pass  # log exception
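To make the ping-and-reconnect intent concrete, here is what somehow_test_mysql_con might look like with Django's connection API (ensure_connection() and is_usable() exist since Django 1.6); the retry count and delay are illustrative, not tested code:

import time

from django.db import connection


def somehow_test_mysql_con(retries=3, delay=1):
    """Return True once the default DB connection answers a ping."""
    for _ in range(retries):
        try:
            connection.ensure_connection()  # opens a connection if there is none
            if connection.is_usable():      # issues a ping on the MySQL backend
                return True
        except Exception:
            pass
        connection.close()  # drop the dead handle before retrying
        time.sleep(delay)
    return False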
Spec:
In [18]: django.VERSION
Out[18]: (1, 7, 7, 'final', 0)
and
django-celery==3.0.21
PS:
Any other workaround will do if someone has resolved this without disturbing the core setup!!!

from django.db import close_old_connections
...
close_old_connections()
... # do some db jobs; the ORM will reconnect to the DB as needed
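For the celery task in the question, usage might look like the sketch below; SomeModel, myapp, and send_report are placeholders for the asker's actual model and mail step:

from celery import task
from django.db import close_old_connections

from myapp.models import SomeModel  # placeholder model


@task()
def do_something():
    # Discard connections the server has already dropped;
    # the ORM transparently opens a fresh one on the next query.
    close_old_connections()
    rows = SomeModel.objects.all()[:4000]
    send_report(rows)  # placeholder for the mail step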
Good Luck

You can set the wait_timeout for each Session (Connection)
set wait_timeout=10000;
SHOW VARIABLES LIKE 'wait_timeout';
But are you sure the error comes from wait_timeout? Another variable to check is max_allowed_packet; you can increase it to see whether that is the problem.
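If you only want the larger timeout for the one celery task from the question, you could set it per session on Django's own connection at the start of the task. A minimal sketch, assuming Django >= 1.7 (where cursors are context managers); the helper name and the 20000-second value are illustrative:

from django.db import connection


def set_session_wait_timeout(seconds):
    # SET SESSION affects only this connection, not the server-wide default.
    with connection.cursor() as cursor:
        cursor.execute("SET SESSION wait_timeout = %s", [seconds])

Calling set_session_wait_timeout(20000) as the first line of the task would then leave the server default untouched.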

Related

create_engine problems mysql 5.7 and sqlalchemy

I've inherited an application making use of python & sqlalchemy to interact with a mysql database. When I issue:
mysql_engine = sqlalchemy.create_engine('mysql://uname:pwd@192.168.xx.xx:3306/testdb', connect_args={'use_unicode':True,'charset':'utf8', 'init_command':'SET NAMES UTF8'}, poolclass=NullPool)
at startup, an exception is thrown:
cmd = unicode("USE testdb")
with mysql_engine.begin() as conn:
    conn.execute(cmd)
sqlalchemy.exc.OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '192.168.xx.xx' (101)") None None
However, using IDLE I can do:
>>> import MySQLdb
>>> Con = MySQLdb.Connect(host="192.168.xx.xx", port=3306, user="uname", passwd="pwd", db="testdb")
>>> Cursor = Con.cursor()
>>> sql = "USE testdb"
>>> Cursor.execute(sql)
The application at this point defaults to using an onboard sqlite database. After this I can quite happily switch to the MySQL database using the create_engine statement above. However, on reboot the MySQL database connection will fail again, defaulting to the onboard sqlite db, etc, etc.
Has anyone got any suggestions as to how this could be happening?
Just thought I would update this - the problem still occurs exactly as described above. I've updated the app so that the user can manually connect to the MySQL db by selecting a menu option. This calls the identical code which throws an exception when the app is starting, but works just fine once the app is up and running.
The MySQL instance is completely separate from the app and running throughout, so it should be available to receive connections at all times.
I guess the fundamental question I'm grappling with is: how can the same connect code work when the app is up and running, but throw an exception when it is starting?
Is there any artifact of SQLAlchemy that can cause it to fail to create usable connections that isn't dependent on the connection parameters or the remote database?
Ahhh, it all seems so obvious now...
The reason for the exception on startup was because the network interface hadn't finished configuring when the application would make its first request to the remote database. (Which is why the same thing would be successful when attempted at a later time).
As communication with the remote database is a prerequisite for the application, I now do something like this:
if grep -Fxq "mysql" /path/to/my/db/config.config
then
    while ! ip a | grep inet.*wlan0 ; do sleep 1; echo "waiting for network..."; done;
fi
... in the startup script for my application - ensuring that the network interface has finished configuring before the application can run.
Of course, the application will never run if the interface doesn't configure, so it still needs some finessing to allow it to timeout and default to using a local database...

MySQL 5.5 : "Got an error reading communication packets"

I just upgraded MySQL from 5.1 to 5.5.
I fixed a few issues by running mysql_upgrade, and by changing some deprecated configurations...
I also updated PHP, from 5.3.3-7 to 5.3.29-1.
But since then, I'm having a recurrent problem (always thrown in this order):
1. Client* - PHP Warning
Warning: Packets out of order. Expected 1 received 0. Packet size=1 in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
2. Client* - PHP Warning
Warning: PDOStatement::execute() [pdostatement.execute]: Error reading
result set's header in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
3. Server* - MySQL Warning :
150127 17:25:15 [Warning] Aborted connection 309 to db:
'my_database' user: 'root' host: '127.0.0.1' (Got an error
reading communication packets)
4. Client* - PHP Error
PDOStatement::execute() [pdostatement.execute]: MySQL server
has gone away in
/home/www/www.mywebsite.com/shared/vendor/doctrine/dbal/lib/Doctrine/DBAL/Connection.php
line 694
*NB: What I call "Client" is the PHP Application, and "Server" is the MySQL Server, even if they're both on the same localhost Server.
So, apparently, the origin of all those problems is the first one : "Packets out of order".
But when I search for this error I can't find many answers, and most of the time they are not related to my problem: I use Doctrine as an abstraction, so I don't write any query or fetch any result myself. Plus, the reported values almost never match mine, whereas in my case I always get the same ones ("Expected 1 received 0. Packet size=1").
The closest result would be this MySQL bug report, but "No feedback was provided for this bug for over a month, so it is
being suspended automatically"...
Plus, some of the "2." errors aren't thrown by my PHP Doctrine code (they're not executed from my localhost, but from another known external service, probably using some old PHP Propel code).
So that might mean there is a problem with my MySQL configuration itself, but I tried changing some parameters without obtaining any obvious effect (sometimes it takes more time after restarting MySQL to get the first errors for example).
Any help would be very much appreciated!
And here is my current configuration (I've got 2 MySQL instances; the second one, using replication, is mostly read-only).
I also checked most of the system resources with Munin and didn't see anything abnormal (RAM usage, for example, is pretty high, but as the server has 50 GB it's not full at all).
UPDATE
I isolated an SQL query that was repeatedly failing from my PHP client. When I executed it from my local machine with MySQL Workbench, it did exactly the same thing (closed the connection with a MySQL server has gone away message). When I ran it from the mysql command line it also did the same. Then I executed it from the mysql command line on the server host, and it succeeded. But some time later, when I tried again from Workbench/whatever, it worked... So it looks like those "corrupted packets" are cached and disappear after some time.
Thanks, I fixed this issue by running:
RESET QUERY CACHE;
FLUSH QUERY CACHE;

How to re-try MySQL connection if the database is temporarily unavailable

import MySQLdb
MySQLdb.connect sometimes fails with an "unable to connect" error. Is there a way to set a timeout and retry repeatedly without writing application-specific code?
Abstract your connection code into a method you can call and have it return the connection. Inside that method, try to connect to the database up to 3 or 5 times, with a time.sleep(.01) (a hundredth of a second) between attempts. If you can sleep less than that, even better.
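A minimal sketch of that advice; the retry count, delay, and helper name are mine, not a standard API:

import time

import MySQLdb


def get_connection(retries=5, delay=0.01, **connect_kwargs):
    """Try connecting up to `retries` times before giving up."""
    last_error = None
    for _ in range(retries):
        try:
            return MySQLdb.connect(**connect_kwargs)
        except MySQLdb.OperationalError as e:
            last_error = e
            time.sleep(delay)  # brief pause before the next attempt
    raise last_error

# Usage:
# con = get_connection(host="192.168.xx.xx", port=3306, user="uname", passwd="pwd", db="testdb")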

Django - OperationalError: (2006, 'MySQL server has gone away')

Bottom line first: How do you refresh the MySQL connection in django?
Following a MySQL server has gone away error I found that MySQL documentation and other sources (here) suggest increasing the wait_timeout MySQL parameter. To me this seems like a workaround rather than a solution. I'd rather keep a reasonable wait_timeout and refresh the connection in the code.
The error:
File "C:\my_proj\db_conduit.py", line 147, in load_some_model
SomeModel.objects.update()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\manager.py", line 177, in update
return self.get_query_set().update(*args, **kwargs)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\query.py", line 469, in update
transaction.commit(using=self.db)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\transaction.py", line 142, in commit
connection.commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 201, in commit
self._commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 46, in _commit
return self.connection.commit()
OperationalError: (2006, 'MySQL server has gone away')
Setup: Django 1.3.0 , MySQL 5.5.14 , innodb 1.1.8 , Python 2.6.6, Win7 64bit
The idea of the solution is clear: reconnect to mysql if the current connection is broken.
Please check this out:
def make_sure_mysql_usable():
    from django.db import connection, connections
    # mysql is lazily connected to in django.
    # connection.connection is None means
    # you have not connected to mysql before
    if connection.connection and not connection.is_usable():
        # destroy the default mysql connection
        # after this line, when you use ORM methods
        # django will reconnect to the default mysql
        del connections._connections.default
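A hypothetical usage sketch, calling it before ORM work in a long-running job; process_batch is a placeholder for your own queries:

import time


def process_batch():
    pass  # placeholder for your ORM work


def worker_loop():
    while True:
        make_sure_mysql_usable()  # defined above: drops a dead default connection
        process_batch()
        time.sleep(60)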
I'm having the same issue. I need an idea of how to check the connection state for a MySQLdb connection in Django. I guess it can be achieved by:
try:
    cursor.execute(sql)
except OperationalError:
    reconnect()  # pseudocode: re-establish the connection
Does anybody have a better idea?
UPDATE
My decision:
self.connection.stat()
if self.connection.errno() != 0:
    # the connection is in an error state: recreate it (see the snippet below)
    pass
UPDATE AGAIN
You also need to handle the case where the connection is closed:
if self.connection.open:
    self.connection.stat()
Refreshing the connection is just recreating it:
db_settings = settings.DATABASES['mysql_db']
try:
    self.connection = MySQLdb.connect(host=db_settings['HOST'], port=int(db_settings['PORT']), db=db_settings['NAME'], user=db_settings['USER'], passwd=db_settings['PASSWORD'])
except MySQLdb.OperationalError, e:
    self.connection = None
Since Django 1.6, you can use
import django.db
django.db.close_old_connections()
This does basically the same thing as adamsmith's answer except that it handles multiple databases and also honors the CONN_MAX_AGE setting. Django calls close_old_connections() automatically before and after each request, so you normally don't have to worry about it unless you have some long-running code outside of the normal request/response cycle.
The main reason for this exception is usually that the client has been idle for longer than wait_timeout on the MySQL server.
To prevent that kind of error, Django supports an option named CONN_MAX_AGE, which allows Django to close and recreate connections that have been open longer than the given age.
So you should make sure that the CONN_MAX_AGE value is smaller than the wait_timeout value.
One important thing: Django under WSGI checks CONN_MAX_AGE on every request by calling close_old_connections, so you normally don't need to care about it. However, if you are using Django in a standalone application, nothing triggers that function, so you have to call it manually. So call close_old_connections in your code base.
Note: close_old_connections keeps old connections if they haven't expired yet, so your connections are still reused under high query frequency.
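For reference, CONN_MAX_AGE is set per database alias in settings.py. A sketch with illustrative values only; keep the age below the server's wait_timeout:

# settings.py (illustrative values only)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',        # placeholder
        'USER': 'myuser',      # placeholder
        'PASSWORD': 'secret',  # placeholder
        'HOST': '127.0.0.1',
        'PORT': '3306',
        'CONN_MAX_AGE': 600,   # seconds; keep well below MySQL's wait_timeout
    }
}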
This can also close idle connections and keep things working.
So before making a query after a long idle period, running the lines below will work:
from django.db import close_old_connections
# To prevent the error if possible.
close_old_connections()
# Then the following statement should always be OK.
YourModel.objects.all()

"MySQL server has gone away" with Ruby on Rails

After our Ruby on Rails application has run for a while, it starts throwing 500s with "MySQL server has gone away". Often this happens overnight. It's started doing this recently, with no obvious change in our server configuration.
Mysql::Error: MySQL server has gone away: SELECT * FROM `widgets`
Restarting the mongrels (not the MySQL server) fixes it.
How can we fix this?
Ruby on Rails 2.3 has a reconnect option for your database connection:
production:
  # Your settings
  reconnect: true
See:
Ruby on Rails 2.3 Release Notes, sub section 4.8 Reconnecting MySQL Connections.
MySQL auto-reconnect revisited
Good luck!
This is probably caused by the persistent connections to MySQL going away (a timeout is likely if it happens overnight) and Ruby on Rails failing to restore the connection, which it should be doing by default:
In the file vendor/rails/actionpack/lib/action_controller/dispatcher.rb is the code:
if defined?(ActiveRecord)
  before_dispatch { ActiveRecord::Base.verify_active_connections! }
  to_prepare(:activerecord_instantiate_observers) { ActiveRecord::Base.instantiate_observers }
end
The method verify_active_connections! performs several actions, one of which is to recreate any expired connections.
The most likely cause of this error is that a monkey patch has redefined the dispatcher to not call verify_active_connections!, or that verify_active_connections! itself has been changed, etc.
Try ActiveRecord::Base.connection.verify! in Ruby on Rails 4. Verify pings the server and reconnects if it is not connected.
I had this problem when sending really large statements to MySQL. MySQL limits the size of statements and will close the connection if you go over the limit.
set global max_allowed_packet = 1048576; # 2^20 bytes (1 MB) was enough in my case
As the other contributors to this thread have said, it is most likely that MySQL server has closed the connection to your Ruby on Rails application because of inactivity. The default timeout is 28800 seconds, or 8 hours.
set-variable = wait_timeout=86400
Adding this line to your /etc/my.cnf will raise the timeout to 24 hours
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_wait_timeout.
Although the documentation doesn't indicate it, a value of 0 may disable the timeout completely, but you would need to experiment as this is just speculation.
There are, however, three other situations that I know of that can generate that error. The first is the MySQL server being restarted. This will obviously drop all the connections, but as the MySQL client is passive, it won't be noticed until you issue the next query.
The second is someone killing your query from the MySQL command line; this also drops the connection, because it could leave the client in an undefined state.
The last is if your MySQL server restarts itself due to a fatal internal error. That is, if you are doing a simple query against a table and instantly see 'MySQL has gone away', I'd take a close look at your server's logs to check for hardware error, or database corruption.
First, determine the max_connections in MySQL:
show variables like "max_connections";
You need to make sure that the number of connections you're making in your Ruby on Rails application is less than the maximum allowed number of connections. Note that extra connections can be coming from your cron jobs, delayed_job processes (each would have the same pool size in your database.yml), etc.
Monitor the SQL connections as you go through your application, run processes, etc. by doing the following in MySQL:
show status where variable_name = 'Threads_connected';
You might want to consider closing connections after a Thread finishes execution, as database connections do not get closed automatically (I think this is less of an issue with Ruby on Rails 4 applications thanks to the connection Reaper):
Thread.new do
  begin
    # Thread work here
  ensure
    begin
      if (ActiveRecord::Base.connection && ActiveRecord::Base.connection.active?)
        ActiveRecord::Base.connection.close
      end
    rescue
    end
  end
end
The connection to the MySQL server is probably timing out.
You should be able to increase the timeout in MySQL, but for a proper fix, have your code check that the database connection is still alive, and re-connect if it's not.
Using reconnect: true in the database.yml will cause the database connection to be re-established AFTER the ActiveRecord::StatementInvalid error is raised (As Dave Cheney mentioned).
Unfortunately adding a retry on the database operation seemed necessary to guard against the connection timeout:
begin
  do_some_active_record_operation
rescue ActiveRecord::StatementInvalid => e
  Rails.logger.debug("Got statement invalid #{e.message} ... trying again")
  # Second attempt, now that db connection is re-established
  do_some_active_record_operation
end
Do you monitor the number of open MySQL connections or threads? What is your mysql.ini setting for max_connections?
mysql> show status;
Look at Connections, Max_used_connections, Threads_connected, and Threads_created.
You may need to increase the limits in your MySQL configuration, or perhaps rails is not closing the connection properly*.
Note: I've only used Ruby on Rails briefly...
The MySQL documentation for server status is in http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html.
Something else to check is Unicorn config is correct. See before_fork and after_fork handling of ActiveRecord connection here: https://gist.github.com/nebiros/2776085#file-unicorn-rb
I had this problem in a Ruby on Rails 3 application, using the mysql2 gem. I copied out the offending query and tried running it in MySQL directly, and I got the same error, "MySQL server has gone away".
The query in question was very, very large: an insert of over 1 MB. The field I was trying to insert into was a TEXT column, whose max size is 64 KB. Rather than throwing an error, the connection went away.
I increased the size of the field and got the same thing, so I'm still not sure what the exact issue was. The point is that it was something in the database triggered by a strange query. Anyway!
While forking in Rails.
For anyone running into this while forking in Rails, try clearing the existing connections before forking and then establish a new connection for each fork, like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!

fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection

  # The rest of the code for each fork...
end
See this StackOverflow answer here: https://stackoverflow.com/a/8915353/293280