create_engine problems mysql 5.7 and sqlalchemy - mysql

I've inherited an application that uses Python and SQLAlchemy to interact with a MySQL database. At startup it issues:
mysql_engine = sqlalchemy.create_engine('mysql://uname:pwd@192.168.xx.xx:3306/testdb', connect_args={'use_unicode': True, 'charset': 'utf8', 'init_command': 'SET NAMES UTF8'}, poolclass=NullPool)
cmd = unicode("USE testdb")
with mysql_engine.begin() as conn:
    conn.execute(cmd)
and an exception is thrown:
sqlalchemy.exc.OperationalError: (OperationalError) (2003, "Can't connect to MySQL server on '192.168.xx.xx' (101)") None None
However, using IDLE I can do:
>>> import MySQLdb
>>> Con = MySQLdb.Connect(host="192.168.xx.xx", port=3306, user="uname", passwd="pwd", db="testdb")
>>> Cursor = Con.cursor()
>>> sql = "USE testdb"
>>> Cursor.execute(sql)
The application at this point defaults to using an onboard sqlite database. After this I can quite happily switch to the MySQL database using the create_engine statement above. However, on reboot the MySQL database connection will fail again, defaulting to the onboard sqlite db, etc, etc.
Has anyone got any suggestions as to how this could be happening?
Just thought I would update this - the problem still occurs exactly as described above. I've updated the app so that the user can manually connect to the MySQL db by selecting a menu option. This calls the identical code that throws the exception when the app is starting, but it works just fine once the app is up and running.
The MySQL instance is completely separate from the app and running throughout, so it should be available to receive connections at all times.
I guess the fundamental question I'm grappling with is: how can the same connection code work when the app is up and running, but throw an exception when it is starting?
Is there any artifact of SQLAlchemy that can cause it to fail to create usable connections, one that isn't dependent on the connection parameters or the remote database?

Ahhh, it all seems so obvious now...
The reason for the exception on startup was because the network interface hadn't finished configuring when the application would make its first request to the remote database. (Which is why the same thing would be successful when attempted at a later time).
As communication with the remote database is a prerequisite for the application, I now do something like this:
if grep -Fxq "mysql" /path/to/my/db/config.config
then
while ! ip a | grep inet.*wlan0 ; do sleep 1; echo "waiting for network..."; done;
fi
... in the startup script for my application - ensuring that the network interface has finished configuring before the application can run.
Of course, the application will never run if the interface doesn't configure, so it still needs some finessing to allow it to timeout and default to using a local database...
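For anyone who wants to handle this on the Python side instead, here is a minimal sketch of the same idea: retry the MySQL connection for a bounded time and fall back to the local SQLite database if the network never comes up. The URLs, timeout, and fallback path are illustrative, not taken from the original application.
import time

import sqlalchemy
from sqlalchemy import exc
from sqlalchemy.pool import NullPool

MYSQL_URL = 'mysql://uname:pwd@192.168.xx.xx:3306/testdb'  # placeholder credentials/host
SQLITE_URL = 'sqlite:///local_fallback.db'                 # illustrative local fallback

def make_engine(timeout=60, interval=1):
    """Try MySQL until `timeout` seconds have passed, then fall back to SQLite."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        engine = sqlalchemy.create_engine(MYSQL_URL, poolclass=NullPool)
        try:
            # connect() forces a real network round trip; create_engine() alone does not
            engine.connect().close()
            return engine
        except exc.OperationalError:
            time.sleep(interval)  # the interface may still be configuring
    return sqlalchemy.create_engine(SQLITE_URL)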

Related

Flask-SQLAlchemy error MySQL server has gone away

I'm a newbie running a Flask app connected to a remote MySQL server with Flask-SQLAlchemy.
The app has very little traffic and it usually stays idle for more than 8 hours; I then get disconnected from the MySQL server.
This is my app code:
from flask import Flask, render_template, request, redirect, jsonify, make_response
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import or_
app = Flask(__name__)
app.config['DEBUG'] = True
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True, 'pool_recycle': 300, 'echo':'debug',}
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://user:pass@myipcode/db'
db = SQLAlchemy(app)
Everything works OK until no queries are performed for 8 hours; then I lose the db connection and the logs show this error:
"MySQL server has gone away (%r)" % (e,)) sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))
I did some research and was advised to set 'SQLALCHEMY_ENGINE_OPTIONS' as written in the code sample, but it behaves the same with or without those engine options: the connection is not recycled every 300 seconds as it should be, and pool_pre_ping doesn't seem to make any difference. The 'echo': 'debug' option works as intended, since I get every transaction logged.
What should I do to prevent the connection from being dropped, even after a long period of inactivity?
EDIT:
To add some additional info:
The database is hosted in Cloud SQL from GCP .
I'm lost...
Any help would be greatly appreciated.
Finally I figured it out myself.
It had to do with the fact that my app was running on a mountpoint like http://ServerIP/app instead of http://ServerIP/ because it was initially intended as a staging server.
I was using uWSGI and, in order to make it work on the aforementioned path, I had to specify a mount parameter in the [uwsgi] block of the app.ini file.
When the uWSGI server started, it looked like it was mounting two apps, one at '' and the other at '/app', and I guess that created a conflict that made the app unable to manage connections to the MySQL server.
Mounting the app in http://ServerIP/ worked like a charm.
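As a side note for anyone ruling things out: the pooling options from the question can also be passed straight to SQLAlchemy, which helps confirm whether the problem is in the Flask config plumbing or elsewhere. A minimal sketch with the placeholder URI from the question:
from sqlalchemy import create_engine

# pool_pre_ping issues a lightweight check before each connection checkout;
# pool_recycle closes connections older than the given number of seconds
engine = create_engine(
    'mysql+pymysql://user:pass@myipcode/db',  # placeholder URI
    pool_pre_ping=True,
    pool_recycle=300,
)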

SQLAlchemy AppEngine standard - Lost connection to MySQL server

I'm trying to connect to a Google Cloud SQL second generation instance in Python from App Engine standard (Python 2.7).
Until now I was using the MySQLdb driver directly and it was fine.
I've tried to switch to SQLAlchemy, but now I always get this error when the code is deployed (it seems to work fine locally), resulting in an error 500 (it's not just some connections being lost; it constantly fails):
OperationalError: (_mysql_exceptions.OperationalError) (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38") (Background on this error at: http://sqlalche.me/e/e3q8)
I don't understand because the setup doesn't differ from before, so it must be related to the way I use SQLAlchemy.
I use something like this :
create_engine("mysql+mysqldb://appuser:password#x.x.x.x/db_name?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName")
I've tried different values (with and without the IP, ...), but it is still the same. Is it a version compatibility problem?
I use MySQL-python in the app.yaml and SQLAlchemy 1.2.4:
app.yaml:
- name: MySQLdb
version: "latest"
requirements.txt :
SQLAlchemy==1.2.4
It was a problem in the URL. In a specific part of the code I was adding "/dbname" at the end of the connection string, resulting in something like this:
mysql+mysqldb://appuser:password@/db_name?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName/dbname
So in the end, the meaning of this error can also be that the unix socket is wrong.
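For comparison, a sketch of the working form of that URL without the stray suffix (the project, instance, and credentials are the placeholders from the question):
from sqlalchemy import create_engine

# the database name goes before the '?', and nothing is appended after the instance connection name
engine = create_engine(
    "mysql+mysqldb://appuser:password@/db_name"
    "?unix_socket=/cloudsql/gcpProject:europe-west1:instanceName"
)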
There are a number of causes for connection loss to a Google Cloud SQL server, but quite rightly, you have to ensure that your setup is appropriate first. I don't think this issue is about version compatibility.
According to the documentation, for your application to be able to connect to your Cloud SQL instance when the app is deployed, you need to add the user, password, database, and instance connection name variables from Cloud SQL to the related environment variables in the app.yaml file (your displayed app.yaml does not seem to contain these environment variables).
I recommend you review the linked documentation for details on how to set up your Cloud SQL instance and connect to it.
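A rough sketch of the kind of env_variables block the documentation describes; the variable names here are illustrative choices, not names App Engine requires, and the values are the placeholders from the question:
env_variables:
  CLOUDSQL_CONNECTION_NAME: gcpProject:europe-west1:instanceName
  CLOUDSQL_USER: appuser
  CLOUDSQL_PASSWORD: password
  CLOUDSQL_DATABASE: db_name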

Web2py scheduler stopping the app from running

I can't get the scheduler to run on my Ubuntu or Debian Linux machines. Originally I had db = DAL('sqlite://storage.sqlite'), which causes a weird situation: the website becomes unavailable (the browser cannot establish a connection to the server at 127.0.0.1), yet when I run the same app without the scheduler (without -K appname) and check the app database, it shows that the scheduler task has been running successfully. What is causing the connection to the server to break?
Second, I tried using pymysql or mysqldb instead of sqlite, but I get the error "Error in URI 'pymysql' or database not supported", even without the -K myapp option.
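For reference, a sketch of the connection strings web2py's DAL normally expects (the names and host here are placeholders); this may help if the "Error in URI" message comes from an unrecognised URI scheme or a missing driver:
# inside a web2py model file, DAL is already in scope; shown here with an explicit import
from gluon import DAL

# SQLite, as in the original setup
db = DAL('sqlite://storage.sqlite')

# MySQL: the URI scheme is 'mysql'; the driver (MySQLdb or pymysql) is picked from what is installed
db = DAL('mysql://username:password@127.0.0.1/mydb')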

Lost connection to MySQL server at 'waiting for initial communication packet'

I have seen many Stack Overflow questions and some blogs, and tried the workarounds, but nothing helped - hence re-posting the question with more details.
I am seeing weird behaviour with a MySQL and Python application; the details are as follows:
1) My application works perfectly fine with MySQL (tried and tested on many platforms), but on this particular machine it fails to connect to MySQL.
The structure of the application is:
Windows service -> parent process -> MySQL (child process)
When the application tries to connect to MySQL it gets this error:
ERROR 2013 , Lost connection to MySQL server at 'waiting for initial communication packet' - system error 0
I tried:
- connect_timeout=300
- skip-name-resolve=0
- firewall is OFF
- using 127.0.0.1, localhost, and the machine's IP to connect, but it still fails with the same error.
2) Now the weird thing is:
If I manually follow all the steps the application does, it works perfectly fine. The details are as follows:
a) Start MySQL with the same command the application uses, with administrator privileges:
mysql --defaults-file=xxx --basedir=xxx
b) Connect with the same credentials (-u root -P 6075 -h 127.0.0.1), and it works perfectly fine.
I double-checked all the steps the application performs; there is no difference between the manual steps and the application code.
Am I missing something here? Any suggestions?
MySQL version : 5.5.35
Python : 2.7
Base OS : Windows 2012 R2
Thanks in advance..
Found the reason - answering my own question:
When I ran MySQL from my application, it was running under the system user's privileges, so it picked "C:\WINDOWS\TEMP" as its temp directory. This directory was messed up - it had a lot of unnecessary files, and MySQL got stuck while processing the files under it.
But when I ran it manually under my administrator account, it used that account's temp directory, C:\Users\USER_NAME\AppData\Local\Temp, and everything worked like magic.
To fix this permanently I changed the tmp directory through the MySQL conf file, and now my application runs like the wind. :)
[mysqld]
tmpdir = 'PATH_TO_THE_DIRECTORY'
I was getting this same error trying to set up a SQL Server Linked Server
Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "DBLINKED".
OLE DB provider "MSDASQL" for linked server "DBLINKED" returned message "[MySQL][ODBC 8.0(w) Driver]Lost connection to MySQL server at 'waiting for initial communication packet', system error: 10060". (Microsoft SQL Server, Error: 7303)
You mentioned it in your initial question - the Connection Timeout was the issue for me.
The default is 0 - I raised it to 300. I thought a default of 0 would mean no timeout, but it's obviously something reasonably short, and I was trying to connect to a remote database over a slow internet connection. A lot of other questions and answers out there relate to connecting within the same machine, so this error isn't reported much.

Django - OperationalError: (2006, 'MySQL server has gone away')

Bottom line first: How do you refresh the MySQL connection in django?
Following a MySQL server has gone away error I found that MySQL documentation and other sources (here) suggest increasing the wait_timeout MySQL parameter. To me this seems like a workaround rather than a solution. I'd rather keep a reasonable wait_timeout and refresh the connection in the code.
The error:
File "C:\my_proj\db_conduit.py", line 147, in load_some_model
SomeModel.objects.update()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\manager.py", line 177, in update
return self.get_query_set().update(*args, **kwargs)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\models\query.py", line 469, in update
transaction.commit(using=self.db)
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\transaction.py", line 142, in commit
connection.commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 201, in commit
self._commit()
File "C:\Python26\lib\site-packages\django-1.3-py2.6.egg\django\db\backends\__init__.py", line 46, in _commit
return self.connection.commit()
OperationalError: (2006, 'MySQL server has gone away')
Setup: Django 1.3.0 , MySQL 5.5.14 , innodb 1.1.8 , Python 2.6.6, Win7 64bit
The idea of the solution is clear: reconnect to mysql if the current connection is broken.
Please check this out:
def make_sure_mysql_usable():
    from django.db import connection, connections
    # mysql is lazily connected to in django.
    # connection.connection is None means
    # you have not connected to mysql before
    if connection.connection and not connection.is_usable():
        # destroy the default mysql connection
        # after this line, when you use ORM methods
        # django will reconnect to the default mysql
        del connections._connections.default
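A sketch of how this might be used from long-running code outside the request cycle (SomeModel is the placeholder model name from the traceback above):
# call it right before a batch of ORM work in a long-running process
make_sure_mysql_usable()
SomeModel.objects.update()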
I'm having the same issue.
I need an idea of how to check the connection state of a MySQLdb connection in Django.
I guess it can be achieved by:
try:
    cursor.execute(sql)
except OperationalError:
    # reconnect here
Does anybody have a better idea?
UPDATE
My decision:
# check the state of the MySQLdb connection; if there is an error, recreate the connection
self.connection.stat()
if self.connection.errno() != 0:
    # recreate the connection here (see the snippet below)
    pass
UPDATE AGAIN
You also need to handle the case where the connection is closed:
if self.connection.open:
    self.connection.stat()
Refreshing the connection is just recreating it:
db_settings = settings.DATABASES['mysql_db']
try:
    self.connection = MySQLdb.connect(host=db_settings['HOST'], port=int(db_settings['PORT']), db=db_settings['NAME'], user=db_settings['USER'], passwd=db_settings['PASSWORD'])
except MySQLdb.OperationalError, e:
    self.connection = None
Since Django 1.6, you can use
import django.db
django.db.close_old_connections()
This does basically the same thing as adamsmith's answer except that it handles multiple databases and also honors the CONN_MAX_AGE setting. Django calls close_old_connections() automatically before and after each request, so you normally don't have to worry about it unless you have some long-running code outside of the normal request/response cycle.
The main cause of this exception is that the client has been idle for longer than wait_timeout on the MySQL server.
To prevent that kind of error, Django supports an option named CONN_MAX_AGE, which allows Django to recreate a connection if the old one has been idle for too long.
So you should make sure that the CONN_MAX_AGE value is smaller than the wait_timeout value.
One important thing: Django under WSGI checks CONN_MAX_AGE on every request by calling close_old_connections, so you normally don't need to care about it. However, if you are using Django in a standalone application, there is no trigger to run that function, so you have to call close_old_connections manually in your code base.
Note: close_old_connections keeps old connections if they haven't expired yet, so your connections are still reused under high-frequency queries.
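A minimal sketch of the settings side, assuming the default database alias and placeholder credentials; the only addition relative to a stock MySQL config is CONN_MAX_AGE:
# settings.py - CONN_MAX_AGE is in seconds and should stay below the server's wait_timeout
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'db_name',      # placeholder values
        'USER': 'user',
        'PASSWORD': 'pass',
        'HOST': '127.0.0.1',
        'CONN_MAX_AGE': 300,    # recycle connections older than 5 minutes
    }
}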
This approach can also close the idle connections and keep things working.
So before you make a query after a long idle period, running the lines below will work:
from django.db import close_old_connections
# To prevent the error if possible.
close_old_connections()
# Then the following statement should always be OK.
YourModel.objects.all()