I'm using the Sequel gem, which works great. However, I'm trying to debug a multithreading bug, so I activated logging at the Sequel level (i.e. by passing a Logger when creating the connection to the database). My problem is that all the SQL logs coming from the different connections are tangled together in the log file, and there is no way to tell which query corresponds to which connection. Having a connection ID or something similar added to the log would be really useful.
Is there a way to do this, or an alternative solution?
If there's nothing built-in, try monkey patching or changing the logger, or the call to it, so it prepends each log line with the thread's ID.
The relevant file in Sequel would be:
https://github.com/jeremyevans/sequel/blob/master/lib/sequel/database/logging.rb
Based on it, chances are you could subclass Logger and throw that in to make it work.
http://www.ruby-doc.org/stdlib-2.1.0/libdoc/logger/rdoc/Logger.html
If the Logger docs and its code are anything to go by, you can probably do what you want by overriding the add() method, e.g.:
def add(severity, message = nil, progname = nil, &block)
  thread_msg = "thread: #{Thread.current.object_id}"
  progname ||= @progname
  if message.nil?
    if block_given?
      message = yield
    else
      message = progname
      progname = @progname
    end
  end
  # Prepend the thread ID so entries from different connections can be told apart.
  message = "#{thread_msg}\n#{message}"
  super(severity, message, progname, &block)
end
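To wire that in, the override can live in a small Logger subclass handed to Sequel when the connection is created; Sequel accepts a :logger option at connect time. A minimal sketch (the ThreadLogger name and the connection string are illustrative):

require "logger"
require "sequel"

# Hypothetical subclass carrying a condensed version of the add() override above.
class ThreadLogger < Logger
  def add(severity, message = nil, progname = nil, &block)
    if message.nil?
      message = block_given? ? yield : progname
      progname = nil
    end
    # Prepend the issuing thread's ID to every entry.
    super(severity, "thread: #{Thread.current.object_id} -- #{message}", progname)
  end
end

DB = Sequel.connect("mysql2://localhost/mydb", logger: ThreadLogger.new($stdout))

# Queries from different threads are now distinguishable in the log.
threads = 4.times.map { Thread.new { DB.fetch("SELECT 1").all } }
threads.each(&:join)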
Running Django via Gunicorn against RDS (AWS MySQL), I'm seeing this error in my Gunicorn logs:
Exception _mysql_exceptions.ProgrammingError: (2014, "Commands out of sync; you can't run this command now") in <bound method Cursor.__del__ of <MySQLdb.cursors.Cursor object at 0x690ecd0>> ignored
I can't reliably reproduce it yet, nor can I track down the underlying code that's causing it.
I am using raw cursors in some places, following this pattern:
from django.db import connections

cursor = connections['read_only'].cursor()
sql = "select username from auth_user;"
cursor.execute(sql)
rows = cursor.fetchall()
usernames = []
for row in rows:
    usernames.append(row[0])
In some places I immediately reuse the cursor for another query execute() / fetchall() pattern. Sometimes I don't.
I also use raw manager queries in some places.
I'm not explicitly closing cursors, but I don't believe that I should.
Other than that: I'm not using any stored procedures, no init_command parameters, nor anything else indicated in the other answers I've seen posted here.
Any ideas or suggestions for how to debug would be appreciated.
Check out https://code.djangoproject.com/ticket/17289
You'll need to do something like the following after each query, to drain any remaining result sets (verbose here is just a local debugging flag):
while cursor.nextset() is not None:
    if verbose:
        print "rows modified %s" % cursor.rowcount
I'm connecting to a MySQL database through the MATLAB Database Toolbox in order to run the same query over and over again within two nested for loops. After each iteration I get this warning:
Warning: com.mathworks.toolbox.database.databaseConnect#26960369 is not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
Warning: com.mysql.jdbc.Connection#6e544a45 is not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
Warning: com.mathworks.toolbox.database.databaseConnect#26960369 not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
Warning: com.mysql.jdbc.Connection#6e544a45 is not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
My code is basically structured like this:
%Server
host =
user =
password =
dbName =
%# JDBC parameters
jdbcString = sprintf('jdbc:mysql://%s/%s', host, dbName);
jdbcDriver = 'com.mysql.jdbc.Driver';
%# Create the database connection object
conn = database(dbName, user , password, jdbcDriver, jdbcString);
setdbprefs('DataReturnFormat', 'numeric');
%Loop
for SegmentNum=3:41;
    for tl=1:15;
        tic;
        sqlquery=['giant string'];
        results = fetch(conn, sqlquery);
        (some code here that saves the results into a few variables)
        save('inflow.mat');
    end
end
time = toc
close(conn);
clear conn
Eventually, after some iterations the code will crash with this error:
Error using database/fetch (line 37)
Query execution was interrupted
Error in Import_Matrices_DOandT_julaugsept_inflow_nomettsed (line
466)
results = fetch(conn, sqlquery);
Last night it errored out after 25 iterations. I have about 600 iterations to do in total, and I don't want to keep checking back on it every 25. I've heard there can be memory issues with database connection objects... is there a way to keep my code running?
Let's take this one step at a time.
Warning: com.mathworks.toolbox.database.databaseConnect#26960369 is not serializable
This comes from this line
save('inflow.mat');
You are trying to save the database connection. That doesn't work. Try saving only the variables you actually need, and it should work better.
There are a couple of tricks for excluding values, but honestly, I suggest you just identify the most important variables and save those. If you wish, you can piece together a solution from this page.
save inflow.mat a b c d e
Try wrapping the query in a try/catch block. Whenever you catch an error, reset the connection to the database, which should free up the object.
nQuery = 100;
while (nQuery > 0)
    try
        query_the_database();      % placeholder for the fetch() call from the question
        nQuery = nQuery - 1;
    catch
        reset_database_connection();  % placeholder: close the stale connection, open a new one
    end
end
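With the Database Toolbox calls from the question, the reset helper could be sketched like this (the function name and argument list are placeholders):

function conn = reset_database_connection(conn, dbName, user, password, jdbcDriver, jdbcString)
    % Close the stale connection, ignoring errors if it is already dead ...
    try
        close(conn);
    catch
    end
    % ... then open a fresh one with the same JDBC parameters.
    conn = database(dbName, user, password, jdbcDriver, jdbcString);
end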
The underlying reason for this is that a database connection object wraps a live TCP/IP port, and multiple processes cannot access the same port. That is why database connection objects are not serialized: ports cannot be serialized.
A workaround is to create the connection within the for loop.
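A minimal sketch of that workaround, reusing the connection parameters from the question (the query string stays the placeholder it was):

for SegmentNum = 3:41
    for tl = 1:15
        % Open a fresh connection for each iteration ...
        conn = database(dbName, user, password, jdbcDriver, jdbcString);
        sqlquery = ['giant string'];
        results = fetch(conn, sqlquery);
        % ... and close it again, so no live connection object is
        % around (or accidentally saved) across iterations.
        close(conn);
        save('inflow.mat', 'results');
    end
end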
Using: Python 2.7.3, SQLAlchemy 0.7.8, PyODBC 3.0.3.
I have implemented my own Dialect for the EXASolution DB using PyODBC as the underlying db driver. I need to make use of PyODBC's output_converter function to translate DECIMAL(x, 0) columns to integers/longs.
The following code snippet does the trick:
pyodbc = self.dbapi
dbapi_con = connection.connection
dbapi_version = dbapi_con.getinfo(pyodbc.SQL_DRIVER_VER)
# Split the "major.minor.patch" version string before converting the parts.
(major, minor, patch) = [int(x) for x in dbapi_version.split('.')]
if major >= 3:
    dbapi_con.add_output_converter(pyodbc.SQL_DECIMAL, self.decimal2int)
I have placed this code snippet in the initialize(self, connection) method of
class EXADialect_pyodbc(PyODBCConnector, EXADialect):
The code gets called and no exception is thrown, but this is a one-time initialization. Later on, other connections are created; these connections are not passed through my initialization code.
Does anyone have a hint on how connection initialization works with SQLAlchemy, and where to place my code so that it gets called for every new connection created?
This is an old question, but it's something I hit recently, so an updated answer may help someone else along the way. In my case, I was trying to automatically downcase MSSQL UNIQUEIDENTIFIER columns (GUIDs).
You can grab the raw connection (pyodbc) through the session or engine to do this:
from pyodbc import SQL_DECIMAL
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine(connection_string)
make_session = sessionmaker(engine)
...
session = make_session()
session.connection().connection.add_output_converter(SQL_DECIMAL, decimal2int)
# or
connection = engine.connect().connection
connection.add_output_converter(SQL_DECIMAL, decimal2int)
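If the converter should apply to every pooled connection automatically, rather than to one grabbed by hand, a pool-level connect listener is another option (a sketch; connection_string and decimal2int stand in for the ones from the question):

import pyodbc
from sqlalchemy import create_engine, event

engine = create_engine(connection_string)

# Runs once for every new DBAPI connection the pool opens.
@event.listens_for(engine, "connect")
def register_converters(dbapi_con, connection_record):
    dbapi_con.add_output_converter(pyodbc.SQL_DECIMAL, decimal2int)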
I am encountering the 'MySQL server has gone away' error in Ruby after the script has been running for a certain amount of time.
I want to try to tell the mysql gem to auto-reconnect when the connection is lost.
My current code looks like the following:
def self.connect()
  begin
    if !@@dbh.nil?
      self.disconnect
    end
    @@dbh = Mysql.real_connect(@@server, @@user, @@pass, @@db)
    puts "[+] Connected to the " + @@db + " database with user '" + @@user + "'"
  rescue Mysql::Error => e
    # log error
  end
end
The following guide [0] says that the mysql gem has a 'reconnect' object variable; however, I am unsure how to use it within my code.
How do I implement this option in the code above?
Thanks in advance,
Ryan
[0] http://www.tmtm.org/en/mysql/ruby/
EDIT ---
OK. I think I have figured it out.
I need to add @@dbh.reconnect = true after the @@dbh = Mysql.real_connect(@@server, @@user, @@pass, @@db) line.
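In context, the relevant part of the connect method then reads:

@@dbh = Mysql.real_connect(@@server, @@user, @@pass, @@db)
@@dbh.reconnect = true  # ask the client library to reconnect automatically when the connection drops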
Note: According to a 'nice' chap on IRC, the mysql gem may not be the best Ruby gem to use.
If you're starting on a new project, the mysql2 gem is the way to go. It's an enormous improvement over the old version.
An attempt to Ruby-ize your example is:
def connect
  begin
    if (@dbh)
      self.disconnect
    end
    @dbh = Mysql.real_connect(@server, @user, @pass, @db)
    puts "[+] Connected to the #{@db} database with user '#{@user}'"
  rescue Mysql::Error => e
    # log error
  end
end
The reason for using traditional @ instance variables is that you can use attr_accessor if you design your interface properly.
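For example (a sketch; the attribute names are illustrative):

class Database
  # Generates getters/setters for the connection settings held in @-variables.
  attr_accessor :server, :user, :pass, :db
end

db = Database.new
db.server = "localhost"
db.server  # => "localhost"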
It's better to use a singleton instance than to muck around with a singleton class. For instance:
class MyApp
  def self.db
    @db ||= Database.new
  end

  class Database
    # Instance methods like initialize, connect, disconnect, etc.
  end
end
You can use this like:
MyApp.db.connect
The advantage of using an instance of a class instead of the class directly is that you can support more than one connection at a time.
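For instance, nothing stops the app from exposing a second, separately configured handle later (a sketch; the Database class is assumed to accept connection settings):

class MyApp
  def self.db
    @db ||= Database.new(host: "primary.example.com")
  end

  # A second handle, e.g. for a read-only replica.
  def self.reporting_db
    @reporting_db ||= Database.new(host: "replica.example.com")
  end
end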
I stumbled upon the following:
def save_formset(self, request, form, formset, change):
    # Decimal comes from the stdlib: from decimal import Decimal
    instances = formset.save(commit=False)
    bargain_id = 0
    total_price = Decimal(0)
    for instance in instances:
        if isinstance(instance, BargainProduct):
            total_price += instance.quantity * instance.product.price
            bargain_id = instance.id
        instance.save()
    updateTotal = Bargain.objects.get(id=bargain_id)
    updateTotal.total_price = total_price - updateTotal.discount_price
    updateTotal.save()
This code works for me on my local MySQL setup; however, on my live test environment running on SQLite3* I get the "Bargain matching query does not exist." error.
I figure this is due to a different order of saving the instances on SQLite; however, it seems they run (and should act) the same..?
*I cannot recompile MySQL with Python support on my live server at the moment, so that's a no-go.
Looking at the code: if no instances come out of formset.save(), bargain_id will still be 0 when execution reaches the Bargain.objects.get(id=bargain_id) line, since the for loop is skipped entirely. If it is 0, I'm guessing the lookup will fail with the error you are seeing.
You might want to check whether the values are getting stored correctly in the database during formset.save(), and whether it actually returns anything back to instances.
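A defensive version of the tail of save_formset might look like this (a sketch; it simply skips the lookup when the loop found no BargainProduct):

    bargain_id = None
    for instance in instances:
        if isinstance(instance, BargainProduct):
            total_price += instance.quantity * instance.product.price
            bargain_id = instance.id
        instance.save()
    # Only look up the Bargain when the loop actually found one.
    if bargain_id is not None:
        updateTotal = Bargain.objects.get(id=bargain_id)
        updateTotal.total_price = total_price - updateTotal.discount_price
        updateTotal.save()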
This line is giving the error:
updateTotal = Bargain.objects.get(id=bargain_id)
which most probably is because of this line:
instances = formset.save(commit=False)
Did you define a save() method for the formset? It doesn't seem to have one built in. You save it by accessing what formset.cleaned_data returns, as the Django docs say.
edit: I correct myself, it actually has a save() method based on this page.
I've been looking at this same issue. The data is saved to the database, and the formset is filled. The problem is that the save on instances = formset.save(commit=False) doesn't return a value, even though the built-in save method should give back the saved data.
Another weird thing is that it seems to work on my friend's MySQL backend, but not on his SQLite3 backend. On top of that, it doesn't work on my MySQL backend either.
The local loop returns these printouts (on MySQL); on SQLite3 it fails with a 'does not exist' error on the query:
('Formset: ', <django.forms.formsets.BargainProductFormFormSet object at 0x101fe3790>)
('Instances: ', [<BargainProduct: BargainProduct object>])
[18/Apr/2011 14:46:20] "POST /admin/shop/deal/add/ HTTP/1.1" 302 0