We have a Rails application that has been running in a MySQL master-slave setup for a while now, using the master_slave_adapter plugin. Recently we needed background processing for long-running tasks, so we settled on DelayedJob.
DelayedJob's table/model uses the same master-slave adapter, and it keeps the slave connection alive by polling the table. The master connection, however, stays idle for long periods, gets closed overnight, and the next time a job is triggered this happens:
Mysql::Error: MySQL server has gone away: UPDATE `delayed_jobs` SET locked_by = null, locked_at = null WHERE (locked_by = 'delayed_job host:[snip] pid:20481')
I've heard bad things about using the reconnect option in database.yml, because it allegedly doesn't restore the connection character set after a reconnect the way the initial connection setup does.
What's the proper way to make this work?
FWIW, we now monkey patch Delayed::Job in the two places it matters. Here's the blob:
module Delayed
  class Job < ActiveRecord::Base
    class << self
      def refresh_connections_for_delayed_job
        # Do a cheap check to see if we're actually using master-slave.
        if (c = self.connection).respond_to? :master_connection
          c.master_connection.reconnect! unless c.master_connection.active?
        end
      end

      def clear_locks_with_connection_refresh!(worker_name)
        self.refresh_connections_for_delayed_job
        self.clear_locks_without_connection_refresh!(worker_name)
      end
      alias_method_chain :clear_locks!, :connection_refresh
    end

    def lock_exclusively_with_connection_refresh!(max_run_time, worker)
      self.class.refresh_connections_for_delayed_job
      self.lock_exclusively_without_connection_refresh!(max_run_time, worker)
    end
    alias_method_chain :lock_exclusively!, :connection_refresh
  end
end
I am using the Mariaex.start_link function to establish a connection to a MySQL database, and it returns a pid. I was wondering what the best practice is for managing these pids: close them and create new ones every time, or keep 1, 2, ... n pid(s) around as needed?
Also, how do I close that connection or kill that pid? I tried Process.exit with :normal, which doesn't stop it, and with :kill I get an error (probably from Mariaex), so killing it that way doesn't seem clean either.
Thanks!
You might refer to the Ecto codebase to see how it handles this case.
Basically, it starts a connection, executes a query, and stops the Mariaex GenServer immediately after:
with {:ok, conn} <- Mariaex.start_link(opts) do
  value = Ecto.Adapters.MySQL.Connection.execute(conn, sql, [], opts)
  GenServer.stop(conn)
  value
end
I'm making an eggdrop Tcl script to write the activity of several public IRC channels to a database (over time this will be 10 to 15 channels, I think). I have two options in mind for handling the database connection:
1. A user says something -> open a MySQL connection to the database -> insert information about what the user said -> close the connection
2. Start the bot -> open a MySQL connection to the database -> insert information whenever there is channel activity -> wait for more information, etc.
I think it's better to use option 1, but when there is a lot of channel activity I'm afraid opening and closing a connection every time will cause significant server load and slow things down drastically after a while.
What's the best way to do this?
If you want to keep the connection open, just call
mysql::ping $dbhandle
from time to time.
This can be done with something like this:
proc keepMySqlOpen {dbhandle} {
    mysql::ping $dbhandle
    after 2000 [list keepMySqlOpen $dbhandle]
}
....
set dbh [mysql::open ...]
keepMySqlOpen $dbh
...
Another option is just to call mysql::ping before accessing the db, which, according to the mysqltcl manual, should reconnect if necessary. This might be the best of both worlds (let the connection time out if there is not much activity, keep it open otherwise).
This is some sample code I'd like to run:
from sqlalchemy import create_engine

for i in range(1, 2000):
    db = create_engine('mysql://root@localhost/test_database')
    conn = db.connect()
    # some simple data operations
    conn.close()
    db.dispose()
Is there a way of running this without getting "Too many connections" errors from MySQL?
I already know I can handle the connection otherwise or have a connection pool. I'd just like to understand how to properly close a connection from sqlalchemy.
Here's how to write that code correctly:
db = create_engine('mysql://root@localhost/test_database')
for i in range(1, 2000):
    conn = db.connect()
    # some simple data operations
    conn.close()
db.dispose()
That is, the Engine is a factory for connections as well as a pool of connections, not the connection itself. When you say conn.close(), the connection is returned to the connection pool within the Engine, not actually closed.
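To make that concrete, here is a small sketch (assuming the same db Engine as above, with SQLAlchemy's default QueuePool) showing that close() checks the connection back in rather than closing it:
conn = db.connect()      # checks a DBAPI connection out of the Engine's pool
conn.close()             # returns it to the pool; the underlying socket stays open
print(db.pool.status())  # the QueuePool status should now report a checked-in connection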
If you do want the connection to be actually closed, that is, not pooled, disable pooling via NullPool:
from sqlalchemy.pool import NullPool
db = create_engine('mysql://root@localhost/test_database', poolclass=NullPool)
With the above Engine configuration, each call to conn.close() will close the underlying DBAPI connection.
If OTOH you actually want to connect to different databases on each call, that is, your hardcoded "localhost/test_database" is just an example and you actually have lots of different databases, then the approach using dispose() is fine; it will close out every connection that is not checked out from the pool.
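For example, a rough sketch of that per-database pattern, with hypothetical database names standing in for the hardcoded one:
for name in ('test_database_1', 'test_database_2'):  # hypothetical names
    db = create_engine('mysql://root@localhost/%s' % name)
    conn = db.connect()
    # some simple data operations
    conn.close()    # return the connection to this engine's pool
    db.dispose()    # close every pooled connection that isn't checked out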
In all of the above cases, the important thing is that the Connection object is closed via close(). If you're using any kind of "connectionless" execution, that is engine.execute() or statement.execute(), the ResultProxy object returned from that execute call should be fully read, or otherwise explicitly closed via close(). A Connection or ResultProxy that's still open will prohibit the NullPool or dispose() approaches from closing every last connection.
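A minimal sketch of that last point, using the engine-level execute() mentioned above (SQLAlchemy 1.x style; the table name is just a placeholder):
result = db.execute("SELECT * FROM some_table")  # returns a ResultProxy
rows = result.fetchall()                         # fully read the result set...
result.close()                                   # ...or close it explicitly when done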
I was trying to figure out how to disconnect from the database for an unrelated reason (you must disconnect before forking).
You need to invalidate the connection in the connection pool too.
In your example:
for i in range(1, 2000):
    db = create_engine('mysql://root@localhost/test_database')
    conn = db.connect()
    # some simple data operations
    # session.close() if needed
    conn.invalidate()
    db.dispose()
I use this one:
from sqlalchemy import create_engine, text

engine = create_engine('...')
with engine.connect() as conn:
    conn.execute(text("CREATE SCHEMA IF NOT EXISTS ..."))
engine.dispose()
In my case this always works and I am able to close the connection.
Using invalidate() before close() does the trick; otherwise close() on its own doesn't release the connection.
conn = engine.raw_connection()
conn.get_warnings = True
cur = conn.cursor()  # the cursor was missing from the original snippet
curSql = xx_tmpsql
myresults = cur.execute(curSql, multi=True)
print("Warnings: #####")
print(cur.fetchwarnings())
for curresult in myresults:
    print(curresult)
    if curresult.with_rows:
        print(curresult.column_names)
        print(curresult.fetchall())
    else:
        print("no rows returned")
cur.close()
conn.invalidate()
conn.close()
engine.dispose()
Hi, I have a Python script that connects to an Amazon RDS MySQL instance and checks for new entries.
My script works perfectly against localhost, but against RDS it does not detect the new entry; once I cancel the script and run it again, I get the new entry. For testing I tried it out like this:
import MySQLdb

cont = MySQLdb.connect("localhost", "root", "password", "DB")
cursor = cont.cursor()
for i in range(0, 100):
    cursor.execute("Select count(*) from box")
    A = cursor.fetchone()
    print A
During this process, when I add a new entry it does not detect it, but when I close the connection and run the script again I do get the new entry. Why is this? I checked the query cache and it was at 0. What else am I missing?
I have seen this happen in MySQL command-line clients as well.
My understanding (from other people linking to this URL) is that Python's DB-API often silently creates transactions: http://www.python.org/dev/peps/pep-0249/
If that is true, then your cursor is looking at a consistent snapshot of the data, even after another transaction adds rows. You could try issuing a rollback on the connection inside the for loop to end the implicit transaction that the SELECT is running in.
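Something along these lines, reusing the connection from the question (the rollback ends the implicit transaction, so the next SELECT sees a fresh snapshot):
import MySQLdb

cont = MySQLdb.connect("localhost", "root", "password", "DB")
cursor = cont.cursor()
for i in range(0, 100):
    cursor.execute("Select count(*) from box")
    print(cursor.fetchone())
    cont.rollback()  # end the implicit transaction started by the SELECT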
I got the solution to this: it is due to the transaction isolation level in MySQL. All I had to do was set the default transaction isolation level:
transaction-isolation = READ-COMMITTED
And since I am using Django for this, I had to add the following to the Django database settings:
'OPTIONS': {
    "init_command": "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED"
}
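For context, this is roughly where that OPTIONS block sits in settings.py; the connection parameters here are placeholders, not values from the original post:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'DB',            # placeholder connection details
        'USER': 'root',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'OPTIONS': {
            "init_command": "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED"
        },
    },
}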
We are using Sinatra and Sequel for a small API implementation. The problem, however, is that on every page request Sequel opens new connections to MySQL and keeps them open until they time out or you restart Apache.
There's not a lot of documentation on how to reuse connections, so any help, explanations, and/or pointers in the right direction would be appreciated.
I wrapped the Sequel stuff in a tiny wrapper and reuse this wrapper, like this:
get '/api/:call' do
  @@api ||= SApi.new
  @@api.call(params[:call])
end

class SApi
  def initialize
    connect
  end

  def connect
    @con = Sequel.connect("...")
  end

  def call(x)
    # handle call using @con
  end
end
Alternatively, you can call @con.disconnect once you're finished, or call Sequel.connect with a block:
Sequel.connect("...") do |c|
# work with c
end #connection closed
We figured out what we were doing wrong. It was rather stupid: we were initializing Sequel in a before filter in Sinatra.
So instead we do:
DB = Sequel.mysql("...")
Then we simply use the DB constant whenever we need Sequel.