Is mysqlplus a better adapter than the Ruby mysql gem?

Is the mysqlplus gem a better database driver than the common Ruby mysql gem? I have had some problems in my Rails application, like:
ActiveRecord::StatementInvalid: Mysql::Error: MySQL server has gone away

"MySQL server has gone away" means either the MySQL server crashed while running your query or (more commonly) you sent it a query that is larger than max_allowed_packet. See http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html

If you only want to check for 'MySQL server has gone away' errors, then ActiveRecord is more than sufficient. It is a mature codebase, good enough for most use cases.
Check out http://blog.new-bamboo.co.uk/2010/4/11/automatic-reconnection-of-mysql-connections-in-active-record for more details.
mysqlplus is better when you need concurrency; all the cool kids recommend it :-)
But I am not sure if it is production ready.

MySQL reaps connections after a period of inactivity; this is defined by 'wait_timeout'.
You can see this in MySQL with:
mysql> show variables like 'wait_timeout';
By default it is 8 hours. You are getting this error because you established a connection and no queries were executed over it for that period.
ActiveRecord has ActiveRecord::Base#verify_active_connections! for this use case.
If you specify reconnect: true in database.yml, it will do this automatically.
The above method is executed when a connection is checked out from the connection pool; it guards against inactivity.
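For a long-running process outside the request cycle you can call it yourself. A minimal sketch (the Widget model and its method are hypothetical):

loop do
  ActiveRecord::Base.verify_active_connections!  # reconnect any stale connections
  Widget.process_pending_work                    # hypothetical model method
  sleep 60
end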
It will not help you if a long-running query exceeds the wait_timeout period; then you may have to increase the timeout variable in MySQL. You may also try the patch at:
http://gist.github.com/238999 This retries the query on such an error. Circumstances may have changed, but the patch is not robust, as it does not have a retry count.
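A bounded variant is easy to sketch yourself; the helper name below is made up and not part of that gist:

def with_gone_away_retry(max_retries = 2)
  attempts = 0
  begin
    yield
  rescue ActiveRecord::StatementInvalid => e
    # Only retry the specific 'gone away' failure, and only a few times
    raise unless e.message =~ /server has gone away/i
    attempts += 1
    raise if attempts > max_retries
    ActiveRecord::Base.connection.reconnect!
    retry
  end
end

with_gone_away_retry { Widget.count }  # Widget is a hypothetical model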

Related

MySQL Query running even after losing connection

I have a MySQL 5.1.41 server installed on an Ubuntu machine. I connect to it through Workbench from my Windows machine over TCP/IP. I ran a bigger query, and after 900 seconds I got the message below (there is no wait_timeout defined in the server's configuration file my.cnf):
Error Code: 2013. Lost connection to MySQL server during query
But when I look into the process list using the show processlist; command, I can still see my query running.
I found this link http://dev.mysql.com/doc/refman/5.0/en/gone-away.html with the following lines:
The problem on Windows is that in some cases MySQL does not get an
error from the OS when writing to the TCP/IP connection to the server,
but instead gets the error when trying to read the answer from the
connection.
I'm not sure whether this is the reason for my observation.
Please clarify this for me. Thanks in advance!
Closing the connection is not a reason to stop a query. A query might be an UPDATE, part of a transaction, or a SELECT with output to a remote (server-side) file.
A closed connection just means that you will not receive any data from the DBMS after executing the query (data, timings, nothing).
The reason for the closed connection can vary, as SO-User posted. Try increasing:
on the server side:
wait_timeout
max_allowed_packet
on the client side:
any kind of timeout you find in your client (e.g., the one SO-User suggests)
Do not forget to reload the DBMS config and restart the client (to be sure).
In MySQL Workbench there is an option to change the timeout.
Find it under
Edit → Preferences → SQL Editor → DBMS connection read time out (in seconds): 600
Change the value to 6000 or something higher.
Update
Lost connection to MySQL server
There are three likely causes for this error message.
Usually it indicates network connectivity trouble and you should check
the condition of your network if this error occurs frequently. If the
error message includes “during query,” this is probably the case you
are experiencing.
Sometimes the “during query” form happens when millions of rows are
being sent as part of one or more queries. If you know that this is
happening, you should try increasing net_read_timeout from its default
of 30 seconds to 60 seconds or longer, sufficient for the data
transfer to complete.
More rarely, it can happen when the client is attempting the initial
connection to the server. In this case, if your connect_timeout value
is set to only a few seconds, you may be able to resolve the problem
by increasing it to ten seconds, perhaps more if you have a very long
distance or slow connection. You can determine whether you are
experiencing this more uncommon cause by using SHOW GLOBAL STATUS LIKE
'Aborted_connects'. It will increase by one for each initial
connection attempt that the server aborts. You may see “reading
authorization packet” as part of the error message; if so, that also
suggests that this is the solution that you need.
If the cause is none of those just described, you may be experiencing
a problem with BLOB values that are larger than max_allowed_packet,
which can cause this error with some clients. Sometimes you may see an
ER_NET_PACKET_TOO_LARGE error, and that confirms that you need to
increase max_allowed_packet.
Doc link: Error lost connection

Force to reconnect MySQL in Rails

How do I force MySQL to reconnect at will in a Rails application? I would like to do this either periodically or on DB exceptions like "MySQL server has gone away".
I found ActiveRecord::Base.remove_connection, but as documented it should be called on a specific model, not the whole application.
It's a huge pain to restart the Rails console when I'm running it via Heroku with a bunch of objects in variables and then lose my database connection.
The following is code I would not consider "good" to put in your actual application, but it temporarily gets you past the oft-encountered Mysql2::Error: closed MySQL connection in a console:
ActiveRecord::Base.connection.reconnect!
How about using reconnect: true in your database.yml, as described here?

How to prevent "Mysql2::Error: This connection is in use by" with Sidekiq

After running Sidekiq for a couple of hours, I see a bunch of jobs fail with Mysql2::Error: This connection is in use by: #<Celluloid::Thread:0x0000000d1b56e0 sleep>. Seems the Sidekiq threads are somehow conflicting over the MySQL connection pool.
concurrency is set to the default 25 in sidekiq.yml and the pool is 28 in database.yml. There are no long-lived queries and the exceptions happen in standard finder calls, nothing fancy.
How can I prevent this error and ensure jobs run smoothly?
Your problem is caused by Sidekiq grabbing all the connections to your DB while the Rails app is also requesting connections.
You have 25 Sidekiq workers, but how many Rails server processes do you have?
E.g., if you have Unicorn running 4 child workers, you'll need at least 29 slots.
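A rough way to sanity-check the budget (a sketch; 25 and 4 are the numbers from this question, the margin is an assumption):

sidekiq_concurrency = 25  # concurrency in sidekiq.yml
unicorn_workers     = 4   # hypothetical Unicorn child worker count
margin              = 2   # headroom for cron jobs, consoles, etc.

puts "MySQL needs at least #{sidekiq_concurrency + unicorn_workers + margin} connections"
# Note: the pool in database.yml is per process, so the Sidekiq process needs
# pool >= 25, while each Unicorn worker only needs enough for its own threads.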
I finally fixed this by isolating all the workers. It came down to the Typhoeus library apparently not being thread-safe. I replaced it with Net::HTTP and it works again.
Some more details at https://github.com/mperham/sidekiq/issues/1400#issuecomment-45838886
In our case, we had a lot of these errors in a specific worker type. We identified that we were using Timeout.timeout() calls in one of the jobs running in these workers. We removed those calls and these errors went away after that. For some reference as to why Timeout.timeout() calls are dangerous, please look here and here.
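For illustration, a minimal sketch of leaning on the HTTP client's own timeouts instead of wrapping calls in Timeout.timeout (host and values are placeholders):

require 'net/http'

http = Net::HTTP.new('example.com', 443)
http.use_ssl = true
http.open_timeout = 5   # seconds to wait for the connection to open
http.read_timeout = 10  # seconds to wait for each read of the response
response = http.get('/')
puts response.code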

Rails MySQL has gone away after timeout, catching error and re-executing command automatically for all statements

I have wait_timeout on my MySQL server set to 86400 (24 hours). I have an application that sometimes goes unused for long stretches (particularly on weekends). So what ends up happening is that on Monday morning, people come in to use the application and each controller has to error once before it works. This leads to me getting a lot of bug reports because the 'system isn't working'. It's very frustrating that each controller has to error once in order for Rails to reconnect that connection. Is there a way to catch all of those errors and have it re-execute the statement, or do I have to add a check to every controller?
Add reconnect: true to config/database.yml in the environment you are using, so it will automatically reconnect when a timeout occurs!
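If you really want a single place that retries instead of adding a rescue to every controller, a hedged sketch of a Rack middleware (the class name and the message check are illustrative, not a standard API):

class MysqlReconnectMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue ActiveRecord::StatementInvalid => e
    raise unless e.message =~ /server has gone away/i
    ActiveRecord::Base.connection.reconnect!
    @app.call(env)  # retry the request once on a fresh connection
  end
end

# config/application.rb: config.middleware.use MysqlReconnectMiddleware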

"MySQL server has gone away" with Ruby on Rails

After our Ruby on Rails application has run for a while, it starts throwing 500s with "MySQL server has gone away". Often this happens overnight. It's started doing this recently, with no obvious change in our server configuration.
Mysql::Error: MySQL server has gone away: SELECT * FROM `widgets`
Restarting the mongrels (not the MySQL server) fixes it.
How can we fix this?
Ruby on Rails 2.3 has a reconnect option for your database connection:
production:
  # Your settings
  reconnect: true
See:
Ruby on Rails 2.3 Release Notes, subsection 4.8, Reconnecting MySQL Connections.
MySQL auto-reconnect revisited
Good luck!
This is probably caused by the persistent connections to MySQL going away (a timeout is likely if it's happening overnight) and Ruby on Rails failing to restore the connection, which it should do by default:
In the file vendor/rails/actionpack/lib/action_controller/dispatcher.rb is the code:
if defined?(ActiveRecord)
  before_dispatch { ActiveRecord::Base.verify_active_connections! }
  to_prepare(:activerecord_instantiate_observers) { ActiveRecord::Base.instantiate_observers }
end
The method verify_active_connections! performs several actions, one of which is to recreate any expired connections.
The most likely cause of this error is that a monkey patch has redefined the dispatcher to not call verify_active_connections!, or that verify_active_connections! has been changed, etc.
Try ActiveRecord::Base.connection.verify! in Ruby on Rails 4. verify! pings the server and reconnects if it is not connected.
I had this problem when sending really large statements to MySQL. MySQL limits the size of statements and will close the connection if you go over the limit.
set global max_allowed_packet = 1048576; # 2^20 bytes (1 MB) was enough in my case
As the other contributors to this thread have said, it is most likely that MySQL server has closed the connection to your Ruby on Rails application because of inactivity. The default timeout is 28800 seconds, or 8 hours.
set-variable = wait_timeout=86400
Adding this line to your /etc/my.cnf will raise the timeout to 24 hours
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#option_mysqld_wait_timeout.
Although the documentation doesn't indicate it, a value of 0 may disable the timeout completely, but you would need to experiment as this is just speculation.
There are, however, three other situations that I know of that can generate that error. The first is the MySQL server being restarted. This will obviously drop all the connections, but as the MySQL client is passive, this won't be noticed until you run the next query.
The second is someone killing your query from the MySQL command line; this also drops the connection, because it could leave the client in an undefined state.
The last is if your MySQL server restarts itself due to a fatal internal error. That is, if you are doing a simple query against a table and instantly see 'MySQL has gone away', I'd take a close look at your server's logs to check for hardware error, or database corruption.
First, determine the max_connections in MySQL:
show variables like "max_connections";
You need to make sure that the number of connections you're making in your Ruby on Rails application is less than the maximum allowed number of connections. Note that extra connections can be coming from your cron jobs, delayed_job processes (each would have the same pool size in your database.yml), etc.
Monitor the SQL connections as you go through your application, run processes, etc. by doing the following in MySQL:
show status where variable_name = 'Threads_connected';
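You can run the same check from a Rails console; a small sketch:

row = ActiveRecord::Base.connection.select_one(
  "show status where variable_name = 'Threads_connected'"
)
puts "Threads connected: #{row['Value']}"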
You might want to consider closing connections after a thread finishes execution, as database connections do not get closed automatically (I think this is less of an issue with Ruby on Rails 4 applications thanks to the Reaper):
Thread.new do
  begin
    # Thread work here
  ensure
    begin
      # Return this thread's connection to the pool when the work is done
      if ActiveRecord::Base.connection && ActiveRecord::Base.connection.active?
        ActiveRecord::Base.connection.close
      end
    rescue
      # Swallow errors from closing an already-dead connection
    end
  end
end
The connection to the MySQL server is probably timing out.
You should be able to increase the timeout in MySQL, but for a proper fix, have your code check that the database connection is still alive, and re-connect if it's not.
Using reconnect: true in the database.yml will cause the database connection to be re-established AFTER the ActiveRecord::StatementInvalid error is raised (as Dave Cheney mentioned).
Unfortunately adding a retry on the database operation seemed necessary to guard against the connection timeout:
begin
  do_some_active_record_operation
rescue ActiveRecord::StatementInvalid => e
  Rails.logger.debug("Got statement invalid #{e.message} ... trying again")
  # Second attempt, now that db connection is re-established
  do_some_active_record_operation
end
Do you monitor the number of open MySQL connections or threads? What is your mysql.ini setting for max_connections?
mysql> show status;
Look at Connections, Max_used_connections, Threads_connected, and Threads_created.
You may need to increase the limits in your MySQL configuration, or perhaps Rails is not closing the connections properly*.
Note: I've only used Ruby on Rails briefly...
The MySQL documentation for server status is in http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html.
Something else to check is Unicorn config is correct. See before_fork and after_fork handling of ActiveRecord connection here: https://gist.github.com/nebiros/2776085#file-unicorn-rb
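For reference, the relevant hooks from that gist generally look something like this (a sketch, not verbatim):

before_fork do |server, worker|
  # Disconnect in the master so forked workers don't inherit its socket
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker opens its own fresh connection
  defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
end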
I had this problem in a Ruby on Rails 3 application, using the mysql2 gem. I copied out the offending query and tried running it in MySQL directly, and I got the same error: "MySQL server has gone away".
The query in question was very, very large: a very large INSERT (over 1 MB). The field I was trying to insert into was a TEXT column, whose max size is 64 KB. Rather than throwing an error, the connection went away.
I increased the size of the field and got the same thing, so I'm still not sure what the exact issue was. The point is that it was something in the database related to a strange query. Anyway!
While forking in Rails:
For anyone running into this while forking in Rails, try clearing the existing connections before forking and then establishing a new connection in each fork, like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!

fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection

  # The rest of the code for each fork...
end
See this StackOverflow answer here: https://stackoverflow.com/a/8915353/293280