I have a Rails application that currently only runs internally, so there isn't much traffic right now. There are two Resque workers constantly fetching data from the web and inserting it into a MySQL database; every insert is followed by a 10-second sleep.
We run it on a VPS. Roughly every 5 hours I encounter this exception: "Exception occured: [Mysql2::Error] closed MySQL connection".
What could be causing the exception? The pool size is currently 5.
Will it help if I raise the pool size and specify reconnect: true in my database.yml?
This is a common issue when using mysql2 gem versions 0.2.11 and below in combination with multithreading. There is a bug on the gem's issue tracker with details on the problem, but in short the advice is to:
Update the gem version you are using to >= 0.2.12
Add the reconnect: true option to your db connection config in the database.yml file (see the sketch below)
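For reference, a minimal mysql2 section of database.yml with that option might look like this (the database name and credentials are placeholders):

production:
  adapter: mysql2
  encoding: utf8
  reconnect: true
  pool: 5
  database: my_app_production
  username: my_user
  password: my_password
  host: localhost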
You probably already solved your problem but this might help others who come across this question.
If your workers are inactive for a long period of time, they'll lose their MySQL connection.
See here for the solution, or just stick this in an initializer:
unless Rails.env.to_s == 'test'
  module ActiveRecord::ConnectionAdapters
    class Mysql2Adapter
      # Keep a handle on the original implementation...
      alias_method :execute_without_retry, :execute

      # ...and wrap it so a dropped connection triggers one reconnect and retry.
      def execute(*args)
        execute_without_retry(*args)
      rescue Exception => e
        if e.message =~ /server has gone away/i
          warn "Server timed out, retrying"
          reconnect!
          retry
        else
          raise e
        end
      end
    end
  end
end
More insight on how to debug this for delayed_job: I did the following after setting reconnect: true in database.yml and finding that fix alone wasn't working.
cd /your_rails_deploy_code/log
cat production.log
# check the pids from delayed job:
E, [2017-02-01T19:45:21.614579 #2592] ERROR -- : 2017-02-01T19:45:21+0000: [Worker(delayed_job.3 host:demeter pid:2592)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=193675) FAILED (0 prior attempts) with Mysql2::Error: closed MySQL connection
In my specific case, pid 2592 is the only one constantly failing. Why? Let's find out:
[deploy@demeter] ps -ef | grep delayed_job
deploy 2592 1 0 Jan31 ? 00:00:40 production/delayed_job.3
deploy 23312 1 0 Feb01 ? 00:00:40 production/delayed_job.1
deploy 23318 1 0 Feb01 ? 00:00:40 production/delayed_job.0
I noticed that this specific process had started days before my latest deploy. As soon as I killed it, the errors were gone. What I assume happened in my case is that my latest deploy did not stop all delayed_job instances correctly.
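If you hit the same pattern, the cleanup is to kill the stale process and restart the workers through the normal daemon script (a sketch; the pid comes from your own ps output, the wrapper may be script/delayed_job in older apps, and you may need your usual worker-count flags):

kill 2592                                      # stop the stale worker left over from the previous deploy
RAILS_ENV=production bin/delayed_job restart   # restart the managed workers cleanly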
I'm in the process of upgrading an installation of Redmine from 3.0.3 to 3.3.3.
The process I always follow for this is to install a fresh Redmine on a new machine, import an sqldump from the current one, then copy the important stuff (the files directory, config.yml, database.yml, plugins) over and run all the necessary steps. This has generally worked well in the past.
At the moment, after importing the sqldump, Redmine isn't starting and I'm getting an error I'm not able to figure out.
The mysql import appears to work:
mysql -u 'user' -p'mypassword' redmine < /home/redmine20170608.sql
Then I do the usual steps, which all run with no errors:
bundle exec rake redmine:plugins:migrate RAILS_ENV=production
bundle exec rake db:migrate RAILS_ENV=production
bundle exec rake tmp:sessions:clear
bundle exec rake tmp:cache:clear
sudo service httpd restart
When I navigate to myredmine.com I get the "Internal Error" message. Checking the logs, the output is:
ActiveRecord::StatementInvalid (Mysql2::Error: Unknown column 'tokens.updated_on' in 'field list': UPDATE `tokens` SET `tokens`.`updated_on` = '2017-06-09 07:10:56.515511' WHERE `tokens`.`user_id` = 1 AND `tokens`.`value` = '5a229e24fe73e8a43768c46af2275a8b4a60c9b3' AND `tokens`.`action` = 'session'):
app/models/user.rb:425:in `verify_session_token'
app/controllers/application_controller.rb:77:in `session_expired?'
app/controllers/application_controller.rb:67:in `session_expiration'
Migrating to CreateRolesManagedRoles (20150528092912)
Started GET "/" for 72.155.92.149 at 2017-06-09 07:16:14 +0000
Processing by WelcomeController#index as HTML
Completed 500 Internal Server Error in 25ms (ActiveRecord: 1.8ms)
ActiveRecord::StatementInvalid (Mysql2::Error: Unknown column 'tokens.updated_on' in 'field list': UPDATE `tokens` SET `tokens`.`updated_on` = '2017-06-09 07:16:14.896744' WHERE `tokens`.`user_id` = 1 AND `tokens`.`value` = '5a229e24fe73e8a43768c46af2275a8b4a60c9b3' AND `tokens`.`action` = 'session'):
app/models/user.rb:425:in `verify_session_token'
app/controllers/application_controller.rb:77:in `session_expired?'
app/controllers/application_controller.rb:67:in `session_expiration'
This is the code from line 425 of that file:
scope.update_all(:updated_on => Time.now) == 1
Which is inside this section:
# Returns true if token is a valid session token for the user whose id is user_id
def self.verify_session_token(user_id, token)
  return false if user_id.blank? || token.blank?
  scope = Token.where(:user_id => user_id, :value => token.to_s, :action => 'session')
  if Setting.session_lifetime?
    scope = scope.where("created_on > ?", Setting.session_lifetime.to_i.minutes.ago)
  end
  if Setting.session_timeout?
    scope = scope.where("updated_on > ?", Setting.session_timeout.to_i.minutes.ago)
  end
  scope.update_all(:updated_on => Time.now) == 1
end
I usually find the error output for these to be relatively self-explanatory, but I don't know how to interpret this one.
I've deleted all of the plugins to make sure it's not a compatibility issue, and I'm still getting the same problem.
The current Redmine is 3.0.3, running on Ruby 1.9.3-p551, Rails 4.2.1 and AWS Linux AMI 2010.03 (which I am advised to move away from).
The new Redmine is 3.3.3, running on Ruby 2.2.5-p319, Rails 4.2.7.1 and CentOS 7.
Any help greatly appreciated.
As discussed in the comments, the error is:
ActiveRecord::StatementInvalid (Mysql2::Error: Unknown column 'tokens.updated_on'
There is no updated_on column on the tokens table, yet line 425 tries to update it:
scope.update_all(:updated_on => Time.now) == 1
You need to add a migration for that column.
Run the following commands in your terminal from the app's root folder:
rails g migration AddUpdatedOnToToken updated_on:datetime
rake db:migrate
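The generated migration should end up looking roughly like this (a sketch; the class body comes from the generator, so treat this as illustrative):

class AddUpdatedOnToToken < ActiveRecord::Migration
  def change
    add_column :tokens, :updated_on, :datetime
  end
end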
The answer was to manually add the required column. The way described in the first answer didn't work; as noted, I kept getting permission denied. This is weird because there are only two MySQL users and both have access to that database.
So the way to fix this was to log into mysql as the Redmine user and run these commands:
USE mydatabase;
ALTER TABLE tokens ADD updated_on VARCHAR(60);
And the issue was resolved - I was able to continue and access Redmine with no issues.
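To confirm the column actually landed, a quick check from the same MySQL session (a sketch):

SHOW COLUMNS FROM tokens LIKE 'updated_on';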
I'm using Laravel and Supervisord to keep php artisan queue:listen running. It has run well for a long time, but now I'm suddenly getting this error, after which the job restarts:
[2016-02-19 14:49:23] production.ERROR: exception 'Illuminate\Database\QueryException' with message 'SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction (SQL: update `accounts` set `updated_at` = 2016-02-19 14:49:23, `ReceivableBalance` = 11968.1419330000, `RecoupableIncomeTotal` = 0, `RecoupableExpenseTotal` = 0 where `id` = 74)' in /home/ec2-user/MPWLaravel/vendor/laravel/framework/src/Illuminate/Database/Connection.php:555
...
I read a few things saying that deadlocks can happen when too many queue workers are running, but I should only have one.
When I run ps aux | grep artisan I get:
ec2-user 12838 0.2 0.6 348288 26200 ? S 19:42 0:01 php artisan queue:listen --timeout=600
ec2-user 12920 76.3 2.0 484132 78212 ? R 19:49 3:47 php artisan queue:work --queue=https://sqs.us-east-1.amazonaws.com/129423672202/MpwNewProduction --delay=0 --memory=128 --sleep=3 --tries=0 --env=production
That suggests two queue workers are running, which may cause the deadlock? Or is that just one worker, with queue:listen simply still polling for new messages?
So lost.
It ended up being a second queue listener running that I was unaware of. Posting this for anyone who stumbles upon a similar issue.
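A quick way to check for strays (a sketch; <pid> stands in for whatever stray process id your own output shows):

ps aux | grep 'artisan queue'   # list every listener/worker actually running
sudo supervisorctl status       # compare with what Supervisord thinks it manages
kill <pid>                      # stop any listener Supervisord doesn't know about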
During this step of brew install mysql:
==> Pouring mysql-5.7.9.el_capitan.bottle.1.tar.gz
the following keeps appearing in my log every 10 seconds:
Log:
12/8/15 2:54:58.681 PM com.apple.xpc.launchd[1]: (com.oracle.oss.mysql.mysqld[38555]) Service could not initialize: Unable to set current working directory. error = 2: No such file or directory, path = /usr/local/mysql: 15B42: xpcproxy + 12028 [1353][19011403-4854-3CCD-9FCF-49C36302EB40]: 0x2
12/8/15 2:54:58.681 PM com.apple.xpc.launchd[1]: (com.oracle.oss.mysql.mysqld) Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Now, I can run the SQL server, but launchd keeps making these respawn attempts and I don't know how to fix it. Even when I stop the server, they keep occurring.
I faced the same problem; it seems you have a MySQL installation that did not come from Homebrew.
Deleting /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist and restarting the machine solved the problem.
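The same fix should work without a full restart by unloading the job first (a sketch, assuming the plist path from the log above):

sudo launchctl unload /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist   # stop launchd respawning it
sudo rm /Library/LaunchDaemons/com.oracle.oss.mysql.mysqld.plist                 # remove the daemon definition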
I am using the Parallel gem with Rails 3 and am getting issues with MySQL threads, even with a simple line like:
Parallel.each(User.all, :in_processes => 1) { |r| puts r.username }
It works the first time through and then fails the second. Here is the error I get:
ruby-1.8.7-p330 :035 > Parallel.each(User.all, :in_processes => 1) { |r| puts r.username }
ActiveRecord::StatementInvalid: Mysql::Error: MySQL server has gone away: SELECT `users`.* FROM `users`
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:202:in `log'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/connection_adapters/mysql_adapter.rb:289:in `execute'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/connection_adapters/mysql_adapter.rb:619:in `select'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract/query_cache.rb:56:in `select_all'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/base.rb:467:in `find_by_sql'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/relation.rb:64:in `to_a'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/relation/finder_methods.rb:143:in `all'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/base.rb:439:in `__send__'
from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-ecom1-rails3/gems/activerecord-3.0.3/lib/active_record/base.rb:439:in `all'
from (irb):35
from (null):0
If I do the non-parallel version, it works fine:
User.all.each { |r| puts r.username }
I am using the mysql gem, but have tried mysql2 and mysqlplus.
Running on OSX.
I am thinking it's an issue with how ActiveRecord and the mysql gem work with threads.
From what I've read, it might be that I need to tweak the MySQL settings to make it more concurrency-friendly, although the alternate gems seem to handle concurrency better.
I have raised this as a query with the gem - https://github.com/grosser/parallel/issues/9#comment_844380 - but this seems more like a fundamental issue of how to set up MySQL with Ruby for concurrent access...
So, my question is - is there a definitive configuration for Rails3 and mysql for concurrent DB access?
Thanks, Chris
EDIT
What seems to be working is splitting this into two queries: one to get the ids, then looping through the ids in parallel and, within the loop, re-fetching the record by id.
ids = User.all.map { |u| u.id }
Parallel.each(ids, :in_processes => 1) do |uid|
  ActiveRecord::Base.connection.reconnect!
  r = User.find(uid)
  puts r.username
end
You need to establish connections after forking. That's a "feature" of forking: network connections are left in an inconsistent state in the child.
Parallel.each(User.all, :in_processes => 1) do |r|
  ::ActiveRecord::Base.establish_connection
  puts r.username
end
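A variant of the same idea (my sketch, not from the original answer; it reuses the User model from the question and drops the parent's connection before forking so parent and children never share one socket):

ids = User.all.map { |u| u.id }             # read what you need before forking
ActiveRecord::Base.connection.disconnect!   # don't let children inherit a live socket
Parallel.each(ids, :in_processes => 2) do |uid|
  ActiveRecord::Base.establish_connection   # fresh connection inside the worker process
  puts User.find(uid).username
end
ActiveRecord::Base.establish_connection     # reconnect the parent afterwards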
I was getting a very similar error with the following:
pid = Process.fork
if pid
  Process.detach(pid)
else
  # Perform long task using ActiveRecord
  do_stuff
end
If I hit the server with a request while do_stuff was running, it would kill the task and throw an exception:
ActiveRecord::StatementInvalid (Mysql2::Error: Lost connection to MySQL server during query: ...
Adding Francois' suggestion fixed my problem:
pid = Process.fork
if pid
  Process.detach(pid)
else
  # Perform long task using ActiveRecord
  ActiveRecord::Base.establish_connection
  do_stuff
end
Thanks Francois!
I'm switching the database of a Rails 3.0.3 app I have developed from Postgres to MySQL so that I can avail of Amazon's RDS. Before I make the change, I have been running my test code using MySQL on my dev machine with the mysql2 adapter. My test code is throwing up some errors that I haven't quite been able to get to the bottom of yet.
Basically I have a model that is used to store large XML uploads. My test code looks something like this:
test "xml upload for large file" do
file = File.new("test/files/lib/upload_sample.xml")
upload = XmlUpload.create(:xml_contents => contents = file.read)
.....
.....
end
The create line is throwing up the following error
ActiveRecord::StatementInvalid: Mysql2::Error: SAVEPOINT active_record_1 does not exist: ROLLBACK TO SAVEPOINT active_record_1
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:202:in `rescue in log'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract_adapter.rb:194:in `log'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/mysql2-0.2.6/lib/active_record/connection_adapters/mysql2_adapter.rb:314:in `execute'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/mysql2-0.2.6/lib/active_record/connection_adapters/mysql2_adapter.rb:358:in `rollback_to_savepoint'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract/database_statements.rb:149:in `rescue in transaction'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/connection_adapters/abstract/database_statements.rb:127:in `transaction'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/transactions.rb:204:in `transaction'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/transactions.rb:287:in `with_transaction_returning_status'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/transactions.rb:237:in `block in save'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/transactions.rb:248:in `rollback_active_record_state!'
/Users/conor/.rvm/gems/ruby-1.9.2-p136/gems/activerecord-3.0.3/lib/active_record/transactions.rb:236:in `save'
....
I have been storing the file contents in a text field. I realise that I should seriously look at storing the files in S3, but this is the setup that I have at the moment. In Postgres everything worked fine, but in order to get things to work with MySQL I had to set the :limit option so that LONGTEXT was used instead of the standard TEXT field. The files can be quite large, but when I test using small files there are no problems.
I could be barking up the wrong tree entirely, but I suspect the problem may be caused by the database connection being dropped, based on the errors thrown up when I try uploading a file in development mode. I did some checking on this error and I'm not sure what could be dropping the connection; the file isn't taking 8 hours (the default MySQL wait_timeout) to insert:
Mysql2::Error: MySQL server has gone away: INSERT INTO xml_uploads ........
My database.yml settings are the following:
test:
  adapter: mysql2
  encoding: utf8
  reconnect: true
  database: app_test
  username: username
  password: password
  host: localhost
Does anyone have any clues as to what the problem is and how it can be fixed? Any help with this would be greatly appreciated.
I decided to go with storing the data in S3 anyway, but a friend did point me in the direction of the solution to this issue. I tested it and it worked, so I thought I should post it here in case anyone else runs into the same problem.
Basically the problem is caused by the max_allowed_packet variable being set to something smaller than the blob/text field size. The query can't be executed, so the connection gets dropped. Here are some details about the max_allowed_packet variable:
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_max_allowed_packet
and also some info on adjusting it on RDS instances:
http://www.henrybaxter.ca/?p=111
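For a quick check outside of RDS, you can inspect and raise the limit from a MySQL session (a sketch; 64 MB is an arbitrary example value, the SET GLOBAL change only lasts until the server restarts, and on RDS you change it via a parameter group instead):

SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 67108864;  -- 64 MB; open a new connection for it to apply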