Laravel Supervisord DB Deadlock - mysql

I'm using Laravel and Supervisord to keep php artisan queue:listen running. It has run well for a long time, but now I'm suddenly getting this error, after which the job restarts:
[2016-02-19 14:49:23] production.ERROR: exception 'Illuminate\Database\QueryException' with message 'SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction (SQL: update `accounts` set `updated_at` = 2016-02-19 14:49:23, `ReceivableBalance` = 11968.1419330000, `RecoupableIncomeTotal` = 0, `RecoupableExpenseTotal` = 0 where `id` = 74)' in /home/ec2-user/MPWLaravel/vendor/laravel/framework/src/Illuminate/Database/Connection.php:555
...
I read a few things that said I was getting a deadlock because too many queue workers were running, but I should only have one.
When I run ps aux | grep artisan, I get:
ec2-user 12838 0.2 0.6 348288 26200 ? S 19:42 0:01 php artisan queue:listen --timeout=600
ec2-user 12920 76.3 2.0 484132 78212 ? R 19:49 3:47 php artisan queue:work --queue=https://sqs.us-east-1.amazonaws.com/129423672202/MpwNewProduction --delay=0 --memory=128 --sleep=3 --tries=0 --env=production
Does this suggest two queue workers are running, which might cause the deadlock? Or is just one worker running, with queue:listen simply still checking for new messages?
So lost.

It ended up being a second queue listener running that I was unaware of. Leaving this here for anyone who may stumble upon a similar issue.
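For the record, queue:listen spawns a queue:work child process to handle jobs, so seeing one of each in ps is normal for a single listener. If you want Supervisord itself to guarantee a single listener, a minimal program block sketch (the program name is assumed; the paths are taken from the error above, so adjust to your deployment) looks like:

[program:laravel-queue-listen]
command=php /home/ec2-user/MPWLaravel/artisan queue:listen --timeout=600
numprocs=1
autostart=true
autorestart=true
user=ec2-user

With numprocs=1, Supervisord starts exactly one listener, so any additional queue:listen process found via ps aux | grep queue:listen was started outside Supervisord and can safely be killed.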

Related

Private Ethereum: Got error (Transaction was not mined within 750 seconds) when saving migration to chain on a private Ethereum network

I tried to deploy the Migrations contract using truffle migrate, but it hung and showed the error message below. Please tell me if I configured anything wrong.
Saving migration to chain. {
"jsonrpc": "2.0",
"id": 1574154369501,
"result": "0x"
}
Error: Error: Error: Transaction was not mined within 750 seconds, please make sure your transaction was properly sent. Be aware that it might still be mined!
at Object.run (/usr/local/lib/node_modules/truffle/build/webpack:/packages/migrate/index.js:92:1)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
Truffle v5.1.0 (core: 5.1.0)
Node v8.10.0
This problem was fixed. The root cause was the environment. At first I ran this on Ubuntu in VirtualBox, where performance was quite slow. Then I ran it on the Windows host instead; it was fast and worked.
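For anyone debugging a similar hang, one quick sanity check (assuming a local geth node on the default RPC port) is to confirm the private chain is actually producing blocks while the migration is pending:

geth attach http://127.0.0.1:8545
> eth.blockNumber    // note the value, wait a bit, then check again
> eth.mining         // should be true on a mining private chain

If the block number never advances, no transaction can be mined within the 750 seconds mentioned in the error, whatever the Truffle configuration.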

Intrusion Detection System OSSEC

I configured OSSEC by following the procedure from https://blog.rapid7.com/2017/06/30/how-to-install-and-configure-ossec-on-ubuntu-linux/. But after configuration, when I tried /var/ossec/bin/ossec-control restart,
I got
ossec-monitord not running ..
ossec-logcollector not running ..
ossec-remoted not running ..
ossec-syscheckd not running ..
ossec-analysisd not running ..
ossec-maild not running ..
ossec-execd not running ..
OSSEC HIDS v2.9.0 Stopped
Starting OSSEC HIDS v2.9.0 (by Trend Micro Inc.)...
OSSEC analysisd: Testing rules failed. Configuration error. Exiting.
In logtest, I got
Error reading XML file '/var/ossec/etc/ossec.conf': XMLERR: Element 'syscheck' not closed. (line 252).
2018/05/22 15:20:59 ossec-testrule(1202): ERROR: Configuration error at '/var/ossec/etc/ossec.conf'. Exiting.
How can I solve this problem?
You have to close the tag in your config file.
Edit ossec.conf:
Type </syscheck> to close the <syscheck> element where you opened it.
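For illustration, a correctly closed syscheck element looks like this minimal sketch (the frequency and directories values shown are just the stock defaults, not required settings):

<syscheck>
  <frequency>79200</frequency>
  <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
</syscheck>

After saving, you can re-run /var/ossec/bin/ossec-logtest to confirm the file parses before restarting OSSEC.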

SonarRunner failed - Analysis is already running

Hi, I'm running a Jenkins build with sonarRunner (in Gradle), and the sonarRunner task is failing with the following error message. Any ideas how to fix it?
09:57:21.410 ERROR - It looks like an analysis of 'MyProject' is already running (started less than a minute ago).
:****Runner FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':****Runner'.
> org.****.api.utils.SonarException: The project is already being analysed.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
OK, that was quick.
I used -Dsonar.forceAnalysis=true and the error went away.
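If you'd rather not pass the flag on every invocation, the same property can be set in the Gradle build script. This is a sketch against the legacy sonar-runner plugin's sonarRunner block, so adjust it to the plugin version you actually use:

sonarRunner {
    sonarProperties {
        // Equivalent of -Dsonar.forceAnalysis=true on the command line
        property "sonar.forceAnalysis", "true"
    }
}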

Ruby - Exception occured: [Mysql2::Error] closed MySQL connection

I have a Rails application that currently only runs internally, so there aren't many visits right now. There are two Resque workers running constantly to fetch data from the web and insert it into a MySQL database; every insert is followed by a 10-second sleep.
We run it on a VPS. Roughly every 5 hours, I encounter the exception "Exception occured: [Mysql2::Error] closed MySQL connection".
What could be causing the exception? The pool size is currently 5.
Will it help if I raise the pool size and specify reconnect: true in my database.yml?
This is a common issue when using the mysql2 gem version 0.2.11 or below in combination with multithreading. There is a bug on the issue tracker with details on the problem, but in conclusion the advice is to:
Update the gem version you are using to >= 0.2.12
Add the reconnect: true option to your db connection config in the database.yml file
You probably already solved your problem but this might help others who come across this question.
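For reference, a minimal database.yml sketch with both settings (the database name and credentials here are placeholders):

production:
  adapter: mysql2
  database: myapp_production
  username: myapp
  password: secret
  pool: 5
  reconnect: true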
If your workers are inactive for a long period of time, they'll lose their MySQL connection.
See here for the solution, or just stick this in an initializer:
unless Rails.env.to_s == 'test'
  module ActiveRecord::ConnectionAdapters
    class Mysql2Adapter
      # Keep a handle on the original implementation so we can delegate to it.
      alias_method :execute_without_retry, :execute

      # Reconnect and retry a statement when the server has dropped the
      # connection, instead of letting the error bubble up.
      def execute(*args)
        execute_without_retry(*args)
      rescue Exception => e
        if e.message =~ /server has gone away/i
          warn "Server timed out, retrying"
          reconnect!
          retry
        else
          raise e
        end
      end
    end
  end
end
Here is more insight on how to debug this for delayed_job. I did the following after setting reconnect: true in database.yml, when that alone was not fixing it.
cd /your_rails_deploy_code/log
cat production.log
# check the pids from delayed job:
E, [2017-02-01T19:45:21.614579 #2592] ERROR -- : 2017-02-01T19:45:21+0000: [Worker(delayed_job.3 host:demeter pid:2592)] Job ActiveJob::QueueAdapters::DelayedJobAdapter::JobWrapper (id=193675) FAILED (0 prior attempts) with Mysql2::Error: closed MySQL connection
In my specific case, pid:2592 is the only one constantly failing. Why? Let's find out:
[deploy#demeter] ps -ef | grep 2592
deploy 2592 1 0 Jan31 ? 00:00:40 production/delayed_job.3
deploy 23312 1 0 Feb01 ? 00:00:40 production/delayed_job.1
deploy 23318 1 0 Feb01 ? 00:00:40 production/delayed_job.0
I noticed that specific process had started days before my latest deploy. As soon as I killed it, the errors were gone. What I assume happened in my specific case is that my latest deploy did not stop all delayed_job instances correctly.
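In a case like this, assuming the standard delayed_job daemon script, the cleanup is to kill the orphaned pid and restart the workers so every process comes from the current release:

kill 2592
RAILS_ENV=production bin/delayed_job restart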

Hudson build timed out | how to perform cleanup in such a case?

Suppose a particular job has been executing for more than x minutes, and I abort it using the Build Timeout plugin.
Now, after the build is aborted in this manner, I want to perform some cleanup actions. How can I do that?
How can I kill all processes that my build started?
To perform such an action I found a post-build plugin: http://wiki.hudson-ci.org/display/HUDSON/Post+build+task
But I get the following error when it tries to execute my bat script:
Build timed out. Aborting
Build was aborted
Performing Post build task...
Match found for :aborted : True
Logical operation result is TRUE
Running script : C:\aks.bat
[workspace] $ cmd /c call C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\hudson6930497910994453428.bat
Exception when executing the batch command : null
Finished: ABORTED
Why is that? Can anyone suggest some other plugin?
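As for the cleanup script itself, a minimal sketch of what C:\aks.bat might do is kill the build's processes by image name; the image name below is a placeholder for whatever your build actually spawns:

rem Kill build-spawned processes by image name (placeholder name).
taskkill /F /IM mybuildtool.exe
rem Exit 0 so the post-build task itself does not fail.
exit /b 0

Note also that Hudson's ProcessTreeKiller normally kills processes spawned by an aborted build automatically (it tracks them via the BUILD_ID environment variable), so explicit cleanup is only needed for processes that escape it.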