Using fork in Ruby on Rails to create a parallel process - MySQL

I have a Rails 3 app in production with Passenger on Apache. I have this code:
class Billing < ActiveRecord::Base
  after_save :sendEmails

  private

  def sendEmails
    fork do
      UserMailer.clientBilling(self.user, self).deliver
    end
  end
end
On localhost, when the app creates a billing, it sends an email to the user after the record is saved, and everything works fine. But on the server, after the app creates a billing, it throws errors related to the mysql2 gem, such as "MySQL server has gone away" or "Lost connection to MySQL server", and the email is never sent. If I remove the fork it works fine, but I want to keep the fork: sending the email takes too long, so I want it to happen in a separate process. What could be the problem?

The problem is that a forked process inherits some of its parent's resources, such as its open file descriptors. In particular, one such shared resource is the MySQL connection. When the child process finishes sending its email and exits, it closes the MySQL connection, which also closes the parent process's connection.
If you do continue down this path (and it is fraught with similar subtleties), then you need to do something like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!

fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection

  # The rest of the code for each fork...
end
You'll have to do a similar thing with services like memcached or MongoDB if you use those.
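Putting that together with the model from the question, a minimal sketch might look like this (it adds Process.detach, which is standard Ruby, so the child is reaped and doesn't linger as a zombie; everything else is the question's own code):
class Billing < ActiveRecord::Base
  after_save :sendEmails

  private

  def sendEmails
    # Drop the shared MySQL connection before forking so the child
    # cannot close the parent's socket when it exits.
    ActiveRecord::Base.clear_all_connections!

    pid = fork do
      # The child establishes its own fresh connection.
      ActiveRecord::Base.establish_connection
      UserMailer.clientBilling(self.user, self).deliver
    end

    # Reap the child in a background thread to avoid zombie processes.
    Process.detach(pid)

    # The parent re-opens its own connection lazily on next use.
  end
end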

Be extremely careful when using fork with Rails/Passenger; it can get very messy! Instead, you should use a background-job library such as Resque or delayed_job for this task!
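For example, with delayed_job the mailer call from the question could be queued instead of forked. A minimal sketch, assuming the delayed_job gem is installed and its jobs table has been migrated:
class Billing < ActiveRecord::Base
  after_save :sendEmails

  private

  def sendEmails
    # delayed_job serializes this mailer call into the jobs table;
    # a separate worker (started with `rake jobs:work`) performs it
    # and delivers the email, so the web process never blocks or forks.
    UserMailer.delay.clientBilling(self.user, self)
  end
end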

You can re-establish the connection inside the fork:
# Re-read database.yml and connect for the current Rails environment
dbconfig = YAML::load(File.open('your_app_dir/config/database.yml'))
ActiveRecord::Base.establish_connection(dbconfig[Rails.env])

Related

VerneMQ plugin_chain_exhausted Authentication MySQL

I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (CloudSQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection as "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder until it just won't connect anymore.
If I restart VerneMQ, everything goes back to normal.
I have at most 3 clients connected at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time; the status web page reports no problem.
My VerneMQ server was built from the git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be let in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that before.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq_passwd and vmq_acl plugins?
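For reference, both of those plugins ship enabled by default and can be switched off in vernemq.conf; a sketch using the standard config keys (double-check against your own config):
plugins.vmq_passwd = off
plugins.vmq_acl = off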

EF Core MySql connections not closing

We have developed a project in .NET Core and Entity Framework Core using the MySQL NuGet package.
The context is added to dependency injection using the following line:
services.AddDbContext<ReadWriteContext>(options => options.UseMySQL(Configuration["Machine:ReadWriteConnectionString"]));
Then in a controller, this is injected as such:
public class SystemController : Controller
{
    private readonly ReadWriteContext _dataContext;

    public SystemController(ReadWriteContext dataContext)
    {
        _dataContext = dataContext;
    }
    ...
}
And used as such:
var hasServices = await _dataContext.Services.AnyAsync();
In the logs we see the opening and closing log lines:
Opening connection to database 'config_service' on server '10.211.55.5'.
Closing connection to database 'config_service' on server '10.211.55.5'.
However, when we look at the MySQL server and run "show full processlist", the connections are still showing in the sleep state and never close. When we stop the .NET process, the connections then close and disappear from the MySQL process list.
How do I get the connections to close when the request is finished? AddDbContext should be scoped to the current request, but it does not appear to properly close the connections.
Any help would be great.
I'm not familiar with MySQL, but if you were in a SQL Server context you would be looking at the "connection pool": https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling.
That is, your application keeps a set of active connections to speed up the process of reaching the database server. So even if you tell the pool "I don't need this connection anymore", it does not believe you and keeps the connection open... just in case.
For how long? Well, that depends on logic beyond the scope of the application. As people say: it is the connection pool's realm, just let it do its job.
So the answer to your question is: you can't explicitly close the connection to the server. The connection pool will decide when to close it or not.
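If you want to confirm that the pool is what keeps those sleeping connections open, most ADO.NET MySQL providers accept a Pooling keyword in the connection string, so a purely diagnostic sketch (not a production recommendation; check your provider's documentation for the exact syntax) would be:
server=10.211.55.5;database=config_service;user=...;password=...;Pooling=false
With pooling disabled, the connections should disappear from "show full processlist" as soon as each request finishes, at the cost of paying the full connection handshake on every request.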

How to keep server and application separate

I have a Node.js application running on a server with node-mysql and Express. At first I faced a problem where some exceptions were not handled and the application would go down with network connectivity issues.
I handled all uncaught exceptions, and the server wouldn't go down this time, but instead it would hang. I figured it was because I returned a response only if the query didn't raise an exception, so I handled all query-related exceptions too.
Next, if the MySQL server terminated the connection for some reason, my application wouldn't reconnect; I tried reconnecting, but it would give an error related to "enqueue connection handshake or something". From another Stack Overflow question it seemed I was supposed to use a connection pool so that if the server terminates a connection the app regains connectivity, which I did.
My question here is that each time I faced an issue I had to shut down the whole application, and thanks to Node.js, where the server is configured programmatically, the server goes down too. Can I, or better yet how can I, decouple my server and application almost completely, so that if I make some change in my application I wouldn't have to re-deploy?
Especially because right now everything is okay in the development version, while the application constantly gives me connection pool errors on the server; even if I restart the application, I am not sure how I will reproduce this problem so I can properly diagnose it.
Let me know if anyone needs more info regarding my question.
Are you using a front-end framework to serve your application, or are you serving it all from server calls?
So fundamentally, if your server barfs for any reason (i.e. a 500 error), you WANT to shut down and restart, because once your server is in that state, all of your in-transit data and your stack are in an unknown state. There's no way to correctly recover from that, so you are safer, from both a server and an end-user point of view, to shut down the process and restart.
You can minimise the impact of this by using something like Node's Cluster module, which allows you to fork child processes of your server, generating multiple instances of the same server, connected to the same database and accessible on the same port. That way, if a user (or the server itself) hits an unhandled exception, the affected process can die and be restarted without taking down your entire server.
Edit: Here's a snippet:
var cluster = require('cluster');
var threads = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  for (var i = 0; i < threads; i++) {
    cluster.fork();
  }

  // When a worker dies, log it and fork a replacement.
  cluster.on('exit', function(dead, code, signal) {
    console.log('worker ' + dead.process.pid + ' died.');
    var worker = cluster.fork();
    console.log('worker ' + worker.process.pid + ' started');
  });
} else {
  //
  // do your server logic in here
}
That being said, there's no way for you to run your application and server separately if Node is serving your client content. Once you terminate your server, your endpoints are down. If you really want to be able to keep a client-side application active while rebooting your server, you'd have to separate the logic entirely, i.e. keep your application in a different project from your server, and use the server as API endpoints only.
As for connection pools in node-mysql: I have never used that module, so I couldn't say what best practice is there.

PHP MySQL Connection Timed Out on private network

I am getting intermittent 'Connection Timed Out' errors when a PHP script on my web server connects to the MySQL database server over the private network. However, if I tell the script to use the public network for the connection, these errors do not appear.
My connection script is set up so that whenever I try to connect to MySQL, it checks for errors; if there is an error, it sends me an email and then automatically switches to the public network to try that connection. If the public connection fails, it sends me another email and displays a custom web page to the user.
I get about 5 to 10 connection errors every hour. There are hundreds of successful connections every minute.
These machines are dedicated machines. I contacted our hosting company and they tested the routers and cables and said everything is fine. I tried pinging the servers both ways and there are no errors at all for test periods over an hour.
I am using the latest Nginx with the latest PHP and PHP-FPM. MySQL is 5.5.27. These are CentOS 6 64-bit systems with the latest updates.
I've tried many network configuration options and adjustments to the PHP-FPM & MySQL config files, and no matter what I do or change, nothing fixes it.
The weird thing is, everything works great over the public network, and pings and file transfers work great over the private network between both machines.
Any ideas?
** UPDATE **
I made some changes to the PHP-FPM config file and to the MySQL config file, and the errors are now down to about 2 or 3 per hour, but the problem is still unresolved.
I'm not sure this is your case, but it's still worth mentioning, as it helped me in a similar situation. Basically, there is a cap on the max number of connections in the Linux kernel: https://serverfault.com/questions/10852/what-limits-the-maximum-number-of-connections-on-a-linux-server
Not sure if it is shared between all the networks, but if you think it's worth checking, I'd raise those variable values, say, twofold and see if it has any effect on how frequently the error happens.
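For example, these are the kinds of kernel limits you could inspect and temporarily raise on CentOS (a sketch; which limit, if any, is actually being hit depends on your traffic):
# Inspect the current values
sysctl net.core.somaxconn              # pending-connection backlog per listening socket
sysctl net.ipv4.ip_local_port_range    # ephemeral ports for outgoing connections
sysctl fs.file-max                     # system-wide open file descriptor cap

# Raise them for the running kernel (not persistent across reboots)
sysctl -w net.core.somaxconn=1024
sysctl -w net.ipv4.ip_local_port_range="15000 65000"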

How do I reset a Datamapper connection after Passenger forks a worker process?

After upgrading several parts of my Rails app (Ruby 1.9.2, Rails 3.0.4, DataMapper 1.1.0) and moving to Passenger Standalone, we started getting weird MySQL connection errors, including:
Field-count mismatch
Lost connection to MySQL server during query
MySQL server has gone away
Then I remembered that Passenger forks processes, and you need to re-open new connections to things like Redis, memcached, etc., or the data stream will get garbled, and I found another post recounting similar adventures due to the same problem with MySQL.
But I also recalled reading here that Passenger took care of the database connections automatically.
So I have two questions:
1) How do I tell DataMapper to create and use a new database connection? And/or:
2) Does forking Passenger take care of the forking database connections automatically or not? For fork's sake... ;)
To answer #2, no, Passenger itself does not handle closing file handles after forking. You have to manage it yourself, unless your gem does it for you.
To answer #1, I cobbled together some things I found. Add this to environment.rb and let me know if it works!
if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    if forked
      # We're in smart spawning mode.
      DataObjects::Pooling.pools.each do |pool|
        pool.dispose
      end
    else
      # We're in direct spawning mode. We don't need to do anything.
    end
  end
end