EF Core MySql connections not closing

We have developed a project in .NET Core and Entity Framework Core using the MySql nuget package.
The context is added to dependency injection using the following line:
services.AddDbContext<ReadWriteContext>(options => options.UseMySQL(Configuration["Machine:ReadWriteConnectionString"]));
Then in a controller, this is injected as such:
public class SystemController : Controller
{
    private readonly ReadWriteContext _dataContext;

    public SystemController(ReadWriteContext dataContext)
    {
        _dataContext = dataContext;
    }
    ...
}
And used as such:
var hasServices = await _dataContext.Services.AnyAsync();
In the logs we see the opening and closing log lines:
Opening connection to database 'config_service' on server '10.211.55.5'.
Closing connection to database 'config_service' on server '10.211.55.5'.
However, when we look at the MySql server and run "show full processlist", the connections still show as being in the sleep state and never close. When we stop the .NET process, the connections then close and disappear from the MySql process list.
How do I get the connections to close when the request is finished? The AddDbContext should be scoped to the current request, but it does not appear to properly close the connections.
Any help would be greatly appreciated.

I'm not familiar with MySql. That said, if you were in a SQL Server context you would be facing the "connection pool": https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling.
That is, your application keeps a set of active connections to speed up the process of reaching the data server. Thus even if you tell the pool "I don't need this connection anymore", it does not believe you and keeps the connection open... just in case.
For how long? Well, that depends on logic beyond the scope of the application. As the saying goes: it is the connection pool's realm, just let it do its job.
So the answer to your question is: you can't explicitly close the connection to the server. The connection pool will decide when to close it.
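One practical consequence: if the sleeping pooled connections are actually a problem, the knobs live in the connection string, not in EF Core. A minimal sketch, assuming Connector/NET's documented option names (Pooling, Connection Lifetime, Maximum Pool Size); the server address and credentials below are placeholders, so check your provider's docs for the exact spelling:
// Sketch only: pooling is controlled through the connection string.
// "Connection Lifetime" / "Maximum Pool Size" / "Pooling" are
// Connector/NET option names; the values here are illustrative.
var connectionString =
    "server=10.211.55.5;database=config_service;" +
    "user=app;password=secret;" +
    "Connection Lifetime=60;Maximum Pool Size=20"; // or "Pooling=false"

services.AddDbContext<ReadWriteContext>(
    options => options.UseMySQL(connectionString));
Setting Pooling=false makes every close a real close, at the cost of a full TCP and auth handshake per request, so tuning the lifetime is usually the gentler option.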

Related

Load Testing Database with JMETER : force re open connection to load test queries with connection opening

I need to validate a workload on a DB used to answer HTTP API calls.
In this context, in production, a lot of connections are opened and closed. For each connection, only 2 or 3 small queries are launched. So connection 'activity' (open/close) has to be taken into account in our application.
I need to bench/test the DB without the application stack, so I'd like JMeter to query the database directly, the way the web service would.
When using/configuring the JDBC connection pool through the "JDBC Connection Configuration" element, I only see a way to define a large pool of connections that is then used to launch queries. That means the connections stay alive after a ThreadGroup scenario runs, and are reused. The real application would instead open a new connection for each scenario and close it at the end.
Is there a way to do this (make a new connection for every ThreadGroup run) in JMeter with the JDBC components?
As a workaround, I created a small script and asked JMeter to run it... but that is far heavier for the server (it launches a new process each time to execute the (PHP) script), and I couldn't load the server enough that way to reproduce the workload.
JMeter actually calls the Connection.close() function after executing the statement; under the hood the connection is returned to the pool, where it waits for the next thread that requires a connection.
If your application's behaviour is the same, you don't need to worry about anything. If it's different, you won't get such precise control with the JDBC Connection Configuration and JDBC Request sampler.
If you want to create and destroy connections manually you will have to switch to the JSR223 Sampler and implement the connection and query logic in Groovy; see the Working with a relational database Groovy user manual chapter for more details, code examples, etc.
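A minimal JSR223 Sampler sketch of that approach (Groovy): the JDBC URL, credentials, and query are placeholders, and the MySQL JDBC driver jar must be on JMeter's classpath.
import java.sql.DriverManager

// Open a brand-new connection for this sampler run, so the test pays
// the same connect/disconnect cost as the real application.
def conn = DriverManager.getConnection(
        'jdbc:mysql://db-host:3306/mydb', 'user', 'password')
try {
    def stmt = conn.createStatement()
    def rs = stmt.executeQuery('SELECT 1')
    while (rs.next()) {
        log.info('result: ' + rs.getString(1))
    }
    rs.close()
    stmt.close()
} finally {
    conn.close()   // a real close: no pool takes it back
}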

PHP + MySQL connection pool [duplicate]

Is it possible to cache database connections when using PHP like you would in a J2EE container? If so, how?
There is no connection pooling in PHP.
mysql_pconnect and connection pooling are two different things.
There are many problems associated with mysql_pconnect; you should first read the manual and use it carefully, but it is not connection pooling.
Connection pooling is a technique where the application server manages the connections. When the application needs a connection it asks the application server for one, and the application server returns one of the pooled connections if any is free.
We can do connection scaling in PHP; for that, please go through the following link: http://www.oracle.com/technetwork/articles/dsl/white-php-part1-355135.html
So, no connection pooling in PHP.
As Julio said, Apache releases all resources when the current request ends. You can use mysql_pconnect, but you are limited with that function and must be very careful. Another choice is to use the singleton pattern, but none of this is pooling.
This is a good article: https://blogs.oracle.com/opal/highly-scalable-connection-pooling-in-php
Also read this one http://www.apache2.es/2.2.2/mod/mod_dbd.html
Persistent connections are nothing like connection pooling. A persistent connection in PHP will only be reused if you make multiple db connects within the same request/script execution context. In most typical web dev scenarios you'll max out your connections much faster if you use mysql_pconnect, because your script has no way to get a reference to any open connections on your next request. The best way to use db connections in PHP is to make a singleton instance of a db object so that the connection is reused within the context of your script execution. This still incurs at least one db connect per request, but it's better than making multiple db connects per request.
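A minimal sketch of that singleton idea (hypothetical class name, placeholder credentials; the typed property needs PHP 7.4+):
<?php
// One mysqli connection, reused for the lifetime of this request only.
class Db
{
    private static ?mysqli $conn = null;

    public static function get(): mysqli
    {
        if (self::$conn === null) {
            self::$conn = new mysqli('localhost', 'user', 'password', 'mydb');
        }
        return self::$conn;
    }
}

// Every caller in this script shares the same connection.
$rows = Db::get()->query('SELECT 1');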
There is no real db connection pooling in PHP due to the nature of PHP. PHP is not an application server that can sit there between requests and manage references to a pool of open connections, at least not without some kind of major hack. I think in theory you could write an app server in PHP and run it as a command-line script that would just sit there in the background, keep a bunch of db connections open, and pass references to them to your other scripts, but I don't know if that would be possible in practice, how you'd pass the references from your command-line script to other scripts, and I rather doubt it would perform well even if you could pull it off. Anyway, that's mostly speculation. I did just notice the link someone else posted to an Apache module that allows connection pooling for prefork servers such as PHP. It looks interesting:
https://github.com/junamai2000/mod_namy_pool#readme
I suppose you're using mod_php, right?
When a PHP file finishes executing, all its state is destroyed, so there's no way (in PHP code) to do connection pooling. Instead you have to rely on extensions.
You can use mysql_pconnect so that your connections won't get closed after the page finishes; that way they get reused on the next request.
This might be all you need, but it isn't the same as connection pooling, as there's no way to specify the number of connections to keep open.
You can use MySQLi.
For more info, scroll down to the Connection pooling section at http://www.php.net/manual/en/mysqli.quickstart.connections.php#example-1622
Note that Connection pooling is also dependent on your server (i.e. Apache httpd) and its configuration.
mysqli opens a new connection only if an unused persistent connection for a given combination of "host, username, password, socket, port and default database" cannot be found in the open connection pool; otherwise it reuses an already open persistent connection, which is similar in a way to the concept of connection pooling. The use of persistent connections can be enabled and disabled using the PHP directive mysqli.allow_persistent. The total number of connections opened by a script can be limited with mysqli.max_links (this may be of interest to you for addressing a max_user_connections limit on your hosting server). The maximum number of persistent connections per PHP process can be restricted with mysqli.max_persistent.
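For reference, mysqli persistence is opt-in per connection through the "p:" host prefix (host and credentials below are placeholders):
<?php
// The "p:" prefix asks mysqli to reuse an already-open persistent
// connection with the same host/user/password/database, if one exists.
$mysqli = new mysqli('p:localhost', 'user', 'password', 'mydb');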
In a wider programming context this is a task for the web/app server; however, in this context it is handled by PHP's mysqli directives in a way that supports connection reusability. You may also implement a singleton class to get a static instance of a connection to reuse, just like in Java. Keep in mind that Java also doesn't support connection pooling as part of standard JDBC; pools are separate modules/layers on top of JDBC drivers.
Coming to PHP, the good thing is that, for the common databases in the PHP ecosystem, it does support Persistent Database Connections, which persist a connection across requests (bounded by the max_requests setting, e.g. 500, in the PHP configuration), avoiding a new connection on every request. So check the docs in detail; this solves most of your challenges. Please note that PHP is not as sophisticated as strictly object-oriented Java in terms of extensive multi-threading, concurrent processing, and powerful asynchronous event handling, so a built-in mechanism like pooling is much less practical for PHP.
You cannot instantiate connection pools manually.
But you can use the "built in" connection pooling with the mysql_pconnect function.
I would like to suggest PDO::ATTR_PERSISTENT
Persistent connections are links that do not close when the execution of your script ends. When a persistent connection is requested, PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link.
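A minimal sketch (DSN and credentials are placeholders):
<?php
// PDO::ATTR_PERSISTENT makes PHP look for an identical connection
// left open by an earlier request before creating a new one.
$pdo = new PDO(
    'mysql:host=localhost;dbname=mydb',
    'user',
    'password',
    [PDO::ATTR_PERSISTENT => true]
);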
Connection pooling works on the MySQL server side like this.
If persistent connections are enabled in the MySQL server config, MySQL keeps a connection open and in the sleep state after the requesting client (the PHP script) finishes its work and dies.
When a second request comes with the same credentials (same user name, same password, same connection parameters, same database name, maybe from the same IP; I am not sure about the IP), MySQL pulls the previous connection from the sleep state to the active state and lets the client use it. This helps MySQL save the initial setup cost of a connection and reduces the total number of connections.
So the connection pooling option is actually on the MySQL server side. On the PHP code end there is no option; mysql_pconnect() is just a wrapper that tells PHP not to send a connection close signal at the end of the script run.
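You can watch this behaviour from the server side with standard MySQL commands; wait_timeout is the server variable that decides how long a sleeping connection survives:
-- List all connections, including the ones sitting in the Sleep state.
SHOW FULL PROCESSLIST;

-- How long (in seconds) the server keeps an idle connection alive.
SHOW VARIABLES LIKE 'wait_timeout';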
For features such as connection pooling you need to install the Swoole extension first: https://openswoole.com/
It adds async features to PHP.
After that it's trivial to add MySQL and Redis connection pooling:
https://github.com/open-smf/connection-pool
Some PHP frameworks come with pooling built in: https://hyperf.wiki/2.2/#/en/pool

How to keep server and application separate

I have a Node.js application running on a server with node-mysql and express. At first I faced a problem where some exceptions were not handled and the application would go down with network connectivity issues.
I handled all uncaught exceptions, and this time the server wouldn't go down but would instead hang. I figured it was because I returned a response only if the query didn't raise an exception, so I handled all query-related exceptions too.
Next, if the MySQL server terminated the connection for some reason, my application wouldn't reconnect. I tried reconnecting, but it would give an error related to "enqueue connection handshake" or something similar. Per another Stack Overflow question I was supposed to use a connection pool so that if the server terminates a connection the application regains connectivity somehow, which I did.
My question here is that each time I faced an issue I had to shut down the whole application, and since in Node.js the server is configured programmatically, the server goes down too. Can I, or better yet how can I, decouple my server and application almost completely, so that if I make some change in my application I don't have to re-deploy?
Especially for the case that right now everything is okay, yet my application constantly gives me a connection pool error on the server while the development version works fine; even if I restart my application I'm not sure when I will face this problem again, so I can't properly diagnose it.
Let me know if anyone needs more info regarding my question.
Are you using a front-end framework to serve your application, or are you serving it all from server calls?
So fundamentally, if your server barfs for any reason (i.e. a 500 error), you WANT to shut down and restart, because once your server is in that state, all of your in-transit data and your stack are in an unknown state. There's no way to correctly recover from that, so you are safer, from both a server and an end-user point of view, to shut down the process and restart.
You can minimise the impact of this by using something like Node's Cluster module, which allows you to fork child processes of your server: multiple instances of the same server, connected to the same database and listening on the same port. That way, if a user (or the server) hits an unhandled exception, the dead process can be replaced without shutting down your entire server.
Edit: Here's a snippet:
var cluster = require('cluster');
var threads = require('os').cpus().length;

if (cluster.isMaster) {
  // Master: fork one worker per CPU core.
  for (var i = 0; i < threads; i++) {
    cluster.fork();
  }

  // Replace any worker that dies so the server stays up.
  cluster.on('exit', function(dead, code, signal) {
    console.log('worker ' + dead.process.pid + ' died.');
    var worker = cluster.fork();
    console.log('worker ' + worker.process.pid + ' started');
  });
} else {
  //
  // do your server logic in here
  //
}
That being said, there's no way for you to run your application and your server separately if Node is serving your client content. Once you terminate your server, your endpoints are down. If you really want to keep a client-side application active while rebooting your server, you have to separate the logic entirely, i.e. have your application in a different project from your server, and use your server as API endpoints only.
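A minimal sketch of that separation, with Node exposing API endpoints only (route, payload, and port are illustrative); the client app lives in its own project and is deployed independently:
var express = require('express');
var app = express();

// The server owns only the API surface; no client assets are served
// from here, so restarting this process never touches the client app.
app.get('/api/services', function (req, res) {
  res.json({ ok: true });
});

app.listen(3000);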
As for Connection Pools in Node-mysql: I have never used that module so I couldn't say what best practice is there.

Are there any problems if we don't close the MySQL connection in the mysql Node.js module?

If we don't close or end the MySQL connection in Node.js, will it have any effect in the future or not?
I am doing it like this:
var mysql = require('mysql');
var connection = mysql.createConnection(...);
connection.query('SELECT 1', function(err, rows) {
  // connected! (unless `err` is set)
});
I am not ending the MySQL connection anywhere in my code. My questions are:
Is it necessary to close the MySQL connection?
If we don't close the MySQL connection, will I face any other problems in the future?
Please help me, I am new to Node.js.
EDIT1
I will not face any problems like being unable to connect, too many open connections, etc., meaning any resource-related issues? Right?
EDIT2
At what point will the MySQL connection be closed if we don't end it manually or with the end function?
You should close the connection.
The point at which you do it depends on what your program does with that connection.
If the program is long-lived and only needs a single connection but uses that connection continuously, then you can just leave that one connection open, but you should be prepared to re-open the connection if required, e.g. if the server gets restarted.
If the program just opens one connection, does some stuff, and then doesn't use the connection for some time you should close the connection while you're not using it.
If the program is short lived, i.e. it makes a connection, does some stuff, and then exits you can get away without closing the connection because it'll get closed automagically when your program exits.
Actually, you will run into problems such as this one from MySQL:
"[host] is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'"
MySQL will start counting those unclean disconnects as "connection errors", and then your database remains unusable until you execute FLUSH HOSTS on the MySQL server. I know this for a fact; it happened in our Node.js project.
Write some code to explicitly close the db connection before any output or render statements, using connection.end() or connection.destroy().
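A minimal sketch of that, using the mysql module's end()/destroy() calls (connection options are placeholders):
var mysql = require('mysql');
var connection = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'secret',
  database: 'mydb'
});

connection.query('SELECT 1', function (err, rows) {
  if (err) throw err;
  // end() flushes any queued queries, then closes gracefully;
  // destroy() would drop the socket immediately instead.
  connection.end();
});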
No, you don't need to close it.
If your question is "is there any specific cleanup on the MySQL side when I tell the server that I'm closing the connection", then the answer is no; just exiting your client process or closing the socket is the same as calling connection.end().

Using fork in Ruby on Rails for creating parallel process

I have a Rails 3 app in production with Passenger on Apache. I have this code:
class Billing < ActiveRecord::Base
  after_save :sendEmails

  private

  def sendEmails
    fork do
      UserMailer.clientBilling(self.user, self).deliver
    end
  end
end
On localhost, when the app creates a billing, after it is saved the app sends an email to the user; everything works fine. But on the server, after the app creates a billing, it throws errors related to the mysql2 gem, errors like "MySQL server has gone away" or "Connection lost", and the app doesn't send the emails. If I remove the fork it works fine, but I want to use fork: I want a separate process because sending emails takes too long. What could be the problem?
The problem is that a forked process inherits some of its parent's resources, such as its file descriptors. In particular, one such shared resource is the MySQL connection. When the child process finishes its email sending and exits, it closes the MySQL connection, which also closes the parent process's connection.
If you do continue down this path (and it is fraught with similar subtleties) then you need to do something like this:
# Clear existing connections before forking to ensure they do not get inherited.
::ActiveRecord::Base.clear_all_connections!

fork do
  # Establish a new connection for each fork.
  ::ActiveRecord::Base.establish_connection

  # The rest of the code for each fork...
end
You'll have to do a similar thing with services like memcached or mongodb if you use those.
Be extremely careful when using fork with Rails/Passenger; it can become very messy! Instead, you should use resque or delayed_job for this task.
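For example, with delayed_job the mailer call is serialized and run by a separate worker process, so the web process never forks and never shares its MySQL connection. A sketch, assuming delayed_job's documented .delay syntax for mailers (note: no .deliver when using .delay):
class Billing < ActiveRecord::Base
  after_save :sendEmails

  private

  def sendEmails
    # Enqueue the mail; a delayed_job worker delivers it outside
    # the request cycle. No .deliver here: .delay handles it.
    UserMailer.delay.clientBilling(self.user, self)
  end
end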
You can reestablish the connection inside of the fork:
# Re-read the database config and reconnect inside the forked process.
dbconfig = YAML::load(File.open('your_app_dir/config/database.yml'))
ActiveRecord::Base.establish_connection(dbconfig['development'])