I have a Node.js application running on a server with node-mysql and Express. At first I faced a problem where some exceptions were not handled, and the application would go down with network connectivity issues.
I handled all uncaught exceptions and the server wouldn't go down this time, but instead it would hang. I figured it was because I returned a response only if the query didn't raise an exception, so I handled all query-related exceptions too.
Next, if the MySQL server terminated the connection for some reason, my application wouldn't reconnect. I tried reconnecting, but it would give an error related to "enqueue connection handshake or something". From another Stack Overflow question I gathered I was supposed to use a connection pool, so that the application somehow regains connectivity if the server terminates a connection, which I did.
My question here is that each time I faced an issue I had to shut down the whole application, and since in Node.js the server is configured programmatically, it goes down too. Can I, or better yet how can I, decouple my server and application almost completely, so that if I make a change in my application I don't have to re-deploy the server?
Especially for the current case: everything looks okay, yet my application is constantly giving me a connection-pool error on the server while the development version works fine, so even if I restart my application I am not sure when I will face this problem again in order to properly diagnose it.
Let me know if anyone needs more info regarding my question.
Are you using a front-end framework to serve your application, or are you serving it all from server calls?
So fundamentally, if your server barfs for any reason (e.g. a 500 error), you WANT to shut down and restart, because once your server is in that state, all of your in-transit data and your stack are in an unknown state. There's no way to correctly recover from that, so you are safer, from both a server and an end-user point of view, to shut down the process and restart.
You can minimise the impact of this by using something like Node's Cluster module, which allows you to fork child processes of your server and generate multiple instances of the same server, connected to the same database, accessible on the same port, etc. Therefore, if your user (or your server) manages to hit an unhandled exception, it can kill that process and restart it without shutting down your entire server.
Edit: Here's a snippet:
var cluster = require('cluster');
var threads = require('os').cpus().length;

if (cluster.isMaster) {
    for (var i = 0; i < threads; i++) {
        cluster.fork();
    }

    cluster.on('exit', function(dead, code, signal) {
        console.log('worker ' + dead.process.pid + ' died.');
        var worker = cluster.fork();
        console.log('worker ' + worker.process.pid + ' started');
    });
} else {
    //
    // do your server logic in here
}
That being said, there's no way for you to run your application and server separately if Node is serving your client content. Once you terminate your server, your endpoints are down. If you really wanted to be able to keep a client-side application active and reboot your server, you'd have to entirely separate the logic, i.e. have your application in a different project from your server, and use your server as API endpoints only.
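To make that concrete, here's a rough sketch of the API-only side, assuming Express (the file name and route are made up for illustration; the client would live in a separate project served by something else, e.g. nginx or a CDN):

// api-server.js (hypothetical) -- serves JSON endpoints only, no client pages
var express = require('express');
var app = express();

app.get('/api/health', function (req, res) {
    // a trivial endpoint; your real routes would hit the database here
    res.json({ status: 'ok' });
});

app.listen(3000, function () {
    console.log('API listening on port 3000');
});

With that split, you can restart or redeploy the API process without touching the client assets, and vice versa.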
As for Connection Pools in Node-mysql: I have never used that module so I couldn't say what best practice is there.
I have a running instance of VerneMQ (a cluster of 2 nodes) on Google Kubernetes, using MySQL (Cloud SQL) for auth. The server accepts connections over TLS.
It works fine, but after a few days I start seeing this message in the log:
can't authenticate client {[],<<"Client-id">>} from X.X.X.X:16609 due to plugin_chain_exhausted
The client app (Paho) complains that the server refused the connection for being "not authorized" (code=5 in the Paho error).
After a few retries it finally connects, but every time it gets harder and harder, until it just won't connect anymore.
If I restart VerneMQ, everything gets back to normal.
I have only 3 clients connected at most at the same time.
Clients that are already connected have no issues with pub/sub.
In my configuration I have (among other things):
log.console.level=debug
plugins.vmq_diversity=on
vmq_diversity.mysql.* = all of them set
allow_anonymous=off
vmq_diversity.auth_mysql.enabled=on
It's like the server degrades over time. The status web page reports no problems.
My VerneMQ server was built from the Git repository about a month ago and runs in a Docker container.
What could be the cause?
What else could I check to find possible causes? Maybe a vmq_diversity misconfiguration?
Thanks
To quickly explain the plugin_chain_exhausted log: with Verne you can run multiple authentication/authorization plugins, and they will be checked in a chain. If one plugin allows the client, it will be in. If no plugin allows the client, you'll see the log above.
This does not explain the behaviour you describe, though. I don't think I have seen that.
In any case, the first thing to check is whether you actually run multiple plugins. For instance: have you disabled the vmq.passwd and the vmq.acl plugins?
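For example (an assumption on my part that you're using the default file-based plugins; adjust to your setup), they are typically switched off in vernemq.conf with:

plugins.vmq_passwd=off
plugins.vmq_acl=off

With those off, vmq_diversity should be the only auth plugin left in the chain, which makes the plugin_chain_exhausted log easier to interpret.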
Is it possible to cache database connections when using PHP like you would in a J2EE container? If so, how?
There is no connection pooling in PHP.
mysql_pconnect and connection pooling are two different things.
There are many problems associated with mysql_pconnect, and you should first read the manual and use it carefully, but this is not connection pooling.
Connection pooling is a technique where the application server manages the connections. When the application needs a connection, it asks the application server for one, and the application server returns one of the pooled connections if one is free.
We can do connection scaling in PHP; for that, please go through the following link: http://www.oracle.com/technetwork/articles/dsl/white-php-part1-355135.html
So, no connection pooling in PHP.
As Julio said, Apache releases all resources when the current request ends. You can use mysql_pconnect, but you are limited with that function and you must be very careful. Another choice is to use the singleton pattern, but none of this is pooling.
This is a good article: https://blogs.oracle.com/opal/highly-scalable-connection-pooling-in-php
Also read this one http://www.apache2.es/2.2.2/mod/mod_dbd.html
Persistent connections are nothing like connection pooling. A persistent connection in PHP will only be reused if you make multiple DB connects within the same request/script execution context. In most typical web dev scenarios you'll max out your connections way faster if you use mysql_pconnect, because your script will have no way to get a reference to any open connections on your next request. The best way to use DB connections in PHP is to make a singleton instance of a DB object, so that the connection is reused within the context of your script execution. This still incurs at least one DB connect per request, but it's better than making multiple DB connects per request.
There is no real DB connection pooling in PHP due to the nature of PHP. PHP is not an application server that can sit there in between requests and manage references to a pool of open connections, at least not without some kind of major hack. I think in theory you could write an app server in PHP and run it as a command-line script that would just sit there in the background, keep a bunch of DB connections open, and pass references to them to your other scripts, but I don't know if that would be possible in practice, how you'd pass the references from your command-line script to other scripts, and I sort of doubt it would perform well even if you could pull it off. Anyway, that's mostly speculation. I did just notice the link someone else posted to an Apache module that allows connection pooling for prefork servers such as PHP. Looks interesting:
https://github.com/junamai2000/mod_namy_pool#readme
I suppose you're using mod_php, right?
When a PHP file finishes executing, all its state is destroyed, so there's no way (in PHP code) to do connection pooling. Instead you have to rely on extensions.
You can use mysql_pconnect so that your connections won't get closed after the page finishes; that way they get reused in the next request.
This might be all that you need, but it isn't the same as connection pooling, as there's no way to specify the number of connections to keep open.
You can use MySQLi.
For more info, scroll down to the Connection pooling section at http://www.php.net/manual/en/mysqli.quickstart.connections.php#example-1622
Note that connection pooling is also dependent on your server (e.g. Apache httpd) and its configuration.
mysqli opens a new connection only if an unused persistent connection for a given combination of "host, username, password, socket, port and default database" cannot be found in the open connection pool; otherwise it reuses an already open persistent connection, which is in a way similar to the concept of connection pooling. The use of persistent connections can be enabled and disabled with the PHP directive mysqli.allow_persistent. The total number of connections opened by a script can be limited with mysqli.max_links (this may be interesting to you for addressing a max_user_connections issue that hits the hosting server's limit). The maximum number of persistent connections per PHP process can be restricted with mysqli.max_persistent.
In a wider programming context this is a task for the web/application server; however, in this context it is handled by PHP's mysqli directives themselves, in a way that supports connection reusability. You may also implement a singleton class to get a static connection instance to reuse, just like in Java. As a reminder, Java also doesn't support connection pooling as part of standard JDBC; pools are separate modules/layers on top of JDBC drivers.
Coming to PHP, the good thing is that for the common databases in the PHP ecosystem it does support persistent database connections, which keep a connection alive across requests for the lifetime of the worker process (workers may be recycled after a configured number of requests, e.g. pm.max_requests in PHP-FPM), and this avoids creating a new connection on each request. So check the docs in detail; it solves most of your challenges. Please note that PHP is not as sophisticated as strictly object-oriented Java in terms of extensive multi-threading, concurrent processing, and powerful asynchronous event handling, so it is much less practical for PHP to have a built-in mechanism like pooling.
You cannot instantiate connection pools manually.
But you can use the "built in" connection pooling with the mysql_pconnect function.
I would like to suggest PDO::ATTR_PERSISTENT
Persistent connections are links that do not close when the execution of your script ends. When a persistent connection is requested, PHP checks if there's already an identical persistent connection (that remained open from earlier) - and if it exists, it uses it. If it does not exist, it creates the link.
Connection pooling works on the MySQL server side like this.
If persistent connections are enabled (on the PHP side, e.g. via mysql_pconnect), MySQL keeps the connection open and in a sleep state after the requesting client (the PHP script) finishes its work and exits.
When a second request comes with the same credentials (same user name, same password, same connection parameters, same database name, and maybe the same IP; I am not sure about the IP), MySQL brings the previous connection from the sleep state back to the active state and lets the client use it. This saves MySQL the initial setup cost of a connection and reduces the total number of connections.
So the connection reuse actually happens at the MySQL server side. At the PHP code level there is no pooling option; mysql_pconnect() is just a wrapper that tells PHP not to send a connection close request at the end of the script run.
For features such as connection pooling you need to install the Swoole extension first: https://openswoole.com/
It adds async features to PHP.
After that it's trivial to add MySQL and Redis connection pooling:
https://github.com/open-smf/connection-pool
Some PHP frameworks come with pooling built-in: https://hyperf.wiki/2.2/#/en/pool
We have developed a project in .NET Core and Entity Framework Core using the MySql NuGet package.
The context is added to dependency injection using the following line:
services.AddDbContext<ReadWriteContext>(options => options.UseMySQL(Configuration["Machine:ReadWriteConnectionString"]));
Then in a controller, this is injected as such:
public class SystemController : Controller
{
    private readonly ReadWriteContext _dataContext;

    public SystemController(ReadWriteContext dataContext)
    {
        _dataContext = dataContext;
    }

    ...
}
And used as such:
var hasServices = await _dataContext.Services.AnyAsync();
In the logs we see the opening and closing log lines:
Opening connection to database 'config_service' on server '10.211.55.5'.
Closing connection to database 'config_service' on server '10.211.55.5'.
However, when we look at the MySQL server and run "show full processlist", the connections still show as being in the sleep state and never close. When we stop the .NET process, the connections then close and disappear from the MySQL process list.
How do I get the connections to close when the request is finished? AddDbContext should be scoped to the current request, but it does not appear to properly close the connections.
Any help would be great.
I'm not familiar with MySQL. That said, if you were in a SQL Server context you would be dealing with the connection pool: https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling.
That is, your application keeps a set of active connections to speed up the process of reaching the database server. Thus, even if you tell the pool "I don't need this connection anymore", it does not believe you and keeps the connection open... just in case.
For how long? Well, that depends on logic beyond the scope of the application. One might say: it is the connection pool's realm, just let it do its job.
So the answer to your question is: you can't explicitly close the connection to the server. The connection pool decides when to close it or not.
I'm running a Node server connecting to MySQL via the node-mysql module. Connecting to and querying MySQL works great initially without any errors; however, the first query after leaving the Node server idle for a couple of hours results in an error. The error is the familiar read ECONNRESET, coming from the depths of the node-mysql module.
A stack trace (note that the first three entries of the trace belong to my app's error-reporting code):
Error
at exports.Error.utils.createClass.init (D:\home\site\wwwroot\errors.js:180:16)
at new newclass (D:\home\site\wwwroot\utils.js:68:14)
at Query._callback (D:\home\site\wwwroot\db.js:281:21)
at Query.Sequence.end (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\sequences\Sequence.js:78:24)
at Protocol.handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\protocol\Protocol.js:271:14)
at PoolConnection.Connection._handleNetworkError (D:\home\site\wwwroot\node_modules\mysql\lib\Connection.js:269:18)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:441:14
at process._tickCallback (node.js:415:13)
This error happens both with my cloud Node and MySQL servers and with a local setup of both.
My questions:
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
Update: After more browsing, I think my issue is a duplicate of this one. It appears his connection is disconnecting as well, but no one has suggested how to keep the connection alive or how to address the error outside of failing on the first query back.
I reached out to the node-mysql folks on their Github page and got some firm answers.
MySQL does indeed prune idle connections. There's a MySQL variable "wait_timeout" that sets the number of seconds before timeout, and the default is 8 hours. We can set the default to be much larger than that. Use show variables like 'wait_timeout'; to view your timeout setting and set wait_timeout=28800; to change it.
According to this issue, node-mysql doesn't prune pool connections after these sorts of disconnections. The module developers recommended using a heartbeat to keep the connection alive such as calling SELECT 1; on an interval. They also recommended using the node-pool module and its idleTimeoutMillis option to automatically prune idle connections.
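For illustration, a keep-alive along those lines might look like this (a minimal sketch, assuming a node-mysql pool stored in a variable named pool; the 5-minute interval is an arbitrary choice, anything comfortably below wait_timeout works):

// ping MySQL periodically so pooled connections never sit idle long enough
// for the server's wait_timeout to close them
setInterval(function () {
    pool.query('SELECT 1', function (err) {
        if (err) {
            console.error('keep-alive ping failed:', err.code);
        }
    });
}, 5 * 60 * 1000);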
If this happens when establishing a single reused connection, it can be avoided by establishing a connection pool instead.
For example, if you're doing something like this...
var db = require('mysql').createConnection({...});
db.connect(function(err){});
do this instead...
var db = require('mysql')
.createPool({...});
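...and then query through the pool, which acquires and releases a connection for you on each call. For example (a small sketch using the pool's query shortcut):

db.query('SELECT 1', function (err, rows) {
    // the pool grabs a free connection (or opens a new one up to its limit),
    // runs the query, and releases the connection back to the pool
    if (err) {
        console.error(err);
        return;
    }
    console.log(rows);
});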
Does this problem appear to be a disconnection of Node's connection to my MySQL server(s), perhaps due to a connection lifetime limitation?
Yes. The server has closed its end of the connection.
When using connection pools, node-mysql is supposed to gracefully handle disconnections and prune them from the pool. Is it not aware of the disconnect until I make a query, thus making the error unavoidable?
Correct, but it should handle the error internally, not pass it back to you. This appears to be a bug in node-mysql. Report it.
Considering that I see the "read ECONNRESET" error a lot in other StackOverflow posts, should I be looking elsewhere from MySQL to diagnose the problem?
It is either a bug in the node-mysql connection pool implementation, or else you haven't configured it properly to detect failures.
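If you want to see whether the pool is even noticing dropped connections, one thing you can do (a sketch, assuming the node-mysql pool API; not a guaranteed fix) is log errors on every connection the pool creates:

pool.on('connection', function (connection) {
    // fires each time the pool opens a new underlying connection
    connection.on('error', function (err) {
        console.error('pooled connection error:', err.code); // e.g. ECONNRESET
    });
});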
I have also been facing the same issue. Apparently it was happening because a backend process had been triggered on a table that was being referenced in my API.
This caused the table to go into a lock-wait state, and my query request failed with a connection reset. Though I'm wondering why I didn't receive a lock-wait error.
We have a machine with a heavy CPU load on the MySQL process on server 1.
Our web app on server 2 suddenly seemed to bypass the rights logic and expose some data as if no check were done on the DB side.
Can an app miss some SQL queries because the DB server is under heavy load?
Can MySQL lose consistency?
Probably not. More likely, your application detected an error with the database (maybe a timeout) but is not handling the error properly. I would check the application's exception-handling logic carefully.