Error appears when using cron: [Scheduler] QueryFailedError: read ECONNRESET - mysql

I have an operation that needs to run at night:
@Cron(CronExpression.EVERY_DAY_AT_1AM)
async setValideFileRemainder() {
  const date = new Date();
  const dateToday = date.toISOString().split('T')[0];
  const remainders = await this.healthFileRepository.find({
    where: { remainder_date: dateToday },
  });
  for (const file of remainders) {
    file.is_valide = 1;
    await this.healthFileRepository.save(file);
  }
}
When I test this function through an endpoint, or schedule it every 5 minutes for example, it works, but at night it always gives me this error:
[Nest] 18728 - 09/15/2021, 7:53:52 AM [Scheduler] QueryFailedError: read ECONNRESET +38765424ms
PS: I'm using MySQL as a database

I suspect you are running into a MySQL connection timeout as your client application is idling. The MySQL server will disconnect your client if there is no activity within the time range configured by wait_timeout.
You will need to do one of the following:
tweak your MySQL server configuration and increase wait_timeout
send keep-alive queries to your server (e.g. SELECT 1) at an interval shorter than wait_timeout, as sketched below
handle connection drops gracefully, as they can occur for reasons beyond your application or the SQL server (e.g. route drops, link down, packet loss, ...)
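For the keep-alive option, here is a minimal sketch assuming @nestjs/schedule and the same TypeORM repository as in your service (the 5-minute interval and the method name are illustrative, not prescriptive):

@Cron(CronExpression.EVERY_5_MINUTES)
async keepDatabaseConnectionAlive() {
  try {
    // A trivial query so the connection never idles past wait_timeout.
    await this.healthFileRepository.query('SELECT 1');
  } catch (err) {
    // If the server already dropped the connection, log it; the driver/pool
    // will normally re-establish the connection on the next query.
    console.error('Keep-alive query failed', err);
  }
}

Raising wait_timeout alone also works, but the keep-alive makes the nightly cron job independent of the server setting.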

Related

Doctrine\DBAL\DBALException "An exception occurred while executing '...' with params [...] Warning: Error while sending QUERY packet. PID=

Context: a Symfony 4.4 web app, hosted in an Ubuntu-based Docker image on Azure Web App, connected to a MySQL 5.7 Azure Database for MySQL.
We have MANY (>5K events in Sentry per 14 days) errors like:
Case 1
Doctrine\DBAL\DBALException
An exception occurred while executing 'SELECT qd.id as uuid, qd.content as content FROM queue_data qd WHERE qd.tag = ? LIMIT 1000' with params [...]:
Warning: Error while sending QUERY packet. PID=...
Coming from:
// App\Utility\Queue\Service\QueueDataService::getData
public function getData(string $tag, int $limit = self::DEFAULT_LIMIT): array
{
    return $this->getQueryBuilderForTag($tag)
        ->select('qd.id as uuid', 'qd.content as content')
        ->setMaxResults($limit)
        ->execute()
        ->fetchAll(FetchMode::ASSOCIATIVE);
}
Case 2
Doctrine\DBAL\DBALException
An exception occurred while executing 'SET NAMES utf8mb4':
Warning: Error while sending QUERY packet. PID=...
Coming from:
// custom code
private function myMethod() {
    ...
    $this->connection->executeQuery('SET NAMES utf8mb4');
    ...
}
...
Sentry shows 245 "issues" with this message (over 14 days), i.e. 245 different cases of the same problem, each instance having between 1 and 2K events (some instances actually come from consumers that are executed VERY frequently).
Nevertheless, it doesn't seem to have any impact on users...
Does anyone else have the same issues?
Is it possible to fix this?
How?
Cheers!
Found the issue:
on my DB server (MySQL), the parameter wait_timeout was set to 120 seconds
I have several message consumer processes managed by Supervisor
these workers consume several messages per process and had no time limit, so they could be waiting for a new message to consume for more than 2 minutes
in that case, the DB server had already closed the connection, but the Doctrine client was not aware of it until it tried to execute a query, which then failed
My fix was to:
increase the DB server setting wait_timeout to 600 seconds (10 minutes)
add a time limit of 300 seconds (5 minutes) to my consumer processes
The root cause should still be addressed by the Doctrine team: it seems weird that it doesn't ping the server (i.e. test whether the connection is still open) before trying to execute a query.
Cheers!

Postgres vs MySQL: Commands out of sync

MySQL scenario:
When I execute "SELECT" queries in MySQL using multiple threads I get the following message: "Commands out of sync; you can't run this command now". I found that this is due to the limitation that you have to wait for ("consume") the results before making another query on the same connection.
C++ example:
void DataProcAsyncWorker::Execute()
{
  std::thread(&DataProcAsyncWorker::Run, this).join();
}
void DataProcAsyncWorker::Run() {
  sql::PreparedStatement *prep_stmt = c->con->prepareStatement(query);
  ...
}
Important:
I can't avoid using multiple threads per query (SELECT, INSERT, etc.) because the module I'm building, which is being integrated with NodeJS, "locks" the thread until the result has been obtained; for this reason I need to run the query in the background (a new thread) and resolve the "promise" with the result obtained from MySQL.
Important:
I keep several "connections" open [for example: 10], and for each SQL call the function picks one of those connections.
This is:
1. A connection pool that contains 10 established connections, e.g.:
for (int i = 0; i < 10; i++) {
    Com *c = new Com;
    c->id = i;
    c->con = openConnection();
    c->con->setSchema("gateway");
    conns.push_back(c);
}
2. The problem occurs when executing >= 100 SELECT queries per second. I believe that even with the connection balancing, 100 queries per second is a high number, and a connection (e.g. conns.at(50)) may still be busy with a result that has not been consumed yet.
My question:
A. Does PostgreSQL have this limitation as well?
B. Which SQL server is recommended for a large number of queries per second without the need to "open new connections", that is: on a single connection such as conns.at(0), I can execute SELECT commands from 2 simultaneous threads.
Additional:
1. I can even create a larger number of connections in the pool, but when I simulate more queries per second than the number of pre-set connections I get the error "Commands out of sync"; the only solution I found was a mutex, which is bad for performance.
I found that PostgreSQL handles this very well (it queues the queries) in a very efficient way, unlike MySQL where I need to call "_free_result"; in PostgreSQL I can run multiple queries on the same connection without receiving the error "Commands out of sync".
Note: I did the test using libpqxx (the C++ library for connecting to and querying a PostgreSQL server) and it really worked like a charm without giving me a headache.
Note: I don't know whether it allows multi-threaded execution or whether execution is serialized on the server side for each connection; the only thing I know is that this error does not occur with PostgreSQL.

Error: Cannot enqueue Query after fatal error in mysql node

I am using felixge/node-mysql. I am also using express-myconnection, which prevents MySQL timeouts and in turn prevents the Node server from being killed. What I am doing is logging activities in MySQL. The scenario is: I have a file upload functionality, and once the file is uploaded I perform different operations on it. During every stage of processing I log those activities in the database. This works fine if the file is small. If the file is large, say 100 MB, it takes some time to upload, so in the meantime the MySQL server reconnects and creates a new connection, but the logging code still uses the old reference: Error: Cannot enqueue Query after fatal error. So, my question is: is there a way I can use the new connection reference instead of the old one? There is a single function in which all the different phases of activity regarding the file take place. Any help is greatly appreciated. Thanks.
Hi @paul, if you have seen the gist link, you can see that I have upload.on('begin', function (fileInfo, reqs, resp) { }, where I log the activity that the file upload process has begun. Once the file is uploaded, upload.on('end', function (fileInfo, request, response) { } is triggered. I also log some activity there. As I said in my question, if the file is big the upload takes time. In the meantime a new MySQL connection is created, but the insert query in the 'end' event still refers to the old connection. So, I wanted to know how I can use the new MySQL connection reference in this scenario? I hope this explains the scenario better.
Actually, I decided to google your error for you, and after reading this thread: https://github.com/felixge/node-mysql/issues/832 I realized that you're not releasing the connection after the first query completes, and so the pool never tries to issue you a new one. You were correct that the stale connection might be the problem. Here's how you fix that if it is:
upload.on('begin', function (fileInfo, reqs, resp) {
    var fileType = resp.req.fields.file_type;
    var originalFileName = fileInfo.name;
    var renamedFilename = file.fileRename(fileInfo, fileType);
    /* renaming the file */
    fileInfo.name = renamedFilename;
    /* start: log the details in database */
    var utcMoment = conf.moment.utc();
    var UtcSCPDateTime = new Date(utcMoment.format());
    var activityData = {
        activity_type: conf.LIST_UPLOAD_BEGIN,
        username: test,
        message: 'test has started the upload process for the file',
        activity_datetime: UtcSCPDateTime
    };
    reqs.params.activityData = activityData;
    reqs.getConnection(function (err, connection) {
        var dbData = reqs.params.activityData;
        var activity_type = dbData.activity_type;
        console.dir("[ Connection ID:- ] " + connection.threadId + " ] [ Activity type:- ] " + activity_type);
        var insertQuery = connection.query("INSERT INTO tblListmanagerActivityLog SET ? ", dbData, function (err, result) {
            if (err) {
                console.log("Error inserting while performing insert for activity " + activity_type + " : %s ", err);
            } else {
                console.log('Insert successful');
            }
            /// Here is the change:
            connection.release();
        });
    });
    /* end: log the details in database */
});
I fixed mine. I always suspect that when errors occur with my queries it has something to do with the MySQL80 service being stopped in the background. In case the other solutions fail, try going to the Task Manager, head to Services, find MySQL80 and check whether it is stopped; if it is, click Start, or set it to Automatic so that it starts as soon as the desktop is running.
For the pool connection to work we need to comment out this line: https://github.com/pwalczyszyn/express-myconnection/blob/master/lib/express-myconnection.js#L84. Hope it helps anyone facing the same issue.
Alternatively, we can use the single connection option, or manage the pool with the mysql driver directly, as sketched below.
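For reference, a minimal sketch of the pooled approach with plain node-mysql (no express-myconnection). The table name comes from the snippet above; the connection settings and the helper name are illustrative only:

var mysql = require('mysql');

// A shared pool: pool.query() acquires a connection, runs the query,
// and releases the connection back to the pool automatically, so a
// stale reference can never be reused by accident.
var pool = mysql.createPool({
    connectionLimit: 10,
    host: 'localhost',
    user: 'root',
    password: 'pass',
    database: 'myDB'
});

function logActivity(activityData, callback) {
    pool.query("INSERT INTO tblListmanagerActivityLog SET ?", activityData, function (err, result) {
        if (err) {
            // A dropped connection surfaces here as an error instead of a fatal
            // state; the pool hands out a fresh connection on the next call.
            return callback(err);
        }
        callback(null, result);
    });
}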
I had the same issue and came across one solution:
Re-install MySQL, and while doing so, in the configuration step, select the "legacy encryption" option instead, then finish the installation.
Hope this helps!

Socket.io and MySQL Connections

I'm working on my first node.js-socket.io project. Until now I have coded only in PHP. In PHP it is common to close the MySQL connection when it is not needed any more.
My question: does it make sense to keep just one MySQL connection open while the server is running, or should I handle this like in PHP?
Info: at peak hours I will have about 5 requests/second from socket clients, and for almost all of them I have to perform a MySQL CRUD operation.
Which one would you prefer?
var io = require('socket.io').listen(3000);
var mysql = require('mysql');
var connection = mysql.createConnection({
    host: 'localhost', user: 'root', password: 'pass', database: 'myDB'
});
connection.connect(); // and never 'end' or 'destroy'
// ...
or
var app = {};
app.set_geolocation = function (driver_id, driver_location) {
    connection.connect();
    connection.query('UPDATE drivers set ....', function (err) {
        /* do something */
    });
    connection.end();
};
...
The whole idea of Node.js is async I/O (and that includes DB queries).
The rule with a MySQL connection is that you can only have one query per connection at a time. So you either keep a single connection and queue the queries, as in the first option, or create a connection for each query, as in option 2.
I would personally go with option 2, as opening and closing connections is not such a big overhead; see the sketch after the link below.
Here are some code samples to help you out:
https://codeforgeek.com/2015/01/nodejs-mysql-tutorial/
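For what it's worth, a minimal sketch of option 2 (a fresh, short-lived connection per query), assuming the mysql driver; the UPDATE statement, column names and helper name are illustrative:

var mysql = require('mysql');

function setGeolocation(driver_id, driver_location, callback) {
    // One connection per query; query() connects implicitly if needed.
    var connection = mysql.createConnection({
        host: 'localhost', user: 'root', password: 'pass', database: 'myDB'
    });
    connection.query(
        'UPDATE drivers SET location = ? WHERE id = ?',
        [driver_location, driver_id],
        function (err, result) {
            connection.end(); // close once the query has completed
            callback(err, result);
        }
    );
}

If the per-query handshake ever becomes noticeable, mysql.createPool() gives the same convenience while reusing connections under the hood.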

Phalcon + PHPUnit + DI: Too many db connections

I'm using PHPUnit with Phalcon. In my UnitTestCase (base test class), I've set up the connection thus:
protected function setUp(\Phalcon\DiInterface $di = null, \Phalcon\Config $config = null)
{
    $dbparams = ...
    if (is_null($di)) {
        $di = new \Phalcon\DI\FactoryDefault();
    }
    $di->setShared('db', function() use ($dbparams) {
        return new \Phalcon\Db\Adapter\Pdo\Mysql($dbparams);
    });
    \Phalcon\DI::setDefault($di);
    parent::setUp($di, $this->_config);
    $this->_loaded = true;
}
I'm running into a problem, where, after a number of suites are run, I'm starting to get the following error (on every one of the test cases after a certain point):
PDOException: SQLSTATE[HY000] [1040] Too many connections
Am I doing something wrong?
So you just keep adding new connections with each test case. Since PHPUnit runs a single PHP process, none of the database connections are garbage-collected. The PHP process just keeps accumulating open connections until you exceed the database instance's max_connections value.
You can probably observe the number of connections growing if you open a session to MySQL and run SHOW PROCESSLIST from time to time.
You need to disconnect from the database in your PHPUnit tearDown() method.