Phalcon + PHPUnit + DI: Too many db connections - mysql

I'm using PHPUnit with Phalcon. In my UnitTestCase (base test class), I've set up the connection thus:
protected function setUp(\Phalcon\DiInterface $di = null, \Phalcon\Config $config = null)
{
    $dbparams = ...
    if (is_null($di)) {
        $di = new \Phalcon\DI\FactoryDefault();
    }
    // Register the connection as a shared service.
    $di->setShared('db', function () use ($dbparams) {
        return new \Phalcon\Db\Adapter\Pdo\Mysql($dbparams);
    });
    \Phalcon\DI::setDefault($di);
    parent::setUp($di, $this->_config);
    $this->_loaded = true;
}
I'm running into a problem: after a number of suites have run, I start getting the following error on every test case past a certain point:
PDOException: SQLSTATE[HY000] [1040] Too many connections
Am I doing something wrong?

You keep opening a new connection with each test case. Because PHPUnit runs the whole suite in a single PHP process, none of those database connections are garbage-collected; the process simply accumulates open connections until you exceed the database instance's max_connections value.
You can probably observe the number of connections growing if you open a session to MySQL and run SHOW PROCESSLIST from time to time.
You need to disconnect from the database in your PHPUnit tearDown() method.
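For example, a minimal tearDown() sketch, assuming the shared 'db' service registered in the setUp() above (Phalcon's PDO adapters expose a close() method):
protected function tearDown()
{
    $di = \Phalcon\DI::getDefault();
    if ($di !== null && $di->has('db')) {
        // Close the underlying PDO connection so the server-side thread
        // is released before the next test case runs.
        $di->get('db')->close();
        $di->remove('db');
    }
    parent::tearDown();
}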

Related

Error appears when using cron: [Scheduler] QueryFailedError: read ECONNRESET

I have an operation to execute at night
@Cron(CronExpression.EVERY_DAY_AT_1AM)
async setValideFileRemainder() {
    const date = new Date();
    const dateToday = date.toISOString().split('T')[0];
    const remainders = await this.healthFileRepository.find({
        where: { remainder_date: dateToday },
    });
    for (const file of remainders) {
        file.is_valide = 1;
        await this.healthFileRepository.save(file);
    }
}
When I test this function through an endpoint, or schedule it every 5 minutes for example, it works; but at night it always gives me this error:
[Nest] 18728 - 09/15/2021, 7:53:52 AM [Scheduler] QueryFailedError: read ECONNRESET +38765424ms
PS: I'm using MySQL as a database
I suspect you are running into a MySQL connection timeout as your client application is idling. The MySQL server will disconnect your client if there is no activity within the time range configured by wait_timeout.
You will need to do one of the following:
- tweak your MySQL server configuration and increase wait_timeout
- send keep-alive queries to your server (e.g. SELECT 1) at an interval shorter than wait_timeout
- handle connection drops gracefully, as they can occur for reasons beyond your application or the SQL server (e.g. route drops, link down, packet loss, ...); one approach is sketched below
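The third option is language-agnostic; as an illustration (in PHP/PDO rather than the question's NestJS/TypeORM stack; the DSN, credentials, and error-code check are assumptions), a minimal reconnect-and-retry sketch:
function connect(): PDO
{
    // Illustrative DSN and credentials.
    return new PDO('mysql:host=localhost;dbname=app', 'user', 'password', array(
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ));
}

function queryWithRetry(PDO &$pdo, string $sql): array
{
    try {
        return $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    } catch (PDOException $e) {
        // Driver error 2006 ("MySQL server has gone away") indicates a
        // dropped connection, e.g. after wait_timeout expired.
        if (isset($e->errorInfo[1]) && $e->errorInfo[1] === 2006) {
            $pdo = connect(); // reconnect...
            return $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC); // ...and retry once
        }
        throw $e;
    }
}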

Postgres vs MySQL: Commands out of sync

MySQL scenario:
When I execute "SELECT" queries in MySQL using multiple threads I get the following message: "Commands out of sync; you can't run this command now", I found that this is due to the limitation of having to wait "consume" the results to make another query.
C++ example:
void DataProcAsyncWorker::Execute()
{
    std::thread(&DataProcAsyncWorker::Run, this).join();
}
void DataProcAsyncWorker::Run()
{
    sql::PreparedStatement *prep_stmt = c->con->prepareStatement(query);
    ...
}
Important:
I can't avoid using a separate thread per query (SELECT, INSERT, etc.), because the module I'm building is integrated with Node.js and otherwise "locks" the thread until the result is obtained; for this reason I need to run each query in the background (a new thread) and resolve the promise with the result obtained from MySQL.
Important:
I keep several connections open (for example, 10), and each SQL call picks one of these connections.
This is:
1. A connection pool that contains 10 established connections, e.g.:
for (int i = 0; i < 10; i++) {
    Com *c = new Com;
    c->id = i;
    c->con = openConnection();
    c->con->setSchema("gateway");
    conns.push_back(c);
}
2. The problem occurs when executing 100 or more SELECT queries per second. Even with the pool balancing the load, at that rate a given connection (e.g. conns.at(50)) may still be busy with a previous query whose result has not yet been consumed.
My question:
A. Does PostgreSQL have this limitation as well?
B. Which SQL server is recommended for a high rate of queries per second without the need to open new connections, i.e. where on a single connection (conns.at(0)) I can execute SELECT commands from two simultaneous threads?
Additional:
1. I can create a larger number of connections in the pool, but when I simulate more queries per second than there are pre-set connections, I get the "Commands out of sync" error again; the only solution I found was a mutex, which is bad for performance.
I found that PostgreSQL handles this queueing very efficiently: unlike MySQL, where I need to call _free_result, in PostgreSQL I can run multiple queries on the same connection without receiving the error "Commands out of sync".
Note: I did the test using libpqxx (a C++ library for connections/queries to the PostgreSQL server) and it really worked wonderfully, without giving me a headache.
Note: I don't know whether it allows multi-threaded execution or whether execution is done synchronously on the server side for each connection; the only thing I know is that this error does not occur in PostgreSQL.
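For reference, the MySQL-side limitation is easy to reproduce in any client. A minimal PHP/mysqli sketch (the table name and credentials are illustrative):
mysqli_report(MYSQLI_REPORT_OFF); // return false on error instead of throwing
$mysqli = new mysqli('localhost', 'user', 'password', 'gateway');

// Unbuffered query: rows stay on the server until the client consumes them.
$result = $mysqli->query('SELECT id FROM some_table', MYSQLI_USE_RESULT);

// Issuing another query before the pending result is consumed fails with
// error 2014: "Commands out of sync; you can't run this command now".
var_dump($mysqli->query('SELECT 1')); // bool(false)
echo $mysqli->error, "\n";

// Consuming/freeing the pending result (mysqli's equivalent of
// mysql_free_result) unblocks the connection for the next query.
$result->free();
var_dump($mysqli->query('SELECT 1')); // now succeeds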

How can I skip all errors on an RDS replica instance?

I use my.cnf settings like these. Does an RDS instance allow such options?
slave-skip-errors = 1062,1054
replicate-ignore-db=verv_raw
replicate-ignore-table=verv.ox_session
replicate-wild-ignore-table=verv_raw.ox%
I am aware of the procedure that skips one error at a time.
CALL mysql.rds_skip_repl_error;
But what I am looking for is an option to skip all errors on the slave. Is that possible in an RDS environment?
I solved it by creating a MySQL scheduled event like this:
CREATE EVENT repl_error_skipper
ON SCHEDULE EVERY 15 MINUTE
COMMENT 'Call rds_skip_repl_error to skip replication errors'
DO
CALL mysql.rds_skip_repl_error;
/* you can also add other logic here */
To set other global variables, find and set them (if they are modifiable) in an RDS parameter group (you will have to create a new parameter group and set the variable values there).
As mentioned, this command only skips one replication error at a time. I wrote a PHP script that loops over the call and ran it once a minute via a cron job (my replica was log-jammed with a series of thousands of bad queries that went through on the main DB):
for ($i = 1; $i <= 30; $i++) {
    $db = new mysqli('someserver.rds.amazonaws.com', 'root', 'password');
    $res = $db->query('CALL mysql.rds_skip_repl_error;');
    if (!$res) {
        //echo 'Query failed: ' . $db->error . "\n";
        return;
    }
    //var_dump($res->fetch_assoc());
    $db->close();
    sleep(1);
}
You'll need to tinker with this a bit (not every server will tolerate only one second between calls, or 30 calls per minute), but it does work, albeit in a brute-force manner. You must create a new DB connection every time you run the call: the loop opens the connection, runs the call, and then closes it.

Matlab Database Toolbox - Warning: com.mysql.jdbc.Connection#6e544a45 is not serializable

I'm connecting to a MySQL database through the Matlab Database Toolbox in order to run the same query over and over again within two nested for loops. After each iteration I get this warning:
Warning: com.mathworks.toolbox.database.databaseConnect#26960369 is not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
Warning: com.mysql.jdbc.Connection#6e544a45 is not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
Warning: com.mathworks.toolbox.database.databaseConnect#26960369 not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
Warning: com.mysql.jdbc.Connection#6e544a45 is not serializable
In Import_Matrices_DOandT_julaugsept_inflow_nomettsed at 476
My code is basically structured like this:
%Server
host =
user =
password =
dbName =

%# JDBC parameters
jdbcString = sprintf('jdbc:mysql://%s/%s', host, dbName);
jdbcDriver = 'com.mysql.jdbc.Driver';

%# Create the database connection object
conn = database(dbName, user, password, jdbcDriver, jdbcString);
setdbprefs('DataReturnFormat', 'numeric');

%Loop
for SegmentNum = 3:41
    for tl = 1:15
        tic;
        sqlquery = ['giant string'];
        results = fetch(conn, sqlquery);
        % (some code here that saves the results into a few variables)
        save('inflow.mat');
    end
end
time = toc
close(conn);
clear conn
Eventually, after some number of iterations, the code crashes with this error:
Error using database/fetch (line 37)
Query execution was interrupted
Error in Import_Matrices_DOandT_julaugsept_inflow_nomettsed (line
466)
results = fetch(conn, sqlquery);
Last night it errored out after 25 iterations. I have about 600 iterations total to run, and I don't want to have to keep checking back on it every 25. I've heard there can be memory issues with database connection objects... is there a way to keep my code running?
Let's take this one step at a time.
Warning: com.mathworks.toolbox.database.databaseConnect#26960369 is not serializable
This comes from this line
save('inflow.mat');
You are trying to save the database connection object itself. That doesn't work. Specify only the variables you wish to save, and it should work better.
There are a couple of tricks for excluding variables, but honestly, I suggest you just identify the most important variables and save those. If you wish, you can piece together a solution from this page.
save inflow.mat a b c d e
Try wrapping the query in a try/catch block. Whenever you catch an error, reset the connection to the database, which should free up the object.
nQuery = 100;
while nQuery > 0
    try
        query_the_database();
        nQuery = nQuery - 1;
    catch
        reset_database_connection();
    end
end
The underlying reason for all of this is that a database connection object wraps a TCP/IP socket, and sockets cannot be serialized; multiple processes cannot share the same port. That is why database connection objects are not serializable.
A workaround is to create the connection inside the for loop.

Set MySQL session variable - timezone - using Doctrine 1.2 and Zend Framework

Got a Zend Framework web application using Doctrine 1.2 connecting to a MySQL 5.1 server.
Most data needs to be entered and displayed in the local timezone. So, following the advice here, I'd like (I think) to set PHP and MySQL to use UTC, and then I'll handle the conversion to local time for display and to UTC prior to insert/update. [If this is totally goofy, I'm happy to hear a better approach.]
So, how do I tell Doctrine to set the MySQL session to UTC? Essentially, how do I tell Doctrine to issue the MySQL command SET SESSION time_zone = 'UTC'; when it opens the connection?
Thanks in advance!
It appears that this can be done by attaching to the Doctrine_Connection object a Doctrine_EventListener with a postConnect() method.
Doctrine ORM for PHP - Creating a New Listener
Something like:
class Kwis_Doctrine_EventListener_Timezone extends Doctrine_EventListener
{
    protected $_timezone;

    public function __construct($timezone = 'UTC')
    {
        $this->_timezone = (string) $timezone;
    }

    public function postConnect(Doctrine_Event $event)
    {
        $conn = $event->getInvoker();
        $conn->execute(sprintf("SET SESSION time_zone = '%s';", $this->_timezone));
    }
}
[Using sprintf() here is probably amateurish, but I couldn't figure out how to do a proper parameter bind. (Embarrassed smiley.)]
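For what it's worth, Doctrine_Connection::execute() accepts a parameter array in Doctrine 1.2, so a bound version might look like the sketch below; note that binding a value in a SET statement relies on PDO's emulated prepares, so treat this as an untested assumption:
public function postConnect(Doctrine_Event $event)
{
    $conn = $event->getInvoker();
    // Bind the timezone value instead of interpolating it with sprintf().
    $conn->execute('SET SESSION time_zone = ?', array($this->_timezone));
}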
Then in my app's Bootstrap.php, I have something like:
protected function _initDoctrine()
{
    // various operations relating to autoloading
    // ..
    $doctrineConfig = $this->getOption('doctrine');

    $manager = Doctrine_Manager::getInstance();
    // various bootstrapping operations on the manager
    // ..

    $conn = Doctrine_Manager::connection($doctrineConfig['dsn'], 'doctrine');
    // various bootstrapping operations on the connection
    // ..

    // Here's the good stuff: add the EventListener
    $conn->addListener(new Kwis_Doctrine_EventListener_Timezone());

    return $conn;
}
[One slightly off-topic note: on my local development machine, I kept running into a MySQL error where no timezone I entered seemed to be accepted. It turns out that the standard MySQL install creates the timezone tables in the mysql database but doesn't actually populate them; you need to populate them separately (for example, with the mysql_tzinfo_to_sql utility). More info at the MySQL site.]