Qt / thread event loop / QTimer / MySQL queries

I am writing an application on Mac OS X with Qt.
I have a thread with an event loop. In this thread I run MySQL queries on each tick of a QTimer.
My application randomly crashes with the following backtrace:
#0 0x00e27acd in QMutex::lock ()
#1 0x00f5842d in QMetaObjectPrivate::connect ()
#2 0x00f5897f in QObject::connect ()
#3 0x0134c230 in QMYSQLResult::QMYSQLResult ()
#4 0x0134c2d9 in QMYSQLDriver::createResult ()
#5 0x0006daae in QSqlDatabase::exec ()
What could be the problem?

You should ensure that you set up the MySQL connection in the same thread that performs the queries: a QSqlDatabase connection may only be used from the thread that created it, so creating it in one thread and querying from another leads to exactly this kind of crash.

Related

Codeception handle expected UserException

Running a functional test, as below, for an application in the Yii2 framework.
public function checkEmptyToken2(FunctionalTester $I)
{
    $I->amOnRoute('site/verify-email', ['token' => '']);
    $I->canSee('Email verify token did not come through for some reason');
}
This results in the error below.
Codeception PHP Testing Framework v4.2.1
Powered by PHPUnit 8.5.27
Frontend\tests.functional Tests (1) -------------------------------------------------------------------------------------------------------------------------------------------------------------
E VerifyEmailCest: Check empty token2 (0.01s)
-------------------------------------------------------------------------------------------------------------------------------------------------------------
1) VerifyEmailCest: Check empty token2
Test tests/functional/VerifyEmailCest.php:checkEmptyToken2
[yii\base\UserException] Email verify token did not come through for some reason.
You can either copy/paste the verification email again OR
request a new verification email from the Login page.
Scenario Steps:
1. $I->amOnRoute("site/verify-email",{"token":""}) at tests/functional/VerifyEmailCest.php:41
#1 /data/www/frontend/models/VerifyEmailForm.php:38
#2 /data/www/frontend/controllers/SiteController.php:345
#3 frontend\controllers\SiteController->actionVerifyEmail
#4 /data/www/vendor/yiisoft/yii2/base/InlineAction.php:57
#5 /data/www/vendor/yiisoft/yii2/base/Controller.php:178
#6 /data/www/vendor/yiisoft/yii2/base/Module.php:552
#7 /data/www/vendor/yiisoft/yii2/web/Application.php:103
#8 /data/www/vendor/symfony/browser-kit/Client.php:405
#9 Codeception\Module\Yii2->amOnRoute
#10 /data/www/frontend/tests/_support/_generated/FunctionalTesterActions.php:661
Time: 718 ms, Memory: 16.00 MB
There was 1 error:
---------
ERRORS!
Tests: 1, Assertions: 0, Errors: 1.
I expect the error to be exactly that! All I want Codeception to do is ignore it and move on so I can check for the error text with canSee(). I tried using a try/catch statement, which results in a different error. I also tried expectException, and that did not work either.
The Codeception module will only handle subclasses of yii\web\HttpException as part of the normal request flow.
If you are throwing a yii\base\UserException, then the status code will always be 500, as it is for any other exception.
While I doubt the value of throwing UserExceptions that do not extend HttpException, the Yii error handler supports it, so I'll update the Codeception module to support it as well.

How to fix Cloud SQL (MySQL) & Cloud Functions slow queries

I have an application that, through Firebase Cloud Functions, connects to a Cloud SQL database (MySQL).
The Cloud SQL machine I am using is the free, lowest tier (db-f1-micro: shared core, 1 vCPU, 0.614 GB).
Below I describe the architecture I use to execute a simple query.
I have a file called "database.js" which exports my connection (pool) to the db.
const mysqlPromise = require('promise-mysql');
const cf = require('./config');

const connectionOptions = {
    connectionLimit: cf.config.connection_limit, // 250
    host: cf.config.app_host,
    port: cf.config.app_port,
    user: cf.config.app_user,
    password: cf.config.app_password,
    database: cf.config.app_database,
    socketPath: cf.config.app_socket_path
};

// When neither host nor port is configured, remove the undefined keys
// so the pool connects via the Cloud SQL socketPath instead.
if (!connectionOptions.host && !connectionOptions.port) {
    delete connectionOptions.host;
    delete connectionOptions.port;
}

const connection = mysqlPromise.createPool(connectionOptions);

exports.connection = connection;
Here instead is how I use the connection to execute a query within a callable Cloud Function.
Note that the tables are light (no more than 2K records).
// import connection
const db = require("../Config/database");

// define callable function
exports.getProdottiNegozio = functions
    .region("europe-west1")
    .https.onCall(async (data, context) => {
        const { id } = data;
        try {
            const pool = await db.connection;
            const prodotti = await pool.query(
                `SELECT * FROM products WHERE shop_id=? ORDER BY name`,
                [id]
            );
            return prodotti;
        } catch (error) {
            throw new functions.https.HttpsError("failed-precondition", error);
        }
    });
Everything works correctly, in the sense that the query is executed and returns the expected results, but there is a performance problem.
Query execution is sometimes very slow (up to 10 seconds!).
I have noticed that in the morning queries are sometimes quite fast (about 1 second), but at other times they are very slow and make my application sluggish.
Checking the logs inside the GCP console I noticed that this message appears.
severity: "INFO"
textPayload: "2021-07-30T07:44:04.743495Z 119622 [Note] Aborted connection 119622 to db: 'XXX' user: 'YYY' host: 'cloudsqlproxy~XXX.XXX.XXX.XXX' (Got an error reading communication packets)"
Given all this, I would like some help understanding how to improve the application's performance.
Is it just a Cloud SQL machine problem? Would increasing resources be enough to get decent query execution times?
Or is the problem the architecture of my code and how I organize the functions and the calls to the db?
Thanks in advance to everyone :)
Don't connect directly to your database from an auto-scaling solution:
You shouldn't use an auto-scaling web service (Firebase Functions) to connect to a database directly. Imagine you get 400 requests: that means 400 connections opened to your database if each function instance tries to connect on startup. Your database will start rejecting (or queuing) new connections. Ideally, you should host a service that is permanently online and let the Firebase Function tell that service what to query over an existing connection.
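For illustration, a minimal sketch of that idea, assuming axios as the HTTP client and a hypothetical always-on service URL (neither comes from the original post):

const functions = require('firebase-functions');
const axios = require('axios'); // assumed HTTP client

exports.getProdottiNegozio = functions
    .region('europe-west1')
    .https.onCall(async (data, context) => {
        const { id } = data;
        // The always-on service owns the MySQL pool and runs the query;
        // the function only forwards the request over HTTP.
        const res = await axios.get('https://db-service.example.com/products', {
            params: { shop_id: id }
        });
        return res.data;
    });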
Firebase Functions take their sweet time to start up:
A Firebase Function takes 100~300 ms to start when it cold starts. Add that to your wait time, and more so if your function relies on a connection to something else before it can respond.
Functions have a short lifespan:
You should also know that Firebase Functions don't live very long. They are meant to be single-task microservices; their lifespan is 90 seconds, if I recall correctly. Make sure your query doesn't take longer than that.
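If a query legitimately needs more time, the deadline can be raised per function; a sketch using firebase-functions' runWith (the value is illustrative):

exports.getProdottiNegozio = functions
    .region('europe-west1')
    .runWith({ timeoutSeconds: 300 }) // raise the default deadline
    .https.onCall(async (data, context) => {
        // ... same body as in the question ...
    });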
Specific to your issue:
If your database gets slow during the day, it might be because usage increases.
You are using a shared core, which means you share resources with the other lower-tier databases in that region/zone. You might need to increase resources, such as moving to a dedicated core, or optimize your query(ies). I'd recommend bumping up your CPU; the cost is really low for the small CPU options.
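If you keep connecting from the functions anyway, one cheap mitigation to try (a sketch based on Google's guidance for serverless clients; the exact values are illustrative) is to cap the pool far below the question's 250, so each function instance holds at most one connection:

const connectionOptions = {
    connectionLimit: 1,      // one connection per function instance
    connectTimeout: 10000,   // fail fast instead of hanging
    waitForConnections: true,
    queueLimit: 0,
    user: cf.config.app_user,
    password: cf.config.app_password,
    database: cf.config.app_database,
    socketPath: cf.config.app_socket_path
};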

write callback called multiple times

When I run the gulp task I get the error below. It is probably related to this line: .pipe($.autoprefixer({browser:['last 2 version','> 5%']}))
When I exclude this line, everything works well.
Could you help me, please?
Potentially unhandled rejection [2] Error: write callback called multiple times
gulp.task('styles', function (done) {
    //return
    gulp.src(config.less)
        .pipe($.less())
        .pipe($.autoprefixer({browser: ['last 2 version', '> 5%']}))
        .pipe(gulp.dest(config.temp));
    done();
});
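A likely fix, sketched under two assumptions: gulp-autoprefixer expects a browsers (plural) key with browserslist queries such as 'last 2 versions', and gulp needs the stream returned so the task does not signal completion while data is still being written:

gulp.task('styles', function () {
    // Returning the stream lets gulp wait for the pipeline to finish;
    // calling done() while it is still writing can trigger
    // "write callback called multiple times".
    return gulp.src(config.less)
        .pipe($.less())
        .pipe($.autoprefixer({ browsers: ['last 2 versions', '> 5%'] }))
        .pipe(gulp.dest(config.temp));
});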

AWS Lambda - MySQL caching

I have a Lambda that uses RDS. I wanted to improve it by using Lambda connection caching. I have found several articles and implemented it on my side, to the best of my knowledge. But now I am not sure this is the right way to go.
My Lambda (running Node 8) has several files pulled in with require. I will start from the main function and work down to the MySQL initializer, following the exact path. Everything is kept super simple, showing only the flow of the code that runs MySQL:
Main Lambda:
const jobLoader = require('./Helpers/JobLoader');

exports.handler = async (event, context) => {
    const emarsysPayload = event.Records[0];
    let validationSchema;
    const body = jobLoader.loadJob('JobName');
    ...
    return;
...//
Job Code:
const MySQLQueryBuilder = require('../Helpers/MySqlQueryBuilder');

exports.runJob = async (params) => {
    const data = await MySQLQueryBuilder.getBasicUserData(userId);
MySqlQueryBuilder:
const mySqlConnector = require('../Storage/MySqlConnector');

class MySqlQueryBuilder {
    async getBasicUserData (id) {
        // Note: interpolating id straight into the SQL is open to injection;
        // a parameterized query would be safer.
        let query = `
            SELECT * FROM sometable WHERE id = ${id}
        `;
        return mySqlConnector.runQuery(query);
    }
}
And finally, the connector itself:
const mySqlConnector = require('promise-mysql');

const pool = mySqlConnector.createPool({
    host: process.env.MY_SQL_HOST,
    user: process.env.MY_SQL_USER,
    password: process.env.MY_SQL_PASSWORD,
    database: process.env.MY_SQL_DATABASE,
    port: 3306
});

exports.runQuery = async query => {
    const con = await pool.getConnection();
    // Await the query before releasing the connection, otherwise the
    // connection goes back to the pool while the query may still be running.
    const result = await con.query(query);
    con.release();
    return result;
};
I know that measuring performance will show the actual results, but today is Friday and I will not be able to run this on Lambda until late next week... And really, it would be an awesome start to the weekend to know I am heading in the right direction... or not.
Thanks for the input.
The first thing would be to understand how require works in NodeJS. I do recommend you go through this article if you're interested in knowing more about it.
Now, once you have required your connection, you have it for good and it won't be required again. This matches what you're looking for as you don't want to overwhelm your database by creating a new connection every time.
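A quick sketch of that caching behaviour (file names are illustrative):

// connector.js
console.log('connector loaded'); // printed only once per process
module.exports = { createdAt: Date.now() };

// elsewhere
const a = require('./connector'); // loads and executes connector.js
const b = require('./connector'); // served from the require cache
console.log(a === b); // true: both names point at the same object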
But, there is a problem...
Lambda Cold Starts
Whenever you invoke a Lambda function for the first time, it will spin up a container with your function inside it and keep it alive for approximately 5 mins. It's very likely (although not guaranteed) that you will hit the same container every time as long as you are making 1 request at a time. But what happens if you have 2 requests at the same time? Then another container will be spun up in parallel with the previous, already warmed up container. You have just created another connection on your database and now you have 2 containers. Now, guess what happens if you have 3 concurrent requests? Yes! One more container, which equals one more DB connection.
As long as there are new requests to your Lambda functions, by default they will scale out to meet demand (you can configure a limit in the console to cap execution at as many concurrent executions as you want, respecting your account limits).
You cannot safely make sure you have a fixed amount of connections to your Database by simply requiring your code upon a Function's invocation. The good thing is that this is not your fault. This is just how Lambda functions behave.
...one other approach is
to cache the data you want in a real caching system, like ElastiCache, for example. You could then have one Lambda function triggered by a CloudWatch Event that runs at a certain frequency. This function would then query your DB and store the results in your external cache. This way you make sure your DB connection is only opened by one Lambda at a time, because it respects the CloudWatch Event, which runs only once per trigger.
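A rough sketch of that setup, assuming an ElastiCache Redis endpoint plus the redis and promise-mysql packages (all names and environment variables here are illustrative):

const mysql = require('promise-mysql');
const redis = require('redis');
const { promisify } = require('util');

// Triggered by a scheduled CloudWatch Event, so only one DB connection
// is opened per run.
exports.handler = async () => {
    const con = await mysql.createConnection({
        host: process.env.MY_SQL_HOST,
        user: process.env.MY_SQL_USER,
        password: process.env.MY_SQL_PASSWORD,
        database: process.env.MY_SQL_DATABASE
    });
    const rows = await con.query('SELECT * FROM sometable');
    await con.end();

    // Publish the result where the user-facing Lambdas can read it
    // without touching the database.
    const client = redis.createClient({ host: process.env.REDIS_HOST });
    const setAsync = promisify(client.set).bind(client);
    await setAsync('sometable:all', JSON.stringify(rows));
    client.quit();
};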
EDIT: after the OP sent a link in the comment section, I have decided to add a little more info to clarify what the mentioned article is saying.
From the article:
"Simple. You ARE able to store variables outside the scope of our
handler function. This means that you are able to create your DB
connection pool outside of the handler function, which can then be
shared with each future invocation of that function. This allows for
pooling to occur."
And this is exactly what you're doing. And this works! But the problem is if you have N connections (Lambda Requests) at the same time. If you don't set any limits, by default, up to 1000 Lambda functions can be spun up concurrently. Now, if you then make another 1000 requests simultaneously in the next 5 minutes, it's very likely you won't be opening any new connections, because they have already been opened on previous invocations and the containers are still alive.
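Concretely, the pattern the article describes looks roughly like this with the question's promise-mysql (assuming, as the question's own code does, that createPool resolves through a promise):

const mysql = require('promise-mysql');

// Created once per container, at require time; warm invocations of the
// same container reuse it instead of opening fresh connections.
const poolPromise = mysql.createPool({
    host: process.env.MY_SQL_HOST,
    user: process.env.MY_SQL_USER,
    password: process.env.MY_SQL_PASSWORD,
    database: process.env.MY_SQL_DATABASE,
    connectionLimit: 1 // keeps the per-container connection count predictable
});

exports.handler = async (event) => {
    const pool = await poolPromise;
    return pool.query('SELECT 1');
};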
Adding to the answer above by Thales Minussi, but for a Python Lambda: I am using PyMySQL, and to create a connection pool I added the connection code above the handler in a Lambda that fetches data. Once I did this, I stopped seeing new data that had been added to the DB after an instance of the Lambda was executed. I found bugs reported here and here that are related to this issue.
The solution that worked for me was to add a conn.commit() after the SELECT query execution in the Lambda.
According to the PyMySQL documentation, conn.commit() is supposed to commit any changes, but a SELECT does not make changes to the DB, so I am not sure exactly why this works. (A plausible explanation: with autocommit off, a long-lived connection stays inside a REPEATABLE READ transaction, so repeated SELECTs keep seeing the snapshot taken when the transaction began; commit() ends that transaction, so the next SELECT sees fresh data.)

Release connection after use, connection pool Node.js

I am wondering where the connection should be released after using it. I have seen a couple of options for that:
pool.getConnection(function(err, conn){
    //conn.release() // should be placed here (1)?
    conn.query(query, function(err, result){
        //conn.release() // should be placed here (2)?
        if(!err){
            //conn.release() // should be placed here (3)?
        }
        else{
            //conn.release() // should be placed here (4)?
        }
        //conn.release() // should be placed here (5)?
    });
    //conn.release() // should be placed here (6)?
});
Or should it maybe be released in both the error and non-error cases?
The correct place is either #2 or #5.
You want to release the connection when you are done using it.
#6 would be wrong because query() is asynchronous, so it would return right away before the connection is done with the query and before the callback fires. So you'd be releasing the connection before you are done with it.
#5 is correct because the callback has fired and you have done everything you are going to do with it. Note that this assumes that you do not use return to exit the function before that point. (Some people do that in their if (err) blocks.)
#2 is also correct if you aren't using the connection anywhere inside the callback. If you are using it in the callback, then you don't want to release it before you are done using it.
#3 and #4 are incorrect unless you use both of them. Otherwise, you are only releasing the connection in some situations.
And #1 is incorrect because you haven't used the connection yet.
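Putting that together, option #5 looks like this (a sketch with the question's names; the error handling is illustrative):

pool.getConnection(function (err, conn) {
    if (err) {
        // No connection was handed out, so there is nothing to release.
        return console.error(err);
    }
    conn.query(query, function (err, result) {
        // #5: the query has finished (with or without an error), so the
        // connection can safely go back to the pool in both cases.
        conn.release();
        if (err) {
            return console.error(err);
        }
        // ... use result ...
    });
});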