Does the ConnectionPool from SqlJocky require a close?

I'm creating a back-end server application in Dart which uses a MySQL database to store data. For the SQL calls I'm using the ConnectionPool from SqlJocky.
What I do when the app starts:
Create a singleton which stores the ConnectionPool
Execute multiple queries with prepareExecute and query
Locally this approach works fine. Now I've pushed a development version to Heroku and I'm getting connection issues after a few minutes.
So I wonder: do I need to close/release a single connection from the pool after using it to execute a query? Or is the connection returned to the pool after the query and free for reuse?
The abstract base class for all the MySQL stores:
abstract class MySQLStore {
  MySQLStore(ConnectionPool connectionPool) {
    this._connectionPool = connectionPool;
  }

  ConnectionPool get connectionPool => this._connectionPool;

  ConnectionPool _connectionPool;
}
A concrete implementation for the method getAll:
Future<List<T>> getAll() async {
  Completer completer = new Completer();
  connectionPool.query("SELECT id, name, description FROM role").then((result) {
    return result.toList();
  }).then((rows) {
    completer.complete(this._processRows(rows));
  }).catchError((error) {
    // TODO: Better error handling.
    print(error);
    completer.complete(null);
  });
  return completer.future;
}
The error I get:
SocketException: OS Error: Connection timed out, errno = 110, address = ...

This doesn't fully answer your question, but I think you could simplify your code like this:
Future<List<T>> getAll() async {
  try {
    var result = await connectionPool.query(
        "SELECT id, name, description FROM role");
    return this._processRows(await result.toList());
  } catch (error) {
    // TODO: Better error handling.
    print(error);
    return null;
  }
}
I'm sure there is no need to close a connection after query. I don't know about prepareExecute, though.
According to a comment in the SqlJocky code, it can take quite some time for a connection to be released by the database server.
Maybe you need to increase the connection pool size (the default is 5) so you don't run out of connections while ConnectionPool is waiting for connections to be released, as sketched below.
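A minimal sketch of a larger pool, assuming sqljocky's ConnectionPool constructor with its max parameter; the host and credentials are placeholders:
import 'package:sqljocky/sqljocky.dart';

// A bigger pool so queries don't queue while the server is still
// releasing old connections. Connection details are placeholders.
var pool = new ConnectionPool(
    host: 'localhost',
    port: 3306,
    user: 'dbuser',
    password: 'dbpassword',
    db: 'mydb',
    max: 10); // the default is 5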

After some feedback from Heroku I managed to resolve this problem by implementing a timer task that performs a basic MySQL call every 50 seconds.
The response from Heroku:
Heroku's networking enforces an idle timeout of 60-90 seconds to prevent runaway processes. If you're using persistent connections in your application, make sure that you're sending a keep-alive at, say, 55 seconds to prevent your open connection from being dropped by the server.
The work around code:
const duration = const Duration(seconds: 50);
new Timer.periodic(duration, (Timer t) {
  // Do a simple MySQL call on the connection pool.
  this.connectionPool.execute('SELECT id from role');
  print('*** Keep alive triggered for MySQL heroku ***');
});

Related

Gracefully closing connection of DB using TypeORM in NestJs

Before I go deep into the problem, let me explain the basics of my app.
I have connections to a DB (TypeORM) and to Kafka (kafkajs) in my app.
My app is the consumer of one topic, which:
Gets some data in the callback handler, and puts that data in one table using a TypeORM entity
Maintains a global map (in a singleton instance of a class) with some id (that I get in the data of point 1).
When the app is shutting down, my tasks are:
Disconnect all the consumers of the topics (this service is connected to) from Kafka
Traverse the global map (point 2) and re-park the messages in some topic
Disconnect the DB connections using the close method.
Here are some pieces of code that might help you understand how I added the lifecycle events on the server in NestJS.
system.server.life.cycle.events.ts
@Injectable()
export class SystemServerLifeCycleEventsShared implements BeforeApplicationShutdown {
  constructor(@Inject(WINSTON_MODULE_PROVIDER) private readonly logger: Logger, private readonly someService: SomeService) {}

  async beforeApplicationShutdown(signal: string) {
    const [err] = await this.someService.handleAbruptEnding();
    if (err) this.logger.info(`beforeApplicationShutdown, error::: ${JSON.stringify(err)}`);
    this.logger.info(`beforeApplicationShutdown, signal ${signal}`);
  }
}
some.service.ts
export class SomeService {
  constructor(private readonly kafkaConnector: KafkaConnector, private readonly postgresConnector: PostgresConnector) {}

  public async handleAbruptEnding(): Promise<any> {
    await this.kafkaConnector.disconnectAllConsumers();
    for (READ_FROM_GLOBAL_STORE) { // pseudocode: iterate the global map
      await this.kafkaConnector.function.call.to.repark.the.message();
    }
    await this.postgresConnector.disconnectAllConnections();
    return true;
  }
}
postgres.connector.ts
export class PostgresConnector {
  private connectionManager: ConnectionManager;

  constructor() {
    this.connectionManager = getConnectionManager();
  }

  public async disconnectAllConnections(): Promise<void[]> {
    const connectionClosePromises: Promise<void>[] = [];
    this.connectionManager.connections?.forEach((connection) => {
      if (connection.isConnected) connectionClosePromises.push(connection.close());
    });
    return Promise.all(connectionClosePromises);
  }
}
ConnectionManager and getConnectionManager() are imported from the TypeORM module.
Now here are some unusual exceptions / behaviors I am facing:
disconnectAllConnections is throwing an exception/error, as quoted:
ERROR [TypeOrmModule] Cannot execute operation on "default" connection because connection is not yet established.
If the connection is not yet established, how come isConnected was true inside the if? I can't find any clue as to how this is possible, or how to do a graceful shutdown of the connection in TypeORM.
Do we really need to handle the closure of the connection in TypeORM, or does it handle it internally?
Even if TypeORM handles the connection closure internally, how could we achieve it explicitly?
Is there any callback that can be triggered when the connection is disconnected properly, so that I can be sure the disconnection actually happened on the DB side?
Some of the messages arrive after I press CTRL + C (mimicking an abrupt closure of my server's process) and control has returned to the terminal. This means some thread comes back after the handler returns to my terminal (no clue how I would handle this, since my handleAbruptEnding is awaited and I cross-checked that all the promises are properly awaited).
Some things to know:
I properly added my module to create the hooks for the server lifecycle events.
I injected the objects in almost all the classes properly.
I'm not getting any DI issue from Nest, and the server starts properly.
Please shed some light and let me know how I can gracefully disconnect from the db using the TypeORM API inside NestJS in case of abrupt closure.
Thanks in advance and happy coding :)
A little bit late, but it may help someone.
You are missing the param keepConnectionAlive set to true in TypeOrmModuleOptions; TypeORM doesn't keep connections alive by default. I set keepConnectionAlive to false; if a transaction keeps the connection open, I'm going to close the connection (TypeORM waits until the transaction or other process finishes before closing the connection). This is my implementation:
import { Logger, Injectable, OnApplicationShutdown } from '@nestjs/common';
import { getConnectionManager } from 'typeorm';

@Injectable()
export class LifecyclesService implements OnApplicationShutdown {
  private readonly logger = new Logger();

  onApplicationShutdown(signal: string) {
    this.logger.warn('SIGTERM: ', signal);
    this.closeDBConnection();
  }

  closeDBConnection() {
    const conn = getConnectionManager().get();
    if (conn.isConnected) {
      conn
        .close()
        .then(() => {
          this.logger.log('DB conn closed');
        })
        .catch((err: any) => {
          this.logger.error('Error closing conn to DB, ', err);
        });
    } else {
      this.logger.log('DB conn already closed.');
    }
  }
}
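For reference, here is roughly where keepConnectionAlive sits in the NestJS module options; this is only a sketch, with placeholder connection details:
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username: 'app',
      password: 'secret',
      database: 'appdb',
      // Default is false: the connection is closed when the module shuts down.
      // Set it to true only if you want the connection kept alive (e.g. across
      // hot reloads); then you must close it yourself, as above.
      keepConnectionAlive: false,
    }),
  ],
})
export class AppModule {}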
I discovered some TypeORM docs saying "Disconnection (closing all connections in the pool) is made when close is called"
Here: https://typeorm.biunav.com/en/connection.html#what-is-connection
I tried export const AppDataSource = new DataSource({ /* details */ }), imported it, and did:
import { AppDataSource } from "../../src/db/data-source";

function closeConnection() {
  console.log("Closing connection to db");
  // AppDataSource.close(); // said "deprecated - use destroy() instead"
  AppDataSource.destroy(); // hence I did this
}

export default closeConnection;
Maybe this will save someone some time

How to reconnect using Pool in nodejs' promise-mysql if the connection went down?

I am using the promise-mysql Node.js library to make awaitable operations against a MySQL server.
Presently I have a simple config:
let pool = await mysql.createPool(dbConfig);
let connection = await pool.getConnection();
/// after the connection is made on program start, once,
/// some querying using the connection follows
However, this single connection eventually goes down. Question: how do I implement automatic handling of a disconnect, bringing up a new connection if the current one is dropped?
This is the working solution for me:
pool.getConnection((err, connection) => {
  if (err) {
    console.log('Error while connecting ', err)
  } else {
    if (connection) connection.release()
    console.log('Database Connected Successfully!')
  }
})
Replace let connection = await pool.getConnection(); with the code above, and execute your queries with the pool, not with a single connection. That way, if one connection goes down or is busy, your query is executed with another connection from the pool, as sketched below.
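A minimal sketch of querying through the pool, assuming promise-mysql's pool.query (which checks out a connection, runs the query, and releases the connection back automatically); dbConfig and the role table are placeholders:
import * as mysql from 'promise-mysql';

async function listRoles(dbConfig: any): Promise<any[]> {
  const pool = await mysql.createPool(dbConfig);
  // No manual getConnection()/release(): the pool handles checkout and return,
  // and replaces a connection that the server has dropped.
  return pool.query('SELECT id, name FROM role');
}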

Connection Pooling in AWS Lambda with RDS?

I need an effective MySQL database connection in AWS Lambda (using Node.js),
one which does not create a connection/pool for every request, but instead reuses it.
One solution I found is opening the connection outside the AWS Lambda handler. The problem with this is that if we don't end the connection, we end up with a timeout.
e.g.
"use strict";
var db = require('./db');
exports.handler = (event, context, callback) => {
db.connect(function (conn) {
if (conn == null) {
console.log("Database connection failed: ");
callback("Error", "Database connection failed");
} else {
console.log('Connected to database.');
conn.query("INSERT INTO employee(name,salary) VALUE(?,?)",['Joe',8000], function(err,res){
if(err) throw err;
else {
console.log('A new employee has been added.');
}
});
db.getConnection().end();
callback(null, "Database connection done");
}
});
};
The most reliable way of handling database connections in AWS Lambda is to connect and disconnect from the database within the invocation itself which is what your code is already doing.
There are known ways to reuse an existing connection but success rates for that vary widely depending on database server configuration (idle connections, etc.) and production load.
Also, in the context of AWS Lambda, reusing database connections does not give you as much performance benefit due to the way how scaling works in Lambda.
In an always-on server app, for example, concurrent and subsequent requests use and share the same connection or connection pool.
In Lambda, however, concurrent requests are handled by different instances, each of them holding its own connection to the database. 10 concurrent requests will spin up 10 separate instances connecting to your database. Reusing connections or connection pools won't be of any help here.
To solve your problem, use:
context.callbackWaitsForEmptyEventLoop = false;
The reason a timeout is happening is that the event loop is not empty, as a result of the code outside of the handler. This change allows the callback to end the Lambda's execution immediately. Your full code would look something like this:
var db = require('./db');

exports.handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  db.connect(function (conn) {
    // .. rest of your code that calls the callback
  });
};
For more information, check this blog post by Jeremy Daly:
https://www.jeremydaly.com/reuse-database-connections-aws-lambda/

ER_CON_COUNT_ERROR: Too many connections knex and bookshelf

I have a simple REST API built with Express, Knex and Bookshelf.
I'm doing some performance tests with JMeter and I've noticed that if I call the API that performs the following query, there is no problem:
public static async fetchById(id: number): Promise<DatasetStats> {
  return DatasetStats.where<DatasetStats>({ id }).fetch();
}
DatasetStats is a Bookshelf model.
But if I set JMeter to call the following, I get Error: ER_CON_COUNT_ERROR: Too many connections after a minute:
import * as knex from 'knex';

@injectable()
export class MyRepo {
  private knex: knex;

  constructor() { this.knex = knex(DatabaseConfig); }

  async fetchResourcesList(datasetName: string): Promise<any> {
    return this.knex.distinct('resource').from(datasetName);
  }
}
Could the problem be that I create a knex object for each request?
Yes. If you create a new knex instance for each request, you cannot control the total number of concurrent connections to the MySQL db, and you won't be able to re-use already open connections from knex's connection pool, so it is highly inefficient to open a new TCP connection to the database on every query. Also, if you don't destroy your knex instances after the query, connections will be left open until some idle timeout, and the app will leak memory. Create one instance up front and share it, as sketched below.
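A minimal sketch of a single shared knex instance; the connection settings and pool sizes are placeholders, and the @injectable decorator is assumed to come from inversify as in the question:
import * as knex from 'knex';
import { injectable } from 'inversify'; // assumed source of @injectable

// One knex instance for the whole process; knex manages an internal pool.
const db = knex({
  client: 'mysql',
  connection: { host: 'localhost', user: 'app', password: 'secret', database: 'appdb' },
  pool: { min: 2, max: 10 }, // caps concurrent connections to the server
});

@injectable()
export class MyRepo {
  // Reuse the shared instance instead of calling knex(DatabaseConfig) per repository.
  async fetchResourcesList(datasetName: string): Promise<any> {
    return db.distinct('resource').from(datasetName);
  }
}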

vert.x async jdbc doesn't close connections

I'm implementing a RESTful API with Vert.x. I used its async JDBC client with MySQL and c3p0 as the connection pool.
My problem is that although the closeConnection handler succeeds, the actual database connection is not closed and reused. The pool maxes out in seconds, resulting in: BasicResourcePool:204 - acquire test -- pool is already maxed out. [managed: 20; max: 20]
client.getConnection(connectionAsyncResult -> {
  SQLConnection connection = connectionAsyncResult.result();
  connection.queryWithParams("SELECT * FROM AIRPORTS WHERE ID = ?", new JsonArray().add(id), select -> {
    ResultSet resultSet = select.result();
    Airport $airport = resultSet.getRows()
        .stream()
        .map(Airport::new)
        .findFirst()
        .get();
    asyncResultHandler.handle(Future.succeededFuture($airport));
    connection.close(closeHandler -> {
      if (closeHandler.succeeded()) {
        LOG.debug("Database Connection closed");
      }
      else if (closeHandler.failed()) {
        LOG.error("Database Connection failed to close!");
      }
    });
  });
});
Any idea what I'm missing?
All the best!
You're probably facing an exception in your handler. If that happens, the last lines are not executed, so close is never called and the connection is not returned to the pool.
You should wrap your code in a try/finally block to guarantee the connection is returned to the pool, as sketched below.
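A minimal sketch of that pattern, reusing the names from the question; the assumption is that select.result() or the empty-stream get() is what throws:
connection.queryWithParams("SELECT * FROM AIRPORTS WHERE ID = ?", new JsonArray().add(id), select -> {
  try {
    // Either of these can throw: result() is null after a failed query,
    // and findFirst().get() throws when no row matched.
    Airport airport = select.result().getRows()
        .stream()
        .map(Airport::new)
        .findFirst()
        .get();
    asyncResultHandler.handle(Future.succeededFuture(airport));
  } catch (Exception e) {
    asyncResultHandler.handle(Future.failedFuture(e));
  } finally {
    // Runs on both paths, so the connection always goes back to the pool.
    connection.close();
  }
});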