ER_CON_COUNT_ERROR: Too many connections (knex and bookshelf, MySQL)

I have a simple REST API built with Express, Knex and Bookshelf.
I'm doing some performance tests with JMeter and I've noticed that if I call the API that performs the following query, there is no problem:
public static async fetchById(id: number): Promise<DatasetStats> {
  return DatasetStats.where<DatasetStats>({ id }).fetch();
}
DatasetStats is a Bookshelf model.
But if I set JMeter to call the following, I get Error: ER_CON_COUNT_ERROR: Too many connections after a minute:
import * as knex from 'knex';

@injectable()
export class MyRepo {
  private knex: knex;

  constructor() { this.knex = knex(DatabaseConfig); }

  async fetchResourcesList(datasetName: string): Promise<any> {
    return this.knex.distinct('resource').from(datasetName);
  }
}
Could the problem be that I create a knex object for each request?

Yes. If you create a new knex instance for each request, you cannot control the total number of concurrent connections to the MySQL database. You also won't be able to re-use already-open connections from knex's connection pool, so opening a new TCP connection to the database on every query is highly inefficient. Finally, if you don't destroy your knex instances after the query, connections are left open until some idle timeout and the app will leak memory.
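A minimal sketch of the usual fix, assuming inversify-style DI and the same DatabaseConfig object from the question (the import path is hypothetical): create one knex instance when the module loads and have every repository use that shared instance, so all requests go through a single, bounded connection pool.

import * as knex from 'knex';
import { injectable } from 'inversify';             // assuming inversify-style DI, as in the question
import { DatabaseConfig } from './database-config'; // hypothetical path to the existing config object

// Created once per process; every repository imports this shared instance and its pool.
export const db = knex({ ...DatabaseConfig, pool: { min: 2, max: 10 } });

@injectable()
export class MyRepo {
  // Re-use the shared instance instead of calling knex(DatabaseConfig) in every constructor.
  private knex = db;

  async fetchResourcesList(datasetName: string): Promise<any> {
    return this.knex.distinct('resource').from(datasetName);
  }
}

If per-request instances are really needed for some reason, call destroy() on each one when you are done with it, but a single shared instance is the simplest way to keep the connection count bounded.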

Gracefully closing connection of DB using TypeORM in NestJs

Before I go deep into the problem, let me explain the basics of my app.
My app has connections to a DB (TypeORM) and to Kafka (kafkajs).
The app is the consumer of one topic and it:
Gets some data in the callback handler and puts that data into a table using a TypeORM entity.
Maintains a global map (in a singleton instance of a class) keyed by some id (which I get in the data from point 1).
When the app is shutting down, my tasks are:
Disconnect all the consumers of the topics (that this service is connected to) from Kafka.
Traverse the global map (point 2) and re-park the messages in some topic.
Disconnect the DB connections using the close method.
Here are some pieces of code that might help you understand how I added the lifecycle events on the server in NestJS.
system.server.life.cycle.events.ts
@Injectable()
export class SystemServerLifeCycleEventsShared implements BeforeApplicationShutdown {
  constructor(@Inject(WINSTON_MODULE_PROVIDER) private readonly logger: Logger, private readonly someService: SomeService) {}

  async beforeApplicationShutdown(signal: string) {
    const [err] = await this.someService.handleAbruptEnding();
    if (err) this.logger.info(`beforeApplicationShutdown, error::: ${JSON.stringify(err)}`);
    this.logger.info(`beforeApplicationShutdown, signal ${signal}`);
  }
}
some.service.ts
export class SomeService {
  constructor(private readonly kafkaConnector: KafkaConnector, private readonly postgresConnector: PostgresConnector) {}

  public async handleAbruptEnding(): Promise<any> {
    await this.kafkaConnector.disconnectAllConsumers();
    for (READ_FROM_GLOBAL_STORE) {
      await this.kafkaConnector.function.call.to.repark.the.message();
    }
    await this.postgresConnector.disconnectAllConnections();
    return true;
  }
}
postgres.connector.ts
export class PostgresConnector {
  private connectionManager: ConnectionManager;

  constructor() {
    this.connectionManager = getConnectionManager();
  }

  public async disconnectAllConnections(): Promise<void[]> {
    const connectionClosePromises: Promise<void>[] = [];
    this.connectionManager.connections?.forEach((connection) => {
      if (connection.isConnected) connectionClosePromises.push(connection.close());
    });
    return Promise.all(connectionClosePromises);
  }
}
ConnectionManager and getConnectionManager() are imported from the TypeORM module.
Now here are some unusual exceptions / behaviors I am facing:
disconnectAllConnections is throwing the following exception/error:
ERROR [TypeOrmModule] Cannot execute operation on "default" connection because connection is not yet established.
If the connection is not yet established, how did isConnected come back as true inside the if? I can't find any clue as to how this is possible, or how to do a graceful shutdown of the connection in TypeORM.
Do we really need to handle closing the connection in TypeORM, or does it handle it internally?
Even if TypeORM handles the connection closure internally, how could we achieve it explicitly?
Is there any callback that is triggered when the connection is disconnected properly, so that I can be sure the disconnection actually happened on the DB side?
Some of the log messages appear after I press CTRL + C (mimicking an abrupt closure of my server process) and control has already returned to the terminal. This means some work is still running after the handler returns (no clue how I would handle this, since handleAbruptEnding is awaited and I cross-checked that all the promises are awaited properly).
Some things to know:
I properly added my module to create the hooks for the server lifecycle events.
The objects are injected properly in almost all the classes.
I am not getting any DI issues from Nest and the server starts properly.
Please shed some light and let me know how I can gracefully disconnect from the DB using the TypeORM API inside NestJS in case of an abrupt closure.
Thanks in advance and happy coding :)
A little bit late, but it may help someone.
You are missing the keepConnectionAlive param set to true in TypeOrmModuleOptions; TypeORM does not keep connections alive by default. I set keepConnectionAlive to false, and if a transaction keeps the connection open I close the connection myself (TypeORM waits until the transaction or other process finishes before closing the connection). This is my implementation:
import { Logger, Injectable, OnApplicationShutdown } from '@nestjs/common';
import { getConnectionManager } from 'typeorm';

@Injectable()
export class LifecyclesService implements OnApplicationShutdown {
  private readonly logger = new Logger();

  onApplicationShutdown(signal: string) {
    this.logger.warn('SIGTERM: ', signal);
    this.closeDBConnection();
  }

  closeDBConnection() {
    const conn = getConnectionManager().get();
    if (conn.isConnected) {
      conn
        .close()
        .then(() => {
          this.logger.log('DB conn closed');
        })
        .catch((err: any) => {
          this.logger.error('Error closing conn to DB, ', err);
        });
    } else {
      this.logger.log('DB conn already closed.');
    }
  }
}
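Worth noting: NestJS only fires onApplicationShutdown / beforeApplicationShutdown on termination signals if shutdown hooks are enabled, which they are not by default. A minimal bootstrap sketch, assuming a standard AppModule:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Required so that SIGINT/SIGTERM trigger onApplicationShutdown / beforeApplicationShutdown.
  app.enableShutdownHooks();
  await app.listen(3000);
}
bootstrap();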
I discovered some TypeORM docs saying "Disconnection (closing all connections in the pool) is made when close is called"
Here: https://typeorm.biunav.com/en/connection.html#what-is-connection
I tried export const AppDataSource = new DataSource({ /* details */ }), then imported it and did:
import { AppDataSource } from "../../src/db/data-source";

function closeConnection() {
  console.log("Closing connection to db");
  // AppDataSource.close(); // said "deprecated - use destroy() instead"
  AppDataSource.destroy(); // hence I did this
}

export default closeConnection;
Maybe this will save someone some time
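One small follow-up on the snippet above: destroy() returns a Promise, so if you need to be sure the pool is actually drained before the process exits, it helps to await it, for example from a signal handler. A sketch under that assumption, reusing the same data-source import:

import { AppDataSource } from "../../src/db/data-source";

async function closeConnection(): Promise<void> {
  console.log("Closing connection to db");
  if (AppDataSource.isInitialized) {
    await AppDataSource.destroy(); // resolves once the pooled connections are closed
  }
}

// Keep the process alive until the pool is drained, then exit.
process.on("SIGINT", async () => {
  await closeConnection();
  process.exit(0);
});

export default closeConnection;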

Connection Pooling in AWS Lambda with RDS?

I need an efficient MySQL database connection in AWS Lambda (using Node.js), one that does not create a connection/pool for every request but reuses it instead.
One solution I found is opening the connection outside the AWS Lambda handler. The problem with this is that if we don't end the connection, we end up with a timeout.
e.g.
"use strict";
var db = require('./db');
exports.handler = (event, context, callback) => {
db.connect(function (conn) {
if (conn == null) {
console.log("Database connection failed: ");
callback("Error", "Database connection failed");
} else {
console.log('Connected to database.');
conn.query("INSERT INTO employee(name,salary) VALUE(?,?)",['Joe',8000], function(err,res){
if(err) throw err;
else {
console.log('A new employee has been added.');
}
});
db.getConnection().end();
callback(null, "Database connection done");
}
});
};
The most reliable way of handling database connections in AWS Lambda is to connect and disconnect from the database within the invocation itself, which is what your code is already doing.
There are known ways to reuse an existing connection, but success rates vary widely depending on database server configuration (idle timeouts, etc.) and production load.
Also, in the context of AWS Lambda, reusing database connections does not give you as much performance benefit because of the way scaling works in Lambda.
In an always-on server app, for example, concurrent and subsequent requests use and share the same connection or connection pool.
In Lambda, however, concurrent requests are handled by different execution environments, each with its own connection to the database. 10 concurrent requests will spin up 10 separate environments connecting to your database, so reusing connections or connection pools won't be of any help there.
To solve your problem, use:
context.callbackWaitsForEmptyEventLoop = false;
The timeout happens because the event loop is not empty, as a result of the code outside the handler. This setting allows the callback to end the Lambda's execution immediately instead of waiting for the event loop to drain. Your full code would look something like this:
var db = require('./db');

exports.handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  db.connect(function (conn) {
    // .. rest of your code that calls the callback
  });
};
For more information, check this blog post by Jeremy Daly:
https://www.jeremydaly.com/reuse-database-connections-aws-lambda/
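If you still want to try reusing a connection across warm invocations, a common pattern is to cache a lazily-created connection outside the handler and pair it with callbackWaitsForEmptyEventLoop = false. A minimal sketch, assuming the mysql2 driver in place of the question's ./db module and hypothetical environment variables for the credentials:

import * as mysql from 'mysql2/promise';

// Cached across warm invocations of the same execution environment.
let connection: mysql.Connection | undefined;

async function getConnection(): Promise<mysql.Connection> {
  if (!connection) {
    connection = await mysql.createConnection({
      host: process.env.DB_HOST,         // hypothetical environment variables
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    });
  }
  return connection;
}

export const handler = (event: any, context: any, callback: (err: any, res?: any) => void) => {
  // Return as soon as callback is called, even though the cached TCP connection stays open.
  context.callbackWaitsForEmptyEventLoop = false;
  getConnection()
    .then((conn) => conn.execute('INSERT INTO employee(name, salary) VALUES (?, ?)', ['Joe', 8000]))
    .then(() => callback(null, 'A new employee has been added.'))
    .catch((err) => callback(err));
};

Bear in mind that the cached connection can go stale between invocations (the server may drop idle connections), so production code still needs reconnect handling; the linked post covers a more robust approach.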

multiple SockJs connections

I am trying to create a handler for SockJS connections which handles multiple users simultaneously, like this:
@Bean
public WebSocketHandler snakeHandler() { return new PerConnectionWebSocketHandler(SnakeHandler.class); }
And with this code on the client side:
Game.socket = new SockJS('/snake/snake');
Game.socket.onopen = function () {
  Console.log('Info: WebSocket connection opened.');
  Console.log('Info: Press an arrow key to begin.');
};
The problem is with the subsequent connections, not the first. When I connect the first time, I get a single WebSocketSession. But each subsequent client gets 3 connections: WebSocketSession, XhrStreamingSockJsSession and PollingSockJsSession, and no "Info: WebSocket connection opened." message appears in the browser console; only the first client gets it.
Any ideas?

Does the ConnectionPool from SqlJocky require a close

I'm creating a back-end server application in Dart which uses a MySQL database to store data. To make the SQL calls I'm using the ConnectionPool from SqlJocky.
What I do when the app starts:
Create a singleton which stores the ConnectionPool
Execute multiple queries with prepareExecute and query
Locally this approach works fine. Now I've pushed a development version to Heroku and I'm getting connection issues after a few minutes.
So I wonder: do I need to close/release a single connection from the pool that I use to execute a query? Or is the connection placed back in the pool after the query and free for use?
The abstract base class for all the MySQL stores:
abstract class MySQLStore {
  MySQLStore(ConnectionPool connectionPool) {
    this._connectionPool = connectionPool;
  }

  ConnectionPool get connectionPool => this._connectionPool;

  ConnectionPool _connectionPool;
}
A concrete implementation for the method getAll:
Future<List<T>> getAll() async {
  Completer completer = new Completer();
  connectionPool.query("SELECT id, name, description FROM role").then((result) {
    return result.toList();
  }).then((rows) {
    completer.complete(this._processRows(rows));
  }).catchError((error) {
    // TODO: Better error handling.
    print(error);
    completer.complete(null);
  });
  return completer.future;
}
The error I get:
SocketException: OS Error: Connection timed out, errno = 110, address = ...
This doesn't fully answer your question, but I think you could simplify your code like this:
Future<List<T>> getAll() async {
  try {
    var result = await connectionPool.query(
        "SELECT id, name, description FROM role");
    return this._processRows(await result.toList());
  } catch (error) {
    // TODO: Better error handling.
    print(error);
    return null;
  }
}
I'm sure there is no need to close a connection used with query; I don't know about prepareExecute, though.
According to a comment in the SqlJocky code, it can take quite some time for a connection to be released by the database server.
Maybe you need to increase the connection pool size (the default is 5) so you don't run out of connections while ConnectionPool is waiting for connections to be released.
After some feedback from Heroku I managed to resolve this problem by implementing a timer task that makes a basic MySQL call every 50 seconds.
The response from Heroku:
Heroku's networking enforces an idle timeout of 60-90 seconds to prevent runaway processes. If you're using persistent connections in your application, make sure that you're sending a keep-alive at, say, 55 seconds to prevent your open connection from being dropped by the server.
The workaround code:
const duration = const Duration(seconds: 50);

new Timer.periodic(duration, (Timer t) {
  // Do a simple MySQL call on the connection pool.
  this.connectionPool.execute('SELECT id from role');
  print('*** Keep alive triggered for MySQL heroku ***');
});

How to serialize a JDBC connection for Spark node distribution in a foreach

My end goal is to get Apache Spark to use a JDBC connection to a MySQL database for transporting mapped RDD data in Scala. Going about this has led to an error explaining that the simple JDBC code I'm using could not be serialized. How do I allow the JDBC class to be serialized?
Typically, the DB session in a driver cannot be serialized because it involves threads and open TCP connections to the underlying DB.
As @aaronman mentions, the easiest way at the moment is to include the creation of the driver connection in the closure, in a per-partition foreach. That way you won't have serialization issues with the driver.
This is skeleton code of how this can be done:
rdd.foreachPartition {
  msgIterator => {
    val cluster = Cluster.builder.addContactPoint(host).build()
    val session = cluster.connect(db)
    msgIterator.foreach { msg =>
      ...
      session.execute(statement)
    }
    session.close
  }
}
As SparkSQL continues to evolve, I expect to have improved support for DB connectivity coming in the future. For example, DataStax created a Cassandra-Spark driver that abstracts out the connection creation per worker in an efficient way, improving on resource usage.
Also look at JdbcRDD, which takes the connection handling as a function (executed on the workers).
A JDBC connection object is associated with a specific TCP connection and socket port, and hence cannot be serialized. So you should create the JDBC connection in the remote executor JVM process, not in the driver JVM process.
One way of achieving this is to have the connection object as a field in a singleton object in Scala (or a static field in Java), as shown below. In the snippet below, the statement val session = ExecutorSingleton.session is not executed in the driver; the statement is shipped off to the executor, where it is executed.
case class ConnectionProfile(host: String, username: String, password: String)

object ExecutorSingleton {
  var profile: ConnectionProfile = _
  lazy val session = createJDBCSession(profile)
  def createJDBCSession(profile: ConnectionProfile) = { ... }
}

rdd.foreachPartition {
  msgIterator => {
    ExecutorSingleton.profile = ConnectionProfile("host", "username", "password")
    msgIterator.foreach { msg =>
      val session = ExecutorSingleton.session
      session.execute(msg)
    }
  }
}