Gracefully closing connection of DB using TypeORM in NestJs - exception

So, before I go deep into the problem, let me explain the basics of my app.
I have connections to a DB (TypeORM) and to Kafka (kafkajs) in my app.
My app is the consumer of one topic, which:
1. Gets some data in the callback handler and puts that data into a table using a TypeORM entity.
2. Maintains a global map (in a singleton instance of a class) keyed by an id that comes with the data from point 1.
When the app is shutting down, my task is to:
1. Disconnect all the consumers of the topics (this service is connected to) from Kafka.
2. Traverse the global map (point 2) and repark each message to some topic.
3. Disconnect the DB connections using the close method.
Here are some pieces of code that might help you understand how I added the lifecycle events on the server in NestJS.
system.server.life.cycle.events.ts
@Injectable()
export class SystemServerLifeCycleEventsShared implements BeforeApplicationShutdown {
  constructor(@Inject(WINSTON_MODULE_PROVIDER) private readonly logger: Logger, private readonly someService: SomeService) {}

  async beforeApplicationShutdown(signal: string) {
    const [err] = await this.someService.handleAbruptEnding();
    if (err) this.logger.info(`beforeApplicationShutdown, error::: ${JSON.stringify(err)}`);
    this.logger.info(`beforeApplicationShutdown, signal ${signal}`);
  }
}
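For context, NestJS only invokes shutdown hooks such as beforeApplicationShutdown on termination signals when they are explicitly enabled during bootstrap. A minimal sketch, assuming the usual main.ts layout (module name and port are placeholders):
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Without this call, Nest does not subscribe to SIGINT/SIGTERM and the
  // shutdown lifecycle hooks never fire on CTRL + C.
  app.enableShutdownHooks();
  await app.listen(3000);
}
bootstrap();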
some.service.ts
export class SomeService {
  constructor(private readonly kafkaConnector: KafkaConnector, private readonly postgresConnector: PostgresConnector) {}

  public async handleAbruptEnding(): Promise<any> {
    await this.kafkaConnector.disconnectAllConsumers();
    // pseudocode: iterate the global store (point 2) and repark each message
    for (READ_FROM_GLOBAL_STORE) {
      await this.kafkaConnector.function.call.to.repark.the.message();
    }
    await this.postgresConnector.disconnectAllConnections();
    return true;
  }
}
postgres.connector.ts
export class PostgresConnector {
  private connectionManager: ConnectionManager;

  constructor() {
    this.connectionManager = getConnectionManager();
  }

  public async disconnectAllConnections(): Promise<void[]> {
    const connectionClosePromises: Promise<void>[] = [];
    this.connectionManager.connections?.forEach((connection) => {
      if (connection.isConnected) connectionClosePromises.push(connection.close());
    });
    return Promise.all(connectionClosePromises);
  }
}
ConnectionManager and getConnectionManager() are imported from the TypeORM module.
Now here are some unusual exceptions/behaviors I am facing:
Disconnecting all the connections throws the following error:
ERROR [TypeOrmModule] Cannot execute operation on "default" connection because connection is not yet established.
If the connection is not yet established, how did isConnected come back true inside my if? I can find no clue anywhere as to how this is possible, nor how to do a graceful shutdown of a connection in TypeORM.
Do we really need to handle closing the connection in TypeORM, or does it handle it internally?
Even if TypeORM handles connection closure internally, how could we trigger it explicitly?
Is there any callback that is triggered when the connection is disconnected properly, so that I can be sure the disconnection actually happened on the DB side?
Some messages are logged after I press CTRL + C (mimicking an abrupt closure of my server process) and control has already returned to the terminal. This means some work continues after the handler returns (🤷, no clue how I would handle this, since handleAbruptEnding is awaited and I cross-checked that all the promises are properly awaited).
Some things to know:
I properly added my module to create the hooks for the server lifecycle events.
The objects are injected properly in almost all the classes.
I am not getting any DI issues from Nest, and the server starts properly.
Please shed some light and let me know how I can gracefully disconnect from the DB using the TypeORM API inside NestJS in the case of an abrupt closure.
Thanks in advance and happy coding :)

A little bit late, but this may help someone.
You are missing the param keepConnectionAlive set to true in TypeOrmModuleOptions; TypeORM doesn't keep connections alive by default. I set keepConnectionAlive to false, and if a transaction keeps the connection open, I close the connection myself (TypeORM waits until the transaction or other process finishes before closing the connection). This is my implementation:
import { Logger, Injectable, OnApplicationShutdown } from '@nestjs/common';
import { getConnectionManager } from 'typeorm';

@Injectable()
export class LifecyclesService implements OnApplicationShutdown {
  private readonly logger = new Logger();

  async onApplicationShutdown(signal: string) {
    this.logger.warn('SIGTERM: ', signal);
    // await so that Nest waits for the close to finish before exiting
    await this.closeDBConnection();
  }

  closeDBConnection() {
    const conn = getConnectionManager().get();
    if (conn.isConnected) {
      return conn
        .close()
        .then(() => {
          this.logger.log('DB conn closed');
        })
        .catch((err: any) => {
          this.logger.error('Error closing conn to DB, ', err);
        });
    } else {
      this.logger.log('DB conn already closed.');
    }
  }
}
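For reference, keepConnectionAlive lives in the options object passed to TypeOrmModule.forRoot; a minimal sketch (the connection details are placeholders):
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username: 'user',
      password: 'pass',
      database: 'db',
      // false (the default) lets Nest close the connection on shutdown;
      // true keeps it open across app restarts (e.g. for hot reload).
      keepConnectionAlive: false,
    }),
  ],
})
export class AppModule {}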

I discovered some TypeORM docs saying "Disconnection (closing all connections in the pool) is made when close is called"
Here: https://typeorm.biunav.com/en/connection.html#what-is-connection
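With the pre-DataSource API those docs describe, closing the pool explicitly looks roughly like this (a sketch using the legacy getConnection helper):
import { getConnection } from 'typeorm';

async function closeDefaultConnection(): Promise<void> {
  // close() tears down every pooled connection of the "default" connection
  await getConnection().close();
}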
I tried export const AppDataSource = new DataSource({ // details }) and then importing it and doing:
import { AppDataSource } from "../../src/db/data-source";

function closeConnection() {
  console.log("Closing connection to db");
  // AppDataSource.close(); // said "deprecated - use destroy() instead"
  return AppDataSource.destroy(); // hence I did this; destroy() returns a Promise, so return or await it
}

export default closeConnection;
Maybe this will save someone some time
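A slightly safer variant of the same idea, assuming TypeORM 0.3+ (isInitialized and destroy() are both part of the DataSource API):
import { AppDataSource } from "../../src/db/data-source";

async function closeConnection(): Promise<void> {
  // Guard against destroying a data source that was never initialized.
  if (AppDataSource.isInitialized) {
    await AppDataSource.destroy(); // closes every connection in the pool
    console.log("DB connection pool destroyed");
  }
}

export default closeConnection;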

Related

Infinite retries when using RpcFilter in NestJS microservice setup with Kafka

I am new to Kafka and I am experiencing mixed behaviour when trying to set up proper error handling on my consumer. In a few instances I observe the retry policy in action: Kafka retries my message 5 times (as I configured), then the consumer crashes, recovers, and my group rebalances. However, in other instances that's not what happens: the consumer crashes, recovers, my group rebalances, and the consumer attempts to consume the message again and again, infinitely.
Let's say I have a controller method that's subscribed to a Kafka topic
@EventPattern("cat-topic")
public async createCat(
  @Payload() message: CatRequestDto,
  @Ctx() context: IKafkaContext
): Promise<void> {
  try {
    await this.catService.createCat(message);
  } catch (ex) {
    this.logger.error(ex);
    throw new RpcException(`Couldn't create a cat`);
  }
}
I am using an RpcFilter on this method, like the one from https://docs.nestjs.com/microservices/exception-filters:
import { Catch, RpcExceptionFilter, ArgumentsHost } from '@nestjs/common';
import { Observable, throwError } from 'rxjs';
import { RpcException } from '@nestjs/microservices';

@Catch(RpcException)
export class ExceptionFilter implements RpcExceptionFilter<RpcException> {
  catch(exception: RpcException, host: ArgumentsHost): Observable<any> {
    return throwError(() => exception.getError());
  }
}
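For completeness, this is roughly how such a filter gets bound to the handler (a sketch; @UseFilters is from @nestjs/common):
@UseFilters(new ExceptionFilter())
@EventPattern("cat-topic")
public async createCat(/* ... same signature as above ... */): Promise<void> {
  // ...
}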
I feel like it might be something funky with properly committing offsets, or something else; I can't pinpoint it.
Any comments or suggestions are greatly appreciated.

What is the correct type of Exception to throw in a Nestjs service?

So, by reading the NestJS documentation, I get the main idea behind how the filters work with exceptions.
But from all the code I have seen, it seems like all services always throw HttpExceptions.
My question is: should the services really be throwing HttpExceptions? I mean, shouldn't they be more generic? And, if so, what kind of Error/Exception should I throw, and how should I implement the filter to catch it, so I won't need to change it later when my service is not invoked by an HTTP controller?
Thanks :)
No, they should not. An HttpException should be thrown from within a controller. So yes, your services should expose their errors in a more generic way.
But "exposing errors" doesn't have to mean "throwing exceptions".
Let's say you have the following project structure:
📁 sample
|_ 📄 sample.controller.ts
|_ 📄 sample.service.ts
When calling one of your SampleService methods, you want your SampleController to know whether or not it should throw an HttpException.
This is where your SampleService comes into play. It is not going to throw anything, but is rather going to return a specific object that will tell your controller what to do.
Consider the two following classes:
export class Error {
  constructor(
    readonly code: number,
    readonly message: string,
  ) {}
}

export class Result<T> {
  constructor(readonly data: T) {}
}
Now take a look at this random SampleService class and how it makes use of them:
@Injectable()
export class SampleService {
  isOddCheck(numberToCheck: number): Error | Result<boolean> {
    const isOdd = numberToCheck % 2 !== 0; // note: !== 0, otherwise this would check for even numbers
    if (isOdd) {
      return new Result(isOdd);
    }
    return new Error(
      400,
      `Number ${numberToCheck} is even.`,
    );
  }
}
Finally, this is how your SampleController should look:
@Controller()
export class SampleController {
  constructor(
    private readonly sampleService: SampleService
  ) {}

  @Get()
  sampleGetResponse(): boolean {
    const result = this.sampleService.isOddCheck(13);
    if (result instanceof Result) {
      return result.data;
    }
    throw new HttpException(
      result.message,
      result.code,
    );
  }
}
As you can see, nothing gets thrown from your service. It only exposes whether or not an error has occurred. Only your controller gets the responsibility of throwing an HttpException when it needs to.
Also notice that I didn't use any exception filter. I didn't have to. But I hope this helps.

When using foreachPartition to write RDD data into MySQL, I occasionally lose the MySQL connection

I use Spark RDDs to write data into MySQL; the operator I use is foreachPartition. In the operator I set up a connection pool and write the data (using ScalikeJDBC), then destroy the pool. However, the connection pool occasionally cannot be found; the log says Connection pool is not yet initialized. (name:'xxx'). I've no idea why this happens.
The data ends up completely inserted in the end, but the exception confuses me.
I believe you have implemented it in the same way (if Java is used):
dstream.foreachRDD(rdd -> {
  rdd.foreachPartition(partitionOfRecords -> {
    Connection connection = createNewConnection();
    while (partitionOfRecords.hasNext()) {
      connection.send(partitionOfRecords.next());
    }
    connection.close();
  });
});
Here, instead of the createNewConnection() method, you just implement the singleton connection object pattern and leave the connection open (don't close it):
dstream.foreachRDD(rdd -> {
  rdd.foreachPartition(partitionOfRecords -> {
    Connection connection = ConnectionObject.singleTonConnection();
    while (partitionOfRecords.hasNext()) {
      connection.send(partitionOfRecords.next());
    }
  });
});

// the singleton method should look like this
public class ConnectionObject {
  private static Connection connection = null;

  public static Connection singleTonConnection() {
    if (connection == null) {
      /** get a new connection from a Spring data source or JDBC client **/
    }
    return connection;
  }
}

ER_CON_COUNT_ERROR: Too many connections knex and bookshelf

I have a simple REST API built with Express, Knex and Bookshelf.
I'm doing some performance tests with JMeter and I've noticed that if I call the API that performs the following query, there is no problem:
public static async fetchById(id: number): Promise<DatasetStats> {
  return DatasetStats.where<DatasetStats>({ id }).fetch();
}
DatasetStats is a Bookshelf model.
But if I set JMeter to call the following, I get Error: ER_CON_COUNT_ERROR: Too many connections after a minute:
import * as knex from 'knex';

@injectable()
export class MyRepo {
  private knex: knex;

  constructor() { this.knex = knex(DatabaseConfig); }

  async fetchResourcesList(datasetName: string): Promise<any> {
    return this.knex.distinct('resource').from(datasetName);
  }
}
Could the problem be that I create a knex object for each request?
Yes. If you create a new knex instance for each request, you cannot control the total number of concurrent connections to the MySQL DB. You also won't be able to reuse already-open connections from knex's connection pool, so it is highly inefficient to open a new TCP connection to the database on every query. And if you don't destroy your knex instances after the query, connections will be left open until some idle timeout, and the app will leak memory.
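A minimal sketch of the fix, assuming DatabaseConfig is your existing config object and @injectable comes from InversifyJS as in the question: create the knex instance once per process and share it across repositories.
import * as knex from 'knex';
import { injectable } from 'inversify';
import { DatabaseConfig } from './config'; // assumption: your existing config module

// One knex instance per process; knex manages its own connection pool internally.
const db = knex(DatabaseConfig);

@injectable()
export class MyRepo {
  async fetchResourcesList(datasetName: string): Promise<any> {
    return db.distinct('resource').from(datasetName);
  }
}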

Does the ConnectionPool from SqlJocky require a close

I'm creating a back-end server application in Dart which uses a MySQL database to store data. To make the SQL calls I'm using the ConnectionPool from SqlJocky.
What I do when the app starts:
Create a singleton which stores the ConnectionPool
Execute multiple queries with prepareExecute and query
Locally this approach is working fine. Now I pushed a development version to Heroku and I'm getting connection issues after a few minutes.
So I wonder: do I need to close/release a single connection from the pool after using it to execute a query? Or is the connection placed back in the pool after the query, free for reuse?
The abstract base class for all the MySQL stores:
abstract class MySQLStore {
  MySQLStore(ConnectionPool connectionPool) {
    this._connectionPool = connectionPool;
  }

  ConnectionPool get connectionPool => this._connectionPool;

  ConnectionPool _connectionPool;
}
A concrete implementation for the method getAll:
Future<List<T>> getAll() async {
  Completer completer = new Completer();
  connectionPool.query("SELECT id, name, description FROM role").then((result) {
    return result.toList();
  }).then((rows) {
    completer.complete(this._processRows(rows));
  }).catchError((error) {
    // TODO: Better error handling.
    print(error);
    completer.complete(null);
  });
  return completer.future;
}
The error I get:
SocketException: OS Error: Connection timed out, errno = 110, address = ...
This doesn't fully answer your question, but I think you could simplify your code like this:
Future<List<T>> getAll() async {
  try {
    var result = await connectionPool.query(
        "SELECT id, name, description FROM role");
    return this._processRows(await result.toList());
  } catch (error) {
    // TODO: Better error handling.
    print(error);
    return null;
  }
}
I'm sure there is no need to close a connection after query. I don't know about prepareExecute, though.
According to a comment in the SqlJocky code, it can take quite some time for a connection to be released by the database server.
Maybe you need to increase the connection pool size (default 5) so you don't run out of connections while ConnectionPool is waiting for connections to be released.
After some feedback from Heroku, I managed to resolve this problem by implementing a timer task that performs a basic MySQL call every 50 seconds.
The response from Heroku:
Heroku's networking enforces an idle timeout of 60-90 seconds to prevent runaway processes. If you're using persistent connections in your application, make sure that you're sending a keep-alive at, say, 55 seconds to prevent your open connection from being dropped by the server.
The workaround code:
const duration = const Duration(seconds: 50);
new Timer.periodic(duration, (Timer t) {
  // Do a simple MySQL call on the connection pool.
  this.connectionPool.execute('SELECT id from role');
  print('*** Keep alive triggered for MySQL heroku ***');
});