ServiceStack OrmLite: MySQL connection pool

I understand the topic is not new; I read a few posts but did not find the answer.
Every time I open a connection it takes a very long time, but the whole idea was to use a connection pool, wasn't it?
As I understand it, with MySQL you cannot configure a connection pool in the connection string.
What is the right way to avoid spending so much time opening database connections?
Thanks!
IDbConnectionFactory connection = new OrmLiteConnectionFactory(TeleportParams.DbConnectionStr, MySqlDialectProvider.Instance);
...
void function1() {
    var db = connection.Open();
    db.Select("some request");
}
void function2() {
    var db = connection.Open();
    db.Select("some request");
}
...
function1();
function2();

As I understand it, with MySQL you cannot configure a connection pool in the connection string.
Actually you can: with the MySQL Connector/NET ADO.NET provider, connection pooling is enabled by default and is controlled through the connection string. You can add pooling=false to disable connection pooling, or tune the pool with Min Pool Size and Max Pool Size.
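For example, a connection string along these lines keeps pooling enabled and bounds the pool size (the server, database, and credentials here are placeholders):

Server=localhost;Database=mydb;Uid=appuser;Pwd=secret;Pooling=true;Min Pool Size=5;Max Pool Size=100;

With pooling left on, the expensive physical connection is only established the first few times; subsequent connection.Open() calls receive an already-open connection from the ADO.NET pool, which is what makes them fast.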

Related

Vertx: failed connection not caught by JDBCClient (.getConnection)

I can't handle the case where the connection fails in JDBCClient (vertx-jdbc-client 3.3.9), for example: no route to host, connection timeout, and so on. The .getConnection() method does not return a failed future, and the failed branch is never taken even with a wrong hostname, username, or password.
The handler only completes successfully when all the provided connection parameters are valid; otherwise the code gets stuck and the SQLConnection handler is never invoked. Even wrapping the code in a try/catch block gives no error in my case.
JDBCClient client = JDBCClient.createNonShared(Holder.getInstance().getVertx(), databaseConfig);
client.getConnection(connect -> {
    if (connect.failed()) {
        client.close();
        return;
    }
    /* Create connection on success */
    SQLConnection connection = connect.result();
    /* Execute Query */
});
Related: Vertx connection timeout not caught by JDBCClient (.getConnection)
If you use the C3P0 connection pool, try this:
databaseConfig.put("acquire_retry_attempts", 1).put("break_after_acquire_failure", true);
Otherwise the pool keeps trying to establish a connection.
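A minimal sketch of how that fits together, assuming the C3P0-backed vertx-jdbc-client from the question; the URL, driver class, credentials, and pool size below are placeholders:

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.jdbc.JDBCClient;
import io.vertx.ext.sql.SQLConnection;

public class FailFastConnectionExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        JsonObject databaseConfig = new JsonObject()
            .put("url", "jdbc:mysql://localhost:3306/mydb")   // placeholder
            .put("driver_class", "com.mysql.jdbc.Driver")     // placeholder
            .put("user", "app")                               // placeholder
            .put("password", "secret")                        // placeholder
            .put("max_pool_size", 20)
            // Fail fast instead of retrying forever, so connect.failed() actually fires.
            .put("acquire_retry_attempts", 1)
            .put("break_after_acquire_failure", true);

        JDBCClient client = JDBCClient.createNonShared(vertx, databaseConfig);
        client.getConnection(connect -> {
            if (connect.failed()) {
                // Reached when the host, port, or credentials are wrong.
                System.err.println("Could not get a connection: " + connect.cause());
                client.close();
                return;
            }
            SQLConnection connection = connect.result();
            // ... run queries here, then always return the connection to the pool.
            connection.close();
        });
    }
}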

How to reconnect using Pool in nodejs' promise-mysql if the connection went down?

I am using the promise-mysql Node.js library to make awaitable operations against a MySQL server.
Presently I have a simple config:
let pool = await mysql.createPool(dbConfig);
let connection = await pool.getConnection();
/// after the connection is made once, at program start,
/// some querying using the connection follows
However, that single connection eventually gets disconnected. Question: how do I automatically handle a disconnect and bring up a new connection if the current one was dropped?
This is a working solution for me:
pool.getConnection((err, connection) => {
    if (err) {
        console.log('Error while connecting ', err)
    } else {
        if (connection) connection.release()
        console.log('Database Connected Successfully!')
    }
})
Replace let connection = await pool.getConnection(); with the code above, and execute your queries with pool rather than with a single connection. That way, if one connection goes down or is busy, your query will be executed on another connection from the pool.

When using foreachPartition to write RDD data into MySQL, I occasionally lose the MySQL connection

I use Spark RDDs to write data into MySQL. The operator I use is foreachPartition; inside it I set up a connection pool (using ScalikeJDBC), write the data, and then destroy the pool. However, occasionally the connection pool cannot be found, and the log says Connection pool is not yet initialized. (name:'xxx). I have no idea why it happens.
In the end the data is inserted completely, but the exception confuses me.
I believe you have implemented it in the same way (if Java is used):
dstream.foreachRDD(rdd -> {
    rdd.foreachPartition(partitionOfRecords -> {
        Connection connection = createNewConnection();
        while (partitionOfRecords.hasNext()) {
            connection.send(partitionOfRecords.next());
        }
        connection.close();
    });
});
Here, instead of the createNewConnection() method, you implement the singleton connection object pattern and leave the connection open (do not close it):
dstream.foreachRDD(rdd -> {
    rdd.foreachPartition(partitionOfRecords -> {
        Connection connection = ConnectionObject.singleTonConnection();
        while (partitionOfRecords.hasNext()) {
            connection.send(partitionOfRecords.next());
        }
    });
});
// The singleton method should look like this:
public class ConnectionObject {
    private static Connection connection = null;

    public static Connection singleTonConnection() {
        if (connection == null) {
            /* get a new connection from a Spring data source or JDBC client */
        }
        return connection;
    }
}
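For reference, a lazily-initialized version of that holder might look like the sketch below; it assumes plain JDBC via DriverManager, and the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class ConnectionObject {
    private static Connection connection;

    // One connection per executor JVM, created lazily and reused across partitions.
    public static synchronized Connection singleTonConnection() {
        try {
            if (connection == null || connection.isClosed()) {
                connection = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "password"); // placeholders
            }
            return connection;
        } catch (SQLException e) {
            throw new RuntimeException("Could not obtain MySQL connection", e);
        }
    }

    private ConnectionObject() { }
}

Because the connection lives in a static field of the executor JVM rather than inside the closure, every partition processed by that executor reuses it instead of repeatedly creating and destroying a pool.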

vert.x async jdbc doesn't close connections

I'm implementing a RESTful API with Vert.x. I use its async JDBC client with MySQL and C3P0 as the connection pool.
My problem is that although the close handler reports success, the actual database connection is not released back to the pool for reuse. The pool fills up within seconds, resulting in: BasicResourcePool:204 - acquire test -- pool is already maxed out. [managed: 20; max: 20]
client.getConnection(connectionAsyncResult -> {
    SQLConnection connection = connectionAsyncResult.result();
    connection.queryWithParams("SELECT * FROM AIRPORTS WHERE ID = ?", new JsonArray().add(id), select -> {
        ResultSet resultSet = select.result();
        Airport $airport = resultSet.getRows()
            .stream()
            .map(Airport::new)
            .findFirst()
            .get();
        asyncResultHandler.handle(Future.succeededFuture($airport));
        connection.close(closeHandler -> {
            if (closeHandler.succeeded()) {
                LOG.debug("Database Connection closed");
            } else if (closeHandler.failed()) {
                LOG.error("Database Connection failed to close!");
            }
        });
    });
});
Any idea what I'm missing?
All the best!
You're probably hitting an exception in your handler. If that happens, the last lines are never executed, so close() is not called and the connection is never returned to the pool.
You should wrap your code in a try/finally block to guarantee the connection is returned to the pool.
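A sketch of the question's handler with the close moved into a finally block; it reuses the variables from the question (client, id, asyncResultHandler, Airport) and only changes where close() is called:

client.getConnection(connectionAsyncResult -> {
    SQLConnection connection = connectionAsyncResult.result();
    connection.queryWithParams("SELECT * FROM AIRPORTS WHERE ID = ?", new JsonArray().add(id), select -> {
        try {
            ResultSet resultSet = select.result();
            Airport airport = resultSet.getRows()
                .stream()
                .map(Airport::new)
                .findFirst()
                .get(); // throws NoSuchElementException when no row matches
            asyncResultHandler.handle(Future.succeededFuture(airport));
        } finally {
            // Runs even if the mapping above throws, so the connection always goes back to the pool.
            connection.close();
        }
    });
});

With the close in a finally block, a missing row or a failed query no longer leaks a pooled connection, so the pool stops maxing out.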

Does the ConnectionPool from SqlJocky require a close

I'm creating a back-end server application in Dart which uses a MySQL database to store data. To make the SQL calls I'm using the ConnectionPool from SqlJocky.
What I do when the app starts:
Create a singleton which stores the ConnectionPool
Execute multiple queries with prepareExecute and query
Locally this approach is working fine. Now I pushed a development version to Heroku and I'm getting connection issues after a few minutes.
So I wonder, do I need to close/release a single connection from the pool I use to execute a query? Or is the connection after the query placed again in the pool and free for use?
The abstract base class for all the MySQL stores:
abstract class MySQLStore {
  MySQLStore(ConnectionPool connectionPool) {
    this._connectionPool = connectionPool;
  }

  ConnectionPool get connectionPool => this._connectionPool;
  ConnectionPool _connectionPool;
}
A concrete implementation for the method getAll:
Future<List<T>> getAll() async {
  Completer completer = new Completer();
  connectionPool.query("SELECT id, name, description FROM role").then((result) {
    return result.toList();
  }).then((rows) {
    completer.complete(this._processRows(rows));
  }).catchError((error) {
    // TODO: Better error handling.
    print(error);
    completer.complete(null);
  });
  return completer.future;
}
The error I get:
SocketException: OS Error: Connection timed out, errno = 110, address = ...
This doesn't fully answer your question but I think you could simplify your code like:
Future<List<T>> getAll() async {
  try {
    var result = await connectionPool.query(
        "SELECT id, name, description FROM role");
    return this._processRows(await result.toList());
  } catch (error) {
    // TODO: Better error handling.
    print(error);
    return null;
  }
}
I'm sure there is no need to close a connection when you use query. I don't know about prepareExecute though.
According to a comment in the SqlJocky code it can take quite some time for a connection to be released by the database server.
Maybe you need to increase the connection pool size (default 5) so you don't run out of connections while ConnectionPool is waiting for connections to be released.
After some feedback from Heroku I managed to resolve this problem by implementing a timer task that performs a basic MySQL call every 50 seconds.
The response from Heroku:
Heroku's networking enforces an idle timeout of 60-90 seconds to prevent runaway processes. If you're using persistent connections in your application, make sure that you're sending a keep-alive at, say, 55 seconds to prevent your open connection from being dropped by the server.
The workaround code:
const duration = const Duration(seconds: 50);
new Timer.periodic(duration, (Timer t) {
  // Do a simple MySQL call on the connection pool.
  this.connectionPool.execute('SELECT id from role');
  print('*** Keep alive triggered for MySQL heroku ***');
});