A c3p0 DataSource opens many threads to manage connections in the application. Is there a way to make it open only a single thread for all of the opened DataSources?
No. c3p0 requires a Thread pool in order to perform Connection maintenance tasks without blocking client Threads.
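If the concern is the number of helper threads rather than literally having one thread, each pool's helper count can be tuned down. A minimal sketch with a programmatically configured ComboPooledDataSource (the driver and URL are placeholders; numHelperThreads defaults to 3):

import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.beans.PropertyVetoException;

public class SmallHelperPool {
    // Sketch only: each c3p0 DataSource keeps its own helper-thread pool,
    // but that pool's size can be reduced via numHelperThreads.
    public static ComboPooledDataSource create() throws PropertyVetoException {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");        // placeholder driver
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        cpds.setNumHelperThreads(1); // fewer maintenance threads, slower async cleanup
        return cpds;
    }
}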
I have a SaaS application on AWS ECS and databases on AWS RDS. We are planning to implement AWS RDS Proxy for connection pooling. From the RDS Proxy documentation, I saw that we don't need to make any changes to the application code. Currently, we are using application-side connection pooling. When we implement RDS Proxy for pooling, does the current pooling have any impact?
Do we need to remove the application-side pooling to work with RDS Proxy effectively?
My main concern: if I choose 100% pooling in the RDS Proxy configuration and limit the application pool to, say, 100 max connections, will that be a bottleneck?
TL;DR: keep the connection pool in your application, and size it to the number of connections required by that one instance of your application (e.g. the ECS task or EKS pod).
With a database proxy in the middle, there are two separate legs to a "connection":
First, there is a connection from the application to the proxy. What you called the "application side pooling" is this type of connection. Since there's still overhead associated with creating a new instance of this type of connection, continuing to use a connection pool in your application probably is a good idea.
Second, there is a connection from the proxy to the database. These connections are managed by the proxy. The number of connections of this type is controlled by a proxy configuration. If you set this configuration to 100%, then you're allowing the proxy to use up to the database's max_connections value, and other clients may be starved for connections.
So, when your application wants to use a connection, it needs to get a connection from its local pool. Then, the proxy needs to pair that with a connection to the database. The proxy will reuse connections to the database where possible (this technique also is called multiplexing). Or, quoting the official docs: "You can open many simultaneous connections to the proxy, and the proxy keeps a smaller number of connections open to the DB instance or cluster. Doing so further minimizes the memory overhead for connections on the database server. This technique also reduces the chance of "too many connections" errors."
As your container orchestrator (e.g. ECS or EKS) scales your application horizontally, your application will open/close connections to the proxy, but the proxy will prevent your database from becoming overwhelmed by these changes.
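As a concrete sketch (the proxy endpoint below is hypothetical, and the pool should be sized for one task or pod rather than the whole fleet), the only application-side change is where the existing pool points:

# Hypothetical Spring Boot settings: the Hikari pool targets the RDS Proxy
# endpoint; the proxy manages the database-side connections.
spring.datasource.url=jdbc:mysql://my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com:3306/mydb
spring.datasource.hikari.maximumPoolSize=10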
I've created a Java Spring Boot application that launches 36 downloader droplets on DigitalOcean, which SSH-tunnel to a CPU-Optimized database droplet and download from an API into the database.
I've configured Hikari as follows, aiming for fewer pooled connections, on the assumption that the database may have trouble with too many and that they might not be required.
# All Hikari timeouts below are in milliseconds.
spring.datasource.hikari.maximumPoolSize=5
# 200,000 ms: wait up to 200 s for a free connection
spring.datasource.hikari.connectionTimeout=200000
# 1,800,000 ms: retire connections after 30 minutes
spring.datasource.hikari.maxLifetime=1800000
# 100,000 ms: allow up to 100 s for a validation check
spring.datasource.hikari.validationTimeout=100000
I'm wondering whether those settings are recommended, and why. I've reduced maximumPoolSize to 5, but I haven't found much information on whether that is considered too small for a Spring Boot application to run effectively.
Given that each downloader stores data in the database sequentially, do I need more than a few pooled connections on each downloader?
I've configured the maximum connections in MySQL to 250 and the maximum SSH connections on the database server to 200. I note that 114 sshd processes are created on the server. Can a server handle that many SSH tunneling connections?
Do you foresee any problems with this kind of distributed setup with Spring Boot? One thing I had to do before adjusting to these settings was place retry logic around each database connection to prevent disconnection errors.
Thanks
Conteh
I'm using Apache HttpClient (4.2.2) / Java 7 to open many reusable connections to a Tomcat 7 server (to simulate many users repeatedly hitting the service). Both client and server run Ubuntu 12 (but on different machines). I made sure that sysctl.conf and limits.conf allow this scenario.
This works well up to about 1500 simulated users/connections. The connections get reused as expected. Somewhere between 1500 and 1600 simulated users, however, connections are no longer reused and are closed and re-opened all the time. Why might this be the case?
I don't think the problem is on the server side: when I start multiple simulation clients on different machines against the same server, the server has no problem reusing connections as long as each client stays below 1500 connections.
There can be various reasons why connections are no longer being re-used, depending on the configuration of the connection manager or the server-side configuration. The easiest way to find out is to run HttpClient with context logging on, as described in the 'context logging for connection management / request execution' example in the Logging Guide.
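For example, with log4j the categories named in the HttpClient 4.x Logging Guide can be switched on like this (the appender setup is illustrative):

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%c] %m%n
# Context logging for connection management / request execution
log4j.logger.org.apache.http.impl.conn=DEBUG
log4j.logger.org.apache.http.impl.client=DEBUG
log4j.logger.org.apache.http.client=DEBUG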
You might need to increase the number of available workers, or at least check whether there are free workers when you run out of connections by going to server-status.
Is there any limit on the server for the number of requests served per second, or the number of requests served simultaneously? [In configuration, not due to RAM, CPU, or other hardware limitations.]
Is there any limit on the number of simultaneous requests per instance of CouchbaseClient in a Java servlet?
Is it best to create only one instance of CouchbaseClient and keep it open, or to create multiple instances and destroy them?
Is Moxi helpful with the Couchbase 1.8.0 server / Couchbase Java client 1.0.2?
I need this info to set up the application in production.
Thank you
The memcached instance that runs behind Couchbase has a hard connection limit of 10,000 connections. Couchbase generally recommends that you increase the number of nodes to address the distribution of traffic at that level.
The client itself does not have a hardcoded limit on how many connections it makes to a Couchbase cluster.
Couchbase generally recommends that you create a connection pool from your application to the cluster and re-use those connections, rather than creating and destroying them over and over. In heavily loaded applications, repeatedly creating and destroying connections can get very expensive from a resource perspective.
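A minimal sketch of that pattern with the 1.0.x Java client, shared across servlets; the node URI, bucket name, and password are placeholders:

import com.couchbase.client.CouchbaseClient;
import java.net.URI;
import java.util.Arrays;
import java.util.List;

public final class CouchbaseHolder {
    private static CouchbaseClient client;

    // One client per application, created lazily and reused by every request.
    public static synchronized CouchbaseClient get() throws Exception {
        if (client == null) {
            List<URI> nodes = Arrays.asList(URI.create("http://127.0.0.1:8091/pools"));
            client = new CouchbaseClient(nodes, "default", ""); // placeholder bucket/password
        }
        return client;
    }

    // Call once at application shutdown; flushes outstanding operations first.
    public static synchronized void shutdown() {
        if (client != null) {
            client.shutdown();
            client = null;
        }
    }
}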
Moxi is an integrated piece of Couchbase. However, it is generally in place as an adapter layer for client developers who specifically want to use it, or to give legacy access to applications designed to talk directly to a memcached interface. If you are using the Couchbase client driver, you won't need the Moxi interface.
I am not familiar at all with connection pooling libraries. I've just discovered them through a blog article, and I am not sure whether I should use one in my web application based on Grails/Hibernate/MySQL.
So my question is simple: in which situations would you suggest integrating a connection pooling library into a Grails application? Always, never, or only above some connection threshold?
P.S.: If you have ever used C3P0 successfully in your web application, I would greatly appreciate hearing your feedback (in terms of visible positive effects).
Regardless of which pooling implementation you choose, you should always use a connection pool in your web application. Opening a connection to the database is a very expensive task, and being able to reuse an already existing, idle connection greatly improves your site's performance.
A connection pool can be managed by the application server (Tomcat, JBoss, Glassfish...) or by your application. The latter is easier to set up, but it's hard to customize per deployment. Configuring the connection pool on the application server and setting your site to consume it makes it easy to fine-tune the pool parameters, like the minimum number of connections to keep open, the max idle time, and so on.
My experience with this is pretty limited, but I ended up using C3P0 for the simple reason that Hibernate on its own does not seem to handle MySQL restarts. I got a "Broken pipe" every morning because our hosting service restarted MySQL every night.
I googled it and the only advice I could find was to use... the connection pool of the app server or C3P0. For me, the latter works just fine.
I always use a connection pool for two reasons:
Because opening connections is an expensive operation
It's dead-simple to set one up to work transparently, so there's no real advantage to not using one.
If you're already using Hibernate, just change connection.provider_class in your hibernate.cfg.xml to org.hibernate.connection.C3P0ConnectionProvider and drop the c3p0 jar file into your webapp's WEB-INF/lib folder. Done.
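A sketch of the relevant hibernate.cfg.xml entries (inside the session-factory element); the pool sizes are illustrative, not recommendations:

<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">300</property>
<property name="hibernate.c3p0.max_statements">50</property>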
If you're using JNDI with a GlobalNamingResources declaration, change the type attribute to com.mchange.v2.c3p0.ComboPooledDataSource and drop the c3p0 jar into Tomcat's /lib folder. Done.
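A sketch of what that declaration can look like in Tomcat's conf/server.xml; the factory attribute is needed so Tomcat can instantiate the c3p0 bean, and the URL and credentials are placeholders:

<GlobalNamingResources>
  <Resource name="jdbc/myDataSource"
            auth="Container"
            type="com.mchange.v2.c3p0.ComboPooledDataSource"
            factory="org.apache.naming.factory.BeanFactory"
            driverClass="com.mysql.jdbc.Driver"
            jdbcUrl="jdbc:mysql://localhost:3306/mydb"
            user="dbuser"
            password="secret"/>
</GlobalNamingResources>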
C3P0 is a very decent pool, but I would still recommend using the connection pool of your app server or servlet engine and configuring Grails to use it via a regular DataSource. Use a stand-alone connection pool only when you can't do that (in which case C3P0 is a good choice).
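For reference, pointing Grails at a container-managed pool is a one-liner in grails-app/conf/DataSource.groovy; the JNDI name below is a placeholder that must match the container's resource declaration:

dataSource {
    jndiName = "java:comp/env/jdbc/myDataSource"
}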