I am using a C3P0 (0.9.5.2) connection pool to connect to a MySQL DB. I have set a default statement timeout of 1 second. I see that during high load some connection requests time out (checkoutTimeout is 1 sec), even though the max pool capacity has not been reached. On analyzing the thread stack, I saw 'MySQL Cancellation timer' threads in a runnable state. Probably a bulk of statement timeouts is making the DB unresponsive, so new connections are not created within 1 sec.
Is there a way to minimize the impact of the cancellation timer and to ensure the client does not time out as long as the max pool capacity has not been reached?
Even if the pool has not reached maxPoolSize, checkout attempts will time out if checkoutTimeout is set and new connections cannot be acquired within the timeout. checkoutTimeout is just that, a timeout, and will enforce a time limit regardless of the cause of the delay.
If you want to prevent timeouts, you have to ensure connections can be made available within the time allotted. If something is making the database nonresponsive to connection requests, the most straightforward solution obviously is to resolve that. Other approaches might include setting a larger acquireIncrement (so that connections are more likely to be prefetched) or a larger minPoolSize (same).
Alternatively, you can choose a longer timeout (or set no timeout at all).
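As a rough illustration of those knobs (a sketch only; the driver class, JDBC URL, credentials and the particular values are placeholders, not recommendations), a c3p0 DataSource configured to keep more connections warm and to grow in larger steps might look like this:

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfigSketch {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");       // or com.mysql.cj.jdbc.Driver for newer Connector/J
        cpds.setJdbcUrl("jdbc:mysql://db-host:3306/mydb");  // placeholder URL
        cpds.setUser("user");                                // placeholder credentials
        cpds.setPassword("password");

        cpds.setMinPoolSize(10);        // keep more connections warm so spikes rarely wait on acquisition
        cpds.setAcquireIncrement(5);    // prefetch several connections at a time under load
        cpds.setMaxPoolSize(50);
        cpds.setCheckoutTimeout(5000);  // checkout timeout in milliseconds; 0 means wait indefinitely

        cpds.getConnection().close();   // check one connection out and back in to verify the pool works
        cpds.close();
    }
}

The trade-off is the usual one: a larger warm pool hides acquisition latency but holds more idle connections open on the server.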
I am looking at the DBStats of a web application written in Go. The metrics are exported to Prometheus every 10s by sqlstats.
In the application, MaxOpenConns is set to 100, and MaxIdleConns is set to 50. When I look at the metrics, I notice the number of open connections is stable at around 50. This is expected; it means we are keeping 50 idle connections. However, the number of InUse connections hovers between 0 and 5, and is 0 most of the time. This is strange to me, because there is a constant inflow of traffic, and I don't expect the number of InUse connections to be 0.
Also, I notice WaitCount and MaxIdleClosed are pretty large. WaitCount means there are no idle connections left and sql.DB cannot open more connections due to the MaxOpenConns limit. But from the stats above, there seems to be more than enough headroom for sql.DB to create more connections (OpenConnections is way below MaxOpenConnections). The large MaxIdleClosed value also suggests sql.DB is making additional connections even when there are enough idle connections.
At the same time, I am observing some driver: bad connection errors in the app; we are using MySQL.
Why does the app try to open more connections when there are enough idle connections around, and how should I tune the DB params to reduce the issue?
However, the number of InUse connections hovers between 0 and 5, and is 0 most of the time. This is strange to me, because there is a constant inflow of traffic, and I don't expect the number of InUse connections to be 0.
It is not strange. The number of InUse connections moves in short spikes. Since stats are sampled only every 10s, you simply don't see the spikes. As a rough illustration with hypothetical numbers: at 100 queries per second with 5 ms queries, the average number of in-use connections is only 100 * 0.005 = 0.5, so most samples will read 0.
Why does the app try to open more connections when there are enough idle connections around,
See https://github.com/go-sql-driver/mysql#important-settings
"db.SetMaxIdleConns() is recommended to be set same to (or greater than) db.SetMaxOpenConns(). When it is smaller than SetMaxOpenConns(), connections can be opened and closed very frequently than you expect."
and how should I tune the DB params to reduce the issue?
Follow the recommendation of the go-sql-driver/mysql README.
Use db.SetConnMaxLifetime() so the driver retires connections before the server closes them (stale connections are a common cause of driver: bad connection errors), and set db.SetMaxIdleConns() to the same value as db.SetMaxOpenConns().
db.SetMaxOpenConns(100)                 // upper bound on total connections
db.SetMaxIdleConns(100)                 // keep the idle pool as large as the open limit so connections are reused
db.SetConnMaxLifetime(time.Minute * 3)  // recycle connections well before server-side idle timeouts
I'm currently using AWS Lambda to connect to an AWS MySQL RDS instance. I'm creating a pool like so:
pool = mysql.createPool({
  host     : 'host-details',
  user     : 'username',
  password : 'password',
  database : 'db'
});
Now when users of the app 'do something', it basically just connects via my code, grabs data out of the DB table and then releases the connection. So let's say there are 10 users and they all simultaneously 'do something'. Does that mean:
There will be 10 connections inside this pool, and as soon as the connections are released that connection count goes back to 0? If so, does that mean this one pool (even if it had a limit of 50 connections) could support thousands of users, as the DB queries last only a few milliseconds?
Looking at the RDS monitoring in AWS, there is a metric for "DB connections (count)". Does the above scenario mean it would stay at 1 connection because it's 1 pool, or would it spike to 10 connections?
From my understanding, if my connection limit is 8 for the pool, and the above scenario occurs, the other 2 connections will queue. What stops me from setting the connection limit extremely high? Is this just a factor of what the database can handle, i.e. if I see my DB memory/CPU performance starting to get into trouble, do I scale the pool connection limit back, and vice versa?
To cap this off, I'm trying to understand how this all works so I can set up my database/code properly so things don't break once it starts to scale.
Thank you.
Assuming you created the pool outside the Lambda handler, each instance of the function will create its own pool. Whenever a request comes in for a function and there's no idle instance of the function, a new instance is created. Each instance creates its own separate pool. A single instance handles invocations one at a time, and all invocations handled by a single instance share its pool.
Answer to Q1:
Yes, that 1 pool can support 1000s of users if all the invocations triggered by those users end up being handled by a single instance of the function (1 at a time).
Answer to Q2:
The DB connections count metric of RDS would spike to 10. It counts the number of connections. It doesn't understand whether those connections originated in a pool.
Answer to Q3:
Set the connection limit based on your database instance size (CPU / memory) & how much traffic you expect to your Lambdas. So if your database can handle 200 connections at a time & you expect 10 instances of 1 Lambda to be running concurrently (& you only have 1 Lambda in total), set the pool size to 200/10 = 20. This is a very simplistic calculation. Many other factors like query duration would affect this.
For a pictorial view of all this, see my blog post.
I have:
a heavily loaded CRUD application in PHP 7.3 which uses the CodeIgniter framework.
only 2 users accessing the application.
The DB is MariaDB 10.2 and has 10 tables. Generally columns are stored as INT and the default engine is InnoDB, but one table stores a "mediumtext" column.
the application is driven by cron jobs (10 different jobs every minute).
a job performs on average 100-200 CRUD operations against the DB (in total ~1k-2k CRUD operations per minute across the 10 tables).
Tested:
Persistent connections in MySQL
I faced a 'maximum connections exceeded' issue, and I noticed that CodeIgniter does not close connections when pconnect is set to true in database.php; simply put, it uses persistent connections if you set it to true. To fix that issue, the solution I found is to set it to false so that all connections are closed automatically.
I changed my configuration to disallow persistent connections.
After I disabled persistent connections, my app started to run properly, but about 1 hour later it crashed again because of a couple of errors shown below, and I fixed those errors by setting max_allow_package to its maximum value in my.cnf for MariaDB.
Warning --> Error while sending QUERY packet. PID=2434
Query error: MySQL server has gone away
I noticed the DB needs tuning. The database size is 1GB+ and I have a lot of CRUD jobs scheduled every minute. So I changed the key buffer size to 1GB and the InnoDB buffer pool size to 25% of it. I used MySQL Tuner to figure out those variables.
Finally, I am still getting query packet errors.
Packets out of order. Expected 0 received 1. Packet size=23
My server has 8GB RAM (25% used) and 4 cores x 2GHz (10% used).
I can't decide which configuration is the best option for now. I can't increase RAM; only about 25% of RAM is in use, but because the key buffer size is 1GB, bursts of jobs could use all of it.
Can I:
fix the DB errors,
increase the average number of completed CRUD operations?
8GB ram --> innodb_buffer_pool_size = 5G (roughly 70% of RAM is a common starting point when the server is dedicated to the database).
200 qpm --> no problem. (200 qps might be a challenge).
10 tables; 2 users --> not an issue.
persistent connections --> frill; not required.
key_buffer_size = 1G? --> Why? The key buffer caches only MyISAM indexes, and you should not be using MyISAM. Change it to 30M.
max_allow_package --> What's that? Perhaps a typo for max_allowed_packet? Don't set that to more than 1% of RAM.
Packets out of order --> sounds like a network glitch, not a database error.
MEDIUMINT --> one byte smaller than INT, so it is a small benefit when applicable.
I'm trying to use the C3P0 library to handle connection pooling.
These are my C3P0 settings:
minPoolSize=3
maxPoolSize=20
acquireIncrement=1
maxIdleTime=240
maxStatements=20
In the log I can see that C3P0 seems to be correctly initialized, since it prints
INFO com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource - Initializing c3p0 pool...
But when I try to see how many connections there are on my MySQL DB with
SHOW STATUS WHERE `variable_name` = 'Threads_connected'
I can see that the result is 48, 46, 49, etc.
I can't tell whether the way I'm checking how many connections there are on the DB is wrong, or whether I have misunderstood how C3P0 works.
I also faced this confusion about MySQL threads and connections. I will explain what I learned while studying the topic; if I have misunderstood anything, or if something is still unclear, please correct me.
Some basics in MySQL:
- MySQL server is a single process application.
- It is multithreaded.
- It accepts connections like TCP/IP server.
- Each connection gets a thread.
- These threads are sometimes named processes, and sometimes they're referred to as connections.
The last two points cause a lot of confusion. In our mind we think there is a 1-1 mapping between connections and active threads. That is true, but there is also a thread pool, which means there can be threads that are not associated with any connection.
Every new connection gets its own thread, and a disconnect releases that thread. So there is a 1-1 mapping between connections and active threads. A released thread may be destroyed or may go back into the thread pool for reuse, so the number of threads is greater than or equal to the number of connections.
Also, if you run the query below
SELECT t.PROCESSLIST_ID,
       IF(NAME = 'thread/sql/event_scheduler', 'event_scheduler', t.PROCESSLIST_USER) PROCESSLIST_USER,
       t.PROCESSLIST_HOST,
       t.PROCESSLIST_DB,
       t.PROCESSLIST_COMMAND,
       t.PROCESSLIST_TIME,
       t.PROCESSLIST_STATE,
       t.THREAD_ID,
       t.TYPE,
       t.NAME,
       t.PARENT_THREAD_ID,
       t.INSTRUMENTED,
       t.PROCESSLIST_INFO,
       a.ATTR_VALUE
FROM   performance_schema.threads t
LEFT OUTER JOIN performance_schema.session_connect_attrs a
       ON  t.processlist_id = a.processlist_id
       AND (a.attr_name IS NULL OR a.attr_name = 'program_name')
WHERE  1 = 1
Then you will see a TYPE column whose values are either FOREGROUND or BACKGROUND; this tells you that some threads are connected to the DB to do internal (background) work (e.g. the event thread, monitor threads, etc.) rather than to serve a client connection.
Generally, c3p0 is concerned with connections, not threads, so you should check SHOW FULL PROCESSLIST for the connections to the DB server.
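If it helps, here is a small sketch (the DataSource setup, URL and credentials are assumed placeholders) that compares the two views: what this one c3p0 pool holds versus what the server counts as connected client threads:

import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionCountSketch {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        cpds.setUser("user");                                // placeholder credentials
        cpds.setPassword("password");
        cpds.setMinPoolSize(3);
        cpds.setMaxPoolSize(20);

        try (Connection con = cpds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SHOW STATUS WHERE variable_name = 'Threads_connected'")) {
            if (rs.next()) {
                // what the server sees: every client connection, not only this pool's
                System.out.println("Server Threads_connected: " + rs.getString(2));
            }
        }

        // what this one pool holds; other pools, app instances or admin sessions are not counted here
        System.out.println("Pool connections: " + cpds.getNumConnectionsDefaultUser());
        System.out.println("Pool busy:        " + cpds.getNumBusyConnectionsDefaultUser());
        System.out.println("Pool idle:        " + cpds.getNumIdleConnectionsDefaultUser());

        cpds.close();
    }
}

If the server-side number (48, 46, 49, ...) is much higher than what the pool reports, the extra connections are coming from somewhere else: another application instance, another DataSource, background threads, or admin sessions.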
I hope this clears up the confusion you are having with MySQL threads and connections.
To begin the question, I will describe our current production environment.
Each client gets a deployment of our Spring/Hibernate application
Each deployment gets its own database.
There are upwards of 300 clients on our server now
I have configured c3p0 with a minimum connection pool size of 1, an increment value of 3, and a maximum of 20 connections. So my question is, what should my maximum connections to MySQL be? Should it be the max pool size times the number of clients (20 * 300 = 6000)? Or should it be less? Will an error occur if c3p0 has, say, 3 connections already and tries to obtain another while MySQL is at its max?
I do not think that all clients will need their maximum number all at the same time, but I do want to prevent any errors from happening if a fringe case occurs.
In theory, as you say, your MySQL could see up to 6000 Connections, so to be safe, that's the answer.
But you really don't want it to have 6000 Connections open. If each pool has minPoolSize of 1 and maxPoolSize of 20, it sounds as though you expect clients to often be quiescent, but to occasionally spike in usage. Unless the spikes are likely to be highly correlated in time, your usual load should be much, much lower.
By default, c3p0 Connection pools will grow quickly with spikes in load, but not decay. If you set an aggressive maxIdleTime, or better yet maxIdleTimeExcessConnections, on your c3p0 pools, you can ensure that quiescent pools hold few Connections and reduce the likelihood that you will ever approach the theoretical max of 6K.
As to the MySQL setting, you can set it to 6K to be safe, or set it much lower so that you see errors rather than sluggishness if you are overtaxing the DBMS. It might be best to estimate the peak use you expect, set the MySQL max to maybe double that, and see whether your load expectations are dramatically violated (i.e. if errors occur because the DBMS refuses Connections).
With 300 distinct databases, that implies 300 c3p0 DataSources, which may lead to a high overhead in Threads and thread management. c3p0's numHelperThreads defaults to 3, and you don't want to go lower than that. So, that's something to think about.
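For concreteness, here is a rough sketch of one of the per-client pools with the settings discussed above (the driver class, URL, credentials and the specific values are placeholders, not recommendations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PerClientPoolSketch {
    // One of the ~300 per-client DataSources.
    static ComboPooledDataSource newClientPool(String jdbcUrl) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");   // placeholder driver class
        cpds.setJdbcUrl(jdbcUrl);                       // e.g. jdbc:mysql://db-host:3306/client_db
        cpds.setUser("user");                           // placeholder credentials
        cpds.setPassword("password");

        cpds.setMinPoolSize(1);
        cpds.setAcquireIncrement(3);
        cpds.setMaxPoolSize(20);

        // Let a pool that grew during a spike shrink back toward minPoolSize,
        // so quiescent clients don't pin Connections against the MySQL max.
        cpds.setMaxIdleTimeExcessConnections(120);      // seconds
        cpds.setMaxIdleTime(600);                       // seconds; also cull long-idle Connections

        cpds.setNumHelperThreads(3);                    // the default; avoid going lower
        return cpds;
    }
}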