JMeter inconsistent CommunicationsException: Communications link failure - mysql

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet successfully received from the server was 976,464 milliseconds ago. The last packet sent successfully to the server was 974,674 milliseconds ago.
This error occurs in JMeter when I run the following test plan, which sends 15 MB files to AWS RDS.
LoadTestPlan
JDBC Connection Configuration:
Max Wait ms: 0 (indefinite wait)
Max connections: 0 (no limit)
ThreadGroup
No. of threads: 200
Ramp up seconds: 100
Loop Count: Indefinite
Scheduled to run for 3 hours
JDBC Request
LOAD DATA LOCAL INFILE statement
RDS Configuration
Engine: MySQL 5.7.33
max_connections: 200
innodb_lock_wait_timeout: 6000
max_allowed_packet: 64 MB
There are many proposed solutions for this Communications link failure, but in my case some requests succeed while others fail with this error. That makes me suspect a network problem, yet I am on a high-speed Ethernet connection of about 74 Mbps. Even if it is a network problem, there should be some parameter that, when adjusted, lets connections succeed even over a poor network.
JMeter version: 5.4

With regards to your statement:
Max connections: 0 (no limit)
I don't think that's true: as per BasicDataSource Configuration Parameters, I would say the default is rather 8 (see the maxTotal parameter).
So it looks like you're running 200 concurrent threads against a pool of 8 connections; try increasing the pool's max connections to match the number of JMeter threads. Or, if you're not testing the database directly, mimic your application's JDBC configuration instead.
I know that JMeter tries to set the maximum pool size equal to the initial pool size, as evidenced by this line; however, the source code of BasicDataSource suggests setting a negative number for "no limit".
More information: The Real Secret to Building a Database Test Plan With JMeter
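
The BasicDataSource referred to above is Commons DBCP2, which backs JMeter's JDBC Connection Configuration, so the GUI fields map roughly onto a standalone sketch like the one below. The URL, credentials and concrete values are illustrative assumptions, not settings taken from the test plan above.

import org.apache.commons.dbcp2.BasicDataSource;

public class PoolSketch {
    public static BasicDataSource buildPool() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.cj.jdbc.Driver");
        ds.setUrl("jdbc:mysql://my-rds-endpoint:3306/loadtest"); // placeholder endpoint
        ds.setUsername("user");                                  // placeholder credentials
        ds.setPassword("password");
        ds.setInitialSize(200);   // match the number of JMeter threads
        ds.setMaxTotal(200);      // hard cap on the pool; DBCP2's default is 8
        // ds.setMaxTotal(-1);    // a negative value disables the cap ("no limit")
        ds.setMaxWaitMillis(-1);  // wait indefinitely for a free connection
        return ds;
    }
}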

Related

Jmeter-Springboot-Mysql-ec2 Hikari connection pool - Connection not available

I am trying to perform JMeter testing of a REST API built using a Spring Boot microservice and JPA.
The API connects to a MySQL instance, executes a couple of queries in parallel (async), and returns the result in JSON format.
The MySQL instance is deployed in AWS.
It works fine for up to 10 users. If I increase the load beyond that, I get "connection not available":
engine.jdbc.spi.SqlExceptionHelper : HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only property I have configured in application.properties is connectionTimeout (this property controls the maximum number of milliseconds that a client will wait for a connection from the pool; if this time is exceeded without a connection becoming available, a SQLException will be thrown; the lowest acceptable connection timeout is 250 ms; default: 30000, i.e. 30 seconds):
spring.datasource.hikari.connectionTimeout: 80000 (80 seconds).
I read about HikariCP and found the following defaults:
maximumPoolSize: default: 10.
minimumIdle : Default: same as maximumPoolSize
maxLifetime: Default: 1800000 (30 minutes)
I am trying various combinations of the properties mentioned above to test for 100 concurrent users at a time.
Can someone tell me which connection pool properties to tweak in order to test this with 100 users? Or what is the optimum configuration?
Thanks in advance
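
For illustration only (a sketch under assumptions, not a verified answer), the properties above map onto HikariCP's Java API roughly as follows. The JDBC URL, credentials and the pool size of 100 are assumptions for a 100-user test.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariPoolSketch {
    public static HikariDataSource buildPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://my-aws-host:3306/mydb"); // placeholder URL
        config.setUsername("user");                              // placeholder credentials
        config.setPassword("password");
        config.setMaximumPoolSize(100);      // default is 10; one connection per concurrent user is the simplest starting point
        config.setMinimumIdle(10);           // defaults to maximumPoolSize when not set
        config.setConnectionTimeout(30_000); // ms to wait for a connection from the pool (default 30 s)
        config.setMaxLifetime(1_800_000);    // 30 minutes, the documented default
        return new HikariDataSource(config);
    }
}

In Spring Boot the same values would normally be supplied through application.properties under spring.datasource.hikari.* rather than programmatically.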

MySQL connection timeout?

I have a Spring MVC application running on a GlassFish server with a MySQL DB connection, in which the pool idle time is set to 300 seconds. But I keep getting the following warnings every 5 minutes, even when there is no idle session and the application is up but no one is using it:
Unexpected exception while destroying resource from pool MediaTrackPool. Exception message: WEB9031: WebappClassLoader unable to load resource [com.mysql.jdbc.ProfilerEventHandlerFactory], because it has not yet been started, or was already stopped
Error while Resizing pool MediaTrackPool. Exception : WEB9031: WebappClassLoader unable to load resource [com.mysql.jdbc.SQLError], because it has not yet been started, or was already stopped
Could someone help me get rid of these warnings, or restrict them to when an actual idle session is encountered? Getting the warnings every 5 minutes even when no one is using the application does not help with real log analysis.
Settings for connection pool are as below:
General Settings
Pool Name: MediaTrackPool
Resource Type: javax.sql.DataSource
Datasource Classname: com.mysql.jdbc.jdbc2.optional.MysqlDataSource
Pool Settings
Initial and Minimum Pool Size: 8
Maximum Pool Size: 32
Pool Resize Quantity: 2
Idle Timeout: 300
Max Wait Time: 60000
I believe there is a mismatch between the connection pool properties and the actual timeouts on the MySQL server.
Can you check what the values of connect_timeout, interactive_timeout and wait_timeout are?
More info on setting these timeouts is here.
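
A minimal sketch (connection URL and credentials are placeholders) of checking those server-side timeouts over plain JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowTimeouts {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mediatrack", "user", "password"); // placeholders
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SHOW VARIABLES WHERE Variable_name IN "
                     + "('connect_timeout','interactive_timeout','wait_timeout')")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}

The pool's idle timeout (300 s here) should stay comfortably below the server's wait_timeout, otherwise the server may drop connections that the pool still considers valid.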

connection issues with cleardb from cloudfoundry (on pivotal)

We constantly face issues with the connections to MySQL hosted by ClearDB. We have a dedicated plan which offers more than 300 connections for our application.
I know the CBR on ClearDB's side automatically closes an inactive connection after 60 s.
The (Spring) application runs in Tomcat and uses a ConnectionPool with the following settings:
org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUrl(serviceInfo.getJdbcUrl());
dataSource.setUsername(serviceInfo.getUserName());
dataSource.setPassword(serviceInfo.getPassword());
dataSource.setInitialSize(10);
dataSource.setMaxActive(30);
dataSource.setMaxIdle(30);
dataSource.setTimeBetweenEvictionRunsMillis(34000);
dataSource.setMinEvictableIdleTimeMillis(55000);
dataSource.setTestOnBorrow(true);
dataSource.setTestWhileIdle(true);
dataSource.setValidationInterval(34000);
dataSource.setValidationQuery("SELECT 1");
The error we see in our stack is:
2015-01-13T13:36:22.75+0100 [App/0] OUT The last packet successfully received from the server was 90,052 milliseconds ago. The last packet sent successfully to the server was 90,051 milliseconds ago.; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
2015-01-13T13:36:22.75+0100 [App/0] OUT The last packet successfully received from the server was 90,052 milliseconds ago. The last packet sent successfully to the server was 90,051 milliseconds ago.
2015-01-13T13:36:22.75+0100 [App/0] OUT ... 52 common frames omitted
2015-01-13T13:36:22.75+0100 [App/0] OUT Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
2015-01-13T13:36:22.75+0100 [App/0] OUT at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2914) ~[mysql-connector-java-5.1.33.jar:5.1.33]
2015-01-13T13:36:22.75+0100 [App/0] OUT at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3337) ~[mysql-connector-java-5.1.33.jar:5.1.33]
2015-01-13T13:36:22.75+0100 [App/0] OUT ... 64 common frames omitted
Do you have any ideas what could be causing this, or have you had similar experiences with ClearDB and perhaps moved somewhere else?
Unfortunately I'm out of ideas; any help is really appreciated.
The error you listed looks a lot like your connection has been disconnected on the remote end (i.e. by ClearDB). 60 s is a pretty short window for idle connections, so I'd suggest a few changes to your pool config.
1.) Set initialSize and minIdle (which defaults to initialSize) intentionally low. This will keep the number of idle connections low. Fewer idle connections means a better chance that a connection is reused before the 60 s window expires.
2.) You don't need maxIdle here. It defaults to maxActive.
3.) Set timeBetweenEvictionRunsMillis lower. This sets how often the pool will check for idle connections. The default of 5s is probably fine.
4.) Lower minEvictableIdleTimeMillis. This is the minimum amount of time a connection will sit in the pool before it can be evicted. It doesn't mean it will be evicted exactly when it reaches that age, though. If the idle check has just run and your connection is minEvictableIdleTimeMillis - 1 s old, it will have to wait for the next check (i.e. timeBetweenEvictionRunsMillis) before it is evicted. If you're using the default timeBetweenEvictionRunsMillis of 5 s, setting this to 50 s should give it plenty of time.
5.) Set the validationInterval lower. This determines how long the pool will wait since the last successful validation before it validates the connection again. I'd go with something between 2 and 5s. It's high enough that you'll get some benefit when you're busy, and low enough that it won't cause you to miss validation on bad connections.
6.) I'd also suggest that you enable removeAbandoned and logAbandoned, with removeAbandonedTimeout set to something like 5 or 10s (most web apps shouldn't hold the db connection for that long). This will eliminate the possibility that your web app is holding the connection in an idle state for more than 60s, then trying to use it again.
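
Put together, the suggestions above might look roughly like the sketch below for the same org.apache.tomcat.jdbc.pool.DataSource. The concrete values are assumptions that follow the numbered advice, not tested settings.

import org.apache.tomcat.jdbc.pool.DataSource;

public class TunedPoolSketch {
    public static DataSource buildPool(String url, String user, String password) {
        DataSource dataSource = new DataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl(url);
        dataSource.setUsername(user);
        dataSource.setPassword(password);
        dataSource.setInitialSize(2);                      // (1) keep the idle pool intentionally small
        dataSource.setMinIdle(2);
        dataSource.setMaxActive(30);                       // (2) maxIdle defaults to maxActive, so it is omitted
        dataSource.setTimeBetweenEvictionRunsMillis(5000); // (3) default idle-check interval
        dataSource.setMinEvictableIdleTimeMillis(50000);   // (4) evict idle connections well before ClearDB's 60 s cutoff
        dataSource.setValidationInterval(3000);            // (5) re-validate at most every few seconds
        dataSource.setValidationQuery("SELECT 1");
        dataSource.setTestOnBorrow(true);
        dataSource.setTestWhileIdle(true);
        dataSource.setRemoveAbandoned(true);               // (6) reclaim connections held too long
        dataSource.setLogAbandoned(true);
        dataSource.setRemoveAbandonedTimeout(10);          // seconds
        return dataSource;
    }
}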

Database Connection does not release after idle time out in glassfish

I am using GlassFish 3 and MySQL 5.6.11.
I have created a JDBC connection pool in GlassFish with the following settings:
Initial and Minimum Pool Size: 8
Maximum Pool Size: 30
Pool Resize Quantity: 10
Idle Timeout: 60 (seconds)
Max Wait Time: 2500 (milliseconds)
I have also set the pool resize quantity value.
When the number of connections increases, the connections are not released after the idle timeout.
The next time I hit the URL, the number of connections increases again; already-open connections are not reused.
I am getting this exception:
java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
I am using SHOW PROCESSLIST in MySQL to see the open connections.
If anyone knows the solution to this problem, please share your ideas with me.
Any help is appreciated.
Idle timeout is just the time that unused connections will remain in the pool before they are closed/recycled. The problem you are having is most likely that you are not closing your connections after use.
Fix your code to close connections when you are done with them; closing a connection releases it back to the connection pool so it is available for reuse.
Some connection pools have additional timeouts for how long a connection can be in use, forcing the connection back into the pool after that time, which to the user of that connection will look as if the connection has been closed. I don't think the GlassFish pool has this option though.
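
To make the "close your connections" advice concrete, here is a minimal try-with-resources sketch; the JNDI name and query are illustrative assumptions. Closing a Connection obtained from the pool returns it to the GlassFish pool rather than physically closing it.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CountDao {
    public int countRows() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myPool"); // placeholder JNDI name
        // try-with-resources closes the ResultSet, Statement and Connection even when
        // an exception is thrown, which is what hands the connection back to the pool
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM some_table"); // placeholder query
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}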

How does Hibernate pooling work if the pool size is less than the number of concurrent connections?

I am using Hibernate with c3p0 as the pooling provider, and I have set its max size to only 50. I performed load testing of my application with 1000 concurrent threads accessing the database continuously, and with MySQL max_connections set to 2000. I get proper responses from the application, but sometimes I see a socket exception.
So, first: if my pool size is only 50, how are 1000 connections managed by Hibernate? Does it mean that 50 connections are taken from the pool and the rest of the connections are created? Also, why might I be getting a socket exception such as a connection reset?
If you've set things up properly and c3p0's maxPoolSize is 50, then when 1000 clients hit the pool, 50 will get Connections initially and the rest will wait() briefly until Connections are returned by the first cohort. The pool's job, in collaboration with your application (which should hold Connections as briefly as possible), is to ensure that a limited number of Connections are efficiently shared.
If you are seeing occasional connection reset / socket exceptions, you probably ought to configure some Connection testing:
http://www.mchange.com/projects/c3p0/index.html#configuring_connection_testing
The latest prerelease version has some more direct advice about connection testing; you can download that or read the HTML source starting here:
https://github.com/swaldman/c3p0/blob/master/src/doc/index.html#L1071
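
As a sketch of the connection-testing configuration referred to above (the driver class, URL, credentials and values are illustrative; see the c3p0 documentation linked above for the recommended combinations):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class C3p0TestingSketch {
    public static ComboPooledDataSource buildPool() throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");        // placeholder; use the class matching your Connector/J version
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder URL
        cpds.setUser("user");                                // placeholder credentials
        cpds.setPassword("password");
        cpds.setMaxPoolSize(50);
        // connection testing: background tests of idle connections plus a test on check-in
        cpds.setIdleConnectionTestPeriod(30);   // seconds between background idle tests
        cpds.setTestConnectionOnCheckin(true);
        cpds.setPreferredTestQuery("SELECT 1"); // faster than the default metadata-based test
        return cpds;
    }
}

When c3p0 is driven through Hibernate, the same settings are usually supplied as c3p0 properties (for example in a c3p0.properties file) rather than programmatically.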