I am trying to perform JMeter testing for a REST API built using a Spring Boot microservice and JPA.
The API connects to a MySQL instance, executes a couple of queries in parallel (async), and returns the result as JSON.
The MySQL instance is deployed in AWS.
It works fine for up to 10 users. If I increase the load beyond that, I get a "connection is not available" error:
engine.jdbc.spi.SqlExceptionHelper : HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only property I have configured in application.properties is connectionTimeout. (This property controls the maximum number of milliseconds that a client (that's you) will wait for a connection from the pool. If this time is exceeded without a connection becoming available, an SQLException will be thrown. The lowest acceptable connection timeout is 250 ms. Default: 30000 (30 seconds).)
spring.datasource.hikari.connectionTimeout: 80000 (80 seconds).
I read about HikariCP and found these defaults:
maximumPoolSize: 10
minimumIdle: same as maximumPoolSize
maxLifetime: 1800000 (30 minutes)
I am trying various combinations of the properties above to test with 100 concurrent users at a time.
Can someone tell me which connection pool properties to tweak in order to test this with 100 users? Or what is the optimum configuration?
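For example, one such combination in application.properties might look like this (the values are purely illustrative, not a recommendation):
spring.datasource.hikari.connectionTimeout=30000
spring.datasource.hikari.maximumPoolSize=50
spring.datasource.hikari.minimumIdle=10
spring.datasource.hikari.maxLifetime=1800000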
Thanks in advance
Related
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet successfully received from the server was 976,464 milliseconds ago. The last packet sent successfully to the server was 974,674 milliseconds ago.
This error occurs in JMeter when I run the following test plan, which sends 15 MB files to AWS RDS.
LoadTestPlan
JDBC Connection Configuration:
Max Wait ms: 0 (indefinite wait)
Max connections: 0 (no limit)
ThreadGroup
No. of threads: 200
Ramp up seconds: 100
Loop Count: Indefinite
Scheduled to run for 3 hours
JDBC Request
LOAD DATA LOCAL INFILE statement
RDS Configuration
Engine 5.7.33
Max connections: 200
Innodb lock wait timeout: 6000
Max allowed packet: 64 MB
There are many suggested solutions for this Communications link failure, but in my case some requests succeed while others fail with this error. That makes me think it is a network problem, but I am on a high-speed Ethernet connection of 74 Mbps. Even if it is a network problem, there must be some parameter that, when adjusted, would allow connections to succeed even over a poor network.
JMeter version: 5.4
With regards to your statement:
Max connections: 0 (no limit)
I don't think that is true; as per the BasicDataSource Configuration Parameters, I would say it is rather 8 (see the maxTotal parameter).
So it looks like you're running 200 concurrent threads against a pool of 8 connections; try increasing Max connections to match the number of JMeter threads. Or, if you're not testing the database directly, you should rather mimic your application's JDBC configuration.
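For example, using the same field names as in your JDBC Connection Configuration above (the values are only a starting point):
Max connections: 200 (match the Thread Group thread count instead of 0)
Max Wait ms: 10000 (fail fast instead of waiting indefinitely)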
I know that JMeter tries to set the maximum pool size equal to the initial pool size, as evidenced by this line; however, the source code of BasicDataSource suggests setting a negative number for "no limit".
More information: The Real Secret to Building a Database Test Plan With JMeter
I am building a RESTful interface to a MariaDB-hosted database, and I cannot figure out how to properly configure HikariCP so that my database connections don't time out after the server has been idle for a while.
I am on Linux, Java 1.8, and my database server is stock MariaDB 5.5.60. My application uses the following tech stack:
spring-boot-starter-jdbc:2.0.1
spring-boot-data-rest:2.0.1
jdbi3-core:3.1.0
jdbi3-sqlobject:3.1.0
mysql-connector-java:5.1.46
HikariCP:2.7.8 (implicitly provided via Spring)
My application.properties file currently looks like this:
spring.datasource.url=jdbc:mysql://localhost/my_database
spring.datasource.username=myusername
spring.datasource.password=myp#ssw0rd
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
# 15 min * 60 sec * 1000 ms = 900000
spring.datasource.hikari.maxLifetime=900000
The "maxLifetime" value is being ignored. I have tried all sorts of Hikari-related things in this file (many found here on SO) but none of them seem to work. When I try hitting the server after it has been idle overnight, I get the following warning:
com.zaxxer.hikari.pool.ProxyConnection: HikariPool-1 - Connection com.mysql.jdbc.JDBC4Connection#140ae1bb marked as broken because of SQLSTATE(08S01) ,ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 422,968,077 milliseconds ago. The last packet sent successfully to the server was 422,968,086 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
...and then a pile of errors and stack traces, which I'll spare you.
My intuition tells me that there is some magical combination of parameters missing from my application.properties file, but I'm at a loss. I also don't know how to verify that a fix is actually working without having to wait overnight.
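For what it's worth, the quickest check I can think of (assuming I can temporarily lower wait_timeout on a throwaway MariaDB instance) would be to shrink both timeouts so that a stale connection shows up within minutes instead of overnight:
-- on the test MariaDB server: drop idle connections after 60 seconds
SET GLOBAL wait_timeout = 60;
and in application.properties, retire pooled connections well before that:
spring.datasource.hikari.maxLifetime=30000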
Any help is appreciated!
We have a Spring Boot application which uses embedded Tomcat for deployment and the default tomcat-jdbc connection pooling, with a MySQL back end and no customization on the MySQL or Tomcat side. The app has a few schedulers that run mostly at specific times of day; i.e. between the last cron run yesterday and the first cron run today there is a gap of more than 9 hours. However, whenever the cron ran earlier, it never hit an idle-connection issue. Nowadays we see this error message:
The last packet successfully received from the server was XXXXXXXX milliseconds ago. The last packet sent successfully to the server was XXXXXXXY milliseconds ago.
I can always try using testOnBorrow with a validationQuery and/or testWhileIdle etc. as required to get this working, but...
I'm trying to understand the lifecycle of an active connection in tomcat-jdbc connection pooling. According to the documentation, the default value of wait_timeout for MySQL is 8 hours, whereas the default idle connection timeout in tomcat-jdbc is nearly 6 seconds.
If the default values are in use everywhere, why has this issue never surfaced before?
Or is it that the connections in the tomcat-jdbc pool are made active every time the cron starts running and become idle thereafter?
Is it the state of the Spring Boot app or of the scheduler that makes the difference?
The problem is not in the configuration or setup. A Spring Boot app uses the spring-data library, which makes use of the underlying connection pool. The pool handles the connection(s) as per the pool implementation. The use of @Transactional, however, decides when the underlying connection is opened. If none is specified in the Spring Boot app, the default spring-data implementation opens it during CRUD operations; otherwise it is opened during the call to the application method annotated with @Transactional.
In my case it was the latter. After the connection was opened, a time-consuming non-DB process ran, which left the connection idle right after it was opened and caused an exception when it was actually used later.
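A minimal sketch of that pattern, assuming a plain DataSource-backed transaction manager (the class, table and method names below are made up for illustration):

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class NightlyJob {

    private final JdbcTemplate jdbc;

    public NightlyJob(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // The transaction, and with it a pooled connection, is bound to this method
    // on entry, not at the moment the first query runs.
    @Transactional
    public void run() {
        String payload = buildReport();  // long non-DB work: the borrowed connection sits idle here
        jdbc.update("INSERT INTO report (body) VALUES (?)", payload); // by now MySQL may have closed the idle connection
    }

    private String buildReport() {
        // stands in for the time-consuming, non-database processing described above
        return "report-body";
    }
}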
I have a Java EE application running in GlassFish on EC2, with a MySQL database on Amazon RDS.
I am trying to configure the JDBC connection pool in order to minimize downtime in case of a database failover.
My current configuration isn't working correctly during a Multi-AZ failover: the standby database instance appears to be available within a couple of minutes (according to the AWS console), while my GlassFish instance remains stuck for a long time (about 15 minutes) before resuming work.
The connection pool is configured like this:
asadmin create-jdbc-connection-pool --restype javax.sql.ConnectionPoolDataSource \
--datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource \
--isconnectvalidatereq=true --validateatmostonceperiod=60 --validationmethod=auto-commit \
--property user=$DBUSER:password=$DBPASS:databaseName=$DBNAME:serverName=$DBHOST:port=$DBPORT \
MyPool
If I use a Single-AZ db.m1.small instance and reboot the database from the console, GlassFish will invalidate the broken connections, throw some exceptions and then reconnect as soon as the database is available. In this setup I get less than 1 minute of downtime.
If I use a Multi-AZ db.m1.small instance and reboot with failover from the AWS console, I see no exception at all. The server halts completely, with all incoming requests timing out. After 15 minutes I finally get this:
Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 940,715 milliseconds ago. The last packet sent successfully to the server was 935,598 milliseconds ago.
It appears as if each HTTP thread gets blocked on an invalid connection without getting an exception, so there's no chance to perform connection validation.
Downtime in the Multi-AZ case is always between 15 and 16 minutes, so it looks like a timeout of some sort, but I have been unable to change it.
Things I have tried without success:
connection leak timeout/reclaim
statement leak timeout/reclaim
statement timeout
using a different validation method
using MysqlDataSource instead of MysqlConnectionPoolDataSource
How can I set a timeout on stuck queries so that connections in the pool are reused, validated and replaced?
Or how can I let GlassFish detect a database failover?
As I commented before, it is because the sockets that are open and connected to the database don't realize the connection has been lost, so they stay connected until the OS socket timeout is triggered, which I have read is usually around 30 minutes.
To solve the issue you need to override the socket timeout in your JDBC connection string, or in the JNDI connection configuration/properties, by setting the socketTimeout parameter to a smaller value.
Keep in mind that anything taking longer than the value defined will be killed, even if the connection is in use (I haven't been able to confirm this; it is what I read).
The other two parameters I mention in my comment are connectTimeout and autoReconnect.
Here's my JDBC Connection String:
jdbc:(...)&connectTimeout=15000&socketTimeout=60000&autoReconnect=true
I also disabled Java's DNS cache by doing
java.security.Security.setProperty("networkaddress.cache.ttl" , "0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl" , "0");
I do this because Java doesn't honor the TTLs, and when the failover takes place the DNS name stays the same but the IP changes.
Since you are using an application server, the parameters to disable the DNS cache must be passed to the JVM when starting GlassFish with -Dnet and not set in the application itself.
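For illustration only, assuming GlassFish's create-jvm-options subcommand and the legacy sun.net.inetaddr.ttl / sun.net.inetaddr.negative.ttl system properties (which serve the same purpose as the two Security.setProperty calls above), that could look something like:
asadmin create-jvm-options '-Dsun.net.inetaddr.ttl=0'
asadmin create-jvm-options '-Dsun.net.inetaddr.negative.ttl=0'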
I am using Hibernate with c3p0 as the pooling provider. I have set its max pool size to only 50. Now, I performed load testing of my application with 1000 concurrent threads accessing the database continuously, and with MySQL max_connections set to 2000. I am getting proper responses from the application, but sometimes I see a socket exception.
So, first: if my pool size is only 50, how are 1000 connections managed by Hibernate? Does it mean that 50 connections are taken from the pool and the rest are created on demand? Also, why might I be getting a socket exception such as connection reset?
If you've set things up properly and c3p0's maxPoolSize is 50, then when 1000 clients hit the pool, 50 will get Connections initially and the rest will wait() briefly until Connections are returned by the first cohort. The pool's job, in collaboration with your application (which should hold Connections as briefly as possible), is to ensure that a limited number of Connections are shared efficiently.
If you are seeing occasional connection reset / socket exceptions, you probably ought to configure some Connection testing:
http://www.mchange.com/projects/c3p0/index.html#configuring_connection_testing
The latest prerelease version has some more direct advice about connection testing; you can download it or read the HTML source starting here:
https://github.com/swaldman/c3p0/blob/master/src/doc/index.html#L1071
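For illustration, switching on connection testing via a c3p0.properties file on the classpath might look like this (the values are examples, not tuned recommendations):
# c3p0.properties
c3p0.testConnectionOnCheckin=true
c3p0.idleConnectionTestPeriod=300
c3p0.preferredTestQuery=SELECT 1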