HAProxy: drops connections from DBCP, why?

I have a webapp (Tomcat/Hibernate/DBCP 1.4) that runs queries against MySQL, and this works fine for a certain load, say 50 queries a second. When I route the same moderate load through HAProxy (still just using a single database), I get a failure, maybe one for every 500 queries. My app reports:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 196,898 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago.
at sun.reflect.GeneratedConstructorAccessor210.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1117)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3567)
...
Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3017)
...
Meanwhile the HAProxy log is showing a lot of entries like:
Oct 15 15:43:12 localhost haproxy[3141]: 127.0.0.1:35500 [15/Oct/2012:15:42:50.027] mysql mysql/db03 0/0/34605 2364382 cD 3/3/3/3/0 0/0
The "cD" apparently indicates a state of client timeout. So whereas my webapp is saying that HAProxy is refusing to accepting new connections, HAProxy is saying that my webapp is not accepting data back.
I am not including my HAProxy configuration, because I've tried many different parameter values with essentially the same result. In particular, I've set maxconn to both high and low values, in both the global and server sections, and the stats page always shows max sessions rising to no more than about 7. My JDBC pool size is also high.
Is it generally ok to use a JDBC pool and a HAProxy pool together? Have people run into this kind of problem before?
I have an idea on how to solve this, which is to send a "validation query" before every query. But that adds overhead, and I'd still like to know why my webapp succeeds when it goes straight to MySQL but gets dropped connections when going through HAProxy.
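In DBCP 1.4 terms, that idea would presumably look something like the following sketch (the URL and credentials are placeholders, not my real config):
import org.apache.commons.dbcp.BasicDataSource;

// Sketch: have DBCP run a cheap round trip before handing out each connection.
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://haproxy-host:3306/mydb"); // placeholder URL
ds.setUsername("app");                            // placeholder credentials
ds.setPassword("secret");
ds.setValidationQuery("SELECT 1");  // the "validation query"
ds.setTestOnBorrow(true);           // run it on every borrow, i.e. before every query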
How can I debug further and get more information than just "cD"? I tried running HAProxy in debug mode, but it doesn't seem to reveal anything more.

Try this:
tune.bufsize 20480
tune.maxrewrite 2048
See the HAProxy docs for their meaning. Proceed carefully, as you're entering the grey zone of potentially lethal parameters, but it's worth a try to see if it works; I just solved a problem that made no sense against the documentation this way.
The defaults are 16k and 1k respectively.
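For reference, both are global-section directives; a minimal sketch of the placement (the timeout values are placeholders worth reviewing too, since "cD" is a client-timeout state and idle pooled JDBC connections must outlive those timeouts):
global
    tune.bufsize    20480
    tune.maxrewrite 2048

defaults
    mode tcp
    timeout client 8h   # placeholder; idle pooled connections must outlive this
    timeout server 8h   # placeholder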

Related

Unable to configure HikariCP in Spring Boot/JDBI/MySQL application

I am building a RESTful interface to a MariaDB-hosted database, and I cannot figure out how to properly configure HikariCP so that my database connections don't time out after the server has been idle for a while.
I am on Linux, Java 1.8, and my database server is stock MariaDB 5.5.60. My application uses the following tech stack:
spring-boot-starter-jdbc:2.0.1
spring-boot-data-rest:2.0.1
jdbi3-core:3.1.0
jdbi3-sqlobject:3.1.0
mysql-connector-java:5.1.46
HikariCP:2.7.8 (implicitly provided via Spring)
My application.properties file currently looks like this:
spring.datasource.url=jdbc:mysql://localhost/my_database
spring.datasource.username=myusername
spring.datasource.password=myp#ssw0rd
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
# 15 min * 60 sec * 1000 ms = 900000
spring.datasource.hikari.maxLifetime=900000
The "maxLifetime" value is being ignored. I have tried all sorts of Hikari-related things in this file (many found here on SO) but none of them seem to work. When I try hitting the server after it has been idle overnight, I get the following warning:
com.zaxxer.hikari.pool.ProxyConnection: HikariPool-1 - Connection com.mysql.jdbc.JDBC4Connection@140ae1bb marked as broken because of SQLSTATE(08S01), ErrorCode(0)
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 422,968,077 milliseconds ago. The last packet sent successfully to the server was 422,968,086 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
...and then a pile of errors and stack traces, which I'll spare you.
My intuition tells me that there is some magical combination of parameters missing from my application.properties file, but I'm at a loss. I am also at a loss as to how to verify it's actually working without having to wait overnight.
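For reference, the kinds of variations I've tried look like this (illustrative values only; kebab-case is the form the Spring Boot docs use, and spring.datasource.hikari.* maps onto HikariConfig properties):
# illustrative only - values are examples, not known-good settings
spring.datasource.hikari.max-lifetime=900000
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.connection-test-query=SELECT 1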
Any help is appreciated!

How to fix this Java MySQL exception: Communications link failure?

Here is the log of this exception:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,409,240 milliseconds ago. The last packet sent successfully to the server was 1,409,267 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989)
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2229)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1989)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3410)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:470)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3112)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2341)
...
Caused by: java.io.EOFException: Can not read response from server. Expected to read 7 bytes, read 5 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3011)
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2212)
I know this exception is quite common, and I've googled it and found a lot of solutions. However, none of them fix my problem. It's just a simple Java application, not a Java web application, and I didn't use any connection pool, just plain JDBC. My MySQL version is 5.7.12, and MySQL is running on a Windows server while the Java application runs on Linux. I have checked 'wait_timeout' for MySQL and it's 28800 seconds, which is much larger than 1,409,240 ms, so the problem is probably not caused by that. I've also checked the TCP connection wait time on my Linux box; it's 7200 s, still much bigger than 1,409,240 ms. I also tried adding '?autoReconnect=true' to the JDBC URL, but it made no difference. And I'm sure there is nothing wrong with the reachability of my server, because the connection does work for several minutes before the problem occurs. I've tried almost everything I can think of, but the problem still persists. What should I do? Is there any possibility that the problem is caused by the Windows firewall?
Edit:
This problem occurs when a SELECT query tries to read all rows from a table that is 10.9 GB in size. Maybe the table is just too big for the statement ResultSet rs = stmt.executeQuery(sql); to complete. But I've checked MySQL's 'max_execution_time' variable; it's set to 0, indicating there is no restriction on execution time.
Welcome to TCP/IP. Lots of things can cause loss of a TCP connection, especially one that has been idle for a while. One such thing, as you mention, is a firewall. The connection can be knocked down by some network entity even if both the client and the server agree it should be kept alive. Connection loss likelihood goes up when client and server are not on the same local network.
Figuring out why means doing lots of packet monitoring at various places in the network. That can use up a lot of time and effort and not teach you much. Plus, learning to read Wireshark output is a real task.
Most client-server programmers who need long-lasting connections use some kind of keepalive operation to avoid having the connection sit idle for too long. Keepalive operations send something and get something back. In your case you could do this the easy way by issuing a SELECT NOW() query (or some other round-trip no-op) once every minute or two while your client sits otherwise idle.
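A minimal sketch of that keepalive idea (my own illustration, not from the driver docs; it assumes nothing else uses the Connection concurrently, since JDBC connections are not thread-safe):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class KeepAlive {
    // Fires a cheap round-trip query every 60 seconds while the client is idle.
    public static ScheduledExecutorService start(final Connection conn) {
        final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try (Statement stmt = conn.createStatement()) {
                    stmt.execute("SELECT NOW()"); // round-trip no-op
                } catch (SQLException e) {
                    scheduler.shutdown(); // connection is likely dead; stop pinging
                }
            }
        }, 60, 60, TimeUnit.SECONDS);
        return scheduler;
    }
}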
The best way to handle this kind of thing is to open the database connection when you need it, then close it when you're done. If you use the connection pooling feature you can open and close your pooled connection upon every query, and still avoid churning the physical connections. The JDBC connector and MySQL server code are optimized for this approach; it is probably the best way to go.
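The borrow-and-return pattern looks roughly like this (a sketch assuming any configured javax.sql.DataSource-backed pool):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import javax.sql.DataSource;

public class PerQueryExample {
    // Borrow a pooled connection per unit of work and return it right away;
    // with a pool, close() hands the connection back instead of dropping TCP.
    static Timestamp serverTime(DataSource pool) throws SQLException {
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT NOW()");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getTimestamp(1);
        } // all three resources close here; the connection goes back to the pool
    }
}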
I had com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure. Reading the error message more closely, I realized it had to do with the SSL setting in my connection string.
When you have SSL enabled in your MySQL connection string, make sure your date and time are correct and in sync with the current date. If not, the connection fails with javax.net.ssl.SSLHandshakeException, resulting in "The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." and prompting you to do something about your date.
This is how I solved mine: I set my date and time correctly. Sometimes the error messages say a lot about the problem.

Do MySQL connections closed from JDBC stay open for some time?

I get the following error when accessing a MySQL database from JDBC:
java.sql.SQLNonTransientConnectionException: Too many connections
At the same time I am monitoring my connections. I added a counter that counts every opening and closing. The error occurs when I get to 380 opened and closed connections within 3 minutes.
Is it possible that it takes some time for MySQL to actually close the connection, so that there are still too many open even though I have sent a command to close them?
I am only guessing at certain points that might be the reason.
MySQL connections are maintained by the MySQL connection manager, so once a connection is released, the manager decides whether to kill that thread or return it to the pool.
In some cases, if a MySQL ResultSet is not closed after retrieving data and the connection is closed at that moment, sending it back to the pool might have some latency.
These are the two points I think might cause it, but I am not sure whether they are correct.
There could be other reasons that I don't know about.
Hope it gives you some idea.

connection issues with cleardb from cloudfoundry (on pivotal)

We constantly face issues with the connections to MySQL hosted by ClearDB. We have a dedicated plan which offers more than 300 connections for our application.
I know the CBR on ClearDB's side automatically closes an inactive connection after 60s.
The (Spring) application runs in Tomcat and uses a ConnectionPool with the following settings:
org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUrl(serviceInfo.getJdbcUrl());
dataSource.setUsername(serviceInfo.getUserName());
dataSource.setPassword(serviceInfo.getPassword());
dataSource.setInitialSize(10);
dataSource.setMaxActive(30);
dataSource.setMaxIdle(30);
dataSource.setTimeBetweenEvictionRunsMillis(34000);
dataSource.setMinEvictableIdleTimeMillis(55000);
dataSource.setTestOnBorrow(true);
dataSource.setTestWhileIdle(true);
dataSource.setValidationInterval(34000);
dataSource.setValidationQuery("SELECT 1");
The error we see in our stack is:
2015-01-13T13:36:22.75+0100 [App/0] OUT The last packet successfully received from the server was 90,052 milliseconds ago. The last packet sent successfully to the server was 90,051 milliseconds ago.; nested exception is com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
2015-01-13T13:36:22.75+0100 [App/0] OUT The last packet successfully received from the server was 90,052 milliseconds ago. The last packet sent successfully to the server was 90,051 milliseconds ago.
2015-01-13T13:36:22.75+0100 [App/0] OUT ... 52 common frames omitted
2015-01-13T13:36:22.75+0100 [App/0] OUT Caused by: java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost.
2015-01-13T13:36:22.75+0100 [App/0] OUT at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2914) ~[mysql-connector-java-5.1.33.jar:5.1.33]
2015-01-13T13:36:22.75+0100 [App/0] OUT at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3337) ~[mysql-connector-java-5.1.33.jar:5.1.33]
2015-01-13T13:36:22.75+0100 [App/0] OUT ... 64 common frames omitted
Do you have any ideas what could be causing this, or did you have similar experiences with ClearDB and maybe moved somewhere else?
Unfortunately I'm out of ideas; any help is really appreciated.
The error you listed looks a lot like your connection has been disconnected on the remote end (i.e. by ClearDB). 60s is a pretty short window for idle connections, so I'd suggest a few changes to your pool config; a combined sketch follows the numbered list.
1.) Set initialSize and minIdle (which defaults to initialSize) intentionally low. This will keep the number of idle connections low. Fewer idle connections means there's more of a chance a connection will be reused before the 60s window expires.
2.) You don't need maxIdle here. It defaults to maxActive.
3.) Set timeBetweenEvictionRunsMillis lower. This sets how often the pool will check for idle connections. The default of 5s is probably fine.
4.) Lower minEvictableIdleTimeMillis. This is the minimum amount of time a connection will sit in the pool before it can be evicted. It doesn't mean it will be evicted exactly when it reaches that age, though. If the idle check just ran and your connection is minEvictableIdleTimeMillis - 1s old, it will have to wait for the next check (i.e. timeBetweenEvictionRunsMillis) to be evicted. If you're using the default timeBetweenEvictionRunsMillis of 5s, setting this to 50s should give it plenty of time.
5.) Set the validationInterval lower. This determines how long the pool will wait since the last successful validation before it validates the connection again. I'd go with something between 2 and 5s. It's high enough that you'll get some benefit when you're busy, and low enough that it won't cause you to miss validation on bad connections.
6.) I'd also suggest that you enable removeAbandoned and logAbandoned, with removeAbandonedTimeout set to something like 5 or 10s (most web apps shouldn't hold the db connection for that long). This will eliminate the possibility that your web app is holding the connection in an idle state for more than 60s, then trying to use it again.
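Putting those suggestions together against the snippet from the question might look like this (a sketch; the values are the ones suggested above, not tested against ClearDB):
org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUrl(serviceInfo.getJdbcUrl());
dataSource.setUsername(serviceInfo.getUserName());
dataSource.setPassword(serviceInfo.getPassword());
dataSource.setInitialSize(2);                      // 1.) keep idle connections low
dataSource.setMinIdle(2);
dataSource.setMaxActive(30);                       // 2.) maxIdle left at its default (maxActive)
dataSource.setTimeBetweenEvictionRunsMillis(5000); // 3.) default 5s idle-check interval
dataSource.setMinEvictableIdleTimeMillis(50000);   // 4.) evict well before the 60s cutoff
dataSource.setTestOnBorrow(true);
dataSource.setTestWhileIdle(true);
dataSource.setValidationInterval(3000);            // 5.) revalidate every few seconds
dataSource.setValidationQuery("SELECT 1");
dataSource.setRemoveAbandoned(true);               // 6.) reclaim connections held too long
dataSource.setLogAbandoned(true);
dataSource.setRemoveAbandonedTimeout(10);          // seconds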

Node.js and MySQL "Too many connections" error

I'm using Node.js to run a web-server for my web application. I'm also using the node-mysql module to interface with a MySQL server for all my persistent database needs.
Whenever there is a critical error within my Node.js application that crashes my app's process I get an email sent to me. So, I keep getting this email with an error saying "Too many connections". Here's an example of the error:
Error: Too many connections
at Function.Client._packetToUserObject (/apps/x/node_modules/mysql/lib/client.js:394:11)
at Client._handlePacket (/apps/x/node_modules/mysql/lib/client.js:307:43)
at Parser.EventEmitter.emit (events.js:96:17)
at Parser.write.emitPacket (/apps/x/node_modules/mysql/lib/parser.js:71:14)
at Parser.write (/apps/x/node_modules/mysql/lib/parser.js:576:7)
at Socket.EventEmitter.emit (events.js:96:17)
at TCP.onread (net.js:396:14)
As you can see all it tells me is that the error is coming from the mysql module, but it doesn't tell me where in my application code the issue is occurring.
My application opens a db connection any time I need to run one or more queries, and I immediately close the connection after all my queries have run and the data has been collected. So I don't understand how I could be exceeding the 151 max_connections limit.
Unless there is a place in my code where I forgot to call db.end() to close the connection, I don't see how my app could leak like this. Even if there were such a mistake, I wouldn't get these emails by the dozens. Yesterday I received almost 100 emails with roughly the same error. How could this be happening? If my application had leaked and accumulated connections over time, then as soon as the first error occurred the app process would crash and all connections would be lost, preventing the app from crashing again. Since I received ~100 emails, the app crashed ~100 times, all within a short period of time. That could only mean that somewhere in my application a lot of connections were established in a short period of time, right?
How could I avoid this problem? This is very discouraging. All help is highly appreciated. Thanks
MySQL's default max_connections is 100 on older versions (151 from MySQL 5.5 on) unless you changed it. Also, in truth you have max_connections + 1: the extra one allows a root user to log on even after you have maxed out the connections, in order to figure out what is actually being used. When your connections are maxed out, try logging on as root and running the following command in MySQL:
mysql> SHOW FULL PROCESSLIST;
Post the output of this command. Once you actually know what is consuming your resources, you can go about fixing it. It could easily be your code that is leaving open connections.
You should take a look at the following documentation: Show Processlist
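If the process list points back at your own app, one common remedy is a shared pool instead of a fresh connection per query. A sketch, assuming a version of the node-mysql module that provides createPool (2.x; the credentials are placeholders):
var mysql = require('mysql');

// One shared pool per process; connectionLimit caps how many physical
// connections the app can ever hold open at once.
var pool = mysql.createPool({
  host: 'localhost',      // placeholder credentials
  user: 'appuser',
  password: 'secret',
  database: 'mydb',
  connectionLimit: 20
});

// pool.query() borrows a connection, runs the query, and releases it,
// so there is no db.end() call to forget.
pool.query('SELECT 1 + 1 AS two', function (err, rows) {
  if (err) throw err;
  console.log(rows[0].two);
});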
+1 for the question. Our investigations showed that node-mysql opens connections and doesn't close them; because of that, at some point we reach the max connections limit. The question is why node-mysql doesn't close the connections.