HikariCP SSLException: closing inbound before receiving peer's close_notify - MySQL

Since switching from the Tomcat connection pool (the Spring Boot 1 default) to Hikari (the Spring Boot 2 default), we've started seeing many instances of:
EXCEPTION STACK TRACE:
** BEGIN NESTED EXCEPTION **
javax.net.ssl.SSLException
MESSAGE: closing inbound before receiving peer's close_notify
STACKTRACE:
javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:129)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:255)
at java.base/sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:645)
at java.base/sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:624)
at com.mysql.cj.protocol.a.NativeProtocol.quit(NativeProtocol.java:1312)
at com.mysql.cj.NativeSession.quit(NativeSession.java:182)
at com.mysql.cj.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:1750)
at com.mysql.cj.jdbc.ConnectionImpl.close(ConnectionImpl.java:720)
at com.zaxxer.hikari.pool.PoolBase.quietlyCloseConnection(PoolBase.java:135)
at com.zaxxer.hikari.pool.HikariPool.lambda$closeConnection$1(HikariPool.java:441)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Environment
Spring Boot 2.1.1.RELEASE
Java 11
mysql-connector-java 8.0.13
HikariCP 3.2.0
Database:
RDS Aurora MySql 5.7.12 (default param group)
Configuration (Spring Boot)
Settings:
spring.datasource.hikari.transactionIsolation=TRANSACTION_REPEATABLE_READ
spring.datasource.hikari.minimumIdle=10
spring.datasource.hikari.idleTimeout=300000
spring.datasource.hikari.maximumPoolSize=20
spring.datasource.hikari.connectionTimeout=5000
spring.datasource.hikari.maxLifetime=900000
spring.datasource.hikari.validationTimeout=1000
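For reference, here is the same pool configuration expressed programmatically, as a minimal sketch using HikariCP's HikariConfig API (the JDBC URL and credentials are placeholders):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://example-host:3306/mydb"); // placeholder
        config.setUsername("user");                               // placeholder
        config.setPassword("password");                           // placeholder
        config.setTransactionIsolation("TRANSACTION_REPEATABLE_READ");
        config.setMinimumIdle(10);
        config.setIdleTimeout(300_000);     // 5 minutes, in milliseconds
        config.setMaximumPoolSize(20);
        config.setConnectionTimeout(5_000); // 5 seconds
        config.setMaxLifetime(900_000);     // 15 minutes, well below the server's 8-hour wait_timeout
        config.setValidationTimeout(1_000); // 1 second
        return new HikariDataSource(config);
    }
}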
Is there a setting I'm missing? Perhaps my idle times should be set much lower?
We have not (yet) experienced any obvious bad side effects of this; i.e., the application appears to continue running without issue, but this stack trace appears frequently (perhaps every 4 seconds).
Database Settings
If I connect to MySQL via the CLI and run show variables;, then grep for timeout-related values, I see:
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| wait_timeout                | 28800    |
+-----------------------------+----------+

Related

Connection issue on CSV data download from RDS via MySQL CLI

I need to download locally (macOS Big Sur) a large dataset (millions of rows) from an AWS RDS database (MySQL 5.7).
Thanks to this great post I am able to connect and download some data on my machine into a CSV file:
mysql --host=$HOST --user $USER --password=$PASSWORD --database=$DATABASE --port=$PORT --batch \
--quick -e "$QUERY" \
| sed $'s/\\t/","/g;s/^/"/;s/$/"/;s/\\n//g' > $FILE_PATH
However, if I extend my query to thousands of records, after a few seconds the process stops and the CSV ends up truncated (literally, the last written row is cut off halfway), so I assume there is some kind of stream, timeout, or buffer issue.
mysql> SHOW VARIABLES LIKE '%timeout';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| wait_timeout                | 28800    |
+-----------------------------+----------+
Since the command stops after ~10 sec, I assume it depends on the connect_timeout value. However, I tried setting it with SET @@GLOBAL.connect_timeout=7200 but I get a permission error. I tried adding the --connect-timeout=7200 parameter to the command, but it does not work (which puzzles me the most).
Running the query (limited to ~250k rows) in my client (SequelAce), it runs fine, so I can exclude issues with the data or the SQL script itself.
Any ideas or suggestions?
Are there better tools for the job maybe?
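One thing to note: connect_timeout only covers the initial handshake, so raising it should not matter for a query that dies mid-transfer; net_write_timeout (60 here) is the server-side limit on writing results to a slow client, which fits a truncated download better. One possible workaround, sketched below assuming $QUERY has no LIMIT clause of its own (the chunk size is illustrative), is to page the export so no single transfer runs long enough to hit a timeout:
CHUNK=100000
OFFSET=0
while : ; do
  # Fetch one page, convert to CSV as before, append it, and count the rows fetched.
  ROWS=$(mysql --host=$HOST --user $USER --password=$PASSWORD --database=$DATABASE --port=$PORT \
      --batch --quick --skip-column-names -e "$QUERY LIMIT $CHUNK OFFSET $OFFSET" \
    | sed $'s/\\t/","/g;s/^/"/;s/$/"/;s/\\n//g' \
    | tee -a $FILE_PATH | wc -l)
  [ "$ROWS" -eq 0 ] && break   # stop when a page comes back empty
  OFFSET=$((OFFSET + CHUNK))
done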

Google Cloud functions + SQL Broken Pipe error

I have various Google Cloud Functions which write to and read from a Cloud SQL database (MySQL). The processes work; however, when the functions happen to run at the same time, I get a Broken pipe error. I am using SQLAlchemy with Python and MySQL; the processes are Cloud Functions and the db is a Google Cloud SQL database. I have seen suggested solutions that involve setting timeout values to be longer. I was wondering if this would be a good approach, or if there is a better one? Thanks for your help in advance.
Here's the SQL broken pipe error:
(pymysql.err.OperationalError) (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))")
(Background on this error at: http://sqlalche.me/e/13/e3q8)
Here are the MySQL timeout values:
show variables like '%timeout%';
+-------------------------------------------+----------+
| Variable_name                             | Value    |
+-------------------------------------------+----------+
| connect_timeout                           | 10       |
| delayed_insert_timeout                    | 300      |
| have_statement_timeout                    | YES      |
| innodb_flush_log_at_timeout               | 1        |
| innodb_lock_wait_timeout                  | 50       |
| innodb_rollback_on_timeout                | OFF      |
| interactive_timeout                       | 28800    |
| lock_wait_timeout                         | 31536000 |
| net_read_timeout                          | 30       |
| net_write_timeout                         | 60       |
| rpl_semi_sync_master_async_notify_timeout | 5000000  |
| rpl_semi_sync_master_timeout              | 3000     |
| rpl_stop_slave_timeout                    | 31536000 |
| slave_net_timeout                         | 30       |
| wait_timeout                              | 28800    |
+-------------------------------------------+----------+
15 rows in set (0.01 sec)
If you cache your connection for performance, it's normal to lose the connection after a while. To prevent this, you have to deal with disconnection.
In addition, because you are working with Cloud Functions, only one request can be handled at a time on one instance (if you have 2 concurrent requests, you will get 2 instances). Thus, set your pool size to 1 to save resources on your database side (in case of huge parallelization).
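A minimal sketch of that advice with SQLAlchemy (the connection URL is a placeholder; pool_pre_ping is SQLAlchemy's built-in way to deal with dropped connections):
from sqlalchemy import create_engine

# Placeholder URL; on Cloud Functions this would typically point at the
# Cloud SQL instance's unix socket or private IP.
engine = create_engine(
    "mysql+pymysql://user:password@10.0.0.1/dbname",
    pool_size=1,         # one connection per function instance, as suggested above
    max_overflow=0,      # never grow beyond that single connection
    pool_pre_ping=True,  # test each connection before use; reconnect if it was dropped
    pool_recycle=1800,   # retire connections before the server closes them itself
)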

SQL Error: 0, SQLState: 08S01 Communications link failure error with Hibernate without C3P0

We have two of these servers, where we are observing these errors. Both talk to the master RDS db.
The TPS value has recently risen and is touching almost 50 TPS.
CPU usage on the application servers is around 40%.
I am getting lots of these errors -
2019-09-02 00:00:12,714 65086940 [XNIO-3 task-18] INFO c.c.p.c.s.CustomRemoteTokenService [CustomRemoteTokenService.java:57] - API : service/api/path/profile/summary property api : system;;
2019-09-02 00:00:12,763 65086989 [XNIO-3 task-18] WARN o.h.e.jdbc.spi.SqlExceptionHelper [SqlExceptionHelper.java:127] - SQL Error: 0, SQLState: 08S01
2019-09-02 00:00:12,764 65086990 [XNIO-3 task-18] ERROR o.h.e.jdbc.spi.SqlExceptionHelper [SqlExceptionHelper.java:129] - Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
2019-09-02 00:00:12,765 65086991 [XNIO-3 task-18] ERROR io.undertow.request [LoggingExceptionHandler.java:80] - UT005023: Exception handling request to /service/path/to/profile/summary
java.lang.NoClassDefFoundError: org/hibernate/internal/util/JdbcExceptionHelper
Almost 3k in 5 mins on one server, while the other server has 0 today. Interestingly, the errors are typically high on one of the application servers, while both are expected to handle similar load.
Db configuration values are -
show global variables like "%timeout%";
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| wait_timeout                | 28800    |
+-----------------------------+----------+
I checked the question Database Connection to MySQL times out even after setting c3p0.testConnectionOnCheckout=true as well; however, since I don't have all the Hibernate configs defined in my case, I am not sure whether I need to tweak those.
I have only the following defined in my persistence.xml file -
<property name="hibernate.show_sql" value="true"/>
<property name="hibernate.format_sql" value="true"/>
<property name="hibernate.ejb.naming_strategy" value="org.hibernate.cfg.ImprovedNamingStrategy"/>
<property name="hibernate.connection.charSet" value="UTF-8"/>

What is the best solution to MySQL connection timeout?

I am writing a small web app in Go, which uses MySQL to store data.
I get an intermittent MySQL error if the web server hasn't received any request for some amount of time (> 8 hours):
[mysql] 2017/02/08 16:31:56 packets.go:33: unexpected EOF
[mysql] 2017/02/08 16:31:56 packets.go:130: write tcp 127.0.0.1:49188->127.0.0.1:3306: write: broken pipe
I found some related discussion on GitHub (issue 529, issue 257, and issue 446). From what I understand, the MySQL server closes the connection when a timeout is reached.
I tried setting SetMaxOpenConns to 9 and SetMaxIdleConns to 0, as some people recommended. However, this threw an exception immediately. (But if I set SetMaxIdleConns larger than 0, no immediate exception was thrown.)
I also tried setting SetConnMaxLifetime to 5 mins. This threw an exception too, after 5 mins.
Now I am trying the code below:
db.SetConnMaxLifetime(0)
db.SetMaxOpenConns(10)
db.SetMaxIdleConns(5)
It has been running for 20 mins. It's still too early to tell.
(UPDATE: this doesn't work either)
Here is configuration:
driver: go-sql-driver V1.3.
go version: go1.7.1 darwin/amd64
mysql: latest from docker hub
rkt version: 1.18
CoreOS: 1284.0.0
Perhaps you can start a heartbeat goroutine to avoid the timeout.
You can check your MySQL wait_timeout variable:
mysql> show global variables like 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout  | 300   |
+---------------+-------+
1 row in set (0.00 sec)
Then use db.SetConnMaxLifetime(120*time.Second), which means that once a connection has been open for more than 120s, sql.DB will close it and open a new one (or fetch another from the pool) as needed. If you don't set a maximum connection lifetime, you may end up using a connection the server has already closed and get the error above.
Watching the MySQL process list (mysql> show processlist;), if a connection sleeps for more than 300s, it's recycled by MySQL:
mysql> show processlist;
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
| Id    | User            | Host             | db          | Command | Time    | State                  | Info             |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
|     4 | event_scheduler | localhost        | NULL        | Daemon  | 1363480 | Waiting on empty queue | NULL             |
| 26539 | root            | 172.17.0.1:48732 | NULL        | Query   |       0 | starting               | show processlist |
| 26575 | auditcenter     | 172.17.0.1:51714 | obs_gb_test | Sleep   |      51 |                        | NULL             |
+-------+-----------------+------------------+-------------+---------+---------+------------------------+------------------+
3 rows in set (0.00 sec)
SetMaxOpenConns and SetMaxIdleConns are used for managing the pool's connection resources; see the database/sql package documentation.
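Putting the answer's advice together, a minimal sketch (the DSN is a placeholder, and 120 seconds assumes wait_timeout=300 as shown above):
package main

import (
	"database/sql"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// The DSN is a placeholder; replace it with real credentials.
	db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/dbname")
	if err != nil {
		panic(err)
	}
	// Retire connections well before the server's wait_timeout so the pool
	// never hands out a connection MySQL has already closed.
	db.SetConnMaxLifetime(120 * time.Second)
	db.SetMaxOpenConns(10)
	db.SetMaxIdleConns(5)
	// ... use db as usual ...
}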

What is the meaning of different mysql timeouts

I searched a lot on the internet but did not find any brief explanation of MySQL timeouts with examples. I want to know the meaning of the different MySQL timeouts listed below, and also why and when we use them.
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| connect_timeout          | 10    |
| delayed_insert_timeout   | 300   |
| innodb_lock_wait_timeout | 50    |
| interactive_timeout      | 28800 |
| net_read_timeout         | 3     |
| net_write_timeout        | 60    |
| slave_net_timeout        | 3600  |
| wait_timeout             | 28800 |
+--------------------------+-------+
Also, in a Ruby on Rails application I can set read_timeout in my database.yml file. If a query is not able to read the data within the specified read_timeout value, MySQL will close the connection. So I also want to know: what is the difference between net_read_timeout and read_timeout?
Thanks,
From The Ultimate Guide to Ruby Timeouts
connect (or open) - time to open the connection
read (or receive) - time to receive data after connected
write (or send) - time to send data after connected
checkout - time to checkout a connection from the pool
statement - time to execute a database statement
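With the mysql2 adapter, the client-side counterparts can be set in database.yml; a hedged sketch with illustrative values (these are client-side limits, distinct from the server's net_read_timeout/net_write_timeout variables above):
production:
  adapter: mysql2
  host: example-host       # placeholder
  database: mydb           # placeholder
  connect_timeout: 5       # seconds to open the connection
  read_timeout: 30         # seconds to wait for data after connecting
  write_timeout: 30        # seconds to wait when sending data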