Why can database connection attempts be higher than connections created? - mysql

Architecture
Our application has the following architecture: API pods, worker pods, and a database.
API pods
PHP -> ProxySQL -> RDS
Worker pods
PHP -> RDS
Question
Why could database connection attempts be 10 times the number of connections actually created? Any insights into debugging this would help us.
db.Users.Connection.avg = connections attempted on the database
db.User.Threads_connected.avg = connections created on the database
Note: There is no RPS spike or increase in our system; these are normal, constant-load graphs.
We have 2k pods and around 2k connections created, which looks right. How are attempts at 10k?
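One thing worth ruling out first (an assumption about the metrics, not a confirmed cause): MySQL's cumulative `Connections` counter increments on every handshake attempt, including failed logins, health-check pings, and reconnects after an idle timeout, while `Threads_connected` is a point-in-time gauge. A pool whose idle timeout is shorter than the gap between checkouts can therefore rack up attempts far faster than its concurrent connection count grows. A minimal Python sketch of that effect (the names and numbers are illustrative, not your stack):

```python
class Pool:
    """Toy single-connection pool; counts handshake attempts the way
    MySQL's cumulative Connections status variable does."""

    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.attempts = 0        # cumulative connection attempts
        self.idle_since = None   # when the pooled connection last went idle

    def checkout(self, now):
        # Reuse the pooled connection only if it has not idled out.
        if self.idle_since is None or now - self.idle_since > self.idle_timeout:
            self.attempts += 1   # fresh handshake needed
        self.idle_since = None

    def checkin(self, now):
        self.idle_since = now

pool = Pool(idle_timeout=5)
# 100 requests, one every 10 seconds: the connection always idles out
# between requests, so every checkout is a new attempt.
for t in range(0, 1000, 10):
    pool.checkout(t)
    pool.checkin(t + 1)

print(pool.attempts)  # 100 attempts, yet never more than 1 concurrent connection
```

If something like this is happening, comparing ProxySQL's frontend/backend connection settings and PHP's pool idle timeout against the attempt rate would be a reasonable next step.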

Related

AWS (Elastic Beanstalk, RDS). The RDS database becomes unavailable

I deployed a web app (Java) using Elastic Beanstalk and RDS (MySQL). Health status OK!
Access to the database is lost after 1 or 2 days. In IDEA, when I connect to the database, I get the error [42000][1049] Unknown database 'ebdb'.
I have to rebuild the environment (Elastic Beanstalk), but after a while the problem comes back. What is the reason for the error? How can I check in AWS whether the database still exists? Thanks.
I have never seen this issue. I have a custom Java Spring Boot app running on Elastic Beanstalk that queries data from an RDS MySQL instance. It's been running well over a year without issue.
The database runs fine without any of the connection issues you are describing. When you look at the RDS instance in the AWS Management Console, what is the status of the database? Is it available, as shown here?
The URL to the RDS Management console in us-west-2:
https://us-west-2.console.aws.amazon.com/rds/home

AWS Time Out Problems with Elastic Beanstalk App with DB Access

Hi, when my Elastic Beanstalk app (an m5a.large Windows Server with a deployed .NET Core Web API) comes under heavy load, the status on the health page for my EC2 instances turns red, and my requests and the health check time out. That happens around 1-3 minutes after reaching a minimum of 10-20 req/sec per server.
I have to launch a lot of servers so that each server gets only 1-5 requests/second and does not turn red.
In my logs I saw the following errors:
Exception=MySql.Data.MySqlClient.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.
---> MySql.Data.MySqlClient.MySqlException (0x80004005): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
These errors brought me to the topic of connection pooling, so I switched
using MySql.Data.MySqlClient;
to
using MySqlConnector;
Now these errors no longer come up, but the problem remains.
The monitoring features of EB and RDS do not show any obvious problems. Running queries against the database in MySQL Workbench is as fast as usual.
At the moment, my database calls from the server are synchronous and do not use the async feature of MySqlConnector.
Can the m5a.large really not process more than 5 requests/second?
Kind Regards
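As a side note on the synchronous calls: with a fixed budget of worker threads, blocking DB calls cap throughput at roughly workers ÷ per-call latency, regardless of instance size. A toy Python sketch (not the .NET stack; the worker count and latency are purely illustrative) of that ceiling:

```python
# Toy model: 5 workers, each "DB call" blocks its thread for 50 ms, so
# throughput tops out near workers / latency no matter how fast the CPU is.
import concurrent.futures
import time

WORKERS = 5
CALL_LATENCY = 0.05  # seconds each blocking call holds its thread

def blocking_db_call(_):
    time.sleep(CALL_LATENCY)

start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as ex:
    list(ex.map(blocking_db_call, range(50)))
elapsed = time.monotonic() - start

# 50 calls / 5 workers = 10 sequential waves of ~50 ms each, ~0.5 s total.
print(f"{elapsed:.2f}s")
```

If the real request handlers hold a thread for the full duration of each query, moving to the async API (or raising the thread limits) would lift that ceiling, whereas adding CPU would not.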

How to close connection to a remote MySQL database?

I was testing my Spring Boot app, which connects to a remote MySQL database. I was also using MySQL Workbench to view the tables. When I then tried to run my app, it gave the following error message:
Data source rejected establishment of connection, message from server: "Too many connections"
I have tried restarting my PC, but it still gives the same error. How can I solve it? I believe the previous connections were not properly closed. What can I do now?
The connections are closed automatically (or returned to the connection pool) if you are using a Spring Data repository or JdbcTemplate. Your application may genuinely need more connections than your database limit allows; in that case you should check your database configuration. You can also check your connection properties in application.properties (pool size, idle time, timeout). Please add more details, such as code or configuration.
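If the pool settings do turn out to be the issue, the usual knobs in Spring Boot 2.x (which defaults to HikariCP) live in application.properties. A sketch with illustrative values, to be tuned against the database's max_connections rather than copied as-is:

```properties
# Illustrative values -- size the pool against the server's max_connections.
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2
spring.datasource.hikari.idle-timeout=60000
spring.datasource.hikari.connection-timeout=30000
# Keep pooled connections younger than MySQL's wait_timeout.
spring.datasource.hikari.max-lifetime=540000
```

Comparing the pool size against `SHOW STATUS LIKE 'Threads_connected'` on the server side would show whether the app, or something else, is holding the connections.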

Unexpected MySql PoolExhaustedException

I am running two EC2 instances on AWS to serve my application, one instance per application. Each application can open up to 100 connections to MySQL by default. For the database I use RDS with a t2.medium instance, which can handle 312 connections at a time.
In general, my connection count does not get larger than 20. When I start sending notifications to users to come to the application, the connection count increases a lot (which is expected). In some cases, the MySQL connection count increases unexpectedly and my application starts to throw PoolExhaustedException:
PoolExhaustedException: [pool-7-thread-92] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:100; busy:100; idle:0; lastwait:30000].
When I check the database connections from Navicat, I see that there are about 200 connections and all of them are sleeping. I do not understand why the open connections are not being used. I use standard Spring Data JPA to save and read my entities, which means I do not open or close connections manually.
Unless I shut down one of the instances so that the MySQL connections are released, neither instance responds at all.
You can see the graph of the MySQL connection count here, and a piece of the log here.
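The PoolExhaustedException message format suggests the Tomcat JDBC pool (the Spring Boot 1.x default). One possibility, offered as a hypothesis rather than a diagnosis: connections are being checked out and never returned (a leak), so the server shows them sleeping while the pool reports busy:100. If that is the case, the pool's abandoned-connection tracking can reclaim and log them. A sketch using Spring Boot's spring.datasource.tomcat.* binding, with illustrative values:

```properties
# Reclaim connections held longer than 60 s and log the offending stack trace.
spring.datasource.tomcat.remove-abandoned=true
spring.datasource.tomcat.remove-abandoned-timeout=60
spring.datasource.tomcat.log-abandoned=true
# Validate connections on checkout so stale ones are dropped, not reused.
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
```

With log-abandoned enabled, the pool prints the stack trace of whatever code borrowed each leaked connection, which usually points at the query or transaction that never completes.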

Google cloud_sql_proxy unable to connect to instance, stream error, protocol_error

I've been successfully using the Google cloud_sql_proxy on multiple Compute Engine instances for some time, until today, one instance at a time, the proxy started to show the following error pattern:
2017/05/30 13:28:07 New connection for "project-id-1234:us-central1:sql_instance"
2017/05/30 13:28:07 couldn't connect to "project-id-1234:us-central1:sql_instance": Post https://www.googleapis.com/sql/v1beta4/projects/project-id-1234/instances/sql_instance/createEphemeral?alt=json: stream error: stream ID 1; PROTOCOL_ERROR
2017/05/30 13:28:41 New connection for "project-id-1234:us-central1:sql_instance"
2017/05/30 13:28:41 Thottling refreshCfg(project-id-1234:us-central1:sql_instance): it was only called 33.490705951s ago
2017/05/30 13:28:41 couldn't connect to "project-id-1234:us-central1:sql_instance": Post https://www.googleapis.com/sql/v1beta4/projects/project-id-1234/instances/sql_instance/createEphemeral?alt=json: stream error: stream ID 1; PROTOCOL_ERROR
When trying to connect directly to MySQL (while using the proxy), I get error 2013 (HY000):
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0 "Internal error/check (Not system error)"
What I've tried
Restarting the cloud_sql_proxy yielded a temporary fix, until finally both my Compute Engine instances were unable to connect to my Cloud SQL instance and the proxies showed only this result.
Restarting the Cloud SQL instance and both Compute Engine instances.
Eliminating the proxy: I added the appropriate networks to my SQL instance's Authorized Networks and updated all applications to use the public IP. This restored functionality to my production apps, but now I'm using a public connection instead of a local/proxied one.
Some research
I came across a similar issue relating to Google Cloud SQL that yielded the same MySQL error above, but it appears to have only affected connections to Cloud SQL from external, non-GCE/GKE networks.
A few others have reported the same issue also started for them this morning on the Google Cloud SQL Discuss group.
My team started seeing the same issue today, on GKE-managed servers. Same as you saw: restarting the servers and the DB did nothing.
We tried updating the version of the Google Cloud SQL Proxy we were using from v1.05 to v1.09, and the problem went away (for now).
I know that's not much of an explanation, but give it a try and see if it helps you.