Suppose there are many concurrent requests to the DB server, say 100 QPS, and the DB server has a connection limit of, say, 1000. If the requests are slow queries that eventually hit an inactivity timeout, what should I do to prevent the npm package mysql from creating new connections?
I ask because the npm package mysql removes a connection object from the pool on a fatal error such as an inactivity timeout, leaving room for a new connection to be created.
For high load, you should use connection pools with persistent connections. Those are usually available in high-level query builders and ORMs like knex and sequelize.
But if you don't want to use them, you can also try native pools.
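The underlying fix is to cap the pool and make callers queue for a free connection instead of letting the pool open replacements under load. Sketched here in Python with SQLAlchemy (which appears later in this thread), since the principle carries across languages; the URL is a placeholder:

from sqlalchemy import create_engine

# A bounded pool: at most 10 connections, never any overflow beyond that,
# and callers wait up to 30 s for a free connection instead of the pool
# opening new ones when all are busy.
engine = create_engine(
    'mysql+mysqldb://user:password@host/dbname',  # placeholder URL
    pool_size=10,     # hard cap on pooled connections
    max_overflow=0,   # no extra connections beyond the cap
    pool_timeout=30,  # queue up to 30 s for a free connection, then raise
)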
I have a Django REST Framework application in which we are using the SQLAlchemy library for the MySQL connection.
from sqlalchemy import create_engine

engine = create_engine('mysql+mysqldb://username:password@hostaddress/DBname',
                       pool_recycle=1800,
                       connect_args={'connect_timeout': 1800},
                       pool_size=10, max_overflow=10, pool_pre_ping=True)
connection = engine.connect()
As API usage increases, MySQL keeps creating new connections and the threads_connected count keeps growing; after reaching the maximum it throws a "Too many connections" error. In SHOW PROCESSLIST, many processes sit in sleep mode. If we restart the app, all the connections are reset. A chart of the number of connections vs. time shows this steady growth. How can I fix this issue?
You must close connections after you've finished using them; otherwise the connection stays open until the web server closes it, which can take a long time.
The best practice is to use a connection pool, because opening and closing connections is expensive and hurts performance. But even with a connection pool, you must release each connection once you're done with it, as in the sketch below.
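For example, with the engine from the question, a with block guarantees the connection goes back to the pool (a minimal sketch; the table name is hypothetical):

from sqlalchemy import text

def fetch_count(engine):
    # The with block returns the connection to the pool even if the query raises.
    with engine.connect() as conn:
        return conn.execute(text('SELECT COUNT(*) FROM mytable')).scalar()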
I am creating a .NET Core 2.1 MVC application and using Azure Database for MySQL 5.7.
I have read the links below, but they seem to apply to MS SQL databases.
https://learn.microsoft.com/en-us/azure/mysql/concepts-high-availability
https://learn.microsoft.com/en-us/azure/architecture/best-practices/retry-service-specific
Is transient-error handling not possible for MySQL? Please point me to similar MySQL-related pages.
A transient error, also known as a transient fault, is an error that will resolve itself. Most typically these errors manifest as a dropped connection to the database server, or as new connections to the server failing to open. Transient errors can occur, for example, when hardware or network failures happen.
Transient errors should be handled using retry logic. Situations that must be considered:
An error occurs when you try to open a connection
An idle connection is dropped on the server side; when you try to issue a command, it can't be executed
An active connection that currently is executing a command is dropped.
The first and second cases are fairly straightforward to handle: try to open the connection again. When you succeed, the transient error has been mitigated by the system and you can use your Azure Database for MySQL again. We recommend waiting before retrying the connection, and backing off if the initial retries fail, so the system can use all available resources to overcome the error situation. A good pattern to follow is:
Wait for 5 seconds before your first retry.
For each subsequent retry, increase the wait exponentially, up to 60 seconds.
Set a max number of retries at which point your application considers the operation failed.
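A minimal sketch of that backoff pattern; the question is about .NET, but the pattern is language-independent, so it is shown here in Python with TransientError standing in for whatever transient exception your driver raises:

import random
import time

class TransientError(Exception):
    """Stand-in for the driver-specific transient error you would catch."""

def run_with_retry(operation, max_retries=5, first_wait=5.0, max_wait=60.0):
    wait = first_wait  # wait 5 seconds before the first retry
    for attempt in range(max_retries):
        try:
            return operation()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # retries exhausted: consider the operation failed
            time.sleep(wait + random.uniform(0, 1))  # jitter avoids retry storms
            wait = min(wait * 2, max_wait)  # exponential backoff, capped at 60 s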
Read more here.
You can also read more on troubleshooting in Troubleshoot connection issues to Azure Database for MySQL here.
I have a Django App with a pretty standard server stack
DB Backend : MySQL
WSGI Server : Gunicorn
Async worker class : Gevent
I want Django to pool MySQL connections rather than creating connections on every request.
Starting with 1.6, Django has built-in persistent connections, but there are issues with async workers.
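(For reference, Django's built-in persistent connections are turned on via the CONN_MAX_AGE setting; a minimal settings.py sketch, with a hypothetical database name:)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',       # hypothetical
        'CONN_MAX_AGE': 60,   # reuse each connection for up to 60 s instead of per-request
    }
}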
Hence, either a different MySQL backend or app-level connection pooling is required. I've read several articles on this, some of them very old. Here are a few:
Django MySQL backends
django-mysqlpool
App level Connection pool
with SQL Alchemy
another with SQL Alchemy
Some Patches are also available
Django Patch
Some other approaches
MySQL DB Connector
I'm really confused as to which of these approaches is the best way to pool connections. Any help is highly appreciated.
This project still works on Django 1.9, and worked well for us.
https://github.com/djangonauts/djorm-ext-pool
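As I recall from the project's README (the option names here are from memory, so verify against the repo), usage is roughly:

# settings.py (sketch; names as I remember them from the README)
INSTALLED_APPS += ('djorm_pool',)

DJORM_POOL_OPTIONS = {
    'pool_size': 20,     # connections kept in the pool
    'max_overflow': 0,   # no extra connections beyond pool_size
    'recycle': 3600,     # recycle connections after an hour
}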
Your demand:
You want to pool MySQL connections rather than create a connection on every request.
My suggestions:
At the DB level: your application is IO-intensive, so a MySQL connection pool is the right proposal; you could use a third-party MySQL pool.
At the app level: don't pool connections in the app; instead, lean on a cache (Redis, for example) to reduce the number of connections. See the sketch after this answer.
At the web-server level: your WSGI server is lightweight and doesn't implement pooling; you could refactor it to reuse connections through a queue, or build an event queue on top of Gevent.
Hope this helps.
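A minimal cache-aside sketch with Redis; the key scheme and the db_fetch callable are hypothetical. Only cache misses touch MySQL, which is what cuts the connection count:

import json
import redis

cache = redis.Redis()  # assumes a Redis instance on localhost:6379

def get_user(user_id, db_fetch):
    """Serve repeated reads from Redis so they never need a MySQL connection."""
    key = 'user:%s' % user_id        # hypothetical key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)    # cache hit: MySQL untouched
    row = db_fetch(user_id)          # cache miss: the only path that hits MySQL
    cache.setex(key, 300, json.dumps(row))  # keep for 5 minutes
    return row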
I am using a Dropwizard server to serve HTTP requests. This Dropwizard application is backed by a MySQL server for data storage, but when left idle overnight it gives a 'broken pipe' exception.
I did a few things that I thought might help: I set the JDBC URL in the YAML file to 'autoConnect=true', added a 'checkOnBorrow' property, and increased the JVM heap to 4 GB.
None of these fixes worked.
Also, the wait_timeout and interactive_timeout for the MySQL server are set to 8 hours. Do these need to be higher or lower?
Also, is there a configuration property that can be set in the Dropwizard YAML file? In other words, how is connection pooling managed in Dropwizard?
The problem:
The MySQL server has a timeout configured after which it terminates all idle connections in the connection pool; in my case this was the default (8 hours). However, the database connection pool is unaware of the terminated connections. So when a new request comes in, a dead connection is handed out from the connection pool, which results in a 'Broken Pipe' exception.
Solution:
So to fix this, we need to get rid of the dead connections and make the pool check whether the connection it is about to hand out is dead. This can be achieved by setting the following in the .yml configuration:
checkOnReturn: true
checkWhileIdle: true
checkOnBorrow: true
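Conceptually, check-on-borrow validates a connection before handing it out and discards dead ones. A rough Python sketch of the mechanism (the pool, connection object, and error class are placeholders, not Dropwizard's actual internals):

import queue

class DeadConnection(Exception):
    """Placeholder for the error a validation check raises on a dead connection."""

def borrow(pool: queue.Queue):
    # Validate before handing out: connections killed by the server's
    # wait_timeout are dropped here instead of reaching the caller.
    while True:
        conn = pool.get()
        try:
            conn.ping()        # or run a cheap validation query like SELECT 1
            return conn        # alive: safe to hand out
        except DeadConnection:
            conn.close()       # dead: discard and try the next pooled connection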
I have a Java EE application running in GlassFish on EC2, with a MySQL database on Amazon RDS.
I am trying to configure the JDBC connection pool in order to minimize downtime in case of a database failover.
My current configuration isn't working correctly during a Multi-AZ failover, as the standby database instance appears to be available in a couple of minutes (according to the AWS console) while my GlassFish instance remains stuck for a long time (about 15 minutes) before resuming work.
The connection pool is configured like this:
asadmin create-jdbc-connection-pool --restype javax.sql.ConnectionPoolDataSource \
--datasourceclassname com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource \
--isconnectvalidatereq=true --validateatmostonceperiod=60 --validationmethod=auto-commit \
--property user=$DBUSER:password=$DBPASS:databaseName=$DBNAME:serverName=$DBHOST:port=$DBPORT \
MyPool
If I use a Single-AZ db.m1.small instance and reboot the database from the console, GlassFish will invalidate the broken connections, throw some exceptions and then reconnect as soon the database is available. In this setup I get less than 1 minute of downtime.
If I use a Multi-AZ db.m1.small instance and reboot with failover from the AWS console, I see no exception at all. The server halts completely, with all incoming requests timing out. After 15 minutes I finally get this:
Communication failure detected when attempting to perform read query outside of a transaction. Attempting to retry query. Error was: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.2.v20111125-r10461): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 940,715 milliseconds ago. The last packet sent successfully to the server was 935,598 milliseconds ago.
It appears as if each HTTP thread gets blocked on an invalid connection without getting an exception and so there's no chance to perform connection validation.
Downtime in the Multi-AZ case is always between 15-16 minutes, so it looks like a timeout of some sort but I was unable to change it.
Things I have tried without success:
connection leak timeout/reclaim
statement leak timeout/reclaim
statement timeout
using a different validation method
using MysqlDataSource instead of MysqlConnectionPoolDataSource
How can I set a timeout on stuck queries so that connections in the pool are reused, validated and replaced?
Or how can I let GlassFish detect a database failover?
As I commented before, it is because the sockets that are open and connected to the database don't realize the connection has been lost, so they stay connected until the OS socket timeout is triggered, which I've read is usually about 30 minutes.
To solve the issue you need to override the socket timeout in your JDBC connection string, or in the JNDI connection configuration/properties, by setting the socketTimeout parameter to a smaller value.
Keep in mind that any operation blocked for longer than the defined value will be killed, even if the connection is in use (I haven't been able to confirm this; it's what I read).
The other two parameters I mention in my comment are connectTimeout and autoReconnect.
Here's my JDBC Connection String:
jdbc:(...)&connectTimeout=15000&socketTimeout=60000&autoReconnect=true
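To make those two timeouts concrete, here is the same idea at the raw socket level in Python (hypothetical host; the JDBC driver does the equivalent internally):

import socket

# connectTimeout analogue: give up if the TCP connect takes more than 15 s.
sock = socket.create_connection(('db.example.com', 3306), timeout=15)

# socketTimeout analogue: a read that blocks for more than 60 s raises
# socket.timeout instead of hanging until the OS abandons the dead peer.
sock.settimeout(60)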
I also disabled Java's DNS cache by doing
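// Zero TTL on both the positive and negative DNS caches so a failover's new IP is picked up immediately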
java.security.Security.setProperty("networkaddress.cache.ttl" , "0");
java.security.Security.setProperty("networkaddress.cache.negative.ttl" , "0");
I do this because Java doesn't honor the TTLs, and when the failover takes place, the DNS name stays the same but the IP changes.
Since you are using an application server, the parameters that disable the DNS cache must be passed to the JVM when starting GlassFish with -Dnet, not set in the application itself.