Closing a DB connection after an API request ends (Laravel, RDS MySQL database)

We are working on an application that is used at large scale.
The app is hosted on AWS (Ubuntu API instances with an RDS database).
When we load test the API with JMeter, we hit problems at 10K users: a few requests fail with gateway timeouts.
After debugging the requests and servers, we found that the servers reach only about 60% CPU, but the database connection limit is being exhausted.
We are now looking at raising the database connection limit, but doing so pushes the DB CPU to around 90%, and the database will not handle all incoming requests efficiently.
So we are considering disconnecting from the database after each API request completes.
For example:
API: Get User
Once the API is hit and the result has been returned, the request's database connection is disconnected.
So in this case connections keep being closed as soon as each response is served.
Is closing connections like this a good practice?

Related

Does RDS Proxy affect current application-side pooling?

I have a SaaS application on AWS ECS and databases on AWS RDS. We are planning to implement AWS RDS Proxy for connection pooling. From the RDS Proxy documentation, I saw that we don't need to make any changes to the application code. Currently we are using application-side connection pooling. When we put RDS Proxy in place for pooling, does the current pooling have any impact?
Do we need to remove the application-side pooling to work with RDS Proxy effectively?
My main concern is: if I choose 100% pooling in RDS Proxy, and the application pooling configuration is limited to, say, 100 max connections, will that be a bottleneck?
TLDR: keep the connection pool in your application, and size it to the number of connections required by that one instance of your application (e.g. the ECS task or EKS pod).
With a database proxy in the middle, there are two separate legs to a "connection":
First, there is a connection from the application to the proxy. What you called the "application side pooling" is this type of connection. Since there's still overhead associated with creating a new instance of this type of connection, continuing to use a connection pool in your application probably is a good idea.
Second, there is a connection from the proxy to the database. These connections are managed by the proxy. The number of connections of this type is controlled by a proxy configuration. If you set this configuration to 100%, then you're allowing the proxy to use up to the database's max_connections value, and other clients may be starved for connections.
So, when your application wants to use a connection, it needs to get a connection from its local pool. Then, the proxy needs to pair that with a connection to the database. The proxy will reuse connections to the database where possible (this technique also is called multiplexing). Or, quoting the official docs: "You can open many simultaneous connections to the proxy, and the proxy keeps a smaller number of connections open to the DB instance or cluster. Doing so further minimizes the memory overhead for connections on the database server. This technique also reduces the chance of "too many connections" errors."
As your container orchestrator (e.g. ECS or EKS) scales your application horizontally, your application will open/close connections to the proxy, but the proxy will prevent your database from becoming overwhelmed by these changes.
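To illustrate the first leg, here is a minimal sketch of an application-side pool pointed at a proxy endpoint, using SQLAlchemy; the endpoint, credentials, and pool numbers are made-up placeholders, not values from the question:

from sqlalchemy import create_engine, text

# One engine, and therefore one pool, per application instance (ECS task / EKS pod).
# The endpoint and credentials below are hypothetical placeholders.
engine = create_engine(
    "mysql+pymysql://app_user:app_pass@my-proxy.proxy-abc123.us-east-1.rds.amazonaws.com:3306/appdb",
    pool_size=10,        # steady-state connections this one instance keeps to the proxy
    max_overflow=5,      # temporary extra connections for short bursts
    pool_pre_ping=True,  # validate a pooled connection before handing it out
    pool_recycle=1800,   # retire connections before idle timeouts close them
)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))  # the connection returns to the local pool on exit

The proxy then decides, independently of these settings, how many of the application-held connections map onto real connections to the database.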

Django ERR_EMPTY_RESPONSE

I am currently running a Django site on EC2. The site sends a CSV back to the client. The CSV varies in size: if it is small, the site works fine and the client is able to download the file. However, if the file gets large, I get an ERR_EMPTY_RESPONSE. I am guessing this is because the connection is aborted without giving the process adequate time to run fully. Is there a way to increase this time span?
Here's what my site is returning to the client.
from django.http import HttpResponse

def lineups_csv(request):  # view wrapper assumed; the original snippet was the body only
    with open('/home/ubuntu/Fantasy-Fire/website/optimizer/lineups.csv') as myfile:
        response = HttpResponse(myfile.read(), content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename=lineups.csv'
        return response
Is there some other argument that can allow me to ignore this error and keep generating the file even if it takes a while or is large?
I believe you have some sort of backend proxy server that resets the connection to the Django backend and returns ERR_EMPTY_RESPONSE in this case. You should reconfigure the timeouts on your backend proxy. Usually that is nginx or Apache acting as a reverse proxy server.
What is Reverse Proxy Server
A reverse proxy server is an intermediate connection point positioned at a network’s edge. It receives initial HTTP connection requests, acting like the actual endpoint.
Essentially your network’s traffic cop, the reverse proxy serves as a gateway between users and your application origin server. In so doing it handles all policy management and traffic routing.
A reverse proxy operates by:
Receiving a user connection request
Completing a TCP three-way handshake, terminating the initial connection
Connecting with the origin server and forwarding the original request
More info at https://www.imperva.com/learn/performance/reverse-proxy/
One more possible cause: your reverse proxy server doesn't have enough free disk space to buffer the response from Django and aborts the request. You can also check the free space on your reverse proxy/balancer.
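For nginx in particular, the knob that usually matters here is proxy_read_timeout, which defaults to 60 seconds; an illustrative sketch, assuming gunicorn listens locally on port 8000:

location / {
    proxy_pass http://127.0.0.1:8000;   # the Django/gunicorn upstream (address assumed)
    proxy_read_timeout 120s;            # how long nginx waits for the upstream; default 60s
    proxy_connect_timeout 10s;
}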
Within gunicorn, there is an argument for the timeout, -t. When you run gunicorn, the default timeout is 30 seconds. Increase that to something you're comfortable with, like 90 or 120 seconds, whatever you think fits your application.
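The same setting can also live in gunicorn's Python configuration file instead of on the command line; a minimal sketch:

# gunicorn.conf.py -- read automatically by gunicorn at startup
timeout = 120  # seconds a worker may stay silent before being killed and restarted (default 30)

Independent of any timeout, streaming the CSV rather than buffering it in memory gets bytes to the client sooner, which can avoid the empty response altogether. A sketch using Django's built-in FileResponse, reusing the file path from the question (the view name is assumed):

from django.http import FileResponse

def lineups_csv(request):  # hypothetical view name
    return FileResponse(
        open('/home/ubuntu/Fantasy-Fire/website/optimizer/lineups.csv', 'rb'),
        as_attachment=True,   # sets the Content-Disposition: attachment header
        filename='lineups.csv',
    )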

What is a good configuration for a distributed Spring Boot system with 36 downloaders through SSH tunnels

I've created a Java Spring Boot application that launches 36 downloader droplets on DigitalOcean, which open SSH tunnels to a CPU-Optimized database droplet and download from an API into the database.
I've configured HikariCP as follows, leaning towards fewer pooled connections, on the assumption that the database may have trouble with too many and that they might not be required.
# largest number of connections this instance's pool will hold
spring.datasource.hikari.maximumPoolSize=5
# ms a thread waits for a connection from the pool before erroring (HikariCP default: 30000)
spring.datasource.hikari.connectionTimeout=200000
# ms a connection may live before the pool retires it (default: 1800000)
spring.datasource.hikari.maxLifetime=1800000
# ms allowed for a connection validation check (default: 5000)
spring.datasource.hikari.validationTimeout=100000
I'm wondering whether those settings are recommended, and why. I've reduced maximumPoolSize to 5, but I haven't found much information on whether that is considered too small for a Spring Boot application to run effectively.
Given that each downloader stores data in the database sequentially, do I need more than a few pooled connections on each downloader?
I've configured max_connections in MySQL to 250 and the maximum SSH connections on the database server to 200. I note that 114 sshd processes are created on the server. Can a server handle that many SSH tunnel connections?
Do you foresee any problems with this kind of distributed setup with Spring Boot? One thing I had to do before adjusting to these settings was to place retry logic around each database connection to prevent disconnection errors.
Thanks
Conteh

Redirect all MySQL requests to another DB server

I have a staging server on AWS where my web application is running. The application uses a dedicated database server (MySQL/Linux) from another provider. I would like to spin up a new server on AWS that acts as a proxy, connecting to my dedicated database server.
Please advise how I can achieve this.
You can proxy the traffic with HAProxy. You can have one DB in active mode and one in passive mode; when you are ready to cut over, you take the active one offline and HAProxy will start sending requests to the other DB server.
Additionally, HAProxy allows you to route traffic to particular DB servers based on a variety of criteria, such as the source IP, so some web apps send to one DB and others send to another.
HAProxy is very lightweight; we use it and run hundreds of thousands of requests a day without any performance issues.
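A minimal sketch of the active/passive setup described above (server names and addresses are hypothetical); the backup keyword is what keeps the passive server idle until the active one fails its health checks:

listen mysql
    bind *:3306
    mode tcp                                   # MySQL traffic is plain TCP, not HTTP
    option tcp-check
    server db-active  203.0.113.10:3306 check
    server db-passive 203.0.113.11:3306 check backup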
Take a look at MaxScale from MariaDB. It is a DB proxy that can do all this and more:
https://mariadb.com/products/mariadb-maxscale

How to access MySQL databases at multiple sites without using a remote JDBC connection

The idea of accessing some 30 remote MySQL databases from my Java application over remote JDBC connections has been ruled out by the DBAs. No remote connection is permitted to the respective MySQL DBs; connections are only allowed from the local J2EE app servers within the LAN.
The alternatives suggested are:
1) Using a middleware messaging layer that invokes some Java on the remote side to provide the database results.
2) Using a web service at the remote MySQL database site, since these sites have J2EE app servers, to return the query results. However, I understand that a JDBC ResultSet cannot be serialized in a web service call and needs to be handled separately.
The requirement is that the web service call takes an SQL query and returns the result to the client. How can this be done efficiently?
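For illustration, the usual way around the non-serializable ResultSet is to copy each row into a plain structure and return JSON; in Java this would mean mapping the ResultSet to a List of Maps before serializing. A framework-neutral sketch of the same pattern in Python, using an in-memory SQLite database so it is self-contained (all names are hypothetical):

import json
import sqlite3

def query_as_json(conn, sql):
    cur = conn.execute(sql)
    columns = [desc[0] for desc in cur.description]     # column names from the cursor
    rows = [dict(zip(columns, row)) for row in cur.fetchall()]
    return json.dumps(rows)                             # the body the web service returns

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'pramod')")
print(query_as_json(conn, "SELECT id, name FROM users"))  # [{"id": 1, "name": "pramod"}]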
I am wondering what other options are available.
Regards
Pramod.