GCP internal load balancer idle timeout - google-compute-engine

Every time a client sends a request (say HTTP), the request is received by the load balancer (if one is set up), which forwards it to one of the instances. A connection is now established between Client->LB->Server, and it persists as long as the client keeps sending requests.
But if the client stops sending requests to the server for a period of time (more than the idle time), the load balancer stops the communication between the client and that particular server. If the client then sends a request again after some period of time, the load balancer should forward that request to some other instance.
What is idle time?
It is the period of time during which the client is not sending any requests to the load balancer. It generally ranges from 60 to 3600 seconds, depending on the cloud service provider.
Finally, my question:
Ideally, after the idle timeout the load balancer should terminate the existing connection, but this is not the case with GCP's internal load balancer (I have a PoC showing this). GCP's load balancer doesn't terminate the connection even after the idle timeout and keeps it open indefinitely. Is there any way to reconfigure the load balancer to avoid such an indefinitely long connection?
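If the load balancer itself cannot be made to drop idle flows, one possible workaround (my assumption, not a documented GCP setting) is to enforce the idle limit on the client side. A minimal Python sketch, using a hypothetical internal LB address and idle window:

import socket

IDLE_TIMEOUT = 600  # hypothetical idle window, in seconds

# Hypothetical internal load balancer frontend address and port
sock = socket.create_connection(("10.128.0.5", 8080))
sock.settimeout(IDLE_TIMEOUT)  # bound how long we will sit waiting on this connection

try:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: backend\r\nConnection: keep-alive\r\n\r\n")
    data = sock.recv(4096)
except socket.timeout:
    # Nothing arrived within the idle window, so give up the connection
    # instead of letting it sit open indefinitely.
    pass
finally:
    sock.close()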

Related

Closing a DB connection after API Request ends. Laravel RDS MySQL Database

We are working on an application that is being used at a large scale.
The app is hosted on AWS (Ubuntu API instances with an RDS database).
When we run a load test on the API with JMeter we hit issues at 10K users: a few requests come back with gateway timeout errors.
After debugging the requests and servers, we found that the servers only reach about 60% CPU, but the database connection limit is being hit.
We are now looking at raising the database connection limit, but doing so would push the DB CPU to around 90% and it would no longer serve all incoming requests efficiently.
So we are thinking of disconnecting from the database after each API request completes.
For example, take an API such as Get User: once the API is hit and the result is returned, the request's database connection is closed. In that case a connection would be closed for every request as soon as the response has been served.
Is closing the connection like this a good practice?
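To make the pattern concrete, here is a minimal Python sketch of the close-after-each-request approach described above. The question is about Laravel, so this only illustrates the mechanics; SQLite stands in for MySQL/RDS and get_user is a hypothetical query:

import sqlite3  # stand-in for the MySQL/RDS connection in the question

def get_user(user_id):
    # Open a fresh connection for this request only.
    conn = sqlite3.connect("app.db")  # placeholder database
    try:
        return conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
    finally:
        # Close as soon as the result is ready, so the connection is not
        # held for the lifetime of the worker process.
        conn.close()

The trade-off is connection setup cost on every request versus holding connection slots open; the sketch only shows the mechanics, not whether it is the right choice for your workload.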

When is a gunicorn worker ready to process incoming requests?

I have a long-running task in the post_worker_init hook that downloads metadata from a remote server, which blocks the worker for several seconds. I'm wondering what the behavior of the worker is if a request comes in during this time. Will the arbiter assign the request to the worker, or will the worker not receive requests while it is still running the post_worker_init callback?
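For reference, a minimal gunicorn.conf.py sketch of the setup described above, with a sleep standing in for the metadata download:

# gunicorn.conf.py
import time

def post_worker_init(worker):
    # Called after the worker process has initialized; the worker is
    # blocked here until the hook returns.
    worker.log.info("worker %s: downloading metadata...", worker.pid)
    time.sleep(5)  # placeholder for the remote metadata fetch
    worker.log.info("worker %s: metadata ready", worker.pid)

Run with something like gunicorn -c gunicorn.conf.py app:app (the app:app module path is hypothetical) and compare when each worker logs the second message against when it starts serving requests.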

Django ERR_EMPTY_RESPONSE

I am currently running a Django site on EC2. The site sends a CSV back to the client. The CSV varies in size. If it is small, the site works fine and the client is able to download the file. However, if the file gets large, I get an ERR_EMPTY_RESPONSE. I am guessing this is because the connection is aborted without giving the process adequate time to run fully. Is there a way to increase this time span?
Here's what my site is returning to the client.
with open('//home/ubuntu/Fantasy-Fire/website/optimizer/lineups.csv') as myfile:
    response = HttpResponse(myfile, content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename=lineups.csv'
    return response
Is there some other argument that can allow me to ignore this error and keep generating the file even if it takes a while or is large?
I believe you have some sort of proxy server in front of the Django backend that resets the connection and returns ERR_EMPTY_RESPONSE in this case. You should reconfigure the timeouts on that proxy; usually it is nginx or Apache acting as a reverse proxy server.
What is a Reverse Proxy Server?
A reverse proxy server is an intermediate connection point positioned at a network’s edge. It receives initial HTTP connection requests, acting like the actual endpoint.
Essentially your network’s traffic cop, the reverse proxy serves as a gateway between users and your application origin server. In so doing it handles all policy management and traffic routing.
A reverse proxy operates by:
Receiving a user connection request
Completing a TCP three-way handshake, terminating the initial connection
Connecting with the origin server and forwarding the original request
More info at https://www.imperva.com/learn/performance/reverse-proxy/
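As a rough illustration of those three steps, here is a toy reverse proxy in Python (a sketch only, not something to deploy; the origin address is a placeholder, e.g. a local gunicorn/Django server):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

ORIGIN = "http://127.0.0.1:8000"  # placeholder origin server

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Steps 1-2: HTTPServer has already accepted the client connection and
        # completed the TCP handshake by the time this handler runs.
        # Step 3: connect to the origin and forward the original request path.
        upstream = urlopen(Request(ORIGIN + self.path), timeout=30)
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Type", upstream.headers.get("Content-Type", "application/octet-stream"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Proxy).serve_forever()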
Another possible cause: your reverse proxy server doesn't have enough free disk space to buffer the response from Django and aborts the request. Check the free space on your reverse proxy as well.
Within gunicorn there is a timeout argument, -t. When you run gunicorn, the default timeout is 30 seconds. Increase that to something you're comfortable with, like 90 or 120 seconds, whatever you think fits your application.
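For example, the timeout can be raised on the command line or in a gunicorn config file; a minimal sketch (the WSGI module name is hypothetical):

# gunicorn.conf.py
timeout = 120   # seconds a worker may stay silent before it is killed and restarted (default 30)
workers = 3     # example value, tune for your instance

# Equivalent command line:
#   gunicorn --timeout 120 myproject.wsgi:application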

GCE HTTPS load balancer session affinity

I have an HTTPS load balancer configured with one backend service and 3 instance groups:
Endpoint protocol: HTTPS
Named port: https
Timeout: 600 seconds
Health check: ui-health2
Session affinity: Generated cookie
Affinity cookie TTL: 0 seconds
Cloud CDN: disabled
Instance group Zone Healthy Autoscaling Balancing mode Capacity
group-ui-normal us-central1-c 1 / 1 Off Max. CPU: 80% 100%
group-ui-large us-central1-c 2 / 2 Off Max. CPU: 90% 100%
group-ui-xlarge us-central1-c 2 / 2 Off Max. CPU: 80% 100%
Default host and path rules, SSL terminated.
The problem is that session affinity is not working properly and I have no idea why. Most of the time it seems to work, but occasionally a request is answered by a different instance even though it carries the same GCLB cookie. I reproduced this with an AJAX request every 5 seconds: 20+ requests go to instance A, then one request goes to instance B, then another 20+ requests go to A...
I looked at the LB logs and there is nothing strange (apart from the occasional odd response), and the CPU is low. Where can I find out whether an instance was "unhealthy" for 5 seconds?
The Apache logs show no errors for the health checks or the requests.
Maybe there is some strange interaction between the "Balancing mode" and the session affinity?
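For reference, the polling described above is roughly equivalent to this Python sketch (the URL is a placeholder, and X-Backend is a hypothetical header each instance would set to identify itself):

import time
import requests  # third-party HTTP client, used here for its cookie jar

session = requests.Session()  # keeps the GCLB affinity cookie between requests
for i in range(40):
    r = session.get("https://ui.example.com/ping", timeout=10)  # placeholder URL
    backend = r.headers.get("X-Backend", "unknown")  # hypothetical instance identifier
    print(i, session.cookies.get("GCLB"), backend)
    time.sleep(5)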
Load balancers are designed to handle a considerable volume of requests, and they balance that load quite effectively.
The issue here is that your load balancer does not receive many requests, so the change of just a single request can shift the load drastically, which gets in the way of the load balancer working as intended.

Is it possible to get the requests per second from an AWS load balancer?

Is it possible to get the number of requests sent to a load balancer in AWS?
I am trying to monitor the number of requests that our load balancers are receiving, for both the Classic Load Balancer (ELB) and the Application Load Balancer (ALB).
Is there a way to do this from the CLI or the JavaScript SDK?
Amazon CloudWatch has a RequestCount metric that measures "The number of requests received by the load balancer".
The Load Balancer can also generate Access Logs that provide detailed information about each request.
See:
CloudWatch Metrics for Your Classic Load Balancer
CloudWatch Metrics for Your Application Load Balancer
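The question asks about the CLI or the JavaScript SDK; as a rough Python (boto3) sketch of the same idea, RequestCount can be pulled from CloudWatch and divided by the period to approximate requests per second. The load balancer dimension value and region below are placeholders:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # example region

# Application Load Balancer; a Classic Load Balancer uses Namespace="AWS/ELB"
# and Dimensions=[{"Name": "LoadBalancerName", "Value": "my-classic-elb"}].
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,             # 5-minute buckets
    Statistics=["Sum"],     # total requests per bucket
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"] / 300.0, "req/s (approx.)")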