Max parallel HTTP requests in Chrome

Chrome currently has a limit of 6 connections per host name, and a maximum of 10 connections overall.
What happens when the number of HTTP requests exceeds these limits in Chrome?
Will the extra HTTP requests be queued, or will they fail?

If the connection limit is reached, further requests will wait until connections free up.
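The same queue-then-dispatch behavior can be reproduced outside the browser. As a minimal sketch (the URL is a placeholder), .NET's SocketsHttpHandler exposes an analogous per-host cap, and requests beyond the cap simply wait for a free connection instead of failing:

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Mirror Chrome's per-host limit of 6 connections.
var handler = new SocketsHttpHandler { MaxConnectionsPerServer = 6 };
using var client = new HttpClient(handler);

// Fire 20 requests at once; at most 6 connections are opened,
// and the remaining requests queue until a connection frees up.
var tasks = Enumerable.Range(0, 20)
                      .Select(_ => client.GetStringAsync("https://example.com/"));
await Task.WhenAll(tasks);
Console.WriteLine("All 20 requests completed; none failed because of the cap.");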

Related

Fast-rising number of sleeping MySQL connections on AWS RDS when calling in parallel from web server API

I am using ASP.NET Boilerplate and Entity Framework Core.
I have one Entity Framework Core query which is big:
var user = _userRepository.GetAll()
.Include(u => u.FavoriteMeals).ThenInclude(c => c.Category)
.Include(u => u.FavoriteRestaurants).ThenInclude(c => c.CategoryMaster)
.Include(u => u.FavoriteSpecialityMeals).ThenInclude(c => c.Speciality)
.Include(u => u.FavoriteSuperBookings).ThenInclude(c => c.Boooking)
.Include(u => u.FavouritePlaces).ThenInclude(c => c.Place)
.Include(u => u.Followers).ThenInclude(u => u.User1)
.Include(u => u.Followings).ThenInclude(u => u.User2)
.FirstOrDefault(r => r.Id == id);
I use AWS RDS MySQL, and I can see that the number of connections to the database rises to 58 when I call the API through Swagger.
I would like to know:
Is it normal to have around 60 connections to the database? I plan to connect this API to a mobile app.
Is the max-connections limit on AWS per mobile client or global? I mean, if two mobile clients are connected to my API and call the same API function, will they be blocked because the maximum number of connections was reached?
How can I optimize this?
Thanks,
///// EDIT
I have discovered that I have a lot of connections which are kept in a sleep state.
How can I fix this? My code is as below:
builder.UseMySql(connectionString);
//todo
builder.EnableSensitiveDataLogging(true);
builder.EnableDetailedErrors(true);
Please find below my query results showing the connections in the sleep state.
The reason many connections are open with a sleep status is that, by default, MySqlConnector (which is used by Pomelo) has connection pooling enabled (Pooling=true), with a maximum of 100 connections per connection string (MaxPoolSize=100). See MySQL .NET Connection String Options for all default settings.
So having around 60 open connections can easily happen when either 60 people use the app API in parallel, or, e.g., 3 different connection strings are used, each with 20 users in parallel.
Once those connections have been opened, they will effectively stay open for a long time by default.
With ConnectionLifeTime=0 as the default setting, they will never be explicitly closed by MySqlConnector's connection pooling management. However, they will always be closed after the number of seconds specified by the MySQL system variable wait_timeout. But since this variable is set to 28800 by default, it will take 8 hours before pooled connections, once created, are closed by MySQL (independent of how long they have been in a sleep state).
So, to lower the number of parallel connections, either disable Pooling (a radical method with some performance implications if the server is not hosted locally) or manage their lifetime through the MaxPoolSize and ConnectionLifeTime connection string parameters.
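For example (a minimal sketch based on the UseMySql call from the question; the connection string values are placeholders, not recommendations):

// Bound the pool instead of accepting the defaults of
// MaxPoolSize=100 and ConnectionLifeTime=0 described above.
var connectionString =
    "Server=db.example.com;Database=app;Uid=appuser;Pwd=secret;" +
    "MaxPoolSize=20;" +        // keep at most 20 pooled connections
    "ConnectionLifeTime=300";  // recycle connections older than 5 minutes

builder.UseMySql(connectionString);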
In the screenshot you provided, you can see that the host varies in its address and port, but these are the address and outgoing port of the connecting client.
However, what I have written above also applies here. So if you are connecting from multiple web servers (clients from the viewpoint of MySQL) to your database server, by default each web server will manage its own connection pool and keep up to 100 connections open.
For example, if you have a load balancer setup, and multiple web servers behind it that may each execute database queries, or you run your API with some server-less technology like AWS Lambda, where your API might be hosted on many different web servers, then you might end up with multiple connection pools (one for each connection string on each web server).
My issue is that this happens with only one user connected. I am using Task.WhenAll() in my app, which launches several API requests in parallel, and I also use asynchronous methods in my API, which create some new threads.
If your client sends multiple web requests (e.g. REST) to your API hosted on multiple web servers/AWS Lambda, then MySQL will need at least that many connections opened at the same time.
The MySqlConnector connection pooling will keep those connections open by default, after they were used (up until 100 connections per web server).
You will end up with the following number of sleeping database connections (assuming the connection string always stays the same on each web server):
number-of-parallel-requests-per-client * number-of-clients * number-of-webservers
However, the max. number of connections kept open will (by default) not be higher than:
number-of-webservers * 100
So if your app executes 20 requests in parallel, with each of them establishing a database connection from the API (web server) to the database server, then depending on your API hosting scenario, the following will happen:
3 Web Servers with Load Balancer: If you run your app often enough, the load balancer will distribute your requests to all 3 web servers over time. So each of the 3 web servers will keep around 20 connections open to your database server. You end up with 60 connections, as long as only one client executes these requests at a time. If a maximum of 4 clients run 20 requests in parallel, you will end up with 80 connections being kept open per web server over time, so a total of 3 * 80 = 240 sleeping database connections.
Serverless technology like AWS Lambda: The same as in the previous example applies to AWS Lambda, but the number of web servers is (in theory) infinite. So you might end up exceeding MySQL's max_connections setting pretty fast, if AWS decides to distribute the API calls to many different web servers.
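A quick sanity check of those numbers against the formulas above (all values taken from the load balancer example):

using System;

// Plugging the example numbers into the formulas above.
int parallelRequestsPerClient = 20;
int clients = 4;
int webServers = 3;
int maxPoolSizePerServer = 100; // MySqlConnector default MaxPoolSize

int demand = parallelRequestsPerClient * clients * webServers; // 240
int cap = webServers * maxPoolSizePerServer;                   // 300
int sleeping = Math.Min(demand, cap);                          // 240, as described
Console.WriteLine($"Expected sleeping connections over time: {sleeping}");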
Advice on how to configure connection pooling
If you run your API on a fixed number of web servers, you might want to keep Pooling=true and set MaxPoolSize to a value such that number-of-parallel-requests-per-client * number-of-webservers will always be lower than max_connections, but, if possible, higher than typical-number-of-parallel-clients * number-of-parallel-requests-per-client. If that is not plausible or possible, consider setting Pooling=false, or set MaxPoolSize to a number not higher than max_connections / number-of-webservers.
If you run your API on a serverless technology like AWS Lambda, either set Pooling=false or set MaxPoolSize to some low value and ConnectionLifeTime to some low value greater than zero. I would just set Pooling=false, because it is way easier to disable connection pooling than to effectively fine tune connection pooling for a serverless environment.
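As a sketch, the two recommendations might translate into connection strings like these (host, credentials, and numbers are placeholders):

// Fixed fleet of web servers (e.g. 3 servers, MySQL max_connections = 300):
// keep pooling, but cap each pool at max_connections / number-of-webservers.
const string fixedFleetConnectionString =
    "Server=db.example.com;Database=app;Uid=appuser;Pwd=secret;" +
    "Pooling=true;MaxPoolSize=100";

// Serverless (e.g. AWS Lambda): disable pooling entirely, as suggested above.
const string serverlessConnectionString =
    "Server=db.example.com;Database=app;Uid=appuser;Pwd=secret;" +
    "Pooling=false";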

What happens to the new queries when the connection pool is exhausted?

I am developing a back-end with Node.js and MySQL. Sometimes there is a huge number of queries to run against the database (> 50,000), and I'm using connection pooling. My question is: what happens to a query after it is rejected because the pool is exhausted? Will it be queued until a connection becomes available and then executed, or will it simply never be executed?
There are indeed similar questions, but the answers didn't address my point; they just recommended increasing the pool size limit.
Maybe.
There are options you can set to modify the behavior. See https://github.com/mysqljs/mysql#pool-options
The request may wait in a queue for a free connection, or not. This is controlled by the waitForConnections option. If you set this option to false, the request returns an error immediately instead of waiting.
If more than queueLimit requests are already waiting, the new request returns an error immediately. The default value is 0, which means there is no limit to the queue.
The request will wait for a maximum of acquireTimeout milliseconds; if it still hasn't gotten a free connection by then, it returns an error.
P.S.: I don't use Node.js, I just read this in the documentation.
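Those fail-fast and timeout behaviors aren't Node.js-specific; as a sketch, the same semantics can be modeled with a counted semaphore in any language (C# here, with illustrative numbers):

using System;
using System.Threading;
using System.Threading.Tasks;

// A pool of 10 connections, modeled as a counted semaphore.
var pool = new SemaphoreSlim(10);

// waitForConnections=false: fail immediately if no slot is free.
if (!pool.Wait(0))
    throw new InvalidOperationException("Pool exhausted and waiting is disabled.");
pool.Release();

// acquireTimeout: wait up to 10 seconds for a slot, then give up with an error.
if (!await pool.WaitAsync(TimeSpan.FromSeconds(10)))
    throw new TimeoutException("No free connection within acquireTimeout.");
pool.Release();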

Understanding odbc_pool_size in ejabberd.yml

In ejabberd.yml we have the following lines:
##
## Number of connections to open to the database for each virtual host
##
## odbc_pool_size: 10
We are running a MySQL-enabled ejabberd server. The MySQL server connection limit is 300.
After doing research online (on the very limited documentation available), it seems that increasing odbc_pool_size from the default 10 mainly affects (decreases) the time it takes clients to connect to the server. We have an average of ~1500 users online at any given time.
My question: what exact purpose does the odbc_pool_size variable serve? How will increasing the pool size affect server connect time / latency?
UPDATE
Ejabberd server stats:
8 GB RAM
Dual core
~2000 users (peak hours)
average CPU utilization 13.5%
MySQL server stats:
max supported simultaneous connections: 300
write IOPS (QPS): 23.1/sec
read IOPS: 1/sec
Memory usage: 2.5/15 GB
In your opinion, what would be a good odbc_pool_size for the above configuration? (I was thinking of something around 50.)
Like any pool, its size decides the number of requests that can be processed in parallel. If your pool size is 10, only 10 requests can be processed in parallel and the others are queued. That means if you have 100 users trying to connect at the same time, the last one to be processed will have to wait for 10 batches of queries to be processed, thus increasing latency.
Increasing the pool size can help with latency, up to the point where the database cannot cope with more parallelism and global performance decreases. A good value depends on your database sizing, your use case, and your overall architecture.
You need to perform benchmarks and experiment to adapt the sizing to your own case, as it really depends on actual traffic patterns.
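As a toy illustration of that batching effect (pool size and query duration are made-up numbers), a pool can be modeled as a semaphore:

using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Model a pool of 10 connections (odbc_pool_size) serving 100 users at once.
var pool = new SemaphoreSlim(10);
var clock = Stopwatch.StartNew();

var requests = Enumerable.Range(1, 100).Select(async i =>
{
    await pool.WaitAsync();        // queue here while all 10 slots are busy
    try { await Task.Delay(50); }  // pretend each query takes 50 ms
    finally { pool.Release(); }
});
await Task.WhenAll(requests);

// 100 requests through 10 slots run in ~10 batches, so the last request
// waits roughly 9 * 50 ms before it even starts.
Console.WriteLine($"Total elapsed: {clock.ElapsedMilliseconds} ms");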

Web browsers assume that my HTTP server is prepared to accept many connections

I'm developing a web server and application on a microcontroller where resources (especially RAM) are very limited. When I point Chrome or Firefox to the web page hosted by my embedded web server, it attempts to establish a total of 6 concurrent TCP connections. First it opens one and loads the main HTML, then it attempts to open 5 more for loading various resources.
My server only has resources to handle 3 concurrent connections. Currently the device is programmed to refuse further connections by sending an RST packet in response to the SYN packets. So the first 3 SYN packets get a normal SYN-ACK reply and HTTP traffic starts, the latter 3 get an RST.
Both Chrome and Firefox seem to decide that the RST responses are fatal and abandon loading certain resources.
If the device does not send these RST responses (just forgets about the SYNs), Chrome loads the page fine. But I don't like the zombie connection attempts on the client.
Should browsers really be assuming the RST responses to connection attempts are fatal? I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
What is the best solution, practically? Keep in mind that perhaps I would like to support multiple web clients with for example 4 connections in total, and if the first client grabs all 4, there are none left for the second client.
Note that for my application there is zero benefit to having parallel connections. Why must I support so many connections just because the client thinks it will be faster? Even if I manage to support 6 now, what happens when the browser vendors decide to increase the default and break my application?
EDIT: I see the same issue with Firefox as well, not just Chrome.
Indeed modern browsers will try to use 6 connections, in some cases even 8. You have one of two options:
Just ACK but take your time replying
Use JavaScript to load your resources one by one
I am assuming here that you can't increase the concurrent capacity of the server (being a small device) or radically change the appearance of the page.
Option #2 removes most of the resources from the page and instead has JavaScript programmatically request every resource and add it to the page via the DOM. This might be a serious rework of the page.
I should also mention that you can inline images (the image bitmap is just a string in the page) to prevent the (mostly) parallel fetching of images done by modern browsers.
I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
The server is allowed to close the connection after the first response has been sent, i.e. it may ignore the client's wish to keep the connection open. The server is not allowed to close the connection before or while the first request is being handled.
What is the best solution, practically?
Don't use too many resources that need to be retrieved in separate requests. Use data URLs and similar techniques. Or increase your listen queue to accept more than 3 TCP connections at the same time.
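On a full OS, the listen-queue part of that advice might look like the sketch below; a microcontroller TCP/IP stack will expose a different API, but the backlog parameter plays the same role:

using System.Net;
using System.Net.Sockets;

var listener = new TcpListener(IPAddress.Any, 80);

// A backlog lets additional SYNs wait in the OS accept queue instead of
// being answered with RST, even if the application only ever services
// 3 connections at a time.
listener.Start(backlog: 8);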

Delete Network processes hung up

A delete network request hung up and it's not stopping. It's causing my rate limit to be exceeded, and I can't even see the list of operations.
Finally, the hung API requests stopped after 550k requests, at about 150 requests per second. Not sure if this is a bug.