Persistent connection not working at all

I am making a webserver with scapy, which is going pretty well. However, it's a pain in the butt for scapy to maintain different connections at the same time. So I want the client to make a persistent connection with the webserver, which serves an HTML page with an image.
I have the client successfully initiating a TCP handshake and obtaining the HTML page; however, it opens a new connection to download the image, which I do not want.
I understand that in HTTP/1.1 it is not necessary to send the Connection: keep-alive header, as persistent connections are the default. How come Chrome and Firefox still open more connections to download separate files?
I am not sending a Connection: close header whatsoever, so I think it's weird that they do not maintain the same connection for all files on the webpage.
EDIT: I tried using the actual Keep-Alive: timeout=n, max=n header. Still no result.
What could be the problem? Feel free to ask for details!

Persistent connections do not forbid parallel connections; they only allow the same connection to be re-used for further requests. But within a single persistent connection you can only make requests one after the other. This means that to get lots of resources it is usually faster to open multiple connections in parallel and use each of them to fetch multiple resources: for example, using 4 connections in parallel to get 12 images (3 images per connection) is faster than getting all 12 images one after the other over a single connection.
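
To illustrate, here is a minimal sketch in Python (using the standard http.client module; the host and image paths are made up) of several sequential requests re-using one persistent connection:

    import http.client

    # One TCP connection, re-used for several requests in sequence
    # (HTTP/1.1 connections are persistent by default).
    conn = http.client.HTTPConnection("example.com")

    for path in ("/", "/image1.png", "/image2.png"):
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read()  # the body must be drained before the next request
        print(path, resp.status, len(body), "bytes")

    conn.close()

Note that the loop cannot issue the second request until the first response has been fully read; that serialization is exactly why browsers open parallel connections instead.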

Related

How to reduce ping?

I've created a browser game.
I use
Server side: Node.js
Client: HTML, JS
Cloudflare
Servers located: US, Europe
When I play the game using a European server, my ping is about 40 ms, but sometimes it spikes to 700-1000 ms. How can I solve this? Should I change hosting? (Currently I use a $5 DigitalOcean droplet.)
The game is http://sigmally.com/
Well, first you need to identify what the issue is.
Check that your internet connection is not the problem. Use an alternative internet connection; avoid VPNs for this check, as they can slow your connection down.
Make sure you have adequate system resources to deal with the number of server requests.
For example, an Apache server will handle up to around 10,000 clients at any one time.
Make sure you do not have an excessive number of server requests.
Make sure that you have adequate processing power for the server. Check what percentage of processing power you are using when you get long ping times.
Make sure that you have adequate RAM for the server. Check what percentage of RAM you are using when you get long ping times.
Make sure that the files you are requesting from the server are not located in a directory with thousands of other files, as it will take the server longer to locate the file and serve it back to you.
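
A quick way to separate network problems from server problems is to time bare TCP handshakes to the server over a few minutes; if the spikes show up there too, the issue is on the network path rather than in your Node.js code. A rough Python sketch (the host and port are placeholders; point them at your game server):

    import socket
    import time

    HOST, PORT = "sigmally.com", 443  # placeholder: use your game server

    for _ in range(60):
        start = time.monotonic()
        try:
            # Time a bare TCP handshake; roughly comparable to a ping.
            with socket.create_connection((HOST, PORT), timeout=5):
                pass
            print(f"connect: {(time.monotonic() - start) * 1000:.1f} ms")
        except OSError as exc:
            print(f"connect failed: {exc}")
        time.sleep(1)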

Web browsers assume that my HTTP server is prepared to accept many connections

I'm developing a web server and application on a microcontroller where resources (especially RAM) are very limited. When I point Chrome or Firefox to the web page hosted by my embedded web server, it attempts to establish a total of 6 concurrent TCP connections. First it opens one and loads the main HTML, then it attempts to open 5 more for loading various resources.
My server only has resources to handle 3 concurrent connections. Currently the device is programmed to refuse further connections by sending an RST packet in response to the SYN packets. So the first 3 SYN packets get a normal SYN-ACK reply and HTTP traffic starts, the latter 3 get an RST.
Both Chrome and Firefox seem to decide that the RST responses are fatal and abandon loading certain resources.
If the device does not send these RST responses (just forgets about the SYNs), Chrome loads the page fine. But I don't like the zombie connection attempts on the client.
Should browsers really be assuming the RST responses to connection attempts are fatal? I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
What is the best solution, practically? Keep in mind that I would perhaps like to support multiple web clients with, for example, 4 connections in total; if the first client grabs all 4, there are none left for the second client.
Note that for my application there is zero benefit to having parallel connections. Why must I support so many connections just because the client thinks it will be faster? Even if I manage to support 6 now, what happens when the browser vendors decide to increase the default and break my application?
EDIT: I see the same issue with Firefox as well, not just Chrome.
Indeed, modern browsers will try to use 6 connections, in some cases even 8. You have one of two options:
Just ACK but take your time replying
Use javascript to load your resources one-by-one
I am assuming here that you can't increase the concurrent capacity of the server (being a small device) or radically change the appearance of the page.
Option #2 removes most of the resources from the page and instead has JS programmatically request every resource and add it to the page via the DOM. This might be a serious rework of the page.
I should also mention that you can inline images (the image data is embedded as a base64 string in the page), which prevents the (mostly) parallel fetching of images done by modern browsers.
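
As a rough sketch of the inlining idea in Python (the file names here are made up), the image bytes are base64-encoded into a data URL, so the browser never opens an extra connection for the image:

    import base64

    # Embed the image directly in the HTML as a data URL, so the browser
    # does not open a separate connection to fetch it.
    with open("logo.png", "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")

    page = (
        "<html><body>"
        f'<img src="data:image/png;base64,{encoded}" alt="logo">'
        "</body></html>"
    )

    with open("index.html", "w") as f:
        f.write(page)

The trade-off is that an inlined image can no longer be cached separately from the page.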
I was under the impression that an HTTP server is allowed to close the connection at any time and the client should retry at least GET requests transparently.
The server is allowed to close the connection after the first response has been sent, i.e. it may ignore the client's wish to keep the connection open. The server is not allowed to close the connection before or while the first request is being handled.
What is the best solution, practically?
Don't use too many resources that need to be retrieved in separate requests; use data URLs and similar instead. Or increase your listen queue to accept more than 3 TCP connections at the same time.
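
If the server uses the BSD socket API, the listen queue is just the backlog argument to listen(). A hedged sketch in Python (the numbers are illustrative):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 80))
    # A backlog of 8 lets excess SYNs wait in the kernel's queue instead
    # of being refused while the 3 worker slots are busy.
    srv.listen(8)

Queued connections only cost the kernel a little memory for handshake state, not a full worker, which is why this can be much cheaper than truly handling 6 concurrent requests.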

Connections option in RDS MySQL and best way to handle many connections

The image below shows the current activity as 99 connections.
How exactly is this counted?
RDS is accessed through Node.js web services and a PHP website. Every time I do some operations I close the connection, yet the count never decreases after closing; it just keeps increasing. Later I got the "too many connections" error message once the count reached 608. After I restarted it worked, but I have never seen the count decrease.
So what is the best way to handle this?
Below is the image showing what I get when I run SHOW FULL PROCESSLIST;
PHP-based web pages that use a MySQL connection generally exit as soon as they're done rendering page content, so the connection gets closed whether you explicitly call a mysqli or PDO close method or not.
The same is not true of Node services, which run for a long time and can therefore easily leak resources. It's probable that you're opening connections, but not closing them, in your Node service, which would produce the sort of behavior you're seeing here. (This is an easy mistake to make, especially for those of us whose background is largely in more ephemeral PHP scripts.)
One good way to identify the problem is to connect to the MySQL instance via Workbench or the console monitor and issue SHOW FULL PROCESSLIST; to get a list of currently active connections, their originating hosts, and the queries (if any) they are executing. This may help you narrow down the source of the leaking connections, so that you can identify the code at fault and repair it.
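
If you'd rather script that check than use Workbench, a small sketch along these lines (using the third-party PyMySQL package; the endpoint and credentials are placeholders) dumps the active connections so you can spot the leaking host:

    import pymysql  # third-party: pip install pymysql

    conn = pymysql.connect(host="your-rds-endpoint", user="admin",
                           password="...", database="mysql")
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW FULL PROCESSLIST")
            for row in cur.fetchall():
                # columns: Id, User, Host, db, Command, Time, State, Info
                print(row)
    finally:
        conn.close()  # the close-in-finally habit is what prevents leaks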

WebSocket never closes, transferring multiple kinds of data

I have built a web app. I send and receive a lot of data using WebSockets, and each time I have to open and close a WebSocket connection.
Why not avoid the constant opening/closing? How about this: when the page loads, WebSockets are created and opened and never closed, so I use the same WebSockets to send and receive text, arrays, links, search queries, etc. I am even thinking about transferring files like images and/or videos via WebSockets.
Can I do this, or do I have to close a WS connection after I am done? Will a never-closing WS raise a security issue? Plus, I don't know whether the WS will actually close when the user leaves the page. If it does not, I guess that is another security issue right there.
How do I transfer files via WS? I cannot imagine how to do this.
Thanks in advance
WebSockets are meant to remain open for the lifetime of the webpage or SPA... it's totally expected, normal behavior.
The server might close the WebSocket at any time, and this is also normal behavior - just re-open the WebSocket.
Normally, servers will only close the WebSocket if it was idle for some time (e.g. Heroku sets the timeout limit to 50 seconds) or for traffic and concurrency considerations. Otherwise, the WebSocket connection could remain open indefinitely.
For example, the Plezi framework (Ruby) sends a pong frame automatically every once in a while, so the connection will remain open indefinitely unless the browser closes the socket (usually by exiting the page).
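
As for transferring files: a WebSocket message can simply be binary, so the same long-lived connection can carry text and file data. A minimal client-side sketch in Python (using the third-party websockets package; the URL and file name are placeholders):

    import asyncio
    import websockets  # third-party: pip install websockets

    async def main():
        # One long-lived connection used for text and binary alike.
        async with websockets.connect("ws://example.com/socket") as ws:
            await ws.send("a search query")      # sent as a text frame
            with open("photo.jpg", "rb") as f:
                await ws.send(f.read())          # bytes go as a binary frame
            reply = await ws.recv()
            print("server said:", reply)

    asyncio.run(main())

For large videos you would want to send the file in chunks rather than reading it into memory all at once.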

What is the difference between PING and HTTP HEAD?

I have a domain name to test.
Ping is ~20 ms.
HTTP HEAD is ~500 ms.
Why is there such a big difference between them? Is this a server-side problem? Isn't the difference too big? It's 25 times.
Well, for one, ping goes over a different protocol, ICMP. The server itself directly responds to pings. HTTP is a different protocol handled by an additional application, a web server, that must be running on the server (ping handling is built into the OS). Depending on how heavy the web server is, it could take significantly more time, relative to something like a ping. Also, HEAD is sent to a particular URL. If that URL is handled by something like ASP.NET instead of just the web server directly, then there's additional processing that must be done to return the response.
Ping is usually implemented as an ICMP echo request, a simpler datagram protocol: you send a packet, the server replies with the corresponding packet, and that's about it.
HTTP HEAD is still HTTP: a TCP connection must be established between both ends, and the HTTP server must reply with the headers for your request. That's obviously fast, but not as simple as sending a single packet in response.
If you're testing a domain, ping is a more adequate tool, while HTTP HEAD is a tool better suited to test an HTTP server.
If I'm not mistaken, a ping request is handled at the network driver level and is extremely fast as a result (sometimes it's handled by the hardware itself, skipping software processing altogether). It portrays network latency fairly well.
An HTTP HEAD request must go through the web server, which is a user-level program, and requires copying bits of data a couple of times, web server code to parse the request, and so on. The web server then has to generate the HTTP response headers for the request. Depending on the server and the requested page, this can take a while, as it has to generate the requested page anyway (it just sends you the headers, not the page content).
When you run ping, it responds much quicker because it is designed to respond immediately. It shows you approximate latency, so if you get consistent results using ping, you cannot get lower latency than that.
When you run HTTP HEAD you are actually making a request to a specific page; it is processed, executed, and rendered, and only the headers are returned. It has much more overhead compared to ping, which is why it takes much longer.
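
You can measure the gap yourself with a rough timing sketch like this one in Python (standard library plus the system ping command; example.com stands in for the domain under test):

    import http.client
    import subprocess
    import time

    HOST = "example.com"  # stand-in for the domain under test

    # ICMP ping via the system tool (Linux/macOS flags shown).
    subprocess.run(["ping", "-c", "4", HOST])

    # HTTP HEAD over a fresh TCP connection, timed end to end.
    start = time.monotonic()
    conn = http.client.HTTPConnection(HOST, timeout=10)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    conn.close()
    print(f"HEAD / -> {resp.status} in {(time.monotonic() - start) * 1000:.1f} ms")

Note that the HEAD timing includes the TCP handshake, so on its own it will always be at least one round trip slower than a single ping.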