I am working on a web application and I am using a polling approach to check whether any update is needed. These polling requests occur every 1 or 2 seconds. The response is 240 bytes if no update is needed (an empty response is returned in that case) and around 10 KB otherwise, which is the size of the content itself. My problem is that, since it returns at least 240 B roughly every second, is there a way to optimize this response by pushing the boundaries a bit further?
When I checked the contents of the response, I saw that only 50 bytes are essential for me (session id and status code). However, there is some information in the headers such as connection type, timeout and content type. These settings will be the same for every request of this type (i.e. it always requires the content type "text/html; charset=utf-8"). So, can I just assume these settings on the client side and prevent the server from sending these headers?
I am using Django on the server side and jQuery for sending the Ajax requests, by the way. Also, any kind of push technology is out of the question for now.
It does add up, but not as much as you might think. If you polled every second for a full hour, you'd only have used 864 KB, less than a typical web page would require with an unprimed cache. Even if you did it for a full day, you're talking about ~20 MB. Maybe if you're someone like Twitter you might need to be concerned about this, but I doubt you'll be getting anywhere near the traffic it would take for this to actually be problematic.
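The arithmetic behind those figures, for reference:

```python
# Back-of-the-envelope check of the numbers above (polling once per second):
bytes_per_response = 240
per_hour = bytes_per_response * 60 * 60   # 864,000 bytes  ≈ 864 KB per hour
per_day = per_hour * 24                   # 20,736,000 bytes ≈ 20 MB per day
```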
Nevertheless, you can of course customize the response headers, but what impact, if any, this will have on the client is a matter for testing. Some headers can probably be dropped, but others may surprise you, and it could technically vary from browser to browser as well.
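If you do want to experiment with trimming headers, here is a minimal sketch for the Django side. Note that hop-by-hop headers such as Connection and Keep-Alive are normally added by the web server or WSGI container rather than the view, so they can't be removed here; the view name is just an assumption.

```python
from django.http import HttpResponse

def poll_status(request):
    # Empty body when no update is pending (the ~240 B case).
    response = HttpResponse("")
    # Drop the Content-Type header; the client simply assumes
    # "text/html; charset=utf-8" for this endpoint.
    del response["Content-Type"]
    return response
```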
One solution to this kind of problem is "long polling". The polling client will send a request, and the webserver checks to see if there is an update. If there is not, the webserver sleeps for a second or two and then checks again in a loop, without sending a response. As soon as this loop sees an update, it sends a response. To the client web browser, it will look like the server is congested and taking a long time to respond, but actually the relevant data is being transmitted promptly and the "no data" responses are simply being skipped.
I'd recommend adding a timeout to the loop -- say 30 or 60 seconds -- after which the webserver would reply with "no data" as usual. Even just a 30 second cycle would cut your empty response load by a factor of 15-30.
Caveat: I've read about this kind of implementation but I haven't tried it myself. You will need to test compatibility with various web browsers to ensure that this fairly nonstandard method doesn't cause issues on the client side.
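For illustration only (I haven't run this in production either), a minimal sketch of what such a long-polling view could look like in Django; check_for_update and the parameter names are placeholders, not part of the question:

```python
import time
from django.http import JsonResponse

POLL_INTERVAL = 2   # seconds between checks on the server
MAX_WAIT = 30       # overall timeout before replying "no data" as usual

def check_for_update(session_id):
    """Placeholder: look up any pending update for this session (app-specific)."""
    return None

def poll_updates(request):
    session_id = request.GET.get("session_id")
    deadline = time.monotonic() + MAX_WAIT
    while time.monotonic() < deadline:
        payload = check_for_update(session_id)
        if payload is not None:
            return JsonResponse({"status": "update", "data": payload})
        time.sleep(POLL_INTERVAL)   # no update yet: wait and check again
    # Timed out: send the usual empty "no data" response.
    return JsonResponse({"status": "no-update"})
```

One design consequence worth noting: each waiting client ties up a server worker for the duration of the loop, so the worker/process count has to accommodate the expected number of concurrent pollers.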
I have to implement a REST endpoint that receives start and end dates (among other arguments). It does some computation to generate a result that is a kind of forecast based on the server state at invocation time and the input data (imagine a weather forecast for the next few days).
Since the endpoint does not alter the system state, I plan to use the GET method and return JSON.
The issue is that the output also includes an image file (a plot). So my idea is to create a unique id for the file and include a URI in the JSON response to be consumed later (I think this is the approach suggested by the HATEOAS principle).
My question is: since this image file is a resource that is valid only as part of the response to a single invocation of the original endpoint, I need a way to delete it once it has been consumed.
Would it be RESTful to delete it after serving it via a GET?
or expose it only via a DELETE?
or not delete it on consumption and keep it for some time? (A purge would have to be performed anyway, since I can't ensure the client consumes the file.)
I would appreciate your ideas.
Would it be RESTful to delete it after serving it via a GET?
Yes.
or expose it only via a DELETE?
Yes.
or not delete it on consumption and keep it for some time?
Yes.
The last of these options (caching) is a decent fit for REST in HTTP, since we have meta-data that we can use to communicate to general purpose components that a given representation has a finite lifetime.
So the representation of the report (which includes the link to the plot) could be accompanied by an Expires header that informs the client that it has an expected shelf life.
You might, therefore, plan to garbage collect the image resource after 10 minutes, and if the client hasn't fetched it before then - poof, gone.
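As a rough sketch of what that could look like on the Django side (the storage path, URL, and the 10-minute window are assumptions, not part of the question):

```python
from datetime import timedelta
from django.http import FileResponse
from django.utils import timezone
from django.utils.http import http_date

PLOT_DIR = "/tmp/plots"   # assumed storage location for generated plots

def forecast_plot(request, plot_id):
    # Serve the generated plot and tell caches it has a 10-minute shelf life.
    response = FileResponse(open(f"{PLOT_DIR}/{plot_id}.png", "rb"),
                            content_type="image/png")
    expires_at = timezone.now() + timedelta(minutes=10)
    response["Expires"] = http_date(expires_at.timestamp())
    response["Cache-Control"] = "private, max-age=600"
    return response
```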
The reason that you might want to keep the image around after you send the response to the GET: the network is unreliable, and the GET message may never reach its destination. Having things in cache saves you the compute of trying to recalculate the image.
If you want confirmation that the client did receive the data, then you must introduce another message to the protocol, for the client to inform you that the image has been downloaded successfully.
It's reasonable to combine these strategies: schedule yourself to evict the image from the cache in some fixed amount of time, but also evict the image immediately if the consumer acknowledges receipt.
But REST doesn't make any promises about liveness - you could send a response with a link to the image, but 404 Not Found every attempt to GET it, and that's fine (not useful, of course, but fine). REST doesn't promise that resources have stable representations, or that the resource is somehow eternal.
REST gives us standards for how we request things, and how responses should be interpreted, but we get a lot of freedom in choosing which response is appropriate for any given request.
You could offer, in the JSON response, a download link to that binary resource that also contains the parameters required to generate it. Then you can decide yourself when to clean that file up (managing disk space) or cache it, and you can always regenerate it because you still have the parameters. I am assuming here that the generation doesn't take significant time.
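A small sketch of that idea, with hypothetical names; the point is simply that the link carries enough information to regenerate the plot:

```python
from urllib.parse import urlencode

def plot_url(base_url, start_date, end_date):
    # Encode the same parameters that were used to generate the forecast,
    # so the plot can be regenerated even if the file has been cleaned up.
    query = urlencode({"start": start_date, "end": end_date})
    return f"{base_url}/forecast/plot?{query}"

# e.g. included in the JSON payload:
# {"forecast": [...], "plot_url": "https://api.example.com/forecast/plot?start=2024-01-01&end=2024-01-07"}
```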
It's a tricky one. Typically GET requests should be repeatable, as an important HTTP feature, in case the original fails; some clients might rely on that.
It could also be construed as a 'non-safe' operation: a GET resulting in what is effectively a DELETE.
I would be inclined to expire the image after X seconds/minutes instead, perhaps also supporting DELETE at that endpoint if the client got the result and wants to clean up early.
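If you go that route, the early-cleanup endpoint might look something like this in Django (names and storage location are assumptions); the scheduled expiry would still act as the safety net:

```python
import os
from django.http import HttpResponse, HttpResponseNotFound
from django.views.decorators.http import require_http_methods

PLOT_DIR = "/tmp/plots"   # assumed storage location

@require_http_methods(["DELETE"])
def delete_plot(request, plot_id):
    path = os.path.join(PLOT_DIR, f"{plot_id}.png")
    if not os.path.exists(path):
        return HttpResponseNotFound()
    os.remove(path)
    return HttpResponse(status=204)   # deleted early; scheduled expiry remains the fallback
```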
I am performance testing a map-based web application, where a query is fired at the DB and tabular data plus a map are returned, sort of like Google Maps. The problem, I suppose, is with rendering. While the time taken to actually "see" a table in the browser is around 1 minute, the same transaction takes around 3 minutes to complete in VuGen.
While going through the advanced trace logs, I can see a JSON response (of the above-mentioned table) being downloaded. This response is 6 MB and is delaying the transaction's completion. I have made sure that no asynchronous calls go along with this GET call; it is wrapped between lr_start_transaction and lr_end_transaction, and it is this call that is causing the high response time.
I understand that we might get a better handle on client-side activities using the TruClient protocol or others, but there is a restriction on that and the Web HTTP protocol needs to be used.
I am using HP LoadRunner 12.02, with the WinInet capture level.
My question is: is there any way I can actually emulate that "1 minute" a user would need to "see" the tabular data, rather than the 3 minutes it is taking? It's okay if I disregard this JSON response and don't download the 6 MB of data, if that makes any difference.
Any suggestion would be much appreciated. Thanks!
Hello all, please help with the analysis of my page.
Question 1
Why is the load time 690 ms even though everything is loaded from cache?
Question 2
What would be the reason to use private, max-age=60000?
public, max-age=60000 vs. private, max-age=60000
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching?hl=en
First, load time isn't just defined by the time it takes to get assets from the network. Painting and parsing can take a lot of time, as can parsing the JavaScript. In your case, DOMContentLoaded only fires after 491 milliseconds, so that's already part of the answer.
As to your second question, the answer really is in the link you provided:
If the response is marked as “public” then it can be cached, even if it has HTTP authentication associated with it, and even when the response status code isn’t normally cacheable. Most of the time, “public” isn’t necessary, because explicit caching information (like “max-age”) indicates that the response is cacheable anyway.
By contrast, “private” responses can be cached by the browser but are typically intended for a single user and hence are not allowed to be cached by any intermediate cache - e.g. an HTML page with private user information can be cached by that user’s browser, but not by a CDN.
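To make the difference concrete, here is a minimal sketch of emitting each variant from a Django view with the cache_control decorator (Django is just an example backend here, not something from the question):

```python
from django.http import HttpResponse
from django.views.decorators.cache import cache_control

@cache_control(public=True, max_age=60000)
def shared_asset(request):
    # May be stored by the browser and by shared caches (CDNs, proxies).
    return HttpResponse("content that is the same for everyone")

@cache_control(private=True, max_age=60000)
def per_user_page(request):
    # May be stored only by the end user's browser, not by shared caches.
    return HttpResponse("content specific to the logged-in user")
```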
I was looking at the Chrome DevTools resource network timings to detect requests that need to be improved. The linked documentation defines each timing, but I don't understand which processes behind the scenes affect the length of each period.
Below are 3 different images and here is my understanding of what's going on, please correct me if I'm wrong.
Stalled: Why do some requests get stalled for 1.17 s while others take less time?
Request Sent: the time our request took to reach the server
TTFB: Time took until the server responds with the first byte of data
Content Download: The time until the whole response reaches the client
Thanks
Network is an area where things will vary greatly. There are a lot of different numbers in play, and they vary between different locations and even at the same location with different types of content.
Here is some more detail on the areas where you need more understanding:
Stalled: This depends on what else is going on in the network stack. One request might not be stalled at all, while others could be stalled because six connections to the same host are already open. There are more reasons for stalling, but the maximum connection limit is an easy way to explain why it may occur.
The stalled state means we just can't send the request right now; it needs to wait for some reason. Generally, this isn't a big deal. If you see it a lot and you are not on the HTTP/2 protocol, then you should look into minimizing the number of resources being pulled from a given location. If you are on HTTP/2, then don't worry too much about this, since it deals with numerous requests differently.
Look around and see how many requests are going to a single domain. You can use the filter box to trim down the view. If you have a lot of requests going to the same domain, then you are most likely hitting the connection limit. Domain sharding is one method to handle this with HTTP/1.1, but with HTTP/2 it is an anti-pattern and hurts performance.
If you are not hitting the max connection limit, then the problem is more nuanced and needs a more hands-on debugging approach to figure out what is going on.
Request sent: This is not the time to reach the server; that is part of the Time To First Byte. All "request sent" means is that the request was sent and it took the network stack X time to carry that out.
There is nothing you can do to speed this up; it is more for informational and internal debugging purposes.
Time to First Byte (TTFB): This is the total time for the sent request to get to the destination, then for the destination to process the request, and finally for the response to traverse the networks back to the client.
A high TTFB reveals one of two issues. The first is a bad network connection between the client and the server, so data is slow to reach the server and get back. The second is a slow server processing the request, either because the hardware is weak or because the application running on it is slow. Both of these problems can also exist at once.
To address a high TTFB, first cut out as much of the network as possible. Ideally, host the application locally on a low-resource virtual machine and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. If the TTFB is very low locally, then the networks between your client and the server are the problem. There are various ways to handle this that I won't get into, since it is an area of expertise unto itself. Research network optimization, and even try moving hosts to see whether your server provider's network is the issue.
Remember the entire server-stack comes into play here. So if nginx or apache are configured poorly, or your database is taking a long time to respond, or your cache is having trouble, then these can cause delays. They are also difficult to detect locally, since your local server could vary in configuration from the remote stack.
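If you want to sanity-check TTFB outside the browser, here is a rough sketch using a raw socket; it assumes plain HTTP on port 80 and ignores TLS, redirects, and keep-alive:

```python
import socket
import time

def rough_timing(host, path="/", port=80):
    # Connection setup (DNS + TCP handshake) is measured separately from TTFB.
    t0 = time.monotonic()
    sock = socket.create_connection((host, port), timeout=10)
    connect_time = time.monotonic() - t0

    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    t1 = time.monotonic()
    sock.sendall(request.encode())
    sock.recv(1)                       # blocks until the first response byte arrives
    ttfb = time.monotonic() - t1
    sock.close()
    return connect_time, ttfb

connect_time, ttfb = rough_timing("example.com")
print(f"connect: {connect_time:.3f}s, ttfb: {ttfb:.3f}s")
```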
Content Download: This is the total time, from when the first byte arrives, for the client to get the rest of the content from the server. This should be short unless you are downloading a large file. Take a look at the size of the file and the conditions of the network, and then judge roughly how long this should take.
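A back-of-the-envelope way to judge it, with assumed numbers:

```python
# Rough estimate: content download time ≈ payload size / usable bandwidth.
payload_bytes = 6 * 1024 * 1024               # e.g. a 6 MB response
bandwidth_bytes_per_s = 10_000_000 / 8        # an assumed 10 Mbit/s link ≈ 1.25 MB/s
print(payload_bytes / bandwidth_bytes_per_s)  # ≈ 5 seconds, before protocol overhead
```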
So the problem started when we noticed cases of duplicate orders on our website. When we started investigating, we couldn't narrow down anything that could cause duplicate orders on our page and yet explain the state of the duplicate data. The eeriest part was that those orders were created at the SAME instant (down to the smallest millisecond). In the server access logs, too, the requests are received at the same instant.
So in order to investigate further we called random customers; most of them gave close to the same answers: they use a slow connection (some of them through a modem) and they use Chrome. Most of the feedback was along the lines of "the page was stuck so I pressed the back button". After some searching we learnt of the HTTP pipelining feature in Chrome, which is an aggressive technique to fetch the page when the connection is slow.
So here is the deal: the user presses the Submit button --> a verify Ajax JSON call (GET) --> the form data is POSTed through an Ajax JSON call --> some feedback is returned to the customer, the customer takes action and is then redirected appropriately.
I am not sure if this is the best use of AJAX or even of GET/POST calls, but this is what I am stuck with.
This problem occurs only under very specific conditions (a slow connection, and Chrome firing duplicate requests) and, truth be told, I have not been able to replicate it. However, since nearly 95% of the feedback points towards Chrome, I am forced to think of HTTP pipelining. It is the only possible explanation I can find for requests being fired such that multiple records are created at the same instant.
I also learnt that HTTP pipelining is done only for GET requests, not POST requests. So I am not sure whether:
this covers AJAX POST requests (I use jQuery and I do use type: POST)
Chrome may somehow be (erroneously) firing multiple requests for all requests (refer: What to do with chrome sending extra requests?)
The only argument I could find in Chrome's case is that HTTP pipelining is disabled by default.
I am not even sure what checks to put in place, since both requests are being served at the same instant. I could put a check at the backend to see whether a similar record has already been created, but that would be an expensive check, slowing down ordering, and the business may not welcome it.
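For what it's worth, the backend check doesn't have to be expensive if the form carries a client-generated token and the database enforces uniqueness. A hypothetical Django sketch (model and field names are mine, not from the actual system):

```python
from django.db import IntegrityError, models

class Order(models.Model):
    session_id = models.CharField(max_length=64)
    # A token generated once per checkout page load; the unique constraint
    # lets the database reject a second insert cheaply.
    idempotency_token = models.CharField(max_length=64, unique=True)
    created_at = models.DateTimeField(auto_now_add=True)

def create_order(session_id, token):
    try:
        return Order.objects.create(session_id=session_id,
                                    idempotency_token=token)
    except IntegrityError:
        # A duplicate submission raced us; reuse the existing order.
        return Order.objects.get(idempotency_token=token)
```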
I found something at http://www.chromium.org/developers/design-documents/network-stack/http-pipelining but I am not sure whether I should force/hack my requests to meet one of the criteria for stopping HTTP pipelining.
Any pointers on how to test this would be appreciated.