I'm trying to implement long polling for the first time, and I'm using XMLHttpRequest objects to do it. So far, I've been successful at getting events in Firefox and Internet Explorer 11, but Chrome strangely is the odd one out this time.
I can load one page and it runs just fine. It makes the request right away and starts processing and displaying events. If I open the page in a second tab, one of the pages starts seeing delays in receiving events. In the dev tools window, I see multiple requests with this kind of timing:
"Stalled" will range up to 20 seconds. It won't happen on every request, but will usually happen on several requests in a row, and in one tab.
At first I thought this was an issue with my server, but then I opened two IE tabs and two Firefox tabs, and they all connect and receive the same events without stalling. Only Chrome is having this kind of trouble.
I figure this is likely an issue with the way in which I'm making or serving up the request. For reference, the request headers look like this:
Connection: keep-alive
Last-Event-Id: 530
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36
Accept: */*
DNT: 1
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
The response looks like this:
HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: text/event-stream
Expires: Tue, 16 Dec 2014 21:00:40 GMT
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 16 Dec 2014 21:00:40 GMT
Connection: close
In spite of the headers involved, I'm not using the browser's native EventSource, but rather a polyfill that lets me set additional headers. The polyfill is using XMLHttpRequest under the covers, but it seems to me that no matter how the request is being made, it shouldn't stall for 20 seconds.
What might be causing Chrome to stall like this?
Edit: Chrome's chrome://net-internals/#events page shows that there's a timeout error involved:
t=33627 [st= 5] HTTP_CACHE_ADD_TO_ENTRY [dt=20001]
--> net_error = -409 (ERR_CACHE_LOCK_TIMEOUT)
The error message refers to a patch added to Chrome six months ago (https://codereview.chromium.org/345643003), which implements a 20-second timeout when the same resource is requested multiple times. In fact, one of the bugs the patch tries to fix (bug number 46104) refers to a similar situation, and the patch is meant to reduce the time spent waiting.
It's possible the answer (or workaround) here is just to make the requests look different, although perhaps Chrome could respect the "no-cache" header I'm setting.
Yes, this behavior is due to Chrome locking the cache and waiting to see the result of one request before requesting the same resource again. The answer is to find a way to make the requests unique. I added a random number to the query string, and everything is working now.
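For illustration, here is a minimal sketch of the workaround, assuming a hypothetical /events endpoint (the real requests go through the EventSource polyfill's XMLHttpRequest, but the idea is the same):
// Append a unique value to the query string so each long-poll request
// gets its own URL and never waits on Chrome's cache lock.
var url = '/events?nocache=' + Date.now() + '-' + Math.random();
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.setRequestHeader('Last-Event-Id', '530'); // custom header, as in the request above
xhr.send();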
For future reference, this was Chrome 39.0.2171.95.
Edit: Since writing this answer, I've come to understand that "Cache-Control: no-cache" doesn't do what I thought it did. Despite its name, responses with this header can be cached. I haven't tried it, but I wonder if using "Cache-Control: no-store", which does prevent caching, would fix the issue.
Adding Cache-Control: no-cache, no-transform worked for me.
I decided to keep it simple: I checked the response headers of a website that did not have this issue, and I changed my response headers to match theirs:
Cache-Control: max-age=3, must-revalidate
Related
I have checked out this and that. However, my debugger looks like the screenshots below.
Failure example
No form data, No raw content
Raw example (note: although the path differs from the screen capture, neither request shows its POST data)
POST https://192.168.0.7/cgi-bin/icul/;stok=554652ca111799826a1fbdafba9d3ac1/remote_command HTTP/1.1
Host: 192.168.0.7
Connection: keep-alive
Content-Length: 419
accept: application/json, text/javascript, */*; q=0.01
Origin: https://192.168.0.7
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
content-type: application/x-www-form-urlencoded; charset=UTF-8
Referer: https://192.168.0.7/cgi-bin/icul/;stok=554652ca111799826a1fbdafba9d3ac1/smartmomentl/access-point/network
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4
Cookie: sysauth=f15eff5e9ebb8f152e163f8bc00505c6
command=import&args=%7B%22--json%22%3Atrue%2C%22--force%22%3Atrue%2C%22--mocks%22%3A%22%7B%5C%22DEL%5C%22%3A%7B%7D%2C%5C%22SET%5C%22%3A%7B%5C%22dhcp%5C%22%3A%7B%5C%22lan%5C%22%3A%7B%5C%22.section%5C%22%3A%5C%22dhcp%5C%22%2C%5C%22interface%5C%22%3A%5C%22lan%5C%22%2C%5C%22ignore%5C%22%3A%5C%220%5C%22%2C%5C%22leasetime%5C%22%3A%5C%2212h%5C%22%2C%5C%22range%5C%22%3A%5C%22172.16.0.100-172.16.0.200%5C%22%7D%7D%7D%7D%22%7D
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Status: 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Expires: 0
Transfer-Encoding: chunked
Date: Thu, 01 Jan 1970 00:09:27 GMT
Server: lighttpd/1.4.30
31
{ "ctx": "No such command", "exitStatus": false }
0
NOTE: (6)
Successful example
Differences between them that I have spotted (by diffing the header contents)
Raw example (note: although the path differs from the screen capture, neither request shows its POST data)
POST https://192.168.0.7/cgi-bin/icul/;stok=92dea2b939b9fceb44ac84ac859de7f4/;stok=92dea2b939b9fceb44ac84ac859de7f4/remote_command HTTP/1.1
Host: 192.168.0.7
Connection: keep-alive
Content-Length: 53
Accept: application/json, text/javascript, */*; q=0.01
Origin: https://192.168.0.7
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: https://192.168.0.7/cgi-bin/icul/;stok=92dea2b939b9fceb44ac84ac859de7f4/remote_command/command_reboot
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4
Cookie: sysauth=683308794904e0bedaaead33acb15c7e
command=command_reboot&args=%7B%22--json%22%3Atrue%7D
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Status: 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Expires: 0
Transfer-Encoding: chunked
Date: Thu, 01 Jan 1970 00:02:46 GMT
Server: lighttpd/1.4.30
34
{ "ctx": "\u0022success\u0022", "exitStatus": true }
0
NOTE: (6)
Header Difference between 2 examples
The successful one uses the jQuery binding, while the failing one uses the HTTPS module from Node.js + browserify. However, I am still looking for a way to check whether this is the problem (Not tested)
Missing X-Requested-With: XMLHttpRequest. However, adding this header back to the request does not fix the problem (Tested)
Capitalized vs lower-case header field names:
content-type vs Content-Type. However, this difference is not the root cause of my problem, as tested in the fiddle here (Tested)
accept vs Accept (Not tested)
NOTE: (5) (7)
Still, I am not sure why the first c in content-type is lower-case.
NOTE: (1)
What I have tried
I have tried Firefox with Firebug. It is able to show my payload; however, it cannot parse the response from the server :'(
Since the web server runs over HTTPS, I cannot capture readable packets with Wireshark. Any suggestions for debugging POST requests? (A server-side logging sketch follows the gist link below.) Thanks.
Link to a gist about debugging HTTP(s) request via command line. NOTE: (3)
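One approach that can help: point the client at a tiny local echo server and log the raw headers and body on the server side. A minimal Node.js sketch (the port is arbitrary):
var http = require('http');

http.createServer(function (req, res) {
    console.log(req.method, req.url);
    console.log(req.headers);
    var body = '';
    req.on('data', function (chunk) { body += chunk; });
    req.on('end', function () {
        console.log('POST data:', body);
        res.end('ok');
    });
}).listen(8080);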
Wrapper I am using
I have wrapped the HTTPS module from Node.js with Promise calls. Below is a snippet showing the options I use.
/**
 * Wraps HTTPS module from nodejs with Promise
 * @module common/http_request
 */
var createRequestSetting = function (host, path, data, cookies) {
    return {
        method: 'POST',
        port: 443,
        host: host,
        path: path,
        headers: {
            Accept: 'application/json, text/javascript, */*; q=0.01',
            'Content-Type':
                'application/x-www-form-urlencoded; charset=UTF-8',
            'Content-Length': Buffer.byteLength(data),
            'Cookie': cookies,
        },
        rejectUnauthorized: false,
    };
};
Full source here
NOTE: (2)
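For context, here is a minimal sketch (my guess at typical usage; the linked full source has the real implementation, and postRequest is a hypothetical name) of how such an options object is passed to Node's https.request inside a Promise:
var https = require('https');

var postRequest = function (options, data) {
    return new Promise(function (resolve, reject) {
        var req = https.request(options, function (res) {
            var body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () { resolve(body); });
        });
        req.on('error', reject);
        req.write(data); // the URL-encoded command=...&args=... payload
        req.end();
    });
};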
Update
(1) I have verified that the lower-case letter c does not affect the Chrome debugger. Here is the fiddle. I mimicked the same request with XMLHttpRequest using the lower-case c, and I can still check the form data in the debugger.
(2) Link to the full source code
(3) Link to a gist from me about scripts to test HTTP(s) request
(4) Reformat the question for readability
(5) After reviewing the code, the two examples do not use the same binding
(6) Add raw header example
(7) Add a comparison section
There was a regression bug in Chrome v61 and v62 across all platforms that caused this behaviour when the response is (amongst others) a 302. This is fixed in v63 stable which was released to all desktop platforms on 6th December 2017.
Automatic updates are phased, but going to "Help" / "About Google Chrome" will force Chrome to download the update and give you a button to restart. Occasionally it is necessary to kill all Chrome processes and restart the browser manually to get the update.
The (now closed) bug report is here. The release announcement is here.
Clearly this is not the cause of the original poster's issue back in 2015, but searches for the issue led me here. Also note this is not just an OS X issue.
If your application returns a 302 status code, and no payload data in Chrome Devtools, you may be hitting this Chrome bug.
If you are in development or this is a URL which won't break anything, a quick, very practical, workaround is to modify your server side code to send a 200, for example in PHP you might add:
die("premature exit - send a 200");
which sends out a 200 status code. This works around the "302 bug" -- until it is fixed.
p.s. Per @leo-hendry below, Canary does have the fix as of Dec 2017, but if you don't have another reason to run Canary, running another browser side-by-side won't be worth it, as the mainline release should be coming out soon.
If this is a bug it may be behaving differently on Mac vs Windows.
The screenshot below is from Chrome 63 on Windows. You can see the request payload section as expected.
Here is what I see on Chrome 65 Beta running on Mac. Notice the request payload section is missing.
Am I correct to assume that the bug is not fixed or is there something else I should be checking?
I just noticed that you cannot see POST data if you select "Doc" from the filters in Chrome debugger (next to All, Xhr, Css, JS...). It does show if you select "All".
I probably have the same problem with the Chrome console (Chrome 69):
Neither the form data nor the payload tab is showing.
In my scenario I POST a form with enctype "multipart/form-data" to an iframe (submitting an image file over HTTPS to the same origin). It works as expected, but I don't know how to debug the data properly in Chrome when it doesn't show up at all. I could dump the data in PHP, but that's unnecessarily complicated and misses the whole point of using the console. I've read through the suggested solutions above, but I didn't manage to get rid of this problem. (The response code is 200, by the way, not 302.)
$_POST = Array
(
[xajax] => 1
[app] => products
[cmd] => upload
[cat] => 575
)
$_FILES = Array
(
[upfile] => Array
(
[name] => Aufkleber_Trollface.jpg
[type] => image/jpeg
[tmp_name] => /tmp/phpHwYkKD
[error] => 0
[size] => 25692
)
)
I faced the same issue in Chrome Version 101.0.4951.67 (Official Build) (64-bit).
I had not used form data for some time; meanwhile, Chrome had been updated.
Form data has been moved to the separate Payload tab.
Your code looks ok. Have you checked the Chrome console for errors?
If you have access to the server (and assuming it is httpd on Linux) you could make a small CGI shell script to inspect the headers and data at that end:
#!/bin/bash
echo "Content-type: text/plain"
echo ""
echo "Hello World. These are my environment variables:"
/usr/bin/env
if [ "$REQUEST_METHOD" = "POST" ]; then
    if [ "$CONTENT_LENGTH" -gt 0 ]; then
        echo "And this is my POST data:"
        /bin/cat
    fi
fi
exit 0
I had another issue where my POST parameters also went missing, both in the back end and in the Chrome headers, due to a very simple mistake in my code.
I had simply declared the params object, wrongly, as a plain array:
var params = [];
Instead of:
var params = {};
And then assigned parameters to it. As a result, I got no error, but it didn't work:
params['param1'] = 'param1';
params['param2'] = 'param2';
$.post(url, params, function(data) {...
This way, the data was never sent or received by the back end, and it did not show up in the debugger because it was never sent. This might save someone an hour or two.
Sometimes, if your form sets enctype="multipart/form-data", Chrome will not capture the request payload.
<form action="" method="POST" enctype="multipart/form-data">
</form>
REF
Although this is not the answer to the original question, my alternative was to replace the original implementation with the combination of form-data, es6-promise and isomorphic-fetch.
All packages can be downloaded from npm.
It works like a charm.
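For reference, a minimal sketch of that combination, with the URL and fields taken from the examples above (error handling omitted; a self-signed certificate would additionally need an https.Agent with rejectUnauthorized: false passed as the agent option):
require('es6-promise').polyfill();
require('isomorphic-fetch');
var FormData = require('form-data');

var form = new FormData();
form.append('command', 'command_reboot');
form.append('args', JSON.stringify({ '--json': true }));

fetch('https://192.168.0.7/cgi-bin/icul/remote_command', { method: 'POST', body: form })
    .then(function (res) { return res.json(); })
    .then(function (json) { console.log(json); });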
I have a web page that returns the following header when I access material:
HTTP/1.1 200 OK
Date: Sat, 29 Jun 2013 15:57:25 GMT
Server: Apache
Content-Length: 2247515
Cache-Control: no-cache, no-store, must-revalidate, max-age=-1
Pragma: no-cache, no-store
Expires: -1
Connection: close
Using a chrome extension, I want to modify this response header so that the material is actually cached instead of wasting bandwidth.
I have the following sample code:
chrome.webRequest.onHeadersReceived.addListener(function (details)
    {
        // removeHeader and updateHeader are my own helper functions
        // Delete the required elements
        removeHeader(details.responseHeaders, 'pragma');
        removeHeader(details.responseHeaders, 'expires');
        // Modify cache-control
        updateHeader(details.responseHeaders, 'cache-control', 'max-age=3600');
        console.log(details.url);
        console.log(details.responseHeaders);
        return {responseHeaders: details.responseHeaders};
    },
    {urls: ["<all_urls>"]}, ['blocking', 'responseHeaders']
);
Which correctly modifies the header to something like this (based on the console.log() output):
HTTP/1.1 200 OK
Date: Sat, 29 Jun 2013 15:57:25 GMT
Server: Apache
Content-Length: 2247515
Cache-Control: max-age=3600
Connection: close
But based on everything I have tried to check this, I cannot see any evidence whatsoever that this has actually happened:
The cache does not contain an entry for this file
The Network tab in the Developer Console shows no change at all to the HTTP response (I have tried even trivial modifications just to ensure that it's not an error, but still no change).
The only real hints I can find are this question which suggests that my approach still works and this paragraph on the webRequest API documentation which suggests that this won't work (but doesn't explain why I can't get any changes whatsoever):
Note that the web request API presents an abstraction of the network stack to the extension. Internally, one URL request can be split into several HTTP requests (for example to fetch individual byte ranges from a large file) or can be handled by the network stack without communicating with the network. For this reason, the API does not provide the final HTTP headers that are sent to the network. For example, all headers that are related to caching are invisible to the extension.
Nothing is working whatsoever (I can't modify the HTTP response header at all), so I think that's my first concern.
Any suggestions at where I could be going wrong or how to go about finding what is going wrong here?
If its not possible, are there any other ways to achieve what I am trying to achieve?
I have recently spent some hours trying to get a file cached, and discovered that the chrome.webRequest and chrome.declarativeWebRequest APIs cannot force resources to be cached, in any way.
The Cache-Control (and other) response headers can be changed, but the change is only visible in the getResponseHeader method. It has no effect on caching behaviour.
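A quick way to confirm this from a page script (a sketch; the URL is a placeholder): the rewritten header is visible to XMLHttpRequest even though caching is unaffected.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some-resource');
xhr.onload = function () {
    // Prints the modified value, e.g. "max-age=3600"
    console.log(xhr.getResponseHeader('Cache-Control'));
};
xhr.send();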
Web servers have the ability to stream media (audio in this example) to browsers, which use HTML5 controls to play it. What I'm discovering, however, is that Firefox caches the media even though I (believe I) explicitly tell it not to. I have a hunch it has something to do with the 206 Partial Content response, as a regular "non-range" GET with a full 200 OK response does not get cached. Chrome (27) handles this OK, but Firefox (21) does not:
HTTP/1.1 206 Partial Content
Date: Tue, 21 May 2013 17:24:29 GMT
Expires: 0
Pragma: no-cache
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Content-Disposition: attachment; filename="audio.wav"
Content-Type: audio/x-wav
Connection: close
Accept-Ranges: bytes
Content-Range: bytes 0-218923/218924
Anyone got any ideas as to how to make Firefox not cache this? When I click to play other audio files that are named the same, Firefox simply plays the first one that was clicked (cached) in a session as opposed to re-fetching the new one from the server.
Note that this question seems to ask/answer this directly, but the solution there does not work; I already use the headers mentioned.
Thanks for any help.
EDIT: I also tried adding an ETag: header, but still Firefox caches the original response.
EDIT: Including a Content-Length: header to match (218924 in this example) does not seem to impact the issue.
EDIT: I have filed a bug at bugzilla.mozilla.org but no activity on it at this point.
Your Firefox is implementing Section 13.8 of RFC 2616, so this behavior is compliant.
13.8 Errors or Incomplete Response Cache Behavior
A cache that receives an incomplete response (for example, with fewer
bytes of data than specified in a Content-Length header) MAY store the
response. However, the cache MUST treat this as a partial response.
Partial responses MAY be combined as described in section 13.5.4; the
result might be a full response or might still be partial. A cache
MUST NOT return a partial response to a client without explicitly
marking it as such, using the 206 (Partial Content) status code. A
cache MUST NOT return a partial response using a status code of 200
(OK).
Partial responses may (or may not) be stored, so Chrome and Firefox both follow the rules.
I've set up a bit of a test site. I'm trying to implement an HTML5 video to play on a site I'm developing, and I want to use jPlayer so that it falls back to an swf file if HTML5 video is not supported.
http://dev.johnhunt.com.au/ is what I have so far. It works fine if I provide http://www.jplayer.org/video/m4v/Big_Buck_Bunny_Trailer_480x270_h264aac.m4v for the video; however, if I host it on my own server, it simply never starts playing.
The mime type is definitely correct, video/m4v. Charles proxy says:
Client closed connection before receiving entire response
Infact, here's the entire request:
GET /Big_Buck_Bunny_Trailer_480x270_h264aac.m4v HTTP/1.1
Host: dev.johnhunt.com.au
Cache-Control: no-cache
Accept-Encoding: identity;q=1, *;q=0
Pragma: no-cache
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
Accept: */*
Referer: http://dev.johnhunt.com.au/
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __utma=120066461.1007786402.1349773481.1349773481.1349786970.2; __utmb=120066461.1.10.1349786970; __utmc=120066461; __utmz=120066461.1349773481.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Range: bytes=0-
And response:
Some binary data (maybe 3 or 4kbytes long)
Which looks OK. I assume the 'client' is my Chrome browser... why is it giving up? How can I fix this? It's driving me mad, as I can't find anything on Google :(
When I use the m4v file on jplayer.org this is the request:
GET /video/m4v/Big_Buck_Bunny_Trailer_480x270_h264aac.m4v HTTP/1.1
Host: www.jplayer.org
Cache-Control: no-cache
Accept-Encoding: identity;q=1, *;q=0
Pragma: no-cache
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
Accept: */*
Referer: http://dev.johnhunt.com.au/
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __utma=24821473.325705124.1349773077.1349773077.1349773077.1; __utmc=24821473; __utmz=24821473.1349773077.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided)
Range: bytes=0-
Response:
Lots of binary data (very long.. working)
Cheers,
John.
I've found that when the Chrome browser sends a "Range: bytes=0-" request, you should NOT respond with a "206 Partial Content" response. To get Chrome to handle the data properly, you need to send back a "200 OK" header.
I don't know if this is correct according to the specs but it gets Chrome to work and does not appear to break other browsers.
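To illustrate (a hedged sketch, not a drop-in fix): a minimal Node.js server that ignores the Range header and always answers with a plain 200 OK and the full file. The filename and port are placeholders.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    var file = 'Big_Buck_Bunny_Trailer_480x270_h264aac.m4v';
    var size = fs.statSync(file).size;
    // Even for "Range: bytes=0-", reply 200 OK with the whole file.
    res.writeHead(200, {
        'Content-Type': 'video/mp4',
        'Content-Length': size,
    });
    fs.createReadStream(file).pipe(res);
}).listen(8080);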
Having just run into this with Chrome, it seems that you need to ensure that the Content-Range header is set by your server in the response (a sketch follows the examples below).
From http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html:
Examples of byte-content-range-spec values, assuming that the entity contains a total of 1234 bytes:
The first 500 bytes: bytes 0-499/1234
The second 500 bytes: bytes 500-999/1234
All except for the first 500 bytes: bytes 500-1233/1234
The last 500 bytes: bytes 734-1233/1234
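And here is a sketch of the proper 206 shape for a "Range: bytes=0-" request, under the same placeholder assumptions as the sketch above (a real handler would parse req.headers.range instead of hard-coding the range):
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    var file = 'video.m4v';
    var size = fs.statSync(file).size;
    var start = 0;
    var end = size - 1;
    res.writeHead(206, {
        'Content-Type': 'video/mp4',
        'Content-Range': 'bytes ' + start + '-' + end + '/' + size,
        'Content-Length': end - start + 1,
        'Accept-Ranges': 'bytes',
    });
    fs.createReadStream(file, { start: start, end: end }).pipe(res);
}).listen(8080);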
Might be a problem with your Apache... presumably you are using Apache, given the tag.
Have you added the MIME types to Apache?
e.g.
AddType video/mp4 mp4
AddType video/mp4 m4v
Also check that gzip is turned off for the media... it is already compressed... and don't gzip Jplayer.swf either.
Can you post your apache config? Are you using any streaming modules such as this?
Cheers
Robin
EDIT
Oh, and you might also want to enable Accept-Ranges: bytes in Apache. If you look closely at the two links, you are serving a 200 while they are serving a 206 partial response.
I've never seen this before, I've always known there was either GET or POST. And I can't find any good documentation.
GET sends variables via the URL.
POST sends them via the request body?
What does HEAD do?
It doesn't get used often, am I correct?
W3schools.com doesn't even mention it.
HTML’s method attribute only allows GET and POST.
The HEAD method is used to send the request and retrieve just the HTTP header as response. For example, a client application can issue a HEAD request to check the size of a file (from HTTP headers) without downloading it. As Arjan points out, it's not even valid in HTML forms.
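For example (a sketch; the URL is a placeholder), a script can read a file's size from the headers alone:
fetch('https://example.com/large-file.zip', { method: 'HEAD' })
    .then(function (res) {
        // No body is downloaded; only the status line and headers arrive.
        console.log('Size in bytes:', res.headers.get('Content-Length'));
    });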
With the HTTP method HEAD, the server sends back the response headers but no body; it's often useful, as the URL I've given explains, though hardly ever in a "form" HTML tag.
The only thing I can imagine is that the server may actually have been set up to validate the request method, to discover submissions by robots, which for a HEAD form might actually use a different method than a browser does. (And thus reject those submissions.)
A response to a HEAD request does not imply nothing is shown to the user: even a response to HEAD can very well redirect to another page. However, like Gumbo noted: it's not valid for the method in a HTML form, so this would require a lot of testing in each possible browser...
For a moment I wondered if HEAD in a form is somehow used to avoid accidental multiple submissions. But I assume the only useful response would be a 301 Redirect, but that could also be used with GET or POST, so I don't see how HEAD would solve any issues.
A quick test in the current versions of both Safari and Firefox on a Mac shows that actually a GET is invoked. Of course, assuming this is undocumented behavior, one should not rely on that. Maybe for some time, spam robots were in fact fooled into using HEAD (which would then be rejected on the server), or might be fooled into skipping this form if they would only support GET and POST. But even the dumbest robot programmer (aren't they all dumb for not understanding their work is evil?) would soon have learned that a browser converts this into GET.
(Do you have an example of a website that uses this? Are you sure there's no JavaScript that changes this, or does something else? Can anyone test what Internet Explorer sends?)
HEAD Method
The HEAD method is functionally like GET, except that the server replies with a response line and headers but no entity body. The following is a simple example that uses the HEAD method to fetch header information about hello.htm:
HEAD /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
The following will be the server's response to the above HEAD request:
HTTP/1.1 200 OK
Date: Mon, 27 Jul 2009 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
ETag: "34aa387-d-1568eb00"
Vary: Authorization,Accept
Accept-Ranges: bytes
Content-Length: 88
Content-Type: text/html
Connection: Closed
You can see that the server does not send any data after the headers.
-Obtained from tutorialspoint.com