HTTP POST payload not visible in Chrome debugger? - google-chrome

I have checked out this and that. However, my debugger looks like the following.
Failure example
No form data, no raw content.
Raw example (* although the path differs from the screen capture, POST data cannot be read for either)
POST https://192.168.0.7/cgi-bin/icul/;stok=554652ca111799826a1fbdafba9d3ac1/remote_command HTTP/1.1
Host: 192.168.0.7
Connection: keep-alive
Content-Length: 419
accept: application/json, text/javascript, */*; q=0.01
Origin: https://192.168.0.7
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
content-type: application/x-www-form-urlencoded; charset=UTF-8
Referer: https://192.168.0.7/cgi-bin/icul/;stok=554652ca111799826a1fbdafba9d3ac1/smartmomentl/access-point/network
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4
Cookie: sysauth=f15eff5e9ebb8f152e163f8bc00505c6
command=import&args=%7B%22--json%22%3Atrue%2C%22--force%22%3Atrue%2C%22--mocks%22%3A%22%7B%5C%22DEL%5C%22%3A%7B%7D%2C%5C%22SET%5C%22%3A%7B%5C%22dhcp%5C%22%3A%7B%5C%22lan%5C%22%3A%7B%5C%22.section%5C%22%3A%5C%22dhcp%5C%22%2C%5C%22interface%5C%22%3A%5C%22lan%5C%22%2C%5C%22ignore%5C%22%3A%5C%220%5C%22%2C%5C%22leasetime%5C%22%3A%5C%2212h%5C%22%2C%5C%22range%5C%22%3A%5C%22172.16.0.100-172.16.0.200%5C%22%7D%7D%7D%7D%22%7D
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Status: 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Expires: 0
Transfer-Encoding: chunked
Date: Thu, 01 Jan 1970 00:09:27 GMT
Server: lighttpd/1.4.30
31
{ "ctx": "No such command", "exitStatus": false }
0
NOTE: (6)
Successful example
Differences I have spotted between them (by diffing the header contents) are listed further below.
Raw example (* although the path differs from the screen capture, POST data cannot be read for either)
POST https://192.168.0.7/cgi-bin/icul/;stok=92dea2b939b9fceb44ac84ac859de7f4/;stok=92dea2b939b9fceb44ac84ac859de7f4/remote_command HTTP/1.1
Host: 192.168.0.7
Connection: keep-alive
Content-Length: 53
Accept: application/json, text/javascript, */*; q=0.01
Origin: https://192.168.0.7
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: https://192.168.0.7/cgi-bin/icul/;stok=92dea2b939b9fceb44ac84ac859de7f4/remote_command/command_reboot
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4
Cookie: sysauth=683308794904e0bedaaead33acb15c7e
command=command_reboot&args=%7B%22--json%22%3Atrue%7D
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Status: 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache
Expires: 0
Transfer-Encoding: chunked
Date: Thu, 01 Jan 1970 00:02:46 GMT
Server: lighttpd/1.4.30
34
{ "ctx": "\u0022success\u0022", "exitStatus": true }
0
NOTE: (6)
Header differences between the two examples
The successful one uses the jQuery binding while the failing one uses the HTTPS module from Node.js + browserify. However, I am still finding a way to check whether this is the problem (Not tested)
Missing X-Requested-With: XMLHttpRequest. However, adding this header back to the request does not fix the problem (Tested)
Capital vs lower-case header field names:
content-type vs Content-Type. However, this difference is not the root cause of my problem, as tried in the fiddle here (Tested)
Accept vs accept (Not tested)
NOTE: (5) (7)
Still, I am not sure why the first c in content-type is lower case.
NOTE: (1)
What I have tried
I have tried Firefox with Firebug. It is able to show my payload. However, it cannot parse the response from the server :'(
Since the web server runs over HTTPS, I cannot capture the payload with Wireshark. Any suggestions for debugging POST requests? Thanks.
Link to a gist about debugging HTTP(S) requests via the command line. NOTE: (3)
Wrapper I am using
I have wrapped this method from Node.js with Promise calls. Below is a snippet showing the options I use.
/**
 * Wraps the HTTPS module from Node.js with a Promise
 * @module common/http_request
 */
var createRequestSetting = function (host, path, data, cookies) {
    return {
        method: 'POST',
        port: 443,
        host: host,
        path: path,
        headers: {
            Accept: 'application/json, text/javascript, */*; q=0.01',
            'Content-Type':
                'application/x-www-form-urlencoded; charset=UTF-8',
            'Content-Length': Buffer.byteLength(data),
            'Cookie': cookies,
        },
        rejectUnauthorized: false,
    };
};
Full source here
NOTE: (2)
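For context, here is a minimal sketch of how such options can be fed into https.request and wrapped in a Promise (the function below is illustrative, not the actual module):

var https = require('https');

// Illustrative wrapper: resolves with the response body, rejects on request errors.
var sendRequest = function (host, path, data, cookies) {
    return new Promise(function (resolve, reject) {
        var req = https.request(createRequestSetting(host, path, data, cookies), function (res) {
            var body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () { resolve(body); });
        });
        req.on('error', reject);
        req.write(data); // the urlencoded payload, e.g. 'command=...&args=...'
        req.end();
    });
};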
Update
(1) I have verified that the lower-case letter c does not affect the Chrome debugger. Here is the fiddle. I tried to mimic the same request with XMLHttpRequest using the lower-case c, and I can still see the form data in the debugger.
(2) Link to the full source code
(3) Link to a gist from me about scripts to test HTTP(s) request
(4) Reformat the question for readability
(5) After reviewing the code, the examples do not use the same binding
(6) Add raw header example
(7) Add a comparison section

There was a regression bug in Chrome v61 and v62 across all platforms that caused this behaviour when the response is (amongst others) a 302. This is fixed in v63 stable which was released to all desktop platforms on 6th December 2017.
Automatic updates are phased, but going to "Help" / "About Google Chrome" will force it to download the update and give you a button to restart. Occasionally it is necessary to kill all Chrome processes and restart it manually to get the update.
The (now closed) bug report is here. The release announcement is here.
Clearly this is not the cause of the original poster's issue back in 2015, but searches for the issue led me here. Also note this is not just an OS X issue.

If your application returns a 302 status code, and no payload data in Chrome Devtools, you may be hitting this Chrome bug.
If you are in development, or this is a URL which won't break anything, a quick and very practical workaround is to modify your server-side code to send a 200; for example, in PHP you might add:
die("premature exit - send a 200");
which sends out a 200 status code. This works around the "302 bug" -- until it is fixed.
p.s. Per @leo-hendry below, Canary does have the fix as of Dec 2017, but if you don't have another reason to run Canary, running another browser side-by-side won't be worth it, as the mainline release should be coming out soon.

If this is a bug it may be behaving differently on Mac vs Windows.
The screenshot below is from Chrome 63 on Windows. You can see the request payload section as expected.
Here is what I see on Chrome 65 Beta running on Mac. Notice the request payload section is missing.
Am I correct to assume that the bug is not fixed or is there something else I should be checking?

I just noticed that you cannot see POST data if you select "Doc" from the filters in Chrome debugger (next to All, Xhr, Css, JS...). It does show if you select "All".

I probably have the same problem with the Chrome console (Chrome 69).
Neither the form data nor the payload tab is showing.
In my scenario I POST a form with enctype "multipart/form-data" to an iframe (submitting an image file over https to the same origin). It works as expected, but I don't know how to debug the data properly in Chrome when it doesn't show up at all. I could dump the data in PHP, but that is unnecessarily complicated and misses the point of using the console. I've read through the suggested solutions above, but I didn't manage to get rid of this problem. (The response code is 200 btw, not 302.)
$_POST = Array
(
    [xajax] => 1
    [app] => products
    [cmd] => upload
    [cat] => 575
)

$_FILES = Array
(
    [upfile] => Array
        (
            [name] => Aufkleber_Trollface.jpg
            [type] => image/jpeg
            [tmp_name] => /tmp/phpHwYkKD
            [error] => 0
            [size] => 25692
        )
)

I have faced the same issue in Chrome Version 101.0.4951.67 (Official Build) (64-bit).
I had not used form data for a while; meanwhile, Chrome had been updated.
The form data has been moved to a separate Payload tab.

Your code looks ok. Have you checked the Chrome console for errors?
If you have access to the server (and assuming it is httpd on Linux) you could make a small CGI shell script to inspect the headers and data at that end:
#!/bin/bash
echo "Content-type: text/plain"
echo ""
echo "Hello World. These are my environment variables:"
/usr/bin/env
if [ "$REQUEST_METHOD" = "POST" ]; then
if [ "$CONTENT_LENGTH" -gt 0 ]; then
echo "And this is my POST data:"
/bin/cat
fi
fi
exit 0
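If it helps, here is one way to throw some test data at such a script from Node and see what arrives on the server side (the host, path and field names below are made up for the example):

var https = require('https');
var querystring = require('querystring');

var data = querystring.stringify({ command: 'test', args: '{}' });
var req = https.request({
    method: 'POST',
    host: '192.168.0.7',
    path: '/cgi-bin/test.sh', // hypothetical location of the CGI script
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(data),
    },
    rejectUnauthorized: false, // the device uses a self-signed certificate
}, function (res) {
    res.pipe(process.stdout); // prints the environment variables and echoed POST data
});
req.write(data);
req.end();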

I had another issue where my POST parameters went missing, both in the back end and in the Chrome headers, due to a very simple mistake in the code.
I simply declared the params object incorrectly, as a normal array:
var params = [];
Instead of:
var params = {};
And then assigned parameters to it; as a result, I got no error, but it did not work:
params['param1'] = 'param1';
params['param2'] = 'param2';
$.post(url, params, function(data) {...
This way, the data was not sent, so it was neither received by the back end nor shown in the debugger. It might save someone an hour or two.
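For comparison, the corrected version (same placeholder names as above) serializes and sends as expected:

var params = {};
params['param1'] = 'param1';
params['param2'] = 'param2';

// jQuery now serializes the object, and the form data shows up in the debugger.
$.post(url, params, function (data) {
    // handle response
});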

Sometimes, if your form sets enctype="multipart/form-data", Chrome will not capture the request payload.
<form action="" method="POST" enctype="multipart/form-data">
</form>
REF

Although this does not answer the original question, my alternative is to replace the original implementation with the combination of form-data, es6-promise and isomorphic-fetch.
All packages can be downloaded from npm.
It works like a charm.
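In case it helps anyone, a rough sketch of what that replacement looks like (the endpoint and payload are the same illustrative ones as above; error handling and the self-signed-certificate issue are left out, and the form-data package only comes in when the body needs to be multipart rather than urlencoded):

require('es6-promise').polyfill(); // Promise support where it is missing
require('isomorphic-fetch');       // provides a global fetch in Node and the browser

function remoteCommand(host, path, body, cookies) {
    return fetch('https://' + host + path, {
        method: 'POST',
        headers: {
            'Accept': 'application/json, text/javascript, */*; q=0.01',
            'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8',
            'Cookie': cookies,
        },
        body: body, // e.g. 'command=command_reboot&args=%7B%22--json%22%3Atrue%7D'
    }).then(function (response) {
        return response.json();
    });
}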

Related

Chrome stalls when making multiple requests to same resource?

I'm trying to implement long polling for the first time, and I'm using XMLHttpRequest objects to do it. So far, I've been successful at getting events in Firefox and Internet Explorer 11, but Chrome strangely is the odd one out this time.
I can load one page and it runs just fine. It makes the request right away and starts processing and displaying events. If I open the page in a second tab, one of the pages starts seeing delays in receiving events. In the dev tools window, I see multiple requests with this kind of timing:
"Stalled" will range up to 20 seconds. It won't happen on every request, but will usually happen on several requests in a row, and in one tab.
At first I thought this was an issue with my server, but then I opened two IE tabs and two Firefox tabs, and they all connect and receive the same events without stalling. Only Chrome is having this kind of trouble.
I figure this is likely an issue with the way in which I'm making or serving up the request. For reference, the request headers look like this:
Connection: keep-alive
Last-Event-Id: 530
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36
Accept: */*
DNT: 1
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
The response looks like this:
HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: text/event-stream
Expires: Tue, 16 Dec 2014 21:00:40 GMT
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 16 Dec 2014 21:00:40 GMT
Connection: close
In spite of the headers involved, I'm not using the browser's native EventSource, but rather a polyfill that lets me set additional headers. The polyfill is using XMLHttpRequest under the covers, but it seems to me that no matter how the request is being made, it shouldn't stall for 20 seconds.
What might be causing Chrome to stall like this?
Edit: Chrome's chrome://net-internals/#events page shows that there's a timeout error involved:
t=33627 [st= 5] HTTP_CACHE_ADD_TO_ENTRY [dt=20001]
--> net_error = -409 (ERR_CACHE_LOCK_TIMEOUT)
The error message refers to a patch added to Chrome six months ago (https://codereview.chromium.org/345643003), which implements a 20-second timeout when the same resource is requested multiple times. In fact, one of the bugs the patch tries to fix (bug number 46104) refers to a similar situation, and the patch is meant to reduce the time spent waiting.
It's possible the answer (or workaround) here is just to make the requests look different, although perhaps Chrome could respect the "no-cache" header I'm setting.
Yes, this behavior is due to Chrome locking the cache and waiting to see the result of one request before requesting the same resource again. The answer is to find a way to make the requests unique. I added a random number to the query string, and everything is working now.
For future reference, this was Chrome 39.0.2171.95.
Edit: Since this answer, I've come to understand that "Cache-Control: no-cache" doesn't do what I thought it does. Despite its name, responses with this header can be cached. I haven't tried, but I wonder if using "Cache-Control: no-store", which does prevent caching, would fix the issue.
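A minimal sketch of the query-string approach described above (the parameter name is arbitrary):

// Give every long-poll request a unique URL so Chrome does not hold it behind
// the cache lock while an identical request to the same resource is in flight.
function longPoll(baseUrl, onMessage) {
    var url = baseUrl + (baseUrl.indexOf('?') === -1 ? '?' : '&') + '_=' + Date.now();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onload = function () {
        onMessage(xhr.responseText);
        longPoll(baseUrl, onMessage); // immediately start the next poll
    };
    xhr.send();
}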
Adding Cache-Control: no-cache, no-transform worked for me.
I decided to keep it simple, checked the response headers of a website that did not have this issue, and changed my response headers to match theirs:
Cache-Control: max-age=3, must-revalidate

Chrome v37/38 CORS failing (again) with 401 for OPTIONS pre-flight requests

As of Chrome version 37, pre-flighted cross-domain requests fail (again) if the server has authentication enabled, even though all CORS headers are set correctly. This is on localhost (my dev PC).
Some of you may be aware of the history of Chrome/CORS/auth bugs, especially when HTTPS was involved. My problem does not involve HTTPS: I have an AngularJS application served from localhost:8383 talking to a Java (Jetty) server on localhost:8081 that has HTTP BASIC auth activated. GETs work fine, but POSTs fail with a 401:
XMLHttpRequest cannot load http://localhost:8081/cellnostics/rest/patient.
Invalid HTTP status code 401
I have previously written a custom (Java) CORS filter that sets the correct CORS headers, which worked up until v36. It fails in v37 and also the latest v38 (38.0.2125.101 m). It still works as expected with Internet Explorer 11 (11.0.9600) and Opera 12.17 (build 1863).
GET requests succeed, but POSTs fail. It looks like Chrome is pre-flighting all my POSTs due to the content-type: "application/json", and that it is the pre-flighted OPTIONS request that is failing.
In the Angular app I explicitly set the following request headers. AFAIK this setting for withCredentials should ensure that credentials are sent even for OPTIONS requests:
//Enable cross domain calls
$httpProvider.defaults.useXDomain = true;
//Send all requests, even OPTIONS, with credentials
$httpProvider.defaults.withCredentials = true;
Below is the request/response. You can see that the OPTIONS method is enabled in the Access-Control-Allow-Methods header. You can also see that the Javascript app's origin is explicitly enabled: Access-Control-Allow-Origin: http://localhost:8383.
Remote Address:[::1]:8081
Request URL:http://localhost:8081/cellnostics/rest/medicaltest
Request Method:OPTIONS
Status Code:401 Full authentication is required to access this resource
Request headers:
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8,af;q=0.6
Access-Control-Request-Headers:accept, content-type
Access-Control-Request-Method:POST
Connection:keep-alive
Host:localhost:8081
Origin:http://localhost:8383
Referer:http://localhost:8383/celln-web/index.html
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.101 Safari/537.36
Response headers:
Access-Control-Allow-Credentials:true
Access-Control-Allow-Headers:Cache-Control, Pragma, Origin, Authorization, Content-Type, X-Requested-With, Accept
Access-Control-Allow-Methods:POST, GET, OPTIONS, PUT, DELETE
Access-Control-Allow-Origin:http://localhost:8383
Access-Control-Max-Age:3600
Content-Length:0
Server:Jetty(8.1.8.v20121106)
WWW-Authenticate:Basic realm="Cellnostics"
Has anyone got any idea what else I should do? I made sure to clear the Chrome cache before testing, restarting and ensuring that there were no background Chrome processes left running before restart, so I'm pretty sure that there were no lingering auth cache issues.
I've had to switch to IE 11 for testing my web development. The fact that the same client and server setup still works for IE and Opera, and the fact that there is a history of Chrome/CORS bugs, makes me suspect Chrome.
EDIT: Here's an extract from the Chrome net-internals event list:
t=108514 [st=0] +URL_REQUEST_START_JOB [dt=4]
--> load_flags = 336011264 (BYPASS_DATA_REDUCTION_PROXY | DO_NOT_SAVE_COOKIES | DO_NOT_SEND_AUTH_DATA | DO_NOT_SEND_COOKIES | MAYBE_USER_GESTURE | VERIFY_EV_CERT)
--> method = "OPTIONS"
--> priority = "LOW"
--> url = "http://localhost:8081/cellnostics/rest/patient"
...
t=108516 [st=2] HTTP_TRANSACTION_SEND_REQUEST_HEADERS
--> OPTIONS /cellnostics/rest/patient HTTP/1.1
Host: localhost:8081
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: http://localhost:8383
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.101 Safari/537.36
Access-Control-Request-Headers: accept, content-type
Accept: */*
Referer: http://localhost:8383/celln-web/index.html
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,af;q=0.6
So it looks like the Authorization header is not sent with the OPTIONS pre-flight, even though I explicitly set withCredentials = true.
However, why would IE and Opera still work? Is Chrome more standards-compliant in this regard? Why did it work and then start failing from v37?
EDIT: Chrome dev tools does not show the Content-Type of the request in the dumps above, but here it is from the Network log. The first pic shows the POST when the server auth is disabled, with content type correctly sent as 'application/json'. The 2nd pic is when the auth is enabled, showing the OPTIONS request failing (it seems OPTIONS is always sent with content type 'text/plain'?).
@Cornel Masson, did you solve the problem? I do not understand why your server is asking you to authenticate the OPTIONS request, but I am facing the same issue against a SAP NetWeaver server. I have read the whole CORS specification (which I recommend), so I can clarify some of your doubts.
About your sentence
In the Angular app I explicitly set the following request headers. AFAIK this setting for withCredentials should ensure that credentials are sent even for OPTIONS requests:
According to the CORS specification, when a user agent (that is, a browser) preflights a request (a request with the OPTIONS HTTP method), it MUST exclude the user credentials (cookies, HTTP authentication, ...), so an OPTIONS request can never be sent authenticated. The browser sends credentials with the actual request (the one with the requested HTTP method such as GET or POST), but not with the preflight request.
So browsers MUST NOT send credentials in the OPTIONS request; they will in the actual request. If you write withCredentials = true, the browser should behave exactly as described.
According to your sentence:
It looks like Chrome is pre-flighting all my POSTs due to the content-type: "application/json":
The specification also says that a preflight request will be made by the browser when a header is not a "simple header", and here is what that means:
A header is said to be a simple header if the header field name is an ASCII case-insensitive match for Accept, Accept-Language, or Content-Language or if it is an ASCII case-insensitive match for Content-Type and the header field value media type (excluding parameters) is an ASCII case-insensitive match for application/x-www-form-urlencoded, multipart/form-data, or text/plain.
application/json is not included so the browser MUST preflight the request as it does.
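To illustrate, a request like the following is preflighted purely because of its content type (the URL is the one from the question; nothing else about the request is special):

var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://localhost:8081/cellnostics/rest/patient', true);
xhr.withCredentials = true; // applies to the actual request, never to the preflight
// application/json is not a "simple" content type, so the browser sends an OPTIONS
// preflight first; text/plain or application/x-www-form-urlencoded would not.
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ name: 'test' }));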
If anyone finds a solution it would be appreciated.
EDIT: I just found a person with the same problem who explains the real issue; if you use the same server as him, you are in luck: https://evolpin.wordpress.com/2012/10/12/the-cors/
Same here. I use Windows NTLM authentication. Up until Chrome version 37 it worked OK. On versions 37 and 38 it fails with 401 (Unauthorized) because the Authorization request header is missing from the pre-flight OPTIONS on both PUT and POST.
The server side is Microsoft Web API 2.1. I tried various CORS packages, including the latest NuGet package from Microsoft, to no avail.
My workaround on Chrome is to send GET requests instead of POST and break the rather large data into multiple requests, since the URL naturally has a size limit.
Here are Request/Response headers:
Request URL: http://localhost:8082/api/ConfigurationManagerFeed/
Method: OPTIONS
Status: 401 Unauthorized
Request Headers
Accept: */*
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,ru;q=0.6
Access-Control-Request-Headers: accept, content-type
Access-Control-Request-Method: POST
Connection: keep-alive
Host: localhost:8082
Origin: http://localhost:8383
Referer: http://localhost:8383/Application/index.html
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2125.104 Safari/537.36
X-DevTools-Emulate-Network-Conditions-Client-Id: 49E7FC68-A65C-4318-9292-852946051F27
Response Headers
Cache-Control: private
Content-Length: 6388
Content-Type: text/html; charset=utf-8
Date: Fri, 24 Oct 2014 13:40:07 GMT
Server: Microsoft-IIS/7.5
WWW-Authenticate: Negotiate
NTLM
X-Powered-By: ASP.NET
On MS IIS I implemented another workaround by overriding the standard Microsoft page life cycle, i.e. processing OPTIONS right at the beginning of the HTTP request in Global.asax:
public class Global : HttpApplication
{
    /// <summary>Check and configure the CORS pre-flight request in the BeginRequest lifecycle event
    /// </summary>
    protected void Application_BeginRequest()
    {
        if (Request.Headers.AllKeys.Contains(CorsHandler.Origin) && Request.HttpMethod == "OPTIONS")
        {
            Response.StatusCode = (int)HttpStatusCode.OK;
            Response.Headers.Add(CorsHandler.AccessControlAllowCredentials, "true");
            Response.Headers.Add(CorsHandler.AccessControlAllowOrigin, Request.Headers.GetValues(CorsHandler.Origin).First());
            string accessControlRequestMethod = Request.Headers.GetValues(CorsHandler.AccessControlRequestMethod).FirstOrDefault();
            if (accessControlRequestMethod != null)
            {
                Response.Headers.Add(CorsHandler.AccessControlAllowMethods, accessControlRequestMethod);
            }
            var hdrs = Request.Headers.GetValues(CorsHandler.AccessControlRequestHeaders).ToList();
            hdrs.Add("X-Auth-Token");
            string requestedHeaders = string.Join(", ", hdrs.ToArray());
            Response.Headers.Add(CorsHandler.AccessControlAllowHeaders, requestedHeaders);
            Response.Headers.Add("Access-Control-Expose-Headers", "X-Auth-Token");
            Response.Flush();
        }
    }
}
We were experiencing this same issue when trying to debug an Angular 4 front-end application running on localhost:4200 (using the Angular CLI Live Development Server). The Angular application makes http requests to an ASP .Net WebApi2 application running on localhost (IIS Web server) with Windows Authentication. The issue only occurred when making a POST request in Chrome (even though our WebApi is configured for CORs). We were able to temporarily workaround the issue by starting up Fiddler and running a reverse proxy until we found this post (Thank You).
The error message in Chrome debugger:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource
Adding the below code to our Web Api2 - Global.asax file resolved this problem.
If you are wondering about "CorsHandler", you can simply replace the constants from George's post above with hard-coded string values, as follows:
protected void Application_BeginRequest()
{
    if (Request.Headers.AllKeys.Contains("Origin") && Request.HttpMethod == "OPTIONS")
    {
        Response.StatusCode = (int)HttpStatusCode.OK;
        Response.Headers.Add("Access-Control-Allow-Credentials", "true");
        Response.Headers.Add("Access-Control-Allow-Origin", Request.Headers.GetValues("Origin").First());
        string accessControlRequestMethod = Request.Headers.GetValues("Access-Control-Request-Method").FirstOrDefault();
        if (accessControlRequestMethod != null)
        {
            Response.Headers.Add("Access-Control-Allow-Methods", accessControlRequestMethod);
        }
        var hdrs = Request.Headers.GetValues("Access-Control-Request-Headers").ToList();
        hdrs.Add("X-Auth-Token");
        string requestedHeaders = string.Join(", ", hdrs.ToArray());
        Response.Headers.Add("Access-Control-Allow-Headers", requestedHeaders);
        Response.Headers.Add("Access-Control-Expose-Headers", "X-Auth-Token");
        Response.Flush();
    }
}
Kind Regards,
Chris

Why is this media upload failing 'not allowed by access-control-origin'?

Here is the request and response
Request URL: https://www.googleapis.com/upload/drive/v2/files/0B6B-RNrxsCu2S0xxSkZQUEQ3eDQ?uploadType=media
Request Method:OPTIONS
Status Code:200 OK
Request Headers
:host:www.googleapis.com
:method:OPTIONS
:path:/upload/drive/v2/files/0B6B-RNrxsCu2S0xxSkZQUEQ3eDQ?uploadType=media
:scheme:https
:version:HTTP/1.1
accept:*/*
accept-encoding:gzip,deflate,sdch
accept-language:en-US,en;q=0.8,en-AU;q=0.6
access-control-request-headers:accept, content-type, authorization, upload-content-length, upload-content-type
access-control-request-method:PUT
origin:http://dev.example.co:8888
referer:http://dev.example.co:8888/app/drivecrud.html
user-agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.114 Safari/537.36
Query String Parameters
uploadType:media
Response Headers
alternate-protocol:443:quic
cache-control:no-cache, no-store, max-age=0, must-revalidate
content-length:0
content-type:application/octet-stream
date:Fri, 18 Apr 2014 06:46:58 GMT
expires:Fri, 01 Jan 1990 00:00:00 GMT
pragma:no-cache
server:HTTP Upload Server Built on Apr 11 2014 13:30:54 (1397248254)
status:200 OK
version:HTTP/1.1
Fails with ...
OPTIONS https://www.googleapis.com/upload/drive/v2/files/0B6B-RNrxsCu2S0xxSkZQUEQ3eDQ?uploadType=media
Origin http://dev.example.co:8888 is not allowed by Access-Control-Allow-Origin.
By way of confirmation that everything else seems OK...
I just created the file that I'm uploading content to, so it's not permissions.
If I replace uploadType=media with uploadType=multipart, then I can correctly create a new file with content.
So it feels like one of:
I've mis-formed the request in some way
Drive bug
The Drive API is documented at https://developers.google.com/drive/web/manage-uploads#simple
so I'm asking: is my request not as specified by the API, or is it as specified and the API is broken?
As people have commented, this looks like a cross-domain problem.
I assume you are using JavaScript to make this request.
Basically, you're bumping into a security measure that prevents scripts from moving data from one site to another without you knowing it.
The easiest way I found to fix this, unless you can edit the headers at googleapis.com, is jQuery-File-Upload. It works cross-domain :)
You could also make it a 'jsonp' data request, which is meant for cross-domain communication.
It can easily be fixed if you have control over the receiving/responding server by adding
Access-Control-Allow-Origin: *
to the response header. However, you probably don't have the ability to edit the headers of googleapis.com.
Sources:
http://www.fbloggs.com/2010/07/09/how-to-access-cross-domain-data-with-ajax-using-jsonp-jquery-and-php/
Access-Control-Allow-Origin error sending a jQuery Post to Google API's
http://cypressnorth.com/programming/cross-domain-ajax-request-with-json-response-for-iefirefoxchrome-safari-jquery/

Json response download in IE(7~10)

I am trying to upload a file and return a JSON response with the properties (name, size, etc.) of the file. It works fine in all browsers except IE.
IE tries to download the JSON as a file!
I have IE10 and am testing on IE7 to 10 by changing the browser mode and document mode from the debugger.
I am using ASP.NET MVC 4; the file upload action has the HttpPost attribute and I am returning the JSON response using return Json(myObject);
And here are my http headers
Request
Key Value
Request POST /File/UploadFile/ HTTP/1.1
Accept text/html, application/xhtml+xml, */*
Referer http://localhost:63903/
Accept-Language en-NZ
User-Agent Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Content-Type multipart/form-data; boundary=---------------------------7dc1e71330526
Accept-Encoding gzip, deflate
Host localhost:63903
Content-Length 1377002
DNT 1
Connection Keep-Alive
Cache-Control no-cache
Response
Key Value
Response HTTP/1.1 200 OK
Server ASP.NET Development Server/11.0.0.0
Date Tue, 18 Dec 2012 23:44:19 GMT
X-AspNet-Version 4.0.30319
X-AspNetMvc-Version 4.0
Cache-Control private
Content-Type application/json; charset=utf-8
Content-Length 154
Connection Close
I tried a few suggestions but so far I am back to square one!
You will need to return the JSON as text/html, since IE does not know what to do with application/json content:
return Json(myObject, "text/html");
I'm not sure, but it might work (and it would be more correct if it does) to use text/x-json:
return Json(myObject, "text/x-json");
Even though this question is a few months old, I thought I'd add one more suggestion, just in case anyone else is using ASP.NET MVC 3 or 4 and runs into this problem.
In my experience, when IE attempts to download the JSON response as a file, all you have to do to correct the problem is add a reference to jquery.unobtrusive to your view.
for example:
@Scripts.Render("~/Scripts/jquery.unobtrusive-ajax.min.js")
Once this is in place, IE will no longer try to download the JSON response from a JsonResult controller action. No need to change the response type, etc.

Client closes connection when streaming m4v from apache to chrome with jplayer

I've set up a bit of a test site. I'm trying to implement an HTML5 video to play on a site I'm developing, and I want to use jPlayer so that it falls back to an swf file if HTML5 video is not supported.
http://dev.johnhunt.com.au/ is what I have so far. It works fine if I provide http://www.jplayer.org/video/m4v/Big_Buck_Bunny_Trailer_480x270_h264aac.m4v as the video, but if I host it on my own server it simply never starts playing.
The mime type is definitely correct, video/m4v. Charles proxy says:
Client closed connection before receiving entire response
Infact, here's the entire request:
GET /Big_Buck_Bunny_Trailer_480x270_h264aac.m4v HTTP/1.1
Host dev.johnhunt.com.au
Cache-Control no-cache
Accept-Encoding identity;q=1, *;q=0
Pragma no-cache
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
Accept */*
Referer http://dev.johnhunt.com.au/
Accept-Language en-US,en;q=0.8
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie __utma=120066461.1007786402.1349773481.1349773481.1349786970.2; __utmb=120066461.1.10.1349786970; __utmc=120066461; __utmz=120066461.1349773481.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Range bytes=0-
And response:
Some binary data (maybe 3 or 4kbytes long)
Which looks OK. I assume the 'client' is my Chrome browser. Why is it giving up? How can I fix this? It's driving me mad as I can't find anything on Google :(
When I use the m4v file on jplayer.org this is the request:
GET /video/m4v/Big_Buck_Bunny_Trailer_480x270_h264aac.m4v HTTP/1.1
Host www.jplayer.org
Cache-Control no-cache
Accept-Encoding identity;q=1, *;q=0
Pragma no-cache
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4
Accept */*
Referer http://dev.johnhunt.com.au/
Accept-Language en-US,en;q=0.8
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie __utma=24821473.325705124.1349773077.1349773077.1349773077.1; __utmc=24821473; __utmz=24821473.1349773077.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided)
Range bytes=0-
Response:
Lots of binary data (very long.. working)
Cheers,
John.
I've found that when the Chrome browser sends a "Range: bytes=0-" request, you should NOT respond with a "206 Partial Content" response. To get Chrome to handle the data properly, you need to send back a "200 OK" header.
I don't know if this is correct according to the specs but it gets Chrome to work and does not appear to break other browsers.
Having just run into this with Chrome, it seems that you need to ensure that the Content-Range header is set by your server in the response.
From http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html:
Examples of byte-content-range-spec values, assuming that the entity contains a total of 1234 bytes:
The first 500 bytes: bytes 0-499/1234
The second 500 bytes: bytes 500-999/1234
All except for the first 500 bytes: bytes 500-1233/1234
The last 500 bytes: bytes 734-1233/1234
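For illustration only (the poster's server is Apache, but the header semantics are the same), here is a bare-bones Node sketch that answers a Range: bytes=N- request with a 206 and a matching Content-Range (the file path and port are made up):

var http = require('http');
var fs = require('fs');

var FILE = 'Big_Buck_Bunny_Trailer_480x270_h264aac.m4v'; // illustrative path

http.createServer(function (req, res) {
    var size = fs.statSync(FILE).size;
    var range = req.headers.range; // e.g. "bytes=0-"
    if (range) {
        var start = parseInt(range.replace(/bytes=/, '').split('-')[0], 10);
        res.writeHead(206, {
            'Content-Type': 'video/mp4',
            'Content-Range': 'bytes ' + start + '-' + (size - 1) + '/' + size,
            'Content-Length': size - start,
            'Accept-Ranges': 'bytes',
        });
        fs.createReadStream(FILE, { start: start }).pipe(res);
    } else {
        res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': size });
        fs.createReadStream(FILE).pipe(res);
    }
}).listen(8080);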
Might be a problem on your apache... presumably you are using apache given the tag.
Have you added the mime types to apache?
e.g.
AddType video/mp4 mp4
AddType video/mp4 m4v
Also check that gzip is turned off for the media... it is already compressed... and don't gzip Jplayer.swf.
Can you post your apache config? Are you using any streaming modules such as this?
Cheers
Robin
EDIT
Oh, and you might also want to enable Accept-Ranges: bytes in Apache. If you look closely at the two links, you are serving a 200 and they are serving a 206 (partial content).