While attempting to make a POST request, I am getting the following response:
HTTP/1.1 408 Request body incomplete
Date: Tue, 19 Jul 2022 07:53:39 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Cache-Control: no-cache, must-revalidate
Timestamp: 09:53:39.074
The request body did not contain the specified number of bytes. Got 62.913, expected 116.798
The error is not shown on Chrome's console, it was found using Fiddler.
I can safely say the service itself is not the cause of the issue, because it has been installed on different servers and has always worked, but in case it helps track down the problem, here's the call:
var postJson = function (url, parameters) {
    forceDateToIsoString(parameters);
    var deferred = $.Deferred();
    var ajax = $.ajax({
        type: 'POST',
        url: url,
        data: parameters,
        dataType: 'json'
    }).done(function (data) {
        deferred.resolve(data);
    }).fail(function (error) {
        deferred.reject(error);
    });
    return deferred.promise(ajax);
};
Server settings:
The service that fails is hosted on IIS 8 and has been moved from one server to another with HTTPS protocol set up.
It had been working perfectly on the previous server, but currently some requests that are apparently too large return 408 error responses on some computers over HTTPS.
The server has two separate bindings on the same IP and port (443), with two different certificates linked to two different host names.
Server bindings
Furthermore, we increased the request limit size in IIS to one that exceeds that of the body of the failing request.
IIS config
Client settings:
No difference has been found between the PCs where the service responds and others where it doesn't. Both systems run Windows 10 and the connection was performed on Chrome (even in incognito mode).
The request body was identical on both the working and non-working PCs, with the exact same size (116798 bytes).
On the non-working PCs, the call could be made to succeed by reducing the size of the body, but we don't understand what setting, either on the server or on the client side (PC, router, etc.), could cause this behavior.
Edit:
After some tests, it appears the problem does not happen with Edge; something in Chrome is truncating the AJAX request.
Try explicitly setting Content-Length: 116798 in the request header, because the error comes from the server-side length check. If the declared Content-Length is smaller than the actual body length there will be no error, but if it is larger than the actual length an error is reported, causing the verification to fail.
Related
I'm creating an HTTP Web API where some of my resources will be cacheable. A cacheable resource will have two operations, GET and PUT. The GET will return response headers of Cache-Control: public,max-age=3600 and Etag: "2kuSN7rMzfGcB2DKt67EqDWQELA". The PUT will require the If-Match header, which will contain the Etag value from a GET of the same resource.
My goal is to have the browser cache invalidate a resource when I PUT to that resource. This works fine until I add the If-Match header to the PUT request. When the PUT request contains the If-Match header, subsequent GET requests will fetch from the cache, which would be stale data.
This is the behavior I've been experiencing with Chrome. Firefox doesn't behave like this, and works as I assume it should. Is this a bug in Chrome or am I misunderstanding some part of the HTTP spec?
Here are some example requests to show behavior:
//correctly fetches from origin server (returns 200)
GET http://localhost/api/my-number/1
Response Headers
cache-control: public,max-age=3600
etag: "2kuSN7rMzfGcB2DKt67EqDWQELA"
Response Body
7
//correctly fetches from disk cache (returns 200)
GET http://localhost/api/my-number/1
Response Headers
cache-control: public,max-age=3600
etag: "2kuSN7rMzfGcB2DKt67EqDWQELA"
Response Body
7
//correctly updates origin server (returns 200)
PUT http://localhost/api/my-number/1
Request Headers
if-match: "2kuSN7rMzfGcB2DKt67EqDWQELA"
Request Body
8
//incorrectly still fetches from disk cache (returns 200)
GET http://localhost/api/my-number/1
Response Headers
cache-control: public,max-age=3600
etag: "2kuSN7rMzfGcB2DKt67EqDWQELA"
Response Body
7
This is indeed incorrect behavior. RFC 7234 says:
A cache MUST invalidate the effective Request URI... as well as the URI(s) in the Location and Content-Location
response header fields (if present) when a non-error status code is
received in response to an unsafe request method.
Given that, the bug report you filed looks appropriate to me.
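Until it is fixed, a possible workaround is to force the next GET to revalidate instead of relying on the browser to invalidate the cached entry. A minimal sketch using fetch (the URL and Etag are the ones from the question; the workaround itself is my suggestion, not part of the original answer):

// Sketch: after a successful PUT, request the resource again with
// cache: 'no-cache' so the browser revalidates with the origin server
// instead of serving the stale cached body.
fetch('http://localhost/api/my-number/1', {
    method: 'PUT',
    headers: { 'If-Match': '"2kuSN7rMzfGcB2DKt67EqDWQELA"' },
    body: '8'
}).then(function () {
    return fetch('http://localhost/api/my-number/1', { cache: 'no-cache' });
}).then(function (response) {
    return response.text();
}).then(function (value) {
    console.log(value); // expected to log the updated value, 8
});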
When I try to respond with an HTTP 204 status, my Chrome browser starts a download that fails.
Request:
Request URL: https://dummy.page/dummyRequest
Request Method: GET
Status Code: 204
Remote Address: [dummy]:443
Referrer Policy: no-referrer-when-downgrade
Response:
date: Fri, 08 Mar 2019 08:24:05 GMT
server:
status: 204
When I use the dev tools to inspect the response, Chrome says "Failed to load response data" and in Firefox I can see one empty line.
My server returns a Response via Java:
return Response.noContent().build();
I also tried to return NULL at this point but that did not change anything.
The whole thing works fine in Firefox, but when I try it in Chrome it starts a download of "dummyRequest" (from the URL), which fails.
So what I want to know is: why is Chrome starting a download, and what can I do about it?
Thanks for helping ;)
I came across the same issue with 204 responses. What worked for me was checking the Content-Type response header on the server-side before sending the response.
My 204 responses were sending a default Content-Type of application/octet-stream (from the link: "An unknown file type should use this type. Browsers pay a particular care when manipulating these files"). When switching the Content-Type to something different (I chose text/html), the trigger for downloads stopped.
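The question's server uses JAX-RS; purely as an illustration of the same idea, a Node/Express handler (my assumption, not part of the question) might set an explicit, non-binary Content-Type before sending the 204:

// Sketch: reply with 204 and a text Content-Type so the browser does not
// treat the empty response as a file download.
app.get('/dummyRequest', function (req, res) {
    res.type('text/html');    // override the default application/octet-stream
    res.status(204).end();    // no content, nothing to download
});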
I am attempting to retrieve some data from a 3rd-party domain. When I enter the request URL directly in the browser, I am able to see the data I requested. But when I attempt to make the call using AJAX (to a different domain), it returns an error message. Why am I not able to retrieve the data? Might it have something to do with the cross-domain policy and not using JSONP? Here is my code:
<script>
$(document).ready(function() {
    $.ajax({
        type: 'GET',
        url: 'https://crm.zoho.com/crm/private/json/Potentials/searchRecords?authtoken=xxx&scope=crmapi&criteria=(((Potential%20Email:test2#email.com))&selectColumns=Potentials(Potential%20Name)&fromIndex=1&toIndex=1',
        dataType: 'json',
        success: function(test) {
            alert(JSON.stringify(test));
        },
        error: function(test) {
            alert(JSON.stringify(test));
        }
    });
});
</script>
Because the request you sent is blocked by the browser. When you perform a request using an XMLHttpRequest object (that is, JavaScript), the browser applies the cross-domain policy defined by the W3C and checks the origin and target of the URL (protocol, host and port); if those elements belong to different domains (i.e. host and port differ), then the request never leaves the browser (a.k.a. the user agent). You can use JSONP to get around this policy: it is simply a script tag whose resource (src) is defined on a different domain, with a parameter such as "jsonCallback=?" added to the query string, and it receives the data in JSON format. This is uglier and carries a security risk, so it should never be used.
The other method is to enable a technique (it is more than that) known as CORS (Cross-Origin Resource Sharing), where the client (browser) and the server (the resource on a different domain) send, exchange and negotiate HTTP headers to ensure that the sender and the receiver are authorized to exchange information. The basic steps to implement CORS are:
Explicitly define on the client (ajax-jQuery) that CORS will be used in the request, by specifying crossDomain: true. This enables the HTTP headers defined by CORS.
Specify on the HTTP server an HTTP header indicating which origin domains have permission to call a resource hosted on the server. The most general header is Access-Control-Allow-Origin, with a domain as its value, such as "*" (all origins allowed): Access-Control-Allow-Origin: *
Some browsers first send a "preflight request", a discovery step to find out whether the server is prepared to receive cross-origin requests. This request uses the HTTP method (verb) OPTIONS, so the server must also be configured to accept requests with method OPTIONS and, in turn, to allow HTTP methods like PUT, DELETE, POST or GET. In general terms, when the request method is OPTIONS, the server must send these headers:
Access-Control-Allow-Methods: POST, PUT, DELETE, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type, Accept
Finally, the client (AJAX) will receive the data from the server.
This may sound a little confusing, but the steps are few and CORS is really not hard to understand; a minimal sketch of both sides is shown below.
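A minimal sketch, assuming a jQuery client and an Express server (the URL and the Express middleware are illustrations, not taken from the question):

// Client side (jQuery): step 1, mark the call as a CORS request.
$.ajax({
    type: 'GET',
    url: 'https://api.example.com/data',   // placeholder URL on another domain
    dataType: 'json',
    crossDomain: true,
    success: function (data) { console.log(data); },
    error: function (err) { console.log(err); }
});

// Server side (Express, as an example): steps 2 and 3, send the CORS headers
// and answer the OPTIONS preflight request.
app.use(function (req, res, next) {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'POST, PUT, DELETE, GET, OPTIONS');
    res.header('Access-Control-Allow-Headers', 'Content-Type, Accept');
    if (req.method === 'OPTIONS') return res.sendStatus(200);
    next();
});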
I hope this will help.
References from Mozilla:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
This shows what CORS is and what you can use to configure your server:
http://enable-cors.org/
I have a web page that returns the following header when I access material:
HTTP/1.1 200 OK
Date: Sat, 29 Jun 2013 15:57:25 GMT
Server: Apache
Content-Length: 2247515
Cache-Control: no-cache, no-store, must-revalidate, max-age=-1
Pragma: no-cache, no-store
Expires: -1
Connection: close
Using a chrome extension, I want to modify this response header so that the material is actually cached instead of wasting bandwidth.
I have the following sample code:
chrome.webRequest.onHeadersReceived.addListener(function(details)
{
    // Delete the required elements
    removeHeader(details.responseHeaders, 'pragma');
    removeHeader(details.responseHeaders, 'expires');
    // Modify cache-control
    updateHeader(details.responseHeaders, 'cache-control', 'max-age=3600;');
    console.log(details.url);
    console.log(details.responseHeaders);
    return {responseHeaders: details.responseHeaders};
},
{urls: ["<all_urls>"]}, ['blocking', 'responseHeaders']
);
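removeHeader and updateHeader are not part of the chrome.webRequest API; they are custom helpers not shown here. A hypothetical implementation, assuming they operate on the responseHeaders array of {name, value} objects:

// Hypothetical helpers for the listener above.
function removeHeader(headers, name) {
    for (var i = headers.length - 1; i >= 0; i--) {
        if (headers[i].name.toLowerCase() === name) {
            headers.splice(i, 1);               // drop the matching header in place
        }
    }
}

function updateHeader(headers, name, value) {
    for (var i = 0; i < headers.length; i++) {
        if (headers[i].name.toLowerCase() === name) {
            headers[i].value = value;           // overwrite the existing value
            return;
        }
    }
    headers.push({ name: name, value: value }); // add it if it was absent
}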
Which correctly modifies the header to something like this (based on the console.log() output):
HTTP/1.1 200 OK
Date: Sat, 29 Jun 2013 15:57:25 GMT
Server: Apache
Content-Length: 2247515
Cache-Control: max-age=3600
Connection: close
But based on everything I have tried to check this, I cannot see any evidence whatsoever that this has actually happened:
The cache does not contain an entry for this file
The Network tab in the Developer Console shows no change at all to the HTTP response (I have tried even trivial modifications just to make sure it's not an error, but still no change).
The only real hints I can find are this question which suggests that my approach still works and this paragraph on the webRequest API documentation which suggests that this won't work (but doesn't explain why I can't get any changes whatsoever):
Note that the web request API presents an abstraction of the network
stack to the extension. Internally, one URL request can be split into
several HTTP requests (for example to fetch individual byte ranges
from a large file) or can be handled by the network stack without
communicating with the network. For this reason, the API does not
provide the final HTTP headers that are sent to the network. For
example, all headers that are related to caching are invisible to the
extension.
Nothing is working whatsoever (I can't modify the HTTP response header at all) so I think that's my first concern.
Any suggestions at where I could be going wrong or how to go about finding what is going wrong here?
If its not possible, are there any other ways to achieve what I am trying to achieve?
I recently spent some hours trying to get a file cached, and discovered that the chrome.webRequest and chrome.declarativeWebRequest APIs cannot force resources to be cached, in any way.
The Cache-Control (and other) response headers can be changed, but the change will only be visible through the getResponseHeader method, not in the caching behaviour.
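In other words, you can confirm from page JavaScript that the rewrite happened, but Chrome's HTTP cache will not act on it. A quick check, assuming the extension above is running ('/material' is a placeholder URL):

// Sketch: the rewritten header is readable here, yet caching still follows
// the original response headers.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/material', true);
xhr.onload = function () {
    console.log(xhr.getResponseHeader('Cache-Control'));  // e.g. "max-age=3600"
};
xhr.send();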
In my Ruby on Rails application I tried to upload an image through the POSTMAN REST client in Base64 format. When I POST the image I am getting a 406 Not Acceptable Response. When I checked my database, the image was there and was successfully saved.
What is the reason for this error, is there anything I need to specify in my header?
My request:
URL --- http://localhost:3000/exercises.json
Header:
Content-Type - application/json
Raw data:
{
"exercise": {
"subbodypart_ids": [
"1",
"2"
],
"name": "Exercise14"
},
"image_file_name": "Pressurebar Above.jpg",
"image":"******base64 Format*******"
}
Your operation did not fail.
Your backend service is saying that the response type it returns is not among the types listed in the Accept HTTP header of your client request.
Ref: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields
Find out the content type returned by the service.
Provide this content type in your request's Accept header.
http://en.wikipedia.org/wiki/HTTP_status_code -> 406
406 Not Acceptable
The resource identified by the request is only capable of generating response entities which have content characteristics not
acceptable according to the accept headers sent in the request.
406 happens when the server cannot respond with the accept-header specified in the request.
In your case it seems application/json for the response may not be acceptable to the server.
You mentioned you're using Ruby on Rails as a backend. You didn't post the code for the relevant method, but my guess is that it looks something like this:
def create
  post = Post.create params[:post]
  respond_to do |format|
    format.json { render :json => post }
  end
end
Change it to:
def create
  post = Post.create params[:post]
  render :json => post
end
And it will solve your problem. It worked for me :)
"Sometimes" this can mean that the server had an internal error, and wanted to respond with an error message (ex: 500 with JSON payload) but since the request headers didn't say it accepted JSON, it returns a 406 instead. Go figure. (in this case: spring boot webapp).
In which case, your operation did fail. But the failure message was obscured by another.
You can also receive a 406 response when invalid cookies are stored or referenced in the browser - for example, when running a Rails server in Dev mode locally.
If you happened to run two different projects on the same port, the browser might reference a cookie from a different localhost session.
This has happened to me...tripped me up for a minute. Looking in browser > Developer Mode > Network showed it.
const request = require('request');

const headers = {
    'Accept': '*/*',
    'User-Agent': 'request',
};

const options = {
    url: "https://example.com/users/6",
    headers: headers
};

request.get(options, (error, response, body) => {
    console.log(response.body);
});
Changing the header to Accept: */* resolved my issue; also make sure you don't have any other Accept header.
In my case, adding:
Content-Type: application/x-www-form-urlencoded
solved my problem completely.
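For example, with jQuery this is just the contentType option (the client, URL and payload here are assumptions for illustration, not from the original answer):

// Sketch: send the request body as form-encoded data with an explicit Content-Type.
$.ajax({
    type: 'POST',
    url: '/exercises.json',                            // placeholder URL
    contentType: 'application/x-www-form-urlencoded',  // the header that fixed the 406
    data: { name: 'Exercise14' },
    success: function (data) { console.log(data); }
});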
If you are using 'request.js' you might use the following:
var options = {
    url: 'localhost',
    method: 'GET',
    headers: {
        Accept: '*/*'
    }
}

request(options, function (error, response, body) {
    ...
})
In my case, for an API in .NET Core, the API is set to work with XML (by default it responds with JSON), so I added this attribute to my controller:
[Produces("application/xml")]
public class MyController : ControllerBase {...}
Thank you for putting me on the right path!
It could also be due to a firewall blocking the request. In my case the request payload contained string properties - "like %abc%" and ampersand symbol "&" - which caused the firewall to think it is a security risk (eg. a sql injection attack) and it blocked the request. Note here the request does not actually go to the server but is returned at the firewall level itself.
In my case, there were no application server logs generated so I knew that the request did not actually reach the server and was blocked before that. The logs that helped me were Web application firewall (WAF) logs.