I have read nearly 20 other posts about this particular error, but most of them turn out to be code calling Response.Close or similar, which is not our case. I understand this error typically means the user browsed away from the web page or cancelled the request midway, but in our case we are getting it without cancelling anything. I can observe the error after just a few seconds: the download simply fails in the browser (both Chrome and IE, so it's not browser specific).
We have a web api controller that serves a file download.
[HttpGet]
public HttpResponseMessage Download()
{
    // Enumerates a directory and returns a read-only FileStream of the download
    var stream = dataProvider.GetServerVersionAssemblyStream(configuration.DownloadDirectory, configuration.ServerVersion);
    if (stream == null)
    {
        return new HttpResponseMessage(HttpStatusCode.NotFound);
    }

    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(stream)
    };
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
    response.Content.Headers.ContentDisposition.FileName = $"{configuration.ServerVersion}.exe";
    response.Content.Headers.ContentType = new MediaTypeHeaderValue(MediaTypeNames.Application.Octet);
    response.Content.Headers.ContentLength = stream.Length;
    return response;
}
Is there something incorrect we are doing in our Download method, or is there something we need to tweak in IIS?
This happens sporadically; I can't observe a pattern. It works sometimes, and other times it fails repeatedly.
The file download is about 150 MB.
The download is initiated from a hyperlink on our web page; there is no special calling code.
The download is over HTTPS (HTTP is disabled).
The Web API is hosted on Azure.
It doesn't appear to be timing out: it can happen after just a second or two, so it's not hitting the default 30-second timeout values.
I also noticed I can't seem to initiate multiple file downloads from the server at once, which is concerning. This needs to serve 150+ businesses and multiple simultaneous downloads, so I'm worried there is something we need to tweak in IIS or the Web API.
I was finally able to fix our problem. For us it turned out to be a combination of two things: 1) we had several memory leaks and CPU-intensive code in our Web API that was impacting concurrent downloads, and 2) we ultimately resolved the error by changing MinBytesPerSecond (see: https://blogs.msdn.microsoft.com/benjaminperkins/2013/02/01/its-not-iis/) to a lower value, or 0 to disable it. We have not had an issue since.
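For reference, the setting the linked post describes lives in IIS configuration under system.applicationHost/webLimits. A minimal sketch of the change (exact placement may differ depending on how you apply configuration on Azure; treat this as illustrative, not a drop-in file):
<configuration>
  <system.applicationHost>
    <!-- 0 disables the minimum-throughput check that kills slow connections -->
    <webLimits minBytesPerSecond="0" />
  </system.applicationHost>
</configuration>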
We have been using U2F directly on our auth web app, with its hostname as our app ID (https://auth.company.com), and that's working fine. However, we'd like to be able to authenticate against the auth server from other apps (and hostnames, e.g. https://customer.app.com) that communicate with the auth server via an HTTP API.
I can generate the sign requests and whatnot through API calls and return them to the client apps, but it fails server-side (on the auth server) because the app ID doesn't validate (the clients are using their own hostnames as app IDs). This is understandable, but how should I handle it? I've read about facets but I cannot get it to work at all.
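For reference, my understanding of the FIDO AppID and Facets spec is that the app ID should be an HTTPS URL serving a trusted facet list, roughly like the following (the hostnames here are just the ones from our setup):
{
  "trustedFacets": [{
    "version": { "major": 1, "minor": 0 },
    "ids": [
      "https://auth.company.com",
      "https://customer.app.com"
    ]
  }]
}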
The client app JS is like:
var registerRequests = // ...
var signRequests = // ...
u2f.register('http://localhost:3000/facets', registerRequests, signRequests, function (registerResponse) {
    if (registerResponse.errorCode) {
        return alert("Registration error: " + registerResponse.errorCode);
    }
    // etc.
});
This gives me an error code 5 (timeout error) after a while, and I don't see any request to /facets. Is there a way around this, or am I barking up the wrong tree (or in a different forest)?
————
Okay, so after a few hours of researching this, I'm pretty sure this fiendish bit of the Firefox U2F plugin is the source of some of my woes:
if (u.scheme == "http")
    if (url2str(u, true) == url2str(ou, true))
        return resolve(challenge);
    else
        return reject("Not matching appID");
https://github.com/prefiks/u2f4moz/blob/master/ext/appIdValidator.js#L106-L110
It essentially says: if the app ID's scheme is http, only allow it if it's exactly the same as the page's host (it goes on to implement the behaviour of fetching the trusted facets JSON, but only for https).
Still not sure if I'm on the right track though in how I'm trying to design this.
I didn't need to worry about facets for my particular situation. In the end I just pass the client app's hostname through to the auth server via the secure API interface, and it uses that as the app ID. It seems to work okay so far.
The issue I was having with facets came down to using http in development, and the Firefox U2F plugin not permitting that with JSON facets.
Via GitHub I installed the 2.0.3.2 RC version on my DigitalOcean VPS. All seemed to work fine, but just like many others I got problems with the JSON syntax error. I spent hours reading through forum pages about:
API users that have to be created
API users that have to be appointed
Maintenance mode that has to be switched off
the json = array(); solution
and cURL loopback restrictions (including the vqmod cURL loopback workaround): http://forum.opencart.com/viewtopic.php?f=191&t=146714
None of these solutions seemed to work... When I finally found out that I had restricted access to my VPS by IP address and removed that restriction, the order history update seemed to work fine, so I assumed all was OK.
Today, when I tried to edit an order, the same error came popping up, so I started going over the forums again for a solution.
While heavily frustrated and trying things, I bumped into this strange behaviour: on the first page of order editing I get the error, but when I select the standard shop everything works fine and I can edit the order exactly how I want. However, when I switch the option back to the store the order was placed in, it responds immediately with the same error (see attachment).
Are there any other multistore users on 2.0.3+ whose shops are working fine?
Could you think this through with me? Could it be something with the Cross-Origin Resource Sharing policy? All suggestions are welcome!
Go to Settings, edit your store (not the Default one),
and on the first tab (General), make sure that your SSL URL is set.
If you don't have SSL, then set the same value as the Store URL.
Hope this helps.
Probably a cross-origin policy issue, as you mentioned. I solved this on 1.5.6, as well as the cross-domain cookie issue (which, to my knowledge, has never worked properly on any version), by adding:
xhrFields: { withCredentials: true },
to the AJAX request, as well as setting Access-Control-Allow-Credentials in the response headers. The trick here is that for cross-origin headers to work this way, you need to explicitly declare the URL that is allowed (i.e., Header set Access-Control-Allow-Origin "*" will not work). The next trick is that you don't want to accept these headers from any and every URL.
To work around this, I added something like the following to the manual.php controller, which in 2.0+ would be api/order.php (and, for cross-domain cookie sharing, common/header.php as well):
$this->load->model('setting/store');

$allowed[] = trim(HTTP_SERVER, '/');
$allowed[] = trim(HTTPS_SERVER, '/');

$stores = $this->model_setting_store->getStores();

foreach ($stores as $store) {
    if ($store['url']) $allowed[] = strtolower(trim($store['url'], '/'));
    if ($store['ssl']) $allowed[] = strtolower(trim($store['ssl'], '/'));
}

if (isset($this->request->server['HTTP_REFERER'])) {
    $url_parts = parse_url($this->request->server['HTTP_REFERER']);
    $origin = strtolower($url_parts['scheme'] . '://' . $url_parts['host']);

    if (in_array($origin, $allowed)) {
        header("access-control-allow-origin: " . $origin);
        header("access-control-allow-credentials: true");
    } else {
        header("access-control-allow-origin: *");
    }
} else {
    header("access-control-allow-origin: *");
}

header("access-control-allow-headers: Origin, X-Requested-With, Content-Type, Accept");
header("access-control-allow-methods: PUT, GET, POST, DELETE, OPTIONS");
This basically builds an array of all acceptable URLs, and if the request comes from one of them, it sets the HTTP headers explicitly to allow cookies and session data. It was primarily a fix for cross-domain cookie sharing, but I have a feeling it may be helpful for working around the 2.0 API issue as well.
A colleague of mine found out that the API calls are always done over SSL, so all I had to do was add the normal store URL in the SSL field in the settings of the store (not the main one).
I am using the offline HTML5 functionality to cache my web application.
It works fine some of the time, but there are certain circumstances in which it behaves strangely. I am trying to figure out why, and how I can fix it.
I am using Sammy, and I think that might be related.
Here is when it goes wrong:
1. Browse to my page http://domain/App (note: I haven't included a slash after /App)
2. I am then redirected to http://domain/App/#/ by Sammy
3. Everything is cached (including images)
4. I go offline; I am using a virtual machine for this, so I unplug the virtual network adapter
5. I close the browser
6. I reopen the browser and browse to my page http://domain/App/#/
7. The content shows, except for the images
Everything works fine if in step 1 I browse to http://domain/App/ including the trailing slash (presumably because http://domain/App and http://domain/App/ are different URLs as far as the application cache's master entries are concerned).
There are some other weird states it gets into where the Sammy routes are not called, so the page remains blank, but I haven't been able to reliably replicate that.
UPDATE: The steps above did cause problems before, but it is now working when I follow them, so it is hard to say exactly what is going on. I am starting from a consistent state every time, because I start from a snapshot in a VM.
My cache manifest looks like this:
CACHE MANIFEST
javascripts/jquery-1.4.2.js
javascripts/sammy/sammy.js
javascripts/json_store.js
javascripts/sammy/plugins/sammy.template.js
stylesheets/jsonstore.css
templates/item.template
templates/item_detail.template
images/1Large.jpg
images/1Small.jpg
images/2Large.jpg
images/2Small.jpg
images/3Large.jpg
images/3Small.jpg
images/4Large.jpg
images/4Small.jpg
index.html
I'm running into a similar issue as well.
I think part of the problem is that jQuery's ajax is misinterpreting the response. I believe Sammy uses jQuery to make its AJAX calls, which is what leads to the errors.
Here's a code snippet I used to test for this (though it's not a solution):
this.get('#/', function (context) {
    var uri = 'index.html';

    // what I'm trying to call
    context.partial(uri, {}); // fails on some browsers after initial caching

    // shows that jQuery's ajax is misinterpreting the response
    $.ajax({
        url: uri,
        success: function (data, textStatus, jqXHR) {
            alert('success');
            alert(data);
        },
        error: function (jqXHR, textStatus, errorThrown) {
            alert('error');
            if (jqXHR.status == 0) { // this is actually a success
                alert(jqXHR.responseText);
            } else {
                alert('error code: ' + jqXHR.status); // probably a real error
            }
        }
    });
});
I have read some posts on this topic, and the answers are comet, reverse AJAX, HTTP streaming, server push, etc.
How does incoming mail notification on Gmail works?
How is GMail Chat able to make AJAX requests without client interaction?
I would like to know if there are any code references that I can follow to write a very simple example. Many posts and websites just talk about the technology; it is hard to find a complete code sample. Also, it seems many methods can be used to implement comet, e.g. a hidden iframe or XMLHttpRequest. In my opinion, using XMLHttpRequest is the better choice. What do you think of the pros and cons of the different methods? Which one does Gmail use?
I know it needs to be done on both the server side and the client side.
Is there any PHP and JavaScript sample code?
The way Facebook does this is pretty interesting.
A common method of doing such notifications is to poll a script on the server (using AJAX) at a given interval (perhaps every few seconds) to check whether something has happened. However, this can be pretty network-intensive, and you often make pointless requests because nothing has happened.
The way Facebook does it is using the comet approach: rather than polling on an interval, as soon as one poll completes it issues another one. However, each request to the script on the server has an extremely long timeout, and the server only responds to the request once something has happened. You can see this happening if you bring up Firebug's Console tab while on Facebook, with requests to a script possibly taking minutes. It is quite ingenious really, since this method cuts down immediately on both the number of requests and how often you have to send them. You effectively now have an event framework that allows the server to 'fire' events.
Behind this, in terms of the actual content returned from those polls, it's a JSON response with what appears to be a list of events and info about them. It's minified, though, so it's a bit hard to read.
In terms of the actual technology, AJAX is the way to go here, because you can control request timeouts and many other things. I'd recommend (Stack Overflow cliché here) using jQuery to do the AJAX; it'll take a lot of the cross-compatibility problems away. In terms of PHP, you could simply poll an event-log database table in your PHP script, and only return to the client when something happens. There are, I expect, many ways of implementing this.
Implementing:
Server Side:
There appear to be a few implementations of comet libraries in PHP but, to be honest, it really is very simple; something perhaps like the following pseudocode:
while (!has_event_happened()) {
    sleep(5);
}

echo json_encode(get_events());
The has_event_happened function would just check whether anything had happened in an events table or something, and the get_events function would return a list of the new rows in the table. It really depends on the context of the problem.
Don't forget to change your PHP max execution time, otherwise it will time out early!
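To make that concrete, here is a minimal sketch of such an endpoint in PHP. The events table, its columns, the last_id parameter and the connection details are assumptions for illustration, not anything standard:
<?php
// poll.php - minimal long-polling endpoint (sketch; assumes an `events` table
// with an auto-incrementing `id` column and a JSON-friendly `payload` column)
set_time_limit(70); // let the request outlive the default max execution time

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$lastId   = isset($_GET['last_id']) ? (int)$_GET['last_id'] : 0;
$deadline = time() + 60; // give up after a minute; the client will simply re-poll

do {
    $stmt = $pdo->prepare('SELECT id, payload FROM events WHERE id > ?');
    $stmt->execute(array($lastId));
    $events = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if ($events) break; // something happened: respond immediately

    sleep(2); // release the CPU while nothing is happening
} while (time() < $deadline);

header('Content-Type: application/json');
echo json_encode($events ? $events : array());
The client would pass the highest id it has seen back as last_id, so the server knows where to resume.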
Client Side:
Take a look at the jQuery plugin for doing Comet interaction:
Project homepage: http://plugins.jquery.com/project/Comet
Google Code: https://code.google.com/archive/p/jquerycomet/ - Appears to have some sort of example usage in the subversion repository.
That said, the plugin seems to add a fair bit of complexity. It really is very simple on the client; perhaps (with jQuery) something like:
function doPoll() {
    $.get("events.php", {}, function (result) {
        $.each(result.events, function (i, event) { // iterate over the events
            // do something with your event
        });

        // this effectively causes the poll to run again as
        // soon as the response comes back
        doPoll();
    }, 'json');
}
$(document).ready(function () {
    $.ajaxSetup({
        timeout: 1000 * 60 // set a global AJAX timeout of a minute
    });

    doPoll(); // do the first poll
});
The whole thing depends a lot on how your existing architecture is put together.
Update
As I continue to receive upvotes on this, I think it is reasonable to remember that this answer is 4 years old. The web has grown at a really fast pace, so please be mindful of that when reading this answer.
I had the same issue recently and researched the subject.
The solution given is called long polling, and to use it correctly you must make sure that your AJAX request has a "large" timeout, and always make a new request after the current one ends (whether by timeout, error or success).
Long Polling - Client
Here, to keep the code short, I will use jQuery:
function pollTask() {
    $.ajax({
        url: '/api/Polling',
        async: true,      // by default it's async, but...
        dataType: 'json', // or the dataType you are working with
        timeout: 10000,   // IMPORTANT! this is a 10-second timeout
        cache: false
    }).done(function (eventList) {
        // Handle your data here
        var data;

        for (var eventName in eventList) {
            data = eventList[eventName];
            dispatcher.handle(eventName, data); // handle the `eventName` with `data`
        }
    }).always(pollTask);
}
It is important to remember that (from the jQuery docs):
In jQuery 1.4.x and below, the XMLHttpRequest object will be in an invalid state if the request times out; accessing any object members may throw an exception. In Firefox 3.0+ only, script and JSONP requests cannot be cancelled by a timeout; the script will run even if it arrives after the timeout period.
Long Polling - Server
It is not in any specific language, but it would be something like this:
function handleRequest() {
    while (!anythingHappened() && !hasTimedOut()) {
        sleep(2);
    }

    return events();
}
Here, hasTimedOut makes sure your code does not wait forever, and anythingHappened checks whether any event has happened. The sleep releases your thread to do other stuff while nothing is happening. events returns a dictionary of events (or any other data structure you may prefer) in JSON format (or any other format you prefer).
It surely solves the problem but, if you are concerned about scalability and performance as I was when researching, you might consider another solution I found.
Solution
Use sockets!
On the client side, to avoid any compatibility issues, use socket.io. It tries to use WebSockets directly, and falls back to other solutions when they are not available.
On the server side, create a server using Node.js (example here). The client subscribes to a channel (observer pattern) created on the server. Whenever a notification has to be sent, it is published to this channel, and the subscriber (client) gets notified.
If you don't like this solution, try APE (Ajax Push Engine).
Hope this helps.
According to a slideshow about Facebook's messaging system, Facebook uses the comet technology to "push" messages to web browsers. Facebook's comet server is built on the open-source Erlang web server mochiweb.
In the picture below, the phrase "channel clusters" means "comet servers".
Many other big web sites build their own comet servers, because every company's needs differ. But building your own comet server on top of an open-source one is a good approach.
You can try icomet, a C1000K C++ comet server built with libevent. icomet also provides a JavaScript library that is as simple to use as:
var comet = new iComet({
    sign_url: 'http://' + app_host + '/sign?obj=' + obj,
    sub_url: 'http://' + icomet_host + '/sub',
    callback: function (msg) {
        // on server push
        alert(msg.content);
    }
});
icomet supports a wide range of browsers and OSes, including Safari (iOS, Mac), IE (Windows), Firefox, Chrome, etc.
Facebook uses MQTT instead of HTTP. Push is better than polling: over HTTP we need to poll the server continuously, whereas with MQTT the server pushes messages to the clients.
Comparison between MQTT and HTTP: http://www.youtube.com/watch?v=-KNPXPmx88E
Note: my answer best fits mobile devices.
One important issue with long polling is error handling.
There are two types of errors:
1. The request might time out, in which case the client should re-establish the connection immediately. This is a normal event in long polling when no messages have arrived.
2. A network error or an execution error. This is an actual error, which the client should gracefully accept while waiting for the server to come back online.
The main issue is that if your error handler re-establishes the connection immediately for a type 2 error as well, the clients would DoS the server.
Both answers with code samples miss this.
function longPoll() {
    var shouldDelay = false;

    $.ajax({
        url: 'poll.php',
        async: true,      // by default it's async, but...
        dataType: 'json', // or the dataType you are working with
        timeout: 10000,   // IMPORTANT! this is a 10-second timeout
        cache: false
    }).done(function (data, textStatus, jqXHR) {
        // do something with data...
    }).fail(function (jqXHR, textStatus, errorThrown) {
        shouldDelay = textStatus !== "timeout";
    }).always(function () {
        // in case of a network error, throttle so we don't DoS ourselves;
        // if it was a timeout, it's normal operation, so go again immediately
        var delay = shouldDelay ? 10000 : 0;
        window.setTimeout(longPoll, delay);
    });
}

longPoll(); // fire the first request