Building web server using chrome.socket API - google-chrome

I've built a web server using Chrome Packaged Apps. The problem I see repeatedly is that chrome.socket.accept() and chrome.socket.write() don't invoke their callback functions. The server usually works more or less reliably if the request rate stays below one request per second. Above that, I start seeing errors or missing callbacks.
I ran similar tests with the sample "webserver" app built by Google (https://github.com/GoogleChrome/chrome-app-samples/tree/master/webserver). It has the same problem: it usually takes fewer than 100 requests before the web server stops responding. The easiest way to reproduce the problem is to use the Chrome browser as a client and hold the F5 key for a few seconds.
It would be desirable to have a sample app that demonstrates how to build a reliable web server using chrome.socket. So far I have tried several workarounds that monitor the situation from within the app and restart the socket when it stops working, but that isn't easy because there is no reliable way to check the status of the connection, or the status of the last operation, when a callback is not fired. I tried the getInfo() method, but it always returns connected=true regardless of the situation.
I have seen this on both Windows 7 and Chrome OS (a Chromebook).
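For readers unfamiliar with the API, here is a minimal sketch of the accept/write pattern in question. The port, backlog, and response body are arbitrary choices, error handling is reduced to the bare minimum, and the app manifest is assumed to declare the "socket" permission:

// Minimal chrome.socket HTTP server sketch. Under load, the accept() and
// write() callbacks below are exactly the ones that sometimes never fire.
function startServer() {
  chrome.socket.create('tcp', {}, function (createInfo) {
    chrome.socket.listen(createInfo.socketId, '0.0.0.0', 8080, 50, function (result) {
      if (result === 0) acceptLoop(createInfo.socketId);
    });
  });
}

function acceptLoop(serverSocketId) {
  chrome.socket.accept(serverSocketId, function (acceptInfo) {
    acceptLoop(serverSocketId); // queue the next accept immediately
    if (acceptInfo.resultCode !== 0) return; // accept failed
    var clientId = acceptInfo.socketId;
    var body = 'hello';
    var response = 'HTTP/1.1 200 OK\r\nContent-Length: ' + body.length +
                   '\r\nConnection: close\r\n\r\n' + body;
    chrome.socket.write(clientId, stringToArrayBuffer(response), function (writeInfo) {
      // writeInfo.bytesWritten < 0 indicates an error, but under load this
      // callback may simply never be invoked, which is the bug described above.
      chrome.socket.destroy(clientId);
    });
  });
}

function stringToArrayBuffer(str) {
  var view = new Uint8Array(str.length);
  for (var i = 0; i < str.length; i++) view[i] = str.charCodeAt(i);
  return view.buffer;
}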

Just an update on this: according to this, the issue is now fixed.

There are still other problems with the sample web server application. I noticed that I could make the sample app lock up by holding down Ctrl-R in the browser. I wrote a more robust one that you can use here: https://github.com/kzahel/web-server-chrome


CreatePushNotificationChannelForApplicationAsync throws 0x880403E9 error

I am making a Windows Phone 8.1 (WinRT flavor) application and am stuck on a problem with push notifications.
await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
This function returns a PushNotificationChannel and it usually works well. But on some of our devices, this function throws a 0x880403E9 error.
MSDN says:
0x880403E9 The notification platform is in the process of reconnecting back to the WNS cloud due to an earlier network connectivity change. Apps should retry the channel request later using an exponential back-off strategy.
I don't think retrying the request will ever solve this problem: the function has kept throwing the exception for a month, even though we implemented an exponential back-off retry strategy.
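For completeness, the back-off pattern MSDN recommends looks roughly like this. The sketch assumes a WinJS app (the JavaScript projection camel-cases the method name), and the delays and retry count are arbitrary:

// Hypothetical exponential back-off around the channel request.
function openChannelWithBackoff(attempt) {
  attempt = attempt || 0;
  return Windows.Networking.PushNotifications.PushNotificationChannelManager
    .createPushNotificationChannelForApplicationAsync()
    .then(null, function (err) {
      if (attempt >= 5) throw err; // give up after a handful of tries
      var delayMs = 1000 * Math.pow(2, attempt); // 1s, 2s, 4s, 8s, 16s
      return WinJS.Promise.timeout(delayMs).then(function () {
        return openChannelWithBackoff(attempt + 1);
      });
    });
}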
What's worse, the broken devices used to work with this function. Once a device breaks for some mysterious reason, it never fixes itself. The first time we hit this problem we factory-reset the device and the problem went away, but on other devices a reset was not a solution.
Some say updating to the latest version might solve this problem, but it does not. Even on Windows Phone 8.1 Update 1 (8.10.14157.200), the problem still occurs.
Does anyone know about this problem?
Microsoft answered this question via email.
The problem happens when:
you don't have a USIM card, or
you have exhausted your data plan, or
the 3G/LTE network doesn't work because of some problem.
Even if you are connected to a Wi-Fi network, the problem still occurs. I think Windows Phone 8.1 always tries to handle push notifications over the 3G/LTE network if the 3G/LTE data connection is turned on.
If you can't use the 3G/LTE network for one of the reasons above, try turning off the data connection in the system settings. Push notifications will then be processed over the Wi-Fi network.
They also said this is not going to be fixed even in Windows Phone 8.1 GDR2. It could be fixed in Windows 10.
I had this problem as well and the answer above fixed it.
Just want to add that there might be a problem with roaming as well. I'm working in a country other than my own and I got this problem. So if the three points above are OK, check that you are not using data roaming.

Chrome basic auth and digest auth issue

I'm having some issues with Chrome canceling some HTTP requests, and I suspect cached authentication data to be the cause. Let me first write down some important facts about the application I'm writing.
I was using Basic Authentication scheme for some time to guard several services and resources in my web app.
In the meantime I was using/testing the app heavily in Chrome, with my main Google Account fully synced. Most frequently I was using my name, "lukasz", as the username in Basic Auth.
Recently I have switched my application to use Digest Authentication.
Now some of the HTTP requests I'm making fail with status=failed for no apparent reason. It only happens with the user "lukasz"; if I enter some other unique username, there is no problem.
I looked everywhere in the backend and frontend and couldn't locate the issue in our code. I can easily reproduce this with user "lukasz" every time. So I reverted my code to Basic Auth (while not touching the rest of the app) and the problem was gone.
That led me to think that something was wrong with cached passwords, so I cleared the cache in Chrome, but that didn't help. After several hours of analyzing the issue I decided to make sure I was running a fresh instance of Chrome, so I reinstalled it (deleting the disk data along the way). TADAAA! The problem was gone and I couldn't reproduce it anymore.
Then I synchronized my Google Account with this newly installed Chrome, and after a short while the requests to my app started failing again! So I took a deeper look (cleaning the profile data from disk and redoing all the steps), and indeed the problem starts as soon as my account is synced with the cloud!
Yes, I know it sounds dodgy. It sounds ridiculous. It sounds stupid. But I am almost sure that those two problems are somehow related (failing requests and account sync).
My idea is this: Chrome somehow remembered that I was using "lukasz/my-pass" with Basic Auth for certain services. After I switched to Digest Auth, the same combination of credentials (lukasz/my-pass) now acts funny. Perhaps under the hood Chrome still thinks this is Basic Auth and cancels requests when it learns otherwise?
UPDATE:
I did some low-level debugging with chrome://net-internals/ and it appears the problem occurs while reading a cache entry. This seems to confirm my initial assumption.
I did some investigation and found this article. Apparently, always adding a "Last-Modified" header to my HTTP responses has solved the issue in Chrome (I'm still having some problems in FF, but that's off topic).
However, it still doesn't solve my issue entirely: why were the requests failing in the first place?
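For illustration, here is a minimal sketch of that workaround, assuming a Node.js backend (the question doesn't say what server stack is actually in use):

// Hypothetical Node.js handler: always send a Last-Modified header so the
// browser cache has a validator to revalidate against instead of guessing.
var http = require('http');

var startedAt = new Date().toUTCString(); // a stable timestamp for this process

http.createServer(function (req, res) {
  res.setHeader('Last-Modified', startedAt);
  res.setHeader('Cache-Control', 'no-cache'); // force revalidation, not blind reuse
  res.end('ok');
}).listen(8080);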
You could try using incognito mode and seeing what happens. It may give you some hints without having to clear the cache or reinstall Chrome.
Also take a look at How to clear basic authentication details in chrome

What exactly does Google Chrome do when a new tab is opened?

Today I observed an interesting behavior. I am using Windows XP SP3.
When I open a new tab in Google Chrome and view the task manager, a new process is created.
But after some time, this process is terminated.
Why does it behave this way? Is it due to the vfork() system call? Does the child process immediately call exec()?
Does this happen only with Google Chrome, or do other browsers behave in a similar fashion?
AFAIK Chrome maintains one process for each tab, plus a process for some plugins too. They preferred a multi-process architecture over a multi-threaded one because a browser talks to the network all the time, so you can expect to receive packets that garble memory. With separate processes, such a failure takes down only the affected tab's process; in a multi-threaded design it would kill all the tabs.
You can enlighten yourself with the following blog post:
http://blog.chromium.org/2008/09/multi-process-architecture.html

Programmatically change Chrome extension update frequency

I'm developing a Chrome App (as a packaged app/extension) whose purpose is to act as the base platform for several fullscreen apps to be built on top of it. Chrome will be running on Ubuntu Linux.
No trouble so far. But then I was told that one of the apps this is intended to be the platform for requires its source code to be updated on very short notice, as it will probably be deployed for large-scale use before the system has been fully tested (deploying software that isn't completely stable is a bad idea, but we're on a tight schedule). The problem is that the "a few hours" interval of the autoupdate mechanism just isn't good enough.
So I somehow need to change the update interval. I know this can be done with the --extensions-update-frequency command-line switch, but since apps cannot access the command line (for obvious security reasons), and I'd prefer that the intended background page handle all the "administration", I don't think that switch is usable here.
Is it somehow possible to update at a higher frequency, or on demand?
There is now a method chrome.runtime.requestUpdateCheck():
Requests an update check for this app/extension.
It will return a status, which can be "no_update", "update_available", or "throttled".
Unfortunately, the docs do not specify the frequency limits that trigger "throttled".
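A short usage sketch; reloading as soon as the update is downloaded is just one reasonable policy:

// Ask Chrome to check for an update now; when one has actually been
// downloaded, onUpdateAvailable fires and we restart to apply it.
chrome.runtime.onUpdateAvailable.addListener(function (details) {
  console.log('Version ' + details.version + ' has been downloaded');
  chrome.runtime.reload(); // restart the app so the new version runs
});

chrome.runtime.requestUpdateCheck(function (status) {
  // status is "no_update", "update_available", or "throttled"
  console.log('Update check result: ' + status);
});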
Your best option would be to have the extension manually check with your servers for an updated version. If there is one, show the user a desktop notification asking them to update manually, along the lines of the sketch below.
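A sketch of that approach; the version.json endpoint, its response format, and the polling interval are made up for illustration, and the manifest is assumed to declare the "notifications" permission:

// Hypothetical manual version check against your own server.
function checkForUpdate() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://example.com/myapp/version.json');
  xhr.onload = function () {
    var latest = JSON.parse(xhr.responseText).version;
    if (latest !== chrome.runtime.getManifest().version) {
      chrome.notifications.create('update-available', {
        type: 'basic',
        iconUrl: 'icon.png',
        title: 'Update available',
        message: 'Version ' + latest + ' is out; please update the app.'
      }, function () {});
    }
  };
  xhr.send();
}
setInterval(checkForUpdate, 5 * 60 * 1000); // poll every five minutes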
Potentially you could write an NPAPI plugin to modify the update frequency.
This may cause issues with CSP, but you can try to live-load JavaScript from your server that executes in the extension, roughly like the sketch below. In that case, to "update" your extension you would simply update the JS hosted on your servers, and the extension would automatically start using it on the next load.
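Illustration only: packaged-app CSP normally forbids eval, so in practice this would have to run in a sandboxed page, and the URL is a made-up example:

// Fetch the newest script from your server and execute it.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/myapp/latest.js');
xhr.onload = function () {
  eval(xhr.responseText); // runs whatever code the server currently hosts
};
xhr.send();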

Increasing Google Chrome's max-connections-per-server limit to more than 6

As far as I know, at the current moment (late 2011) the max-connections-per-server limit remains 6. Please correct me if I am wrong. It is bad that we cannot change this easily, as we can in Firefox; as far as I know, the value is hardcoded.
One solution is to download the Chromium sources and rebuild them. Is there an easier solution?
Is there any tricky way to hack this without creating a dozen mirror domains?
Why I'm asking: my task is to create an HTML/JavaScript slideshow that will run inside a fullscreen browser on a huge monitor hanging on a wall. The JavaScript is really complicated: it preloads photos and makes a lot of AJAX calls to my web services. If the Wi-Fi connection is slow and six photos are loading, the AJAX calls fail and the application misbehaves. I want a fast solution based on HTTP, the browser, an Ubuntu tweak, or something else, because rebuilding the JavaScript app would take days.
Off topic: do you know of anything else that can be tweaked in my specific situation?
IE is even worse, with a limit of 2 connections per domain. But I wouldn't rely on fixing client browsers. Even if you have control over them, browsers like Chrome auto-update, and a future release might behave differently than you expect. I'd focus on solving the problem within your system design.
Your choices are to:
Load the images in sequence so that only 1 or 2 XHR calls are active at a time (use the success event from the previous image to check whether there are more images to download, and start the next request); see the first sketch after this list.
Use sub-domains like serverA.myphotoserver.com and serverB.myphotoserver.com. Each sub-domain has its own connection pool, so you could have 2 requests going to each of 5 different sub-domains if you wanted to. The downside is that the photos will be cached per sub-domain. BTW, these don't need to be "mirror" domains: you can just add additional DNS records pointing at the exact same website/server, so you don't have the headache of administering many servers, just one server with many DNS records. See the second sketch after this list.
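A sketch of the first option, chaining each download on the previous image's load event (the urls array and callback are hypothetical):

// Load photos strictly one at a time; the next request starts only when the
// previous image has finished (or failed).
function loadSequentially(urls, onDone) {
  var i = 0;
  (function next() {
    if (i >= urls.length) { if (onDone) onDone(); return; }
    var img = new Image();
    img.onload = img.onerror = next; // advance whether it loaded or failed
    img.src = urls[i++];
  })();
}

loadSequentially(['a.jpg', 'b.jpg', 'c.jpg'], function () {
  console.log('all photos fetched');
});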
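And a sketch of the second option. The shard host names are the hypothetical ones from the answer above, and the hash just keeps a given photo on a stable sub-domain so the browser cache stays effective:

// Spread photo URLs across sub-domains so each gets its own connection pool.
var shards = ['serverA.myphotoserver.com', 'serverB.myphotoserver.com'];

function shardedUrl(path) {
  var hash = 0;
  for (var i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) | 0; // simple string hash
  }
  return 'http://' + shards[Math.abs(hash) % shards.length] + path;
}

console.log(shardedUrl('/photos/123.jpg')); // same path always maps to the same shard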
I don't know that you can do it in Chrome outside of Windows; some Googling shows that Chrome (and therefore possibly Chromium) might respond well to a certain registry hack.
However, if you're just looking for a simple solution without modifying your code base, have you considered Firefox? In about:config you can search for "network.http.max" and there are a few values there that are definitely worth looking at.
Also, for a device that will not be moving (i.e. it is mounted in a fixed location), you should consider not using Wi-Fi (even a HomePlug adapter would be a step up as far as latency, stability, and dropped connections go).
BTW, the HTTP/1.1 specification (RFC 2616) suggests no more than 2 connections per server:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.
There doesn't appear to be an external way to hack the behaviour of the executables.
You could modify the Chrome(ium) executables, as this information is obviously compiled in. That approach brings a lot of problems with support and automatic upgrades, so you probably want to avoid it. You also need to understand how to make the changes to the binaries, which is not something most people can pick up in a few days.
If you compile your own browser you are creating a support issue for yourself, as you are stuck with a specific revision. If you want new features and bug fixes you will have to recompile. All of this involves tracking Chrome development for bugs and build breakages, which is not something a web developer should have to do.
I'd follow #BenSwayne's advice for now, but it might be worth thinking about doing some of the work outside the client (the web browser) and putting it in a background process running on the same or a different machine. That process can handle many more connections, and you are just responsible for getting the data back from it. Since it is local(ish), you'll get results back quickly even with minimal connections.