HTML5: shared web worker with multiple connections

From what I understand, the big benefit of HTML5's shared web workers is that they can accept multiple connections in a single separate thread of execution.
My question is: has anyone gotten multiple connections with a SharedWorker to work as a single thread in Google Chrome? I'm using the latest version, 12.0.742.112.
Demo: http://demos.zulius.com/html5/sharedworker
Source (in case demo is down): index.html, sharedworker.js
The demo establishes 2 separate event listeners. The expected output is:
foo got message: Hello World! You are connection #1
bar got message: Hello World! You are connection #2
In the demo, both event listeners fire correctly, but the connection count variable is not maintained in the SharedWorker script. This leads me to believe each connection to the SharedWorker is executing in a separate thread.
Am I doing something wrong? Or is Chrome support for SharedWorker not quite there?
UPDATE: the demo works now.

You have two listeners on the worker, but you only start the worker once, so it's one worker shared by one owner instead of two owners. Increasing the number of listeners doesn't affect the ownership.
You can see the example here:
http://weblog.bocoup.com/javascript-web-workers-chrome-5-supports-new-sharedworker
It has two documents, one containing the iframe and one loaded inside the iframe. They both call the start method of the worker's port, so the worker is shared by two owners. Since the start method is called twice, the onconnect event fires twice, making connection.count equal 2.
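Here is a minimal sketch of the difference (the file name, handlers and log text are illustrative, not the demo's actual source): attaching two listeners to one port is still a single connection, while constructing and starting the worker in each document gives two connections.

// sharedworker.js -- counts one connection per owner that connects
var connectionCount = 0;
onconnect = function (e) {
  var port = e.ports[0];
  connectionCount++;
  port.postMessage('Hello World! You are connection #' + connectionCount);
};

// Page A: two listeners on the same port -- onconnect still fires only once.
var worker = new SharedWorker('sharedworker.js');
worker.port.addEventListener('message', function (e) {
  console.log('foo got message: ' + e.data);
});
worker.port.addEventListener('message', function (e) {
  console.log('bar got message: ' + e.data);
});
worker.port.start();

// Page B (e.g. inside the iframe): constructing the worker again and starting
// its port creates a second connection, so onconnect fires a second time.
var worker2 = new SharedWorker('sharedworker.js');
worker2.port.onmessage = function (e) { // assigning onmessage also starts the port
  console.log('got message: ' + e.data);
};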

In shared web workers, the context stays alive until the last browser session ends. Shared web workers can maintain that context across browser tabs and respond to requests with the same data context.
A change in the data context affects all connections. This gives you a few possibilities: you can update all the connections with a single context change, you can keep the data alive until the last connection ends, and you can propagate changes to all views.
Here is a demo of Shared web workers with multiple connections.
http://www.antkorp.in/sharedworkers/
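A minimal sketch of that broadcast pattern, assuming an illustrative message name and state shape (not taken from the linked demo):

// Worker script: keep every connected port and push each state change to all of them.
var ports = [];
var sharedState = { counter: 0 };

onconnect = function (e) {
  var port = e.ports[0];
  ports.push(port);
  port.onmessage = function (msg) {
    if (msg.data === 'increment') {
      sharedState.counter++;
      // A single context change updates every open tab/connection.
      ports.forEach(function (p) {
        p.postMessage({ counter: sharedState.counter });
      });
    }
  };
};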

Related

Netty HttpServer Chrome Browser Multiple Requests

We use Netty, version 4.1.13. We create an HttpServer, HttpServerInitializer and HttpServerHandler and start the server on a port. When we make a request from the Chrome browser, HttpServerInitializer is called 3 or 4 times (sometimes 3, sometimes 4), and it is called again after 10 seconds. When we make a request through Microsoft Edge or through the console, it is called once, as expected, and HttpServerHandler handles the rest.
What should we do to prevent HttpServerInitializer from handling these unnecessary extra requests? We have session operations attached to the pipeline in the initializer, so this is a critical issue for us.
The default behaviour of browsers for HTTP/1 is to open several connections (how many depends on the browser) so they can issue requests in parallel. That way they can retrieve resources like CSS, JS and images concurrently.
The number of connections is configurable in the browser. In general there are two preferences: the maximum number of connections per hostname and the total maximum number of open connections.
See also: http://www.browserscope.org/?category=network&v=0
So, when you start a request with Chrome, it opens several connections, even if it ends up using only one because there aren't many requests to make. The idle, unused connections are closed after a few seconds.
I think that's why you see HttpServerInitializer being called several times: there are simply several connections. On the server side this is normal, because you can't tell whether they come from different clients or from one client with many connections.
I advise you not to do costly work on the connection-opened event, but only when you receive a valid message/request. Your initializer should only configure the necessary handlers on the pipeline, which should be quick and simple, and nothing else.

Difference between timeout and browserTimeout

I just started using Selenium Grid.
The current problem I'm facing is when a test crashes. The browser stays open forever until I arrive and close it myself so the next set of tests can start.
I noticed that the NODE configuration has two timeout configurations, one for -timeout and another for -browserTimeout
For the -timeout, it says the browser will be "released" for another test. For -browserTimeout, it simply doesn't say anything.
I don't understand what is meant by "released".
What I need is the browser to be closed when the timeout happens.
What option will close the browser?
This documentation should help you out
Quoting the documentation
timeout 30 (300 is default) The timeout in seconds before the hub automatically releases a node that hasn't received any requests for more than the specified number of seconds. After this time, the node will be released for another test in the queue. This helps to clear client crashes without manual intervention. To remove the timeout completely, specify -timeout 0 and the hub will never release the node.
browserTimeout On the hub you can also set -browserTimeout 60 to make the maximum time a node is willing to hang inside the browser 60 seconds.
Here's my limited understanding
timeout - This value represents how long the Grid should wait before it treats a particular test session (a particular running test case) as stale, so that the session can be cleaned up and the slot released for another test case to execute on the node. This parameter is relevant when, say, you are running a test case from within Eclipse and you click the red button to end it abruptly. At that point the client (your test case) hasn't sent an "end session" signal to the remote, so the session is stale and the grid has to clean up this orphaned session.
browserTimeout - This value represents how long the Grid should wait before it treats a particular test session as stale because the browser has hung (maybe due to a browser crash, or a rogue piece of JavaScript on the web application that has frozen the browser). The important thing to note here is that the client (the test case running from within your IDE or a continuous integration tool such as Jenkins) is still active, but it's the browser that has become unresponsive.
So use timeout to safeguard your executions against orphaned test sessions caused by client crashes, and use browserTimeout to safeguard your grid against frozen browsers that refuse to return and stall test execution.
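For example, both values can be passed when starting the hub (the jar name here is illustrative; use whatever server jar your Grid runs):

java -jar selenium-server-standalone.jar -role hub -timeout 30 -browserTimeout 60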

NullPointerExceptions while executing LoadTest on WSO2BPS

While performing load tests on WSO2 BPS 3.2.0 we ran into a problem.
Let me tell you more about our project and our actions.
Our BPS process is designed to manage interactions with 3 systems. Basically it is split into two parts: the first one is to CREATE INSTANCE in one of the systems, then wait a bit, and then SELECT OFFER in the instance context.
In real life it looks like this: the user wants to get a product, the application asks a system for offers, and the user then selects an offer from the available ones.
In BPS the first part is a straightforward process; the second part is split into two flows, one to refresh the information with new offers and another to wait for the user to choose one of them.
Our aim is to sustain about 1000-1500 simultaneous threads in the load test. The external systems are simulated by mock-ups executed by LoadUI.
We can achieve our goal if we disable "Process-Level Monitoring Events" in the deployment descriptor of our process (set it to "none"). Everything runs smoothly for hours.
But if we enable this feature (and we need to), everything fails with an error very soon (at around run 100-200):
[2015-07-28 17:47:02,573] ERROR {org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy} - Error processing response for MEX null
java.lang.NullPointerException
at org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy.onResponse(BPELProcessProxy.java:402)
at org.wso2.carbon.bpel.core.ode.integration.BPELProcessProxy.onAxisServiceInvoke(BPELProcessProxy.java:187)
at
[....Et cetera....]
After the first appearance of this error another kind appears: other threads simply fail after the timeout.
The database seems to be fine (by the way, it is MySQL 5.6.25). The dashboard shows no extreme levels of input or output.
So I think the BPS itself is the bottleneck. We have given it an 8 GB heap and its configuration options are set for extreme numbers of threads (negative values where possible, and otherwise just ridiculously big ones like 100000).
Has anyone ever faced this problem? Any help is much appreciated.
Solved in BPS version 3.5.0; refer to the release notes.

Two peers RTMFP Chat: should I use NetGroup or not?

I made a Chat mainly inspired by the Cirrus Sample
The chat works fine, but in some cases the "NetStream.Connect.Success" event doesn't get triggered.
Both connections pass the ports check
Before switching to a NetGroup architecture, and presuming that these problems are related to the connection process, I'd like to know:
- Would NetGroup behave differently in the NetStream connection process compared to a direct NetStream connection?
- What are the limitations of NetGroup? I read that there is more latency when using it.
OK, so first and foremost, NetStream.Connect.Success fires on the NetConnection and not the NetStream. That is the biggest misconception and frustration for people trying to get this all set up; Adobe did this for legacy (historical) reasons. So check that first to make sure you are listening in the proper spot.
If you are sure you have the listener in the right spot, you may have NAT or firewall issues that prevent one peer from seeing the other in certain circumstances.
Now regarding groups:
NetGroup does not necessarily introduce latency. In groups with fewer than 14 members you have a full mesh (all members have a direct peer connection to all others), so a group of fewer than 14 members will still give you a blazingly fast P2P connection provided you use sendToAllNeighbors(). The latency you hear about concerns post(): post() runs a bunch of machinery that introduces latency, since every 10 seconds it tries to contact my 3 descending, 3 ascending, fractional, 6 least-latent and 1 random connections, and then tries to forward the message on to be distributed to the rest of the group. Even in small groups this can take a second or two.
Here is a link to a video from MAX that goes through all the nitty-gritty details (so to speak) of RTMFP and its ring-based architecture: Cool In-Depth Video About RTMFP

Flash - Loader errors in Firefox

I'm writing an application which pulls up to several dozen images from a server using Loader objects. It works fine in all browsers except Firefox, where I'm finding that, with more than 6 or so connections, some simply never load; I stop getting progress events and can detect no errors or error events.
I extended the Loader class so that it kills and reopens the transfer if it takes longer than ten seconds, but this temporary hack has created a new problem: when there are quite a few connections open, many of them load 90-odd percent of the image, get killed for exceeding the time limit, open again, load 90-odd percent again, and so on, until the traffic is low enough for them to actually complete. This means I'm transferring many times the amount of data that is actually being requested!
It doesn't happen in any other browser (I was anticipating IE errors, so for Firefox to be the anomaly was unexpected!). I can write a class to manage the Loaders, but I wondered if anyone else had seen this problem?
Thanks in advance for any help!
Maybe try to limit the number of concurrent connections.
Instead of loading all assets at once (leaving the Flash Player or the browser to manage the connections), try building a queue.
Building a simple queue is fairly easy: just create an array of URLs and shift or pop a value every time the loader has finished loading the previous asset.
You might also use an existing loader manager like LoaderMax or BulkLoader; they let you create a queue and limit the number of connections, and they are fairly robust. LoaderMax is my favourite.