WebWorker behaves slower in Firefox than in Chrome

As the title says: I have an application where I spawn a single WebWorker and use it to unzip files in memory. I postMessage a single message per file I need; in the WebWorker I access the in-memory archive, unzip the file, and return the Uint8Array object to the main thread with postMessage.
In Chrome it works fine: the files arrive in the main thread as soon as they are extracted from the zip. But in Firefox there is a 30-second delay, the same delay I had before I rewrote the application to use a WebWorker. It feels like Firefox has some kind of queue and only returns results to the main thread once that queue is empty.
I suspect that this queue could be the queue of incoming postMessage calls from the main thread to the WebWorker, but I'm not sure.
Is there a way to overcome this issue in Firefox?
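Simplified, the pattern looks like this (the worker file name and helper functions are placeholders; the real unzip code is omitted):

// Main thread: one postMessage per file I need from the archive.
var worker = new Worker('unzip-worker.js'); // placeholder file name
worker.onmessage = function (e) {
  // e.data.bytes is the Uint8Array extracted by the worker.
  handleFileBytes(e.data.name, e.data.bytes); // placeholder handler
};
worker.postMessage({ name: 'some/file/in/archive.txt' });

// unzip-worker.js: look up the file in the in-memory archive and post it back.
onmessage = function (e) {
  var bytes = extractFromArchive(e.data.name); // placeholder for the real unzip call
  postMessage({ name: e.data.name, bytes: bytes });
};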
UPDATE
I logged the messages sent via postMessage from the WebWorker and the messages received in onmessage on the main thread.
And this is what I got:
Chrome: http://pastebin.com/7TUKwKZc
Firefox: http://pastebin.com/bSZ0yu33
So basically Firefox is using a synchronous model for messaging with the WebWorker thread, while Chrome is using an asynchronous model.
Does somebody know how to turn on asynchronous messaging for WebWorker threads in Firefox with JS?

Related

How to stop HTTP requests from dash_renderer

I am trying to build a real-time monitoring system for high-frequency data. To increase performance, I used the extendData property of dcc.Graph() and a websocket, so that the browser does not need to send requests to get data.
I found that it still does not increase performance as expected. The reason, I found, is that when inspecting the network tab in the browser I can see that every few milliseconds the browser is still sending a request, and the initiator is dash_renderer.
The screenshot is from a vanilla example, just to show that even for a simple textbox example the HTTP requests go on and on. For my real-time websocket dashboard the frequency of these requests gets very high.
My questions are:
What does dash_renderer do?
Why is it sending HTTP requests?
And how do I stop them?
If you run Dash in Debug mode, it has a feature called Hot Reloading which regularly (every 3 seconds by default) checks for changes to your codebase and updates your running app if it finds any. That check for updated code is what you're seeing in the network inspection.
To turn it off, either don't run in debug mode or explicitly set dev_tools_hot_reload to False like so:
app.run_server(debug=True, dev_tools_hot_reload=False)
Although it is late: after some experience, my realization is that Dash is not designed to work with websockets. It uses callbacks, which actually send requests to the server, where the callback function (which is Python) sends back some result.
These callbacks are designed to send HTTP requests to the server.
For high-speed data, a websocket should be used together with the Plotly.extendTraces method of plotly.js on the client side.
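For example, a minimal sketch of that client-side approach (the endpoint URL, element id, and message shape are assumptions for illustration; this bypasses Dash callbacks entirely):

// Stream points from a websocket straight into plotly.js.
var plotDiv = document.getElementById('live-plot'); // hypothetical div
Plotly.newPlot(plotDiv, [{ x: [], y: [], mode: 'lines' }]);

var ws = new WebSocket('ws://localhost:5000/stream'); // hypothetical endpoint
ws.onmessage = function (event) {
  var point = JSON.parse(event.data); // assumed shape: { "x": ..., "y": ... }
  // Append the new sample to trace 0 and keep at most 1000 points on screen.
  Plotly.extendTraces(plotDiv, { x: [[point.x]], y: [[point.y]] }, [0], 1000);
};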

Consistent empty data using the MediaRecorder API, intermittently

I have a simple setup for desktop capturing using HTML5 APIs. This includes a simple webpage and a Chrome extension. The flow is:
1. Use the extension to get the sourceId.
2. Using the sourceId, call navigator.mediaDevices.getUserMedia to get the MediaStream.
3. Feed this MediaStream into an instance of MediaRecorder for recording (rough sketch below).
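Roughly, steps 2 and 3 look like this (the constraints in the real app differ slightly; sourceId comes from the extension):

// Chrome-specific constraints for desktop capture; sourceId comes from the extension.
navigator.mediaDevices.getUserMedia({
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: sourceId
    }
  }
}).then(function (stream) {
  var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  recorder.ondataavailable = function (e) {
    console.log('chunk size:', e.data.size); // 0 in the bad state described below
  };
  recorder.start();
  // later, on demand:
  // recorder.requestData();
});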
This setup works most of the time, but occasionally I see that requestData() on the MediaRecorder instance consistently returns a Blob with empty data. I am clueless as to what can cause a working setup to start misbehaving sometimes.
Some weird behaviour that I noticed in the bad state:
When I try to close/refresh the window it doesn't respond.
The MediaStreamTrack object from step 2 above is 'live', but as soon as I go to step 3 it becomes 'muted'.
There's no pattern to it; sometimes it even happens when I request the MediaStream the very first time (which rules out the possibility that dangling resources are eating up the contexts).
Is there anything that I am doing wrong and am unaware of? Any help/pointers would be highly appreciated!

WebRTC Renegotiation Errors

I've set up a WebRTC application that works as follows (beginning at step 5, I switch from CALLER/CALLEE to PEERA/PEERB, because either the CALLER or the CALLEE can initiate the stream):
1. CALLER creates a peer connection with only a data channel, creates an offer, sets the local description, and sends the offer to CALLEE.
2. CALLEE sets the remote description, creates an answer, sets the local description, and sends the answer to CALLER.
3. CALLER sets the remote description.
4. CALLER and CALLEE can successfully communicate over the data channel.
5. PEERA adds an audio and/or video stream to the peer connection.
6. PEERA's onnegotiationneeded event fires (see the sketch after this list).
7. PEERA creates an offer, sets the local description, and sends the offer to PEERB.
8. PEERB receives the offer, sets the remote description, creates an answer, sets the local description, and sends the answer to PEERA.
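Simplified, the code driving steps 6-8 looks roughly like this (pc is the peer connection from step 1, and signaling is a placeholder for whatever transport carries the SDP between the peers):

// Offerer side (steps 6-7): renegotiate when a stream/track is added.
pc.onnegotiationneeded = function () {
  pc.createOffer()
    .then(function (offer) { return pc.setLocalDescription(offer); })
    .then(function () { signaling.send({ type: 'offer', sdp: pc.localDescription.sdp }); });
};

// Answerer side (step 8): apply the offer and answer it.
signaling.onmessage = function (msg) {
  if (msg.type === 'offer') {
    pc.setRemoteDescription({ type: 'offer', sdp: msg.sdp })                // step 8 failure point in the Firefox interop case below
      .then(function () { return pc.createAnswer(); })
      .then(function (answer) { return pc.setLocalDescription(answer); })   // step 8 failure point in the Chrome<->Chrome case below
      .then(function () { signaling.send({ type: 'answer', sdp: pc.localDescription.sdp }); });
  } else if (msg.type === 'answer') {
    pc.setRemoteDescription({ type: 'answer', sdp: msg.sdp });
  }
};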
If PEERA and PEERB are both using Chrome:
If PEERA is the CALLER, then everything behaves normally, and the stream is received successfully by PEERB.
If PEERA is the CALLEE, then PEERB blows up in step 8 when setting the LOCAL description. The stream is received by PEERB, but displays only as a black box when sent to a <video> element.
The error logged is:
Failed to set local answer sdp: Failed to push down transport description: Failed to set SSL role for the channel.
When both PEERA and PEERB are using Firefox:
PEERA can be either the CALLER or the CALLEE, and everything behaves normally, and the stream is received successfully by PEERB.
When the CALLEE is using Firefox and the CALLER is using Chrome:
PEERA can be either the CALLER (Chrome) or the CALLEE (Firefox); everything behaves normally, and the stream is received successfully by PEERB.
When the CALLEE is using Chrome and the CALLER is using Firefox:
If PEERA is the CALLER (Firefox), then everything behaves normally, and the stream is received successfully by PEERB (Chrome).
If PEERA is the CALLEE (Chrome), then PEERB (Firefox) blows up in step 8, when setting the REMOTE description.
The error logged is:
DOMException [InvalidSessionDescriptionError: "ICE restart is unsupported at this time (new remote description changes either the ice-ufrag or ice-pwd)ice-ufrag (old): a59T34ixyZjsTUuJice-ufrag (new): rsCN1ugVKHJQzmMbice-pwd (old): KqOHtqdzFp6VwG+3hxbjcQFcice-pwd (new): uVvowvgsKIwuCq/bDmcGbSPA" code: 0 nsresult: 0x0]
Chrome<->Chrome renegotiation
The error you get when PEERA is the callee in the renegotiation is typically due to Chrome changing the DTLS role; however, I am not able to reproduce your problem. I believe that this JSFiddle link illustrates the scenario you are describing, and I am able to successfully renegotiate the call using Chrome 47.
If you can still reproduce the problem, take a look at the a=setup: bits of the SDP that are generated in the offer/answer, and compare them to the initial offer/answer. If I'm right, you'll see that to begin with, CALLER will have a=setup:actpass in the offer, and CALLEE will have a=setup:active in the answer. This means that the CALLER is now playing the 'passive' DTLS role and the CALLEE is playing the 'active' DTLS role.
Then when you initiate a renegotiation, PEERA is more than likely sending a=setup:actpass. PEERB, which should send a=setup:passive, is sending a=setup:active instead, which essentially causes a DTLS role swap. The error is due to the fact that Chrome does not support DTLS role changing for peer connections.
There is an open ticket on the google chrome bug tracker related to this, where I have posted a reproduction of the issue you're describing using a different scenario: starting a video-only call and the callee renegotiating to add video+audio.
The only solution that I know of at this time is to "munge" (alter) the SDP prior to calling setLocalDescription, so that it has the values that you want. So, for example, if you are about to process an answer and you know you should be in the passive DTLS role, you can do something like:
// Force the answer to keep the passive DTLS role before applying it locally.
answer.sdp = answer.sdp.replace('a=setup:active','a=setup:passive');
pc.setLocalDescription(answer).then(...);
Firefox<->Firefox renegotiation
Yep, everything works great! That's because Firefox "does the right thing" with the DTLS roles when renegotiating in all the tests I've run. Take a look at the difference between these SDPs and the Chrome SDPs.
Firefox<->Chrome renegotiation interop
I am able to reproduce the issue you are describing, with InvalidSessionDescriptionError showing up in Firefox. I haven't been able to come up with a solution yet, nor do I know the cause at this time.
I'm also having a myriad of other renegotiation interop issues. It's pretty discouraging at the moment.
Please post back if you learn more. Definitely still lots of struggling with renegotiation interop!

Bug when a websocket receives more than 2^15 bytes on Chrome: "Received a frame that sets compressed bit while another decompression is ongoing"

I tried to pass a JSON message via websocket to an HTML GUI. When the size is greater than 32768 bytes, Chrome raises this exception:
WebSocket connection to 'ws://localhost:8089/events/' failed: Received a frame that sets compressed bit while another decompression is ongoing
on the line where the WebSocket is instantiated:
this._websocket = new WebSocket(url);
However, it works fine on Firefox. I use Jetty 9.1.3 on the server side, and I tried with Chrome 33 and 34 beta.
I forgot to mention that if I send a message longer than 32768 bytes, Chrome's network debugging tools show a length of 32768 bytes instead of the real message length.
Any ideas?
When using Jetty 9.1.2.v20140210 I don't have any problems with the connection, but with the later 9.1.3.v20140225 version it fails and I get the error using Opera or Chrome. Firefox works fine on all versions.
I submitted a bug report to Jetty about this: https://bugs.eclipse.org/bugs/show_bug.cgi?id=431459
This might be a bug in Jetty.
permessage-deflate requires the compression bit to be set on the first frame of a fragmented message, and only on that frame.
It might be that Jetty fragments outgoing messages into 32k fragments and sets the compression bit on all frames. If so, that's a bug.
I have just tested current Chrome 33 using Autobahn|Testsuite: everything works as expected, including messages of 128k.
You can test Jetty using the above testsuite; it'll catch the bug if there is one.
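For a quick browser-side check (independent of the Autobahn suite), something like this minimal sketch can be used, assuming the endpoint from the question echoes messages back:

// Send a payload just above the 32768-byte boundary and check what comes back.
var ws = new WebSocket('ws://localhost:8089/events/');
ws.onopen = function () {
  var payload = new Array(40001).join('x'); // 40000 bytes, crosses the 2^15 boundary
  ws.send(payload);
};
ws.onmessage = function (event) {
  console.log('received length:', event.data.length); // expect 40000, not 32768
};
ws.onerror = function () {
  console.log('websocket error (the frame error appears in the console)');
};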

HTML5 Application Cache: catching events in Chrome

I've created a website using the HTML5 offline Application Cache and it works well in most cases, but for some users it fails. In Chrome, when the application is being cached, the console displays the progress for each file, and also error messages if something goes wrong, like:
Application Cache Checking event
Application Cache Downloading event
...
Application Cache Progress event (7 of 521) http://localhost/HTML5App/js/main.js
...
Application Cache Error event: Failed to commit new cache to storage, would exceed quota.
I've added event listeners to window.applicationCache (error, noupdate, obsolete, etc.), but the events carry no information on the nature of the error.
Is there a way to access this information from the website using JavaScript?
I would like to determine somehow which file caused the error or what kind of error occurred.
I believe that the spec doesn't mention that the exact cause of the exception should be included in the error. Currently the console is your only friend.
To wit, your current error ("would exceed quota") is due to the fact that Chrome currently limits the storage to 5 MB. You can work around this by creating an app package that requests unlimitedStorage via the permission model. See http://code.google.com/chrome/apps/docs/developers_guide.html#live for more details.
If you want specific error messages in the onerror handler, raise a bug at http://crbug.com/new
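For reference, a minimal sketch of the listener wiring; the events fire, but as noted above they don't expose which file failed or why:

// Attach listeners to the application cache; the event objects carry no
// file name or error reason, so this only tells you *that* something happened.
var cache = window.applicationCache;
cache.addEventListener('progress', function (e) {
  console.log('progress', e.loaded, 'of', e.total); // per-file progress counter
});
cache.addEventListener('error', function (e) {
  console.log('appcache error', e); // no detail about the failing file or quota
});
cache.addEventListener('obsolete', function () { console.log('obsolete'); });
cache.addEventListener('noupdate', function () { console.log('noupdate'); });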