I opened this matrix multiplication benchmark and my browser (Firefox 7.0.1, on an old Asus EeePC 1000H) froze until the benchmarks finished.
I heard that Web Workers were invented to separate processing from the display of web pages. Is it possible to use the Web Workers API so that WebGL doesn't stall the whole browser?
For the sake of clarity: the benchmark that you linked to does not use WebGL at all. (I should know, I wrote it.) And in the case of that particular benchmark you absolutely could run it in a Web Worker now and it would be perfectly fine.
(Fun fact: Web Workers didn't support TypedArrays when the benchmark was built, and since most of the matrix libraries rely on them it was impractical to run it in a Worker at the time. That has since been fixed.)
Anyway, to answer your original question: No, WebGL cannot run in a worker. The core blocker to this is that in order to get a WebGL context you need to call getContext on a canvas element. Web Workers explicitly disallow DOM access (which is a good thing, BTW!) and as such you'll never be able to access WebGL from a worker.
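To make the limitation concrete (a minimal sketch; the worker half just illustrates what's missing):

```js
// Main thread: a WebGL context can only come from a canvas element.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');

// Worker thread: there is no DOM, so there's no canvas to call getContext on.
// (typeof document === 'undefined' inside a classic Worker.)
```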
But that's not as bad as you might think. For one, consider that almost all 3D rendering actually happens on a different thread anyway; specifically, a whole bunch of threads running on your GPU. The browser's only part in it is to tell your graphics driver "Hey! Start rendering some triangles using this data!" and then it moves on without waiting for the triangles to actually be rendered. As such, while the draw commands must be issued from the main process, the time they spend blocking that process is (usually) very little.
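To illustrate (assuming gl and vertexCount are already set up):

```js
// A draw call queues work for the GPU and returns almost immediately:
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);

// Only an explicit sync like this makes the CPU wait for the GPU to finish.
// (Rarely needed in practice; shown here just to illustrate the asynchrony.)
gl.finish();
```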
Of course, that's not what's going to eat up a bunch of your time if you're coding a realtime game. You've got animations, physics, AI, collision detection, pathfinding... there are a lot of non-graphical tasks involved that will eat your CPU alive if you let them. In some cases (animation, for instance), it's just gobs and gobs of matrix math, like the benchmark you linked to! Fortunately for us, however, that type of processing CAN be done in a Worker, and all we need to communicate back to the main thread is the data required to render the scene.
Yes, this introduces some challenges in terms of synchronization and data transfer, but on the whole it will be vastly preferable to locking up your browser while we try to simulate those 500 boxes colliding.
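A minimal sketch of that split (the file name, message shape, and drawScene are hypothetical):

```js
// main.js
const physicsWorker = new Worker('physics.worker.js');

physicsWorker.onmessage = (event) => {
  // Receive the transforms the worker computed and hand them to the renderer.
  const transforms = new Float32Array(event.data.transforms);
  drawScene(transforms); // your WebGL rendering code
};

// Kick off a simulation step; transferring the buffer avoids a copy.
const positions = new Float32Array(500 * 3);
physicsWorker.postMessage({ positions: positions.buffer }, [positions.buffer]);

// physics.worker.js
self.onmessage = (event) => {
  const positions = new Float32Array(event.data.positions);
  // ...gobs and gobs of matrix math here...
  self.postMessage({ transforms: positions.buffer }, [positions.buffer]);
};
```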
Yes, on Firefox!
https://hacks.mozilla.org/2016/01/webgl-off-the-main-thread/
We’re happy to announce WebGL in Web Workers in Firefox 44+! Using the new OffscreenCanvas API you can now create a WebGL context off of the main thread.
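A sketch of how it looks (at the time of the announcement this was behind the gfx.offscreencanvas.enabled preference in Firefox, if I recall correctly):

```js
// main.js: hand control of the canvas over to a worker.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render.worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);

// render.worker.js: create the WebGL context off the main thread.
self.onmessage = (event) => {
  const gl = event.data.canvas.getContext('webgl');
  gl.clearColor(0, 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
};
```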
By default you can't use WebGL in a Web Worker as Toji explained.
You can check out WebGLWorker, which is a library that lets you do WebGL stuff in a Web Worker by transparently proxying commands to the main thread.
Here is a nice blog post that explains how it works.
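The core idea is simple: record GL calls in the worker and replay them on the main thread. A toy illustration (this is NOT WebGLWorker's actual API, just the shape of the technique):

```js
// Worker side: a fake context records commands instead of executing them.
const commandBuffer = [];
const fakeGL = {
  clearColor: (...args) => commandBuffer.push(['clearColor', args]),
  viewport:   (...args) => commandBuffer.push(['viewport', args]),
  // ...one stub per WebGL function...
};
fakeGL.clearColor(0, 0, 0, 1); // app code uses fakeGL like a real context
self.postMessage(commandBuffer);

// Main-thread side: replay the recorded commands on the real context.
worker.onmessage = (event) => {
  for (const [name, args] of event.data) gl[name](...args); // gl: real context
};
```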
I am trying to create a website where users can view and interact with room furnishings in a 3D environment in a browser. I do not wish to create anything from scratch if it is possible to build upon existing open source efforts. So far, my research shows that:
The most established open source project I could build upon, one that allows me to show 3D scenes in the browser and have users interact with them, uses Java3D for the browser view, encapsulated in a Java applet (sweethome3Dviewer).
Java3D itself seems to be out of vogue, with most people recommending HTML5+WebGL (where, unfortunately, I can't find any solutions that are as well developed).
So here are my questions for this forum:
1) Are there any serious drawbacks to using a Java3D-based approach?
I am talking about ANY drawback here, for example: "it is too slow"; "it is not stable"; "it is limited by the number of concurrent users", etc.
2) What would you suggest I start with and build upon, if not the one based on Java3D?
Please note my preference for not re-inventing the wheel!
Yes, there is a serious drawback to using Java applets today: they are likely to simply not work at all.
The biggest problem is that the Java security system, which is intended to prevent programs like applets from accessing other parts of your computer (modifying files, running additional unsandboxed programs, etc.), has a history of security holes. Because of this history, there is a general consensus that permitting Java applets is simply not an acceptable security policy for the current day. Therefore, many browsers omit the Java plug-in or disable it by default.
And there are also browsers which have simply never had a Java browser plug-in at all, such as those on Android and iOS devices. Besides the security risk, there is also the issue that Java is “heavyweight” as web content goes; it can be seen as a waste of limited resources on portable devices.
Thus, using Java applets is not a good choice: your applet will never work for many users, and those it does work for are taking an unnecessary security risk.
WebGL, on the other hand, is “just” another JavaScript-based API which only does graphics, rather than lots of other capabilities that have to be walled off by a “security manager”. There are risks inherent to WebGL (GPU drivers are not the most security-minded things out there), but in the current state of things it's unlikely that WebGL will simply be shut off rather than fixed if a problem is found.
We've developed a professional WebRTC application and are trying to give users an indication of how many streams their PC can handle (2-7). Is there an easy way of figuring this out (in browser or with a separate application)?
It's a conference application we offer to users browsing with Chrome.
Another question: if you work with, for example, 7 streams, are they divided over the different CPU cores, or is the whole WebRTC workload contained in the process for that browser tab?
WebRTC makes extensive use of threads, so it can utilize more than one core, especially in multi-party conferences.
The simplest way to check is to make calls to yourself (each one = 2 calls in a mesh conference). If it's an MCU-style conference (likely with 7 participants), you need to simulate a one-way call (so you're doing one encode), plus decode N additional VP8 streams at "appropriate" resolutions.
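For example, a loopback call can be wired up entirely within one page (a rough sketch using the promise-based API; error handling is omitted, and the attached video elements force an actual decode):

```js
async function loopbackCall(stream) {
  const pcA = new RTCPeerConnection();
  const pcB = new RTCPeerConnection();

  // Trickle ICE candidates directly between the two peers.
  pcA.onicecandidate = (e) => e.candidate && pcB.addIceCandidate(e.candidate);
  pcB.onicecandidate = (e) => e.candidate && pcA.addIceCandidate(e.candidate);

  // Render the received stream so the browser really decodes it.
  pcB.ontrack = (e) => {
    const video = document.createElement('video');
    video.srcObject = e.streams[0];
    document.body.appendChild(video);
    video.play();
  };

  stream.getTracks().forEach((track) => pcA.addTrack(track, stream));

  const offer = await pcA.createOffer();
  await pcA.setLocalDescription(offer);
  await pcB.setRemoteDescription(offer);
  const answer = await pcB.createAnswer();
  await pcB.setLocalDescription(answer);
  await pcA.setRemoteDescription(answer);
}

// Ramp up loopback calls until frame rate or CPU use degrades.
(async () => {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  for (let i = 0; i < 7; i++) await loopbackCall(stream);
})();
```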
This is complicated by Firefox, for example, using content analysis to selectively reduce the resolution and/or frame rate of sent video depending on load and outgoing bandwidth. In your case, though, the load is mostly on the receiving side.
The short answer, though, is that it's hard to be sure, and will depend on the other senders too.
It seems that we cannot get sensor data in Web Workers, and I wonder about the reason behind that. The use case is that I am thinking about getting geolocation data in the worker thread and only sending the processed version to the main thread.
For GPS, this post says it is not supported in the worker thread (no reason is given), and I double-checked: navigator.geolocation is not available in Web Workers. For the accelerometer and gyroscope, we have DeviceOrientationEvent and DeviceMotionEvent, but we need to use them through the window object, which is not available to the worker thread. The same situation applies to the ambient light event.
So my questions are:
1) Why is navigator.geolocation not supported in Web Workers? I don't see any reason to prevent it in the worker thread; I think there should be no thread-safety or security problems.
2) Does navigator.geolocation belong to navigator? This looks like a silly question, but I cannot find a good explanation online quickly... Web Workers have access to the navigator object, and I am confused about why navigator.geolocation is not supported.
3) Why don't we have raw sensor readings from the accelerometer and gyroscope? I understand that the abstracted events are useful, but there are cases where we want to use the raw data for processing. I find that PhoneGap provides ways to access raw sensor data, e.g., through navigator.accelerometer, but my understanding is that such an API is not part of the standardized HTML specification.
4) What design decisions determine whether general sensor reading should be supported in the worker thread or not? General sensor reading support in HTML is currently shelved, according to the W3C Device APIs Working Group. Judging from current sensor support (GPS, accelerometer, gyroscope), I think we will get abstracted DOM events, and raw sensor readings would likely come through the navigator object.
OK. After reading some Chromium code, I have the answer to my own question 2 now. I still have no answer to the other 3 questions...
Answer to question 2: Does navigator.geolocation belong to navigator?
navigator.geolocation exists on the main thread's navigator, but not on the worker thread's navigator.
The main reason is that even though the navigator in the worker thread looks exactly the same as the one in the main thread, the two navigators have independent implementations on the C++ side. That is why navigator.geolocation is not supported in the worker thread.
The related code is in Navigator.idl and WorkerNavigator.idl in Chromium code. You can see that they are two independent interfaces in the .idl files. And they have independent implementations on the C++ side of the binding. Navigator is an attribute of DOMWindow, while WorkerNavigator is an attribute of WorkerGlobalScope.
However, on the JavaScript side, they have the same name: navigator. I understand that the two navigators are in two different scopes, so there is no name conflict. But when I use the APIs in JavaScript, I expect similar behavior on both the main and worker threads if they have the same name. That's where the ambiguity comes from.
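The practical workaround follows from this: sample the sensor on the main thread (where navigator.geolocation exists) and do the processing in the worker. A sketch (the worker file name, message shape, and process() are hypothetical):

```js
// main thread: 'geolocation' in navigator  -> true
// worker:      'geolocation' in navigator  -> false (WorkerNavigator)

// main.js
const worker = new Worker('geo.worker.js');
navigator.geolocation.watchPosition((pos) => {
  // Position objects aren't structured-cloneable everywhere; copy plain fields.
  worker.postMessage({
    lat: pos.coords.latitude,
    lon: pos.coords.longitude,
    t: pos.timestamp,
  });
});
worker.onmessage = (event) => console.log('processed:', event.data);

// geo.worker.js
self.onmessage = (event) => {
  const processed = process(event.data); // your filtering/smoothing here
  self.postMessage(processed);
};
```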
I've been writing an extension that allows the user to issue voice commands to control their browser, and things were going great until I hit a catastrophic problem. It goes like this:
The speech recognition object is in continuous mode, and whenever onerror fires with 'no-speech', or onend fires, it restarts. This way, the extension is constantly waiting to accept input and reacts whenever a command is issued, even after 5 minutes of silence.
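Here is a minimal sketch of that restart pattern (simplified; handleCommand stands in for my real handler):

```js
const recognition = new webkitSpeechRecognition();
recognition.continuous = true;

recognition.onresult = (event) => handleCommand(event.results);

// onend also fires after errors such as 'no-speech', so restarting here
// keeps the recognizer listening indefinitely.
recognition.onend = () => recognition.start();

recognition.start();
```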
After a few days of development, today I reached the point where I was testing it in practical use, and I found that after a little while (and with no change to anything on my part), my onend event started firing constantly. As in, looking at the console, I would see 18,000 requests being made in the space of three seconds, all being instantly denied, thus triggering onend and restarting the request.
I'm aware that it would be optimal to wait for sound before sending a request, or to have local speech recognition capabilities without the need for a remote server, but the present API does not allow that.
Are my suspicions correct? Am I getting request limited?
"Are my suspicions correct? Am I getting request limited?"
Yes
"I'm aware that it would be optimal to wait for sound before sending a request, or to have local speech recognition capabilities without the need for a remote server, but the present API does not allow that."
To hide the IP source of your request you can use anonymizer networks like Tor, though it will not be fast.
It's naive to assume Google will spend resources to process all the audio being recorded on your system. In your application development it is better to rely on an API which provides at least some guarantees. That could be either a commercial API or an open source implementation like CMUSphinx.
With CMUSphinx, you can also properly implement command keyword detection and increase accuracy by specifying the grammar of the commands.
You could also use a Voice Activity Detection (VAD) algorithm to detect when a user is talking. This can be done by setting either a volume threshold or a frequency threshold (human speech is usually below 400 Hz, for example). This way, you won't send useless requests to Google unless those conditions are met. I would not recommend using Tor, as it would significantly increase latency. CMUSphinx is probably the best local option, but if you still want to use a web-based service, I would recommend either using a Voice Activity Detection algorithm or finding a different web-based service.
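A crude volume-threshold VAD can be built with the Web Audio API, for instance (a sketch; the threshold value is arbitrary and would need tuning):

```js
// Resolve once the microphone input is loud enough to look like speech.
async function waitForSpeech(threshold = 0.05) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  return new Promise((resolve) => {
    (function check() {
      analyser.getFloatTimeDomainData(samples);
      // Root-mean-square amplitude as a rough loudness measure.
      const rms = Math.sqrt(
        samples.reduce((sum, x) => sum + x * x, 0) / samples.length
      );
      if (rms > threshold) resolve(stream);
      else requestAnimationFrame(check);
    })();
  });
}

// Usage: only start a recognition request once someone is actually talking.
// waitForSpeech().then(() => recognition.start());
```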
I have a database of 3D models. I want users to be able to rotate a model and view it in the web page.
So I have to implement a rendering algorithm for this that responds instantly.
A raytracing/raycasting method on the CPU is preferred, since the server has no GPU.
I understand that a primary-ray-only ray tracer with SSE and a KD-tree/BVH can be very fast. Besides, I want to add some GI effect to it (a fake GI effect, such as SSAO, is also OK for me).
How good a result can I achieve?
(Some NPR rendering methods are also worth considering.)
In HTML5, you can render 3D objects with WebGL (an API based on OpenGL ES) and some JavaScript. The problem is that WebGL is a client technology, so all the rendering is done by the browser.
There is one possible solution if you really want to distribute some logic on a server. You could use a server-side language and send the vertices to the client from a database. Your website could then make AJAX calls to the server, which would perform certain operations and return vertices. The only problem is that this could require a lot of bandwidth.
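A sketch of that approach on the client side (the endpoint is hypothetical, and gl is assumed to be an existing WebGL context):

```js
fetch('/api/models/42/vertices') // hypothetical endpoint returning raw floats
  .then((response) => response.arrayBuffer())
  .then((data) => {
    const vertices = new Float32Array(data);
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
    // ...set up attributes and draw as usual...
  });
```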
Otherwise, another solution would be to use a tool such as Unity to create what you want. Then, you would need to embed the Unity Player in your web page.