LWJGL fullscreen slow(er)

I notice that fullscreen mode with LWJGL takes up a lot of resources. I looked at it with my profiler and saw that Display.update takes a considerable amount of time. Is there a solution for this? Is it a natural occurrence?

Display.update is the main method containing all the pipeline logic and communication with OpenGL, so by nature it is going to be the largest function in the application, much like a Game.doLogic method. Because it contains all the OpenGL communication, it is heavily influenced by OpenGL, which in turn is influenced by the hardware. Obviously, the larger the window, the more pixels there are to draw (among other things), making the frame/render time longer and thus increasing the time spent in Display.update. So yes, it is natural for it to take longer the bigger the frame's resolution.
How much more "resources" exactly? Does "resources" mean hardware load, function timings, or memory usage? I don't see much reason for Display.update to use noticeably more memory the larger the frame's dimensions.

Related

How to measure complete all-in performance of DOM changes?

I've found lots of information about measuring load time for pages, and quite a bit about profiling FPS performance of interactive applications, but this is something slightly different from that.
Say I have a chart rendered in SVG and every click I make causes the chart to render slightly differently. I want to get a sense of the complete time elapsed between the click and the point in time that the pixels on the screen actually change. Is there a way to do this?
Measuring the JavaScript time is straightforward, but that doesn't account for any of the time the browser spends doing layout, flow, paint, etc.
I know that Chrome's Timeline view shows a ton of good information about this, which is great for digging into issues but not so great for taking measurements, because the tool itself affects performance and, more importantly, it's Chrome-only. I was hoping there was a browser-independent technique that might work, something akin to how the Navigation Timing API works for page load times.
You might consider using HDMI capture hardware (just Google for it) or a high-speed camera to create a video, which could be analyzed offline.
http://www.webpagetest.org/ supports software-only capture, but I suspect it would be too slow for what you want to measure.
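One browser-independent, software-only approximation worth mentioning alongside the answers above: a requestAnimationFrame callback fires just before the next repaint, so two nested rAF calls roughly bracket the frame in which a DOM/SVG change was committed. A minimal sketch; the helper name and the injected raf/now parameters are illustrative, not any standard API:

```javascript
// Approximate "click to pixels" time: run the mutation, then wait two
// animation frames. The first rAF fires before the next paint; the one
// nested inside it fires after that paint has (roughly) been committed.
// raf and now are injected so the helper can be exercised outside a browser.
function measurePaintTime(mutate, raf, now, done) {
  const t0 = now();
  mutate();                 // synchronous DOM/SVG change (e.g. chart update)
  raf(() => {               // fires just before the next paint...
    raf(() => {             // ...so this fires once that paint is behind us
      done(now() - t0);
    });
  });
}
```

Browser usage would look like `measurePaintTime(() => updateChart(), requestAnimationFrame.bind(window), () => performance.now(), ms => console.log(ms))`. It can't see compositor-level delays, so treat the result as a lower bound rather than ground truth.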

Why is WebGL render speed so inconsistent?

In my application I plot about 8 million vertices with a single call to WebGL's drawArrays using the LINE_STRIP flag. I don't actually want one long line; I want about 200k short lines, so I cap all the short lines with extra vertices and tell the vertex shader to "push" the line caps into negative z to create invisible bridges. The rendering is quasi-static (the user can click various things that trigger a re-render), so it doesn't have to be super fast, but I'd really hoped it would take less than 200ms on modern-ish computers.
On my laptop [UPDATE: which runs Win7 on an Intel Core i7 CPU with integrated HD Graphics 4000 for a GPU] I get around 100ms in Chrome, which is good. Oddly, though, Firefox takes around 1-2 seconds. On my Samsung Chromebook 550 I get anything from 600ms to 2s; often it starts quick and subsequent renders get slower, but it can get faster too.
Questions:
What might be causing the change in render speed on my Chromebook?
Why is Firefox so much slower than Chrome on my laptop?
Is it worth spending ages trying to make it run faster (i.e. can I expect much improvement)? Any tips?
Notes:
For the Chromebook repeated-rendering tests, the only thing happening between renders is that a uniform is changed to toggle between color palettes (implemented as textures). Chrome's dev tools don't show any major changes in the page's memory usage during the testing.
I'm using gl.finish and console.time to see how long the rendering takes.
Except during debugging, I render to an orphaned canvas and then copy sections of the result to various small canvases on the page [UPDATE: using drawImage, with the WebGL canvas as the first argument]. This probably does take a bit of time, but the numbers reported above don't seem to change much with or without the copy operation, and with or without the WebGL canvas attached to the page body (and visible).
UPDATE: There is a limit to how many vertices my laptop will render in one go, but the limit seems to fluctuate from moment to moment; if you go over it, nothing renders at all. The number is around the 8 million mark, but sometimes it's happy to go over 11 million. I've now set it to batch 2 million at a time. Interestingly, this seems to make my Chromebook go faster, but I can't be sure, as it's so inconsistent.
UPDATE: I've disabled DEPTH_TEST and BLEND as I don't need them. I'm not convinced it made any difference.
UPDATE: I've tried rendering with POINTS instead of LINES. On my Chromebook it seemed to take about 1s with a point size of 0 (i.e. rendering nothing), and then around 1.5-2s as I increased the point size through 1, 2, and 5.
UPDATE: Keeping everything on the z=0 plane doesn't seem to change the speed much; maybe it goes a little slower (which I'd expect, as there are a lot more pixels to get through the fragment shader, though the fragment shader just funnels a varying straight into gl_FragColor).
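For reference, the gl.finish/console.time pattern used in the notes above can be wrapped in a tiny helper. This is a sketch with the GL context and clock injected so it can run outside a browser; one caveat (an assumption worth stating): WebGL's finish() is not guaranteed to block until the GPU has truly completed, so the numbers are approximate, and a 1x1 gl.readPixels read is a common way to force a real sync.

```javascript
// Time a render pass: issue the draw calls, flush the GL command queue,
// then read the clock. gl needs a finish() method; now() returns ms.
function timeRender(gl, draw, now) {
  const t0 = now();
  draw();        // issue all draw calls for the frame
  gl.finish();   // flush queued GL commands before stopping the clock
  return now() - t0;
}
```

Browser usage would be `timeRender(gl, drawScene, () => performance.now())`, where drawScene is whatever issues your drawArrays calls.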
Although the usual (good) advice is to render as much as possible in each draw call, some GPUs (at least one that I know of) have internal buffers used while processing vertex data. Exceeding the capacity of these buffers can make your performance fall off a cliff. Reduce the size of your vertex batches until you start to see a performance drop from having batches that are too small.
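A minimal sketch of the batching this answer suggests: split one huge drawArrays call into fixed-size chunks (the 2-million batch size from the question is just a starting point to tune, not a recommendation).

```javascript
// Split a single large drawArrays call into fixed-size batches.
// gl is a WebGL context (anything with a drawArrays method here).
// Note: with LINE_STRIP, naive batching drops the segment that would
// join two batches; overlap consecutive batches by one vertex if that
// matters for your data.
function drawInBatches(gl, mode, totalVerts, batchSize) {
  for (let first = 0; first < totalVerts; first += batchSize) {
    const count = Math.min(batchSize, totalVerts - first);
    gl.drawArrays(mode, first, count);
  }
}
```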

Way to detect if WebGL viewport is on screen?

Is there any way to be able to query the GPU to tell me if my viewport in my webpage is currently on screen or not? For example, if I had a 3d scene rendering in a canvas in an iframe, is there a way to query the hardware (within my iframe and only the pixels or verts in the viewport) to say if I am on screen or scrolled off screen?
I'm curious whether this is something I can do at the vertex-shader level. Does WebGL even run the shader program for a viewport that is off screen (say, scrolled below the fold, or obstructed by another browser window)? Is there a way to query the compositing portion of WebGL to see if it is even in view, or to iterate through the "RenderObject" tree to test whether it is on screen and return that value? I am trying to get much more performance out of a project I am working on by rendering only what is visible on screen.
Any possible ideas? Is this even possible? Thanks!
requestAnimationFrame is the only reasonable way to handle unnecessary performance loss, even semantically: window.requestAnimationFrame tells the browser that you wish to perform an animation, so the browser will figure out how to handle your wish optimally, taking the current page state into account.
Since iframes can communicate (e.g. via local storage), you can push your base page's state to them so each can decide whether or not to requestAnimationFrame. But I'm not sure it's a good idea to have multiple render contexts on your page: they all consume resources and can't share them (data stored on the GPU is sandboxed), so eventually they will push each other out of GPU memory and cause lag, and the GPU pipeline may not be happy with all those tiny standalone entities. Fragmentation is a main enemy of GPU performance.
You can't ask this question at the canvas/WebGL level, because the canvas might, for example, be scrolled back on screen before you draw another frame, and browsers don't want to be caught with no content to show, so there is no provision for skipping drawing.
I believe you will have to consult the DOM geometry properties (e.g. .scrollLeft) of your scrollable areas to determine whether the canvas is visible. There is enough information in those properties to do this generically, without hardcoding knowledge of your page structure.
Also, make sure you are exclusively using requestAnimationFrame for your drawing/simulation scheduling; it will pause animations if the page is hidden/minimized/in another tab/otherwise explicitly invisible.
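A sketch of the DOM-geometry check this answer points at. Using getBoundingClientRect is one alternative to reading .scrollLeft by hand; the rect is passed in as a plain object, so the predicate itself has no DOM dependency (an assumption made purely so it can be tested anywhere):

```javascript
// True if any part of the rect overlaps the viewport. rect has the shape
// returned by getBoundingClientRect(): top/left/bottom/right in viewport
// coordinates (negative values mean scrolled above/left of the viewport).
function isRectOnScreen(rect, viewportWidth, viewportHeight) {
  return rect.bottom > 0 && rect.right > 0 &&
         rect.top < viewportHeight && rect.left < viewportWidth;
}
```

In a page you would combine it with the rAF loop mentioned above, e.g. only calling your render function when `isRectOnScreen(canvas.getBoundingClientRect(), window.innerWidth, window.innerHeight)` is true.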

maximum stage and sprite size?

I'm making an action game and I'd like to know what should be the maximum size of the stage (mine is 660 x 500).
Also, I'd like to know how big a game sprite should be. Currently my biggest sprites have a size of 128 x 128, and I read somewhere on the internet that you should not make them bigger because of performance issues.
If you want to make, e.g., big explosions with shockwaves, even 128 x 128 does not look very big. What's the maximum size I can safely use for sprites? I cannot find any real answer on this, so I appreciate every hint I can get; this topic makes me a little nervous.
Cited from:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/DisplayObject.html
http://kb2.adobe.com/cps/496/cpsid_49662.html
Display objects:
Flash Player 10 increased the maximum size of a bitmap to a maximum pixel count of 16,777,215 (the decimal equivalent of 0xFFFFFF). There is also a single-side limit of 8,191 pixels. The largest square bitmap allowed is 4,095 x 4,095 pixels.
Content compiled to a SWF 9 target and running in Flash Player 10 or later is still subject to Flash Player 9 limits (2,880 x 2,880 pixels). In Flash Player 9 and earlier, the limit is 2,880 pixels in height and 2,880 pixels in width.
Stage:
The usable stage size limit in Flash Player 10 is roughly 4,050 x 4,050 pixels. However, the usable size of the stage varies depending on the settings of the QUALITY tag. In some cases, it's possible to see graphic artifacts when the stage size approaches the 3,840-pixel range.
If you're looking for hard numbers, Jason's answer is probably the best you're going to do. Unfortunately, I think the only way to get a real answer for your question is to build your game and do some performance testing. The file size and dimensions of your sprite maps are going to affect RAM/CPU usage, but how much is too much will depend on how many sprites are on the stage, how they interact, and what platform you're deploying to.
A smaller stage will sometimes get you better performance (you'll tend to display fewer things), but what you do with it matters more. Also, a game with a stage larger than 800x600 may turn off potential sponsors (if you go that route with your game) because it won't fit on their portal sites.
Most of my sprite sheets use tiles less than 64x64 pixels, but I have successfully implemented a sprite with each tile as large as 491x510 pixels. It doesn't have a super-complex animation, but the game runs at 60fps.
Bitmap caching is not necessarily the answer, but I found these resources to be highly informative when considering the impact of my graphics on performance.
http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c36c11f3d612431904db9-7ffc.html
and a video demo:
http://tv.adobe.com/watch/adobe-evangelists-paul-trani/optimizing-graphics/
Also, as a general rule, build your game so that it works first, then worry about optimization. A profiler can help you spot memory leaks and CPU spikes. FlashDevelop has one built in, there's often a console in packages like FlashPunk, or the good old-fashioned Windows Task Manager can be enough.
That might not be a concrete answer, but I hope it helps.

Maximum number of canvases (used as layers)?

I am writing an HTML5 canvas app in javascript. I am using multiple canvas elements as layers to support animation without having to re-draw the whole image every frame.
Is there a maximum number of canvas elements that I can layer on top of each other in this way (and see an appropriate result on all of the HTML5 platforms, of course)?
Thank you.
I imagine you will probably hit a practical performance ceiling long before you hit the hard specified limit of somewhere between several thousand and 2,147,483,647, depending on the browser and what you're measuring (the number of physical elements allowed in the DOM, or the maximum allowable z-index).
This is correlated to another of my favorite answers to pretty much any question that involves the phrase "maximum number" - if you have to ask, you're probably Doing It Wrong™. Taking an approach that is aligned with the intended design is almost always just as possible, and avoids these unpleasant murky questions like "will my user's iPhone melt if I try to render 32,768 canvas elements stacked on top of each other?"
This is a question of the limits of the DOM, which are large. I expect you will hit a performance bottleneck before you hit a hard limit.
The key in your situation, I would say, is to prepare some simple benchmarks/tests that dynamically generate Canvases (of arbitrary number), fill them with content, and add them to the DOM. You should be able to construct your tests in such a way where A) if there is a hard limit you will spot it (using identifiable canvas content or exception handling), or B) if there is a performance limit you will spot it (using profiling or timers). Then perform these tests on a variety of browsers to establish your practical "limit".
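As a rough illustration of such a benchmark, here is a probe that keeps creating canvases until creation fails or a time budget runs out. The createCanvas and now parameters are hypothetical injections so the loop can be tested anywhere; in a page you would pass something like `() => document.body.appendChild(document.createElement('canvas'))` and `() => performance.now()`.

```javascript
// Create canvases until a hard limit (creation throws) or a soft limit
// (time budget exhausted, suggesting a performance cliff) is reached.
// Returns how many canvases were successfully created.
function probeCanvasLimit(createCanvas, now, budgetMs, hardCap) {
  const start = now();
  let made = 0;
  try {
    while (made < hardCap && now() - start < budgetMs) {
      createCanvas();
      made++;
    }
  } catch (e) {
    // Creation itself failed: we found a hard limit.
  }
  return made;
}
```

Filling each canvas with identifiable content before appending it (as the answer suggests) lets you also confirm visually that every layer actually rendered.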
There are also great resources available here https://developers.facebook.com/html5/build/games/ from the Facebook HTML5 games initiative. Therein are links to articles and open source benchmarking tools that address and test different strategies similar to yours.