I've already read all the stuff around scrolling:
Structuring an HTML5 Canvas/JS Game
and so on:
HTML5 Canvas tutorial
The secret to silky smooth JavaScript animation
Google search "HTML5 Scrolling"
Canvas Games
Build a vertical scrolling shooter game with HTML5 canvas
Math mayem
CAAT JavaScript framework
(The last one is impressive, but even though it covers almost everything, there's nothing about scrolling.)
Here's what I'm thinking about, and I haven't found anything valuable on it. An idea just came to my mind and I'm wondering whether I should spend a lot of time thinking it through and trying it, or not (that's why I'm asking here, actually).
I'm planning to make a game with side-scrolling "à la" Mario.
The big drawback of scrolling is that you have to redraw the whole background.
I've already avoided two performance problems of sprites / scrolling by creating two canvases, one on top of the other:
the background
the sprites
And just erase the sprites.
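For reference, a minimal sketch of that two-layer setup (the IDs, sizes and the drawSprites function here are just illustrative, not my actual code):
HTML:
<div style="position: relative; width: 500px; height: 250px">
    <canvas id="background" width="500" height="250" style="position: absolute; left: 0; top: 0"></canvas>
    <canvas id="sprites" width="500" height="250" style="position: absolute; left: 0; top: 0"></canvas>
</div>
JS:
// The background canvas is drawn once and then left untouched;
// each frame only the sprite layer is cleared and redrawn.
var spriteContext = document.getElementById("sprites").getContext("2d");
function drawSprites() {
    spriteContext.clearRect(0, 0, 500, 250);
    // ... draw the sprites at their new positions ...
}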
The problem is the background: I'm making a full copy of the background onto the "visible" canvas. (Note: there's no flickering problem, because drawing in JavaScript is a blocking operation and all modern browsers handle vertical sync, so no double buffering is needed.)
Here's an old version of what I'm writing, but you'll get the big picture:
Test HTML5
Now I'm wondering about the scrolling: what if I used a "background div" instead of a canvas, with the appropriate CSS (a background image for the background), wrote the tiles onto the image directly, then changed the CSS to simulate the scrolling? Would it be faster? If so, why? Are there any good ideas out there for this?
On a semi-modern+ computer with a semi-recent+ browser, the fastest thing to do is probably to take a super-long div with background images, set overflow to hidden, and scroll by adjusting the scrollLeft or scrollTop properties. This is MUCH faster than adjusting CSS properties, as it shouldn't trigger any reflow calculation in the CSS engine. Basically, any time you touch a DOM property that could have CSS impact, the whole (or at least a lot of) the structure of the document needs to be re-checked and re-rendered.
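A minimal sketch of that idea (the IDs, sizes and tiles.png are made up): a viewport-sized wrapper with overflow: hidden around a very wide strip, scrolled by bumping scrollLeft.
HTML:
<div id="viewport" style="width: 500px; height: 250px; overflow: hidden">
    <div id="world" style="width: 30000px; height: 250px; background: url(tiles.png) repeat-x"></div>
</div>
JS:
// Scroll by adjusting scrollLeft on the clipped wrapper;
// no style property that would trigger reflow is touched.
var viewport = document.getElementById("viewport");
var x = 0;
setInterval(function () {
    x += 4;                   // scroll speed in px per tick
    viewport.scrollLeft = x;
}, 16);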
You could load in chunks of the background as they get close, to avoid one giant massive image load. I don't believe there is any 100% surefire way to pull an image out of memory across browsers, but removing references to it in the DOM or in your CSS probably doesn't hurt once you've scrolled far enough past a piece of your background. That way the browser's garbage collector at least has the option of clearing the memory.
In pan-mobile solutions that rely on webviews (like Cordova/PhoneGap), however, things are messier; that's actually how I arrived at this question.
I have no idea, but I'm pretty sure mixing HTML and canvas is a lousy idea for performance purposes. Right now I've got a not-super-complicated Android game choking on 50x50 100px tiles in a canvas element in an Android web view that also has some basic HTML in the mix for stuff like controls and separating a couple of other canvas elements from the rest. I basically have a bird's-eye-view ship cruising around and doing scans with a cheesy radiating-circle graphic that reveals elements on a map, fog-of-war style. Works great in a browser. Complete disaster in the Cordova app version.
I suspect the only way I'm going to realize my pan-mobile game dev dreams is to use one of the many OpenGL-wrapped-in-a-canvas-API solutions out there and completely ditch the HTML, which I find damned convenient for UI implementation, given that the bulk of my experience is in web UI. Another general tip for web view HTML is to avoid scrolling within internal elements if you can (so just let the body handle overflow). Scrolling overflow in anything but the body didn't even work in Android 2's webviews, and it still seemed to make 4.1's views choke on an earlier/simpler version of the app I'm working on.
My final conclusion: after many attempts, including Erik Reppen's suggestion, I draw "raw" directly into hidden parts of the canvas and use CSS: web browsers already handle image flickering and all that stuff, and they have already optimized everything.
So each time I've tried to "optimize", the results were worse.
It seems that web browsers are built to handle the basic stuff that beginners write properly... maybe because 99% of HTML content is made by beginners.
Here is a demo of scrolling an oversize canvas by changing the CSS margin, this one based on scrolling with time: https://jsfiddle.net/6othtm0c/
And this version with mouse dragging: https://jsfiddle.net/ax7n8944/
HTML:
<div id="canvasdiv" style="width: 500px; height: 250px; overflow: hidden">
<canvas id="canvas" width="10000px" height="250px"></canvas>
</div>
JS for the scroll-with-time version:
var canvas = document.getElementById("canvas");
var context = canvas.getContext('2d');

// Pre-render the entire 10000px-wide background once.
for (var i = 0; i < 1000; i++) {
    context.beginPath();
    context.arc(Math.random() * 10000, Math.random() * 250, 20.0, 0, 2 * Math.PI, false);
    context.stroke();
}

// Scroll by shifting the oversized canvas with a negative CSS margin;
// nothing is ever redrawn.
var t0 = window.performance.now();
setInterval(function(){
    canvas.style.marginLeft = -(window.performance.now() - t0)/5 + "px";
}, 5);
The fastest scrolling is to scroll using CSS. So you draw the whole background once (not only the visible part, but all of it), hide what is not visible, and use CSS to scroll it (margin, or position). No redraw, only CSS changes. This really is the fastest approach. But if the map is really huge, other custom approaches can be better.
Related
We are facing the following challenge: we are creating a behavioral experimentation library, which needs to be able both to show random shapes and to display forms.
For the shape-drawing part we use pixi.js, and even though we know it can also use canvas 2D, we prefer it to use WebGL as its rendering engine, which uses the 3D context of the canvas. Pixi, however, doesn't really have the ability to draw form elements on the canvas, so we decided to use Zebra/Zebkit for this, but Zebkit can only draw to a 2D context.
According to many sources, it's impossible to use a 2D and a 3D context simultaneously on a single canvas, or to switch from a 2D to a 3D context (and vice versa) after the canvas has been initialized. We therefore decided to create two separate canvases, one with a 3D context to use with pixi.js, and one with a 2D context to use with Zebra/Zebkit. When necessary, we switch canvases by showing one and hiding the other.
This works quite well when the canvases are embedded in the web page, but less well when we want to display the experiment fullscreen. It is very difficult to switch from one canvas to the other in fullscreen, because you can only choose one DOM element at a time to be displayed fullscreen, and weird stuff happens when you start hiding the fullscreen element to show another. My question is: what would be the best approach to tackle this problem? I already have several in mind:
Put both canvases in a container div, and display this container fullscreen instead of the canvases themselves (see the sketch after this list). I don't know if this is possible, or if it will have any negative side effects compared to showing a canvas fullscreen directly.
Render the zebkit canvas on top of the pixi canvas by making sure it is on top of the overlay item, as suggested in How do I make a DIV visible on top of an HTML5 fullscreen video?. This seems very hacky though, and I already smell inconsistency issues between the various browsers that are around.
Use the method described in How do I make a DIV visible on top of an HTML5 fullscreen video? to render normal HTML form elements on top of the pixi canvas. I predict there will be some resolution/rendering problems to tackle though, because you don't have the same degree of control over the pixel raster as you have with canvas items.
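For what it's worth, a minimal sketch of the first option: request fullscreen on a wrapper div that contains both canvases (the #stage id is an assumption, and older browsers need the vendor-prefixed variants):
JS:
// Both canvases live inside <div id="stage">; make the wrapper fullscreen
// and keep switching canvases by toggling their display as before.
// Note: this must be called from a user-gesture handler (e.g. a click),
// otherwise the fullscreen request is rejected.
var stage = document.getElementById("stage");
if (stage.requestFullscreen) {
    stage.requestFullscreen();
} else if (stage.webkitRequestFullscreen) {
    stage.webkitRequestFullscreen();
} else if (stage.mozRequestFullScreen) {
    stage.mozRequestFullScreen();
}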
So I am making this horizontal-scroll site which has a ton of images. I planned on using SVGs for the entire site, but with only 20-30 SVG images of medium to high complexity used on the page, Chrome already seems to be showing some jank and high paint times on scroll (and Firefox is even worse, though Safari seems to do a lot better).
Scroll timeline
View the site (scrolling works on Mac only; Windows users can use the arrow keys)
My question is: if I were to use PNGs instead of SVGs, would it reduce the paint times and hence the jank? Why is the browser struggling with only around 20-odd SVG images?
As I suspected, the problem turned out to be something completely different. Browsers are more than capable of handling multiple vector images. But what they aren't good at (and understandably so) is redrawing those images very often.
Problem
My long horizontal-scroll site was quite wide (30,000px). I had a background-color property applied to one of the lower z-indexed divs to represent the sky throughout the site. I didn't want the sky to stretch across the entire 30,000px, since it essentially didn't change much, so I gave it viewport width and height, with:
position:fixed;
Not a very smart move. It turns out that this property was causing my document layer to be repainted on every frame. Initially I thought it was normal for browsers to do so on scroll, since Robby Leonardi's site, which I used as a reference, also repainted every frame.
Solution
Thanks to this article by one of the Chrome DevTools developers, I set aside conventional wisdom and made the sky layer
position:absolute;
and stretched it across the entire site width, and boom! The paint rectangles were gone, and the scroll performance was smoother than butter.
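A rough sketch of the change, with an illustrative class name (my real selectors differ):
CSS:
/* Before: viewport-sized sky with position: fixed, repainted on every scroll frame */
.sky {
    position: fixed;
    width: 100vw;
    height: 100vh;
}

/* After: stretched across the whole 30,000px site with position: absolute, painted once */
.sky {
    position: absolute;
    width: 30000px;
    height: 100vh;
}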
Other solutions I tried
Hiding elements not near the viewport to make painting lighter, as suggested by #philipp, but it didn't yield any appreciable difference. It also felt super hacky, and it wasn't targeting the root cause of the problem.
I tried modularizing my site into scenes and using the translateZ(0) hack on each scene so that only the smaller scenes get repainted instead of the whole document. This actually helped quite a bit, and scrolling was decent. Then,
I gave all the SVG images their own layer by using translateZ(0). I started getting an FPS of around 60, but again, this wasn't the right way of doing things.
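For reference, the layer-promotion hack from the last two points is just a transform that pushes an element onto its own compositor layer; the class names here are illustrative:
CSS:
/* Promote each scene (or each SVG wrapper) to its own layer so that a
   repaint of one does not repaint the whole document. */
.scene,
.svg-wrapper {
    transform: translateZ(0);
}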
I once had a similar thing. The SVG was 10 or more times as wide as the one shown above, it contained ~20k elements, and was about 3MB in size. The only thing that brought back performance (since it was a jump-and-run game) was an algorithm which could find all elements whose bounding box overlapped the viewport. With this I could use display: none; to hide everything that was invisible.
That reduced the number of visible elements to ~150 per frame and the game ran fluently again.
I used a balanced binary tree (AVL tree) and a one-dimensional range query, since the height of the viewport was always the same as the image's.
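To illustrate the idea (without the AVL tree; just a linear scan over precomputed bounding boxes, and assuming the SVG's direct children are the elements to cull):
JS:
// Precompute the horizontal extent of every element once.
var svg = document.querySelector("svg");
var items = Array.prototype.map.call(svg.children, function (el) {
    var box = el.getBBox();
    return { el: el, left: box.x, right: box.x + box.width };
});

// On every frame, hide everything that does not overlap the viewport.
function cull(viewLeft, viewRight) {
    items.forEach(function (item) {
        var visible = item.right >= viewLeft && item.left <= viewRight;
        item.el.style.display = visible ? "" : "none";
    });
}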
Good luck!
[EDIT]
Forgot to leave something like an answer. In my experience, large/huge SVG graphics are a bottleneck in rendering, especially if there is a lot of scripting happening. If you do not need any interactivity with the elements of the graphic, so it's nothing more than a large background image, I would recommend using a tile map based on PNG images; that is the standard way in jump'n'run games with huge »worlds«, and you gain performance in two ways:
Rendering is faster,
You can »lazy load« tiles via Ajax, depending on visibility, so users don't download the »whole world« at startup (see the sketch at the end of this answer).
Additionally, you could use something like PIXI.js to render with WebGL, which will boost performance drastically, and with it comes support for tile maps and sprite sheets.
If you insist on the advantages of vector graphics (scaling, interactivity), then you need to find a way to hide as many elements as possible to keep the frame rate high.
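A rough sketch of the lazy loading mentioned in the list above; the tile width and the tiles/tile_<n>.png naming scheme are made up:
JS:
// The world is cut into 512px-wide PNG tiles; create each <img> only
// when it comes near the viewport, so it is downloaded on demand.
var TILE_W = 512;
var loaded = {};
function loadVisibleTiles(scrollX, viewWidth, container) {
    var first = Math.floor(scrollX / TILE_W);
    var last = Math.floor((scrollX + viewWidth) / TILE_W) + 1; // one tile of lookahead
    for (var i = first; i <= last; i++) {
        if (loaded[i]) continue;
        var img = new Image();
        img.src = "tiles/tile_" + i + ".png";
        img.style.position = "absolute";
        img.style.left = (i * TILE_W) + "px";
        container.appendChild(img);
        loaded[i] = true;
    }
}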
I scale an image down using CSS, but then its edges are jagged. However, if I quickly switch to another tab in Chrome and then come back, it is drawn correctly. I assume that this is because of something that happens during redraw. Is there any way to force a redraw using jQuery? I have tried adding classes, adding elements, and changing other attributes.
Ok, thanks to the bump, I will add my solution here. What is really happening is we are trying to force a repaint of the entire window. The following does the trick:
function reDraw(){
    // hack to redraw the elements on the page to avoid the choppy look of resized items
    // the timeout prevents reDraw from firing too early
    setTimeout(function(){ $('body').hide().show(0); }, 66);
}
The show/hide combination forces a repaint of the affected area. Note that the 0 on show is needed. The 66ms delay is used because forcing a repaint immediately after applying styles (in this case, a CSS resize) would bypass the browser's style recalculation. 66ms is approx. 15fps, so it should still appear to happen instantaneously on any screen running at 30fps (it will take two screen refreshes if all goes well). A small blip from pixelated to non-pixelated is visible on a 60fps display, but how many people pay that close attention anyway?
Anyway, that is our solution. For us, it was used on a website that is built very much like a video game as far as the animation loop and other things go. Because the screen is being refreshed a lot already, we found we only needed to call the reDraw function after resizing a PNG, but your requirements may be different.
Note that this function can be very resource intensive, and I have observed that many browsers need to collect garbage after this so you may need to evaluate how necessary the realtime aspect is. Use sparingly.
Enjoy!
~techdude
I have to place on a web page a cylinder that looks like this:
It is composed of small images that overlap to draw the curves on the surface. Each of them is placed on the page with its own img tag wrapped in an anchor with its own href. The z-index property of the img is used to make them overlap in the right way.
The cylinder has to be assembled from pieces because it is created dynamically; as you can see from the image, its faces can have different colors.
What I need to do is make all the faces clickable, with each one pointing to a different URL.
My problem is, of course, that the cylinder has curves, and I have to be sure that the clicks go to the correct URL, especially near the curves. It doesn't have to be pixel-precise, but at least acceptable.
I've tried using a map with a single area for each of the images that compose the cylinder, but of course it didn't work: as I saw from the specifications, in such cases only the first map declared in the DOM works.
I'm thinking about solving this via JavaScript, but I don't think it would be an easy job, so I would be happy if someone could give me some advice on what I should try.
Oh, and I cannot use HTML5 features to solve this.
Neat application of older technology to solve a challenging puzzle.
I can think of two ways forward for you. One is to put a transparent (rectangular) image on top of the cylinder and create an HTML image map, using the shape="poly" attribute. For reference, look up the HTML map and area elements, especially the shape attribute; there should be many good tutorials online. Nowadays this technique isn't used that much any more, but it was really popular in the late 90s.
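A rough sketch of that first approach; the coordinates, file names and URLs are made up, and the overlay image is just a transparent PNG the size of the cylinder:
HTML:
<img src="transparent_overlay.png" width="300" height="400" usemap="#cylinder-map">
<map name="cylinder-map">
    <!-- each poly traces the curved outline of one face -->
    <area shape="poly" coords="0,40,60,10,150,0,240,10,300,40,300,120,0,120" href="face1.html" alt="Face 1">
    <area shape="poly" coords="0,120,300,120,300,200,0,200" href="face2.html" alt="Face 2">
</map>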
Another way is to use event delegation in JavaScript, attaching an event listener to the primary container. On each of your image "pixels", apply a CSS class for the appropriate portion of the cylinder it is in. In your event handler you can then do something different depending on the class of the clicked-on image, and you can do this without the massive overhead of attaching an event to each individual "pixel". In jQuery this would be something like:
$("#cylinder").on("click", ".green", function() { location.href = "green_url"; }
$("#cylinder").on("click", ".red", function() { location.href = "red url"; }
assuming you put class="green" on your green pixels and class="red" on your red pixels. (You can do this by quadrant or another technique; color is just an example.)
Your best bet is SVG! https://developer.mozilla.org/en-US/docs/SVG/Tutorial
It is almost impossible to do this with HTML DOM elements; you would have to bend them with CSS in a way that's compatible with all browsers.
There is also canvas, but you will have a hard time dealing with the clicks.
The only problem with SVG is that it's not supported below IE8, and hardly in IE8 itself. But bending a DOM element is also not possible below IE9.
EDIT:
I saw that you can't use HTML5, so your only option is generating the whole image in GD2, for example, and trying to map the points. But what is the reason you can't use HTML5?
You might also try doing it with JavaScript / canvas via the getImageData() function. This canvas function returns the RGBA values of a given point. Using the alpha value you can check whether the mouse is over or clicking on the correct area, or whether it is a transparent area where nothing should happen.
I also made a jQuery plugin exactly for this purpose. Maybe it will help: http://www.cw-internetdienste.de/pixelselection/
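To illustrate the getImageData() idea, a minimal sketch that assumes the cylinder has been drawn onto a canvas; the id, colors and URLs are made up:
JS:
var canvas = document.getElementById("cylinder-canvas");
var context = canvas.getContext("2d");
canvas.addEventListener("click", function (e) {
    var rect = canvas.getBoundingClientRect();
    var x = e.clientX - rect.left;
    var y = e.clientY - rect.top;
    // Read the single pixel under the cursor: [r, g, b, a]
    var pixel = context.getImageData(x, y, 1, 1).data;
    if (pixel[3] === 0) return;                      // transparent: not on the cylinder
    if (pixel[0] > 200 && pixel[1] < 100) {          // roughly "red" face
        location.href = "red_url";
    }
});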
I have a design I'm creating in CSS, and it has started to sort of, er, lazy scroll. By that I mean the scrollbar lags a bit when you are scrolling. What are common causes of this so that I can debug it from my site?
EDIT:
The document has very little content (not even a paragraph), so not much at all. No flash, two images.
EDIT:
I feel so stupid. An improperly formatted background: property was causing the issue. Thanks nonetheless, everyone.
It's likely to be due to heavy processing requirements from CSS.
(CSS does affect scrolling in every browser.) I have seen this scenario many times (the worst case is with SVG). It usually hits browsers like Chrome hard because of their anti-aliasing.
There was a great website that detailed CSS effects from the heaviest to the safest properties to use; sorry, I don't have the link. From my experience, though, I would say to consider:
Gradients: the more you use them, or the larger the area they cover, the heavier the rendering calculations become. Abusing stops and additional colors also adds to the mayhem.
Border-radius: it usually has to clip whatever internal content it contains. I've noticed differences when it's excluded.
Opacity can be the main issue when coupled with other CSS effects. In certain scenarios I've found great improvements when removing opacity or reducing its usage, as it's not just transparency it's driving; in some browsers it also affects the anti-aliasing of text.
Images: the way images can affect scrolling should be obvious, though I've discovered that resizing an image away from its native resolution can become a more noticeable factor.
Use of properties such as background-size draws huge power in certain situations; a workaround could be to scrap the div, replace it with an <img>, and overlay it with a blank div containing the text/content (a rough sketch follows at the end of this answer).
Animations, transitions and translations are obvious power eaters if abused, especially animations that loop continuously or resize with the browser via percentages.
Bear in mind that someone on a low-spec Celeron PC will have a terrible experience on a site that lags even on your reasonably/high-powered PC/Mac.
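Here is a rough sketch of the background-size workaround mentioned above; the file name is made up:
HTML:
<div style="position: relative">
    <!-- the image scales itself instead of being painted via background-size -->
    <img src="photo.jpg" style="display: block; width: 100%; height: auto">
    <div style="position: absolute; left: 0; top: 0; width: 100%; height: 100%">
        text / content here
    </div>
</div>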