How to create retained graphics using SurfaceImageSource? - windows-runtime

I'd like to create a XAML application that progressively displays some graphics in part of the screen. There can be a large number of elements, so I'd like to retain the elements already drawn and only draw the new ones as they come in. Unfortunately, I can't get at the swap chain from the SurfaceImageSource (tell me if that is possible), so I can't just copy buffers with each draw call.
How can I draw new elements onto the SurfaceImageSource while preserving the old ones in a performant way?

I think a swap chain is something you would use with a screen rather than with a surface like the one backing a SurfaceImageSource, so I don't think it has a swap chain at all - it is just a single texture. I haven't tried it, but I would assume that if you don't clear it between render calls (with the ClearRenderTargetView method), the contents should stay as they were. If you want to reuse only a part of the content, you should probably create a separate texture and a render target view that you use to render into that texture, and then copy the contents of that texture to the SurfaceImageSource texture in some way. I am not sure what the best way to do that would be - maybe memory-mapping the textures and doing a memcpy between them, or rendering one as a textured quad (two triangles) onto the other.
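I can't fit a full DirectX/SurfaceImageSource interop sample here, but the retained-drawing pattern itself - keep one persistent surface, append only the new elements, never clear - is the same whatever the API. Here is a minimal sketch of that pattern in TypeScript/canvas terms, purely for illustration (the canvas ids and the Circle type are placeholders, not part of the original question):

```typescript
// Persistent offscreen buffer: draw only the NEW elements each frame,
// never clear, so everything drawn earlier is retained.
interface Circle { x: number; y: number; r: number; }

const buffer = document.createElement("canvas");   // retained surface
buffer.width = 800;
buffer.height = 600;
const bufCtx = buffer.getContext("2d")!;

const screen = document.querySelector<HTMLCanvasElement>("#screen")!; // on-screen canvas (placeholder id)
const screenCtx = screen.getContext("2d")!;

function addElements(newElements: Circle[]): void {
  // Append only the new elements to the retained buffer; older content stays put.
  for (const c of newElements) {
    bufCtx.beginPath();
    bufCtx.arc(c.x, c.y, c.r, 0, Math.PI * 2);
    bufCtx.fill();
  }
  // Present: copy the whole retained buffer to the visible canvas.
  screenCtx.drawImage(buffer, 0, 0);
}
```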

Related

AS3: Using mask vs Cropped images

I have spritesheets of tiles for a new game I'm developing, and I'm planning to use them with mask layers. For example, each spritesheet has 30 different tiles; to use one of them I plan to change the spritesheet's x and y so the mask shows only the wanted tile.
But I'm worried this may strain the CPU.
For example, if there are 30 tiles on the screen and each spritesheet has 30 different tiles, that makes 900 tiles if I use mask layers instead of cropping each tile.
So the question is: if I use a mask layer, does it affect the CPU in a bad way, or is only the part under the mask calculated on the CPU?
I hope I've described the problem clearly.
Thank you
-Ozan
Starling is a whole new field, as it's using Stage3D. This means you have to rethink your development flow - everything must be textures, and so there are a lot of limitations - you cannot simply design a button in your Flash IDE, give it a name and use it. I'm not saying it's bad, no, I just say you have to use it wisely, and if you don't have any experience with it, it will take time to learn.
I think you are doing some calculations wrong. You say "for example if there are 30 tiles on the screen and if each spritesheet has 30 different tiles, that makes 900 tiles", which means you have 30 spritesheets with 30 tiles on each. That is a lot of tiles for your game, are you sure? :)
Anyway, the common approach is to use each tile of the spritesheet as an individual one. That's why it's called a spritesheet. And the reasoning behind it is very simple - the memory it will use. Imagine you have 100 tiles in one spritesheet (for easier calculations), and this spritesheet is 1mb. If you slice it into smaller chunks (100 of them), their total size will also be close to 1mb. So if you do this and delete the original source, the RAM taken by those bitmaps will be close to the original.
Then you want to use a single tile (or let's say your map is just "water" and you use only one tile). You instantiate many instances of the very same class, and because you use only one kind, the memory that will be taken in order to be displayed is 1/100 of the original 1mb.
What I mean is that the worst-case scenario would be to use all 100 of them at the same time, and this would take 1mb of memory (I'm talking about the images only). Every time you use fewer, the memory taken will decrease.
The approach of having a mask is worse, because even if you use a single tile, it will put the whole original spritesheet into memory - single tile, 1mb. It will also draw a mask object (a Sprite), and will need to precalculate that mask and remove the outer part of the Bitmap. This is more memory, more CPU calculation, and more graphics rendering (as it will draw the cropped Bitmap every time you instantiate).
I think this will give you an idea why it's used that way! :) If you have some fancy regions and that's the reason you want to use masking instead, then use some spritesheet packager program. It will provide you with a data file describing the regions of the spritesheet that are used, and so with a single class (there are many for that) it will parse your initial Bitmap, create Bitmap children for each chunk and destroy the original. And the coordinates won't matter.
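For what it's worth, here is roughly what that slicing looks like, sketched in TypeScript/canvas terms rather than AS3 (in AS3 you would do the same thing with BitmapData.copyPixels); the tile size is a made-up example:

```typescript
// Slice a spritesheet into standalone tile images so only the tiles you
// actually use stay referenced in memory (tile size is a made-up example).
const TILE_W = 32;
const TILE_H = 32;

function sliceSpritesheet(sheet: HTMLImageElement): HTMLCanvasElement[] {
  const tiles: HTMLCanvasElement[] = [];
  const cols = Math.floor(sheet.width / TILE_W);
  const rows = Math.floor(sheet.height / TILE_H);
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      const tile = document.createElement("canvas");
      tile.width = TILE_W;
      tile.height = TILE_H;
      const ctx = tile.getContext("2d")!;
      // Copy just this tile's region out of the sheet.
      ctx.drawImage(sheet, col * TILE_W, row * TILE_H, TILE_W, TILE_H,
                    0, 0, TILE_W, TILE_H);
      tiles.push(tile);
    }
  }
  // Drop the reference to `sheet` afterwards so the original can be collected.
  return tiles;
}
```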
Cheers! :)

Is there a performance benefit to pre-rendering an HTML5 canvas circle?

I understand it's often faster to pre-render graphics to an off-screen canvas. Is this the case for a shape as simple as a circle? Would it make a significant difference for rendering 100 circles at a game-like framerate? 50 circles? 25?
To break this into two slightly different problems, there are two aspects to what you're asking:
1) is drawing a shape off-screen and putting it on-screen faster
2) is drawing a shape one time and copying it to 100 different places faster than drawing a shape 100 times
The answer to the first one is "it depends".
That's a technique known as "buffering" and it's not really about speed.
The goal of buffering an image is to remove jerkiness from it.
If you drew everything on-screen, then as you loop through all of your objects and draw them, they're updating in real-time.
In the NES days, that was normal, because there wasn't much room in memory or much power to do anything about it, and because programmers didn't know much better, given the limited instructions they had to work with.
But that's not really the way games do things, these days.
Typically, they call all of the draw updates for one frame, then they take that whole frame as a finished image, and paste that whole thing on the screen.
The GPU (and GL/DirectX) takes care of this, by default, in a process called "double-buffering".
It's a double-buffer, because there's room for the "in-progress" buffer used for the updates, as well as the buffer that holds the final image from the last frame, that's being read by the monitor.
At the end of the frame processing, the buffers will "swap". The newly full frame will be sent to the monitor and the old frame will be overwritten with the new image data from the other draw calls.
Now, in HTML5, there isn't really access to the frame-buffer, so we do it ourselves; make every draw call to an offscreen canvas. When all of the updates are finished (the image is stable), then copy and paste that whole image to the onscreen canvas.
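A minimal sketch of that manual buffering, assuming a plain 2D canvas (the element id and the drawScene callback are placeholders):

```typescript
// Manual "double buffering": draw the whole frame offscreen, then copy it
// to the visible canvas in one go so the user never sees a half-drawn frame.
const onscreen = document.querySelector<HTMLCanvasElement>("#game")!; // placeholder id
const onCtx = onscreen.getContext("2d")!;

const backBuffer = document.createElement("canvas");
backBuffer.width = onscreen.width;
backBuffer.height = onscreen.height;
const backCtx = backBuffer.getContext("2d")!;

function renderFrame(drawScene: (ctx: CanvasRenderingContext2D) => void): void {
  backCtx.clearRect(0, 0, backBuffer.width, backBuffer.height);
  drawScene(backCtx);                 // all per-object draw calls go to the back buffer
  onCtx.drawImage(backBuffer, 0, 0);  // "swap": present the finished frame all at once
}
```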
There is a large speed-optimization in here, called "blitting", which basically copies over only the parts that have changed, and reuses the old image.
There's a lot more to it than that, and there are a lot of caveats, these days, because of all of the special-effects we add, but there it is.
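As a rough illustration of the "only redraw what changed" idea, a dirty-rectangle sketch might look like this (the Sprite type and the assumption of an otherwise empty background are mine, not from the question):

```typescript
// Dirty-rectangle "blitting": instead of repainting the whole canvas each
// frame, erase and redraw only the small regions that actually changed.
interface Sprite {
  x: number;
  y: number;
  w: number;
  h: number;
  draw(ctx: CanvasRenderingContext2D): void;
}

function updateSprite(ctx: CanvasRenderingContext2D, sprite: Sprite,
                      newX: number, newY: number): void {
  // Erase only the sprite's old footprint (if there were a background image,
  // you would repaint that small region of it here instead).
  ctx.clearRect(sprite.x, sprite.y, sprite.w, sprite.h);
  // Move it and repaint just the new footprint; the rest of the canvas is untouched.
  sprite.x = newX;
  sprite.y = newY;
  sprite.draw(ctx);
}
```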
The second part of your question has to do with a concept called "instancing".
Instancing is similar to blitting, but while blitting is about only redrawing what's changed, instancing is about drawing the exact same thing several times in different places.
Say you're painting a forest in Photoshop.
You've got two options:
Draw every tree from scratch.
Draw one tree, copy it, paste it all over the image.
The downside of the second one is that each "instance" of the image looks exactly the same.
If your "template" image changes colour or takes damage, then all instances of the image do, too.
Also, if you had 87 different tree variations for an 8000-tree forest, making instances of them all would still be very fast, but it would take more memory, because you now need to keep 87 times as many images around to reference on every draw call as when it was just one tree.
The upside is that it's still much, much faster.
To answer your specific question about X circles, versus instancing 1 circle:
Yes, it's still going to be a lot faster.
What a "lot" means, though, will change based on a lot of different things, because now you're talking about browsers on PCs.
How strong is the PC?
How good is the videocard?
How large is the canvas in software-pixels (not CSS pixels)?
How large are the circles? Do they have alpha-blending?
Is this written in WebGL or software?
If software, is the canvas compositing in hardware mode?
For a typical PC, you should still be able to hit 60fps in Chrome drawing 20 circles, I think (depending on what you're doing to them; just drawing them onscreen every frame is simple), so in this case the instances are still a "lot" faster, but it's not going to matter, because you've already passed the performance ceiling of Canvas.
I don't know that the same would be true on phones/tablets, or battery-powered laptops/netbooks, though.
Yes, transferring from an offscreen canvas is faster than even primitive drawings like an arc-circle.
That's because the GPU just copies the pixels from the offscreen canvas (not much CPU effort is required).
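Concretely, for the circles in the question, the pre-render-then-stamp approach could look something like this (radius, colour, and random placement are arbitrary choices for the sketch):

```typescript
// Pre-render one circle to a small offscreen canvas, then stamp copies of it
// with drawImage instead of re-tracing an arc for every circle.
const RADIUS = 10;

const stamp = document.createElement("canvas");
stamp.width = stamp.height = RADIUS * 2;
const stampCtx = stamp.getContext("2d")!;
stampCtx.beginPath();
stampCtx.arc(RADIUS, RADIUS, RADIUS, 0, Math.PI * 2);
stampCtx.fillStyle = "tomato";
stampCtx.fill();

function drawCircles(ctx: CanvasRenderingContext2D, count: number): void {
  for (let i = 0; i < count; i++) {
    const x = Math.random() * (ctx.canvas.width - stamp.width);
    const y = Math.random() * (ctx.canvas.height - stamp.height);
    ctx.drawImage(stamp, x, y); // one blit per circle, no path math per circle
  }
}
```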

Is it possible to force multiple paint operations on a 2D canvas (software rendering)?

I am working on a game that uses HTML5 canvas 2D context drawing on a Chromecast device (which doesn't have hardware acceleration). I've noticed that drawing two objects in one frame will trigger a repaint of the entire region that contains both of them. As a "worst-case", imagine that I want to only change the color of the top-left and bottom-right pixels of a large canvas element. If I use two one-pixel fillRect calls to do this, it (WebKit/Blink, at least) will mark the entire canvas dirty, triggering a very expensive paint operation. I believe this should link to the code that performs this logic in Chromium:
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/Source/core/html/HTMLCanvasElement.cpp&l=218
Is there any way to convince the browser to actually perform two small paint operations instead of one (excessively) large one? Or would this always be slower, despite the fact that it's re-drawing significantly less? I have tried putting the elements on different canvas elements layered on top of each other, but the browser still seems to catch it and batch them together (at least as shown by the repaint regions in DevTools).
As far as I know, currently you can't do much better than that; one main drawback is that double buffering is not implemented either, which would have improved the performance. We will be seeing improvements in these areas moving forward.

Dynamic UI window drawing in Flash?

I wasn't sure if this question belongs here, but I'll try. I'm about to move my project from Director | Shockwave Player (if you've ever heard of these) to Flash Player, for numerous reasons, and while thinking about how best to start off I ran into a question that really made me wonder. To the point:
Currently in Director, each game window in the user interface (like a small alert or a large window with lots of elements in it) is drawn on demand - meaning the window's graphical image is assembled from a configuration (all sorts of properties like width, height, and all the elements and their properties) out of numerous pieces of graphics (e.g. the window background is made from 9 small pieces: the 4 corners, 4 edge pieces between the corners for dynamic width and height, and 1 central piece to fill the window) and then added to the stage. This approach makes it easy to edit each graphical element without having to redraw the actual windows and everything in them if we want to change the color scheme or improve an element. It also saves resources, as windows are only drawn when requested.
Now I need to figure out whether it's worth writing such code in Flash too, as opposed to just creating all the windows, placing them in the library, and adding them to the stage when they are requested. What do you think? Is it worth writing such an implementation?
IMO, it depends on 1) How comfortable you are with Flash's drawing API / graphics class, and 2) How flexible each window/dialog needs to be.
If it's easier to just throw them together as static objects -- and if they don't necessarily need the flexibility to change dimensions/style much (can you count on one hand how many times they've needed to change since they were originally done in Director many years ago?) -- it's obviously easier to do that than to go through the time/energy of recreating them dynamically, especially if you're not very comfortable with Flash's drawing API.
That said, a lot can be accomplished dynamically with Flash's drawing API, so if you have the time/interest I'd certainly suggest digging in and doing it the "right" way if you want to familiarize yourself with the drawing API.
My method for doing this sort of thing usually goes like this (there's a rough sketch after the steps):
Create a separate class that extends Sprite/MovieClip; something like 'Dialog.as'.
Create public init(), show(), and hide() methods, and a private build() method.
init() is called just once upon initialization of your app, and takes some global parameters to hold on to internally (padding, colors, etc).
show() takes an argument of either text (a String), or even a Sprite/MovieClip -- whatever it is you want to show in this dialog. When called (when you want to spawn a window), it uses this -- plus whatever init parameters were originally passed in during init() -- to draw itself, and then unhides itself (tween the .alpha property, or simply set the .visible property).
When you want to close the dialog, make sure to invoke the hide() method, which first hides itself, and then cleans up whatever was created (removing listeners, etc) so that the next time it is called upon it can draw itself fresh.
It might also make sense to pass into this object a reference to the Stage (in the original init() call), so that it can center and size itself accordingly on the stage, and also addChild itself to the top of the display list so that it's always above everything else.
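The original would be ActionScript 3, but the shape of that lifecycle translates directly. Here is a rough TypeScript sketch of the steps above, using a DOM element as a stand-in for the Sprite and Stage (the styling details are simplified placeholders):

```typescript
// Rough skeleton of the init()/show()/hide() dialog lifecycle described above.
class Dialog {
  private padding = 0;
  private color = 0x000000;
  private root = document.createElement("div"); // stand-in for the Sprite

  // Called once at startup with global styling parameters and the host (the "Stage").
  init(host: HTMLElement, padding: number, color: number): void {
    this.padding = padding;
    this.color = color;
    this.root.style.display = "none";
    host.appendChild(this.root);       // keep it at the top of the display list
  }

  // Rebuilds the dialog fresh from the content plus the stored init parameters.
  private build(content: string | HTMLElement): void {
    this.root.innerHTML = "";
    this.root.style.padding = `${this.padding}px`;
    this.root.style.background = `#${this.color.toString(16).padStart(6, "0")}`;
    if (typeof content === "string") this.root.textContent = content;
    else this.root.appendChild(content);
  }

  show(content: string | HTMLElement): void {
    this.build(content);
    this.root.style.display = "block"; // or tween alpha for a fade-in
  }

  hide(): void {
    this.root.style.display = "none";
    this.root.innerHTML = "";          // clean up so the next show() starts fresh
  }
}
```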
I hope this helps.
If it's 2D you can check this:
http://pushbuttonengine.com/
If it's 3D you can start here:
http://alternativaplatform.com/en/alternativa3d/
I can also suggest Unity3D, but that is outside of Flash.
Hope it helps.

How to dynamically render a space background in ActionScript 3?

I'm creating a space game in ActionScript/Flex 3 (Flash). The world is infinitely big, because there are no maps. For this to work I need to dynamically (programmatically) render the background, which has to look like open space.
To make the world feel real and to make certain places look different than others, I must be able to add filters such as colour differences and maybe even a misty kind of transformation - these would then be randomly added and changed.
The player is able to "scroll" the "map" by flying to the sides of the screen, so that only a certain part of the world is visible at once but the player is able to go anywhere. The scrolling works by moving all objects except the player in the opposite direction, making it look like it was the player that moved in that direction. The background also needs to move, but has to be different on newly discovered terrain (dynamically created).
Now my question is how I would do something like this: what kind of things do I need to use, and how do I implement them? Performance also needs to be taken into account, as there will be many more objects in the game.
You should only have views for objects that are within the visible area. You might want to use a quad tree for that.
The background could be composed of a set of tiles that you can repeat more or less randomly (do you really need a background, actually? wouldn't having some particles be enough?). Use the same technique here that you use for the objects.
So in the end, you wind up with a model for objects and tiles or particles (which you would generate at the beginning). This way, you only store a few floats per object (you can gain additional performance if you don't calculate positions of objects that are far away; the quad tree should help you with that, but I don't think it should be necessary). If an object that has a view leaves the stage, free the view, and use the quad tree to check whether new objects appear.
If you use a lot of objects/particles, consider using an object pool. If objects only move, and are not rotated/scaled, consider using DisplayObject::cacheAsBitmap.
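As a small illustration of the object-pool idea (sketched in TypeScript rather than AS3, with a made-up Particle type):

```typescript
// Minimal object pool: reuse particle objects instead of allocating new ones
// every frame, which keeps garbage-collection pauses down.
interface Particle { x: number; y: number; active: boolean; }

class ParticlePool {
  private free: Particle[] = [];

  acquire(x: number, y: number): Particle {
    // Reuse a released particle if one is available, otherwise allocate.
    const p = this.free.pop() ?? { x: 0, y: 0, active: false };
    p.x = x;
    p.y = y;
    p.active = true;
    return p;
  }

  release(p: Particle): void {
    p.active = false;
    this.free.push(p); // back into the pool for reuse
  }
}
```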