What is a good size ratio for a 2D tile-based game's chunks? [Screen Ratio or 1:1 Ratio?] - actionscript-3

I am making a 2D tile based game and I expect to have a really big world.
As I want movement between areas to be seamless I will obviously need to load the world in chunks.
So the question is:
Is it better if my chunk size is based on my game's resolution,
or is it better if my chunk size is a perfect square?
Let's take an example with simple numbers:
If my game's resolution is 1024x768 and my tiles are 32x32,
I can fit 32x24 tiles in one screen.
Let's say I'd like my chunks a bit bigger than the screen:
is it better to have a 128x128-tile chunk,
or is it better to have a 128x96-tile chunk?
As far as I can tell either would do, but I'm afraid I might end up facing an unexpected problem if I choose the wrong one.

Whichever direction you take with handling chunk size, it is definitely wise to leave it abstracted enough to allow some flexibility in size (if only for your own unit tests).
That being said, this is really a question of performance. Keeping textures/assets to powers of 2 was an important restriction back before dedicated GPUs were around. I'm not sure you'll see a huge difference between the two nowadays (although you might, this being Flash), but keeping sizes to a power of 2 is usually the safe route. In the past, when working with rendering, keeping assets to a power of 2 meant everything would always divide evenly, and that saves on some computation.
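To make the "divides evenly" point concrete, here is a minimal sketch (TypeScript for illustration; the same arithmetic works in ActionScript 3) of how a power-of-two chunk size turns the divide and modulo into a shift and a mask:

// Chunk-coordinate math for a 2D tile world, assuming 128x128-tile chunks.
// CHUNK_SIZE must be a power of two for the shift/mask trick to work.
const CHUNK_SIZE = 128;
const CHUNK_SHIFT = Math.log2(CHUNK_SIZE); // 7
const CHUNK_MASK = CHUNK_SIZE - 1;         // 127 (0b1111111)

// Which chunk a world tile coordinate falls in (a shift instead of a divide).
function chunkCoord(tile: number): number {
    return tile >> CHUNK_SHIFT;
}

// Where the tile sits inside its chunk (a mask instead of a modulo).
function localCoord(tile: number): number {
    return tile & CHUNK_MASK;
}

// Example: world tile (300, 95) lives in chunk (2, 0), at local (44, 95).
console.log(chunkCoord(300), localCoord(300)); // 2 44

A 128x96 chunk loads just as well; the shift/mask shortcut simply only applies on the axes whose size is a power of two.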
Hope this helps! :)

Related

libgdx What texture size?

What texture size should I use so that it looks good on Android AND desktop, with good performance on Android? Do I need to create one texture for Android and another for desktop?
For a typical 2D game you usually want to use the maximum texture size that all of your target devices support. That way you can pack the most images (TextureRegion) within a single texture and avoid multiple draw calls as much as possible. To find it, check the maximum texture size of all devices you want to support and take the lowest value. Usually the devices with the lowest maximum size also have lower performance, therefore using a different texture size for the other devices is not necessary to increase overall performance.
Do not use a bigger texture than you need, though. E.g. if all of your images fit in a single 1024x1024 texture then there is no gain in using e.g. a 2048x2048 texture, even if all your devices support it.
The spec guarantees a minimum of 64x64, but practically all devices support at least 1024x1024 and most newer devices support at least 2048x2048. If you want to check the maximum texture size on a specific device then you can run:
import java.nio.IntBuffer;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.utils.BufferUtils;

private static int getMaxTextureSize () {
    // Ask the GL driver for the per-side texture limit, in texels.
    IntBuffer buffer = BufferUtils.newIntBuffer(16);
    Gdx.gl.glGetIntegerv(GL20.GL_MAX_TEXTURE_SIZE, buffer);
    return buffer.get(0);
}
The maximum is always square. E.g. this method might give you a value of 4096 which means that the maximum supported texture size is 4096 texels in width and 4096 texels in height.
Your texture size should always be a power of two, otherwise some functionality like the wrap functions and mipmaps might not work. It does not have to be square, though. So if you only have 2 images of 500x500, then it is fine to use a texture of 1024x512.
Note that the texture size is not directly related to the size of the individual images (TextureRegion) that you pack inside it. You typically want to keep the regions within the texture as "pixel perfect" as possible, which means that ideally each should be exactly as big as it is projected onto the screen. For example, if the image (or Sprite) is projected 100 by 200 pixels on the screen, then your image (the TextureRegion) ideally would be 100 by 200 texels in size. You should avoid unneeded scaling as much as possible.
The projected size varies per device (screen resolution) and is not related to your world units (e.g. the size of your Image or Sprite or Camera). You will have to check (or calculate) the exact projected size for a specific device to be sure.
If the screen resolution of your target devices varies a lot then you will have to use a strategy to handle that. Although that's not really what you asked, it is probably good to keep in mind. There are a few options, depending on your needs.
One option is to use one size somewhere in the middle. A common mistake is to use way too large images and downscale them a lot, which looks terrible on low-res devices, eats way too much memory and causes a lot of render calls. Instead, pick a resolution where both the upscaling and the downscaling are still acceptable. This depends on the type of images; e.g. straight horizontal and vertical lines scale very well, while fonts and other highly detailed images don't. Just give it a try. Commonly you never want a scale factor of more than 2: either upscaling or downscaling by more than 2 will quickly look bad. The closer to 1, the better.
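As a rough sketch of that rule of thumb (TypeScript, with made-up numbers; none of this is libgdx API): compute the projected on-screen size from the camera's world-unit viewport, then check that the region-to-projection scale factor stays within the 0.5 to 2 comfort zone:

// Projected on-screen size of a sprite, given the camera viewport in world units.
function projectedPixels(worldSize: number, viewportWorldSize: number, screenPixels: number): number {
    return (worldSize / viewportWorldSize) * screenPixels;
}

// Scale factor between the source region and its on-screen projection.
function scaleFactor(regionTexels: number, projected: number): number {
    return projected / regionTexels;
}

// Example: a 1-unit-wide sprite seen through a 10-unit-wide camera,
// on an 800px-wide and a 1920px-wide screen.
const lowRes = projectedPixels(1, 10, 800);   // 80 px
const highRes = projectedPixels(1, 10, 1920); // 192 px
// A 128-texel region scales by 0.625 and 1.5 respectively --
// both within the "never more than 2" guideline.
console.log(scaleFactor(128, lowRes), scaleFactor(128, highRes));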
As @Springrbua correctly pointed out, you could use mipmaps to get better downscaling than 2 (mipmaps don't help for upscaling). There are two problems with that, though. The first is that it causes bleeding from one region to another; to prevent that you can increase the padding between the regions in the atlas. The other is that it causes more render calls, because devices with a lower resolution usually also have a lower maximum texture size, and even though you will never use that maximum, it still has to be loaded on that device. That will only be an issue if you have more images than can fit in the lowest maximum size, though.
Another option is to divide your target devices into categories, for example "HD", "SD" and such. Each group has a different average resolution and usually a different maximum texture size as well. This gives you the best of both worlds: it allows you to use the maximum texture size while not having to scale too much. Libgdx comes with the ResolutionFileResolver, which can help you decide which texture to use on which device. Alternatively you can e.g. use a different APK based on the device specifications.
The best way (regarding performance + quality) would be to use mipmaps.
That means you start with a big texture (for example 1024x1024 px) and downsample it to a quarter of its size (half of each side), until you reach a 1x1 image.
So you have a 1024x1024, a 512x512, a 256x256... and a 1x1 texture.
As far as I know you only need to provide the biggest (1024x1024 in the example above) texture, and libgdx will create the mipmap chain at runtime.
OpenGL under the hood then decides which texture to use, based on the pixel-to-texel ratio.
Taking a look at the Texture API, it seems there is a 2-param constructor, where the first param is the FileHandle for the texture and the second param is a boolean indicating whether you want to use mipmaps.
As far as I remember, you also have to set the right TextureFilter.
To know which TextureFilter to use, I suggest reading this article.

Is there a performance benefit to pre-rendering an HTML5 canvas circle?

I understand it's often faster to pre-render graphics to an off-screen canvas. Is this the case for a shape as simple as a circle? Would it make a significant difference for rendering 100 circles at a game-like framerate? 50 circles? 25?
To break this into two slightly different problems, there are two aspects to what you're asking:
1) Is drawing a shape off-screen and then putting it on-screen faster than drawing it on-screen directly?
2) Is drawing a shape one time and copying it to 100 different places faster than drawing the shape 100 times?
The answer to the first one is "it depends".
That's a technique known as "buffering" and it's not really about speed.
The goal of buffering an image is to remove jerkiness from it.
If you drew everything on-screen, then as you loop through all of your objects and draw them, they're updating in real-time.
In the NES days, that was normal, because there wasn't much room in memory or much power to do anything about it, and because programmers didn't know much better, with the limited instructions they had to work with.
But that's not really the way games do things, these days.
Typically, they call all of the draw updates for one frame, then they take that whole frame as a finished image, and paste that whole thing on the screen.
The GPU (and GL/DirectX) takes care of this, by default, in a process called "double-buffering".
It's a double-buffer, because there's room for the "in-progress" buffer used for the updates, as well as the buffer that holds the final image from the last frame, that's being read by the monitor.
At the end of the frame processing, the buffers will "swap". The newly full frame will be sent to the monitor and the old frame will be overwritten with the new image data from the other draw calls.
Now, in HTML5 there isn't really access to the frame-buffer, so we do it ourselves: we make every draw call against an offscreen canvas, and when all of the updates are finished (the image is stable), we copy that whole image to the onscreen canvas.
There is a large speed-optimization in here, called "blitting", which basically copies over only the parts that have changed, and reuses the old image.
There's a lot more to it than that, and there are a lot of caveats, these days, because of all of the special-effects we add, but there it is.
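A minimal sketch of that manual double-buffer in HTML5 canvas terms (TypeScript; the "game" canvas id is hypothetical):

// Compose the whole frame off-screen, then copy it to the visible canvas
// in a single drawImage call, so the viewer never sees a half-drawn frame.
const onscreen = document.getElementById("game") as HTMLCanvasElement;
const screenCtx = onscreen.getContext("2d")!;

const buffer = document.createElement("canvas"); // the off-screen buffer
buffer.width = onscreen.width;
buffer.height = onscreen.height;
const bufferCtx = buffer.getContext("2d")!;

function frame(): void {
    bufferCtx.clearRect(0, 0, buffer.width, buffer.height);
    // ...all per-object draw calls go against bufferCtx here...
    screenCtx.drawImage(buffer, 0, 0); // one stable copy to the screen
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);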
The second part of your question has to do with a concept called "instancing".
Instancing is similar to blitting, but while blitting is about only redrawing what's changed, instancing is about drawing the exact same thing several times in different places.
Say you're painting a forest in Photoshop.
You've got two options:
Draw every tree from scratch.
Draw one tree, copy it, paste it all over the image.
The downside of the second one is that each "instance" of the image looks exactly the same.
If your "template" image changes colour or takes damage, then all instances of the image do, too.
Also, if you had 87 different tree variations for an 8,000-tree forest, making instances of them all would still be very fast, but it would take more memory, because you now need to keep 87 source images around (instead of just one) to reference on every draw call.
The upside is that it's still much, much faster.
To answer your specific question about X circles, versus instancing 1 circle:
Yes, it's still going to be a lot faster.
What a "lot" means, though, will change based on a lot of different things, because now you're talking about browsers on PCs.
How strong is the PC?
How good is the videocard?
How large is the canvas in software-pixels (not CSS pixels)?
How large are the circles? Do they have alpha-blending?
Is this written in WebGL or software?
If software, is the canvas compositing in hardware mode?
For a typical PC, you should still be able to hit 60fps in Chrome drawing 20 circles, I think (depending on what you're doing to them... just drawing them onscreen every frame is simple). So in this case the instances are still a "lot" faster, but it's not going to matter, because you're well within what Canvas can handle either way.
I don't know that the same would be true on phones/tablets, or battery-powered laptops/netbooks, though.
Yes, transferring from an offscreen canvas is faster than even primitive drawings like an arc-circle.
That's because the GPU just copies the pixels from the offscreen canvas; not much CPU effort is required.
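A minimal sketch of that pre-render-and-copy approach (TypeScript; the circle count matches the question, the sizes and canvas id are arbitrary):

// Draw one circle on a small offscreen canvas...
const stamp = document.createElement("canvas");
stamp.width = stamp.height = 32;
const stampCtx = stamp.getContext("2d")!;
stampCtx.beginPath();
stampCtx.arc(16, 16, 15, 0, Math.PI * 2);
stampCtx.fillStyle = "tomato";
stampCtx.fill();

// ...then stamp it 100 times instead of tracing 100 arcs.
const canvas = document.getElementById("game") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
for (let i = 0; i < 100; i++) {
    // drawImage is a pixel copy, cheaper than rasterizing an arc each time.
    ctx.drawImage(stamp, Math.random() * canvas.width, Math.random() * canvas.height);
}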

Is it computationally faster for a browser to decrease than to increase the size of an image?

Let's say that I have an image with the size 200x200 px. I also have two separate webpages. The first page has an image tag with the attributes width="100" height="100", so the image is downsampled to half its size. The second page has an image tag with the attributes width="400" height="400", so the image is upsampled to double the original size.
Which of the two cases is computationally faster to execute: downsampling or upsampling? Other names for the operations are subsampling and interpolation, or just decreasing and increasing image size. My gut tells me that there is less to compute when decreasing the image size, but I'm not sure.
It is true that with just one small image the difference is meaningless. And of course the best solution would be to avoid scaling of images in the first place. Nonetheless if the target application uses high number of constantly changing images in different scales and is used from a mobile device then knowing the difference might become valuable.
Thanks in advance.
Upsampling is supposed to be more expensive... it FOR SURE requires some kind of interpolation. Let's suppose the simplest one: linear interpolation! It's already more expensive than a single modulo (the only thing you need in order to do a naive downsampling). I don't think anyone would do it much differently...
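To illustrate the cost difference, here is a rough sketch on raw grayscale buffers (TypeScript; nearest-neighbor stands in for the stride-skipping downsample, bilinear for the interpolating upsample):

// Nearest-neighbor downsample: one source read per output pixel.
function downsample(src: Float32Array, w: number, h: number, factor: number): Float32Array {
    const ow = Math.floor(w / factor), oh = Math.floor(h / factor);
    const out = new Float32Array(ow * oh);
    for (let y = 0; y < oh; y++)
        for (let x = 0; x < ow; x++)
            out[y * ow + x] = src[y * factor * w + x * factor];
    return out;
}

// Bilinear upsample: four source reads plus three lerps per output pixel.
function upsample(src: Float32Array, w: number, h: number, factor: number): Float32Array {
    const ow = w * factor, oh = h * factor;
    const out = new Float32Array(ow * oh);
    for (let y = 0; y < oh; y++) {
        for (let x = 0; x < ow; x++) {
            const fx = x / factor, fy = y / factor;
            const x0 = Math.floor(fx), x1 = Math.min(x0 + 1, w - 1);
            const y0 = Math.floor(fy), y1 = Math.min(y0 + 1, h - 1);
            const tx = fx - x0, ty = fy - y0;
            const top = src[y0 * w + x0] * (1 - tx) + src[y0 * w + x1] * tx;
            const bot = src[y1 * w + x0] * (1 - tx) + src[y1 * w + x1] * tx;
            out[y * ow + x] = top * (1 - ty) + bot * ty;
        }
    }
    return out;
}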
Trying to be more accurate about the browsers: any modern browser uses tricks like the GPU and/or OpenMP (multi-processing) to render images. But the GPU requires data to be uploaded from the CPU, and that has a price; this data transfer is a bottleneck. So for small images it's going to be almost the same thing... no big difference!
Mobile devices don't have as many cores as a desktop computer... so OpenMP is not going to help much for small images either.

Is there a trick to creating an animated gif of tv static that will allow it to be relatively small?

Apologies in advance, but this isn't really a photoshop question. Rather, I'm trying to come up with something that is convincing but exploits the compression and features of the gif format as best as possible to produce the smallest possible file for the animation.
Some constraints:
It needs to be at least 20 or 30 frames. I've tried with fewer (and since the frames are largely incompressible, 15 frames is half the size of 30, generally speaking).
Size needs to be no less than about 256x192
It doesn't need to be color though, nor even full grayscale. I've seen convincing stills with as few as about 16 grays
It can have a pattern, but not one that is instantly obvious to the human eye. If someone takes a single frame and after a minute or two can spot the pattern (which makes it compressible?), that's ok.
Frames 2 through n can use quite a bit of alpha, but when I started using big horizontal stripes of alpha, it was instantly noticeable to my eyes. So you don't get to rack up a bunch of RLE with the easy cheat.
All of the above and still needs to look good at 30-33ms frame speed. No variable speed or relying on anything significantly faster than that.
Also acceptable: an APNG that complies with the above constraints. Possibly even MPEG, if you can come up with that (I'm ignorant of how the DCT does its magic).
Ideally I could get something down in the 250kbyte range, but I'd settle for anything significantly smaller than the 9 meg monstrosity I cooked up last week.
Oh, and one last thing: obviously I don't expect anyone to supply the graphic for me. I'm just looking for some trick(s) that will let me get there myself eventually.
This is a very interesting question.
Static (random noise) is by its nature highly incompressible. Information theory says that true noise is basically incompressible, and the more patterns something contains, the more compressible it becomes (to the point of a solid line of 1s or 0s being perfectly compressible).
The ideal would be to create a true noise generator (just random numbers), but that doesn't help within the constraints of your problem.
The best thing I can think of is storing a number of small tiles of static and displaying them in staggered fashion to prevent the eye catching on to any patterns. Aside from that, you won't have much luck compressing this below 256 x 192 x 20 / 2, or about 500 kilobytes (assuming 20 frames at a resolution of 256 x 192, using 4-bit color depth).
Simply encoding your animated gif in 16 color mode should get you to that point.
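A minimal sketch of that staggered-tiles idea (TypeScript on a canvas rather than inside a GIF; the tile count, size and canvas id are arbitrary):

// Generate a handful of small grayscale noise tiles once, up front.
function makeNoiseTile(size: number): HTMLCanvasElement {
    const tile = document.createElement("canvas");
    tile.width = tile.height = size;
    const ctx = tile.getContext("2d")!;
    const img = ctx.createImageData(size, size);
    for (let i = 0; i < img.data.length; i += 4) {
        const v = (Math.random() * 16 | 0) * 17; // ~16 gray levels, per the question
        img.data[i] = img.data[i + 1] = img.data[i + 2] = v;
        img.data[i + 3] = 255;
    }
    ctx.putImageData(img, 0, 0);
    return tile;
}

const tiles = Array.from({ length: 8 }, () => makeNoiseTile(64));
const canvas = document.getElementById("static") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function frame(): void {
    // A random tile per cell each frame keeps the eye from locking onto a repeat.
    for (let y = 0; y < canvas.height; y += 64)
        for (let x = 0; x < canvas.width; x += 64)
            ctx.drawImage(tiles[Math.random() * tiles.length | 0], x, y);
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);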
Well, this is old but still has no accepted answer, so:
1. Create the NoSignal image data. If it is not obvious how, read this: NoSignal in asm and C++.
2. Encode it into a GIF.
I have played with this a bit: I used a resolution of 320x240, and the lowest usable bit depth is 3 bits per pixel; lower does not look good. Single global palette only (obviously); here is a 300KB example.
[Notes]
If this is just for some app, then generate the image on the fly; it is really just a few lines of code, see the linked answer in bullet #1.
Yes, you can achieve that with lossy GIF compression, or rather a specifically rigged compressor that outputs a noisy LZW stream.
A best-case scenario for LZW compression is to output X pixels, then X+1 pixels, then X+2 pixels, etc. It's easy to make that noisy.
Try screwing up the gfc_lookup function to (almost) always return the longest dictionary item, and compress a series of noisy frames with it:
https://github.com/pornel/giflossy/blob/master/src/gifwrite.c#L270
Normally, not easily. Good randomness (high entropy) by definition does not compress well. Making it greyscale may help, but not much.
If you want to do this on a web page and you have (some) control, you can always write a very small bit of JS to help. If you can do that, then you can do the following:
Create a gif about 1.5x the size you need with high-entropy static.
Set the clipping to the size you want.
Then you randomly move it around by changing the starting offset.
As long as your offsets are a decent distance away from one another (and don't repeat patterns) it is usually difficult to discern it as movement, and it looks truly like static.
I did this trick about 20 years ago on an Amiga to emulate static in a limited-memory demo, and it worked remarkably well... it also does not require fast low-level code, as everything was done by changing offsets and the co-processor blitted the rest.
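Translated to today's terms, a minimal sketch of that offset trick (TypeScript; the 256x192 viewport matches the question's constraint, the canvas id is hypothetical):

// One oversized static image (~1.5x the viewport), generated once.
const big = document.createElement("canvas");
big.width = big.height = 384;
const bigCtx = big.getContext("2d")!;
const noise = bigCtx.createImageData(384, 384);
for (let i = 0; i < noise.data.length; i += 4) {
    const v = Math.random() * 256 | 0;
    noise.data[i] = noise.data[i + 1] = noise.data[i + 2] = v;
    noise.data[i + 3] = 255;
}
bigCtx.putImageData(noise, 0, 0);

// Each frame, show a randomly offset 256x192 window into the big image.
const view = document.getElementById("tv") as HTMLCanvasElement; // 256x192
const viewCtx = view.getContext("2d")!;
function jitter(): void {
    const ox = Math.random() * (big.width - view.width);
    const oy = Math.random() * (big.height - view.height);
    viewCtx.drawImage(big, ox, oy, view.width, view.height, 0, 0, view.width, view.height);
    requestAnimationFrame(jitter);
}
requestAnimationFrame(jitter);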

maximum stage and sprite size?

I'm making an action game and I'd like to know what should be the maximum size of the stage (mine is 660 x 500).
Also I'd like to know how big a game-sprite should be. Currently my biggest sprites have a size of 128 x 128 and I read somewhere on the internet that you should not make it bigger because of performance issues.
If you want to make e.g. big explosions with shockwaves, even 128 x 128 does not look very big. What's the maximum size I can safely use for sprites? I cannot find any real answer to this, so I appreciate every hint I can get, because this topic makes me a little nervous.
Cited from:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/DisplayObject.html
http://kb2.adobe.com/cps/496/cpsid_49662.html
Display objects:
Flash Player 10 increased the maximum size of a bitmap to a maximum pixel count of 16,777,215 (the decimal equivalent of 0xFFFFFF). There is also a single-side limit of 8,191 pixels.
The largest square bitmap allowed is 4,095 x 4,095 pixels.
Content compiled to a SWF 9 target and running in Flash Player 10 or later is still subject to Flash Player 9 limits (2,880 x 2,880 pixels).
In Flash Player 9 and earlier, the limit is 2,880 pixels in height and 2,880 pixels in width.
Stage:
The usable stage size limit in Flash Player 10 is roughly 4,050 pixels by 4,050 pixels. However, the usable size of the stage varies depending on the settings of the QUALITY tag. In some cases, it's possible to see graphic artifacts when the stage size approaches the 3,840-pixel range.
If you're looking for hard numbers, Jason's answer is probably the best you're going to get. Unfortunately, I think the only way to get a real answer to your question is to build your game and do some performance testing. The file size and dimensions of your sprite maps will affect RAM/CPU usage, but how much is too much will depend on how many sprites are on the stage, how they interact, and what platform you're deploying to.
A smaller stage will sometimes get you better performance (you'll tend to display fewer things), but what is more important is what you do with it. Also, a game with a stage larger than 800x600 may turn off potential sponsors (if you go that route with your game) because it won't fit on their portal site.
Most of my sprite sheets use tiles less than 64x64 pixels, but I have successfully implemented a sprite with each tile as large as 491x510 pixels. It doesn't have a super-complex animation, but the game runs at 60fps.
Bitmap caching is not necessarily the answer, but I found these resources to be highly informative when considering the impact of my graphics on performance.
http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c36c11f3d612431904db9-7ffc.html
and a video demo:
http://tv.adobe.com/watch/adobe-evangelists-paul-trani/optimizing-graphics/
Also, as a general rule, build your game so that it works first, then worry about optimization. A profiler can help you spot memory leaks and CPU spikes. FlashDevelop has one built in, packages like FlashPunk often include a console, or the good old-fashioned Windows Task Manager can be enough.
That might not be a concrete answer, but I hope it helps.