What texture size should I use so it looks good on Android AND desktop while performance stays good on Android? Do I need to create one texture for Android and another for desktop?
For a typical 2D game you usually want to use the maximum texture size that all of your target devices support. That way you can pack the most images (TextureRegion) within a single texture and avoid multiple draw calls as much as possible. To find it, check the maximum texture size of every device you want to support and take the lowest value. The devices with the lowest maximum size usually also have lower performance, therefore using a different texture size for the other devices is not necessary to increase the overall performance.
Do not use a bigger texture than you need, though. E.g. if all of your images fit in a single 1024x1024 texture then there is no gain in using e.g. a 2048x2048 texture, even if all your devices support it.
The spec guarantees a minimum of 64x64, but practically all devices support at least 1024x1024 and most newer devices support at least 2048x2048. If you want to check the maximum texture size on a specific device then you can run:
// Queries the GL driver for the largest supported texture dimension.
// Uses com.badlogic.gdx.Gdx, com.badlogic.gdx.graphics.GL20,
// com.badlogic.gdx.utils.BufferUtils and java.nio.IntBuffer.
private static int getMaxTextureSize () {
    IntBuffer buffer = BufferUtils.newIntBuffer(16);
    Gdx.gl.glGetIntegerv(GL20.GL_MAX_TEXTURE_SIZE, buffer);
    return buffer.get(0); // e.g. 2048 or 4096
}
The maximum is always square. E.g. this method might give you a value of 4096 which means that the maximum supported texture size is 4096 texels in width and 4096 texels in height.
Your texture size should always be a power of two, otherwise some functionality, like the wrap functions and mipmaps, might not work. It does not have to be square though. So if you only have 2 images of 500x500 then it is fine to use a texture of 1024x512.
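For instance, a small helper to round a dimension up to the next power of two might look like this (a hypothetical utility, not part of any API):

// Hypothetical helper: rounds a dimension up to the next power of two,
// e.g. 500 -> 512 and 1000 -> 1024.
private static int nextPowerOfTwo (int n) {
    int result = 1;
    while (result < n) result <<= 1;
    return result;
}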
Note that the texture size is not directly related to the size of the individual images (TextureRegion) that you pack inside it. You typically want to keep the size of the regions within the texture as "pixel perfect" as possible, which means that ideally each region is exactly as big as it is projected onto the screen. For example, if the image (or Sprite) is projected 100 by 200 pixels on the screen, then your image (the TextureRegion) ideally would be 100 by 200 texels in size. You should avoid unneeded scaling as much as possible.
The projected size varies per device (screen resolution) and is not related to your world units (e.g. the size of your Image or Sprite or Camera). You will have to check (or calculate) the exact projected size for a specific device to be sure.
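As a sketch of that calculation (the names are illustrative and assume an orthographic camera whose viewport is measured in world units):

// Illustrative sketch: estimates how many screen pixels a region of a given
// world size covers, given a viewport measured in world units.
private static float projectedWidthInPixels (float worldWidth, float viewportWorldWidth, int screenWidthPixels) {
    float pixelsPerWorldUnit = screenWidthPixels / viewportWorldWidth;
    return worldWidth * pixelsPerWorldUnit;
}

For example, a 2-unit-wide sprite seen through a 20-unit-wide viewport on a 1280-pixel-wide screen is projected at 128 pixels.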
If the screen resolution of your target devices varies a lot then you will have to use a strategy to handle that. Although that's not really what you asked, it is probably good to keep in mind. There are a few options, depending on your needs.
One option is to use one size somewhere in the middle. A common mistake is to use way too large images and downscale them a lot, which looks terrible on low-res devices, eats way too much memory and causes a lot of render calls. Instead you can pick a resolution where both the upscaling and the downscaling are still acceptable. This depends on the type of images; e.g. straight horizontal and vertical lines scale very well, while fonts and other highly detailed images don't scale well. Just give it a try. Commonly you never want a scale factor of more than 2: either upscaling or downscaling by more than 2 will quickly look bad. The closer to 1, the better.
As #Springrbua correctly pointed out, you could use mipmaps to get better downscaling beyond a factor of 2 (mipmaps don't help for upscaling). There are two problems with that, though. The first is that mipmaps cause bleeding from one region to another; to prevent that you can increase the padding between the regions in the atlas. The other is that they can cause more render calls. The latter is because devices with a lower resolution usually also have a lower maximum texture size, and even though you will never use that maximum, it still has to be loaded on that device. That will only be an issue if you have more images than fit in the lowest maximum size, though.
Another option is to divide your target devices into categories, for example "HD", "SD" and such. Each group has a different average resolution and usually a different maximum texture size as well. This gives you the best of both worlds: it allows you to use the maximum texture size while not having to scale too much. Libgdx comes with the ResolutionFileResolver, which can help you decide which texture to use on which device (see the sketch below). Alternatively you can use e.g. a different APK based on the device specifications.
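A minimal sketch of that resolver (the resolutions and folder names here are arbitrary examples; classes come from com.badlogic.gdx.assets and com.badlogic.gdx.assets.loaders.resolvers):

// Example only: picks assets from a "480" or "720" subfolder depending on
// the screen size; the resolutions and folder names are arbitrary.
FileHandleResolver resolver = new ResolutionFileResolver(new InternalFileHandleResolver(),
        new ResolutionFileResolver.Resolution(800, 480, "480"),
        new ResolutionFileResolver.Resolution(1280, 720, "720"));
AssetManager manager = new AssetManager(resolver);
manager.load("images/atlas.png", Texture.class); // may resolve to e.g. images/720/atlas.png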
The best way (regarding performance + quality) would be to use mipmaps.
That means you start with a big Texture (for example 1024x1024 px) and downsample it to a quarter of its area (half the width, half the height) until you reach a 1x1 image.
So you have a 1024x1024, a 512x512, a 256x256, ... and a 1x1 Texture.
As far as I know you only need to provide the biggest Texture (1024x1024 in the example above) and Libgdx will create the mipmap chain at runtime.
OpenGL under the hood then decides which mip level to use, based on the pixel-to-texel ratio.
Taking a look at the Texture API, there is a two-parameter constructor, where the first param is the FileHandle of the Texture and the second param is a boolean indicating whether you want to use mipmaps or not.
As far as I remember you also have to set the right TextureFilter.
To know which TextureFilter to use, I suggest reading this article.
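Putting that together, a minimal sketch (the file path is just an example) could look like:

// Loads a texture with mipmap generation enabled (second constructor param)
// and picks a mipmap-aware minification filter.
Texture texture = new Texture(Gdx.files.internal("data/atlas.png"), true);
texture.setFilter(Texture.TextureFilter.MipMapLinearLinear, Texture.TextureFilter.Linear);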
What is the intended difference between these 2 functions:
var size = cc.Director.getInstance().getWinSize();
var sizePx = cc.Director.getInstance().getWinSizeInPixels();
In my case they both return the exact same value.
In which cases should they return different values?
In recent versions of Cocos2dx the given answer is no longer accurate; specifically, the framework has dropped support for the explicit retina mode in favor of giving the programmer the ability to set the resolution of the game independently of the screen and asset resolution, performing scaling when necessary.
Strictly speaking, the function getWinSize() returns the value of whatever resolution you choose (using CCGLView::setDesignResolution(float, float, ResolutionPolicy)), in pixels; getWinSizeInPixels() returns the design resolution multiplied by the content scale factor, which is, again, provided by you via CCDirector::setContentScaleFactor(float). If you do not provide values via these functions, Cocos2dx will choose the design resolution based on an arbitrary value depending on the current platform. For example, on iOS it will use the size of the provided CAEAGLView in pixels (which may be less than the real device resolution in some cases), and both getWinSize() and getWinSizeInPixels() will return the same value.
getWinSize() and getWinSizeInPixels() will return different values if you are scaling your resources to the game resolution. In such case getWinSizeInPixels() indicates what the resolution would be if you didn't have to scale the resources.
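As a plain-arithmetic illustration of that relation (made-up numbers, written as plain Java rather than real Cocos2dx API):

// Illustration only, not Cocos2dx API: how the two values relate.
float designWidth = 480f;              // chosen via setDesignResolution(...)
float contentScaleFactor = 2f;         // chosen via setContentScaleFactor(...)
float winSizeWidth = designWidth;                              // getWinSize().width  -> 480
float winSizeInPixelsWidth = designWidth * contentScaleFactor; // getWinSizeInPixels().width -> 960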
Some possible setups to illustrate how the system works:
1 set of images, 1 design resolution, many device resolutions = images are scaled whenever design != device (lower visual quality when upscaling, unnecessary memory/CPU usage when downscaling), single-resolution layout code (presumably simpler)
multiple sets of images, 1 design resolution, many device resolutions = less need to scale the images because the different assets cover a wider scope of targets; single-resolution layout code is preserved
multiple sets of images, multiple/adaptable design resolutions, many device resolutions = less need to scale; code must be explicitly agnostic of the resolution (presumably more complex)
It is possible I've got something wrong, since I started looking into Cocos2dx not so long ago, but those are the results I've got after some testing.
One returns points (logical pixels), the other physical pixels. On Retina displays the two values differ (by 2x).
I am making a 2D tile-based game and I expect to have a really big world.
As I want movement between areas to be seamless I will obviously need to load the world in chunks.
So the question is:
Is it better if my chunk size is based on my game's resolution?
Or is it better if my chunk size is a perfect square?
Let's have an example with simple numbers:
If my game's resolution is 1024x768 and my tiles are 32x32,
I can fit 32x24 tiles on one screen.
Let's say I'd like my chunks to be a bit bigger than the screen:
Is it better to have a 128x128-tile chunk?
Or is it better to have a 128x96-tile chunk?
As far as I know the distinction is irrelevant and either would do, but I'm afraid I might end up facing an unexpected error if I choose the wrong one.
I think whichever direction you take with handling chunk size, it is definitely a wise decision to leave it abstracted enough to allow for some flexibility in size (if only for your own unit tests).
That being said, this is really a question of performance, and doing textures/assets in powers of 2 was a good restriction back before dedicated GPUs were around. I'm not really sure you'll see a huge difference between the two nowadays (although you might, with it being Flash), but it's usually a safe route to keep the tiles at a power of 2. In the past, when working with rendering, keeping assets at a power of 2 meant they would always divide evenly, which saves on some computations.
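For instance, with a power-of-two chunk size the chunk lookup reduces to cheap bit operations (a sketch with example numbers):

// Sketch: with 128x128-tile chunks (a power of two), mapping a world tile
// coordinate to its chunk and to its offset inside the chunk needs only a
// shift and a mask instead of a division and a modulo.
static final int CHUNK_SHIFT = 7;                  // log2(128)
static final int CHUNK_MASK = (1 << CHUNK_SHIFT) - 1;

static int chunkCoord (int tileCoord) { return tileCoord >> CHUNK_SHIFT; }
static int localCoord (int tileCoord) { return tileCoord & CHUNK_MASK; }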
Hope this helps! :)
Let's say that I have an image with the size 200x200 px. I also have two separate webpages. The first page has an image tag with the attributes width="100" height="100", so the image is downsampled to half its size. The second page has an image tag with the attributes width="400" height="400", so the image is oversampled to double its original size.
Which of the two cases is computationally faster to execute, downsampling or oversampling? Other names for the operations are subsampling and interpolation, or simply decreasing and increasing the image size. My gut tells me that there is less to compute when decreasing the image size, but I'm not sure.
It is true that with just one small image the difference is meaningless, and of course the best solution would be to avoid scaling images in the first place. Nonetheless, if the target application uses a high number of constantly changing images at different scales and is used from a mobile device, then knowing the difference might become valuable.
Thanks in advance.
Oversampling is supposed to be more expensive: it FOR SURE requires some kind of interpolation. Let's suppose the simplest one, linear interpolation: it's already more expensive than calculating a single mod operation (the only thing you need in order to do a naive downsampling). I don't think anyone would do it much differently...
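To make that concrete, here is a rough sketch (grayscale float pixels, illustrative names only): a naive integer-factor downsample is a single indexed read per output pixel, while a bilinear upsample needs four reads and three blends per output pixel.

// Naive downsample by an integer factor: one array read per output pixel.
static float downsample (float[] src, int srcW, int x, int y, int factor) {
    return src[(y * factor) * srcW + (x * factor)];
}

// Bilinear upsample at fractional source coordinates (u, v):
// four reads plus three linear blends per output pixel.
static float upsampleBilinear (float[] src, int srcW, int srcH, float u, float v) {
    int x0 = (int) u, y0 = (int) v;
    int x1 = Math.min(x0 + 1, srcW - 1), y1 = Math.min(y0 + 1, srcH - 1);
    float tx = u - x0, ty = v - y0;
    float top    = src[y0 * srcW + x0] * (1 - tx) + src[y0 * srcW + x1] * tx;
    float bottom = src[y1 * srcW + x0] * (1 - tx) + src[y1 * srcW + x1] * tx;
    return top * (1 - ty) + bottom * ty;
}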
Trying to be more accurate about the browsers: let's consider that any modern browser uses tricks like the GPU and/or OpenMP (multi-processing) to render images. But the GPU requires uploading data from the CPU, and that has a price; this data transfer is a bottleneck. So for small images it's going to be almost the same thing: no big difference!
Mobile devices don't have as many cores as a desktop computer, so OpenMP is not going to be much help for small images either.
I.e. in pixels, and in MB?
Been wondering this for a while, thanks!
The maximum size of a JPEG image is 65535 x 65535 pixels.
The maximum size of a PNG image is 2^31-1 x 2^31-1 pixels. You would have great difficulty constructing an image this large due to memory constraints on typical computers.
Some older platforms cannot operate on files that are over two gigabytes in size. Of course, two gigabyte image files would be awkward to work with in most situations, so unless you're doing astronomy with amazing telescopes, I really wouldn't worry about it.
Since most displays are under 1920 x 1200 pixels, it would probably make sense to resize your images down to this size, unless your intention is to allow your clients to make photographic reproductions of your images -- in which case, give your clients as many pixels as you can.
The answer is that there is no such limit, but from a practical point of view, using images larger than 2 MP (1600x1200) on your website won't make sense; it won't be useful or convenient for a wide audience. With respect to size in MB, if you're seeking a practical rule, an image under 2 MB will likely serve you in any case.
Eventually the host computer will run out of memory and won't be able to load the image, but I think it is safe to say that you'll never make that happen.
Images for webpages can be as big as you like, but you need to think about the convenience (or lack thereof) for users loading very big images on connections that are now mobile all the time and unstable, to say the least.
I'm making an action game and I'd like to know what should be the maximum size of the stage (mine is 660 x 500).
Also I'd like to know how big a game sprite should be. Currently my biggest sprites have a size of 128 x 128, and I read somewhere on the internet that you should not make them bigger because of performance issues.
If you want to make e.g. big explosions with shockwaves, even 128 x 128 does not look very big. What's the maximum size I can safely use for sprites? I cannot find any real answer about this, so I appreciate every hint I can get, because this topic makes me a little nervous.
Cited from:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/DisplayObject.html
http://kb2.adobe.com/cps/496/cpsid_49662.html
Display objects:
Flash Player 10 increased the maximum size of a bitmap to a maximum pixel count of 16,777,215 (the decimal equivalent of 0xFFFFFF). There is also a single-side limit of 8,191 pixels.
The largest square bitmap allowed is 4,095 x 4,095 pixels.
Content compiled to a SWF 9 target and running in Flash Player 10 or later is still subject to Flash Player 9 limits (2,880 x 2,880 pixels).
In Flash Player 9 and earlier, the limitation is 2,880 pixels in height and 2,880 pixels in width.
Stage
The usable stage size limit in Flash Player 10 is roughly 4,050 x 4,050 pixels. However, the usable size of the stage varies depending on the settings of the QUALITY tag. In some cases, it's possible to see graphic artifacts as the stage size approaches the 3,840-pixel range.
If you're looking for hard numbers, Jason's answer is probably the best you're going to get. Unfortunately, I think the only way to get a real answer to your question is to build your game and do some performance testing. The file size and dimensions of your sprite maps are going to affect RAM/CPU usage, but how much is too much will depend on how many sprites are on the stage, how they interact, and what platform you're deploying to.
A smaller stage will sometimes get you better performance (you'll tend to display fewer things), but what is more important is what you do with it. Also, a game with a stage larger than 800x600 may turn off potential sponsors (if you go that route with your game) because it won't fit on their portal site.
Most of my sprite sheets use tiles less than 64x64 pixels, but I have successfully implemented a sprite with each tile as large as 491x510 pixels. It doesn't have a super-complex animation, but the game runs at 60fps.
Bitmap caching is not necessarily the answer, but I found these resources to be highly informative when considering the impact of my graphics on performance.
http://help.adobe.com/en_US/as3/mobile/WS4bebcd66a74275c36c11f3d612431904db9-7ffc.html
and a video demo:
http://tv.adobe.com/watch/adobe-evangelists-paul-trani/optimizing-graphics/
Also, as a general rule, build your game so that it works first, then worry about optimization. A profiler can help you spot memory leaks and CPU spikes. FlashDevelop has one built in, there's often a console in packages like FlashPunk, or the good old-fashioned Windows Task Manager can be enough.
That might not be a concrete answer, but I hope it helps.