Using LibGDX and TexturePacker, xxxdpi images get distorted

I'm an Android developer building my first game with LibGDX. After reading a lot, there is still one thing I don't understand.
I figured I should have one set of images for the highest resolution, 2560x1440 (to prevent ugly upscaling, since these aren't vector images).
I'm using TexturePacker to pack my images, and I even set both the min and mag filters to Linear to get maximum quality:
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.filterMin = Texture.TextureFilter.Linear;
settings.filterMag = Texture.TextureFilter.Linear;
settings.maxWidth = 4096;
settings.maxHeight = 2048;
TexturePacker.process(settings, inputDir, outputDir, packFileName);
I've set up my camera, as recommended in several SO answers, to a fixed size in meters, avoiding all the PPM stuff. From the Screen's constructor:
mCamera = new OrthographicCamera();
mCamera.viewportWidth = 25.6f; // 2560 / 100
mCamera.viewportHeight = 14.4f; // 1440 / 100
mCamera.position.set(mCamera.viewportWidth / 2, mCamera.viewportHeight / 2, 0f);
mCamera.update();
As you can see I chose to work with 1 meter = 100 pixels.
Now, that means I should draw my sprites at 1/100 of their size, which I'm doing here:
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("images/pack.atlas"));
mImage = atlas.createSprite("image");
mImage.setSize(mImage.getWidth() / 100 , mImage.getHeight() / 100);
This works and the images are displayed as intended. BUT, they are badly distorted! A perfectly round image looks pixelated on the edges and not round.
So, my questions are:
Is this the best practice or should I work differently?
Am I correct having only the highest quality images?
Does the distortion mean that my code is really scaling down the sprite, and that LibGDX has somehow scaled it up again and damaged the quality?
Thanks!
EDIT
I'm going to adopt @Tenfour04's answers to (1) and (2).
Regarding (3), the problem is with Genymotion :( , it's pixelating the game. I've tested it on real devices with several resolutions and it looks perfect.

You have one error in setting up your camera: You assume that the screen is always the same aspect ratio, but in reality it will vary. It's better to pick a constant height or width (choose based on the type of game you're making) and adjust the opposite dimension based on the aspect ratio of the current screen. For example:
mCamera.viewportHeight = 25.6f; // game screen is always 25.6 meters tall
mCamera.viewportWidth = 25.6f / (float)Gdx.graphics.getHeight() * (float)Gdx.graphics.getWidth();
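To sanity-check that adjustment without a device handy, here is a minimal sketch (the class name and the concrete screen sizes are mine, for illustration). Using the asker's 14.4-meter fixed height, a 16:9 screen recovers exactly the 25.6-meter width from the question, while a 4:3 screen gets a narrower world:

```java
// Compute the viewport width from a fixed viewport height and the
// current screen dimensions, preserving the screen's aspect ratio.
public class ViewportMath {
    public static float viewportWidth(float viewportHeight, int screenWidth, int screenHeight) {
        return viewportHeight / (float) screenHeight * (float) screenWidth;
    }

    public static void main(String[] args) {
        // A 16:9 screen with a fixed 14.4-meter height gives a ~25.6-meter width.
        System.out.println(viewportWidth(14.4f, 1920, 1080));
        // A 4:3 screen (e.g. 1024x768) gives a ~19.2-meter width instead.
        System.out.println(viewportWidth(14.4f, 1024, 768));
    }
}
```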
"I should have one set of images for the highest resolution" This is not necessarily true. If you do this, you will have to load very high resolution images on devices that can't take advantage of them. This will use a lot of memory and inflate load times. It may be fine during development, but down the road you will probably want to have two or three different smaller resolution versions that you can selectively load.
For this reason, I also would avoid hard-coding that 100 everywhere in your code. Make pixelsPerMeter a variable that is set once when the game starts up. For now it can just be 100, but if you decide to make some smaller resolution art later, you can adjust it accordingly then without having to find all the occurrences of 100 in your code.
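A sketch of that suggestion (the names are mine, not from the answer): keep the pixels-per-meter value in one place and derive all world sizes from it.

```java
// Centralize the pixel-to-world-unit conversion so the art resolution
// can change later without touching the rest of the code.
public class WorldUnits {
    // Set once at startup; e.g. drop to 50 if you later ship half-resolution art.
    public static float pixelsPerMeter = 100f;

    public static float toMeters(float pixels) {
        return pixels / pixelsPerMeter;
    }

    public static void main(String[] args) {
        // 2560 px wide art -> 25.6 world meters with the default 100 px/m.
        System.out.println(toMeters(2560f));
    }
}
```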
Your min filter should be MipMapLinearLinear or MipMapLinearNearest. Otherwise, performance will suffer when sprites are drawn smaller than their native resolution, and they may look distorted as you described. You can look up mip mapping to see why this is the case.

Related

How to use Capabilities in Air AS3 to determine correct dpi

I am currently looking into best practices for implementing images and graphics in a new Adobe AIR project and would like to know the best practice for using images. I have a fair bit of experience with Flash web development and am familiar with a lot of the standard coding issues. However, as I'm new to the mobile domain, I'd like to ensure I'm doing this in the best possible way, since the information available often contradicts the tutorials and existing material online.
I am currently building the project with the following: Adobe AIR, Flash CC for Android, testing on a Samsung G6.
I have my main app class, and I've now set up a class to determine the system Capabilities etc., so I have access to the following information to pre-plan my layout:
dpi : 640
stageResolution : 1440 width x 2464 height.
Adobe's templates for Android AS3 projects are 480 x 800.
Is it wise to stick with this size even though my target app is a higher resolution, creating all images and MovieClips at this size with a scaling mechanism for lower/higher resolutions, or is it common practice to keep to the 480 x 800 template and handle all resizing in code?
Having trawled through a number of links and articles on how Capabilities doesn't faithfully report the exact sizes and specifications, what dpi is best for images being loaded into the app?
For background images and gallery images (i.e. splash screen, embedded images, etc.), what dpi is best for maximum quality?
I loaded a 1440 x 2464 *.jpg at 72 dpi and it filled the screen perfectly on mobile. I also loaded a 1440 x 2464 *.jpg at 640 dpi and couldn't notice any difference, so is this effectively not worth worrying about?
Should I just create images at 72 dpi for everything (backgrounds, buttons, etc.) and resize them, or manipulate their bitmapData, when I add them to the stage?
For example: I create a series of new images in Photoshop, each 1 cm x 1 cm, for a button in the app. I create the same image at 72, 160, and 640 dpi to see the difference.
I load them all into my project as-is, next to each other.
The 72 dpi 1 cm x 1 cm image = actual size on screen just over 1 mm.
The 160 dpi 1 cm x 1 cm image = actual size on screen just over 2.5 mm.
The 640 dpi 1 cm x 1 cm image = actual size on screen just over 1.1 cm.
Clearly the definition and size are relative. The 640 dpi image is almost to scale on screen, despite being slightly larger (about 1 mm) than the image I created, but if I were to plan my layout in cm/mm in Photoshop or another program, it would fail miserably if I relied on these equations!?
I found links like this one (http://www.dallinjones.com/), which gives some insight into the conversion, but it still doesn't add up as it should.
According to the pixels-to-millimeters formula:
mm = (pixels * 25.4) / dpi;
My calculation in AS3 amounts to the following:
(1440 * 25.4) / 640 = 36,576 / 640 = 57.15 mm
But the actual screen width of the phone is 63+ mm!?
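The arithmetic above can be checked directly. A small sketch (the helper name is mine), mirroring the mm = (pixels * 25.4) / dpi formula; the mismatch with the physical ~63 mm is consistent with the reported dpi being a rounded density class rather than the exact panel density:

```java
public class DpiMath {
    // Convert a pixel length to millimeters for a given pixel density.
    public static double pixelsToMm(double pixels, double dpi) {
        return pixels * 25.4 / dpi;
    }

    public static void main(String[] args) {
        // 1440 px at the reported 640 dpi comes out to about 57.15 mm,
        // not the 63+ mm the physical screen actually measures.
        System.out.println(pixelsToMm(1440, 640));
    }
}
```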
Also, I've seen that vector graphics aren't as efficient when used on mobile.
I've seen documentation suggesting that vectors need 'cache as bitmap' set as well; is this for all vectors and drawn graphics?
Say I want to dynamically draw a background panel with rounded corners and a gradient that fills the stage.
drawBackgroundPanel(0, 0, stage.stageWidth, stage.stageHeight).
I create the rectangle, draw it, and add backgroundPanel to the stage. Do I then have to cache it as a bitmap as well?
Thank you for your help.

libgdx What texture size?

What size should my textures be so they look good on Android AND desktop, with good performance on Android? Do I need to create one texture for Android and another for desktop?
For a typical 2D game you usually want to use the maximum texture size that all of your target devices support. That way you can pack the most images (TextureRegion) within a single texture and avoid multiple draw calls as much as possible. For that, you can check the maximum texture size of all devices you want to support and take the lowest value. Usually the devices with the lowest maximum size also have lower performance; therefore, using a different texture size for other devices is not necessary to increase the overall performance.
Do not use a bigger texture than you need, though. E.g. if all of your images fit in a single 1024x1024 texture, then there is no gain in using e.g. a 2048x2048 texture, even if all your devices support it.
The spec guarantees a minimum of 64x64, but practically all devices support at least 1024x1024 and most newer devices support at least 2048x2048. If you want to check the maximum texture size on a specific device then you can run:
private static int getMaxTextureSize () {
    IntBuffer buffer = BufferUtils.newIntBuffer(16);
    Gdx.gl.glGetIntegerv(GL20.GL_MAX_TEXTURE_SIZE, buffer);
    return buffer.get(0);
}
The maximum is always square. E.g. this method might give you a value of 4096 which means that the maximum supported texture size is 4096 texels in width and 4096 texels in height.
Your texture size should always be a power of two; otherwise some functionality, like the wrap functions and mipmaps, might not work. It does not have to be square, though. So if you only have two images of 500x500, then it is fine to use a texture of 1024x512.
Note that the texture size is not directly related to the size of your individual images (TextureRegion) that you pack inside it. You typically want to keep the size of the regions within the texture as "pixel perfect" as possible. Which means that ideally it should be exactly as big as it is projected onto the screen. For example, if the image (or Sprite) is projected 100 by 200 pixels on the screen then your image (the TextureRegion) ideally would be 100 by 200 texels in size. You should avoid unneeded scaling as much as possible.
The projected size varies per device (screen resolution) and is not related to your world units (e.g. the size of your Image or Sprite or Camera). You will have to check (or calculate) the exact projected size for a specific device to be sure.
If the screen resolution of your target devices varies a lot then you will have to use a strategy to handle that. Although that's not really what you asked, it is probably good to keep in mind. There are a few options, depending on your needs.
One option is to use one size somewhere within the middle. A common mistake is to use way too large images and downscale them a lot, which looks terrible on low res devices, eats way too much memory and causes a lot of render calls. Instead you can pick a resolution where both the up scaling and down scaling is still acceptable. This depends on the type of images, e.g. straight horizontal and vertical lines scale very well. Fonts or other high detailed images don't scale well. Just give it a try. Commonly you never want to have a scale factor more than 2. So either up scaling by more than 2 or down scaling by more than 2 will quickly look bad. The closer to 1, the better.
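The rule of thumb above (never scale by more than 2x in either direction) can be expressed as a quick check; the helper names are mine, not from the answer:

```java
public class ScaleCheck {
    // Ratio between the size an image is drawn at and its native size.
    public static float scaleFactor(float projectedPx, float nativePx) {
        return projectedPx / nativePx;
    }

    // Heuristic from the answer: avoid up- or down-scaling by more than 2x.
    public static boolean acceptable(float scale) {
        return scale >= 0.5f && scale <= 2f;
    }

    public static void main(String[] args) {
        System.out.println(acceptable(scaleFactor(100f, 400f))); // 0.25x: too much downscaling -> false
        System.out.println(acceptable(scaleFactor(150f, 100f))); // 1.5x: acceptable -> true
    }
}
```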
As @Springrbua correctly pointed out, you could use mipmaps to get better downscaling than a factor of 2 (mipmaps don't help with upscaling). There are two problems with that, though. The first is that it causes bleeding from one region to another; to prevent that, you can increase the padding between the regions in the atlas. The other is that it causes more render calls. The latter is because devices with a lower resolution usually also have a lower maximum texture size, and even though you will never use that maximum, it still has to be loaded on that device. That will only be an issue if you have more images than can fit in the lowest maximum size, though.
Another option is to divide your target devices into categories, for example "HD", "SD" and such. Each group has a different average resolution and usually a different maximum texture size as well. This gives you the best of both worlds: it allows you to use the maximum texture size while not having to scale too much. LibGDX comes with the ResolutionFileResolver, which can help you decide which texture to use on which device. Alternatively, you can use e.g. a different APK based on the device specifications.
The best way (regarding performance + quality) would be to use mipmaps.
That means you start with a big texture (for example 1024x1024 px) and downsample it to a quarter of its size (half in each dimension) until you reach a 1x1 image.
So you have a 1024x1024, a 512x512, a 256x256... and a 1x1 texture.
As far as I know, you only need to provide the biggest texture (1024x1024 in the example above), and LibGDX will create the mipmap chain at runtime.
OpenGL under the hood then decides which Texture to use, based on the Pixel to Texel ratio.
Taking a look at the Texture API, there is a two-parameter constructor, where the first parameter is the FileHandle for the texture and the second is a boolean indicating whether you want to use mipmaps or not.
As far as I remember, you also have to set the right TextureFilter.
To know which TextureFilter to use, I suggest reading this article.
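The mipmap chain described above halves each dimension per level until 1x1 is reached. A small sketch of the level sizes for a square texture (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class MipChain {
    // List the side lengths of a square texture's mipmap levels,
    // from the full-size base level down to 1x1.
    public static List<Integer> chain(int size) {
        List<Integer> levels = new ArrayList<>();
        for (int s = size; s >= 1; s /= 2) {
            levels.add(s);
        }
        return levels;
    }

    public static void main(String[] args) {
        // 11 levels for a 1024x1024 base texture.
        System.out.println(chain(1024)); // [1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
    }
}
```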

What is the difference between getWinSize and getWinSizeInPixels in cc.Director?

What is the intended difference between these 2 functions:
var size = cc.Director.getInstance().getWinSize();
var sizePx = cc.Director.getInstance().getWinSizeInPixels();
In my case they both return the exact same value.
In which cases should they return different values?
In recent versions of Cocos2d-x the given answer is no longer accurate; specifically, the framework has dropped support for the explicit retina mode in favor of letting the programmer set the resolution of the game independently of the screen and asset resolution, performing scaling when necessary.
Strictly speaking, getWinSize() returns whatever design resolution you chose (using CCGLView::setDesignResolution(float, float, ResolutionPolicy)) as-is; getWinSizeInPixels() returns the design resolution multiplied by the content scale factor, which is, again, provided by you with CCDirector::setContentScaleFactor(float). If you do not provide these values, Cocos2d-x will choose the design resolution based on a platform-dependent default. For example, on iOS it will use the size of the provided CAEAGLView in pixels (which may be less than the real device resolution in some cases), and both getWinSize() and getWinSizeInPixels() will return the same value.
getWinSize() and getWinSizeInPixels() will return different values if you are scaling your resources to the game resolution. In such case getWinSizeInPixels() indicates what the resolution would be if you didn't have to scale the resources.
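Under the description above, the relationship between the two values is just a multiplication. A sketch (the names are mine, not the Cocos2d-x API):

```java
public class WinSizeDemo {
    // getWinSize()           ~ the design resolution, as set via setDesignResolution().
    // getWinSizeInPixels()   ~ design resolution * content scale factor.
    public static float winSizeInPixels(float designSize, float contentScaleFactor) {
        return designSize * contentScaleFactor;
    }

    public static void main(String[] args) {
        // Design resolution 480 wide, assets authored for a 960-wide screen:
        // contentScaleFactor = 2, so the pixel size reports 960.
        System.out.println(winSizeInPixels(480f, 2f)); // 960.0
        // With a scale factor of 1 both calls return the same value.
        System.out.println(winSizeInPixels(480f, 1f)); // 480.0
    }
}
```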
Some possible setups to illustrate how the system works:
1 set of images, 1 design resolution, many device resolutions = images are scaled whenever design != device (worse looks for upscaling, unnecessary memory/CPU usage for downscaling), single-resolution layout code (presumably simpler)
multiple sets of images, 1 design resolution, many device resolutions = less need to scale the images because the different assets cover a wider range of targets; single-resolution layout code is preserved
multiple sets of images, multiple/adaptable design resolutions, many device resolutions = less need to scale, but the code must be explicitly agnostic of the resolution (presumably more complex)
It is possible I've got something wrong since I've started looking into Cocos2dx not so long ago, but that's the results I've got after some testing.
One returns points (logical pixels), the other physical pixels. On Retina displays the two values differ (by a factor of 2).

Supporting Multiple screen sizes Android AIR by making stage MAX resolution

Hey everyone, so this is not a duplicate! My question: if I make the stage width and height of my Android AIR 4.0 application (using Flash CS6) say 1080x1920, and make all my MovieClips etc. fit the stage, will it then be able to fit all lower screen sizes and scale automatically, instead of my having to create multiple XML files and a different-sized image for every available screen size?
I don't know a really good method of doing this, so I thought this might be a logical approach: since it's already the largest possible size, can't it just shrink down to fit all devices? I tested it on my small screen and the only problem I am having is that it does not fill the whole width of the screen.
But then I add these lines of code in my constructor and everything fits perfectly:
stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.EXACT_FIT;
stage.displayState = StageDisplayState.FULL_SCREEN;
Any thoughts?
If you follow the Flex model, it's the other way around. Flex apps are generally built for the lowest screen density (not resolution) and scale upwards. The sizing and placements respond naturally to the screen size. For each screen density range, a new set of images is used and a new multiplier is applied to all of the sizes and positions.
So let's look at it this way. In Flex, there is a set of DPI ranges: 120 (generally ignored), 160, 240, 320, 480, and 640. Every device uses exactly one of those settings. Take the iPhone: you build for 160, but the iPhone is 320 dpi, so all of the values you use, built for 160 dpi, are doubled for your app on the iPhone. You use images twice the size, too.
For most of my apps, I have at least four different sizes (one each for the 160, 240, 320, and 480 dpi ranges) of every single image in my project. The images are only scaled if I don't have an image for the dpi range. The goal should be to never scale any images, up or down. In each range, the images remain the same size at all times and the only thing that changes is their positioning.
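That bucketing can be sketched as follows (the DPI classes follow the ranges mentioned above; the helper names are mine):

```java
public class DpiBuckets {
    // The Flex DPI classes mentioned above.
    static final int[] BUCKETS = {120, 160, 240, 320, 480, 640};

    // Pick the closest bucket for a device's reported density.
    public static int bucketFor(int deviceDpi) {
        int best = BUCKETS[0];
        for (int b : BUCKETS) {
            if (Math.abs(b - deviceDpi) < Math.abs(best - deviceDpi)) best = b;
        }
        return best;
    }

    // Multiplier applied to sizes authored for the 160-dpi baseline.
    public static float multiplier(int deviceDpi) {
        return bucketFor(deviceDpi) / 160f;
    }

    public static void main(String[] args) {
        System.out.println(multiplier(326)); // iPhone-class screen: 320 bucket -> 2.0
        System.out.println(multiplier(640)); // 640 bucket -> 4.0
    }
}
```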
Now, I've used Flex as my example here, since the layout engine it uses is extremely thorough and well thought out, but it is entirely possible to build a simple system for this in AS3 as well (I did it last year with relative ease).
The biggest thing you need to do is forget about screen resolution. In this day and age, especially on Android where there are hundreds of different screens in use, screen size and resolution are irrelevant. Screen density, on the other hand, is everything.

Working with canvas in different screen sizes

I'm planning to create a simple game using the HTML5 <canvas> tag and compile the code into a native application using PhoneGap, but the problem is that canvas coordinates are not relative to its size, so 20,20 on a 960x640 screen is different from 20,20 on a 480x800 one.
So I want to know how to work with a <canvas> (which will be in fullscreen) on different screen sizes.
This is a common problem that has a pretty easy resolution. Often this is done by separating hard canvas coordinates from what is sometimes called "model" coordinates.
It really depends on how your code is organized, but I assume the game has some height and width to its world that takes up most or all of the screen. The two aspect ratios of the screens you are targeting are 1.5 and 1.666, so you'll want to cater to the smaller one.
So you'll really want to do your game in a set of "model" coordinates that have no bearing on the screen or canvas size. Since you are only targeting two screen sizes, your model coordinates can be 960x640, since that is the screen with the smaller aspect ratio. They don't have to be; they could be 100x50 instead. But in this example we'll use 960x640 as our model coordinates.
Internally, you never use anything but these model coordinates. You never ever think in any other coordinates when making your game.
When the screen size is 960x640 you won't have to change anything at all, since it's a 1:1 mapping, which is convenient.
Then when the screen size is actually 800x480, when it comes time to draw to the screen, you'll want to scale all of the model coordinates by (3/4), so the game will be made and will internally use 960x640, but it will be drawn in an area of 720x480. You'll also want to take any mouse or touch input and multiply it by (4/3) to turn the screen coordinates into model coordinates.
This translation can be as easy as calling ctx.scale(3/4, 3/4) before you draw everything.
So both platforms will have code that is written assuming the game is 960x640. The only time that model coordinates become screen coordinates is when drawing to the canvas (which is a different size) and when converting canvas mouse/touch coordinates to model coordinates.
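The conversion boils down to one scale factor; here is the arithmetic as a self-contained sketch (shown in Java for compactness; on the actual page it would be the ctx.scale call plus the inverse multiply on input events, and the class name is mine):

```java
public class ModelCoords {
    // Scale that maps 960x640 model space onto the given screen,
    // preserving the aspect ratio (leftover space is letterboxed).
    public static float scaleFor(float screenW, float screenH) {
        return Math.min(screenW / 960f, screenH / 640f);
    }

    public static void main(String[] args) {
        float s = scaleFor(800f, 480f);
        System.out.println(s);        // 0.75
        System.out.println(960f * s); // drawn width: 720.0
        System.out.println(640f * s); // drawn height: 480.0
        // Inverse for touch input: screen -> model.
        System.out.println(600f / s); // a touch at x=600 maps to model x=800.0
    }
}
```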
If that seems confusing to you I can try and make a sample.
Use innerWidth/innerHeight of window object:
canvas.height = window.innerHeight;
canvas.width = window.innerWidth;
It'll auto-adjust to any screen, but you must test for cross-platform/screen compatibility!
Also, instead of using pre-defined x,y coordinates, use something like this:
var innerWidth = window.innerWidth;
x = innerWidth / 3;
Since you have just two screen sizes, you can have two different canvases (and the logic behind them) with different sizes.
If you don't like this solution, I think you can only use sizes in % instead of absolute pixel dimensions.
Last but not least, try setting different values in the meta tag.