Working with canvas in different screen sizes - html

I'm planning to create a simple game using the HTML5 <canvas> tag and compile the code into a native application using PhoneGap, but the problem is that the canvas uses coordinates that are not relative to its size, so 20,20 on a 960x640 screen is different from 20,20 on a 480x800 one.
So I want to know how to work with a <canvas> (which will be in fullscreen) on different screen sizes.

This is a common problem that has a pretty easy resolution. Often this is done by separating hard canvas coordinates from what is sometimes called "model" coordinates.
It really depends on how your code is organized, but I assume the game has some height and width to its world that takes up most or all of the screen. The two aspect ratios of the screens you are targeting are 1.5 and 1.667, so you'll want to cater to the smaller one.
So you'll really want to do your game in a set of "model" coordinates that have no bearing on the screen or canvas sizes. Since you are only targeting two screen sizes, your model coordinates can perhaps be 960x640, since that is the screen with the smaller of the two aspect ratios. It doesn't have to be; it could be 100x50 instead. But for this example we'll use 960x640 as our model coordinates.
Internally, you never use anything but these model coordinates. You never ever think in any other coordinates when making your game.
When the screen size is 960x640 you won't have to change anything at all, since it's a 1:1 mapping, which is convenient.
Then when the screen size is actually 800x480, when it comes time to draw to the screen, you'll want to scale all of the model coordinates by 3/4. The game will be made and will internally use 960x640, but it will be drawn into an area of 720x480. You'll also want to take any mouse or touch input and multiply it by 4/3 to turn the screen coordinates back into model coordinates.
This translation can be as easy as calling ctx.scale(3/4, 3/4) before you draw everything.
So both platforms will have code that is all written assuming the game is 960x640. The only time model coordinates become screen coordinates is when drawing to the canvas (which may be a different size) and when converting canvas mouse/touch coordinates back to model coordinates.
If that seems confusing to you I can try and make a sample.
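For instance, a minimal sketch of that idea might look like this (the canvas id, drawGame, and handleClick are assumed names for illustration, not part of the answer above):

// Model space is always 960x640; screen coordinates only appear at the edges.
var MODEL_WIDTH = 960, MODEL_HEIGHT = 640;
var canvas = document.getElementById('game');   // assumed canvas id
var ctx = canvas.getContext('2d');
// e.g. 3/4 when the canvas is 720x480
var scale = Math.min(canvas.width / MODEL_WIDTH, canvas.height / MODEL_HEIGHT);

function render() {
  ctx.save();
  ctx.scale(scale, scale);   // model -> screen
  drawGame(ctx);             // assumed: draws everything in 960x640 model coordinates
  ctx.restore();
}

canvas.addEventListener('click', function (e) {
  var rect = canvas.getBoundingClientRect();
  // screen -> model: divide by the same scale factor
  handleClick((e.clientX - rect.left) / scale, (e.clientY - rect.top) / scale);
});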

Use innerWidth/innerHeight of window object:
canvas.height = window.innerHeight;
canvas.width = window.innerWidth;
It'll auto-adjust to any screen, but you should test for cross-platform/screen compatibility!
Also, instead of using pre-defined x,y coordinates, use something like this:
var innerWidth = window.innerWidth;
x = innerWidth / 3;
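A minimal sketch of how those two pieces might fit together (the resize listener and the fraction-based draw() are assumptions for illustration, not part of the answer above):

var canvas = document.getElementById('game');   // assumed canvas id
var ctx = canvas.getContext('2d');

function draw() {
  // Positions and sizes as fractions of the current canvas size,
  // instead of pre-defined pixel coordinates.
  var x = canvas.width / 3;
  var y = canvas.height / 3;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillRect(x, y, canvas.width / 10, canvas.width / 10);
}

function resize() {
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
  draw();   // resizing clears the canvas, so redraw
}

window.addEventListener('resize', resize);
resize();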

Since you have just two screen sizes, you can have two different canvases (each with its own logic behind it) with different sizes.
If you don't like this solution, I think you can use sizes in % instead of absolute pixel dimensions.
Last but not least, try setting different values in the viewport meta tag.

Related

Using LibGDX and TexturePacker with xxxdpi images: images get distorted

I'm an android developer, building my first game using LibGDX. After reading A LOT, there is still one thing I don't understand.
I realize I should have one set of images for the highest resolution, 2560x1440 (to prevent ugly upscaling, since these are not vector images).
I'm using TexturePacker to pack my images, and even using Linear/Linear filters to get maximum quality:
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.filterMin = Texture.TextureFilter.Linear;
settings.filterMag = Texture.TextureFilter.Linear;
settings.maxWidth = 4096;
settings.maxHeight = 2048;
TexturePacker.process(settings, inputDir, outputDir, packFileName);
I've set up my camera, as recommended in several SO answers, with fixed meters to avoid all the PPM (pixels-per-meter) stuff. From the Screen's constructor:
mCamera = new OrthographicCamera();
mCamera.viewportWidth = 25.6f; // 2560 / 100
mCamera.viewportHeight = 14.4f; // 1440 / 100
mCamera.position.set(mCamera.viewportWidth / 2, mCamera.viewportHeight / 2, 0f);
mCamera.update();
As you can see I chose to work with 1 meter = 100 pixels.
Now, that means I should draw my sprites at 1/100 of their pixel size, which I'm doing here:
TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("images/pack.atlas"));
mImage = atlas.createSprite("image");
mImage.setSize(mImage.getWidth() / 100 , mImage.getHeight() / 100);
This works and the images are displayed as intended. BUT, they are distorted as hell! A perfectly round image looks pixelated on the edges and not round.
So, my questions are:
Is this the best practice or should I work differently?
Am I correct having only the highest quality images?
Does the distortion mean that my code is really scaling down the sprite, and somehow LibGDX has scaled it up again and damaged the quality?
Thanks!
EDIT
I'm going to adopt @Tenfour04's answers to (1) and (2).
Regarding (3), the problem is with Genymotion :( , it's pixelating the game. I've tested it on real devices with several resolutions and it looks perfect.
You have one error in setting up your camera: You assume that the screen is always the same aspect ratio, but in reality it will vary. It's better to pick a constant height or width (choose based on the type of game you're making) and adjust the opposite dimension based on the aspect ratio of the current screen. For example:
mCamera.viewportHeight = 25.6f; //game screen always 25.6 meters tall
mCamera.viewportWidth = 25.6f / (float)Gdx.graphics.getHeight() *
(float)Gdx.graphics.getWidth();
"I should have one set of images for the highest resolution" This is not necessarily true. If you do this, you will have to load very high resolution images on devices that can't take advantage of them. This will use a lot of memory and inflate load times. It may be fine during development, but down the road you will probably want to have two or three different smaller resolution versions that you can selectively load.
For this reason, I also would avoid hard-coding that 100 everywhere in your code. Make pixelsPerMeter a variable that is set once when the game starts up. For now it can just be 100, but if you decide to make some smaller resolution art later, you can adjust it accordingly then without having to find all the occurrences of 100 in your code.
Your min filter should be MipMapLinearLinear or MipMapLinearNearest. Otherwise, performance will suffer when sprites are drawn smaller than their native resolution, and may look distorted as you described. You can look up mip mapping to see why this is the case.

How to set up a standard paperscript canvas for paperscripts using different scales of numbers?

If I write several PaperScript drawings at random scales, be it lines of length 0.1 or length 1000 (script variable values!), how do I proceed in making these several scripts viewable in several canvases that all have the same size?
(Envision for example a webpage that shows several drawings in a same size html-canvas-box, all having a different drawing, each more zoomed in on an object, all coded with the correct values in Paperscript.)
In my case the canvas is set to 100% by 57.7%, inside a div. I want the drawings I make, at whatever scale, to show up there, with at most a little extra in my PaperScript, maybe setting the view height and width, or adjusting the scale.
Right now I am having a hard time keeping my view from behaving strangely.
When I use data-paper-resize="true", resize="true" or nothing at all in the <canvas> as an attribute, then what I am showing in my view doesn't remain the same in different browser sizes.
Is it that I used percentage for the canvas? Should I define something in my paperscript? Can I do this outside of the drawing script with some jQuery?
EDIT
When I change my CSS for the canvas from % to em, my drawing is stable, like the examples on the Paper.js site itself. But, also like the Paper.js site, they are not responsive, meaning you can adjust your browser's size and the canvas takes up the same number of pixels on the screen. I'm not sure how this will work for me in the long run, say with mobile devices...
It does cut out the strange behavior. Any ideas on making a fully responsive PaperScript canvas that behaves normally?
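A minimal sketch of one possible approach, assuming the canvas keeps resize="true" (or data-paper-resize="true") so Paper.js tracks the element size, and assuming re-fitting the drawing with Item.fitBounds suits the drawings in question:

// PaperScript: re-fit the whole drawing whenever the view size changes.
function fitDrawing() {
    // Scale and center the active layer to fill the current view,
    // whatever coordinate scale the drawing was authored in.
    project.activeLayer.fitBounds(view.bounds);
}

view.onResize = function(event) {
    fitDrawing();
};

fitDrawing();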

What sort of approach should I take for scaling sprites?

What sort of approach should I take when I'm writing a game that uses sprites.
Say for example, my phone runs with a 1080p resolution. If I wanted to run my game on my phone without some weird stretching going on, would I have to use a large sprite sheet with huge sprites, or would I just write the game with a small sprite sheet, using the original sizes for each sprite (without upscaling), and just let everything be automatically scaled by LibGDX?
Thank you!
I would recommend storing the image larger. You could then enable mipmapping and tweak the texture filters. (See libgdx texture filters and mipmap)
This way, the image gets automatically scaled into a variety of sizes at runtime, and then the appropriate image gets selected depending on the size at which the image is shown.

Supporting Multiple screen sizes Android AIR by making stage MAX resolution

Hey everyone, so this is not a duplicate! My question is: if I make the stage width and height of my Android AIR 4.0 application (using Flash CS6) say 1080x1920 and make all my MovieClips etc. fit the stage, will it then be able to fit all lower screen sizes and scale automatically, instead of having to create multiple XML files and a different-sized image for every available screen size?
I don't know a really good method of doing this, so I thought this might be a logical approach: since the stage is already the largest possible size, can't it just shrink down to all devices? I tested on my small screen and the only problem I am having is that it doesn't fill the whole width of the screen.
But then I add these lines of code in my constructor and everything fits perfectly:
stage.align = StageAlign.TOP_LEFT;
stage.scaleMode = StageScaleMode.EXACT_FIT;
stage.displayState = StageDisplayState.FULL_SCREEN;
Any thoughts?
If you follow the Flex model, it's the other way around. Flex apps are generally built for the lowest screen density (not resolution) and scale upwards. The sizing and placements respond naturally to the screen size. At each screen density range, a new set of images is used and a new multiplier is used for all of the sizes and positions.
So let's look at it this way. In Flex, there are a set of DPI ranges. 120 (generally ignored), 160, 240, 320, 480, and 640. Every device uses a single one of those settings. Take the iPhone. You build for 160, but the iPhone is 320dpi. So all of the values you use, built for 160dpi, are doubled for your app on the iPhone. You use images twice the size, too.
For most of my apps, I have at least four different sizes (one each for 160, 240, 320, and 480 dpi ranges) of every single image in my project. The images only scale if I don't have an image for the dpi range. The goal should be to never scale any images, up or down. In each range, the images remain the same size at all times and the only thing that changes is the positioning of them.
Now, I've used Flex as my example here, since the layout engine it uses is extremely thorough and well thought out, but it is entirely possible to build a simple system for this in AS3 as well (I did it last year with relative ease).
The biggest thing you need to do is forget about screen resolution. In this day and age, especially with Android where there are hundreds of different screens in use, screen size and resolution are irrelevant. Screen density, on the other hand, is everything.

Gap Between Sprites that are Tiled and Scaled

I am working on a map application and I have come across an issue with how my tiles are laying while scaling.
Here is a basic look at my structure:
There is obviously a lot more going on, but you get the idea. Now, I scale the Map App Sprite to zoom in. When that scaling occurs, there is a gap between each tile.
You can see the gap where 4 tiles meet here:
I am caching everything as a bitmap. For each Layer (which all extend Bitmap), I have smoothing set to true and pixelSnapping set to PixelSnapping.ALWAYS (pixel snapping shouldn't help here, but it shouldn't hurt either).
Does anyone have any suggestions on how to fix this issue?
(For the sake of completeness, the Map app is built entirely using AS3 and it is embedded in a Flex app)
Using integers for tile x,y locations and calculating those locations correctly is most likely the fix here, unless the images have seams in them!
The code that calculates and sets the x,y locations would be needed to properly pinpoint the issue.
But, also if you are scaling that container sprite, you would want to ensure that you scale so that the width/height of a tile is an integer value.
For example, if you scale your sprite that contains these tiles, the widths/heights of the individual tiles might not always be integers, therefore creating those seams you see.
What you could do in that case is do your scaling by adjusting your width/height values by integer values, taking into account proportions, as opposed to using scaleX and scaleY on your container sprite.
Without seeing your code it's difficult to be sure, but it is possibly just a visual artifact due to scaling. For example, a 250px-wide bitmap scaled to 155% should be rendered at 387.5px wide, but that's impossible, so it's rendered at 388px wide, with the 0.5px part rendered as 1px at 50% alpha to give the 'appearance' of 0.5px.
Ensuring that scaled bitmaps' widths/heights are always integers may solve it.
This looks like a rounding error.
Without code it's hard to know: it would be a great asset to you and us if you posted a barebones example of your tiling class. In the process of stripping it down you may very well discover your solution.
I'd suggest you test what happens when you scale and align four 100x100 bitmap images at various fine-grained steps, to detect whether it's a Flash rendering issue or a defect in your class.