Why would I want to use unit scale? (libgdx)

I have looked into the SuperKoalio example in the libgdx GitHub repo. It is basically a test for integrating Tiled maps with libgdx. They are using a unit scale of 1/16, and if I have understood it correctly, it means that the world is no longer based on a grid of pixels but on a grid of units (each 16 pixels wide). This is the code and comment from the example:
// load the map, set the unit scale to 1/16 (1 unit == 16 pixels)
map = new TmxMapLoader().load("data/maps/tiled/super-koalio/level1.tmx");
renderer = new OrthogonalTiledMapRenderer(map, 1 / 16f);
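(For context, with that unit scale the camera is normally sized in world units as well; a minimal sketch, not taken from the example itself, with an assumed 30x20-unit viewport:)
OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(false, 30, 20); // view 30x20 world units (tiles), not pixels
renderer.setView(camera);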
I am basically wondering why you would want to do that. I have only run into problems doing it and can't see any obvious advantages.
For example, one problem I had was adding a BitmapFont. It didn't scale with the background at all, and one pixel of the font occupied an entire unit. Image here.
I'm using this code for drawing the font. It's the standard 14-point Arial font included in libgdx:
BitmapFont font = new BitmapFont(); // libgdx's bundled default font
font.setColor(Color.YELLOW);

public void draw() {
    spriteBatch.begin();
    font.draw(spriteBatch, "Score: " + thescore, camera.position.x, 10f);
    spriteBatch.end();
}

I assume there is a handy reason to use a 1/16 scale for tiled maps: it makes computations convenient, such as working out which tile is being hit or which tile to change, because the tiles sit at handy whole-number indices.
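For instance, here is a minimal sketch (my own illustration, assuming a single TiledMapTileLayer and a player position already expressed in world units) of why whole-number indices are handy:
// With 1 unit == 1 tile, mapping a world position to a tile index is a plain cast.
int tileX = (int) player.position.x;
int tileY = (int) player.position.y;
TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(0);
TiledMapTileLayer.Cell cell = layer.getCell(tileX, tileY); // null for empty cells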
Anyway, regardless of what transformation (and thus what "camera", and thus what projection matrix) is used for rendering your tiles, you can use a different camera for your UI.
Look at the Superjumper demo, and see how it uses a separate "guiCam" to render the "GUI" elements (pause button, game-over text, etc.). The WorldRenderer has its own camera that uses world-space coordinates to update and display the world.
This way you can use the appropriate coordinates for each aspect of your display.
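A minimal sketch of that two-camera setup (the names and viewport sizes are my own assumptions, not from the demo):
// World camera: measured in units, to match the 1/16 unit scale of the map renderer.
OrthographicCamera worldCam = new OrthographicCamera();
worldCam.setToOrtho(false, 30, 20);

// GUI camera: measured in pixels, so fonts render at their native size.
OrthographicCamera guiCam = new OrthographicCamera();
guiCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

public void render() {
    worldCam.update();
    renderer.setView(worldCam);
    renderer.render();

    spriteBatch.setProjectionMatrix(guiCam.combined);
    spriteBatch.begin();
    font.draw(spriteBatch, "Score: " + thescore, 10f, 20f); // pixel coordinates
    spriteBatch.end();
}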

Related

libGDX repeated background texture drawing difficulties

Like everyone else, I'm having trouble following how libgdx's coordinate transformations work. I'm creating a Scrabble-like game, and dragging a finger on the screen pans the camera. The tiles are Actors on a Stage, and I'm doing the camera transformations on the stage's camera. That all works. Now I'm trying to give it a fancy background for the tiles to sit on, but I can't quite figure out the draw method to make it work.
//Assets class
static final Texture feltBackground = new Texture(
        Gdx.files.internal("felt_background.png"));
static {
    feltBackground.setWrap(Texture.TextureWrap.Repeat,
            Texture.TextureWrap.Repeat);
}
//board rendering snippet
private void drawBackground() {
    batch.setProjectionMatrix(board.getCamera().combined);
    batch.begin();
    // set the boundary points to the screen corners...
    screenUpperLeft.set(0, 0, 0);
    screenLowerRight.set(Gdx.graphics.getWidth(),
            Gdx.graphics.getHeight(), 0);
    // ...and unproject them into world space
    board.getCamera().unproject(screenUpperLeft);
    board.getCamera().unproject(screenLowerRight);
    batch.draw(Assets.feltBackground,
            0, 0,
            (int) screenUpperLeft.x, (int) screenUpperLeft.y,
            (int) screenLowerRight.x, (int) screenLowerRight.y);
    batch.end();
}
What I'm attempting in drawBackground is:
set the boundary points to the screen bounds
unproject those points into world space to find the region of the texture I should draw
draw that region at the screen origin
However, with the code like this I'm having various weird problems: the lower boundary of the background region starts offscreen, the upper boundary moves when the camera is zoomed, the texture pans faster than my finger's movement (although the boundaries of the background region pan correctly), the y-axis pans mirrored, and so on. Changing the code changes these symptoms, but I haven't found a pattern that gets me closer to being correct. Any words of wisdom for drawing like this?
Edit:
Here are some screenshots to add clarity.
When I open the game, there is no background.
I can drag up, which moves the upper boundary of the background up (if there is a lower boundary, I can't ever find it).
I can drag the left boundary to the right, but as with the bottom, I can't ever find a right boundary (if it exists).
Whenever you see weird synchronization issues between your camera and the stuff you're trying to draw, it's generally a symptom of confusing world coordinates with viewport coordinates (or vice versa) somewhere in your code.
You're using a SpriteBatch to do the rendering, right? When I look up the API for the version of SpriteBatch#draw() you are calling, it says that the first two floats are the position at which the sprite is drawn in the world, and the four integers describe the source region (srcX, srcY, srcWidth, srcHeight) within the source image.
When you pass (0, 0) as the position, it draws the image at (0, 0) in your game world, which is not (0, 0) on your viewport.
I would recommend a simpler approach: instead of trying to project or unproject the coordinates, set the draw location (those first two floats) to stage.getCamera().position.x and stage.getCamera().position.y.
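A minimal sketch of that idea (my own illustration, reusing the batch, board and Assets.feltBackground from the question, and assuming the stage camera's viewport is in pixels, which is the Stage default):
private void drawBackground() {
    Camera cam = board.getCamera();
    // the camera position is the centre of the view, so offset by half the
    // viewport to reach the lower-left corner
    float x = cam.position.x - cam.viewportWidth / 2f;
    float y = cam.position.y - cam.viewportHeight / 2f;
    batch.setProjectionMatrix(cam.combined);
    batch.begin();
    // drawing at the camera's corner keeps the quad glued to the screen, while
    // offsetting the source region by the same amount keeps the repeating
    // pattern anchored in world space as the camera pans
    batch.draw(Assets.feltBackground,
            x, y,
            (int) x, (int) y,
            (int) cam.viewportWidth, (int) cam.viewportHeight);
    batch.end();
}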

Is there a way to have smooth/subpixel motion without turning on smoothing on graphics?

I'm creating this 2D pixel-art game. When the camera follows the player (it uses easing), the position gets several subpixel adjustments on the final approach.
If I have smoothing OFF (on my graphic assets), the graphics look good (sharp, as pixel art should be) but the subpixel motion is jerky/jumpy.
If I have smoothing ON, the subpixel motion is smooth, but the pixel-art graphics look blurry.
I'm using Flash Player v21. I've tried this with Starling and with Flash's display list.
You have a pixelated object that is moving in increments of less than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, multiples of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!).
Concept
Create a driver object whose position is controlled by the easing, using floating-point numbers.
Let this driver control where the actual display object is rendered, using a constraint that only allows the display object to land on your chosen resolution grid.
Code Example
// You'll put these lines (or equivalent) in the correct spots for your particular needs.
// SCALE_UP is your resolution control: if your virtual pixels are 4 screen pixels wide, use 4.
const SCALE_UP:int = 4;

var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver,
d._drives = c; // or the thing the driver drives, via the linked object.
               // You don't have to do this.
Then, when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void {
    // converts the driver's floating-point position into a multiple of SCALE_UP
    c.x = Math.ceil(d.x - Math.ceil(d.x) % SCALE_UP);
    c.y = Math.ceil(d.y - Math.ceil(d.y) % SCALE_UP);
}
Now this will make your character move around 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance, 102 - 102%4 = 100, 103 - 103%4 = 100, and 104 - 104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 twenty-five times with a remainder of 2, so 102%4 = 2. Then 102 - 2 = 100.
In your case, since the "camera" is following the player (i.e. the background is moving, right?), you really need to apply drivers to everything in the background instead, but it is basically the same idea.
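(The same snap, sketched in Java to match the other examples in this thread; the names are made up:)
// Ease the floating-point driver position every frame as usual, then snap
// the value actually used for rendering to the virtual pixel grid.
static int snap(float eased, int scaleUp) {
    return (int) (Math.ceil(eased) - Math.ceil(eased) % scaleUp);
}
// e.g. backgroundSprite.x = snap(driverX, SCALE_UP);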
Hope this helps.
Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target, but you should also notice it during the rest of the animation.
Depending on the easing "engine" you're using, you should be able to set a "round values" flag, so that all the coordinates it sets are integer values rather than fractional ones.
If that's not possible, find a way in your display objects to round the x and y values every time they change.

Actionscript 3 number drawing recognition

I am working on a game that compares a kid's drawing (made with the mouse or by gesture) to the numbers from 1 to 9. Is converting the drawing to a bitmap and comparing it with the number rendered as a bitmap a good idea?
And how do I handle the difference in size (width/height) between the two images?
Thanks
You can do it with image comparison, but it's pretty tricky to get right. What I would suggest is the following (a sketch of the final step follows the list):
Pre-generate small (10x10 pixels or even smaller) grayscale images of the numbers and blur them a little
Convert the drawing to grayscale
Blur the drawing a bit
Crop the borders from the drawing
Resize the drawing down to the size of your number images
Compare the small drawing image with each generated number image, pixel by pixel, and be lenient about what you accept as a match
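A rough sketch of that last comparison step (in Java purely for illustration; all names here are made up, and both images are assumed to already be grayscale, blurred, cropped and resized to the same small dimensions):
import java.awt.image.BufferedImage;

// Lenient pixel-by-pixel comparison: accept if enough pixels are close,
// rather than requiring exact equality.
static boolean roughlyMatches(BufferedImage drawing, BufferedImage reference,
                              int perPixelTolerance, double minMatchRatio) {
    int matches = 0;
    int total = drawing.getWidth() * drawing.getHeight();
    for (int y = 0; y < drawing.getHeight(); y++) {
        for (int x = 0; x < drawing.getWidth(); x++) {
            int a = drawing.getRGB(x, y) & 0xFF;   // grayscale: any channel will do
            int b = reference.getRGB(x, y) & 0xFF;
            if (Math.abs(a - b) <= perPixelTolerance) matches++;
        }
    }
    return matches / (double) total >= minMatchRatio;
}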
You can try Mouse Gesture Recognition
var gesture:MouseGesture = new MouseGesture(stage);
gesture.addGesture('B', '260123401234');
gesture.addEventListener(GestureEvent.MATCH, matchHandler);

function matchHandler(event:GestureEvent):void
{
    trace(event.datas + ' matched!');
}

Scrolling large tile map using cocos2d-x gives me black tiles where there should be green tiles

I am loading a 400x400 tile map created with the Tiled software.
Each tile is 120 pixels, for a total of 48,000x48,000 pixels.
I load it like this:
regionMap->initWithTMXFile("background2.tmx");
mapLayer->addChild(regionMap, 0, enTagTileMap);
mapLayer->setAnchorPoint(CCPoint(0,1));
Then I scroll like this:
mapLayer->setPosition(position);
When I scroll vertically to about this position, I stop getting tiles from the map; I just get black tiles:
x=0, y=5483.748535
When I scroll horizontally, I do not get the same problem, even when I reach this position:
x=-48000, y=400
Thanks in advance.
I think it's fair to assume that cocos2d-x's tilemap renderer is a direct port of the one in cocos2d-iphone. If true, they both have the same restriction: a maximum of 65,536 vertices, which at four vertices per tile quad means 16,384 tiles that can be displayed (not counting empty tiles).
Your tilemap is 400x400 = 160,000 tiles, assuming there is only one layer and there aren't any "empty" tiles (empty == tile locations with GID value 0). That is about ten times the number of tiles cocos2d will/can render.
Cocos2d will just render up to 16,384 tiles and then stop; the remaining tiles will not be rendered, so you see the background color (default: black).
A common but awkward workaround is to split the map into several TMX files and align them in code.

Bad quality texture stage3D

I'm drawing a simple square in Stage3D, but the quality of the numbers and the edges in the picture is not as high as it should be:
Here's the example with the (little) source code, I've put the most in one file.
http://users.telenet.be/fusion/SquareQuality/
http://users.telenet.be/fusion/SquareQuality/srcview/
I'm using mipmapping; in my shader I use "<2d, miplinear, repeat>". The texture is a 256x256 JPG (bigger than in the image); I also tried a PNG, tried "mipnearest", and tried without mipmapping. Anti-aliasing is set to 4, but 10 or more doesn't help at all...
Any ideas?
Greetings,
Thomas
Are you using antialiasing for the back buffer?
// Listen for when the Context3D is created
stage3D.addEventListener(Event.CONTEXT3D_CREATE, onContext3DCreated);

function onContext3DCreated(ev:Event):void
{
    var context3D:Context3D = stage3D.context3D;
    // Set up the back buffer for the context
    context3D.configureBackBuffer(stage.stageWidth, stage.stageHeight,
        0,     // no antialiasing (use values 2-16 for antialiasing)
        true);
}
I think the size of your source texture is too high. The GPU renders your scene pixel by pixel in the fragment shader. When it renders a pixel of your texture, the fragment shader gets a varying that represents the texture UV, and the GPU simply takes the color of the texel at that UV coordinate of your texture.
Now, when your texture size is too high, you lose information, because two neighboring pixels on the screen correspond to non-neighboring pixels in the texture resource. For example, if you draw the texture ten times smaller than the resource, you get something like this (where each character corresponds to a pixel, in one dimension):
Texture: 0123456789ABCDEFGHIJKLM
Screen:  0AK
I'VE FOUND IT!!!
I went to the Starling forum and found an answer from Daniel from Starling:
"If you're using TRILINEAR, you're already using the best quality available. One additional thing you could try is to set the "antialiasing" value of Starling to a high value, e.g. 16, and see if that helps."
So I came across this article that said trilinear filtering is only used when you put the argument "linear" in your fragment shader; in my example program:
"tex ft0, v0, fs0 <2d, linear, miplinear, repeat>"
Greetings,
Thomas