How to draw BitmapFont to fill rectangular region? - libgdx

With SpriteBatch, I can use this method to stretch a TextureRegion to fill a fixed "world region":
/** Draws a rectangle with the bottom left corner at x,y and stretching the region to cover the given width and height. */
public void draw (TextureRegion region, float x, float y, float width, float height)
Is there an equivalent method in BitmapFont?

There isn't. If you need different sizes of a font, you need to create them and add them to your assets individually, or you can use gdx-freetype, which generates them on the fly (thus saving a lot of space).
https://github.com/libgdx/libgdx/wiki/Gdx-freetype
As link-only answers are not valid here, I will just copy the relevant parts of the wiki entry for completeness:
Download the latest nightly build.
Open libgdx-nightly-latest.zip/extensions/gdx-freetype and do the following:
extract gdx-freetype.jar and gdx-freetype-natives.jar to your core project's libs folder
link gdx-freetype.jar to your core, android and desktop project
link gdx-freetype-natives.jar to your desktop project
copy armeabi/libgdx-freetype.so to your android project's libs/armeabi folder
copy armeabi-v7a/libgdx-freetype.so to your android project's libs/armeabi-v7a folder
In code:
FreeTypeFontGenerator generator = new FreeTypeFontGenerator(Gdx.files.internal("fonts/myfont.ttf"));
BitmapFont font12 = generator.generateFont(12); // font size 12 pixels
BitmapFont font25 = generator.generateFont(25); // font size 25 pixels
generator.dispose(); // don't forget to dispose to avoid memory leaks!
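Note: in current libgdx versions, generateFont(int) has been replaced by a parameter object. A minimal sketch of the newer API, using the same font file as above:
FreeTypeFontGenerator generator = new FreeTypeFontGenerator(Gdx.files.internal("fonts/myfont.ttf"));
FreeTypeFontGenerator.FreeTypeFontParameter parameter = new FreeTypeFontGenerator.FreeTypeFontParameter();
parameter.size = 12; // font size in pixels
BitmapFont font12 = generator.generateFont(parameter);
generator.dispose(); // still required once all fonts are generated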

Related

How to make two or more different font files use a common Atlas in libgdx?

I use different fonts for different screens. Some of the fonts contain only a few letters, so it would be inefficient to generate an atlas for each font. So is there a way to make all of my fonts use a single atlas?
You can use a constructor that takes a TextureRegion. For example, if your font and image were named myFont.fnt and myFont.png, you can put myFont.png in with the rest of your sprite source images and pack it into the texture atlas. Put the .fnt files in with the rest of your assets. Then after loading the texture atlas:
myFont = new BitmapFont(Gdx.files.internal("myFont.fnt"), myTextureAtlas.findRegion("myFont"));
To use it in a Skin, you'll want to add it to a Skin before loading the Json file:
skin = new Skin(); // instantiate blank skin
skin.add("myFont", myFont);
skin.load(Gdx.files.internal("mySkin.json"));
The Json file can reference the font by the name you use when adding it to the skin.
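For reference, a minimal sketch of what such a Json entry might look like (the Label style is only an illustration; the key point is that font: myFont refers to the name used in skin.add):
{
    com.badlogic.gdx.scenes.scene2d.ui.Label$LabelStyle: {
        default: { font: myFont }
    }
}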
That said, I'd highly advise using AssetManager for everything. In this case, your skin Json could define the font normally. The only extra step you need, beyond loading everything into the asset manager the usual way, is to add a BitmapFontLoader.BitmapFontParameter that specifies the TextureAtlas containing the region. The AssetManager uses this to determine that the BitmapFont depends on the atlas, so it will load them in the correct order.
BitmapFontLoader.BitmapFontParameter fontParam = new BitmapFontLoader.BitmapFontParameter() {{
    atlasName = "mySkin.pack";
}};
assetManager.load("myFont.fnt", BitmapFont.class, fontParam);
The atlas should be the same one the skin uses.
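Putting it together, a minimal sketch of the full loading sequence under those assumptions (SkinLoader.SkinParameter points the skin at the same atlas):
assetManager.load("mySkin.pack", TextureAtlas.class);
assetManager.load("myFont.fnt", BitmapFont.class, fontParam); // fontParam from above
assetManager.load("mySkin.json", Skin.class, new SkinLoader.SkinParameter("mySkin.pack"));
assetManager.finishLoading(); // or call update() each frame until done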

Resize Screen based on Device Resolution Monogame Windows Phone 8

I am developing a game using the MonoGame framework.
The game code itself was converted from XNA code I had written previously.
I made the game assets (background, etc.) for an 800*480 resolution. The problem appears when I try to run it at a different device resolution (a 720p emulator).
How do I actually resize it?
I already added this code, but nothing happens:
public Game1()
{
    _graphics = new GraphicsDeviceManager(this);
    this.Window.ClientSizeChanged += Window_ClientSizeChanged;
}

void Window_ClientSizeChanged(object sender, EventArgs e)
{
    int currentWidth = this.Window.ClientBounds.Width;
    int currentHeight = this.Window.ClientBounds.Height;
}
That code only returns 0 for both currentWidth and currentHeight.
I also tried this code:
this.Window.ClientSizeChanged += new EventHandler<EventArgs>(Window_ClientSizeChanged);

void Window_ClientSizeChanged(object sender, EventArgs e)
{
    graphics.PreferredBackBufferWidth = Window.ClientBounds.Width;
    graphics.PreferredBackBufferHeight = Window.ClientBounds.Height;
    graphics.ApplyChanges();
}
But it has a similar result: nothing happens on resize.
Please point me to a way to fix this, because my game can only run at 480*800 resolution. At other resolutions it displays empty space instead.
Thanks in advance.
How I handled it for my game was, I think, a simple solution.
You need two things: design dimensions, and device dimensions. Sounds like you coded it for 800 * 480 so those are your design dimensions.
So you need to get the actual device width and height first. I do this in App.xaml.cs because, if you want to reuse your code for another platform (we did), you will want to use that platform's specific way of getting the screen dimensions.
So in App.xaml.cs you can do this:
private void Application_Launching(object sender, LaunchingEventArgs e)
{
    // Get device viewport.
    double scaleFactor = (double)App.Current.Host.Content.ScaleFactor / 100f;
    double width = App.Current.Host.Content.ActualWidth * scaleFactor;
    double height = App.Current.Host.Content.ActualHeight * scaleFactor;
    ResolutionSystem.SetResolutionAndScale((float)width, (float)height);
}
What you are doing here is getting the device's scale factor (with 1.0 being 800*480 for WP8). You then take ActualWidth and ActualHeight (which are always 800 and 480) and multiply them by the scale factor. So there you have your device dimensions, and you're not wiring into the game logic itself (unless your logic needs to account for a possible mid-game landscape-to-portrait switch).
So then what we do is set the game's scaling Matrix. There's a lot of other stuff I had to do with scaling and render targets due to shaders and UI, but I'll keep it simple. Essentially, you now want a scaling matrix for the SpriteBatch that you run the gameplay on. This means you do everything in your desired design viewport bounds and then scale it to the device size after the fact (you may want to consider redoing your assets at a higher resolution like WXGA or even 1080p if you can).
So to get our scale matrix in our ResolutionSystem.cs class we call:
public static void SetResolutionAndScale(float deviceWidth, float deviceHeight)
{
    // DesignWidth and DesignHeight are the design-time dimensions (800 and 480 here)
    ScaleMatrix = Matrix.CreateScale(deviceWidth / DesignWidth,
                                     deviceHeight / DesignHeight,
                                     1f);
}
This function for us does a lot more but I trimmed it all out. So you should now have a scale matrix for you to store however you'd like.
Now, in your draw call (which can be set up in numerous ways), you simply pass that scale matrix as the transformMatrix parameter of the SpriteBatch.Begin call.
So something like this:
spriteBatch.Begin(SpriteSortMode.FrontToBack, null, null, null, null, null, ResolutionSystem.ScaleMatrix);
And you should now have something that works. You can check out my game SkyFehl in the Windows Phone Store and test it out; it was built with design dimensions of 1280*768. Porting over to iOS was relatively easy using this method (I won't link it in case that's frowned upon). Hopefully I made some form of sense.
For making resolution-independent games, here is a blog post that gives a step-by-step procedure:
http://www.craftworkgames.com/blog/monogame-code-snippets/monogame-resolution-independence/

Why would I want to use unit scale? (Libgdx)

I have looked into the Super Koalio example in the Libgdx GitHub repo. It is basically a test for integrating Tiled maps with Libgdx. They are using the unit scale 1/16, and if I have understood it correctly, it means that the world is no longer based on a grid of pixels but on a grid of units (each 16 pixels wide). This is the code and comment in the example:
// load the map, set the unit scale to 1/16 (1 unit == 16 pixels)
map = new TmxMapLoader().load("data/maps/tiled/super-koalio/level1.tmx");
renderer = new OrthogonalTiledMapRenderer(map, 1 / 16f);
I am basically wondering why you would want to do that. I only got problems doing it and can't see any obvious advantages.
For example, one problem I had was adding a BitmapFont. It didn't scale at all with the background, and one pixel in the font occupied an entire unit. Image here.
I'm using this code for drawing the font. It's a standard 14-point Arial font included in libgdx:
BitmapFont font = new BitmapFont();
font.setColor(Color.YELLOW);

public void draw(){
    spriteBatch.begin();
    font.draw(spriteBatch, "Score: " + thescore, camera.position.x, 10f);
    spriteBatch.end();
}
I assume there is a handy reason to use a 1/16 scale for tiled maps, perhaps for doing computations on which tile is being hit, or for changing tiles (they're at handy whole-number indices).
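For instance, with a 1/16 unit scale one world unit equals one tile, so a world position maps straight to tile indices (a sketch; the player object and layer index are hypothetical):
int tileX = (int) player.getX(); // world units == tile columns
int tileY = (int) player.getY(); // world units == tile rows
TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(0);
TiledMapTileLayer.Cell cell = layer.getCell(tileX, tileY); // the tile under the player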
Anyway, regardless of what transformation (and thus what "camera" and thus what projection matrix) is used for rendering your tiles, you can use a different camera for your UI.
Look at the Superjumper demo and see how it uses a separate "guiCam" to render the "GUI" elements (pause button, game-over text, etc.). The WorldRenderer has its own camera that uses world-space coordinates to update and display the world.
This way you can use the appropriate coordinates for each aspect of your display.
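A minimal sketch of that two-camera setup, reusing the map renderer and font from above (viewport sizes are illustrative):
OrthographicCamera worldCam = new OrthographicCamera();
worldCam.setToOrtho(false, 30, 20); // viewport measured in world units (tiles)
OrthographicCamera guiCam = new OrthographicCamera();
guiCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); // measured in pixels

// world pass: tiles rendered in world units
worldCam.update();
renderer.setView(worldCam);
renderer.render();

// UI pass: text rendered in screen pixels, unaffected by the 1/16 unit scale
guiCam.update();
spriteBatch.setProjectionMatrix(guiCam.combined);
spriteBatch.begin();
font.draw(spriteBatch, "Score: " + thescore, 10f, 20f);
spriteBatch.end();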

AS3 air multiple resolutions, assets and layout iphone and ipad app

I am building an app in AS3/Air and I would like to target both iPhone and iPad resolutions. I understand the different aspect ratios between iPhone and iPad however the app I am building currently has different layout and slightly different content to fit the different screen sizes. I currently have 2 versions of the app already built, one for iPhone the other for iPad. All assets have been created with the target platform in mind but now, I would like to combine the 2 apps into a single one.
I am thinking I will rename each screen file to iphone_login, ipad_menu, ipad_settings, etc. and include them all in the same build. Then during startup, check what device the user is on, set the iphone_ or ipad_ prefix, and also set the resolution at that time.
I'd prefer not to have black edges when going from iPhone resolution to iPad, so my questions are:
Is my approach a suitable one considering the outcome I would like?
How do I determine what device a user is on to show the correct files, assets and resolution?
I understand the app size will at least double by adding 2 sets of assets and 2 sets of code files, but considering the differences in design layout and content I don't see another solution, apart from keeping 2 apps.
Thanks :)
What's the problem? iPad and iPhone have different resolution and DPI combinations; check them to identify the current platform.
Get the view you need with a class like this:
import flash.utils.Dictionary;

public class ViewProvider
{
    public static const PAGE1:String = "page1";
    public static const PAGE2:String = "page2";

    private static var PHONE_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();
    private static var TABLET_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();

    // static initializer: maps view names to the phone/tablet view classes
    {
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1PhoneView;
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2PhoneView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1TabletView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2TabletView;
    }

    public function ViewProvider()
    {
    }

    public static function isTablet():Boolean {
        // ...analyze Capabilities.screenResolutionY, Capabilities.screenResolutionX and Capabilities.screenDPI
    }

    public static function getViewClass(name:String):Class
    {
        return isTablet() ? TABLET_VIEW_NAME_2_CLASS[name] : PHONE_VIEW_NAME_2_CLASS[name];
    }
}
And in your program:
navigator.pushView(ViewProvider.getViewClass(ViewProvider.PAGE1));
All coordinates, paddings and other position values, font sizes, etc. can be corrected with a multiplier depending on the runtime DPI in a similar way...
I'm in the middle of a similar problem.
My solution is to have the images at the highest needed resolution in a file pool, and then downscale them depending on the device when the app starts. You can also do this with non-animated vector assets and draw them into a BitmapData object.
Another option is to always keep the asset pool, with files at the maximum resolution needed, loaded in memory, and downscale the assets at runtime when they are needed. This works well if you will be using some asset in different places at different sizes, for example an icon.
As for the code, you should find a way to separate the code that manages data, the code that manages the logic, and the code that "paints" the UI. This way you will be able to reuse most of the code in both versions and only change the code that "paints" the UI. Check the MVC pattern for more info.

AS3 visible bounds of display object are offset inconsistently

I am using this function, adapted from Plastic Sturgeon (http://plasticsturgeon.com/2010/09/as3-get-visible-bounds-of-transparent-display-object/), to get the visible bounds of a display object.
public static function getVisibleBounds(source:DisplayObject):Rectangle
{
    var matrix:Matrix = source.transform.concatenatedMatrix;
    var data:BitmapData = new BitmapData(1000, 1000, true, 0x00000000);
    data.draw(source, matrix);
    var bounds:Rectangle = data.getColorBoundsRect(0xFFFFFFFF, 0x000000, false);
    data.dispose();
    return bounds;
}
However, the bounds are offset from the object, depending on the stage size. It works perfectly for the default stage size (550px×400px), but when either dimension is increased, the bounds move in the direction opposite to that dimension (when the width is increased, they are offset from the object leftward, and when the height is increased, they are offset from the object downward).
It doesn't do this consistently. The offset is non-linear in the stage dimension: it is 0 for a certain range of stage dimensions, then for stage dimensions greater than that range it quickly rises with the stage dimension. The offset is also different depending on what I changed the stage dimension from; e.g. if I go from 400px to 1000px in stages, testing the movie in between, the boundaries are offset differently than if I go from 400px to 1000px all at once, or without testing the movie at intermediate stages. Sometimes the offset only changes with one dimension, and the other dimension doesn't do anything. Also, the published file behaves differently from the test.
I tried putting the function in the same file as the display object, instead of in an external file, but that's still unreliable. I wonder if there's some fix that could reliably give me the actual visible boundaries of the display object, regardless of the stage size and all this other stuff.
My computer runs Windows Vista Home Premium 32-bit, and I am using Adobe Flash Professional CS5.5.
This may be an issue that can be solved by setting some stage properties. First try setting the stage not to scale:
this.stage.scaleMode = "noScale";
Then set some alignment rules:
this.stage.align = "TL";
If that helps, it may be that your bitmap copying was running into some issues with scaling bugs.