I'm developing a game using the MonoGame framework.
The game code itself was converted from XNA code I had written previously.
I developed the game assets (background, etc.) for an 800*480 resolution. The problem appears when I try to run it at a different device resolution (a 720p emulator).
How do I actually resize it?
I already added this code, but nothing happens:
public Game1()
{
_graphics = new GraphicsDeviceManager(this);
this.Window.ClientSizeChanged += Window_ClientSizeChanged;
}
void Window_ClientSizeChanged(object sender, EventArgs e)
{
int currentWidth = this.Window.ClientBounds.Width;
int currentHeight = this.Window.ClientBounds.Height;
}
That code only returns 0 for both currentWidth and currentHeight.
I also followed this code:
this.Window.ClientSizeChanged += new EventHandler<EventArgs>(Window_ClientSizeChanged);
void Window_ClientSizeChanged(object sender, EventArgs e)
{
graphics.PreferredBackBufferWidth = Window.ClientBounds.Width;
graphics.PreferredBackBufferHeight = Window.ClientBounds.Height;
graphics.ApplyChanges();
}
But it has a similar result; nothing happens on resize.
Please point me to a way to fix this, because my game only runs correctly at 480*800 resolution. At any other resolution it shows empty space instead.
Thanks in advance.
How I handled it for my game was with something I think is a simple solution.
You need two things: design dimensions and device dimensions. It sounds like you coded it for 800 * 480, so those are your design dimensions.
So you need to get the actual device width and height first. I do this in App.xaml.cs, because if you want to reuse your code on another platform (we did) you will want to use that platform's specific way of getting the screen dimensions.
So in App.xaml.cs you can do this:
private void Application_Launching(object sender, LaunchingEventArgs e)
{
// Get device viewport.
double scaleFactor = (double)App.Current.Host.Content.ScaleFactor / 100f;
double width = App.Current.Host.Content.ActualWidth * scaleFactor;
double height = App.Current.Host.Content.ActualHeight * scaleFactor;
ResolutionSystem.SetResolutionAndScale((float)width, (float)height);
}
Now what you are doing is getting the device's scale factor (with 1.0 being 800 * 480 for WP8). You then get the ActualWidth and ActualHeight (which are always 800 and 480) and multiply by the scale factor. So there you have your device dimensions, and you're not wiring into the game logic itself (unless your logic needs to account for the possibility of a mid-game landscape-to-portrait switch).
So then what we do is set the game's scaling matrix. There's a lot of other stuff I had to do with scaling and render targets because of shaders and UI, but I'll keep it simple. Essentially you now want a scaling matrix for the sprite batch that you run the gameplay on. This means you do everything within your design viewport bounds and then scale it to the device size after the fact (you may want to consider redoing your assets for a higher resolution like WXGA or even 1080p if you can).
So to get our scale matrix in our ResolutionSystem.cs class we call:
public static void SetResolutionAndScale(float deviceWidth, float deviceHeight)
{
    // Store the device dimensions that the properties below refer to.
    DeviceWidth = deviceWidth;
    DeviceHeight = deviceHeight;

    ScaleMatrix = Matrix.CreateScale(DeviceWidth / DesignWidth,
                                     DeviceHeight / DesignHeight,
                                     1f);
}
This function does a lot more for us, but I trimmed all that out. So you should now have a scale matrix to store however you'd like.
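For reference, here is a minimal sketch of the class members that snippet assumes (the member names come from the snippet itself; the 800 * 480 design values are my assumption for the asker's case):
// using Microsoft.Xna.Framework;
public static class ResolutionSystem
{
    // The resolution the assets were authored for.
    public const float DesignWidth = 800f;
    public const float DesignHeight = 480f;

    // Actual device resolution, filled in by SetResolutionAndScale.
    public static float DeviceWidth { get; private set; }
    public static float DeviceHeight { get; private set; }

    // Scale matrix passed to SpriteBatch.Begin.
    public static Matrix ScaleMatrix { get; private set; }
}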
Now in your draw call, which can be set up in numerous ways, you simply include that scale matrix as the transformMatrix parameter in the SpriteBatch.Begin call.
So something like this:
spriteBatch.Begin(SpriteSortMode.FrontToBack, null, null, null, null, null, ResolutionSystem.ScaleMatrix);
And you should now have something that works. You can check out my game SkyFehl in the Windows Phone Store and test it out; it was built with design dimensions of 1280 * 768. Porting over to iOS was relatively easy using this method (I won't link it in case that's frowned upon). Hopefully I made some form of sense.
For making resolution-independent games, here is a blog post that gives you a step-by-step procedure to achieve it:
http://www.craftworkgames.com/blog/monogame-code-snippets/monogame-resolution-independence/
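If the link ever goes stale, a common variant of the same idea is to draw the whole frame to a render target at your design resolution and then stretch that to the real back buffer. A minimal sketch (my own, not the blog's code; it assumes the usual Game1 fields and simply stretches without preserving aspect ratio):
RenderTarget2D _sceneTarget;

protected override void LoadContent()
{
    _spriteBatch = new SpriteBatch(GraphicsDevice);
    _sceneTarget = new RenderTarget2D(GraphicsDevice, 800, 480);  // design resolution
}

protected override void Draw(GameTime gameTime)
{
    // 1. Render the game as if the screen were 800*480.
    GraphicsDevice.SetRenderTarget(_sceneTarget);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    _spriteBatch.Begin();
    // ... draw everything using 800*480 coordinates ...
    _spriteBatch.End();

    // 2. Stretch the result over whatever back buffer the device actually has.
    GraphicsDevice.SetRenderTarget(null);
    _spriteBatch.Begin();
    _spriteBatch.Draw(_sceneTarget,
        new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height),
        Color.White);
    _spriteBatch.End();

    base.Draw(gameTime);
}
To keep the 800:480 aspect ratio instead of stretching, compute a letterboxed destination rectangle in step 2.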
Related
Under certain conditions, picking a resolution with Camera.setMode() adds black bars to the camera input, "letterboxing" it. I understand that setMode() uses some kind of hidden algorithm that picks a resolution from one of your camera's available resolutions and then crops it to fit your desired dimensions, but apparently sometimes it would rather add black bars than crop it.
This behavior is dependent on what camera I'm using. Some cameras seem to always crop and never letterbox. This may be related to what available resolutions they have. But what's really strange is that the letterboxing only ever happens when I try it in a Flash Player ActiveX control, like in Internet Explorer. It doesn't happen when I try the exact same SWF in Flash Player Projector or Google Chrome. This seems to imply that different Flash Player versions use a different algorithm to select and fit a resolution to the desired dimensions.
Here's a very simple example of code that's been creating this problem for me. In this case I'm providing a 4x3 resolution to setMode(), which means it must be selecting a 16x9 resolution even though 640x480 is one of the camera's available resolutions.
package
{
    import flash.display.Sprite;
    import flash.media.Camera;
    import flash.media.Video;

    public class Flashcam extends Sprite
    {
        private var _camera:Camera = Camera.getCamera("0");
        public var _video:Video;
        private var _width:int = 640;
        private var _height:int = 480;

        public function Flashcam()
        {
            _camera.setMode(_width, _height, 15);
            _video = new Video(_camera.width, _camera.height);
            addChild(_video);
            _video.attachCamera(_camera);
        }
    }
}
Is there any way to stop Camera from letterboxing its input? If not, is there some way to tell whether or not it's being letterboxed and which camera resolution has been automatically selected so that I can write my own code to account for it?
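For the detection part of the question, one thing that may help (a sketch, not a guaranteed fix): after setMode() the Camera's width and height properties report the capture resolution that was actually selected, so you can at least compare aspect ratios and compensate in your own code:
_camera.setMode(_width, _height, 15);

var requestedRatio:Number = _width / _height;               // 4:3
var actualRatio:Number = _camera.width / _camera.height;    // whatever the driver picked

if (Math.abs(actualRatio - requestedRatio) > 0.01)
{
    // The camera chose a different aspect ratio (e.g. 16:9), so the feed will be
    // letterboxed or cropped; resize/crop the Video object yourself here.
    trace("Camera selected " + _camera.width + "x" + _camera.height);
}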
I am trying to set up my new project to support multiple resolutions.
As explained here, all you need is a folder for each resolution and to select one depending on the frame size (adjusting your content scale according to your design resolution).
As I understand it, I could work at a low resolution (480x320) and then add higher-resolution assets for the other cases, and everything should work (using my design resolution as the base for everything).
I am working on Windows, and by default it uses a frame size of 960x640, which makes my app use a folder with a higher resolution than my design resolution (480x320).
How can I change my frame size (outside of my app delegate, so I don't affect behaviour on other platforms) and scale it on Windows?
I think it is a good idea to work with a frame size equal to your design size, but if it is better to keep the default frame size or use another design resolution, I would like to hear some reasons for that.
After posting this on the cocos2d-x site I got the answer.
All you need to do is set a GLView for your director in main, so that when your AppDelegate gets it, it is already using the proper resolution and zoom factor.
Your main.cpp should end up looking like:
#include "main.h"
#include "AppDelegate.h"
#include "cocos2d.h"
USING_NS_CC;
int APIENTRY _tWinMain(HINSTANCE hInstance,
HINSTANCE hPrevInstance,
LPTSTR lpCmdLine,
int nCmdShow)
{
UNREFERENCED_PARAMETER(hPrevInstance);
UNREFERENCED_PARAMETER(lpCmdLine);
// create the application instance
AppDelegate app;
auto director = Director::getInstance();
director->setDisplayStats(true);
auto glview = GLViewImpl::createWithRect("MyApp", Rect(0, 0, 480, 320), 2.0f);
director->setOpenGLView(glview);
return Application::getInstance()->run();
}
That will make your game run on Windows at a 480x320 resolution, scaled up by a factor of two (2.0).
Credit for this goes to mr. IQD :P
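For context, and as an assumption about the rest of the setup (only main.cpp was shown), the design-resolution and resource-folder side that the question mentions normally lives in AppDelegate. A rough cocos2d-x 3.x sketch, where the "sd"/"hd" folder names and the HelloWorld scene are placeholders:
#include "AppDelegate.h"
#include "HelloWorldScene.h"   // placeholder first scene
USING_NS_CC;

bool AppDelegate::applicationDidFinishLaunching()
{
    auto director = Director::getInstance();
    auto glview = director->getOpenGLView();
    if (glview == nullptr)
    {
        // On mobile the GLView is created here; on Windows it already exists,
        // created in main.cpp with the 480x320 rect and 2.0 zoom.
        glview = GLViewImpl::create("MyApp");
        director->setOpenGLView(glview);
    }

    // Everything in the game is laid out against this design resolution.
    glview->setDesignResolutionSize(480, 320, ResolutionPolicy::SHOW_ALL);

    // Pick an asset folder and content scale from the real frame size.
    Size frameSize = glview->getFrameSize();
    if (frameSize.height > 320)
    {
        FileUtils::getInstance()->setSearchPaths({ "hd" });   // assets authored for 960x640
        director->setContentScaleFactor(640.0f / 320.0f);
    }
    else
    {
        FileUtils::getInstance()->setSearchPaths({ "sd" });   // assets authored for 480x320
        director->setContentScaleFactor(1.0f);
    }

    director->runWithScene(HelloWorld::createScene());
    return true;
}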
I have looked into the SuperKoalio example in the Libgdx GitHub repo. It is basically a test for integrating Tiled maps with Libgdx. They use a unit scale of 1/16, and if I have understood it correctly that means the world is no longer based on a grid of pixels but on a grid of units (each 16 pixels wide). This is the code and comment from the example:
// load the map, set the unit scale to 1/16 (1 unit == 16 pixels)
map = new TmxMapLoader().load("data/maps/tiled/super-koalio/level1.tmx");
renderer = new OrthogonalTiledMapRenderer(map, 1 / 16f);
I am basically wondering why you would want to do that. I only got problems doing it and can't see any obvious advantages.
For example, one problem I had was adding a BitMap font. It didn't scale at all with the background and one pixel in the font occupied an entire unit. Image here.
I'm using this code for drawing the font. It's a standard 14-point Arial font included in libgdx:
BitmapFont font = new BitmapFont();
font.setColor(Color.YELLOW);
public void draw(){
spriteBatch.begin();
font.draw(spriteBatch, "Score: " + thescore, camera.position.x, 10f);
spriteBatch.end();
}
I assume there is a handy reason to use a 1/16 scale for tiled maps (perhaps it makes computations about which tile is being hit, or changing tiles, easier, since tiles sit at handy whole-number indices).
Anyway, regardless of what transformation (and thus what "camera" and thus what projection matrix) is used for rendering your tiles, you can use a different camera for your UI.
Look at the Superjumper demo, and see it uses a separate "guiCam" to render the "GUI" elements (pause button, game over text, etc). The WorldRenderer has its own camera that uses world-space coordinates to update and display the world.
This way you can use the appropriate coordinates for each aspect of your display.
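As a rough illustration of that split (a sketch with made-up viewport sizes, not the Superjumper code):
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.BitmapFont;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class DualCameraExample {
    OrthographicCamera worldCam;
    OrthographicCamera guiCam;
    OrthogonalTiledMapRenderer renderer;   // created elsewhere with unit scale 1/16f
    SpriteBatch uiBatch;
    BitmapFont font;
    int thescore;

    public void create() {
        worldCam = new OrthographicCamera();
        worldCam.setToOrtho(false, 30, 20);   // viewport measured in world units (tiles)

        guiCam = new OrthographicCamera();
        guiCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());  // pixels

        uiBatch = new SpriteBatch();
        font = new BitmapFont();   // default libgdx font
    }

    public void render() {
        // World pass: tiles and sprites drawn in unit coordinates.
        worldCam.update();
        renderer.setView(worldCam);
        renderer.render();

        // UI pass: the font is positioned in plain pixels, unaffected by the 1/16 scale.
        guiCam.update();
        uiBatch.setProjectionMatrix(guiCam.combined);
        uiBatch.begin();
        font.draw(uiBatch, "Score: " + thescore, 10, 20);
        uiBatch.end();
    }
}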
I am building an app in AS3/Air and I would like to target both iPhone and iPad resolutions. I understand the different aspect ratios between iPhone and iPad however the app I am building currently has different layout and slightly different content to fit the different screen sizes. I currently have 2 versions of the app already built, one for iPhone the other for iPad. All assets have been created with the target platform in mind but now, I would like to combine the 2 apps into a single one.
I am thinking I will rename each screen file to iphone_login, ipad_menu, ipad_settings etc. and include them all in the same build. Then during startup, check what device the user is on, set the iphone_ or ipad_ prefix, and also set the resolution at that time.
I would prefer not to have black edges when going from the iPhone resolution to the iPad, so my questions are:
Is my approach a suitable one considering the outcome I would like?
How do I determine what device a user is on to show the correct files, assets and resolution?
I understand the app size will at least double by adding 2 sets of assets and 2 sets of code files, but considering the differences in design layout and content I don't see another solution, apart from keeping 2 apps.
Thanks :)
What's the problem? iPad and iPhone have different resolution and DPI combinations; check them to identify the current platform.
Get the view class you need like this:
import flash.utils.Dictionary;

public class ViewProvider
{
public static const PAGE1:String = "page1";
public static const PAGE2:String = "page2";

private static var PHONE_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();
private static var TABLET_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();

// static initializer: map logical page names to concrete view classes
{
PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1PhoneView;
PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2PhoneView;
TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1TabletView;
TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2TabletView;
}

public function ViewProvider()
{
}
public static function isTablet():Boolean {
// ...analyze Capabilities.screenResolutionY, Capabilities.screenResolutionX and Capabilities.screenDPI
}
public static function getViewClass(name:String):Class
{
return isTablet() ? TABLET_VIEW_NAME_2_CLASS[name] : PHONE_VIEW_NAME_2_CLASS[name];
}
}
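One possible way to fill in isTablet() (my assumption for the part trimmed above; the 6.5" threshold is only illustrative) is to estimate the physical diagonal from the resolution and DPI:
// requires: import flash.system.Capabilities;
public static function isTablet():Boolean
{
    // physical size in inches = pixels / dpi
    var w:Number = Capabilities.screenResolutionX / Capabilities.screenDPI;
    var h:Number = Capabilities.screenResolutionY / Capabilities.screenDPI;
    var diagonal:Number = Math.sqrt(w * w + h * h);
    return diagonal >= 6.5;   // illustrative phone/tablet cut-off
}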
And in your program
navigator.pushView(ViewProvider.getViewClass(ViewProvider.PAGE1))
All coordinates, paddings and other position values, font sizes, etc. are corrected with a multiplier depending on the runtime DPI in a similar way...
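A minimal sketch of that multiplier idea (the class, its name and DESIGN_DPI are my own, assuming the layout was authored for a 160 dpi screen):
import flash.system.Capabilities;

public class UiScale
{
    public static const DESIGN_DPI:Number = 160;   // dpi the layout was designed for (assumed)

    // Scale a design-time value (padding, coordinate, font size) to the runtime dpi.
    public static function px(designValue:Number):Number
    {
        return Math.round(designValue * Capabilities.screenDPI / DESIGN_DPI);
    }
}
// usage: myLabel.x = UiScale.px(16);  myTextFormat.size = UiScale.px(14);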
I'm in the middle of a similar problem.
My solution is to have the images at the best resolution in a file pool, and then downscale them depending on the device when the app starts. You can also do this with non-animated vector assets and draw them into a BitmapData object.
Another option is to always have the asset pool with files at the maximum resolution needed loaded in the memory, and downscale them at runtime when they are needed. This works well if you will be using some asset in different places at different sizes, for example an icon.
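Either way, the downscale itself can be as simple as drawing the source into a smaller BitmapData (a sketch; the function and DESIGN_WIDTH are my own names):
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

function downscale(source:BitmapData, scale:Number):BitmapData
{
    var scaled:BitmapData = new BitmapData(
        int(Math.ceil(source.width * scale)),
        int(Math.ceil(source.height * scale)),
        true, 0x00000000);

    var m:Matrix = new Matrix();
    m.scale(scale, scale);
    scaled.draw(source, m, null, null, null, true);  // smoothing enabled
    return scaled;
}

// e.g. var deviceScale:Number = stage.stageWidth / DESIGN_WIDTH;
//      var bg:Bitmap = new Bitmap(downscale(fullResBackground, deviceScale));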
As for the code, you should find a way to separate the code that manages data, the code that manages the logic, and the code that "paints" the UI. This way you will be able to reuse most of the code in both versions and only change the code that "paints" the UI. Check the MVC pattern for more info.
I'm attempting to use the Capabilities class to draw an accurately sized sprite on screen at exactly 2.5" x 5", regardless of the screen's resolution, but while I believe the code is correct, the size of the sprite is not accurate when I actually measure it with a ruler.
function inchesToPixels(inches:Number):uint
{
return Math.round(Capabilities.screenDPI * inches);
}
var mySprite:Sprite = new Sprite();
mySprite.graphics.beginFill(0x000000, 0.5);
mySprite.graphics.drawRect(0, 0, inchesToPixels(2.5), inchesToPixels(5));
mySprite.graphics.endFill();
addChild(mySprite);
I'm not entirely sure about this, but my feeling is that the screenDPI value being returned by the Capabilities class would be the same value for two monitors running the same resolution, even if the monitors had different physical dimensions.
To illustrate, if you have two monitors, one which is 14" and the other which is 28", both displaying the same resolution of 800 x 600 pixels, that screenDPI property will return the same thing because they're both using the same resolution.
However, the number of dots in a literal, real-world inch on each screen will be different because of the physical dimensions of the monitor. So when you're running your code and measuring the on-screen Sprite you create with a ruler, it's not going to match up to real-world inches. I'm not sure how you could get around this problem (if I'm right about what's causing it), it seems like it'd be difficult.
Debu
I suggest, at the start of your app, telling the user "I detected your monitor is XX inches" (where XX is calculated from screenDPI), and allowing the user to type in the correct monitor size.
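A sketch of that correction (the function names are mine): derive an effective DPI from the diagonal the user enters and use it in place of Capabilities.screenDPI:
import flash.system.Capabilities;

function effectiveDPI(userDiagonalInches:Number):Number
{
    var w:Number = Capabilities.screenResolutionX;
    var h:Number = Capabilities.screenResolutionY;
    return Math.sqrt(w * w + h * h) / userDiagonalInches;   // pixels per real-world inch
}

function inchesToPixels(inches:Number, dpi:Number):uint
{
    return Math.round(dpi * inches);
}

// e.g. a 15.6" 1920x1080 monitor gives roughly 141 dpi, so 2.5" is about 353 px.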