libgdx: Setting the fullscreen resolution at runtime causes the application to render at the wrong size

I am only using the desktop application, no mobile.
I am experimenting with letting the user set the screen resolution at run time. I present the available display modes and the user applies one. This part actually works. The problem occurs when I save that mode and try to set it again the next time they launch the game.
I am using Preferences to store the mode the user selected. I am unable to access Preferences before the create() method in my Game class, or in the DesktopLauncher object, where you normally set up the config and pass it into the application. So my DesktopLauncher looks like this:
val config = Lwjgl3ApplicationConfiguration()
config.setFullscreenMode(Lwjgl3ApplicationConfiguration.getDisplayMode())
Lwjgl3Application(MainGame(), config)
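For reference, the whole launcher is just a plain main entry point. A minimal sketch, assuming a standard lwjgl3 desktop project, with imports shown for clarity:
import com.badlogic.gdx.backends.lwjgl3.Lwjgl3Application
import com.badlogic.gdx.backends.lwjgl3.Lwjgl3ApplicationConfiguration

fun main() {
    val config = Lwjgl3ApplicationConfiguration()
    // Start fullscreen at the monitor's current (native) display mode;
    // Preferences are not accessible yet at this point.
    config.setFullscreenMode(Lwjgl3ApplicationConfiguration.getDisplayMode())
    Lwjgl3Application(MainGame(), config)
}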
I use the current screen resolution when creating the application. Then, in the create() method of my MainGame class, I read the mode they set from preferences and apply it like so:
override fun create() {
    var modes = Gdx.graphics.displayModes.toList()
    val mode = Gdx.graphics.displayMode
    val preference: Preferences = Gdx.app.getPreferences("screenPreference")
    val screenWidth = preference.getInteger("width", mode.width)
    val screenHeight = preference.getInteger("height", mode.height)
    val refreshRate = preference.getInteger("refreshRate", mode.refreshRate)
    modes = modes.filter { it.width == screenWidth }
    modes = modes.filter { it.height == screenHeight }
    modes = modes.filter { it.refreshRate == refreshRate }
    if (modes.isNotEmpty()) {
        Gdx.graphics.setFullscreenMode(modes[0])
    }
    // ...
}
To summarize: I get the list of modes, pull from Preferences what was set last, and filter the list according to what was stored. This should leave exactly one item in the list, which I then apply. If for some reason the list is empty I don't set anything, and if no preference has been set the defaults simply re-apply the current mode.
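For what it's worth, the same selection can also be written as a single lookup with an explicit fallback to the current mode. A minimal sketch, assuming the same screenWidth, screenHeight, refreshRate and mode values read above:
val selected = Gdx.graphics.displayModes.firstOrNull {
    it.width == screenWidth && it.height == screenHeight && it.refreshRate == refreshRate
} ?: mode  // fall back to the display mode the application started with
Gdx.graphics.setFullscreenMode(selected)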
This is where the weird stuff happens. I have checked all the numbers when creating my screens and cameras, and they are all correct. I do receive the correct resolution, but the application doesn't render correctly. Below are a couple of examples of what happens.
In the first image you see the bounds of the application relative to the screen. My application only renders in the bottom corner, and the rest is black. To produce this effect I started the application with a smaller resolution than my native resolution, 1280x1024, then in my create() method I set the application's fullscreen mode to 1920x1080 before building the rest of my application. I have checked my cameras and my viewports, and they all have the resolution 1920x1080, but the image does not fill the entire screen.
And a second.
This one is what happens when I reverse the settings. So I start at the native resolution of 1920x1080, and in my create() method I set it to 1280x1024, again before creating the rest of my application. This gives me black bars on both sides of the image like I'd expect, but the application is HUGE, and only a portion of it fits in the window; the rest goes out of bounds, as depicted by the dotted lines.
It will remain like this for the entire run, unless I change the resolution while the application is running, in which case it corrects itself for the rest of the application's life.
I am confounded by this effect and am looking for an answer as to why it happens, or how to fix it.

I found the issue that was causing the image to render incorrectly. I was setting the display mode in the create() function of my main game class. This function is not run on the rendering thread, and you should not use Gdx.graphics from anything other than the rendering thread, as described in the libgdx wiki: https://github.com/libgdx/libgdx/wiki/Threading
There is a function to which you can pass a lambda that will be run on the rendering thread:
Gdx.app.postRunnable {
    Gdx.graphics.setFullscreenMode(modes[0])
}
After passing that into postRunnable the game renders correctly on launch.
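In context, that means the tail end of the create() shown earlier becomes roughly the following (a sketch, using the same filtered modes list as above):
if (modes.isNotEmpty()) {
    Gdx.app.postRunnable {
        // Defer the mode switch so it runs on the rendering thread.
        Gdx.graphics.setFullscreenMode(modes[0])
    }
}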

Related

How to use a constant resolution during WebRTC video transmission?

I am using Janus to build my WebRTC SFU server. I need the Chrome browser to send video at a fixed resolution from the start and keep it unchanged during the transfer. Where should I set this?
I tried setting degradationPreference in the JS code, but it didn't work; the resolution still changes. It seems that Chrome does not support this parameter.
var senderList = config.pc.getSenders();
var sender = config.pc.getSenders().find(function(s) { return s.track.kind == "video"; });
if (sender) {
    var parameters = sender.getParameters();
    parameters.degradationPreference = "maintain-resolution";
    sender.setParameters(parameters);
}
Image1 with varying resolution
Image2 with varying resolution
I looked at frameHeightSend/frameWidthSend in chrome://webrtc-internals, hoping they would keep the same value from the start, but they grow slowly at startup and fluctuate during the subsequent transfer.
I found a post about setting a constant resolution on iOS, applied when the screen is shared; is there a similar setting in Chrome?

How do I keep the AS3 Camera class from adding black bars?

Under certain conditions, picking a resolution with Camera.setMode() adds black bars to the camera input, "letterboxing" it. I understand that setMode() uses some kind of hidden algorithm that picks a resolution from one of your camera's available resolutions and then crops it to fit your desired dimensions, but apparently sometimes it would rather add black bars than crop it.
This behavior is dependent on what camera I'm using. Some cameras seem to always crop and never letterbox. This may be related to what available resolutions they have. But what's really strange is that the letterboxing only ever happens when I try it in a Flash Player ActiveX control, like in Internet Explorer. It doesn't happen when I try the exact same SWF in Flash Player Projector or Google Chrome. This seems to imply that different Flash Player versions use a different algorithm to select and fit a resolution to the desired dimensions.
Here's a very simple example of code that's been creating this problem for me. In this case I'm passing a 4:3 resolution to setMode(), which means it must be selecting a 16:9 resolution even though 640x480 is one of the camera's available resolutions.
import flash.display.Sprite;
import flash.media.Camera;
import flash.media.Video;

public class Flashcam extends Sprite
{
    private var _camera:Camera = Camera.getCamera("0");
    public var _video:Video;
    private var _width:int = 640;
    private var _height:int = 480;

    public function Flashcam()
    {
        _camera.setMode(_width, _height, 15);
        _video = new Video(_camera.width, _camera.height);
        addChild(_video);
        _video.attachCamera(_camera);
    }
}
Is there any way to stop Camera from letterboxing its input? If not, is there some way to tell whether or not it's being letterboxed and which camera resolution has been automatically selected so that I can write my own code to account for it?

How to create a fully responsive HTML5 game using the Phaser library?

I'm using Phaser 2.5.0 (I also tried 2.4.8) and I'm trying to create a fully responsive game with the Phaser library.
Everything is OK when I refresh the page; here is an image:
But when I change the orientation to portrait, the dialog message is not OK:
When I refresh the page in portrait mode, I get the correct dialog:
So the problem occurs when I resize the browser window.
Here is the code for initialising the Phaser game:
var game = new Phaser.Game(window.innerWidth, window.innerHeight, Phaser.AUTO);
I also tried:
var game = new Phaser.Game("100%", "100%", Phaser.AUTO);
Here is the create function, located in main.js:
create: function() {
    console.log('create');
    this.game.antialias = true;
    this.game.stage.smoothed = true;
    this.input.maxPointers = 1;
    this.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL;
    this.scale.minHeight = 0;
    this.game.time.advancedTiming = true;
},
So, to repeat: the problem occurs when I resize the browser window; after a refresh everything renders fine. The game must support resizing.
Here is a simple example which gives almost the same result, OK on refresh but not after a resize: https://jsfiddle.net/CroDac/tv010u0t/
Thank you
There was/is an old, but apparently not fixed, bug in iOS that caused (causes?) this. There are several workarounds on the net; one of them is here:
https://github.com/scottjehl/iOS-Orientationchange-Fix
From the description:
This fix works by listening to the device's accelerometer to predict when an orientation change is about to occur. When it deems an orientation change imminent, the script disables user zooming, allowing the orientation change to occur properly, with zooming disabled. The script restores zoom again once the device is either oriented close to upright, or after its orientation has changed. This way, user zooming is never disabled while the page is in use.
Maybe this can help you....

AS3/AIR multiple resolutions, assets and layout for an iPhone and iPad app

I am building an app in AS3/AIR and I would like to target both iPhone and iPad resolutions. I understand the different aspect ratios between iPhone and iPad; however, the app I am building currently has a different layout and slightly different content to fit the different screen sizes. I currently have 2 versions of the app already built, one for iPhone and the other for iPad. All assets have been created with the target platform in mind, but now I would like to combine the 2 apps into a single one.
I am thinking I will rename each screen file to iphone_login, ipad_menu, ipad_settings etc. and include them all in the same build. Then during startup, check what device the user is on, set the iphone_ or ipad_ prefix, and also set the resolution at that time.
I would prefer not to have black edges when going from the iPhone resolution to the iPad one, so my questions are:
Is my approach a suitable one considering the outcome I would like?
How do I determine what device a user is on to show the correct files, assets and resolution?
I understand the app size will at least double by adding 2 sets of assets and 2 sets of code files, but considering the differences in design, layout and content I don't see another solution, apart from keeping 2 apps.
Thanks :)
What's the problem? iPad and iPhone have different resolution and dpi combinations; check them and identify the current platform.
Get the view class you need like this:
import flash.utils.Dictionary;

public class ViewProvider
{
    public static const PAGE1:String = "page1";
    public static const PAGE2:String = "page2";

    private static var PHONE_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();
    private static var TABLET_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();

    // Static initializer: map each page name to its phone and tablet view classes.
    {
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1PhoneView;
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2PhoneView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1TabletView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2TabletView;
    }

    public function ViewProvider()
    {
    }

    public static function isTablet():Boolean {
        // ...analyze Capabilities.screenResolutionY, Capabilities.screenResolutionX and Capabilities.screenDPI
    }

    public static function getViewClass(name:String):Class
    {
        return isTablet() ? TABLET_VIEW_NAME_2_CLASS[name] : PHONE_VIEW_NAME_2_CLASS[name];
    }
}
And in your program
navigator.pushView(ViewProvider.getViewClass(ViewProvider.PAGE1))
All coordinates, paddings and other position values, font sizes etc. are corrected with a multiplier depending on the runtime dpi in a similar way...
I'm in the middle of a similar problem.
My solution is to have the images at the best resolution in a file pool, and then downscale them depending on the device when the app starts. You can also do this with non-animated vector assets and put them into a BitmapData object.
Another option is to always keep the asset pool, with files at the maximum resolution needed, loaded in memory, and downscale assets at runtime when they are needed. This works well if you will be using an asset in different places at different sizes, for example an icon.
As for the code, you should find a way to separate the code that manages data, the code that manages the logic, and the code that "paints" the UI. This way you will be able to reuse most of the code in both versions and only change the code that "paints" the UI. Check the MVC pattern for more info.

supportedInterfaceOrientations method not working on iOS 6 UIViewController

In iOS 5 my application used this method to control its orientation:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    return (interfaceOrientation == UIInterfaceOrientationLandscapeRight);
}
In iOS 6 I think I'm supposed to do this instead, but it does nothing! My app does not rotate the way I want it to.
- (BOOL)shouldAutorotate
{
    return YES;
}

- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskLandscapeRight;
}
The issue was how I was adding my viewController.
I replaced this line of code:
[window addSubview:viewController.view];
with this line:
[window setRootViewController:viewController];
When the device changes orientation in iOS 6, the system asks the application which orientations it supports, and the application returns the set of orientations it accepts.
How does the application determine its orientations?
First, the application checks its Info.plist orientations (this is very important for determining which orientation to use at launch).
Secondly, it asks its application delegate for its orientations, which can be specified via the -(NSUInteger)application:supportedInterfaceOrientationsForWindow: method. This effectively overrides the Info.plist setting; implementing it is optional. By default, iPad apps rotate to all orientations and iPhone apps to all but upside down.
Lastly, the application delegate queries its topmost view controller, which can be a UINavigationController, a UIViewController, etc.; that controller specifies how it wants to be presented and whether it wants to autorotate. These UIViewControllers can use the shouldAutorotate and supportedInterfaceOrientations methods to tell the app delegate how to present them.
You must make sure you set the root view controller of your window.
Also, if you are presenting any other view controller full screen, such as a modal view controller, that one is responsible for determining whether the orientation changes or not.
The solution worked for me too; my case was a bit different because I had a UINavigationController.
My case was that I needed every view to be portrait except one. I had to enable both landscape orientations as well as portrait in the target settings (otherwise it crashes on the only landscape view).
So in this case:
create a subclass of UINavigationController,
insert the rotation-specific stuff there,
and use the subclass instead of UINavigationController.
If you are using a UINavigationController, the system sends the supportedInterfaceOrientations message to the UINavigationController itself rather than to the topmost view controller (i.e. the one you would actually want it to ask).
Instead of having to subclass UINavigationController to fix it, you can simply add a category to UINavigationController.
I created a new variable in my singleton, canRotate, and I set it to YES when I want the view controller to support additional orientations.
In the app delegate I added this:
- (NSUInteger)application:(UIApplication *)application supportedInterfaceOrientationsForWindow:(UIWindow *)window
{
    if ([[SharedData sharedInstance] canRotate])
    {
        return (UIInterfaceOrientationMaskPortrait | UIInterfaceOrientationMaskLandscapeLeft | UIInterfaceOrientationMaskLandscapeRight);
    }
    return UIInterfaceOrientationMaskPortrait;
}
In my case it works. I have a custom tab bar, and the UINavigationControllers are added as subviews of the rootViewController.
I know this sounds pretty elementary, but I was racking my brain trying to figure out orientation issues while testing on my iPhone: I had the physical orientation lock set to portrait, so nothing I changed programmatically mattered. I thought this should be troubleshooting step number 1!