I'm working on a universal app, and I need one page within the app to use landscape orientation (for taking a photo), while the rest of the app is primarily portrait.
How do I tell the OS and the designer this? The SupportedOrientation and Orientation properties from PhoneApplicationPage in the Silverlight toolkit don't appear to exist on Page.
If you want a different orientation on each Page, you can do it like this: set the preferred orientation in each Page's constructor.
public MainPage()
{
    this.InitializeComponent();

    // Lock the main page to portrait
    DisplayInformation.AutoRotationPreferences = DisplayOrientations.Portrait;
}

public Page1()
{
    this.InitializeComponent();

    // This page prefers landscape instead
    DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;
}
Here is a simple example with three Pages, each with a different orientation.
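One thing to keep in mind: AutoRotationPreferences is a static, app-wide setting, so a page that changes it should put it back when the user navigates away. A minimal sketch for the photo page (the CameraPage name and the portrait default are my assumptions, not part of the original code):

using Windows.Graphics.Display;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Navigation;

public sealed partial class CameraPage : Page
{
    public CameraPage()
    {
        this.InitializeComponent();

        // This page prefers landscape while it is on screen
        DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;
    }

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);

        // Restore the app-wide preference when leaving (assumption: the rest
        // of the app wants portrait, as in MainPage above)
        DisplayInformation.AutoRotationPreferences = DisplayOrientations.Portrait;
    }
}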
I am only targeting the desktop application, not mobile.
I am experimenting with letting the user set the screen resolution at run time. I present the available display modes and the user applies one. That part works. The problem occurs when I save this mode and try to set it the next time the game is launched.
I am using Preferences to store the mode the user selected. I am unable to access Preferences before the create method in my Game class, or in the DesktopLauncher object, where you normally set up the config and pass it into the application. So my DesktopLauncher looks like this:
val config = Lwjgl3ApplicationConfiguration()

// Launch fullscreen at the monitor's current resolution
config.setFullscreenMode(Lwjgl3ApplicationConfiguration.getDisplayMode())

Lwjgl3Application(MainGame(), config)
I use the current screen resolution when creating the application. Then in the create method of my MainGame class I read the mode they set from Preferences and apply it like so...
override fun create() {
    var modes = Gdx.graphics.displayModes.toList()
    val mode = Gdx.graphics.displayMode

    // Fall back to the current mode's values if nothing has been saved yet
    val preference: Preferences = Gdx.app.getPreferences("screenPreference")
    val screenWidth = preference.getInteger("width", mode.width)
    val screenHeight = preference.getInteger("height", mode.height)
    val refreshRate = preference.getInteger("refreshRate", mode.refreshRate)

    // Narrow the list down to the single mode matching the saved values
    modes = modes.filter { it.width == screenWidth }
    modes = modes.filter { it.height == screenHeight }
    modes = modes.filter { it.refreshRate == refreshRate }

    if (modes.isNotEmpty()) {
        Gdx.graphics.setFullscreenMode(modes[0])
    }
    ....
}
To summarize: I get the list of modes, I pull from Preferences what was set last, and I filter the list according to what was stored. This should leave one item in the list, which I apply. If for some reason the list is empty, or no preference has been set, I just apply the current mode again.
This is where the weird stuff happens. I have checked all the numbers when creating my screens and cameras, and they are all correct. I do receive the correct resolution, but the application doesn't render correctly. Below are a couple of examples of what happens.
In the first image you can see the bounds of the application relative to the screen. My application only renders in the bottom corner, and the rest is black. To get this effect, I started the application at a smaller resolution than my native one, 1280x1024, then in my create method I set the fullscreen mode to 1920x1080 before building the rest of my application. I have checked my cameras and my viewports, and they all have the resolution 1920x1080, but the image does not fill the entire screen.
And a second example.
This one is what happens when I reverse the settings: I start at the native resolution of 1920x1080, and in my create method I set it to 1280x1024, again before creating the rest of my application. This gives me black bars on both sides of the image like I'd expect, but the application is HUGE, and only a portion of it fits in the window; the rest goes out of bounds, as depicted by the dotted lines.
It will remain like this the entire time, unless I change the resolution while the application is running, in which case it corrects itself for the rest of the application's life.
I am confounded by this effect and am looking for an answer as to why it happens, or how to fix it.
I found the issue that was causing the image to render incorrectly. I was setting the display mode in the create() function of my main game class. This function is not run on the rendering thread, and you do not want to use Gdx.graphics anywhere other than the rendering thread, as described in the libGDX wiki: https://github.com/libgdx/libgdx/wiki/Threading
There is a function you can pass a lambda to that will be run on the rendering thread:
Gdx.app.postRunnable {
    // Apply the mode that survived the filtering above
    Gdx.graphics.setFullscreenMode(modes[0])
}
After passing that into postRunnable the game renders correctly on launch.
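For completeness, here is a sketch of create() with the fix folded in, using the same preference keys as above (the fallback to the current mode is carried over from the earlier snippet):

override fun create() {
    val mode = Gdx.graphics.displayMode
    val preference: Preferences = Gdx.app.getPreferences("screenPreference")

    // Fall back to the current mode's values if nothing has been saved yet
    val screenWidth = preference.getInteger("width", mode.width)
    val screenHeight = preference.getInteger("height", mode.height)
    val refreshRate = preference.getInteger("refreshRate", mode.refreshRate)

    val saved = Gdx.graphics.displayModes.firstOrNull {
        it.width == screenWidth && it.height == screenHeight && it.refreshRate == refreshRate
    }

    if (saved != null) {
        // Defer the mode switch to the rendering thread
        Gdx.app.postRunnable {
            Gdx.graphics.setFullscreenMode(saved)
        }
    }

    // ... build the rest of the game here
}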
Under certain conditions, picking a resolution with Camera.setMode() adds black bars to the camera input, "letterboxing" it. I understand that setMode() uses some kind of hidden algorithm that picks a resolution from one of your camera's available resolutions and then crops it to fit your desired dimensions, but apparently sometimes it would rather add black bars than crop it.
This behavior is dependent on what camera I'm using. Some cameras seem to always crop and never letterbox. This may be related to what available resolutions they have. But what's really strange is that the letterboxing only ever happens when I try it in a Flash Player ActiveX control, like in Internet Explorer. It doesn't happen when I try the exact same SWF in Flash Player Projector or Google Chrome. This seems to imply that different Flash Player versions use a different algorithm to select and fit a resolution to the desired dimensions.
Here's a very simple example of code that's been creating this problem for me. In this case I'm providing a 4x3 resolution to setMode(), which means it must be selecting a 16x9 resolution even though 640x480 is one of the camera's available resolutions.
public class Flashcam extends Sprite
{
    private var _camera:Camera = Camera.getCamera("0");
    public var _video:Video;

    private var _width:int = 640;
    private var _height:int = 480;

    public function Flashcam()
    {
        // Ask for 640x480 at 15 fps; the player picks a native mode to fit it
        _camera.setMode(_width, _height, 15);

        _video = new Video(_camera.width, _camera.height);
        addChild(_video);
        _video.attachCamera(_camera);
    }
}
Is there any way to stop Camera from letterboxing its input? If not, is there some way to tell whether or not it's being letterboxed and which camera resolution has been automatically selected so that I can write my own code to account for it?
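(For reference, one heuristic I can think of, not a guaranteed fix: after setMode() returns, the camera's width, height and fps properties report the values that were actually assigned, and you can probe a captured frame for black rows to guess whether letterboxing happened. The probe positions and the pure-black test below are assumptions; a genuinely dark scene would give a false positive.)

import flash.display.BitmapData;
import flash.media.Camera;
import flash.media.Video;

// Sketch: guess whether the feed is letterboxed by probing a frame
function looksLetterboxed(video:Video, cam:Camera):Boolean
{
    // setMode() may not give you what you asked for; these report the
    // values that were actually assigned
    trace("assigned mode: " + cam.width + "x" + cam.height + " @ " + cam.fps);

    var snap:BitmapData = new BitmapData(video.width, video.height, false, 0);
    snap.draw(video); // grab the current frame from the Video object

    // Probe one pixel near the top edge and one near the bottom edge
    var top:uint = snap.getPixel(snap.width / 2, 2);
    var bottom:uint = snap.getPixel(snap.width / 2, snap.height - 3);
    snap.dispose();

    return top == 0x000000 && bottom == 0x000000;
}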
I am building an app in AS3/AIR and I would like to target both iPhone and iPad resolutions. I understand the different aspect ratios between iPhone and iPad; however, the app I am building currently has a different layout and slightly different content to fit the different screen sizes. I currently have 2 versions of the app already built, one for iPhone, the other for iPad. All assets have been created with the target platform in mind, but now I would like to combine the 2 apps into a single one.
I am thinking I will rename each screen file to iphone_login, ipad_menu, ipad_settings, etc. and include them all in the same build. Then during startup, check what device the user is on, set the iphone_ or ipad_ prefix, and also set the resolution at this time.
I prefer not to have black edges going from iPhone resolution to iPad, so my questions are:
Is my approach a suitable one considering the outcome I would like?
How do I determine what device a user is on to show the correct files, assets and resolution?
I understand the app size will at least double by adding 2 sets of assets and 2 sets of code files, but considering the differences in design, layout and content, I don't see another solution, apart from keeping 2 apps.
Thanks :)
What's the problem? iPad and iPhone have different resolution and DPI combinations; check them to identify the current platform.
Then get the view class you need like this:
import flash.system.Capabilities;
import flash.utils.Dictionary;

public class ViewProvider
{
    public static const PAGE1:String = "page1";
    public static const PAGE2:String = "page2";

    private static var PHONE_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();
    private static var TABLET_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();

    // Static initializer: register the phone and tablet view class per page
    {
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1PhoneView;
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2PhoneView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1TabletView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2TabletView;
    }

    public function ViewProvider()
    {
    }

    public static function isTablet():Boolean
    {
        // Analyze Capabilities.screenResolutionX, Capabilities.screenResolutionY
        // and Capabilities.screenDPI; for example, treat a physical diagonal of
        // 6.5" or more as a tablet (the threshold is an assumption, tune it)
        var w:Number = Capabilities.screenResolutionX / Capabilities.screenDPI;
        var h:Number = Capabilities.screenResolutionY / Capabilities.screenDPI;
        return Math.sqrt(w * w + h * h) >= 6.5;
    }

    public static function getViewClass(name:String):Class
    {
        return isTablet() ? TABLET_VIEW_NAME_2_CLASS[name] : PHONE_VIEW_NAME_2_CLASS[name];
    }
}
And in your program
navigator.pushView(ViewProvider.getViewClass(ViewProvider.PAGE1))
All coordinates, paddings and other position values, font sizes, etc. can be corrected with a multiplier depending on the runtime DPI in a similar way...
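For example, a sketch of such a DPI multiplier (the 160 DPI baseline is an assumption, pick whatever DPI your layouts were designed for; button and label are stand-ins for your own display objects):

import flash.system.Capabilities;

// Scale factor relative to a 160 DPI baseline design
var dpiScale:Number = Capabilities.screenDPI / 160;

button.width = 96 * dpiScale;
label.x = 10 * dpiScale;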
I'm in the middle of a similar problem.
My solution is to have the images at the best resolution in a file pool, and then downscale them depending on the device when the app starts. You can also do this with non-animated vector assets and draw them into a BitmapData object (see the sketch after the next paragraph).
Another option is to always keep the asset pool, with files at the maximum resolution needed, loaded in memory, and downscale assets at runtime when they are needed. This works well if you will be using some asset in different places at different sizes, for example an icon.
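A sketch of the rasterize-and-downscale idea (LogoAsset is a hypothetical vector symbol exported from the library; the scale factor would come from your device check):

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

// Rasterize a vector asset once, at the size this device needs
var source:LogoAsset = new LogoAsset();
var scale:Number = 0.5; // e.g. iPhone build of iPad-sized artwork

var bmp:BitmapData = new BitmapData(
    Math.ceil(source.width * scale),
    Math.ceil(source.height * scale),
    true, 0x00000000);

var m:Matrix = new Matrix();
m.scale(scale, scale);
bmp.draw(source, m, null, null, null, true); // smoothing on

addChild(new Bitmap(bmp, "auto", true));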
As for the code, you should find a way to separate the code that manages data, the code that manages the logic, and the code that "paints" the UI. This way you will be able to reuse most of the code in both versions and only change the code that "paints" the UI. Check the MVC pattern for more info.
In iOS 5, my application used this method to control its orientation:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    return (interfaceOrientation == UIInterfaceOrientationLandscapeRight);
}
In iOS 6 I think I'm supposed to do this instead, but it does nothing! My app does not rotate the way I want it to.
- (BOOL)shouldAutorotate
{
    return YES;
}

- (NSUInteger)supportedInterfaceOrientations
{
    return UIInterfaceOrientationMaskLandscapeRight;
}
The problem was how I was adding my viewController.
I replaced this line of code:
[window addSubview:viewController.view];
with this line:
[window setRootViewController:viewController];
When the device changes orientation in iOS 6, the system asks the application which orientations it supports, and the application returns the set of orientations it accepts.
How does the application determine its orientation?
First the application checks its Info.plist orientations (this is very important for determining which orientation to use at launch).
Secondly it asks its application delegate for its orientations; these can be specified via the -(NSUInteger)application:supportedInterfaceOrientationsForWindow: method. This effectively overrides the Info.plist setting, and the implementation is optional. By default iPad apps rotate to all orientations and iPhone apps to all but upside down.
Lastly the application delegate queries its top-most view controller; this can be a UINavigationController or a UIViewController, etc., which then specifies how it wants to be presented and whether it wants to autorotate. These UIViewControllers use the shouldAutorotate and supportedInterfaceOrientations methods to tell the app delegate how to present them.
You must make sure you set the root view controller of your window.
Also, if you are presenting any other view controller full screen, such as a modal view controller, that controller becomes the one responsible for deciding whether the orientation changes.
For me this solution worked; my case was a bit different because I had a UINavigationController.
My case was that I needed the whole app in Portrait except one view. I had to enable all the landscape and portrait orientations in Targets (otherwise it crashed on the only landscape view).
So in this case:
create a subclass of UINavigationController,
put the rotation-specific stuff there (sketched below),
and use the subclass instead of UINavigationController.
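A minimal sketch of such a subclass (the class name is made up; it assumes you want the navigation controller to defer rotation decisions to its top view controller):

// RotationAwareNavigationController.m (hypothetical name)
@interface RotationAwareNavigationController : UINavigationController
@end

@implementation RotationAwareNavigationController

- (BOOL)shouldAutorotate
{
    // Let the visible view controller decide
    return [self.topViewController shouldAutorotate];
}

- (NSUInteger)supportedInterfaceOrientations
{
    return [self.topViewController supportedInterfaceOrientations];
}

@end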
If you are using a UINavigationController, the system sends the supportedInterfaceOrientations message to the UINavigationController itself rather than to the top-most view controller (i.e. what you would actually want it to do).
Instead of having to subclass UINavigationController to fix it, you can simply add a category to UINavigationController.
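The category can forward the same two methods shown in the subclass sketch above (overriding methods in a category is fragile, so treat this as a quick workaround rather than a pattern):

// UINavigationController+Rotation.m (hypothetical file name)
@implementation UINavigationController (Rotation)

- (BOOL)shouldAutorotate
{
    return [self.topViewController shouldAutorotate];
}

- (NSUInteger)supportedInterfaceOrientations
{
    return [self.topViewController supportedInterfaceOrientations];
}

@end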
I created a new variable in my singleton, canRotate, and I set it to YES when I want the view controller to support additional orientations.
In the appDelegate I added this:
- (NSUInteger)application:(UIApplication *)application supportedInterfaceOrientationsForWindow:(UIWindow *)window
{
    if ([[SharedData sharedInstance] canRotate])
    {
        return (UIInterfaceOrientationMaskPortrait | UIInterfaceOrientationMaskLandscapeLeft | UIInterfaceOrientationMaskLandscapeRight);
    }

    return UIInterfaceOrientationMaskPortrait;
}
In my case it works. I have a custom tab bar, and UINavigationControllers are added as subviews to the rootViewController.
I know this sounds pretty elementary, but I was wracking my brain trying to figure out orientation issues while testing on my iPhone: I had the physical rotation lock set to portrait, so nothing I changed programmatically mattered. This should be troubleshooting step number 1!
I am developing an iPhone application. In this application, UIViewController (vc1) presents another UIViewController (vc2). vc1 supports both Portrait and Landscape orientations; vc2 supports only Portrait orientation.
When vc2 is presented, the system asks vc1 shouldAutorotateToInterfaceOrientation:, and this returns YES.
In iOS 5 (Beta 7), willRotateToInterfaceOrientation: and didRotateFromInterfaceOrientation: are not getting called for this sequence, but this works fine in iOS 4. Is this a bug in iOS 5?
I had reported a bug to Apple and I got the following reply:
"Engineering has determined that this issue behaves as intended based on the following information:
The presentation behavior is correct - if it behaved differently in previous versions, that was a bug. The arguably unexpected change in behavior regards the dismiss of VC1 which no longer gets rotation callbacks but it will layout in portrait.
There are other ways to determine what your orientation is when a view controller lays itself out. For various reasons, relying on the rotation callbacks proved to be problematic.
In general, viewController rotation callbacks occur in two cases:
The device orientation changes for view controllers in the window hierarchy
Mixed interface orientation presentations. (Bottom controller only supports portrait, device is in landscape, and a view controller that supports landscape is presented.) However this is arguably a misfeature.
Try using viewWillLayoutSubviews: in iOS 5."
I had faced a similar issue when testing my app on iOS5. The layout of subviews in the main view controller used to get messed up if the orientation changed when a modal view controller was active.
What I did was to store the current orientation flag in the main controller. This flag is updated in two places in the main controller:
willAnimateRotationToInterfaceOrientation:
viewWillLayoutSubviews (this is on iOS5 only)
I wrote all the logic to adjust the subviews by comparing the current orientation with the stored value. If they are different, I update the stored orientation and then update the subviews.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    // Any orientation is OK
    return YES;
}

- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration
{
    portrait = UIInterfaceOrientationIsPortrait(toInterfaceOrientation);
    // Code to update subview layout goes here
}

- (void)viewWillLayoutSubviews
{
    BOOL isPortraitNow = UIInterfaceOrientationIsPortrait(self.interfaceOrientation);
    if (isPortraitNow != portrait)
    {
        DLog(@"Interface orientation mismatch, correcting");
        portrait = isPortraitNow;
        // Code to update subview layout goes here
    }
}