Under certain conditions, picking a resolution with Camera.setMode() adds black bars to the camera input, "letterboxing" it. I understand that setMode() uses some undocumented algorithm that picks one of your camera's available native resolutions and then crops it to fit your desired dimensions, but apparently it sometimes prefers to add black bars rather than crop.
This behavior depends on which camera I'm using. Some cameras seem to always crop and never letterbox, which may be related to the native resolutions they offer. But what's really strange is that the letterboxing only ever happens in a Flash Player ActiveX control, like in Internet Explorer. It doesn't happen with the exact same SWF in Flash Player Projector or Google Chrome. This seems to imply that different Flash Player builds use different algorithms to select a resolution and fit it to the desired dimensions.
Here's a very simple example of code that's been creating this problem for me. In this case I'm providing a 4:3 resolution to setMode(), which means it must be selecting a 16:9 resolution even though 640x480 is one of the camera's available resolutions.
package
{
    import flash.display.Sprite;
    import flash.media.Camera;
    import flash.media.Video;

    public class Flashcam extends Sprite
    {
        private var _camera:Camera = Camera.getCamera("0");
        public var _video:Video;

        private var _width:int = 640;
        private var _height:int = 480;

        public function Flashcam()
        {
            // Request a 4:3 capture mode at 15 fps.
            _camera.setMode(_width, _height, 15);

            // Size the Video to whatever mode the camera actually settled on.
            _video = new Video(_camera.width, _camera.height);
            addChild(_video);
            _video.attachCamera(_camera);
        }
    }
}
Is there any way to stop Camera from letterboxing its input? If not, is there some way to tell whether or not it's being letterboxed and which camera resolution has been automatically selected so that I can write my own code to account for it?
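To illustrate what I mean by "account for it": something like the sketch below is what I'd like to be able to rely on, assuming camera.width and camera.height really do report the capture size setMode() selected.

// Sketch: after setMode(), compare what was requested with what was selected.
// Assumes camera.width/camera.height report the mode the player settled on.
_camera.setMode(_width, _height, 15);

var requestedRatio:Number = _width / _height;
var actualRatio:Number = _camera.width / _camera.height;

if (_camera.width != _width || _camera.height != _height ||
    Math.abs(requestedRatio - actualRatio) > 0.01)
{
    // A different mode (e.g. a 16:9 one) was selected, so letterboxing or
    // cropping may occur; compensate here, e.g. by resizing the Video.
    trace("Selected capture size: " + _camera.width + "x" + _camera.height);
}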
Related
I have a project that needs to be made in Animate CC using ActionScript 3.
I have to offer resolution options, meaning that when the user clicks a button, the project switches to that resolution (800x600 and so on), like the graphics settings in a video game.
I've tried using:

res1.addEventListener(MouseEvent.CLICK, setRes1);

function setRes1(event:MouseEvent):void
{
    stage.fullScreenSourceRect = new Rectangle(0, 0, 800, 600);
    stage.displayState = StageDisplayState.FULL_SCREEN;
}
But it doesn't seem to work. Is there even a way to do this?
Not in the browser Flash Player, but if you target AIR, yes, you can do it by accessing the properties of NativeWindow. stage.fullScreenSourceRect only scales a region of the stage up to full screen; it does not change the window or screen size. In AIR you can modify the application size by accessing stage.nativeWindow.width = ...;. This will not modify your PC's screen resolution, but only resize the application window. You can change the actual screen resolution in AIR, though, using the ScreenMode class. Detailed working is described here: https://helpx.adobe.com/flash-player/release-note/fp_31_air_31_release_notes.html#ScreenModeConfigurationforAIRDesktop
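A minimal sketch of the window-resize approach, assuming a desktop AIR profile and that res1 is a button already on the stage:

import flash.events.MouseEvent;

res1.addEventListener(MouseEvent.CLICK, setRes1);

function setRes1(event:MouseEvent):void
{
    // Resizes only the application window, not the monitor's resolution.
    // Note that NativeWindow dimensions include any system chrome.
    stage.nativeWindow.width = 800;
    stage.nativeWindow.height = 600;
}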
I am only building a desktop application, no mobile.
I am experimenting with letting the user set the screen resolution at run time. I present the available display modes and the user applies one. This part actually works. The problem occurs when I save this mode and try to set it again the next time they launch the game.
I am using preferences to store the mode the user selected. I am unable to access preferences before the create method in my Game class, or in the DesktopLauncher object, where you normally set up the config and pass it into the application. So my DesktopLauncher looks like this.
val config = Lwjgl3ApplicationConfiguration()
// Start with the monitor's current display mode; preferences are not
// accessible yet at this point.
config.setFullscreenMode(Lwjgl3ApplicationConfiguration.getDisplayMode())
Lwjgl3Application(MainGame(), config)
I use the current screen resolution at the creation of the application. Then in the create method of my MainGame class I get the mode they set from preferences and apply it like so...
override fun create() {
    var modes = Gdx.graphics.displayModes.toList()
    val mode = Gdx.graphics.displayMode
    val preference: Preferences = Gdx.app.getPreferences("screenPreference")

    // Fall back to the current mode's values if no preference has been saved.
    val screenWidth = preference.getInteger("width", mode.width)
    val screenHeight = preference.getInteger("height", mode.height)
    val refreshRate = preference.getInteger("refreshRate", mode.refreshRate)

    modes = modes.filter { it.width == screenWidth }
    modes = modes.filter { it.height == screenHeight }
    modes = modes.filter { it.refreshRate == refreshRate }

    if (modes.isNotEmpty()) {
        Gdx.graphics.setFullscreenMode(modes[0])
    }
    // ...
}
To summarize: I get the list of modes, pull the last-set values from preferences, and filter the list according to them. That should leave exactly one item in the list, which I apply. If no preference has been saved, the defaults fall back to the current mode, so it is simply applied again; if for some reason the list is empty, I don't set anything.
This is where the weird stuff happens. I have checked all the numbers when creating my screens and cameras, and they are all correct. I do receive the correct resolution, but the application doesn't render correctly. Below are a couple of examples of what happens.
In the first image you can see the bounds of the application relative to the screen. My application renders only in the bottom corner, and the rest is black. To produce this, I started the application at a resolution smaller than my native one, 1280x1024, then in my create method I set the fullscreen mode to 1920x1080 before building the rest of the application. I have checked my cameras and my viewports, and they all have the resolution 1920x1080, but the image is not filling the entire screen.
And a second:
This one is what happens when I reverse the settings. I start at the native resolution of 1920x1080, and in my create method I set it to 1280x1024, again before creating the rest of the application. This gives me black bars on both sides of the image like I'd expect, but the application is HUGE, and only a portion of it fits in the window; the rest goes out of bounds, as depicted by the dotted lines.
It will remain like this the entire time, unless I change the resolution while the application is running, at which point it corrects itself for the rest of the application's life.
I am confounded by this effect and am looking for an explanation of why it happens, or how to fix it.
I found the issue that was causing the image to render incorrectly. I was setting the display mode in the create() function of my main game class. That function is not run on the rendering thread, and you do not want to use Gdx.graphics on anything other than the rendering thread, as described in the libGDX wiki: https://github.com/libgdx/libgdx/wiki/Threading
There is a function where you can pass in a lambda to be run on the rendering thread.
Gdx.app.postRunnable {
    // Apply the saved mode on the rendering thread.
    Gdx.graphics.setFullscreenMode(modes[0])
}
After passing that into postRunnable the game renders correctly on launch.
I am building an app in AS3/AIR and I would like to target both iPhone and iPad resolutions. I understand the different aspect ratios of iPhone and iPad; however, the app I am building currently has a different layout and slightly different content for each screen size. I currently have two versions of the app already built, one for iPhone and the other for iPad. All assets were created with the target platform in mind, but now I would like to combine the two apps into a single one.
I am thinking I will rename each screen file to iphone_login, ipad_menu, ipad_settings, etc. and include them all in the same build. Then during startup, check what device the user is on, select the iphone_ or ipad_ set, and also set the resolution at that time.
I would prefer not to have black edges from applying an iPhone resolution on an iPad, so my questions are:
Is my approach a suitable one considering the outcome I would like?
How do I determine what device a user is on to show the correct files, assets and resolution?
I understand the app size will at least double by adding two sets of assets and two sets of code files, but considering the differences in design, layout and content, I don't see another solution, apart from keeping two apps.
Thanks :)
What's the problem? iPad and iPhone have different resolution and DPI combinations; check them and identify the current platform.
Get the view you need by class, like this:

import flash.system.Capabilities;
import flash.utils.Dictionary;

public class ViewProvider
{
    public static const PAGE1:String = "page1";
    public static const PAGE2:String = "page2";

    private static var PHONE_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();
    private static var TABLET_VIEW_NAME_2_CLASS:Dictionary = new Dictionary();

    // Static initializer: map each page name to its phone and tablet view class.
    {
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1PhoneView;
        PHONE_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2PhoneView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE1] = Page1TabletView;
        TABLET_VIEW_NAME_2_CLASS[ViewProvider.PAGE2] = Page2TabletView;
    }

    public function ViewProvider()
    {
    }

    public static function isTablet():Boolean
    {
        // Analyze Capabilities.screenResolutionX, Capabilities.screenResolutionY
        // and Capabilities.screenDPI. One possible heuristic: treat a physical
        // diagonal of 6.5 inches or more as a tablet.
        var w:Number = Capabilities.screenResolutionX / Capabilities.screenDPI;
        var h:Number = Capabilities.screenResolutionY / Capabilities.screenDPI;
        return Math.sqrt(w * w + h * h) >= 6.5;
    }

    public static function getViewClass(name:String):Class
    {
        return isTablet() ? TABLET_VIEW_NAME_2_CLASS[name] : PHONE_VIEW_NAME_2_CLASS[name];
    }
}
And in your program
navigator.pushView(ViewProvider.getViewClass(ViewProvider.PAGE1))
All coordinates, paddings and other position values, font sizes, etc. can be corrected in a similar way with a multiplier that depends on the runtime DPI...
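For example, a minimal sketch of such a multiplier; the 160 dpi baseline here is an assumption, substitute whatever DPI your designs were made for:

import flash.system.Capabilities;

// Sketch: derive a layout multiplier from the runtime DPI.
// The 160 dpi baseline is an assumption; use the DPI your designs target.
var dpiMultiplier:Number = Capabilities.screenDPI / 160;

var padding:Number = 8 * dpiMultiplier;
var fontSize:int = Math.round(14 * dpiMultiplier);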
I'm in the middle of a similar problem.
My solution is to keep the images at the best resolution in a file pool, and then downscale them depending on the device when the app starts. You can also do this with non-animated vector assets and draw them into a BitmapData object.
Another option is to always keep the asset pool loaded in memory at the maximum resolution needed, and downscale assets at runtime when they are needed. This works well if you will be using some asset in different places at different sizes, for example an icon.
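A rough sketch of the rasterize-and-downscale idea; masterAsset and the scale factor are hypothetical stand-ins for an asset from your pool and the multiplier you compute for the device:

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;
import flash.geom.Matrix;

// Sketch: rasterize a vector asset (masterAsset is hypothetical) at a
// device-specific scale and wrap the result in a Bitmap.
function rasterize(masterAsset:Sprite, scale:Number):Bitmap
{
    var m:Matrix = new Matrix();
    m.scale(scale, scale);

    var data:BitmapData = new BitmapData(
        int(Math.ceil(masterAsset.width * scale)),
        int(Math.ceil(masterAsset.height * scale)),
        true, 0x00000000);
    data.draw(masterAsset, m, null, null, null, true); // smoothing enabled

    return new Bitmap(data);
}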
As for the code, you should find a way to separate the code that manages data, the code that manages the logic, and the code that "paints" the UI. This way you will be able to reuse most of the code in both versions and only change the code that "paints" the UI. Check the MVC pattern for more info.
I am using this function, adapted from Plastic Sturgeon (http://plasticsturgeon.com/2010/09/as3-get-visible-bounds-of-transparent-display-object/), to get the visible bounds of a display object.
import flash.display.BitmapData;
import flash.display.DisplayObject;
import flash.geom.Matrix;
import flash.geom.Rectangle;

public static function getVisibleBounds(source:DisplayObject):Rectangle
{
    // Includes every transform up the display list, including the stage's own scaling.
    var matrix:Matrix = source.transform.concatenatedMatrix;
    var data:BitmapData = new BitmapData(1000, 1000, true, 0x00000000);
    data.draw(source, matrix);
    // Bounding box of all pixels that are not fully transparent black.
    var bounds:Rectangle = data.getColorBoundsRect(0xFFFFFFFF, 0x00000000, false);
    data.dispose();
    return bounds;
}
However, the bounds come back offset from the object, depending on the stage size. It works perfectly at the default stage size (550px×400px), but when either dimension is increased, the rectangle moves in the opposite direction: when the stage width is increased, the bounds are offset leftward from the object, and when the height is increased, they are offset downward. It doesn't do this consistently. The offset as a function of stage dimension is non-linear: it is 0 for a certain range of stage dimensions, then for stage dimensions beyond that range it quickly rises.
The offset also depends on how the stage dimension was changed. For example, going from 400px to 1000px in stages, testing the movie in between, gives a different offset than going from 400px to 1000px all at once, or without testing the movie at intermediate sizes. Sometimes the offset only changes with one dimension and the other does nothing, and the published file behaves differently from the test. I tried putting the function in the same file as the display object, instead of in an external file, but it's still unreliable. I wonder if there's some fix that could reliably give me the actual visible bounds of the display object, regardless of the stage size and all this other behavior.
My computer runs Windows Vista Home Premium 32-bit, and I am using Adobe Flash Professional CS5.5.
This may be an issue that can be solved by setting some stage properties. First try setting the stage not to scale:
this.stage.scaleMode = "noScale"; // i.e. StageScaleMode.NO_SCALE
Then set some alignment rules:
this.stage.align = "TL"; // i.e. StageAlign.TOP_LEFT
If that helps, it may be that your bitmap copying was running into some issues with scaling bugs.
I've built an AIR application with Flash/AS3 that has a webcam display on the stage. While building the app, and in all my tests, everything looks and works just dandy, but when I publish for AIR the image gets stretched. The bounds of the image seem to stay the same, but the actual cam output is what's distorted. Has anyone run into this problem before?
I should add, this is a desktop app, permanently installed on one machine, so device compatibility should not be an issue.
This is the camera setup:
var cam:Camera = Camera.getCamera();
cam.setMode(280, 380, 20); // request 280x380 capture at 20 fps
var video:Video = new Video(380, 380);
This is where I first call the camera...
video.attachCamera(cam);
video.x = 355;
video.scaleX = -1; // flips the image horizontally (mirror view)
video.y = -100;
addChildAt(video, 0);
The reason for the odd sizing is that it sits behind a frame that changes position throughout the interactive.
Not necessarily the answer you are looking for, but you should keep this in mind:
You are asking the camera to capture at a resolution of 280x380, which is not a standard 4:3 aspect ratio.
When you call cam.setMode(280, 380, 20); the docs say that Flash will try to set the camera's resolution to your specifications, and if the camera does not support that resolution, it will try to find one that matches. So you may or may not be getting that exact resolution.
setMode() has a fourth parameter, which can disable this functionality. Read the docs on that so you understand the implications :)
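For illustration, a sketch of what that might look like; the fourth argument is favorArea, and passing false tells Flash to favor the requested frame rate over the capture size, so check the docs before relying on this:

// Sketch: setMode() with its fourth parameter (favorArea) set to false,
// which favors the requested frame rate over the requested capture size.
cam.setMode(280, 380, 20, false);

// Read back what the camera actually settled on.
trace("Actual mode: " + cam.width + "x" + cam.height + " @ " + cam.fps + " fps");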
Then you display the video in a Video object that is 380x380, so I would expect the image to be stretched in the horizontal direction (because the original source is only 280 wide).
It's not clear why this behaves differently: are you saying that running the debug version of the app works, but when you export the release build and run that, it looks funky?
Finally, what is scaleX = -1 doing? I recall this as some sort of nifty trick I used in the past... but its purpose here is escaping me :)
Yep, source code would be cool. Btw, I suggest that, as soon as you get the video stream running, you set the video.width and video.height properties by hand.
This will force the cam to display at the correct size.
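A minimal sketch of that suggestion, assuming you want the Video sized to whatever capture mode the camera actually settled on once it becomes active:

import flash.events.ActivityEvent;

// Sketch: once the camera becomes active, force the Video object to the
// capture size the camera actually settled on.
cam.addEventListener(ActivityEvent.ACTIVITY, onCamActivity);

function onCamActivity(e:ActivityEvent):void
{
    cam.removeEventListener(ActivityEvent.ACTIVITY, onCamActivity);
    video.width = cam.width;
    video.height = cam.height;
}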