I have tried all the solutions I could find online, but none of them covers all the rotation and orientation cases.
Is there a complete, better solution, or documentation that would help me use the MediaCapture object properly?
If you look at the CameraStarterKit sample from the Microsoft GitHub repository, you'll get a much better idea of how to handle rotation of the camera. It targets Windows 10, but a lot of the code should be portable back to 8.1.
Mainly, it comes down to this:
// Receive notifications about rotation of the device and UI and apply any necessary rotation to the preview stream and UI controls
private readonly DisplayInformation _displayInformation = DisplayInformation.GetForCurrentView();
private readonly SimpleOrientationSensor _orientationSensor = SimpleOrientationSensor.GetDefault();
private SimpleOrientation _deviceOrientation = SimpleOrientation.NotRotated;
private DisplayOrientations _displayOrientation = DisplayOrientations.Portrait;
// Rotation metadata to apply to the preview stream and recorded videos (MF_MT_VIDEO_ROTATION)
// Reference: http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh868174.aspx
private static readonly Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");
/// <summary>
/// Gets the current orientation of the UI in relation to the device (when AutoRotationPreferences cannot be honored) and applies a corrective rotation to the preview
/// </summary>
private async Task SetPreviewRotationAsync()
{
    // Only need to update the orientation if the camera is mounted on the device
    if (_externalCamera) return;

    // Calculate which way and how far to rotate the preview
    int rotationDegrees = ConvertDisplayOrientationToDegrees(_displayOrientation);

    // The rotation direction needs to be inverted if the preview is being mirrored
    if (_mirroringPreview)
    {
        rotationDegrees = (360 - rotationDegrees) % 360;
    }

    // Add rotation metadata to the preview stream to make sure the aspect ratio / dimensions match when rendering and getting preview frames
    var props = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
    props.Properties.Add(RotationKey, rotationDegrees);
    await _mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
}
/// <summary>
/// Registers event handlers for hardware buttons and orientation sensors, and performs an initial update of the UI rotation
/// </summary>
private void RegisterEventHandlers()
{
    // If there is an orientation sensor present on the device, register for notifications
    if (_orientationSensor != null)
    {
        _orientationSensor.OrientationChanged += OrientationSensor_OrientationChanged;

        // Update orientation of buttons with the current orientation
        UpdateButtonOrientation();
    }

    _displayInformation.OrientationChanged += DisplayInformation_OrientationChanged;
}
But this is just part of the code. You should have a look at the full file (if not the full sample) to get a better understanding of how it works.
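For reference, the ConvertDisplayOrientationToDegrees helper called above isn't shown here. Roughly, it maps the display orientation to a rotation in degrees like this (a minimal sketch, assuming a landscape-native camera sensor as on most devices; see the sample for the real implementation):

private static int ConvertDisplayOrientationToDegrees(DisplayOrientations orientation)
{
    // Map the current display orientation to a clockwise rotation in degrees,
    // relative to the landscape-native camera sensor
    switch (orientation)
    {
        case DisplayOrientations.Portrait:
            return 90;
        case DisplayOrientations.LandscapeFlipped:
            return 180;
        case DisplayOrientations.PortraitFlipped:
            return 270;
        case DisplayOrientations.Landscape:
        default:
            return 0;
    }
}

The DisplayInformation_OrientationChanged handler, in turn, just stores sender.CurrentOrientation in _displayOrientation and calls SetPreviewRotationAsync() again, so the preview stays upright as the device rotates.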
I have a problem running my WP (8.0) app properly in the background while HERE Drive+ is running in the foreground. My app is a location-based app.
I've set up a little demo project to reproduce, isolate, and simplify the problem.
I have a Geolocator that checks the current location and displays it in a label from the PositionChanged event while the app is in the foreground. While running in the background, it displays a toast every 5 seconds (to show me that the PositionChanged event is still being triggered).
Pretty straightforward stuff, and it works.
public MainPage()
{
    InitializeComponent();
    DataContext = this;

    App.LocationWatcher.ReportInterval = 5000;
    App.LocationWatcher.DesiredAccuracy = PositionAccuracy.High;
    App.LocationWatcher.PositionChanged += LocationWatcherOnPositionChanged;
    App.LocationWatcher.StatusChanged += LocationWatcherOnStatusChanged;
}
private void LocationWatcherOnStatusChanged(Geolocator sender, StatusChangedEventArgs args)
{
    DisplayToast("Status:", args.Status.ToString());
}

private void LocationWatcherOnPositionChanged(Geolocator sender, PositionChangedEventArgs args)
{
    if (!App.IsRunningInBackground)
    {
        Dispatcher.BeginInvoke(() =>
        {
            this.tbkLastUpdatedValue.Text = DateTime.Now.Ticks.ToString();
            this.tbkLatitudeValue.Text = args.Position.Coordinate.Latitude.ToString();
            this.tbkLongitudeValue.Text = args.Position.Coordinate.Longitude.ToString();
        });
    }
    else
    {
        DisplayToast("Location:", args.Position.Coordinate.Latitude.ToString());
    }
}
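(For completeness: App.LocationWatcher and App.IsRunningInBackground come from the standard WP8 background-location pattern. A minimal sketch of the assumed App.xaml.cs plumbing, with the handlers wired to the PhoneApplicationService events in App.xaml:)

public partial class App : Application
{
    // Shared Geolocator and background flag used by MainPage above
    public static Geolocator LocationWatcher = new Geolocator();
    public static bool IsRunningInBackground;

    // Raised when the app is moved to the background
    private void Application_RunningInBackground(object sender, RunningInBackgroundEventArgs e)
    {
        IsRunningInBackground = true;
    }

    // Raised when the app returns to the foreground
    private void Application_Activated(object sender, ActivatedEventArgs e)
    {
        IsRunningInBackground = false;
    }
}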
So, now the problem: when my app runs in the background, it keeps displaying its toasts while I run any other app (including the built-in map, which actually uses GPS). But when I run HERE Drive+, my PositionChanged and StatusChanged events don't get triggered anymore.
There is an app on the marketplace that is capable of running in the background while HERE Drive+ runs in the foreground, as stated in the marketplace comments (in German only).
Any ideas how to solve this, or what may be causing the problem?
The Windows SDK is no longer available. An alternative could be to use the HERE Maps API for JavaScript, which supports vector map data rendering.
I tried following a combination of Lee Brimlow's blitting tutorial series and the technique in Rex van der Spuy's "Advanced Game Design with Flash".
I am a developer working on an online virtual world made in Flash. I made a phone application (it works much like the phone in the Grand Theft Auto games). Anyway, when a message is sent we want to play a crazy animation of an envelope flying around and transforming, with sparkles around it. It was laggy (especially on older computers), so I thought it would be a great chance to use blitting. However, the blitting animation actually plays slower than a regular MovieClip! What is going on here? Is blitting only better for mobile devices, and actually slower on computers? Maybe I am doing something wrong. Here is my code:
// THIS PART HAPPENS WHEN THE PHONE IS INITIALIZED
//---------------- Blitting stuff ----------------------------------
// add this bitmap stage to the display list so we can see it
_bitmapStage = new BitmapData(550, 400, true, 0xD6D6D6);
_phoneItself.addChild(new Bitmap(_bitmapStage));
var _spritesheetClass:Class = getDefinitionByName("ESpritesheet_1") as Class;
_spritesheet = new _spritesheetClass() as BitmapData;
_envelopeBlit = new BlitSprite(_spritesheet, BlitConfig.envelopeAnimAry, _bitmapStage);
_envelopeBlit.x = -100;
_envelopeBlit.y = 0;
_envelopePlayTimer = new Timer(5, 0);
_envelopePlayTimer.addEventListener(TimerEvent.TIMER, onEnterTimerFrame);
_envelopeBlit.addEventListener("ENV_ANIM_DONE", onEnvAnimFinished);
// a "BlitSprite" is a class that I made. It looks like this:
package com.fs.util_j.blit_utils
{
    import flash.display.BitmapData;
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.geom.Point;
    import flash.geom.Rectangle;

    public class BlitSprite extends EventDispatcher
    {
        private var _fullSpriteSheet:BitmapData;
        private var _rects:Array;
        private var _bitmapStage:BitmapData;
        private var pos:Point = new Point();
        public var x:Number = 0;
        public var y:Number = 0;
        public var _animIndex:int = 0;
        private var _count:int = 0;
        public var animate:Boolean = true;
        private var _whiteTransparent:BitmapData;
        private var _envelopeAnimAry:Array;
        private var _model:Object;

        public function BlitSprite(fullSpriteSheet:BitmapData, envelopeAnimAry:Array, bitmapStage:BitmapData, model:Object = null)
        {
            _fullSpriteSheet = fullSpriteSheet;
            _envelopeAnimAry = envelopeAnimAry;
            _bitmapStage = bitmapStage;
            _model = model;
            init();
        }

        private function init():void
        {
            // _whiteTransparent = new BitmapData(100, 100, true, 0x80FFffFF);
            this.addEventListener("ENV_ANIM_DONE", onEvnAnimDone);
        }

        protected function onEvnAnimDone(event:Event):void
        {
        }

        public function render():void
        {
            // pos.x = x - _rects[_animIndex].width*.5;
            // pos.y = y - _rects[_animIndex].width*.5;
            // if (_count % 1 == 0 && animate == true)
            // {
            // trace("rendering");
            if (_animIndex == (_envelopeAnimAry.length - 1))
            {
                // _animIndex = 0;
                dispatchEvent(new Event("ENV_ANIM_DONE", true));
                animate = false;
                // trace("!!!!animate over " + _model.animOver);
                // if (_model != null)
                // {
                //     _model.animOver = true;
                // }
                // trace("!!!!animate over " + _model.animOver);
            }
            else
            {
                _animIndex++;
            }

            pos.x = x + _envelopeAnimAry[_animIndex][1];
            pos.y = y + _envelopeAnimAry[_animIndex][2];

            _bitmapStage.copyPixels(_fullSpriteSheet, _envelopeAnimAry[_animIndex][0], pos, null, null, true);
        }
    }
}
// THIS PART HAPPENS WHEN PHONE'S SEND BUTTON IS CLICKED
_envelopeBlit.animate = true;
_envelopeBlit._animIndex = 0;
_darkSquare.visible = true;
_envelopePlayTimer.addEventListener(TimerEvent.TIMER, onEnterTimerFrame);
_envelopePlayTimer.start();
It also uses BlitConfig, which stores the info about the sprite sheet spit out by TexturePacker:
package com.fs.pack.phone.configuration
{
    import flash.geom.Rectangle;

    public final class BlitConfig
    {
        public static var _sending_message_real_20001:Rectangle = new Rectangle(300,1020,144,102);
        public static var _sending_message_real_20002:Rectangle = new Rectangle(452,1012,144,102);
        public static var _sending_message_real_20003:Rectangle = new Rectangle(852,852,146,102);
        public static var _sending_message_real_20004:Rectangle = new Rectangle(2,1018,146,102);
        public static var _sending_message_real_20005:Rectangle = new Rectangle(702,822,148,102);
        .
        .
        .
        public static var _sending_message_real_20139:Rectangle = new Rectangle(932,144,1,1);

        public static var envelopeAnimAry:Array = [
            // rectangle, x offset, y offset
            [ _sending_message_real_20001, 184, 155],
            [ _sending_message_real_20002, 184, 155],
            [ _sending_message_real_20003, 183, 155],
            [ _sending_message_real_20004, 183, 155],
            .
            .
            .
            [ _sending_message_real_20139, 0, 0]
        ];

        public function BlitConfig()
        {
        }
    }
}
EDIT:
Knowing that this is not mobile, my answer below is irrelevant. I will leave it there, though, in case someone is having trouble with blitting on mobile in the future.
With regards to this specific question, you are running your timer every 5 ms. First off, a Timer is only accurate down to roughly 15 ms, so a 5 ms interval will never actually be honored. For any Timer that drives something displayed on the stage, you should never use an interval shorter than a single frame (1000/stage.frameRate, ~33 ms for a 30 fps app).
For blitting, the goal is to reduce calculations and rendering. The way you have this set up right now, you are blitting every 5 ms, which is six or seven times as often as the MovieClip renders. You should reduce how often you blit: only do it when a change has actually been made beyond translation. Doing it any more often than that is overkill, and it is the reason it is so slow (again, creating bitmaps is slow).
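To make that concrete, here is a sketch of driving the blit from the frame loop instead of a 5 ms Timer, reusing your BlitSprite (the ENTER_FRAME wiring and the fillRect clear are illustrative assumptions, not taken from your project):

// Render at most once per displayed frame instead of every 5 ms
_phoneItself.addEventListener(Event.ENTER_FRAME, onEnterFrame);

private function onEnterFrame(e:Event):void
{
    if (_envelopeBlit.animate)
    {
        // Clear the previous frame, then draw the next one
        _bitmapStage.fillRect(_bitmapStage.rect, 0xD6D6D6);
        _envelopeBlit.render();
    }
}

This way the blit can never run more often than the stage actually renders.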
In general, you do not want to blit in an AIR for Mobile application (which I assume you are doing since you mentioned the phone being initialized). I'm not sure if it is okay to do it using other/native SDKs, but avoid it in AIR.
Essentially, it comes down to how blitting works. Blitting takes a screen capture and displays that on the stage rather than the actual object. In general, this is great: it means that your display objects, particularly vectors (which are slow to render), have to render far less often. It is especially good when animating, because a display object tends to re-render every time it is translated in any way, while a bitmap does not.
On mobile platforms, however, creating that bitmap is incredibly slow. I've never looked into how the SDK creates the Bitmaps, but it doesn't do it efficiently (it often makes me wonder if it does it pixel-by-pixel). On desktops, this is generally fine. There is plenty of CPU and plenty of RAM to make this happen quickly. On mobile, however, that luxury is not there at the moment. So when you blit and create that bitmap, it takes a while to run that process.
The problem is exacerbated on high-resolution screens. An app I developed from January to May of this year selectively used blitting to apply filters in a GPU-accelerated environment. On an iPad 2, blitting took my app from 30 fps to ~24 fps. Not a big deal, and nothing the user would notice. On an iPad 3 with a Retina display, however, it dropped to 10 fps. That makes sense when you think about it, as Retina iPads have four times as many pixels as non-Retina iPads.
If you do want to use blitting on mobile, I recommend a few things:
Use GPU rendering mode. Without it, you stand no chance. Be aware that, at least before AIR 3.7, filters were not supported in GPU mode; I am unsure whether that is still the case. You should avoid using filters on mobile regardless, though, as they are very slow to render.
Make sure to test a release-mode application. Depending on build settings, the difference between a debug-mode app and a release-mode app can be substantial, especially on iOS. An app I just developed went from taking 2-3 seconds to create a new Flex View in debug mode to less than a frame (~40 ms) in release mode on an iPhone 4.
Use blitting sparingly. Only do it where absolutely necessary
Look for ways to simplify your display list. It is easy to end up with an object that has 40 children just to create a button. Instead, look for ways to simplify that into fewer objects and fewer filters (even if removing a filter requires you to add another object). I don't believe this will help with the actual blitting process, but it should help with rendering the objects in the first place.
So in general, use blitting sparingly on mobile because bitmap creation is slow.
Anybody know how to programmatically zoom the AudioVideoCaptureDevice in Windows Phone 8?
I am using AudioVideoCaptureDevice (and yes, I want that specific device so I can control the VideoTorchMode property). I can't for the life of me figure out the zooming, though. I am painting a Canvas using a VideoBrush mapped to the AudioVideoCaptureDevice. I'd like to implement pinch-zoom, or even simple +/- buttons, to zoom the camera.
What am I missing?
I'm not familiar with any API in WP8 that would allow you to programmatically set the zoom on a PhotoCaptureDevice/AudioVideoCaptureDevice. My suggestion is to do it manually: implement your own pinch-to-zoom functionality and make sure the region the user zooms in on is focused.
For information on how to focus on a region using the WP8 camera APIs, see Nokia's Camera Explorer. The core of what you're looking for can be found in its architectural guide under "tap-to-focus".
private async void videoCanvas_Tap(object sender, GestureEventArgs e)
{
    System.Windows.Point uiTapPoint = e.GetPosition(VideoCanvas);

    if (_focusSemaphore.WaitOne(0))
    {
        // Get tap coordinates as a foundation point
        Windows.Foundation.Point tapPoint = new Windows.Foundation.Point(uiTapPoint.X, uiTapPoint.Y);

        double xRatio = VideoCanvas.ActualWidth / _dataContext.Device.PreviewResolution.Width;
        double yRatio = VideoCanvas.ActualHeight / _dataContext.Device.PreviewResolution.Height;

        // Adjust to center the focus region on the tap point
        Windows.Foundation.Point displayOrigin = new Windows.Foundation.Point(
            tapPoint.X - _focusRegionSize.Width / 2,
            tapPoint.Y - _focusRegionSize.Height / 2);

        // Adjust for the resolution difference between the preview image and the canvas
        Windows.Foundation.Point viewFinderOrigin = new Windows.Foundation.Point(displayOrigin.X / xRatio, displayOrigin.Y / yRatio);
        Windows.Foundation.Rect focusrect = new Windows.Foundation.Rect(viewFinderOrigin, _focusRegionSize);

        // Clip to the preview resolution
        Windows.Foundation.Rect viewPortRect = new Windows.Foundation.Rect(0, 0, _dataContext.Device.PreviewResolution.Width, _dataContext.Device.PreviewResolution.Height);
        focusrect.Intersect(viewPortRect);

        _dataContext.Device.FocusRegion = focusrect;

        // Show a focus indicator
        FocusIndicator.SetValue(Shape.StrokeProperty, _notFocusedBrush);
        FocusIndicator.SetValue(Canvas.LeftProperty, uiTapPoint.X - _focusRegionSize.Width / 2);
        FocusIndicator.SetValue(Canvas.TopProperty, uiTapPoint.Y - _focusRegionSize.Height / 2);
        FocusIndicator.SetValue(Canvas.VisibilityProperty, Visibility.Visible);

        CameraFocusStatus status = await _dataContext.Device.FocusAsync();

        if (status == CameraFocusStatus.Locked)
        {
            FocusIndicator.SetValue(Shape.StrokeProperty, _focusedBrush);
            _manuallyFocused = true;

            // Combine the flags with | (bitwise OR); ANDing them would yield
            // AutoFocusParameters.None and lock nothing
            _dataContext.Device.SetProperty(KnownCameraPhotoProperties.LockedAutoFocusParameters,
                AutoFocusParameters.Exposure | AutoFocusParameters.Focus | AutoFocusParameters.WhiteBalance);
        }
        else
        {
            _manuallyFocused = false;
            _dataContext.Device.SetProperty(KnownCameraPhotoProperties.LockedAutoFocusParameters, AutoFocusParameters.None);
        }

        _focusSemaphore.Release();
    }
}
Here's how to implement your own pinch-to-zoom functionality in WP8: Pinch To Zoom functionality in windows phone 8
One thing I'd add to the pinch-to-zoom code sample in your case is a Clip specification on a parent control, to make sure you're not accidentally rendering images tens or hundreds of times bigger than the screen and killing your app's performance.
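As a rough sketch of that idea (the ClipHost parent, the clamping range, and the event wiring are illustrative assumptions, not from the linked sample):

// Assumes XAML roughly like:
// <Border x:Name="ClipHost"><Canvas x:Name="VideoCanvas" Loaded="VideoCanvas_Loaded" ManipulationDelta="VideoCanvas_ManipulationDelta" /></Border>
private readonly ScaleTransform _zoom = new ScaleTransform();

private void VideoCanvas_Loaded(object sender, RoutedEventArgs e)
{
    VideoCanvas.RenderTransform = _zoom;

    // Clip the parent so the scaled-up preview never renders outside its own bounds
    ClipHost.Clip = new RectangleGeometry
    {
        Rect = new Rect(0, 0, ClipHost.ActualWidth, ClipHost.ActualHeight)
    };
}

private void VideoCanvas_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    if (e.PinchManipulation == null) return;

    // Scale relative to the current zoom and clamp to a sane range
    double scale = _zoom.ScaleX * e.PinchManipulation.DeltaScale;
    scale = Math.Max(1.0, Math.Min(4.0, scale));
    _zoom.ScaleX = scale;
    _zoom.ScaleY = scale;
}

Clamping the scale at 1.0 also keeps the preview from ever shrinking below the clipped region.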
I'm trying to use the C++ Accelerometer interface in the Windows::Devices::Sensors namespace on Windows Phone 8. The code is very similar to a C# project I have that works, but I can't get the C++ event to fire the way I can with my C# code.
My project is a C# app with a C++ component. The C++ component just opens the Accelerometer device for reading, and then tries to set up an event that fires whenever data is ready:
AccelerometerWrapper::AccelerometerWrapper()
{
    Accelerometer^ acc = Accelerometer::GetDefault();
    accReading = acc->ReadingChanged::add(ref new TypedEventHandler<Accelerometer^, AccelerometerReadingChangedEventArgs^>(this, &AccelerometerWrapper::ReadingChanged));
}

void AccelerometerWrapper::ReadingChanged(Accelerometer^ sender, AccelerometerReadingChangedEventArgs^ e)
{
    ...
}
Unfortunately, my ReadingChanged() function is never called. I've looked around for a Start() method or some such, but I can't find anything. I'm basing most of my knowledge on the AccelerometerCPP example, but I can't actually test that example, as it targets generic WinRT (i.e. Windows 8, not Windows Phone 8) and my computer does not have an accelerometer. Everything compiles and runs; the event is just never triggered.
EDIT: I have successfully run a test to verify that I can manually call acc->GetCurrentReading(), so the accelerometer is working; it just seems to be the event that is never triggered.
Thank you in advance!
I'm not a C++ expert, but the new Accelerometer sensor works on my machine using C#.
private void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    var accelerometer = Accelerometer.GetDefault();
    if (accelerometer != null)
    {
        accelerometer.ReadingChanged += accelerometer_ReadingChanged;
    }
}

void accelerometer_ReadingChanged(Accelerometer sender, AccelerometerReadingChangedEventArgs args)
{
    Debug.WriteLine(args.Reading.AccelerationX + ", " + args.Reading.AccelerationY + "," + args.Reading.AccelerationZ);
}
When we run that code snippet, the expected X, Y, and Z acceleration values appear in the debug output.
Just a guess for the C++ side: do you need to cache the Accelerometer instance somewhere instead of letting it go out of scope?
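If that is the issue, the fix would look something like this in C++/CX: keep the Accelerometer^ in a member field so the reference survives the constructor (member names here are illustrative):

// In AccelerometerWrapper's header (sketch):
//     Windows::Devices::Sensors::Accelerometer^ _acc;         // cached instance
//     Windows::Foundation::EventRegistrationToken accReading; // for ReadingChanged::remove later

AccelerometerWrapper::AccelerometerWrapper()
{
    _acc = Accelerometer::GetDefault();
    accReading = _acc->ReadingChanged::add(
        ref new TypedEventHandler<Accelerometer^, AccelerometerReadingChangedEventArgs^>(
            this, &AccelerometerWrapper::ReadingChanged));
}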