Programmatically zooming the AudioVideoCaptureDevice? - windows-phone-8

Anybody know how to programmatically zoom the AudioVideoCaptureDevice in Windows Phone 8?
I am using AudioVideoCaptureDevice (and yes, I want that specific device so I can control the VideoTorchMode property). I can't for the life of me figure out the zooming though. I am painting a Canvas using a VideoBrush mapped to the AudioVideoCaptureDevice. I'd like to implement Pinch-Zoom or even a simple +/- button to Zoom the camera.
What am I missing?

I'm not familiar with any API in WP8 that would allow you to programmatically set the zoom on a PhotoCaptureDevice/AudioVideoCaptureDevice. My suggestion is to do it manually: implement your own pinch-to-zoom functionality and make sure the zoomed region is in focus.
For information on how to focus on a region using the WP8 camera APIs, see Nokia's Camera Explorer. The core of what you're looking for is covered in its architectural guide under "tap-to-focus".
private async void videoCanvas_Tap(object sender, GestureEventArgs e)
{
    System.Windows.Point uiTapPoint = e.GetPosition(VideoCanvas);

    if (_focusSemaphore.WaitOne(0))
    {
        // Get tap coordinates as a foundation point
        Windows.Foundation.Point tapPoint = new Windows.Foundation.Point(uiTapPoint.X, uiTapPoint.Y);

        double xRatio = VideoCanvas.ActualWidth / _dataContext.Device.PreviewResolution.Width;
        double yRatio = VideoCanvas.ActualHeight / _dataContext.Device.PreviewResolution.Height;

        // Adjust to center the focus region on the tap point
        Windows.Foundation.Point displayOrigin = new Windows.Foundation.Point(
            tapPoint.X - _focusRegionSize.Width / 2,
            tapPoint.Y - _focusRegionSize.Height / 2);

        // Adjust for the resolution difference between the preview image and the canvas
        Windows.Foundation.Point viewFinderOrigin = new Windows.Foundation.Point(displayOrigin.X / xRatio, displayOrigin.Y / yRatio);
        Windows.Foundation.Rect focusrect = new Windows.Foundation.Rect(viewFinderOrigin, _focusRegionSize);

        // Clip to the preview resolution
        Windows.Foundation.Rect viewPortRect = new Windows.Foundation.Rect(0, 0, _dataContext.Device.PreviewResolution.Width, _dataContext.Device.PreviewResolution.Height);
        focusrect.Intersect(viewPortRect);

        _dataContext.Device.FocusRegion = focusrect;

        // Show a focus indicator
        FocusIndicator.SetValue(Shape.StrokeProperty, _notFocusedBrush);
        FocusIndicator.SetValue(Canvas.LeftProperty, uiTapPoint.X - _focusRegionSize.Width / 2);
        FocusIndicator.SetValue(Canvas.TopProperty, uiTapPoint.Y - _focusRegionSize.Height / 2);
        FocusIndicator.SetValue(Canvas.VisibilityProperty, Visibility.Visible);

        CameraFocusStatus status = await _dataContext.Device.FocusAsync();

        if (status == CameraFocusStatus.Locked)
        {
            FocusIndicator.SetValue(Shape.StrokeProperty, _focusedBrush);
            _manuallyFocused = true;
            // Combine the flags with | (bitwise OR); the & in the original sample
            // would evaluate to AutoFocusParameters.None and lock nothing
            _dataContext.Device.SetProperty(KnownCameraPhotoProperties.LockedAutoFocusParameters,
                AutoFocusParameters.Exposure | AutoFocusParameters.Focus | AutoFocusParameters.WhiteBalance);
        }
        else
        {
            _manuallyFocused = false;
            _dataContext.Device.SetProperty(KnownCameraPhotoProperties.LockedAutoFocusParameters, AutoFocusParameters.None);
        }

        _focusSemaphore.Release();
    }
}
Here's how to implement your own pinch-to-zoom functionality in WP8: see "Pinch To Zoom functionality in windows phone 8".
One thing I'd add to the pinch-to-zoom code sample in your case is a Clip specification on a parent control, to make sure you're not accidentally rendering images tens or hundreds of times bigger than the screen and killing your app's performance.
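To sketch how those two pieces could fit together (a rough illustration, not tested code: ViewportCanvas, the 1x-4x clamp, and the CompositeTransform wiring are my own assumptions, and I'm going from memory on WP8's ManipulationDeltaEventArgs.PinchManipulation):
// Minimal sketch, assuming VideoCanvas (painted with the VideoBrush) sits inside
// a parent named ViewportCanvas, and that VideoCanvas.RenderTransform is a
// CompositeTransform declared in XAML.
private double _zoom = 1.0;

private void VideoCanvas_ManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    if (e.PinchManipulation == null) return;

    // Accumulate the pinch scale and clamp it to a sane range (1x..4x assumed here)
    _zoom = Math.Max(1.0, Math.Min(4.0, _zoom * e.PinchManipulation.DeltaScale));

    var transform = (CompositeTransform)VideoCanvas.RenderTransform;
    transform.ScaleX = _zoom;
    transform.ScaleY = _zoom;

    // Clip the parent so the scaled-up preview never renders off-screen pixels
    ViewportCanvas.Clip = new RectangleGeometry
    {
        Rect = new Rect(0, 0, ViewportCanvas.ActualWidth, ViewportCanvas.ActualHeight)
    };
}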

Related

URLImage on MapContainer not displayed on actual device and flickering in simulator

My app features a map on which the user's avatar is displayed in the center, and markers with a photo should be added when the user moves the map.
On the simulator, the markers are added, but the images disappear as soon as I release the pointer; only the placeholders remain (this is what I call flickering). On the device, nothing is shown apart from the user's avatar.
As you can see, the image does not remain on the map; only the placeholder does. The user icon sits further south on the map, but it does show.
Please note: I am not receiving 404 errors and there is only one listener on the map (see below):
Here is how I trigger the map update:
googleMap.addMapListener((source, zoom, center) -> {
    showReportsOnMap(googleMap, center, theme, currentForm, selectCategoryButton.getWidth());
});
And here is how I add the reports to the map:
public void showReportsOnMap(
        MapContainer currentMap,
        Coord center,
        Resources theme,
        Form f,
        int reportImageWidth) {
    /**
     * Get the map borders (CAUTION: they can be NaN)
     */
    Coord NE = currentMap.getCoordAtPosition(currentMap.getAbsoluteX() + currentMap.getWidth(), currentMap.getAbsoluteY());
    Coord SW = currentMap.getCoordAtPosition(currentMap.getAbsoluteX(), currentMap.getAbsoluteY() + currentMap.getHeight());
    boolean bordersKnownAndValid = false;
    // Check that the borders do not contain NaN as longitudes and latitudes
    if (!Double.isNaN(NE.getLatitude())
            && !Double.isNaN(NE.getLongitude())
            && !Double.isNaN(SW.getLatitude())
            && !Double.isNaN(SW.getLongitude())) {
        // The borders can be used
        bordersKnownAndValid = true;
    }
    if (bordersKnownAndValid) {
        ArrayList<Report> localReports = (ArrayList<Report>) (Report.getReportsWithinBoundingBounds(NE, SW, selectedCategoryIdToBeShownOnMap).get(1));
        // Revalidate only if we have something new to show
        if (localReports.size() > 0) {
            currentMap.clearMapLayers();
            currentMap.addMarker(ParametresGeneraux.getCurrentUser().getUserIcon(),
                    new Coord(ParametresGeneraux.getCurrentUser().getCurrentUserLocation().getLatitude(),
                            ParametresGeneraux.getCurrentUser().getCurrentUserLocation().getLongitude()),
                    ParametresGeneraux.getCurrentUser().getUserNickname(), "", null);
            Image tempPlaceholder = Image.createImage(
                    reportImageWidth,
                    reportImageWidth,
                    ParametresGeneraux.accentColor);
            Graphics gr = tempPlaceholder.getGraphics();
            gr.setAntiAliased(true);
            gr.setColor(ParametresGeneraux.accentColor);
            gr.fillArc(0, 0, reportImageWidth, reportImageWidth, 0, 360);
            EncodedImage roundPlaceholder = EncodedImage.createFromImage(tempPlaceholder, true);
            // Add the reports on the map
            for (Report report : localReports) {
                String photoFilenameInStorage = Report.getFilename(report.getPhotoPath())
                        + ParametresGeneraux.SUFFIX_ON_MAP_IMAGE;
                EncodedImage reportIcon = EncodedImage.createFromImage(URLImage.createToStorage(roundPlaceholder,
                        photoFilenameInStorage,
                        report.getPhotoPath(),
                        ParametresGeneraux.RESIZE_SCALE_WITH_ROUND_MASK),
                        false); // we want a transparent PNG, otherwise it shows black edges
                currentMap.addMarker(reportIcon,
                        new Coord(report.getLocation().getLatitude(), report.getLocation().getLongitude()),
                        report.getCategory().getName(), "",
                        (evt) -> {
                            // Opens the detail form about this report
                            new ReportDetailsForm(theme, report, f.getClass()).show();
                        });
            }
            currentMap.setCameraPosition(new Coord(center.getLatitude(), center.getLongitude()));
            currentMap.zoom(new Coord(center.getLatitude(), center.getLongitude()),
                    ParametresGeneraux.getUserZoomLevelOnMap());
            currentMap.animate();
            //f.forceRevalidate();
        }
    }
}
So I guess the flickering in the simulator is a slow-motion version of what happens on the device, although the device does not even show the placeholder.
What should I do to make the markers appear with an image?
EDIT March 8th 2017
On the simulator, if I show a Dialog just before adding the marker to the map with this code:
Dialog.show("Photo", report.getAddress(), Dialog.TYPE_INFO, reportIcon, "OK", null);
then the icon is displayed correctly in the Dialog, and afterwards the image appears on the map without flickering any more.
However, on an actual Android device even the Dialog does not appear.
I don't know why showing the Dialog makes the markers behave as expected on the simulator but not on the device, so I am a bit at a loss!
Any help would be precious.
The problem is that URLImage may not have finished downloading by the time you add it as a marker. If you call EncodedImage.createFromImage(urlImage) before the URLImage has finished downloading, you'll be creating an encoded image of the URLImage's placeholder.
The com.codename1.io.Util class includes quite a few methods for downloading images from URLs. Some are blocking, and some use a callback. Either way, you just need to ensure that the image has actually been downloaded before adding it to a map.
NOTE: Normally this wouldn't be an issue with URLImage, e.g. if you were adding it to a Button or a Label. It is only a problem here because the MapContainer is native, and it actually needs to pass the image data to the native layer at the time that addMarker() is called.
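As a rough sketch of the callback route (I'm assuming the Util.downloadImageToStorage(url, fileName, onSuccess) overload here, so verify the signature against your Codename One version; note it won't apply the round mask/resize that your URLImage adapter did, you'd have to re-apply that yourself):
// Sketch: download first, then add the marker from the success callback so the
// native map layer receives the real pixels instead of the placeholder.
String photoFilenameInStorage = Report.getFilename(report.getPhotoPath())
        + ParametresGeneraux.SUFFIX_ON_MAP_IMAGE;
Util.downloadImageToStorage(report.getPhotoPath(), photoFilenameInStorage,
        downloadedImage -> {
            // The image is fully downloaded at this point; apply your own
            // mask/resize here if you still need the round look
            EncodedImage reportIcon = EncodedImage.createFromImage(downloadedImage, false);
            currentMap.addMarker(reportIcon,
                    new Coord(report.getLocation().getLatitude(),
                            report.getLocation().getLongitude()),
                    report.getCategory().getName(), "",
                    evt -> new ReportDetailsForm(theme, report, f.getClass()).show());
        });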

MediaCapture on Windows Phone and Windows 8.1: handling orientation in all scenarios

I have tried all the solutions I could find on the web, but none of them covers all the rotation and orientation cases.
Is there a complete, better solution or documentation that would let me use the MediaCapture object properly?
If you look at the CameraStarterKit sample from the Microsoft GitHub repository, you'll get a much better idea of how to handle rotation of the camera. It targets Windows 10, but a lot of the code should be portable back to 8.1.
Mainly, it comes down to this:
// Receive notifications about rotation of the device and UI and apply any necessary rotation to the preview stream and UI controls
private readonly DisplayInformation _displayInformation = DisplayInformation.GetForCurrentView();
private readonly SimpleOrientationSensor _orientationSensor = SimpleOrientationSensor.GetDefault();
private SimpleOrientation _deviceOrientation = SimpleOrientation.NotRotated;
private DisplayOrientations _displayOrientation = DisplayOrientations.Portrait;

// Rotation metadata to apply to the preview stream and recorded videos (MF_MT_VIDEO_ROTATION)
// Reference: http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh868174.aspx
private static readonly Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");

/// <summary>
/// Gets the current orientation of the UI in relation to the device (when AutoRotationPreferences cannot be honored) and applies a corrective rotation to the preview
/// </summary>
private async Task SetPreviewRotationAsync()
{
    // Only need to update the orientation if the camera is mounted on the device
    if (_externalCamera) return;

    // Calculate which way and how far to rotate the preview
    int rotationDegrees = ConvertDisplayOrientationToDegrees(_displayOrientation);

    // The rotation direction needs to be inverted if the preview is being mirrored
    if (_mirroringPreview)
    {
        rotationDegrees = (360 - rotationDegrees) % 360;
    }

    // Add rotation metadata to the preview stream to make sure the aspect ratio / dimensions match when rendering and getting preview frames
    var props = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
    props.Properties.Add(RotationKey, rotationDegrees);
    await _mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
}

/// <summary>
/// Registers event handlers for hardware buttons and orientation sensors, and performs an initial update of the UI rotation
/// </summary>
private void RegisterEventHandlers()
{
    // If there is an orientation sensor present on the device, register for notifications
    if (_orientationSensor != null)
    {
        _orientationSensor.OrientationChanged += OrientationSensor_OrientationChanged;

        // Update orientation of buttons with the current orientation
        UpdateButtonOrientation();
    }

    _displayInformation.OrientationChanged += DisplayInformation_OrientationChanged;
}
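For context, the ConvertDisplayOrientationToDegrees helper called above is also part of the sample; paraphrased from CameraStarterKit (double-check against the repo), it boils down to a simple mapping:
// Convert the UI orientation to a clockwise rotation in degrees for the
// preview stream metadata (paraphrased from the CameraStarterKit sample).
private static int ConvertDisplayOrientationToDegrees(DisplayOrientations orientation)
{
    switch (orientation)
    {
        case DisplayOrientations.Portrait:
            return 90;
        case DisplayOrientations.LandscapeFlipped:
            return 180;
        case DisplayOrientations.PortraitFlipped:
            return 270;
        case DisplayOrientations.Landscape:
        default:
            return 0;
    }
}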
But this is just part of the code. You should have a look at the full file (if not the full sample) to get a better understanding of how it works.

Flash Actionscript 3 getCamera bizarre discrepancies and artefacts on video

We're using the ActionScript getCamera() API to access the camera and stream it over RTMP. I'm not sure if anyone has come across this, but depending on the browser we get some really strange effects on the camera input. In some cases the camera image appears stretched, and in Firefox it often appears centered with a green bar on the right.
Is it possible to access the camera cropped to 16:9 on all browsers?
var videoWidth:Number = 427;
var videoHeight:Number = 240;

camera = Camera.getCamera();

// here are all the quality and performance settings
if (camera != null)
{
    camera.setMode(videoWidth, videoHeight, videoFrameRate, false); // false gives framerate priority apparently?? http://www.flash-communications.net/technotes/setMode/index.html
    camera.setQuality(videoBitrate, videoQuality);
    camera.setKeyFrameInterval(2);
}
else
{
    sourceVideoLabel.text = "No Camera Found\n";
}
(Screenshots of the camera output in Chrome and in Firefox were attached to the original question.)

AS3 Blitting is Slower than a Movieclip. Why?

I tried following a combination of Lee Brimelow's blitting tutorial series and the technique in Rex van der Spuy's "Advanced Game Design with Flash".
I am a developer working on a web-based virtual world made in Flash. I made a phone application (it works similarly to the phone in the Grand Theft Auto games). Anyway, when a message is sent, we want to play a crazy animation of an envelope flying around and transforming, with sparkles around it. It was laggy (especially on older computers), so I thought it would be a great chance to use blitting. However, the blitting animation actually plays slower than a regular MovieClip! What is going on here? Is blitting only better for mobile devices and actually slower on computers? Maybe I am doing something wrong. Here is my code:
// THIS PART HAPPENS WHEN THE PHONE IS INITIALIZED
//---------------- Blitting stuff ----------------------------------
// add this bitmap stage to the display list so we can see it
_bitmapStage = new BitmapData(550, 400, true, 0xD6D6D6);
_phoneItself.addChild(new Bitmap(_bitmapStage));

var _spritesheetClass:Class = getDefinitionByName("ESpritesheet_1") as Class;
_spritesheet = new _spritesheetClass() as BitmapData;

_envelopeBlit = new BlitSprite(_spritesheet, BlitConfig.envelopeAnimAry, _bitmapStage);
_envelopeBlit.x = -100;
_envelopeBlit.y = 0;

_envelopePlayTimer = new Timer(5, 0);
_envelopePlayTimer.addEventListener(TimerEvent.TIMER, onEnterTimerFrame);
_envelopeBlit.addEventListener("ENV_ANIM_DONE", onEnvAnimFinished);
// a "BlitSprite" is a class that I made. It looks like this:
package com.fs.util_j.blit_utils
{
    import flash.display.BitmapData;
    import flash.events.Event;
    import flash.events.EventDispatcher;
    import flash.geom.Point;
    import flash.geom.Rectangle;

    public class BlitSprite extends EventDispatcher
    {
        private var _fullSpriteSheet:BitmapData;
        private var _rects:Array;
        private var _bitmapStage:BitmapData;
        private var pos:Point = new Point();
        public var x:Number = 0;
        public var y:Number = 0;
        public var _animIndex:int = 0;
        private var _count:int = 0;
        public var animate:Boolean = true;
        private var _whiteTransparent:BitmapData;
        private var _envelopeAnimAry:Array;
        private var _model:Object;

        public function BlitSprite(fullSpriteSheet:BitmapData, envelopeAnimAry:Array, bitmapStage:BitmapData, model:Object = null)
        {
            _fullSpriteSheet = fullSpriteSheet;
            _envelopeAnimAry = envelopeAnimAry;
            _bitmapStage = bitmapStage;
            _model = model;
            init();
        }

        private function init():void
        {
            // _whiteTransparent = new BitmapData(100, 100, true, 0x80FFffFF);
            this.addEventListener("ENV_ANIM_DONE", onEvnAnimDone);
        }

        protected function onEvnAnimDone(event:Event):void
        {
        }

        public function render():void
        {
            // pos.x = x - _rects[_animIndex].width*.5;
            // pos.y = y - _rects[_animIndex].width*.5;
            // if (_count % 1 == 0 && animate == true)
            // {
            // trace("rendering");
            if (_animIndex == (_envelopeAnimAry.length - 1))
            {
                // _animIndex = 0;
                dispatchEvent(new Event("ENV_ANIM_DONE", true));
                animate = false;
                // trace("!!!!animate over " + _model.animOver);
                // if (_model != null)
                // {
                //     _model.animOver = true;
                // }
                // trace("!!!!animate over " + _model.animOver);
            }
            else
            {
                _animIndex++;
            }

            pos.x = x + _envelopeAnimAry[_animIndex][1];
            pos.y = y + _envelopeAnimAry[_animIndex][2];
            _bitmapStage.copyPixels(_fullSpriteSheet, _envelopeAnimAry[_animIndex][0], pos, null, null, true);
        }
    }
}
// THIS PART HAPPENS WHEN THE PHONE'S SEND BUTTON IS CLICKED
_envelopeBlit.animate = true;
_envelopeBlit._animIndex = 0;
_darkSquare.visible = true;
_envelopePlayTimer.addEventListener(TimerEvent.TIMER, onEnterTimerFrame);
_envelopePlayTimer.start();
It also uses BlitConfig, which stores the spritesheet info spit out by TexturePacker:
package com.fs.pack.phone.configuration
{
    import flash.geom.Rectangle;

    public final class BlitConfig
    {
        public static var _sending_message_real_20001:Rectangle = new Rectangle(300, 1020, 144, 102);
        public static var _sending_message_real_20002:Rectangle = new Rectangle(452, 1012, 144, 102);
        public static var _sending_message_real_20003:Rectangle = new Rectangle(852, 852, 146, 102);
        public static var _sending_message_real_20004:Rectangle = new Rectangle(2, 1018, 146, 102);
        public static var _sending_message_real_20005:Rectangle = new Rectangle(702, 822, 148, 102);
        // ...
        public static var _sending_message_real_20139:Rectangle = new Rectangle(932, 144, 1, 1);

        public static var envelopeAnimAry:Array = [
            // rectangle, x offset, y offset
            [_sending_message_real_20001, 184, 155],
            [_sending_message_real_20002, 184, 155],
            [_sending_message_real_20003, 183, 155],
            [_sending_message_real_20004, 183, 155],
            // ...
            [_sending_message_real_20139, 0, 0]
        ];

        public function BlitConfig()
        {
        }
    }
}
EDIT:
Knowing that this is not mobile, my answer below is irrelevant. I will leave it there, though, in case someone is having trouble with blitting on mobile in the future.
With regards to this specific question: you are running your timer every 5ms. First off, a Timer is not accurate below roughly 15ms, so that will never be a viable solution. For any Timer related to displaying something on the stage, you should never fire more often than once per frame (1000 / stage.frameRate, about 33ms for a 30fps app).
For blitting, the goal is to reduce calculations and rendering. The way you have this set up right now, you are blitting every 5ms. That is more than six times as often as the MovieClip is rendering. You should reduce how often you blit: only do it when a change has actually been made beyond translation. Doing it any more often than that is overkill and is the reason it is so slow (again, creating bitmaps is slow).
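To make that concrete, here is a minimal sketch (reusing the names from your question) that drives the blit from ENTER_FRAME instead of a 5ms Timer, so it never renders more often than the stage actually paints:
// Sketch: replace the 5ms Timer with the display list's own heartbeat.
_envelopeBlit.animate = true;
_envelopeBlit._animIndex = 0;
_phoneItself.addEventListener(Event.ENTER_FRAME, onEnterFrame);

function onEnterFrame(e:Event):void
{
    if (_envelopeBlit.animate)
    {
        _envelopeBlit.render(); // exactly one blit per displayed frame
    }
    else
    {
        // render() dispatched ENV_ANIM_DONE and cleared animate; detach the hook
        _phoneItself.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
    }
}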
In general, you do not want to blit in an AIR for Mobile application (which I assume you are doing since you mentioned the phone being initialized). I'm not sure if it is okay to do it using other/native SDKs, but avoid it in AIR.
Essentially, it comes down to how blitting works. Blitting takes a screen capture and displays that on the stage rather than the actual object. In general, this is great. It means that your display objects, particularly vectors which are slow to render, have to render far less often. It is especially good when animating because an object tends to re-render every time it is translated in any way, but not a bitmap.
On mobile platforms, however, creating that bitmap is incredibly slow. I've never looked into how the SDK creates the Bitmaps, but it doesn't do it efficiently (it often makes me wonder if it does it pixel-by-pixel). On desktops, this is generally fine. There is plenty of CPU and plenty of RAM to make this happen quickly. On mobile, however, that luxury is not there at the moment. So when you blit and create that bitmap, it takes a while to run that process.
The problem is exacerbated on high-resolution screens. An app I developed from January to May of this year selectively used blitting to use filters in a GPU accelerated environment. On an iPad 2, the blitting took my app from 30fps to ~24fps. Not a big deal, not anything the user would notice. On an iPad 3 with retina display, however, it dropped down to 10fps. It makes sense when you think about it, as retina iPads have 4x as many pixels as non-retina iPads do.
If you do want to use blitting on mobile, I recommend a few things:
Use GPU rendering mode. Without it, you stand no chance. Be aware that, at least with pre-AIR 3.7, filters were not supported in GPU mode. I am unsure if that is still the case. You should avoid using filters on mobile regardless, though, as they are very slow to render
Make sure to test a release-mode application. Depending on build settings, the difference between debug mode and a release mode app can be substantial, especially on iOS. An app I just developed went from taking 2-3 seconds to create a new Flex View in debug mode to less than a frame (~40ms) to do it in release mode on an iPhone 4
Use blitting sparingly. Only do it where absolutely necessary
Look for ways to simplify your display list. It is easy to have an object with 40 children to create a button. Instead, look for ways to simplify that into fewer objects and fewer filters (even if removing a filter requires you add another object). I don't believe this will help with the actual blitting process, but it should help with rendering the objects in the first place.
So in general, use blitting sparingly on mobile because bitmap creation is slow.

Kinetic.js: don't lose grip on mouse out

When you drag an object and the mouse leaves the rendering area, dragging stops (firing an event) and the user loses their grip.
It's extremely inconvenient, considering that all the other technologies (Flash, raw HTML5 Canvas, etc.) allow you to keep the grip even when the mouse is outside.
Is there a way to solve the problem?
UPDATE: For the moment I've solved the problem by changing the library file and binding the listeners to the document instead of the container. I know that it's bad to hack into library files, but after inspecting the library's source code I haven't found another way around it.
You could check whether the element has ended up out of sight after a drag and, if so, bring it back:
shape.on('dragend', function() {
    var pos = shape.getPosition();
    var layer = shape.getLayer(); // note: the layer comes from the shape, not the position

    if (pos.y < 0) {
        pos.y = 0;
    }

    var maxY = layer.getHeight() - shape.getHeight();
    if (pos.y > maxY) {
        pos.y = maxY;
    }

    shape.setPosition(pos);
});
Look at element.setCapture(). You can call it from within an event handler for a mouse event, e.g. mousedown:
function mouseDown(e) {
    e.target.setCapture();
    e.target.addEventListener("mousemove", mouseMoved, false);
}
Although browser support is a bit spotty (IE and Firefox support it; I'm not sure about other browsers), for cross-browser use you would have to fall back to the binding-on-the-document approach you've already hit upon.
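That fallback could look something like this (a sketch only: container stands for the stage's container element, and updateDraggedShape is a hypothetical helper for your own drag bookkeeping):
// Sketch: once a drag starts on the container, track the mouse at the document
// level so leaving the canvas doesn't end the drag.
var dragging = false;

container.addEventListener("mousedown", function () {
    dragging = true;
}, false);

document.addEventListener("mousemove", function (e) {
    if (dragging) {
        updateDraggedShape(e.clientX, e.clientY); // hypothetical helper
    }
}, false);

document.addEventListener("mouseup", function () {
    dragging = false; // release the grip only on a real mouseup
}, false);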