I am developing a running tracker / pedometer app using the Geolocator class. I have set the Geolocator's MovementThreshold property to 10. Here is my code.
Button click event:
private void StartButton_Click(object sender, RoutedEventArgs e)
{
    myLocator = new Geolocator();
    myLocator.DesiredAccuracy = PositionAccuracy.Default;
    myLocator.MovementThreshold = 10;
    myLocator.ReportInterval = 500;
    myLocator.PositionChanged += myGeoLocator_PositionChanged;
    _startTime = System.Environment.TickCount;
    _timer.Start();
}
void myGeoLocator_PositionChanged(Geolocator sender, PositionChangedEventArgs args)
{
    Dispatcher.BeginInvoke(() =>
    {
        var coord = new GeoCoordinate(args.Position.Coordinate.Latitude, args.Position.Coordinate.Longitude);
        if (_line.Path.Count > 0)
        {
            var previousPoint = _line.Path.Last();
            distance += coord.GetDistanceTo(previousPoint);
            var millisPerKilometer = (1000.0 / distance) * (System.Environment.TickCount - _previousPositionChangeTick);
            _kilometres += Math.Round(distance, 2);
            distanceLabel.Text = string.Format("{0:f2} meters", _kilometres);
            MessageBox.Show("Changed");
        }
        else
        {
            Map.Center = coord;
        }
        _line.Path.Add(coord);
        _previousPositionChangeTick = System.Environment.TickCount;
    });
}
The problem is that the PositionChanged event only gets called once. I am trying to debug the code in the emulator by changing the location points, but the event still does not fire. What am I doing wrong?
Your code will work on a real device. However, in order to test on the emulator, try setting the DesiredAccuracy property to High.
From How to test apps that use location data for Windows Phone:
If your app uses the GeoCoordinateWatcher class, you have to specify a value of GeoPositionAccuracy.High in the constructor or in the DesiredAccuracy property of the class before you can test your app with the location sensor simulator. If you leave the accuracy at its default value of GeoPositionAccuracy.Default, the PositionChanged event doesn’t recognize position changes that occur in the location sensor simulator.
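Applied to the Geolocator setup from the question, the change is a single line. This is a sketch; the rest of the setup stays exactly as you posted it:

```csharp
myLocator = new Geolocator();
// High accuracy is required for the emulator's location simulator to
// raise PositionChanged; Default works on real hardware but not here.
myLocator.DesiredAccuracy = PositionAccuracy.High;
myLocator.MovementThreshold = 10;
myLocator.ReportInterval = 500;
myLocator.PositionChanged += myGeoLocator_PositionChanged;
```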
There is also another workaround, which consists of running the native Maps app and seems to fix the problem:
1. Set a current location in the emulator.
2. Run your app. It reports the current location as Redmond.
3. Run the Maps application. It correctly goes to the location set in step 1.
4. Run your app again. Now it uses the correct current location.
Source: http://social.msdn.microsoft.com/Forums/wpapps/en-US/c2cc57b1-ba1f-48fb-b285-d6cfbb8f393a/windows-phone-8-emulator-returns-microsofts-location-only
Currently I'm using Lumia.Imaging to get the preview frame and display it.
I created a new method, GetPreview(), to go through the pixels, find the red pixels, and then calculate the mean value of the red pixels for every frame.
My problem is that while I'm going through the pixels there are lags in the app.
What is the proper way to calculate the mean of the red pixels for every frame without a performance loss?
Additionally, how do I turn on the flash light when the preview starts?
private async Task startCameraPreview()
{
    // Create a camera preview image source (from the Lumia Imaging SDK)
    _cameraPreviewImageSource = new CameraPreviewImageSource();

    // Find the id of the back camera
    DeviceInformationCollection devices = await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(Windows.Devices.Enumeration.DeviceClass.VideoCapture);
    String backCameraId = devices.FirstOrDefault(x => x.EnclosureLocation != null && x.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back).Id;

    await _cameraPreviewImageSource.InitializeAsync(backCameraId); // use the back camera
    var previewProperties = await _cameraPreviewImageSource.StartPreviewAsync();
    fps = previewProperties.FrameRate.Numerator / previewProperties.FrameRate.Denominator;
    _cameraPreviewImageSource.PreviewFrameAvailable += drawPreview; // call the drawPreview method every time a new frame is available

    // Create a preview bitmap with the correct aspect ratio using the properties object returned when the preview started.
    var width = 640.0;
    var height = (width / previewProperties.Width) * previewProperties.Height;
    var bitmap = new WriteableBitmap((int)width, (int)height);
    _writeableBitmap = bitmap;

    // Create a BitmapRenderer to turn the preview Image Source into a bitmap we hold in the PreviewBitmap object
    _effect = new FilterEffect(_cameraPreviewImageSource);
    _effect.Filters = new IFilter[0]; // no filters for now
    _writeableBitmapRenderer = new WriteableBitmapRenderer(_effect, _writeableBitmap);
}
private async void drawPreview(IImageSize args)
{
    // Prevent multiple rendering attempts at once
    if (_isRendering == false)
    {
        _isRendering = true;
        await _writeableBitmapRenderer.RenderAsync(); // Render the image (with no filter)

        // Draw the image onto the previewImage XAML element
        await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.High,
            () =>
            {
                getPreview();
                previewImage.Source = _writeableBitmap; // previewImage is an image element in MainPage.xaml
                _writeableBitmap.Invalidate(); // force the PreviewBitmap to redraw
            });
        _isRendering = false;
    }
}
private void getPreview()
{
    var pixelBuffer = _writeableBitmap.PixelBuffer;
    for (uint i = 0; i + 4 < pixelBuffer.Length; i += 4)
    {
        var red = pixelBuffer.GetByte(i + 2);
    }
}
Instead of inspecting all pixels after the Lumia Imaging SDK has processed the image, but before you invalidate the bitmap, you could:
Invalidate the writeable bitmap immediately, then do your analysis step in a separate async Task. That means the content will be displayed right away, and your analysis will be done separately. Some pseudocode based on your sample would be:
await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(CoreDispatcherPriority.High,
    async () => // the lambda must be async so it can await the analysis task
    {
        var analysisTask = Task.Run(() => getPreview());
        previewImage.Source = _writeableBitmap; // previewImage is an image element in MainPage.xaml
        _writeableBitmap.Invalidate(); // force the PreviewBitmap to redraw
        await analysisTask;
    });
This way the task of analysing the image doesn't block the update on the screen. Of course this option might not be viable if you need the result of the analysis in the rendering chain itself.
Create a custom filter for the analysis; this way you will be able to take advantage of the optimized Lumia Imaging SDK processing.
To get started on writing custom filters look at the documentation.
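For reference, such a filter could look something like the sketch below. It assumes the Lumia Imaging SDK's CustomEffectBase class, whose OnProcess override receives the pixels as packed ARGB uints; RedMeanEffect and RedMean are hypothetical names introduced here, not SDK members:

```csharp
// Hypothetical effect that passes pixels through unchanged while
// accumulating the mean of the red channel (assumes CustomEffectBase
// from the Lumia Imaging SDK; pixels are packed ARGB uints).
public class RedMeanEffect : CustomEffectBase
{
    public double RedMean { get; private set; }

    public RedMeanEffect(IImageProvider source) : base(source) { }

    protected override void OnProcess(PixelRegion sourcePixelRegion, PixelRegion targetPixelRegion)
    {
        long sum = 0, count = 0;
        sourcePixelRegion.ForEachRow((index, width, position) =>
        {
            for (int i = 0; i < width; i++)
            {
                uint pixel = sourcePixelRegion.ImagePixels[index + i];
                sum += (pixel >> 16) & 0xFF;                      // extract the red byte
                targetPixelRegion.ImagePixels[index + i] = pixel; // pass through unchanged
                count++;
            }
        });
        RedMean = count > 0 ? (double)sum / count : 0.0;
    }
}
```

You could then feed this effect into the existing renderer in place of the FilterEffect and read RedMean after each RenderAsync call.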
In my project, the accelerometer event works fine at the start of the game. When the game reaches the game-over page and I click the restart button, all the objects work correctly when restarting the whole game and all values are reset, but the accelerometer no longer works.
Thanks in Advance.
The code follows:
if (Accelerometer.isSupported)
{
    acc = new Accelerometer();
    acc.addEventListener(AccelerometerEvent.UPDATE, updateFn);
}

public function updateFn(e:AccelerometerEvent):void
{
    targetX = e.accelerationX * 9.8;
}
Just register the Accelerometer once the app is launched/activated, save its values to global variables on each accelerometer update, and disable it each time the app is put into the background / deactivated / exited. E.g.:
NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, handleApplicationDeactivated);
NativeApplication.nativeApplication.addEventListener(Event.ACTIVATE, handleApplicationActivated);

function handleApplicationActivated(e:Event):void {
    // Check if the Accelerometer is already activated
    if (acc != null) return;
    acc = new Accelerometer();
    acc.addEventListener(AccelerometerEvent.UPDATE, update);
}

function update(e:AccelerometerEvent):void {
    GlobalVars.accX = e.accelerationX;
    GlobalVars.accY = e.accelerationY;
    GlobalVars.accZ = e.accelerationZ;
}

function handleApplicationDeactivated(e:Event):void {
    acc.removeEventListener(AccelerometerEvent.UPDATE, update);
    acc = null;
}
Edit: you might want to use this activate/deactivate code instead: NativeApplication DEACTIVATE-ACTIVATE app when in the background, since NativeApplication has some issues.
Scenario:
I want a user to see a map and their current position. Then, if they click "start", navigation will begin and they'll see their "route" drawn onto the map as their position changes, similar to how some fitness apps work that map out your run/walk. The goal is to do this in real-time as the user's position changes.
Options:
The way I see it, there are two options:
1. Use a RouteQuery and Map.AddRoute from the starting position to the next position (when the position changes), keeping track of the last position and always drawing a new MapRoute from it to the new one.
2. Display the user's current position as a dot that moves as their position changes, and then, perhaps when they press "stop", draw a MapRoute through each of their positions in order to show their full route.
I'd really prefer option #1 because the user can see their route progression, etc., as they go.
Here is the code that I'm using:
XAML:
<maps:Map x:Name="MainMap" />
<Button x:Name="btnStart" Content="Start"/>
<Button x:Name="btnStop" Content="Stop" IsEnabled="False"/>
Code-behind:
Global Variables:
GeoCoordinateWatcher watcher;
List<GeoCoordinate> listCoordinates;
GeoCoordinate lastCoordinate;
btnStart.Tap():
private void btnStart_Tap(object sender, GestureEventArgs e)
{
    if (watcher == null)
    {
        watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);
        watcher.MovementThreshold = 20;
        watcher.StatusChanged += watcher_StatusChanged;
        watcher.PositionChanged += watcher_PositionChanged;
    }
    watcher.Start();
}
watcher.StatusChanged():
private void watcher_StatusChanged(object sender, GeoPositionStatusChangedEventArgs e)
{
    switch (e.Status)
    {
        case GeoPositionStatus.Initializing:
            btnStart.IsEnabled = false;
            btnStop.IsEnabled = true;
            break;
        case GeoPositionStatus.NoData:
            lblStatus.Text = "location data is not available.";
            break;
        case GeoPositionStatus.Ready:
            lblStatus.Text = "location data is available.";
            break;
    }
}
watcher.PositionChanged():
void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
    if (listCoordinates == null)
    {
        // first time through:
        listCoordinates = new List<GeoCoordinate>();
        listCoordinates.Add(e.Position.Location);
        lastCoordinate = e.Position.Location;
        return;
    }
    else
    {
        listCoordinates.Add(e.Position.Location);
        DrawRoute(e.Position.Location);
        lastCoordinate = e.Position.Location;
    }
}
DrawRoute function:
private void DrawRoute(GeoCoordinate newPosition)
{
    RouteQuery query = new RouteQuery()
    {
        TravelMode = TravelMode.Driving,
        Waypoints = new List<GeoCoordinate>() { MainMap.Center, newPosition }
    };
    query.QueryCompleted += RouteQueryCompleted;
    query.QueryAsync();
    MainMap.Center = newPosition;
    lastCoordinate = newPosition;
}
And finally, RouteQueryCompleted():
void RouteQueryCompleted(object sender, QueryCompletedEventArgs<Route> e)
{
    mapRoute = new MapRoute(e.Result);
    MainMap.AddRoute(mapRoute);
}
What happens:
It appears to work for a second as I begin driving: a short line is drawn where my start position is. But about 10 seconds in, a line is randomly drawn down a nearby street (probably equivalent to 3 or 4 blocks long) and then down another block on a side road, while the whole time I haven't even driven one block, let alone made any turns! It's very bizarre and definitely not accurate. I can upload a screenshot to better illustrate it if need be.
Can anyone see what I'm doing wrong in my code or is there a better way to accomplish this? I wasn't sure if this was the best way but I wasn't able to find any examples suggesting otherwise.
I ended up using MapPolyline to draw a line between the last GeoCoordinate and the new one.
MapPolyline line = new MapPolyline();
line.StrokeColor = Colors.Blue;
line.StrokeThickness = 15;
line.Path.Add(lastCoordinate);
line.Path.Add(pos);
MainMap.MapElements.Add(line);
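A variation on this, sketched against the watcher_PositionChanged handler from the question: create the polyline once and append each new coordinate to its Path, so a single continuous line grows as positions arrive (routeLine and InitRouteLine are names made up for this sketch):

```csharp
MapPolyline routeLine;

// Call once, e.g. from btnStart_Tap, before the watcher starts.
private void InitRouteLine()
{
    routeLine = new MapPolyline();
    routeLine.StrokeColor = Colors.Blue;
    routeLine.StrokeThickness = 5;
    MainMap.MapElements.Add(routeLine);
}

void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
    routeLine.Path.Add(e.Position.Location); // the map redraws the extended line
    MainMap.Center = e.Position.Location;    // keep the view following the user
}
```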
I am not sure why you are using RouteQuery for this task. Generally, you use it when you want the map SDK to determine a route for you given a set of coordinates. In your case, however, you always know where you are through the PositionChanged event. It will be easier to plot directly on the map as you move.
Something like this
void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
    Plot(e.Position.Location);
}

void Plot(GeoCoordinate pos)
{
    var ellipse = new Ellipse();
    ellipse.Fill = new SolidColorBrush(System.Windows.Media.Colors.Blue);
    ellipse.Height = 15;
    ellipse.Width = 15;
    ellipse.Opacity = 0.25; // Opacity ranges from 0 to 1
    var mapOverlay = new MapOverlay();
    mapOverlay.Content = ellipse;
    mapOverlay.PositionOrigin = new System.Windows.Point(0.5, 0.5);
    mapOverlay.GeoCoordinate = pos;
    var mapLayer = new MapLayer();
    mapLayer.Add(mapOverlay);
    MainMap.Layers.Add(mapLayer);
}
I'm using the PhotoCaptureDevice class and I can capture the camera frame, but I am getting an error while copying the image data from the CameraCaptureFrame.CaptureStream of the CameraCaptureSequence into a MemoryStream and then saving it to the Camera Roll. This is a code snippet of what I'm trying to do.
PhotoCaptureDevice cam;
cam = await PhotoCaptureDevice.OpenAsync(<front/rear depending on user input>, <resolution depends on user input>);
CameraCaptureSequence seq;
seq = cam.CreateCaptureSequence(1);
cam.SetProperty(KnownCameraGeneralProperties.PlayShutterSoundOnCapture, true);
MemoryStream captureStream1 = new MemoryStream();
seq.Frames[0].CaptureStream = captureStream1.AsOutputStream(); // This stream is for saving the image data to the camera roll
await cam.PrepareCaptureSequenceAsync(seq);
await seq.StartCaptureAsync();
bool a = seq.Frames[0].CaptureStream.Equals(0); // This value is false during debugging
if (captureStream1.Length > 0) // This condition evaluates to false
{
    MediaLibrary library = new MediaLibrary();
    Picture picture1 = library.SavePictureToCameraRoll("image1", captureStream1);
}
else
{
    // Logic to handle this condition
}
As I've noted in the comments, the variable bool a evaluates to false, which I checked by debugging the code. But for some reason the captureStream1.Length property is 0.
Here's a code snippet that captures a sequence with a single image and saves that image to the MediaLibrary. Obviously this is a bit of a trivial example for this API, since sequences are really good for capturing multiple images and meshing them together with post-processing.
private async void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    using (MemoryStream stream = new MemoryStream())
    using (var camera = await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back,
        PhotoCaptureDevice.GetAvailableCaptureResolutions(CameraSensorLocation.Back).First()))
    {
        var sequence = camera.CreateCaptureSequence(1);
        sequence.Frames[0].CaptureStream = stream.AsOutputStream();
        await camera.PrepareCaptureSequenceAsync(sequence); // note: this must be awaited
        await sequence.StartCaptureAsync();
        stream.Seek(0, SeekOrigin.Begin); // rewind the stream before handing it to the MediaLibrary
        using (var library = new MediaLibrary())
        {
            library.SavePictureToCameraRoll("currentImage.jpg", stream);
        }
    }
}
When you run this code snippet you can see the image stored on the device.
You can find a full working sample as part of Nokia's Camera Explorer app that demos end-to-end usecases for the WP8 camera APIs: http://projects.developer.nokia.com/cameraexplorer
I'm working with the Spark video components; however, the Spark videoObject is null, and when using a dynamic video source object it is still null. Cameras are being detected properly; however, when using a variable it's null, and when using the Camera object directly the USB camera is detected but videoObject is still null... any ideas?
Now, when using Camera.names, all "cameras" are null, yet when playing a video from an Apache virtual host it plays well. This is so weird!
As requested, updated code:
import mx.controls.Alert;
import mx.events.FlexEvent;
import spark.components.VideoPlayer;

private var vidPlyr:VideoPlayer = null;

protected function winAppCreated(event:FlexEvent):void {
    // Video Player
    vidPlyr = new VideoPlayer();
    vidPlyr.width = 320;
    vidPlyr.height = 240;
    // Video from an Apache virtual host:
    vidPlyr.source = "http://flex.test.capimg/JormaKaukonenCracksInTheFinish.flv";
    addElement(vidPlyr);

    var cameraTV:Camera = Camera.getCamera(Camera.names[0]);
    var cameraUSB:Camera = Camera.getCamera(Camera.names[1]);
    if (cameraTV) {
        vidPlyr.videoDisplay.videoObject.attachCamera(cameraTV);
    } else {
        Alert.show("no TV card - " + Camera.names[0]);
        // Alert shows: "no TV card - SAA7130 Analog TV Card"
    }
    if (cameraUSB) {
        vidPlyr.videoDisplay.videoObject.attachCamera(cameraUSB);
    } else {
        Alert.show("no USB camera - " + Camera.names[1]);
        // Alert shows: "no USB camera - USB2.0 Grabber"
    }
}
This is a screenshot of the running app.
I took a look at the VideoPlayer code; a lot of this class's properties have setters that look like this:
public function set source(value:Object):void
{
    if (videoDisplay)
    {
        // do the real work
    }
    else
    {
        // store the value so we can use it later
    }
}
VideoDisplay is a skin part of the VideoPlayer class. When you set the source, the skin may not have initialized the videoObject property yet. I would set the source and then wait before trying to attach the camera.
Flex has a callLater() method that may solve this problem. callLater() will execute a function you specify on the next Flex update cycle:
// after setting the source
callLater(attachCamera);

// define a new function 'attachCamera' to call later
private function attachCamera():void
{
    // if the videoObject property is not null
    if (vidPlyr.videoDisplay.videoObject != null)
    {
        // attach the camera here
    }
    else
    {
        trace("cannot attach the camera, videoObject is still null");
    }
}
[Edit]
The API to get a camera is strange; the signature is:
public static function getCamera(name:String = null):Camera
But that name argument is not the actual name of the camera. It is supposed to be a String representation of the camera's index in the Camera.names array. Quoting the docs:
name:String (default = null) — Specifies which camera to get, as determined from the array returned by the names property. For most applications, get the default camera by omitting this parameter. To specify a value for this parameter, use the string representation of the zero-based index position within the Camera.names array. For example, to specify the third camera in the array, use Camera.getCamera("2").
Try doing something more generic like this when you attach the camera with callLater(attachCamera):
private function attachCamera():void
{
    var cameras:Array = Camera.names;
    var length:int = cameras.length;
    var cameraObjects:Array = [];
    for (var i:int = 0; i < length; i++)
    {
        cameraObjects.push( Camera.getCamera( i.toString() ) );
    }
    // use your own logic to select a camera, if there's more than one
    if (cameraObjects.length > 0 && vidPlyr.videoDisplay.videoObject != null)
    {
        vidPlyr.videoDisplay.videoObject.attachCamera( cameraObjects[0] );
    }
}