MediaCapture: handling orientation in a Windows Phone 8.1 / Windows 8.1 app across all scenarios - windows-phone-8.1

I have tried all the solutions I could find online, but none of them cover every rotation and orientation case.
Is there a complete solution, or better documentation, that would help me use the MediaCapture object correctly?

If you look at the CameraStarterKit sample from the Microsoft GitHub repository, you'll get a much better idea of how to handle rotation of the camera. It targets Windows 10, but a lot of the code should be portable back to 8.1.
Mainly, it comes down to this:
// Receive notifications about rotation of the device and UI and apply any necessary rotation to the preview stream and UI controls
private readonly DisplayInformation _displayInformation = DisplayInformation.GetForCurrentView();
private readonly SimpleOrientationSensor _orientationSensor = SimpleOrientationSensor.GetDefault();
private SimpleOrientation _deviceOrientation = SimpleOrientation.NotRotated;
private DisplayOrientations _displayOrientation = DisplayOrientations.Portrait;
// Rotation metadata to apply to the preview stream and recorded videos (MF_MT_VIDEO_ROTATION)
// Reference: http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh868174.aspx
private static readonly Guid RotationKey = new Guid("C380465D-2271-428C-9B83-ECEA3B4A85C1");
/// <summary>
/// Gets the current orientation of the UI in relation to the device (when AutoRotationPreferences cannot be honored) and applies a corrective rotation to the preview
/// </summary>
private async Task SetPreviewRotationAsync()
{
    // Only need to update the orientation if the camera is mounted on the device
    if (_externalCamera) return;

    // Calculate which way and how far to rotate the preview
    int rotationDegrees = ConvertDisplayOrientationToDegrees(_displayOrientation);

    // The rotation direction needs to be inverted if the preview is being mirrored
    if (_mirroringPreview)
    {
        rotationDegrees = (360 - rotationDegrees) % 360;
    }

    // Add rotation metadata to the preview stream to make sure the aspect ratio / dimensions match when rendering and getting preview frames
    var props = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview);
    props.Properties.Add(RotationKey, rotationDegrees);
    await _mediaCapture.SetEncodingPropertiesAsync(MediaStreamType.VideoPreview, props, null);
}
/// <summary>
/// Registers event handlers for hardware buttons and orientation sensors, and performs an initial update of the UI rotation
/// </summary>
private void RegisterEventHandlers()
{
    // If there is an orientation sensor present on the device, register for notifications
    if (_orientationSensor != null)
    {
        _orientationSensor.OrientationChanged += OrientationSensor_OrientationChanged;

        // Update orientation of buttons with the current orientation
        UpdateButtonOrientation();
    }

    _displayInformation.OrientationChanged += DisplayInformation_OrientationChanged;
}
But this is just part of the code. You should have a look at the full file (if not the full sample) to get a better understanding of how it works.
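For reference, here is a rough sketch of the helpers referenced above (ConvertDisplayOrientationToDegrees and the display-orientation handler). It is adapted from memory of the sample rather than copied verbatim, and the _isPreviewing flag is an assumption, so treat it as an approximation:

// Maps the current display orientation to a clockwise rotation in degrees that can
// be written into the preview stream's rotation metadata.
private static int ConvertDisplayOrientationToDegrees(DisplayOrientations orientation)
{
    switch (orientation)
    {
        case DisplayOrientations.Portrait:
            return 90;
        case DisplayOrientations.LandscapeFlipped:
            return 180;
        case DisplayOrientations.PortraitFlipped:
            return 270;
        case DisplayOrientations.Landscape:
        default:
            return 0;
    }
}

// Cache the new orientation and re-apply the preview rotation whenever the display rotates.
private async void DisplayInformation_OrientationChanged(DisplayInformation sender, object args)
{
    _displayOrientation = sender.CurrentOrientation;

    if (_isPreviewing) // assumption: a flag set once the preview has started successfully
    {
        await SetPreviewRotationAsync();
    }
}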

Related

Loading Blender animation into libGDX

Hi Blender & libGDX Gurus,
I am trying to load a Blender animation into libGDX for an Android app. I created an animation using Blender's Action Editor and exported it to .fbx format. I then ran the fbx-conv tool to create a .g3db file, and loaded that file into libGDX using the model loader.
Everything seems to work fine (I am not receiving any error messages), except that the screen is blank. I can't see the model or any animations.
I have run this code on a Samsung Galaxy tablet running KitKat, a Nexus phone running Marshmallow, and an emulator, all with the same result.
I went through the tutorials and am using some of their code to load one of my Blender models, but I still can't figure this out and need help.
Any help will be greatly appreciated.
Here is the link to the Blender file:
Animated Low-Poly Horse No Lights and no Camera
Here is my code where I am uploading the model in libGDX. I basically am using the code from the tutorials.
@Override
public void create () {
// Create camera sized to screens width/height with Field of View of 75 degrees
camera = new PerspectiveCamera(
75,
Gdx.graphics.getWidth(),
Gdx.graphics.getHeight());
// Move the camera 5 units back along the z-axis and look at the origin
camera.position.set(0f,0f,7f); // move the camera back along the z-axis and look at the origin
camera.lookAt(0f,0f,0f);
// Near and Far (plane) represent the minimum and maximum ranges of the camera in, um, units
camera.near = 0.1f;
camera.far = 300.0f;
camera.update();
// A ModelBatch to batch up geometry for OpenGL
modelBatch = new ModelBatch();
// Model loader needs a binary json reader to decode
UBJsonReader jsonReader = new UBJsonReader();
// Create a model loader passing in our json reader
G3dModelLoader modelLoader = new G3dModelLoader(jsonReader);
// Now load the model by name
// Note, the model (g3db file ) and textures need to be added to the assets folder of the Android proj
model = modelLoader.loadModel(Gdx.files.getFileHandle("AnimatedLowPolyHorseStageFenced_Ver5.g3db", Files.FileType.Internal));
// Now create an instance. Instance holds the positioning data, etc of an instance of your model
modelInstance = new ModelInstance(model);
//fbx-conv is supposed to perform this rotation for you... it doesn't seem to
modelInstance.transform.rotate(1, 0, 0, -90);
//move the model down a bit on the screen ( in a z-up world, down is -z ).
modelInstance.transform.translate(0, 0, -2);
// Finally we want some light, or we wont see our color. The environment gets passed in during
// the rendering process. Create one, then create an Ambient ( non-positioned, non-directional ) light.
environment = new Environment();
environment.set(new ColorAttribute(ColorAttribute.AmbientLight, 0.8f, 0.8f, 0.8f, 1.0f));
environment.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
// You use an AnimationController to um, control animations. Each control is tied to the model instance
controller = new AnimationController(modelInstance);
// Pick the current animation by name
controller.setAnimation("Armature|ArmatureAction",1, new AnimationListener(){
@Override
public void onEnd(AnimationDesc animation) {
// this will be called when the current animation is done.
// queue up another animation called "balloon".
// Passing a negative to loop count loops forever. 1f for speed is normal speed.
//controller.queue("Armature|ArmatureAction",-1,1f,null,0f);
}
@Override
public void onLoop(AnimationDesc animation) {
// TODO Auto-generated method stub
}
});
}
@Override
public void resize(int width, int height) {
super.resize(width, height);
}
@Override
public void render () {
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
//Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
// For some flavor, lets spin our camera around the Y axis by 1 degree each time render is called
// You need to call update on the animation controller so it will advance the animation. Pass in frame delta
controller.update(Gdx.graphics.getDeltaTime());
// Like spriteBatch, just with models! pass in the box Instance and the environment
modelBatch.begin(camera);
modelBatch.render(modelInstance, environment);
modelBatch.end();
}
When converting to G3DB with fbx-conv you got a warning:
"Mesh contains vertices with zero bone weights".
Try the following steps:
- Add a new bone to your .blend file
- Connect it to non-animated (or all) vertices
- Re-export & convert
If you still get the warning, repeat but connect the new bone to all vertices.
I know this is an old question, but I had a similar problem recently and this worked.

AS3 Blitting is Slower than a Movieclip. Why?

I tried following a combination of Lee Brimlow's blitting tutorial series and the technique in Rex van der Spuy's "Advanced Game Design with Flash".
I am a developer working on an online virtual world made in Flash. I made a phone application (it works similarly to the phone in the Grand Theft Auto games). When a message is sent, we want to play a crazy animation of an envelope flying around and transforming, with sparkles around it. It was laggy (especially on older computers), so I thought it would be a great chance to use blitting. However, the blitting animation actually plays slower than a regular MovieClip! What is going on here? Is blitting only better for mobile devices and actually slower on computers? Maybe I am doing something wrong. Here is my code:
// THIS PART HAPPENS WHEN THE PHONE IS INITIALIZED
//**
//---------------- Blitting stuff ----------------------------------
// add this bitmap stage to the display list so we can see it
_bitmapStage = new BitmapData(550, 400, true, 0xD6D6D6);
_phoneItself.addChild(new Bitmap(_bitmapStage));
var _spritesheetClass:Class = getDefinitionByName("ESpritesheet_1") as Class;
_spritesheet = new _spritesheetClass() as BitmapData;
_envelopeBlit = new BlitSprite(_spritesheet, BlitConfig.envelopeAnimAry , _bitmapStage);
_envelopeBlit.x = -100;
_envelopeBlit.y = 0;
_envelopePlayTimer = new Timer(5, 0);
_envelopePlayTimer.addEventListener(TimerEvent.TIMER, onEnterTimerFrame);
_envelopeBlit.addEventListener("ENV_ANIM_DONE", onEnvAnimFinished);
// a "BlitSprite" is a class that I made. It looks like this:
package com.fs.util_j.blit_utils
{
import flash.display.BitmapData;
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.geom.Point;
import flash.geom.Rectangle;
public class BlitSprite extends EventDispatcher
{
private var _fullSpriteSheet:BitmapData;
private var _rects:Array;
private var _bitmapStage:BitmapData;
private var pos:Point = new Point ();
public var x:Number = 0;
public var y:Number = 0;
public var _animIndex:int = 0;
private var _count:int = 0;
public var animate:Boolean = true;
private var _whiteTransparent:BitmapData;
private var _envelopeAnimAry:Array;
private var _model:Object;
public function BlitSprite(fullSpriteSheet:BitmapData, envelopeAnimAry:Array, bitmapStage:BitmapData, model:Object = null)
{
_fullSpriteSheet = fullSpriteSheet;
_envelopeAnimAry = envelopeAnimAry;
_bitmapStage = bitmapStage;
_model= model;
init();
}
private function init():void
{
// _whiteTransparent = new BitmapData(100, 100, true, 0x80FFffFF);
this.addEventListener("ENV_ANIM_DONE", onEvnAnimDone);
}
protected function onEvnAnimDone(event:Event):void
{
}
public function render():void
{
// pos.x = x - _rects[_animIndex].width*.5;
// pos.y = y - _rects[_animIndex].width*.5;
// if (_count % 1 == 0 && animate == true)
// {
// trace("rendering");
if (_animIndex == (_envelopeAnimAry.length - 1) )
{
// _animIndex = 0;
dispatchEvent(new Event("ENV_ANIM_DONE", true));
animate = false;
// trace("!!!!animate over " + _model.animOver);
// if (_model != null)
// {
// _model.animOver = true;
// }
// trace("!!!!animate over " + _model.animOver);
}
else
{
_animIndex++;
}
pos.x = x + _envelopeAnimAry[_animIndex][1];
pos.y = y + _envelopeAnimAry[_animIndex][2];
_bitmapStage.copyPixels(_fullSpriteSheet, _envelopeAnimAry[_animIndex][0], pos, null, null, true);
}
}
}
// THIS PART HAPPENS WHEN PHONE'S SEND BUTTON IS CLICKED
_envelopeBlit.animate = true;
_envelopeBlit._animIndex = 0;
_darkSquare.visible = true;
_envelopePlayTimer.addEventListener(TimerEvent.TIMER, onEnterTimerFrame);
_envelopePlayTimer.start();
It also uses BlitConfig, which stores the spritesheet information produced by TexturePacker:
package com.fs.pack.phone.configuration
{
import flash.geom.Rectangle;
public final class BlitConfig
{
public static var _sending_message_real_20001:Rectangle = new Rectangle(300,1020,144,102);
public static var _sending_message_real_20002:Rectangle = new Rectangle(452,1012,144,102);
public static var _sending_message_real_20003:Rectangle = new Rectangle(852,852,146,102);
public static var _sending_message_real_20004:Rectangle = new Rectangle(2,1018,146,102);
public static var _sending_message_real_20005:Rectangle = new Rectangle(702,822,148,102);
.
.
.
public static var _sending_message_real_20139:Rectangle = new Rectangle(932,144,1,1);
public static var envelopeAnimAry:Array = [
// rectangle, x offset, y offset
[ _sending_message_real_20001, 184,155],
[ _sending_message_real_20002, 184,155],
[ _sending_message_real_20003, 183,155],
[ _sending_message_real_20004, 183,155],
.
.
.
[ _sending_message_real_20139, 0,0]
]
public function BlitConfig()
{
}
}
}
EDIT:
Knowing that this is not mobile, my answer below is irrelevant. I will leave it there, though, in case someone is having trouble with blitting on mobile in the future.
With regards to this specific question, you are running your timer every 5ms. First off, the smallest interval at which a Timer is accurate is >15ms, so that will never be a viable solution. For any Timer related to displaying something on the stage, you should never use an interval shorter than a single frame (1000/stage.frameRate, ~33ms for a 30fps app).
For blitting, the goal is to reduce calculations and rendering. The way you have this set up right now, you are blitting every 5ms. That is more than six times as often as the MovieClip is rendering. You should reduce how often you blit: only do it when a change has actually been made beyond translation. Doing it any more often than that is overkill and is the reason it is so slow (again, creating bitmaps is slow).
In general, you do not want to blit in an AIR for Mobile application (which I assume you are doing since you mentioned the phone being initialized). I'm not sure if it is okay to do it using other/native SDKs, but avoid it in AIR.
Essentially, it comes down to how blitting works. Blitting takes a screen capture and displays that on the stage rather than the actual object. In general, this is great. It means that your display objects, particularly vectors which are slow to render, have to render far less often. It is especially good when animating because an object tends to re-render every time it is translated in any way, but not a bitmap.
On mobile platforms, however, creating that bitmap is incredibly slow. I've never looked into how the SDK creates the Bitmaps, but it doesn't do it efficiently (it often makes me wonder if it does it pixel-by-pixel). On desktops, this is generally fine. There is plenty of CPU and plenty of RAM to make this happen quickly. On mobile, however, that luxury is not there at the moment. So when you blit and create that bitmap, it takes a while to run that process.
The problem is exacerbated on high-resolution screens. An app I developed from January to May of this year selectively used blitting to use filters in a GPU accelerated environment. On an iPad 2, the blitting took my app from 30fps to ~24fps. Not a big deal, not anything the user would notice. On an iPad 3 with retina display, however, it dropped down to 10fps. It makes sense when you think about it, as retina iPads have 4x as many pixels as non-retina iPads do.
If you do want to use blitting on mobile, I recommend a few things:
Use GPU rendering mode. Without it, you stand no chance. Be aware that, at least with pre-AIR 3.7, filters were not supported in GPU mode. I am unsure if that is still the case. You should avoid using filters on mobile regardless, though, as they are very slow to render
Make sure to test a release-mode application. Depending on build settings, the difference between debug mode and a release mode app can be substantial, especially on iOS. An app I just developed went from taking 2-3 seconds to create a new Flex View in debug mode to less than a frame (~40ms) to do it in release mode on an iPhone 4
Use blitting sparingly. Only do it where absolutely necessary
Look for ways to simplify your display list. It is easy to have an object with 40 children to create a button. Instead, look for ways to simplify that into fewer objects and fewer filters (even if removing a filter requires you add another object). I don't believe this will help with the actual blitting process, but it should help with rendering the objects in the first place.
So in general, use blitting sparingly on mobile because bitmap creation is slow.

How do I create a WP8 Live Tile with a custom layout?

I'm looking to create a Live Tile which is similar to the Iconic tile but allows me to use a custom Count value (i.e. non-integer string).
The closest I've come is the idea that I must render the contents to a bitmap and then use that image as the tile. Unfortunately, I don't know how this is commonly done.
I'm looking to create tiles similar to the one that's described in this question (though this question is orthogonal to my issue): Custom live tile rendering issue on Windows Phone (7/8)
In short,
Is WriteableBitmap the best way of creating Live Tile layouts?
Is there a mechanism by which I can convert XAML into the Live Tile?
An example of the layout I'd like to achieve is somewhat displayed in the Skype Live Tile seen here.
As far as I can tell, creating a custom bitmap is the way to go. I found this answer along with this article to be very helpful when I was doing my live tiles.
If you don't mind purchasing third-party controls you can check out Telerik's LiveTileHelper control (if you're a member of Nokia's developer program you already have access to this).
For my first app I opted to roll my own solution based on the first two links. I have a base class that handles the work of taking a FrameworkElement (each derived class is responsible for generating the FrameworkElement that contains the information to render) and creating the corresponding WriteableBitmap instance, which I then save as a .PNG using the ToolStack C# PNG Writer Library.
As an example, here's my code to generate the control that represents a small pinned secondary tile in one of my apps:
/// <summary>
/// Returns the fully populated and initialized control that displays
/// the information that should be included in the tile image.
/// </summary>
/// <remarks>
/// We manually create the control in code instead of using a user control
/// to avoid having to use the XAML parser when we do this work in our
/// background agent.
/// </remarks>
/// <returns>
/// The fully populated and initialized control that displays
/// the information that should be included in the tile image.
/// </returns>
protected override FrameworkElement GetPopulatedTileImageControl()
{
var layoutRoot = new Grid()
{
Background = new System.Windows.Media.SolidColorBrush( System.Windows.Media.Color.FromArgb( 0, 0, 0, 0 ) ),
HorizontalAlignment = HorizontalAlignment.Stretch,
VerticalAlignment = VerticalAlignment.Stretch,
Height = TileSize.Height,
Width = TileSize.Width,
Margin = new Thickness( 0, 12, 0, 0 )
};
var stopName = new TextBlock()
{
Text = Stop.Description,
TextTrimming = TextTrimming.WordEllipsis,
TextWrapping = TextWrapping.Wrap,
Margin = new Thickness( 7, 0, 7, 12 ),
MaxHeight = 135,
Width = TileSize.Width - 14,
VerticalAlignment = VerticalAlignment.Bottom,
HorizontalAlignment = HorizontalAlignment.Stretch,
FontFamily = (System.Windows.Media.FontFamily) Application.Current.Resources[ "PhoneFontFamilySemiBold" ],
FontSize = (double) Application.Current.Resources[ "PhoneFontSizeMediumLarge" ],
Style = (Style) Application.Current.Resources[ "PhoneTextNormalStyle" ]
};
Grid.SetColumn( stopName, 0 );
Grid.SetRow( stopName, 0 );
layoutRoot.Children.Add( stopName );
return layoutRoot;
}
This is a super-simple control with just a TextBlock, but you can easily expand on this. Note that I don't use a UserControl here as I also run this code in a background agent where you have significant memory constraints.
Once I have a control I generate a WritableBitmap like this:
/// <summary>
/// Renders the tile image to a <see cref="WriteableBitmap"/> instance.
/// </summary>
/// <returns>
/// A <see cref="WriteableBitmap"/> instance that contains the rendered
/// tile image.
/// </returns>
private WriteableBitmap RenderTileImage()
{
var tileControl = GetPopulatedTileImageControl();
var controlSize = new Size( TileSize.Width, TileSize.Height );
var tileImage = new WriteableBitmap( (int) TileSize.Width, (int) TileSize.Height );
// The control we're rendering must never be smaller than the tile
// we're generating.
tileControl.MinHeight = TileSize.Height;
tileControl.MinWidth = TileSize.Width;
// Force layout to take place.
tileControl.UpdateLayout();
tileControl.Measure( TileSize );
tileControl.Arrange( new Rect( new Point( 0, 0 ), TileSize ) );
tileControl.UpdateLayout();
tileImage.Render( tileControl, null );
tileImage.Invalidate();
tileControl = null;
GC.Collect( 2, GCCollectionMode.Forced, true );
// Adjust the rendered bitmap to handle the alpha channel better.
CompensateForRender( tileImage );
return tileImage;
}
Again, I'm making explicit calls to GC.Collect to help keep my memory consumption under control when running this code as part of my background agent. The CompensateForRender method is based on the code in the linked article.
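For completeness, here is a rough sketch of how the rendered bitmap can then be persisted and applied to a secondary tile. The file name and the tile lookup are hypothetical, and I'm showing the stock SaveJpeg extension for brevity where my real code uses the ToolStack PNG writer to keep the alpha channel (requires System.IO.IsolatedStorage, System.Windows.Media.Imaging, System.Linq and Microsoft.Phone.Shell):

// Illustrative sketch only: persist the rendered bitmap to isolated storage and
// point an existing secondary tile at it.
private void SaveAndApplyTileImage(WriteableBitmap tileImage, Uri tileNavigationUri)
{
    const string fileName = "/Shared/ShellContent/StopTile.jpg"; // hypothetical file name

    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var stream = store.CreateFile(fileName))
    {
        // SaveJpeg is the built-in WP8 extension; swap in a PNG writer if you need transparency.
        tileImage.SaveJpeg(stream, tileImage.PixelWidth, tileImage.PixelHeight, 0, 100);
    }

    var tile = ShellTile.ActiveTiles.FirstOrDefault(t => t.NavigationUri == tileNavigationUri);
    if (tile != null)
    {
        tile.Update(new FlipTileData
        {
            BackgroundImage = new Uri("isostore:" + fileName, UriKind.Absolute)
        });
    }
}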
Hope this helps.

Programmatically zooming the AudioVideoCaptureDevice?

Anybody know how to programmatically zoom the AudioVideoCaptureDevice in Windows Phone 8?
I am using AudioVideoCaptureDevice (and yes, I want that specific device so I can control the VideoTorchMode property). I can't for the life of me figure out the zooming though. I am painting a Canvas using a VideoBrush mapped to the AudioVideoCaptureDevice. I'd like to implement Pinch-Zoom or even a simple +/- button to Zoom the camera.
What am I missing?
I'm not familiar with any API in WP8 that would allow you to programmatically set the zoom on a PhotoCaptureDevice/AudioVideoCaptureDevice. My theory is that you can do it manually by implementing your own pinch-to-zoom functionality and making sure the visible region is focused.
For information on how to focus on a region using the WP8 camera APIs, see Nokia's Camera Explorer. The core of what you're looking for can be found in this architectural guide under "tap-to-focus".
private async void videoCanvas_Tap(object sender, GestureEventArgs e)
{
System.Windows.Point uiTapPoint = e.GetPosition(VideoCanvas);
if (_focusSemaphore.WaitOne(0))
{
// Get tap coordinates as a foundation point
Windows.Foundation.Point tapPoint = new Windows.Foundation.Point(uiTapPoint.X, uiTapPoint.Y);
double xRatio = VideoCanvas.ActualWidth / _dataContext.Device.PreviewResolution.Width;
double yRatio = VideoCanvas.ActualHeight / _dataContext.Device.PreviewResolution.Height;
// adjust to center focus on the tap point
Windows.Foundation.Point displayOrigin = new Windows.Foundation.Point(
tapPoint.X - _focusRegionSize.Width / 2,
tapPoint.Y - _focusRegionSize.Height / 2);
// adjust for resolution difference between preview image and the canvas
Windows.Foundation.Point viewFinderOrigin = new Windows.Foundation.Point(displayOrigin.X / xRatio, displayOrigin.Y / yRatio);
Windows.Foundation.Rect focusrect = new Windows.Foundation.Rect(viewFinderOrigin, _focusRegionSize);
// clip to preview resolution
Windows.Foundation.Rect viewPortRect = new Windows.Foundation.Rect(0, 0, _dataContext.Device.PreviewResolution.Width, _dataContext.Device.PreviewResolution.Height);
focusrect.Intersect(viewPortRect);
_dataContext.Device.FocusRegion = focusrect;
// show a focus indicator
FocusIndicator.SetValue(Shape.StrokeProperty, _notFocusedBrush);
FocusIndicator.SetValue(Canvas.LeftProperty, uiTapPoint.X - _focusRegionSize.Width / 2);
FocusIndicator.SetValue(Canvas.TopProperty, uiTapPoint.Y - _focusRegionSize.Height / 2);
FocusIndicator.SetValue(Canvas.VisibilityProperty, Visibility.Visible);
CameraFocusStatus status = await _dataContext.Device.FocusAsync();
if (status == CameraFocusStatus.Locked)
{
FocusIndicator.SetValue(Shape.StrokeProperty, _focusedBrush);
_manuallyFocused = true;
_dataContext.Device.SetProperty(KnownCameraPhotoProperties.LockedAutoFocusParameters,
AutoFocusParameters.Exposure & AutoFocusParameters.Focus & AutoFocusParameters.WhiteBalance);
}
else
{
_manuallyFocused = false;
_dataContext.Device.SetProperty(KnownCameraPhotoProperties.LockedAutoFocusParameters, AutoFocusParameters.None);
}
_focusSemaphore.Release();
}
}
Here's how to implement your own pinch-to-zoom functionality in WP8: Pinch To Zoom functionality in windows phone 8
One thing I'd add to the pinch-to-zoom code sample in your case is a Clip specification on a parent control, to make sure you're not accidentally rendering images tens or hundreds of times bigger than the screen and killing your app's performance.
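To illustrate that last point, here is a minimal sketch (the control names, pinchCenter and currentPinchScale are hypothetical placeholders, not taken from the linked sample) of clipping the preview's parent and scaling only the canvas during the pinch:

// Hypothetical sketch: clip the container hosting the zoomed preview so pixels
// scaled outside the visible area are never composited.
ViewfinderContainer.Clip = new RectangleGeometry
{
    Rect = new Rect(0, 0, ViewfinderContainer.ActualWidth, ViewfinderContainer.ActualHeight)
};

// The pinch gesture then only adjusts a ScaleTransform on the preview canvas itself.
var zoom = new ScaleTransform { CenterX = pinchCenter.X, CenterY = pinchCenter.Y };
VideoCanvas.RenderTransform = zoom;
zoom.ScaleX = zoom.ScaleY = currentPinchScale;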

How can I load a Papervision/Flex application (SWF) as a material on a Papervision plane?

I am trying to build a portfolio application similar to the one used by Whitevoid. I am using Flex 4 and Papervision3D 2. I have everything working except for one issue: when I try to load an external SWF as a material on one of the planes, I can see the native Flex or Flash components in their correct positions, but the Papervision objects are not being rendered properly. It looks like the viewport is not being set in the nested SWF. I have posted my code for loading the SWF below.
private function loadMovie(path:String=""):void
{
loader = new Loader();
request = new URLRequest(path);
loader.contentLoaderInfo.addEventListener(Event.INIT, addMaterial);
loader.load(request);
}
private function addMaterial(e:Event):void
{
movie = new MovieClip();
movie.addChild(e.target.content);
var width:Number = 0;
var height:Number = 0;
width = loader.contentLoaderInfo.width;
height = loader.contentLoaderInfo.height;
//calculate the aspect ratio of the swf
var matAR:Number = width/height;
if (matAR > aspectRatio)
{
plane.scaleY = aspectRatio / matAR;
}
else if (matAR < aspectRatio)
{
plane.scaleX = matAR / aspectRatio;
}
var mat:MovieMaterial = new MovieMaterial(movie, false, true, false, new Rectangle(0, 0, width, height));
mat.interactive = true;
mat.smooth = true;
plane.material = mat;
}
Below I have posted two pictures. The first is a shot of the application running by itself. The second is the application used as a MovieMaterial on a Plane. You can see how the button created as a Spark object in the MXML stays in the correct position, but the Papervision sphere (which is rotating) is in the wrong location. Is there something I am missing here?
Man. I haven't seen that site in a while. Still one of the cooler PV projects...
What do you mean by:
I cannot properly see the scene rendered in Papervision
You say you can see the components in their appropriate positions, as in: you have a plane with what looks like the intended file loading up? But I'm guessing that you can't interact with it.
As far as I know, and I've spent a reasonable amount of time trying to make something similar work, the MovieMaterial (which I assume you're using) draws a bitmap of whatever contents exist in your MovieClip, and if you set animated=true, it will render out a series of bitmaps, equating to animation. What it's not doing is displaying an actual MovieClip (or SWF) on the plane. So you may see your components, but this is how:
MovieMaterial.as line 137
// ______________________________________________________________________ CREATE BITMAP
/**
*
* @param asset
* @return
*/
protected function createBitmapFromSprite( asset:DisplayObject ):BitmapData
{
// Set the new movie reference
movie = asset;
// initialize the bitmap since it's new
initBitmap( movie );
// Draw
drawBitmap();
// Call super.createBitmap to centralize the bitmap specific code.
// Here only MovieClip specific code, all bitmap code (maxUVs, AUTO_MIP_MAP, correctBitmap) in BitmapMaterial.
bitmap = super.createBitmap( bitmap );
return bitmap;
}
Note that in the Whitevoid site you never actually interact with a movie until it "lands": he's very likely swapping in a real MovieClip on top of the bitmap-textured plane.
The part that you are interacting with is probably another plane that holds the "button", which simply becomes visible on mouseover.
I think PV 1.0 had access to real SWFs as a material, but this changed in 2.0. Sadly. Hopefully Molehill will change that.
cheers