Create Depth Mask of camera scene in Spark AR

I need to generate a Depth Mask image of my 3D environment in Spark AR. I need this so I can use it as a Depth Mask in a Scene Render Pass for a glow effect. Currently, the glow is rendering over the top of other objects in my scene. I can't just render everything else on top because the user can move around and view the scene from any angle.

Related

What controls the coordinate system origin in a 2D rendering scene?

If I create a new "Empty (2D)" project, a scene with one camera is pre-created in it.
The camera's default position and rotation look more like a 3D scene, though: position = (-10, 10, 10), rotation = (-35, -45, 0), projection = "Perspective".
If I add a label to the scene, a new Canvas is created with another camera with these properties: position = (0, 0, 1000), rotation = (0, 0, 0), projection = orthographic. This camera behaves more like a 2D one.
The default project settings are a design size of 960x640 with fixed width.
Now the label is shown in the middle of the canvas/screen even though its position is (0, 0).
The canvas grid shows the origin at the bottom-left corner.
I assume that only the second camera, with the orthographic projection, is used because it has higher priority, and that the main camera is of no use at the moment. Is this correct? Is there any purpose for the main camera in a 2D application?
Why is the label shown in the center of the canvas? Its center position is at (480, 320). The label is a direct child of the canvas. What controls the origin in the canvas?
If I create a node and place another label (position = (0, 0)) in that node, the label is again in the center of the node. Where do I set the parent node's coordinate system origin?
A slightly off-topic question: where do I set the scene that is shown at app startup?

Forge viewer coordinate units and mapping those to a ThreeJS scene

I'm trying to show HTML content in 3D alongside a Forge model using the ThreeJS CSS3DRenderer (https://threejs.org/docs/#examples/en/renderers/CSS3DRenderer).
An example of the functionality: http://learningthreejs.com/data/2013-04-30-closing-the-gap-between-html-and-webgl/index.html
In a purely standard ThreeJS context, the steps needed for that are (a sketch follows the list):
1. Create a ThreeJS scene #1 for the DOM content.
2. Create another scene #2 for the 3D content.
3. Add CSS3DObjects with the HTML content into scene #1.
4. Add a matching 3D element into scene #2 with blending enabled. This lets the HTML content occlude the objects in scene #2.
5. Add other 3D objects into scene #2.
6. Render #1 with the CSS3DRenderer and #2 with the normal WebGLRenderer.
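A minimal sketch of those six steps in plain three.js (not Forge-specific); the import paths, element size, and transparent "blocker" material settings are assumptions based on the linked example:

import * as THREE from 'three';
import { CSS3DRenderer, CSS3DObject } from 'three/addons/renderers/CSS3DRenderer.js';

const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 4000);
camera.position.z = 500;

// Scene #1: DOM content, rendered by the CSS3DRenderer.
const cssScene = new THREE.Scene();
const el = document.createElement('div');
el.style.width = '200px';
el.style.height = '150px';
el.textContent = 'HTML in 3D';
const cssObject = new CSS3DObject(el);
cssScene.add(cssObject);

// Scene #2: WebGL content, plus a matching invisible plane. With
// NoBlending the plane overwrites the canvas with transparent pixels,
// so the DOM element behind the canvas shows through and appears to
// occlude the WebGL objects.
const glScene = new THREE.Scene();
const blocker = new THREE.Mesh(
    new THREE.PlaneGeometry(200, 150),
    new THREE.MeshBasicMaterial({ color: 0x000000, opacity: 0, side: THREE.DoubleSide, blending: THREE.NoBlending })
);
blocker.position.copy(cssObject.position);
blocker.rotation.copy(cssObject.rotation);
glScene.add(blocker);

// Stack the CSS layer behind the transparent WebGL canvas.
const glRenderer = new THREE.WebGLRenderer({ alpha: true });
const cssRenderer = new CSS3DRenderer();
glRenderer.setSize(innerWidth, innerHeight);
cssRenderer.setSize(innerWidth, innerHeight);
cssRenderer.domElement.style.position = 'absolute';
cssRenderer.domElement.style.top = '0';
document.body.appendChild(cssRenderer.domElement);
glRenderer.domElement.style.position = 'absolute';
glRenderer.domElement.style.top = '0';
glRenderer.domElement.style.pointerEvents = 'none'; // let the DOM element receive events
document.body.appendChild(glRenderer.domElement);

function animate() {
    requestAnimationFrame(animate);
    glRenderer.render(glScene, camera);
    cssRenderer.render(cssScene, camera);
}
animate();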
In the Forge viewer context, scene #2 is replaced with an overlay scene.
I applied the tricks featured here https://forge.autodesk.com/blog/transparent-background-viewer-long-last to make the CSS scene visible through the viewer scene, and that works OK, except for the strange effect that, when the renderer's alpha is enabled, elements in the overlay scene are only rendered when they occlude the Forge model.
The problem I'm having is with point 4. Between two normal ThreeJS scenes, the position, rotation, and scale of the CSS3DObject in scene #1 can simply be copied to the object in scene #2, and they match perfectly. Not so between a ThreeJS scene and the viewer's overlay scene, as the units don't match up. (Or that's how I've reasoned it.)
Is there some way I can transform between these units?
According to our Engineering team (hats off to Cleve for putting the demo together):
The camera you are using, NOP_VIEWER.impl.camera, isn't a THREE camera. It is a UnifiedCamera, defined by LMV to hold both a perspective and an orthographic camera. You need to get the right camera out of the UnifiedCamera and use that for the render. I just realized that the CSS3DRenderer was taken from a more recent version of THREE.js, and it assumes things that aren't true of the THREE.js r71 that LMV uses. For example, it uses camera.isPerspectiveCamera to decide when a camera is perspective; that flag isn't defined in r71, which uses instanceof THREE.PerspectiveCamera instead.
// Use a plain THREE.PerspectiveCamera (rather than the UnifiedCamera)
// for the CSS3DRenderer:
camera = new THREE.PerspectiveCamera(
    45, window.innerWidth / window.innerHeight, 0.1, 4000
);
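Building on that, a hedged sketch of one way to bridge the r71 mismatch, assuming you keep the newer CSS3DRenderer and copy the viewer camera's state across each frame (cssScene and cssRenderer come from your own setup):

// Hypothetical r71 shim: copy the LMV UnifiedCamera's state into a plain
// perspective camera and set the flag the newer CSS3DRenderer checks.
var viewerCam = NOP_VIEWER.impl.camera; // LMV UnifiedCamera
var cssCam = new THREE.PerspectiveCamera(
    viewerCam.fov, viewerCam.aspect, viewerCam.near, viewerCam.far
);
cssCam.position.copy(viewerCam.position);
cssCam.quaternion.copy(viewerCam.quaternion);
cssCam.isPerspectiveCamera = true; // not defined in r71, but expected by the renderer
cssRenderer.render(cssScene, cssCam);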
The CSS3DRenderer renders into a flat plane that is composited with the LMV 3D scene. I found an example that fakes compositing a CSS3D scene with a WebGL scene by punching transparent holes in the WebGL scene wherever CSS3D objects sit in front of it. This is the example: https://codepen.io/Fyrestar/pen/QOXJaJ

Slow BitmapData.draw() with matrix from camera video

I'm building an iOS AIR app using AS3/Flash Builder.
The app grabs the camera stream into a Video object and then draws that to a BitmapData. The camera stream is 1280x720 px and the container is 2208x1242 px (iPhone 7+), so I need to scale the footage. I also need to rotate it around the center point:
mat.identity();
// Move the video's center to the origin so the rotation pivots around it.
mat.translate(-video.width / 2, -video.height / 2);
// Rotate 90 degrees.
mat.rotate(Math.PI / 2);
// Move back on-screen; width and height swap because of the rotation.
mat.translate(video.height / 2, video.width / 2);
// Scale the rotated footage up to the device resolution.
mat.scale(deviceSize.height / cam.width, deviceSize.width / cam.height);
I have timed the drawing operation with matrix:
videoBitmapData.draw(video,mat); //9-11ms
And without matrix:
videoBitmapData.draw(video); //2-3ms
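(For reference, one way to take such timings in AS3; the post doesn't say how the numbers were measured, so this is an assumption:)

import flash.utils.getTimer;

var t0:int = getTimer();
videoBitmapData.draw(video, mat);
trace("draw with matrix:", getTimer() - t0, "ms");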
Clearly the transformation is slowing me down.
Is there a faster way to do this? Maybe drawing first, then applying the matrix somehow?
Pre-rotating something?
Leveraging native scaling/rotation?
Can I skip using the video object somehow? Accessing the camera data in a more raw form?
I see no difference using render mode GPU vs CPU.
Thanks!
Edit:
I managed to cut the time roughly in half by doing this:
1. Put the video object inside a Sprite
2. Transform the Sprite.
3. Draw the Sprite to the BitmapData.
(5-6ms)
Surprisingly, this was more consistent than this:
1. Put the video object inside a Sprite
2. Transform the video object inside the Sprite
3. Draw the sprite to the BitmapData
(5-12ms)
Bonus:
I only needed part of the pixels from the camera (full width x 100 px of the height). Once I realized I could use clipRect with the draw() command, I got down to 0-1 ms.
Thanks to Organis
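For anyone following along, a hedged AS3 sketch of the combined Sprite-plus-clipRect approach; videoSprite, the 90-degree layout, and stripHeight are illustrative assumptions:

import flash.display.Sprite;
import flash.geom.Rectangle;

// Wrap the video and transform the wrapper once, instead of passing a
// matrix to every draw() call.
var videoSprite:Sprite = new Sprite();
videoSprite.addChild(video);
videoSprite.scaleX = deviceSize.height / cam.width;
videoSprite.scaleY = deviceSize.width / cam.height;
videoSprite.rotation = 90;        // DisplayObject.rotation is in degrees
videoSprite.x = deviceSize.width; // shift back on-screen after rotating

// Only copy the strip of pixels actually needed (full width x 100 px).
var stripHeight:int = 100;
var clip:Rectangle = new Rectangle(0, 0, deviceSize.width, stripHeight);
videoBitmapData.draw(videoSprite, null, null, null, clip);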

Away3D rotate scene with mouse

I have a simple scene using the Away3D library, and it displays a simple shape. I'm now dealing with mouse events, trying to get the effect of rotating the 3D object around the main coordinate system, but I don't understand how to get the initial values when the mouse is pressed and what to assign while the mouse is moving.
Can anyone help me?
Look at the Intermediate Globe example in the Away3D repository. There is a HoverController class that should provide all the functionality you need to handle mouse input and rotate the camera around your 3D shape.
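A hedged sketch along the lines of that example; view, the 0.3 drag factor, and the controller settings are illustrative assumptions:

import away3d.controllers.HoverController;
import flash.events.Event;
import flash.events.MouseEvent;

// Orbit the camera around the scene origin with a HoverController.
var cameraController:HoverController = new HoverController(view.camera, null, 45, 20, 1000);

var move:Boolean = false;
var lastPanAngle:Number;
var lastTiltAngle:Number;
var lastMouseX:Number;
var lastMouseY:Number;

stage.addEventListener(MouseEvent.MOUSE_DOWN, onMouseDown);
stage.addEventListener(MouseEvent.MOUSE_UP, onMouseUp);
stage.addEventListener(Event.ENTER_FRAME, onEnterFrame);

function onMouseDown(e:MouseEvent):void {
    // Capture the controller angles and mouse position at press time.
    lastPanAngle = cameraController.panAngle;
    lastTiltAngle = cameraController.tiltAngle;
    lastMouseX = stage.mouseX;
    lastMouseY = stage.mouseY;
    move = true;
}

function onMouseUp(e:MouseEvent):void {
    move = false;
}

function onEnterFrame(e:Event):void {
    if (move) {
        // Drag deltas drive the orbit angles.
        cameraController.panAngle = 0.3 * (stage.mouseX - lastMouseX) + lastPanAngle;
        cameraController.tiltAngle = 0.3 * (stage.mouseY - lastMouseY) + lastTiltAngle;
    }
    view.render();
}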

GetPixel for a bitmap under another?

Any help on this would be much appreciated.
I am creating an interactive map in Flash / ActionScript 3.0 and would like to allow a user to click on a location in the map to find the elevation at that point. On the stage, I have a base map of the area sitting on top of a black-and-white image in which the value of each pixel represents height in feet.
So far, using getPixel to retrieve the elevation works great, but when the base map is sitting on top of the black-and-white elevation surface, getPixel retrieves values from the base map, not the underlying image. Is there a way to display the base map to the user while still using getPixel to retrieve values from the underlying image?
Many thanks,
Matt
Simply use getPixel() on the black / white image, not the container.
I assume you have a container Sprite with two children: the black-and-white image, and the base map on top. Attach the click listener to the container Sprite and retrieve the first child with getChildAt(0). Get the BitmapData of that child and call getPixel(x, y) on it.
If the underlying image is not being shown to the user, it need not even be in the display list; it can exist in memory only as a BitmapData.
Add your MouseEvent.CLICK listener to the base map users click on, and in that function use the event's x and y to call getPixel(x, y) on your elevation map's (grayscale image) BitmapData object.
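Putting that together, a hedged AS3 sketch; elevationData and baseMap are illustrative names, and it assumes the elevation image lines up 1:1 with the visible map:

import flash.display.BitmapData;
import flash.events.MouseEvent;

// elevationData: the grayscale heightmap, kept in memory only (it never
// needs to be on the display list).
// baseMap: the visible map the user clicks on.
baseMap.addEventListener(MouseEvent.CLICK, onMapClick);

function onMapClick(e:MouseEvent):void {
    // localX/localY are in the base map's coordinate space, assumed to
    // match the elevation image 1:1.
    var pixel:uint = elevationData.getPixel(int(e.localX), int(e.localY));
    // Grayscale, so all channels carry the same value; read the blue channel.
    var elevationFeet:uint = pixel & 0xFF;
    trace("Elevation:", elevationFeet, "feet");
}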