If I create a new "Empty (2D)" project, a scene with one camera is pre-created.
The camera's default position and rotation look more like a 3D scene, though: position = (-10, 10, 10), rotation = (-35, -45, 0), projection = Perspective.
If I add a label to the scene, a new Canvas is created with another camera with these properties: pos = (0, 0, 1000), rot = (0, 0, 0), projection = Ortho. This camera is more like the 2D way.
The default project settings are: design size = 960x640 with fixed width.
Now, the label is shown in the middle of the canvas/screen even though the label's position is (0, 0).
The canvas grid shows the origin at the bottom-left corner.
I assume that only the second camera, with the ortho projection, is used because it has higher priority, and that the main camera is of no use at the moment. Is this correct? Is there any purpose for the main camera in a 2D application?
Why is the label shown in the center of the canvas? Its center position is at (480, 320). The label is a direct child of the canvas. What controls the origin in the canvas?
If I create a node and place another label (pos = (0, 0)) in that node, the label is again in the center of the node. Where do I set the origin of the parent node's coordinate system?
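For illustration, the same behavior can be reproduced from a component script; this is only a sketch and assumes the Cocos Creator 2.x JavaScript API:
// Add a label at (0, 0) under the Canvas node.
var label = new cc.Node('MyLabel'); // hypothetical node name
label.addComponent(cc.Label).string = 'test';
label.setPosition(0, 0);
cc.find('Canvas').addChild(label);
// The label appears at the canvas center, i.e. child positions seem to be
// measured from the parent's anchor point (default (0.5, 0.5)), not from
// the bottom-left grid origin.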
A slightly off-topic question: where do I set the scene that is shown at app startup?
We are designing an experience in Spark AR Studio and have run into a conundrum. Basically, our goal is to pin a Rectangle element containing an image texture so that it always lies just above the effect picker UI in Instagram across all devices, as in the placement in this image:
Goal Placement
We are attempting to accomplish this by pinning the Rectangle item to the bottom and left side of a Canvas element set to Camera Space and Safe Touch Area. This produces the positioning shown in the goal image on some devices, but on others (especially the iPhone 8 and iPhone 11) the image is still partially obscured by the UI, which is counter to our goal.
Here is our scene setup:
Rectangle element properties
The Rectangle element in the Scene panel
The Rectangle's position in the 2D Scene editor
An example of undesired behavior:
The Rectangle is obscured by the Effect UI
We have also tried to place the Rectangle dynamically using the Device Insets patch, but this does not help either; it only pins accurately above the UI in some circumstances.
Any advice is greatly appreciated!
I'm trying to show HTML content in 3D alongside a Forge model using the ThreeJS CSS3DRenderer (https://threejs.org/docs/#examples/en/renderers/CSS3DRenderer).
An example of the functionality: http://learningthreejs.com/data/2013-04-30-closing-the-gap-between-html-and-webgl/index.html
In a purely standard ThreeJS context, the steps needed for that are as follows (a rough code sketch follows the list):
1. Create a ThreeJS scene #1 for the DOM content.
2. Create another scene #2 for the 3D content.
3. Add CSS3DObjects with the HTML content into scene #1.
4. Add a matching 3D element into scene #2 with blending enabled. This makes the HTML content able to occlude the objects in scene #2.
5. Add other 3D objects into scene #2.
6. Render #1 with the CSS3DRenderer and #2 with the normal WebGLRenderer.
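Here is roughly what those steps look like in plain ThreeJS. This is only a sketch: it assumes the examples/js build where CSS3DRenderer and CSS3DObject attach to the THREE namespace, and the element size and positions are arbitrary.
// Steps 1-2: one scene for DOM content, one for WebGL content.
var cssScene = new THREE.Scene();
var glScene = new THREE.Scene();

var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 4000);
camera.position.set(0, 0, 500);

// Step 3: wrap an HTML element in a CSS3DObject and add it to scene #1.
var element = document.createElement('div');
element.style.width = '200px';
element.style.height = '100px';
element.innerHTML = 'HTML content';
var cssObject = new THREE.CSS3DObject(element);
cssScene.add(cssObject);

// Step 4: a matching plane in scene #2. The classic trick sets NoBlending
// so the plane overwrites the color buffer, punching a transparent hole in
// the WebGL canvas; the CSS content shows through it, and other WebGL
// objects can occlude it.
var holeMaterial = new THREE.MeshBasicMaterial({ color: 0x000000, opacity: 0, side: THREE.DoubleSide });
holeMaterial.blending = THREE.NoBlending;
var hole = new THREE.Mesh(new THREE.PlaneGeometry(200, 100), holeMaterial);
hole.position.copy(cssObject.position);
hole.rotation.copy(cssObject.rotation);
glScene.add(hole);

// Step 5: a regular WebGL object that can occlude the HTML.
var box = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), new THREE.MeshNormalMaterial());
box.position.set(150, 0, 50);
glScene.add(box);

// Step 6: two renderers stacked in the page, same camera for both.
var cssRenderer = new THREE.CSS3DRenderer();
cssRenderer.setSize(window.innerWidth, window.innerHeight);
cssRenderer.domElement.style.position = 'absolute';
cssRenderer.domElement.style.top = '0';
document.body.appendChild(cssRenderer.domElement);

var glRenderer = new THREE.WebGLRenderer({ alpha: true });
glRenderer.setClearColor(0x000000, 0); // fully transparent clear color
glRenderer.setSize(window.innerWidth, window.innerHeight);
glRenderer.domElement.style.position = 'absolute';
glRenderer.domElement.style.top = '0';
cssRenderer.domElement.appendChild(glRenderer.domElement); // WebGL canvas on top of the CSS layer

(function animate() {
    requestAnimationFrame(animate);
    glRenderer.render(glScene, camera);
    cssRenderer.render(cssScene, camera);
})();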
In the Forge Viewer context, scene #2 is replaced with an overlay scene.
I applied the tricks featured here https://forge.autodesk.com/blog/transparent-background-viewer-long-last to make the CSS scene visible through the viewer scene, and that works OK, except for the strange effect that when the renderer's alpha is enabled, elements in the overlay scene are only rendered when they occlude the Forge model.
The problem I'm having is point 4. In the case of two normal ThreeJS scenes, the position, rotation, and scale of the CSS3DObject in scene #1 can simply be copied to the object in scene #2 and they match perfectly. Not so between a ThreeJS scene and the viewer's overlay scene, as the units don't match up. (Or that's how I've reasoned it.)
Is there some way I can transform between these units?
According to our Engineering team (hats off to Cleve for putting the demo together):
The camera you are using, NOP_VIEWER.impl.camera, isn't a THREE camera. It is a UnifiedCamera, defined by LMV to hold both a perspective and an orthographic camera. You need to get the right camera from the UnifiedCamera and use that for the render. I just realized that the CSS3DRenderer was taken from a more recent version of THREE.js and assumes things that aren't true of the THREE.js r71 that LMV uses. For example, it uses camera.isPerspectiveCamera to decide when a camera is perspective; that property isn't defined in r71, which uses instanceof THREE.PerspectiveCamera instead.
// Plain THREE r71 perspective camera handed to the CSS3DRenderer:
camera = new THREE.PerspectiveCamera(
45, window.innerWidth / window.innerHeight, 0.1, 4000
);
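One possible way to feed that camera the right state per frame, following the UnifiedCamera point above. This is a sketch, not Cleve's code: the fov, aspect, near, and far properties on the UnifiedCamera, and the cssRenderer/cssScene variables, are assumptions.
// Sync the plain r71 perspective camera with the viewer's UnifiedCamera
// before each CSS3D render.
var unified = NOP_VIEWER.impl.camera;
camera.fov = unified.fov;
camera.aspect = unified.aspect;
camera.near = unified.near;
camera.far = unified.far;
camera.position.copy(unified.position);
camera.quaternion.copy(unified.quaternion);
camera.updateProjectionMatrix();
cssRenderer.render(cssScene, camera);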
The CSS3DRenderer renders into a flat plane that is composited with the LMV 3D scene. I found an example that fakes combining a CSS3D scene with a WebGL scene by zapping transparent holes in the WebGL scene where CSS3D objects are in front of it. This is the example: https://codepen.io/Fyrestar/pen/QOXJaJ
The application I am writing is not a game, but it does require many of the features one would use in a game: displaying a 2D scene, moving the camera to pan and zoom, and rotating or otherwise animating objects within the scene. The display of the scene, however, will be controlled via numerous regular window controls.
The best comparison I can think of right now is a level editor. The majority of the user interface is a standard window with panes that contain different controls. The scene is contained in another child window. When the user makes adjustments such as camera location, the scene responds accordingly.
So far, everything I've seen about cocos is geared toward a single window. Is it possible to embed a scene into a child window as I've described?
You can add your custom layer to your main scene using the Director::getInstance()->getRunningScene()->addChild(...) function.
I am trying to move a sprite to the mouse position on click.
However, the coordinates I am getting from Gdx.input.getX() and Gdx.input.getY() are relative to the top-left corner, while the setPosition() method of Sprite is relative to the bottom-left corner.
Why is this so, and how do I position my sprite where the mouse was clicked?
Screen coordinates come from Android and use a Y-down frame of reference. Cameras in libgdx by default use a Y-up frame of reference (because OpenGL by convention also uses a Y-up frame of reference).
If you prefer the Y-down frame of reference, you can call camera.setToOrtho(true) to flip it upside down. You might prefer this if you're coming from a Flash background.
But in general, the safe way to translate screen coordinates from a touch into the camera's coordinate system is to do the following. This will work regardless of what platform you're on and whatever coordinate system you chose for the camera. For example, for some types of games, you wouldn't even be using a camera that matches the screen resolution, but you'd still want screen coordinates converted to camera coordinates. Also, if you have a camera that moves around the world, this will automatically change the touch point to world coordinates.
Vector3 tempVector3 = new Vector3(); // reusable vector, avoids allocating on every click
tempVector3.set(Gdx.input.getX(), Gdx.input.getY(), 0);
camera.unproject(tempVector3);
// now tempVector3 contains the touch point in camera coordinates
sprite.setPosition(tempVector3.x, tempVector3.y); // 'sprite' being the Sprite from the question
It uses Vector3 because this also works for 3D cameras.
I am creating an interactive map in Flash / ActionScript 3.0 and would like to allow a user to click on a location in the map to find the elevation at that point. On the stage, I have a base map of the area sitting on top of a black-and-white image where the value of each pixel represents height in feet.
So far, using getPixel to retrieve the elevation works great, but when the base map is sitting on top of the black-and-white elevation surface, getPixel retrieves values from the base map, not the underlying image. Is there a way to display the base map to the user while still using getPixel to retrieve values from the underlying image?
Many thanks,
Matt
Simply use getPixel() on the black / white image, not the container.
I assume you have a container Sprite with two children: the black-and-white image and, on top of it, the base map. Attach the click listener to the container Sprite and retrieve the first child with getChildAt(0). Get the BitmapData of that child and call getPixel(x, y) on it.
If the underlying image is not being shown to the user, it need not even be in the display list; it can exist in memory only as a BitmapData.
Add your MouseEvent.CLICK listener to the base map users can click on, and in that handler use the event's x and y to call BitmapData.getPixel(x, y) on your elevation map's (greyscale image's) BitmapData object.
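A minimal AS3 sketch of that approach; the variable names are assumptions (baseMap is the visible map on top, elevationData is the greyscale image's BitmapData, which need not be on the display list):
// Click on the visible base map, sample the hidden elevation BitmapData.
baseMap.addEventListener(MouseEvent.CLICK, onMapClick);

function onMapClick(e:MouseEvent):void {
    // localX/localY are in the base map's own coordinate space; both
    // images are assumed to be aligned and the same size.
    var pixel:uint = elevationData.getPixel(int(e.localX), int(e.localY));
    var elevation:uint = pixel & 0xFF; // any channel works for a greyscale image
    trace("Elevation in feet: " + elevation);
}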