Projecting 3D points onto a 2D plane - language-agnostic

I have a noisy 3D point cloud [x, y, z] to which I fit a plane (z = ax + by + c) by computing the coefficients [a, b, c] with least squares. How can I project the points [x, y, z] onto this plane? I somewhat understand the math behind this, but I am unsure how to program it in Python.
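
No answer is recorded here, but a minimal NumPy sketch of one standard approach (fit z = ax + by + c by least squares, then project each point orthogonally onto the fitted plane; the data array is a placeholder):

    import numpy as np

    # Placeholder for the noisy [x, y, z] samples.
    points = np.random.rand(100, 3)

    # Least-squares fit of the coefficients [a, b, c] in z = a*x + b*y + c.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, c = coeffs

    # Rewrite the plane as n . p = d with unit normal n.
    N = np.array([a, b, -1.0])          # a*x + b*y - z + c = 0
    n = N / np.linalg.norm(N)
    d = -c / np.linalg.norm(N)

    # Orthogonal projection: p' = p - (n . p - d) * n
    dist = points @ n - d               # signed distance of each point to the plane
    projected = points - dist[:, None] * n

Note that fitting z = ax + by + c minimizes only the vertical residuals; for very noisy clouds, a total-least-squares fit (PCA/SVD of the centered points) is often the better choice.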

Related

How to obtain the physical coordinates of the nodes of an itk::Mesh obtained from a 3D volume of CT images

I am using the ITK library to extract a mesh from a 3D image; the 3D image is a volume of slices. I get the mesh using itk::BinaryMask3DMeshSource, but I need the physical coordinates of each mesh node and I don't know how to obtain them.
I know how to obtain the physical coordinate of a voxel in an image with ITK, using the TransformIndexToPhysicalPoint function. But when I have a mesh like this, an itk::Mesh, I don't know how to do it. I need to know whether there is any relationship between the nodes of the mesh and the voxels in the image that would let me find the physical coordinates.
Mesh points should already be in physical space, judging by both the code and the accompanying comment.
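
A rough check in Python, assuming the itk Python bindings (the file name and object value are hypothetical, and depending on your ITK build you may need to specify the template parameters of BinaryMask3DMeshSource explicitly):

    import itk

    image = itk.imread("binary_mask.nrrd")   # hypothetical binary mask volume

    # Build the mesh; exact Python wrapping may vary with your ITK build.
    mesh_source = itk.BinaryMask3DMeshSource.New(Input=image, ObjectValue=1)
    mesh_source.Update()
    mesh = mesh_source.GetOutput()

    # Mesh points should already be physical coordinates; compare one of them
    # with the physical position of a voxel index.
    print(mesh.GetPoint(0))
    print(image.TransformIndexToPhysicalPoint([10, 20, 5]))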

What is meant by regressing convolutional features to a quaternion representation of Rotation?

I'm interested in robot manipulation. I was reading the paper "PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes" and found the following sentence in the introduction, where it explains the three related tasks of PoseCNN. This is the third task:
The 3D Rotation R is estimated by regressing convolutional features extracted inside the bounding box of the object to a quaternion representation of R.
What is meant by regressing convolutional features to a quaternion representation of the rotation? How does one regress to a quaternion representation? Can we also use a rotation matrix instead of a quaternion, i.e., regress convolutional features to a rotation matrix? If so, what is the difference between the two?
"regressing convolutional features" means that you use the features extracted by the network for predicting some numbers.
In your case you are trying to predict the numbers of a quaternions which represent a rotation matrix.
I think the reason they are regressing a quaternions and not a rotation matrix is because it they are more compact, more numerically stable, and more efficient. For more information on the differences look at Quaternions and spatial rotation
Also i think you could try to regress the rotation matrix directly, if you look at the loss they use for the regression of the quaternions you see they convert the quaternions to there rotation matrix representation. So the loss itself is on the rotation matrix and not directly on the quaternions
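
To make the last point concrete, here is a small NumPy sketch (not PoseCNN's actual code) of converting a predicted quaternion into the rotation matrix on which such a loss would be computed:

    import numpy as np

    def quat_to_rotation_matrix(q):
        """Convert a quaternion [w, x, y, z] into a 3x3 rotation matrix."""
        q = np.asarray(q, dtype=float)
        q = q / np.linalg.norm(q)   # the network's raw 4-vector must be normalized
        w, x, y, z = q
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    # Example: quaternion for a 90-degree rotation about the z-axis.
    q = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]
    R = quat_to_rotation_matrix(q)   # a rotation loss can then be computed on R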

Deep Learning for 3D Point Clouds, volume detection and meshing

I'm working on an archaeological excavation point cloud dataset with over 2.5 billion points. These points come from a trench, a block of 10 x 10 x 3 m. Each point cloud is a layer, and the gaps between them are the excavated volumes. There are 444 volumes from this trench and 700 individual point clouds.
Can anyone point me to algorithms that can mesh these empty spaces? I'm already doing this semi-automatically using Open3D and other Python libraries, but if we could train a model to assess all the point clouds and deduce the volumes, it would save us a lot of time and hopefully produce better results.
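
No answer is recorded here, but a rough sketch of the kind of semi-automatic Open3D workflow mentioned above might look as follows (the file names, the Poisson depth, and the idea of merging two consecutive layers are assumptions, not the asker's actual pipeline):

    import open3d as o3d

    # Two consecutive excavation layers; the gap between them is the excavated volume.
    upper = o3d.io.read_point_cloud("layer_012.ply")   # hypothetical file names
    lower = o3d.io.read_point_cloud("layer_013.ply")

    # Merge the surfaces bounding the gap; Poisson reconstruction needs
    # consistently oriented normals.
    gap = upper + lower
    gap.estimate_normals()
    gap.orient_normals_consistent_tangent_plane(30)

    # Poisson surface reconstruction; depth controls the resolution.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(gap, depth=9)

    # Volume is only defined for a watertight mesh.
    if mesh.is_watertight():
        print("excavated volume (m^3):", mesh.get_volume())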

Using CUDA textures to store 2D surfaces

I am currently developing a 3D heat flow simulation on a 3D triangular mesh (basically any shape) with CUDA.
I was thinking of exploiting spatial locality by using CUDA textures or surfaces. Since I have a 3D mesh, I thought that a 3D texture would be appropriate. After looking at different examples, however, I am not so sure anymore: 3D textures are often used for volumes, not for surfaces like in my case.
Can I use 3D textures for polygon meshes? Does it make sense? If not, are there other approaches or data structures in CUDA of use for my case?
Using 3D textures to store surface meshes is in fact a good idea. To illustrate this, let me recall the clever approach in
Octree Textures on the GPU, GPU Gems 2
which uses 2D and 3D textures to store an octree and to
Create the octree in a 3D texture;
Quickly traverse the octree by exploiting the filtering properties of the 3D texture;
Store the surface polygons in a 2D texture.
OCTREE TRAVERSAL BY THE FILTERING FEATURES OF A 3D TEXTURE
The tree is stored as an 8-bit RGBA 3D texture mapped into the unit cube [0,1]x[0,1]x[0,1], called the indirection pool. Each node of the tree is an indirection grid. Each child node is addressed by the first three components (RGB), while the fourth (A) stores other information, for example whether the node is a leaf or whether it is empty.
Consider the quadtree example reported in the paper.
The A, B, C and D nodes (boxes) are stored as the texture elements (0,0), (1,0), (2,0) and (3,0), respectively, each containing, for a quadtree, 4 elements, and each element storing a link to a child node. In this way, any access to the tree can be performed by exploiting the hardware filtering features of the texture memory; the paper illustrates this with a figure and a short Cg listing (not reproduced here, but easily ported to CUDA).
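
To make the indirection-grid idea concrete, here is a purely illustrative CPU-side Python model of the lookup (the encoding and names are mine, not the paper's):

    import numpy as np

    # Node-type codes stored in the alpha channel of each indirection-pool cell.
    EMPTY, LEAF, NODE = 0, 1, 2

    def octree_lookup(pool, root, point, max_depth=8):
        """Descend an octree stored as an 'indirection pool'.

        pool  -- array of shape (X, Y, Z, 4); for a NODE cell, RGB holds the
                 grid coordinates of the child's 2x2x2 indirection grid; for a
                 LEAF cell, RGB holds the payload; A holds the node type.
        root  -- grid coordinates of the root indirection grid.
        point -- query position in the unit cube [0,1)^3.
        """
        origin = np.asarray(root, dtype=int)
        p = np.asarray(point, dtype=float)
        for _ in range(max_depth):
            child = (p >= 0.5).astype(int)           # which octant at this level
            r, g, b, a = pool[tuple(origin + child)]
            if a == EMPTY:
                return None
            if a == LEAF:
                return (r, g, b)                     # leaf payload (e.g. a color)
            origin = np.array([r, g, b], dtype=int)  # jump to the child's grid
            p = (p - child * 0.5) * 2.0              # rescale into the child's cube
        return None

On the GPU, the same descent runs inside a kernel or shader with the indirection pool bound as a 3D texture, so each step of the loop becomes a single hardware texture fetch.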
STORING THE TREE ELEMENTS BY A 2D TEXTURE
The elements of the tree can be stored with the classical approach exploiting (u,v) coordinates; see UV mapping. The paper linked above discusses a way to improve on this method, but that is beyond the scope of this answer.

When using a 3D texture in CUDA, why don't we need to set texture coordinates?

In OpenGL, after creating a 3D texture, we always need to draw proxy geometry such as GL_QUADS to contain the 3D texture, and we set the texture coordinates with glTexCoord3f.
However, when I use a 3D texture in CUDA, I haven't found a function like glTexCoord3f to specify the texture coordinates. We just create a CUDA array, bind it to the texture, and then use the texture fetch function tex3D to get the value.
So I'm confused: how can tex3D work correctly even though we've never set the texture coordinates?
Thanks for answering.
The texture coordinates are the input arguments to the tex3D() fetch function.
In more detail, in OpenGL, when you call glTexCoord3f() it specifies the texture coordinates at the next issued vertex. The texture coordinates at each pixel of the rendered polygon are interpolated from the texture coordinates specified at the vertices of the polygon (typically triangles).
In CUDA, there is no concept of polygons, or interpolation of coordinates. Instead, each thread is responsible for computing (or loading) its texture coordinates and specifying them explicitly for each fetch. This is where tex3D() comes in.
Note that if you use GLSL pixel shaders to shade your polygons in OpenGL, you actually do something very similar to CUDA -- you explicitly call a texture fetch function, passing it coordinates. These coordinates can be computed arbitrarily. The difference is that you have the option of using input coordinates that are interpolated at each pixel. (And you can think of each pixel as a thread!)
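
To make the difference concrete, here is a rough CPU-side Python model of what a linearly filtered tex3D() fetch does with the coordinates each thread passes in (unnormalized coordinates assumed, CUDA's half-texel offset and addressing modes ignored; this is an illustration, not CUDA's implementation):

    import numpy as np

    def tex3d(volume, x, y, z):
        """Trilinearly filtered lookup at explicit coordinates, tex3D()-style.

        volume  -- 3D NumPy array standing in for the CUDA array bound to the texture
        x, y, z -- texel coordinates, as a thread would compute them
        """
        x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        fx, fy, fz = x - x0, y - y0, z - z0
        x1 = min(x0 + 1, volume.shape[0] - 1)
        y1 = min(y0 + 1, volume.shape[1] - 1)
        z1 = min(z0 + 1, volume.shape[2] - 1)

        # Trilinear interpolation of the eight surrounding texels.
        c000, c100 = volume[x0, y0, z0], volume[x1, y0, z0]
        c010, c110 = volume[x0, y1, z0], volume[x1, y1, z0]
        c001, c101 = volume[x0, y0, z1], volume[x1, y0, z1]
        c011, c111 = volume[x0, y1, z1], volume[x1, y1, z1]
        c00 = c000 * (1 - fx) + c100 * fx
        c10 = c010 * (1 - fx) + c110 * fx
        c01 = c001 * (1 - fx) + c101 * fx
        c11 = c011 * (1 - fx) + c111 * fx
        c0 = c00 * (1 - fy) + c10 * fy
        c1 = c01 * (1 - fy) + c11 * fy
        return c0 * (1 - fz) + c1 * fz

    # Each "thread" decides its own coordinates and passes them explicitly,
    # just as a CUDA kernel passes them to tex3D().
    volume = np.random.rand(64, 64, 64)
    value = tex3d(volume, 10.3, 20.7, 5.5)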