How to obtain the physical coordinates of the nodes of an itk::Mesh obtained from a 3D volume of CT images

I am using the ITK library to get a mesh from a 3D image; the 3D image is a volume of slices. I get the mesh using itk::BinaryMask3DMeshSource, but I need to get the physical coordinates of each mesh node and I don't know how to do it.
I know how to obtain with ITK the physical coordinates of a voxel in an image, using the TransformIndexToPhysicalPoint function. But when I have a mesh like this, or an itk::Mesh, I don't know how to do it. I need to know if there is any relationship between the nodes of the mesh and the voxels in the image, to find the physical coordinates.

Mesh points should already be in physical space, judging by both the code and the accompanying comment.
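As a quick check, here is a minimal sketch (assuming an unsigned char mask volume, a foreground value of 255 and a hypothetical file name) that builds the mesh with itk::BinaryMask3DMeshSource and prints the node coordinates, which come out directly in physical (world) units:

#include "itkBinaryMask3DMeshSource.h"
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkMesh.h"
#include <iostream>

int main()
{
  using ImageType = itk::Image<unsigned char, 3>;
  using MeshType  = itk::Mesh<double, 3>;

  auto reader = itk::ImageFileReader<ImageType>::New();
  reader->SetFileName("mask.mha");                  // hypothetical binary mask volume

  auto meshSource = itk::BinaryMask3DMeshSource<ImageType, MeshType>::New();
  meshSource->SetInput(reader->GetOutput());
  meshSource->SetObjectValue(255);                  // assumed foreground value of the mask
  meshSource->Update();

  MeshType::Pointer mesh = meshSource->GetOutput();

  // Each point is an itk::Point in physical (world) coordinates: the filter
  // already accounts for the image origin, spacing and direction, so no
  // TransformIndexToPhysicalPoint call is needed here.
  auto points = mesh->GetPoints();
  for (auto it = points->Begin(); it != points->End(); ++it)
  {
    const MeshType::PointType & p = it.Value();
    std::cout << "node " << it.Index() << ": "
              << p[0] << " " << p[1] << " " << p[2] << std::endl;
  }
  return 0;
}

If you also need the voxel a node came from, you can map a physical point back to an index with the image's TransformPhysicalPointToIndex method.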

Related

Surface mesh to volume mesh

I have a closed surface mesh generated using Meshlab from point clouds. I need to get a volume mesh for that so that it is not a hollow object. I can't figure it out. I need to get an *.stl file for printing. Can anyone help me to get a volume mesh? (I would prefer an easy solution rather than a complex algorithm).
Given an oriented watertight surface mesh, an oracle function can be derived that determines whether a query line segment intersects the surface (and where): shoot a ray from one end-point and use the even-odd rule (after having spatially indexed the faces of the mesh).
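A minimal C++ sketch of the even-odd test under those assumptions (a plain triangle soup, a fixed ray direction and a small epsilon are choices made here for illustration; a real implementation would also spatially index the faces and handle rays that graze edges or vertices):

#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3 & a, const Vec3 & b) { return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
static double dot(const Vec3 & a, const Vec3 & b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
static Vec3 cross(const Vec3 & a, const Vec3 & b)
{
  return {a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0]};
}

struct Triangle { Vec3 a, b, c; };

// Moeller-Trumbore ray/triangle intersection; counts only hits strictly in
// front of the ray origin.
static bool rayHitsTriangle(const Vec3 & orig, const Vec3 & dir, const Triangle & tri)
{
  const double eps = 1e-12;
  Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
  Vec3 p = cross(dir, e2);
  double det = dot(e1, p);
  if (std::fabs(det) < eps) return false;   // ray parallel to the triangle plane
  double inv = 1.0 / det;
  Vec3 t = sub(orig, tri.a);
  double u = dot(t, p) * inv;
  if (u < 0.0 || u > 1.0) return false;
  Vec3 q = cross(t, e1);
  double v = dot(dir, q) * inv;
  if (v < 0.0 || u + v > 1.0) return false;
  return dot(e2, q) * inv > eps;            // ray parameter of the hit point must be positive
}

// Even-odd rule: a point lies inside a watertight mesh iff a ray shot from it
// crosses the surface an odd number of times.
bool insideMesh(const Vec3 & point, const std::vector<Triangle> & mesh)
{
  const Vec3 dir = {1.0, 0.0, 0.0};         // arbitrary fixed ray direction
  std::size_t hits = 0;
  for (const Triangle & tri : mesh)
    if (rayHitsTriangle(point, dir, tri)) ++hits;
  return (hits % 2) == 1;
}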
Volumetric meshing algorithms can then be applied using this oracle function to tessellate the interior, typically variants of Marching Cubes or Delaunay-based approaches (see 3D Surface Mesh Generation in the CGAL documentation). The initial surface will however not be exactly preserved.
To my knowledge, MeshLab supports only surface meshes, so it is unlikely to provide a ready-to-use filter for this. Volume mesher packages should however offer this functionality (e.g. TetGen).
The question is not perfectly clear, so I'll try to give a different interpretation. According to your last sentence:
I need to get an *.stl file for printing
It means that you need a 3D model that is suitable for fabrication with a 3D printer, i.e. you need a watertight mesh. A watertight mesh is a mesh that defines the interior of a volume in an unambiguous way; it corresponds to a mesh that is closed (no boundary), 2-manifold (mainly, each edge is shared by exactly two faces), and without self-intersections.
MeshLab provides tools for visualizing boundaries, non-manifold elements and self-intersections. Correcting them is possible in many different ways (deleting the non-manifold elements and filling the holes, or drastic remeshing).

Using CUDA textures to store 2D surfaces

I am currently developing a 3D heat flow simulation on a 3D triangular mesh (basically any shape) with CUDA.
I was thinking of exploiting spatial locality by using CUDA textures or surfaces. Since I have a 3D mesh, I thought that a 3D texture would be appropriate. After looking at different examples, however, I am not so sure anymore: 3D textures are often used for volumes, not for surfaces like in my case.
Can I use 3D textures for polygon meshes? Does it make sense? If not, are there other approaches or data structures in CUDA of use for my case?
Using 3D textures to store surface meshes is in fact a good idea. To illustrate this, let me recall the clever approach in
Octree Textures on the GPU, GPU Gems 2
which uses 2D and 3D textures to store an octree, namely to:
Create an octree using a 3D texture;
Quickly traverse the octree by exploiting the filtering properties of the 3D texture;
Store the surface polygons in a 2D texture.
OCTREE TRAVERSAL BY THE FILTERING FEATURES OF A 3D TEXTURE
The tree is stored as an 8-bit RGBA 3D texture mapped to the unit cube [0,1]x[0,1]x[0,1], called the indirection pool. Each node of the tree is an indirection grid. Each child node is pointed to by the first three channels (RGB) of a texel, while the fourth (A) stores other information, for example whether the node is a leaf or whether it is empty.
Consider the quadtree example reported in the paper (figure borrowed from the paper).
The A, B, C and D nodes (boxes) are stored as the texture elements (0,0), (1,0), (2,0) and (3,0), respectively, each containing, for a quadtree, 4 elements, with every element storing a link to a child node. In this way, any access to the tree can be done by exploiting the hardware filtering features of the texture memory, a possibility that is illustrated in a figure in the paper
and by a short Cg lookup routine, which can easily be ported to CUDA.
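A hedged CUDA C++ sketch of that traversal, based only on the description above (the scaling of the stored child offsets by 255, the use of the alpha channel as a leaf/empty flag, the normalized pool coordinates and the texture-object fetch are assumptions, so details will differ from the paper's actual routine):

#include <cuda_runtime.h>

// indirPool: 8-bit RGBA 3D texture (the indirection pool) read with
// normalized coordinates; invS = 1 / pool size in texels per axis;
// N = node resolution (2 for an octree); M = lookup point in [0,1]^3.
__device__ float4 treeLookup(cudaTextureObject_t indirPool,
                             float3 invS, float N, float3 M, int maxDepth)
{
  float4 I = make_float4(0.f, 0.f, 0.f, 0.f);   // starts at the root grid
  float3 MND = M;                               // position inside the current node
  for (int depth = 0; depth < maxDepth; ++depth)
  {
    // Texel to fetch: origin of the current indirection grid (I.xyz, stored
    // as 8-bit values, hence the 255 scaling) plus the cell MND falls into.
    float3 P = make_float3((MND.x + floorf(0.5f + I.x * 255.f)) * invS.x,
                           (MND.y + floorf(0.5f + I.y * 255.f)) * invS.y,
                           (MND.z + floorf(0.5f + I.z * 255.f)) * invS.z);
    if (I.w < 0.9f)                             // not a leaf yet: fetch the next node
      I = tex3D<float4>(indirPool, P.x, P.y, P.z);
    if (I.w > 0.9f) break;                      // leaf reached: I.xyz holds the data/link
    if (I.w < 0.1f) break;                      // empty cell: nothing below this node
    // Position within the next-depth indirection grid: frac(MND * N).
    MND = make_float3(MND.x * N - floorf(MND.x * N),
                      MND.y * N - floorf(MND.y * N),
                      MND.z * N - floorf(MND.z * N));
  }
  return I;
}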
STORING THE TREE ELEMENTS BY A 2D TEXTURE
The elements of the tree can be stored by the classical approach exploiting the (u,v) coordinates, see UV mapping. The paper linked to above discusses a way to improve this method, but this is beyond the scope of this answer.

Geolocation, map and polygon intersection?

I need to retrieve the latitude and longitude coordinates of the intersection of a polygon with the street (see the blue point on the edge of the circle in the attached image).
I need this data in order to calculate the road length from the center of the circle to its edge. Does anybody know if this task is possible, and if so, which technology allows doing that?
This works only if you have the vector data of all streets; it does not work with an image (JPG, BMP).
When you have the vector data, you do a simple circle-line intersection, as learned in school.
You might first transform the vectors to a Cartesian x,y plane, so that you don't work directly with the latitude/longitude values from the street vectors; a sketch of both steps follows below.
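A minimal C++ sketch of both steps (the equirectangular flat-earth projection around the circle centre and all names are assumptions for illustration; for large radii or high accuracy use a proper projection library):

#include <cmath>
#include <optional>

struct XY { double x, y; };                  // metres east/north of the circle centre

constexpr double kEarthRadiusM = 6371000.0;  // mean Earth radius, metres
constexpr double kPi = 3.14159265358979323846;
constexpr double kDegToRad = kPi / 180.0;

// Local tangent-plane (equirectangular) projection around the circle centre.
XY toLocalXY(double lat, double lon, double centerLat, double centerLon)
{
  double x = (lon - centerLon) * kDegToRad * kEarthRadiusM * std::cos(centerLat * kDegToRad);
  double y = (lat - centerLat) * kDegToRad * kEarthRadiusM;
  return {x, y};
}

// First intersection of the street segment a->b with a circle of radius r
// centred at the origin, as a parameter t in [0,1] along the segment.
// The intersection point itself is a + t * (b - a).
std::optional<double> segmentCircleIntersection(XY a, XY b, double r)
{
  double dx = b.x - a.x, dy = b.y - a.y;
  double A = dx * dx + dy * dy;
  double B = 2.0 * (a.x * dx + a.y * dy);
  double C = a.x * a.x + a.y * a.y - r * r;
  double disc = B * B - 4.0 * A * C;
  if (A == 0.0 || disc < 0.0) return std::nullopt;   // degenerate segment or no hit
  double s = std::sqrt(disc);
  double t1 = (-B - s) / (2.0 * A);
  double t2 = (-B + s) / (2.0 * A);
  if (t1 >= 0.0 && t1 <= 1.0) return t1;
  if (t2 >= 0.0 && t2 <= 1.0) return t2;
  return std::nullopt;
}

Walking the street's segments outward from the centre and summing their lengths until the first segment that intersects the circle gives the road length asked for.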
Vector data can be obtained for free from OpenStreetMap, or from TomTom or NAVTEQ if it is a huge project. Sometimes the state provides this data, too.
A common format for such vector data is the ESRI shapefile format (.shp).

Bulk texture uploads

I have a specialised rendering app that needs to load up any number of JPEGs from a PDF, and then write out the images into a rendered page inside a kernel. This is oversimplified, but the point is that I want to find a way to collectively send up 'n' images as textures, and then, within the kernel, to index into this collection of textures for tex2D() calls. Any ideas welcome for doing this gracefully.
As a side question, I haven't yet found a way to decode the JPEG images in the kernel, forcing me to decode on the CPU and then send up (slowly) a large bitmap. Can I improve this?
First: if texture upload performance is not a bottleneck, consider not bulk uploading. Here are some suggestions, each with different trade-offs.
For varying-sized textures, consider creating a texture atlas. This is a technique popular in game development that packs many textures into a single 2D image. This requires offsetting texture coordinates to the corner of the image in question, and it precludes the use of texture coordinate clamping and wrapping. So you would need to store the offset of the corner of each sub-texture instead of its ID. There are various tools available for creating texture atlases.
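A hedged CUDA sketch of the coordinate offsetting (the AtlasRect layout and the texture-object tex2D fetch are assumptions for illustration):

#include <cuda_runtime.h>

// Each sub-image is described by the rectangle it occupies inside the atlas.
struct AtlasRect {
  float u0, v0;   // corner of the sub-image, in normalized atlas coordinates
  float du, dv;   // size of the sub-image, in normalized atlas coordinates
};

__device__ float4 fetchFromAtlas(cudaTextureObject_t atlas, AtlasRect rect,
                                 float u, float v)   // local coords in [0,1] within the sub-image
{
  // Clamp manually: the hardware clamp/wrap modes would bleed into the
  // neighbouring sub-images packed next to this one. The atlas texture
  // object is assumed to use normalized coordinates.
  u = fminf(fmaxf(u, 0.0f), 1.0f);
  v = fminf(fmaxf(v, 0.0f), 1.0f);
  return tex2D<float4>(atlas, rect.u0 + u * rect.du, rect.v0 + v * rect.dv);
}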
For constant-sized textures, or for the case where you don't mind the waste of varying-sized textures, you could consider using a layered texture. This is a texture with a number of independent layers that can be indexed at texture fetch time using a separate layer index. Quote from the link above:
A one-dimensional or two-dimensional layered texture (also known as texture array in Direct3D and array texture in OpenGL) is a texture made up of a sequence of layers, all of which are regular textures of same dimensionality, size, and data type.
A one-dimensional layered texture is addressed using an integer index and a floating-point texture coordinate; the index denotes a layer within the sequence and the coordinate addresses a texel within that layer. A two-dimensional layered texture is addressed using an integer index and two floating-point texture coordinates; the index denotes a layer within the sequence and the coordinates address a texel within that layer.
A layered texture can only be a CUDA array, created by calling cudaMalloc3DArray() with the cudaArrayLayered flag (and a height of zero for a one-dimensional layered texture).
Layered textures are fetched using the device functions described in tex1DLayered() and tex2DLayered(). Texture filtering (see Texture Fetching) is done only within a layer, not across layers.
Layered textures are only supported on devices of compute capability 2.0 and higher.
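A hedged CUDA sketch of this approach for a set of same-sized, single-channel images (the float format, the dimensions, the omission of error checking and the use of the texture-object API instead of the older texture references are assumptions for illustration):

#include <cuda_runtime.h>
#include <vector>

// Average all layers at each pixel, just to show per-fetch layer indexing.
__global__ void renderFromLayers(cudaTextureObject_t tex, float * out,
                                 int width, int height, int numLayers)
{
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x >= width || y >= height) return;
  float acc = 0.0f;
  for (int layer = 0; layer < numLayers; ++layer)   // any layer can be picked per fetch
    acc += tex2DLayered<float>(tex, x + 0.5f, y + 0.5f, layer);
  out[y * width + x] = acc / numLayers;
}

// Upload numLayers images of width x height floats (packed one after another
// in 'host') into a 2D layered CUDA array and wrap it in a texture object.
cudaTextureObject_t uploadLayers(const std::vector<float> & host,
                                 int width, int height, int numLayers,
                                 cudaArray_t * arrayOut)
{
  cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
  cudaExtent extent = make_cudaExtent(width, height, numLayers);  // depth = layer count
  cudaMalloc3DArray(arrayOut, &desc, extent, cudaArrayLayered);

  cudaMemcpy3DParms copy = {};
  copy.srcPtr = make_cudaPitchedPtr(const_cast<float *>(host.data()),
                                    width * sizeof(float), width, height);
  copy.dstArray = *arrayOut;
  copy.extent = extent;
  copy.kind = cudaMemcpyHostToDevice;
  cudaMemcpy3D(&copy);

  cudaResourceDesc res = {};
  res.resType = cudaResourceTypeArray;
  res.res.array.array = *arrayOut;
  cudaTextureDesc texDesc = {};
  texDesc.addressMode[0] = cudaAddressModeClamp;
  texDesc.addressMode[1] = cudaAddressModeClamp;
  texDesc.filterMode = cudaFilterModePoint;
  texDesc.readMode = cudaReadModeElementType;
  texDesc.normalizedCoords = 0;                     // unnormalized coords, as in the kernel above
  cudaTextureObject_t texObj = 0;
  cudaCreateTextureObject(&texObj, &res, &texDesc, nullptr);
  return texObj;
}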
You could consider a hybrid approach: sort the textures into same-sized groups and use a layered texture for each group. Or use a layered texture atlas, where the groups are packed such that each layer contains one or a few textures from each group to minimize waste.
Regarding your side question: a Google search for "cuda jpeg decode" turns up a lot of results, including at least one open-source project.

When using a 3D texture in CUDA, why don't we need to set texture coordinates?

In OpenGL, after creating a 3D texture, we always need to draw proxy geometry such as GL_QUADS to contain the 3D texture, and set the texture coordinates with glTexCoord3f.
However, when I use a 3D texture in CUDA, I have never found a function like glTexCoord3f to specify the texture coordinates. Actually, we just create a CUDA array and then bind the array to the texture. After this, we can use the texture fetch function tex3D to get the value.
Therefore, I'm very confused: how can the tex3D function work correctly even though we've never set the texture coordinates?
Thanks for answering.
The texture coordinates are the input arguments to the tex3D() fetch function.
In more detail, in OpenGL, when you call glTexCoord3f() it specifies the texture coordinates at the next issued vertex. The texture coordinates at each pixel of the rendered polygon are interpolated from the texture coordinates specified at the vertices of the polygon (typically triangles).
In CUDA, there is no concept of polygons, or interpolation of coordinates. Instead, each thread is responsible for computing (or loading) its texture coordinates and specifying them explicitly for each fetch. This is where tex3D() comes in.
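A hedged CUDA sketch of that point (the volume dimensions, the float element type and the texture-object API are assumptions): each thread computes its own coordinates and passes them straight to tex3D().

#include <cuda_runtime.h>

// Sample a 3D texture over an nx x ny x nz volume; each thread decides for
// itself where to sample (here: the centre of its own voxel).
__global__ void sampleVolume(cudaTextureObject_t volume, float * out,
                             int nx, int ny, int nz)
{
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  int z = blockIdx.z * blockDim.z + threadIdx.z;
  if (x >= nx || y >= ny || z >= nz) return;

  // The texture coordinates are just function arguments: no proxy geometry,
  // no per-vertex values, no interpolation across a polygon.
  float value = tex3D<float>(volume, x + 0.5f, y + 0.5f, z + 0.5f);
  out[(z * ny + y) * nx + x] = value;
}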
Note that if you use GLSL pixel shaders to shade your polygons in OpenGL, you actually do something very similar to CUDA -- you explicitly call a texture fetch function, passing it coordinates. These coordinates can be computed arbitrarily. The difference is that you have the option of using input coordinates that are interpolated at each pixel. (And you can think of each pixel as a thread!)