Multiple convex hulls for a single point cloud - configuration space

I am working on the configuration space (C-space) of a 6-DOF robot arm.
From a simulation I can get a point cloud that defines my C-space.
From this C-space, I would like to be able to tell whether a robot configuration (a set of joint angles) is inside the C-space or not.
So I would like to build a 6-dimensional model from my C-space, something like a combination of many convex hulls with a given radius.
Then I would like to create or use a function that tells me whether my configuration is inside one of the convex hulls (and therefore inside the C-space, which means the configuration is in collision).
Do you have any ideas?
Thanks a lot.

The question is not completely clear yet. I am guessing that you have a point cloud from a laser scanner and would like to approximate it with a set of convex objects in order to perform collision queries later.
If the point cloud is already clustered into sets, the convex hull of each set can be found fairly quickly using the quickhull algorithm.
If you also want to find the clusters, then a convex decomposition algorithm like Volumetric Hierarchical Approximate Convex Decomposition (V-HACD) may be what you are looking for. However, an intermediate step may be needed to transform the point cloud into a mesh object to pass as input to V-HACD.
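Once the hulls are built, the inside/outside test itself is cheap in any dimension. As a rough sketch of what that could look like (SciPy is not mentioned above, it is just one readily available quickhull implementation, and the clusters here are made-up data):

import numpy as np
from scipy.spatial import ConvexHull

# Stand-in for your clustered C-space samples: two random 6-D clusters.
rng = np.random.default_rng(0)
clusters = [rng.random((50, 6)), rng.random((50, 6)) + 5.0]

# Build one hull per cluster (works in any dimension qhull can handle).
hulls = [ConvexHull(c) for c in clusters]

def inside_hull(point, hull, tol=1e-9):
    # A point is inside a convex hull if A @ x + b <= 0 for every facet,
    # where hull.equations stores one [A | b] row per facet.
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    return bool(np.all(A @ point + b <= tol))

def in_collision(configuration):
    # The configuration lies in the (collision) C-space if any hull contains it.
    q = np.asarray(configuration, dtype=float)
    return any(inside_hull(q, h) for h in hulls)

print(in_collision(clusters[0][0]))     # a sampled point -> True
print(in_collision(np.full(6, 100.0)))  # far away -> False

Note that a union of convex hulls over-approximates concave regions of the C-space, which is exactly why the clustering/decomposition step matters.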

Related

3D annotation for instance segmentation

I'm trying to annotate some data for 3D instance segmentation. While it's fairly straightforward to draw masks for each 2D plane, it's not obvious how to connect the same "instances" together post-annotation (i.e. connect the "red" masks together, connect the "blue" masks together) without laboriously making sure the instances are matched (i.e. colour-coded so that "red" masks always connect with "red" masks).
A naive approach I have thought of is to make many 2D segmentation masks and calculate the center of mass for each detected object. I can later re-assign the instances based on the closest matching center of mass, but I worry this would inadvertently generate "crossed-over" segmentation instances (illustrated below). What are some high-throughput strategies for generating 3D annotations?
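For concreteness, here is a rough sketch of that naive approach (scipy is only used for the center-of-mass computation; the distance threshold is arbitrary):

import numpy as np
from scipy import ndimage

def link_instances(slices, max_dist=20.0):
    # slices: list of 2D integer label images (0 = background).
    # Returns a 3D int volume with (hopefully) consistent instance IDs.
    volume = np.zeros((len(slices),) + slices[0].shape, dtype=np.int32)
    previous = {}   # global id -> center of mass in the previous slice
    next_id = 1
    for z, mask in enumerate(slices):
        current = {}
        for label in np.unique(mask):
            if label == 0:
                continue
            com = np.array(ndimage.center_of_mass(mask == label))
            # Pick the closest instance from the previous slice, if any.
            best = min(previous.items(),
                       key=lambda kv: np.linalg.norm(com - kv[1]),
                       default=None)
            if best is not None and np.linalg.norm(com - best[1]) <= max_dist:
                gid = best[0]
            else:
                gid, next_id = next_id, next_id + 1
            volume[z][mask == label] = gid
            current[gid] = com
        previous = current
    return volume

The greedy matching is exactly where the crossed-over instances can appear: two nearby objects in consecutive slices can both grab the same previous ID.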
The boundaries of your 2D slices could be used as constraints to obtain an optimal 3D surface, as proposed in [1].
However, I think it is easier to generate 3D labels from markers, as in [2]. An implementation is available here (feel free to open an issue if you encounter any problems :P).
Also, the napari package could be useful to develop the GUI without much effort.
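As a minimal sketch of what a napari-based annotation GUI could look like (the image and label volumes here are placeholders):

import napari
import numpy as np

volume = np.random.random((64, 256, 256))        # placeholder image stack
labels = np.zeros(volume.shape, dtype=np.int32)  # editable instance labels

viewer = napari.Viewer()
viewer.add_image(volume, name="raw")
viewer.add_labels(labels, name="instances")      # paintable slice by slice or in 3D
napari.run()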
[1] Grady, Leo. "Minimal surfaces extend shortest path segmentation methods to 3D." IEEE Transactions on Pattern Analysis and Machine Intelligence 32.2 (2008): 321-334.
[2] Falcão, Alexandre X., and Felipe PG Bergo. "Interactive volume segmentation with differential image foresting transforms." IEEE Transactions on Medical Imaging 23.9 (2004): 1100-1108.
You can use 3D Slicer's Segment Editor. It is free, open-source, has many built-in tools, and is customizable/extensible in Python or C++ (you can plug in your own segmentation method with minimal effort). To solve a segmentation task, you typically first figure out a good segmentation workflow (which tools to use, in what combination, and with what parameters) using the interactive GUI; then, if necessary, you can make it semi-automatic or fully automatic using Python scripting.
You can create a segmentation by contouring every image slice, but that would be too tedious. Instead, you can use 3D region growing (the Grow from seeds effect) or segment on just a few slices and interpolate between them (the Fill between slices effect).

Surface mesh to volume mesh

I have a closed surface mesh generated using Meshlab from point clouds. I need to get a volume mesh for that so that it is not a hollow object. I can't figure it out. I need to get an *.stl file for printing. Can anyone help me to get a volume mesh? (I would prefer an easy solution rather than a complex algorithm).
Given an oriented watertight surface mesh, an oracle function can be derived that determines whether a query line segment intersects the surface (and where): shoot a ray from one end-point and use the even-odd rule (after having spatially indexed the faces of the mesh).
Volumetric meshing algorithms can then be applied using this oracle function to tessellate the interior, typically variants of Marching Cubes or Delaunay-based approaches (see 3D Surface Mesh Generation in the CGAL documentation). The initial surface will however not be exactly preserved.
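As a small illustration of the inside/outside oracle at the level of query points, here is a sketch using the trimesh library (not mentioned above; it wraps the ray-casting/even-odd test described here, and the file name is made up):

import numpy as np
import trimesh

mesh = trimesh.load("surface.stl", force="mesh")   # hypothetical input file
assert mesh.is_watertight                          # the oracle only makes sense for closed meshes

# Ray casting + even-odd rule, wrapped by trimesh: True where a point is inside.
points = np.array([[0.0, 0.0, 0.0],
                   [100.0, 100.0, 100.0]])
print(mesh.contains(points))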
To my knowledge, MeshLab supports only surface meshes, so it is unlikely to provide a ready-to-use filter for this. Volume mesher packages should however offer this functionality (e.g. TetGen).
The question is not perfectly clear, so I will try to give a different interpretation. According to your last sentence:
I need to get an *.stl file for printing
this means you need a 3D model that is suitable for fabrication on a 3D printer, i.e. you need a watertight mesh. A watertight mesh is one that defines the interior of a volume unambiguously: it is closed (no boundary), 2-manifold (essentially, each edge is shared by exactly two faces), and free of self-intersections.
MeshLab provides tools for visualizing boundaries, non-manifold edges, and self-intersections. Correcting them is possible in many different ways (deleting the non-manifold parts and filling the holes, or drastic remeshing).
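If you prefer to check and repair this programmatically rather than visually, a rough sketch (the trimesh library and the file names are my own choice here, not something from the question; MeshLab's filters do the same job interactively):

import trimesh

mesh = trimesh.load("scan.stl", force="mesh")   # hypothetical input file
print("watertight:", mesh.is_watertight)
print("winding consistent:", mesh.is_winding_consistent)

if not mesh.is_watertight:
    trimesh.repair.fill_holes(mesh)    # closes simple holes only
    trimesh.repair.fix_normals(mesh)   # makes face orientation consistent
mesh.export("fixed.stl")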

Should I use MySQL Geo-Spatial data types for vector graphics

I am working on a project where I need to store and do computations on SVG paths and points (preferably in MySQL). I need to be able to quickly query whether a point lies within a path. MySQL's Geo-spatial features seem to support this kind of query with the ST_Within function.
However, I have found two opposing claims about whether MySQL's Geo-spatial functionality takes the 'curvature of the earth' into account: "I understand spatial will factor in the curvature of the earth" versus "all calculations are performed assuming Euclidean (planar) geometry as opposed to the geocentric system (coordinates on the Earth's surface)". So, my question is which of these claims is true, and whether/how this affects me.
Also, any general advice on whether I should be taking this approach of storing SVG objects as MySQL Geo-spatial data types is welcome.
Upon further research, it seems that the second claim is true. That is, all computations in MySQL are done without regard to the curvature of the earth; a flat plane is assumed. References:
https://www.percona.com/blog/2013/10/21/using-the-new-mysql-spatial-functions-5-6-for-geo-enabled-applications/
http://www.programering.com/a/MTNwQjMwATI.html
http://blog.karmona.com/index.php/2010/11/01/the-geospatial-cloud/
General advice on whether I should be taking this approach of storing SVG objects as MySQL Geo-spatial data types is still very much welcome.
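For what it's worth, here is a minimal sketch of the planar point-in-path query itself, as it might look from Python (the database, table, and column names are made up, and mysql-connector-python is just one client option):

import mysql.connector

conn = mysql.connector.connect(user="user", password="pw", database="drawings")  # hypothetical credentials
cur = conn.cursor()

# Each closed SVG path is stored as a POLYGON in the `outline` geometry column.
# Coordinates are treated as plain Cartesian values, i.e. the flat plane noted above.
cur.execute(
    "SELECT id FROM svg_paths "
    "WHERE ST_Within(ST_GeomFromText('POINT(10 20)'), outline)"
)
print(cur.fetchall())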

How can I create a classifier using the feature map of a CNN?

I intend to make a classifier using the feature map obtained from a CNN. Can someone suggest how I can do this?
Would it work if I first train the CNN using positive and negative samples (and hence obtain the weights), and then, every time I need to classify an image, apply the conv and pooling layers to obtain the feature map? The problem I see with this is that the image I want to classify may not have a similar feature map, and hence I wouldn't be able to compute the distance correctly, as the order of the features may be different in the layer.
You can use the same CNN for classification if you trained it with (for example) the cross-entropy loss (also known as softmax with loss). In that case, you take the argmax of the last layer (the node with the highest score), and that is the class predicted by the network. However, any architecture used in machine learning expects, at test time, an input similar to those used during training.
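If you specifically want to use the feature map with a separate classifier, a common pattern is to cut the network after its convolutional/pooling layers and train a lightweight classifier on the resulting feature vectors; because the same layers are applied to every image, the feature layout stays consistent between training and test time. A rough sketch with PyTorch and scikit-learn (neither is mentioned in this thread; the model choice, input shapes, and the train/test variables are assumptions):

import torch
import torchvision
from sklearn.svm import LinearSVC

# Pretrained backbone with the final fully connected layer removed.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

def extract_features(images):
    # images: float tensor of shape (N, 3, 224, 224), already normalized
    with torch.no_grad():
        return backbone(images).numpy()   # (N, 512) pooled feature vectors

# Hypothetical data: train_images / test_images (tensors), train_labels (0/1 list)
clf = LinearSVC().fit(extract_features(train_images), train_labels)
predictions = clf.predict(extract_features(test_images))

If you prefer a plain softmax classifier instead of an SVM, you can simply keep the network's own fully connected layer, which is what the paragraph above describes.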

OpenGL Newbie - Best way to move objects about in a scene

I'm new to OpenGL and graphics programming in general, though I've always been interested in the topic so have a grounding in the theory.
What I'd like to do is create a scene in which a set of objects move about. Specifically, they're robotic soccer players on a field. The objects are:
The lighting, field and goals, which don't change
The ball, which is a single mesh which will undergo translation and rotation but not scaling
The players, which are each composed of body parts, each of which are translated and rotated to give the illusion of a connected body
So to my GL-novice mind, I'd like to load these objects into the scene and then just move them about. No properties of the vertices will change: neither their positions nor their textures/normals/etc., just the transformation of their 'parent' object as a whole.
Furthermore, the players all have identical bodies. Can I optimise somehow by loading the model into memory once, then painting it multiple times with a different transformation matrix each time?
I'm currently playing with OpenTK which is a lightweight wrapper on top of OpenGL libraries.
So a helpful answer to this question would either be:
What parts of OpenGL give me what I need? Do I have to redraw all the faces every frame? Just those that move? Can I just update some transformation matrices? How simple can I make this using OpenTK? What would pseudocode look like? Or,
Is there a better framework that's free (ideally open source) and provides this level of abstraction?
Note that I require any solution to run in .NET across multiple platforms.
Using so-called vertex arrays is probably the surest way to optimize such a scene. Here's a good tutorial:
http://www.songho.ca/opengl/gl_vertexarray.html
A vertex array, or more generally a GL data array, holds data such as vertex positions, normals, and colors. You can also have an array that holds indices into these buffers to indicate the order in which to draw them.
Then there are a few closely related functions that manage these arrays: allocate them, fill them with data, and draw them. You can render a complex mesh with just a single OpenGL command such as glDrawElements().
These arrays generally reside in host memory. A further optimization is to use vertex buffer objects (VBOs), which are the same concept as regular arrays but reside in GPU memory and can be somewhat faster. Here's a bit about that:
http://www.songho.ca/opengl/gl_vbo.html
Working with arrays and buffers, as opposed to good old glBegin()..glEnd(), has the advantage of being compatible with OpenGL ES. In OpenGL ES, arrays and buffers are the only way to draw anything.
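To make the idea concrete, here is roughly that call sequence as a minimal Python/PyOpenGL sketch (the question is about OpenTK/.NET, but the GL calls map one-to-one; GLUT is used here only to get a window, and the triangle data is made up):

import numpy as np
from OpenGL.GL import *
from OpenGL.GLUT import *

# One triangle; a real mesh would simply use bigger arrays plus normals, colors, etc.
vertices = np.array([[-0.5, -0.5, 0.0],
                     [ 0.5, -0.5, 0.0],
                     [ 0.0,  0.5, 0.0]], dtype=np.float32)
indices = np.array([0, 1, 2], dtype=np.uint32)

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(3, GL_FLOAT, 0, vertices)   # tell GL where the vertex data lives
    glDrawElements(GL_TRIANGLES, len(indices), GL_UNSIGNED_INT, indices)
    glDisableClientState(GL_VERTEX_ARRAY)
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutInitWindowSize(640, 480)
glutCreateWindow(b"vertex array demo")
glutDisplayFunc(display)
glutMainLoop()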
--- EDIT
Moving things, rotating them, and transforming them in the scene is done using the modelview matrix and does not require any changes to the mesh data. To illustrate:
You have your initialization:
void initGL() {
    // create the set of vertex arrays used to draw a player
    // fill them with data
    // create the set of vertex arrays for the ball
    // fill them with data
}

void drawScene() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // set up the view transformation
    gluLookAt(...);

    drawPlayingField();

    glPushMatrix();
    glTranslatef(/* player 1 position */);
    drawPlayer();
    glPopMatrix();

    glPushMatrix();
    glTranslatef(/* player 2 position */);
    drawPlayer();
    glPopMatrix();

    glPushMatrix();
    glTranslatef(/* ball position */);
    glRotatef(/* ball rotation: angle, axis x, y, z */);
    drawBall();
    glPopMatrix();
}
Since you are beginning, I suggest sticking to immediate-mode rendering and getting that to work first. If you get more comfortable, you can move up to vertex arrays; if you get even more comfortable, VBOs; and finally, if you get super comfortable, instancing, which is the fastest possible solution for your case (no deformations, only whole-object transformations).
Unless you're trying to implement something like Fifa 2009, it's best to stick to the simple methods until you have a demonstrable efficiency problem. No need to give yourself headaches prematurely.
For whole object transformations, you typically transform the model view matrix.
glPushMatrix();
// do gl transforms here and render your object
glPopMatrix();
For loading objects, you'll either need to come up with some format of your own or implement something that can load existing mesh formats (OBJ is one of the easiest to support). There are high-level libraries to simplify this, but I recommend going with plain OpenGL for the experience and control that you'll have.
I'd hoped the OpenGL API might be easy to navigate via IDE support (IntelliSense and such). After a few hours it became apparent that some ground rules needed to be established, so I stopped typing and RTFM'd:
http://www.glprogramming.com/red/
That's the best advice I can give to anyone else who finds this question while getting their OpenGL footing. It's a long read, but empowering.