Interpolating Contours with Spline in ArcGIS 10

I have contours (about 5000 polylines) in a shapefile, and I need to interpolate them with Spline. But the Spline function in ArcGIS 10 can only interpolate points, and my contours are polylines. I hope someone has a solution to my problem. Thanks a lot!

You are looking for the Topo to Raster tool (in the Spatial Analyst and/or 3D Analyst extensions). I think there is also a "Feature to Point" tool available with an ArcInfo license.

Related

Multiple Convex Hull for a single point cloud

I am working on a configuration-space (C-Space) problem for a 6-DOF robot arm.
From a simulation I can get a point cloud that defines my C-Space.
From this C-Space, I would like to be able to know whether a robot configuration (a set of joint angles) is inside the C-Space or not.
So I would like to build a 6-dimensional model from my C-Space, such as a combination of many convex hulls with a given radius.
Then I would like to create or use a function that tells me whether my configuration is inside one of the convex hulls (i.e. inside the C-Space, which means that the configuration is in collision).
Do you have any ideas?
Thanks a lot.
The question is not completely clear yet. I am guessing that you have a point cloud from a laser scanner and would like to approximate it with a set of convex objects to perform collision queries later.
If the point cloud is already clustered into sets, the convex hull of each set can be found fairly quickly using the quickhull algorithm.
If you also want to find the clusters, then a convex decomposition algorithm such as Volumetric Hierarchical Approximate Convex Decomposition (V-HACD) may be what you are looking for. However, you may need an intermediate step to transform the point cloud into a mesh object to pass as input to V-HACD.
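As a sketch of the clustered case: SciPy exposes Qhull's quickhull through `scipy.spatial.ConvexHull`, and a point-in-hull membership query can be done with a Delaunay triangulation of the same points. The cluster below is random stand-in data, and the example uses 3-D for readability; Qhull also handles 6-D, though its cost grows quickly with dimension.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
cluster = rng.random((200, 3))   # stand-in for one clustered set of C-Space samples

hull = ConvexHull(cluster)       # quickhull (Qhull) on the cluster

def in_hull(points, cluster):
    # A point lies inside the convex hull of `cluster` iff it falls in
    # some simplex of the Delaunay triangulation of `cluster`.
    return Delaunay(cluster).find_simplex(np.atleast_2d(points)) >= 0
```

A configuration would then be flagged as in collision if `in_hull` is true for any of the per-cluster hulls.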

DXF generation using ezdxf: polyline containing spline fit points

I am developing a program, and one of the requirements is to take DXF as input. The input is limited to the 2D case only. The program itself is in C++/Qt, but to test it I need some sample DXF input. Spline import is already implemented; the next step is polylines with spline fit points or control points. I decided to use Python/ezdxf to generate such a polyline, as I don't have AutoCAD.
My first approach was to create a spline from fit points using add_spline_control_frame, then convert it to a polyline. The problem is that there turned out to be no conversion from spline to polyline (although I think I saw one in the docs, but I cannot find it anymore).
The current approach is to build the polyline with add_polyline2d(points), setting each point's DXF flags field to 8 (spline vertex created by spline-fitting). The problem is that the points need to be of type DXFVertex (the docs state Vertex, but it is absent), and that type is private to ezdxf.
Please share your approaches, either to the problems I've faced with ezdxf or to the initial problem.
P.S. I tried to use LibreCAD to generate such a polyline, but it is hardly possible to make a closed polyline from spline fit points there.
The ability to create B-splines with the POLYLINE entity was used by AutoCAD before the SPLINE entity was added in DXF R2000. This feature is not documented by Autodesk and is not promoted by ezdxf in any way.
Use the SPLINE entity if you can, but if you have to use DXF R12, there is a helper class in ezdxf to create such splines, ezdxf.render.R12Spline, and a usage example here.
But you will be disappointed: BricsCAD and AutoCAD show a very visible polygon structure:
Because not only the control points but also the approximated curve points have to be stored as polyline points, you have to use many approximation points to get a smoother curve; but then you could just as well use a regular POLYLINE as the approximation. I assume the control points were only stored to keep the spline editable.
All I know about this topic is documented in the r12spline.py file. If you find a better way to create smooth B-splines for DXF R12 with fewer approximation points, please let me know.
Example of approximating a SPLINE entity spline as points, which can be used by the POLYLINE entity (assuming msp is the modelspace the polyline should be added to):
bspline = spline.construction_tool()
msp.add_polyline3d(bspline.approximate(segments=20))
The SPLINE entity is a 3D entity; if you want to squash the spline into the xy-plane, remove the z-axis:
xy_pts = [p.xy for p in bspline.approximate(segments=20)]
msp.add_polyline2d(xy_pts)
# or as LWPOLYLINE entity:
msp.add_lwpolyline(xy_pts, format='xy')

What is meant by regressing convolutional features to a quaternion representation of Rotation?

I'm interested in robot manipulation. I was reading the paper "PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes" and found the following sentence in the introduction, where it explains the three related tasks of PoseCNN. This is the third task.
The 3D Rotation R is estimated by regressing convolutional features extracted inside the bounding box of the object to a quaternion representation of R.
What is meant by regressing convolutional features to a quaternion representation of rotation? How does one regress to a quaternion representation? Can we use a rotation matrix instead of a quaternion, i.e. regress convolutional features to a rotation matrix? If yes, what would be the difference between the two?
"Regressing convolutional features" means that you use the features extracted by the network to predict some numbers.
In your case you are trying to predict the four numbers of a quaternion, which represents a rotation.
I think the reason they regress a quaternion and not a rotation matrix is that quaternions are more compact, more numerically stable, and more efficient. For more information on the differences, see Quaternions and spatial rotation.
Also, I think you could try to regress the rotation matrix directly. If you look at the loss they use for regressing the quaternion, you see that they convert the quaternion to its rotation matrix representation, so the loss itself is on the rotation matrix and not directly on the quaternion.
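Concretely, the regression head outputs four raw numbers per object; normalizing them gives a unit quaternion, which converts to a rotation matrix by the standard formula. A minimal NumPy sketch of that conversion (an illustration of the math, not PoseCNN's actual code):

```python
import numpy as np

def quat_to_rotmat(q):
    # normalize the raw 4-vector predicted by the network to a unit quaternion
    w, x, y, z = np.asarray(q, float) / np.linalg.norm(q)
    # standard unit-quaternion -> rotation-matrix conversion
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

A loss defined on the rotation matrix can then be computed through this conversion, which is why regressing the quaternion and regressing the matrix end up closely related in practice.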

Deep Learning for 3D Point Clouds, volume detection and meshing

I'm working on an archaeological excavation point cloud dataset with over 2.5 billion points. These points come from a trench, a 10 x 10 x 3 m cube. Each point cloud is a layer, and the gaps between them are the excavated volumes. There are 444 volumes from this trench and 700 individual point clouds.
Can anyone point me to algorithms which can mesh these empty spaces? I'm already doing this semi-automatically using Open3D and other Python libraries, but if we could train the program to assess all the point clouds and deduce the volumes, it would save us a lot of time and hopefully give better results.

Surface mesh to volume mesh

I have a closed surface mesh generated with MeshLab from point clouds. I need to get a volume mesh from it so that it is not a hollow object, but I can't figure out how. I need an *.stl file for printing. Can anyone help me get a volume mesh? (I would prefer an easy solution rather than a complex algorithm.)
Given an oriented watertight surface mesh, an oracle function can be derived that determines whether a query line segment intersects the surface (and where): shoot a ray from one end-point and use the even-odd rule (after having spatially indexed the faces of the mesh).
Volumetric meshing algorithms can then be applied using this oracle function to tessellate the interior, typically variants of Marching Cubes or Delaunay-based approaches (see 3D Surface Mesh Generation in the CGAL documentation). The initial surface will however not be exactly preserved.
To my knowledge, MeshLab supports only surface meshes, so it is unlikely to provide a ready-to-use filter for this. Volume meshing packages should however offer this functionality (e.g. TetGen).
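The oracle described above can be sketched in a few lines: Möller–Trumbore ray-triangle intersection plus the even-odd rule. This is a toy version without the spatial index, tested on a unit cube; the query points are chosen to avoid ray-edge degeneracies, which a robust implementation must handle (e.g. by perturbing the ray):

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    # Moeller-Trumbore: return ray parameter t of the hit, or None
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:              # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = orig - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None  # keep only hits in front of the origin

def inside(point, vertices, faces):
    # even-odd rule: an odd number of crossings means the point is interior
    d = np.array([1.0, 0.0, 0.0])  # arbitrary fixed ray direction
    hits = sum(
        ray_triangle(point, d, *vertices[list(f)]) is not None
        for f in faces
    )
    return hits % 2 == 1

# watertight test mesh: the unit cube as 12 triangles
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
F = [(0, 1, 3), (0, 3, 2),   # x = 0
     (4, 6, 7), (4, 7, 5),   # x = 1
     (0, 4, 5), (0, 5, 1),   # y = 0
     (2, 3, 7), (2, 7, 6),   # y = 1
     (0, 2, 6), (0, 6, 4),   # z = 0
     (1, 5, 7), (1, 7, 3)]   # z = 1
```

A volumetric mesher would call such an oracle many times, so in practice the faces are indexed with a BVH or grid before querying.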
The question is not perfectly clear, so I will try a different interpretation. According to your last sentence:
I need to get an *.stl file for printing
It means that you need a 3D model that is suitable for fabrication with a 3D printer, i.e. you need a watertight mesh. A watertight mesh is a mesh that defines the interior of a volume unambiguously: it is closed (no boundary), 2-manifold (mainly, each edge is shared by exactly two faces), and free of self-intersections.
MeshLab provides tools for visualizing boundaries, non-manifold edges, and self-intersections. Correcting them is possible in many different ways (deleting the non-manifold parts and filling holes, or drastic remeshing).
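The "each edge shared by exactly two faces" condition is easy to check programmatically on an indexed triangle mesh; a minimal sketch of the definition (an illustration, not a MeshLab feature):

```python
from collections import Counter

def edge_use_counts(faces):
    # count how many faces reference each undirected edge
    counts = Counter()
    for i, j, k in faces:
        for e in ((i, j), (j, k), (k, i)):
            counts[tuple(sorted(e))] += 1
    return counts

def is_closed_2manifold(faces):
    # necessary condition for watertightness:
    # every edge must lie on exactly two faces
    return all(n == 2 for n in edge_use_counts(faces).values())
```

A tetrahedron passes the check; remove one face and the three boundary edges it exposes make the check fail.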