Grid of 10 km × 10 km from centroids - GIS

I have a point layer containing the centroids of each cell of a grid. Each cell is 10 km by 10 km. How can I recreate this grid from the centroids and the cell size?
I have to say that I am a newbie at GIS.
Thanks

Have a look at this question - you're effectively asking for what Matthew didn't want, which is the simpler version. Make a circular buffer of 5 km (half the cell size) around each centroid, then convert that to a square with Feature Envelope to Polygon (or possibly Minimum Bounding Geometry, depending on your licence level).
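If you happen to be working outside ArcGIS, the same buffer-then-envelope idea is only a few lines with GeoPandas/Shapely. A minimal sketch, assuming the layer is in a projected CRS in metres (the file names are placeholders):

```python
import geopandas as gpd

cell_size = 10_000                             # 10 km, in the layer's projected units (metres)
centroids = gpd.read_file("centroids.shp")     # hypothetical input file

# Buffer each centroid by half the cell size, then take the bounding box
# (envelope) of each circle: a 10 km x 10 km square centred on the point.
cells = centroids.copy()
cells["geometry"] = centroids.geometry.buffer(cell_size / 2).envelope
cells.to_file("grid_cells.shp")
```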
For the record, this kind of question is better suited to GIS.SE, since it's not technically a programming question.

Related

How to get the radial distance to the boundary given a point in ITK?

I'm loading a 3D CT model and doing thinning algorithms on it. Now I'd like to calculate how much thinning the algorithms do. How can I know the distances between skeleton points and their nearest/farthest boundary points?
Compute the distance transform of the boundary (stored as a binary mask) and sample it at the skeleton points. Your answer lies therein.
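A sketch of that idea using SciPy's Euclidean distance transform rather than ITK's own filters; it assumes the object and its skeleton are available as 3D binary NumPy volumes, and the file names are placeholders:

```python
import numpy as np
from scipy import ndimage

object_mask = np.load("object_mask.npy").astype(bool)       # True inside the object
skeleton_mask = np.load("skeleton_mask.npy").astype(bool)    # True on thinned voxels

# Distance from every object voxel to the nearest background (i.e. boundary) voxel,
# in voxel units (pass sampling=spacing for anisotropic CT spacing).
dist_to_boundary = ndimage.distance_transform_edt(object_mask)

# Reading the distance map at the skeleton voxels gives, for each skeleton point,
# the distance to its nearest boundary point (the local radius after thinning).
radii = dist_to_boundary[skeleton_mask]
print(radii.min(), radii.mean(), radii.max())
```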

Calculate 3D coordinates from 2D Image plane accounting for perspective without direct access to view/projection matrix

First time asking a question on the stack exchange, hopefully this is the right place.
I can't seem to develop a close enough approximation algorithm for my situation as I'm not exactly the best in terms of 3D math.
I have a 3d environment in which I can access the position and rotation of any object, including my camera, as well as run trace lines from any two points to get distances between a point and a point of collision. I also have my camera's field of view. I do not have any form of access to the world/view/projection matrices however.
I also have a collection of 2D images that are basically sets of screenshots of the 3D environment from the camera; each set is taken from the same point and angle, typically at about a 60-degree angle down from the horizon.
I have been able to get to the point of using "registration point entities" that can be placed in the 3D world to represent the corners of the 2D image. When a point is picked on the 2D image it is read as a coordinate in the range 0-1, which is then interpolated between the 3D positions of the registration points. This seems to work well, but only if the image is a perfect top-down view. When the camera is tilted and another dimension of perspective is introduced, the results become grossly inaccurate because there is no compensation for this perspective.
I don't need to calculate the height of a point (say, a window on a skyscraper), but I do need at least the coordinate at the base of the image plane; in other words, if I extend a line out from a specified image-space point, I need the point where that line would intersect the ground if nothing were in the way.
All of the material I have found about this says to just deproject the point using the world/view/projection matrices, which I find straightforward in itself, except that I don't have access to these matrices, only data I can collect at screenshot time, and the other algorithms use complex maths I simply don't grasp yet.
One end goal of this would be to be able to place markers in the 3D environment where a user clicks in the image, without being able to run a simple deprojection from the user's view.
Any help would be appreciated, thanks.
Edit: Herp derp, while my implementation for doing so is a bit odd due to the limitations of my situation, the solution essentially boiled down to ananthonline's answer about simply recalculating the view/projection matrices.
Between position, rotation and FOV of the camera, could you not calculate the view/projection matrices of the camera (songho.ca/opengl/gl_projectionmatrix.html), thus allowing you to unproject screen points back into the 3D scene?
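For what it's worth, you don't strictly have to materialise the matrices to go from an image point to the ground: position, rotation and FOV are enough to build the camera basis and cast a ray directly. A rough NumPy sketch under assumed conventions (z-up world, yaw about z, pitch measured from the horizon, ground plane z = 0, image point (u, v) in [0, 1]² with v increasing downwards); all names here are hypothetical and the conventions will need to match your engine:

```python
import numpy as np

def image_point_to_ground(cam_pos, yaw_deg, pitch_deg, fov_y_deg, aspect, u, v):
    """Cast a ray through image point (u, v) in [0,1]^2 and intersect the plane z = 0."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    # Camera basis (assumes the camera is not pointing straight up or down).
    forward = np.array([np.cos(pitch) * np.cos(yaw),
                        np.cos(pitch) * np.sin(yaw),
                        np.sin(pitch)])                  # pitch < 0 looks below the horizon
    right = np.cross(forward, [0.0, 0.0, 1.0])
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    # Half extents of the image plane at unit distance in front of the camera.
    half_h = np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect

    # (u, v) = (0, 0) is the top-left of the image, (1, 1) the bottom-right.
    ray = forward + (2.0 * u - 1.0) * half_w * right + (1.0 - 2.0 * v) * half_h * up

    # Ray/plane intersection with z = 0 (only meaningful if the ray points downward).
    t = -cam_pos[2] / ray[2]
    return cam_pos + t * ray

# Example: camera 50 units up, pitched 60 degrees below the horizon, centre of the image.
print(image_point_to_ground(np.array([0.0, 0.0, 50.0]), 0.0, -60.0, 60.0, 16 / 9, 0.5, 0.5))
```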

Calculate the area a camera can see on a plane

I have a camera with the coordinates x,y at height h that is looking onto the x-y-plane at a specific angle, with a specific field of view. I want to calculate the 4 corners the camera can see on the plane.
There is probably some kind of formula for that, but I can't seem to find it on google.
Edit: I should probably mention that I mean a camera in the 3D-Graphics sense. Specifically I'm using XNA.
I've had to do similar things for debugging graphics code in 3D games. I found the easiest way of thinking about it was creating the vectors representing the corners of the screen and then calculating the intersection with whatever relevant objects (in this case, a plane).
Take a look at your view-projection matrix (or whatever your camera's matrix stack looks like multiplied together) and realize that the screen space it outputs to has corners with homogenized coordinates of (-1, -1), (-1, 1), (1, -1), (1, 1). Knowing this, you're left with one free variable and can solve for the vector representing each corner of the camera's view.
That said, this is a pain. It's much easier to construct a vector for each corner as if the camera isn't rotated or translated, and then transform them by the view matrix to get world-space vectors. Then you can calculate the intersection between the vectors and the plane to get your four corners.
I have a day job, so I leave the math to you. Some links that may help with that, however:
Angle/Field of view calculations
Line plane intersection
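A rough sketch of the corner-vector approach in NumPy (standing in for the XNA math types); it assumes a z-up world, the plane z = 0, and a camera described by position, yaw/pitch and vertical FOV, so the conventions are assumptions to adapt to your setup:

```python
import numpy as np

def footprint_on_ground(cam_pos, yaw_deg, pitch_deg, fov_y_deg, aspect):
    """Return the 4 points where the view-frustum corner rays hit the plane z = 0."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    forward = np.array([np.cos(pitch) * np.cos(yaw),
                        np.cos(pitch) * np.sin(yaw),
                        np.sin(pitch)])
    right = np.cross(forward, [0.0, 0.0, 1.0])
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    half_h = np.tan(np.radians(fov_y_deg) / 2.0)   # image-plane half height at distance 1
    half_w = half_h * aspect

    corners = []
    for sx, sy in [(-1, -1), (-1, 1), (1, 1), (1, -1)]:
        ray = forward + sx * half_w * right + sy * half_h * up
        t = -cam_pos[2] / ray[2]                   # only valid where the corner ray points down
        corners.append(cam_pos + t * ray)
    return corners

# Example: camera 10 units up, looking 45 degrees down, 60 degree vertical FOV.
for corner in footprint_on_ground(np.array([0.0, 0.0, 10.0]), 0.0, -45.0, 60.0, 16 / 9):
    print(corner)
```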
Ignoring lens distortions and assuming the lens is close to the focal point, you just have a triangle formed by the sensor size and the lens, and another from the lens to the subject; similar triangles give you the size of the subject plane.
If you want a tilted object plane, that's just a projection onto the perpendicular object plane.

Create a Graph from points in a Grid that contains holes

I've got a continuous 2-D plane containing polygonal obstacles. I am uniformly sampling the plane at discrete positions to create a uniform grid of points. The grid has no points where obstacles lie (i.e. there are holes wherever an obstacle is), as shown in the image below.
(Please view the image at http://i48.tinypic.com/2efnblg.png for a clear idea of what I'm attempting to accomplish. I couldn't embed it.)
Can anyone point me to some good implementations with optimal worst-case time-complexity?
Solved the problem using recursion.
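For anyone landing here later: the line above is the poster's own fix, but one straightforward non-recursive way to build such a graph is sketched below. It assumes the free-space samples are already in a boolean grid (the names and the point-in-polygon step that produced it are placeholders), connects each free cell to its free 4-neighbours, and runs in time linear in the number of samples, which is as good as the worst case allows for this representation:

```python
def build_grid_graph(free):
    """free[i][j] is True where no obstacle covers the sample point."""
    rows, cols = len(free), len(free[0])
    graph = {}  # (i, j) -> list of neighbouring free cells
    for i in range(rows):
        for j in range(cols):
            if not free[i][j]:
                continue
            neighbours = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and free[ni][nj]:
                    neighbours.append((ni, nj))
            graph[(i, j)] = neighbours
    return graph

# Example: a 3x3 grid with an obstacle hole in the middle.
free = [[True, True, True],
        [True, False, True],
        [True, True, True]]
print(build_grid_graph(free)[(0, 0)])   # -> [(1, 0), (0, 1)]
```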

Radius of multiple latitude/longitude points

I have a program that takes as input an array of lat/long points. I need to perform a check on that array to ensure that all of the points are within a certain radius. So, for example, the maximum radius I will allow is 100 miles. Given an array of lat/long points (coming from a MySQL database; could be 10 points, could be 10,000), I need to figure out if they will all fit in a circle with a radius of 100 miles.
Kinda stumped on how to approach this. Any help would be greatly appreciated.
Find the smallest circle containing all points, and compare its radius to 100.
The easiest way for me to solve this is by converting the coordinates to (X, Y, Z), then finding the distance along the sphere.
Assuming Earth is a sphere (totally untrue) with radius R...
X = R * cos(long) * cos(lat)
Y = R * sin(long) * cos(lat)
Z = R * sin(lat)
At this point, you can approximate the straight-line distance between the points using the extension of the Pythagorean theorem to three-space:
dist = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
But to find the actual distance along the surface, you're going to need to know the angle subtended by the two points from the origin (center of the Earth).
Representing your locations as vectors V1 = (X1, Y1, Z1) and V2 = (X2, Y2, Z2), the angle is:
angle = arcsin(|V1 × V2| / (|V1| |V2|)), where × is the cross product (equivalently, angle = arccos((V1 · V2) / (|V1| |V2|))).
The distance is then:
dist = (Earth's circumference) * angle / (2 * pi)
Of course, this doesn't take into account changes in elevation or the fact that the Earth is wider at the equator.
Apologies for not writing my math in LaTeX.
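A small Python sketch of the maths in this answer (spherical-Earth assumption, approximate mean radius); it uses atan2 of |V1 × V2| and V1 · V2, which is a numerically stable way to get the same angle:

```python
import math

EARTH_RADIUS_MILES = 3958.8   # approximate mean radius

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Surface distance in miles between two lat/lon points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Unit vectors from the Earth's centre to each point.
    v1 = (math.cos(lon1) * math.cos(lat1), math.sin(lon1) * math.cos(lat1), math.sin(lat1))
    v2 = (math.cos(lon2) * math.cos(lat2), math.sin(lon2) * math.cos(lat2), math.sin(lat2))
    dot = sum(a * b for a, b in zip(v1, v2))
    cross = (v1[1] * v2[2] - v1[2] * v2[1],
             v1[2] * v2[0] - v1[0] * v2[2],
             v1[0] * v2[1] - v1[1] * v2[0])
    # Angle subtended at the centre of the Earth by the two points.
    angle = math.atan2(math.sqrt(sum(c * c for c in cross)), dot)
    return EARTH_RADIUS_MILES * angle

print(great_circle_distance(40.7128, -74.0060, 42.3601, -71.0589))  # NYC -> Boston, roughly 190 miles
```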
The answer below involves pretending that the earth is a perfect sphere, which should give a more accurate answer than treating the earth as a flat plane.
To figure out the radius of a set of lat/lon points, you must first ensure that your set of points is "hemispherical", i.e. all the points can fit into some arbitrary half of your perfect sphere.
Check out Section 3 in the paper "Optimal algorithms for some proximity problems on the Gaussian sphere with applications" by Gupta and Saluja. I don't have a specific link, but I believe that you can find a copy online for free. This paper is not sufficient to implement a solution. You'll also need Appendix 1 in "Approximating Centroids for the Maximum Intersection of Spherical Polygons" by Ha and Yoo.
I wouldn't use Megiddo's algorithm for doing the linear programming part of the hemisphericity testing. Instead, use Seidel's algorithm for solving Linear Programming problems, described in "Small-Dimensional Linear Programming and Convex Hulls Made Easy" by Raimund Seidel. Also see "Seidel’s Randomized Linear Programming Algorithm" by Kurt Mehlhorn and Section 9.4 from "Real-Time Collision Detection" by Christer Ericson.
Once you have determined that your points are hemispherical, move on to Section 4 of the paper by Gupta and Saluja. This part shows how to actually get the "smallest enclosing circle" for the points.
To do the required quadratic programming, see the paper "A Randomized Algorithm for Solving Quadratic Programs" by N.D. Botkin. This tutorial is helpful, but the paper uses (1/2)x^T G x - g^T x and the web tutorial uses (1/2)x^T H x + c^T x. One adds the terms and the other subtracts, leading to sign-related problems. Also see this example 2D QP problem. A hint: if you're using C++, the Eigen library is very good.
This method is a little more complicated than some of the 2D methods above, but it should give you more accurate results than just ignoring the curvature of the earth completely. This method also has O(n) time complexity, which is likely asymptotically optimal.
Note: The method described above may not handle duplicate data well, so you may want to check for duplicate lat/lon points before finding the smallest enclosing circle.
Check out the answers to this question. It gives a way to measure the distance between any two (lat,long) points. Then use a smallest enclosing circle algorithm.
I suspect that finding a smallest enclosing circle may be difficult enough on a plane, so to eliminate the subtleties of working with latitude, longitude and spherical geometry, you should probably consider mapping your points to the XY plane. That will introduce some amount of distortion, but if your intended scale is 100 miles you can probably live with that. Once you have a circle and its center on the XY plane, you can always map back to the terrestrial sphere and re-check your distances.
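Putting the last two answers together, a rough Python sketch: project the points onto a local plane with a simple equirectangular approximation (fine at the 100-mile scale), run a Welzl-style smallest-enclosing-circle algorithm, and compare the resulting radius to the limit. All names and the degenerate-case handling here are illustrative rather than production code:

```python
import math, random

EARTH_RADIUS_MILES = 3958.8   # approximate mean radius

def project(lat, lon, lat0, lon0):
    """Equirectangular projection (miles) around a reference point (lat0, lon0)."""
    x = math.radians(lon - lon0) * math.cos(math.radians(lat0)) * EARTH_RADIUS_MILES
    y = math.radians(lat - lat0) * EARTH_RADIUS_MILES
    return (x, y)

def _circumcircle(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:                       # collinear points: no unique circumcircle
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay) + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx) + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.hypot(ax - ux, ay - uy))

def _inside(circle, p, eps=1e-9):
    return circle and math.hypot(p[0] - circle[0], p[1] - circle[1]) <= circle[2] + eps

def smallest_enclosing_circle(points):
    """Welzl-style move-to-front algorithm; expected linear time."""
    pts = list(points)
    random.shuffle(pts)
    circle = None
    for i, p in enumerate(pts):
        if not _inside(circle, p):
            circle = (p[0], p[1], 0.0)
            for j, q in enumerate(pts[:i]):
                if not _inside(circle, q):
                    # Circle with p and q as a diameter.
                    circle = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2,
                              math.hypot(p[0] - q[0], p[1] - q[1]) / 2)
                    for r in pts[:j]:
                        if not _inside(circle, r):
                            circle = _circumcircle(p, q, r) or circle
    return circle

def fits_within_radius(latlon_points, max_radius_miles=100.0):
    lat0 = sum(p[0] for p in latlon_points) / len(latlon_points)
    lon0 = sum(p[1] for p in latlon_points) / len(latlon_points)
    xy = [project(lat, lon, lat0, lon0) for lat, lon in latlon_points]
    return smallest_enclosing_circle(xy)[2] <= max_radius_miles

print(fits_within_radius([(40.7, -74.0), (41.5, -73.2), (40.2, -74.5)]))  # -> True
```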