I want to create a negative buffer distance by adding an inner buffer zone inside my polygons, but when I use QGIS to create this buffer with a negative value, the output layer doesn't display anything, as these screenshots show:
Does anyone have an idea how to fix this problem?
QGIS is helping you out by pointing out that your layer is in degrees, which means that the -0.1 distance you asked for is in degrees, not metres (or any other useful length unit). It's not clear what size your polygon is, but if it is smaller than 0.1 degrees (roughly 10 km) then there is no buffer to display.
So you need to reproject your layer to a planar projection. See this question for details of how to proceed.
Assume we have two rectified photos from stereo cameras, with known pixel positions, and we want to draw the disparity map.
What would be the closest pixel if the pixel in the right photo can move in either direction? I know that the farthest point is the one with the minimum value of q.x - p.x (where p is a pixel in the left photo), so is the maximum value of this the closest?
Thank you
Disparity maps are usually written with signed values which indicate which direction the pixel moves from one image to the other in the stereo pair. For instance if you have a pixel in the left view at location <100,250> and in the right view the corresponding pixel is at location <115,250> then the disparity map for the left view at location <100,250> would have a value of 15. The disparity map for the right view at location <115,250> would have a value of -15.
Disparity maps can be multi-channel images usually with the x-shift in the first channel and the y-shift in the second channel. If you are looking at high resolution stereo pairs with lots of disparity you might not be able to fit all possible disparity values into an 8-bit image. In the film industry most disparity maps are stored as 16 or 32 bit floating point images.
There is no standard method of scaling disparity and it is generally frowned upon since disparity is meant to describe a "physical/concrete/immutable/etc" property. However, sometimes it is necessary. For instance if you want to record disparity of a large stereo pair in an 8-bit image you will have to scale the values to fit into the 8-bit container. You can do this in many different ways.
One way to scale a disparity map is to take the largest absolute disparity value and divide all values by a factor that reduces it to the maximum value in your signed 8-bit world (128). This method is easy to scale back to the original disparity range using a simple multiplier, but it can obviously lead to a reduction in detail due to the step reduction created by the division. For example, if I have an image with a disparity range of 50 to -200, meaning I have 250 possible disparity values, I can divide all values by 200/128 = 1.5625. This gives me a range of 32 to -128, or 160 possible disparity values. When I scale those values back up using a multiply I get 50 to -200 again, but now there are only 160 possible disparity values within that range.
Another method, using the above disparity range, is to simply shift the range. The total range is 250 and our signed 8-bit container can hold 256 values, so we add 200 - 128 = 72 to all values, which gives us a new range of 122 to -128. This allows us to keep all of the disparity steps and get the exact input image back simply by subtracting our shift factor from the stored image.
Conversely, if you have a disparity map with a range of -5 to 10, you might want to expand that range to include subpixel disparity values, so you might scale 10 up to 128 and -5 down to -64. This gives a broader range of values, but the total number of possible values will change from frame to frame depending on the input disparity range.
The problem with scaling methods is that they can be lossy and each saved image will have a scaling factor/method that needs to be reversed. If each image has a separate scaling factor, then that factor has to be stored with the image. If each image has the same scaling factor then there will be a larger degradation of the data due to the reduction of possible values. This is why it is generally good practice to store disparity maps at higher bit-depths to ensure the integrity of the data.
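As a concrete sketch of the scale and shift methods above (NumPy, using the hypothetical 50 to -200 range from the example; the sample values are made up for illustration):

```python
import numpy as np

# Hypothetical disparity values spanning the 50 to -200 range from the example.
disp = np.array([50.0, 0.0, -125.0, -200.0])

# Scale method: divide so the largest magnitude fits the signed 8-bit limit (128).
factor = 200 / 128                                # 1.5625
scaled = np.round(disp / factor).astype(np.int8)  # range becomes 32 .. -128
restored = scaled * factor                        # back to the 50 .. -200 range
                                                  # (intermediate values lose
                                                  # precision to the rounding)

# Shift method: slide the whole range so it fits without changing step size.
shift = 200 - 128                                 # 72
shifted = (disp + shift).astype(np.int8)          # range becomes 122 .. -128
exact = shifted.astype(np.float64) - shift        # exact original values
```

Note that the shift method reverses losslessly, while the scale method only recovers the original values up to the rounding step.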
I have a point layer containing the centroids of each cell of a grid. Each cell is 10 km by 10 km. How can I recreate this grid from the centroids and the cell size?
I have to say that I am a newbie in GIS things.
Thanks
Have a look at this question - you're effectively asking for what Matthew didn't want, which is the simpler version. Make a circular buffer around each centroid, and convert that to a square with Feature Envelope to Polygon (or possibly Minimum Bounding Geometry, depending on your licence level).
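If you'd rather skip the buffer-and-envelope tools, the geometry itself is a few lines of code. A minimal, library-free sketch, assuming the centroid coordinates are in a projected CRS with metre units (the example centroid is made up):

```python
CELL = 10_000  # cell size in metres (10 km)

def cell_from_centroid(cx, cy, size=CELL):
    """Return the square cell around a centroid as a closed corner ring."""
    h = size / 2  # half the cell width
    return [(cx - h, cy - h), (cx + h, cy - h),
            (cx + h, cy + h), (cx - h, cy + h), (cx - h, cy - h)]

ring = cell_from_centroid(5000, 5000)
# ring[0] is (0.0, 0.0), ring[2] is (10000.0, 10000.0)
```

Feed each ring to whatever polygon constructor your GIS library provides.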
For the record, this kind of question is better suited to GIS.SE, since it's not technically a programming question.
I am making a 3D space game in Stage3D and would like a field of stars drawn behind ALL other objects. I think the problem I'm encountering is that the distances involved are very high. If I have the stars genuinely much farther than other objects I have to scale them to such a degree that they do not render correctly - above a certain size the faces seem to flicker. This also happens on my planet meshes, when scaled to their necessary sizes (12000-100000 units across).
I am rendering the stars on flat plane textures, pointed to face the camera. So long as they are not scaled up too much, they render fine, although obviously in front of other objects that are further away.
I have tried all manner of depthTestModes (Context3DCompareMode.LESS, Context3DCompareMode.GREATER and all the others) combined with including and excluding the mesh in the z-buffer, to get the stars to render only if NO other pixels are present where the star would appear, without luck.
Is anyone aware of how I could achieve this - or, even better, of why meshes above a certain size do not render properly? Is there an arbitrary upper limit that I'm not aware of?
I don't know Stage3D, and I'm talking in OpenGL language here, but the usual way to draw a background/skybox is to draw the background close up (not far away) and draw it first, then either disable depth-buffer writing while the background is being drawn (if it does not require depth buffering itself) or clear the depth buffer after the background is drawn and before the regular scene is.
Your flickering of planets may be due to lack of depth buffer resolution; if this is so, you must choose between
drawing the objects closer to the camera,
moving the camera frustum's near plane farther out or its far plane closer (this will increase depth buffer resolution across the entire scene), or
rendering the scene multiple times at mutually exclusive depth ranges (this is called depth peeling).
You could use Starling; it can work:
http://www.adobe.com/devnet/flashplayer/articles/away3d-starling-interoperation.html
http://www.flare3d.com/blog/2012/07/24/flare3d-2-5-starling-integration/
You have to look at how projection and vertex shader output is done.
The vertex shader output has four components: x,y,z,w.
From that, pixel coordinates are computed:
x' = x/w
y' = y/w
z' = z/w
z' is what ends up in the z buffer.
So by simply putting z = w*value at the end of your vertex shader, you can output any constant depth. Just put value = .999 and there you are! Your regular LESS depth test will work.
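A quick numeric check of the trick (plain Python standing in for the shader arithmetic; the w values are arbitrary illustration data):

```python
# After the perspective divide, z' = z / w. Writing z = w * value in the
# vertex shader therefore makes every vertex land at the same depth,
# regardless of its w.
VALUE = 0.999

def depth_after_divide(z, w):
    return z / w

for w in (1.0, 50.0, 4000.0):   # arbitrary clip-space w values
    z = w * VALUE               # what the modified vertex shader would output
    assert abs(depth_after_divide(z, w) - VALUE) < 1e-12
```

Any value just below 1.0 pushes the background behind everything that writes real depth, while still passing the far-plane clip.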
I'm loading a 3D CT model and doing thinning algorithms on it. Now I'd like to calculate how much thinning the algorithms do. How can I know the distances between skeleton points and their nearest/farthest boundary points?
Compute the distance transform of the skeleton points and boundary points (stored as binary masks). Your answer lies therein.
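If you have the skeleton and boundary as explicit point lists rather than masks, a brute-force version of the same idea is a few lines of NumPy (the arrays below are made-up illustration data; a true distance transform such as SciPy's distance_transform_edt scales much better for full volumes):

```python
import numpy as np

# Hypothetical (z, y, x) coordinates of skeleton and boundary voxels.
skeleton = np.array([[1.0, 1.0, 1.0]])
boundary = np.array([[0.0, 1.0, 1.0], [4.0, 1.0, 1.0]])

# Pairwise Euclidean distances: shape (n_skeleton, n_boundary).
d = np.linalg.norm(skeleton[:, None, :] - boundary[None, :, :], axis=2)

nearest = d.min(axis=1)   # distance from each skeleton point to its closest boundary point
farthest = d.max(axis=1)  # distance from each skeleton point to its farthest boundary point
```

For a skeleton of N points and a boundary of M points this is O(N*M) in time and memory, so chunk the computation for large volumes.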
I have a program that takes as input an array of lat/long points. I need to perform a check on that array to ensure that all of the points are within a certain radius. So, for example, the maximum radius I will allow is 100 miles. Given an array of lat/long (coming from a MySQL database, could be 10 points could be 10000) I need to figure out if they will all fit in a circle with radius of 100 miles.
Kinda stumped on how to approach this. Any help would be greatly appreciated.
Find the smallest circle containing all points, and compare its radius to 100.
The easiest way for me to solve this is by converting the coordinates to (X, Y, Z), then finding the distance along the sphere.
Assuming Earth is a sphere (totally untrue) with radius R...
X = R * cos(long) * cos(lat)
Y = R * sin(long) * cos(lat)
Z = R * sin(lat)
At this point, you can approximate the distance between the points using the extension of the Pythagorean theorem to three-space:
dist = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
But to find the actual distance along the surface, you're going to need to know the angle subtended by the two points from the origin (center of the Earth).
Representing your locations as vectors V1 = (X1, Y1, Z1) and V2 = (X2, Y2, Z2), the angle is:
angle = arcsin(|V1 × V2| / (|V1| |V2|)), where × is the cross product and |V1 × V2| is the magnitude of the resulting vector.
The distance is then:
dist = (Earth's circumference) * angle / (2 * pi)
Of course, this doesn't take into account changes in elevation or the fact that the Earth is wider at the equator.
Apologies for not writing my math in LaTeX.
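Putting the formulas above together in plain Python (R is the mean Earth radius in miles; atan2 of |V1 × V2| and V1 · V2 replaces the arcsin form, since it behaves better numerically for angles near 90 degrees):

```python
import math

R = 3958.8  # mean Earth radius in miles (spherical assumption)

def to_xyz(lat, lon):
    """Convert latitude/longitude in degrees to a Cartesian point on the sphere."""
    lat, lon = math.radians(lat), math.radians(lon)
    return (R * math.cos(lon) * math.cos(lat),
            R * math.sin(lon) * math.cos(lat),
            R * math.sin(lat))

def great_circle(p1, p2):
    """Surface distance in miles between two (lat, lon) points."""
    v1, v2 = to_xyz(*p1), to_xyz(*p2)
    dot = sum(a * b for a, b in zip(v1, v2))
    cross = (v1[1] * v2[2] - v1[2] * v2[1],
             v1[2] * v2[0] - v1[0] * v2[2],
             v1[0] * v2[1] - v1[1] * v2[0])
    angle = math.atan2(math.sqrt(sum(c * c for c in cross)), dot)
    return R * angle  # equivalent to circumference * angle / (2 * pi)

# Equator to the North Pole: a quarter of the circumference.
great_circle((0.0, 0.0), (90.0, 0.0))   # ≈ 6218.5 miles
```
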
This answer involves pretending that the earth is a perfect sphere, which should give a more accurate result than treating the earth as a flat plane.
To figure out the radius of a set of lat/lon points, you must first ensure that your set of points is "hemispherical", i.e. all the points can fit into some arbitrary half of your perfect sphere.
Check out Section 3 in the paper "Optimal algorithms for some proximity problems on the Gaussian sphere with applications" by Gupta and Saluja. I don't have a specific link, but I believe that you can find a copy online for free. This paper is not sufficient to implement a solution. You'll also need Appendix 1 in "Approximating Centroids for the Maximum Intersection of Spherical Polygons" by Ha and Yoo.
I wouldn't use Megiddo's algorithm for doing the linear programming part of the hemisphericity testing. Instead, use Seidel's algorithm for solving Linear Programming problems, described in "Small-Dimensional Linear Programming and Convex Hulls Made Easy" by Raimund Seidel. Also see "Seidel’s Randomized Linear Programming Algorithm" by Kurt Mehlhorn and Section 9.4 from "Real-Time Collision Detection" by Christer Ericson.
Once you have determined that your points are hemispherical, move on to Section 4 of the paper by Gupta and Saluja. This part shows how to actually get the "smallest enclosing circle" for the points.
To do the required quadratic programming, see the paper "A Randomized Algorithm for Solving Quadratic Programs" by N.D. Botkin. This tutorial is helpful, but the paper uses (1/2)x^T G x - g^T x and the web tutorial uses (1/2)x^T H x + c^T x. One adds the terms and the other subtracts, leading to sign-related problems. Also see this example 2D QP problem. A hint: if you're using C++, the Eigen library is very good.
This method is a little more complicated than some of the 2D methods above, but it should give you more accurate results than just ignoring the curvature of the earth completely. This method also has O(n) time complexity, which is likely asymptotically optimal.
Note: The method described above may not handle duplicate data well, so you may want to check for duplicate lat/lon points before finding the smallest enclosing circle.
Check out the answers to this question. It gives a way to measure the distance between any two (lat,long) points. Then use a smallest enclosing circle algorithm.
I suspect that finding a smallest enclosing circle may be difficult enough on a plane, so to eliminate the subtleties of working with latitude, longitude, and spherical geometry, you should probably consider mapping your points to the XY plane. That will introduce some amount of distortion, but if your intended scale is 100 miles you can probably live with that. Once you have a circle and its center on the XY plane, you can always map back to the terrestrial sphere and re-check your distances.
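For the "smallest enclosing circle" step itself, once the points are on the XY plane, Welzl-style incremental construction runs in expected linear time. A minimal pure-Python sketch (not hardened against pathological collinear input):

```python
import math
import random

def _circle_two(a, b):
    """Circle with segment ab as diameter: (cx, cy, r)."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2, math.dist(a, b) / 2)

def _circle_three(a, b, c):
    """Circumcircle of three points, or None if they are (near-)collinear."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def _inside(circle, p, eps=1e-9):
    return circle is not None and math.dist((circle[0], circle[1]), p) <= circle[2] + eps

def smallest_enclosing_circle(points):
    """Return (cx, cy, r) of the minimum circle covering all points."""
    pts = list(points)
    random.shuffle(pts)  # randomized order gives expected O(n) behaviour
    c = None
    for i, p in enumerate(pts):
        if not _inside(c, p):
            c = (p[0], p[1], 0.0)            # p must lie on the boundary
            for j, q in enumerate(pts[:i]):
                if not _inside(c, q):
                    c = _circle_two(p, q)    # p and q on the boundary
                    for k in pts[:j]:
                        if not _inside(c, k):
                            c = _circle_three(p, q, k)
    return c
```

Run it on the projected points, then simply compare the returned radius against your 100-mile threshold.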