I've got a continuous 2-D plane containing polygonal obstacles. I am uniformly sampling the plane at discrete positions to create a uniform grid of points. The grid has no points where obstacles lie (i.e. holes wherever an obstacle is), as shown in the image below.
(Please view the image at http://i48.tinypic.com/2efnblg.png for a clear idea of what I'm attempting to accomplish. I couldn't embed it.)
Can anyone point me to some good implementations with optimal worst-case time-complexity?
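For concreteness, the grid construction described above can be sketched with a standard ray-casting point-in-polygon test. This is a minimal Python sketch, not an optimal implementation; the integer `width`/`height`/`step` parameters are illustrative only:

```python
def point_in_polygon(x, y, polygon):
    # Even-odd rule: cast a ray in the +x direction and count edge crossings.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge spans the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def sample_grid(width, height, step, obstacles):
    # Keep only the grid points that fall outside every obstacle polygon.
    return [(x, y)
            for x in range(0, width + 1, step)
            for y in range(0, height + 1, step)
            if not any(point_in_polygon(x, y, poly) for poly in obstacles)]
```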
Solved the problem using recursion.
Let's say I use the Graphics class at runtime to draw some vector shapes dynamically. For example a square and a circle.
Is there a way to create a new shape at runtime where those 2 vectors shapes overlap?
Those kinds of operations are very common in all vector design programs such as Illustrator, Corel, etc., but I haven't found anything in Adobe's documentation, or anywhere else, on how to do it in code.
Although drawing operations on the Graphics class are described in terms of lines, points, etc., this is, as far as you're concerned, just telling it what to draw onto a bitmap. There's no way to remove a shape once drawn, short of clear(), which wipes the whole thing clean.
I don't fully understand why, as the vector data must be retained - there's no loss of quality on scaling after drawing, for example.
If you don't want to get into some hardcore maths (and for anything beyond straight lines, you'll need to), there's an answer here which might help if you've ever used PixelBender:
How to calculate intersection between shapes in flash / action script ? (access to shape's segments and nodes?)
Failing that, if it's just cosmetic you could play around with masking shapes (will probably end up quite hacky though) - however, if you actually want to use the intersection to draw or describe a shape you will need to dig out your maths book or look for a good graphics library.
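Outside Flash, the underlying boolean operation is standard computational geometry, and general-purpose libraries implement it directly. As a sketch of the concept only (Python with the Shapely library, not an ActionScript solution):

```python
from shapely.geometry import Point, box

# A 10x10 square and a circle of radius 6 centered on one of its corners.
square = box(0, 0, 10, 10)
circle = Point(10, 10).buffer(6)   # buffer() approximates the circle as a polygon

overlap = square.intersection(circle)   # the new shape where the two overlap
print(overlap.area)                     # roughly a quarter of the circle's area
print(list(overlap.exterior.coords))    # outline of the intersection shape
```

Porting an algorithm like this (polygon clipping) to ActionScript is exactly the "hardcore maths" referred to above.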
Hope this helps
I would like to create a wire frame effect using a shader program written in AGAL for Stage3D.
I have been Googling and I understand that I can determine how close a pixel is to the edge of a triangle using barycentric coordinates (BC) passed into the fragment program via the vertex program, then colour it accordingly if it is close enough.
My confusion is over what method I would use to pass this information into the shader program. I have a simple example set up with a cube: 8 vertices and an index buffer to draw the triangles between them.
If I were to place the BCs into the vertex buffer, that wouldn't make sense, as they would need to be different depending on which triangle was being rendered; e.g. Vertex1 might need (1,0,0) when rendered with Vertex2 and Vertex3, but another value when rendered with Vertex5 and Vertex6. Perhaps I am not understanding the method completely.
Do I need to duplicate vertex positions and add the additional data into the vertex buffer, essentially making 3 vertices per triangle and tripling my vertex count?
Do I always give the vertex a (1,0,0), (0,1,0) or (0,0,1) value or is this just an example?
Am I overcomplicating this, and is there an easier way to do wireframe rendering with shaders and Stage3D?
Hope that fully explains my problems. Answers are much appreciated, thanks!
It all depends on your geometry, and this problem is in fact a problem of graph vertex coloring: you need your geometry graph to be 3-colorable. A good starting point is the Wikipedia article.
Just as an example, let's say the (1, 0, 0) basis vector is red, (0, 1, 0) is green and (0, 0, 1) is blue. It's obvious that if you build your geometry using the following basic element
then you can avoid duplicating vertices, because such a graph will be 3-colorable (i.e. each edge, and thus each triangle, will have differently colored vertices). You can tile this basic element in any direction, and the graph will remain 3-colorable:
You've stumbled upon the thing that drives me nuts about AGAL/Stage3D. Limitations in the API prevent you from using shared vertices in many circumstances. Wireframe rendering is one example where things break down...but simple flat shading is another example as well.
What you need to do is create three unique vertices for each triangle in your mesh. For each vertex, add an extra param (or design your engine to accept vertex normals and reuse those, since you won't likely be shading your wireframe).
Assign the three vertices of each triangle the unit vectors A[1,0,0], B[0,1,0], and C[0,0,1] respectively. This will get you started. Note: the obvious solution (thresholding in the fragment shader and conditionally drawing pixels) produces pretty ugly aliased results. Check out this page for some insight into techniques to anti-alias your fragment-program-rendered wireframes:
http://cgg-journal.com/2008-2/06/index.html
As I mentioned, you need to employ a similar technique (unique vertices for each triangle) if you wish to implement flat shading. Since there is no equivalent to GL_FLAT and no way to make the varying registers return an average, the only way to implement flat shading is for each vertex of a given triangle to calculate the same lighting, which implies that each vertex needs the same vertex normal.
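As a language-neutral illustration of that vertex duplication (a minimal Python sketch of the buffer preparation step; the function name and flat data layout are illustrative, not Stage3D API):

```python
def expand_for_wireframe(vertices, indices):
    # Expand an indexed mesh into unique per-triangle vertices, each tagged
    # with one barycentric basis vector for the fragment program to interpolate.
    bary = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
    out_vertices, out_indices = [], []
    for tri_start in range(0, len(indices), 3):
        for corner in range(3):
            x, y, z = vertices[indices[tri_start + corner]]
            out_vertices.append((x, y, z) + bary[corner])  # xyz + barycentric rgb
            out_indices.append(len(out_vertices) - 1)
    return out_vertices, out_indices
```

In the fragment program, a pixel then lies near an edge when the smallest of its three interpolated barycentric components drops below a threshold, which is exactly where the anti-aliasing techniques from the linked article come in.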
First time asking a question on the stack exchange, hopefully this is the right place.
I can't seem to develop a close enough approximation algorithm for my situation as I'm not exactly the best in terms of 3D math.
I have a 3D environment in which I can access the position and rotation of any object, including my camera, and I can run trace lines between any two points to get the distance from a point to a point of collision. I also have my camera's field of view. I do not, however, have any access to the world/view/projection matrices.
I also have a collection of 2D images that are basically screenshots of the 3D environment from the camera. Each collection is taken from the same point and angle, on average about 60 degrees down from the horizon.
I have been able to get to the point of using "registration point entities" placed in the 3D world to represent the corners of the 2D image. When a point is picked on the 2D image, it is read as a coordinate in the range 0-1, which is then interpolated between the 3D positions of the registration points. This works well, but only if the image is a perfect top-down view. When the camera is tilted and another dimension of perspective is introduced, the results become grossly inaccurate, as there is no compensation for the perspective.
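For clarity, the interpolation described here amounts to a bilinear blend between the four corner points. A minimal Python sketch (the corner ordering is an assumption), which, as noted, only holds up for a perfect top-down view:

```python
def bilerp(corners, u, v):
    # corners: 3D positions of the four registration points, ordered
    # top-left, top-right, bottom-left, bottom-right.
    # u, v: the picked image coordinate, each in the range 0-1.
    tl, tr, bl, br = corners
    top = [a + (b - a) * u for a, b in zip(tl, tr)]        # along the top edge
    bottom = [a + (b - a) * u for a, b in zip(bl, br)]     # along the bottom edge
    return [a + (b - a) * v for a, b in zip(top, bottom)]  # blend between the two
```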
I don't need to be able to calculate the height of a point, say a window on a skyscraper, but I need at least the point at its base on the ground plane. In other words, if I extend a ray out from the camera through a specified image-space point, I need at least the point where that ray would intersect the ground if nothing were in the way.
All of the material I found about this says to just deproject the point using the world/view/projection matrices, which I find straightforward in itself, except that I don't have access to these matrices, just data I can collect at screenshot time; the other algorithms I've found use complex maths I simply don't grasp yet.
One end goal would be to be able to place markers in the 3D environment where a user clicks in the image, without being able to run a simple deprojection from the user's view.
Any help would be appreciated, thanks.
Edit: Herp derp, while my implementation for doing so is a bit odd due to the limitations of my situation, the solution essentially boiled down to ananthonline's answer about simply recalculating the view/projection matrices.
Between the position, rotation and FOV of the camera, could you not calculate the view/projection matrices of the camera (songho.ca/opengl/gl_projectionmatrix.html), thus allowing you to unproject points from the image back into the world?
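A minimal sketch of that suggestion in Python/NumPy, assuming an OpenGL-style projection, a yaw/pitch rotation convention, and a ground plane at y = 0; all of these are assumptions that would need to match the actual engine's conventions:

```python
import numpy as np

def projection_matrix(fov_y_deg, aspect, near=0.1, far=1000.0):
    # Standard OpenGL-style perspective projection (see the songho.ca link above).
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def view_matrix(position, yaw, pitch):
    # Assumed convention: yaw about +Y, then pitch about +X, angles in radians.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    rot_y = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    rot_x = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    r = rot_x @ rot_y
    m = np.eye(4)
    m[:3, :3] = r
    m[:3, 3] = -r @ np.asarray(position)  # move the world so the camera is at the origin
    return m

def unproject_to_ground(px, py, width, height, view, proj):
    # Cast a ray through pixel (px, py) and intersect it with the plane y = 0.
    inv = np.linalg.inv(proj @ view)
    nx, ny = 2.0 * px / width - 1.0, 1.0 - 2.0 * py / height  # pixel -> NDC
    near = inv @ np.array([nx, ny, -1.0, 1.0])
    far = inv @ np.array([nx, ny, 1.0, 1.0])
    near, far = near[:3] / near[3], far[:3] / far[3]
    d = far - near
    t = -near[1] / d[1]  # assumes the ray is not parallel to the ground
    return near + t * d
```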
I am trying to randomly generate a directed graph for the purpose of making a puzzle game similar to the ice sliding puzzles from pokemon.
This is essentially what I want to be able to randomly generate: http://bulbanews.bulbagarden.net/wiki/Crunching_the_numbers:_Graph_theory
I need to be able to limit the size of the graph in an x and y dimension. In the example in the link, it would be restricted to an 8x4 grid.
The problem I am running into is not randomly generating the graph, but randomly generating a graph that I can properly map out in 2D space, since I need something (like a rock) on the opposite side of a node to make it visually make sense when you stop sliding. The problem is that sometimes the rock ends up in the path between two other nodes, or possibly on another node itself, which breaks the entire graph.
After discussing the problem with a few people I know, we came to a couple of conclusions that may lead to a solution: include the obstacles in the grid as part of the graph when constructing it, or start out with a fully filled grid, draw a random path, and delete the blocks that make that path work; the problem then becomes figuring out which ones to delete so that you don't accidentally introduce an additional, shorter path. We were also thinking a dynamic programming algorithm might be beneficial, though none of us are too skilled at creating dynamic programming algorithms from scratch. Any ideas or references about what this problem is officially called (if it's an official graph problem) would be most helpful.
I wouldn't look at it as a graph problem, since as you say the representation is incomplete. To generate a puzzle I would work directly on a grid, and work backwards; first fix the destination spot, then place rocks in some way to reach it from one or more spots, and iteratively add stones to reach those other spots, with the constraint that you never add a stone which breaks all the paths to the destination.
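To make that acceptance test concrete, here is a minimal Python sketch of the slide mechanic as a graph search; `'#'` marks a rock, and the grid representation is illustrative. During backward generation, a newly placed rock is kept only if `reachable` still returns True:

```python
from collections import deque

def slide(grid, r, c, dr, dc):
    # Slide from (r, c) in direction (dr, dc) until the next cell is a rock or the edge.
    rows, cols = len(grid), len(grid[0])
    while 0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc] != '#':
        r, c = r + dr, c + dc
    return r, c

def reachable(grid, start, goal):
    # BFS over slide moves: stopping cells are the graph's nodes, slides its edges.
    seen, queue = {start}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            return True
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = slide(grid, *cell, *d)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```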
You might want to generate a planar graph, which means that the edges of the graph will not overlap each other in two-dimensional space. By Kuratowski's theorem, a graph is planar exactly when it contains no subdivision of K_3,3 (the complete bipartite graph on six vertices) or K_5 (the complete graph on five vertices).
There's a paper on the fast generation of planar graphs.
I have a similar problem to this post. I need to display up to 1000 polygons on an embedded Google map. The polygons are in a SQL database, and I can render each one as a single KML file on the fly using a custom HttpHandler (in ASP.NET), like this: http://alpha.foresttransparency.org/concession.1.kml
Even on my (very fast) development machine, it takes a while to load up even a couple dozen shapes. So two questions, really:
What would be a good strategy for rendering these as markers instead of overlays once I'm beyond a certain zoom level?
Is there a publicly available algorithm for simplifying a polygon (reducing the number of points) so that I'm not showing more points than make sense at a certain zoom level?
For your second question: you need the Douglas-Peucker Generalization Algorithm
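For reference, a minimal Python sketch of Douglas-Peucker; `epsilon` is the maximum deviation (in map units) a point may have from the simplified line before it must be kept:

```python
import math

def perpendicular_distance(pt, a, b):
    # Distance from pt to the infinite line through a and b.
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    # Keep the endpoints; recursively keep the farthest interior point if it
    # deviates from the chord by more than epsilon, else drop everything between.
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

Tie epsilon to the zoom level (larger epsilon at lower zooms) and you address the "more points than make sense at this zoom" problem directly.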
For your first question: could you calculate the area of each polygon and relate each zoom level to a particular minimum area, so that as you zoom in or out, polygons disappear and markers appear depending on the zoom level?
For the second question, I'd use Mark Bessey's suggestion.
I don't know much about KML, but I think the usual solution to question #2 involves iterating over the points and deleting any line segments under a certain size. This will cause some "unfortunate" effects in some cases, but it's relatively fast and easy to do.
I would recommend 2 things:
- Calculate and combine polygons that are touching. This involves a LOT of processing and hard math, but I've done it, so I know it's possible.
- Create your own overlay as PNGs instead of using KML, combining polygons as in the previous suggestion. You'll have to create a LOT of PNGs, but it is blazing fast on the client.
Good luck :)
I needed a solution to your #2 question a little while ago, and after looking at a few of the available line-simplification algorithms, I created my own.
The process is simple and it seems to work well, though it can be a bit slow if you don't implement it correctly:
P[0..n] is your array of points.
Let T[i] be defined as the triangle formed by points P[i-1], P[i], P[i+1].
Max is the number of points you are trying to reduce this line to.

1. Calculate the area of every possible triangle T[1..n-1] in the set.
2. Choose the triangle T[i] with the smallest area.
3. Remove the point P[i] to essentially flatten the triangle.
4. Recalculate the areas of the affected triangles T[i-1] and T[i+1].
5. Go to step 2 if the number of points > Max.
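This is essentially the Visvalingam-Whyatt approach. A minimal Python sketch of the steps above, in a deliberately simple form:

```python
def triangle_area(a, b, c):
    # Half the magnitude of the cross product of the two edge vectors.
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def simplify_by_area(points, max_points):
    # Repeatedly remove the interior point whose triangle T[i] has the smallest area.
    pts = list(points)
    while len(pts) > max_points and len(pts) > 2:
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        i = areas.index(min(areas)) + 1  # +1 because areas[0] belongs to pts[1]
        del pts[i]                       # flatten the smallest triangle
    return pts
```

This naive version rescans every triangle on each pass, which is the "bit slow if you don't implement it correctly" part; keeping the areas in a priority queue and recalculating only the two affected neighbours (step 4) is what makes it fast.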