Calculate whether a polygon and a raster image intersect - language-agnostic

If I have a binary image and an irregular convex polygon, how can I calculate whether they intersect? The coordinates of the polygon are described in terms of the image.
I have a few ideas on this, coming from either a collision detection or fill algorithm perspective, but I don't think either would be optimal. I'm sure there is a tried and tested method for this but can't think of the keywords.
Here is an example of what I mean:
In this case it should return true.

I would recommend the following algorithm:
Traverse the border of the polygon using Bresenham's line algorithm for each edge, and at each pixel, sample the raster. If it's a color you consider visible, such as a nonzero alpha, report an intersection.
This has the advantage of only visiting the pixels along the edges of the polygon, so you don't need to iterate over all the pixels inside it.
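A minimal sketch of this in TypeScript, assuming the raster is exposed as a flat array of alpha values and the polygon as an array of vertices; the function and parameter names are illustrative:

```typescript
interface Point { x: number; y: number; }

// Returns true if any pixel on the polygon's border is visible (nonzero alpha).
function polygonIntersectsRaster(
  polygon: Point[], alpha: Uint8Array, width: number, height: number
): boolean {
  for (let i = 0; i < polygon.length; i++) {
    const a = polygon[i];
    const b = polygon[(i + 1) % polygon.length]; // wrap around to close the polygon
    if (edgeHitsVisiblePixel(a, b, alpha, width, height)) return true;
  }
  return false;
}

// Walk one edge with Bresenham's line algorithm, sampling the raster at each pixel.
function edgeHitsVisiblePixel(
  a: Point, b: Point, alpha: Uint8Array, width: number, height: number
): boolean {
  let x = Math.round(a.x), y = Math.round(a.y);
  const x1 = Math.round(b.x), y1 = Math.round(b.y);
  const dx = Math.abs(x1 - x), dy = -Math.abs(y1 - y);
  const sx = x < x1 ? 1 : -1, sy = y < y1 ? 1 : -1;
  let err = dx + dy;
  for (;;) {
    if (x >= 0 && x < width && y >= 0 && y < height && alpha[y * width + x] !== 0) {
      return true; // a border pixel overlaps a visible raster pixel
    }
    if (x === x1 && y === y1) return false;
    const e2 = 2 * err;
    if (e2 >= dy) { err += dy; x += sx; }
    if (e2 <= dx) { err += dx; y += sy; }
  }
}
```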


HTML5 Canvas: Quadratic Curve with lengthwise color split

Problem
I struggle to split a CanvasRenderingContext2D.quadraticCurveTo() path lengthwise into two colors.
Question (framings)
How can I split the color of a quadratic curve lengthwise?
How can I draw two parallel quadratic curves?
Background
The user must draw a line in an annotation tool and indicate polarity based on the line's color (this convention is predefined and cannot be changed). Example:
Current best solution
The user specifies N points through which to draw a smooth line (based on this StackOverflow answer). To split the path, I calculate a perpendicular vector for each subpath, merge them to find the average perpendicular vector for the entire path, and redraw the line twice, once shifted up and once shifted down along the perpendicular.
This approach works fine for most curves:
However, it fails for curves that bend back on themselves, e.g.:
Next, I would try using a perpendicular gradient as described in this blog. However, the computation seems to be highly inefficient and I would appreciate any hints on how else I could solve this problem.
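For reference, the perpendicular-offset approach described above reduces to something like the following TypeScript sketch (not the original code; the point layout, the half-width w, and the midpoint smoothing from the linked answer are assumptions):

```typescript
interface Pt { x: number; y: number; }

// Unit vector perpendicular to the segment a -> b.
function perp(a: Pt, b: Pt): Pt {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy) || 1;
  return { x: -dy / len, y: dx / len };
}

// Draw the smoothed line twice, shifted along the averaged perpendicular,
// one color per side. Assumes at least three points.
function drawSplitCurve(ctx: CanvasRenderingContext2D, pts: Pt[], w: number): void {
  // Average the per-segment perpendiculars. This is exactly the step that
  // breaks on back-curving paths: opposing perpendiculars cancel out.
  let px = 0, py = 0;
  for (let i = 0; i < pts.length - 1; i++) {
    const p = perp(pts[i], pts[i + 1]);
    px += p.x;
    py += p.y;
  }
  const len = Math.hypot(px, py) || 1;
  px = (px / len) * (w / 2);
  py = (py / len) * (w / 2);

  const sides: [number, number, string][] = [[px, py, "red"], [-px, -py, "green"]];
  for (const [dx, dy, color] of sides) {
    ctx.strokeStyle = color;
    ctx.beginPath();
    ctx.moveTo(pts[0].x + dx, pts[0].y + dy);
    // Midpoint smoothing: quadratic segments through the shifted points.
    let i = 1;
    for (; i < pts.length - 2; i++) {
      const mx = (pts[i].x + pts[i + 1].x) / 2 + dx;
      const my = (pts[i].y + pts[i + 1].y) / 2 + dy;
      ctx.quadraticCurveTo(pts[i].x + dx, pts[i].y + dy, mx, my);
    }
    ctx.quadraticCurveTo(pts[i].x + dx, pts[i].y + dy, pts[i + 1].x + dx, pts[i + 1].y + dy);
    ctx.stroke();
  }
}
```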

Drawing over terrain with depth test?

I'm trying to render geometric shapes over uneven terrain (loaded from a heightmap; the shapes' geometry is also generated from averaged heights across the heightmap, but they do not fit it exactly). I have the following problem: sometimes the terrain shows through the shape, as shown in the picture.
I need to draw both terrain and shapes with depth testing enabled so they do not obstruct other objects in the scene. Could someone suggest a solution to make sure the shapes are always rendered on top? Lifting them up is not really feasible... I need to replace the colors of the actual pixels on the terrain, and doing this in the pixel shader seems too expensive.
Thanks in advance!
I had a similar problem and this is how I solved it:
First render the terrain and keep the depth buffer. Do not render any objects yet.
Render a solid bounding box of the shape you want to put on the terrain. You need to make sure that the bounding box covers the whole height range the shape covers; an over-conservative estimate is to use the global minimum and maximum elevation of the entire terrain.
In the pixel shader, read the depth buffer and reconstruct the world-space position.
Check whether this position is inside your shape. In your case, you can check whether its xy (xz) projection is within the given distance from the center of your circle.
Transform this position into your shape's local coordinate system and compute the desired color.
Alpha-blend over the render target.
This method results in shapes perfectly aligned with the terrain surface. It also does not produce any artifacts and works with any terrain.
The possible drawback is that it requires using deferred-style shading and I do not know if you can do this. Still, I hope this might be helpful for you.
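The depth-reconstruction and inside-circle test above boil down to a little linear algebra. Here is a CPU-side TypeScript sketch for illustration only (a real implementation does this in the pixel shader); invViewProj, the [0, 1] depth convention, and the circle parameters are assumptions:

```typescript
// 4x4 row-major matrix times a vec4; m is the inverse view-projection matrix.
function transform(m: number[], v: [number, number, number, number]): number[] {
  const out: number[] = [];
  for (let r = 0; r < 4; r++) {
    out[r] = m[r * 4] * v[0] + m[r * 4 + 1] * v[1] + m[r * 4 + 2] * v[2] + m[r * 4 + 3] * v[3];
  }
  return out;
}

// Reconstruct the world-space position of a pixel from its depth-buffer value,
// then test whether its horizontal projection falls inside a circle on the terrain.
function pixelInsideCircle(
  u: number, v: number,              // pixel position in [0, 1]
  depth: number,                     // depth-buffer sample in [0, 1]
  invViewProj: number[],             // inverse of view * projection (assumed input)
  cx: number, cz: number, r: number  // circle center and radius on the terrain
): boolean {
  // NDC coordinates; the z mapping below is the GL convention, adjust for your API.
  const ndc: [number, number, number, number] = [u * 2 - 1, 1 - v * 2, depth * 2 - 1, 1];
  const world = transform(invViewProj, ndc);
  const wx = world[0] / world[3], wz = world[2] / world[3]; // perspective divide
  const dx = wx - cx, dz = wz - cz;
  return dx * dx + dz * dz <= r * r;
}
```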

How do I pass barycentric coordinates to an AGAL shader? (AGAL wireframe shader)

I would like to create a wire frame effect using a shader program written in AGAL for Stage3D.
I have been Googling and I understand that I can determine how close a pixel is to the edge of a triangle using barycentric coordinates (BC) passed into the fragment program via the vertex program, then colour it accordingly if it is close enough.
My confusion is about what method I would use to pass this information into the shader program. I have a simple example set up with a cube: 8 vertices and an index buffer to draw the triangles between them.
If I were to place the BCs into the vertex buffer, that wouldn't make sense, as they would need to differ depending on which triangle was being rendered; e.g. Vertex1 might need (1,0,0) when rendered with Vertex2 and Vertex3, but another value when rendered with Vertex5 and Vertex6. Perhaps I am not understanding the method completely.
Do I need to duplicate vertex positions and add the additional data into the vertex buffer, essentially making 3 vertices per triangle and tripling my vertex count?
Do I always give the vertex a (1,0,0), (0,1,0) or (0,0,1) value or is this just an example?
Am I overcomplicating this, and is there an easier way to do wireframes with shaders and Stage3D?
Hope that fully explains my problems. Answers are much appreciated, thanks!
It all depends on your geometry; this problem is in fact a problem of graph vertex coloring: you need your geometry graph to be 3-colorable. A good starting point is the Wikipedia article.
Just as an example, let's assume that the (1, 0, 0) basis vector is red, (0, 1, 0) is green and (0, 0, 1) is blue. It's obvious that if you build your geometry using the following basic element
then you can avoid duplicating vertices, because such a graph will be 3-colorable (i.e. each edge, and thus each triangle, will have differently colored vertices). You can tile this basic element in any direction, and the graph will remain 3-colorable:
You've stumbled upon the thing that drives me nuts about AGAL/Stage3D. Limitations in the API prevent you from using shared vertices in many circumstances. Wireframe rendering is one example where things break down...but simple flat shading is another example as well.
What you need to do is create three unique vertices for each triangle in your mesh. For each vertex, add an extra param (or design your engine to accept vertex normals and reuse those, since you won't likely be shading your wireframe).
Assign each triangle's three vertices the unit vectors A[1,0,0], B[0,1,0], and C[0,0,1] respectively. This will get you started. Note that the obvious solution (thresholding in the fragment shader and conditionally drawing pixels) produces pretty ugly aliased results. Check out this page for some insight into techniques to anti-alias fragment-program-rendered wireframes:
http://cgg-journal.com/2008-2/06/index.html
As I mentioned, you need to employ a similar technique (unique vertices for each triangle) if you wish to implement flat shading. Since there is no equivalent to GL_FLAT and no way to make the varying registers return an average, the only way to implement flat shading is for each vertex of a given triangle to calculate the same lighting... which implies that each vertex needs the same vertex normal.
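To make the first step concrete, here is a sketch of the vertex-duplication pass in TypeScript (illustrative only; in Stage3D you would build the equivalent Vector.<Number> and upload it with VertexBuffer3D.uploadFromVector):

```typescript
// Expand an indexed mesh into unindexed triangles, appending a barycentric
// coordinate (1,0,0) / (0,1,0) / (0,0,1) to each corner of every triangle.
function addBarycentrics(
  positions: Float32Array, // xyz per vertex
  indices: Uint16Array     // three indices per triangle
): Float32Array {
  const FLOATS_PER_VERTEX = 6; // xyz + barycentric
  const out = new Float32Array(indices.length * FLOATS_PER_VERTEX);
  const bary = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
  for (let t = 0; t < indices.length; t += 3) {
    for (let corner = 0; corner < 3; corner++) {
      const src = indices[t + corner] * 3;
      const dst = (t + corner) * FLOATS_PER_VERTEX;
      out[dst] = positions[src];
      out[dst + 1] = positions[src + 1];
      out[dst + 2] = positions[src + 2];
      out[dst + 3] = bary[corner][0];
      out[dst + 4] = bary[corner][1];
      out[dst + 5] = bary[corner][2];
    }
  }
  return out;
}
```

The new index buffer is then simply 0, 1, 2, 3, ...; in the fragment program, the minimum of the three interpolated barycentric components tells you how close the pixel is to an edge.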

algorithm to draw filled symmetric polygon?

I'm looking for the series of steps necessary to draw a filled polygon. I will create a function that renders it to a bitmap. I'm writing in a language similar to Visual Basic, but without most of the object-oriented stuff like classes and inheritance, and the only drawing capabilities are drawline() and drawrect(), though it can scale and rotate a completed bitmap object. So when I fill the polygon, it will be one dot at a time in a for loop or a while loop. However, I can convert the bitmap to a byte array if that makes any difference (might be faster?), so if you have a method that would treat the completed polygon outline as a byte array and fill it that way, it might be faster than 100,000 plot(x,y) commands. I don't know; either way would be interesting to look at.
I'm not trying to draw irregular polygons, just symmetrical (radial symmetry) with an arbitrary number of sides, minimum 3, centered in the bitmap area.
The drawing coordinates are Cartesian with 0,0 being the upper left of the bitmap. I guess the inputs would look something like:
drawpolygon(bitmapobj,width,height,sides,radius)
Perhaps radius is not necessary since the size of the bitmap will be the limit of the polygon?
I'm looking for steps in English instead of code, if possible, but code could be useful if it doesn't have too many language-specific aspects (for instance, C++ has a bunch of declarations, type-casting of pointers, and other things I don't have to deal with and am not 100% sure how to convert to the language I'm using).
There is an equation given here (the last one).
By looping over all the x and y coordinates and checking that the output of this equation is less than zero, you can determine which points are 'inside' and colour them appropriately.
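As a sketch of that loop, here is an equivalent half-plane formulation in TypeScript (a regular polygon is the intersection of one half-plane per edge); plot(x, y) is assumed to set a single pixel, as in the question, and the names are illustrative:

```typescript
// Fill a regular polygon with `sides` sides and circumradius `radius`,
// centered in a width x height bitmap, by testing every pixel.
function drawPolygon(
  plot: (x: number, y: number) => void,
  width: number, height: number, sides: number, radius: number
): void {
  const cx = width / 2, cy = height / 2;
  const apothem = radius * Math.cos(Math.PI / sides); // center-to-edge distance
  // Outward normal of each edge; -PI/2 orients one vertex straight up.
  const normals: [number, number][] = [];
  for (let k = 0; k < sides; k++) {
    const a = ((2 * k + 1) * Math.PI) / sides - Math.PI / 2;
    normals.push([Math.cos(a), Math.sin(a)]);
  }
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const dx = x - cx, dy = y - cy;
      // Inside iff the point is on the inner side of every edge line.
      if (normals.every(([nx, ny]) => nx * dx + ny * dy <= apothem)) {
        plot(x, y);
      }
    }
  }
}
```

This is O(width × height) regardless of the polygon's size; a scanline fill would touch only the interior pixels, but for a polygon that spans the bitmap the brute-force test is hard to beat for simplicity.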

How to calculate Polygon points from a simple line for a specific width?

I am currently developing an application that creates polygons from lines, and I have run into a small problem:
I have a set of points representing a line. I would like to create a polygon that displays the line with a specific width (e.g. for a street). I have several ideas for how to calculate the outer polygon points, but I think they are too complicated...
My best idea was the one pictured below: every point of the line must be projected to at least two points. Both points must be at 90° to the adjoining line segment, at a distance of half the preferred polygon width.
This works well, as you can see at the start and end points of the pictured polygon. Now the complicated part: with this method, each corner point produces four projected points. But these points are not correct for the outer polygon, because some of them lie inside the shape. The lines intersect and create an ugly polygon.
How can I find the correct points for such a polygon? I think my method is far too complicated for solving this problem.
Can anybody help me with this (probably very common) problem?
Info: I tagged this with openstreetmap because renderers like Mapnik have this problem, too.
What you are looking for is a polygon (or line) offsetting algorithm. This is not necessarily an easy problem to solve, by the way: An algorithm for inflating/deflating (offsetting, buffering) polygons.
For the last couple of weeks I've been working on a line-offsetting algorithm for Maperitive. In my case I only needed to offset the line, so I wasn't looking for a solution to create a buffered polygon around it, but I guess the algorithm could be extended further in the future:
Basic flow (roughly, but the devil is in the details):
For each polyline point, find a point at distance L from the original point that lies on the line orthogonal to the original segment and passing through the original point.
Now draw an offset line through that new point. The line must be parallel to the original line.
For corner angles you must extend the two neighbouring offset lines and find the intersection point, which will be the next point of the offset line.
Some things to observe:
Notice the miter limit applied on concave angles to the right of the picture.
Before calculating the offset line you need to simplify the original polyline to exclude segments that are too small to hold the offset (the results can be seen at the center left of the picture).
I only implemented support for miter joins, but a good algorithm should be able to render round joins, too (using arcs).
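Here is a sketch of the core of that flow in TypeScript, leaving out the polyline simplification and the miter limit for brevity; all names are illustrative:

```typescript
interface Pt { x: number; y: number; }

// Offset a polyline by distance d to its left side, using miter joins.
function offsetPolyline(pts: Pt[], d: number): Pt[] {
  // Shift each segment sideways along its left-hand unit normal.
  const segs: { a: Pt; b: Pt }[] = [];
  for (let i = 0; i < pts.length - 1; i++) {
    const dx = pts[i + 1].x - pts[i].x, dy = pts[i + 1].y - pts[i].y;
    const len = Math.hypot(dx, dy);
    const nx = -dy / len, ny = dx / len; // left normal
    segs.push({
      a: { x: pts[i].x + nx * d, y: pts[i].y + ny * d },
      b: { x: pts[i + 1].x + nx * d, y: pts[i + 1].y + ny * d },
    });
  }
  const out: Pt[] = [segs[0].a];
  // At each interior point, intersect the two neighbouring offset lines (miter join).
  for (let i = 0; i < segs.length - 1; i++) {
    const p = lineIntersection(segs[i].a, segs[i].b, segs[i + 1].a, segs[i + 1].b);
    out.push(p ?? segs[i].b); // parallel neighbours: fall back to the segment end
  }
  out.push(segs[segs.length - 1].b);
  return out;
}

// Intersection of the infinite lines p1-p2 and p3-p4, or null if parallel.
function lineIntersection(p1: Pt, p2: Pt, p3: Pt, p4: Pt): Pt | null {
  const den = (p2.x - p1.x) * (p4.y - p3.y) - (p2.y - p1.y) * (p4.x - p3.x);
  if (Math.abs(den) < 1e-12) return null;
  const t = ((p3.x - p1.x) * (p4.y - p3.y) - (p3.y - p1.y) * (p4.x - p3.x)) / den;
  return { x: p1.x + t * (p2.x - p1.x), y: p1.y + t * (p2.y - p1.y) };
}
```

A buffered polygon around the line, as the question asks for, would then be the left offset concatenated with the reversed right offset (calling this with d and -d).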