Question
How can I optimally get the list of vertices of the 3D shape (a convex hull) formed by a set of intersecting planes in 3D space, i.e. the shape that contains all feasible solutions, assuming the planes do form a closed 3D shape?
Is there a name for the algorithm I'm looking for? My background in this problem space is not very strong, and a nudge in the right direction would be a satisfactory answer.
Background
I have a list of planes in the form A, B, C, d, following the form Ax + By + Cz - d = 0, and I want to determine the list of vertices formed by the intersection of the planes (the -d is for the sake of implementation details; I don't want the minus sign absorbed into the constant). Once I have a good solution I want to turn it into a program.
Plane Data
1, 0, 0, 3.3927
1, 0, 0, -3.5354
0, 1, 0, -1.8034
0, 1, 0, 5.1248
0, 0, 1, 0.8506
0, 0, 1, 2.3506
Visualization of planes
Attempt
My intuition tells me that I should compute plane-plane intersections to get lines, then line-line intersections to get points, for every combination of planes, but I feel as if this is just the naive/brute-force way of doing what I need. Is there perhaps a faster way, or a specific algorithm for this, especially as the number of planes approaches 10, 100, ..., n?
reference: https://math.stackexchange.com/questions/475953/how-to-calculate-the-intersection-of-two-planes
My googling so far has led me to Cramer's rule as a more general way to solve for the intersection points, but it still seems iterative: I can't just feed all 6 planes in at the same time. At a glance it seems I would have to do some sorting to work out which planes actually intersect each other. The other thing is that Cramer's rule breaks down in degenerate cases, so I would have to catch a lot of edge cases to get things working, e.g. when I have only two planes in 3D space.
reference: https://github.com/guiriosoficial/CramersRule
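To make the brute-force idea concrete, here is a minimal sketch (Python/NumPy, illustrative only): every non-degenerate triple of planes is solved as a 3x3 linear system, which is the same computation Cramer's rule performs, and parallel/degenerate triples are simply skipped.

import itertools
import numpy as np

# Each row is (A, B, C, d) with the convention A*x + B*y + C*z - d = 0.
planes = np.array([
    [1, 0, 0,  3.3927],
    [1, 0, 0, -3.5354],
    [0, 1, 0, -1.8034],
    [0, 1, 0,  5.1248],
    [0, 0, 1,  0.8506],
    [0, 0, 1,  2.3506],
])
normals, d = planes[:, :3], planes[:, 3]

vertices = []
for i, j, k in itertools.combinations(range(len(planes)), 3):
    M = normals[[i, j, k]]
    # Skip triples whose normals are linearly dependent (parallel planes,
    # or planes meeting in a line rather than a point).
    if abs(np.linalg.det(M)) < 1e-12:
        continue
    # For general (oriented half-space) input you would additionally keep
    # only points satisfying every constraint; for this axis-aligned box
    # every non-degenerate triple intersection is already a corner.
    vertices.append(np.linalg.solve(M, d[[i, j, k]]))

print(np.array(vertices))   # for this data: the 8 corners of a box

This examines O(n^3) triples. For what it's worth, the general problem is known as halfspace intersection (equivalently, vertex enumeration, the dual of the convex hull problem), and existing implementations such as Qhull or scipy.spatial.HalfspaceIntersection avoid testing every triple.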
Related
I have some ActionScript3 code I'm using to create liquid-like "droplets", and when they're first generated they look like a curved square (that's as close as I can get them to being a circle). I've tried and failed a lot here, but my goal is to make these droplets look more organic and free-form, as if you were looking closely at raindrops on your windshield before they start dripping.
Here's what I have:
var size:int = (100 - asset.width) / 4,
    droplet:Shape = new Shape();
droplet.graphics.beginFill(0xCC0000);
droplet.graphics.moveTo(size / 2, 0);                 // start at the top-middle
droplet.graphics.curveTo(size, 0, size, size / 2);    // top-right quadrant
droplet.graphics.curveTo(size, size, size / 2, size); // bottom-right quadrant
droplet.graphics.curveTo(0, size, 0, size / 2);       // bottom-left quadrant
droplet.graphics.curveTo(0, 0, size / 2, 0);          // top-left quadrant
// Apply some bevel filters and such...
Which yields a droplet shaped like this:
When I try adding some randomness to the size or the coordinates, or add more curves to the code above, I end up with jagged points and some line overlap/inversion.
I'm really hoping someone who is good at math or bezier logic can see something obvious that I need to do to make my consistently rounded-corner square achieve shape randomness similar to this:
First off, you can get actual circle-looking circles with Béziers by using 0.55228 * size rather than half-size (in the context of Bézier curves this constant is sometimes called kappa). It only applies if you're using four segments, and that's where the other hint comes in: the more points you have, the more you can make your shape "creep", so you might actually want more segments. In that case it becomes easier to simply generate a number of points on a circle (fairly straightforward using good old sine and cosine functions and a regularly spaced angle) and then run a multi-segment Catmull-Rom curve through those points instead.

Catmull-Rom curves and Bézier curves are actually different representations of the same curvatures, so you can pretty much trivially convert from one to the other, as explained at http://pomax.github.io/bezierinfo/#catmullconv (the last item in the section gives the translation if you don't care about the maths). You can then introduce as much random travel as you want; to get that sticky rain look, make the upper points a little stickier and "jerk" them down when they get too far from the bottom points.
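A minimal sketch of the point-generation step (Python for brevity; the AS3 translation is mechanical, and the function name and jitter parameter here are illustrative rather than canonical):

import math
import random

def droplet_points(cx, cy, radius, segments=8, jitter=0.25):
    """Evenly spaced points on a circle, each nudged radially at random.

    Run a Catmull-Rom curve through these (or convert each span to a
    cubic Bezier as in the linked article) for a smooth, organic blob.
    """
    points = []
    for i in range(segments):
        angle = 2 * math.pi * i / segments   # regularly spaced angles
        r = radius * (1 + random.uniform(-jitter, jitter))
        points.append((cx + r * math.cos(angle), cy + r * math.sin(angle)))
    return points

Animating the per-point radii over time, with the top points biased to stick and the bottom ones to sag, gives the rain-on-windshield motion described above.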
I've been trying to work with more complicated shaders, and have run into issues with the coordinate systems used by the vertex shader and texture sampler. In short: they don't seem to make any sense, and when trying to test them I end up getting inconsistent results. To make matters worse, the internet has little in the way of documentation, and most of the information I've found seems to expect me to know how this works already. I was hoping someone could clarify the following:
1. The vertex shaders pass an (x, y, z) representing a location on the render target. What are acceptable values for x, y, and z?
2. How do x and y correspond to the width and height of the back buffer (assuming it's the render target)?
3. How do x and y correspond to the width and height of an output texture (assuming it's the render target)?
4. When x = 0 and y = 0, where does the vertex sit, location-wise?
5. The texture samplers sample a texture at a (u, v) coordinate. What are acceptable values for u and v?
6. How do u and v correspond to the width and height of the texture being sampled?
7. How do AGAL's wrap, clamp, and repeat flags alter sampling, and what is the default behavior when one isn't given?
8. When sampling at u = 0 and v = 0, which pixel is returned, location-wise?
EDIT:
From my tests, I believe the answers are:
1. Unsure.
2. -1 is left/bottom, 1 is right/top.
3. Unsure.
4. At the center of the output.
5. Unsure.
6. 0 is left/bottom, 1 is right/top.
7. Unsure.
8. The far bottom-left of the texture.
1. You normally use your own coordinate system and then multiply the position of each vertex by the MVP (model-view-projection) matrix to get NDC (normalized device coordinates) that are fed to the GPU as the output of the vertex shader. There is a nice article explaining all that for Stage3D.
2. Correct. And z is in the range [0, 1].
3. Rendering to a render target is the same as rendering to the backbuffer: you output NDC from your vertex shader, so the real size of the texture is irrelevant.
4. Yup, the center of the screen.
5. Normally it's [0, 1], but you can use values outside that range; the output then depends on the texture wrap mode (such as repeat or clamp) set on the sampler.
6. (0, 0) is left/top, (1, 1) is right/bottom.
7. The default is repeat. These modes decide what you get when you sample with a coordinate outside the [0, 1] range: with repeat, (1.5, 1.5) results in (0.5, 0.5), while with clamp the result is (1.0, 1.0). (See the sketch after this list.)
8. The top-left pixel of the texture.
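To pin down the wrap-mode arithmetic from item 7, here is a tiny sketch (plain Python, illustrating only the address math, not actual AGAL):

def repeat_uv(t):
    """'repeat' wrap mode: keep the fractional part, so 1.5 samples at 0.5."""
    return t % 1.0

def clamp_uv(t):
    """'clamp' wrap mode: pin the coordinate into [0, 1]."""
    return max(0.0, min(1.0, t))

assert repeat_uv(1.5) == 0.5
assert clamp_uv(1.5) == 1.0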
Z-buffering is a better rendering technique than z-sorting, since it can correctly render intersecting 3D objects.
Say I have an Array containing two Object instances as follows:
{v1:new Vector3D(0, 0, 0), v2:new Vector3D(100, 0, 0), v3:new Vector3D(100, 0, 100)}
{v1:new Vector3D(0, 100, 50), v2:new Vector3D(100, 100, 50), v3:new Vector3D(100, 0, 100)}
Those are two Object instances, each containing three Vector3D instances that represent the three vertices of a triangle.
I'll use Matrix3D.transformVector() and Vector3D.project() to draw the triangles with the graphics property of the stage.
Under such circumstances, without any sprites created, how can I use z-buffering to draw each pixel?
I'm with you; I miss z-buffering in pure AS3. At this point there is no z-buffering provided for Flash/ActionScript 3. Your current options are:
Reordering sprites
Culling
File a request about z-buffering with Adobe
Here are two links that should be enough to pick up the logic behind z-sorting:
http://www.infiniteturtles.co.uk/blog/fast-sorting-in-as3
http://www.simppa.fi/blog/the-fastest-way-to-z-sort-and-handle-objects-in-as3/
These articles are well written, so it won't be too hard to understand the idea. Source code is provided as well.
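As a minimal sketch of the z-sorting idea those articles describe (Python for brevity; draw_triangle is a stand-in, and the convention that larger z means farther away is an assumption to match to your projection):

# Painter's algorithm: without a z-buffer, sort whole triangles back-to-front
# and draw in that order. Average vertex z is a common depth heuristic; it is
# exactly what fails for intersecting triangles, hence the wish for a z-buffer.
triangles = [
    {"v1": (0, 0, 0),    "v2": (100, 0, 0),    "v3": (100, 0, 100)},
    {"v1": (0, 100, 50), "v2": (100, 100, 50), "v3": (100, 0, 100)},
]

def depth(tri):
    return (tri["v1"][2] + tri["v2"][2] + tri["v3"][2]) / 3.0

def draw_triangle(tri):
    print("draw", tri)  # stand-in for the Matrix3D/Vector3D.project drawing code

for tri in sorted(triangles, key=depth, reverse=True):  # farthest first
    draw_triangle(tri)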
I have a single point and a set of shapes. I need to know if the point is contained within the compound shape of those shapes. That is, where all of the shapes intersect.
But that is the easy part.
If the point is outside the compound shape I need to find the position within that compound shape that is closest to the point.
These shapes can be of the type:
square
circle
ring (circle with another circle cut out of the center)
inverse circle (basically just the circular hole and a never-ending fill outside that hole, or out to the edge of the canvas if there must be a limit to its size)
part of circle (as in a pie chart)
part of ring (as above, but cut from a ring)
line
The example below has an inverted circle (the biggest circle, with grey surrounding it), a ring (top-left), a square, and a line.
If we don't consider the line, then the orange part is the shape to constrain to. If the line is taken into account then the saturated orange part of the line is the shape to constrain to.
The small black dots represent the points that need to be constrained. The blue dots represent the desired results (a maps to 1, b to 2, etc.).
Point "f" has no corresponding constrained result, since it is already in the orange area.
For the purpose of this example, only point "e" is constrained to the line; all others are constrained to the orange area.
If none of the shapes intersected, then the point could not be constrained. If the constraint consisted of two lines that cross each other, then every point would be constrained to the same position (the exact point where the lines cross).
I have found methods that come close to this, but none that I can combine to produce the above functionality.
Some similar questions that I found:
Points within a semi circle
What algorithm can I use to determine points within a semi-circle?
Point closest to MovieClip
Flash: Closest point to MovieClip
Closest point through Minkowski Sum (this will work if I can convert the compound shape to polygons)
http://www.codezealot.org/archives/153
Select edge of polygon closest to point (similar to above)
For a point in an irregular polygon, what is the most efficient way to select the edge closest to the point?
PS: I noticed that the orange area may actually come across as yellow on some screens. It's the colored area in any case.
This isn't much of an answer, but it's a bit too long to fit into a comment ...
It's tempting to think, and therefore to advise you, to find the nearest point in each of the shapes to the point of interest, and to find the nearest of those nearest points.
BUT
The area you are interested in is constructed by union, intersection and difference of other areas and there will, therefore, be no general relationship between the closest points of the original shapes and the closest point of the combined shape. If you understand what I mean. For example, while the closest point of A union B is the closest of the set {closest point of A, closest point of B}, the closest point of A intersection B is not a simple function of that same set; at least not for the general case.
I suggest, therefore, that you are going to have to compute the (complex) shape which represents the area of interest and use one of the algorithms you've already discovered to find the closest point to your point of interest.
I look forward to someone much better versed in computational geometry proving me wrong.
Let's call I the intersection of all the shapes, C the contour of I, p the point you want to constrain, and r the result point. We have:
If p is in I, then r = p
If p is not in I, then r is in C. So r is the nearest point in C to p.
So I think what you should do is the following:
1. If p is inside all of the shapes, return p.
2. Compute the contour C of the intersection of all the shapes; it is defined by a list of parts (segments, arcs, ...).
3. Find the nearest point to p on every part of C (computed in step 2) and return the nearest among them to p.
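Here is a sketch of the nearest-point machinery for step 3 in Python, with two part types (segment and full circle) as examples; the names are illustrative, and computing the contour C itself (step 2) is the genuinely hard part, left out here:

import math

def closest_on_segment(p, a, b):
    """Nearest point to p on segment ab: clamped projection onto the line."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx ** 2 + aby ** 2)
    t = max(0.0, min(1.0, t))
    return (a[0] + t * abx, a[1] + t * aby)

def closest_on_circle(p, center, radius):
    """Nearest point to p on a full circular boundary."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist == 0:                        # p at the center: any direction works
        return (center[0] + radius, center[1])
    return (center[0] + dx * radius / dist, center[1] + dy * radius / dist)

def constrain(p, contour_parts):
    """Step 3: nearest among the per-part nearest points.

    contour_parts is a list of functions mapping p to the nearest point on
    that part (arcs would clamp the angle the way segments clamp t).
    """
    candidates = [part(p) for part in contour_parts]
    return min(candidates, key=lambda q: math.hypot(q[0] - p[0], q[1] - p[1]))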
I've discussed this question at length with my brother, and together we concluded that any resulting point will always lie either at a point where two shapes intersect, or at a point where a shape intersects the line perpendicular to that shape through the original point.
In the case of a circular shape constraint, the perpendicular line equals the line to its center. In the case of a line shape constraint, the perpendicular line is (of course) the line perpendicular to itself. In the case of a rectangle, the perpendicular line is the line perpendicular to the closest edge.
(And the same, theoretically, for complex polygon constraints.)
So a new approach (that I still have to test) will be to:
calculate all candidate points: intersections of the shape constraints with each other, and of each shape constraint with the perpendicular line from the original point to that shape
keep only those that are valid: that lie within (comply with) all constraints
select the one closest to the original point
If this works, then one more optimization could be to determine first, which intersecting points are nearest and check if they are valid, and then work outward away from the original point until a valid one is found.
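A structural sketch of that plan in Python (contains, foot, and intersections are assumed interfaces on the shape objects, not existing code):

import math

def constrain_point(p, shapes):
    # If p already satisfies every constraint, it needs no moving.
    if all(s.contains(p) for s in shapes):
        return p

    # Candidates: each shape's perpendicular foot from p, plus all
    # pairwise boundary-boundary intersection points.
    candidates = [s.foot(p) for s in shapes]
    for i, a in enumerate(shapes):
        for b in shapes[i + 1:]:
            candidates.extend(a.intersections(b))

    # Keep candidates that satisfy all constraints; in practice contains()
    # needs a small epsilon so boundary points aren't rejected.
    valid = [q for q in candidates if all(s.contains(q) for s in shapes)]

    # Nearest valid candidate; None means the constraints don't intersect.
    key = lambda q: math.hypot(q[0] - p[0], q[1] - p[1])
    return min(valid, key=key, default=None)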
If this does not work, I will have another look at the polygon clipping method. For that approach I've come across this useful post:
Compute union of two arbitrary shapes
where clipping complex polygons is made much easier through http://code.google.com/p/gpcas/
The method holds true for all the cases (all points and their results) above, and also for a number of other scenarios that we tested (on paper).
I will try a live version tomorrow at work.
I was calibrating my camera with this toolbox.
However, the toolbox outputs results in matrix form, and being a noob I don't really understand mathy stuff.
The matrix has the following form, where R is a rotation matrix and T is a translation vector.
And these are the results I got from the toolbox. It outputs values in pixels.
-0.980755 -0.136184 -0.139905 217.653207
0.148552 -0.055504 -0.987346 995.948880
0.126695 -0.989128 0.074666 371.963957
0.000000 0.000000 0.000000 1.000000
Using this data, can I work out how much my camera is rotated and its distance from the calibration object?
The distance part is easy. The translation from the origin is given by the first three numbers in the rightmost column. This represents the translation in the x, y, and z directions respectively. In your example, the camera's position p = (px, py, pz) = (217.653207, 995.948880, 371.963957). You can take the Euclidean distance between the camera's location and the location of the calibration object (cx, cy, cz). That is, it would just be sqrt((px - cx)² + (py - cy)² + (pz - cz)²).
The more difficult part regards the rotation which is captured in the upper left 3x3 elements of the matrix. Without knowing exactly how they arrived at this, you're somewhat out of luck. That is, it's not easy to convert that back to Euler Angles, if that's what you want. However, you can transform those elements into a Quaternion Rotation which will give you the unique unit vector and angle to rotate the camera to that orientation. The specifics of the computation are provided here. Once you have the Quaternion rotation, you can easily apply it to the vectors n = (0, 0, 1), up = (0, 1, 0) and right = (1, 0, 0) to get the normal (direction the camera is pointed), up and right vectors. The right vector is only useful if you are interested in slewing the camera left or right from its current position.
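For what it's worth, here is a small SciPy sketch of both the distance and the rotation parts (the quaternion computation linked above is done here by scipy.spatial.transform.Rotation; the calibration-object position c is a placeholder to substitute):

import numpy as np
from scipy.spatial.transform import Rotation

# The toolbox's 4x4: upper-left 3x3 is R, the right column holds T.
M = np.array([
    [-0.980755, -0.136184, -0.139905, 217.653207],
    [ 0.148552, -0.055504, -0.987346, 995.948880],
    [ 0.126695, -0.989128,  0.074666, 371.963957],
    [ 0.000000,  0.000000,  0.000000,   1.000000],
])
R, t = M[:3, :3], M[:3, 3]

c = np.zeros(3)  # placeholder: position of the calibration object
print("distance:", np.linalg.norm(t - c))

rot = Rotation.from_matrix(R)
print("quaternion (x, y, z, w):", rot.as_quat())
print("Euler angles xyz (deg):", rot.as_euler("xyz", degrees=True))

# Rotate the canonical basis to get the normal, up and right vectors.
normal, up, right = rot.apply([(0, 0, 1), (0, 1, 0), (1, 0, 0)])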
I'm guessing the code uses the 'standard' formulation; if so, you will find more details in the OpenCV library docs or their book.