Locating points to initialize GVF snakes

I want to detect several shapes around a reference shape. The centroid of the reference shape has already been determined, and the positions of the surrounding shapes are almost fixed in terms of angle. How do I locate points inside the surrounding shapes so that I can use them as initialization points for the GVF snakes? (Automatic detection based on the reference shape.)
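One way to sketch this (an assumption-laden illustration, not an answer taken from the thread): if you have calibrated, for your images, the approximate angle and distance from the reference centroid to each surrounding shape, you can generate the seed points directly. Everything here (the angle list, the distance, the helper name) is hypothetical:
// Hypothetical helper: place snake-initialization points around a known centroid.
// 'angles' (radians) and 'dist' would be calibrated from your own images.
function seedPoints(centroid, angles, dist) {
  return angles.map(function (a) {
    return { x: centroid.x + dist * Math.cos(a),
             y: centroid.y + dist * Math.sin(a) };
  });
}
// Example: four surrounding shapes at fixed angles, roughly 80 px away.
var seeds = seedPoints({ x: 120, y: 140 }, [0, Math.PI / 2, Math.PI, 3 * Math.PI / 2], 80);
Each seed can then be expanded into a small circle of contour points to start a snake.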

Detect coordinates within a shape

There are two parts to my problem, and they are related. I have a weird shape on my interface, illustrated below. I am trying to randomly spawn MovieClips within its boundaries, but I am having trouble finding a good way to do it.
Question 1: I can run an if condition with bitMapData.hitTest to check whether a MovieClip has randomly spawned within this shape, and if it hasn't, simply retry with a new set of random coordinates. However, is there a better way, such as a way to only consider coordinates within the shape? Plenty of MovieClips will be spawned at once, so I am hoping to lessen the load, or at least find an efficient way to do this calculation.
Question 2: The MovieClips spawned within this shape will eventually have collision-detection mechanics that repel them when interacted with. Is there a way to contain them within this shape via some kind of boundary detection?
If it were a square, we could easily contain them with a quick check against all four edges, but not with this shape. Currently I am thinking of using bitMapData.hitTest again to detect out-of-bounds positions after a repulsion, but how do I know which Point() on the nearest 'edge' of this shape to return the MovieClip to?
For question 1: I'm going to go on the assumption that you have some geometry data about the shape.
One method you can use to check whether a point is within a shape is to draw a line from that point to infinity (the edge of the screen) in one direction, then count how many times that line intersects an edge of the shape. If the count is odd, the point is within the shape (or on an edge); if it's even, the point is outside the shape.
First link on Google: https://www.geeksforgeeks.org/how-to-check-if-a-given-point-lies-inside-a-polygon/
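A minimal sketch of that even-odd (ray-casting) test, assuming the shape is given as an array of {x, y} vertices (this is the classic PNPOLY formulation, not code from the linked article):
// Cast a horizontal ray to the right of p and count edge crossings.
function pointInPolygon(p, verts) {
  var inside = false;
  for (var i = 0, j = verts.length - 1; i < verts.length; j = i++) {
    var vi = verts[i], vj = verts[j];
    var crosses = (vi.y > p.y) !== (vj.y > p.y) &&
                  p.x < (vj.x - vi.x) * (p.y - vi.y) / (vj.y - vi.y) + vi.x;
    if (crosses) inside = !inside; // odd number of crossings = inside
  }
  return inside;
}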
Alternatively, you can try a simpler method (at the cost of doing more checks): if the above shape is built entirely from squares and rectangles and you know the position and size of each of them, you can just test the point against every rectangle that makes up the shape.
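That rectangle test collapses to very little code (a sketch, assuming each rectangle is stored as {x, y, width, height}):
// True if p falls inside any of the axis-aligned rectangles.
function pointInRects(p, rects) {
  return rects.some(function (r) {
    return p.x >= r.x && p.x <= r.x + r.width &&
           p.y >= r.y && p.y <= r.y + r.height;
  });
}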
For question 2: As Organis mentioned, I'd go with a library like Box2D to do this. You'll most likely spend tons of time (that you may not want to) if you try to implement this alone.
The big issue is how much CPU or GPU time the code uses. You want to avoid running full collision detection whenever you can: collision detection means doing calculations against the actual edges of an object, and it should be the last resort.
Most of the time you know there's no need for collision detection. You know where everything is and how big it is. Everything has a center point, and comparing simple numeric coordinates is the lightweight way to decide whether a closer check is needed.
When things get near each other, you only need to do collision detection in the immediate area around an object. See how your shape fits in a box that is easy to check for collisions? That box should get a collision check before the actual jagged shape inside it.
Yes, that collision box has to be drawn or mapped, but that's done when the object is defined, not while the game is playing. If you are using sprite sheets, keep an XML file of the boxes or circles around the shapes.
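A sketch of that two-stage idea (the names here are illustrative, not from any particular library): compare cheap bounding boxes first, and only run the expensive precise test when the boxes overlap:
// Broad phase: cheap axis-aligned bounding-box overlap test.
function boxesOverlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}
function checkCollision(objA, objB) {
  if (!boxesOverlap(objA.bounds, objB.bounds)) return false; // most pairs stop here
  return preciseHitTest(objA, objB); // hypothetical narrow-phase test, e.g. a bitmap hit test
}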

Canvas: Mouse Events

I know it is not possible to add event handlers to specific circles or rectangles in canvas. But there are some nice frameworks, like EaselJS, KineticJS, Paper.js, and Fabric.js, that support event handling on specific elements.
Can someone explain to me how they work?
I think there are only two solutions.
1. You create a new canvas region for each element and stack them on top of each other. That way you can give each region its own event handler.
2. You have only one canvas region and one event handler. Then you have to do complex mathematical calculations to find out whether a specific element was clicked. If you only have circles or rectangles, this might be easy, but for paths with lots of curves it is quite difficult.
I don't want to use the libraries. So it would be nice, if someone can help me.
Here's a BRIEF summary of how canvas drawing libraries work
An unaltered canvas is just a big bitmap. Once you draw shapes on the canvas, they become inaccessible, forgotten pixels.
Canvas drawing libraries store all your shapes into “retained” objects. Each shape object has sufficient information about itself to allow the drawing library to redraw it whenever necessary.
The canvas drawing libraries are the "controllers" for objects. The libraries have the algorithms necessary to track, manipulate and redraw all shape objects as necessary.
The following information is retained about every shape object:
Basic identification:
ID
Shape name
Parent or container
Inherent properties of the shape:
Rectangular shapes (rect, image, text) know width and height.
Circular shapes (circles, ellipses, regular polygons, stars) know radius and side count.
Lines know length.
Curved shapes (arcs, Béziers, paths) know anchor points and control points.
Text also knows…well, the text!
Images also know their pixel data (usually stored in JavaScript Image objects).
Transformational information:
Starting X/Y coordinate
Translations—accumulated movements off the starting coordinate.
Rotations—accumulated rotations of this shape (usually in radians).
Scalings—accumulated resizings
Other transforms (less common) are skews and warps
Layering information—the current z-index
Styling information:
StrokeColor
StrokeWidth
FillColor
Opacity
isVisible
lineCaps
cornerRadius
Tracking abilities:
Bounding box—the smallest rectangle that completely contains this shape.
This is used for “hit testing” to see if the mouse is inside this object (for selecting and dragging); a toy sketch of the idea follows this list.
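Here is that retained-object idea reduced to a toy (a sketch of the concept, not any particular library's API): each shape remembers enough to redraw itself and to answer a bounding-box hit test:
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
var shapes = []; // the "retained" display list, drawn back-to-front
function makeRect(id, x, y, w, h, fill) {
  return {
    id: id, x: x, y: y, w: w, h: h, fill: fill,
    draw: function () { ctx.fillStyle = this.fill; ctx.fillRect(this.x, this.y, this.w, this.h); },
    hit: function (mx, my) { // bounding-box hit test
      return mx >= this.x && mx <= this.x + this.w &&
             my >= this.y && my <= this.y + this.h;
    }
  };
}
shapes.push(makeRect('red1', 20, 20, 100, 60, 'red'));
shapes.forEach(function (s) { s.draw(); });
canvas.addEventListener('click', function (e) {
  var r = canvas.getBoundingClientRect();
  var mx = e.clientX - r.left, my = e.clientY - r.top;
  shapes.forEach(function (s) { if (s.hit(mx, my)) console.log('clicked', s.id); });
});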
If you don't want to use a library, you may find my answer in this thread helpful. As markE says, once the canvas is written to, there is no way of tracking that data (unless you care to loop through each individual pixel and test its colour, though that is only really useful for pixel-level collision detection).

When drawing on a canvas, should calculations be done relative to Cartesian plane coordinates?

I've been seeing a lot of canvas-graphics-related JavaScript projects and libraries lately and was wondering how they handle the coordinate system. When drawing shapes and vectors on the canvas, are the points calculated on a Cartesian plane and then converted for the canvas, or is everything calculated directly in canvas coordinates?
I tried playing around with drawing a circle by graphing all its tangent lines until the line intersections started to resemble a curve, and I found the difference between the Cartesian planes I'm familiar with and the coordinate system used by web browsers very confusing. The equation of a circle, for example, "y^2 + x^2 = r^2", would need to be translated to "(y-1)^2 + (x-1)^2 = r^2" to be seen on the canvas. And then negative slopes were positive slopes on the canvas and everything was upside down :/
The easiest way for me to think about it was to pretend the origin of a Cartesian plane was at the center of the canvas and adjust my coordinates accordingly. On a 500 x 500 canvas, the center would be (250, 250), so if I ended up with a point at (50, 50), it would be drawn at (250 + 50, 250 - 50) = (300, 200).
I get the feeling I'm over-complicating this, but I can't wrap my mind around the clean way to work with a canvas.
Current practice can perhaps be exemplified by a quote from David Flanagan's book "JavaScript: The Definitive Guide", which says that
"Certain canvas operations and attributes (such as extracting raw pixel values and setting shadow offsets) always use this default coordinate system"
(the default coordinate system is that of the canvas). And it continues with
"In most canvas operations, when you specify the coordinates of a point, it is taken to be a point in the current coordinate system [that is, for example, the Cartesian plane you mentioned, #Walkerneo], not in the default coordinate system."
Why is using a "current coordinate system" more useful than using the canvas coordinate system directly?
First and foremost, I believe, because it is independent of the canvas itself, which is tied to the screen (more specifically, the default coordinate system's dimensions are expressed in pixels). Using, for instance, a Cartesian (orthogonal) coordinate system makes it easy for you (well, for me too, obviously :-D) to specify your drawing in terms of what you want to draw, leaving the task of how to draw it to the transformations offered by the Canvas API. In particular, you can express dimensions in the natural units of your drawing, and then perform a scale and a translation to fit (or not, as the case may be...) your drawing to the canvas.
Furthermore, using transformations is often a clearer way to build your drawing, since it allows you to get "farther" from the underlying coordinate system and specify your drawing in terms of higher-level operations ('scale', 'rotate', 'translate' and the more general 'transform'). The above-mentioned book gives a very nice example of the power of this approach, drawing a Koch (fractal) snowflake in many fewer lines than would be possible (if at all) using canvas coordinates.
The HTML5 canvas, like most graphics systems, uses a coordinate system where (0,0) is in the top left and the x-axis and y-axis go from left to right and top down respectively. This makes sense if you think about how you would create a graphics system with nothing but a block of memory: the simplest way to map coordinates (x,y) to a memory slot is to take x+w*y, where w is the width of a line.
This means that the canvas coordinate system differs from what you use in mathematics in two ways: (0,0) is not the center like it usually is, and y grows down rather than up. The last part is what makes your figures upside down.
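You can see that mapping directly in the canvas pixel API: ImageData stores pixels row by row, four bytes (RGBA) per pixel, so the byte offset of (x, y) is exactly (x + w*y) * 4. A small sketch:
var ctx = document.getElementById('canvas').getContext('2d');
var w = ctx.canvas.width, h = ctx.canvas.height;
var img = ctx.getImageData(0, 0, w, h);
var x = 10, y = 20;
var offset = (x + w * y) * 4;     // start of the RGBA bytes for pixel (x, y)
var red = img.data[offset];       // R channel
var alpha = img.data[offset + 3]; // A channel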
You can set transformations on the canvas that make the coordinate system more like what you are used to:
var ctx = document.getElementById('canvas').getContext('2d');
ctx.translate(250,250); // Move (0,0) to (250, 250)
ctx.scale(1,-1); // Make y grow up rather than down
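With those transforms in place, you can plot using the coordinates you computed on paper; for instance, continuing from the code above on a 500 x 500 canvas, the point (50, 50) from the question lands where the hand conversion said it should:
ctx.fillRect(50, 50, 5, 5); // appears around canvas pixel (300, 200)
One caveat: with the y-axis flipped by scale(1,-1), anything drawn with fillText will come out mirrored, so temporarily undo the flip when drawing text.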

Calculate whether a polygon and a raster image intersect

If I have a binary image and an irregular convex polygon, how can I calculate whether they intersect? The coordinates of the polygon are described in terms of the image.
I have a few ideas on this, coming from either a collision-detection or a fill-algorithm perspective, but I don't think either would be optimal. I'm sure there is a tried-and-tested method for this, but I can't think of the keywords.
Here is an example of what I mean:
In this case it should return true.
I would recommend the following algorithm:
Traverse the border of the polygon using Bresenham's algorithm for each edge, and at each pixel, sample the raster. If the pixel has a color you consider visible, such as nonzero alpha, report an intersection.
This has the advantage of only working over the edges of the polygon, so you don't need to iterate over all the pixels inside the polygon.
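A sketch of that edge walk, assuming the image is available as an ImageData object and the polygon as an array of {x, y} vertices; "visible" here means nonzero alpha:
// True if any pixel on the polygon's border is visible in the raster.
function pixelVisible(img, x, y) {
  if (x < 0 || y < 0 || x >= img.width || y >= img.height) return false;
  return img.data[(x + img.width * y) * 4 + 3] !== 0; // alpha channel
}
function borderIntersectsRaster(img, verts) {
  for (var i = 0; i < verts.length; i++) {
    var a = verts[i], b = verts[(i + 1) % verts.length];
    // Bresenham's line algorithm from a to b
    var x = Math.round(a.x), y = Math.round(a.y);
    var x1 = Math.round(b.x), y1 = Math.round(b.y);
    var dx = Math.abs(x1 - x), sx = x < x1 ? 1 : -1;
    var dy = -Math.abs(y1 - y), sy = y < y1 ? 1 : -1;
    var err = dx + dy;
    while (true) {
      if (pixelVisible(img, x, y)) return true;
      if (x === x1 && y === y1) break;
      var e2 = 2 * err;
      if (e2 >= dy) { err += dy; x += sx; }
      if (e2 <= dx) { err += dx; y += sy; }
    }
  }
  return false;
}
Note that this only detects intersections that touch the polygon's border; if the visible region could lie strictly inside the polygon, you would also want one point-in-polygon test (or a fill pass) to cover that case.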

HTML5 canvas hit testing

I have some images drawn on an HTML5 canvas, and I want to check whether they are hit on mouse click. It seems easy: I have the bounds of the images. However, the images are transformed (translated and scaled). Unfortunately, the context does not have a method to get the current transform matrix, and there is no API for matrix multiplication.
It seems the only solution is to keep track of the transforms myself and implement matrix multiplication.
Suggestions are welcomed.
This is a common problem in the 3D (OpenGL) graphics world as well.
The solution is to create an auxiliary canvas object (which is not displayed) and to redraw your image into it. The drawing is exactly the same as on your main canvas, except that each element is drawn with a unique color. You then look up the pixel corresponding to your mouse pick and read off its color, which gives you the corresponding element (if any).
This is a commonly used method in the OpenGL world. You can find descriptions of it by Googling terms like "opengl object picking". Here is one of the many search results.
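A sketch of that picking pass (assuming you keep a list of draw functions and can redraw everything on demand; the names here are illustrative):
// Offscreen canvas used only for picking; it is never added to the page.
var pickCanvas = document.createElement('canvas');
pickCanvas.width = mainCanvas.width; // 'mainCanvas' is your visible canvas
pickCanvas.height = mainCanvas.height;
var pctx = pickCanvas.getContext('2d');
function colorFor(i) { // encode element index i as a unique RGB color
  return 'rgb(' + (i & 255) + ',' + ((i >> 8) & 255) + ',' + ((i >> 16) & 255) + ')';
}
function pickAt(mx, my, drawFns) {
  pctx.clearRect(0, 0, pickCanvas.width, pickCanvas.height);
  // Redraw every element in its id color, applying the same transforms
  // you use on the visible canvas.
  drawFns.forEach(function (draw, i) { draw(pctx, colorFor(i + 1)); }); // 0 = background
  var p = pctx.getImageData(mx, my, 1, 1).data;
  var id = p[0] | (p[1] << 8) | (p[2] << 16);
  return id === 0 ? -1 : id - 1; // index of the hit element, or -1
}
One caveat: shape edges are anti-aliased, so a pick exactly on an edge can read a blended color that decodes to a wrong index; sampling a pixel slightly inward, or drawing the pick pass without strokes, mitigates this.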
Update: The HTML5 canvas spec now includes hit regions. I'm not sure to what degree these are supported by browsers yet.