Canvas: Mouse Events

I know it is not possible to add event handlers to specific circles or rectangles in canvas. But there are some nice frameworks like EaselJS, KineticJS, Paper.js, and Fabric.js that support event handling on specific elements.
Can someone explain to me how they work?
I think there are only two solutions.
1. You create a new canvas region for each element and stack them on top of each other. That way you can give each region its own event handler.
2. You have only one canvas region and one event handler. In this case you have to do mathematical calculations to find out whether a specific element was clicked. If you have only circles or rectangles, this might be easy; but if you have a path with lots of curves, it is quite difficult.
I don't want to use the libraries, so it would be nice if someone could help me.

Here's a BRIEF summary of how canvas drawing libraries work
An unaltered canvas is just a big bitmap. Once you draw shapes on the canvas, they become inaccessible, forgotten pixels.
Canvas drawing libraries store all your shapes into “retained” objects. Each shape object has sufficient information about itself to allow the drawing library to redraw it whenever necessary.
The canvas drawing libraries are the "controllers" for objects. The libraries have the algorithms necessary to track, manipulate and redraw all shape objects as necessary.
The following information is retained about every shape object:
Basic identification:
- ID
- Shape name
- Parent or container
Inherent properties of the shape:
- Rectangular shapes (rect, image, text) know width and height.
- Circular shapes (circles, ellipses, regular polygons, stars) know radius and side count.
- Lines know length.
- Curved shapes (arcs, Beziers, paths) know anchor points and control points.
- Text also knows…well, the text!
- Images also know their pixel data (usually stored in JavaScript Image objects).
Transformational information:
- Starting X/Y coordinate
- Translations: accumulated movements off the starting coordinate.
- Rotations: accumulated rotations of this shape (usually in radians).
- Scalings: accumulated resizings.
- Other, less common transforms are skews and warps.
Layering information: the current z-index.
Styling information:
- StrokeColor
- StrokeWidth
- FillColor
- Opacity
- isVisible
- lineCaps
- cornerRadius
Tracking abilities:
- Bounding box: the smallest rectangle that completely contains this shape. This is used for "hit testing" to see if the mouse is inside this object (for selecting and dragging).
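For illustration only, here is a minimal sketch (not taken from any particular library; every property and method name is hypothetical) of what one such retained shape object might look like:

```javascript
// A hypothetical retained shape record -- real libraries differ in detail.
const circle = {
  // Basic identification
  id: 7,
  name: "myCircle",
  parent: "layer1",          // container / layer it belongs to
  // Inherent properties
  type: "circle",
  radius: 40,
  // Transformational information
  x: 150, y: 100,            // starting coordinate
  rotation: 0,               // radians
  scaleX: 1, scaleY: 1,
  // Layering
  zIndex: 3,
  // Styling
  fillColor: "skyblue",
  strokeColor: "navy",
  strokeWidth: 2,
  opacity: 1,
  isVisible: true,

  // Enough information to redraw the shape whenever necessary.
  draw(ctx) {
    if (!this.isVisible) return;
    ctx.save();
    ctx.globalAlpha = this.opacity;
    ctx.translate(this.x, this.y);
    ctx.rotate(this.rotation);
    ctx.scale(this.scaleX, this.scaleY);
    ctx.beginPath();
    ctx.arc(0, 0, this.radius, 0, Math.PI * 2);
    ctx.fillStyle = this.fillColor;
    ctx.fill();
    ctx.lineWidth = this.strokeWidth;
    ctx.strokeStyle = this.strokeColor;
    ctx.stroke();
    ctx.restore();
  },

  // Bounding box used for simple hit testing (selecting and dragging).
  getBounds() {
    const r = this.radius * Math.max(this.scaleX, this.scaleY);
    return { left: this.x - r, top: this.y - r, right: this.x + r, bottom: this.y + r };
  }
};
```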

If you don't want to use a library, you may find my answer in this thread helpful. As markE says, once the canvas is written to there is no way of tracking that data (unless you care to loop through each individual pixel and test its colour; though that is only really useful for pixel-level collision detection).

What is it that you can do with Canvas and not with SVG?

I am working on a project where I need to choose between SVG and Canvas. I am finding SVG best at most things.
Are there any things that you cannot do with SVG but can do with Canvas?
First,
If you have already invested time in the SVG learning curve, you might well complete your project without Canvas, because these two elements do remarkably similar things. Canvas has a fairly large and steep learning curve that you might want to avoid "mid-project".
Contrary to popular opinion, both Canvas and SVG use vector drawing commands!
They use vector commands to paint lines and curves on their drawing surfaces. Both Canvas and SVG render those drawings onto their element surfaces as pixels.
And...
Both Canvas and SVG can transform (offset, rotate and scale) their drawings. Both Canvas and SVG can apply style to their drawings (fill color, stroke color, opacity, etc).
But...
SVG goes one step further and "remembers" all the drawing commands. This means SVG can reissue those drawing commands even when scaling. So SVG is ideal for drawings that must be scaled without becoming "jaggy". You can even use CSS to re-style your drawings (change colors, opacity, position, rotation, etc). That's often very useful. For example, you can make an SVG leopard change the color of its spots with CSS!
Canvas just "draws and forgets" -- it remembers nothing about what or where it has drawn. As such, it's a lighter element. You say: "But canvas does all those games with moving players". With canvas, the programming practice is to erase the canvas and redraw the shapes in their new positions. This gives the illusion that the shapes are being commanded to move (which they are not). Canvas is built to be extremely fast at these redraws and will easily redraw modestly complex game scenes at 60 frames per second. This speed comes at a cost: you must "remember" where your scene elements are so you can later re-render them in their new positions and with their new stylings. (There is no simple CSS ability to make a canvas leopard change its spots.)
SVG drawings come with the traditional mouse events already built in. Canvas only fires the traditional mouse events on the canvas as a whole, not on individual drawings (because canvas forgets about the shapes it has drawn). Therefore, if you want to get mouse events related to an individual canvas shape you must (1) remember where you drew your shapes, (2) listen for the mouse event on the whole canvas, (3) check whether the mouse is inside any shape (this is easily done mathematically), and then handle the mouse event for your discovered shape. Canvas elements require more code to implement.
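As a rough sketch of those three steps (the shapes array, canvas id, and drawing code below are illustrative assumptions, not any library's API):

```javascript
// Assumes a <canvas id="myCanvas"> element on the page.
const canvas = document.getElementById("myCanvas");
const ctx = canvas.getContext("2d");

// (1) Remember where you drew your shapes.
const shapes = [
  { type: "rect", x: 20, y: 20, w: 80, h: 50, fill: "tomato" },
  { type: "circle", x: 200, y: 80, r: 30, fill: "seagreen" }
];

function drawAll() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const s of shapes) {
    ctx.fillStyle = s.fill;
    if (s.type === "rect") {
      ctx.fillRect(s.x, s.y, s.w, s.h);
    } else {
      ctx.beginPath();
      ctx.arc(s.x, s.y, s.r, 0, Math.PI * 2);
      ctx.fill();
    }
  }
}
drawAll();

// (2) Listen for the mouse event on the whole canvas.
canvas.addEventListener("click", function (e) {
  const rect = canvas.getBoundingClientRect();
  const mx = e.clientX - rect.left;
  const my = e.clientY - rect.top;

  // (3) Check mathematically whether the mouse is inside any shape.
  for (const s of shapes) {
    const hit = s.type === "rect"
      ? mx >= s.x && mx <= s.x + s.w && my >= s.y && my <= s.y + s.h
      : (mx - s.x) * (mx - s.x) + (my - s.y) * (my - s.y) <= s.r * s.r;
    if (hit) {
      console.log("You clicked the", s.type);
      break;
    }
  }
});
```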
IMHO, one particular use-case where canvas shines:
In addition to these functional differences, Canvas lets you examine and change any pixel on the canvas surface. In particular, this valuable pixel information lets you do some nice tasks with images:
Recolor any part of an image at the pixel level (if you use HSL you can even recolor an image while retaining the important Lighting & Saturation so the result doesn't look like someone slapped paint on the image),
"Knockout" the background around a person/item in an image,
Detect and flood-fill part of an image (e.g., change the color of a user-clicked flower petal from green to yellow -- just that clicked petal!),
Do perspective warping (e.g., wrap an image around the curve of a cup),
Examine an image for content (e.g., facial recognition),
Answer questions about an image: is there a car parked in this image of my parking spot?,
Apply standard image filters (grayscale, sepia, etc.),
Apply any exotic image filter you can dream up (Sobel edge detection),
Combine images. If dear Grandma Sue couldn't make it to the family reunion, just "photoshop" her into the reunion image. Don't like Cousin Phil -- just "photoshop" him out,
Play a video / Grab a frame from a video,
Export the canvas content as a .jpg | .png image (you can even optionally crop or annotate the image and export the result as a new image),
Many more image pixel manipulations that haven't come to mind!
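To make the pixel-level access concrete, here is a rough sketch of a standard grayscale filter using getImageData/putImageData (the canvas and image element ids are assumptions, and the image must be same-origin or pixel access will be blocked):

```javascript
const canvas = document.getElementById("myCanvas");
const ctx = canvas.getContext("2d");
const img = document.getElementById("myImage");   // an already-loaded, same-origin <img>

ctx.drawImage(img, 0, 0);

// Read the raw pixels, average the RGB channels, and write them back.
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data;                       // [r, g, b, a, r, g, b, a, ...]
for (let i = 0; i < data.length; i += 4) {
  const gray = (data[i] + data[i + 1] + data[i + 2]) / 3;
  data[i] = data[i + 1] = data[i + 2] = gray;      // leave alpha (data[i + 3]) alone
}
ctx.putImageData(imageData, 0, 0);
```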
Canvas is a pixel manipulation element.
SVG is a vector element container.
I feel the distinction between these two is outside the scope of Stack Overflow to discuss, as it is related to computer art basics, not programming, and there is a lot of information available, if not in search engines then on Wikipedia. For example, one would use pixel-based media for pixel games.

HTML5 basic paint tool

I'm new to HTML5, and I'm trying to create a basic painting tool.
What I want to do in this tool is to have one or more shapes (maybe overlapping) and to paint the shapes without getting the colors overlapped. If a circle is drawn inside a rectangle and I start coloring the circle, the rectangle should not be painted even if the mouse is dragged over it, unless the dragging starts inside it.
To achieve this should I use multiple canvases or shapes?
Thanks in advance.
Well, first you need to program in the idea of keeping track of separate shapes. If you haven't already done that see here for a tutorial.
I imagine your shapes will all be kept as images or in-memory canvases themselves. I'm not sure how else you can do it.
There are a million ways you could do this, here's one:
When you start your drawing operation you need to detect which shape you're on. Then you draw that shape to an in-memory canvas and switch that temporary canvas's globalCompositeOperation to source-atop. This will make sure the paint can only land in the already opaque regions of that shape (if that's your intent here, which it seems to be).
All the while you are painting you will want to update the temporary canvas and redraw the main canvas constantly. While you are redrawing the main canvas, instead of painting that shape's image file you'll want to paint the temporary canvas (if you use canvases to keep the shapes you can just update those in real time).
If you are not using temporary canvases for each shape, then when you stop the drawing operation you will have to update the image associated with the shape to complete the operation.
Using an in-memory canvas (not added to the DOM) for every shape (that is the size of the shape and no larger) will make coding things slightly easier and might not be that bad on performance. I'd give it a try with 100 and 1000 (or more) in-memory canvases on your targeted platforms to see though.
The alternative is to use one in-memory canvas and have an HTMLImageElement (PNG) that represents every shape, but using the canvas.toDataURL function can be a bit of a performance hit in itself. I'd try both methods to see which works best in your case. If the shape count is small enough, it probably doesn't matter which.
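Here is a minimal sketch of the source-atop idea described above (the element id and the stand-in circle shape are assumptions): paint dabs drawn onto the shape's in-memory canvas only stick where that shape already has opaque pixels.

```javascript
// In-memory canvas for one shape (never added to the DOM).
const shapeCanvas = document.createElement("canvas");
shapeCanvas.width = 200;
shapeCanvas.height = 200;
const shapeCtx = shapeCanvas.getContext("2d");

// Draw the shape itself (a circle stands in for your shape image here).
shapeCtx.beginPath();
shapeCtx.arc(100, 100, 80, 0, Math.PI * 2);
shapeCtx.fillStyle = "lightgray";
shapeCtx.fill();

// From now on, new pixels only land on top of already-opaque pixels.
shapeCtx.globalCompositeOperation = "source-atop";

// A "brush dab" -- anything outside the circle is clipped away automatically.
function paintAt(x, y) {
  shapeCtx.beginPath();
  shapeCtx.arc(x, y, 10, 0, Math.PI * 2);
  shapeCtx.fillStyle = "crimson";
  shapeCtx.fill();
}

// While painting, keep compositing the shape canvas back onto the main canvas.
const mainCtx = document.getElementById("mainCanvas").getContext("2d");
function redrawMain() {
  mainCtx.clearRect(0, 0, mainCtx.canvas.width, mainCtx.canvas.height);
  mainCtx.drawImage(shapeCanvas, 0, 0);   // plus any other shapes, in their own order
}
```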

Draw shapes/text on canvas using layers or z-index

I draw several text elements using a for loop.
But I want the first element to be drawn on top of all the other elements.
Other than reversing the loop, is there a way to define a layer number for a drawn element like text or shapes?
No, the HTML5 Canvas—like SVG—uses a "painters model" when rendering: the ink you lay down immediately dries on the canvas; successive draw calls go on top of the result.
Further, HTML5 Canvas—unlike SVG or HTML—uses a non-retained (or immediate) graphics mode: no objects are preserved corresponding to the original drawing commands after you have issued them.
Your options are:
Change your loop, or otherwise implement your own layering system that queues up draw calls and then issues them in order from bottom to top.
As @Stoive suggests, create separate (non-displayed) canvas elements programmatically, draw to them, and then blit the results back to your main canvas in the order you like.
Create multiple (displayed) canvases on the page and layer them using CSS, drawing to each as its own layer.
The last option allows you the most freedom, including the ability to dirty/clear just one of the layers at any time, or re-order the layers without having to recomposite them yourself.
There is no concept of layers in canvas in the 2D context - think of it as a programmable paintbrush-like application.
You can, however, draw one canvas onto another using context.drawImage - so if you maintain each 'layer' in its own canvas, and then compose them into the one for display, you could emulate the concept of layers.
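A rough sketch of that emulation (canvas id and layer contents are assumptions): each 'layer' is an off-screen canvas, and the visible canvas is recomposited from bottom to top whenever anything changes.

```javascript
const display = document.getElementById("displayCanvas");
const displayCtx = display.getContext("2d");

// One off-screen canvas per layer, same size as the display canvas.
function makeLayer() {
  const c = document.createElement("canvas");
  c.width = display.width;
  c.height = display.height;
  return c;
}
const background = makeLayer();
const foreground = makeLayer();

// Draw into the individual layers in any order you like...
background.getContext("2d").fillRect(10, 10, 100, 100);
foreground.getContext("2d").fillText("I stay on top", 20, 40);

// ...then composite them bottom-to-top onto the visible canvas.
function compose() {
  displayCtx.clearRect(0, 0, display.width, display.height);
  displayCtx.drawImage(background, 0, 0);
  displayCtx.drawImage(foreground, 0, 0);
}
compose();
```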

HTML5 canvas hittesting

I have some images drawn on an HTML5 canvas and I want to check whether they are hit on mouse click. It seems easy: I have the bounds of the images; however, the images are transformed (translated and scaled). Unfortunately, the context does not have a method to get the current transform matrix, and there is also no API for matrix multiplication.
Seems the only solution is to keep track of the transforms myself and implement matrix multiplication.
Suggestions are welcomed.
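For what it's worth, a minimal sketch of that do-it-yourself bookkeeping, assuming only translation and uniform scale are used (the sprites array and its property names are illustrative): record the transform when drawing, then invert it to map the mouse point into the image's untransformed space before testing the bounds.

```javascript
// Record the transform used when drawing each image.
const sprites = [
  { img: myImage, tx: 120, ty: 60, scale: 1.5 }   // myImage: an assumed, loaded Image
];

function drawSprite(ctx, s) {
  ctx.save();
  ctx.translate(s.tx, s.ty);
  ctx.scale(s.scale, s.scale);
  ctx.drawImage(s.img, 0, 0);
  ctx.restore();
}

// Apply the inverse transform to the mouse point and test in local space.
function hitTest(s, mouseX, mouseY) {
  const localX = (mouseX - s.tx) / s.scale;
  const localY = (mouseY - s.ty) / s.scale;
  return localX >= 0 && localX <= s.img.width &&
         localY >= 0 && localY <= s.img.height;
}
```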
This is a common problem in the 3D (OpenGL) graphics world as well.
The solution is to create an auxiliary canvas object (which is not displayed), and to redraw your image into it. The draw is exactly the same as with your main canvas draw, except that each element gets drawn with a unique color. You then look up the pixel corresponding to your mouse pick, and read off its color, which will give you the corresponding element (if any).
This is a commonly used method in the OpenGL world. You can find descriptions of it by Googling terms like "opengl object picking". Here is one of the many search results.
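Translated to canvas terms, a rough sketch of that picking technique might look like the following (mainCanvas, shapes, and the drawShapeWithFlatColor helper are assumptions): each shape is redrawn into a hidden canvas in a unique flat color, and the pixel under the mouse identifies the shape. Note that anti-aliased edges can blend colors, so hits right on a border may occasionally be missed.

```javascript
// Hidden canvas used only for picking (never added to the DOM).
const pickCanvas = document.createElement("canvas");
pickCanvas.width = mainCanvas.width;      // mainCanvas: your visible canvas
pickCanvas.height = mainCanvas.height;
const pickCtx = pickCanvas.getContext("2d");

// Give each shape a unique color and redraw it into the pick canvas,
// applying exactly the same transforms used on the visible canvas.
const pickColors = {};                    // "r,g,b" -> shape
shapes.forEach(function (shape, i) {
  const n = i + 1;
  const color = [n & 255, (n >> 8) & 255, (n >> 16) & 255];
  pickColors[color.join(",")] = shape;
  // drawShapeWithFlatColor: assumed helper that repeats your normal draw,
  // but with fill/stroke forced to the given flat color.
  drawShapeWithFlatColor(pickCtx, shape, "rgb(" + color.join(",") + ")");
});

// On click, read the single pixel under the mouse and look up its owner.
mainCanvas.addEventListener("click", function (e) {
  const rect = mainCanvas.getBoundingClientRect();
  const p = pickCtx.getImageData(e.clientX - rect.left, e.clientY - rect.top, 1, 1).data;
  const shape = pickColors[[p[0], p[1], p[2]].join(",")];
  if (shape) console.log("Hit:", shape);
});
```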
Update: The HTML5 canvas spec now includes hit regions. I'm not sure to what degree these are supported by browsers yet.

How can I turn an image file of a game map into boundaries in my program?

I have an image of a basic game map. Think of it as just horizontal and vertical walls which can't be crossed. How can I go from a PNG image of the walls to something in code easily?
The hard way is pretty straightforward... it's just that if I change the image map I would like an easy way to translate that change into code.
Thanks!
edit: The map is not tile-based. It's top down 2D.
I dabble in video games, and I personally would not want the hassle of checking the boundaries of pictures on the map. Wouldn't it be cleaner if these walls were objects that just happened to have an image property (or something like it)? The image would display, but the object would have well defined coordinates and a function could decide whether an object was hit every time the player moved.
I need more details.
Is your game tile-based? Is it 3D?
If it's tile-based, you could downsample your image to the tile resolution and then do a 1:1 conversion, with each pixel representing a tile.
I suggest writing a script that takes each individual pixel and determines whether it represents part of a wall or not (i.e. black or white). Then code your game so that walls are built from individual little blocks, represented by the pixels. Shouldn't be TOO hard...
If you don't need to precompute anything using the map info, you can just check in your runtime logic using a getPixel(x, y)-like function.
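For example, a rough sketch (the image path and the "dark pixels are walls" threshold are assumptions) that draws the map image onto an off-screen canvas, reads its pixels once, and builds a simple wall lookup:

```javascript
const mapImage = new Image();
mapImage.src = "map.png";                  // assumed path to your wall image
mapImage.onload = function () {
  const c = document.createElement("canvas");
  c.width = mapImage.width;
  c.height = mapImage.height;
  const ctx = c.getContext("2d");
  ctx.drawImage(mapImage, 0, 0);

  // One pass over the pixels: treat dark pixels as walls.
  const data = ctx.getImageData(0, 0, c.width, c.height).data;
  const walls = new Uint8Array(c.width * c.height);
  for (let i = 0; i < walls.length; i++) {
    const r = data[i * 4], g = data[i * 4 + 1], b = data[i * 4 + 2];
    walls[i] = (r + g + b) / 3 < 128 ? 1 : 0; // threshold value is an assumption
  }

  // Runtime check used by the movement code.
  window.isWall = function (x, y) {
    return walls[Math.floor(y) * c.width + Math.floor(x)] === 1;
  };
};
```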
Well, I can see a few cases, each with a different "best solution", depending on where your graphics come from:
Your graphics are tiled, and thus you can easily "recognize" a block because it uses the same graphics as other blocks; all you would have to do is write a program that, when given a list of "blocking tiles" and a map, produces a "collision map" by comparing each tile with the tiles in the "blocking list".
Your graphics are just graphics (e.g. a picture, or some CG artwork) and you don't expect the pixels of one block to be the same as the pixels of another block. You could still try to apply an "edge detection" algorithm to your picture, but my guess is that you should rather split your picture into a BG layer and an FG layer so that the FG layer has a pre-defined color (or alpha = 0), and test pixels against that color to decide whether things are blocking or not.
You don't have many blocking shapes, but they are usually complex (polygons, ellipses) and would be inefficient to render as a bitmap of the world or to pack as "tile attributes". This is typically the case for point-and-click adventure games, for instance. In that case, you probably want to create paths that match your boundaries with a vector drawing program and dig up a library that does polygon intersection or Bézier collisions.
Good luck and have fun.