HTML5 canvas hit testing

I have some images drawn on an HTML5 canvas and I want to check whether they are hit on mouse click. This seems easy: I have the bounds of the images. However, the images are transformed (translated and scaled), and unfortunately the context has no method to get the current transform matrix, nor is there an API for matrix multiplication.
It seems the only solution is to keep track of the transforms myself and implement matrix multiplication.
Suggestions are welcome.
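For reference, here is a minimal sketch of the "track the transform yourself" approach described above, assuming only translation and scaling (no rotation); all names are illustrative, not part of any existing API:

    // Keep the current translation/scale alongside the canvas state.
    const view = { tx: 0, ty: 0, scale: 1 };

    function drawImageTransformed(ctx, img, x, y) {
      ctx.save();
      ctx.translate(view.tx, view.ty);
      ctx.scale(view.scale, view.scale);
      ctx.drawImage(img, x, y);
      ctx.restore();
    }

    // Map the mouse point back into untransformed image space, then test the bounds.
    function hitTest(mouseX, mouseY, bounds) {
      const x = (mouseX - view.tx) / view.scale;
      const y = (mouseY - view.ty) / view.scale;
      return x >= bounds.x && x <= bounds.x + bounds.w &&
             y >= bounds.y && y <= bounds.y + bounds.h;
    }

With full matrices (rotation, skew) you would instead accumulate the matrix yourself and apply its inverse to the mouse point.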

This is a common problem in the 3D (OpenGL) graphics world as well.
The solution is to create an auxiliary canvas object (which is not displayed), and to redraw your image into it. The draw is exactly the same as with your main canvas draw, except that each element gets drawn with a unique color. You then look up the pixel corresponding to your mouse pick, and read off its color, which will give you the corresponding element (if any).
This is a commonly used method in the OpenGL world; you can find descriptions of it by searching for terms like "opengl object picking".
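A minimal sketch of that idea, assuming mainCanvas is your visible canvas and each shape object can redraw its own silhouette (drawSilhouette here is a made-up method, not a canvas API):

    // Off-screen canvas used only for picking; it is never added to the page.
    const pickCanvas = document.createElement('canvas');
    pickCanvas.width = mainCanvas.width;
    pickCanvas.height = mainCanvas.height;
    const pickCtx = pickCanvas.getContext('2d');

    function pick(mouseX, mouseY, shapes) {
      pickCtx.clearRect(0, 0, pickCanvas.width, pickCanvas.height);
      // Redraw every shape with a unique flat color that encodes its index.
      shapes.forEach((shape, i) => {
        pickCtx.fillStyle = 'rgb(' + ((i >> 16) & 255) + ',' +
                                     ((i >> 8) & 255) + ',' + (i & 255) + ')';
        shape.drawSilhouette(pickCtx); // same transforms as the visible draw
      });
      // Read the single pixel under the mouse and decode the index.
      const p = pickCtx.getImageData(mouseX, mouseY, 1, 1).data;
      if (p[3] === 0) return null;     // transparent pixel: nothing was hit
      return shapes[(p[0] << 16) | (p[1] << 8) | p[2]];
    }

Note that anti-aliased edges can blend colors, so picks right on a shape's border may occasionally decode to the wrong index.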
Update: The HTML5 canvas spec now includes hit regions. I'm not sure to what degree these are supported by browsers yet.

Related

Canvas: Mouse Events

I know it is not possible to add event handlers to specific circles or rectangles in canvas. But there are some nice frameworks like EaselJS, KineticJS, Paper.js, and Fabric.js that support event handling on specific elements.
Can someone explain to me how they work?
I think there are only two solutions.
1. You create a new canvas element for each shape and layer them on top of each other. That way you can give each canvas its own event handler.
2. You have only one canvas and one event handler. Then you have to do complex mathematical calculations to find out whether a specific element was clicked. If you have only circles or rectangles, this might be easy. But if you have paths with lots of curves, it gets quite difficult.
I don't want to use the libraries, so it would be nice if someone could help me.
Here's a BRIEF summary of how canvas drawing libraries work
An unaltered canvas is just a big bitmap. Once you draw shapes on the canvas, they become inaccessible, forgotten pixels.
Canvas drawing libraries store all your shapes into “retained” objects. Each shape object has sufficient information about itself to allow the drawing library to redraw it whenever necessary.
The canvas drawing libraries are the "controllers" for objects. The libraries have the algorithms necessary to track, manipulate and redraw all shape objects as necessary.
The following information is retained about every shape object:
Basic identification
ID
Shape name
Parent or Container
Inherent properties of the shape:
Rectangular shapes (rect, image, text) know their width and height.
Circular shapes (circles, ellipses, regular polygons, stars) know their radius and side count.
Lines know length.
Curved shapes (arcs, beziers, paths) know anchor points and control points.
Text also knows…well, the text!
Images also know their pixel data (usually stored in JavaScript Image objects)
Transformational information:
Starting X/Y coordinate
Translations—accumulated movements off the starting coordinate.
Rotations—accumulated rotations of this shape (usually in radians).
Scalings—accumulated resizings
Other transforms (less common) are skews and warps
Layering information—the current z-index
Styling information:
StrokeColor
StrokeWidth
FillColor
Opacity
isVisible
lineCaps
cornerRadius
Tracking abilities:
Bounding box—the smallest rectangle that completely contains this shape
This is used for “hit testing” to see if the mouse is inside this object (for selecting and dragging)
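To make that concrete, here is a stripped-down sketch (not taken from any particular library) of a retained rectangle object with the bounding-box hit test described above; the canvas variable is assumed to reference your canvas element:

    // A minimal "retained" shape: enough data to redraw and hit-test itself.
    function Rect(id, x, y, width, height, fillColor) {
      this.id = id;
      this.x = x; this.y = y;
      this.width = width; this.height = height;
      this.fillColor = fillColor;
    }

    Rect.prototype.draw = function (ctx) {
      ctx.fillStyle = this.fillColor;
      ctx.fillRect(this.x, this.y, this.width, this.height);
    };

    // Bounding-box hit test: is the point inside this shape's rectangle?
    Rect.prototype.contains = function (px, py) {
      return px >= this.x && px <= this.x + this.width &&
             py >= this.y && py <= this.y + this.height;
    };

    // The "library" keeps all shapes and re-checks them on every click.
    const shapes = [new Rect('a', 10, 10, 50, 30, 'red')];

    canvas.addEventListener('click', (e) => {
      const box = canvas.getBoundingClientRect();
      const x = e.clientX - box.left;
      const y = e.clientY - box.top;
      const hit = shapes.find((s) => s.contains(x, y));
      if (hit) console.log('clicked shape', hit.id);
    });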
If you don't want to use a library, you may find my answer in this thread helpful. As markE says, once the canvas is written to, there is no way of tracking that data (unless you care to loop through each individual pixel and test its colour, though that is only really useful for pixel-level collision detection).

HTML 5 canvas image special effects

I'm working on a 2D tile-based HTML5 canvas application and I would like to know what kind of special effects I can apply to images as they're being drawn (context.drawImage(...)). The only trick I've come across is modifying the context.globalAlpha value. This leaves some color from the previous frames, creating a blurring or dazed effect when things on the canvas are moving from frame to frame.
Is there something for rendering images that is comparable to setting the context.fillStyle to an ARGB value for primitive shapes?
Is there a multiply mode? i.e., multiply the image pixel color by the destination color. This could be used for primitive lighting. (I've toyed around with context.globalCompositeOperation but didn't find anything interesting.)
Are there any other cool effects you've come across?
NOTE: I don't want to use WebGL for this application, and it's a game. That means it's real-time and I can't modify each pixel with JavaScript code because that takes too long. (Although I could probably do that when the player dies and nothing is moving on the screen anymore.)
Canvas doesn't provide any predefined effects.
To manipulate the image shape you can use matrix transformations.
To manipulate the image pixels and colors you should use the getImageData method - http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#pixel-manipulation.
Then use Google to find algorithms for effects like swirl, blur, emboss, etc.
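As a hedged illustration of that getImageData approach, here is a sketch of a per-pixel multiply tint; ctx and canvas are assumed to be your 2D context and canvas element, and the tint values are arbitrary:

    // Multiply every pixel of the canvas by a tint color (channel factors 0..1).
    function multiplyTint(ctx, width, height, r, g, b) {
      const image = ctx.getImageData(0, 0, width, height);
      const data = image.data;            // RGBA bytes, 4 per pixel
      for (let i = 0; i < data.length; i += 4) {
        data[i]     = data[i]     * r;    // red
        data[i + 1] = data[i + 1] * g;    // green
        data[i + 2] = data[i + 2] * b;    // blue
        // alpha (data[i + 3]) is left untouched
      }
      ctx.putImageData(image, 0, 0);
    }

    // Example: darken the whole scene with a reddish "light".
    multiplyTint(ctx, canvas.width, canvas.height, 1.0, 0.6, 0.6);

As the question notes, doing this every frame in JavaScript is expensive, so it is better suited to occasional effects than to per-frame rendering.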

Diagrams in SVG versus HTML5 Canvas

I want to start a project where I need to draw diagrams consisting of rounded rectangles connected with lines, plus a JavaScript action when I click certain elements. This needs to work in all modern browsers.
Both SVG and HTML5 Canvas seem to be able to do this so I wonder what would be best.
Also I don't want to reinvent the wheel, so if there are libraries that do such things I would like to know; I took a look at Raphaël and some other JavaScript drawing libraries but they don't give all the functionality I need. In Google's API there is such a tool but it is very limited.
Use SVG because—as a retained-mode drawing API—you can attach event listeners directly to specific elements, and change properties of specific elements and have the page magically update. Further, as a vector-based format, it is resolution-independent.
HTML5 Canvas, by comparison, is a non-retained-mode (aka immediate-mode) drawing API; every pixel you draw is blended with all other pixels on the canvas, with no concept of the original shape. Further, as a raster-based format, you would need to do some extra work to get the drawing commands to adjust for different display sizes.
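For instance, here is what "attaching a listener to a shape" looks like with SVG; the element ID is made up, and the shape is assumed to already exist in the page as <circle id="node1" ...>:

    // The browser retains the circle as a live DOM object...
    const node = document.getElementById('node1');

    node.addEventListener('click', () => {
      // ...so changing an attribute automatically updates the rendering.
      node.setAttribute('fill', 'tomato');
    });

With canvas there is no element to listen on; you would have to hit-test the click coordinates against your own record of what was drawn.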
In general, you should use Canvas if and only if you need:
Direct setting of pixels (e.g. blurs, sparkly effects), or
Direct getting of pixels (e.g. reading a user's drawing to save as a PNG, sampling portions of the image to detect visual overlaps), or
A massive number of 'objects' that won't move much or don't need individual event tracking (SVG can be slow to redraw with thousands of objects).
Note also that you don't have to choose only one or the other. You can draw SVG onto a canvas. You can include bitmaps (images) in SVG. You can even include HTML5 Canvas in SVG via <foreignObject>. You can have a single HTML page with multiple layered canvases and SVG elements with transparent backgrounds, intermingling the output of each.

How is the canvas tag beneficial in HTML5?

I am a junior developer and I can't understand how the canvas tag is beneficial for us.
I have read a lot of articles on it, but I can't see the root benefit we get from the canvas tag.
Think of the difference between canvas and SVG as the difference between Photoshop and Illustrator (or Gimp and Inkscape for you OSS folks). One deals with bitmaps and the other with vector art.
With canvas, since you are drawing in bitmap, you can smudge, blur, burn, dodge your images easily. But since it's bitmap you can't easily draw a line and then decide to reposition the line. You need to delete the old line and then draw a new line.
With svg, since you are drawing vectors, you can easily move, scale, rotate, reposition, flip your drawings. But since it's vectors you can't easily blur the edges according to line thickness or seamlessly meld a red circle into a blue square. You need to simulate blurring by drawing intermediate polygons between objects.
Sometimes their use cases overlap. For example, a lot of people use canvas for simple line drawings and keep track of the objects as data structures in JavaScript. But really, they serve different purposes. If you try to implement general-purpose vector drawing in pure JavaScript on top of canvas, I doubt you'd be faster than using SVG, which is most likely implemented in C.
Basically, thanks to canvas, we can now draw/render 2D shapes using HTML5 and the canvas API.
Some possible uses for Canvas:
Image drawing program
Photo editing/manipulation
2D Games
Advanced image viewing such as Microsoft's Deep Zoom
If you can't understand how it's beneficial, then maybe it isn't, from your point of view at least. Don't think that because it's there you have to use it somehow; pick and choose the technologies that work for you based on what you're trying to build. An accounting web app, for instance, probably wouldn't need a canvas.
The canvas will enable you to draw pixel perfect graphics.
The cool projects that came to mind for me are:
Visualizing GPS data. GPS data is just an XML list of coordinates; you could easily build something in canvas to "connect the dots".
A mobile app where the user can actually sign a document with her finger; canvas lets you export the rendered drawing to a PNG that can be saved on the server (a minimal export sketch follows below).
In a game where you have avatars, you can let the user actually draw on the avatar. Moustaches, anyone?
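For the signature example above, exporting the drawing is essentially one call; the element ID and upload endpoint here are made up:

    // Turn the current canvas contents into a PNG data URL...
    const canvas = document.getElementById('signature');
    const pngDataUrl = canvas.toDataURL('image/png');

    // ...which can then be sent to the server like any other string.
    fetch('/signatures', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ image: pngDataUrl })
    });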
Other stuff:
On iOS / Android, using lots of CSS3 effects like box-shadow can lead to poor performance, especially when animating. You can do a lot of these graphics in a single canvas tag; here's an example: http://everytimezone.com/. This thing is flawless on an iPad.
Cool background effects. For example, try going to Paul Irish's site and moving your cursor around the background: http://paulirish.com/
In this HTML5 book sponsored by Google, a lot of the effects use canvas: http://www.20thingsilearned.com/ - particularly the "page flip" animations.
My personal take on canvas (and when I actually found a use case for it) is the ability to pick and change colors per pixel in a canvas element. It effectively turns an image from something we have no information about into an element like any other DOM element (and yes, I know about the current problems with canvas and the DOM; surely this will be taken care of in the future).
Sure, canvas made some kinds of animation easier and plugin-free, but we could do that before (mostly with Flash). I think the real importance is the ability to know what is happening on the page.

How can I turn an image file of a game map into boundaries in my program?

I have an image of a basic game map. Think of it as just horizontal and vertical walls which can't be crossed. How can I easily go from a PNG image of the walls to something in code?
The hard way is pretty straightforward... it's just that if I change the image map, I would like an easy way to translate that into code.
Thanks!
Edit: The map is not tile-based. It's top-down 2D.
I dabble in video games, and I personally would not want the hassle of checking the boundaries of pictures on the map. Wouldn't it be cleaner if these walls were objects that just happened to have an image property (or something like it)? The image would display, but the object would have well defined coordinates and a function could decide whether an object was hit every time the player moved.
I need more details.
Is your game tile-based? Is it 3D?
If it's tile-based, you could downsample your image to the tile resolution and then do a 1:1 conversion, with each pixel representing a tile.
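A rough sketch of that downsampling idea, assuming the map image has already been drawn onto a canvas scaled down so that one pixel corresponds to one tile (the function name and threshold are illustrative):

    // Read a tilesWide x tilesHigh image and mark dark pixels as walls.
    function buildTileMap(ctx, tilesWide, tilesHigh) {
      const data = ctx.getImageData(0, 0, tilesWide, tilesHigh).data;
      const walls = [];
      for (let ty = 0; ty < tilesHigh; ty++) {
        const row = [];
        for (let tx = 0; tx < tilesWide; tx++) {
          const i = (ty * tilesWide + tx) * 4;
          const brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
          row.push(brightness < 128);   // true = wall tile
        }
        walls.push(row);
      }
      return walls;                     // walls[y][x] === true means blocked
    }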
I suggest writing a script that takes each individual pixel and determines whether it represents part of a wall or not (i.e. black or white). Then code your game so that walls are built from individual little blocks, represented by the pixels. Shouldn't be TOO hard...
If you don't need to precompute anything from the map info, you can just check at runtime with a getPixel(x, y)-style lookup.
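In canvas terms that runtime check could look like this sketch; collisionCanvas is assumed to be an off-screen canvas holding the wall image, with walls drawn as dark, opaque pixels:

    const collisionCtx = collisionCanvas.getContext('2d');

    // True if the pixel at (x, y) on the collision image is part of a wall.
    function isBlocked(x, y) {
      const p = collisionCtx.getImageData(x, y, 1, 1).data;
      return p[3] > 0 && (p[0] + p[1] + p[2]) / 3 < 128;
    }

    // In the game loop, before moving the player:
    // if (!isBlocked(nextX, nextY)) { player.x = nextX; player.y = nextY; }

If you need many checks per frame, read the whole image once with getImageData and index into the resulting array instead of calling it per pixel.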
Well, I can see three cases, each with a different "best solution", depending on where your graphics come from:
Your graphics are tiled, so you can easily "recognize" a block because it uses the same graphics as the other blocks. All you would have to do is write a program that, given a list of "blocking tiles" and a map, produces a "collision map" by comparing each tile against the blocking list.
Your graphics are just graphics (e.g. a photo or some CG artwork) and you don't expect the pixels of one block to match the pixels of another. You could still try to apply an edge-detection algorithm to the picture, but my guess is that you should rather split it into a background layer and a foreground layer, so that the foreground layer uses a predefined color (or alpha = 0), and test pixels against that color to decide whether things are blocking or not.
You don't have many blocking shapes, but they are usually complex (polygons, ellipses) and would be inefficient to render as a bitmap of the world or to pack as "tile attributes". This is typically the case for point-and-click adventure games, for instance. In that case, you probably want to create paths that match your boundaries in a vector drawing program and dig up a library that does polygon intersection or Bézier collisions (a short point-in-polygon sketch follows below).
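If you do end up rolling your own collision test for simple polygons, the classic ray-casting point-in-polygon routine is short; this is a generic sketch, not tied to any particular library:

    // Returns true if (px, py) lies inside the polygon [[x, y], [x, y], ...].
    function pointInPolygon(px, py, poly) {
      let inside = false;
      for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
        const xi = poly[i][0], yi = poly[i][1];
        const xj = poly[j][0], yj = poly[j][1];
        // Count crossings of a horizontal ray cast to the right of the point.
        const crosses = (yi > py) !== (yj > py) &&
                        px < ((xj - xi) * (py - yi)) / (yj - yi) + xi;
        if (crosses) inside = !inside;
      }
      return inside;
    }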
Good luck and have fun.