Just starting to experiment with the canvas, and I'm trying to apply a texture to an object (the blobs from the blob example - http://www.blobsallad.se/). This example uses the 2D context and doesn't appear to use WebGL. All the information on texturing I could find uses WebGL, and I was wondering how easy this would be to accomplish. Is there any way I could incorporate the texturing features of WebGL into this canvas without rewriting the code? Summed up, I guess this question is asking whether the methods available to the 2D context are also available in the WebGL context... If so, I suppose I could just change the context and apply my texture? If I'm thinking about this all wrong or am confused conceptually, please let me know.
Thanks,
Brandon
I've experimented with drawing an image to a 2D canvas and then using it as a texture for a WebGL canvas. It works, but the performance is horrible (it varies a lot from browser to browser). I'm currently considering a few other options for refactoring it. I wouldn't recommend it for anything more than statically drawing an image to one or two 2D canvases.
You can see an example of the craziness in lanyard/src/render/SurfaceTileRenderer.js in the project at: http://github.com/fintler/lanyard
Are you looking to apply a texture to a 2D shape?
Try something like this:
http://jsfiddle.net/3U9pm/
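In case the fiddle link goes stale: the usual way to texture a shape on the plain 2D context is createPattern(). A sketch - the canvas id, the 'texture.png' path, and the circle are all illustrative:

```javascript
// Fill any path with a tiled image using the 2D context's createPattern().
// 'texture.png' is a hypothetical path - substitute your own image.
function fillWithTexture(ctx, img, drawPath) {
  const pattern = ctx.createPattern(img, 'repeat'); // tile the image
  ctx.fillStyle = pattern;
  drawPath(ctx);   // the caller defines the shape
  ctx.fill();      // painted with the texture instead of a flat color
}

// Browser-only usage (guarded so the sketch is inert elsewhere):
if (typeof document !== 'undefined') {
  const ctx = document.getElementById('canvas').getContext('2d');
  const img = new Image();
  img.src = 'texture.png';
  img.onload = () => fillWithTexture(ctx, img, c => {
    c.beginPath();
    c.arc(100, 100, 50, 0, Math.PI * 2); // a blob-ish circle
  });
}
```

No WebGL needed for this: a pattern fill respects the current path and transform, so it works directly on code like the blob demo.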
Related
I need some help rendering a NURBS surface in WebGL.
A few days ago our professor assigned us to draw a flag with NURBS and animate it.
We have to use WebGL (and cannot use three.js...).
I have no idea how to proceed (even though I know the theory behind NURBS and tessellation, more or less).
Any hint?
Disclaimer: I'm not asking for a solution. It's against the rules and I want to get it myself. I just need to be pointed on the right direction.
Thanks in advance
Just because you can't use three.js doesn't mean you can't look at it to figure out how it works! This example renders NURBS and you can view the source code. (Hint: it uses THREE.NURBSSurface, THREE.NURBSUtils, etc., which are then plugged into a ParametricBufferGeometry.)
As for the WebGL part: if you're familiar with OpenGL, it's much the same, just cut back a bit on features. You need to create a canvas with a WebGL context, generate all your data on the CPU (definition of the surface, tessellation, etc.), then pass the vertex and index data to the GPU and render it all with a shader.
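That pipeline - tessellate on the CPU, then hand the GPU flat vertex and index arrays - can be sketched as follows. Function names and grid sizes are illustrative, and surfaceFn stands in for your own NURBS evaluator:

```javascript
// Tessellate a parametric surface on the CPU into vertex positions and
// triangle indices. surfaceFn(u, v) -> [x, y, z] would be your NURBS
// evaluator; a flat placeholder is used below.
function tessellate(surfaceFn, nu, nv) {
  const positions = [];
  const indices = [];
  for (let i = 0; i <= nu; i++) {
    for (let j = 0; j <= nv; j++) {
      positions.push(...surfaceFn(i / nu, j / nv));
    }
  }
  // Two triangles per grid cell.
  for (let i = 0; i < nu; i++) {
    for (let j = 0; j < nv; j++) {
      const a = i * (nv + 1) + j, b = a + 1;
      const c = a + (nv + 1),     d = c + 1;
      indices.push(a, b, c,  b, d, c);
    }
  }
  return { positions: new Float32Array(positions),
           indices: new Uint16Array(indices) };
}

// Browser-only: upload the data and draw (guarded; shader setup omitted).
if (typeof document !== 'undefined') {
  const gl = document.getElementById('canvas').getContext('webgl');
  const flat = (u, v) => [u, v, 0]; // stand-in for a real NURBS evaluator
  const mesh = tessellate(flat, 20, 10);
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, mesh.positions, gl.STATIC_DRAW);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, mesh.indices, gl.STATIC_DRAW);
  // ...compile a shader program, bind the attributes, then:
  // gl.drawElements(gl.TRIANGLES, mesh.indices.length, gl.UNSIGNED_SHORT, 0);
}
```

For the flag animation, you can re-evaluate the control points (or a time uniform in the vertex shader) each frame and re-upload or displace the positions.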
I suggest you start with the following two lectures:
Drawing Bézier Curves
Drawing Lines is Hard
and finally, use this WebGL example as a starting point for your assignment:
Resolution independent rendering of Bezier curves in WebGL
Good luck and happy coding! If you achieve something good, please let us know!
The HTML canvas element is impressive, and what people are doing with it is mind-blowing. When I study the JavaScript that developers use, it's not always apparent whether what I'm seeing qualifies as "WebGL" or not. Where is the line drawn between what is and is not WebGL?
It's WebGL if they're using a WebGLRenderingContext. See: https://www.khronos.org/registry/webgl/specs/1.0/
Example 1 from that document shows:
var canvas = document.getElementById('canvas1');
var gl = canvas.getContext('webgl');
Generally there are two types of use of the <canvas> element right now: 2D and WebGL. These are called contexts.
You get a context using canvas.getContext('yourContextHere');. The easiest way to identify which context is used is to search for getContext and see what is passed as the first argument.
There is one 2d context, while there are several variations of WebGL, like experimental-webgl, webkit-3d and a few others, but they will usually contain the word webgl or 3d.
Another big difference is dimensions: the 2d context is obviously only two-dimensional. It is still possible to do some matrix math and simulate three dimensions - this technique is sometimes used to draw something simple with a 3D feel - but that is rare.
WebGL is based on OpenGL ES 2.0 and has its own functionality, totally different from the 2d context. So if you learn some of it, it will be very easy to distinguish between them.
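That rule of thumb can be turned into a small check - a sketch, where contextKind just classifies the string passed to getContext:

```javascript
// Classify a getContext() argument as 2D or WebGL, per the rule of thumb
// above: WebGL context names usually contain "webgl" or "3d".
function contextKind(contextId) {
  if (contextId === '2d') return '2d';
  if (/webgl|3d/.test(contextId)) return 'webgl';
  return 'unknown';
}

// Browser-only probe of a live canvas (guarded sketch):
if (typeof document !== 'undefined') {
  const canvas = document.createElement('canvas');
  const gl = canvas.getContext('webgl') ||
             canvas.getContext('experimental-webgl'); // older browsers
  console.log(gl ? 'WebGL is available' : 'WebGL not available');
}
```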
Let's say I use the Graphics class at runtime to draw some vector shapes dynamically. For example a square and a circle.
Is there a way to create a new shape at runtime where those 2 vectors shapes overlap?
Those kinds of operations are very common in all vector design programs such as Illustrator, Corel, etc., but I haven't found anything in Adobe's documentation, nor anywhere else, about doing it in code.
Although drawing operations on the Graphics class are described in terms of lines, points, etc., this is - as far as you're concerned - just telling it what to draw onto a bitmap. There's no way to remove a shape once drawn, short of clear(), which wipes the whole thing clean.
I don't fully understand why, as the vector data must be retained - there's no loss of quality on scaling after drawing, for example.
If you don't want to get into some hardcore maths (for anything beyond straight lines, you'll need to), there's an answer here which might help if you've ever used PixelBender:
How to calculate intersection between shapes in flash / action script ? (access to shape's segments and nodes?)
Failing that, if it's just cosmetic you could play around with masking shapes (will probably end up quite hacky though) - however, if you actually want to use the intersection to draw or describe a shape you will need to dig out your maths book or look for a good graphics library.
Hope this helps
I'm building a game development API for Google's GWT to make canvas games, and I have a question about prerendering.
First: I am not entirely sure how browsers/JavaScript/GWT manage a deleted canvas - whether its data stays in memory or not after a removeChild() or RootPanel.Remove() (with GWT) - or even what the correct method is to remove it from memory.
So the solution I've come up with is to use multiple (as needed) big, hidden canvases as a pre-render palette, and use drawImage() magic to jump around the prerendered images, drawing them onto the main context - while dealing with my own problems of insertion, removal, empty spaces, etc.
Is this the best solution? Or should I try using one little canvas for every little image and texture that is prerendered? Or should I try something completely different whatsoever?
Thanks in advance, and sorry for my spelling.
Using a canvas to pre-render your items is a good idea; however, it's not always the best choice.
If your items are complex (with gradients, shadows and visual effects), then yes, it will help. But if your items are simple (images, polygons, simple Bézier curves, ...), your framerate won't increase and may even decrease (because of the extra drawImage call). In that case it's better to render in real time.
From my experiments, you won't lose performance by using several small canvases (maybe a little memory), and they can be easier to manage than one big canvas (like an object-oriented scene).
If your items change from time to time, small canvases also make it easier to manage the size of your temporary canvases.
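The small-canvas-per-item approach can be sketched like this - the gradient sprite and the 'game' canvas id are purely illustrative:

```javascript
// Pre-render a complex item once into an offscreen canvas, then stamp it
// cheaply with drawImage() each frame.
function prerender(width, height, draw) {
  const off = document.createElement('canvas'); // never added to the DOM
  off.width = width;
  off.height = height;
  draw(off.getContext('2d')); // pay the expensive draw cost exactly once
  return off;
}

// Browser-only usage (guarded sketch):
if (typeof document !== 'undefined') {
  const sprite = prerender(64, 64, ctx => {
    // An "expensive" item: gradient + shadow, drawn only once.
    const g = ctx.createRadialGradient(32, 32, 4, 32, 32, 30);
    g.addColorStop(0, '#fff');
    g.addColorStop(1, '#06c');
    ctx.fillStyle = g;
    ctx.shadowColor = 'rgba(0,0,0,0.5)';
    ctx.shadowBlur = 6;
    ctx.beginPath();
    ctx.arc(32, 32, 28, 0, Math.PI * 2);
    ctx.fill();
  });

  const main = document.getElementById('game').getContext('2d');
  // In the game loop, a canvas is a valid drawImage() source:
  main.drawImage(sprite, 100, 50);
}
```

Because each item owns its own canvas, resizing or discarding one item never forces you to repack a shared atlas.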
Hope this helps.
I have some images drawn on an HTML5 canvas and I want to check whether they are hit on mouse click. Seems easy - I have the bounds of the images - but the images are transformed (translated and scaled). Unfortunately, the context does not have a method to get the current transform matrix, and there is also no API for matrix multiplication.
It seems the only solution is to keep track of the transforms myself and implement matrix multiplication.
Suggestions are welcomed.
This is a common problem in the 3D (OpenGL) graphics world as well.
The solution is to create an auxiliary canvas object (which is not displayed) and redraw your image into it. The drawing is exactly the same as on your main canvas, except that each element is drawn with a unique color. You then look up the pixel corresponding to your mouse position and read off its color, which gives you the corresponding element (if any).
This is a commonly used method in the OpenGL world. You can find descriptions of it by Googling terms like "opengl object picking". Here is one of the many search results.
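In canvas terms, the picking buffer might look like this - elements, drawShape and canvas are hypothetical app-side names, and the id/color encoding is the reusable core:

```javascript
// Color-picking hit test: each element is drawn to a hidden canvas in a
// unique flat color; the pixel under the mouse identifies the element.

// Encode an integer id as an opaque RGB color, and back.
function idToColor(id) {
  return 'rgb(' + ((id >> 16) & 255) + ',' + ((id >> 8) & 255) + ',' + (id & 255) + ')';
}
function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// Browser-only usage (guarded sketch; elements/drawShape/canvas are
// hypothetical names from your own app):
if (typeof document !== 'undefined') {
  const hit = document.createElement('canvas'); // hidden, never displayed
  hit.width = 400;
  hit.height = 300;
  const hctx = hit.getContext('2d');

  // Redraw every element exactly as on the main canvas (same transforms),
  // but flat-colored with its id.
  elements.forEach((el, i) => {
    hctx.fillStyle = idToColor(i + 1); // 0 is reserved for "no hit"
    drawShape(hctx, el);
  });

  canvas.addEventListener('click', e => {
    const p = hctx.getImageData(e.offsetX, e.offsetY, 1, 1).data;
    const id = colorToId(p[0], p[1], p[2]);
    if (id !== 0) console.log('hit element', elements[id - 1]);
  });
}
```

One caveat: anti-aliased edges blend neighboring colors, so reads near a border can produce an id belonging to no element; drawing the pick buffer without strokes and treating unknown ids as misses keeps this harmless.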
Update: The HTML5 canvas spec now includes hit regions. I'm not sure to what degree these are supported by browsers yet.