How to merge three.js renders with other HTML elements?

I am very new to programming, Stack Overflow, and everything here in general, so please bear with me.
I have made simple scenes using Three.js and some effects using HTML Canvas separately.
I'd like to know if there is any way to combine the two, or in general, how to combine the Three.js renderer canvas with other HTML elements.
My current code includes something like:
var renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.domElement.id = 'WebGLCanvas';
document.body.appendChild(renderer.domElement);

A seamless integration of WebGL content and HTML elements is not possible. You can position HTML elements on top of the renderer's canvas element or you can do it the other way around. By setting the alpha flag of THREE.WebGLRenderer to true, you can place the renderer's canvas on top of other HTML content.
However, you should keep in mind that WebGL content is always rendered separately. It's not possible to merge 3D objects like meshes, lines, or point clouds with HTML elements in a way that lets them be depth-sorted and rendered together.
Combining THREE.CSS3DRenderer with THREE.WebGLRenderer is a good example of proper usage. Check out the following demo, which shows both renderers integrated in a simple application.
https://threejs.org/examples/css3d_orthographic.html
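For the layering approach, here is a minimal sketch (the id and styling choices are illustrative, not from the answer): create the renderer with the alpha flag, clear to transparent, and position the canvas above the page content.

var renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setClearColor(0x000000, 0); // clear to fully transparent
renderer.domElement.id = 'WebGLCanvas';

// Position the canvas above the normal page content.
renderer.domElement.style.position = 'absolute';
renderer.domElement.style.top = '0';
renderer.domElement.style.left = '0';
renderer.domElement.style.pointerEvents = 'none'; // let clicks reach the HTML underneath
document.body.appendChild(renderer.domElement);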

Related

Render WASM Graphics to Custom HTML Canvas

I have set up a WASM project using Rust and a game engine called Bevy to create graphics within a Svelte app. However, when I run the init() function generated by wasm-pack, it creates a canvas element for the graphics to be rendered into. Is there any way to make it render to a canvas I have created, or to style the canvas it generates?
You should be able to specify the canvas Bevy should render to by setting the WindowDescriptor's canvas field.
The docs say: "If set, this selector will be used to find a matching html canvas element, rather than creating a new one. Uses the CSS selector format."
When you create the WindowDescriptor, add the canvas selector as a field:
let window_descriptor = WindowDescriptor {
    // The field is an Option<String>, so wrap the CSS selector in Some(...)
    canvas: Some("#mycanvas".to_string()),
    ..default()
};

Why clone the canvas? Need an explanation of a tutorial

I've followed the tutorial here: http://hashrocket.com/blog/posts/using-tiled-and-canvas-to-render-game-screens to create a Tiled map on canvas. I've made some improvements to the solution, but the rendering code remained the same:
var self = this,
    layer = self._canvas.canvas.cloneNode( false );
layer = layer.getContext( "2d" );
Basically, I have a reference to the canvas element somewhere, and here I'm cloning it (just like in the tutorial). Next I run some logic and draw tiles on that clone:
layer.drawImage( ... );
Finally, after all the tile drawing is done, the clone is painted onto the main canvas:
self._canvas.drawImage( layer.canvas, 0, 0 );
My question is: why? When I ran the same algorithm on the main canvas instead of the layer, the rendered image was the same. Is there some logic behind it? The only thing that came to my mind is that, on a caught error, we could somehow prevent the layer from being rendered to the canvas. The tutorial only mentions that "we'll set up a scratch canvas to render to for a slight performance improvement".
You're drawing on a back buffer. This prevents the browser from trying to render the canvas to the screen while you are still drawing, and aside from the potential performance improvement it also prevents potential flickering. (This applies mostly to double buffering, but this method is quite similar.)
About buffering and canvas
A) As the scratch layer is memory-only, the browser never needs to update its content on each monitor refresh - it is drawn only once, to the main canvas, which is then updated as a whole.
B) If you moved things around (which is typical when tiling) by calling drawImage() with offsets/clipping from a canvas onto itself, the browser would have to create a temporary bitmap, copy the content over, copy it back to a different position, and finally destroy the temporary bitmap. Drawing from a separate scratch canvas avoids that round trip.
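For illustration, here is a minimal sketch of that back-buffer pattern (the element id and tile objects are invented for the example):

const main = document.getElementById('game'); // hypothetical visible canvas
const mainCtx = main.getContext('2d');

// Memory-only scratch canvas: same size as the visible one, never added to the DOM.
const scratch = main.cloneNode(false);
const scratchCtx = scratch.getContext('2d');

function render(tiles) {
    // Draw every tile to the scratch canvas first...
    for (const tile of tiles) {
        scratchCtx.drawImage(tile.image, tile.x, tile.y);
    }
    // ...then blit the finished frame to the visible canvas in a single call.
    mainCtx.drawImage(scratch, 0, 0);
}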

Downloading a dynamically generated SVG file from the browser

If I create an image using an HTML SVG element, can I then offer it as an SVG file download to the user? For example, I may want to load an SVG image, apply some basic transformations to it, add some text, then let the user download the result as a vector image.
Is that possible? I have been doing something similar with Canvas but have been struggling to create a vector image. I wasn't aware that SVG elements were so versatile until I came across them this morning, but if I can do the above it would be great.
Simple solution using a data URI:
var svg_root = document.getElementById('your_svg_root_element_here');
var svg_source = svg_root.outerHTML; // serialize the SVG subtree to markup
// btoa() only accepts Latin-1; non-ASCII text needs encoding first
var svg_data_uri = 'data:image/svg+xml;base64,' + btoa(svg_source);
var link = document.getElementById('anchor_element');
link.setAttribute('href', svg_data_uri);
Although it worked, when clicking on the link, the browser stalled for a few seconds.
This seems to be the simplest solution and should be compatible with all modern browsers. However, it has some noticeable overhead. If someone knows a different solution (maybe using blobs or something similar), please add it here as another answer!
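Following up on that invitation: a sketch of the Blob-based variant (reusing the element id from the snippet above; the file name is made up). It skips the base64 step and asks the browser for a download directly:

var svg_root = document.getElementById('your_svg_root_element_here');
var blob = new Blob([svg_root.outerHTML], { type: 'image/svg+xml' });
var url = URL.createObjectURL(blob);

var link = document.createElement('a');
link.href = url;
link.download = 'image.svg'; // prompt a download instead of navigating
document.body.appendChild(link);
link.click();
link.remove();
URL.revokeObjectURL(url); // release the object URL when done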

Draw shapes on HTML5 Canvas...with video

I've been Googling around a bit for an answer and haven't found a definitive one either way: is it possible to play a video using an HTML5 canvas, and also allow the user to draw on this video? The use case, for some context, is to play a video on infinite loop so the user can draw multiple boxes over specific areas to indicate regions of interest.
As a bonus (:P), if I can figure out how to do this on its own, any hints as to how this could be done within Drupal? I'm already looking at the Canvas Field module, but if you have any hints on this point too (though the first one is the priority), that'd be awesome!
You can draw HTML5 video elements onto a canvas. The drawImage method accepts a video element as its first parameter, just like an image element. This takes the current "frame" of the video element and renders it onto the canvas. To get fluid playback you will need to draw the video to the canvas repeatedly.
You can then draw on the canvas normally, making sure you redraw everything after each update of the video frame.
Here is a demo of video on canvas.
Here is an in-depth look at video and the canvas.
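As a rough sketch of that repeated-draw loop (the element selection and box format are assumptions for illustration):

const video = document.querySelector('video'); // assumes a looping <video> element
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const boxes = []; // user-drawn regions of interest, e.g. {x, y, w, h}

function paint() {
    // Copy the current video frame, then redraw every box on top of it.
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    ctx.strokeStyle = 'red';
    for (const box of boxes) {
        ctx.strokeRect(box.x, box.y, box.w, box.h);
    }
    requestAnimationFrame(paint); // keep redrawing for fluid playback
}
video.addEventListener('play', paint);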
I recently received this request from a client to provide this feature, and it must be CMS-friendly. The technique involves three big ideas:
a drawing function
repeatedly calling upon the same drawing function
using requestAnimationFrame to paint the next frame
Assuming you have a video element already, you'd take the following steps (a setup sketch follows the list):
Hide the video element
Create a canvas element whose height/width match the video element, store this somewhere
Get the context of the canvas element with canvas.getContext('2d') and also store that somewhere
Create a drawing function
In that drawing function, you would use context.drawImage(src, x, y), where src is the edited version of the current frame of the video
In that drawing function, use recursion to call itself again
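A minimal sketch of the setup steps above (the selector is an assumption):

const video = document.querySelector('video');
const width = video.offsetWidth; // read the size before hiding the element
const height = video.offsetHeight;

video.style.display = 'none'; // step 1: hide the video element

const canvas = document.createElement('canvas'); // step 2: canvas with matching size
canvas.width = width;
canvas.height = height;
video.parentNode.insertBefore(canvas, video);

const context = canvas.getContext('2d'); // step 3: store the context for the drawing function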
I can give you two examples of this being done (and usable for content management systems):
The first is here: https://jsfiddle.net/yywL381w/19/
A company called SDL makes a tool called Media Manager that hosts videos. What you see is a jQuery plugin that takes its parameters from data-* attributes, makes a request to the Media Manager REST API, creates a video, and adds effects based entirely on those data-* attributes. That plugin could easily be tweaked to work with videos called from other sources. You can look at its repo for more details on usage.
Another example is here: http://codepen.io/paceaux/pen/egLOeR
That is not a jQuery plugin; it's an ES6 class instead. You can create an image/video and apply a cropping effect with this:
let imageModule = new ImageCanvasModule(module);
imageModule.createCanvas();
imageModule.drawOnCanvas();
imageModule.hideOriginal();
You'll observe, in the ImageCanvasModule class, this method:
drawFrame () {
    if (this.isVideo && this.media.paused) return false;

    let x = 0;
    let width = this.media.offsetWidth;
    let y = 0;

    this.imageFrames[this.module.dataset.imageFrame](this.backContext);
    this.backContext.drawImage(this.media, x, y, width, this.canvas.height);
    this.context.drawImage(this.backCanvas, 0, 0);

    if (this.isVideo) {
        window.requestAnimationFrame(() => {
            this.drawFrame();
        });
    }
}
The class has created a second canvas to use for drawing. That canvas isn't visible; it's just there to save the browser some heartache.
The "manipulation" that is content manageable is this.imageFrames[this.module.dataset.imageFrame](this.backContext);
The "frame" is an attribute stored on the image/video (Which could be output by a template in the CMS). This gets the name of the imageFrame, and runs it as a matching function. It also sends in the context (so I can toggle between drawing on the back canvas or main canvas if needed)
then this.backContext.drawImage(this.media, x, y, width, this.canvas.height); draws the image on the back context.
Finally, this appears on the main canvas with this.context.drawImage(this.backCanvas, 0, 0); where I take the backcanvas, and draw it on to the main canvas. So the canvas that's visible has the least amount of manipulations possible.
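To make that lookup concrete, here is a hypothetical shape for the imageFrames table (the wedgeTop drawing logic is invented; only the key name comes from the sample markup below):

const imageFrames = {
    // Each key matches a data-image-frame value; each function receives a
    // 2d context and sets up a clipping path before the media is drawn.
    wedgeTop(context) {
        context.beginPath();
        context.moveTo(0, 40);
        context.lineTo(context.canvas.width, 0);
        context.lineTo(context.canvas.width, context.canvas.height);
        context.lineTo(0, context.canvas.height);
        context.closePath();
        context.clip();
    }
};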
And at the end, because this is a video, we want to draw a new frame. So we have the function call itself:
if (this.isVideo) {
    window.requestAnimationFrame(() => {
        this.drawFrame();
    });
}
This whole setup allows us to use the CMS to output data-* attributes containing the type of frame the user wants drawn around the image. The JavaScript then produces a canvasified version of that image or video. Sample markup might look like:
<video muted loop autoplay data-image-frame="wedgeTop"></video>

How to capture an image of an HTML element, and maintain transparency?

I'm working on a page that will allow a webmaster to add styles to their Twitter feed. Several of these styles use transparency in their display. I want to create a list of images for them to choose from. Currently I am taking screenshots on a checkered background, but that isn't really what I want.
Is there some method of capturing an image of an HTML element while maintaining the transparency?
EDIT: I'm just digging into this, so I'm coming across new topics, such as HTML5 Canvas and -moz-element. Is it possible to set a canvas background to the HTML element using -moz-element, then extract the image data from the canvas? I'm going to try this unless someone who's 'been there, done that' heads me off.
EDIT: The -moz-element and canvas approach was a dead end. -moz-element will set the item as a background, but will not let you save the background image. And a canvas doesn't save its background, even when the background is a normal image.
It requires a little bit of work, but it is doable, as long as it's HTML you're laying out. You can't recover the transparency of markup in pages you've downloaded without saving those pages and editing them locally. By rendering the HTML elements multiple times, on different background colors, the opacity can be derived using an image editor. You'll need a decent image editor; I use GIMP.
Render the elements you want to capture three times: on a black, a white, and a neutral gray background (#888).
Using a screen capture program, capture those displays and crop them to the exact same areas.
In GIMP open those images as layers and order them black, white and gray, top to bottom.
Set the top, black layer to difference mode. This will give a grayscale difference between the black and white layers.
Merge down the top layer. This will leave us with two layers. The gray background layer and the grayscale difference. Invert the colors of the difference layer, and copy it to the clipboard.
Add a layer mask to the gray background layer and paste the clipboard into the layer mask.
Delete the grayscale layer and apply the layer mask on the gray background layer. That should leave one layer with opacity similar to the original.
The opacity is off by a bit, but if we duplicate the layer and merge it with itself, it's right in the ballpark.
It's probably not pixel perfect, but it is proof of concept. Opacity of HTML markup can be captured.
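The same arithmetic can be scripted on a canvas. Here is a sketch under stated assumptions (two equally sized captures of the element, one over black and one over white; the function name is made up). Compositing gives shownOnWhite - shownOnBlack = 255 - alpha per channel, so alpha = 255 - (shownOnWhite - shownOnBlack):

function recoverAlpha(onBlack, onWhite) {
    const w = onBlack.width, h = onBlack.height;
    const canvas = document.createElement('canvas');
    canvas.width = w;
    canvas.height = h;
    const ctx = canvas.getContext('2d');

    ctx.drawImage(onBlack, 0, 0);
    const black = ctx.getImageData(0, 0, w, h);
    ctx.drawImage(onWhite, 0, 0);
    const white = ctx.getImageData(0, 0, w, h);

    const out = ctx.createImageData(w, h);
    for (let i = 0; i < out.data.length; i += 4) {
        // Estimate alpha from the red channel difference.
        const a = 255 - (white.data[i] - black.data[i]);
        out.data[i + 3] = a;
        // Recover the foreground color seen on black: F = shownOnBlack * 255 / alpha.
        for (let c = 0; c < 3; c += 1) {
            out.data[i + c] = a > 0 ? Math.min(255, Math.round(black.data[i + c] * 255 / a)) : 0;
        }
    }
    ctx.putImageData(out, 0, 0);
    return canvas; // the element with recovered transparency
}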
Using Puppeteer makes this much easier to do. It runs a web page in memory.
Start a local file server, e.g. python -m SimpleHTTPServer 8080 (Python 2) or python3 -m http.server 8080 (Python 3)
Then this script should do the trick:
const puppeteer = require('puppeteer')

;(async () => {
    const browser = await puppeteer.launch()
    const page = await browser.newPage()
    await page.goto('http://localhost:8080/index.html', {
        waitUntil: 'networkidle0'
    })
    // Make the page background transparent so omitBackground can take effect.
    const body = await page.$('body')
    await page.evaluate(() => (document.body.style.background = 'transparent'))
    await body.screenshot({ path: 'myImg.png', omitBackground: true })
    await browser.close()
})()
The docs for .screenshot() are here.
What you'd need is a web browser that can render into an image buffer in memory instead of the screen.
My guess is that all browsers can do this (that should be part of the renderers) but I'm not aware of any browser where you can access this function, at least not from JavaScript.
If you download the WebKit sources, there should be test cases which do something like that :-/
No, there's no software that will let you take screenshots and preserve the transparency of individual visual elements as transparent spots in the image, because that's not how a screenshot works: screenshots are WYSIWYG, and by definition every element in your screenshot will have a non-transparent background.
I think your best bet here is to recreate the desired portion as an image, where you can control the transparency normally. It's not the best solution, but if you're doing this a lot with the same kinds of things, it will be much faster for you rather than cropping/editing screenshots.