Draw shapes on HTML5 Canvas... with video

I've been Googling around a bit for an answer and haven't found a definitive one either way: is it possible to play a video using an HTML5 canvas, and also allow the user to draw on this video? The use case, for some context, is to play a video on infinite loop so the user can draw multiple boxes over specific areas to indicate regions of interest.
As a bonus (:P), if I can figure out how to do this on its own, any hints as to how this could be done within Drupal? I'm already looking at the Canvas Field module, but if you have any hints on this point too (though the first one is the priority), that'd be awesome!

You can draw html5 video elements onto a canvas. The drawImage method accepts a video element as its first parameter, just like an image element. This will take the current "frame" of the video element and render it onto the canvas. To get fluid playback of the video, you will need to draw the video to the canvas repeatedly.
You can then draw on the canvas normally, making sure you redraw everything after each update of the video frame.
Here is a demo of video on canvas, and here is an in-depth look at video and the canvas.
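To make that concrete, here is a minimal sketch of such a loop; the element IDs and the box-drawing behavior are assumptions for illustration, not from a specific demo:

// Assumed markup: a <video> and a same-sized <canvas> with these IDs
const video = document.getElementById('source-video');
const canvas = document.getElementById('overlay-canvas');
const ctx = canvas.getContext('2d');
const boxes = []; // user-drawn regions of interest: {x, y, w, h}

canvas.addEventListener('click', (e) => {
    const rect = canvas.getBoundingClientRect();
    // For brevity, add a fixed-size box centered on the click
    boxes.push({ x: e.clientX - rect.left - 25, y: e.clientY - rect.top - 25, w: 50, h: 50 });
});

function render() {
    // Paint the current video frame, then redraw every box on top of it
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    ctx.strokeStyle = 'red';
    boxes.forEach(b => ctx.strokeRect(b.x, b.y, b.w, b.h));
    requestAnimationFrame(render);
}

video.addEventListener('play', () => requestAnimationFrame(render));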

I recently received a request from a client to provide this feature, and it had to be CMS-friendly. The technique involves three big ideas:
a drawing function
repeatedly calling upon the same drawing function
using requestAnimationFrame to paint the next frame
Assuming you already have a video element, you'd take the following steps:
Hide the video element
Create a canvas element whose height/width match the video element, store this somewhere
Get the context of the canvas element with canvas.getContext('2d') and also store that somewhere
Create a drawing function
In that drawing function, you would use context.drawImage(src, x, y), where src is the (possibly edited) current frame of the video
In that drawing function, have it call itself again (via requestAnimationFrame)
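A minimal sketch of those steps might look like this (the selector and timing are my assumptions; error handling omitted):

// A rough sketch; assumes a <video> element is already in the page
const video = document.querySelector('video');

video.addEventListener('loadedmetadata', () => {
    video.style.display = 'none'; // 1. hide the video element

    // 2. create a canvas matching the video, 3. store its context
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    video.parentNode.insertBefore(canvas, video);
    const context = canvas.getContext('2d');

    // 4-6. the drawing function paints the current frame, then calls itself
    function drawFrame() {
        context.drawImage(video, 0, 0, canvas.width, canvas.height);
        requestAnimationFrame(drawFrame);
    }
    drawFrame();
});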
I can give you two examples of this being done (and usable for content management systems)
The first is here: https://jsfiddle.net/yywL381w/19/
A company called SDL makes a tool called Media Manager that hosts videos. What you see is a jQuery plugin that takes its parameters from data-* attributes, makes a request to the Media Manager REST API, creates a video, and adds effects based entirely on those data-* attributes. That plugin could easily be tweaked to work with videos called from other sources. You can look at its repo for more details on usage.
Another example is here: http://codepen.io/paceaux/pen/egLOeR
That is not a jQuery plugin; it's an ES6 class instead. You can create an image/video and apply a cropping effect with this:
let imageModule = new ImageCanvasModule(module);
imageModule.createCanvas();
imageModule.drawOnCanvas();
imageModule.hideOriginal();
You'll observe, in the ImageCanvasModule class, this method:
drawFrame () {
    if (this.isVideo && this.media.paused) return false;

    let x = 0;
    let width = this.media.offsetWidth;
    let y = 0;

    this.imageFrames[this.module.dataset.imageFrame](this.backContext);
    this.backContext.drawImage(this.media, x, y, width, this.canvas.height);
    this.context.drawImage(this.backCanvas, 0, 0);

    if (this.isVideo) {
        window.requestAnimationFrame(() => {
            this.drawFrame();
        });
    }
}
The class has created a second canvas to use for drawing. That canvas isn't visible; it's just there to save the browser some heartache.
The "manipulation" that is content manageable is this.imageFrames[this.module.dataset.imageFrame](this.backContext);
The "frame" is an attribute stored on the image/video (Which could be output by a template in the CMS). This gets the name of the imageFrame, and runs it as a matching function. It also sends in the context (so I can toggle between drawing on the back canvas or main canvas if needed)
Then this.backContext.drawImage(this.media, x, y, width, this.canvas.height); draws the image on the back context.
Finally, this appears on the main canvas with this.context.drawImage(this.backCanvas, 0, 0);, where I take the back canvas and draw it onto the main canvas. So the canvas that's visible has the fewest manipulations possible.
And at the end, because this is a video, we want to draw a new frame. So we have the function call itself:
if (this.isVideo) {
    window.requestAnimationFrame(() => {
        this.drawFrame();
    });
}
This whole setup allows us to use the CMS to output data-* attributes containing the type of frame the user wants drawn around the image. The JavaScript then produces a canvasified version of that image or video. Sample markup might look like:
<video muted loop autoplay data-image-frame="wedgeTop"></video>

Related

webgl: how to clone the canvas as-is

I have a webgl canvas. It is being continuously updated (simulation).
Now I want to freeze the current content of the canvas. I am continuously getting updates for the simulation which I need to keep feeding to the visualizer. So my idea of achieving this is to clone the exact state of the current webgl canvas on to a new one, and hide the current one, which continues to get updated. Then I can remove the frozen one and the live simulation is being shown again.
I haven't been able to achieve this, and examples I've found on the web, like this one: Any way to clone HTML5 canvas element with its content?, only apply to 2D canvases.
Google search didn't help much either.
This one:
how to copy another canvas data on the canvas with getContex('webgl')?
seemed promising but I haven't been able to figure out how to apply it.
Cloning the canvas seems to me to be a heavy and weird solution.
The simplest way to achieve what you want is to prevent the frame buffer from being presented (swapped, then cleared) to the HTML canvas. To do so, you simply have to avoid calling gl.clear, gl.drawArrays, or gl.drawElements during your loop.
For example, suppose you have two functions, one running your simulation, the other your GL draw:
function simulate() {
    // update simulation here
}

function draw() {
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // do drawing stuff here
    gl.drawArrays(gl.TRIANGLES, 0, 12345);
    // etc...
}
From this point, if you want to "freeze" the canvas content, you simply have to stop calling the "draw" function within your global loop. For example:
function loop() {
    simulate();
    if (!freeze) draw();
    requestAnimationFrame(loop);
}
You may use other methods to achieve the same effect. For example, you can draw your scene to a texture, then draw the texture onto the canvas. This way, you also control when the texture is cleared and redrawn, while it is still rendered to the canvas.
However, implementing the render-to-texture method means heavier modifications to your code: you'll need an additional shader to draw the texture on screen, and some time to play with framebuffer and renderbuffer objects.
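For what it's worth, here is a rough sketch of the second approach, assuming simulate/draw exist as above, gl is an existing WebGL context, and the texture-on-screen shader is already compiled (drawTexturedQuad is a hypothetical helper):

// Render target texture sized to the canvas
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.canvas.width, gl.canvas.height,
              0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// NPOT-safe sampling parameters
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

// Framebuffer with the texture as its color attachment
const framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texture, 0);

function loop() {
    simulate();
    if (!freeze) {
        // render the scene into the texture only while not frozen
        gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
        draw();
    }
    // always present the texture to the canvas with a full-screen quad
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    drawTexturedQuad(texture); // hypothetical helper using the extra shader
    requestAnimationFrame(loop);
}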

Creating video with MediaCapture with differing Preview and Record streams

I'm trying to create a Windows Store application that allows the UI to be captured and overlaid on video coming from a web camera.
I'm using the MediaCapture class to manage the video capture process. I've created an MFT (based on the Grayscale sample) that allows me to accomplish this in a basic manner. This MFT has been added as an effect to the Record Stream of the MediaCapture class, and I'm able to create a video file with the UI overlaid on the camera video easily enough. (Easily is a relative term.)
The problem that I've hit is that the overlay from the MFT is showing up in the preview stream, which is also being displayed on screen. So the UI is being displayed normally, and also in the video stream. This is a bad result, as I don't want the effect applied to the preview stream and don't want the user to see the UI in the video preview, only in the resulting recording.
Is there a way to make the MediaCapture class use the effect only on the record stream, and not the preview stream?
If there is not an easy way to do this, can this be implemented by creating a custom sink? The MediaCapture could record to the custom sink, and the custom sink would add the overlay and save to video?
With some cameras (USB webcams in particular), record/preview/photo all come from the same video stream. So applying an effect to one applies the effect to all. Whether video streams are identical or independent is given by MediaCapture.MediaCaptureSettings.VideoDeviceCharacteristic.
So in your case, using a custom sink seems the way to go. IMFSinkWriter can be used to encode frames after adding the overlay.
For reference, here is a code snippet that adds an effect to preview+record for any type of camera (effectively the opposite of what you are trying to do):
MediaCapture capture = ...;
await capture.AddEffectAsync(MediaStreamType.VideoPreview, "Extensions.MyEffect", null);

// If preview and record are different streams, also add the effect there
if ((capture.MediaCaptureSettings.VideoDeviceCharacteristic != VideoDeviceCharacteristic.AllStreamsIdentical) &&
    (capture.MediaCaptureSettings.VideoDeviceCharacteristic != VideoDeviceCharacteristic.PreviewRecordStreamsIdentical))
{
    await capture.AddEffectAsync(MediaStreamType.VideoRecord, "Extensions.MyEffect", null);
}

How to build a SoundCloud-like player?

With the help of waveformjs.org I am able to draw a waveform for an mp3 file. I can also use wav2json to get JSON data for an audio file hosted on my own server, so I don't have to depend on SoundCloud for that. So far so good. Now, the next challenges for me are:
To fill in the color of the waveform as the audio plays.
On clicking anywhere on the waveform, the audio should start playing from that point.
Has anyone done something similar? I tried to look into the SoundCloud JS SDK but had no success. Any pointers in this direction will be appreciated, thanks.
As to changing the color, my earlier answer here may help:
Soundcloud modify the waveform color
As to moving the position when you click on the waveform, you can do something like this, assuming you already have an x position from the click event, adjusted relative to the canvas:
/// Get a representation of x relative to what the waveform represents:
var pos = (x - waveformX) / waveformWidth;

/// To convert this to time, simply multiply this factor by the duration,
/// which is in seconds:
audio.currentTime = audio.duration * pos;
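For instance, wired to a click event (a sketch; canvas, audio, waveformX and waveformWidth are assumed to come from your existing drawing code):

canvas.addEventListener('click', function(e) {
    var rect = canvas.getBoundingClientRect();
    var x = e.clientX - rect.left; /// click position relative to the canvas
    var pos = (x - waveformX) / waveformWidth;
    audio.currentTime = audio.duration * pos;
});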
Hope this helps!
Update
requestAnimationFrame is optimized not only for performance but also for power consumption, as more and more users are on mobile devices. For this reason, browsers may pause rAF, or reduce its frame rate, when the browser tab is in the background or invisible. This can cause a position-based approach to render the waveform delayed, or not at all, when the tab is switched.
I would recommend always using a time-based approach so that neither FPS nor other behavior will interrupt the drawing of the waveform.
You can force the waveform to be updated to the actual current time, as luckily we can anchor this to the currentTime property of the audio element.
In the rAF loop, simply update the position using a "reversed" version of the formula above:
pos = audio.currentTime / audio.duration * waveformWidth + waveformX
When the tab becomes visible again this approach will make sure the waveform continues based on the actual time position of the audio.
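Put together, a time-based loop could look like this sketch, where drawWaveformProgress is a hypothetical stand-in for whatever repaints the played portion of your waveform:

function update() {
    /// derive the pixel position from the audio clock, not the frame count
    var pos = audio.currentTime / audio.duration * waveformWidth + waveformX;
    drawWaveformProgress(pos);
    if (!audio.paused) requestAnimationFrame(update);
}
audio.addEventListener('play', update);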

Changing size of an embedded flash swf in HTML

I have an HTML page with a background image.
I now want to add a flash (swf) to the corner of the page (a paper-like fold that can be peeled). The thing is that the flash is a small piece of UI (the fold) that later opens to a larger view that covers more of the HTML (two states that switch when clicking). I'm using swfobject.js and embedSWF:
swfobject.embedSWF("fold.swf", "pageFold", "auto", "400px", "9.0.0");
And my HTML is:
<div id="pageFold"></div>
I did several different tries with the width and height parameters, including having them fixed, but my problem is that the swf keeps covering the HTML's background image, even when the swf is in its smaller state (the folded corner).
Ideally, I would like to have one of the two:
Have a transparent background for the swf, so that I'll always see the HTML background image.
Dynamically change the size of the embedded swf: when it is in the smaller state (a folded corner), keep the embedded object small. When the swf is clicked and the flash UI is therefore enlarged, give the embedded object some more space in the HTML.
Is any of these possible?
Any other options or ideas?
Thanks.
swfobject.embedSWF(swfUrlStr, replaceElemIdStr, widthStr, heightStr, swfVersionStr, xiSwfUrlStr, flashvarsObj, parObj, attObj, callbackFn)
You need to set parObj and attObj so that they include wmode=transparent (likely { wmode: "transparent" }).
You may also need to use stage.scaleMode = StageScaleMode.NO_SCALE in your swf.
Here, callbackFn is a JavaScript function that receives an event object as a parameter:
function callbackFn(e) { ... }
Properties of this event object are:
success, a Boolean indicating whether the creation of the Flash plug-in <object> DOM element was successful or not
id, a String indicating the ID used in swfobject.registerObject
ref, the HTML <object> element reference (returns undefined when success=false)
Use the id parameter in the callback function to manipulate your flash object: move, resize, etc.
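Putting the pieces together, a sketch might look like this (the dimensions and resize values are placeholders, not from the original answer):

var flashvars = {};
var params = { wmode: "transparent" }; // keeps the HTML background visible
var attributes = {};

function callbackFn(e) {
    if (e.success) {
        // e.ref is the <object> element; resize it when the fold expands
        e.ref.width = 400;
        e.ref.height = 400;
    }
}

swfobject.embedSWF("fold.swf", "pageFold", "100", "100", "9.0.0",
                   false, flashvars, params, attributes, callbackFn);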

Getting a reference to image loaded inside an HTML 5 Canvas

I am loading an image into an HTML5 Canvas using the drawImage method. How do I get a reference to that image later (maybe on some mouse-click event) so that I can apply a transformation to the image (like rotation)?
Save a reference to the Image object you used to paint to the canvas.
Then,
Clear the canvas (clearRect)
Make the transformations using the canvas context
Draw the Image again
Go to 1 when you need to refresh the canvas
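A minimal sketch of that redraw cycle, with the canvas, context, and image source as assumptions:

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
var img = new Image();
img.src = 'photo.png'; // hypothetical image
img.onload = function() { ctx.drawImage(img, 0, 0); };

canvas.addEventListener('click', function() {
    ctx.clearRect(0, 0, canvas.width, canvas.height); // 1. clear
    ctx.save();
    ctx.translate(canvas.width / 2, canvas.height / 2); // 2. transform
    ctx.rotate(Math.PI / 4); // rotate 45 degrees around the center
    ctx.drawImage(img, -img.width / 2, -img.height / 2); // 3. redraw
    ctx.restore();
});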
You can't. Once it's drawn on the canvas, it's just pixels; no reference to the original source is retained by the canvas. If you want to maintain state information, you have to do that yourself. Alternatively, use SVG, which retains an internal DOM.