I'm building an iOS AIR app using AS3/Flash Builder.
The app grabs the camera stream into a Video object and then draws that to a BitmapData. The camera stream is 1280x720 px and the container is 2208x1242 px (iPhone 7+), so I need to scale the footage. I also need to rotate it around the center point:
mat.identity();
// move the pivot to the video's center so the rotation happens around it
mat.translate(-video.width/2, -video.height/2);
// rotate a quarter turn
mat.rotate(Math.PI/2);
// move back into view; width and height swap because of the rotation
mat.translate(video.height/2, video.width/2);
// scale the 1280x720 frame up to the 2208x1242 container (axes swapped by the rotation)
mat.scale(deviceSize.height/cam.width, deviceSize.width/cam.height);
I have timed the drawing operation with the matrix:
videoBitmapData.draw(video, mat); // 9-11 ms
and without it:
videoBitmapData.draw(video); // 2-3 ms
Clearly the transformation is slowing me down.
Is there a faster way to do this? Maybe drawing first, then applying the matrix somehow?
Pre-rotating something?
Leveraging native scaling/rotation?
Can I skip using the video object somehow? Accessing the camera data in a more raw form?
I see no difference between the GPU and CPU render modes.
Thanks!
Edit:
I managed to cut the time roughly in half by doing this:
1. Put the video object inside a Sprite
2. Transform the Sprite.
3. Draw the Sprite to the BitmapData.
(5-6ms)
Surprisingly, this was more consistent than:
1. Put the video object inside a Sprite
2. Transform the video object inside the Sprite
3. Draw the sprite to the BitmapData
(5-12ms)
Bonus:
I only needed part of the pixels from the camera (the full width x 100 px of the height). When I realized I could pass a clipRect to the draw command, I got down to 0-1 ms.
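A minimal sketch of the clipRect approach, reusing mat from above; stripY is a hypothetical value marking where the 100 px band starts, in the destination bitmap's coordinate space:

// only the clipped band of the destination is rasterized (flash.geom.Rectangle)
var clip:Rectangle = new Rectangle(0, stripY, videoBitmapData.width, 100);
// clipRect is the fifth argument of BitmapData.draw()
videoBitmapData.draw(video, mat, null, null, clip); // 0-1 ms, per the timings above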
Thanks to Organis
With the help of waveformjs.org I am able to draw a waveform for an mp3 file. I can also use wav2json to get JSON data for an audio file hosted on my own server, so I don't have to depend on SoundCloud for that. So far so good. Now, the next challenges for me are:
To fill in the waveform's color as the audio playback progresses.
On clicking anywhere on the waveform, the audio should start playing from that point.
Has anyone done something similar? I tried to look into the SoundCloud JS SDK but had no success. Any pointers in this direction will be appreciated, thanks.
As to changing the color, my earlier answer here may help:
Soundcloud modify the waveform color
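For the progress fill specifically, one common approach is to repaint the played portion in a highlight color on every update. A sketch only (not necessarily the linked answer's technique), assuming the waveform was pre-rendered on a transparent background into waveformImage:

function drawProgress(pos) {                      // pos = playback position in pixels
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(waveformImage, 0, 0);           // repaint the base waveform
    ctx.globalCompositeOperation = 'source-atop'; // only paint where pixels already exist
    ctx.fillStyle = 'orange';                     // highlight color for the played part
    ctx.fillRect(waveformX, 0, pos - waveformX, canvas.height);
    ctx.globalCompositeOperation = 'source-over'; // restore default compositing
}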
As to moving the position when you click on the waveform, you can do something like this, assuming you already have an x position from the click event, adjusted relative to the canvas:

/// get a representation of x relative to what the waveform represents:
var pos = (x - waveformX) / waveformWidth;

/// to convert this to time, simply multiply this factor by duration,
/// which is in seconds:
audio.currentTime = audio.duration * pos;
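Wired up as an event handler, it might look like this (a sketch, assuming canvas is the waveform's canvas element and audio the audio element):

canvas.addEventListener('click', function(e) {
    var rect = canvas.getBoundingClientRect();
    var x = e.clientX - rect.left;              // x relative to the canvas
    var pos = (x - waveformX) / waveformWidth;  // 0..1 within the waveform
    audio.currentTime = audio.duration * pos;   // jump playback to that point
});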
Hope this helps!
Update
requestAnimationFrame is optimized not only for performance but also for power consumption, as more and more users are on mobile devices. For this reason browsers may pause rAF, or reduce its frame rate, when the tab is in the background or otherwise invisible. This can cause a position-based approach to render the waveform delayed, or not at all, after a tab switch.
I would recommend always using a time-based approach instead, so that neither the frame rate nor other browser behavior will interrupt the drawing of the waveform.
Luckily we can read the currentTime property of the audio element, so you can force the waveform to be updated at the actual current time.
In the rAF loop, simply update the position using a reversed version of the formula above:
pos = audio.currentTime / audio.duration * waveformWidth + waveformX
When the tab becomes visible again this approach will make sure the waveform continues based on the actual time position of the audio.
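Put together, a minimal time-based loop might look like this, reusing the hypothetical drawProgress function sketched earlier:

function update() {
    var pos = audio.currentTime / audio.duration * waveformWidth + waveformX;
    drawProgress(pos);               // repaint at the actual audio position
    requestAnimationFrame(update);   // schedule the next frame
}
requestAnimationFrame(update);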
I am receiving base64-encoded bitmap data through sockets. I decode this data using atob and now have the bitmap data ready to be drawn on an HTML5 canvas.
I was reading this post:
Data URI leak in Safari (was: Memory Leak with HTML5 canvas)
and although I am able to draw my bitmap data on the canvas using context.drawImage(..), I am looking for a different approach due to memory leaks when calling drawImage too many times.
The same post refers to "creating a pixel data array and writing it to the canvas with putImageData", but there is no code to support this.
How can I create a pixel data array from my bitmap data?
Basically I want to draw my bitmap data on the canvas using putImageData, something like the sketch below.
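A sketch of what I mean, assuming the decoded bytes are raw RGBA for a known width and height (if the socket sends a complete image file such as a PNG, the bytes would need decoding first):

var binary = atob(base64);                          // byte string from the socket data
var imageData = ctx.createImageData(width, height); // blank RGBA pixel buffer
var pixels = imageData.data;                        // Uint8ClampedArray in RGBA order
for (var i = 0; i < binary.length; i++) {
    pixels[i] = binary.charCodeAt(i);               // copy each byte into the pixel array
}
ctx.putImageData(imageData, 0, 0);                  // write pixels without an Image object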
Thank you.
Maybe you will find this useful: Link.
If this is not what you were looking for, please elaborate a little more on your query and I will surely try to help.
I've been Googling around for an answer and haven't found a definitive one either way: is it possible to play a video using an HTML5 canvas and also allow the user to draw on this video? The use case, for some context, is to play a video on an infinite loop so the user can draw multiple boxes over specific areas to indicate regions of interest.
As a bonus (:P), if I can figure out how to do this on its own, any hints as to how this could be done within Drupal? I'm already looking at the Canvas Field module, but if you have any hints on this point too (though the first one is the priority), that'd be awesome!
You can draw HTML5 video elements onto a canvas. The drawImage method accepts a video element as its first parameter, just like an image element. It takes the current frame of the video and renders it onto the canvas, so to get fluid playback you need to draw the video to the canvas repeatedly.
You can then draw on the canvas normally, making sure you redraw everything after each update of the video frame.
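A minimal sketch; the element ids and the drawOverlays function are illustrative, with drawOverlays standing in for whatever repaints the user's boxes on top of each frame:

var video = document.getElementById('vid');
var canvas = document.getElementById('cv');
var ctx = canvas.getContext('2d');

function render() {
    if (video.paused || video.ended) return;                 // stop when playback stops
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // current video frame
    drawOverlays(ctx);                                       // redraw the boxes over the frame
    requestAnimationFrame(render);                           // repeat for fluid playback
}
video.addEventListener('play', function() {
    requestAnimationFrame(render);
});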
Here is a demo of video on canvas.
Here is an in-depth look into video and the canvas.
I recently received a request from a client to provide this feature, and it must be CMS-friendly. The technique involves three big ideas:
a drawing function
repeatedly calling upon the same drawing function
using requestAnimationFrame to paint the next frame
Assuming you have a video element already, you'd take the following steps:
Hide the video element
Create a canvas element whose height/width match the video element, and store it somewhere
Get the context of the canvas element with canvas.getContext('2d') and also store that somewhere
Create a drawing function
In that drawing function, use context.drawImage(src, x, y), where src is the edited version of the current frame of the video
In that drawing function, use requestAnimationFrame to have it call itself again
I can give you two examples of this being done (and usable for content management systems)
The first is here: https://jsfiddle.net/yywL381w/19/
A company called SDL makes a tool called Media Manager that hosts videos. What you see is a jQuery plugin that takes its parameters from data-* attributes, makes a request to the Media Manager REST API, creates a video, and adds effects based entirely on those data-* attributes. That plugin could easily be tweaked to work with videos called from other sources. You can look at its repo for more details on usage.
Another example is here: http://codepen.io/paceaux/pen/egLOeR
That is not a jQuery plugin; it's an ES6 class instead. You can create an image/video and apply a cropping effect with this:
let imageModule = new ImageCanvasModule(module);
imageModule.createCanvas();
imageModule.drawOnCanvas();
imageModule.hideOriginal();
You'll observe, in the ImageCanvasModule class, this method:
drawFrame () {
    if (this.isVideo && this.media.paused) return false;

    let x = 0;
    let width = this.media.offsetWidth;
    let y = 0;

    this.imageFrames[this.module.dataset.imageFrame](this.backContext);
    this.backContext.drawImage(this.media, x, y, width, this.canvas.height);
    this.context.drawImage(this.backCanvas, 0, 0);

    if (this.isVideo) {
        window.requestAnimationFrame(() => {
            this.drawFrame();
        });
    }
}
The class has created a second canvas to use for drawing. That canvas isn't visible; it's just there to save the browser some heartache.
The "manipulation" that is content manageable is this.imageFrames[this.module.dataset.imageFrame](this.backContext);
The "frame" is an attribute stored on the image/video (Which could be output by a template in the CMS). This gets the name of the imageFrame, and runs it as a matching function. It also sends in the context (so I can toggle between drawing on the back canvas or main canvas if needed)
then this.backContext.drawImage(this.media, x, y, width, this.canvas.height); draws the image on the back context.
Finally, this appears on the main canvas with this.context.drawImage(this.backCanvas, 0, 0); where I take the backcanvas, and draw it on to the main canvas. So the canvas that's visible has the least amount of manipulations possible.
And at the end, because this is a video, we want to draw a new frame, so we have the function call itself:

if (this.isVideo) {
    window.requestAnimationFrame(() => {
        this.drawFrame();
    });
}
This whole setup allows us to use the CMS to output data-* attributes containing the type of frame the user wants drawn around the image. The JavaScript then produces a canvasified version of that image or video. Sample markup might look like:
<video muted loop autoplay data-image-frame="wedgeTop">
I am loading an image into an HTML5 canvas using the drawImage method. How do I get a reference to that image later (maybe on some mouse-click event) so that I can apply a transformation to the image (like a rotation)?
Save a reference to the Image object you used to paint to the canvas. Then:
1. Clear the canvas (clearRect)
2. Apply the transformation using the canvas context
3. Draw the image again
4. Go to 1 whenever you need to refresh the canvas
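A minimal sketch of that loop, assuming img is the Image originally drawn and ctx the canvas context, rotating a quarter turn per click purely as an illustration:

var angle = 0;
canvas.addEventListener('click', function() {
    angle += Math.PI / 2;                                // a quarter turn per click
    ctx.clearRect(0, 0, canvas.width, canvas.height);    // 1. clear the canvas
    ctx.save();
    ctx.translate(canvas.width / 2, canvas.height / 2);  // 2. transform the context
    ctx.rotate(angle);
    ctx.drawImage(img, -img.width / 2, -img.height / 2); // 3. draw the image again
    ctx.restore();
});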
You can't. Once it's drawn on the canvas, it's just pixels; no reference to the original source is retained by the canvas. If you want to maintain state information, you have to do that yourself. Alternatively, use SVG, which retains an internal DOM.