HTML5 video and canvas CPU optimization

I am making an app that plays HTML5 video with a canvas drawing layered on top of it (640x480 px). The objective is to record the canvas drawing along with the encoded video and produce a single video, which I was able to do using FFMPEG. The problem I am facing is that the HTML5 video alone takes around 50% of my CPU while playing. Since drawing on the canvas also demands CPU, the browser freezes after a certain time, and Chrome's task manager shows that tab continuously above 100% CPU. I have tried to optimize the HTML5 canvas rendering, but nothing helped. Is there a way to play this video with much less CPU usage, or are any other optimizations possible?

There is not very much you can do if the hardware needs to use the CPU to decode and display the video. The keyword is compromise.
A few things can be done, though, to remove additional overhead. These should be considered general tips:
Efficient looping
Make sure to use requestAnimationFrame() to invoke your loop in case you aren't.
setTimeout()/setInterval() are relatively performance-heavy and cannot sync properly to the monitor refresh rate.
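If you are currently driving the draw with a timer, the switch looks roughly like this (drawFrame stands in for your own drawing routine):
// instead of:
// setInterval(drawFrame, 1000 / 60);   // cannot sync to the display refresh

// use:
(function loop() {
    drawFrame();                    // your existing canvas/video drawing code
    requestAnimationFrame(loop);    // fires in step with the monitor refresh
})();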
Reduce update load
Also, if you're not already doing this: the canvas is usually updated at 60 FPS while a video is rarely above 30/29.97 FPS (25 FPS in Europe). This means you can skip every second frame update and still show the video at its optimal rate. Use a toggle to achieve this.
Video at 25 FPS will be re-synced to 30 FPS (monitors typically run at 60 Hz even for European models, which are electronically re-synced internally; this also means the browser needs to deal with dropped/doubled frames etc. internally - nothing we can do about that here).
// draw video at 30 FPS
var toggle = false;

(function loop() {
    toggle = !toggle;
    if (toggle) { /* draw frame here */ }
    requestAnimationFrame(loop);
})();
Avoid scaling of the canvas element
Make sure the canvas size and the CSS size are exactly the same. Or, put simply: don't use CSS to set the size of the canvas at all.
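As an example, a minimal sketch for the 640x480 case from the question (the selector is an assumption; the sizes come from the question):
// size the drawing buffer directly and leave CSS alone, so the browser
// never has to rescale the canvas when compositing
var canvas = document.querySelector("canvas");   // assumed way of getting your canvas
canvas.width = 640;    // drawing-buffer width  = displayed width
canvas.height = 480;   // drawing-buffer height = displayed height
// no CSS width/height rules applied to the canvas element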
Disable alpha channel composition
You can disable alpha composition of the canvas in some browsers to gain a little more speed. Consumer video never comes with an alpha channel.
// disable alpha-composition of the canvas element where supported
var context = canvas.getContext("2d", {alpha: false});
Tweak settings at encoding stage
Make sure to encode the video with a balance between size and decompression load. The more a video is compressed, the more has to be reconstructed during decoding, at a cost. Encode with different encoder settings to find a balance that works for your scenario.
Also consider aspects such as color depth, e.g. 16 vs 24 bit.
The H264 codec is preferable as it has wide support for hardware-accelerated decoding.
Reduce video FPS
If the content of the video allows it, e.g. there is little movement or change, encode it at 15 FPS instead of 30 FPS. If so, also use a modulo instead of a toggle (as shown above) so you can skip three frames and update the canvas only on every fourth:
// draw video at 15 FPS
var modToggle = 0;

(function loop() {
    if (modToggle++ % 4 === 0) { /* draw frame here */ }
    requestAnimationFrame(loop);
})();
Encode video at smaller source size
Encode the video at a slightly smaller size, divisible by 8 (in this case I would even suggest half size, 320x240 - experiment!). Then draw it using the scale parameters of the drawImage() method:
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight, 0, 0, 640, 480);
This helps reduce the amount of data that needs to be loaded and decoded, but it will of course reduce the quality. How it turns out depends, again, on the content.
You can also turn off interpolation by setting imageSmoothingEnabled to false on the context (note: the property needs a prefix in some browsers). In that case you may not want to reduce the size by as much as 50%, but only slightly (something like 600x420 in this case).
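A small sketch of disabling interpolation, including the older prefixed variants (exact prefix support varies by browser and version):
// turn off image smoothing so the up-scaled video keeps hard edges
context.imageSmoothingEnabled = false;
// prefixed fallbacks for older browsers
context.mozImageSmoothingEnabled = false;
context.webkitImageSmoothingEnabled = false;
context.msImageSmoothingEnabled = false;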
Note: even if you "reduce" the frame rate this way, the loop still runs at 60 FPS; but since it does no actual work on the intermediate frames, it still offloads the CPU/GPU, giving you a less tight performance budget overall.
Hope this gives some input.

Related

Optimizing h264 MediaFoundation encoding

I'm writing a screen capture application that uses the UWP Screen Capture API.
As a result, I get a callback each frame with an ID3D11Texture2D containing the target screen or application's image, which I want to use as input to Media Foundation's SinkWriter to create an H.264 stream in an MP4 container file.
There are two issues I am facing:
The texture is not CPU readable and attempts to map it fail
The texture has padding (image stride > pixel width * pixel format size), I assume
To solve them, my approach has been to:
Use ID3D11DeviceContext::CopyResource to copy the original texture into a new one created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ flags
Since that texture also has padding, create an IMFMediaBuffer wrapping it with MFCreateDXGISurfaceBuffer, cast it to IMF2DBuffer, and use IMF2DBuffer::ContiguousCopyTo to copy into another IMFMediaBuffer created with MFCreateMemoryBuffer
So I'm basically copying every frame around twice, once on the GPU and once on the CPU, which works, but seems very inefficient.
What is a better way of doing this? Is MediaFoundation smart enough to deal with input frames that have padding?
The inefficiency comes from your attempt to Map the texture rather than use it directly as video encoder input. The MF H.264 encoder, which in most cases is a hardware encoder, can take video-memory-backed textures as direct input, and that is what you want to do (set the encoder up accordingly - see the D3D/DXGI device managers).
Padding does not apply to texture frames. In the case of padding in frames carrying traditional system-memory data, Media Foundation primitives are normally capable of processing padded data: see Image Stride and MF_MT_DEFAULT_STRIDE, and the other Video Format Attributes.

Is there a way to have smooth/subpixel motion without turning on smoothing on graphics?

I'm creating this 2D, pixel art game. When the camera follows the player (it uses easing), on the final approach, the position gets several subpixel adjustments.
If I have smoothing ON (on my graphic assets), the graphics look good (sharp, as pixel art should be) but the subpixel motion is jerky/jumpy.
If I have smoothing OFF, the subpixel motion is smooth, but the pixel art graphics look blurry.
I'm using Flash player v21. I've tried this with Starling and with Flash's display list.
You have a pixelated object that is moving in increments of less than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, factors of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!)
Concept
Create a driver that is controlled by the easing, using floating point numbers.
Then allow this driver to control where the actual display object is rendered. We can use a constraint so the display object only renders at your chosen resolution.
Code Example
// you'll put these lines or equivalent in the correct spots for your particular needs.
// SCALE_UP will be your resolution control. If your pixels are 4 pixels wide, use 4.
const SCALE_UP: int = 4;
var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver
d._drives = c; // or the thing the driver drives via the linked object.
// you don't have to do this.
Then, when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void {
    // this converts a floating point number into a multiple of SCALE_UP
    c.x = Math.ceil(d.x - Math.ceil(d.x) % SCALE_UP);
    c.y = Math.ceil(d.y - Math.ceil(d.y) % SCALE_UP);
}
Now this will make your character move 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance, 102-102%4 = 100. 103-103%4 = 100. 104-104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 25 times with a remainder of 2. so 102%4 = 2. Then 102 - 2 = 100.
In your case, since the "camera" is following the player (i.e. the background is moving, right?) then you really need to apply drivers to everything in the background instead, but it is basically the same idea.
Hope this helps.
Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target, but you should also notice it during the rest of the animation.
Depending on the easing "engine" you're using, you should be able to set a "round values" flag, so all the coordinates set will be integer values and not fractional.
If that's not possible, find a way in your display objects to round the x and y values every time they change.

How to lock FPS with requestAnimationFrame?

I used the script from Paul Irish
https://gist.github.com/paulirish/1579671
to create an animation loop inside an HTML site.
It works, although it's faster in fullscreen mode than in a browser window.
Also, I observed different speeds depending on the canvas size and on the browser I use.
Question: How can I ensure a stable frame rate using the script?
Code is available here (Beginning WebGL, chapter 1 by Brian Danchilla):
https://github.com/bdanchilla/beginningwebgl/blob/master/01/2D_movement.html
Something like this should work. If the time delta between two frames is shorter than your FPS limit, the update function simply returns and waits for the next frame. But this will only keep updates from happening too quickly; as emackey said, there's always the possibility that the update loop will run more slowly.
var updateId,
    previousDelta = 0,
    fpsLimit = 30;

function update(currentDelta) {
    updateId = requestAnimationFrame(update);

    var delta = currentDelta - previousDelta;

    if (fpsLimit && delta < 1000 / fpsLimit) {
        return;
    }

    /* your code here */

    previousDelta = currentDelta;
}

// kick off the loop
updateId = requestAnimationFrame(update);
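Because the handle is kept in updateId, you can also stop the loop later if you need to (a small usage sketch):
// stop updating entirely, e.g. when the animation is no longer visible
cancelAnimationFrame(updateId);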
To embellish what @emackey said:
The short answer is you can't. You could ask the computer to do an infinite amount of work each frame; it can't promise to do that work in a finite amount of time.
On top of that, each computer has a different amount of power. A cheap integrated GPU has much less power than a high-end graphics card. An Intel i3 is much slower than an i7.
You also mentioned changing the canvas size. Drawing a 300x150 canvas is only 45,000 pixels' worth of work. Drawing a 1920x1080 canvas would be 2,073,600 pixels of work, or 46x more.
The best you can do is the least amount of work possible, and/or remove features on slow hardware, either automatically or by user choice. Most games do this: they offer graphics settings where the user can choose resolution, texture resolution, anti-aliasing levels and all kinds of other things.
That said, you can do your computations so that things in your app move at a consistent speed relative to time. The frame rate might be lower on a slow machine or with a larger canvas, but the distance something moves per second will remain the same.
You can do this by using the time value passed into requestAnimationFrame
function render(time) {
    // time is the time in milliseconds since the page was loaded
    ...do work...
    requestAnimationFrame(render);
}
requestAnimationFrame(render);
For example, here is NON-framerate-independent animation:
function render(time) {
    xPosition = xPosition + velocity;
    ...
    requestAnimationFrame(render);
}
requestAnimationFrame(render);
and here is framerate-independent animation:
var then = 0;
function render(time) {
    var timeInSeconds = time * 0.001;
    var deltaTimeInSeconds = timeInSeconds - then;
    then = timeInSeconds;

    xPosition = xPosition + velocityInUnitsPerSecond * deltaTimeInSeconds;
    ...
    requestAnimationFrame(render);
}
requestAnimationFrame(render);
Note: The time passed into requestAnimationFrame is higher resolution than Date.now()
Here's an article on it with animations
You can't enforce a stable frame rate directly. Your page is not the only app running on the user's platform, and platform capabilities vary widely. requestAnimationFrame runs as fast as it can, not exceeding the display update interval on the target device, but potentially much slower depending on available CPU, GPU, memory, and other limitations.
The standard practice here is to measure the amount of time that has elapsed since the previous animation frame, typically with Date.now(), and each frame advance the animation by that amount of time. To the human eye, this makes the resulting animation run at a consistent speed, even if the frame rate is highly variable.
For example, sites such as Shadertoy and GLSL Sandbox run full-screen GLSL shaders and pass in a uniform called time (or iGlobalTime), which is a float representing the number of seconds elapsed since the shader started. This time value increases at irregular intervals depending on how long each animation frame took to render, but the result is that the float appears to count upwards at a stable 1.0 per second. In this way, shader playback based on this time value can appear consistent.
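A minimal sketch of that elapsed-time pattern (position and speedPerSecond are placeholders for whatever your animation actually updates):
// advance the animation by wall-clock time, not by frame count
var lastTime = Date.now();

function frame() {
    var now = Date.now();
    var deltaSeconds = (now - lastTime) / 1000; // seconds since the previous frame
    lastTime = now;

    // position and speedPerSecond are your own animation variables
    position += speedPerSecond * deltaSeconds;  // same apparent speed at any frame rate
    // ...draw here...

    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);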

I converted scrolling background to a bitmap, which now uses more memory, but scrolls much faster, how/why?

So I'm in the process of making an AS3 game with a scrolling cave background. I have it set up to randomly generate a 30x30 cave (900 tiles). I would generate the cave and then add all of the tiles as children to a "Background" movie clip. I was having some issues with it lagging, so I decided to convert the background to a bitmap. Before I did this, trace(System.totalMemory); output a value of around 20,000,000. Afterwards it's around 28,000,000, yet the lagging/background-scrolling issues I had seem to have stopped. Why would it use more memory, and why would it alleviate my scrolling issues despite this? Here's the important part of the code.
//My cave is 1800 x 1800 pixels
var bitMapData:BitmapData = new BitmapData(1800, 1800);
//Drawing the cave to a bitmapdata
bitMapData.draw(background1);
var bitMap:Bitmap = new Bitmap(bitMapData);
//Removing all of the tiles from the background
while (background1.numChildren > 0) {
    background1.removeChildAt(0);
}
//adding the bitmap to my background
background1.addChild(bitMap);
Any insight is greatly appreciated.
See, drawing a MovieClip always goes through the vector renderer, which involves querying its structure, walking the display list and more; also, if those MCs of yours have an uneven scale (unequal, and not a power of 2), even bitmap rendering is slowed. Having one Bitmap instead of 900 MCs lowers display-list traversal time by a great margin (900 vs 1 - isn't that a decent improvement?). Of course, bitmaps occupy more memory than MCs, but this is the same old memory-vs-performance trade-off that every programmer hits sooner or later. Don't worry about this 8 MB bitmap; just don't make too many bitmaps this big for a mobile platform.

How to build a soundcloud like player?

With the help of waveformjs.org I am able to draw a waveform for an mp3 file. I can also use wav2json to get JSON data for an audio file hosted on my own server, so I don't have to depend on SoundCloud for that. So far so good. Now, the next challenges for me are:
To fill the waveform with color as audio playback progresses.
On clicking anywhere on the waveform, the audio should start playing from that point.
Has anyone done something similar? I tried to look into the SoundCloud JS SDK but had no success. Any pointers in this direction will be appreciated, thanks.
As to changing the color my earlier answer here may help:
Soundcloud modify the waveform color
As to moving the playback position when you click on the waveform, you can do something like this:
Assuming you already have an x position from the click event, adjusted to be relative to the canvas:
/// get a representation of x relative to what the waveform represents:
var pos = (x - waveFormX) / waveformWidth;
/// To convert this to time, simply multiply this factor with duration which
/// is in seconds:
audio.currentTime = audio.duration * pos;
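For completeness, a small sketch of wiring this up to a click handler (canvas, audio, waveFormX and waveformWidth are assumed to be the variables you already have):
canvas.addEventListener("click", function(e) {
    /// convert the click to a coordinate relative to the canvas:
    var rect = canvas.getBoundingClientRect();
    var x = e.clientX - rect.left;

    /// map it onto the waveform and seek:
    var pos = (x - waveFormX) / waveformWidth;
    audio.currentTime = audio.duration * pos;
});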
Hope this helps!
Update
requestAnimationFrame is optimized not only for performance but also for power consumption, as more and more users are on mobile devices. For this reason, browsers may pause rAF, or reduce its frame rate, when the tab is in the background or otherwise invisible. This can cause a position-based approach to render the waveform with a delay, or not at all, when the user switches tabs.
I would recommend always using a time-based approach so that neither FPS nor other browser behavior interrupts the drawing of the waveform.
You can force the waveform to be updated to the actual current time, as luckily we can tie this to the currentTime property of the audio element.
In the rAF loop, simply update the position like this, using a "reversed" version of the formula above:
pos = audio.currentTime / audio.duration * waveformWidth + waveformX
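A minimal sketch of such a loop (drawProgress is a hypothetical function standing in for however you repaint the colored part of the waveform):
(function loop() {
    // derive the fill position from the audio clock rather than from frame counting
    var pos = audio.currentTime / audio.duration * waveformWidth + waveformX;
    drawProgress(pos);   // hypothetical: redraw the filled waveform up to pos
    requestAnimationFrame(loop);
})();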
When the tab becomes visible again, this approach makes sure the waveform continues based on the actual time position of the audio.