I am receiving base64-encoded bitmap data through sockets. I decode this data using atob and now have the bitmap data ready to be drawn on an HTML5 canvas.
I was reading this post
Data URI leak in Safari (was: Memory Leak with HTML5 canvas)
and although I am able to draw my bitmap data on the canvas using context.drawImage(..), I am looking for a different approach, due to memory leaks when calling drawImage too many times.
The same post refers to "creating a pixel data array and writing it to the canvas with putImageData", but there is no code to support this.
How can I create a pixel data array from my bitmap data?
Basically, I want to draw my bitmap data on the canvas using putImageData.
Thank you.
You may find this useful: Link.
If this is not what you were looking for, elaborate a little more on your query and I will try to help.
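For the putImageData approach, the decoded bytes need to end up in a pixel data array (a Uint8ClampedArray of RGBA values). A minimal sketch, assuming the socket delivers raw RGBA pixel data; if the bytes are actually a complete .bmp file, you would first have to parse its header and rearrange its bottom-up, 4-byte-padded rows:

```javascript
// Convert the binary string produced by atob() into an RGBA pixel array.
// Assumes the data is already raw RGBA, 4 bytes per pixel.
function binaryStringToPixels(binaryString) {
  const pixels = new Uint8ClampedArray(binaryString.length);
  for (let i = 0; i < binaryString.length; i++) {
    pixels[i] = binaryString.charCodeAt(i);
  }
  return pixels;
}

// Write the pixel array to a canvas with putImageData.
// `canvas`, `width` and `height` are assumed to come from your own code.
function drawPixels(canvas, pixels, width, height) {
  const ctx = canvas.getContext('2d');
  const imageData = ctx.createImageData(width, height);
  imageData.data.set(pixels);
  ctx.putImageData(imageData, 0, 0);
}

// Usage (browser only):
// const pixels = binaryStringToPixels(atob(base64FromSocket));
// drawPixels(document.getElementById('myCanvas'), pixels, 320, 240);
```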
Related
I'm writing a screen capture application that uses the UWP Screen Capture API.
As a result, I get a callback each frame with an ID3D11Texture2D containing the target screen or application's image, which I want to use as input to Media Foundation's SinkWriter to create an H.264 file in an MP4 container.
There are two issues I am facing:
The texture is not CPU readable and attempts to map it fail
The texture has padding (image stride > pixel width * pixel format size), I assume
To solve them, my approach has been to:
Use ID3D11DeviceContext::CopyResource to copy the original texture into a new one created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ flags
Since that texture also has padding, create an IMFMediaBuffer wrapping it with MFCreateDXGISurfaceBuffer, cast it to IMF2DBuffer, and use IMF2DBuffer::ContiguousCopyTo to copy into another IMFMediaBuffer that I create with MFCreateMemoryBuffer
So I'm basically copying each and every frame twice, once on the GPU and once on the CPU, which works but seems very inefficient.
What is a better way of doing this? Is MediaFoundation smart enough to deal with input frames that have padding?
The inefficiency comes from your attempt to Map the texture rather than use it directly as video encoder input. The MF H.264 encoder, which in most cases is a hardware encoder, can take video-memory-backed textures as direct input, and this is what you want to do (set up the encoder accordingly; see the D3D/DXGI device managers).
Padding does not apply to texture frames. Where padding does occur, in frames carrying traditional system-memory data, Media Foundation primitives are normally capable of processing it: see Image Stride, MF_MT_DEFAULT_STRIDE, and the other Video Format Attributes.
I am creating an app that captures the web-cam at a certain point (when an event is triggered), i.e. it takes a snapshot of the camera and encodes that snapshot to base64. But in the examples online, they first draw the snapshot to a canvas and then convert the canvas to base64. Is there a way to skip the "drawing to canvas" part?
No, a canvas is the only way to read pixels in a browser.
This is why the video stream must be drawn to a canvas first.
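The canvas does not have to be visible, though: it can be created off-DOM and thrown away immediately. A sketch of the usual pattern, assuming a video element that is already playing the getUserMedia stream (the element id is made up):

```javascript
// Grab the current video frame and return it as a base64 data URI.
// The canvas is created off-DOM, so nothing visible is added to the page;
// the "draw to canvas" step still happens, it just stays invisible.
function snapshotToBase64(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return canvas.toDataURL('image/png'); // "data:image/png;base64,..."
}

// Usage, e.g. when your event fires (browser only):
// const base64 = snapshotToBase64(document.getElementById('cam'));
```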
I've got a large, detailed, interactive vector object with dynamic text that will frequently translate horizontally from off the screen onto the screen when the user needs to look at it, then back off the screen when the user is done. If I set myVector.cacheAsBitmap = true before translating it and myVector.cacheAsBitmap = false after translating it, what happens to all of those bitmaps that are generated each time? Do I have to dispose of them myself?
From the Adobe help on bitmap caching:
Turning on bitmap caching for an animated object that contains complex
vector graphics (such as text or gradients) improves performance.
However, if bitmap caching is enabled on a display object such as a
movie clip that has its timeline playing, you get the opposite result.
On each frame, the runtime must update the cached bitmap and then
redraw it onscreen, which requires many CPU cycles. The bitmap caching
feature is an advantage only when the cached bitmap can be generated
once and then used without the need to update it.
If you turn on bitmap caching for a Sprite object, the object can be
moved without causing the runtime to regenerate the cached bitmap.
Changing the x and y properties of the object does not cause
regeneration. However, any attempt to rotate it, scale it, or change
its alpha value causes the runtime to regenerate the cached bitmap,
and as a result, hurts performance.
Conclusion
If you make a simple translational movement along the x or y axis, your bitmap is created only once.
Do I have to dispose of them myself?
It seems that you can't touch the bitmap cache; it is used only internally by the Flash Player.
I've built a photo booth app for an installation and I need to take a screenshot of the bitmap data with its frame... I foolishly nested my objects all wrong way too early in the game, so taking a picture of just the display object is moot at this juncture, and I don't have time to re-organize everything.
I know the encoder can output stills of the stage, but can that be limited to a defined set of coordinates? I have an 800x600 region that needs to be output, ignoring the rest of the stage.
I am playing with other options as well, but if there is anything that seems obvious, I would greatly appreciate it!
You can get the whole stage as BitmapData and then use the copyPixels method to copy the region you need.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#copyPixels%28%29
Or you can use the draw method of the BitmapData class to draw a display object into that BitmapData.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#draw%28%29
I'm writing a XUL application that uses an HTML canvas to display bitmap images.
I'm generating ImageData objects and importing them into a canvas using the putImageData function:
for (var pageIndex = 0; pageIndex < 100; pageIndex++) {
    this.img = imageDatas[pageIndex];

    /* Create the canvas element */
    let imgCanvasTmp = document.createElementNS("http://www.w3.org/1999/xhtml", 'html:canvas');
    imgCanvasTmp.setAttribute('width', this.img.width);
    imgCanvasTmp.setAttribute('height', this.img.height);

    /* Import the image into the canvas */
    imgCanvasTmp.getContext('2d').putImageData(this.img, 0, 0);

    /* Use the canvas in another part of the program (commented out for testing) */
    // this.displayCanvas(imgCanvasTmp, pageIndex);
}
The images are imported correctly, but there seems to be a memory leak due to the putImageData function.
When exiting the for loop, I would expect the memory allocated for the canvases to be freed, but by running the code without the putImageData call, I noticed that my program ends up using 100 MB less memory (my images are quite big).
I came to the conclusion that the putImageData function prevents the garbage collector from freeing the allocated memory.
Do you have any idea how I could force the garbage collector to free the memory? Is there any way to empty the Canvas?
I already tried deleting the canvas with the delete operator and calling the clearRect function, but neither did anything.
I also tried reusing the same canvas to display the image at each iteration, but the amount of memory used did not change, as if the images were imported without deleting the existing ones...
You could try clearing the canvas after the loop. Looking at your code, imgCanvasTmp is still reachable and therefore can't be garbage collected. Note that you might need to let the browser idle for a few minutes for the garbage collector to kick in. You may also want to clear imageDatas.
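A sketch of a workaround often suggested for this kind of leak: reuse one canvas for every page, and shrink it to 1x1 when you are done so its large backing buffer can be reclaimed. Whether this helps in your particular XUL runtime is an assumption worth testing:

```javascript
// Reuse a single canvas instead of creating one per page.
function drawPage(canvas, imageData) {
  canvas.width = imageData.width;   // resizing a canvas also clears it
  canvas.height = imageData.height;
  canvas.getContext('2d').putImageData(imageData, 0, 0);
}

// When the canvas is no longer needed, shrink it so its backing
// buffer becomes tiny, then drop every reference you hold to it.
function releaseCanvas(canvas) {
  canvas.width = 1;
  canvas.height = 1;
}
```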