HTML5 canvas always returns 8 bits of color when using getImageData()

I am using an HTML5 canvas to render an image, do some basic editing of the image, then trying to use the getImageData() function to read through the pixels and do some work. I have noticed though, no matter what bit depth the source image is (8 bit, 16 bit, 24 bit), the getImageData() method only ever returns 8-bit values (256 levels). This is not desirable. I would like the getImageData() method to spit out as many colors as it received.
I have read through the documentation and the canvas should be able to handle any bit depth you throw at it (figuratively), but I can't see anywhere to set the bit depth higher.

Canvas will always return 24-bit data + an 8-bit alpha channel (RGBA). Each component value will of course have 8 bits, or 256 values. This is per specification. It will never return 8-bit indexed image data however, so if you somehow run into 8-bit (indexed) image data then you are probably reading the data wrong or from the wrong object/array.
From the specification:
imagedata . data
Returns the one-dimensional array containing the data in RGBA order,
as integers in the range 0 to 255.
And just to cover the opposite aspect: if you draw an 8-bit indexed palette image, such as a PNG-8 or a GIF using 2 - 256 colors, its indexed palette will always be converted to an RGBA buffer (it's actually converted to RGBA at load time by the browser, so this is not something canvas does itself).
To read data from canvas you have two levels (or three for more advanced use), the image data object which contains various information including a reference to the actual pixel array view:
var imageData = context.getImageData(x, y, w, h);
From this object we obtain the data view for the pixels which is by default a Uint8ClampedArray view:
var pixelData = imageData.data;
For more advanced usage, the raw byte buffer (needed if you want to create other views, e.g. a Uint32Array) can be obtained from:
var rawBytes = pixelData.buffer;
var buffer32 = new Uint32Array(rawBytes);
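With the 32-bit view each pixel can be read or written as a single integer. Note that the byte order inside that integer depends on the platform's endianness; on the little-endian systems virtually all browsers run on, the channels pack as 0xAABBGGRR. A small sketch (the variable names are just illustrative):
// read the first pixel as one 32-bit value (little-endian: 0xAABBGGRR)
var pixel32 = buffer32[0];
var r = pixel32 & 0xff;
var g = (pixel32 >> 8) & 0xff;
var b = (pixel32 >> 16) & 0xff;
var a = (pixel32 >>> 24);
// write an opaque red pixel back in a single operation
buffer32[0] = (255 << 24) | (0 << 16) | (0 << 8) | 255;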
But let's stick to the default 8-bit clamped view. To read from it you need to know that the pixels are always packed as RGBA, four bytes (or one 32-bit value) per pixel. So we can get a single pixel by doing:
var r = pixelData[0];
var g = pixelData[1];
var b = pixelData[2];
var a = pixelData[3];
The next pixel will start at index 4 and so on.
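Putting that together, a typical pattern is to walk the array four bytes at a time. For example, a small sketch that inverts the image could look like this:
// walk every pixel and invert the RGB channels (alpha left untouched)
for (var i = 0; i < pixelData.length; i += 4) {
    pixelData[i]     = 255 - pixelData[i];     // red
    pixelData[i + 1] = 255 - pixelData[i + 1]; // green
    pixelData[i + 2] = 255 - pixelData[i + 2]; // blue
}
// write the modified pixels back to the canvas
context.putImageData(imageData, 0, 0);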
If you for some reason need to reduce the palette to an indexed palette, you would have to provide the algorithm for this yourself. There are many out there, from simple and bad to more complex and accurate ones. But this is not something you will be able to do out of the box with canvas. Some pointers can be found in this answer, or you can use a library such as this which will create an (animated) GIF from canvas.
Also be aware that if an image drawn into the canvas doesn't fulfill cross-origin requirements (CORS), the canvas will be "tainted" (security-wise) and getImageData() will not give you the actual pixels (browsers throw a security error, or in some older implementations return an all-zero array).
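If the image host sends the appropriate CORS headers, you can avoid tainting the canvas by requesting the image with the crossOrigin attribute before drawing it. A small sketch (the URL is only a placeholder):
var img = new Image();
img.crossOrigin = "anonymous"; // ask for CORS-enabled access
img.onload = function() {
    context.drawImage(img, 0, 0);
    // the canvas stays origin-clean, so this call is allowed
    var imageData = context.getImageData(0, 0, img.width, img.height);
};
img.src = "https://example.com/image.png"; // placeholder URL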

The data property of the ImageData object (returned by getImageData()) gives you an array in which each entry is a single colour channel, in the sequence red, green, blue, alpha, rather than the actual colour. e.g.
red=imgData.data[0];
green=imgData.data[1];
blue=imgData.data[2];
alpha=imgData.data[3];
// each channel is zero-padded to two hex digits
colour = '#' + (red < 16 ? '0' : '') + red.toString(16) +
         (green < 16 ? '0' : '') + green.toString(16) +
         (blue < 16 ? '0' : '') + blue.toString(16);
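An equivalent and somewhat less error-prone way to build the hex string is to zero-pad each channel explicitly, e.g. with padStart (same variable names as above):
colour = '#' +
    red.toString(16).padStart(2, '0') +
    green.toString(16).padStart(2, '0') +
    blue.toString(16).padStart(2, '0');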

Related

Is there a way to have smooth/subpixel motion without turning on smoothing on graphics?

I'm creating this 2D, pixel art game. When the camera follows the player (it uses easing), on the final approach, the position gets several subpixel adjustments.
If I have smoothing ON (on my graphic assets), the graphics look good (sharp, since it's pixel art) but the subpixel motion is jerky/jumpy.
If I have smoothing OFF, the subpixel motion is smooth, but the pixel art graphics look blurry.
I'm using Flash player v21. I've tried this with Starling and with Flash's display list.
You have a pixelated object that is moving in increments of less than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, factors of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!)
Concept
Create a driver that is controlled by the easing, using floating point numbers.
Allow this driver to then control where the actual display object is rendered. We can use a constraint to only allow the display object to render at your chosen resolution.
Code Example
// you'll put these lines or equivalent in the correct spots for your particular needs.
// SCALE_UP will be your resolution control. If your pixels are 4 pixels wide, use 4.
const SCALE_UP: int = 4;
var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver
d._drives = c; // or the thing the driver drives via the linked object.
// you don't have to do this.
then when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void {
    // snap the driver's floating point position to a multiple of SCALE_UP
    c.x = Math.ceil(d.x - Math.ceil(d.x) % SCALE_UP);
    c.y = Math.ceil(d.y - Math.ceil(d.y) % SCALE_UP);
}
Now this will make your character move around 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance, 102-102%4 = 100. 103-103%4 = 100. 104-104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 25 times with a remainder of 2, so 102%4 = 2. Then 102 - 2 = 100.
In your case, since the "camera" is following the player (i.e. the background is moving, right?) then you really need to apply drivers to everything in the background instead, but it is basically the same idea.
Hope this helps.
Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target, but you should also notice it during the rest of the animation.
Depending on the easing "engine" that you're using, you should be able to set a "round values" flag, so all the coordinates set will be integer values and not fractional.
If that's not possible, find a way in your display objects to round the x and y values every time they change.

Canvas underlying bitmap

How many underlying bitmaps does the Canvas element have? I think it must have at least two: one as a buffer, one for screen projection. Is the count browser specific or is it standardized (and if so, where)?
"How many bitmaps make up the Canvas element?"
Zero!
The canvas element maintains an array containing the red, green, blue & alpha values of each pixel it will display on the screen. In modern browsers that array is a typed array, a Uint8ClampedArray. You can think of this array as a kind of bitmap, but strictly speaking it is not one.
So the closest answer to your question: canvas has one or more RGBA pixel arrays that act as a "backing store" or "buffer".
When the browser requests the canvas to redraw, this array (or an alpha pre-multiplied version of it) is blitted to the display by the GPU (or, if no GPU exists, by the CPU).
You can fetch this RGBA array using:
// Fetch the imageData object that contains the pixel array
var imgData = context.getImageData(0,0,canvas.width,canvas.height);
// Fetch the pixel array itself
var pixelArray = imgData.data;
Then you can manipulate each pixel's RGBA values and push your manipulated data back to the canvas using context.putImageData(imgData, 0, 0) (note that putImageData takes the ImageData object, not the bare pixel array). Executing putImageData will automatically cause the browser to blit the modified array onto the display during the browser's next screen refresh.
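For example, a complete round trip through the pixel array might look like this (a small sketch that converts the canvas to grayscale):
var imgData = context.getImageData(0, 0, canvas.width, canvas.height);
var pixels = imgData.data;
for (var i = 0; i < pixels.length; i += 4) {
    // simple average of R, G and B; alpha (index i + 3) is left untouched
    var gray = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;
}
context.putImageData(imgData, 0, 0); // blitted on the next screen refresh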
You can refer to the WHATWG for official standards & specifications relating to the canvas element: https://developers.whatwg.org/the-canvas-element.html#the-canvas-element.
However, each browser may implement the standards & specifications as they see fit and some browsers take longer to implement the standards and specifications. Some browsers are now open-source so you can look at each browser's implementing code, if needed.

Error #2077: This filter operation cannot be performed with the specified input parameters

Error #2077: This filter operation cannot be performed with the specified input parameters.
at flash.display::BitmapData/applyFilter()
I got this error message trying to apply a BitmapFilter (specifically an inner DropShadowFilter) to a BitmapData via .applyFilter
I've never seen this message before and Googling did not immediately answer the question, and I saw someone confounded as to why it happened with JPEG but not PNG images. So hopefully this question will help someone else. I'll include my simple solution below.
Reading the BitmapData.applyFilter documentation, it's fairly obvious what the problem is. I tried to apply a DropShadowFilter to a BitmapData without transparency (aka, no alpha channel, only 24 bits per pixel.) The docs state which filters require transparency (replicated here for convenience):
Each type of filter has certain requirements, as follows:
BlurFilter — This filter can use source and destination images that are either opaque or transparent. If the formats of the images do not match, the copy of the source image that is made during the filtering matches the format of the destination image.
BevelFilter, DropShadowFilter, GlowFilter, ChromeFilter — The destination image of these filters must be a transparent image. Calling DropShadowFilter or GlowFilter creates an image that contains the alpha channel data of the drop shadow or glow. It does not create the drop shadow onto the destination image. If you use any of these filters with an opaque destination image, an exception is thrown.
ConvolutionFilter — This filter can use source and destination images that are either opaque or transparent.
ColorMatrixFilter — This filter can use source and destination images that are either opaque or transparent.
DisplacementMapFilter — This filter can use source and destination images that are either opaque or transparent, but the source and destination image formats must be the same.
Creating a BitmapData with transparency is easy - it's the 3rd parameter to the constructor:
// args are: width, height, is_transparent, default_color
var bd:BitmapData = new BitmapData(1024, 768, true, 0xff000000);
Note that when you create a transparent BitmapData, you must specify a 32-bit integer for the default color (the 4th parameter). If you merely specify 0xffffff (24-bit white), you'll get a blank image because the alpha value (the highest 8 bits) is 0.
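Once the BitmapData is transparent, the filter call itself goes through. A minimal sketch of the applyFilter() usage (assuming source is the BitmapData you want the inner shadow applied to):
import flash.display.BitmapData;
import flash.filters.DropShadowFilter;
import flash.geom.Point;

var shadow:DropShadowFilter = new DropShadowFilter();
shadow.inner = true; // the inner drop shadow from the question

// destination must be transparent (3rd constructor argument = true)
var dest:BitmapData = new BitmapData(source.width, source.height, true, 0x00000000);
dest.applyFilter(source, source.rect, new Point(0, 0), shadow);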

Windows Store App - SwapChainPanel DrawLine Performance

I am developing a Windows Store App using XAML / C#. The app also has a Windows Runtime Component, which is used for showing a chart output using DirectX.
I am using the SwapChainPanel approach for drawing the lines (x-axis, y-axis and waveform).
I chose this approach from the MSDN sample below (refer to scenario 3 - D2DPanel):
http://code.msdn.microsoft.com/windowsapps/XAML-SwapChainPanel-00cb688b
Here is my question:
My waveform contains a huge amount of data (ranging from 1,000 to 20,000 points). I am calling DrawLine continuously for all these points during each Render call.
The control also provides panning and zooming but keeps the StrokeWidth constant irrespective of zoom level, hence the visible area (render target) might be much less than the lines I am drawing.
Does calling DrawLine for the area which are going to be off-screen cause performance issues?
I tried PathGeometry & GeometryRealization but I was not able to control the StrokeWidth at various zoom levels.
My Render method typically resembles the snippet below. The lineThickness is adjusted so it stays the same irrespective of zoom level.
m_d2dContext->SetTransform(m_worldMatrix);
float lineThickness = 2.0f / m_zoom;
for (unsigned int i = 0; i < points->Size; i += 2)
{
    double wavex1 = points->GetAt(i);
    double wavey1 = points->GetAt(i + 1);
    if (i != 0)
    {
        m_d2dContext->DrawLine(Point2F(prevX, prevY), Point2F(wavex1, wavey1), brush, lineThickness);
    }
    prevX = wavex1;
    prevY = wavey1;
}
I'm kind of new to DirectX, but not to C++. Any thoughts?
Short answer: It probably will. It's good practice to push a clip before drawing. For instance, in your case, you'd do a call to ID2D1DeviceContext::PushAxisAlignedClip with the bounds of your drawing surface. That'll ensure no drawing calls attempt to draw outside the surface's bounds.
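For reference, a minimal sketch of that pattern (panelWidth/panelHeight are placeholders for your SwapChainPanel's size in DIPs; m_d2dContext is the same device context used in the question):
// Clip all subsequent drawing to the visible surface.
m_d2dContext->PushAxisAlignedClip(
    D2D1::RectF(0.0f, 0.0f, panelWidth, panelHeight),
    D2D1_ANTIALIAS_MODE_ALIASED);

// ... DrawLine calls for the axes and waveform go here ...

m_d2dContext->PopAxisAlignedClip();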
Long answer: Really, it depends on a handful of factors, including but not limited to what target the device context is drawing to, the display hardware, and the display driver. For instance, if you're drawing to a CPU-backed ID2D1Bitmap, it's probably fair to assume that there won't be much of a difference.
However, if you're directly drawing to some hardware-backed surface (a GPU bitmap, or a bitmap created from an IDXGISurface), it can get a little hairy. For example, consider this comment from an excellently documented MSDN sample. Here, the code is about to Clear an ID2D1Bitmap created from an IDXGISurface:
// The Clear call must follow and not precede the PushAxisAlignedClip call.
// Placing the Clear call before the clip is set violates the contract of the
// virtual surface image source in that the application draws outside the
// designated portion of the surface the image source hands over to it. This
// violation won't actually cause the content to spill outside the designated
// area because the image source will safeguard it. But this extra protection
// has a runtime cost associated with it, and in some drivers this cost can be
// very expensive. So the best performance strategy here is to never create a
// situation where this protection is required. Not drawing outside the appropriate
// clip does that the right way.

Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

To clarify things, what I am trying to do is to get the OpenGL coordinates and manipulate them in my MFC code, not to get an OpenGL object. I'm using the MFC code to control the position of the objects in OpenGL.
Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work...
I'm developing an MFC project with a static picture control as the canvas for an OpenGL class that draws the graphics for my game.
On mouse down, I need to retrieve a shape's coordinates from the OpenGL class.
I'm looking for a way to convert the OpenGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried to do it both ways but neither is working).
GLdouble modelMatrix[16];
glGetDoublev(GL_MODELVIEW_MATRIX,modelMatrix);
GLdouble projMatrix[16];
glGetDoublev(GL_PROJECTION_MATRIX,projMatrix);
int viewport[4];
glGetIntegerv(GL_VIEWPORT,viewport);
POINT mouse; // Stores The X And Y Coords For The Current Mouse Position
GetCursorPos(&mouse); // Gets The Current Cursor Coordinates (Mouse Coordinates)
ScreenToClient(hWnd, &mouse);
GLdouble winX, winY, winZ; // Holds Our X, Y and Z Coordinates
winX = (GLdouble)mouse.x; // Holds The Mouse X Coordinate
winY = (GLdouble)mouse.y; // Holds The Mouse Y Coordinate
winY = (float)viewport[3] - winY;
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
GLdouble posX=s1->getPosX(), posY=s1->getPosY(), posZ=s1->getPosZ(); // Hold The Final Values
gluUnProject( winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ);
gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ);
This is just part of the code I've tried. Of course I don't use gluProject and gluUnProject together; I just have them both here to show what I've attempted... and I know there is a lot of junk in there, it's from some of my tries...
P.S. I've tried many, many more combinations and examples from the web and nothing seems to work in my case...
Can anyone show me what the right way to do the transformation is?
Thanks
It looks like you're trying to retrieve the object (or objects) that is/are at a particular point. If this is the case, gluProject and/or gluUnProject isn't really a very suitable tool for the task. OpenGL has a selection mode intended specifically for this kind of task.
In typical use, you specify a small square (e.g., 5x5 pixels) around the mouse click spot with gluPickMatrix, set selection mode with glRenderMode, set a buffer with glSelectBuffer, and then draw your scene. The drawing doesn't go to the screen, but fills the buffer you specified with records of what was drawn within the specified area.
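A rough sketch of that flow (legacy fixed-function OpenGL; drawScene() and the mouse/viewport variables are placeholders standing in for your own code):
GLuint selectBuf[512];
glSelectBuffer(512, selectBuf);                 // receives the hit records
glRenderMode(GL_SELECT);                        // switch to selection mode

glInitNames();                                  // name stack: one name per shape
glPushName(0);

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// 5x5 pixel pick region around the mouse (GL's y axis is flipped)
gluPickMatrix((GLdouble)mouse.x, (GLdouble)(viewport[3] - mouse.y), 5.0, 5.0, viewport);
// ...re-apply your normal projection here (gluPerspective/glOrtho)...

drawScene();                                    // call glLoadName(shapeId) before each shape

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

GLint hits = glRenderMode(GL_RENDER);           // back to normal; returns the hit count
// each record in selectBuf holds: name count, min/max depth, then the name(s)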