Check color encoding in the default framebuffer draw buffer for SRGB

I am in the process of implementing sRGB support in my application.
From what I read here, here, here and here, I should operate in linear RGB (LRGB) space for the whole pipeline, and I don't have to care about any gamma correction since OpenGL will take care of it for me, given that I enable GL_FRAMEBUFFER_SRGB and the image is either GL_SRGB8 or GL_SRGB8_ALPHA8.
So my idea is to make sure I submit inputs in LRGB space and then render my final texture (doing depth peeling) to the default framebuffer, which will take care of converting my colors to sRGB space so that they show up correctly on my monitor.
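For reference, the switch itself is a single enable call; a minimal sketch assuming a JOGL GL3 context named gl3 as in the code below (exactly which JOGL interface exposes the constant may vary between versions):
// Ask GL to convert linear shader output to sRGB when writing to sRGB-capable buffers.
gl3.glEnable(GL3.GL_FRAMEBUFFER_SRGB);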
Ok, now I want to check the default draw buffer of the default framebuffer.
int[] drawBuffer = new int[1];
gl3.glGetIntegerv(GL3.GL_DRAW_BUFFER, drawBuffer, 0);
System.out.println("draw buffer " + (drawBuffer[0] == GL3.GL_BACK ? "BACK " : drawBuffer[0]));
This confirms my draw buffer is GL_BACK. Now I want to check the color encoding:
int[] framebufferAttachmentParameter = new int[1];
gl3.glGetFramebufferAttachmentParameteriv(GL3.GL_DRAW_FRAMEBUFFER, GL3.GL_BACK,
GL3.GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, framebufferAttachmentParameter, 0);
but glGetFramebufferAttachmentParameteriv fails:
glGetError() returned the following error codes after a call to glGetFramebufferAttachmentParameteriv( 0x8CA9, 0x405, 0x8210, <[I>, 0x0): GL_INVALID_ENUM ( 1280 0x500)
Reading here, they say:
If the default framebuffer is bound to target then attachment must be one of GL_FRONT_LEFT, GL_FRONT_RIGHT, GL_BACK_LEFT, or GL_BACK_RIGHT.
But my code says my draw buffer is GL_BACK. Reading here, they say GL_BACK is just an alias that refers to both GL_BACK_LEFT and GL_BACK_RIGHT when doing stereoscopic rendering.
So my questions are:
am I right in assuming that I am writing to GL_BACK_LEFT, given that my draw buffer returns GL_BACK?
how can I check whether stereoscopic rendering is on or off? Of course, since I don't even know how to turn it on, I assume it is off. But is it something I do by just enabling both the GL_BACK/FRONT_LEFT/RIGHT buffers, or something else?
can I turn on sRGB encoding on the default draw buffer of the default framebuffer?

I got an answer on the OpenGL forums; I think it can be useful for anyone visiting this question if I leave it here:
https://www.opengl.org/discussion_boards/showthread.php/185198-Check-color-encoding-in-the-default-framebuffer-draw-buffer-for-SRGB?p=1263059#post1263059
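The gist, matching the spec quote above: query one of the concrete buffers rather than the GL_BACK alias. A minimal corrected sketch, assuming the same JOGL setup as in the question:
int[] colorEncoding = new int[1];
// GL_BACK is not a valid attachment for this query; name the concrete buffer.
gl3.glGetFramebufferAttachmentParameteriv(GL3.GL_DRAW_FRAMEBUFFER, GL3.GL_BACK_LEFT,
        GL3.GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, colorEncoding, 0);
// The returned value is either GL_LINEAR or GL_SRGB.
System.out.println("color encoding " + (colorEncoding[0] == GL3.GL_SRGB ? "SRGB" : "LINEAR"));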

Related

Is there a way to have smooth/subpixel motion without turning on smoothing on graphics?

I'm creating this 2D, pixel art game. When the camera follows the player (it uses easing), on the final approach, the position gets several subpixel adjustments.
If I have smoothing ON (on my graphic assets), the graphics look good (sharp; it's pixel art) but the subpixel motion is jerky/jumpy.
If I have smoothing OFF, the subpixel motion is smooth, but the pixel art graphics look blurry.
I'm using Flash player v21. I've tried this with Starling and with Flash's display list.
You have a pixelated object that is moving in increments of less than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, factors of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!)
Concept
Create a driver that is controlled by the easing, using floating-point numbers.
Then allow this driver to control where the actual display object is rendered. We can use a constraint so the display object only renders at your chosen resolution.
Code Example
// You'll put these lines (or equivalent) in the correct spots for your particular needs.
// SCALE_UP will be your resolution control. If your pixels are 4 pixels wide, use 4.
const SCALE_UP:int = 4;
var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver,
d._drives = c; // or the thing the driver drives, via the linked object.
               // You don't have to do this.
Then, when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void {
    c.x = Math.ceil(d.x - Math.ceil(d.x) % SCALE_UP); // converts a floating-point number into a multiple of SCALE_UP
    c.y = Math.ceil(d.y - Math.ceil(d.y) % SCALE_UP);
}
Now this will make your character move around 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance, 102 - 102%4 = 100, 103 - 103%4 = 100, and 104 - 104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 twenty-five times with a remainder of 2, so 102%4 = 2. Then 102 - 2 = 100.
In your case, since the "camera" is following the player (i.e. the background is moving, right?) then you really need to apply drivers to everything in the background instead, but it is basically the same idea.
Hope this helps.
Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target; you should also notice it during the rest of the animation.
Depending on the easing "engine" you're using, you should be able to set a "round values" flag, so all the coordinates set will be integer values rather than fractional ones.
If that's not possible, find a way in your display objects to round the x and y values every time they change.
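A minimal sketch of that last approach (the helper name is hypothetical; the idea is to keep the eased position in floats and only snap the displayed position to whole pixels):
import flash.display.DisplayObject;

// Tween tweenX/tweenY with floats elsewhere; only the rendered
// position is rounded to integer pixels.
function applyRoundedPosition(target:DisplayObject, tweenX:Number, tweenY:Number):void {
    target.x = Math.round(tweenX);
    target.y = Math.round(tweenY);
}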

Transparency issues with 3d particles and 3d models, libgdx

I have some strange issues with transparency and 3D particles. A short video to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see, I have a 3D particle effect, fire burning. Inside it is a 3D model with no alpha blending, and it shows just fine. Then, in the far distance, there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera and look at the warrior skeleton, and it just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away, it also starts showing through the fire effect.
So it seems to have something to do with distance from the 3D particle effect...
The 3D particle batch is an extended BillboardParticleBatch, like this:
protected Renderable allocRenderable() {
    BlendingAttribute ba = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f);
    Renderable r = super.allocRenderable();
    r.material = new Material(ba,
            // new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
            // new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
            TextureAttribute.createDiffuse(texture));
    return r;
}
All the characters and the trees are created with following attributes:
if (alpha) {
    FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
    BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
    for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++) {
        bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
        bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
    }
}
The models are drawn first, then the particles; I tried changing the order, but it made no difference. I have tried a lot of different setups for alpha test, depth test and the BlendingAttribute, but cannot find anything that works.
EDIT
I removed the BlendingAttribute from the 3D models and now it looks as it should regarding the particle effect. However, I need most materials on my character models to have blending set.
Does anyone have any clue why this is happening when I enable blending?
I also tried using the BillboardParticleBatch without extending it, in case I had done something wrong there, but the effect is then even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link, really, it is a must-read) to avoid incorrect behavior such as you're experiencing. The actual sorting/rendering happens at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter. Because that implementation isn't aware of your scene, it might not fit your needs exactly.
The DefaultRenderableSorter tries to guess the location of each model based on its transformation matrix. Based on that location and the camera's location, it will sort the render calls so that:
First all opaque objects are rendered from front to back (because whatever is behind an opaque object isn't visible anyway, so that reduces unneeded calls to the fragment shader).
Secondly, all transparent objects are rendered from back to front (because once a transparent object has been rendered, anything rendered after it that lies behind it will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force it to be treated (sorted) as if it was opaque)
So, the order in which you call ModelBatch#render is not necessarily the order in which the render calls are actually executed. If you want to force rendering of whatever you've added to the batch at a given point, call ModelBatch#flush(). Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
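A minimal sketch of the flush approach (the instance and field names are hypothetical):
modelBatch.begin(camera);
modelBatch.render(modelInstances, environment); // the opaque models
modelBatch.flush();                             // force them to be sorted and rendered now
modelBatch.render(particleSystem, environment); // then the blended particle renderables
modelBatch.end();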
Instead, you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution for you.)
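As a rough sketch of such a sorter, mirroring the two rules above against libgdx's RenderableSorter interface (the comparison is a starting point to adapt to your scene, not a drop-in fix):
import java.util.Comparator;
import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.graphics.g3d.Renderable;
import com.badlogic.gdx.graphics.g3d.attributes.BlendingAttribute;
import com.badlogic.gdx.graphics.g3d.utils.RenderableSorter;
import com.badlogic.gdx.math.Vector3;
import com.badlogic.gdx.utils.Array;

public class SceneSorter implements RenderableSorter, Comparator<Renderable> {
    private Camera camera;
    private final Vector3 tmp1 = new Vector3();
    private final Vector3 tmp2 = new Vector3();

    @Override
    public void sort(Camera camera, Array<Renderable> renderables) {
        this.camera = camera;
        renderables.sort(this);
    }

    @Override
    public int compare(Renderable a, Renderable b) {
        boolean blendA = isBlended(a), blendB = isBlended(b);
        if (blendA != blendB) return blendA ? 1 : -1; // opaque renderables first
        float distA = a.worldTransform.getTranslation(tmp1).dst2(camera.position);
        float distB = b.worldTransform.getTranslation(tmp2).dst2(camera.position);
        // Opaque: front to back; blended: back to front.
        return blendA ? Float.compare(distB, distA) : Float.compare(distA, distB);
    }

    private static boolean isBlended(Renderable r) {
        BlendingAttribute ba = (BlendingAttribute) r.material.get(BlendingAttribute.Type);
        return ba != null && ba.blended;
    }
}
You would then pass it in via new ModelBatch(new SceneSorter()).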
That said, there are various other solutions you could try as well. E.g. the fully transparent regions of the particles might as well be discarded by the fragment shader. Try adding a FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending, then gradually reduce the value, e.g. to 0.05f.
Also, you could add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This will prevent the particles from writing to the depth buffer (but it might also cause other things to show in front of the particles).
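Combining those last two suggestions in the question's allocRenderable() might look like this (a sketch; the attribute values are starting points to tune, not known-good settings):
protected Renderable allocRenderable() {
    Renderable r = super.allocRenderable();
    r.material = new Material(
            new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f),
            FloatAttribute.createAlphaTest(0.5f), // discard (almost) fully transparent fragments
            new DepthTestAttribute(false),        // keep depth testing, but don't write depth
            TextureAttribute.createDiffuse(texture));
    return r;
}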

How to animate textures in a 3d model?

I wish to have an animated texture on a 3D model in my libGDX code, but I am struggling to find out how to do it.
I assume the way this "should" be done is either:
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Prerendering a big image containing all the frames (say, 20) and then moving the UV co-ordinates to create the illusion of the animation. (akin to ImageStrips in 2d/webdesign).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need for either a) or b) (or a similarly optimal method), I would be grateful.
Maths I am fine with. The intricacies of OpenGLES or GDX I am not :)
(The solution should at least work HTML/Android compiles, ideally everything)
Since the latest release it is very easy to play a 2D animation on a 3D surface. First, make sure to get familiar with the 2D animation concept, as explained here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically, in your render method you would do:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
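Putting it together, a minimal render() excerpt (a sketch, assuming an Animation built as in the linked wiki page, a ModelInstance named instance, and a float field stateTime):
// Advance the animation clock, then point the material's diffuse
// TextureAttribute at the current key frame.
stateTime += Gdx.graphics.getDeltaTime();
TextureAttribute attribute = instance.materials.get(0)
        .get(TextureAttribute.class, TextureAttribute.Diffuse);
attribute.set(animation.getKeyFrame(stateTime, true));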
If you have the memory for it, I would definitely choose b); it is easier on the processor, as you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform location where you compile your shaders; animationPos should be global:
GLint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the animationPos value to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader's variables:
uniform int animationPos;
In the fragment shader's main:
float texCoordY = texCoord.y; // texture coordinates should be passed in from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20, the number of frames in the strip; using it directly would sample across the whole strip
float textureIndex = float(animationPos) / 20.0; // offset of the current frame's start within the strip
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes that you laid the frames out in the x direction; you can also pack them as a grid, in which case you need to change the texCoord calculation. It also assumes 20 frames.
Option a) is heavier on the processor, and since you would be changing the texture every time it will use the bus a bit more, but it is easier on memory. The question is more of a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

html5 canvas always returns 8 bits of color when using getImageData()

I am using an HTML5 canvas to render an image, do some basic editing of the image, then trying to use the getImageData() function to read through the pixels and do some work. I have noticed, though, that no matter what bit depth the source image is (8-bit, 16-bit, 24-bit), the getImageData() method always returns 8-bit (256 colors). This is not desirable. I would like the getImageData() method to spit out as many colors as it received.
I have read through the documentation, and the canvas should be able to handle any bit depth you throw at it (figuratively), but I can't see anywhere to set the bit depth higher.
Canvas will always return 24-bit data plus an 8-bit alpha channel (RGBA). Each component value will of course have 8 bits or 256 values. This is per the specification. It will never return 8-bit indexed image data, however, so if you somehow run into 8-bit (indexed) image data then you are probably reading the data wrong or from the wrong object/array.
From the specification:
imagedata . data
Returns the one-dimensional array containing the data in RGBA order,
as integers in the range 0 to 255.
And just to cover the opposite aspect: if you draw in an 8-bit indexed palette image such as a PNG-8 or a GIF using 2 - 256 colors, the indexed palette will always be converted to an RGBA buffer (it's actually converted to RGBA at load time by the browser, so this is not something canvas does itself).
To read data from a canvas you have two levels (or three for more advanced use): the image data object, which contains various information including a reference to the actual pixel array view:
var imageData = context.getImageData(x, y, w, h);
From this object we obtain the data view for the pixels which is by default a Uint8ClampedArray view:
var pixelData = imageData.data;
And for more advanced usage we could get the raw byte buffer from this (if you need to provide other views, e.g. a Uint32Array):
var rawBytes = pixelData.buffer;
var buffer32 = new Uint32Array(rawBytes);
But let's stick to the default 8-bit clamped view. To read from it you need to know that the pixels are always packed in RGBA order (or, via the 32-bit view, as single 32-bit values), so we can get a single pixel by doing:
var r = pixelData[0];
var g = pixelData[1];
var b = pixelData[2];
var a = pixelData[3];
The next pixel will start at index 4 and so on.
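As a small sketch of that index arithmetic (the helper name is mine, not part of the API):
// Index of pixel (x, y) in the flat RGBA array: 4 entries per pixel.
function pixelIndex(imageData, x, y) {
  return (y * imageData.width + x) * 4;
}

var i = pixelIndex(imageData, 10, 5);
var r = pixelData[i], g = pixelData[i + 1], b = pixelData[i + 2], a = pixelData[i + 3];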
If you for some reason need to reduce the colors to an indexed palette, you would have to provide the algorithm for that yourself. There are many out there, from simple and bad to more complex and accurate ones. But this is not something you will be able to do out of the box with canvas. Some pointers can be found in this answer, or you can use a library such as this one, which will create an (animated) GIF from the canvas.
Also be aware that if an image drawn onto the canvas doesn't fulfill cross-origin requirements (CORS), the canvas will be "tainted" (security-wise) and getImageData() will return an array with all values set to 0.
ImageData (returned by getImageData) has a data property that gives you an array in which each entry is a colour channel, in the sequence red, green, blue, alpha, rather than whole colours. E.g.:
var red = imgData.data[0];
var green = imgData.data[1];
var blue = imgData.data[2];
var alpha = imgData.data[3];
// Parentheses matter here: without them the ternaries would bind to the
// string concatenation and produce the wrong result.
var colour = '#' + (red < 16 ? '0' : '') + red.toString(16) +
                   (green < 16 ? '0' : '') + green.toString(16) +
                   (blue < 16 ? '0' : '') + blue.toString(16);

Windows Store App - SwapChainPanel DrawLine Performance

I am developing a Windows Store app using XAML / C#. The app also has a Windows Runtime Component, which is used for showing a chart output using DirectX.
I am using the SwapChainPanel approach for drawing the lines (x-axis, y-axis and waveform).
I chose this approach from the MSDN sample below (refer to Scenario 3, D2DPanel):
http://code.msdn.microsoft.com/windowsapps/XAML-SwapChainPanel-00cb688b
Here is my question,
My waveform contains a huge amount of data (ranging from 1,000 to 20,000 points). I am calling DrawLine for all of these points during each Render call.
The control also provides panning and zooming, but keeps the stroke width constant irrespective of zoom level, hence the visible area (render target) might cover much less than the lines I am drawing.
Does calling DrawLine for areas that will end up off-screen cause performance issues?
I tried PathGeometry and GeometryRealization, but I was not able to control the stroke width at various zoom levels.
My Render method typically resembles the snippet below. The lineThickness is adjusted so it stays visually constant irrespective of zoom level.
m_d2dContext->SetTransform(m_worldMatrix);
float lineThickness = 2.0f / m_zoom;
double prevX = 0.0, prevY = 0.0; // carried between iterations
for (unsigned int i = 0; i < points->Size; i += 2)
{
    double wavex1 = points->GetAt(i);
    double wavey1 = points->GetAt(i + 1);
    if (i != 0)
    {
        m_d2dContext->DrawLine(Point2F(prevX, prevY), Point2F(wavex1, wavey1), brush, lineThickness);
    }
    prevX = wavex1;
    prevY = wavey1;
}
I'm kind of new to DirectX, but not to C++. Any thoughts?
Short answer: it probably will. It's good practice to push a clip before drawing. For instance, in your case, you'd call ID2D1DeviceContext::PushAxisAlignedClip with the bounds of your drawing surface. That ensures no drawing calls attempt to draw outside the surface's bounds.
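A minimal sketch of that against the question's m_d2dContext (panelWidth and panelHeight are hypothetical names for the surface size in DIPs):
// Clip all subsequent drawing to the visible surface, draw, then pop.
m_d2dContext->PushAxisAlignedClip(
    D2D1::RectF(0.0f, 0.0f, panelWidth, panelHeight),
    D2D1_ANTIALIAS_MODE_PER_PRIMITIVE);

// ... the DrawLine loop from the Render method above ...

m_d2dContext->PopAxisAlignedClip();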
Long answer: Really, it depends on a handful of factors, including but not limited to what target the device context is drawing to, the display hardware, and the display driver. For instance, if you're drawing to a CPU-backed ID2D1Bitmap, it's probably fair to assume that there won't be much of a difference.
However, if you're drawing directly to some hardware-backed surface (a GPU bitmap, or a bitmap created from an IDXGISurface), it can get a little hairy. For example, consider this comment from an excellently documented MSDN sample, where the code is about to Clear an ID2D1Bitmap created from an IDXGISurface:
// The Clear call must follow and not precede the PushAxisAlignedClip call.
// Placing the Clear call before the clip is set violates the contract of the
// virtual surface image source in that the application draws outside the
// designated portion of the surface the image source hands over to it. This
// violation won't actually cause the content to spill outside the designated
// area because the image source will safeguard it. But this extra protection
// has a runtime cost associated with it, and in some drivers this cost can be
// very expensive. So the best performance strategy here is to never create a
// situation where this protection is required. Not drawing outside the appropriate
// clip does that the right way.