Windows Store App - SwapChainPanel DrawLine Performance - windows-runtime

I am developing a Windows Store App using XAML / C#. The app also has a Windows Runtime Component, which is used to render a chart output using DirectX.
I am using SwapChainPanel approach for drawing the lines (x-axis, y-axis and waveform).
I chose this approach based on the MSDN sample below (refer to scenario 3, D2DPanel):
http://code.msdn.microsoft.com/windowsapps/XAML-SwapChainPanel-00cb688b
Here is my question,
My waveform contains a large amount of data (ranging from 1,000 to 20,000 pairs of points). I am calling DrawLine for all of these points during each Render call.
The control also provides panning and zooming, but keeps the StrokeWidth constant irrespective of the zoom level, so the visible area (render target) might cover far less than the lines I am drawing.
Does calling DrawLine for lines that will end up off-screen cause performance issues?
I tried PathGeometry & GeometryRealization but I am not able to control the StrokeWidth at various zoom levels.
My Render method typically resembles the snippet below. The lineThickness is kept the same irrespective of the zoom level.
m_d2dContext->SetTransform(m_worldMatrix);

// Keep the on-screen stroke width constant regardless of the zoom level.
float lineThickness = 2.0f / m_zoom;

// Previous point, carried across loop iterations.
float prevX = 0.0f, prevY = 0.0f;
for (unsigned int i = 0; i < points->Size; i += 2)
{
    float wavex1 = static_cast<float>(points->GetAt(i));
    float wavey1 = static_cast<float>(points->GetAt(i + 1));
    if (i != 0)
    {
        m_d2dContext->DrawLine(Point2F(prevX, prevY), Point2F(wavex1, wavey1), brush, lineThickness);
    }
    prevX = wavex1;
    prevY = wavey1;
}
I'm kind of new to DirectX, but not to C++. Any thoughts?

Short answer: It probably will. It's good practice to push a clip before drawing. For instance, in your case, you'd do a call to ID2D1DeviceContext::PushAxisAlignedClip with the bounds of your drawing surface. That'll ensure no drawing calls attempt to draw outside the surface's bounds.
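For illustration, here is a minimal sketch of that idea applied to the render loop from the question, reusing its member names (m_d2dContext, m_worldMatrix); renderTargetWidth and renderTargetHeight are placeholders for however you track the SwapChainPanel's size:
// Clip in the render target's own coordinate space (identity transform),
// so nothing outside the visible surface gets rasterized.
m_d2dContext->BeginDraw();
m_d2dContext->SetTransform(D2D1::Matrix3x2F::Identity());
m_d2dContext->PushAxisAlignedClip(
    D2D1::RectF(0.0f, 0.0f, renderTargetWidth, renderTargetHeight),
    D2D1_ANTIALIAS_MODE_ALIASED);

// Now apply the pan/zoom transform and draw the waveform as in the question.
m_d2dContext->SetTransform(m_worldMatrix);
// ... DrawLine loop ...

m_d2dContext->PopAxisAlignedClip();
m_d2dContext->EndDraw();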
Long answer: Really, it depends on a handful of factors, including but not limited to what target the device context is drawing to, the display hardware, and the display driver. For instance, if you're drawing to a CPU-backed ID2D1Bitmap, it's probably fair to assume that there won't be much of a difference.
However, if you're directly drawing to some hardware-backed surface (a GPU bitmap, or a bitmap created from an IDXGISurface), it can get a little hairy. For example, consider this comment from an excellently documented MSDN sample, where the code is about to Clear an ID2D1Bitmap created from an IDXGISurface:
// The Clear call must follow and not precede the PushAxisAlignedClip call.
// Placing the Clear call before the clip is set violates the contract of the
// virtual surface image source in that the application draws outside the
// designated portion of the surface the image source hands over to it. This
// violation won't actually cause the content to spill outside the designated
// area because the image source will safeguard it. But this extra protection
// has a runtime cost associated with it, and in some drivers this cost can be
// very expensive. So the best performance strategy here is to never create a
// situation where this protection is required. Not drawing outside the appropriate
// clip does that the right way.

Related

Is there a way to have smooth/subpixel motion without turning on smoothing on graphics?

I'm creating this 2D, pixel art game. When the camera follows the player (it uses easing), on the final approach, the position gets several subpixel adjustments.
If I have smoothing ON (on my graphic assets), the graphics look good (sharp. it's pixel art) but the subpixel motion is jerky/jumpy.
If I have smoothing OFF, the subpixel motion is smooth, but the pixel art graphics look blurry.
I'm using Flash player v21. I've tried this with Starling and with Flash's display list.
You have a pixelated object that is moving in increments of less than the pixel size, but you don't want to restrict your mathematical easing to integers, or even worse, factors of 8 or what have you. The solution I am using in my project for this exact issue is posted below (I just got it working last week!)
Concept
Create a driver that is controlled by the easing, using floating-point numbers.
Then let this driver control where the actual display object is rendered, using a constraint so the display object only renders at your chosen resolution.
Code Example
// you'll put these lines or equivalent in the correct spots for your particular needs.
// SCALE_UP will be your resolution control. If your pixels are 4 pixels wide, use 4.
const SCALE_UP: int = 4;
var d:CharacterDriver = new CharacterDriver();
var c:Character = new Character();
c._driver = d; // I've found it useful to be able to reference the driver
d._drives = c; // or the thing the driver drives via the linked object.
// you don't have to do this.
Then, when you are ready to do your easing of the driver:
function yourEase(c:Character, d:CharacterDriver):void{
    c.x = Math.ceil(d.x - Math.ceil(d.x)%SCALE_UP); // converts a floating point number into a multiple of SCALE_UP
    c.y = Math.ceil(d.y - Math.ceil(d.y)%SCALE_UP);
}
Now this will make your character move 4 pixels at a time, but still be able to experience easing!
The bit with the modulo (%) operator is the key. For instance, 102-102%4 = 100. 103-103%4 = 100. 104-104%4 = 104.
In case anyone is confused by that, look at what 102%4 does: 4 goes into 102 25 times with a remainder of 2. so 102%4 = 2. Then 102 - 2 = 100.
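For reference, here is the same snapping written as a small C++ sketch (the function name is just illustrative, not from the original code):
#include <cmath>

// Snap a floating-point driver coordinate down to a multiple of SCALE_UP,
// matching the arithmetic above: 102 -> 100, 103 -> 100, 104 -> 104.
const int SCALE_UP = 4;

float snapToGrid(float v)
{
    float c = std::ceil(v);
    return c - std::fmod(c, static_cast<float>(SCALE_UP));
}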
In your case, since the "camera" is following the player (i.e. the background is moving, right?) then you really need to apply drivers to everything in the background instead, but it is basically the same idea.
Hope this helps.
Since you specifically mentioned the "final approach", I think your problem comes from the fact that the easing equations put your graphics at fractional coordinates, especially while getting closer to the target, but you should also notice it during the rest of the animation.
Depending on the easing "engine" that you're using, you should be able to set a "round values" flag, so all the coordinates set are integer values rather than fractional ones.
If that's not possible, find a way in your display objects to round the x and y values every time they change.

Transparency issues with 3d particles and 3d models, libgdx

I got some strange issues with transparency and 3d particles. A short vid to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see, I have a 3D particle effect, fire burning. Inside it is a 3D model with no alpha blending, and it shows just fine. Then, in the far distance, there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera and look at the warrior skeleton, and it just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away, it also starts showing through the fire effect.
So it seems to have something to do with the distance from the 3D particle effect...
The 3d particle batch is an extended BillboardParticleBatch like this:
protected Renderable allocRenderable(){
BlendingAttribute ba=new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE,1f);
Renderable r = super.allocRenderable();
r.material = new Material( ba,
// new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
// r.material.set(new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
TextureAttribute.createDiffuse(texture));
return r;
}
All the characters and the trees are created with the following attributes:
if (alpha) {
FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++){
bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
}
}
The models are drawn first, then the particles; I tried changing the order but it made no difference. I have tried a lot of different setups for alpha test, depth test and the blending attribute, but I cannot find anything that works.
EDIT
I removed the BlendingAttribute from the 3D models and now it looks as it should regarding the particle effect. However, I need most materials on my character models to have blending set...
Anyone got any clue why this is happening when I enable blending?
I also tried using the BillboardParticleBatch without extending it, in case I had done something wrong there, but the effect is then even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link; really, it is a must-read) to avoid exactly the kind of incorrect behavior you're experiencing. The actual sorting/rendering happens at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter. Of course, because that implementation isn't aware of your scene, it might not fit your needs exactly.
The DefaultRenderableSorter tries to guess the location of each model based on its transformation matrix. Based on that location and the camera's location, it sorts them so that:
First all opaque objects are rendered from front to back (because whatever is behind an opaque object isn't visible anyway, so that reduces unneeded calls to the fragment shader).
Secondly all transparent objects are rendered from back to front (because as soon as a transparent object is rendered then everything that is rendered after that and is behind it, will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force it to be treated (sorted) as if it were opaque.)
So the order in which you call ModelBatch#render is not necessarily the order in which the calls are actually executed. If you want to force rendering of whatever you've added to the batch so far, call ModelBatch#flush(). Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
Instead, you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job of sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution.)
That said, there are various other solutions you could try as well. E.g. some regions of the particles are fully transparent, so the fragment shader might as well discard those altogether. Try adding a FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending, gradually reduce the value, e.g. to 0.05f.
Also, you could add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This will prevent the particles from writing to the depth buffer (but it might also cause other things to show in front of the particles).

How to animate textures in a 3d model?

I wish to have an animated texture on a 3D model in my LibGDX code, but I am struggling to find out how to do it.
I assume the way this "should" be done is either:
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Prerendering a big image containing all the frames (say, 20) and then moving the UV co-ordinates to create the illusion of the animation. (akin to ImageStrips in 2d/webdesign).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need to do either a) or b) (or a similar optimal method), I would be grateful.
Maths I am fine with. The intricacies of OpenGLES or GDX I am not :)
(The solution should at least work for HTML/Android compiles, ideally everything.)
Since the latest release it is very easy to play a 2D animation on a 3D surface. First make sure to get familiar with the 2D animation concept, as explained over here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically, in your render method you would have:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
If you have the memory for it, I would definitely choose b); it is easier on the processor. Also, you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform's location where you compile your shaders; animationPos should be global.
GLint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the current animation index to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates should be passed in from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20 because the strip holds 20 frames side by side; otherwise the whole strip would be sampled
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame within the strip
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes that the frames are laid out along the x direction and that there are 20 of them; you could also lay them out as a grid, but then you need to change the texCoord calculation accordingly.
Option a) is heavier on the processor, and since you would be changing the texture every time it will use the bus a bit more, but it is easier on memory. The question is really a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

Html Canvas - Translate, Rotate, Clip - Faster restores without using context.restore

After asking a question regarding animation speed a few days ago, the stackoverflow gang once again solved my problem. However, this has led to another question. [The more you know, the more you realise you don't know.]
Basically, the fewer state changes to my canvas, the faster things will go. If I am just changing the fillStyle, then using ctx.save and ctx.restore is overkill, as all states are restored. Overkill = slow. Instead, just keep the old value of fillStyle somewhere and put just that back once you have finished.
So how do you do this for ctx.translate(x, y), ctx.rotate(angle) and ctx.clip()?
How can I restore these guys to their states before my changes WITHOUT having to use ctx.restore?
You can undo a transform by applying its inverse: negate a translate or rotate, and use the reciprocal for a scale.
ctx.translate(100,100);
// draw lots of stuff
ctx.translate(-100,-100);
ctx.scale(.75,.50);
// draw stuff
ctx.scale(1/.75,1/.50);
ctx.rotate(Math.PI/4);
// draw stuff
ctx.rotate(-Math.PI/4);
If you do multiple transforms, you must undo them in reverse order
ctx.translate(100,100);
ctx.scale(.75,.50);
ctx.rotate(Math.PI/4);
// draw lots of stuff
ctx.rotate(-Math.PI/4);
ctx.scale(1/.75,1/.50);
ctx.translate(-100,-100);
But when translating (moving) a few items, it's faster to use an offset instead of a transform.
ctx.strokeRect(20+100, 20+100, 50, 30);
ctx.fillRect(20+100, 20+100, 50, 30);
Clipping is semi-permanent so you must save/restore the entire context state to undo clip:
context.save();
// define a clipping path
context.clip();
// draw stuff
context.restore();
Transforms are done using a transformation matrix. Canvas gives you access to that matrix using the context.setTransform method.
scaleX=.75;
scaleY=.50;
skewX=0;
skewY=0;
translateX=100;
translateY=100;
context.setTransform(scaleX, skewX, skewY, scaleY, translateX, translateY);
// draw stuff
// setTransform replaces the whole matrix, so restore it by resetting to the identity matrix
context.setTransform(1, 0, 0, 1, 0, 0);
To also set the matrix for rotation, you must set a combination of the scale & skew values like this:
var radianAngle=Math.PI/4;
var cos=Math.cos(radianAngle);
var sin=Math.sin(radianAngle);
context.setTransform(cos, sin, -sin, cos, 0, 0);
// draw stuff
context.setTransform(1, 0, 0, 1, 0, 0); // reset to identity when done
To combine rotation with scaling and translation you cannot simply add the values together; transforms compose by matrix multiplication. The simplest approach is to let the context compose them for you and then reset to the identity when done:
context.setTransform(1, 0, 0, 1, 0, 0);      // start from the identity
context.translate(translateX, translateY);   // compose: translate, then rotate, then scale
context.rotate(radianAngle);
context.scale(scaleX, scaleY);
// draw stuff
context.setTransform(1, 0, 0, 1, 0, 0);      // reset to identity when done
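For reference, a short sketch of the underlying math (standard 2D affine-transform algebra, not something specific to this answer): setTransform(a, b, c, d, e, f) corresponds to the matrix below, and composing a rotation with a scale is the matrix product on the right, not an entry-wise sum.

\[
\begin{pmatrix} a & c & e \\ b & d & f \\ 0 & 0 & 1 \end{pmatrix},
\qquad
\begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix}
=
\begin{pmatrix} s_x\cos\theta & -s_y\sin\theta & 0 \\ s_x\sin\theta & s_y\cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\]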
Just to correct an assumption in the question:
False: the whole context state is saved/restored when using the save()/restore() methods.
Let's be modest: an idea that comes to mind in 30 seconds has most likely already been found (and improved on) by the developers of the major browsers. So the truth is:
True: saving the context costs almost nothing, and restore applies only to what actually changed.
If in doubt, you can look at the source code, but it takes quite some time to become familiar with it (I did it with WebKit's canvas, so this is confirmed for that one at least).
It's much easier to look at the various jsperf tests made on the subject: they show that the gain from hand-saving/restoring one or two properties is moderate to small, i.e. only what changed is restored.
When hand-saving/restoring more things, save() and restore() become faster because of JavaScript's overhead.
(http://jsperf.com/save-restore-vs-translate-twice/4)
Another thing: talking about 'overkill' seems exaggerated. Not only because, as seen before, the context's own save might be faster, but also because the best win is about 2X, so we are talking about proudly taking 2ns instead of 4ns for the save. That has to be compared to the time taken by the draw itself, and might very well not be worth it.
Two last things:
• the bug risk induced by manually saving/restoring ('oops! I forgot to restore that in that function!');
• the rounding errors that might occur (scale(x,x) followed by scale(1/x, 1/x)).
In fact, the way to save time with no risk is by:
1) Batching commands: whenever possible (it all depends on your app, really), batch all commands that expect a given context state.
2) Similarly, defining conventions/rules that avoid having to save/restore the context. For instance: 'always set fillStyle just before filling'. This way you never have to worry about the current fillStyle. What you can do here also greatly depends on your app (and whether it's using external APIs or not), but it can save a great deal of time across numerous draws.
So my advice would be to use manual save/restore only for obvious simple cases (e.g. you just change globalAlpha), and to use conventions/rules to reduce context state changes to a minimum.

Trying to convert openGL to MFC coordinates and having Problems with "gluProject"

To clarify things, what I am trying to do is to get the OpenGL coordinates and manipulate them in my MFC code, not to get an OpenGL object. I'm using MFC to control the position of the objects in OpenGL.
Hi, I'm trying to find the answer on the web and can't find a full solution that I can use and that will work...
I'm developing an MFC project with a static picture control as the canvas for an OpenGL class that draws the graphics for my game.
On mouse down, I need to retrieve a shape's coordinates from the OpenGL class.
I'm looking for a way to convert the OpenGL coordinates to MFC coordinates, but no matter what I try I get junk after using gluProject or gluUnProject (I've tried both directions but neither is working).
GLdouble modelMatrix[16];
glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
GLdouble projMatrix[16];
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);

POINT mouse;                      // stores the x and y coords of the current mouse position
GetCursorPos(&mouse);             // gets the current cursor coordinates (screen coordinates)
ScreenToClient(hWnd, &mouse);     // converts them to client (window) coordinates

GLdouble winX, winY, winZ;        // window-space x, y and z coordinates
winX = (GLdouble)mouse.x;
winY = (GLdouble)viewport[3] - (GLdouble)mouse.y;   // OpenGL's window origin is bottom-left

GLfloat depth = 0.0f;
glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
winZ = depth;

GLdouble posX = s1->getPosX(), posY = s1->getPosY(), posZ = s1->getPosZ(); // hold the final values
gluUnProject(winX, winY, winZ, modelMatrix, projMatrix, viewport, &posX, &posY, &posZ);
gluProject(posX, posY, posZ, modelMatrix, projMatrix, viewport, &winX, &winY, &winZ);
This is just part of the code I've tried. Of course I didn't use gluProject and gluUnProject together; I just have them both here to show what I tried... and I know there is a lot of junk in there, it's from some of my attempts...
P.S. I've tried many, many more combinations and examples from the web and nothing seems to work in my case...
Can anyone show me the right way to do the transformation?
Thanks.
It looks like you're trying to retrieve the object (or objects) that is/are at a particular point. If this is the case, gluProject and/or gluUnProject isn't really a very suitable tool for the task. OpenGL has a selection mode intended specifically for this kind of task.
In typical use, you specify a small square (e.g., 5x5 pixels) around the mouse click spot with gluPickMatrix, set selection mode with glRenderMode, set a buffer with glSelectBuffer, and then draw your scene. The drawing doesn't go to the screen, but fills the buffer you specified with records of what was drawn within the specified area.
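A minimal sketch of that flow, assuming a legacy fixed-function OpenGL setup; mouseX/mouseY are the click's client coordinates, and drawScene() is a hypothetical function that calls glLoadName(id) before each pickable shape:
GLuint selectBuf[512];
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);

glSelectBuffer(512, selectBuf);       // buffer that will receive the hit records
glRenderMode(GL_SELECT);              // switch to selection mode; nothing is drawn on screen
glInitNames();
glPushName(0);

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
// Restrict rendering to a 5x5 pixel square around the click (note the y flip).
gluPickMatrix((GLdouble)mouseX, (GLdouble)(viewport[3] - mouseY), 5.0, 5.0, viewport);
// ... apply the same projection used for normal rendering here, e.g. gluPerspective(...) ...

drawScene();                          // calls glLoadName(shapeId) before each pickable shape

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

GLint hits = glRenderMode(GL_RENDER); // back to normal rendering; returns the number of hit records
// Each record in selectBuf: { name count, min depth, max depth, name(s) } for a shape drawn in the square.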