Output values in Pixel Bender (trace) - actionscript-3

I'm absolutely new to Pixel Bender (started a couple of hours ago).
My client wants a classic folding effect for his app. I've shown him some examples of folding effects done via masks and he didn't like them, so I decided to dive into Pixel Bender and try to write him a custom shader.
Thank God I've found this one, and I'm modifying it by playing with the values. But how can I trace / print / echo values from the Pixel Bender Toolkit? This would speed up all the tests I'm doing a lot.
Here I've found in the comments that it's not possible; is that true?
Thanks a lot

Well, you cannot directly trace values in Pixel Bender, but you can, for example, make a specific bitmap, apply the filter to it with the requested values, and then trace the resulting bitmap's pixel values to find out which source point corresponds to the one selected.
For example, you make a 256x256 bitmap with each point (x,y) having "x" as its red component value and "y" as its green component value. Then you apply the filter with the selected values, and either display the result, or respond to clicks and trace the underlying color's red and green values, which will give you the exact point on the source bitmap.
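In Flash you would do this with BitmapData and trace(); the sketch below is only meant to illustrate the coordinate-encoding idea, written in Java with BufferedImage standing in for BitmapData and the filter step left as a placeholder:

import java.awt.image.BufferedImage;

public class ShaderDebugBitmap {
    public static void main(String[] args) {
        // Build a 256x256 source where the red channel encodes x and the green channel encodes y.
        BufferedImage src = new BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 256; y++) {
            for (int x = 0; x < 256; x++) {
                src.setRGB(x, y, (x << 16) | (y << 8)); // 0xRRGG00
            }
        }
        // ... apply the Pixel Bender filter to this bitmap in your environment ...
        // Reading a pixel of the filtered result tells you which source point ended up there.
        int rgb = src.getRGB(40, 80);
        int srcX = (rgb >> 16) & 0xFF; // red channel = original x
        int srcY = (rgb >> 8) & 0xFF;  // green channel = original y
        System.out.println("came from source point (" + srcX + ", " + srcY + ")");
    }
}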

Related

AS3 - How to calculate intersection between drawing and bitmap

I'm trying to create a handwriting game with AS3 in Adobe Animate. I've created my board and functions (drawing, erasing, saving, printing and a color panel) so far. But I need to show a score. To do that I thought I could calculate the percentage of intersection between the drawing and a bitmap image (which is my background for now).
Is there any way to do this? Or can you at least tell me which function I should try? Thanks a lot.
Note: Here are 2 images from my game. You can easily understand what I am trying to explain and do.
Players will try to draw correctly (drawn board)
Empty Board
Just a suggestion:
Let's assume that you are recording draw data: a set of points, captured at the frame rate, that stores mouse positions inside an array.
I used 8 points in my own example; the result would be like this: (6 of 8 = 75% passed)
► black line is the correct path (trace bitmap) ► red is the client's draw
We need to walk the whole points array and validate each point, so a percentage can be obtained easily.
How to validate
Each point contains x and y; to check whether it is placed on a black pixel (the bitmap trace) we just do:
if (bitmapData.getPixel(point.x, point.y) == 0x0) // 0x0 is black
getPixel returns an integer that represents an RGB pixel value from a BitmapData object at a specific point (x, y). The getPixel() method returns an unmultiplied pixel value. No alpha information is returned.
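Putting the loop together, here is a rough sketch of the whole scoring pass (written in Java for illustration; in AS3 the getPixel call works the same way on BitmapData, and the point list / trace bitmap names are made up):

// points: the recorded draw positions; traceBitmap: the black-line reference image.
static double scorePercentage(java.util.List<java.awt.Point> points, java.awt.image.BufferedImage traceBitmap) {
    int passed = 0;
    for (java.awt.Point p : points) {
        int rgb = traceBitmap.getRGB(p.x, p.y) & 0xFFFFFF; // drop alpha, keep RGB only
        if (rgb == 0x000000) { // the point lies on the black trace line
            passed++;
        }
    }
    return 100.0 * passed / points.size(); // e.g. 6 of 8 points -> 75%
}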
Improvement
This practice becomes more accurate the more points are captured during the draw. Also, the trace bitmap must be like the image above, not a dashed (smoothed, styled, ...) line. However, you can keep this trace bitmap in the background (invisible) and only present a dashed copy of it, over a colorful background (like grass and rock textures or other graphical improvements), to the players.
Note
Also define a maximum search size if you need more speed when validating the draw. This maximum is used to skip some points; for example, if max=5 and we have 10 points, points 0, 2, 4, 6 and 8 can be ignored, as sketched below.
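A minimal sketch of that skipping idea (Java again; max and validate() are hypothetical names, validate() being the getPixel check from above):

int stride = Math.max(1, points.size() / max); // with 10 points and max=5 this checks every 2nd point
for (int i = 0; i < points.size(); i += stride) {
    validate(points.get(i));
}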

Transparency issues with 3d particles and 3d models, libgdx

I've got some strange issues with transparency and 3d particles. A short video to illustrate:
https://youtu.be/ZHKI1X3MjhY
As you can see, I have a 3d particle effect, a fire burning. Inside it is a 3d model with no alpha blending and it shows just fine. Then, in the far distance, there is a small skeleton (with blending and alpha test turned on) and it also shows just fine through the fire. Then I turn the camera to look at the warrior skeleton and it just disappears; instead you see what is behind him. I turn the camera again and the mage skeleton also vanishes, but you can see the trees a bit further away just fine, and they have the exact same settings for blending and alpha test. If I move the character, say, 20 yards away, it also starts showing through the fire effect.
So it seems to have something to do with distance from the 3d particle effect...
The 3d particle batch is an extended BillboardParticleBatch like this:
protected Renderable allocRenderable() {
    BlendingAttribute ba = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f);
    Renderable r = super.allocRenderable();
    r.material = new Material(ba,
            // new DepthTestAttribute(GL20.GL_LEQUAL, 0.0f, 0.5f, true),
            // r.material.set(new FloatAttribute(FloatAttribute.AlphaTest, 0.0f),
            TextureAttribute.createDiffuse(texture));
    return r;
}
All the characters and the trees are created with following attributes:
if (alpha) {
    FloatAttribute floatAttribute = new FloatAttribute(FloatAttribute.AlphaTest, 0.5f);
    BlendingAttribute blendingAttribute = new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA, 1f);
    for (int i = 0; i < bulletEntity.modelInstance.materials.size; i++) {
        bulletEntity.modelInstance.materials.get(i).set(blendingAttribute);
        bulletEntity.modelInstance.materials.get(i).set(floatAttribute);
    }
}
The models are drawn first, then the particles; I tried changing the order but it made no difference. I have tried a lot of different setups for the alpha test, depth test and BlendingAttribute but cannot find anything that works.
EDIT
I removed the BlendingAttribute from the 3d models and now it looks as it should regarding the particle effect. However, I need most materials on my character models to have blending set...
Anyone got any clue why this is happening when I enable blending?
I also tried using the BillboardParticleBatch without extending it, in case I had done something wrong there, but the effect is then even worse: all models with blending enabled appear in front of the particle effect even though they stand behind it.
ModelBatch sorts your render calls (check this link, really, it is a must read) to avoid incorrect behavior such as you're experiencing. The actual sorting/rendering happens at the call to ModelBatch#end. By default it uses the DefaultRenderableSorter. Of course, because that implementation isn't aware of your scene, it might not fit your needs exactly.
The DefaultRenderableSorter tries to guess the location of each model based on its transformation matrix. Based on that location and the camera's location it will sort them so that:
First, all opaque objects are rendered from front to back (because whatever is behind an opaque object isn't visible anyway, so this reduces unneeded calls to the fragment shader).
Second, all transparent objects are rendered from back to front (because as soon as a transparent object is rendered, anything rendered after it that lies behind it will not be visible).
To decide whether an object is transparent, the BlendingAttribute#blended member is used. (So you could, if you really wanted to, set that member to false to force it to be treated (sorted) as if it were opaque.)
So, the order in which you call ModelBatch#render is not necessarily the order in which the render calls are actually executed. If you want to force rendering of whatever you've added to the batch up to some point, call ModelBatch#flush() at that point. Of course, doing this a lot defeats some of the purpose of ModelBatch in the first place.
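For example, a hedged sketch (the instance and field names are made up) that forces everything queued so far to be drawn before the particles are submitted:

modelBatch.begin(camera);
modelBatch.render(modelInstances, environment); // queued, sorted and ...
modelBatch.flush();                             // ... drawn right here, before the particles
modelBatch.render(particleSystem, environment); // particle renderables go into a second pass
modelBatch.end();                               // draws whatever is still queued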
Instead, you could implement your own RenderableSorter, which has more knowledge about your scene and can therefore do a better job sorting than the default implementation. (However, if flush() works for you and there's no other issue, then just flushing might be the easiest solution for you.)
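As a rough sketch of that approach (not the one true way; the "particle" userData marker is something you would have to set yourself on the particle renderables):

import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.Renderable;
import com.badlogic.gdx.graphics.g3d.utils.DefaultRenderableSorter;

// Keeps the default ordering, but always sorts renderables tagged as particles last.
public class ParticlesLastSorter extends DefaultRenderableSorter {
    @Override
    public int compare(Renderable o1, Renderable o2) {
        boolean p1 = "particle".equals(o1.userData);
        boolean p2 = "particle".equals(o2.userData);
        if (p1 != p2) return p1 ? 1 : -1; // particles go after everything else
        return super.compare(o1, o2);     // otherwise fall back to the default comparison
    }
}

// usage: ModelBatch modelBatch = new ModelBatch(new ParticlesLastSorter());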
That said, there are various other solutions you could try as well. E.g. the regions of the particles that are fully transparent might as well be discarded by the fragment shader altogether. Try adding a FloatAttribute.AlphaTest with a value of 0.5f to the particles. If that messes with your blending, then gradually reduce the value, e.g. down to 0.05f.
Also, you could add a DepthTestAttribute with depthMask set to false (new DepthTestAttribute(false)). This will prevent the particles from writing to the depth buffer (but it might also cause other things to show in front of the particles).
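Applied to the allocRenderable() override above, those two suggestions could look roughly like this (an untested sketch; you may need a lower AlphaTest value, as mentioned):

protected Renderable allocRenderable() {
    Renderable r = super.allocRenderable();
    r.material = new Material(
            new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE, 1f),
            new FloatAttribute(FloatAttribute.AlphaTest, 0.5f), // discard (nearly) fully transparent fragments
            new DepthTestAttribute(false),                      // depth test stays on, but no depth writes
            TextureAttribute.createDiffuse(texture));
    return r;
}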

How to animate textures in a 3d model?

I wish to have an animated texture on a 3d model in my LibGDX code, but I am struggling to find out how to do it.
I assume this "should" be done in one of two ways:
a) Directly accessing and modifying the texture on the model (via a Pixmap? ByteBuffer?)
or
b) Pre-rendering a big image containing all the frames (say, 20) and then moving the UV coordinates to create the illusion of animation (akin to image strips in 2d/web design).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need for either a) or b) (or a similar, optimal method) I would be grateful.
Maths I am fine with. The intricacies of OpenGL ES or GDX I am not :)
(The solution should at least work for HTML/Android compiles, ideally everything.)
Since the latest release it is very easy to play a 2d animation on a 3d surface. First make sure to get familiar with the 2d animation concept, as explained over here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically, in your render method you would do:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
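Putting the pieces together, a rough sketch (the exact Animation constructor and whether it is generic depend on your libGDX version; frames and stateTime are your own fields):

// create(): build the animation from the frames of your strip
Animation<TextureRegion> animation = new Animation<TextureRegion>(1f / 20f, frames);
TextureAttribute attribute = instance.materials.get(0)
        .get(TextureAttribute.class, TextureAttribute.Diffuse);

// render(): advance time and point the material at the current frame's region
stateTime += Gdx.graphics.getDeltaTime();
attribute.set(animation.getKeyFrame(stateTime, true)); // true = loop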
If you have the memory for it I would definitely choose b); it is easier on the processor. Also, you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform variable where you compile your shaders; animationPos should be global.
GLuint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the animationPos value to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates should be passed in from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20.0 because the strip holds 20 frames; using texCoord.x directly would sample across the whole strip
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The above code assumes that you laid the frames out along the x direction and that there are 20 of them. You can also lay them out like a matrix, but then you need to change the texCoord calculation accordingly.
Option a) is heavier on the processor, and since you would be uploading the texture every time it also uses the bus a bit more, but it is easier on memory. The question is really a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

Shadow-casting shaders in Stage3D

I've been working a lot with AGAL vertex and fragment shaders. I've got individual objects lit correctly (including specular shading) but I'd like to have objects cast shadows on OTHER objects. I have looked online, but I think most people working directly with AGAL have built custom Stage3D libraries and the shadow-casting solution doesn't seem to be in the public domain. Anyone willing to change that?
I'd like to know how to get an object to cast a shadow on another. I can't post what I've tried, because I can't get my head around where to begin on this problem. How would you pass the information (whether other objects are blocking the light) into another object's shader?
Thanks.
It's called deferred shading; you have to do two passes of vertex and fragment shaders.
In the first pass you accumulate information about distances, normals, occlusion...
In the second pass you render and apply the information from the first pass to make the shadows.
Another option is shadow mapping:
Basic shadow map
The basic shadow map algorithm consists of two passes. First, the scene is rendered from the point of view of the light and only the depth of each fragment is computed. Next, the scene is rendered as usual, but with an extra test to see if the current fragment is in the shadow.
The “being in the shadow” test is actually quite simple. If the current sample is further from the light than the shadow map value at the same point, it means that the scene contains an object that is closer to the light. In other words, the current fragment is in the shadow.
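The comparison itself is tiny; here is a language-agnostic sketch of it (written in Java for concreteness, since AGAL has no high-level syntax; in AGAL you would express the same thing with a shadow map texture sample followed by an slt/sge style comparison):

// depthFromLight: this fragment's depth as seen from the light
// shadowMapDepth: the depth stored in the shadow map at the fragment's projected position
// bias: a small (tuned) offset to avoid self-shadowing artifacts ("shadow acne")
static boolean inShadow(float depthFromLight, float shadowMapDepth, float bias) {
    return depthFromLight - bias > shadowMapDepth; // something else is closer to the light
}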

Drawing lines ontop of texture in Direct3D

I am working in Direct3D 11 with Windows 8 Store apps.
I have been searching Google and am missing a few points that I would be happy if someone could point out for me.
So far I have managed to create buffers and shaders and get a texture sampled with D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST, and I can of course change it to a LINELIST and get a line made of my points.
What should I look for when I want to draw the texture and also draw some lines, or a triangle list as lines, on top of the texture? I want to show the texture and a mesh on top of it.
What are my next steps?
A simple approach would be to first render the object with the rasterizer state's fill mode set to D3D11_FILL_SOLID, then render the same object again with the fill mode set to D3D11_FILL_WIREFRAME.
The "wireframe pass" shader can be very simple, based on your needs; just remember to change the shading from the regular pass, or else you won't be able to see the wireframe.