Bad texture quality in Stage3D - actionscript-3

I'm drawing a simple square in Stage3D, but the quality of the numbers and the edges in the picture is not as high as it should be.
Here's the example with the (small amount of) source code; I've put most of it in one file.
http://users.telenet.be/fusion/SquareQuality/
http://users.telenet.be/fusion/SquareQuality/srcview/
I'm using mipmapping; in my shader I use "<2d, miplinear, repeat>". The texture is a 256x256 JPG (bigger than it appears in the image); I also tried a PNG, tried "mipnearest", and tried without mipmapping. Anti-aliasing is set to 4, but 10 or more doesn't help at all...
Any ideas?
Greetings,
Thomas

Are you using antialiasing for backBuffer?
// Listen for when the Context3D is created
stage3D.addEventListener(Event.CONTEXT3D_CREATE, onContext3DCreated);

function onContext3DCreated(ev:Event):void
{
    var context3D:Context3D = stage3D.context3D;
    // Set up the back buffer for the context
    context3D.configureBackBuffer(stage.stageWidth, stage.stageHeight,
        0,     // no antialiasing (values 2-16 enable antialiasing)
        true); // enable depth and stencil buffers
}

I think that the size of your resource texture is too high. The GPU renders your scene pixel by pixel in the fragment shader. When it renders a pixel of your texture, the fragment shader gets a varying that represents the texture UV. The GPU simply takes the color of the pixel on that UV coordinate of your texture.
Now, when your texture size is too high, you will lose information because two neighboring pixels on the screen will correspond to non-neighboring pixels on the texture resource. For example, if you draw a texture 10 times smaller than the resource, you will get something like this (where each character corresponds to a pixel, in one dimension):
Texture: 0123456789ABCDEFGHIJKLM
Screen: 0AK
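Mipmapping counters exactly this: the GPU samples from a pre-shrunk level whose size roughly matches the on-screen size, so neighboring screen pixels come from neighboring, pre-averaged texels. Below is a minimal sketch of uploading a full mip chain in Stage3D, assuming the Texture was created via context3D.createTexture with a square, power-of-two size; the helper name is just for illustration:

function uploadWithMips(texture:Texture, source:BitmapData):void
{
    var level:int = 0;
    var current:BitmapData = source;
    while (true)
    {
        texture.uploadFromBitmapData(current, level++);
        if (current.width <= 1)
            break;
        // draw the previous level at half size, with smoothing enabled
        var scaled:BitmapData = new BitmapData(current.width / 2, current.height / 2, true, 0);
        var m:Matrix = new Matrix();
        m.scale(0.5, 0.5);
        scaled.draw(current, m, null, null, null, true);
        current = scaled;
    }
}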

I'VE FOUND IT!!!
I went to the Starling forum and found an answer from Daniel from Starling:
"If you're using TRILINEAR, you're already using the best quality available. One additional thing you could try is to set the "antialiasing" value of Starling to a high value, e.g. 16, and see if that helps."
Then I came across an article saying that trilinear filtering is only used when you add the "linear" flag to the sampler options in your fragment shader; in my example program that becomes:
"tex ft0, v0, fs0 <2d, linear, miplinear, repeat>".
Greetings,
Thomas

Related

AS3 - How to calculate intersection between drawing and bitmap

I'm trying to create a handwriting game with AS3 in Adobe Animate. I've created my board and functions (drawing, erasing, saving, printing and a color panel) so far. But I need to show a score. To do that, I thought I could calculate the percentage of intersection between the drawing and a bitmap image (which is my background for now).
Is there any way to do it? Or can you at least tell me which function I should try? Thanks a lot.
Note: Here are 2 images from my game. You can easily understand what I am trying to explain and do.
Players will try to draw correctly (drawn board)
Empty Board
Just a suggestion:
Let's assume that you are recording draw data: a set of points recorded from mouse positions (according to the frame rate) into an array.
I used 8 points in my own example; the result would be like this (6 of 8 = 75% passed):
► black line is the correct path (trace bitmap) ► red is the client's draw
We need to search the whole points array and validate each point; from that, a percentage is easily obtained.
How to validate
Each point contains x and y. To check whether it is placed on a black pixel (the bitmap trace), we just do:
if (bitmapData.getPixel(point.x, point.y) == 0x0) // 0x0 is black
getPixel returns an integer that represents an RGB pixel value from a
BitmapData object at a specific point (x, y). The getPixel() method
returns an unmultiplied pixel value. No alpha information is returned.
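Putting the pieces together, here is a minimal sketch, assuming points is the Array of recorded mouse positions (flash.geom.Point objects) and traceData is the BitmapData of the trace bitmap; both names are only illustrative:

function scoreDrawing(points:Array, traceData:BitmapData):Number
{
    var hits:int = 0;
    for each (var p:Point in points)
    {
        // a pure black pixel means this point lies on the trace path
        if (traceData.getPixel(p.x, p.y) == 0x0)
            hits++;
    }
    // percentage of recorded points that landed on the path
    return points.length > 0 ? 100 * hits / points.length : 0;
}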
Improvement
This practice is more accurate when more points are captured during the draw. Also, the trace bitmap must be a solid line (as in the image above), not a dashed (smoothed, styled, ...) line. However, you can keep this trace bitmap in the background (invisible) and present only a dashed copy of it on a colorful background (like grass and rock textures or other graphical improvements) to players.
Note
Also define a maximum search size if you need more speed when validating the draw. This maximum is used to skip some points: for example, if max=5 and we have 10 points, points 0, 2, 4, 6 and 8 can be ignored. See the sketch below.
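A small sketch of that skipping, where max is the chosen limit and validatePoint stands in for the getPixel check above (both names are assumptions):

var step:int = int(Math.max(1, Math.ceil(points.length / max)));
for (var i:int = 0; i < points.length; i += step)
{
    validatePoint(points[i]); // only every step-th point is checked; the rest are ignored
}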

How to animate textures in a 3d model?

I wish to have an animated 3d texture in my LibGDX code but I am struggling to find out how to do it.
I assume the way this "should" be done is either:
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Prerendering a big image containing all the frames (say, 20) and then moving the UV co-ordinates to create the illusion of the animation. (akin to ImageStrips in 2d/webdesign).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need for either a) or b) (or a similar, optimal method) I would be grateful.
Maths I am fine with. The intricacies of OpenGLES or GDX I am not :)
(The solution should at least work HTML/Android compiles, ideally everything)
Since the latest release it is very easy to play a 2d animation on a 3d surface. First make sure to get familiar with the 2d animation concept, as explained over here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a spritebatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically you would get in your render method:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
If you have the memory for it I would definitely choose b); it is easier on the processor. Also, you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform location where you compile your shaders; animationPos should be global.
GLuint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the current value to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates should be passed from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20, the number of frames packed into the strip; using texCoord.x directly would span all frames at once
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes that you laid your frames out in the x direction; you can also lay them out as a matrix, in which case you need to change the texCoord calculation accordingly. It also assumes 20 frames.
Option a) is heavier on the processor and you will be changing the texture every time, so it will use the PCI bus a bit more, but it is easier on memory. The question is more of a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.

Which shader to use in order to render a mesh as is?

I am using GL 2.0 (in order to display pictures whose dimensions are not powers of 2), and I am trying to simply render a mesh (that displays some triangles).
When using GL 1.0, I didn't have any problem, but now, I have to pass a ShaderProgram object as a parameter.
How can I make it work like it would in GL 1.0?
Should I make a shader that simply does nothing?
You have to use a vertex shader to convert world space coordinates into screen space coordinates, and you need a fragment shader to sample the texture at the interpolated texture coordinates for each rendered pixel of your quad.
Look at the shaders that Libgdx uses for its SpriteBatch, they are pretty minimal texture-a-quad shaders. You can literally use SpriteBatch.createDefaultShader() to get them or just use them as inspiration for your own shaders.
The libgdx wiki page on shaders already contains an example code for a simple shader:
https://github.com/libgdx/libgdx/wiki/Shaders
I assume it's basically the same as the createDefaultShader() mentioned in P.T.'s answer...
Hope it helps...

Stage3D AGAL Diffuse Light Shader, Dynamic Object Questions

There are quite a few ActionScript Stage3D tutorial examples 'out there'. I am thankful to all those who attempt to provide us newbies with working code. That said I don't believe I have yet found an example which I think is correctly handling the Lambertian reflectance as outlined by the Wikipedia entry, but I hesitate to add that, as a newbie, perhaps I just have failed to understand the implementations.
Here's what I think is the basic requirement of any implementation -- that it be able to compare the orientation of the light source [I am purposely limiting this discussion to the simpler case of a 'directional light' mimicking the Sun, rather than a 'spot light'.] to the orientation of the normal to the face to be illuminated.
And here is what I think is the heart of the problem I am seeing -- that the computation of the normal of the face, in almost every case, is being performed when the geometry of the object is being created. So, the value of the normal being passed to the shader is expressed in terms of the local object space.
Now I know that the way this wonderful forum works, you would prefer that I just give some sample code, so someone could identify a specific mistake in either the setup code used on the CPU or the shader code used on the GPU. Because I can find examples of this problem using many different frameworks, I think what I [and I imagine many others] need, is not a specific coded solution, but some specific clarification of what is actually required to get a photo-realistic rendering of an object in the simple, base case of fixed camera view point, a non-moving light source, and ignoring considerations of specularity, etc.
So, when performing the dot product of the two vectors:
A. Should the Vector3D value representing the normal to the triangular face to be illuminated be calculated using the object space values of the three vertices or the world space values after transformation?
B. If world space values are required, as I believe, should the dynamically-calculated normal workload be performed each render cycle on the CPU or the GPU?
Thank you.
Terry, I ran into the same problem recently; I suspect you have solved it or moved on, but I thought I would answer the question anyway.
I chose to transform the normal in the vertex shader.
var vertexShader:Array = [
    "m44 vt0, va0, vc0",    // transform vertex positions (va0) by the world camera data (vc0);
                            // this result has the camera angle baked into the matrix, which is not good for lighting
    "mov op, vt0",          // move the transformed vertex data (vt0) into the output position (op)
    "add v0, va1, vc12.xy", // add the UV offset (va1) and the animated offset (vc12, may be 0 for non-animated); v0 holds the UVs
    "mov v1, va3",          // pass texture color and brightness (va3) to the fragment shader via v1
    "m44 v2, va2, vc4",     // transform the vertex normal (va2) without the camera data, send to the fragment shader via v2
    "m44 v3, va0, vc8"      // the transformed vertices without the camera data; works for the default and the translated cube, the rotated cube is still broken
];
In the fragment shader:
// compute the light direction
"sub ft1, v3, fc0",  // subtract the light position (fc0) from the transformed vertex position (v3)
"nrm ft1.xyz, ft1",  // normalize the light direction (ft1)
// non-flat shading
"dp3 ft2, ft1, v2",  // dot the transformed normal (v2) with the light direction
"sat ft2, ft2",      // clamp the dot product between 0 and 1, put the result in ft2
Last but not least: the data you pass in! It took quite a bit of experimenting for me to find the magic values.
// mvp is the camera data mixed with each object world matrix
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, mvp, true); // aka vc0
// $model is the model to be drawn
var invmat:Matrix3D = $model.modelMatrix.clone();
invmat.transpose();
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 4, invmat, true); // aka vc4
var wsmat:Matrix3D = $model.worldSpaceMatrix.clone();
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 8, wsmat, true); // aka vc8
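One hedged note on the rotated-cube breakage mentioned in the shader comments: the textbook normal matrix is the inverse transpose of the model matrix, not the transpose alone (the two only agree when the upper 3x3 is symmetric, e.g. pure scales, which is consistent with rotation being the broken case). A sketch of that variant, keeping the register layout above:

var normalMat:Matrix3D = $model.modelMatrix.clone();
normalMat.invert();    // invert first ...
normalMat.transpose(); // ... then transpose: the inverse transpose also handles rotation
_context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 4, normalMat, true); // aka vc4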
I hope this helps someone in the future.

Why would I want to use unit scale? (Libgdx)

I have looked into the SuperKoalio example in the Libgdx GitHub repo. It is basically a test for integrating Tiled maps with Libgdx. It uses a unit scale of 1/16, and if I have understood it correctly, this means the world is no longer based on a grid of pixels but on a grid of units (each 16 pixels wide). This is the code and comment from the example:
// load the map, set the unit scale to 1/16 (1 unit == 16 pixels)
map = new TmxMapLoader().load("data/maps/tiled/super-koalio/level1.tmx");
renderer = new OrthogonalTiledMapRenderer(map, 1 / 16f);
I am basically wondering why you would want to do that. I only got problems doing it and can't see any obvious advantages.
For example, one problem I had was adding a BitMap font. It didn't scale at all with the background and one pixel in the font occupied an entire unit. Image here.
I'm using this code for drawing the font. It's the standard 14-point Arial font included in libgdx:
BitmapFont font = new BitmapFont();
font.setColor(Color.YELLOW);

public void draw(){
    spriteBatch.begin();
    font.draw(spriteBatch, "Score: " + thescore, camera.position.x, 10f);
    spriteBatch.end();
}
I assume there is a handy reason to have a 1/16 scale for tiled maps (perhaps for doing computations on which tile is being hit, or for changing tiles, since they sit at handy whole-number indices).
Anyway, regardless of what transformation (and thus what "camera" and thus what projection matrix) is used for rendering your tiles, you can use a different camera for your UI.
Look at the Superjumper demo, and see it uses a separate "guiCam" to render the "GUI" elements (pause button, game over text, etc). The WorldRenderer has its own camera that uses world-space coordinates to update and display the world.
This way you can use the appropriate coordinates for each aspect of your display.