Which shader to use in order to render a mesh as is? - libgdx

I am using GL 2.0 (in order to display pictures whose sizes are not a power of two), and I am simply trying to render a mesh (a few triangles).
When using GL 1.0, I didn't have any problem, but now, I have to pass a ShaderProgram object as a parameter.
How can I make it work like it would in GL 1.0?
Should I make a shader that simply does nothing?

You have to use a vertex shader to convert world-space coordinates into screen-space coordinates, and a fragment (pixel) shader to sample the texture at each rendered pixel of your quad.
Look at the shaders that libGDX uses for its SpriteBatch; they are pretty minimal texture-a-quad shaders. You can literally use SpriteBatch.createDefaultShader() to get them, or just use them as inspiration for your own shaders.
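For illustration, a minimal sketch of that approach (the uniform names u_projTrans and u_texture are the ones recent libGDX default shaders use; check the source of createDefaultShader() if yours differ, and the mesh needs matching a_position, a_color and a_texCoord0 attributes):
ShaderProgram shader = SpriteBatch.createDefaultShader();

// in render(), assuming 'camera', 'texture' and 'mesh' are set up elsewhere:
texture.bind(0);
shader.begin();
shader.setUniformMatrix("u_projTrans", camera.combined);
shader.setUniformi("u_texture", 0);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();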

The libGDX wiki page on shaders already contains example code for a simple shader:
https://github.com/libgdx/libgdx/wiki/Shaders
I assume it's basically the same as the createDefaultShader() shader from P.T.'s answer...
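A minimal texture-mapping shader in that spirit looks roughly like this (a sketch; the attribute/uniform names follow libGDX's usual conventions but are not required):
String vertexShader =
      "attribute vec4 a_position;\n"
    + "attribute vec2 a_texCoord0;\n"
    + "uniform mat4 u_projTrans;\n"
    + "varying vec2 v_texCoord;\n"
    + "void main() {\n"
    + "    v_texCoord = a_texCoord0;\n"
    + "    gl_Position = u_projTrans * a_position;\n"
    + "}";

String fragmentShader =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec2 v_texCoord;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    gl_FragColor = texture2D(u_texture, v_texCoord);\n"
    + "}";

ShaderProgram shader = new ShaderProgram(vertexShader, fragmentShader);
if (!shader.isCompiled()) Gdx.app.log("Shader", shader.getLog()); // always check compilation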
Hope it helps...

Related

How to animate textures in a 3d model?

I wish to have an animated 3D texture in my libGDX code but I am struggling to find out how to do it.
I assume the way this "should" be done is either:
a) Directly accessing and modifying the texture on the model. (via a pixmap? ByteBuffer?)
or
b) Prerendering a big image containing all the frames (say, 20) and then moving the UV coordinates to create the illusion of animation (akin to image strips in 2D/web design).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need for either a) or b) (or a similar optimal method) I would be grateful.
Maths I am fine with. The intricacies of OpenGLES or GDX I am not :)
(The solution should at least work for HTML/Android builds, ideally everything.)
Since the latest release it is very easy to play a 2D animation on a 3D surface. First make sure to get familiar with the 2D animation concept, as explained over here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically you would get in your render method:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
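Put together, a rough sketch (the atlas, the region name "walk" and the frame duration are placeholders; adapt them to your assets):
// fields, initialized e.g. in create():
Animation animation = new Animation(0.05f, atlas.findRegions("walk"));
TextureAttribute attribute = instance.materials.get(0)
        .get(TextureAttribute.class, TextureAttribute.Diffuse);
float stateTime = 0f;

// in render():
stateTime += Gdx.graphics.getDeltaTime();
attribute.set(animation.getKeyFrame(stateTime, true)); // swap in the current frame's region
modelBatch.begin(camera);
modelBatch.render(instance, environment);
modelBatch.end();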
If you have the memory for it I would definitely choose b); it is easier on the processor. Also, you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform location where you compile your shaders; animationPos should be global.
GLint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the current frame index to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates should be passed in from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20 because the strip holds 20 frames; the raw coordinate would sample across the whole strip
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame within the strip
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes you laid your frames out along the x direction; you can also pack them as a grid (matrix), in which case you need to change the texCoord calculation. It also assumes 20 frames.
Option a) is heavier on the processor and, since you would be changing the texture every time, it uses the bus (PCI) a bit more, but it is easier on memory. It is more of a design decision, but I guess 20 images can be handled, so go with option b).
Edit: Added code.
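If you drive this from libGDX rather than raw GL calls, the same uniform update can be done on your ShaderProgram (a sketch; "animationPos" must match the uniform name in your fragment shader, and stateTime/frameDuration are assumed to exist in your code):
// once per frame, with the shader bound:
int currentAnimationIndex = (int)(stateTime / frameDuration) % 20; // 20 frames in the strip
shader.begin();
shader.setUniformi("animationPos", currentAnimationIndex);
// ... issue your draw calls with this shader ...
shader.end();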

libGDX: same texture with shaders, different textures without

My name is Tom (from Germany) and I am developing a small 3D game with libGDX.
When I use a Model and ModelInstance with a ModelBatch and the Environment, I can render different ModelInstances (with different Models) with their correct textures.
But I need to use a shader for some wobble effects.
When I use a shader everything works fine, except for the textures: they are the same for every ModelInstance I want to render.
I guess there is a texture binding problem. I load my Models this way:
assets = new AssetManager();
assets.load("blob.g3db", Model.class);
and fetch them with a simple:
public static Model getModel(String name) {
    return assets.get(name + ".g3db", Model.class);
}
So I guess the AssetManager is loading the textures as well (since it works without the custom shader).
My question is:
How can I render different 3D objects with a shader and their correct textures?
Thanks in advance...
Tom
Models and ModelInstances have a Material, on which you can set a Texture, a Color and other attributes.
So if two ModelInstances share the same Model, you can give each ModelInstance its own Material and thereby different Textures. The DefaultShader implementation takes care of this for you; if you create your own Shader you need to take care of it yourself.
Important: it never works without a shader, because you always render with one. You just don't set the Shader manually; libGDX uses the DefaultShader by default.
I suggest you read some of Xoppa's tutorials.
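The per-renderable texture handling in your own Shader looks roughly like this (a sketch following the DefaultShader/Xoppa tutorial pattern; 'context' and 'program' are the RenderContext and ShaderProgram your Shader stores in init()/begin(), and "u_texture" is whatever sampler name your wobble shader uses):
// inside your Shader implementation's render(Renderable renderable):
TextureAttribute attr = (TextureAttribute) renderable.material.get(TextureAttribute.Diffuse);
if (attr != null) {
    // bind this renderable's own texture and point the sampler uniform at that unit
    int unit = context.textureBinder.bind(attr.textureDescription);
    program.setUniformi("u_texture", unit);
}
// then render the renderable's mesh with 'program' as usual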

Shadow-casting shaders in Stage3D

I've been working a lot with AGAL vertex and fragment shaders. I've got individual objects lit correctly (including specular shading) but I'd like to have objects cast shadows on OTHER objects. I have looked online, but I think most people working directly with AGAL have built custom Stage3D libraries and the shadow-casting solution doesn't seem to be in the public domain. Anyone willing to change that?
I'd like to know how to get an object to cast a shadow on another. I can't post what I've tried, because I can't get my head around where to begin on this problem. How would you pass the information (whether other objects are blocking the light) into another object's shader?
Thanks.
It's called deferred shading; you do two passes of vertex and fragment shaders.
In the first pass you accumulate information about distances, normals, occlusion...
In the second pass you render as usual and apply the information from the first pass to produce the shadows.
Another option is shadow mapping:
Basic shadowmap
The basic shadow map algorithm consists of two passes. First, the scene is rendered from the point of view of the light and only the depth of each fragment is stored. Next, the scene is rendered as usual, but with an extra test to see if the current fragment is in shadow.
The "being in the shadow" test is actually quite simple: if the current sample is further from the light than the shadow map value at the same point, the scene contains an object that is closer to the light. In other words, the current fragment is in shadow.

Drawing lines ontop of texture in Direct3D

I am working in Direct3D 11 with Windows 8 Store apps.
I have been searching Google and am missing a few points that I would be happy if someone could point out for me.
So far I have managed to create buffers and shaders and to get a texture sampled with D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST, and I can of course change it to a LINELIST and get a line through my points.
What should I look at when I want to draw the texture and also draw some lines, or a triangle list as lines, on top of the texture? I want to show the texture with a mesh on top of it.
What are my next steps?
A simple approach would be to first render the object with the rasterizer fill mode set to D3D11_FILL_SOLID, then render the same object again with the fill mode set to D3D11_FILL_WIREFRAME.
The "wireframe pass" shader can be very simple depending on your needs; just remember to change the shading from the regular pass or else you won't be able to see the wireframe.

What is "drawing context" exactly? What is the role of getcontext() method?

What is the getContext() method and what is a drawing context exactly? Why do we always pass the string "2d" to the getContext() method?
The context is a way to choose what you are going to do with your canvas.
For the moment you can use getContext for 2D (the 2D canvas API) or for 3D (WebGL).
The HTML5 specification says about getContext:
"Returns an object that exposes an API for drawing on the canvas. The first argument specifies the desired API. Subsequent arguments are handled by that API."
You can find the specification for each API there:
https://html.spec.whatwg.org/multipage/canvas.html#dom-canvas-getcontext
It is also good to know that "webgl" is the correct name for the API, but for the moment, as it is experimental, you should use "experimental-webgl" to start creating WebGL content.
In computer graphics, a drawing context is an abstraction (a class/object) that encapsulates how you are going to draw stuff.
At a 100k foot level, computer graphics is about converting drawing commands to pixels (image). How you go from commands to pixels is what the graphics pipeline is all about (very broad and deep subject). A drawing context exposes drawing methods and properties to achieve this.
Examples of drawing commands: drawLine, drawPath, drawRect (you get the idea).
Examples of drawing properties: fill color, stroke color, stroke style, font size, clipping region, etc.
In the context (pardon the pun) of the web, you have two drawing contexts: "2d" for 2D drawing and "webgl" for 3D drawing.