libGDX: same texture with shaders, different textures without

My name is Tom (from Germany) and I am developing a small 3D game with libGDX.
When I render with a Model, a ModelInstance, a ModelBatch, and an Environment, different ModelInstances (with different Models) show their correct textures.
But I need a custom shader for some wobble effects, and as soon as I use one, everything works fine except the textures: every ModelInstance I render gets the same texture.
I suspect a texture-binding problem. I load my Models this way:
assets = new AssetManager();
assets.load("blob.g3db", Model.class);
and fetch them with a simple:
public static Model getModel(String name) {
    return assets.get(name + ".g3db", Model.class);
}
So I assume the AssetManager loads the textures as well (since it works without the custom shader).
My question is:
How can I render different 3D objects with a shader and their correct textures?
Thanks in advance,
Tom

Models and ModelInstances have a Material, on which you can set a Texture, a Color, and other attributes.
So if two ModelInstances share the same Model, you can give each instance its own Material, and with that its own Texture. The DefaultShader implementation takes care of binding them; if you create your own Shader, you have to take care of that yourself.
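For the material side, a minimal sketch (assuming model, textureA, and textureB are already loaded):

// Each ModelInstance gets its own copy of the Model's materials,
// so changing one instance's material does not affect the other.
ModelInstance a = new ModelInstance(model);
ModelInstance b = new ModelInstance(model);
a.materials.get(0).set(TextureAttribute.createDiffuse(textureA));
b.materials.get(0).set(TextureAttribute.createDiffuse(textureB));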
Important: it never works "without a shader", because you always render with a shader. You just don't set one manually; libGDX falls back to DefaultShader by default.
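For the custom-shader side, the key is to bind the texture of the renderable currently being rendered, not a texture you bound once up front. A minimal sketch (the uniform name u_diffuseTexture and the omitted boilerplate are assumptions of this example, not fixed libGDX names):

import com.badlogic.gdx.graphics.g3d.Renderable;
import com.badlogic.gdx.graphics.g3d.attributes.TextureAttribute;
import com.badlogic.gdx.graphics.g3d.shaders.BaseShader;

public class WobbleShader extends BaseShader {
    // init(), canRender(), compareTo() and the wobble uniforms are omitted here

    @Override
    public void render(Renderable renderable) {
        TextureAttribute diffuse =
                (TextureAttribute) renderable.material.get(TextureAttribute.Diffuse);
        if (diffuse != null) {
            // Bind this renderable's own texture and point the sampler at its unit.
            int unit = context.textureBinder.bind(diffuse.textureDescription);
            program.setUniformi("u_diffuseTexture", unit);
        }
        super.render(renderable); // renders the mesh with the texture bound above
    }
}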
I suggest you read some of Xoppa's tutorials.


How to set a texture filter

How important is it to set a texture filter?
In the book Java Game Development with LibGDX, chapter 3 sets a texture filter.
When I load assets with the AssetManager, I can't convert a TextureRegion to a Texture to set the texture filter.
But I can set a texture filter on the entire sprite sheet, like so:
textureAtlas = assetManager.get("images/packed/game.pack.atlas") // all images are found in this global static variable
textureAtlas!!.findRegion("button").texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear)
How important is it to set a texture filter? Is this an acceptable solution? How can I get the textures from the atlas?
Textures always have a filter. If you don't set one, it will have the default filter of (Nearest, Nearest). That filter is appropriate for retro graphics (a pixelated look). Otherwise, you'll most likely want to use (MipMapLinearLinear, Linear). If your game is mostly done and you've identified sprite drawing as a performance bottleneck, you can downgrade to (MipMapLinearNearest, Linear).
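For a standalone Texture loaded through the AssetManager (as in your question), the filter, and the mipmaps that the MipMap* filters require, can be requested at load time. A rough sketch, with an illustrative path:

TextureLoader.TextureParameter param = new TextureLoader.TextureParameter();
param.genMipMaps = true; // MipMap* min filters need mipmaps to exist
param.minFilter = Texture.TextureFilter.MipMapLinearLinear;
param.magFilter = Texture.TextureFilter.Linear;
assetManager.load("images/player.png", Texture.class, param);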
When creating an atlas with TexturePacker, there is an option for the texture filter; if you set it there, you don't have to set it after loading the TextureAtlas in your game. You could, for instance, add a line at the top of your pack file like this:
filter: MipMapLinearLinear,Linear
Otherwise, if you want to set it on the atlas, it is fine with a single-page atlas to do what you did, applying the filter through a texture reference from any of the texture regions, since they all reference the same Texture instance. But a TextureAtlas can have multiple pages, so it would be more appropriate to do this:
for (Texture texture : textureAtlas.getTextures())
    texture.setFilter(...);
Edit: To add settings to a TexturePacker build, put a text file named pack.json in the directory with the source images. You only have to add the settings you want to change from the defaults. LibGDX can read simplified JSON that omits quotation marks around elements with no whitespace, so to set just the texture filter, this is all you need in the file:
{
    filterMin: MipMapLinearLinear,
    filterMag: Linear
}

ActionScript 3: Drawing lines and bitmaps the right way

I'm just getting started with Flash/ActionScript, and the general consensus seems to be to create Sprites, Bitmaps, MovieClips, etc. for the various objects that represent pictures and other graphics.
However, the way I'm used to writing games in other languages is to loop repeatedly and, each frame, use something like the Graphics object to redraw the scene onto the main Sprite. Is this how it's also done in Flash, and is it good practice? I can do it this way, but I'm wondering if there's a Flash ecosystem standard instead.
Here's an example of the way I'm used to:
public class MyApp extends Sprite
{
    public function MyApp()
    {
        var t:Timer = new Timer(20);
        t.addEventListener(TimerEvent.TIMER, update);
        t.start();
    }

    public function update(e:TimerEvent)
    {
        this.graphics.clear();
        //Rendering code and updating of objects.
    }
}
Is this acceptable?
Well, it depends.
In Flash, you have the option of relying on the Flash Player's vector rasterizer and rendering system, which will figure out all the redrawing for you. For instance, you can draw once to a Sprite then simply apply transforms to the sprite (set x, y, width, height, rotation, scaleX, scaleY, transform.matrix, transform.colorTransform, etc). Any of these objects could be a vector shape or a bitmap, and you can also use cacheAsBitmap and cacheAsBitmapMatrix for even more redraw optimization. The Flash Player will only redraw areas that change, on the frame that they change. I would consider this the traditional "Flash way".
Using the Graphics API is just a programmatic way to create vector shape data. Think of it as a code alternative to drawing in the Flash IDE. You could draw using Graphics once when the object is created, or, if you needed to change the actual shape (i.e. not just the transform), you are correct that you would clear() and redraw it. Ideally, however, you would not be doing that a lot. If you find yourself redrawing the shape a lot, you might want to move to a pre-rendered sprite-sheet approach. In that case you use BitmapData to quickly copy pre-drawn pixel data to a Bitmap object. This is generally faster than relying on the vector rasterizer to render your Graphics commands, as long as you use the fast pixel methods like copyPixels(). This is probably closer to the sort of rendering systems you are used to on other platforms that don't have a vector rasterizer built in.
Lastly, it's worth noting that the newest (and fastest) way to render objects in Flash is completely different from all of that. It's called Stage3D, and it uses a completely different rendering pipeline than the vector rasterizer. It's powered by GPU rendering APIs, so it's blazing fast (great for games) but has no vector rasterizing abilities. It can be used for both 3D and 2D. It's a bit more involved to work with, but there are some useful frameworks to make it easier, most notably the Starling 2D framework.
Hope that helps.
The "Flash way" is to use EnterFrame event instead of using timer to draw. You must make your calculation whenever you want but let flash draw you scene.
It works the same way in ActionScript:
public class App extends Sprite // adding "my" to identifier names doesn't add any information, so there's no real point in doing it
{
    public function App()
    {
        addEventListener(Event.ENTER_FRAME, update); // "each frame"
    }

    private function update(e:Event):void // not just the parameters of functions have a type, but also their return value
    {
        graphics.clear(); // no need for "this" here
        //Rendering code and updating of objects.
    }
}
Keep in mind that the Graphics API is vector-based and as such will only draw so many things before performance drops.
Sprite is a general-purpose container, not to be confused with what the term "sprite" stands for in a sprite sheet.
What you are probably referring to when saying "main Sprite" is some rectangular region of pixels that you can manipulate. In that case, a BitmapData is what you want, which is displayed with a Bitmap object.
BitmapData does not offer a graphics property; drawing vectors and manipulating pixels are treated separately in AS3. If you want to draw a line into a BitmapData object, you first draw the line as a vector into a Sprite (or better, a Shape, if all you want to do is draw on it) using its graphics property, then use BitmapData's draw() to set its pixels according to the drawn line.

How to animate textures in a 3d model?

I wish to have an animated texture on a 3D model in my libGDX code, but I am struggling to find out how to do it.
I assume how this "should" be done is either:
a) directly accessing and modifying the texture on the model (via a Pixmap? a ByteBuffer?),
or
b) pre-rendering a big image containing all the frames (say, 20) and then moving the UV coordinates to create the illusion of animation (akin to image strips in 2D/web design).
I did work out how I could completely replace the material each time, but that seems a much worse way of doing it. So if anyone could show the commands I need for a) or b) (or a similarly optimal method), I would be grateful.
The maths I am fine with. The intricacies of OpenGL ES and GDX I am not :)
(The solution should at least work for the HTML and Android builds, ideally everywhere.)
Since the latest release it is very easy to play a 2D animation on a 3D surface. First get familiar with the 2D animation concept, as explained here: https://github.com/libgdx/libgdx/wiki/2D-Animation. Then, instead of using a SpriteBatch, you can use the TextureRegion (which Animation#getKeyFrame returns) to set the material of the surface, as shown here: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/g3d/TextureRegion3DTest.java. So basically, in your render method you would have:
attribute.set(animation.getKeyFrame(stateTime, true));
Or if you want a more generic approach:
instance.getMaterial("<name of material>").get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
Or, if there's only one material in the ModelInstance:
instance.materials.get(0).get(TextureAttribute.class, TextureAttribute.Diffuse).set(animation.getKeyFrame(stateTime, true));
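Putting that together, a rough sketch of the render method (the animation, instance, and stateTime fields are assumed to be set up elsewhere, with the animation built from a sprite sheet as in the 2D-animation article):

// Fields, assumed initialized in create():
// Animation<TextureRegion> animation; // frames cut from the sprite sheet
// ModelInstance instance;             // the model showing the animation
// float stateTime = 0f;

@Override
public void render() {
    stateTime += Gdx.graphics.getDeltaTime();
    // Point the diffuse texture attribute at the current frame (texture + UVs).
    TextureAttribute attr = instance.materials.get(0)
            .get(TextureAttribute.class, TextureAttribute.Diffuse);
    attr.set(animation.getKeyFrame(stateTime, true));
    // ...then render the instance with your ModelBatch as usual.
}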
If you have the memory for it, I would definitely choose b); it is easier on the processor, since you would only be changing a uniform's value. However, due to the preprocessing it might take some time to open the application.
Get your uniform location where you compile your shaders; animationPos should be global:
GLuint animationPos = glGetUniformLocation(shaderProgram, "nameoftheuniform");
Your main loop should pass the current animation frame index to the shader:
glUniform1i(animationPos, currentAnimationIndex);
Add this to your fragment shader's variables:
uniform int animationPos;
Fragment shader main:
float texCoordY = texCoord.y; // texture coordinates passed in from the vertex shader
float texCoordX = texCoord.x / 20.0; // divide by 20 because the strip holds 20 frames; raw texCoord.x would sample across the whole strip
float textureIndex = float(animationPos) / 20.0; // offset to the start of the current frame
gl_FragColor = texture2D(yourTexture, vec2(textureIndex + texCoordX, texCoordY));
The code above assumes the frames are laid out along the x direction; you could also arrange them as a grid, in which case you need to change the texCoord calculation. It also assumes 20 frames.
Option a) is heavier on the processor, and since you would be changing the texture every time it uses the bus a bit more, but it is easier on memory. It is really a design decision, but I guess 20 images can be handled, so go with option b).
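In libGDX specifically, the raw GL calls above map onto ShaderProgram methods; a short sketch, with the same assumed uniform name:

// Each frame, bind the shader and update the frame index uniform.
shaderProgram.bind(); // use shaderProgram.begin() on older libGDX versions
shaderProgram.setUniformi("animationPos", currentAnimationIndex);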
Edit: Added code.

Which shader to use in order to render a mesh as is?

I am using GL 2.0 (in order to display pictures whose dimensions are not powers of two), and I am trying to simply render a mesh (that displays some triangles).
With GL 1.0 I didn't have any problem, but now I have to pass a ShaderProgram object as a parameter.
How can I make it work like it did in GL 1.0?
Should I make a shader that simply does nothing?
You have to use a vertex shader to convert world-space coordinates into screen-space coordinates, and you need a fragment (pixel) shader to look up texture coordinates for each rendered pixel of your quad.
Look at the shaders that libGDX uses for its SpriteBatch; they are pretty minimal texture-a-quad shaders. You can literally use SpriteBatch.createDefaultShader() to get them, or just use them as inspiration for your own shaders.
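For example, a sketch of that suggestion (mesh, texture, and camera are assumed to exist; the mesh needs a_position, a_color, and a_texCoord0 attributes, and u_projTrans/u_texture are the uniform names the default shader declares):

ShaderProgram shader = SpriteBatch.createDefaultShader();

// in render():
texture.bind();                                  // binds to texture unit 0
shader.bind();                                   // shader.begin() on older libGDX versions
shader.setUniformMatrix("u_projTrans", camera.combined);
shader.setUniformi("u_texture", 0);
mesh.render(shader, GL20.GL_TRIANGLES);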
The libGDX wiki page on shaders already contains example code for a simple shader:
https://github.com/libgdx/libgdx/wiki/Shaders
I assume it's basically the same as the createDefaultShader() in P.T.'s answer...
Hope it helps...

LibGdx Texture loading in game and using it for various listeners

Hi, I am developing a game using libGDX. I want to make a Texture object available to the entire application: initialize the texture in one ApplicationListener and use it in another. Can anyone help me with this?
There are two ways I think you can do this. First, you could read the data into a static variable; for an example of this, take a look at the Art class in the metagun demo: Art.java. The second way, which I have not tried yet, is to use the new AssetManager class; there is example use in the AssetManager test. These should help you access your textures more easily. A sketch of the static approach is below.
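A minimal sketch of that static-holder pattern (the field name and path are illustrative, much simplified from the real Art class):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;

public class Art {
    public static Texture player;

    public static void load() { // call once, e.g. in create()
        player = new Texture(Gdx.files.internal("player.png"));
    }

    public static void dispose() { // call from the application's dispose()
        player.dispose();
    }
}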
You don't need two or more ApplicationListeners; that actually only makes things harder.
Use Screens instead (extend Game in your core class instead of implementing ApplicationListener directly).
Either way, you should be able to just pass the textures as arguments. For example, I have a class Assets that contains all the textures, and I pass it to each screen. You can also make them static, as Doran suggested.
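A sketch of that Game/Screen structure (class names and the texture path are illustrative):

import com.badlogic.gdx.Game;
import com.badlogic.gdx.ScreenAdapter;
import com.badlogic.gdx.graphics.Texture;

public class MyGame extends Game {
    public Texture sharedTexture;

    @Override
    public void create() {
        sharedTexture = new Texture("badlogic.jpg"); // loaded once for the whole app
        setScreen(new MenuScreen(this));             // each screen receives the game reference
    }

    @Override
    public void dispose() {
        sharedTexture.dispose();
    }
}

class MenuScreen extends ScreenAdapter {
    private final MyGame game;

    public MenuScreen(MyGame game) {
        this.game = game; // game.sharedTexture is available here
    }
}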