Effectively organizing a LibGDX project with custom shaders

I have been learning to use LibGDX and OpenGL, but I am having trouble figuring out how to implement custom shaders in a reasonable fashion.
The way I have been including shaders in my projects is to create a class that implements the com.badlogic.gdx.graphics.g3d.Shader interface. I then use this class to compile my custom shader program and use it to render meshes.
I create my meshes by making a ModelInstance from a .g3db file, and then I pass them to the shader by calling a ModelBatch instance's render() method.
My confusion starts when I need to apply a different texture to each of my meshes. Right now I am just setting a uniform on my shader before each modelBatch.render() call.
Here are my questions:
Is setting uniforms this way reasonable, even in a large project?
Is this the intended way of implementing the Shader interface, and the correct way of using OpenGL shaders in general (creating one shader and then applying it to all my models)?
How do I apply DRY principles to shaders? For example, suppose I want to use a slightly different shader on one model but keep the same lighting as everything else. Is it best to include if statements in my shader and use a uniform as a flag for these special cases? Otherwise it seems I would have to create a new shader and copy and paste most of my original shader.

Have a look at: http://blog.xoppa.com/creating-a-shader-with-libgdx/
No. Even worse: rendering only actually happens after calling batch.end(), so setting uniforms between render() calls might not produce correct results. Instead, use a Material or Environment; see http://blog.xoppa.com/using-materials-with-libgdx/
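For example, a minimal sketch of attaching a diffuse texture through the material (the model, texture, camera, and environment variables are assumed to already exist):

// Attach the texture to the instance's material instead of setting a
// uniform per render() call; a Shader can then read this attribute
// from renderable.material when rendering.
ModelInstance instance = new ModelInstance(model);
instance.materials.first().set(TextureAttribute.createDiffuse(texture));

modelBatch.begin(camera);
modelBatch.render(instance, environment);
modelBatch.end();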
You would typically create a new shader (recompile) whenever the material changes. You can set a ShaderProvider on the ModelBatch to manage this.
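A minimal sketch of that idea (MyTexturedShader is a hypothetical Shader implementation of your own; BaseShaderProvider is the libGDX base class):

import com.badlogic.gdx.graphics.g3d.ModelBatch;
import com.badlogic.gdx.graphics.g3d.Renderable;
import com.badlogic.gdx.graphics.g3d.Shader;
import com.badlogic.gdx.graphics.g3d.attributes.TextureAttribute;
import com.badlogic.gdx.graphics.g3d.shaders.DefaultShader;
import com.badlogic.gdx.graphics.g3d.utils.BaseShaderProvider;

// Sketch: pick a shader per renderable based on its material.
public class MyShaderProvider extends BaseShaderProvider {
    @Override
    protected Shader createShader(Renderable renderable) {
        if (renderable.material.has(TextureAttribute.Diffuse))
            return new MyTexturedShader(renderable); // hypothetical custom shader
        return new DefaultShader(renderable);        // fall back to the default
    }
}
// Usage: ModelBatch modelBatch = new ModelBatch(new MyShaderProvider());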
It depends on the use case, but typically branching in a shader is not a good idea. Instead you can use preprocessor directives to compile a different version of your shader. Have a look at the default shader and notice the #if directives: https://github.com/libgdx/libgdx/blob/master/gdx/src/com/badlogic/gdx/graphics/g3d/shaders/default.vertex.glsl
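For example, a sketch of compiling a variant of one GLSL source by prepending a #define (the flag name and file paths are illustrative):

// uses com.badlogic.gdx.Gdx, graphics.glutils.ShaderProgram and
// utils.GdxRuntimeException
String prefix = useNormalMap ? "#define normalMapFlag\n" : "";
ShaderProgram program = new ShaderProgram(
        prefix + Gdx.files.internal("shaders/my.vertex.glsl").readString(),
        prefix + Gdx.files.internal("shaders/my.fragment.glsl").readString());
if (!program.isCompiled())
    throw new GdxRuntimeException(program.getLog());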

Related

Custom lighting environments and shaders in Forge Viewer

Some models, especially ones produced through photogrammetry, don't look that good with any of the lighting presets the viewer can offer. They're often quite dark and surfaces are "shiny". What options do I have in modifying the shading? I'd just like to have a uniformly lit model.
I know I can replace the shader material on the model fragments but then I will lose the model textures. As far as I know I can't combine shaders in three.js. Is there a way to introduce my own custom lighting environment?
Unfortunately there's no official way of customizing the environment; see: How to add custom environment map for background in autodesk forge?
I think you could hack your way around this, though: for example, by switching to one of the "simpler" environment presets and finding an angle where the photogrammetry output is lit reasonably well:
viewer.impl.matman().setEnvRotation(angle);
viewer.impl.renderer().setEnvRotation(angle);
While doing that, you could also play with the exposure settings:
viewer.impl.matman().setEnvExposure(exposure);
viewer.impl.renderer().setEnvExposure(exposure);

TweenEngine library vs libGDX Interpolation class

I'm developing a game based on the libGDX framework in Eclipse that will have smooth movements between my entities. For that I have used the TweenEngine (and it is working nicely), but recently I found that libGDX has its own class for interpolations.
I know both are for tweening (i.e. for nice, smooth animations), and I just want to know whether there are technical differences or limitations between these two options, because if both are the same I would opt for the second one, since it is already included in libGDX.
On one side, it is generally better to use built-in tools, because of integration and final application size (not to mention environment setup and project export problems).
On the other hand, please notice that libGDX's Interpolation only really becomes useful when you use it with Scene2D actions. And here, in my opinion, is the problem: you must adopt the stage-and-actors mechanism, which is a good idea in itself but can be almost impossible to retrofit if you have built almost your whole application without it.
So I would recommend:
choose Scene2D actions + Interpolation easing if you are able to implement Scene2D in your project and the easing actions are the only reason you want to use the Tween Engine (see the sketch after this list)
choose the Universal Tween Engine if you want to stay independent of new mechanisms and keep using Sprites etc. in the traditional way
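A minimal sketch of the first option, assuming the Actor is already on a Stage (the target position, duration, and easing curve are illustrative):

// uses com.badlogic.gdx.math.Interpolation and
// com.badlogic.gdx.scenes.scene2d.actions.Actions
actor.addAction(Actions.moveTo(200f, 100f, 0.5f, Interpolation.swingOut));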

How to manipulate pixels of rendered screen or Display Object in Actionscript 3?

My game engine doesn't render into a BitmapData like Flixel/Flashpunk does.
Instead it uses the Display list of Flash player internally.
I need to do some post-processing, like scan lines, wobble, glitch etc. on the final rendered screen (e.g. http://goo.gl/Enwae). I have done render-to-texture in OpenGL and used a pixel shader to manipulate the final rendered scene.
How do I implement the equivalent in ActionScript 3? I saw the references for the Pixel Bender and ShaderFilter classes. Can someone give a simple example or point me to information relevant to the context described here?
I'm pretty sure you'll need to render the screen to BitmapData at some point if you want to manipulate the pixels in that way.
You could try using BitmapData.draw() to render the entire stage to BitmapData, but performance will most likely be miserable unless you're just using it on a fairly static screen (like a menu).
Otherwise you're probably better off with a game engine that blits to a bitmap canvas instead of using the display list.

3d objects in actionscript 3 without plugins

I am fairly new to ActionScript and was wondering whether it is possible to create 3D shapes (cones, spheres, cubes) using ActionScript.
I would prefer not to use a plugin.
The shapes must be 3D, as I need to rotate them.
Here you will find Adobe's documentation for what you are looking for:
http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS7D38179D-58B9-409c-9E5C-257DDECB1A02.html
Is there any specific reason you don't want to use a 3D library like Away3D?
I think Matej's answer does not fully cover the topic, as the link he gave only describes the classic display-list approach to drawing 3D objects. Depending on your needs, drawing with the classic display list can be slow, as it is not GPU accelerated. If you want to utilize your GPU you can use the Stage3D APIs, which do not require any external frameworks. Here's an excellent article for starters.
And even though you can render 3D content using 'raw' Context3D, I'd recommend using a framework like Away3D or Alternativa3D. Both are open source, by the way.

Is interfacing both java.awt.Graphics2D and the HTML5 Canvas Context in GWT feasible?

I have a java library which heavily uses java.awt.Graphics2d.
I want to port my library to html5 canvas by using gwt.
So I'm planning to write:
an interface (or just a class), say common.Graphics2d;
an adapter class, say com.test.awt.Graphics2d, which implements common.Graphics2d on top of java.awt.Graphics2D;
and another adapter class, say com.test.gwt.Graphics2d, which implements common.Graphics2d on top of com.google.gwt.canvas.dom.client.Context2d.
Then I will replace all uses of java.awt.Graphics2D with common.Graphics2d, so that the library will work on both GWT and plain Java.
The problem here is implementing the Graphics2D methods and configuration on top of the canvas Context2d. Is it feasible to implement the same functionality with canvas?
I have done a similar thing. I have an interface which represents a view, and two implementations of that interface: one for Android using its android.graphics classes, and a second implementation in GWT using com.google.gwt.canvas.client.Canvas.
The GWT canvas stuff seems pretty full-featured to me. You can draw shapes, display text and images, move, rotate, scale...
It probably depends on the functions you use (for instance, color gradients may not be easy). For basic drawing functions, the number of methods you really need to implement is very small, as the sketch below illustrates.
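To make that concrete, here is a minimal sketch of such an interface with both adapters; the names and the trimmed method set are illustrative:

// Illustrative common interface, trimmed to a few drawing methods.
public interface CommonGraphics2d {
    void setColor(String cssColor); // e.g. "#ff0000"
    void drawLine(double x1, double y1, double x2, double y2);
    void fillRect(double x, double y, double w, double h);
}

// Desktop adapter delegating to java.awt.Graphics2D.
public class AwtGraphics2d implements CommonGraphics2d {
    private final java.awt.Graphics2D g;
    public AwtGraphics2d(java.awt.Graphics2D g) { this.g = g; }
    public void setColor(String cssColor) {
        g.setColor(java.awt.Color.decode(cssColor));
    }
    public void drawLine(double x1, double y1, double x2, double y2) {
        g.draw(new java.awt.geom.Line2D.Double(x1, y1, x2, y2));
    }
    public void fillRect(double x, double y, double w, double h) {
        g.fill(new java.awt.geom.Rectangle2D.Double(x, y, w, h));
    }
}

// GWT adapter delegating to Context2d (client-side code only).
public class GwtGraphics2d implements CommonGraphics2d {
    private final com.google.gwt.canvas.dom.client.Context2d ctx;
    public GwtGraphics2d(com.google.gwt.canvas.dom.client.Context2d ctx) {
        this.ctx = ctx;
    }
    public void setColor(String cssColor) {
        ctx.setFillStyle(cssColor);
        ctx.setStrokeStyle(cssColor);
    }
    public void drawLine(double x1, double y1, double x2, double y2) {
        ctx.beginPath();
        ctx.moveTo(x1, y1);
        ctx.lineTo(x2, y2);
        ctx.stroke();
    }
    public void fillRect(double x, double y, double w, double h) {
        ctx.fillRect(x, y, w, h);
    }
}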
You can have a look at (and reuse) classes from my jvect-clipboard package, for instance (on SourceForge). Basically, all geometric methods can use the general path drawing command, and you are left with storing colors and the like.
Have a look, for instance, at the implementations for SVG or WMF output; you will see that the code is pretty simple, especially for SVG (although it doesn't cover all possibilities, in particular gradients).