libgdx desktop multiple textures multiple shaders

I'd like to ask for help or tutorials. I've googled with several keywords, but none of them gave me the result I was looking for. I'm trying to render multiple textures in one frame, with multiple shaders. For now I have a Grayscale and a Blur shader working. I've read through most of the tutorials here, and some others, but everything I found was for a single image, not whole sets of textures.
Let's say I've got something like this, where
A is the Screen
B is a Font (BitmapFont)
C to F are the textures
Now i would like to render each texture differently.
B gets a GrayScale shader
C gets a Blur shader (I've got this; it's quite simple, but the code is not reusable)
D gets GrayScale and a Blur (same as above)
E to F stay as they were (I've got this... easy)
Finally A gets a Blur shader again.
I appreciate any help I get.
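
For what it's worth, here is a minimal libGDX sketch of one way to structure this, assuming grayscaleShader and blurShader are already compiled ShaderPrograms and that batch, font, texC, texE and texF are your own assets (all names here are placeholders). The idea is to switch the SpriteBatch's shader per draw, render everything into a FrameBuffer, and finally draw that FrameBuffer's texture to the screen through the blur shader so the whole frame (A) is blurred:

// Sketch only - not drop-in code. sceneFbo is created once, e.g. in create():
//   sceneFbo = new FrameBuffer(Pixmap.Format.RGBA8888,
//           Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);

// 1) Render the scene into the FrameBuffer, switching shaders per draw.
//    (setShader flushes any pending sprites before switching.)
sceneFbo.begin();
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();

batch.setShader(grayscaleShader);   // B: the font, drawn grayscale
font.draw(batch, "Some text", 20, 100);

batch.setShader(blurShader);        // C: blurred texture
batch.draw(texC, 0, 0);

// D (grayscale AND blur) needs its own small intermediate FrameBuffer:
// draw D into it with one shader, then draw that FBO's texture here
// with the other shader. Omitted for brevity.

batch.setShader(null);              // E, F: default SpriteBatch shader
batch.draw(texE, 100, 0);
batch.draw(texF, 200, 0);

batch.end();
sceneFbo.end();

// 2) A: draw the whole scene to the screen through the blur shader.
TextureRegion scene = new TextureRegion(sceneFbo.getColorBufferTexture());
scene.flip(false, true);            // FBO color textures come out y-flipped
batch.begin();
batch.setShader(blurShader);
batch.draw(scene, 0, 0);
batch.setShader(null);
batch.end();

Note that any custom shader used with SpriteBatch has to declare the attribute and uniform names the batch expects (a_position, a_color, a_texCoord0, u_projTrans, u_texture), and that a combined effect like D is easiest to express as its own FrameBuffer pass rather than a single shader switch.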

Related

Non-polygon based 3D-Model in ThreeJS (like in HelloRacer.com)

I am currently working on a project using ThreeJs.
Right now I use a Wavefront OBJ to represent my 3D models, but I would like to use something like IGES or STEP. These formats are not supported by ThreeJS, but I have seen http://helloracer.com/webgl/ and, due to the short loading time, it seems this model is not based on polygons, as you can see from the smooth surface. The model file seems to be .js, so is it the ThreeJS JSON format?
Is such a model created by loading an IGES/STEP file into, for example, Clara.io and exporting it to ThreeJS JSON? I don't have the chance to test this myself, because I do not have an IGES/STEP model right now, but I could have someone create one.
With Wavefront I am not able to create such smooth surfaces without getting a huge loading time and a slow render.
As you can see, the surface and lighting are not nearly as smooth as in the posted example.
Surfaces in the demo you've linked are not smooth, they're still polygonal.
I think smooth shading is what you're looking for. The thing is, a model is usually shaded based on its normals, so the normals we set on the model's vertices are crucial to what we get on screen. Based on your description, your models have a separate normal for every triangle. As in this picture, each vertex of each triangle has the same normal as the triangle itself. Thus, when we interpolate these normals across a triangle, we get the same value for every point of the triangle; the shading calculations yield uniform illumination and the triangle appears flat in the rendered image.
To achieve effect of smooth surface, we need other values for vertex normals, like ones on this image:
If we save this sphere to any format with those normals and try to render it, the interpolated normals will change smoothly across the surface of the triangles that make up the sphere. The shading calculations will then yield smoothly changing illumination, and the surface will appear smooth.
So, to recap, models you try to render need to have "smoothed" vertex normals to appear smooth on a screen.
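For reference, here is a rough sketch of how "smoothed" vertex normals are typically computed (plain Java rather than JavaScript, since the idea is format-agnostic): accumulate each triangle's face normal onto its three vertices, then normalize the sums. In three.js itself, computeVertexNormals() on a geometry does essentially this for you.

// Sketch: average the normals of all faces sharing each vertex.
// positions holds xyz triples, indices holds triangle vertex indices.
public class SmoothNormals {
    public static float[] compute(float[] positions, int[] indices) {
        float[] normals = new float[positions.length];
        for (int i = 0; i < indices.length; i += 3) {
            int a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
            // two edge vectors of the triangle
            float e1x = positions[b] - positions[a];
            float e1y = positions[b + 1] - positions[a + 1];
            float e1z = positions[b + 2] - positions[a + 2];
            float e2x = positions[c] - positions[a];
            float e2y = positions[c + 1] - positions[a + 1];
            float e2z = positions[c + 2] - positions[a + 2];
            // face normal = cross(e1, e2); accumulate onto all three vertices
            float nx = e1y * e2z - e1z * e2y;
            float ny = e1z * e2x - e1x * e2z;
            float nz = e1x * e2y - e1y * e2x;
            for (int v : new int[] { a, b, c }) {
                normals[v] += nx;
                normals[v + 1] += ny;
                normals[v + 2] += nz;
            }
        }
        // normalize the accumulated sums
        for (int i = 0; i < normals.length; i += 3) {
            float len = (float) Math.sqrt(normals[i] * normals[i]
                    + normals[i + 1] * normals[i + 1] + normals[i + 2] * normals[i + 2]);
            if (len > 1e-6f) {
                normals[i] /= len;
                normals[i + 1] /= len;
                normals[i + 2] /= len;
            }
        }
        return normals;
    }
}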
UPD: Judging by your screenshot, your model uses a refractive material. The same idea applies to refraction calculations, since they are based on normal values too.

AS3 Detecting Borders in Bitmap

I need a library which, fed with a bitmap, returns me an array of rectangles with coordinates and dimensions of the different areas found in the image.
I'll give a graphic example:
From this:
I want this:
Or from this:
I want this:
Is there such a library?
If I want to write one on my own, where can I start to inform myself about it?
To my knowledge, the best you'll find are image filters, and color conversion methods, but not the kind of complicated edge detection you're looking for.
Of course, your query goes beyond Canny edge detection and is focused on image region boundaries, but I've found no material on that, even beyond AS3.
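If you do end up writing it yourself, the usual starting point is connected-component labeling: flood-fill each non-background region and track its bounding box as you go. Below is a rough sketch of that idea in plain Java (not AS3; in AS3 you would read the pixels with BitmapData.getPixel() or getVector()), assuming a simple background-color test:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class RegionBoxes {
    // Finds axis-aligned bounding boxes of connected non-background regions.
    // pixels is row-major ARGB data; a pixel counts as background if it equals bg.
    public static List<int[]> boundingBoxes(int[] pixels, int width, int height, int bg) {
        boolean[] visited = new boolean[pixels.length];
        List<int[]> boxes = new ArrayList<>(); // each entry: {minX, minY, maxX, maxY}
        for (int start = 0; start < pixels.length; start++) {
            if (visited[start] || pixels[start] == bg) continue;
            // Flood-fill one region, growing its bounding box as we go.
            int minX = width, minY = height, maxX = 0, maxY = 0;
            ArrayDeque<Integer> stack = new ArrayDeque<>();
            stack.push(start);
            visited[start] = true;
            while (!stack.isEmpty()) {
                int p = stack.pop();
                int x = p % width, y = p / width;
                minX = Math.min(minX, x); maxX = Math.max(maxX, x);
                minY = Math.min(minY, y); maxY = Math.max(maxY, y);
                // visit 4-connected neighbours
                int[][] next = { { x - 1, y }, { x + 1, y }, { x, y - 1 }, { x, y + 1 } };
                for (int[] n : next) {
                    int nx = n[0], ny = n[1];
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    int q = ny * width + nx;
                    if (!visited[q] && pixels[q] != bg) {
                        visited[q] = true;
                        stack.push(q);
                    }
                }
            }
            boxes.add(new int[] { minX, minY, maxX, maxY });
        }
        return boxes;
    }
}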

Drawing shapes versus rendering images?

I am using Pygame 1.9.2a with Python 2.7 to design an experiment. So far I have used Pygame only on an as-needed basis, so I am not familiar with all Pygame classes or concepts (Sprites, for instance, I know nothing about).
I am required to draw many shapes (45-50 at a time) on the screen at different locations to create a crowded display. The shapes vary from displaced Ts and displaced Ls to line intersections (like _| or † or ‡, etc.). I'm sorry that I am not able to post an image of this, because I apparently do not have the reputation of 10 that is necessary to post images.
I also need these shapes in 8 different orientations. I was initially contemplating generating point lists and using these to draw lines. But, for a single shape, I will need four points and I need 50 of these shapes. Again, I'm not sure how to rotate these once drawn. Can I use the Pygame Transform or something? I think they can be used, say on Rects. Or will I have to generate points for the different orientations too, so that when drawn, they come out looking rotated, that is, in the desired orientation?
The alternative I was thinking of was to generate images for the shapes in GIMP or some software like that. But, for any screen, I will have to load around 50 images. Will I have to use Pygame Image and make 50 calls for something like this? Or is there an easier way to handle multiple images?
Also, which method would be a bigger hit to performance? Since it is an experiment, I am worried about timing precision too. I don't know if there is a different way to generate shapes in Pygame. Please help me decide which of these two (or a different method) is better for my purposes.
Thank you!
It is easier to use pygame.draw.rect() or pygame.draw.polygon() (because you don't need to know how to use GIMP or Inkscape :) ), but you have to draw onto a separate pygame.Surface() (to get a bitmap); then you can rotate it, add alpha (to make it transparent), and then put it on the screen.
You can create a function that generates images (using Surface()) for all shapes in all orientations at program start. If you later need better-looking images, you can change that function to load images created in GIMP.
Try each method on your own; that is the best way to check which one suits you.
By the way, you can save the generated images with pygame.image.save() and load them later. You can also keep all elements on one image and use part of the image with Surface.get_clip().

Can I Keep Stage3D From Writing Into the Z-Buffer?

I want to render concealed objects in Stage3D and achieve an effect similar to the one shown in the link below.
Silhouette Effect in Torchlight 2
I already know how to do this theoretically. I have to draw the object twice:
Once with normal settings and
once with a different depth sorting mode where only pixels that are behind rendered geometry are shown. Also, to prevent weird effects later on, these pixels can't be rendered into the depth buffer.
I can set the correct depth sorting mode in Stage3D with Context3DCompareMode.GREATER.
Is it also possible to have Stage3D render pixels into the back buffer, but not the z buffer?
If I can't keep Stage3D from rendering to the depth buffer, the effect will look like this:
Glitchy Silhouette Effect
Yes, you can turn off the depth and stencil buffer. Check the Context3D.configureBackBuffer() method.
If anyone comes across this, there are two things you should be aware of:
1) As Volgogradetzzz said, make sure you have a stencil/depth buffer as part of your back buffer by using Context3D.configureBackBuffer(...).
2) If you need to turn depth writing on or off at different moments, you can set the depthMask argument of this function:
public function setDepthTest(depthMask:Boolean, passCompareMode:String):void
It's a little strange to find this feature in a function of this name, as depth write masking affects the results, not the test itself.

How do I pass barycentric coordinates to an AGAL shader? (AGAL wireframe shader)

I would like to create a wire frame effect using a shader program written in AGAL for Stage3D.
I have been Googling and I understand that I can determine how close a pixel is to the edge of a triangle using barycentric coordinates (BC) passed into the fragment program via the vertex program, then colour it accordingly if it is close enough.
My confusion is over what method I would use to pass this information into the shader program. I have a simple example set up with a cube: 8 vertices and an index buffer to draw triangles between them.
If I were to place the BCs into the vertex buffer, that wouldn't make sense, as they would need to differ depending on which triangle was being rendered; e.g. Vertex1 might need (1,0,0) when rendered with Vertex2 and Vertex3, but another value when rendered with Vertex5 and Vertex6. Perhaps I am not understanding the method completely.
Do I need to duplicate vertex positions and add the additional data into the vertex buffer, essentially making 3 vertices per triangle and tripling my vertex count?
Do I always give the vertex a (1,0,0), (0,1,0) or (0,0,1) value or is this just an example?
Am I overcomplicating this, and is there an easier way to do wireframe rendering with shaders and Stage3D?
Hope that fully explains my problems. Answers are much appreciated, thanks!
It all depends on your geometry; this is in fact a graph vertex coloring problem: you need your geometry's graph to be 3-colorable. A good starting point is the Wikipedia article on graph coloring.
Just for example, let's assume that (1, 0, 0) basis vector is red, (0, 1, 0) is green and (0, 0, 1) is blue. It's obvious that if you build your geometry using the following basic element
then you can avoid duplicating vertices, because such a graph will be 3-colorable (i.e. each edge, and thus each triangle, will have differently colored vertices). You can tile this basic element in any direction, and the graph will remain 3-colorable:
You've stumbled upon the thing that drives me nuts about AGAL/Stage3D. Limitations in the API prevent you from using shared vertices in many circumstances. Wireframe rendering is one example where things break down...but simple flat shading is another example as well.
What you need to do is create three unique vertices for each triangle in your mesh. For each vertex, add an extra attribute (or design your engine to accept vertex normals and reuse those, since you won't likely be shading your wireframe).
Assign the three vertices of each triangle the unit vectors A[1,0,0], B[0,1,0] and C[0,0,1] respectively. This will get you started. Note that the obvious solution (thresholding in the fragment shader and conditionally drawing pixels) produces pretty ugly aliased results. Check out this page for some insight into techniques for anti-aliasing your fragment-program-rendered wireframes:
http://cgg-journal.com/2008-2/06/index.html
As I mentioned, you need to employ a similar technique (unique vertices for each triangle) if you wish to implement flat shading. Since there is no equivalent to GL_FLAT and no way to make the varying registers return an average, the only way to implement flat shading is for each vertex pass for a given triangle to calculate the same lighting...which implies that each vertex needs the same vertex normal.
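
To make the vertex-duplication step concrete, here is a rough sketch in plain Java (the buffer preparation is language-agnostic; in AS3 you would fill a Vector.<Number> the same way before uploading it): it explodes an indexed position list into three unique vertices per triangle and appends a (1,0,0)/(0,1,0)/(0,0,1) barycentric attribute to each, which the vertex program then copies into a varying register for the fragment program to interpolate.

// Sketch: expand indexed positions (xyz) into per-triangle vertices with a
// barycentric attribute appended, giving 6 floats per vertex: x,y,z,bx,by,bz.
public class WireframeBuffer {
    private static final float[][] BARY = {
        { 1f, 0f, 0f }, { 0f, 1f, 0f }, { 0f, 0f, 1f }
    };

    public static float[] explode(float[] positions, int[] indices) {
        float[] out = new float[indices.length * 6];
        int o = 0;
        for (int i = 0; i < indices.length; i++) {
            int p = indices[i] * 3;
            out[o++] = positions[p];
            out[o++] = positions[p + 1];
            out[o++] = positions[p + 2];
            float[] bary = BARY[i % 3];   // (1,0,0), (0,1,0), (0,0,1) in turn
            out[o++] = bary[0];
            out[o++] = bary[1];
            out[o++] = bary[2];
        }
        return out;
    }
}

In the fragment program, the smallest component of the interpolated attribute tells you how close the pixel is to a triangle edge; the article linked above covers turning that distance into an anti-aliased line.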