Best way to draw text in 3d space - libgdx

I have been developing a game in libGDX which uses cards with a number of stats on them. Originally I built the game in 2D, but decided to try it out in 3D and have had some success so far. However, I cannot work out the best way to draw the text stats onto the cards. I have had no real luck with any of the approaches I've tried: I used an FBO to render the text over the texture and remapped the texture, but this didn't seem to work, and I tried drawing the text and then applying the same transformations from the cards to the text, but this too achieved nothing.
What would be the best approach to going about this, and where can I get guidance on how to execute this?

What you're looking for is signed distance field rendering.
The LibGDX Wiki has an article on it here.
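To see why distance fields work for text on 3D surfaces, here is a minimal pure-Python sketch of the core idea (the wiki article covers the actual libGDX shader setup). The function names and the smoothing value are illustrative assumptions, not libGDX API:

```python
# Minimal sketch of the signed-distance-field idea: a fragment's alpha is
# derived from a sampled distance value by smoothstepping around the 0.5
# contour, which stays crisp at any scale. Names and thresholds here are
# illustrative, not from the libGDX wiki.

def smoothstep(edge0, edge1, x):
    """Standard GLSL-style smoothstep."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def sdf_alpha(distance_sample, smoothing=0.1):
    """Map a distance-field texture sample (0..1, 0.5 = glyph edge)
    to an alpha value. A larger `smoothing` gives softer edges."""
    return smoothstep(0.5 - smoothing, 0.5 + smoothing, distance_sample)

# Samples well inside the glyph are opaque, well outside are transparent:
print(sdf_alpha(0.9))             # 1.0
print(sdf_alpha(0.1))             # 0.0
print(round(sdf_alpha(0.5), 3))   # 0.5 (exactly on the glyph edge)
```

In libGDX this logic lives in the fragment shader the wiki article provides; the Python above is just the math.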

Related

how to set up Decals in libgdx

I am trying to fake a shadow in 3D space. ShadowDirectionalLight does not suit my purposes, so I am going to run a texture-based animation.
I was thinking of using Decals for my animations. Can someone provide an example of how to use them? How creation and the render cycle should be structured is puzzling to me. I found this post, but it seems out of date.

how to achieve sprite reflection in libgdx

I am currently working on an open-world game like Pokemon using libgdx. I am currently stuck on an effect that I really want done before moving on to other features of my game. Do you have any idea how to achieve this reflection effect? What's the concept behind it?
https://gamedev.stackexchange.com/questions/102940/libgdx-sprite-reflection
For a basic reflection, you can draw the texture flipped vertically by simply scaling its height by -1, then redraw it an appropriate distance below the player.
You should also add a clipping rectangle around the water's edge so the reflection only draws where the water is. For performance, it would also be good to only do this when the player is near water.
I can't give you actual code as I don't know anything about your code, but once you've had a go at handling reflection as above, come back here and ask any more specific questions that you may have.
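To make the placement concrete, here is a framework-agnostic sketch of the mirroring math described above. The coordinate convention (y-up, y = sprite bottom), the function names, and the near-water distance are all illustrative assumptions, not libGDX calls:

```python
# Sketch of the reflection placement math: mirror the sprite about the
# water line, draw it flipped, and cheaply cull when far from water.
# All names and the coordinate convention (y-up) are assumptions.

def reflect_y(y, water_line):
    """Mirror a vertical coordinate about the water line."""
    return 2 * water_line - y

def reflection_rect(x, y, w, h, water_line):
    """Given a sprite rect (y-up, y = bottom edge), return the rect where
    its vertically flipped copy should be drawn below the water line.
    Draw it with height scaled by -1 (or a pre-flipped texture)."""
    top = reflect_y(y, water_line)   # sprite bottom maps to reflection top
    return (x, top - h, w, h)        # reflection extends downward from there

def should_draw_reflection(sprite_bottom, water_line, max_dist=32):
    """Cheap culling: only bother when the sprite is near the water."""
    return abs(sprite_bottom - water_line) <= max_dist

# A 16x32 sprite standing exactly on the water line at y=100 reflects
# into the rect directly beneath it:
print(reflection_rect(10, 100, 16, 32, 100))  # (10, 68, 16, 32)
```

The clipping rectangle from the answer would then be intersected with this rect before drawing.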
This question is too broad and opinion-based, and hence is highly likely to be voted down.

Drawing shapes versus rendering images?

I am using Pygame 1.9.2a with Python 2.7 for designing an experiment and have been so far using Pygame only on a need basis and am not familiar with all Pygame classes or concepts (Sprites, for instance, I have no idea about).
I am required to draw many (45-50 at one time) shapes on the screen at different locations to create a crowded display. The shapes vary from displaced Ts and displaced Ls to line intersections (like _| or † or ‡ etc.). I'm sorry that I am not able to post an image of this, because I apparently do not have the reputation of 10 that is necessary to post images.
I also need these shapes in 8 different orientations. I was initially contemplating generating point lists and using these to draw lines. But, for a single shape, I will need four points and I need 50 of these shapes. Again, I'm not sure how to rotate these once drawn. Can I use the Pygame Transform or something? I think they can be used, say on Rects. Or will I have to generate points for the different orientations too, so that when drawn, they come out looking rotated, that is, in the desired orientation?
The alternative I was thinking of was to generate images for the shapes in GIMP or some software like that. But, for any screen, I will have to load around 50 images. Will I have to use Pygame Image and make 50 calls for something like this? Or is there an easier way to handle multiple images?
Also, which method would be a bigger hit to performance? Since, it is an experiment, I am worried about timing precision too. I don't know if there is a different way to generate shapes in Pygame. Please help me decide which of these two (or a different method) is better to use for my purposes.
Thank you!
It is easier to use pygame.draw.rect() or pygame.draw.polygon() (because you don't need to know how to use GIMP or Inkscape :) ), but you have to draw onto another pygame.Surface() (to get a bitmap); then you can rotate it, add alpha (to make it transparent), and then put it on the screen.
You can create a function that generates images (using Surface()) with all shapes in different orientations at program start. If you later need better-looking images, you can change that function to load images created in GIMP.
Try every method on your own - that is the best way to check which one works for you.
By the way: you can save generated images with pygame.image.save() and then load them later. You can also keep all elements on one image and use part of the image via Surface.get_clip().
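The "generate point lists and rotate them" idea from the question can also be done with plain math, independent of any Surface. This is a pure-Python sketch; the shape coordinates and helper names are illustrative, and in Pygame you would feed the resulting points to pygame.draw.lines() or pygame.draw.polygon() (or skip this entirely and use pygame.transform.rotate() on a pre-drawn Surface):

```python
import math

# Sketch of rotating a shape's point list about a pivot, so one base
# shape definition yields all 8 orientations. Illustrative only; in
# Pygame the rotated points would go to pygame.draw.lines()/polygon().

def rotate_points(points, angle_deg, pivot=(0, 0)):
    """Rotate a list of (x, y) points about `pivot` by `angle_deg`."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    px, py = pivot
    out = []
    for x, y in points:
        dx, dy = x - px, y - py
        out.append((px + dx * cos_a - dy * sin_a,
                    py + dx * sin_a + dy * cos_a))
    return out

# A "T" shape as two segments (crossbar, then stem), in all 8 orientations:
t_shape = [(-10, 0), (10, 0), (0, 0), (0, 20)]
orientations = [rotate_points(t_shape, k * 45) for k in range(8)]
```

Since all 50 shapes share a handful of templates, precomputing the 8 rotated point lists once at startup keeps the per-frame cost to the draw calls themselves, which matters for timing precision.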

As3 3D - Ink drop spreading in water

I'm researching the following problem:
Let's say I have a glass of some fluid (water for example). The fluid is completely transparent and I don't have to render it at all.
However a ink drop is dropped in the glass and it's spreading in the water.
The whole thing should be 3D and user should be able to rotate the camera and see the spreading in real time.
I have researched a couple of ways to approach this problem, but it turned out that most of them are dead ends.
The only approach that has had some success was to use an enormous number of particles which form the skeleton of the "ink spread". The physics simulation of the spreading process is far from perfect, but let's say that's not a problem.
The problem is the rendering part.
As far as I know, I won't be able to speed up the z-sort process much by using Flash's GPU acceleration, because uploading those particles to GPU memory every frame is quite slow.
Can somebody confirm that?
The other thing that I'm struggling with is the final render. I tried a whole bunch of filters in combination with "post-process" techniques to create smooth lines and gradients between the dots, but the result is terrible. If somebody knows of an article that could help me with that, I'd be very grateful.
Overall, if there is another viable approach to the problem, please let me know.
Thanks in advance.
Cheers.
You should probably look at Computational Fluid Dynamics in general to get a basic understanding. That should make it easy to play with ActionScript implementations like Eugene's Fluid Solver, either in 2D or 3D, tweaking the fluid properties to get the look and feel you're after.
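To give a feel for what such a solver does internally, here is a deliberately tiny sketch of the density-diffusion step at the heart of Stam-style fluid solvers (the family Eugene's Fluid Solver belongs to). This is a simplified explicit 1D version for illustration only; real solvers use an implicit (Gauss-Seidel) 2D/3D diffusion step plus advection, and the function name is an assumption:

```python
# Toy sketch of density diffusion: each cell of an "ink density" field
# moves toward the average of its neighbours, so a concentrated drop
# spreads out over repeated steps. Simplified explicit 1D form; real
# Stam-style solvers use implicit 2D/3D diffusion plus advection.

def diffuse_1d(density, rate):
    """One explicit diffusion step. `rate` in (0, 0.5] keeps it stable.
    Boundary cells are left fixed for simplicity."""
    n = len(density)
    out = density[:]
    for i in range(1, n - 1):
        out[i] = density[i] + rate * (
            density[i - 1] + density[i + 1] - 2 * density[i])
    return out

# An ink "drop" in the middle spreads with each step:
field = [0.0, 0.0, 1.0, 0.0, 0.0]
print(diffuse_1d(field, 0.25))   # [0.0, 0.25, 0.5, 0.25, 0.0]
for _ in range(3):
    field = diffuse_1d(field, 0.25)
```

Rendering the resulting density grid directly (rather than z-sorting millions of particles) is what sidesteps the GPU upload problem described in the question.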

Any known solutions for Image Differencing in Actionscript?

I'm working with a few programming buddies to create an AS interface to the Kinect, and one problem we're running into is image differencing. We need to be able to throw out image data that doesn't change from image to image, so we can pinpoint only things that are moving (i.e. people).
Anyone have any experience with this or a direction we can go?
I would consider creating a Pixel Bender shader to find the difference and also do any other math or tracking. Pixel Bender gets its own thread outside of the normal Flash Player, so you can get some more horsepower for your setup. Pixel Bender shaders can be applied to bitmaps, vectors, or video, so I think it is perfect for this project. Good luck!
http://www.adobe.com/devnet/flash/articles/pixel_bender_basics.html
And here is a full collection of shaders, including difference.
Take a look at the threshold method on BitmapData.
It'll allow you to do this stuff. Their docs have a simple example so check that out.
It might be a long shot, and this is just me rambling, but in sound theory (strange that I'm relating it to image cancellation, but here goes...) the concept of cancellation is when you take a wave sample and add its inverse. It's how you make acapellas from instrumentals + originals, or instrumentals from acapellas + originals.
Perhaps you can invert the new image and "normalize" the two to get your offsets? I.e. the first image is 'black on white' and the second image is 'white on black', and then detect the differences in the bitmap data. I know I did a similar method of finding collisions with AS3 a few years back. This would, in theory, cancel out any 'repeating' pixels and leave you with just the changes from the last frame.
With BitmapData your values are going to be from 0 to 255, so if you can implement a cancellation (because a lot of the image is going to stay the same from frame to frame), then you can easily find the changes from the previous frame.
Just a thought! Whatever your solution is, it's going to have to be quick in order to beat the Flash runtime's slow speed. Your Kinect read FPS will be greatly hindered by bad code.
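Both answers boil down to the same per-pixel operation: subtract the previous frame from the current one and keep only pixels whose change exceeds a threshold. Here is a framework-agnostic sketch of that operation on 0..255 greyscale values; the names and threshold are illustrative assumptions, not the BitmapData API:

```python
# Sketch of per-pixel frame differencing: pixels that "cancel out"
# between frames become 0, pixels that changed become 255. Operates on
# flat 0..255 greyscale lists; names are illustrative, not BitmapData.

def frame_difference(prev, curr, threshold=30):
    """Return a mask: 255 where a pixel changed by more than
    `threshold` between frames, 0 where it roughly cancelled out."""
    return [255 if abs(c - p) > threshold else 0
            for p, c in zip(prev, curr)]

prev = [10, 10, 200, 200]
curr = [12, 10, 50, 200]   # only the third pixel really moved
mask = frame_difference(prev, curr)
print(mask)  # [0, 0, 255, 0]
```

The threshold absorbs sensor noise (the small 10 → 12 change above), which is exactly what you need for a noisy Kinect feed; in production this loop is what you'd push into Pixel Bender or BitmapData.threshold().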
Here is some frame-differencing code I wrote a while back. It uses BitmapData: http://actionsnippet.com/?p=2820
I also used this to capture moving colors in a video feed: http://actionsnippet.com/?p=2736