Displaying 100 Floating Cubes Using DirectX OR OpenGL - language-agnostic

I'd like to display 100 floating cubes using DirectX or OpenGL.
I'm looking for either some sample source code or a description of the technique. I have trouble getting more than one cube to display correctly.
I've combed the net for a good series of tutorials; they cover how to draw individual 3D primitives, but what I can't find is information on how to render large numbers of them - cubes, spheres, pyramids, and so forth.

You say you have trouble getting more than one cube to display correctly... so I am not sure whether you have got even one displaying or not.
Basically... put your code for drawing a cube in one function, then just call that function 100 times.
void DrawCube()
{
    // code to draw the cube
}

void DisplayCubes()
{
    for (int i = 0; i < 10; ++i)
    {
        for (int j = 0; j < 10; ++j)
        {
            glPushMatrix();
            // Alter these values depending on the size of your cubes.
            // This call makes sure your cubes aren't drawn on top of each other.
            glTranslatef(i * 5.0f, j * 5.0f, 0.0f);
            DrawCube();
            glPopMatrix();
        }
    }
}
That is the basic outline for how you could go about doing this. If you want something more efficient take a look into Display Lists sometime once you have the basics figured out :)
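For example, a minimal sketch of the display-list version might look like this (legacy OpenGL; the InitCubeList name is just for illustration):

GLuint cubeList;

void InitCubeList() // call once, after the GL context exists
{
    cubeList = glGenLists(1);
    glNewList(cubeList, GL_COMPILE);
    DrawCube();    // recorded into the list rather than drawn immediately
    glEndList();
}

void DisplayCubes()
{
    for (int i = 0; i < 10; ++i)
    {
        for (int j = 0; j < 10; ++j)
        {
            glPushMatrix();
            glTranslatef(i * 5.0f, j * 5.0f, 0.0f);
            glCallList(cubeList); // replay the compiled cube geometry
            glPopMatrix();
        }
    }
}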

Just use glTranslatef (or the DirectX equivalent) to draw each cube with the same code, only moving the relative point where you draw it. There may be a better way to do it, though - I'm fairly new to OpenGL. Be sure to set your viewpoint so you can see them all.

Yeah, if you were being efficient you'd throw everything into the same vertex buffer, but I don't think drawing 100 cubes will push any GPU produced in the past 5 years, so you should be fine following the suggestions above.
Write a basic pass-through vertex shader and shade however you desire in the pixel shader. Either pass in a world matrix and do the translation in the vertex shader, or just compute the world-space vertex positions on the CPU side (do this if your cubes are going to stay fixed).
You could get fancy and do geometry instancing etc, but just get the basics going first.
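For reference, a rough sketch of what the instancing setup can look like in modern OpenGL - assuming a cube VAO with an index buffer is already built and your vertex shader reads a per-instance offset from attribute location 3 (cubeVAO and offsets are placeholder names):

std::vector<float> offsets; // x, y, z per cube, filled elsewhere (100 cubes)

GLuint offsetVBO;
glGenBuffers(1, &offsetVBO);
glBindBuffer(GL_ARRAY_BUFFER, offsetVBO);
glBufferData(GL_ARRAY_BUFFER, offsets.size() * sizeof(float), offsets.data(), GL_STATIC_DRAW);

glBindVertexArray(cubeVAO);
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glVertexAttribDivisor(3, 1); // advance the offset once per instance, not per vertex

// 36 indices = 12 triangles per cube, 100 instances, one draw call.
glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr, 100);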

This answer isn't just for the OP's question; it also addresses the more general problem of displaying many cubes.
Drawing many cube meshes
This is probably the most naive way of doing things. We draw the same cube mesh with many different transformation matrices:
prepare();
for (int i = 0; i < numCubes; i++) {
    setTransformation(matrices[i]);
    drawCube();
}
/* and so on... */
The nice thing is that this is SUPER easy to implement, and it's not too slow (at least for 100 cubes). I'd recommend this as a starter.
The problem
Ok, but let's say you want to make a Minecraft clone, or at least some sort of project that requires thousands, if not tens of thousands, of cubes to be rendered. That's where performance starts to go down. The problem is that each drawCube() issues a separate draw call to the GPU, and the overhead of those draw calls adds up until it becomes unbearable.
However, we can fix this. The solution is batching, a way to do only one draw call for all of the cubes.
Batching
We join all the (transformed) cubes into one single mesh. This means that we will have to deal with only one draw call, instead of thousands. Here is some pseudocode for doing so:
vector<float> transformedVerts;
for (int i = 0; i < numCubes; i++) {
    for (int j = 0; j < numVertsPerCube; j++) {
        vert = verts[j]; /* the j-th vertex of the untransformed cube mesh */
        /* We transform the position by the cube's transformation matrix. */
        vec3 vposition = matrices[i] * vert.position;
        transformedVerts.push(vposition);
        /* We don't need to transform the colors, so we just push them directly. */
        transformedVerts.push(vert.color);
    }
}
...
sendDataToBuffer(transformedVerts);
If the cubes are moving, or one of the cubes is added or deleted, you'll have to recalculate transformedVerts and then resend it to the buffer - but this is minor.
Then at the end we draw the entire lumped-together mesh in one draw call, instead of many.
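A minimal sketch of that single draw call, assuming modern OpenGL with a VAO bound and transformedVerts laid out as interleaved position (3 floats) plus color (3 floats) per vertex:

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, transformedVerts.size() * sizeof(float),
             transformedVerts.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(0); // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1); // color
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));

// One draw call for the whole batch: 6 floats per vertex.
glDrawArrays(GL_TRIANGLES, 0, transformedVerts.size() / 6);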

Related

box2d(libgdx) stacked bodies not stable

When I make a lot of bodies (rectangles) stacked on each other, they don't stay stable. Even if restitution is set to 0, they are bouncy and fall off each other. I tried setting the density to a very low value, but that didn't change anything.
Is there a way to fix that?
shape = new PolygonShape();
shape.setAsBox(0.1f, 0.1f, new Vector2(0, 0), 0);

bDef = new BodyDef();
bDef.type = BodyDef.BodyType.DynamicBody;
bDef.position.set(0, 0);

fDef = new FixtureDef();
fDef.shape = shape;
fDef.density = 0.001f;
fDef.friction = 0.5f;
fDef.restitution = 0.0f;

for (int i = 1; i < 50; i++) {
    bDef.position.set(0, i * 0.201f);
    body1 = world.createBody(bDef);
    fixture = body1.createFixture(fDef);
}
So long as this is the problem that I think it is, then yes, it's fixable, albeit with additional work. In the short term, though, no.
When I run the "Vertical Stack" test of the Testbed from Box2D 2.3.2, I see a bunch of boxes drop down on top of each other in a vertical stack. At first they stack, then they wobble, then they tip over. This is how Box2D is designed to work. I didn't like that behavior so I changed it.
I suspect this is essentially the same problem you are seeing. The "fix" I used to get stable stacking behavior involves changing the internal Box2D library code.
Algorithmically speaking, I changed position resolution from applying impulses one collision-manifold point at a time to applying them in order of penetration depth, or simultaneously when the depths are almost equal. Code-wise, I made this change to Erin's b2ContactSolver::SolvePositionConstraints and b2ContactSolver::SolveTOIPositionConstraints methods (in Erin's b2ContactSolver.cpp).
Code for this altered algorithm is in the dev branch of my fork of Box2D (see the SolvePositionConstraint function in the ContactSolver.cpp file). Unfortunately, I don't have a diff that would just apply to those methods as my code changes extend beyond those. Additionally, the changes that I've made modify the Box2D library public interface so even if you can get it to build for your platform, you'll still need to adapt your code to the new interface I have.

LibGDX guidance - sprite tracing 2D infinite random bezier curve

I've been able to apply a smooth animation to my sprite and control it using the accelerometer. My sprite is fixed to move left and right along the x-axis.
From here, I need to figure out how to create a vertical, infinite, wavy line for the sprite to attempt to trace. The aim of my game is for the user to control the sprite's left/right movement with the accelerometer in an attempt to trace the never-ending wavy line as best they can, while the sprite and camera both move vertically to simulate "moving along the line." It would be ideal if the line were randomly generated.
I've researched about splines, planes, bezier curves etc, but I can't find anything that seems to relate close enough to what I'm trying to achieve.
I'm just seeking some guidance as to what methods I could possibly use to achieve this. Any ideas?
You could use a sum of 4 or 5 sine waves, each with a different amplitude, wavelength, and phase difference. All three of those parameters could be random.
The resulting curve would be very smooth (since it is primarily sinusoidal) yet look random (its period would be the LCM of the 4 or 5 random wavelengths, which is a huge number).
So the curve won't repeat for a long time, yet it is not hard on memory. As for computational cost, you can always tune it by changing the number of sine terms based on your FPS.
It should look like this.
It's really easy to implement too. (even I could generate above image.. haha)
Hope this helps. Maths rocks. :D
(The basic idea here is a finite Fourier series which I think should be ideal for your use case)
Edit:
You can create each term like this and assign random values to all terms.
public class SineTerm {
    private float amplitude;
    private float waveLength;
    private float phaseDifference;

    public SineTerm(float amplitude, float waveLength, float phaseDifference) {
        this.amplitude = amplitude;
        this.waveLength = waveLength;
        this.phaseDifference = phaseDifference;
    }

    public float evaluate(float x) {
        return amplitude * (float) Math.sin(2 * Math.PI * x / waveLength + phaseDifference);
    }
}
Now create an array of SineTerms and sum the values returned by evaluate(x), using one coordinate of the sprite as the input. Use the output as the sprite's other coordinate, and you should be good to go.
The real trick would be in tuning those random numbers.
Good luck.

Actionscript 3: translating coordinates from object's 3D space to another's?

I feel like this has probably been asked/answered here, and if so, I apologize for the bandwidth, but I don't see any explanation.
How does one translate from one object's coordinate space to another in Flash AS3? I can take a point in an object and translate it to global coordinates using local3DToGlobal() and then to another object's local using globalToLocal3D() -- but is there a direct way?
So, for example, I want one object to be able to say to another: 'move your top-left corner to my top-left corner', even though the two objects are in different z-spaces, rotated 3-dimensionally, and so on.
I assume the answer lies in Matrix3D manipulations —
matrix multiplication? transformVector()? deltaTransformVector()?
I have been poring over the API but would really appreciate a concrete example.
Thanks!
One approach would be getRelativeMatrix3D(), called on the transform property of a display object, as in: transform.getRelativeMatrix3D(root).position.
Returns a Matrix3D object, which can transform the space of a specified display object in relation to the current display object's space. You can use the getRelativeMatrix3D() method to move one three-dimensional display object relative to another three-dimensional display object.
From Adobe's Performing complex 3D transformations, there is an example using Matrix3D objects for reordering display, in which faces of a box are reordered to ensure that layering of 3D display objects corresponds to the relative depths after rotations have been applied:
var faces:Array; // assumed to be populated earlier with objects that each hold a 'child' display object

for (var i:uint = 0; i < 6; i++)
{
    faces[i].z = faces[i].child.transform.getRelativeMatrix3D(root).position.z;
    this.removeChild(faces[i].child);
}
faces.sortOn("z", Array.NUMERIC | Array.DESCENDING);
for (i = 0; i < 6; i++)
{
    this.addChild(faces[i].child);
}

AS3 - Find the most abundant pixel colour in BitmapData

Basically, I want to get the most common ARGB value that appears in a BitmapData. That is, I want to know which exact pixel colour is the most abundant in the image. I tried going through every pixel of the image and counting whenever a colour that already exists comes up, but that's way too slow, even with relatively small images. Does anybody know a faster method for this, maybe using the BitmapData.histogram() function or something?
Ideally the process should be near instantaneous for images around at least 1000x1000 pixels.
Run through bitmapData.getVector() with a Dictionary holding counts, then walk the Dictionary's key-value pairs and take the key with the highest count.
var v:Vector.<uint> = yourBitmapData.getVector(yourBitmapData.rect);
var d:Dictionary = new Dictionary();
for (var i:int = v.length - 1; i >= 0; i--) {
    if (d[v[i]]) d[v[i]]++; else d[v[i]] = 1;
}
var maxkey:uint = v[0];
var maxval:int = 0;
// Dictionary compares keys with strict equality, so iterate with an untyped
// variable to keep the uint keys intact.
for (var k:* in d) {
    if (d[k] > maxval) {
        maxval = d[k];
        maxkey = k;
    }
}
return maxkey; // the most common ARGB value
I haven't worked with shaders at all, but I think you might be able to get faster results with them - looping through pixels is faster at the shader level.
I'd try creating essentially the same loop in a shader, painting the entire resulting bitmap with the most-used colour, and sampling that (unless you can get a variable directly out of the shader).
That should be significantly faster.

Huge area texture?

This is a very general question that's not tied to a specific language. I have this array of ints:
int[100][100] map;
This contains just tile numbers and is rendered as 256x256 tiles, so it's basically a tile map or whatever it should be called. The thing is, I want to be able to write anything to the map, anywhere, and have it stay there. For example, I want to paint stuff onto the ground such as grass, flowers, stones and other things to make the terrain more varied, without having to render each of these sprites a huge number of times every frame. But giving each tile its own texture to write to would be terribly memory-consuming, as that would be 256x256x100x100 = 655,360,000 pixels to store. Wouldn't that be like gigabytes of data or something!?
Does anyone know a good general solution for what I'm trying to do without using too much memory?
If anyone wonders, I'm using C++ with HGE (Haaf's Game Engine).
EDIT: I've chosen to limit the amount of stuff on screen so that it can render. But look here so maybe you'll understand what I'm trying to achieve:
Link to image because I'm not allowed to use image tags :(
If it's just tile based then you only store one instance of each unique tile and each unique "overlay" (flower, rock, etc.). You reference it by id or memory location as you have been doing.
You'd simply store a location (tile number and location on tile) and a reference to an overlay to "paint" it without consuming a lot of memory.
Also, I'm sure you know this but you only render what's on screen. So memory usage is pretty much constant once everything is loaded up.
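A rough sketch of the render-only-what's-on-screen idea, with placeholder camera and tile variables (cameraX/cameraY in pixels, renderTile as a stand-in for your drawing routine):

// 256x256-pixel tiles; clamp the visible range to the 100x100 map.
int firstX = cameraX / 256;
int firstY = cameraY / 256;
int lastX  = firstX + screenWidth  / 256 + 1;
int lastY  = firstY + screenHeight / 256 + 1;
if (firstX < 0) firstX = 0;
if (firstY < 0) firstY = 0;
if (lastX > 99) lastX = 99;
if (lastY > 99) lastY = 99;

for (int y = firstY; y <= lastY; ++y) {
    for (int x = firstX; x <= lastX; ++x) {
        renderTile(map[y][x]); // only the tiles that intersect the view
    }
}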
I'm not exactly sure what you are trying to do, but you should probably keep the tiles in separate layers. Say that for each "tile" you have a list of textures, ordered bottom-up, that you blend together; that way you only store texture indexes per tile.
Instead of storing just the tile number, store the overlay number and offset position also.
struct map_zone {
    int tile;              // tile number
    int overlay;           // overlay number (flower, rock, etc.); zero in most cases
    int overlay_offset_x;  // draw overlay at X pixels across from the left
    int overlay_offset_y;  // draw overlay at Y pixels down from the top
};

map_zone map[100][100];
And for rendering:
int x, y;
for (y = 0; y < 100; ++y) {
    for (x = 0; x < 100; ++x) {
        render_tile(map[y][x].tile);
        render_overlay(map[y][x].overlay, map[y][x].overlay_offset_x, map[y][x].overlay_offset_y);
    }
}
It's arguably faster to store the overlays and offsets in separate arrays from the tiles, but having each area on the map self-contained like this is easier to understand.
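For comparison, the separate-arrays layout would just split the same data:

int tiles[100][100];              // tile number per cell
int overlays[100][100];           // overlay number per cell, zero = none
int overlay_offsets_x[100][100];  // overlay X offset in pixels
int overlay_offsets_y[100][100];  // overlay Y offset in pixels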
You have to use alpha maps (texture splatting).
You paint a single 256x256 texture that maps your whole terrain, and for each channel (r, g, b, a) you tile the terrain with another texture:
r = sand.jpg
g = grass.jpg
b = water.jpg
a = soil.jpg
In the shader, you check the color of the alpha map at each point and blend those textures accordingly.
I'm doing exactly that at the moment, and that's how I did it.
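To make the blend concrete, here's a rough CPU-side sketch of what such a splatting shader computes per pixel (the Color struct and arguments are just illustrative):

struct Color { float r, g, b, a; };

// mapTexel comes from the 256x256 alpha map; the other four are texels of the
// tiling detail textures sampled at the same terrain position.
Color splat(Color mapTexel, Color sand, Color grass, Color water, Color soil)
{
    Color out;
    out.r = mapTexel.r * sand.r + mapTexel.g * grass.r + mapTexel.b * water.r + mapTexel.a * soil.r;
    out.g = mapTexel.r * sand.g + mapTexel.g * grass.g + mapTexel.b * water.g + mapTexel.a * soil.g;
    out.b = mapTexel.r * sand.b + mapTexel.g * grass.b + mapTexel.b * water.b + mapTexel.a * soil.b;
    out.a = 1.0f;
    return out;
}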