Can Stage3D draw objects behind all others, irrespective of actual distance? - actionscript-3

I am making a 3D space game in Stage3D and would like a field of stars drawn behind ALL other objects. I think the problem I'm encountering is that the distances involved are very high. If I place the stars genuinely much farther away than the other objects, I have to scale them up to such a degree that they no longer render correctly: above a certain size the faces seem to flicker. This also happens on my planet meshes when they are scaled to their necessary sizes (12000-100000 units across).
I am rendering the stars on flat plane textures, pointed to face the camera. So long as they are not scaled up too much, they render fine, although obviously in front of other objects that are further away.
I have tried all manner of depthTestModes (Context3DCompareMode.LESS, Context3DCompareMode.GREATER and all the others) combined with including and excluding the mesh in the z-buffer, to get the stars to render only if NO other pixels are present where the star would appear, without luck.
Is anyone aware of how I could achieve this - or, even better, know why, above a certain size, meshes do not render properly? Is there an arbitrary upper limit that I'm not aware of?

I don't know Stage3D, and I'm talking in OpenGL language here, but the usual way to draw a background/skybox is to draw it close to the camera rather than far away, and to draw it first. Then either disable depth buffer writing while the background is being drawn (if the background itself does not need depth testing) or clear the depth buffer after the background is drawn and before the regular scene is.
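Translated into Stage3D calls, a minimal sketch of that idea might look like the following; drawStarfield() and drawScene() are hypothetical helpers standing in for your actual drawTriangles() calls, and context3D is an already-configured Context3D:

context3D.clear(0, 0, 0, 1);                        // clears color, depth and stencil
// 1) Background pass: drawn close to the camera, depth test off, no depth writes.
context3D.setDepthTest(false, Context3DCompareMode.ALWAYS);
drawStarfield();
// 2) Regular scene: normal depth writing and LESS test.
context3D.setDepthTest(true, Context3DCompareMode.LESS);
drawScene();
context3D.present();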
Your flickering of planets may be due to lack of depth buffer resolution; if this is so, you must choose between
drawing the objects closer to the camera,
moving the camera frustum near plane farther out or far plane closer (this will increase depth buffer resolution across the entire scene), or
rendering the scene in multiple passes over mutually exclusive depth ranges (partitioning the depth range).

You could use Starling for this; it can be layered with a 3D engine, as shown in these interoperation guides:
http://www.adobe.com/devnet/flashplayer/articles/away3d-starling-interoperation.html
http://www.flare3d.com/blog/2012/07/24/flare3d-2-5-starling-integration/

You have to look at how projection and vertex shader output is done.
The vertex shader output has four components: x,y,z,w.
From that, pixel coordinates are computed:
x' = x/w
y' = y/w
z' = z/w
z' is what ends up in the z buffer.
So by simply putting z = w*value at the end of your vertex shader you can output any constant depth. Just use value = 0.999 and there you are! Your regular LESS depth test will work.
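As a sketch in AGAL (assembled with the AGALMiniAssembler from Adobe's utility classes), and assuming the MVP matrix sits in vc0-vc3, the UVs in va1, and the constant 0.999 has been uploaded to vc4.x, the vertex program could end like this:

var vertexAgal:String =
    "m44 vt0, va0, vc0 \n" +        // transform position into clip space (vc0..vc3 = MVP, an assumption)
    "mul vt0.z, vt0.w, vc4.x \n" +  // force z = w * 0.999, so z/w becomes 0.999 after the perspective divide
    "mov op, vt0 \n" +              // write the final clip-space position
    "mov v0, va1";                  // pass the UVs through to the fragment shader
var assembler:AGALMiniAssembler = new AGALMiniAssembler();
var vertexProgram:ByteArray = assembler.assemble(Context3DProgramType.VERTEX, vertexAgal);
// The constant would be uploaded each frame, e.g.:
// context3D.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 4, Vector.<Number>([0.999, 0, 0, 0]));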

Related

When drawing on a canvas, should calculations be done relative to cartesian plane coordinates?

I've been seeing a lot of canvas-graphics-related javascript projects and libraries lately and was wondering how they handle the coordinate system. When drawing shapes and vectors on the canvas, are the points calculated based on a cartesian plane and converted for the canvas, or is everything calculated directly for the canvas?
I tried playing around with drawing a circle by graphing all its tangent lines until the line intersections start to resemble a curve and found the difference between the cartesian planes I'm familiar with and the coordinate system used by web browsers very confusing. The function for a circle, for example, "y^2 + x^2 = r^2" would need to be translated to "(y-1)^2 + (x-1)^2 = r^2" to be seen on the canvas. And then negative slopes were positive slopes on the canvas and everything would be upside down :/ .
The easiest way for me to think about it was to pretend the origin of a cartesian plane was in the center of the canvas and adjust my coordinates accordingly. On a 500 x 500 canvas, the center would be 250,250, so if I ended up with a point at 50,50, it would be drawn at (250 + 50, 250 - 50) = (300, 200).
I get the feeling I'm over-complicating this, but I can't wrap my mind around the clean way to work with a canvas.
Current practice can perhaps be exemplified by a quote from David Flanagan's book "JavaScript: The Definitive Guide", which says that

"Certain canvas operations and attributes (such as extracting raw pixel values and setting shadow offsets) always use this default coordinate system"

(the default coordinate system is that of the canvas). And it continues with

"In most canvas operations, when you specify the coordinates of a point, it is taken to be a point in the current coordinate system [that's, for example, the cartesian plane you mentioned, @Walkerneo], not in the default coordinate system."
Why is using a "current coordinate system" more useful than using the canvas coordinate system directly?
First and foremost, I believe, because it is independent of the canvas itself, which is tied to the screen (more specifically, the default coordinate system dimensions are expressed in pixels). Using for instance a Cartesian (orthogonal) coordinate system makes it easy for you (well, for me too, obviously :-D ) to specify your drawing in terms of what you want to draw, leaving the task of how to draw it to the transformations offered by the Canvas API. In particular, you can express dimensions in the natural units of your drawing, and perform a scale and a translation to fit (or not, as the case may be...) your drawing to the canvas.
Furthermore, using transformations is often a clearer way to build your drawing, since it allows you to get "farther" from the underlying coordinate system and specify your drawing in terms of higher-level operations ('scale', 'rotate', 'translate' and the more general 'transform'). The above-mentioned book gives a very nice example of the power of this approach, drawing a Koch (fractal) snowflake in many fewer lines than would be possible (if at all) using canvas coordinates.
The HTML5 canvas, like most graphics systems, uses a coordinate system where (0,0) is in the top left and the x-axis and y-axis go from left to right and top down respectively. This makes sense if you think about how you would create a graphics system with nothing but a block of memory: the simplest way to map coordinates (x,y) to a memory slot is to take x+w*y, where w is the width of a line.
This means that the canvas coordinate system differs from what you use in mathematics in two ways: (0,0) is not the center like it usually is, and y grows down rather than up. The last part is what makes your figures upside down.
You can set transformations on the canvas that make the coordinate system more like what you are used to:
var ctx = document.getElementById('canvas').getContext('2d');
ctx.translate(250,250); // Move (0,0) to (250, 250)
ctx.scale(1,-1); // Make y grow up rather than down

Can I Keep Stage3D From Writing Into the Z-Buffer?

I want to render concealed objects in Stage3D and achieve an effect similar to the one shown in the link.
Silhouette Effect in Torchlight 2
I already know how to do this theoretically. I have to draw the object twice:
Once with normal settings and
once with a different depth sorting mode where only pixels that are behind rendered geometry are shown. Also, to prevent weird effects later on, these pixels can't be rendered into the depth buffer.
I can set the correct depth sorting mode in Stage3D with Context3DCompareMode.GREATER.
Is it also possible to have Stage3D render pixels into the back buffer, but not the z buffer?
If I can't keep Stage3D from rendering to the depth buffer, the effect will look like this:
Glitchy Silhouette Effect
Yes, you can turn the depth and stencil buffer on or off. Check the Context3D.configureBackBuffer method.
If anyone comes across this, there are two things you should be aware of:
1) As Volgogradetzzz said, make sure you have a stencil/depth buffer as part of your back buffer, using Context3D.configureBackBuffer(...)
2) If you need to turn depth pixel writing on or off at different moments, you can set the depthMask argument in this function:
public function setDepthTest(depthMask:Boolean, passCompareMode:String):void
A little strange to find this feature in a function of this name, as depth write masking affects the results, not the test itself.
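Putting both points together, a rough sketch of the two-pass silhouette render could look like the following; drawOccluders(), drawHero() and drawHeroSilhouette() are hypothetical helpers, and the silhouette pass is assumed to use its own flat-color shader:

// Back buffer needs the depth/stencil attachment (enableDepthAndStencil = true).
context3D.configureBackBuffer(stage.stageWidth, stage.stageHeight, 2, true);

// Pass 1: normal rendering - depth writes on, standard LESS test.
context3D.setDepthTest(true, Context3DCompareMode.LESS);
drawOccluders();
drawHero();

// Pass 2: silhouette - only pixels hidden behind already-rendered geometry pass the
// GREATER test, and depthMask = false keeps them out of the z-buffer.
context3D.setDepthTest(false, Context3DCompareMode.GREATER);
drawHeroSilhouette();

context3D.present();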

How do I pass barycentric coordinates to an AGAL shader? (AGAL wireframe shader)

I would like to create a wire frame effect using a shader program written in AGAL for Stage3D.
I have been Googling and I understand that I can determine how close a pixel is to the edge of a triangle using barycentric coordinates (BC) passed into the fragment program via the vertex program, then colour it accordingly if it is close enough.
My confusion is in what method I would use to pass this information into the shader program. I have a simple example set up with a cube: 8 vertices and an index buffer used to draw triangles between them.
If I were to place the BCs into the vertex buffer then that wouldn't make sense, as they would need to be different depending on which triangle was being rendered; e.g. Vertex1 might need (1,0,0) when rendered with Vertex2 and Vertex3, but another value when rendered with Vertex5 and Vertex6. Perhaps I am not understanding the method completely.
Do I need to duplicate vertex positions and add the additional data into the vertex buffer, essentially making 3 vertices per triangle and tripling my vertex count?
Do I always give the vertex a (1,0,0), (0,1,0) or (0,0,1) value or is this just an example?
Am I over-complicating this, and is there an easier way to do wire-frame rendering with shaders and Stage3D?
Hope that fully explains my problems. Answers are much appreciated, thanks!
It all depends on your geometry; this is in fact a graph vertex coloring problem: you need your geometry graph to be 3-colorable. A good starting point is the Wikipedia article.
Just for example, let's assume that the (1, 0, 0) basis vector is red, (0, 1, 0) is green and (0, 0, 1) is blue. If you build your geometry from a basic element whose three vertices each receive a different color, you can avoid duplicating vertices, because such a graph will be 3-colorable (i.e. each edge, and thus each triangle, will have differently colored vertices). You can tile this basic element in any direction and the graph will remain 3-colorable.
You've stumbled upon the thing that drives me nuts about AGAL/Stage3D. Limitations in the API prevent you from using shared vertices in many circumstances. Wireframe rendering is one example where things break down...but simple flat shading is another example as well.
What you need to do is create three unique vertices for each triangle in your mesh. For each vertex, add an extra parameter (or design your engine to accept vertex normals and reuse those, since you won't likely be shading your wireframe).
Assign the three vertices of each triangle the unit vectors A[1,0,0], B[0,1,0] and C[0,0,1] respectively. This will get you started. Note that the obvious solution (thresholding in the fragment shader and conditionally drawing pixels) produces pretty ugly aliased results. Check out this page for some insight into techniques to anti-alias your fragment-program-rendered wireframes:
http://cgg-journal.com/2008-2/06/index.html
As I mentioned, you need to employ a similar technique (unique vertices for each triangle) if you wish to implement flat shading. Since there is no equivalent to GL_FLAT and no way to make the varying registers return an average, the only way to implement flat shading is for the vertex shader to calculate the same lighting for each vertex of a given triangle...which implies that each vertex needs the same vertex normal.
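A rough sketch of both halves follows: expanding an indexed mesh into per-triangle vertices carrying barycentric coordinates, and a deliberately simple (aliased) AGAL fragment program that keeps only pixels near an edge. The names positions and indices and the fc0/fc1 constants are assumptions of this example, and the vertex program is assumed to copy va1 into v1:

// Expand indexed triangles into 3 unique vertices each, tagged with
// barycentric coordinates (1,0,0), (0,1,0), (0,0,1).
var bary:Array = [[1, 0, 0], [0, 1, 0], [0, 0, 1]];
var expanded:Vector.<Number> = new Vector.<Number>();
for (var i:int = 0; i < indices.length; i += 3) {
    for (var c:int = 0; c < 3; c++) {
        var vi:int = indices[i + c] * 3;                 // 3 floats (x, y, z) per source vertex
        expanded.push(positions[vi], positions[vi + 1], positions[vi + 2]);
        expanded.push(bary[c][0], bary[c][1], bary[c][2]);
    }
}
// Vertex format is now: va0 = position (float3), va1 = barycentric (float3).

// Fragment program: fc0.x = edge thickness, fc1 = wire color.
var fragmentAgal:String =
    "min ft0.x, v1.x, v1.y \n" +
    "min ft0.x, ft0.x, v1.z \n" +  // ft0.x = smallest barycentric = distance to the nearest edge
    "sub ft0.x, fc0.x, ft0.x \n" + // negative for interior pixels (farther than fc0.x from any edge)
    "kil ft0.x \n" +               // kil discards the fragment when its operand is < 0
    "mov oc, fc1";                 // surviving (edge) pixels get the wire color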

What does a pixel shader actually do?

I'm relatively new to graphics programming, and I've just been reading some books and scanning through tutorials, so please pardon me if this seems a silly question.
I've got the basics of DirectX 11 up and running, and now I'm looking to have some fun. So naturally I've been reading heavily into the shader pipeline, and I'm already fascinated. The idea of writing a simple, minuscule piece of code that has to be efficient enough to run maybe tens of thousands of times every 60th of a second without wasting resources has me in a hurry to grasp the concept before continuing on and possibly making a mess of things. What I'm having trouble with is grasping what the pixel shader is actually doing.
Vertex shaders are simple to understand: you organize the vertices of an object in uniform data structures that relate information about it, like position and texture coordinates, and then pass each vertex into the shader to be converted from 3D to 2D by way of transformation matrices. As long as I understand it, I can work out how to code it.
But I don't get pixel shaders. What I do get is that the output of the vertex shader is the input of the pixel shader. So wouldn't that just be handing the pixel shader the 2D coordinates of the polygon's vertices? What I've come to understand is that the pixel shader receives individual pixels and performs calculations on them to determine things like color and lighting. But if that's true, then which pixels? The whole screen, or just the pixels that lie within the transformed 2D polygon?
Or have I misunderstood something entirely?
Vertex shaders are simple to understand: you organize the vertices of an object in uniform data structures that relate information about it, like position and texture coordinates, and then pass each vertex into the shader to be converted from 3D to 2D by way of transformation matrices.
After this, primitives (triangles or multiples of triangles) are generated and clipped (in Direct3D 11, it is actually a little more complicated thanks to transform feedback, geometry shaders, tessellation, you name it... but whatever it is, in the end you have triangles).
Now, fragments are "generated", i.e. a single triangle is divided into little cells with a regular grid, the output attributes of the vertex shader are interpolated according to each grid cell's relative position to the three vertices, and a "task" is set up for each little grid cell. Each of these cells is a "fragment" (if multisampling is used, several fragments may be present for one pixel¹).
Finally, a little program is executed over all these "tasks", this is the pixel shader (or fragment shader).
It takes the interpolated vertex attributes, and optionally reads uniform values or textures, and produces one output (it can optionally produce several outputs, too). This output of the pixel shader refers to one fragment, and is then either discarded (for example due to depth test) or blended with the frame buffer.
Usually, many instances of the same pixel shader run in parallel at the same time. This is because it is more silicon efficient and power efficient to have a GPU run like this. One pixel shader does not know about any of the others running at the same time.
Pixel shaders commonly run in a group (also called a "warp" or "wavefront"), and all pixel shaders within one group execute the exact same instruction at the same time (on different data). Again, this allows building chips that are more powerful, use less energy, and are cheaper.
¹ Note that in this case, the fragment shader still only runs once for every "cell". Multisampling only decides whether or not it stores the calculated value in one of the higher resolution extra "slots" (subsamples) according to the (higher resolution) depth test. For most pixels on the screen, all subsamples are the same. However, on edges, only some subsamples will be filled by close-up geometry whereas some will keep their value from further away "background" geometry. When the multisampled image is resolved (that is, converted to a "normal" image), the graphics card generates a "mix" (in the easiest case, simply the arithmetic mean) of these subsamples, which results in everything except edges coming out the same as usual, and edges being "smoothed".
Your understanding of pixel shaders is correct in that it "receives individual pixels and performs calculations on them to determine things like color and lighting."
The pixels the shader receives are the individual ones calculated during the rasterization of the transformed 2d polygon (the triangle to be specific). So whereas the vertex shader processes the 3 points of the triangle, the pixel shader processes the pixels, one at a time, that "fill in" the triangle.

How can I turn an image file of a game map into boundaries in my program?

I have an image of a basic game map. Think of it as just horizontal and vertical walls which can't be crossed. How can I go from a png image of the walls to something in code easily?
The hard way is pretty straightforward... it's just that if I change the image map, I would like an easy way to translate that into code.
Thanks!
edit: The map is not tile-based. It's top down 2D.
I dabble in video games, and I personally would not want the hassle of checking the boundaries of pictures on the map. Wouldn't it be cleaner if these walls were objects that just happened to have an image property (or something like it)? The image would display, but the object would have well defined coordinates and a function could decide whether an object was hit every time the player moved.
I need more details.
Is your game tile-based? Is it 3D?
If it's tile-based, you could downsample your image to the tile resolution and then do a 1:1 conversion, with each pixel representing a tile.
I suggest writing a script that takes each individual pixel and determines whether it represents part of a wall or not (i.e. black or white). Then code your game so that walls are built from individual little blocks, represented by the pixels. Shouldn't be TOO hard...
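In ActionScript 3, for example, that script could be a small BitmapData pass along these lines; wallBitmap, tileSize and the use of pure black for walls are assumptions of the sketch:

// Sketch: treat every black pixel of the loaded map image as one solid block.
var walls:Vector.<Rectangle> = new Vector.<Rectangle>();
for (var y:int = 0; y < wallBitmap.height; y++) {
    for (var x:int = 0; x < wallBitmap.width; x++) {
        if (wallBitmap.getPixel(x, y) == 0x000000) {   // black = wall
            walls.push(new Rectangle(x * tileSize, y * tileSize, tileSize, tileSize));
        }
    }
}
// Collision later becomes e.g. walls[i].intersects(playerRect).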
If you don't need to precompute anything from the map info, you can just check at runtime using a getPixel(x, y)-like function.
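With an AS3 BitmapData, for instance, that runtime check might look like this; mapBitmap, the world-to-pixel scale and the black-means-wall convention are assumptions here:

// Sketch: true if the map pixel under a world position is a wall (black).
function isBlocked(worldX:Number, worldY:Number):Boolean {
    var px:int = int(worldX / worldToPixelScale);   // assumed conversion from world units to map pixels
    var py:int = int(worldY / worldToPixelScale);
    return mapBitmap.getPixel(px, py) == 0x000000;  // BitmapData.getPixel returns the 24-bit RGB value
}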
Well, I can see a few cases, each with a different "best solution", depending on where your graphics come from:
Your graphics are tiled, so you can easily "recognize" a block because it uses the same graphics as other blocks, and all you would have to do is write a program that, given a list of "blocking tiles" and a map, produces a "collision map" by comparing each tile with the tiles in the "blocking list".
Your graphics are just arbitrary graphics (e.g. a picture, or some CG artwork) and you don't expect the pixels of one block to match the pixels of another block. You could still try to apply an "edge detection" algorithm to your picture, but my guess is that you should rather split your picture into a BG layer and an FG layer, so that the FG layer has a pre-defined color (or alpha = 0), and test pixels against that color to decide whether things are blocking or not.
You don't have many blocking shapes, but they are usually complex (polygons, ellipses) and would be inefficient to render as a bitmap of the world or to pack as "tile attributes". This is typically the case for point-and-click adventure games, for instance. In that case, you should probably create paths that match your boundaries with a vector drawing program and dig for a library that does polygon intersection or bezier collisions.
Good luck and have fun.