Non-polygon based 3D-Model in ThreeJS (like in HelloRacer.com) - json

I am currently working on a project using Three.js.
Right now I use a Wavefront OBJ to represent my 3D models, but I would like to use something like IGES or STEP. These formats are not supported by Three.js, but I have seen http://helloracer.com/webgl/ and, given the short loading time, it seems like that model is not based on polygons, as the smooth surface suggests. The model file ends in .js, so is it the Three.js JSON format?
Is such a model created by loading an IGES/STEP file into, for example, Clara.io and exporting it to Three.js JSON? I have no way to test this myself because I do not have an IGES/STEP model right now, but I could have someone create one.
With Wavefront OBJ I am not able to create such smooth surfaces without a huge loading time and slow rendering.
As you can see, the surface and lighting are not nearly as smooth as in the posted example.

Surfaces in the demo you've linked are not smooth; they're still polygonal.
I think smooth shading is what you're looking for. A model is usually shaded based on its normals, so the normals we assign to the vertices of a model are crucial to what we get on screen. Based on your description, your models have a separate normal for every triangle. As in this picture, each vertex of each triangle has the same normal as the triangle itself. When we interpolate these normals across a triangle, we get the same value at every point of the triangle; shading calculations then yield uniform illumination, and the triangle appears flat in the rendered image.
To achieve the effect of a smooth surface, we need different vertex normals, like the ones in this image:
If we save this sphere to any format with those normals and render it, the interpolated normals will change smoothly across the surface of the triangles comprising the sphere. Shading calculations will then yield smoothly changing illumination, and the surface will appear smooth.
So, to recap: the models you try to render need "smoothed" vertex normals to appear smooth on screen.
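In Three.js terms, if the OBJ arrives with flat per-face normals, you can rebuild smoothed ones after loading. A minimal sketch, assuming a recent Three.js where OBJLoader and BufferGeometryUtils ship in the examples folder (in module builds you import them from there instead of using the THREE namespace):

```javascript
// Sketch: drop the flat per-face normals, weld duplicate vertices, then let
// Three.js average the face normals into vertex normals ("smoothed" normals).
const loader = new THREE.OBJLoader();
loader.load('model.obj', (object) => {
  object.traverse((child) => {
    if (child.isMesh) {
      child.geometry.deleteAttribute('normal');     // discard per-face normals
      child.geometry = THREE.BufferGeometryUtils.mergeVertices(child.geometry);
      child.geometry.computeVertexNormals();        // averaged vertex normals
    }
  });
  scene.add(object);
});
```

The mergeVertices step matters: OBJLoader produces non-indexed geometry, and computeVertexNormals can only average across faces that actually share vertices.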
UPD: Judging by your screenshot, your model uses a refractive material. The same idea applies to refraction calculations, since they're based on normal values too.

Related

Drawing over terrain with depth test?

I'm trying to render geometric shapes over uneven terrain (loaded from a heightmap; the shape geometry is also generated based on averaged heights across the heightmap, but it does not fit the terrain exactly). I have the following problem: sometimes the terrain shows through the shape, as shown in the picture.
I need to draw both the terrain and the shapes with depth testing enabled so they do not obstruct other objects in the scene. Could someone suggest a solution to make sure the shapes are always rendered on top? Lifting them up is not really feasible... I need to replace the colors of actual pixels on the terrain, and doing this in the pixel shader seems too expensive.
Thanks in advance.
I had a similar problem, and this is how I solved it:

1. First render the terrain and keep the depth buffer. Do not render any objects in this pass.
2. Render a solid bounding box of the shape you want to put on the terrain. Make sure the bounding box covers the whole height range the shape covers; an over-conservative estimate is to use the global minimum and maximum elevation of the entire terrain.
3. In the pixel shader, read the depth buffer and reconstruct the world-space position.
4. Check whether this position is inside your shape. In your case, you can check whether its xy (xz) projection is within the given distance from the center of your circle.
5. Transform this position into your shape's local coordinate system and compute the desired color.
6. Alpha-blend over the render target.
This method results in shapes perfectly aligned with the terrain surface, produces no artifacts, and works with any terrain. The possible drawback is that it requires deferred-style shading, and I do not know whether you can do that in your setup. Still, I hope this is helpful.
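To make this concrete, here is a minimal sketch of steps 3-6 as a Three.js-style ShaderMaterial applied to the bounding-box mesh from step 2. All names are illustrative, and terrainDepthTexture is assumed to come from a terrain-only pre-pass (e.g. a render target with a depth texture attached):

```javascript
// Sketch, not a drop-in implementation; names are illustrative.
const decalMaterial = new THREE.ShaderMaterial({
  transparent: true,   // step 6: alpha-blend over the render target
  depthTest: false,    // we test against the terrain depth ourselves
  depthWrite: false,
  uniforms: {
    uDepth:       { value: terrainDepthTexture },  // depth from the terrain-only pass
    uInvViewProj: { value: new THREE.Matrix4() },  // inverse(projection * view)
    uCenter:      { value: new THREE.Vector3() },  // circle center in world space
    uRadius:      { value: 5.0 },
    uResolution:  { value: new THREE.Vector2(800, 600) },
  },
  fragmentShader: `
    uniform sampler2D uDepth;
    uniform mat4  uInvViewProj;
    uniform vec3  uCenter;
    uniform float uRadius;
    uniform vec2  uResolution;

    void main() {
      // Step 3: read the terrain depth and reconstruct the world-space position.
      vec2  uv    = gl_FragCoord.xy / uResolution;
      float depth = texture2D(uDepth, uv).r;
      vec4  ndc   = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);
      vec4  world = uInvViewProj * ndc;
      world /= world.w;

      // Step 4: keep only fragments whose xz projection lies inside the circle.
      if (distance(world.xz, uCenter.xz) > uRadius) discard;

      // Step 5: compute the desired color for this terrain point.
      gl_FragColor = vec4(1.0, 0.2, 0.2, 0.5);
    }
  `,
});
```

Each frame you would refresh the matrix with something like decalMaterial.uniforms.uInvViewProj.value.copy(camera.projectionMatrix).multiply(camera.matrixWorldInverse).invert().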

HTML5 Canvas not preserving the draw order

I am trying to draw a rough outline of a building in canvas.
I'm achieving the effect below by creating a series of squares for each side, plus the top 'roof', and then drawing them in sequence, basically following the painter's algorithm.
The screenshot on the left shows how it should look. That version paints each square separately.
To improve performance I want as few stroke() and fill() calls as possible, so I queue up all the moveTo() and lineTo() calls and paint them all in one big go.
Tests have shown that (at least for lines) this gives a massive performance improvement, and I've verified it myself.
Unfortunately, as you can see from the right screenshot, when I paint the buildings only once at the end, the layering basically gets destroyed. It paints things in a seemingly random order.
Is the canvas supposed to work this way? Why doesn't it draw everything in the order I told it to, like in the first screenshot?
Does anyone know a good workaround for this behaviour?
If you're sending all the moveTos and lineTos as one big batch, the canvas treats them as sub-paths of one large shape: a single fill() or stroke() is applied to the whole path at once, not to each sub-path in the order you added it, so the painter's-algorithm layering between your squares is lost.
There's a minor performance penalty for running multiple draw operations, but avoiding it is usually not worth making your code harder to understand and debug.
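The usual compromise is one path per face, painted back to front, batching only faces that share a fill style and never overlap. A minimal sketch (ctx and the faces array are illustrative):

```javascript
// Assumes ctx is a CanvasRenderingContext2D and faces is an array of
// {points: [{x, y}, ...], color} already sorted back to front.
function drawBuilding(ctx, faces) {
  for (const face of faces) {
    ctx.beginPath();                    // a fresh path per face...
    ctx.moveTo(face.points[0].x, face.points[0].y);
    for (let i = 1; i < face.points.length; i++) {
      ctx.lineTo(face.points[i].x, face.points[i].y);
    }
    ctx.closePath();
    ctx.fillStyle = face.color;
    ctx.fill();                         // ...so each fill overdraws the last
    ctx.stroke();
  }
}
```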

PV3D DAE Import - Random normals flipped, random scale?

I am developing a PV3D application that imports DAE models exported by Blender's Collada Exporter plugin (1.4). When I build them in Blender, I use exact dimensions (the end-game is to have scale models in PV3D).
Using the same scale of dimensions, some models appear extremely tiny in PV3D, while others are the appropriate size. Many appear with rotations bearing no resemblance to how they were constructed in Blender. Also, I have to flip the normals in Blender to get the models to display properly in PV3D, and even then the occasional triangle will appear in PV3D with its normal still reversed. I can't discern a pattern in which models appear tiny, and the same goes for the flipped normals - there doesn't seem to be a pattern to it.
Has anyone had any experience with a problem like this? I can't even think of how to tackle it - the symptoms seem to point to something with the way PV3D handles the import, or how Blender handles the export, and the 3D math is way beyond me.
I had a similar problem with the normals. I found that after applying scale/rotation to ObData in Blender (I had to make the mesh single-user first), the normals faced in the direction corresponding to what I was seeing in Papervision.
This should fix your scaling issues too.
I finally found the source of the problem a while back, and just remembered I should update this post.
It turns out the normals weren't being flipped. My models contained relatively acute angles and sharp, flat projections (think a low-grade ramp). When viewed from certain angles, the z-sorting (which sorts by a poly's center by default) was ordering the faces incorrectly, because the acute angles and flat, sharp projections caused one poly's center to be farther away than the center of another poly that was actually behind it.
The effect was consistent from all my view angles because the camera was restricted to a single, fixed orbit around the models, so the same thing happened in reverse from the other side of the model, making it appear like the normals were flipped.
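For anyone hitting the same symptom, here is the mechanism in a toy sketch of the centroid-based painter's sort being described (data structures are illustrative):

```javascript
// Assume the camera looks down the +z axis, so larger z means farther away.
// With long, thin, steeply inclined faces, the centroid of a face in front
// can end up farther from the camera than the centroid of a face behind it,
// so the sort paints them in the wrong order even though every normal is fine.
function faceCentroidZ(face) {
  let z = 0;
  for (const v of face.vertices) z += v.z;
  return z / face.vertices.length;
}

function painterSort(faces) {
  // Paint farthest first: descending centroid depth.
  return faces.slice().sort((a, b) => faceCentroidZ(b) - faceCentroidZ(a));
}
```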
As for the scale issues - I never figured that out. I moved to Sketchup for my model creation, and that seemed to solve it.

How to dynamically render a space background in ActionScript 3?

I'm creating a space game in ActionScript 3 / Flex (Flash). The world is infinitely big because there are no maps. For this to work I need to dynamically (programmatically) render the background, which has to look like open space.
To make the world feel real and to make certain places look different from others, I must be able to add filters such as colour differences and maybe even a misty kind of transformation; these would be added and changed randomly.
The player is able to "scroll" the "map" by flying to the sides of the screen, so only a certain part of the world is visible at once, but the player is able to go anywhere. The scrolling works by moving all objects except the player in the opposite direction, making it look like it was the player that moved. The background also needs to move, but it has to be different on newly discovered terrain (created dynamically).
Now my question is how I would do something like this: what kinds of things do I need to use, and how do I implement them? Performance also needs to be taken into account, as many more objects will be in the game.
You should only have views for objects that are within the visible area. You might want to use a quad tree for that.
The background should maybe be composed of a set of tiles, that you can repeat more or less randomly (do you really need a background, actually? wouldn't having some particles be enough?). Use the same technique here you use for the objects.
So in the end you wind up with a model for the objects and for the tiles or particles (which you would generate at the beginning). This way you only hold a few floats per entity. (You can gain additional performance by not calculating the positions of objects that are far away; the quad tree should help you with that, but I don't think it will be necessary.) If an object with a view leaves the stage, free the view, and use the quad tree to check whether new objects appear.
If you use a lot of objects/particles, consider using an object pool. If objects only move, and are not rotated/scaled, consider using DisplayObject::cacheAsBitmap.
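To make the "different on newly discovered terrain" part concrete, here is a minimal sketch of the tile idea in JavaScript-flavoured pseudocode (the AS3 version is structurally identical; TILE, makeTileView, and destroyView are placeholders for your own rendering code):

```javascript
// Infinite, chunked background: tiles are keyed by integer grid coordinates,
// and a deterministic seed per tile means revisited terrain looks the same.
const TILE = 512;
const live = new Map(); // "gx,gy" -> view

function hashSeed(gx, gy) {
  // Any deterministic hash works; it keeps revisited tiles identical.
  return ((gx * 73856093) ^ (gy * 19349663)) >>> 0;
}

function updateBackground(cameraX, cameraY, viewW, viewH) {
  const x0 = Math.floor(cameraX / TILE), x1 = Math.floor((cameraX + viewW) / TILE);
  const y0 = Math.floor(cameraY / TILE), y1 = Math.floor((cameraY + viewH) / TILE);
  const needed = new Set();
  for (let gx = x0; gx <= x1; gx++) {
    for (let gy = y0; gy <= y1; gy++) {
      const key = gx + ',' + gy;
      needed.add(key);
      if (!live.has(key)) {
        // Draw stars / colour filters / mist from the seed.
        live.set(key, makeTileView(gx, gy, hashSeed(gx, gy)));
      }
    }
  }
  for (const [key, view] of live) {
    if (!needed.has(key)) { destroyView(view); live.delete(key); } // free off-screen tiles
  }
}
```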

How can I turn an image file of a game map into boundaries in my program?

I have an image of a basic game map. Think of it as just horizontal and vertical walls which can't be crossed. How can I go from a png image of the walls to something in code easily?
The hard way is pretty straightforward... it's just that if I change the image map, I would like an easy way to translate that into code.
Thanks!
edit: The map is not tile-based. It's top down 2D.
I dabble in video games, and I personally would not want the hassle of checking the boundaries of pictures on the map. Wouldn't it be cleaner if these walls were objects that just happened to have an image property (or something like it)? The image would display, but the object would have well defined coordinates and a function could decide whether an object was hit every time the player moved.
I need more details.
Is your game tile-based? Is it 3D?
If it's tile-based, you could downsample your image to the tile resolution and then do a 1:1 conversion, with each pixel representing a tile.
I suggest writing a script that takes each individual pixel and determines whether it represents part of a wall or not (i.e. black or white). Then code your game so that walls are built from individual little blocks, represented by the pixels. Shouldn't be TOO hard...
If you don't need to precompute anything from the map info, you can just check at runtime in your game logic using a getPixel(x, y)-style function.
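Both of the pixel-based suggestions above reduce to reading the bitmap once (or per query). A minimal browser-JavaScript sketch, assuming near-black pixels mark walls (the image element and the threshold are illustrative):

```javascript
// Read the map PNG into an offscreen canvas and build a boolean wall grid.
function buildWallGrid(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  const data = ctx.getImageData(0, 0, img.width, img.height).data;
  const walls = new Uint8Array(img.width * img.height);
  for (let i = 0; i < walls.length; i++) {
    const r = data[i * 4], g = data[i * 4 + 1], b = data[i * 4 + 2];
    walls[i] = (r + g + b) / 3 < 128 ? 1 : 0; // dark pixel => wall
  }
  return { walls, width: img.width, height: img.height };
}

// Runtime check, the getPixel-style approach from the answer above.
function isBlocked(grid, x, y) {
  const px = Math.floor(x), py = Math.floor(y);
  if (px < 0 || py < 0 || px >= grid.width || py >= grid.height) return true;
  return grid.walls[py * grid.width + px] === 1;
}
```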
Well, I can see three cases, each with a different "best solution", depending on where your graphics come from:
Your graphics are tiled, so you can easily "recognize" a block because it uses the same graphics as other blocks. All you would have to do is write a program that, given a list of "blocking tiles" and a map, produces a "collision map" by comparing each tile against the blocking list.
Your graphics are just graphics (e.g. a picture, or some CG artwork) and you don't expect the pixels of one block to match the pixels of another block. You could still try to apply an edge-detection algorithm to your picture, but my guess is that you should instead split your picture into a background layer and a foreground layer, so that the foreground layer has a predefined color (or alpha = 0), and test pixels against that color to decide whether they block or not.
You don't have many blocking shapes, but they are usually complex (polygons, ellipses) and would be inefficient to render into a bitmap of the world or to pack as "tile attributes". This is typically the case for point-and-click adventure games, for instance. In that case, you probably want to create paths that match your boundaries in a vector drawing program and dig up a library that does polygon intersection or Bezier collisions; a minimal sketch of the core test follows below.
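For that third case, the core routine such a library provides is a point-in-polygon test; here is a minimal ray-casting sketch:

```javascript
// Ray casting: count how many polygon edges a horizontal ray from p crosses.
// poly is an array of {x, y} vertices in order (clockwise or counter-clockwise).
function pointInPolygon(p, poly) {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    const crosses = (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside; // odd number of crossings => inside
  }
  return inside;
}

// Example: a wall drawn as a quad.
const wall = [{x: 0, y: 0}, {x: 10, y: 0}, {x: 10, y: 2}, {x: 0, y: 2}];
console.log(pointInPolygon({x: 5, y: 1}, wall)); // true
```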
Good luck and have fun.