PV3D DAE Import - Random normals flipped, random scale? - actionscript-3

I am developing a PV3D application that imports DAE models exported by Blender's Collada Exporter plugin (1.4). When I build them in Blender, I use exact dimensions (the end-game is to have scale models in PV3D).
Using the same scale of dimensions, some models appear in PV3D extremely tiny, while others are the appropriate size. Many appear with rotations bearing no resemblance to how they were constructed in Blender. Also, I have to flip the normals in Blender in order to get them to display properly in PV3D, and even then, occasional triangles will appear in PV3D with normals still reversed. I can't seem to discern a pattern amongst which models appear tiny. Same goes for the randomly flipping normals; there doesn't seem to be a pattern to it.
Has anyone had any experience with a problem like this? I can't even think of how to tackle it - the symptoms seem to point to something with the way PV3D handles the import, or how Blender handles the export, and the 3D math is way beyond me.

I had a similar problem with the normals. I found that after applying scale/rotation to objdata (I had to make it single-user first), the normals faced in the direction that corresponded to what I was seeing in Papervision.
This should fix your scaling issues too.

I finally found the source of the problem a while back, and just remembered I should update this post.
Turns out, the normals weren't being flipped. My models contained relatively acute angles and sharp, flat projections (think a low-grade ramp). When viewed from certain angles, the z-sorting (which sorts by object center by default) was incorrectly ordering the faces, because those acute angles and flat projections caused one poly's center to be farther away than the center of another poly behind it.
The effect was consistent from all my view angles because the camera was restricted to a single, fixed orbit around the models, so the same thing happened in reverse from the other side of the model, making it appear like the normals were flipped.
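To see why center-based sorting misfires on shapes like that, here is a tiny illustrative Python sketch (not PV3D code; the face depths and the far-to-near convention are invented for the demo):

# Two faces, with z as distance from the camera (larger = farther).
# The ramp is long and shallow, so its *center* is far away even though
# its near edge is closer to the camera than the wall is.
ramp = {"name": "ramp", "z": [0.0, 0.0, 12.0]}  # near edge z=0, far tip z=12
wall = {"name": "wall", "z": [3.0, 3.0, 3.0]}   # sits behind the ramp's near edge

def center_depth(face):
    # PV3D-style sort key: average depth of the face's vertices.
    return sum(face["z"]) / len(face["z"])

# Painter's algorithm: draw far-to-near, so later faces overpaint earlier ones.
for face in sorted([ramp, wall], key=center_depth, reverse=True):
    print(f"draw {face['name']} (center depth {center_depth(face):.2f})")

# The ramp (center 4.0) is drawn first and the wall (center 3.0) on top,
# so the wall overpaints the ramp's near edge even though that edge is in
# front of it - which looks exactly like a flipped normal.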
As for the scale issues - I never figured that out. I moved to Sketchup for my model creation, and that seemed to solve it.

Related

Resizing big images for object detection

I need to perform object detection using deep learning on "huge" images, say 10000x10000 pixels.
At some point in the workflow, I need to resize the images down to something more manageable, say 640x640. At the moment, I am achieving this using OpenCV:
import cv2
img = cv2.imread("some/path/to/my/img")
h, w = 640, 640
img = cv2.resize(img, (w, h))
Now, when I try to look at some of these pictures (e.g. to check my bounding boxes are well-defined) with my human eye, I "can't see anything", in the sense that the resize is so aggressive that the image is heavily pixelated.
Does this cause an issue for the training of the algorithm? In the end, I can map the bounding boxes output by the model back to the original image (10000x10000 px) using some transform, so that is not an issue. But I can't tell whether working on such pixelated images during training causes something to go wrong.
It really depends what information is lost during the resizing. From 10000x10000 to 640x640, I would assume almost everything relevant is lost, making the problem a lot harder, if it is solvable at all.
If you can't solve the problem yourself (i.e. see the objects in the resized image), that is a very bad starting point for solving it with a neural network. I would still try it and see if the network does anything, but it probably won't work well.
An easy approach is to split the initial image into patches, run the detection on each patch, and combine the results, as in the sketch below. This can work, but depending on the problem it might not be sufficient.
If that is not sufficient for your problem, you might want to survey the state of the art and try to find someone with a similar problem. Medical images can also be quite big, and people dealing with satellite images have the same problem of very large inputs and may have come up with ways to solve it.
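A minimal Python sketch of that patch-based approach, staying with the OpenCV pipeline from the question; run_detector is a hypothetical stand-in for whatever model you use, and the patch size and overlap are just illustrative defaults:

import cv2

def run_detector(tile):
    # Placeholder for your model's inference; it should return boxes as
    # (x1, y1, x2, y2, score) tuples in tile coordinates.
    return []

def detect_in_patches(image_path, patch=640, overlap=64):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    step = patch - overlap
    detections = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            # Tiles at the right/bottom edges may be smaller; numpy
            # slicing clamps to the image bounds.
            tile = img[y:y + patch, x:x + patch]
            for (x1, y1, x2, y2, score) in run_detector(tile):
                # Shift each box back into full-image coordinates.
                detections.append((x1 + x, y1 + y, x2 + x, y2 + y, score))
    # A full pipeline would also merge duplicate boxes from overlapping
    # tiles, e.g. with non-maximum suppression.
    return detections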

Non-polygon based 3D-Model in ThreeJS (like in HelloRacer.com)

I am currently working on a project using ThreeJs.
Right now I use a Wavefront OBJ to represent my 3D models, but I would like to use something like IGES or STEP. These formats are not supported by ThreeJS, but I have seen http://helloracer.com/webgl/ and, given the short loading time, it seems like this model is not based on polygons, judging by the smooth surface. The model file seems to be .js, so is it the ThreeJS JSON format?
Is such a model created by loading an IGES/STEP file into, for example, Clara.io and exporting it to the ThreeJS JSON format? I haven't had the chance to test this myself, because I do not have an IGES/STEP model right now, but I could have someone create one.
With Wavefront OBJ I am not able to create such smooth surfaces without getting a huge loading time and a slow render.
As you can see, the surface and lighting are not nearly as smooth as in the posted example.
Surfaces in the demo you've linked are not smooth; they're still polygonal.
I think smooth shading is what you're looking for. The thing is, a model is usually shaded based on its normals, so the normals we set on the model's vertices are crucial to what we get on screen. Based on your description, your models have a separate normal for every triangle: each vertex of a triangle carries the same normal as the triangle itself. When we interpolate these normals across a triangle, we get the same value for every point of the triangle, so the shading calculations yield uniform illumination and the triangle appears flat in the rendered image.
To achieve the effect of a smooth surface, we need different values for the vertex normals: each vertex stores a normal averaged from all the faces that share it.
If we save a sphere with such normals and try to render it, the interpolated normals will change smoothly across the surface of the triangles comprising the sphere. The shading calculations will then yield smoothly changing illumination, and the surface will appear smooth.
So, to recap: the models you try to render need "smoothed" vertex normals to appear smooth on screen.
UPD. Judging by your screenshot, your model uses a refractive material. The same idea applies to refraction calculations, since they're based on normal values too.
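To make the "smoothed" vertex normals concrete, here is a small engine-agnostic Python/NumPy sketch of the averaging step (in Three.js itself, Geometry.computeVertexNormals() does essentially this for you):

import numpy as np

def smooth_vertex_normals(vertices, faces):
    # vertices: (n, 3) float array; faces: (m, 3) array of vertex indices.
    normals = np.zeros_like(vertices)
    for i0, i1, i2 in faces:
        # Face normal from the cross product of two edge vectors.
        n = np.cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0])
        # Each corner accumulates the face normal; a vertex shared by
        # several faces collects all of their normals.
        for i in (i0, i1, i2):
            normals[i] += n
    # Normalize, guarding against zero-length sums on isolated vertices.
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0, 1, lengths)

# Two triangles sharing an edge: vertices 1 and 2 get a blended normal,
# so shading interpolates smoothly across the common edge.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
tris = np.array([[0, 1, 2], [1, 3, 2]])
print(smooth_vertex_normals(verts, tris))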

Rotating a rectangular solid about the y axis without image distortion using canvas renderer (three.js)

I've spent several hours trying to work around this issue: when rendering a really simple shape (i.e. a cube with very low complexity) and using the texture map feature of Three.js, rotating the cube distorts the image while in rotation, and you can see a line running across the surface of the cube where the distortion appears.
http://screencast.com/t/VpSPRsr1Jkss
I understand that this is a limitation of canvas rendering, but it seems like a really simple thing to do: rotate a cube that has an image on one face, without the distortion.
Is there another canvas library or approach I can take? I was really looking forward to using Three.js for animating some logos and other elements, but we can't have distortion like that in a logo or on a customer-facing landing page.
Thanks for reading, I'm open to suggestions here.
I don't accept increasing the complexity of the face as a solution, because that just distributes the distortion throughout the face. I really just want to render the image to a flat surface and be able to rotate that object.
The distortion you see is because only two triangles make up that plane.
A quick fix is to use a more detailed plane.
If you are using PlaneGeometry, increase the number of segments.
If you are using CubeGeometry, increase the number of segments on the face you need (two of the three segment parameters affect any given face).
It will take a bit of fiddling to find the best balance between a decent look and optimal performance (more segments require more computing), but hopefully for a simple scene you'll get away with no major delays. The sketch below shows what the subdivision amounts to.
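As a purely illustrative sketch (Python rather than the Three.js API; PlaneGeometry's widthSegments/heightSegments parameters do this internally), here is what increasing segments means: splitting one quad into an n x n grid of triangles, so the affine texture mapping has less area to distort per triangle:

def subdivide_quad(n):
    # Vertices of an (n+1) x (n+1) grid over the unit square, row-major.
    verts = [(x / n, y / n) for y in range(n + 1) for x in range(n + 1)]
    tris = []
    for y in range(n):
        for x in range(n):
            i = y * (n + 1) + x  # top-left corner of this cell
            tris.append((i, i + 1, i + n + 1))
            tris.append((i + 1, i + n + 2, i + n + 1))
    return verts, tris

verts, tris = subdivide_quad(1)   # 2 triangles: the visibly distorted case
verts, tris = subdivide_quad(8)   # 128 triangles: far less visible distortion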

Alternativa3D: Actionscript3: How to avoid z-fighting in imported 3DS model?

I can't seem to find a specific solution for my problem so I hope someone here can help me.
I am experimenting with Alternativa3D in ActionScript 3 and I managed to load a textured .3DS model from 3ds Max.
The object is a complex spaceship that wasn't intended to be used in a game but I wanted to use it as an example.
The problem is:
Since the imported model is complex, it has a lot of overlapping parts. Alternativa's z-sorting engine doesn't react well to this overlapping, and the output is jittery textures (I don't know what else to call it) in the overlapping places.
I know next time to model my objects with as few overlapping parts as possible, but I am sure this problem will reappear in other forms in the future.
The Alternativa documentation suggests using Decal objects instead of Mesh objects, but I can't seem to convert the imported object's Meshes to Decals.
Any help will be appreciated.
If you have a model where faces directly intersect one another, then I'd suggest that this, not the engine, is the problem.
A well-built 3D model shouldn't have any intersecting faces. You may not notice it, or may not think it's a problem, in a program like 3ds Max, where you can get away with it more, but it will certainly show up in a real-time engine.

Randomly Generate Directed Graph on a grid

I am trying to randomly generate a directed graph for the purpose of making a puzzle game similar to the ice-sliding puzzles from Pokémon.
This is essentially what I want to be able to randomly generate: http://bulbanews.bulbagarden.net/wiki/Crunching_the_numbers:_Graph_theory
I need to be able to limit the size of the graph in an x and y dimension. In the example in the link, it would be restricted to an 8x4 grid.
The problem I am running into is not randomly generating the graph, but randomly generating a graph which I can properly map out in 2D space, since I need something (like a rock) on the opposite side of a node to make it visually make sense when you stop sliding. The problem with this is that sometimes the rock ends up in the path between two other nodes, or possibly on another node itself, which breaks the entire graph.
After discussing the problem with a few people I know, we came to a couple of conclusions that may lead to a solution: include the obstacles in the grid as part of the graph when constructing it, or start out with a fully filled grid, draw a random path, and delete the blocks needed to make that path work, though the problem then becomes figuring out which ones to delete so that you don't accidentally introduce an additional, shorter path. We were also thinking a dynamic programming algorithm might be beneficial, though none of us are too skilled at creating dynamic programming algorithms from scratch. Any ideas or references about what this problem is officially called (if it's an official graph problem) would be most helpful.
I wouldn't look at it as a graph problem, since, as you say, the representation is incomplete. To generate a puzzle I would work directly on the grid, and work backwards: first fix the destination spot, then place rocks so that it can be reached from one or more spots, and iteratively add rocks to make those other spots reachable in turn, with the constraint that you never add a rock which breaks all the paths to the destination.
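A rough Python sketch of that backward-generation idea, assuming the sliding rule from the linked puzzles (you slide until you hit a rock or the grid edge); the grid encoding and step count are invented, and a complete version would also have to check that each new rock doesn't break slide lines created earlier:

import random

W, H = 8, 4                        # grid restricted to 8x4 as in the question
FREE, ROCK, GOAL = ".", "#", "G"
DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def in_bounds(x, y):
    return 0 <= x < W and 0 <= y < H

def generate(steps=5, seed=None):
    rng = random.Random(seed)
    grid = [[FREE] * W for _ in range(H)]
    gx, gy = rng.randrange(W), rng.randrange(H)
    grid[gy][gx] = GOAL
    stops = [(gx, gy)]             # cells from which the goal is known reachable
    for _ in range(steps):
        sx, sy = rng.choice(stops)     # pick an existing stopping cell
        dx, dy = rng.choice(DIRS)      # direction of travel for the new slide
        bx, by = sx + dx, sy + dy      # the cell that must block the slide
        if in_bounds(bx, by):
            if grid[by][bx] == GOAL:
                continue               # can't drop a rock on the goal
            grid[by][bx] = ROCK        # a rock here stops slides at (sx, sy)
        # else: the outer wall itself stops the slide, no rock needed.
        # Every free cell on the line behind (sx, sy) now slides into it.
        x, y = sx - dx, sy - dy
        while in_bounds(x, y) and grid[y][x] == FREE:
            stops.append((x, y))
            x, y = x - dx, y - dy
    return grid

for row in generate(seed=1):
    print("".join(row))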
You might want to generate a planar graph, which means that the edges of the graph will not overlap each other in two-dimensional space. An equivalent characterization (Kuratowski's theorem) is that a planar graph does not contain any subgraph that is a subdivision of K_3,3 (the complete bipartite graph on six nodes) or K_5 (the complete graph on five nodes).
There's a paper on the fast generation of planar graphs.