UV mapping in Stage3D / AS3

I've written a little Wavefront .obj file parser (a 3D model format). I'm able to display the geometry correctly, but am having problems texturing it correctly.
The only way I can get a correct texture is by splitting the model in my 3D editor, then exporting and parsing it that way, i.e. no longer sharing vertex data: each triangle stands on its own, so my index buffer's array looks like [0,1,2,3,4,5,6...], which I want to avoid.
The correct texture but inefficient geometry (no reuse of vertices: 36 vertices):
Correct http://imageshack.us/a/img29/2242/textureright.jpg
The wrong texture but correct topology (shared vertex data: only 8 vertices, efficient):
Wrong http://imageshack.us/a/img443/6160/texturewrong.jpg
I thought of separating the UV buffer from the index buffer used for the vertices, but couldn't find a way to do it, if indeed it is doable.
I also experimented with the AGAL code but haven't achieved any results.
The desired end is to be able to pass different UV coordinates to the same vertex, depending on the triangle currently being drawn.
What to do?
Thanks. (I'm new to 3D programming.)

It might seem like you need just one vertex per 'vertex location' of your model, but from what I understand of .obj parsing, you need to define your vertices around the FACES. This means you may have multiple vertices at some locations, depending on how many faces adjoin that location, but the payoff is that you can have different UV coordinates for those co-located vertices.
I'd suggest altering your parser to create vertices based on the faces they define rather than solely on their positions. I know this bumps up the number of vertices, but from what I've read it's unavoidable if you need different UVs for the same vertex location.
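A minimal sketch of that approach in Python (illustrating the parser logic only, not the AS3/Stage3D upload code; the function and argument names are hypothetical): emit one output vertex per unique (position index, UV index) pair taken from the .obj face entries, so positions are still shared wherever the UVs agree. Extend the key with the normal index if you also need per-face normals.

```python
def build_buffers(positions, uvs, faces):
    """positions: list of (x, y, z); uvs: list of (u, v);
    faces: list of triangles, each three (pos_idx, uv_idx) pairs
    taken from the .obj 'f' entries."""
    vertex_data = []   # interleaved x, y, z, u, v
    index_data = []
    seen = {}          # (pos_idx, uv_idx) -> output vertex index
    for triangle in faces:
        for pos_idx, uv_idx in triangle:
            key = (pos_idx, uv_idx)
            if key not in seen:
                seen[key] = len(seen)  # next free output index
                vertex_data.extend(positions[pos_idx] + uvs[uv_idx])
            index_data.append(seen[key])
    return vertex_data, index_data
```

For a typical cube unwrap this yields 24 vertices rather than the 36 of the fully split version, because vertices are still reused within each face.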
So, unfortunately, I'm pretty sure your first option is the way to go.

It seems like your welding operation is wrong. To weld two vertices you must be sure that their positions, UV coordinates, normals, and tangents (if you use them) are all equal.
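A hedged sketch of that check in Python (the attribute layout and the epsilon are assumptions): merge two vertices only when every attribute matches, not just the position.

```python
def weld(vertices, eps=1e-6):
    """vertices: list of (position, uv, normal) tuples of float tuples.
    Returns (unique_vertices, remap), where remap[i] is the new index
    of old vertex i. Vertices merge only if ALL attributes agree."""
    unique, remap, seen = [], [], {}
    for v in vertices:
        # Quantize so nearly-equal floats land on the same key.
        key = tuple(round(c / eps) for attr in v for c in attr)
        if key not in seen:
            seen[key] = len(unique)
            unique.append(v)
        remap.append(seen[key])
    return unique, remap
```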

Related

How do I find the region a point lies within

Below I have an image representation of a map with different regions labeled on it.
My problem is that I need to find out what region a randomly generated point on the map will be in.
I know the x_min, y_min, x_max, y_max of all the different regions, meaning I have the coordinates of all the vertices of each rectangular region. I also know the coordinates of the point.
What you can do, and what I have done, is go through a big conditional, checking one by one whether the point's x and y coordinates lie between the x_min/x_max and y_min/y_max of each region. However, I feel there has to be a more scalable, generalizable, and efficient way to do this, but I cannot find one, at least not outside a library for a different programming language.
I thought of splitting the map in half, finding out which half the point lies in, counting the regions in that half, checking whether only one region is left, and, if not, splitting that half again and repeating. I just don't have a good idea of how that could be implemented, or whether it is feasible or better than my current method.
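For what it's worth, the halving idea above amounts to building a spatial index. A minimal sketch of one common variant in Python, assuming the rectangles do not overlap and align on shared boundaries (the region list and names here are hypothetical): collect the distinct x and y boundaries, binary-search the point into a grid cell with bisect, and precompute which region owns each cell.

```python
import bisect

# Hypothetical regions: (name, x_min, y_min, x_max, y_max),
# assumed non-overlapping and aligned on shared boundaries.
regions = [
    ("A", 0, 0, 50, 100),
    ("B", 50, 0, 100, 100),
]

# Precompute sorted boundary lists once.
xs = sorted({v for _, x0, _, x1, _ in regions for v in (x0, x1)})
ys = sorted({v for _, _, y0, _, y1 in regions for v in (y0, y1)})

# Map each grid cell (x-slot, y-slot) to the region covering it.
cell_to_region = {}
for name, x0, y0, x1, y1 in regions:
    for i in range(bisect.bisect_left(xs, x0), bisect.bisect_left(xs, x1)):
        for j in range(bisect.bisect_left(ys, y0), bisect.bisect_left(ys, y1)):
            cell_to_region[(i, j)] = name

def region_of(x, y):
    """Return the region containing (x, y), or None if outside all of them."""
    i = bisect.bisect_right(xs, x) - 1
    j = bisect.bisect_right(ys, y) - 1
    return cell_to_region.get((i, j))

print(region_of(25, 40))  # -> "A"
print(region_of(75, 40))  # -> "B"
```

Each query then costs two binary searches instead of a scan over every region.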

U-Net segmentation without having mask

I am new to deep learning and semantic segmentation.
I have a dataset of medical images (CT) in DICOM format, in which I need to segment tumours and the organs involved. I have labelled organs contoured by our physician, which we call an RT structure, also stored in DICOM format.
As far as I know, people usually use a "mask". Does that mean I need to convert all the contoured structures in the RT structure to masks, or can I use the information from the RT structure (.dcm) directly as my input?
Thanks for your help.
There is a special library called pydicom that you need to install before you can actually decode and later visualise the DICOM images.
Now, since you want to apply semantic segmentation and segment the tumours, the solution is to create a neural network that accepts [image, mask] pairs as input, where all locations in the mask are 0 except the zones where the tumour is, which are marked 1; practically, the mask is your ground truth. So yes, you do need to convert the contours in the RT structure into masks.
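A hedged sketch of that conversion with pydicom and scikit-image (the file names are placeholders, and the per-slice matching and orientation handling a real pipeline needs are omitted): read the RTSTRUCT, take a contour's points, map them from patient coordinates to pixel indices, and rasterize the polygon into a zero-initialized mask.

```python
import numpy as np
import pydicom
from skimage.draw import polygon

# Placeholder paths: one CT slice and the matching RTSTRUCT file.
ct = pydicom.dcmread("ct_slice.dcm")
rt = pydicom.dcmread("rtstruct.dcm")

mask = np.zeros(ct.pixel_array.shape, dtype=np.uint8)

# ROIContourSequence / ContourSequence are standard RTSTRUCT fields;
# matching each contour to its CT slice (via z position) is omitted here.
contour = rt.ROIContourSequence[0].ContourSequence[0]
pts = np.array(contour.ContourData).reshape(-1, 3)  # (x, y, z) in mm

# Patient coordinates (mm) -> pixel indices, assuming an axis-aligned
# ImageOrientationPatient; a full solution must handle orientation.
origin = np.array(ct.ImagePositionPatient[:2], dtype=float)
spacing = np.array(ct.PixelSpacing, dtype=float)  # [row, col] spacing
cols = (pts[:, 0] - origin[0]) / spacing[1]
rows = (pts[:, 1] - origin[1]) / spacing[0]

rr, cc = polygon(rows, cols, shape=mask.shape)
mask[rr, cc] = 1  # tumour/organ pixels become 1, everything else stays 0
```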
Of course, for this you will have to implement your own CustomDataGenerator(), which must yield a batch of [image, mask] pairs at every step, as stated above.
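As a minimal sketch of such a generator, here using Keras's Sequence API (the images and masks arrays are assumed to have been prepared as above):

```python
import numpy as np
from tensorflow.keras.utils import Sequence

class PairGenerator(Sequence):
    """Yields (image, mask) batches for training a U-Net."""

    def __init__(self, images, masks, batch_size=8):
        self.images, self.masks = images, masks
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.images) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.images[lo:hi], self.masks[lo:hi]

# model.fit(PairGenerator(images, masks)) would then train on the pairs.
```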

Using the scale transfer function for the Point Gaussian representation in ParaView?

I have run the cyclone case from the OpenFOAM tutorials and want to view it using the built-in paraFoam viewer, which is based on ParaView 5.4.0.
The simulation has a number of particles with diameters in the range [2e-5, 1e-4], and I would like to scale the rendered size of the particles with the diameter array provided in the results.
To do this, I select the Point Gaussian representation for the Lagrangian fields (kinematiccloud), open the advanced properties, and select 'Scale by data array', after which the diameter array is chosen by default (although it's not possible to change it to another field, which I suspect is a bug). But then all the particles disappear from the view, as can be seen in the following screenshot:
My guess is that I need to choose proper values for the Gaussian radius and the scale transfer function, but there is no documentation on what they should be set to. I have tried trial and error, but I cannot find any settings that bring the particles back and render them at different sizes.
Can someone enlighten me on how to set the Gaussian radius and scale transfer function properly?
The Point Gaussian representation has just been improved, and its configuration is now automatic. You may want to try the latest release of ParaView.
More info here:
https://blog.kitware.com/major-improvements-on-the-point-gaussian-representation/

Limit the number of edges between vertices in mxGraph

Is there a function to prevent more than one edge between two vertices in mxGraph? Currently I'm using mxGraph.multiplicities; however, it limits the number of edges between all types of vertices, not for one type of edge.
Usually you will want to accomplish this by setting setMultigraph to false.
However, if you need to distinguish between different kinds of vertices, or even have directed edges (allowing both A->B and B->A to be connected), the way I did it in the past was by overriding getEdgeValidationError, where your logic can determine if and when two vertices may be connected.

Quadtree for collisions with latitude/longitude (earth size)

I have a Google Map, and a server sends a list of objects that each have a position and a small radius (100 m max). I need to be able to quickly know whether a position collides with something in the list, and to draw everything on the map.
I'm thinking I should use a quadtree (very useful for 2D collisions in games), but my issue is that I'm not limited to a screen but to the whole earth!
Sure, if I have 100 objects it's not a problem, but at any time the server can send me new objects that I need to add to the list, so my quadtree could change drastically or become unbalanced.
What should I do? Should I still use a quadtree and rebuild the entire tree whenever a new element is added outside of the current boundaries? Should I set the boundaries to the maximum latitude/longitude (but could that cause issues with double precision)? Or does someone know a better data structure for this type of problem?
To avoid issues with double precision, especially at the splitting border of a quad cell, it is advisable to use integer coordinates in the quadtree.
Convert the double lat/lon to int by multiplying by 1E6; this gives a precision of about 10 cm.
You can also use a space-filling curve, for example a Z-order curve (Morton code), to linearize the two dimensions.
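A minimal sketch of both steps, assuming WGS84 degrees (the helper names and the example point are hypothetical; the interleaving is a plain 2D Morton code):

```python
def to_int(coord_deg):
    """Scale degrees to integer micro-degrees (~10 cm of latitude)."""
    return int(round(coord_deg * 1_000_000))

def morton(x, y, bits=32):
    """Interleave the bits of two non-negative ints into a Z-order key."""
    key = 0
    for i in range(bits):
        key |= (x >> i & 1) << (2 * i)
        key |= (y >> i & 1) << (2 * i + 1)
    return key

# Shift lat/lon into non-negative ranges before interleaving.
lat, lon = 48.8584, 2.2945  # hypothetical point
key = morton(to_int(lat + 90), to_int(lon + 180))
# Nearby points usually get nearby keys, so a sorted list or B-tree
# over the keys can stand in for an explicit quadtree.
```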