Intersection of a ray and a Delaunay triangulation

How can I intersect a 3D ray with a 2D Constrained Delaunay Triangulation created from 3D points using the Projection_traits_xy_3 traits?
In a cgal-discuss post they suggest using a tree if I have to make many queries. I don't have that many, though, around 200 of them. I might have, however, lots of points, > 200 million.
They also mention another approach:
A third alternative is to locate an endpoint in the triangulation and to walk towards the other end point, collecting the cells you traverse.
But I don't understand how we can test that we've traversed the triangulation. In my case, the 3D triangulated mesh is a model of terrain, which is close to a plane, meaning that most of the time only one intersection will exist and that I can bound the ray as a segment, if necessary.
Is it worth it to build a tree? What other approach could I follow? Iterating over all the faces seems highly inefficient.
Some typedefs I have, to give some context:
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Projection_traits_xy_3<K> Gt;
typedef K::Point_3 Point3;
typedef CGAL::Triangulation_vertex_base_2<Gt> Vb;
typedef CGAL::Delaunay_mesh_face_base_2<Gt> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Constrained_Delaunay_triangulation_2<Gt, Tds> CDT;

You can use the line_walk() function to get all the faces traversed by the ray in 2D. Then you simply need to filter them in 3D, using the segment/triangle do_intersect() function.
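For instance, a rough, untested sketch of that walk-and-filter approach (`cdt` is your built CDT and `s`, `t` are the 3D endpoints of the bounded ray; those names are mine, not CGAL's):

#include <CGAL/intersections.h>
#include <vector>

std::vector<K::Triangle_3> hit_triangles;
CDT::Line_face_circulator lfc = cdt.line_walk(s, t), done(lfc);
if (lfc != nullptr) {          // the circulator is empty if the line misses the hull
    do {
        if (!cdt.is_infinite(lfc)) {
            // Lift the projected 2D face back to its 3D triangle...
            K::Triangle_3 tri(lfc->vertex(0)->point(),
                              lfc->vertex(1)->point(),
                              lfc->vertex(2)->point());
            // ...and keep it only if the 3D segment really hits it.
            if (CGAL::do_intersect(K::Segment_3(s, t), tri))
                hit_triangles.push_back(tri);
        }
    } while (++lfc != done);
}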

Is it possible to get pointers to the real and imaginary parts of a cuDoubleComplex?

Consider a cuDoubleComplex array a in device memory. Is it possible to get pointers to the real and imaginary parts of a without allocating and doing a deep copy into two new double arrays?
Something like this:
real_a = //points to real part of a
imag_a = //points to imaginary part of a
instead of something like:
/* allocate real_a and imag_a here */
for (int j = 0; j < numElements; j++) {
    real_a[j] = a[j].x;
    imag_a[j] = a[j].y;
}
CUDA does have something like this for individual numbers (cuCreal()/cuCimag()), but not for arrays/pointers.
The reason is that I would like to be able to call cuBLAS D rather than Z functions on the real and imaginary parts separately. For example,
cublasDgemm(...,real_a,...,somearray,...,anotherarray,...)
Is it possible to get pointers to the real and imaginary parts of a without allocating and doing a deep copy into two new double arrays?
That can be done:
double* real_a = reinterpret_cast<double*>(&a[0].x); //points to real part of a
double* imag_a = reinterpret_cast<double*>(&a[0].y); //points to imaginary part of a
but note that you need to use a stride of 2 when accessing the pointers to get the correct real or imaginary elements.
The reason is that I would like to be able to call cuBLAS D rather than Z functions on the real and imaginary parts separately.
This will work with BLAS functions which operate on your real or imaginary pointers as vectors, because those BLAS routines allow a stride to be passed (which must be two in this case).
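For example, a minimal sketch (untested; the wrapper name is mine) that scales only the real parts in place, by telling cublasDscal to treat them as a vector of n doubles spaced two apart:

#include <cublas_v2.h>
#include <cuComplex.h>

// Scale only the real parts of n interleaved cuDoubleComplex values.
// `handle` must be an initialized cublasHandle_t, `d_a` a device pointer.
void scale_real_parts(cublasHandle_t handle, cuDoubleComplex* d_a,
                      int n, double alpha)
{
    double* real_a = reinterpret_cast<double*>(d_a); // .x of element 0
    cublasDscal(handle, n, &alpha, real_a, 2);       // stride 2 skips the .y parts
}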
For example,
cublasDgemm(...,real_a,...,somearray,...,anotherarray,...)
That won't work with the pointers you can directly get as I have shown here. BLAS functions which treat the array as a matrix do support strided source and destination data, but that stride (the leading dimension) is applied to the start of each column of the flattened matrix, not to elements within a column, which is what you would need to make this work correctly.
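If you do end up needing contiguous real and imaginary matrices for gemm, one workaround is a pair of strided cublasDcopy calls to deinterleave on the device. Note that this is still a copy into two new arrays, which is what you wanted to avoid, so it only helps if that cost is acceptable (sketch, my names):

#include <cublas_v2.h>
#include <cuComplex.h>

// Deinterleave n cuDoubleComplex values into two contiguous double arrays.
// d_real and d_imag must each have room for n doubles in device memory.
void deinterleave(cublasHandle_t handle, const cuDoubleComplex* d_a,
                  double* d_real, double* d_imag, int n)
{
    const double* src = reinterpret_cast<const double*>(d_a);
    cublasDcopy(handle, n, src,     2, d_real, 1); // every .x component
    cublasDcopy(handle, n, src + 1, 2, d_imag, 1); // every .y component
}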

Determination of a formula for a 3-independent-variable problem

I have 3 arrays of X, Y and Z. Each has 8 elements. Now for each possible combination of (X,Y,Z) I have a V value.
I am looking to find a formula, e.g. V=f(X,Y,Z). Any idea about how that can be done?
Thank you in advance,
Astry
You have a function sampled on a (possibly nonuniform) 3D grid, and want to evaluate the function at any arbitrary point within the volume. One way to approach this (some say the best) is as a multivariate spline evaluation. https://en.wikipedia.org/wiki/Multivariate_interpolation
First, you need to find which rectangular parallelepiped contains the (x,y,z) query point, then you need to interpolate the value from the nearest points. The easiest thing is to use trilinear interpolation from the nearest 8 points. If you want a smoother surface, you can use quadratic interpolation from 27 points or cubic interpolation from 64 points.
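As a concrete illustration, here is a minimal trilinear sketch (all names are mine; it assumes a uniform 8x8x8 grid with origin (x0, y0, z0) and spacings dx, dy, dz; for a nonuniform grid you would search the coordinate arrays for the containing cell instead):

#include <algorithm>
#include <cmath>

// Trilinear interpolation on a uniform 8x8x8 grid: v[i][j][k] samples
// f at (x0 + i*dx, y0 + j*dy, z0 + k*dz).
double trilinear(const double v[8][8][8],
                 double x0, double y0, double z0,
                 double dx, double dy, double dz,
                 double qx, double qy, double qz)
{
    // Find the cell containing the query point, clamped to the grid.
    auto cell = [](double q, double o, double d) {
        int i = static_cast<int>(std::floor((q - o) / d));
        return std::clamp(i, 0, 6);              // valid cell indices are 0..6
    };
    const int i = cell(qx, x0, dx), j = cell(qy, y0, dy), k = cell(qz, z0, dz);

    // Fractional position inside the cell (in [0, 1] for in-range queries).
    const double tx = (qx - (x0 + i * dx)) / dx;
    const double ty = (qy - (y0 + j * dy)) / dy;
    const double tz = (qz - (z0 + k * dz)) / dz;

    auto lerp = [](double a, double b, double t) { return a + t * (b - a); };
    // Interpolate along x on the four cell edges, then along y, then z.
    const double c00 = lerp(v[i][j][k],     v[i+1][j][k],     tx);
    const double c10 = lerp(v[i][j+1][k],   v[i+1][j+1][k],   tx);
    const double c01 = lerp(v[i][j][k+1],   v[i+1][j][k+1],   tx);
    const double c11 = lerp(v[i][j+1][k+1], v[i+1][j+1][k+1], tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}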
For repeated queries of a tricubic spline, your life would be a bit easier by preprocessing the spline to generate Hermite patches/volumes, where your sample points not only have the function value, but also its derivatives (∂/∂x, ∂/∂y, ∂/∂z). That way you don't need messy code for the boundaries at evaluation time.

UV mapping in Stage3D / AS3

I've written a little parser for Wavefront's .obj file format (a 3D model format). I'm able to display the geometry correctly but am having problems texturing it correctly.
The only way I'm able to get a correct texture is by dividing the model in my 3D editor, exporting and parsing it that way, i.e. I'm no longer sharing vertex data; each triangle is on its own, so my indexBuffer's array looks like [0,1,2,3,4,5,6...], which I want to avoid.
The correct texture / inefficient geometry (no reuse of vertices: 36 vertices):
http://imageshack.us/a/img29/2242/textureright.jpg
Wrong texture / right topology (sharing data: 8 vertices only = efficient):
http://imageshack.us/a/img443/6160/texturewrong.jpg
I thought to try to separate the UVs buffer from the indexBuffer destined for the vertices, but didn't find a way to do it; if indeed it is doable.
I also messed with the agal code but haven't achieved any results.
The desired end is being able to pass different UV coordinates to the same vertex depending on the triangle being drawn at the moment.
What to do?
Thanks. (I'm new to 3d programming)
It might seem like you need just one vertex per 'vertex location' of your model but, from what I understand of an .obj parser, you need to define your vertices around the FACES. This means you may have multiple vertices for some locations, depending on how many faces adjoin that location, but the payoff is that you can have different UV coordinates for those vertices in the same location.
I'd suggest altering your parser to create vertices based on the faces they define rather than solely their positions. I know this bumps up the number of vertices but, from what I've read, it's unavoidable if you need different UVs for the same vertex location.
So, unfortunately, I'm pretty sure your first option is the way to go.
It seems like your welding operation is wrong. For welding vertices you must be sure that positions, UV coordinates, normals and tangents (if you need them) are equal.
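As an illustration of both answers, here is a minimal sketch (my own names) that welds two face corners into a single vertex only when their full (position, UV, normal) index triple from the .obj file matches, and duplicates the vertex otherwise:

#include <map>
#include <tuple>
#include <vector>

struct Vertex { float px, py, pz, u, v; };

// Return the buffer index for this face corner, reusing an existing
// vertex only when position, UV and normal indices all match.
int getOrAddVertex(std::map<std::tuple<int,int,int>, int>& cache,
                   std::vector<Vertex>& vertices,
                   const std::tuple<int,int,int>& key,
                   const Vertex& candidate)
{
    auto it = cache.find(key);
    if (it != cache.end()) return it->second; // weld: identical attributes
    const int index = static_cast<int>(vertices.size());
    vertices.push_back(candidate);            // split: new position/UV pair
    cache.emplace(key, index);
    return index;
}

With this, a cube welds back towards 8 vertices wherever corners genuinely share UVs, but keeps separate vertices where the UVs differ, which is the unavoidable cost mentioned above.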

Apple FFT Accelerate Framework Inverse FFT from Array of Real Numbers

I am using the Accelerate framework FFT functions to produce a spectrogram of a sound sample. This part works great. However, I want to (effectively) manipulate the spectrum directly (i.e. manipulate the real numbers) and then call the inverse again; how would I go about doing that? It looks like the inverse call expects an array of imaginary numbers, but how can I produce that from my manipulated real numbers? I have tried making the realp array my reals and the imagp part zero, but that doesn't seem to work.
The reason I ask is that I wish to run an FFT on a voice audio sample, then run the FFT again and lifter the low part of the cepstrum (thus hopefully separating the vocal tract components from the pitch), and then run an inverse FFT again to produce a spectrogram showing the vocal tract (formant) information more clearly (i.e. without the pitch information). However, I seem to be running into problems on the inverse FFT, into which I am passing my real values (the cepstrum) in the realp array, with imagp zero. I think I am doing something wrong here and the results are unexpected.
You need to process the complex forward FFT results, rather than the real magnitudes, or else the shape of the IFFT result spectrum will be distorted. Don't consider them imaginary numbers, consider them to be part of a 2D vector containing the required angular phase information.
If your cepstrum lifter/filter alters only the real magnitudes, then you can try using the amount of change of the real magnitudes as scaling factors to alter your forward complex FFT result before doing a complex IFFT.
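A minimal sketch of that scaling idea with vDSP (single precision, untested; `gains` stands for your per-bin change factors and `spectrum` for the packed output of a forward vDSP_fft_zrip call):

#include <Accelerate/Accelerate.h>

// Scale each complex bin by the same factor in realp and imagp: the
// magnitude changes but the phase angle is preserved; then invert.
void scale_and_invert(DSPSplitComplex* spectrum, const float* gains,
                      FFTSetup setup, vDSP_Length log2n)
{
    const vDSP_Length half = 1UL << (log2n - 1); // zrip packs N/2 complex bins
    // Caveat: bin 0 packs DC in realp[0] and Nyquist in imagp[0].
    for (vDSP_Length k = 0; k < half; ++k) {
        spectrum->realp[k] *= gains[k];
        spectrum->imagp[k] *= gains[k];
    }
    // vDSP's inverse is un-normalized; divide by 2N afterwards as usual.
    vDSP_fft_zrip(setup, spectrum, 1, log2n, kFFTDirection_Inverse);
}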

How to represent stereo audio data for FFT

How should stereo (2 channel) audio data be represented for FFT? Do you
A. Take the average of the two channels and assign it to the real component of a number and leave the imaginary component 0.
B. Assign one channel to the real component and the other channel to the imag component.
Is there a reason to do one or the other? I searched the web but could not find any definite answers on this.
I'm doing some simple spectrum analysis and, not knowing any better, used option A). This gave me an unexpected result, whereas option B) went as expected. Here are some more details:
I have a WAV file of a piano "middle-C". Middle C is about 260 Hz, so I would expect the peak frequency to be at 260 Hz, with smaller peaks at harmonics. I confirmed this by viewing the spectrum in audio editing software (Sound Forge). But when I took the FFT myself with option A), the peak was at 520 Hz. With option B), the peak was at 260 Hz.
Am I missing something? The explanation I came up with so far is that representing stereo data using real and imaginary components implies that the two channels are independent, which I suppose they're not, and hence the mess-up.
I don't think you're taking the average correctly. :-)
C. Process each channel separately, assigning the amplitude to the real component and leaving the imaginary component as 0.
Option B does not make sense. Option A, which amounts to converting the signal to mono, is OK (if you are interested in a global spectrum).
Your problem (double freq) is surely related to some misunderstanding in the use of your FFT routines.
Once you take the FFT you need to get the magnitude of the complex frequency spectrum. To get the magnitude you take the absolute value of the complex spectrum, |X(w)|. If you want to look at the power spectrum you square the magnitude spectrum, |X(w)|^2.
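In code that step is just the following (a minimal sketch with my own names, operating on split real/imaginary FFT output):

#include <cmath>
#include <cstddef>
#include <vector>

// Magnitude spectrum |X(w)| from split real/imaginary FFT output;
// square each element for the power spectrum |X(w)|^2.
std::vector<double> magnitude(const std::vector<double>& re,
                              const std::vector<double>& im)
{
    std::vector<double> mag(re.size());
    for (std::size_t k = 0; k < re.size(); ++k)
        mag[k] = std::sqrt(re[k] * re[k] + im[k] * im[k]);
    return mag;
}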
In terms of your frequency shift, I think it has to do with your setting the imaginary parts to zero.
Imagine the complex frequency spectrum as a series of complex vectors, or position vectors, in a Cartesian space. If you take one discrete frequency bin X(w), there is one real component giving its extent along the real axis (x direction) and one imaginary component along the imaginary axis (y direction). There are four important values for this discrete frequency: 1. the real value, 2. the imaginary value, 3. the magnitude and 4. the phase. If you just take the real value and set the imaginary part to 0, you force magnitude = |real| and phase = 0 or 180 degrees; you have thereby modified the resulting spectrum and applied a bias to every frequency bin. Take a look at the wiki on the magnitude of a vector, also called the Euclidean norm of a vector, to brush up on your understanding. Leonbloy was correct, but I hope this was more informative.
Think of the FFT as a way to get information from a single signal. What you are asking is what is the best way to display data from two signals. My answer would be to treat each independently, and display an FFT for each.
If you want a really fast streaming FFT you can read about an algorithm I wrote here: www.depthcharged.us/?p=176