Am I required to do an fftshift or ifftshift after cross-correlation?

I am doing cross-correlation of two vectors, v1 and v2, using the FFT method.
I am not allowed to use a library, so I had to do it the old-fashioned way, and my knowledge of this is pretty limited.
The process is this:
FFT on v1
FFT on v2
cross = FFT(v1) * conj(FFT(v2)), element-wise
result = IFFT(cross)
My question is: do I need to apply a shift to the result vector? If yes, should it be an fftshift or an ifftshift?
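For reference, here is a NumPy sketch of the pipeline above (NumPy is used purely for illustration, since the real implementation must be library-free). Without any shift, lag 0 of the circular cross-correlation lands at index 0 and negative lags wrap around to the end; fftshift merely moves lag 0 to the centre of the vector:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0, 4.0])
v2 = np.array([0.0, 1.0, 0.0, 0.0])  # v1 "delayed" by one sample

# FFT-based circular cross-correlation, exactly as in the steps above.
cross = np.fft.fft(v1) * np.conj(np.fft.fft(v2))
result = np.fft.ifft(cross).real

# Direct circular cross-correlation for comparison:
# r[k] = sum_n v1[(n + k) mod N] * v2[n], with lag k stored at index k.
direct = np.array([np.sum(np.roll(v1, -k) * v2) for k in range(len(v1))])

print(result)                    # lag 0 at index 0, negative lags wrap to the end
print(np.fft.fftshift(result))   # same values with lag 0 moved to the centre
```

So the shift is purely a matter of how you want the lags arranged: the math is already complete without it.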

Related

fftw library, what is the output size of fftw_plan_dft_r2c_1d?

I'm new to the fftw library. I have an array of n real data points and use fftw_plan_dft_r2c_1d to find the FFT spectrum. What is the size of the output? Is it n, the same as the input? Also, is the result centered around 0 Hz, or do I have to center it manually?
For a real-to-complex transform you get N / 2 + 1 complex outputs for N real inputs (the redundant symmetric outputs are not generated).
The 0 Hz component is in bin 0.
This is all covered in the FFTW manual.
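The same convention can be checked with NumPy's real-to-complex transform (an illustrative stand-in for fftw_plan_dft_r2c_1d; both return only the non-redundant half of the spectrum):

```python
import numpy as np

N = 8
x = np.random.rand(N)

spectrum = np.fft.rfft(x)   # real-to-complex, like FFTW's r2c transform
print(len(spectrum))        # N // 2 + 1 complex bins
print(spectrum[0])          # bin 0 is the 0 Hz (DC) term: sum(x), unnormalized
```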
This is not an answer to your question, but I hope it can be a solution to your problem.
If you only want to find the spectrum of your data, you might use the "halfcomplex" format.
Here is a piece of code:
double *in, *out;
fftw_plan myplan;
in  = (double *) fftw_malloc(N * sizeof(double));
out = (double *) fftw_malloc(N * sizeof(double));
// Note: the last argument is a planner flag (e.g. FFTW_ESTIMATE),
// not a direction such as FFTW_FORWARD.
myplan = fftw_plan_r2r_1d(N, in, out, FFTW_R2HC, FFTW_ESTIMATE);
// Fill in[] with your data.
...
fftw_execute(myplan);
Now out contains r0, r1, r2, ..., r(n/2), i((n+1)/2 - 1), ..., i2, i1, as described in the manual.
r0 (out[0]) is the DC term: the sum of your data (divide by N to get the mean, since FFTW does not normalize).
r1 (out[1]) is the real part of the first element of the DFT.
...
i0 is 0 because your data are real, so it isn't stored in out.
i1 (out[N-1]) is the imaginary part of the first element of the DFT.
i2 (out[N-2]) is the imaginary part of the second element of the DFT.
If N is an even number, then r(N/2) (out[N/2]) is the amplitude at the Nyquist frequency.
Remember that FFTW only computes the products of your data with the trigonometric basis functions; it does not normalize them, so divide by N yourself where a normalized spectrum is needed.
You can find more info about the halfcomplex here.
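As a sanity check on the layout described above, here is a NumPy sketch (illustrative only) that emulates the FFTW_R2HC halfcomplex packing from an ordinary complex FFT, which uses the same forward sign convention as FFTW:

```python
import numpy as np

N = 8                              # even length, so the Nyquist bin exists
x = np.arange(N, dtype=float)
X = np.fft.fft(x)                  # same sign convention as FFTW's forward DFT

# FFTW_R2HC halfcomplex packing for even N:
# out = [r0, r1, ..., r_{N/2}, i_{N/2-1}, ..., i2, i1]
halfcomplex = np.concatenate([X[:N // 2 + 1].real,
                              X[N // 2 - 1:0:-1].imag])

print(halfcomplex[0])       # r0: the unnormalized DC term, sum(x)
print(halfcomplex[N - 1])   # i1: imaginary part of the first DFT bin
print(halfcomplex[N // 2])  # r_{N/2}: the Nyquist bin
```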

CUDA Warp Divergence

I'm developing with CUDA and have an arithmetic problem, which I could implement with or without warp divergence.
With warp divergence it would look like:
float v1;
float v2;
//calculate values of v1 and v2
if(v2 != 0)
v1 += v2*complicated_math();
//store v1
Without warp divergence the version looks like:
float v1;
float v2;
//calculate values of v1 and v2
v1 += v2*complicated_math();
//store v1
The question is: which version is faster?
In other words, how expensive is disabling part of a warp compared to doing the extra calculation and adding 0?
Your question has no single answer. It depends heavily on the amount of extra calculation, the frequency of divergence, the type of hardware, the problem dimensions, and many other factors. The best way is simply to implement both versions and use profiling to determine the better solution in your particular case and situation.

cuSPARSE dense times sparse

I need to calculate the following matrix math:
D * A
Where D is dense, and A is sparse, in CSC format.
cuSPARSE allows multiplying sparse * dense, where sparse matrix is in CSR format.
Following a related question, I can "convert" CSC to CSR simply by transposing A.
Also I can calculate (A^T * D^T)^T, as I can handle getting the result transposed.
In this method I can also avoid "transposing" A, because CSR^T is CSC.
The only problem is that cuSPARSE doesn't support transposing D in this operation, so I would have to transpose it beforehand or convert it to CSR format, which is a total waste, as it is very dense.
Is there any workaround? Thanks.
I found a workaround.
I changed the memory accesses to D throughout my code.
If D is an m x n matrix and I used to access it as D[j * m + i], I now access it as D[i * n + j], meaning I made it row-major instead of column-major.
cuSPARSE expects matrices in column-major format, and because a row-major matrix reinterpreted as column-major is the transpose, I can pass D to cuSPARSE functions as a fake transpose without actually transposing it.
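The index arithmetic behind this trick can be sketched in NumPy (illustrative only; no cuSPARSE involved): a buffer written row-major and then read back column-major with swapped dimensions is exactly the transpose, so no data ever has to move:

```python
import numpy as np

m, n = 3, 4
D = np.arange(m * n, dtype=float).reshape(m, n)   # logical m x n matrix

buf = D.ravel(order='C')                # stored row-major: D[i, j] at i*n + j

# Reinterpret the same flat buffer as a column-major n x m matrix,
# i.e. element (p, q) lives at p + q*n. No data is copied or moved.
fake_transpose = buf.reshape((n, m), order='F')

print(np.array_equal(fake_transpose, D.T))   # the reinterpretation is D^T
```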

Generate Matrix from Another Matrix

I started learning Octave recently. How do I generate a matrix from another matrix by applying a function to each element?
E.g.: apply 2x+1, or 2x/(x^2+1), or 1/x+3 to a 3x5 matrix A.
The result should be a 3x5 matrix B with the function applied element-wise; if A(1,1) = 1, then after applying 2x+1,
B(1,1) = 2*1 + 1 = 3.
My main concern is how to write a function that uses the value of each element x, as in the examples above.
Regards.
You can try
B = A.*2 + 1
The operator .* applies the multiplication element-wise to the matrix (for a scalar factor, plain A*2 + 1 gives the same result).
You will find a lot of documentation for Octave in the distribution package and on the Web. Even better, you can usually also use the extensive Matlab documentation.
ADDED. For more complex operations you can use arrayfun(), e.g.
B = arrayfun(@(x) 2*x/(x^2 + 1), A)
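For comparison, here is the same idea in NumPy terms (illustrative only): an arrayfun-style per-element loop and the vectorized element-wise expression produce identical results, and the vectorized form is the idiomatic one in both Octave and NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# arrayfun-style: apply f to each element in turn.
f = lambda x: 2 * x / (x ** 2 + 1)
B_loop = np.array([[f(x) for x in row] for row in A])

# Vectorized element-wise form, like 2*A./(A.^2 + 1) in Octave.
B_vec = 2 * A / (A ** 2 + 1)

print(np.allclose(B_loop, B_vec))
```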

Normal vector from least squares-derived plane

I have a set of points and I can derive a least squares solution in the form:
z = Ax + By + C
The coefficients I compute are correct, but how would I get the vector normal to the plane from an equation in this form? Simply using the A, B and C coefficients from this equation does not give a correct normal vector for my test dataset.
Following on from dmckee's answer:
a x b = (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1)
In your case a1 = 1, a2 = 0, a3 = A and b1 = 0, b2 = 1, b3 = B,
so a x b = (0·B − A·1, A·0 − 1·B, 1·1 − 0·0) = (−A, −B, 1).
Form the two vectors
v1 = <1 0 A>
v2 = <0 1 B>
both of which lie in the plane and take the cross-product:
N = v1 x v2 = <-A, -B, +1> (or v2 x v1 = <A, B, -1> )
It works because the cross-product of two vectors is always perpendicular to both of the inputs. So using two (non-colinear) vectors in the plane gives you a normal.
NB: You probably want a normalized normal, of course, but I'll leave that as an exercise.
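A quick numerical check of the cross-product above (NumPy, illustrative only): for any coefficients A and B the result is (−A, −B, 1), and it is orthogonal to both in-plane direction vectors:

```python
import numpy as np

A, B = 2.0, -3.0                 # arbitrary plane coefficients

v1 = np.array([1.0, 0.0, A])     # in-plane direction along x
v2 = np.array([0.0, 1.0, B])     # in-plane direction along y

N = np.cross(v1, v2)
print(N)                         # (-A, -B, 1)

unit_normal = N / np.linalg.norm(N)    # the normalized normal, if needed
print(np.dot(N, v1), np.dot(N, v2))    # both 0: N is perpendicular to the plane
```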
A little extra color on dmckee's answer. I'd comment directly, but I do not have enough SO rep yet. ;-(
The plane z = Ax + By + C contains the points (1, 0, A) and (0, 1, B) only when C = 0. So we would really be talking about the plane z = Ax + By. Which is fine, of course, since this second plane is parallel to the original one: it is the unique vertical translate of it that passes through the origin. The orthogonal vector we wish to compute is invariant under translations like this, so no harm done.
Granted, dmckee's phrasing is that his specified "vectors" lie in the plane, not the points, so he's arguably covered. But it strikes me as helpful to explicitly acknowledge the implied translation.
Boy, it's been a while for me on this stuff, too.
Pedantically yours... ;-)