I know that pointwise multiplication in the frequency domain equals circular convolution in the time domain for discrete signals (vectors).
I also know that "the convolution theorem yields the desired linear convolution result only if x(n) and h(n) are padded with zeros prior to the DFT such that their respective lengths are Nx+Nh-1, essentially zeroing out all circular artifacts."
And everything works with vectors. But my goal is circular convolution with matrices, as in this paper:
http://developer.download.nvidia.com/compute/cuda/2_2/sdk/website/projects/convolutionFFT2D/doc/convolutionFFT2D.pdf
If you look at the first two figures (figures 1 and 2), you'll see that the kernel is padded in a strange way I've never seen before. What is this?
Solved by padding and extending the matrix to get rid of circular artifacts (see the NVIDIA CUDA SDK paper linked above).
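To make the trick concrete, here is a minimal numpy sketch (my own illustration, not code from the paper): both arrays are zero-padded to at least Nx+Nh-1 per dimension, and the padded kernel is additionally wrapped cyclically so that its center lands at index (0, 0), which is exactly the layout shown in figures 1 and 2.

import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(8, 8)   # "data" matrix, Nx = 8 per dimension
ker = np.random.rand(3, 3)   # convolution kernel, Nh = 3 per dimension

# Zero-pad both to N >= Nx + Nh - 1 so circular convolution equals linear.
N = img.shape[0] + ker.shape[0] - 1
img_p = np.zeros((N, N)); img_p[:8, :8] = img
ker_p = np.zeros((N, N)); ker_p[:3, :3] = ker

# The "weird" padding: cyclically wrap the padded kernel so its center
# element moves to index (0, 0); its quadrants end up in the array corners.
ker_p = np.roll(ker_p, (-1, -1), axis=(0, 1))  # 1 = center offset of a 3x3 kernel

# Pointwise product in frequency space = circular convolution in image space.
res = np.real(np.fft.ifft2(np.fft.fft2(img_p) * np.fft.fft2(ker_p)))

print(np.allclose(res[:8, :8], convolve2d(img, ker, mode='same')))  # True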
I am running a CFFT on a signal, and the output seems to show symmetry. I know that the FFT of a real signal is symmetric, but the code
arm_cfft_f32(&arm_cfft_sR_f32_len512, &FFTBuf[0], 0, 1);  // in-place 512-point complex FFT
arm_cmplx_mag_f32(&FFTBuf[0], &FFTMagBuf[0], FFT_LEN);    // magnitude of each complex bin
accounts for this, as FFTMagBuf is half the length of the input array.
The output, though, still appears to show symmetry:
https://imgur.com/K0uMDAm
The arrows point to my whistle, which shows up nicely, surrounded by a lot of noise. The middle peak is probably a harmonic (my whistling is poor), but the left-right symmetry is clearly noticeable.
I am using an STM32F4 Discovery board, the samples come from the on-board MEMS microphone, and each block of samples (in this case 1024, giving an FFT of length 512) is passed through a Hann window.
I am using a modified version of Tony DiCola's spectrogramui.py for visualization.
According to the documentation, arm_cmplx_mag_f32 computes the magnitude of a complex signal. That's why FFTMagBuf has to be half the size of FFTBuf: both arrays hold real numbers, but each complex sample is made of two reals. It's unrelated to the symmetry of the FFT.
So the output signal has exactly the same number of samples as the input.
That is, you compute the complex FFT of a real signal, which is conjugate-symmetric (X[N-k] is the complex conjugate of X[k]), and then you take the magnitude, which is therefore mirror-symmetric. Of course, the plot is then symmetric too.
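A quick numpy sketch of that symmetry (illustrative values only; this is not the CMSIS code path):

import numpy as np

N = 512
x = np.sin(2 * np.pi * 37 * np.arange(N) / N)  # a real test tone

X = np.fft.fft(x)      # complex FFT of a real signal
mag = np.abs(X)        # one real magnitude per complex bin

# Conjugate symmetry X[N-k] == conj(X[k]) makes the magnitude mirror-symmetric.
print(np.allclose(X[1:][::-1], np.conj(X[1:])))  # True
print(np.allclose(mag[1:][::-1], mag[1:]))       # True

# Hence only bins 0 .. N/2 carry independent information for a real input.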
...and if so, under what circumstances?
A convolutional layer usually yields an output of smaller size. Is it possible to reverse/invert such an operation by flipping/transposing the kernel used and providing suitable padding?
I am looking only at the convolutional layer's operation here, without pooling layers, concatenation, non-linear activation functions, etc.
I'm not looking for any of the several trainable versions of reverse convolutional operations. Such can be achieved by strides $\geq 1$ in the output space or intrinsic padding in the input space, for example. Vincent Dumoulin and Francesco Visin provide very illuminating animated GIFs on their GitHub page. And the Deep Learning community is divided over the naming of these operations: transposed convolution, fractionally strided convolution and deconvolution are all used (the latter, although widely used, is very misleading, since it is not a proper mathematical deconvolution).
I believe this is where the difference between a transposed convolution and a deconvolution is essential.
A deconvolution is the mathematical inverse of what a convolution does, whereas a transposed convolution reverses only the spatial transformation between input and output. Meaning, if you want to reverse the changes concerning the shape of the output, a transposed convolution will do the job, but it will not be the mathematical inverse concerning the values it produces. I wrote a few words about this topic in an article.
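To see the difference numerically, here is a small numpy sketch (my own example, writing a 1D "valid" convolution as a matrix C so that its transpose can be applied directly):

import numpy as np

# A "valid" 1D convolution of a length-5 input with a 3-tap kernel,
# written as a 3x5 matrix C (rows are shifted copies of the kernel).
k = np.array([1.0, 2.0, 1.0])
C = np.zeros((3, 5))
for i in range(3):
    C[i, i:i + 3] = k[::-1]    # flipped, as in cross-correlation layers

x = np.random.rand(5)
y = C @ x           # forward pass: length 5 -> length 3

x_back = C.T @ y    # transposed convolution: length 3 -> length 5

print(x_back.shape == x.shape)  # True: the spatial shape is restored
print(np.allclose(x_back, x))   # False: the values are not recovered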
Well, the deconvolution is defined very clearly.
In convolution, you multiply the kernel with an input patch elementwise, like a vector multiplication, sum it, and ASSIGN the value to the output.
In deconvolution, you take one output value, multiply it by the kernel, quasi highlighting the points that influenced that output, and ADD it to the former input layer (which of course must be filled with zeros at the beginning). This gives a layer of the same shape as the "input" of the forward pass.
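Here is a minimal numpy sketch of that scatter-add view (my own illustration, 1D, assuming a "valid" forward convolution):

import numpy as np

def transposed_conv1d(y, k, in_len):
    # Each output value, multiplied by the kernel, is ADDed back into a
    # zero-initialized array shaped like the forward pass's input.
    x_rec = np.zeros(in_len)
    for i, v in enumerate(y):
        x_rec[i:i + len(k)] += v * k
    return x_rec

k = np.array([1.0, 2.0, 1.0])
y = np.array([0.5, -1.0, 2.0])             # forward output, length 3
print(transposed_conv1d(y, k, in_len=5))   # length 5: same shape as the input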
Recently I have been learning DM-Script for TEM image processing.
I needed a Gaussian blur routine and found one named 'Gaussian Blur' at http://www.dmscripting.com/recent_updates.html
This code implements the Gaussian blur algorithm by multiplying the fast Fourier transform (FFT) of the source image by the FFT of a Gaussian-kernel image, and finally taking the inverse Fourier transform.
Here is the relevant part of the code:
// Carry out the convolution in Fourier space
compleximage fftkernelimg := realFFT(kernelimg)   // FFT of the Gaussian-kernel image
compleximage FFTSource := realFFT(warpimg)        // FFT of the source image
compleximage FFTProduct := FFTSource * fftkernelimg.modulus().sqrt()
realimage invFFT := realIFFT(FFTProduct)
The point I want to ask about is this line:
compleximage FFTProduct := FFTSource * fftkernelimg.modulus().sqrt()
Why does the FFT of the Gaussian kernel need '.modulus().sqrt()' for the convolution?
Is it related to the fact that the Fourier transform of a Gaussian function is another Gaussian function?
Or is it related to some limitation of the discrete Fourier transform?
Thanks
This is related to the general precision limitation of any floating-point numerical computation. (See, for example, here, or more in depth here.)
A rotationally symmetric (real-valued) Gaussian of standard deviation sigma should transform into a purely real, rotationally symmetric Gaussian of standard deviation proportional to 1/sigma. However, doing this numerically will show deviations. Just try the following:
number sigma = 30
number A0 = 1

// Real-valued, rotationally symmetric Gaussian
realimage first := RealImage( "First", 8, 256, 256 )
first = A0 * exp( - (iradius**2 / (2*sigma*sigma)) )
first.ShowImage()

// Its FFT should be purely real, but numerically it is not
complexImage second := FFT(first)
second.ShowImage()

// Mask of pixels with a non-zero imaginary part
image nonZeroImaginaryMask = ( 0 != second.Imaginary() )
nonZeroImaginaryMask.ShowImage()
nonZeroImaginaryMask.SetLimits(0,1)
When you then multiply these complex images (before transforming back), you introduce even more errors. Using the modulus ensures that the forward-transformed kernel is purely real and hence a better "damping" curve.
A better implementation of an FFT filtering code would actually create the FFT(Gaussian) directly with a standard deviation of 1/sigma, as this is the analytically correct result. Taking the FFT of the kernel only makes sense if the kernel (or its FFT) is not known analytically.
In general: when implementing any "maths" in program code, it can pay hugely to think it through with numerical computation limits in the back of your head. Reduce actual computation whenever possible (i.e., compute analytically and use the result instead of relying on brute-force numerical computation), and try to "reshape" equations where possible: avoid large sums over many small numbers, be careful about checks against exact numeric values, and avoid expressions that are very sensitive to small numerical errors.
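To illustrate the point in code: a minimal numpy sketch (not DM-Script; my own stand-in, assuming unit pixel spacing and unit DC gain) that builds the Gaussian transfer function analytically in Fourier space instead of FFT-ing a spatial kernel:

import numpy as np

def gaussian_blur_fft(img, sigma):
    # Analytic transfer function: the Fourier transform of
    # exp(-r^2 / (2 sigma^2)) is exp(-2 pi^2 sigma^2 f^2), normalized
    # here so that H(0) = 1. It is exactly real by construction.
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

img = np.random.rand(256, 256)
blurred = gaussian_blur_fft(img, sigma=30)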
Can the QR algorithm (https://en.wikipedia.org/wiki/QR_algorithm) find repeated eigenvalues? I.e., does it support the case where not all N eigenvalues of a real N x N matrix are distinct?
How can the QR algorithm be extended to find complex eigenvalues?
In principle, yes. It will work if the repeated eigenvalues really are full eigenvalues, i.e., if their algebraic and geometric multiplicities are the same.
If the multiple eigenvalue occurs in a Jordan block of size s, then the unavoidable floating-point error during the iteration will almost surely result in a star-shaped perturbation into an eigenvalue cluster with relative error of size $\mu^{1/s}$, where $\mu$ is the machine epsilon of the floating-point data type.
The reason this happens is that on the irreducible invariant subspace corresponding to a Jordan block of size s, the characteristic polynomial of the restriction of the linear operator to this subspace is $(\lambda-\lambda_j)^s$. During the computation this gets perturbed to $(\lambda-\lambda_j)^s+\mu\, q(\lambda)$, which in first approximation has roots close to $\lambda_j+\mu^{1/s} z_k$, where the $z_k$ are the $s$ roots of $0=z^s+q(\lambda_j)$. What the perturbation function $q$ is is quite random (accumulated floating-point truncation errors) and depends on the details of the method.
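The effect is easy to reproduce. A small numpy sketch (my own illustration; an explicit perturbation of size eps stands in for the accumulated rounding error):

import numpy as np

s = 8                                          # Jordan block size
J = np.eye(s) + np.diag(np.ones(s - 1), 1)     # Jordan block for eigenvalue 1

eps = 1e-12
P = J.copy()
P[-1, 0] += eps   # one tiny entry; the char. polynomial becomes (l-1)^s - eps

ev = np.linalg.eigvals(P)
print(np.max(np.abs(ev - 1.0)))  # ~0.03: a star-shaped cluster of radius...
print(eps ** (1.0 / s))          # ...eps^(1/s), not eps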
I need to perform multiple convolutions with small matrices and kernels, and I was hoping that utilizing the many processors of the GPU would enable me to do it as fast as possible.
The problem is as follows: I have many matrices (~1,000 to ~10,000) of relatively small sizes (~15x15 down to 1x1, i.e., scalars), and a certain number of convolution masks (~20 down to 1). I need to convolve all the matrices with each convolution mask.
Example:
A;                       % 5,000 matrices of size 10x10, A(i) = a 10x10 matrix
B;                       % 10 matrices of size 5x5, B(j) = a 5x5 matrix
res(j) = conv(A, B(j));  % res(j) is the result of convolving all 5,000
                         % matrices in A with the j'th kernel B(j)
The goal is computing res(1), ..., res(10) as quickly as possible.
I would like to hear suggestions about how to implement the most efficient algorithm.
FFT-based convolution would probably be too slow.
Every implementation I've seen so far is for 2D convolution and is meant to convolve two large matrices, while I need to convolve many small matrices.
I know very little about CUDA programming right now, but I'm in the process of learning.
I was hoping to figure this out myself, but due to time constraints, I am forced to ask for any advice anyone with experience can give me, while I learn how to code in CUDA.
Thank you!
p.s. any pointers to an implementation that suits my purposes would be much appreciated. I am a university student, and this is for a small research project, so nothing I need to pay for, please...
I do not pretend to give an ultimate answer to your question, but I would just like to point out a couple of things:
As you mentioned, a first possibility would be to use the FFT approach. A problem along this line is that (correct me if I'm wrong) the cuFFT library is primarily designed to cope with large matrices, so to fruitfully benefit from this approach you would need FFT routines that are efficient for small matrices. I just want to point out that there are some algorithms of this kind; see for example the paper: Small Discrete Fourier Transforms on GPUs. I have no direct experience with the performance of CUDA FFTs on small matrices of the indicated type, but it could be interesting for you, since the mask matrices are few (10) and so you can "recycle" their FFTs for a large number of convolutions (5000), as sketched below.
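To sketch the recycling idea (in numpy rather than cuFFT, using the sizes from the question; a batched cuFFT plan would play the same role on the GPU):

import numpy as np

A = np.random.rand(5000, 10, 10)  # the data matrices
B = np.random.rand(10, 5, 5)      # the convolution masks
N = 14                            # >= 10 + 5 - 1, so no circular artifacts

FA = np.fft.rfft2(A, s=(N, N))    # 5000 forward FFTs, computed once
FB = np.fft.rfft2(B, s=(N, N))    # only 10 kernel FFTs, reused 5000 times each

# Broadcasting forms all 10 x 5000 spectral products in one go.
res = np.fft.irfft2(FA[None, :, :, :] * FB[:, None, :, :], s=(N, N))
print(res.shape)                  # (10, 5000, 14, 14): full linear convolutions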
If you decide not to use the FFT approach, and you have a GPU architecture with compute capability >= 3.5, then dynamic parallelism could be a good candidate for calculating the convolutions. If you regard the evaluation of each convolution matrix element as an interpolation, you will have interpolation problems of size 15x15, and dynamic parallelism could help; see the post: Benefit of splitting a big CUDA kernel and using dynamic parallelism.
One approach is to use the GFOR loop from ArrayFire (a library I work on).
You can tile as many small convolutions into one big kernel launch as you want, as long as you don't run out of GPU memory, as follows:
const int m = 1000;       // batch size: how many convolutions to tile (example value)
array x = randu(5);       // the input (5 samples)
array y = randu(5, m);    // the outputs, one column per convolution
array f = constant(1, 3); // the kernel (a 3-tap boxcar)
gfor (array k, 0, m-1) {
    y(span, k) = convolve(x, f);  // all m convolutions run in one launch
}
Good luck!