Write two vectors as a convolution - FFT
How can two column vectors be written as an analytic convolution so that the discrete FFT may be used? MATLAB syntax is used.
Consider:
a set of vectors which, when sorted into a step function, appears as any of the following:
[1,1,1,1,0,0,0,0], or [1,1,1,1,1,0,0,0], or [1,1,1,1,1,1,0,0]
(...the location at which the function steps down from 1 to 0 varies over members of this set)
The other vector is random, vec=[1,0,1,0,1,1,1,0]; obviously both contain only 0s and 1s.
Is it possible to write these vectors as an analytic convolution? I would like the 1st, 2nd, 3rd, 4th... entries of the convolution to have values of:
sum(vec.*[1,0,0,0,0,0,0,0])
sum(vec.*[1,1,0,0,0,0,0,0])
sum(vec.*[1,1,1,0,0,0,0,0])
sum(vec.*[1,1,1,1,0,0,0,0])
...
sum(vec.*[1,1,1,1,1,1,1,1])
For speed, I am trying to avoid the use of a for-loop. I cannot simply vectorize, because that would require terabytes of RAM (I work with vectors that are not of length 8, but of length nearly a million).
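To make the target concrete, here is a small sketch of the quantity the list above describes and one FFT-based way to obtain it. It uses NumPy purely for illustration (the question is posed in MATLAB), and the padding length and variable names are my own choices:

    import numpy as np

    vec = np.array([1, 0, 1, 0, 1, 1, 1, 0], dtype=float)
    n = len(vec)

    # The listed sums are the cumulative sum of vec ...
    target = np.cumsum(vec)

    # ... which is also the linear convolution of vec with a vector of ones.
    # With zero-padding (to avoid circular wrap-around) it can be computed
    # through the FFT, with no explicit loop over shift positions.
    m = 2 * n                                   # any length >= 2*n - 1 works
    conv = np.fft.ifft(np.fft.fft(vec, m) * np.fft.fft(np.ones(n), m)).real[:n]

    print(np.allclose(conv, target))            # True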
The convolution theorem gives the function R, obtained as the convolution of the functions L and 1/w, in terms of the Fourier transform F and its inverse F^-1 as

    R(w) = integral of L(w') * 1/(w-w') dw' = F^-1[ F(L) .* F(1/w) ]
Clearly, the function 1/(w-w') in the convolution comes from 1/w under F; it is as if you had just set w'=0. But if I apply analogous reasoning to my [1,1,1,1,0,0,0,0], I get either [1,1,1,1,1,1,1,1] (the identity under .* in MATLAB) or [0,0,0,0,0,0,0,0] (a very boring result).
What is the mistake in reasoning I've made?
Related
Using Softmax activation function after calculating loss from BCEWithLogitsLoss (Binary Cross Entropy + Sigmoid activation)
I am going through a binary classification tutorial using PyTorch. Here, the last layer of the network is torch.Linear() with just one neuron (makes sense), which will give us a single output,

    pred = network(input_batch)

After that, the choice of loss function is loss_fn = BCEWithLogitsLoss() (which is more numerically stable than using the softmax first and then calculating the loss), which will apply the Softmax function to the output of the last layer to give us a probability. After that it calculates the binary cross entropy to minimize the loss,

    loss = loss_fn(pred, true)

My concern is that after all this, the author used torch.round(torch.sigmoid(pred)). Why would that be? I know it will get the prediction probabilities in the range [0, 1] and then round the values with the default threshold of 0.5.

Isn't it better to use the sigmoid once after the last layer within the network, rather than using a softmax and a sigmoid at two different places, given it's a binary classification? Wouldn't it be better to just do

    out = self.linear(batch_tensor)
    return self.sigmoid(out)

and then calculate the BCE loss and use argmax() for checking accuracy? I am just curious whether that can be a valid strategy.
You seem to be thinking of the binary classification as a multi-class classification with two classes, but that is not quite correct when using the binary cross-entropy approach. Let's start by clarifying the goal of the binary classification before looking at any implementation details.

Technically, there are two classes, 0 and 1, but instead of considering them as two separate classes, you can see them as opposites of each other. For example, you want to classify whether a StackOverflow answer was helpful or not. The two classes would be "helpful" and "not helpful". Naturally, you would simply ask "Was the answer helpful?"; the negative aspect is left off, and if that wasn't the case, you could deduce that it was "not helpful". (Remember, it's a binary case; there is no middle ground.)

Therefore, your model only needs to predict a single class, but to avoid confusion with the actual two classes, that can be expressed as: the model predicts the probability that the positive case occurs. In the context of the previous example: what is the probability that the StackOverflow answer was helpful?

Sigmoid gives you values in the range [0, 1], which are the probabilities. Now you need to decide when the model is confident enough to call the prediction positive, by defining a threshold. To make it balanced, the threshold is 0.5, so as long as the probability is greater than 0.5 it is positive (class 1: "helpful"), otherwise it is negative (class 0: "not helpful"), which is achieved by rounding, i.e. torch.round(torch.sigmoid(pred)).

> After that, the choice of loss function is loss_fn = BCEWithLogitsLoss() (which is more numerically stable than using the softmax first and then calculating the loss), which will apply the Softmax function to the output of the last layer to give us a probability.

> Isn't it better to use the sigmoid once after the last layer within the network, rather than using a softmax and a sigmoid at two different places, given it's a binary classification?

BCEWithLogitsLoss applies Sigmoid, not Softmax; there is no Softmax involved at all. From the nn.BCEWithLogitsLoss documentation:

> This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.

By not applying the Sigmoid in the model you get a more numerically stable version of the binary cross-entropy, but that means you have to apply the Sigmoid manually if you want to make an actual prediction outside of training.

> [...] and use argmax() for checking accuracy?

Again, you're thinking of the multi-class scenario. You only have a single output class, i.e. the output has size [batch_size, 1]. Taking the argmax of that will always give you 0, because that is the only available class.
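To tie the pieces together, here is a minimal, self-contained sketch (layer sizes and variable names are illustrative, not taken from the tutorial): logits plus BCEWithLogitsLoss during training, and sigmoid followed by rounding only when making an actual prediction.

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 1)                 # last layer with a single output neuron
    loss_fn = nn.BCEWithLogitsLoss()         # applies Sigmoid internally, not Softmax

    x = torch.randn(4, 10)                   # dummy batch
    y = torch.tensor([[1.], [0.], [1.], [0.]])

    logits = model(x)                        # raw scores, shape [batch_size, 1]
    loss = loss_fn(logits, y)                # numerically stable BCE on the logits
    loss.backward()

    # Inference: apply the Sigmoid manually, then threshold at 0.5 by rounding.
    with torch.no_grad():
        probs = torch.sigmoid(model(x))      # probabilities in [0, 1]
        preds = torch.round(probs)           # 1 ("positive") if prob > 0.5, else 0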
How to calculate a combination of convolution and correlation by FFT?
I'm trying to find an algorithm to efficiently calculate a combination of convolution and correlation such as the following:

    c(x,y) = sum over i ( sum over j ( a(x-i, y+j) * b(i,j) ) )

I know that 1-D convolution or correlation can be computed by

    a conv b = ifft(fft(a).*fft(b))
    a corr b = ifft(fft(a).*conjg(fft(b)))

but I have no idea how to combine them in 2-D or N-D problems. I think it is similar to 2-D convolution, but I don't know the specific derivation.
The correlation can be written in terms of the convolution by reversing one of the arguments:

    corr(x(t), y(t)) = conv(x(t), y(-t))

Thus, if you want the x-axis to behave like a convolution and the y-axis to behave like a correlation, reverse the y-axis only and compute the convolution. It doesn't matter whether you use a spatial or a frequency-domain implementation.
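A small sketch of that recipe (NumPy is my choice here, not the asker's environment; the function and variable names are illustrative): reverse b along the second axis and do an ordinary zero-padded FFT convolution.

    import numpy as np

    def conv_x_corr_y(a, b):
        # c(x,y) = sum_i sum_j a(x-i, y+j) * b(i,j): convolution along the
        # first axis, correlation along the second. Reversing b along the
        # second axis turns the correlation into a convolution.
        b_rev = b[:, ::-1]
        s0 = a.shape[0] + b.shape[0] - 1          # zero-pad to the full linear size
        s1 = a.shape[1] + b.shape[1] - 1          # to avoid circular wrap-around
        C = np.fft.fft2(a, (s0, s1)) * np.fft.fft2(b_rev, (s0, s1))
        full = np.fft.ifft2(C).real
        # The reversal shifts the second index: c(x, y) sits at
        # full[x, y + b.shape[1] - 1].
        return full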
DM Script: why does the Fourier transform of the Gaussian kernel need the modulus?
Recently I have been learning DM Script for TEM image processing. I needed a Gaussian blur and found one called 'Gaussian Blur' at http://www.dmscripting.com/recent_updates.html. This code implements a Gaussian blur by multiplying the fast Fourier transform (FFT) of the source image by the FFT of a Gaussian-kernel image and finally taking the inverse Fourier transform. Here is the relevant part of the code:

    // Carry out the convolution in Fourier space
    compleximage fftkernelimg := realFFT(kernelimg)      // FFT of Gaussian-kernel image
    compleximage FFTSource := realfft(warpimg)           // FFT of source image
    compleximage FFTProduct := FFTSource * fftkernelimg.modulus().sqrt()
    realimage invFFT := realIFFT(FFTProduct)

The point I want to ask about is this line:

    compleximage FFTProduct := FFTSource * fftkernelimg.modulus().sqrt()

Why does the FFT of the Gaussian kernel need '.modulus().sqrt()' for the convolution? Is it related to the fact that the Fourier transform of a Gaussian function is another Gaussian function? Or is it related to some limitation of the discrete Fourier transform? Thanks.
This is related to the general precision limitation of any floating-point numerical computation. (See, for example, here, or more in depth here.)

A rotational (real-valued) Gaussian of standard deviation sigma should be transformed into a 100% real-valued rotational Gaussian of standard deviation 1/sigma. However, doing this numerically will show deviations. Just try the following:

    number sigma = 30
    number A0 = 1
    realimage first := RealImage( "First", 8, 256, 256 )
    first = A0 * exp( - (iradius**2/(2*sigma*sigma) ))
    first.showimage()
    complexImage second := FFT(first)
    second.Showimage()
    // Mark every pixel whose imaginary part is not exactly zero
    image nonZeroImaginaryMask = ( 0 != second.Imaginary() )
    nonZeroImaginaryMask.Showimage()
    nonZeroImaginaryMask.SetLimits(0,1)

When you then multiply these complex images (before back-transforming), you are introducing even more errors. By using the modulus, one ensures that the forward-transformed kernel is purely real and hence a better "damping" curve.

A better implementation of an FFT filtering code would actually create FFT(Gaussian) directly with a standard deviation of 1/sigma, as this is the analytically correct result. Doing an FFT of the kernel only makes sense if the kernel (or its FFT) is not analytically known.

In general: when implementing any "maths" in program code, it can pay hugely to think it through with numerical computation limits in the back of your head. Reduce actual computation whenever possible (i.e. compute analytically and use the result instead of relying on brute-force numerical computation) and try to "reshape" equations when possible; for example, avoid large sums over many small numbers, be careful about checks against exact numeric values, and try to avoid expressions that are very sensitive to small numerical errors.
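A quick numerical illustration of the same effect outside DM Script (a NumPy sketch; the sizes and centring convention are my own choices): the FFT of a centred real Gaussian should be purely real, but floating-point error leaves small non-zero imaginary parts behind, which taking the modulus discards.

    import numpy as np

    n, sigma = 256, 30.0
    x = np.arange(n) - n // 2
    X, Y = np.meshgrid(x, x)
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2))      # centred real Gaussian kernel

    # Shift the centre to pixel (0, 0) so the analytic transform is purely real.
    G = np.fft.fft2(np.fft.ifftshift(g))

    print(np.abs(G.imag).max())        # tiny, but not exactly zero
    print(np.count_nonzero(G.imag))    # many pixels carry a residual imaginary part

    # Taking the modulus removes these spurious imaginary parts; better still,
    # build the frequency-space Gaussian analytically (width proportional to
    # 1/sigma) and skip the forward FFT of the kernel entirely.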
QR algorithm with repeated eigenvalues
Can the QR algorithm (https://en.wikipedia.org/wiki/QR_algorithm) find repeated eigenvalues? That is, does it support the case where not all N eigenvalues of a real N x N matrix are distinct? And how can the QR algorithm be extended to find complex eigenvalues?
In principle, yes. It will work if the eigenvalues are really all eigenvalues, i.e. the algebraic and geometric multiplicities are the same. If the multiple eigenvalue occurs in a Jordan block of size s, then the unavoidable floating-point error during the iteration will almost surely result in a star-shaped perturbation into an eigenvalue cluster with relative error of size mu^(1/s), where mu is the machine constant of the floating-point data type.

The reason this happens is that, on the irreducible invariant subspace corresponding to a Jordan block of size s, the characteristic polynomial of the restriction of the linear operator to this subspace is (λ-λ[j])^s. During the computation this gets perturbed to (λ-λ[j])^s + μq(λ), which to first approximation has roots close to λ[j] + μ^(1/s)*z[k], where z[k] runs over the s roots of 0 = z^s + q(λ[j]). What the perturbation polynomial q is is essentially random (accumulated floating-point truncation errors) and depends on the details of the method.
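A small numerical illustration of that mu^(1/s) behaviour (NumPy, my own toy example rather than anything from the question): a size-4 Jordan block with eigenvalue 2, perturbed by a single entry of size 1e-12, has its eigenvalue split into a cluster of radius about (1e-12)^(1/4) = 1e-3.

    import numpy as np

    s, lam, mu = 4, 2.0, 1e-12
    J = lam * np.eye(s) + np.diag(np.ones(s - 1), 1)   # Jordan block of size s
    J[-1, 0] = mu                                      # tiny perturbation in one entry

    # The characteristic polynomial becomes (x - lam)^s - mu, so the roots lie on
    # a circle of radius mu**(1/s) around lam, not within mu of lam.
    eigs = np.linalg.eigvals(J)
    print(np.abs(eigs - lam))        # all of order 1e-3
    print(mu ** (1.0 / s))           # 1e-3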
How to find a function that fits a given set of data points in Julia?
So, I have a vector that corresponds to a given feature (same dimensionality). Is there a package in Julia that would provide a mathematical function that fits these data points, in relation to the original feature? In other words, I have x and y (both vectors) and need to find a decent mapping between the two, even if it's a highly complex one. The output of this process should be a symbolic formula that connects x and y, e.g. (:x)^3 + log(:x) - 4.2454. It's fine if it's just a polynomial approximation. I imagine this is a walk in the park if you employ Genetic Programming, but I'd rather opt for a simpler (and faster) approach, if it's available. Thanks
Turns out the Polynomials.jl package includes the function polyfit, which does Lagrange interpolation. A usage example:

    using Polynomials   # install with Pkg.add("Polynomials")
    x = [1,2,3]         # demo x
    y = [10,12,4]       # demo y
    polyfit(x,y)

The last line returns:

    Poly(-2.0 + 17.0x - 5.0x^2)

which evaluates to the correct values. The polyfit function accepts a maximal degree for the output polynomial, but defaults to the length of the input vectors x and y minus 1. This is the same degree as the polynomial from the Lagrange formula, and since polynomials of this degree agree on the inputs only if they are identical (a basic theorem), we can be certain this is the same Lagrange polynomial, and in fact the only one of such a degree with this property.

Thanks to the developers of Polynomials.jl for leaving me just to google my way to an answer.
Take a look at MARS regression (multivariate adaptive regression splines).