Related
Xilinx's complex multiplier IP documentation (PG104) has this to say about input and output bit-width setting:
Output Width: Selects the width of the output product real and imaginary
components. The values are automatically initialized to provide the full-precision
product when the A and B operand widths are set. The natural width of a complex
multiplication is the sum of the input widths plus one. If Output Width is set to be
less than this natural width, the least significant bits are truncated or rounded, as
selected by the next GUI field.
(Italics are mine.) So if I multiply an 8-bit number by another 8-bit number, it wants the full-precision output to be a 17-bit number. The inputs and outputs are assumed to be signed integers.
The largest magnitude signed number that 8 bits can represent is -128 (0x80). 128*128=16384 or 0x4000 which is 15 bits. Add a sign bit and we're safe with a 16-bit output.
The largest positive value is 127 (0x7F). 127*127=16129 or 0x3F01. Again, safe with 16 bits.
What am I missing? Why do they insist on the extra bit?
It is a complex multiplication; each output component is the sum of two products:
pr = ar x br - ai x bi
pi = ar x bi + ai x br
The sum adds the extra bit.
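For a concrete worst case (my own numbers, not from PG104), take a = b = -128 - 128j, with every component at the most negative 8-bit value:
pi = ar x bi + ai x br = (-128 x -128) + (-128 x -128) = 16384 + 16384 = 32768
The largest value a 16-bit signed output can hold is 32767, so the imaginary part needs the 17th bit.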
How to write two column vectors as an analytic convolution so that the discrete FFT may be used. MATLAB syntax is used.
Consider:
a set of vectors, each of which, when sorted into a step function, appears as one of the following:
[1,1,1,1,0,0,0,0], or [1,1,1,1,1,0,0,0], or [1,1,1,1,1,1,0,0]
(...the location at which the function "steps up" varies over members of this set)
The other is a random vector, vec=[1,0,1,0,1,1,1,0]; obviously both contain only 0s and 1s.
Is it possible to write these vectors as an analytic convolution? I would like the 1st, 2nd, 3rd, 4th... entries of the convolution to have values of:
sum(vec.*[1,0,0,0,0,0,0,0])
sum(vec.*[1,1,0,0,0,0,0,0])
sum(vec.*[1,1,1,0,0,0,0,0])
sum(vec.*[1,1,1,1,0,0,0,0])
...
sum(vec.*[1,1,1,1,1,1,1,1])
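Written as a single formula (with c(k) denoting the k-th requested entry), the sums above are just the running sum of vec:
$$c(k)=\sum_{j=1}^{k}\mathrm{vec}(j),\qquad k=1,\dots,8,$$
equivalently the first 8 entries of the convolution of vec with a length-8 vector of ones.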
For speed, I am trying to avoid use of a for-loop. I cannot vectorize because this requires terabytes of RAM. (I work with vectors that are not of length 8, but rather length nearly a million).
The convolution theorem gives the function R, formed from the convolution of the functions L and 1/w, in terms of the Fourier transform F and its inverse F^{-1} as
$$R(w)=\left(L*\tfrac{1}{w}\right)(w)=\int L(w')\,\frac{1}{w-w'}\,dw'=F^{-1}\!\left\{F[L]\cdot F\!\left[\tfrac{1}{w}\right]\right\}.$$
Clearly, the function 1/(w-w') in the convolution comes from 1/w under F; it's as if you just set w'=0. But if I use analogous reasoning on my [1,1,1,1,0,0,0,0], I get either [1,1,1,1,1,1,1,1], the identity under .* in MATLAB, or [0,0,0,0,0,0,0,0] (a very boring result).
What is the mistake in reasoning I've made?
When trying to find the answer I came across this and was wondering if this is true and why it is.
https://stackoverflow.com/a/489870/5712298
If anyone can explain it to me or link me to a page explaining it that would be great.
Stack Overflow markup does not support mathematical notation well, and most readers of this will be programmers, so I am going to use common programming expression syntax:
* multiplication
^ exponentiation
/ division
x[i] Element i of an array x
== equality
PROD product
This deals with the question of whether, given a radix r terminating fraction a/(r^n), there is a terminating radix s fraction b/(s^m) with exactly the same value, a, b integers, r and s positive integers, n and m non-negative integers.
a/(r^n)==b/(s^m) is equivalent to b==a*(s^m)/(r^n). a/(r^n) is exactly equal to some radix s terminating fraction if, and only if, there exists a positive integer m such that a*(s^m)/(r^n) is an integer.
Consider the prime factorization of r, PROD(p[i]^k[i]). For each term p[i]^k[i] in the prime factorization of r, p[i]^(n*k[i]) is the corresponding term in the prime factorization of r^n.
a*(s^m)/(r^n) is an integer if, and only if, every p[i]^(n*k[i]) in the prime factorization of r^n is also a factor of a*(s^m).
First suppose p[i] is also a factor of s. Then for sufficiently large m, p[i]^(n*k[i]) is a factor of s^m.
Now suppose p[i] is not a factor of s. p[i]^(n*k[i]) is a factor of a*(s^m) if, and only if, it is a factor of a.
The necessary and sufficient condition for the existence of a non-negative integer m such that b==a*(s^m)/(r^n) is an integer is that, for each p[i]^k[i] in the prime factorization of r, either p[i] is a factor of s or p[i]^(n*k[i]) is a factor of a.
Applying this to the common case of r=10 and s=2, the prime factorization of r is (2^1)*(5^1). 2 is a factor of 2, so we can ignore it. 5 is not, so we need 5^n to be a factor of a.
Consider some specific cases:
Decimal 0.1 is 1/10; 5 is not a factor of 1, so there is no exact binary fraction equivalent.
Decimal 0.625, 625/(10^3). 5^3 is 125, which is a factor of 625, so there is an exact binary fraction equivalent. (It is binary 0.101).
The method in the referenced answer https://stackoverflow.com/a/489870/5712298 is equivalent to this for decimal to binary. It would need some work to extend to the general case, to allow for prime factors whose exponent is not 1.
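For the common decimal-to-binary case (r=10, s=2), the test reduces to checking whether 5^n divides a. A minimal C sketch (the function name has_exact_binary_equivalent is my own label):

#include <stdbool.h>
#include <stdio.h>

/* True if the decimal fraction a / 10^n has an exact binary equivalent,
 * i.e. if 5^n divides a, per the argument above. */
static bool has_exact_binary_equivalent(unsigned long long a, unsigned n)
{
    unsigned fives = 0;
    while (a != 0 && a % 5 == 0 && fives < n) { a /= 5; fives++; }
    return a == 0 || fives == n;
}

int main(void)
{
    printf("0.1   -> %d\n", has_exact_binary_equivalent(1, 1));   /* 0: no exact binary form */
    printf("0.625 -> %d\n", has_exact_binary_equivalent(625, 3)); /* 1: exact (binary 0.101) */
    return 0;
}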
To interpolate a signal in the frequency domain, one can pad it with zeros in the time domain and then do an FFT.
Suppose the number of elements in a given vector X is N, and Y is the same as X but padded on one side with N zeros. Then the following give the same result:
$$\hat{x}(k)=\sum_{n=0}^{2N-1} Y(n)e^{i2\pi k n/(2N)},\quad k=0,\dots,2N-1,$$
$$\hat{x}(k)=\sum_{n=0}^{N-1} X(n)e^{i2\pi k n/(2N)},\quad k=0,\dots,2N-1.$$
Now, if we use the FFTW package, the first equation needs 2N memory locations for the input vector while the second one needs only N (I do not know if this is even possible with the existing FFTW package)! Also, the computational complexity drops from 2N^2 log(2N) to 2N^2 log(N). The problem is worse for a 2D or 3D FFT. Is it possible to do the second approach using the FFTW package? This is fairly easy to do in MATLAB, though.
If x is a 2N-point signal padded with zeros above N, its DFT writes:
$$\hat{x}(k)=\sum_{n=0}^{2N-1} x(n)e^{i2\pi k n/(2N)}=\sum_{n=0}^{N-1} x(n)e^{i\pi k n/N}.$$
If k is even, k=2k':
$$\hat{x}(2k')=\sum_{n=0}^{N-1} x(n)e^{i2\pi k' n/N}.$$
Hence, the coefficients of even frequencies arise from the N-point discrete Fourier transform of x(n).
If k is odd, k=2k'+1:
$$\hat{x}(2k'+1)=\sum_{n=0}^{N-1} \left[x(n)e^{i\pi n/N}\right]e^{i2\pi k' n/N}.$$
Hence, the coefficients of odd frequencies arise from the N-point discrete Fourier transform of x(n)exp(i*M_PI*n/N).
Thus, the discrete Fourier transform of a zero-padded 2N-point signal reduces to two DFTs of signals of length N, and FFTW can be used to compute them.
The overall computation time will be 2*c*N*ln(N), where c is a constant, which is expected to be faster than the direct computation of the DFT, c*2*N*ln(2*N). Remember that ln(2*N)=ln(2)+ln(N): as N gets large, the extra work of the direct computation is negligible compared to ln(N), so the trick becomes useless, even if the dimension is larger than one. It does not affect the complexity.
Moreover, FFTW is really efficient, using many features of your PC if it is correctly installed, and it will be hard to do better in any case, even with the presented trick. Finally, if the input signal is real, you may use fftw_plan_dft_r2c_2d: only half of the coefficients in Fourier space are computed and stored.
Regarding memory requirements, if you are really short of memory, you can use the FFTW_IN_PLACE flag and use the same array for input and output. Yet, it is slightly slower.
The procedure presented above can be extended to compute the DFT of an LN-point signal, that is, an N-point signal padded with (L-1)N zeros: it reduces to the computation of L DFTs of length N.
Do you have any reference showing how MATLAB handles and optimizes the DFT of padded signals compared to FFTW?
EDIT: Further research on the 3D case:
The 3D DFT of a padded 3D signal x(n,m,p) splits in the same way, according to the parities of k_n, k_m and k_p. For instance, if k_n, k_m and k_p are all even, the coefficients arise from the NxNxN-point DFT of x(n,m,p); if k_n and k_m are even and k_p is odd, they arise from the NxNxN-point DFT of x(n,m,p)exp(i*M_PI*p/N). There are 8 such cases in total.
So, the computation of the 3D DFT of an NxNxN signal x padded to 2Nx2Nx2N reduces to the computation of 8 3D DFTs of size NxNxN. Since a 3D DFT is a combination of 3 passes of 1D DFTs, the total number of 1D DFTs of size N is 3x8xNxN, while the direct computation requires 3x(2N)x(2N) DFTs of size 2N. The computational time is 24cN^3 ln(N) against 24cN^3 ln(2N): a small gain is possible... Again, FFTW is fast...
Yet, instead of using a black-box 3D FFT, let's compute the 8 DFTs of size N at once, by performing the 1D DFTs in each direction.
1D DFT along N: 2 cases, NxN DFTs => 2cN^3 ln(N)
1D DFT along M: 2 cases, 2NxN DFTs => 4cN^3 ln(N)
1D DFT along P: 2 cases, 2Nx2N DFTs => 8cN^3 ln(N)
Hence, the total computation time is expected to be 14cN^3 ln(N) against 24cN^3 ln(2N): a small gain is possible... Again, FFTW is fast...
Moreover, the computation of the twiddled signal x(n)exp(i*M_PI*n/N) requires only a single call to exp: first compute w=exp(I*M_PI/N), then update wn=wn*w; x(n)=x(n)*wn, or use pow if precision becomes an issue.
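As an illustration of the 1D trick above, here is a minimal C sketch, assuming FFTW 3 with complex.h included before fftw3.h. The function name padded_dft_2n and the interleaving of even/odd bins into out[] are my own choices; FFTW_BACKWARD gives the e^{+i} sign convention used in the question.

#include <complex.h>
#include <math.h>
#include <fftw3.h>

/* Compute the 2N-point DFT of an N-point signal x zero-padded to length 2N,
 * using two N-point transforms. out must hold 2N values:
 * out[2k] are the even bins, out[2k+1] the odd bins. */
void padded_dft_2n(const double complex *x, double complex *out, int N)
{
    fftw_complex *in  = fftw_alloc_complex(N);
    fftw_complex *tmp = fftw_alloc_complex(N);
    fftw_plan p = fftw_plan_dft_1d(N, in, tmp, FFTW_BACKWARD, FFTW_ESTIMATE);

    /* Even bins: plain N-point DFT of x(n). */
    for (int n = 0; n < N; n++) in[n] = x[n];
    fftw_execute(p);
    for (int k = 0; k < N; k++) out[2*k] = tmp[k];

    /* Odd bins: N-point DFT of x(n)*exp(i*pi*n/N), built from a single exp call. */
    double complex w = cexp(I * M_PI / N), wn = 1.0;
    for (int n = 0; n < N; n++) { in[n] = x[n] * wn; wn *= w; }
    fftw_execute(p);
    for (int k = 0; k < N; k++) out[2*k+1] = tmp[k];

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(tmp);
}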
I know that Knapsack is NP-complete while it can be solved by DP. They say that the DP solution is pseudo-polynomial, since it is exponential in the "length of input" (i.e. the number of bits required to encode the input). Unfortunately, I did not get it. Can anybody explain that pseudo-polynomial thing to me slowly?
The running time is O(NW) for an unbounded knapsack problem with N items and knapsack of size W. W is not polynomial in the length of the input though, which is what makes it pseudo-polynomial.
Consider W = 1,000,000,000,000. It only takes 40 bits to represent this number, so input size = 40, but the computational runtime uses the factor 1,000,000,000,000, which is O(2^40).
So the runtime is more accurately said to be O(N * 2^(bits in W)), which is exponential.
Also see:
How to understand the knapsack problem is NP-complete?
The NP-Completeness of Knapsack
Complexity of dynamic programming algorithm for the 0-1 knapsack problem
Pseudo-polynomial time
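To make the O(NW) factor concrete, here is a minimal C sketch of the classic 0/1-knapsack dynamic program (my own illustration; the data in main is made up). The dp table has W+1 entries and the inner loop runs up to W times per item, so the running time scales with the value of W, not with the number of bits needed to write W down.

#include <stdio.h>
#include <stdlib.h>

/* dp[c] = best value achievable with capacity c after the items seen so far. */
int knapsack(int N, int W, const int weight[], const int value[])
{
    int *dp = calloc(W + 1, sizeof *dp);
    for (int i = 0; i < N; i++)
        for (int c = W; c >= weight[i]; c--) {     /* ~W iterations: the "pseudo" part */
            int with_item = dp[c - weight[i]] + value[i];
            if (with_item > dp[c]) dp[c] = with_item;
        }
    int best = dp[W];
    free(dp);
    return best;
}

int main(void)
{
    int weight[] = {3, 4, 5}, value[] = {4, 5, 6};
    printf("%d\n", knapsack(3, 10, weight, value));  /* prints 11 */
    return 0;
}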
In most of our problems, we're dealing with large lists of numbers which fit comfortably inside standard int/float data types. Because of the way most processors are built to handle 4-8 byte numbers at a time at no additional cost (relative to numbers that fit in, say, 1 byte), we rarely encounter a change in running time from scaling our numbers up or down within the ranges we encounter in real problems - so the dominant factor remains just the sheer quantity of data points, the n or m factors that we're used to.
(You can imagine that the Big-O notation is hiding a constant factor that divides-out 32 or 64 bits-per-datum, leaving only the number-of-data-points whenever each of our numbers fit in that many bits or less)
But try reworking the other algorithms to act on data sets involving big ints - numbers that require more than 8 bytes to represent - and see what that does to the runtime. The magnitude of the numbers involved always makes a difference, even in other algorithms like binary search, once you expand beyond the buffer of safety conventional processors give us "for free" by handling 4-8 byte batches.
The trick with the Knapsack algorithm that we discussed is that it's unusually sensitive (relative to other algorithms) to the magnitude of a particular parameter, W. Add one bit to W and you double the running time of the algorithm. We haven't seen that kind of dramatic response to changes in value in other algorithms before this one, which is why it might seem like we're treating Knapsack differently - but that's a genuine analysis of how it responds in a non-polynomial fashion to changes in input size.
The way I understand this is that the running time would have been polynomial in the input size if the capacity input were an array [1,2,...,W], which has size W. But the capacity input is not an array of numbers; it is a single integer. Time complexity is about the relationship to the size of the input. The size of an integer is not the value of the integer but the number of bits representing it. We do later convert this integer W into an array [1,2,...,W] in the algorithm, leading people to mistakenly think W is the size, but this array is not the input; the integer itself is.
Think of the input as "an array of stuff" and the size as "how many things are in the array". The item input is an array of n items, so size = n. The capacity input is NOT an array of W numbers, but a single integer represented by an array of about log(W) bits. Increase its size by 1 (adding 1 meaningful bit) and W doubles, so the run time doubles, hence the exponential time complexity.
The Knapsack algorithm's run time is bounded not only by the size of the input (n, the number of items) but also by the magnitude of the input (W, the knapsack capacity): O(nW), which is exponential in the number of bits used to represent W in binary. Computational complexity (i.e., how processing is done inside a computer through bits) is concerned only with the size of the inputs, not their magnitudes/values.
Disregard the value/weight list for a moment. Let's say we have an instance with knapsack capacity 2. W would take two bits in the input data. Now we shall increase the knapsack capacity to 4, keeping the rest of the input. Our input has only grown by one bit, but the computational complexity has increased twofold. If we increase the capacity to 1024, W would take just 11 bits of the input instead of 2, but the complexity has increased by a factor of 512. Time complexity grows exponentially in the size of W's binary (or decimal) representation.
Another simple example that helped me understand the pseudo-polynomial concept is the naive primality-testing algorithm. For a given number n, we check whether it is divided evenly by each integer in the range 2..√n, so the algorithm takes roughly √n steps. But here, n is the magnitude of the input, not its size.
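A minimal C sketch of that naive test (my own illustration), just to show where the magnitude enters:

#include <stdbool.h>

/* Trial division: the loop body runs roughly sqrt(n) times, so the work grows
 * with the magnitude of n, while the input size is only the number of bits of n.
 * (d*d could overflow for n near the top of the unsigned long long range.) */
static bool is_prime(unsigned long long n)
{
    if (n < 2) return false;
    for (unsigned long long d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}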
Now, the regular O(n) case:
By contrast, searching an array for a given element runs in polynomial time: O(n). It takes at most n steps and here n is the size of the input (the length of the array).
[ see here ]
Calculating bits required to store decimal number
Complexity is based on the input. In the knapsack problem, the inputs are the size (number of items), the max capacity W, and the profit and weight arrays. We construct the dp table as size * W, so it feels like polynomial time complexity. But the input W is an integer, not an array, so the running time is O(size * W) = O(size * 2^(number of bits required to store W)). If the number of bits increases by 1, the running time doubles. Thus it is exponential in the input size, and thereby pseudo-polynomial.