f(n) = 1 + 2 + 3 + ... + n
g(n) = 3n^2 + n log n
Determine whether f = O(g), f = Ω(g), or f = Θ(g).
My attempt: one guess is that f = O(g), since g(n) has an n^2 term, which grows faster than n.
Another way: if both are divided by n, f(n) is left with a constant 1 while g(n) is left with n log n, which grows faster than the constant 1. So f = O(g).
Is that a correct answer?
What actually is the scaling property of Big-O?
How to prove: for any constant c > 0, cf(n) is O(f(n)).
Understanding so far:
c*f(n) < (c + k)*f(n) holds for all n > 0 and k > 0.
i. Constant factors are ignored.
ii. Only the powers and functions of n matter.
It is this ignoring of constant factors that motivates such a notation, which proves cf is O(f).
Is this explanation enough to prove the scaling property of Big-O?
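(One direct way to argue it from the definition, sketched in the notation above: take c' = c and n0 = 1. Then |c*f(n)| = c*|f(n)| <= c'*|f(n)| for all n >= 1, which is exactly the definition of cf(n) = O(f(n)). No extra constant k is needed; the definition only asks for some constant c'.)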
f(n) = O(g(n)) if there is a positive constant c such that
|f(n)| <= c*|g(n)| for all n >= n0.
Since f(n) = n(n+1)/2, which is on the order of n^2:
n(n+1)/2 <= n^2 <= 3n^2 + n log n for all n >= 1 (take c = 1 and n0 = 1), so yes, your answer is right.
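(In fact f = Θ(g) as well: n log n <= n^2 for n >= 1 gives g(n) <= 4n^2, and f(n) = n(n+1)/2 >= n^2/2, so f(n) >= (1/8)*g(n), i.e. f = Ω(g) too.)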
I'm trying to find the n0 of a function with a big-omega bound of n^3, where c = 2.25.
f(n) = 3n^3 - 39n^2 + 360n + 20. In order to prove that f(n) is Ω(n^3),
we need constants c, n0 > 0 such that f(n) >= c*n^3 for every n >= n0.
If c = 2.25, how do I find the smallest integer that satisfies n0?
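One way to pin it down (a brute-force sketch in Python, my own illustration rather than anything from the thread): f(n) >= 2.25*n^3 is the same as h(n) = 0.75n^3 - 39n^2 + 360n + 20 >= 0, and h is increasing once n passes the larger root of h'(n) = 2.25n^2 - 78n + 360 (about 29.2), so it suffices to scan for the last n where the inequality fails:
def f(n):
    return 3*n**3 - 39*n**2 + 360*n + 20

c = 2.25
n0 = 1
for n in range(1, 1000):   # scan well past the last sign change of f(n) - c*n^3
    if f(n) < c * n**3:    # n still violates the bound...
        n0 = n + 1         # ...so n0 must lie beyond it
print(n0)                  # prints 40: f(n) >= 2.25*n^3 holds for all n >= 40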
I found this on Stack Overflow: a reversible "binary to number" predicate. But I don't understand it.
:- use_module(library(clpfd)).
binary_number(Bs0, N) :-
reverse(Bs0, Bs),
binary_number(Bs, 0, 0, N).
binary_number([], _, N, N).
binary_number([B|Bs], I0, N0, N) :-
B in 0..1,
N1 #= N0 + (2^I0)*B,
I1 #= I0 + 1,
binary_number(Bs, I1, N1, N).
Example queries:
?- binary_number([1,0,1], N).
N = 5.
?- binary_number(Bs, 5).
Bs = [1, 0, 1] .
Could somebody explain the code to me?
Especially this: binary_number([], _, N, N). (the _)
Also, what does library(clpfd) do?
And why reverse(Bs0, Bs)? I took it out and it still works fine...
Thanks in advance.
In the original, binary_number([], _, N, N), the _ means you don't care what the value of the variable is. If you instead wrote binary_number([], X, N, N) (still not caring what X is), Prolog would issue a singleton variable warning. Also, what this clause says is that when the first argument is [] (the empty list), the 3rd and 4th arguments are unified.
As explained in the comments, use_module(library(clpfd)) causes Prolog to use the library for Constraint Logic Programming over Finite Domains. You can also find lots of good info on it via Google search of "prolog clpfd".
Normally, in Prolog, arithmetic expressions of comparison require that the expressions be fully instantiated:
X + Y =:= Z + 2. % Requires X, Y, and Z to be instantiated
Prolog would evaluate and do the comparison and yield true or false. It would throw an error if any of these variables were not instantiated. Likewise, for assignment, the is/2 predicate requires that the right hand side expression be fully evaluable with specific variables all instantiated:
Z is X + Y. % Requires X and Y to be instantiated
Using CLPFD you can have Prolog "explore" solutions for you. And you can further specify what domain you'd like to restrict the variables to. So, you can say X + Y #= Z + 2 and Prolog can enumerate possible solutions in X, Y, and Z.
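For example (an illustrative SWI-Prolog session; exact answer order and formatting vary by system):
?- X + Y #= Z + 2, [X,Y,Z] ins 0..2, label([X,Y,Z]).
X = 0, Y = 2, Z = 0 ;
X = 1, Y = 1, Z = 0 ;
X = 1, Y = 2, Z = 1 ;
...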
As an aside, the original implementation could be refactored a little to avoid the exponentiation each time and to eliminate the reverse:
:- use_module(library(clpfd)).
binary_number(Bin, N) :-
binary_number(Bin, 0, N).
binary_number([], N, N).
binary_number([Bit|Bits], Acc, N) :-
Bit in 0..1,
Acc1 #= Acc*2 + Bit,
binary_number(Bits, Acc1, N).
This works well for queries such as:
| ?- binary_number([1,0,1,0], N).
N = 10 ? ;
no
| ?- binary_number(B, 10).
B = [1,0,1,0] ? ;
B = [0,1,0,1,0] ? ;
B = [0,0,1,0,1,0] ? ;
...
But it has termination issues, as pointed out in the comments, for cases such as: Bs = [1|_], N #=< 5, binary_number(Bs, N). A solution was presented by @false which slightly modifies the above to solve those termination issues. I'll reiterate that solution here for convenience:
:- use_module(library(clpfd)).
binary_number(Bits, N) :-
binary_number_min(Bits, 0,N, N).
binary_number_min([], N,N, _M).
binary_number_min([Bit|Bits], N0,N, M) :-
Bit in 0..1,
N1 #= N0*2 + Bit,
M #>= N1,
binary_number_min(Bits, N1,N, M).
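With this version, the problematic query terminates: in Bs = [1|_], N #=< 5, binary_number(Bs, N), the added M #>= N1 constraint bounds every partial value by the final result, so once a prefix of the bit list already exceeds 5 no extension of it can succeed, and the search fails finitely after producing the five solutions N = 1 through 5.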
I am new to Prolog, and while I can understand code, I find it hard to create a program. I am trying to create a function that takes an integer and returns 2^(integer); for example, pow(4) returns 16 (2^4). I also need it to run in a loop, taking input until the user enters a negative integer, at which point it exits.
In this example, C is a counter and X is the user input; I tried to include a variable for the output but can't think how to integrate it.
pow(0):- 0.
pow(1):- 2.
pow(X):-
X > 1,
X is X-1,
power(X),
C is X-1,
pow(X1),
X is 2*2.
pow(X):- X<0, C is 0.
pow(C).
You really need to read something about Prolog before trying to program in it. Skim through http://en.wikibooks.org/wiki/Prolog, for example.
Prolog doesn't have "functions": there are predicates. All inputs and outputs are via predicate parameters, the predicate itself doesn't return anything.
So pow(0):- 0. and pow(1):- 2. don't make any sense. What you want is pow(0, 0). and pow(1, 2).: let the first parameter be the input, and the second be the output. (Mathematically you would want pow(0, 1), since 2^0 = 1; the point here is the syntax.)
X is X-1 also doesn't make sense: in Prolog, variables are like algebra variables; X means the same value through the whole system of equations. Variables are basically write-once, and you have to introduce new variables in this and similar cases: X1 is X-1.
Hope that's enough info to get you started.
The [naive] recursive solution:
pow2(0,1) . % base case: any number raised to the 0 power is 1, by definition
pow2(N,M) :- % a positive integral power of 2 is computed thus:
integer(N) , % - verify that N is an integer
N > 0 , % - verify that N is positive
N1 is N-1 , % - decrement N (towards zero)
pow2(N1,M1) , % - recurse down (when we hit zero, we start popping the stack)
M is M1*2 % - multiply by 2
. %
pow2(N,M) :- % negative integral powers of 2 are computed the same way:
integer(N) , % - verify that N is an integer
N < 0 , % - verify that N is negative
N1 is N+1 , % - increment N (towards zero)
pow2(N1,M1) , % - recurse down (when we hit zero, we start popping the stack)
M is M1 / 2.0 % - divide by 2
. % Easy!
The above, however, will overflow the stack when the recursion level is sufficiently high (ignoring arithmetic overflow issues). SO...
The tail-recursive solution is optimized away into iteration:
pow2(N,M) :- %
integer(N) , % validate that N is an integer
pow2(N,1,M) % invoke the worker predicate, seeding the accumulator with 1
. %
pow2(0,M,M) . % when we hit zero, we're done
pow2(N,T,M) :- % otherwise...
N > 0 , % - if N is positive,
N1 is N-1 , % - decrement N
T1 is T*2 , % - double the accumulator
pow2(N1,T1,M) % - recurse down
. %
pow2(N,T,M) :- % otherwise...
N < 0 , % - if N is negative,
N1 is N+1 , % - increment N
T1 is T / 2.0 , % - halve the accumulator
pow2(N1,T1,M) % - recurse down
. %
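For example (an illustrative session; prompt and formatting vary by Prolog system):
?- pow2(10, M).
M = 1024.
?- pow2(-2, M).
M = 0.25.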
If I have an FFT implementation of a certain size M (power of 2), how can I calculate the FFT of a set of size P=k*M, where k is a power of 2 as well?
#define M 256
#define P 1024
complex float x[P];
complex float X[P];
// Use FFT_M(y) to calculate X = FFT_P(x) here
[The question is expressed in a general sense on purpose. I know FFT calculation is a huge field and many architecture specific optimizations were researched and developed, but what I am trying to understand is how is this doable in the more abstract level. Note that I am no FFT (or DFT, for that matter) expert, so if an explanation can be laid down in simple terms that would be appreciated]
Here's an algorithm for computing an FFT of size P using two smaller FFT functions, of sizes M and N (the original question calls the sizes M and k).
Inputs:
P is the size of the large FFT you wish to compute.
M, N are selected such that MN=P.
x[0...P-1] is the input data.
Setup:
U is a 2D array with M rows and N columns.
y is a vector of length P, which will hold the FFT of x.
Algorithm:
step 1. Fill U from x by columns, so that U looks like this:
x(0) x(M) ... x(P-M)
x(1) x(M+1) ... x(P-M+1)
x(2) x(M+2) ... x(P-M+2)
... ... ... ...
x(M-1) x(2M-1) ... x(P-1)
step 2. Replace each row of U with its own FFT (of length N).
step 3. Multiply each element U(m,n) by exp(-2*pi*j*m*n/P).
step 4. Replace each column of U with its own FFT (of length M).
step 5. Read out the elements of U by rows into y, like this:
y(0) y(1) ... y(N-1)
y(N) y(N+1) ... y(2N-1)
y(2N) y(2N+1) ... y(3N-1)
... ... ... ...
y(P-N) y(P-N+1) ... y(P-1)
Here is MATLAB code which implements this algorithm. You can test it by typing fft_decomposition(randn(256,1), 8);
function y = fft_decomposition(x, M)
% y = fft_decomposition(x, M)
% Computes FFT by decomposing into smaller FFTs.
%
% Inputs:
% x is a 1D array of the input data.
% M is the size of one of the FFTs to use.
%
% Outputs:
% y is the FFT of x. It has been computed using FFTs of size M and
% length(x)/M.
%
% Note that this implementation doesn't explicitly use the 2D array U; it
% works on samples of x in-place.
q = 1; % Offset because MATLAB starts at one. Set to 0 for C code.
x_original = x;
P = length(x);
if mod(P,M)~=0, error('Invalid block size.'); end;
N = P/M;
% step 2: FFT-N on rows of U.
for m = 0 : M-1
x(q+(m:M:P-1)) = fft(x(q+(m:M:P-1)));
end;
% step 3: Twiddle factors.
for m = 0 : M-1
for n = 0 : N-1
x(m+n*M+q) = x(m+n*M+q) * exp(-2*pi*j*m*n/P);
end;
end;
% step 4: FFT-M on columns of U.
for n = 0 : N-1
x(q+n*M+(0:M-1)) = fft(x(q+n*M+(0:M-1)));
end;
% step 5: Re-arrange samples for output.
y = zeros(size(x));
for m = 0 : M-1
for n = 0 : N-1
y(m*N+n+q) = x(m+n*M+q);
end;
end;
err = max(abs(y-fft(x_original)));
fprintf( 1, 'The largest error amplitude is %g\n', err);
return;
% End of fft_decomposition().
kevin_o's response worked quite well. I took his code and eliminated the loops using some basic MATLAB tricks. It is functionally identical to his version:
function y = fft_decomposition(x, M)
% y = fft_decomposition(x, M)
% Computes FFT by decomposing into smaller FFTs.
%
% Inputs:
% x is a 1D array of the input data.
% M is the size of one of the FFTs to use.
%
% Outputs:
% y is the FFT of x. It has been computed using FFTs of size M and
% length(x)/M.
%
% Note that this implementation doesn't explicitly use the 2D array U; it
% works on samples of x in-place.
q = 1; % Offset because MATLAB starts at one. Set to 0 for C code.
x_original = x;
P = length(x);
if mod(P,M)~=0, error('Invalid block size.'); end;
N = P/M;
% step 2: FFT-N on rows of U.
X=fft(reshape(x,M,N),[],2);
% step 3: Twiddle factors.
X=X.*exp(-j*2*pi*(0:M-1)'*(0:N-1)/P);
% step 4: FFT-M on columns of U.
X=fft(X);
% step 5: Re-arrange samples for output.
x_twiddle=bsxfun(@plus,M*(0:N-1)',(0:M-1))+q;
y=X(x_twiddle(:));
% err = max(abs(y-fft(x_original)));
% fprintf( 1, 'The largest error amplitude is %g\n', err);
return;
% End of fft_decomposition()
You could just use the last log2(k) passes of a radix-2 FFT, assuming the previous FFT results are from appropriately interleaved data subsets.
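To make that concrete, here is a minimal numpy sketch (my illustration, not code from this answer) of one radix-2 pass: given the FFTs E and O of the even- and odd-indexed samples, the length-2M FFT is E + w*O in the first half and E - w*O in the second, where w holds the twiddle factors. Applying such a pass log2(k) times combines k interleaved M-point FFTs into one P-point FFT.
import numpy as np

def radix2_combine(E, O):
    # E, O: FFTs (each length M) of the even- and odd-indexed samples.
    # Returns the length-2M FFT of the interleaved sequence.
    M = len(E)
    w = np.exp(-2j * np.pi * np.arange(M) / (2 * M))  # twiddle factors
    return np.concatenate([E + w * O, E - w * O])

x = np.random.randn(8)
X = radix2_combine(np.fft.fft(x[0::2]), np.fft.fft(x[1::2]))
assert np.allclose(X, np.fft.fft(x))  # matches the direct 8-point FFT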
Well, an FFT is basically a recursive type of Fourier transform. It relies on the fact that, as Wikipedia puts it:
The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^(-2pi*i/N) is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
So this has pretty much already been done in the FFT. If you are talking about getting longer-period signals out of your transform, you are better off doing a DFT over the data sets of limited frequencies. There might be a way to do it from the frequency domain, but I don't know if anyone has actually done it. You could be the first! :)
I found some exercises where you combine n-bit 2's complement values in different ways and simplify the output where possible. (Their practice exercises use 16-bit values, but that's irrelevant.)
Eg:
!(!x&!y) == x|y
0 & y, negate the output == -1
I'm having no problem applying De Morgan's laws to the examples using AND, OR, and NOT, but I am having difficulty using NOT with + and -.
Eg:
!(!x+y) == x-y
!(y-1) == -y
How does NOT distribute?
Edit (responding to comments): I realize this is a bitwise NOT. My question is: in algebraic terms, how does it distribute? Example on Wikipedia
With 2's complement numbers, taking the bitwise NOT is the same as negating the number and subtracting 1, so !x is equivalent to -x - 1, where x can be a single variable or an expression.
Starting with !(!x+y): !x is -x - 1, so the expression is !(-x - 1 + y), which becomes -(-x - 1 + y) - 1, which simplifies to x - y.
And for !(y-1), that becomes -(y - 1) - 1 = -y + 1 - 1 = -y.
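These identities are easy to sanity-check mechanically (a quick sketch of mine; Python's ~ operator is the bitwise NOT, and Python integers behave like arbitrarily wide 2's complement values):
import random

for _ in range(1000):
    x = random.randint(-2**15, 2**15 - 1)  # mimic the 16-bit practice values
    y = random.randint(-2**15, 2**15 - 1)
    assert ~x == -x - 1          # the basic identity: NOT x = -x - 1
    assert ~(~x + y) == x - y    # !(!x+y) == x-y
    assert ~(y - 1) == -y        # !(y-1) == -y
print("all identities hold")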