Prolog Power Function

I am new to Prolog and while I can understand the code, I find it hard to create a program. I am trying to create a function that takes an integer and returns 2^(integer); for example, pow(4) returns 16 (2^4). I also need it to run in a loop that keeps taking input until the user enters a negative integer, at which point it exits.
In this example, C is a counter and X is the user input; I tried to include a variable for the output but can't think how to integrate it.
pow(0):- 0.
pow(1):- 2.
pow(X):-
X > 1,
X is X-1,
power(X),
C is X-1,
pow(X1),
X is 2*2.
pow(X):- X<0, C is 0.
pow(C).

You really need to read something about Prolog before trying to program in it. Skim through http://en.wikibooks.org/wiki/Prolog, for example.
Prolog doesn't have "functions": there are predicates. All inputs and outputs are via predicate parameters, the predicate itself doesn't return anything.
So pow(0):- 0. and pow(1):- 2. don't make any sense. What you want is something like pow(0, 0). and pow(1, 2)., where the first parameter is the input and the second is the output.
X is X-1 also doesn't make sense: in Prolog variables are like algebra variables, X means the same value through the whole system of equations. Variables are basically write-once, and you have to introduce new variables in this and similar cases: X1 is X-1.
Hope that's enough info to get you started.
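A minimal sketch of that idea, with a second argument for the result and a driver predicate (called loop here just for illustration) that stops on a negative number; note it uses 2^0 = 1:
% pow(+N, -P): P is 2 raised to the power N, for N >= 0.
pow(0, 1).
pow(N, P) :-
    N > 0,
    N1 is N - 1,
    pow(N1, P1),
    P is 2 * P1.

% Keep reading integers (terms ending with a full stop) until a negative one.
loop :-
    read(X),
    (   integer(X), X >= 0
    ->  pow(X, P),
        write(P), nl,
        loop
    ;   true   % negative or non-integer input: stop
    ).
At the top level, ?- loop. then entering 4. prints 16; entering a negative integer such as -1. ends the loop.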

The [naive] recursive solution:
pow2(0,1) . % base case: any number raised to the 0 power is 1, by definition
pow2(N,M) :- % a positive integral power of 2 is computed thus:
integer(N) , % - verify that N is an integer
N > 0 , % - verify that N is positive
N1 is N-1 , % - decrement N (towards zero)
pow2(N1,M1) , % - recurse down (when we hit zero, we start popping the stack)
M is M1*2 % - multiply by 2
. %
pow2(N,M) :- % negative integral powers of 2 are computed the same way:
integer(N) , % - verify that N is an integer
N < 0 , % - verify that N is negative
N1 is N+1 , % - increment N (towards zero)
pow2(N1,M1) , % - recurse down (when we hit zero, we start popping the stack)
M is M1 / 2.0 % - divide by 2
. % Easy!
The above, however, will overflow the stack when the recursion level is sufficiently high (ignoring arithmetic overflow issues). SO...
The tail-recursive solution is optimized away into iteration:
pow2(N,M) :- %
integer(N) , % validate that N is an integer
pow2(N,1,M) % invoke the worker predicate, seeding the accumulator with 1
. %
pow2(0,M,M) . % when we hit zero, we're done
pow2(N,T,M) :- % otherwise...
N > 0 , % - if N is positive,
N1 is N-1 , % - decrement N
T1 is T*2 , % - double the accumulator
pow2(N1,T1,M) % - recurse down
. %
pow2(N,T,M) :- % otherwise...
N < 0 , % - if N is negative,
N1 is N+1 , % - increment N
T1 is T / 2.0 , % - halve the accumulator
pow2(N1,T1,M) % - recurse down
. %
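A couple of example queries, with the clauses above loaded:
?- pow2(10, X). % X = 1024
?- pow2(-2, X). % X = 0.25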

Related

Removing DC component from a matrix in chunks in Octave

I'm new to Octave, and if this has been asked and answered then I'm sorry, but I have no idea what the phrase is for what I'm looking for.
I'm trying to remove the DC component from a large matrix, but in chunks, as I need to do calculations on each chunk.
What I've got so far:
r = dlmread('test.csv',';',0,0);
x = r(:,2);
y = r(:,3); % we work on the 3rd column
d = 1
while d <= (length(y) - 256)
e = y(d:d+256);
avg = sum(e) / length(e);
k(d:d+256) = e - avg; % this is the part I need help with, how to get the chunk with the right value into the matrix
d += 256;
endwhile
% to check the result I like to see it
plot(x, k, '.');
If I change the line to:
k(d:d+256) = e - 1024;
it works perfectly.
I know there is something like an element-wise operation, but if I use e .- avg I get this:
warning: the '.-' operator was deprecated in version 7
and it still doesn't do what I expect.
I must be missing something, any suggestions?
GNU Octave, version 7.2.0 on Linux(Manjaro).
Never mind, the code works as expected.
The result (k) looked corrupted because the chosen chunk size was too small for my signal; changing 256 to 4096 gave me a better result.
+ and - are always element-wise. Beware that d:d+256 is 257 elements, not 256. So if you then increment d by 256, consecutive chunks share one overlapping point.
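A minimal sketch of the loop with non-overlapping 256-sample chunks, using mean() for the per-chunk average (variable names follow the question):
d = 1;
while d + 255 <= length(y)
  e = y(d:d+255);            % exactly 256 samples
  k(d:d+255) = e - mean(e);  % subtract this chunk's DC component
  d += 256;                  % the next chunk starts right after this one
endwhile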

SWI-Prolog: Generalize a predicate to calculate the power of some function

I want to generalize some predicate written in SWI-Prolog to calculate the power of some function. My predicate so far is:
% calculates the +Power and the +Argument of some function +Function with value +Value.
calc_power(Value, Argument, Function, Power) :-
not(Power is 0),
Power is Power_m1 + 1,
Value =..[Function, Buffer],
calc_power(Buffer, Argument, Function, Power_m1), !.
calc_power(Argument, Argument, _, 0).
The call calc_power((g(a)),A,f,POW). so far gives:
A = g(a),
POW = 0.
My generalization should also solve calls like this:
calc_power(A1, a, f, 3).
In that special case the solution should be A1 = f(f(f(a))). But for some reason it doesn't work. I get the error:
ERROR: Arguments are not sufficiently instantiated
in line
Power is Power_m1 + 1
which probably means that in SWI-Prolog it is not possible to apply + to two unbound variables. How can I solve this problem?
You can delay the + 1 operation with:
int_succ(I0, I1) :-
( nonvar(I0) ->
integer(I0),
I0 >= 0,
I1 is I0 + 1
; nonvar(I1) ->
integer(I1),
I1 >= 1,
I0 is I1 - 1
; when((nonvar(I0) ; nonvar(I1)), int_succ(I0, I1))
).
Example in SWI-Prolog:
?- int_succ(I0, I1), I1 = 7.
I0 = 6,
I1 = 7.
This is more flexible than https://www.swi-prolog.org/pldoc/man?predicate=succ/2 , and can of course be modified to support negative numbers if desired.
I found a solution:
:- use_module(library(clpfd)).
% calculates the +Power and the +Argument of some function +Function with value +Value.
calc_power(Argument, Argument, _, 0).
calc_power(Value, Argument, Function, Power) :-
Power #\= 0,
Power #= Power_m1 + 1,
Value =..[Function, Buffer],
calc_power(Buffer, Argument, Function, Power_m1).
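With the clpfd version loaded, the call from the question now works in the generating direction as well; for example:
?- calc_power(A1, a, f, 3). % binds A1 = f(f(f(a)))
Note that the base case is listed first and the cut has been dropped, so a query such as calc_power(f(f(a)), A, f, POW) enumerates A = f(f(a)), POW = 0, then A = f(a), POW = 1, then A = a, POW = 2 on backtracking.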

Using linear approximation to perform addition and subtraction | error barrier

I'm attempting my first solo project, after taking an introductory course in machine learning, where I'm trying to use linear approximation to predict the outcome of addition/subtraction of two numbers.
I have 3 features: first number, subtraction/addition (0 or 1), and second number.
So my input looks something like this:
3 0 1
4 1 2
3 0 3
With corresponding output like this:
2
6
0
I have (I think) successfully implemented the regression algorithm, as the squared error does gradually decrease, but with 100 values ranging from 0 to 50, the squared error flattens out at around 685.6 after about 400 iterations.
Graph: Squared Error vs Iterations
To fix this, I have tried using a larger dataset for training, getting rid of regularization, and normalizing the input values.
I know that one of the steps to fix high bias is to add complexity to the approximation, but I want to maximize the performance at this particular level. Is it possible to go any further on this level?
My linear approximation code in Octave:
% Iterate
for i = 1 : iter
  % hypothesis
  h = X * Theta;
  % reg theta prep
  regTheta = Theta;
  regTheta(:, 1) = 0;
  % cost calc
  J(i, 2) = (1 / (2 * m)) * (sum((h - y) .^ 2) + lambda * sum(sum(regTheta .^ 2, 1), 2));
  % theta calc
  Theta = Theta - (alpha / m) * ((h - y)' * X)' + lambda * sum(sum(regTheta, 1), 2);
end
Note: I'm using 0 for lambda, so as to ignore regularization.
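As a side note on the update line: the regularization term there is summed into a single scalar and added, whereas the usual regularized gradient step keeps it element-wise and subtracts it. A sketch using the same variable names (with lambda = 0 this makes no difference to the results):
% standard regularized gradient descent step (bias term excluded via regTheta)
Theta = Theta - (alpha / m) * ((h - y)' * X)' - (alpha * lambda / m) * regTheta;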

Comparing two functions based on Asymptotic notations

f(n) = 1 + 2 + 3 + ... + n
g(n) = 3n^2 + n log n
Determine whether f = O(g), f = Ω(g), or f = Θ(g).
As per my effort and understanding, my guess is that f = O(g), since g(n) has an n^2 term, which grows faster than n.
Another way: if both are divided by n, f(n) will have the constant 1 while g(n) has n log n, which grows faster than a constant. So f = O(g).
Is that a correct answer?
What actually is the scaling property of Big-O?
How do I prove that for any constant c > 0, cf(n) is O(f(n))?
My understanding so far:
cf(n) < (c + k)f(n) holds for all n > 0 and k > 0.
i. Constant factors are ignored.
ii. Only the powers and functions of n should be exploited.
It is this ignoring of constant factors that motivates such a notation, which proves cf is O(f).
Is this explanation enough to prove the scaling property of Big-O?
f(n) = O(g(n)) if there is a positive constant c such that
|f(n)| <= c*|g(n)| for all n >= n0.
Since f(n) = n(n+1)/2, which is on the order of n^2,
and n^2 <= 3n^2 + n log n (ignoring the constants) for all n >= 1, yes, your answer is right.
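Spelled out as a chain of inequalities, using the closed form of the sum:

\[ f(n) = 1 + 2 + \cdots + n = \frac{n(n+1)}{2} \le n^2 \le 3n^2 + n\log n = g(n) \quad \text{for all } n \ge 1, \]

so the definition is satisfied with c = 1 and n0 = 1, giving f(n) = O(g(n)). The scaling property is satisfied just as directly: for any constant c > 0, cf(n) <= c*f(n) holds for all n >= 1, so the constant in the definition can be taken to be c itself, which gives cf(n) = O(f(n)).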

How to calculate a large size FFT using smaller sized FFTs?

If I have an FFT implementation of a certain size M (power of 2), how can I calculate the FFT of a set of size P=k*M, where k is a power of 2 as well?
#define M 256
#define P 1024
complex float x[P];
complex float X[P];
// Use FFT_M(y) to calculate X = FFT_P(x) here
[The question is expressed in a general sense on purpose. I know FFT calculation is a huge field and many architecture specific optimizations were researched and developed, but what I am trying to understand is how is this doable in the more abstract level. Note that I am no FFT (or DFT, for that matter) expert, so if an explanation can be laid down in simple terms that would be appreciated]
Here's an algorithm for computing an FFT of size P using two smaller FFT functions, of sizes M and N (the original question calls the sizes M and k).
Inputs:
P is the size of the large FFT you wish to compute.
M, N are selected such that MN=P.
x[0...P-1] is the input data.
Setup:
U is a 2D array with M rows and N columns.
y is a vector of length P, which will hold the FFT of x.
Algorithm:
step 1. Fill U from x by columns, so that U looks like this:
x(0) x(M) ... x(P-M)
x(1) x(M+1) ... x(P-M+1)
x(2) x(M+2) ... x(P-M+2)
... ... ... ...
x(M-1) x(2M-1) ... x(P-1)
step 2. Replace each row of U with its own FFT (of length N).
step 3. Multiply each element of U(m,n) by exp(-2*pi*j*m*n/P).
step 4. Replace each column of U with its own FFT (of length M).
step 5. Read out the elements of U by rows into y, like this:
y(0) y(1) ... y(N-1)
y(N) y(N+1) ... y(2N-1)
y(2N) y(2N+1) ... y(3N-1)
... ... ... ...
y(P-N) y(P-N+1) ... y(P-1)
Here is MATLAB code which implements this algorithm. You can test it by typing fft_decomposition(randn(256,1), 8);
function y = fft_decomposition(x, M)
% y = fft_decomposition(x, M)
% Computes FFT by decomposing into smaller FFTs.
%
% Inputs:
% x is a 1D array of the input data.
% M is the size of one of the FFTs to use.
%
% Outputs:
% y is the FFT of x. It has been computed using FFTs of size M and
% length(x)/M.
%
% Note that this implementation doesn't explicitly use the 2D array U; it
% works on samples of x in-place.
q = 1;  % Offset because MATLAB starts at one. Set to 0 for C code.
x_original = x;
P = length(x);
if mod(P,M)~=0, error('Invalid block size.'); end;
N = P/M;
% step 2: FFT-N on rows of U.
for m = 0 : M-1
  x(q+(m:M:P-1)) = fft(x(q+(m:M:P-1)));
end;
% step 3: Twiddle factors.
for m = 0 : M-1
  for n = 0 : N-1
    x(m+n*M+q) = x(m+n*M+q) * exp(-2*pi*j*m*n/P);
  end;
end;
% step 4: FFT-M on columns of U.
for n = 0 : N-1
  x(q+n*M+(0:M-1)) = fft(x(q+n*M+(0:M-1)));
end;
% step 5: Re-arrange samples for output.
y = zeros(size(x));
for m = 0 : M-1
  for n = 0 : N-1
    y(m*N+n+q) = x(m+n*M+q);
  end;
end;
err = max(abs(y-fft(x_original)));
fprintf( 1, 'The largest error amplitude is %g\n', err);
return;
% End of fft_decomposition().
kevin_o's response worked quite well. I took his code and eliminated the loops using some basic MATLAB tricks. It is functionally identical to his version:
function y = fft_decomposition(x, M)
% y = fft_decomposition(x, M)
% Computes FFT by decomposing into smaller FFTs.
%
% Inputs:
% x is a 1D array of the input data.
% M is the size of one of the FFTs to use.
%
% Outputs:
% y is the FFT of x. It has been computed using FFTs of size M and
% length(x)/M.
%
% Note that this implementation doesn't explicitly use the 2D array U; it
% works on samples of x in-place.
q = 1; % Offset because MATLAB starts at one. Set to 0 for C code.
x_original = x;
P = length(x);
if mod(P,M)~=0, error('Invalid block size.'); end;
N = P/M;
% step 2: FFT-N on rows of U.
X=fft(reshape(x,M,N),[],2);
% step 3: Twiddle factors.
X=X.*exp(-j*2*pi*(0:M-1)'*(0:N-1)/P);
% step 4: FFT-M on columns of U.
X=fft(X);
% step 5: Re-arrange samples for output.
x_twiddle=bsxfun(@plus,M*(0:N-1)',(0:M-1))+q;
y=X(x_twiddle(:));
% err = max(abs(y-fft(x_original)));
% fprintf( 1, 'The largest error amplitude is %g\n', err);
return;
% End of fft_decomposition()
You could just use the last log2(k) passes of a radix-2 FFT, assuming the previous FFT results are from appropriately interleaved data subsets.
Well, an FFT is basically a recursive type of Fourier transform. It relies on the fact that, as Wikipedia puts it:
The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms only depend on the fact that e^(-2pi*i/N) is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it.
So this has pretty much already been done inside the FFT itself. If you are talking about getting longer-period signals out of your transform, you are better off doing a DFT over data sets of limited frequencies. There might be a way to do it from the frequency domain, but I don't know if anyone has actually done it. You could be the first! :)