Octave FWHM calculation

I am having a problem calculating the FWHM of my data: the fwhm function in the signal package gives a value 100 times bigger than I expected.
What I did is this: based on the Gaussian distribution function (you can find it on Wikipedia), I produced some data. In this function you can give a specific sigma (RMS) value (FWHM = sigma * 2.355). Here is the script I wrote to understand the situation:
x=10:0.01:40;
x0=25;
sigma=0.25;
y=(1/(sigma*sqrt(2*pi)))*exp(-((x-x0).^2)/(2*sigma^2));
z=fwhm(y)/2.355;
plot(x,y)
When I compared the results, the output of the fwhm function (24.999) is 100 times bigger than the value I used (0.25) in the function.
If you have any idea, it will be very helpful.
Thanks in advance.

Your z is 100 times bigger because your step in x is 0.01 (1/100). fwhm(y) assumes that the step size in x is 1; if it is not, you have to pass the x values as well.
In your case you should do:
z = fwhm(x, y)/2.355
z = 0.24999
which matches your sigma.
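For reference, here is a minimal self-contained version of the corrected script (assuming the signal package is installed; the pkg load line is only needed if it is not loaded automatically):
pkg load signal  % fwhm comes from the signal package
x = 10:0.01:40;
x0 = 25;
sigma = 0.25;
y = (1/(sigma*sqrt(2*pi)))*exp(-((x-x0).^2)/(2*sigma^2));
z = fwhm(x, y)/2.355  % pass x so the 0.01 step size is accounted for; z is ~0.25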

Calculating DFT of time signal in MATLAB

This code computes the DFT from the time domain.
Can anybody look at the code below and help me get the right answer?
My problem is: when I change the value of N, for example to 4, 5, 10, or other values, X(1) changes with it. But I think X(1) must be the same for every value of N, just like in the figure below: the N value changes but the vertical value stays the same.
I appreciate your help.
Thank you.
clear; clc;
%% Analytical DFT by direct summation
N = 4;
k = 0:N-1;
X = zeros(N,1);
t = k/N;
x = (5 + 2*cos(2*pi*t - pi/2) + 3*cos(4*pi*t))
% x = abs((1-(0.012.*(pi.*52.*(t-0.3721)).^2)).*exp(-(pi.*52.*(t-0.3721).^2)))
abs(sum(x))
for k = 0:N-1
    for n = 0:N-1
        X(k+1) = X(k+1) + x(n+1).*exp(-1i.*2.*pi.*(n).*(k)/N);
    end
end
k1 = [0:N-1];
stem(k1, abs(X))
% xlim([0 1])
% ylim([-1 1])
xlabel('Frequency');
ylabel('|X(k)|');
title('Frequency domain - Magnitude response')
Your definition of the DFT (which is probably the most common definition) does not have the property that X(1) remains constant with N. Instead, it is X(1)/N that remains constant. To use this DFT to get the magnitudes of the input at various frequencies, you'll need to divide the DFT output by N.
To verify this, you can call MATLAB's fft function and compare it with your results; you should get the same answer from fft. Note that MATLAB's fft documentation says:
The resulting FFT amplitude is A*n/2, where A is the original amplitude and n is the number of FFT points.
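As a quick check, here is a minimal sketch reusing the x, X and N from the code above (X_fft is just an illustrative variable name):
X_fft = fft(x(:));   % built-in DFT, same convention as the double loop
max(abs(X - X_fft))  % ~0 up to round-off, so the loop matches fft
X(1)/N               % stays the same when N changes: the mean (DC term) of x, 5 for this test signal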

Function that will not return 0

I am writing a formula to use as a decay multiplier on a given value.
The problem is the following: I have a processing window of, let's say, 10 days, and this window is recomputed every day. I need to decay a certain parameter by a factor reflecting the number of days that an id has been present. Currently what I do is (previousWinSize - (start of the current window - start of the previous window)) / previousWinSize.
In this case, if my previous window size is 10 and the difference in processing days is two, (10-2)/10 gives me 0.8 to multiply my variable by, decaying 0.2 of it.
However, if I have a 3-day window and again a 2-day difference, (3-2)/3 gives a value close to 0, which cuts more than I would like.
I am looking for a formula that scales better when the numbers are small and does not produce a huge decay factor.
Thank you in advance.
I recommend making use of a sigmoid function, e.g. y = 1 / (1 + e^(-a(x - b))).
You can take the output of your current function, i.e. a number between 0 and 1 based on the difference in days of processing, and feed it into the sigmoid. If you set the a (slope) and b (inflection point) parameters properly you can, for example, ensure that the lowest decay multiplier you get is ~0.5 when your original equation returns a number close to 0.
I've graphed the example I stated above here:
https://www.desmos.com/calculator/nqemuexjhg
(This is based on: https://www.desmos.com/calculator/rna4aqta0c)
I think you do have two edge cases with this method though. When your equation returns 0 the sigmoid isn't exactly going to give you 0.5 (which you may not even want to begin with), you'll end up getting something that's close to 0.5. In this scenario what you may start to see is your values drifting if you keep applying the sigmoid. The same is true for when your equation returns 1. After putting it through the sigmoid you won't get 1, you'll get something close to 1.
What I'd do in such a scenario is have some sort of check before the sigmoid gets applied, e.g.:
if(x == 0)
y = 0;
else if(x == 1)
y = 1;
else
y = sigmoid(x);
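A minimal Octave sketch of the whole idea (the function name decayMultiplier and the a = 6, b = 0 values are only illustrative, not tuned for any particular window size):
function y = decayMultiplier(x, a, b)
  % x is the output of the original window formula, expected in [0, 1]
  if x == 0
    y = 0;
  elseif x == 1
    y = 1;
  else
    y = 1 / (1 + exp(-a*(x - b)));  % sigmoid with slope a and inflection point b
  end
end
% example calls, once the function is defined:
decayMultiplier(0.1, 6, 0)  % ~0.65 instead of 0.1, so small windows are not cut as hard
decayMultiplier(0.8, 6, 0)  % ~0.99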
Sources / Possible further reading:
https://en.wikipedia.org/wiki/Sigmoid_function

Finding the smallest distance in a set of points from the origin

I am to find the smallest distance between a given set of points and the origin. I have a matrix with 2 columns and 10 rows; each row represents the coordinates of one point. I would like to calculate the distance between each point and the origin, find the smallest of these distances, and determine which point gave it.
In Octave, I calculate this distance using norm, so each point in my set has a distance associated with it, and the smallest distance is obviously the one I'm looking for. However, the code I wrote below isn't working the way it should.
function [dist,koor] = bonus4(S)
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
for i=1:size(S)
L=norm(S(i, :))
dist=norm(S(9, :));
koor=S(9, :) ;
end
i = 9 gives the correct answer, but I hard-coded that index; I need Octave to find that number for me. How do I tell Octave that this is the index I want? Specifically:
dist=norm(S(9, :));
koor=S(9, :);
I cannot use any packages. I found the geometry package online, but I am supposed to solve the task without additional packages.
I'll work off of your original code. Firstly, you want to compute the norm of all of the points and store them as individual elements in an array. Your current code isn't doing that: it overwrites the variable L, a single value, at each iteration of the loop.
You'll want to make L an array and store the norms at each iteration of the loop. Once you do this, you'll want to find the location as well as the minimum distance itself. That can be done with one call to min where the first output gives you the minimum distance and the second output gives you the location of the minimum. You can use the second output to slice into your S array to retrieve the actual point.
Last but not least, you need to define S first before calling this function. You are defining S inside the function and that will probably give you unintended results if you want to change the input into this function at each invocation. Therefore, define S first, then call the function:
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
function [dist,koor] = bonus4(S)
%// New - Create an array to store the distances
L = zeros(size(S,1), 1);
%// Change to iterate over number of rows
for i=1:size(S,1)
L(i)=norm(S(i, :)); %// Change
end
[dist,ind] = min(L); %// Find the minimum distance
koor = S(ind,:); %// Get the actual point
end
Or, make sure you save the above function in a file called bonus4.m, then do this in the Octave command prompt:
octave:1> S= [-6.8667, -44.7967;
> -38.0136, -35.5284;
> 14.4552, -27.1413;
> 8.4996, 31.7294;
> -17.2183, 28.4815;
> -37.5100, 14.1941;
> -4.2664, -24.4428;
> -18.6655, 26.9427;
> -15.8828, 18.0170;
> 17.8440, -22.9164];
octave:2> [dist,koor] = bonus4(S);
Though this code works, I'd argue that it's slow because you're using a for loop. A faster way is to do this completely vectorized. Because norm behaves differently for matrices than for vectors, you'll have to compute the distance yourself. Since you are measuring the distance from the origin, you can simply square each of the columns individually and then sum across the columns for each row.
Therefore, you can just do this:
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
function [dist,koor] = bonus4(S)
%// New - Computes the norm of each point
L = sqrt(sum(S.^2, 2));
[dist,ind] = min(L); %// Find the minimum distance
koor = S(ind,:); %// Get the actual point
end
The function sum can be used to sum over one dimension independently. By doing S.^2, you square each term in the points matrix; then, by calling sum with 2 as the second parameter, you sum over all of the columns for each row. Taking the square root of this result computes the distance of each point to the origin, exactly as the for loop does. However, this (at least to me) is more readable, and I daresay it is faster for larger numbers of points.
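For completeness, the same vectorized computation can also be done directly at the prompt without wrapping it in a function (this is just the body of bonus4 inlined):
[dist, ind] = min(sqrt(sum(S.^2, 2)));  % smallest distance and its row index
koor = S(ind, :)                        % the point that gave that distance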

Temperature Scale in SA

First, this is not a question about temperature iteration counts or automatically optimized scheduling. It's about how the magnitude of the data relates to the scaling of the exponentiation.
I'm using the classic formula:
if(delta < 0 || exp(-delta/tK) > random()) { // new state }
The input to the exp function is negative because delta/tK is positive, so the exp result is always less than 1. The random function also returns a value in the 0 to 1 range.
My test data is in the range 1 to 20, and the delta values are below 20. I pick a start temperature equal to the initial computed temperature of the system and linearly ramp down to 1.
In order to get SA to work, I have to scale tK. The working version uses:
exp(-delta/(tK * .001)) > random()
So how does the magnitude of tK relate to the magnitude of delta? I found the scaling factor by trial and error, and I don't understand why it's needed. To my understanding, as long as delta > tK and the step size and number of iterations are reasonable, it should work. In my test case, if I leave out the extra scale the temperature of the system does not decrease.
The various online sources I've looked at say nothing about working with real data. Sometimes they include the Boltzmann constant as a scale, but since I'm not simulating a physical particle system that doesn't help. Examples (typically with pseudocode) use values like 100 or 1000000.
So what am I missing? Is scaling another value that I must set by trial and error? It's bugging me because I don't just want to get this test case running, I want to understand the algorithm, and magic constants mean I don't know what's going on.
Classical SA has 2 parameters: startingTemperature and cooldownSchedule (= what you call scaling).
Configuring 2+ parameters is annoying, so in OptaPlanner's implementation, I automatically calculate the cooldownSchedule based on the timeGradient (which is a double going from 0.0 to 1.0 during the solver time). This works well. As a guideline for the startingTemperature, I use the maximum score diff of a single move. For more information, see the docs.

Function to dampen a value

I have a list of documents, each having a relevance score for a search query. I need older documents to have their relevance score dampened, to introduce their date into the ranking process. I already tried fiddling with functions such as 1/(1 + date_difference), but the reciprocal function is too discriminating for close recent dates.
I was thinking of a mathematical function with range (0..1) and domain (0..x) to scale their score, where the x-axis is the age of a document. It's easiest to explain what else I need from the function with an image:
Decaying behavior is often modeled well by an exponential function (many decaying processes in nature also follow one). You would use two positive parameters A and B and get
y(x) = A * exp(-B * x)
Since you want a y-range of [0, 1], set A = 1. A larger B gives a faster decay.
If a simple 1/(1+x) decreases too quickly too soon, a sigmoid function like 1/(1+e^-x) or the error function might be better suited to your purpose. Let the current date be somewhere in the negative numbers for such a function, and you can get a value that is current for some configurable time and then decreases towards a base value.
log((x+1) - age_of_document)
where the base of the logarithm is (x+1). Note that x is as per your diagram and is the "threshold". If the age of the document is greater than x, the score goes negative. Multiply by the maximum possible score to introduce scaling.
E.g. for domain (0, 10) with a maximum score of 10: 10 * log(11 - age_of_document) / log(11)
A bit late, but as thiton says, you might want to use a sigmoid function instead, since it has a "floor" value for your long-tail data points. E.g.:
0.8/(1 + 5^(x - 3)) + 0.2 - you can adjust the constants 5 and 3 to control the slope of the curve, and the 0.2 is where the floor will be.
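A quick Octave sketch comparing the suggestions above (the constants are only illustrative, not tuned to your data):
age = 0:0.1:10;                             % age of the document, in days
y_recip = 1 ./ (1 + age);                   % original reciprocal: drops sharply near 0
y_exp   = exp(-0.3*age);                    % exponential decay, A = 1, B = 0.3
y_sig   = 0.8 ./ (1 + 5.^(age - 3)) + 0.2;  % sigmoid with a floor of 0.2
plot(age, y_recip, age, y_exp, age, y_sig)
legend('1/(1+x)', 'exp(-0.3x)', 'sigmoid with floor 0.2')
xlabel('age of document'); ylabel('damping multiplier')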