I need to calculate a function in EES.
Function: T(t) = (T_surface - T_infinity)*e^(-b*t) + T_infinity
t is time, running from 1 to 40 seconds, and I need to evaluate the function at every second from 1 to 40.
How can I write this function in EES?
If I understand the question correctly, no differential equation has to be solved, and no integration, summation or the like has to be carried out. Only a calculation with variation of the variable t is required.
There are two possibilities. But first: EES is not case-sensitive! So T and t cannot coexist as separate variables; keep one and give the other a different name. I prefer 'tau' for time.
Parametric table
Write the equation in the Equations window and add the parameters you have. Do not define 'tau'. Like this:
T=((T_surface-T_infinity)*(exp(-b*tau)))+T_infinity
T_surface = 80[C]
T_infinity = 20[C]
b=3
Then open the parametric table and add the variables you want to see; at a minimum, include T and tau. Expand the table to 40 rows and enter the respective times. Then press the green play button (top left in the table).
Duplicate function:
T_surface = 80[C]
T_infinity = 20[C]
b = 0.1[1/s]
$varinfo tau[] units='s'
$varinfo T[] units='C'
Duplicate i=1,40
    tau[i] = i*1[s]
    T[i] = ((T_surface-T_infinity)*(exp(-b*tau[i]))) + T_infinity
End
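For readers who want to sanity-check the numbers outside EES, here is a minimal Octave sketch of the same calculation, assuming the constants from the Duplicate example (purely a cross-check, not part of the EES solution):
T_surface  = 80;    % [C]
T_infinity = 20;    % [C]
b          = 0.1;   % [1/s]
tau = (1:40)';      % time in seconds, 1..40
T = (T_surface - T_infinity)*exp(-b*tau) + T_infinity;
[tau, T]            % two-column table: time vs. temperature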
I generated random values using the following function:
P = floor(6*rand(1,30)+1)
Then, using T=find(P==5), I got the indices where the outcome is 5 and stored them in T. The output was:
T =
10 11 13 14 15 29
Now, I want to calculate the mean value of T using mean(T), but it gives me the following error:
error: mean(29): out of bound 1 (dimensions are 1x1) (note: variable 'mean' shadows function)
What I am trying to do is model the outcomes of rolling a fair die and record the first time a 5 comes up. Then I want to take the mean of all those times.
Although you don't explicitly say so in your question, it looks like you wrote
mean = mean(T);
When I tried that, it worked the first time I ran the code but the second and subsequent times it gave the same error that you got. What seems to be happening is that the first time you run the script it calculates the mean of T, which is a scalar, i.e. it has dimensions 1x1, and then stores it in a variable called mean, which then also has dimensions 1x1. The second time you run it, the variable mean is still present in the environment so instead of calling the function mean() Octave tries to index the variable called mean using the vector T as the indices. The variable mean only has one element, whose index is 1, so the first element of T whose value is different from 1 is out of bounds. If you call your variable something other than mean, such as, say, mu:
mu = mean(T);
then it should work as intended. A less satisfactory solution would be to write clear all at the top of your script, so that the variable mean is only created after the function mean() has been called.
I'm trying to create a regression that would include a polynomial (let's say 2nd order) of year on a certain interval of year (say 1 to 70) and a number of dummies for certain values of year (say for every year between 45 and 60).
If I didn't have the restriction for dummies, I believe the commands would be:
gen year2=year^2
regress y year year2 i.year if inrange(year,1,70)
(I can't make the dummies manually; there will be more than 15 of them in the end.) Could anybody help me, please?
If I then want to plot the estimated function without the dummies, why do these two commands produce different things?
twoway function _b[_cons] +_b[year]*x + _b[year2]*x^2, range(1 70)
twoway function _b[_cons] +_b[year]*year + _b[year2]*year^2, range(1 70)
The way I understood it, _b[_cons], _b[year] and _b[year2] retrieve the previously estimated coefficients for the corresponding terms, which are then multiplied in. Why does it produce different results, then, if x should be the same thing as year in this case?
I am not sure why Pearly is giving you such a hard time. I think this may be what you're looking for, but let me know if it is something different.
One thing to note: I am using a dataset that comes preloaded with Stata, which is usually a nice way to make an MCVE, like Nick was saying in your other post.
clear
sysuse gnp96
/* variables: gnp, date (quarterly) */
gen year = year(dofq(date)) // get yearly variable
gen year2=year^2 // get the square of the yearly variable
tab year if inrange(year,1970,1975), gen(yr) // generate dummy variables
// the dummy variables generated have null values for years not
// in the specified range, so we're going to fill those in
foreach v of varlist yr* {
replace `v' = 0 if `v' == .
}
// here's your regression
regress gnp year year2 yr* if inrange(year,1967,1990)
Now, yr* refers to your dummy variables; the * is a wildcard matching all variables whose names start with yr.
This gives you the range for the dummy variables and the range for the year variable.
As to your question on using x vs. year, I am only hypothesizing, but I think that when you use x it is continuous, since Stata isn't looking at your variables but just at the x axis, whereas your year variable is discrete (a bunch of integers), so it looks more like a step function. More information can be found using the command help twoway function.
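If it helps to see what the regression is doing numerically, here is a rough Octave sketch of the same idea (a design matrix with year, year squared, and one dummy column per selected year, fit by OLS); the data and constants are made up purely for illustration:
% toy data: years 1 to 70 with a noisy quadratic trend
year = (1:70)';
y = 2 + 0.5*year - 0.01*year.^2 + randn(70,1);
% one dummy column per year in the restricted range 45..60
dummy_years = 45:60;
D = double(year == dummy_years);     % 70x16 matrix via broadcasting
% design matrix: intercept, year, year^2, dummies; then OLS
X = [ones(70,1), year, year.^2, D];
b = X \ y;
% plot the fitted polynomial without the dummy terms, as in the
% twoway function call: x here is continuous, unlike the year variable
x = linspace(1, 70, 200);
yhat = b(1) + b(2)*x + b(3)*x.^2;
plot(year, y, 'o', x, yhat, '-');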
I need to find the smallest distance between a given set of points and the origin. I have a matrix with 2 columns and 10 rows; each row holds the coordinates of one point. I would like to calculate the distance between each point and the origin and find the smallest one. I would also like to determine which point gave this smallest distance.
In Octave, I calculate this distance using norm, so each point in my set has a distance associated with it, and the smallest of these is the one I'm looking for. However, the code I wrote below isn't working the way it should.
function [dist,koor] = bonus4(S)
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
for i=1:size(S)
L=norm(S(i, :))
dist=norm(S(9, :));
koor=S(9, :) ;
end
i = 9 is the correct answer, but I need Octave to find that index for me. How do I tell Octave that this is the row I want, instead of hard-coding it? Specifically:
dist=norm(S(9, :));
koor=S(9, :);
I cannot use any packages. I found the geometry package online, but I am supposed to solve the task without additional packages.
I'll work off of your original code. Firstly, you want to compute the norm of each point and store the results as individual elements of an array. Your current code isn't doing that: it overwrites the variable L, which holds only a single value, at every iteration of the loop.
You'll want to make L an array and store the norms at each iteration of the loop. Once you do this, you'll want to find the location as well as the minimum distance itself. That can be done with one call to min where the first output gives you the minimum distance and the second output gives you the location of the minimum. You can use the second output to slice into your S array to retrieve the actual point.
Last but not least, you need to define S first before calling this function. You are defining S inside the function and that will probably give you unintended results if you want to change the input into this function at each invocation. Therefore, define S first, then call the function:
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
function [dist,koor] = bonus4(S)
%// New - Create an array to store the distances
L = zeros(size(S,1), 1);
%// Change to iterate over number of rows
for i=1:size(S,1)
L(i)=norm(S(i, :)); %// Change
end
[dist,ind] = min(L); %// Find the minimum distance
koor = S(ind,:); %// Get the actual point
end
Or, make sure you save the above function in a file called bonus4.m, then do this in the Octave command prompt:
octave:1> S= [-6.8667, -44.7967;
> -38.0136, -35.5284;
> 14.4552, -27.1413;
> 8.4996, 31.7294;
> -17.2183, 28.4815;
> -37.5100, 14.1941;
> -4.2664, -24.4428;
> -18.6655, 26.9427;
> -15.8828, 18.0170;
> 17.8440, -22.9164];
octave:2> [dist,koor] = bonus4(S);
Though this code works, I'd argue that it's slow because you're using a for loop. A faster way is to do this completely vectorized. Because norm behaves differently for matrices than for vectors, you'll have to compute the distances yourself. Since you are measuring the distance from the origin, you can simply square each column element-wise, then sum across the columns of each row.
Therefore, you can just do this:
S= [-6.8667, -44.7967;
-38.0136, -35.5284;
14.4552, -27.1413;
8.4996, 31.7294;
-17.2183, 28.4815;
-37.5100, 14.1941;
-4.2664, -24.4428;
-18.6655, 26.9427;
-15.8828, 18.0170;
17.8440, -22.9164];
function [dist,koor] = bonus4(S)
%// New - Computes the norm of each point
L = sqrt(sum(S.^2, 2));
[dist,ind] = min(L); %// Find the minimum distance
koor = S(ind,:); %// Get the actual point
end
The function sum can be used to sum over a dimension independently. As such, by doing S.^2, you are squaring each term in the points matrix, then by using sum with the second parameter as 2, you are summing over all of the columns for each row. Taking the square root of this result computes the distance of each point to the origin, exactly the way the for loop functions. However, this (at least to me) is more readable and I daresay faster for larger sizes of points.
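As a quick check, calling either version on the matrix above should pick out the ninth point, since sqrt((-15.8828)^2 + 18.0170^2) is about 24.018, the smallest of the ten norms (display formatting may differ slightly):
octave:1> [dist, koor] = bonus4(S)
dist = 24.018
koor =
  -15.883   18.017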
I am having a problem calculating the FWHM of my data, because the fwhm function in the signal package returns a value 100 times bigger than I expected.
Here is what I did:
Based on the Gaussian distribution function (you can find it on Wikipedia), I produced some data. In this function you can specify a sigma (RMS) value (FWHM = sigma*2.355). Here is the script I wrote to understand the situation:
x=10:0.01:40;
x0=25;
sigma=0.25;
y=(1/(sigma*sqrt(2*pi)))*exp(-((x-x0).^2)/(2*sigma^2));
z=fwhm(y)/2.355;
plot(x,y)
When I compared the results, the output of the fwhm function (24.999) was 100 times bigger than the sigma (0.25) I used in the function.
Any ideas would be very helpful. Thanks in advance.
Your z is 100 times bigger because your step in x is 1/100 (0.01). fwhm(y) assumes the step size in x is 1; if it isn't, you have to pass x explicitly.
In your case you should do:
z=fwhm(x, y)/2.355
z = 0.24999
which matches your sigma
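Putting it together, a minimal corrected version of the script; only the last line of the original needs to change, and the comments show the two interpretations (the approximate values assume FWHM = 2.355*sigma = 0.5887):
pkg load signal                 % fwhm comes from the signal package
x = 10:0.01:40;
x0 = 25;
sigma = 0.25;
y = (1/(sigma*sqrt(2*pi)))*exp(-((x-x0).^2)/(2*sigma^2));
fwhm(y)                 % ~58.87: width in samples, i.e. assuming unit spacing
fwhm(y)*0.01            % ~0.5887: the same width rescaled by the actual step
z = fwhm(x, y)/2.355    % ~0.25: passing x recovers sigma directly
plot(x, y)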
I have a list of documents, each with a relevance score for a search query. I need older documents to have their relevance scores dampened, to introduce their date into the ranking process. I already tried fiddling with functions such as 1/(1 + date_difference), but the reciprocal function discriminates too strongly between recent dates that are close together.
I was thinking of a mathematical function with range (0..1) and domain (0..x) to scale the score, where the x-axis is the age of a document. It's best to explain what I need from the function with an image:
Decaying behavior is often modeled well by an exponential function (many decaying processes in nature also follow one). You would use two positive parameters A and B and get
y(x) = A exp(-B x)
Since you want a y-range of [0, 1], set A = 1. Larger B gives a faster decay, smaller B a slower one.
If a simple 1/(1+x) decreases too quickly too soon, a sigmoid function like 1/(1+e^-x) or the error function might be better suited to your purpose. Shift the input so that recent documents land on the sigmoid's high plateau (for example, evaluate it at cutoff - age); the score then stays near its maximum for a configurable time and only afterwards decreases towards a base value.
log((x+1) - age_of_document)
where the base of the logarithm is (x+1). Note that x is as per your diagram and is the "threshold". If the age of the document is greater than x, the score goes negative. Multiply by the maximum possible score to introduce scaling.
E.g., with threshold x = 10 and a maximum score of 10: 10*log(11 - age)/log(11)
A bit late, but as thiton says, you might want to use a sigmoid function instead, since it has a "floor" value for your long-tail data points. E.g.:
0.8/(1 + 5^(x-3)) + 0.2. You can adjust the constants 5 and 3 to control the steepness and the midpoint of the curve; the 0.2 is where the floor will be.
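To compare the shapes of the suggestions above, here is a small Octave sketch; the constants are the ones quoted in the answers and are purely illustrative:
age = 0:0.1:10;                              % document age, arbitrary units
expdecay = exp(-0.5*age);                    % A*exp(-B*x) with A = 1, B = 0.5
recip    = 1 ./ (1 + age);                   % the 1/(1+x) the question found too harsh
logscore = log(11 - age) / log(11);          % log base 11 of (11 - age), threshold 10
sigmoid  = 0.8 ./ (1 + 5.^(age - 3)) + 0.2;  % floor 0.2, drop centered near age 3
plot(age, expdecay, age, recip, age, logscore, age, sigmoid)
legend('exponential', 'reciprocal', 'log-based', 'sigmoid with floor')
xlabel('document age'), ylabel('dampening factor')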