How do I find the cumulative mean/average of a matrix in Octave? In MATLAB it is pretty simple, but I couldn't find how to do it in Octave.
The cumulative mean is simply the cumulative sum (cumsum) divided by the number of elements up to that point.
So for the data
A = [1 3 4 5 6];
We could do
out = cumsum(A) ./ ( 1:numel(A) );
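The Octave one-liner above is the actual answer; as a sanity check, the same idea can be reproduced in NumPy:

```python
import numpy as np

A = np.array([1, 3, 4, 5, 6])
# cumulative sum divided by the element count up to each position
cummean = np.cumsum(A) / np.arange(1, A.size + 1)
# values: 1.0, 2.0, 2.666..., 3.25, 3.8
```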
Maple has a very clean way of computing the resultant of two polynomials:
https://www.maplesoft.com/support/help/maple/view.aspx?path=resultant
Does this function have a counterpart in Octave?
In Scilab, you get it as the determinant of the Sylvester matrix of the two polynomials. Since Scilab has a native polynomial datatype, this is quite simple:
--> a = poly([1 2 3 4],"x","roots")
a =
24 -50x +35x² -10x³ +x⁴
--> b = poly([-2 -1 5],"x","roots")
b =
-10 -13x -2x² +x³
--> det(sylm(a,b))
ans =
1036800.0
In Scilab, sylm() is in the Polynomials section. Apparently there is no equivalent in Octave's Polynomial chapter, nor in its control toolbox. Maybe elsewhere? Otherwise, you can take the Scilab sylm() code and transpose it into Octave. It is less than 20 lines long, and simple. Since the Sylvester matrix is a numerical one, you then just have to apply the usual det() function to it.
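For readers who want to verify the approach outside Scilab, here is a minimal NumPy sketch (the `sylvester` helper is a hypothetical name, not a library function) that builds the Sylvester matrix of the two example polynomials and takes its determinant, reproducing the resultant 1036800:

```python
import numpy as np

def sylvester(a, b):
    # a, b: coefficient lists in descending powers
    m, n = len(a) - 1, len(b) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                  # n shifted copies of a's coefficients
        S[i, i:i + m + 1] = a
    for i in range(m):                  # m shifted copies of b's coefficients
        S[n + i, i:i + n + 1] = b
    return S

a = [1, -10, 35, -50, 24]   # (x-1)(x-2)(x-3)(x-4)
b = [1, -2, -13, -10]       # (x+1)(x+2)(x-5)
res = round(np.linalg.det(sylvester(a, b)))  # 1036800
```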
I'm trying to create a regression that would include a polynomial (let's say 2nd order) of year on a certain interval of year (say 1 to 70) and a number of dummies for certain values of year (say for every year between 45 and 60).
If I didn't have the restriction for dummies, I believe the commands would be:
gen year2=year^2
regress y year year2 i.year if inrange(year,1,70)
I can't make the dummies manually (there will be more than 15 of them in the end). Could anybody help me, please?
If I then want to plot the estimated function without the dummies, why do these two commands produce different results?
twoway function _b[_cons] +_b[year]*x + _b[year2]*x^2, range(1 70)
twoway function _b[_cons] +_b[year]*year + _b[year2]*year^2, range(1 70)
The way I understood it, _b[_cons], _b[year] and _b[year2] retrieve the previously estimated coefficients for the corresponding independent variables and multiply them with the given terms. Why do the results differ, then, if x should be the same thing as year in this case?
I am not sure why Pearly is giving you such a hard time. I think this may be what you're looking for, but let me know if it is something different:
One thing to note: I am using a dataset that comes preloaded with Stata; this is usually a nice way to make an MVCE, as Nick was saying in your other post.
clear
sysuse gnp96
/* variables: gnp, date (quarterly) */
gen year = year(dofq(date)) // get yearly variable
gen year2=year^2 // get the square of the yearly variable
tab year if inrange(year,1970,1975), gen(yr) // generate dummy variables
// the dummy variables generated have null values for years not
// in the specified range, so we're going to fill those in
foreach v of varlist yr* {
replace `v' = 0 if `v' == .
}
// here's your regression
regress gnp year year2 yr* if inrange(year,1967,1990)
Now, yr* are your dummy variables, and the * is a wildcard matching all variables named like yr[something].
This gives you the range for the dummy variables and the range for the year variables.
As to your question on using x vs. year, I am only hypothesizing, but I think that when you use x it is continuous, since Stata isn't looking at your variables but just at the x axis, whereas your year variable is discrete (a bunch of integers), so the plot looks more like a step function. More information can be found with the command help twoway function.
I am having a problem calculating the FWHM of my data: the "fwhm" function in the signal package returns a value 100 times bigger than I expected.
What I did:
Based on the Gaussian distribution function (you can find it on Wikipedia), I produced some data. In this function you can specify a sigma (RMS) value (FWHM = sigma*2.355). Here is the script I wrote to understand the situation:
x=10:0.01:40;
x0=25;
sigma=0.25;
y=(1/(sigma*sqrt(2*pi)))*exp(-((x-x0).^2)/(2*sigma^2));
z=fwhm(y)/2.355;
plot(x,y)
When I compared the results, the output of the "fwhm" function (24.999) is 100 times bigger than the sigma I used (0.25).
If you have any idea it will be very helpful.
Thanks in advance.
Your z is 100 times bigger because your steps in x are 1/100 (0.01). fwhm(y) assumes the step size in x is 1; if that is not the case, you have to pass x explicitly.
In your case you should do:
z=fwhm(x, y)/2.355
z = 0.24999
which matches your sigma
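For readers without the signal package, here is a minimal NumPy sketch of the same idea (a hypothetical re-implementation, not Octave's fwhm): locate the half-maximum crossings by linear interpolation in actual x units, so the step size is accounted for automatically:

```python
import numpy as np

def fwhm(x, y):
    # width at half of the peak maximum, via linear interpolation
    # of the two half-maximum crossings (assumes a single peak)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    def cross(i, j):
        # x-coordinate where the segment (i, j) crosses the half-maximum
        return x[i] + (half - y[i]) * (x[j] - x[i]) / (y[j] - y[i])
    return cross(i1, i1 + 1) - cross(i0 - 1, i0)

x = np.arange(10, 40.01, 0.01)
sigma, x0 = 0.25, 25.0
y = np.exp(-((x - x0)**2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
z = fwhm(x, y) / 2.355   # recovers roughly 0.25
```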
I am new to octave and learning it.
Suppose I have a matrix X =
1 2
3 4
5 6
I want to access this matrix from the second row onward, omitting the first row.
What is the syntax for that?
I could delete the first row with X(1,:) = [], but that changes the original matrix.
How do I access the matrix from the second row in Octave?
Use colon syntax. To return rows 2 through the end, use:
X(2:end, :)
See GNU Octave documentation for more indexing options.
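For comparison, the NumPy equivalent (note NumPy is 0-based, so the second row has index 1):

```python
import numpy as np

X = np.array([[1, 2], [3, 4], [5, 6]])
Y = X[1:, :]   # rows 2..end, all columns; X itself is unchanged
# Y is [[3, 4], [5, 6]]
```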
I have an array of 10 rows by 20 columns. Each column corresponds to a data set that cannot be fitted with any continuous mathematical function (it is a series of numbers derived experimentally). I would like to calculate the integral of each column between row 4 and row 8, then store the results in a new array (20 rows x 1 column).
I have tried using different scipy.integrate routines (e.g. quad, trapz, ...).
The problem is that, from what I understand, scipy.integrate must be applied to functions, and I am not sure how to convert each column of my initial array into a function. As an alternative, I thought of calculating the average of each column between row 4 and row 8, multiplying this number by 4 (i.e. 8-4=4, the x-interval), and storing it in my final 20x1 array. The problem is... ehm... that I don't know how to calculate the average over a given range. The questions I am asking are:
Which method is more efficient/straightforward?
Can integrals be calculated over a data set like the one that I have described?
How do I calculate the average over a range of rows?
Since you know only the data points, the best choice is to use trapz (the trapezoidal approximation to the integral, based on the data points you know).
You most likely don't want to convert your data sets to functions, and with trapz you don't need to.
So if I understand correctly, you want to do something like this:
from numpy import *
# x-coordinates for data points
x = array([0, 0.4, 1.6, 1.9, 2, 4, 5, 9, 10])
# some random data: 3 whatever data sets (sharing the same x-coordinates)
y = zeros([len(x), 3])
y[:,0] = 123
y[:,1] = 1 + x
y[:,2] = cos(x/5.)
print(y)
# compute approximations for integral(dataset, x=0..10) for datasets i=0,1,2
yi = trapz(y, x[:,newaxis], axis=0)
# what happens here: x must be an array of the same shape as y
# newaxis tells numpy to add a new "virtual" axis to x, in effect saying that the
# x-coordinates are the same for each data set
# approximations of the integrals based on the datasets
# (here we also know the exact values, so print them too)
print(yi[0], 123*10)
print(yi[1], 10 + 10*10/2.)
print(yi[2], sin(10./5.)*5.)
To get the sum of the entries 4 to 8 (including both ends) in each column, use
a = numpy.arange(200).reshape(10, 20)
a[4:9].sum(axis=0)
(The first line is just to create an example array of the desired shape.)
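Since the question also asked how to take the average over a range of rows: a small sketch combining the mean with the rectangle-rule idea from the question (the factor 4 is the x-interval 8-4 mentioned there):

```python
import numpy as np

a = np.arange(200).reshape(10, 20)   # example array, 10 rows x 20 columns
col_means = a[4:9].mean(axis=0)      # mean of rows 4..8 (inclusive) per column
approx = col_means * 4               # crude rectangle approximation over the x-interval
```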