Octave exponential in transfer function

I am trying to figure out how to use an exponential in the transfer function. Normally the transfer function is of the form tf(num, den), where num and den are polynomials in s. I want to represent (1 - exp(-Ts))/s. I can't seem to figure out how to handle the exp(-Ts). Any ideas?
Thanks...

I was not able to find a specific function in Octave that handles this. However, I found an approximation that works well. I was going to use a Taylor series approximation, which represents exp(-sT) as an infinite polynomial series, but the Pade approximation works better, and from a few papers I read it is what most people use. Thanks for all of your help.
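For illustration, here is a rough sketch of that Pade idea using the python-control package rather than Octave itself (the delay T = 0.5 and the approximation order are just assumed example values); Octave's control package exposes a very similar tf-style interface, so the steps translate directly.

```python
import control

T = 0.5                                  # example delay (an assumption, not from the question)
num, den = control.pade(T, 3)            # 3rd-order Pade approximation of exp(-s*T)
delay = control.tf(num, den)             # rational stand-in for the delay term
integrator = control.tf(1, [1, 0])       # 1/s
G = (1 - delay) * integrator             # approximates (1 - exp(-s*T)) / s
print(G)
```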

Related

Force regression line through origin using sns.jointplot

I am relatively new to Python. I have an x-variable and a y-variable plotted against each other using the sns.jointplot function. I know that when y is zero, x must also be zero. Is there a way to force the regression line through the origin to satisfy this constraint?
Thanks
Absolutely! You have a constrained linear regression on your hands. You want to find the best linear model describing your data, right? There are different ways to approach this, but I recommend you try scipy.optimize.lsq_linear from SciPy. If you're already working with Seaborn this shouldn't be a problem.
This function lets you supply your linear system along with bound constraints on the parameters, and then solves the least-squares problem subject to those constraints while minimizing the residuals.
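A minimal sketch of that approach, assuming x and y are 1-D arrays (the synthetic data below is only for illustration): leaving the intercept column out of the design matrix is what forces the fitted line through the origin.

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.3 * x + rng.normal(scale=0.5, size=x.size)   # placeholder data

A = x.reshape(-1, 1)                 # design matrix: only a slope column, no intercept
slope = lsq_linear(A, y).x[0]        # least-squares slope through the origin

g = sns.jointplot(x=x, y=y)                   # scatter with marginal histograms
g.ax_joint.plot(x, slope * x, color="r")      # overlay the constrained regression line
plt.show()
```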

How to optimize function to get highest coefficient in linear regression?

I am building a typical linear multivariate regression, except that one of the variables, rather than being a simple data point, is itself a function of one of the other variables. So, for example, my regression may look like:
y1=c1*x1+c2*x2+c3*x3+c4*f(x3)
f itself contains coefficients a,b,c,d
This particular function is of the form f(x) = a - b/(1 + e^(-c(x - d)))
Basically, the point of my research is to find which values of a, b, c, and d lead to the highest value of x4, and, hopefully, the best model.
I'm pretty inexperienced in R, but my advisor told me he thinks it would be the best program to get this kind of thing done in... Anyone have any advice on where to start with this problem?
Check out nonlinear least squares. For an R implementation, see nls.
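Not R's nls itself, but a sketch of the same nonlinear least-squares idea using SciPy's curve_fit; all of the arrays and starting values below are placeholders, and the model simply mirrors the form given in the question.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, c1, c2, c3, c4, a, b, c, d):
    x1, x2, x3 = X
    f = a - b / (1 + np.exp(-c * (x3 - d)))      # the logistic term from the question
    return c1 * x1 + c2 * x2 + c3 * x3 + c4 * f

# synthetic placeholder data, only so the snippet runs end to end
rng = np.random.default_rng(1)
x1, x2, x3 = rng.normal(size=(3, 200))
y1 = model((x1, x2, x3), 1.0, -0.5, 0.3, 2.0, 1.0, 2.0, 1.5, 0.0)
y1 = y1 + rng.normal(scale=0.1, size=200)

# starting values matter a lot for nonlinear fits; note that c4 trades off
# against a and b in this form, so in practice you may want to fix one of them
p0 = [1, 1, 1, 1, 1, 1, 1, 0]
popt, pcov = curve_fit(model, (x1, x2, x3), y1, p0=p0)
print(popt)
```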

Differentiate a function by a function

I'm just a little bit lost here. I'm using the latest MATLAB release with the Symbolic Math Toolbox. At the moment I'm working on a system which has equations like x = theta(t) + 2 (of course a lot more complicated and longer). Now I would like to differentiate this equation with respect to theta(t); the result should be 1. However, if I use the diff(x, theta) command I only get the message "Invalid variable".
How do I do it? What am I doing wrong?
Thanks!
I used to run into the same problem, though with Maple and SymPy. Try substituting theta(t) with a plain symbol theta in the right-hand side of the equation and then differentiating with respect to theta.
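A short SymPy sketch of that substitution trick; the toy expression x = theta(t) + 2 is just the example from the question.

```python
import sympy as sp

t = sp.symbols('t')
theta_t = sp.Function('theta')(t)
x = theta_t + 2                      # stand-in for the real, much longer expression

theta = sp.symbols('theta')          # plain symbol to differentiate with respect to
dx_dtheta = x.subs(theta_t, theta).diff(theta)
print(dx_dtheta)                     # 1
```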

how to get the max. value for non-linear data

I am a new MATLAB user, so I am quite unfamiliar with most of its power. I need to get the maximum value on a non-linear moment-curvature curve. I define the theoretical max. and min. curvature values in the program and then divide that range into small discrete increments, but the problem is that the maximum sometimes occurs between two increments, so the program misses it and stops before finding the true maximum. Please help me: how can I overcome this problem?
You will need to approximate the curve, using an interpolation/fitting scheme that depends on the problem and the curve shape, and the known functional form. A spline might be appropriate, or perhaps not.
Once you have a viable approximation that connects the dots so to speak, you minimize/maximize that function. This is an easily solved problem at that point.
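A sketch of that "connect the dots, then maximize" workflow using SciPy; the curvature and moment arrays below are made-up stand-ins for the discrete curve, and a cubic spline is only one possible choice of approximation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

curvature = np.linspace(0, 0.1, 20)                     # placeholder sample points
moment = 100 * curvature * np.exp(-20 * curvature)      # peak falls between samples

spline = CubicSpline(curvature, moment)                 # smooth approximation of the curve
res = minimize_scalar(lambda k: -spline(k),             # maximize by minimizing the negative
                      bounds=(curvature[0], curvature[-1]),
                      method='bounded')
print(res.x, -res.fun)   # curvature at the peak, and the peak moment
```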
There is a method for solving non-linear functions (finding minima/maxima).
It uses a non-linear least-squares method and I think it is called lsqnonlin(); you'll find it in the Optimization Toolbox. Also, solve() might work. Another option is simulated annealing, but I don't remember the name of that function.
Sorry I can't supply code; I'm answering from an iPhone.

algorithm to solve related equations

I am working on a project to create a generic equation solver. I envision this taking the form of 25-30 equations saved in a table: variable names along with the operators.
I would then query this table to solve any equation with one missing variable, and the solver would move the operators and other terms to the other side so that the missing variable is isolated.
e.g. 2x + 3y = z: if x were the missing variable, I would call the equation with values for y and z and it would rearrange to solve for x = (z - 3y)/2.
The equations could be linear, polynomial, binary (yes/no result)...
I am not sure whether a lightweight library is available for this or whether it needs to be built from scratch. Any pointers or guidance will be appreciated.
See Maxima.
I rather like it for my symbolic computation needs.
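Maxima's solve() handles exactly this kind of rearrangement; here is the same idea sketched in SymPy, using the 2x + 3y = z example from the question.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
eq = sp.Eq(2*x + 3*y, z)

x_expr = sp.solve(eq, x)[0]          # rearranges to x = (z - 3*y)/2
print(x_expr)                        # z/2 - 3*y/2

# plug in known values for the other variables
print(x_expr.subs({y: 4, z: 20}))    # 4
```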
If such a general black-box algorithm could be made accurate, robust and stable, pigs could fly. Solutions can be nonexistent, multiple, parametrized, etc.
Even for linear equations it gets tricky to do it right.
Your best bet is some form of Newton algorithm, but generally you tailor it to your problem at hand.
EDIT: I didn't see you wanted something symbolic, rather than numerical. It's another bag of worms.