How to get the maximum value for nonlinear data / a function

I am a new MATLAB user, so I'm quite unfamiliar with most of its power. I need to find the maximum value on a nonlinear moment-curvature curve. I define the theoretical maximum and minimum curvature values in the program and then divide that range into small discrete increments, but the problem is that the maximum sometimes occurs between two increments, so the program misses it and stops before finding the maximum value. Please help me: how can I overcome this problem?

You will need to approximate the curve using an interpolation/fitting scheme; which one depends on the problem, the shape of the curve, and any known functional form. A spline might be appropriate, or perhaps not.
Once you have a viable approximation that connects the dots, so to speak, you minimize/maximize that function. At that point it is an easily solved problem.
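As a hedged sketch of that idea (Python/SciPy here rather than MATLAB, and the curvature/moment arrays below are just placeholder data):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

# Placeholder data: replace with your discrete curvature/moment points.
curvature = np.linspace(0.0, 0.05, 21)
moment = 300.0 * curvature - 4000.0 * curvature**2   # some peaked, nonlinear shape

# "Connect the dots" with a spline over the discrete points...
spline = CubicSpline(curvature, moment)

# ...then maximize it by minimizing its negative over the data range.
res = minimize_scalar(lambda c: -spline(c),
                      bounds=(curvature[0], curvature[-1]),
                      method="bounded")
print("max moment ~", -res.fun, "at curvature ~", res.x)
```

The maximum found this way can land between your original increments, which is exactly the case the discrete scan was missing.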

There is a method for solving nonlinear functions (finding minima/maxima).
It uses nonlinear least squares and I think it is called lsqnonlin(); you will find it in the Optimization Toolbox. solve() might also work. Another option is simulated annealing, but I don't remember the name of that function.
Sorry I don't supply code; I am answering from an iPhone.


Force regression line through origin using sns.jointplot

I am relatively new to Python. I have an x-variable and a y-variable plotted against each other using the sns.jointplot function. I know that when y is zero, x must also be zero. Is there a way to force the regression line through the origin to satisfy this constraint?
Thanks
Absolutely! You have a constrained linear regression on your hands. You want to find the best linear model describing your data, right? There are different ways to approach this, but I recommend you try the function scipy.optimize.lsq_linear in SciPy. If you're already working in Seaborn this shouldn't be a problem.
This function allows you to input your linear system and a constraint, and then solves the least-squares problem to satisfy the constraint and minimize residuals.
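A minimal sketch of that approach (the x and y arrays below are placeholders for your data; forcing the line through the origin amounts to fitting y ≈ m*x with no intercept column):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Placeholder data: replace with your own x and y arrays.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 50)
y = 2.5 * x + rng.normal(0.0, 1.0, 50)

# Design matrix with only an x column and no intercept column,
# so the fitted line is y = m*x and passes through the origin.
A = x[:, np.newaxis]
res = lsq_linear(A, y)
slope = res.x[0]
print("fitted slope:", slope)

# The line y = slope * x can then be drawn on top of the jointplot's axes.
```

Equivalently, since there is only one parameter, the closed-form answer is slope = (x @ y) / (x @ x); lsq_linear just generalizes this and lets you add bounds on the parameters if you ever need them.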

FFT: fitting binned data

I want to fit a curve to data obtained from an FFT. While working on this, I remembered that an FFT gives binned data, and therefore I wondered if I should treat this differently with curve-fitting.
If the bins are narrow compared to the structure, I think it should not be necessary to treat the data differently, but for me that is not the case.
I expect the right way to fit binned data is to minimize, for each bin, not the difference between the bin value and the fitted value, but the difference between the bin area and the area under the fitted curve over that bin, so that the energy in each bin matches the energy the curve assigns to that bin's range.
So my question is: am I thinking correctly about this? If not, how should I go about it?
Also, when looking around for information about this subject, I encountered the "Maximum log likelihood" for example, but did not find enough information about it to understand if and how it applied to my situation.
PS: I have no clue if this is the right site for this question, please let me know if there is a better place.
For an unwindowed FFT, the correct interpolation between bins uses a sinc (sin(x)/x) or periodic sinc (Dirichlet) interpolation kernel. For an FFT of samples of a band-limited signal, this will reconstruct the continuous spectrum.
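To see what that means in practice, note that interpolating the FFT bins with the Dirichlet kernel gives exactly the DTFT of the length-N sequence, so you can also just evaluate the DTFT at off-bin frequencies directly. A small sketch in Python/NumPy (the test tone is made up):

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 10.3 * n / N)    # test tone that falls between FFT bins

# DTFT of the length-N sequence on a fine grid of (fractional) bin numbers;
# this equals Dirichlet-kernel interpolation of the N FFT bins.
k_fine = np.linspace(0.0, N / 2, 8 * N, endpoint=False)
dtft = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in k_fine])

print("interpolated peak near bin", k_fine[np.argmax(np.abs(dtft))])   # ~10.3, not 10
```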
A very simple and effective way of interpolating the spectrum (from an FFT) is to use zero-padding. It works both with and without windowing prior to the FFT.
Take your input vector of length N and extend it to length M*N, where M is an integer
Set all values beyond the original N values to zeros
Perform an FFT of length (N*M)
Calculate the magnitude of the output bins
What you get is the interpolated spectrum.
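A minimal sketch of those steps in Python/NumPy (the test signal and M = 8 are arbitrary placeholders):

```python
import numpy as np

N, M = 64, 8
n = np.arange(N)
x = np.cos(2 * np.pi * 10.3 * n / N)          # test signal; replace with your data

# Extend to length M*N by appending zeros, then take the FFT and its magnitude.
x_padded = np.concatenate([x, np.zeros((M - 1) * N)])
spectrum = np.abs(np.fft.fft(x_padded))

# Bin k of the padded FFT corresponds to fractional bin k/M of the original FFT.
peak = np.argmax(spectrum[: M * N // 2]) / M
print("interpolated peak near original bin", peak)   # close to 10.3
```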
Best regards,
Jens
This can be done using maximum log-likelihood estimation. This is a method that finds the set of parameters most likely to have yielded the measured data; the technique originates in statistics.
I have finally found an understandable source for how to apply this to binned data. Sadly I cannot enter formulas here, so I refer to that source for a full explanation: slide 4 of this slide show.
EDIT:
For noisier signals this method did not seem to work very well. A somewhat more robust method is a least-squares fit in which the difference between the bin area and the area under the curve is minimized, as suggested in the question.
I have not found any literature to defend this method, but it is similar to what happens in maximum log-likelihood estimation, and it yields very similar results for noiseless test cases.
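In case it helps a later reader, here is a rough sketch of the binned-likelihood idea in Python/SciPy. The Gaussian model, the synthetic counts, and the Poisson likelihood are my own illustrative assumptions (not taken from the slides): the expected content of each bin is the model's area over that bin, and the Poisson negative log-likelihood of the observed counts is minimized.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic binned data: counts per bin with known bin edges.
edges = np.linspace(-5.0, 5.0, 41)
rng = np.random.default_rng(1)
counts, _ = np.histogram(rng.normal(0.3, 1.2, 5000), bins=edges)

def expected_counts(params):
    """Model area over each bin: amplitude times the Gaussian CDF difference."""
    amp, mu, sigma = params
    sigma = abs(sigma)              # guard against the optimizer probing negative widths
    cdf = norm.cdf(edges, loc=mu, scale=sigma)
    return amp * np.diff(cdf)

def neg_log_likelihood(params):
    """Poisson negative log-likelihood of the binned counts (constant terms dropped)."""
    mu_i = np.clip(expected_counts(params), 1e-12, None)
    return np.sum(mu_i - counts * np.log(mu_i))

res = minimize(neg_log_likelihood, x0=[5000.0, 0.0, 1.0], method="Nelder-Mead")
print("fitted amplitude, mean, sigma:", res.x)
```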

Octave exponential in transfer function

I am trying to figure out how to use an exponential in a transfer function. Normally the transfer function is of the form tf(num, den), where num and den are polynomials in s. I want to represent (1 - exp(-Ts))/s. I can't seem to figure out how to handle the exp(-Ts). Any ideas?
Thanks...
I was not able to find a specific function in Octave that handles this. However, I found an approximation that works well. I was going to use a Taylor series approximation, which uses an infinite series of polynomial terms to converge on exp(-sT). However, the Padé approximation works better, and from a few papers I read this is what most people are using. Thanks for all of your help.
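For anyone reading later, here is a rough sketch of the Padé idea in Python/SciPy rather than Octave; the delay T = 0.5 and the [3/3] order are arbitrary choices:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

T = 0.5                                           # example delay; use your own value
# Taylor coefficients of exp(-s*T): sum over k of (-T)^k / k! * s^k
taylor = [(-T) ** k / factorial(k) for k in range(7)]

# [3/3] Pade approximant: a ratio of two cubic polynomials in s.
p, q = pade(taylor, 3)

s = 0.8                                           # spot check at an arbitrary point
print("Pade:", p(s) / q(s), "  exact:", np.exp(-T * s))
```

The coefficient arrays p.coeffs and q.coeffs (descending powers of s) could then stand in for the num and den of the delay factor when assembling the full transfer function.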

What ODE solver uses calculations in a stepper function for interpolation?

I average over multiple solutions of ODEs that have different initial conditions, so it's important for all of the solutions to have values at the same times; for example, at an increment of 0.01.
I've been using the ODE routines from Numerical Recipes 3 (NR3). They do adaptive step-size control and use the values already calculated by the stepper to do interpolation of matching order (dense output). I can't use them because they conflict with Boost. Are there any other similar routines?
I looked at GSL; it's very nice, but it doesn't have built-in interpolation. One way I could do it is to solve the ODE with an adaptive step and then run Akima interpolation, but it seems like the NR3 solution would be faster and more accurate.
You can use odeint (part of Boost). It has Dopri5, Rosenbrock4, and Bulirsch-Stoer steppers with dense output.
I have used DOPRI5 from http://www.unige.ch/~hairer/software.html with dense output = interpolation. I found it reliable. I used the original version (in Fortran); there is also a C version on the same webpage which I haven't used myself but I seem to remember that people were happy with it.
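Not C++, but for what it's worth, the same dense-output idea is available in Python via scipy.integrate.solve_ivp, which returns a continuous interpolant you can evaluate on a common grid (the 0.01 increment from the question; the right-hand side and initial conditions below are just placeholders):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Placeholder ODE: a lightly damped oscillator."""
    return [y[1], -y[0] - 0.1 * y[1]]

t_grid = np.arange(0.0, 10.0 + 1e-12, 0.01)      # common output times for every run

solutions = []
for y0 in ([1.0, 0.0], [0.5, 0.2], [0.0, 1.0]):  # different initial conditions
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), y0,
                    dense_output=True, rtol=1e-8, atol=1e-10)
    solutions.append(sol.sol(t_grid)[0])         # evaluate the interpolant on the grid

average = np.mean(solutions, axis=0)             # every run shares the same time points
print(average[:5])
```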

Algorithm to solve related equations

I am working on a project to create a generic equation solver. I envision this taking the form of 25-30 equations saved in a table: the variable names along with the operators.
I would then call this table to solve any equation with one missing variable, and it would move the operators and other terms to the other side to isolate the missing variable.
E.g. for 2x + 3y = z with x as the missing variable, I would call the equation with values for y and z, and it would rearrange it to solve for x = (z - 3y)/2.
The equations could be linear, polynomial, or binary (yes/no result)...
I am not sure whether there is a lightweight library available for this or whether it needs to be built from scratch; any pointers or guidance will be appreciated.
See Maxima.
I rather like it for my symbolic computation needs.
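Maxima is a standalone CAS; if you would rather do the same symbolic rearrangement from inside a program, a library such as SymPy (Python) can handle it. A sketch using the 2x + 3y = z example from the question (how you store and look up the equation table is up to you):

```python
from sympy import symbols, Eq, solve

x, y, z = symbols("x y z")
equation = Eq(2 * x + 3 * y, z)                  # one row of the equation table

# Ask for the missing variable symbolically...
print(solve(equation, x))                        # [z/2 - 3*y/2], i.e. x = (z - 3y)/2

# ...or substitute the known values first and get a number.
print(solve(equation.subs({y: 4, z: 20}), x))    # [4]
```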
If such a general black-box algorithm could be made accurate, robust and stable, pigs could fly. Solutions can be nonexistent, multiple, parametrized, etc.
Even for linear equations it gets tricky to do it right.
Your best bet is some form of Newton algorithm, but generally you tailor it to your problem at hand.
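As a hedged illustration of the numerical route, here is what that could look like with a general-purpose root finder (scipy.optimize.fsolve, a MINPACK-based hybrid method; the y and z values are made up):

```python
from scipy.optimize import fsolve

def residual(x, y, z):
    """Residual of 2x + 3y - z = 0, with y and z known and x unknown."""
    return 2 * x + 3 * y - z

y_val, z_val = 4.0, 20.0
x_solution = fsolve(residual, x0=1.0, args=(y_val, z_val))
print(x_solution)                                # [4.]
```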
EDIT: I didn't see that you wanted something symbolic rather than numerical. That's another can of worms.