How can I solve this kind of equation in Maple?

equation1:
solve({a^2+b^2+169+sqrt(c-13)-24*a-10*b = 0},{a, b, c})
assuming a>0, b>0, c>0;
# expected: a = 12, b = 5, c = 13
equation2:
solve([1/(cos(a)^2)+1/(sin(a)^2*sin(b)^2*cos(b)^2) = 9,
a>0, a<Pi/2, b>0, b<Pi/2], [a,b,c] );
# expected: a = arctan(sqrt(2)), b = Pi/4
I have tried the above, but Maple couldn't give a solution. Am I using solve incorrectly?

In (Eq. 1) it's not your syntax that's the issue. You have three unknowns {a, b, c} but only one equation, so you simply do not have enough equations to determine {a, b, c} uniquely. Maple's solve function only returns an answer (when it can) if the number of variables matches the number of equations.
In (Eq. 2) you use square brackets, which are used for ordered lists. The solve function requires a set of equations, which are indicated by curly braces. Again, you have three variables but only one equation. Same problem.
If the equations are linear (which they aren't in your case), Maple can find a parameterization for the solutions in the case of an underdetermined system: http://www.maplesoft.com/support/help/Maple/view.aspx?path=solve/linear.
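Incidentally, the values a=12, b=5, c=13 you expect for (Eq. 1) can be verified by hand: completing the square rewrites the equation as (a-12)^2 + (b-5)^2 + sqrt(c-13) = 0, a sum of non-negative terms over the reals, so each term must vanish. A small sketch of that check in Python/SymPy rather than Maple (the rewriting is my own):

import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
expr = a**2 + b**2 + 169 + sp.sqrt(c - 13) - 24*a - 10*b
# The original expression equals (a-12)^2 + (b-5)^2 + sqrt(c-13); the difference simplifies to 0.
print(sp.simplify(expr - ((a - 12)**2 + (b - 5)**2 + sp.sqrt(c - 13))))
# Each non-negative term must be zero, which recovers a = 12, b = 5, c = 13.
print(sp.solve([sp.Eq(a - 12, 0), sp.Eq(b - 5, 0), sp.Eq(sp.sqrt(c - 13), 0)], [a, b, c]))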


Octave: How to solve a first-order differential equation with variable coefficients

I'm trying to solve a first-order differential equation with variable coefficients of the following form:
xdot(1) = a(t)*x(1) + b;
where b is a constant and a(t) is a time-dependent function. I know that I can solve this equation by hand, but a(t) is a quite complex function.
So, my problem is the following: a(t) is a function whose values I know from an experiment (I have all the results in a file), so a(t) is a vector (n x 1), which is a problem because x(1) and xdot(1) are scalars. So, how could I solve this equation with lsode?
Possibly I have underestimated your problem, but the way I read it, you are asking to integrate a first-order ODE. In general there are two ways to proceed: implicit methods and explicit methods. Here is the crudest, but easiest to understand, method I can come up with:
nt = 101;              % number of time steps
a = -ones(1, nt);      % replace with your measured a(t) samples
b = 1/2;               % the constant term
x = NaN*ones(1, nt);   % solution vector
x(1) = pi;             % initial condition
dt = 0.01;             % time step
for kt = 2:nt
  dxdt = a(kt-1)*x(kt-1) + b;   % explicit (forward) Euler step
  x(kt) = x(kt-1) + dxdt*dt;
endfor
plot(x)
I have assumed a < 0 so there is no tendency to blow up. You will want to set a equal to your observed values.
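If you would rather hand the measured a(t) samples to a standard solver instead of the hand-rolled Euler step above, here is a rough sketch of the same idea in Python/SciPy rather than Octave/lsode (the sample grid, values and constants below are placeholders for your data):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

# Placeholder for the measured coefficient: n samples of a(t) on a time grid.
t_data = np.linspace(0.0, 1.0, 101)
a_data = -np.ones_like(t_data)
a_of_t = interp1d(t_data, a_data, fill_value="extrapolate")

b = 0.5   # the constant term

def rhs(t, x):
    # xdot = a(t)*x + b, with a(t) obtained by interpolating the samples
    return a_of_t(t) * x + b

sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [np.pi])
print(sol.y[0, -1])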

Do we input only 1s for minterms and 0s for maxterms?

This has been bugging me for a long time.
Suppose I have a boolean function F defined by the following truth table, where F is 1 only for the input rows 010 and 111:
X Y Z | F
0 0 0 | 0
0 0 1 | 0
0 1 0 | 1
0 1 1 | 0
1 0 0 | 0
1 0 1 | 0
1 1 0 | 0
1 1 1 | 1
Now, it can be expressed in its SOP form as:
F = bar(X).Y.bar(Z) + X.Y.Z
But I fail to understand why we always complement the 0s to express them as 1. Is it assumed that the inputs X, Y and Z will always be 1?
What is the practical application of that? All the YouTube videos I watched on this topic show how to express a function in SOP form or as a sum of minterms, but none of them explain why we need this. Why do we need minterms in the first place?
As of now, I believe that we design circuits to take and yield only 1s, and that's where minterms come in handy. But I couldn't get any confirmation of this anywhere, so I am not sure I am right.
Maxterms are even more confusing. Do we design circuits that would take and yield only 0s? Is that the purpose of maxterms?
Why do we need minterms in the first place?
We do not need minterms; we need a way to solve a logic design problem, i.e. given a truth table, find a logic circuit able to reproduce this truth table.
Obviously, this requires a methodology. Minterms and sum-of-products are one means to that end. Maxterms and product-of-sums are another. In either case, you get an algebraic representation of your truth table, and you can either implement it directly or apply standard theorems of boolean algebra to find an equivalent, but simpler, representation.
But these are not the only tools. For instance, with Karnaugh maps, you rewrite your truth table according to some rules and can simultaneously find an algebraic representation and reduce its complexity, without considering minterms at all. Their main drawback is that they become unworkable as the number of inputs grows, so they cannot be considered a general way to solve the problem of logic design.
Minterms (and maxterms) do not have this drawback and can be used to solve any such problem: given a truth table, we can directly convert it into an equation with ANDs, ORs and NOTs. Minterms happen to be somewhat more natural for human beings than maxterms, but that is just a matter of taste or of a reduced number of parentheses; they are actually equivalent.
But I fail to understand why we always complement the 0s to express them as 1. Is it assumed that the inputs X, Y and Z will always be 1?
Assume that we have a truth table with only one output at 1, for instance line 3 of your table. It means that when x=0, y=1 and z=0, the output will be one. So, can I express that in boolean logic? With the SOP methodology, we look for a solution that is an "and" of inputs or of their complements. Obviously the solution is "x must be false and y must be true and z must be false", i.e. "(not x) must be true and y must be true and (not z) must be true", hence the minterm /x.y./z. So complementing where we have a 0 and leaving unchanged where we have a 1 is a way to find the equation that will be true exactly when xyz=010.
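A quick way to convince yourself, written out as a tiny Python check (just an illustration): the minterm /x.y./z, read as a boolean expression, is true for exactly one input combination, 010.

# The minterm /x.y./z evaluates to 1 only when x=0, y=1, z=0.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            print(x, y, z, int((not x) and y and (not z)))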
If I have another table with only one output at 1 (for instance line 8 of your table), we find similarly that I can implement this TT with x.y.z.
Now if I have a TT with 2 lines at 1, one can use the property of OR gates and take the OR of the previous circuits: when the output of the first one is 1, it will force the output to 1, and ditto for the second. And we directly get the solution for your table: /x.y./z + x.y.z
This can be extended to any number of ones in the TT and gives a systematic way to find an equation equivalent to a truth table.
So just think of minterms and maxterms as a tool to translate a TT into equations. What is important is the truth table (that describes the behaviour of what you want to do) and the equations (that give you a way to realize it).
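To make the recipe concrete, here is a small Python sketch of the whole procedure (the helper name and table encoding are my own): for every row of the truth table whose output is 1, build the minterm by complementing the inputs that are 0, then OR all the minterms together.

def sop_from_truth_table(rows, names=("X", "Y", "Z")):
    # rows: list of (inputs, output) pairs, e.g. ((0, 1, 0), 1)
    minterms = []
    for inputs, output in rows:
        if output == 1:
            literals = [n if bit else "/" + n for n, bit in zip(names, inputs)]
            minterms.append(".".join(literals))
    return " + ".join(minterms)

# The function from the question: F is 1 only for the rows 010 and 111.
table = [((x, y, z), int((x, y, z) in {(0, 1, 0), (1, 1, 1)}))
         for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(sop_from_truth_table(table))   # /X.Y./Z + X.Y.Z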

Python: Maximum of a function in a range

I am trying to find out the maximum value of a function (here it is T(n)) through this code:
for i in range(2, imax-1):
    Q=q(i-1)-q(i)
    Tn=T(i)+(Dt/(rho*cp*0.1))*Q
    y=max(Tn)
But I am getting the error "'float' object is not iterable". Any suggestion on this would be helpful to me.
Please note that, "q" and "T(i)" have been defined as functions of "i", and all the other terms are constants.
The max function returns the maximum among several values, so you need to pass it at least two values, either as separate arguments or inside a list or a tuple, for example.
I suggest you this solution based on your current code to be easily understood:
y = None
for i in range(2, imax-1):
    Q=q(i-1)-q(i)
    Tn=T(i)+(Dt/(rho*cp*0.1))*Q
    if y is None:
        y=Tn
    else:
        y=max(Tn,y)
To go further (and maybe better), list comprehension is well adapted in this case, as detailed by Andrea in his answer.
max takes either several arguments or an iterable (e.g. a list, dict, str, etc.), so it might look something like max([1, 2, 3]) #=> 3. A common pattern is to use a comprehension: max(f(x) for x in range(10)). The thing about comprehensions is that they require a single expression, so you can't use the original multi-statement definition of Tn.
If you expand the definition of Tn so that it's a single expression, we get Tn = T(i) + (Dt/(rho*cp*0.1)) * (q(i-1) - q(i)). Use that in the comprehension and we get max(T(i) + (Dt/(rho*cp*0.1)) * (q(i-1) - q(i)) for i in range(2, imax-1)).
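As a runnable sketch of that comprehension (the functions and constants below are placeholders, not the ones from the question):

imax, Dt, rho, cp = 10, 0.1, 1.0, 1.0   # placeholder constants
q = lambda i: i ** 2                     # placeholder for the real q(i)
T = lambda i: 2 * i                      # placeholder for the real T(i)

y = max(T(i) + (Dt/(rho*cp*0.1)) * (q(i-1) - q(i)) for i in range(2, imax-1))
print(y)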

How to Solve non-specific non-linear equations?

I am attempting to fit a circle to some data. This requires numerically solving a set of three non-linear simultaneous equations (see the Full Least Squares Method of this document).
To me it seems that the NEWTON function provided by IDL is fit for solving this problem. NEWTON requires the name of a function that will compute the values of the equation system for particular values of the independent variables:
FUNCTION newtfunction,X
RETURN, [Some function of X, Some other function of X]
END
While this works fine, it requires that all parameters of the equation system (in this case the set of data points) are hard-coded in newtfunction. This is fine if there is only one data set to solve for; however, I have many thousands of data sets, and defining a new function for each by hand is not an option.
Is there a way around this? Is it possible to define functions programmatically in IDL, or even just pass in the data set in some other manner?
I am not an expert on this matter, but if I were to solve this problem, I would do the following. Instead of solving a system of 3 non-linear equations to find the three unknowns (i.e. xc, yc and r), I would use an optimization routine to converge to a solution from an initial guess. For this, steepest descent, conjugate gradient, or any other multivariate optimization method can be used.
I just quickly derived the least square equation for your problem as (please check before use):
F = sum_{i=1}^{N} ((x_i - xc)^2 + (y_i - yc)^2 - r^2)^2
Calculating the gradient for this function is fairly easy, since it is just a summation, and therefore writing a steepest descent code would be trivial, to calculate xc, yc and r.
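For illustration, here is a rough sketch of the same idea in Python/SciPy rather than IDL (the data points below are made up, and scipy.optimize.least_squares stands in for a hand-written steepest descent):

import numpy as np
from scipy.optimize import least_squares

# Residuals d_i = (x_i - xc)^2 + (y_i - yc)^2 - r^2; least_squares minimizes
# sum_i d_i^2, which is exactly the F above.
def residuals(params, x, y):
    xc, yc, r = params
    return (x - xc)**2 + (y - yc)**2 - r**2

# Made-up noisy points on a circle of radius 2 centred at (1, -1).
theta = np.linspace(0, 2*np.pi, 50)
x = 1 + 2*np.cos(theta) + 0.01*np.random.randn(50)
y = -1 + 2*np.sin(theta) + 0.01*np.random.randn(50)

fit = least_squares(residuals, x0=[0.0, 0.0, 1.0], args=(x, y))
print(fit.x)   # roughly [1, -1, 2]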
I hope it helps.
It's usual to use a COMMON block in these types of functions to pass in other parameters, cached values, etc. that are not part of the calling signature of the numeric routine.

Calculating an integral of two numerical solutions of an ode

I would like to calculate an integral which is determined by two functions: I(T) = ∫₀ᵀ i(f(t), g(t)) dt, where f and g solve ordinary differential equations and i is known.
The obvious approach would be to derive a differential equation for I and then solve it alongside f and g (which can be done, but is numerically expensive in my case). In my case, however, f solves an equation with an initial condition f(0), and g an equation with a final condition g(T).
My best guess at the moment would be to solve f and g on a grid using a standard ODE solver and then apply a standard method for numerical integration with equally spaced t-coordinates, or some kind of quadrature rule (basically anything described in Numerical Recipes).
Does anyone have a better solution? That is, a method that takes the specific type of ODE solver and its accuracy into account.
Many advanced ODE solvers come with a feature called "dense output". The ODE solver gives you not only the values of f and g on a grid (as specified beforehand), but allows you to use its result to find the values at any time. Combining this with an adaptive quadrature rule should give you an answer to whatever precision you need.
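A minimal sketch of that combination in Python/SciPy, with made-up ODEs and integrand (f' = -f with f(0) = 1, g' = g with g(T) = 1, and i(f, g) = f*g) standing in for the real problem:

import numpy as np
from scipy.integrate import solve_ivp, quad

T = 1.0
# dense_output=True makes sol.sol(t) available at arbitrary t, not just at grid points.
sol_f = solve_ivp(lambda t, y: -y, (0, T), [1.0], dense_output=True, rtol=1e-10)
# g has a final condition at t = T, so integrate it backwards from T to 0.
sol_g = solve_ivp(lambda t, y: y, (T, 0), [1.0], dense_output=True, rtol=1e-10)

def integrand(t):
    return sol_f.sol(t)[0] * sol_g.sol(t)[0]   # i(f(t), g(t))

I, err = quad(integrand, 0, T)
print(I)   # here f(t)*g(t) = exp(-1) for all t, so I should be about 0.3679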