Comparison of rational numbers in GNU/Octave independent of numeric precision

The Octave interpreter evaluates this expression as false:
>> 2/3 + 1/6 == 5/6
ans = 0
The cause is floating-point rounding error:
>> 2/3 + 1/6 - 5/6
ans = -1.11022302462516e-16
This can be avoided with the rat (or rats) function, or by casting the values to single precision, but the resulting expressions lack the clear formatting of the initial one:
>> all(rat(2/3 + 1/6) == rat(5/6))
ans = 1
>> single(2/3 + 1/6) == single(5/6)
ans = 1
When using Octave to teach kids arithmetic, 'dirty' translations of mathematical expressions are of no use.
Is there any global adjustment that could be done to evaluate as true the original expression?

Julia has a built-in rational number type and is free, so you don't need to use Octave's symbolic package. You can also work in a Jupyter notebook; notebooks make great teaching tools (my professor used them to teach), and some examples are here.
julia> 2//3 + 1//6
5//6
julia> 2//3 + 1//6 == 5//6
true

Related

Can I define a Maxima function f(x) which assigns to the argument x

Sorry for the basic question, but it's quite hard to find much discussion on Maxima specifics.
I'm trying to learn some Maxima and wanted to use something like
x:2
x+=2
which as far as I can tell doesn't exist in Maxima. Then I discovered that I can define my own operators as infix operators, so I tried doing
infix("+=");
"+=" (a,b):= a:(a+b);
However, this doesn't work: if I first set x:1 and then call x+=2, the function returns 3, but if I check the value of x, I see it hasn't changed.
Is there a way to achieve what I was trying to do in Maxima? Could anyone explain why the definition I gave fails?
Thanks!
The problem with your implementation is that there is both too much and too little evaluation: the += function doesn't see the symbol x, so it doesn't know which variable to assign the result to, and the left-hand side of an assignment isn't evaluated, so += thinks it is assigning to a, not x.
Here's one way to get the right amount of evaluation. ::= defines a macro, which is just a function that quotes its arguments and whose return value is evaluated again. buildq is a substitution function which quotes the expression into which you are substituting. So the combination of ::= and buildq here constructs the expression x: x + 1 and then evaluates it.
(%i1) infix ("+=") $
(%i2) "+="(a, b) ::= buildq ([a, b], a: a + b) $
(%i3) x: 100 $
(%i4) macroexpand (x += 1);
(%o4) x : x + 1
(%i5) x += 1;
(%o5) 101
(%i6) x;
(%o6) 101
(%i7) x += 1;
(%o7) 102
(%i8) x;
(%o8) 102
So it is certainly possible to do this, if you want to. But may I suggest that maybe you don't need it? Modifying a variable makes it harder to keep track, mentally, of what is going on. A programming policy such as one-time assignment can make it easier for the programmer to understand the program. This is part of a general approach called functional programming; perhaps you can take a look at that. Maxima has various features which make it possible to use functional programming, although you are not required to use them.

Octave -inf and NaN

I searched the forum and found this thread, but it does not cover my question:
Two ways around -inf
From a Machine Learning class, week 3, I am getting -Inf when using log(0), which later turns into a NaN. The NaN results in no answer being given in a sum formula, so no scalar for J (a cost function which is the result of matrix math).
Here is a test of my function
>> sigmoid([-100;0;100])
ans =
3.7201e-44
5.0000e-01
1.0000e+00
This is as expected, but the hypothesis requires ans = 1 - sigmoid:
>> 1-ans
ans =
1.00000
0.50000
0.00000
and log(0) gives -Inf:
>> log(ans)
ans =
0.00000
-0.69315
-Inf
-Inf rows do not add to the cost function, but the -Inf carries through to NaN, and I do not get a result. I cannot find any material on -Inf, but am thinking there is a problem with my sigmoid function.
Can you provide any direction?
The typical way to avoid infinity in these cases is to add eps to the operand:
log(ans + eps)
eps is a very, very small value (about 2.2e-16), and won't meaningfully affect the output for values of ans unless ans is zero:
>> z = [-100;0;100];
>> g = 1 ./ (1+exp(-z));
>> log(1-g + eps)
ans =
0.0000
-0.6931
-36.0437
Adding to the answers here, I really do hope you will provide some more context for your question (in particular, what you are actually trying to do).
I will go out on a limb and guess the context, just in case this is useful. You are probably doing machine learning, and trying to define a cost function based on the negative log likelihood of a model, and then trying to differentiate it to find the point where this cost is at its minimum.
In general for a reasonable model with a useful likelihood that adheres to Cromwell's rule, you shouldn't have these problems, but, in practice it happens. And presumably in the process of trying to calculate a negative log likelihood of a zero probability you get inf, and trying to calculate a differential between two points produces inf / inf = nan.
In this case, this is an 'edge case', and generally in computer science edge cases need to be spotted as exceptional circumstances and dealt with appropriately. The reality is that you can reasonably expect that inf isn't going to be your function's minimum! Therefore, whether you remove it from the calculations, or replace it by a very large number (whether arbitrarily or via machine precision) doesn't really make a difference.
So in practice you can do either of the two things suggested by others here, or even just detect such instances and skip them from the calculation. The practical result should be the same.
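For instance, here is a minimal Octave sketch of that last option (the variable name terms is purely illustrative and stands for the per-example cost contributions): keep only the finite entries before summing, so a stray -Inf cannot carry through to NaN.
terms = [0; -0.69315; -Inf];        % example values, as in the question above
J = sum(terms(isfinite(terms)))     % non-finite entries are skipped; J = -0.69315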
-Inf means negative infinity, which is the correct answer, because log(0) is minus infinity by definition.
The easiest thing to do is to check your intermediate results and if the number is below some threshold (like 1e-12) then just set it to that threshold. The answers won't be perfect but they will still be pretty close.
Using the following as the sigmoid function:
function g = sigmoid(z)
g = 1 ./ (1 + e.^-z);
end
Then the following code runs with no issues. Choose the threshold value in the 'max' statement to be less than the expected noise in your measurements, and then you're good to go:
>> a = sigmoid([-100, 0, 100])
a =
3.7201e-44 5.0000e-01 1.0000e+00
>> b = 1-a
b =
1.00000 0.50000 0.00000
>> c = max(b, 1e-12)
c =
1.0000e+00 5.0000e-01 1.0000e-12
>> d = log(c)
d =
0.00000 -0.69315 -27.63102

Is it possible to write (display) exponential equations in Scilab?

I've been trying to display in my console an exponential equation like the following one:
y(t) = a*e^t + b*e^t + c*e^t
I would write it as a string; however, the coefficients a, b and c are numbers in a vector V = [a b c]. So I was trying to concatenate the numbers with the string "e^t", but I failed to do it. I know Scilab displays polynomial equations, but I don't know whether it is possible to display exponential ones. Can anyone help?
Usually this kind of thing is done with the mprintf command, which places the given numerical arguments into a string according to formatting instructions.
V = [3 5 -7]
mprintf("y(t) = %f*e^t + %f*e^t + %f*e^t", V)
The output is
y(t) = 3.000000*e^t + 5.000000*e^t + -7.000000*e^t
which isn't ideal, and can be improved in some ways by tweaking the formatters, but is readable regardless.
Notice we don't have to list every entry V(1), V(2), ... individually; the vector V gets "unpacked" automatically.
If you wanted 2D output like what we get for polynomials, then no: this kind of thing is what Scilab does for polynomials and rational functions only, not for general expressions.
There is also prettyprint but its output is LaTeX syntax, like $1+s+s^{2}-s^{123}$. It works for a few things: polynomials, rational functions, matrices... but again, Scilab is not meant for symbolic manipulations, and does not really support symbolic expressions.

Is there a way to avoid creating an array in this Julia expression?

Is there a way to avoid creating an array in this Julia expression:
max((filter(n -> string(n) == reverse(string(n)), [x*y for x = 1:N, y = 1:N])))
and make it behave similar to this Python generator expression:
max(x*y for x in range(N+1) for y in range(x, N+1) if str(x*y) == str(x*y)[::-1])
The Julia version is 2.3 times slower than Python due to array allocation and N*N iterations vs. Python's N*N/2.
EDIT
After playing a bit with a few implementations in Julia, the fastest loop style version I've got is:
function f(N) # 320ms for N=1000, Julia 0.2.0 i686-w64-mingw32
    nMax = 0   # a NaN sentinel would never update, since NaN < n is always false
    for x = 1:N, y = x:N
        n = x*y
        s = string(n)
        s == reverse(s) || continue
        nMax < n && (nMax = n)
    end
    nMax
end
but an improved functional version isn't far behind (only 14% slower, or significantly faster if you consider the 2x larger domain):
function e(N) # 366ms for N=1000, Julia 0.2.0 i686-w64-mingw32
    isPalindrome(n) = string(n) == reverse(string(n))
    max(filter(isPalindrome, [x*y for x = 1:N, y = 1:N]))
end
There is an unexpected 2.6x performance improvement from defining the isPalindrome function, compared to the original version at the top of this question.
We have talked about allowing the syntax
max(f(x) for x in itr)
as a shorthand for producing each of the values f(x) in one coroutine while computing the max in another coroutine. This would basically be shorthand for something like this:
max(@task for x in itr; produce(f(x)); end)
Note, however, that this syntax that explicitly creates a task already works, although it is somewhat less pretty than the above. Your problem can be expressed like this:
max(@task for x=1:N, y=x:N
    string(x*y) == reverse(string(x*y)) && produce(x*y)
end)
With the hypothetical producer syntax above, it could be reduced to something like this:
max(x*y if string(x*y) == reverse(string(x*y)) for x=1:N, y=x:N)
While I'm a fan of functional style, in this case I would probably just use a for loop:
m = 0
for x = 1:N, y = x:N
    n = x*y
    string(n) == reverse(string(n)) || continue
    m < n && (m = n)
end
Personally, I don't find this version much harder to read and it will certainly be quite fast in Julia. In general, while functional style can be convenient and pretty, if your primary focus is on performance, then explicit for loops are your friend. Nevertheless, we should make sure that John's max/filter/product version works. The for loop version also makes other optimizations easier to add, like Harlan's suggestion of reversing the loop ordering and exiting on the first palindrome you find. There are also faster ways to check if a number is a palindrome in a given base than actually creating and comparing strings.
As to the general question of "getting flexible generators and list comprehensions in Julia", the language already has:
1. A general high-performance iteration protocol based on the start/done/next functions.
2. Far more powerful multidimensional array comprehensions than most languages. At this point, the only missing feature is the if guard, which is complicated by the interaction with multidimensional comprehensions and the need to potentially dynamically grow the resulting array.
3. Coroutines (aka tasks) which allow, among other patterns, the producer-consumer pattern.
Python has the if guard but doesn't worry about comprehension performance nearly as much – if we're going to add that feature to Julia's comprehensions, we're going to do it in a way that's both fast and interacts well with multidimensional arrays, hence the delay.
Update: The max function is now called maximum (maximum is to max as sum is to +) and the generator syntax and/or filters work on master, so for example, you can do this:
julia> @time maximum(100x - x^2 for x = 1:100 if x % 3 == 0)
0.059185 seconds (31.16 k allocations: 1.307 MB)
2499
Once 0.5 is out, I'll update this answer more thoroughly.
There are two questions being mixed together here: (1) can you filter a list comprehension mid-comprehension (for which the answer is currently no) and (2) can you use a generator that doesn't allocate an array (for which the answer is partially yes). Generators are provided by the Iterators package, but the Iterators package seems to not play well with filter at the moment. In principle, the code below should work:
max((x, y) -> x * y,
    filter((x, y) -> string(x * y) == reverse(string(x * y)),
           product(1:N, 1:N)))
I don't think so. There aren't currently filters in Julia array comprehensions. See discussion in this issue.
In this particular case, I'd suggest just nested for loops if you want to get faster computation.
(There might be faster approaches where you start with N and count backwards, stopping as soon as you find something that succeeds. Figuring out how to do that correctly is left as an exercise, etc...)
As mentioned, this is now possible (using Julia 0.5.0)
isPalindrome(n::String) = n == reverse(n)
fun(N::Int) = maximum(x*y for x in 1:N for y in x:N if isPalindrome(string(x*y)))
I'm sure there are better ways that others can comment on. Time (after warm-up):
julia> @time fun(1000);
0.082785 seconds (2.03 M allocations: 108.109 MB, 27.35% gc time)

solve mathematical equation with 1 unknown (equations are dynamically built)

I have to dynamically build equations like the following:
x + x/3 + (x/3)/4 + (x/3/4)/2 = 50
Now I would like to evaluate this equation and solve for x. The equation is built dynamically: x is the leaf node in a taxonomy, the other 3 nodes are the super concepts, and the divisor represents the number of children of the child nodes.
Is there a library that allows to build such equations dynamically and resolve x?
Thanks, Chris
Are your equations always of this form (linear in x)?
If so, when building the equation, just set x to 1 and evaluate the lhs.
This will give you lhs = 1 + 1/3 + (1/3)/4 + (1/3/4)/2 = 1.4583..
Then calculate x = rhs / lhs = 50 / 1.4583
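A minimal sketch of that recipe in Octave (chosen only because it is the language used elsewhere on this page; the question itself does not name one):
>> lhs = @(x) x + x/3 + (x/3)/4 + (x/3/4)/2;   % the dynamically built left-hand side
>> x = 50 / lhs(1)                             % evaluate at x = 1, then divide
x = 34.286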
It might help you to do some algebra on it.
Note that:
x= 3*x/3 = (x*4*3*2)/(4*3*2)
x+x/3 = 3x/3 + x/3 = 4x/3
and in your particular case:
x + x/3 + (x/3)/4 + (x/3/4)/2 = (x*4*3*2)/(4*3*2) + (x*4*2)/(4*3*2) + (x*2)/(4*3*2) + (x)/(4*3*2)
= (4*3*2*x + 4*2*x + 2*x + x)/(4*3*2)
Perhaps if you can find a way to have the left hand side rewritten as a single big fraction like this, the solution will come much easier.
Also, factor out the x
(4*3*2*x + 4*2*x + 2*x + x)/(4*3*2) = x*(4*3*2 + 4*2 + 2 + 1)/(4*3*2)
Then solve for x
50= x*(a/b)
50*(b/a) = x
Since you have some code generating the polynomial, you should be able to generate this big (a/b) fraction thing pretty easily too. I purposely did not simplify the multiplications so that it is clear where each component comes from.
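As a hedged sketch of generating that (a/b) fraction from code, again in Octave purely for illustration (the vector d of divisors is an assumption about how the taxonomy is represented):
d = [3 4 2];                    % divisors applied at each successive level
b = prod(d);                    % b = 4*3*2 = 24
a = b + sum(b ./ cumprod(d));   % a = 24 + 8 + 2 + 1 = 35
x = 50 * b / a                  % x = 50*(b/a) = 34.286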
If you're planning to use Java, you can try JAS. It claims to be able to solve polynomial equations.
FTA:
The Java Algebra System (JAS) is an object oriented, type safe and multi-threaded approach to computer algebra. JAS provides a well designed software library using generic types for algebraic computations implemented in the Java programming language. The library can be used as any other Java software package, or it can be used interactively or interpreted through a Jython (Java Python) front end. The focus of JAS is at the moment on commutative and solvable polynomials, Groebner bases and applications. By the use of Java as implementation language, JAS is 64-bit and multi-core CPU ready.