Expr for float values in Tcl

Calculating float values
tclsh
% expr 0.2+0.2
0.4
% expr 0.2+0.1
0.30000000000000004
%
Why not 0.3?
Am I missing something?
Thanks in advance.

Neither 0.1 nor 0.2 has an exact representation in IEEE double-precision binary floating point (which Tcl uses internally for expressions involving fractional values, as there's good hardware support for it). This means that the values you are computing with are never exactly what you think they are; instead, they're both very slightly more (as it happens; in general they could also have been slightly less). When you add 0.2+ε1 and 0.1+ε2, it can happen that ε1+ε2 adds up to more than the threshold at which the result stops being closest to 0.3 (another imprecisely represented value) and becomes closest to the next exactly representable value above it. This is what you have observed. It's also inherent in the way floating-point mathematics works in a vast array of languages; only integer arithmetic (or fractional arithmetic whose values are exact multiples of some power of 2, e.g., 0.5, 0.25, 0.125) is guaranteed to be exact.
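You can see the values actually being used by asking for more digits than Tcl prints by default (output from a typical IEEE-754 build of tclsh; the trailing digits are simply those of the nearest doubles):
% format %.20f 0.2
0.20000000000000001110
% format %.20f 0.1
0.10000000000000000555
% format %.20f [expr {0.2 + 0.1}]
0.30000000000000004441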
The only interesting thing of note here is that Tcl 8.5 and 8.6 prefer to render floating point numbers with the minimal number of digits required to get the exact value back when re-parsed. If you want to get a fixed number of digits (e.g., 8) try using format when converting:
format %.8f [expr {0.2+0.1}]

This behavior exists in almost all programming languages, e.g. Ruby, Python, etc.
The suggestion here is to avoid storing numbers as floating point and to use integers whenever possible. The bottom line: do not rely on strict equality when comparing floating-point values.
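For instance, a tolerance-based comparison (the 1e-9 threshold here is an arbitrary choice for illustration) sidesteps the issue:
% expr {abs((0.2 + 0.1) - 0.3) < 1e-9}
1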

Related

Increase precision in Tcl

How do I increase the precision in Tcl?
I am getting b2 below as -0.000001, whereas the actual value is -7.95553e-007:
set b2 [lindex $b1 0]
I tried "set tcl_precision 12" but it did not change anything
Tcl these days uses a floating point rendering system that means by default it never loses any precision at all when a double-precision floating point number is automatically converted to a string and back, while simultaneously using the minimum number of decimal digits in the string. It has had this code since Tcl 8.5 and uses it whenever the tcl_precision global variable is set to its default value (0 these days). In the future, this may well become a hard-core default, but I don't think it has done so yet.
Older versions of Tcl (all currently unsupported) instead used that tcl_precision global to control the number of decimal digits used; setting it to a non-zero value still has that effect, for backward compatibility. The old default value was 15, which usually did the right thing; 17 ensures that no information is ever lost, even in tricky edge cases, but at the cost of often producing effectively noise digits at the end. (That is a consequence of the differences between arithmetic in base 2 and base 10, and is properly common to all languages that use IEEE binary floating-point math.)
If you want to use a definite number of decimal digits after the point because you are producing output for human consumption, you should use the format command.
format %.5f 1.23; # >>> 1.23000
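A quick illustration of the difference between the old fixed-digit mode and the modern exact mode (run on a typical Tcl 8.5/8.6 build; the exact digits shown assume IEEE-754 doubles):
% set tcl_precision 17
17
% expr {1.0/3.0}
0.33333333333333331
% set tcl_precision 0
0
% expr {1.0/3.0}
0.3333333333333333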

How do I round this binary number to the nearest even

I have this binary representation of 0.1:
0.00011001100110011001100110011001100110011001100110011001100110
I need to round it to the nearest even to be able to store it in the double precision floating point. I can't seem to understand how to do that. Most tutorials talk about guard, round and sticky bits - where are they in this representation?
Also I've found the following explanation:
Let’s see what 0.1 looks like in double-precision. First, let’s write
it in binary, truncated to 57 significant bits:
0.000110011001100110011001100110011001100110011001100110011001…
Bits 54 and beyond total to greater than half the value of bit
position 53, so this rounds up to
0.0001100110011001100110011001100110011001100110011001101
This one doesn't talk about GRS bits, why? Aren't they always required?
The text you quote is from my article Why 0.1 Does Not Exist In Floating-Point. In that article I am showing how to do the conversion by hand, and the "GRS" bits are an IEEE implementation detail. Even if you are using a computer to do the conversion, you don't have to use IEEE arithmetic (and you shouldn't if you want to do it correctly), so the GRS bits won't come into play there either. In any case, the GRS bits apply to calculations, not really to the conceptual idea of conversion.
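As a side note, you can see the result of that round-to-nearest-even conversion from a Tcl prompt: the nearest double to 0.1 has an exact, terminating decimal expansion, so asking for enough digits reproduces it in full (digits shown assume standard IEEE-754 doubles):
% format %.55f 0.1
0.1000000000000000055511151231257827021181583404541015625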

Is there a "native" way to convert from numbers to dB in Tcl

dB, or decibel, is a unit used to express a ratio on a logarithmic scale; specifically, the definition of dB that I'm interested in is X(dB) = 20*log10(x), where x is the "normal" value and X(dB) is the value in dB. When I wrote code converting between mils and mm, I noticed that if I used the direct approach, i.e., multiplying by the ratio between the units, I got small errors on the opposite conversion, i.e., to_mil [to_mm val_in_mil] wasn't equal to val_in_mil, and the same with mm. The units library has solved this problem, as the conversions done by it do not have that calculation error. But the library specifically doesn't offer (or I didn't find) an option to convert a number to dB.
Is there another library / command that can transform numbers to dB and dB to numbers without calculation errors?
I did an experiment using the direct math conversion, and what I got is:
>> set a 0.005
0.005
>> set b [expr {20*log10($a)}]
-46.0205999133
>> expr {pow(10,($b/20))}
0.00499999999999
It's all a matter of precision. We often tend to forget that floating point numbers are not real numbers (in the mathematical sense of ℝ).
How many decimal digits do you need?
If you, for example, would only need 5 decimal digits, rounding 0.00499999999999 will give you 0.00500 which is what you wanted.
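For example, Tcl's format command does exactly that rounding for display:
% format %.5f 0.00499999999999
0.00500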
Since rounding fp numbers is not an easy task and may cause even more trouble, you might instead change the way you determine whether two numbers are equal:
>> set a 0.005
0.005
>> set b [expr {20*log10($a)}]
-46.0205999133
>> set c [expr {pow(10,($b/20))}]
0.00499999999999
>> expr {abs($a - $c) < 1E-10}
1
>> expr {abs($a - $c) < 1E-20}
0
>> expr {$a - $c}
8.673617379884035e-19
The numbers in your example can be considered "equal" up to an error of about 1e-18. Note that this is just a rough estimate, not a full solution.
If you're really dealing with problems that are sensitive to numerical error propagation, you might look deeper into "numerical analysis". The article What Every Computer Scientist Should Know About Floating-Point Arithmetic or, even better, this site: http://floating-point-gui.de might be a start.
In case you need a larger precision you should drop your "native" requirement.
You may use the BigFloat package offered by tcllib (http://tcllib.sourceforge.net/doc/bigfloat.html) or even use GMP (the GNU multiple-precision arithmetic library) through ffidl (http://elf.org/ffidl). There's an interface already defined for it: gmp.tcl.
With the way floating-point numbers are stored, a log10(...) followed by a pow(10, ...) cannot round-trip every value exactly. So you lose precision, just as the integer divisions 89/7 and 88/7 both yield 12.
When you put a value into floating-point format, you should forget the ability to know its exact value any more, unless you also keep the old, exact value. If you want exactly 1/200, store it as the integer 1 and the integer 200. If you want exactly the base-ten logarithm of 1/200, store it as 1, 200 and the information that a base-ten logarithm has been applied to it (see the sketch after this answer).
You can fill your entire memory with the first x decimal digits of the square root of 2, but it still won't be the square root of 2 you store.
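A minimal sketch of that idea in Tcl (the ratio, ratio_mul and ratio_tofloat helpers are hypothetical names invented here for illustration, not an existing library):
# Keep an exact ratio as a {numerator denominator} pair of integers;
# Tcl 8.5+ integer arithmetic is arbitrary-precision, so these stay exact.
proc ratio {num den} { list $num $den }
proc ratio_mul {a b} {
    lassign $a an ad
    lassign $b bn bd
    list [expr {$an * $bn}] [expr {$ad * $bd}]
}
proc ratio_tofloat {r} {
    lassign $r n d
    expr {double($n) / $d}   ;# rounding happens only at this point
}
puts [ratio_tofloat [ratio_mul [ratio 1 200] [ratio 3 1]]]   ;# 3/200 >>> 0.015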

Best practice for storing weights in a SQL database?

An application I'm working on needs to store weights of the format X pounds, y.y ounces. The database is MySQL, but I imagine this is DB agnostic.
I can think of three ways to do this:
Convert the weight to decimal pounds and store in a single field. (5 lbs 6.2 oz = 5.33671875 lbs)
Convert the weight to decimal ounces and store in a single field. (5 lbs 6.2 oz = 86.2 oz)
Store the pounds portion as an integer and the ounces portion as a decimal, in two fields.
I'm thinking that #1 is not such a good idea, since decimal pounds will produce numbers of arbitrary precision, which would need to be stored as a float, which could lead to inaccuracies which are inherent in floating point numbers.
Is there a compelling reason to choose #2 over #3, or vice versa?
TL;DR
Choose either option #1 or option #2—there's no difference between them. Don't use option #3, because it's awkward to work with.
You claim that there are inherent inaccuracies in floating point numbers. I think that this deserves to be explored a little first.
When deciding upon a numeral system for representing a number (whether on a piece of paper, in a computer circuit, or elsewhere), there are two separate issues to consider:
its basis; and
its format.
Pick a base, any base…
Limited by finite space, one cannot represent an arbitrary member of an infinite set. For example: no matter how much paper you buy or how small your handwriting, it'd always be possible to find an integer that won't fit in the given space (you could just keep appending extra digits until the paper runs out). So, with integers, we usually restrict our finite space to representing only those that fall within some particular interval—e.g. if we have space for the positive/negative sign and three digits, we might restrict ourselves to the interval [-999,+999].
Every non-empty interval contains an infinite set of real numbers. In other words, no matter what interval one takes over the real numbers—be it [-999,+999], [0,1], [0.000001,0.000002] or anything else—there is still an infinite set of reals within that interval (one need only keep appending (non-zero) fractional digits)! Therefore arbitrary real numbers must always be "rounded" to something that can be represented in finite space.
The set of real numbers that can be represented in finite space depends upon the numeral system that is used. In our (familiar) positional base-10 system, finite space will suffice for one-half (0.5₁₀) but not for one-third (0.33333…₁₀); by contrast, in the (less familiar) positional base-9 system, it is the other way around (those same numbers are respectively 0.44444…₉ and 0.3₉). The consequence of all this is that some numbers that can be represented using only a small amount of space in positional base-10 (and therefore appear to be very "round" to us humans), e.g. one-tenth, would actually require infinite binary circuits to be stored precisely (and therefore don't appear to be very "round" to our digital friends)! Notably, since 2 is a factor of 10, the same is not true in reverse: any number that can be represented with finite binary can also be represented with finite decimal.
We can't do any better for continuous quantities. Ultimately such quantities must use a finite representation in some numeral system: it's arbitrary whether that system happens to be easy on computer circuits, on human fingers, on something else or on nothing at all—whichever system is used, the value must be rounded and therefore it always results in "representation error".
In other words, even if one has a perfectly accurate measuring instrument (which is physically impossible), then any measurement it reports will already have been rounded to a number that happens to fit on its display (in whatever base it uses—typically decimal, for obvious reasons). So, "86.2 oz" is never actually "86.2 oz" but rather a representation of "something between 86.1500000... oz and 86.2499999... oz". (Actually, because in reality the instrument is imperfect, all we can ever really say is that we have some degree of confidence that the actual value falls within that interval—but that is definitely departing some way from the point here).
But we can do better for discrete quantities. Such values are not "arbitrary real numbers" and therefore none of the above applies to them: they can be represented exactly in the numeral system in which they were defined—and indeed, should be (as converting to another numeral system and truncating to a finite length would result in rounding to an inexact number). Computers can (inefficiently) handle such situations by representing the number as a string: e.g. consider ASCII or BCD encoding.
Apply a format…
Since it's a property of the numeral system's (somewhat arbitrary) basis, whether or not a value appears to be "round" has no bearing on its precision. That's a really important observation, which runs counter to many people's intuition (and it's the reason I spent so much time explaining numerical basis above).
Precision is instead determined by how many significant figures a representation has. We need a storage format that is capable of recording our values to at least as many significant figures as we consider them to be correct. Taking by way of example values that we consider to be correct when stated as 86.2 and 0.0000862, the two most common options are:
Fixed point, where the number of significant figures depends on magnitude: e.g. in fixed 5-decimal-point representation, our values would be stored as 86.20000 and 0.00009 (and therefore have 7 and 1 significant figures of precision respectively). In this example, precision has been lost in the latter value (and indeed, it wouldn't take much more for us to have been totally unable to represent anything of significance); and the former value stored false precision, which is a waste of our finite space (and indeed, it wouldn't take much more for the value to become so large that it overflows the storage capacity).
A common example of when this format might be appropriate is for an accounting system: monetary sums must usually be tracked to the penny irrespective of their magnitude (therefore less precision is required for small values, and more precision is required for large values). As it happens, currency is usually also considered to be discrete (pennies are indivisible), so this is also a good example of a situation where a particular basis (decimal for most modern currencies) is desirable to avoid the representation errors discussed above.
One usually implements fixed point storage by treating one's values as quotients over a common denominator and storing the numerator as an integer. In our example, the common denominator could be 10⁵, so instead of 86.20000 and 0.00009 one would store the integers 8620000 and 9 and remember that they must be divided by 100000 (see the sketch after this list).
Floating point, where the number of significant figures is constant irrespective of magnitude: e.g. in 5-significant-figure decimal representation, our values would be stored as 86.200 and 0.000086200 (and, by definition, have 5 significant figures of precision both times). In this example, both values have been stored without any loss of precision; and they both also have the same amount of false precision, which is less wasteful (and we can therefore use our finite space to represent a far greater range of values—both large and small).
A common example of when this format might be appropriate is for recording any real world measurements: the precision of measuring instruments (which all suffer from both systematic and random errors) is fairly constant irrespective of scale so, given sufficient significant figures (typically around 3 or 4 digits), absolutely no precision is lost even if a change of base resulted in rounding to a different number.
One usually implements floating point storage by treating one's values as integer significands with integer exponents. In our example, the significand could be 86200 for both values, whereupon the (base-10) exponents would be -3 and -9 respectively.
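As a minimal sketch of the fixed-point approach described above (in Tcl, with illustrative numbers; any language with exact integer arithmetic works the same way):
# Store 86.20 as the integer 8620 over the common denominator 100.
set weight_hundredths 8620
set total_hundredths [expr {$weight_hundredths * 3}]   ;# exact integer arithmetic
# Split off the fractional part only when rendering for display:
puts [format %d.%02d [expr {$total_hundredths / 100}] [expr {$total_hundredths % 100}]]
# >>> 258.60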
But how precise are the floating point storage formats used by our computers?
An IEEE754 single precision (binary32) floating point number has 24 bits, or log₁₀(2²⁴) (over 7) digits, of significance—i.e. it has a tolerance of less than ±0.000006%. In other words, it is more precise than saying "86.20000".
An IEEE754 double precision (binary64) floating point number has 53 bits, or log₁₀(2⁵³) (almost 16) digits, of significance—i.e. it has a tolerance of just over ±0.00000000000001%. In other words, it is more precise than saying "86.2000000000000".
The most important thing to realise is that these formats are, respectively, over ten thousand and over one trillion times more precise than saying "86.2"—even though exact conversions of the binary back into decimal happen to include erroneous false precision (which we must ignore: more on this shortly)!
Notice also that both fixed and floating point formats will result in loss of precision when a value is known more precisely than the format supports. Such rounding errors can propagate in arithmetic operations to yield apparently erroneous results (which no doubt explains your reference to the "inherent inaccuracies" of floating point numbers): for example, 1⁄3 × 3000 in 5-place fixed point would yield 999.99000 rather than 1000.00000; and 1⁄7 − 7⁄50 in 5-significant figure floating point would yield 0.0028600 rather than 0.0028571.
The field of numerical analysis is dedicated to understanding these effects, but it is important to realise that any usable system (even performing calculations in your head) is vulnerable to such problems because no method of calculation that is guaranteed to terminate can ever offer infinite precision: consider, for example, how to calculate the area of a circle—there will necessarily be loss of precision in the value used for π, which will propagate into the result.
Conclusion
Real world measurements should use binary floating point: it's fast, compact, extremely precise and no worse than anything else (including the decimal version from which you started). Since MySQL's floating-point datatypes are IEEE754, this is exactly what they offer.
Currency applications should use denary fixed point: whilst it's slow and wastes memory, it ensures both that values are not rounded to inexact quantities and that pennies are not lost on large monetary sums. Since MySQL's fixed-point datatypes are BCD-encoded strings, this is exactly what they offer.
Finally, bear in mind that programming languages usually represent fractional values using binary floating-point types: so if your database stores values in another format, you need to be careful how they are brought into your application or else they may get converted (with all the ensuing issues that entails) at the interface.
Which option is best in this case?
Hopefully I've convinced you that your values can safely (and should) be stored in floating point types without worrying too much about any "inaccuracies"? Remember, they're more precise than your flimsy 3-significant-digit decimal representation ever was: you just have to ignore false precision (but one must always do that anyway, even if using a fixed-point decimal format).
As for your question: choose either option 1 or 2 over option 3—it makes comparisons easier (for example, to find the maximal mass, one could just use MAX(mass), whereas to do it efficiently across two columns would require some nesting).
Between those two, it doesn’t matter which one chooses—floating point numbers are stored with a constant number of significant bits irrespective of their scale.
Furthermore, whilst in the general case it could happen that some values are rounded to binary numbers that are closer to their original decimal representation using option 1 whilst simultaneously others are rounded to binary numbers that are closer to their original decimal representation using option 2, as we shall shortly see such representation errors only manifest within the false precision that should always be ignored.
However, in this case, because it happens that there are 16 ounces to 1 pound (and 16 is a power of 2), the relative differences between the original decimal values and the stored binary numbers are identical under the two approaches:
5.3875₁₀ (not 5.33671875₁₀ as stated in your question) would be stored in a binary32 float as 101.011000110011001100110₂ (which is 5.38749980926513671875₁₀): this is 0.0000036% from the original value (but, as discussed above, the "original value" was already a pretty lousy representation of the physical quantity it represents).
Knowing that a binary32 float stores only 7 decimal digits of precision, our compiler knows for certain that everything from the 8th digit onwards is definitely false precision and therefore must be ignored in every case—thus, provided that our input value didn't require more precision than that (and if it did, binary32 was obviously the wrong choice of format), this guarantees a return to a decimal value that looks just as round as that from which we started: 5.387500₁₀. However, we should really apply domain knowledge at this point (as we should with any storage format) to discard any further false precision that might exist, such as those two trailing zeroes.
86.2₁₀ would be stored in a binary32 float as 1010110.00110011001100110₂ (which is 86.1999969482421875₁₀): this is also 0.0000036% from the original value. As before, we then ignore false precision to return to our original input.
Notice how the binary representations of the numbers are identical, except for the placement of the radix point (which is four bits apart):
101.0110 00110011001100110
101 0110.00110011001100110
This is because 5.3875 × 2⁴ = 86.2.
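One can spot-check this from a Tcl prompt: scaling by a power of two is exact in binary floating point, so the two stored values really do share the same significand bits:
% expr {5.3875 * 16 == 86.2}
1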
As an aside: being European (albeit British), I also have a strong aversion to imperial units of measurement—handling values of different scales is just so messy. I'd almost certainly store masses in SI units (e.g. kilograms or grams) and then perform conversions to imperial units as required within the presentation layer of my application. Plus rigidly adhering to SI units might one day save you from losing $125m.
I’d be tempted to store it in a metric unit, as they tend to be simple decimals and not complex values like pounds and ounces. That way, you can just store the one value (i.e. 103.25 kg) rather than the pounds–ounces equivalent, and it’s easier to perform conversions.
This is something I’ve dealt with in the past. I do a lot of work on pro wrestling and mixed martial arts (MMA) websites where fighters’ heights and weights need to be recorded. They tend to be displayed as feet and inches and pounds and ounces, but I still store the values in their centimetres and kilogram equivalents, and then do the conversion when displaying on the site.
First, I had not known about how floating point numbers were inaccurate; thankfully a later search helped me understand: Floating Point Inaccuracy Examples.
I would fully agree with @eggyal: keep the data in a single format in a single column. This allows you to expose it to the application and let the application deal with the presentation of it, be it in lbs/oz, rounded-up lbs, whatever.
The database should keep the raw data while the presentation layer dictates the layout.
You can use the decimal data type for the weight column:
decimal('weight', 8, 2); // precision = 8, scale = 2
Storage size:
Precision 1-9: 5 bytes
Precision 10-19: 9 bytes
Precision 20-28: 13 bytes
Precision 29-38: 17 bytes

For any finite floating point value, is it guaranteed that x - x == 0?

Floating point values are inexact, which is why we should rarely use strict numerical equality in comparisons. For example, in Java this prints false (as seen on ideone.com):
System.out.println(.1 + .2 == .3);
// false
Usually the correct way to compare results of floating point calculations is to see if the absolute difference against some expected value is less than some tolerated epsilon.
System.out.println(Math.abs(.1 + .2 - .3) < .00000000000001);
// true
The question is about whether or not some operations can yield exact results. We know that for any non-finite floating point value x (i.e. either NaN or an infinity), x - x is ALWAYS NaN.
But if x is finite, is any of this guaranteed?
x * -1 == -x
x - x == 0
(In particular I'm most interested in Java behavior, but discussions for other languages are also welcome.)
For what it's worth, I think (and I may be wrong here) the answer is YES! I think it boils down to whether or not, for any finite IEEE-754 floating point value, its additive inverse is always computable exactly. Since e.g. float and double have one dedicated bit just for the sign, this seems to be the case, since finding the additive inverse only needs a flip of the sign bit (i.e. the significand is left intact).
Related questions
Correct Way to Obtain The Most Negative Double
How many double numbers are there between 0.0 and 1.0?
Both equalities are guaranteed with IEEE 754 floating-point, because the results of both x-x and x * -1 are representable exactly as floating-point numbers of the same precision as x. In this case, regardless of the rounding mode, the exact values have to be returned by a compliant implementation.
EDIT: Comparing to the .1 + .2 example.
You can't add .1 and .2 in IEEE 754 because you can't represent them to pass to +. Addition, subtraction, multiplication, division and square root return the unique floating-point value which, depending on the rounding mode, is immediately below, immediately above, nearest with a rule to handle ties, ..., the result of the operation on the same arguments in ℝ. Consequently, when the result (in ℝ) happens to be representable as a floating-point number, this number is automatically the result regardless of the rounding mode.
The fact that your compiler lets you write 0.1 as shorthand for a different, representable number without a warning is orthogonal to the definition of these operations. When you write - (0.1) for instance, the - is exact: it returns exactly the opposite of its argument. On the other hand, its argument is not 0.1, but the double that your compiler uses in its place.
In short, another part of the reason why the operation x * (-1) is exact is that -1 can be represented as a double.
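Both identities are easy to spot-check from a Tcl prompt (a spot check over one value, not a proof):
% set x 0.30000000000000004
0.30000000000000004
% expr {$x - $x == 0}
1
% expr {$x * -1 == -$x}
1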
Although x - x may give you -0 rather than true 0, -0 compares as equal to 0, so you will be safe with your assumption that any finite number minus itself will compare equal to zero.
See Is there a floating point value of x, for which x-x == 0 is false? for more details.