While working with Tcl, I discovered the following behavior: when I loop over a double variable, it loses its precision.
set dbl [expr { double(13.0/7.0) }]
set dbl2 [expr { double(13.0/7.0) }]
foreach a $dbl {   ;# treats $dbl as a list, replacing its internal double rep
}
if { $dbl == $dbl2 } {
puts "\$dbl == \$dbl2"
} else {
puts "\$dbl != \$dbl2" ;# they will be not equal
}
As I soon found out, when you use operations that work with strings or lists (e.g. llength, lindex, string first, regsub, foreach, etc.), the double's internal representation is replaced with a string representation, created (or reused from earlier) based on the $tcl_precision value. Furthermore, every copy of this double variable made with the set command is spoiled as well.
Is there a way not to lose precision after such operations in Tcl 8.4, without forcing tcl_precision to some fixed value?
P.S. set tcl_precision 0 will only work in Tcl 8.5 or later.
From Tcl 8.5 onwards, your code should Just Work. Considerable effort was put into 8.5 to make the default conversion of doubles to strings (and hence to other types) not lose information. It also tries to use the minimum number of digits to do this on the grounds that this minimises the amount of surprise presented to people; yes, we had a real expert working on this.
For 8.4 and before, set tcl_precision to 17. That guarantees that no significant bits are lost, though the representation used may be considerably longer than minimal.
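For instance, under 8.4 (a minimal sketch; the rendered digits assume IEEE doubles):
set tcl_precision 17
set dbl  [expr { double(13.0/7.0) }]
set dbl2 [expr { double(13.0/7.0) }]
foreach a $dbl {}                ;# still shimmers $dbl to a string rep
puts [expr { $dbl == $dbl2 }]    ;# 1 — 17 digits round-trip losslessly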
I am using a structural analysis software package that uses Tcl as a programming language.
Does anyone know how to define Dirac functions in Tcl? From the examples I got hold of, 4 arguments are required. What do they correspond to?
This is how the function is defined in my examples:
#
diract(tint,0*dt,dt,dt)
#
Thank you in advance
PS: I am struggling to find some good documentation. Any recommendations?
Given that we have a finite step size (because we're using IEEE double precision floating point), the Dirac delta function is just this:
# The impulse height 2**1022 ≈ 4.49e307 is near the top of the double range
proc tcl::mathfunc::delta {x} {
    expr {$x == 0.0 ? 4.49423283715579e+307 : 0.0}
}
That gives a delta function with a very large impulse at the origin (where the width of the impulse is determined by the size of the smallest non-denormalized number; that number is one of the largest that can be represented by floating point without using infinity).
That's not all that useful, as it's using floating-point equality in its definition (and that rightfully has some major caveats attached to it). More useful is the fact that its integral is 0 when x is less than 0 and 1 when x is greater than 0.
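To exploit that, you could define a unit step function instead (a sketch; the name step is made up for illustration):
proc tcl::mathfunc::step {x} {
    expr {$x < 0.0 ? 0.0 : 1.0}
}
puts [expr { step(-2.5) }]   ;# 0.0
puts [expr { step(3.0) }]    ;# 1.0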
I'm not sure what the arguments you're looking to provide mean, especially given that 0*dt is one of them.
I'm using arbitrary precision integers in Tcl 8.6.
For example:
set x [expr {10**1000}]
How can I save this number to binary? The binary format command doesn't seem to work for this.
I also need to be able to read the number back in later.
I know I can use a loop doing x & 0xFFFF and x >> 16 to dump each word one at a time, but I thought maybe there was an efficient way to dump the memory directly.
Any ideas?
How can I save this number to binary?
What about using format and scan, respectively?
scan [format %llb $x] %llb
As you are dealing with strings of characters, rather than strings of bytes, they are the first choice.
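A full round trip then looks like this (a sketch; the %llb conversion needs Tcl 8.6, where the ll modifier selects arbitrary-precision integers):
set x    [expr {10**1000}]
set bits [format %llb $x]    ;# a string of "0"/"1" characters
set y    [scan $bits %llb]   ;# parse the binary string back into a bignum
puts [expr {$x == $y}]       ;# 1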
It depends on the serialization format you wish to use. And the code needing to read it again. And how fast it needs to be.
One method you could use is to write the number in ASN.1 BER encoding, which supports a binary integer of arbitrary length.
This can be done with the tcllib packages math::bignum and asn:
package require asn
package require math::bignum
set x [expr {10**100}]
set bindata [asn::asnBigInteger [::math::bignum::fromstr $x]]
As you can see from the procedure name fromstr, this isn't the fastest possible code.
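Reading the value back is the mirror image (a sketch, assuming the asn::asnGetBigInteger decoder from the same tcllib asn package):
set rest $bindata
asn::asnGetBigInteger rest bignum     ;# consumes the BER bytes from $rest
set y [math::bignum::tostr $bignum]   ;# back to a decimal string
puts [expr {$x == $y}]                ;# 1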
If you wish to use some other serialization for integers you can invent different methods, like the looping and shifting as you already discovered.
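For instance, the looping approach could look like this (a hypothetical helper, least-significant word first):
proc bignumToWords {x} {
    set words {}
    while {$x > 0} {
        lappend words [expr {$x & 0xFFFF}]   ;# emit the low 16 bits
        set x [expr {$x >> 16}]              ;# shift the rest down
    }
    return $words
}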
The naive method in Tcl would be just to dump the string representation, but that's obviously less compact.
How to increase the precision in Tcl?
I am getting b2 below as -0.000001 whereas the actual value is -7.95553e-007
set b2 [lindex $b1 0]
I tried "set tcl_precision 12" but it did not change anything
Tcl these days uses a floating point rendering system that means by default it never loses any precision at all when a double-precision floating point number is automatically converted to a string and back, while simultaneously using the minimum number of decimal digits in the string. It has had this code since Tcl 8.5 and uses it whenever the tcl_precision global variable is set to its default value (0 these days). In the future, this may well become a hard-core default, but I don't think it has done so yet.
Older versions of Tcl (all currently unsupported) instead used that tcl_precision global to control the number of decimal digits used; setting it to a non-zero value still has that effect for backward compatibility. The old default value was 15, which usually did the right thing; 17 ensures that no information is ever lost, even in tricky edge cases, at the cost of often producing effectively noise digits at the end. (That is a consequence of the differences between arithmetic in base-2 and base-10, and is properly common to all languages that use IEEE binary floating-point math.)
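To illustrate (a sketch; the exact digit strings assume IEEE doubles):
set tcl_precision 12   ;# fixed 12 significant digits, may lose bits
expr {1.0/3}           ;# 0.333333333333
set tcl_precision 17   ;# lossless, but with noise digits at the end
expr {1.0/3}           ;# 0.33333333333333331
set tcl_precision 0    ;# 8.5+ default: shortest exact rendering
expr {1.0/3}           ;# 0.3333333333333333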
If you want to use a definite number of decimal digits after the point because you are producing output for human consumption, you should use the format command.
format %.5f 1.23; # >>> 1.23000
Very recently I found out about the namespace concept and the use of double-colon (::) for program variables.
Before I start reshaping all my scripts, I wanted to know if there is a real difference between accessing a variable with the global keyword and with the double colon syntax.
e.g.
set var bla
proc kuku {} { puts $::var }
vs.
proc gaga {} {global var ; puts $var}
In both cases I'm getting 'bla' written to my screen.
What am I missing?
I understand that editing the variable will be a bit problematic (is it even possible?), but for read-only vars, is there a difference between the two methods?
They're talking about the same variable. With the ::var form, you're using the fully-qualified name, whereas with the global form you're making a local link to the global variable (which really is a pointer to the global variable). Reading from or writing to them should work exactly the same, whichever way you choose.
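For example, writing works with either form (a quick sketch; the proc names are made up):
set var bla
proc setQualified {} { set ::var foo }
proc setGlobal {} { global var; set var bar }
setQualified; puts $var   ;# foo
setGlobal;    puts $var   ;# bar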
There is a measurable difference between the two. With global, you've got the extra cost of setting up the link, but thereafter for the remainder of the procedure the cost per use (read or write) is quite a lot lower. With the other form, you're not paying any setup overhead, but the per-use cost is higher. For one use only, the cost of the two are pretty similar. If you're using the variable several times, global is cheaper. OTOH, sometimes it is clearer to use the fully qualified version anyway (particularly true with vwait and trace) despite the reduction in speed.
I find that I access the ::env and ::tcl_platform arrays using their fully-qualified form, but most other things will get accessed via global. That's just my preference though.
Here's an example interactive session:
% set x 1
1
% proc y1 {} {incr ::x;return ok}
% time { y1 } 10000
0.5398216 microseconds per iteration
% proc y2 {} {global x;incr x;return ok}
% time { y2 } 10000
0.4537753 microseconds per iteration
% proc z1 {} {return $::x}
% time { z1 } 10000
0.4864713 microseconds per iteration
% proc z2 {} {global x; return $x}
% time { z2 } 10000
0.4433554 microseconds per iteration
(I wouldn't expect you to get the same absolute figures as me. Do your own performance testing. I would expect similar relative figures…)
dB, or decibel, is a unit used to express a ratio on a logarithmic scale; specifically, the definition of dB I'm interested in is X_dB = 20*log10(x), where x is the "normal" value and X_dB is the value in dB. When I wrote code converting between mils and mm, I noticed that if I used the direct approach, i.e. multiplying by the ratio between the units, I got small errors on the opposite conversion, i.e. to_mil [to_mm val_in_mil] wasn't equal to val_in_mil, and the same with mm. The units library solved this problem, as the conversions it performs do not have that calculation error. But the library specifically doesn't offer (or I didn't find) an option to convert a number to dB.
Is there another library / command that can transform numbers to dB and dB to numbers without calculation errors?
I did an experiment using the direct math conversion, and what I got is:
>> set a 0.005
0.005
>> set b [expr {20*log10($a)}]
-46.0205999133
>> expr {pow(10,($b/20))}
0.00499999999999
It's all a matter of precision. We often tend to forget that floating point numbers are not real numbers (in the mathematical sense of ℝ).
How many decimal digits do you need?
If you, for example, would only need 5 decimal digits, rounding 0.00499999999999 will give you 0.00500 which is what you wanted.
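For example, with format:
format %.5f 0.00499999999999   ;# -> 0.00500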
Since rounding fp numbers is not an easy task and may generate even more trouble, you might instead change the way you determine whether two numbers are equal:
>> set a 0.005
0.005
>> set b [expr {20*log10($a)}]
-46.0205999133
>> set c [expr {pow(10,($b/20))}]
0.00499999999999
>> expr {abs($a - $c) < 1E-10}
1
>> expr {abs($a - $c) < 1E-20}
0
>> expr {$a - $c}
8.673617379884035e-19
The numbers in your examples can be considered "equal" up to an error of about 10⁻¹⁸. Note that this is just a rough estimate, not a full solution.
If you're really dealing with problems that are sensitive to numerical error propagation, you might look deeper into "numerical analysis". The article What Every Computer Scientist Should Know About Floating-Point Arithmetic or, even better, this site: http://floating-point-gui.de might be a start.
In case you need a larger precision you should drop your "native" requirement.
You may use the BigFloat offered by tcllib (http://tcllib.sourceforge.net/doc/bigfloat.html) or even use GMP (the GNU multiple precision arithmetic library) through ffidl (http://elf.org/ffidl). There's an interface already defined for it: gmp.tcl
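For example, a dB conversion via math::bigfloat might look like this (a sketch; fromstr's second argument adds guard digits, and log is the natural logarithm, so dB = 20*ln(x)/ln(10)):
package require math::bigfloat

set a   [::math::bigfloat::fromstr 0.005 30]   ;# 0.005 with 30 guard digits
set ten [::math::bigfloat::fromstr 10.0 30]
set db  [::math::bigfloat::div [::math::bigfloat::log $a] \
                               [::math::bigfloat::log $ten]]
set db  [::math::bigfloat::mul $db [::math::bigfloat::fromstr 20.0 30]]
puts [::math::bigfloat::tostr $db]             ;# ≈ -46.02059991327962...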
Because of the way floating-point numbers are stored, not every log10(...) result can be mapped back by pow(10, ...) to exactly the value it came from. So you lose precision, just as the integer divisions 89/7 and 88/7 both give 12.
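That is:
expr {89/7}   ;# 12
expr {88/7}   ;# 12 — two different inputs, one result; the map is lossy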
When you put a value into floating-point format, you should give up knowing its exact value unless you also keep the old, exact value. If you want exactly 1/200, store it as the integer 1 and the integer 200. If you want exactly the base-ten logarithm of 1/200, store it as 1, 200, and the information that a base-ten logarithm has been applied to it.
You can fill your entire memory with the first x decimal digits of the square root of 2, but it still won't be the square root of 2 you store.