Dirac function in TCL

I am using structural analysis software that uses Tcl as its programming language.
Does anyone know how to define Dirac functions in Tcl? From the examples I got hold of, 4 arguments are required. What do they correspond to?
This is how the function is defined in my examples:
diract(tint,0*dt,dt,dt)
Thank you in advance
PS: I am struggling to find some good documentation. Any recommendations?

Given that we have a finite step size (because we're using IEEE double precision floating point), the Dirac delta function is just this:
proc tcl::mathfunc::delta {x} {
    # A huge spike at exactly x == 0, and zero everywhere else.
    expr {$x == 0.0 ? 4.49423283715579e+307 : 0.0}
}
That gives a delta function with a very large impulse at the origin: the notional width of the impulse is the smallest non-denormalized floating point number, and the height used in the code is its reciprocal, one of the largest values that can be represented in floating point without using infinity (so width times height is 1, as a delta function requires).
That's not all that useful in itself, as it relies on floating point equality in its definition (which rightfully has some major caveats attached to it). More useful is the fact that its integral is 0 when x is less than 0 and 1 when x is greater than 0.
I'm not sure what the arguments you're looking to provide mean, especially given that 0*dt is one of them.
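If I had to guess from the call diract(tint,0*dt,dt,dt), the four arguments are the current time, the pulse onset time, the pulse width, and the solver time step. Here is a sketch under that assumption (the name, argument order, and semantics are my guesses, not your tool's documented behavior):
proc tcl::mathfunc::diract {t t0 width dt} {
    # Rectangular pulse of height 1/width on [t0, t0+width), so it integrates to 1.
    # dt is accepted only to match the four-argument signature; unused in this sketch.
    expr {($t >= $t0 && $t < $t0 + $width) ? 1.0 / $width : 0.0}
}
With width = dt, that produces a one-step impulse whose discrete integral is 1, which is a common discrete approximation of a Dirac impulse.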

Related

How to find a function that fits a given set of data points in Julia?

So, I have a vector that corresponds to a given feature (same dimensionality). Is there a package in Julia that would provide a mathematical function that fits these data points, in relation to the original feature? In other words, I have x and y (both vectors) and need to find a decent mapping between the two, even if it's a highly complex one. The output of this process should be a symbolic formula that connects x and y, e.g. (:x)^3 + log(:x) - 4.2454. It's fine if it's just a polynomial approximation.
I imagine this is a walk in the park if you employ Genetic Programming, but I'd rather opt for a simpler (and faster) approach, if it's available. Thanks
Turns out the Polynomials.jl package includes the function polyfit which does Lagrange interpolation. A usage example would go:
using Polynomials # install with Pkg.add("Polynomials")
x = [1,2,3] # demo x
y = [10,12,4] # demo y
polyfit(x,y)
The last line returns:
Poly(-2.0 + 17.0x - 5.0x^2)
which evaluates to the correct values.
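As a quick check, at x = 2 that gives -2.0 + 17.0*2 - 5.0*4 = -2 + 34 - 20 = 12.0, which is the corresponding input y.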
The polyfit function accepts a maximal degree for the output polynomial, but defaults to the length of the input vectors x and y minus 1. That is the degree of the polynomial from the Lagrange formula, and since two polynomials of that degree which agree on all the inputs must be identical (a basic theorem), the result is certain to be the Lagrange polynomial, and in fact the only polynomial of that degree with this property.
Thanks to the developers of Polynomials.jl for leaving me just to google my way to an answer.
Take a look at MARS regression: multivariate adaptive regression splines.

Expr for float values in TCL

Calculating float values
tclsh
% expr 0.2+0.2
0.4
% expr 0.2+0.1
0.30000000000000004
%
Why not 0.3?
Am I missing something?
Thanks in advance.
Neither 0.1 nor 0.2 has an exact representation in IEEE double precision binary floating point arithmetic (which Tcl uses internally for expressions involving fractional values, as there's good hardware support for it). This means that the values you are computing with are never exactly what you think they are; instead, they're both very slightly more (as it happens; in general they could also have been slightly less). When you add (0.2+ε1) + (0.1+ε2), it can happen that ε1+ε2 adds up to enough that the result rounds to the exactly representable value just above 0.3 (itself an imprecisely represented value). This is what you have observed. It's also inherent in the way floating point mathematics works in a vast array of languages; only integer arithmetic (or fractional arithmetic capable of being expressed as exact multiples of some power of 2, e.g., 0.5, 0.25, 0.125) is guaranteed to be exact.
The only interesting thing of note here is that Tcl 8.5 and 8.6 prefer to render floating point numbers with the minimal number of digits required to get the exact same value back when re-parsed. If you want a fixed number of digits (e.g., 8), use format when converting:
format %.8f [expr {0.2 + 0.1}]
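In an interactive shell:
% format %.8f [expr {0.2 + 0.1}]
0.30000000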
This behavior exists in almost all programming languages, e.g. Ruby, Python, etc.
The suggestion here is to avoid storing numbers in floating point; use integers whenever possible. The bottom line is: do not compare floating point numbers for exact equality.
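A minimal sketch of that advice, assuming your values have a known number of decimal places (here one, so they can be scaled to integer tenths):
set a 2   ;# 0.2, stored exactly as 2 tenths
set b 1   ;# 0.1, stored exactly as 1 tenth
puts [format %.1f [expr {($a + $b) / 10.0}]]   ;# prints 0.3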

Is there a "native" way to convert from numbers to dB in Tcl

dB, or decibel, is a unit used to express a ratio on a logarithmic scale; specifically, the definition of dB that I'm interested in is X(dB) = 20*log10(x), where x is the "normal" value and X(dB) is the value in dB. When I wrote code converting between mils and mm, I noticed that if I used the direct approach, i.e., multiplying by the ratio between the units, I got small errors on the round trip: to_mil [to_mm val_in_mil] wasn't equal to val_in_mil, and the same with mm. The units library solved this problem, as the conversions it performs do not have that calculation error. But it specifically doesn't offer (or I didn't find) an option to convert a number to dB.
Is there another library / command that can transform numbers to dB and dB to numbers without calculation errors?
I did an experiment using the direct math conversion, and what I got is:
>> set a 0.005
0.005
>> set b [expr {20*log10($a)}]
-46.0205999133
>> expr {pow(10,($b/20))}
0.00499999999999
It's all a matter of precision. We often tend to forget that floating point numbers are not real numbers (in the mathematical sense of ℝ).
How many decimal digits do you need?
If, for example, you only need 5 decimal digits, rounding 0.00499999999999 will give you 0.00500, which is what you wanted.
Since rounding fp numbers is not an easy task and may cause even more trouble, you might instead change the way you determine whether two numbers are equal:
>> set a 0.005
0.005
>> set b [expr {20*log10($a)}]
-46.0205999133
>> set c [expr {pow(10,($b/20))}]
0.00499999999999
>> expr {abs($a - $c) < 1E-10}
1
>> expr {abs($a - $c) < 1E-20}
0
>> expr {$a - $c}
8.673617379884035e-19
The numbers in your example can be considered "equal" up to an error of about 1e-18. Note that this is just a rough estimate, not a full solution.
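That comparison idiom is easily wrapped in a small helper (the name and default tolerance here are mine; pick a tolerance that suits your application):
proc approxEqual {a b {eps 1e-10}} {
    # True when a and b differ by less than the tolerance eps.
    expr {abs($a - $b) < $eps}
}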
If you're really dealing with problems that are sensitive to numerical error propagation, you might look deeper into numerical analysis. The article What Every Computer Scientist Should Know About Floating-Point Arithmetic or, even better, this site: http://floating-point-gui.de might be a start.
In case you need more precision, you should drop your "native" requirement.
You may use the BigFloat type offered by tcllib (http://tcllib.sourceforge.net/doc/bigfloat.html) or even use GMP (the GNU multiple precision arithmetic library) through Ffidl (http://elf.org/ffidl). There's an interface already defined for it: gmp.tcl
With the way floating point numbers are stored, not every log10(...) corresponds to exactly one pow(10, ...). So you lose precision, just as the integer divisions 89/7 and 88/7 both give 12.
When you put a value into floating point format, you should give up on knowing its exact value unless you also keep the old, exact value. If you want exactly 1/200, store it as the integer 1 and the integer 200. If you want exactly the base-ten logarithm of 1/200, store it as 1, 200 and the information that a base-ten logarithm has been applied to it.
You can fill your entire memory with the first x decimal digits of the square root of 2, but it still won't be the square root of 2 you store.
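A minimal sketch of that bookkeeping (the representation is mine; the point is to keep the exact integer pair and convert to floating point as late as possible):
set num 1   ;# exactly 1/200, kept as two integers
set den 200
set db [expr {20.0 * log10(double($num) / $den)}]   ;# float conversion deferred to here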

Term for percent, percentage, fraction, scale factor?

I have functions that conceptually all return the same thing, but the result can take different forms:
function GetThingy()
There are four different functions; each returns a different form:
0.071 (a float value representing an increase of 7.1%)
7.1 (a float value representing an increase of 7.1%)
1.071 (a float value representing an increase of 7.1%)
"7.1%" (a string value representing a percentage of 7.1%)
What terms can I use to help document these functions' return values?
I've come up with my own terminology:
fraction: A fraction of one; where the value is understood to be between 0..1 (e.g. 0.07 represents 7%)
percent: A per-one-hundred value; where the value is understood to be between 0..100 (e.g. 7 represents 7%). Note: This contrasts with a fraction, which is per-one rather than per-hundred
factor: A scale factor, that can be used to directly multiply; understood to be equivalent to 1+fraction (e.g. 1.07 implies an increase of 7%)
percentage: A string that contains the actual percent character (i.e. %), suitable for display to the user, or cases that prefer the localized text (e.g. "7%" implies 7%)
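The four forms are related by simple arithmetic; a quick sketch in Tcl (the proc names are mine, purely illustrative):
proc fractionToPercent {f} { expr {$f * 100.0} }                   ;# 0.071 -> 7.1
proc fractionToFactor {f} { expr {1.0 + $f} }                      ;# 0.071 -> 1.071
proc fractionToPercentage {f} { format %g%% [expr {$f * 100.0}] }  ;# 0.071 -> "7.1%"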
So applying my own naming scheme to the functions:
GetThingyFraction() = 0.071
GetThingyPercent() = 7.1
GetThingyFactor() = 1.071
GetThingyPercentage()= "7.1%"
What say you?
Not really sure there is an "answer" to this, but naming the functions as you have demonstrated makes it very easy for the consumer to understand what they are getting back. I like the terms you have chosen as well.
Are you planning on implementing all four (or n) flavors of each function, or is this strictly a naming question for when different operations process the result differently?
I am not so sure about the utility of the "percentage" version. Typically making strings for UI of messages should be handled in the presentation, not in the computation. The presentation would determine how many decimal places, "%" vs. "pct" vs "percent", etc.
I'd say you've just about got it, but I'd add the word "Increase" in some places, and put your examples in the documentation/comments:
GetThingyFractionIncrease() [e.g. 0.071 represents an increase of 7.1%]
GetThingyPercentIncrease() [e.g. 7.1 represents an increase of 7.1%]
GetThingyFactor() [e.g. 1.071 represents an increase of 7.1%]
GetThingyPercentageString() [e.g. "7.1%" represents an increase of 7.1%]
Even though your tag is language-agnostic, I'm assuming that you are writing in a modern Object-Oriented Programming language.
If you had a Thingy class with a thingy object that had a private fraction, then you could allow public access through methods like these:
double thingy.asFractionIncrease
double thingy.asPercentIncrease
double thingy.asFactor
String thingy.asPercentIncreaseString
P.S. I'm going to upvote your EL&U posting. As of this moment, that will get you back to 0, and you'll be net positive on reputation.

Invert 4x4 matrix - Numerical most stable solution needed

I want to invert a 4x4 matrix. My numbers are stored in fixed-point format (1.15.16 to be exact).
With floating-point arithmetic I usually just build the adjoint matrix and divide by the determinant (i.e. brute-force the solution). That worked for me so far, but when dealing with fixed-point numbers I get an unacceptable precision loss due to all of the multiplications used.
Note: In fixed-point arithmetic I always throw away some of the least significant bits of intermediate results.
So - what's the most numerically stable way to invert a matrix? I don't mind much about performance, but simply going to floating point would be too slow on my target architecture.
Meta-answer: Is it really a general 4x4 matrix? If your matrix has a special form, then there are direct formulas for inverting that would be fast and keep your operation count down.
For example, if it's a standard homogeneous coordinate transform from graphics, like:
[ux vx wx tx]
[uy vy wy ty]
[uz vz wz tz]
[ 0 0 0 1]
(assuming a composition of rotation and translation matrices, so that the columns u, v, w are orthonormal)
then there's an easily-derivable direct formula, which is
[ux uy uz -dot(u,t)]
[vx vy vz -dot(v,t)]
[wx wy wz -dot(w,t)]
[ 0 0 0 1 ]
(ASCII matrices stolen from the linked page.)
You probably can't beat that for loss of precision in fixed point.
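For concreteness, here is that direct formula as code: a sketch in Tcl rather than your fixed-point environment, assuming the matrix is a flat row-major list of 16 values with last row [0 0 0 1] and orthonormal u, v, w:
proc rigidInverse {m} {
    # Unpack the rows of [u v w t ; 0 0 0 1].
    lassign $m ux vx wx tx  uy vy wy ty  uz vz wz tz  _ _ _ _
    # Transpose the 3x3 part; the new translation entries are -dot(row, t).
    list $ux $uy $uz [expr {-($ux*$tx + $uy*$ty + $uz*$tz)}] \
         $vx $vy $vz [expr {-($vx*$tx + $vy*$ty + $vz*$tz)}] \
         $wx $wy $wz [expr {-($wx*$tx + $wy*$ty + $wz*$tz)}] \
         0.0 0.0 0.0 1.0
}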
If your matrix comes from some domain where you know it has more structure, then there's likely to be an easy answer.
I think the answer to this depends on the exact form of the matrix. A standard decomposition method (LU, QR, Cholesky, etc.) with pivoting (which is essential) is fairly good in fixed point, especially for a small 4x4 matrix. See the book 'Numerical Recipes' by Press et al. for a description of these methods.
This paper gives some useful algorithms, but is behind a paywall unfortunately. They recommend a (pivoted) Cholesky decomposition with some additional features too complicated to list here.
I'd like to second the question Jason S raised: are you certain that you need to invert your matrix? This is almost never necessary. Not only that, it is often a bad idea. If you need to solve Ax = b, it is more numerically stable to solve the system directly than to multiply b by A inverse.
Even if you have to solve Ax = b over and over for many values of b, it's still not a good idea to invert A. You can factor A (say LU factorization or Cholesky factorization) and save the factors so you're not redoing that work every time, but you'd still solve the system each time using the factorization.
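As an illustration of solving rather than inverting (this assumes tcllib's math::linearalgebra package, whose solveGauss performs a Gaussian-elimination solve):
package require math::linearalgebra
set A {{4.0 1.0} {1.0 3.0}}   ;# small demo system
set b {1.0 2.0}
set x [::math::linearalgebra::solveGauss $A $b]   ;# solves Ax = b; no inverse is ever formed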
You might consider doubling to 1.31 before doing your normal algorithm. It'll double the number of multiplications, but you're doing a matrix invert and anything you do is going to be pretty tied to the multiplier in your processor.
For anyone interested in finding the equations for a 4x4 inverse, you can use a symbolic math package to derive them for you. Even the TI-89 will do it, although it'll take several minutes.
If you give us an idea of what the matrix invert does for you, and how it fits in with the rest of your processing we might be able to suggest alternatives.
-Adam
Let me ask a different question: do you definitely need to invert the matrix (call it M), or do you need to use the matrix inverse to solve other equations? (e.g. Mx = b for known M, b) Often there are other ways to do this w/o explicitly needing to calculate the inverse. Or if the matrix M is a function of time & it changes slowly then you could calculate the full inverse once, & there are iterative ways to update it.
If the matrix represents a rigid transformation (often the case with 4x4 matrices, so long as you don't introduce a scaling component), the inverse is simply the transpose of the upper 3x3 rotation part, with the translation column replaced by the negated rotation-transpose times the original translation (exactly the direct formula shown above). Obviously, if you require a generalized solution, then looking into Gaussian elimination is probably the easiest.