Can coordinates of constructible points be represented exactly? - language-agnostic

I'd like to write a program that lets users draw points, lines, and circles as though with a straightedge and compass. Then I want to be able to answer the question, "are these three points collinear?" To answer correctly, I need to avoid rounding error when calculating the points.
Is this possible? How can I represent the points in memory?
(I looked into some unusual numeric libraries, but I didn't find anything that claimed to offer both exact arithmetic and exact comparisons that are guaranteed to terminate.)

Yes.
I highly recommend Introduction to constructions, which is a good basic guide.
Basically you need to be able to compute with constructible numbers - numbers that are either rational, or of the form a + b sqrt(c) where a, b, c were previously constructed (see page 6 of that PDF). This could be done with an algebraic data type (e.g. data C = Rational Integer Integer | Root C C C in Haskell, where Root a b c = a + b sqrt(c)). However, I don't know how to perform tests with that representation.
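For what it's worth, here is a minimal sketch (my own illustration in Python, not part of the original answer) of exact arithmetic in a single quadratic extension Q(sqrt(c)): when c is not a perfect square, sqrt(c) is irrational, so the representation a + b*sqrt(c) with rational a, b is unique and equality reduces to comparing the two rational parts.

# Sketch: exact arithmetic in Q(sqrt(c)) for one fixed, non-square integer c.
from fractions import Fraction

class QSqrt:
    # value = a + b*sqrt(c), with a and b exact rationals
    def __init__(self, a, b, c):
        self.a, self.b, self.c = Fraction(a), Fraction(b), c

    def __add__(self, other):
        assert self.c == other.c
        return QSqrt(self.a + other.a, self.b + other.b, self.c)

    def __mul__(self, other):
        assert self.c == other.c
        # (a + b*r)(a' + b'*r) = (a*a' + b*b'*c) + (a*b' + b*a')*r, where r = sqrt(c)
        return QSqrt(self.a * other.a + self.b * other.b * self.c,
                     self.a * other.b + self.b * other.a, self.c)

    def __eq__(self, other):
        # sqrt(c) is irrational for non-square c, so this representation is unique
        return self.c == other.c and self.a == other.a and self.b == other.b

half_root2 = QSqrt(0, Fraction(1, 2), 2)                    # sqrt(2)/2
product    = QSqrt(0, 1, 2) * QSqrt(Fraction(1, 2), 0, 2)   # sqrt(2) * 1/2
print(half_root2 == product)                                # True, exactly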
Two possible approaches are:
Constructible numbers are a subset of algebraic numbers, so you can use algebraic numbers.
All algebraic numbers can be represented by the polynomials of which they are roots. The operations are computable, so if you represent a number a with polynomial p and b with polynomial q (p(a) = q(b) = 0), then it is possible to find a polynomial r such that r(a+b) = 0. This is done in some CASes like Mathematica, example. See also: Computational algebraic number theory - chapter 4
Use Tarski's decision procedure and represent numbers by the formulas they satisfy. It is slow (doubly exponential or so), but it works :) Example: to represent sqrt(2), use the formula x^2 - 2 && x > 0. You can write equations for lines there, check whether points are collinear, etc. See A suite of logic programs, including Tarski's test
If you turn to computable numbers, then equality, collinearity, etc. become undecidable.

I think the only way this would be possible is if you used a symbolic representation, as opposed to trying to represent coordinate values directly -- so you would have to avoid trying to coerce values like sqrt(2) into some numerical format. You will be dealing with irrational numbers that are not finitely representable in binary, decimal, or any other positional notation.

To expand on Jim Lewis's answer slightly, if you want to operate on points that are constructible from the integers with exact arithmetic, you will need to be able to operate on representations of the form:
a + b sqrt(c)
where a, b, and c are either rational numbers, or representations in the form given above. Wikipedia has a pretty decent article on the subject of what points are constructible.
Answering the question of exact equality (as necessary to establish collinearity) with such representations is a rather tricky problem.

If you try to compare co-ordinates for your points, then you have a problem. Leaving aside collinearity for a moment, how about just working out whether two points are the same or not?
Supposing that one has given co-ordinates, and the other is a compass-straightedge construction starting from certain other co-ordinates, you want to determine with certainty whether they're the same point or not. Either way is a theorem of Euclidean geometry, it's not something you can just measure. You can prove they aren't the same by spotting some difference in their co-ordinates (for example by computing decimal places of each until you encounter a difference). But in general to prove they are the same cannot be done by approximate methods. Compute as many decimal places as you like of some expansions of 1/sqrt(2) and sqrt(2)/2, and you can prove they're very close together but you won't ever prove they're equal. That takes algebra (or geometry).
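To make the asymmetry concrete, here is a small illustration (my own, in Python): computing more and more digits can only ever show that the two expressions agree so far, never that they are equal.

# Approximation can refute equality, but it can never prove it.
from decimal import Decimal, getcontext

getcontext().prec = 50
a = 1 / Decimal(2).sqrt()     # 1/sqrt(2) to 50 significant digits
b = Decimal(2).sqrt() / 2     # sqrt(2)/2 to 50 significant digits
print(abs(a - b) < Decimal(10) ** -45)   # True: agreement to 45+ places...
# ...but that is still consistent with the exact values differing further out,
# so no number of digits settles the question; only algebra does.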
Similarly, to show that three points are collinear you will need theorem-proving software. Represent the points A, B, C by their constructions, and attempt to prove the theorem "A, B and C are collinear". This is very hard - your program will prove some theorems but not others. Much easier is to ask the user for a proof that they are collinear, and then verify (or refute) that proof, but that's probably not what you want.

In general, constructible points may have an arbitrarily complex symbolic form, so you must use a symbolic representation to work with them exactly. As Stephen Canon noted above, you often need numbers of the form a + b*sqrt(c), where a and b are rational and c is an integer. For a fixed c, all numbers of this form are closed under the arithmetic operations. I have written some C++ classes (see rational_radical1.h) to work with these numbers if that is all you need.
It is also possible to construct numbers which are sums of any number of rational multiples of radicals. When dealing with more than a single radicand, numbers with a fixed number of terms are no longer closed under multiplication and division, so you will need to store them as variable-length arrays of rational coefficients. The time complexity of operations will then be quadratic in the number of terms.
To go even further, you can construct the square root of any given number, so you could potentially have nested square roots. Here, the representations must be tree-like structures to deal with root hierarchy. While difficult to implement, there is nothing in principle preventing you from working with these representations. I'm not sure just what additional numbers can be constructed, but beyond a certain point, your symbolic representation will be expressive enough to handle very large classes of numbers.
Addendum
Found this Google Books link.

If the grid coordinates are integer valued then the answer is fairly straightforward: the points are either exactly collinear or they are not.
Typically, however, one works with real numbers (well, floating point) and then draws the rounded values on the screen, which does exist in integer space. In this case you have no choice but to pick a tolerance and use it to determine collinearity. Keep it small and the users will never know the difference.

You seem to be asking, in effect, "Can the normal mathematics (integer or floating point) used by computers be made to represent real numbers perfectly, with no rounding errors?" And, of course, the answer to that is "No." If you want theoretical correctness, then you will be stuck with the much harder problem of symbolic manipulation and coding up the equivalent of the inferences that are done in geometry. (In short, I'm agreeing with Steve Jessop, above.)

Some thoughts in the hope that they might help.
The sort of constructions you're talking about will require multiplication and division, which means that to preserve exactness you'll have to use rational numbers, which are generally easy to implement on top of a suitable sort of big integer (i.e., of unbounded magnitude). (Common Lisp has these built in, and other languages surely do too.)
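(As an aside, once the coordinates are exact rationals, the collinearity test itself involves no rounding at all: it is a zero test on a determinant. A quick sketch in Python, whose Fraction type sits on top of unbounded integers:)

# Collinearity of three points with exact rational coordinates: cross product == 0.
from fractions import Fraction as F

def collinear(p, q, r):
    # p, q, r are (x, y) pairs of Fractions; zero cross product <=> collinear
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

print(collinear((F(0), F(0)), (F(1, 3), F(1, 3)), (F(7, 2), F(7, 2))))  # True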
Now, you need to represent square roots of arbitrary numbers, and these have to be mixed in.
Therefore, a number is one of: a rational number, a rational number multiplied by a square root of a rational number (or, alternately, just the square root of a rational), or a sum of numbers. In order to prove anything, you're going to have to get these numbers into some sort of canonical form, which for all I can figure offhand may be annoying and computationally expensive.
This of course means that the users will be restricted to rational points and cannot use arbitrary rotations, but that's probably not important.

I would recommend not trying to make it perfectly exact.
The first reason is the one you are asking about here: the rounding error and everything else that comes with floating-point calculations.
The second is that you have to round your input anyway, as the mouse and screen work with integers. So, initially all user input would be integers, and your output would be integers.
Besides, from a usability point of view, it's easier to click in the neighborhood of another point (on a line, for example) and have the interface treat that as clicking on the point itself.


Best practice for storing weights in a SQL database?

An application I'm working on needs to store weights of the format X pounds, y.y ounces. The database is MySQL, but I imagine this is DB agnostic.
I can think of three ways to do this:
Convert the weight to decimal pounds and store in a single field. (5 lbs 6.2 oz = 5.33671875 lbs)
Convert the weight to decimal ounces and store in a single field. (5 lbs 6.2 oz = 86.2 oz)
Store the pounds portion as an integer and the ounces portion as a decimal, in two fields.
I'm thinking that #1 is not such a good idea, since decimal pounds will produce numbers of arbitrary precision, which would need to be stored as a float, which could lead to inaccuracies which are inherent in floating point numbers.
Is there a compelling reason to choose #2 over #3 or vice versa?
TL;DR
Choose either option #1 or option #2—there's no difference between them. Don't use option #3, because it's awkward to work with.
You claim that there are inherent inaccuracies in floating point numbers. I think that this deserves to be explored a little first.
When deciding upon a numeral system for representing a number (whether on a piece of paper, in a computer circuit, or elsewhere), there are two separate issues to consider:
its basis; and
its format.
Pick a base, any base…
Limited by finite space, one cannot represent an arbitrary member of an infinite set. For example: no matter how much paper you buy or how small your handwriting, it'd always be possible to find an integer that won't fit in the given space (you could just keep appending extra digits until the paper runs out). So, with integers, we usually restrict our finite space to representing only those that fall within some particular interval—e.g. if we have space for the positive/negative sign and three digits, we might restrict ourselves to the interval [-999,+999].
Every non-empty interval contains an infinite set of real numbers. In other words, no matter what interval one takes over the real numbers—be it [-999,+999], [0,1], [0.000001,0.000002] or anything else—there is still an infinite set of reals within that interval (one need only keep appending (non-zero) fractional digits)! Therefore arbitrary real numbers must always be "rounded" to something that can be represented in finite space.
The set of real numbers that can be represented in finite space depends upon the numeral system that is used. In our (familiar) positional base-10 system, finite space will suffice for one-half (0.5) but not for one-third (0.33333…); by contrast, in the (less familiar) positional base-9 system, it is the other way around (those same numbers are respectively 0.44444… and 0.3). The consequence of all this is that some numbers that can be represented using only a small amount of space in positional base-10 (and therefore appear to be very "round" to us humans), e.g. one-tenth, would actually require infinite binary circuits to be stored precisely (and therefore don't appear to be very "round" to our digital friends)! Notably, since 2 is a factor of 10, the same is not true in reverse: any number that can be represented with finite binary can also be represented with finite decimal.
We can't do any better for continuous quantities. Ultimately such quantities must use a finite representation in some numeral system: it's arbitrary whether that system happens to be easy on computer circuits, on human fingers, on something else or on nothing at all—whichever system is used, the value must be rounded and therefore it always results in "representation error".
In other words, even if one has a perfectly accurate measuring instrument (which is physically impossible), then any measurement it reports will already have been rounded to a number that happens to fit on its display (in whatever base it uses—typically decimal, for obvious reasons). So, "86.2 oz" is never actually "86.2 oz" but rather a representation of "something between 86.1500000... oz and 86.2499999... oz". (Actually, because in reality the instrument is imperfect, all we can ever really say is that we have some degree of confidence that the actual value falls within that interval—but that is definitely departing some way from the point here).
But we can do better for discrete quantities. Such values are not "arbitrary real numbers" and therefore none of the above applies to them: they can be represented exactly in the numeral system in which they were defined—and indeed, should be (as converting to another numeral system and truncating to a finite length would result in rounding to an inexact number). Computers can (inefficiently) handle such situations by representing the number as a string: e.g. consider ASCII or BCD encoding.
Apply a format…
Since it's a property of the numeral system's (somewhat arbitrary) basis, whether or not a value appears to be "round" has no bearing on its precision. That's a really important observation, which runs counter to many people's intuition (and it's the reason I spent so much time explaining numerical basis above).
Precision is instead determined by how many significant figures a representation has. We need a storage format that is capable of recording our values to at least as many significant figures as we consider them to be correct. Taking by way of example values that we consider to be correct when stated as 86.2 and 0.0000862, the two most common options are:
Fixed point, where the number of significant figures depends on magnitude: e.g. in fixed 5-decimal-point representation, our values would be stored as 86.20000 and 0.00009 (and therefore have 7 and 1 significant figures of precision respectively). In this example, precision has been lost in the latter value (and indeed, it wouldn't take much more for us to have been totally unable to represent anything of significance); and the former value stored false precision, which is a waste of our finite space (and indeed, it wouldn't take much more for the value to become so large that it overflows the storage capacity).
A common example of when this format might be appropriate is for an accounting system: monetary sums must usually be tracked to the penny irrespective of their magnitude (therefore less precision is required for small values, and more precision is required for large values). As it happens, currency is usually also considered to be discrete (pennies are indivisible), so this is also a good example of a situation where a particular basis (decimal for most modern currencies) is desirable to avoid the representation errors discussed above.
One usually implements fixed point storage by treating one's values as quotients over a common denominator and storing the numerator as an integer. In our example, the common denominator could be 10^5, so instead of 86.20000 and 0.00009 one would store the integers 8620000 and 9 and remember that they must be divided by 100000 (see the sketch after the floating-point option below).
Floating point, where the number of significant figures is constant irrespective of magnitude: e.g. in 5-significant-figure decimal representation, our values would be stored as 86.200 and 0.000086200 (and, by definition, have 5 significant figures of precision both times). In this example, both values have been stored without any loss of precision; and they both also have the same amount of false precision, which is less wasteful (and we can therefore use our finite space to represent a far greater range of values—both large and small).
A common example of when this format might be appropriate is for recording any real world measurements: the precision of measuring instruments (which all suffer from both systematic and random errors) is fairly constant irrespective of scale so, given sufficient significant figures (typically around 3 or 4 digits), absolutely no precision is lost even if a change of base resulted in rounding to a different number.
One usually implements floating point storage by treating one's values as integer significands with integer exponents. In our example, the significand could be 86200 for both values, whereupon the (base-10) exponents would be -3 and -9 respectively.
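A sketch of both storage schemes, using Python integers and the decimal module purely for display (my own illustration of the idea above, not MySQL's internals):

# Fixed point: an integer numerator over the common denominator 10**5.
from decimal import Decimal

a = 8620000                           # represents 86.20000
b = 9                                 # represents 0.00009
print(Decimal(a + b) / 10**5)         # 86.20009 -- addition is plain integer addition

# Floating point: an integer significand plus an integer (base-10) exponent.
print(Decimal(86200).scaleb(-3))      # 86.200
print(Decimal(86200).scaleb(-9))      # 0.000086200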
But how precise are the floating point storage formats used by our computers?
An IEEE754 single precision (binary32) floating point number has 24 bits, or log10(2^24) (over 7) digits, of significance—i.e. it has a tolerance of less than ±0.000006%. In other words, it is more precise than saying "86.20000".
An IEEE754 double precision (binary64) floating point number has 53 bits, or log10(2^53) (almost 16) digits, of significance—i.e. it has a tolerance of just over ±0.00000000000001%. In other words, it is more precise than saying "86.2000000000000".
The most important thing to realise is that these formats are, respectively, over ten thousand and over one trillion times more precise than saying "86.2"—even though exact conversions of the binary back into decimal happen to include erroneous false precision (which we must ignore: more on this shortly)!
Notice also that both fixed and floating point formats will result in loss of precision when a value is known more precisely than the format supports. Such rounding errors can propagate in arithmetic operations to yield apparently erroneous results (which no doubt explains your reference to the "inherent inaccuracies" of floating point numbers): for example, 1⁄3 × 3000 in 5-place fixed point would yield 999.99000 rather than 1000.00000; and 1⁄7 − 7⁄50 in 5-significant figure floating point would yield 0.0028600 rather than 0.0028571.
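Those two propagation examples are easy to reproduce (a small sketch using Python's decimal module, which supports both quantizing to fixed places and a configurable number of significant figures):

from decimal import Decimal, getcontext

# 5-place fixed point: 1/3 is rounded to 0.33333 before the multiplication.
third = (Decimal(1) / 3).quantize(Decimal("0.00001"))
print(third * 3000)                      # 999.99000, not 1000.00000

# 5-significant-figure decimal floating point: 1/7 - 7/50.
getcontext().prec = 5
print(Decimal(1) / 7 - Decimal(7) / 50)  # 0.00286, exact value is 1/350 = 0.0028571...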
The field of numerical analysis is dedicated to understanding these effects, but it is important to realise that any usable system (even performing calculations in your head) is vulnerable to such problems because no method of calculation that is guaranteed to terminate can ever offer infinite precision: consider, for example, how to calculate the area of a circle—there will necessarily be loss of precision in the value used for π, which will propagate into the result.
Conclusion
Real world measurements should use binary floating point: it's fast, compact, extremely precise and no worse than anything else (including the decimal version from which you started). Since MySQL's floating-point datatypes are IEEE754, this is exactly what they offer.
Currency applications should use denary fixed point: whilst it's slow and wastes memory, it ensures both that values are not rounded to inexact quantities and that pennies are not lost on large monetary sums. Since MySQL's fixed-point datatypes are BCD-encoded strings, this is exactly what they offer.
Finally, bear in mind that programming languages usually represent fractional values using binary floating-point types: so if your database stores values in another format, you need to be careful how they are brought into your application or else they may get converted (with all the ensuing issues that entails) at the interface.
Which option is best in this case?
Hopefully I've convinced you that your values can safely (and should) be stored in floating point types without worrying too much about any "inaccuracies"? Remember, they're more precise than your flimsy 3-significant-digit decimal representation ever was: you just have to ignore false precision (but one must always do that anyway, even if using a fixed-point decimal format).
As for your question: choose either option 1 or 2 over option 3—it makes comparisons easier (for example, to find the maximal mass, one could just use MAX(mass), whereas to do it efficiently across two columns would require some nesting).
Between those two, it doesn't matter which one you choose—floating point numbers are stored with a constant number of significant bits irrespective of their scale.
Furthermore, whilst in the general case it could happen that some values are rounded to binary numbers that are closer to their original decimal representation using option 1 whilst simultaneously others are rounded to binary numbers that are closer to their original decimal representation using option 2, as we shall shortly see such representation errors only manifest within the false precision that should always be ignored.
However, in this case, because it happens that there are 16 ounces to 1 pound (and 16 is a power of 2), the relative differences between original decimal values and stored binary numbers using the two approaches are identical:
5.3875 (not 5.33671875 as stated in your question) would be stored in a binary32 float as binary 101.011000110011001100110 (which is exactly 5.38749980926513671875 in decimal): this is 0.0000036% from the original value (but, as discussed above, the "original value" was already a pretty lousy representation of the physical quantity it represents).
Knowing that a binary32 float stores only 7 decimal digits of precision, our compiler knows for certain that everything from the 8th digit onwards is definitely false precision and therefore must be ignored in every case—thus, provided that our input value didn't require more precision than that (and if it did, binary32 was obviously the wrong choice of format), this guarantees a return to a decimal value that looks just as round as that from which we started: 5.387500. However, we should really apply domain knowledge at this point (as we should with any storage format) to discard any further false precision that might exist, such as those two trailing zeroes.
86.2 would be stored in a binary32 float as binary 1010110.00110011001100110 (which is exactly 86.1999969482421875 in decimal): this is also 0.0000036% from the original value. As before, we then ignore false precision to return to our original input.
Notice how the binary representations of the numbers are identical, except for the placement of the radix point (which is four bits apart):
101.0110 00110011001100110
101 0110.00110011001100110
This is because 5.3875 × 2^4 = 86.2.
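You can observe these binary32 values directly by round-tripping through a 4-byte float (a small sketch in Python with the struct module):

import struct

for x in (5.3875, 86.2):
    f32 = struct.unpack("f", struct.pack("f", x))[0]   # value after storage as binary32
    print(x, "->", f32)
# 5.3875 -> 5.387499809265137  (i.e. 5.38749980926513671875 exactly)
# 86.2   -> 86.19999694824219  (i.e. 86.1999969482421875 exactly)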
As an aside: being European (albeit British), I also have a strong aversion to imperial units of measurement—handling values of different scales is just so messy. I'd almost certainly store masses in SI units (e.g. kilograms or grams) and then perform conversions to imperial units as required within the presentation layer of my application. Plus rigidly adhering to SI units might one day save you from losing $125m.
I’d be tempted to store it in a metric unit, as they tend to be simple decimals and not complex values like pounds and ounces. That way, you can just store the one value (i.e. 103.25 kg) rather than the pounds–ounces equivalent, and it’s easier to perform conversions.
This is something I’ve dealt with in the past. I do a lot of work on pro wrestling and mixed martial arts (MMA) websites where fighters’ heights and weights need to be recorded. They tend to be displayed as feet and inches and pounds and ounces, but I still store the values in their centimetres and kilogram equivalents, and then do the conversion when displaying on the site.
First, I had not known about how floating point numbers could be inaccurate - thankfully a search later helped me understand: Floating Point Inaccuracy Examples
I would fully agree with @eggyal - keep the data in a single format in a single column. This allows you to expose it to the application and let the application deal with the presentation of it - be it in lbs/oz, rounded up lbs, whatever.
The database should keep the raw data while the presentation layer dictates the layout.
You can use the DECIMAL data type for the weight column:
decimal('weight', 8, 2); // precision = 8, scale = 2
Storage size by precision:
Precision 1-9: 5 bytes
Precision 10-19: 9 bytes
Precision 20-28: 13 bytes
Precision 29-38: 17 bytes

What exactly is a datatype?

I understand what a datatype is (intuitively). But I need the formal definition. I don't understand whether it is a set, or whether it is the names 'int', 'float', etc. The formal definition found on Wikipedia is confusing.
In computer programming, a data type is a classification identifying one of various types of data, such as floating-point, integer, or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored.
Can anyone help me with that?
Yep. What that's saying is that a data type has three pieces:
The various possible values. So, for example, an eight-bit signed integer might have -128..127. Think of that as a set of values V.
The operations: so an 8-bit signed integer might have +, -, * (multiply), and / (divide). The full definition would define those as functions from V into V, or possibly as a function from V into float for division.
The way it's stored -- I sort of gave it away when I said "eight-bit signed integer". The other detail is that I'm assuming a specific representation by the way I showed the range of values.
You might, if you're into object oriented programming, notice that this is very much like the definition of a class, which is defined by the storage used by each object and the methods of the class. Providing those parts for some arbitrary thing, but not inheritance rules, gives you what's called an abstract data type.
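To make that concrete, here is a toy sketch (my own illustration, in Python) of an "8-bit signed integer" type spelled out as its three pieces -- a set of values, operations on them, and a storage/interpretation rule (two's complement wrap-around):

class Int8:
    MIN, MAX = -128, 127                      # the set of possible values V

    def __init__(self, v):
        if not (self.MIN <= v <= self.MAX):
            raise ValueError("out of range for an 8-bit signed integer")
        self._v = v                           # storage (here a Python int; in hardware, 8 bits)

    def __add__(self, other):                 # an operation V x V -> V, wrapping like two's complement
        return Int8((self._v + other._v + 128) % 256 - 128)

    def __eq__(self, other):                  # eq: V x V -> {True, False}
        return self._v == other._v

print(Int8(120) + Int8(10) == Int8(-126))     # True: overflow wraps around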
Update
@Appy, there's some room for differences in the formalities. I was a little subtle because it was late and I was suddenly uncertain if I'd assumed one's complement or two's complement -- of course it's two's complement. So interpretation is included in my description. Abstractly, though, you'd say it is an algebraic structure T = (V, O) where V is a set of values and O is a set of functions from V into some arbitrary type -- remember that '==', for example, will be a function eq: V × V → {0,1}, so you can't expect every operation to be into V.
I can define it as a classification of a particular type of information. It is easy for humans to distinguish between different types of data. We can usually tell at a glance whether a number is a percentage, a time, or an amount of money. We do this through special symbols %, :, and $.
Basically it's a concept that I am sure you grok. For computers, however, a data type is defined and has various associated attributes, like its size, sometimes a definition keyword, the values it can take (numbers or characters, for example) and the operations that can be done on it, like add and subtract for numbers, append for strings, or compare for characters. These differ from language to language and even from environment to environment (16- vs 32-bit ints, 32- vs 64-bit environments, etc.).
If there is anything I am missing or that needs refining, please ask, as this is fairly open ended.

When to use sentinel values?

I recently had to use a GPS location API where each location object had among other things two properties altitude and verticalAccuracy. A negative verticalAccuracy signifies that altitude is invalid, whereas normally a smaller but positive value of verticalAccuracy actually means that altitude is more precise (since it's the vertical distance that it may be off by - I'll leave the discussion as to why this measure is called verticalAccuracy and not verticalInaccuracy for some other time).
This got me thinking: When is it a good idea to use sentinel values like this API does and when would it be better to explicitly make a separate hasValidAltitude property? Are there other options?
Sometimes, sentinel answers aren't really possible; maybe the function's range coincides with the codomain. This isn't the case with altitude, unless you allow negative altitudes (maybe in the future there will be underwater cities). For instance, maybe we're talking about the intersection between lines (not a great example, since floating point has a few built-in sentinels like +INF and NaN) or the precise integer quotient (without rounding, this is not guaranteed to exist... 7 and 3, for instance... here, the remainder after division can be viewed as either a sentinel or an "exact integer quotient exists" property). More generally, any reliable sentinel can be trivially used to construct a property-based mechanism.
Based on this, I'd recommend avoiding sentinels wherever this is possible and makes sense. My reasoning is that they are an internal implementation detail of the module, and should be encapsulated behind an information-hiding interface.
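As a concrete alternative to the sentinel, an explicit "might be absent" type makes the validity rule part of the interface rather than a magic value. A small sketch (hypothetical names, not the GPS API in question), in Python:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    altitude: Optional[float]            # None means "no valid altitude", no sentinel needed
    vertical_accuracy: Optional[float]   # possible error when altitude is present

    @property
    def has_valid_altitude(self) -> bool:
        return self.altitude is not None

loc = Location(altitude=None, vertical_accuracy=None)
print(loc.has_valid_altitude)            # False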

Real number arithmetic in a general purpose language?

As (hopefully) most of you know, floating point arithmetic is different from real number arithmetic. For starters, it's imprecise. Many numbers, especially decimal fractions (0.1, 0.3), cannot be represented exactly, leading to problems like this. A more thorough list can be found here.
Are there any general purpose languages that have built-in support for something closer to real number arithmetic? If not, what are good libraries that support this?
EDIT: Arbitrary precision decimal datatypes are not what I am looking for. I want to be able to represent numbers like 1/3, sqrt(3), or 1 + 2i as well.
Though I hate to say it, Fortran. It has extensive support for arbitrary-precision arithmetic and tons of support for big-number calculations. It's ancient and gross, but it gets the job done.
All the numbers used in your examples are algebraic numbers, and can be represented finitely as roots of polynomials with integer coefficients.
The same cannot be said of real numbers in general, which is easily seen when one considers that the reals are uncountable, but the set of computer programs is countable. Therefore most reals will not have a finite representation in code.
What you are looking for is symbolic calculation (MATLAB and other tools used in math and engineering are good at it).
If you want a general purpose language, I think expression trees in C# are a good point to start with. In essence, the ability to store an expression (instead of evaluating it into real values) is the key to being able to perform symbolic calculation. Note that expression trees do not provide symbolic calculation; they just provide a data structure that supports it.
This question is interesting, but raises some issues. First, you will never be able to represent all the real numbers using a (even theoretically infinite) computer, for cardinality reasons.
What you are looking for is a "symbolic numbers" datatype. You can imagine some sort of expression tree, with predefined constants, arithmetical operations, and perhaps algebraic (roots of polynomials) and transcendental (exp, sin, cos, log, etc.) functions.
Now the fun part of the story: you cannot find an algorithm which tells whether two such trees represent the same number (or, equivalently, which tests whether such a tree is zero). I won't state anything precise, but as a hint, this is similar to the Halting Problem (for computer scientists) or the Gödel Incompleteness Theorem (for mathematicians).
This renders such a class pretty useless.
For some subfields of the reals, you have canonical forms, like a/b for the rationals, or finite algebraic extensions of the rationals (a/b + i c/d for complex rationals, a/b + (c/d) sqrt(2) for Q[sqrt(2)], etc). These can be used to represent some particular sets of algebraic numbers.
In practice, this is the most complicated thing you will need. If you have a particular necessity, like intervals of floating point numbers (to prove some result is within a specified interval, this is probably the closest you can get to real numbers), or arbitrary precision numbers, there are freely available classes everywhere. Look at Boost.Interval for the former, and GMP for the latter.
There are several languages with support for rational and complex numbers. Scheme, for instance, has support built in for arbitrarily precise rational numbers, and complex numbers with either rational, floating point, or integral coefficients:
> (+ 1/2 1/3)
5/6
> (* 3 1+1/2i)
3+3/2i
> (+ 1/2 .5)
1.0
If you want to go beyond rational numbers or complex numbers with rational coefficients, to algebraic numbers such as sqrt(2) or closed-form numbers like e, you will probably have to look beyond general purpose programming languages, and use a special purpose mathematical language like Mathematica or Maxima.
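If you would rather stay inside a general purpose language, a computer algebra library gives you much of the same: for example SymPy, a third-party symbolic mathematics package for Python (a brief sketch, assuming it is an acceptable dependency):

import sympy as sp

print(sp.Rational(1, 3) + sp.Rational(1, 6))         # 1/2       -- exact rationals
print(sp.sqrt(2) ** 2)                               # 2         -- algebraic numbers stay symbolic
print(sp.simplify(1 / sp.sqrt(2) - sp.sqrt(2) / 2))  # 0         -- the two forms are provably equal
print(sp.expand((1 + 2 * sp.I) ** 2))                # -3 + 4*I  -- exact complex arithmetic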
To cover the real numbers with any flair you'll need a symbolic package.
Boost, the C++ project, has a Rational library, but that's only part of the story.
You have irrational numbers in all sorts of forms (pi, base of the natural logarithm, square and cube roots, the Champernowne constant, to name only a few). The only way I know of to handle arithmetic operations is a symbolic package with smarts as to the relationship amongst all of these numbers. Assuming you could express e^pi, how would you add one to it? Or take the square root of it?
Mathematica might handle these cases.
Java: java.math.BigDecimal
C#: decimal
A lot of languages have support for that: Java has BigDecimal, Perl has Math::BigFloat and Math::BigRat, Haskell has Integer, and many more libraries and languages are listed on Wikipedia.
Ada natively supports fixed-point math as well as floating-point. Fixed-point can be much more exact than floating-point, as long as the number's exponents remain in range.
If you need floating-points, but more precision than IEEE gives, there are bignum packages around for just about every language.
I think that's about the best you can do. Neither scheme can exactly represent repeating decimals (like 1/3). It would probably be possible to come up with a scheme that does, but I know of no language that supports such a thing with a built-in type. Even that won't help you with irrational numbers (like pi and e). I believe there's even a theorem that says there will always be unrepresentable numbers, no matter what scheme you come up with.
EDIT: Arbitrary precision decimal datatypes are not what I am looking for. I want to be able to represent numbers like 1/3, sqrt(3), or 1 + 2i as well.
Ruby has a Rational class, so 1/3 can be expressed exactly as Rational(1,3). It also has a Complex class.
Scheme defines rationals, bignums, floating point and complex numbers. An implementation is not required to support them all, but if they are present, you can mix them and they will do "the right thing".
While it's not "built-in", I think C++ (maybe C#) is your best bet. There are classes out there that have been written for this purpose.
http://www.oonumerics.org/oon/

Text-correlation in MySQL [duplicate]

Suppose I want to match address records (or person names or whatever) against each other to merge records that are most likely referring to the same address. Basically, I guess I would like to calculate some kind of correlation between the text values and merge the records if this value is over a certain threshold.
Example:
"West Lawnmower Drive 54 A" is probably the same as "W. Lawn Mower Dr. 54A" but different from "East Lawnmower Drive 54 A".
How would you approach this problem? Would it be necessary to have some kind of context-based dictionary that knows, in the address case, that "W", "W." and "West" are the same? What about misspellings ("mover" instead of "mower" etc)?
I think this is a tricky one - perhaps there are some well-known algorithms out there?
A good baseline, though probably an impractical one in terms of its relatively high computational cost and, more importantly, its production of many false positives, would be generic string distance algorithms such as the two below (a minimal edit-distance sketch follows the list):
Edit distance (aka Levenshtein distance)
Ratcliff/Obershelp
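(A minimal sketch of the edit distance, in Python, mostly to show why it is only a baseline: it also scores "West ..." vs "East ..." as very close, which is exactly the false-positive problem mentioned above.)

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, computed row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

print(levenshtein("West Lawnmower Drive 54 A", "W. Lawn Mower Dr. 54A"))     # small distance
print(levenshtein("West Lawnmower Drive 54 A", "East Lawnmower Drive 54 A")) # only 2!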
Depending on the level of accuracy required (which, BTW, should be specified both in terms of its recall and precision, i.e. generally expressing whether it is more important to miss a correlation than to falsely identify one), a home-grown process based on [some of] the following heuristics and ideas could do the trick:
tokenize the input, i.e. see the input as an array of words rather than a string
tokenization should also keep the line number info
normalize the input with the use of a short dictionary of common substitutions (such as "dr" at the end of a line = "drive", "Jack" = "John", "Bill" = "William"..., "W." at the beginning of a line = "West", etc.)
Identify (a bit like tagging, as in POS tagging) the nature of some entities (for example ZIP Code, Extended ZIP Code, and also city)
Identify (lookup) some of these entities (for example a relatively short database table can include all the cities/towns in the targeted area)
Identify (lookup) some domain-related entities (if all/many of the addresses deal with, say, folks in the legal profession, a lookup of law firm names or of federal buildings may be of help)
Generally, put more weight on tokens that come from the last line of the address
Put more (or less) weight on tokens with a particular entity type (e.g. "Drive", "Street", "Court" should weigh much less than the tokens which precede them)
Consider a modified SOUNDEX algorithm to help with normalization of spelling variants and misspellings
With the above in mind, implement a rule-based evaluator. Tentatively, the rules could be implemented as visitors to a tree/array-like structure where the input is parsed initially (Visitor design pattern).
The advantage of the rule-based framework is that each heuristic is in its own function, and rules can be prioritized, i.e. placing some strong heuristics early in the chain allows the evaluation to be aborted early (e.g. different city => correlation = 0, level of confidence = 95%, etc.).
An important consideration with searching for correlations is the need to a priori compare every single item (here, address) with every other item, hence requiring as many as 1/2 n^2 item-level comparisons. Because of this, it may be useful to store the reference items in a way where they are pre-processed (parsed, normalized...) and also to maybe have a digest/key of sorts that can be used as a [very rough] indicator of a possible correlation (for example a key made of the 5-digit ZIP Code followed by the SOUNDEX value of the "primary" name).
I would look at producing a similarity comparison metric that, given two objects (strings perhaps), returns the "distance" between them.
It helps if your metric fulfils the following criteria:
the distance between an object and itself is zero (reflexive);
the distance from a to b is the same in both directions (symmetric);
the distance from a to c is not more than the distance from a to b plus the distance from b to c (triangle inequality).
If your metric obeys these then you can arrange your objects in a metric space, which means you can run queries like:
Which other object is most like this one?
Give me the five objects most like this one.
There's a good book about it here. Once you've set up the infrastructure for hosting objects and running the queries you can simply plug in different comparison algorithms, compare their performance and then tune them.
I did this for geographic data at university and it was quite fun trying to tune the comparison algorithms.
I'm sure you could come up with something more advanced but you could start with something simple like reducing the address line to the digits and the first letter of each word and then compare the result of that using a longest common subsequence algorithm.
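A sketch of that simple reduction (my own, in Python): keep the digits and the first letter of each word, then feed the results to whatever longest-common-subsequence routine you like.

import re

def signature(address: str) -> str:
    # Lowercase, split into words and digit runs, keep digits and word initials.
    tokens = re.findall(r"[a-z]+|\d+", address.lower())
    return "".join(t if t.isdigit() else t[0] for t in tokens)

print(signature("West Lawnmower Drive 54 A"))   # wld54a
print(signature("W. Lawn Mower Dr. 54A"))       # wlmd54a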
Hope that helps in some way.
You can use Levenshtein edit distance to find strings that differ by only a few characters. BK Trees can help speed up the matching process.
Disclaimer: I don't know of any algorithm that does this, but I would really be interested in knowing one if it exists. This answer is a naive attempt at solving the problem, with no previous knowledge whatsoever. Comments welcome, please don't laugh too loud.
If you try doing it by hand, I would suggest applying some kind of "normalization" to your strings : lowercase them, remove punctuation, maybe replace common abbreviations with the full words (Dr. => drive, St => street, etc...).
Then, you can try different alignments between the two strings you compare, and compute the correlation by averaging the absolute differences between corresponding letters (e.g. a = 1, b = 2, etc. and corr(a, b) = |a - b| = 1):
west lawnmover drive
w lawnmower street
Thus, even if some letters are different, the correlation would be high. Then, simply keep the maximal correlation you found, and decide that they are the same if the correlation is above a given threshold.
When I had to modify a proprietary program doing this, back in the early 90s, it took many thousands of lines of code in multiple modules, built up over years of experience. Modern machine-learning techniques ought to make it easier, and perhaps you don't need to perform as well (it was my employer's bread and butter).
So if you're talking about merging lists of actual mailing addresses, I'd do it by outsourcing if I could.
The USPS had some tests to measure quality of address standardization programs. I don't remember anything about how that worked, but you might check if they still do it -- maybe you can get some good training data.