Why does the following produce an error? - ms-access

I am trying to insert a value of 12,500,000.00 into an Access table and receive the following error message:
Decimal fields precision is too small to accept the numeric you attempted to add.
The field in the table is of data type Number and has the following properties:
Precision 19
Scale 14
Decimal Places 5
I don't understand because 12,500,000.00 has a precision of 8 and scale of 2. And decimal places is for display purposes only, not storage.
I fixed it by changing precision to 25, but would still appreciate some clarity.

Precision : The total number of digits that can be stored, both to the left and right of the decimal point
Scale : The maximum number of digits that can be stored to the right of the decimal separator
Decimal Places : The number of digits that are displayed to the right of the decimal separator
In other words, by using a precision of 19, you were declaring your field to have 5 digits to the left of the decimal point, and 14 to the right (the value of Scale).
Changing the total precision to 25 allows 11 digits to be stored to the left of the decimal point.
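To make this concrete, here is a minimal sketch in SQL DDL (the table and column names are hypothetical; Access accepts DECIMAL(precision, scale) in DDL when the statement is run through ADO or in ANSI-92 query mode):

CREATE TABLE AmountTest (Amount DECIMAL(19, 14));
-- 19 total digits minus 14 fractional digits leaves only 5 integer digits,
-- so this insert fails with the "precision is too small" error:
INSERT INTO AmountTest (Amount) VALUES (12500000.00);

CREATE TABLE AmountTest2 (Amount DECIMAL(25, 14));
-- 25 - 14 = 11 integer digits, so the same value now fits:
INSERT INTO AmountTest2 (Amount) VALUES (12500000.00);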

A decimal is a fixed-point number, and when you set a scale of 14, there are actually 14 digits (in your case, zeroes) reserved to the right of the decimal point. The scale is part of the precision.
This is different from what you may expect from floating-point numbers, where you can write 12,500,000.00 as 1.25e+7 and have a precision of 3.

What will be the constraints on the values that can be entered in a column declared as FLOAT?

I am building a web app, and in one section of it a teacher enters the expected results of a scientific experiment. These results must be very accurate; they might look like 0.4933546522886728. After searching for a while, FLOAT seems to be the right data type to store these answers in the database. As is known, FLOAT columns in MySQL can be declared as FLOAT(n, d), where n is the total number of digits in the number and d is the number of digits after the decimal point. However, I do not know how many digits the teacher will enter. So, what would happen if I declared the column simply as FLOAT? The thing that made me think of this is this quote from the MySQL documentation:
For maximum portability, code requiring storage of approximate numeric data values should use FLOAT or DOUBLE PRECISION with no specification of precision or number of digits.
And what would be the maximum and minimum values that can be entered in this FLOAT column?
I also thought of using VARCHAR to store the exact number that the teacher enters, and then, when comparing a student's answer against the stored right answer, trimming the student's number so that it matches the teacher's precision.
For example, if the teacher enters 1.23451 and the student enters 1.4235123, my code will turn it into 1.42351.
The (n,d) on the end of FLOAT and DOUBLE does not make sense. All it does is cause an extra rounding.
FLOAT provides about 7 significant decimal digits of precision and a modestly big exponent range. 0.4933546522886728 will be stored as about 0.4933546xxxxx, with the extra digits being noise.
That number can be stored in a DOUBLE, with a rounding error after 53 bits (about 16 digits) of precision.
There are very few scientific measurements that need more digits than available in the precision of FLOAT.
You can INSERT ... VALUES ( 0.4933546522886728 ) and put that into a FLOAT. It will get rounded to 24 significant bits. Ditto for 4933546522886.728, or 0.0000000004933546522886728, or 4.933546522886728e20, or 4.933546522886728e-20.
Take whatever numbers you are given and simply put them in the INSERT without worrying about precision or scaling.
VARCHAR is the wrong way to go for numbers and dates, unless you want to store the raw input before it has been converted into the internal format.
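As a rough illustration (the table and column names below are made up for the example), you can store the same value in a FLOAT and a DOUBLE column and compare what comes back:

CREATE TABLE results_demo (f FLOAT, d DOUBLE);
INSERT INTO results_demo VALUES (0.4933546522886728, 0.4933546522886728);
-- The FLOAT column keeps only about 7 significant digits of the value;
-- the DOUBLE column keeps about 16, so it reads back essentially unchanged.
SELECT f, d FROM results_demo;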

Encountered this difficulty with DECIMAL in MySQL many times

I have encountered this problem with DECIMAL in MySQL many times!
When I declare the type as DECIMAL(10,8),
the maximum value allowed is 99.99999999!
Shouldn't it be 9999999999.99999999?
I want the largest possible DECIMAL value with 8 digits after the decimal point (.).
From the documentation:
The declaration syntax for a DECIMAL column is DECIMAL(M,D). The ranges of values for the arguments in MySQL 5.7 are as follows:
M is the maximum number of digits (the precision). It has a range of 1 to 65.
D is the number of digits to the right of the decimal point (the scale). It has a range of 0 to 30 and must be no larger than M.
The first value is not the number of digits to the left of the decimal point, but the total number of digits.
That's why the value 9999999999.99999999 with DECIMAL(10, 8) is not possible: it is 18 digits long.
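A small sketch you can run to see this (the table name is made up; in strict SQL mode the out-of-range insert raises an error, otherwise the value is clipped with a warning):

CREATE TABLE decimal_demo (a DECIMAL(10, 8), b DECIMAL(18, 8));
INSERT INTO decimal_demo (a) VALUES (99.99999999);            -- the largest value DECIMAL(10,8) can hold
INSERT INTO decimal_demo (b) VALUES (9999999999.99999999);    -- fits: 10 integer digits + 8 fractional digits = 18 total
-- INSERT INTO decimal_demo (a) VALUES (9999999999.99999999); -- out of range for DECIMAL(10,8)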
A decimal is defined by two parameters - DECIMAL(M, D), where M is the total number of digits, and D is number of digits after the decimal point out of M. To properly represent the number 9999999999.99999999, you'd need to use DECIMAL(18, 8).
The way DECIMAL(x,y) specifiers work is that x represents the total number of digits and y the number that come after the decimal point.
10,8 means NN.NNNNNNNN.
If you want more, you need to make your range larger accordingly.
The first number is the total of digits, and the second one is the number of decimal places.
For the number you request, try DECIMAL(18,8).
You define DECIMAL(total digits, decimal digits), so remember: the decimal digits are counted as part of the total. If you want a wide range of digits both to the left and to the right of the point, use the FLOAT type instead.

Is the most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance 6 or 7.225?

I've come across two different precision formulas for floating-point numbers.
⌊(N-1) log10(2)⌋ = 6 decimal digits (Single-precision)
and
N log10(2) ≈ 7.225 decimal digits (Single-precision)
Where N = 24 Significant bits (Single-precision)
The first formula is found at the top of page 4 of "IEEE Standard 754 for Binary Floating-Point Arithmetic" written by, Professor W. Kahan.
The second formula is found on the Wikipedia article "Single-precision floating-point format" under section IEEE 754 single-precision binary floating-point format: binary32.
For the first formula, Professor W. Kahan says
If a decimal string with at most 6 sig. dec. is converted to Single and then converted back to the same number of sig. dec.,
then the final string should match the original.
For the second formula, Wikipedia says
...the total precision is 24 bits (equivalent to log10(2^24) ≈ 7.225 decimal digits).
The results of both formulas (6 and 7.225 decimal digits) are different, and I expected them to be the same because I assumed they both were meant to represent the most significant decimal digits which can be converted to floating-point binary and then converted back to decimal with the same number of significant decimal digits that it started with.
Why do these two numbers differ, and what is the most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance?
These are talking about two slightly different things.
The 7.225 digits¹ is the precision with which a number can be stored internally. For one example, if you did a computation with a double precision number (so you were starting with something like 15 digits of precision), then rounded it to a single precision number, the precision you'd have left at that point would be approximately 7 digits.
The 6 digits is talking about the precision that can be maintained through a round-trip conversion from a string of decimal digits, into a floating point number, then back to another string of decimal digits.
So, let's assume I start with a number like 1.23456789 as a string, then convert that to a float32, then convert the result back to a string. When I've done this, I can expect 6 digits to match exactly. The seventh digit might be rounded though, so I can't necessarily expect it to match (though it probably will be within +/- 1 of the original string).
For example, consider the following code:
#include <iostream>
#include <iomanip>

int main() {
    double init = 987.23456789;
    for (int i = 0; i < 100; i++) {
        // Round the double value to single precision, then print it back in decimal.
        float f = init + i / 100.0;
        std::cout << std::setprecision(10) << std::setw(20) << f;
        if (i % 4 == 3)          // four values per row, matching the table below
            std::cout << '\n';
    }
}
This produces a table like the following:
987.2345581 987.2445679 987.2545776 987.2645874
987.2745972 987.2845459 987.2945557 987.3045654
987.3145752 987.324585 987.3345947 987.3445435
987.3545532 987.364563 987.3745728 987.3845825
987.3945923 987.404541 987.4145508 987.4245605
987.4345703 987.4445801 987.4545898 987.4645386
987.4745483 987.4845581 987.4945679 987.5045776
987.5145874 987.5245972 987.5345459 987.5445557
987.5545654 987.5645752 987.574585 987.5845947
987.5945435 987.6045532 987.614563 987.6245728
987.6345825 987.6445923 987.654541 987.6645508
987.6745605 987.6845703 987.6945801 987.7045898
987.7145386 987.7245483 987.7345581 987.7445679
987.7545776 987.7645874 987.7745972 987.7845459
987.7945557 987.8045654 987.8145752 987.824585
987.8345947 987.8445435 987.8545532 987.864563
987.8745728 987.8845825 987.8945923 987.904541
987.9145508 987.9245605 987.9345703 987.9445801
987.9545898 987.9645386 987.9745483 987.9845581
987.9945679 988.0045776 988.0145874 988.0245972
988.0345459 988.0445557 988.0545654 988.0645752
988.074585 988.0845947 988.0945435 988.1045532
988.114563 988.1245728 988.1345825 988.1445923
988.154541 988.1645508 988.1745605 988.1845703
988.1945801 988.2045898 988.2145386 988.2245483
If we look through this, we can see that the first six significant digits always follow the pattern precisely (i.e., each result is exactly 0.01 greater than its predecessor). As we can see in the original double, the value is actually 98x.xx456, but when we convert the single-precision float to decimal, the 7th digit frequently would not be read back in correctly. Since the subsequent digit is greater than 5, it should round up to 98x.xx46, but some of the values won't (e.g., the second-to-last item in the first column is 988.154541, which would be rounded down instead of up, so we'd end up with 98x.xx45 instead of 46).
So, even though the value (as stored) is precise to 7 digits (plus a little), by the time we round-trip the value through a conversion to decimal and back, we can't depend on that seventh digit matching precisely any more (even though there's enough precision that it will, far more often than not).
¹ That basically means 7 digits, with the 8th digit being a little more accurate than nothing, but not by much. For example, if we were converting from a double of 1.2345678, the extra .225 digits of precision mean that the last digit would be within about +/- 0.775 of what started out there (whereas without the .225 digits of precision, it would be basically +/- 1 of what started out there).
what is the most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance?
The most significant decimal digits precision that can be converted to binary and back to decimal without loss of significance (for single-precision floating-point numbers, i.e. 24 significant bits) is 6 decimal digits.
Why do these two numbers differ...
The numbers 6 and 7.225 differ because they define two different things. 6 is the most decimal digits that can be round-tripped. 7.225 is the approximate number of decimal digits of precision for a 24-bit binary integer, because a 24-bit binary integer can have 7 or 8 decimal digits depending on its specific value.
7.225 was found using the specific binary integer formula.
dspec = b·log10(2)   (dspec = specific decimal digits, b = bits)
However, what you normally need to know are the minimum and maximum decimal digits for a b-bit integer. The following formulas are used to find the min and max decimal digits (7 and 8 respectively for 24 bits) of a b-bit binary integer.
dmin = ⌈(b-1)·log10(2)⌉   (dmin = min decimal digits, b = bits, ⌈x⌉ = smallest integer ≥ x)
dmax = ⌈b·log10(2)⌉   (dmax = max decimal digits, b = bits, ⌈x⌉ = smallest integer ≥ x)
To learn more about how these formulas are derived, read Number of Decimal Digits In a Binary Integer, written by Rick Regan.
This is all well and good, but you may ask, why is 6 the most decimal digits for a round-trip conversion if you say that the span of decimal digits for a 24-bit number is 7 to 8?
The answer is — because the above formulas only work for integers and not floating-point numbers!
Every decimal integer has an exact value in binary. However, the same cannot be said for every decimal floating-point number. Take .1 for example. .1 in binary is the number 0.000110011001100..., a repeating (recurring) binary fraction. This can produce rounding error.
Moreover, it takes one more bit to represent a decimal floating-point number than it does to represent a decimal integer of equal significance. This is because floating-point numbers are more precise the closer they are to 0, and less precise the further they are from 0. Because of this, many floating-point numbers near the minimum and maximum value ranges (emin = -126 and emax = +127 for single-precision) lose 1 bit of precision due to rounding error. To see this visually, look at What every computer programmer should know about floating point, part 1, written by Josh Haberman.
Furthermore, there are at least 784,757 positive seven-digit decimal numbers that cannot retain their original value after a round-trip conversion. An example of such a number that cannot survive the round-trip is 8.589973e9. This is the smallest positive number that does not retain its original value.
Here's the formula that you should be using for floating-point number precision that will give you 6 decimal digits for round-trip conversion.
dmax = ⌊(b-1)·log10(2)⌋   (dmax = max decimal digits, b = bits, ⌊x⌋ = largest integer ≤ x)
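As a quick check (my own arithmetic, not part of the original answer), plugging the IEEE 754 significand widths into this formula reproduces the familiar round-trip digit counts:

⌊(24-1)·log10(2)⌋ = ⌊6.92⌋ = 6   (single precision)
⌊(53-1)·log10(2)⌋ = ⌊15.65⌋ = 15   (double precision)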
To learn more about how this formula is derived, read Number of Digits Required For Round-Trip Conversions, also written by Rick Regan. Rick does an excellent job showing the formula's derivation with references to rigorous proofs.
As a result, you can utilize the above formulas in a constructive way; if you understand how they work, you can apply them to any programming language that uses floating-point data types. All you have to know is the number of significant bits that your floating-point data type has, and you can find the number of decimal digits that you can count on to survive a round-trip conversion with no loss of significance.
June 18, 2017 Update: I want to include a link to Rick Regan's new article which goes into more detail and in my opinion better answers this question than any answer provided here. His article is "Decimal Precision of Binary Floating-Point Numbers" and can be found on his website www.exploringbinary.com.
Do keep in mind that they are the exact same formulas. Remember your high-school math book identity:
Log(x^y) == y * Log(x)
It helps to actually calculate the values for N = 24 with your calculator:
Kahan's: 23 * Log(2) = 6.924
Wikipedia's: Log(2^24) = 7.225
Kahan was forced to truncate 6.924 down to 6 digits because of the floor(), bummer. The only actual difference is that Kahan used 1 less bit of precision.
It is pretty hard to guess why; the professor might have relied on old notes, written before IEEE-754 and not taking into account that the 24th bit of precision is free. The format uses a trick: the most significant bit of a floating-point value that isn't 0 is always 1, so it doesn't need to be stored. The processor adds it back before it performs a calculation, turning 23 bits of stored precision into 24 bits of effective precision.
Or he took into account that the conversion from a decimal string to a binary floating-point value itself generates an error. Many nice round decimal values, like 0.1, cannot be perfectly converted to binary; they have an endless number of binary digits, just like 1/3 in decimal. That however generates a result that is off by +/- 0.5 bits, achieved by simple rounding. So the result is accurate to 23.5 * Log(2) = 7.074 decimal digits. If he assumed that the conversion routine is clumsy and doesn't properly round, then the result can be off by +/- 1 bit and N-1 is appropriate. They are not clumsy.
Or he thought like a typical scientist or (heaven forbid) an accountant and wanted the result of a calculation converted back to decimal as well, such as you'd get when you trivially look for a 7-digit decimal number whose back-and-forth conversion does not produce the same number. Yes, that adds another +/- 0.5 bit of error, summing up to 1 bit of error in total.
But never, never make that mistake: you always have to include any errors you pick up from manipulating the number in a calculation. Some operations lose significant digits very quickly; subtraction in particular is very dangerous.

Storing comma values for percentages in SQL Server 2008

I have a numeric column in my database for storing discount percentages for my items.
If the discount is 25,6, it removes the 6.
What data type does this column have to be so that it does not remove the digits after the comma?
And what is best practice?
I think you're talking about the decimal data type.
The decimal definition is split into two parts...
the first is the TOTAL number of digits that you want to store, both to the left and the right of the decimal character (normally . or ,). This is called the precision.
the second is the number of digits to the right of the decimal character. This is called the scale.
For instance, to store a percentage with 1 decimal place (such as the 25,6 you give as an example), you would use the following
decimal(4,1)
That means that you have a maximum of 4 digits, with only 1 of those digits being after the decimal place. The maximum value it can store is 999.9.
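A minimal sketch for SQL Server 2008 (the table and column names are invented for the example; note that SQL literals use a dot as the decimal separator, even if your locale displays a comma):

CREATE TABLE ItemDiscounts (DiscountPercent DECIMAL(4, 1));
INSERT INTO ItemDiscounts (DiscountPercent) VALUES (25.6);
SELECT DiscountPercent FROM ItemDiscounts;  -- returns 25.6; the digit after the separator is kept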

MySQL FLOAT & decimals

The data type of the field in the DB is FLOAT and the value is 18.7. I'd like to store and display this on the page as 18.70. Whenever I enter the extra 0, it still stores only 18.7.
How can I store the extra 0? I can change the data type of the field.
In a FLOAT column, what MySQL stores for 18.7 is actually:
01000001 10010101 10011001 10011010
which, being retrieved from the DB and converted back into your display format, is 18.7.
In reality, the stored value is a binary fraction whose decimal value is approximately 18.70000076293945, which you can see by issuing this query:
CREATE TABLE t_f (value FLOAT);

INSERT INTO t_f VALUES (18.7);

SELECT CAST(value AS DECIMAL(30, 16))
FROM t_f;
IEEE-754 represents numbers as binary fractions, so a value like 0.1 can only be stored as an infinitely repeating binary fraction and hence is not exact.
DECIMAL, on the other hand, stores decimal digits, packing 9 digits into 4 bytes.
Floating-point types do not store insignificant zeros, whether they appear before the first significant digit or after the last digit to the right of the decimal point. You'll need to use a string-based type (or store the precision in a separate field) if you want to keep the exact numeric string entered by the user and be able to distinguish 12.7 from 12.70. You can, however, round whatever you display to two digits in your application.
If two decimal places are needed, use:
DECIMAL(n,2), where n >= 2
The DECIMAL data type will preserve the decimal-places formatting and gives more accurate results than the FLOAT and DOUBLE data types.
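As a hedged sketch (the table name is made up), switching the column to DECIMAL with a scale of 2 keeps both displayed decimals:

CREATE TABLE price_demo (value DECIMAL(10, 2));
INSERT INTO price_demo VALUES (18.7);
SELECT value FROM price_demo;  -- returns 18.70, including the trailing zero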
Are you attempting to store a currency as a float? If so, please use a decimal with more decimal digits than 2.
You really want fixed-point arithmetic on currencies.
This is just a very broad rule of thumb and my own observation, but in regular business logic as serialized in a database, you almost never want floating point. I know there are lots of exceptions, but I'm suspicious whenever I see a float-typed column in a table because of this. I'd be interested in what others have found.