Issue
I'm using SQLite and I've got a bunch of fields representing measures in millimeters that I'd like to limit to one digit after the decimal point (e.g. 1.2, 12.2, 122.2 and so on).
I've seen suggestions such as declaring the column as DECIMAL(n,1), for example, and I tried it, but it doesn't seem to constrain the value (I suppose that's because it's not an actual SQLite type).
Do I need to migrate to MySQL for it to work?
EDIT (solution found)
I used Dan04's answer: it's simple and it works really well:
► Table is as follows:
CREATE TABLE demo(
a REAL CHECK(a = ROUND(a,1)),
b REAL CHECK(b = ROUND(b,1)),
c REAL GENERATED ALWAYS AS (a+b)
)
► Insert correct data: INSERT INTO demo (a,b) VALUES (41.4, 22.6)
► Insert bad data: INSERT INTO demo (a,b) VALUES (1.45, 22.68) outputs:
Execution finished with errors.
Result: CHECK constraint failed: a = ROUND(a,1)
At line 1:
insert into demo (a,b) values (1.45,22.68)
You can make a CHECK constraint using the ROUND function. Declare the column as:
mm REAL CHECK(mm = ROUND(mm, 1))
But note that the underlying representation is still a binary floating-point number, with the usual caveats about accuracy.
MySQL's DECIMAL(n,1) will round to 1 decimal place when storing. That's not the same as a constraint.
When displaying data, your app should round the result to a meaningful precision. (One decimal place is arguably overkill for weather readings.)
In general, measurements (not money) should be stored in FLOAT. This datatype (in MySQL and many other products) provides 7 "significant digits" and a reasonably high range of values.
FLOAT has sufficient precision when used for latitude and longitude to distinguish two vehicles, but not enough precision to distinguish two people embracing.
(Sorry, I can't speak for SQLite. If FLOAT is available then I recommend you use it and round on output.)
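To make that difference concrete, here is a minimal MySQL sketch (table and column names are invented, and CHECK constraints are only enforced from MySQL 8.0.16 onward): DECIMAL(5,1) silently rounds the extra digits away on insert, whereas a CHECK in the style of the SQLite answer rejects them.
CREATE TABLE measures (
  mm_dec DECIMAL(5,1),                               -- extra digits are rounded away when storing
  mm_chk DOUBLE CHECK (mm_chk = ROUND(mm_chk, 1))    -- extra digits violate the constraint
);
INSERT INTO measures (mm_dec) VALUES (1.45);             -- accepted; typically stored as 1.5 with a truncation note
INSERT INTO measures (mm_chk) VALUES (1.45);             -- rejected: the CHECK constraint fails
INSERT INTO measures (mm_dec, mm_chk) VALUES (1.4, 1.4); -- accepted by both columns
Either way, the DOUBLE column still holds a binary approximation, so rounding on output remains advisable.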
Related
When I insert a number such as 1234567.1234567, it gets translated to 1234567.1250.
How do I make it save the correct number?
FLOATs are saved in four bytes, allowing for about 2^32 different values.
1234567.1234567 is not one of them. Encodable values are all some limited integer times a power of 2.
The closest encodable value is 1234567.125, or 9876537 * 2^-3.
Code could use DOUBLE, yet a similar issue applies.
The closest is about 1234567.1234567000065..., which may be close enough for OP's purpose.
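A quick way to see this from MySQL itself (the table name is made up; the digits your client prints may vary slightly by version):
CREATE TABLE precision_demo (f FLOAT, d DOUBLE);
INSERT INTO precision_demo VALUES (1234567.1234567, 1234567.1234567);
SELECT f, d FROM precision_demo;
-- f holds the nearest FLOAT:  1234567.125 (i.e. 9876537 * 2^-3)
-- d holds the nearest DOUBLE: about 1234567.1234567000065, much closer but still not exact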
I am inserting data from one table into another in a MariaDB database, where the column in the first table is FLOAT, and in the second it's DOUBLE. The data can have values of any size, precision and decimal places.
Here is what happens to the values when I do a straightforward copy:
INSERT INTO data2 (value) SELECT value FROM data1
The values are given random extra significant figures:
FLOAT in data1          DOUBLE in data2
-0.000000000000454747   -0.0000000000004547473508864641
-122.319                -122.31932830810547
14864199700             14864220160
CAST(value AS DECIMAL(65,30)) generates exactly the same values as col 2 above, except I see trailing zeroes.
Yet when I just do
UPDATE data2 SET value = 14867199700 WHERE id = 133025046;
the DOUBLE value is accepted.
Do I have to export all the values to an SQL script and re-import them? Isn't there a better way?
Despite hours of experimenting with the issue, I'm not much closer to a solution, despite its limited nature. I can see this is a problem that besets all technologies, not just MariaDB or databases, so I have probably just missed the answer somewhere. Stack Overflow is trying hard to guide me to a solution with new suggestion features I hadn't seen before, but unfortunately they are no help, like the other suggested answers.
Your test case is flawed. You are feeding in decimal digits, and not testing just the transfer of FLOAT to DOUBLE.
UPDATE tbl SET double_col = float_col will always copy exactly the same value. This is because the DOUBLE representation is a superset of the FLOAT representation (53 vs 24 bits of precision; etc).
Literal, with decimal places: UPDATE tbl SET double_col = 123.456 will mangle the number because of rounding from decimal to DOUBLE. Ditto for float_col. Furthermore, the mangled results will be different!
Whole-number literal: UPDATE tbl SET double_col = 14867199700 will be stored exactly. But if you put that same literal into a FLOAT, it will be rounded to 24 bits, so it cannot be stored exactly. You lose exactness at about 7 significant digits for FLOAT and about 16 for DOUBLE. The literal in this example has 9 significant digits (after ignoring trailing zeros).
That's just a sampling of the nightmares you can get into.
You must consider FLOAT and DOUBLE to be approximate. You should never compare for equality; you don't know what might have messed with the last bit of the value.
Also, you should not try to guess when MySQL will perform expressions in DECIMAL instead of DOUBLE.
And, keep in mind that division is usually imprecise due to rounding to some number of bits or decimals.
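Here is a hedged sketch of those three cases against a throwaway table (all names invented):
CREATE TABLE float_vs_double (f FLOAT, d DOUBLE);
INSERT INTO float_vs_double VALUES (0, 0);
-- 1. Column-to-column copy: every FLOAT value is exactly representable as a DOUBLE,
--    so d ends up holding exactly what f holds (already rounded as that may be).
UPDATE float_vs_double SET f = 14864199700;   -- rounded once, to 24 bits, on the way in
UPDATE float_vs_double SET d = f;             -- copied exactly, with no further change
-- 2. Decimal literal: rounded separately for each type, to different results.
UPDATE float_vs_double SET f = 123.456, d = 123.456;
SELECT f = d FROM float_vs_double;            -- 0: the two stored approximations differ
-- 3. Whole-number literal with ~9 significant digits: fits a DOUBLE exactly,
--    but has to be rounded to fit a FLOAT's 24-bit significand.
UPDATE float_vs_double SET d = 14867199700;   -- stored exactly
UPDATE float_vs_double SET f = 14867199700;   -- stored approximately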
The "mantissa" of 14864199700 is
1.10111010111111001101100 (binary of FLOAT: 24 bits including 'hidden' leading bit)
1.1011101011111100110110000000101000000000000000000000 (binary of DOUBLE)
                              ^ ^ (lost in FLOAT)
Each of those is multiplied by the same power of 2. The DOUBLE gets exactly 14864199700. The FLOAT lost the bits pointed to.
You can play around with such at https://gregstoll.dyndns.org/~gregstoll/floattohex/
Believe it or not, things used to be worse. People would be billed for $0.00 -- due to rounding errors. Or results of what should have been 1+1 showed as 1.99999999.
After extensive searching, I am resorting to Stack Overflow's wisdom to help me.
Problem:
I have a database table that should effectively store values of the format (UserKey, data0, data1, ..), where the UserKey is to be handled as a primary key, or at least as an index. The UserKey itself (externally defined) is a string of 32 characters representing a checksum, which happens to be (a very big) hexadecimal number, i.e. it looks like this: UserKey = "000000003abc4f6e000000003abc4f6e".
Now I can certainly store this UserKey in a char(32) field, but I feel this is mighty inefficient, as I would be storing a series of in-principle arbitrary characters, i.e. reserving more space per character than the 4 bits I need to store the hexadecimal digits (0-9, A-F).
So my thought was to convert this string literal into the hex number it really represents, and store that. But this number (32 * 4 bits = 16 bytes) is much too big to store/handle, as SQL only handles BIGINTs of 8 bytes.
My second thought was to convert this into a BINARY(16) representation, which should be compact and memory-efficient. However, I do not know how to efficiently convert between these two formats, as SQL also internally only handles numbers up to a maximum of 8 bytes.
Maybe there is a way to convert this string to binary block by block and stitch the binary together somehow, in the way of:
UserKey == concat( stringblock1, stringblock2, ..)
UserKey_binary = concat( toBinary( stringblock1 ), toBinary( stringblock2 ), ..)
So my question is: is there any such mechanism foreseen in SQL that would solve this for me? What would a custom solution look like? (I find it hard to believe that I should be the first to encounter such a problem, as it has become quite common to use ridiculously long hash keys in many applications.)
Also, the UserKey_binary should then act as the relational key for the table, so I hope for a bit of speed from this more compact representation, as comparisons need to examine a minimal number of bits. Additionally, I want to mention that I would like to do any conversion, if possible, on the server side, so that user scripts do not have to be altered (the user side should, if possible, still transmit a string literal, not [partially] converted values, in the insert statement).
In contradiction to my previous statement, it seems that MySQL's UNHEX() function converts the string block by block and then concatenates, much as I stated above, so the method also works for hex literal values bigger than BIGINT's 8-byte limitation. Here is an example table that illustrates this:
CREATE TABLE `testdb`.`tab` (
`hexcol_binary` BINARY(16) GENERATED ALWAYS AS (UNHEX(charcol)) STORED,
`charcol` CHAR(32) NOT NULL,
PRIMARY KEY (`hexcol_binary`));
The primary key is a generated column, so that updates to charcol are the designated way of interacting with the table using string literals from the outside:
REPLACE into tab (charcol) VALUES ('1010202030304040A0A0B0B0C0C0D0D0');
SELECT HEX(hexcol_binary) as HEXstring, tab.* FROM tab;
As seen, building keys and indexes on hexcol_binary works as intended.
To verify the speedup, take
ALTER TABLE `testdb`.`tab`
ADD INDEX `charkey` (`charcol` ASC);
EXPLAIN SELECT * from tab where hexcol_binary = UNHEX('1010202030304040A0A0B0B0C0C0D0D0') #keylength 16
EXPLAIN SELECT * from tab where charcol = '1010202030304040A0A0B0B0C0C0D0D0' #keylength 97
The lookup on the hexcol_binary column performs much better, especially if it is additionally made unique.
Note: the hex conversion does not care whether the hex characters A through F are capitalized; however, charcol will be very sensitive to this.
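A quick check of that case behavior (just a sketch against a stock MySQL server):
SELECT UNHEX('a0b1c2') = UNHEX('A0B1C2');   -- 1: both decode to the same BINARY bytes
SELECT HEX(UNHEX('a0b1c2'));                -- 'A0B1C2': HEX() always reports upper case
-- charcol, by contrast, stores whatever string was sent; whether 'a0b1c2' and 'A0B1C2'
-- then compare equal in charcol depends on that CHAR column's collation.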
I have looked at many questions regarding this problem, but I have not found a solution. Hopefully this is not a duplicate question.
Problem
If I do any of:
INSERT INTO `Numbers`(`Number`) VALUES ('NaN')
INSERT INTO `Numbers`(`Number`) VALUES ('Inf')
INSERT INTO `Numbers`(`Number`) VALUES ('+Inf')
I get 0.0 inserted in the table. Sometimes I get:
Error Code: 1265. Data truncated for column 'Number'
I have also tried different casing and spelling, all with the same effect.
I have even tried:
INSERT INTO `Numbers`(`Number`) VALUES ('1111111111111000000000000000000000000000000000000000000000000000')
How do I insert a NaN floating point number into a MySQL table?
If it really isn't possible, then what is the reasoning? (Maybe I am using the incorrect version of MySQL?)
Using NULL as NaN
In the tables where I am actually using this, I don't want to allow NULL values in those columns, so I don't like the idea of replacing NaN with NULL somewhere in the ORM layer.
To get an overall idea of how MySQL manipulates numbers you can read the following chapters:
Numeric Type Overview and Numeric Types, including Out-of-Range and Overflow Handling
Number Literals
Type Conversion in Expression Evaluation
The last article mentions this:
The server includes dtoa, a conversion library that provides the
basis for improved conversion between string or DECIMAL values and
approximate-value (FLOAT/DOUBLE) numbers
[...]
The dtoa library provides conversions with the following properties. D
represents a value with a DECIMAL or string representation, and F
represents a floating-point number in native binary (IEEE) format.
[...]
conversions are lossless unless F is -inf, +inf, or NaN. The latter
values are not supported because the SQL standard defines them as
invalid values for FLOAT or DOUBLE.
In short:
The SQL standard explicitly bans those values
MySQL complies with the standard in that aspect
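That also explains the two different behaviors seen in the question: whether the invalid string is silently coerced to 0 or rejected outright depends on the SQL mode, roughly as in this sketch (the table matches the question; exact warning/error wording varies by version):
CREATE TABLE Numbers (Number DOUBLE NOT NULL);
SET SESSION sql_mode = '';                      -- non-strict mode
INSERT INTO Numbers (Number) VALUES ('NaN');    -- accepted with a warning; 0 is stored
SET SESSION sql_mode = 'STRICT_TRANS_TABLES';   -- strict mode, the default in current versions
INSERT INTO Numbers (Number) VALUES ('NaN');    -- rejected: error 1265, data truncated for column 'Number'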
The ROUND function sometimes doesn't work very well. I have a row like this in my db:
field_1= 375
field_2= 0.65
field_3= 0.1
field_4= 11
So we know that field_1*field_2*field_3*field_4 = 268.125, so if I round it to 2 decimals -> 268.13.
But in MySQL I get 268.12 -> SELECT ROUND(field_1*field_2*field_3*field_4, 2) FROM my_table -> 268.12
This situation only happens with these values; I tried with other numbers and the rounding works with no problem.
Is there any workaround for it? I tried in MySQL 4.1.22 and 5.1.44 and I get the same issue. I read in another forum (http://bugs.mysql.com/bug.php?id=6251) that it is not a bug; they said it depends on the C library implementation.
What data type are you using for those columns?
If you want exact precision then you should use NUMERIC or DECIMAL, not FLOAT, REAL, or DOUBLE.
From the manual:
The DECIMAL and NUMERIC types store exact numeric data values.
These types are used when it is important to preserve exact precision,
for example with monetary data.
If applicable, you can use FLOOR().
If not, you will be better off handling this on the application side, i.e. do the entire calculation on the application side and then round, or just round on the application side.
SELECT (field_1*field_2*field_3*field_4) AS myCalculation FROM my_table
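For illustration, a hedged sketch of the difference (invented table, holding the question's values both as approximate and as exact types):
CREATE TABLE rounding_demo (
  f1 FLOAT, f2 FLOAT, f3 FLOAT, f4 FLOAT,
  d1 DECIMAL(10,4), d2 DECIMAL(10,4), d3 DECIMAL(10,4), d4 DECIMAL(10,4)
);
INSERT INTO rounding_demo VALUES (375, 0.65, 0.1, 11, 375, 0.65, 0.1, 11);
SELECT ROUND(f1*f2*f3*f4, 2),   -- 268.12: the FLOAT product is only approximately 268.125
       ROUND(d1*d2*d3*d4, 2)    -- 268.13: the DECIMAL product is exactly 268.125, rounded half away from zero
FROM rounding_demo;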