In my DB table I have a FLOAT field where I store the cost of a project. If I insert the cost 4559006, it is saved as 4.55901e+006, and when I perform mathematical operations on it, it produces errors.
How could I fix it?
Floating-point types are not suitable for precise calculations.
Use DECIMAL(15,2).
15 is the total number of digits (the precision), including the scale of 2 digits after the decimal point. You can increase the precision up to 65 if needed.
http://dev.mysql.com/doc/refman/5.1/en/fixed-point-types.html
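For example, a minimal sketch, assuming a hypothetical projects table with a FLOAT cost column:

ALTER TABLE projects MODIFY cost DECIMAL(15,2);
-- 4559006 is now stored and returned exactly, and arithmetic on it is exact:
SELECT cost, cost * 2 FROM projects;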
Related
I have a MySQL DB with important financial data. Currently the data is stored as the FLOAT type and I get incorrect data due to float rounding; I want to store it as DECIMAL.
What is the safe way to convert the data in the DB without changing the existing data? Or is there another idea to solve this issue?
EDIT: Is converting from FLOAT to VARCHAR and then from VARCHAR to DECIMAL a safe way?
Thanks in advance!
13815500 is exactly representable in FLOAT. But you are close to what Andrew talks about -- 16777217 is not exactly representable; it will be off by 1 Euro or dollar or whatever.
If you have no decimal places, your choices are
FLOAT, which messes up above 16,777,216.
DECIMAL(9,0), which can handle numbers up to about 1 billion. Caveat: If you need decimal places, say so!
INT which peaks at about 2 billion.
INT UNSIGNED - limit about 4 billion (non-negative values only).
Each of the datatypes above takes 4 bytes. All but the last allow for negative values. FLOAT will keep going, but loses bits at the bottom; the others "overflow".
Other options: DECIMAL(m,0) for bigger numbers (m <= 65), DOUBLE (huge floating range), BIGINT (huge integral range); each takes more space.
The syntax is
ALTER TABLE tablename
MODIFY col_name NEW_DATATYPE [NOT NULL];
(There is no need, and may be harm, in stepping through VARCHAR.)
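For instance, converting a hypothetical FLOAT column amount in a payments table directly, with no VARCHAR step:

ALTER TABLE payments
MODIFY amount DECIMAL(9,0) NOT NULL;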
General rule: use DECIMAL for money because it is "exact"; use FLOAT for measurements (such as sensor readings, distances, etc.).
More
If the max value is 13815500, then DECIMAL(64,56) will hold any of your numbers, and handle up to 56 decimal places. Furthermore, you can do basic arithmetic exactly on those numbers. Caution: If you will be SUMming a thousand such numbers, you need an extra 3 digits before the decimal point: DECIMAL(64,53). For summing a million numbers: DECIMAL(64,50).
If your current data is sitting in a FLOAT column, then you only have about 7 significant digits; the rest was lost as the numbers were stored. Can you recover the lost precision? If so, start over with a suitable DECIMAL. If not, then a numerical analyst will argue that you may as well stick with FLOAT. A SUM will still be good to about 6-7 significant digits. This is good enough for most uses.
You now have virtually all the knowledge of MySQL and numerical analysis; you decide what to do.
There is no safe way. Due to how floats work, 32-bit floats greater than 16777216 (or less than -16777216) need to be even, those greater than 33554432 (or less than -33554432) need to be evenly divisible by 4, those greater than 67108864 (or less than -67108864) need to be evenly divisible by 8, etc.
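A quick way to see this in MySQL (a sketch; the exact display varies by server version):

CREATE TABLE float_demo (f FLOAT);
INSERT INTO float_demo VALUES (16777215), (16777216), (16777217);
SELECT f FROM float_demo;  -- 16777215 and 16777216 come back exactly; 16777217 comes back as 16777216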
The infamous question about datatypes when storing money values in an SQL database.
However, in these trying times we now have currencies whose values carry up to 18 decimal places (thank you, ETH).
This re-raises the classic argument.
IDEAS
Option 1: BIGINT. Store the raw value as a big integer A, plus how many decimal places B the currency has (simply dividing A by 10^B in translation)?
Option 2: DECIMAL(60,30). Store the value in a large decimal, which will inevitably cost a large amount of space.
Option 3: VARCHAR(64). Store it in a string, which would have a performance impact.
I want to know people's thoughts, and what they are using if they are dealing with cryptocurrency values, as I am stumped about the best method for proceeding.
There's a clear best option out of the three you suggested (plus one from the comments).
BIGINT — uses just 8 bytes, but the largest BIGINT only has 19 decimal digits; if you divide by 10^18, the largest value you can represent is about 9.22, which isn't enough range.
DOUBLE — only has 15–17 decimal digits of precision; has all the known drawbacks of floating-point arithmetic.
VARCHAR — will use 20+ bytes if you're dealing with 18 decimal places; will require constant string↔number conversions; can't be sorted or compared numerically; can't be added in the DB; many downsides.
DECIMAL(27,18) – if using MySQL, this will take 12 bytes (4 for each group of 9 digits). This is quite a reasonable storage size, and has enough range to support amounts as large as one billion or as small as one Wei. It can be sorted, compared, added, subtracted, etc. in the database without loss of precision.
I would use DECIMAL(27,18) (or DECIMAL(36,18) if you need to store truly huge values) to store cryptocurrency money values.
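A minimal sketch of that recommendation (the table and column names are hypothetical):

CREATE TABLE wallet_balances (
  wallet_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  balance DECIMAL(27,18) NOT NULL DEFAULT 0  -- up to ~1 billion whole coins, down to 1 Wei (10^-18)
);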
I have one column into which I'll put floating-point values. The column only needs to hold numbers from 0 to 100, with two decimal digits of precision after the decimal point, so it needs to be able to store values from 0.00 to 100.00. What kind of data type should I assign to the MySQL column for this kind of scenario? I see in phpMyAdmin there are FLOAT, SINGLE, DOUBLE, and REAL. I have heard about all of them, but in C and in Delphi. In a database I usually just used DOUBLE to make everything easier, but as this table will keep growing, I want to use the smallest space possible. Which data type should I use for this situation, ideally also with the best performance? Thank you for your help.
You can use an integer-type column for both storage and performance.
Multiply/divide by 100 for presentation, as in the sketch below.
Numbers from 0 to 10000 will fit into a 2-byte SMALLINT, and that's the smallest size possible for your case.
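A minimal sketch of that approach (table and column names are hypothetical):

CREATE TABLE readings (
  value_hundredths SMALLINT UNSIGNED NOT NULL  -- 73.25 is stored as 7325
);
INSERT INTO readings VALUES (7325);
SELECT value_hundredths / 100 AS value FROM readings;  -- returns 73.2500; format as needed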
with floating point up to double decimal digit precision
There is no such thing. Floating-point doesn't have decimal digits.
What you need is DECIMAL(5,2).
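For example, a minimal sketch (the table and column names are hypothetical):

CREATE TABLE scores (
  pct DECIMAL(5,2) NOT NULL  -- 5 digits total, 2 after the decimal point; stores 0.00 through 100.00 exactly
);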
Seems like BIGINT is the biggest integer available on MySQL, right?
What to do when you need to store a BIGINT(80) for example?
Why, in some cases (like somewhere in the Twitter API docs), do they recommend storing these large integers as varchar?
What is the real reason behind choosing one type over another?
Big integers aren't actually limited to 20 digits; they're limited to the numbers that can be expressed in 64 bits (for example, 99,999,999,999,999,999,999 is not a valid big integer despite being 20 digits long).
The reason you have this limitation is that native format integers can be manipulated relatively fast by the underlying hardware whereas textual versions of a number (tend to) need to be processed one digit at a time.
If you want a number larger than the largest 64-bit unsigned integer 18,446,744,073,709,551,615 then you will need to store it as a varchar (or other textual field) and hope that you don't need to do much mathematical manipulation on it.
Alternatively, you can look into floating-point numbers, which have a larger range but less precision, or the DECIMAL type, which can give you 65 digits for an integral value with DECIMAL(65,0) as the column type.
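A quick sketch of the DECIMAL(65,0) option (the table name is hypothetical):

CREATE TABLE big_ids (
  id DECIMAL(65,0) NOT NULL
);
-- 20 digits: too large for BIGINT, but fine for DECIMAL(65,0):
INSERT INTO big_ids VALUES (99999999999999999999);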
You can specify a numeric(65,0), but if you need to get larger, you'll need a varchar.
The reason to select one over another is usage, efficiency, and space. Using an INT is more efficient than a BIGINT or, I believe, a NUMERIC, if you need to do math on it.
You can store such big integers as an arbitrary binary string if you want maximum storage efficiency.
But I'm not sure it's worth it, because then you'll have to deal with over-64-bit integers in your application too, which is also not something you want to do without a strong reason.
Better to keep things simple and use varchar.
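A minimal sketch of the varchar route (the table and sizes are hypothetical, loosely following the BIGINT(80) example above):

CREATE TABLE big_numbers (
  value VARCHAR(80) NOT NULL  -- stored as text; no direct DB-side math, and comparisons/sorting are lexicographic
);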
BIGINT is limited by definition to 8 bytes, i.e. at most about 19 decimal digits. The maximum number of digits in the DECIMAL type is 65. You must use VARCHAR to store values of larger precision, and be aware that there is no direct math on such values.
Is there any performance difference between the DECIMAL(10,0) UNSIGNED type and the INT(10) UNSIGNED type?
It may depend on the version of MySQL you are using. See here.
Prior to MySQL 5.0.3, the DECIMAL type was stored as a string and would typically be slower.
However, since MySQL 5.0.3 the DECIMAL type is stored in a binary format so with the size of your DECIMAL above, there may not be much difference in performance.
The main performance issue would have been the amount of space taken up by the different types (with the larger DECIMAL being slower). With MySQL 5.0.3+ this appears to be less of an issue; however, if you will be performing numeric calculations on the values as part of the query, there may be some performance difference. This may be worth testing, as there is no indication in the documentation that I can see.
Edit:
With regard to the INT(10) UNSIGNED, I took this at face value as just being a 4-byte INT. However, this has a maximum value of 4294967295, which strictly doesn't provide the same range of numbers as a DECIMAL(10,0) UNSIGNED.
As @Unreason pointed out, you would need to use a BIGINT to cover the full range of 10-digit numbers, pushing the size up to 8 bytes.
A common mistake is that when specifying numeric column types in MySQL, people often think the number in the brackets has an impact on the size of the number they can store. It doesn't. The number range is based purely on the column type and whether it is signed or unsigned. The number in the brackets is a display width for results and has no impact on the values stored in the column. It will also have no impact on the display of the results unless you specify the ZEROFILL option on the column as well.
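A small demonstration of that (a sketch; note that display widths and ZEROFILL are deprecated in recent MySQL versions):

CREATE TABLE width_demo (
  a INT(3) ZEROFILL,  -- ZEROFILL pads the display: 7 shows as 007
  b INT(10)           -- same 4-byte storage and range as a plain INT
);
INSERT INTO width_demo VALUES (7, 7);
SELECT a, b FROM width_demo;  -- a: 007, b: 7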
According to the MySQL data storage requirements, your types will need:
DECIMAL(10,0): 4 bytes for 9 digits and 1 byte for the remaining 10th digit, so five bytes in total (assuming my reading of the documentation is correct).
INT(10): to cover the full 10-digit range you will need BIGINT, which is 8 bytes.
The difference is that DECIMAL is packed, and some operations on such a data type might be slower than on normal INT types, which map directly to machine-represented numbers.
Still, do your own tests to confirm the above reasoning, for instance along the lines sketched below.
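A rough self-test sketch (hypothetical tables; load the same rows into both, then time the queries):

CREATE TABLE t_dec (v DECIMAL(10,0) UNSIGNED NOT NULL);
CREATE TABLE t_int (v BIGINT UNSIGNED NOT NULL);
-- after loading identical data into both tables:
SELECT SUM(v) FROM t_dec;
SELECT SUM(v) FROM t_int;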
EDIT:
I noticed that I did not elaborate on the obvious point: assuming the above logic is sound, the difference in size is 60% more space for the BIGINT variant (8 bytes vs. 5).
However, this does not translate directly into a penalty, because data is normally not written byte by byte. For selects/updates of many rows you should see the performance loss/gain, but when selecting/updating a small number of rows the filesystem will fetch blocks from the disk(s), which will normally read/write multiple columns anyway.
The size (and speed) of indexes might be more directly impacted.
However, the question on how the packing influences various operations still remains open.
According to this similar question, yes, potentially there is a big performance hit because of the difference in the way DECIMAL and INT are handled by the CPU when doing calculations.
See: Is there a performance hit using decimal data types (MySQL / Postgres)
I doubt such a difference matters for performance at all.
Most performance issues are tied to proper database design and indexing, with server/hardware tuning as the next level.