I have a Spring/Hibernate/MySQL setup where I need to store BigDecimals in the database. The values can range from 0 to 16 decimal places, i.e. all of the following are valid values:
12
543.34
98765.345678
0.000003344332
etc. So in my Hibernate mappings I did this:
@Column(precision = 32, scale = 16)
Which has allowed me to store my BigDecimals and use math calculations with MathContext.DECIMAL64.
However when I look in the MySQL database I see values like this:
12.0000000000000000
543.3400000000000000
98765.3456780000000000
0.0000033443320000
And when I retrieve them the BigDecimals all have a scale of 16 and are filled out with trailing zeros when I serialise them out to JSON using Spring's Jackson Object Mapper.
What I would like is to get the values back from the database at the correct scale, i.e.
12 -> scale 0
543.34 -> scale 2
98765.345678 -> scale 6
0.000003344332 -> scale 12
etc. Any ideas how this can be done?
I have managed to work around the issue by correcting the scale when the data is read back into Java. There is no MySQL-side fix for the handling of scaled values.
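One way to do that scale correction on the Java side is BigDecimal.stripTrailingZeros(). A minimal sketch (the normalize helper and class name are mine, not from the original code); note the gotcha that stripping can produce a negative scale for values like 120.00:

```java
import java.math.BigDecimal;

public class ScaleFix {
    // Hypothetical helper: remove the zero padding added by DECIMAL(32,16).
    // stripTrailingZeros() on 120.0000000000000000 yields 1.2E+2 (scale -1),
    // so negative scales are normalised back to 0.
    static BigDecimal normalize(BigDecimal value) {
        BigDecimal stripped = value.stripTrailingZeros();
        return stripped.scale() < 0 ? stripped.setScale(0) : stripped;
    }

    public static void main(String[] args) {
        System.out.println(normalize(new BigDecimal("12.0000000000000000")));  // 12
        System.out.println(normalize(new BigDecimal("543.3400000000000000"))); // 543.34
        System.out.println(normalize(new BigDecimal("0.0000033443320000")));   // 0.000003344332
    }
}
```

Calling this in the entity getter (or in a custom Jackson serializer) gives each value back at its natural scale before it reaches the JSON output.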
Related
I am trying to store the number 0.0015 in the database. I have tried float, integer but I am getting zero not the exact figure I have entered. Is there a datatype which can store such a value?
Normally you'd use DECIMAL (aka NUMERIC) with a specified precision and scale; here are the docs for it. FLOAT should also work, but you need to be aware of floating-point arithmetic quirks, so DECIMAL is preferred.
Here's a dbfiddle example
If you see your data as 0 then it's either an issue with how you're inserting (maybe importing from a file) or your client rounds it down to 0 and you need to tweak it. As you can see from the dbfiddle above, it works perfectly fine.
This number (0.0015) is not exactly representable in binary floating point. See the following example in Python:
Python 3.10.2 (tags/v3.10.2:a58ebcc, Jan 17 2022, 14:12:15) [MSC v.1929 64 bit (AMD64)] on win32
>>> x = 0.0015
>>> (x + 1) - 1
0.0015000000000000568
This means that storing it in MySQL (or in any other language that converts the number to binary) will exhibit representation errors. You can use numeric types that don't do any conversion to binary, like DECIMAL or NUMERIC.
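The same experiment in Java (mirroring the Python session above; the class name is just for illustration) shows the binary approximation, and how a decimal type avoids it:

```java
import java.math.BigDecimal;

public class ExactDecimal {
    public static void main(String[] args) {
        // Binary floating point: 0.0015 is only approximated.
        double d = 0.0015;
        System.out.println((d + 1) - 1); // not exactly 0.0015

        // DECIMAL-style exact arithmetic:
        BigDecimal b = new BigDecimal("0.0015");
        System.out.println(b.add(BigDecimal.ONE).subtract(BigDecimal.ONE)); // 0.0015
    }
}
```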
I have a timesheet application where people put eg. 3.5 hours worked in a day.
Sometimes someone might put 1.25 I guess.
I stored it as float but am now having issues when I retrieve the data... Should I have used decimal with 2 or 3 decimal places?
There are a few ways to tackle this.
If you want to have uniformity in the entries, I'd suggest using a "Fixed-Point Type" value (see more info here). This would mean that 3.5 hours should always be saved as 3.50.
The other option is to convert all your values to integer values (see more info here), and in your application layer, convert the values back to "readable" format. For example, you would store 1.25 hours as 125, and in your application layer, divide by 100, therefore getting 1.25. This method is more useful for currency management/calculation applications where a specific level of precision is necessary.
Floats have a floating decimal point, so there is no guarantee of uniform precision across the stored values.
For your purposes, I'd recommend fixed-point type unless there's complexity that you didn't mention.
Hope that helps.
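The integer-conversion option described above can be sketched like this (in Java; the helper names are made up for illustration): hours are stored as whole hundredths in an integer column and converted back in the application layer.

```java
public class Hours {
    // Store hours as integer hundredths: 1.25 h -> 125, 3.5 h -> 350.
    static int toStored(double hours) {
        return (int) Math.round(hours * 100);
    }

    // Convert the stored integer back to a readable value.
    static double fromStored(int stored) {
        return stored / 100.0;
    }

    public static void main(String[] args) {
        int stored = toStored(1.25);            // 125 goes into an INT column
        System.out.println(stored);             // 125
        System.out.println(fromStored(stored)); // 1.25
    }
}
```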
I am currently importing a data set that includes currency values into an Access database. Although I've read that I should be using the decimal data type for currency, can I not use double to cut down on the file size if the values are rounded to the nearest dollar?
As far as I can tell, the issue with using double for currency is due to rounding, but I won't be doing calculations on the data directly. Any calculations will be done by the application/user.
Similarly, as the data is fixed-length, some of the decimal values are represented by whole numbers. For example, some fields may contain a value of 12345, but the actual value is 12.345. This requires that I import the data and then update the values; dividing by 1000 in the example above.
Will using double in this fashion cause rounding errors as well?
Yes, divisions can and will introduce rounding errors.
You want to use "currency" for ANY kind of business software. In fact if you don't have a currency data type then you use SCALED integers (you scale the results). You thus store
$123.52
As
1235200
(Assuming 4 decimal places)
The reason of course is "real" numbers in computers are only a representation and are only approximate – they are subject to rounding.
This is SIMPLE code:
Public Sub TestAdd()
    Dim MyNumber As Single
    Dim i As Integer
    For i = 1 To 10
        MyNumber = MyNumber + 1.01
        Debug.Print MyNumber
    Next i
End Sub
Here is the actual output of the above:
1.01
2.02
3.03
4.04
5.05
6.06
7.070001
8.080001
9.090001
10.1
Imagine the above – after just 7 SIMPLE and SILLY additions we are already getting WRONG answers. And VBA, even in Excel, will do the SAME. So as noted, we are adding 1.01, but the 0.01 part is only approximate (while we assume this value is 1/100th, it is only approximate in the "real" format computers use).
So computers cannot and do NOT store real numbers to an exact precision. You get rounding errors as a result.
For payroll or anything to do with business applications and money you have to use scaled integers, or else your tables, accounting, and even reports will NOT add up and you will experience rounding errors.
I cannot think of any benefit in terms of storage space unless you are storing many millions of rows of data. MUCH worse is if you export this data to some other system: exporting "real" numbers can introduce all kinds of artifacts and even exponents. Use currency – you will be safe in what you see and have.
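The same drift shows up in any language that uses single-precision floats. Here is the VBA loop above redone (in Java, purely for illustration) next to the scaled-integer approach the answer recommends:

```java
public class ScaledAdd {
    public static void main(String[] args) {
        // Single-precision accumulation, like the VBA Single above:
        float f = 0f;
        for (int i = 1; i <= 10; i++) {
            f += 1.01f;
        }
        System.out.println(f); // drifts away from 10.10

        // Scaled integers: 1.01 stored as 101 hundredths, addition is exact.
        long hundredths = 0;
        for (int i = 1; i <= 10; i++) {
            hundredths += 101;
        }
        System.out.println(hundredths / 100.0); // 10.1
    }
}
```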
I have a project using a 240-bit octal data format that will be coming in on the serial port of an Arduino Uno at 2.4K RS232, converted to TTL.
The 240 bits along with other things has range, azimuth and elevation words, which is what I need to display.
The frame starts with a frame sync code, which is an alternating binary 7-bit code:
1110010 for frame 1 and
0001101 for frame 2 and so on.
I was thinking that I might use something like a val = Serial.read() command with pseudocode like
if (val = 1110010 or 0001101) { data++val; }
that will let me validate the start of my string.
The rest of the 240 bit octal frame (all numbers) can be serial read to a string of which only parts will be needed to be printed to the screen.
Past the frame sync, all octal data is serial with no Nulls or delimiters so I am thinking
printf("%.Xs",stringname[xx]);
will let me off set the characters as needed so they can be parsed out.
How do I tell the program that the frame sync it's looking for is binary, or that the data that needs to go into the string is octal, or that it may need to be converted to be read on the screen?
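This is not Arduino code, but the conversions the question asks about can be sketched (here in Java, for illustration): a sync pattern is just a numeric value, so incoming bits are compared numerically rather than against the decimal literal 1110010, and octal digits arriving as characters are parsed with a radix of 8.

```java
public class FrameSync {
    public static void main(String[] args) {
        // The 7-bit sync patterns as numeric values (radix 2), not decimal literals:
        int sync1 = Integer.parseInt("1110010", 2); // 114
        int sync2 = Integer.parseInt("0001101", 2); // 13

        // An incoming value is compared against the numeric patterns:
        int val = 0b1110010;
        if (val == sync1 || val == sync2) {
            System.out.println("frame sync found");
        }

        // Octal digits received as text are parsed with radix 8
        // before being formatted for display:
        System.out.println(Integer.parseInt("755", 8)); // 493
    }
}
```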
The ROUND function sometimes doesn't work very well. I have in my db a row like this:
field_1= 375
field_2= 0.65
field_3= 0.1
field_4= 11
So we know that field_1*field_2*field_3*field_4 = 268.125, so rounding it to 2 decimals should give 268.13.
But in MySQL I get 268.12: SELECT ROUND(field_1*field_2*field_3*field_4, 2) FROM my table -> 268.12
This situation just happens with these values, I tried with other numbers and no problem the round works.
Is there any workaround for this? I tried MySQL 4.1.22 and 5.1.44 and I get the same issue. I read in another forum (http://bugs.mysql.com/bug.php?id=6251) that it is not a bug; they said that it depends on the C library implementation.
What data type are you using for those columns?
If you want exact precision then you should use NUMERIC or DECIMAL, not FLOAT, REAL, or DOUBLE.
From the manual:
The DECIMAL and NUMERIC types store exact numeric data values.
These types are used when it is important to preserve exact precision,
for example with monetary data.
If applicable you can use FLOOR()
If not applicable, you will be better off handling this on the application side, i.e. do the entire calculation application-side and then round, or just round application-side:
SELECT (field_1*field_2*field_3*field_4) AS myCalculation FROM my table
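To see the difference on the application side (a Java sketch using the question's values): exact decimal arithmetic gives the expected 268.13, while computing through binary floating point, as FLOAT columns do, can land just under 268.125 and round down.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactRound {
    public static void main(String[] args) {
        // Approximate: roughly what FLOAT columns compute with.
        float approx = 375f * 0.65f * 0.1f * 11f;
        System.out.println(approx); // not exactly 268.125

        // Exact: DECIMAL-style arithmetic, then round.
        BigDecimal exact = new BigDecimal("375")
                .multiply(new BigDecimal("0.65"))
                .multiply(new BigDecimal("0.1"))
                .multiply(new BigDecimal("11"));
        System.out.println(exact);                                   // 268.125
        System.out.println(exact.setScale(2, RoundingMode.HALF_UP)); // 268.13
    }
}
```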