Best way to handle fixed-point / precision decimals in Ruby on Rails with MySQL

So I am creating a time tracking application using Ruby on Rails and am storing time as a number representing hours.
Since anything beyond 0.01 hours (36 seconds) is irrelevant, I only need 2 decimal places.
I am using a MySQL database with a float as the column type. While this works most of the time, every now and then I get an error with the calculation and rounding of floats.
I have done some research into my options and see that a lot of people recommend using BigDecimal. Since I use a lot of custom database queries with calculations, I wanted to know how changing the column type would affect them. Does Rails store this as a string or YAML, or is it natively supported by MySQL?
Or is there an equivalent way to do fixed-point decimal arithmetic in Ruby/Rails?
I assume any approach is going to require a lot of refactoring; how can I keep that to a minimum?
Any insight is appreciated.

Instead of storing the time as a number representing hours, store it as a number representing increments of 36 seconds (or maybe individual seconds).
You shouldn't need a decimal type to do fixed-point arithmetic: simply divide in the business logic to get hours.
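If you go this route, the conversion can live in the model. A minimal plain-Ruby sketch (TimeEntry and its seconds column are hypothetical stand-ins for your schema):

```ruby
# Store durations as integer seconds (an INT column in MySQL) and derive
# hours in application code. TimeEntry is a hypothetical stand-in for an
# ActiveRecord model with an integer `seconds` attribute.
class TimeEntry
  attr_accessor :seconds

  def initialize(seconds)
    @seconds = seconds
  end

  # Hours rounded to two decimal places, computed with Rational to avoid
  # binary-float rounding surprises.
  def hours
    Rational(seconds, 3600).round(2).to_f
  end
end

entry = TimeEntry.new(5430)  # 1 hour, 30 minutes, 30 seconds
entry.hours                  # => 1.51
```

Because the stored value is a plain integer, SUM() and other aggregate queries in MySQL stay exact as well.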

MySQL has native fixed-point support through its DECIMAL type, which Rails maps to BigDecimal: http://dev.mysql.com/doc/refman/5.1/en/precision-math-decimal-changes.html
I would suggest using that; it has worked well in my Rails applications. Letting the database handle precision instead of the application makes life easier: you're using the abstractions the way they're designed.
Here's the migration code:
change_column :table_name, :column_name, :decimal, precision: 10, scale: 2
(Without an explicit precision and scale, MySQL defaults to DECIMAL(10,0) and drops the fractional digits.)
Reference: Rails migration for change column
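On the Ruby side, a DECIMAL column comes back as a BigDecimal, whose decimal arithmetic is exact where binary floats drift. A quick plain-Ruby illustration:

```ruby
require "bigdecimal"
require "bigdecimal/util"  # adds String#to_d

# Binary floats accumulate representation error:
0.1 + 0.2                  # => 0.30000000000000004

# BigDecimal (what Rails hands you for a DECIMAL column) stays exact:
sum = "0.1".to_d + "0.2".to_d
sum == "0.3".to_d          # => true
sum.to_s("F")              # => "0.3"
```

This is why the occasional rounding errors from the float column disappear once the column is DECIMAL: both the database and Ruby then do decimal, not binary, arithmetic.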

We have actually built a time tracking app (http://www.yanomo.com) and store all our times as the number of hours they represent, with MySQL as the underlying DBMS. For the column type we use DECIMAL(precision,scale); in your case something like DECIMAL(5,2) would do. In our business logic (Java) we use BigDecimal.

Related

How to store prices in Couchbase without losing precision?

I have a Couchbase database and I would like to store prices without losing precision; double is really not good enough for my application. However, it seems that there is no support for currency data types in Couchbase.
Is there a preferred solution for this problem for this database engine?
I was thinking about storing each price twice, once as a string and once as a double, so that I can still run inequality queries on the price. It's better than nothing, but not really a nice solution.
This is really a problem with JSON, but since Couchbase uses pure JSON, it applies :)
One solution that I've seen is to store it as an integer.
For example, to store a price of $129.99, you would store the number 12999. This can be a little annoying, but depending on what language/framework you're using, it may be relatively easy to customize your (de)serializer to handle this automatically. Or you could add a calculated property to your class (assuming you're using OOP), or use AOP.
But in any case, your precision is stored. Your string solution would also work, with similar caveats.
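A minimal sketch of the integer-cents approach in Ruby (Price and the price_cents field are hypothetical names, not a Couchbase API):

```ruby
require "json"

# Keep prices as integer cents inside the JSON document and convert at
# the application boundary. Price is a hypothetical helper class.
class Price
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  # Parse a display string like "129.99" into integer cents.
  def self.from_dollars(str)
    dollars, fraction = str.split(".")
    new(dollars.to_i * 100 + fraction.to_s.ljust(2, "0").to_i)
  end

  def to_s
    format("%d.%02d", cents / 100, cents % 100)
  end
end

doc  = { "sku" => "A-1", "price_cents" => Price.from_dollars("129.99").cents }
json = JSON.generate(doc)  # this is what would be stored in the document store
Price.new(JSON.parse(json)["price_cents"]).to_s  # => "129.99"
```

Since price_cents is a plain JSON integer, range queries on it behave correctly, unlike lexicographic comparisons on a string field.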

What data type should I use for a 'duration' attribute?

I'm using phpmyadmin/MySQL to make a database.
It's for a plane/bus/train booking system.
I have a 'depart_time' attribute which uses the time data type. In the same table I have a 'duration' attribute. Later on I will need to do multiplication on this duration attribute (depending on whether it is a train/bus/plane).
My question is - what would be the best data type for this duration attribute?
I thought about using a decimal type, but then the values in it won't represent the time exactly (e.g. 1.30 won't represent an hour and a half; it would need to be 1.50, if that makes sense).
I also thought about using the time data type for this field as well, but I wasn't sure if multiplication would be possible on that?
I couldn't find any help after googling about multiplication on the time data type.
Hopefully this made sense; if you need any more information, feel free to ask in the comments!
Thanks in advance!
Use an int and record durations in the smallest unit you're interested in. For example, if you need minute accuracy, store one and a half hours as 90 minutes. Formatting that value for display purposes is presentation logic, not the business of the database.
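A quick sketch of that split in Ruby (the per-vehicle multiplier is a made-up example value):

```ruby
# Duration stored as integer minutes; multiplication stays exact integer
# arithmetic, and formatting happens only in the presentation layer.
def format_minutes(total)
  format("%d:%02d", total / 60, total % 60)
end

duration_minutes = 90   # stored value for one and a half hours
plane_factor     = 3    # hypothetical per-vehicle multiplier

format_minutes(duration_minutes)                 # => "1:30"
format_minutes(duration_minutes * plane_factor)  # => "4:30"
```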
If I were in that position I would probably store it one of two ways:
In seconds. It's unlikely that you need more precision than that.
In a string such as P1D for 1 day, or P1W2DT3H for 1 week, 2 days, 3 hours. This is a standard format (ISO 8601 durations) used by many libraries, and it deals better with situations where something really takes 1 day, but it's a day with a leap hour.
For most cases, just using seconds will be fine though.
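A minimal hand-rolled parser for the subset of ISO 8601 durations shown above (weeks, days, hours, minutes, seconds); in a Rails app, ActiveSupport::Duration.parse can do this for you:

```ruby
# Matches strings like "P1D", "P1W2DT3H", "PT90M". Calendar-aware
# handling of leap hours/days is deliberately out of scope here.
ISO_DURATION = /\AP(?:(\d+)W)?(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?\z/

def duration_to_seconds(str)
  w, d, h, m, s = str.match(ISO_DURATION)&.captures&.map(&:to_i)
  raise ArgumentError, "bad duration: #{str}" unless w
  ((w * 7 + d) * 24 + h) * 3600 + m * 60 + s
end

duration_to_seconds("P1D")       # => 86400
duration_to_seconds("P1W2DT3H")  # => 788400
```

This converts the standard string form into the plain seconds representation, so the two storage options above are easy to move between.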
I would represent it in the database as seconds or minutes (the minimum precision you want). Showing it to the user should be done dynamically in the frontend, e.g. in minutes (1 min, 30 min, 180 min), hours (0.1 h, 1 h, 3 h), days (0.5 d, 1 d), or a minimal packed form (1d 5h 42min).
You should keep storage and presentation separate, so I suggest using seconds.
I've solved how I'm going to do this.
Instead of doing it within the database, I am going to do the multiplication in Python.
I took the information from the table, converted the data into int/datetime.timedelta, and the multiplication worked.
I then just return that data depending on whether it's bus/train/plane.
For a multiplier not involving money, simply use FLOAT.
Then work in seconds (or minutes if you prefer); that can be an INT UNSIGNED.
Use appropriate DATETIME functions to convert seconds to hh:mm or whatever output you desire. Note: The internal format need not be the same as the display format.
A duration could be represented in an open standard manner using ISO 8601 duration format.
See https://www.digi.com/resources/documentation/digidocs/90001437-13/reference/r_iso_8601_duration_format.htm

Difference UnixNano and MySQL usage

http://golang.org/pkg/time/
I am building an ISO- and RFC-compliant core for my new Go system. I am using MySQL and am currently figuring out the most optimal setup for the most important base tables.
I am trying to figure out how to store date-times in the database. I want a good balance between the space the stored time will occupy in the database, the query capabilities, and compatibility with UTC plus easy timezone conversion that doesn't cause annoying conflicts when inserting data into and retrieving it from Go/MySQL.
I know this sounds a bit weird given the title of my question. But I see a lot of wrappers, ORMs and such still storing UNIX timestamps (microseconds?). I think it would be good to always store UTC nano timestamps and just accept losing the date/time querying functionality. I don't want to get into problems when running the system with tons of different countries/languages/timezones/currencies/translations/etc. (internationalization and localization). I have encountered these problems before with some systems at work, and it drove me nuts to the point where tons of fixes eventually had to be applied through the whole codebase to get at least some of the conversions back in order. I don't want this to happen in my system. If it means I always have to do some extra coding to keep all stored times in correct UTC+0, I will take that for granted. Based on ISO 8601 and the timezone offsets and daylight-saving rules, I will determine the output of the date/time.
The story above is opinion based, but my actual question is simply which is more efficient: Go's timestamp stored as an INT vs MySQL TIMESTAMP or DATETIME;
1.) What is most optimal considering storage?
2.) What is most optimal considering timezone conventions?
3.) What is most optimal considering speed and MySQL querying?
The answer to all these questions is simply to store the timestamp in UTC with t.UTC().UnixNano(). Keep in mind that the result is an int64, so it will always be 8 bytes in the database (a BIGINT) regardless of precision.
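The same representation is easy to produce and round-trip outside Go as well; here is a sketch in Ruby (the fixed timestamp is an arbitrary example value):

```ruby
# A UTC instant as a single nanosecond integer: the analogue of Go's
# t.UTC().UnixNano(). An int64 fits a MySQL BIGINT (8 bytes).
t = Time.at(1_700_000_000, 123_456_789, :nsec).utc

nanos = t.to_i * 1_000_000_000 + t.nsec  # => 1700000000123456789

# Round-trip back to a Time for formatting or timezone conversion:
restored = Time.at(nanos / 1_000_000_000, nanos % 1_000_000_000, :nsec).utc
restored == t  # => true
```

The integer sorts and indexes exactly like any other BIGINT; what you give up, as the question anticipates, is the ability to use SQL date/time functions directly on the column.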

How to store a very big number with hundreds of digits in a MySQL database?

I need to store very large numbers with hundreds or thousands of digits (like 999^999) in a database. How can I do this in MySQL? What data type should I use?
Also, I will need to subtract, compare, and add the numbers. Is it possible to do so quickly in MySQL?
I know this isn't a great answer, but Postgres has bindings to GNU MP, the multiple-precision library. If you aren't too far into your project you might consider switching databases. The MySQL docs make it clear that integer support tops out at BIGINT, an 8-byte type with about 18 digits of accuracy.
The site for it is here: http://pgmp.projects.pgfoundry.org/mpz.html
I can't find any sign that anyone has used external bindings in MySQL to do something similar. To do this in MySQL you would either have to implement external bindings to a library like GMP, or store the data in MySQL as a string or as one of the BLOB/MEDIUMBLOB types and then implement all your sorting and arithmetic in application code. That is a huge task.
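The application-side half of that pattern is straightforward in languages with arbitrary-precision integers; a Ruby sketch (the string would live in a TEXT/VARCHAR column):

```ruby
# Ruby Integers have arbitrary precision, so the arithmetic itself is
# exact; the database only sees a decimal string.
big    = 999**999        # thousands of digits, computed exactly
stored = big.to_s        # what would go into the TEXT column

loaded = Integer(stored) # back from the database
sum    = loaded + 1      # add/subtract/compare work on the Integer
loaded > 10**2996        # => true
```

The caveat the answer alludes to: SQL-side comparison on the string column is lexicographic, so ORDER BY or range predicates in MySQL would need zero-padded fixed-width strings (or must be done in application code).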

How to work around unsupported unsigned integer field types in MS SQL?

Trying to make a MySQL-based application support MS SQL, I ran into the following issue:
I keep MySQL's auto_increment as unsigned integer fields (of various sizes) in order to make use of the full range, as I know there will never be negative values. MS SQL does not support the unsigned attribute on all integer types, so I have to choose between ditching half the value range or creating some workaround.
One very naive approach would be to put some code in the database abstraction layer or in a stored procedure that converts between negative values on the DB side and values from the upper portion of the unsigned range. This would mess up sorting, of course, and it would not work with the auto-increment feature (or would it, somehow?).
I can't think of a good workaround right now, is there any? Or am I just being fanatic and should simply forget about half the range?
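For concreteness, a sketch of the naive conversion described above, reinterpreting the upper half of the unsigned 32-bit range as negative values (constant and method names are illustrative):

```ruby
# Map an unsigned 32-bit value into a signed 32-bit MS SQL INT by
# reinterpreting the upper half of the range as negatives. Values below
# 2**31 map to themselves, so existing auto-increment ids are untouched,
# but ORDER BY on the column no longer matches the unsigned order,
# which is the sorting caveat mentioned above.
UNSIGNED_RANGE = 2**32
SIGN_BOUNDARY  = 2**31

def unsigned_to_signed(u)
  u >= SIGN_BOUNDARY ? u - UNSIGNED_RANGE : u
end

def signed_to_unsigned(s)
  s.negative? ? s + UNSIGNED_RANGE : s
end

unsigned_to_signed(4_000_000_000)  # => -294967296
signed_to_unsigned(-294_967_296)   # => 4000000000
```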
Edit:
@Mike Woodhouse: Yeah, I guess you're right. There's still a voice in my head saying that maybe I could reduce the field's size if I optimize its utilization. But if there's no easy way to do this, it's probably not worth worrying about.
When is the problem likely to become a real issue?
Given current growth rates, how soon do you expect signed integer overflow to happen in the MS SQL version?
Be pessimistic.
How long do you expect the application to live?
Do you still think the factor of 2 difference is something you should worry about?
(I have no idea what the answers are, but I think we should be sure that we really have a problem before searching any harder for a solution)
I would recommend using the BIGINT data type, as it goes up to 9,223,372,036,854,775,807.
SQL Server does not distinguish between signed and unsigned values; all its integer types are signed.
I would say this: "How do we normally deal with differences between components?"
Encapsulate what varies.
You need to create an abstraction layer within your data access layer to get it to the point where it doesn't care whether the database is MySQL or MS SQL.