Is there a library that can convert ounces into cups? - units-of-measurement

Ideally I could specify something like 10 as my input (in ounces) and get back a string like this: "1 & 1/4 cups". Is there a library that can do something like this? (note: I am totally fine with the rounding implicit in something like this).
Note: I would prefer a C library, but I am OK with solutions for nearly any language as I can probably find appropriate bindings.

It is really two things: 1) the data encompassing the conversion, 2) the presentation of the conversion.
The second is user choice: if you want fractions, you need to write or get a fractions library. There are many.
The first is fairly easy: the vast majority of conversions are just a multiplicative factor. Usually you organize the known factors as conversions into the appropriate SI unit for each category of measurement (volume, length, area, density, etc.)
Your data then looks something like this:
A acres 4.046870000000000E+03 6
A ares 1.000000000000000E+02 15
A barns 1.000000000000000E-28 15
A centiares 1.000000000000000E+00 15
A darcys 9.869230000000000E-13 6
A doors 9.290340000000000E+24 6
A ferrados 7.168458781362010E-01 6
A hectares 1.000000000000000E+04 15
A labors 7.168625518000000E+05 6
A Rhode Island 3.144260000000000E+09 4
A sections 2.590000000000000E+06 6
A sheds 1.000000000000000E-48 15
A square centimeters 1.000000000000000E-04 15
A square chains (Gunter's or surveyor's) 4.046860000000000E+02 6
A square chains (Ramsden's) 9.290304000000000E+02 5
A square feet 9.290340000000000E-02 6
A square inches 6.451600000000000E-04 15
A square kilometers 1.000000000000000E+06 15
A square links (Gunter's or surveyor's) 4.046900000000000E-02 5
A square meters (SI) 1.000000000000000E+00 15
A square miles (statute) 2.590000000000000E+06 7
A square millimeter 1.000000000000000E-06 15
A square mils 6.451610000000000E-10 5
A square perches 2.529300000000000E+01 5
A square poles 2.529300000000000E+01 5
A square rods 2.529300000000000E+01 5
A square yards 8.361270000000000E-01 6
A townships 9.324009324009320E+07 5
In each case, these are area conversions into the SI unit for area -- square meters. Then make a second conversion from square meters into the desired unit. The third number in each row is the number of significant digits.
Keep a file of these for the desired factors and then you can convert from any area unit to any other area unit that you have data on. Repeat for the other categories of conversion (volume, power, length, weight, and so on).
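For instance, here is a minimal Python sketch of both pieces -- a factor table keyed on the SI unit, plus fraction-based presentation. The table entries and helper names are my own illustration; double-check the factors against an authoritative source before relying on them:
from fractions import Fraction

# Conversion factors into the SI unit for volume (cubic meters).
VOLUME_TO_M3 = {
    "ounces (US fluid)": 2.95735295625e-05,
    "cups (US)":         2.365882365e-04,
    "liters":            1.0e-03,
}

def convert(value, src, dst, table=VOLUME_TO_M3):
    # Convert via the SI base unit: src -> m^3 -> dst.
    return value * table[src] / table[dst]

def as_mixed_fraction(x, max_denominator=8):
    # Present a float as a string like "1 & 1/4", rounding to nearby eighths.
    frac = Fraction(x).limit_denominator(max_denominator)
    whole, rem = divmod(frac.numerator, frac.denominator)
    if rem == 0:
        return str(whole)
    if whole == 0:
        return "%d/%d" % (rem, frac.denominator)
    return "%d & %d/%d" % (whole, rem, frac.denominator)

cups = convert(10, "ounces (US fluid)", "cups (US)")
print(as_mixed_fraction(cups) + " cups")   # -> 1 & 1/4 cups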

My thought was to use Google Calculator for this task, if you want generic conversions...
Example: http://www.google.com/ig/calculator?q=10%20ounces%20to%20cups -- returns JSON, but I believe you can specify format.
Here's a Java example for currency conversion:
http://blog.caplin.com/2011/01/06/simple-currency-conversion-using-google-calculator-and-java/

Well, for a quick and dirty solution you could always have it run GNU Units as an external program. If your software is GPL compatible you can even rip off the code from Units and use it in your program.
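For example, a small Python sketch that shells out to GNU Units (assuming the units binary is on your PATH; -t is its terse-output flag, and the unit names below follow its default definitions file -- verify them against your local copy):
import subprocess

def units_convert(have, want):
    # Ask the GNU units CLI for a conversion, terse output (just the number).
    out = subprocess.run(["units", "-t", have, want],
                         capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

print(units_convert("10 floz", "cup"))   # -> 1.25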

Please check out JSR 363, the Units of Measurement Standard for Java: http://unitsofmeasurement.github.io/
At least in C++ you get basic support via "value types" already, but you still have to implement those conversions yourself or find a suitable library similar to what JSR 363 offers for Java.

Find the Relationship Between Two Logarithmic Equations

No idea if I am asking this question in the right place, but here goes...
I have a set of equations that were calculated based on numbers ranging from 4 to 8. So an equation for when this number is 5, one for when it is 6, one for when it is 7, etc. These equations were determined from graphing a best fit line to data points in a Google Sheet graph. Here is an example of a graph...
When the number is between 6 and 6.9, this equation is used: windGust6to7 = -29.2 + (17.7 * log(windSpeed))
When the number is between 7 and 7.9, this equation is used: windGust7to8 = -70.0 + (30.8 * log(windSpeed))
I am using these equations to create an image in Python, but the image is too choppy since each equation covers a range from x to x.9. In order to smooth this image out and make it more accurate, I really need an equation for every 0.1 change in the number: an equation for 6, a different one for 6.1, one for 6.2, etc.
[Example output image created using the current equations]
So my question is: Is there a way to find the relationship between the two example equations I gave above in order to use that to create a smoother looking image?
This is not about logarithms; for the purposes of this derivation, log(windspeed) is a constant term. Rather, you're trying to find a fit for your mapping:
6 (-29.2, 17.7)
7 (-70.0, 30.8)
...
... and all of the other numbers you have already. You need to determine two basic search parameters:
(1) Where in each range is your function an exact fit? For instance, for the first one, is it exactly correct at 6.0, 6.5, 7.0, or elsewhere? Change the left-hand column to reflect that point.
(2) What sort of fit do you want? You are basically fitting a pair of parameterized equations, one for each coefficient:
   intercept          slope
   x      y           x      y
   6    -29.2         6    17.7
   7    -70.0         7    30.8
For each of these, you want to find the coefficients of a good matching function. This is a large field of statistical and algebraic study. Since you have four ranges, you will have four points for each function. It is straightforward to fit a cubic equation to each set of points in Cartesian space. However, the resulting function may not be as smooth as you like; in such a case, you may well find that a 4th- or 5th- degree function fits better, or perhaps something exponential, depending on the actual distribution of your points.
You need to work with your own problem objectives and do a little more research into function fitting. Once you determine the desired characteristics, look into scikit for fitting functions to do the heavy computational work for you.
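As a concrete sketch of that last step, here is one way it might look with NumPy (np.polyfit/np.polyval; the polynomial degree and the base of the logarithm are assumptions you should match to your own data):
import numpy as np

# Known (range-number, intercept, slope) triples; only the two from the
# question are shown -- you would list all of the equations you have.
n         = np.array([6.0,   7.0])
intercept = np.array([-29.2, -70.0])
slope     = np.array([17.7,  30.8])

# deg=1 because only two points are shown; with four or five points a
# cubic (deg=3) can pass through all of them exactly.
b_fit = np.polyfit(n, intercept, deg=1)
m_fit = np.polyfit(n, slope, deg=1)

def wind_gust(number, wind_speed):
    # Smoothly interpolated equation for any in-between number, e.g. 6.3.
    b = np.polyval(b_fit, number)
    m = np.polyval(m_fit, number)
    return b + m * np.log(wind_speed)   # swap in np.log10 if your fits used base 10

print(wind_gust(6.5, 20.0))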

pyephem, libnova, stellarium, JPL Horizons disagree on moon RA/DEC?

MINOR EDIT: I say below that JPL's Horizons library is not open source. Actually, it is, and it's available here: http://naif.jpl.nasa.gov/naif/tutorials.html
At 2013-01-01 00:00:00 UTC, at 0 degrees north latitude, 0 degrees east longitude, sea level elevation, what is the J2000 epoch right ascension and declination of the moon?
Sadly, different libraries give slightly different answers. Converted to degrees, the summarized results (RA first):
Stellarium: 141.9408333000, 9.8899166666 [precision: .0004166640, .0000277777]
Pyephem: 142.1278749990, 9.8274722221 [precision .0000416655, .0000277777]
Libnova: 141.320712606865, 9.76909442356909 [precision unknown]
Horizons: 141.9455833320, 9.8878888888 [precision: .0000416655, .0000277777]
My question: why? Notes:
I realize these differences are small, but I use pyephem and libnova to calculate sun/moon rise/set times, and these times can be very sensitive to position at higher latitudes (e.g., midnight sun).
I can understand JPL's Horizons library not being open source, but the other three are. Shouldn't someone work out the differences in these libraries and merge them? This is my main complaint. Do the stellarium/pyephem/libnova library authors have a fundamental difference in how to make these calculations, or do they just need to merge their code?
I also realize there might be other reasons the calculations are different, and would appreciate any help in rectifying these possible errors:
Pyephem and Libnova may be using the epoch of the date instead of J2000.
The moon is close enough that observer location can affect its RA/DEC (the parallax effect).
I'm using Perl's Astro::Nova and Python's pyephem, not the original C implementations of these libraries. However, if these differences are caused by using Perl/Python, that is important in my opinion.
My code (w/ raw results):
First, Perl and Astro::Nova:
#!/bin/perl
# RA/DEC of moon at 0N 0E at 0000 UTC 01 Jan 2013
use Astro::Nova;
# 1356998400 == 01 Jan 2013 0000 UTC
$jd = Astro::Nova::get_julian_from_timet(1356998400);
$coords = Astro::Nova::get_lunar_equ_coords($jd);
print join(",",($coords->get_ra(), $coords->get_dec())),"\n";
RESULT: 141.320712606865,9.76909442356909
Second, Python and pyephem:
#!/usr/local/bin/python
# RA/DEC of moon at 0N 0E at 0000 UTC 01 Jan 2013
import ephem
e = ephem.Observer()
e.date = '2013/01/01 00:00:00'
moon = ephem.Moon()
moon.compute(e)
print moon.ra, moon.dec
RESULT: 9:28:30.69 9:49:38.9
The Stellarium result (snapshot) and the JPL Horizons result (snapshot) are summarized above. [JPL Horizons requires POST data (not really, but pretend), so I couldn't post a URL.]
I haven't linked them (lazy), but I believe there are many unanswered questions on stackoverflow that effectively reduce to this question (inconsistency of precision astronomical libraries), including some of my own questions.
I'm playing with this stuff at: https://github.com/barrycarter/bcapps/tree/master/ASTRO
I have no idea what Stellarium is doing, but I think I know about the other three. You are correct that only Horizons is using J2000 instead of the epoch-of-date for this apparent, locale-specific observation. You can bring it into close agreement with PyEphem by clicking "change" next to the "Table Settings" and switching from "1. Astrometric RA & DEC" to "2. Apparent RA & DEC."
The difference with Libnova is a bit trickier, but my late-night guess is that Libnova uses UT instead of Ephemeris Time, and so to make PyEphem give the same answer you have to convert from one time to the other:
import ephem
moon, e = ephem.Moon(), ephem.Observer()
e.date = '2013/01/01 00:00:00'
e.date -= ephem.delta_t() * ephem.second
moon.compute(e)
print moon.a_ra / ephem.degree, moon.a_dec / ephem.degree
This outputs:
141.320681918 9.77023197401
That is, at least, much closer than before. Note that you might also want to make the following setting in your PyEphem code if you want it to ignore refraction as you have asked Horizons to, though for this particular observation I am not seeing it make any difference:
e.pressure = 0
Any residual difference is probably (but not definitely; there could be other sources of error that are not occurring to me right now) due to the different programs using different formulae to predict where the planets will be. PyEphem uses the old but popular VSOP87. Horizons uses the much more recent — and exact — DE405 and DE406, as stated in its output. I do not know what models of the solar system the other products use.

Compressing a binary matrix

We were asked to find a way to compress a square binary matrix as much as possible, and if possible, to add redundancy bits to check and maybe correct errors.
The redundancy part is easy to implement, in my opinion. The complicated part is compressing the matrix. I thought about using run-length encoding after reshaping the matrix into a vector, because there will be more zeros than ones, but I only achieved about 40 bits of compression (we are working on small sizes), although I expected better.
Also, an idea was to Huffman-code the matrix after run-length encoding, but a dictionary must be sent along in order to recover the original information.
I'd like to know: what would be the best way to compress a binary matrix?
After reading some comments: yes @Adam, you're right, the 14x14 matrix should be compressed into 128 bits, so if I only used the coordinates (rows & cols) of each non-zero element, it would still be 160 bits (since there are twenty ones). I'm not looking for an exact solution but for a useful idea.
You can only talk about compressing something if you have a distribution and a representation. That's the issue of the dictionary you have to send along: you always need some sort of dictionary or protocol to uncompress something. It just so happens that things like .zip and .mpeg already have those dictionaries/codecs. Even something as simple as Huffman encoding is an algorithm; on the other side of the communication channel (you can think of compression as communication), the other person already has a bit of code (the dictionary) to perform the Huffman decompression scheme.
Thus you cannot even begin to talk about compressing something without first thinking "what kinds of matrices do I expect to see?", "is the data truly random, or is there order?", and if so "how can I represent the matrices to take advantage of order in the data?".
You cannot compress some matrices without increasing the size of other objects (by at least 1 bit). This is bad news if all matrices are equally probable, and you care equally about them all.
Addenda:
The suggestion to use sparse-matrix machinery is not necessarily the right answer. The matrix could for example be represented in Python as [[(r+c)%2 for c in range(cols)] for r in range(rows)] (a checkerboard pattern), and a sparse matrix wouldn't compress it at all, but the Kolmogorov complexity of the matrix is the above program's length.
Well, I know every matrix will have the same number of ones, so this is kind of deterministic. The only thing I don't know is where the 1s will be. Also, if I transmit the matrix with a dictionary and there are burst errors, maybe the dictionary gets affected, so... wouldn't the resulting information be corrupted? That's why I was trying to use lossless data compression such as run-length encoding; the decoder just doesn't need a dictionary. --original poster
How many 1s does the matrix have as a fraction of its size, and what is its size (NxN -- what is N)?
Furthermore, this is an incorrect assertion and should not be used as a reason to desire run-length encoding (which still requires a program); when you transmit data over a channel, you can always add error-correction to this data. "Data" is just a blob of bits. You can transmit both the data and any required dictionaries over the channel. The error-correcting machinery does not care at all what the bits you transmit are for.
Addendum 2:
There are (14*14) choose 20 possible arrangements, which I assume are randomly chosen. If this number were larger than 2^128, what you're trying to do would be impossible. Fortunately log_2((14*14) choose 20) ~= 90 bits < 128 bits, so it's possible.
The simple solution of writing down 20 numbers like 32,2,67,175,52,...,168 won't work because log_2(14*14)*20 ~= 153 bits > 128 bits. This would be equivalent to run-length encoding. We want to do something like this, but we are on a very strict budget and cannot afford to be "wasteful" with bits.
Because you care about each possibility equally, your "dictionary"/"program" will simulate a giant lookup table. Matlab's sparse matrix implementation may work but is not guaranteed to work and is thus not a correct solution.
If you can create a bijection between the number range [0,2^128) and subsets of size 20, you're good to go. This corresponds to enumerating ways to descend the pyramid in http://en.wikipedia.org/wiki/Binomial_coefficient to the 20th element of row 196. This is the same as enumerating all "k-combinations". See http://en.wikipedia.org/wiki/Combination#Enumerating_k-combinations
Fortunately I know that Mathematica and Sage and other CAS software can apparently generate the "5th" or "12th" or arbitrarily numbered k-subset. Looking through their documentation, we come upon a function called "rank", e.g. http://www.sagemath.org/doc/reference/sage/combinat/subset.html
So then we do some more searching, and come across some arcane Fortran code like http://people.sc.fsu.edu/~jburkardt/m_src/subset/ksub_rank.m and http://people.sc.fsu.edu/~jburkardt/m_src/subset/ksub_unrank.m
We could reverse-engineer it, but it's kind of dense. But now we have enough information to search for "k-subset rank unrank", which leads us to http://www.site.uottawa.ca/~lucia/courses/5165-09/GenCombObj.pdf -- see the section "Generating k-subsets (of an n-set): Lexicographical Ordering" and the rank and unrank algorithms on the next few pages.
In order to achieve the exact theoretically optimal compression, in the case of a uniformly random distribution of 1s, we must thus use this technique to biject our matrices to our output number of range <2^128. It just so happens that combinations have a natural ordering, known as ranking and unranking of combinations. You assign a number to each combination (ranking), and if you know the number you automatically know the combination (unranking). Googling k-subset rank unrank will probably yield other algorithms.
Thus your solution would look like this:
1) Serialize the matrix into a list, e.g. [[0,0,1],[0,1,1],[1,0,0]] -> [0,0,1,0,1,1,1,0,0].
2) Take the (1-based) indices of the 1s, e.g. [0,0,1,0,1,1,1,0,0] -> [3,5,6,7] -- a k=4-subset of an n=9 set.
3) Take the rank, e.g. compressed = rank([3,5,6,7], n=9); compressed == 412 (or something, I made that up).
4) You're done! E.g. 412 -binary-> 110011100 (at most n=9 bits here, since the rank is less than 2^9 = 512).
5) To uncompress, unrank it.
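Here is a small self-contained Python sketch of rank/unrank. Note it uses the combinadic (colexicographic) ordering rather than the lexicographic algorithm in the notes linked above -- any consistent bijection works for compression:
from math import comb

def rank(combo):
    # Map a sorted k-subset of {0,...,n-1} to an integer in [0, C(n,k)).
    return sum(comb(c, i) for i, c in enumerate(combo, start=1))

def unrank(r, k):
    # Inverse of rank(): recover the k-subset from its number.
    combo = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= r:   # find largest c with C(c, i) <= r
            c += 1
        combo.append(c)
        r -= comb(c, i)
    return sorted(combo)

# Toy example matching the steps above, but with 0-based indices:
# the 1s of [0,0,1,0,1,1,1,0,0] sit at positions 2,4,5,6.
ones = [2, 4, 5, 6]
r = rank(ones)
assert unrank(r, k=4) == ones
print(r, comb(9, 4))   # the rank, and C(9,4) = 126 possible subsets

# For the real problem: 20 one-positions in a flattened 14x14 matrix give
# a rank below C(196, 20), which fits in ~90 bits -- under the 128-bit budget.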
I'll get to 128 bits in a second; first, here's how you fit a 14x14 boolean matrix with exactly 20 nonzeros into 136 bits. It's based on the CSC sparse matrix format.
You have an array c with 14 4-bit counters that tell you how many nonzeros are in each column.
You have another array r with 20 4-bit row indices.
56 bits (c) + 80 bits (r) = 136 bits.
Let's squeeze 8 bits out of c:
Instead of 4-bit counters, use 2-bit ones. c is now 2*14 = 28 bits, but can't support more than 3 nonzeros per column. This leaves us with 128-80-28 = 20 bits. Use that space for an array a4c with 5 4-bit elements that each "add 4 to the element of c" selected by the 4-bit value. So, if a4c={2,2,10,15,15} that means c[2] += 4; c[2] += 4 (again); c[10] += 4; (an out-of-range index such as 15 is effectively a no-op, marking an unused slot).
The "most wasteful" distribution of nonzeros is one where the column count will require an add-4 to support 1 extra nonzero: so 5 columns with 4 nonzeros each. Luckily we have exactly 5 add-4s available.
Total space = 28 bits (c) + 20 bits (a4c) + 80 bits (r) = 128 bits.
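A quick Python sketch of the packing itself, if it helps to see the bit budget laid out (the field layout and the "index 15 means unused" convention are my reading of the scheme above):
def pack_128(col_counts, a4c, row_idx):
    # col_counts: 14 base counters, each 0..3   -> 14 * 2 = 28 bits
    # a4c:        5 add-4 column indices, 0..15 ->  5 * 4 = 20 bits
    # row_idx:    20 row indices, each 0..13    -> 20 * 4 = 80 bits
    assert len(col_counts) == 14 and len(a4c) == 5 and len(row_idx) == 20
    word = 0
    for v in col_counts:
        word = (word << 2) | v
    for v in a4c:
        word = (word << 4) | v
    for v in row_idx:
        word = (word << 4) | v
    return word                      # 28 + 20 + 80 = 128 bits total

# Toy example: 6 columns with 2 nonzeros, 8 with 1 (6*2 + 8*1 = 20 ones);
# no column exceeds 3, so the five add-4 slots stay unused (index 15).
packed = pack_128([2] * 6 + [1] * 8, [15] * 5, list(range(14)) + [0] * 6)
print(packed.bit_length())           # <= 128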
Your input is a perfect candidate for a sparse matrix. You said you're using Matlab, so you already have a good sparse matrix built for you.
spm = sparse(dense_matrix)
Matlab's sparse matrix implementation uses Compressed Sparse Columns, which has memory usage on the order of 2*(# of nonzeros) + (# of columns), which should be pretty good in your case of 20 nonzeros and 14 columns. Storing 20 values sure is better than storing 196...
Also remember that all matrices in Matlab are going to be composed of doubles. Just because your matrix can be stored as a 1-bit boolean doesn't mean Matlab won't stick it into a 64-bit floating point value... If you do need it as a boolean you're going to have to make your own type in C and use .mex files to interface with Matlab.
After thinking about this again, if all your matrices are going to be this small and they're all binary, then just store them as a binary vector (bitmask). Going off your 14x14 example, that requires 196 bits or 25 bytes (plus n, m if your dimensions are not constant). That same vector in Matlab would use 64 bits per element, or 1568 bytes. So storing the matrix as a bitmask takes as much space as 4 elements of the original matrix in Matlab, for a compression ratio of 62x.
Unfortunately I don't know if Matlab supports bitmasks natively or if you have to resort to .mex files. If you do get into C++ you can use STL's vector<bool> which implements a bitmask for you.
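In Python the bitmask round-trip is easy with arbitrary-precision ints (function names are mine; ceil(196 / 8) = 25 bytes):
def to_bitmask(matrix):
    # Flatten a list-of-lists of 0/1 into a 196-bit big-endian blob.
    bits = 0
    for row in matrix:
        for v in row:
            bits = (bits << 1) | v
    return bits.to_bytes(25, "big")

def from_bitmask(blob, rows=14, cols=14):
    bits = int.from_bytes(blob, "big")
    flat = [(bits >> i) & 1 for i in range(rows * cols - 1, -1, -1)]
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

m = [[(r + c) % 2 for c in range(14)] for r in range(14)]
assert from_bitmask(to_bitmask(m)) == m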

Binary to standard digit?

I'm going to make a computer in Minecraft. I understand how to build a computer that can do binary operations, but I want the outputs to be displayed as standard integer numbers. How do you "convert" binary into standard digits? Is there any chart for that? The digits will be shown like in old calculators, with 7 lines:
--
| |
--
| |
--
In electronics, what you need is called a "binary to binary coded decimal" converter. "Binary coded decimal" is the set of bits needed to produce a number on a 7 segment display. Here's a PDF describing how one of these chips works. Page 3 of the PDF shows the truth table needed to do the conversion as well as a picture of all of the NAND gates that implement it in hardware. You can use the truth table to build the set of boolean expressions needed in your program.
0 = 0
1 = 1
10 = 2
11 = 3
100 = 4
101 = 5
110 = 6
111 = 7
...
Do you see the pattern? Here's the formula:
number = 2^0 * (rightmost digit)
       + 2^1 * (rightmost-but-1 digit)
       + 2^2 * (rightmost-but-2 digit)
       + ...
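In code, the same formula looks like this (Python here; the built-in int() does the identical loop for you):
def binary_to_int(bits):
    value = 0
    for i, digit in enumerate(reversed(bits)):   # rightmost digit is 2^0
        value += (2 ** i) * int(digit)
    return value

print(binary_to_int("110"))   # -> 6
print(int("110", 2))          # same answer via the built-in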
Maybe what you are looking for is called BCD or Binary Coded Decimal. There is a chart and a Karnaugh map for it that has been used for decades. A quick Google search for it gave me this technical page:
http://circuitscan.homestead.com/files/digelec/bcdto7seg.htm
How are you trying to build the computer?
Maybe that key word can at least help you find what you need. :)
Your problem has two parts:
Convert a binary number into digits; that is, do a binary-to-BCD conversion.
Convert a digit into a set of segments to activate.
For the latter you can use a table that assigns the bitmap of active segments to each digit.
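As a sketch of such a table in Python (bit order gfedcba, bit 0 = segment a; these are the patterns found in most 7-segment datasheets, but verify against your own segment wiring):
SEGMENTS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111, 4: 0b1100110,
    5: 0b1101101, 6: 0b1111101, 7: 0b0000111, 8: 0b1111111, 9: 0b1101111,
}

def display(number):
    # Binary -> decimal digits (the BCD step) -> per-digit segment bitmaps.
    return [SEGMENTS[int(d)] for d in str(number)]

print([bin(s) for s in display(0b1101)])   # 13 -> patterns for '1' and '3'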
I think that's two different questions.
There isn't a "binary string of 0/1" to integer conversion built in -- you would normally just write your own to loop over the string and detect each power of 2.
You can also write your own 7-segment LED display -- it's a little tricky because it's on multiple lines, but it would be an interesting exercise.
Alternatively, most GUIs have an LCD font; Qt certainly does.

What is the ideal data type to use when storing latitude / longitude in a MySQL database?

Bearing in mind that I'll be performing calculations on lat / long pairs, what datatype is best suited for use with a MySQL database?
Basically it depends on the precision you need for your locations. Using DOUBLE you'll have 3.5 nm precision. DECIMAL(8,6)/(9,6) goes down to 16 cm. FLOAT is 1.7 m...
This very interesting table has a more complete list: http://mysql.rjweb.org/doc.php/latlng:
Datatype               Bytes  Resolution
Deg*100 (SMALLINT)       4    1570 m   1.0 mi  Cities
DECIMAL(4,2)/(5,2)       5    1570 m   1.0 mi  Cities
SMALLINT scaled          4     682 m   0.4 mi  Cities
Deg*10000 (MEDIUMINT)    6      16 m    52 ft  Houses/Businesses
DECIMAL(6,4)/(7,4)       7      16 m    52 ft  Houses/Businesses
MEDIUMINT scaled         6     2.7 m   8.8 ft
FLOAT                    8     1.7 m   5.6 ft
DECIMAL(8,6)/(9,6)       9     16 cm   1/2 ft  Friends in a mall
Deg*10000000 (INT)       8     16 mm   5/8 in  Marbles
DOUBLE                  16    3.5 nm   ...     Fleas on a dog
Use MySQL's spatial extensions with GIS.
Google provides a start to finish PHP/MySQL solution for an example "Store Locator" application with Google Maps. In this example, they store the lat/lng values as "Float" with a length of "10,6"
http://code.google.com/apis/maps/articles/phpsqlsearch.html
MySQL's Spatial Extensions are the best option because you have the full list of spatial operators and indices at your disposal. A spatial index will allow you to perform distance-based calculations very quickly. Please keep in mind that as of 6.0, the Spatial Extension is still incomplete. I am not putting down MySQL Spatial, only letting you know of the pitfalls before you get too far along on this.
If you are dealing strictly with points and only the DISTANCE function, this is fine. If you need to do any calculations with Polygons, Lines, or Buffered-Points, the spatial operators do not provide exact results unless you use the "relate" operator. See the warning at the top of 21.5.6. Relationships such as contains, within, or intersects are using the MBR, not the exact geometry shape (i.e. an Ellipse is treated like a Rectangle).
Also, the distances in MySQL Spatial are in the same units as your first geometry. This means that if you're using decimal degrees, your distance measurements are in decimal degrees. This will make it very difficult to get exact results as you get further from the equator.
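For that reason, many people skip the spatial DISTANCE function for lat/lng data and compute the great-circle distance themselves. A minimal haversine sketch in Python (the 6371 km mean Earth radius is an approximation, good to about half a percent):
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lng1, lat2, lng2):
    # Great-circle distance in meters between two lat/lng points (degrees).
    dlat = radians(lat2 - lat1)
    dlng = radians(lng2 - lng1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlng / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

print(haversine_m(40.4468, -79.9822, 40.4406, -79.9959))   # roughly 1.3 km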
When I did this for a navigation database built from ARINC424, I did a fair amount of testing and, looking back at the code, I used a DECIMAL(18,12) (actually a NUMERIC(18,12), because it was Firebird).
Floats and doubles aren't as precise and may introduce rounding errors, which could be a very bad thing. I can't remember whether I found any real data that had problems, but I'm fairly certain that the inability to store values exactly in a float or a double could cause trouble.
The point is that when using degrees or radians we know the range of the values - and the fractional part needs the most digits.
The MySQL Spatial Extensions are a good alternative because they follow The OpenGIS Geometry Model. I didn't use them because I needed to keep my database portable.
Depends on the precision that you require; see the datatype/resolution table above, from http://mysql.rjweb.org/doc.php/latlng.
To summarise:
The most precise available option is DOUBLE.
The most commonly seen type used is DECIMAL(8,6)/(9,6).
As of MySQL 5.7, consider using Spatial Data Types (SDT), specifically POINT for storing a single coordinate. Prior to 5.7, SDT did not support indexes (with the exception of 5.6 when the table type is MyISAM).
Note:
When using POINT class, the order of the arguments for storing coordinates must be POINT(latitude, longitude).
There is a special syntax for creating a spatial index.
The biggest benefit of using SDT is that you have access to Spatial Analyses Functions, e.g. calculating distance between two points (ST_Distance) and determining whether one point is contained within another area (ST_Contains).
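A sketch of what that workflow might look like from Python (the table and column names are made up, mysql.connector is just one driver choice, and -- per the note above -- double-check the coordinate-order convention on your server version):
import mysql.connector

db = mysql.connector.connect(user="app", password="...", database="geo")
cur = db.cursor()

cur.execute("""
    CREATE TABLE places (
        id INT AUTO_INCREMENT PRIMARY KEY,
        coord POINT NOT NULL,
        SPATIAL INDEX(coord)  -- the special spatial index syntax
    )
""")

cur.execute("INSERT INTO places (coord) VALUES (ST_GeomFromText(%s))",
            ("POINT(40.446 -79.982)",))
db.commit()

cur.execute("SELECT id, ST_Distance_Sphere(coord, ST_GeomFromText(%s)) FROM places",
            ("POINT(40.440 -79.995)",))
print(cur.fetchall())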
Based on this Wikipedia article, http://en.wikipedia.org/wiki/Decimal_degrees#Accuracy, the appropriate data type in MySQL is DECIMAL(9,6) for storing the longitude and latitude in separate fields.
Use DECIMAL(8,6) for latitude (90 to -90 degrees) and DECIMAL(9,6) for longitude (180 to -180 degrees). 6 decimal places is fine for most applications. Both should be "signed" to allow for negative values.
No need to go far: according to Google Maps, the best is FLOAT(10,6) for lat and lng.
We store latitude/longitude x 1,000,000 in our Oracle database as NUMBERs to avoid round-off errors with doubles.
Given that latitude/longitude to the 6th decimal place gives 10 cm accuracy, that was all we needed. Many other databases also store lat/long to the 6th decimal place.
TL;DR
Use FLOAT(8,5) if you're not working for NASA or the military and not making aircraft navigation systems.
To answer your question fully, you'd need to consider several things:
Format
degrees minutes seconds: 40° 26′ 46″ N 79° 58′ 56″ W
degrees decimal minutes: 40° 26.767′ N 79° 58.933′ W
decimal degrees 1: 40.446° N 79.982° W
decimal degrees 2: -32.60875, 21.27812
Some other home-made format? No one forbids you from making your own home-centric coordinate system, storing heading and distance from your home. This could make sense for some specific problems you're working on.
So the first part of the answer would be - you can store the coordinates in the format your application uses to avoid constant conversions back and forth and make simpler SQL queries.
Most probably you use Google Maps or OSM to display your data, and GMaps are using "decimal degrees 2" format. So it will be easier to store coordinates in the same format.
Precision
Then, you'd like to define the precision you need. Of course you can store coordinates like "-32.608697550570334,21.278081997935146", but have you ever cared about millimeters while navigating to a point? If you're not working for NASA and not computing satellite, rocket, or plane trajectories, you should be fine with an accuracy of several meters.
A commonly used format is 5 digits after the dot, which gives you roughly 50 cm accuracy.
Example: there is 1 cm of distance between X,21.2780818 and X,21.2780819. So 7 digits after the dot give you 1/2 cm precision, and 5 digits after the dot give you 1/2 m precision (because the minimal distance between distinct points is 1 m, so the rounding error cannot be more than half of that). For most civil purposes it should be enough.
The degrees decimal minutes format (40° 26.767′ N 79° 58.933′ W) gives you roughly the same precision as 5 digits after the dot.
Space-efficient storage
If you've selected the decimal format, then your coordinate is a pair like (-32.60875, 21.27812). Obviously, 2 x (1 bit for the sign, 2-3 digits for the degrees and 5 digits for the fractional part) will be enough.
So here I'd like to support Alix Axel from the comments in saying that the Google suggestion to store it in FLOAT(10,6) is really extra, because you don't need 4 digits for the integer part (since the sign is separate, latitude is limited to 90, and longitude is limited to 180). You can easily use FLOAT(8,5) for 1/2 m precision or FLOAT(9,6) for 50/2 cm precision. Or you can even store lat and long in separate types, because FLOAT(7,5) is enough for lat. See the MySQL float types reference. Either way, each will be stored like a normal FLOAT and take 4 bytes.
Usually space is not an issue nowadays, but if you want to really optimize the storage for some reason (disclaimer: don't do pre-optimization), you may pack lat (no more than 91,000 values + sign) together with long (no more than 181,000 values + sign) into about 36 bits, which is significantly less than 2 x FLOAT (8 bytes == 64 bits).
In a completely different and simpler perspective:
if you are relying on Google for showing your maps, markers, polygons, whatever, then let the calculations be done by Google!
you save resources on your server and you simply store the latitude and longitude together as a single string (VARCHAR), e.g.: "-0000.0000001,-0000.000000000000001" (length 35; if a number has more than 7 decimal digits then it gets rounded);
if Google returns more than 7 decimal digits per number, you can get that data stored in your string anyway, just in case you want to detect some fleas or microbes in the future;
you can use their distance matrix or their geometry library for calculating distances or detecting points in certain areas with calls as simple as this: google.maps.geometry.poly.containsLocation(latLng, bermudaTrianglePolygon);
there are plenty of "server-side" APIs you can use (in Python, Ruby on Rails, PHP, CodeIgniter, Laravel, Yii, Zend Framework, etc.) that use Google Maps API.
This way you don't need to worry about indexing numbers and all the other problems associated with data types that may screw up your coordinates.
While it isn't optimal for all operations, if you are making map tiles or working with large numbers of markers (dots) with only one projection (e.g. Mercator, as Google Maps and many other slippy-map frameworks expect), I have found what I call the "Vast Coordinate System" to be really, really handy. Basically, you store x and y pixel coordinates at some way-zoomed-in level -- I use zoom level 23. This has several benefits:
You do the expensive lat/lng to mercator pixel transformation once instead of every time you handle the point
Getting the tile coordinate from a record given a zoom level takes one right shift.
Getting the pixel coordinate from a record takes one right shift and one bitwise AND.
The shifts are so lightweight that it is practical to do them in SQL, which means you can do a DISTINCT to return only one record per pixel location. That cuts down on the number of records returned by the backend, which means less processing on the front end.
I talked about all this in a recent blog post:
http://blog.webfoot.com/2013/03/12/optimizing-map-tile-generation/
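For reference, the projection step and the shifts look roughly like this in Python (256-pixel tiles and zoom level 23 as described above; the helper names are mine):
from math import log, tan, cos, pi, radians

ZOOM = 23                      # the "way-zoomed-in" level
WORLD = 256 << ZOOM            # world width in pixels at that zoom

def to_pixels(lat, lng):
    # Web Mercator: lat/lng degrees -> integer pixel coords at zoom 23.
    lat_r = radians(lat)
    x = (lng + 180.0) / 360.0 * WORLD
    y = (1.0 - log(tan(lat_r) + 1.0 / cos(lat_r)) / pi) / 2.0 * WORLD
    return int(x), int(y)

x, y = to_pixels(40.4406, -79.9959)
z = 15                                   # zoom level you are rendering
tile_x = x >> (ZOOM - z + 8)             # tile coordinate: one right shift
pixel_x = (x >> (ZOOM - z)) & 255        # pixel within the tile: shift + AND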
Latitudes range from -90 to +90 (degrees), so DECIMAL(10,8) is OK for that.
Longitudes range from -180 to +180 (degrees), so you need DECIMAL(11,8).
Note: the first number is the total number of digits stored, and the second is the number after the decimal point.
In short: lat DECIMAL(10,8) NOT NULL, lng DECIMAL(11,8) NOT NULL
The spatial functions in PostGIS are much more functional (i.e. not constrained to BBOX operations) than those in the MySQL spatial extensions.
Depending on your application, I suggest using FLOAT(9,6).
Spatial keys will give you more features, but in my production benchmarks the floats are much faster than the spatial keys (0.01 vs 0.001 on average).
MySQL uses DOUBLE for all floating-point calculations internally...
So use type DOUBLE. Using FLOAT will lead to unpredictably rounded values in most situations.
I suggest you use the FLOAT datatype for SQL Server.
The ideal datatype for storing lat/long values is DECIMAL(9,6).
This gives approximately 10 cm precision, while only using 5 bytes of storage.
e.g. CAST(123.456789 AS DECIMAL(9,6))
GeoLocationCoordinates returns a double data type representing the position's latitude and longitude in decimal degrees. You can try using double.
Lat/long calculations require precision, so use some kind of decimal type and make the precision at least 2 digits higher than the number you will store, in order to perform math calculations. I don't know about the MySQL datatypes, but in SQL Server people often use float or real instead of decimal and get into trouble, because these are approximate numbers, not exact ones. So just make sure the data type you use is a true decimal type and not a floating-point type, and you should be fine.
A FLOAT should give you all of the precision you need, and be better for comparison functions than storing each co-ordinate as a string or the like.
If your MySQL version is earlier than 5.0.3, you may need to take heed of certain floating point comparison errors however.
Prior to MySQL 5.0.3, DECIMAL columns stored values with exact precision because they were represented as strings, but calculations on DECIMAL values were done using floating-point operations. As of 5.0.3, MySQL performs DECIMAL operations with a precision of 64 decimal digits, which should solve most common inaccuracy problems with DECIMAL columns.