Benefits of SPATIAL over using standard bounding box query - mysql

What are the benefits of using a SPATIAL query rather than a simple MySQL query that uses a bounding box?
For example, if I wanted to find all locations that fall within a certain area, I could do something like this:
Bounding Box Example
SELECT * FROM geoplaces
WHERE geolat BETWEEN ? AND ?
AND geolng BETWEEN ? AND ?
As I understand it, the only real benefit is that a spatial query will also factor in the earth's curvature?
Also, which method is faster?

Advantages:
- You can use shapes other than points and rectangles, and operations other than "is-in".
- If you can use SPATIAL indexes, performance is orders of magnitude better (depending of course on the size and nature of your data).
Disadvantages:
- Spatial objects are binary blobs; some care is needed handling them (e.g. you typically can't copy-paste a value).
- Spatial indexes in MySQL are only available for MyISAM tables.
- Support in MySQL is somewhat primitive; many operations are not implemented yet.
If your database is large and you need searches like the one in your example to be fast, you should definitely go with SPATIAL indexes.
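For comparison, here is a minimal sketch of the same kind of search using a SPATIAL index. The table and column names are assumptions for illustration; on older MySQL versions the table must be MyISAM for SPATIAL indexes to work, and GeomFromText was later renamed ST_GeomFromText:

CREATE TABLE geoplaces_spatial (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    location POINT NOT NULL,
    SPATIAL INDEX (location)
) ENGINE=MyISAM;

-- Find all rows whose point lies inside the polygon's bounding rectangle;
-- the R-tree behind the SPATIAL index makes this fast.
SELECT * FROM geoplaces_spatial
WHERE MBRContains(
    GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'),
    location
);

Note that MBRContains only compares minimum bounding rectangles; an exact point-in-polygon test needs the exact-geometry functions (ST_Contains) that only arrived in later MySQL versions.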

What is an optimal approach to storing locations that can be queried by distance?

I want to implement a feature where a list of nearby venues can be presented, sorted by distance from the user's location. The approach I have right now is to store lat and lon values as floats and to make a query that looks for +/- values around the user's location (searching a square that extends north, south, east and west of the user). Then I do a quick calculation across the result set to determine the distance, and sort in my business logic. I am approaching this from the perspective of someone who has primarily used relational databases (the app runs MySQL with Hibernate), but is there a better approach (a different database like Neo4J, or a better column type)?
Also, the approach I have needs a semi-complex workaround for queries at or near 0 lat or 0 lon.
As for my working definition of optimal, I'm looking for approaches that scale to potentially hundreds of venues in a 10 mile radius and hundreds of thousands of venues in total. To put it another way, approximately 1% of SimpleGEO. So if the scale of this problem doesn't require an optimal solution, then "you're alright" would also be an interesting answer, though I'd be interested in knowing why.
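For reference, a minimal sketch of the bounding-box-plus-sort approach described above, done entirely in SQL with the distance computed by the spherical law of cosines. Table and column names are assumptions, 3959 is the Earth's radius in miles, and the named placeholders are written as they would appear in a Hibernate native query:

SELECT id, name,
       3959 * ACOS(
           COS(RADIANS(:userLat)) * COS(RADIANS(lat))
         * COS(RADIANS(lon) - RADIANS(:userLon))
         + SIN(RADIANS(:userLat)) * SIN(RADIANS(lat))
       ) AS distance_miles
FROM venues
-- bounding-box prefilter so an index on (lat, lon) can be used;
-- one degree of latitude is roughly 69 miles
WHERE lat BETWEEN :userLat - (:radiusMiles / 69)
              AND :userLat + (:radiusMiles / 69)
  AND lon BETWEEN :userLon - (:radiusMiles / (69 * COS(RADIANS(:userLat))))
              AND :userLon + (:radiusMiles / (69 * COS(RADIANS(:userLat))))
ORDER BY distance_miles
LIMIT 50;

This keeps the distance math in the database instead of the business logic, at the cost of computing it for every row that survives the box filter.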
You could have a look at Lucene/Solr. Lucene has supported location-aware search since at least v2.9.
If you're worried about Lucene's complexities, there's Hibernate Search, which is meant to replicate all database changes across to Lucene transparently.
MongoDB has native support for geospatial indexes and extensions to the query language to support many different ways of querying your geospatial documents.
But if you are looking for a relational database, try PostgreSQL with PostGIS.
Have you looked at Hibernate Spatial?
Hibernate Spatial is a generic extension to Hibernate for handling geographic data, and it has a MySQL provider.
http://www.hibernatespatial.org/

MySQL-Solr for geospatial search

The site currently does mainly range searches (latitude & longitude) with some filtering, such as WHERE color = "red" type clauses. However, using MySQL with a geospatial index is still quite slow and I need to speed it up.
Problem: Will using Solr to do the search be a good idea?
If so, should I only duplicate the range columns from MySQL into Solr and do the WHERE clauses in MySQL, or do both types of queries in Solr?
I've read that Solr is not meant for storing data like a database (i.e. MySQL) is. Does this mean that if my search can take place over 10 different columns (or fields, in Solr terms), and the MySQL table that Solr is replicated from only has 11 columns, I would still keep the MySQL table even though that uses almost twice as much storage space, half of which is redundant?
It appears that I'm using structured data (because each row has many columns defined?), so storing the entire table in Solr, instead of having redundant data in both MySQL and Solr, would save storage space and reduce the number of database writes. Is Solr a good choice here?
In terms of speed, would it be better to use PostGIS or Solr?
Solr has very fast numerical/date range queries. Solr 3 geospatial takes advantage of that, and I wrote a plugin that does even better. I doubt MySQL is faster.
That said, if the sole problem you are trying to solve is slow geospatial queries then bringing in Solr may solve it but will add a lot of overall complexity to your system since it isn't designed to replace relational databases--it works alongside them. Don't get me wrong; Solr is awesome, particularly for faceted navigation and text search. But you didn't state you wanted to take advantage of Solr's primary features.
PostGIS is by far the most mature open-source GIS storage system. I suggest you try it as an experiment to see if it's better. I would try a lat + lon pair of columns approach like what you are doing now with MySQL, and I would also try using the PostGIS native geospatial way to do it, whatever that is exactly.
One thing you could try in either MySQL or PostGIS is to round your latitude and longitude values to the number of decimals that gives the level of precision you actually need, which is surely far less than the full precision of a double. And if you store them in floats rather than doubles, right there the precision is capped to 2.37 meters. The system you use will probably have a much easier time doing range queries if there are fewer distinct values to scan over.
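As an illustration of that last point, a hypothetical sketch (table and column names are made up; four decimal places is roughly 11 meters of precision at the equator):

-- store rounded copies of the coordinates and index those instead
ALTER TABLE places
    ADD COLUMN lat_r FLOAT,
    ADD COLUMN lon_r FLOAT;

UPDATE places
   SET lat_r = ROUND(lat, 4),
       lon_r = ROUND(lon, 4);

CREATE INDEX places_latlon_idx ON places (lat_r, lon_r);

With fewer distinct values, the index range scans have less to chew through.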

MySQL vs. PostgreSQL / PostGIS

I have lat/lon coordinates in a 400 million rows partitioned mysql table.
The table grows at about 2000 records a minute and old data is flushed every few weeks.
I am exploring ways to do spatial analysis of this data as it comes in.
Most of the analysis requires finding whether a point is in a particular lat/lon polygon or which polygons contain that point.
I see the following ways of tackling the point-in-polygon (PIP) problem:

1. Create a MySQL function that takes a point and a Geometry and returns a boolean. Simple, but I'm not sure how Geometry can be used to perform operations on lat/lon coordinates, since Geometry assumes flat surfaces rather than spheres.
2. Create a MySQL function that takes a point and an identifier of a custom data structure and returns a boolean. The polygon vertices can be stored in a table and a function can compute PIP using spherical math. A large number of polygon points may lead to a huge table and slow queries.
3. Leave the point data in MySQL and store the polygon data in PostGIS, and use the app server to run the PIP query in PostGIS by providing the point as a parameter.
4. Port the application from MySQL to PostgreSQL/PostGIS. This will require a lot of effort in rewriting queries and procedures. I can still do it, but how good is PostgreSQL at handling 400 million rows?
A quick Google search for "mysql 1 billion rows" returns many results; the same query for Postgres returns no relevant results.
Would like to hear some thoughts & suggestions.
A few thoughts.
First, PostgreSQL and MySQL are completely different beasts when it comes to performance tuning. So if you go the porting route, be prepared to rethink your indexing strategies. Not only does PostgreSQL have far more flexible indexing than MySQL, the table approaches are very different too, meaning the appropriate indexing strategies differ as much as the tactics do. Unfortunately this means you can expect to struggle a bit. If I could give one piece of advice, I would suggest dropping all non-key indexes at first and then adding them back sparingly as needed.
The second point is that nobody here can likely give you a huge amount of practical advice at this point because we don't know the internals of your program. In PostgreSQL, you are best off indexing only what you need, but you can index functions' outputs (which is really helpful in cases like this) and you can index only part of a table.
I am more a PostgreSQL guy than a MySQL guy, so of course I think you should go with PostgreSQL. However, rather than tell you why and have you struggle at this scale, I will tell you a few things that I would look at using if I were trying to do this:
- Functional indexes (see the sketch below)
- Writing my own functions for indexes and related analysis
- PostGIS, which is pretty amazing and very flexible
In the end, switching db's at this volume is going to be a learning curve, and you need to be prepared for that. However, PostgreSQL can handle the volume just fine.
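As a concrete illustration of the functional-index point above (table, column, and function names are hypothetical):

-- PostgreSQL can index the result of an IMMUTABLE function,
-- so a search on that expression hits the index directly.
CREATE FUNCTION grid_cell(lat float8, lon float8) RETURNS text AS $$
    SELECT round(lat)::text || ':' || round(lon)::text
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX points_cell_idx ON points (grid_cell(lat, lon));

SELECT * FROM points WHERE grid_cell(lat, lon) = grid_cell(40.7, -74.0);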
The number of rows is quite irrelevant here.
The question is how much of the point-in-polygon work can be done by the index.
The answer to that depends on how big the polygons are.
PostGIS is very fast to find all points in the bounding box of a polygon. Then it takes more effort to find out if the point actually is inside the polygon.
If your polygons are small (small bounding boxes) the query will be efficient. If your polygons are big, or have a shape that makes the bounding box big, then it will be less efficient.
If your polygons are more or less static, there are workarounds. You can divide your polygons into smaller polygons and recreate the index. Then the index will be more efficient.
If your polygons are actually multipolygons, the first step is to split the multipolygons into polygons with ST_Dump and build an index on the result.
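For concreteness, a sketch of that splitting step under assumed table and column names:

-- split multipolygons into simple polygons and index the result
CREATE TABLE polys AS
    SELECT (ST_Dump(geom)).geom AS geom
    FROM multipolys;

CREATE INDEX polys_geom_gix ON polys USING GIST (geom);

-- point-in-polygon query; the GiST index prunes by bounding box first,
-- then ST_Contains does the exact test on the survivors
SELECT *
FROM polys
WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(-74.0, 40.7), 4326));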
HTH
Nicklas

SQL Bounding Box Optimization

Can anybody link to any documents regarding optimized bounding box style queries in SQL?
At the most basic level, imagine a table consisting of x,y float columns; we query the table for rows within a certain (x1,x2),(y1,y2) range. The query to do this is trivial, but what is the best way to define the indexes to ensure this query behaves efficiently?
We could simply create separate indexes on the x and y columns, or a composite index on both, but I don't know enough about SQL indexing to reason my way through this.
I am using MySQL.
A space-filling curve is the best way to reduce a 2-d range problem to a 1-d one. It's constructed like a fractal and is basically a Gray-code traversal of the surface. Instead of calculating an index, you can put together a quadtree path as a prefix-free key, similar to a Huffman code. Then you can use a simple string query to retrieve a box. MySQL has a spatial index extension, but I don't know which curve it uses; it's probably the simple Z-curve or the Peano curve. You can take a look at Nick's spatial index quadtree Hilbert curve blog. Monotonic n-ary Gray codes can also be very interesting.
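To make the prefix-key idea concrete, a hypothetical sketch (the quadkey column and its values are invented; each character selects one quadrant of the parent cell):

-- every point stores the quadtree path of its cell, e.g. '02311021';
-- all points inside cell '0231' share that prefix, so an ordinary
-- B-tree index on quadkey can answer the box query as a prefix scan
SELECT * FROM points WHERE quadkey LIKE '0231%';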
MySQL's spatial extensions can use R-tree indexes, and you get handy functions like MBRWithin. Seems right up your alley.
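A quick sketch, assuming a POINT column pt with a SPATIAL index (the two-point LineString is just a cheap way to describe the box's corners, since MBRWithin only compares minimum bounding rectangles):

SELECT *
FROM points
WHERE MBRWithin(pt, GeomFromText('LINESTRING(-74.1 40.6, -73.9 40.8)'));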

Which DB to choose for finding best matching records?

I'm storing an object in a database described by a lot of integer attributes. The real object is a little bit more complex, but for now let's assume that I'm storing cars in my database. Each car has a lot of integer attributes to describe it (i.e. maximum speed, wheelbase, maximum power, etc.) and these are searchable by the user. The user defines a preferred range for each of the attributes, and since there are a lot of attributes, there most likely won't be any car matching all the attribute ranges. Therefore the query has to return a number of cars sorted by best match.
At the moment I implemented this in MySQL using the following query:
SELECT *, SQRT( POW((a < min_a)*(min_a - a) + (a > max_a)*(a - max_a), 2) +
                POW((b < min_b)*(min_b - b) + (b > max_b)*(b - max_b), 2) +
                ... ) AS match_score
FROM cars
WHERE a > (min_a - max_allowable_deviation) AND a < (max_a + max_allowable_deviation) AND ...
ORDER BY match_score ASC
where a and b are attributes of the object and min_a, max_a, min_b and max_b are user-defined values. Basically, the match score is the square root of the sum of the squared differences between the desired range and the actual value of each attribute, with a value of 0 meaning a perfect match.
The table contains a couple of million records, and the WHERE clause is only introduced to limit the number of records the calculation is performed on. An index is placed on all of the queryable columns, and the query takes around 500 ms. I'd like to improve this number and I'm looking into ways to improve this query.
Furthermore, I am wondering whether there is a different database better suited to performing this job. Moreover, I'd very much like to change to a NoSQL database because of its more flexible data schema options. I've been looking into MongoDB, but couldn't find a way to solve this problem efficiently (fast).
Is there any database better suited for this job than MySQL?
Take a look at R-trees (the pages on specific variants go into a lot more detail and present pseudocode). These data structures allow you to query by a bounding rectangle, which is exactly what your problem of searching by ranges on each attribute is.
Consider your cars as points in n-dimensional space, where n is the number of attributes that describe your car. Then, given n ranges, each describing an attribute, the problem is to find all the points contained in that n-dimensional hyperrectangle. R-trees support this query efficiently. MySQL implements R-trees for its spatial data types, but MySQL only supports two-dimensional space, which is insufficient for you. I'm not aware of any common databases that support n-dimensional R-trees off the shelf, but you could take a database with good support for user-defined tree data structures and implement R-trees yourself on top of it. For example, you can define a structure for an R-tree node in MongoDB, with child pointers. You would then implement the R-tree algorithms in your own code while letting MongoDB take care of storing the data.
Also, there's this C++ header file implementing an R-tree, though currently it's only an in-memory structure. If your data set is only a few million rows, it seems feasible to just load this memory structure on startup and update it whenever new cars are added (which I assume is infrequent).
Text search engines, such as Lucene, meet your requirements very well. They allow you to "boost" hits depending on how they were matched; e.g. you can define engine size to be considered a "better match" than wheelbase. Using Lucene is really easy and, above all, it's SUPER FAST. Way faster than MySQL.
MySQL offers a plugin to provide text-based searching, but I prefer to use Lucene separately; that way it's easily scalable (being read-only, you can have multiple Lucene engines) and easily manageable.
Also check out Solr, which sits on top of Lucene and allows you to store, retrieve and search simple Java objects (lists, arrays, etc.).
Likely, your indexes aren't helping much, and I can't think of another database technology that's going to be significantly better. A few things to try with MySQL...
I'd try putting a copy of the data in a MEMORY table. At least the table scans will be in memory:
http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html
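A minimal sketch of that idea, assuming the table from the question is called cars (note that MEMORY tables are emptied on server restart, and CREATE TABLE ... AS SELECT does not copy indexes):

CREATE TABLE cars_mem ENGINE=MEMORY AS SELECT * FROM cars;

You would then run the match query against cars_mem instead.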
If that doesn't work for you or doesn't help much, you could also try a user-defined function (UDF) to optimize the calculation of the match score. Basically, this means executing the range testing in a C library you provide:
http://dev.mysql.com/doc/refman/5.0/en/adding-functions.html