SQL Server 2008 Spatial Query performance - sql-server-2008

I have an application where users store their commute routes in our database.
The routes are stored as polylines (linestrings).
The database also stores incidents: traffic accidents, that kind of thing.
Periodically we need to query a route to see if there is any incident within a 1 km radius of the route.
The join on the query is structured as follows:
Route r left outer join Incident i on
r.PolyLine.STDistance(i.Location) < 1000
Now I also tried something like this:
Route r left outer join Incident i on
r.PolyLine.STBuffer(1000).STIntersects(i.Location) = 1
Things we have tried so far to improve the speed are:
1) Reduce the number of points along the linestring
2) Add a spatial index (though I don't know how to tweak it)
1) above worked, but not well enough, and leads me to believe that the incident was being compared to every point along the route, which seems really inefficient.
We are considering storing the lat/longs as geometry vs geography so we get access to the bounding box and also to STContains.
Also considering calling Reduce() on the PolyLine prior to checking for incidents.

I would suggest geometry storage. The benefits of going to geography in this scenario don't seem to outweigh the costs.
Spatial Indexes are very important. One process I used spatial queries in went from ~15 min to ~1 min by using a properly tuned spatial index. However, I haven't found documentation on a good way to automatically obtain optimal settings for them. I have answered a similar question about spatial index tuning. The stored procedure I provided there takes a while for each data set but can be run in the background while you do other work.
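For reference, here is a minimal sketch of what a tuned spatial index declaration looks like in SQL Server 2008, assuming the Route.PolyLine column from the question is stored as geometry; the index name, bounding box and grid densities below are placeholders, not tuned values:

-- Sketch only: BOUNDING_BOX, GRIDS and CELLS_PER_OBJECT are placeholders to tune against your data
CREATE SPATIAL INDEX SIX_Route_PolyLine
    ON Route (PolyLine)
    USING GEOMETRY_GRID
    WITH (
        BOUNDING_BOX = (XMIN = -180, YMIN = -90, XMAX = 180, YMAX = 90),
        GRIDS = (LEVEL_1 = MEDIUM, LEVEL_2 = MEDIUM, LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM),
        CELLS_PER_OBJECT = 16
    );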
As far as your query goes, I set up a different query and compared its performance with the two you provided above. It appears that performance improves by putting a buffer of your route into a geometry variable and using the variable in your spatial comparison. My reason for this is that it only has to create the buffer (or evaluate distance) once instead of once for each row it compares against. You could try this and see what results you get.
DECLARE @routeBuff geometry;

-- however you select the particular route
SET @routeBuff = (SELECT r.PolyLine.STBuffer(1000) FROM route r WHERE recordID = 2778);

SELECT *
FROM incident i
WHERE i.location.STIntersects(@routeBuff) = 1;

Related

How would I use DynamoDB to move this usage from my mysql db to nosql?

I'm currently experiencing issues with a service I've developed that relies heavily on large payload reads from the DB (500 rows). I'm seeing huge throughput, in the range of 35,000+ requests per minute at up to 500 rows per request going through the DB, and it is not handling the scaling at all.
The data in question is retrieved primarily on a latitude/longitude WHERE statement that checks whether the latitude and longitude of the row fall between a minimum latitude/longitude coordinate and a maximum latitude/longitude coordinate. This is effectively checking whether the row in question is within the bounding box created by the min/max passed into the WHERE.
This is the where portion of the query we rely on for reference.
s.latitude > {minimumLatitude} AND
s.longitude > {minimumLongitude} AND
s.latitude < {maximumLatitude} AND
s.longitude < {maximumLongitude}
So, with that said: MySQL is handling this fine, but I'm presently on RDS and having to rely heavily on an r3.8XL master and 3 r3.8XL read replicas just to get the throughput capacity I need to prevent the application from slowing down and throwing the CPU into 100% usage.
Obviously, with how heavy the payload is and how frequently it's queried, this data needs to be moved into a more fitting service, something like ElastiCache or DynamoDB.
I've been leaning towards DynamoDB, but my only option there seems to be using SCAN, as there is no useful primary key I can associate with my data to reduce the result set; it relies on calculating whether the latitude/longitude of a point is within a bounding box. DynamoDB filters on attributes would work great, as they support the basic conditions needed; however, on a table that would be 250,000+ rows and growing by nearly 200,000 a day or more, that would be unusably expensive.
Another option to reduce the result set was to use a map-binning technique to associate a map region with the data, use that as the primary key in Dynamo, and then further filter down on the latitude/longitude attributes. This wouldn't be ideal though; we'd prefer to get data within specific bounds and not have excess redundant data passed back, since the min/max lat/lng can overlap multiple bins and would then pull back data from bins that is mostly not needed.
At this point I'm continuously having to deploy read replicas to keep the service up and it's definitely not ideal. Any help would be greatly appreciated.
You seem to be overlooking the obvious first thing to try... indexing the data using an index structure suited to the nature of the data... in MySQL.
B-trees are of limited help since you still have to examine all possible matches in one dimension after eliminating impossible matches in the other.
Aside: Assuming you already have an index on (lat,long), you will probably be able to gain some short-term performance improvement by adding a second index with the columns reversed (long,lat). Try this on one of your replicas¹ and see if it helps. If you have no indexes at all, then of course that is your first problem.
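As a sketch (the question only shows the alias s, so the table name below is a stand-in):

-- "your_table" stands in for whatever table the alias s refers to
ALTER TABLE your_table ADD INDEX idx_long_lat (longitude, latitude);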
Now, the actual solution. This requires MySQL 5.7 because before then, the feature works with MyISAM but not with InnoDB. RDS doesn't like it at all if you try to use MyISAM.
"This is effectively checking whether the row in question is within the bounding box created by the min/max passed into the WHERE."
What you need is an R-Tree index. These indexes actually store the points (or lines, polygons, etc.) in an order that understands and preserves their proximity in more than one dimension... proximate points are closer in the index and minimum bounding rectangles ("bounding box") are easily and quickly identified.
The MySQL spatial extensions support this type of index.
There's even an MBRContains() function that compares the points in the index to the points in the query, using the R-Tree to find all the points contained in the MBR you're searching. Unlike the usual optimization rule that you should not use column names as function arguments in the where clause (to avoid triggering a table scan), this function is an exception: the optimizer does not actually evaluate the function against every row but uses the meaning of the expression to evaluate it against the index.
There's a bit of a learning curve needed in order to understand the design of the spatial extensions but once you understand the principles, it falls into place nicely and the performance will exceed your expectations. You'll want a single column of type GEOMETRY and you'll want to store lat and long together in that one indexed column as a POINT.
To safely test this without disruption, make a replica, then detach it from your master, promoting it to become its own independent master, and upgrade it to 5.7 if necessary. Create a new table with the same structure plus a GEOMETRY column and a SPATIAL KEY, then populate it with INSERT ... SELECT.
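A minimal sketch of that side-by-side table, assuming MySQL 5.7 on InnoDB and the latitude/longitude column names from your WHERE clause; the table, column types, index and source-table names are placeholders:

-- Placeholder names and types; adjust to your schema
CREATE TABLE s_spatial (
    id        BIGINT UNSIGNED NOT NULL PRIMARY KEY,
    latitude  DECIMAL(10,6) NOT NULL,
    longitude DECIMAL(10,6) NOT NULL,
    pt        POINT NOT NULL,          -- lat/long stored together as a single GEOMETRY point
    SPATIAL KEY idx_pt (pt)
) ENGINE=InnoDB;

-- Populate from the existing table
INSERT INTO s_spatial (id, latitude, longitude, pt)
SELECT id, latitude, longitude, POINT(longitude, latitude)
FROM your_existing_table;

-- Bounding-box search: MBRContains is answered from the R-Tree (SPATIAL) index
SET @bbox = ST_GeomFromText(CONCAT('POLYGON((',
    @minLng, ' ', @minLat, ',', @maxLng, ' ', @minLat, ',',
    @maxLng, ' ', @maxLat, ',', @minLng, ' ', @maxLat, ',',
    @minLng, ' ', @minLat, '))'));

SELECT * FROM s_spatial WHERE MBRContains(@bbox, pt);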
Note that a DynamoDB scan is a very "expensive" operation. On a table I was testing against just yesterday, a single scan consistently cost 112 read units each time it was run, regardless of the number of records, presumably because a scan always reads 1 MB of data, which is 256 blocks of 4K (the definition of a read unit), but not with strong consistency (so, half the cost). 1 MB ÷ 4 KB ÷ 2 = 128, which I assume is close enough to 112 that this explains the number.
¹ It's a valid, supported operation to add an index to a MySQL replica but not to the master, even in RDS. You need to temporarily make the replica writable by creating a new parameter group identical to the existing one and flipping read_only to 0 in that group. Associate the replica with the new parameter group, wait for the state to change from applying to in-sync, log in to the replica, and add the index. Then put the original parameter group back when done.

Geo-location search with MySQL InnoDB

I am working on a geo-enabled application where I have an obvious use case: searching for users within some distance of a given user's location. Currently I am using a MySQL DB. As the User table is expected to be very large over time, the time for getting results will get longer (too long if it needs to traverse the entire table).
I am using InnoDB, as my table needs many things which MyISAM can't do. I have tried Mongo and did a test drive, adding 5 million users and running some tests over them. Now I am curious to know what MySQL can offer in the same situation, as I would prefer MySQL if it gives results anywhere near Mongo's.
My user table has other fields plus a lat field and an lng field (both indexed). Still it takes a lot of time. Can anyone suggest a better design approach for faster results?
Mongo has a bunch of very useful built-in geospatial commands and aggregations that will be ideal for your given case of finding users near a given user's point. Others include within, which finds points within a bounding box or polygon. In your case the geoNear aggregation is perfect and can provide the calculated distance away from the given point.
You would have to code a lot of that functionality yourself with MySQL. Then you also have PostGIS, an add-on for Postgres. Postgres is the classic open-source MySQL competitor, and PostGIS has been around longer than Mongo and is presumably the database behind OpenStreetMap, government GIS and the like.
But back to the problem: you need to use GeoJSON format and a 2dsphere index, which you might not be using. Post a single record of your data.

MySQL Query Takes A Long Time To Find POIs by Latitude and Longitude

I have 17 million points of interest in a MySQL table (v5.0.77), with several fields, including name, lat, lng, and category. Lat and lng are of type Decimal(10,6), and category is a small integer. I have a multi-column index on lat, lng, category.
My queries to find points within 2 km of a location take a long time: on average about 120 seconds.
If I query from exactly the same center point, I can tell that the query is cached because it executes in less than a second. As soon as I change the center point, the query takes a long time again.
I do the calculation to determine the bounds of the area I'm searching outside of the query, rather than doing a distance calculation within it, which is the source of a lot of the reports you see about similar queries taking a long time.
Here's an example from the Slow Query Log:
Query_time: 177 Lock_time: 0 Rows_sent: 2841 Rows_examined: 28691
SELECT p.id, p.name AS name, p.lat, p.lng, c.name AS category
FROM poi AS p
LEFT JOIN categories AS c ON p.category = c.id
WHERE p.lat BETWEEN 37.524993 AND 37.560965 AND p.lng BETWEEN -77.491776 AND -77.446408;
I feel like the server is tuned correctly - I have enough memory, it's just me using it for development, I feel I've tweaked MySQL settings appropriately.
This has really stumped me for a while now. Shouldn't MySQL be able to very efficiently scan the index I've created? Should I convert to spatial data types, or use Sphinx to improve query speed? Any thoughts/perspective much appreciated.
Have you tried using the spatial extensions in MySQL (http://dev.mysql.com/doc/refman/5.1/en/spatial-extensions.html)? I think you can get better performance in your database if you use the "geometry" data type with an index and search using the rectangle created by the latitude/longitude (info about the geometry type: http://dev.mysql.com/doc/refman/5.0/en/geometry-property-functions.html).
I've used it with a database of 150k places, and the query responds in a few milliseconds.
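As a rough sketch against the poi table above (SPATIAL indexes on this MySQL version require MyISAM, so this assumes a MyISAM copy of the table; the new column and index names are made up, and the bounding box is the one from the slow-log example):

-- Assumes a MyISAM copy of poi; SPATIAL indexes are MyISAM-only before MySQL 5.7
ALTER TABLE poi ADD COLUMN pt POINT NOT NULL;
UPDATE poi SET pt = POINT(lng, lat);
ALTER TABLE poi ADD SPATIAL INDEX idx_pt (pt);

-- The bounding box from the slow-query-log example, expressed as a rectangle polygon
SELECT p.id, p.name, p.lat, p.lng
FROM poi AS p
WHERE MBRContains(
          GeomFromText('POLYGON((-77.491776 37.524993, -77.446408 37.524993,
                                  -77.446408 37.560965, -77.491776 37.560965,
                                  -77.491776 37.524993))'),
          p.pt);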
This might seem extreme, but you could hard code logic into your inserts, updates and retrieval procedures to look at the category field, and select the table that matches the category type you're looking for. Yes, that means you'll have tables dedicated specifically for a certain category, and this may come off as too heavy handed for most, and complicate maintenance later. But if your categories are not modified often (GPS coordinates don't strike me as something that will change anytime soon), you might want to consider it.

How good is the geography datatype in sql server 2008?

I have a large database full of customers, implemented in sql server 2005. Customers each have a latitude and longitude, represented as Decimal(18,15). The most important search query in the database tries to find all customers close to a certain location like this:
( (Addresses.Latitude - @SearchInLat) BETWEEN -1 * @LatitudeBound AND @LatitudeBound)
AND ( (Addresses.Longitude - @SearchInLng) BETWEEN -1 * @LongitudeBound AND @LongitudeBound)
So, this is a very simple method. @LatitudeBound and @LongitudeBound are just numbers, used to pull back all the customers within a rough bounding rectangle of the point @SearchInLat, @SearchInLng. Once the results get to a client PC, some results are filtered out so that there is a bounding circle rather than a rectangle. (This is done on the client PC to avoid calculating square roots on the server.)
This method has worked well enough in the past. However, we now want to make the search do more interesting things - for instance, having the number of results pulled back be more predictable, or letting the user dynamically increase the size of the search radius. To do this, I have been looking at the possibility of upgrading to SQL Server 2008, with its geography datatype, spatial indexes, and distance functions. My question is this: how fast are these?
The advantage of the simple query we have at the moment is that it is very fast and not performance intensive, which is important as it is called very often. How fast would a query based around something like this:
@SearchInPoint.STDistance(Addresses.GeographicPoint) < @DistanceBound
be by comparison? Do the spatial indexes work well, and is STDistance fast?
If you're handling just a standard lat/lng pair as you describe, and all you're doing is a simple lookup, then arguably you're not going to gain much in the way of a speed increase by using the geometry type.
However, if you do want to get more adventurous as you state, then swapping to using the Geometry types will open up a whole world of new possibilities for you, and not just for searches.
For example (based on a project I'm working on), you could (if it's UK data) download the polygon definitions for all the towns / villages / cities in a given area, then do cross-references to search within a particular town; or, if you had a road map, you could find which customers live next to major delivery routes, motorways, primary roads, all sorts of things.
You could also do some very fancy reporting. Imagine a map of towns, where each outline is plotted and then shaded in with a colour to show the density of customers in an area; some simple geometry SQL will easily return you a count straight from the database to graph this kind of information.
Then there's tracking. I don't know what data you handle, or why you have customers, but if you're delivering anything, feeding in the coordinates of a delivery van tells you how close it is to a given customer.
As for the question "is STDistance fast?": that's difficult to say, really. I think a better question is "Is it fast in comparison to.....", and it's difficult to say yes or no unless you have something to compare it to.
Spatial indexes are one of the primary reasons for moving your data to a geographically aware database; they are optimised to produce the best results for a given task, but like any database, if you create bad indexes, then you will get bad performance.
In general you should definitely see a speed increase of some sort, because the maths in the sorting and indexing are more aware of the data's purpose as opposed to just being fairly linear in operation like a normal index is.
Bear in mind as well, that the more beefy the SQL server machine is, the better results you'll get.
One last point to mention is management of the data: if you're using a GIS-aware database, that opens the avenue for you to use a GIS package such as ArcMap or MapInfo to manage, correct and visualise your data, meaning corrections are very easy to do by pointing, clicking and dragging.
My advice would be to create a side-by-side table to your existing one that is formatted for spatial operations, then write a few stored procs, do some timing tests, and see which comes out best. If you have a significant increase just on the basic operations you're doing, then that's justification alone; if it's about equal, then your decision really hinges on what new functionality you actually want to achieve.
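A minimal sketch of that side-by-side setup, assuming the Addresses table and the variable names already used in this question (the index name and the choice of default index options are assumptions):

-- Add a geography column alongside the existing decimal lat/long
ALTER TABLE Addresses ADD GeographicPoint geography;

UPDATE Addresses
SET GeographicPoint = geography::Point(Latitude, Longitude, 4326);

-- Default geography grid index; tune GRIDS / CELLS_PER_OBJECT once you have timings
CREATE SPATIAL INDEX SIX_Addresses_GeographicPoint
    ON Addresses (GeographicPoint);

-- Radius search to time against the existing bounding-rectangle query (distance in metres)
DECLARE @SearchInPoint geography = geography::Point(@SearchInLat, @SearchInLng, 4326);

SELECT a.*
FROM Addresses a
WHERE a.GeographicPoint.STDistance(@SearchInPoint) <= @DistanceBound;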

Most efficient way to get points within radius of a point with sql server spatial

I am trying to work out the most efficient query to get points within a radius of a given point. The results do not have to be very accurate so I would favor speed over accuracy.
We have tried using a WHERE clause comparing the distance of points using STDistance like this (where @point and v.GeoPoint are geography types):
WHERE v.GeoPoint.STDistance(@point) <= @radius
Also one using STIntersects similar to this:
WHERE @point.STBuffer(@radius).STIntersects(v.GeoPoint) = 1
Are either of these queries preferred or is there another function that I have missed?
If accuracy is not paramount then using the Filter function might be a good idea:
http://msdn.microsoft.com/en-us/library/cc627367.aspx
This can in many cases be orders of magnitude faster because it does not do the check to see if your match was exact.
In the index the data is stored in a grid pattern, so how viable this approach is probably depends on your spatial index options.
Also, if you don't have too many matches, then doing a filter first and then a full intersect might be viable.
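A sketch of that Filter-first approach using the names from the question (the table behind the v alias is assumed, and the exact STIntersects pass is optional if approximate results are acceptable):

-- Buffer the search point once
DECLARE @area geography = @point.STBuffer(@radius);

SELECT v.*
FROM venues v                               -- assumed table name; the question only shows the alias v
WHERE v.GeoPoint.Filter(@area) = 1          -- index-only primary filter, may return false positives
  --AND v.GeoPoint.STIntersects(@area) = 1  -- uncomment for an exact second pass on the smaller candidate set
;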