I'm trying to get familiar with spatial data using SQL Server 2008.
I'm using this book: http://www.beginningspatial.com/
and there is sample data: http://www.census.gov/geo/cob/bdy/zt/z500shp/zt06_d00_shp.zip
I uploaded it to my SQL Server database and the spatial results display well. Everything seems to be OK, but when I try to calculate the distance between two geometries, the result is really small.
For example, 0.27.
The same happens if I calculate the STLength of a given region; that result is also very small, 0.19 for example.
I wonder why the results are so small. I used SRID 4269 when importing the data.
Does anyone have any idea why that is? I think the result should be in meters.
Thanks for any advice.
STLength returns units in the data source's unit of measure. In your case, you chose SRID 4269, which stores the data as latitude/longitude, so your base unit is a degree, not a standard linear unit like feet or meters. To calculate distances in linear units, load your data in a projected coordinate system instead (or use the geography type, which returns meters).
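To see the difference, a minimal sketch (the coordinates are made up for illustration): the same pair of points measured as geometry with SRID 4269 yields degrees, while the geography type yields meters.

DECLARE @g1 geometry = geometry::STGeomFromText('POINT(-122.35 47.65)', 4269);
DECLARE @g2 geometry = geometry::STGeomFromText('POINT(-122.34 47.66)', 4269);
SELECT @g1.STDistance(@g2) AS dist_degrees; -- ~0.014, a "distance" in degrees

DECLARE @p1 geography = geography::STGeomFromText('POINT(-122.35 47.65)', 4269);
DECLARE @p2 geography = geography::STGeomFromText('POINT(-122.34 47.66)', 4269);
SELECT @p1.STDistance(@p2) AS dist_meters; -- roughly 1,340 meters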
You might want to refer to this very similar question, as well.
In my AnyLogic model I have a population of agents (4 terminals) where trucks arrive, are served, and depart. The terminals have two parameters (numberOfGates and servicetime) which influence the departures per hour of trucks leaving the terminals. Now I want to tune these two parameters so that the number of departures per hour is closest to reality (I know the actual departures per hour). I already have two datasets within each terminal agent: one with the number of departures per hour that I simulate, and one with the observedDepartures from the data.
I already compare these two datasets in plots for every terminal.
Now I want to create an optimization experiment to tune the numberOfGates and servicetime of the terminals so that the departures dataset is closest to the observedDepartures dataset. Does anyone know the easiest way to create an objective function for this optimization experiment?
When I add a variable diff that is updated every hour by abs(departures - observedDepartures) and put root.diff in the optimization experiment, it gives me the error "eq(null) is not allowed. Use isNull() instead" in a line that reads the database for the observedDepartures (see last picture). It works when I run the simulation normally; it only gives this error when running the optimization experiment (I don't know why).
You can use the sum of the absolute differences for each replication. That is, create a variable that accumulates |departures - observedDepartures| for each hour; call it diff. Then, in the optimization experiment, minimize the value of that variable. This is close to a typical regression model's objective; there a more complex objective function is used, minimizing the sum of the squared differences.
A Calibration experiment already does (in a more mathematically correct way) what you are trying to do, using the in-built difference function to calculate the 'area between two curves' (which is what the optimisation is trying to minimise). You don't need to calculate differences or anything yourself. (There are two variants of the function to compare either two Data Sets (your case) or a Data Set and a Table Function (useful if your empirical data is not at the same time points as your synthetic simulated data).)
In your case it (the objective function) will need to be a sum of the differences between the empirical and simulated datasets for the 4 terminals (or possibly a weighted sum if the fit for some terminals is considered more important than for others).
So your objective is something like
difference(root.terminals(0).departures, root.terminals(0).observedDepartures)
+ difference(root.terminals(1).departures, root.terminals(1).observedDepartures)
+ difference(root.terminals(2).departures, root.terminals(2).observedDepartures)
+ difference(root.terminals(3).departures, root.terminals(3).observedDepartures)
(It would be better to calculate this for an arbitrary population of terminals in a function but this is the 'raw shape' of the code.)
A Calibration experiment is actually just a wizard which creates an Optimization experiment set up in a particular way (with a UI and all settings/code already created for you), so you can just use that objective in your existing Optimization experiment (but it won't have a built-in useful UI like a Calibration experiment). This also means you can still set this up in the Personal Learning Edition too (which doesn't have the Calibration experiment).
I have an application where users store their commute routes in our database.
The routes are stored as polylines (linestrings).
The database also stores incidents, traffic accidents that kind of thing.
Periodically we need to query a route to see if there are any incidents within a 1 km radius of the route.
The join on the query is structured as follows:
Route r left outer join Incident i on
r.PolyLine.STDistance(i.Location) < 1000
Now I also tried something like this:
Route r left outer join Incident i on
r.PolyLine.STBuffer(1000).STIntersects(i.Location) = 1
Things we have tried so far to improve the speed are:
1) Reduce the number of points along the linestring
2) Add a spatial index (though I don't know how to tweak it)
Option 1) above worked, but not well enough, and leads me to believe that the incident was being compared to every point along the route, which seems really inefficient.
We are considering storing the long/lats as geometry vs. geography so we get access to the bounding box and also to STContains.
We are also considering calling Reduce() on the PolyLine prior to checking for incidents, as sketched below.
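Something like this is what we have in mind for the Reduce() idea (the tolerance of 50 is just a guess, and assumes the geometry is in a metric projection so the units are meters):

Route r left outer join Incident i on
r.PolyLine.Reduce(50).STBuffer(1000).STIntersects(i.Location) = 1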
I would suggest geometry storage. The benefits of going to geography in this scenario don't seem to outweigh the costs.
Spatial Indexes are very important. One process I used spatial queries in went from ~15 min to ~1 min by using a properly tuned spatial index. However, I haven't found documentation on a good way to automatically obtain optimal settings for them. I have answered a similar question about spatial index tuning. The stored procedure I provided there takes a while for each data set but can be run in the background while you do other work.
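For reference, here is the shape of a manually tuned geometry index; the bounding box and grid settings below are placeholders, and the right values depend on your data's extent and density (which is what that stored procedure searches over):

CREATE SPATIAL INDEX SIdx_Route_PolyLine
ON Route (PolyLine)
USING GEOMETRY_GRID
WITH (
    BOUNDING_BOX = (-180, -90, 180, 90), -- replace with your data's actual extent
    GRIDS = (LEVEL_1 = MEDIUM, LEVEL_2 = MEDIUM, LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM),
    CELLS_PER_OBJECT = 16
);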
As far as your query goes, I set up a different query and compared its performance with the two you provided above. It appears that performance improves by putting a buffer of your route into a geometry variable and using that variable in your spatial comparison. My reasoning is that the buffer then only has to be created (or the distance evaluated) once, instead of once for each row it is compared against. You could try this and see what results you get.
DECLARE @routeBuff geometry
SET @routeBuff = (SELECT r.PolyLine.STBuffer(1000) FROM Route r WHERE recordID = 2778) -- however you select the particular route

SELECT *
FROM Incident i
WHERE i.Location.STIntersects(@routeBuff) = 1
I couldn't find anything on this, hence the new thread.
We have an application where the data is stored in SQL Server. Some tables have columns of type geography. We use the SQL Server function STDistance to filter out data within a specified distance. Now we are researching converting the application to PHP, for various reasons; one of the biggest is the cost of ASP.NET and SQL Server. I can't seem to find anything on how MySQL handles a geography data type. Am I right that it doesn't exist?
Isn't it possible to create your own functions in MySQL? I thought I could create a simple function that calculates whether a location is within the desired radius. What would be the most efficient way of doing this? Of course I could calculate for each row whether its coordinates are within the radius, but that feels inefficient and not like a very scalable solution. I was thinking I would first select all the rows where x1 > lat > x2 and y1 > lon > y2, and only then do the "heavy calculation" (sketched below).
What would be the best way of doing this?
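To illustrate the two-step idea, here is a rough sketch, assuming a locations(id, lat, lon) table (111.045 km per degree of latitude and an Earth radius of 6371 km are approximations):

SET @lat = 60.17, @lon = 24.94, @radius_km = 10;

SELECT id
FROM locations
WHERE lat BETWEEN @lat - (@radius_km / 111.045)
              AND @lat + (@radius_km / 111.045)
  AND lon BETWEEN @lon - (@radius_km / (111.045 * COS(RADIANS(@lat))))
              AND @lon + (@radius_km / (111.045 * COS(RADIANS(@lat))))
  AND 6371 * 2 * ASIN(SQRT(
        POW(SIN(RADIANS(lat - @lat) / 2), 2)
      + COS(RADIANS(@lat)) * COS(RADIANS(lat))
        * POW(SIN(RADIANS(lon - @lon) / 2), 2))) <= @radius_km;

The cheap bounding-box test can use an index on (lat, lon), so the Haversine term only runs on the rows that survive it.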
I have a table with zipcode (int) and Location (point). I'm looking for a MySQL query or function. Here is an example of the data. I'd like to return everything within 100 miles.
37922|POINT(35.85802 -84.11938)
Is there an easy query to achieve this?
Okay, so I have this:
SELECT X(Location), Y(Location) FROM zipcodes
This will give me my two coordinates, but how do I figure out what's within a given distance of that x/y?
The query to do this is not too hard, but is slow. You would want to use the Haversine formula.
http://en.wikipedia.org/wiki/Haversine_formula
Converting that to SQL should not be too difficult, but calculating the distance for every record in a table gets costly as the data set increases.
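As a sketch against your zipcodes table (assuming, from your sample row, that the point stores latitude first, so X(Location) is latitude and Y(Location) is longitude; 3959 is the Earth's radius in miles):

SET @lat = 35.85802, @lon = -84.11938;

SELECT zipcode,
       3959 * 2 * ASIN(SQRT(
           POW(SIN(RADIANS(X(Location) - @lat) / 2), 2)
         + COS(RADIANS(@lat)) * COS(RADIANS(X(Location)))
           * POW(SIN(RADIANS(Y(Location) - @lon) / 2), 2))) AS dist_miles
FROM zipcodes
HAVING dist_miles <= 100
ORDER BY dist_miles;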
The work can be significantly reduced by using a geohash function to limit the locus of candidate records. If accuracy is important, the Haversine formula can be applied to the records inside a geohash region.
If the MySQL people never completed their GIS and spatial extension, consider using Elasticsearch or MongoDB.
There is a pretty complete discussion here:
Formulas to Calculate Geo Proximity
I am trying to work out the most efficient query to get points within a radius of a given point. The results do not have to be very accurate so I would favor speed over accuracy.
We have tried using a WHERE clause comparing the distance of points using STDistance, like this (where @point and v.GeoPoint are geography types):
WHERE v.GeoPoint.STDistance(@point) <= @radius
Also one using STIntersects, similar to this:
WHERE @point.STBuffer(@radius).STIntersects(v.GeoPoint) = 1
Are either of these queries preferred or is there another function that I have missed?
If accuracy is not paramount then using the Filter function might be a good idea:
http://msdn.microsoft.com/en-us/library/cc627367.aspx
This can in many cases be orders of magnitude faster, because it skips the secondary check that verifies the match is exact.
In the index the data is stored in a grid pattern, so how viable this approach is probably depends on your spatial index options.
Also, if you don't have too many matches, doing a Filter first and then a full intersect might be viable, as sketched below.
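A sketch of both variants, with your @point and @radius from above; the venues table and GeoPoint column names are illustrative stand-ins, and GeoPoint is assumed to have a spatial index:

DECLARE @area geography = @point.STBuffer(@radius);

-- Index-only approximation: fast, but may return near-miss false positives
SELECT v.*
FROM venues v
WHERE v.GeoPoint.Filter(@area) = 1;

-- Cheap Filter first, exact test only on the surviving candidates
SELECT v.*
FROM venues v
WHERE v.GeoPoint.Filter(@area) = 1
  AND v.GeoPoint.STIntersects(@area) = 1;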