finding nearest locations in GeoAlchemy and PostGIS - sqlalchemy

I am trying to do a simple query where I find the nearest locations to a user (in my example, I am using airports). The database records look like this:
id: 2249
city: Osaka
country: Japan
location_type_id: 16
geolocation: SRID=4326;POINT(34.59629822 135.6029968)
name: Yao Airport
My database query looks like this:
@classmethod
def find_nearest_locations(cls, data):
    self = cls(data)
    return db.session.query(LocationModel.location_type_id == 16). \
        order_by(Comparator.distance_centroid(
            LocationModel.geolocation,
            func.Geometry(func.ST_GeographyFromText(self.__format_geolocation())))). \
        limit(self.result_quantity)
Unfortunately my function keeps returning empty lists. I am not very familiar with GeoAlchemy, as this is the first time I am using it. Any help would be greatly appreciated.
Thanks.

In PostGIS, coordinates must be expressed as longitude first, then latitude.
You will need to swap the coordinates in the input:
geolocation: SRID=4326;POINT(34.59629822 135.6029968) should become geolocation: SRID=4326;POINT(135.6029968 34.59629822)

I was able to fix the issue. Long story short: since I am using Flask-SQLAlchemy and not plain SQLAlchemy, I have to query through LocationModel.query rather than db.session.query. The code looks like this.
@classmethod
def find_nearest_locations(cls, data):
    self = cls(data)
    return LocationModel.query. \
        filter(LocationModel.location_type_id == self.location_type_id). \
        order_by(Comparator.distance_centroid(
            LocationModel.geolocation,
            func.Geometry(func.ST_GeographyFromText(self.__format_geolocation())))). \
        limit(self.result_quantity)

How to list all layers on Geopackage using pyqgis?

I am studying PyQGIS (using the PyQGIS Cookbook) and started by loading a vector layer.
So far I was able to open a layer that I already knew existed in a geopackage.
iface.addVectorLayer("./bcim_2016_21_11_2018.gpkg|layername=lim_unidade_federacao_a", "Nome Vetor", "ogr")
Now I am wondering how I could list all the layers hosted in a geopackage, so I can decide which layer to load?
Thanks in advance
Felipe
I have just found this possibility in the PyQGIS Cookbook cheat sheet, which answers my question.
from qgis.core import QgsVectorLayer, QgsProject

fileName = "/path/to/gpkg/file.gpkg"
layer = QgsVectorLayer(fileName, "test", "ogr")
subLayers = layer.dataProvider().subLayers()

for subLayer in subLayers:
    name = subLayer.split('!!::!!')[1]
    uri = "%s|layername=%s" % (fileName, name)
    # Create the layer
    sub_vlayer = QgsVectorLayer(uri, name, 'ogr')
    # Add the layer to the map
    QgsProject.instance().addMapLayer(sub_vlayer)
Felipe, all layers are stored in the gpkg_geometry_columns table, so you can query that table using either QSqlDatabase from Qt or sqlite3.
To get the table name, column name and geometry type, you can do the following:
select table_name, column_name, geometry_type_name from gpkg_geometry_columns
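Since a GeoPackage file is just a SQLite database, that same query can also be run with Python's built-in sqlite3 module, with no QGIS dependency at all. A minimal sketch under that assumption (the list_gpkg_layers helper name and the file path are placeholders of mine):

```python
import sqlite3

def list_gpkg_layers(gpkg_path):
    # Return (table_name, column_name, geometry_type_name) rows
    # from the GeoPackage's gpkg_geometry_columns metadata table.
    con = sqlite3.connect(gpkg_path)
    try:
        cur = con.execute(
            "SELECT table_name, column_name, geometry_type_name "
            "FROM gpkg_geometry_columns"
        )
        return cur.fetchall()
    finally:
        con.close()
```

Each returned row names one vector layer, so the table_name values can be fed straight into a `|layername=` URI.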
Hope I could help you!
Philipe

Check if model has locale_restriction in scope

Consider a model Post which has a title, description and a locale_restrictions field.
The locale restrictions field specifies in which locales the post should be displayed. It contains a CSV value: en,de,be,nl.
What I would like to do is use either a default_scope or a named scope to only return the model instances for a specific locale. Something like (with a localized scope): Post.localized.all. This scope then looks at the current locale I18n.locale and returns the posts that have that locale in their locale_restrictions CSV.
I cannot seem to get this working, having tried quite a few options. The closest I came was with a SQL LIKE expression:
default_scope -> { where("locale_restrictions LIKE (?)", "%#{I18n.locale.to_s}%") }
However, this fails when there's, for example, both a :en and :benl locale, since %en% will match :benl.
Apparently you can't get access to self.locale_restrictions within a scope. self returns the class instead of the instance. I can't figure out a way to split the locale_restrictions and check them.
What would be the best way to go about this using scopes, or are there any best practices regarding localizing database that I'm missing out on?
I'm basically looking for an easy way to scope my controller instance variables to a specific locale. Any help would be greatly appreciated.
Instead of using LIKE you can use REGEXP and include beginning-of-word and end-of-word boundaries in the regular expression. This should do the trick:
default_scope -> { where("locale_restrictions REGEXP (?)", "[[:<:]]#{I18n.locale.to_s}[[:>:]]") }
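The word-boundary idea can be sanity-checked in plain Python, where `\b` plays roughly the same role as MySQL's `[[:<:]]` / `[[:>:]]` markers (the locale strings here are made up for illustration):

```python
import re

restrictions = "en,de,benl"

# a plain substring test wrongly finds "en" inside "benl"
assert "en" in "benl"

# with word boundaries, only the standalone "en" token matches
assert re.search(r"\ben\b", "benl") is None
assert re.search(r"\ben\b", restrictions) is not None
```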

Django Query Natural Sort

Let's say I have this Django model:
class Question(models.Model):
question_code = models.CharField(max_length=10)
and I have 15k questions in the database.
I want to sort it by question_code, which is alphanumeric. This is quite a classical problem and has been talked about in:
http://blog.codinghorror.com/sorting-for-humans-natural-sort-order/
Does Python have a built in function for string natural sort?
I tried the code in the 2nd link (copied below, changed a bit), and noticed it takes up to 3 seconds to sort the data. To check the function's performance, I wrote a test which creates a list of 100k random alphanumeric strings. It takes only 0.76s to sort that list. So what's happening?
This is what I think: the function needs to get the question_code of each question for comparing, so sorting 15k values means 15k separate MySQL requests, and that is why it takes so long. Any ideas? And is there any solution for natural sort in Django in general? Thanks a lot!
import re

def natural_sort(l, ascending, key=lambda s: s):
    def get_alphanum_key_func(key):
        convert = lambda text: int(text) if text.isdigit() else text
        return lambda s: [convert(c) for c in re.split('([0-9]+)', key(s))]
    sort_key = get_alphanum_key_func(key)
    return sorted(l, key=sort_key, reverse=ascending)
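For reference, the splitting key at the heart of that function can be exercised on its own; this is a minimal renamed version (natural_key) run on a few made-up question codes:

```python
import re

def natural_key(s):
    # split the string into digit / non-digit runs;
    # digit runs are converted to int so they compare numerically
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", s)]

codes = ["Q10", "Q2", "Q1"]
assert sorted(codes, key=natural_key) == ["Q1", "Q2", "Q10"]
```

A plain lexicographic sort would instead give Q1, Q10, Q2, which is exactly what natural sorting avoids.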
As far as I'm aware there isn't a generic Django solution to this. You can reduce your memory usage and limit your db queries by building an id/question_code lookup structure:
from natsort import natsorted
question_code_lookup = Question.objects.values('id','question_code')
ordered_question_codes = natsorted(question_code_lookup, key=lambda i: i['question_code'])
Assuming you want to page the results, you can then slice up ordered_question_codes, perform another query to retrieve the questions you need, and order them according to their position in that slice:
# get the first 20 questions
ordered_question_codes = ordered_question_codes[:20]
question_ids = [q['id'] for q in ordered_question_codes]
questions = Question.objects.filter(id__in=question_ids)

# put them back into question code order
id_to_pos = dict(zip(question_ids, range(len(question_ids))))
questions = sorted(questions, key=lambda x: id_to_pos[x.id])
If the lookup structure still uses too much memory, or takes too long to sort, then you'll have to come up with something more advanced. This certainly wouldn't scale well to a huge dataset.
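The final reordering step can be illustrated without Django at all; in this sketch, plain dicts stand in for the model instances and the id values are invented:

```python
# the desired order, as produced by the natural sort
question_ids = [7, 3, 9]

# the database returns the matching rows in arbitrary order
fetched = [{"id": 3}, {"id": 9}, {"id": 7}]

# map each id to its position in the desired order, then sort by it
id_to_pos = {qid: pos for pos, qid in enumerate(question_ids)}
ordered = sorted(fetched, key=lambda q: id_to_pos[q["id"]])

assert [q["id"] for q in ordered] == [7, 3, 9]
```

This works because `filter(id__in=...)` makes no ordering guarantee, so the position map is what restores the natural-sort order.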

OpenWeatherMap returns incorrect current weather

I'm trying to implement an iPhone app and I'm integrating OpenWeatherMap to retrieve the current weather. However, I've noticed the data returned is incorrect (off by about 39 degrees Fahrenheit).
Below is the JSON URL I'm using to retrieve the current weather for Denver, USA using lat/lon coordinates, where xxxxxxxxxxxxx is my APPID key.
http://api.openweathermap.org/data/2.5/weather?APPID=xxxxxxxxxxxxx&lat=39.738539&lon=-104.981114
The temperature returned was 291.05988. From the documentation I read, this temperature unit is Kelvin. So to convert to Fahrenheit, I take 291.05988 - 254.928 = 36.13188 degrees Fahrenheit. However, the true current weather is 75 degrees Fahrenheit. This is off by about 39 degrees.
Please advise what I'm doing wrong.
Thanks
Loc
For those cruising by later, you don't need to do any conversions for Fahrenheit, you can add another query param to your request for that:
Fahrenheit: units=imperial
... also the option for
Celsius: units=metric
Example being:
http://api.openweathermap.org/data/2.5/weather?q=London&appid=XXXXXX&units=imperial
Found this here.
One more way: append &units=metric to the URL. You can also build the URL from city and key variables:
city = 'London'
key = 'some_key'
So it will look like this :
url = 'http://api.openweathermap.org/data/2.5/weather?q={}&appid={}&units=metric'.format(city,key)
I'm answering my own question...
I was naive to believe I understood the comments written by OpenWeather, and had my conversion from Kelvin to Fahrenheit all wrong. From OpenWeather's link here, it states:
Temperature in Kelvin. Subtract 273.15 from this figure to convert
to Celsius.
That statement converts to Celsius, not Fahrenheit. To convert from Kelvin to Fahrenheit, use this equation:
°F = 9/5 × (K - 273.15) + 32
Hope others won't get tripped up by that statement like I did.
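That conversion is easy to double-check in a few lines of Python (the helper name is mine):

```python
def kelvin_to_fahrenheit(k):
    # Kelvin -> Celsius (subtract 273.15), then Celsius -> Fahrenheit
    return (k - 273.15) * 9 / 5 + 32

# 291.06 K is roughly 64 °F, not the 36 °F obtained by
# subtracting 254.928 directly
print(round(kelvin_to_fahrenheit(291.06)))  # -> 64
```

The remaining gap to the reported 75 °F would then be down to the data itself, not the unit conversion.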

Get all coords within radius in google maps

The thing is I'm trying to query my database to find all points that fall within a radius of a certain given point.
One possible way of doing this would be to get all the data in the database and then find the distance between those points and the point I'm interested in, but I have 36k records in that database, which would mean 36k Google Maps requests, and I understand that wouldn't be possible with the free API.
What I was thinking is getting all possible coords in a radius of that point I'm interested in and check if any of the points in the database match those coords. Is that even possible? Or efficient? I suppose I would get a LOT of points and it would translate into a very long for loop, but I can't think of any other way.
Any suggestions?
EDIT
OK, I'll give a little more detail about the specific scenario, as I forgot to do so and several other problems have since come to mind.
First off, I'm using MongoDB as the database.
Secondly, my database has the locations in UTM format (this is supposed to work only in zone 21S), whereas I'm handling client-side coords with Google Maps' LatLng coords. I'm converting the UTM coords to LatLng to actually use them on the map.
Haversine won't do it in this scenario would it?
Look into using the Haversine formula - if you have latitude and longitude in your database then you can use the Haversine formula in a SQL query to find records within a certain distance.
This article covers it with more details: http://www.plumislandmedia.net/mysql/haversine-mysql-nearest-loc/
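If you'd rather compute the distance in application code than in SQL, the same Haversine formula is only a few lines of Python (haversine_km is my own helper name; it assumes a spherical Earth with the mean radius):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two (lat, lon) points, in kilometres
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))
```

Filtering 36k points this way is a cheap in-memory loop, so no Google Maps requests are needed at all.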
Do you have latitude and longitude as separate numerical fields in your database? You could search for all points in your database that are in a square area whose sides are twice the radius of the circle you're looking for. Then just check the distances within that subset of points.
select * from points where lat > my_latitude-radius and lat < my_latitude + radius and long > my_longitude-radius and long < my_longitude+radius;
Here is how I did it with Mongoose. We need to create a schema that contains a type and coordinates. You can see more details at https://mongoosejs.com/docs/geojson.html
One caveat about the Mongoose version: v6.3.0 worked for me. Using countDocuments with this query can generate an error, while count does not. I know count is deprecated and shouldn't be used, but I haven't found a better solution; if anyone finds one for countDocuments, let me know. Also see https://github.com/Automattic/mongoose/issues/6981
const schema = new mongoose.Schema(
  {
    location: {
      type: {
        type: String,
        enum: ["Point"],
      },
      coordinates: {
        type: [Number],
        index: "2dsphere",
      },
    },
  },
  { timestamps: true }
);

const MyModel = mongoose.model("rent", schema);
The query will be:
const result = await MyModel.find({
  location: {
    $near: {
      $geometry: {
        type: "Point",
        coordinates: [Number(filters.longitude), Number(filters.latitude)],
      },
      $maxDistance: filters.maxRadius * 1000,
      $minDistance: filters.minRadius * 1000,
    },
  },
});