"Initializing geometry from JSON input requires GDAL" Error - json

I'm doing a Django project and I want to save polygons that represent areas of interest on a map. I am trying to use django-leaflet and django-geojson. The model for the shapes is:
#models.py
...
from django.contrib.gis.db import models as gismodels
...

class MushroomShape(gismodels.Model):
    name = models.CharField(max_length=256)
    geom = gismodels.PolygonField()
    objects = gismodels.GeoManager()

    def __unicode__(self):
        return self.name

    def __str__(self):
        return self.name
I'm trying to create the polygon shapes in the admin, using a Leaflet widget, so they can be added to the database:
#admin.py
...
from leaflet.admin import LeafletGeoAdmin
from .models import MushroomShape
...

admin.site.register(MushroomShape, LeafletGeoAdmin)
Running the server on my computer, when I draw a polygon in the admin form and try to submit it, the client side reports "Invalid geometry value." and the server side reports:
Error creating geometry from value
'{"type":"Polygon","coordinates":[[[-87.58575439453125,41.83375828633243],[-87.58575439453125,42.002366213375524],[-86.74942016601562,42.002366213375524],[-86.74942016601562,41.83375828633243],[-87.58575439453125,41.83375828633243]]]}'
(Initializing geometry from JSON input requires GDAL.)
A little push to help understand where I have to look, to solve this error, would be really awesome.

Sorry if this is bad etiquette (posting an answer to my own question instead of deleting it), but I found my answer on the official Django page for geo libraries:
https://docs.djangoproject.com/el/1.10/ref/contrib/gis/install/geolibs/
I didn't know that GDAL is necessary for some of the GeoJSON features I was trying to use. I followed their instructions and installed it with
sudo apt-get install binutils libproj-dev gdal-bin
and my error is gone.
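As a quick sanity check after installing (a minimal sketch, assuming a GeoDjango setup where django.contrib.gis.gdal is importable), you can try parsing a GeoJSON string by hand in python manage.py shell; this goes through the same GDAL-backed code path that the error message complains about:
# Minimal sketch: run inside `python manage.py shell`.
# Constructing an OGRGeometry from a GeoJSON string goes through GDAL,
# so this only works once GDAL is installed and found by Django.
from django.contrib.gis.gdal import OGRGeometry

geom = OGRGeometry('{"type": "Point", "coordinates": [-87.58, 41.83]}')
print(geom.geom_type)  # prints "Point" when GDAL is working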

Creating a serving graph separately from training in tensorflow for Google CloudML deployment?

I am trying to deploy a tf.keras image classification model to Google CloudML Engine. Do I have to include code to create a serving graph, separate from training, to get it to serve my models in a web app? I already have my model in SavedModel format (saved_model.pb & variable files), so I'm not sure if I need to do this extra step to get it to work.
e.g. this is code directly from the GCP TensorFlow "Deploying models" documentation:
def json_serving_input_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT_COLUMNS:
        inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)
You are probably training your model with actual image files, while it is best to send images as encoded byte-strings to a model hosted on CloudML. Therefore you'll need to specify a ServingInputReceiver function when exporting the model, as you mention. Some boilerplate code to do this for a Keras model:
# Convert the Keras model to a TF estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=tf_files_path)

# Your serving input function will accept a string
# and decode it into an image
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        return image  # apply additional processing if necessary

    # Ensure the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
You can refer to this very helpful answer for a more complete reference and other options for exporting your model.
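As a follow-up on the encoded byte-string point, here is a minimal client-side sketch of what a prediction request for the signature exported above could look like. The file name test.png is just an example; the 'b64' wrapping is how the CloudML online prediction REST API expects binary string inputs to be encoded:
import base64
import json

# The serving signature above keys its input as 'image_bytes' and accepts raw
# strings, so the JSON payload carries the image base64-encoded under "b64".
with open('test.png', 'rb') as f:
    img_b64 = base64.b64encode(f.read()).decode('utf-8')

request_body = json.dumps({'instances': [{'image_bytes': {'b64': img_b64}}]})
# POST request_body to the model's projects.predict REST endpoint.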
Edit: If this approach throws a ValueError: Couldn't find trained model at ./tf. error, you can try the workaround solution that I documented in this answer.

Django MySQL Error (1146, "Table 'db_name.django_content_type' doesn't exist")

I am getting the error
django.db.utils.ProgrammingError: (1146, "Table 'db_name.django_content_type' doesn't exist")
when trying to do the initial migration for a django project with a new database that I'm deploying on the production server for the first time.
I suspected the problem might be because one of the apps had a directory full of old migrations from a SQLite3 development environment; I cleared those out but it didn't help. I also searched and found references to people having the problem with multiple databases, but I only have one.
Django version is 1.11.6 on Python 3.5.4, with mysqlclient 1.3.12.
Some considerations:
Are you calling the ContentType.objects manager anywhere in code that may run before the db has been built?
I am currently facing this issue and need a way to check that the db table has been built before I can look up any ContentTypes.
I ended up creating a method to check the tables to see if the table had been created; not sure if it will also help you:
def get_content_type(cls):
    from django.contrib.contenttypes.models import ContentType
    from django.db import connection
    if 'django_content_type' in connection.introspection.table_names():
        return ContentType.objects.get_for_model(cls)
    else:
        return None
As for migrations, my understanding is that they should always belong in your version control repo; however, you can squash or edit them as required, or even rebuild them. This link helped me with some migration problems:
Reset Migrations
Answering my own question:
UMDA's comment was right. I have some initialization code for the django-import-export module that looks at content_types, and evidently I have never deployed the app from scratch in a new environment since I wrote it.
Lessons learned / solution:
I will wrap the offending code in an exception block, since I should only hit this exception once, when deploying in a new environment (a sketch of this follows below).
I will test clean deployments in a new environment more regularly.
(edit to add) Consider whether your migrations directories belong in .gitignore. For my purposes they do.
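For reference, a minimal sketch of the first point; configure_import_export_resources is a hypothetical placeholder for whatever initialization code touches ContentType at import time:
from django.db.utils import OperationalError, ProgrammingError

try:
    # Hypothetical placeholder for the django-import-export setup code that
    # looks up content types at startup.
    configure_import_export_resources()
except (OperationalError, ProgrammingError):
    # django_content_type does not exist yet, e.g. during the very first
    # migrate on a fresh database; skip and let the migration create it.
    pass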
(Relatively new to stackoverflow etiquette - how do I credit UMDA's comment for putting me on the right track?)
I had the same issue when trying to create a generic ModelView (where the model name would be passed as a variable in urls.py). I was handling this in a kind of silly way:
Bad idea: a function that returns a generic class-based view
views.py
from django.contrib.auth.mixins import LoginRequiredMixin
from django.contrib.contenttypes.models import ContentType
from django.views.generic.edit import DeleteView

def get_generic_delete_view(model_name):
    model_type = ContentType.objects.get(app_label='myapp', model=model_name)

    class _GenericDelete(LoginRequiredMixin, DeleteView):
        model = model_type.model_class()
        template_name = "confirm_delete.html"

    return _GenericDelete.as_view()
urls.py
from django.urls import path, include
from my_app import views

urlpatterns = [
    path("mymodels/<name>/delete/", views.get_generic_delete_view("MyModel")),
]
Anyway. Let's not dwell in the past.
This was fixable by properly switching to a class-based view, instead of whatever infernal hybrid is outlined above, since (according to this SO post) a class-based view isn't instantiated until request-time.
Better idea: actual generic class-based view
views.py
from django.contrib.auth.mixins import LoginRequiredMixin
from django.contrib.contenttypes.models import ContentType
from django.views.generic.edit import DeleteView

class GenericDelete(LoginRequiredMixin, DeleteView):
    template_name = "confirm_delete.html"

    def __init__(self, **kwargs):
        model = kwargs.pop("model")
        model_type = ContentType.objects.get(app_label='myapp', model=model)
        self.model = model_type.model_class()
        super().__init__()
urls.py
from django.urls import path, include
from my_app import views

urlpatterns = [
    path("mymodels/<name>/delete/", views.GenericDelete.as_view(model="MyModel")),
]
May you make new and better mistakes.
Chipping in because maybe this option will appeal better in some scenarios.
Most of the project's imports usually cascade down from your urls.py. What I usually do is wrap the urls.py imports in a try/except statement and only create the routes if all imports were successful.
What this accomplishes is to create your project's / app's routes only if the modules were imported. If there is an error because the tables do not yet exist, it will be ignored, and the migrations will be done. In the next run, hopefully, you will have no errors in your imports and everything will run smoothly. But if you do, it's easy to spot because you won't have any URLs. Also, I usually add an error log to guide me through the issue in those cases.
A simplified version would look like this:
# Workaround to avoid programming errors on greenfield migrations
register_routes = True
try:
    from myapp.views import CoolViewSet
    # More imports...
except Exception as e:
    register_routes = False
    logger.error("Avoiding creation of routes. Error on import: {}".format(e))

if register_routes:
    # Add your url patterns here
Now, maybe you can combine this with Omar's answer for a more sensible, less catch-all solution.

Python error: argument -c/--conf is required

I'm new to Python; my native language is C. I'm writing a Python program for a surveillance system triggered by motion, using OpenCV. I based my code on the one made by Adrian Rosebrock on his blog pyimagesearch.com. Originally the code was developed for a Raspberry Pi with a Pi Camera module attached to it; now I'm trying to adapt it to my notebook's webcam. He made an easier tutorial about a simple code for motion detection and it worked very nicely on my PC, but I'm having a hard time with this other code. It's probably a silly mistake, but as a beginner I couldn't find a specific answer to this issue.
This image shows the part of the code that is causing the error (line 15) and the structure of the project on the left side of the screen: Image of python project for surveillance.
The similar part of the original code:
# import the necessary packages
from pyimagesearch.tempimage import TempImage
from dropbox.client import DropboxOAuth2FlowNoRedirect
from dropbox.client import DropboxClient
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import warnings
import datetime
import imutils
import json
import time
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--conf", required=True,
help="path to the JSON configuration file")
args = vars(ap.parse_args())
# filter warnings, load the configuration and initialize the Dropbox
# client
warnings.filterwarnings("ignore")
conf = json.load(open(args["conf"]))
client = None
So far I have only changed these things:
Excluded the imports related to the Pi camera.
Changed camera = PiCamera() to camera = cv2.VideoCapture(0), so that I use the notebook's webcam.
Excluded:
camera.resolution = tuple(conf["resolution"])
camera.framerate = conf["fps"]
rawCapture = PiRGBArray(camera, size=tuple(conf["resolution"]))
Substituted the line for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True): with while True: (a rough sketch of the resulting loop follows this list).
Excluded the two lines that were rawCapture.truncate(0).
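Roughly, the adapted capture loop now looks like this (a simplified sketch of the changes described above, not the full program; names are illustrative):
import cv2

camera = cv2.VideoCapture(0)
while True:
    grabbed, frame = camera.read()  # replaces the capture_continuous loop
    if not grabbed:
        break
    # ... motion detection / upload logic from the tutorial goes here ...
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()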
There are probably more things to fix; if you notice any, please tell me, but first I'd like to understand how to solve that error message. I use PyCharm on Windows 7 with Python 2.7 and OpenCV 3.1. Sorry for not posting the entire code, but since this is my first question on the site and I have 0 reputation, apparently I can only post 2 links. The entire original code is on pyimagesearch.com. Thank you for your time!
I think you are probably not running it properly. The error message is clear: you defined a required argument, which means you need to provide it when running the script, and you are not doing that.
Check how he runs it in the tutorial link you provided:
http://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv#crayon-56d3c5551ac59089479643
Notice the Figure 6 screen capture in @Rhoit's link.
python pi_surveillance.py --conf conf.json
The program was started with the script name followed by --conf conf.json.
In your code:
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--conf", required=True,
help="path to the JSON configuration file")
ap is a piece of code that reads these inputs from the command line and parses the information. This definition specifies that a --conf argument is required, as demonstrated in Figure 6.
The error indicates that you omitted this information:
argument -c/--conf is required
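In other words, either pass the flag when running the script (python pi_surveillance.py --conf conf.json, or add --conf conf.json to the Parameters field of your PyCharm run configuration), or give the argument a default so a plain run works. A minimal sketch of the second option; note that dropping required=True is a change to the tutorial code, not something it does itself:
import argparse

ap = argparse.ArgumentParser()
# Instead of required=True, fall back to a default path so the script can be
# launched without any command-line arguments (e.g. straight from PyCharm).
ap.add_argument("-c", "--conf", default="conf.json",
                help="path to the JSON configuration file")
args = vars(ap.parse_args())
print(args["conf"])  # "conf.json" unless overridden with -c/--conf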

How to use flask-migrate with other declarative_bases

I'm trying to implement python-social-auth in Flask. I've ironed out tons of kinks whilst trying to interpret about 4 tutorials and a full Flask book at the same time, and feel I've reached sort of an impasse with Flask-Migrate.
I'm currently using the following code to create the tables necessary for python-social-auth to function in a flask-sqlalchemy environment.
from social.apps.flask_app.default import models
models.PSABase.metadata.create_all(db.engine)
Now, they're obviously using some form of their own Base, not related to my actual db object. This in turn causes Flask-Migrate to completely miss these tables and remove them in migrations. Now, obviously I can remove these table drops from every migration, but I can imagine it being one of those things that at some point will get forgotten, and all of a sudden I have no OAuth ties anymore.
I've gotten this solution to work with the usage (and modification) of the manage.py-command syncdb as suggested by the python-social-auth Flask example
Miguel Grinberg, the author of Flask-Migrate replies here to an issue that seems to very closely resemble mine.
The closest I could find on Stack Overflow was this, but it doesn't shed too much light on the entire thing for me, and the answer was never accepted (and I can't get it to work, though I have tried a few times).
For reference, here is my manage.py:
#!/usr/bin/env python
from flask.ext.script import Server, Manager, Shell
from flask.ext.migrate import Migrate, MigrateCommand
from app import app, db
manager = Manager(app)
manager.add_command('runserver', Server())
manager.add_command('shell', Shell(make_context=lambda: {
'app': app,
'db_session': db.session
}))
migrate = Migrate(app, db)
manager.add_command('db', MigrateCommand)
#manager.command
def syncdb():
from social.apps.flask_app.default import models
models.PSABase.metadata.create_all(db.engine)
db.create_all()
if __name__ == '__main__':
manager.run()
And to clarify, the db init / migrate / upgrade commands only create my user table (and the migration one obviously), but not the social auth ones, while the syncdb command works for the python-social-auth tables.
I understand from the GitHub response that this isn't supported by Flask-Migrate, but I'm wondering if there's a way to fold the PSABase tables in so they are picked up by the db object passed into Migrate.
Any suggestions welcome.
(Also, first-time poster. I feel I've done a lot of research and tried quite a few solutions before I finally came here to post. If I've missed something obvious in the guidelines of SO, don't hesitate to point that out to me in a private message and I'll happily oblige)
After the helpful answer from Miguel here, I got some new keywords to research. I ended up at a helpful GitHub page which had further references to, amongst others, the Alembic Bitbucket site, which helped immensely.
In the end I did this to my Alembic migration env.py-file:
from sqlalchemy import engine_from_config, pool, MetaData

[...]

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from flask import current_app
config.set_main_option('sqlalchemy.url',
                       current_app.config.get('SQLALCHEMY_DATABASE_URI'))

def combine_metadata(*args):
    m = MetaData()
    for metadata in args:
        for t in metadata.tables.values():
            t.tometadata(m)
    return m

from social.apps.flask_app.default import models

target_metadata = combine_metadata(
    current_app.extensions['migrate'].db.metadata,
    models.PSABase.metadata)
This seems to work absolutely perfectly.
The problem is that you have two sets of models, each with a different SQLAlchemy metadata object. The models from PSA were generated directly from SQLAlchemy, while your own models were generated through Flask-SQLAlchemy.
Flask-Migrate only sees the models that are defined via Flask-SQLAlchemy, because the db object that you give it only knows about the metadata for those models, it knows nothing about these other PSA models that bypassed Flask-SQLAlchemy.
So yeah, the end result is that each time you generate a migration, Flask-Migrate/Alembic finds these PSA tables in the db and decides to delete them, because it does not see any models for them.
I think the best solution for your problem is to configure Alembic to ignore certain tables. For this you can use the include_object configuration in the env.py module stored in the migrations directory. Basically you are going to write a function that Alembic will call every time it comes upon a new entity while generating a migration script. The function will return False when the object in question is one of these PSA tables, and True for every thing else.
Update: Another option, which you included in the response you wrote, is to merge the two metadata objects into one; then the models from your application and from PSA are inspected by Alembic together.
I have nothing against the technique of merging multiple metadata objects into one, but I think it is not a good idea for an application to track migrations in models that aren't yours. Many times Alembic will not be able to capture a migration accurately, so you may need to make minor corrections on the generated script before you apply it. For models that are yours, you are capable of detecting these inaccuracies that sometimes show up in migration scripts, but when the models aren't yours I think you can miss stuff, because you will not be familiar enough with the changes that went into those models to do a good review of the Alembic generated script.
For this reason, I think it is a better idea to use my proposed include_object configuration to leave the third party models out of your migrations. Those models should be migrated according to the third party project's instructions instead.
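To make the include_object suggestion concrete, here is a minimal sketch of what it could look like in migrations/env.py. The table names in the set are assumptions about what python-social-auth creates; check your actual database and adjust the list:
# Sketch for migrations/env.py: tell Alembic's autogenerate to ignore the
# third-party PSA tables. The names below are assumptions; list whatever
# tables python-social-auth actually created in your database.
PSA_TABLES = {
    'social_auth_usersocialauth',
    'social_auth_nonce',
    'social_auth_association',
    'social_auth_code',
}

def include_object(object, name, type_, reflected, compare_to):
    if type_ == 'table' and name in PSA_TABLES:
        return False  # never generate create/drop operations for these tables
    return True

# ...and pass it to the context, e.g.:
# context.configure(connection=connection,
#                   target_metadata=target_metadata,
#                   include_object=include_object)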
I use two models, as follows.
One uses db:
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:' + POSTGRES_PASSWORD + '@localhost/Flask'
db.init_app(app)

class User(db.Model):
    pass
The other uses Base:
from sqlalchemy import create_engine, MetaData
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
uri = 'postgresql://postgres:' + POSTGRES_PASSWORD + '@localhost/Flask'
engine = create_engine(uri)
metadata = MetaData(engine)
Session = sessionmaker(bind=engine)
session = Session()

class Address(Base):
    pass
Since you created User with db.Model, you can use Flask-Migrate on User, while the Address class uses Base, which handles fetching the pre-existing table from the database.

Automatically encode points to geojson with as_json in rails3

A Rails service I am currently working on requires that points are returned as a GeoJSON object within our JSON response. We are using rgeo and the mysql2spatial adapter to represent these points in our application, and I would like to use the rgeo-geojson gem to handle the encoding if possible (we already use it to decode GeoJSON on POST).
I am currently overriding as_json with the following code to achieve this:
def as_json(params)
  l = { :lat_lng => ::RGeo::GeoJSON.encode(lat_lng) }
  self.attributes.merge(l).as_json
end
However, this is not optimal, as the root (e.g. object: {}) is missing. Is there a function to easily include it? (A lot of our models have a lat_lng associated with them, so I'd rather not hard-code it.)
Any tips for a ruby/rails beginner would be greatly appreciated
For posterity, I fixed this in the "rgeo-activerecord" gem, version 0.3.4, after getting several reports on it. By default it renders spatial columns in WKT. To switch it to GeoJSON, set this:
RGeo::ActiveRecord::GeometryMixin.set_json_generator(:geojson)
The answer by NielsV will work sometimes but not every time. Specifically, it will work for geographic factories (i.e. geometry columns in PostGIS) but not for GEOS-backed factories.
You can specify it by including root with this line of code:
ActiveRecord::Base.include_root_in_json = true
I hope this helps.
I solved this by extending the RGeo library with an as_json method for a Point; doing this, it's no longer required to override as_json in my own models. Thanks for your response though.
module RGeo
  module Feature
    module Point
      def as_json(params)
        ::RGeo::GeoJSON.encode(self)
      end
    end
  end
end