SQLAlchemy converts geometry to bytes using ST_AsBinary

I have a SQLAlchemy model with a column of type geometry, defined like this:
point_geom = Column(Geometry('POINT'), index=True)
I'm using the geoalchemy2 module:
from geoalchemy2 import Geometry
Then I make my queries using the SQLAlchemy ORM, and everything works fine. For example:
data = session.query(myModel).filter_by(...)
My problem arises when I need to get the SQL statement for the query object. I use the following code:
sql = data.statement.compile(dialect=postgresql.dialect())
But the geometry column is converted to bytes with ST_AsBinary, so the resulting SQL statement is this:
SELECT column_a, column_b, ST_AsBinary(point_geom) AS point_geom
FROM tablename WHERE ...
What should be done to avoid converting the geometry type to bytes?

I had the same problem when I was working with Flask-SQLAlchemy and GeoAlchemy2, and I solved it as follows.
You just need to create a new subclass of the Geometry type.
If you look at the documentation, the relevant arguments of the Geometry type are:
ElementType - the type of the returned element; by default it's WKBElement (well-known binary element)
as_binary - the conversion function to use; by default it's ST_AsEWKB, which is what causes the problem in your case
from_text - the geometry constructor used to create, insert and update elements; by default it's ST_GeomFromEWKT
So what did I do? I created a new subclass with the required function, element type and constructor, and used this Geometry type in my DB models as I always do.
from geoalchemy2 import Geometry as BaseGeometry
from geoalchemy2.elements import WKTElement

class Geometry(BaseGeometry):
    from_text = 'ST_GeomFromText'
    as_binary = 'ST_AsText'
    ElementType = WKTElement
As you can see, I changed only these three attributes of the base class.
Queries will now return the column as a WKT string instead of binary.
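With the subclass in place, compiling the same query should now wrap the column in ST_AsText. A minimal sketch, reusing the model and session from the question (filter values are placeholders):
from sqlalchemy.dialects import postgresql

# point_geom now uses the subclassed Geometry type
data = session.query(myModel).filter_by(column_a='value')
print(data.statement.compile(dialect=postgresql.dialect()))
# Expected shape of the output:
# SELECT column_a, column_b, ST_AsText(point_geom) AS point_geom FROM tablename WHERE ...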

I think you can specify the conversion in your query. Something like this:
from geoalchemy2.functions import ST_AsGeoJSON
query = session.query(ST_AsGeoJSON(YourModel.geom_column))
That should change your conversion. There are many conversion functions listed in the GeoAlchemy2 documentation.
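For example, compiling such a query against the PostgreSQL dialect should show the chosen conversion function in the SELECT list (model and column names are placeholders):
from sqlalchemy.dialects import postgresql
from geoalchemy2.functions import ST_AsGeoJSON

query = session.query(ST_AsGeoJSON(YourModel.geom_column))
print(query.statement.compile(dialect=postgresql.dialect()))
# The rendered SQL selects ST_AsGeoJSON(tablename.geom_column) instead of ST_AsBinary(...)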

How to get the vendor type for a SQLAlchemy generic type without creating a table?

Using the code shown below I can obtain the vendor type that corresponds to the SQLAlchemy generic type. In this case it is "VARCHAR(10)". How can I get the vendor type without creating a table?
engine = create_engine(DB_URL)
metadata_obj = MetaData()
table = Table('Table', metadata_obj,
              Column('Column', types.String(10))
)
metadata_obj.create_all(bind=engine)

metadata_obj = MetaData()
metadata_obj.reflect(bind=engine)
print(metadata_obj.tables['Table'].columns[0].type)
You can't obtain the type directly, but you could use a mock_engine to generate the DDL as a string, which can then be parsed. A mock_engine must be coupled with a callable that will process the SQL expression objects it generates.
This snippet is based on the example code from the SQLAlchemy docs.
import sqlalchemy as sa

tbl = sa.Table('drop_me', sa.MetaData(), sa.Column('col', sa.String(10)))

def dump(sql, *multiparams, **params):
    # Compile each DDL element against the mock engine's dialect and print it
    print(sql.compile(dialect=mock_engine.dialect))

mock_engine = sa.create_mock_engine('postgresql://', executor=dump)
tbl.create(mock_engine)
Outputs:
CREATE TABLE drop_me (
    col VARCHAR(10)
)
sqlalchemy.schema.CreateTable could also be used, but binding it to an engine is deprecated, to be removed in SQLAlchemy 2.0:
from sqlalchemy.schema import CreateTable
print(CreateTable(tbl, bind=some_engine))
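In SQLAlchemy 1.4 and 2.0, the unbound form compiles the construct against a dialect directly; the generic type can even be compiled on its own, which answers the original question without creating anything. A short sketch:
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

# Compile the DDL construct against a dialect; no engine or connection needed
print(CreateTable(tbl).compile(dialect=postgresql.dialect()))

# Compile the generic type directly to its vendor type
print(sa.String(10).compile(dialect=postgresql.dialect()))  # VARCHAR(10)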

How to push `dict` column into Redshift SUPER type column using `pandas.to_sql`?

AWS Redshift's SUPER datatype allows columns to hold JSON-like data.
This guide explains how to do it via the COPY command or using an INSERT statement. The INSERT statement requires the JSON_PARSE function to be applied to the column value, as shown here.
How can I use the pandas.DataFrame.to_sql function to implement the above behaviour?
df.to_sql('table', connection, schema='my_schema', if_exists='append', dtype=type_dict)
The above is used to execute INSERT statements.
I tried using
type_dict = {
    'my_json_column': sqlalchemy.types.JSON,
}
However, my Redshift table ends up with "\" escape characters within the string, so the SUPER column defined in the target Redshift table holds string values, not JSON.
How can I leverage the pandas.DataFrame.to_sql function to implement JSON_PARSE functionality in Redshift, and is there no way around writing INSERT queries?
GENUINE REQUEST TO COMMUNITY: Please be friendly when you answer this, and feel free to comment if the question is not clear to you. I will revisit and reiterate.
You need to use SQLAlchemy types and the sqlalchemy-redshift dialect. In addition, you need to register psycopg2 adapters.
import pandas as pd
import sqlalchemy as sa
import sqlalchemy_redshift as sar
from psycopg2.extensions import register_adapter
from psycopg2.extras import Json

# Send Python dicts and lists to the database as JSON literals
register_adapter(dict, Json)
register_adapter(list, Json)

rs_url = 'redshift+psycopg2://username:password@cluster_url.redshift.amazonaws.com:5439/db_name'

dict_types = {
    'responseID': sa.types.INTEGER(),
    'surveyID': sa.types.INTEGER(),
    'surveyName': sa.types.NVARCHAR(length=65535),
    'timestamp': sa.types.DateTime(),
    'location': sar.dialect.SUPER()
}

df = pd.read_json('my_file.json')
df.to_sql('table_name', con=rs_url,
          chunksize=100, method='multi', if_exists='replace',
          index=False, schema='schema_name', dtype=dict_types)
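As a side note, the register_adapter calls are what make psycopg2 serialize dicts and lists as JSON literals; a quick standalone check, no database needed:
from psycopg2.extras import Json

# Json wraps a Python object and quotes it as a JSON string literal
print(Json({'a': 1, 'b': [2, 3]}).getquoted())
# roughly: b'\'{"a": 1, "b": [2, 3]}\''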

Insert data into JSON column in postgres using JOOQ

I have a Postgres database that I read from and write to using jOOQ. One of my DB tables has a column of type JSON. When I try to insert data into this column using the query below, I get the error:
Exception in thread "main" org.jooq.exception.DataAccessException: SQL [update "public"."asset_state" set "sites_as_json" = ?]; ERROR: column "sites_as_json" is of type json but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
Below is the code for inserting data into the column
SiteObj s1 = new SiteObj();
s1.setId("1");
s1.setName("Site1");
s1.setGeofenceType("Customer Site");
SiteObj s2 = new SiteObj();
s2.setId("2");
s2.setName("Site2");
s2.setGeofenceType("Customer Site");
List<SiteObj> sitesList = Arrays.asList(s1, s2);
int result = this.dsl.update(as).set(as.SITES_AS_JSON, LambdaUtil.convertJsonToStr(sitesList)).execute();
The call LambdaUtil.convertJsonToStr(sitesList) outputs a string that looks like this...
[{"id":"1","name":"Site1","geofenceType":"Customer Site"},{"id":"2","name":"Site2","geofenceType":"Customer Site"}]
What do I need to do to be able to insert into the JSON column?
Current jOOQ versions
jOOQ natively supports JSON and JSONB data types. You shouldn't need to do anything custom.
Historic answer
For jOOQ to correctly bind your JSON string to the JDBC driver, you will need to implement a data type binding as documented here:
https://www.jooq.org/doc/latest/manual/code-generation/custom-data-type-bindings
The important bit is that your generated SQL needs to produce an explicit type cast, for example:
@Override
public void sql(BindingSQLContext<JsonElement> ctx) throws SQLException {
    // Depending on how you generate your SQL, you may need to explicitly distinguish
    // between jOOQ generating bind variables or inlined literals.
    if (ctx.render().paramType() == ParamType.INLINED)
        ctx.render().visit(DSL.inline(ctx.convert(converter()).value())).sql("::json");
    else
        ctx.render().sql("?::json");
}

How to use MySQL's standard deviation (STD, STDDEV, STDDEV_POP) function inside SQLAlchemy?

I need to use MySQL's STD function through SQLAlchemy, but after a couple of minutes of searching, it looks like there is no func.<...> way of using this one in SQLAlchemy. Is it not supported, or am I missing something?
I found this issue while coding some aggregates with SQLAlchemy.
Citing the docs:
Any name can be given to func. If the function name is unknown to SQLAlchemy, it will be rendered exactly as is. For common SQL functions which SQLAlchemy is aware of, the name may be interpreted as a generic function which will be compiled appropriately to the target database.
Basically, func will generate a SQL function matching whatever attribute you access on it if it's not a common function that SQLAlchemy is aware of (like func.count).
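You can see this without a database by compiling expressions directly; both the unknown and the known function names render as expected:
from sqlalchemy import column, func

# 'std' is unknown to SQLAlchemy, so it is rendered exactly as written
print(func.std(column('foo_id')))    # std(foo_id)
# 'count' is a generic function SQLAlchemy knows about
print(func.count(column('foo_id')))  # count(foo_id)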
To keep the advantages of RDBMS abstraction that come with any ORM, I always suggest using ANSI functions to decouple the code from the DB engine.
For a working sample you can add a connection string and execute the following code:
from sqlalchemy.orm import sessionmaker
from sqlalchemy import func, create_engine, Column
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import DateTime, Integer, String

# Add your connection string
engine = create_engine('My Connection String')
Base = declarative_base(engine)
Session = sessionmaker(bind=engine)
db_session = Session()

# Make sure to have a table foo in the db with foo_id, bar, baz columns
class Foo(Base):
    __tablename__ = 'foo'
    __table_args__ = {'autoload': True}

query = db_session.query(
    func.count(Foo.bar).label('count_agg'),
    func.avg(Foo.foo_id).label('avg_agg'),
    func.stddev(Foo.foo_id).label('stddev_agg'),
    func.stddev_samp(Foo.foo_id).label('stddev_samp_agg')
)
print(query.statement.compile())
It will generate the following SQL
SELECT count(foo.bar) AS count_agg,
avg(foo.foo_id) AS avg_agg,
stddev(foo.foo_id) AS stddev_agg,
stddev_samp(foo.foo_id) AS stddev_samp_agg
FROM foo

How do we update an HSTORE field with Flask-Admin?

How do I update an HSTORE field with Flask-Admin?
The regular ModelView doesn't show the HSTORE field in the Edit view. It shows nothing; no control at all. In the List view, it shows a column with data in JSON notation. That's fine with me.
Using a custom ModelView, I can change the HSTORE field into a TextAreaField. This shows me the HSTORE field in JSON notation in the Edit view, but I cannot edit/update it. In the List view, it still shows me the object in JSON notation. Looks fine to me.
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
When I attempt to save/edit the JSON, I receive this error:
sqlalchemy.exc.InternalError
InternalError: (InternalError) Unexpected end of string
LINE 1: UPDATE mytable SET attributes='{}' WHERE mytable.id = ...
^
'UPDATE mytable SET attributes=%(attributes)s WHERE mytable.id = %(mytable_id)s' {'attributes': u'{}', 'mytable_id': 14L}
Now, using code, I can get something to save into the HSTORE field:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        model.attributes = {"a": "1"}
        return
This basically overrides the model and puts this object into it. I can then see the object in the List view and the Edit view. Still not good enough; I want to save/edit the object that the user typed in.
I tried to parse the content from the form as JSON and save it back out. This doesn't work:
class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)

    def on_model_change(self, form, model, is_created):
        x = form.data['attributes']
        y = json.loads(x)
        model.attributes = y
        return
json.loads(x) says this:
ValueError: Expecting property name: line 1 column 1 (char 1)
and here are some sample inputs that fail:
{u's': u'ff'}
{'s':'ff'}
However, this input works:
{}
A blank value also works.
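Looking at them again, the failing inputs are Python dict reprs rather than JSON, which would explain the parse error; a quick check with ast.literal_eval (just to illustrate the difference, not necessarily the right fix):
import ast
import json

x = "{u's': u'ff'}"      # a Python dict repr, not valid JSON
y = ast.literal_eval(x)  # evaluates the Python literal safely
print(json.dumps(y))     # {"s": "ff"} is valid JSON and parses fine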
This is my SQL Table:
CREATE TABLE mytable (
    id BIGSERIAL UNIQUE PRIMARY KEY,
    attributes hstore
);
This is my SQLAlchemy model:
class MyTable(Base):
    __tablename__ = u'mytable'
    id = Column(BigInteger, primary_key=True)
    attributes = Column(HSTORE)
Here is how I add the view to the admin object:
admin.add_view(ModelView(models.MyTable, db.session))
Or, adding the view using the custom model view:
admin.add_view(MyView(models.MyTable, db.session))
(I don't register both views at the same time; that causes a Blueprint name collision error, which is a separate issue.)
I also attempted to use a form field converter. I couldn't get it to actually hit the code.
class MyModelConverter(AdminModelConverter):
    def post_process(self, form_class, info):
        raise Exception('here I am')  # but it never hits this
        return form_class

class MyView(ModelView):
    form_overrides = dict(attributes=fields.TextAreaField)
This answer gives you a bit more than you asked for.
First of all, it "extends" hstore to be able to store actual JSON, not just flat key-value pairs.
So this structure is also OK:
{"key":{"inner_object_key":{"Another_key":"Done!","list":["no","problem"]}}}
First, your ModelView should use a custom converter:
class ExtendedModelView(ModelView):
    model_form_converter = CustomAdminConverter
The converter itself should know how to handle the HSTORE dialect type:
class CustomAdminConverter(AdminModelConverter):
    @converts('sqlalchemy.dialects.postgresql.hstore.HSTORE')
    def conv_HSTORE(self, field_args, **extra):
        return DictToHstoreField(**field_args)
As you can see, this uses a custom WTForms field which converts the data in both directions:
class DictToHstoreField(TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        else:
            for key, obj in value.items():
                # Values that look like serialized JSON objects/arrays are parsed back
                if (obj.startswith("{") and obj.endswith("}")) or (obj.startswith("[") and obj.endswith("]")):
                    try:
                        value[key] = json.loads(obj)
                    except ValueError:
                        pass  # leave the raw string as-is
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
            for key, obj in self.data.items():
                # hstore values must be strings, so re-serialize nested structures
                if isinstance(obj, (dict, list)):
                    self.data[key] = json.dumps(obj)
                if isinstance(obj, int):
                    self.data[key] = str(obj)
The final step is to actually use this data in the application.
I did not do this in a generic, nice way for SQLAlchemy, since I used it with flask-restful, so I only have an adaptation for flask-restful in one direction, but I think it's easy to get the idea from here and do the rest.
If your case is simple key-value storage, nothing additional needs to be done; just use it as is.
But if you want to unwrap the JSON somewhere in your code, it's as simple as this; just wrap it in a function wherever you use it (see the helper sketch after the snippet):
if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
    value = json.loads(value)
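Wrapped in a helper function, as suggested, that might look like (the name unwrap_json is just illustrative):
import json

def unwrap_json(value):
    # Parse strings that look like JSON objects or arrays; return anything else unchanged
    if (value.startswith("{") and value.endswith("}")) or (value.startswith("[") and value.endswith("]")):
        return json.loads(value)
    return value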
Creating a dynamic field for a nicer, non-JSON way of editing the data is also possible by extending FormField and adding some JavaScript for adding/removing fields, but that is a whole different story; in my case I needed actual JSON storage, with blackjack and lists :)
I was working with the Postgres JSON datatype. The above solution worked great with minor modifications.
I tried:
'sqlalchemy.dialects.postgresql.json.JSON',
'sqlalchemy.dialects.postgresql.JSON',
'dialects.postgresql.json.JSON',
'dialects.postgresql.JSON'
None of these versions worked. Finally, the following change worked:
@converts('JSON')
And I changed the DictToHstoreField class to the following:
class DictToJSONField(fields.TextAreaField):
    def process_data(self, value):
        if value is None:
            value = {}
        self.data = json.dumps(value)

    def process_formdata(self, valuelist):
        if valuelist:
            self.data = json.loads(valuelist[0])
        else:
            self.data = '{}'
Although this might not be the answer to your question: by default, SQLAlchemy's ORM doesn't detect in-place changes to HSTORE field values. Fortunately there's a solution, SQLAlchemy's MutableDict type:
from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'mytable'
    id = Column(Integer, primary_key=True)
    attributes = Column(MutableDict.as_mutable(HSTORE))
Now when you change something in place:
my_object.attributes['some_key'] = 'some value'
the hstore field will be updated when you commit the session.
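If you'd rather keep the plain HSTORE column type, another option I'm aware of is to flag the attribute as modified by hand after an in-place change:
from sqlalchemy.orm.attributes import flag_modified

my_object.attributes['some_key'] = 'some value'
flag_modified(my_object, 'attributes')  # explicitly tell the ORM the dict changed
session.commit()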