Using SQLAlchemy with flask_sqlalchemy and alembic for PostgreSQL. I have the following definition for a field:
date_updated = db.Column(db.DateTime, server_default=db.func.now(), server_onupdate=db.func.now())
However the field never updates when the record is modified. It is set on create and never updates. This is what is generated by alembic to create the table:
sa.Column('date_updated', sa.DateTime(), server_default=sa.text('now()'), nullable=True),
So it's no wonder that it's not being updated, since the server_onupdate param doesn't seem to be getting passed through to Alembic.
I'm not sure of the right way to do this. The SQLAlchemy documentation is frustratingly complex and unclear where this is concerned.
Edit: From looking at how to do this in PostgreSQL directly, it looks like it requires the use of triggers. I would prefer to do it at the DB level rather than at the application level if possible, but I don't know if I can add a trigger through SQLAlchemy. I can add it directly at the DB but that could break when migrations are applied.
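One idea I'm considering is putting the trigger into an Alembic migration itself, so it travels with the schema history. A rough sketch of what I mean (table, column, and function names here are just placeholders):
from alembic import op

def upgrade():
    # Assumes a table named 'my_table' with a 'date_updated' column.
    op.execute("""
        CREATE OR REPLACE FUNCTION set_date_updated()
        RETURNS TRIGGER AS $$
        BEGIN
            NEW.date_updated = now();
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;
    """)
    op.execute("""
        CREATE TRIGGER my_table_set_date_updated
        BEFORE UPDATE ON my_table
        FOR EACH ROW
        EXECUTE PROCEDURE set_date_updated();
    """)

def downgrade():
    op.execute("DROP TRIGGER IF EXISTS my_table_set_date_updated ON my_table")
    op.execute("DROP FUNCTION IF EXISTS set_date_updated()")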
When you say "I'm not sure of the right way to do this", I'm not sure whether you mean updating the date on the server side specifically, or just updating it in general.
If you just want to update it and it doesn't matter how, the cleanest way in my opinion is to use event listeners.
Here's an example using plain SQLAlchemy; it will probably be the same (or at least very similar) in Flask-SQLAlchemy:
from datetime import datetime
from sqlalchemy import event

@event.listens_for(YourModel, 'before_insert')
@event.listens_for(YourModel, 'before_update')
def date_insert(mapper, connection, target):
    # Set the timestamp just before the row is written
    target.date_updated = datetime.utcnow()
I'm setting up alembic for our project, which is already really big, and has a lot of tables. The thing is that our project's DB has been managed via SQL for a long time, and our Alchemy models are almost all reflected like so (I obscured the name, but the rest is all directly from our code):
class SomeModel(Base, BaseModelMixin):
    """
    Model docstring
    """
    """Reflect Table"""
    __table__ = Table('some_models', metadata, autoload=True)
What's happening is that when I create an automatic migration, a lot of drop table (and a lot of create table) operations are created for some reason. I assumed it's because the model class doesn't explicitly define the tables, but I don't see why that would drop the tables as well.
I'm making sure all model definitions are processed before setting the target_metadata variable in env.py:
# this imports every model in our app
from shiphero_app.utils.sql_dependencies import import_dependencies
import_dependencies()
from shiphero_app.core.database import Base
target_metadata = Base.metadata
Any ideas what I might be missing here?
This is probably what you are looking for - this makes Alembic ignore predefined tables:
https://alembic.sqlalchemy.org/en/latest/cookbook.html#don-t-generate-any-drop-table-directives-with-autogenerate
Unfortunately, this also prevents Alembic from dropping tables that are within scope.
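For reference, the recipe at that link amounts to an include_object hook in env.py, roughly like this (the surrounding context.configure() arguments will depend on your existing env.py):
def include_object(object, name, type_, reflected, compare_to):
    # Skip tables that only exist in the database (reflected) and have no
    # corresponding object in the model metadata, so autogenerate won't
    # emit drop_table directives for them.
    if type_ == "table" and reflected and compare_to is None:
        return False
    return True

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object,
)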
This may sound like an opinion question, but it's actually a technical one: is there a standard process for keeping a simple data set in sync with Solr?
What I mean is this: let's say all I have is a list of something (we'll say books). The primary storage engine is MySQL. I see that Solr has a data import handler. I understand that I can use this to pull in book records on a first run - is it possible to use this for continuous migration? If so, would it work as well for updating books that have already been pulled into Solr as it would for pulling in new book records?
Otherwise, if the data import handler isn't the standard way to do it, what other ways are there? Thoughts?
Thank you very much for the help!
If you want to update documents from within Solr, I believe you'll need to use the UpdateRequestHandler as opposed to the DataImportHandler. I've never had need to do this where I work, so I don't know all that much about it. You may find this link of interest: Uploading Data With Index Handlers.
If you want to update Solr with records that have newly been added to your MySQL database, you would use the DataImportHandler for a delta-import. Basically, how it works is you have some kind of field in MySQL that shows the new record is, well, new. If the record is new, Solr will import it. For example, where I work, we have an "updated" field that Solr uses to determine whether or not it should import that record. Here's a good link to visit: DataImportHandler
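If it helps to see the delta idea outside of DIH configuration, here is a rough application-level sketch in Python of the same concept: rows changed since the last sync are pushed to Solr's JSON update endpoint. The table, column, core, and connection details here are all made up.
import requests
import pymysql

SOLR_UPDATE_URL = "http://localhost:8983/solr/books/update?commit=true"

def sync_since(last_sync):
    # Fetch rows whose 'updated' timestamp is newer than the last sync time.
    conn = pymysql.connect(host="localhost", user="app", password="secret", db="library")
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute(
            "SELECT id, title, author FROM books WHERE updated > %s",
            (last_sync,),
        )
        docs = [
            {"id": row["id"], "title": row["title"], "author": row["author"]}
            for row in cur.fetchall()
        ]
    if docs:
        # Solr treats documents with an existing id as updates, so this
        # covers both new and changed books.
        requests.post(SOLR_UPDATE_URL, json=docs).raise_for_status()
    conn.close()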
This question looks similar to something we do, but not with SQL: we use HBase (the Hadoop-stack database). There we have the HBase Indexer, which, after mapping the database to Solr, listens for new-row events in HBase and then executes code to fetch those values from the database and add them to Solr. I'm not sure whether an equivalent exists for SQL, but the concept looks similar. In SQL, I know triggers can listen for inserts and updates; on those events you could trigger something that executes the steps to add the records to Solr in a continuous manner.
I have a Pyramid application that I am using with SQLAlchemy and MySQL. For database fields that I wanted to treat as boolean, I've been using a "BIT" data type on the SQLAlchemy side, and BIT(1) on the MySQL side.
This had all been working fine, but I was checking some newly updated code on my web host, whose version of phpMyAdmin is newer than the one I'm using locally. When I browse a table that has a BIT field, none of the data appears in the newer phpMyAdmin; it's just blank. On my local instance, BIT fields display as 0 or 1. If I try to inline-edit in the hosted phpMyAdmin, it won't accept any value I enter. My application code, however, appears to be able to toggle the true/false values just fine.
That got me wondering: with this setup, should I be approaching it differently? SQLAlchemy does support Boolean, which seems more intuitive and appropriate. Should I use that and set the MySQL fields to TINYINT instead?
What is the conventionally accepted way to handle booleans between SQLAlchemy and MySQL?
MySQL has a BOOL type (which is what SQLAlchemy uses), so I'm not sure why you wouldn't just use that. Apparently it is an alias for TINYINT.
from sqlalchemy import Boolean and you should be good to go.
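For example, a column definition along these lines should do it (the model and column names here are just placeholders):
from sqlalchemy import Boolean, Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Book(Base):
    __tablename__ = "books"
    id = Column(Integer, primary_key=True)
    # Renders as BOOL/TINYINT(1) on MySQL and comes back as Python True/False.
    in_print = Column(Boolean, nullable=False, default=False)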
On my local machine I develop my Rails application using MySQL, but I deploy to Heroku, which uses PostgreSQL. I need to create a new data type, specifically one I want to call longtext, and it will need to map to a different column type in each database.
I have been searching for this. My basic idea is that I am going to need to override some hash inside of the ActiveRecord::ConnectionAdapters::*SQL adapter(s) but I figured I would consult the wealth of knowledge here to see if this is a good approach (and, if possible, pointers on how to do it) or if there is a quick win another way.
Right now the column type is "string" and I am getting failed inserts because the data is too long for it. I want the same functionality on both MySQL and PostgreSQL, but it looks like there is no common data type that gives me an unlimited text blob column type?
The idea is that I want to have this application working correctly (with migrations) for both database technologies.
Much appreciated.
Why don't you install PostgreSQL on your dev machine? Download it, click "ok" a few times and you're up and running. It isn't rocket science :-)
http://www.postgresql.org/download/
PostgreSQL doesn't limit you to the built-in data types; you can create anything you want, it's up to your imagination:
CREATE DOMAIN (simple stuff only)
CREATE TYPE (unlimited)
The SQL that Frank mentioned is actually the answer, but I was really looking for a more specific way to do RDBMS-specific Rails migrations. The reason is that I want to keep my application able to run on both PostgreSQL and MySQL.
class AddLongtextToPostgresql < ActiveRecord::Migration
  def self.up
    case ActiveRecord::Base.connection.adapter_name
    when 'PostgreSQL'
      execute "CREATE DOMAIN longtext as text"
      execute "ALTER TABLE chapters ALTER COLUMN html TYPE longtext"
      execute "ALTER TABLE chapters ALTER COLUMN body TYPE longtext"
    else
      puts "This migration is not supported on this platform."
    end
  end

  def self.down
  end
end
That is effectively what I was looking for.
I'm trying to manually manage some geometry (spatial) columns in a rails model.
When updating the geometry column I do this in rails:
self.geom="POINTFROMTEXT('POINT(#{lat},#{lng})')"
That is the value I want to appear in the SQL update so it gets evaluated by the database. However, by the time this has been through the ActiveRecord magic, it comes out as:
INSERT INTO `places` (..., `geom`) VALUES(...,'POINTFROMTEXT(\'POINT(52.2531519,20.9778386)\')')
In other words, the quotes are escaped. This is fine for the other columns, as it prevents SQL injection, but not for this one. The values are guaranteed to be floats, and I want the update to look like:
INSERT INTO `places` (..., `geom`) VALUES(...,'POINTFROMTEXT('POINT(52.2531519,20.9778386)')')
So is there a way to turn escaping off for a particular column? Or a better way to do this?
(I've tried using GeoRuby + Spatial Adapter, but Spatial Adapter seems too buggy to me, plus I don't need all the functionality - hence trying to do it directly.)
The Rails Spatial Adapter should implement exactly what you need. Although, before I found GeoRuby & Spatial Adapter, I was doing this:
Have two fields on the model: one text field and a real geometry field
In an after_save hook, I ran something like this:
connection.execute "update mytable set geom_column=#{text_column} where id=#{id}"
But the solution above was just a hack, and it has additional issues: I can't create a spatial index if the column allows NULL values, MySQL doesn't let me set a default value on a geometry column, and the save method fails if the geometry column doesn't have a value set.
So I would try GeoRuby & Spatial Adapter instead, or reuse some of its code (in my case, I am considering extracting only the GIS-aware MysqlAdapter#quote method from the Spatial Adapter code).
You can use an after_save callback and write the values with a direct SQL UPDATE call. Annoying, but it should work.
You should be able to create a trigger in your DB migration using the 'execute' method... but I've never tried it.
Dig into ActiveRecord's calculate functionality: max/min/avg, etc. Not sure whether this saves you much over the direct SQL call in after_save. See calculations.rb.
You could patch the function that quotes the attributes (look for POINTFROMTEXT and then skip the quoting). This is pretty easy to find, as all the relevant methods start with quote. Start with ActiveRecord::Base#quote_value.