sqlalchemy change from immutable ARRAY type to mutable requires migration?

I use sqlalchemy 1.4 and alembic for migrations.
Previously, my column type looked like this:
has_bubble_in_countries = sa.Column(ARRAY(sa.Enum(Country)), nullable=False, default=[])
This did not let me persist additions or removals on the array: in-place changes to the list were never detected by the ORM.
Then I made it mutable by changing it like this:
has_bubble_in_countries = sa.Column(MutableList.as_mutable(ARRAY(sa.Enum(Country))), nullable=False, default=[])
Does this change require a migration? If so, which Alembic setting detects this kind of change?
My first thought was that this does not alter the column type, so I assumed no migration is needed.

Found the answer. The change from an immutable SQLAlchemy ARRAY type to a mutable one is just a Python-side behavioral change (the ORM starts tracking in-place mutations), not a change in the database schema, so no migration is needed.
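To illustrate the difference, here is a minimal, self-contained sketch; the Country enum and the Profile model are stand-ins for the original code:

import enum
import sqlalchemy as sa
from sqlalchemy.ext.mutable import MutableList
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Country(enum.Enum):
    US = "US"
    DE = "DE"

class Profile(Base):
    __tablename__ = 'profile'
    id = sa.Column(sa.Integer, primary_key=True)
    # Without MutableList, profile.has_bubble_in_countries.append(...)
    # mutates the list in place and the ORM never sees the change, so
    # nothing is flushed. With MutableList, the in-place append marks the
    # attribute dirty. The DDL emitted for the column is identical either way.
    has_bubble_in_countries = sa.Column(
        MutableList.as_mutable(sa.ARRAY(sa.Enum(Country))),
        nullable=False,
        default=list,  # a callable avoids sharing one [] across instances
    )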

The answer to this is "it depends".
If you are using a database that supports ALTER COLUMN ... TYPE (e.g. PostgreSQL), then Alembic can change the type of a column in place: op.alter_column() with the type_ argument emits that statement. If you are using a database that does not support it (e.g. SQLite), use Alembic's batch mode (op.batch_alter_table()) instead; it creates a new table with the new definition, copies the data over, and drops the old table. You can read more about it here:
https://alembic.sqlalchemy.org/en/latest/ops.html#alembic.operations.Operations.alter_column
https://alembic.sqlalchemy.org/en/latest/batch.html
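A sketch of what such a migration could look like (table, column, and types are illustrative placeholders, not taken from the question):

import sqlalchemy as sa
from alembic import op

def upgrade():
    # Emits ALTER TABLE my_table ALTER COLUMN my_column TYPE VARCHAR(64)
    # on backends that support it (e.g. PostgreSQL).
    op.alter_column(
        'my_table',
        'my_column',
        type_=sa.String(64),
        existing_type=sa.String(32),
        existing_nullable=True,
    )
    # On SQLite, use batch mode instead, which rebuilds the table:
    # with op.batch_alter_table('my_table') as batch_op:
    #     batch_op.alter_column('my_column', type_=sa.String(64))

def downgrade():
    op.alter_column(
        'my_table',
        'my_column',
        type_=sa.String(32),
        existing_type=sa.String(64),
        existing_nullable=True,
    )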

Related

SQLAlchemy date field not updating with server_onupdate

Using SQLAlchemy with flask_sqlalchemy and alembic for PostgreSQL. I have the following definition for a field:
date_updated = db.Column(db.DateTime, server_default=db.func.now(), server_onupdate=db.func.now())
However the field never updates when the record is modified. It is set on create and never updates. This is what is generated by alembic to create the table:
sa.Column('date_updated', sa.DateTime(), server_default=sa.text('now()'), nullable=True),
So it's no wonder that it's not being updated, since the server_onupdate param doesn't seem to make it past Alembic into the DDL.
I'm not sure of the right way to do this. The SQLAlchemy documentation is frustratingly complex and unclear where this is concerned.
Edit: From looking at how to do this in PostgreSQL directly, it looks like it requires the use of triggers. I would prefer to do it at the DB level rather than at the application level if possible, but I don't know if I can add a trigger through SQLAlchemy. I can add it directly at the DB but that could break when migrations are applied.
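For the DB-level route, the trigger can live inside an Alembic migration via op.execute(), so it is versioned along with the rest of the schema rather than added ad hoc. A minimal PostgreSQL sketch (function, trigger, and table names are illustrative):

from alembic import op

def upgrade():
    op.execute("""
        CREATE OR REPLACE FUNCTION set_date_updated() RETURNS trigger AS $$
        BEGIN
            NEW.date_updated = now();
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;
    """)
    # EXECUTE FUNCTION needs PostgreSQL 11+; use EXECUTE PROCEDURE on older versions.
    op.execute("""
        CREATE TRIGGER my_table_set_date_updated
        BEFORE UPDATE ON my_table
        FOR EACH ROW EXECUTE FUNCTION set_date_updated();
    """)

def downgrade():
    op.execute("DROP TRIGGER IF EXISTS my_table_set_date_updated ON my_table;")
    op.execute("DROP FUNCTION IF EXISTS set_date_updated();")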
The way you say "I'm not sure of the right way to do this", I'm not sure if you mean specifically updating the date on the server side, or just updating it in general.
If you just want to update it and it doesn't matter how, the cleanest way in my opinion is to use event listeners.
Here's an example using plain SQLAlchemy; it will probably be the same (or at least very similar) in flask_sqlalchemy:
from datetime import datetime
from sqlalchemy import event

@event.listens_for(YourModel, 'before_insert')
@event.listens_for(YourModel, 'before_update')
def date_insert(mapper, connection, target):
    # Stamp the row on every ORM INSERT and UPDATE.
    target.date_updated = datetime.utcnow()

How to support old schema versions with SQLAlchemy?

My application works with huge database contents for which we can't always migrate to the latest database schema version we designed at software upgrade time. And yes, we're using database migration tools (Alembic), but this doesn't yet allow us to have Python application code that can handle multiple schema versions. At some point in time when the system downtime is accepted, a migration to the latest version will be performed, but in the meantime the application code is required to be able to handle both (multiple) versions.
So, for example, we can offer Feature X only if the database migration has been performed. The application should also function if the migration hasn't been performed yet, just without offering Feature X and with a warning printed in the log. I see several ways of doing this with SQLAlchemy, but they all feel hackish and ugly. I'd like some advice on how to handle this properly.
Example:
Base = declarative_base()

class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(MyCustomType, nullable=False, primary_key=True)
    column_a = Column(Integer, nullable=True)
    column_b = Column(String(32))      # old schema
    column_b_new = Column(String(64))  # new schema
New schema version has a new column replacing the old one. Note that both the column name and column data specification change.
Another requirement is that the use of this class from other parts of the code must stay transparent, for backwards compatibility. Other components of the product will only become aware of the new column/datatype later. This means that if the application is initialized against the new schema, the old attribute still has to work: creating an object with MyTable(column_a=123, column_b="abc") should work with both the new and the old schema.
What would be the best way to move from here? Options to support two schemas I see:
1. Define two MyTable classes, one for each schema version, then determine the schema version (how?) and use the matching class. I think this pushes the which-schema logic into every place the MyTable class is used, and therefore breaks easily. Could the class attributes be linked to each other (column_b = column_b_new) for backward compatibility, and does that actually work?
2. Initialize the database normally and alter the MyTable class object based on the schema version detected. I'm not sure whether SQLAlchemy supports changing the attributes (columns) of a declarative base class after initialization.
3. Create a custom Mapper configuration as described here: http://docs.sqlalchemy.org/en/rel_1_1/orm/extensions/declarative/basic_use.html#mapper-configuration I'm not sure how to get from this SQLAlchemy feature to my desired solution. Perhaps a custom attribute set dynamically can be checked in a custom mapper function?
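Whichever option is picked, the schema-version check itself can be done with SQLAlchemy's runtime inspection. A minimal sketch, assuming the new schema is identified by the presence of column_b_new (the connection URL is illustrative):

import logging
from sqlalchemy import create_engine, inspect

log = logging.getLogger(__name__)

def has_new_schema(engine):
    # Inspect the live table in the database, not the declarative class.
    columns = {col['name'] for col in inspect(engine).get_columns('my_table')}
    return 'column_b_new' in columns

engine = create_engine('postgresql://localhost/mydb')  # illustrative
if not has_new_schema(engine):
    log.warning('Old schema detected; Feature X disabled until migration runs.')

For the backward-compatibility part, sqlalchemy.orm.synonym can expose the old attribute name as an alias for the new column (column_b = synonym('column_b_new')), which is one possible answer to the "does that actually work?" aside in option 1.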

MySQL auto-increment using JDBC

I have created a page and want to store its data in MySQL, but I want to implement the auto-increment from my Java program and pass the id as a parameter. How do I do that? I used a static count = 0 counter, but it is not working.
This is the code I am using:
static int count = 0;  // incremented with count++ before each save
CorruptionStory corruptionStory =
new CorruptionStory(count, new State(stateId,stateNameSelected),
age, new Department(deptId,departmentNameSelected),
positionOfOfficial,bribeAmount, description, sqlDate);
isSuccessfullySaved = CorruptionStoryJdbcImpl.
saveCorruptionStory(corruptionStory);
I don't think it's a good idea to have the java code manage the auto incrementing of the number, you should really configure your table schema to do it for you. Here is why:
if you restart your application, you will need to write code to figure out what number to resume with.
if you have multiple instances of your program running, they will somehow need to coordinate with each other so they don't use the same number.
MySQL column definitions allow you to specify auto-increment, see this:
http://dev.mysql.com/doc/refman/5.6/en/example-auto-increment.html
It would be much better if you wrote a SQL schema file to solve this problem. Then, when you do the INSERT statement from the Java program, you can omit that column, and MySQL will automatically set it to the next appropriate value.
Also, if you are willing to spend time studying Hibernate, you can use that. Hibernate is able to generate your SQL schema automatically for you, and even update the database for you at startup. It has an annotation that lets you mark a class field (table column) as an automatically incrementing id.
I should warn you though, hibernate is not something you're going to learn overnight.

Grails Change ID type on existing database

I have an existing Grails application using a MySQL database with a lot of data. The previous programmer used int as the id type, and I need to change it to long because I'm running out of ids. I know that changing the domain class does not update the column of the existing table. Do I change the type in MySQL manually?
There's this thing called database migrations... There's a plugin for it.
http://grails.org/plugin/database-migration
Yes, after changing the domain class change the column manually.
Also, I suppose it's a good idea to first set dbCreate to e.g. "create-drop" and try it (on another DB instance) to let Grails generate the new schema, so you can see whether it looks as you expect.
So, change the domain, generate test schema and check whether it is correct, then change the original DB manually.

How do I stop rails from escaping values in SQL for a particular column?

I'm trying to manually manage some geometry (spatial) columns in a rails model.
When updating the geometry column I do this in rails:
self.geom="POINTFROMTEXT('POINT(#{lat},#{lng})')"
Which is the value I want to be in the SQL updates and so be evaluated by the database. However by the time this has been through the active record magic, it comes out as:
INSERT INTO `places` (..., `geom`) VALUES(...,'POINTFROMTEXT(\'POINT(52.2531519,20.9778386)\')')
In other words, the quotes are escaped. This is fine for the other columns as it prevents sql-injection, but not for this. The values are guaranteed to be floats, and I want the update to look like:
INSERT INTO `places` (..., `geom`) VALUES(...,'POINTFROMTEXT('POINT(52.2531519,20.9778386)')')
So is there a way to turn escaping off for a particular column? Or a better way to do this?
(I've tried using GeoRuby + Spatial Adapter, but Spatial Adapter seems too buggy to me, plus I don't need all the functionality, hence trying to do it directly.)
The Rails Spatial Adapter should implement exactly what you need. Although, before I found GeoRuby & Spatial Adapter, I was doing this:
Have two fields: one text field and a real geometry field, on the model
In an after_save hook, I ran something like this:
connection.execute "update mytable set geom_column=#{text_column} where id=#{id}"
But the solution above was just a hack with additional issues: I couldn't create a spatial index if the column allowed NULL values, MySQL didn't let me set a default value on a geometry column, and the save method failed if the geometry column didn't have a value set.
So I would try GeoRuby & Spatial Adapter instead, or reuse some of its code (on my case, I am considering extracting only the GIS-aware MysqlAdapter#quote method from the Spatial Adapter code).
You can use an after_save callback and write the values with a direct SQL UPDATE call. Annoying, but it should work.
You should be able to create a trigger in your DB migration using the 'execute' method... but I've never tried it.
Dig into ActiveRecord's calculate functionality: max/min/avg, etc. Not sure whether this saves you much over the direct SQL call in after_save. See calculations.rb.
You could patch the function that quotes the attributes (look for POINTFROMTEXT and skip the quoting). This is pretty easy to find, as all the relevant methods start with quote. Start with ActiveRecord::Base#quote_value.