How to disable automatic modification of the database schema in SQLAlchemy? - sqlalchemy

When the SQLAlchemy application starts, it automatically modifies the database schema so that the database tables reflect the ORM models in the Python code. In my case this is very much unwanted: when I'm switching between branches, writing new migrations, and so on, it's easy to accidentally run the application at a "transitional" moment, and the DB gets broken by SQLAlchemy's attempt to "fix" it. By broken I mean detached from the last applied alembic_version, so that I can't even run a downgrade and have to fix things manually or resort to recovery.
So is there any native setting in SQLAlchemy to just raise/assert when the real database schema does not match the ORM models, instead of automatically modifying the DB?
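One way to get the raise-instead-of-modify behaviour being asked for is to replace whatever startup call emits DDL (typically Base.metadata.create_all()) with an explicit comparison of the live schema against the ORM metadata. A minimal sketch, assuming a declarative Base importable from a hypothetical myapp.models; note it only compares table names, not columns:

from sqlalchemy import create_engine, inspect
from myapp.models import Base  # hypothetical: your declarative base

def assert_schema_matches(engine):
    inspector = inspect(engine)
    db_tables = set(inspector.get_table_names())  # tables actually in the DB
    orm_tables = set(Base.metadata.tables)        # tables the models declare
    missing = orm_tables - db_tables
    extra = db_tables - orm_tables
    if missing or extra:
        raise RuntimeError(
            f"Schema drift: missing tables {missing}, unexpected tables {extra}")

engine = create_engine("postgresql://user:pass@localhost/app")
assert_schema_matches(engine)  # call at startup instead of create_all()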

Related

Is it OK to change the database table schemas created from Rails using MySQL?

I created some tables using Rails. Now I want to modify the structure of a few of them. I know it can be done with a Rails migration, but I was wondering whether it would cause any anomaly in the Rails app if I modified the schemas directly through the MySQL RDBMS.
Doing such changes through a migration has the advantage of not losing the changes if you decide to recreate/remigrate the database.
Also it serves as documentation. Imagine if your coworker altered some tables sneakily (and then you both forgot about it).
Technically, updating schemas directly in the database should work, but don't do it.
To add to Sergio's point, you're missing a simple fact: Rails migrations create the famous db/schema.rb file, from which your migrations pull all their data.
The importance of schema.rb is often overlooked; it is one of the most crucial aspects of your application.
db/schema.rb
The schema gives all your migrations a version of your DB to change / add to. Each time you perform a migration, Rails changes the schema file to ensure it has a "blueprint" of your db stored on file.
The schema file can then be used to rebuild the database with methods such as rake db:schema:load (ONLY RECOMMENDED FOR NEW INSTALLS -- DELETES PREVIOUS DATA).
So whilst there's no problem setting up the db using the db's own native tools, I recommend against it. You need to keep your migrations up to date so that Rails can build the appropriate tables from its schema.

How to handle database changes made by automatic update script while using Liquibase?

I'm developing a web application that also uses WordPress as part of it. I want to use Liquibase to track my database changes.
How do I handle database changes made by WordPress's automatic update script?
Can I just ignore them and put only my own changes in the Liquibase changelog file?
You could do a diffChangelog of the schemas after each WordPress upgrade so that Liquibase could keep track of the changes. You can just ignore them though - Liquibase doesn't really care about unknown schema objects. The only issue would be if your changes and the WordPress changes conflicted.
You can and should just ignore them.
Liquibase just does one thing. It keeps track of the fact that:
a certain command (say, createTable)...
...that looked a certain way at time 0 (the name of the table, its columns, etc.)...
...was definitively executed at time 0 (it stores this record in DATABASECHANGELOG).
That's it. It is not a structure enforcer or a database state reconstitution engine. It is quite possible—and permitted, and often expected—that the database will be changed by other tools and Liquibase will have no idea what went on.
So just keep your commands in your changelogs, don't worry about preexisting database structure, use preconditions to control whether or not your changesets run, and ignore everything else that might be going on in the database that happened due to other tools.
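As an illustration of that last point, a changeset guarded by a precondition might look like this in Liquibase's XML changelog format (inside the usual databaseChangeLog root; the id, author, and table name below are made up):

<changeSet id="42-create-orders" author="example">
    <preConditions onFail="MARK_RAN">
        <!-- skip this changeset, marking it as run, if another tool already created the table -->
        <not>
            <tableExists tableName="orders"/>
        </not>
    </preConditions>
    <createTable tableName="orders">
        <column name="id" type="int" autoIncrement="true">
            <constraints primaryKey="true"/>
        </column>
    </createTable>
</changeSet>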

How to log MySQL database structural changes

I'm working on a project that uses MySQL as its database. The application is hosted for many clients, and we often do upgrades to the current live systems.
There are some instances where a client has changed the database structure (adding new tables), causing unexpected DB crashes.
I need to log all the structural changes made to that database so we can find the correct root cause. We can't do it 100% correctly with a diff tool, because it will not show the intermediate changes.
I found the http://www.liquibase.org/ tool, but it seems a little complex.
Is there any well-known technique or tool to track database structural changes only?
From MySQL Studio you can generate every object's schema definition and compare it against your standard schema definition; this way you can compare two database schemas.
Generating scripts of both databases (one being the client's database and one the master-copy database) and then comparing them with a file-comparison tool would, in my opinion, be the best practice, because this way you can track which column was added, which column was deleted, which index was added, and so on, without downloading any tool.
Possible duplicate of Compare two MySQL databases?
Hope this helps.
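A rough way to script the dump-and-compare approach described above, sketched in Python (the host names and credentials are placeholders, and the mysqldump client is assumed to be on the PATH):

import difflib
import subprocess

def schema_dump(host, user, password, db):
    # --no-data makes mysqldump emit only CREATE TABLE statements, no rows
    result = subprocess.run(
        ["mysqldump", "--no-data", "--skip-comments",
         f"--host={host}", f"--user={user}", f"--password={password}", db],
        capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

master = schema_dump("master-host", "root", "secret", "appdb")
client = schema_dump("client-host", "root", "secret", "appdb")
print("\n".join(difflib.unified_diff(master, client,
                                     fromfile="master", tofile="client",
                                     lineterm="")))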
If you have an application through which your clients manage these schema changes, you can use a mechanism at the application level. If you have a Python- and Django-based solution, you could probably use South, which provides schema-change tracking and rollbacks.

How do I manage a set of mysql tables in a production Rails app that are periodically recreated?

I have a production Rails app that serves data from a set of tables that are built from a LOAD DATA LOCAL INFILE MYSQL import of CSV files, via a ruby script. The tables are consistently named and the schema does not change. The script drops/creates the tables and schema, then loads the data.
However I want to re-do how I manage data changes. I need a suggestion on how to manage new published data over time, since the app is in production, so I can (1) push data updates frequently without breaking the application servicing user requests and (2) make the new set of data "testable" before it is live (with the ability to roll back to the previous tables/data if something went wrong).
What I'm thinking is keeping a table of "versions" and creating a record each time a new rebuild is done. The latest version ID could be stuck into the database.yml, and each model could specify a table name from database.yml. A script could move the version forward or backward to make sure everything is ok on the new import, without destroying the old version.
Is that a good approach? Any patterns like this already? It seems similar to Rails' migrations somewhat. Any plugins or gems that help with this sort of data management?
UPDATE/current solution: I ended up creating a database.yml configuration and creating the tables there at import time. The data doesn't change based on the environment, so it is a "peer" to the environment-specific config. Since there are only four models to update, I added the database connection explicitly:
establish_connection Rails.configuration.database_configuration["other_db"]
This way migrations and queries work as normal with Rails. So that I can keep running imports, I update the database name in the separate config for each import. I could manually specify the previous database version this way and restart the app if there was a problem.
# rewrite the "other_db" entry in database.yml to point at the new import DB
path = "config/database.yml"
config = YAML.load_file(path)
config["other_db"]["database"] = OTHER_DB_NAME
File.open(path, "w") { |f| f.write(config.to_yaml) }
One option would be to use soft deletes or an "is active" column. If you need to know when records were replaced/deleted, you can also add columns for date imported and date deleted. When you load new data, default "is active" to false. Your application can preview the newly loaded data by using different queries than the production application, and when you're ready to promote the new data, you can do it in a single transaction so the production application gets the changes atomically.
This would be simpler than trying to maintain multiple tables, but there would be some complexity around separating previously deleted rows and incoming rows that were just imported but haven't been made active.
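A sketch of that atomic promote step, written in Python with SQLAlchemy Core purely for illustration (the products table and the is_active/import_batch/deleted_at columns are assumed names); the same two UPDATE statements inside a single ActiveRecord transaction would work just as well:

import sqlalchemy as sa

engine = sa.create_engine("mysql+pymysql://user:pass@localhost/app")

def promote_batch(batch_id):
    # engine.begin() opens a transaction and commits only if both
    # statements succeed, so readers never see a half-swapped state
    with engine.begin() as conn:
        conn.execute(sa.text(
            "UPDATE products SET is_active = 0, deleted_at = NOW() "
            "WHERE is_active = 1"))
        conn.execute(sa.text(
            "UPDATE products SET is_active = 1 "
            "WHERE import_batch = :batch"), {"batch": batch_id})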

test and production server deployment with yii framework - syncing DB changes

I am working on a Yii-framework-based app that I have to test on my local machine and then, when ready, move the changes to the production server.
The app will be developed while people are using it and asking for new features. So when I make changes to my DB schema on the test machine, I have to apply them to the production DB's schema without destroying the data there.
Is there a recommended and convenient way to deal with this? Syncing source code is less of an issue; I am using SVN and can do svn export ; rsync ...
MySQLWorkbench can be helpful for syncing db schema as well as other database design tasks.
Yii does support migrations (since v1.1.6), although they can be more trouble than they're worth, depending on how often you make changes and how collaborative your project is.
Another approach I've used is to keep a log of update statements in svn and basically handled the migrations manually.
The best approach is going to depend on the cost/benefits to your particular project/workflow.
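The "log of update statements" approach is easy to script as a tiny migration runner. A bare-bones sketch, using sqlite3 only as a stand-in for a real MySQL connection and assuming a migrations/ directory of numbered .sql files:

import pathlib
import sqlite3  # stand-in; swap in your MySQL driver in practice

conn = sqlite3.connect("app.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS applied_migrations (name TEXT PRIMARY KEY)")

# apply the .sql files in name order, remembering which ones already ran
for script in sorted(pathlib.Path("migrations").glob("*.sql")):
    done = conn.execute("SELECT 1 FROM applied_migrations WHERE name = ?",
                        (script.name,)).fetchone()
    if done:
        continue
    conn.executescript(script.read_text())
    conn.execute("INSERT INTO applied_migrations (name) VALUES (?)",
                 (script.name,))
    conn.commit()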
You can try SQLyog's Schema Synchronization Tool, a visual comparison/synchronization tool designed for developers who work between different MySQL servers or need to keep databases on two MySQL servers in sync. It reports the differences between the tables, indexes, columns and routines of two databases, and generates scripts to bring them in sync. Only the schema will be synced in the target.
For a similar project we:
1. use MySQLWorkbench (MWB) to design and edit the schema
2. share the .mwb file through a VCS
3. when one of us is comfortable with a change, run mysqldump --complete-insert... on the production and test schemas to generate a copy of the existing test and production data with field names
4. pull out all the production-server insert statements from (3) and put them in protected/data/insert.sql
5. use the "forward engineer" menu item in MWB on the modified model to generate SQL, saved to a file called protected/data/create.sql, hand-editing as appropriate (be sure to use the "if exists" clause to reduce errors)
6. write a drop.sql file based on the drop statements from (3)
7. in MWB, run the SQL (drop.sql, create.sql, insert.sql) after issuing the appropriate "use database" command that identifies the production database
8. deal with all the errors in (7) by getting rid of any invalid inserts due to columns/fields that are not needed in the new models/schema; rerun (7)
9. deal with new fields in (7) that need data other than NULL; rerun (7)
Now you have a snapshot of your schema (drop.sql, create.sql) and of your data that should revive either your test or production server if you ever have a problem. And you have a "fixture" of the data that was in your production server (insert.sql) that can be used to bring your test server up to speed, or as a backup of the production server's data (one that will quickly become outdated). Obviously the foreign-key relationships are what are really painful, so it's rare that insert.sql is useful for anything except upgrading the schema and restoring the production data after the change. Clearly it takes some time to work out the kinks in the process, so that the delay between (3) and (9) is small enough that the production server's users don't notice the downtime.
Clearly "rerun (7)" gets repetitive and quickly turns into a shell script that calls mysql directly. Other steps in the SQL-editing process likewise become sed scripts.
Have a look at the schema comparison tool in dbForge Studio for MySQL.
This tool will help you compare and synchronize two databases, or a database project with a specified database.
There is also a separate tool, dbForge Schema Compare for MySQL.