I have an existing application that I recently started to use Flyway with, and that's been working pretty well, for the most part.
I've been running a local MySQL DB for my development environment, which matches up with what's used in QA and Prod.
However, I want to be able to run at least some integration tests directly against an embedded database, like H2. I had naïvely hoped that, since MySQL seems to wrap (most?) of its special statements in special comments (e.g. /*! SET @foo = 123 */;), my migrations might run against H2 mostly unchanged.
However, it seems that when Flyway parses my first migration, it skips ALL of my CREATE TABLE statements and only applies an INSERT of some reference data, which then fails since the tables never got created...
I've tried turning up the logging level, but I'm having no luck seeing any indication of why Flyway has just skipped the first 2228 lines of my migration...
Does anyone have any advice on how to best handle this situation? I've tried liberally sprinkling some /*! ... */ comments over things like ENGINE=InnoDB, but it seems Flyway still skips those statements.
Am I best off just reorganizing and duplicating most, if not all, of my migrations using database-specific flyway.locations, as referred to in the FAQ? Or is there some way I can make minimal changes, at least to what I got from my initial mysqldump of the existing DB that I used for the baseline migration, to maintain a single migration for both databases?
Or... is there a recommended way to run my integration tests against MySQL instead? I came across MySQL Connector/MXJ, but that seems to be discontinued...
It is the old problem: in practice, there is no single SQL standard that all databases follow.
Flyway is probably skipping your statements because they contain syntax H2 does not understand. Please take a look at the H2 documentation to figure out where the H2 CREATE TABLE syntax differs from the MySQL CREATE TABLE syntax. If you are lucky, there may even be a syntax variant that both databases understand.
If not, you will have to separate the SQL statements into two different locations. Keep in mind that you can give Flyway several locations at the same time, so you can keep a core of common scripts and move only the parts that differ into database-specific files. You then run your local tests with common + H2 as the locations, and your production migrations with common + MySQL.
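For example, a minimal sketch using Flyway's fluent Java API (Flyway 5+; older versions use setters on a Flyway instance instead, and the location paths here are illustrative):

    import org.flywaydb.core.Flyway;

    public class MigrateForTests {
        public static void main(String[] args) {
            // Shared scripts plus a database-specific folder; swap the second
            // location for .../mysql when running against production.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:h2:mem:testdb", "sa", "")
                    .locations("classpath:db/migration/common",
                               "classpath:db/migration/h2")
                    .load();
            flyway.migrate();
        }
    }

The same split can be expressed in flyway.properties as flyway.locations=classpath:db/migration/common,classpath:db/migration/h2.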
If you are using a technology that can create the tables for you (like Hibernate), you might want to skip Flyway entirely when running tests locally, to avoid maintaining two sets of migration files. Just let your tests generate the latest version of the schema. This also has the advantage that it can be quite a lot faster than running a long chain of migration scripts, which will only grow over the years.
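A hedged sketch of what that looks like as a test-only Hibernate configuration (the keys are standard Hibernate properties; the dialect shown assumes H2):

    import java.util.Properties;

    public class TestHibernateConfig {
        // Let Hibernate generate the schema from the mappings at startup
        // instead of running the Flyway migrations.
        public static Properties testProperties() {
            Properties props = new Properties();
            props.setProperty("hibernate.connection.url", "jdbc:h2:mem:testdb");
            props.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
            props.setProperty("hibernate.hbm2ddl.auto", "create");
            return props;
        }
    }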
You will still have to run some integration tests against a real MySQL database, since, as you have seen, H2 can behave quite differently. For those tests you might consider side-loading your database with data using whatever backup solution is available for your database; this may be faster than initializing the database from scratch using Flyway. (Again, down the line you will not want to run years of migration scripts before every test.) You probably only need to test your latest set of scripts anyway, since the older ones worked when they were new (and Flyway ensures they have not been changed since).
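One hedged way to wire that up with Flyway's Java API (the version number is made up): restore the backup first, then tell Flyway to treat the restored state as the baseline so that only newer scripts run:

    import org.flywaydb.core.Flyway;

    public class MigrateOnTopOfBackup {
        public static void main(String[] args) {
            // The restored backup already contains everything up to (say) V42,
            // so mark that as the baseline and apply only newer scripts.
            Flyway flyway = Flyway.configure()
                    .dataSource("jdbc:mysql://localhost/testdb", "test", "secret")
                    .baselineVersion("42")
                    .baselineOnMigrate(true)
                    .load();
            flyway.migrate(); // runs only V43 and later
        }
    }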
Related
I'm researching something that I'd like to call replication, but there is probably another technical term for it, since as far as I know "replication" means replicating both the structure and its data to slaves. I only want to replicate the structure. My terminology is probably wrong, which is why I can't seem to find answers on my own.
Is it possible to set up a MySQL environment that replicates a master structure to multiple local databases when a change, addition, or drop has been made? I'm looking for a solution where each user gets their own database instance with their own unique data but with the same table structure. When an update is made to the master structure, the same change should be replicated to each user database.
E.g. a column added to master.table1 is replicated to user1.table1 and user2.table1.
My first idea was to write an update procedure in PHP, but it feels like this should be a fairly fundamental function built into the database. My reasoning is that index lookups would be much faster with less data per database (roughly the total data divided by the number of users), and the setup would probably be more secure (no unfortunate leaks between users, if any).
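To make the idea concrete, a rough sketch of such an update procedure (shown in Java rather than PHP; the schema names, credentials, and DDL are all hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaFanOut {
        public static void main(String[] args) throws SQLException {
            // Apply the same structural change to the master schema and then
            // to every per-user schema.
            String ddl = "ALTER TABLE %s.table1 ADD COLUMN note VARCHAR(255)";
            try (Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/", "admin", "secret");
                 Statement stmt = con.createStatement()) {
                for (String schema : new String[] {"master", "user1", "user2"}) {
                    stmt.executeUpdate(String.format(ddl, schema));
                }
            }
        }
    }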
I solved this problem with a simple set of SQL scripts, one for every change in the database, named year-month-day-description.sql, which I run in lexicographical order (that's why the name begins with the date).
Of course you do not want to run them all every time. So, to know which scripts need to be executed, each script has a simple INSERT at its end that records the script's filename in a table in the database. The updater PHP script then simply builds a list of script files, removes the ones already recorded in the table, and runs the rest.
What's good about this solution is that you can include data transformations too. It can also be fully automatic, and as long as the scripts are OK, nothing bad will happen.
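A rough sketch of that updater, translated to Java for illustration (the applied_scripts table and the runSqlFile helper are made up):

    import java.io.File;
    import java.util.Arrays;
    import java.util.List;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class ScriptUpdater {
        // Each script's final INSERT records its own filename in the
        // hypothetical applied_scripts tracking table.
        void update(JdbcTemplate jdbc) {
            List<String> applied = jdbc.queryForList(
                    "SELECT filename FROM applied_scripts", String.class);
            File[] scripts = new File("migrations")
                    .listFiles((dir, name) -> name.endsWith(".sql"));
            Arrays.sort(scripts); // date prefix makes lexicographic order chronological
            for (File script : scripts) {
                if (!applied.contains(script.getName())) {
                    runSqlFile(script); // hypothetical helper that executes the file
                }
            }
        }

        void runSqlFile(File script) { /* execute the statements in the file */ }
    }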
You will probably need to look into incorporating the use of database "migrations", something popularized by the Ruby on Rails framework. This Google search for PHP database migrations might be a good starting point for you.
The concept is that as you develop your application and make schema changes, you create SQL migration scripts that can roll the schema changes forward or back. This makes it easy to "migrate" your database schema to match a particular code version (for example, if you have branched code being worked on in multiple environments that each need a different version of the database).
That isn't going to automatically make updates like you suggest, but it is certainly a step in the right direction. There are also tools like Toad for MySQL and Navicat that have some level of support for schema synchronization, but again these involve manual comparisons/syncs.
A common occurrence when rolling out the next version of a software package is that some of the data structures change. When you are using a SQL database, an appropriate series of alters and updates may be required. I've seen (and created myself) many ways of doing this over the years. For example, RoR has the concept of migrations. However, everything I've done so far seems a bit hairy to maintain or has other shortcomings.
In a magical world I'd be able to specify the desired schema definition, and have something automatically sort out what alters, updates, etc. are needed to move from the existing database layout...
What modern methodologies/practices/patterns exist for rolling out table definition changes with software updates? Do any MySQL-specific tools/scripts/commands exist for this kind of thing?
Have you looked into Flyway or dbdeploy? Flyway is Java-specific but I believe it works with any DB; dbdeploy supports more languages and, again, multiple databases.
I am looking for suggestions on the best way to sync MySQL tables (MyISAM) between 2 different databases.
Currently we use Navicat to sync tables from our production server to our test server, but we have been running into many problems. Just about every day we run into a sync failure on a table.
We get the error below a lot of the time, and Navicat also spams our e-mail with both successful and unsuccessful syncs (is there any way to receive only the unsuccessful ones?). I also know that altering a table in any way will cause a sync failure, so any alteration must be made on the master first (this makes sense, but is there any way around it?).
-[Sync] Finished - Unsuccessful Synchronization: List index out of bounds (0)
Is there any reason not to use the Navicat sync? My boss suggested using MySQL replication instead, but my first concern is finding out why we have so many problems, because it seems like we are simply misusing the sync tool.
Thanks.
sync tables from our production server to our test server
It sounds like you're trying to replicate your production environment in your test environment, right?
A common pattern in this situation is to use a tool like mysqldump to create a backup of the entire database and then import that backup into the test environment. By doing a complete backup and restore, you're not only ensuring that you have at least one backup method that's known to work, you're also ensuring that the test database can never contain modifications that a sync tool might miss. (Sync tools generally require a primary or unique key on each table to operate effectively.)
The backup and reimport process should be an easy thing for you to automate. At my workplace, we perform a mysqldump-based database dump every night, and perform optional imports into each developer's personal copy of the dev environment early the following morning.
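A hedged sketch of that automation (host names, user names, and paths are made up; in practice the credentials would live in a .my.cnf rather than on the command line):

    import java.io.IOException;

    public class NightlyRefresh {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Dump production, then load the dump into the test server.
            run("mysqldump --host=prod-db --user=backup app_db > /backups/app_db.sql");
            run("mysql --host=test-db --user=dev app_db < /backups/app_db.sql");
        }

        static void run(String command) throws IOException, InterruptedException {
            // Run through a shell so the file redirections work.
            Process p = new ProcessBuilder("sh", "-c", command).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("Command failed: " + command);
            }
        }
    }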
Is there any Ruby script for converting a PostgreSQL database to a MySQL database? I have searched many sites to no avail.
To be honest, these migrations can be tricky. I don't know that there are any good tools to do it. Also note that this can be a major pain, and you end up giving up a lot of nice features that PostgreSQL has for agile development (like transactional DDL). That being said, here's the way to go about it:
Rebuild your schema on MySQL. Do not try to convert schema files per se. Use your existing approaches to generate a new schema using MySQL's syntax.
Write a script which pulls data from PostgreSQL and inserts it one row at a time into MySQL. MySQL has some thread-locking problems that interfere with bulk loads, index updates, etc., when multiple rows are inserted per statement. For the table order, I have usually started with the order in which the tables are listed in pg_dump, though in Rails you may be able to use your model definitions instead. (A rough sketch of such a script appears at the end of this answer.)
Review your indexing strategies to make sure they are still applicable.
On the whole these dbs are very different. I would not expect that the migration will be easy.
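The question asked for Ruby, but to make the copy step concrete, here is the rough sketch promised above, in plain JDBC (connection URLs, credentials, and the table list are all illustrative):

    import java.sql.*;
    import java.util.Collections;

    public class PgToMySqlCopy {
        public static void main(String[] args) throws SQLException {
            try (Connection pg = DriverManager.getConnection(
                         "jdbc:postgresql://localhost/appdb", "pg_user", "secret");
                 Connection my = DriverManager.getConnection(
                         "jdbc:mysql://localhost/appdb", "my_user", "secret")) {
                // Table order matters because of foreign keys; pg_dump's order
                // is a reasonable starting point, as noted above.
                for (String table : new String[] {"users", "orders", "order_items"}) {
                    copyTable(pg, my, table);
                }
            }
        }

        static void copyTable(Connection pg, Connection my, String table)
                throws SQLException {
            try (Statement read = pg.createStatement();
                 ResultSet rs = read.executeQuery("SELECT * FROM " + table)) {
                int cols = rs.getMetaData().getColumnCount();
                String placeholders = String.join(",", Collections.nCopies(cols, "?"));
                try (PreparedStatement write = my.prepareStatement(
                         "INSERT INTO " + table + " VALUES (" + placeholders + ")")) {
                    while (rs.next()) {
                        for (int i = 1; i <= cols; i++) {
                            write.setObject(i, rs.getObject(i));
                        }
                        write.executeUpdate(); // one row per statement, as described above
                    }
                }
            }
        }
    }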
I wish to migrate the database of a legacy web app from SQL Server to MySQL. What are the limitations of MySQL that I must look out for? And what items would be part of a comprehensive checklist before jumping in and actually modifying the code?
The first thing I would check is the data types - the exact definition of a datatype varies from database to database. I would create a mapping list that tells me what to map each datatype to (for example, SQL Server's NVARCHAR typically maps to a MySQL VARCHAR with a Unicode character set, and IDENTITY columns become AUTO_INCREMENT). That will help in building the new tables. I would also check for tables or columns that are not being used any more; there is no point in migrating them. Do the same with functions, jobs, stored procedures, etc. Now is the time to clean out the junk.
How are you accessing the data - through stored procedures or dynamic queries? Check each query by running it against a new dev database and make sure it still works; again, there are differences between how the two flavors of SQL work. I've not used MySQL, so I'm not sure what the common failure points are. While you are at it, you might want to time the new queries and see if they can be optimized. Optimization also varies from database to database, and there are probably some poorly performing queries right now that you can fix as part of the migration.
User-defined functions will need to be looked at as well; don't forget them if you are doing this.
Don't forget scheduled jobs either; these will need to be checked and recreated in MySQL as well.
Are you importing any data on a regular schedule? All imports will have to be rewritten.
Key to everything is to use a test database and test, test, test. Test everything, especially quarterly or annual reports or jobs that you might otherwise forget.
Another thing you want to do is do everything through scripts that are version-controlled. Do not move to production until you can run all the scripts in order on dev with no failures.
One thing I forgot: make sure the dev database you are running the migration from (the SQL Server database) is refreshed from production immediately before each test run. You'd hate to have something fail on prod because you were testing against outdated records.
Your client code is almost certain to be the most complex part to modify. Unless your application has a very high quality test suite, you will end up having to do a lot of testing. You can't rely on anything working the same, even things which you might expect to.
Yes, things in the database itself will need to change, but the client code is where the main action is, it will need heaps of work and rigorous testing.
Forget migrating the data; that is the last thing that should be on your mind. The database schema can probably be converted without too much difficulty; other database objects (SPs, views, etc.) could cause issues, but the client code is where the problems will be concentrated.
Almost every routine which executes a database query will need to be changed, but absolutely all of them will need to be tested. This will be nontrivial.
I am currently looking at migrating our application's main database from MySQL 4.1 to 5, which is a much smaller change, but it will still be a very, very large task.