RoR: efficiently testing a project with MySQL and SQLite

I'd like to continually test and benchmark my RoR app running over both mysql and sqlite, and I'm looking for techniques to simplify that. Ideally, I'd like a few things:
simultaneous autotest / rspec testing with mysql and sqlite versions of the app so I'll know right away if I've broken something
a dependable construct for writing db-specific code, since I need to break into `ActiveRecord::Base.connection.select_all()` once in a while (a sketch of what I mean is at the end of this question).
The latter seems easy, the former seems difficult. I've considered having two separate source trees, each with its own db-specific config files (e.g. Gemfile, config/database.yml) and using filesystem links to share all common files, but that might frighten and confuse git.
A cleaner approach would be a command-line switch telling Rails which configuration to use as it starts up. That would be nice, but I don't think such a switch exists.
How do other people handle this?
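For reference, here is the sort of db-specific construct I mean for the second point. This is only a sketch - the method, table and queries are invented - and `adapter_name` is simply the string the current ActiveRecord connection reports:

```ruby
# Sketch only: branch on the adapter the current connection reports
# ("Mysql2", "SQLite", ...). The table and queries are made up for illustration.
def random_users
  conn = ActiveRecord::Base.connection
  sql =
    case conn.adapter_name
    when /mysql/i  then "SELECT * FROM users ORDER BY RAND() LIMIT 10"   # MySQL dialect
    when /sqlite/i then "SELECT * FROM users ORDER BY RANDOM() LIMIT 10" # SQLite dialect
    else raise "Unsupported adapter: #{conn.adapter_name}"
    end
  conn.select_all(sql)
end
```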

If I were you, I would do two things:
Don't check database.yml into your code repo. It contains database passwords, and if you're working with other developers on different machines, it will be a headache trying to keep track of which database is on which machine. It's considered bad practice and not a habit you should get into.
For files that should be checked into source control (Gemfile & Gemfile.lock), I would manage this using Git branches. I would have one master branch that uses one database, and another branch that uses the other. If you are working off the master branch and have it set up with MySQL, you can just rebase or merge into the SQLite branch whenever you make code changes. As long as you're not writing a lot of database-specific queries, you shouldn't have conflict problems.

Okay, with just a couple of tweaks, there's a simple way to run your app and tests under any one of several databases. I describe the technique in:
RoR: how do I test my app against multiple databases?
It works well for me -- someone else might find it useful.
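The gist, for anyone who doesn't want to follow the link (this is a generic sketch rather than a copy of that answer): give database.yml one entry per adapter, then pick one when the test suite boots, for example with an environment variable.

```ruby
# spec/spec_helper.rb - generic sketch, not necessarily the linked technique.
# Assumes database.yml defines extra entries named test_mysql and test_sqlite.
db = ENV.fetch("DB", "sqlite")                         # run as: DB=mysql bundle exec rspec
ActiveRecord::Base.establish_connection(:"test_#{db}") # looks up that key in database.yml
```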

Related

How to get the difference between two MySQL dumps and update the delta?

Is there a way to keep two databases in sync? I have a client who's running WordPress with MySQL. Is there a way to take a copy of the database in its current state, use it for a development server, and then push it back to the live site when the dev changes are done?
The client might make changes to the live site while I'm working on the dev version, and I'm wondering if there will be any merge conflicts.
If I import the updated database via phpMyAdmin, will it apply only the newest changes or overwrite everything?
Here's a quick reference on MySQL replication, courtesy of @Mark Baker, or you can use MySQL Workbench Synchronization.
So I finally found a solution to my problem. Since this was an issue for WordPress I found two plugins that worked really well.
Free one: Database Sync
Very simple and has an easy push/pull interface.
Paid Plugin $40-200: WP Migrate DB Pro
Much more polished and has an option to select specific tables you want to sync.
There's an answer to the duplication problem here. However, that's only the start of your difficulties. If two people are making changes independently to two copies of one database, merging the two will inevitably cause nightmares. In short, yes, there will be merge conflicts. Exactly what they are, and what you do about them, will depend on the nature of the changes each of you has made. Good luck!
Other modern (this post is quite old) paid solutions to the problem would be deevop and mergebot.
Mergebot is a plugin SaaS that helps with complicated merges between the different development and production databases, specifically for WordPress.
deevop is a more comprehensive solution that provides the development environment but also has many options for complex data synchronisation between phases (excluding tables, etc.), not only for WordPress but for other platforms too.
You can even combine both and use deevop as deployment manager (one click deploy to/from production) and then use mergebot for the complex database merges.

How to reduce a development database to a smaller size?

The problem is this:
I have a dump of the staging database that I am using in development. The database is around 2 GB, which makes many of the ActiveRecord calls (mostly `where` queries) run for at least 5 minutes.
What could be done to speed this up in development?
Some of the options would be to create a partial copy of the database for development (I haven't investigated how), caching (which for some reason didn't work), or something else entirely. I would even consider hardcoding some of the ActiveRecord calls, just to achieve this in development mode.
There are a few ways to achieve this based on the info you've provided.
As mentioned in the comments, you could create a seed file and build a few records to be used in development. This is common practice for most development databases (especially with more than one developer). See the Rails guides about this.
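For example, a db/seeds.rb along these lines (the models are placeholders) gives everyone a small, fast dataset instead of the 2 GB dump:

```ruby
# db/seeds.rb - run with `rails db:seed`; User and Post are placeholder models.
10.times do |i|
  user = User.create!(email: "dev#{i}@example.com", name: "Dev User #{i}")
  5.times { |j| user.posts.create!(title: "Post #{j}", body: "Sample body text") }
end
```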
Another idea would be to write a rake task that isolates a few relevant rows from the most dependent table in your staging database (say users) and builds dummy data from those records. This might help you build "real-ish" data without having to do it all from scratch. If there's a large tangle of associations, this might be more work than it's worth.
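A rough sketch of that kind of task - the `staging` entry in database.yml, the table, the models and the rewritten email are all assumptions for illustration:

```ruby
# lib/tasks/sample_data.rake - copy a small slice of staging into development.
class StagingRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :staging   # assumes a `staging:` entry in config/database.yml
end

namespace :db do
  desc "Pull a handful of users from staging and rebuild them locally"
  task sample: :environment do
    StagingRecord.connection.select_all("SELECT * FROM users LIMIT 20").each do |row|
      attrs = row.except("id").merge("email" => "user#{row['id']}@example.test")
      User.create!(attrs)
    end
  end
end
```

Note the rewritten email addresses: as the PII caveat below suggests, you'd want to obfuscate anything personal while copying.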
The seed_dump gem could come in handy for that purpose.
A word of caution: if that staging DB has any PII (personally identifiable information), you will likely want to obfuscate it so you aren't storing real user information locally.

How best to handle Flyway with embedded DB for integration tests?

I have an existing application that I recently started to use Flyway with, and that's been working pretty well, for the most part.
I've been running a local MySQL DB for my development environment, which matches up with what's used in QA and Prod.
However, I want to be able to run at least some integration tests directly against an embedded database, like H2. I had naïvely hoped that, since MySQL seems to wrap (most?) of its vendor-specific statements in special comments (e.g. `/*! SET @foo = 123 */;`), the same migrations would mostly run against H2, which would simply ignore the MySQL-specific parts as comments.
However, it seems that when Flyway parses my first migration, it ends up skipping ALL of my CREATE TABLE statements, so that it only ends up applying an INSERT of some reference data, which fails since the tables never got created...
I've tried turning up the logging level, but I'm having no luck seeing any indication of why Flyway has just skipped the first 2228 lines of my migration...
Does anyone have any advice on how to best handle this situation? I've tried liberally sprinkling some /*! ... */ comments over things like ENGINE=InnoDB, but it seems Flyway still skips those statements.
Am I best off just reorganizing and duplicating most, if not all, of my migrations using database-specific flyway.locations, as referred to in the FAQ? Or is there some way I can make minimal changes, at least to what I got from my initial mysqldump of the existing DB that I used for the baseline migration, to maintain a single migration for both databases?
Or... is there a recommended way to run my integration tests against MySQL instead? I came across MySQL Connector/MXJ, but that seems to be discontinued...
It is the old problem: although an SQL standard exists on paper, no two databases speak quite the same dialect.
Flyway is probably skipping your statements because they contain syntax H2 does not understand. Take a look at the H2 documentation to figure out which parts of the H2 CREATE TABLE syntax differ from the MySQL CREATE TABLE syntax. If you are lucky there might even be a syntax variant that both databases understand.
If not, you will have to separate the SQL statements into two different locations. Keep in mind that you can give Flyway multiple locations at the same time, so you can keep a core of common scripts and move only the parts that differ into db-specific files. You then run your local tests with common + H2 as the locations, and your production migrations with common + MySQL.
If you are using a technology that can create the tables for you (like Hibernate), you might want to skip Flyway when executing tests locally, to avoid having to maintain two sets of migration files. Just let your tests generate the latest version of the schema. This can also be quite a lot faster than running a long chain of migration scripts later down the line (say, in a few years).
You will still have to run some integration tests against a real MySQL database, since, as you have seen, H2 can behave quite differently. For that, you might consider side-loading your database with data using whatever backup/restore mechanism is available for your database; this can be faster than initializing the database from scratch with Flyway. (Again, down the line you will not want to run years of migration scripts before testing.) You probably only want to test your latest set of scripts anyway, as the older ones worked when they were new (and Flyway ensures they have not been changed since).

How to update MySQL tables between computers

I'm working on a group project where we all have a MySQL database running on a local machine. The table mainly has filenames and stats used for image processing. We each run some processing, which updates the database locally with results.
I want to know what the best way is to update everyone else's database, once someone has changed theirs.
My idea is to perform a mysqldump after each processing run, and let that file be tracked by git (which we use religiously). I've written a bunch of Python utils for the database, and it would be simple enough to read this dump into the database when we detect that the db is behind. I don't really want to do this, though, lest it clog up our git repo with unnecessary 10-50 MB files with every commit.
Does anyone know a better way to do this?
*I'll also note that we are aerospace students. I have some DB experience, but only what need has forced on me. We're busy, and I'm not looking to become an IT networking guru. I just want to keep things hands-off for the others, since they are DB noobs and get a glazed-over look of fear whenever I tell them to do anything with the database; I've managed to keep it hands-off for them so far.
You might want to consider following the Rails-style database migration concept, whereby as you are developing you provide roll-forward and roll-back SQL statements that work as patches, allowing you to roll your database to any particular revision state that is required.
Of course, this is typically meant for dealing with schema changes only (i.e. you don't worry about revisioning data that might be dynamically populated into tables). For configuration tables or similar tables that are basically static in content, you can certainly add migrations as well.
A Google search for "rails migrations for python" turned up a number of results, including the following tool:
http://pypi.python.org/pypi/simple-db-migrate
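To make the roll-forward/roll-back idea concrete, this is what such a patch looks like in Rails itself; the Python tools linked above follow the same shape, pairing each forward change with its reverse (the table and columns here are invented):

```ruby
# A Rails-style migration: `up` rolls the schema forward, `down` rolls it back.
class AddProcessingStatsToImages < ActiveRecord::Migration[7.0]
  def up
    add_column :images, :mean_intensity, :float
    add_column :images, :processed_at, :datetime
  end

  def down
    remove_column :images, :processed_at
    remove_column :images, :mean_intensity
  end
end
```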
I would suggest creating a dev MySQL server on any shared hosting (no DB experience required).
Allow remote access to this server (again, no experience required; everything can be done through the control panel).
Then you and your group of developers will have access to the database at any time, from any place and any device (as long as you have an internet connection).

Collaborating on websites with relational databases and a CMS

What processes do you put in place when collaborating in a small team on websites with databases?
We have no problems working on site files as they are under revision control, so any number of our developers can work from any location on this aspect of a website.
But, when database changes need to be made (either directly as part of the development or implicitly by making content changes in a CMS), obviously it is difficult for the different developers to then merge these database changes.
Our approaches thus far have been limited to the following:
Putting a content freeze on the production website and having all developers work on the same copy of the production database
Delegating tasks that will involve database changes to one developer and then asking other developers to import a copy of that database once changes have been made; in the meantime other developers work only on site files under revision control
Allowing developers to make changes to their own copy of the database for the sake of their own development, but then manually making these changes on all other copies of the database (e.g. providing other developers with an SQL import script pertaining to the database changes they have made)
I'd be interested to know if you have any better suggestions.
We work mainly with MySQL databases and at present do not keep track of revisions to these databases. The problems discussed above pertain mainly to Drupal and WordPress sites where a good deal of the 'development' is carried out in conjunction with changes made to the database in the CMS.
You put all your database changes in SQL scripts. Put some kind of sequence number into the filename of each script so you know the order they must be run in. Then check those scripts into your source control system. Now you have reproducible steps that you can apply to test and production databases.
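If it helps, here is a minimal sketch of the "apply numbered scripts in order and remember what ran" idea, written in Ruby purely for illustration (the approach is language-agnostic; the paths, table name and credentials are invented):

```ruby
# apply_patches.rb - run db/patches/*.sql in filename order, recording what has
# been applied so every developer can catch up with a single command.
require "mysql2"

DB = { host: "localhost", username: "dev", password: "dev", database: "site_dev" }.freeze

client = Mysql2::Client.new(DB)
client.query("CREATE TABLE IF NOT EXISTS schema_patches (filename VARCHAR(255) PRIMARY KEY)")
applied = client.query("SELECT filename FROM schema_patches").map { |row| row["filename"] }

Dir.glob("db/patches/*.sql").sort.each do |path|
  name = File.basename(path)
  next if applied.include?(name)

  # Shell out to the mysql CLI so multi-statement scripts just work.
  ok = system("mysql", "-h", DB[:host], "-u", DB[:username], "-p#{DB[:password]}", DB[:database], in: path)
  raise "#{name} failed" unless ok

  client.query("INSERT INTO schema_patches (filename) VALUES ('#{client.escape(name)}')")
  puts "applied #{name}"
end
```

Test and production databases then get the same scripts applied in the same order.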
While you could put all your DDL into version control, this can get very messy very quickly if you try to manage lots and lots of ALTER statements.
Forcing all developers to use the same source database is not a very efficient approach either.
The solution I used was to maintain a file for each database entity specifying how to create the entity (primarily so the changes could be viewed using a diff utility), then manually create ALTER statements by comparing the release version with the current version. Yes, it is rather labour-intensive, but it is the only way I've found to solve the problem.
I had a plan to automate the generation of the ALTER statements - it should be relatively straightforward - and indeed a quick Google search found this article and this one. I never got round to implementing one myself, since the effort wasn't justified by how infrequently the schema changed on the projects I was working on.
Where I work, every developer (actually, every development virtual machine) has its own database (or rather, its own schema on a shared Oracle instance). Our working process is based around complete rebuilds. We don't have any ability to modify an existing database - we only ever have the nuclear option of blowing away the whole schema and building from scratch.
We have a little 'drop everything' script, which uses queries on system tables to identify every object in the schema, constructs a pile of SQL to drop them, and runs it. Then we have a stack of DDL files full of CREATE TABLE statements, then we have a stack of XML files containing the initial data for the system, which are loaded by a loading tool. All of this is checked into source control. When a developer does an update from source control, if they see incoming database changes (DDL or data), they run the master build script, which runs them in order to create a fresh database from scratch.
The good thing is that this makes life simple. We never need to worry about diffs, deltas, ALTER TABLE, reversibility, etc, just straightforward DDL and data. We never have to worry about preserving the state of the database, or keeping it clean - you can get back to a clean state at the push of a button. Another important feature of this is that it makes it trivial to set up a new platform - and that means that when we add more development machines, or need to build an acceptance system or whatever, it's easy. I've seen projects fail because they couldn't build new instances from their muddled databases.
The main bad thing is that it takes some time - in our case, due to the particularly depressing details of our system, a painfully long time, but I think a team that was really on top of its tools could do a complete rebuild like this in 10 minutes. Half an hour if you have a lot of data. Short enough to be able to do a few times during a working day without killing yourself.
The problem is what you do about data. There are two sides to this: data generated during development, and live data.
Data generated during development is actually pretty easy. People who don't work our way are presumably in the habit of creating that data directly in the database, and so see a problem in that it will be lost when rebuilding. The solution is simple: you don't create the data in the database, you create it in the loader scripts (XML in our case, but you could use SQL DML, or CSV with your database's import tool, or whatever). Think of the loader scripts as being source code, and the database as object code: the scripts are the definitive form, and are what you edit by hand; the database is what's made from them.
Live data is tougher. My company hasn't developed a single process which works in all cases - I don't know if we just haven't found the magic bullet yet, or if there isn't one. One of our projects is taking the approach that live is different to development, and that there are no complete rebuilds; rather, they have developed a set of practices for identifying the deltas when making a new release and applying them manually. They release every few weeks, so it's only a couple of days' work for a couple of people that often. Not a lot.
The project I'm on hasn't gone live yet, but it is replacing an existing live system, so we have a similar problem. Our approach is based on migration: rather than trying to use the existing database, we are migrating all the data from it into our system. We have written a rather sprawling tool to do this, which runs queries against the existing database (a copy of it, not the live version!), then writes the data out as loader scripts. These then feed into the build process just like any others. The migration is scripted, and runs every night as part of our daily build. In this case, the effort needed to write this tool was necessary anyway, because our database is very different in structure to the old one; the ability to do repeatable migrations at the push of a button came for free.
When we go live, one of our options will be to adapt this process to migrate from old versions of our database to new ones. We'll have to write completely new queries, but they should be very easy, because the source database is our own, and the mapping from it to the loader scripts is, as you would imagine, straightforward, even as the new version of the system drifts away from the live version. This would let us keep working in the complete rebuild paradigm - we still wouldn't have to worry about ALTER TABLE or keeping our databases clean, even when we're doing maintenance. I have no idea what the operations team will think of this idea, though!
You can use the replication module of the database engine, if it has one.
One server will be the master, changes are to be made on it.
Developers copies will be slaves.
Any changes on the master will be duplicated on the slaves.
It's a one way replication.
It can be a bit tricky to put into place, as any changes made on the slaves will be erased.
It also means that the developers need two copies of the database:
one will be the slave and the other the "development" database.
There are also tools for cross-database replication,
so any copy can be the master.
Both solutions can lead to disasters (replication errors).
The only solution I see fit is to have a single database for all developers and save it several times a day on a rotating history.
It won't save you from conflicts, but you will be able to restore a previous version if one happens (and it always does...).
Where I work we are using DotNetNuke, and it poses the same problems: once released, the production site has data going into the database as well as files being added by some modules, both to the file system and to the DNN file system.
We are versioning the site file system with svn, which for the most part works OK. However, the database is a different matter. The best method we have come across so far is to use RedGate tools to synchronise the staging database with the production database. The RedGate tools are very good and well worth the money.
Basically we all develop locally with a local copy of the database and site. If the changes are major, we branch. Then we commit locally and do a RedGate merge to put our DB changes onto the shared dev server.
We use a shared dev server so others can do the testing. Once complete, we update the site on staging with svn and then merge the database changes from the development server to the staging server.
Then to go live we do the same from staging to prod.
This method works but is prone to error and is very time consuming when small changes need to be made. The prod DB is always backed up so we can roll back easily if a delivery goes wrong.
One major headache we have is that DotNetNuke uses identity columns in many tables, and if you have data going into those tables on both development and production (such as tabs, permissions and module instances), you have a nightmare syncing them. Ideally you want to find or build a CMS that uses GUIDs or something similar in the database so you can easily sync tables that are in use concurrently.
We'd love to find a better method, as we have a lot of trouble with branching and merging when projects run concurrently.
Gus