I have a JSON (multiline) file with lots of project settings and a list of included modules for a few projects. It's version-controlled by git in the same repository as my projects. It is constantly growing and works just fine for setting up and tuning my projects. The only problem is that when working with a team and branches I constantly get merge conflicts that need to be resolved manually, and in 99% of cases the resolution is "use both" because the conflict is just new entries. So what are the alternatives? I need the same versioning and branching for this database, since it holds project settings and dependencies, but I want to reduce conflicts to a minimum. And I do not want a separate database that I have to maintain in parallel with git; it needs to stay perfectly in sync automatically when switching between branches or commits. Thanks!
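A minimal sketch of the closest automation for the "use both" case is git's built-in union merge driver; note that it is purely line-based and knows nothing about JSON syntax, so it can produce invalid JSON (missing commas, duplicate keys) and is only an option if the entries really are independent lines. The file name below is a placeholder:

    # Tell git to resolve conflicts in this file by keeping both sides' lines.
    echo 'settings.json merge=union' >> .gitattributes
    git add .gitattributes
    git commit -m "Resolve settings.json conflicts by keeping both sides"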
Related
We have a project which has data and code, bundled into a single Mercurial repository. The data is just as important as the code (it contains parameters for business logic, some inputs, etc.). However, the format of the data files changes rarely, and it's quite natural to change the data files independently of the code.
One advantage of the unified repository is that we don't have to keep track of multiple revisions: if we ever need to recreate output from a previous run, we only need to update the system to the single revision number stored in the output log.
One disadvantage is that if we modify the data while multiple heads are active, we may lose the data changes unless we manually copy those changes to each head.
Are there any other pros/cons to splitting the code and the data into separate repositories?
Multiple repos:
pros:
component-based approach (you identify groups of files that can evolve independently of one another)
configuration specification: you list the references (here "revisions") you need for your system to work. If you want to modify one part without changing the other, you update that list.
partial clones: if you don't need all components, you can clone only the ones you want (doesn't apply in your case)
cons:
configuration management: you need to track that configuration (usually through a parent repo registering subrepos; see the subrepo sketch at the end of this answer)
in your case, the data is quite dependent on certain versions of the projects (you can have new data which doesn't make sense for old versions of the project)
One repo:
pros:
system-based approach: you see your modules as one system (project and data)
repo management: everything in one place
tight link between modules (which can make sense for data)
cons:
data propagation (when, as you mention, several heads are active)
intermediate revisions (made not to reflect a new feature, but just because some data changed)
larger clone (not relevant here, unless your data includes large binaries)
For non-binary data with infrequent changes, I would still keep it in the same repo.
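If you do go the multiple-repo route, here is a minimal sketch of registering the data as a Mercurial subrepository (the data directory name is an assumption, and ./data must already be an hg repository of its own):

    # In the parent repository: declare ./data as a subrepo.
    echo 'data = data' > .hgsub
    hg add .hgsub
    hg commit -m "Register data subrepo"
    # From now on each parent commit records the exact data revision
    # in .hgsubstate - that file is the "configuration specification".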
Yes, you should separate code and data. Keep your code in version control and your data in a database.
I love version control; I have been a programmer for more than ten years and I like this job.
But during the last months I have realized: data must not be in version control. Sometimes it is hard for someone who is familiar with git (or another version control system) to "let it go".
You need a good ORM which supports database schema migrations. The migrations (schema migrations and data migrations) are kept in version control, but the data is not.
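As a minimal sketch, assuming a Django project (the "schema migrations and data migrations" wording suggests Django/South, but any ORM with a migration framework works the same way; "myapp" is a placeholder application name):

    # Generate a migration file from model changes; this file is committed to git.
    python manage.py makemigrations myapp
    # Apply all pending migrations to the local database; the data itself stays out of git.
    python manage.py migrate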
I know your question was about using one or two repositories, but maybe my answer helps you get a different viewpoint.
I was wondering what ways there are to sync web projects initialized with git and MySQL databases between 2 computers without using a 3rd one as a "server".
I already know that I could use a service like Dropbox and sync data with it, but I don't want to do it that way.
If the two servers aren't always available (in particular not available at the same time), then you need an external third-party source for your synchronization.
One solution for the git repo is to use git bundle, which allows you to create a kind of "bare repo" in a single file.
Having only one file to move around makes any sync operation easier to do.
You will have to copy the bundle from one server to the other (by whatever means you want) so that the second repo (on the second server) can pull from that bundle (you can pull from a git bundle: it acts as a bare repo).
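A minimal sketch of that workflow (the file name, path and branch name are assumptions):

    # On the first machine: pack the whole repository into a single file.
    git bundle create project.bundle --all

    # Copy project.bundle to the second machine by whatever means you like.

    # On the second machine, the first time: clone straight from the bundle.
    git clone project.bundle project

    # For later updates: create a fresh bundle, copy it over, and pull from it
    # as if it were a remote repository.
    git pull /path/to/project.bundle master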
Just clone from one to the other. In git, there is no real difference between server repos and local repos in terms of pulling and cloning. Pushing from one to the other is tricky if neither is created as bare. Generally in that case, rather than push one from the other, we'll pull back and forth as needed.
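For example (host, path and branch are assumptions), each machine can simply treat the other's working repository as a remote and pull from it:

    # On machine A: add machine B's copy as a remote and pull its changes.
    git remote add machine-b user@machine-b:projects/myproject
    git pull machine-b master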
What processes do you put in place when collaborating in a small team on websites with databases?
We have no problems working on site files as they are under revision control, so any number of our developers can work from any location on this aspect of a website.
But, when database changes need to be made (either directly as part of the development or implicitly by making content changes in a CMS), obviously it is difficult for the different developers to then merge these database changes.
Our approaches thus far have been limited to the following:
Putting a content freeze on the production website and having all developers work on the same copy of the production database
Delegating tasks that will involve database changes to one developer and then asking other developers to import a copy of that database once changes have been made; in the meantime other developers work only on site files under revision control
Allowing developers to make changes to their own copy of the database for the sake of their own development, but then manually making these changes on all other copies of the database (e.g. providing other developers with an SQL import script pertaining to the database changes they have made)
I'd be interested to know if you have any better suggestions.
We work mainly with MySQL databases and at present do not keep track of revisions to these databases. The problems discussed above pertain mainly to Drupal and Wordpress sites where a good deal of the 'development' is carried out in conjunction with changes made to the database in the CMS.
You put all your database changes in SQL scripts. Put some kind of sequence number into the filename of each script so you know the order they must be run in. Then check those scripts into your source control system. Now you have reproducible steps that you can apply to test and production databases.
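A minimal sketch of that layout and a naive runner (database name, credentials and file names are placeholders; a real setup would also record which scripts have already been applied, e.g. in a schema_version table):

    # Scripts checked into source control, named so they sort in execution order:
    #   db/001_create_customers.sql
    #   db/002_add_status_to_orders.sql
    #   db/003_backfill_order_status.sql
    # Apply them in order against a MySQL database:
    for script in db/*.sql; do
        echo "Applying $script"
        mysql -u app_user -p'secret' app_db < "$script" || exit 1
    done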
While you could put all your DDL into the VC, this can get very messy very quickly if you try to manage lots and lots of ALTER statements.
Forcing all developers to use the same source database is not a very efficient approach either.
The solution I used was to maintain a file for each database entity specifying how to create the entity (primarily so the changes could be viewed using a diff utility), then manually creating ALTER statements by comparing the release version with the current version - yes, it is rather labour-intensive, but it is the only way I've found to solve the problem.
I had a plan to automate the generation of the ALTER statements - it should be relatively straightforward - indeed a quick google found this article and this one. I never got round to implementing one myself, since the effort of doing so was much greater than the frequency of schema changes on the projects I was working on.
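A sketch of that manual workflow (all file, table and column names here are made up for illustration):

    # One file per entity, e.g. schema/customers.sql holding its full CREATE TABLE.
    # Diff the released copy of the schema against the current working copy:
    diff -u schema-release/customers.sql schema/customers.sql

    # ...then hand-write the matching upgrade script, e.g. upgrades/015_add_loyalty_tier.sql:
    #   ALTER TABLE customers ADD COLUMN loyalty_tier VARCHAR(20) NULL;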
Where I work, every developer (actually, every development virtual machine) has its own database (or rather, its own schema on a shared Oracle instance). Our working process is based around complete rebuilds. We don't have any ability to modify an existing database - we only ever have the nuclear option of blowing away the whole schema and building from scratch.
We have a little 'drop everything' script, which uses queries on system tables to identify every object in the schema, constructs a pile of SQL to drop them, and runs it. Then we have a stack of DDL files full of CREATE TABLE statements, then we have a stack of XML files containing the initial data for the system, which are loaded by a loading tool. All of this is checked into source control. When a developer does an update from source control, if they see incoming database changes (DDL or data), they run the master build script, which runs them in order to create a fresh database from scratch.
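A rough sketch of what such a "drop everything" plus rebuild step can look like against an Oracle schema (connection details, the list of object types and the loader invocation are all assumptions, not the actual script described above):

    # Drop every table, view and sequence in the current schema...
    sqlplus -s dev_user/dev_pass@devdb <<'SQL'
    BEGIN
      FOR o IN (SELECT object_type, object_name
                  FROM user_objects
                 WHERE object_type IN ('TABLE', 'VIEW', 'SEQUENCE')) LOOP
        IF o.object_type = 'TABLE' THEN
          EXECUTE IMMEDIATE 'DROP TABLE "' || o.object_name || '" CASCADE CONSTRAINTS';
        ELSE
          EXECUTE IMMEDIATE 'DROP ' || o.object_type || ' "' || o.object_name || '"';
        END IF;
      END LOOP;
    END;
    /
    SQL

    # ...then rebuild from the checked-in DDL and data files (paths are placeholders).
    for f in ddl/*.sql; do echo exit | sqlplus -s dev_user/dev_pass@devdb "@$f"; done
    ./load_data.sh data/*.xml    # hypothetical loader for the XML data files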
The good thing is that this makes life simple. We never need to worry about diffs, deltas, ALTER TABLE, reversibility, etc, just straightforward DDL and data. We never have to worry about preserving the state of the database, or keeping it clean - you can get back to a clean state at the push of a button. Another important feature of this is that it makes it trivial to set up a new platform - and that means that when we add more development machines, or need to build an acceptance system or whatever, it's easy. I've seen projects fail because they couldn't build new instances from their muddled databases.
The main bad thing is that it takes some time - in our case, due to the particularly depressing details of our system, a painfully long time, but I think a team that was really on top of its tools could do a complete rebuild like this in 10 minutes. Half an hour if you have a lot of data. Short enough to be able to do a few times during a working day without killing yourself.
The problem is what you do about data. There are two sides to this: data generated during development, and live data.
Data generated during development is actually pretty easy. People who don't work our way are presumably in the habit of creating that data directly in the database, and so see a problem in that it will be lost when rebuilding. The solution is simple: you don't create the data in the database, you create it in the loader scripts (XML in our case, but you could use SQL DML, or CSV with your database's import tool, or whatever). Think of the loader scripts as being source code, and the database as object code: the scripts are the definitive form, and are what you edit by hand; the database is what's made from them.
Live data is tougher. My company hasn't developed a single process which works in all cases - I don't know if we just haven't found the magic bullet yet, or if there isn't one. One of our projects is taking the approach that live is different from development, and that there are no complete rebuilds; rather, they have developed a set of practices for identifying the deltas when making a new release and applying them manually. They release every few weeks, so it's only a couple of days' work for a couple of people that often. Not a lot.
The project I'm on hasn't gone live yet, but it is replacing an existing live system, so we have a similar problem. Our approach is based on migration: rather than trying to use the existing database, we are migrating all the data from it into our system. We have written a rather sprawling tool to do this, which runs queries against the existing database (a copy of it, not the live version!), then writes the data out as loader scripts. These then feed into the build process just like any others. The migration is scripted, and runs every night as part of our daily build. In this case, the effort needed to write this tool was necessary anyway, because our database is very different in structure to the old one; the ability to do repeatable migrations at the push of a button came for free.
When we go live, one of our options will be to adapt this process to migrate from old versions of our database to new ones. We'll have to write completely new queries, but they should be very easy, because the source database is our own, and the mapping from it to the loader scripts is, as you would imagine, straightforward, even as the new version of the system drifts away from the live version. This would let us keep working in the complete rebuild paradigm - we still wouldn't have to worry about ALTER TABLE or keeping our databases clean, even when we're doing maintenance. I have no idea what the operations team will think of this idea, though!
You can use the replication module of the database engine, if it has one.
One server will be the master, changes are to be made on it.
Developers' copies will be slaves.
Any changes on the master will be duplicated on the slaves.
It's one-way replication.
It can be a bit tricky to put into place, as any changes on the slaves will be erased.
It also means that each developer should have two copies of the database.
One will be the slave and another the "development" database.
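For MySQL, a rough sketch of classic one-way (master to slave) replication setup; host names, credentials, and the log file/position values are placeholders you would take from the SHOW MASTER STATUS output:

    # On the master, in my.cnf:
    #   [mysqld]
    #   server-id = 1
    #   log-bin   = mysql-bin
    # Create a replication account and note the current binlog coordinates:
    mysql -u root -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_pass';
                         GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
                         SHOW MASTER STATUS;"

    # On each slave (server-id = 2, 3, ... in its my.cnf), point it at the master
    # and start replicating:
    mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master.example.com',
                           MASTER_USER='repl', MASTER_PASSWORD='repl_pass',
                           MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=154;
                         START SLAVE;"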
There are also tools for cross-database replication.
So any copy can be the master.
Both solutions can lead to disasters (replication errors).
The only solution I see fit is to have only one database for all developers and to save it several times a day on a rotating history.
It won't save you from conflicts, but you will be able to restore the previous version if one happens (and it always does...).
Where I work we are using DotNetNuke and this poses the same problems, i.e. once released, the production site has data going into the database as well as files being added to the file system by some modules and into the DNN file system.
We are versioning the site file system with svn which for the most part works ok. However, the database is a different matter. The best method we have come across so far is to use RedGate tools to synchronise the staging database with the production database. RedGate tools are very good and well worth the money.
Basically we all develop locally with a local copy of the database and site. If the changes are major, we branch. Then we commit locally and do a RedGate merge to put our DB changes onto the shared dev server.
We use a shared dev server so others can do the testing. Once complete we then update the site on staging with svn and then merge the database changes from the development server to the staging server.
Then to go live we do the same from staging to prod.
This method works but is prone to error and is very time consuming when small changes need to be made. The prod DB is always backed up so we can roll back easily if a delivery goes wrong.
One major headache we have is that DotNetNuke uses identity columns in many tables, and if you have data going into those tables on both development and production - such as tabs, permissions and module instances - you have a nightmare syncing them. Ideally you want to find or build a CMS that uses GUIDs or something else in the database so you can easily sync tables that are in use concurrently.
We'd love to find a better method, as we have a lot of trouble with branching and merging when projects are concurrent.
Gus
The application's code and configuration files are maintained in a code repository. But sometimes, as part of the project, I also have some data (which in some cases can be >100MB, >1GB or so), which is stored in a database. Git does a nice job in handling the code and its changes, but how can the development team easily share the data?
It doesn't really fit in the code version control system, as it is mostly large binary files and would make pulling updates a nightmare. But it does have to be synchronised with the repository, because some code revisions change the schema (i.e. migrations).
How do you handle such situations?
We have the data and schema stored in xml and use liquibase to handle the updates to both the schema and the data. The advantage here is that you can diff the files to see what's going on, it plays nicely with any VCS and you can automate it.
Due to the size of your database this would mean a sizable "version 0" file. But, using the migration strategy, after that the updates should be manageable as they would only be deltas. You might be able to convert your existing migrations one-to-one to liquibase as well which might be nicer than a big-bang approach.
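To make that concrete, here is a minimal sketch of a Liquibase changelog plus an update run (the table layout, file names, connection details and exact CLI flags are assumptions and vary by Liquibase version):

    # db/changelog.xml - one changeset creating a table, one loading seed data:
    cat > db/changelog.xml <<'XML'
    <databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">
      <changeSet id="1" author="dev">
        <createTable tableName="settings">
          <column name="id" type="int" autoIncrement="true">
            <constraints primaryKey="true"/>
          </column>
          <column name="name" type="varchar(255)"/>
          <column name="value" type="varchar(255)"/>
        </createTable>
      </changeSet>
      <changeSet id="2" author="dev">
        <loadData tableName="settings" file="db/settings.csv"/>
      </changeSet>
    </databaseChangeLog>
    XML

    # Apply every changeset that has not yet been run against the target database:
    liquibase --changeLogFile=db/changelog.xml \
              --url=jdbc:mysql://localhost/app_db \
              --username=app_user --password=secret \
              update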
You can also leverage #belisarius' strategy if your deltas are very large so each developer doesn't have to apply the delta individually.
It seems to me that your database has a lot of parallels with a binary library dependency: it's large (well, much larger than a reasonable code library!), binary, and has its own versions which must correspond to various versions of your codebase.
With this in mind, why not integrate a dependency manager (e.g. Apache Ivy) with your build process and let it manage your database? This seems like just the sort of task that a dependency manager was built for.
Regarding the sheer size of the data/download, I don't think there's any magic bullet (short of some serious document pre-loading infrastructure) unless you can serialize the data into a delta-able format (the XML/JSON/SQL you mentioned).
A second approach (maybe not so compatible with dependency management): If the specifics of your code allow it, you could keep a second file that is a manual diff that can take a base (version 0) database and bring it up to version X. Every developer will need to keep a clean version 0. A pull (of a version with a changed DB) will consist of: pull diff file, copy version 0 to working database, apply diff file. Note that applying the diff file might take a while for a sizable DB, so you may not be saving as much time over the straight download as it first seems.
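A rough sketch of that pull-then-rebuild flow for MySQL (all names and credentials are placeholders; in practice you would wire this up as a single post-pull step):

    # Every developer keeps a pristine local "version 0" dump (db/version0.sql.gz here),
    # while the small, diff-able delta scripts (db/deltas/0001.sql ...) live in the repo.
    # After pulling a revision whose deltas changed, rebuild the working database:
    mysql -e 'DROP DATABASE IF EXISTS app_dev; CREATE DATABASE app_dev;'
    gunzip -c db/version0.sql.gz | mysql app_dev
    for delta in db/deltas/*.sql; do
        mysql app_dev < "$delta" || exit 1
    done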
We usually use a database sync or replication scheme.
Each developer has 2 copies of the database, one for working and the other just for keeping the sync version.
When the code is synchronized, the script syncs the database too (the central DB against the "dead" developer's copy). After that each developer updates his own working copy. Sometimes a developer needs to keep some of his/her data, so these second updates are not always driven by the standard script.
It is as robust as the replication scheme... and sometimes (depending on the DB) that isn't good news.
DataGrove is a new product that gives you version control for databases. We allow you to store the entire database (schema and data), tag, restore and share the database at any point in time.
This sounds like what you are looking for.
We're currently working on features to allow git-like (push-pull) behaviors, so developers can share their repositories across machines and load the latest version of each other's database when they need it.
If I want to set up a smallish Mercurial repository for some internal work among a few developers, can I just navigate to a network share and create a repository there, and then just clone that down locally? Or do I need to set up a server (I know, it's easy to do).
This is Windows by the way.
Specifically, I'm wondering if there will be concurrency issues, like abandoned transactions, etc. if multiple users work push/pull simultaneously.
So long as folks are interacting with the repo using only 'clone', 'push', and 'pull', you're in fine shape. What you can't do is have multiple people committing directly from a shared working directory. However, push, pull, and clone are safe to use to a shared folder from a user's personal repository. All changes end up effectively atomic, and no aborted work should cause anyone any problems.
When creating that clone consider using clone -U so it's created without a working directory so folks aren't tempted to edit and commit there.
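A minimal sketch of that setup from a Windows command prompt (the UNC path and repository name are placeholders):

    rem Publish a copy without a working directory so nobody commits on the share.
    hg clone -U myproject \\fileserver\dev\hg\myproject

    rem Each developer then works in a personal clone and syncs over the share.
    hg clone \\fileserver\dev\hg\myproject myproject
    hg pull -u
    hg push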
There's no reason I can think of why you wouldn't be able to do so. I do something similar, except I use ssh rather than CIFS to access the files. No server setup to speak of in either case.
The only thing that came to mind as a possible problem was concurrent access, but you can see for yourself that Mercurial takes care not to allow users to step on each other's toes.