I have a Drupal site running in production. After some time I made changes both in code and through the admin UI: some configuration, changed content types, changed the body of some pages, etc. Meanwhile the production database kept growing. Now I want to move my changes to production without losing the data that is already in the production DB. One option is to repeat on production the same steps I performed on dev, but that doesn't seem like a good approach. Is there an automated procedure to migrate the changes?
Thanks
The Features and Strongarm modules will do the trick for you.
Features can help you save and migrate content types, for example, while Strongarm will help you migrate site settings and configuration information that is stored in variables.
After installing the two modules, go to Admin --> Structure --> Features --> Manage on your dev site and create features for the changes you want to transfer from dev to production. With both Features and Strongarm installed, you can create features that capture both site-building components (content types, views you created, roles and permissions you have changed, etc.) and site settings (settings stored in variables; you'll see the long list of exportable settings once Strongarm is installed). When you create a feature, it is exported as code (as a module), and you can then add that module to any additional site in which you want the components you selected when creating your feature(s).
You will have to install the two modules on your production environment too. Then add the features you just created in your dev environment to your production site. Once set up, though, you can transfer changes between dev and production environments much more easily going forward!
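If you are comfortable with the command line, Drush has Features integration that automates the same workflow. A minimal sketch, assuming Drush is installed; the feature name and component list here are hypothetical examples, so substitute your own:

    # On dev: enable the modules, then export selected components as a feature
    drush en -y features strongarm
    drush features-export my_site_config node:article variable:site_name

    # Copy the generated module to production (e.g. sites/all/modules/custom), then:
    drush en -y features strongarm my_site_config
    drush features-revert -y my_site_config   # pulls the feature's config into the site

The component names (node:article, variable:site_name) are examples only; the exact list of exportable components depends on your site.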
Here is the features documentation: http://drupal.org/node/580026.
Hope this doesn't sound too confusing!
I was able to split the database and used the packaging solution to distribute the front-end. I tested the exe file and it worked fine. Now I am updating the forms, and I cannot figure out a way of updating just the front-end (the back-end is on the server) without going through a new installation of the whole package. I did create the template file while going through the process of packaging the database.
I found this website but was afraid to unzip the file. Have any of you used this tool?
http://www.btabdevelopment.com/ts/freetools
Thank you
Some existing tools
Here is a list of deployment tools for Access front ends:
Auto FE Updater from Tony Toews (probably the best, but commercial)
Application Starter from Peter's Software
Front End Updater Utility, from Roger's Access Library
AutoUpdater, from the UtterAccess archives
The one you listed could also help.
More information on deployment
The issue is that there is no single way to update an Access application.
As you discovered, the packaging tools are nice, but they don't really take care of the most important, and complex, part of deploying software: how do you update an existing installation?
Access doesn't have a good story here, so there are many custom solutions, each with its flaws and advantages.
You were right that in any case, you must separate the backend database, containing only the tables, from the front-end, containing the code, forms and reports.
The front-end must be deployed to each user: the rule being that a front-end is meant to be used by a single user only.
What's in a good updater
So, here are the characteristics you want in a good update story:
User should not have to do anything: you want the user to get the new version of the front-end as soon as it is made available, automatically.
This could mean having your front-end check whether a new version is available in a remote folder before it allows the user to log in or start any work.
If a version is available, it is then fetched and deployed.
Sometimes, because your development environment may be different from the user's environment (different server names, different shared folders, etc), you may also need to re-link the tables in the front-end to the correct path of the back-end after deployment.
What I am using for my deployments
For a few years now, I have perfected my own system, and it works without a hitch.
Instead of launching the application directly, when the user clicks the application icon, a small launcher application is started.
The launcher I use is a simple Click-Once application (so it can auto-update) written in .Net.
The launcher is responsible for ensuring that the main application is only running once, and also for checking and deploying new versions (or downgrading them) when updates are made available.
Updates are simply packaged into zip files that contain all the files necessary for a new update.
The name of the file contains the version number, like myAppFE-2013-08-01.zip, so that sorting the list of client packages by name makes it easy to pick the most up-to-date package.
All these front-end packages are stored on a shared folder on a server, for instance if my backend database is in \\myServer\myApp\DB, the front-end packages could be kept in \\myServer\myApp\FE.
When the launcher detects that a new package is available, it deletes the existing front-end folder on the user's machine and unzips the new package there instead.
Once the launcher has finished its tasks, it just launches the application frontend.
When the application front-end is started for the first time, it can do more checks to re-link tables if they point to the wrong location.
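For illustration, here is a rough shell-style sketch of the launcher's update logic. My real launcher is the .Net application described above; the paths, the version.txt marker file and the package naming below are all assumptions to adapt:

    # Hypothetical locations: packages on the share, local copy under %APPDATA%
    PKG_DIR="//myServer/myApp/FE"
    LOCAL_DIR="$APPDATA/MyApp/FE"

    # The date-stamped names sort chronologically, so the last one is the newest
    latest=$(ls "$PKG_DIR"/myAppFE-*.zip | sort | tail -n 1)

    # Redeploy only if the installed version differs from the newest package
    if [ "$(cat "$LOCAL_DIR/version.txt" 2>/dev/null)" != "$latest" ]; then
        rm -rf "$LOCAL_DIR"
        mkdir -p "$LOCAL_DIR"
        unzip -q "$latest" -d "$LOCAL_DIR"
        echo "$latest" > "$LOCAL_DIR/version.txt"
    fi

    # Finally, hand over to the front-end (on Windows this would be
    # msaccess.exe with the path to the deployed .accdb/.mdb file)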
Notes
An alternative for detecting a new package would be to keep a small text file on the server that would contain the filename of the most current package.
Whenever a frontend is started it could check if the package name it is running is the same as the package name listed in the file. If not, then an upgrade/downgrade is necessary.
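That check is tiny; a sketch assuming a current.txt file sitting next to the packages and the same version.txt marker in the local folder (both file names are hypothetical):

    current=$(cat "$PKG_DIR/current.txt")    # e.g. "myAppFE-2013-08-01.zip"
    running=$(cat "$LOCAL_DIR/version.txt")
    if [ "$current" != "$running" ]; then
        echo "upgrade/downgrade to $current required"
    fi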
One of the advantages of this solution is that, once the Access Runtime is installed, users can run in a normal user session on the machine without ever requiring administrator rights: the Click-Once launcher doesn't require any admin rights, and if you deploy your Access front-end under the user's %APPDATA% folder, you do not need elevated rights to update the front-end at all.
The first time you deploy, your launcher should also be responsible for registering the location of front-end folder as a Trusted Location so that Access allows it to run without VBA/Macros being disabled.
This is just a set of registry keys you can easily add under HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Access\Security\Trusted Locations\ (the exact registry key depends on your version of Office and whether you are on a 32-bit or 64-bit system).
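For example, a trusted-location entry exported as a .reg file looks roughly like this; the key name, path and description are illustrative, and 14.0 matches Office 2010, so check the values on a machine where you have trusted a folder through the Access UI:

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Access\Security\Trusted Locations\LocationMyAppFE]
    "Path"="C:\\Users\\<username>\\AppData\\Roaming\\MyApp\\FE\\"
    "AllowSubfolders"=dword:00000001
    "Description"="MyApp front-end"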
We have a development team, with a few development and support projects being maintained on the same code line. Recently we started additional changes in an existing production module. We copy the code into the development environment and modify it with reference to the new change request. We cannot move the development code into production immediately due to the project schedule.
Meanwhile, we expect small issues in the production module that need to be fixed in the dev environment before being fixed in prod, without affecting the new development.
I am aware of using SCHEMAS in SQL Server 2008, but if I use schemas then I need to keep two copies of procedures used for the same purpose, like file loading, etc.
I would like to avoid duplicating objects by maintaining them in two different schemas.
It feels like we're doing it wrong by creating a different database or different schemas for the same purpose.
Is there a method or tool that deals with working on the same production code and new development that uses the production code for a similar purpose?
Thanks
RTV
I believe that I am in a similar situation. What we've done is simply create two almost identical databases: production and development. The production database feeds our reporting software while the development database is where we are free to test out changes before copying the changes into production. All data loaded into the production database is loaded into the development database through a series of jobs and stored procedures that execute daily. This method, however, does involve extensive duplication of tables, stored procedures, etc., but it is the simplest way we know how to get the functionality we need.
I work at a large university and have been instructed to look into using a source control system (git, svn, etc.) to manage the websites. We use Joomla, which relies heavily on MySQL.
Currently, we have a barely functional system that uses a development server which pushes to a live server whenever we change a website. It's a pain and it doesn't always work. Plus, we can and often do overwrite changes that another dev has made.
We want to be able to manage content via the Joomla front end on the dev branch, then push those changes to the test branch, then to the master (live) branch.
Without getting off into the weeds, my question is, essentially: what is a good strategy for managing websites using a CMS like Joomla that relies on a database?
Since you also want to sync the database (content is stored in the db, while images and media are on the filesystem), you need the commit/push script to also dump the db to a file, and the pull script to load the db. This can be done with pre- and post-action hooks; see http://githooks.com/ or google it.
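A minimal sketch of the two hooks, assuming a database named joomla, credentials stored in ~/.my.cnf, and a db/ folder inside the repository (all of these names are assumptions to adapt):

    #!/bin/sh
    # .git/hooks/pre-commit -- dump the database so it travels with the commit
    mysqldump --skip-extended-insert joomla > db/joomla.sql
    git add db/joomla.sql

    #!/bin/sh
    # .git/hooks/post-merge -- reload the database after a pull
    mysql joomla < db/joomla.sql

Remember to make both hook files executable (chmod +x).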
However, there are different parts of Joomla that you will want to sync separately.
Let's consider four servers:
edit server: where content is managed
dev server: where extensions are tested and configured
test server
production server
Let's consider three layers of information:
The user and session data: this should not be synchronized at all so people are not logged out, and if any users register on the production server their login will be preserved.
The contents, user groups and assets (privileges): these are the articles, news and images, which have to go from edit to test to production and to dev (unless you have content-specific privileges at the user level, i.e. each user has separate privileges on each content item)
The template, extensions, modules, menu configurations: this will go from dev to test to production and edit.
Each of these groups of data will require its own branch and its own custom pre-commit hooks to include the relevant database tables in the commit/push. The list of tables for each group depends on the extensions you are using.
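For example, the hook on the content branch might dump only the content-related tables; the table names below are illustrative, and the jos_ prefix and exact list depend on your Joomla version and extensions:

    # dump only the tables belonging to the content group
    mysqldump joomla jos_content jos_categories jos_assets > db/content.sql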
I have written an article (it's in Italian and about SVN), but you can grab some of the bash scripts we use: http://www.fasterjoomla.com/joomla-tips/svn-per-joomla or, translated by Google, http://translate.google.com/translate?sl=it&tl=en&js=n&prev=_t&hl=it&ie=UTF-8&u=http%3A%2F%2Fwww.fasterjoomla.com%2Fjoomla-tips%2Fsvn-per-joomla
I am a WordPress Designer/Developer, who is getting more and more heavily involved with using version control, notably Git, though I do use SVN for some projects. I am currently using Beanstalk for my remote repo.
Adding all of the WordPress files to my repo is no problem. If I wanted to, I know I could .gitignore the wp-config file, but since I'm currently the only developer and these projects are closed source, it really makes little sense.
WordPress relies heavily on the database, as any CMS does, to keep textual content and many settings, depending on the specific plugin/theme configuration I'm using. I'm wondering what the best way of using version control on the database would be, if it's even possible. I guess I could do an SQL dump, though my MySQL server is running on Windows (read as: I don't know how to do it), and then add the SQL dump to my repository. But when I push something live, that poses huge security threats.
Is there an accepted practice of doing this?
You can back up your database within a git repository. Of course, if you place the data into git in binary form, you will lose all of git's ability to store the data efficiently using diffs (changes). So the number one best practice is this: store the data in a text-serialised format.
mysqldump is a suitable program to help you do this. It isn't perfect, though. If anything disturbs the serialisation order of items (e.g. as a result of creating new tables), artificial breaks will enter the diff, which decreases the efficiency of storage. You could write a custom serialiser to serialise only the changes, but then you would be doing the hard work that git is already good at. Just use the SQL dump.
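A couple of mysqldump options make the dump noticeably more diff-friendly; a sketch, with the database name as a placeholder (the command works the same from cmd.exe on Windows, as long as MySQL's bin directory is on the PATH):

    # one INSERT per row gives small, stable diffs;
    # omitting the dump date avoids a spurious change on every run
    mysqldump --skip-extended-insert --skip-dump-date wordpress > wordpress.sql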
That being said, what you want to do isn't what devs normally mean when they talk about putting the database in git. For instance, if you read the link posted by #eggyal (link to codinghorror), you will see that what is actually placed in git are the scripts needed to generate the initial database. There may be additional scripts, like those to populate the database with a clean state, or with testing data. All such SQL scripts are text files, in pretty much the same format as the dump you would get from mysqldump, so there's no reason you can't do it that way with your day-to-day data as well.
There is not much software available for version-controlling databases like MySQL and MongoDB.
But one such tool is under development, and its beta version is about to be launched. Check out Klonio - Version Control for databases.
The article How to Sync A Local & Remote WordPress Blog Using Version Control gives advice on how to automate syncing between two instances (development, production) of a WordPress blog using Mercurial. It mentions that for this scenario, Git and Mercurial are very similar.
Step 4 (Synchronizing The Databases) is of interest here.
The database content will be exported to a file that is tracked by the revision control. Each time we pull changes, the database content will be replaced by this file, making our database up-to-date.
Then, it elaborates on conflicts and the scripting part of the job.
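In Mercurial, that glue can live in the repository's hgrc as hooks; a sketch with assumed names (database wordpress, tracked dump file db/wordpress.sql), worth verifying against your Mercurial version:

    [hooks]
    # dump the database into the tracked file before each commit
    precommit = mysqldump --skip-extended-insert wordpress > db/wordpress.sql
    # reload the database after each working-copy update (e.g. after hg pull -u)
    update = mysql wordpress < db/wordpress.sql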
There is a Mercurial version control tutorial out there, if you're not familiar with it.
If you are only interested in keeping schema changes under version control, there is a nice tool called SqlRog. It extracts the schema into project files that can be put under git.
Be aware that WordPress stores all news-feed content in the database, so even if you don't make any changes, there will be a lot of changing content.
I have two instances of Magento, a production site and a staging site; both have their own codebases and MySQL databases.
We have been making some changes to the staging site, specifically we have installed the aheadworks - payments and subscriptions module which has been configured.
We need to sync all the products from the production site to the staging site. Then we will make our changes to the products so they are configured to work with the aheadworks payments and subscriptions module, and finally upload everything back to the production site without wiping out any new customers/orders that were added to the production site while we were making our changes.
Could anyone please explain how we could achieve this?
Thanks
Steven
I suggest first copying the entire live database into your staging environment. That way, you have the most recent (live) data to work with on staging. After that, do some testing of the newly installed module on staging. Implement any template and code changes needed, and test whether the required features work (for one product). Once everything works as expected, install the module on live, configure it there and start using it there.
So try to do most of the database/admin changes on live only, and use the staging environment just for testing whether the module does what you need. That way, you avoid having to do complicated synchronisation of the database as well as having to do the same thing twice. Syncing the databases can easily lead to problems like duplicate order IDs, and it's a lot of meticulous work which I would try to avoid. After all the products are changed on live, you could copy the whole database over to staging again to get back in sync. That's a lot less risky and meticulous. Hope this helps.
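Copying the live database into staging can then be a one-liner; the host and database names below are placeholders, and credentials are assumed to be stored in ~/.my.cnf for both hosts:

    # pipe a dump of the live shop straight into the staging database
    mysqldump -h live-db-host live_magento | mysql -h staging-db-host staging_magento

    # afterwards, fix the store URLs on staging (web/unsecure/base_url and
    # web/secure/base_url in the core_config_data table) and clear var/cache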