Push updated code to installs - auto-update

I am trying to automate an update mechanism that will push updates to all installed instances of my application. All the code stays on the server (no third-party installs); however, each install is independent of the others, e.g.:
/webbase/client1
/webbase/client2
/webbase/client3
...
Other than each install having its own config file, the code is the same. What would you consider to be the best way to automatically push updates to all the installed instances? The entire application is database-driven at the moment, and coded in PHP.

A few small assumptions first:
Your "webapp" is purely code, or mostly code.
Your "webapp" does not change itself, or such changes are limited to external resources.
Your "webapp" does not change its database design during updates.
The simplest answer would be to set up a DVCS; with one command you could then send all the code to the clients. As a side effect you gain the ability to roll back an update if needed. In a modern DVCS there is no problem with configuring multiple remote repositories to push to, and adding new ones is also trivial. Setting up a DVCS is not a problem either.
So on the dev machine (or whatever machine the code passes through just before updates are pushed to clients) you set up the DVCS, and then with one command you deploy changes to the clients. However, this only pushes static changes (it will not update the database!), so in order to apply dynamic changes after merely copying files you will need additional scripts (and those you can also put in the repo, so they get updated too!). Then set up the repositories on the client machines, and you are ready to start deploying.
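To make this concrete, here is a minimal deploy sketch in Python. It assumes each /webbase/clientN directory is a Git working copy pulling from your central repository, and that a hypothetical post-update script shipped with the code handles the dynamic steps; the names and paths are illustrative only.

#!/usr/bin/env python3
# Hypothetical deploy helper: update every client install from the
# central repository and run an optional post-update hook in each one.
import glob
import os
import subprocess

CLIENT_ROOT = "/webbase"                      # parent dir of all installs
POST_UPDATE = "scripts/post_update.php"       # hypothetical per-install hook

for install in sorted(glob.glob(os.path.join(CLIENT_ROOT, "client*"))):
    # Fast-forward this install to the latest pushed revision.
    subprocess.run(["git", "-C", install, "pull", "--ff-only"], check=True)

    hook = os.path.join(install, POST_UPDATE)
    if os.path.exists(hook):
        # Apply any dynamic changes that a plain file copy cannot handle.
        subprocess.run(["php", hook], check=True)
    print("updated", install)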
You can also look for a deployment solution dedicated to your language + framework (there should be plenty of those for PHP, though they may not be free of charge).
Comment if you need any clarification.

Related

mediawiki move and upgrade at once

I'm considering moving one of the company's internal wikis (a very basic wiki with few/no extensions and not that many pages) to another machine, and I'm wondering whether I can upgrade the MediaWiki version at the same time, going from 1.6 to the current latest, 1.25 (in order to use extensions only available for the latest versions).
The Upgrade guide
https://www.mediawiki.org/wiki/Manual:Upgrading
seems to omit the scenario in which an upgrade of the underlying software (Apache, MySQL) is also required to set up the target version,
and the Moving guide
https://www.mediawiki.org/wiki/Manual:Moving_a_wiki
strictly recommends that source and target wikis share the same software level.
So I'm a bit stuck. I would attempt an export/import of an XML dump, but I'm not confident about it for the above reason (there is a huge version gap between the source and target wikis).
Or is there a better way to approach the problem? Thanks!
Edit after some tests
I consider Florian's answer the safest and most advisable, but I would like to share the final solution I came up with.
Install the new wiki (blank)
Export an xml dump of the original wiki
php maintenance\dumpBackup.php --full > dump.xml
I first encountered a "Cannot connect to database" error, so I had to add the following lines to LocalSettings.php:
$wgDBadminuser=...
$wgDBadminpassword=...
Import the XML dump into the new wiki (first try it in dry-run mode):
php maintenance\importDump.php --dry-run < dump.xml
php maintenance\importDump.php < dump.xml
Then I was prompted to run
php maintenance\rebuildrecentchanges.php
Copy the physical files from the old wiki to the new one, in the same path (for common wikis they should be in the "images" folder; that was not my case).
Re-create the users (manually) in the new wiki
Finally, edited LocalSettings.php with the most essential settings I wanted to preserve (groups, restrictions, ...).
And the move was done! The new wiki is OK and already usable at this stage: pages are there, links are working.
In fact, it should work if you move the wiki from one server to another and after that upgrade it on the new server. As you may already know, it's important to back up all the files and data you have for the wiki in the "old" environment, so you can easily restore it from there.
If I wanted to do what you want to do, I would first follow the "Moving a wiki" guide, except for the "Test" section. After that I would upgrade the wiki to the newest version. Then I could test the wiki intensively to see whether everything works well.
If you don't want to do that, you really need to upgrade the wiki in the "old" source environment and move it after that. If I understand you correctly, that would require an update of the server software (PHP and MySQL, I expect?).

Reliable way to tell development server apart from production server?

Here are the ways I've come up with:
Have a config file that is not version-controlled
Check the server-name/IP address against a list of known dev servers
Set some environment variable that can be read
I've used (2) on some of my projects, and that has worked well with only one dev machine, but we're up to about 10 now, and it may become difficult to manage an ever-changing list.
(1) I don't like, because that's an important file and it should be version controlled.
(3) I've never tried. It requires more configuration when we set up each server, but it could be an OK solution.
Are there any others I've missed? What are the pros/cons?
(3) doesn't have to require more configuration on the servers. You could instead default to server mode, and require more configuration on the dev machines.
In general I'd always want to make the dev machines the special case, and release behavior the default. The only tricky part is that if the relevant setting is in the config file, then developers will keep accidentally checking in their modified version of the file. You can avoid this either in your version-control system (for example a checkin hook), or:
read two config files, one of which is allowed to not exist (and only exists on dev machines, or perhaps on servers set up by expert users)
read an environment variable that is allowed to not exist.
Personally I prefer to have a config override file, just because you've already got the code to load the one config file, so it should be pretty straightforward to add another. Reading the environment isn't exactly difficult, of course; it's just a separate mechanism.
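As a minimal sketch of that override-file approach (Python; the file names config.json and config.dev.json are made up for the example):

import json

def load_config():
    # The base file is required and version-controlled.
    with open("config.json") as f:
        config = json.load(f)
    # The override file only exists on dev machines (or specially
    # configured servers); its absence means "use release behaviour".
    try:
        with open("config.dev.json") as f:
            config.update(json.load(f))
    except FileNotFoundError:
        pass
    return config

config = load_config()
is_dev = config.get("environment", "production") == "development"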
Some people really like their programs to be controlled by the environment (especially those who want to control them when running from scripts. They don't want to have to write a config file on the fly when it's so easy to set the environment from a script). So it might be worth using the environment from that POV, but not just for this setting.
Another completely different option: make dev/release mode configurable within the app, if you're logged into the app with suitable admin privileges. Whether this is a good idea might depend on whether you have the kind of devs who write debug logging messages along the lines of, "I can't be bothered to fix this, but no customer is ever going to tell the difference, they're all too stupid." If so, (a) don't allow app admins to enable debug mode, and (b) re-educate your devs.
Here are a few other possibilities.
Some organizations keep development machines on one network, and production machines on another network, for example, dev.example.com and prod.example.com. If your organization uses that practice, then an application can determine its environment via the fully-qualified hostname on which it is running, or perhaps by examining some bits in its IP address.
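For example (a Python sketch; the domain suffixes are placeholders for whatever naming convention your organization actually uses):

import socket

def detect_environment():
    fqdn = socket.getfqdn()
    if fqdn.endswith(".prod.example.com"):
        return "production"
    if fqdn.endswith(".dev.example.com"):
        return "development"
    raise RuntimeError("unknown environment for host " + fqdn)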
Another possibility is to use an embeddable scripting language (Tcl, Lua and Python come to mind) as the syntax of your configuration file. Doing that means your configuration file can easily query environment variables (or IP addresses) and use that to drive an if-then-else statement. A drawback of this approach is the potential security risk of somebody editing a configuration file to add malicious code (for example, to delete files).
A final possibility is to start each application via a shell/Python/Perl script. The script can query its environment and then use that to drive an if-then-else statement for passing a command-line option to the "real" application.
By the way, I don't like to code an environment-testing if-then-else statement as follows:
if (check-for-running-in-production) {
    ... // run program in production mode
} else {
    ... // run program in development mode
}
The above logic silently breaks if the check-for-running-in-production test has not been updated to deal with a newly added production machine. Instead, I prefer to code a bit more defensively:
if (check-for-running-in-production) {
    ... // run program in production mode
} else if (check-for-running-in-development) {
    ... // run program in development mode
} else {
    print "Error: unknown environment"
    exit
}
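In Python, with hypothetical host lists (which would normally live in configuration rather than in the source), that defensive check might look like:

import socket
import sys

PRODUCTION_HOSTS = {"web01.example.com", "web02.example.com"}    # hypothetical
DEVELOPMENT_HOSTS = {"dev01.example.com", "dev02.example.com"}   # hypothetical

host = socket.getfqdn()
if host in PRODUCTION_HOSTS:
    mode = "production"
elif host in DEVELOPMENT_HOSTS:
    mode = "development"
else:
    print("Error: unknown environment:", host)
    sys.exit(1)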

Autoupdate ala Google Chrome workflow

In the company I work at, I was asked to write an auto-update function à la Chrome, i.e. it should check periodically whether a new version is available, download the new version, and apply it silently the next time the application starts.
I already have something up and running, but it is more a dirty hack than something I feel happy about. So, I would like to know how to design and implement such a solution. My horrible hack works like this:
Have a mechanism to check whether a new version exists (a database query or a web service)
Download a full zip with the whole new version.
Check the file signature. If everything went all right, set a registry value ("must update") to true.
When the application restarts, if the must-update value is true, launch an update program and exit.
The update program deletes the contents of the application folder, unzips the update in place of the old contents, relaunches the application and exits.
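In pseudo-code terms, the check/download/verify part of those steps might look roughly like this (a Python sketch with a made-up update endpoint; it uses a SHA-256 checksum as a stand-in for the real signature check):

import hashlib
import json
import urllib.request

UPDATE_URL = "https://updates.example.com/myapp"   # hypothetical endpoint
CURRENT_VERSION = "1.4.2"

def check_and_download():
    # The server is assumed to publish e.g. {"version": "1.5.0", "sha256": "..."}.
    with urllib.request.urlopen(UPDATE_URL + "/latest.json") as resp:
        latest = json.load(resp)
    if latest["version"] == CURRENT_VERSION:
        return None                                 # already up to date
    with urllib.request.urlopen(UPDATE_URL + "/" + latest["version"] + ".zip") as resp:
        payload = resp.read()
    if hashlib.sha256(payload).hexdigest() != latest["sha256"]:
        raise ValueError("checksum mismatch, discarding download")
    with open("pending_update.zip", "wb") as f:
        f.write(payload)
    return latest["version"]                        # caller sets the must-update flag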
Now, I would like to change it so it works more cleanly. I am planning to send the update as a bsdiff file. It gets downloaded. But the question is, what happens next?
When do I apply the update?
Who is in charge of applying the patch? Is it the program itself, or is it a separate program, as in my hack, that applies the patch and relaunches the application?
If you're going down the C++ route, you can go to Chromium and download the Chrome source code and dig around to see how the update is done; this might give you a better idea of how to approach it. Here's an article that might help.
If you're familiar with .NET, the recently released NuGet also has an auto-update feature that might be useful to look at; you can get the source code from here. David Ebbo has a blog post about how it's done here.
I'm not up to date on Delphi but you might be able to use either of the above options.
The workflow you proposed is more or less how it should work, but there's no need to re-invent the wheel - there are plenty of libraries out there that will do this for you. Using a 3rd party library has the benefit of keeping your code cleaner while making sure the dirty process of auto-update is contained and works flawlessly.
Trust me, I know. I'm the author of NAppUpdate, an app update framework for .NET (which you might want to try out or learn from).
So, after giving it a lot of thought, this is what I came up with (by "active directory" I mean the directory where the main program lies, the "active program" is the main program, and the "update program" is the one that replaces the active program and its resource files):
The active program checks at regular intervals whether there is a new version. If so, it downloads it.
Prepare the new version in a separate folder (this can be done by copying the contents of the program directory to a subdirectory and applying a binary patch, or simply by unzipping the new version).
Set a flag that indicates that a new version is ready.
When the program is exiting (and one has to account for different interrupts here):
The active program checks the new-version-ready flag. If set, it launches the update program and exits.
The update program checks whether it can write to the active directory. If so, it replaces the contents with the prepared version.
The update program has to recheck links and update them accordingly.
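To sketch the update-program side of this (Python, with made-up paths; it assumes the update program itself runs from outside the active directory and that the staged version has already been prepared by the active program):

import os
import shutil
import subprocess
import sys

ACTIVE_DIR = r"C:\Program Files\MyApp"        # where the active program lives
STAGED_DIR = r"C:\Program Files\MyApp.new"    # prepared by the active program
APP_EXE = "myapp.exe"

def apply_update():
    if not os.access(ACTIVE_DIR, os.W_OK):
        sys.exit("cannot write to the active directory, aborting update")
    for name in os.listdir(STAGED_DIR):
        target = os.path.join(ACTIVE_DIR, name)
        # Remove whatever is there, then move the prepared file/folder in.
        if os.path.isdir(target):
            shutil.rmtree(target)
        elif os.path.exists(target):
            os.remove(target)
        shutil.move(os.path.join(STAGED_DIR, name), target)
    # Relaunch the freshly updated application, then let the updater exit.
    subprocess.Popen([os.path.join(ACTIVE_DIR, APP_EXE)])

if __name__ == "__main__":
    apply_update()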
So guys, if you have a better workflow, please tell me.
You could literally use the Google Chrome update workflow by using the Google Chrome updater:
http://code.google.com/p/omaha/
They open sourced it Feb 2009.

How can multiple developers use the same vcproj files?

I'm working on a project with two other developers that's built on FireBreath. So far, I've been able to get things working perfectly on my machine, but we need to coordinate our development via Mercurial. So I pushed my files to the repository and thought all was well.
Unfortunately, that doesn't work.
The various .vcproj files that make up the solution all contain hard-coded references to my local file system. This works fine for me, because I'm not moving the project around. But when you try to build the solution on another machine with a different file structure (different drive letter, different folder location, etc.) everything breaks.
I used FireBreath's standard project generation script (Python) and then the Visual Studio CMake script (prep2008.cmd) to generate the solution files. What can I do to tweak things so that other developers can use the same code base?
If your developers are not using the same build/make/project files, this could quickly become a maintenance nightmare. So you should definitely all use the same .vcproj files. (An exception to this would be if the project files were generated from some other files. In that case treat those other files in the way described above.)
There are two ways to deal with the problem of differing setups on different machines. One is to make all paths relative to the project's path. The other is to use environment variables to refer to files/tools/libraries/whatever. IME it's best to use relative paths for everything that can be checked out with the project, and environment variables for the rest. Add a script that checks for the existence of all necessary environment variables, pointing out the meaning of any missing ones, and run it as a build prerequisite, so whoever tries to get a new build machine up and running gets hints at what to do.
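That environment-check script could be as simple as this (a Python sketch; the variable names and descriptions are purely illustrative):

import os
import sys

REQUIRED_VARS = {
    "BOOST_ROOT": "path to the Boost installation",
    "THIRDPARTY_LIBS": "folder with prebuilt third-party libraries",
}

missing = [(name, desc) for name, desc in REQUIRED_VARS.items()
           if name not in os.environ]
if missing:
    for name, desc in missing:
        print("Missing environment variable %s (%s)" % (name, desc))
    sys.exit(1)
print("Build environment looks complete.")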
To make sure that everyone caught the updated comments from sbi's answer, let me give you the "definitive" answer from the FireBreath devs.
Your build directory is disposable; you should never share .vcproj files. Instead, you should regenerate your build/ directory any time you change the project and on each new computer, just like any project that uses CMake.
For more information, see http://colonelpanic.net/2010/11/firebreath-tips-working-with-source-control/
For reference, I am the primary author of FireBreath and I wrote the article.
I'm not familiar with FireBreath, but you need to make the references relative, and then recreate that relative structure on every machine. That is, if your project sits in "c:\myprojects\thisproject" and has an additional include directory "c:\mydir\mylib\include", then the latter path needs to be replaced with "..\..\mydir\mylib\include".
EDIT: I rewrote my answer to make it clearer. If I understood you correctly, your problem is that FireBreath generates those .vcproj files with absolute paths in them, and you want to use these .vcproj files on a different developer machine.
I see 3 options:
Live with it. That means making sure every team member has the same file structure / view of the file system, with tools installed in the same place.
Ask the authors of FireBreath to change their .vcproj generator to allow relative paths, use of environment variables etc.
If 1 or 2 does not work, write a program or script that changes the absolute paths to relative ones in those .vcproj files (see the sketch below). Run this script whenever you have to regenerate your FireBreath project.
What you should not do, according to the FireBreath FAQ: don't change the .vcproj files manually; those changes will be lost the next time the project is regenerated.
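For completeness, a rough sketch of the script mentioned in option 3, rewriting absolute paths in generated .vcproj files as paths relative to each project file (Python; the absolute prefix is a made-up example and the matching is deliberately naive):

import os
import re

PROJECT_ROOT = r"C:\dev\myproject"    # hypothetical absolute prefix to rewrite

def relativize(vcproj_path):
    with open(vcproj_path, encoding="utf-8") as f:
        text = f.read()
    project_dir = os.path.dirname(os.path.abspath(vcproj_path))

    def to_relative(match):
        # Turn the matched absolute path into one relative to the .vcproj file.
        return os.path.relpath(match.group(0), project_dir)

    new_text = re.sub(re.escape(PROJECT_ROOT) + r'[^"<>;,]*', to_relative, text)
    with open(vcproj_path, "w", encoding="utf-8") as f:
        f.write(new_text)

for root, _dirs, files in os.walk("."):
    for name in files:
        if name.endswith(".vcproj"):
            relativize(os.path.join(root, name))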
EDIT: it seems that "option 4" turned out to be the best solution: generating those .vcproj files for each developer individually. Hope my suggestions were helpful anyway.

Fetching project code from different repositories

We want to use Hudson for our CI, but our project is made of code coming from different repositories. For example:
- org.sourceforce... should be checked out from http:/sv/n/rep1.
- org.python.... should be checked out from http:/sv/n/rep2.
- com.company.product should be checked out from http:/sv/n/rep3.
Right now we use an Ant script with a get.all target that checks out/updates the code from the different repositories.
So I can create a job that lets Hudson call our get.all target to fetch all the source code, and a second target to build it all. But in that case, how do I monitor changes in the 3 repositories?
I'm thinking that I could just not assign any repository in the job configuration and schedule the job to fetch/build at a regular time interval, but I feel I'd miss the point of CI if builds can't be triggered by commits/repository changes.
What would be the best way to do this? Is there a way to configure project dependencies in Hudson?
I haven't poked at the innards of our Hudson installation too much, but there is a button under Source Code Management that says "Add more locations..." (if that isn't the default out-of-the-box configuration, let me know and I will dig deeper).
Most of our Hudson builds require at least a dozen different SVN repos to be checked out, and Hudson monitors them all automatically. We then have the build steps invoke Ant in the correct order to build the dependencies.
I assume you're using subversion. If not, then please ignore.
Subversion, at least in its newer versions, supports a concept called "externals."
An external is an API, alternate project, dependency, or whatnot that does not reside in YOUR project repository.
See: http://svnbook.red-bean.com/en/1.1/ch07s04.html
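For illustration, an svn:externals definition (old-style format, local directory first) might look like this, with placeholder directory names and URLs rather than the ones from your question:

org/external-lib    http://svn.example.com/rep1/trunk
org/python-lib      http://svn.example.com/rep2/trunk

You would set it on the parent directory of your working copy with something like "svn propset svn:externals -F externals.txt ." and then run "svn update" to pull the external code in; Hudson then only has to watch the one repository that carries the externals definition.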