We have a build pipeline to manage the artifacts' life cycle. The pipeline consists of four stages (with a fifth planned):
1. commit (run unit/integration tests)
2. at (deploy the artifact to the AT environment and run automated acceptance tests)
3. uat (deploy the artifact to the UAT environment and run manual acceptance tests)
4. pt (deploy the artifact to the PT environment and run performance tests)
5. // TODO: we're trying to support the production environment.
The pipeline supports environment variables, so we can deploy artifacts with different configurations by triggering it with options. The problem is that sometimes there are so many configuration items that the deploy script ends up containing too many replacement tasks.
I have an idea of building a centralized configuration management system (CCM for short), so we can maintain the configuration items over there and leave only a URL replacement task (pointing to the CCM and handling the different stages) in the deploy script. The artifact then doesn't hold the configuration values; it asks the CCM for them.
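To make that concrete, here is a minimal sketch of what the startup lookup might look like, assuming the CCM serves JSON over HTTP; the CCM_URL variable, the example route, and the key name are hypothetical:

import json
import os
from urllib.request import urlopen

# The deploy script replaces only this one value per stage (hypothetical name):
CCM_URL = os.environ["CCM_URL"]   # e.g. https://ccm.internal/config/uat

def load_config():
    # Ask the CCM for this stage's configuration at startup.
    with urlopen(CCM_URL) as response:
        return json.load(response)

config = load_config()
database_url = config["database.url"]   # a missing key fails fast here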
Is this feasible, or a bad idea in the first place?
My concern is that the potential mismatch between a configuration key (defined in the artifact) and its value (set in the CCM) is not solved by this solution, and may even get worse.
Configuration files should remain with the project, or be set as configuration variables where they are run. The reasoning behind this is that you're adding a new point of failure to your architecture: you have to take into account that your configuration server could go down, thus breaking everything that depends on it.
I would advise against putting yourself in this situation.
There is no problem in having a long list of environment variables defined for a project; if anything, that could even mean you're doing things properly.
If for some reason you find yourself changing configuration files a lot (e.g. database connection strings, API endpoints, etc.), then the problem might be that very need to change configuration so often, since it should almost always stay the same.
In our project, we currently have two different configurations. The first one builds the assemblies. The other packages everything for InstallShield (including moving stuff to the right directories, etc.).
Now, we can't agree on whether it's better to move all the build steps into a single configuration and run it as a whole chain, or to keep the build process separate from creating the installation package.
Googling turns up guides on how to do it, but not on which way to do it (and our confusion is mainly due to the architecture of the configurations' order). We'll be using a few PowerShell steps to move a number of files between different directories, due to certain local considerations. The total number of steps will be five or fewer.
The suggestion I have is the following three configurations. They run separately and independently, and their build steps overlap (each configuration being a superset of the previous one).
Configuration Build.
Configuration Build and test.
Configuration Build, test and package.
The main point of my suggestion is that, e.g., the step that compiles the software is implemented in each configuration (as opposed to reusing the artifacts from an independent run of another configuration).
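As a rough sketch of the superset idea (the build, test, and packaging commands below are placeholders, not our actual tools):

import subprocess
import sys

def build():
    subprocess.run(["msbuild", "Solution.sln"], check=True)        # placeholder

def test():
    build()
    subprocess.run(["vstest.console", "Tests.dll"], check=True)    # placeholder

def package():
    test()
    subprocess.run(["installshield", "setup.ism"], check=True)     # placeholder

# Each configuration runs the whole chain up to its own step, e.g.:
#   python ci.py build    (or: test, package)
{"build": build, "test": test, "package": package}[sys.argv[1]]()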
I would argue like this:
If you ever need to perform just one of the two steps, then leave them as separate steps.
This gives you the flexibility to run one, the other, or both. For example, could it be that you need to just build the solution, but not create the final installation package, say for local testing?
However, if you never use one of the steps on its own (you always run both together), then I'd probably merge them into one; having two separate steps doesn't make much sense to me.
What is a good way to coordinate configuration changes through environments?
In an effort to decouple our code from the environment we've moved all environmental config to external files. So maybe the application will look for ${application.config.dir}/app.properties and app.properties could contain:
user.auth.endpoint=http://some.url/user
user.auth.apikey=abcd12314
The problem is, user.auth.endpoint needs to point to a test resource when on test, a staging resource when on the staging environment, and a production resource when on prod.
We could maintain different copies of the config file but this would violate DRY and become very unwieldy (there are 20+ production environments).
What's a good way to manage this? What tools should I be searching for?
Externalizing config is a good idea; you could externalize it all the way to environment variables.
"Env vars are easy to change between deploys without changing any code; unlike config files, there is little chance of them being checked into the code repo accidentally; and unlike custom config files, or other config mechanisms such as Java System Properties, they are a language- and OS-agnostic standard."
From http://12factor.net/config
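Applied to the properties in the question, that could be as simple as the following sketch (the variable names are just illustrations of the pattern):

import os

# Each environment (test, staging, each of the 20+ prod instances) sets these
# variables at deploy time; the application code stays identical everywhere.
user_auth_endpoint = os.environ["USER_AUTH_ENDPOINT"]   # e.g. http://some.url/user
user_auth_apikey = os.environ["USER_AUTH_APIKEY"]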
I know of three approaches to this.
The first approach is to write, say, a Python "wrapper" script for your application. The script will find out some environmental details, such as hostname, user name and values of environment variables, and then construct the appropriate configuration file (or a set of command-line options) that is passed to the real application.
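A minimal sketch of such a wrapper, assuming a hypothetical hostname-to-environment mapping and application path:

import socket
import subprocess

# Hypothetical mapping from hostname to environment:
ENVIRONMENTS = {"build01": "test", "stage01": "staging"}
env = ENVIRONMENTS.get(socket.gethostname(), "production")

# Write the config file the real application expects, then launch it.
with open("app.properties", "w") as f:
    f.write(f"user.auth.endpoint=http://{env}.some.url/user\n")

subprocess.run(["./real-application"], check=True)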
The second approach is to embed an interpreter for a scripting language (Python, Lua and Tcl come to mind) into your application. This makes it possible for you to write a configuration file in the syntax of that embedded scripting language. In this way, the configuration file can make use of the scripting language's features, such as the ability to query environment variables or execute a shell command (such as hostname) and use if-then-else statements to set variables appropriately.
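As a rough illustration of the idea (using Python to play both roles; in a C++ or Java application you would embed the interpreter instead), the configuration file is executable code and can consult its environment:

# Suppose config.py is the configuration file, written in Python syntax:
#
#     import os, socket
#     if socket.gethostname().startswith("prod"):
#         db_host = "prod-db.internal"        # hypothetical host names
#     else:
#         db_host = os.environ.get("DB_HOST", "localhost")
#
# The application executes it and collects the variables it defines:
settings = {}
with open("config.py") as f:
    exec(f.read(), settings)
print(settings["db_host"])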
The third approach (if you are using C++ or Java) is to use the open-source Config4* library (disclaimer: I am the main developer of that). I recommend you read Chapter 2 of the "Config4* Getting Started" manual to see examples of how its flexible syntax can enable a single configuration file to adapt to multiple environments.
You can take a look at http://www.configapp.com. You work with one configuration file and switch/tab between environments. Internally it's just one configuration file; it handles the environment variables and generates the config file for each specific environment. In Config terminology, you have one Prod environment with 20+ instances. You will have a Prod environment configuration, and you can tweak the 20+ instances accordingly using a web interface.
You moved environment-specific properties to a separate file, but with Config you don't have to do that. With Config, you can have one configuration file, with environment variable support and common configuration applied to all environments.
Note that I'm part of the Config team.
We are creating unit test cases for our existing code base. As we progress through creating the test cases, the test files are getting bigger and are taking a very long time to execute.
I know the limitations of unit testing, and I also did some research on increasing efficiency. While researching, I found one useful idea: tighten up the provided data set.
Still, I am looking for more ideas on how to increase the efficiency of creating and running the unit test cases. (Increasing server resources is outside the scope of this question.)
As your question was general I'll cover a few of the common choices. But most of the speed-up techniques have downsides.
If you have dependencies on external components (web services, file systems, etc.) you can get a speed-up by mocking them. This is usually desirable for unit testing anyway. You still need to have integration/functional tests that test with the real component.
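For instance, with Python's unittest.mock (the functions here are invented stand-ins for a real external call):

from unittest import mock

# Imagine fetch_rate() normally makes a slow network call:
def fetch_rate(currency):
    raise RuntimeError("imagine a slow web-service call here")

def price_in(currency, amount):
    return amount * fetch_rate(currency)

# The unit test replaces the external call with a canned value:
with mock.patch(__name__ + ".fetch_rate", return_value=2):
    assert price_in("EUR", 100) == 200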
If testing databases, you can get quite a speed-up by using an in-memory database (SQLite works well with PHP's PDO; with Java, maybe H2?). This can have downsides, unless database portability is already a design goal. (I'm about to move to running one set of unit tests against both MySQL and SQLite.) Mocking the database away completely (see above) may be better.
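A quick illustration with Python's built-in sqlite3 module; the same idea applies to PDO or H2:

import sqlite3

# ":memory:" gives a throwaway database that lives only for the test run,
# so there is no disk I/O and no cleanup between suites.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
assert conn.execute("SELECT name FROM users").fetchone() == ("alice",)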
PHPUnit allows you to specify @group on each test. You could go through and mark your slower tests with @group slow, and then use the --exclude-group command-line flag to exclude them on most of your test runs, only including them in the overnight build. (You can also specify groups to include/exclude in your phpunit.xml.dist file.)
(I don't think JUnit has this option, but TestNG does; for C#, NUnit offers categories for this.)
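For what it's worth, Python's pytest has a similar mechanism via markers (an analogue, not PHPUnit itself):

import pytest

@pytest.mark.slow        # register the marker in pytest.ini to avoid warnings
def test_full_import():
    ...                  # an expensive end-to-end style test

# Everyday runs:  pytest -m "not slow"
# Nightly build:  pytest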
Creating fixtures once and then sharing them between tests is quicker than creating the fixture before each test. xUnit Test Patterns devotes whole chapters to the pros and cons (mostly cons) of this approach.
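In Python's unittest, for example, setUpClass is the shared-fixture hook:

import sqlite3
import unittest

class UserTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Built once for the whole class instead of before each test: faster,
        # but every test must now avoid corrupting the shared state.
        cls.conn = sqlite3.connect(":memory:")
        cls.conn.execute("CREATE TABLE users (name TEXT)")

    def test_insert_and_read(self):
        self.conn.execute("INSERT INTO users VALUES ('alice')")
        self.assertEqual(
            self.conn.execute("SELECT name FROM users").fetchone(), ("alice",))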
I know throwing hardware at it was explicitly forbidden in your question, but look again at @group, and consider how it can allow you to split your tests across multiple machines. Or split the tests by directory, and process one directory on each of multiple machines on your LAN. (PHPUnit is single-threaded, so you could run multiple instances on the same machine, each doing its own directory; be aware of how fixtures need to be independent (including unique names for databases you create, mocking the filesystem, etc.) if you go down this route.)
In the CloudBees wiki, this page explains how to add a configuration parameter for an app deployment, using cloudbees-web.xml.
But, is the content of:
<appid>APP_ID</appid>
injected as well? How can I retrieve this value from my application's code?
My preference is to avoid coding an application to contain explicit references to the container within which it runs. So I would favour using techniques that do not tie your code to CloudBees (a.k.a. us).
Thus I would use a container-specific descriptor file that configures a context parameter; your application then just reads the context parameter and uses it directly.
There are two techniques for doing this:
Application Environments: personally I love this way... though if you want to deploy the application to your own test environment that you have just spun up yourself, your cloudbees-web.xml will likely be missing the required environment definition... so it is better to use the newer
Configuration Parameters: that way, when you need your own test instance, you just define the configuration parameters for that test environment and then deploy the exact same artifact to it... it also prevents the issue of deploying to the test instance with the production environment turned on.
I am sure someone on the RUN# team may well have some other trick, such as a system property that tells you the app id... but keep in mind that when running locally (e.g. using a local jetty/tomcat/bees:run container), your code will then blow up!
Here are the ways I've come up with:
Have a config file that is not version-controlled
Check the server-name/IP address against a list of known dev servers
Set some environment variable that can be read
I've used (2) on some of my projects, and that has worked well with only one dev machine, but now that we're up to about 10, it may become difficult to manage an ever-changing list.
(1) I don't like, because that's an important file and it should be version controlled.
(3) I've never tried. It requires more configuration when we set up each server, but it could be an OK solution.
Are there any others I've missed? What are the pros/cons?
(3) doesn't have to require more configuration on the servers. You could instead default to server mode, and require more configuration on the dev machines.
In general I'd always want to make the dev machines the special case, and release behavior the default. The only tricky part is that if the relevant setting is in the config file, then developers will keep accidentally checking in their modified version of the file. You can avoid this either in your version-control system (for example a checkin hook), or:
read two config files, one of which is allowed to not exist (and only exists on dev machines, or perhaps on servers set up by expert users)
read an environment variable that is allowed to not exist.
Personally I prefer to have a config override file: you've already got the code to load the one config file, so it should be pretty straightforward to add another. Reading the environment isn't exactly difficult, of course; it's just a separate mechanism.
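A sketch of the override approach, assuming JSON config and made-up file names:

import json
import os

def load_config():
    # The base config is version-controlled; the override exists only on
    # dev machines and is never checked in.
    with open("config.json") as f:
        config = json.load(f)
    if os.path.exists("config.local.json"):
        with open("config.local.json") as f:
            config.update(json.load(f))
    return config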
Some people really like their programs to be controlled by the environment, especially those who want to control them when running from scripts: they don't want to have to write a config file on the fly when it's so easy to set the environment from a script. So it might be worth using the environment from that point of view, but not just for this setting.
Another completely different option: make dev/release mode configurable within the app, if you're logged into the app with suitable admin privileges. Whether this is a good idea might depend on whether you have the kind of devs who write debug logging messages along the lines of "I can't be bothered to fix this, but no customer is ever going to tell the difference; they're all too stupid." If so, (a) don't allow app admins to enable debug mode, and (b) re-educate your devs.
Here are a few other possibilities.
Some organizations keep development machines on one network, and production machines on another network, for example, dev.example.com and prod.example.com. If your organization uses that practice, then an application can determine its environment via the fully-qualified hostname on which it is running, or perhaps by examining some bits in its IP address.
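For example, a sketch in Python using that dev.example.com/prod.example.com split:

import socket

fqdn = socket.getfqdn()                      # e.g. "web03.prod.example.com"
if fqdn.endswith(".prod.example.com"):
    environment = "production"
elif fqdn.endswith(".dev.example.com"):
    environment = "development"
else:
    raise SystemExit("Error: unknown environment for host " + fqdn)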
Another possibility is to use an embeddable scripting language (Tcl, Lua and Python come to mind) as the syntax of your configuration file. Doing that means your configuration file can easily query environment variables (or IP addresses) and use that to drive an if-then-else statement. A drawback of this approach is the potential security risk of somebody editing a configuration file to add malicious code (for example, to delete files).
A final possibility is to start each application via a shell/Python/Perl script. The script can query its environment and then use that to drive an if-then-else statement for passing a command-line option to the "real" application.
By the way, I don't like to code an environment-testing if-then-else statement as follows:
if check_for_running_in_production():
    ...  # run program in production mode
else:
    ...  # run program in development mode
The above logic silently breaks if the check_for_running_in_production test has not been updated to deal with a newly added production machine. Instead, I prefer to code a bit more defensively:
if check_for_running_in_production():
    ...  # run program in production mode
elif check_for_running_in_development():
    ...  # run program in development mode
else:
    raise SystemExit("Error: unknown environment")