Does UrbanCode Deploy have the ability to share properties across applications?

Some other deployment platforms such as Octopus Deploy have the concept of shared variables/properties/values across applications. For instance, there may be 25 applications that all consume an API at a configurable URL. In the case that the URL changes, it would be ideal to change that value in one place.
Is there anything in UCD that supports that type of arrangement?

You can set a system-level property for this under "Settings" -> "Properties" in the top bar. These properties can be referenced with the syntax ${p:system/property-name} and are accessible across all applications.
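For example, with an illustrative property name:

Settings -> Properties:   api.base.url = https://api.example.com/v1
In any application:       ${p:system/api.base.url}

Changing the value once under Settings -> Properties then takes effect for every application that references it.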

While system-level properties are global, the resource tree properties are likely what you want. The resource tree is a hierarchical, cascading set of properties that can be overridden at lower branches but are inherited from parent branches.
At the root level, create a folder (Resource Group) for each Application.
Under the Application folder, create a folder for each Environment in the Application.
Under the Environment folder, add the Agents (target machines).
Under the Agents, map the corresponding Components.
Add additional folders as needed or desired.
Here is the documentation on resources:
https://www.ibm.com/support/knowledgecenter/en/SS4GSP_7.1.1/com.ibm.udeploy.doc/topics/getstart_resource_create.html
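As an illustration only (the names are hypothetical), the tree described above might look like this:

WebShop                  (Application folder / Resource Group)
  DEV                    (Environment folder)
    agent-dev-01         (Agent)
      WebShop-API        (Component)
      WebShop-UI         (Component)
  PROD                   (Environment folder)
    agent-prod-01        (Agent)
      WebShop-API        (Component)
      WebShop-UI         (Component)

A property set on a folder is inherited by everything beneath it and can be overridden lower down; resource properties are referenced with the ${p:resource/property-name} syntax (see the linked documentation for the exact reference forms available in your version).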

Related

What are the output files of the VxWorks Workbench kernel configuration GUI

I'm trying to generate a VxWorks 6.9.4.8 kernel configuration that is identical to another kernel workbench project. The Workbench 3.3.6 only allows GUI configuration.
Is there an underlying kernel configuration file, produced by the GUI, which can be replaced?
After updating the kernel configuration using the Workbench GUI, I see the following files have changed:
linkSyms.c,
prjComps.h,
prjConfig.c, and
prjParams.h
I guess my question is, which one, if any, uniquely identifies the kernel as built?
prjComps.h will contain the names of all the components you have chosen in the kernel configuration GUI.
The first step in creating a new kernel configuration based on another kernel configuration is to use the GUI configurator and add the missing components to prjComps.h. It is best to use a diff tool such as Beyond Compare and keep reducing the differences by adding/removing components. Remember not to edit this file directly, but only via the GUI configurator, since the tool calculates the dependent components and adds/removes them as well.
The second step is to create the new prjParams.h in the same way.
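For reference, prjComps.h is a generated header with one preprocessor definition per selected component, which is why a line-by-line diff of two such files shows exactly which components differ. An excerpt might look roughly like this (INCLUDE_TELNET_CLIENT is the example used later in this answer; the other entry is only illustrative):

/* excerpt from a generated prjComps.h */
#define INCLUDE_SHELL
#define INCLUDE_TELNET_CLIENT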
The Workbench actually allows you to edit the kernel configuration from the command line via the vxprj tool in VxWorks 6.9 (this tool has been replaced by "wrtool" in VxWorks 7): you can right-click the image project and choose 'Open Wind River VxWorks 6.9 Development Shell'.
If you want to add a component, e.g. the telnet client (INCLUDE_TELNET_CLIENT), you can use the following command:
vxprj component add INCLUDE_TELNET_CLIENT
To remove a component
vxprj component remove INCLUDE_TELNET_CLIENT
For more on the vxprj tool, you can look up the documentation in the Workbench itself.
The project configuration is held in a handful of files in the kernel project directory.
These are:
.project
.cproject
.wrproject
projectname.wpj
Files such as prjComps.h, prjParams.h, and prjConfig.c are all generated by the configuration tool; however, these are not configuration files themselves. Instead, this is generated C code that contains, amongst other things, a list of the selected components.
These files are also re-generated, I believe, when you rebuild the project.
As such, these are not really the authoritative source you are interested in.
For this, you need to look at the project files. In terms of a list of components, the most interesting is the .wpj file, which contains amongst other things a list of explicitly and implicitly included components.
The explicitly included components are those you manually selected in the Kernel Configuration GUI, the implicitly included are those that were then included to satisfy dependencies.
This distinction can sometimes make comparing kernel configurations tricky; in that case you may want to fall back on the generated files, e.g. prjComps.h. However, you should always remember that these are a representation of the configuration, not the source.
The .project and related configuration files are big and complex, but a decent diff tool, such as Beyond Compare, can make comparisons of the project directories fairly easy.
Thanks for the clue, #endTunnel. I looked at that file, and noticed that a few files get modified when I save my GUI selections.
prjComps.h - all the components #included in the kernel build
prjParams.h - the additional parameters set for the enabled components
prjConfig.c - the configuration and initialization calls for each module included.
'linkSyms.c' also gets modified. Not sure how that is used, yet.
I can now use diff to compare kernel configurations, and perhaps even duplicate a configuration (haven't tried that yet).

mercurial - several projects and repositories

(disclaimer: I am completely new to mercurial and version control)
So I have a folder structure
Programs
  CPPLib1
  CPPProject11
  CPPProject12
  CPPLib2
  CPPProject21
  CPPProject22
Each group of three is completely independent of the other group, but within each group the code is related and I'd like to manage it under version control as a whole (commit/extract everything in one transaction). As I understood by googling it, I must have a repository for each group in their common parent (Programs), but I cannot have 2 different repositories there, right? Does it mean I must have this structure instead:
Programs
  Group1
    CPPLib1
    CPPProject11
    CPPProject12
  Group2
    CPPLib2
    CPPProject21
    CPPProject22
A related question, this site http://help.fogcreek.com/8169/using-more-than-one-repository says
"Since Mercurial and Git are Distributed Version Control Systems (DVCSs), you should use at least use one separate repository per project, including shared projects and libraries."
So what does this advice mean? I can't have a separate repository for each of
CPPLib1
CPPProject11
CPPProject12
and manage them as a whole. I am confused.
For each of your project groups you'll need to create one repository in a separate directory. How you structure things beneath that is up for debate and depends a bit on your preferences.
You say that you want everything in a project group managed within a single repository. That means you can simply create a directory structure as you described, with the sub-projects residing in different directories within this repository.
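A minimal sketch of that setup (directory names taken from the question):

cd Programs/Group1
hg init
hg add CPPLib1 CPPProject11 CPPProject12
hg commit -m "Initial import of the library and its projects"

One repository then covers the whole group, and a single commit can span the library and both projects.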
Within each group, you can take it further and make each of these directories (library, programme 1, programme 2, ...) a separate repository, which in turn becomes a sub-repository of the main repository, as described in the link given by Lasse Karlsen (Subrepository).
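In that layout each sub-directory is itself a Mercurial repository, and the parent repository lists them in a .hgsub file that is added and committed like any other file. For the first group, the .hgsub file could simply contain (paths from the question):

CPPLib1 = CPPLib1
CPPProject11 = CPPProject11
CPPProject12 = CPPProject12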
You could also handle it differently if you allow a more flexible layout and let go of checking out one group in its entirety: for instance, you could declare the library a sub-repository of each of the programmes which use it. This has the advantage that the programme then directly defines which library version it depends on.
Further, before jumping to sub-repositories, you might want to look at the alternative implementation of guest repositories as well. They handle the dependency less strictly, so a failure to find the sub-repository is less fatal: https://bitbucket.org/selinc/guestrepo

How to organize code so that we can move and update it without having to edit the location of the configuration file?

The issue I consider is how to write code that can easily know the location of a required config file and yet is portable, without any edits, from one environment to another. We don't want to edit the location of the configuration file to adapt the code to each new environment, say each time we move the code from a development environment to production. The method should not rely on resources that are not universally available, such as access to user-defined environment variables or access to a specific directory. For example, it may seem that using the DOCUMENT_ROOT as a base location for the config file is the way to go, but that is not universal. First, in a command-line environment the DOCUMENT_ROOT makes no sense. Second, a programmer might be given access only to a sub-folder of the DOCUMENT_ROOT. Another requirement is that the configuration file could depend on values known only at run time, say the user who calls the application, as in this question: How to load a config file based on user selection from "unknown" location.
The question is not what the best location of the configuration file is in specific environments, such as in Location to put user configuration files in Windows. The programmers would still have to figure out the best location so that end users could easily find the configuration file. The question is how this location, whatever it is, even if it depends on values known at run time, can be passed to the code in a portable manner.
One approach is to design every script file with the idea in mind that it is to be included in another file, and so on, until we get to a wrapper script that only defines the directory of the config file for the benefit of the included file and the other files included therein. Once this directory path is known, other configuration values can be obtained from a named configuration file within it. This works because the wrapper scripts are not updated when we update the code from a repository or testing environment. The approach seems universally applicable: no special support of any kind, such as access to user-defined environment variables or to some specific directory on the server, is needed. As long as you have access to the code, which is the strict minimum to expect, it works. Also, scripts are often naturally designed to be included in another file, so this is natural.
The approach only requires that we agree on a convention for the name of the constant, say CONFIG_DIRECTORY. If every programmer agreed to look for the config file at the location specified by this constant, then any user of the code could put the config file anywhere and just define the constant accordingly.
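A minimal sketch of the idea in Python (the file and module names are hypothetical; the same pattern works in any language with an include or import mechanism). The wrapper below is the only file that differs between environments and is never touched by a code update:

# run.py - per-environment wrapper, kept out of the repository
CONFIG_DIRECTORY = "/srv/myapp/config"   # the only line that changes per environment

import app                               # the shared, versioned code
app.main(CONFIG_DIRECTORY)               # app loads its config files from CONFIG_DIRECTORY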
In Linux, there is the folder /etc for config files, so the notion of a universally agreed standard in a very large context already exists. This is the same idea as the one proposed here, except that it is the same constant for all machines and someone might not have access to that level of the server. Moreover, we lose the possibility of having different configuration directories for different wrapper scripts. Allowing the universal standard to be a constant name, say 'CONFIG_DIRECTORY', instead of the fixed constant '/etc', seems to be just extra flexibility with no additional inconvenience. It does require that we define this constant in some wrapper script, but we could fall back to the old approach if it is not defined. The outcome, if the approach is strictly applied, would be that all the scripts required in the server document root would only be simple wrappers that define a configuration directory. That seems cool. Often people say that it is safer to have important code outside the document root.

Configuration Promotion Between Environments

What is a good way to coordinate configuration changes through environments?
In an effort to decouple our code from the environment we've moved all environmental config to external files. So maybe the application will look for ${application.config.dir}/app.properties and app.properties could contain:
user.auth.endpoint=http://some.url/user
user.auth.apikey=abcd12314
The problem is, user.auth.endpoint needs to point to a test resource when on test, a staging resource when on the staging environment, and a production resource when on prod.
We could maintain different copies of the config file but this would violate DRY and become very unwieldy (there are 20+ production environments).
What's a good way to manage this? What tools should I be searching for?
Externalizing config is a good idea; you could externalize it all the way to environment variables.
Env vars are easy to change between deploys without changing any code; unlike config files, there is little chance of them being checked into the code repo accidentally; and unlike custom config files, or other config mechanisms such as Java System Properties, they are a language- and OS-agnostic standard.
From http://12factor.net/config
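As a small illustration of that approach (the variable names mirror the question; the fallback value is made up), the application reads its settings from the environment instead of from a file:

import os

# environment-specific values come from env vars; fall back only for local development
user_auth_endpoint = os.environ.get("USER_AUTH_ENDPOINT", "http://localhost:8080/user")
user_auth_apikey = os.environ["USER_AUTH_APIKEY"]   # fail fast if the key is not set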
I know of three approaches to this.
The first approach is to write, say, a Python "wrapper" script for your application. The script will find out some environmental details, such as hostname, user name and values of environment variables, and then construct the appropriate configuration file (or a set of command-line options) that is passed to the real application.
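A sketch of such a wrapper (the hostnames, endpoints and launch command are invented; a real script would key off whatever detail distinguishes your environments):

#!/usr/bin/env python
import socket
import subprocess

ENDPOINTS = {
    "test-box":    "http://test.internal/user",
    "staging-box": "http://staging.internal/user",
}
endpoint = ENDPOINTS.get(socket.gethostname(), "http://prod.example.com/user")

# write the config file the real application expects, then hand over to it
with open("app.properties", "w") as f:
    f.write("user.auth.endpoint=%s\n" % endpoint)
subprocess.call(["java", "-jar", "app.jar", "app.properties"])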
The second approach is to embed an interpreter for a scripting language (Python, Lua and Tcl come to mind) into your application. This makes it possible for you to write a configuration file in the syntax of that embedded scripting language. In this way, the configuration file can make use of the scripting language's features, such as the ability to query environment variables or execute a shell command (such as hostname) and use if-then-else statements to set variables appropriately.
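For example, if the embedded language were Python, the configuration file itself could contain logic such as the following (the values are invented), and the application would evaluate it at start-up:

# config.py - evaluated by the embedded interpreter at start-up
import socket

if socket.gethostname().startswith("prod"):
    user_auth_endpoint = "http://prod.example.com/user"
else:
    user_auth_endpoint = "http://test.internal/user"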
The third approach (if you are using C++ or Java) is to use the open-source Config4* library (disclaimer: I am the main developer of that). I recommend you read Chapter 2 of the "Config4* Getting Started" manual to see examples of how its flexible syntax can enable a single configuration file to adapt to multiple environments.
You can take a look at http://www.configapp.com. You work with one configuration file and switch/tab between the environments. Internally it's just one configuration file, and it handles the environment variables and generates the config file for the specific environment. In Config terminology, you have one Prod environment with 20+ instances. You will have a Prod environment configuration, and you can tweak the 20+ instances accordingly using a web interface.
You moved environment-specific properties to a separate file, but with Config you don't have to do that. With Config, you can have one configuration file, with environment variable support, and common configuration applied to all environments.
Note that I'm part of the Config team.

How to retrieve appid when deployed to cloudbees?

In the CloudBees wiki, this page explains how to add a configuration parameter for an app deployment, using cloudbees-web.xml.
But is the content of:
<appid>APP_ID</appid>
injected as well? How can I retrieve this value from my application's code?
My preference is to avoid coding an application to contain explicit references to the container within which it runs, so I would favour using techniques that do not tie your code to CloudBees (a.k.a. us).
Thus I would use a container-specific descriptor file that configures a context parameter; your application then just reads the context parameter and uses it directly.
There are two techniques for doing this:
Application Environments - personally I love this way... though if you want to deploy the application to your own test environment that you have just spun up yourself, your cloudbees-web.xml will likely be missing the required environment definition... so it is better to use the newer
Configuration Parameters - so that when you need your own test instance you just define the configuration parameters for that test environment and then deploy the exact same artifact to that instance... this also prevents the issue of deploying to the test instance with the production environment turned on.
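Whichever of the two mechanisms is used, the result the answer describes is an ordinary servlet context parameter, i.e. the same thing a plain web.xml fragment like the one below would give you (the parameter name is invented), and the application reads it with ServletContext.getInitParameter("application.name") without knowing anything about CloudBees:

<context-param>
  <param-name>application.name</param-name>
  <param-value>my-app</param-value>
</context-param>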
I am sure one of the RUN# team may well have some other trick, such as a System property that tells you the app id... but keep in mind that when running locally, e.g. using a local jetty/tomcat/bees:run container, your code will then blow up!