In Hudson, how do I set multiple environment variables given a single parameter?

I want to set up a parameterized build in Hudson that only takes one parameter -- the type of build to create (QA, Stage, Production). However, each of those builds requires several different environment variables to be set. Something like this (batch syntax):
if "%CONFIG%"=="QA" (
  set "SVN_PATH=branches/dev"
  set "BUILD_CONFIG=Debug"
  rem more environment variables...
) else if "%CONFIG%"=="Production" (
  set "SVN_PATH=trunk"
  set "BUILD_CONFIG=Release"
  rem more environment variables...
) else (
  rem more build configurations...
)
There are myriad steps in our build -- pull from subversion, then run a combination of MSBuild commands, DOS Batch files, and Powershell scripts.
We typically schedule our builds from the Hudson interface, and I want the parameter entry to be as idiot-proof as possible.
Is there a way to do this?

Since you do so many things for a release, how about scripting all the steps outside of Hudson? You can use Ant, batch files, or whatever scripting language you prefer. Put that script in your SCM to get revision control.
Pros:
You get the logic out of Hudson, since Hudson only calls the script.
You don't need to create environment variables that have to persist between shells (global variables).
You can use config/properties files for each environment, which you should also put into version control.
You can run these scripts outside of Hudson if you need to.
The Hudson job configuration gets much simpler.
You avoid the side effects of changing global environment variables. You should always try to create jobs that don't change any global settings, because that invariably causes trouble.
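The per-environment properties idea can be sketched as a single wrapper script that Hudson calls; file names and variable names here are hypothetical:

```shell
#!/bin/sh
# release.sh -- single entry point that Hudson would call, e.g. "release.sh QA".
set -e

# For demonstration, create a per-environment properties file as it would
# live in version control (config/QA.properties).
mkdir -p config
cat > config/QA.properties <<'EOF'
SVN_PATH=branches/dev
BUILD_CONFIG=Debug
EOF

CONFIG="${1:-QA}"
PROPS="config/${CONFIG}.properties"

if [ ! -f "$PROPS" ]; then
    echo "Unknown build configuration: ${CONFIG}" >&2
    exit 1
fi

# Load SVN_PATH, BUILD_CONFIG, etc. for this environment.
. "./$PROPS"

echo "Building ${BUILD_CONFIG} from ${SVN_PATH}"
# svn checkout, MSBuild, batch and PowerShell steps would follow here.
```

Because the script lives in the repository, the same command works on a developer machine and in the Hudson job, and adding an environment is just adding another properties file.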

Hudson supports build parameters, which are build-time variables that Hudson makes available as environment variables to your build steps (see the Hudson Parameterized Build wiki page). In your job, you can have one choice parameter CONFIG that the user enters. (To set up the parameter, under your job's configuration, select This build is parameterized.)
Then, you can basically write what you've coded in pseudocode at the start of your build step. Or since you have multiple steps, put the environment setup in a file that's referenced from your existing build scripts. (Depending on your shell, there are various tricks for exporting variables set in a script to the parent environment.) Putting the setup in a file that's integrated with your existing build scripts will make it much easier to manage and test (i.e. it's testable outside of Hudson) as well as give you an easy way to invoke your scripts locally.
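As a sketch of that shared setup file (names are hypothetical), each build script can source one file that maps the CONFIG parameter to the rest of the variables:

```shell
#!/bin/sh
# setenv.sh -- shared environment setup, sourced by each build script.
# CONFIG comes from the Hudson build parameter; defaults to QA here for demo.
CONFIG="${CONFIG:-QA}"

case "$CONFIG" in
    QA)
        export SVN_PATH="branches/dev"
        export BUILD_CONFIG="Debug"
        ;;
    Production)
        export SVN_PATH="trunk"
        export BUILD_CONFIG="Release"
        ;;
    *)
        echo "Unknown CONFIG: $CONFIG" >&2
        exit 1
        ;;
esac

echo "CONFIG=$CONFIG SVN_PATH=$SVN_PATH BUILD_CONFIG=$BUILD_CONFIG"
```

A build step would start with `. ./setenv.sh` so the variables are set in its own shell; exporting them makes them visible to child processes such as MSBuild.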
You might also consider separating your unified build into separate jobs that perform each of the configurations that you've described. Even though they may reference a central build script, the CONFIG types that you've defined seem like they should be distinct actions and deserve separate jobs.

Related

Azure Devops Release for application that remains the same except for appsettings.json

I am creating an Azure DevOps build pipeline and release. The release has a staging environment that uses a deployment group with 3 servers; in production it can have 50+ servers. The application will be the same across all the servers except for the appsettings file, which contains the DB connections and location/server-specific variables. I have looked into ways to manipulate this file per server on release, but all I have come across are variable substitutions in the release, which only help when you need to switch values between dev, staging, and prod. Is there a good way to manipulate this file per server in a deployment group rather than using 50+ stages/tags, or a better way to set up my pipeline and release?
Is there a good way to manipulate this file per server in a deployment group rather than 50+ stages/tags?
I'm afraid that, as far as I know, this isn't supported yet. If you host your app on an Azure website, Azure has a new feature that can achieve this goal.
If you host the app on your own servers, the better deployment approach in this scenario is "build once, deploy many". In other words, build the project in the build pipeline, and configure the corresponding appsettings.json file at each specific stage.
To improve the maintainability of the release and simplify the configuration structure, you can make use of task groups and variable groups. (Please keep using variable substitution in the release.)
Encapsulate a sequence of reusable tasks into a task group, then use this template in every deployment group job. Note that you can make the reusable parameters part of the template. Just abstract the app setting information and store it as variables in a corresponding variable group.
From then on, whenever you add a new server, you only need to save the corresponding app setting parameters into the variable group you created. In the release pipeline, you only need to add the task group and link the previously created variable group to the specified stage. Execute the release pipeline, and everything should go as expected.
For later maintenance, you just need to modify the basic configuration of the deploy task once, and it will be applied to all stages. When you need to modify a server's app setting configuration, you can do so by opening the corresponding variable group.
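The "build once, deploy many" idea can be sketched as a token-replacement step that each deployment group job runs against the single built artifact. The token syntax, file contents, and variable names below are hypothetical; in Azure DevOps the values would come from the stage's variable group:

```shell
#!/bin/sh
# Sketch: the build produces one appsettings.json with tokens; each server's
# deployment job substitutes its own values.
set -e

cat > appsettings.json <<'EOF'
{
  "ConnectionStrings": {
    "Default": "Server=#{DbServer}#;Database=#{DbName}#;"
  }
}
EOF

# These would be supplied per server/stage by the variable group.
DbServer="sql-prod-01"
DbName="AppDb"

# Crude textual substitution; a dedicated replace-tokens task or a
# JSON-aware tool would be more robust.
sed -i.bak \
    -e "s/#{DbServer}#/${DbServer}/" \
    -e "s/#{DbName}#/${DbName}/" \
    appsettings.json

cat appsettings.json
```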

Store Teamcity Build Steps in Branch

Whenever we change a Teamcity build definition (e.g. to use a new version of the NUnit console runner), it breaks the builds if we need to hotfix an earlier version of our code. To address this, I'd like to store the build definition alongside the code, so that it executes the build as it looked at the time the hotfix is branched from.
I've been looking into exporting the TC build definitions as Kotlin scripts, but I have a couple of issues with them:
Versioned Settings is controlled by a higher-level project I don't have access to, and is stored in a separate repository (and not as Kotlin).
It seems like it'll just look at your master branch and import settings from there, but I need the build to use whatever is on my hotfix branch at the point of execution.
In the past, we've solved this with Fake scripts, but the experience tends not to be very good for identifying problems.
What's the best way to execute builds on TeamCity to meet the following requirements?
Output as close to normal build steps as possible
Build defined within the branch/code under execution
Easy to maintain/manage
Test output should remain the same as in normal TC builds
You can have the TeamCity build configuration stored automatically in your repo, so changes made to the configuration will be committed to master while leaving the configuration on your branch intact. If no usable configuration is found on the branch, the configuration on master will be used.
https://www.jetbrains.com/help/teamcity/storing-project-settings-in-version-control.html

Feedback requested for SSIS Master package design - Running a bunch of Sub-Packages

Overall, I am looking for feedback regarding two different design options of running a master package.
I have one package that Agent calls that runs a bunch of packages that process data (I think we are up to about 50 now).
The original design was to group packages into smaller chunks called directorates, which call the actual packages.
A few problems I have seen (and experienced) with this approach:
1. Every package has to open, even if it is unnecessary to run (i.e. no file present)
2. #1 adds considerable time for the process to complete
3. It does run in parallel, for sure
So I developed a new approach which will only run the packages that have the necessary files and logs the attempt if not. It is so much cleaner and you don't need all the file connections for each package to run since you are iterating through them.
I am not sure it runs in parallel (I actually doubt it).
The new approach uses a dataflow that populates the ADO object being iterated in the Foreach loop, which determines the files to be processed.
Note: Usually in DEV environment there are not many files to be processed, however, when deploying to TEST and PROD there will be most files present to be processed.
Can I get some feedback on these two different approaches?
Anyone who provides productive feedback will receive upvotes!
I would go with a modified first approach, i.e. inside each package, use a Script Task to check whether the files are present in the destination or not.
For instance:
Create a Script Task and a variable.
Inside the Script Task, write code that sets the variable: if the file is found, flag it as true, else flag it as false.
Now constrain the execution of the Data Flow Task with a precedence constraint on this flag.
The only con is that you'll have to make changes in 50 packages, but this is a one-time activity. Your parallel execution will remain intact.
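The actual Script Task would be written in C#, but the flag logic amounts to a simple file-existence check, roughly like this shell sketch (the file name is hypothetical):

```shell
#!/bin/sh
# Equivalent of the Script Task logic: set a flag that the downstream step
# (the Data Flow Task's precedence constraint) can test.
SOURCE_FILE="orders.csv"   # hypothetical incoming file

# Before the file arrives: flag is false.
if [ -f "$SOURCE_FILE" ]; then FILE_EXISTS=true; else FILE_EXISTS=false; fi
echo "before: FILE_EXISTS=$FILE_EXISTS"

# Simulate the file arriving, then re-check: flag becomes true.
touch "$SOURCE_FILE"
if [ -f "$SOURCE_FILE" ]; then FILE_EXISTS=true; else FILE_EXISTS=false; fi
echo "after: FILE_EXISTS=$FILE_EXISTS"
```

In the SSIS package, the precedence constraint between the Script Task and the Data Flow Task would evaluate this boolean variable, so packages with no file exit almost immediately.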
I would go with the 2nd approach, as it's cleaner and easier to debug.
Here are some suggestions to improve the 2nd approach:
Create a control table with all package names, an Enable/Disable flag, and a FileAvailable flag.
Create a poll package that goes through the files and sets the file and package flags accordingly.
Loop through this control table and run only those packages that are enabled and have a file available.

Simultaneous instances of the same Hudson/Jenkins job

I would like a way for individual users to send a repo path to a Hudson server and have the server start a build of that repo. I don't want to leave behind a trail of dynamically created job configurations. I'd like to start multiple simultaneous instances of the same job. Obviously this requires that the workspaces be different for the different instances. I believe this isn't possible using any of the current extensions. I'm open to different approaches to what I'm trying to accomplish.
I just want the hudson server to be able to receive requests for builds from outside sources, and start them as long as there are free executors. I want the build configuration to be the same for all the builds except the location of the repo. I don't want to have dozens of identical jobs sitting around with automatically generated names.
Is there anyone out there using Hudson or Jenkins for something like this? How do you set it up? I guess with enough scripting I could dynamically create the necessary job configuration through the CLI API from a script, and then destroy it when it's done. But I want to keep the artifacts around, so destroying the job when it's done running is an issue. I really don't want to write and maintain my own extension.
This should be pretty straightforward to do with Jenkins without requiring any plugins, though it depends on the type of SCM that you use.
It's worth upgrading from Hudson in any case; there have certainly been improvements to the features required to support your use case in the many releases since becoming Jenkins.
You want to pass the repo path as a parameter to your build, so you should select the "This build is parameterized" option in the build config. There you can add a string parameter called REPO_PATH or similar.
Next, where you specify where code is checked-out from, replace the path with ${REPO_PATH}.
If you are checking out the code — or otherwise need access to the repo path — from a script, the variable will automatically be added to your environment, so you can refer to ${REPO_PATH} from your shell script or Ant file.
At this point, when pressing Build Now, you will be prompted to enter a repo path before the build starts. As documented on the Jenkins Parameterized Build wiki page, you can also call the buildWithParameters URL to start a build directly with the desired parameter, e.g. http://server/job/myjob/buildWithParameters?REPO_PATH=foo
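From a script, the trigger URL can be assembled like this; the server name, job name, and credentials are placeholders for your own Jenkins instance:

```shell
#!/bin/sh
# Sketch of triggering the parameterized job remotely.
JENKINS_URL="http://server"
JOB="myjob"
REPO_PATH="branches/feature-x"

TRIGGER_URL="${JENKINS_URL}/job/${JOB}/buildWithParameters?REPO_PATH=${REPO_PATH}"
echo "$TRIGGER_URL"

# Uncomment to actually fire the build (requires a user with build permission):
# curl -X POST --user user:apitoken "$TRIGGER_URL"
```

Note that repo paths containing special characters would need URL-encoding before being appended to the query string.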
Finally, if you want builds to execute concurrently, Jenkins can manage this for you by creating temporary workspaces for concurrent builds. Just enable the option "Execute concurrent builds if necessary" in your job config.
The artifacts will be available, the same as for any other Jenkins build. Though you probably want to manage how many recent artifacts are kept; this can be done by checking "Discard Old Builds", and then under Advanced… you can enter a value for "Max # of builds to keep with artifacts".

Modifying Saved Artifacts On a Particular Jenkins Build for Deployment

We have a .NET Jenkins installation that builds a few .NET apps. These apps include a bunch of *.exe and *.exe.config files. Right now, I save the app as a zipfile containing all of the *.exe files, the required DLLs and *.xml files, and the default *.exe.config files. The default *.exe.config files get their values from what is in the Subversion repository and are tuned for the production environment.
The *.exe.config files contain the database name, the database server, the name of the server, etc. These are correct for the production environment, but not for UAT, QA, or developer testing.
What I'd like to do is have some sort of post-build task where a user can specify the particular build, and the values for those particular parameters that vary from environment to environment. If I got that, I could run an Nant or Ant task that unzips the zipfile, munges the *.exe.config file and either deploy it (my ultimate goal), or at least zip that up and put it somewhere the user can access it.
I know there's a parameterized build, and I know there are batch tasks, but I need a combination of the two. Is that possible?
It's not as elegant, but I think you can implement what you want as a separate build. You need:
A Parameterized Build (which you know about)
A way to access artifacts from another build
Given these pieces, you should be able to create a Parameterized Build that does exactly what you describe: grabs the build artifact, munges the configuration and provides it somewhere for the user. Of course the devil is in the details, e.g. it may be tricky to make it easy for a user to "select the right build".
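The "munge the configuration" step of such a parameterized build could look roughly like this; the config contents, file names, and parameter names are hypothetical, and the build parameters (DB_SERVER, DB_NAME) would be supplied by the user when triggering the build:

```shell
#!/bin/sh
# Sketch: rewrite environment-specific values in a *.exe.config pulled from
# a saved build artifact, then repackage it for the user.
set -e

# Stand-in for the production-tuned config extracted from the artifact zip.
cat > MyApp.exe.config <<'EOF'
<configuration>
  <appSettings>
    <add key="DatabaseServer" value="prod-sql" />
    <add key="DatabaseName" value="ProdDb" />
  </appSettings>
</configuration>
EOF

DB_SERVER="${DB_SERVER:-uat-sql}"
DB_NAME="${DB_NAME:-UatDb}"

# Crude textual substitution; an XML-aware tool would be more robust.
sed -i.bak \
    -e "s/value=\"prod-sql\"/value=\"${DB_SERVER}\"/" \
    -e "s/value=\"ProdDb\"/value=\"${DB_NAME}\"/" \
    MyApp.exe.config

cat MyApp.exe.config
# A real job would then re-zip the result and archive it as this build's artifact.
```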
Update (now that I've learned about Batch Tasks - thanks!): I don't see a way to parameterize the batch task as you asked. I'm guessing that the combination of variables makes it prohibitive to define a bunch of different batch tasks. You could define a couple of batch tasks for common release and testing configurations and also provide the "munger" program for more user-specific configuration.