I have a bunch of jobs, some of which can run in parallel. However, I also have a consolidation job that has to wait until all of the parallel jobs are completed before performing the consolidation.
E.g.:
Job A -> Job B, Job C
Job B -> Job D, Job E
Job C -> Job F
Job F, Job E -> Job G
After Job A is done, Job B and Job C are to be triggered. Job D and Job E will be triggered after Job B is completed, and similarly Job F is triggered after Job C is completed. Job G is to be triggered after both Job F and Job E are completed.
I notice that Job G is triggered twice, once after the completion of each of the dependent jobs (Job F and Job E). Is there any way I can ensure that Job G runs only once, after both Job F and Job E have completed?
One of the better approaches to CI is developing a build process that mirrors the developer's build cycle as closely as possible. You haven't mentioned any constraints other than build parallelism, and with that in mind, I'd recommend pulling the hierarchy that you've created in Hudson into an Ant, NAnt, MSBuild, or other script that can reasonably be run on a developer's workstation. Then configure Hudson to use that script as its project script. That doesn't mean you can't keep the other projects as independent projects in Hudson; it just means that the final project knows how to build itself from all the others.
Lastly, unless the build is long-lived, I can't see an issue with letting Hudson run the job twice.
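For illustration, here is a minimal sketch of what such a script could look like if each job were a plain shell step (the build_*.sh names are placeholders, not anything from your setup). It uses coarse wait barriers rather than your exact dependency graph, but it guarantees the consolidation step runs exactly once:

    #!/bin/sh
    # Hypothetical top-level build script; each build_*.sh stands in for one Hudson job.
    ./build_A.sh
    ./build_B.sh &            # B and C run in parallel once A is done
    ./build_C.sh &
    wait                      # barrier: B and C have finished
    ./build_D.sh &
    ./build_E.sh &
    ./build_F.sh &
    wait                      # barrier: D, E and F have finished
    ./build_G.sh              # the consolidation runs exactly once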
I have a GitHub Actions workflow that contains the following jobs (dependent on each other, running in sequence):
Compile and deploy.
Run tests.
Job 1 compiles a C# solution. One project is an ASP.NET Core application that gets deployed to Azure. Another is an xUnit test project that will call the deployed Azure application.
The reason I want them to be two separate jobs is to be able to re-run them independently. Deployment is slow, so if deployment succeeds but tests fail, I want to be able to re-run the tests without re-running the deployment.
Note that in my sequence of jobs above, job 1 creates an artifact on disk that job 2 needs, namely the compiled assembly of the test project. This works as long as they run in sequence, and sometimes it also works if I re-run job 2 much later. But I suspect that my setup is not sane and has just accidentally worked for me so far.
If, for example, I have two feature branches that both trigger the action, I might have a race condition on the test assembly, and one run of the action might accidentally be running a DLL that was compiled from code taken from the other feature branch.
I have read (see this SO thread) that I can use upload-artifact and download-artifact to transfer artifacts between jobs. But my test assemblies have dependencies, so I would need to either upload many artifacts or zip the whole folder, upload it, download it, and unzip it. That is inconvenient.
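Roughly, I imagine that option would look something like this inside the workflow steps (the paths and the DLL name are placeholders for my actual layout):

    # job 1, after compiling: pack everything the tests need into one archive
    zip -r test-bin.zip tests/MyTests/bin/Release/
    # hand test-bin.zip to actions/upload-artifact
    # job 2, after actions/download-artifact:
    unzip test-bin.zip
    dotnet vstest tests/MyTests/bin/Release/MyTests.dll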
Another option would be to check out the source code again and recompile the test assembly at the beginning of job 2. But that might be slow.
What is the best option here? Are there other options?
We have an Azure Pipeline that merges into a repository whose code converts .json files representing customer orders into C# objects. Naturally, if the design or naming of these C# objects ever changes, the old orders become unusable, so we run a script 'Migrating' all of these outdated .jsons to conform to the new model.
Our current pipeline that merges dev into production Migrates our .jsons, and we run a PowerShell unit test script after the pipeline's completion to ensure that the .jsons have successfully Migrated. We'd like to place this test into the pipeline itself, but there are two conditions we'd prefer to meet:
If the test fails, not only abort the merge, but also revert the .jsons to their un-Migrated versions.
Give us the option to continue the merge anyway, in the event that the website encounters an error so critical and urgent we are willing to bear the loss of a few quotes.
Are these conditions feasible?
Based on your description, you may consider using Build validation as one of your branch policies and settings.
Basically, let's assume your production code is in the Production branch; you then create a Dev branch and push your new commits to it. With a Build validation policy set on the Production branch, the pull request cannot be completed if the validation build, which contains the unit test, fails. Therefore the new code from the Dev branch will not be merged into the Production branch.
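As a rough sketch, the validation build could run your existing PowerShell migration test as a step along these lines (the script path is only a placeholder); a non-zero exit code fails the build and therefore blocks PR completion:

    # run the existing migration test; a non-zero exit code fails the
    # validation build, so the pull request cannot be completed
    pwsh -File ./tests/Test-JsonMigration.ps1 || exit 1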
In the meantime, other branch policies may also help you with version control. The following documents may help as well:
Require a minimum number of reviewers
Check for linked work items
Check for comment resolution
Limit merge types
I'm new to Mercurial, and I have problems with the solution we're working to implement in my company. I work in a lab with a strict security environment, and the production Mercurial server is on an isolated network. Everyone has two computers: one to work in the "real world" and another to work in the isolated, secure environment.
The problem is that we have other labs distributed around the world, and in some cases two or more labs need to work together on a project. Every lab has an HG server to manage its own projects locally, but I'm not sure whether our method of syncing common projects is the best solution. Currently, we use a bundle to send the new changesets from one lab to another. My question is how good this method is, because the procedure is a little complicated. It goes more or less like this:
In Lab B: hg pull and hg update, to make sure the local folder is at the latest version.
Ask the other lab for its hg log, to see what the last common changeset is.
In Lab A: hg pull and hg update, to make sure the local folder is at the latest version.
In Lab A: make a bundle with "hg bundle --base XX project.bundle" (where XX is the last common changeset).
Send it to Lab B (with a complicated method dictated by the security rules: encrypted files, encrypted drives, secure erasure, etc.).
In Lab B: run "hg unbundle projectYY.bundle" in the local folder.
This process creates two heads, which sometimes forces us to merge.
Once the changesets from Lab A are correctly integrated at Lab B, we need to repeat the process in the opposite direction, to bring the project's evolution in Lab B back to Lab A.
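In plain commands, one round trip looks roughly like this (the names are only placeholders):

    # -- in Lab A --
    hg pull && hg update
    hg bundle --base XX project.bundle       # XX = last common changeset
    # encrypt project.bundle and ship it to Lab B over the secure channel
    # -- in Lab B --
    hg pull && hg update
    hg unbundle project.bundle
    hg merge && hg commit -m "merge Lab A changes"   # when two heads appear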
Could anyone point me toward a better way to handle this? Does anyone have a better solution?
Thanks a lot for your help.
Bundles are the right vehicle for propagating changes without a direct connection. But you can simplify the bundle-building process by modeling communication locally:
In Lab A, maintain repoA (the central repo for local use), as well as repoB, which represents the state of the repository in lab B. Lab B has a complementary set-up.
You can use this dual set-up to model the relationship between the labs as if you had a direct connection, but changeset sharing proceeds via bundles instead of push/pull.
From the perspective of Lab A: Update repoA the regular way, but update repoB only with bundles that you receive from Lab B and bundles (or changesets) that you are sending to Lab B.
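For example, the initial set-up in lab A could be as simple as this (the server URL and paths are placeholders):

    # the repo you actually work in (your existing central repo)
    hg clone ssh://labA-server/project repoA
    # a plain local clone that only tracks what lab B is known to have
    hg clone repoA repoB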
More specifically (again from the perspective of Lab A):
1. In the beginning the repos are synchronized, but as development progresses, changes are committed only to repoA.
2. When it's time to bring lab B up to speed, just go to repoA and run hg outgoing path/to/repoB. You now know what to bundle without having to request and study lab B's logs. In fact, hg bundle bundlename.bzip repoB will bundle the right changesets for you (see the command sketch below).
3. Encrypt and send off your bundle.
4. You can assume that the bundle will be integrated into Lab B's home repo, so update your local repoB as well, either by pushing directly or (for assured consistency) by unbundling (importing) the bundle that was mailed off.
5. When lab B receives the bundle, they will import it into their own copy of repoA; it is now updated to the same state as repoA in lab A. Lab B can now push or pull changes into their own repoB and merge them (in repoB) with their own unshared changesets. This will generate one or more merge changesets, which are handled just like any other check-ins to lab B's repoB.
And that's that. When lab B sends a bundle back to lab A, it will use the same process, steps 1 to 5. Everything stays synchronized just as it would if the repositories were directly connected. As always, it pays to synchronize frequently so as to avoid diverging too far and encountering merge conflicts.
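Put together, the lab A side of one sync round might look like this (paths and file names are placeholders):

    cd repoA
    hg outgoing ../repoB                     # what lab B does not have yet
    hg bundle changes.bundle ../repoB        # bundle exactly those changesets
    # encrypt changes.bundle and send it to lab B
    cd ../repoB
    hg unbundle ../repoA/changes.bundle      # keep the local model of lab B current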
In fact you have more than two labs. The approaches to keeping them synchronized are the same as if you had a direct connection: Do you want a "star topology" with a central server that is the only node the other labs communicate with directly? Then each lab only needs a local copy of this server. Do you need lots of bilateral communication before some work is shared with everyone? Then keep a local model of every lab you want to exchange changesets with.
If you have no direct network communication between the two Mercurial repositories, then the method you describe seems like the easiest way to sync them.
You could probably save a bit of the process boilerplate around figuring out which new changesets need bundling; how exactly depends on your setup.
For one, you don't need a working copy in order to create the bundles; having the repository itself is enough.
Also, if you know the date and time of the last sync, you can simply bundle all changesets added since that time, using an appropriate revset, e.g. all revisions since 30 March of this year: hg log -r 'date(">2015-03-30")'. That way you can skip a lengthy manual review process.
If your repository is not too big (and thus fits on the media you use for the exchange), simply copy it there in its entirety and do a local pull from that exchange disk to sync, again skipping the review process.
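For instance (the paths are placeholders):

    # in lab A: put a full copy of the repository on the exchange medium
    hg clone -U /path/to/project /media/exchange/project
    # in lab B: pull from the exchange medium into the existing local repo
    cd /path/to/project && hg pull /media/exchange/project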
Of course you will not be able to avoid the merges; they are the price you pay when several people work on the same thing at the same time, each committing to their own repo.
I want my TestNG classes to run all the time and in parallel. What I mean is that until I stop the execution from Jenkins, I want my tests to keep running continuously, and the test classes should run in parallel. Is this possible?
In Jenkins you can't, because it is not designed that way. It's not designed that way because, despite your client's wish, it is not logical to do so. What you can do is set Jenkins to build and run the test profiles when someone checks code into trunk, and then again every few hours on a cycle. There is no value in running tests in a continuous loop. You can also create branch builds for certain feature branches; these are useful for giving immediate feedback to devs before they merge back to trunk.
You can run your tests concurrently if they don't share state (which they shouldn't). If you are using Maven and want your tests to execute in a highly parallel fashion, you can configure the Maven Surefire plugin to fork and execute parallel tests. Or, if you are using Gradle, set options.fork accordingly.
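For example, the same Surefire parallel settings can also be passed on the command line (the values here are only examples, not recommendations):

    # fork two JVMs and run test classes in parallel within each fork
    mvn test -DforkCount=2 -Dparallel=classes -DthreadCount=4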
I have a Jenkins build job with a Mercurial trigger on the default branch, which works fine for building "release candidates". This job then starts a smoke test job.
We use a branch-per-feature branching scheme, so that at any given time there could be up to a dozen different active branches in Mercurial (but the active branches change regularly).
I would like a Jenkins job triggered by changes to any branch, that will then build and run the smoke tests for all branches that need updating. Each time we do a build, we should create artifacts named to match the branch.
I saw a suggestion in another answer of using "tip" instead of the branch name in the Mercurial trigger. This is a possibility, but I think it would fall into the "mostly works" category: the trigger is polling, so if changes to more than one branch occur within the polling interval, a branch update could be missed.
I could create a new job each time a branch is created, but due to the dynamic nature of our branches, that would be a lot of ongoing work.
If you decide to go with the job-per-branch approach, the following tools can make the task a bit more manageable:
jenkins-build-per-branch (supports git)
jenkins-autojobs (supports git, mercurial and svn)
I think you'll need to customize: a top-level polling job (tuned to tip) runs a custom script that determines which branches have changed or been added. It then uses the Jenkins API to start a job parameterized by the branch name. That parameter can be used in your job to customize everything you need by branch name (including the artifacts).
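A rough sketch of what that custom script could look like, assuming a downstream parameterized job (the job name, credentials, and state file are placeholders):

    # compare the current branch heads with the previous poll and trigger
    # a parameterized build for every branch whose tip has changed
    hg pull
    hg branches -T '{branch} {node}\n' | sort > branches.now
    touch branches.last                      # first run: nothing to compare against
    comm -13 branches.last branches.now | while read -r branch node; do
        curl -s -X POST --user "$JENKINS_USER:$JENKINS_TOKEN" \
            "$JENKINS_URL/job/feature-build/buildWithParameters?BRANCH=$branch"
    done
    mv branches.now branches.last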