Running two GitLab pipelines with the same runner

I have two pipelines that I want to run with the same runner. Is that possible?
My single runner is installed on a Linux virtual machine, and I want to use it to run all my pipelines.

If the pipelines belong to different projects, you will need to make sure the runner is accessible to each project.
Depending on the level of control you want, you can use GitLab CI's tags keyword; this lets you determine which runner handles which pipeline.
If you want to run the jobs in parallel, you will also need to make sure the runner is configured for concurrent execution (the concurrent setting in its config.toml) and that the jobs are in the same stage within the pipelines.
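For example, a minimal .gitlab-ci.yml sketch, assuming the runner was registered with a tag named my-shared-runner (the job names and the tag are illustrative):

build-job:
  stage: build
  tags:
    - my-shared-runner
  script:
    - make build

test-job:
  stage: test
  tags:
    - my-shared-runner
  script:
    - make test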

The only way to do this at the moment is to define one tag for one runner only and use this tag for your project.
This way everything is run on this single runner.
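A sketch of what that can look like, assuming a reasonably recent GitLab and a runner registered with the tag my-runner; a top-level default section keeps the tag in one place:

default:
  tags:
    - my-runner

build:
  script:
    - make build

test:
  script:
    - make test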
This has of course the disadvantage that the load is not spread to different runners, so be careful.
You could improve this solution by having a job determine a currently free runner's tag and then create a child pipeline that uses it.
There are active issues about this problem in GitLab's issue tracker, and a forum entry about it as well.

Related

GitHub Actions: using two different kinds of self-hosted runners

I have a github repository which is doing CI/CD using github actions that need more than what the github-hosted runners can do. In multiple ways. For some tasks, we need to test CUDA code on GPUs. For some other tasks, we need lots of CPU cores and local disk.
Is it possible to route github actions to different self-hosted runners based on the task? Some tasks go to the GPU workers, and others to the big CPU workers? The docs imply this might be possible using "runner groups" but I honestly can't tell if this is something that A) can work if I figure it out B) will only work if I upgrade my paid github account to something pricier (even though it says it's "enterprise" already) or C) can never work.
When I try to set up a runner group following the docs, I don't see the UI elements that the docs describe. So maybe my account isn't expensive enough yet?
But I also don't see any way that I would route a task to a specific runner group. To use the self-hosted runners today, I just say
gpu-test-job:
  runs-on: self-hosted
instead of
standard-test-job:
  runs-on: ubuntu-22.04
and I'm not sure how I would even specify which runner group (or other routing mechanism) to get it to a specific kind of self-hosted runner, if that's even a thing. I'd need to specify something like:
big-cpu-job:
  runs-on: self-hosted
  self-hosted-runner-group: big-cpu # is this even a thing?
It looks like you won't be able to utilize runner groups on a personal account, but that's not a problem!
Labels can be added to self-hosted runners. Those labels can be referenced in the runs-on value (as an array) to specify which self-hosted runner(s) the job should go to.
You would run ./config.sh like this (you can pass in as many comma-separated labels as you like):
./config.sh --labels big-cpu
and your job would use an array in the runs-on field to make sure it's selecting a self-hosted runner that also has the big-cpu label:
big-cpu-job:
  runs-on: [self-hosted, big-cpu]
  ...
Note: If you wanted to "reserve" the big-cpu runners for the jobs that need it, then you'd use a separate label, regular, for example, on the other runners' ./config.sh and use that in the runs-on for the jobs that don't need the specialized runner.
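Putting it together, a sketch of a workflow that routes jobs to differently labelled self-hosted runners (the gpu label, the workflow name, and the commands are assumptions for illustration):

name: ci
on: [push]

jobs:
  big-cpu-job:
    runs-on: [self-hosted, big-cpu]
    steps:
      - uses: actions/checkout@v4
      # Assumed build/test command; needs many cores and local disk
      - run: make -j"$(nproc)" test

  gpu-test-job:
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v4
      # Assumed test script; needs a CUDA-capable worker
      - run: ./run-cuda-tests.sh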

Number of configurations in a project for build and install

In our project, we currently have two different configurations. The first one builds the assemblies. The other packages everything for InstallShield (including moving files to the right directories, etc.).
Now, we can't agree on whether it's better to move all the build steps into a single configuration and run it as a whole chain, or to keep the build process separate from creating the installation package.
Googling turns up guides on how to do either, but not which way is preferable (our confusion is mainly about how to order the configurations). We'll be using a few PowerShell steps to move a number of files between directories due to certain local considerations. The total number of steps will be five or fewer.
The suggestion I have is the following three configurations. They run separately and independently, and their build steps overlap: each configuration's steps are a superset of the previous one's.
Configuration Build.
Configuration Build and test.
Configuration Build, test and package.
The main point of my suggestion is that, for example, the step that compiles the software is implemented in each configuration (as opposed to reusing the artifacts from an independent run of another configuration).
I would argue like this:
If you ever need to perform just one of the two steps, then leave them as separate steps.
This gives you the flexibility to run one, or the other, or both. For example, could it be that you need to just build the solution, but not create the final installation package, say for local testing?
However, if you never use one of the steps separately (you always run both together), then I'd probably just merge them into one; having two separate steps doesn't make much sense to me.

How can I prevent two Jenkins projects/builds from running concurrently?

I have two Jenkins projects that share a database. They must not be run simultaneously. Strictly speaking, there is no particular dependency between them beyond non concurrency, but at the moment I partially manage this constraint by running one "downstream" of the other. This works most of the time, but not always. If a source control change happens while the second is running, the first will start up again, and they'll be running concurrently and probably both fail miserably.
This is similar, but not identical, to How to prevent certain Jenkins jobs from running simultaneously? The difference is that I don't have a "number of threads" problem -- I'm already only running at most one thread of any given project at any one time, even in the case where two (different-project) builds stomp each other. This seems to rule out all the several suggestions in that thread.
The Locks and Latches plugin should resolve your problem. Create a lock and have both jobs use the same lock. That will prevent the jobs from running concurrently.
Install the plugin in "Manage Jenkins: Manage Plugins."
Define (provide a name for) your lock(s) in "Manage Jenkins: Configure System."
For each job you want to participate in the exclusion,
in "[job name]: Configure: Build Environment," check "Locks,"
and pick your lock name from the drop-down list.
The Lockable Resources Plugin. Simple and working well for me as of May 2016.
Install the plugin.
In Manage Jenkins > Configure System go to Lockable Resources Manager.
Select Add Lockable Resource.
Enter values for field: Name and hit Save.
Warning: Do not enter spaces in Name field.
In Jenkins > job_name > Configure > General,
Select checkbox: This build requires lockable resources.
Enter name or names in value for field: Resources.
Start a build.
Under build #number select Locked Resources.
You should see something like: This build has locked the following resources: resource_name - resource_description.
Start a different build which uses the same resource.
You will see Build Queue in Jenkins status/menu showing job name.
Hover text shows Started by, Waiting for resources resources_list, Waiting for time.
(also resource tags/labels can be used)
Note: a screenshot of the Job Configuration page was attached here, as some users report that "This build requires lockable resources" is not visible; when the checkbox is not selected you should see only the unchecked "This build requires lockable resources" entry.
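If you use Pipeline jobs instead of freestyle jobs, the same plugin provides a lock step; a minimal sketch, where the resource name shared-db and the test script are assumptions:

pipeline {
    agent any
    stages {
        stage('Integration tests') {
            steps {
                // Blocks until the resource is free, then holds it for the body
                lock(resource: 'shared-db') {
                    sh './run-db-tests.sh'
                }
            }
        }
    }
}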
EDIT: Below information is effective as of 04/10/2014
Exclusion plugin, https://wiki.jenkins-ci.org/display/JENKINS/Exclusion-Plugin. Very useful if a few builds use the same resource, e.g. a test database. All you need to do is update the configuration of all jobs using this resource, and as a result they will never run in parallel but will wait for the others to complete.
Taken from: http://www.kaczanowscy.pl/tomek/2012-07/jenkins-plugins-part-iii-towards-continuous-delivery
This plugin does block two or more jobs from running in parallel.
To test, do this for job1
Configure
Under Build Environment check "Add resource to manage exclusion."
Then Add -> New Resource -> Name -> lock
Under Build -> Add build step
Critical Block Start
Add build step -> Add whatever you want to add (add sleep 15 to make sure the job runs long enough to check concurrency).
Add build step -> Critical Block End
Repeat the above steps for job2, make sure you use the same lock name 'lock'.
Manually build both jobs concurrently.
Monitor the run progress under Jenkins -> Exclusion administration.
1 December 2021
Use the Build Blocker plugin; install it from Manage Jenkins > Plugin Manager.
For example, you have two pipelines React-build and React-tests:
Go to React-build -> Configure -> Block build.
If you don't want React-tests to run concurrently with the React-build job, add it to the blocking list.
Regular expressions can also be used, e.g. to avoid concurrent builds for all projects starting with React-, add React-.* to the list.
Replace React-tests with whatever pipeline name you don't want to run in parallel, using the global or node-level options.
When any blocked job is started while the configured React-build job is running, it is moved to the pending state.

Fetching project code from different repositories

We want to use Hudson for our CI, but our project is made of code coming from different repositories. For example:
- org.sourceforce... should be checked out from http:/sv/n/rep1.
- org.python.... should be checked out from http:/sv/n/rep2.
- com.company.product should be checked out from http:/sv/n/rep3.
Right now we use an Ant script with a get.all target that checks out/updates the code from the different repositories.
So I can create a job that lets Hudson call our get.all target to fetch all the source code and then call a second target to build everything. But in that case, how do I monitor changes in the three repositories?
I'm thinking that I could just not assign any repository in the job configuration and schedule the job to fetch/build at a regular time interval, but I feel that I'd miss the idea of CI if builds can't be triggered by commits/repository changes.
What would be the best way to do this? Is there a way to configure project dependencies in Hudson?
I haven't poked at the innards of our Hudson installation too much, but there is a button under Source Code Management that says "Add more locations..." (if that isn't the default out-of-the-box configuration, let me know and I will dig deeper).
Most of our Hudson builds require at least a dozen different SVN repos to be checked out, and Hudson monitors them all automatically. We then have the Build steps invoke Ant in the correct order to build the dependencies.
I assume you're using subversion. If not, then please ignore.
Subversion, at least the newer version of it, supports a concept called 'Externals.'
An external is an API, alternate project, dependency, or whatnot that does not reside in YOUR project repository.
See: http://svnbook.red-bean.com/en/1.1/ch07s04.html
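A minimal sketch of wiring one up from the command line (the repository URL and the deps/rep2 directory are illustrative):

# Define an external: check out rep2's trunk into deps/rep2 inside this working copy
svn propset svn:externals "deps/rep2 http://svn.example.com/rep2/trunk" .
svn commit -m "Add external for rep2"
# Subsequent updates also fetch/update the external
svn update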

Hudson - save artifacts only when less than 90% passes

I am new at this, and I was wondering how I can set things up so that the artifacts are saved only if less than 90% of the tests have passed.
Any idea how I can do this?
Thanks
This is not currently possible with Hudson. What is the motivation to avoid archiving artifacts on every build?
How about a rather simple workaround: you create a post-build step (or an additional build step) that calls your tests from the command line. Be sure to capture all errors so Hudson doesn't count them as a failure. Then you evaluate your condition and set the error level accordingly. In addition, you need to save the reports (probably outside Hudson) before you set the error level, so they are available even, or only, when the build fails.
My assumption here is that it is OK not to run the tests when building the app fails. However, you can separate the building and testing into two jobs. See here.
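A rough shell sketch of that workaround; the test command, log format, and report paths are all assumptions:

#!/bin/sh
# Run the tests, but don't let a failure abort this build step
./run-tests.sh > test-output.log 2>&1 || true

# Hypothetical result format: one line per test, ending in PASS or FAIL
total=$(grep -c '^TEST' test-output.log)
passed=$(grep -c 'PASS$' test-output.log)

# Fewer than 90% passed: save the reports outside the workspace,
# then set a non-zero error level so Hudson marks the build failed
if [ "$total" -gt 0 ] && [ $((passed * 100 / total)) -lt 90 ]; then
  mkdir -p /var/ci-reports/"$BUILD_NUMBER"
  cp -r reports/ /var/ci-reports/"$BUILD_NUMBER"/
  echo "Pass rate below 90% ($passed/$total)"
  exit 1
fi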