All in the title really — porting over from Jenkins, where I would normally set notifications to trigger:
On each fail of master (i.e. every red build)
On first pass of master (i.e. first green build, when it had been failing)
I can't see a way to achieve #2 on Github Actions. Is it possible and if so, how can I do it?
For #1, it is easy because everything happens within a single workflow run. You can leverage the workflow context, and there is built-in syntax like if: ${{ failure() }} to run a notification step only when an earlier step has failed.
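For example, a minimal sketch of a notify-on-failure step could look like the following; the curl call and the SLACK_WEBHOOK_URL secret are placeholders for whatever notification mechanism you actually use:

build:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Build and test
      run: make test
    - name: Notify on failure
      if: ${{ failure() }}   # runs only when an earlier step in this job failed
      run: |
        curl -X POST -H 'Content-Type: application/json' \
          -d '{"text":"master build failed: run ${{ github.run_id }}"}' \
          "${{ secrets.SLACK_WEBHOOK_URL }}"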
For #2, it might be a little bit tricky, as you need context across workflow runs. I haven't done anything like that yet, but I think you could persist workflow data with actions/upload-artifact and actions/download-artifact to achieve what you want.
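As an alternative sketch (not the artifact approach, just another way to get cross-run context), you could ask the GitHub API for the conclusion of the previous run of the same workflow and notify only when the current run is green but the previous one was red. The workflow file name ci.yml and the build job name below are placeholders; the gh CLI is preinstalled on GitHub-hosted runners:

notify-recovery:
  runs-on: ubuntu-latest
  needs: build          # "build" is the placeholder name of the job that runs your tests
  if: ${{ success() }}
  steps:
    - name: Notify if master just turned green again
      env:
        GH_TOKEN: ${{ github.token }}
      run: |
        # The list usually includes the current (in-progress) run at index 0,
        # so index 1 is the previous run on master; adjust if needed.
        prev=$(gh run list --repo "$GITHUB_REPOSITORY" --workflow ci.yml \
                 --branch master --limit 2 --json conclusion \
                 --jq '.[1].conclusion')
        if [ "$prev" = "failure" ]; then
          echo "master is green again - send the notification here"
        fi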
I have a github repository which is doing CI/CD using github actions that need more than what the github-hosted runners can do. In multiple ways. For some tasks, we need to test CUDA code on GPUs. For some other tasks, we need lots of CPU cores and local disk.
Is it possible to route github actions to different self-hosted runners based on the task? Some tasks go to the GPU workers, and others to the big CPU workers? The docs imply this might be possible using "runner groups" but I honestly can't tell if this is something that A) can work if I figure it out B) will only work if I upgrade my paid github account to something pricier (even though it says it's "enterprise" already) or C) can never work.
When I try to set up a runner group following the docs, I don't see the UI elements that the docs describe. So maybe my account isn't expensive enough yet?
But I also don't see any way that I would route a task to a specific runner group. To use the self-hosted runners today, I just say
gpu-test-job:
  runs-on: self-hosted
instead of
standard-test-job:
  runs-on: ubuntu-22.04
and I'm not sure how I would even specify which runner group (or other routing mechanism) to get it to a specific kind of self-hosted runner, if that's even a thing. I'd need to specify something like:
big-cpu-job:
  runs-on: self-hosted
  self-hosted-runner-group: big-cpu # is this even a thing?
It looks like you won't be able to utilize runner groups on a personal account, but that's not a problem!
Labels can be added to self-hosted runners. Those labels can be referenced in the runs-on value (as an array) to specify which self-hosted runner(s) the job should go to.
You would run ./config.sh like this (you can pass in as many comma-separated labels as you like):
./config.sh --labels big-cpu
and your job would use an array in the runs-on field to make sure it selects a self-hosted runner that also has the big-cpu label:
big-cpu-job:
  runs-on: [self-hosted, big-cpu]
  ...
Note: If you wanted to "reserve" the big-cpu runners for the jobs that need them, you would add a separate label, for example regular, to the other runners' ./config.sh and use that label in the runs-on of the jobs that don't need the specialized runner (see the sketch below).
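A rough sketch of what the routing could look like once both pools are labelled; the job names and labels are just examples:

big-cpu-job:
  runs-on: [self-hosted, big-cpu]
  steps:
    - run: echo "runs only on runners registered with --labels big-cpu"

gpu-test-job:
  runs-on: [self-hosted, gpu]
  steps:
    - run: echo "runs only on runners registered with --labels gpu"

standard-test-job:
  runs-on: [self-hosted, regular]
  steps:
    - run: echo "keeps the specialized runners free for the jobs above"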
It's very weird that my code passes all UT/IT on my laptop, but it encounters errors in GitHub CI.
Could you suggest some methods for debugging in GitHub CI, or for making the code run locally the same way it runs on GitHub?
The project is a time series database, Apache IoTDB. The error looks like a trivial logic error in ordinary code, which I hope helps in diagnosing the bug. Thank you very much!
act is a local runner for GitHub Actions workflows and should run nearly identically to the real thing.
Alternatively, the debugging-with-ssh action uses upterm to open an SSH listener within a container to get a shell on a running workflow within GitHub Actions itself.
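For example, once act is installed locally (the job name below is illustrative):

act -l               # list the jobs act found under .github/workflows
act pull_request     # run the jobs that a pull_request event would trigger
act -j unit-tests    # run a single job by its id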
The issue was solved simply by merging master (the branch my pull request targets) into my branch again.
The point is that GitHub CI (Actions) may run against the code that is AUTO-MERGED with the target branch when the pull request is checked, not against your branch alone.
So if your code passes all tests locally but fails in CI with results that differ from your local debugging, merging the branch your PR targets into your branch may solve the problem.
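For example (branch names are placeholders):

# Reproduce locally what CI actually builds: your branch merged with its target.
git fetch origin
git checkout my-feature-branch
git merge origin/master
# now re-run the tests against the merged result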
Hope this helps, and thanks to everyone who contributed to this question.
I'm using a Github workflow to run tests. Because the setup can take a while, we want to skip running the tests when no code was changed. So we are using paths-ignore like this:
on:
  pull_request:
    branches:
      - develop
    paths-ignore:
      - '*.md'
The problem is that we have a protected branch here that requires a check to pass before a branch can be merged. There seem to be some workarounds https://github.community/t/feature-request-conditional-required-checks/16761/20 but they are pretty clunky. Is there an elegant and idiomatic way to return a passing status here for a job that was essentially skipped?
Elegant and idiomatic, evidently not. The conclusion elsewhere (GitHub community forum, Reddit) is that this is expected behavior, at least right now.
The two main workarounds people seem to be using are:
Run all required status checks on all PRs, even the slow ones. Sigh.
Use paths-filter or a homegrown alternative (example) inside the required workflows, as part of their execution, and skip the actual work but still return success if no relevant files were changed (see the sketch below).
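A rough sketch of that second workaround using dorny/paths-filter, one of the actions commonly used for this; the filter name and the paths are just examples:

tests:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v2
      id: changes
      with:
        filters: |
          code:
            - 'src/**'
    - name: Run the expensive tests
      if: steps.changes.outputs.code == 'true'
      run: ./run-tests.sh
    # If nothing under src/ changed, the test step is skipped and the job
    # still finishes green, so the required check passes.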
I have a script that I can run using "execute shell", even parameterized by giving argument1 and argument2, if I disable my warning message. My warning message asks "do you want to run? yes/no", but how can we give yes or no once we start the parameterized build? If I say yes it should continue; if I say no it should stop. Any idea would be appreciated.
Thanks
Pravin
If your script is interactive, it will not work, since Hudson requires non-interactive scripts - otherwise it will just hang (there is no terminal).
What you can do is, instead of asking whether to continue, make it dependent on a variable, let's say CHOICE, so
...
if [[ "$CHOICE" == "yes" ]]
then
#do some work here
else
echo "Script ended."
fi
and put CHOICE as a choice-style parameter in Hudson. The next time you start the build, it will ask you "yes" or "no". You can also pick a default for automated builds.
It might be better to use some of the Hudson variables to detect that you are in a Hudson build (HUDSON_JOB_ID?) etc., and use that to change the behavior of the build. I see a problem in having an interactive script for building: a build script must run without interaction from the user.
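For instance, a small sketch of that idea, assuming you key off BUILD_NUMBER (one of the variables Hudson sets for every build):

# Skip the interactive prompt when running inside Hudson.
if [ -n "$BUILD_NUMBER" ]; then
    CHOICE=yes                              # non-interactive build: use a default
else
    read -p "Do you want to run? (yes/no) " CHOICE
fi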
I am new at this and I was wondering how I can set things up so that the artifacts are saved only if less than 90% of the tests have passed.
Any idea how I can do this?
thanks
This is not currently possible with Hudson. What is the motivation to avoid archiving artifacts on every build?
How about a rather simple workaround: you create a post-build step (or additional build step) that calls your tests from the command line. Be sure to capture all errors so Hudson doesn't count them as a failure. Then you evaluate your condition and set the error level accordingly. In addition, you need to save the reports (probably outside Hudson) before you set the error level, so they are available even (or only) when the build fails.
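A sketch of such a step; how you obtain the pass/fail counts depends entirely on your test runner, so run_tests, results.txt and its PASS/FAIL line format below are made up purely for illustration:

# Capture all errors so this step itself never fails the build prematurely.
run_tests > results.txt || true
passed=$(grep -c -E '^PASS' results.txt)
total=$(grep -c -E '^(PASS|FAIL)' results.txt)
# Save the report somewhere outside Hudson before touching the error level.
cp results.txt /path/outside/hudson/
# Fail the build when less than 90% of the tests passed, so the condition
# from the question is reflected in the build result.
if [ $((passed * 100)) -lt $((total * 90)) ]; then
    exit 1
fi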
My assumption here is that it is OK not to run the tests when building the app fails. However, you can separate building and testing into two jobs. See here.