Get GitHub Actions Virtual Environment values - github-actions

In a GitHub Actions workflow, is there a way to access the "Set up Job" -> "Virtual Environment" values? Ideally, I'd like to get them from variables already present, but getting them from the output of a command would be just fine too.
I can see the values in the GitHub Actions UI by clicking on a specific job, then "Set up Job", then expanding "Virtual Environment".
Virtual Environment
Environment: ubuntu-20.04
Version: 20220425.1
Included Software: https://github.com/actions/virtual-environments/blob/ubuntu20/20220425.1/images/linux/Ubuntu2004-Readme.md
Image Release: https://github.com/actions/virtual-environments/releases/tag/ubuntu20%2F20220425.1
I'd like to create a cache key based on that info so that, when it changes, the cache is made anew.
Background:
I have some tests that run on both ["ubuntu-latest", "macos-latest"]. I also have a setup action that in part builds some shared libraries (which is slow) and caches them. The shared libraries are external though and only need to be rebuilt for updates to the runner image (or for new versions of the libraries, which isn't a concern). The current cache key is ${{ runner.os }}-${{ needs.setup.outputs.cache-key-suffix }}, which only ever changes when we bump the cache-key-suffix hard-coded string. Using that key, the cache is shared among all the workflows and branches, saving lots of needless duplicated work.
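For reference, this is roughly how that key is wired up today (a sketch, assuming actions/cache; the cached path is a made-up placeholder):
- name: Cache shared libraries
  uses: actions/cache@v3
  with:
    # Hypothetical location where the setup action puts the built libraries.
    path: third_party/libs
    key: ${{ runner.os }}-${{ needs.setup.outputs.cache-key-suffix }}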
Deep Background:
The specific problem I'm solving is that one of those shared libraries is RocksDB. The tests were working fine for a while, but recently stopped; most tests using RocksDB started failing with signal: illegal instruction (core dumped). The only thing that might have changed is an update to ubuntu-latest. So I figure it'd be nice to automatically have the cache recreated when that happens.
I've tried digging through the GitHub Actions: Variables and GitHub Actions: Contexts documentation, and done some general Google-based research, but I haven't been able to find a way to get those values for use in a workflow.
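One possible approach (a sketch, not something taken from the docs): the GitHub-hosted runner images set ImageOS and ImageVersion as plain environment variables, so a step can echo them into an output and fold them into the cache key. The cached path below is the same hypothetical placeholder as above.
- name: Read runner image info
  id: image
  # On older runner versions, ::set-output would be used instead of $GITHUB_OUTPUT.
  run: echo "id=${ImageOS}-${ImageVersion}" >> "$GITHUB_OUTPUT"
- name: Cache shared libraries
  uses: actions/cache@v3
  with:
    path: third_party/libs
    key: ${{ runner.os }}-${{ steps.image.outputs.id }}-${{ needs.setup.outputs.cache-key-suffix }}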

Related

GitHub Actions: using two different kinds of self-hosted runners

I have a GitHub repository doing CI/CD with GitHub Actions that needs more than the GitHub-hosted runners can provide, in multiple ways. For some tasks, we need to test CUDA code on GPUs. For some other tasks, we need lots of CPU cores and local disk.
Is it possible to route GitHub Actions jobs to different self-hosted runners based on the task? Some tasks go to the GPU workers, and others to the big CPU workers? The docs imply this might be possible using "runner groups", but I honestly can't tell if this is something that A) can work if I figure it out, B) will only work if I upgrade my paid GitHub account to something pricier (even though it says it's "enterprise" already), or C) can never work.
When I try to set up a runner group following the docs, I don't see the UI elements that the docs describe. So maybe my account isn't expensive enough yet?
But I also don't see any way that I would route a task to a specific runner group. To use the self-hosted runners today, I just say
gpu-test-job:
  runs-on: self-hosted
instead of
standard-test-job:
  runs-on: ubuntu-22.04
and I'm not sure how I would even specify which runner group (or other routing mechanism) to get it to a specific kind of self-hosted runner, if that's even a thing. I'd need to specify something like:
big-cpu-job:
  runs-on: self-hosted
  self-hosted-runner-group: big-cpu # is this even a thing?
It looks like you won't be able to utilize runner groups on a personal account, but that's not a problem!
Labels can be added to self-hosted runners. Those labels can be referenced in the runs-on value (as an array) to specify which self-hosted runner(s) the job should go to.
You would run ./config.sh like this (you can pass in as many comma-separated labels as you like):
./config.sh --labels big-cpu
and your job would use an array in the runs-on field to make sure it selects a self-hosted runner that also has the big-cpu label:
big-cpu-job:
  runs-on: [self-hosted, big-cpu]
  ...
Note: If you wanted to "reserve" the big-cpu runners for the jobs that need it, then you'd use a separate label, regular, for example, on the other runners' ./config.sh and use that in the runs-on for the jobs that don't need the specialized runner.
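For example, a job that should stay off the big-cpu machines would target the regular label instead (a sketch, assuming those other runners were registered with --labels regular):
standard-test-job:
  runs-on: [self-hosted, regular]
  ...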

How to build Chromium faster?

Following only the instructions here - https://www.chromium.org/developers/how-tos/get-the-code I have been able to successfully build and get a Chromium executable which I can then run.
I have been playing around with the code (adding new buttons to the browser, etc.) for learning purposes. Each time I make a change (like adding a new button in the settings toolbar) and use the ninja command to build, it takes over 3 hours to finish before I can run the executable. It seems to rebuild each and every file.
I have a decently powerful machine (i7, 8GB RAM) running 64-bit Ubuntu. Are there ways to speed up the builds? (At the moment, I have literally just followed the instructions in the above-mentioned link, with no other optimizations to speed things up.)
Thank you very very much!
If all you're doing is modifying a few files and rebuilding, ninja will only rebuild the objects that were affected by those files. When you run ninja -C ..., the console displays the number of targets that need to be built. If you're modifying only a few files, that should be ~2000 at the high end (modifying popular header files can touch lots of objects). Modifying a single .cpp would result in rebuilding just that object.
Of course, you still have to relink, which can take a very long time. To make linking faster, try using a component build, which keeps everything in separate shared libraries rather than one big one that needs to be relinked for any change. If you're using GN, add is_component_build=true to gn args out/${build_dir}. For GYP, see this page.
You can also peruse faster linux builds and see if any of those tips apply to you. Unfortunately, Chrome is a massive project so builds will naturally be long. However, once you've done the initial build, incremental builds should be on the order of minutes rather than hours.
Follow the recently updated instructions here:
https://chromium.googlesource.com/chromium/src/+/HEAD/docs/windows_build_instructions.md#Faster-builds
In addition to using component builds, you can disable NaCl, use jumbo builds, turn off symbols for WebCore, etc. Jumbo builds are still experimental at this point, but they already help build times and they will gradually help more.
Full builds will always take a long time even with jumbo builds, but component builds should let incremental builds be quite fast in many cases.
For building on Linux, you can see how to build faster at: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md#faster-builds
Most of them require adding build arguments. To edit build arguments, see the GN build configuration page at: https://www.chromium.org/developers/gn-build-configuration.
You can edit the build arguments for a build directory with:
$ gn args out/mybuild
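As a rough example, arguments along these lines are the kind of thing those pages suggest for faster developer builds (treat the exact set as an assumption and check the docs for current guidance):
# args.gn for out/mybuild
is_debug = true            # developer build
is_component_build = true  # many small shared libraries instead of one huge link
symbol_level = 0           # fewer debug symbols, much faster linking
enable_nacl = false        # skip building Native Client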

How about an Application Centralized Configuration Management System?

We have a build pipeline to manage the artifacts' life cycle. The pipeline consists of four stages:
1. commit (running unit/integration tests)
2. at (deploy the artifact to the AT environment and run automated acceptance tests)
3. uat (deploy the artifact to the UAT environment and run manual acceptance tests)
4. pt (deploy to the PT environment and run performance tests)
5. // TODO: we're trying to support the production environment.
The pipeline supports environment variables, so we can deploy artifacts with different configurations by triggering it with options. The problem is that sometimes there are too many configuration items, which makes the deploy script contain too many replacement tasks.
I have an idea of building a centralized configuration management system (CCM for short), so we can maintain the configuration items there and leave only a URL (pointing to the CCM) replacement task (handling the different stages) in the deploy script. Therefore, the artifact doesn't hold the configuration values; it asks the CCM for them.
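A sketch of what that single replacement would look like, with a hypothetical YAML application config and a made-up CCM endpoint:
# Only ccm.url is rewritten by the deploy script per stage (at | uat | pt | prod);
# every other value is fetched from the CCM at startup.
ccm:
  url: https://ccm.example.com/myapp/uat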
Is this feasible, or a bad idea in the first place?
My concern is that the potential mismatch between a configuration key (defined in the artifact) and its value (set in the CCM) is not solved by this solution and may even get worse.
Configuration files should remain with the project, or be set as configuration variables where they are run. The reasoning behind this is that you'd be adding a new point of failure to your architecture: you have to take into account that your configuration server could go down, breaking everything that depends on it.
I would advise against putting yourself in this situation.
There is no problem in having a long list of environment variables defined for a project; in fact, that could even mean you're doing things properly.
If for some reason you find yourself changing configuration values a lot (for example, database connection strings, API endpoints, etc.), then the problem might be that very need to change configurations that should almost always stay the same.

Fetching project code from different repositories

We want to use Hudson for our CI, but our project is made of code coming from different repositories. For example:
- org.sourceforce... should be checked out from http:/sv/n/rep1.
- org.python.... should be checked out from http:/sv/n/rep2.
- com.company.product should be checked out from http:/sv/n/rep3.
Right now we use an Ant script with a get.all target that checks out/updates the code from the different repositories.
So I can create a job that lets Hudson call our get.all target to fetch all the source code and then call a second target to build everything. But in that case, how do I monitor changes in the 3 repositories?
I'm thinking that I could just not assign any repository in the job configuration and schedule the job to fetch/build at a regular time interval, but I feel I'd miss the point of CI if builds can't be triggered by commits/repository changes.
What would be the best way to do this? Is there a way to configure project dependencies in Hudson?
I haven't poked at the innards of our Hudson installation too much, but there is a button under Source Code Management that says "Add more locations..." (if that isn't the default out-of-the-box configuration, let me know and I will dig deeper).
Most of our Hudson builds require at least a dozen different SVN repos to be checked out, and Hudson monitors them all automatically. We then have the Build steps invoke Ant in the correct order to build the dependencies.
I assume you're using subversion. If not, then please ignore.
Subversion, at least in newer versions, supports a concept called 'externals.'
An external is an API, alternate project, dependency, or whatnot that does not reside in YOUR project repository.
See: http://svnbook.red-bean.com/en/1.1/ch07s04.html
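For example, a sketch using the pre-1.5 externals format (local directory, then URL; the names and URLs below are placeholders, not your real repositories):
# Declare the other repositories as externals on the project root, then commit.
svn propset svn:externals "external/rep1 http://svn.example.com/rep1/trunk
external/rep2 http://svn.example.com/rep2/trunk" .
svn commit -m "Pull dependent code in via svn:externals"
The externals are then checked out and updated together with the main working copy.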

Xcode 7.2, Swift "Show Live Issues" and autocomplete failure

Symptoms:
Successful build
"Live Issues" shows tons of errors
Autocomplete no longer works
This happened in the middle of coding, yet is unrelated to code changes. I have tried various other solutions I've found on here, including:
Clean Build Folder
Remove Derived Data
Restart Xcode (in combination with other items on this list)
Restart computer
Removing then re-adding the framework (referenced below*)
Change build settings:
ALWAYS_SEARCH_USER_PATHS No -> Yes
FRAMEWORK_SEARCH_PATHS nil -> $(PROJECT_DIR) and explicit path (non-recursive) to included framework
Checkout code several changes back to ensure it's unrelated to code changes
Build for actual device instead of simulator
Changing the name of the file
None of the above works (the "Restart Xcode" step was tried in combination with other steps above and in various orderings).
I am currently using Xcode 7.2.1. (I couldn't upgrade to 7.3, but didn't see anything in the release notes about this issue anyway.)
*This project includes a framework that I have developed that is in a separate directory.
I've ruled out other solutions from stackoverflow, because:
This project is swift
I haven't created any precompiled headers (find /var/folders -name SharedPrecompiledHeaders yielded no results)
Again, the project builds. I can make changes and run and those changes make it out to the simulator. As far as I can tell, the Live Issues and code completion are just in a single file.
This is hardly a "real" answer. In the end, I ended up checking out a revision so old that most of the code wasn't present.
I was pushing to a remote repository, checking out each branch and pushing; one of the branches was just the initial project autogenerated by Xcode before it had any real code. Checking that branch out and then checking the current dev branch back out seems to have made the problem go away.
...for now