We are using GitHub Actions self-hosted runners on a Windows Server machine to build and deploy private repositories. For context, they are .NET projects.
A pattern we've adopted is to break a workflow into multiple jobs (checkout, restore, build, test and deploy). Some of these jobs can run in parallel; some need other jobs to complete before they can start.
I have tried to set up two Runners in the same Runner Group on the same machine.
My Expectations:
1. Be able to run multiple workflows at the same time (one runner per workflow)
2. Be able to run multiple jobs in a single workflow at the same time (multiple runners per workflow)
Each self-hosted runner has its own _work folder, which is where $Env:GITHUB_WORKSPACE points.
When I tried #2 above, I saw both runners working on the same workflow, but each was using its own _work folder. The first runner would check out the repo to its _work folder, and the second runner would error out because it couldn't find the repo in its own _work folder.
Possible Solutions:
A) Move the _work directory to a root folder that both runners can access
B) Remap $Env:GITHUB_WORKSPACE for each workflow
I don't believe either of these solutions works. What am I missing? Is there a better technique for using multiple self-hosted runners?
I would even be happy if I could have my #1 expectation of one runner per workflow.
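For what it's worth, the usual way to make jobs independent of which runner they land on is to have each job check out the repository itself and pass build output between jobs as workflow artifacts, rather than sharing a _work folder. A minimal sketch (job names, labels and paths are illustrative):

```yaml
name: Build and deploy (sketch)
on: [push]

jobs:
  build:
    runs-on: [self-hosted, windows]
    steps:
      - uses: actions/checkout@v4          # each job checks out its own copy
      - run: dotnet build -c Release
      - uses: actions/upload-artifact@v4   # hand the output to later jobs
        with:
          name: build-output
          path: bin/Release

  test:
    needs: build                           # enforce ordering between jobs
    runs-on: [self-hosted, windows]
    steps:
      - uses: actions/checkout@v4
      - run: dotnet test
```

With this shape, it no longer matters which of the two runners picks up which job.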
EDIT: My repo resides in GitHub Enterprise.
I have a very basic GitHub Actions workflow, shown below.
All it does is run a PowerShell command, as mentioned here.
```yaml
name: First Github Action
on:
  workflow_dispatch:
jobs:
  first-job:
    name: First Job
    runs-on: ubuntu-latest
    steps:
      - name: Display the path
        run: echo ${env:PATH}
        shell: pwsh
```
Unfortunately, it just keeps waiting for a runner to pick it up. Below is the message that is displayed.
```
Requested labels: ubuntu-latest
Job defined at: {myUserName}/{repoName}/.github/workflows/{myFileName}.yml#refs/heads/main
Waiting for a runner to pick up this job...
```
EDIT: I created another public repo and ran the action. It is still waiting.
Unfortunately, I cannot share this repo, as it is an enterprise GitHub repo owned by the company I work for.
Assuming you are running this on GitHub Cloud (or github.com):
GitHub Actions is only free for public repositories; otherwise you have to pay for a license.
Switching the repo's visibility from private to public may not cause the stuck workflow to be picked up. You will likely need to cancel it and queue a new one.
Make sure your workflows are located in the .github/workflows folder.
Assuming you are running this on GitHub Enterprise Cloud (GHEC):
You need to make sure that your admin has Actions enabled
You need to make sure that your admin has Actions allowed for repositories not owned by an organization
Assuming you are running this on GitHub Enterprise Server (GHES):
You need to make sure that your admin has Actions enabled
You need to make sure that your admin has Actions allowed for repositories not owned by an organization
You will not be able to use GitHub-hosted runners, as you have in your YAML file
You will need to use a self-hosted runner; your GitHub admin can provide the details of what you need to use.
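For example, on GHES the runs-on line needs to target your self-hosted runner's labels rather than a GitHub-hosted image. A sketch of the adjusted job (the exact labels depend on how your admin registered the runner):

```yaml
jobs:
  first-job:
    name: First Job
    runs-on: [self-hosted]   # your runner's labels instead of ubuntu-latest
    steps:
      - name: Display the path
        run: echo ${env:PATH}
        shell: pwsh
```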
The workflow you have in your question does in fact work:
https://github.com/tjc-actions-demo/simple-actions
The issue is going to be either permissions-related or configuration-related. Depending on your environment, you will need to troubleshoot based on my suggestions above.
My group has a project with a GitHub publishing workflow that has two versions: one which is automated as soon as the builds are finished compiling and one that can be manually triggered from the GitHub Actions interface. Aside from the triggering event, the jobs in the yaml files are identical.
The issue is that, in the case of the automated trigger, the env is not completely loaded:
Whereas the manual trigger does correctly load the env:
We have tried multiple "solutions", ranging from resetting our repository secrets to explicitly declaring the env at the job level (in addition to the workflow level). The result has been the same every time.
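For reference, this is the shape of configuration being described; a sketch assuming a secret named DEPLOY_TOKEN (the secret name and job name are illustrative), with the env declared at both the workflow and the job level:

```yaml
on:
  push:               # automated trigger
  workflow_dispatch:  # manual trigger

env:                  # workflow-level env
  DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}

jobs:
  publish:
    runs-on: ubuntu-latest
    env:              # job-level env, declared explicitly as well
      DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
    steps:
      - name: Check whether the secret reached the environment
        run: test -n "$DEPLOY_TOKEN"
```

If the check step fails only on the push trigger, that narrows the problem to how the secret is scoped for that event.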
I uploaded many GitHub artifacts, causing the GitHub free storage space (50 GB) to run out.
Most of these artifacts were copies or had very small changes.
Had these been stored in layers, as diffs from parent images (the way Docker images are stored), it seems unlikely that 50 GB would have run out.
Are GitHub artifacts stored as individual files every time a workflow is run?
Are GitHub packages stored in the same way?
Is the storage for packages and artifacts the same?
GitHub's artifacts are usually linked with:
release: artifacts could be another term for "assets": files associated with a particular GitHub release (see Update a release asset for instance).
or workflow artifacts: An artifact is a file or collection of files produced during a workflow run. It allows you to persist data after a job has completed, and share that data with another job in the same workflow.
As Edward Thomson (from GitHub) notes: "because they're meant to be used to move data between jobs in a workflow, workflow assets are not permanent".
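Since workflow artifacts are not permanent anyway, their storage cost can also be capped per upload; a sketch assuming actions/upload-artifact (the artifact name and path are illustrative):

```yaml
steps:
  - uses: actions/upload-artifact@v4
    with:
      name: build-output
      path: out/
      retention-days: 5   # expire sooner to reclaim storage quota
```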
GitHub Container Registry is dedicated to storing and managing Docker and OCI images.
Only the latter benefit from incremental storage through layers.
The former are uploaded as a whole, for each file.
From the comments below:
A workflow where one authenticates to GHCR (GitHub Container Registry) and pushes an image to the registry (docker push ghcr.io/OWNER/IMAGE_NAME:tag) will benefit from incremental, layer-by-layer storage.
This differs from a regular asset upload, where the artifact is stored as a whole.
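A sketch of such a workflow (OWNER/IMAGE_NAME:tag kept as placeholders, as above):

```yaml
jobs:
  push-image:
    runs-on: ubuntu-latest
    permissions:
      packages: write   # allow GITHUB_TOKEN to push to GHCR
    steps:
      - uses: actions/checkout@v4
      - name: Log in to GHCR
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build and push (stored layer by layer)
        run: |
          docker build -t ghcr.io/OWNER/IMAGE_NAME:tag .
          docker push ghcr.io/OWNER/IMAGE_NAME:tag
```

Repeated pushes of a nearly identical image re-upload only the changed layers, which is exactly the incremental behaviour the question assumed artifacts would have.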
So, I'm enjoying using Composer, but I'm struggling to understand how others use it with a deployment service. Currently I'm using DeployHQ, and yes, I can set it to deploy and run Composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the json file of all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme or custom extension (which is referenced in the json file), there is no "hook" to notify my deployment service. So I have to log in to my server and manually run Composer (which takes the site down until it has finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things such as this and there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of the packages in composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing updates out of the packages when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and to ensure that you don't accidentally deploy untested code if one of the modules happens to be updated between your testing and your deployment.
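A sketch of what that pinning looks like in composer.json (the package names, the repo URL and the COMMIT-HASH placeholder are all illustrative):

```json
{
    "require": {
        "monolog/monolog": "1.27.1",
        "acme/custom-theme": "dev-master#COMMIT-HASH"
    },
    "repositories": [
        { "type": "vcs", "url": "https://github.com/acme/custom-theme" }
    ]
}
```

The `#COMMIT-HASH` suffix is Composer's syntax for locking a dev branch to one specific commit.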
Actual deployments
Possible Method 1
My opinion is slightly controversial in that when it comes to Composer for many of my projects that don't go through a CI system, I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage, it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote-system failures, that it clogs up the VCS tree, etc. I won't go into these now; there are arguments for and against (a lot of them opinion-based). But as you mentioned it in your question, I thought I would let you know that it has served me well on a lot of projects in the past, and it is a viable option.
Possible Method 2
By using a symlink on your server as your document root, you can build into a new directory and only switch the symlink over to it once you have confirmed the build completed.
This is the least resistance path towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
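A sketch of the switchover (paths are illustrative; the clone/install steps are elided so only the symlink mechanics are shown):

```shell
# Build into a fresh, timestamped release directory...
RELEASE="releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
# ...run git clone / composer install into "$RELEASE" here, verify the build...

# ...then atomically repoint the document root at the new release.
ln -sfn "$RELEASE" current
```

Because the switch is a single symlink replacement, the site never serves a half-finished build, which avoids the downtime described in the question.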
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you will basically be creating a "repository folder" of your vendor files, as an alternative to adding the entire vendor folder to your VCS; it also protects you against GitHub / Packagist outages, files being removed, and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip file rather than being fetched from a server. This folder can be stored remotely; think of it as a poor man's private Packagist (another option, by the way).
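The corresponding composer.json entry is just (the artifacts path is illustrative):

```json
{
    "repositories": [
        { "type": "artifact", "url": "path/to/artifacts/" }
    ]
}
```

Composer then resolves packages from the zip files found in that folder instead of contacting Packagist.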
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (If they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail), it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
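The same build-test-package flow, sketched as a GitHub Actions workflow for comparison with the Jenkins setup described above (the test runner and artifact name are illustrative):

```yaml
on: [push]          # build each time something is pushed

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: composer install --no-interaction
      - run: vendor/bin/phpunit                  # run the tests, if they exist
      - name: Package the tested files as an artifact
        run: tar -czf release.tar.gz --exclude=.git --exclude=release.tar.gz .
      - uses: actions/upload-artifact@v4
        with:
          name: release
          path: release.tar.gz
```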
Hope this helps :)
Is it possible in a Hudson job to poll a different source directory from the directory in which the build is run?
I've used Hudson successfully to enforce compilation success in java projects.
An SVN directory is polled every, say, 5 minutes and an Ant target is run; the errant programmer gets emailed in the event of failure.
However, in every case the Ant build.xml happened to reside in the same directory as the SVN directory being polled.
Basically I am trying to apply the same system to an Oracle database build.
There are multiple directories to watch (schema, static data, stored procs, etc., with an upstream/downstream order).
However the ant build script resides several directories above the directories I wish to poll.
I guess the solution is that I must create multiple Ant build.xml files, one for each database component, and I assume a separate Hudson job for each?
I wondered was there a better way of doing this.
Best Rgds
Peter
Check out the project from the highest level and configure your build steps to execute the various steps in the subfolders, as you would do manually. As long as everything needed is in the workspace, you can build whatever is in there, at the top level as well as in subfolders.
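For example, with one top-level checkout, each build step can point Ant at the relevant component's build file (paths and target names are illustrative):

```shell
# All paths are relative to the Hudson workspace (the top-level checkout)
ant -buildfile schema/build.xml deploy
ant -buildfile static-data/build.xml deploy
ant -buildfile stored-procs/build.xml deploy
```

Ordering the steps in the job enforces the upstream/downstream sequence without needing a separate job per component.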