I've just learnt about GitHub Actions and I think it's super fantastic.
One thing that struck me at the beginning was that, when I was setting up the self-hosted runner, GitHub asked me to run a bunch of commands on my local machine, which is in a private network and not exposed to the internet (no inbound access, meaning the wider web cannot reach it).
However, after installing what GitHub asked me to install, it seems like a webhook was set up successfully: every time there's a push/merge to my master branch (configured in the GitHub Actions workflow file), the worker on my computer knows about it, pulls the newest version of the repo, and starts installing dependencies and running the other CI/CD steps.
What I'm curious about is: how does GitHub actually talk to my VM while it's in a private network?
I've never been a networking guy, so I'm not so sure why this is possible. But it's fascinating.
It's not GitHub connecting to your self-hosted runner (inbound); it's the self-hosted runner connecting to GitHub (outbound). That's why it works: it's your VM (with the runner, inside the private network) talking to GitHub. The communication direction is reversed.

After your self-hosted runner connects to GitHub, both parties keep the connection open. That allows the GitHub repository to push events to your runner through that connection whenever something happens (a PR is opened, a commit is made, etc.). The connection remains open while the runner is operating. Of course, if something bad happens to the network and the connection breaks, the communication stops working. To handle that, the runner periodically sends ping packets to GitHub to validate that the connection is working, and attempts to reconnect if it's not.
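To make the direction concrete: the setup GitHub walks you through never opens an inbound port. The commands below are a paraphrase of those registration steps (the download URL, version, and token are placeholders; GitHub's "New self-hosted runner" page shows the exact values):

```bash
# Download and unpack the runner agent (version/URL are placeholders).
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L \
  https://github.com/actions/runner/releases/download/vX.Y.Z/actions-runner-linux-x64-X.Y.Z.tar.gz
tar xzf actions-runner-linux-x64.tar.gz

# Register against your repo and start listening for jobs. Both commands
# make outbound HTTPS calls to GitHub; nothing listens for inbound traffic.
./config.sh --url https://github.com/OWNER/REPO --token <registration-token>
./run.sh
```

While `run.sh` is active you can check this yourself: on Linux, something like `ss -tnp | grep Runner.Listener` should show only outbound connections to GitHub on port 443, and no listening sockets.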
You can read more about the communication workflow in the official documentation.
Related
I'm currently using a self-hosted runner for CI purposes. The issue is that if the self-hosted runner is unavailable for some reason, the action will hang forever until the self-hosted runner comes back online.
I was wondering if it would be possible to fall back to a shared runner when a self-hosted runner is not available.
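For context, the behaviour described comes from pinning the job to the self-hosted labels: if no registered runner matching those labels is online, the job just sits in the queue. A minimal sketch (job name and build step are made up):

```yaml
name: ci
on: [push]

jobs:
  build:
    # The job is queued until a runner matching every label here is
    # online and idle; there is no built-in fallback to GitHub-hosted
    # runners if none is available.
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v2
      - run: ./build.sh   # placeholder build step
```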
We need to use a GitHub Actions self-hosted runner because we need access to on-premises resources.
I understand that we can run the self-hosted runner on a VM or in a Docker container.
Can we run the self-hosted runner on demand? Like the GitHub-hosted runner, which always uses a clean, isolated VM that is destroyed at the end of job execution. Or like the job agents on Azure DevOps/GitHub that create a clean job-agent container to run the pipeline and delete it at the end.
Can we do something similar with a GitHub Actions self-hosted runner? E.g. allow us to register a VM with a Docker engine or a Kubernetes cluster as a self-hosted runner; when I run the GitHub workflow, it should start a container, run the workflow inside that container, and delete the container afterwards.
Is it possible to do that now? I am not sure if there is a roadmap document somewhere for GitHub Actions.
If you use AWS, you can try ec2-github-runner. It does exactly what you're looking for.
I believe the same approach can also be implemented for the other cloud providers.
GitHub itself doesn't provide such capabilities at the moment.
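For a rough idea of the shape, the pattern from the ec2-github-runner README looks something like the sketch below (condensed from memory, so treat the exact input names as approximate; the AMI, subnet, security group, and secret names are placeholders):

```yaml
name: on-demand-ci
on: [push]

jobs:
  start-runner:
    runs-on: ubuntu-latest
    outputs:
      label: ${{ steps.start.outputs.label }}
      instance-id: ${{ steps.start.outputs.ec2-instance-id }}
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - id: start
        uses: machulav/ec2-github-runner@v2
        with:
          mode: start
          github-token: ${{ secrets.GH_PAT }}        # PAT with repo scope
          ec2-image-id: ami-0123456789abcdef0        # placeholder AMI
          ec2-instance-type: t3.micro
          subnet-id: subnet-0123456789abcdef0        # placeholder
          security-group-id: sg-0123456789abcdef0    # placeholder

  do-the-job:
    needs: start-runner
    runs-on: ${{ needs.start-runner.outputs.label }}  # fresh EC2 runner
    steps:
      - run: echo "this runs on the on-demand instance"

  stop-runner:
    needs: [start-runner, do-the-job]
    runs-on: ubuntu-latest
    if: always()    # terminate the instance even if the job failed
    steps:
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - uses: machulav/ec2-github-runner@v2
        with:
          mode: stop
          github-token: ${{ secrets.GH_PAT }}
          label: ${{ needs.start-runner.outputs.label }}
          ec2-instance-id: ${{ needs.start-runner.outputs.instance-id }}
```

The three-job structure is the key design point: a cheap GitHub-hosted job boots the instance and registers it, the real work runs on the freshly created runner, and a final `if: always()` job tears the instance down whether the build succeeded or not.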
I think the question is a little misleading, but if I understand properly, what you are trying to achieve is stateless GitHub Actions workflow run execution on your self-hosted runner. Unfortunately, this is currently impossible. Only GitHub-hosted runners work the way you describe. According to the documentation:
GitHub-hosted runner is always a clean isolated virtual machine, and it is destroyed at the end of the job execution.
You can read more about the differences between each runner type here.
What does GitLab do in the CI/CD process during the pending state?
I'm using a specific runner and it gets stuck in the pending state for a long time before running, even though my internet connection works fine.
Can anyone explain to me what exactly happens in the CI/CD pending state?
In your specific case, I can think of two possible reasons.
Case 1. No GitLab runner is registered with your repository.
-> Solution: On your GitLab repository (project) page, go to Settings -> CI/CD -> expand the Runners section. Check whether there are runners associated with your repository. If there are, you need to check two things.
Check 1: Are the runners active?
Check 2: Is the tag associated with the specific runner also used as a tag in the pipeline job you are running? (See the snippet after this list.)
Case 2. The runner cannot connect to your GitLab repository.
-> Check the network settings of your runner. Log into the runner server and try to clone your project from there (to see whether it can connect to GitLab). If you are running the runner on your own PC, check whether your proxy settings might be interfering with the connection.
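To illustrate Check 2: the `tags:` on the job in `.gitlab-ci.yml` must match a tag on a registered, active runner, or the job stays pending (the tag name here is invented):

```yaml
# .gitlab-ci.yml
build-job:
  stage: build
  # This job is only picked up by a runner registered with the
  # "my-specific-runner" tag; if no such runner is active, the
  # job sits in "pending" indefinitely.
  tags:
    - my-specific-runner
  script:
    - make build   # placeholder build command
```

For Case 2, running `gitlab-runner verify` on the runner host is a quick way to check that the registered runners can still reach the GitLab server.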
I have been looking at the OpenShift docs and on Stack Overflow for a while now and I can't seem to get any answers.
I want to know what the standard pattern is for developing applications for deployment on OpenShift. I am especially concerned with testing action_hooks prior to deployment. I found this particularly troublesome recently when using a DIY cartridge, where I had to download dependencies in my build script before starting my application. My application kept failing to start every time I made a change and pushed it (I only did this as an initial test of the OpenShift service; I would never develop like this), and I ended up having to SSH onto my instance and resolve the issue by trial and error (not really ideal).
I appreciate any help anyone can offer.
Thanks
The only way that I am aware of to test action hooks on OpenShift is to ssh into an application and run them manually from the command line. This way you can quickly debug & update them. Then copy them to your git repository and do a git push to deploy the new code.
The only other way I can think of would be to run OpenShift Origin (v2) locally and use that to test with.
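A rough sketch of that manual loop, based on OpenShift v2's gear layout (the app name is made up, and the paths are from memory, so they may differ on your gear):

```bash
# SSH into the gear (rhc is the OpenShift v2 client tool)
rhc ssh myapp

# Inside the gear, the deployed repo lives under app-root; run a hook
# by hand to see its output and errors directly.
cd ~/app-root/runtime/repo
./.openshift/action_hooks/build

# Iterate here until the hook works, then copy the fixed script into
# your local git repo, commit, and push to deploy it for real.
```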
I have just started working on a web project that uses the Mercurial version control system with a Bitbucket account.
The web project is hosted on a third-party server, WebFaction.
I have followed all the tutorials on the Mercurial site.
The tutorials state that a repository should be created on the local PC, that the code changes should be made in that local repository, and that the changes should then be added, committed, and pushed to the Bitbucket account.
But my project is hosted on a server (WebFaction), so all the code changes have to happen on the server, where I can see that they work.
I cannot find a reference to changing the code on the WebFaction server (rather than on the local PC) and then committing and pushing it from the WebFaction server to the Bitbucket account. I simply don't know how to do this (or even whether it can be done!).
Can someone give me the steps and syntax (as much as possible) to do this? Could you also keep the answers as simple as possible, as there are huge parts of Mercurial I don't yet understand.
Thanks.
Assuming you have full SSH access to the WebFaction server (you should, according to the WebFaction features page), I suggest you try following the detailed instructions found here. If you get stuck on any step, you can ask a more specific question (probably better to ask on Server Fault, though).
The fact that the repository is on a remote server does not really change anything. You connect through SSH to the remote server (WebFaction) and you follow the steps as if it was a local machine.
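As a concrete sketch of those steps, run on the WebFaction server over SSH (the username, hostname, and repo paths are placeholders):

```bash
# Log in to the server where the code lives
ssh myuser@myserver.webfaction.com

# Turn the existing project directory into a Mercurial repository
cd ~/webapps/myproject
hg init
hg add
hg commit -m "Initial commit of existing project code"

# Push to an (already created, empty) Bitbucket repository; Mercurial
# will prompt for your Bitbucket credentials.
hg push https://bitbucket.org/myuser/myproject

# From then on, the edit/commit/push cycle all happens on the server:
#   edit files -> hg commit -m "message" -> hg push
```

To avoid retyping the URL on every push, you can add it as the `default` entry under `[paths]` in the repository's `.hg/hgrc`.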