We need to use a GitHub Actions self-hosted runner because we need access to on-premises resources.
I understand that we can run the self-hosted runner on a VM or in a Docker container.
Can we run the self-hosted runner on demand? Like a GitHub-hosted runner, which always uses a clean, isolated VM that is destroyed at the end of the job execution. Or like the job agents on Azure DevOps/GitHub, which create a clean job-agent container to run the pipeline and delete it at the end.
Can we do something similar with a GitHub Actions self-hosted runner? E.g., allow us to register a VM with a Docker engine or a Kubernetes cluster as a self-hosted runner. When I run the GitHub workflow, it should start a container, run the workflow inside that container, and delete the container afterwards.
Is it possible to do that now? I am not sure if there is a roadmap document somewhere for GitHub Actions.
If you use AWS, you can try ec2-github-runner. It does exactly what you're looking for.
I believe the same approach can also be implemented for other cloud providers.
GitHub itself doesn't provide such capabilities at the moment.
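For illustration, here is roughly how ec2-github-runner is wired into a workflow, adapted from the project's README (the AMI, subnet, and security-group IDs and the secret names are placeholders, and input names may differ between versions):

```yaml
name: do-the-job
on: workflow_dispatch
jobs:
  start-runner:
    runs-on: ubuntu-latest
    outputs:
      label: ${{ steps.start-ec2-runner.outputs.label }}
      ec2-instance-id: ${{ steps.start-ec2-runner.outputs.ec2-instance-id }}
    steps:
      # An aws-actions/configure-aws-credentials step normally precedes this
      - name: Start EC2 runner
        id: start-ec2-runner
        uses: machulav/ec2-github-runner@v2
        with:
          mode: start
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          ec2-image-id: ami-0abcdef1234567890
          ec2-instance-type: t3.micro
          subnet-id: subnet-0123456789abcdef0
          security-group-id: sg-0123456789abcdef0
  do-the-job:
    needs: start-runner
    runs-on: ${{ needs.start-runner.outputs.label }}  # the fresh EC2 runner
    steps:
      - run: echo "This runs on the on-demand EC2 instance"
  stop-runner:
    needs: [start-runner, do-the-job]
    runs-on: ubuntu-latest
    if: ${{ always() }}  # terminate the instance even if the job failed
    steps:
      - name: Stop EC2 runner
        uses: machulav/ec2-github-runner@v2
        with:
          mode: stop
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          label: ${{ needs.start-runner.outputs.label }}
          ec2-instance-id: ${{ needs.start-runner.outputs.ec2-instance-id }}
```

The net effect is close to what the question asks for: each run gets a fresh machine that is registered as a runner, used for the job, then deregistered and terminated.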
I think the question is a little bit misleading, but if I understand properly, what you are trying to achieve is stateless GitHub Actions workflow run execution on your self-hosted runner. Unfortunately, this is currently impossible. Only GitHub-hosted runners work the way you describe. According to the documentation:
GitHub-hosted runner is always a clean isolated virtual machine, and it is destroyed at the end of the job execution.
You can read more about the differences between each runner type here.
I've just learnt about GitHub Actions and I think it's super fantastic.
One thing that struck me hard at the beginning was that, when I was setting up the self-hosted runner, GitHub asked me to run a bunch of commands on my local machine, which is in a private network and not exposed to the internet (inbound, meaning the web cannot reach it).
However, after installing what GitHub asked me to install, it seems like a webhook was set up successfully, and every time there's a push/merge to my master branch (set up in the GitHub Actions workflow file), the worker on my computer knows and starts pulling the newest version of the repo, installing dependencies, and doing other CI/CD work.
Now what I'm curious about is: how does GitHub actually talk to my VM while it's in a private network?
I've never been a networking guy, so I'm not so sure why this is possible. But it's fascinating.
It's not that GitHub is connecting to your self-hosted runner (inbound); rather, the self-hosted runner itself connects to GitHub (outbound). This is why it works: it's your VM (with the runner, in the private network) talking to GitHub. The communication direction is reversed.

After your self-hosted runner connects to GitHub, both parties keep the connection open. That allows events to be pushed to your runner through the connection by the GitHub repository when something happens (a PR is opened, a commit is made, etc.). The connection remains open while the runner is operating. Of course, if something bad happens to the network and the connection is broken, communication will stop working. To handle that, the runner periodically sends ping packets to GitHub to validate that the connection is working, and attempts to reconnect if it's not.
You can read more about the communication workflow in the official documentation.
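For context, the setup commands from the question look roughly like the following (the URL and token are placeholders taken from the repo's Settings > Actions > Runners page). Both steps only ever open outbound connections:

```bash
# Register the runner with your repository
./config.sh --url https://github.com/OWNER/REPO --token REGISTRATION_TOKEN

# Start listening for jobs; this opens the outbound HTTPS long-poll
# connection to GitHub -- no inbound port has to be reachable
./run.sh
```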
What is the best way to run an API test that was created via the IBM API Connect Test and Monitor?
I published a test, and I would like my CI server (Jenkins or Azure DevOps) to run it.
Many thanks,
Assaf
Great question, Assaf. It is currently not possible in the product; however, it is on the roadmap and coming soon.
We will have APIs and webhooks to help execute tests as part of your CI/CD processes. Meaning, the same tests you generated using the test composer and run in production can be recycled to ensure your deployments are error-free.
As well, Jenkins is an industry standard, so we will be providing a plugin to help facilitate the API testing process via the GUI. More details to come; I will update this space when they are available.
Alternatively, keep an eye out here: http://ibm.biz/apitest
I have been looking at the OpenShift docs and on Stack Overflow for a while now, and I can't seem to get any answers.
I want to know what the standard pattern is for developing applications for deployment on OpenShift. I am especially concerned with testing action_hooks prior to deployment. I found this particularly troublesome recently when using a DIY cartridge, where I had to download dependencies in my build script before starting my application, and my application kept failing to start every time I made a change and pushed it. (I only did this as an initial test of the OpenShift service; I would never develop like this.) I ended up having to ssh onto my instance and resolve the issue by trial and error (not really ideal).
I appreciate any help anyone can offer.
Thanks
The only way that I am aware of to test action hooks on OpenShift is to ssh into an application and run them manually from the command line. This way you can quickly debug & update them. Then copy them to your git repository and do a git push to deploy the new code.
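A rough sketch of that debug loop, assuming the OpenShift v2 rhc client and a placeholder app named myapp:

```bash
# SSH into the application's gear ("myapp" is a placeholder name)
rhc ssh myapp

# On the gear, the hooks pushed with your repo live under the repo dir
cd $OPENSHIFT_REPO_DIR/.openshift/action_hooks

# Run a hook by hand to debug it, then fix the copy in your local
# git repository and push to deploy the corrected version
./build
```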
The only other way I can think of would be to run OpenShift Origin (v2) locally and use that to test with.
Lately I have been looking into Docker and the usefulness it can provide to a SaaS company. I have spent some time learning how to containerize apps and learning briefly what Docker and containers are, but I have trouble understanding the usefulness of this technology. I have watched some videos from DockerCon, and it seems like everyone is talking about how Docker makes deployment easy and how what you deploy in your dev environment is guaranteed to run in production. However, I have some questions:
Deploying containers directly to production from the dev environment means that developers should develop inside containers, which are the same containers that will run in production. This is practically not possible, because developers like to develop on their fancy Macs with IDEs. Developers will revolt if they are told to ssh into containers and develop their code inside them. So how does that work at companies that currently use Docker?
If we assume that the development workflow will not change, developers will develop locally and push their code into a repo or something. So where does "containerizing the app" fit within the workflow?
Also, if developers do not develop within containers, then the "what you develop is what you deploy and is guaranteed to work" assumption is violated. If this is the case, then I can only see that the only benefit Docker offers is isolation, which is the same thing virtualization offers, of course with lower overhead. So my question here would be: is the low overhead the only advantage Docker has over virtualization, or are there other things I don't see?
You can write the code outside of a container and transfer it into the container in many different ways. Some examples include:
Code locally and include the source when you docker build by using an ADD or COPY instruction in the Dockerfile
Code locally and push your code to a source code repository like GitHub and then have the build process pull the code into the container as part of docker build
Code locally and mount your local source code directory as a shared volume with the container.
The first two allow you to have exactly the same build process in production and development. The last example would not be suitable for production, but could be quickly converted for production with an ADD statement (i.e., the first example), as sketched below.
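As a minimal sketch of the first option (the base image, file names, and commands here are hypothetical placeholders):

```dockerfile
# Hypothetical Node.js app; any stack follows the same pattern
FROM node:14

WORKDIR /app

# COPY brings the local source tree into the image at build time,
# so the image built in development is the same one run in production
COPY . /app

RUN npm install
CMD ["node", "server.js"]
```

For the third option, the dev-only equivalent is a bind mount at run time, e.g. `docker run -v "$(pwd)":/app my-image`, which shadows the image's /app with your local working copy.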
In the Docker workflow, developers can create both source code (which gets stored and shared in a repository like git, mercurial, etc.) and a ready-to-run container image, which gets stored and shared via a registry like https://registry.hub.docker.com or a local registry.
The containerized running code that you develop and test is exactly what can go to production; that's one advantage. In addition, you get isolation, interesting container-to-container networking, and integration with a growing class of DevOps tools for creating, maintaining, and deploying containers.
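A minimal sketch of that share-via-registry flow (the image name and tag are placeholders):

```bash
# Build the image locally, then share it through a registry
docker build -t myorg/myapp:1.0 .
docker push myorg/myapp:1.0    # push to Docker Hub or a local registry

# Production pulls and runs the exact image that was built and tested
docker pull myorg/myapp:1.0
docker run -d myorg/myapp:1.0
```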
I have a bit of experience running a simple build upon every SVN commit (it is a piece of cake).
Regarding deployment of the WAR to a remote production server via Hudson, there seem to be some alternatives:
use the 'deploy' target in the app's build.xml
use the deploy-plugin of Hudson, which I fail to get working :(
What is the simplest way to do a remote deployment to Tomcat?
Are there any examples available?
And what about release management? How do you tag your releases in your SCM?
I use Maven for builds.
Since I am working with WAS, I use the WAS Builder Plugin for deployment. However, I could also just fire up a batch/shell script for deployment. My current approach is to use slaves that run on the target machines where I want to deploy, and to assign my deployment jobs to them.
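For the Tomcat deployment asked about in the question, the shell-script route can be as simple as uploading the WAR to the Tomcat manager application. A sketch, assuming the manager app is enabled, with placeholder host, credentials, and app name (the manager path also differs between Tomcat versions):

```bash
# Upload the WAR through the Tomcat manager
# (Tomcat 7+ exposes /manager/text/deploy; Tomcat 6 used /manager/deploy)
curl -u deployer:secret \
  --upload-file target/myapp.war \
  "http://tomcat-host:8080/manager/text/deploy?path=/myapp&update=true"
```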
For tagging, you can use whatever you prefer. There are 3 basic options:
Let Maven do it (see the sketch below)
run a command-line command
use a Hudson plugin
We use Subversion, so the Subversion Tagging Plugin would be a natural match, although we don't use Hudson for tagging right now. There are several plugins out there for different SCMs. I usually prefer a plugin over a command line; one reason is that company policy forbids storing passwords unencrypted, and a plugin is usually fairly easy to configure.
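If you let Maven do the tagging, the Maven Release Plugin is the usual route. A minimal sketch, assuming the POM's <scm> section points at your Subversion repository:

```bash
# maven-release-plugin: creates the SCM tag and bumps the POM version
mvn release:prepare   # prompts for versions, commits, and tags in SVN
mvn release:perform   # checks out the tag and runs the release build
```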
Our strategy has been to combine the Promoted Builds Plugin and the Deploy Plugin in Hudson. Then, when something is "promoted," send an email and do the deployment. The Deploy Plugin is Cargo-based and works with a variety of web and app servers.
Maybe post a little info about why the deploy plugin isn't working for you?