GitHub Actions Service Container from Dockerfile in same repo - github-actions

I'm learning GitHub Actions and designing a workflow with a job that requires a Service Container.
The documentation states that the configuration must specify "The Docker image to use as the service container to run the action. The value can be the Docker base image name or a public docker Hub or registry". All of the examples in the docs use publicly available Docker images; however, I want to create a Service Container from a Dockerfile contained within my repo.
Is it possible to use a local Dockerfile to create a Service Container?
Because the job depends on a Service Container, that image must exist when the job begins, and therefore the image cannot be created by an earlier step in the same job. The image could be built in a separate job, but because jobs execute on separate runners I believe that Job 2 will not have access to an image created in Job 1. If this is true, could I follow this approach, using upload/download-artifact to provide Job 1's image to Job 2?
If all else fails, I could have Job 1 create the image and upload it to Docker Hub, then have Job 2 download it from Docker Hub, but surely there is a better way.

The GitHub Actions host machine (the runner) is a fully loaded Linux machine, with Docker and docker-compose (among many other tools) already installed.
You can easily launch multiple containers - either your own images, or public images - by simply running docker and docker-compose commands.
My advice to you is: Describe your service(s) in a docker-compose.yml file, and in one of your GitHub Actions steps, simply do docker-compose up -d.
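For example, a minimal workflow sketch along those lines (the compose file is assumed to live at the repo root, and the test command is a placeholder, not something from the original question):

name: CI
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and start the services defined in docker-compose.yml
        run: docker-compose up -d --build   # on newer runners this may be: docker compose up -d --build
      - name: Run tests against the running services
        run: ./run-tests.sh                 # placeholder for whatever needs the services
      - name: Tear down
        if: always()
        run: docker-compose down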

You can create a Docker image with the Dockerfile or docker-compose.yml residing inside the repo. Refer to this public gist; it might be helpful.
Instead of building multiple Docker images, you can use docker-compose; it is the preferred way to deal with this kind of scenario.
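As a sketch, a docker-compose.yml that builds from the Dockerfile kept in the repo could look like this (the service name and port mapping are illustrative, not taken from the question):

version: "3.8"
services:
  my-service:
    build:
      context: .
      dockerfile: Dockerfile   # the Dockerfile committed in this repo
    ports:
      - "8080:8080"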

Related

Edit an HTML file in Docker Container

I am starting out with programming and am currently working with Docker Containers.
One of the containers is a webserver that takes an input from another container and displays an output on a web page on localhost.
I was wondering if it would be possible to change some comments on the webpage that is part of the container and if so how to go about it?
PS: Pretty new to all this, so please forgive me if I'm asking something really basic
It depends on your strategy. If you need the page to change dynamically when you change the code, you should mount the directory into the container via a docker command or a docker-compose file. If it is static, copy the files in via the Dockerfile.
It is a little unusual to be a beginner to programming and already working with Docker containers, but now you are here.
Find out if the files you want to edit are part of a container ('baked in') or if they get mounted at container runtime.
If they are baked in, you would go to the bakery (docker build ...) and modify files so that you get modified containers.
If they are mounted at runtime (docker run -v ...) find out where they get mounted from and modify the files over there.
Baked in files cannot be changed just like that, so they reflect an immutable installation. The other files can be changed at runtime. There is no right or wrong, choose the pattern depending on what you want to achieve. That is where the strategy comes into play.
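To illustrate the two patterns (using nginx purely as a stand-in; the image and paths below are assumptions, not from the question):

# Mounted at runtime: edits to ./site on the host are visible immediately in the container.
docker run -d -p 8080:80 -v "$(pwd)/site":/usr/share/nginx/html:ro nginx

# Baked in: edit the files, then rebuild and rerun the image
# (the Dockerfile would contain something like: COPY site/ /usr/share/nginx/html/).
docker build -t mysite .
docker run -d -p 8080:80 mysite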

Compute Engine Deploy Container

I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI:
But as far as I can tell it's extremely hard to do programmatically, especially with Container-Optimized OS as the base image. I tried a startup script that does a docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you need to run gcloud auth configure-docker us-central1-docker.pkg.dev first for that to work, and COOS has neither gcloud nor a package manager to install it.
All my workarounds seem hacky:
Manually create a VM template that has the desired container and create instances of the template
Put container in external registry like docker hub (not acceptable)
Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, docker, and the container on startup
Use COOS to pull down an image from dockerhub containing gcloud, then do some sort of docker-in-docker mount to pull it down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container when the instance starts, one has to define metadata describing the container. When COOS starts, it appears to run an application called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata, but I did find two possible solutions:
Decipher the source code of konlet and break it apart to find out how the metadata maps to what is passed when the docker container is started
or
Create a Compute Engine instance by hand with the desired container definition and then start it. SSH into the instance and retrieve the current metadata. We can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
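For example, from inside such a hand-created instance, the container declaration can be read back from the metadata server (the key name is the one from the konlet documentation quoted above):

curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/gce-container-declaration"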
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line along with docker pull [image] and docker run ... into a startup script. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
This seems the least hacky way of provisioning an instance with a container programmatically.
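A startup-script sketch along those lines, reusing the registry and image path from the question (project, repo, and tag are placeholders):

#!/bin/bash
# Configure Docker credentials for Artifact Registry, then pull and run the container.
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
docker pull us-central1-docker.pkg.dev/project/repo/image:tag
docker run -d --restart=always us-central1-docker.pkg.dev/project/repo/image:tag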
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.

Cannot map agent.conf using Cygnus docker installation

I have a problem installing Cygnus using Docker as the source; I simply cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf that has my specific setup into the container, the container starts and runs, but the file fails to copy; moreover, any change I make to the file inside the container won't stay, it returns to the previous default state.
I have no such issues with grouping_rules.conf using the same approach.
I used docker and docker-compose, both with the same results.
The path to which I try to map it is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it using their own config tell me if I misunderstood the location of agent.conf or something? This is weird; I have used many Docker images and never had an issue where I was not able to copy a file from my machine into a Docker container.
Thanks in advance.
** EDIT **
Link of agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker mounts the host file or directory over the corresponding path inside the container at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home-directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of the running container once it is up, using the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official docs how to configure Cygnus to use different sinks like MongoDB, MySQL, etc.
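Putting it together, one possible sequence looks like this (the temporary container name is arbitrary; the host path is the one from the question):

# Copy the default agent.conf out of a throwaway container...
docker run -d --name cygnus-tmp fiware/cygnus-ngsi
docker cp cygnus-tmp:/opt/apache-flume/conf/agent.conf /home/igor/Documents/cygnus/agent.conf
docker rm -f cygnus-tmp

# ...edit it on the host, then start Cygnus with the edited file bind-mounted:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi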
I hope I have been helpful.
Best regards!

How to run base centos image in minishift?

I'm trying to learn about OpenShift: how it works, how to run apps, build images, etc.
To start with something that I thought would be rather simple, I decided to run a pod with a pure CentOS 7 OS, based on this image. I installed minishift v1.11.0+4459917 locally, created a new project, and ran the command
oc new-app openshift/base-centos7 in this project. As a result, I received the following message:
--> Found Docker image bb81a09 (11 months old) from Docker Hub for "openshift/base-centos7"
* An image stream will be created as "pon3:latest" that will track this image
* This image will be deployed in deployment config "pon3"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/pon3 --port=[port]' later
* WARNING: Image "openshift/base-centos7" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "pon3" created
deploymentconfig "pon3" created
--> Success
Run 'oc status' to view your app.
As I can see in the warning, this image runs as root, which is clearly not a good practice, but it can be worked around, as described here and here. I tried both approaches: I created a new service account with the anyuid scc, and I assigned the anyuid scc to the default sa. Unfortunately, I'm still not able to run a pod based on this image. The result looks like this:
oc get pods
mycentos-1-deploy 1/1 Running 0 32s
mycentos-1-p1vh5 0/1 CrashLoopBackOff 1 30s
I tried to troubleshoot this way:
oc logs -p mycentos-1-p1vh5
This image serves as the base image for all OpenShift v3 S2I builder images.
It provides all essential libraries and development tools needed to
successfully build and run an application.
To use this image as a base image, you need to have 's2i/bin' directory in the
same directory as your S2I image Dockerfile. This directory should contain S2I
scripts.
This base image also provides the default user you should use to run your
application. Your Dockerfile should include this instruction after you finish
installing software:
USER default
The default directory for installing your application sources is
'/opt/app-root/src' and the WORKDIR and HOME for the 'default' user is set
to this directory as well. In your S2I scripts, you don't have to use absolute
path, but rather rely on the relative path.
To learn more about S2I visit: https://github.com/openshift/source-to-image
Additionally, I tried to troubleshoot with oc adm diagnostics, but to be honest I didn't see anything relevant to this issue.
I'm clearly missing something here. Can someone give me a hint how this should be handled, or how I can troubleshoot this further? Is there a different way to run a pure CentOS OS?
Thank you for any help.
You need the image you want to deploy using oc new-app to have an actual application in it. The openshift/base-centos7 image is a base image only on which other images are built and doesn't have an application in it.
If you just want to spin up a container and be presented with a shell environment you can play in, use the oc run command instead.
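For example, a throwaway interactive pod could be started roughly like this (oc 3.x-era syntax; the pod name and image tag are illustrative):

# Start a one-off CentOS pod with an interactive shell, then clean it up when done.
oc run mycentos -it --image=centos:7 --restart=Never -- /bin/bash
oc delete pod mycentos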
OpenShift isn't like a traditional VPS where you just spin up permanent shell environments which you then access to set up your application manually. The idea is that you build your application into an image and deploy the application.
I would suggest you go read:
https://www.openshift.com/promotions/for-developers.html
https://www.openshift.com/promotions/devops-with-openshift.html
and work through the exercises at:
https://learn.openshift.com
to learn more about what OpenShift is and how to use it.

docker commit mysql doesn't save

I am trying to create a docker image from a mysql container.
The problem is that the DB of the new image is clean, but the files/folders which I create manually in the origin container before the commit are copied.
The base mysql image is the official 5.6; Docker is 1.11.
I checked that the folder /var/lib/mysql/d1 appears when a db is created, but the new image doesn't persist this folder, though folders in the / root are persisted.
Several things happening here:
First, docker commit is a code smell. It tends to be used by those creating images with a manual process, rather than automating their builds with a Dockerfile that would allow for easy recreation. If at all possible, I recommend you transition to a Dockerfile for your image creation.
Next, a docker commit will not capture changes made to a volume. And this same issue occurs if you try to update a volume with a RUN step in a Dockerfile. Both of these capture changes to the container filesystem and store those changes as a layer in the docker image, and the volumes are not part of the container filesystem. This is also visible if you run docker diff against a container. In this case, the upstream image has defined the volume in their Dockerfile:
VOLUME /var/lib/mysql
And docker does not have a command to undo a created volume from the Dockerfile. You would need to either directly modify the image definition from outside of docker (not recommended) or build your own upstream image with that step removed (recommended).
What the mysql image does provide is the ability to inject your own database creation scripts in /docker-entrypoint-initdb.d, which you can add with your own image that extends mysql, or mount as a volume. This is where you would inject your schema, or initialize from a known backup for development.
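For example, rather than committing a hand-populated container, you could mount your init scripts and keep the data in a named volume (the ./initdb directory and the container name are hypothetical):

# SQL or shell scripts in ./initdb run automatically the first time the data directory is empty.
docker run -d --name mysql-dev \
  -v "$(pwd)/initdb":/docker-entrypoint-initdb.d:ro \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  mysql:5.6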
Lastly, if the goal is to have persistence, you should store your data in a volume, not by committing containers:
docker run -v mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
The volume allows you to recreate the container, upgrade to a newer version of mysql when patches are released (e.g. security fixes) without losing your data.
To back up the volume, this will export it to a tgz:
docker run --rm -v mysql-data:/source busybox tar -cC /source . >backup.tgz
And to restore a volume, this creates one from a tgz:
docker run --rm -i -v mysql-data:/target busybox tar -xC /target <backup.tgz
You can make data persist by using the docker commit command, as shown below.
docker commit CONTAINER_ID REPOSITORY:TAG
docker commit | Docker Documentation
But just as BMitch's answer said, a docker commit will not capture changes made to a volume.
And usually you should use a volume to store data permanently, keeping the container itself ephemeral rather than storing data inside it.
So I guess many people think that trying to persist data without using a volume is bad practice.
But there are some cases where you might consider committing and freezing data into an image.
For example, it's handy to have an image with all the tables and records in it if you use that image for automated tests in CI.
In the case of GitHub Actions, the only thing you need to do is pull the image, create the database container, and run the tests against that database.
There is no need to think about data migration.
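As a rough sketch of that setup in a GitHub Actions job (the image name, port, and test command are placeholders, and a prebuilt image with the test data already baked in is assumed):

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      db:
        image: ghcr.io/my-org/my-test-db:latest   # prebuilt image with tables and records baked in
        ports:
          - 3306:3306
    steps:
      - uses: actions/checkout@v4
      - name: Run tests against the seeded database
        run: ./run-tests.sh   # placeholder; the tests connect to 127.0.0.1:3306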