I am using the following project as a baseline to create a Docker container action.
The problem I have is that I need to be able to access my secrets inside my Dockerfile. I have tried almost every trick I know.
# Retrieve the secret
RUN --mount=type=secret,id=API_ENDPOINT \
    export API_ENDPOINT=$(cat /run/secrets/API_ENDPOINT)
Docker build is not happy because the --mount option requires BuildKit. I tried to set DOCKER_BUILDKIT=1, but I had zero success.
How can I pass the secrets? I created an env var at the top of my action (global), and all the steps have complete visibility of that secret.
env:
  API_ENDPOINT: ${{secrets.API_ENDPOINT}}
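For reference, the kind of manual build step I've been experimenting with looks roughly like this (the step name, image tag, and temp file path are just illustrative):

- name: Build image with a BuildKit secret
  env:
    DOCKER_BUILDKIT: 1
    API_ENDPOINT: ${{ secrets.API_ENDPOINT }}
  run: |
    # The Dockerfile may also need a "# syntax=docker/dockerfile:1" first line
    # for RUN --mount to be recognized.
    printf '%s' "$API_ENDPOINT" > /tmp/api_endpoint
    docker build --secret id=API_ENDPOINT,src=/tmp/api_endpoint -t my-action .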
I have an ECS Fargate cluster that is being deployed to through BitBucket Pipelines. I have my docker image being stored in ECR. Within BitBucket pipelines I am utilizing pipes in order to push my docker image to ECR and a second pipe to deploy to Fargate.
I'm facing a blocker when it comes to Fargate deploying the correct image on each deployment. The way I have the pipeline set up is described below. The docker image gets tagged with the BitBucket Build Number for each deployment. Below is the pipe for the Docker image that gets built and pushed to ECR:
name: Push Docker Image to ECR
script:
  - ECR_PASSWORD=`aws ecr get-login-password --region $AWS_DEFAULT_REGION`
  - AWS_REGISTRY=$ACCT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  - docker login --username AWS --password $ECR_PASSWORD $AWS_REGISTRY
  - docker build -t $DOCKER_IMAGE .
  - pipe: atlassian/aws-ecr-push-image:1.6.2
    variables:
      IMAGE_NAME: $DOCKER_IMAGE
      TAGS: $BITBUCKET_BUILD_NUMBER
The next part of the pipeline deploys the image that was pushed to ECR to Fargate. The pipe associated with the push to Fargate is below:
name: Deploy to Fargate
script:
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: $TASK_DEFINITION
      FORCE_NEW_DEPLOYMENT: 'true'
      DEBUG: 'true'
Within this pipe, the TASK_DEFINITION attribute specifies a file in the repo that ECS runs its tasks from. This file, which is JSON, has a key/value pair for the image ECS is to use. Below is an example of that key/value pair:
"image": "XXXXXXXXXXXX.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$DOCKER_IMAGE:latest",
The problem with this line is that the tag of the image is changing with each deployment.
What I would like to do is have this entire deployment process be automated, but this step is preventing me from doing that. I came across this link that shows how to change the tag in the task definition in the build environment of the pipeline. The article utilizes envsubst. I've seen how envsubst works, but I'm not sure how to use it with a JSON file.
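Based on that article, I assume the pattern for a JSON file would be something like the following (the IMAGE_TAG placeholder and file names are mine), but I haven't verified it:

# task-definition.tpl.json contains, among other keys:
#   "image": "XXXXXXXXXXXX.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${DOCKER_IMAGE}:${IMAGE_TAG}"
export IMAGE_TAG=$BITBUCKET_BUILD_NUMBER
envsubst < task-definition.tpl.json > task-definition.json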
Any recommendations on how I can change the tag in the task definition from latest to the BitBucket Build Number using envsubst would be appreciated.
Thank you.
I've followed this guide to deploy my custom image, but now I'm stuck on how to deploy vNext of my container image.
Skimming this YouTube video, it seems revisions are the way to go, but how do I do that using the Azure CLI?
There's also an interesting concepts page on application lifecycle management, but there are no guides/tutorials on this topic of revisions, only the API guide pages.
You need to update the app with the new image. This will create a new revision behind the scenes.
az containerapp update `
--name <APPLICATION_NAME> `
--resource-group <RESOURCE_GROUP_NAME> `
--image mcr.microsoft.com/azuredocs/containerapps-helloworld
Depending on your activeRevisionsMode property:
a. if single, the new revision should automatically get activated.
b. if multiple, activate the new revision and configure traffic splitting (a rough sketch follows).
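For the multiple mode, a rough sketch with the Azure CLI could look like this (the revision names and traffic weights are placeholders; double-check the exact flags with az containerapp revision activate --help and az containerapp ingress traffic set --help):

az containerapp revision activate `
--resource-group <RESOURCE_GROUP_NAME> `
--name <APPLICATION_NAME> `
--revision <NEW_REVISION_NAME>

az containerapp ingress traffic set `
--resource-group <RESOURCE_GROUP_NAME> `
--name <APPLICATION_NAME> `
--revision-weight <OLD_REVISION_NAME>=80 <NEW_REVISION_NAME>=20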
Create a revision copy pointing to the new image:
$RESOURCE_GROUP="my-resource-group"
$CONTAINER_APP_NAME="my-image"
$NEW_IMAGE="myregistry.azurecr.io/smile:vNext"
az containerapp revision copy --resource-group $RESOURCE_GROUP `
--name $CONTAINER_APP_NAME `
--image $NEW_IMAGE
I'm learning Github Actions and designing a workflow with a job that requires a Service Container.
The documentation states that the configuration must specify "The Docker image to use as the service container to run the action. The value can be the Docker base image name or a public docker Hub or registry". All of the examples in the docs use publicly available Docker images; however, I want to create a Service Container from a Dockerfile contained within my repo.
Is it possible to use a local Dockerfile to create a Service Container?
Because the job depends on a Service Container, that image must exist when the job begins, and therefore the image cannot be created by an earlier step in the same job. The image could be built in a separate job, but because jobs execute in separate runners I believe that Job 2 will not have access to the image created in Job 1. If this is true, could I follow this approach, using upload/download-artifact to provide Job 1's image to Job 2?
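Something like the following is roughly what I have in mind for handing the image between jobs (untested sketch; the image and artifact names are mine), though I'm not sure the loaded image could then be referenced as a service container:

jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t my-service:ci .
      - run: docker save my-service:ci -o my-service.tar
      - uses: actions/upload-artifact@v3
        with:
          name: my-service-image
          path: my-service.tar
  use-image:
    needs: build-image
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: my-service-image
      - run: docker load -i my-service.tar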
If all else fails, I could have Job 1 create the image and upload it to Docker Hub, then have Job 2 download it from Docker Hub, but surely there is a better way.
The GitHub Actions host machine (runner) is a fully loaded Linux machine, with common tooling, including Docker and Docker Compose, already installed.
You can easily launch multiple containers - either your own images, or public images - by simply running docker and docker-compose commands.
My advice to you is: Describe your service(s) in a docker-compose.yml file, and in one of your GitHub Actions steps, simply do docker-compose up -d.
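A minimal sketch of that approach, assuming the service is built from a Dockerfile in the repo (the service name, path, and port are illustrative):

version: "3.8"
services:
  my-service:
    build: ./my-service    # builds from ./my-service/Dockerfile in the repo
    ports:
      - "8080:8080"

And then in the workflow:

- name: Start service containers
  run: docker-compose up -d --build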
You can create a docker image with the Dockerfile or docker-compose.yml residing inside the repo. Refer to this public gist; it might be helpful.
Instead of building multiple Docker images, you can use Docker Compose. Docker Compose is the preferred way to deal with this kind of scenario.
I am trying to create a Docker image from a MySQL container. The problem is that the database in the new image is empty, but the files and folders that I created manually in the original container before the commit are copied. The base MySQL image is the official 5.6 image, and Docker is 1.11. I checked that the folder /var/lib/mysql/d1 appears when a database is created, but the new image doesn't persist this folder, even though folders in the / root are persisted.
Several things happening here:
First, docker commit is a code smell. It tends to be used by those creating images with a manual process, rather than automating their builds with a Dockerfile that would allow for easy recreation. If at all possible, I recommend you transition to a Dockerfile for your image creation.
Next, a docker commit will not capture changes made to a volume. And this same issue occurs if you try to update a volume with a RUN step in a Dockerfile. Both of these capture changes to the container filesystem and store those changes as a layer in the docker image, and the volumes are not part of the container filesystem. This is also visible if you run docker diff against a container. In this case, the upstream image has defined the volume in their Dockerfile:
VOLUME /var/lib/mysql
And docker does not have a command to undo a created volume from the Dockerfile. You would need to either directly modify the image definition from outside of docker (not recommended) or build your own upstream image with that step removed (recommended).
What the mysql image does provide is the ability to inject your own database creation scripts in /docker-entrypoint-initdb.d, which you can add with your own image that extends mysql, or mount as a volume. This is where you would inject your schema, or initialize from a known backup for development.
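As a rough sketch of the extended-image approach (the schema file name is illustrative):

FROM mysql:5.6
# Any *.sql or *.sh files in this directory are executed on the container's
# first start, when the data directory is empty.
COPY schema.sql /docker-entrypoint-initdb.d/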
Lastly, if the goal is to have persistence, you should store your data in a volume, not by committing containers:
docker run -v mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
The volume allows you to recreate the container and upgrade to a newer version of mysql when patches are released (e.g. security fixes) without losing your data.
To back up the volume, this will export it to a tgz:
docker run --rm -v mysql-data:/source busybox tar -cC /source . >backup.tgz
And to restore a volume, this creates one from a tgz:
docker run --rm -i -v mysql-data:/target busybox tar -xC /target <backup.tgz
You can make data persist by using the docker commit command, like below.
docker commit CONTAINER_ID REPOSITORY:TAG
docker commit | Docker Documentation
But just as BMitch's answer said, a docker commit will not capture changes made to a volume.
And usually you should use a volume to store data permanently and keep the container ephemeral, without storing data inside it.
So I guess many people think that trying to persist data without using a volume is a bad practice.
But there are some cases where you might consider committing and freezing data into an image.
For example, it's handy to have an image with all the tables and records already in it when you use the image for automated tests in CI.
In the case of GitHub Actions, the only thing you need to do is pull the image, create the database container, and run tests against the database.
There's no need to think about data migration.
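For example, a job using such a pre-baked image as a service container might look roughly like this (the image name and test command are illustrative):

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      db:
        image: ghcr.io/my-org/mysql-fixtures:latest   # image committed with schema and records
        ports:
          - 3306:3306
    steps:
      - uses: actions/checkout@v3
      - run: ./run-tests.sh   # tests connect to the database on 127.0.0.1:3306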
I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b98afde2fab7 mysql:5.7.7 "/entrypoint.sh mysq 6 seconds ago Up 5 seconds 3306/tcp mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers. Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume, e.g.:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You say, "Alternatively, is it possible to pass sensitive parameters by reference to a file?" From the doc at http://docs.docker.com/reference/commandline/run/: --env-file=[] Read in a file of environment variables.
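A quick sketch of that flag in use (the file name is illustrative); note that, as mentioned above, the values will still show up in docker inspect:

# env.list contains lines like:
#   MYSQL_ROOT_PASSWORD=12345
sudo docker run --name mysql-5.7.7 --env-file ./env.list -d mysql:5.7.7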