Update Task Definition for ECS Fargate - json

I have an ECS Fargate cluster that is deployed to through Bitbucket Pipelines, with my Docker image stored in ECR. Within Bitbucket Pipelines I am using one pipe to push the Docker image to ECR and a second pipe to deploy to Fargate.
I'm facing a blocker when it comes to Fargate deploying the correct image on each deployment. The pipeline is set up as follows: the Docker image gets tagged with the Bitbucket build number for each deployment. Below is the pipe that builds the Docker image and pushes it to ECR:
name: Push Docker Image to ECR
script:
  - ECR_PASSWORD=$(aws ecr get-login-password --region $AWS_DEFAULT_REGION)
  - AWS_REGISTRY=$ACCT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  - docker login --username AWS --password $ECR_PASSWORD $AWS_REGISTRY
  - docker build -t $DOCKER_IMAGE .
  - pipe: atlassian/aws-ecr-push-image:1.6.2
    variables:
      IMAGE_NAME: $DOCKER_IMAGE
      TAGS: $BITBUCKET_BUILD_NUMBER
The next part of the pipeline deploys the image that was pushed to ECR to Fargate. The pipe associated with that deployment is below:
name: Deploy to Fargate
script:
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: $TASK_DEFINITION
      FORCE_NEW_DEPLOYMENT: 'true'
      DEBUG: 'true'
Within this pipe, the TASK_DEFINITION attribute specifies a file in the repo that ECS runs its tasks off. This JSON file has a key-value pair for the image ECS is to use. Below is an example of that pair:
"image": "XXXXXXXXXXXX.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$DOCKER_IMAGE:latest",
The problem with this line is that the tag of the image is changing with each deployment.
What I would like to do is have this entire deployment process be automated, but this step is preventing me from doing that. I came across this link that shows how to change the tag in the task definition in the build environment of the pipeline. The article uses envsubst. I've seen how envsubst works, but I'm not sure how to use it on a JSON file.
Any recommendations on how I can change the tag in the task definition from latest to the Bitbucket build number using envsubst would be appreciated.
Thank you.
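A minimal sketch of what the envsubst approach could look like, assuming the repo keeps the task definition as a template; the file name and the IMAGE_TAG placeholder are chosen here for illustration:

# task-definition.tpl.json holds a placeholder instead of a hard-coded tag:
#   "image": "XXXXXXXXXXXX.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${DOCKER_IMAGE}:${IMAGE_TAG}",
# In the deploy step, render the real file before the pipe runs:
export IMAGE_TAG=$BITBUCKET_BUILD_NUMBER
envsubst < task-definition.tpl.json > task-definition.json
# ...then point TASK_DEFINITION at the rendered task-definition.json

envsubst treats its input as plain text and only rewrites ${VAR} occurrences, so a JSON file needs no special handling.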

Related

Unable to access secrets from Dockerfile in GitHub Actions

I am using the following project as a baseline to create a Docker container action.
The problem that I have is that I need to be able to access my secrets inside my Dockerfile. I tried almost all the tricks that I knew.
Retrieve the secret
RUN --mount=type=secret,id=API_ENDPOINT \
    export API_ENDPOINT=$(cat /run/secrets/API_ENDPOINT)
Docker build is not happy because the --mount option requires BuildKit. I tried to set DOCKER_BUILDKIT=1, but I had zero success.
How can I pass the secrets? I created an env var at the top of my action (global), and all the steps have complete visibility of that secret.
env:
  API_ENDPOINT: ${{secrets.API_ENDPOINT}}
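One way to make the --mount=type=secret approach work is to build with Buildx rather than the legacy builder; a sketch, assuming the image is built explicitly in a workflow step with docker/setup-buildx-action and docker/build-push-action rather than implicitly by the runner:

# Build with BuildKit so RUN --mount=type=secret is honored; the secret is
# visible only to the RUN step that mounts it and is not baked into layers.
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    context: .
    secrets: |
      API_ENDPOINT=${{ secrets.API_ENDPOINT }}

Note that for a Docker container action, GitHub builds the Dockerfile itself when the action is invoked, so a common workaround there is to pass the value at runtime as an input or env rather than at build time.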

Compute Engine Deploy Container

I am using golang to programmatically create and destroy one-off Compute Engine instances using the Compute Engine API.
I can create an instance just fine, but what I'm really having trouble with is launching a container on startup.
You can do it from the Console UI.
But as far as I can tell it's extremely hard to do programmatically, especially with Container Optimized OS (COOS) as the base image. I tried a startup script that does a docker pull us-central1-docker.pkg.dev/project/repo/image:tag, but it fails because you need to run gcloud auth configure-docker us-central1-docker.pkg.dev first for that to work, and COOS has neither gcloud nor a package manager to install it.
All my workarounds seem hacky:
- Manually create a VM template that has the desired container and create instances of the template
- Put the container in an external registry like Docker Hub (not acceptable)
- Use Ubuntu instead of COOS with a package manager so I can programmatically install gcloud, docker, and the container on startup
- Use COOS to pull down an image from Docker Hub containing gcloud, then do some sort of docker-in-docker mount to pull it down
Am I missing something or is it just really cumbersome to deploy a container to a compute engine instance without using gcloud or the Console UI?
To have a Compute Engine instance start a container at boot, one has to define metadata describing the container. When COOS starts, it appears to run an application called konlet, which can be found here:
https://github.com/GoogleCloudPlatform/konlet
If we look at the documentation for this, it says:
The agent parses container declaration that is stored in VM instance metadata under gce-container-declaration key and starts the container with the declared configuration options.
Unfortunately, I haven't found any formal documentation for the structure of this metadata, but I did find two possible solutions:
Decipher the source code of konlet and break it apart to find out how the metadata maps to what is passed when the docker container is started
or
Create a Compute Engine instance by hand with the desired container definitions and then start it. SSH into the instance and retrieve the current metadata. We can read about retrieving metadata here:
https://cloud.google.com/compute/docs/metadata/overview
It turns out, it's not too hard to pull down a container from Artifact Registry in Container Optimized OS:
Run docker-credential-gcr configure-docker --registries [region]-docker.pkg.dev
See: https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_images_in_or
So what you can do is put the above line along with docker pull [image] and docker run ... into a startup script. You can specify a startup script when creating an instance using the metadata field: https://cloud.google.com/compute/docs/instances/startup-scripts/linux#api
This seems the least hacky way of provisioning an instance with a container programmatically.
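A minimal sketch of such a startup script, reusing the registry path from the question as a placeholder:

#! /bin/bash
# docker-credential-gcr ships with COOS; wire it up as the Docker credential helper
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
# Pull and start the application container
docker pull us-central1-docker.pkg.dev/project/repo/image:tag
docker run --rm us-central1-docker.pkg.dev/project/repo/image:tag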
You mentioned you used docker-credential-gcr to solve your problem. I tried the same in my startup script:
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
But it returns:
ERROR: Unable to save docker config: mkdir /root/.docker: read-only file system
Is there some other step needed? Thanks.
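One possible workaround, offered as an untested sketch on the assumption that the failure is only the read-only /root: point HOME (or DOCKER_CONFIG) at a writable location before running the helper:

# /root is read-only on COOS, but stateful paths such as /home and /var are writable,
# so have the credential helper write its config under a writable HOME instead
export HOME=/home/chronos
docker-credential-gcr configure-docker --registries us-east1-docker.pkg.dev
docker pull us-east1-docker.pkg.dev/project/repo/image:tag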
I recently ran into the other side of these limitations (and asked a question on the topic).
Basically, I wanted to provision a COOS instance without launching a container. I was unable to, so I just launched a container from a base image and then later in my CI/CD pipeline, Dockerized my app, uploaded it to Artifact Registry and replaced the base image on the COOS instance with my newly built app.
The metadata I provided to launch the initial base image as a container:
spec:
  containers:
    - image: blairnangle/python3-numpy-ta-lib:latest
      name: containervm
      securityContext:
        privileged: false
      stdin: false
      tty: false
      volumeMounts: []
  restartPolicy: Always
  volumes: []
I'm a Terraform fanboi, so the metadata exists within some Terraform configuration. I have a public project with the code that achieves this if you want to take a proper look: blairnangle/dockerized-flask-on-gce.
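For illustration, a sketch of how that metadata can be attached from Terraform; the resource name is a placeholder and only the metadata block is the relevant part:

resource "google_compute_instance" "app" {
  # ...name, machine_type, boot_disk (a COOS image), network_interface...
  metadata = {
    # konlet reads the container declaration from this metadata key
    "gce-container-declaration" = <<-EOT
      spec:
        containers:
          - image: blairnangle/python3-numpy-ta-lib:latest
            name: containervm
        restartPolicy: Always
    EOT
  }
}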

Docker-compose.yml for NodeJs with MySQL on AWS Elastic Beanstalk single container Docker

I have a Node.js app that is hosted on AWS EB single-container Docker. I connect to a MySQL database from the app.
For now I am deploying my app from the AWS console by uploading a zip file. Everything is working as expected.
I would like to be able to push changes to AWS using the CLI.
It's my understanding that I need a docker-compose.yml file to accomplish that. I have seen samples of docker-compose files that create two containers: one for Node, another for MySQL.
Is there a way to use docker-compose.yml and still deploy to a single-container Docker?
Thanks in advance for any guidance.
I don't think you can deploy a docker-compose file to Elastic Beanstalk. But I can think of two ways of deploying your code from the command line:
One is to put your existing zip file in an S3 bucket (which can be scripted) and then use the Elastic Beanstalk command line, something like this:
aws elasticbeanstalk create-application-version --application-name avengers \
    --version-label v1 \
    --source-bundle S3Bucket="avengers-docker-eb",S3Key="deployment.zip" \
    --auto-create-application \
    --region eu-west-3
The full instructions are here: https://read.acloud.guru/docker-on-elastic-beanstalk-tips-e1a4e6b70ff2
The second way, and the one you might prefer, is to create a Dockerrun.aws.json file that points to your Docker image either in an S3 bucket or in a Docker registry (you can use the AWS one). From there you can update your application from the CLI like so:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
The pertinent documentation is here: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker.html
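A minimal Dockerrun.aws.json sketch for the single-container platform; the account ID, region, image name, and port below are placeholders:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ]
}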

Azure Container Registry (ACR): How to add a tag to an image?

I see you can untag an image in an Azure Container Registry
https://learn.microsoft.com/en-us/cli/azure/acr/repository?view=azure-cli-latest#az-acr-repository-show-manifests
But how do you add a tag?
You can use the import command to import the image into the same registry under a new tag:
az acr import --name myacr --source myacr.azurecr.io/myimage:latest --image myimage:retagged --force
As far as I know, there is no Azure CLI command to create a tag for an image directly. If you want to add a tag to an image, you can use the docker tag command and then push the image to Azure Container Registry.
When you create the image through a build task, that also adds a tag. Take a look at this.
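For example, a quick ACR build (a sketch; the registry and tag names are placeholders) tags the image as part of the build:

az acr build --registry myacr --image myimage:v2 .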
I had to overwrite an existing tag with the latest build inside a release pipeline, so I could not do a build step. This is my solution; hope this helps someone:
This is my release pipeline:
Step 1
task 1: Docker CLI installer
task 2: Docker task - with login command (log into the ACR)
task 3: PowerShell script, which runs these commands (in my case):
$sourceImage = "acrloginserver/repository:old-tag"
$newtag = "acrloginserver/repository:latest-tag"
docker pull $sourceImage
docker tag $sourceImage $newtag
docker push $newtag

How to create an image stream of JBoss Web Server in OpenShift Origin

How can I create and use the image stream of JBoss Web Server in OpenShift Origin?
The image YAML is available in this link. I see that it is automatically built in the OpenShift Enterprise version (link), but why not in Origin?
Thanks.
I expected it to pull the image itself during the build, but that did not happen.
D:\docker\apps>oc new-build --image-stream=jboss-webserver31-tomcat7-openshift:1.1 --name=newapp --binary=true
warning: Cannot find git. Ensure that it is installed and in your path. Git is required to work with git repositories.
error: unable to locate any images in image streams with name "jboss-webserver31-tomcat7-openshift:1.1"
The 'oc new-build' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to force the use of an image that was not matched
See 'oc new-build -h' for examples.
So I tried to create the import YAML in the web console but got the error below:
Failed to process the resource.
Resource is missing kind field.
Got it. Apparently one has to be logged in to the Red Hat registry:
oc import-image my-jboss-webserver-3/webserver31-tomcat7-openshift --from=registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift --confirm
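With the image stream imported into the current project, the original build command should then resolve it; a sketch, assuming the imported stream ends up named webserver31-tomcat7-openshift:

oc new-build --image-stream=webserver31-tomcat7-openshift:latest --name=newapp --binary=true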