Deploy new version of Azure Container App - azure-cli

I've followed this guide to deploy my custom image, but now I'm stuck: how do I deploy vNext of my container image?
Skimming this YouTube video, it seems revisions are the way to go, but how do I do that using the Azure CLI?
There's also an interesting concepts page on application lifecycle management, but no guides/tutorials on revisions, only the API reference pages.

You need to update the app with the new image. This will create a new revision behind the scenes.
az containerapp update `
--name <APPLICATION_NAME> `
--resource-group <RESOURCE_GROUP_NAME> `
--image mcr.microsoft.com/azuredocs/containerapps-helloworld
Depending on your activeRevisionsMode property:
a. if single, the new revision should automatically get activated.
b. if multiple, you need to activate it and configure traffic splitting.
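For multiple mode, a sketch of splitting traffic between the old and new revisions; the revision names and weights below are placeholders you'd substitute:
az containerapp ingress traffic set `
--name <APPLICATION_NAME> `
--resource-group <RESOURCE_GROUP_NAME> `
--revision-weight <OLD_REVISION_NAME>=80 <NEW_REVISION_NAME>=20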

Create a revision copy pointing to the new image:
$RESOURCE_GROUP="my-resource-group"
$CONTAINER_APP_NAME="my-image"
$NEW_IMAGE="myregistry.azurecr.io/smile:vNext"
az containerapp revision copy --resource-group $RESOURCE_GROUP `
--name $CONTAINER_APP_NAME `
--image $NEW_IMAGE
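To verify that the copy was created and see each revision's status and traffic share, something like this should work:
az containerapp revision list --resource-group $RESOURCE_GROUP `
--name $CONTAINER_APP_NAME `
--output table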

Related

Unable to access secrets from Dockerfile in GitHub Actions

I am using the following project as a baseline to create a Docker container action.
The problem that I have is that I need to be able to access my secrets inside my Dockerfile. I tried almost all the tricks that I knew.
Retrieve the secret
RUN --mount=type=secret,id=API_ENDPOINT \
export API_ENDPOINT=$(cat /run/secrets/API_ENDPOINT)
Docker build is not happy because the --mount option requires BuildKit. I tried to set DOCKER_BUILDKIT=1, but I had zero success.
How can I pass the secrets? I created an env var at the top of my action (global), and all the steps have complete visibility of that secret.
env:
API_ENDPOINT: ${{secrets.API_ENDPOINT}}
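One workaround sketch (an assumption, not how container actions build themselves): build the image yourself in a run step so BuildKit is actually in play, feeding the secret from the global env var. The temp file path and image tag are placeholders:
# write the secret to a temp file and hand it to BuildKit's --secret flag
echo "$API_ENDPOINT" > /tmp/api_endpoint
DOCKER_BUILDKIT=1 docker build \
--secret id=API_ENDPOINT,src=/tmp/api_endpoint \
-t my-action-image .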

GitHub Action Service Container from Dockerfile in same repo

I'm learning GitHub Actions and designing a workflow with a job that requires a Service Container.
The documentation states that the configuration must specify "the Docker image to use as the service container to run the action. The value can be the Docker base image name or a public Docker Hub or registry". All of the examples in the docs use publicly available Docker images; however, I want to create a Service Container from a Dockerfile contained within my repo.
Is it possible to use a local Dockerfile to create a Service Container?
Because the job depends on a Service Container, that image must exist when the job begins, and therefore the image cannot be created by an earlier step in the same job. The image could be built in a separate job, but because jobs execute in separate runners I believe that Job 2 will not have access to the image created in Job 1. If this is true, could I follow this approach, using upload/download-artifact to provide Job 1's image to Job 2?
If all else fails, I could have Job 1 create the image and upload it to Docker Hub, then have Job 2 download it from Docker Hub, but surely there is a better way.
The GitHub Actions host machine (runner) is a fully loaded Linux machine, with everything everybody needs already installed.
You can easily launch multiple containers - either your own images, or public images - by simply running docker and docker-compose commands.
My advice to you is: Describe your service(s) in a docker-compose.yml file, and in one of your GitHub Actions steps, simply do docker-compose up -d.
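For example, a step could run something like this (a docker-compose.yml at the repo root describing your services is an assumption):
# build the services defined from local Dockerfiles and start them in the background
docker-compose up -d --build
# confirm the containers are up before later steps depend on them
docker-compose ps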
You can create a Docker image with the Dockerfile or docker-compose.yml residing inside the repo. Refer to this public gist; it might be helpful.
Instead of building multiple Docker images, you can use docker-compose. docker-compose is the preferred way to deal with this kind of scenario.

Apply changes dynamically when OpenShift template is modified (and applied)

I defined a template (let's call it template.yaml) with a service, deploymentconfig, buildconfig and imagestream, applied it with oc apply -f template.yaml and ran oc new-app app-name to create new app from the template. What the app basically does is to build a Node.js application with S2I, write it to a new ImageStream and deploy it to a pod with the necessary service exposed.
Now I've decided to make some changes to the template and have applied it on OpenShift. How do I go about ensuring that all resources in the said template also get reconfigured without having to delete all resources associated with that template and recreating it again?
I think the template is only used to create the related resources the first time. Even if you modify the template, it's not associated with the resources it created, so you would otherwise have to recreate or modify each affected resource yourself.
But you can simply update all resources created by the template using the following command.
# oc process -f template_modified.yaml | oc replace -f -
I hope it helps you.
The correct command turned out to be:
$ oc apply -f template_modified.yaml
$ oc process -f template_modified.yaml | oc replace -f -
That worked for me on OpenShift 3.9.

How can we fetch code from GitLab when creating an image using Packer?

I am creating an image using Packer. I have used 2 provisioners, i.e. shell and ansible-local, and both are working fine and have installed all the required packages.
But now I need to deploy my application code, which lives on GitLab, into my image too.
I am out of ideas on how to do this. Can you please help me fetch the code from GitLab while creating the image using Packer?
Any assistance will be appreciated.
Thanks.
You should use SSH agent forwarding.
On the host running Packer, load an SSH key that has access to the Git repository: ssh-add <path to private key>.
Ensure that you have "ssh_disable_agent_forwarding": false (the default) in your Packer template. See Docs: Communicator.
Now in your Packer provisioning script you should be able to clone the repository over SSH with git clone git@<GitLab server>:<repo.git>.
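A minimal sketch of what the provisioning script might run, assuming agent forwarding is active; the server name, repository path and destination directory are placeholders:
# trust the GitLab host key so the non-interactive clone doesn't prompt
mkdir -p ~/.ssh && ssh-keyscan <GitLab server> >> ~/.ssh/known_hosts
# clone over SSH via the forwarded agent
git clone git@<GitLab server>:<repo.git> /opt/app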

Using service accounts to automate deployments is failing

We are trying to automate the build and deployment of containers to projects created in OpenShift v3.3. From the documentation I can see that we will need to leverage service accounts to do this, but the documentation is hard to follow and the examples I have found in blogs don't complete the task. My workflow is as follows, with example oc commands I use:
BUILDER_TOKEN='xxx'
DEPLOYER_TOKEN='xxx'
# build and push the image works as expected
docker build -t registry.xyz.com/want/want:latest .
docker login --username=<someuser> --password=${BUILDER_TOKEN} registry.xyz.com
docker push registry.xyz.com/<repo>/<image>:<tag>
# This fails with error
oc login https://api.xyz.com --token=${DEPLOYER_TOKEN}
oc project <someproject>
oc new-app registry.xyz.com/<repo>/<image>:<tag>
Notice I log in to the REST API interface, select the project and create the app, but this fails with the following errors:
error: User "system:serviceaccount:want:deployer" cannot create deploymentconfigs in project "default"
error: User "system:serviceaccount:want:deployer" cannot create services in project "default"
Any ideas?
Service accounts only have permission in their owning project by default. You would need to grant deployer access to deploy in other projects.
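For example, something along these lines should grant that access (using the edit role is an assumption; pick the narrowest role that works for you):
oc policy add-role-to-user edit system:serviceaccount:want:deployer -n <someproject>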
OK, so it seems that using a service account to accomplish this is not the best way to go about things, and the documentation does not help. The use case above is very common, and the correct approach is to simply invoke new-app with the image name and corresponding tag:
oc new-app ${APP}:${TAG}
There is no need to mess around with service accounts.