Why doesn't my newest Cloud Run revision receive traffic? - gunicorn

I am trying to deploy a Python (Gunicorn) application on Cloud Run (fully managed).
Most of the time, the newest deployed revision receives 100% of traffic immediately after I run the deploy command.
However, from time to time, the deploy command says the newest revision receives 0% of traffic (and when I try to reach the application, I am indeed redirected to an older version).
gcloud beta run deploy my-app \
--platform=managed \
--allow-unauthenticated \
--project my-project \
--region europe-west1 \
--port=80 \
--memory=1Gi \
--service-account my-app@my-project.iam.gserviceaccount.com \
--min-instances=1 \
--image=europe-west1-docker.pkg.dev/my-project/my-registry/my-app:latest
Deploying container to Cloud Run service [my-app] in project [my-project] region [europe-west1]
Service [my-app] revision [my-app-00044-zay] has been deployed and is serving 0 percent of traffic.
Why doesn't my revision receive traffic?

Assuming you have previously deployed with --no-traffic, this is working as intended: that option is persistent across subsequent deployments.
Try running gcloud run services update-traffic my-app --to-latest
Reference doc: https://cloud.google.com/sdk/gcloud/reference/run/deploy#--no-traffic
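A minimal sketch of sending traffic back to the latest revision, reusing the service name, project and region from the question (the exact flag combination is illustrative, not taken from the original answer):

# Route 100% of traffic to the most recently deployed revision
gcloud run services update-traffic my-app \
  --to-latest \
  --platform=managed \
  --region=europe-west1 \
  --project=my-project

After that, subsequent deploys should shift traffic to the newest revision again unless --no-traffic is passed.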

Related

Using CodePipeline to Deploy ElasticBeanstalk application in another AWS account

We have a setup with different AWS accounts for each environment (dev, test, prod) and a shared build account which has an AWS CodePipeline that deploys into each of these environments by assuming a role in dev, test, and prod.
This works fine for our Serverless applications using a CodeBuild script.
Can we do something similar for the Elastic Beanstalk application that uses the deploy action provider? Or what is the best approach for Elastic Beanstalk?
We do this by using a CodeBuild job specified in each of the stage accounts (dev, test, prod) that uses the AWS CLI to deploy the CodePipeline artifact (available as CODEBUILD_SOURCE_VERSION in your build job's environment variables) to Elastic Beanstalk. We run this job as part of a CodePipeline in our shared build account.
These are the AWS CLI commands the CodeBuild deploy job runs:
aws elasticbeanstalk create-application-version --application-name ... --version-label ... --source-bundle S3Bucket="codepipeline-artifacts-us-east-1-123456789012",S3Key="application/deployable/XXXXXXX"
aws elasticbeanstalk update-environment --environment-name ... --version-label ...
You can specify a CodeBuild job from another account in CodePipeline using the strategy outlined here: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html. It involves setting up cross-account access to the role_arn used for the CodeBuild deploy job and a customer managed KMS key for the pipeline (with a cross-account access policy).
One deficiency with this approach is that the CodeBuild deploy job will complete as soon as the deployment starts and will not wait until the Elastic Beanstalk deployment succeeds or fails, as the native CodePipeline EB deploy action does. You should be able to call aws elasticbeanstalk describe-environments in a loop from the job to replicate this behavior (a sketch of such a loop is shown below), but I have not yet attempted this. (Sample script here: https://blog.cyplo.net/posts/2018/04/wait-for-beanstalk/)
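A rough, untested sketch of such a polling loop; the variable $environment_name, the sleep interval, and the final health check are illustrative assumptions, not from the original answer:

# Wait for the Elastic Beanstalk deployment to finish, then fail the job if the environment is unhealthy
while true; do
  STATUS=$(aws elasticbeanstalk describe-environments \
    --environment-names "$environment_name" \
    --query "Environments[0].Status" --output text)
  echo "Environment status: $STATUS"
  [ "$STATUS" != "Updating" ] && break
  sleep 15
done

HEALTH=$(aws elasticbeanstalk describe-environments \
  --environment-names "$environment_name" \
  --query "Environments[0].Health" --output text)
echo "Environment health: $HEALTH"
[ "$HEALTH" = "Green" ]   # a non-Green health fails the build step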
I found a solution for cross-account deployment of an application to Elastic Beanstalk in another AWS account using the AWS CDK.
Since the AWS CDK does not yet have a built-in deploy-to-Elastic-Beanstalk action, we have to implement it ourselves by implementing the IAction interface.
You can find a complete working CDK app in my git repo:
https://github.com/dhirajkhodade/CDKDotNetWebAppEbPipeline
We ended up solving it this way using CodeBuild:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install awsebcli --upgrade
  pre_build:
    commands:
      - CRED=`aws sts assume-role --role-arn $assume_role --role-session-name codebuild-deployment-$environment`
      - export AWS_ACCESS_KEY_ID=`node -pe 'JSON.parse(process.argv[1]).Credentials.AccessKeyId' "$CRED"`
      - export AWS_SECRET_ACCESS_KEY=`node -pe 'JSON.parse(process.argv[1]).Credentials.SecretAccessKey' "$CRED"`
      - export AWS_SESSION_TOKEN=`node -pe 'JSON.parse(process.argv[1]).Credentials.SessionToken' "$CRED"`
      - export AWS_EXPIRATION=`node -pe 'JSON.parse(process.argv[1]).Credentials.Expiration' "$CRED"`
      - echo $(aws sts get-caller-identity)
  build:
    commands:
      - eb --version
      - eb init <project-name> --platform "Node.js running on 64bit Amazon Linux" --region $AWS_DEFAULT_REGION
      - eb deploy
We use the AWS CLI to assume the role we need and then the EB CLI to do the actual deployment. Not sure if this is the best way, but it works. We are considering moving to another CI/CD tool which is more flexible.

Docker image fails to build on Live but fine on Dev

I have a strange issue with Docker.
This is the Dockerfile in question.
FROM python:2.7
RUN apt-get update && apt-get install -y \
build-essential \
python-lxml \
python-dev \
python-pip \
python-cffi \
libcairo2 \
libpango1.0-0 \
libpangocairo-1.0.0 \
libxml2-dev \
libxslt1-dev \
zlib1g-dev \
libpq-dev \
libjpeg-dev \
libgdk-pixbuf2.0-0 \
libffi-dev \
mysql-client \
shared-mime-info
# ... further Dockerfile rules, which don't get run because apt-get fails
The problem I'm having is that on my development machine, this Dockerfile builds perfectly fine, but on our live servers it's suddenly failing (it worked in the past), with E: Package 'mysql-client' has no installation candidate.
I thought the point of Docker is that everything runs using the same image and that you shouldn't run into issues like this.
Why is this the case and what can I do to fix it from here so that it runs the same on both dev and live?
You are using the python image with tag 2.7; however, this tag is a "shared" tag (as per the Python readme on Docker Hub) which changes over time: right now python:2.7 is shared with python:2.7.16 and python:2, but previously it was probably shared with python:2.7.15, python:2.7.14, etc. (in other words, python:2.7 follows python:2.7.x as it is upgraded).
Your machine and the live server probably pulled the image at different times and now have different images tagged 2.7. The "shared" tags behave like latest tags and may point to newer images as they are released.
What you can do:
Enforce an image pull when building, even if an image is already present, using docker build with the --pull option (as shown in the example after this list)
Use a documented Simple tag instead; these should be more consistent (such as python:2.7.16-alpine3.9)
Do not re-build images during your release process; build once and use the same image in your local and live environments (see below)
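A minimal sketch of the first two options; the image name my-app and the pinned tag are illustrative:

# Always re-pull the base image referenced in FROM, even if a copy is cached locally
docker build --pull -t my-app .

# Or pin the base image in the Dockerfile to a fully-qualified tag:
# FROM python:2.7.16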
EDIT: this can be demonstrated with:
docker images --filter "reference=python" --digests --format "{{.Digest}} {{.Repository}}:{{.Tag}}"
sha256:7a61a96567a2b2ba5db636c83ffa18db584da4024fa5839665e330934cb6b2b2 python:2
sha256:7a61a96567a2b2ba5db636c83ffa18db584da4024fa5839665e330934cb6b2b2 python:2.7
sha256:7a61a96567a2b2ba5db636c83ffa18db584da4024fa5839665e330934cb6b2b2 python:2.7.16
sha256:39224960015b9c0fce12e08037692e8a4be2e940e73a36ed0c86332ce5ce325b python:2.7.15
To expand on:
I thought the point of Docker is that everything runs using the same image and that you shouldn't run into issues like this.
Why is this the case and what can I do to fix it from here so that it runs the same on both dev and live?
Yes, and the recommended pattern is to build the image once and use that same image throughout your release process - this ensures you have the exact same context (packages, code, etc.) from development to production. You should not re-build your image from scratch on your live server; ideally, build it once during your development phase and use that same image for testing and deploying.
python:2.7 is now based on Debian Buster, and there is no mysql-client apt package in Buster; see https://packages.debian.org/search?keywords=mysql-client
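Putting the two answers together, a minimal sketch of a Dockerfile fix, assuming you pin the base tag and use Buster's default-mysql-client as the replacement package (that package choice is an assumption; verify it provides the client you actually need):

# Pin the base image so dev and live resolve the same underlying image
FROM python:2.7.16
RUN apt-get update && apt-get install -y \
    build-essential \
    # ...the other packages from the original Dockerfile go here unchanged...
    default-mysql-client \
    shared-mime-info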

How to deploy functions server for apiai-facts-about-google-nodejs Actions example

I'm following the instructions for the Facts About Google example and I can't seem to perform the step in the README.md:
Deploy the fulfillment webhook to your preferred hosting environment
(we recommend Google Cloud Functions).
I am trying to start the functions server:
$ functions deploy factsAboutGoogle --trigger-http
functions deploy <functionName> <modulePath>
Options:
--host, -h The emulator's host. [string]
--port, -p The emulator's port. [number]
--help Show help [boolean]
--version Show version number [boolean]
--trigger-http, -t Deploys the function as an HTTP function.
Not enough non-option arguments: got 1, need at least 2
I have also tried using npm start but I get the same issue. I don't know what the functionName or modulePath should be.
I tried the following and just saw that abraham had the same idea. This worked for me:
functions deploy factsAboutGoogle ./ --trigger-http

How do I make gcloud work on opensuse 13.2 in Google Cloud?

I spin up an instance with openSUSE 13.2 (x86_64, built on 2015-05-11) in Google Cloud, ssh to the instance, try to run gcloud, and get the following error:
evgeny#tea-2:~> gcloud
python: can't open file '/usr/bin/../lib/google/cloud/sdk/gcloud/gcloud.py': [Errno 2] No such file or directory
How do I make it work?
Sounds like a bug of some sort. Can you try reinstalling it? Try running:
curl https://sdk.cloud.google.com | bash
then log out and log back in
You can pull gcloud directly from Google as shown in Answer #1 or you can use the packaged version from the openSUSE repositories.
After logging in via ssh:
~> sudo -i
# zypper ar -t rpm-md -n 'Cloud Tools Devel' http://download.opensuse.org/repositories/Cloud:/Tools/openSUSE_13.2/ cloud_tools_devel
# zypper install google-cloud-sdk-0.9.44-13.2.noarch
You will need to accept the build key for the new repository that was added.
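Whichever route you take, a quick sanity check after installation (these are standard gcloud commands, shown as a suggestion rather than part of the original answer):

# Confirm the CLI resolves correctly, then set up credentials and a default project
gcloud version
gcloud init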

Alternative ways to deploy code to Openshift

I am trying to setup Travis CI to deploy my repository to Openshift on a successful build. Is there a way to deploy a repository besides using Git?
Git is the official mechanism for how your code is updated; however, depending on the type of application that you are deploying, you may not need to deploy your entire code base.
For example, Java applications (war, ear, etc.) can be deployed to JBoss or Tomcat servers by simply taking the built application and checking it into the OpenShift git repository's webapps or deploy directories.
An alternative to this (and it is unsupported) is to scp your application to the gear using your SSH key. However, any time the application is moved or updated (with git), this content stands a good chance of getting deleted (cleaned) by the gear.
We're working on direct binary deploys ("push") and "pull" style deploys (OpenShift downloads a binary for you). The design/process is described here:
https://github.com/openshift/openshift-pep/blob/master/openshift-pep-006-deploy.md
You can do an SCP to the app-root/dependencies/jbossews/webapps directory directly. I was able to do that successfully and have the app working. Here is the link
Here is the code which I had in the after_success block:
after_success:
  - sudo apt-get -y install sshpass
  - openssl aes-256-cbc -K $encrypted_8544f7cb7a3c_key -iv $encrypted_8544f7cb7a3c_iv
    -in id_rsa.enc -out ~/id_rsa_dpl -d
  - chmod 600 ~/id_rsa_dpl
  - sshpass scp -i ~/id_rsa_dpl webapps/ROOT.war $DEPLOY_HOST:$DEPLOY_PATH
Hope this helps