Block additional CircleCI builds until Elastic Beanstalk deployment completes - amazon-elastic-beanstalk

We have a multi-step deployment procedure:
Step 1 -> Send assets to S3, other prep work, and trigger elastic beanstalk deployment (occurs on CircleCI)
Step 2 -> Elastic Beanstalk deployment (occurs on AWS)
What I'd like to do is block Circle builds until Step 2 has completed (the Elastic Beanstalk deployment is in the 'Ready' state) to prevent additional builds from failing. One strategy to accomplish this is to include a 'wait' script as the last step in the build of Step 1 that would wait for the EB environment to return "Ready". However, this would cost us unnecessary Circle credits, so I'd rather not do it this way. Maybe there's a way to tell Circle to retry builds if EB is not in a 'Ready' state?
What are some other strategies to accomplish this?

The way I solved this was by putting an 'infinite' loop at the beginning of my deploy script that checks the EB environment status. Once the status is "Ready", it breaks out of the loop and continues executing the deployment script.
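For reference, a minimal sketch of that wait loop using the AWS CLI (the environment name is a placeholder, and it assumes the AWS CLI is configured on the build machine):
# Wait until the Elastic Beanstalk environment reports "Ready" before deploying
ENV_NAME="my-eb-environment"   # placeholder
while true; do
  STATUS=$(aws elasticbeanstalk describe-environments \
    --environment-names "$ENV_NAME" \
    --query 'Environments[0].Status' --output text)
  [ "$STATUS" = "Ready" ] && break
  echo "Environment is $STATUS, waiting..."
  sleep 30
done
# ...continue with the actual deployment steps here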

Related

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is to make a web app that lists in one single view the version of every application deployed in our Openshift (a fast view of versions). At this moment, the only way I have seen to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view, which is why I ask for that parameter; but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the Openshift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to modify it constantly, and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments, should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
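For example, with a reasonably recent oc client the triggers can be wired up from the CLI; a sketch, where my-app is a hypothetical build config / deployment config name:
# GitHub webhook trigger on the build config (my-app is a placeholder)
oc set triggers bc/my-app --from-github
# Image change trigger on the deployment config, so new images roll out automatically
oc set triggers dc/my-app --from-image=my-app:latest -c my-app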
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply using 'oc get pods' or a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces after the colon, needed to filter out other Name: fields)
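To get each pod name together with its ARTIFACT_URL value in one pass, a small loop over the commands already shown works too (a sketch):
# For every pod, print its name and its ARTIFACT_URL (if set)
for pod in $(oc get pods -o name); do
  echo "== $pod"
  oc env "$pod" --list | grep '^ARTIFACT_URL=' || echo "ARTIFACT_URL not set"
done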

Terraform - Elastic Beanstalk - How to Swap Environment URLs

I'm trying to accomplish blue/green deployments with terraform/elastic beanstalk. How would one swap environment urls with this stack? I don't see anything obvious here.
The best I can come up with is...
running a terraform apply to spin up my entire architecture
spins up aws_elastic_beanstalk_environment's blue env
When wanting to deploy a new version of the app, running terraform apply -target=module.elasticbeanstalk.aws_elastic_beanstalk_environment.green to spin up only the other aws_elastic_beanstalk_environment resource
Now I have both blue & green up. Time for actually swapping the URLs...
using the eb swap CLI command, swap the two environment URLs
update tfstate manually
terraform push new state
I would love it if there was a solution where I didn't have to manually manipulate the state. Or is this the only way to accomplish this function using these two tools?
I believe you just toggle the cname prefix of each environment.
If you are looking to update the state after the swap you could note the ids of the beanstalk environments and do:
# Remove the old state information, using the command line not manually
terraform state rm module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue
terraform state rm module.elasticbeanstalk.aws_elastic_beanstalk_environment.green
# Import these to the opposite environment resources
terraform import module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue <green id>
terraform import module.elasticbeanstalk.aws_elastic_beanstalk_environment.green <blue id>
This doesn't exactly seem ideal, but it does avoid needing to manually manipulate the state file, and it could be scripted fairly easily, as sketched below.
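A rough sketch of such a script, reusing the resource addresses above (the environment ids are placeholders you would note before the swap, and the AWS CLI is assumed to be configured):
#!/usr/bin/env bash
set -euo pipefail

# Placeholder ids -- note these from the Elastic Beanstalk console or API before swapping
BLUE_ID="e-xxxxxxxxxx"
GREEN_ID="e-yyyyyyyyyy"
BLUE_ADDR="module.elasticbeanstalk.aws_elastic_beanstalk_environment.blue"
GREEN_ADDR="module.elasticbeanstalk.aws_elastic_beanstalk_environment.green"

# Swap the CNAMEs outside of terraform
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-id "$BLUE_ID" \
  --destination-environment-id "$GREEN_ID"

# Re-point the terraform state at the swapped environments
terraform state rm "$BLUE_ADDR"
terraform state rm "$GREEN_ADDR"
terraform import "$BLUE_ADDR" "$GREEN_ID"
terraform import "$GREEN_ADDR" "$BLUE_ID"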

SHA Commit number from Pipeline w/ Openshift Plugin

I'm using the OpenShift plugin with Jenkins Pipelines to run builds in OpenShift when Github gets a new commit.
I'd also like to be able to report the status of the build back to github.
However in order to do this, I need to know what the commit was that just got built. I'm using the following pipeline config
node() {
    stage 'build'
    def builder = openshiftBuild(buildConfig: 'my-web', showBuildLogs: 'true')
    stage 'deploy'
    openshiftDeploy(deploymentConfig: 'my-web')
    openshiftScale(deploymentConfig: 'my-web', replicaCount: '3')
}
However, I have zero idea how to get the commit SHA from the openshiftBuild step, since that step does the git pull.
According to https://wiki.jenkins.io/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables, you get it from the GIT_COMMIT environment variable.
If the checkout happens later in the pipeline, you can get it with the following code (trim() strips the trailing newline):
def gitCommitId = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()
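With the SHA in hand, one way to report the result back to GitHub (the other half of the question) is the commit status API, e.g. from a sh step; a sketch, where the repo, the GITHUB_TOKEN variable, and the context string are placeholders:
# Placeholders: my-org/my-web, GITHUB_TOKEN, and the "ci/jenkins" context
curl -s -X POST \
  -H "Authorization: token ${GITHUB_TOKEN}" \
  -d '{"state": "success", "context": "ci/jenkins", "description": "Build passed"}' \
  "https://api.github.com/repos/my-org/my-web/statuses/${GIT_COMMIT}"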
It's hard to tell without seeing the rest of your pipeline, but it looks like you are just triggering an OpenShift S2I build, which is not what is recommended for Pipeline builds. You should have your pipeline build the artifact(s) for the application, then use an S2I binary build to have OpenShift put the artifacts into a runtime container. For an example, see HERE.

How to convert elastic beanstalk classic load balancer to application load balancer on a running application?

I have several EB applications that I would like to convert from a classic to an application load balancer. In the documentation it seems that the default way is to create a new environment from scratch with the proper load balancer. Considering that I have many environment variables and several environments, I would prefer not to have to rebuild applications. Is there a way to switch out the load balancer on an already running application?
It is not possible to set a load balancer type except at creation time. You can use the Elastic Beanstalk CLI and the AWS CLI to clone the application with the same config and version. To get the deployed application version, run:
aws elasticbeanstalk describe-environments --application-name ${APPLICATION_NAME} --environment-names ${SRC_ENV_NAME} | jq -r '.Environments | .[] | .VersionLabel'
The jq pipe filters out the rest of the json blob.
After that, you can save the config of the current application using:
eb config save $SRC_ENV_NAME --cfg "${SRC_ENV_NAME}_save"
Then create an application clone using:
eb create $NEW_ENV_NAME --elb-type application --cfg "${SRC_ENV_NAME}_save" --version $APP_VERSION
Where APP_VERSION is the string extracted in step one.
It is not simple, but it can be done.
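Putting those pieces together, a minimal end-to-end sketch (names are placeholders; assumes the AWS CLI, the EB CLI and jq are installed and configured):
#!/usr/bin/env bash
set -euo pipefail

APPLICATION_NAME="my-app"        # placeholder
SRC_ENV_NAME="my-app-env"        # placeholder: environment with the classic LB
NEW_ENV_NAME="my-app-env-alb"    # placeholder: new environment with the ALB

# 1. Look up the currently deployed application version
APP_VERSION=$(aws elasticbeanstalk describe-environments \
  --application-name "$APPLICATION_NAME" \
  --environment-names "$SRC_ENV_NAME" \
  | jq -r '.Environments | .[] | .VersionLabel')

# 2. Save the source environment's configuration
eb config save "$SRC_ENV_NAME" --cfg "${SRC_ENV_NAME}_save"

# 3. Create the clone with an application load balancer
eb create "$NEW_ENV_NAME" --elb-type application --cfg "${SRC_ENV_NAME}_save" --version "$APP_VERSION"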
If the environment name is important to you, it gets a little trickier.
Here is it how it should go, step by step (using the web console):
Save the configuration of the Environment you want to change
From the Saved config, generate a new Env (select Customize settings)
2.1) Change the LB type to Application and fill out all the necessary info for this
Swap the URLs from the original env to the new one (check that everything is working with the new env; if not, swap back). A CLI sketch of the swap is shown after these steps.
[STEPS ONLY NECESSARY IF ENV NAME IS IMPORTANT]
Delete the original env (which now is not receiving traffic and has a Classic LB)
Wait until the original name disappears from the console (it may take a couple of hours)
Clone the production env, and give the new env the original env name
Swap URLs
Done!
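If you prefer to do the swap steps from the CLI rather than the console, something like this should work (environment names are placeholders):
# Swap CNAMEs between the original env and the new one (placeholder names)
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name my-app-env \
  --destination-environment-name my-app-env-new
# or, with the EB CLI:
eb swap my-app-env --destination_name my-app-env-new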

Deployment strategies in Openshift v3

I know I can have two different strategies when I want to deploy in Openshift.
Rolling strategy: Openshift waits for new pods to become ready before scaling down the production pods.
Recreate strategy: Openshift will remove the old instances and then start new ones, returning a 503 HTTP error in the meantime. Used for databases, or when two or more instances can't coexist.
To change the deployment configuration:
oc edit dc/mydeploy-conf -o json
"spec": {
"strategy": {
"type": "Recreate/Rolling"
},
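If you'd rather not open an editor, the same change can be made non-interactively with a patch (a sketch, using the same dc name as above):
# Switch the strategy to Recreate (use "Rolling" to switch back)
oc patch dc/mydeploy-conf -p '{"spec":{"strategy":{"type":"Recreate"}}}'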
EDIT 1 -- Adding info from the project github suggested by Clayton
https://github.com/openshift/origin/blob/master/examples/deployment/README.md
These strategies are not built into Openshift v3, but they can be done manually.
Blue-Green Deployment
Blue-Green deployments involve running two versions of an application at the same time and moving production traffic from the old version to the new version (more about blue-green deployments). There are several ways to implement a blue-green deployment in OpenShift.
Create two copies of the example application
oc new-app openshift/deployment-example:v1 --name=bluegreen-example-old
oc new-app openshift/deployment-example:v2 --name=bluegreen-example-new
Create a route that points to the old service
oc expose svc/bluegreen-example-old --name=bluegreen-example
Edit the route and change the service to bluegreen-example-new
oc edit route/bluegreen-example
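The switch can also be done non-interactively, which is handy for scripting the cutover (a sketch using the service names above):
# Point the route at the new service; patch back to bluegreen-example-old to roll back
oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"bluegreen-example-new"}}}'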
A/B Deployment
A/B deployments generally imply running two (or more) versions of the application code or application configuration at the same time for testing or experimentation purposes.
The simplest form of an A/B deployment is to divide production traffic between two or more distinct shards -- each shard being a single group of instances with homogeneous configuration and code.
More complicated A/B deployments may involve a specialized proxy or load balancer that assigns traffic to specific shards based on information about the user or application (all "test" users get sent to the B shard, but regular users get sent to the A shard). A/B deployments can be considered similar to A/B testing, although an A/B deployment implies multiple versions of code and configuration, whereas A/B testing often uses one codebase with application-specific checks.
Example:
One service, multiple deployment configs
OpenShift, through labels and deployment configurations, can support multiple simultaneous shards being exposed through the same service. To the consuming user, the shards are invisible. An example of the simplest possible sharding is described below:
Create the first shard of the application based on the example deployment images
oc new-app openshift/deployment-example --name=ab-example-a --labels=ab-example=true SUBTITLE="shard A"
Edit the newly created shard to set a label ab-example=true that will be common to all shards:
oc edit dc/ab-example-a
In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-a label. Save and exit the editor.
Trigger a re-deployment of the first shard to pick up the new labels:
oc deploy ab-example-a --latest
Create a service that uses the common label:
oc expose dc/ab-example-a --name=ab-example --selector=ab-example=true
Make the application available via a route:
oc expose svc/ab-example
Create a second shard based on the same source image as the first shard but different tagged version, and set a unique value:
oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE="shard B" COLOR="red"
Edit the newly created shard to set a label ab-example=true that will be common to all shards:
oc edit dc/ab-example-b
In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-b label. Save and exit the editor.
Trigger a re-deployment of the second shard to pick up the new labels:
oc deploy ab-example-b --latest
At this point, both sets of pods are being served under the route. However, since both browsers (by leaving a connection open) and the router (by default through a cookie) will attempt to preserve your connection to a backend server, you may not see both shards being returned to you. To force your browser to one or the other shard, use the scale command:
oc scale dc/ab-example-a --replicas=0
oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
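An alternative, depending on your OpenShift version, is to give each shard its own service (instead of the single shared service above) and split traffic with route backend weights via oc set route-backends (a sketch; the service names are hypothetical):
# Assumes separate services ab-example-a and ab-example-b; weights are relative (here 50/50)
oc set route-backends ab-example ab-example-a=50 ab-example-b=50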
https://github.com/openshift/origin/blob/master/examples/deployment/README.md is probably the best documentation for the types of strategies and how to achieve them