We have a setup with different AWS accounts for each environment (dev, test, prod) and a shared build account with an AWS CodePipeline that deploys into each of these environments by assuming a role in dev, test, and prod.
This works fine for our Serverless applications using a CodeBuild script.
Can we do something similar for an Elastic Beanstalk application that uses the deploy action provider? Or what is the best approach for Elastic Beanstalk?
We do this by using a CodeBuild job specified in each of the stage accounts (dev, test, prod) that uses the AWS CLI to deploy the CodePipeline artifact (available as CODEBUILD_SOURCE_VERSION in your build job's environment variables) to Elastic Beanstalk. We run this job as part of a CodePipeline in our shared build account.
These are the AWS CLI commands the CodeBuild deploy job runs:
aws elasticbeanstalk create-application-version --application-name ... --version-label ... --source-bundle S3Bucket="codepipeline-artifacts-us-east-1-123456789012",S3Key="application/deployable/XXXXXXX"
aws elasticbeanstalk update-environment --environment-name ... --version-label ...
You can specify a CodeBuild job from another account in CodePipeline using the strategy outlined here: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html. It involves setting up cross-account access to the role_arn used for the CodeBuild deploy job and a customer managed KMS key for the pipeline (with a cross-account access policy).
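For reference, the trust policy on the deploy role in each stage account ends up looking roughly like this (the account ID is a placeholder for your shared build account, not a value from the linked guide):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}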
One deficiency with this approach is that the CodeBuild deploy job will complete as soon as the deployment starts, rather than waiting until the Elastic Beanstalk deployment succeeds or fails the way the native CodePipeline EB deploy action does. You should be able to call aws elasticbeanstalk describe-environments in a loop from the job to replicate this behavior, but I have not yet attempted this. (Sample script here: https://blog.cyplo.net/posts/2018/04/wait-for-beanstalk/)
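A minimal polling sketch along those lines (the environment name, timeout, and poll interval are placeholders, and this is untested against a real pipeline):

#!/bin/bash
# Poll until the Elastic Beanstalk environment finishes updating.
ENV_NAME="my-env"   # placeholder
for i in $(seq 1 60); do
  STATUS=$(aws elasticbeanstalk describe-environments \
    --environment-names "$ENV_NAME" \
    --query "Environments[0].Status" --output text)
  HEALTH=$(aws elasticbeanstalk describe-environments \
    --environment-names "$ENV_NAME" \
    --query "Environments[0].Health" --output text)
  echo "Status: $STATUS, Health: $HEALTH"
  if [ "$STATUS" = "Ready" ]; then
    # Deployment finished; treat anything other than Green as a failure.
    [ "$HEALTH" = "Green" ] && exit 0 || exit 1
  fi
  sleep 10
done
echo "Timed out waiting for deployment"
exit 1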
I found a solution for cross-account deployment of an application to Elastic Beanstalk in another AWS account using the AWS CDK.
Since the AWS CDK does not yet have a deploy-to-Elastic-Beanstalk action, we have to implement it manually by implementing the IAction interface.
You can find a complete working CDK app in my git repo:
https://github.com/dhirajkhodade/CDKDotNetWebAppEbPipeline
We ended up solving it this way using CodeBuild:
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install awsebcli --upgrade
  pre_build:
    commands:
      - CRED=`aws sts assume-role --role-arn $assume_role --role-session-name codebuild-deployment-$environment`
      - export AWS_ACCESS_KEY_ID=`node -pe 'JSON.parse(process.argv[1]).Credentials.AccessKeyId' "$CRED"`
      - export AWS_SECRET_ACCESS_KEY=`node -pe 'JSON.parse(process.argv[1]).Credentials.SecretAccessKey' "$CRED"`
      - export AWS_SESSION_TOKEN=`node -pe 'JSON.parse(process.argv[1]).Credentials.SessionToken' "$CRED"`
      - export AWS_EXPIRATION=`node -pe 'JSON.parse(process.argv[1]).Credentials.Expiration' "$CRED"`
      - echo $(aws sts get-caller-identity)
  build:
    commands:
      - eb --version
      - eb init <project-name> --platform "Node.js running on 64bit Amazon Linux" --region $AWS_DEFAULT_REGION
      - eb deploy
We use the AWS CLI to assume the role we need and then the EB CLI to do the actual deployment. Not sure if this is the best way, but it works. We are considering moving to another CI/CD tool that is more flexible.
Just to give context:
I am planning to use Terraform to bring up new, separate environments with EC2 machines, an ELB, etc., and then to maintain their configuration as well.
Doing that with Terraform and the AWS provider sounds fairly simple.
Problem 1:
While launching those instances I want to install a few packages etc., so that when Terraform launches the instances (servers), the apps are up and running.
Assuming the above is up and running:
Problem 2:
How do I deploy new code on the servers in this environment launched by Terraform?
Should I use, e.g., Ansible playbooks/Chef recipes/Puppet manifests for that? Or does Terraform offer some other options/ways?
Brief answers:
Problem 1: While launching those instances I want to install a few packages etc., so that when Terraform launches the instances (servers), the apps are up and running.
A couple of options:
Create an AMI of your instance with the packages installed and specify that AMI in the resource.
Use a user data script to install the packages you need when the instance starts (see the sketch after this list).
Use Ansible playbooks/Chef recipes/Puppet manifests to install packages once the instance is running (e.g. by creating an OpsWorks stack with Terraform).
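For the user data option, a minimal sketch (the package name is a placeholder; this assumes an Amazon Linux 2 instance, where user data runs as root on first boot):

#!/bin/bash
# Runs once at first boot when passed as EC2 user data.
yum update -y
yum install -y nginx           # placeholder package
systemctl enable --now nginx   # start the service now and at every boot

In Terraform, you would pass this script via the user_data argument of the aws_instance resource.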
Problem 2: How do I deploy new code on the servers in this environment launched by Terraform? Should I use, e.g., Ansible playbooks/Chef recipes/Puppet manifests for that? Or does Terraform offer some other options/ways?
This is not the intended use case for Terraform; use other tools like Jenkins, or AWS services like CodePipeline or CodeDeploy. Ansible/Chef/Puppet can also help (e.g. with OpsWorks).
Does Terraform support application deployment using Elastic Beanstalk?
I've tried to deploy a Spring Boot app using the
`aws_elastic_beanstalk_application`,
`aws_elastic_beanstalk_application_version`, and
`aws_elastic_beanstalk_environment`
resources, but noticed that they create the Elastic Beanstalk application, application version, and environment without deploying the actual .jar file. I have to use the aws elasticbeanstalk update-environment command to make it work.
The current version of Terraform just creates the S3 bucket, uploads your source code, and then creates the application version in Elastic Beanstalk.
To deploy that version, use the AWS CLI:
aws elasticbeanstalk update-environment \
--application-name test-app \
--version-label latest \
--environment-name test-env
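If you script this, you may also want to block until the rollout finishes; the AWS CLI ships a waiter for that (environment name as above):

aws elasticbeanstalk wait environment-updated --environment-names test-env

The waiter returns once the environment is back in the Ready state, or fails after its polling limit is exceeded.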
I am using the Azure CLI. I have created an app by using the Azure CLI command azure site create $SITENAME --location $LOCATION --hostname $HOSTNAME -s $SUBSCRIPTIONID.
Now I want to connect it with my git account using the Azure CLI.
If you use the new CLI 2.0 (https://github.com/Azure/azure-cli) you should be able to use the following command:
az appservice web source-control config-local-git -g {group} -n {webapp name}
git remote add azure https://<deploy_user_name>@MyApp.scm.azurewebsites.net/MyApp.git
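Once the remote is set up, an ordinary push deploys the app:

git push azure master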
If this doesn't work for your scenario, please post a feature request to our repo.
If you wish to customize continuous deployment, I'm afraid there is no way to use the Azure CLI for that right now.
Please use the Azure web portal to do it; details are here.
You can also use Visual Studio Team Services to compile the website, and then execute your ARM template through release management to update the environment.
Is it possible to set up a webhook to automatically deploy a new version of an application from a Docker Hub repository to Elastic Beanstalk?
I currently have the following setup:
Bitbucket Repo -----> Docker Hub -----> Elastic Beanstalk
When I push to the master branch on the git repository, it triggers a build on the Docker repository through a POST request. However, once the image is built, I have to manually deploy it on EB.
Docker Hub has the option for making a POST request whenever a build is successfully completed. Is there some API or URL that I could point Docker to call so that EB redeploys the application?
Note: Eventually I would like to include an automated testing server into this workflow.
AWS does not seem to have an HTTP API for this, but you can use the aws command-line tool to trigger the update: https://stackoverflow.com/a/41715702/5879759
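As a sketch, whatever receives the Docker Hub POST (a small web server, a CI job, or a Lambda behind API Gateway) only needs to run something like the following; the application, environment, bucket, and key names are placeholders, and the source bundle is assumed to be a Dockerrun.aws.json pointing at your Docker Hub image:

#!/bin/bash
# Create a fresh application version and point the environment at it,
# which makes Elastic Beanstalk pull the image again.
VERSION="build-$(date +%s)"   # placeholder version label
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label "$VERSION" \
  --source-bundle S3Bucket="my-bucket",S3Key="Dockerrun.aws.json"
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --version-label "$VERSION"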
I am migrating from DotCloud to Elastic Beanstalk.
Using DotCloud, they clearly explained how to set up a Python worker and how to use supervisord.
Moving to Elastic Beanstalk, I am lost on how I could do that.
I have a script myworker.py and want to make sure it is always running. How?
Elastic Beanstalk is just a stack configuration tool on top of EC2, ELB, and Auto Scaling.
One approach you can use is to create your own AMI, but since October last year there is another approach that will probably be more suitable for your needs: ebextensions.
.ebextensions is just a directory in your application that gets detected once your application has been loaded by AWS.
Here is the full documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
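As a rough illustration of the format (the file name, paths, and install command here are made up, so treat this as a sketch rather than an official recipe), a config file in .ebextensions could install supervisor and drop a program definition for your worker:

# .ebextensions/worker.config (hypothetical name)
files:
  "/etc/supervisor.d/myworker.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [program:myworker]
      command=python /var/app/current/myworker.py
      autostart=true
      autorestart=true
commands:
  01_install_supervisor:
    command: "pip install supervisor"

The files section writes the program definition that keeps myworker.py running; you would still need to start supervisord itself, for example from another command in the same file.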
With Amazon Linux 2 you need to use the .platform folder to supply Elastic Beanstalk with installation scripts.
We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.
So you should add a prebuild hook (example) into a .platform folder to install supervisor and a postdeploy hook (example) to restart supervisor after each deployment.
There is an ini file (example) used in the script; it is Laravel-specific.
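A minimal prebuild hook sketch (assumes Amazon Linux 2 with pip3 available; the file name is arbitrary, since hooks in a directory run in alphabetical order):

#!/bin/bash
# .platform/hooks/prebuild/01_install_supervisor.sh
# Install supervisor if it is not already on the instance.
if ! command -v supervisord >/dev/null 2>&1; then
  pip3 install supervisor
fi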
Make sure that the .sh files from the .platform folder are executable before deploying your project:
$ chmod +x .platform/hooks/prebuild/*.sh
$ chmod +x .platform/hooks/postdeploy/*.sh