I'm using the eb CLI tool to create my application and environments on Elastic Beanstalk, but finding that it's creating a Worker 1.0 tier.
Is there a way I can use .ebextensions or a script that runs on "eb start" that can upgrade the tier to v1.1?
When you run eb init, the eb CLI generates a file named .elasticbeanstalk/config in your app source directory.
You can modify the tier version in this file.
Change EnvironmentTier=Worker::SQS/HTTP::1.0 to EnvironmentTier=Worker::SQS/HTTP::1.1, and the next environment you launch will use worker tier 1.1.
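For reference, here is a rough sketch of what the edited file might contain (the other keys are illustrative placeholders; EnvironmentTier is the line that matters):

[global]
ApplicationName=my-app
Region=us-east-1
EnvironmentTier=Worker::SQS/HTTP::1.1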
Tier version cannot be modified via ebextensions.
We're using a bitbucket pipeline to deploy a NodeJS app to elastic beanstalk (eb).
We've configured the eb environment option "Rolling updates and deployments" to use "Rolling with additional batch" (RollingWithAdditionalBatch strategy).
This worked fine until about a year ago; since then (and after updating the platform version to Node 14, then 16), the "rolling with additional batch" strategy is ignored on deployment and EB appears to fall back to a plain "rolling" strategy instead.
Bitbucket pipe version: atlassian/aws-elasticbeanstalk-deploy:1.0.2.
EB config is as per the attached screenshot.
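For anyone comparing configurations: the deployment policy can also be pinned in source via an .ebextensions option-settings file, so console settings cannot silently drift. A minimal sketch using the standard aws:elasticbeanstalk:command namespace (the file name and batch values are arbitrary):

# .ebextensions/deploy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Percentage
    BatchSize: 25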
We have a setup with a different AWS account for each environment (dev, test, prod) and a shared build account with an AWS CodePipeline that deploys into each of these environments by assuming a role in dev, test, and prod.
This works fine for our Serverless applications using a CodeBuild script.
Can we do something similar for the Elastic Beanstalk application that uses the deploy action provider? Or what is the best approach for Elastic Beanstalk?
We do this by using a CodeBuild job specified in each of the stage accounts (dev, test, prod) that uses the AWS CLI to deploy the CodePipeline artifact (available as CODEBUILD_SOURCE_VERSION in your build job's environment variables) to Elastic Beanstalk. We run this job as part of a CodePipeline in our shared build account.
These are the AWS CLI commands the CodeBuild deploy job runs:
aws elasticbeanstalk create-application-version --application-name ... --version-label ... --source-bundle S3Bucket="codepipeline-artifacts-us-east-1-123456789012",S3Key="application/deployable/XXXXXXX"
aws elasticbeanstalk update-environment --environment-name ... --version-label ...
You can specify a CodeBuild job from another account in CodePipeline using the strategy outlined here: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html. It involves setting up cross-account access to the role_arn used for the CodeBuild deploy job and a customer managed KMS key for the pipeline (with a cross-account access policy).
One deficiency with this approach is that the CodeBuild deploy job will complete as soon as the deployment starts and not wait until the ElasticBeanstalk deployment succeeds or fails, as the native CodePipeline EB deploy action does. You should be able to call aws elasticbeanstalk describe-environments in a loop from the job to replicate this behavior, but I have not yet attempted this. (Sample script here: https://blog.cyplo.net/posts/2018/04/wait-for-beanstalk/)
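An untested sketch of such a polling loop (the environment name is a placeholder):

#!/usr/bin/env bash
# Poll until the environment leaves the "Updating" state.
ENV_NAME="my-env"
while true; do
  STATUS=$(aws elasticbeanstalk describe-environments \
    --environment-names "$ENV_NAME" \
    --query "Environments[0].Status" --output text)
  echo "Environment status: $STATUS"
  [ "$STATUS" != "Updating" ] && break
  sleep 15
done
# Status "Ready" with health "Green" indicates a successful deployment.
aws elasticbeanstalk describe-environments --environment-names "$ENV_NAME" \
  --query "Environments[0].[Status,Health]" --output text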
I have found a solution for cross-account deployment of an application to Elastic Beanstalk in another AWS account using the AWS CDK.
Since the AWS CDK does not yet have a built-in deploy-to-Elastic-Beanstalk action, we have to implement it manually by implementing the IAction interface.
You can find a complete working CDK app in my Git repo:
https://github.com/dhirajkhodade/CDKDotNetWebAppEbPipeline
We ended up solving it this way using CodeBuild:
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install awsebcli --upgrade
  pre_build:
    commands:
      - CRED=`aws sts assume-role --role-arn $assume_role --role-session-name codebuild-deployment-$environment`
      - export AWS_ACCESS_KEY_ID=`node -pe 'JSON.parse(process.argv[1]).Credentials.AccessKeyId' "$CRED"`
      - export AWS_SECRET_ACCESS_KEY=`node -pe 'JSON.parse(process.argv[1]).Credentials.SecretAccessKey' "$CRED"`
      - export AWS_SESSION_TOKEN=`node -pe 'JSON.parse(process.argv[1]).Credentials.SessionToken' "$CRED"`
      - export AWS_EXPIRATION=`node -pe 'JSON.parse(process.argv[1]).Credentials.Expiration' "$CRED"`
      - echo $(aws sts get-caller-identity)
  build:
    commands:
      - eb --version
      - eb init <project-name> --platform "Node.js running on 64bit Amazon Linux" --region $AWS_DEFAULT_REGION
      - eb deploy
We use the AWS CLI to assume the role we need and then the EB CLI to do the actual deployment. Not sure if this is the best way, but it works. We are considering moving to another CI/CD tool which is more flexible.
Unable to install the "da-cli-114-7582c1a0bd-linux.run" file in my Ubuntu VM. The setup fails during the latest-version check.
I have downloaded the latest DAML SDK setup file "da-cli-114-7582c1a0bd-linux.run" and copied it into my Ubuntu VM over the local network. When I try to install the .run file, the setup tries to connect to the internet for a latest-version check. But I am not allowed to use the internet on the application servers/VMs. Because of this restriction the setup fails and I am unable to complete the DAML SDK installation.
Is it possible to get the DAML SDK setup as a .tar file? If we have a tar file, it will be easy to complete the setup manually.
Installing the SDK using the .run files in an environment without an internet connection is not easy. It might be possible to install it in an environment with internet access, tar up the folder ~/.da, extract it back into place in the VM, and put ~/.da/bin on your PATH.
However, there is a new SDK assistant in the works (called daml, not da), which can be installed using curl -sSL get.daml.com | sh. If you look at the content of the installation script, you can see that all it really does is download a tarball from GitHub releases, untar it, and call an install.sh script within. That's probably the easier way to get the SDK into an offline environment at this point.
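In other words, the manual offline route would look roughly like this (the URL, version, and directory names are placeholders; check the GitHub releases page for the actual asset):

$ # On a machine with internet access:
$ curl -LO https://github.com/digital-asset/daml/releases/download/vX.Y.Z/daml-sdk-X.Y.Z-linux.tar.gz
$ # Copy the tarball to the offline VM, then:
$ tar xzf daml-sdk-X.Y.Z-linux.tar.gz
$ cd <extracted-directory> && ./install.sh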
Note that the documentation for the new daml assistant is not on docs.daml.com yet. It will be there shortly, but in the meantime you can read it on GitHub.
I have installed Git, Node.js and NPM on my machine and have successfully been able to run a progressive web app in Chrome through localhost. Now what about when I want to run this web app on a public server? Will I have to install Git, Node.js and NPM on my web hosting account? Or are these components generally already installed on web hosting servers (for example, ones with a panel like cPanel)?
By the way, would you be able to recommend any good FTP application for Mac that can upload zillions of files easily (not 1 by 1)?
You will need Node on the host server, but not NPM or Git (although NPM will be there by default when you install Node).
Typically what you want is to set up a "Continuous Integration / Continuous Delivery" (CI/CD) platform. You can use free options like Travis or Jenkins, or you can just use a shell script and run it through AWS Lambda or something like that.
Very simplified version:
You push code to Git
CI/CD detects the code check-in (polls) and pulls latest from Git on an Agent
CI/CD runs a "build" on the Agent, which for Node is at least npm install but can include Grunt, Gulp, Webpack, or a host of other useful steps.
CI/CD publishes the result of the build to a target server.
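Boiled down to a script, steps 2-4 might look roughly like this on the agent (the hostnames, paths, and the pm2 process manager are illustrative assumptions):

#!/usr/bin/env bash
# Minimal sketch of a CI/CD agent's build-and-publish job; all names are placeholders.
set -euo pipefail
git clone https://example.com/you/your-app.git && cd your-app   # pull latest from Git
npm install                                                     # the "build" step
npm run build                                                   # optional bundling (Webpack etc.)
rsync -az --delete dist/ deploy@your-server:/var/www/app/       # publish to the target server
ssh deploy@your-server 'pm2 restart app'                        # restart the Node process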
Here you have five machines involved:
Git server
Your local dev box
CI/CD server
CI/CD agent
production server
Hope this helps you get started in the right direction.
I am migrating from DotCloud to Elastic Beanstalk.
Using DotCloud, they clearly explained how to set up Python Worker, and how to use supervisord.
Moving to Elastic Beanstalk, I am lost on how I could do that.
I have a script myworker.py and want to make sure it is always running. How?
Elastic Beanstalk is just a stack configuration tool over EC2, ELB, and Auto Scaling.
One approach is to create your own AMI, but since October last year there is another approach that will probably be more suitable for your needs: ebextensions.
.ebextensions is just a directory in your application that gets detected once your application has been loaded by AWS.
Here is the full documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
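As a rough, untested illustration on the classic Amazon Linux platform, an .ebextensions config that keeps myworker.py running under supervisord could look like this (it assumes supervisord is already installed and running on the instance; paths and names are placeholders):

# .ebextensions/myworker.config
files:
  "/etc/supervisord.d/myworker.conf":
    mode: "000644"
    content: |
      [program:myworker]
      command=python /var/app/current/myworker.py
      directory=/var/app/current
      autostart=true
      autorestart=true
container_commands:
  01_reload_supervisor:
    command: "supervisorctl reread && supervisorctl update"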
With Amazon Linux 2 you need to use the .platform folder to supply Elastic Beanstalk with installation scripts.
We recommend using platform hooks to run custom code on your environment instances. You can still use commands and container commands in .ebextensions configuration files, but they aren't as easy to work with. For example, writing command scripts inside a YAML file can be cumbersome and difficult to test.
So you should add a prebuild hook (example) into a .platform folder to install supervisor and a postdeploy hook (example) to restart supervisor after each deployment.
There is an ini file (example) used in the script, which is Laravel-specific; adapt it to your own worker.
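As an untested sketch, the two hooks could look roughly like this (the EPEL install route and the program name myworker are assumptions; adapt them to your app):

#!/bin/bash
# .platform/hooks/prebuild/01_install_supervisor.sh (hypothetical)
# Install supervisor once, e.g. from EPEL via amazon-linux-extras.
if ! command -v supervisord >/dev/null; then
  amazon-linux-extras install epel -y
  yum install -y supervisor
  systemctl enable --now supervisord
fi

#!/bin/bash
# .platform/hooks/postdeploy/01_restart_supervisor.sh (hypothetical)
# Reload supervisor config and restart the worker after each deployment.
supervisorctl reread
supervisorctl update
supervisorctl restart myworker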
Make sure that the .sh files from the .platform folder are executable before deploying your project:
$ chmod +x .platform/hooks/prebuild/*.sh
$ chmod +x .platform/hooks/postdeploy/*.sh