How to choose the EC2 instance type of an Elastic Beanstalk environment when creating it using awscli? - amazon-elastic-beanstalk

I've been through all the documentation (I guess) and haven't yet found a way to choose which EC2 instance type to use for my environment.

You can specify it when you create your environment (it will override what you selected when you ran eb init):
eb create --instance_type t2.micro
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-create.html
You can also set it in your .ebextensions/config
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
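Since the question asks about the plain AWS CLI rather than the EB CLI, the same option setting can also be passed to create-environment. A minimal sketch, assuming placeholder application, environment, and solution stack names (you can list valid stacks with aws elasticbeanstalk list-available-solution-stacks):

aws elasticbeanstalk create-environment \
  --application-name my-app \
  --environment-name my-env \
  --solution-stack-name "64bit Amazon Linux 2018.03 v2.8.6 running Tomcat 8 Java 8" \
  --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=t2.micro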

Related

Unable to have Elastic IP for EMR cluster

I am running a data pipeline once a day, and the pipeline creates a temporary EMR cluster to run its activities. The EMR cluster connects to a MySQL database, and the IPs of the master and core nodes need to be whitelisted in the DB.
Is it possible to assign static or Elastic IPs to EMR nodes so that I don't have to whitelist the node IPs manually every time the cluster is created?
Thanks in advance.
I was in the same situation and wrote a script to attach a pre-allocated EIP to the master instance. But if you are connecting to AWS RDS, the simplest approach is to allow the EMR cluster's security group in the RDS security group instead, as sketched just below.
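A minimal sketch of that security group approach, with placeholder group IDs:

# sg-0123456789abcdef0 = RDS security group, sg-0fedcba9876543210 = EMR
# security group (both are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3306 \
  --source-group sg-0fedcba9876543210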
As for the EIP approach, below is the script I used to attach the EIP to the master instance only, not the others. For a Spark application in cluster mode, the master node hosts the Spark driver, so only the master needs to be whitelisted.
#!/bin/bash
# EMR bootstrap action: associate the EIP passed as $1 with the master node only.
IS_MASTER=$(jq -r .isMaster /emr/instance-controller/lib/info/instance.json)
if [ "$IS_MASTER" == "true" ]; then
  # Look up this instance's ID from the EC2 instance metadata service
  INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
  aws ec2 associate-address --instance-id "$INSTANCE_ID" --public-ip "$1"
fi
This script takes the EIP you want as an argument, and I registered it as a bootstrap action for the EMR cluster. Be aware that the EMR instances need permission to call associate-address.
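For illustration, registering the script as a bootstrap action at cluster creation could look like this (the bucket, script name, EIP, and instance settings are placeholders):

aws emr create-cluster \
  --name my-cluster \
  --release-label emr-5.30.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge --instance-count 3 \
  --use-default-roles \
  --bootstrap-actions Path=s3://my-bucket/associate-eip.sh,Name=AssociateEIP,Args=[203.0.113.25]

The EMR instance profile also needs an IAM policy that allows the ec2:AssociateAddress action.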

ECS EC2 Launch Type: Service database connection string

I am trying out a small POC (a learning experiment) on Docker. I have three Docker images, one each for a storefront, a search engine, and a database engine, named storefront, solr, and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2, and it works fine.
For the next step of the POC, I needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-optimized AMI, on which I have installed and started an ecs-agent. I have created three services, each with one task, with each of the three images configured as a container within its task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created, I see two entries in Route 53: one SRV record and one A record. The A record's name has the form <task-id>.docmysql.local; if I use that in the database connection string it works, but that is obviously not the right thing to do given the hardcoded task ID. I have read about AWS Cloud Map (service discovery) but am still not very clear on how to go about it. I will not be putting any load balancer in front of my DB task; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? And why did I not have these issues when I ran it as a Docker swarm?
I know I can use RDS instead of hosting my own database, and I will try that, but for now I need this working, since this is how I started. Thanks for any help.
Before getting to my own solution, a few points worth raising about the problem:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it on the EC2 launch type? If not, use Fargate; it is simpler to operate.
Now, I faced this issue on Fargate and discovered that, depending on your container/task definitions, the database can run inside the same task for testing purposes, in which case 127.0.0.1 is the answer.
Across different tasks you need to work with the awsvpc network mode, which means:
"Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it." (from the AWS docs)
My suggestion is to create a Lambda function to discover your network interface dynamically.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
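Since the question mentions AWS Cloud Map: another option is to give docmysql a stable DNS name through ECS service discovery, so the connection string never contains a task ID. A rough sketch with the AWS CLI, where the VPC, subnet, security group, namespace, and registry identifiers are placeholders you would substitute from each command's output:

# 1. Create a private DNS namespace "local" in the tasks' VPC
aws servicediscovery create-private-dns-namespace --name local --vpc vpc-0123456789abcdef0

# 2. Create a discovery service "docmysql" that maintains an A record per task
aws servicediscovery create-service --name docmysql \
  --dns-config 'NamespaceId=ns-examplens,DnsRecords=[{Type=A,TTL=60}]' \
  --health-check-custom-config FailureThreshold=1

# 3. Point the ECS service at the registry so each task registers itself
aws ecs create-service --cluster my-cluster --service-name docmysql \
  --task-definition docmysql:1 --desired-count 1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-01234567],securityGroups=[sg-01234567]}' \
  --service-registries registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example

With this in place, docmysql.local should resolve to the current task's private IP, so the original jdbc:mysql://docmysql.local/... connection string can work without hardcoding anything.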

Template error when mounting EFS to Elastic Beanstalk EC2 using AWS mount script

I am running into the error
"Service:AmazonCloudFormation, Message:Template error: every Fn::Join object requires two parameters, (1) a string delimiter and (2) a list of strings to be joined or a function that returns a list of strings (such as Fn::GetAZs) to be joined."
when trying to deploy a Tomcat application with the
https://github.com/awslabs/elastic-beanstalk-docs/blob/master/configuration-files/aws-provided/instance-configuration/storage-efs-mountfilesystem.config
script to mount an EFS file system to the Elastic Beanstalk EC2 instance.
I have been trying for a while now to resolve it. Any help is highly appreciated.
The EFS and EC2 instance are in the same VPC, and mounting works successfully when I SSH into the EC2 instance.
Surprisingly, I don't see any ERROR logs in the CloudFormation stack either.
I finally figured out the problem. It's a very silly mistake; in case you run into this problem, here's what I was doing.
The description says "To use this file to mount a file system that you created outside of AWS Elastic Beanstalk, replace the Ref with the resource ID" about the line below:
FILE_SYSTEM_ID: '{"Ref" : "FileSystem"}'
So I inferred it should be
FILE_SYSTEM_ID: '{"<RESOURCE_ID>" : "FileSystem"}'
No, this is wrong. What they actually mean is:
FILE_SYSTEM_ID: RESOURCE_ID
I know this was a silly error, but hopefully this saves someone who gets stuck like I did.
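For context, the corrected line sits inside the config file's option_settings block. A minimal sketch, assuming the namespace used by the linked AWS sample and fs-12345678 as a stand-in for your actual file system ID:

option_settings:
  aws:elasticbeanstalk:application:environment:
    FILE_SYSTEM_ID: fs-12345678
    MOUNT_DIRECTORY: '/efs'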

Login failed although I specified an inbound rule in the security group of my RDS instance

On my first try, I created a DB instance and migrated my local DB to it, then tried to deploy my WAR to my environment on AWS, but it couldn't connect to the DB instance, even though the DB instance on AWS worked fine with my local Tomcat container. I also read the documentation on how to connect an environment to an existing DB, but it didn't help either.
On my most recent try, I created a new environment and a DB instance at the same time; although I defined an inbound rule (all traffic, all protocols, all port ranges), the new environment still can't connect to the MySQL DB instance on AWS. Would you please guide me?
Did you deploy an RDS instance that is publicly accessible or not [1]? If the RDS instance is private, where is your EC2 instance (same region, same VPC)? Are you sure the database endpoint, user, and password in your application are correct?
[1] http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.html
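As a quick check of the first two points, something like the following (mydb is a placeholder for your DB instance identifier) shows whether the instance is publicly accessible and which VPC it lives in:

aws rds describe-db-instances --db-instance-identifier mydb \
  --query 'DBInstances[0].{Public:PubliclyAccessible,Vpc:DBSubnetGroup.VpcId,Endpoint:Endpoint.Address}'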

rake db:migrate runs in development AWS Beanstalk

I'm new to Beanstalk. I've created a Rails application and set the production database configuration to use the environment variables hopefully provided by AWS. I'm using MySQL (the mysql2 gem) and want to use RDS and Passenger (I have no preference there).
In my development environment I can run the Rails application against my local MySQL (it is just a basic application I've created for experimentation).
I have added the passenger gem to the Gemfile and bundled, but I'm still using WEBrick in development.
The only thing I did not do by the book is that I did not use 'eb' but rather tried from the console. My application/environment failed to run: during "rake db:migrate" it still thinks I want it to connect to the local MySQL (I gather from the logs that it is not aware of RACK_ENV and hence uses 'development').
Any tips? I can of course try 'eb' next, but I would prefer to work with the console.
Regards,
Oren
In Elastic Beanstalk (both the web console and the CLI), you can set environment variables. If you set the RACK_ENV variable, you will change your environment.
After that, you still need to pass your database parameters (DB password, name, ...), which should not be hardcoded into the code.
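For illustration, setting the variables with the EB CLI might look like this (the values are placeholders; pick whichever variables your database.yml actually reads):

eb setenv RACK_ENV=production RAILS_ENV=production SECRET_KEY_BASE=changeme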
Have you tried to run
bin/rake db:migrate RAILS_ENV=development
?
I had the same issue, and that worked for me.
I recommend you log in to the EC2 instance with "eb ssh" (the first time, you need to specify your .pem file; if you don't have one, you can create one in the IAM service) and check your logs for more information about your errors.
If you have problems when uploading your code (eb deploy), the log is in this file: "/var/log/eb-activity.log" (remember, this file is on your EC2 instance).
If you have problems with your app, you can read the logs in these files: "/var/app/support/logs/production.log" or "/var/app/support/logs/passenger.log".
Another recommendation is to install EB CLI version 3 to manage your EB environment:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html
I believed that Elastic Beanstalk would run 'rake db:migrate' by itself. Indeed it seems to try, but that is failing. I gave my bounty to 'Yahs Hef', even though I will only try this evening (UK). My disorientation with AWS caused me to forget this easy solution of running the migration myself. If this does not work by itself, I'll simplify the database configuration as much as possible.