Does Elastic Beanstalk misassign security groups in environment creation? - amazon-elastic-beanstalk

When I create an environment with a VPC in Elastic Beanstalk, the VPC configuration page requires selecting a VPC Security Group. After the environment is created, two security groups are created automatically: a VPC Security Group and a Load Balancer Security Group.
On the Configuration -> Instances page, two security groups are listed in the EC2 security groups field: the security group selected during setup and the automatically created VPC Security Group. In our case, the security group selected during setup is extraneous.
I have two questions. Why does EB require selection of a security group? And should the Load Balancer Security Group be included in the EC2 security groups field on the Instances page?

Related

Is there a way to prevent Elastic Beanstalk from automatically opening up security groups to 0.0.0.0/0?

Problem
I've been working with several small Elastic Beanstalk environments, both within and outside of VPCs. Every time I deploy, even if I specify security groups, Elastic Beanstalk creates new security groups for the instance(s) and ELB. These groups are open to 0.0.0.0/0. My organization's security tooling flags these rules every time and removes them, and the security team follows up with me to ask if everything's alright.
Question
Can I configure Elastic Beanstalk to not open up the security groups to HTTP from anywhere on the internet prior to deploying?
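One way to approach this, sketched here with a classic load balancer assumed and with placeholder names (my-env and the sg-... IDs are not real resources): Elastic Beanstalk exposes the security groups it manages as option settings, so you can point the environment at groups you control and keep the tighter ingress rules in those groups.
# Sketch only; my-env and the sg-... IDs are placeholders for your own resources.
aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:autoscaling:launchconfiguration,OptionName=SecurityGroups,Value=sg-0aaa1111bbb22222c \
    Namespace=aws:elb:loadbalancer,OptionName=ManagedSecurityGroup,Value=sg-0ddd3333eee44444f \
    Namespace=aws:elb:loadbalancer,OptionName=SecurityGroups,Value=sg-0ddd3333eee44444f
This may not stop Elastic Beanstalk from creating its default groups at environment creation, but it keeps the instance and load balancer traffic on groups whose ingress rules you manage.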

Cannot add Elastic Beanstalk environment created with EB CLI to a VPC: This environment is not part of a VPC

Firstly, the use case: I have a large Spring Boot monolith accompanied by some smaller Go services that perform various tasks. Currently they are hosted privately on the same server and can therefore communicate internally using localhost. I am looking into deploying this to AWS on Elastic Beanstalk and am currently using the free tier to evaluate it. I want the Spring Boot application to be publicly visible and the Go services to be reachable by the Spring Boot application but not by the public. My impression is that I should deploy them as separate Elastic Beanstalk environments but assign them to the same VPC. If that is the wrong assumption, please let me know the correct one!
If that is indeed what I want, then this is my current initial issue. I have a VPC set up (with default values), and in my local repository I use eb init, eb create, etc. to deploy the application. When it is deployed and up and running and I go into Configuration in the AWS console for the environment, the network section simply says This environment is not part of a VPC. I've tried selecting classic, application, and network as the load balancer type, with the same result. Do I need to do something during eb create instead?
I've tried eb create --vpc but honestly don't know what to fill in for all the prompts:
Enter the VPC ID: xxxxxxxxxxxxxxxxx
Do you want to associate a public IP address? (Y/n): Y
Enter a comma-separated list of Amazon EC2 subnets: ?
Enter a comma-separated list of Amazon ELB subnets: ?
Do you want the load balancer to be public? (Select no for internal) (Y/n): ?
Enter a comma-separated list of Amazon VPC security groups:
What should I be looking for to enter here? I assume the VPC ID is the ID of the VPC I have created, but I am having difficulty understanding the rest. If I simply run eb create --vpc.id <XXXXXXXXXXXXXXXXXX>, I instead get ERROR: ServiceError - Configuration validation exception: Invalid option value: 'internal' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBScheme'): Internal load balancers are valid only in a VPC; however, your environment is currently not running in a VPC.
Grateful for help!
You don't need two separate VPCs for your public-facing and internal applications. In the same VPC, you can create one load balancer as internal and another as internet-facing.
Here is some information about the fields.
Enter the VPC ID: vpc-abc123
Do you want to associate a public IP address? (Y/n):
Answer yes if you want the EC2 instances themselves to receive public IP addresses; if the instances sit in private subnets behind the load balancer, you can answer no.
Enter a comma-separated list of Amazon EC2 subnets:
Enter the subnets in which the instances should launch, typically private subnets. Private subnets cannot be reached from the internet directly; that is why you create a public-facing load balancer (for the internet-facing application) to receive the web traffic and forward it to the instances.
Enter a comma-separated list of Amazon ELB subnets:
For an internet-facing application, choose public subnets.
For an internal application, choose private subnets.
Do you want the load balancer to be public? (Select no for internal) (Y/n):
For an internet-facing application, yes.
For an internal application, no.
Enter a comma-separated list of Amazon VPC security groups:
The security groups must belong to the VPC; in other words, if you inspect a security group, you should see your VPC ID on it.
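Putting the answer above together, a non-interactive eb create call could look roughly like this; every ID below is a placeholder, the instances go in private subnets, the load balancer goes in public subnets, and --vpc.elbpublic makes the load balancer internet-facing:
# Rough sketch; replace the vpc-, subnet-, and sg- IDs with your own.
eb create my-public-env \
  --vpc.id vpc-abc123 \
  --vpc.ec2subnets subnet-priv1,subnet-priv2 \
  --vpc.elbsubnets subnet-pub1,subnet-pub2 \
  --vpc.elbpublic \
  --vpc.securitygroups sg-0123456789abcdef0
For the internal Go services you would likely run a similar command without --vpc.elbpublic, so that their load balancer is only reachable from inside the VPC.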

Networking across Google Cloud projects

Is it possible to route/forward all TCP traffic for a specific port originating from one instance group to that TCP port on a specific instance in a second project? Within a single project this is not difficult, but without static IPs (an auto-scaling instance group with hundreds of instances) it is not clear how to route across projects.
Use Shared VPC. It allows you to share a VPC network across projects in the same organization.
I found these answers in need of further detail, or perhaps outdated. First, for those who don't know, a VPC is a Virtual Private Cloud network. Yes, you need a VPC, but not necessarily a Shared VPC, which requires an organization configuration. An easy solution is to use VPC Network Peering.
When you create a Compute Engine instance, it is attached to a VPC, the "default" VPC. If you have instances in more than one project and you want them to communicate, their VPCs must not use overlapping subnets; if the two projects have the same default subnet range, you need to create another VPC in one of them.
For example, one VPC might use 10.142.0.0/20 for its network and another might use 10.143.0.0/20. That would be fine, but if they both use 10.142.0.0/20, peering won't work and you'd need to create a new VPC.
Go to the VPC network menu in the console and add a new VPC if needed. If you do, you also need to set up firewall and routing rules similar to those of the default VPC; if you don't, traffic between Compute Engine instances on that VPC will not flow.
Now, go to the VPC network peering option and create an entry in one project that points to the VPC of the other project. It will tell you that it is waiting to connect. Now go to the other project and create a network peering entry that has the opposite configuration. For example, in project A, with VPC AA and project B with VPC BB, you create an entry in project A that uses AA and points to BB. In project B, you create an entry that uses BB and points to AA. After some validation, the connection, if valid, will connect. Once connected, it creates all of the routes necessary to get between the two project VPCs.
Now, if your firewall settings are correct, you should be able to send and receive traffic between projects.
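For reference, the steps above map roughly onto the following gcloud commands; the project IDs, network names, port, and source range are placeholders that follow the earlier example:
# Peer VPC "aa" in project-a with VPC "bb" in project-b (placeholder names).
gcloud compute networks peerings create aa-to-bb \
  --project=project-a --network=aa \
  --peer-project=project-b --peer-network=bb
gcloud compute networks peerings create bb-to-aa \
  --project=project-b --network=bb \
  --peer-project=project-a --peer-network=aa
# Allow the specific TCP port (5000 here as an example) from the other VPC's range.
gcloud compute firewall-rules create allow-from-aa \
  --project=project-b --network=bb \
  --allow=tcp:5000 --source-ranges=10.142.0.0/20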
The "only" way to connect your instances across different Google Cloud projects is either through VPN or using the public IP. By public IP, I mean either through a NAT gateway or directly from instance to instance using the public IP. You can find more information about Google Cloud VPN in this Help Center article.

How to allow ECS task access to RDS

I have an ECS task executed from a Lambda function. This task will perform some basic SQL operations (e.g. SELECT, INSERT, UPDATE) on an RDS instance running MySQL. What is the proper way to manage access from the ECS task to RDS?
I am currently connecting to RDS using a security group rule where port 3306 allows a connection from a particular IP address (where an EC2 instance resides).
I am in the process of moving this functionality from EC2 to the ECS task. I looked into IAM policies, but those actions appear to govern RDS management operations (the kind performed via the AWS CLI or API), so they are likely not the solution here. Thanks!
IAM roles and Security Groups are two totally different things that serve different purposes. You have to open the Security Group to allow any network traffic to reach the RDS server. Instead of whitelisting the IP address, whitelist the source Security Group in the inbound rule.
For example, if the RDS server is in Security Group 1 and the ECS service is in Security Group 2, you can enter the ID of Security Group 2 in the inbound access rule of Security Group 1. Then you don't have to worry about servers changing IP addresses.
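As a concrete sketch of that rule (both security group IDs are placeholders): allow MySQL traffic into the RDS instance's group from the group attached to the ECS tasks.
# sg-0123456789abcdef0 stands in for Security Group 1 (RDS),
# sg-0fedcba9876543210 for Security Group 2 (ECS).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3306 \
  --source-group sg-0fedcba9876543210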

AWS RDS MySQL How to restrict db access to a single EC2 instance

I recently inherited the maintenance of an AWS account and noticed that the database access is wide open to any network, anywhere! So I decided it must be as simple to lock down as it is with our own VMs. Except that on AWS, EC2 instances have an internal IP, a public IP, and sometimes an Elastic IP. I searched Google for a simple, quick write-up, and there doesn't seem to be one. So can someone please provide a simple write-up, here, on how to do this? I understand there are several approaches to RDS security; if you don't have the time or desire to cover them all, please just pick the one you like and have used for the example. If I don't get a good response on this within a day or so, I'll hit the docs and piece it together myself. Thank you in advance!
Well, I tinkered with it a bit; the docs are not very helpful. On an EC2 instance with an Elastic IP assigned, I had to allow the private IP in the security group applied to the RDS MySQL database; assigning or un-assigning the Elastic IP did not affect the connection. On an EC2 instance with no Elastic IP assigned, I had to allow the public IP; the private IP did not matter. This seems a bit strange to me.
An Amazon Relational Database Service (RDS) instance should typically be kept private to prevent access from the Internet. Only in rare circumstances should an RDS instance be accessible on the Internet.
An RDS instance can be secured in several ways:
1. Launch it in a Private Subnet
A Virtual Private Cloud (VPC) can be configured with public and private subnets. Launching the RDS instance in a private subnet will prevent access from the Internet. If access is still required across the Internet (e.g. from your corporate network), create a secure VPN connection between the VPC and your corporate network.
2. Use Security Groups
Security Groups operate like a firewall around each individual EC2 instance. They define which ports and IP address ranges are permitted for inbound and outbound access. By default, outbound access is permitted but inbound access is NOT permitted.
3. No Public IP address
If an RDS instance does NOT have a Public IP address, it cannot be directly accessed from the Internet.
4. Network Access Control Lists
These are like Security Groups, but they operate at the Subnet level. Good for controlling which app layers can talk to each other, but not good for securing specific EC2 or RDS instances.
Thus, for an RDS instance to be publicly accessible, it must have all of the following:
A public IP address
A Security Group permitting inbound access
A location in a public subnet
Open Network ACL rules
For your situation, I would recommend:
Modify the RDS instance and set PubliclyAccessible to False. This will remove the public IP address.
Create a new Security Group (I'll refer to it as "SG1") and assign it to the single EC2 instance that you want to allow to communicate with the RDS instance.
Modify the Security Group associated with the RDS instance and allow Inbound communication from SG1 (which permits communication from the EC2 instance). Note that this refers to the SG1 security group itself, rather than referring to any specific IP addresses.
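With the AWS CLI, those steps could look roughly like this; the instance identifier and security group IDs are placeholders (the first group is the one attached to the RDS instance, the second is SG1 on the EC2 instance):
# Remove the public IP address from the RDS instance.
aws rds modify-db-instance \
  --db-instance-identifier mydb \
  --no-publicly-accessible \
  --apply-immediately
# Allow MySQL from SG1 into the security group attached to the RDS instance.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaa1111aaa11111a \
  --protocol tcp --port 3306 \
  --source-group sg-0bbb2222bbb22222b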