Can't connect to AWS Aurora cluster endpoint but can access Writer instance - MySQL

I have a MySQL Aurora cluster set up on AWS. For the last few weeks I have had all of my apps pointing to an instance endpoint, and it has been working fine. Yesterday, however, I started getting errors on inserts/updates saying that the instance was in ReadOnly mode and couldn't be updated.
Apparently the Reader/Writer roles can move between instances, and what I am really supposed to do is point to the cluster endpoint, which will route the request appropriately. I have tried pointing directly to that cluster endpoint, but it always fails. The error message is fairly generic, telling me to check my username/password, make sure I am not blocked by a firewall, and all of the usual default solutions.
My cluster is in a VPC, but the subnets assigned to the cluster are public (they are routed through an Internet Gateway).
The reader/writer instances have the same Security Group and VPC configuration. I can connect to the Reader instance (read-only) but not the Writer instance.
Any idea what else I could look for? Most forums say that I need to check my routing tables or security groups, but from what I can tell they are all open to all traffic (I realize that is a bad configuration; I am just trying to get this working). Is there anything else that I should be checking?
Thanks
Update
I can Telnet in to the Reader instance, but not the Writer instance. They are in the same VPC, and both use the public subnet as far as I can tell.
Update 2
My Lambda functions that are in the same VPC as my RDS can access the cluster endpoint, so I guess it's just a problem getting in from outside. I thought that would be resolved by having a public subnet in the VPC, but it doesn't seem to work for that endpoint.

Merely having public subnets is not enough; you need to explicitly enable public accessibility on your DB instances. Public Accessibility is an instance-level setting, and you need to turn it ON on all of the instances in your cluster. Given your symptoms, I suspect you have enabled public access on some of your instances but not on others. You can check this via the CLI using the describe-db-instances API, filtering or searching for PubliclyAccessible. Here is an example:
aws rds describe-db-instances --region us-west-2 --output json --query 'DBInstances[?Engine==`aurora`].{instance:DBInstanceIdentifier, cluster:DBClusterIdentifier, isPublic:PubliclyAccessible }'
[
    {
        "instance": "karthik1",
        "isPublic": true,
        "cluster": "karthik-cluster"
    },
    {
        "instance": "karthik2",
        "isPublic": false,
        "cluster": "karthik-cluster"
    }
]
You can then enable public access on an instance using the modify-db-instance API.
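As a sketch with the AWS CLI (using the instance name from the output above; note the change may briefly interrupt connections when applied immediately):
# Turn on Public Accessibility for the non-public instance
aws rds modify-db-instance --db-instance-identifier karthik2 --publicly-accessible --apply-immediately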
Hope this helps.

Related

Is there a way to have multiple external IP addresses with Elastic Beanstalk?

I'm using Amazon Elastic Beanstalk with a VPC and I want to have multiple environments (workers) with different IP addresses. I don't need them to be static, I would actually prefer them to change regularly if possible.
Is there a way to have multiple environments with dynamic external IP addresses?
It's hard to understand the use case for wanting to change the instance IP addresses of an Elastic Beanstalk environment. The fundamental advantage of a managed service like Elastic Beanstalk is abstraction over the underlying architecture of a deployment. You are given a CNAME to access the environment's (your application's) API, and you shouldn't rely on the internal IP addresses or Load Balancer URLs for anything, as they can be added or removed by the Beanstalk service at will.
That being said, there is a way that you can achieve having changing IPs for the underlying instances.
Elastic Beanstalk's Rebuild Environment destroys the existing resources, including EC2 instances, and creates new resources, so your instances end up with new IP addresses. This works provided that a scheduled downtime (of a few minutes, depending on your resources) is not a problem for your use case.
You can use one of the following two ways to schedule an environment rebuild:
Solution 1:
You can schedule your Rebuild Environment using a simple Lambda function.
import boto3

# Environment IDs of the environments to rebuild (placeholder value)
env_ids = ['e-awsenvidid']

client = boto3.client('elasticbeanstalk')

def handler(event, context):
    for env_id in env_ids:
        try:
            # Kicks off a full rebuild of the environment's resources
            client.rebuild_environment(EnvironmentId=env_id.strip())
            print('Rebuilding environment %s' % env_id)
        except Exception as e:
            print('Failed to rebuild environment %s: %s' % (env_id, e))
In order to do this you will have to create an IAM role with the required permissions.
You can find a comprehensive guide in this AWS Guide.
Solution 2:
You can use a cron job to rebuild the environment using aws-cli. You can follow the steps below to achieve this.
Create EC2 instance
Create IAM Role with permission to rebuild environment
The following example policy would work
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "elasticbeanstalk:RebuildEnvironment"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
Attach the IAM role to the EC2 instance
Add a cron job using the command crontab -e
The following example cron job rebuilds the environment at 12:00 AM on the 1st of every month:
0 0 1 * * aws elasticbeanstalk rebuild-environment --environment-name my-environment-name
Save the cronjob and exit.
It is not recommended to rebuild the environment unnecessarily, but as of now there is no explicit way to achieve your particular requirement. So hope this helps!
Further Reading:
https://docs.aws.amazon.com/cli/latest/reference/elasticbeanstalk/rebuild-environment.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-management-rebuild.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html
https://awspolicygen.s3.amazonaws.com/policygen.html

Unable to connect to MySQL instance running on AWS EC2 from AWS Lambda function

I am writing an AWS Lambda function to connect to a MySQL instance running on EC2. I have associated the Lambda function with the same subnet and security group that the EC2 instance is configured in. I checked the Lambda function's IAM roles, and it has the AWSLambdaVPCAccessExecutionRole policy attached. However, I am still not able to connect to the MySQL instance.
I tried allowing traffic from anywhere and that worked, but now I am not sure how to connect to the MySQL instance with stricter security rules.
I am using Kotlin to write my lambda function and using serverless to deploy changes to lambda.
I have tried every possible solution available online to make this happen but I haven't had any positive results yet.
You have associated the Lambda function with the same security group, but just that will not do. You also need to add an ingress rule to allow traffic to the security group from itself. Basically, you need to self-reference the security group.
Add a rule to allow traffic on the MySQL port from sg-xxxxxxxx.
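As a sketch with the AWS CLI (sg-xxxxxxxx stands in for your actual security group ID), the self-referencing rule looks something like this:
# Allow MySQL traffic from members of the same security group
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3306 --source-group sg-xxxxxxxx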

AWS RDS Writer Endpoint vs Reader Endpoint

I created an Amazon Aurora instance in my VPC. When the instance was created, it came with 2 endpoints, a writer and a reader endpoint.
The instance is using a security group with an ingress rule (Type: All Traffic, Protocol: All, Port: All, Source: 0.0.0.0/0).
I tried both MySQL Workbench and MySQL monitor command interface to connect to the endpoints.
The connection to the Reader endpoint worked, but the connection to the Writer endpoint didn't. The Reader endpoint was read-only, so I was unable to build my DB using it.
Any idea?
An Aurora cluster instance is either a writer or a reader. Aurora clusters allow one writer and up to 15 readers, and an instance's role can change when a failover happens.
The cluster writer endpoint is a DNS name that always resolves to the current writer instance.
The cluster reader endpoint is a DNS name that randomly resolves to one of the reader instances, with TTL=1. (Note: it points to the writer instance only when the writer is the only healthy instance available in the cluster fleet.)
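Because of the short TTL, you can watch the reader endpoint rotate between readers with a plain DNS lookup (the hostname below is a placeholder; substitute your cluster's reader endpoint):
# Resolve the reader endpoint; repeat the lookup to see it move between readers
dig +short mycluster.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com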
In the comments, the author mentions that it sometimes worked when they recreated the cluster and sometimes didn't. My suggestion was to review the network setup of the account.
The instances created share the same Security Group, so based on your scenario where one of them is functional, we can assume the SG is properly configured.
Each instance (reader/writer) is located in a different Availability Zone, which means each instance is in a different subnet. It's possible that one of the subnets is not configured properly (either with improper NACL rules or incorrect routing) and the non-functional instance is placed in that subnet. Since the allocation is dynamic every time you create the cluster, this could create the on-and-off scenario.
Which subnets are used by an Aurora cluster depends on the RDS Subnet Group. This information is available in the cluster console > select each DB Identifier > Connectivity & Security > Subnet group, and use that value in the Subnet Group console (in the left menu). Ideally, all subnets should have the same NACL rules and be associated with the same Route Table (both in the VPC Console).
Side note: having your Security Group open to All Traffic from All Sources (0.0.0.0/0) is a security risk. Please evaluate narrowing down your ingress access.

AWS: can't connect to RDS database from my machine

The EC2 instance/live web app can connect just fine to the RDS database. But when I want to debug the code on my local machine, I can't connect to the database and get this error:
OperationalError: (2003, "Can't connect to MySQL server on 'aa9jliuygesv4w.c03i1ck3o0us.us-east-1.rds.amazonaws.com' (10060)")
I've added the .pem and .ppk keys to .ssh and I have already configured the EB CLI. I don't know what I should do anymore.
FYI: The app is in Django
It turns out it is not that hard. Follow these steps:
Go to the EC2 Dashboard
Go to the Security Groups tab
Select only the RDS database security group. You'll see the security group details at the bottom
Click the Inbound tab
Click the Edit button
Add Type: MYSQL/Aurora; Protocol: TCP; Port Range: 3306; Source: 0.0.0.0/0
MAKE SURE PUBLIC ACCESSIBILITY IS SET TO YES
This is what I spent the last 3 days trying to solve...
Instructions to change Public Accessibility
Accept traffic from any IP address
After creating an RDS instance my security group inbound rule was set to a specific IP address. I had to edit inbound rules to allow access from any IP address.
"Security group rules"
Select a security group
Click "Inbound Rules"
Click "Edit Inbound Rules"
Under "Source" Select the Dropdown and click "Anywhere"
::0 or 0.0.0.0/0 Should appear.
Click "Save Rules"
Just burned two hours going through the great solutions on this page. Time for the stupid answer!
I redid my Security Groups, VPCs, Routing Tables, Subnets, Gateways... NOPE. I had copy-pasted the URL from the AWS Console, which in some cases results in a hidden trailing space. The endpoint is in a <div> element, which the browser turns into a \n when copying. Pasting this into the IntelliJ DB connector coerces it to a space.
I only noticed the problem after pasting the URL into a quote string in my source code.
Make sure that your VPC and subnets are wide enough.
The following CIDR configuration works great for two subnets:
VPC: 10.0.0.0/16 (10.0.0.0 to 10.0.255.255, 65536 addresses)
Subnet 1: 10.0.0.0/17 (10.0.0.0 to 10.0.127.255, 32768 addresses, one half)
Subnet 2: 10.0.128.0/17 (10.0.128.0 to 10.0.255.255, 32768 addresses, the other half)
Adjust it if you need three subnets.
I wasn't able to connect to my RDS database. I manually reviewed every detail and everything looked alright. There were no indications of any issues whatsoever, and I couldn't find any suitable information in the documentation. My VPC was configured with a narrow CIDR, 10.0.0.0/22, and each subnet had 255 addresses. After I changed the CIDR to 10.0.0.0/16 and split it entirely between two subnets, my RDS connection started working. It was pure luck that I managed to find the source of the problem, because it still doesn't make any sense to me.
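A minimal sketch of that layout with the AWS CLI (the VPC ID is a placeholder; in practice you would read it from the create-vpc output):
# Create a /16 VPC and split it into two /17 subnets
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.0.0/17
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.128.0/17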
Almost everyone has pointed out the answers; I will put it in a different perspective so that you can understand.
There are two ways to connect to your AWS RDS:
You provision an instance in the same VPC and subnet. If you install the workbench there, you will be able to connect to the DB without making it publicly accessible. Example: you can provision a Windows instance in the same VPC group, install the workbench, and connect to the DB via its endpoint.
The other way is to make the DB publicly accessible, but to your IP only, to prevent unwanted access. You can change the DB security group to allow traffic on the DB port from your IP only. This way your DB is publicly accessible, but only to you. This is how we do it for various AWS services: we add their security group in the source part of the SG.
If neither option works, then the error is in the VPC routing table: check whether it is associated with the subnet and whether the internet gateway is attached.
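For the first option, connecting from inside the VPC is just a normal client connection; a sketch with a placeholder endpoint and user:
# Connect to the cluster endpoint from an instance inside the VPC
mysql -h mycluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com -P 3306 -u admin -p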
You can watch this video it will clear your doubts:
https://youtu.be/e18NqiWeCHw
In my case, when I upgraded the instance size, the private address of the RDS instance fell into a private subnet of the VPC. You can use the article "My instance is in a private subnet, and I can't connect to it from my local computer" to find out your DB instance's address.
However, changing the route table didn't fix my issue. What finally solved my problem was downgrading the size and then upgrading it back. Once the private address fell back into the public subnet, everything worked like a charm.
I was also not able to connect, even from inside an EC2 instance.
After digging through the AWS RDS options, it turns out that EC2 instances can only connect to an RDS instance in the same VPC they are in.
When I created an EC2 instance in the same VPC as the RDS, I could access it as expected.
Do not forget to check whether your VPN or firewall is blocking the connection.
The ideal debugging checklist is:
Instance's "Publicly Accessible" property should be enabled
The security group attached to the instance should have open inbound rules (as open as you'd want)
The funny part: if you're still not able to access it, then the problem surely is that your instance lies in a private subnet of the respective VPC.
However, there are more secure ways to access your RDS instance. The best bet would be not to make it publicly accessible at all: lock down the security groups and use a P2P relay endpoint (think Tailscale).
In case you've tried all the answers above, try this:
Recreate the database.
AWS provides an option at database creation time to allow public or private access.
I'm sure it's not the proper answer, but I added the internet gateway to all my private subnet route tables, even though both the private subnets and the public subnets are in the subnet group.
For me none of the above worked.
What did work was creating a peering connection between my default VPC and the VPC in which the database was created, as it appears that when connecting to resources in AWS, traffic automatically goes through the default VPC.
Then set up routing through the peering connection between the two VPCs. Also, make sure that your security group permits the Postgres port from your default VPC CIDR block as well. And finally, make sure all the subnets are associated with the route table that routes through this peering connection.
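A rough sketch of those steps with the AWS CLI (every ID and the CIDR block below are placeholders):
# Request and accept a peering connection between the default VPC and the database's VPC
aws ec2 create-vpc-peering-connection --vpc-id vpc-0123456789abcdef0 --peer-vpc-id vpc-0fedcba9876543210
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0
# Route traffic for the other VPC's CIDR block through the peering connection
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0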

Querying GCE instance properties from the VM itself

I want to be able to query the external IP address of a GCE instance when the instance starts up. I'm planning to use that to fix up some configs which are copied to multiple similar instances. Is there a way to automatically discover an instance's external IP(s) or other properties from the instance itself? I see there are some things you can query with the gcloud tool, but for that you have to know the instance name, and it's not clear where to get that from.
See Querying metadata in GCE public documentation. For example, for the instance's external IP:
curl http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip/ -H "Metadata-Flavor: Google"
This command queries the instance's private metadata server. Another option is to configure the instance's service account with the right scopes, as described in Preparing an instance to use service accounts in the public documentation. This way, the gcloud command can be used directly on the instance to get information from the project without extra authentication.
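As for where the instance name comes from: the same metadata server exposes it, so a lookup along these lines should work:
# Query the instance's own name from the metadata server
curl http://metadata/computeMetadata/v1/instance/name -H "Metadata-Flavor: Google"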