VCN deletion due to bad CIDR - oracle-cloud-infrastructure

How do I terminate a subnet in a VCN for deletion?
The VCN cannot be terminated because there are associated resources in one or more compartments that you do not have access to. Learn more: Subnet or VCN Deletion.
(Conflict - The Subnet ocid1.subnet.oc1.af-johannesburg-1.aaaaaaaauq3ng7ox5zlhl6rgl46kxex6lrh3ebl5hx5llakr6l3qmu6jd4pa references the VNIC ocid1.vnic.oc1.af-johannesburg-1.abvg4ljr7ky5g5sjxp3ztuuqs3kr4w7c3sj3rtkby5psiuftktcuubqc2mmq. You must remove the reference to proceed with this operation.)

Related

Problem of connection between lambda function and MySQL on RDS

I am trying to access a MySQL database on Amazon RDS from an AWS Lambda Python function. After running a test, it gives a connection-failed error:
"errorMessage": "2019-05-27T15:14:26.967Z f6e8ae8d-1dfc-4be5-9e00-a2c937e4ca2c Task timed out after 3.00 seconds"
I believe this is caused by the configuration of the VPC, NAT, or security group.
I tried to follow:
AWS Lambda: Enable Outgoing Internet Access within VPC
Tutorial: Configuring a Lambda Function to Access Amazon RDS in an Amazon VPC - AWS Lambda
But it is still not working.
I have:
A default VPC with one Internet Gateway attached
2 subnets with IPv4 CIDR xxx.xx.0.0/20 (subnet001) and xxx.xx.16.0/20 (subnet002) associated with one route table and one Network ACL.
A NAT Gateway associated with subnet001
My question is:
According to these two tutorials, I will need one VPC and four subnets (1, 2, 3, 4): the first two subnets associated with the main route table, which routes to local and to the Internet Gateway, and the second two subnets associated with a "lambda-to-internet" route table, which routes to local and to a NAT Gateway.
The NAT Gateway should be associated with subnet 1. Am I correct?
And for the network ACL, should all four subnets be associated with the same ACL?
In the Lambda VPC settings, do I add all four subnets or only the last two?
import pymysql

rds_host = "my_host_name"
name = "my_username"
password = "my_password"
db_name = "my_db_name"

# Pass parameters by keyword; recent PyMySQL versions require it
conn = pymysql.connect(host=rds_host, user=name, password=password,
                       database=db_name, connect_timeout=5)
You have a lot of information in your question, so it is hard to reply to it all, but it seems that your basic question is how to allow the AWS Lambda function to connect to the Amazon RDS instance.
Your configuration will need to be:
The Lambda function configured to connect to the VPC (any subnet, it doesn't matter)
The Amazon RDS instance launched in the same VPC
A security group (Lambda-SG) on the Lambda function - it doesn't need any configuration, but it needs to exist
A security group (RDS-SG) on the Amazon RDS db instance that permits inbound traffic from Lambda-SG on port 3306
Please note that RDS-SG is permitting a connection from Lambda-SG. A security group can refer to another security group.
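For illustration, the same rule could be created with the AWS CLI; this is only a sketch, and both security group IDs are hypothetical placeholders:
# Allow inbound MySQL (3306) on RDS-SG (sg-22222222) from Lambda-SG (sg-11111111)
aws ec2 authorize-security-group-ingress --group-id sg-22222222 --protocol tcp --port 3306 --source-group sg-11111111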
Also, increase the timeout for your Lambda function. It is currently set to 3 seconds, which might not be enough to accomplish what you are trying to do. The connection timeout is set to 5 seconds, so you will need the Lambda function to run longer than this time to confirm what is happening.
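If you prefer the command line to the console for this, a hedged sketch (the function name is a placeholder):
# Raise the Lambda timeout from 3 to 30 seconds
aws lambda update-function-configuration --function-name my-rds-function --timeout 30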
That's all you need. You don't need a NAT Gateway for this setup; it is only required if you want the Lambda function to be connected to the VPC AND have the ability to connect to the Internet. You only need one subnet (for the RDS db instance) for what you have described, though your architecture might require more for other resources you are using. Even the Internet Gateway is not needed for the Lambda and RDS pieces. But focus on getting the Lambda-RDS connection working first; you can clean things up afterwards.

Switching an existing private RDS MySQL instance to be publicly accessible

I have an existing RDS instance running mysql inside a VPC
It is (was all along) associated with the public subnets of the VPC
The public subnets all share one "public" route table that has a Internet Gateway (IGW) attached (they are explicitly attached to this route table)
It was not originally, but now is set to be publicly accessible
It has a security group that allows access from outside; I tried both wide open and a specific IP here.
Yet when I attempt to connect, the connection just times out:
$ mysql -usomeuser -h somename.us-east-1.rds.amazonaws.com
ERROR 2003 (HY000): Can't connect to MySQL server on 'somename.us-east-1.rds.amazonaws.com' (60)
What am I missing? Anyone?
Assuming that your RDS instance is truly public:
Is in a subnet with an internet gateway
Using a security group that is appropriately open to the internet
Your RDS instance needs to have its "Publicly Accessible" setting set to Yes/True. If it is No/False, then resolving your DNS entry of somename.us-east-1.rds.amazonaws.com will result in its private IP address, which will not work from the outside world.
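As a sketch, assuming the AWS CLI and the instance identifier behind your endpoint, you could flip the flag and then check what the name resolves to from outside the VPC:
# Make the instance publicly accessible (identifier is a placeholder)
aws rds modify-db-instance --db-instance-identifier somename --publicly-accessible --apply-immediately
# From outside the VPC this should now return a public IP
dig +short somename.us-east-1.rds.amazonaws.com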

Google Compute VM hacked, now what?

I've been running my Google Compute VM for literally one day, and I was hacked by this IP: http://www.infobyip.com/ip-121.8.187.25.html
I'm trying to understand what I can do next (the user connected via SSH and the root password was changed) to avoid these types of attacks, and to understand more than what /var/log/auth.log is telling me.
I assume you already deleted the instance from the Developers Console, right?
As suggested, always use SSH RSA keys to connect to your instance instead of passwords. Additionally, depending on where you want access from, you can allow only certain IPs through the firewall. Configuring the firewall along with iptables gives you better security.
You may also want to take a look at sshguard. Sshguard will add iptables rules automatically when it detects a number of failed connection attempts.
Also, to be safe, change the default port 22 in /etc/ssh/sshd_config to something else.
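A minimal sshd_config sketch covering these suggestions (the port number is only an example; restart sshd afterwards):
# /etc/ssh/sshd_config
# Move SSH off the default port 22
Port 2222
# Disable direct root logins and password authentication (keys only)
PermitRootLogin no
PasswordAuthentication no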

Amazon AWS RDS: how to make the database Publicly Accessible to the internet

I have a database running inside AWS, region South America (Sao Paulo) that I could access with no problems from anywhere in the internet.
Then I wanted to create the same database on US East (North Virginia), but I wasn't able to access it from the internet. I compared creating a database in both regions to see the differences and noticed the US East region doesn't list any VPC to make it available to the internet.
I've been trying to create this VPC with a DB subnet, etc., but with no success! Does anybody know what steps I need to take to make the database available to the internet?
Thanks!
First, make sure that you have a DB subnet group in your VPC with an associated VPC subnet in each of the Availability Zones, then:
Create two subnets within the VPC, one in each of two different AZs, for DB use (take note of the subnet IDs).
From RDS, create a "Subnet Group" and add the two subnets to it, one from each AZ, to cover Multi-AZ deployments. Now the "Choose a VPC" dropdown should be available when you create a new RDS instance; an equivalent CLI sketch follows below.
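For reference, a hedged AWS CLI equivalent of the subnet group step (the group name and subnet IDs are hypothetical):
# Create a DB subnet group from two subnets in different AZs
aws rds create-db-subnet-group --db-subnet-group-name my-db-subnets --db-subnet-group-description "Subnets for RDS" --subnet-ids subnet-11111111 subnet-22222222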
ANSWER FOR YOUR SECOND QUESTION:
Q. Why are there only 251 IPs available when I created the subnet as 172.31.0.0/24?
A. When you create each subnet, you provide the VPC ID and the CIDR block you want for the subnet. After you create a subnet, you can't change its CIDR block. The subnet's CIDR block can be the same as the VPC's CIDR block (assuming you want only a single subnet in the VPC), or a subset of the VPC's CIDR block. If you create more than one subnet in a VPC, the subnets' CIDR blocks must not overlap. The smallest subnet (and VPC) you can create uses a /28 netmask (16 IP addresses), and the largest uses a /16 netmask (65,536 IP addresses).
Important
AWS reserves both the first four and the last IP address in each subnet's CIDR block. They're not available for use. (A /24 contains 256 addresses; with these 5 reserved, 251 remain, which is why you see 251 available IPs.)
If you add more than one subnet to a VPC, they're set up in a star topology with a logical router in the middle. By default, you can create up to 20 subnets in a VPC. If you need more than 20 subnets, you can request more by going to 'Request to Increase Amazon VPC Limits'
I had this same issue and I found the following alternative (instead of recreating my RDS instance and setting the "Publicly Accessible" setting to "Yes"). This involves setting up an SSH tunnel then connecting to the RDS instance via that tunnel:
Setup SSH Tunnel:
ssh -N -L 3306:RDS_HOST:3306 USER@EC2HOST -i SSH-KEY &
Connect to the RDS instance:
mysql -u rdsuser -p -h 127.0.0.1
source: http://thekeesh.com/2014/01/connecting-to-a-rds-server-from-a-local-computer-using-ssh-tunneling-on-a-mac/#comment-27252

Permission denied (publickey,gssapi-keyex,gssapi-with-mic)

After creating the instance, I can't log in using gcutil or ssh. I tried copy/pasting from the ssh link listed at the bottom of the instance and get the same error message.
The permission denied error probably indicates that SSH private key authentication has failed. Assuming that you're using an image derived from the Debian or CentOS images recommended by gcutil, it's likely one of the following:
You don't have any ssh keys loaded into your ssh keychain, and you haven't specified a private ssh key with the -i option.
None of your ssh keys match the entries in .ssh/authorized_keys for the account you're attempting to log in to.
You're attempting to log into an account that doesn't exist on the machine, or attempting to log in as root. (The default images disable direct root login – most ssh brute-force attacks are against root or other well-known accounts with weak passwords.)
How to determine what accounts and keys are on the instance:
There's a script that runs every minute on the standard Compute Engine CentOS and Debian images which fetches the "sshKeys" metadata entry from the metadata server and creates accounts (with sudoers access) as necessary. This script expects entries of the form "account:<public key>", one per line, in the sshKeys metadata, and can put several entries into authorized_keys for a single account (or create multiple accounts if desired).
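For illustration, a single sshKeys entry might look like this (the username and key material are made up):
myuser:ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...truncated... myuser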
In recent versions of the image, this script sends its output to the serial port via syslog, as well as to the local logs on the machine. You can read the last 1MB of serial port output via gcutil getserialportoutput, which can be handy when the machine isn't responding via SSH.
How gcutil ssh works:
gcutil ssh does the following:
Looks for a key in $HOME/.ssh/google_compute_engine, and calls ssh-keygen to create one if not present.
Checks the current contents of the project metadata entry for sshKeys for an entry that looks like ${USER}:$(cat $HOME/.ssh/google_compute_engine.pub)
If no such entry exists, adds that entry to the project metadata, and waits for up to 5 minutes for the metadata change to propagate and for the script inside the VM to notice the new entry and create the new account.
Once the new entry is in place (or immediately, if the user:key was already present), gcutil ssh invokes ssh with a few command-line arguments to connect to the VM.
A few ways this could break down, and what you might be able to do to fix them:
If you've removed or modified the scripts that read sshKeys, the console and command line tool won't realize that modifying sshKeys doesn't work, and a lot of the automatic magic above can get broken.
If you're trying to use raw ssh, it may not find your .ssh/google_compute_engine key. You can fix this by using gcutil ssh, or by copying your ssh public key (ends in .pub) and adding it to the sshKeys entry for the project or instance in the console. (You'll also need to put in a username, probably the same as your local-machine account name.)
If you've never used gcutil ssh, you probably don't have a .ssh/google_compute_engine.pub file. You can either use ssh-keygen to create a new SSH public/private keypair and add it to sshKeys, as above (see the sketch after this list), or use gcutil ssh to create them and manage sshKeys.
If you're mostly using the console, it's possible that the account name in the sshKeys entry doesn't match your local username; you may need to supply the -l argument to ssh.
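As a sketch of the manual route mentioned above (the key path and comment follow the gcutil convention, but treat this as an example):
# Create the keypair that gcutil expects
ssh-keygen -t rsa -f ~/.ssh/google_compute_engine -C $USER
# Then paste ~/.ssh/google_compute_engine.pub, prefixed with "yourusername:",
# into the project's sshKeys metadata in the console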
Ensure that the permissions on your home directory, and on the home directory of the user on the host you're connecting to, are set to 700 (owning user rwx only, to prevent others from seeing the .ssh subdirectory).
Then ensure that the ~/.ssh directory is also 700 (user rwx) and that authorized_keys is 600 (user rw).
Private keys in your ~/.ssh directory should be 600 or 400 (user rw or user r), as in the sketch below.
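A quick sketch of the corresponding commands (the private key name id_rsa is an assumption; use whatever key you actually have):
chmod 700 ~ ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa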
I was facing this issue for a long time. Finally, it turned out to be an ssh-add issue: my SSH credentials were not being taken into consideration.
The following command might work for you:
ssh-add
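For example, assuming your key lives at the conventional Compute Engine path (adjust for your setup):
# Start an agent if one isn't running, load the key, then list loaded keys
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/google_compute_engine
ssh-add -l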
I had the same problem, and for some reason the sshKeys metadata was not syncing up with my user on the instance.
I created another user by adding --ssh_user=anotheruser to the gcutil command.
The gcutil command looked like this:
gcutil --service_version="v1" --project="project" --ssh_user=anotheruser ssh --zone="us-central1-a" "inst1"
I just experienced a similar message [mine was "Permission denied (publickey)"] after connecting to a Compute Engine VM which I had just created. After reading this post, I decided to try it again.
That time it worked. So I see three possible reasons for it working the second time:
connecting the second time resolves the problem (after the ssh key was created the first time), or
perhaps trying to connect to a compute engine immediately after it was created could also cause a problem which resolves itself after a while, or
merely reading this post resolves the problem
I suspect the last is unlikely :)
I found this error while connecting to an EC2 instance with SSH; it comes up if I use the wrong user name. For example, for Ubuntu I need to use ubuntu as the user name, and for other AMIs I need to use ec2-user, as in the examples below.
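For example (the key file and host names are placeholders):
ssh -i my-key.pem ubuntu@ec2-host
ssh -i my-key.pem ec2-user@ec2-host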
You haven't accepted an answer, so here's what worked for me in PuTTY: without allowing username changes, I got this question's subject as the error on the gateway machine.
You need to follow these instructions:
https://cloud.google.com/compute/docs/instances/connecting-to-instance#generatesshkeypair
If you get "Permission denied (publickey)." with the following command:
ssh -i ~/.ssh/my-ssh-key [USERNAME]@[IP_ADDRESS]
you need to modify the /etc/ssh/sshd_config file and add the line
AllowUsers [USERNAME]
Then restart the ssh service with
service ssh restart
If you get the message "Could not load host key: /etc/ssh/ssh_host_ed25519_key", execute:
ssh-keygen -A
and finally restart the ssh service again.
service ssh restart
I followed everything from here:
https://cloud.google.com/compute/docs/instances/connecting-to-instance#generatesshkeypair
But there was still an error, and the SSH keys in my instance metadata weren't getting recognized.
Solution: check whether your SSH key has any newlines in it. When I copied my public key using cat, newlines were added into the key, breaking it. I had to manually check for line breaks and correct them; a quick check is shown below.
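One quick sanity check, assuming the usual id_rsa.pub path: the public key file should be a single line, so this should print 1:
wc -l ~/.ssh/id_rsa.pub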
The trick here is to use the -C (comment) parameter to specify your GCE userid. It looks like Google introduced this change late in 2018.
If the Google user who owns the GCE instance is myname@gmail.com (which you will use as your login userid), then generate the key pair with (for example):
ssh-keygen -b 521 -t ecdsa -C myname -f mykeypair
When you paste mykeypair.pub into the instance's public key list, you should see "myname" appear as the userid of the key.
Setting this up will let you use ssh, scp, etc from your command line.
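For example, reusing the names from above (the instance IP is a placeholder):
ssh -i mykeypair myname@INSTANCE_IP
scp -i mykeypair somefile myname@INSTANCE_IP:/tmp/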
Add an SSH public key to Google Cloud:
cat ~/.ssh/id_rsa.pub
Go to your VM instances and click your instance
Edit the VM instance
Add the SSH public key (from id_rsa.pub) in the "SSH keys" area
Then log in over SSH from Git Bash on your computer:
ssh -i ~/.ssh/id_rsa tiennt@x.y.z.120