Unable to terminate a subnet on Oracle Cloud - oracle-cloud-infrastructure

I am trying to terminate a subnet on oracle cloud but I get this issue:
The Subnet ocid1.subnet.oc1.iad.aaaaaaaaxzhssdvwh4pc2pk7uwpyokeluklqkdqcjnsib7idmdp7wkh7o5iq references the VNIC ocid1.vnic.oc1.iad.abuwcljs2ggk7yk4fpvvytdt74ya7wolg3xwoz4nauryrifd3dl4t4cfbonq. You must remove the reference to proceed with this operation.
I haven't been able to find out how to remove the VNIC reference.
Thx

FINALLY FIGURED IT OUT
I have figured it out. The disadvantage of the Oracle Cloud free tier I am on is that you don't get any support from Oracle, so here are the steps I took to solve the issue (a CLI sketch follows the list):
delete/terminate the internet gateway associated with the Virtual Cloud Network
also delete/terminate the service gateway associated with the VCN/Virtual Cloud Network
terminate the NAT Gateway
remove the security lists (the default security list can stay)
remove the route tables (the default route table can stay)
terminate the load balancer associated with the subnet
NOW you can terminate the subnet
finally, terminate the VCN
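For reference, roughly the same teardown can be scripted with the OCI CLI. This is only a sketch: the OCIDs are placeholders and the flag names are from memory, so double-check each subcommand with --help before running it.
# delete the gateways attached to the VCN
oci network internet-gateway delete --ig-id <internet_gateway_ocid>
oci network service-gateway delete --service-gateway-id <service_gateway_ocid>
oci network nat-gateway delete --nat-gateway-id <nat_gateway_ocid>
# delete the non-default security lists and route tables
oci network security-list delete --security-list-id <security_list_ocid>
oci network route-table delete --rt-id <route_table_ocid>
# delete the load balancer, then the subnet, then the VCN
oci lb load-balancer delete --load-balancer-id <load_balancer_ocid>
oci network subnet delete --subnet-id <subnet_ocid>
oci network vcn delete --vcn-id <vcn_ocid>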
helpful links (these will help, but you might have to figure some of it out yourself if you set up your VCN differently than how it is shown in the docs):
https://docs.oracle.com/en-us/iaas/Content/Network/Troubleshoot/subnetdelete.htm
https://docs.oracle.com/en-us/iaas/Content/File/Troubleshooting/orphanedmounttarget.htm#Cannot_Delete_VCN_Mount_Target_VNIC_Still_Attached

Related

My google cloud instance lost network connectivity

My google cloud instance (10.128.0.3) lost network connectivity somewhere just after 0400 this AM. I am running CentOS 6.10. The network interfaces are up and have IP addresses, but I am unable to ping the default gateway (10.128.0.1). Firewall rules (Google and local) have not been changed/modified. This instance has been online for several years with no recent changes made. Any suggestions would be helpful and appreciated.
This is a known issue when updating to kernel 2.6.32-754 that is affecting both Red Hat and CentOS images, and it seems related to this DHCP update. The Compute Engine team is already aware of this issue.
Meanwhile, and in addition to the great suggestions above, you may also use a startup script (to re-add the default gateway IP address) to fix this issue, and then restart your instance. To do so without access to the instance, simply add metadata to the instance with the key startup-script and the content of the script below (make sure to replace the gateway with yours; it can be found on the VPC page):
#!/bin/bash
route add default gw [default_gateway_ip] eth0
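If you would rather do this from Cloud Shell than the console, something along these lines should attach the script as instance metadata and reboot the VM (the instance name, zone and file name are placeholders):
# save the script above as startup.sh, then attach it to the instance
gcloud compute instances add-metadata my-instance --zone us-central1-a --metadata-from-file startup-script=startup.sh
# restart the instance so the startup script runs
gcloud compute instances reset my-instance --zone us-central1-a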
For further information/updates about this issue, you may check this issue tracker link. https://issuetracker.google.com/issues/111154121

Google Cloud - Adding additional Internal IP to VM

I'm trying to build a webserver in Google Cloud Platform that hosts multiple websites (GBP, IE, FR, DK etc.)
Generally, we assign a range of IPs to the server statically, set the bindings in IIS, then load balance using a virtual IP.
It seems near enough impossible to assign another internal IP in GCP. Lots of guides about additional external IPs, but we don't want a public facing webserver like this.
Anybody have any idea on how to add additional internal IPs to a VM / Instance?
Also, I have tried changing the internal address assigned to the instance to static in the network adapter settings; next thing I know I can't access my VM for love nor money and had to delete and re-create it. If I go into advanced settings to add additional static IPs, we're apparently set to DHCP, so I can't add additional IPs.
Thanks all.
Answer that I received from the GCE discussion group, in Google Groups:
"You can add additional internal IP addresses to a VM instance. This is possible by enabling IP forwarding for the VM, creating a static network route, adding appropriate firewall rules, and setting additional internal IP addresses to network adapter of Windows. These steps are described in this article for Linux machines (https://cloud.google.com/compute/docs/networking#set_a_static_target_ip_address). The same steps are valid for Windows VMs. You will need to keep the initial internal IP address, subnet mask, gateway address and DNS settings of the adapter and manually enter them in properties of IPv4 of the network adapter. The below is a screenshot of my configuration on a VM instance (Windows 2008 R2) that perfectly works."
Update:
Now you can create instances with multiple network interfaces on Google Compute Engine and assign IPs to each. For more information, refer to this public documentation link. However, it currently has the following limitations (see the sketch after this list):
Alias IP ranges are not supported on any network interface of a VM that has multiple network interfaces enabled.
You cannot modify or delete the network interfaces after the VM has been created.
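As a rough sketch of what the multi-NIC create looks like with gcloud (the instance, networks and subnets below are placeholders; each interface has to live in a different VPC network, and the number of NICs is fixed at creation time):
gcloud compute instances create multi-nic-vm --zone us-central1-a \
    --network-interface network=vpc-frontend,subnet=subnet-frontend \
    --network-interface network=vpc-backend,subnet=subnet-backend,no-address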

Google Cloud Service Account Not Found

Our team is trying to troubleshoot an issue we have been encountering with service accounts. The service account we are using is able to create a disk and an IP address; however, an error is thrown when an instance request is created. All resources can be listed (i.e. networks, snapshots, etc.). I have attached a small console snippet below.
The service account is successfully authenticated with the JSON key given to me. I have tried altering the permissions of the service account and creating a new key.
Any assistance is greatly appreciated.
Created [https://www.googleapis.com/compute/v1/projects/<PROJECT>/zones/asia-east1-c/disks/dev-josh-ui-test-08].
Created [https://www.googleapis.com/compute/v1/projects/<PROJECT>/regions/asia-east1/addresses/dev-josh-ui-test-08-ip].
ERROR: (gcloud.compute.instances.create) Some requests did not succeed:
- The resource '<ID>-compute#developer.gserviceaccount.com' of type 'serviceAccount' was not found.
I was able to get the exact error provided:
The resource '-compute#developer.gserviceaccount.com' of type 'serviceAccount' was not found.
by deleting my default compute service account and attempting to create an instance through the Cloud Shell, so I assume this is the issue.
If the default compute service account was somehow deleted and it has been less than 30 days, you can restore it using: gcloud beta iam service-accounts undelete [ACCOUNT_ID]
https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting
After this, you will have to go into https://console.cloud.google.com/apis/dashboard and disable and re-enable the Compute Engine API. This will take a few moments, but after the GCE API is re-enabled you should be able to create VMs through the Cloud Shell again; I was able to reproduce this.
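For the record, the same recovery can be done from Cloud Shell. This is only a sketch: the numeric account ID below is a placeholder, and disabling the Compute Engine API is disruptive to anything running on it, so do this with care.
# restore the deleted default compute service account (use its numeric unique ID, not the email)
gcloud beta iam service-accounts undelete 123456789012345678901
# disable and re-enable the Compute Engine API so its default resources get recreated
gcloud services disable compute.googleapis.com --force
gcloud services enable compute.googleapis.com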
On https://console.cloud.google.com/apis/dashboard, disable the "Google Compute Engine API" and then enable it again.
Enabling it also creates some additional setup that is needed to use the API; those resources could have been deleted by accident beforehand.
You might need to have some patience and wait a minute or two between disabling and enabling.

Cloud SQL Connection + Auto Scaling

Per this, Cloud SQL requires the external IP address of the client in order to allow connections to it. The other suggested way is the SQL proxy, with a big disclaimer that the method may change over time.
Question: If I am auto scaling compute engine VMs running webservers, do I need to assign them all external IPs and then go set those in the Cloud SQL instance? Or am I missing something huge? Noob question perhaps, thanks for reading through.
The recommended way is to use the Cloud SQL proxy (but if you really don't want to use it you would need to add static IPs to your GCE VMs and whitelist them on the Cloud SQL instance).
Also, you can set up a single VM instance running cloud_sql_proxy and have it listen on your subnet interface (for example), which makes it possible for any new VM instance to connect through the one running the proxy.
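A minimal sketch of that proxy-on-one-VM setup, using the classic v1 cloud_sql_proxy binary (the connection name, internal IP and user are placeholders; the newer cloud-sql-proxy v2 uses slightly different flags):
# on the proxy VM: listen on all interfaces so other VMs in the subnet can reach it
./cloud_sql_proxy -instances=my-project:us-central1:my-sql-instance=tcp:0.0.0.0:3306
# on any other VM in the subnet: connect through the proxy VM's internal IP
mysql -h 10.128.0.5 -P 3306 -u dbuser -p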

AWS: can't connect to RDS database from my machine

The EC2 instance/live web can connect just fine to the RDS database. But when I want to debug the code in my local machine, I can't connect to the database and got this error:
OperationalError: (2003, "Can't connect to MySQL server on 'aa9jliuygesv4w.c03i1ck3o0us.us-east-1.rds.amazonaws.com' (10060)")
I've added the .pem and .ppk keys to .ssh and I have already configured the EB CLI. I don't know what I should do anymore.
FYI: The app is in Django
It turns out it is not that hard. Do these steps:
Go to EC2 Dashboard
Go to Security Groups tab
Select and only select the RDS database security group. You'll see the security group detail at the bottom
Click Inbound tab
Click Edit button
Add Type: MYSQL/Aurora; Protocol: TCP; Port Range: 3306; Source: 0.0.0.0/0
MAKE SURE PUBLIC ACCESSIBILITY IS SET TO YES
This is what I spent the last 3 days trying to solve...
Instructions to change Public Accessibility
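If you prefer the CLI to clicking through the console, something like this should flip the public accessibility flag (the instance identifier is a placeholder):
aws rds modify-db-instance --db-instance-identifier mydb --publicly-accessible --apply-immediately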
Accept traffic from any IP address
After creating an RDS instance my security group inbound rule was set to a specific IP address. I had to edit inbound rules to allow access from any IP address.
"Security group rules"
Select a security group
Click "Inbound Rules"
Click "Edit Inbound Rules"
Under "Source" Select the Dropdown and click "Anywhere"
::/0 or 0.0.0.0/0 should appear.
Click "Save Rules"
Just burned two hours going through the great solutions on this page. Time for the stupid answer!
I redid my security groups, VPCs, route tables, subnets, gateways... NOPE. I had copy-pasted the endpoint URL from the AWS Console, which in some cases results in a hidden trailing space. The endpoint is in a <div> element, to which the browser appends a \n when copying; pasting this into the IntelliJ DB connector coerces it to a space.
I only noticed the problem after pasting the URL into a quote string in my source code.
Make sure that your VPC and subnets are wide enough.
The following CIDR configuration works great for two subnets:
VPC
10.0.0.0/16
10.0.0.0 — 10.0.255.255 (65536 addresses)
Subnet 1
10.0.0.0/17
10.0.0.0 — 10.0.127.255 (32768 addresses, half)
Subnet 2
10.0.128.0/17
10.0.128.0 — 10.0.255.255 (32768 addresses, other half)
Adjust it if you need three subnets.
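For three subnets you can cut the same /16 into /18 blocks instead (16384 addresses each, with one block left spare). A sketch with the AWS CLI, where the VPC ID is a placeholder:
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.0.0/18
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.64.0/18
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.128.0/18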
I wasn't able to connect to my RDS database. I manually reviewed every detail and everything seemed alright; there were no indications of any issues whatsoever and I couldn't find any suitable information in the documentation. My VPC was configured with a narrow CIDR, 10.0.0.0/22, and each subnet had 255 addresses. After I changed the CIDR to 10.0.0.0/16 and split it entirely between two subnets, my RDS connection started working. It was pure luck that I managed to find the source of the problem, because it doesn't make any sense to me.
Well, almost everyone has pointed out the answers; I will put it in a different perspective so that you can understand.
There are two ways to connect to your AWS RDS:
You provision an instance in the same VPC and subnet and install the workbench there; you will be able to connect to the DB without making it publicly accessible. Example: you can provision a Windows instance in the same VPC group, install the workbench, and connect to the DB via the endpoint.
The other way is to make the DB publicly accessible, but to your IP only, to prevent unwanted access. You can change the DB security group to allow DB-port traffic from your IP only. This way your DB will be publicly accessible, but only to you. This is how we do it for various AWS services: we add their security group in the source part of the SG.
If neither option works, then the error is in the VPC route table; check whether it is associated with the subnet and whether the internet gateway is attached.
You can watch this video it will clear your doubts:
https://youtu.be/e18NqiWeCHw
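A quick way to test the first option from a Linux instance in the same VPC, without installing a full workbench (the endpoint and user below are placeholders):
# check that the DB port is reachable from inside the VPC
nc -vz mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com 3306
# then try an actual login with the MySQL client
mysql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p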
In my case, when I upgraded the size, the private address of the RDS instance fell into a private subnet of the VPC. You can use the article "My instance is in a private subnet, and I can't connect to it from my local computer" to find out your DB instance address.
However, changing the route table didn't fix my issue. What finally solved my problem was to downgrade the size and then upgrade it back; once the private address fell back into the public subnet, everything worked like a charm.
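To see which subnet the instance's private address currently falls in, you can resolve the endpoint (for example from an instance inside the VPC) and compare the returned IP against your subnet CIDR ranges; the endpoint below is a placeholder:
nslookup mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com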
I was also not able to connect, even from inside an EC2 instance.
After digging through the AWS RDS options, it turns out that EC2 instances are only able to connect to an RDS instance in the same VPC they are in.
When I created an EC2 instance in the same VPC where the RDS was, I could access it as expected.
Do not forget to check whether your VPN or firewall is blocking the connection.
The ideal debugging checklist is:
Instance's "Publicly Accessible" property should be enabled
The security group attached to the instance should have open inbound rules (as open as you'd want)
The funny part is that if you're still not able to access it, then the problem is almost surely that your instance is sitting in a private subnet of the respective VPC.
However, there are more secure ways to access your RDS instance. The best bet would be to not make it publicly accessible, lock down the security groups, and use a P2P relay endpoint (think Tailscale).
In case you've tried all the answers above, try this...
Recreate the database.
AWS, on database creation, provides an option to allow public/private access.
I'm sure it's not the proper answer, but I added the internet gateway to all my private subnet route tables, even though both the private subnets and the public subnets are in the subnet group.
For me none of the above worked.
What did work was creating a peering connection between my default VPC and the VPC in which the database was created, as it appears that when connecting to resources in AWS, it automatically goes through the default VPC.
Then set up routing between the two VPCs using the peering connection. Also, make sure that your security groups permit the Postgres port from your default VPC CIDR block as well. And finally, make sure all the subnets are associated with the route table that routes through this peering connection.
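Roughly what that looks like with the AWS CLI; all the IDs and the CIDR block below are placeholders:
# request and accept the peering between the default VPC and the database VPC
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaaaaaaaaaaaaaaa --peer-vpc-id vpc-bbbbbbbbbbbbbbbbb
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0123456789abcdef0
# route traffic for the other side's CIDR through the peering connection (repeat in both VPCs' route tables)
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0123456789abcdef0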