Shared VPC Interconnect - google-compute-engine

I have configured an Interconnect connection to my on-premises network and it is working perfectly. Now I want to share this connection with other VPCs, so I enabled the Shared VPC option, with my project as the host project and the second project as a service project.
The problem is that when I tried to configure the shared interconnect in the service project, I got the message:
The resource 'projects/host-project-name/global/interconnects/xxxxxxx' was not found.
Also, in my host project I don't see the interconnect name; I only see the attached VLANs, which are both up.
I also ran the following command in Cloud Shell, but it did not return any results:
ricardo_ramos@cloudshell:~ (hosting-project-name)$ gcloud compute interconnects list
Listed 0 items.
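In case it helps anyone narrow this down, here is a sketch of the extra checks I can run (project IDs are placeholders for my own). Interconnects are global resources, so listing them with an explicit --project for each project involved should show where the interconnect actually lives, and the attachments list shows which interconnect the VLAN attachments belong to.
# List interconnects per project; the interconnect may live in a different
# project than the VLAN attachments.
gcloud compute interconnects list --project=host-project-name
gcloud compute interconnects list --project=service-project-name
# The attachments record which interconnect (and project) they reference.
gcloud compute interconnects attachments list --project=host-project-name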
Any thoughts on this? Has anyone had the same issue before?
Thanks in advance,
Ricardo.

Related

Google Cloud Run: Could not find specified network to attach to app

I have a Cloud Run container that uses a Serverless Connector to connect to a Cloud SQL instance all in the same project. This configuration works just fine.
I have moved the Cloud SQL instance to another project in the same organisation and set up a Serverless Connector there as per the instructions. I have tested this Serverless Connector with a Cloud Function in the same project that accesses the database and reports the number of rows in a table; this works without problems.
I have now updated the Cloud Run instance to point to the new connector reference. I have used the specified format: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME. When I release a new revision of the container, I get the error message: "Could not find specified network to attach to app." I see the message "Ready condition status changed to False for Service {service name} with message: Deploying Revision." in the Cloud Run logs for this service.
Any ideas on how to get this working please?
Documentation:
Configuring Serverless VPC Access
Configure connectors in the Shared VPC host project
Info:
Command gcloud compute networks vpc-access connectors describe --region=europe-west3 projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME gives the output:
connectedProjects:
- company-service-dev
- a-project-name
ipCidrRange: 10.8.0.0/28
machineType: f1-micro
maxInstances: 3
maxThroughput: 300
minInstances: 2
minThroughput: 200
name: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME
network: company-project-servicename
state: READY
The connector MUST be in the same region AND the same project as the Cloud Run service.
The wrong solution is to create a peering between the Cloud Run project VPC and the Cloud SQL project VPC. It won't work because of the peering transitivity issue (Cloud SQL to its project creates one peering and the Cloud Run VPC to its project creates another; two peerings in a row are not transitive).
The correct solution is to create a Shared VPC architecture so that both projects share the same VPC, which removes the need for peering between projects.
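For reference, a rough sketch of that Shared VPC setup with gcloud; HOST_PROJECT_ID and SERVICE_PROJECT_ID are placeholders, and you need a Shared VPC Admin role on the organization or folder:
# Enable the host project for Shared VPC
gcloud compute shared-vpc enable HOST_PROJECT_ID
# Attach the service project (the one running Cloud Run) to the host project
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project=HOST_PROJECT_ID
# Verify the association
gcloud compute shared-vpc list-associated-resources HOST_PROJECT_ID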
Another hack exists: you can create a VPN between the Cloud Run project VPC and the Cloud SQL project VPC. It's ugly, but it works.
Solved!
Problem: configuration. There was a VPC created for the Cloud SQL db to get an IP address assigned in. The Serverless Connector was created and had access to the same network. I mistakenly thought that was all that was needed. As @guillaume-blaquiere points out, this works for a single project only.
To fix: create a Shared VPC configuration in the host project. In the Google Cloud Console it was as easy as turning on Shared VPC (VPC Network > Shared VPC). Set up a configuration with pretty much the default options it gives you, and then you can use the Serverless Connector reference projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME in your Cloud Run or Cloud Functions and it all works just fine!
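For completeness, a minimal sketch of wiring that connector reference into a deploy with gcloud; the service name, image, and project IDs below are placeholders:
gcloud run deploy SERVICE_NAME \
    --image=gcr.io/SERVICE_PROJECT_ID/IMAGE_NAME \
    --region=europe-west3 \
    --vpc-connector=projects/HOST_PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME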

Amazon Web Service RDS Connection Failure

I am trying to locally run a PHP based project, connecting to an Amazon RDS instance. I am receiving the following error in the browser:
SQLSTATE[HY000] [2002] (full error shown in a screenshot)
I have run a series of networking tests, pinging the following with successful results:
iiNet's web address
One of iiNet's DNS servers
The loopback address of my computer
Google
I then tried the mysql utility to connect remotely and received the error:
ERROR 2003 (HY000): Can't connect to MySQL server
Last factor I think you should know regarding my own networking situation, I am connecting to the internet via:
modem->Zyxel VPN->Wireless Router->My laptop
What in the Sam Hill is going on?
Thanks,
CM
For this to work, the following must be true:
the RDS instance must resolve to a public IP address (I'd check this for you, but since you chose to use a screenshot instead of text I can't copy-paste it, so I'll leave it to you)
the Security Group(s) associated with the RDS instance must allow traffic from your public IP (the one you'll get from http://wtfismyip.com/text ). This won't be true by default. I highly recommend you open it to your IP only, not to everyone, as MySQL is trivial to DoS if its port is public.
The network ACL of the VPC hosting the RDS instance must allow the traffic also. This will be allowed by default, so unless you changed the ACLs in your VPC, you can ignore this.
If all those are true, you should be able to connect!
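If it helps, a rough way to check each of those from your laptop; the endpoint, security group ID, and IP below are placeholders for yours:
# Does the RDS endpoint resolve to a public IP?
nslookup YOUR_RDS_ENDPOINT.rds.amazonaws.com
# Is port 3306 reachable from your current public IP?
nc -vz YOUR_RDS_ENDPOINT.rds.amazonaws.com 3306
# If not, allow your own public IP (not 0.0.0.0/0) in the instance's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --cidr YOUR_PUBLIC_IP/32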

Google Compute Instance RDP Fails (after working for years)

Apologies if this is a bit basic:
I have a Google Compute Instance running Windows Server 2012 R2. It has a valid admin account and password (checked via gcloud). The external IP address can be pinged, the system has been stopped and started successfully. The gcloud commands execute successfully etc etc.
If I try to RDP in I get the unsuccessful message. If I use the RDP (Chrome) option in the Google Cloud Platform admin page I get this message:
In order to use the Chrome RDP Extension, you must configure VM
instance so that it has an external IP address, username and password.
Note: You must configure the network firewall to open TCP port 3389 to
enable RDP access.
Note that ALL of the above are correct and confirmed.
I am sort of going round in circles. I've tried to use PowerShell on a Windows system to RDP in, to no avail. Again, using the built-in Bash serial access I can get to the system and, for example, retrieve the admin account and password, BUT RDP FAILS.
I have tried using the PowerShell command Enter-PSSession... and I initially got a WinRM error: apparently the IP address needs to be in TrustedHosts. I fixed that, and now I am getting a message that I need to verify that WinRM is running on the destination computer. Catch-22: that's why I'm using WinRM, to access the destination computer.
Any ideas what I might try next?
Thanks.....
Create an rdp network tag and a firewall rule that allows tcp:3389 ingress, then apply the tag to the instance in question... someone (assuming you're at work) might have removed/edited these rules through the console or a gcloud command.
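A sketch of that with gcloud, if it helps; the instance name, zone, and network are placeholders:
# Create (or re-create) a firewall rule that allows RDP to instances tagged "rdp"
gcloud compute firewall-rules create allow-rdp \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:3389 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=rdp
# Tag the instance so the rule applies to it
gcloud compute instances add-tags INSTANCE_NAME --zone=ZONE --tags=rdp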

My google cloud instance lost network connectivity

My Google Cloud instance (10.128.0.3) lost network connectivity somewhere just after 0400 this AM (I am running CentOS 6.10). The network interfaces are up and have IP addresses. I am unable to ping the default gateway (10.128.0.1). Firewall rules (Google and local) have not been changed/modified. This instance has been online for several years with no recent changes made. Any suggestions would be helpful and appreciated.
This is a known issue when updating to kernel 2.6.32-754 that affects both Red Hat and CentOS images, and it seems related to this DHCP update. The Compute Engine team is already aware of this issue.
Meanwhile, and in addition to the great suggestions above, you may also use a startup script (to add the default gateway IP address) to fix this issue, and then restart your instance. To do so without access to the instance, simply add a metadata entry to the instance with the key startup-script and the content of the script below (make sure to update the gateway to yours; it can be found on the VPC page):
#!/bin/bash
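# Re-add the default route lost after the DHCP/kernel update;
# replace [default_gateway_ip] with your VPC's gateway IP (e.g. 10.128.0.1)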
route add default gw [default_gateway_ip] eth0
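If you prefer the command line over the console, a sketch of attaching that script as instance metadata; the instance name, zone, and file name are placeholders:
# Save the script above as fix-route.sh, then:
gcloud compute instances add-metadata INSTANCE_NAME \
    --zone=ZONE \
    --metadata-from-file startup-script=fix-route.sh
# Restart the instance so the startup script runs
gcloud compute instances reset INSTANCE_NAME --zone=ZONE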
For further information and updates about this issue, you may check this issue tracker link: https://issuetracker.google.com/issues/111154121

Connection to AWS Database fails with Mule app in Runtime Manager

I've recently created a Mule application (3.7.0 CE) on a laptop. It connects to an AWS RDS instance when running locally in Anypoint Studio using Maven. I started with a local MySQL DB and migrated it to AWS because my application "proofofconcept" is just that, a proof of concept, and I would like to show the application online (public URL) instead of on my laptop for a presentation. I added the database.url=... property to the application properties when I deployed to Anypoint Runtime Manager in the cloud. I'm currently getting a:
communications link failure
I've tried several things and nothing has worked. I first tried a basic database connection in the database config, and then I created a JDBC datasource in Spring beans. Both methods worked locally and communicating with AWS (remote). When I deploy to Runtime Manager, the application deploys, and I get the console that is generated at runtime by the RAML. When I call a URL, e.g. api/v1/orders, it runs and runs and, after a timeout, returns the communication error.
Does anyone 1) know if the communication is allowed? 2) know how to fix this? I would like to demo the POC online for my client.
Thanks in advance
My issue was with the Amazon VPC and the default security group assigned to my RDS instance. By default, all outbound activity is allowed on any protocol and any port for any IP (0.0.0.0/0). Inbound routing, however, was specifying only port 3306 and a specific source IP that was my home network's public IP. I changed the IP specification to 0.0.0.0/0. This now means that any IP can send a request through port 3306 to my Amazon MySQL instance.
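For reference, a sketch of the same change with the AWS CLI; the security group ID and home IP are placeholders, and in practice you would probably restrict the CIDR to your demo clients' ranges rather than 0.0.0.0/0:
# Remove the old inbound rule that was restricted to my home IP
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --cidr MY_HOME_IP/32
# Allow inbound MySQL from anywhere (convenient for a demo, risky for anything else)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3306 \
    --cidr 0.0.0.0/0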