Currently I'm working on a GCP project that uses several service projects attached to a single host project, with multiple subnets mapping the different environments (the classic development, staging and production). I'm trying to run Dataflow pipelines and Cloud Functions that need to connect to databases hosted on VMs in a different service project. So far I have granted the service account running Dataflow and the Cloud Functions the Network User role on the subnet that belongs to the specific environment, and in the case of Dataflow I'm specifying the pipeline's subnetwork on the host project. Still, the Dataflow pipelines and Cloud Functions are not even able to resolve the database VMs' hostnames or to connect directly using the internal IP address. Does anybody know how to set up a similar environment?
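For reference, this is roughly how I'm passing the host project's subnetwork to the Dataflow pipeline (project, region, subnet and account names below are placeholders):

# Sketch only; temp_location and the other required pipeline options omitted
python my_pipeline.py \
  --runner=DataflowRunner \
  --project=service-project-id \
  --region=europe-west3 \
  --service_account_email=dataflow-sa@service-project-id.iam.gserviceaccount.com \
  --subnetwork=https://www.googleapis.com/compute/v1/projects/host-project-id/regions/europe-west3/subnetworks/dev-subnet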
You can use a Shared VPC, which will let your service projects connect to the VPC of the host project.
https://cloud.google.com/vpc/docs/shared-vpc
From there you can use Serverless VPC Access connectors to allow your Cloud Functions to access internal resources. You can see this option when configuring the Cloud Function and hitting "more".
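As a rough sketch, assuming a host project host-project-id sharing dev-subnet with a service project service-project-id (all names are placeholders), the pieces look like this:

# Let the runtime service account use the shared subnet
gcloud compute networks subnets add-iam-policy-binding dev-subnet \
  --project=host-project-id \
  --region=europe-west3 \
  --member=serviceAccount:runtime-sa@service-project-id.iam.gserviceaccount.com \
  --role=roles/compute.networkUser

# Create a Serverless VPC Access connector on the shared subnet
gcloud compute networks vpc-access connectors create dev-connector \
  --project=service-project-id \
  --region=europe-west3 \
  --subnet=dev-subnet \
  --subnet-project=host-project-id

# Deploy the function through the connector (runtime/trigger flags omitted)
gcloud functions deploy my-function \
  --project=service-project-id \
  --region=europe-west3 \
  --vpc-connector=dev-connector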
Related
I have a Cloud Run container that uses a Serverless Connector to connect to a Cloud SQL instance all in the same project. This configuration works just fine.
I have moved the Cloud SQL instance to another project in the same organisation and set up a Serverless Connector there as per the instructions. I have tested this Serverless Connector with a Cloud Function in the same project that accesses the database and reports the number of rows in a table; this works without problems.
I have now updated the Cloud Run instance to point to the new connector reference. I have used the specified format: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME. When I release a new revision of the container, I get the error message: "Could not find specified network to attach to app." I see the message "Ready condition status changed to False for Service {service name} with message: Deploying Revision." in the Cloud Run logs for this service.
Any ideas on how to get this working please?
Documentation:
Configuring Serverless VPC Access
Configure connectors in the Shared VPC host project
Info:
The command gcloud compute networks vpc-access connectors describe --region=europe-west3 projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME gives the output:
connectedProjects:
- company-service-dev
- a-project-name
ipCidrRange: 10.8.0.0/28
machineType: f1-micro
maxInstances: 3
maxThroughput: 300
minInstances: 2
minThroughput: 200
name: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME
network: company-project-servicename
state: READY
The connector MUST be in the same region AND the same project as the Cloud Run service.
The wrong solution is to create a peering between the Cloud Run project VPC and the Cloud SQL project VPC. It won't work because of a network transitivity issue: Cloud SQL to your project creates one peering, and the Cloud Run VPC to your project creates another; two peerings in a row aren't transitive.
The correct solution is a Shared VPC architecture, so both projects share the same VPC and no peering between projects is required.
Another hack exists: you can create a VPN between the Cloud Run project VPC and the Cloud SQL project VPC. It's ugly, but it works.
Solved!
Problem: configuration. There was a VPC created for the Cloud SQL db to get an IP address assigned in. The Serverless Connector was created and had access to the same network. I mistakenly thought that was all that was needed. As @guillaume-blaquiere points out, this only works within a single project.
To fix: create a Shared VPC configuration in the host project. In the Google Cloud Console it was as easy as turning on Shared VPC (VPC Network > Shared VPC). Set up a configuration with pretty much the default options it gives you, and then you can use the Serverless Connector reference projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME in your Cloud Run or Cloud Functions and everything works just fine!
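For anyone who prefers the CLI to the console, the equivalent setup is roughly this (project IDs are placeholders; I did it in the console myself):

# In the host project: turn on Shared VPC
gcloud compute shared-vpc enable host-project-id

# Attach the service project that runs Cloud Run / Cloud Functions
gcloud compute shared-vpc associated-projects add service-project-id \
  --host-project=host-project-id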
I would like a Google Cloud Function in projectA to be able to connect to a Google Compute Engine instance in projectB. I'm aware that I need a Serverless VPC Access connector to accomplish this and have followed the advice at Cloud Functions > Guides > Connecting to a VPC network; however, it doesn't work for me. When I try to deploy my Cloud Function, the deployment hangs and eventually fails after many minutes of attempting to create the function.
I am wondering if perhaps I should follow the advice at Cloud Functions > Guides > Connecting to a Shared VPC network instead. As I said above, my Cloud Function and GCE instance are in different projects; does this mean I must create a Shared VPC? I am not at all familiar with Shared VPCs, so I would appreciate some guidance here.
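For what it's worth, this is roughly how I'm deploying the function (all names are placeholders):

gcloud functions deploy my-function \
  --project=project-a \
  --region=europe-west3 \
  --runtime=python39 \
  --trigger-http \
  --source=. \
  --vpc-connector=projects/project-a/locations/europe-west3/connectors/my-connector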
Shared VPC is a solution, but @FerreginaPelona's comment is also true: two projects means two strictly isolated VPCs.
You need to create a bridge (a peering) between them. Be careful not to overlap the subnet ranges (something that happens when you use the default VPC created automatically in each project). You need to create custom VPCs with only the required subnets and then peer them, as in the sketch below.
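A minimal sketch of that peering, assuming custom VPCs net-a in project-a and net-b in project-b with non-overlapping subnet ranges (all names are placeholders):

# Peering must be created from both sides before it becomes ACTIVE
gcloud compute networks peerings create a-to-b \
  --project=project-a \
  --network=net-a \
  --peer-project=project-b \
  --peer-network=net-b

gcloud compute networks peerings create b-to-a \
  --project=project-b \
  --network=net-b \
  --peer-project=project-a \
  --peer-network=net-a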
I have multiple environments in Google Compute Engine (dev, staging, and production), each with its own Google Cloud SQL instance. The instances connect via Cloud SQL Proxy and authenticate with a credential file that is tied to a service account. I want to have a separate service account for each environment, which would be restricted to accessing the SQL instance specific to that environment. Currently, it appears that any service account with role Cloud SQL Client can access any Cloud SQL instance within the same project.
I cannot find any way to restrict access on a Cloud SQL Instance to a specific service account. Is it possible, and if so, how? If not, is there a different way to achieve the goal of preventing a server in one environment from accessing a Cloud SQL instance in another environment?
NOTE: this configuration is possible with Google Cloud Storage; one can assign a specific service account to have various permissions on each bucket, so that the dev service account cannot accidentally access Production files.
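(With Cloud Storage, for example, something like the following scopes a service account to a single bucket; names are placeholders:)

gcloud storage buckets add-iam-policy-binding gs://dev-bucket \
  --member=serviceAccount:dev-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer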
Unfortunately, Cloud SQL currently does not support instance-level IAM policies.
The only workaround is hosting the instances in different projects.
As of the August 2021 release of Google Cloud SQL:
You can use IAM Conditions to define and enforce conditional, attribute-based access control for Google Cloud resources, including Cloud SQL instances
See the documentation for IAM Conditions for information about how to restrict a user or service account to specific Cloud SQL instances.
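As a sketch, a condition pinning a service account's Cloud SQL Client role to one instance could look like this (project, instance and account names are placeholders; check the IAM Conditions documentation for the exact resource attributes):

gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:dev-sa@my-project.iam.gserviceaccount.com \
  --role=roles/cloudsql.client \
  --condition='expression=resource.type == "sqladmin.googleapis.com/Instance" && resource.name == "projects/my-project/instances/dev-instance",title=dev-instance-only'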
I have a cluster on Google Container Engine. There is an internal service with the domain app.superproject and exposed port 9999.
I also have an instance in Google Compute Engine.
How can I access the service by its domain name from the Google Compute Engine instance?
GKE is built on top of GCE; a GKE instance is also a GCE instance. You can view all your instances either in the web console or with the gcloud compute instances list command.
Note that they may not be in the same GCE virtual network, but in your use case it's better to put them in the same one, e.g. the default network (I guess they already are, but check their network properties if you are not sure). Then they're accessible to each other through their internal IPs (if not, check the firewall settings).
You can also use instance names, which resolve to internal IPs, e.g., ping instance1.
If they're not in the same GCE virtual network, you have to treat the service as an external service by exposing an external IP, which is not recommended in your use case.
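To check which network each instance is in, something like this works (instance names and zone are placeholders):

# If both commands print the same network URL, internal IPs should be reachable
gcloud compute instances describe my-gce-vm --zone=us-central1-a \
  --format='value(networkInterfaces[0].network)'
gcloud compute instances describe gke-mycluster-node-1 --zone=us-central1-a \
  --format='value(networkInterfaces[0].network)'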
I need to know if the following scenario is possible using Google Cloud:
I need an IPsec VPN with a partner. The catch is that on their side they will allow only one of my hosts to access their network; they configure an ACL as follows: network-object host X.X.X.4.
So it is a must that in the phase 2 negotiation Google Cloud sends as its local address the single IP they allow (X.X.X.4), and not the network X.X.X.0/something; otherwise phase 2 will fail.
Is it possible to configure the VPN with this requirement?
Regards,
Will.
You could try creating a /30 subnet in your project, hosting the VM that will interact with the partner in it, and setting up the VPN tunnel against that subnet, as sketched below.
If you have another network where other VMs/apps exist, set up a cross-VPN between the VPN tunnels in your project; they are just in different networks within the same project.
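With Classic (policy-based) VPN you can pin the phase 2 local traffic selector to that single host. A minimal sketch (gateway name, peer address, secret and the partner's range are placeholders; X.X.X.4 is the allowed host from the question):

gcloud compute vpn-tunnels create partner-tunnel \
  --region=us-central1 \
  --target-vpn-gateway=partner-gateway \
  --peer-address=203.0.113.10 \
  --shared-secret=REPLACE_WITH_SECRET \
  --local-traffic-selector=X.X.X.4/32 \
  --remote-traffic-selector=PARTNER_RANGE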