I am stuck on one thing regarding Cloud SQL.
I have my WordPress app running on GCE, and I created an instance group so I can use the autoscaler.
For the DB, I am using Cloud SQL.
The point where I'm stuck is the "Authorized networks" setting in Cloud SQL, as it accepts only public IPv4 addresses.
When autoscaling happens, how do I know which IP will be attached to the new instance so that it can reach the DB?
I can hard-code the Cloud SQL IP as a CNAME on the app side, but from the Cloud SQL side I am not able to figure out how to grant access, short of making my DB open to everyone.
Please let me know what I am missing.
I also tried the Cloud SQL Proxy, but it doesn't come with a Linux service... I hope you can understand my situation. Let me know if you have any ideas to share.
Thank you
The recommended way is to use a Second Generation instance and the Cloud SQL Proxy; you'll need to configure the Proxy on Linux and start it using service account credentials, as outlined at the provided link.
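A minimal sketch of that on a Linux VM, assuming a downloaded service-account key file; the instance connection name and key path are placeholders, and the download URL (for the classic v1 proxy) should be verified against the current docs:

# Download the Cloud SQL Proxy binary and make it executable.
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
# Start the proxy with service-account credentials (placeholders below),
# making MySQL available on 127.0.0.1:3306 for the local application.
./cloud_sql_proxy \
  -instances=my-project:us-central1:my-sql-instance=tcp:3306 \
  -credential_file=/path/to/service-account-key.json &

WordPress then points at 127.0.0.1:3306 in wp-config.php, so the same image works on every autoscaled instance.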
Another way is to use a startup script in your GCE instance template, so you can get your new instance's external IP address and add it to the Cloud SQL instance's authorized networks with the gcloud sql instances patch command (a sketch follows below). The IP can be removed from the authorized networks in the same way by using a shutdown script. The external IP address of a GCE VM instance can be retrieved from metadata by running:
$ curl "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" -H "Metadata-Flavor: Google"
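For illustration, a hedged sketch of such a startup script; my-sql-instance is a placeholder, the VM needs a service account with Cloud SQL admin access, and note that --authorized-networks replaces the existing list rather than appending to it:

#!/bin/bash
# Look up this VM's external IP from the metadata server.
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")
# Add it to the Cloud SQL instance's authorized networks.
# Caveat: this overwrites the whole list; a production script would first read
# the current networks and append this IP to them.
gcloud sql instances patch my-sql-instance \
  --authorized-networks "${EXTERNAL_IP}/32" --quiet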
I have a Spring Boot jar file that I'm running on a Compute Engine VM.
I can also connect with a SQL client, but what MySQL address should I give Spring Boot?
I assume you are using GCP's hosted MySQL (Cloud SQL)?
If so, and you are connecting to it via the Cloud SQL Proxy running on the same machine, then you just use localhost. The proxy should know the way to the server from there, assuming that you've configured the instance name, project, etc. correctly.
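As a rough sketch, assuming the proxy binary is already installed and with the instance connection name, database, and credentials below as placeholders:

# Start the proxy so the Cloud SQL database appears on localhost:3306.
./cloud_sql_proxy -instances=my-project:us-central1:my-sql-instance=tcp:3306 &
# Point the Spring Boot jar at the proxy via standard Spring datasource properties.
java -jar app.jar \
  --spring.datasource.url="jdbc:mysql://127.0.0.1:3306/mydb" \
  --spring.datasource.username=appuser \
  --spring.datasource.password=secret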
Otherwise, without the proxy, you can use your SQL instance's public IP address, which you can see on the list of running instances when you select the SQL page.
In the second case (using the actual IP address), keep in mind that GCP probably won't let the VM running your application through the firewall to the SQL instance directly. To work around this, you'd have to list your VM's IP address in the Authorized Networks section of the SQL entry (click on your SQL instance in the list and select the Authorization tab). Also keep in mind that your VM's IP address is ephemeral by default (unless you made an effort to make it permanent), so if you restart your VM, the authorization above will no longer make sense. So make sure you make your VM's IP address permanent.
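One hedged way to do that is to promote the VM's current ephemeral external IP to a reserved static address; the instance name, zone, and region below are placeholders:

# Read the VM's current external (ephemeral) IP.
VM_IP=$(gcloud compute instances describe my-vm --zone us-central1-a \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)')
# Reserve that same address as a static IP so it survives VM restarts.
gcloud compute addresses create my-vm-static-ip \
  --region us-central1 --addresses "$VM_IP"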
I'm trying to move my dynamic website and its database from my own VMs to Google Cloud. For the DB, I'm using Google Cloud SQL, and for the website I created a host under Compute Engine.
The problem is, I can't seem to connect to the DB from the VM using an internal IP address. Somehow my Cloud SQL DB only has an external IP address.
I also have phpMyAdmin running on a Compute Engine VM; this machine can also only connect via the external IP address (this works, but I'm guessing it is not very secure).
What am I doing wrong? Must I use App Engine instead for my website? I've done the training exercise but, to be honest, I have no clue what I was doing.
CloudSQL does not currently support private networks. You either need to connect via external IP or use CloudSQL proxy.
In order to increase security make sure to connect via SSL when using external IP.
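A sketch of such an SSL connection with the mysql client, assuming you have downloaded the instance's server CA and created a client certificate/key from the Cloud SQL SSL settings (the file names below are the defaults the console offers; the IP is a placeholder):

mysql -h 203.0.113.10 -u root -p \
  --ssl-ca=server-ca.pem \
  --ssl-cert=client-cert.pem \
  --ssl-key=client-key.pem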
I can't connect to Google Cloud SQL from GCE, even though I added the public IP (external IP) of my GCE instance as an authorized network. It works when I add "0.0.0.0" to the authorized networks. Obviously I don't want to do that. The authorized network setting may be the cause, but I can't figure out what's wrong. Does anyone know about this?
I'm using Google Cloud SQL version 2 (beta). I am trying to connect from the GCP cloud console. Although it may not be necessary, I changed the external IP setting from ephemeral to static, but it didn't work.
mysql -u root -p -h xxxx <--- I can log in normally if I add "0.0.0.0" to the authorized networks.
I've double-checked this same question:
Linking Google Compute Engine and Google Cloud SQL
1. Ensure your Cloud SQL instance has an IPv4 address.
2. Find out the public IP address of your GCE instance and add it as an authorized network on your Cloud SQL instance.
3. Add a MySQL username and password for your instance with remote access.
4. When connecting from GCE, use your standard MySQL connection method (e.g. mysqli_connect) with the username and password you just set up, connecting to the IPv4 address of your Cloud SQL instance.
Edit 1
I noticed this note in the documentation:
Note: Connecting to Cloud SQL from Compute Engine using the Cloud SQL Proxy is currently available only for Cloud SQL Second Generation instances.
https://cloud.google.com/sql/docs/compute-engine-access
Does that mean I have to use the Proxy?
Edit 2
$ mysql -u root -p -h (Cloud SQL Instance's IP)
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on '(Cloud SQL Instance's IP)' (110)
Edit 3
Does that mean I have to use the Proxy?
According to the official documentation (as Vadim said), the Cloud SQL Proxy seems to be optional, but it sounds better for security, flexibility, and also price (a static IP is charged for; the proxy setup, however, may be complicated for me).
https://cloud.google.com/sql/docs/compute-engine-access
If you are connecting to a Cloud SQL First Generation instance, then you must use its IP address to connect. However, if you are using a Cloud SQL Second Generation instance, you can also use the Cloud SQL Proxy or the Cloud SQL Proxy Docker image.
Edit 4
I found the reason... I was stupid... I tried to connect from Google Cloud Shell, but that is not my GCE instance. It works when I connect from my GCE instance.
Did you add the public IP of the GCE VM under authorized networks?
From your post:
2. Find out the public IP address of your GCE instance and add it as an authorized network on your Cloud SQL instance.
The official documentation is here:
https://cloud.google.com/sql/docs/external#appaccessIP
I want to be able to query the external IP address of a GCE instance when the instance starts up. I'm planning to use that to fix up some configs which are copied to multiple similar instances. Is there a way to automatically discover an instance's external IP(s) or other properties from the instance itself? I see there are some things you can query with the gcloud tool, but for that you have to know the instance name, and it's not clear where to get that from.
See Querying metadata in the GCE public documentation. For example, for the instance's external IP:
curl http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip/ -H "Metadata-Flavor: Google"
This command queries the instance's private metadata server. Another option is configuring the instance's service account with the right scopes, as described at Preparing an instance to use service accounts in the public documentation. That way, the gcloud command can be used directly on the instance to get information about the project without additional authentication.
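For example, a sketch that reads the instance's own name and zone from the metadata server and then asks gcloud for its external IP (this assumes the service account has a compute read scope; the format expression may need adjusting for your SDK version):

# Instance name and zone are available from the metadata server.
NAME=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/name)
ZONE=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/zone | awk -F/ '{print $NF}')
# With the right scopes, gcloud runs on the instance without extra credentials.
gcloud compute instances describe "$NAME" --zone "$ZONE" \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'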
I have deployed a hadoop cluster on google compute engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the hadoop cluster. The output of this algorithm is accessed via an HTTP REST API. Thus I need to access the output either by a web browser, or via REST commands. However, I cannot resolve the address for the output of the master node which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed HTTP traffic and allowed access to ports 80 and 8091 on the network, but I cannot resolve the given address. Note that this HTTP address is NOT the IP address of the master node instance.
I have followed along with examples for accessing IP addresses of compute instances. However, I cannot find examples of accessing a single node of a hadoop cluster on GCE, that follows this form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network of that same instance itself; these domain names are not globally visible. So, if you were to SSH into your master node first, and then try to curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 then you should successfully retrieve the contents, whereas trying to access from your personal browser will just fail to resolve that hostname into any IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying external IP address; then plug that in as your URL: http://[external ip address]:8091.
If you turned up the cluster using bdutil, a more involved but nicer way to access your cluster is by running the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node as a SOCKS5 proxy; you then configure your browser to use localhost:1080 as its proxy server, make sure remote DNS resolution is enabled, and visit the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL in your browser.
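If you prefer to open the same kind of tunnel by hand instead of through bdutil, a hedged sketch with gcloud (cluster name and zone are placeholders):

# Open a SOCKS5 proxy on localhost:1080 through an SSH tunnel to the master node.
gcloud compute ssh CLUSTER_NAME-m --zone us-central1-a \
  --ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n"

Then point your browser at the localhost:1080 SOCKS proxy with remote DNS resolution enabled, exactly as described above.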