I'm trying to install Cloudera Manager on a Google Compute Engine Ubuntu 12.04 instance. Everything works fine until the installation step.
The error occurs when it tries to detect the Cloudera Manager Server. It seems to be an error with the hostname... the reported error is the following:
could not contact scm server at 236.193.155.104.bc.googleusercontent.com:7182, giving up
waiting for rollback request
Please, someone help me with this! I've been working on it for too long and I feel that it shouldn't be complicated to resolve.
Many thanks in advance!
Add a firewall exception for your instances to allow TCP connections on port 7182. The following command allows all TCP connections:
gcloud compute firewall-rules create cloudera-manager --allow tcp
Reference: Hadoop on GCE http://github.com/scalding-io/hadoop-on-gce/blob/master/build-cluster
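If you'd rather not open every TCP port, a narrower rule scoped to just the agent-to-server port should also do; this is a sketch, assuming your instances sit on the default network (the rule name cloudera-manager-7182 is just a placeholder):
gcloud compute firewall-rules create cloudera-manager-7182 --allow tcp:7182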
To help with this, I wrote these scripts, which may be of use to you: they simplify the installation and deployment of Cloudera Manager on Google Compute Engine.
Also, you can now use Cloudera Director to deploy and provision Hadoop clusters on Google Cloud Platform. Cloudera Director is a single server binary which can deploy and manage other VMs to install Cloudera Manager as well as the rest of the Cloudera Hadoop distribution automatically with an easy-to-use GUI.
I have a Cloud Run container that uses a Serverless Connector to connect to a Cloud SQL instance all in the same project. This configuration works just fine.
I have moved the Cloud SQL instance to another project in the same organisation and set up a Serverless Connector there as per the instructions. I have tested this Serverless Connector with a Cloud Function in the same project that accesses the database and reports the number of rows in a table; this works without problems.
I have now updated the Cloud Run instance to point to the new connector reference. I have used the specified format: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME. When I release a new revision of the container, I get the error message: "Could not find specified network to attach to app." I see the message "Ready condition status changed to False for Service {service name} with message: Deploying Revision." in the Cloud Run logs for this service.
Any ideas on how to get this working please?
Documentation:
Configuring Serverless VPC Access
Configure connectors in the Shared VPC host project
Info:
The following command:
gcloud compute networks vpc-access connectors describe --region=europe-west3 projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME
gives the output:
connectedProjects:
- company-service-dev
- a-project-name
ipCidrRange: 10.8.0.0/28
machineType: f1-micro
maxInstances: 3
maxThroughput: 300
minInstances: 2
minThroughput: 200
name: projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME
network: company-project-servicename
state: READY
The connector MUST be in the same region AND the same project as the Cloud Run service.
The wrong solution is to create a peering between the Cloud Run project's VPC and the Cloud SQL project's VPC. It won't work because of a network transitivity issue (Cloud SQL to the project creates one peering and the Cloud Run VPC to the project creates another; two peerings in a row aren't transitive).
The correct solution is to create a Shared VPC architecture so that both projects share the same VPC, which removes the need for peering between projects.
Another hack exists: you can create a VPN between the Cloud Run project's VPC and the Cloud SQL project's VPC. It's ugly, but it works.
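As a sketch of the Shared VPC route, the connector can be created directly in the host project against the shared network; the values below reuse the network name and CIDR range from the describe output above, and HOST_PROJECT_ID is a placeholder:
gcloud compute networks vpc-access connectors create CONNECTOR_NAME \
    --project=HOST_PROJECT_ID \
    --region=europe-west3 \
    --network=company-project-servicename \
    --range=10.8.0.0/28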
Solved!
Problem: configuration. There was a VPC created for the Cloud SQL DB to get an IP address assigned in. The Serverless Connector was created and had access to the same network. I mistakenly thought that was all that was needed. As @guillaume-blaquiere points out, this works for a single project only.
To fix: create a Shared VPC configuration in the host project. In the Google Cloud Console it was as easy as turning on Shared VPC (VPC Network > Shared VPC). Set up a configuration with pretty much the default options it gives you, and then you can use the Serverless Connector reference projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME in your Cloud Run or Cloud Functions services and all works just fine!
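For reference, pointing an existing Cloud Run service at that connector can also be done from the CLI; a sketch, where SERVICE_NAME is a placeholder for your service:
gcloud run services update SERVICE_NAME \
    --region=europe-west3 \
    --vpc-connector=projects/PROJECT_ID/locations/europe-west3/connectors/CONNECTOR_NAME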
Hi, when I try to SSH into a Google Cloud VM instance it doesn't connect, and when I check the logs it says there is no storage available.
But when I connect using the Google Cloud console it connects, and when I check the storage there is enough storage.
Also, one thing: my current persistent disk is 20 GB, but here it shows twice that amount. If anyone can explain what's going on, it would help me out a lot.
The output that you are posting is from Cloud Shell.
When you start Cloud Shell, it provisions a g1-small Google Compute Engine virtual machine running a Debian-based Linux operating system. Cloud Shell instances are provisioned on a per-user, per-session basis. The instance persists while your Cloud Shell session is active; after an hour of inactivity, your session terminates and its VM is discarded. For more on usage quotas, refer to the limitations guide.
With the default Cloud Shell experience, you are allocated an ephemeral, pre-configured VM and the environment you work with is a Docker container running on that VM. You can also choose to use a custom environment to save your configurations, in which case your environment will be your very own custom Docker image.
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance.
As Travis mentioned, you ran df -h --total in Cloud Shell, so it reports Cloud Shell's storage, not the VM's.
Here you can find a related SO question with possible solutions to fix your issue:
Disk is full, and I can't SSH to instance.
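If the VM's root disk really is full, the usual fix from that thread is to grow the persistent disk; a sketch, where DISK_NAME, ZONE and the target size are placeholders:
gcloud compute disks resize DISK_NAME --size=30GB --zone=ZONE
On most recent public images the root filesystem is expanded automatically on the next boot; otherwise the partition and filesystem have to be grown manually.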
My GCP server is down. It was working yesterday. I can see the server in VM Instances but cannot connect using SSH. All the client websites are down.
Can anyone help?
There are several reasons this could happen:
Your disk is full
The sshd daemon isn't configured properly
OS Login is enabled on your instance
A firewall rule blocks port 22 (SSH)
Sometimes you also see connection errors in the console; those are worth a look.
EDIT:
I will need additional information if it's still not working:
Take a look at your serial console logs and tell me if you see anything relevant that can help, like a kernel panic, a networking issue, permission denied, etc.
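The serial console output can be pulled from Cloud Shell as well:
gcloud compute instances get-serial-port-output YOUR_INSTANCE_NAME --zone YOUR_ZONE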
Use Cloud Shell to add an SSH firewall rule and try to connect to your VM instance with these commands:
gcloud compute firewall-rules create --network=default default-allow-ssh --allow tcp:22
gcloud compute ssh YOUR_INSTANCE_NAME --zone YOUR_ZONE -- -vvv
If you can't connect from Cloud Shell, try to ping your VM instance (internal IP & external IP)
I highly recommend deleting your screenshots showing information about your VM instance (firewall rules, project name, nmap scans, etc.).
I launched a WordPress website on a GCE VM and connected it to a Cloud SQL database, and the website ran perfectly. However, when I did the same on a GKE cluster, the WordPress installation got stuck here: wp-admin/setup-config.php?step=2.
It just wouldn't load. I'm not sure whether this is a problem with scopes/permissions or something else.
If you followed the same steps to install WordPress on GKE as on GCE, it will not work because GKE is a different service architecture; for GKE you should follow the Kubernetes steps (WordPress from the GCP Marketplace). If you already used those steps to deploy your WordPress containers, I would suggest checking Stackdriver Logging for more information about this issue, and also retrieving logs from the pods:
kubectl logs [POD NAME]
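If you don't know the pod name yet, list the pods first and pick the WordPress one from the output (assuming it runs in the default namespace):
kubectl get pods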
I'm using Google Compute Engine and have an auto-scaling instance group that spins up new VMs as needed, all sitting behind a load balancer. I'm also using Google's Cloud SQL in the same project. The VMs need to connect to the Cloud SQL instance.
Since the IPs of the VMs are dynamic I can't just plug the IPs into the SQL access config, so I followed the Cloud SQL Proxy setup along with the notes from this very similar question:
How to connect from a pool of Google Compute Engine instances to Cloud SQL DB in the same project?
I can now log into a single test VM and run:
./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306
and everything works great and that VM connects to the Cloud SQL instance.
The next step is where I'm having issues. How can I set up the VM so it automatically starts the proxy when it's either built from an instance template or just restarted? The obvious answer seems to be to put the above in the VM's startup script, but that doesn't seem to be working. With my single test VM I can SSH in, manually run the cloud_sql_proxy command, and all works. If I then include the below in my startup script and restart the VM, it doesn't connect:
#! /bin/bash
./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306
Any suggestions? I seriously can't believe it's this hard to connect to Cloud SQL from a VM in the same project...
The startup script you have shown doesn't include the download step for cloud_sql_proxy, and a startup script runs in a fresh environment where the relative path ./cloud_sql_proxy doesn't exist yet. You need to download the proxy first and then launch it, so your startup script should look like:
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306 &
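To have this run on every boot of an existing instance, the script can be attached as startup-script metadata; a sketch, with the lines above saved to a local file startup.sh and placeholder instance name/zone:
gcloud compute instances add-metadata YOUR_INSTANCE_NAME \
    --zone YOUR_ZONE \
    --metadata-from-file startup-script=startup.sh
For instances built from an instance template, set the same metadata key on the template instead.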
I chose crontab to run cloud_sql_proxy automatically when the VM starts up.
$ crontab -e
and add
@reboot cloud_sql_proxy blah blah.
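For completeness, a filled-in example of that crontab line; the binary path /usr/local/bin/cloud_sql_proxy and the connection name are assumptions, so substitute your own:
@reboot /usr/local/bin/cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306 > /tmp/cloud_sql_proxy.log 2>&1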