Can we mount a Google Compute Engine directory on a local system using sshfs or any other alternative?
Yes, you can mount a Google Compute Engine directory on your local system using sshfs. You'll need to first install FUSE and configure autofs or sshfs. Here is a guide you can look at to get started.
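As a minimal sketch (assuming sshfs is already installed locally, and with USER, EXTERNAL_IP, and the paths replaced by your own values):

mkdir -p ~/gce-mount
sshfs USER@EXTERNAL_IP:/home/USER ~/gce-mount

# unmount when finished (Linux)
fusermount -u ~/gce-mount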
I have a Google Firebase app with a Cloud Functions back-end. I'm using the Node.js 10 runtime, which runs on Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, which will not work in the Cloud Functions environment, as its file system is read-only with the exception of the /tmp folder.
You should consider using cloud-native storage systems like GCS, for which the Application Default Credentials are already set up. See the list of supported services here.
According to the official Cloud Filestore documentation:

Use Cloud Filestore to create fully managed NFS file servers on Google Cloud Platform (GCP) for use with applications running on Compute Engine virtual machine (VM) instances or Google Kubernetes Engine clusters.
You cannot mount Filestore on GCF.

Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function. See Writing Cloud Functions:

Google Cloud Functions can be written in Node.js, Python, and Go, and are executed in language-specific runtimes.
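For comparison, on a Compute Engine VM (where Filestore is supported) the share is mounted as an ordinary NFS export. A minimal sketch, with a hypothetical server IP and share name:

# on the Compute Engine VM (Debian/Ubuntu), install the NFS client
sudo apt-get update && sudo apt-get install -y nfs-common

# mount the Filestore share (10.0.0.2 and my_share are placeholders)
sudo mkdir -p /mnt/filestore
sudo mount 10.0.0.2:/my_share /mnt/filestore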
I am new to OpenShift Origin and just downloaded the binary (All-in-One Server) from the URL below and untarred it:
https://github.com/openshift/origin/releases
Now, while googling, I found two options to start working with it, and I want to know what the difference between them is:
oc cluster up
openshift start
Both of them provide a console at:
localhost:8443/console
Where is the data stored that a web service needs in OpenShift 3? Also, how can I browse the file system?
In OpenShift 3 there is no persistent storage provided by default. You will need to claim a persistent volume and then mount it at whatever directory you desire in the container for your application.
To view the contents of the directory, use oc rsh or the terminal window for a pod in the web console to get shell access, then change to the directory you want to inspect.
To transfer files into the persistent volume, you can use the oc rsync command.
You can find a tutorial on transferring files in and out of a container at https://learn.openshift.com
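A rough sketch of the commands involved (the application, claim, and pod names are hypothetical):

# add a persistent volume claim to the deployment config and mount it
oc set volume dc/myapp --add --type=persistentVolumeClaim --claim-name=app-data --claim-size=1Gi --mount-path=/opt/app-root/data

# get shell access to a running pod to browse the directory
oc rsh myapp-1-abcde

# copy a local directory into the persistent volume
oc rsync ./local-dir myapp-1-abcde:/opt/app-root/data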
In Google Cloud, snapshots of a disk (attached to an instance) can be taken through the Python APIs, which is what I'm using. My requirement is moving a snapshot taken on Google Cloud to my local storage.
I think this is a fairly common use case. How can I achieve this?
The best way is to save the snapshot into a bucket through SSH, and from there you can download it or use FUSE or CloudBerry to sync it locally.
Reference: https://cloud.google.com/compute/docs/images/export-image
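Following that reference, a rough sketch of the flow (the image, snapshot, and bucket names are hypothetical):

# create an image from the snapshot
gcloud compute images create my-image --source-snapshot=my-snapshot

# export the image to a Cloud Storage bucket as a tar.gz
gcloud compute images export --image=my-image --destination-uri=gs://my-bucket/my-image.tar.gz

# download it to local storage
gsutil cp gs://my-bucket/my-image.tar.gz .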
In that case, I strongly advise you to do this using scripts: you can run a script in the VM to back it up on a cron schedule. In this script you can take the snapshot and save it in the current project:
gcloud compute disks snapshot [DISK_NAME]
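For example, a hypothetical crontab entry that snapshots the disk every night at 02:00 (the disk name and zone are placeholders; note that % must be escaped in crontab):

0 2 * * * gcloud compute disks snapshot my-data-disk --zone=us-central1-a --snapshot-names=backup-$(date +\%F)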
Then create a VM with a startup script:
gcloud compute instances create [YOUR_INSTANCE] --scopes storage-rw \
--metadata startup-script-url=gs://bucket/startupscript.sh
In this script, copy the disk contents to a bucket:
gsutil cp [disk] gs://bucket/Snapshots
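Since a raw disk cannot be copied with gsutil directly, here is a minimal sketch of what startupscript.sh might contain, assuming the disk restored from the snapshot is attached and mounted at /mnt/data (all paths and the bucket name are hypothetical):

#!/bin/bash
# archive the mounted disk contents and upload the archive to the bucket
tar -czf /tmp/backup.tar.gz -C /mnt/data .
gsutil cp /tmp/backup.tar.gz gs://bucket/Snapshots/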
I have a VM instance on Google Compute Engine which deploys a Python web application. Now I want to have a snapshot of the online VM deployed on my local server. Is it feasible to do that? If yes, how do I proceed?