I have a Google Firebase app with a Cloud Functions back-end. I'm using the Node.js 10 runtime, which runs on Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, which will not work in the Cloud Functions environment: the file system there is read-only, with the exception of the /tmp folder.
You should consider using cloud-native storage systems like GCS, for which the Application Default Credentials are already set up. See the list of supported services here.
According to the official Cloud Filestore documentation:
Use Cloud Filestore to create fully managed NFS file servers on Google Cloud Platform (GCP) for use with applications running on Compute Engine virtual machine (VM) instances or Google Kubernetes Engine clusters.
You cannot mount Filestore on GCF.
Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function; see Writing Cloud Functions:
Google Cloud Functions can be written in Node.js, Python, and Go, and are executed in language-specific runtimes.
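If you do need to land those files on the Filestore share, the documented route is the one your question already mentions: run the copy from a Compute Engine VM that mounts the share over NFS. A minimal sketch (the Filestore IP, share name, bucket and paths below are placeholders, not values from your setup):

# On a Compute Engine VM on the same VPC network as the Filestore instance:
sudo apt-get update && sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/filestore
sudo mount 10.0.0.2:/vol1 /mnt/filestore    # Filestore instance IP and share name
gsutil rsync -r gs://my-upload-bucket /mnt/filestore/uploads    # one-way sync from GCS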
Related
I use the following command on my Compute Engine instance to run a script that's stored in Cloud Storage:
gsutil cat gs://project/folder/script.sh | sh
I want to create a function that runs this command, and eventually schedule that function to run, but I don't know how to do this. Does anyone know how?
Cloud Functions is serverless and you can't manage the runtime environment. You don't know what is installed in the Cloud Functions runtime environment, and you can't assume that gcloud exists there.
The solution is to use Cloud Run. The behavior is very close to Cloud Functions: simply wrap your function in a web server (I wrote my first article on that) and, in your container, install what you want, especially the gcloud SDK (you can also use a base image with the gcloud SDK already installed). This time you will be able to call system binaries, because you know they exist: you installed them!
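For concreteness, here is a minimal deployment sketch under those assumptions; the image name, service name, region, schedule and service account are placeholders, and google/cloud-sdk:slim is a public base image that already ships gcloud and gsutil:

# Dockerfile: FROM google/cloud-sdk:slim, plus your thin web wrapper around the script
gcloud builds submit --tag gcr.io/MY_PROJECT/script-runner
gcloud run deploy script-runner \
    --image gcr.io/MY_PROJECT/script-runner \
    --platform managed --region us-central1 \
    --no-allow-unauthenticated
# Schedule it, since you eventually want this to run on a timer:
gcloud scheduler jobs create http run-script \
    --schedule "0 * * * *" \
    --uri "https://script-runner-xxxxx-uc.a.run.app/" \
    --oidc-service-account-email scheduler@MY_PROJECT.iam.gserviceaccount.com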
Anyway, be careful with your script execution: the container is immutable, so you can't change the files, the binaries, the stored files, and so on. I don't know the content of your script, but you aren't on a VM; you are still in a serverless environment, with an ephemeral runtime.
Hi, when I try to SSH to my Google Cloud VM instance it doesn't connect, and when I check the logs it says there is no storage available.
But when I connect using the Google Cloud console, it connects, and when I check the storage there is enough storage.
Also, one thing: my current persistent disk is 20 GB, but here it shows twice that amount. If anyone can explain what's going on, it would help me out a lot.
The output that you are posting is from Cloud Shell.
When you start Cloud Shell, it provisions a g1-small Google Compute Engine virtual machine running a Debian-based Linux operating system. Cloud Shell instances are provisioned on a per-user, per-session basis. The instance persists while your Cloud Shell session is active; after an hour of inactivity, your session terminates and its VM is discarded. For more on usage quotas, refer to the limitations guide.
With the default Cloud Shell experience, you are allocated an ephemeral, pre-configured VM, and the environment you work with is a Docker container running on that VM. You can also choose to use a custom environment to save your configurations, in which case your environment will be your very own custom Docker image.
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance.
As Travis mentioned, you ran df -h --total against Cloud Shell's storage, not the VM's.
Here you can find a related SO question with possible solutions to your issue:
Disk is full, and I can't SSH to instance.
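For reference, the usual shape of the fix from that question, assuming an ext4 persistent disk (the disk name, zone and device below are placeholders):

gcloud compute disks resize my-disk --size 30GB --zone us-central1-a
# Then, on the VM (via the serial console if SSH is still refused):
df -h --total                # confirm which filesystem is actually full
sudo resize2fs /dev/sda1     # grow the ext4 filesystem into the new space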
Can we mount a Google Compute Engine directory on a local system using sshfs or any other alternative?
Yes, you can mount a Google Compute Engine directory on your local system using sshfs. You'll need to first install FUSE and configure autofs or sshfs. Here is a guide you can look at to get started.
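A minimal sketch, assuming sshfs and FUSE are already installed on your local machine and your SSH key is on the instance (the instance, zone, project and paths are placeholders):

gcloud compute config-ssh                 # writes host aliases into ~/.ssh/config
mkdir -p ~/gce-home
sshfs my-instance.us-central1-a.my-project:/home/me ~/gce-home
ls ~/gce-home                             # the remote directory is now browsable locally
fusermount -u ~/gce-home                  # unmount when you're done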
I'd like to deploy a Google Cloud Function from GAE/Java, but I cannot find any info about deploying a function other than by using the gcloud command-line tool.
Is there a way to deploy a Cloud Function from Google App Engine (standard) / Java, e.g. by using the Cloud Storage API and setting some additional fields on the request (e.g. for the HTTP trigger)?
You can create Cloud Functions using the REST or RPC endpoints.
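For example, a sketch of the v1 REST call; from Java you would issue the same POST with any HTTP client. The project, region, bucket and function names are placeholders, and the function source is assumed to be zipped and already uploaded to GCS:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://cloudfunctions.googleapis.com/v1/projects/MY_PROJECT/locations/us-central1/functions" \
    -d '{
          "name": "projects/MY_PROJECT/locations/us-central1/functions/myFunction",
          "entryPoint": "myFunction",
          "runtime": "nodejs10",
          "httpsTrigger": {},
          "sourceArchiveUrl": "gs://MY_BUCKET/function-source.zip"
        }'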
I have a VM instance on Google Compute Engine which deploys a Python web application. But now I want to have a snapshot of the online VM deployed on my local server. Is it feasible to do that? If yes, how do I proceed?