I use the following command on my Compute Engine instance to run a script that's stored in Cloud Storage:
gsutil cat gs://project/folder/script.sh | sh
I want to create a function that runs this command, and eventually schedule that function to run, but I don't know how to do this. Does anyone know how?
Cloud Functions is serverless and you can't manage the runtime environment. You don't know what is installed in the Cloud Functions runtime, and you can't assume that the gcloud command exists there.
The solution is to use Cloud Run. The behavior is very close to Cloud Functions: simply wrap your function in a web server (I wrote my first article on that) and, in your container, install whatever you want, especially the gcloud SDK (you can also use a base image with the gcloud SDK already installed). And this time you will be able to call system binaries, because you know they exist: you installed them!
Anyway, be careful with your script execution: the container is immutable, so you can't change the files, the binaries, the stored data, and so on. I don't know the content of your script, but you aren't on a VM; you are still in a serverless environment with an ephemeral runtime.
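For illustration, here is a minimal sketch of that wrapping in Python with Flask (Flask, the route, and a container image with the gcloud SDK installed are my assumptions; the bucket path is the one from the question):

import os
import subprocess

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/", methods=["POST"])
def run_script():
    # Stream the script from Cloud Storage and pipe it to sh,
    # exactly like the original one-liner; gsutil must be installed
    # in the container image for this to work.
    result = subprocess.run(
        "gsutil cat gs://project/folder/script.sh | sh",
        shell=True, capture_output=True, text=True,
    )
    return jsonify(returncode=result.returncode,
                   stdout=result.stdout,
                   stderr=result.stderr)

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via $PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))

Once the service is deployed, Cloud Scheduler can call its URL on a cron schedule (with an OIDC token if the service requires authentication), which covers the scheduling part of the question.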
Is there a way to create a problem metric in Dynatrace using a shell script that can be executed from the Linux server?
Here, "problem metric" means the following.
Let's assume we are using a shell script to check the status of deployed services on the Linux server.
Then:
That shell script should be able to be called by Dynatrace,
and, based on the shell script's response, Dynatrace should be able to create a Problem.
What do you mean by 'problem metric'?
You can create metrics via the Metrics API and Problems via the Events API.
You can call either endpoint from a shell script on Linux. If there is a OneAgent on the system, you could also use an extension.
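For example, a rough sketch of the Events API side (written in Python here, but the same POST can be made with curl from your shell script; the environment URL, token, entity selector and the exact payload fields are placeholders to verify against the Events API v2 documentation):

import json
import subprocess
import urllib.request

ENV_URL = "https://<your-environment>.live.dynatrace.com"  # placeholder
API_TOKEN = "<token-with-events-ingest-scope>"              # placeholder

def service_is_up(name):
    # Example check: ask systemd whether the unit is active.
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

def report_problem(name):
    payload = {
        "eventType": "CUSTOM_ALERT",  # assumed to open a problem; check the docs
        "title": "Service {} is down".format(name),
        "entitySelector": 'type(HOST),entityName("my-host")',  # placeholder
    }
    req = urllib.request.Request(
        ENV_URL + "/api/v2/events/ingest",
        data=json.dumps(payload).encode(),
        headers={"Authorization": "Api-Token " + API_TOKEN,
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    if not service_is_up("my-service"):  # placeholder service name
        report_problem("my-service")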
I am looking to run a script once, during VM instantiation. The startup script in the Compute Engine template runs every time the VM is started. Say, for example, I have to install the GNOME desktop on a Linux host; I don't want to include that in the startup script. Rather, I am looking for something that runs once, when the host is created. Of course, I want this automated. Is it possible to do this?
Edit: I am trying to achieve this in Linux OS.
According to the documentation [1], if we create startup scripts on a Compute Engine instance, the instance performs those automated tasks “every time” it boots up.
To run a startup script only once, the most basic way is to use a file on the filesystem as a flag that the script has already run, or you could use the instance metadata to store that state.
For example via:
INSTANCE_STATE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/state -H "Metadata-Flavor: Google")
Then set state = PROVISIONED after running the script etc.
But it is a good idea to have your script check specifically whether the actions it is going to perform have already been done, and handle that accordingly.
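As an illustration, a minimal sketch of the flag-file approach (written in Python and assuming Python 3 is available on the image; the marker path and package name are only examples, and the same guard is a few lines of shell around a marker file):

#!/usr/bin/env python3
# Run-once guard for a startup script: do the expensive provisioning
# only if the marker file does not exist yet, then create the marker.
import pathlib
import subprocess

MARKER = pathlib.Path("/var/lib/startup-done")  # hypothetical flag file

if not MARKER.exists():
    # One-time provisioning, e.g. installing a desktop environment.
    subprocess.run(["apt-get", "update"], check=True)
    subprocess.run(["apt-get", "install", "-y", "ubuntu-desktop"], check=True)
    MARKER.touch()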
Another option: at the end of your startup script, you can have it remove the startup-script metadata from the host instance.
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/storing-retrieving-metadata
I need to execute a bash script/command within a Google Compute Engine instance using Cloud Functions and get a response.
AWS has an agent called SSM that lets me do that with no hassle from Lambda; however, I did not find anything like that on Google Cloud.
On AWS, using a Node.js Lambda, I use the following example:
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// Send a shell command to the instance via SSM Run Command.
ssm.sendCommand({
  DocumentName: documentName,
  InstanceIds: [instanceId],
  TimeoutSeconds: 3600,
  Parameters: {
    commands: commands
  }
}).promise();
How can I achieve what I want on Google Cloud? Thank you.
The nature of Google Cloud Functions is that it is the most abstract of the serverless paradigms. It assumes that all you are providing is stateless logic to be executed in one of the supported languages. You aren't supposed to know or care how that logic is executed. Running bash (for example) assumes that your Cloud Function is running in a Unix-like environment where bash is available. While that is likely true, it isn't part of the "core" contract that you have with the Cloud Functions environment.
An alternative solution I would suggest is to study Google's Cloud Run concept. Just like Cloud Functions, Cloud Run is serverless (you don't provision any servers and it scales to zero), but the distinction is that what is executed is a Docker container. Within that container, your "code" is executed in the container environment when called. Google spins these containers up and down as needed to satisfy your incoming load. Since it is running in a container, you have 100% control over what your logic does ... including running bash commands and providing any scripts or other environment needed to run them.
This is not possible. You can perform some actions from a Cloud Function, such as starting or stopping a VM, but you can't get or list the directories within a VM. In this case the Compute Engine API is being used, and it only reaches the instance resource itself (its lifecycle and metadata), not the guest OS or its filesystem.
The workaround would be to create a request handler in your VM so that it can be reached by the Cloud Function. Proper security should be implemented in order to avoid security issues and requests from anonymous callers. You might use the public IP to reach your VM.
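As a hypothetical sketch of such a handler, using only the Python standard library, with a shared-secret header standing in for whatever real authentication you put in front of it; the Cloud Function would then POST to http://<vm-ip>:8080/run:

import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SHARED_SECRET = "replace-me"  # placeholder; use a real auth mechanism

class RunHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.headers.get("X-Auth-Token") != SHARED_SECRET:
            self.send_response(403)
            self.end_headers()
            return
        # Whatever the VM should do for the caller, e.g. list a directory.
        result = subprocess.run(["ls", "-l", "/var/log"],
                                capture_output=True, text=True)
        body = json.dumps({"returncode": result.returncode,
                           "stdout": result.stdout}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RunHandler).serve_forever()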
You can run a Bash script on Google Cloud Run with a Bash Docker image. As an example - https://github.com/sethvargo/cloud-run-bash-example
Then you can call this Cloud Run service from your other existing Google Cloud Functions, App Engine services, etc.
I have a Google Firebase app with a Cloud Functions back-end. I'm using the node.js 10 runtime which is Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, something that will not work in the Cloud Functions environment, as its file system is read-only with the exception of the /tmp folder.
You should consider using cloud-native storage systems like GCS, for which the Application Default Credentials are already set up. See the list of supported services here.
According to the official Cloud Filestore documentation:
Use Cloud Filestore to create fully managed NFS file servers on Google Cloud Platform (GCP) for use with applications running on Compute Engine virtual machine (VM) instances or Google Kubernetes Engine clusters.
You cannot mount Filestore on GCF.
Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function. From Writing Cloud Functions:
Google Cloud Functions can be written in Node.js, Python, and Go, and are executed in language-specific runtimes.
I want to run the same Python script with different parameters on several instances on Google Compute Engine. Currently I set up all my instances by creating an instance group. Then I ssh into each machine and start the Python script with the correct parameters.
I'm able to automate setup that's common for all the VMs, such as mounting buckets and so on, by using startup scripts. But I still have to ssh into each VM and start the Python script with a different parameter for each VM. Hence, I'm wondering if there's some clever and simple way of running the same Python script with different parameters on each instance.
Any suggestions? Thank you!
One solution is to use metadata: create your instances separately instead of with an instance group. Make them identical (i.e. use the same script) except for their metadata: use the metadata to give each instance its unique parameters. In your script, fetch the metadata to determine how to proceed.
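For example, if each instance is created with a custom metadata attribute (the name param below is only an example, set with something like gcloud compute instances create ... --metadata param=value), the Python script can read it from the metadata server. A minimal sketch:

import urllib.request

# Custom instance attribute; "param" is a hypothetical name.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/attributes/param")

def get_param():
    req = urllib.request.Request(METADATA_URL,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    param = get_param()
    print("Running with parameter:", param)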