One-time script on Compute Engine

I am looking to run a script once, during VM instantiation. The startup script in the Compute Engine template runs every time the VM is started. Say, for example, I have to install the GNOME desktop on a Linux host: I don't want to include that in the startup script. Rather, I am looking for something that runs once, when the host is created. Of course, I want this automated. Is it possible to do this?
Edit: I am trying to achieve this on Linux.

As per the documentation [1], if we create startup scripts on a Compute Engine instance, the instance performs automated tasks "every time" it boots up.
To run a startup script only once, the most basic way is to use a file on the filesystem as a flag that the script has already run; alternatively, you can store that state in the instance metadata [2].
For example, read the state back from the metadata server:
INSTANCE_STATE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/state -H "Metadata-Flavor: Google")
Then set state=PROVISIONED in the metadata once the script has finished, and so on.
Either way, it is a good idea to have your script check specifically whether the actions it is about to perform have already been done, and handle each case accordingly.
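Putting both ideas together, here is a minimal sketch of such a guarded startup script. The attribute name state, the package being installed, and the assumption that the instance's service account has read/write Compute scope (so gcloud can update metadata from inside the VM) are all mine:

#!/bin/bash
# Run-once guard: skip everything if we have already provisioned this VM.
STATE=$(curl -s -f -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/state)
if [ "$STATE" != "PROVISIONED" ]; then
  # One-time work goes here, e.g. installing the desktop environment.
  apt-get update && apt-get install -y gnome-core
  # Record completion in the instance metadata so later boots skip this block.
  NAME=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)
  ZONE=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/zone | awk -F/ '{print $NF}')
  gcloud compute instances add-metadata "$NAME" --zone "$ZONE" \
    --metadata state=PROVISIONED
fi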
Another option: have your startup script remove the startup-script metadata from the host instance as its last step, so there is nothing left to run on later boots.
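For instance, the very last line of the script could be something like the following; this assumes the same service-account write access as above, and the name/zone variables are placeholders:

gcloud compute instances remove-metadata "$NAME" --zone "$ZONE" --keys=startup-script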
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/storing-retrieving-metadata

Related

How to create a function that runs a gcloud command?

I use the following command in my Compute Engine to run a script that's stored in Cloud Storage:
gsutil cat gs://project/folder/script.sh | sh
I want to create a function that runs this command, and eventually schedule that function to run, but I don't know how to do this. Does anyone know how?
Cloud Functions is serverless, and you can't manage the runtime environment. You don't know what is installed in the Cloud Functions runtime environment, and you can't assume that gcloud exists there.
The solution is to use Cloud Run. The behavior is very close to Cloud Functions: simply wrap your function in a web server (I wrote my first article on that) and, in your container, install what you want, especially the gcloud SDK (you can also use a base image with the gcloud SDK already installed). This time you will be able to call system binaries, because you know they exist: you installed them!
Anyway, be careful with your script execution: the container is immutable; you can't change files, binaries, stored files, and so on. I don't know the content of your script, but you aren't on a VM; you are still in a serverless environment with an ephemeral runtime.
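As a rough sketch of the whole flow, assuming a directory with a Dockerfile based on an image that already ships the SDK (for example google/cloud-sdk) plus a small HTTP wrapper that runs gsutil cat gs://project/folder/script.sh | sh on each request; the image name, region, schedule, and service URL below are all placeholders:

gcloud builds submit --tag gcr.io/PROJECT_ID/script-runner
gcloud run deploy script-runner --image gcr.io/PROJECT_ID/script-runner --region us-central1
gcloud scheduler jobs create http run-script --schedule="0 3 * * *" \
  --uri=https://script-runner-xxxxx-uc.a.run.app/ --http-method=GET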

Google Functions + Execute bash script on instance

I need to execute a bash script/command within a Google compute instance using Google Functions and get a response.
AWS has an agent called SSM that lets me do that with no hassle using Lambda; nevertheless, I did not find anything like that on Google Cloud.
On AWS, using a Node.js Lambda, I use the following example:
ssm.sendCommand({
  DocumentName: documentName,
  InstanceIds: [ instanceId ],
  TimeoutSeconds: 3600,
  Parameters: {
    'commands': commands
  }
}, (err, data) => {
  // handle the command response here
});
How can I achieve what I want on Google Cloud? Thank you.
The nature of Google Cloud Functions is that it is the most abstract of the serverless paradigms. It assumes that all you are providing is stateless logic to be executed in one of the supported languages. You aren't supposed to know or care how that logic is executed. Running bash (for example) makes an assumption that your Cloud Function is running in a Unix-like environment where bash is available. While this is likely to be a true statement, it isn't part of the "core" contract that you have with the Cloud Function environment.
An alternative solution I would suggest is to study Google's Cloud Run concept. Just like Cloud Functions, Cloud Run is serverless (you don't provision any servers and it scales to zero), but the distinction is that what is executed is a Docker container. Within that container, your code is executed in the container environment when called. Google spins these containers up and down as needed to satisfy your incoming load. Since it is running in a container, you have 100% control over what your logic does ... including running bash commands and providing any scripts or other environment needed to run them.
This is not possible directly. You can perform some actions from a Cloud Function, such as starting or stopping a VM, but you can't get or list the dirs within a VM. In this case the Compute Engine API is being used, but it only reaches the instance as a resource, not the operating system inside it.
The workaround would be to create a request handler in your VM so that it can be reached by the Cloud Function. Proper security should be implemented in order to avoid requests from anonymous callers. You might use the public IP to reach your VM.
You can run a Bash script on Google Cloud Run with a Bash Docker image. As an example, see https://github.com/sethvargo/cloud-run-bash-example
Then you can call this Cloud Run service from your other existing Google Cloud Functions, AppEngine etc.
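If you deploy that example without allowing unauthenticated access, one way to call it by hand is with an identity token; the service URL is a placeholder:

curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://SERVICE_URL/

From inside a Cloud Function you would fetch the identity token from the metadata server instead, since gcloud is not guaranteed to exist there.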

Distributing tasks between GCE VM instances

I want to run the same Python script with different parameters on several instances on google compute engine. Currently I setup all my instances by creating an instance group. Then I ssh into each machine and start the Python script with the correct parameters.
I'm able to automate setup that's common for all the VMs, such as mounting buckets and so on, by using startup scripts. But I still have to ssh into each VM and start the Python script with a different parameter for each VM. Hence, I'm wondering if there's some clever and simple way of running the same Python script with different parameters on each instance.
Any suggestions? Thank you!
One solution is to use metadata: create your instances separately instead of with an instance group. Make them identical (i.e., use the same startup script) except for their metadata, and use the metadata to give each instance its unique parameters. In your script, fetch the metadata to determine how to proceed.
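A sketch of that pattern, where the attribute name params, the instance names, and the script path are all illustrative:

# At creation time, give each instance the same startup script but
# different parameters:
gcloud compute instances create worker-1 \
  --metadata-from-file startup-script=startup.sh \
  --metadata params="--shard=1"

# Inside startup.sh, read the parameters back and launch the job:
PARAMS=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/params)
python3 /opt/app/task.py $PARAMS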

How to release a global static IP

I am being billed for an unused IP address, and I can't find the item that's charging me.
I've gone through the project using console.cloud.google.com looking in Compute Engine and Networking settings, but I can't find any IP addresses.
I'm only using the project for Cloud Storage of 1 text file, and a git repository. I run these commands on the terminal, and I am getting 0 items.
$ gcloud --project=PROJECTNAME compute addresses list
The above command listed 0 items.
$ gcloud --project=PROJECTNAME compute forwarding-rules list
The above command listed 0 items.
Is there a way of telling where this static IP address is, or how I can disable it? I can't find it anywhere. I'd rather not delete the entire project because some of the services are being used by my production app. I know that it's a global IP address because I can see it listed in my Compute Engine quota. For me to be able to use a command line option to delete the address, I think that I need the name of the address, but I can't find that listed anywhere.
I'm thinking this could be related to me having one of these two things enabled for the project in the past:
I was running an AppEngine project, but have since terminated it.
For the AppEngine project, I registered a custom domain to point to it.
I had used AppEngine Flexible (aef). The unused IP was from my stopped version; a stopped version blocks the release of its static IP, so it was advised to delete this version first before trying to release the IP address again.
You cannot delete your previous version if it's the only one you have, as you need at least one version for the default module.
To fix this, you could deploy a new version, say a Flexible VM (deployed to another region) or a Standard VM. As a workaround, if you do not have any app to replace it right now, you can deploy an empty app instead: create an app.yaml that uses only static files and has no script to execute, so you would not be charged for any instance.
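In gcloud terms the sequence might look like the following; the version IDs are placeholders:

gcloud app versions list                      # find the stopped Flexible version
gcloud app deploy app.yaml --version=empty    # deploy the static-only placeholder
gcloud app versions delete OLD_FLEX_VERSION   # now the old version can go

Once the stopped Flexible version is deleted, you can try releasing the IP address again as described above.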
For a more detailed guide to this workaround, you may check this post [1].
[1] http://stackoverflow.com/questions/37679552/cannot-delete-version

How do I redirect the output from a Google Compute Engine instance startup script?

I've set up startup scripts for all of my instances, so that when I reboot one, it updates itself to the latest version of whatever it's running. Now I want to do several of those via one script, with a single button push. It works by just rebooting all relevant instances, but I want to see the output of the startup scripts.
From here: https://cloud.google.com/compute/docs/startupscript#rerunthescript - I've found out that, on Debian machines, triggering a startup script by itself without rebooting a machine is done via sudo google_metadata_script_runner --script-type startup, and that all output from the startup script goes to /var/log/daemon.log. Is there any way to set the startup scripts to output directly to stdout?
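For example, I can already re-run the script on one machine and follow that log in a second terminal:

sudo google_metadata_script_runner --script-type startup
tail -f /var/log/daemon.log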
As ZachB mentioned, startup scripts on Google Compute Engine will output to the serial port, which you can view in the Cloud Console or on the command line with the gcloud tool. The following docs explain in more detail how to view the serial port output:
Interacting with the Serial Console
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
(Navigate to 'VM Instances' -> instance name -> 'Serial port' -> 'Connect to serial port')
gcloud compute instances get-serial-port-output
https://cloud.google.com/sdk/gcloud/reference/compute/instances/get-serial-port-output
gcloud compute instances get-serial-port-output NAME [--port=PORT] [--zone=ZONE] [GLOBAL-FLAG …]
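For example, to pull just the startup-script lines for a single machine (the instance name and zone are placeholders; on the images I've used, the script runner prefixes its lines with startup-script):

gcloud compute instances get-serial-port-output my-instance --zone=us-central1-a | grep startup-script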