Distributing tasks between GCE VM instances - google-compute-engine

I want to run the same Python script with different parameters on several instances on Google Compute Engine. Currently I set up all my instances by creating an instance group. Then I ssh into each machine and start the Python script with the correct parameters.
I'm able to automate setup that's common to all the VMs, such as mounting buckets, by using startup scripts. But I still have to ssh into each VM and start the Python script with a different parameter for each one. Hence, I'm wondering if there's some clever and simple way of running the same Python script with different parameters on each instance.
Any suggestions? Thank you!

One solution is to use metadata: create your instances separately instead of with an instance group. Make them identical (i.e. use the same script) except for their metadata - use the metadata to give each instance its unique parameters. In your script, fetch the metadata to determine how that instance should proceed.
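A minimal sketch of that approach; the instance names (worker-1, worker-2), the metadata key task-param, and the script path are all hypothetical:

gcloud compute instances create worker-1 \
    --metadata=task-param=alpha,startup-script-url=gs://my-bucket/startup.sh
gcloud compute instances create worker-2 \
    --metadata=task-param=beta,startup-script-url=gs://my-bucket/startup.sh

# Inside the shared startup script, read this instance's own parameter
# from the metadata server and pass it to the Python script:
PARAM=$(curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/task-param")
python /opt/app/script.py --param "$PARAM"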

Related

How to create a function that runs a gcloud command?

I use the following command on my Compute Engine instance to run a script that's stored in Cloud Storage:
gsutil cat gs://project/folder/script.sh | sh
I want to create a function that runs this command, and eventually schedule this function to run, but I don't know how to do this. Does anyone know how?
Cloud Functions is serverless and you can't manage the runtime environment. You don't know what is installed in the Cloud Functions runtime environment, and you can't assume that gcloud exists there.
The solution is to use Cloud Run. The behavior is very close to Cloud Functions: simply wrap your function in a web server (I wrote my first article on that) and, in your container, install what you want, especially the gcloud SDK (you can also use a base image with the gcloud SDK already installed). This time you will be able to call system binaries, because you know they exist: you installed them!
Anyway, be careful in your script execution: the container is immutable, so you can't change files, binaries, or stored data. I don't know the content of your script, but you aren't on a VM; you are still in a serverless environment with an ephemeral runtime.
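A hedged sketch of the deploy-and-schedule flow, assuming a container image whose web server runs the gsutil command from the question; the image name, service name, schedule, service URL, and service account are all hypothetical:

# Build and deploy a container whose web server runs:
#   gsutil cat gs://project/folder/script.sh | sh
gcloud builds submit --tag gcr.io/MY_PROJECT/script-runner
gcloud run deploy script-runner \
    --image gcr.io/MY_PROJECT/script-runner \
    --region us-central1 \
    --no-allow-unauthenticated

# Schedule it with Cloud Scheduler calling the service URL:
gcloud scheduler jobs create http run-script-nightly \
    --schedule "0 2 * * *" \
    --uri "https://script-runner-HASH-uc.a.run.app/" \
    --oidc-service-account-email scheduler@MY_PROJECT.iam.gserviceaccount.com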

One-time script on Compute Engine

I am looking to run a script once during VM instantiation. The startup script in the Compute Engine template runs every time the VM is started. Say, for example, I have to install GNOME desktop on a Linux host; I don't want to include that in the startup script. Rather, I am looking for something that runs once when the host is created. Of course, I want this automated. Is it possible to do this?
Edit: I am trying to achieve this in Linux OS.
As per the documentation [1], if we create startup scripts on a Compute Engine instance, then the instance performs the automated tasks "every time" it boots up.
To run a startup script only once, the most basic way is to use a file on the filesystem to flag that the script has already run, or you could use the instance metadata to store the state.
For example via:
INSTANCE_STATE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/state -H "Metadata-Flavor: Google")
Then set state to PROVISIONED after running the script, etc.
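For the file-flag variant, a minimal sketch; the flag path and the one-time work are placeholders:

#!/bin/bash
# Startup script that performs its one-time provisioning only on first boot.
FLAG=/var/lib/startup-done
if [ ! -f "$FLAG" ]; then
    # One-time work goes here, e.g. installing a desktop environment.
    apt-get update && apt-get install -y some-package
    touch "$FLAG"
fi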
But it is a good idea to have your script check specifically whether the actions it is going to perform have already been done, and handle them accordingly.
Another option: at the end of your startup script, you can have it remove the startup-script metadata from the host instance.
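As a sketch, that removal can be done from inside the instance, assuming the VM's service account has permission to set instance metadata:

# Look up this instance's name and zone from the metadata server,
# then strip the startup-script key so it never runs again.
NAME=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/name)
ZONE=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/zone | awk -F/ '{print $NF}')
gcloud compute instances remove-metadata "$NAME" --zone "$ZONE" --keys startup-script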
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/storing-retrieving-metadata

Google Functions + Execute bash script on instance

I need to execute a bash script/command within a Google compute instance using Google Functions and get a response.
AWS has an agent called SSM that lets me do that with no hassle using Lambda; nevertheless, I did not find anything like that on Google Cloud.
On AWS, using a Node.js Lambda, I use the following example:
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

// Run shell commands on the instance via SSM Run Command.
ssm.sendCommand({
  DocumentName: documentName,
  InstanceIds: [instanceId],
  TimeoutSeconds: 3600,
  Parameters: {
    commands: commands
  }
}, (err, data) => {
  if (err) console.error(err);
});
How can I achieve what I want on Google Cloud? Thank you.
The nature of Google Cloud Functions is that it is the most abstract of the serverless paradigms. It assumes that all you are providing is stateless logic to be executed in one of the supported languages. You aren't supposed to know or care how that logic is executed. Running bash (for example) assumes that your Cloud Function is running in a Unix-like environment where bash is available. While this is likely to be a true statement, it isn't part of the "core" contract that you have with the Cloud Functions environment.
An alternative solution I would suggest is to study Google's Cloud Run concept. Just like Cloud Functions, Cloud Run is serverless (you don't provision any servers and it scales to zero), but the distinction is that what is executed is a Docker container. Your code is executed in the container environment when called. Google spins these containers up and down as needed to satisfy your incoming load. Since it is running in a container, you have 100% control over what your logic does, including running bash commands and providing any scripts or other environment needed for it to run.
This is not possible. You can perform some actions from a Cloud Function, such as starting or stopping a VM, but you can't get or list the directories within a VM. In this case the Compute Engine API is being used, but it only reaches the instance as a resource, not the operating system inside it.
The workaround would be to create a request handler in your VM so that it can be reached by the Cloud Function. Proper security should be implemented to avoid requests from anonymous callers. You might use the public IP to reach your VM.
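As a sketch of the calling side only, assuming a hypothetical handler running on the VM that listens on port 8080 at /run and checks a shared secret:

# From the Cloud Function (or anywhere), invoke the VM's handler:
curl -X POST "http://VM_PUBLIC_IP:8080/run" \
    -H "Authorization: Bearer $SHARED_SECRET" \
    -d '{"command": "uptime"}'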
You can run a Bash script on Google Cloud Run with a Bash Docker image. As an example: https://github.com/sethvargo/cloud-run-bash-example
Then you can call this Cloud Run service from your other existing Google Cloud Functions, App Engine, etc.

How to get the hostname and IP address of instances deployed by Deployment Manager for a particular deployment session?

How do I get the hostname and IP address of the instances deployed via Deployment Manager for a particular deployment session?
I have seen that it can be done via gcloud, but I am looking for an alternative, such as saving them to files through Jinja.
Also, I would like to know if we can save them via Jinja templates.
I also need to know if there are any post-deployment scripts available for gcloud Deployment Manager.
For example, I have deployed 4 CentOS instances, and now I need to create a config file referencing those four instances and then go about starting services on all four.
I doubt it can be done through a startup script.
You can create a VM instance reserving the desired IP and specifying the hostname and the startup script that starts the services on your machines. Then check the equivalent REST request at the bottom of the instance-creation page in the console to see the actual labels used for those fields and where they should go. But remember that for static IP assignment you must reserve one or more addresses first; for internal addresses check this, for external addresses check this.
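Reserving the addresses first might look like this sketch (the names and region are hypothetical):

# External static address:
gcloud compute addresses create my-external-ip --region us-central1
# Internal static address, reserved from a subnet:
gcloud compute addresses create my-internal-ip --region us-central1 --subnet default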
You can create an instance template based on this documentation [1] and deploy your VM(s) with gcloud [2]. The hostname and IP address can be specified in the instance template itself:
gcloud deployment-manager deployments create [DEPLOYMENT_NAME] \
    --config [CONFIG.YAML]
I am not familiar with Jinja but based on Google doc [3], you can use it to create templates used by Deployment Manager.
You can also add a metadata resource in the template to use a startup script [4]. Keep in mind that the startup script can simply download and execute a Python/Bash script if it becomes too complex.
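As for building a config file from the deployed instances, a sketch using gcloud after the deployment completes (the name filter and output file are hypothetical):

# List the deployment's instances with their internal and external IPs
# and save them for a later provisioning step:
gcloud compute instances list \
    --filter="name ~ ^my-deployment" \
    --format="table(name, networkInterfaces[0].networkIP, networkInterfaces[0].accessConfigs[0].natIP)" \
    > cluster-hosts.txt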

How to release a global static IP

I am being billed for an unused IP address. I can't find the item that's charging me.
I've gone through the project using console.cloud.google.com looking in Compute Engine and Networking settings, but I can't find any IP addresses.
I'm only using the project for Cloud Storage of 1 text file, and a git repository. I run these commands in the terminal, and I am getting 0 items.
$ gcloud --project=PROJECTNAME compute addresses list
The above command listed 0 items.
$ gcloud --project=PROJECTNAME compute forwarding-rules list
The above command listed 0 items.
Is there a way of telling where this static IP address is, or how I can disable it? I can't find it anywhere. I'd rather not delete the entire project, because some of the services are being used by my production app.
I know that it's a global IP address because I can see it listed in my Compute Engine quota. To be able to use a command-line option to delete the address, I think I need the name of the address, but I can't find that listed anywhere.
I'm thinking this could be related to me having one of these two things enabled for the project in the past:
I was running an AppEngine project, but have since terminated it.
For the AppEngine project, I registered a custom domain to point to it.
I had used App Engine Flexible (aef). The unused IP was from my stopped version. This blocks the release of the static IP, so it was advised to first delete this version before trying to release the IP address again.
You cannot delete your previous version if it's the only one you have, as you need at least one version for the default module.
To fix this, you could deploy a new version, say a Flexible VM (deployed to another region) or a Standard VM. Then, as a workaround, if you do not have any app to replace it right now, you can deploy an empty app instead. You would need to create an app.yaml that serves only static files and has no script to execute, so you would not be charged for any instance.
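A sketch of that cleanup from the command line; the version ID is a placeholder:

# See which versions exist (the stopped Flexible version should show here):
gcloud app versions list
# After deploying a replacement version, delete the stuck one:
gcloud app versions delete VERSION_ID --service default
# Confirm the global address is gone:
gcloud compute addresses list --global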
For a more detailed guide to this workaround, you may check this documentation [1].
[1] http://stackoverflow.com/questions/37679552/cannot-delete-version