Managed VMs running Perl on Google App Engine

I have a Perl job that runs for 5 minutes at the top of every hour. What is the most cost-effective way of running this job on Google Cloud infrastructure? Running a Compute Engine VM seems too heavyweight for this, since I'd get charged for the other 55 minutes of no use. I don't understand "Managed VMs" well enough, but they seem like they might be an option; I'm just not sure whether pricing is rounded to the hour. Does anyone have ideas for the best option, so that I only get charged for 120 minutes of usage per day (24 runs × 5 minutes)? The script also uses some image-processing binaries, so converting it to Python won't do the trick.

Managed VMs are linked to Google App Engine. If you have an app in GAE, Managed VMs configure the hosting environment for your app using VMs that run on Google Compute Engine, and these applications are limited to the Java and Python runtimes. This link can give you an idea of GAE pricing; however, Perl is not a supported language in GAE.
On GCE, you can start up an instance, do the task, and then delete the instance without deleting the persistent disk. This allows you to recreate the instance from that disk, although you will still be charged for the provisioned disk space, and you will need to write a script that spins the instance up and deletes it. You can also create a snapshot of your disk and recreate your instance from the snapshot, which is a little less expensive than keeping the disk.
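For illustration, a minimal sketch of the snapshot approach with gcloud (the instance, disk, and snapshot names and the zone are all placeholders):

# Snapshot the boot disk, then delete the instance and its disks to stop billing
gcloud compute disks snapshot my-disk --snapshot-names=my-snap --zone=us-central1-a
gcloud compute instances delete my-instance --delete-disks=all --zone=us-central1-a --quiet

# Later, recreate the disk from the snapshot and boot a new instance from it
gcloud compute disks create my-disk --source-snapshot=my-snap --zone=us-central1-a
gcloud compute instances create my-instance --disk=name=my-disk,boot=yes --zone=us-central1-a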
Also, you should look at the types of persistent disks (PD) on GCE at this link, and take a look at the examples provided: depending on your workload, a standard PD or an SSD PD can make a big difference in price.
You can use the pricing calculator to estimate your charges.

When you deploy to App Engine using a Managed VM, a Compute Engine instance (managed by Google) is created for you. All requests to App Engine are forwarded to that Compute Engine instance.
To run your script in App Engine as a Managed VM, you will have to dockerize your project, as the Managed VM runs a Docker container.
I don't see a reason to use an App Engine Managed VM just for running a script, as the cost will be the same as using a Compute Engine instance.
Probably the most cost-effective way is to create a script that:
Launches a Compute Engine instance
Installs Perl
Copies your script to the instance
Runs your script on the created instance
Deletes the instance when the script finishes
To schedule the execution, you can set up a cron job at home or in the office that runs the above script every hour.
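A rough sketch of such a script, assuming a Debian-based image; the names, zone, and paths are placeholders, and your image-processing binaries would need to be installed the same way or baked into a custom image:

#!/bin/bash
# Run hourly from cron, e.g.: 0 * * * * /path/to/hourly-perl-job.sh
ZONE=us-central1-a
INSTANCE=perl-job

# 1. Launch a short-lived instance
gcloud compute instances create "$INSTANCE" --zone="$ZONE" --machine-type=f1-micro

# Give the instance a moment to boot and accept SSH connections
sleep 60

# 2. Install Perl and copy the script onto the instance
gcloud compute ssh "$INSTANCE" --zone="$ZONE" --command="sudo apt-get -y install perl"
gcloud compute scp job.pl "$INSTANCE":~/ --zone="$ZONE"

# 3. Run the script, then tear the instance down so billing stops
gcloud compute ssh "$INSTANCE" --zone="$ZONE" --command="perl ~/job.pl"
gcloud compute instances delete "$INSTANCE" --zone="$ZONE" --quiet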

Related

Cloud SQL instance 2nd Generation ALTERNATIVE activation policy "ON DEMAND"

I have a problem with Cloud SQL billing.
My Cloud SQL instance has used all 720 hours of machine running time this month (db-g1-small, recently changed from db-n1-standard-1).
I've found, according to the Cloud SQL documentation, that
For Second Generation instances, the activation policy is used only to start or stop the instance.
So, without the First Generation's ON_DEMAND policy, how can I reduce these costs on my Cloud SQL instance?
PS: It looks like my Cloud SQL server does not shut down automatically because it keeps 4 connections in the Sleep state.
Indeed, for Second Generation instances of Cloud SQL, the only activation policies available are ALWAYS and NEVER, so it's no longer possible to leave that kind of instance management entirely in Cloud SQL's hands.
However, you can work around this by running a cron job that turns the instance on and off on a fixed schedule. For example, a cron job could shut the instance down on Friday night and turn it back on on Monday morning.
You can use the following command to do so:
gcloud sql instances patch [INSTANCE_NAME] --activation-policy [ACTIVATION_POLICY_VALUE]
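For example, the Friday-night/Monday-morning schedule could be implemented with two crontab entries on any machine with gcloud installed and authenticated (the instance name is a placeholder):

# Stop the instance at 22:00 on Fridays, start it at 06:00 on Mondays
0 22 * * 5 gcloud sql instances patch my-instance --activation-policy NEVER
0 6 * * 1 gcloud sql instances patch my-instance --activation-policy ALWAYS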
Moreover, you can create a feature request on Google Cloud's Public Issue Tracker to re-include that functionality in Cloud SQL in the future, but there are no guarantees that this will happen.

Google Cloud Composer - Create Environment - with a few compute engine instances - That is expensive

I am new to Google Cloud Composer and am following the quickstart instructions: create the environment, load the DAG, check Airflow, and delete the environment.
But in a real-life production use case, after we finish loading the DAG files and running them in the environment, should we delete the Google Cloud Composer environment? There may be several Compute Engine instances in that Composer environment doing nothing now, which is expensive.
But if I delete the environment, then I lose access to its Airflow web portal, and I cannot check the logs of jobs that ran on the deleted environment.
So what should I do? In a real-life production case, should I delete the environment after the processing is done or not?
Apache Airflow (and therefore Cloud Composer) is for orchestrating workflows, not for ETL batch jobs that only require transient compute resources. Just as you wouldn't turn a server off whenever a scheduled cron task isn't running, Composer environments are meant to be long-running compute resources that are always online, so that you can schedule repeating workflows whenever necessary (whether that be per second, daily, etc.).
In a real production case, a Composer environment should be left running at all times; no DAGs will be scheduled while it is down. If you have a development environment and wish to save money, you can resize the Composer environment's attached GKE cluster to 0 nodes so you won't be billed for them. Similarly, if you don't think you're running enough DAGs to justify the cost, consider smaller worker machine sizes.
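A sketch of that resize with gcloud (cluster name and zone are placeholders, and 3 assumes the default node count; the environment is unusable while the cluster sits at 0 nodes):

# Scale the environment's GKE cluster down to zero nodes to stop VM billing
gcloud container clusters resize my-composer-cluster --num-nodes=0 --zone=us-central1-a --quiet

# Scale back up when development resumes
gcloud container clusters resize my-composer-cluster --num-nodes=3 --zone=us-central1-a --quiet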

What is the difference between Serverless containers and App Engine flexible with custom runtimes?

I came across the article "Bringing the best of serverless to you", where I learned about an upcoming product called serverless containers on Cloud Functions, which is currently in alpha.
As described in the article:
Today, we’re also introducing serverless containers, which allow you
to run container-based workloads in a fully managed environment and
still only pay for what you use.
and on the GCP solutions page:
Serverless containers on Cloud Functions enables you to run your own containerized workloads on
GCP with all the benefits of serverless. And you will still pay only
for what you use. If you are interested in learning more about
serverless containers, please sign up for the alpha.
So my question is: how are these serverless containers different from App Engine flexible with custom runtimes, which also uses a Dockerfile?
My suspicion, since the product is named serverless containers on Cloud Functions, is that the differentiation may involve the role of Cloud Functions. If so, what role do Cloud Functions play in serverless containers?
Please clarify.
What are Cloud Functions?
From the official documentation:
Google Cloud Functions is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Your function is triggered when an event being watched is fired. Your code executes in a fully managed environment. There is no need to provision any infrastructure or worry about managing any servers.
In simple words: a Cloud Function is triggered by some event (an HTTP request, a Pub/Sub message, a Cloud Storage file upload...), runs the code of the function, returns a result, and then the function dies.
Currently, four runtime environments are available:
Node.js 6
Node.js 8 (Beta)
Python (Beta)
Go (Beta)
With the serverless containers on Cloud Functions product, the intent is that you can provide your own custom runtime environment as a Docker image. But the life cycle of the Cloud Function stays the same:
It is triggered > Runs > Outputs Result > Dies
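For illustration, deploying a function and attaching it to an event is a single gcloud command; the function and bucket names below are placeholders:

# HTTP-triggered function: runs per request, then the instance may be reclaimed
gcloud functions deploy my_http_function --runtime nodejs8 --trigger-http

# Event-triggered function: runs whenever a file lands in the given bucket
gcloud functions deploy my_storage_function --runtime python37 --trigger-bucket my-bucket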
App Engine Flex applications
Applications running in the App Engine flexible environment are deployed to virtual machines, i.e., Google Compute Engine instances. You can choose the type of machine you want to use and its resources (CPU, RAM, disk space). The App Engine flexible environment automatically scales your app up and down while balancing the load.
As with Cloud Functions, there are runtimes provided by Google, but if you would like to use an alternative implementation of Python, Java, Node.js, Go, Ruby, PHP, or .NET, you can use custom runtimes. You can even work with another language like C++ or Dart; you just need to provide a Docker image for your application.
What are differences between Cloud Functions and App Engine Flex apps?
The main differences between them are their life cycles and their use cases.
As noted above, a Cloud Function has a defined life cycle and dies when its task concludes. Cloud Functions should be used to do one thing and do it well.
On the other hand, an application running in the GAE flexible environment always has at least one instance running. The typical use case for these applications is serving several endpoints where users make REST API calls. But they provide more flexibility, as you have full control over the Docker image provided: you can do "almost" whatever you want there.
What is a Serverless Container?
As stated in the official blog post (search for "serverless containers"), it's basically a Cloud Function running inside a custom environment defined by a Dockerfile. From the post:
With serverless containers, we are providing the same underlying
infrastructure that powers Cloud Functions, but you’ll be able to
simply provide a Docker image as input.
So, instead of deploying your code to Cloud Functions, you can just deploy a Docker image containing the runtime and the code to execute.
What's the difference between these Cloud Functions with custom runtimes and App Engine flexible?
There are 5 basic differences:
Network: On GAE flexible you can customize the network the instances run in. This lets you add firewall rules to restrict egress and ingress traffic, block specific ports, or specify the SSL configuration you wish to run.
Time-out: Cloud Functions can run for a maximum of 9 minutes; Flexible, on the other hand, can run indefinitely.
Read-only environment: The Cloud Functions environment is read-only, while Flexible's can be written to (though this is only suited to storing transient data, since once a Flexible instance is restarted or terminated, all the stored data is lost).
Cold boot: Cloud Functions are fast to deploy and fast to start compared to Flexible. This is because Flexible runs inside a VM, and that extra time is needed for the VM to start.
How they work: Cloud Functions are event-driven (e.g., an upload of a photo to Cloud Storage triggering a function), whereas Flexible is request-driven (e.g., handling a request coming from a browser).
As you can see, being able to deploy a small amount of code without having to take care of all the things listed above is a feature in itself.
Also, take into account that serverless containers are still in alpha, so many things could change in the future, and there is still not a lot of documentation explaining their behavior in depth.

Handling multiple vm instances in google compute engine

I'm pretty new to Google Compute Engine. I have 5 types of machines and, let's say, 10 instances of each type. I don't want to do load balancing on them, so I can't use managed instance groups.
Is there any "smarter" way to copy my files to those VMs and run my software on them remotely and automatically than doing it manually?
Use basic startup scripts that call gsutil to copy your files from Google Cloud Storage onto the respective VMs and then launch your software.
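A minimal sketch of such a startup script, assuming the files live in a Cloud Storage bucket (bucket and paths are placeholders); attach it at instance creation with --metadata-from-file startup-script=startup.sh:

#!/bin/bash
# startup.sh - runs automatically (as root) every time the instance boots
# Pull the latest binary and config from Cloud Storage
gsutil cp gs://my-bucket/software/my-app /opt/my-app
gsutil cp gs://my-bucket/config/my-app.conf /etc/my-app.conf
chmod +x /opt/my-app

# Launch the software in the background
/opt/my-app --config /etc/my-app.conf &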

Google Compute Instance 100% CPU Utilisation

I am running an n1-standard-1 (1 vCPU, 3.75 GB memory) Compute Engine instance. In my Android app, around 80 users are online right now, the instance's CPU utilization is 99%, and my app has become less responsive. Kindly suggest a workaround, and if I need to upgrade, can I do that with the same instance, or does a new instance need to be created?
Since your app is already running and users are connecting to it, you probably don't want to do the following process, which incurs downtime:
shut down the VM instance, keeping the boot disk and other disks
boot a more powerful instance, using the boot disk from step (1)
attach and mount any additional disks, if applicable
Instead, you might want to do the following:
create an additional VM instance with similar software/configuration
create a load balancer and add both the original and new VM to it as a backend
change your DNS name to point to the load balancer IP instead of the original VM instance
Now your users will be sent to the least-loaded VM running the application, and you can add more VMs if your traffic increases.
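A rough sketch of the load-balancer step using a basic network (target pool) load balancer; all names, the region, and the zone are placeholders, and an HTTP(S) load balancer would involve more components:

# Create a target pool and add both VMs to it
gcloud compute target-pools create app-pool --region=us-central1
gcloud compute target-pools add-instances app-pool --instances=original-vm,new-vm --instances-zone=us-central1-a

# Create a forwarding rule so traffic on port 80 reaches the pool
gcloud compute forwarding-rules create app-lb --region=us-central1 --ports=80 --target-pool=app-pool

# Point your DNS A record at the forwarding rule's external IP
gcloud compute forwarding-rules describe app-lb --region=us-central1 --format="value(IPAddress)"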
You did not describe your application in detail, so it's unclear whether each VM has local state (e.g., runs a database) or there's a database running externally. You will still need to figure out how to manage stateful systems such as databases or user-uploaded data across all the VM instances, which is hard to advise on given the little information in your question.
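As for the upgrade part of the question: you can keep the same instance and resize it in place during a maintenance window, at the cost of downtime while it is stopped. A sketch (instance name, zone, and machine type are placeholders):

# Stop the instance, change its machine type, and start it again
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-vm --machine-type=n1-standard-2 --zone=us-central1-a
gcloud compute instances start my-vm --zone=us-central1-a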