Share state across Google Cloud Functions

I have a Google Cloud Function that should return the same value to all clients calling it. The value is set by another Google Cloud Function. I have this working using Firestore, but I want something that stores the value in memory or pushes the value change onto an event queue.

If you're looking for in-memory, low-latency data storage, have a look at the Memorystore service. It's based on Redis and serves data in key-value mode at low latency.
Memorystore is only reachable through a private IP in your VPC. To reach it, attach a Serverless VPC Access connector to your functions (both the one that writes and the one that reads) to allow them to access your VPC, and therefore the Memorystore instance.
Take care to create your functions, your Serverless VPC Access connector, and your Memorystore instance in the same region to keep latency low.
If it doesn't work, have a look at your firewall rules and allow traffic on the Redis port (6379).
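As a sketch of the pattern (all names are illustrative): one function writes the shared value to Memorystore, the other reads it, so every client sees the latest value. The key-value client is injected here to keep the logic storage-agnostic; in a real deployment it would be a redis-py client created at module load, e.g. `redis.Redis(host=<memorystore-private-ip>, port=6379)`, reachable through the VPC connector.

```python
# Sketch of the shared-state pattern with the key-value client injected.
# In production, `client` would be a redis-py client pointed at the
# Memorystore private IP (reachable via the Serverless VPC Access connector).

SHARED_KEY = "shared_value"  # illustrative key name

def make_handlers(client):
    """client only needs set(key, value) and get(key) -> value-or-None,
    which the redis-py client provides."""

    def set_value(new_value):
        # Body of the writer Cloud Function.
        client.set(SHARED_KEY, new_value)
        return "stored"

    def get_value():
        # Body of the reader Cloud Function: every caller gets the
        # latest value written by set_value.
        value = client.get(SHARED_KEY)
        return value if value is not None else "not set"

    return set_value, get_value
```

Because both functions talk to the same Redis instance, the value survives individual function instances being created and destroyed, unlike module-level globals.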

Related

GCP Cloud Functions with GPU access or Compute Engine instance

I have created a Google Cloud Function to do image processing. One part of the process uses a deep learning model that normally runs on a GPU; since I couldn't use a GPU, I changed it to run on CPU, and after reading many links it is working well.
My questions are: how can I enable the use of a GPU for Cloud Functions? And how could I send one image from Cloud Functions to be processed on a Compute Engine instance with a GPU? Finally, I read something about Atheros, but it looks expensive, more than $1k/month.
Thanks for your comments and ideas.
GPUs are expensive; there is no real workaround for that. You can limit the cost with a small VM and a small GPU, but it's still expensive.
The communication between Cloud Functions and the VM is up to you. It can be an HTTP REST API, gRPC, or a custom protocol on a custom port. If you use the VM's private IP, you need to add a Serverless VPC Access connector to your Cloud Functions to bridge the serverless world managed by Google with your own VPC where your VM lives.
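As an illustrative sketch of the HTTP option (the `/process` endpoint, port 8080, and the private IP are assumptions, not anything Google provides): the Cloud Function forwards the raw image bytes to the GPU VM and returns the model's response.

```python
# Hypothetical sketch: a Cloud Function relays an image to a GPU VM over
# plain HTTP on the VM's private IP (reachable through a Serverless VPC
# Access connector). Endpoint path, port, and IP are assumptions.
import urllib.request

def build_inference_request(vm_private_ip: str, image_bytes: bytes) -> urllib.request.Request:
    """Build the POST request that ships the raw image to the GPU VM."""
    return urllib.request.Request(
        url=f"http://{vm_private_ip}:8080/process",
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )

def process_image(request):
    # Cloud Functions HTTP entry point: forward the uploaded image and
    # return the inference server's response body.
    req = build_inference_request("10.128.0.5", request.get_data())  # IP is an assumption
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read()
```

The VM side would run a small HTTP server in front of the model; gRPC would follow the same shape with a generated client stub instead of `urllib`.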

Server vs Serverless for REST API

I have a REST API that I was thinking about deploying using a Serverless model. My data is in an AWS RDS server that needs to be put in a VPC for security reasons. To allow a Lambda to access the RDS, I need to configure the lambda to be in a VPC, but this makes cold starts an average of 8 seconds longer according to articles I read.
The REST API is for a website so an 8 second page load is not acceptable.
Is there any way I can use a Serverless model to implement my REST API, or should I just use a regular EC2 server?
Unfortunately, this is not yet released, but let us hope it is a matter of weeks or months now. At re:Invent 2018, AWS announced Remote NAT for Lambda, to be available this year (2019).
For now you have to either expose RDS to the outside (directly or through a tunnel), which is a security issue, or create Lambda ENIs in the VPC.
To keep your Lambdas "warm", you can create a scheduled "ping" mechanism. You can find an example of this pattern in the article by Yan Cui.
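The scheduled "ping" pattern usually amounts to a marker in the event that the handler short-circuits on. A minimal sketch, where the `"source": "warmup"` marker field is an assumption (any field your scheduled rule sets in its payload will do):

```python
# Hypothetical "keep warm" sketch: a CloudWatch scheduled rule invokes the
# Lambda with a marker payload, and the handler returns immediately
# without doing any real work, keeping the container alive.
def handler(event, context):
    if event.get("source") == "warmup":  # marker field is an assumption
        return {"warmed": True}
    # ... real request handling goes here ...
    return {"statusCode": 200, "body": "real work"}
```

A rule firing every 5-10 minutes is typically enough to keep one container warm; note this does not help when concurrent traffic spins up additional cold containers.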

Are custom metadata values for GCE instance stored securely?

I was wondering if custom metadata for google compute engine VM instances was an appropriate place to store sensitive information for configuring apps that run on the instance.
So we use container-optimised OS images to run microservices. We configure the containers with environment variables for things like creds for db connections and other systems we integrate with.
The VMs are treated as ephemeral for each CD deployment, and the best I have come up with so far is to create an instance template whose custom metadata is loaded with config values from a file I keep on my local machine; that metadata is then made available to a systemd unit when the VM starts up (cloud-config).
The essence of this means environment variable values (some containing creds) are uploaded by me (which don't change very much) and are then pulled from the VM instance metadata server when a new VM is fired up. So I'm just wondering if there's any significant security concerns with this approach...
Many thanks for your help
According to the Compute Engine documentation:
"Is metadata information secure? When you make a request to get information from the metadata server, your request and the subsequent metadata response never leave the physical host running the virtual machine instance."
Since the request and response never leave the physical host, you will not be able to access the metadata from another VM or from outside Google Cloud Platform. However, any user with access to the VM will be able to query the metadata server and retrieve the information.
Based on the information you provided, storing credentials for a test or staging environment in this manner would be acceptable. However, if this is a production system with customer data or information important to the business, I would keep the credentials in a secure store that tracks access. The data in the metadata server is not encrypted, and accesses are not logged.
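For reference, here is a minimal sketch of how a startup script or app on the VM reads such a custom metadata value. The key name `my-config` is a placeholder; the `Metadata-Flavor: Google` header is required by the metadata server:

```python
# Sketch: read a custom metadata value from inside a GCE instance.
# The metadata server rejects requests without the Metadata-Flavor header.
import urllib.request

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1"
                "/instance/attributes/{key}")

def metadata_request(key: str) -> urllib.request.Request:
    return urllib.request.Request(
        METADATA_URL.format(key=key),
        headers={"Metadata-Flavor": "Google"},
    )

def read_metadata(key: str) -> str:
    # Only resolves from inside a Compute Engine instance.
    with urllib.request.urlopen(metadata_request(key), timeout=5) as resp:
        return resp.read().decode()
```

Note that anything on the VM that can make this HTTP call (any process, any container) can read the value, which is the security trade-off described above.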

Monitoring unhealthy hosts on google cloud

I am using an external monitoring service (not Stackdriver).
I wish to monitor the number of unhealthy hosts on my load balancer.
It seems the Google Cloud API doesn't expose this metric, so I implemented a custom script that gets the instance groups of the load balancer, fetches the instances' data (DNS), and performs the health check itself.
Pretty cumbersome. Is there a simpler way to do it?
You can use the command 'gcloud compute backend-services get-health' to get the status of each instance in your backend service. It reports each instance that is part of the backend service as HEALTHY or UNHEALTHY.
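If you need this programmatically, e.g. to feed your external monitoring, the command's JSON output (`--format=json`) can be summarized with a few lines. The JSON shape assumed below is a sketch based on typical get-health output; verify it against what your gcloud version actually prints:

```python
# Sketch: count unhealthy instances from the output of
#   gcloud compute backend-services get-health NAME --format=json
# The assumed shape is a list of backends, each with
# status.healthStatus[].healthState set to HEALTHY or UNHEALTHY.
import json

def count_unhealthy(get_health_json: str) -> int:
    unhealthy = 0
    for backend in json.loads(get_health_json):
        for instance in backend.get("status", {}).get("healthStatus", []):
            if instance.get("healthState") != "HEALTHY":
                unhealthy += 1
    return unhealthy
```

Running the gcloud command on a schedule and pushing `count_unhealthy(...)` to your monitoring service replaces the manual DNS-and-probe script.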

How to access service in google container engine from google compute engine instance

I have a cluster on Google Container Engine. There is an internal service with the domain name app.superproject and exposed port 9999.
I also have an instance in Google Compute Engine.
How can I access the service by its domain name from the Compute Engine instance?
GKE is built on top of GCE; a GKE node is also a GCE instance. You can view all your instances either in the web console or with the gcloud compute instances list command.
Note that they may not be in the same GCE virtual network. In your use case, it's better to put them in the same network, e.g., the default network (I guess they already are, but check their network properties if you are not sure); then they're reachable from each other through their internal IPs (if not, check the firewall settings).
You can also use instance names, which resolve to internal IPs, e.g., ping instance1.
If they're not in the same GCE virtual network, you have to treat the service as an external service by exposing an external IP, which is not recommended in your use case.
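Once the VM and the cluster share a network, a quick way to verify reachability from the GCE instance is a plain TCP connect on the service's exposed port (9999 in the question). Note the target host would be a node's internal IP or instance name, not app.superproject, since that cluster-internal DNS name does not resolve outside the cluster. A minimal sketch:

```python
# Sketch: check from the GCE instance that a TCP port on a cluster node
# is reachable. Host name/IP and port are taken from the question's setup;
# substitute your actual node name or internal IP.
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures.
        return False
```

If this returns False on the internal IP, the firewall rules between the VM and the cluster nodes are the first thing to check, as noted above.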