Google Cloud Functions CPU speed from the command line

How do I set the CPU speed with gcloud functions deploy? When I run the following
gcloud functions deploy "$FUNC" \
--gen2 \
--runtime=python310 \
--region=us-central1 \
--source=. \
--entry-point="$ENTRY" \
--trigger-http \
--allow-unauthenticated \
--memory=128MiB \
--min-instances=1
I end up with a function with 128MB and 0.583 CPU. I suspect 0.583 comes from an old setting when I set the memory to 1024MB. I don't see an argument for the CPU and changing it using the GCP UI is not ideal.
Google Cloud Functions CPU Speed Setup suggests that memory and cpu are tied but that doesn't seem to be the case.
EDIT: I filed an issue: https://issuetracker.google.com/issues/259575942

As mentioned by @John Hanley, you have to delete the function and redeploy in order to change the memory.
I tried to replicate the issue on my end and noticed that I wasn't able to increase the memory beyond 512MB; I had to create a new function to change the memory.
I also noticed that you cannot increase the memory beyond the amount specified at deploy time, i.e. if you deploy a function with --memory=1GiB, you can't later increase it to 2GiB.
It seems like there is a bug in Cloud Functions gen2. If this bug affects your production, please raise a bug in the Public Issue Tracker with a description, or contact Google support.
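As a possible workaround (a sketch, not verified against the questioner's project): gen2 functions are backed by a Cloud Run service, so the CPU allocation can be inspected and changed with the gcloud run commands. The service name matching the lowercased function name and the region are assumptions here.

```shell
# Assumption: the gen2 function is backed by a Cloud Run service whose
# name matches the (lowercased) function name.
FUNC=my-function          # hypothetical function/service name
REGION=us-central1

# Show the current CPU and memory limits on the underlying service.
gcloud run services describe "$FUNC" --region="$REGION" \
  --format='value(spec.template.spec.containers[0].resources.limits)'

# Set the desired CPU (for example 1); Cloud Run also accepts
# fractional values below 1 for small memory sizes.
gcloud run services update "$FUNC" --region="$REGION" --cpu=1
```

Note that updating the service this way creates a new Cloud Run revision outside of gcloud functions deploy, so a subsequent deploy may overwrite it.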

Related

How to fix "qemu-system-mipsel: The following two regions overlap (in the memory address space)"?

I would like to run a Linux root filesystem for MIPSEL on qemu-system-mipsel.
The root filesystem was extracted from the firmware using "firmware-analysis-toolkit" (firmadyne).
However, after building the root filesystem as required, I encounter an error when I run QEMU.
The script used to run QEMU is:
qemu-system-mipsel -M malta -kernel vmlinuz.elf \
-drive if=ide,format=raw,file=squashfs-factory.raw \
-append "root=/dev/sda1 console=ttyS0 nandsim.parts=64,64,64,64,64,64,64,64,64,64 \
rdinit=/firmadyne/preInit.sh rw debug ignore_loglevel print-fatal-signals=1 user_debug=31 firmadyn" \
-nographic
If I use the vmlinux.elf provided by the firmadyne toolkit (kernel 2.6.39.4+), everything works.
If I try to use a vmlinux.elf (kernel 5.4) provided by openwrt-imagebuilder (or compiled by me), I encounter this error:
The following two regions overlap (in the memory address space):
vmlinux-5.4.111.mipsel ELF program header segment 0 (addresses 0x0000000000001000 - 0x000000000084b910)
prom (addresses 0x0000000000002000 - 0x0000000000003040)
I've tried everything. How can it be fixed?
QEMU is complaining that the ELF file you've asked it to load overlaps the blob of 'prom' data that QEMU uses to pass information to the kernel, such as the memory size and the kernel command line. That PROM data always starts at address 0x2000. You need to build your kernel so that it doesn't try to put anything at that address.
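To confirm the overlap before rebuilding, you can inspect the kernel's program headers yourself (a diagnostic sketch; the filename is the one from the error message):

```shell
# List the ELF LOAD segments and their addresses. QEMU's prom blob
# lives at 0x2000, so any LOAD segment whose address range covers
# 0x2000-0x3040 will trigger the overlap error.
readelf -l vmlinux-5.4.111.mipsel | grep -A1 LOAD
```

The fix is then to adjust the kernel's link/load address in its build configuration so that no segment starts below the end of the PROM region.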

Error with --enable-display-device in COS Google Compute Engine VM

I'm using Cloud Build to start a Google Cloud Compute Engine VM.
Here's the command I got from Compute Engine by selecting options in the UI and getting the "Equivalent command line", notice the --enable-display-device:
gcloud compute instances create-with-container my-vm \
--project=XXX \
--zone=us-central1-a \
--machine-type=g1-small \
--network-interface=network-tier=PREMIUM,subnet=default \
--maintenance-policy=MIGRATE \
--provisioning-model=STANDARD \
--service-account=XXX \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--enable-display-device \
--image=projects/cos-cloud/global/images/cos-stable-97-16919-29-16 \
--boot-disk-size=10GB \
--boot-disk-type=pd-balanced \
--boot-disk-device-name=checkmate-dev \
--container-image=XXX \
--container-restart-policy=always \
--no-shielded-secure-boot \
--shielded-vtpm \
--shielded-integrity-monitoring \
--labels=container-vm=cos-stable-97-16919-29-16
This command is placed in the args field of the cloudbuild.yaml which seems to run as intended until Cloud Build spits out an error:
unrecognized arguments: --enable-display-device (did you mean '--boot-disk-device-name'?)
I don't understand this, because the flag came from Compute Engine, so I figured it was valid. The flag does not actually appear in the docs at https://cloud.google.com/sdk/gcloud/reference/compute/instances/create-with-container.
But if I use the Compute Engine UI to create a "New VM Instance" rather than a container VM, the flag does appear in the equivalent command line.

How to change memory allocation for cloud function when deploying by gcloud command

When deploying a cloud function, I'm using command like this.
gcloud functions deploy MyCloudFunction --runtime nodejs8 --trigger-http
The default memory allocation is 256MB. I changed it to 1GB using the Google Cloud console in the browser.
Is there a way to change memory allocation when deploying by gcloud command?
You might want to read over the CLI documentation for gcloud functions deploy.
You can use the --memory flag to set the memory:
gcloud functions deploy MyCloudFunction ... --memory 1024MB
You may also need to increase the CPU count to be able to increase memory beyond 512 MiB. Otherwise, with the default 0.33 vCPU Cloud Function allocation, I saw errors like the following, where [SERVICE] is the name of your Google Cloud Function below:
ERROR: (gcloud.functions.deploy) INVALID_ARGUMENT: Could not update Cloud Run service [SERVICE]. spec.template.spec.containers[0].resources.limits.memory: Invalid value specified for container memory. For 0.333 CPU, memory must be between 128Mi and 512Mi inclusive.
From https://cloud.google.com/run/docs/configuring/cpu#command-line, this can be done by calling gcloud run services update [SERVICE] --cpu [CPU], for example:
gcloud run services update [SERVICE] --cpu=4 --memory=16Gi --region=northamerica-northeast1
You should see a response like:
Service [SERVICE] revision [SERVICE-*****-***] has been deployed and is serving 100 percent of traffic.
https://console.cloud.google.com/run can help show what is happening too.
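Putting the answer together as one sequence (a sketch using the question's function name; the lowercased Cloud Run service name and the region are assumptions, and the second step only applies to a gen2 function backed by Cloud Run):

```shell
# Step 1: deploy with the desired memory.
gcloud functions deploy MyCloudFunction --runtime nodejs8 --trigger-http \
  --memory 1024MB

# Step 2: if the deploy is rejected because the memory exceeds what the
# current CPU allocation allows, raise the CPU on the backing Cloud Run
# service and retry.
gcloud run services update mycloudfunction --cpu=1 --memory=1GiB \
  --region=northamerica-northeast1
```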

How to create google compute engine template with a custom disk and an external ip

How do I create a google compute engine template, named my-template, with a custom disk named my-disk, an external ip, that's preemptible, and with the tags necessary to open the http server ports?
Can I use a managed instance group to automatically restart these preemptible instances?
Something like the following command will work. Note that I set it up to use a highmem machine with 8 cores.
gcloud compute instance-templates create my-template \
--disk=boot=yes,auto-delete=no,name=my-disk \
--machine-type=n1-highmem-8 \
--preemptible \
--network-interface=address=35.238.XXX.YYY \
--tags=http-server,https-server
As of Nov 2018, the following link is where you can setup your external IP:
https://console.cloud.google.com/networking/addresses/list
Yes, you'll be able to use a managed instance group to automatically restart the preemptible instance once compute resources are available.
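For completeness, a sketch of creating that managed instance group from the template (the group name, size, and zone are hypothetical):

```shell
# Create a managed instance group from the template above. The group
# recreates preempted instances automatically once compute capacity
# is available again.
gcloud compute instance-groups managed create my-group \
  --template=my-template \
  --size=1 \
  --zone=us-central1-a
```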

Bug in GCE Developer Console 'equivalent command line'

When attempting to create an instance in a project that includes local SSDs, I am given the following (redacted) command line equivalent:
gcloud compute --project "PROJECTS" instances create "INSTANCE" --zone "us-central1-f" \
--machine-type "n1-standard-2" --network "default" --maintenance-policy "MIGRATE" \
--scopes [...] --tags "http-server" --local-ssd-count "2" \
--image "ubuntu-1404-trusty-v20150316" --boot-disk-type "pd-standard" \
--boot-disk-device-name "INSTANCEDEVICE"
This fails with:
ERROR: (gcloud) unrecognized arguments: --local-ssd-count 2
Indeed, I find no mention of --local-ssd-count in the current docs: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Changing this to --local-ssd --local-ssd works, as then the defaults are used.
This is using Google Cloud SDK 0.9.54, the most recent after gcloud components update.
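For reference, a sketch of the repeated-flag form that works (flag syntax per the current gcloud docs; the instance name is from the question, and the remaining flags are omitted for brevity):

```shell
# Each --local-ssd flag attaches one local SSD, so repeat it once per
# disk instead of using the unrecognized --local-ssd-count. The
# interface key is optional (SCSI is the default).
gcloud compute instances create INSTANCE \
  --zone=us-central1-f \
  --local-ssd=interface=SCSI \
  --local-ssd=interface=SCSI
```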
If you've found a bug in GCE or have a feature you'd like to propose/request, the best way to report it is through the Public Issue Tracker that Google has made available.
Visit the Issue Tracker to report your feedback, bug, or feature request; it does not require any support package.
I highly encourage you to do so, as Google has staff actively monitoring and working on those reports. Note that this is different from what they do on Stack Overflow: the tracker is for bugs and feature requests, while SO is for questions. It is likely the best way to get your feedback to their engineers, though Google staff answer questions on Stack Overflow as well. Questions go here; bug reports go to the tracker, as far as I understand.