How to change memory allocation for cloud function when deploying by gcloud command - google-cloud-functions

When deploying a Cloud Function, I'm using a command like this.
gcloud functions deploy MyCloudFunction --runtime nodejs8 --trigger-http
The default memory allocation is 256MB. I changed it to 1GB using the Google Cloud console in the browser.
Is there a way to change the memory allocation when deploying with the gcloud command?

You might want to read over the CLI documentation for gcloud functions deploy.
You can use the --memory flag to set the memory:
gcloud functions deploy MyCloudFunction ... --memory 1024MB
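Putting that together with the deploy command from the question, the full command would look something like this (function name and runtime taken from the question above):
gcloud functions deploy MyCloudFunction --runtime nodejs8 --trigger-http --memory 1024MB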

You may also need to increase the CPU count to be able to raise the memory beyond 512 MiB. Otherwise, with the default 0.33 vCPU Cloud Function allocation, I saw errors like the following (where [SERVICE] is the name of your Google Cloud Function):
ERROR: (gcloud.functions.deploy) INVALID_ARGUMENT: Could not update Cloud Run service [SERVICE]. spec.template.spec.containers[0].resources.limits.memory: Invalid value specified for container memory. For 0.333 CPU, memory must be between 128Mi and 512Mi inclusive.
From https://cloud.google.com/run/docs/configuring/cpu#command-line, this can be done by calling gcloud run services update [SERVICE] --cpu [CPU], for example:
gcloud run services update [SERVICE] --cpu=4 --memory=16Gi --region=northamerica-northeast1
You should see a response like:
Service [SERVICE] revision [SERVICE-*****-***] has been deployed and is serving 100 percent of traffic.
https://console.cloud.google.com/run can help show what is happening too.
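If you prefer to verify the new limits from the command line rather than the console, gcloud run services describe should show the configured CPU and memory (region as in the example above):
gcloud run services describe [SERVICE] --region=northamerica-northeast1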

Related

Error with --enable-display-device in COS Google Compute Engine VM

I'm using Cloud Build to start a Google Cloud Compute Engine VM.
Here's the command I got from Compute Engine by selecting options in the UI and getting the "Equivalent command line", notice the --enable-display-device:
gcloud compute instances create-with-container my-vm
--project=XXX
--zone=us-central1-a
--machine-type=g1-small
--network-interface=network-tier=PREMIUM,subnet=default
--maintenance-policy=MIGRATE
--provisioning-model=STANDARD
--service-account=XXX
--scopes=https://www.googleapis.com/auth/cloud-platform
--enable-display-device
--image=projects/cos-cloud/global/images/cos-stable-97-16919-29-16
--boot-disk-size=10GB
--boot-disk-type=pd-balanced
--boot-disk-device-name=checkmate-dev
--container-image=XXX
--container-restart-policy=always
--no-shielded-secure-boot
--shielded-vtpm
--shielded-integrity-monitoring
--labels=container-vm=cos-stable-97-16919-29-16
This command is placed in the args field of my cloudbuild.yaml, and the build seems to run as intended until Cloud Build spits out an error:
unrecognized arguments: --enable-display-device (did you mean '--boot-disk-device-name'?)
I don't understand, because this flag came from Compute Engine, so I figure it's valid. The flag does not actually appear in the docs at https://cloud.google.com/sdk/gcloud/reference/compute/instances/create-with-container.
But if I use the Compute Engine UI to make a "New VM Instance" as opposed to a container VM, then the flag appears in the command line.
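For reference, the step in my cloudbuild.yaml looks roughly like this, assuming the standard gcr.io/cloud-builders/gcloud builder (abbreviated; the remaining args are the flags listed above):
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - compute
    - instances
    - create-with-container
    - my-vm
    - --project=XXX
    - --zone=us-central1-a
    - --machine-type=g1-small
    - --enable-display-device
    # ...plus the remaining flags from the command above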

google cloud functions cpu speed from command line

How do I set the CPU speed with gcloud functions deploy? When I run the following
gcloud functions deploy "$FUNC" \
--gen2 \
--runtime=python310 \
--region=us-central1 \
--source=. \
--entry-point="$ENTRY" \
--trigger-http \
--allow-unauthenticated \
--memory=128MiB \
--min-instances=1
I end up with a function with 128MB and 0.583 CPU. I suspect 0.583 comes from an old setting when I set the memory to 1024MB. I don't see an argument for the CPU and changing it using the GCP UI is not ideal.
Google Cloud Functions CPU Speed Setup suggests that memory and CPU are tied, but that doesn't seem to be the case.
EDIT: I filed an issue: https://issuetracker.google.com/issues/259575942
As mentioned by @John Hanley, you have to delete the function and redeploy in order to change the memory.
I tried to replicate the issue on my end and noticed that I wasn't able to increase the memory beyond 512MB; I had to create a new function to change the memory.
I also noticed that the memory cannot be increased beyond the value specified at deploy time, i.e. if you deploy a function with --memory=1GiB, you can't later increase it to 2GiB.
This looks like a bug in Cloud Functions gen2. If it affects your production workloads, please raise a bug in the public issue tracker with a description, or contact Google support.
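A rough sketch of the delete-and-redeploy workaround, reusing the flags from the question (the --gen2 and region flags are assumed to match your existing deployment; adjust the memory value as needed):
gcloud functions delete "$FUNC" --gen2 --region=us-central1 --quiet
gcloud functions deploy "$FUNC" \
--gen2 \
--runtime=python310 \
--region=us-central1 \
--source=. \
--entry-point="$ENTRY" \
--trigger-http \
--allow-unauthenticated \
--memory=1GiB \
--min-instances=1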

Trying to monitor resource usage of a kvm/qemu virtual machine with mesos

I'm currently deploying a KVM/QEMU virtual machine with Mesos/Marathon. In Marathon, I'm using the built-in Mesos command executor and running this script:
virsh start centos7.0; while true; do echo 'centos 7.0 guest is running'; sleep 5; done
Note that the while loop is there only to keep the task running. My issue is that I cannot get Mesos to monitor the resource usage of the virtual machine.
When Marathon deploys this task on a Mesos agent, it creates a container that uses the memory and CPU cgroups:
/sys/fs/cgroup/cpu/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
/sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
When the virtual machine is being kicked off, the virsh start command is sending a request to libvirtd. Libvirtd then reads the guest.xml file located in /etc/libvirt/qemu/ and then sends a request to the qemu/kvm driver to deploy it.
In my guest.xml file I’m using a custom partition cgroup slice to monitor my virtual machine usage.
https://libvirt.org/cgroups.html
(for each cgroup)
/sys/fs/cgroup/???/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
What I have tried:
I tried deleting my memory/CPU cgroups from this slice by running
cgdelete -r cpu,memory:vmHolder.slice
and then adding my QEMU guest process to the Mesos controllers:
cgclassify -g cpu,memory:mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895 GUEST-PID
When I run cat /proc/5531/cgroup, I see:
11:perf_event:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
10:pids:/
9:devices:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
8:cpuset:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope/emulator
7:net_prio,net_cls:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
6:freezer:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
5:blkio:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
4:hugetlb:/
3:cpuacct,cpu:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
2:memory:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
1:name=systemd:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
It shows that I'm using those controllers, but when I run systemd-cgtop it does not include the memory usage of the VM. I'm not sure what to do next. Any suggestions?
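One sanity check I can think of is reading the memory controller's own accounting for the Mesos container directly, which should reflect the VM's usage if the reclassification took effect (container ID as above):
cat /sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895/cgroup.procs
cat /sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895/memory.usage_in_bytes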

Google Compute Engine: "attach-disk" command doesn't mount disk on the machine

I want to attach a disk to an instance on Google Compute Engine using the commands below.
gcloud compute instances create pg-disk-formatter --image ubuntu-1604-lts --custom-cpu 1 --custom-memory 1
gcloud compute disks create pg-data-disk --size 50GB
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk
However, even after I logged into the machine and cd'd to /dev/disk/by-id/, the disk doesn't show up in the list.
mkouhei0910#pg-data-disk:~$ cd /dev/disk/by-id/
google-persistent-disk-0 scsi-0Google_PersistentDisk_persistent-disk-0
google-persistent-disk-0-part1 scsi-0Google_PersistentDisk_persistent-disk-0-part1
google-pg-data-disk2 scsi-0Google_PersistentDisk_pg-data-disk2
I noticed it showed up after I attached a new disk from the Google Cloud Platform Console, but how can I achieve this purely from the gcloud command line?
Your first command is not correct. It should be:
gcloud compute instances create pg-disk-formatter --image-project ubuntu-os-cloud --image-family ubuntu-1604-lts --custom-cpu 1 --custom-memory 1
The second and third commands are fine. They will create a disk and attach it to the VM instance. The additional disk is listed in the ls output that you provided:
google-pg-data-disk2
If you want the guest operating system to see a different name for the attached disk, you can use the --device-name flag with the attach-disk command.
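For example, to make the disk show up in the guest under its own name (this is just an illustration using the disk and instance names from your commands):
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk --device-name pg-data-disk
With that, the guest should list it as google-pg-data-disk under /dev/disk/by-id/.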

Unable to access Google Compute Engine instance using external IP address

I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now, suddenly, the instance cannot be accessed using its external IP address.
I logged in to the developer console and tried rebooting the instance but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the serial port output, it appears that init is not loading properly.
I am pasting below the last few lines from the serial port output of the virtual machine.
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with higher capacity using your PD; however, if a script is causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project=[PROJECT_ID] getserialportoutput [INSTANCE_NAME]
If the issue still continues, you can either:
- make a snapshot of your PD and work from a copy, or
- delete the instance without deleting the PD.
Then attach and mount the PD to another VM as a second disk, so you can access it and find what is causing the issue. Visit https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit this page http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inodes troubleshooting.
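If you go the attach-to-another-VM route, a rough equivalent with the newer gcloud tool would be something like the following (the disk, snapshot, zone and rescue VM names are placeholders):
gcloud compute disks snapshot [DISK_NAME] --snapshot-names=[SNAPSHOT_NAME] --zone=[ZONE]
gcloud compute instances attach-disk [RESCUE_VM] --disk=[DISK_NAME] --zone=[ZONE]
Then mount the disk inside the rescue VM (for example under /mnt/rescue) to inspect it.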
Make sure the Allow HTTP traffic setting on the VM is still enabled.
Then check which network firewall you are using and its rules.
If your network is set up to use an ephemeral IP, it will periodically be released back, which will cause your IP to change over time. Set it to a static/reserved address instead (on the Networks page).
https://developers.google.com/compute/docs/instances-and-network#externaladdresses
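If you want to keep the address you currently have, an ephemeral external IP can be promoted to a static one with something like the following (the name, address and region are placeholders):
gcloud compute addresses create my-static-ip --addresses=[CURRENT_EXTERNAL_IP] --region=[REGION]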