Google Cloud Function non-persistent disk - google-cloud-functions

What is the size of the non-persistent disk space (/tmp directory) that a Google Cloud Function receives on creation?
For example, on AWS you get 500 MB.

The /tmp directory is mounted in the RAM of Cloud Functions, so the maximum capacity of this directory varies with the memory allocated to your function as a whole.
Do note that the configured amount of RAM is not all available for files, as your app is also loaded into RAM. For your reference, the maximum memory per function is 8192 MiB.
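As a concrete illustration, here is a minimal sketch of a Node.js HTTP Cloud Function that writes and removes a scratch file in /tmp; the function name writeScratchFile and the sample payload are illustrative, not part of the original question.

const fs = require('fs');
const path = require('path');

exports.writeScratchFile = (req, res) => {
  // /tmp is an in-memory filesystem: every byte written here counts against
  // the memory allocated to the function (up to 8192 MiB).
  const scratchPath = path.join('/tmp', 'scratch.json');
  fs.writeFileSync(scratchPath, JSON.stringify({ hello: 'world' }));

  const sizeBytes = fs.statSync(scratchPath).size;

  // Clean up so the file does not keep consuming memory across warm invocations.
  fs.unlinkSync(scratchPath);

  res.status(200).send(`Wrote and removed ${sizeBytes} bytes in /tmp`);
};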

Related

Share an SSD or RAM disk between Google Compute Engine VMs

From the Google documentation it is clear that a read-only Persistent Disk (PD) can be shared between multiple instances (Google Compute Engine VMs), but is it somehow possible to share an SSD or a RAM disk between multiple VMs?
Local SSDs are physically attached, and neither they nor RAM are read-only.
So this question probably answers itself.

Confusion on disk types in gcloud when creating a new disk

So I'm looking at the disk types available on the Google Cloud Platform, and they have persistent disks and local SSD disks (from reading their documentation). After reading the docs, I go to create a disk and the disk types are labeled 'SSD Persistent disk' and 'Standard Persistent disk'. They seem to be named differently here than in the docs. I just want confirmation that:
SSD Persistent disk = Local SSD disk
Standard Persistent disk = Persistent disks
I'm new to running VMs on a cloud platform for hosting your own site, and I'm trying to wrap my head around the different options available for all the different settings and why you would choose one over the other.
I'm trying to learn the different disks available and what they do, but when I go to actually create a disk, the types available aren't the same as the ones listed in the docs. I figured they could be the same but named differently for whatever reason, so I wanted to come here and confirm whether that's the case.
Your choices are:
local SSD = locally-attached to the VM, SSD
standard persistent = network-attached, persistent, HDD**
SSD persistent = network-attached, persistent, SSD
Type 1 is lower latency than types 2 and 3, because type 1 is physically attached to the VM.
Type 2 and 3 persist beyond instance stop/delete. Type 1 does not.
Type 2 and 3 are durable/redundant (Google replicates them, like RAID 1). Type 1 is not.
Type 2 and 3 can be attached to multiple VMs simultaneously (in read mode). Type 1 cannot.
** Nowhere, as far as I know, does Google actually state that standard persistent disks are HDD, only that they are not SSD, so HDD may not be guaranteed.
You can see more specific data at Storage Options, but in summary:
local SSD is the fastest (by far)
SSD persistent has much higher read/write IOPS than standard persistent
SSD persistent is more expensive (4x) than standard persistent
Basically you have 2 different disk types to choose from when setting up a GCE instance:
Persistent disks, which can be either standard persistent disks (possibly HDD) or SSDs.
Local disks, which are not persistent and are always SSDs.
As you can read in the docs, local SSDs only allow up to 3 TB to be stored on the disks, while persistent disks allow up to 64 TB.
SSDs are also more expensive per GB used. The upside of SSDs, however, is that they allow higher throughput and lower latency.
There are several other things to note; everything there is to know about GCE disks can be found in the aforementioned docs.
Answer to your original question:
Persistent disk = Standard persistent disk or SSD persistent disk
Non-persistent disk = Local SSD
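To make the naming concrete, here is a hedged sketch using the googleapis Node.js client to create the two persistent disk types; the project ID, zone, disk names, and sizes are placeholders I made up, and the exact call shape may differ slightly between library versions.

const { google } = require('googleapis');

async function createPersistentDisks() {
  const auth = new google.auth.GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/compute'],
  });
  const compute = google.compute({ version: 'v1', auth });

  // "SSD Persistent disk" in the console corresponds to the pd-ssd disk type.
  await compute.disks.insert({
    project: 'my-project',           // placeholder
    zone: 'us-central1-a',           // placeholder
    requestBody: {
      name: 'my-ssd-disk',
      sizeGb: '100',
      type: 'zones/us-central1-a/diskTypes/pd-ssd',
    },
  });

  // "Standard Persistent disk" corresponds to pd-standard.
  await compute.disks.insert({
    project: 'my-project',
    zone: 'us-central1-a',
    requestBody: {
      name: 'my-standard-disk',
      sizeGb: '100',
      type: 'zones/us-central1-a/diskTypes/pd-standard',
    },
  });

  // Local SSDs are a separate, instance-attached resource and are not created
  // through disks.insert at all, which is another hint that "SSD Persistent
  // disk" is not the same thing as "Local SSD".
}

createPersistentDisks().catch(console.error);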

HTML5 File System: how to increase persistent storage

Following this guide: https://developer.chrome.com/apps/offline_storage#asking_more
When I execute
navigator.webkitPersistentStorage.requestQuota(Number.MAX_SAFE_INTEGER, console.log.bind(console));
I receive the output 10737418240 bytes (10 GiB), which is the calculated maximum size when using Temporary storage:
(available storage space + storage being used by apps) * .5
However, I did press OK to allow allocating more storage than that. So why don't I get the requested storage?
An application can have a larger quota for persistent storage than temporary storage, but you must request storage using the Quota Management API and the user must grant you permission to use more space.
I've got the same problem posted here Cannot allocate more than 10GB of HTML5 persistent storage
Conclusion: the maximum persistent storage quota is 10 GB, and the documentation saying it is limited by the user's free disk space is incorrect.
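If it helps to verify what Chrome actually granted, here is a small sketch using the same legacy webkit-prefixed Quota API as the question; the callback argument names are mine.

// Logs current usage and the quota that was actually granted, in bytes.
navigator.webkitPersistentStorage.queryUsageAndQuota(
  (usedBytes, grantedBytes) => {
    console.log(`Using ${usedBytes} of ${grantedBytes} granted bytes`);
  },
  (err) => console.error('queryUsageAndQuota failed', err)
);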

Google Compute Engine Persistent Disk maximum size

I'm working on a digital asset management deployment, and it's likely we'll need more than the 10 TB maximum size for a persistent disk. The DAM software does not support multiple storage points and likely won't in the future.
How are people coping with this limitation? Is the 10 TB maximum likely to increase as Google Compute Engine matures?
As of 1 Feb 2016, you can create a persistent disk of up to 64 TB; see the blog post for more details.
According to the official doc:
Instances with shared-core machine types are limited to a maximum of 16 persistent disks.
For custom machine types or predefined machine types that have a minimum of 1 vCPU, you can attach up to 128 persistent disks.
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.
Most instances can have up to 64 TB of total persistent disk space attached. Shared-core machine types are limited to 3 TB of total persistent disk space. Total persistent disk space for an instance includes the size of the boot persistent disk.

How can I copy Couchbase data from disk to memory after increasing memory?

Some time ago our Couchbase cluster started to read data from disk because memory was full. We increased the amount of memory, but Couchbase still reads from disk. Disk reads greatly increase the number of errors in our software. Is there a way to copy the data from disk back into memory so Couchbase can work normally again?
CentOS 5.6
Couchbase v1.8
As documented here http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-introduction-architecture-diskstorage.html, Couchbase tries to keep the dataset in memory, so when you access a document it will be loaded into memory.
When adding physical memory, you also need to increase the RAM quota of your cluster/nodes.
Do you have information about the cache misses?
Do you want to put all the documents in memory? (Do you have enough RAM?)
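There is no built-in "copy disk to memory" command that I know of, but since documents are cached on access, one workaround is to walk your hot keys once. Below is a minimal warm-up sketch, assuming the bucket is reachable over the memcached protocol (Couchbase 1.8 exposes the default bucket on port 11211), that the memcached npm package is acceptable, and that you have a list of document keys; the host and key list are made-up placeholders.

const Memcached = require('memcached');

// Placeholder host and keys; in practice the key list would come from your
// application (for example the IDs of the documents it reads most often).
const client = new Memcached('couchbase-node:11211');
const keysToWarm = ['doc::1', 'doc::2', 'doc::3'];

// Reading a document forces Couchbase to pull it from disk into memory,
// so touching the hot keys once pre-populates the cache after a restart
// or a RAM quota increase.
let remaining = keysToWarm.length;
keysToWarm.forEach((key) => {
  client.get(key, (err) => {
    if (err) console.error(`failed to warm ${key}:`, err);
    if (--remaining === 0) client.end();
  });
});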