Share a SSD or RAMdisk between Google Compute Engine VMs - google-compute-engine

From Google's documentation it is clear that a read-only persistent disk (PD) can be shared between multiple instances (Google Compute Engine VMs), but is it somehow possible to share an SSD or a RAM disk between multiple VMs?

Local SSDs are physically attached to a single host and, like RAM, cannot be attached read-only, so this question probably answers itself.

Related

Is there SSD storage available for Oracle Cloud VM.Standard.E2.1 instance?

I want to know whether I can somehow choose an SSD storage type for my VM.Standard.E2.1 instance on Oracle Cloud.
They provide what they call Block Storage, though I am not sure whether it is actually SSD-backed.
Their website here says so:
Are Oracle Cloud Infrastructure Block Volumes using NVMe SSDs in the storage infrastructure?
Yes. Industry-leading highest performance NVMe solid state drives are used. This high performance, backed by a performance SLA, is enabled without using storage caching.

Primary Disk vs Swap Disk [closed]

I am using Google Compute Engine with 90GB of SSD. As my site has grown, the cost has also shot up. I tried shifting to https://www.vpb.com, but they gave me
a 30GB primary disk and a 60GB swap disk (both SSDs, according to them).
The proposed cost also decreased by 50%. My RAM is just 8GB.
Is above configuration different from 90GB SSD disk in Google Compute Engine?
Is above configuration different from 90GB SSD disk in Google Compute Engine?
Yes. Google Compute Engine is a full-featured IaaS platform where you can create VMs with the disks (and sizes) you need. The Persistent Disk is designed to be reliable, allows for easy snapshots, and you can also resize them while the VM is running.
This other provider might be giving you two different disks for their VM or dedicated machine, and you will have to design your site to use them both. Swap disks are really only meant for temporary work, and it's strange to see them being offered separately like that. They also might be attached directly to the machine rather than backed by reliable storage like GCP's persistent disks.
If 90GB isn't enough on your GCP VM, how will 30+60 be enough in this other machine? Are you uploading large media files? You might be better served by using Cloud Storage or S3 for those files.
As mentioned above, there are two important things to understand:
Swap + disk is not the same as one big disk. Swap is essentially cheap (but slow) overflow space for RAM, used when you're running low on memory. If you have 60+ GB of static data on your VM, a 30GB disk is less than half of your minimum.
Avoid using the disk for storing static data; images, for example, can be served from object storage, which is far cheaper.
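For context, here is a hedged sketch of what a provider-supplied "swap disk" typically becomes on a Linux VM. The device name /dev/sdb is an assumption; check `lsblk` for the real one, and note these commands require root:

```shell
# Assumed device name for the 60GB "swap disk": /dev/sdb
mkswap /dev/sdb        # write a swap signature to the device
swapon /dev/sdb        # tell the kernel to use it as swap space
swapon --show          # verify it is active
```

The key point is that swap space is only consulted when RAM runs out; it does not behave like, and cannot substitute for, ordinary filesystem capacity.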
Disclosure: I am a product manager on Google Cloud Platform (but not Google Compute Engine or Persistent Disk specifically).
30GB Primary Disk and 60GB Swap Disk (Both are SSDs as they said).
The proposed cost has also decreased to 50%. My RAM is just 8GB.
Is above configuration different from 90GB SSD disk in Google Compute Engine?
Note that a "disk in a machine" is very different from a Google Compute Engine persistent disk:
A "disk in a machine" is exactly that: a single physical device. If it fails, you are expected to have made a backup of it prior to failure. How you make the backup is up to you.
A Google Compute Engine persistent disk is a replicated disk, so a single disk failure will not cause you to lose data. You can (and should) make backups (snapshots) of your persistent disk, and you can use Google Cloud Storage for this purpose. Snapshots, however, are typically for protecting against application bugs, not against persistent disk durability issues.
As another answer says, GCE persistent disk also has a live resize capability so that you can easily increase the size of it if needed.
Google Cloud Platform has many more services besides just VMs: databases, key-value storage, object/blob storage, etc. so there's more to consider when making your decision.

Confusion on disk types in gcloud when creating a new disk

So I'm looking at the disk types available on Google Cloud Platform, and from reading their documentation they have persistent disks and local SSD disks. After reading the docs, when I go to create a disk, the disk types are labeled 'SSD Persistent disk' and 'Standard Persistent disk'. They seem to be named differently here than in the docs. I just want confirmation that:
SSD Persistent disk = Local SSD disk
Standard Persistent disk = Persistent disks
I'm new to running VMs on a cloud platform for hosting your own site, and I'm trying to wrap my head around the different options available for all the different settings and why you would choose one over another.
I'm trying to learn about the different disks available and what they do, but when I go to actually create a disk, the available types aren't the same as the ones listed in the docs. I figured they could be the same but named differently for whatever reason, so I wanted to come here and confirm whether that's the case.
Your choices are:
local SSD = locally-attached to the VM, SSD
standard persistent = network-attached, persistent, HDD**
SSD persistent = network-attached, persistent, SSD
Type 1 is lower latency than types 2 and 3, because type 1 is physically attached to the VM.
Types 2 and 3 persist beyond instance stop/delete. Type 1 does not.
Types 2 and 3 are durable/redundant (Google replicates them, similar to RAID 1). Type 1 is not.
Types 2 and 3 can be attached to multiple VMs simultaneously (in read-only mode). Type 1 cannot.
** nowhere does Google actually indicate afaik that standard persistent is actually HDD, just that it is not SSD, so it may not be guaranteed to be HDD.
You can see more specific data at Storage Options, but in summary:
local SSD is the fastest (by far)
SSD persistent has much higher read/write IOPS than standard persistent
SSD persistent is more expensive (4x) than standard persistent
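To make the naming concrete, here is a sketch of how each type is requested with gcloud; the disk, VM, and zone names are placeholders. `pd-standard` and `pd-ssd` are the API names for the two persistent disk types, while local SSDs cannot be created as standalone disks and are instead requested when the instance is created:

```shell
# Standard persistent disk (the one that is not SSD)
gcloud compute disks create my-standard-disk \
    --type=pd-standard --size=200GB --zone=us-central1-a

# SSD persistent disk
gcloud compute disks create my-ssd-disk \
    --type=pd-ssd --size=200GB --zone=us-central1-a

# Local SSD: attached at VM creation time, not created separately
gcloud compute instances create my-vm \
    --local-ssd=interface=nvme --zone=us-central1-a
```

Note how the console labels ('Standard Persistent disk', 'SSD Persistent disk') map onto the `--type` values, while 'Local SSD' is a property of the instance rather than a disk resource.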
Basically you have 2 different disk types to choose from when setting up a GCE instance:
Persistent disks, which can be either standard persistent disks (possibly HDD-backed) or SSDs.
Local disks, which are not persistent and are always SSDs.
As you can read in the docs, local SSDs only allow up to 3 TB of storage per instance, while persistent disks allow up to 64 TB.
SSDs are also more expensive per GB used. The upside of SSDs, however, is that they allow higher throughput and lower latency.
There are several other things to note; everything there is to know about GCE disks can be found in the aforementioned docs.
Answer to your original question:
Persistent disk = Standard Persistent Disk or SSD persistent disk and non-persistent disk = Local SSD

Google compute engine Persistent Disk maximum size

I'm working on a digital asset management deployment, and it's likely we'll need more than the 10TB storage maximum for a persistent disk. The DAM software does not support multiple storage points and likely won't in the future.
How are people coping with this limitation? Is the 10TB max likely to increase as Google compute engine matures?
As of 1 Feb 2016, you can create a persistent disk of up to 64 TB; see the blog post for more details.
According to the official doc:
Instances with shared-core machine types are limited to a maximum of 16 persistent disks.
For custom machine types or predefined machine types that have a minimum of 1 vCPU, you can attach up to 128 persistent disks.
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.
Most instances can have up to 64 TB of total persistent disk space attached. Shared-core machine types are limited to 3 TB of total persistent disk space. Total persistent disk space for an instance includes the size of the boot persistent disk.
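Given those limits, a single logical volume at the maximum size needs no array management at all; a sketch (disk name and zone are placeholders):

```shell
# Create one 64 TB standard persistent disk -- the documented per-disk maximum
gcloud compute disks create dam-disk \
    --size=64TB --type=pd-standard --zone=us-central1-a
```

Note that on a shared-core machine type this would fail, since those instances are capped at 3 TB of total persistent disk space.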

Share a persistent disk between Google Compute Engine VMs

From Google's documentation:
It is possible to attach a persistent disk to more than one instance. However, if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode.
If you attach a persistent disk in read-write mode and then try to attach the disk to subsequent instances, Google Compute Engine returns an error.
So I need a shared persistent disk as a frontend for all my Compute Engine instances; good, but how can you write to this shared disk?
My guess (I hope) is that a read/write persistent disk can be attached to only one Compute Engine instance, but that the same disk can be shared read-only with other VMs; is that right?
Let's say I have 2 Compute Engine VMs and 2 persistent disks.
Is this flow possible?
compute1 read/write disk1 and read only disk2
compute2 read/write disk2 and read only disk1
Update: this is available as of 2020-06-16
As per another answer by Matthew Lenz, the functionality for creating multi-writer persistent disks is available, but it's still in alpha status (even though it's documented as being in the beta track) and requires special per-project enablement.
Note: This GitHub issue notes that the functionality is still in alpha, even though it's labelled as beta. You can submit feedback via Cloud Console to request it for your project if you'd like to get early access to this functionality, but it's not guaranteed to be enabled.
Assuming your project has the permissions to use this feature (or the feature becomes public-access), note that it comes with some caveats:
--multi-writer
Create the disk in multi-writer mode so that it can be attached with read-write access to multiple VMs. Can only be used with zonal SSD persistent disks. Disks in multi-writer mode do not support resize and snapshot operations.
You can use this via:
$ gcloud beta compute disks create DISK_NAME --multi-writer [...]
Note the caveats:
zonal SSD persistent disks only
no disk resizing
no snapshots
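Assuming the feature is enabled for your project, the full workflow might look like this (disk, VM, and zone names are placeholders):

```shell
# Create a zonal SSD persistent disk in multi-writer mode (beta command)
gcloud beta compute disks create shared-disk \
    --multi-writer --type=pd-ssd --size=100GB --zone=us-central1-a

# Attach it read-write to two VMs in the same zone
gcloud compute instances attach-disk vm-1 \
    --disk=shared-disk --mode=rw --zone=us-central1-a
gcloud compute instances attach-disk vm-2 \
    --disk=shared-disk --mode=rw --zone=us-central1-a
```

Be aware that multi-writer mode only makes concurrent block-level access possible; you still need a cluster-aware filesystem (or application-level coordination) on top, since mounting an ordinary filesystem like ext4 read-write from two VMs at once will corrupt it.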
If these trade-offs are not acceptable to you, see the original answer (below) which has a long list of recommended storage alternatives for sharing data between multiple GCE VMs.
Original answer (valid prior to 2020-06-16)
No, this is not possible, as the documentation you cited said at the time of writing (it has since been updated):
However, if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode.
The documentation has been re-arranged since then; the new docs are at a different URL but with the same content:
You can attach a non-root persistent disk to more than one virtual machine instance in read-only mode, which allows you to share static data between multiple instances. Sharing static data between multiple instances from one persistent disk is cheaper than replicating your data to unique disks for individual instances.
If you attach a persistent disk to multiple instances, all of those instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode. If you need to share dynamic storage space between multiple instances, connect your instances to Cloud Storage or create a network file server.
If you have a persistent disk with data that you want to share between multiple instances, detach it from any read-write instances and attach it to one or more instances in read-only mode.
which means you cannot have one instance have write access while another has read-only access.
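In practice, the read-only sharing that the docs describe looks like this (disk, VM, and zone names are placeholders): detach the disk from any read-write instance, then attach it to each reader with `--mode=ro`:

```shell
# Detach the disk from the instance that had it attached read-write
gcloud compute instances detach-disk writer-vm \
    --disk=shared-data --zone=us-central1-a

# Attach it read-only to multiple instances
gcloud compute instances attach-disk reader-vm-1 \
    --disk=shared-data --mode=ro --zone=us-central1-a
gcloud compute instances attach-disk reader-vm-2 \
    --disk=shared-data --mode=ro --zone=us-central1-a
```

Once attached in read-only mode, the disk cannot be attached read-write to any instance until it is detached from all readers.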
If you want to share data between them, you need to use something other than Persistent Disk. Below are some possible solutions.
You can use any of the following hosted/managed services:
Google Cloud Filestore — perhaps closest to what you're looking for, as it provides an NFSv3 file system
You can also use Elastifile on GCP as a fully-managed service; note that GCP acquired Elastifile in July 2019
Google Cloud Datastore
Google Cloud Storage, which you can use via the GCS API (JSON or XML), or which you can mount as a file system using gcsfuse
Google Cloud Bigtable
Google Cloud SQL
Alternatively, you can run your own:
self-managed or third-party managed file server solutions, including NetApp and Panzura
self-managed Elastifile storage deployment (for fully-managed, see previous section for the link)
database (whether SQL or NoSQL)
distributed filesystem such as Ceph, GlusterFS, OrangeFS, ZFS, etc.
file server such as NFS or SAMBA
single VM as a data storage node, and use sshfs to create a FUSE mount from other VMs that want to access that data
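As one concrete example from the list above, the gcsfuse option can be sketched as follows; the bucket name and mount point are placeholders, and gcsfuse must be installed and authenticated on each VM:

```shell
# Mount a Cloud Storage bucket as a filesystem on each VM that needs the data
gcsfuse my-shared-bucket /mnt/shared

# ... read and write files under /mnt/shared from any of the VMs ...

# Unmount when done
fusermount -u /mnt/shared
```

This gives every VM read-write access to the same data, at the cost of object-storage semantics and latency rather than true block-device performance.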
GCP has alpha functionality for 'multi-writer' persistent disks. It has been in alpha for quite a long time, so who knows whether it will make it to beta or GA any time soon. Here is a link to the documentation: https://cloud.google.com/sdk/gcloud/reference/beta/compute/disks/create#--multi-writer
EDIT: 2020-06-16. This has been promoted to beta.