Google Compute Engine Persistent Disk maximum size - google-compute-engine

I'm working on a digital asset management deployment, and it's likely we'll need more than the 10 TB storage maximum for a persistent disk. The DAM software does not support multiple storage points and likely won't in the future.
How are people coping with this limitation? Is the 10 TB max likely to increase as Google Compute Engine matures?

As of 1 Feb 2016, you can create a persistent disk of up to 64 TB; see the blog post for more details.

According to the official doc:
Instances with shared-core machine types are limited to a maximum of 16 persistent disks.
For custom machine types or predefined machine types that have a minimum of 1 vCPU, you can attach up to 128 persistent disks.
Each persistent disk can be up to 64 TB in size, so there is no need to manage arrays of disks to create large logical volumes. Each instance can attach only a limited amount of total persistent disk space and a limited number of individual persistent disks. Predefined machine types and custom machine types have the same persistent disk limits.
Most instances can have up to 64 TB of total persistent disk space attached. Shared-core machine types are limited to 3 TB of total persistent disk space. Total persistent disk space for an instance includes the size of the boot persistent disk.
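Given those limits, a single large disk can be created and attached with gcloud. This is a minimal sketch; the disk name, instance name, and zone are placeholders, not values from the question:

```shell
# Create a 64 TB standard persistent disk (names and zone are examples).
gcloud compute disks create dam-data \
    --size=64TB \
    --type=pd-standard \
    --zone=us-central1-a

# Attach it to an existing instance.
gcloud compute instances attach-disk dam-vm \
    --disk=dam-data \
    --zone=us-central1-a
```

After attaching, the disk still has to be formatted and mounted from inside the VM.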

Related

Cloud SQL resize would make me lose data?

We currently have a database hosted in Google Cloud SQL. We are paying almost $100 but use less than 5% of our storage. Our config is:
Version: MySQL 8.0
Machine type: High Memory, 4 vCPUs, 26 GB
Storage type: SSD
Storage capacity: 100 GB
I was thinking of switching the machine type from High Memory to Lightweight.
Would this delete my current database data?
You can scale the memory and the CPU up and down without data loss. Your database will be unavailable during the change (the instance is stopped and restarted with the new machine type configuration). Don't be afraid, you can do this.
Storage is the opposite: you can scale capacity up, but not down. If you want to shrink it, you need to export the data out of your database, delete the current Cloud SQL instance, create a new one with a smaller disk, and then reimport the data.
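The export/reimport path above can be sketched with gcloud. The instance names, bucket, and database name here are placeholders, and the instance's service account needs write/read access to the bucket:

```shell
# Export the database to a Cloud Storage bucket.
gcloud sql export sql my-old-instance gs://my-bucket/dump.sql.gz \
    --database=mydb

# Create a replacement instance with a smaller disk.
gcloud sql instances create my-new-instance \
    --database-version=MYSQL_8_0 \
    --tier=db-g1-small \
    --storage-size=20GB

# Reimport the dump into the new instance.
gcloud sql import sql my-new-instance gs://my-bucket/dump.sql.gz \
    --database=mydb
```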

Confusion on disk types in gcloud when creating a new disk

So I'm looking at the disk types available on the Google Cloud Platform, and they have persistent disks and local SSD disks (from reading their documentation). But when I go to create a disk, the disk types are labeled 'SSD Persistent disk' and 'Standard Persistent disk'. They seem to be named differently here than in the docs. I just want confirmation that:
SSD Persistent disk = Local SSD disk
Standard Persistent disk = Persistent disks
I'm new to running VMs on a cloud platform for hosting your own site, and I'm trying to wrap my head around the different options available for all the different settings and why you would choose one over the other.
I'm trying to learn the different disks available and what they do, but when I go to actually create a disk, the types available aren't the same as the ones listed in the docs. I figured they could be the same but named differently for whatever reason, so I wanted to come here and confirm whether that's the case.
Your choices are:
local SSD = locally-attached to the VM, SSD
standard persistent = network-attached, persistent, HDD**
SSD persistent = network-attached, persistent, SSD
Type 1 has lower latency than types 2 and 3, because it is physically attached to the VM.
Types 2 and 3 persist beyond instance stop/delete. Type 1 does not.
Types 2 and 3 are durable/redundant (Google replicates them, like RAID 1). Type 1 is not.
Types 2 and 3 can be attached to multiple VMs simultaneously (in read-only mode). Type 1 cannot.
** Nowhere, as far as I know, does Google actually state that standard persistent is HDD, just that it is not SSD, so it may not be guaranteed to be HDD.
You can see more specific data at Storage Options, but in summary:
local SSD is the fastest (by far)
SSD persistent has much higher read/write IOPS than standard persistent
SSD persistent is more expensive (4x) than standard persistent
Basically you have two different disk types to choose from when setting up a GCE instance:
Persistent disks, which can be either standard persistent disks (which may be HDD) or SSDs.
Local disks, which are not persistent and are always SSDs.
As you can read in the docs, local SSDs only allow up to 3 TB to be stored on the disks, while persistent disks allow up to 64 TB.
SSDs are also more expensive per GB used. The upside of SSDs, however, is that they allow higher throughput and lower latency.
There are several other things to note; everything there is to know about GCE disks can be found in the aforementioned docs.
Answer to your original question:
Persistent disk = Standard persistent disk or SSD persistent disk, and non-persistent disk = Local SSD
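For reference, the two persistent flavors map to --type values in gcloud, while Local SSD can only be requested when the instance itself is created. A hedged sketch; the disk, instance, and zone names are placeholders:

```shell
# Standalone persistent disks: standard (HDD-class) vs SSD.
gcloud compute disks create my-std-disk --type=pd-standard --size=500GB --zone=us-central1-a
gcloud compute disks create my-ssd-disk --type=pd-ssd --size=500GB --zone=us-central1-a

# Local SSD is attached at instance creation and cannot be added later.
gcloud compute instances create my-vm \
    --local-ssd=interface=NVME \
    --zone=us-central1-a
```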

Running out of disk space, MySQL (VM azure)

I created replication on a Microsoft Azure virtual machine.
I'm using MySQL and working with SQL Workbench (Windows).
Yesterday I discovered that my 250 GB of storage is full and replication has stopped.
The log says:
Timestamp, Thread, Type, Details
2015-07-29 23:26:44, 1672, Warning, Disk is full writing '.\database123-relay-bin.000164' (Errcode: 28 - No space left on device). Waiting for someone to free space...
I then created another 250 GB external storage volume.
I have two questions:
How can I create queries and use data across two different storage volumes?
Is creating another storage volume the right thing to do, or is there a way to create flexible storage?
What I found is this: http://www.psce.com/blog/2012/05/07/running-out-of-disk-space-on-mysql-partition-a-quick-rescue/
but it did not help; I need guidance.
Another option I found:
how to extend the C drive size of an Azure VM
Use data disks for storing application data from your Azure VMs. The largest data disk size available is 1TB. If you require more space, you can stripe more than one data disk together. Avoid using the OS disk for storing application data because you will run into this issue of limited space. Also avoid using the temporary disk for storing application data as it is not persistent. You cannot extend the OS disk size, but if you use data disks, you can start with a smaller data disk and increase its size as your application grows.
Learn more about Azure disks here: https://msdn.microsoft.com/en-us/library/azure/Dn790303.aspx
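The suggestion above (add a data disk rather than growing the OS disk) can be sketched with the Azure CLI. The resource group, VM, and disk names are placeholders:

```shell
# Create a new 250 GB data disk and attach it to the VM in one step.
az vm disk attach \
    --resource-group my-rg \
    --vm-name my-mysql-vm \
    --name mysql-data-disk \
    --new \
    --size-gb 250

# Inside the VM, the disk is then initialized, formatted, and mounted,
# and MySQL's data directory (and relay logs) pointed at the new volume.
```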

Can the "Max number of persistent disks" limit be raised?

In the Persistent Disk Size Limits documentation here it says:
Standard, high memory, and high CPU machine types can attach up to 16 persistent disks.
Is this a limit that can be raised?
We would like to run many docker containers per machine and give each of them a persistent disk (so they aren't tied specifically to that machine). The current limit would only let us run 16 containers per VM, which is a much lower number than we'd like.
On AWS HVM instances, we can attach up to 73 EBS volumes.
Thanks.
No, it presently cannot be raised. I have just been advised that the number of PDs that can be attached to an instance is fixed.
According to Amazon's documentation, it is inadvisable to attach more than 40 disks per instance.

MySQL Cluster Node Specific Hardware

I am looking at setting up a MySQL Cluster with two high-end Dell servers (dual Opteron 4386 CPUs, 16 GB RAM, and RAID 10 SAS). I also have a handful of other high-end machines that are i7s with 20 GB+ RAM each on average.
What is the hardware focus for each node in MySQL Cluster?
I know that the management node requires very little as it is simply an interface to control the cluster and can be put on the same machine as mysqld.
What hardware is the most important for the MySQL node and for the data nodes (hard disk I/O, RAM, CPU, etc.)?
You're correct that the management node needs very little resource - just make sure you run it/them on different machines to the data nodes.
Data nodes like lots of memory, as by default all of the data is held in RAM; they can also make good use of fast cores, and in more recent releases lots of them (perhaps up to about 48). Multiple spindles for redo/undo logs, checkpoints, disk table files, etc. can speed things up.
MySQL servers don't need a lot of RAM, as they don't store the cluster data, and in most cases you wouldn't use the query cache. If replicating to a second cluster, you should make sure that they have enough disk space for the binary log files.
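As a hedged sketch, the division of roles above might look like this in the cluster's config.ini; the hostnames and memory sizes are illustrative, not a tuned configuration:

```ini
[ndbd default]
NoOfReplicas=2
DataMemory=12G        ; bulk of the data node's RAM goes here
IndexMemory=2G

[ndb_mgmd]
HostName=mgmt-host    ; can share a machine with a mysqld node

[ndbd]
HostName=data-host-1  ; the high-RAM Dell servers as data nodes
[ndbd]
HostName=data-host-2

[mysqld]
HostName=sql-host-1   ; the i7 machines as SQL nodes
```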