I'm trying to create a compute instance on Google Cloud with a 20TB disk attached, but I'm seeing something strange. When I specify the disk size in the gcloud command, I don't see that same size reflected when I check the instance. I've also tried creating new disks and attaching them, and resizing the attached disks, but the size does not go above 2TB. Is 2TB the maximum disk size for compute instances?
$ gcloud compute instances create instance --boot-disk-size 10TB --scopes storage-rw
Created [https://www.googleapis.com/compute/v1/projects/project/zones/us-central1-a/instances/instance].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instance us-central1-a n1-standard-1 10.240.0.2 104.154.45.175 RUNNING
$ gcloud compute ssh gm-vcf
Warning: Permanently added 'compute.8994721896014059218' (ECDSA) to the list of known hosts.
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
user@instance:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 2.0T 880M 1.9T 1% /
udev 10M 0 10M 0% /dev
tmpfs 743M 8.3M 735M 2% /run
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
You can attach up to 64TB of standard persistent disk per VM for most machine types; you can refer to this blog post for details. You also need to resize the file system so that the operating system can access the additional space on your disk. You can refer to this link for the steps to resize the disk.
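For example, the resize steps would look roughly like this (a minimal sketch, assuming a Debian image where the boot disk is /dev/sda with the root filesystem on its first partition; check the actual layout with sudo lsblk first):
$ sudo apt-get install -y cloud-guest-utils   # provides growpart on Debian/Ubuntu
$ sudo growpart /dev/sda 1                    # grow partition 1 to fill the disk
$ sudo resize2fs /dev/sda1                    # grow the ext4 filesystem to fill the partition
$ df -h /                                     # verify the new size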
I'd say it's not the disk size but the partition size (you can confirm by running fdisk -l and checking the disk size). Depending on how the disk is partitioned, the maximum partition size will be 2 TB (that's the limit of an MBR partition table). It may be better to use a smaller disk for the system and attach another, bigger one to store the data; that new one you'd be able to partition (or not) as you like.
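A rough sketch of that approach (disk and instance names, sizes, zone, and the /dev/sdb device name are all placeholders; since a persistent disk doesn't need a partition table at all, formatting the whole device sidesteps the 2 TB MBR limit):
$ gcloud compute disks create data-disk --size 10TB --zone us-central1-a
$ gcloud compute instances create instance --boot-disk-size 200GB --disk name=data-disk,mode=rw --zone us-central1-a
Then, inside the VM, format and mount the raw device:
$ sudo mkfs.ext4 -m 0 /dev/sdb
$ sudo mkdir -p /mnt/disks/data
$ sudo mount /dev/sdb /mnt/disks/data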
Related
I have an additional 500GB disk mounted into the instance at /mnt/disks/ssd-1/www with ext4 as the file system, not partitioned, and 400GB already used.
Usually, for the boot disk, I use sudo growpart /dev/sda 1 and sudo resize2fs /dev/sda1 to increase disk space.
But I have never done this for an additional disk. Is it possible to increase the additional disk's size?
https://console.cloud.google.com/compute/disks
Select your disk, then click Edit.
You may need to have the disk not currently mounted to perform some actions.
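For an unpartitioned ext4 data disk like the one described, the flow would look roughly like this (the disk name, zone, new size, and the /dev/sdb device are assumptions; confirm the device with sudo lsblk). Because there is no partition table, there is no growpart step, and ext4 can usually be grown while mounted:
$ gcloud compute disks resize ssd-1 --size 600GB --zone us-central1-a
$ sudo resize2fs /dev/sdb
$ df -h /mnt/disks/ssd-1/www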
I have an application running on a Container-Optimized OS based Compute Engine instance.
My application runs every 20min, fetches and writes data to a local file, then deletes the file after some processing. Note that each file is less than 100KB.
My boot disk size is the default 10GB.
I run into a "no space left on device" error every month or so while attempting to write the file locally.
How can I track disk usage?
I manually checked the size of the folders and it seems that the bulk of the space is taken by /mnt/stateful_partition/var/lib/docker/overlay2.
my-vm / # sudo du -sh /mnt/stateful_partition/var/lib/docker/*
20K /mnt/stateful_partition/var/lib/docker/builder
72K /mnt/stateful_partition/var/lib/docker/buildkit
208K /mnt/stateful_partition/var/lib/docker/containers
4.4M /mnt/stateful_partition/var/lib/docker/image
52K /mnt/stateful_partition/var/lib/docker/network
1.6G /mnt/stateful_partition/var/lib/docker/overlay2
20K /mnt/stateful_partition/var/lib/docker/plugins
4.0K /mnt/stateful_partition/var/lib/docker/runtimes
4.0K /mnt/stateful_partition/var/lib/docker/swarm
4.0K /mnt/stateful_partition/var/lib/docker/tmp
4.0K /mnt/stateful_partition/var/lib/docker/trust
28K /mnt/stateful_partition/var/lib/docker/volumes
TL;DR: Use Stackdriver Monitoring and create an alert for DISK usage.
Since you are using COS images, you can enable the Stackdriver Monitoring agent by simply adding the "google-monitoring-enabled" metadata key set to "true" on the GCE instance. To do so, run the command:
gcloud compute instances add-metadata instance-name --metadata=google-monitoring-enabled=true
Replace instance-name with the name of your instance. Remember to restart your instance for the change to take effect. You don't need to install the Stackdriver Monitoring agent, since it is already installed by default in COS images.
Then, you can use the disk usage metric to get the usage of your disk.
You can create an alert to get a notification each time the usage of the partition reaches a certain threshold.
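As a rough sketch of such an alert (this assumes your gcloud has the alpha monitoring commands and that the agent exports disk usage as the agent.googleapis.com/disk/percent_used metric; the threshold, duration, and file name are just examples):
$ cat > disk-alert.json <<'EOF'
{
  "displayName": "Disk usage above 80%",
  "combiner": "OR",
  "conditions": [{
    "displayName": "Disk more than 80% full for 5 minutes",
    "conditionThreshold": {
      "filter": "metric.type=\"agent.googleapis.com/disk/percent_used\" AND resource.type=\"gce_instance\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 80,
      "duration": "300s"
    }
  }]
}
EOF
$ gcloud alpha monitoring policies create --policy-from-file=disk-alert.json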
Since you are in the cloud, it is always a good idea to use cloud resources to solve cloud issues.
Docker uses /var/lib/docker to store your images, containers, and local named volumes. Deleting this can result in data loss and possibly stop the engine from running. The overlay2 subdirectory specifically contains the various filesystem layers for images and containers.
To clean up unused containers and images, run:
docker system prune
Monitor it with the watch command:
sudo watch "du -sh /mnt/stateful_partition/var/lib/docker/*"
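A couple of related commands may also help (the retention window below is an arbitrary example; on COS you could wrap the prune in a systemd timer if you want it to run automatically):
$ docker system df                                # Docker's own breakdown of space used by images, containers, volumes and build cache
$ docker system prune -af --filter "until=168h"   # also remove unused images, but only objects older than 7 days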
I just read in the Google Compute Engine documentation that:
<<Note: Compute Engine is working with respective operating system communities and vendors to eventually convert all operating systems to automatically resize root partitions. As such, we recommend you occasionally check back to make sure this step is still needed for your operating system and over time, this step will be removed completely for all operating systems.>>
Does that mean (as I'm using Ubuntu, which supports automatic resizing) that my instance will resize its hard drive automatically after a certain level of disk usage? I'm currently using 31% of my capacity; does the resizing occur at some x% of disk space used?
No. It means that if you create a VM instance with a root persistent disk that has more disk space than the original image, using an operating system that supports automatic resizing of the root partition, then your virtual machine automatically resizes the partition to recognize the additional space, and you won't need to manually repartition the disk.
In order to create a VM instance with a root persistent disk that has more disk space than the original image, you will need to use the following commands:
Create your root persistent disk:
$ gcloud compute disks create DISK_NAME --image IMAGE --size 100GB --zone ZONE
Then start a virtual machine instance using your root persistent disk:
$ gcloud compute instances create INSTANCE_NAME --disk name=DISK_NAME boot=yes --zone ZONE
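As a side note (a shortcut, not required by the steps above), you can also do both steps at once by passing --boot-disk-size at instance creation, as in the first question's command:
$ gcloud compute instances create INSTANCE_NAME --image IMAGE --boot-disk-size 100GB --zone ZONE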
I have one query that is causing a problem, and according to Google it is because of insufficient temp space.
The same query was working just fine a few days ago. Then my website got hacked, and after restoring from a backup I am getting this type of error, even though the database is the same old one.
"Incorrect key file for table /tmp/#sql_xxx_x.MYI" error
1030 Got error 28 from storage engine
In both cases I searched and found that it is because of temp space, but how did temp suddenly become a problem when the same query was working just fine a few days ago? I also checked the query using MySQL EXPLAIN, and its output was good: it says only 144 rows are examined to produce the 20 output rows.
Then I used this command to see how much space I really have in temp,
and it says:
ddfdd#drddrr[~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
3.6T 49G 3.4T 2% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sda1 243M 86M 145M 38% /boot
/usr/tmpDSK 4.0G 3.8G 0 100% /tmp
So where is the problem, and how can I resolve it?
Any advice will be highly appreciated.
Double check that /tmp does not have other files using all of the space and preventing MySQL from creating an on disk tmp table.
Alternatively, you can create a new tmp directory off your root slice, since it has plenty of space, and then change the tmpdir variable in your my.cnf to point to it. Note that this is not a dynamic variable, so it requires a restart. Make sure to chown the directory so MySQL can write to it.
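A minimal sketch of that change (the new directory path is a placeholder, and the MySQL service name varies by distribution):
$ sudo du -sh /tmp/* | sort -h        # first, see what is actually filling /tmp
$ sudo mkdir -p /home/mysqltmp
$ sudo chown mysql:mysql /home/mysqltmp
Then, in /etc/my.cnf under the [mysqld] section, set tmpdir = /home/mysqltmp and restart MySQL:
$ sudo service mysql restart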
I have a question about getting a huge table from MySQL running in AWS onto my local machine.
I just created a table which has a size of 2.3GB, but I have only 2GB of free disk space.
This leads to a situation where I cannot even dump my table into a dump file, because that would cause error 28. So I have two choices:
1. Clean up the disk to free another 300+ MB.
I have already tried to delete everything I could.
I have only a 2.5GB database, but mysqldb1 takes up 4GB, which I don't understand.
ubuntu#ip-10-60-125-122:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 5.6G 2.0G 74% /
udev 819M 12K 819M 1% /dev
tmpfs 331M 184K 331M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 827M 0 827M 0% /run/shm
/dev/xvdb 147G 188M 140G 1% /mnt
2. Split my table into two or more tables that I can dump separately and then put back together later.
I am new to MySQL and hope a safe and easy solution can be suggested.
Best regards, and let me know if I can do anything to improve my question.
If you're sure that you actually don't have that much data stored in the database, you might want to take a look at this other question here on SO:
MySQL InnoDB not releasing disk space after deleting data rows from table
By default, MySQL doesn't shrink its data files when you delete data. If your MySQL is configured for per-table files (innodb_file_per_table), you should be able to reclaim the space by optimizing the tables. Otherwise, you'll have to get all the data to another machine and recreate the database with per-table files configured.
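For illustration (a sketch; the database and table names are placeholders), you could check the per-table-files setting and then rebuild the big table to reclaim the free space inside its tablespace:
$ mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
$ mysql -e "OPTIMIZE TABLE mydb.big_table;"   # on InnoDB this rebuilds the table, which can shrink its .ibd file
Keep in mind that the rebuild itself temporarily needs roughly the table's size in free space, so with only 2GB free this may only be workable after some cleanup.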