How to increase additional disk size with ext4 file system on Google Cloud - google-compute-engine

I have an additional 500GB disk mounted into an instance at /mnt/disks/ssd-1/www, formatted as ext4 with no partition table, with 400GB already used.
For the boot disk, I usually run sudo growpart /dev/sda 1 and sudo resize2fs /dev/sda1 to increase disk space.
But I have never done this for an additional disk. Is it possible to increase the size of an additional disk?

https://console.cloud.google.com/compute/disks
Select your disk, then click Edit and set a larger size.
Note that resizing the disk only grows the underlying block device; you still have to grow the filesystem inside the instance, and some operations may require the disk to be unmounted.
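Since the disk is unpartitioned ext4, there is no partition table to grow, so growpart is not needed. A minimal sketch of the whole flow (the disk name ssd-1, the 600GB target, the zone, and the device path /dev/sdb are all assumptions; check the device with lsblk first):

# Resize the persistent disk itself; this can be done while it is attached:
gcloud compute disks resize ssd-1 --size 600GB --zone us-central1-a

# Inside the instance, confirm which device the disk is:
lsblk

# Grow the ext4 filesystem to fill the device; resize2fs supports
# online growth, so the disk can stay mounted:
sudo resize2fs /dev/sdb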

Related

Force Docker to use memory instead of disk space

When I run my Docker containers in Google Cloud Run, any disk space they use comes from the available memory.
I'm running several self-hosted GitHub Actions runners on a single local server, and they have worn out my SSD over the past year. The thing is, all the data they save is pointless: none of it needs to be kept. It exists for a few minutes during a build and is then deleted.
Is it possible to instead run these Docker containers using memory for their disk space? That would improve performance and stop putting unnecessary wear on a hard drive.
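One way to approximate that behavior outside Cloud Run (a sketch, not specific to GitHub Actions runners; the /work mount point and my-build-image name are placeholders) is to back the container's scratch directory with a tmpfs, which lives in RAM and vanishes when the container exits:

# Give the container an in-memory scratch area; writes to /work
# consume RAM rather than SSD:
docker run --rm --tmpfs /work:rw,size=2g my-build-image

# Equivalently, bind-mount a host-side tmpfs into the container:
sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
docker run --rm -v /mnt/ramdisk:/work my-build-image

Note that image layers still land on disk under /var/lib/docker; moving those into memory as well would mean pointing Docker's data-root at a tmpfs, which only works if RAM can hold every image.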

How to track disk usage on Container-Optimized OS

I have an application running on Container-Optimized OS based Compute Engine.
My application runs every 20 minutes, fetches and writes data to a local file, then deletes the file after some processing. Note that each file is less than 100KB.
My boot disk size is the default 10GB.
I run into a "no space left on device" error every month or so while attempting to write the file locally.
How can I track disk usage?
I manually checked the size of the folders and it seems that the bulk of the space is taken by /mnt/stateful_partition/var/lib/docker/overlay2.
my-vm / # sudo du -sh /mnt/stateful_partition/var/lib/docker/*
20K /mnt/stateful_partition/var/lib/docker/builder
72K /mnt/stateful_partition/var/lib/docker/buildkit
208K /mnt/stateful_partition/var/lib/docker/containers
4.4M /mnt/stateful_partition/var/lib/docker/image
52K /mnt/stateful_partition/var/lib/docker/network
1.6G /mnt/stateful_partition/var/lib/docker/overlay2
20K /mnt/stateful_partition/var/lib/docker/plugins
4.0K /mnt/stateful_partition/var/lib/docker/runtimes
4.0K /mnt/stateful_partition/var/lib/docker/swarm
4.0K /mnt/stateful_partition/var/lib/docker/tmp
4.0K /mnt/stateful_partition/var/lib/docker/trust
28K /mnt/stateful_partition/var/lib/docker/volumes
TL;DR: Use Stackdriver Monitoring and create an alert for disk usage.
Since you are using COS images, you can enable the Stackdriver Monitoring agent simply by setting the google-monitoring-enabled metadata key to true on the GCE instance. To do so, run:
gcloud compute instances add-metadata instance-name --metadata=google-monitoring-enabled=true
Replace instance-name with the name of your instance. Remember to restart the instance for the change to take effect. You don't need to install the Stackdriver Monitoring agent, since it is already installed by default in COS images.
Then, you can use the disk usage metric to track the usage of your disk.
You can create an alert to get a notification each time the usage of the partition reaches a certain threshold.
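For example, an alerting policy can be created from the CLI. This is only a sketch: it assumes the gcloud alpha monitoring surface and the agent's agent.googleapis.com/disk/percent_used metric, and it omits notification channels, so treat it as a starting point rather than a working recipe.

# A hypothetical policy: fire when any partition on a GCE instance
# stays above 80% full for 5 minutes.
cat > policy.json <<'EOF'
{
  "displayName": "Disk usage above 80%",
  "combiner": "OR",
  "conditions": [{
    "displayName": "disk percent_used > 80",
    "conditionThreshold": {
      "filter": "metric.type=\"agent.googleapis.com/disk/percent_used\" AND resource.type=\"gce_instance\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 80,
      "duration": "300s"
    }
  }]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=policy.json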
Since you are in the cloud, it is usually best to use cloud resources to solve cloud issues.
Docker uses /var/lib/docker to store your images, containers, and local named volumes. Deleting this can result in data loss and possibly stop the engine from running. The overlay2 subdirectory specifically contains the various filesystem layers for images and containers.
To clean up unused containers and images, run:
docker system prune
Monitor it with the watch command:
sudo watch "du -sh /mnt/stateful_partition/var/lib/docker/*"
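If the overlay2 growth comes from exited containers and dangling images, pruning on a schedule can keep it bounded (a sketch; the 72-hour retention is an arbitrary choice, and on COS you would schedule it with a systemd timer rather than cron):

# Remove stopped containers, unused networks, and dangling images
# older than 72 hours; --force skips the confirmation prompt.
docker system prune --force --filter "until=72h"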

Error 28 from storage engine with MySQL

I am having an issue with storage. I received this error:
Got error 28 from storage engine
I have checked the storage capacity and it was still available, not full. What can be the reason for this? I have checked everything with no success.
It is possible that I am running out of space in the main MySQL data directory, or in the MySQL tmp directory. Can someone tell me how to find their locations so I can check them too?
TL;DR
Issue the following commands to inspect the location of your server's data and temporary directories respectively:
SHOW GLOBAL VARIABLES LIKE 'datadir';
SHOW GLOBAL VARIABLES LIKE 'tmpdir';
The values of these variables are typically absolute paths (relative to any chroot jail in which the server is running), but if they happen to be relative paths then they will be relative to the working directory of the process that started the server.
However...
As documented under The MySQL Data Directory (emphasis added):
The following list briefly describes the items typically found in the data directory ...
Some items in the preceding list can be relocated elsewhere by reconfiguring the server. In addition, the --datadir option enables the location of the data directory itself to be changed. For a given MySQL installation, check the server configuration to determine whether items have been moved.
You may therefore also wish to inspect the values of a number of other variables (a combined query follows this list), including:
pid_file
ssl_%
%_log_file
innodb_data_home_dir
innodb_log_group_home_dir
innodb_temp_data_file_path
innodb_undo_directory
innodb_buffer_pool_filename
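All of these can be pulled in one pass from the shell (a sketch; it assumes a local server with credentials in ~/.my.cnf, and that the patterns above are the ones you care about):

# List every file-location variable the server reports, in one query:
mysql -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('datadir','tmpdir','pid_file','innodb_data_home_dir',
   'innodb_log_group_home_dir','innodb_temp_data_file_path',
   'innodb_undo_directory','innodb_buffer_pool_filename')
  OR Variable_name LIKE 'ssl\_%'
  OR Variable_name LIKE '%\_log\_file'"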
If your server is not responsive...
You can also inspect the server's startup configuration.
As documented under Specifying Program Options, the server's startup configuration is determined "by examining environment variables, then by processing option files, and then by checking the command line" with later options taking precedence over earlier options.
The documentation also lists the locations of the Option Files Read on Unix and Unix-Like Systems, should you require it. Note that the sections of those files that the server reads is determined by the manner in which the server is started, as described in the second and third paragraphs of Server Command Options.
Once you have found the locations where MySQL stores files, run a command in the shell:
df -Ph <pathname>
Where <pathname> is each of the locations you want to test. Some may be on the same disk volume, so they'll show up as the same when reported by df.
[vagrant@localhost ~]$ mysql -e 'select @@datadir'
+-----------------+
| @@datadir       |
+-----------------+
| /var/lib/mysql/ |
+-----------------+
[vagrant@localhost ~]$ df -Ph /var/lib/mysql
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  3.7G   34G  10% /
This tells me that the disk volume for my datadir is the root volume, including the top-level directory "/" and everything beneath that. The volume is 10% full, with 34G unused.
If the volume where your datadir resides reaches 100%, then you'll start seeing errno 28 issues when you insert new data and MySQL needs to expand a tablespace or write to a log file.
In that case, you need to figure out what's taking so much space. It might be something under the MySQL directory, or like in my case, your datadir might be part of a larger disk volume, where all your other files exist. In that case, any accumulation of files on the system might cause the disk to fill up. For example, log files or temp files even if they're not related to MySQL.
I'd start at the top of the disk volume and use du to figure out which directories are so full.
[vagrant@localhost ~]$ sudo du -shx /*
33M /boot
34M /etc
40K /home
28M /opt
4.0K /tmp
2.0G /usr
941M /vagrant
666M /var
Note: if your df command told you that your datadir is on a separate disk volume, you'd start at that volume's mount point. The space used by one disk volume does not count toward another disk volume.
Now I see that /usr is taking the most space, of top-level directories. Drill down and see what's taking space under that:
[vagrant@localhost ~]$ sudo du -shx /usr/*
166M /usr/bin
126M /usr/include
345M /usr/lib
268M /usr/lib64
55M /usr/libexec
546M /usr/local
106M /usr/sbin
335M /usr/share
56M /usr/src
Keep drilling down, level by level.
Usually the culprit ends up being pretty clear, like a huge 500G log file somewhere in /var/log that has been growing for months.
A typical culprit is the HTTP server logs.
Re your comments:
It sounds like you have a separate storage volume for your database storage. That's good.
You just added the du output to your question above. I see that in your 1.4T disk volume, the largest single file by far is this:
1020G /vol/db1/mysql/_fool_Gerg_sql_200e_4979_main_16419_2_18.tokudb$
This appears to be a TokuDB tablespace. There's information on how TokuDB handles full disks here: https://www.percona.com/doc/percona-server/LATEST/tokudb/tokudb_faq.html#full-disks
I would not remove those files. I'm not as familiar with TokuDB as I am with InnoDB, but I assume those files are important datafiles. If you remove them, you will lose part of your data and you might corrupt the rest of your data.
I found this post, which explains in detail what the files are used for: https://www.percona.com/blog/2014/07/30/examining-the-tokudb-mysql-storage-engine-file-structure/
The manual also says:
Deleting large numbers of rows from an existing table and then closing the table may free some space, but it may not. Deleting rows may simply leave unused space (available for new inserts) inside TokuDB data files rather than shrink the files (internal fragmentation).
So you can DELETE rows from the table, but the physical file on disk may not shrink. Eventually, you could free enough space that you can build a new TokuDB data file with ALTER TABLE <tablename> ENGINE=TokuDB ROW_FORMAT=TOKUDB_SMALL; (see https://dba.stackexchange.com/questions/48190/tokudb-optimize-table-does-not-reclaim-disk-space)
But this will require enough free disk space to build the new table.
So I'm afraid you have painted yourself into a corner. You no longer have enough disk space to rebuild your large table. You should never let the free disk space get smaller than the space required to rebuild your largest table.
At this point, you probably have to use mysqldump to dump data from your largest table. Not necessarily the whole table, but just what you want to keep (read about the mysqldump --where option). Then DROP TABLE to remove that large table entirely. I assume that will free disk space, where using DELETE won't.
You don't have enough space on your db1 volume to save the dump file, so you'll have to save it to another volume. It looks like you have a larger volume on /vol/cbgb1, but I don't know if it's full.
I'd dump the whole thing, for archiving purposes. Then dump again with a subset.
mkdir /vol/cbgb1/backup
mysqldump fool Gerg | gzip -c > /vol/cbgb1/backup/Gerg-dump-full.sql.gz
mysqldump fool Gerg --where "id > 8675309" | gzip -c > /vol/cbgb1/backup/Gerg-dump-partial.sql.gz
I'm totally guessing at the arguments to --where. You'll have to decide how you want to select for a partial dump.
After the big table is dropped, reload the partial data you dumped:
gunzip -c /vol/cbgb1/backup/Gerg-dump-partial.sql.gz | mysql fool
If there are any commands I've given in my examples that you don't already know well, I suggest you learn them before trying them. Or find someone to pair with who is more familiar with those commands.

Google Compute Engine Resize boot disk

I just read in the Google Compute Engine documentation that:
"Note: Compute Engine is working with respective operating system communities and vendors to eventually convert all operating systems to automatically resize root partitions. As such, we recommend you occasionally check back to make sure this step is still needed for your operating system and over time, this step will be removed completely for all operating systems."
Does that mean (as I'm using Ubuntu, which supports automatic resizing) that my instance will resize its hard drive automatically once disk usage reaches a certain level? I'm currently using 31% of my capacity; does the resizing occur at some x% of disk space used?
No. It means that if you create a VM instance with a root persistent disk that has more space than the original image, using an operating system that supports automatic resizing of the root partition, then your virtual machine automatically resizes the partition to recognize the additional space and you won't need to manually repartition the disk.
To create a VM instance with a root persistent disk larger than the original image, use the following commands:
Create your root persistent disk:
$ gcloud compute disks create DISK_NAME --image IMAGE --size 100GB --zone ZONE
Then start a virtual machine instance using your root persistent disk:
$ gcloud compute instances create INSTANCE_NAME --disk name=DISK_NAME,boot=yes --zone ZONE
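After the instance boots, you can confirm that the root filesystem actually grew (assuming you can SSH in with gcloud; INSTANCE_NAME and ZONE are placeholders as above):

# Connect and check that / reports the full disk size:
$ gcloud compute ssh INSTANCE_NAME --zone ZONE
$ df -h /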

Compute Engine Instance

I have created a Google Compute Engine instance with CentOS and added some things to it, such as Apache, Webmin, ActiveCollab, Gitolite, etc.
The problem is that the VM is always running out of memory because the RAM is too low.
How do I change the assigned RAM in Google Compute Engine?
Do I have to copy the VM to another one with more RAM? If so, will it copy all the contents of my CentOS installation?
Can anyone give me some advice on how to get more RAM without having to reinstall everything?
Thanks
The recommended approach for manually managed instances is to boot from a Persistent root Disk. When your instance has been booted from Persistent Disk, you can delete the instance and immediately create a new instance from the same disk with a larger machine type. This is similar to shutting down a physical machine, installing faster processors and more RAM, and starting it back up again. This doesn't work with scratch disks because they come and go with the instance.
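A sketch of that flow with gcloud (the instance name, disk name, zone, and machine type below are placeholders; verify the flags against your gcloud version):

# Keep the boot disk around when the instance is deleted:
gcloud compute instances set-disk-auto-delete my-instance \
    --disk my-boot-disk --no-auto-delete --zone us-central1-a

# Delete the old instance, then recreate it on the same disk with a
# larger machine type (more RAM):
gcloud compute instances delete my-instance --zone us-central1-a
gcloud compute instances create my-instance \
    --machine-type n1-standard-4 \
    --disk name=my-boot-disk,boot=yes \
    --zone us-central1-a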
Using Persistent Disks also enables snapshots, which allow you to take a point-in-time snapshot of the exact state of the disk and create new disks from it. You can use them as backups. Snapshots are also global resources, so you can use them to create Persistent Disks in any zone. This makes it easy to migrate your instance between zones (to prepare for a maintenance window in your current zone, for example).
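Snapshot creation and cross-zone restore look roughly like this (again a sketch with placeholder names):

# Take a point-in-time snapshot of the disk:
gcloud compute disks snapshot my-boot-disk \
    --snapshot-names my-backup --zone us-central1-a

# Snapshots are global, so the new disk can land in any zone:
gcloud compute disks create my-boot-disk-copy \
    --source-snapshot my-backup --zone europe-west1-b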
Never store state on scratch disks. If the instance stops for any reason, you've lost that data. For manually configured instances, boot them from a Persistent Disk. For application data, store it on Persistent Disk, or consider using a managed service for state, like Google Cloud SQL or Google Cloud Datastore.