Can I Reduce the Size of Root Persistent Disk of Google Compute Engine?

I am running a WordPress site on Google Compute Engine, and it was going very well until today.
As the site is intended for users to create content, I was testing how to increase the size of the persistent disk.
Increasing it is possible and easy; however, it seems impossible to reduce the disk or return it to its original size.
It was 10 GB and is now 10 TB, which heavily increases my cost and is totally unnecessary.
Is there any way that I can reduce the size of the root disk?
I tried to create a new disk from a snapshot, but it cannot be attached to the current instance; the attempt fails with the error "feature not supported".
I think it may be possible to create a new instance from the snapshot, but that would mean buying a new SSL certificate. Still, I guess it could be the last or best option.
Can anyone help?
Thank you so much!
Renfred

I'm pretty sure you can go to the instance's options and edit them to make sure "Delete boot disk when instance is deleted" is not selected. This will allow you to delete your current instance while keeping the disk, and you can maintain your original IP address, etc.
Try to see if that lets you put the snapshot on a smaller disk without your instance.
From there you can create an image from the disk.
Then try to create a new disk from that disk image.
Be sure to have everything backed up. Don't delete anything (other than the instance) until you have a working setup again, and triple-check that you won't lose the disk when you trash the instance.
Then, when you're back up and running, make sure you get rid of the stuff you really don't need. You could still have the 10 TB disk if you don't delete it, but avoid that until you have something working again.
In both cases, you would be maintaining the IP address and the webhost, so I don't see how the certificate would no longer match. You would be provisioning a new machine in place: same IP address, same URL, even in the worst case where you get rid of the instance altogether and start fresh. But paying 260 bucks a month is a non-starter.
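If you end up scripting that image-then-disk route, here is a rough sketch using the Compute Engine v1 API through google-api-python-client. All project, zone, and resource names are placeholders, and note that GCE generally refuses to create a disk with a sizeGb smaller than the source image's recorded disk size, so a real shrink may still mean copying files over by hand.

```python
# Hedged sketch: create an image from the oversized disk, then a new
# disk from that image. All names below are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"  # placeholders

# 1. Create an image from the existing root disk.
compute.images().insert(
    project=project,
    body={
        "name": "wp-root-image",
        "sourceDisk": f"projects/{project}/zones/{zone}/disks/wp-root-disk",
    },
).execute()
# (In practice, poll the returned operation until the image is READY.)

# 2. Create a new disk from that image at the desired size.
#    GCE requires sizeGb to be at least the image's disk size, so
#    verify before relying on this step to shrink anything.
compute.disks().insert(
    project=project,
    zone=zone,
    body={
        "name": "wp-root-disk-small",
        "sourceImage": f"projects/{project}/global/images/wp-root-image",
        "sizeGb": "10",
    },
).execute()
```

Both calls return long-running operations that you would normally poll before moving on to the next step.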

Related

gCloud Increase vCPU and Memory

I have a Google Cloud Compute Engine instance with 4 vCPUs and 16 GB RAM, and I need to increase these parameters; my instance is an 'e2-standard-4'.
How can I increase this?
Is it possible with no downtime?
Thanks
I had a similar requirement and used the following steps, and it worked with very little downtime (less than a minute, though DNS propagation meant web access was down for ~10 minutes). But I've only had the one experience, so don't trust me blindly.
First, make a snapshot of your VM.
Snapshots > Create Snapshot > Choose your VM for Source disk
Next, create a new instance, with the better specs.
For "Boot disk", choose the Snapshots tab, and choose your snapshot.
That should do it. You'll have a VM that acts exactly like your old VM, but beefier. Mine served web traffic, so I had to change my domain's DNS settings, but HTTPS continued to work once they propagated.
If anything's unclear, let me know and I can flesh it out. If not, I hope that gets you on the right path.
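For what it's worth, the same snapshot-then-recreate flow can be scripted. Below is a hedged sketch with google-api-python-client; every name in it is a placeholder, and it assumes the snapshot from the first step already exists. (GCE also has an instances.setMachineType method that resizes a stopped instance in place, but that still involves downtime while the VM is stopped.)

```python
# Sketch: create a beefier instance booting from an existing snapshot.
# Project, zone, and resource names are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"  # placeholders

config = {
    "name": "my-vm-bigger",                                     # placeholder
    "machineType": f"zones/{zone}/machineTypes/e2-standard-8",  # the larger spec
    "disks": [{
        "boot": True,
        "autoDelete": True,
        # The boot disk is built from the snapshot taken in step 1.
        "initializeParams": {"sourceSnapshot": "global/snapshots/my-vm-snap"},
    }],
    "networkInterfaces": [{
        "network": "global/networks/default",
        "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
    }],
}
compute.instances().insert(project=project, zone=zone, body=config).execute()
```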

Google cloud compute instance metrics taking up disk space

I have a Google Cloud Compute Engine instance set up, but it's getting low on disk space. It looks like the /mnt/stateful_partition/var/lib/metrics directory is taking up a significant amount of space (3+ GB). I assume this is the compute metrics, but I can't find any way to safely remove these other than just deleting the files. Is this going to cause any issues?
The path you are referring to is a file-system directory used by the GCE VM instance, and you are correct that the metrics folder is safe to remove. To learn more about these directories, see Disks and file system overview.
I would also suggest creating a snapshot first if you want to make sure the changes you make to your instance won't affect system performance, so that you can easily revert to your previous instance state.
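Taking that safety snapshot can also be done programmatically; a minimal sketch with google-api-python-client follows, assuming placeholder project, zone, disk, and snapshot names.

```python
# Minimal sketch: snapshot the disk before deleting the metrics files.
# All identifiers are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.disks().createSnapshot(
    project="my-project",        # placeholder
    zone="us-central1-a",        # placeholder
    disk="my-instance-disk",     # placeholder
    body={"name": "pre-cleanup-snap"},
).execute()
```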

How do I see which database is running faster after making changes to my.cnf file?

Background
I have 5 servers all running essentially the same site, but I have had difficulties with database speed. Part of the process has led me to make changes to one of my my.cnf files to improve performance.
Problem
I am having difficulty finding out whether the settings are making any difference at all. I restarted the MySQL service and even rebooted the entire server; the variables show up as changed, but I don't see any kind of noticeable difference when accessing my site. I would like a way to quantify how fast my database is without relying on the front end of the app, so I can show my boss real figures for database speed instead of looking at the Google console for load speeds.
Research
I thought that there might be some tool in phpMyAdmin to help track speed, but after going through the different tabs I couldn't find anything. All of the other online resources I have looked at seem to just talk about "expected results" instead of how to test directly.
Question
Is there a way to get speed information directly from the database (or phpmyadmin) instead of using the front end of the web app?
The optimal realistic benchmark goes something like this:
Capture a copy of the live dataset.
Turn on the general log.
Run for some period of time (say, an hour).
Turn off the general log.
That gives you 2 things: a starting point, and a list of realistic instructions. Now to replay:
Load the data on a test machine.
Change some my.cnf setting.
Apply the captured general log, timing how long it takes.
Replay with another setting change; see if the timing is more than trivially faster or slower.
Even better would be to arrange for the replay to be multi-threaded like the real product.
Caveat... Some settings won't make any difference until the size of something (data, index, number of connections, etc.) exceeds some threshold. Only at that point will the setting show a difference. This benchmark method fails to predict such cases.
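For the timing part, here is one minimal way to replay captured statements and measure wall-clock time, assuming you've already massaged the general log into a file with one SQL statement per line. It uses PyMySQL, and the host, user, and file names are placeholders; a real replay would also be multi-threaded, as noted above.

```python
# Replay-timing sketch. Assumes captured_queries.sql holds one statement
# per line, extracted from the general log (SET GLOBAL general_log = ON/OFF).
import time
import pymysql

conn = pymysql.connect(host="test-db-host", user="bench",
                       password="secret", database="appdb")

with open("captured_queries.sql") as f:
    statements = [line.strip() for line in f if line.strip()]

start = time.perf_counter()
with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
        if cur.description:      # drain result sets so timing is realistic
            cur.fetchall()
elapsed = time.perf_counter() - start

print(f"Replayed {len(statements)} statements in {elapsed:.1f}s")
```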
If you would like an independent review of your my.cnf, please provide these:
How much RAM do you have?
SHOW VARIABLES;
SHOW GLOBAL STATUS; -- after mysqld has been running at least a day.
I will compute a couple hundred formulas and judge which ones are red flags.

How do I make a snapshot of my boot disk?

I've read multiple times that I can cause read/write errors if I create a snapshot. Is it possible to create a snapshot of the disk my machine is booted off of?
It depends on what you mean by "snapshot".
A snapshot is not a backup; it is a way of temporarily capturing the state of a system so you can make changes, test the results, and revert to the previously known good state if the changes cause issues.
How to take a snapshot varies depending on the OS you're using, whether you're talking about a physical or a virtual system, what virtualization platform you're using, what image types you're using for disks within a given virtualization platform, etc.
Once you have a snapshot, you can make a real backup from the snapshot. You'll want to make sure that, if it's a database server, you've flushed everything to disk and then write-lock it for the time it takes to make the snapshot (typically seconds). For other systems you'll similarly need to address things in a way that ensures that you have a consistent state.
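For the MySQL case, the flush-and-lock window looks roughly like this. This is a sketch under assumptions: PyMySQL for the lock, a shell-out to gcloud for the snapshot itself (substitute whatever your platform uses), and placeholder disk and zone names.

```python
# Sketch: hold a global read lock only for the duration of the snapshot.
# The lock belongs to this connection, so keep it open until UNLOCK TABLES.
import subprocess
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret")

with conn.cursor() as cur:
    cur.execute("FLUSH TABLES WITH READ LOCK")   # flush to disk, block writes
    try:
        # Placeholder snapshot step; swap in your platform's mechanism.
        subprocess.run(
            ["gcloud", "compute", "disks", "snapshot", "db-disk",
             "--zone", "us-central1-a",
             "--snapshot-names", "db-consistent-snap"],
            check=True,
        )
    finally:
        cur.execute("UNLOCK TABLES")             # reopen for writes promptly
```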
If you want to make a complete backup of your system drive directly, rather than via a snapshot, then you want to shut down and boot off an alternate boot device like a CD or an external drive.
If you don't do that and try to directly back up a running system, then you will be leaving yourself open to all manner of potential issues. It might work some of the time, but you won't know until you try to restore it.
If you can provide more details about the system in question, then you'll get more detailed answers.
As far as moving apps and data to different drives, data is easy provided you can shut down whatever is accessing the data. If it's a database, stop the database, move the data files, tell the database server where to find its files and start it up.
For applications, it depends. Often it doesn't matter and it's fine to leave it on the system disk. It comes down to how it's being installed.
It looks like that works a little differently. The first snapshot will create an entire copy of the disk and subsequent snapshots will act like ordinary snapshots. This means it might take a bit longer to do the first snapshot.
According to this, you ideally want to shut down the system before taking a snapshot of your boot disk. If you can't do that for whatever reason, then you want to minimize the amount of writes hitting the disk and then take the snapshot. Assuming you're using a journaling filesystem (ext3, ext4, XFS, etc.), it should be able to recover without issue.
You can use the GCE APIs. Use the Disks:insert API to create the persistent disk. There are some code examples on how to start an instance using Python, but Google has libraries for other programming languages like Java, PHP, and others.
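For reference, a bare-bones Disks:insert call through the Python client library might look like the following; the project, zone, size, and disk names are placeholders.

```python
# Minimal Disks:insert sketch with google-api-python-client.
# All identifiers are placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
project, zone = "my-project", "us-central1-a"  # placeholders

compute.disks().insert(
    project=project,
    zone=zone,
    body={
        "name": "my-new-disk",
        "sizeGb": "200",
        "type": f"projects/{project}/zones/{zone}/diskTypes/pd-standard",
    },
).execute()
```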

Wordpress site malfunctions when server disk is full

My company has some private servers for web development.
The server has 600 GB of storage space, but at times it runs out of disk space.
In such situations, one of our websites using WordPress malfunctions and some of its features can't be accessed.
Can anyone tell me the possible cause of this and how it can be prevented?
Thanks in advance.
In such situations, one of our websites using WordPress malfunctions and some of its features can't be accessed.
That's what happens when the disk is full - nothing we can do about that. Temporary files can no longer be created, including the session files that are necessary to track whether a user is logged in or not. MySQL may need to write temporary data even when doing only a SELECT. The web server may need some swap space on the hard disk at peak times. etc. etc.
I'm more concerned with WP malfunctioning than with why the server disk is full.
That won't work. You need to fix the source of the problem, and that is the server's disk filling up. You can't make Wordpress work despite the disk being full.
It could be down to so many things, probably not at the WordPress level. With no disk space, Apache can't write its access/error logs, sessions can't be written by PHP, MySQL can't write, and so on.
The only answer applicable to you at this stage is to stop maxing out the disk space.
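As a preventive measure, even a tiny scheduled check can warn you before the disk fills. A minimal sketch follows, with an arbitrary 90% threshold and the alert left as a placeholder:

```python
# Disk-space check you could run from cron; the threshold is an
# arbitrary choice, not a value from the question.
import shutil

total, used, free = shutil.disk_usage("/")
percent_used = used / total * 100

if percent_used > 90:
    # Placeholder alert: wire this up to email, Slack, monitoring, etc.
    print(f"WARNING: disk {percent_used:.0f}% full, {free // 2**30} GiB left")
```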