gCloud Increase vCPU and Memory - google-compute-engine

I have a Google Cloud Compute Engine instance ('e2-standard-4') with 4 vCPUs and 16 GB of RAM, and I need to increase these parameters.
How can I do this?
Is it possible with no downtime?
Thanks

I had a similar requirement and used the following steps. It worked, with very little downtime (less than a minute, though DNS propagation meant web access was down for ~10 minutes), but I've only had the one experience, so don't trust me blindly.
First, make a snapshot of your VM.
Snapshots > Create Snapshot > Choose your VM for Source disk
Next, create a new instance, with the better specs.
For "Boot disk", choose the Snapshots tab, and choose your snapshot.
That should do it. You'll have a VM that acts exactly like your old VM, but beefier. I had web access, so I had to change my domain DNS settings, but HTTPS continued to work once they propagated.
If anything's unclear, let me know and I can flesh it out. If not, I hope that gets you on the right path.
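The steps above can also be done from the gcloud CLI. The sketch below prints the commands rather than running them; the instance, snapshot, disk names, zone, and target machine type are all placeholders to adjust:

```shell
# Dry-run sketch of the snapshot-and-recreate flow described above.
# my-vm, my-snap, my-vm-big, the zone, and the machine type are placeholders.
ZONE="us-central1-a"

# 1. Snapshot the existing VM's boot disk.
SNAP_CMD="gcloud compute disks snapshot my-vm --zone=$ZONE --snapshot-names=my-snap"

# 2. Create a new boot disk from the snapshot.
DISK_CMD="gcloud compute disks create my-vm-big-disk --zone=$ZONE --source-snapshot=my-snap"

# 3. Create the beefier instance, using that disk as its boot disk.
CREATE_CMD="gcloud compute instances create my-vm-big --zone=$ZONE --machine-type=e2-standard-8 --disk=name=my-vm-big-disk,boot=yes"

# Print the commands; drop the echoes (or pipe to sh) to actually run them.
echo "$SNAP_CMD"
echo "$DISK_CMD"
echo "$CREATE_CMD"
```

The machine type shown (e2-standard-8, 8 vCPUs / 32 GB) is just an example of the next size up from e2-standard-4.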

Related

Google cloud compute instance metrics taking up disk space

I have a Google Cloud compute instance set up, but it's getting low on disk space. It looks like the /mnt/stateful_partition/var/lib/metrics directory is taking up a significant amount of space (3+ GB). I assume this is the compute metrics, but I can't find any way to safely remove them other than just deleting the files. Is this going to cause any issues?
The path you are referring to is one of the file-system directories used by the GCE VM instance, and you are correct that the metrics folder is safe to remove. To learn more about these directories, see Disks and file system overview.
I would also suggest creating a snapshot first if you want to make sure the changes you make to your instance won't affect your system's behavior; that way you can easily revert to the previous instance state.
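Concretely, the snapshot-then-clean flow might look like the sketch below. The instance name and zone are placeholders, and the commands are printed rather than executed so you can review them first:

```shell
# Dry-run sketch: snapshot first, then check and clear the metrics directory.
# my-instance and the zone are placeholders.
ZONE="us-central1-a"
SNAP_CMD="gcloud compute disks snapshot my-instance --zone=$ZONE --snapshot-names=pre-cleanup"
DU_CMD="du -sh /mnt/stateful_partition/var/lib/metrics"
RM_CMD="sudo rm -rf /mnt/stateful_partition/var/lib/metrics/*"

echo "$SNAP_CMD"   # safety net: lets you roll back if anything misbehaves
echo "$DU_CMD"     # confirm this directory really is what is eating the space
echo "$RM_CMD"     # reclaim it
```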

Can I Reduce the Size of Root Persistent Disk of Google Compute Engine?

I am running a WordPress site on Google Compute Engine. It started very well, until today...
As the site is intended for users to create content, I was testing how to increase the size of the persistent disk.
Yes, it is possible and easy; however, it seems it is not possible to reduce the disk back to its original size.
It was 10GB and now it is 10TB, which heavily increased my cost and is totally unnecessary.
Is there any way that I can reduce the size of the root disk?
I tried to create a new disk from a snapshot, but it cannot be attached to the current instance (error: "feature not supported").
I think it may be possible to create a new instance from the snapshot, but that would cost me a new SSL cert.; still, I guess it could be the last or best option.
Anyone can help?
Thank you so much!
Renfred
I'm pretty sure you go to the options for the instance and edit them to make sure "Delete boot disk when instance is deleted" is not selected. This will allow you to delete your current instance while keeping the disk, and you can maintain your original IP address, etc.
Try to see if that lets you just put the snapshot on a smaller disk without your instance.
From here you can create an image from the disk.
And try to create a new disk from the disk image.
Be sure to have everything backed up. Don't delete anything (other than the instance) until you have a working setup again. And triple-check that you won't lose the disk when you trash the instance.
Then, when you're back up and running, make sure you get rid of the stuff you really don't need. You could still keep the 10TB disk if you don't delete it, but avoid that until you have something working again.
In both cases, you would be keeping the IP address and the web host, so I don't see how the cert would no longer match. You would be provisioning a new machine in place: same IP address, same URL, even in the worst case where you get rid of the instance altogether and start fresh. But paying 260 bucks a month is a non-starter.
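For reference, the disk-to-image-to-new-disk route suggested above looks roughly like the sketch below (names and sizes are placeholders; commands are printed, not run). One caveat worth checking before relying on it: GCE records the source disk's size in the image and generally will not let you create a disk smaller than that, so if the smaller size is rejected, the fallback is to attach a fresh small disk and copy the files over instead.

```shell
# Dry-run sketch of the image route. wp-disk, wp-image, wp-disk-small,
# the zone, and the 50GB target size are placeholders.
ZONE="us-central1-a"
IMG_CMD="gcloud compute images create wp-image --source-disk=wp-disk --source-disk-zone=$ZONE"
DISK_CMD="gcloud compute disks create wp-disk-small --image=wp-image --size=50GB --zone=$ZONE"

echo "$IMG_CMD"
echo "$DISK_CMD"
```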

RDS Migration - MySQL Queries have slowed down

We recently moved to Amazon Web Services from colo hosting. We have two EC2 servers and an RDS instance. Initially everything was running quite smoothly, but recently queries that used to take seconds to run are now taking minutes.
We tried upgrading to a larger instance, but that does not seem to have helped. Also, I've reached the limit of my knowledge, and we are still in the process of trying to find a new DBA after the last one left.
Our RDS is an m3.xlarge and we are using SSD storage. Below is a screenshot of max read and write ops as well as CPU usage.
Any suggestions or guidance on parameters that I should check or change would be much appreciated.
It seems you are having a latency problem, i.e. low I/O availability.
Amazon EBS drives, like almost everything in the cloud, are shared.
And, like everything in the cloud, you can pay extra for a higher peak or extra for a guaranteed minimum (or extra for both, of course).
(Sorry for stating the obvious.)
Now the tips:
See those low valleys in between the huge peaks on your IOPS graph? They probably don't mean your RDBMS isn't requesting IOPS; rather, it isn't getting them, because Amazon is giving those IOPS to other users, each less IOPS-greedy than you, but many in number.
If you haven't done so already, read about Provisioned IOPS for low latency SSD disk access,
and how to improve EBS performance.
Also, is EBS optimization active for your instance? Amazon's docs say it is enabled by default for c4 instances and supported by m3.xlarge instances, but they don't mention whether it is enabled by default for the latter.
I am by no means an expert, but there is no harm and almost no cost in trying those simple solutions, and they should probably be enough. Otherwise, don't wait until you manage to hire a new competent DBA; get some consulting from a reputable firm ASAP (or even buy AWS premium support for a month). At least they will be able to tell you where the bottleneck is and what has to be done to fix it.
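If you do go the Provisioned IOPS route, the change is a single CLI call. The sketch below prints the command rather than executing it; the instance identifier and the IOPS figure are placeholders:

```shell
# Dry-run sketch: switch the RDS instance to Provisioned IOPS (io1) storage.
# mydb and the 2000 IOPS figure are placeholders; --apply-immediately makes
# the change start right away instead of waiting for the maintenance window.
MODIFY_CMD="aws rds modify-db-instance --db-instance-identifier mydb --storage-type io1 --iops 2000 --apply-immediately"

echo "$MODIFY_CMD"
```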

How do I make a snapshot of my boot disk?

I've read multiple times that I can cause read/write errors if I create a snapshot. Is it possible to create a snapshot of the disk my machine is booted off of?
It depends on what you mean by "snapshot".
A snapshot is not a backup; it is a way of temporarily capturing the state of a system so you can make changes, test the results, and revert to the previously known good state if the changes cause issues.
How to take a snapshot varies depending on the OS you're using, whether you're talking about a physical or a virtual system, which virtualization platform you're using, what image types you're using for disks within that platform, etc.
Once you have a snapshot, then you can make a real backup from the snapshot. You'll want to make sure that if it's a database server that you've flushed everything to disk and then write lock it for the time it takes to make the snapshot (typically seconds). For other systems you'll similarly need to address things in a way that ensures that you have a consistent state.
If you want to make a complete backup of your system drive directly, rather than via a snapshot, then you want to shut down and boot off an alternate boot device like a CD or an external drive.
If you don't do that and instead try to back up a running system directly, you will be leaving yourself open to all manner of potential issues. It might work some of the time, but you won't know until you try to restore it.
If you can provide more details about the system in question, then you'll get more detailed answers.
As far as moving apps and data to different drives, data is easy provided you can shut down whatever is accessing the data. If it's a database, stop the database, move the data files, tell the database server where to find its files and start it up.
For applications, it depends. Often it doesn't matter and it's fine to leave it on the system disk. It comes down to how it's being installed.
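The stop, move, repoint, start sequence for a MySQL data directory might look like the sketch below on a systemd box. The target path is a placeholder, the config file location varies by distro, and the commands are printed rather than executed:

```shell
# Dry-run sketch of relocating the MySQL data directory.
# /mnt/data/mysql is a placeholder target mount.
STOP_CMD="sudo systemctl stop mysql"
MOVE_CMD="sudo rsync -a /var/lib/mysql/ /mnt/data/mysql/"
EDIT_NOTE="# set datadir=/mnt/data/mysql in the MySQL server config, then:"
START_CMD="sudo systemctl start mysql"

echo "$STOP_CMD"
echo "$MOVE_CMD"
echo "$EDIT_NOTE"
echo "$START_CMD"
```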
It looks like that works a little differently here: the first snapshot creates an entire copy of the disk, and subsequent snapshots act like ordinary incremental snapshots. This means the first snapshot might take a bit longer.
According to this, you ideally want to shut down the system before taking a snapshot of your boot disk. If you can't do that for whatever reason, then you want to minimize the amount of writes hitting the disk and then take the snapshot. Assuming you're using a journaling filesystem (ext3, ext4, XFS, etc.), it should be able to recover without issue.
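On a running Linux VM, the quiesce-then-snapshot sequence is roughly the sketch below. The VM name and zone are placeholders, and the commands are printed rather than executed (freezing a live root filesystem is risky, so fsfreeze is shown only for non-root data disks):

```shell
# Dry-run sketch: flush writes, then snapshot the boot disk.
# my-vm and the zone are placeholders.
SYNC_CMD="sudo sync"
SNAP_CMD="gcloud compute disks snapshot my-vm --zone=us-central1-a --snapshot-names=boot-snap"

echo "$SYNC_CMD"   # flush the page cache so the journal is as clean as possible
echo "$SNAP_CMD"
# For a non-root data disk you could additionally wrap the snapshot with:
#   sudo fsfreeze -f /mnt/data   ...   sudo fsfreeze -u /mnt/data
```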
You can use the GCE APIs. Use the Disks: insert API to create the persistent disk. There are code examples showing how to start an instance using Python, and Google has client libraries for other languages such as Java and PHP.
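If you'd rather not call the API directly, the gcloud equivalent of Disks: insert is a one-liner. Name, size, type, and zone below are placeholders, and the command is printed rather than executed:

```shell
# Dry-run sketch: the CLI equivalent of the Disks: insert API call.
# my-disk, 100GB, pd-ssd, and the zone are placeholders.
DISK_CMD="gcloud compute disks create my-disk --size=100GB --type=pd-ssd --zone=us-central1-a"

echo "$DISK_CMD"
```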

Performance effects of moving mysql db to another Amazon EC2 instance

We have an EC2 running both apache and mysql at the moment. I am wondering if moving the mysql to another EC2 instance will increase or decrease the performance of the site. I am more worried about the network speed issues between the two instances.
EC2 instances in the same availability zone are connected via a 10,000 Mbps network; that's faster than a good solid-state drive on a SATA-3 interface (6 Gb/s).
You won't see a performance drop by moving the database to another server; in fact, you'll probably see a performance increase because each server then has its own memory and CPU cores.
If your worry is network latency, then forget about it; it's not a problem on AWS within the same availability zone.
Another consideration is that you're probably storing your website and DB files on an EBS-mounted volume. That EBS volume is stored off-instance, so you're actually talking to a storage array over the same super-fast 10 Gbps network.
So what I'm saying is: with EBS, your website and database are already talking across the network to get their data, so putting them on separate instances won't really change anything in that respect, besides giving more resources to both servers. More resources means more data cached locally in memory and more performance.
The answer depends largely on what resources Apache and MySQL are using. They can happily cohabit if demand on your website is low and each is configured with enough memory that it doesn't spill into swap. In that case, they are best kept together.
As traffic grows, or your application grows, you will benefit from splitting them out, because each can then run inside dedicated memory. Provided the instances are in the same region, you should see fast performance between them. I have even run a web application in Europe with the DB in the USA, and performance wasn't noticeably bad! I wouldn't recommend that, though!
Because AWS is easy and cheap, your best bet is to set it up and benchmark it!
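A quick way to benchmark the two layouts is sysbench's OLTP test, run once against the local DB and once against the remote one. Host, user, password, and table sizes below are placeholders, and the commands are printed rather than executed:

```shell
# Dry-run sketch: prepare and run sysbench's OLTP read/write test against MySQL.
# db.internal, bench, secret, and the table/thread/time figures are placeholders.
PREP_CMD="sysbench oltp_read_write --mysql-host=db.internal --mysql-user=bench --mysql-password=secret --tables=4 --table-size=100000 prepare"
RUN_CMD="sysbench oltp_read_write --mysql-host=db.internal --mysql-user=bench --mysql-password=secret --tables=4 --table-size=100000 --threads=8 --time=60 run"

echo "$PREP_CMD"
echo "$RUN_CMD"
```

Run the same pair of commands against each candidate setup and compare the reported transactions per second and latency percentiles.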