Getting Google-Cloud Disk Snapshots to Local-Storage (on-premise) - google-compute-engine

In Google Cloud, snapshots of a disk (attached to an instance) can be taken through the Python APIs, and that is what I'm using. My requirement is to move a snapshot taken in Google Cloud to my local (on-premise) storage.
I think this is a fairly common use case. How can I achieve it?

The best way is to export the snapshot to a Cloud Storage bucket; from there you can download it, or use FUSE or CloudBerry to sync it locally.
Reference: https://cloud.google.com/compute/docs/images/export-image
In that case I strongly advise you to automate this with scripts: you can run a backup script on the VM via cron. In that script, create the snapshot and save it in the current project,
gcloud compute disks snapshot [DISK_NAME]
then create a VM with a startup script:
gcloud compute instances create [YOUR_INSTANCE] --scopes storage-rw \
--metadata startup-script-url=gs://bucket/startupscript.sh
In that script, copy the disk contents to a bucket:
gsutil cp [disk] gs://bucket/Snapshots
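Putting those steps together, here is a minimal sketch of the whole export flow, assuming placeholder names for the disk, zone, and bucket (replace them with your own):
#!/bin/bash
# Sketch: snapshot a disk, turn the snapshot into an image,
# export the image to Cloud Storage, then download it on-premise.
DISK=my-disk            # placeholder
ZONE=us-central1-a      # placeholder
BUCKET=my-backup-bucket # placeholder
# 1. Snapshot the disk.
gcloud compute disks snapshot "$DISK" --zone "$ZONE" --snapshot-names "${DISK}-snap"
# 2. Create an image from the snapshot.
gcloud compute images create "${DISK}-image" --source-snapshot "${DISK}-snap"
# 3. Export the image to the bucket as a tar.gz (see the export-image docs above).
gcloud compute images export --image "${DISK}-image" --destination-uri "gs://${BUCKET}/${DISK}-image.tar.gz"
# 4. From your local machine, download the exported archive.
gsutil cp "gs://${BUCKET}/${DISK}-image.tar.gz" .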

Related

One time script on Compute Engine

I am looking to run a script once during VM instantiation. The startup script in the Compute Engine template runs every time the VM is started. Say, for example, I have to install the GNOME desktop on a Linux host; I don't want to include that in the startup script. Rather, I am looking for something that runs once when the host is created. Of course, I want this automated. Is it possible to do this?
Edit: I am trying to achieve this on a Linux OS.
As per the documentation [1], if we create startup scripts on a Compute Engine instance, the instance performs those automated tasks every time it boots up.
To run a startup script only once, the most basic way is to use a file on the filesystem as a flag indicating that the script has already run, or you could store the state in the instance metadata.
For example via:
INSTANCE_STATE=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/state -H "Metadata-Flavor: Google")
Then set state = PROVISIONED after the script has run, and so on.
But it is a good idea to have your script explicitly check whether the actions it is about to perform have already been done, and handle that case accordingly.
Another option: at the end of your startup script, have it remove the startup-script metadata from the host instance [2].
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/storing-retrieving-metadata
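For illustration, a minimal startup-script sketch using a flag file (the flag path and package are just examples):
#!/bin/bash
# Only run the one-time provisioning if the flag file does not exist yet.
FLAG=/var/run/startup-done   # example flag location
if [ ! -f "$FLAG" ]; then
    apt-get update
    apt-get install -y gnome-shell   # example one-time task; package name depends on your distro
    touch "$FLAG"                    # mark the instance as provisioned
fi
The same pattern works with instance metadata instead of a file: read the state attribute as shown above, and set it to PROVISIONED (for example with gcloud compute instances add-metadata) once the work is done.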

Google Cloud Compute no storage left

Hi, when I try to SSH to a Google Cloud VM instance it doesn't connect, and when I check the logs it says there is no storage available.
But when I connect using the Google Cloud console it connects, and when I check the storage there is enough.
Also, my current persistent disk is 20 GB, but here it shows twice that amount. If anyone can explain what's going on, it would help me out a lot.
The output that you are posting is from Cloud Shell.
When you start Cloud Shell, it provisions a g1-small Google Compute
Engine virtual machine running a Debian-based Linux operating system.
Cloud Shell instances are provisioned on a per-user, per-session
basis. The instance persists while your Cloud Shell session is active;
after an hour of inactivity, your session terminates and its VM,
discarded. For more on usage quotas, refer to the limitations guide.
With the default Cloud Shell experience, you are allocated with an
ephemeral, pre-configured VM and the environment you work with is a
Docker container running on that VM. You can also choose to use a
custom environment to save your configurations, in which case, your
environment will be your very own custom Docker image.
Cloud Shell provisions 5 GB of free persistent disk storage mounted as
your $HOME directory on the virtual machine instance.
As Travis mentioned, you ran df -h --total in Cloud Shell, so it reports Cloud Shell's storage, not the VM's.
Here you can find a related SO question with possible solutions to fix your issue:
Disk is full, and I can't SSH to instance.
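If the VM's boot disk really is full, a common fix (sketched here with placeholder names, and assuming a recent public image that auto-grows its root filesystem on boot) is to resize the persistent disk and restart the instance:
# Resize the boot disk (its name usually matches the instance name).
gcloud compute disks resize my-instance --size 30GB --zone us-central1-a
# Restart so the filesystem can be grown.
gcloud compute instances stop my-instance --zone us-central1-a
gcloud compute instances start my-instance --zone us-central1-a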

Invoking gsutil or gcloud from a Google cloud function?

I have a Google Firebase app with a Cloud Functions back-end. I'm using the node.js 10 runtime which is Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, which will not work in the Cloud Functions environment, as its file system is read-only with the exception of the /tmp folder.
You should consider using cloud native storage systems like GCS for which the Application Default Credentials are already setup. See the list of supported services here.
According to the official Cloud Filestore documentation:
Use Cloud Filestore to create fully managed NFS file servers on Google
Cloud Platform (GCP) for use with applications running on Compute
Engine virtual machines (VMs) instances or Google Kubernetes Engine
clusters.
You cannot mount Filestore on GCF.
Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function; see Writing Cloud Functions:
Google Cloud Functions can be written in Node.js, Python, and Go, and
are executed in language-specific runtimes

How to change machine type of GCE instance?

As there isn't any direct option to change the machine type, I have to create a new instance. What are the steps to do this so that the configuration/software I had installed remains the same?
1) Delete the instance that you want to upgrade by keeping its boot disk.
gcloud compute instances delete <instance-name> --keep-disks boot
2) Now create image from this boot disk
gcloud compute images create <any-image-name> --source-disk <instance-name>
3) Now Check Images list
gcloud compute images list
4) Now create a new instance from the Developers Console or using gcloud compute, and select your image as the boot disk.
5) Done.
Here is the link.
Updated answer
I'm not sure when this launched, but it is now possible to change the machine type, without deleting instance and re-creating it from scratch, per the docs:
You can change the machine type of a stopped instance if it is not part of a managed instance group.
Here's how you can do this with gcloud:
$ gcloud compute instances set-machine-type INSTANCE_NAME \
--machine-type NEW_MACHINE_TYPE
Also, note the caveat about moving to smaller instance types:
If you move from a machine type with more resources to a machine type with fewer resources, such as moving from an e2-standard-8 machine type to an e2-standard-2, you could run into hardware resource issues or performance limitations, because smaller machine types are less powerful than larger machine types. Make sure that your new machine type is able to support any applications or services that are currently running on the instance, or update your services and applications to run on the smaller machine type.
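For completeness, a hedged end-to-end sequence with placeholder instance name, zone, and machine type:
# Stop the instance, change its machine type, then start it again.
gcloud compute instances stop INSTANCE_NAME --zone us-central1-a
gcloud compute instances set-machine-type INSTANCE_NAME --machine-type e2-standard-2 --zone us-central1-a
gcloud compute instances start INSTANCE_NAME --zone us-central1-a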
Original answer (outdated)
You can't change the instance type of a VM on the fly. To upgrade or downgrade the VM type, do the following:
1) VERY IMPORTANT: make sure not to delete the VM's boot disk while shutting down the VM; see this answer for details.
2) Shut down the VM cleanly, taking into account the information from step 1. You can do this via the Google Developers Console, or via gcloud on the CLI using the --keep-disks option (or by having already set those disks to not auto-delete, as described in this answer):
gcloud compute instances delete VM \
--keep-disks=all \
--project $PROJECT \
--zone $ZONE
Note that --keep-disks accepts any of the following options: boot, data, or all. In your case, you want at least boot but if you've attached other disks, you want to specify all. See the docs for more info.
3) Create a new VM and choose a larger/smaller instance type. Again, this can be done via the Google Developers Console or via gcloud on the CLI; instead of creating a new boot disk, select the boot disk from the original VM, e.g.:
gcloud compute instances create $VM \
--disk name=${DISK_NAME},boot=yes \
--machine-type ${MACHINE_TYPE} \
--project $PROJECT \
--zone $ZONE
See the docs for more info.
As of today, this ability is available in Google Compute Engine: you will need to stop the instance and then edit it, which will give you a drop-down menu for the machine type.
https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instances/set-machine-type?hl=en

deploy java application on aws

I have a web application running on Tomcat 7 and MySQL, and now I want to deploy it to AWS.
The application needs to write files to disk (such as images uploaded by users).
Can someone help me by pointing out how to configure a good infrastructure on AWS for my needs?
I read this: http://aws.amazon.com/elasticbeanstalk/ , and I think what I need is an EC2 instance running Tomcat and an Amazon RDS with MySQL...
Do I need something else for reading/writing files?
Do I need to change my code in some way to make it work on AWS?
Thanks in advance,
Loris
Elastic Beanstalk is a good way to get started with an application deployment on AWS. For persistent file storage you can use S3 or an EBS volume.
S3 allows you to read and write objects using Amazon's SDK/API. I am using this in a Java application running on AWS and it works pretty smoothly.
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html
It is also possible to mount S3 as a local filesystem (for example with s3fs); you can read some interesting points in this answer:
How stable is s3fs to mount an Amazon S3 bucket as a local directory
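If you do try the s3fs route, mounting typically looks something like this (bucket name, mount point, and credentials file are placeholders):
# Store the AWS key pair that s3fs expects, then mount the bucket.
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
mkdir -p /mnt/mybucket
s3fs mybucket /mnt/mybucket -o passwd_file=${HOME}/.passwd-s3fs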
With EBS you can create a persistent storage volume attached to your EC2 node. Please note that EBS is a block-level storage device, so you'll need to format it before it's usable as a filesystem. EBS also helps you protect against data loss by letting you configure EBS snapshot backups to S3.
http://aws.amazon.com/ebs/details/
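For the EBS route, a rough sketch of formatting and mounting a freshly attached volume (the device name varies by instance type; /dev/xvdf is just an example):
# Format the new EBS volume (this destroys any existing data on it!).
sudo mkfs -t ext4 /dev/xvdf
# Mount it where the application writes its files, e.g. an uploads directory.
sudo mkdir -p /data/uploads
sudo mount /dev/xvdf /data/uploads
# Add an /etc/fstab entry if the mount should survive reboots.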
-fred