I have a local system with a beanstalkd application running. Now I want to migrate to GCP Compute Engine instances.
I have already installed beanstalkd on the Compute Engine instance. So how can I migrate my data to Compute Engine?
The steps are straightforward:
make sure the binlog is activated
locate the binlog file
stop the Beanstalkd instance for a fully graceful shutdown, so the binlog file is flushed
upload the binlog file to Cloud Storage (either using the Console or the gsutil utility)
on the Compute Engine instance, make sure you have the Storage access scope (in the instance's edit section)
download the binlog file to the machine itself using gsutil
start the beanstalkd service and voilà, your persisted messages are there (a sketch of the commands follows below)
you can also set up the Beanstalkd Console Admin on the same machine or on other machines
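A minimal sketch of the transfer, assuming beanstalkd was started with -b /var/lib/beanstalkd and that my-bucket is a placeholder bucket name:

# On the local machine: stop beanstalkd gracefully, then upload the binlog files
sudo service beanstalkd stop
gsutil cp /var/lib/beanstalkd/binlog.* gs://my-bucket/beanstalkd/
# On the Compute Engine instance: pull the binlog files into the binlog directory, then start
gsutil cp gs://my-bucket/beanstalkd/binlog.* /var/lib/beanstalkd/
sudo service beanstalkd start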
Hi, when I try to SSH to a Google Cloud VM instance it doesn't connect, and when I check the logs it says there is no storage available.
But when I connect using the Google Cloud console it connects, and when I check the storage there is enough storage.
Also, one thing: my current persistent disk is 20 GB, but here it shows twice that amount. If anyone can explain what's going on, it would help me out a lot.
The output that you are posting is from Cloud Shell, not from your VM.
When you start Cloud Shell, it provisions a g1-small Google Compute Engine virtual machine running a Debian-based Linux operating system. Cloud Shell instances are provisioned on a per-user, per-session basis. The instance persists while your Cloud Shell session is active; after an hour of inactivity, your session terminates and its VM is discarded. For more on usage quotas, refer to the limitations guide.
With the default Cloud Shell experience, you are allocated an ephemeral, pre-configured VM, and the environment you work with is a Docker container running on that VM. You can also choose to use a custom environment to save your configurations, in which case your environment will be your very own custom Docker image.
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance.
As Travis mentioned, you ran df -h --total in Cloud Shell, so it reports Cloud Shell's storage, not the VM's.
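To see the difference, note where each command runs (instance and zone names here are placeholders):

# Inside Cloud Shell: this reports Cloud Shell's own 5 GB home disk, not your VM's disk
df -h --total
# To check the VM's disk, run the command on the VM itself
gcloud compute ssh my-instance --zone us-central1-a --command "df -h --total"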
Here you can find a related SO question with possible solutions to fix your issue:
Disk is full, and I can't SSH to instance.
In Google Cloud, snapshots of a disk (attached to an instance) can be taken through the Python APIs, and I'm using those. My requirement is moving a snapshot taken in Google Cloud to my local storage.
I think this is a fairly common use case. How can I achieve this?
The best way is saving the snapshot into a bucket through SSH; from there you can download it, or use FUSE or CloudBerry to sync it locally.
Reference: https://cloud.google.com/compute/docs/images/export-image
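As a rough sketch of that export flow (disk, image, and bucket names are placeholders, and gcloud compute images export relies on Cloud Build being enabled in the project):

# Create an image from the disk, export it to a bucket, then download it locally
gcloud compute images create my-image --source-disk my-disk --source-disk-zone us-central1-a
gcloud compute images export --image my-image --destination-uri gs://my-bucket/my-image.tar.gz
gsutil cp gs://my-bucket/my-image.tar.gz .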
In that case I strongly advise you to do this using scripts: you can run a script in the VM via cron to take the backup. In this script, take the snapshot and save it in the current project:
gcloud compute disks snapshot [DISK_NAME]
then create a VM with a startup script:
gcloud compute instances create [YOUR_INSTANCE] --scopes storage-rw \
--metadata startup-script-url=gs://bucket/startupscript.sh
In this script, copy the disk to a bucket:
gsutil cp [disk] gs://bucket/Snapshots
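If you schedule the backup with cron as suggested, a hypothetical crontab entry could look like this (disk name and zone are placeholders; note that % must be escaped in crontab):

# Snapshot the disk every day at 02:00
0 2 * * * gcloud compute disks snapshot my-disk --zone us-central1-a --snapshot-names my-disk-$(date +\%F)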
I have a VM instance on Google Compute Engine, which runs a Python web application. But now I want to have a snapshot of the online VM deployed on my local server. Is it feasible to do that? If yes, how do I proceed?
I have a few servers that host customer websites. These customers access the system via SSH or SFTP for data manipulation. In GCE, I don't know what the best approach for this type of access is considering our hosting application creates a jailed account for the users via a control panel and billing system.
I thought about altering sshd_config to allow SSH access with passwords for users. However, GCE documentation reveals that if an instance is rebooted or upgraded to a different machine type, the SSH settings would be reset based on the image, and therefore I would lose my sshd_config alterations. I was under the impression that as long as I have a persistent boot disk, I wouldn't lose such changes.
What options do I have to allow our customers to access the server via SSH, authenticating with passwords, without them having to use gcutil?
After some testing, I have found that enabling SSH password authentication is as simple as modifying your sshd_config file. This file DOES NOT get reverted back to GCE defaults if you are using a persistent disk, so a reboot or a VM instance migration/upgrade should keep all SSH settings intact as long as you are using a persistent disk or recovering from a snapshot.
I tested by doing the following:
Modified SSH for password authentication (as needed)
Tested VM connectivity with just ssh vm_fqdn, without using gcutil, and was successful
Rebooted the VM instance, which kept all sshd_config changes, allowing me to still connect with passwords outside of gcutil
Recreated a different GCE instance with the persistent disk, which also kept my SSH settings, allowing me to log in without gcutil
It seems the documentation for SSH settings/authentication methods is geared toward VM instances that are not using persistent disks: with a non-persistent disk, a reboot would restore the default SSH settings.
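For reference, the password-authentication change tested above amounts to something like this on the VM (a sketch; the exact restart command varies by distribution):

# Enable password logins in sshd_config, then restart sshd to apply
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo service ssh restart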
I have a web application running on Tomcat 7 and MySQL, and now I want to deploy it to AWS.
The application needs to write files to disk (such as images uploaded by users).
Can someone help me by pointing out how to configure a good infrastructure in AWS for my needs?
I read this: http://aws.amazon.com/elasticbeanstalk/ ; I think what I need is an EC2 instance for running Tomcat and an Amazon RDS instance with MySQL.
Do I need something else for reading/writing files?
Do I need to change my code in some way in order to make it work on AWS?
Thanks in advance,
Loris
Elastic Beanstalk is a good way to get started with an application deployment at AWS. For persistent file storage you can use S3 or an EBS volume.
S3 allows you to read and write using Amazon's SDK/API. I am using this in a Java application running at AWS and it works pretty smoothly.
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html
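If you just want to sanity-check bucket access from the instance before wiring up the SDK, the AWS CLI exercises the same read/write path (bucket and file names are placeholders):

# Upload a file to the bucket, then fetch it back
aws s3 cp image.jpg s3://my-bucket/uploads/image.jpg
aws s3 cp s3://my-bucket/uploads/image.jpg /tmp/image.jpg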
It is also possible to mount S3 as a local filesystem via FUSE; you can read some interesting points in this answer:
How stable is s3fs to mount an Amazon S3 bucket as a local directory
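A minimal s3fs mount looks roughly like this (bucket name and mount point are placeholders, and a credentials file at ~/.passwd-s3fs is assumed):

# Mount the bucket as a local directory via FUSE
mkdir -p /mnt/s3
s3fs my-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs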
With EBS you can create a persistent storage volume attached to your EC2 node. Please note that EBS is a block-level storage device, so you'll need to format it before it's usable as a filesystem. EBS also helps protect you from data loss by letting you configure EBS snapshot backups to S3.
http://aws.amazon.com/ebs/details/
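Formatting and mounting a freshly attached volume takes only a few commands (the device name /dev/xvdf is a placeholder; check yours with lsblk):

# Create a filesystem on the new volume and mount it
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data
sudo mount /dev/xvdf /data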
-fred