Is there a way to transfer big files (over 2 GB) between Google Cloud VM instances without downloading and then uploading them?
I'm trying to create a Cloud Function that accesses a website and downloads a CSV file to Cloud Storage.
I managed to access the site using headless Chromium and ChromeDriver.
In my local environment I can set the download directory like below:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": download_dir,
    "plugins.always_open_pdf_externally": True
})
where download_dir is like "/usr/USERID/tmp/"
How could I assign that value in a Cloud Function so that it points to the right Cloud Storage bucket?
As I understand it, a GCS bucket cannot be mounted as a local drive in the runtime environment used by Cloud Functions.
Thus, you might need to download the source CSV file into the Cloud Function's memory and save it, for example, as a file in the "/tmp" directory.
Then you can upload it from that location into a GCS bucket. A more detailed explanation of how to upload is provided here: Move file from /tmp folder to Google Cloud Storage bucket
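For the upload step, a minimal sketch using the google-cloud-storage client library (the bucket name, file paths and helper function below are placeholders):

from google.cloud import storage

def upload_tmp_file_to_gcs(bucket_name, source_path, destination_blob):
    """Upload a file that was saved under /tmp to a GCS bucket."""
    client = storage.Client()  # runs as the Cloud Function's service account
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(destination_blob)
    blob.upload_from_filename(source_path)

# e.g. after Chrome has finished downloading into /tmp:
upload_tmp_file_to_gcs("my-bucket", "/tmp/report.csv", "downloads/report.csv")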
Note: Cloud Functions have some restrictions, e.g. on memory and timeout. Make sure that you allocate (during deployment) enough memory and time to process your CSV files.
In addition, make sure that the service account used by your Cloud Function has the relevant IAM roles on the GCS bucket in question.
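For example, granting write access on the bucket could look like this (the service account and bucket names are placeholders):

gsutil iam ch \
    serviceAccount:my-function-sa@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://my-bucket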
My application uses an embedded database that writes to file-system storage rather than a standard database server. I want to know how to set up my OpenShift app so that it has persistent storage that grows automatically.
Currently, my application database writes to a mount path in the file system (in a standard Linux environment), e.g. /mnt/db. If I moved it to OpenShift, how could I get persistent storage that scales as the data grows?
From the OpenShift pricing page: https://www.openshift.com/products/online/
It shows storage from 2 GB to 150 GB; does that mean this is a hard limit? What if my application data grows beyond 1 TB? Would that not work with the current Red Hat-hosted version of OpenShift?
I'm pretty new to Google Compute Engine. I have 5 types of machines and, let's say, 10 instances of each type. I don't want to do load balancing on them, so I can't use managed instance groups.
Is there any 'smarter' way to copy my files to those VMs and run my software on them remotely and automatically, rather than doing it manually?
Use basic startup scripts with gcloud/gsutil: upload your files to Google Cloud Storage once, then have each VM's startup script copy them down and launch your software.
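A rough sketch of that approach, assuming your files live in a bucket called my-artifacts-bucket and your software is started by run.sh (both placeholders):

#!/bin/bash
# startup.sh - executed automatically on each VM at boot.
# Pull the application files down from Cloud Storage and start the software.
gsutil -m cp -r gs://my-artifacts-bucket/app /opt/app
chmod +x /opt/app/run.sh
/opt/app/run.sh &

The script can then be attached to an existing VM with gcloud and will run on the next boot:

gcloud compute instances add-metadata worker-1 \
    --zone us-central1-a \
    --metadata-from-file startup-script=startup.sh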
I have a VM instance on Google Compute Engine which runs a Python web application. Now I want to have a snapshot of the online VM deployed on my local server. Is it feasible to do that? If yes, how do I proceed?
I am running an n1-standard-1 (1 vCPU, 3.75 GB memory) Compute Engine instance. Around 80 users of my Android app are online right now, the instance's CPU utilisation is at 99%, and my app has become less responsive. Kindly suggest a workaround. If I need to upgrade, can I do that with the same instance, or does a new instance need to be created?
Since your app is already running and users are connecting to it, you probably don't want to go through the following process:
shut down the VM instance, keeping the boot disk and other disks
boot a more powerful instance, using the boot disk from step (1)
attach and mount any additional disks, if applicable
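For completeness, that kind of in-place resize can also be done with gcloud's set-machine-type, but it requires stopping the instance, which means downtime (the instance name, zone and machine type below are placeholders):

gcloud compute instances stop my-app-vm --zone us-central1-a
gcloud compute instances set-machine-type my-app-vm \
    --zone us-central1-a --machine-type n1-standard-4
gcloud compute instances start my-app-vm --zone us-central1-a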
Instead, you might want to do the following:
create an additional VM instance with similar software/configuration
create a load balancer and add both the original and the new VM to it as backends
change your DNS name to point to the load balancer IP instead of the original VM instance
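A condensed gcloud sketch of those steps, assuming an HTTP app serving on port 80 and placeholder resource names throughout:

# Put both VMs into an unmanaged instance group (no autoscaling involved).
gcloud compute instance-groups unmanaged create web-group --zone us-central1-a
gcloud compute instance-groups unmanaged add-instances web-group \
    --zone us-central1-a --instances original-vm,new-vm
gcloud compute instance-groups unmanaged set-named-ports web-group \
    --zone us-central1-a --named-ports http:80

# Health check and backend service that point at the group.
gcloud compute health-checks create http web-hc --port 80
gcloud compute backend-services create web-backend \
    --global --protocol HTTP --port-name http --health-checks web-hc
gcloud compute backend-services add-backend web-backend --global \
    --instance-group web-group --instance-group-zone us-central1-a

# URL map, proxy and forwarding rule form the HTTP load balancer frontend;
# point your DNS record at the forwarding rule's external IP.
gcloud compute url-maps create web-map --default-service web-backend
gcloud compute target-http-proxies create web-proxy --url-map web-map
gcloud compute forwarding-rules create web-rule --global \
    --target-http-proxy web-proxy --ports 80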
Now your users' requests will be distributed across the VMs behind the load balancer, and you can add more VMs if your traffic increases.
You did not describe your application in detail, so it's unclear whether each VM has local state (e.g., runs its own database) or whether there is a database running externally. You will still need to figure out how to manage stateful systems such as the database or user-uploaded data shared by all the VM instances, which is hard to advise on given the limited information in your question.