Setting download directory to Cloud Storage in chromedriver in Cloud Function - google-cloud-functions

I'm trying to create a Cloud Function that accesses a website and downloads a CSV file to Cloud Storage.
I managed to access the site using headless Chromium and chromedriver.
In my local environment I can set the download directory like below:
options.add_experimental_option("prefs", {
"download.default_directory": download_dir,
"plugins.always_open_pdf_externally": True
})
where download_dir is like "/usr/USERID/tmp/"
How could I assign this value in a Cloud Function so that it points to the right Cloud Storage bucket?

As I understand, a GCS bucket cannot be mounted as a local drive in the runtime environment used for Cloud Functions.
Thus, you might need to download the source CSV file into the Cloud Function's memory and save it, for example, as a file in the "/tmp" directory.
Then, you can upload it from that location into a GCS bucket. A more detailed explanation of how to upload is provided here: Move file from /tmp folder to Google Cloud Storage bucket
Note: Cloud Functions have some restrictions, e.g. memory and timeout. Make sure that you allocated (during deployment) enough memory and time to process your CSV files.
In addition, make sure that the service account used by your Cloud Function has the relevant IAM roles on the GCS bucket under discussion.
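A minimal sketch of that flow, assuming the function drives headless Chromium through Selenium; the bucket and file names below are placeholders:

from selenium import webdriver
from google.cloud import storage

download_dir = "/tmp"  # the only writable directory in the Cloud Functions runtime

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": download_dir,
    "plugins.always_open_pdf_externally": True,
})

# ... drive the browser here so the CSV ends up in /tmp ...

def upload_to_gcs(local_path, bucket_name, blob_name):
    # Uses the function's service account via Application Default Credentials.
    client = storage.Client()
    client.bucket(bucket_name).blob(blob_name).upload_from_filename(local_path)

upload_to_gcs("/tmp/report.csv", "MY_BUCKET", "exports/report.csv")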

Related

How to use google cloud functions to import files from winscp? I also want to make triggers associated with this function which executes weekly

How to use google cloud functions to import files from winscp? I also want to make triggers associated with this function, which executes weekly to import these files. Please assist.
For a weekly trigger -> use Cloud Scheduler.
To download a file from an SSH server, use an SSH library to initiate the transfer from the Cloud Function to the SCP/SFTP server. I don't know your language, so I can't help with the specifics, but the general pattern is sketched below.
Keep in mind that the files you download are stored in memory in the /tmp directory (size your Cloud Functions instance accordingly) and, because it's in memory, the file is not persisted. If you need to persist it, think about putting it in persistent storage, like Cloud Storage.
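A hedged sketch of that pattern in Python (paramiko is one common SSH/SFTP library; the host, credentials, paths, and bucket name are placeholders):

import paramiko
from google.cloud import storage

def import_weekly_files(request):
    local_path = "/tmp/data.csv"  # in-memory, not persisted across invocations

    # Pull the file from the SSH/SFTP server.
    transport = paramiko.Transport(("SFTP_HOST", 22))
    transport.connect(username="USER", password="PASSWORD")
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.get("/remote/path/data.csv", local_path)
    sftp.close()
    transport.close()

    # Persist the file to a Cloud Storage bucket.
    bucket = storage.Client().bucket("MY_BUCKET")
    bucket.blob("imports/data.csv").upload_from_filename(local_path)
    return "ok"

Deploy this as an HTTP-triggered function and point a weekly Cloud Scheduler job at its URL.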

Setting up UNC paths with GCP FileStore

I'm really new to GCP.
At this time we have multiple MSSQL Servers that heavily use UNC paths. In trying to set up Filestore and mount it to a Windows VM, I cannot seem to get UNC paths to work at all in the Windows Server 2019 instance.
Does GCP support UNC paths from Filestore?
Yes, GCP supports UNC paths for Filestore.
The most common protocols for exporting file shares are Server Message Block (SMB) for Windows and Network File System (NFS) for Linux (and in some cases Windows).
You can map/mount an SMB share created with Cloud Volumes Service by using its uniform naming convention (UNC) path, which the UI displays.
This document has FAQs and suggestions on how SMB works.
Note:
Cloud Volumes Service doesn't let you use more than 10 characters for the cloud volumes SMB name.
A UNC path follows the pattern \\<server>\<sharename> (for example, \\cvssmb-d2e6.cvsdemo.internal\quirky-youthful-hermann).

Invoking gsutil or gcloud from a Google cloud function?

I have a Google Firebase app with a Cloud Functions back-end. I'm using the node.js 10 runtime which is Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, something that will not work in the Cloud Functions environment, as its file system is read-only with the exception of the /tmp folder.
You should consider using cloud-native storage systems like GCS, for which the Application Default Credentials are already set up. See the list of supported services here.
According to the official Cloud Filestore documentation:
Use Cloud Filestore to create fully managed NFS file servers on Google Cloud Platform (GCP) for use with applications running on Compute Engine virtual machine (VM) instances or Google Kubernetes Engine clusters.
You cannot mount Filestore on GCF.
Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function (see Writing Cloud Functions):
Google Cloud Functions can be written in Node.js, Python, and Go, and are executed in language-specific runtimes.
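Since gsutil is not available inside the function, the equivalent copy is done with the Cloud Storage client library instead. A minimal sketch in Python (the question uses Node.js, where the @google-cloud/storage package follows the same pattern); the destination bucket name is a placeholder:

from google.cloud import storage

client = storage.Client()

def on_upload(event, context):
    # Background function triggered when an object is finalized in the source bucket.
    src_bucket = client.bucket(event["bucket"])
    blob = src_bucket.blob(event["name"])

    # Copy the uploaded object to another bucket instead of shelling out to gsutil.
    dst_bucket = client.bucket("DESTINATION_BUCKET")
    src_bucket.copy_blob(blob, dst_bucket, new_name=event["name"])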

Can I map a drive letter to an Azure Storage fileshare from my Azure App Service (not VM)

I have an Azure App Service and an Azure Storage Account. I know there is a server/vm behind the app service, but I have not explicitly started a machine.
I'm trying to import data from an access database which will be regularly uploaded to a fileshare in my storage account. I'd like to use an Azure WebJob to do the work in the background.
I'm trying to use DAO to read the data:
string path = @"\\server\share\folder\database.mdb";  // UNC path to the Access file
DBEngine dbe = new DBEngine();
Database db = dbe.OpenDatabase(path);
DAO.Recordset rs = db.OpenRecordset("select * from ...");
This works when I run it locally, but when I try to run it in my WebJob accessing a fileshare in my storage account, it is not finding the file. I assume this is because DBEngine knows nothing of Azure storage account names and access keys, doesn't send them, and so Azure Storage doesn't respond.
So what I'd like to try is to see if I can map an Azure Storage Fileshare onto the server underlying my App Service. I've tried a number of different things, but have received variations of "Access Denied" each time. I have tried:
1) Running net use T: \\name.file.core.windows.net\azurefileshare /u:name key from the App Service console in the Azure Portal
2) Running net use from a process within my webjob
3) Invoking WNetAddConnection2 from within my webjob
Looks like the server is locked down tight. Does anyone have any ideas on how I might be able to map the fileshare onto the underlying server?
Many thanks
As far as I know, Azure Web Apps run in a sandbox, so we cannot map an Azure file share onto an Azure Web App; Azure File Storage is therefore not a good fit if you choose an Azure Web App. From my experience, the workarounds below may help. Hope this gives you some tips.
1) Use Azure File Storage, but choose an Azure VM or Cloud Service as the host service.
2) Still choose Azure Web App as the host service, but include the Access DB in the solution and upload it with the Web App.
3) Choose SQL Azure as the database instead. Here is an article that can help you migrate the Access database to SQL Azure.
In the end, as Jambor rightly says, the App Service VM is locked down tight.
However, it turns out that the App Service VM comes with some local temporary storage for the use of the various components running on the VM.
This is at D:\local\Temp\ and can be written to by a web job.
Interestingly, this is a logical folder on a different share/drive from D:\local, and the size of this additional storage depends on the App Service's scale.
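A sketch of that workaround: copy the .mdb from the file share down into D:\local\Temp with the Azure Files SDK, then open the local copy with DAO as before. Shown here in Python with the azure-storage-file-share package for brevity (the WebJob above is C#, where the .NET storage SDK offers the same operations); the connection string, share, and file names are placeholders.

from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=name;AccountKey=key",
    share_name="azurefileshare",
    file_path="folder/database.mdb",
)

# Copy the Access database into the App Service's writable temp area,
# then open that local copy with DBEngine/DAO instead of the UNC path.
local_copy = r"D:\local\Temp\database.mdb"
with open(local_copy, "wb") as handle:
    file_client.download_file().readinto(handle)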

Upload Image into AWS S3 bucket using Aws Lambda

I want some suggestions on uploading an image file to an S3 bucket using a Lambda function. I am able to create a bucket using a Lambda function but unable to upload a file to S3 with it. Is it possible? Can we upload local system files (image, text, etc.) to an S3 bucket using Lambda?
When I try to upload the file C:\users\images.jpg to S3 using a Lambda function, it shows me the error: Error: ENOENT, no such file or directory 'C:\Users\Images'.
Please suggest.
Thanks
You have to imagine where your code is running.
If you have a desktop application, you can access local files such as C:\users\images.jpg because the process has access to the local file system.
Your Lambda functions are maintained by AWS and run on Amazon's infrastructure, so they have no access to your machine's file system.
Also, in general you have to design your functions to be stateless:
Local file system access, child processes, and similar artifacts may not extend beyond the lifetime of the request, and any persistent state should be stored in Amazon S3, Amazon DynamoDB, or another Internet-available storage service.
Reference: AWS Lambda FAQs
So in your case I'd upload everything to S3 first, or create a background process that does this periodically. That way you can access them via Lambda functions but not directly from your local file system.
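For example, a hedged sketch of that flow with boto3 (the bucket and key names are placeholders; the error message suggests Node.js, where the aws-sdk exposes the same operations): a small script on the machine that actually has the file pushes it to S3, and the Lambda function then reads it from S3 rather than from C:\.

import boto3

s3 = boto3.client("s3")

def upload_from_local_machine():
    # Run this on the machine that actually has the file, not inside Lambda.
    s3.upload_file(r"C:\users\images.jpg", "MY_BUCKET", "uploads/images.jpg")

def handler(event, context):
    # Inside Lambda, read the object from S3 instead of a local path.
    obj = s3.get_object(Bucket="MY_BUCKET", Key="uploads/images.jpg")
    image_bytes = obj["Body"].read()
    return {"size": len(image_bytes)}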