I tried to upload a 500MB file to an S3 signed URL using this path:
#{AUTODESK_API_URL}/oss/v2/buckets/#{bucket_key}/objects/#{object_name}/signeds3upload?minutesExpiration=30&useAcceleration=true
It takes around 20 minutes. Is this normal?
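For reference, here is a minimal sketch of the direct-to-S3 flow that endpoint belongs to, as I understand the APS docs; the token, bucket key, and object name are placeholders. Requesting several signed part URLs and uploading the chunks (ideally in parallel) rather than issuing a single 500MB PUT is usually what brings the transfer time down.

# Hedged sketch of the APS signed-S3 upload flow; all credentials/names are placeholders.
import math
import os
import requests

AUTODESK_API_URL = "https://developer.api.autodesk.com"
CHUNK = 100 * 1024 * 1024  # 100MB parts; a 500MB file becomes 5 parts

def upload(path, bucket_key, object_name, token):
    base = f"{AUTODESK_API_URL}/oss/v2/buckets/{bucket_key}/objects/{object_name}/signeds3upload"
    headers = {"Authorization": f"Bearer {token}"}
    parts = math.ceil(os.path.getsize(path) / CHUNK)
    # 1. Ask OSS for one signed URL per part (expected response: uploadKey + urls[]).
    info = requests.get(base, headers=headers, params={
        "parts": parts, "minutesExpiration": 30, "useAcceleration": "true"}).json()
    # 2. PUT each chunk straight to S3; parallelising these PUTs is the usual speed-up.
    with open(path, "rb") as f:
        for url in info["urls"]:
            requests.put(url, data=f.read(CHUNK))
    # 3. Tell OSS the upload is complete.
    requests.post(base, headers=headers, json={"uploadKey": info["uploadKey"]})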
I am using a Google Cloud Function to send files from Cloud Storage to SharePoint using the SharePoint REST API. This works great except for large files, because the function runs out of memory (set to the 2GB maximum).
Currently, the Cloud Function is triggered when a new file arrives in the storage bucket. It downloads the file into the /tmp directory and then makes a POST request to the SharePoint endpoint.
Is there a way to bypass downloading the file to /tmp? Can a POST request be made by specifying a remote file location? Other recommendations?
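One possible direction, sketched below under assumptions: google-cloud-storage's blob.open("rb") returns a file-like reader, and requests streams file-like request bodies, so the file never has to land in /tmp or fit in memory. The SharePoint URL and token are placeholders, and SharePoint may still require its own chunked-upload API (StartUpload/ContinueUpload/FinishUpload) for very large files.

# Hedged sketch: stream a GCS object into a POST without staging it in /tmp.
import requests
from google.cloud import storage

SHAREPOINT_UPLOAD_URL = "https://example.sharepoint.com/..."  # placeholder endpoint
TOKEN = "..."  # placeholder auth token

def gcs_to_sharepoint(event, context):  # background Cloud Function entry point
    blob = storage.Client().bucket(event["bucket"]).blob(event["name"])
    with blob.open("rb") as reader:  # file-like reader; nothing is written to disk
        requests.post(
            SHAREPOINT_UPLOAD_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            data=reader,  # requests streams file-like bodies in chunks
        )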
I'm using Colab with a mounted Google Drive to unpack zips and consolidate the CSVs that come out of them. But this, for example:
import os
import zipfile

for z in zip_list:
    with zipfile.ZipFile(z) as zf:  # one handle, closed reliably
        zf.extractall()
    os.remove(z)
runs about 60x slower in Colab/Drive compared to when I run it on my local computer. Why is this so much slower and how can I fix it?
A typical strategy is to copy the .zip file from Drive to the local disk first.
Unzipping involves lots of small operations like file creation, which are much faster on a local disk than on Drive, which is remote.
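For example, assuming the standard /content/drive mount point:

import os
import shutil
import zipfile

for z in zip_list:
    local = shutil.copy(z, "/content/")       # copy from Drive to the Colab VM's local disk
    with zipfile.ZipFile(local) as zf:
        zf.extractall("/content/extracted/")  # the many small writes now hit local disk
    os.remove(local)
    os.remove(z)                              # remove the original on Drive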
I'm setting up Azure File Sync to sync files to Azure file shares. How do I check the directory statistics?
I tried using "Storage Explorer (preview)" in the Azure Portal to get directory statistics for the file shares, but there are so many files in the shares that it ran for more than five hours and then hit an error. I tried three times and got the same error each time.
I expect to get the directory statistics for the Azure file share separately, but Storage Explorer keeps ending with an error.
"Storage Explorer (preview)" in the portal is, as the name says, still a preview. You could download the desktop Azure Storage Explorer, connect it to your storage account, and try from there.
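If the GUI keeps timing out, a scripted walk may also work. A minimal sketch, assuming the azure-storage-file-share Python SDK; the connection string and share name are placeholders:

# Hedged sketch: tally file count and total bytes under each path of a share.
from azure.storage.fileshare import ShareClient

CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."  # placeholder
share = ShareClient.from_connection_string(CONN_STR, share_name="myshare")  # placeholder

def walk(path=""):
    count, size = 0, 0
    for item in share.get_directory_client(path).list_directories_and_files():
        child = f"{path}/{item['name']}" if path else item["name"]
        if item["is_directory"]:
            c, s = walk(child)
            count, size = count + c, size + s
        else:
            count, size = count + 1, size + item["size"]
    return count, size

print(walk())  # (total files, total bytes); walk("somedir") scopes to one directory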
I am new to Cloud Foundry. I am currently working on a requirement where I have to upload a CSV file (via a JSP UI) to a service deployed on Cloud Foundry and persist its data in the service.
The issue is that from the UI I only get the local path of the CSV file, and when I try to parse the CSV via this path, the file is not recognized. I guess the reason is that the service is already deployed on CF, so it cannot see paths on my local machine.
Can you please let me know how I can parse this CSV file from the local machine, and where to parse it?
Thanks in advance!
There is nothing specific to Cloud Foundry about how you would receive an uploaded file in a web application. Since you mentioned using Java, I would suggest checking out this post:
How to upload files to server using JSP/Servlet?
The only thing you need to keep in mind that's specific to Cloud Foundry is that its filesystem is ephemeral. It behaves like a normal filesystem and you can write files to it; however, its lifetime is tied to the lifetime of your application instance. That means restarts, restages, or anything else that causes the application container to be recreated will destroy the filesystem.
In short, you can use the filesystem for caching or temporary files, but anything you want to retain should be stored elsewhere.
https://docs.cloudfoundry.org/devguide/deploy-apps/prepare-to-deploy.html#filesystem
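The pattern itself is language-agnostic (the linked post covers the Java/Servlet specifics): parse the uploaded bytes out of the request, never a client-side path, and persist the rows somewhere durable. A minimal sketch in Python/Flask, with save_to_database as a placeholder:

# Hedged sketch: the server parses the uploaded CSV from the request body itself.
import csv
import io
from flask import Flask, request

app = Flask(__name__)

def save_to_database(rows):
    pass  # placeholder: persist to a bound service (e.g. a database), not the local disk

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["file"]  # the browser sends the file's bytes, not its local path
    rows = list(csv.reader(io.TextIOWrapper(f.stream, encoding="utf-8")))
    save_to_database(rows)
    return f"stored {len(rows)} rows"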
We got this alert; it appears that the controller server was unable to transfer a file to our production server(s) because the file was too large:
exception: tooltwist.fip.FipException: File is too large to be downloaded: tomcat/bin/synnexESDClient.2013-04-30.log
Checking the controller's image, the file's size is 84MB:
-rw-rw-r--. 1 controller controller 84M Apr 30 23:59 synnexESDClient.2013-04-30.log
What is the maximum file size the FIP service can transfer from the controller to the production server(s)? Or is there a config file for the FIP service that we can check?
I'm not certain of the maximum size for FIP file transfers, but it's probably a power of 2 (32MB, 64MB, etc.).
In any case, the purpose of FIP (File Installation Protocol) is to incrementally deploy an application to production servers. Including large log files in the software distribution process is likely to jam up your website updating process, as the distribution is installed on a dozen or more servers (especially when some are on the other side of the country).
First, you might want to consider whether you really need to deploy a log file from the Controller to production servers (what is the Controller doing that creates that log file, and why do you want it on the production servers?).
If you really do need to copy that file to the production servers, I suggest you do it independently of the software and web-file installation process. To do this, add the log file to FIP's exclusion list and then copy it by hand, for example with a small script like the sketch below.
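A minimal sketch of that out-of-band copy; the hostnames and destination path are placeholders:

# Hedged sketch: push the excluded log to each production server outside of FIP.
import subprocess

SERVERS = ["prod1.example.com", "prod2.example.com"]  # placeholder hostnames
LOG = "tomcat/bin/synnexESDClient.2013-04-30.log"

for host in SERVERS:
    subprocess.run(["rsync", "-avz", LOG, f"{host}:/var/log/synnex/"], check=True)  # placeholder destination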