How to get Azure File Shares Directory Statistics?

I'm setting up Azure File Sync to sync files to Azure file shares. How do I check the directory statistics?
I tried using "Storage Explorer (preview)" in the Azure Portal to get the file share directory statistics, but there are so many files under the file shares that it took more than five hours and then failed with an error. I tried three times and hit the same error every time.
I expect to get the Azure file share directory statistics separately, but Storage Explorer always ends with an error.

Storage Explorer (preview) in the portal is, as the name says, still a preview. You could download the desktop Azure Storage Explorer to your local machine, connect to your storage account, and try again from there.
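If the desktop tool also struggles with the number of files, one workaround is to gather the statistics yourself. A minimal sketch, assuming you can mount the share over SMB (port 445 reachable) from a Linux machine; <storage-account>, <share-name>, and <storage-key> are placeholders for your own values:
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /mnt/myshare -o vers=3.0,username=<storage-account>,password=<storage-key>
find /mnt/myshare -type f | wc -l   # number of files
du -sh /mnt/myshare                 # total size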

Related

Download Directory from Google Cloud Compute Engine

I am trying to download a full recursive directory from Google Cloud Platform using the trial edition of the platform. I assumed that the "Download File" option under the SSH dropdown settings would work, but it does not, providing only a "Failed" message on the window.
Upon trying to look up the answer, I found people mentioning downloading files from storage buckets and such - that is not what this is and to my knowledge I don't have access to those on a trial edition of GCP. I have a compute engine running and can SSH into it and I am looking to download a full recursive directory from it.
Thank you for any advice that you can offer me!
If you already have SSH access, you can use the scp command to copy files (assuming it is available on the system to which you want to copy the files).
scp -r username@server:/path/to/your/directory /local/destination
Another option is SFTP if scp is not available; clients are available for all the common operating systems.
Either of these options will transfer the files over SSH without any additional configuration required on the server (the compute instance, in your case).
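If you have the Cloud SDK installed, gcloud can also wrap the scp call and take care of key management; this sketch assumes an instance named my-instance in zone us-central1-a:
gcloud compute scp --recurse my-instance:/path/to/your/directory /local/destination --zone us-central1-a
OpenSSH's sftp client supports recursive downloads as well:
sftp -r username@server:/path/to/your/directory /local/destination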

How to upload a CSV file in a microservice deployed in cloud foundry

I am new to Cloud Foundry. I am currently working on a requirement where I have to upload a CSV file (via a JSP UI) to a service deployed in Cloud Foundry and persist its data in the service.
The issue is that from the UI I only get the local path of the CSV file, and when I try to parse the CSV via this path, the file is not recognized. I guess the reason is that the service is already deployed in CF, so it cannot see a path on my local machine.
Can you please let me know how I can get this CSV file to the service and where I should parse it.
Thanks in Advance!
There is nothing specific to Cloud Foundry about how you would receive an uploaded file in a web application. Since you mentioned using Java, I would suggest checking out this post.
How to upload files to server using JSP/Servlet?
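The key point is that the browser never sends a usable client-side path: the file's bytes travel in the multipart/form-data body of the HTTP request, and your servlet reads them from there. Once deployed, you can sanity-check the endpoint with curl (the /upload route and app URL here are just hypothetical examples):
curl -F "file=@data.csv" https://your-app.example.com/upload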
The only thing you need to keep in mind that's specific to Cloud Foundry is that its filesystem is ephemeral. It behaves like a normal filesystem and you can write files to it; however, the lifetime of the filesystem equals the lifetime of your application instance. That means restarts, restages, or anything else that causes the application container to be recreated will destroy the filesystem.
In short, you can use the file system for caching or temporary files but anything that you want to retain should be stored elsewhere.
https://docs.cloudfoundry.org/devguide/deploy-apps/prepare-to-deploy.html#filesystem

Heroku - retrieve remote changes of JSON file

My site, hosted on Heroku, takes user input and updates a JSON file on the server. I probably should have stored this in a database; it's a better solution. But is there any way I can download the most up-to-date JSON files from the Heroku server?
Heroku doesn't offer any mechanism to commit files directly on the server, or to copy files from the server. One of the main reasons is its ephemeral filesystem:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.
If the file is accessible over the web you might be able to download it from your browser, but whatever file you created may not be there anymore. You're right that a database is a better choice.
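For example, if your app happens to serve the file at a route (the /data.json path and app URL here are hypothetical), you could try to pull it down before the next dyno restart discards it:
curl -o data.json https://your-app.herokuapp.com/data.json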

Open folder vs create new project from existing files, located under shared network drive in PhpStorm

It's not clear to me why I should use the option in PhpStorm to create a new project from existing files instead of just opening a folder and declaring it the project directory.
I have a web server installed and I can access its root via a shared network drive. I can just open a folder in PhpStorm and declare it the root; it will generate a PhpStorm project in the given directory.
But there is also an option to create a new project from existing files (located under the shared network drive). My best guess is that this option is the way to go. Is this true, and if so, why? Or if it doesn't matter, why not?
There will be several people using the same shared drive to work on different projects in the webroot.
You can, of course, create a project on a mounted network drive via File/Open, but note that this is not officially supported. All IDE functionality is based on the index of the project files, which PhpStorm builds when the project is loaded and updates on the fly as you edit your code. To provide efficient coding assistance, PhpStorm needs to re-index code fast, which requires fast access to project files and cache storage. The latter can be ensured only for local files, that is, files that are stored on your hard disk and accessible through the file system. Sure, mounts typically sit on a fast network, but one day some hiccup happens, a user sends a stack trace, and all we see in it is a blocking I/O call.
So the suggested approach is to download the files to your local drive and use a deployment configuration to synchronize local files with the remote server. See https://confluence.jetbrains.com/display/PhpStorm/Sync+changes+and+automatic+upload+to+a+deployment+server+in+PhpStorm

Accessing of file from other system in Apache Drill

I am using the latest version of Drill (1.5) in embedded mode locally.
I have some CSV files on another system (PC2), which has its own IP address. I want to run a search query from my own system (PC1) against those CSV files stored on the other system (PC2). PC1 has Drill running through cmd in embedded mode.
Is there any way to get data from, or search, files (CSV, PSV, etc.) on the other PC (a remote machine) in embedded mode against the local filesystem (not HDFS)?
You could try to mount the drive from the remote PC as a local drive and then run your Drill queries against it. I did that with a Linux PC.
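A minimal sketch on Linux, assuming PC2 exposes the folder as an SMB share named shared and has the IP 192.168.1.20 (both are placeholders for your setup):
sudo mkdir -p /mnt/pc2
sudo mount -t cifs //192.168.1.20/shared /mnt/pc2 -o username=<user>
Then, in the Drill shell, the mounted file can be queried through the dfs storage plugin:
SELECT * FROM dfs.`/mnt/pc2/data.csv` LIMIT 10;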
But this is not the best approach for querying big files. It is better to set up Drill in distributed mode and run a drillbit on the machine where you store your data.