Store files in database with DBIC using Catalyst - MySQL

I'm using the Perl Catalyst framework to build an application that needs to store several files in a MySQL database (among other things). I want to store the name, path, extension, etc. of the files so I can retrieve them later, because they are supposed to be accessible from the application (e.g. a PDF document uploaded for someone must be available for download later). Can I do this? I found several ways to do it in PHP, but none for Perl. Any ideas?
EDIT
I know I can access some information using Catalyst::Request::Upload. I used this in the past for BLOB storage, but I don't know how to get file information, nor where Catalyst stores its tmp files.
So, basically, the questions that arise when trying to do this are:
How do I know where my files are being stored once I submit them?
How do I copy these files (which I assume go to a tmp folder somewhere) to a folder on my computer/server?
How do I retrieve these files once I have them stored?
EDIT 2
I've checked the documentation for Catalyst::Request::Upload again (http://search.cpan.org/~jjnapiork/Catalyst-Runtime-5.90114/lib/Catalyst/Request/Upload.pm) and found out where my files are being stored and how to copy them to a new, non-tmp location. The only question that remains:
How do I generate a download link for these files?

The solution was pretty straightforward.
First, make sure your 'tmp' folder is configured in the Catalyst app file (e.g. MyApp.pm).
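For reference, a minimal sketch of that configuration using Catalyst's uploadtmp option (the directory below is just an example; use one your app can write to):
# In lib/MyApp.pm, alongside the rest of your configuration
__PACKAGE__->config(
    name      => 'MyApp',
    uploadtmp => '/home/myapp/tmp',   # example path
);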
Now, use Catalyst::Request::Upload to get an object for the uploaded file. Sort of...
my $upload = $req->upload('input_field_name');
Now make sure you get all the data you want to store about the file. I, personally, got just the filename, MIME type and size.
my $filename = $upload->filename;
my $size = $upload->size;
my $type = $upload->type;
Store this information into the database.
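With DBIC, that could look something like the following sketch (the DB::File model name and its columns are assumptions; substitute your own result class):
# Assumes a File result class with filename/size/mime_type columns
$c->model('DB::File')->create({
    filename  => $filename,
    size      => $size,
    mime_type => $type,
});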
Now, create a folder within the public content of the page to copy the files to, and perform the copy like this:
$upload->copy_to('path/to/the/public/folder');
To retrieve the files, just build a link from the base URL of the public folder and the filename you stored in the database.
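In a controller, that can be a one-liner along these lines (a sketch; the static/uploads path is an assumption, matching wherever you copied the files above):
# $row is the DBIC row stored earlier; the path is an example
my $download_url = $c->uri_for('/static/uploads/' . $row->filename);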
Hope it helps someone... it was pretty obvious in hindsight, but it cracked my head a little.

Related

Flutter render downloaded html

I have an e-learning app in Flutter which can render HTML files stored online. Now I want to download these files to assets, so they may be accessed offline. For this, I will download a zipped file with the entire HTML folder and unzip it.
The problem is that the HTML has many subfolders, each with its own assets, which I would have to declare in pubspec.yaml in order to access, but these downloadable HTMLs are constantly being added (every new course has new files).
I see a few ways to solve the problem:
1. Somehow declare access to subfolders in pubspec.yaml. As far as I know, this cannot be done.
2. Update the folder access for the installed app dynamically. As far as I know, this cannot be done.
3. Read the HTML file without unzipping it. I don't know if this is doable (I'm using webview_flutter_plus to render) and whether it would allow access to files in folders inside the zip without declaring them in the .yaml.
4. Pre-load empty folders inside assets that mimic the HTML folder structure, declare them in the .yaml, and then unzip and read the HTMLs from these folders. I would create some 100 of them in order to accommodate a large number of course downloads. I believe this method would work, but it seems very cumbersome and inelegant.
So my questions are:
Would any of methods 1-3 work, and if so, how?
Would method 4 work?
Is it possible to reference folders in the .yaml file without them existing? That would make method 4 far easier.
Is there any other way to accomplish this? I cannot change the language, since the app is months along, but plugins are fair game.
Thanks in advance!

How can I save two files with the same name inside the same folder, without renaming either, using PHP?

How can I save two files with the same file name in the same folder, without renaming either one, using PHP?
For instance: one user uploads an audio file named "first.mp3", and another user uploads a different file also named "first.mp3". I want to save these two files without renaming either, so that when people download the audio from the front end, the name does not change.
I could do this by concatenating a random number to differentiate the files, but I want to avoid that renaming approach.
Should I save each file inside a unique folder and save the file names to the database? That method would create too many folders, though, which I don't think is appropriate.
You cannot have two files with the same name in the same folder.
You would either have to add a random string to the end of each filename, as you suggest, or save each user's files in a directory allocated to their account.
Saving multiple files with the same name within the same folder is just not possible.
I'd opt for a strategy that involves saving the original filename somewhere (in a database, for example) along with the name/path of the actual file. When the user downloads the file (presumably through a web app of some sort), you can set the name of the file via headers with your language of choice.
You could even rename the files to something completely random when they're uploaded so you can have them all in one folder - as long as you store the original filename somewhere, you can always set it before you serve it back to the end user.
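To illustrate the header trick concretely, here is a minimal sketch in Perl, the language used elsewhere on this page (in PHP the equivalent is a header('Content-Disposition: ...') call); the paths and names are made up:
#!/usr/bin/perl
# Sketch: serve a stored file back under its original name.
# $stored_path and $original_name would come from your database.
use strict;
use warnings;

my $stored_path   = '/var/uploads/8f3a2c.mp3';   # example values
my $original_name = 'first.mp3';

open my $fh, '<:raw', $stored_path or die "open: $!";
print "Content-Type: audio/mpeg\r\n";
print "Content-Disposition: attachment; filename=\"$original_name\"\r\n\r\n";
binmode STDOUT;
print while <$fh>;
close $fh;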

Is there any way to fill in Sharepoint entries via parsing text file?

My workplace has a whole bunch of unannotated .zip files that need to be uploaded to the new file server (Windows). I've used Perl to parse through the Excel files within the .zip files to create an annotation.txt file for each .zip file that contains information about it. I have thousands of zip files and do not want to manually enter information for each entry if there's a way to automate it. I am proficient in Perl and MySQL, and I am wondering if there is any way to use these skills to port this information into the Microsoft SharePoint website.
Thank you in advance for any advice or suggestions.
There are many, many ways to meet your requirement.
You could write an event receiver to parse the files once uploaded and set metadata.
A better approach for your use case might be to write a .NET based console application that references Microsoft.SharePoint.Client, then upload your files using the client-side object model (CSOM) and set the metadata during that process, as outlined here: Upload a document to a SharePoint list from Client Side Object Model
There are also REST and ASMX web services that you could call from a non-.NET process.
Plenty of options, pick the one that fits your needs and skills best.
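Given your Perl background, the REST route may be the most direct. Below is a rough sketch with LWP::UserAgent; the site URL, library name, and filenames are all assumptions, and authentication (NTLM, forms-based, or OAuth, depending on how your farm is configured) is omitted entirely:
use strict;
use warnings;
use LWP::UserAgent;

# Hypothetical site and file; adjust to your environment.
my $site = 'https://sharepoint.example.com/sites/archive';
my $file = 'annotations/batch01.zip';

# Read the file to upload as raw bytes.
open my $fh, '<:raw', $file or die "open $file: $!";
my $data = do { local $/; <$fh> };
close $fh;

# Upload into the "Documents" library via the SharePoint REST endpoint.
# Authentication headers are omitted and depend on your setup.
my $ua  = LWP::UserAgent->new;
my $res = $ua->post(
    "$site/_api/web/GetFolderByServerRelativeUrl('Documents')"
        . "/Files/add(url='batch01.zip',overwrite=true)",
    'Content-Type' => 'application/octet-stream',
    Content        => $data,
);
die $res->status_line unless $res->is_success;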

How to handle uploading html content to an AppEngine application?

I would like to allow my users to upload HTML content to my AppEngine web app. However, if I am using the Blobstore to upload all the files (HTML files, CSS files, images, etc.), this causes a problem: all the links to other files (pages, resources) will not work.
I see two possibilities, but neither of them is very pretty, and I would like to avoid using them:
Go over all the links in the html files and change them to the relevant blob key.
Save a mapping between a file and a blob key, catch all the redirections and serve the blobs (could cause problems with same name files).
How can I solve this elegantly without having to go over and change my user's files?
Because App Engine runs your content on multiple servers, you are not able to write to the filesystem. What you could do is ask them to upload a zip file containing their HTML, CSS, JS, images, and so on. Python's zipfile module is available in App Engine, so you can unzip these files and store them individually. This way, you know the directory structure of the zip, which allows you to create a mapping of relative paths to the content in the blobstore. I don't have enough experience with zipfile to write a full example here; I hope someone more experienced can edit my answer, or create a new one with an example.
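As a language-agnostic illustration of the path-mapping idea, here is a sketch in Perl with Archive::Zip (Perl being the language used elsewhere on this page; on App Engine, Python's zipfile module plays the same role, and the blob-store write itself is not shown):
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES);

# Build a map from each relative path in the zip to its content,
# which could then be stored in a blob store under those keys.
my $zip = Archive::Zip->new;
$zip->read('site.zip') == AZ_OK or die 'failed to read zip';

my %content_for;
for my $name ($zip->memberNames) {
    next if $name =~ m{/$};               # skip directory entries
    $content_for{$name} = $zip->contents($name);
}
# A request for "css/style.css" can now be served from
# $content_for{'css/style.css'}.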
Saving a mapping is the best option here. You'll need to identify a group of files in some way, since multiple users may upload a file with the same name, then associate unique pathnames with each file in that group. You can use key names to make it a simple datastore get to find the blob associated with a given path. No redirects are required - just use the standard Blobstore serving approach of setting the blobstore header to have App Engine serve the blob to the user.
Another option is to upload a zip, as Frederik suggests. There's no need to unpack and store the files individually, though - you can serve them directly out of the zip in blobstore, as this demo app does.

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually store the pics in the db; I have heard horror stories about putting binary files into databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using: a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
It sounds like your upload handling is already complete: you take the upload, read it in as a string, and toss it into MySQL, much like reading any file in as a string. However, since CGI hands you a filehandle rather than a filename, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the upload's param() value, and tmpFileName() will help you deal with the uploaded file's source. Where uploads are stashed after the remote client and the CGI are done is usually not permanent, and should at minimum be treated as volatile.
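A sketch of what that looks like with CGI.pm (the 'photo' field name is an example):
use strict;
use warnings;
use CGI;

my $q = CGI->new;

# 'photo' is an example field name from the upload form.
my $filename = $q->param('photo');             # client-supplied name
my $fh       = $q->upload('photo');            # filehandle to the contents
my $tmpfile  = $q->tmpFileName($filename);     # where CGI.pm stashed it on disk
my $mime     = $q->uploadInfo($filename)->{'Content-Type'};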
You've got a bunch of already-uploaded files that need to be added to the db? It should be trivial to dash off a one-off script to loop through all the files and insert the details into the DB. If they're all in one spot, then a simple opendir()/readdir() loop would catch them all; otherwise you can build a list of file paths and loop over that.
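A sketch of that one-off script, assuming a files table with location and name columns (the directory, database name, and credentials are examples):
use strict;
use warnings;
use DBI;

my $dir = '/var/www/uploads';   # example location
my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'pass',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO files (location, name) VALUES (?, ?)');

opendir my $dh, $dir or die "opendir $dir: $!";
for my $name (readdir $dh) {
    next unless -f "$dir/$name";   # skip . and .. and subdirectories
    $sth->execute("$dir/$name", $name);
}
closedir $dh;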
If you're talking about recording new uploads on the server, then it would be something along these lines:
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
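A condensed sketch of that flow with CGI.pm and DBI (the table, columns, field name, and upload directory are all examples):
use strict;
use warnings;
use CGI;
use DBI;
use File::Copy qw(move);

my $q   = CGI->new;
my $fn  = $q->param('upload');                # example field name
my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 0 });

eval {
    # 1. record the file's details inside a transaction
    $dbh->do('INSERT INTO files (name, size, mime) VALUES (?, ?, ?)',
             undef, $fn, -s $q->tmpFileName($fn),
             $q->uploadInfo($fn)->{'Content-Type'});

    # 2. use the new row's ID as the on-disk filename
    my $id = $dbh->last_insert_id(undef, undef, 'files', 'id');
    move($q->tmpFileName($fn), "/var/uploads/$id")
        or die "move failed: $!";

    $dbh->commit;                             # 3. everything went fine
};
if ($@) {
    $dbh->rollback;                           # undo the insert on any failure
    die $@;
}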