In a package where I harvest all mail attachments (example found here: CozyRoc documentation), I would like to read each attachment's filename and, based on it, store the attachments in different folders.
Does anyone know if and how I can do this using CozyRoc?
It doesn't look like there's an easy way to extract the attachment file names from the CozyRoc email object. What I would do is save all attachments to a single "staging" folder, then create a For Each Loop to loop through all the files in that staging folder, and then use File System tasks to move the files.
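If it turns out to be easier to do the filename-based routing outside of SSIS (for example from an Execute Process Task), the same staging-folder idea can also be a short standalone script. A rough sketch in Perl; the folder names and patterns are hypothetical placeholders, not anything CozyRoc provides:

use strict;
use warnings;
use File::Copy qw(move);

my $staging = 'C:/attachments/staging';            # hypothetical staging folder
my @routes  = (
    [ qr/\.pdf$/i   => 'C:/attachments/pdf'      ],
    [ qr/^invoice/i => 'C:/attachments/invoices' ],
);
my $fallback = 'C:/attachments/other';

opendir my $dh, $staging or die "Cannot open $staging: $!";
for my $file (grep { -f "$staging/$_" } readdir $dh) {
    # Pick the first destination whose pattern matches the filename.
    my $dest = $fallback;
    for my $route (@routes) {
        if ($file =~ $route->[0]) { $dest = $route->[1]; last; }
    }
    move("$staging/$file", "$dest/$file")
        or warn "Could not move $file to $dest: $!";
}
closedir $dh;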
How can I save two files with the same file name in the same folder, without renaming either one, using PHP?
For instance: one user uploads an audio file named "first.mp3", and another user uploads a different file also named "first.mp3". I want to save both files without renaming either, so that when people download the audio from the front end, the name does not change.
I could do this by appending a random number to differentiate the files, but I want to avoid that kind of renaming.
Should I save each file inside a unique folder and store the file names in the database? That approach would create a lot of folders, which doesn't seem appropriate.
You cannot have two files with the same name in the same folder.
You would either have to add a random string to the end of each filename, as you suggest, or save each user's files in a directory allocated to their account.
Regards,
Leslie
Saving multiple files with the same name within the same folder is just not possible.
I'd opt for a strategy that involves saving the original filename somewhere (in a database, for example) along with the name/path of the actual file. When the user downloads the file (presumably through a web app of some sort), you can set the name of the file via headers with your language of choice.
You could even rename the files to something completely random when they're uploaded so you can have them all in one folder - as long as you store the original filename somewhere, you can always set it before you serve it back to the end user.
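As a rough illustration of that header trick (sketched in Perl rather than PHP, since Perl is the language used in the later questions here; the table layout, column names, and connection details are made up): store the file on disk under any unique name you like, keep the original name in the database, and send the file back with a Content-Disposition header so the browser sees the original name.

#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use DBI;

my $q  = CGI->new;
my $id = $q->param('id');

my $dbh = DBI->connect('DBI:mysql:database=uploads', 'user', 'password',
                       { RaiseError => 1 });
my ($orig_name, $stored_path) = $dbh->selectrow_array(
    'SELECT original_name, stored_path FROM files WHERE id = ?', undef, $id);

# Serve the stored file under its original name, whatever it is called on disk.
open my $fh, '<:raw', $stored_path or die "Cannot open $stored_path: $!";
print $q->header(-type => 'application/octet-stream', -attachment => $orig_name);
binmode STDOUT;
print do { local $/; <$fh> };
close $fh;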
I'm using the Perl Catalyst framework to build an application that needs to store several files in a MySQL database (among other things). I want to store the name, path, extension, etc. of the files so I can retrieve them later, because they need to be accessible from the application (e.g. a PDF document uploaded for someone must be available for download later). Can I do this? I found several ways to do it in PHP, but none for Perl. Any ideas?
EDIT
I know I can access some information using Catalyst::Request::Upload. I used this in the past for BLOB storage, but I don't know how to get the file information, nor where Catalyst stores its temporary files.
So, basically, the questions that arise when trying to do this are:
How do I know where my files are being stored once I submit them?
How do I copy these files (which I assume go to a tmp folder somewhere) to a folder on my computer/server?
How do I retrieve these files once I have them stored?
EDIT 2
I've checked the documentation for Catalyst::Request::Upload (http://search.cpan.org/~jjnapiork/Catalyst-Runtime-5.90114/lib/Catalyst/Request/Upload.pm) again and found out where my files are being stored and how to copy them to a new, non-tmp location. The only question that remains is:
How do I generate a download link for these files?
The solution was pretty straightforward.
First, make sure your 'tmp' folder is configured in the Catalyst app file (e.g. MyApp.pm).
Now, use Catalyst::Request::Upload to create the file object from the uploaded file. Sort of...
my $upload = $req->upload('input_field_name');
Now make sure you get all the data you want to store from the file. I personally got just the filename, MIME type, and size.
my $filename = $upload->filename;
my $size = $upload->size;
my $type = $upload->type;
Store those values in the database.
Now, create a folder within the site's public content to copy the files to, and perform the copy like this:
$upload->copy_to('path/to/the/public/folder');
To retrieve the files, just create a link with the base URL to the public folder and the filename you stored in the database.
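Putting those steps together, a controller action might look something like the sketch below. The model name, column names, and folder paths are assumptions for illustration, not part of the original answer:

package MyApp::Controller::Files;
use Moose;
use namespace::autoclean;
BEGIN { extends 'Catalyst::Controller' }

sub upload : Local {
    my ($self, $c) = @_;

    my $upload = $c->req->upload('input_field_name')
        or return $c->res->body('No file uploaded');

    # Collect the details worth keeping.
    my $filename = $upload->filename;
    my $size     = $upload->size;
    my $type     = $upload->type;

    # Store them (hypothetical DBIx::Class model and columns).
    $c->model('DB::File')->create({
        filename => $filename,
        size     => $size,
        mime     => $type,
    });

    # Copy the temp file into a folder served as public/static content.
    $upload->copy_to($c->path_to('root', 'static', 'uploads', $filename))
        or die "Copy failed: $!";

    # The download link is just the public base URL plus the stored filename.
    $c->res->body('Download: ' . $c->uri_for("/static/uploads/$filename"));
}

__PACKAGE__->meta->make_immutable;
1;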
Hope it helps someone... it was pretty obvious in hindsight, but it cracked my head a little.
I have used the "extract" command, but it was never able to find as much information as FOCA found in these Excel spreadsheets I am dealing with.
For example, I am using the FOCA application to harvest and download files from the web. Afterwards, it extracts metadata from all of the files.
With regard to Excel files, it appears they contain more metadata than the average PDF file. FOCA is able to detect printer names, email addresses, and a few other things stored within these spreadsheet files. However, I cannot find any way to get the same information on Linux using the "extract" command.
Does anyone know a way, on Linux, to grab ALL of a file's metadata? From what I understand, the extract command may be limited.
Thanks,
Excel files store a lot of metadata within the file itself, so you would have to parse the file to get at it. Since you're on Linux and can't use the Excel interop, you could try an Excel library like ExcelWriter or something similar. ExcelWriter is written for .NET, so you'd have to use Mono.
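If the spreadsheets are in the newer .xlsx format, another option that stays on Linux is to read the metadata directly: an .xlsx file is an ordinary zip archive, and the document properties (author, company, application, and so on) live in the docProps/*.xml parts. A rough sketch in Perl; the crude regex just dumps element text and stands in for a real XML parser:

#!/usr/bin/perl
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES);

my $file = shift or die "Usage: $0 file.xlsx\n";
my $zip  = Archive::Zip->new();
$zip->read($file) == AZ_OK or die "Cannot read $file as a zip archive\n";

for my $part ('docProps/core.xml', 'docProps/app.xml') {
    my $xml = $zip->contents($part) or next;
    print "== $part ==\n";
    # Print every simple element name and its text content.
    while ($xml =~ m{<([\w:]+)[^>]*>([^<]+)</\1>}g) {
        print "$1: $2\n";
    }
}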
Hi, can anyone help? I have an SSIS package that sends an XLS file daily, but the file is too big, so it fails to send. How do I compress the file and automate the SSIS package to send it daily?
There is no native support for that. The simplest way is to download a component that does it for you, like this.
Or you can write your own code in a Script Task. Here you have an example of zipping and emailing a file, which is exactly what you need.
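If a Script Task isn't an option, the same zip-and-email step could also be handed off to an external script run from an Execute Process Task. A rough sketch in Perl using Archive::Zip and MIME::Lite; all paths and addresses are placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use Archive::Zip qw(:ERROR_CODES);
use MIME::Lite;

my $xls      = '/data/reports/daily_report.xls';
my $zip_path = '/data/reports/daily_report.zip';

# Compress the spreadsheet.
my $zip = Archive::Zip->new();
$zip->addFile($xls, 'daily_report.xls');
$zip->writeToFileNamed($zip_path) == AZ_OK or die "Could not write $zip_path\n";

# Mail the zip as an attachment.
my $msg = MIME::Lite->new(
    From    => 'reports@example.com',
    To      => 'team@example.com',
    Subject => 'Daily report',
    Type    => 'multipart/mixed',
);
$msg->attach(
    Type        => 'application/zip',
    Path        => $zip_path,
    Filename    => 'daily_report.zip',
    Disposition => 'attachment',
);
$msg->send('smtp', 'mail.example.com');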
I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pics' locations and file names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to add all the locations/names to the db manually.
I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
It sounds like you already have the part where you take the upload, read it in as a string, and toss it into MySQL, much like reading a file in as a string. However, since CGI hands you a filehandle rather than a filename to read, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the upload's param value, and the private temp file handling will help you deal with where the uploaded file comes from. Where the files are stashed after the remote client and the CGI script are done is usually not permanent, and is at a minimum volatile.
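A small sketch of those CGI.pm pieces in action (the form field name 'photo' is hypothetical):

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $q  = CGI->new;
my $fh = $q->upload('photo') or die "No file uploaded\n";

my $client_name = $q->param('photo');        # the filename the browser sent
my $tmp_path    = $q->tmpFileName($fh);      # where CGI.pm spooled the data
my $info        = $q->uploadInfo($client_name);
my $mime        = $info ? $info->{'Content-Type'} : 'application/octet-stream';

print $q->header('text/plain');
print "Got $client_name ($mime), temporary copy at $tmp_path\n";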
You've got a bunch of uploaded files that need to be added to the db? It should be trivial to dash off a one-off script to loop through all the files and insert the details into the DB. If they're all in one spot, then a simple opendir()/readdir() type loop would catch them all; otherwise you can build a list of file paths and loop over that.
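A one-off backfill along those lines might look like this; the directory, credentials, and table layout are all made up:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dir = '/var/www/uploads';
my $dbh = DBI->connect('DBI:mysql:database=site', 'user', 'password',
                       { RaiseError => 1 });
my $sth = $dbh->prepare(
    'INSERT INTO uploads (filename, path, size) VALUES (?, ?, ?)');

opendir my $dh, $dir or die "Cannot open $dir: $!";
for my $name (readdir $dh) {
    my $path = "$dir/$name";
    next unless -f $path;                    # skip ., .. and subdirectories
    $sth->execute($name, $path, -s $path);
}
closedir $dh;
$dbh->disconnect;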
If you're talking about recording new uploads on the server, then it would be something along these lines (a sketch follows the list):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
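A sketch of that workflow with CGI.pm and DBI; the form field, table, columns, and destination directory are hypothetical:

#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use DBI;
use File::Copy qw(move);

my $q  = CGI->new;
my $fh = $q->upload('photo') or die "No file uploaded\n";

my $orig_name = $q->param('photo');
my $tmp_path  = $q->tmpFileName($fh);
my $size      = -s $tmp_path;

my $dbh = DBI->connect('DBI:mysql:database=site', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });
eval {
    # Insert the file's details and grab the new row's ID.
    $dbh->do('INSERT INTO uploads (original_name, size) VALUES (?, ?)',
             undef, "$orig_name", $size);
    my $id = $dbh->last_insert_id(undef, undef, 'uploads', 'id');

    # Move the upload to its final resting place, named after the ID.
    # Keeping this directory outside the webroot leaves downloads
    # entirely under the control of your scripts.
    move($tmp_path, "/var/uploads/$id") or die "Could not move upload: $!";

    # Everything worked, so make the row permanent.
    $dbh->commit;
};
if ($@) {
    $dbh->rollback;
    die "Upload failed: $@";
}

print $q->header('text/plain'), "Upload stored.\n";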