How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to upload the pics themselves into the db. I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using: a Perl CGI script, MySQL db, Linux server and the files are being uploaded to the server. I AM NOT looking to add the actual files to the db. Just their location(s).

It sounds like you have your method complete: you take the upload, make it a string, and toss it into MySQL, similar to reading a file in as a string. However, since CGI gives you a filehandle rather than a filename to read, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the upload's param, and the private temp files it creates will help you deal with the uploaded file's source. Wherever those files are stashed after the remote client and the CGI script are done is usually not permanent, and should at a minimum be treated as volatile.
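To make that concrete, here is a minimal sketch of the flow: copy the temp upload to a permanent spot, then record only the name and path in MySQL. The form field name 'photo', the /var/www/uploads directory, the 'pics' table, and the DB credentials are all assumptions; adjust them to your setup.

#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use DBI;
use File::Copy qw(copy);

my $q  = CGI->new;
my $fh = $q->upload('photo') or die "No upload received";

# The client-supplied name; keep only a sane basename before using it
my $filename = $q->param('photo');
($filename) = $filename =~ m{([\w.-]+)$} or die "Bad filename";

# Copy the temporary upload to a permanent location of your choosing
my $target = "/var/www/uploads/$filename";
copy($fh, $target) or die "Copy failed: $!";

# Record only the location and name in MySQL, not the file itself
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'password',
                       { RaiseError => 1 });
$dbh->do('INSERT INTO pics (filename, path) VALUES (?, ?)',
         undef, $filename, $target);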

You've got a bunch of already-uploaded files that need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, a simple opendir()/readdir() loop (like the sketch below) will catch them all; otherwise, build a list of file paths and loop over that.
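A minimal backfill sketch, assuming the files sit in one directory and a 'pics' table with filename/path columns (both assumptions):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dir = '/var/www/uploads';
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'password',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO pics (filename, path) VALUES (?, ?)');

opendir my $dh, $dir or die "Cannot open $dir: $!";
while (my $name = readdir $dh) {
    my $path = "$dir/$name";
    next unless -f $path;          # skip '.', '..', and subdirectories
    $sth->execute($name, $path);
}
closedir $dh;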
If you're talking about recording new uploads on the server, then it would be something along these lines:
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
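A sketch of that workflow in Perl with DBI and File::Copy; the 'uploads' table, its columns, and the /srv/uploads directory are assumptions:

use strict;
use warnings;
use DBI;
use File::Copy qw(move);

sub record_upload {
    my ($dbh, $tmp_path, $name, $size, $mime) = @_;

    $dbh->begin_work;                       # start database transaction
    my $id = eval {
        $dbh->do('INSERT INTO uploads (name, size, mime) VALUES (?, ?, ?)',
                 undef, $name, $size, $mime);
        my $new_id = $dbh->last_insert_id(undef, undef, 'uploads', 'id');

        # Move the file to its final resting place, named after the row ID
        move($tmp_path, "/srv/uploads/$new_id") or die "move failed: $!";
        $new_id;
    };
    if ($@) {
        $dbh->rollback;                     # undo the insert on any failure
        die "upload not recorded: $@";
    }
    $dbh->commit;
    return $id;
}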

Related

Managing a large SPSS (*.sav) file (4.2 GB)

I have received an SPSS file from a survey fielded by another company that allegedly contains only ~1500 respondents, but the file size has somehow ballooned to 4.2 GB. My hunch is that the file comes from a global survey and the 1500 selected records are from the US only, so the file contains a series of blank variables, plus metadata for those variables, possibly in multiple languages/alphabets.
I only need a subset of this data, and could likely work with it if I removed the metadata, but my issue is that I can't get the damn thing open in order to cut down the number of variables. I have been using the tools at my disposal to try the following workarounds, though I'm sure there are better options:
Opening the file using PSPP (freeware SPSS) - this causes PSPP to stop responding
Using the R command read.spss (from the foreign package) to write a .csv - this claims that the file has a duplicate variable name and won't proceed further
Using the R command spss.system.file to write a .csv - when I tried this, R spent a long time thinking as it attempted to run, and it has been running for a couple of hours with no apparent success
Using the PSPP text conversion tool (https://pspp.benpfaff.org/) to create either a dictionary or a .csv file - both of these options crash after the file has completed uploading.
I've gone back to the other company to ask them to work on reducing the file size; however, I wasn't sure if anyone else had ideas on how to do either of the following:
Open the file using another program/converter that could turn it into a .csv or other similarly skinny file format
Use another program to at least read only the variable names included in the file so that I can provide the other company with the specific variables I need
The following command from PSPP should do what you need:
$ pspp-convert originalFile.sav output.csv
In case it doesn't, please provide the terminal error message.

Store files in database with DBIC using Catalyst

I'm using the Perl Catalyst framework to build an application that needs to store several files in a MySQL database (among other things). I want to store the name, path, extension, etc. of the files so I can retrieve them later, because they are supposed to be accessible from the application (e.g. a PDF document uploaded for someone must be available for download later). Can I do this? I found several ways to do it in PHP, but none for Perl. Any ideas?
EDIT
I know I can access some information using Catalyst::Request::Upload. I used this in the past for BLOB storage, but I don't know how to get file information, nor where Catalyst stores tmp files.
So, basically, the questions that arise when trying to do this are:
How do I know where my files are being stored once I submit them?
How do I copy these files (which I assume go to a tmp folder somewhere) to a folder on my computer/server?
How do I retrieve these files once I have them stored?
EDIT 2
I've checked the documentation for Catalyst::Request::Upload again (http://search.cpan.org/~jjnapiork/Catalyst-Runtime-5.90114/lib/Catalyst/Request/Upload.pm) and found out where my files are being stored and how to copy them to a new non-tmp location. The only question that remains:
How do I generate a download link for these files??
The solution was pretty straightforward.
First, make sure your 'tmp' folder is configured in the Catalyst app file (e.g. MyApp.pm).
Now, use Catalyst::Request::Upload to create the file object with the uploaded file. Sort of...
my $upload = $req->upload('input_field_name');
Now make sure you get all the data you want to store from the file. I, personally, got just the filename, MIME type, and size.
my $filename = $upload->filename;
my $size = $upload->size;
my $type = $upload->type;
Store into the database.
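With DBIC this can be a single create() call; the 'DB::File' model name and its columns here are hypothetical, so match them to your own schema:

# Inside a controller action; assumes a DBIC result class 'File'
$c->model('DB::File')->create({
    filename => $filename,
    size     => $size,
    mime     => $type,
});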
Now, create a folder within the public content of the page to copy the files to, and perform the copy like:
$upload->copy_to('path/to/the/public/folder');
To retrieve the files, just create a link with the base URL to the public folder and the filename you stored in the database.
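For example (a sketch, assuming the files were copied under the app's static root into 'static/uploads'; both names are assumptions):

# In a controller action: build the URL from the stored filename
my $url = $c->uri_for('/static/uploads/' . $filename);

# In a Template Toolkit view, render it as a download link:
# <a href="[% url %]">[% filename %]</a>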
Hope it helps someone... it was pretty obvious in hindsight, but it cracked my head a little.

Is there any way to fill in Sharepoint entries via parsing text file?

My workplace has a whole bunch of unannotated .zip files that need to be uploaded to the new file server (Windows). I've used Perl to parse through the Excel files within the .zip files and create an annotation.txt file for each .zip file that contains information about it. I have thousands of zip files and do not want to manually enter information for each entry if there's a way to automate it. I am proficient in Perl and MySQL, and I am wondering if there is any way to use those skills to port this information into the Microsoft SharePoint website.
Thank you in advance for any advice or suggestions.
There are many, many ways to meet your requirement.
You could write an event receiver to parse the files once uploaded and set the metadata.
A better approach for your use case might be to write a .NET based console application and reference Microsoft.SharePoint.Client and then upload your files using the Client side object model (CSOM) and set the metadata during that process as outlined here: Upload a document to a SharePoint list from Client Side Object Model
There are also REST and ASMX web services that you could call from a non-.NET runtime process.
Plenty of options, pick the one that fits your needs and skills best.
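Since you already know Perl, a rough sketch of the REST route might look like the following. This assumes an on-premises SharePoint reachable with NTLM credentials (LWP needs the Authen::NTLM module installed for that), and every URL, library, and file name below is hypothetical; SharePoint Online would need a different authentication flow.

use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);
use File::Slurp qw(read_file);

my $site = 'https://sharepoint.example.com/sites/archive';
my $ua   = LWP::UserAgent->new(keep_alive => 1);   # NTLM needs keep-alive
$ua->credentials('sharepoint.example.com:443', '', 'DOMAIN\\user', 'secret');

# Every REST write needs a form digest, fetched from /_api/contextinfo
my $ctx = $ua->post("$site/_api/contextinfo",
                    'Accept' => 'application/json;odata=verbose');
die $ctx->status_line unless $ctx->is_success;
my $digest = decode_json($ctx->decoded_content)
                 ->{d}{GetContextWebInformation}{FormDigestValue};

# Upload one zip into a document library
my $bytes = read_file('bundle001.zip', binmode => ':raw');
my $up = $ua->post(
    "$site/_api/web/GetFolderByServerRelativeUrl('Shared%20Documents')"
      . "/Files/add(url='bundle001.zip',overwrite=true)",
    'Accept'          => 'application/json;odata=verbose',
    'X-RequestDigest' => $digest,
    Content           => $bytes,
);
die $up->status_line unless $up->is_success;

The annotation.txt contents could then be written to the uploaded file's list-item fields with a second request against the file's ListItemAllFields endpoint, looping the whole thing over your thousands of zips.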

ActionScript: is there a way to upload a file straight into memory?

I'm looking into uploading an XML file and then storing its contents in a database. It looks like the upload method of flash.net.FileReference would do the job; however, it only gives you the option to upload to a server.
I could upload the file to the server, read it from there, and then delete it, but I would like to avoid the extra work.
Is there a way to just load a file into memory without saving it on some remote location?
No, this cannot be done; uploads can only go to a server, probably for security reasons.
If you need to store the content in a database anyway, why don't you have the server-side backend handle it?
If this is just some data you need and then throw away after the program completes, perhaps you could consider asking the user to copy and paste their data into a text field. That might depend on your target audience, though: IT types - no problem; non-IT types - problem :D
If you are trying to have the user select an XML file from their local machine, after your myFileReference.load(), in your Event.COMPLETE handler function you can use var myXML:XML = XML(myFileReference.data); to get the data of the file you selected.
Yes, you can load all the content into memory: just push it into an array and call it up whenever you want it.

How to handle uploading html content to an AppEngine application?

I would like to allow my users to upload HTML content to my AppEngine web app. However, if I use the Blobstore to upload all the files (HTML files, CSS files, images, etc.), this causes a problem: all the links to other files (pages, resources) will not work.
I see two possibilities, but both of them are not very pretty and I would like to avoid using them:
Go over all the links in the html files and change them to the relevant blob key.
Save a mapping between a file and a blob key, catch all the redirections and serve the blobs (could cause problems with same name files).
How can I solve this elegantly without having to go over and change my user's files?
Because App Engine runs your content on multiple servers, you are not able to write to the filesystem. What you could do is ask them to upload a zip file containing their HTML, CSS, JS, images, etc. The zipfile module from Python is available on App Engine, so you can unzip these files and store them individually. This way, you know the directory structure of the zip, which allows you to create a mapping of relative paths to the content in the blobstore. I don't have enough experience with zipfile to write a full example here; I hope someone more experienced can edit my answer, or create a new one with an example.
Saving a mapping is the best option here. You'll need to identify a group of files in some way, since multiple users may upload a file with the same name, then associate unique pathnames with each file in that group. You can use key names to make it a simple datastore get to find the blob associated with a given path. No redirects are required - just use the standard Blobstore serving approach of setting the blobstore header to have App Engine serve the blob to the user.
Another option is to upload a zip, as Frederik suggests. There's no need to unpack and store the files individually, though - you can serve them directly out of the zip in blobstore, as this demo app does.