I'm looking into uploading an XML file and then storing its contents in a database. It looks like the upload() method of flash.net.FileReference would do the job; however, it only gives you the option to upload the file to a server.
I could upload it to a server, read it back from there, and then delete the file, but I would like to avoid that extra work.
Is there a way to just load the file into memory without saving it to some remote location?
No, this cannot be done; uploads can only go to a server, probably for security reasons.
If you need to store the content in a database anyway, why not have the server-side backend handle it?
If this is just some data that you use and then throw away once the program is done, perhaps you could consider asking the user to copy and paste their data into a text field. That might depend on your target audience, though: IT types, no problem; non-IT types, problem :D
If you are trying to have the user select an XML file from their local machine, after your myFileReference.load(), in your Event.COMPLETE handler function you can use var myXML:XML = XML(myFileReference.data); to get the data of the file you selected.
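To make that concrete, here is a minimal sketch of the whole flow (browse, load, parse); the handler names and the file filter are my own, and it assumes Flash Player 10+, where FileReference.load() was introduced:

    import flash.events.Event;
    import flash.net.FileFilter;
    import flash.net.FileReference;

    var fileRef:FileReference = new FileReference();
    fileRef.addEventListener(Event.SELECT, onFileSelected);
    fileRef.addEventListener(Event.COMPLETE, onFileLoaded);
    // browse() must be called from a user-interaction handler (e.g. a button click)
    fileRef.browse([new FileFilter("XML files", "*.xml")]);

    function onFileSelected(e:Event):void {
        fileRef.load(); // reads the file into fileRef.data as a ByteArray, no server involved
    }

    function onFileLoaded(e:Event):void {
        var myXML:XML = XML(fileRef.data); // the ByteArray is coerced to a String, then parsed
        trace(myXML.toXMLString());
        // from here you can send the parsed values to your backend for the database insert
    }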
Yes, you can load all the content into memory; just push it into an array and read it back out whenever you need it.
I am building a simple editor-type application in react-redux, and I want to mimic the operation of downloading and uploading JSON files for saving and loading data, entirely client-side. The server does not need the data. Local storage may be too small, and it would be nice to give the user the data in a portable file they could upload on a new machine. Is this even possible, and if so, how?
Use a Blob.
You can set the content of a new file, which is temporary and local, then trigger a click event on a link to download it.
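A minimal sketch of both directions, assuming a plain file input; the filename and state shape are placeholders:

    // Save: serialize state to JSON and trigger a client-side download.
    function downloadJson(state) {
      const blob = new Blob([JSON.stringify(state, null, 2)], { type: "application/json" });
      const url = URL.createObjectURL(blob);
      const a = document.createElement("a");
      a.href = url;
      a.download = "editor-state.json"; // placeholder filename
      a.click();
      URL.revokeObjectURL(url);
    }

    // Load: read a file the user picked from an <input type="file"> element.
    function uploadJson(fileInput, onLoaded) {
      const reader = new FileReader();
      reader.onload = () => onLoaded(JSON.parse(reader.result));
      reader.readAsText(fileInput.files[0]);
    }

In a redux app, onLoaded would typically dispatch an action that replaces the relevant slice of the store.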
I'm working on an application that lets users create a custom character sheet for role-playing games. I have most of the code figured out, but I want users to be able to send their character sheets between devices.
So here's the question: is there a way to save and send a shared object file, or a way to create a txt file that can easily be saved and copied?
I don't believe you can send a SharedObject from one device to another, at least not without a lot of work. You could, however, create an XML file containing the data and save it up to a server. You could then allow the user to download character sheets from your server, and the app would read the XML data before converting it to a SharedObject. I can't really provide any code for this as the details are lacking.
If I understand you correctly, you could sort of do this.
You cannot literally "send and receive" a SharedObject (well, you might be able to copy your shared object data on the file system directly, but not from Flash).
What you can do is provide options to the user to save and load a file that encodes all the shared data in AMF bytes. Here's the general idea:
First you need to give the user an option to save their data. You can use ByteArray/writeObject() to write your data using the same AMF format that SharedObject uses, and FileReference/save() to allow the user to save it to a file on their file system.
Next, you can use FileReference/load() to load the file and ByteArray/readObject() to read all the data into AS3. Now you can simply store it in the SharedObject however you want, just like you did before.
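A sketch of both halves; the SharedObject name, the file name, and the field layout are placeholders:

    import flash.events.Event;
    import flash.net.FileReference;
    import flash.net.SharedObject;
    import flash.utils.ByteArray;

    // "characterSheet" and "sheet.dat" are placeholder names.
    var so:SharedObject = SharedObject.getLocal("characterSheet");
    var loadRef:FileReference = new FileReference();
    loadRef.addEventListener(Event.SELECT, function(e:Event):void { loadRef.load(); });
    loadRef.addEventListener(Event.COMPLETE, onSheetLoaded);

    // Save: serialize the shared data to AMF and prompt the user to save it.
    // Must be called from a user-gesture handler (e.g. a button click).
    function saveSheet():void {
        var bytes:ByteArray = new ByteArray();
        bytes.writeObject(so.data);                // same AMF encoding SharedObject uses
        new FileReference().save(bytes, "sheet.dat");
    }

    // Load: ask the user to pick a previously saved file.
    function loadSheet():void {
        loadRef.browse();
    }

    function onSheetLoaded(e:Event):void {
        var restored:Object = loadRef.data.readObject(); // decode the AMF bytes
        for (var key:String in restored) {
            so.data[key] = restored[key];          // so.data itself is read-only; its fields aren't
        }
        so.flush();
    }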
:image => StorageRoom::Image.new_with_filename(path)
I have to get the path of the image. So far I have specified the path manually and it worked; now I have deployed to Heroku and it shows LoadError: no such file present.
How can I get the path of the file on the user's local system from the browse button?
Your problem may not be related to path names, but to the fact that Heroku has a read-only file system. If you try to write files onto disk in a Heroku app, it simply doesn't work -- the file will not be saved.
The exception is the "temp" directory. You can save files there, but they are not guaranteed to persist for longer than the duration of a single request.
Is the file you are trying to open actually saved in your Git repo? If so, it will be on the disk in your Heroku app, and you should be able to open it.
To see what the filesystem layout looks like on your Heroku instance, you can create a controller method like:
render :inline => Dir['**/*'].inspect
File.expand_path converts a relative path into the absolute path of the file.
Reference: http://saaridev.blogspot.com/2006/11/ruby-finding-absolute-path-of-running.html
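For instance, combining it with the call from the question (the relative path here is purely illustrative):

    # Illustrative only: resolve a relative path to an absolute one
    # before handing it to the API mentioned in the question.
    path  = File.expand_path('../public/uploads/image.png', File.dirname(__FILE__))
    image = StorageRoom::Image.new_with_filename(path)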
You don't need the full path. As far as the file's path on the client machine is concerned, it is irrelevant for file uploads, and sending it would pose a security risk to the user.
Most modern browsers don't send the file path with file uploads. You could get the path using JavaScript or Flash, but I still don't see the logic behind doing this.
When the user clicks the submit button, the browser sends you at least the file name along with the file data, together with a bunch of other information such as the MIME type. Your web server then either writes the file to disk or processes it in memory, assuming you have near-infinite memory resources. Look at RFC 1867 for more on file uploads.
I'm currently building an AIR file uploader designed to handle multiple very large files. I've played around with different methods of breaking a file into chunks (100 MB) and progressively uploading each, so that I can guard against a failed upload, disconnection, etc.
I have managed to break the file into smaller files, which I then write to a scratch area on disk; however, I'm finding that the actual process of writing the files is quite slow and chews up a lot of processing power. My UI basically grinds to a halt while it's writing, not to mention that I'm effectively doubling the local disk usage of every file.
The other method I used was to read the original file in 100 MB chunks and store that data in a ByteArray, which I can then upload as POST data using the URLLoader class. The problem is that this way I can't keep track of the upload progress, because ProgressEvent.PROGRESS does not work properly for POST requests.
What I would like to know is whether it's possible to read the file in 100 MB chunks and upload that data without having to create a new file, while still using the FileReference.upload() method so I can listen to all the events that method gives me. I.e., create a File() made up of bytes 0-100 MB of the original file, then call upload() on that new File.
I can post my code for both methods if that helps.
Cheers, much appreciated
I had the same problem, but we solved it another way: we wrote a socket connector that connects to the server (e.g. FTP/HTTP) and writes the ByteArray to the socket, also in chunks of around the same size. The biggest file we had to upload was a Blu-ray movie of around 150 GB.
I hope this gives you some interesting ideas; if you like, I can share some of the code.
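For the chunked-read half in AIR, here is a rough sketch using FileStream; the endpoint URL and the chunk-index header are assumptions, and a real uploader would add retry and error handling:

    import flash.events.Event;
    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.net.URLRequestHeader;
    import flash.net.URLRequestMethod;
    import flash.utils.ByteArray;

    const CHUNK_SIZE:uint = 100 * 1024 * 1024; // 100 MB

    function uploadInChunks(file:File):void {
        var stream:FileStream = new FileStream();
        stream.open(file, FileMode.READ);
        var index:int = 0;

        function sendNext():void {
            if (stream.position >= file.size) { stream.close(); return; }
            var chunk:ByteArray = new ByteArray();
            stream.readBytes(chunk, 0, uint(Math.min(CHUNK_SIZE, file.size - stream.position)));

            var req:URLRequest = new URLRequest("http://example.com/upload"); // placeholder endpoint
            req.method = URLRequestMethod.POST;
            req.contentType = "application/octet-stream";
            req.requestHeaders.push(new URLRequestHeader("X-Chunk-Index", String(index++)));
            req.data = chunk;

            var loader:URLLoader = new URLLoader();
            loader.addEventListener(Event.COMPLETE, function(e:Event):void { sendNext(); });
            loader.load(req); // chunks go one at a time; add IOErrorEvent handling for retries
        }
        sendNext();
    }

This doesn't give you FileReference.upload()'s events, but since each chunk is a plain POST you can synthesize overall progress from the count of completed chunks.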
I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually store the pics in the db; I have heard horror stories about uploading binary files to databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated; I would really hate to have to add all the locations/names to the db manually.
I am using: a Perl CGI script, MySQL db, Linux server and the files are being uploaded to the server. I AM NOT looking to add the actual files to the db. Just their location(s).
It sounds like you already have the upload part done: you take the upload, read it in as a string, and toss it into MySQL, much like reading a file in as a string. However, since CGI hands you a filehandle rather than a filename, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the param for the upload field, and the module's private temp files will help you deal with the uploaded file. Where the temp files are stashed after the remote client and the CGI script are done is usually not permanent, and is at minimum volatile.
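A minimal CGI.pm sketch of getting at the uploaded file and its temporary location; the form-field name 'pic' and the destination directory are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use File::Copy qw(copy);

    my $q = CGI->new;

    my $fh       = $q->upload('pic');          # filehandle to the temp file
    my $filename = $q->param('pic');           # client-side filename
    my $info     = $q->uploadInfo($filename);  # headers, e.g. $info->{'Content-Type'}
    my $tmp_path = $q->tmpFileName($filename); # where CGI.pm stashed it (volatile!)

    # Copy the temp file somewhere permanent before the request ends.
    copy($tmp_path, "/var/www/uploads/$filename") or die "copy failed: $!";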
You've got a bunch of uploaded files that need to be added to the db? Should be trivial to dash off a one-off script to loop through all the files and insert the details into the DB. If they're all in one spot, then a simple opendir()/readdir() type loop would catch them all, otherwise you can make a list of file paths to loop over and loop over that.
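For the backfill case, a sketch of such a one-off script might look like this (directory, table, and credentials are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # One-off backfill: record already-uploaded files in the DB.
    my $dir = '/var/www/uploads';
    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'pass', { RaiseError => 1 });

    opendir(my $dh, $dir) or die "can't open $dir: $!";
    for my $f (grep { -f "$dir/$_" } readdir $dh) {
        $dbh->do('INSERT INTO uploads (orig_name, path) VALUES (?, ?)',
                 undef, $f, "$dir/$f");
    }
    closedir $dh;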
If you're talking about recording new uploads on the server, then it would be something along these lines (a code sketch follows after the list):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
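Here is a hedged Perl/DBI sketch of that flow; the table, columns, field name, paths, and credentials are all made up for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use DBI;
    use File::Copy qw(move);

    my $q    = CGI->new;
    my $name = $q->param('pic');                     # assumed form-field name
    my $mime = $q->uploadInfo($name)->{'Content-Type'};

    # Connect and start a transaction (credentials are placeholders).
    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    eval {
        # insert the file's metadata; the final path is recorded below
        $dbh->do('INSERT INTO uploads (orig_name, mime_type) VALUES (?, ?)',
                 undef, $name, $mime);

        # use the new row's ID as the on-disk filename (no collisions, no overwrites)
        my $id   = $dbh->last_insert_id(undef, undef, 'uploads', 'id');
        my $path = "/srv/uploads/$id";               # outside the webroot

        move($q->tmpFileName($name), $path) or die "move failed: $!";

        $dbh->do('UPDATE uploads SET path = ? WHERE id = ?', undef, $path, $id);
        $dbh->commit;                                # commit only if the move succeeded
    };
    if ($@) {
        $dbh->rollback;                              # undo the insert if anything failed
        die "upload failed: $@";
    }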