I am creating an application which requires slicing a video file (mp4 format) into chunks. Our server caps upload_max_filesize at 2MB, but we have files that are hundreds of MB in size which need to be uploaded. So far I slice the file (into 1MB chunks) using the HTML5 FileReader(), then upload each chunk using ajax. (Here is part of the function I have written:)
reader.onload = function() {
    // $.ajax(...): send the current blob to the server via POST
};
blob = window.videoFile.slice(byteStart, byteEnd);
reader.readAsBinaryString(blob);
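A fuller sketch of the chunking loop might look like the following (this is not my actual function; the /upload-chunk endpoint and form field names are made-up placeholders, and it uses fetch/FormData rather than readAsBinaryString, but the idea is the same: each chunk is posted with its index so the server knows the concatenation order):
async function uploadInChunks(file) {
    const chunkSize = 1024 * 1024; // 1MB per chunk
    const totalChunks = Math.ceil(file.size / chunkSize);
    for (let index = 0; index < totalChunks; index++) {
        const chunk = file.slice(index * chunkSize, (index + 1) * chunkSize);
        const form = new FormData();
        form.append('index', String(index)); // lets the server append chunks in order
        form.append('total', String(totalChunks));
        form.append('chunk', chunk, file.name + '.part' + index);
        await fetch('/upload-chunk', { method: 'POST', body: form }); // placeholder URL
    }
}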
Here's the question: will concatenating the files (in order) on the backend, then simply setting the Content-Type like so:
header('Content-Type: video/mp4');
before saving the file, actually reproduce the video file (i.e., perfectly, not as some choppy second-rate facsimile), or am I missing something here? This will take some time to build, so the faster option may be for me to beg our server admin to alter the php.ini file to allow a much larger upload_max_filesize.
I have an Angular app where I am loading some static data from a file under assets.
urlDataFile = './assets/data.json';
data: any = null;
constructor(
  private _http: HttpClient
) {
  this.loadData().subscribe((res) => {
    this.data = res;
  });
}
private loadData() {
  return this._http.get(this.urlDataFile);
}
This works absolutely fine for me for truly static data.
When I build my app for distribution, the data gets packaged into the app.
However, once deployed, I want to be able to publish an updated data file - just (manually) dropping a new copy of the file into the deployment location.
In an ideal world, I would like to develop with a dummy or sample data file (to be held in source control etc.), to exclude that file from deployment, and once deployed, to push a new data file into the deployed app.
What is the standard convention for accomplishing this?
What you have there should work just fine.
There are two ways of doing JSON stuff: one is what you're doing, dynamically requesting the file; the other is to literally import dataJson from './assets/data.json' in your file directly.
The second option, I was surprised to find out, actually gets compiled into your code, so the JSON values become a literal part of your app files (e.g. main.js).
So yours is the right approach for data that shouldn't be baked into the app: it will request that file on every app load (or whenever you tell it to).
That means in development it will load your local debug file, because that's what it's got, and the prod file once deployed; it's just requesting a file, after all.
What I foresee you needing to contend with are two things:
Live updates
Unless your app keeps re-requesting the file periodically, it won't magically pick up data from any new file that you push; until someone hits F5 or freshly browses to the site, they won't get that new data.
Caching
Even if you occasionally check that file for new data, you need to handle the fact that browsers try to be nice and cache files for you to make things quicker. I guess this would be handled with various cache headers and things that I know exist but have never had to touch in detail myself.
Otherwise, the browser will just return the old, cached data.json instead of actually going and retrieving the new one.
After that, though, I can't see anything wrong with doing what you're doing.
Slap your file request in an interval and put no-cache headers on the file itself and... good enough, probably?
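Something like this, for example (a rough sketch reusing the loadData() method from the question; the 60-second period is arbitrary, and cache headers would still be set on the server side):
ngOnInit() {
    // re-request assets/data.json every 60 seconds so pushed updates get picked up
    setInterval(() => {
        this.loadData().subscribe((res) => { this.data = res; });
    }, 60000);
}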
You are already following the right convention, which is to call the data with the HTTP client, not importing the file.
Now you can just gitignore the file, and replace it in a deployment step or whatever suits you.
Just watch out for caching. You might want to add a dummy query string with a value based on the current time, or something similar, to ensure the server sends a fresh copy, depending on how often you expect to update this file.
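For example, a minimal sketch of that query-string trick, applied to the loadData() method from the question:
private loadData() {
    // the timestamp makes each request URL unique, so cached copies are bypassed
    return this._http.get(this.urlDataFile + '?t=' + Date.now());
}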
I am building a web app and I would like to show PDF files to my users. My files are mainly stored as byte arrays in the database as they are generated in the backend. I am using the embed element and have found three ways to display a PDF:
Local file path in src attribute: Works, but I need to generate a file from the database byte array, which is not desirable as I have to manage routines to delete the files once they are no longer needed.
Online file path in src attribute: Not possible since my files may not be hosted anywhere but on the server. Also has the same issues as the previous method anyway.
Data as base64 string in src attribute: Current method, but I ran into a problem for larger files (>2MB). Edge and Chrome will not display a PDF when I convert a PDF of this size to a base64 string (no error, but the docs reveal that there is a limit on the amount of data allowed in the src attribute). It works in Firefox, but I cannot restrict my users to Firefox.
Is there any other way to transmit valid PDF data from a byte array out of the database without generating a file locally?
You have made the common mistake of thinking of URLs and file paths as the same thing; but a URL is just a string that's sent to the server, and some content is sent back. Just as you wouldn't save an HTML file to disk for every dynamic page on the site, you don't have to write to the file system to display a dynamic PDF.
So the solution to this is to have a script on your server that takes the identifier of a PDF in your system, maybe does some access checking, and outputs it to the browser.
For example, if you were using PHP, you might write the HTML with <embed src="/loadpdf.php?id=42"> and then in loadpdf.php you would write something like this:
$pdfContent = load_pdf_from_database((int)$_GET['id']);
header('Content-Type: application/pdf');
echo $pdfContent;
Loading /loadpdf.php?id=42 directly in the browser would then render the PDF just the same as if it was a "real" file, and embedding it should work the same way too.
Is it possible to obtain file handles in HTML5 and store them as blobs in webDB for upload later?
(Upload selected images when the 3G network is available again, without re-selecting the files.)
The HTML5 page will be loaded from the local client device, and a form with
action="http://.../insert.jsp"
will be used to upload the files to the server.
Any help or ideas will be very useful.
C-:
Any File object can be converted to a URL.
This is simple to do using object URLs, as described here:
https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications
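For example (a minimal sketch, assuming a file input somewhere on the page):
const input = document.querySelector('input[type="file"]');
const file = input.files[0];
const url = URL.createObjectURL(file); // a blob: URL usable as an img/video src, etc.
// later: URL.revokeObjectURL(url) when it is no longer needed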
(I still have to confirm that the URLs remain valid across sessions.)
And they do not remain valid across sessions in Chrome!
I'm working on a project where all user image uploads are stored on S3. To save bandwidth and avoid the upload going through our servers, we're using HTML Form based uploads (see http://docs.amazonwebservices.com/AmazonS3/latest/dev/HTTPPOSTForms.html).
What are the best practices for validating the contents of the upload to avoid non-image/malicious files sneaking onto and being served from my account? Is there a way to do this out of the box with S3? Or do I need to validate this on my server, after the file has been uploaded (which would pretty much defeat the purpose of going direct to s3 in the first place)?
If you use jQuery File Upload combined with the HTML5 S3 form, you can use code similar to the below to handle the upload process. Before the file is uploaded you can check the file type, and also the size with file.size < 50000000, before it ever hits the server.
jQuery ->
  $('#fileupload').fileupload
    add: (e, data) ->
      types = /(\.|\/)(gif|jpe?g|png)$/i
      file = data.files[0]
      # the size limit mentioned above could also be checked here, e.g. file.size < 50000000
      if types.test(file.type) || types.test(file.name)
        data.context = $(tmpl("template-upload", file))
        $('#fileupload').append(data.context)
        data.submit()
      else
        alert("#{file.name} is not a gif, jpeg, or png image file")
I'm currently building an AIR file uploader designed to handle multiple and very large files. I've played around with different methods of breaking the file up into chunks (100MB) and progressively uploading each one so that I can guard against a failed upload/disconnection etc.
I have managed to break up the file into smaller files which I then write to a scratch area on the disk; however, I'm finding that the actual process of writing the files is quite slow and chews up a lot of processing power. My UI basically grinds to a halt while it's writing, not to mention that I'm effectively doubling the local disk space used by every file.
The other method I used was to read the original file in 100MB chunks and store that data in a ByteArray, which I can then upload as POST data using the URLLoader class. The problem is that this way I can't keep track of the upload progress, because ProgressEvent.PROGRESS does not work properly for POST requests.
What I would like to know is whether it's possible to read the file in 100MB chunks and upload that data without having to create a new file, but still using the FileReference.upload() method in order to listen to all the events that method gives me. I.e., create a File() that is made up of bytes 0-100MB of the original file, then call upload() on that new File.
I can post my code for both methods if that helps.
Cheers, much appreciated
I had the same problem, but we solved it in another way: we decided to write a socket connector which connects to a server (e.g. FTP/HTTP) and writes the ByteArray down the socket. We also did it in chunks of around the same size, and the biggest file we had to upload was a Blu-ray movie of around ~150GB.
So I hope you got some interesting ideas from my message. If you'd like, I could share a piece of the code with you.