I'm trying to upload a large file (> 200 MB) using the new Box API.
Can I upload it in chunks?
Currently the Box API does not support uploading a file in chunks. You may, however, include a "Content-MD5" header with your request containing the SHA-1 hash of the file; Box will check this against the uploaded contents to ensure the file was not corrupted in transit.
See: http://developers.box.com/docs/#files-upload-a-file
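If you want a starting point, here is a minimal sketch in TypeScript (Node 18+, built-in fetch). The upload URL, token handling, and omitted folder/attribute fields are assumptions on my part; check the docs linked above for the exact request format.

import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

// Placeholder endpoint and token; see the Box docs linked above for the
// exact upload URL, authentication, and any required folder/attribute fields.
const UPLOAD_URL = "https://upload.box.com/api/2.0/files/content";
const ACCESS_TOKEN = process.env.BOX_TOKEN ?? "";

async function uploadWithIntegrityCheck(path: string, name: string) {
  const bytes = await readFile(path);

  // Per the answer above, Box expects the file's SHA-1 digest in the
  // Content-MD5 header and verifies it against what actually arrived.
  const sha1 = createHash("sha1").update(bytes).digest("hex");

  const form = new FormData();
  form.append("file", new Blob([bytes]), name);

  const res = await fetch(UPLOAD_URL, {
    method: "POST",
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}`, "Content-MD5": sha1 },
    body: form,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json();
}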
I exported a session from Fiddler to a SAZ file.
This session includes only JPG files, and I'm wondering: how can I extract the JPG files from the SAZ quickly and easily?
Thanks!
The easiest way to extract the JPEG files is to use Fiddler itself. Fiddler allows you to load a SAZ file (under File > Load Archive...).
Once loaded, just right-click on the HTTP message containing the JPEG and select Save > Response > Response Body....
If you want to do it the hard way, a SAZ file is just a ZIP file. The following is from the Fiddler FAQ page:
SAZ files are simply specially formatted .ZIP files. If you rename a .SAZ file to .ZIP, you can open it for viewing using standard ZIP viewing tools.
According to the FAQ, the HTTP payload data is stored in a directory called raw. The JPEG data will be in one of the sessid#_s.txt files, embedded in an HTTP response message. Strip the HTTP headers to get the JPEG (assuming there is no extra encoding in the HTTP message, such as chunked transfer or gzip).
sessid#_s.txt - contains the raw server response
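As a rough illustration of the manual route, here is a small Node/TypeScript sketch that strips the headers from one of the raw/*_s.txt files after the SAZ has been unzipped. The file names are placeholders, and it assumes the response body is not chunked or gzip-encoded, as noted above.

import { readFile, writeFile } from "node:fs/promises";

// Given one of the raw/*_s.txt response files from an unzipped SAZ,
// drop everything up to the blank line that ends the HTTP headers and
// save the remaining bytes as a JPEG.
async function extractJpeg(responseFile: string, outFile: string) {
  const raw = await readFile(responseFile);

  const separator = Buffer.from("\r\n\r\n"); // header/body boundary
  const bodyStart = raw.indexOf(separator);
  if (bodyStart === -1) throw new Error("No HTTP header/body boundary found");

  await writeFile(outFile, raw.subarray(bodyStart + separator.length));
}

extractJpeg("raw/001_s.txt", "001.jpg");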
Fiddler now requires a password to work, and I'm not willing to register.
First, I create a folder and put the .saz file inside it.
Second, I change to that directory and use unzip to extract the files, since a SAZ file is a standard ZIP archive.
Third, I open _index.htm with any browser and click the links in the index file.
Have fun.
I am building a simple editor-type application in react-redux, and I want to mimic the operation of downloading and uploading JSON files for saving and loading data, entirely client side. The server side does not need the data. Local storage may be too small, and it would be nice to give the user their data in a portable file they could upload on a new machine. Is this even possible, and if so, how?
Use a Blob.
You can build the contents of a new temporary, local file in memory, then trigger a click event on a generated link to download it.
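Here is a minimal browser sketch of both directions in TypeScript. The saveState/loadState names, the element id, and the MIME type are illustrative, not from any particular library.

function saveState(state: object, filename: string): void {
  // Serialize the store and wrap it in a Blob with a JSON MIME type.
  const blob = new Blob([JSON.stringify(state, null, 2)], {
    type: "application/json",
  });
  const url = URL.createObjectURL(blob);

  // Create a temporary link and click it to trigger the download.
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();
  URL.revokeObjectURL(url);
}

function loadState(file: File): Promise<object> {
  return file.text().then((text) => JSON.parse(text));
}

// Wire loading to a picker such as:
// <input id="load" type="file" accept="application/json">
document.getElementById("load")?.addEventListener("change", (e) => {
  const file = (e.target as HTMLInputElement).files?.[0];
  if (file) loadState(file).then((state) => console.log("restored", state));
});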
I'm new to this area. I have a file, originally in Excel (data from a sensor), that I want to upload into Azure and process with Stream Analytics. Since Stream Analytics supports CSV as an input format, I'm thinking about saving the Excel file as CSV and uploading it to Blob storage (or should I send it into an Event Hub?). However, Stream Analytics shows nothing in the output. The original file looks like the one below; does anyone know something about this?
When I try to upload a large CSV file to the CKAN DataStore, it fails and shows the following message:
Error: Resource too large to download: 5158278929 > max (10485760).
I changed the maximum resource upload size, in megabytes, to
ckan.max_resource_size = 5120
in
/etc/ckan/production.ini
What else do I need to change to upload a large CSV to CKAN?
That error message comes from the DataPusher, not from CKAN itself: https://github.com/ckan/datapusher/blob/master/datapusher/jobs.py#L250. Unfortunately it looks like the DataPusher's maximum file size is hard-coded to 10MB: https://github.com/ckan/datapusher/blob/master/datapusher/jobs.py#L28. Pushing larger files into the DataStore is not supported.
Two possible workarounds might be:
Use the DataStore API to add the data yourself (see the sketch after this list).
Change MAX_CONTENT_LENGTH on the line in the DataPusher source code linked above to something bigger.
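For the first workaround, a minimal sketch of pushing rows through the datastore_create action might look like the following (TypeScript, Node 18+ fetch). The site URL, API key, resource ID, and field definitions are placeholders; for a 5 GB CSV you would stream-parse the file and send records in batches rather than all at once.

const CKAN_URL = "https://my-ckan-site.example"; // placeholder
const API_KEY = process.env.CKAN_API_KEY ?? "";  // placeholder

async function pushRows(resourceId: string, records: object[]) {
  const res = await fetch(`${CKAN_URL}/api/3/action/datastore_create`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: API_KEY },
    body: JSON.stringify({
      resource_id: resourceId,
      // Example schema for a two-column CSV; adjust to your file.
      fields: [
        { id: "timestamp", type: "timestamp" },
        { id: "value", type: "numeric" },
      ],
      records,
      force: true, // write even though the resource is not DataPusher-managed
    }),
  });
  const json = await res.json();
  if (!json.success) throw new Error(JSON.stringify(json.error));
}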
I'm working on a project where all user image uploads are stored on S3. To save bandwidth and avoid the upload going through our servers, we're using HTML Form based uploads (see http://docs.amazonwebservices.com/AmazonS3/latest/dev/HTTPPOSTForms.html).
What are the best practices for validating the contents of the upload to avoid non-image/malicious files sneaking onto and being served from my account? Is there a way to do this out of the box with S3? Or do I need to validate this on my server, after the file has been uploaded (which would pretty much defeat the purpose of going direct to s3 in the first place)?
If you use jQuery File Upload combined with the HTML5 S3 form, you can use code similar to the below to handle the upload process. Before the file is uploaded you can check its type, and also its size with a check like file.size < 50000000.
jQuery ->
  $('#fileupload').fileupload
    add: (e, data) ->
      # Accept only gif/jpeg/png, by MIME type or file extension.
      types = /(\.|\/)(gif|jpe?g|png)$/i
      file = data.files[0]
      # Validate type and size (< 50 MB) before submitting the upload to S3.
      if (types.test(file.type) || types.test(file.name)) && file.size < 50000000
        data.context = $(tmpl("template-upload", file))
        $('#fileupload').append(data.context)
        data.submit()
      else
        alert("#{file.name} is not a gif, jpeg, or png image file under 50 MB")
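One caveat on this approach: client-side checks like the above only guard against honest mistakes, since anyone can bypass your JavaScript and POST to S3 directly. To enforce limits without proxying the upload through your server, put conditions in the signed S3 POST policy itself, such as a content-length-range condition and a starts-with condition on Content-Type; for real content validation you would still need to inspect the object after upload, for example from a background job.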