HTML5 FileReader API crashes Chrome 17 when reading a large file in slices - html

I'm trying to read a large file (3 GB) in slices of 100 MB each.
function sliceMe() {
    var file = document.getElementById('files').files[0],
        fr = new FileReader();
    var chunkSize = document.getElementById('txtSize').value;
    chunkSize = 1048576;
    var chunks = Math.ceil(file.size / chunkSize);
    var chunk = 0;
    document.getElementById('byte_range').innerHTML = "";

    function loadNext() {
        var start, end,
            blobSlice = File.prototype.mozSlice || File.prototype.webkitSlice;
        start = chunk * chunkSize;
        if (start > file.size)
            start = end + 1;
        end = start + (chunkSize - 1) >= file.size ? file.size : start + (chunkSize - 1);
        fr.onload = function(e) {
            if (++chunk <= chunks) {
                document.getElementById('byte_range').innerHTML += chunk + " " +
                    ['Read bytes: ', start, ' - ', end,
                     ' of ', file.size, ' byte file'].join('') + "<br>";
                //console.info(chunk);
                loadNext(); // shortcut here
            }
        };
        fr.readAsArrayBuffer(blobSlice.call(file, start, end));
    }
    loadNext();
}
The code above works as expected in Firefox and in Chrome 16, but in Chrome 17 and the 18 dev build the browser crashes after reading about 1 GB of data.
Is this a known issue in Chrome 17?

I had the same problem reading in a 1.8 GB file. Watching Task Manager, chrome.exe would take up to 1.5 GB of memory and then crash. My solution was to use a JavaScript worker with FileReaderSync instead of FileReader. The worker runs in a separate thread, and FileReaderSync only works inside a worker.
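For illustration, here is a minimal sketch of that approach; the worker file name and the chunk size are my own choices, not from the original setup:

// main.js - hand the File object to a worker (hypothetical file "reader-worker.js")
var worker = new Worker('reader-worker.js');
worker.onmessage = function (e) {
    console.log('read chunk of ' + e.data.byteLength + ' bytes');
};
worker.postMessage(document.getElementById('files').files[0]);

// reader-worker.js - FileReaderSync is only available inside workers
onmessage = function (e) {
    var file = e.data;
    var chunkSize = 1048576; // 1 MB, an arbitrary choice
    var reader = new FileReaderSync();
    for (var start = 0; start < file.size; start += chunkSize) {
        var end = Math.min(start + chunkSize, file.size);
        // slice() is unprefixed in current browsers; older ones used webkitSlice/mozSlice
        var buffer = reader.readAsArrayBuffer(file.slice(start, end));
        postMessage(buffer, [buffer]); // transfer the buffer instead of copying it
    }
};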

You may need to change your algorithm so that it adjusts the chunk size at run time according to the file size; Google Chrome crashes when the read loop runs continuously.
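A sketch of what that adjustment could look like (the numbers are made up for illustration, not taken from this answer):

// pick a chunk size proportional to the file, within arbitrary bounds
function pickChunkSize(fileSize) {
    var target = Math.ceil(fileSize / 100);        // aim for roughly 100 chunks
    return Math.min(Math.max(target, 1048576),     // at least 1 MB
                    100 * 1048576);                // at most 100 MB
}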

Related

Forge chunk upload .NET Core

I have a question about uploading large objects to a Forge bucket. I know that I need to use the resumable API, but how can I get the file when I have only the file name? In this code, what exactly is FILE_PATH? Generally, should I save the file on the server first and then upload it to the bucket?
private static dynamic resumableUploadFile()
{
    Console.WriteLine("*****begin uploading large file");
    string path = FILE_PATH;
    if (!File.Exists(path))
        path = @"..\..\..\" + FILE_PATH;

    // total size of the file
    long fileSize = new System.IO.FileInfo(path).Length;
    // size of each piece, say 2 MB
    long chunkSize = 2 * 1024 * 1024;
    // number of pieces
    long nbChunks = (long)Math.Ceiling((double)fileSize / (double)chunkSize);
    // record a global response for the next function
    ApiResponse<dynamic> finalRes = null;

    using (FileStream streamReader = new FileStream(path, FileMode.Open))
    {
        // unique id of this upload session
        string sessionId = RandomString(12);
        for (int i = 0; i < nbChunks; i++)
        {
            // start byte position of this piece
            long start = i * chunkSize;
            // end byte position of this piece; for the last piece it is
            // clamped to the end of the file
            long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;
            // tell Forge which byte range this piece covers
            string range = "bytes " + start + "-" + end + "/" + fileSize;
            // length of this piece
            long length = end - start + 1;
            // read this piece from the file stream
            byte[] buffer = new byte[length];
            MemoryStream memoryStream = new MemoryStream(buffer);
            int nb = streamReader.Read(buffer, 0, (int)length);
            memoryStream.Write(buffer, 0, nb);
            memoryStream.Position = 0;
            // upload the piece to the Forge bucket
            ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
                FILE_NAME, (int)length, range, sessionId, memoryStream,
                "application/octet-stream");
            finalRes = response;
            if (response.StatusCode == 202)
            {
                Console.WriteLine("one piece has been uploaded");
                continue;
            }
            else if (response.StatusCode == 200)
            {
                Console.WriteLine("the last piece has been uploaded");
            }
            else
            {
                // any error
                Console.WriteLine(response.StatusCode);
                break;
            }
        }
    }
    return (finalRes);
}
FILE_PATH is the path where you stored the file on your server.
You should upload your file to your server first. Why? Because uploading to the Autodesk Forge server requires an internal token, which should be kept secret (that is why you keep it on your server); you don't want someone to take that token and mess up your Forge account.
The code you pasted from this article is more about uploading from a server where the file is already stored, either for caching purposes or because the server is using/modifying those files.
As Paxton.Huynh said, FILE_PATH there contains the location on the server where the file is stored.
If you just want to push the chunks through your server to Forge (to keep credentials and the internal access token secret), like a proxy, then it's probably better to pass those chunks straight on to Forge instead of first storing the file on the server and then uploading it, which is what the sample code you referred to does.
See e.g. this, though it's in NodeJS: https://github.com/Autodesk-Forge/forge-buckets-tools/blob/master/server/data.management.js#L171
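For illustration, a bare-bones Express proxy along those lines might look like the sketch below; the route, the header names and the forgeUploadChunk helper are assumptions made for this sketch, not the linked sample's actual code:

const express = require('express');
const app = express();

// hypothetical helper that wraps the Forge resumable-upload call with the
// server-side (internal) access token; not part of any official SDK
const { forgeUploadChunk } = require('./forge');

// the client PUTs each chunk here with a Content-Range header,
// and we forward it to Forge without writing it to disk
app.put('/api/upload/:objectName', express.raw({ type: '*/*', limit: '10mb' }), async (req, res) => {
    try {
        const result = await forgeUploadChunk(
            req.params.objectName,
            req.headers['content-range'],  // e.g. "bytes 0-2097151/10485760"
            req.headers['x-session-id'],   // client-chosen resumable session id
            req.body                       // the chunk itself (a Buffer)
        );
        res.status(result.statusCode).end();
    } catch (err) {
        res.status(500).send(err.message);
    }
});

app.listen(3000);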

How to program a URL? (For search query)

A co-worker of mine shared an AutoHotkey script (it's actually an exe file that runs in the background). Anyway, when I press the hotkeys it opens up a company website and creates a shared query for whatever's on the clipboard. I was wondering how this is done and how I can make my own.
I'm especially curious about the "URL" modification that includes all these search options:
https://<COMPANYWEBSITE>/GotoDocumentSearch.do
That's the URL where I can search (sorry, it's restricted, so even if I linked it you couldn't access it).
Anyway, after I set up all my options and click the search button I get the following URL:
https://<COMPANYWEBSITE>/DocumentSearch.do
I inspected the website source and this is the function that's called when I press the search button:
function preSubmitSearch(docPress) {
    document.pressed = docPress;
    // set up local doc types for submit by looping over multi-selects and building a JSON data string
    var localDocTypesJson = "{";
    var sep = "";
    jQuery(".localTypeSel").each(function (i) {
        var selLocalTypes = jQuery(this).multiselect("getChecked");
        // get doc type code from id, e.g. 'localTypeSel_PD'
        //window.console.log("this.id=" + this.id);
        var tmpArr = this.id.split("_");
        var docTypeCode = tmpArr[1];
        var selLocalTypesCnt = selLocalTypes.length;
        if (selLocalTypesCnt > 0) {
            var localTypes = "";
            var sep2 = "";
            for (var i2 = 0; i2 < selLocalTypesCnt; i2++) {
                localTypes += sep2 + "\"" + selLocalTypes[i2].value + "\"";
                sep2 = ",";
            }
            localDocTypesJson += sep + "\"" + docTypeCode + "\": [" + localTypes + "]";
            sep = ",";
        }
    });
    localDocTypesJson += "}";
    jQuery("#localDocTypesJson").val(localDocTypesJson);
}
HOWEVER, the working code that was shared with me (written ages ago by an employee who's no longer here) produces the following URL when I use the hotkey:
https://<COMPANYWEBSITE>/DocumentSearch.do?searchType=all&localDocTypesJson=7D&formAction=search&formInitialized=true&searchResultsView=default&btn_search=Search&docName=*<CLIPBOARD>*&wildcards=on&docRevision=&latestRevOnly=true&docProjectNumber=&docEngChangeOrder=&docLocation=&findLimit=500&docTypes=Customer+Drawing&docTypes=Production+Drawing&docTypes=Manufacturing+Process+Document&docTypes=Specification+Or+Standard
Note: replaced text with "CLIPBOARD" for clarification.
I was wondering: is that a type of "URL programming"? How can I make a direct URL that requests the search results from the website? Is that JavaScript, or how is it programmed? (I know Swift and some Java, but have never really used JavaScript.)
It doesn't seem like you are asking an AutoHotkey (AHK) question, but to give you an AHK example you can copy, here is how I would use AHK to search Google.com for whatever is in my clipboard:
wb := ComObjCreate("InternetExplorer.Application")
wb.Visible := true
wb.Navigate("https://www.google.com/search?q=" . StrReplace(Clipboard, " ", "+") . "", "")
Note that the URL format includes the query ("?q=whatever+you+had+in+Clipboard") with spaces replaced by "+"s.
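The long company URL works the same way: the search form simply serializes its fields into query-string parameters. As an illustration (the parameter names are copied from the URL above; everything else is assumed), you could build such a URL in JavaScript like this:

// build a search URL from the clipboard text; URLSearchParams takes care of
// the encoding (it turns spaces into "+", matching the URL the hotkey produces)
function buildSearchUrl(clipboardText) {
    var params = new URLSearchParams({
        searchType: 'all',
        formAction: 'search',
        docName: '*' + clipboardText + '*', // wildcards around the search term
        wildcards: 'on',
        findLimit: '500'
    });
    return 'https://<COMPANYWEBSITE>/DocumentSearch.do?' + params.toString();
}

// e.g. open the results directly:
// window.open(buildSearchUrl('some part number'));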
Hth,

ffmpeg azure function consumption plan low CPU availability for high volume requests

I am running an Azure queue function on a consumption plan; my function starts an FFmpeg process and is accordingly very CPU intensive. When I run the function with fewer than 100 items in the queue at once it works perfectly: Azure scales up, gives me plenty of servers, and all of the tasks complete very quickly. My problem is that once I queue more than 300 or 400 items at once, it starts fine, but after a while the CPU slowly drops from 80% utilisation to only around 10%, and my functions can't finish in time with only 10% CPU. This can be seen in the image shown below.
Does anyone know why the CPU usage goes down the more instances my function creates? Thanks in advance, Cuan.
Edit: the function is set to run only one task at a time per instance, but the problem persists when set to 2 or 3 concurrent processes per instance in host.json.
Edit: the CPU drops become noticeable at 15-20 servers and start causing failures at around 60. After that the CPU bottoms out at an average of 8-10%, with individual instances reaching 0-3%, and the server count seems to increase without limit (which would be more helpful if the servers came with some CPU).
Thanks again, Cuan.
I've also added the function code to the bottom of this post in case it helps.
using System.Net;
using System;
using System.Diagnostics;
using System.ComponentModel;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed a request: {myQueueItem}");
    // basic parameters
    string ffmpegFile = @"D:\home\site\wwwroot\CommonResources\ffmpeg.exe";
    string outputpath = @"D:\home\site\wwwroot\queue-ffmpeg-test\output\";
    string reloutputpath = "output/";
    string relinputpath = "input/";
    string outputfile = "video2.mp4";
    string dir = @"D:\home\site\wwwroot\queue-ffmpeg-test\";
    // special parameters
    string videoFile = "1 minute basic.mp4";
    string sub = "1 minute sub.ass";
    // guid tmp files
    // Guid g1 = Guid.NewGuid();
    // Guid g2 = Guid.NewGuid();
    // string f1 = g1 + ".mp4";
    // string f2 = g2 + ".ass";
    string f1 = videoFile;
    string f2 = sub;
    // guid output - we will now do this at the caller level
    string g3 = myQueueItem;
    string outputGuid = g3 + ".mp4";
    // get input files
    // argument
    string tmp = subArg(f1, f2, outputGuid);
    //"-i \"" + @"input/tmp.mp4" + "\" -vf \"ass = '" + sub + "'\" \"" + reloutputpath + outputfile + "\" -y"
    log.Info("ffmpeg argument is: " + tmp);
    // process start parameters
    Process process = new Process();
    process.StartInfo.FileName = ffmpegFile;
    process.StartInfo.Arguments = tmp;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.StartInfo.WorkingDirectory = dir;
    // output handler
    process.OutputDataReceived += new DataReceivedEventHandler(
        (s, e) =>
        {
            log.Info("O: " + e.Data);
        }
    );
    process.ErrorDataReceived += new DataReceivedEventHandler(
        (s, e) =>
        {
            log.Info("E: " + e.Data);
        }
    );
    // start process
    process.Start();
    log.Info("process started");
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();
}

public static void getFile(string link, string fileName, string dir, string relInputPath)
{
    using (var client = new WebClient())
    {
        client.DownloadFile(link, dir + relInputPath + fileName);
    }
}

public static string subArg(string input1, string input2, string output1)
{
    return "-i \"" + @"input/" + input1 + "\" -vf \"ass = '" + @"input/" + input2 + "'\" \"" + @"output/" + output1 + "\" -y";
}
When you use the D:\home directory you are writing to the shared file system that every instance mounts, which means each instance is continually trying to write to the same spot as the functions run, and that causes a massive I/O block. Writing to D:\local instead, and then sending the finished file somewhere else, solves that issue: rather than each instance constantly writing to a shared location, they write only when completed, and to a location designed to handle high throughput.
The easiest way I could find to manage the input and output after writing to D:\local was to hook the function up to an Azure storage container and handle the ins and outs that way. Doing so kept the average CPU at 90-100% for upwards of 70 concurrent instances.
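To make the pattern concrete, here is a minimal sketch (written in Node.js for brevity, since Azure Functions also supports it; the container name and connection-string setting are assumptions). On a Windows consumption plan, os.tmpdir() resolves to a folder under D:\local, which is instance-local storage:

const os = require('os');
const path = require('path');
const azure = require('azure-storage'); // the legacy azure-storage SDK

module.exports = function (context, myQueueItem) {
    // 1. write the ffmpeg output to instance-local storage (under D:\local),
    //    not to the shared D:\home file share
    const localOut = path.join(os.tmpdir(), myQueueItem + '.mp4');

    // ... run ffmpeg here, writing to localOut ...

    // 2. once the file is finished, push it to blob storage in one go;
    //    blob storage is built for high-throughput writes
    const blobService = azure.createBlobService(process.env.STORAGE_CONNECTION);
    blobService.createBlockBlobFromLocalFile('output', myQueueItem + '.mp4', localOut,
        err => context.done(err));
};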

Capture loaded source of audio tag, using Ruby on Rails

I need to save the currently-loaded source file of an audio tag. Sounds simple, but here's the catch: the source serves a random sound file on every request.
The audio tag is created, the source set, and the audio played with JavaScript, as seen here:
function createAudio() {
    var audio = document.createElement('audio');
    audio.setAttribute('id', 'file_audio');
    audio.setAttribute('controls', 'controls');
    audio.setAttribute('autoplay', 'true');
    audio.setAttribute('hidden', 'true');
    audio.appendChild(createSource());
    return audio;
}

function createSource() {
    var source = document.createElement('source');
    var d = new Date();
    source.setAttribute('id', 'file_audio_source');
    source.setAttribute('src', 'file.wav?r=' + d.getTime());
    source.setAttribute('type', 'audio/wav');
    return source;
}
this.switchAudio = function() {
    var d = new Date();
    $svjq("#file_audio").find('audio').remove();
    $svjq("#file_audio").find('source').remove();
    $svjq("#file_audio").find('embed').remove();
    if (Modernizr.audio.wav) {
        document.getElementById("file_audio").appendChild(createAudio());
    } else {
        $svjq("#file_audio").append('<embed id="file_audio_embed" name="file_audio_embed" src="file.wav?r=' + d.getTime() + '" autostart="true" cache="false" type="audio/wav" hidden="true" loop="false" enablejavascript="true">');
    }
};

this.playAgain = function() {
    if (Modernizr.audio.wav) {
        document.getElementById('file_audio').play();
    } else {
        document.getElementById('file_audio_embed').play();
    }
};
I need to be able to save the currently-loaded file in the source. However, if you access the file URL in the browser it returns a different file.
Automated tools such as Watir-WebDriver, Capybara (Capybara-Webkit), and Mechanize also get back a random file. For example:
require 'capybara'

session = Capybara::Session.new(:selenium)
session.visit('url')
session.click_link 'play sound' # on every click you get a new sound
session.click_link 'play again'

# file_audio_source
e = session.find_by_id('file_audio_source')
e[:src]

# save the currently open page and open it
# session.save_and_open_page

# returns a different file
session.visit(e[:src])
# returns a different file
session.execute_script("window.open('" + e[:src] + "')")

require 'mechanize'

agent = Mechanize.new { |agent| agent.ssl_version, agent.verify_mode = 'SSLv3', OpenSSL::SSL::VERIFY_NONE }
filedata = agent.get(e[:src]).content
aFile = File.new("/Users/me/Documents/test/test111.wav", 'wb')
# aFile.syswrite(filedata)
Could the file be embedded into the HTML or cached? And is there a way to get at the file and save it locally?
Other options include recording from the sound device or using the mic to record the sound as it plays, though those options are not at all ideal.
opt. 1:
require 'capybara'

session = Capybara::Session.new(:selenium)
session.visit('url')
session.click_link 'Play sound' # this gets the file into the cache; the code below then gets it out

opt. 2:
# execute the JavaScript that loads the file / creates the sound URL, without playing the sound
session.execute_script("document.getElementById('file_audio').appendChild(createSource());")
e = session.find_by_id("file_audio_source")
session.visit(e[:src])

Watir and Capybara perform great! :)
But now the problem is to make it headless, and it seems that a headless browser doesn't act the same as a non-headless one.
A method that provides the headless functionality:
def headless_get_file(url)
  require 'uri'
  res = @session.driver.cookies
  agent = Mechanize.new { |agent| agent.ssl_version, agent.verify_mode = 'SSLv3', OpenSSL::SSL::VERIFY_NONE }
  uri = URI('https://....')
  # copy the browser session's cookies into the Mechanize agent
  res.keys.each do |i|
    temp = res[i]
    cookie = Mechanize::Cookie.new(i, temp.value)
    cookie.domain = temp.domain
    cookie.path = temp.path
    agent.cookie_jar.add(uri, cookie)
  end
  filedata = agent.get(url).content
  # 'dir' is assumed to be defined by the caller
  aFile = File.new("#{dir}/file.wav", 'wb')
  aFile.syswrite(filedata)
end
Could the file be embedded into the html or cached?
Yes it can! See: Is it possible to use data URIs in video and audio tags?
<audio controls="controls" autobuffer="autobuffer" autoplay="autoplay">
<source src="data:audio/wav;base64,UklGRhwMAABXQVZFZm10IBAAAAABAAEAgD4AAIA+AAABAAgAZGF0Ya4LAACAgICAgICAgICAgICAgICAgICAgICAgICAf3hxeH+AfXZ1eHx6dnR5fYGFgoOKi42aloubq6GOjI2Op7ythXJ0eYF5aV1AOFFib32HmZSHhpCalIiYi4SRkZaLfnhxaWptb21qaWBea2BRYmZTVmFgWFNXVVVhaGdbYGhZbXh1gXZ1goeIlot1k6yxtKaOkaWhq7KonKCZoaCjoKWuqqmurK6ztrO7tbTAvru/vb68vbW6vLGqsLOfm5yal5KKhoyBeHt2dXBnbmljVlJWUEBBPDw9Mi4zKRwhIBYaGRQcHBURGB0XFxwhGxocJSstMjg6PTc6PUxVV1lWV2JqaXN0coCHhIyPjpOenqWppK6xu72yxMu9us7Pw83Wy9nY29ve6OPr6uvs6ezu6ejk6erm3uPj3dbT1sjBzdDFuMHAt7m1r7W6qaCupJOTkpWPgHqAd3JrbGlnY1peX1hTUk9PTFRKR0RFQkRBRUVEQkdBPjs9Pzo6NT04Njs+PTxAPzo/Ojk6PEA5PUJAQD04PkRCREZLUk1KT1BRUVdXU1VRV1tZV1xgXltcXF9hXl9eY2VmZmlna3J0b3F3eHyBfX+JgIWJiouTlZCTmpybnqSgnqyrqrO3srK2uL2/u7jAwMLFxsfEv8XLzcrIy83JzcrP0s3M0dTP0drY1dPR1dzc19za19XX2dnU1NjU0dXPzdHQy8rMysfGxMLBvLu3ta+sraeioJ2YlI+MioeFfX55cnJsaWVjXVlbVE5RTktHRUVAPDw3NC8uLyknKSIiJiUdHiEeGx4eHRwZHB8cHiAfHh8eHSEhISMoJyMnKisrLCszNy8yOTg9QEJFRUVITVFOTlJVWltaXmNfX2ZqZ21xb3R3eHqAhoeJkZKTlZmhpJ6kqKeur6yxtLW1trW4t6+us7axrbK2tLa6ury7u7u9u7vCwb+/vr7Ev7y9v8G8vby6vru4uLq+tri8ubi5t7W4uLW5uLKxs7G0tLGwt7Wvs7avr7O0tLW4trS4uLO1trW1trm1tLm0r7Kyr66wramsqaKlp52bmpeWl5KQkImEhIB8fXh3eHJrbW5mYGNcWFhUUE1LRENDQUI9ODcxLy8vMCsqLCgoKCgpKScoKCYoKygpKyssLi0sLi0uMDIwMTIuLzQ0Njg4Njc8ODlBQ0A/RUdGSU5RUVFUV1pdXWFjZGdpbG1vcXJ2eXh6fICAgIWIio2OkJGSlJWanJqbnZ2cn6Kkp6enq62srbCysrO1uLy4uL+/vL7CwMHAvb/Cvbq9vLm5uba2t7Sysq+urqyqqaalpqShoJ+enZuamZqXlZWTkpGSkpCNjpCMioqLioiHhoeGhYSGg4GDhoKDg4GBg4GBgoGBgoOChISChISChIWDg4WEgoSEgYODgYGCgYGAgICAgX99f398fX18e3p6e3t7enp7fHx4e3x6e3x7fHx9fX59fn1+fX19fH19fnx9fn19fX18fHx7fHx6fH18fXx8fHx7fH1+fXx+f319fn19fn1+gH9+f4B/fn+AgICAgH+AgICAgIGAgICAgH9+f4B+f35+fn58e3t8e3p5eXh4d3Z1dHRzcXBvb21sbmxqaWhlZmVjYmFfX2BfXV1cXFxaWVlaWVlYV1hYV1hYWVhZWFlaWllbXFpbXV5fX15fYWJhYmNiYWJhYWJjZGVmZ2hqbG1ub3Fxc3V3dnd6e3t8e3x+f3+AgICAgoGBgoKDhISFh4aHiYqKi4uMjYyOj4+QkZKUlZWXmJmbm52enqCioqSlpqeoqaqrrK2ur7CxsrGys7O0tbW2tba3t7i3uLe4t7a3t7i3tre2tba1tLSzsrKysbCvrq2sq6qop6alo6OioJ+dnJqZmJeWlJKSkI+OjoyLioiIh4WEg4GBgH9+fXt6eXh3d3V0c3JxcG9ubWxsamppaWhnZmVlZGRjYmNiYWBhYGBfYF9fXl5fXl1dXVxdXF1dXF1cXF1cXF1dXV5dXV5fXl9eX19gYGFgYWJhYmFiY2NiY2RjZGNkZWRlZGVmZmVmZmVmZ2dmZ2hnaGhnaGloZ2hpaWhpamlqaWpqa2pra2xtbGxtbm1ubm5vcG9wcXBxcnFycnN0c3N0dXV2d3d4eHh5ent6e3x9fn5/f4CAgIGCg4SEhYaGh4iIiYqLi4uMjY2Oj5CQkZGSk5OUlJWWlpeYl5iZmZqbm5ybnJ2cnZ6en56fn6ChoKChoqGio6KjpKOko6SjpKWkpaSkpKSlpKWkpaSlpKSlpKOkpKOko6KioaKhoaCfoJ+enp2dnJybmpmZmJeXlpWUk5STkZGQj4+OjYyLioqJh4eGhYSEgoKBgIB/fn59fHt7enl5eHd3dnZ1dHRzc3JycXBxcG9vbm5tbWxrbGxraWppaWhpaGdnZ2dmZ2ZlZmVmZWRlZGVkY2RjZGNkZGRkZGRkZGRkZGRjZGRkY2RjZGNkZWRlZGVmZWZmZ2ZnZ2doaWhpaWpra2xsbW5tbm9ub29wcXFycnNzdHV1dXZ2d3d4eXl6enp7fHx9fX5+f4CAgIGAgYGCgoOEhISFhoWGhoeIh4iJiImKiYqLiouLjI2MjI2OjY6Pj46PkI+QkZCRkJGQkZGSkZKRkpGSkZGRkZKRkpKRkpGSkZKRkpGSkZKRkpGSkZCRkZCRkI+Qj5CPkI+Pjo+OjY6Njo2MjYyLjIuMi4qLioqJiomJiImIh4iHh4aHhoaFhoWFhIWEg4SDg4KDgoKBgoGAgYCBgICAgICAf4CAf39+f35/fn1+fX59fHx9fH18e3x7fHt6e3p7ent6e3p5enl6enl6eXp5eXl4eXh5eHl4eXh5eHl4eXh5eHh3eHh4d3h4d3h3d3h4d3l4eHd4d3h3eHd4d3h3eHh4eXh5eHl4eHl4eXh5enl6eXp5enl6eXp5ent6ent6e3x7fHx9fH18fX19fn1+fX5/fn9+f4B/gH+Af4CAgICAgIGAgYCBgoGCgYKCgoKDgoOEg4OEg4SFhIWEhYSFhoWGhYaHhoeHhoeGh4iHiIiHiImIiImKiYqJiYqJiouKi4qLiouKi4qLiouKi4qLiouKi4qLi4qLiouKi4qLiomJiomIiYiJiImIh4iIh4iHhoeGhYWGhYaFhIWEg4OEg4KDgoOCgYKBgIGAgICAgH+Af39+f359fn18fX19fHx8e3t6e3p7enl6eXp5enl6enl5eXh5eHh5eHl4eXh5eHl4eHd5eHd3eHl4d3h3eHd4d3h3eHh4d3h4d3h3d3h5eHl4eXh5eHl5eXp5enl6eXp7ent6e3p7e3t7fHt8e3x8fHx9fH1+fX59fn9+f35/gH+AgICAgICAgYGAgYKBgoGCgoKDgoOEg4SEhIWFhIWFhoWGhYaGhoaHhoeGh4aHhoeIh4iHiIeHiIeIh4iHiIeIiIiHiIeIh4iHiIiHiIeIh4iHiIeIh4eIh4eIh4aHh4aH
hoeGh4aHhoWGhYaFhoWFhIWEhYSFhIWEhISDhIOEg4OCg4OCg4KDgYKCgYKCgYCBgIGAgYCBgICAgICAgICAf4B/f4B/gH+Af35/fn9+f35/fn1+fn19fn1+fX59fn19fX19fH18fXx9fH18fXx9fH18fXx8fHt8e3x7fHt8e3x7fHt8e3x7fHt8e3x7fHt8e3x7fHt8e3x8e3x7fHt8e3x7fHx8fXx9fH18fX5+fX59fn9+f35+f35/gH+Af4B/gICAgICAgICAgICAgYCBgIGAgIGAgYGBgoGCgYKBgoGCgYKBgoGCgoKDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KDgoOCg4KCgoGCgYKBgoGCgYKBgoGCgYKBgoGCgYKBgoGCgYKBgoGCgYKBgoGCgYKBgoGBgYCBgIGAgYCBgIGAgYCBgIGAgYCBgIGAgYCBgIGAgYCAgICBgIGAgYCBgIGAgYCBgIGAgYCBgExJU1RCAAAASU5GT0lDUkQMAAAAMjAwOC0wOS0yMQAASUVORwMAAAAgAAABSVNGVBYAAABTb255IFNvdW5kIEZvcmdlIDguMAAA" />
</audio>
Is there a way to get it and save it locally?
Yes, you can! You just have to find the cache directory :)
http://www.digitalmediaminute.com/article/626/viewing-browser-cache-in-firefox
Then write a little code to go and fetch it. This code goes with opt. 1, not opt. 2:
def getlatestdir(newdirs)
  times = Array.new
  newdirs.each_with_index do |newdir, index|
    times[index] = File::mtime(newdir)
  end
  # find the most recently modified directory
  temp = times[0]
  count = 0
  times.each_with_index do |time, index|
    if temp < time
      temp = time
      count = index
    end
  end
  return newdirs[count]
end

def getCacheDir
  # how to get the path:
  # in irb enter
  #   require 'capybara'
  #   session = Capybara::Session.new(:selenium)
  #   session.visit('https://www.google.co.za')
  # then open a new tab and enter about:cache
  # copy the disk cache device cache directory (from /var/... to .../T/)
  path = '/var/folders/9x/51cvmc215xx6zy9vd_64sxwc0000gn/T/'
  dirs = Dir.glob(path + '*/')
  newdirs = Array.new
  dirs.each_with_index do |dir, index|
    if dir.include? 'webdriver-profile'
      newdirs[newdirs.length] = dir
    end
  end
  the_cache_dir = getlatestdir(newdirs) + 'Cache'
  return the_cache_dir
end

def saveFile
  rifffile = ''
  count = 0
  the_cache_dir = getCacheDir
  files = Dir.glob(the_cache_dir + '/*/*/*')
  # a WAV file starts with the ASCII bytes 'RIFF'
  files.each_with_index do |file, index|
    bytes = open(file, 'rb') { |io| io.read }
    str = bytes[0].to_s + bytes[1].to_s + bytes[2].to_s + bytes[3].to_s
    if str == 'RIFF'
      count = index
      rifffile = file
      break
    end
  end
  puts rifffile
  filename = 'test123.wav'
  # read the file's bytes
  bytes = File.open(rifffile, 'rb') { |io| io.read }
  # write them out to the current directory
  f = File.new(filename, 'wb')
  f.syswrite(bytes)
  return filename
end
Granted, the above code isn't the greatest or fastest, but it gets the job done.
Other options include recording from the sound device, or using the mic to record the sound played
That would take too long and too much effort :P
In summary, opt. 1 is OK but not great, while opt. 2 is far, far better :)
ajt

HTML5 File API - slicing or not?

There are some nice examples about file uploading at HTML5 Rocks, but there is something that isn't clear enough for me.
As far as I can see, the example code about file slicing gets a specific part of the file and then reads it. As the note says, this is helpful when dealing with large files.
The example about monitoring uploads also notes that this is useful when uploading large files.
Am I safe without slicing the file? I mean server-side problems, memory, etc. Chrome doesn't support File.slice() currently, and I don't want to use a bloated jQuery plugin if possible.
Both Chrome and FF support File.slice(), but it has been prefixed as File.webkitSlice() / File.mozSlice() since its semantics changed some time ago. There's another example of using it here to read part of a .zip file. The new semantics are:
Blob.webkitSlice(
    in long long start,
    in long long end,
    in DOMString contentType
);
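In practice that means feature-detecting whichever name the browser provides; a minimal sketch for the browsers of that era:

// pick whichever slice implementation the browser exposes
function getSlice(file) {
    var slicer = file.slice || file.webkitSlice || file.mozSlice;
    return function (start, end) {
        return slicer.call(file, start, end);
    };
}

// usage: var slice = getSlice(file); var blob = slice(0, 1048576);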
Are you safe without slicing it? Sure, but remember you're reading the whole file into memory. The HTML5 Rocks tutorial offers chunking the upload as a potential performance improvement. With some decent server logic, you could also recover from a failed upload more easily; the user wouldn't have to retry an entire 500 MB file if it failed at 99% :)
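To illustrate that recovery idea, here is a minimal sketch; the /upload endpoint and the X-Start-Byte header are hypothetical, not from the tutorial:

// upload a file in 1 MB chunks, resuming from whatever byte offset the
// server reports it already has
function uploadInChunks(file, resumeFrom) {
    var CHUNK = 1048576;
    var start = resumeFrom || 0;
    var slicer = file.slice || file.webkitSlice || file.mozSlice;

    function sendNext() {
        if (start >= file.size) return;
        var end = Math.min(start + CHUNK, file.size);
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/upload', true);
        xhr.setRequestHeader('X-Start-Byte', start);
        xhr.onload = function () {
            start = end; // only advance once the server confirms the chunk
            sendNext();
        };
        xhr.send(slicer.call(file, start, end));
    }
    sendNext();
}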
This is one way to slice the file and pass the pieces as blobs:
function readBlob() {
    var files = document.getElementById('files').files;
    var file = files[0];
    var ONEMEGABYTE = 1048576;
    var start = 0;
    var stop = ONEMEGABYTE;
    var remainder = file.size % ONEMEGABYTE;
    var blkcount = Math.floor(file.size / ONEMEGABYTE);
    if (remainder != 0) blkcount = blkcount + 1;

    for (var i = 0; i < blkcount; i++) {
        var reader = new FileReader();
        // the last block may be shorter than a full megabyte
        if (i == (blkcount - 1) && remainder != 0) {
            stop = start + remainder;
        }
        // slicing the file
        var blob = file.webkitSlice(start, stop);
        reader.readAsBinaryString(blob);
        start = stop;
        stop = stop + ONEMEGABYTE;
    } // end of loop
} // end of readBlob