I have a question about uploading large objects to a Forge bucket. I know that I need to use the /resumable API, but how do I get the file when I only have its filename? In the code below, what exactly is FILE_PATH? Generally, should I save the file on my server first and then upload it to the bucket?
private static dynamic resumableUploadFile()
{
    Console.WriteLine("*****begin uploading large file");
    string path = FILE_PATH;
    if (!File.Exists(path))
        path = @"..\..\..\" + FILE_PATH;

    // total size of the file
    long fileSize = new System.IO.FileInfo(path).Length;
    // size of each piece, say 2 MB
    long chunkSize = 2 * 1024 * 1024;
    // number of pieces (ceiling of fileSize / chunkSize)
    long nbChunks = (long)Math.Ceiling((double)fileSize / chunkSize);
    // record the final response for the caller
    ApiResponse<dynamic> finalRes = null;

    using (FileStream streamReader = new FileStream(path, FileMode.Open))
    {
        // unique id of this upload session
        string sessionId = RandomString(12);
        for (int i = 0; i < nbChunks; i++)
        {
            // start byte position of this piece
            long start = i * chunkSize;
            // end byte position of this piece; the last piece ends at the end of the file
            long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;
            // tell Forge which byte range this piece covers
            string range = "bytes " + start + "-" + end + "/" + fileSize;
            // length of this piece
            long length = end - start + 1;
            // read this piece from the file stream
            byte[] buffer = new byte[length];
            int nb = streamReader.Read(buffer, 0, (int)length);
            MemoryStream memoryStream = new MemoryStream(buffer, 0, nb);
            // upload the piece to the Forge bucket
            ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
                FILE_NAME, (int)length, range, sessionId, memoryStream,
                "application/octet-stream");
            finalRes = response;
            if (response.StatusCode == 202)
            {
                Console.WriteLine("one piece has been uploaded");
                continue;
            }
            else if (response.StatusCode == 200)
            {
                Console.WriteLine("the last piece has been uploaded");
            }
            else
            {
                // any other status is an error
                Console.WriteLine(response.StatusCode);
                break;
            }
        }
    }
    return (finalRes);
}
FILE_PATH is the path to the file you stored on your server.
You should upload your file to your server first. Why? Because uploading to the Autodesk Forge servers requires an internal access token, which must be kept secret (that's why you keep it on your server); you don't want someone to grab that token and mess with your Forge account.
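If you don't already have the file on the server, one option (a minimal sketch, assuming an ASP.NET Web API backend; UploadController and the ResumableUploader.Upload wrapper are made-up names) is to receive the browser upload first, save it to disk, and then run the chunked upload above using that saved path as FILE_PATH:
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Http;

public class UploadController : ApiController
{
    // Hypothetical endpoint: the browser posts the file here (multipart/form-data),
    // we save it on the server, then run the chunked upload to Forge shown above.
    [HttpPost]
    public async Task<IHttpActionResult> Post()
    {
        if (!Request.Content.IsMimeMultipartContent())
            return BadRequest("Expected multipart/form-data");

        string uploadDir = HostingEnvironment.MapPath("~/App_Data/uploads");
        var provider = new MultipartFormDataStreamProvider(uploadDir);
        await Request.Content.ReadAsMultipartAsync(provider);

        // This saved path is what FILE_PATH refers to in resumableUploadFile().
        string savedPath = provider.FileData[0].LocalFileName;
        dynamic result = ResumableUploader.Upload(savedPath); // hypothetical wrapper around the code above
        return Ok((object)result);
    }
}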
The code you pasted from that article is more about uploading from a server where the file is already stored - either for caching purposes or because the server is using/modifying those files.
As Paxton.Huynh said, FILE_PATH there contains the location on the server where the file is stored.
If you just want to upload the chunks to Forge through your server (to keep the credentials and internal access token secret), i.e. use it as a proxy, then it's probably better to pass those chunks straight on to Forge instead of first storing the file on the server and then forwarding it - which is what the sample code you referred to does.
See e.g. this, though it's in NodeJS: https://github.com/Autodesk-Forge/forge-buckets-tools/blob/master/server/data.management.js#L171
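A rough sketch of that proxy approach in C# (assuming an ASP.NET Web API endpoint and the Forge .NET SDK's ObjectsApi; the route and session-id handling here are placeholders) would forward each incoming chunk to Forge without ever writing the file to disk:
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Autodesk.Forge;        // assumed Forge .NET SDK namespace
using Autodesk.Forge.Client; // ApiResponse<T>

public class ForgeProxyController : ApiController
{
    // Assumes objectsApi was already configured with a 2-legged token kept on the server.
    private static readonly ObjectsApi objectsApi = new ObjectsApi();

    // Hypothetical route: the browser PUTs each chunk here with a Content-Range header,
    // and we forward it straight to Forge.
    [HttpPut]
    [Route("api/forge-proxy/{bucketKey}/{objectName}/{sessionId}")]
    public async Task<IHttpActionResult> ForwardChunk(string bucketKey, string objectName, string sessionId)
    {
        // e.g. "bytes 0-2097151/10485760", taken from the client's Content-Range header
        var cr = Request.Content.Headers.ContentRange;
        string range = $"bytes {cr.From}-{cr.To}/{cr.Length}";

        using (var chunkStream = await Request.Content.ReadAsStreamAsync())
        using (var buffered = new MemoryStream())
        {
            // Buffer only this chunk (a few MB), never the whole file.
            await chunkStream.CopyToAsync(buffered);
            buffered.Position = 0;

            ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(
                bucketKey, objectName, (int)buffered.Length, range, sessionId,
                buffered, "application/octet-stream");

            return Content((HttpStatusCode)response.StatusCode, (object)response.Data);
        }
    }
}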
I am running an Azure queue function on a consumption plan; my function starts an FFmpeg process and is accordingly very CPU intensive. When I run the function with fewer than 100 items in the queue at once it works perfectly: Azure scales up, gives me plenty of servers, and all of the tasks complete very quickly. My problem is that once I start doing more than 300 or 400 items at once, it starts fine, but after a while the CPU slowly goes from 80% utilisation to only around 10% utilisation - my functions can't finish in time with only 10% CPU. This can be seen in the image shown below.
Does anyone know why the CPU usage goes lower the more instances my function creates? Thanks in advance, Cuan.
Edit: the function is set to run only one item at a time per instance, but the problem persists when set to 2 or 3 concurrent executions per instance in host.json.
Edit: the CPU drops become noticeable at 15-20 servers and start causing failures at around 60. After that the CPU bottoms out at an average of 8-10%, with individual instances reaching 0-3%, and the server count seems to increase without limit (which would be more helpful if I got some CPU with the servers).
Thanks again, Cuan.
I've also added the function code to the bottom of this post in case it helps.
using System.Net;
using System;
using System.Diagnostics;
using System.ComponentModel;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed a request: {myQueueItem}");

    //Basic Parameters
    string ffmpegFile = @"D:\home\site\wwwroot\CommonResources\ffmpeg.exe";
    string outputpath = @"D:\home\site\wwwroot\queue-ffmpeg-test\output\";
    string reloutputpath = "output/";
    string relinputpath = "input/";
    string outputfile = "video2.mp4";
    string dir = @"D:\home\site\wwwroot\queue-ffmpeg-test\";

    //Special Parameters
    string videoFile = "1 minute basic.mp4";
    string sub = "1 minute sub.ass";

    //guid tmp files
    // Guid g1 = Guid.NewGuid();
    // Guid g2 = Guid.NewGuid();
    // string f1 = g1 + ".mp4";
    // string f2 = g2 + ".ass";
    string f1 = videoFile;
    string f2 = sub;

    //guid output - we will now do this at the caller level
    string g3 = myQueueItem;
    string outputGuid = g3 + ".mp4";

    //get input files
    //argument
    string tmp = subArg(f1, f2, outputGuid);
    //String.Format("-i \"" + @"input/tmp.mp4" + "\" -vf \"ass = '" + sub + "'\" \"" + reloutputpath + outputfile + "\" -y");
    log.Info("ffmpeg argument is: " + tmp);

    //startprocess parameters
    Process process = new Process();
    process.StartInfo.FileName = ffmpegFile;
    process.StartInfo.Arguments = tmp;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.StartInfo.WorkingDirectory = dir;

    //output handler
    process.OutputDataReceived += new DataReceivedEventHandler(
        (s, e) =>
        {
            log.Info("O: " + e.Data);
        }
    );
    process.ErrorDataReceived += new DataReceivedEventHandler(
        (s, e) =>
        {
            log.Info("E: " + e.Data);
        }
    );

    //start process
    process.Start();
    log.Info("process started");
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();
}

public static void getFile(string link, string fileName, string dir, string relInputPath)
{
    using (var client = new WebClient())
    {
        client.DownloadFile(link, dir + relInputPath + fileName);
    }
}

public static string subArg(string input1, string input2, string output1)
{
    return String.Format("-i \"" + @"input/" + input1 + "\" -vf \"ass = '" + @"input/" + input2 + "'\" \"" + @"output/" + output1 + "\" -y");
}
When you use the D:\home directory you are writing to the shared (network-backed) file system that all instances use, which means every instance is continually trying to write to the same place while the functions run, and that causes a massive I/O bottleneck. Writing to D:\local instead and then sending the finished file somewhere else solves the issue: rather than each instance constantly writing to a shared location, they write locally while processing and only push the completed file to a store designed for high throughput.
The easiest way I found to manage the input and output after writing to D:\local was to hook the function up to an Azure Storage container and handle the ins and outs that way. Doing so kept the average CPU at 90-100% for upwards of 70 concurrent instances.
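A rough sketch of that pattern (assuming the WindowsAzure.Storage client library and the AzureWebJobsStorage connection string; the container name and paths are placeholders): have FFmpeg write into a per-invocation folder on the instance-local temp drive, then push the finished file to Blob Storage.
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class LocalThenBlob
{
    // Called after FFmpeg has finished writing its output file.
    public static void PushResultToBlob(string myQueueItem)
    {
        // Per-invocation working folder on the instance-local disk (under D:\local on
        // the consumption plan), so instances never contend for the same network share.
        string workDir = Path.Combine(Path.GetTempPath(), myQueueItem);
        string localOutput = Path.Combine(workDir, myQueueItem + ".mp4");

        // Upload the finished file to a blob container designed for high throughput.
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("ffmpeg-output"); // placeholder name
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference(myQueueItem + ".mp4");
        blob.UploadFromFile(localOutput);

        // Clean up the local copy so the temp drive doesn't fill up.
        Directory.Delete(workDir, recursive: true);
    }
}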
I wrote a sequence that should be generic across a variety of tests. I want to do this by adding a configuration file for each test.
The code for the sequence:
//----------------------------------------------------------------------
// Sequence
//----------------------------------------------------------------------
class axi_sequence extends uvm_sequence#(axi_transaction);
  `uvm_object_utils(axi_sequence)

  // new
  function new (string name = "axi_sequence");
    super.new(name);
  endfunction: new

  // main task
  task body();
    int file_p, temp, len;
    byte mode;
    bit [31:0] addr;
    string str;
    axi_transaction axi_trx;
    bit [31:0] transfers [$];
    bit [31:0] data;

    // open file
    file_p = $fopen("./sv/write_only.txt", "r"); // the name of the file should be the same as the name of the test
    // in case the file doesn't exist
    `my_fatal(file_p != 0, "FILE OPEN FAILED")

    // read file
    while ($feof(file_p) == 0) begin
      temp = $fgets(str, file_p);
      axi_trx = axi_transaction::type_id::create(.name("axi_trx"), .contxt(get_full_name()));
      // ~start_item~ and <finish_item> together will initiate operation of a sequence item.
      start_item(axi_trx);
      transfers = {};
      $sscanf(str, "%c %d %h", mode, len, addr);
      // assign the data to str
      str = str.substr(12, str.len()-1);
      // create and assign to the transfers queue
      if (mode == "w") begin
        for (int i = 0; i <= len; i++) begin
          temp = $sscanf(str, "%h", data);
          `my_fatal(temp > 0, "THE LENGTH PARAM IS WRONG - too big")
          transfers.push_back(data);
          str = str.substr(13+(i+1)*8, str.len()-1);
        end // end for
        `my_fatal($sscanf(str, "%h", temp) <= 0, "THE LENGTH PARAM IS WRONG - too small")
      end // if
      axi_trx.init(mode, len, addr, transfers);
      if (to_random == 1)             // to_random should be a part of the configuration file
        axi_trx.my_random();
      else
        axi_trx.delay = const_delay;  // const_delay should be a part of the configuration file
      // finish_item sends the request item to the sequencer, which forwards it to the driver.
      finish_item(axi_trx);
    end // while
  endtask: body
endclass: axi_sequence
Should I do it with a different configuration file per test, or can I do it with values that are passed from the test to the agent through the config DB?
And how can I pass a different path (for the file_p = $fopen() call) for each test?
You shouldn't need a separate configuration file for each test. Ideally, you would just pass the configuration down from the test level into the env through the config_db (or through a separate configuration object for your agent).
When you create your sequence in your test (or virtual sequencer), you should be able to set your variables as needed.
I am trying to pass an image into a report via a text report parameter. However, it only seems to work when the image is small. The code I am using to call the report is along these lines:
private CustomerAttachment LoadFromReportServer(byte[] imgAsByte)
{
    string mimeType, encoding, extension;
    string[] streamids;
    Warning[] warnings;

    string base64String = Convert.ToBase64String(imgAsByte);

    _reportParams = new List<ReportParameter>();
    _reportParams.Add(new ReportParameter("p_farm_map", base64String));

    var rptViewer = new ReportViewer();
    rptViewer.ShowCredentialPrompts = false;
    rptViewer.ShowParameterPrompts = false;
    rptViewer.ProcessingMode = ProcessingMode.Remote;

    _reportServerUrl = "http://MyReportServer.wesenergy.local/ReportServer";
    _reportFolderPath = "/WcfReportTest/";
    _reportName = "FarmMapReport";

    rptViewer.ServerReport.ReportServerUrl = new Uri(_reportServerUrl);
    rptViewer.ServerReport.ReportPath = _reportFolderPath + _reportName;
    rptViewer.ServerReport.SetParameters(_reportParams);

    // Fails on this line here
    byte[] bytes = rptViewer.ServerReport.Render(_reportFormat, deviceInfo, out mimeType,
        out encoding, out extension, out streamids, out warnings);

    return new CustomerAttachment(_customerId, _fileName, "application/pdf", bytes);
}
In the report, p_farm_map is a Text report parameter.
The error I get for larger files is an rsInvalidParameter error.
Is there a way to explicitly set the max size of the Text data type?
Due to the lack of a response to this question and project time constraints, the alternative solution was to save the image to a share, pass the file location to the report via the text parameter, and have the report's image control use that text value as its source. Not an ideal solution, since I have to write to and read from disk (slow), and I now have to clean up these files as well.
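For reference, a rough sketch of that workaround (the share path and file naming are placeholders, and it assumes the report's image item is configured as an external image whose value comes from the p_farm_map parameter):
// Hypothetical variant: write the image to a share the report server can read,
// and pass the file's location (not the image bytes) through the text parameter.
private CustomerAttachment LoadFromReportServerViaShare(byte[] imgAsByte)
{
    string mimeType, encoding, extension;
    string[] streamids;
    Warning[] warnings;

    // Save the image where the SSRS service account has read access.
    string imagePath = Path.Combine(@"\\MyFileServer\FarmMaps", Guid.NewGuid() + ".png");
    File.WriteAllBytes(imagePath, imgAsByte);

    try
    {
        _reportParams = new List<ReportParameter>();
        // The report's external image derives its source from Parameters!p_farm_map.Value.
        _reportParams.Add(new ReportParameter("p_farm_map", imagePath));

        var rptViewer = new ReportViewer();
        rptViewer.ProcessingMode = ProcessingMode.Remote;
        rptViewer.ServerReport.ReportServerUrl = new Uri(_reportServerUrl);
        rptViewer.ServerReport.ReportPath = _reportFolderPath + _reportName;
        rptViewer.ServerReport.SetParameters(_reportParams);

        byte[] bytes = rptViewer.ServerReport.Render(_reportFormat, deviceInfo, out mimeType,
            out encoding, out extension, out streamids, out warnings);

        return new CustomerAttachment(_customerId, _fileName, "application/pdf", bytes);
    }
    finally
    {
        // Rendering is synchronous, so the temp image can be cleaned up right away
        // (or left for a scheduled cleanup job instead).
        File.Delete(imagePath);
    }
}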
I am using the 'quotaBytesUsed' property when getting files via an authorized request to Files.list.
I am converting the long value obtained to a file size in KB/MB/GB as appropriate.
However, the size obtained for every folder is 1 KB. This value doesn't reflect the sum of the sizes of all the content in the folder.
How can I get this sum (if possible, without any extra request to the server)?
The code used for converting 'quotaBytesUsed' to a file size is:
private string[] SizeSuffixes = new[] { "B", "KB", "MB", "GB", "TB" };

private string SizeConvert(long? fileSize)
{
    if (!fileSize.HasValue)
        return "";

    var size = fileSize.Value;
    if (size <= 1024)
    {
        return "1 KB";
    }

    var suffixIndex = 0;
    while (size > 1024)
    {
        size = size / 1024;
        suffixIndex++;
    }

    return size.ToString(CultureInfo.InvariantCulture.NumberFormat) + " " + SizeSuffixes[suffixIndex];
}
You are getting the size of the folder object, not its contents.
The API doesn't support getting the size of a folder's contents.
Given that the same file/folder can be in multiple folders, I doubt it will ever be supported.
You need to calculate it recursively, using App Engine task queues for example.
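For example, a rough client-side sketch with the Drive v2 .NET library (assuming an already-authorized DriveService; note that this does cost one extra list request per subfolder):
using Google.Apis.Drive.v2;
using Google.Apis.Drive.v2.Data;

// Recursively sums the sizes of all files under a folder.
// Each subfolder requires one more Files.List call, so this cannot be done
// without additional requests to the server.
private static long GetFolderSize(DriveService service, string folderId)
{
    long total = 0;
    var request = service.Files.List();
    request.Q = $"'{folderId}' in parents and trashed = false";
    request.MaxResults = 1000;

    do
    {
        FileList page = request.Execute();
        foreach (File item in page.Items)
        {
            if (item.MimeType == "application/vnd.google-apps.folder")
                total += GetFolderSize(service, item.Id); // recurse into subfolder
            else
                total += item.FileSize ?? 0;              // Google Docs formats report no size
        }
        request.PageToken = page.NextPageToken;
    } while (!string.IsNullOrEmpty(request.PageToken));

    return total;
}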
There are some nice examples about file uploading at HTML5 Rocks, but there is something that isn't clear enough for me.
As far as I can see, the example code about file slicing gets a specific part of the file and then reads it. As the note says, this is helpful when dealing with large files.
The example about monitoring uploads also notes this is useful when we're uploading large files.
Am I safe without slicing the file? I mean server-side problems, memory, etc. Chrome doesn't support File.slice() currently, and I don't want to use a bloated jQuery plugin if possible.
Both Chrome and FF support File.slice(), but it has been prefixed as File.webkitSlice() / File.mozSlice() since its semantics changed some time ago. There's another example of using it here to read part of a .zip file. The new semantics are:
Blob.webkitSlice(
in long long start,
in long long end,
in DOMString contentType
);
Are you safe without slicing it? Sure, but remember you're reading the file into memory. The HTML5Rocks tutorial offers chunking the upload as a potential performance improvement. With some decent server logic, you could also do things like recovering from a failed upload more easily. The user wouldn't have to re-try an entire 500MB file if it failed at 99% :)
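For what it's worth, a minimal sketch of that "decent server logic" (assuming an ASP.NET Web API backend in C#; the route and the .part naming are made up): append each received chunk to a temp file and report how many bytes have arrived, so the client can resume from that offset instead of restarting.
using System.IO;
using System.Threading.Tasks;
using System.Web.Http;

public class ChunkUploadController : ApiController
{
    // PUT api/chunkupload/{uploadId}: the client sends one slice of the file per request.
    [HttpPut]
    public async Task<IHttpActionResult> Put(string uploadId)
    {
        string tempPath = Path.Combine(Path.GetTempPath(), uploadId + ".part");

        // Append this chunk to whatever has already been received for this upload id.
        using (var target = new FileStream(tempPath, FileMode.Append, FileAccess.Write))
        using (var source = await Request.Content.ReadAsStreamAsync())
        {
            await source.CopyToAsync(target);
        }

        // Tell the client the current size so it knows where to resume after a failure.
        return Ok(new FileInfo(tempPath).Length);
    }
}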
This is the way to slice the file to pass as blobs:
function readBlob() {
    var files = document.getElementById('files').files;
    var file = files[0];

    var ONEMEGABYTE = 1048576;
    var start = 0;
    var stop = ONEMEGABYTE;

    var remainder = file.size % ONEMEGABYTE;
    var blkcount = Math.floor(file.size / ONEMEGABYTE);
    if (remainder != 0) blkcount = blkcount + 1;

    for (var i = 0; i < blkcount; i++) {
        var reader = new FileReader();
        if (i == (blkcount - 1) && remainder != 0) {
            stop = start + remainder;
        }
        if (i == blkcount) {
            stop = start;
        }

        // Slicing the file
        var blob = file.webkitSlice(start, stop);
        reader.readAsBinaryString(blob);

        start = stop;
        stop = stop + ONEMEGABYTE;
    } // End of loop
} // End of readBlob