Large (> 4 MB) File Attachments using the Microsoft Graph Java SDK - microsoft-graph-sdks

How do I add an attachment to an existing meeting calendar event (by its event ID) when the attachment is larger than 4 MB, using Java?

Use the Java library odata-client-msgraph:
GraphService client = ...
File file = new File("example.txt");
AttachmentItem attachmentItem = AttachmentItem
    .builder()
    .attachmentType(AttachmentType.FILE)
    .contentType("text/plain")
    .name("example.txt")
    .size(file.length())
    .build();
int chunkSize = 512 * 1024;
client
    .user("USERID")
    .calendars()
    .events("EVENTID")
    .attachments()
    .createUploadSession(attachmentItem)
    .get()
    .putChunked()
    .readTimeout(10, TimeUnit.MINUTES)
    .upload(file, chunkSize);
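Worth noting why the upload session is needed at all: Graph accepts small attachments via a direct POST and only requires an upload session above a documented cutoff (commonly cited as 3 MB for event attachments). A minimal sketch of that routing decision; the hard-coded limit is an assumption to re-check against the current Graph documentation:

```java
public class AttachmentRoute {
    // Cutoff below which a fileAttachment can be POSTed directly instead of
    // opening an upload session. 3 MB follows the Graph docs for event
    // attachments; treat the exact number as an assumption to verify.
    static final long SMALL_ATTACHMENT_LIMIT = 3L * 1024 * 1024;

    static boolean needsUploadSession(long fileSizeBytes) {
        return fileSizeBytes > SMALL_ATTACHMENT_LIMIT;
    }

    public static void main(String[] args) {
        System.out.println(needsUploadSession(5L * 1024 * 1024)); // large file -> session
        System.out.println(needsUploadSession(1024));             // small file -> direct POST
    }
}
```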

Related

Forge chunk upload .NET Core

I have a question about uploading large objects to a Forge bucket. I know that I need to use the /resumable API, but how can I get the file when I have only the filename? In this code, what exactly is FILE_PATH? Generally, should I save the file on the server first and then upload it to the bucket?
private static dynamic resumableUploadFile()
{
    Console.WriteLine("*****begin uploading large file");
    string path = FILE_PATH;
    if (!File.Exists(path))
        path = @"..\..\..\" + FILE_PATH;
    // total size of the file
    long fileSize = new System.IO.FileInfo(path).Length;
    // size of one piece, say 2 MB
    long chunkSize = 2 * 1024 * 1024;
    // number of pieces (rounded up)
    long nbChunks = (long)Math.Round(0.5 + (double)fileSize / (double)chunkSize);
    // record the final response for the caller
    ApiResponse<dynamic> finalRes = null;
    using (FileStream streamReader = new FileStream(path, FileMode.Open))
    {
        // unique id of this upload session
        string sessionId = RandomString(12);
        for (int i = 0; i < nbChunks; i++)
        {
            // start byte position of this piece
            long start = i * chunkSize;
            // end byte position of this piece; the last piece is clamped
            // to the end of the file
            long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;
            // tell Forge which byte range this piece covers
            string range = "bytes " + start + "-" + end + "/" + fileSize;
            // length of this piece
            long length = end - start + 1;
            // read this piece from the file stream
            byte[] buffer = new byte[length];
            MemoryStream memoryStream = new MemoryStream(buffer);
            int nb = streamReader.Read(buffer, 0, (int)length);
            memoryStream.Write(buffer, 0, nb);
            memoryStream.Position = 0;
            // upload the piece to the Forge bucket
            ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
                FILE_NAME, (int)length, range, sessionId, memoryStream,
                "application/octet-stream");
            finalRes = response;
            if (response.StatusCode == 202)
            {
                Console.WriteLine("one chunk has been uploaded");
                continue;
            }
            else if (response.StatusCode == 200)
            {
                Console.WriteLine("the last chunk has been uploaded");
            }
            else
            {
                // any other status is an error
                Console.WriteLine(response.StatusCode);
                break;
            }
        }
    }
    return finalRes;
}
FILE_PATH is the path where you stored the file on your server.
You should upload your file to the server first. Why? Because uploading to the Autodesk Forge server requires an internal token, which should be kept secret (that is why you keep it on your server); you don't want someone to grab that token and mess up your Forge account.
The code you pasted from this article is more about uploading from a server where the file is already stored - either for caching purposes or because the server is using/modifying those files.
As Paxton.Huynh said, FILE_PATH there contains the location on the server where the file is stored.
If you just want to upload the chunks to Forge through your server (to keep credentials and internal access token secret), like a proxy, then it's probably better to just pass on those chunks to Forge instead of storing the file on the server first and then passing it on to Forge - what the sample code you referred to is doing.
See e.g. this, though it's in NodeJS: https://github.com/Autodesk-Forge/forge-buckets-tools/blob/master/server/data.management.js#L171
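The byte-range bookkeeping in the loop above is the part people usually get wrong; the same arithmetic can be sketched on its own in Java (pure math, no Forge calls, so it applies to any resumable-upload API that takes Content-Range headers):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkRanges {
    // Builds the Content-Range header values for a chunked upload: chunk i
    // covers bytes [i*chunkSize, min(fileSize, (i+1)*chunkSize) - 1].
    static List<String> contentRanges(long fileSize, long chunkSize) {
        List<String> ranges = new ArrayList<>();
        long nbChunks = (fileSize + chunkSize - 1) / chunkSize; // ceiling division
        for (long i = 0; i < nbChunks; i++) {
            long start = i * chunkSize;
            long end = Math.min(fileSize, (i + 1) * chunkSize) - 1;
            ranges.add("bytes " + start + "-" + end + "/" + fileSize);
        }
        return ranges;
    }

    public static void main(String[] args) {
        // a 5 MB file split into 2 MB chunks -> 3 ranges, the last one short
        ChunkRanges.contentRanges(5_242_880L, 2_097_152L).forEach(System.out::println);
    }
}
```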

UVM - Using my own configuration files vs. using the config db

I wrote a sequence that can be generic across a variety of tests. I want to do this by adding a configuration file for each test.
The code for the sequence:
//----------------------------------------------------------------------
//Sequence
//----------------------------------------------------------------------
class axi_sequence extends uvm_sequence#(axi_transaction);
`uvm_object_utils(axi_sequence)
//new
function new (string name = "axi_sequence");
super.new(name);
endfunction: new
//main task
task body();
int file_p, temp, len;
byte mode;
bit [31:0] addr;
string str;
axi_transaction axi_trx;
bit [31:0] transfers [$];
bit [31:0] data;
//open file
file_p = $fopen("./sv/write_only.txt", "r"); //the name of the file should be same as the name of the test
//in case file doesn't exist
`my_fatal(file_p != 0, "FILE OPENED FAILED")
//read file
while ($feof(file_p) == 0)
begin
temp = $fgets(str, file_p);
axi_trx = axi_transaction::type_id::create(.name("axi_trx"), .contxt(get_full_name()));
// start_item and finish_item together initiate operation of
// a sequence item.
start_item(axi_trx);
transfers = {};
$sscanf(str, "%c %d %h", mode, len, addr);
//assign the data to str
str = str.substr(12,str.len()-1);
//create and assign to transfers queue
if(mode == "w")
begin
for (int i = 0; i <= len; i++) begin
temp = $sscanf(str, "%h", data);
`my_fatal(temp > 0, "THE LENGTH PARAM IS WRONG - too big")
transfers.push_back(data);
str = str.substr(13+(i+1)*8,str.len()-1);
end//end for
`my_fatal($sscanf(str, "%h", temp) <= 0, "THE LENGTH PARAM IS WRONG - too small")
end//if
axi_trx.init(mode,len,addr,transfers);
if (to_random == 1) begin //to_random should be a part of the configuration file
axi_trx.my_random();
end
else begin
axi_trx.delay = const_config; //const_config should be a part of the configuration file
end
//contains the send_request which send the request item to the sequencer, which will forward
// it to the driver.
finish_item(axi_trx);
end//begin
endtask: body
endclass: axi_sequence
Should I do this with a different configuration file per test, or can I do it with values passed from the test to the agent through the config db?
And how can I pass a different path (for the $fopen() call) for each test?
You shouldn't need a separate configuration file for each test. Ideally, you would just pass the configuration down from the test level into the env through the config_db (or through a separate configuration object for your agent).
When you create your sequence in your test (or virtual sequencer), you should be able to set its variables as needed.

How to protect localstorage and websql data in cordova - ionic application

Data stored in local storage or in a WebSQL database is not protected.
We can directly see all of the WebSQL and local-storage data because it is stored as plain text.
Is there any way to protect this data?
Yes, you can encrypt/decrypt your data using AES or another algorithm. You could try the forge implementation: https://github.com/digitalbazaar/forge#md5
// generate a random key and IV
// Note: a key size of 16 bytes will use AES-128, 24 => AES-192, 32 => AES-256
var key = forge.random.getBytesSync(16);
var iv = forge.random.getBytesSync(16);
/* alternatively, generate a password-based 16-byte key
var salt = forge.random.getBytesSync(128);
var key = forge.pkcs5.pbkdf2('password', salt, numIterations, 16);
*/
// encrypt some bytes using CBC mode
// (other modes include: CFB, OFB, CTR, and GCM)
var cipher = forge.cipher.createCipher('AES-CBC', key);
cipher.start({iv: iv});
cipher.update(forge.util.createBuffer(someBytes));
cipher.finish();
var encrypted = cipher.output;
// outputs encrypted hex
console.log(encrypted.toHex());
// decrypt some bytes using CBC mode
// (other modes include: CFB, OFB, CTR, and GCM)
var decipher = forge.cipher.createDecipher('AES-CBC', key);
decipher.start({iv: iv});
decipher.update(encrypted);
decipher.finish();
// outputs decrypted hex
console.log(decipher.output.toHex());
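If part of your stack runs on the JVM, or you want a second reference point for the same technique, an equivalent AES-CBC round trip can be sketched with the JDK's built-in javax.crypto, with no third-party library; PKCS5 padding here stands in for forge's default padding:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AesCbcSketch {
    // One call handles both directions; mode is Cipher.ENCRYPT_MODE or DECRYPT_MODE.
    static byte[] crypt(int mode, byte[] key, byte[] iv, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        byte[] key = new byte[16]; // 16 bytes => AES-128 (24 => AES-192, 32 => AES-256)
        byte[] iv = new byte[16];
        rnd.nextBytes(key);
        rnd.nextBytes(iv);
        byte[] plain = "secret local data".getBytes(StandardCharsets.UTF_8);
        byte[] enc = crypt(Cipher.ENCRYPT_MODE, key, iv, plain);
        byte[] dec = crypt(Cipher.DECRYPT_MODE, key, iv, enc);
        System.out.println(new String(dec, StandardCharsets.UTF_8)); // round-trips to the original
    }
}
```

As with the forge example, the real work is key management: a key hard-coded in the app or stored next to the data protects nothing.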

How to use "Process Document From File" operator of RapidMiner in Java Code

I have just started using RapidMiner for text classification. I created a process that uses the "Process Documents from Files" operator for TF-IDF conversion. How can I use this operator from Java code? I searched the internet, but every example uses an already created process or a word list generated from documents. I want to start from scratch, i.e.:
1 ) Process Documents From File
1.1) Tokenization
1.2) Filtering
1.3) Stemming
1.4) N-Gram
2) Validation
2.1) Training (K-NN)
2.2) Apply Model
Maybe the source code below can help you:
String processDefinitionFileName = "/home/maximk/.RapidMiner5/repositories/Local Repository/processes/processOpenCSV.rmp";
File processDefinition = new File( processDefinitionFileName );
Process readCSV = new Process( processDefinition );
File csvFile = new File( "/home/maximk/test.cvs" );
IOObject inObject = new SimpleFileObject( csvFile );
IOContainer inParameters = new IOContainer( inObject );
IOContainer outParameters = readCSV.run( inParameters );
SimpleExampleSet resultDataSet = (SimpleExampleSet) outParameters.getElementAt( 0 );

Drive SDK Get Folder Size

I am using the 'quotaBytesUsed' property while fetching files with an authorized Files.list request.
I convert the long value obtained to a file size in KB/MB/GB as appropriate.
However, the size reported for every folder is 1 KB. This value doesn't reflect the sum of the sizes of the folder's contents.
How can I get this sum (if possible, without any extra requests to the server)?
The code used for converting 'quotaBytesUsed' to a file size is:
private string[] SizeSuffixes = new[] { "B", "KB", "MB", "GB", "TB" };

private string SizeConvert(long? fileSize)
{
    if (!fileSize.HasValue)
        return "";
    var size = fileSize.Value;
    if (size <= 1024)
    {
        return "1 KB";
    }
    var suffixIndex = 0;
    while (size > 1024)
    {
        size = size / 1024;
        suffixIndex++;
    }
    return size.ToString(CultureInfo.InvariantCulture.NumberFormat) + " " + SizeSuffixes[suffixIndex];
}
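The integer division in that loop silently truncates (1536 bytes comes out as "1 KB" instead of "1.5 KB"). A rounding-aware variant of the same conversion, sketched in Java:

```java
import java.util.Locale;

public class HumanSize {
    static final String[] SUFFIXES = { "B", "KB", "MB", "GB", "TB" };

    // Divide as a double so fractional sizes survive, then drop the ".0"
    // when the value happens to be whole.
    static String humanSize(long bytes) {
        double size = bytes;
        int i = 0;
        while (size >= 1024 && i < SUFFIXES.length - 1) {
            size /= 1024;
            i++;
        }
        String num = (size == Math.rint(size))
                ? String.valueOf((long) size)
                : String.format(Locale.ROOT, "%.1f", size);
        return num + " " + SUFFIXES[i];
    }

    public static void main(String[] args) {
        System.out.println(humanSize(1536));    // 1.5 KB
        System.out.println(humanSize(1048576)); // 1 MB
    }
}
```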
You are getting the size of the folder object, not of its contents.
The API doesn't support getting the size of a folder's contents.
Given that the same file or folder can be in multiple folders, I doubt it will ever be supported.
You need to calculate it recursively, using App Engine task queues, for example.
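The recursive summation itself is trivial; the cost is the per-folder listing round trip. A sketch over a hypothetical in-memory node type (the `Node` class and its fields are made up for illustration; in real code each folder's children would come from a Files.list request filtered by parent):

```java
import java.util.ArrayList;
import java.util.List;

public class FolderSize {
    // Hypothetical stand-in for Drive file metadata.
    static class Node {
        final boolean isFolder;
        final long fileSize; // 0 for folders
        final List<Node> children = new ArrayList<>();
        Node(boolean isFolder, long fileSize) {
            this.isFolder = isFolder;
            this.fileSize = fileSize;
        }
    }

    // Depth-first sum of file sizes. Note: a file that lives in two folders
    // gets counted twice, which is exactly why a server-side total is unlikely.
    static long totalSize(Node node) {
        if (!node.isFolder) return node.fileSize;
        long sum = 0;
        for (Node child : node.children) sum += totalSize(child);
        return sum;
    }

    public static void main(String[] args) {
        Node root = new Node(true, 0);
        Node sub = new Node(true, 0);
        root.children.add(new Node(false, 1000));
        sub.children.add(new Node(false, 2500));
        root.children.add(sub);
        System.out.println(totalSize(root)); // 3500
    }
}
```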