Drive SDK Get Folder Size - google-drive-api

I am using the 'quotaBytesUsed' property while getting files via an authorized Files.list request.
I am converting the long value obtained to a file size in KB/MB/GB as appropriate.
However, the size reported for every folder is 1 KB. This value doesn't reflect the sum of the sizes of all content in the folder.
How can I get this sum (if possible, without any extra requests to the server)?
The code used for converting 'quotaBytesUsed' to a file size is:
private string[] SizeSuffixes = new[] { "B", "KB", "MB", "GB", "TB" };

private string SizeConvert(long? fileSize)
{
    if (!fileSize.HasValue)
        return "";

    var size = fileSize.Value;
    if (size <= 1024)
    {
        return "1 KB";
    }

    var suffixIndex = 0;
    while (size > 1024)
    {
        size = size / 1024;
        suffixIndex++;
    }
    return size.ToString(CultureInfo.InvariantCulture.NumberFormat) + " " + SizeSuffixes[suffixIndex];
}

You are getting the size of the folder object, not its contents.
The API doesn't support getting the size of a folder's contents.
Given that the same file/folder can be in multiple folders, I doubt it will ever be supported.
You need to calculate it recursively, using App Engine task queues for example.
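For illustration, here is a rough sketch of that recursive calculation with the Google.Apis.Drive.v3 .NET client. The field names (files(id, size, mimeType)) and the folder MIME type are assumptions to verify against your client version; the question above uses the older v2 quotaBytesUsed field, and note that each folder level costs at least one extra List request.

// Hedged sketch: recursively sums the 'size' of every file underneath a folder.
// Assumes the Google.Apis.Drive.v3 client; adjust field names for v2 (quotaBytesUsed).
using Google.Apis.Drive.v3;
using System.Threading.Tasks;

public static class FolderSizeCalculator
{
    private const string FolderMimeType = "application/vnd.google-apps.folder";

    public static async Task<long> GetFolderSizeAsync(DriveService service, string folderId)
    {
        long total = 0;
        string pageToken = null;
        do
        {
            var request = service.Files.List();
            request.Q = $"'{folderId}' in parents and trashed = false";
            request.Fields = "nextPageToken, files(id, size, mimeType)";
            request.PageToken = pageToken;

            var page = await request.ExecuteAsync();
            foreach (var file in page.Files)
            {
                if (file.MimeType == FolderMimeType)
                    total += await GetFolderSizeAsync(service, file.Id); // recurse into the subfolder
                else
                    total += file.Size ?? 0; // folders themselves report no meaningful size
            }
            pageToken = page.NextPageToken;
        } while (pageToken != null);

        return total;
    }
}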

Forge chunk upload .NET Core

I have a question about uploading large objects to a Forge bucket. I know that I need to use the /resumable API, but how can I get the file when I have only the file name? In this code, what exactly is FILE_PATH? Generally, should I save the file on the server first and then upload it to the bucket?
private static dynamic resumableUploadFile()
{
    Console.WriteLine("*****begin uploading large file");
    string path = FILE_PATH;
    if (!File.Exists(path))
        path = @"..\..\..\" + FILE_PATH;

    // total size of the file
    long fileSize = new System.IO.FileInfo(path).Length;
    // size of each piece, say 2 MB
    long chunkSize = 2 * 1024 * 1024;
    // number of pieces
    long nbChunks = (long)Math.Round(0.5 + (double)fileSize / (double)chunkSize);
    // record a global response for the next function
    ApiResponse<dynamic> finalRes = null;
    using (FileStream streamReader = new FileStream(path, FileMode.Open))
    {
        // unique id of this session
        string sessionId = RandomString(12);
        for (int i = 0; i < nbChunks; i++)
        {
            // start binary position of this piece
            long start = i * chunkSize;
            // end binary position of this piece;
            // for the last piece it is clamped to the end of the file
            long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;
            // tell Forge about this piece
            string range = "bytes " + start + "-" + end + "/" + fileSize;
            // length of this piece
            long length = end - start + 1;
            // read the file stream for this piece
            byte[] buffer = new byte[length];
            MemoryStream memoryStream = new MemoryStream(buffer);
            int nb = streamReader.Read(buffer, 0, (int)length);
            memoryStream.Write(buffer, 0, nb);
            memoryStream.Position = 0;
            // upload the piece to the Forge bucket
            ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
                FILE_NAME, (int)length, range, sessionId, memoryStream,
                "application/octet-stream");
            finalRes = response;
            if (response.StatusCode == 202)
            {
                Console.WriteLine("one piece has been uploaded");
                continue;
            }
            else if (response.StatusCode == 200)
            {
                Console.WriteLine("the last piece has been uploaded");
            }
            else
            {
                // any error
                Console.WriteLine(response.StatusCode);
                break;
            }
        }
    }
    return (finalRes);
}
FILE_PATH is the path where you stored the file on your server.
You should upload your file to your server first. Why? Because when you upload your file to the Autodesk Forge server you need an internal token, which should be kept secret (that's why you keep it on your server); you don't want someone to take that token and mess up your Forge account.
The code you pasted from this article is more about uploading from a server when the file is already stored there - either for caching purposes or because the server is using/modifying those files.
As Paxton.Huynh said, FILE_PATH there contains the location on the server where the file is stored.
If you just want to upload the chunks to Forge through your server (to keep the credentials and internal access token secret), like a proxy, then it's probably better to just pass those chunks on to Forge instead of storing the file on the server first and then passing it on to Forge - which is what the sample code you referred to is doing.
See e.g. this, though it's in NodeJS: https://github.com/Autodesk-Forge/forge-buckets-tools/blob/master/server/data.management.js#L171
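To illustrate the proxy idea in .NET, here is a rough ASP.NET Core sketch that forwards each incoming chunk straight to Forge without writing the file to disk. The route, header names and controller shape are made up for illustration; the only Forge call used is UploadChunkWithHttpInfo, with the same signature as in the snippet above.

// Hedged sketch of a pass-through endpoint: the client sends one chunk per request together
// with its Content-Range and a session id, and the server forwards that chunk to Forge using
// its internal (secret) token. Route and header names are illustrative assumptions.
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Autodesk.Forge;

[ApiController]
[Route("api/forge/upload")]
public class ChunkProxyController : ControllerBase
{
    private const string BucketKey = "my-bucket"; // hypothetical bucket key

    [HttpPut("{objectName}")]
    public async Task<IActionResult> UploadChunk(
        string objectName,
        [FromHeader(Name = "Content-Range")] string contentRange,
        [FromHeader(Name = "Session-Id")] string sessionId)
    {
        // Buffer only this chunk (e.g. 2 MB), never the whole file.
        using (var chunk = new MemoryStream())
        {
            await Request.Body.CopyToAsync(chunk);
            chunk.Position = 0;

            var objectsApi = new ObjectsApi(); // assumed to be configured elsewhere with the internal token
            var response = objectsApi.UploadChunkWithHttpInfo(BucketKey, objectName,
                (int)chunk.Length, contentRange, sessionId, chunk, "application/octet-stream");

            // 202 = piece accepted, more expected; 200 = last piece, upload complete (as in the loop above)
            return StatusCode(response.StatusCode, response.Data);
        }
    }
}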

Reading YUY2 data from an IMFSample that appears to have improper data on Windows 10

I am developing an application that uses IMFSourceReader to read data from video files. I am using DXVA for improved performance. I am having trouble with one specific full-HD H.264 encoded AVI file. Based on my investigation so far, I believe that the IMFSample contains incorrect data. My workflow is below:
Create a source reader with a D3D manager to enable hardware acceleration.
Set the current media type to YUY2, as DXVA does not decode to any RGB colorspace.
Call ReadSample to get an IMFSample. Works fine.
Use VideoProcessorBlt to perform the YUY2 to BGRA32 conversion. For this specific file it errors out with an E_INVALIDARGS error code, so I decided to do the conversion myself.
Used IMFSample::ConvertToContiguousBuffer to receive an IMFMediaBuffer. When locking this buffer, the pitch is reported as 1280 bytes. I believe this is incorrect, because for a full-HD YUY2 frame the pitch should be 1920 + 960 + 960 = 3840 bytes.
I dumped the raw memory and extracted the Y, U and V components based on my understanding of the YUY2 layout. So the data is there, but I do not believe it is laid out as YUY2. I need some help interpreting the data.
My code for reading is below:
// Direct3D surface that stores the result of the YUV2RGB conversion
CComPtr<IDirect3DSurface9> _pTargetSurface;
IDirectXVideoAccelerationService* vidAccelService;
initVidAccelerator(&vidAccelService); // Omitting the code for this.
// Create a new surface for doing the color conversion, set it up to store X8R8G8B8 data.
hr = vidAccelService->CreateSurface( static_cast<UINT>( 1920 ),
                                     static_cast<UINT>( 1080 ),
                                     0,                                 // no back buffers
                                     D3DFMT_X8R8G8B8,                   // data format
                                     D3DPOOL_DEFAULT,                   // default memory pool
                                     0,                                 // reserved
                                     DXVA2_VideoProcessorRenderTarget,  // to use with the Blit operation
                                     &_pTargetSurface,                  // surface used to store the frame
                                     NULL);
GUID processorGUID;
DXVA2_VideoDesc videoDescriptor;
D3DFORMAT processorFmt;
UINT numSubStreams;
IDirectXVideoProcessor* _vpd;
initVideoProcessor(&_vpd); // Omitting the code for this
// We get the videoProcessor parameters on creation, and fill up the videoProcessBltParams accordingly.
_vpd->GetCreationParameters(&processorGUID, &videoDescriptor, &processorFmt, &numSubStreams);
RECT targetRECT; // { 0, 0, width, height } as left, top, right, bottom
targetRECT.left = 0;
targetRECT.right = videoDescriptor.SampleWidth;
targetRECT.top = 0;
targetRECT.bottom = videoDescriptor.SampleHeight;
SIZE targetSIZE; // { width, height }
targetSIZE.cx = videoDescriptor.SampleWidth;
targetSIZE.cy = videoDescriptor.SampleHeight;
// Parameters that are required to use the video processor to perform
// YUV2RGB and other video processing operations
DXVA2_VideoProcessBltParams _frameBltParams;
_frameBltParams.TargetRect = targetRECT;
_frameBltParams.ConstrictionSize = targetSIZE;
_frameBltParams.StreamingFlags = 0; // reserved.
_frameBltParams.BackgroundColor.Y = 0x0000;
_frameBltParams.BackgroundColor.Cb = 0x0000;
_frameBltParams.BackgroundColor.Cr = 0x0000;
_frameBltParams.BackgroundColor.Alpha = 0xFFFF;
// copy attributes from videoDescriptor obtained above.
_frameBltParams.DestFormat.VideoChromaSubsampling = videoDescriptor.SampleFormat.VideoChromaSubsampling;
_frameBltParams.DestFormat.NominalRange = videoDescriptor.SampleFormat.NominalRange;
_frameBltParams.DestFormat.VideoTransferMatrix = videoDescriptor.SampleFormat.VideoTransferMatrix;
_frameBltParams.DestFormat.VideoLighting = videoDescriptor.SampleFormat.VideoLighting;
_frameBltParams.DestFormat.VideoPrimaries = videoDescriptor.SampleFormat.VideoPrimaries;
_frameBltParams.DestFormat.VideoTransferFunction = videoDescriptor.SampleFormat.VideoTransferFunction;
_frameBltParams.DestFormat.SampleFormat = DXVA2_SampleProgressiveFrame;
// The default values are used for all these parameters.
DXVA2_ValueRange pRangePABrightness;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Brightness, &pRangePABrightness);
DXVA2_ValueRange pRangePAContrast;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Contrast, &pRangePAContrast);
DXVA2_ValueRange pRangePAHue;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Hue, &pRangePAHue);
DXVA2_ValueRange pRangePASaturation;
_vpd->GetProcAmpRange(DXVA2_ProcAmp_Saturation, &pRangePASaturation);
_frameBltParams.ProcAmpValues = { pRangePABrightness.DefaultValue, pRangePAContrast.DefaultValue,
pRangePAHue.DefaultValue, pRangePASaturation.DefaultValue };
_frameBltParams.Alpha = DXVA2_Fixed32OpaqueAlpha();
_frameBltParams.DestData = DXVA2_SampleData_TFF;
// Input video sample for the Blt operation
DXVA2_VideoSample _frameVideoSample;
_frameVideoSample.SampleFormat.VideoChromaSubsampling = videoDescriptor.SampleFormat.VideoChromaSubsampling;
_frameVideoSample.SampleFormat.NominalRange = videoDescriptor.SampleFormat.NominalRange;
_frameVideoSample.SampleFormat.VideoTransferMatrix = videoDescriptor.SampleFormat.VideoTransferMatrix;
_frameVideoSample.SampleFormat.VideoLighting = videoDescriptor.SampleFormat.VideoLighting;
_frameVideoSample.SampleFormat.VideoPrimaries = videoDescriptor.SampleFormat.VideoPrimaries;
_frameVideoSample.SampleFormat.VideoTransferFunction = videoDescriptor.SampleFormat.VideoTransferFunction;
_frameVideoSample.SrcRect = targetRECT;
_frameVideoSample.DstRect = targetRECT;
_frameVideoSample.PlanarAlpha = DXVA2_Fixed32OpaqueAlpha();
_frameVideoSample.SampleData = DXVA2_SampleData_TFF;
CComPtr<IMFSample> sample; // Assume that this was read in from a call to ReadSample
CComPtr<IMFMediaBuffer> buffer;
HRESULT hr = sample->GetBufferByIndex(0, &buffer);
CComPtr<IDirect3DSurface9> pSrcSurface;
// From the MediaBuffer, we get the Source Surface using MFGetService
hr = MFGetService( buffer, MR_BUFFER_SERVICE, __uuidof(IDirect3DSurface9), (void**)&pSrcSurface );
// Update the videoProcessBltParams with frame specific values.
LONGLONG sampleStartTime;
sample->GetSampleTime(&sampleStartTime);
_frameBltParams.TargetFrame = sampleStartTime;
LONGLONG sampleDuration;
sample->GetSampleDuration(&sampleDuration);
_frameVideoSample.Start = sampleStartTime;
_frameVideoSample.End = sampleStartTime + sampleDuration;
_frameVideoSample.SrcSurface = pSrcSurface;
// Run videoProcessBlt using the parameters setup (this is used for color conversion)
// The returned code is E_INVALIDARGS
hr = _vpd->VideoProcessBlt( _pTargetSurface,    // target surface
                            &_frameBltParams,   // parameters
                            &_frameVideoSample, // video sample structure
                            1,                  // one sample
                            NULL);              // reserved
After a call to ReadSample of IMFSourceReader, or inside the OnReadSample callback of your IMFSourceReaderCallback implementation, you might receive the MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED flag. It means that the current media type has changed for one or more streams. To get the current media type, call the IMFSourceReader::GetCurrentMediaType method.
In your case you need to query (again) the IMFMediaType's MF_MT_FRAME_SIZE attribute for the video stream in order to obtain the new, correct video resolution. You should use the new video resolution to set the "width" and "height" values of the VideoProcessorBlt parameters' source and destination rectangles.
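A minimal sketch of that check (the stream index and variable names are illustrative; only the standard source reader calls are used):

// Hedged sketch: after ReadSample, inspect the stream flags; if the media type changed,
// re-query it and read MF_MT_FRAME_SIZE to get the real coded width/height.
DWORD streamIndex = 0, streamFlags = 0;
LONGLONG timestamp = 0;
CComPtr<IMFSample> sample;
HRESULT hr = pReader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                 &streamIndex, &streamFlags, &timestamp, &sample);
if (SUCCEEDED(hr) && (streamFlags & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED))
{
    CComPtr<IMFMediaType> mediaType;
    hr = pReader->GetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, &mediaType);
    UINT32 width = 0, height = 0;
    if (SUCCEEDED(hr))
        hr = MFGetAttributeSize(mediaType, MF_MT_FRAME_SIZE, &width, &height);
    // Rebuild targetRECT/targetSIZE (and, if needed, the render-target surface)
    // from width/height before calling VideoProcessBlt again.
}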

Using ItemCollection on a BoxFolder type with Box API only returns 100 results and cannot retrieve the remaining ones

For a while now, I've been using the Box API to connect Acumatica ERP to Box, and everything had been going fine until recently. Whenever I try to use a BoxCollection type with the ItemCollection property, I only get the first 100 results, no matter the limit I set in GetInformationAsync(). Here is the code snippet:
[PermissionSet(SecurityAction.Assert, Name = "FullTrust")]
public BoxCollection<BoxItem> GetFolderItems(string folderId, int limit = 500, int offset = 0)
{
    var response = new BoxCollection<BoxItem>();
    var fieldsToGet = new List<string>() { BoxItem.FieldName, BoxItem.FieldDescription, BoxItem.FieldParent, BoxItem.FieldEtag, BoxFolder.FieldItemCollection };
    response = Task.Run(() => Client.FoldersManager.GetFolderItemsAsync(folderId, limit, offset)).Result;
    return response;
}
I then pass that information on to a BoxFolder-type variable and try to use the ItemCollection.Entries property, but this only returns 100 results at a time, with no visible way to extract the remaining 61 (in my case, Count = 161, but Entries is always 100).
Another code snippet of the variable in use; I am basically trying to get the folder ID based on the name of the folder inside Box:
private static void SyncProcess(BoxFolder rootFolder, string folderName)
{
    var boxFolder = rootFolder.ItemCollection.Entries.SingleOrDefault(ic => ic.Type == "folder" && ic.Name == folderName);
}
I wasn't able to find anything related to that limit of 100 in the documentation, and it only started giving me problems recently.
I had to create a workaround using the following:
var boxCollection = client.GetFolderItems(rootFolder.Id);
var boxFolder = boxCollection.Entries.SingleOrDefault(ic => ic.Type == "folder" && ic.Name == folderName);
I was just wondering if there was a better way to get the complete collection using the property ItemCollection.Entries like I used to, instead of having to fetch them again.
Thanks!
Box pages folder items to keep response times short. The default page size is 100 items. You must iterate through the pages to get all of the items. Here's a code snippet that'll get 100 items at a time until all items in the folder are fetched. You can request up to 1000 items at a time.
var items = new List<BoxItem>();
BoxCollection<BoxItem> result;
do
{
    result = await Client.FoldersManager.GetFolderItemsAsync(folderId, 100, items.Count());
    items.AddRange(result.Entries);
} while (items.Count() < result.TotalCount);
John's answer can lead to duplicate values in your items collection if there are external/shared folders in your list. Those are hidden when you call "GetFolderItemsAsync" with the "asUser" header set.
There is a comment about it in the Box API's codebase itself (https://github.com/box/box-windows-sdk-v2/blob/main/Box.V2/Managers/BoxFoldersManager.cs):
Note: If there are hidden items in your previous response, your next offset should be offset + limit, not the number of records you received back.
The total_count returned may not match the number of entries when using enterprise scope, because external folders are hidden from the list of entries.
Taking this into account, it's better not to rely on comparing the number of items retrieved with the TotalCount property.
var items = new List<BoxItem>();
BoxCollection<BoxItem> result;
int limit = 100;
int offset = 0;
do
{
    result = await Client.FoldersManager.GetFolderItemsAsync(folderId, limit, offset);
    offset += limit;
    items.AddRange(result.Entries);
} while (offset < result.TotalCount);

Get physical screen(display) resolution. Windows Store App

I need the current display resolution. How can I get this? I know about Window.Current.Bounds, but the application can run in windowed mode.
What do you mean, VisibleBounds is not working on Desktop?
I tried it in my Windows 10 UWP program and it works fine. I can get my desktop resolution like below:
var bounds = ApplicationView.GetForCurrentView().VisibleBounds;
var scaleFactor = DisplayInformation.GetForCurrentView().RawPixelsPerViewPixel;
var size = new Size(bounds.Width * scaleFactor, bounds.Height * scaleFactor);
Besides, if you are using DirectX in a Store app, you can create an IDXGIFactory object and use it to enumerate the available adapters. Then call IDXGIOutput::GetDisplayModeList to retrieve an array of DXGI_MODE_DESC structures and the number of elements in the array. Each DXGI_MODE_DESC structure represents a valid display mode for the output. For example:
UINT numModes = 0;
DXGI_MODE_DESC* displayModes = NULL;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_FLOAT;
// pOutput is an IDXGIOutput* obtained by enumerating the adapter's outputs
// Get the number of elements
hr = pOutput->GetDisplayModeList( format, 0, &numModes, NULL);
displayModes = new DXGI_MODE_DESC[numModes];
// Get the list
hr = pOutput->GetDisplayModeList( format, 0, &numModes, displayModes);
Please let me know if you need further information.

Methods for deleting blank (or nearly blank) pages from TIFF files

I have something like 40 million TIFF documents, all 1-bit single page duplex. In about 40% of cases, the back image of these TIFFs is 'blank' and I'd like to remove them before I do a load to a CMS to reduce space requirements.
Is there a simple method to look at the data content of each page and delete it if it falls under a preset threshold, say 2% 'black'?
I'm technology agnostic on this one, but a C# solution would probably be the easiest to support. Problem is, I've no image manipulation experience so don't really know where to start.
Edit to add: The images are old scans and so are 'dirty', so this is not expected to be an exact science. The threshold would need to be set to avoid the chance of false positives.
You probably should:
open each image
iterate through its pages (using Bitmap.GetFrameCount / Bitmap.SelectActiveFrame methods)
access bits of each page (using Bitmap.LockBits method)
analyze contents of each page (simple loop)
if the contents are worthwhile, copy the data to another image (Bitmap.LockBits and a loop)
This task isn't particularly complex, but it will require some code to be written. This site contains some samples that you may search for using the method names as keywords.
P.S. I assume that all of the images can be successfully loaded into a System.Drawing.Bitmap.
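To make those steps concrete, here is a rough, untested sketch along those lines using System.Drawing. It only reports candidate blank pages rather than rewriting the file, and whether a set bit means black or white depends on the file's photometric interpretation/palette, so verify that against your scans before trusting the ratio.

// Hedged sketch: estimates the fraction of set bits on each frame of a 1-bit TIFF and
// flags frames below a coverage threshold (2% by default) as candidate blanks.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class BlankPageScanner
{
    // Fraction of set bits in a 1bpp frame; whether "set" means black depends on the palette.
    static double SetBitRatio(Bitmap page)
    {
        var rect = new Rectangle(0, 0, page.Width, page.Height);
        BitmapData data = page.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format1bppIndexed);
        try
        {
            long setBits = 0;
            byte[] row = new byte[data.Stride];
            for (int y = 0; y < data.Height; y++)
            {
                Marshal.Copy(data.Scan0 + y * data.Stride, row, 0, data.Stride);
                for (int x = 0; x < data.Width; x++)
                    if ((row[x >> 3] & (0x80 >> (x & 7))) != 0)
                        setBits++;
            }
            return (double)setBits / ((long)data.Width * data.Height);
        }
        finally
        {
            page.UnlockBits(data);
        }
    }

    // Lists frames whose bit coverage falls below the threshold.
    public static void ReportCandidateBlanks(string path, double threshold = 0.02)
    {
        using (var tiff = new Bitmap(path))
        {
            int frames = tiff.GetFrameCount(FrameDimension.Page);
            for (int i = 0; i < frames; i++)
            {
                tiff.SelectActiveFrame(FrameDimension.Page, i);
                double ratio = SetBitRatio(tiff);
                if (ratio < threshold)
                    Console.WriteLine("{0} page {1}: {2:P2} coverage - candidate blank", path, i, ratio);
            }
        }
    }
}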
You can do something like that with DotImage (disclaimer, I work for Atalasoft and have written most of the underlying classes that you'd be using). The code to do it will look something like this:
public void RemoveBlankPages(Stream stm)
{
    List<int> blanks = new List<int>();
    if (GetBlankPages(stm, blanks)) {
        // all pages blank - delete file? Skip? Your choice.
    }
    else {
        // memory stream is convenient - maybe a temp file instead?
        using (MemoryStream ostm = new MemoryStream()) {
            // pulls out all the blanks and writes to the temp stream
            stm.Seek(0, SeekOrigin.Begin);
            RemoveBlanks(blanks, stm, ostm);
            CopyStream(ostm, stm); // copies first stm to second, truncating at end
        }
    }
}

private bool GetBlankPages(Stream stm, List<int> blanks)
{
    TiffDecoder decoder = new TiffDecoder();
    ImageInfo info = decoder.GetImageInfo(stm);
    for (int i = 0; i < info.FrameCount; i++) {
        try {
            stm.Seek(0, SeekOrigin.Begin);
            using (AtalaImage image = decoder.Read(stm, i, null)) {
                if (IsBlankPage(image)) blanks.Add(i);
            }
        }
        catch {
            // bad file - skip? could also try to remove the bad page:
            blanks.Add(i);
        }
    }
    return blanks.Count == info.FrameCount;
}

private bool IsBlankPage(AtalaImage image)
{
    // you might want to configure the command to do noise removal and black border
    // removal (or not) first.
    BlankPageDetectionCommand command = new BlankPageDetectionCommand();
    BlankPageDetectionResults results = command.Apply(image) as BlankPageDetectionResults;
    return results.IsImageBlank;
}

private void RemoveBlanks(List<int> blanks, Stream source, Stream dest)
{
    // blanks needs to be sorted low to high, which it will be if generated from above
    TiffDocument doc = new TiffDocument(source);
    int totalRemoved = 0;
    foreach (int page in blanks) {
        doc.Pages.RemoveAt(page - totalRemoved);
        totalRemoved++;
    }
    doc.Save(dest);
}
You should note that blank page detection is not as simple as "are all the pixels white(-ish)?" since scanning introduces all kinds of interesting artifacts. To get the BlankPageDetectionCommand, you would need the Document Imaging package.
Are you interested in shrinking the files, or do you just want to avoid people wasting their time viewing blank pages? You can do a quick and dirty edit of the files to rid yourself of known blank pages by just patching the offset to the second IFD to be 0x00000000. Here's what I mean - TIFF files have a simple layout if you're just navigating through the pages:
TIFF Header (4 bytes)
First IFD offset (4 bytes - typically points to 0x00000008)
IFD:
Number of tags (2-bytes)
{individual TIFF tags} (12-bytes each)
Next IFD offset (4 bytes)
Just patch the "next IFD offset" to a value of 0x00000000 to "unlink" pages beyond the current one.
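For what it's worth, a minimal C# sketch of that patch might look like the following. It assumes a classic little-endian ("II") TIFF and only zeroes the link after the first IFD, so the later pages' data still occupies space in the file; they simply stop being reachable.

// Hedged sketch: zero the "next IFD offset" field of the first IFD so readers see only page one.
// Assumes a classic little-endian TIFF; big-endian ("MM") and BigTIFF files need extra handling.
using System;
using System.IO;

static class TiffUnlinker
{
    public static void KeepFirstPageOnly(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
        using (var reader = new BinaryReader(fs))
        {
            // Header: byte order (2 bytes), magic number 42 (2 bytes), first IFD offset (4 bytes).
            byte b0 = reader.ReadByte(), b1 = reader.ReadByte();
            if (b0 != (byte)'I' || b1 != (byte)'I')
                throw new NotSupportedException("This sketch handles little-endian (\"II\") TIFFs only.");
            if (reader.ReadUInt16() != 42)
                throw new InvalidDataException("Not a classic TIFF file.");
            uint firstIfdOffset = reader.ReadUInt32();

            // IFD: tag count (2 bytes), tag count x 12 bytes of entries, next IFD offset (4 bytes).
            fs.Seek(firstIfdOffset, SeekOrigin.Begin);
            ushort tagCount = reader.ReadUInt16();
            long nextIfdFieldPos = firstIfdOffset + 2 + tagCount * 12L;

            // Patch the link: 0x00000000 means "no further IFDs", i.e. no further pages.
            fs.Seek(nextIfdFieldPos, SeekOrigin.Begin);
            fs.Write(new byte[] { 0, 0, 0, 0 }, 0, 4);
        }
    }
}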