How to recover an image or other file from IPFS?

I have a CID returned by the ipfs.add API, and I want to get the file (an image) back. I can get a Uint8Array, but I can't restore the picture from it, and even after reading the API docs I still can't work it out.
I want to know how to manipulate the data to recover my picture, or just fetch the image using the CID.
Here is my code:
let chunks = '';
const cid = 'something hash';
for await (const buf of ipfs.get(cid)) {
  chunks += buf;
}
fs.writeFileSync('./img.jpg', chunks);
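For reference, here is a minimal sketch of the usual way this is handled with js-ipfs. It assumes ipfs is an ipfs-core instance and cid is the value returned by ipfs.add; the saveImage wrapper is illustrative. The key points are to use ipfs.cat (which yields the raw file contents as Uint8Array chunks) and to concatenate those chunks as binary data, because coercing them to a string corrupts the image.
const fs = require('fs');

async function saveImage(ipfs, cid) {
  const chunks = [];
  // ipfs.cat yields the file contents as Uint8Array chunks
  for await (const chunk of ipfs.cat(cid)) {
    chunks.push(chunk);
  }
  // Concatenate the binary chunks and write them out as-is
  fs.writeFileSync('./img.jpg', Buffer.concat(chunks));
}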

Related

Google Apps Script - make an if condition with JSON data

I cannot find a matching condition with Google Apps Script.
Here I have JSON data containing "externalIds", which contains "type=organization, value=TIS-00096028".
I would like to get "value=TIS-00096028", but some records have externalIds and some do not,
so I don't know what condition can handle it (I have not found an "externalIds.Isempty" function or a "find(externalIds)" function).
Here is part of the code:
for(var i = 0; i < users.length; i++){
  json = JSON.parse(users[i])
  Logger.log(i)
  Logger.log(json)
  Logger.log(typeof(json))
  Logger.log(JSON.stringify(users[i]))
  Logger.log(typeof(JSON.stringify(users[i])))
  Logger.log(json["externalIds"])
  //Logger.log(typeof(json["externalIds"]))
  //Logger.log(json["externalIds"][0]["value"])
  if(json["externalIds"].ContenService ="null"){
    empolyee_code=""
    Logger.log("No empolyee_code "+empolyee_code)
  }
  else{
    empolyee_code = json["externalIds"][0]["value"];
    Logger.log("empolyee_code "+empolyee_code)
  }
  rows.push([users[i].name.givenName,
             users[i].name.familyName,
             users[i].primaryEmail,
             getUserOrg(users[i].primaryEmail, users[i].orgUnitPath),
             getStatus(users[i].suspended),
             users[i].lastLoginTime.slice(0,10),
             empolyee_code,
            ]);
}
In the if condition above I tried to write a check, but it does not work.
The execution log shows one JSON record of the data.
Here is some JSON data that contains the "externalIds" field:
{isDelegatedAdmin=false, addresses=[{formatted=DSM, type=work, primary=true}], isEnforcedIn2Sv=true, creationTime=2015-10-08T16:01:13.000Z, kind=admin#directory#user, lastLoginTime=2022-06-22T05:23:12.000Z, archived=false, suspended=false, languages=[{preference=preferred, languageCode=en}], nonEditableAliases=[a_achara@tripetch-isuzu.co.th.test-google-a.com, a_achara@g.tripetch-isuzu.co.th], isEnrolledIn2Sv=true, recoveryPhone=+66896847807, orgUnitPath=/Google/Tripetch (Google)/TIS, isAdmin=false, thumbnailPhotoEtag="Ap9BCC4uRt3h_SrDO0G_EX9zYVZwuEfjT_jX812ihkE/ZDtakkHxKrU_zGt1nyNxvPl7a88", agreedToTerms=true, customerId=C019gbqss, etag="Ap9BCC4uRt3h_SrDO0G_EX9zYVZwuEfjT_jX812ihkE/-E_Pbpdoymtn76EXDbsRNGh2ZWU", emails=[{address=a_achara@tripetch-isuzu.co.th, primary=true}, {address=a_achara@tripetch-isuzu.co.th.test-google-a.com}, {address=a_achara@g.tripetch-isuzu.co.th}], thumbnailPhotoUrl=https://www.google.com/s2/photos/private/AIbEiAIAAABECMrAkLP6qcXoxAEiC3ZjYXJkX3Bob3RvKig3ODk1NWNmN2ZjMDdiMmFiZmY4MmEwZjRkMTMwNTRjMmE0Y2U5YjgzMAG2IKR1qfqcYesdZgOUe5fVpfBDag, name={familyName=ANUMORN, givenName=ACHARA, fullName=ACHARA ANUMORN}, ipWhitelisted=false, externalIds=[{type=organization, value=TIS-00096028}], changePasswordAtNextLogin=false, phones=[{value=3261, type=work}], primaryEmail=a_achara@tripetch-isuzu.co.th, id=114182140133404581962, includeInGlobalAddressList=true, isMailboxSetup=true}
A second record does not contain "externalIds":[{"value":"TIS-000xxxx","type":"organization"}].
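A minimal sketch of the kind of guard that should handle both cases, using the field and variable names from the question; the surrounding loop and the users array are assumed to look like the code above, and users[i] is assumed to already be a parsed object (as the rows.push block suggests):
for (var i = 0; i < users.length; i++) {
  var json = users[i]; // or JSON.parse(users[i]) if the records are strings
  var empolyee_code = "";
  // externalIds is only present on some records, so check it exists and is non-empty
  if (json.externalIds && json.externalIds.length > 0) {
    empolyee_code = json.externalIds[0].value;
    Logger.log("empolyee_code " + empolyee_code);
  } else {
    Logger.log("No empolyee_code " + empolyee_code);
  }
  // ... push the row as in the original code ...
}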

Forge chunk upload .NET Core

I have a question about uploading large objects to a Forge bucket. I know that I need to use the /resumable API, but how can I get the file (when I have only the filename)? In this code, what exactly is FILE_PATH? Generally, should I save the file on the server first and then upload it to the bucket?
private static dynamic resumableUploadFile()
{
    Console.WriteLine("*****begin uploading large file");
    string path = FILE_PATH;
    if (!File.Exists(path))
        path = @"..\..\..\" + FILE_PATH;
    // total size of file
    long fileSize = new System.IO.FileInfo(path).Length;
    // size of piece, say 2M
    long chunkSize = 2 * 1024 * 1024;
    // pieces count
    long nbChunks = (long)Math.Round(0.5 + (double)fileSize / (double)chunkSize);
    // record a global response for next function.
    ApiResponse<dynamic> finalRes = null;
    using (FileStream streamReader = new FileStream(path, FileMode.Open))
    {
        // unique id of this session
        string sessionId = RandomString(12);
        for (int i = 0; i < nbChunks; i++)
        {
            // start binary position of one certain piece
            long start = i * chunkSize;
            // end binary position of one certain piece;
            // if the end of the last piece would be past the end of the file,
            // it is clamped to the end binary position of the file
            long end = Math.Min(fileSize, (i + 1) * chunkSize) - 1;
            // tell Forge about the info of this piece
            string range = "bytes " + start + "-" + end + "/" + fileSize;
            // length of this piece
            long length = end - start + 1;
            // read the file stream of this piece
            byte[] buffer = new byte[length];
            MemoryStream memoryStream = new MemoryStream(buffer);
            int nb = streamReader.Read(buffer, 0, (int)length);
            memoryStream.Write(buffer, 0, nb);
            memoryStream.Position = 0;
            // upload the piece to the Forge bucket
            ApiResponse<dynamic> response = objectsApi.UploadChunkWithHttpInfo(BUCKET_KEY,
                FILE_NAME, (int)length, range, sessionId, memoryStream,
                "application/octet-stream");
            finalRes = response;
            if (response.StatusCode == 202)
            {
                Console.WriteLine("one certain piece has been uploaded");
                continue;
            }
            else if (response.StatusCode == 200)
            {
                Console.WriteLine("the last piece has been uploaded");
            }
            else
            {
                // any error
                Console.WriteLine(response.StatusCode);
                break;
            }
        }
    }
    return (finalRes);
}
FILE_PATH is the path where you stored the file on your server.
You should upload your file to your server first. Why? Because when you upload your file to the Autodesk Forge server you need an internal token, which should be kept secret (that's why you keep it on your server); you don't want someone to take that token and mess up your Forge account.
The code you pasted from this article is more about uploading from a server when the file is already stored there - either for caching purposes or the server is using/modifying those files.
As Paxton.Huynh said, FILE_PATH there contains the location on the server where the file is stored.
If you just want to upload the chunks to Forge through your server (to keep credentials and the internal access token secret), like a proxy, then it's probably better to just pass those chunks on to Forge instead of storing the file on the server first and then passing it on, which is what the sample code you referred to is doing.
See e.g. this, though it's in NodeJS: https://github.com/Autodesk-Forge/forge-buckets-tools/blob/master/server/data.management.js#L171

How do I search sub-folders and sub-sub-folders in Google Drive?

This is a commonly asked question.
The scenario is:-
folderA____folderA1____folderA1a
        \__folderA2____folderA2a
                   \___folderA2b
... and the question is how do I list all the files in all of the folders under the root folderA.
EDIT April 2020: Google has announced that multi-parent files are being disabled from September 2020. This alters the narrative below and means Alternative 2 is no longer an option. It might be possible to implement Alternative 2 using shortcuts. I will update this answer further as I test the new restrictions/features.
We are all used to the idea of folders (aka directories) in Windows/nix etc. In the real world, a folder is a container, into which documents are placed. It is also possible to place smaller folders inside bigger folders. Thus the big folder can be thought of as containing all of the documents inside its smaller children folders.
However, in Google Drive, a Folder is NOT a container, so much so that in the first release of Google Drive, they weren't even called Folders, they were called Collections. A Folder is simply a File with (a) no contents, and (b) a special mime-type (application/vnd.google-apps.folder). The way Folders are used is exactly the same way that tags (aka labels) are used. The best way to understand this is to consider GMail. If you look at the top of an open mail item, you see two icons. A folder with the tooltip "Move to" and a label with the tooltip "Labels". Click on either of these and the same dialogue box appears and is all about labels. Your labels are listed down the left hand side, in a tree display that looks a lot like folders. Importantly, a mail item can have multiple labels, or you could say, a mail item can be in multiple folders. Google Drive's Folders work in exactly the same way that GMail labels work.
Having established that a Folder is simply a label, there is nothing stopping you from organising your labels in a hierarchy that resembles a folder tree, in fact this is the most common way of doing so.
It should now be clear that a file (let's call it MyFile) in folderA2b is NOT a child or grandchild of folderA. It is simply a file with a label (confusingly called a Parent) of "folderA2b".
OK, so how DO I get all the files "under" folderA?
Alternative 1. Recursion
The temptation would be to list the children of folderA, then for any children that are folders recursively list their children; rinse, repeat (a rough sketch follows below). In a very small number of cases, this might be the best approach, but for most it has the following problem:-
It is woefully time consuming to do a server round trip for each sub folder. This does of course depend on the size of your tree, so if you can guarantee that your tree size is small, it could be OK.
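For completeness, a rough sketch of the recursive approach using the Drive v3 API via the googleapis Node client; the listFilesRecursively name is mine and drive is assumed to be an authorised client. It shows why this costs one round trip per folder.
async function listFilesRecursively(drive, folderId) {
  // NOTE: pagination is omitted for brevity
  const res = await drive.files.list({
    q: `'${folderId}' in parents and trashed = false`,
    fields: 'files(id, name, mimeType)'
  });
  let files = [];
  for (const f of res.data.files) {
    if (f.mimeType === 'application/vnd.google-apps.folder') {
      // one extra HTTP round trip for every sub-folder
      files = files.concat(await listFilesRecursively(drive, f.id));
    } else {
      files.push(f);
    }
  }
  return files;
}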
Alternative 2. The common parent
This works best if all of the files are being created by your app (ie. you are using drive.file scope). As well as the folder hierarchy above, create a dummy parent folder called, say, "MyAppCommonParent". As you create each file as a child of its particular Folder, you also make it a child of MyAppCommonParent. This becomes a lot more intuitive if you remember to think of Folders as labels. You can now easily retrieve all descendants by simply querying MyAppCommonParent in parents, as sketched below.
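With the common-parent trick, the whole subtree then comes back from a single query, along these lines (a sketch with the googleapis Node client; MY_APP_COMMON_PARENT_ID is a placeholder for the dummy folder's ID and drive is assumed to be an authorised client):
// inside an async function; every file the app creates also gets MyAppCommonParent as a parent,
// so one query returns all descendants regardless of depth
const res = await drive.files.list({
  q: `'${MY_APP_COMMON_PARENT_ID}' in parents and trashed = false`,
  fields: 'files(id, name)'
});
// res.data.files now contains every file the app created under the hierarchy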
Alternative 3. Folders first
Start by getting all folders. Yep, all of them. Once you have them all in memory, you can crawl through their parents properties and build your tree structure and list of Folder IDs. You can then do a single files.list?q='folderA' in parents or 'folderA1' in parents or 'folderA1a' in parents.... Using this technique you can get everything in two http calls.
The pseudo code for option 3 is a bit like...
// get all folders from Drive files.list?q=mimetype=application/vnd.google-apps.folder and trashed=false&fields=parents,name
// store in a Map, keyed by ID
// find the entry for folderA and note the ID
// find any entries where the ID is in the parents, note their IDs
// for each such entry, repeat recursively
// use all of the IDs noted above to construct a ...
// files.list?q='folderA-ID' in parents or 'folderA1-ID' in parents or 'folderA1a-ID' in parents...
Alternative 2 is the most efficient, but only works if you have control of file creation. Alternative 3 is generally more efficient than Alternative 1, but there may be certain small tree sizes where 1 is best.
Sharing a Python solution to the excellent Alternative 3 by @pinoyyid, above, in case it's useful to anyone. I'm not a developer, so it's probably hopelessly un-pythonic... but it works, only makes 2 API calls, and is pretty quick.
Get a master list of all the folders in a drive.
Test whether the folder-to-search is a parent (ie. it has subfolders).
Iterate through subfolders of the folder-to-search testing whether they too are parents.
Build a Google Drive file query with one '<folder-id>' in parents segment per subfolder found.
Interestingly, Google Drive seems to have a hard limit of 599 '<folder-id>' in parents segments per query, so if your folder-to-search has more subfolders than this, you need to chunk the list.
FOLDER_TO_SEARCH = '123456789'  # ID of folder to search
DRIVE_ID = '654321'  # ID of shared drive in which it lives
MAX_PARENTS = 500  # Limit set safely below Google max of 599 parents per query.
# drive_api_ref (an authorised Drive v3 service object) is assumed to be created elsewhere,
# e.g. with googleapiclient.discovery.build('drive', 'v3', credentials=...)


def get_all_folders_in_drive():
    """
    Return a dictionary of all the folder IDs in a drive mapped to their parent folder IDs (or to the
    drive itself if a top-level folder). That is, flatten the entire folder structure.
    """
    folders_in_drive_dict = {}
    page_token = None
    max_allowed_page_size = 1000
    just_folders = "trashed = false and mimeType = 'application/vnd.google-apps.folder'"
    while True:
        results = drive_api_ref.files().list(
            pageSize=max_allowed_page_size,
            fields="nextPageToken, files(id, name, mimeType, parents)",
            includeItemsFromAllDrives=True, supportsAllDrives=True,
            corpora='drive',
            driveId=DRIVE_ID,
            pageToken=page_token,
            q=just_folders).execute()
        folders = results.get('files', [])
        page_token = results.get('nextPageToken', None)
        for folder in folders:
            folders_in_drive_dict[folder['id']] = folder['parents'][0]
        if page_token is None:
            break
    return folders_in_drive_dict


def get_subfolders_of_folder(folder_to_search, all_folders):
    """
    Yield subfolders of the folder-to-search, and then subsubfolders etc. Must be called by an iterator.
    :param all_folders: The dictionary returned by :meth:`get_all_folders_in_drive`.
    """
    temp_list = [k for k, v in all_folders.items() if v == folder_to_search]  # Get all subfolders
    for sub_folder in temp_list:  # For each subfolder...
        yield sub_folder  # Return it
        yield from get_subfolders_of_folder(sub_folder, all_folders)  # Get subsubfolders etc


def get_relevant_files(relevant_folders):
    """
    Get files under the folder-to-search and all its subfolders.
    """
    relevant_files = {}
    chunked_relevant_folders_list = [relevant_folders[i:i + MAX_PARENTS] for i in
                                     range(0, len(relevant_folders), MAX_PARENTS)]
    for folder_list in chunked_relevant_folders_list:
        query_term = ' in parents or '.join('"{0}"'.format(f) for f in folder_list) + ' in parents'
        relevant_files.update(get_all_files_in_folders(query_term))
    return relevant_files


def get_all_files_in_folders(parent_folders):
    """
    Return a dictionary of file IDs mapped to file names for the specified parent folders.
    """
    files_under_folder_dict = {}
    page_token = None
    max_allowed_page_size = 1000
    just_files = f"mimeType != 'application/vnd.google-apps.folder' and trashed = false and ({parent_folders})"
    while True:
        results = drive_api_ref.files().list(
            pageSize=max_allowed_page_size,
            fields="nextPageToken, files(id, name, mimeType, parents)",
            includeItemsFromAllDrives=True, supportsAllDrives=True,
            corpora='drive',
            driveId=DRIVE_ID,
            pageToken=page_token,
            q=just_files).execute()
        files = results.get('files', [])
        page_token = results.get('nextPageToken', None)
        for file in files:
            files_under_folder_dict[file['id']] = file['name']
        if page_token is None:
            break
    return files_under_folder_dict


if __name__ == "__main__":
    all_folders_dict = get_all_folders_in_drive()  # Flatten folder structure
    relevant_folders_list = [FOLDER_TO_SEARCH]  # Start with the folder-to-search
    for folder in get_subfolders_of_folder(FOLDER_TO_SEARCH, all_folders_dict):
        relevant_folders_list.append(folder)  # Recursively search for subfolders
    relevant_files_dict = get_relevant_files(relevant_folders_list)  # Get the files
Sharing a JavaScript solution using recursion to build an array of folders, starting with the first-level folder and moving down the hierarchy. This array is composed by recursively cycling through the parent IDs of the file in question.
The extract below makes 3 separate queries to the gapi:
get the root folder id
get a list of folders
get a list of files
The code then iterates through the list of files, creating an array of folder names for each.
const { google } = require('googleapis')
const gOAuth = require('./googleOAuth')

// resolve the promises for getting G files and folders
const getGFilePaths = async () => {
  //update to use Promise.All()
  let gRootFolder = await getGfiles().then(result => {return result[2][0]['parents'][0]})
  let gFolders = await getGfiles().then(result => {return result[1]})
  let gFiles = await getGfiles().then(result => {return result[0]})
  // create the path files and create a new key with array of folder paths, returning an array of files with their folder paths
  return gFiles
    .filter((file) => {return file.hasOwnProperty('parents')})
    .map((file) => ({...file, path: makePathArray(gFolders, file['parents'][0], gRootFolder)}))
}

// recursive function to build an array of the file paths top -> bottom
let makePathArray = (folders, fileParent, rootFolder) => {
  if(fileParent === rootFolder){return []}
  else {
    let filteredFolders = folders.filter((f) => {return f.id === fileParent})
    if(filteredFolders.length >= 1 && filteredFolders[0].hasOwnProperty('parents')) {
      // pass rootFolder down so the recursion stops at the root folder
      let path = makePathArray(folders, filteredFolders[0]['parents'][0], rootFolder)
      path.push(filteredFolders[0]['name'])
      return path
    }
    else {return []}
  }
}

// get meta-data list of files from gDrive, with query parameters
const getGfiles = () => {
  try {
    let getRootFolder = getGdriveList({corpora: 'user', includeItemsFromAllDrives: false,
      fields: 'files(name, parents)',
      q: "'root' in parents and trashed = false and mimeType = 'application/vnd.google-apps.folder'"})
    let getFolders = getGdriveList({corpora: 'user', includeItemsFromAllDrives: false,
      fields: 'files(id,name,parents), nextPageToken',
      q: "trashed = false and mimeType = 'application/vnd.google-apps.folder'"})
    let getFiles = getGdriveList({corpora: 'user', includeItemsFromAllDrives: false,
      fields: 'files(id,name,parents, mimeType, fullFileExtension, webContentLink, exportLinks, modifiedTime), nextPageToken',
      q: "trashed = false and mimeType != 'application/vnd.google-apps.folder'"})
    return Promise.all([getFiles, getFolders, getRootFolder])
  }
  catch(error) {
    return `Error in retrieving a file response from Google Drive: ${error}`
  }
}

// make call out to gDrive to get meta-data files. Code adds all files into a single array, which are returned in pages
const getGdriveList = async (params) => {
  const gKeys = await gOAuth.get()
  const drive = google.drive({version: 'v3', auth: gKeys})
  let list = []
  let nextPgToken
  do {
    let res = await drive.files.list(params)
    list.push(...res.data.files)
    nextPgToken = res.data.nextPageToken
    params.pageToken = nextPgToken
  } while (nextPgToken)
  return list
}
The following works very well but requires an additional call to the API.
It shares the root folder, does the search while the file is shared, then removes the share. This works great in our production environments.
userPermission = new Permission()
{
    Type = "user",
    Role = "reader",
    EmailAddress = "AnyEmailAddress"
};
var request = service.Permissions.Create(userPermission, rootFolderID);
var result = request.ExecuteAsync().ContinueWith(t =>
{
    Permission permission = t.Result;
    if (t.Exception == null)
    {
        // Do your search here
        // make sure you add 'AnyEmailAddress' in readers
        service.Files.List......
        // then remove the share
        var requestDeletePermission = service.Permissions.Delete(rootFolderID, permission.filePermissionID);
        requestDeletePermission.Execute();
    }
});
For Google Apps Script, I've written this function:
function getSubFolderIdsByFolderId(folderId, result = []) {
  let folder = DriveApp.getFolderById(folderId);
  let folders = folder.getFolders();
  if (folders && folders.hasNext()) {
    while (folders.hasNext()) {
      let f = folders.next();
      let childFolderId = f.getId();
      result.push(childFolderId);
      result = getSubFolderIdsByFolderId(childFolderId, result);
    }
  }
  return result.filter(onlyUnique);
}

function onlyUnique(value, index, self) {
  return self.indexOf(value) === index;
}
With this call:
const subFolderIds = getSubFolderIdsByFolderId('1-id-of-the-root-folder-to-check')
And this for loop:
let q = [];
for (let i in subFolderIds) {
  let subFolderId = subFolderIds[i];
  q.push('"' + subFolderId + '" in parents');
}
if (q.length > 0) {
  q = '(' + q.join(' or ') + ') and';
} else {
  q = '';
}
This gives me the required query part for the DriveApp.searchFiles call.
A major disadvantage of this approach is the number of requests and the time you have to wait until you get the complete list, depending on the size of the root directory. I would not call this an ideal solution!
Maybe caching could improve the performance of subsequent calls if you take the modification date into account in the Drive API query.
I'm curious because, in the browser version of Google Drive, you can search recursively within folders, and it does not take as much time as my approach does.
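As a minimal sketch of that caching idea, Apps Script's CacheService could hold the computed subfolder list between runs. The wrapper name, the cache key, and the six-hour lifetime here are assumptions, and invalidation based on modification dates is left out; getSubFolderIdsByFolderId is the function from above.
function getCachedSubFolderIds(rootFolderId) {
  const cache = CacheService.getScriptCache();
  const cached = cache.get('subFolderIds_' + rootFolderId);
  if (cached) {
    return JSON.parse(cached);
  }
  const subFolderIds = getSubFolderIdsByFolderId(rootFolderId);
  // 21600 seconds (6 hours) is the maximum lifetime CacheService allows
  cache.put('subFolderIds_' + rootFolderId, JSON.stringify(subFolderIds), 21600);
  return subFolderIds;
}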

Using ItemCollection on a BoxFolder type with Box API only returns 100 results and cannot retrieve the remaining ones

For a while now, I've been using the Box API to connect Acumatica ERP to Box and everything had been going fine until recently. Whenever I try to use a BoxCollection type with the property ItemCollection, I only get the first 100 results no matter the limit I set in GetInformationAsync(). Here is the code snippet:
[PermissionSet(SecurityAction.Assert, Name = "FullTrust")]
public BoxCollection<BoxItem> GetFolderItems(string folderId, int limit = 500, int offset = 0)
{
    var response = new BoxCollection<BoxItem>();
    var fieldsToGet = new List<string>() { BoxItem.FieldName, BoxItem.FieldDescription, BoxItem.FieldParent, BoxItem.FieldEtag, BoxFolder.FieldItemCollection };
    response = Task.Run(() => Client.FoldersManager.GetFolderItemsAsync(folderId, limit, offset)).Result;
    return response;
}
I then pass that information on to a BoxFolder type variable and then try to use the ItemCollection.Entries property, but this only returns 100 results at a time, with no visible way to extract the remaining 61 (in my case, Count = 161 but Entries = 100 always).
Here is another code snippet of the variable in use; I am basically trying to get the folder ID based on the name of the folder inside Box:
private static void SyncProcess(BoxFolder rootFolder, string folderName)
{
    var boxFolder = rootFolder.ItemCollection.Entries.SingleOrDefault(ic => ic.Type == "folder" && ic.Name == folderName);
}
I wasn't able to find anything related to that limit = 100 in the documentation and it only started to give me problems recently.
I had to create a workaround by using the following:
var boxCollection = client.GetFolderItems(rootFolder.Id);
var boxFolder = boxCollection.Entries.SingleOrDefault(ic => ic.Type == "folder" && ic.Name == folderName);
I was just wondering if there was a better way to get the complete collection using the property ItemCollection.Entries like I used to, instead of having to fetch them again.
Thanks!
Box pages folder items to keep response times short. The default page size is 100 items. You must iterate through the pages to get all of the items. Here's a code snippet that'll get 100 items at a time until all items in the folder are fetched. You can request up to 1000 items at a time.
var items = new List<BoxItem>();
BoxCollection<BoxItem> result;
do
{
    result = await Client.FoldersManager.GetFolderItemsAsync(folderId, 100, items.Count());
    items.AddRange(result.Entries);
} while (items.Count() < result.TotalCount);
John's answer can lead to duplicate values in your items collection if there are external/shared folders in your list. Those are hidden when you call "GetFolderItemsAsync" with the "asUser" header set.
There is a comment about it in the Box API's codeset itself (https://github.com/box/box-windows-sdk-v2/blob/main/Box.V2/Managers/BoxFoldersManager.cs)
Note: If there are hidden items in your previous response, your next offset should be = offset + limit, not the # of records you received back.
The total_count returned may not match the number of entries when using enterprise scope, because external folders are hidden from the list of entries.
Taking this into account, it's better to not rely on comparing the number of items retrieved and the TotalCount property.
var items = new List<BoxItem>();
BoxCollection<BoxItem> result;
int limit = 100;
int offset = 0;
do
{
    result = await Client.FoldersManager.GetFolderItemsAsync(folderId, limit, offset);
    offset += limit;
    items.AddRange(result.Entries);
} while (offset < result.TotalCount);

How do I parse the KairosSDK JSON recognise response in Swift?

For those who don't know what the Kairos SDK is, it's basically a facial recognition API.
When you give it an image, it will tell you who is in it, if it can match the face with someone in the database.
When I give it an image, the API sends me back this response:
[images: (
{
attributes = {
gender = {
confidence = "80%";
type = F;
};
};
candidates = (
{
"enrollment_timestamp" = 1436883322;
face3rd = "0.988351106643677";
},
{
"enrollment_timestamp" = 1436883214;
hi = "0.94137054681778";
},
{
"enrollment_timestamp" = 1436883132;
hi = "0.94137054681778";
}
);
time = "6.43676";
transaction = {
confidence = "0.988351106643677";
"distance_apart" = "0.046980559825897";
"gallery_name" = test1;
height = 482;
"matching_threshold" = "0.4";
"next_subject" = hi;
"next_subject_confidence" = "0.94137054681778";
"simularity_threshold" = "0.1";
status = success;
subject = face3rd;
topLeftX = 148;
topLeftY = 92;
width = 482;
};
}
)]
What I have done is put three images in the database and called them, respectively, face3rd, hi, and hi (sorry for the two hi's).
I have been trying to parse the names and the numbers next to them for so long; I can get around the 6 second response time.
The reason I have not been able to get the names is because, as you can see, I don't know what to tell Swift to look for. The image name changes depending on who I get back.
I don't know if I've explained my situation very well, but if you look at the response, the parts that say:
face3rd = "0.988351106643677";
hi = "0.94137054681778";
hi = "0.94137054681778";
I need the information on both sides of the equals sign.
Thank you for your help, and apologies if reading this was pedantic or you felt like there was a lot of repetition.
Thanks!
Yes, it is poorly formatted JSON that we are returning. We will fix it in an upcoming version of the API (no release date at this time... sorry).
If all you need is the closest match, you can just access the subject variable directly and ignore the candidates array.
Otherwise, you would unfortunately need to parse the candidates array manually. I'm not sure how to do that in Swift.