
List top level folders in GCP GCS from Cloud Function bucket API?
I have a GCS bucket that has objects like...
myfile.pdf
myimg.png
folder001/stuff/<some files or deep folders>
folder002/<some files or deep folders>
.
.
.
someOtherFolderName00n/<some files or deep folders>
... and just want to get the list of top-level folders: folder001, ..., someOtherFolderName00n.
I have a snippet of code in GCP's Cloud Functions using the Bucket API that looks like...
const admin = require('firebase-admin');
admin.initializeApp();
const sourceBucket = admin.storage().bucket("test_source_001");
exports.my_function = async (event, context) => {
  // get top level bucket folders
  const [sourceFiles] = await sourceBucket.getFiles({
    prefix: '',
    delimiter: '/'
  });
  // extract the name property from each object
  const sourceFileNames = sourceFiles.map((file) => file.name);
  console.log(sourceFileNames);
};
... but this actually ends up listing everything in the bucket rather than just the top-level directories (it even includes the top-level files, which have no trailing '/'), so I get a list like
myfile.pdf
myimg.png
folder001/stuff/
folder001/stuff/file1
...
folder001/stuff/fileN
folder002/file1
...
folder002/fileN
...
someOtherFolderName00n/file1
...
someOtherFolderName00n/fileN
I think I could just do something like...
const s = new Set();
for (let f of sourceFileNames) {
  s.add(f.split('/')[0]);
}
... but is there any way to have the getFiles query return only the top-level folders in the first place? (I'm new to GCP and Cloud Functions, so I wonder if I'm just missing something simple here.)

You can specify a prefix for the required path in the options.
Prefixes and delimiters can be used to emulate directory listings.
Prefixes can be used to filter objects starting with a prefix.
The delimiter argument can be used to restrict the results to only the objects in the given "directory". Without the delimiter, the entire tree under the prefix is returned.
If you want to list only the folders that start with a given prefix (here, 'folder'), change the prefix like this:
const [sourceFiles] = await sourceBucket.getFiles({
  prefix: 'folder',
  delimiter: '/'
});
For more information, refer to this document on how to specify prefixes.
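Alternatively (not part of the original answer, but a hedged sketch worth noting): with the Node.js @google-cloud/storage client, when you pass a delimiter and disable auto-pagination, the rolled-up prefixes are returned on the raw API response, which is exactly the list of top-level "folders":

const [files, , apiResponse] = await sourceBucket.getFiles({
  autoPaginate: false,
  delimiter: '/'
});
// apiResponse.prefixes should look like ['folder001/', 'folder002/', ...],
// while `files` contains only the top-level objects like myfile.pdf
console.log(apiResponse.prefixes);

This relies on getFiles resolving to [files, nextQuery, apiResponse] when autoPaginate is false; double-check the prefixes field against the library version you're using.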

Related

IPFS file extension for GLB

I'm using the ipfs-http-client module to interact with IPFS. My problem is that I need the file extension in the link that I generate, and it seems that I can only get it with the wrapWithDirectory flag (-w on the command line). But so far this flag makes the result empty. The IPFS documentation only covers the command line, and the few tutorials I've found either use tools other than JS or upload folders manually. I need to do it from a JS script, from a single file. The motivation is that I want to generate metadata for an NFT, and a metadata field requires pointing to a file with a specific extension.
Full detail: I need to add a GLB file on OpenSea. GLB is the binary form of glTF, a standard for 3D files. OpenSea can detect the animation_url field in an NFT's metadata and render that file, but it needs to end with .glb. In other words, my NFT's metadata needs to look like this:
{
    name: <name>,
    description: <description>,
    image: <image>,
    animation_url: 'https://ipfs.io/ipfs/<hash>.glb' // OpenSea requires the '.glb' ending.
}
The way I do this so far is as follows:
import { create } from 'ipfs-http-client';

const client = create({
    host: 'ipfs.infura.io',
    port: 5001,
    protocol: 'https',
    headers: { authorization },
});

const result = await client.add(file); // {path: '<hash>', cid: CID}
const link = `https://ipfs.io/ipfs/${result.path}`; // I can't add an extension here.
In that code, I can put animation_url: link in the metadata object, but OpenSea won't recognize it.
I have tried adding the option mentioned above as well:
const result = await client.add(file, {wrapWithDirectory: true}); // {path: '', cid: CID}
But then result.path is an empty string.
How can I generate a link ending with .glb?
I found the solution. It indeed involves creating a directory, whose CID is what gets returned, so that we can append the file name with its extension at the end. The result is https://ipfs.io/ipfs/<directory_hash>/<file_name_with_extension>.
So, correcting the code above gives the following:
import { create } from 'ipfs-http-client';

const client = create({
    host: 'ipfs.infura.io',
    port: 5001,
    protocol: 'https',
    headers: { authorization },
});

const content = await file.arrayBuffer(); // The file content needs to be a buffer.
const result = await client.add(
    { content, path: file.name },
    { wrapWithDirectory: true }
);
// result.path is empty; use result.cid.toString() instead,
// and then manually append the file name with its extension.
const link = `https://ipfs.io/ipfs/${result.cid.toString()}/${file.name}`;

How exactly does the ipfs cat method find and display the contents of files using a CID by making use of the DHT?

I have done a lot of research on the internet to learn how exactly the ipfs cat and get methods find and download files from other peers using a CID. I want to fully understand how this process works: "The cat method first searches your own node for the file requested, and if it can't find it there, it will attempt to find it on the broader IPFS network" (https://proto.school/regular-files-api/04).
This is the ipfs source code for cat:
async function * cat (ipfsPath, options = {}) {
  ipfsPath = normalizeCidPath(ipfsPath)

  if (options.preload !== false) {
    const pathComponents = ipfsPath.split('/')
    preload(CID.parse(pathComponents[0]))
  }

  const file = await exporter(ipfsPath, repo.blocks, options)

  // File may not have unixfs prop if small & imported with rawLeaves true
  if (file.type === 'directory') {
    throw new Error('this dag node is a directory')
  }

  if (!file.content) {
    throw new Error('this dag node has no content')
  }

  yield * file.content(options)
}
I deduce that the two important arguments that allow for peer routing and file fetching are repo.blocks and preload. repo.blocks is created during ipfs.create() and then passed as a parameter to ipfs.createCat(), the method that actually creates the cat method. preload is also created by ipfs.create() and passed as an argument to ipfs.createCat() so that it can be used in ipfs.cat(). What confuses me most is which of preload or repo.blocks is actually responsible for CID querying. I analyzed the underlying methods for this part of cat:
const pathComponents = ipfsPath.split('/')
preload(CID.parse(pathComponents[0]))
and learned that this is the part of ipfs.cat that makes HTTP connections to other peers. However, this part:
const file = await exporter(ipfsPath, repo.blocks, options)
includes sub-methods like
const block = await blockstore.get(cid, options);
const node = dagPb.decode(block);
which also seem to be related to CID querying through the use of distributed hash tables. blockstore.get did not use any methods that seemed to connect to other peers or search for peers that have a CID, but I am still very confused about whether these methods have any relation to CID querying. I would highly appreciate help on how the cat method works under the hood from someone who is an expert in IPFS, or at least resources I can use to learn the material myself.

How can I get the list of link files from a translated compressed/zip Revit file?

I have translated a Revit file with several link files. From the viewer I can browse all elements of the root Revit model, including all elements from the link files, using the default 'Model Browser' extension. I have also created a custom extension from which I can isolate all elements of each object type.
Now I want to create an extension like 'Model Browser' that shows the root file name as the top (parent) node and each link file's name as a child node. I also want clicking a link file to isolate all elements from that link file in the viewer, and clicking the root file to show all elements, including those of the link files.
For information, my application is built using C# and JavaScript on the .NET platform.
Can anyone advise me which API I can try? It would also be very helpful if someone shared examples or URLs where I can get help.
Thanks in advance!
You can take advantage of the AecModelData to get linked-model data and rebuild relationships from the PropertyDB inside Forge Viewer.
If an object is from a linked RVT, you can check its external id. If the external id contains a slash symbol, then it is from a linked RVT. Here is an example:
Object external id: ffa0b0a8-8aab-48f9-beb5-dba5d9b4968f-0010cfee/e021b7a9-1e57-428c-87db-8e087322cd49-0015a0f6
An instanceId from the linkedDocuments in the AecModelData: ffa0b0a8-8aab-48f9-beb5-dba5d9b4968f-0010cfee
You can see that the GUID on the left side of the slash symbol matches the instanceId mentioned above.
To get the linked RVT model name, we can reuse the instanceId from the linkedDocuments of the AecModelData to look up the information we need. Here is a code snippet for you, assuming the instance id is ffa0b0a8-8aab-48f9-beb5-dba5d9b4968f-0010cfee:
function getExternalIdMappingAsync( model ) {
    return new Promise( ( resolve, reject ) => {
        model.getExternalIdMapping(
            map => resolve( map ),
            error => reject( error )
        );
    });
}

function getPropertiesAsync( dbId, viewer ) {
    return new Promise( ( resolve, reject ) => {
        viewer.getProperties(
            dbId,
            result => resolve( result ),
            error => reject( error )
        );
    });
}

// 1. Get external id mapping for converting external id to Viewer's dbId
let externalIdMapping = await getExternalIdMappingAsync( viewer.model );
let dbId = externalIdMapping['ffa0b0a8-8aab-48f9-beb5-dba5d9b4968f-0010cfee'];

// 2. Get properties of the linked model instance
let propResult = await getPropertiesAsync( dbId, viewer );

// 3. Find the type name property for its value
let linkNameProp = propResult.properties.find( prop => prop.displayName == 'Type Name' || prop.attributeName == 'Type Name' );
let linkName = linkNameProp.displayValue; //!<<< This is the linked RVT name
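As a complement (a hedged sketch, not part of the original answer): the AecModelData itself can be fetched with Forge Viewer's Autodesk.Viewing.Document.getAecModelData(), and the instance id can be split off an external id as below. The exact shape of the linkedDocuments entries is an assumption here:

// Fetch the AEC model data for the loaded model (Viewer v7+)
const aecData = await Autodesk.Viewing.Document.getAecModelData( viewer.model.getDocumentNode() );
const linkedDocuments = ( aecData && aecData.linkedDocuments ) || [];

function getLinkInstanceId( externalId ) {
    // External ids of linked elements look like '<instanceId>/<elementId>'
    const slashIndex = externalId.indexOf( '/' );
    return slashIndex < 0 ? null : externalId.substring( 0, slashIndex ); // null means not from a linked RVT
}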
Hope it helps~

How to convert Pulumi Output<T> to string?

I am dealing with creating an AWS API Gateway. I am trying to create a CloudWatch Log Group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem with the Rest API creation.
My issue is in converting restApi.id, which is of type pulumi.Output, to string.
I have tried these 2 versions, which are proposed in their PR#2496:
const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`);
const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}`
here is the code where it is used
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    `API-Gateway-Execution-Logs_${restApiId}/${stageName}`,
    {},
);
stageName is just a string.
I have also tried to apply again, like
const restApiIdString = restApiId.apply((v) => v);
I always get this error from pulumi up:
aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported.
Please help me convert the Output to a string.
@Cameron answered the naming question; I want to answer the question in your title.
It's not possible to convert an Output<string> to string, or any Output<T> to T.
Output<T> is a container for a future value T which may not be resolved even after the program execution is over. Your restApiId, for example, is generated by AWS at deployment time, so if you run your program in preview, there's no value for restApiId yet.
Output<T> is like a Promise<T> which will be eventually resolved, potentially after some resources are created in the cloud.
Therefore, the only operations with Output<T> are:
Convert it to another Output<U> with apply(f), where f: T -> U
Assign it to an Input<T> to pass it to another resource constructor
Export it from the stack
Any value manipulation has to happen within an apply call.
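For example (a minimal sketch reusing the names from the question):

// Build the log group name inside apply; the result is an Output<string>,
// which can then be passed to the `name` input of another resource.
const logGroupName = apiGatewayToSqsQueueRestApi.id.apply(
    id => `API-Gateway-Execution-Logs_${id}/${stageName}`
);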
So long as the Output is resolvable while the Pulumi script is still running, you can use an approach like the one below:
import { Output } from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as fs from "fs";

// create a GCP registry
const registry = new gcp.container.Registry("my-registry");
const registryUrl = registry.id.apply(_ => gcp.container.getRegistryRepository().then(reg => reg.repositoryUrl));

// create a GCP storage bucket
const bucket = new gcp.storage.Bucket("my-bucket");
const bucketURL = bucket.url;

function GetValue<T>(output: Output<T>) {
    return new Promise<T>((resolve, reject) => {
        output.apply(value => {
            resolve(value);
        });
    });
}

(async () => {
    fs.writeFileSync("./PulumiOutput_Public.json", JSON.stringify({
        registryURL: await GetValue(registryUrl),
        bucketURL: await GetValue(bucketURL),
    }, null, "\t"));
})();
To clarify, this approach only works when you're doing an actual deployment (i.e. pulumi up), not merely a preview (as explained here).
That's good enough for my use-case though, as I just want a way to store the registry-url and such after each deployment, for other scripts in my project to know where to find the latest version.
Short Answer
You can specify the physical name of your LogGroup by specifying the name input and you can construct this from the API Gateway id output using pulumi.interpolate. You must use a static string as the first argument to your resource. I would recommend using the same name you're providing to your API Gateway resource as the name for your Log Group. Here's an example:
const apiGatewayToSqsQueueRestApi = new aws.apigateway.RestApi("API-Gateway-Execution");

const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    "API-Gateway-Execution", // this is the logical name and must be a static string
    {
        name: pulumi.interpolate`API-Gateway-Execution-Logs_${apiGatewayToSqsQueueRestApi.id}/${stageName}` // this is the physical name and can be constructed from other resource outputs
    },
);
Longer Answer
The first argument to every resource type in Pulumi is the logical name and is used for Pulumi to track the resource internally from one deployment to the next. By default, Pulumi auto-names the physical resources from this logical name. You can override this behavior by specifying your own physical name, typically via a name input to the resource. More information on resource names and auto-naming is here.
The specific issue here is that logical names cannot be constructed from other resource outputs. They must be static strings. Resource inputs (such as name) can be constructed from other resource outputs.
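To illustrate the difference (a sketch, assuming Pulumi's usual auto-naming behavior):

// With no `name` input, Pulumi derives the physical name from the logical
// name plus a random suffix, e.g. "my-log-group-abc123f":
const autoNamed = new aws.cloudwatch.LogGroup("my-log-group");

// With an explicit `name` input, the physical name is exactly what you pass:
const explicitlyNamed = new aws.cloudwatch.LogGroup("my-log-group-2", {
    name: "my-exact-log-group-name",
});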
Encountered a similar issue recently. Adding this for anyone who comes looking.
For Pulumi Python, some policies require the input to be stringified JSON. Say you're writing an SQS queue and a DLQ for it; you may initially write something like this:
import json

import pulumi_aws

dlq = pulumi_aws.sqs.Queue()

queue = pulumi_aws.sqs.Queue(
    redrive_policy=json.dumps({
        "deadLetterTargetArn": dlq.arn,
        "maxReceiveCount": "3"
    })
)
The issue we see here is that the json lib errors out, stating that type Output cannot be serialized. When you print() dlq.arn, you see a memory address for it, like <pulumi.output.Output object at 0x10e074b80>.
To work around this, we have to leverage Pulumi's Output helpers and write a callback function:
import json

import pulumi_aws
from pulumi import Output

def render_redrive_policy(arn):
    return json.dumps({
        "deadLetterTargetArn": arn,
        "maxReceiveCount": "3"
    })

dlq = pulumi_aws.sqs.Queue()

queue = pulumi_aws.sqs.Queue(
    redrive_policy=Output.all(arn=dlq.arn).apply(
        lambda args: render_redrive_policy(args["arn"])
    )
)

Loading JSON Without an HTTP Request

I am working on a project using Angular 4, NPM, Node.js, and the Angular CLI.
I have a rather unusual need to load JSON into an Angular service (using @Injectable) without an HTTP request, i.e. it will always be loaded locally as part of the package, and not retrieved from a server.
Everything I've found so far indicates that you either have to modify the project's typings.d.ts file or use an HTTP request to retrieve it from the /assets folder or similar, neither of which is an option for me.
What I am trying to accomplish is this. Given the following directory structure:
/app
/services
/my-service
/my.service.ts
/myJson.json
I need the my.service.ts service, which is using @Injectable, to load the JSON file myJson.json. For my particular case, there will be multiple JSON files sitting next to the my.service.ts file that will all need to be loaded.
To clarify, the following approaches will not work for me:
Using an HTTP Service to Load JSON File From Assets
URL: https://stackoverflow.com/a/43759870/1096637
Excerpt:
// Get users from the API
return this.http.get('assets/ordersummary.json')//, options)
    .map((response: Response) => {
        console.log("mock data" + response.json());
        return response.json();
    })
    .catch(this.handleError);
Modifying typings.d.ts To Allow Loading JSON Files
URL: https://hackernoon.com/import-json-into-typescript-8d465beded79
Excerpt:
Solution: Using Wildcard Module Name
In TypeScript 2+, we can use a wildcard character in the module name. In your TS definition file, e.g. typings.d.ts, you can add this line:
declare module "*.json" {
    const value: any;
    export default value;
}
Then, your code will work like a charm!
// TypeScript
// app.ts
import * as data from './example.json';
const word = (<any>data).name;
console.log(word); // output 'testing'
The Question
Does anyone else have any ideas for getting these files loaded into my service without the need for either of these approaches?
You will get an error if you import the JSON directly, but a simple workaround is to declare typings for all JSON files.
typings.d.ts
declare module "*.json" {
    const value: any;
    export default value;
}
comp.ts
import * as data from './data.json';
The solution I found to this was using RequireJS, which was available to me via the Angular CLI framework.
I had to declare require as a variable globally:
declare var require: any;
And then I could use require.context to get all of the files in a folder I created to hold the types, at ../types.
Below is the entire completed service, which loads all of the JSON files (each of which is a type) into the service variable types.
The result is an object of types, where the key for each type is the file name and the value is the JSON from that file.
Example result after loading files type1.json, type2.json, and type3.json from the folder ../types:
{
    type1: {
        class: "myClass1",
        property1: "myProperty1"
    },
    type2: {
        class: "myClass2",
        property1: "myProperty2"
    },
    type3: {
        class: "myClass3",
        property1: "myProperty3"
    }
}
The Final Service File
import { Injectable } from '@angular/core';

declare var require: any;

@Injectable()
export class TypeService {

    constructor() {
        this.init();
    }

    types: any;

    init: Function = () => {
        // Get all of the types of branding available in the types folder
        this.types = (context => {
            // Get the keys from the context returned by require
            let keys = context.keys();
            // Get the values from the context using the keys
            let values = keys.map(context);
            // Reduce the keys array to create the types object
            return keys.reduce(
                (types, key, i) => {
                    // Update the key name by removing "./" from the beginning and ".json" from the end.
                    key = key.replace(/^\.\/([^\.]+)\.json/, (a, b) => { return b; });
                    // Set the value on the types object using the new key and the value at the current index
                    types[key] = values[i].data;
                    // Return the accumulating types object
                    return types;
                }, {}
            );
        })(require.context('../types', true, /\.json$/));
    }
}
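A hypothetical usage sketch (the component name and selector are made up): inject the service and read the loaded types.

import { Component } from '@angular/core';
import { TypeService } from './services/my-service/my.service';

@Component({ selector: 'app-types-demo', template: '' })
export class TypesDemoComponent {
    constructor(private typeService: TypeService) {
        // { type1: {...}, type2: {...}, type3: {...} }
        console.log(this.typeService.types);
    }
}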
You can directly access a service's variables through the service instance that is injected in the constructor.
...So, say your constructor loads the service like this:
constructor(private someService: SomeService) {}
You can just do someService.theJsonObject to access it.
Just be careful not to do this before the service function that loads the JSON has run; you'd get a null value otherwise.
You can assign variables in your service files the same way you do in component files.
Just declare them in the service:
public JsonObject: any;
And the easiest way is to let the function that calls your service assign the JSON object for you.
So, say you call the service like this:
this.serviceObject.function().subscribe(
    resp => {
        this.serviceObject.JsonObject = resp;
    }
);
After this is done once, other components can access the JSON content using someService.theJsonObject, as discussed earlier.
In your case, I think all you need to do is embed your JSON object in your code. Maybe you can use a const. That's not bad code or anything.
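For instance (a minimal sketch of the "embed it in code" suggestion; file and symbol names are hypothetical), the data lives in a plain TypeScript file next to the service, so no HTTP request and no typings.d.ts change is needed:

// my-json.ts: the JSON payload embedded as a constant
export const MY_JSON = {
    type1: { class: 'myClass1', property1: 'myProperty1' }
};

// my.service.ts: import it like any other module
import { Injectable } from '@angular/core';
import { MY_JSON } from './my-json';

@Injectable()
export class MyJsonService {
    readonly data = MY_JSON;
}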