IPFS file extension for GLB

I'm using the ipfs-http-client module to interact with IPFS. My problem is that I need a file extension on the link I generate, and it seems I can only get one with the wrapWithDirectory flag (-w on the command line). But so far that flag makes the result empty. The IPFS documentation only covers the command line, and the few tutorials I've found either use tools other than JS or upload folders manually. I need to do it from a JS script, from a single file. The motivation is that I'm generating metadata for an NFT, and one metadata field must point to a file with a specific extension.
Full detail: I need to add a GLB file on OpenSea. GLB is the binary form of glTF, a standard 3D file format. OpenSea can detect the animation_url field in an NFT's metadata and render the file it points to, but the URL needs to end with .glb. In other words, my NFT's metadata needs to look like this:
{
  name: <name>,
  description: <description>,
  image: <image>,
  animation_url: 'https://ipfs.io/ipfs/<hash>.glb' // OpenSea requires the '.glb' ending.
}
The way I do this so far is as follows:
import { create } from 'ipfs-http-client';

const client = create({
  host: 'ipfs.infura.io',
  port: 5001,
  protocol: 'https',
  headers: { authorization },
});

const result = await client.add(file); // { path: '<hash>', cid: CID }
const link = `https://ipfs.io/ipfs/${result.path}`; // I can't add an extension here.
In that code, I can put animation_url: link in the metadata object, but OpenSea won't recognize it.
I have tried adding the option mentioned above as well:
const result = await client.add(file, {wrapWithDirectory: true}); // {path: '', cid: CID}
But then result.path is an empty string.
How can I generate a link ending with .glb?

I found the solution. It indeed involves wrapping the file in a directory: the returned CID is the directory's, so the file name, extension included, can be appended after it. The result is https://ipfs.io/ipfs/<directory_hash>/<file_name_with_extension>.
So, correcting the code above gives the following:
import { create } from 'ipfs-http-client';

const client = create({
  host: 'ipfs.infura.io',
  port: 5001,
  protocol: 'https',
  headers: { authorization },
});

const content = await file.arrayBuffer(); // The content needs to be a buffer.
const result = await client.add(
  { content, path: file.name },
  { wrapWithDirectory: true }
);
// result.path is empty here; use result.cid.toString() (the wrapping
// directory's CID) and append the file name, which carries the extension.
const link = `https://ipfs.io/ipfs/${result.cid.toString()}/${file.name}`;
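With that link in hand, the metadata JSON itself can be uploaded the same way. A minimal sketch, assuming name, description, and imageLink are placeholder values you already have:
const metadata = {
  name,
  description,
  image: imageLink,
  animation_url: link, // ends with .glb, so OpenSea will render it
};
// ipfs-http-client accepts a plain string as content.
const metadataResult = await client.add(JSON.stringify(metadata));
const tokenUri = `https://ipfs.io/ipfs/${metadataResult.path}`;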

Related

Upload JSON data with Google Drive API in the browser

The Google Drive API lets us upload JSON files like this:
const fileMetadata = {
  name: "config.json",
};
const media = {
  mimeType: "application/json",
  body: fs.createReadStream("files/config.json"),
};
const file = await gapi.client.files.create({
  resource: fileMetadata,
  media: media,
  fields: "id",
});
console.log("File Id:", file.data.id);
This works fine in Node.js, but I want it to run in the browser. However, when I pass the media argument with the body set to a string, an empty Untitled file is created, without any extension.
The filename only works when media is not present.
My question is: how do I pass data for a JSON file from a string, so it can be read later?
I already tried creating the file and updating it later with its ID.
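One approach that works in the browser (a sketch, not from the original post; accessToken is assumed to be a valid OAuth access token) is to skip gapi for the upload itself and POST a multipart request to the Drive upload endpoint directly, with the metadata and the JSON content as separate parts:
const metadata = { name: 'config.json', mimeType: 'application/json' };
const form = new FormData();
// First part: the file metadata; second part: the actual JSON content.
form.append('metadata', new Blob([JSON.stringify(metadata)], { type: 'application/json' }));
form.append('file', new Blob([JSON.stringify({ some: 'config' })], { type: 'application/json' }));

const res = await fetch(
  'https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart&fields=id',
  {
    method: 'POST',
    headers: { Authorization: `Bearer ${accessToken}` },
    body: form,
  }
);
console.log('File Id:', (await res.json()).id);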

ipfs add single item from type FileStream

I was expecting ipfs.add with onlyHash: true to return the same hash as with onlyHash: false. What don't I understand here?
data.file comes from a file upload (const data = await request.file();), which is a FileStream.
const onlyHash = await ipfs.add(data.file, {
  pin: false,
  onlyHash: true,
});
console.log(onlyHash.path); // QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH
and
const notOnlyHash = await ipfs.add(data.file, {
  pin: true,
  onlyHash: false,
});
console.log(notOnlyHash.path); // QmdPcEi2MAiJmSvv1YHjRafKueDygQNtL33yX6WRgDYPXn
If I cat either CID with ipfs cat QmdPcEi2MAiJmSvv1YHjRafKueDygQNtL33yX6WRgDYPXn, ipfs just hangs and never shows me the content.
If I add the file with ipfs add text.txt, the CID does match (QmSiLSbT9X9TZXr7uvfBgZ2jWpekSGNjYq3cCAebLyN8yD), and I can now cat it and get its contents.
I tried using ipfs.addAll (https://github.com/ipfs/js-ipfs/blob/master/docs/core-api/FILES.md#ipfsaddallsource-options), but I get this error:
ERROR (71377): Unexpected input: single item passed - if you are using ipfs.addAll, please use ipfs.add instead
Do I need to buffer the file and then add it to IPFS or save it to disk and then add it?
I can write the file to disk just fine
await pump(data.file, fs.createWriteStream(data.filename));
I'm trying to avoid using server resources as much as possible.
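One thing worth trying (an assumption on my part, not a confirmed fix): a stream can only be consumed once, so the second ipfs.add may be hashing an already-drained stream, which would also explain why cat hangs on the resulting CID. Buffering the upload once and passing the same bytes to both calls rules this out:
// Collect the multipart FileStream into a single Buffer first.
const chunks = [];
for await (const chunk of data.file) {
  chunks.push(chunk);
}
const buffer = Buffer.concat(chunks);

// Both calls now see identical bytes, so the CIDs should match.
const onlyHash = await ipfs.add(buffer, { onlyHash: true, pin: false });
const added = await ipfs.add(buffer, { pin: true });
console.log(onlyHash.cid.toString() === added.cid.toString()); // true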

How to configure AWS CDK ApplicationLoadBalancedFargateService to log parsed JSON lines with FireLens and Fluent Bit

When I create an ApplicationLoadBalancedFargateService with a FireLens log driver, and the application writes JSON lines as the log message (such as when using net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder with Logback), the log messages show up in my logging repository (e.g. Sumo Logic) as an escaped string.
How can I get the log messages to save as parsed JSON?
After scanning the CDK source code, browsing several related references (linked below to help direct appropriate traffic here), and using cdk diff until the only change was enabling JSON parsing, I was able to make it work as shown in the following code. The key is the addFirelensLogRouter method and the FireLens config contained therein.
The TaskDefinition code does not automatically create a LogRouter container if the task definition already contains one, which is what allows us to override the default behavior.
protected _createFargateService() {
  const logDriver = LogDrivers.firelens({
    options: {
      Name: 'http',
      Host: this._props.containerLogging.endpoint,
      URI: this._props.containerLogging.uri,
      Port: '443',
      tls: 'on',
      'tls.verify': 'off',
      Format: 'json_lines'
    }
  });
  const fargateService = new ApplicationLoadBalancedFargateService(this, this._props.serviceName, {
    cluster: this._accountEnvironmentLookups.getComputeCluster(),
    cpu: this._props.cpu, // Default is 256
    desiredCount: this._props.desiredCount, // Default is 1
    taskImageOptions: {
      image: ContainerImage.fromEcrRepository(this._props.serviceRepository, this._props.imageVersion),
      environment: this._props.environment,
      containerPort: this._props.containerPort,
      logDriver
    },
    memoryLimitMiB: this._props.memoryLimitMiB, // Default is 512
    publicLoadBalancer: this._props.publicLoadBalancer, // Default is false
    domainName: this._props.domainName,
    domainZone: !!this._props.hostedZoneDomain ? HostedZone.fromLookup(this, 'ZoneFromLookup', {
      domainName: this._props.hostedZoneDomain
    }) : undefined,
    certificate: !!this._props.certificateArn ? Certificate.fromCertificateArn(this, 'CertificateFromArn', this._props.certificateArn) : undefined,
    serviceName: `${this._props.accountShortName}-${this._props.deploymentEnvironment}-${this._props.serviceName}`,
    // The new ARN and resource ID format must be enabled to work with ECS managed tags.
    //enableECSManagedTags: true,
    //propagateTags: PropagatedTagSource.SERVICE,
    // CloudMap properties cannot be set from a stack separate from the stack where the cluster is created.
    // See https://github.com/aws/aws-cdk/issues/7825
  });
  if (this._props.logMessagesAreJsonLines) {
    // The default log driver setup doesn't enable JSON line parsing.
    const firelensLogRouter = fargateService.service.taskDefinition.addFirelensLogRouter('log-router', {
      // Figured out how to get the default Fluent Bit ECR image from
      // https://github.com/aws/aws-cdk/blob/60c782fe173449ebf912f509de7db6df89985915/packages/%40aws-cdk/aws-ecs/lib/base/task-definition.ts#L509
      image: obtainDefaultFluentBitECRImage(fargateService.service.taskDefinition, fargateService.service.taskDefinition.defaultContainer?.logDriverConfig),
      essential: true,
      firelensConfig: {
        type: FirelensLogRouterType.FLUENTBIT,
        options: {
          enableECSLogMetadata: true,
          configFileType: FirelensConfigFileType.FILE,
          // This enables parsing of log messages that are JSON lines.
          configFileValue: '/fluent-bit/configs/parse-json.conf'
        }
      },
      memoryReservationMiB: 50,
      logging: new AwsLogDriver({ streamPrefix: 'firelens' })
    });
    firelensLogRouter.logDriverConfig;
  }
  fargateService.targetGroup.configureHealthCheck({
    path: this._props.healthUrlPath,
    port: this._props.containerPort.toString(),
    interval: Duration.seconds(120),
    unhealthyThresholdCount: 5
  });
  const scalableTaskCount = fargateService.service.autoScaleTaskCount({
    minCapacity: this._props.desiredCount,
    maxCapacity: this._props.maxCapacity
  });
  scalableTaskCount.scaleOnCpuUtilization(`ScaleOnCpuUtilization${this._props.cpuTargetUtilization}`, {
    targetUtilizationPercent: this._props.cpuTargetUtilization
  });
  scalableTaskCount.scaleOnMemoryUtilization(`ScaleOnMemoryUtilization${this._props.memoryTargetUtilization}`, {
    targetUtilizationPercent: this._props.memoryTargetUtilization
  });
  this.fargateService = fargateService;
}
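For reference, the parse-json.conf file shipped in the AWS Fluent Bit image is conceptually a parser filter that re-parses the log key as JSON. A sketch of what such a config looks like (an assumption for illustration; the actual file in the image may differ):
[FILTER]
    Name parser
    Match *
    Key_Name log
    Parser json
    Reserve_Data True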
Resources:
How I first discovered it might be possible: https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/master/examples/fluent-bit/parse-json
How I discovered it might be possible with CDK: https://github.com/aws/aws-cdk/pull/6322
Understanding it from an AWS service standpoint: https://docs.aws.amazon.com/AmazonECS/latest/userguide/using_firelens.html
Narrowing in on where it resides in the CDK source: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ecs.FirelensLogRouter.html
Where I eventually landed and figured it out: https://github.com/aws/aws-cdk/blob/60c782fe173449ebf912f509de7db6df89985915/packages/%40aws-cdk/aws-ecs/lib/base/task-definition.ts#L509

fs.readFileSync cannot find file when deploying with lambda

In my code, I read a JSON file from my Lambda function:
let featured_json_data = JSON.parse(fs.readFileSync('data/jsons/featured.json'))
This works locally because featured.json is in the directory I am reading from. However, when I deploy with Serverless, the zip it generates doesn't include those files, and I get:
ENOENT: no such file or directory, open...
I tried packaging by adding
package:
  include:
    - data/jsons/featured.json
but it just doesn't work. The only way I've gotten this to work is by manually adding the JSON file and then changing my compiled handler.js code to read from the JSON file in the root directory.
In other words, I have to add the JSONs, manually upload the package again, and change the directory in the compiled handler.js code so it excludes the data/jsons prefix.
I want to actually handle this in my serverless.yml.
You can load JSON files using require().
const featured_json_data = require('./featured.json')
Or better yet, convert your JSON into JS!
For working with non-JSON files, I found that process.cwd() works for me in most cases. For example:
const fs = require('fs');
const path = require('path');

export default async (event, context, callback) => {
  try {
    console.log('cwd path', process.cwd());
    const html = fs.readFileSync(
      path.resolve(process.cwd(), './html/index.html'),
      'utf-8'
    );
    const response = {
      statusCode: 200,
      headers: {
        'Content-Type': 'text/html'
      },
      body: html
    };
    callback(null, response);
  } catch (err) {
    console.log(err);
  }
};
I recommend looking at copy-webpack-plugin: https://github.com/webpack-contrib/copy-webpack-plugin
You can use it to package other files to include with your Lambda deployment.
In my project, I had a bunch of files in a /templates directory. To package up these templates, my webpack.config.js looks like this:
const CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
  plugins: [
    new CopyWebpackPlugin([
      './templates/*'
    ])
  ]
};
Check what the current directory is, and what the target directory actually contains, in the deployed environment. Add code to your program/script that does that checking.
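For example, a minimal sketch of that check inside a handler (the directories logged here are just illustrations):
const fs = require('fs');

exports.handler = async () => {
  // Log where Lambda is running from and what was actually deployed.
  console.log('cwd:', process.cwd());
  console.log('cwd contents:', fs.readdirSync(process.cwd()));
  console.log('__dirname contents:', fs.readdirSync(__dirname));
};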

How to create multiple pages from single json files in Gatsby

I am new to Node.js and React, but I love Gatsby.js. I have followed all the tutorials I can find, and it's a great tool.
However, one of the main reasons I want to use it is that I have a JSON file with 1000 different records in it, and I would like to generate a new page for each record.
I believe I have come to the conclusion that I need to learn more about the gatsby-node.js file. I am aware of the following resource, but are there any tutorials or other examples on this topic that may be a little easier to follow?
https://www.gatsbyjs.org/docs/creating-and-modifying-pages/#creating-pages-in-gatsby-nodejs
The example you are referring to should already give you a good idea. The basic concept is to import the JSON file, loop over it, and run createPage for each item in your JSON source. So, given an example source file like:
pages.json
[{
  "page": "name-1"
}, {
  "page": "name-2"
}]
You can then use the Node API to create pages for each:
gatsby-node.js
const path = require('path');
const data = require('./pages.json');

exports.createPages = ({ boundActionCreators }) => {
  const { createPage } = boundActionCreators;
  // Your component that should be rendered for every item in the JSON.
  const template = path.resolve(`src/template.js`);
  // Create pages for each JSON entry.
  data.forEach(({ page }) => {
    const path = page;
    createPage({
      path,
      component: template,
      // Send additional data to the page from JSON (or query inside the template).
      context: {
        path
      }
    });
  });
};
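The template component itself is not shown above; a minimal sketch of what src/template.js could look like (hypothetical; note that with the Gatsby v1 boundActionCreators API used here, the context arrives as the pathContext prop, while in Gatsby v2+ it is pageContext):
// src/template.js
import React from 'react';

// The `context` object passed to createPage arrives as a prop.
export default ({ pathContext }) => (
  <div>This page was generated for: {pathContext.path}</div>
);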