ipfs add single item from type FileStream - ipfs

I was expecting ipfs.add with onlyHash: true to return the same hash as with onlyHash: false. What don't I understand here?
data.file is coming from a file upload (const data = await request.file();), which is a FileStream:
const onlyHash = await ipfs.add(data.file, {
  pin: false,
  onlyHash: true,
});
console.log(onlyHash.path) // QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH
and
const notOnlyHash = await ipfs.add(data.file, {
  pin: true,
  onlyHash: false,
});
console.log(notOnlyHash.path) // QmdPcEi2MAiJmSvv1YHjRafKueDygQNtL33yX6WRgDYPXn
If I cat either CID with ipfs cat QmdPcEi2MAiJmSvv1YHjRafKueDygQNtL33yX6WRgDYPXn, ipfs just hangs and never shows me the content.
If I add the file with ipfs add text.txt, the CID does match QmSiLSbT9X9TZXr7uvfBgZ2jWpekSGNjYq3cCAebLyN8yD, but I can now cat it and get its contents.
I tried using ipfs.addAll
https://github.com/ipfs/js-ipfs/blob/master/docs/core-api/FILES.md#ipfsaddallsource-options
but I get this error
ERROR (71377): Unexpected input: single item passed - if you are using ipfs.addAll, please use ipfs.add instead
Do I need to buffer the file and then add it to IPFS, or save it to disk and then add it?
I can write the file to disk just fine:
await pump(data.file, fs.createWriteStream(data.filename));
I'm trying to avoid using server resources as much as possible.
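For what it's worth, here is a minimal sketch of the buffering approach mentioned above (my own illustration, not a confirmed fix): it assumes data.file is the FileStream from request.file() and reads the upload into a Buffer once, so the exact same bytes can be passed to both ipfs.add calls.
// Hypothetical helper: collect the upload stream into a single Buffer.
async function streamToBuffer(stream) {
  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  return Buffer.concat(chunks);
}

const buffer = await streamToBuffer(data.file);

// Both calls now see identical bytes, so the returned CIDs should be comparable.
const onlyHash = await ipfs.add(buffer, { pin: false, onlyHash: true });
const notOnlyHash = await ipfs.add(buffer, { pin: true, onlyHash: false });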

Related

Creating mysqldump in node with dockerode is resulting in a file with wrong mime-type

I'm creating an application that should do a mysqldump from MySQL running in a Docker container. The application I'm creating is built in Node.
This is the script I'm using:
const containerId = '71501a8ab0f8';
const database = 'my-db';
const exportPath = `${database}.sql`;
const docker = new Dockerode({socketPath: '/var/run/docker.sock'});
const container = docker.getContainer(containerId);
const exec = await container.exec({
  Cmd: [
    'mysqldump',
    '--single-transaction',
    database,
  ],
  AttachStdin: true,
  AttachStdout: true
});
const stream = await exec.start({
  hijack: true,
  stdin: false
});
const writeStream = fs.createWriteStream(exportPath);
stream.pipe(writeStream);
stream.on('end', () => {
  console.log('Database dump successfully saved!');
});
This creates the SQL file with a dump of the database, but it's not really readable. When I do file -I my-db.sql I get the following result:
my-db.sql: application/octet-stream; charset=binary
When I open it in a text editor (Sublime) I see a random sequence of characters.
But when I open the file with, for example, nano, I just see the (plain text) content of the mysqldump.
What I noticed is that every chunk that was added to the generated file starts with some random characters. When I remove them manually and do file -I my-db.sql again, the result is:
my-db.sql: text/plain; charset=us-ascii
Now I'm also able to open the file in my text editor and see the actual dump.
I tried to just write the chunks to the file when data arrives, like:
stream.on('data', (chunk: Buffer) => {
  writeStream.write(chunk.toString());
});
But this results in the same issue.
Since the script has to be able to make dumps of large databases, I really want to use the stream and pipe it to a file.
How can I get rid of the "characters" that are added before each inserted chunk so the file mime-type will be text/plain?
Just found the answer myself. The stream needed to be demultiplexed first:
const stream = await exec.start({
  hijack: true,
  stdin: false
});
const writeStream = fs.createWriteStream(path);
docker.modem.demuxStream(stream, writeStream, process.stderr);
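For reference, a fuller sketch of the fixed flow (the wrapper function, its name, and the AttachStderr option are my own additions; the dockerode calls themselves mirror the code above):
const Dockerode = require('dockerode');
const fs = require('fs');

// Run mysqldump inside the container, demultiplex Docker's combined
// stdout/stderr stream, and resolve once the stream has ended.
async function dumpDatabase(containerId, database, exportPath) {
  const docker = new Dockerode({ socketPath: '/var/run/docker.sock' });
  const container = docker.getContainer(containerId);

  const exec = await container.exec({
    Cmd: ['mysqldump', '--single-transaction', database],
    AttachStdout: true,
    AttachStderr: true
  });

  const stream = await exec.start({ hijack: true, stdin: false });
  const writeStream = fs.createWriteStream(exportPath);

  // demuxStream strips the frame header Docker prepends to each chunk
  // and routes stdout to the file and stderr to the terminal.
  docker.modem.demuxStream(stream, writeStream, process.stderr);

  await new Promise((resolve, reject) => {
    stream.on('end', resolve);
    stream.on('error', reject);
  });
  writeStream.end();
  console.log('Database dump successfully saved!');
}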

IPFS file extension for GLB

I'm using the ipfs-http-client module to interact with IPFS. My problem is that I need the file extension on the link that I generate, and it seems that I can only get it with the wrapWithDirectory flag (-w on the command line). But this flag makes the result empty so far. The IPFS documentation only covers the command line, and I've only found a few tutorials about how to do it, but with tools other than JS, or by uploading folders manually. I need to do it from a JS script, from a single file. The motivation is that I want to generate metadata for an NFT, and a metadata field requires pointing to a file with a specific extension.
Full detail: I need to add a GLB file on OpenSea. GLB is like glTF, a standard format for 3D files. OpenSea can detect the animation_url field in an NFT's metadata and render that file, but it needs to end with .glb. Translation: my NFT needs its metadata to look like this:
{
  name: <name>,
  description: <description>,
  image: <image>,
  animation_url: 'https://ipfs.io/ipfs/<hash>.glb' // OpenSea requires the '.glb' ending.
}
The way I do this so far is as follows:
import { create } from 'ipfs-http-client';
const client = create({
  host: 'ipfs.infura.io',
  port: 5001,
  protocol: 'https',
  headers: { authorization },
});
const result = await client.add(file); // {path: '<hash>', cid: CID}
const link = `https://ipfs.io/ipfs/${result.path}`; // I can't add an extension here.
In that code, I can put animation_url: link in the metadata object, but OpenSea won't recognize it.
I have tried adding the option mentioned above as well:
const result = await client.add(file, {wrapWithDirectory: true}); // {path: '', cid: CID}
But then result.path is an empty string.
How can I generate a link ending with .glb?
Found out the solution. It indeed involves creating a directory, which is the returned CID, so that we can append the file name with its extension at the end. The result is https://ipfs.io/ipfs/<directory_hash>/<file_name_with_extension>.
So, correcting the code above gives the following:
import { create } from 'ipfs-http-client';
const client = create({
  host: 'ipfs.infura.io',
  port: 5001,
  protocol: 'https',
  headers: { authorization },
});
const content = await file.arrayBuffer(); // The file needs to be a buffer.
const result = await client.add(
  { content, path: file.name },
  { wrapWithDirectory: true }
);
// result.path is empty, it needs result.cid.toString(),
// and then one can manually append the file name with its extension.
const link = `https://ipfs.io/ipfs/${result.cid.toString()}/${file.name}`;
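For completeness, a small usage sketch showing how the generated link slots into the metadata from the question (the name, description and image values here are placeholders of my own):
const metadata = {
  name: 'My 3D asset',                        // placeholder
  description: 'A GLB model stored on IPFS',  // placeholder
  image: imageLink,                           // assumed to be another IPFS link
  animation_url: link                         // ends with .glb, as OpenSea requires
};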

Nodejs - trying to edit images' metadata with Exiftool

I am currently working on a NodeJS (Express) project to edit images' metadata with Exiftool.
To edit an image's metadata with Exiftool, I have to create a JSON file containing all the metadata to modify, then execute the command:
exiftool -j=metadata.json pathToTheImage/image.jpg
The JSON file must look like this:
[{"SourceFile":"pathToTheImage/image.jpg","XMP-dc:Title":"Image's title"}]
Here's my code to do that:
const { exec } = require('child_process');
let fs = require('fs');
let uploadPath = "uploads";
let uploadName = "image.jpg";
...
app.post('/metadata/editor', (req, res) => {
  let jsonToImport = [...];
  fs.writeFileSync("metadata.json", JSON.stringify(jsonToImport));
  exec('exiftool -j=metadata.json ' + uploadPath + '/' + uploadName, (error, stdout, stderr) => {
    if (error) {
      console.error(error);
      return;
    }
    res.redirect('/metadata/checker/' + uploadName);
  });
});
The problem is at the level of "writeFileSync/exec".
Independently, these two lines work well. That is to say, if I have just the first line, the JSON file is created correctly; and if I have just the second line, the image's metadata is updated correctly.
But when I execute these two lines together, the JSON file is created correctly but the exec line does "nothing" (or something that I can't determine).
This code uses synchronous functions; I've tested it with asynchronous functions and the behavior is the same.
Currently, to do what I need, I must execute the code above to create the JSON file, then comment out the writeFileSync line and re-execute the code to update the image's metadata correctly.
It's really strange: I've tried to read the JSON file content before the exec line and everything is OK. I've used asynchronous functions, with and without promises... nothing I do makes it work.
Thank you for your help.
I'll answer my own question:
The problem was that I use nodemon, and by default nodemon watches JSON files. But in my code I created a JSON file to use right after. So the JSON file was created correctly, nodemon saw it and restarted the node server, and the rest of the code never ran.
To fix this, I added an option to my package.json to ignore the created files:
"nodemonConfig": {
  "ignore": [
    "path/to/files/to/ignore/*"
  ]
}

JSON report not generating for failed scenarios using protractor

If my scenarios fail, the JSON report is not generated. But for passed scenarios I can see the JSON report.
Please find my config file below.
In the command prompt console I can see the failure message:
W/launcher - Ignoring uncaught error AssertionError: expected false to equal true
E/launcher - BUG: launcher exited with 1 tasks remaining
You can save the report by using a hook, so don't generate the file from the protractor.conf.js file, but use a cucumber hook for it.
The hook can look like this:
reportHook.js:
const cucumber = require('cucumber');
const jsonFormatter = cucumber.Listener.JsonFormatter();
const fs = require('fs-extra');
const jsonFile = require('jsonfile');
const path = require('path');
const projectRoot = process.cwd();

module.exports = function reportHook() {
  this.registerListener(jsonFormatter);

  /**
   * Generate and save the report json files
   */
  jsonFormatter.log = function(report) {
    const jsonReport = JSON.parse(report);
    // Generate a feature name without spaces, we're gonna use it later
    const featureName = jsonReport[0].name.replace(/\s+/g, '_').replace(/\W/g, '').toLowerCase();
    // Here I defined a base path to which the jsons are written
    const snapshotPath = path.join(projectRoot, '.tmp/json-output');
    // Think about a name for the json file. I now added a feature name (each feature
    // will output a file) and a timestamp (if you use multiple browsers, each browser
    // executes each feature file and generates a report)
    const filePath = path.join(snapshotPath, `report.${featureName}.${new Date}.json`);
    // Create the path if it doesn't exist
    fs.ensureDirSync(snapshotPath);
    // Save the json file
    jsonFile.writeFileSync(filePath, jsonReport, {
      spaces: 2
    });
  };
}
You can save this code to the file reportHook.js and then add it to cucumberOpts.require, so it will look like this in your code:
cucumberOpts: {
  require: [
    '../step_definitions/*.json',
    '../setup/hooks.js',
    '../setup/reportHook.js'
  ],
  ....
}
Even with failed steps/scenarios it should generate the report file.
Hope it helps

Parse and convert xls file (received from GET request to URL) to JSON without writing to disk

The title says everything.
I want to get an xls file from a third-party server (said service keeps fueling records, and they do not expose any kind of API, only the Excel file).
Then I want to parse that file with a library like node-excel-to-json, and convert it into a JSON format I can use to import the data into Mongo.
I want to manipulate the file in-memory, without writing it to disk.
So, say I am getting the file with this code,
parseFuelingReport() {
  let http = require('http');
  let fs = require('fs');
  // let excel2Json = require('node-excel-to-json');
  let file = fs.createWriteStream("document.xls");
  let request = http.get("http://www.everydayexcel.com/files/Excel_Test_Basic_1_cumulative_sum.xls", function (response) {
  });
},
I want to load the response in memory and parse it with something like:
excel2Json(/* this is supposed to be the path to the xls file */, {
  'convert_all_sheet': false,
  'return_type': 'File',
  'sheetName': 'survey'
}, function (err, output) {
  console.log('err, res', err, output);
});
I assume you are using https://github.com/kashifeqbal/node-excel-to-json, which is available as a node package.
If you take a look at this line, you can see two things:
It calls XLSX.readFile(filePath);, which will load a file from disk. That's hard to call with an in-memory object.
Internally it uses the XLSX package, most likely this one: https://www.npmjs.com/package/xlsx
The XLSX API doesn't seem as convenient as excel2Json, but it provides a read() function which takes a JavaScript object:
/* Call XLSX */
var workbook = XLSX.read(bstr, {type:"binary"});
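Putting that together, a rough sketch of downloading the file into memory and converting the first sheet to JSON without touching disk (my own illustration; it assumes the xlsx package and reuses the URL from the question):
const http = require('http');
const XLSX = require('xlsx');

function parseFuelingReport(callback) {
  const url = 'http://www.everydayexcel.com/files/Excel_Test_Basic_1_cumulative_sum.xls';
  http.get(url, (response) => {
    const chunks = [];
    response.on('data', (chunk) => chunks.push(chunk));
    response.on('end', () => {
      // Parse the workbook straight from the in-memory buffer.
      const workbook = XLSX.read(Buffer.concat(chunks), { type: 'buffer' });
      const sheetName = workbook.SheetNames[0];
      const rows = XLSX.utils.sheet_to_json(workbook.Sheets[sheetName]);
      callback(null, rows);
    });
  }).on('error', (err) => callback(err));
}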
Hope this helps