Upload JSON data with Google Drive API in the browser

The Google Drive API lets us upload JSON files like this:
const fileMetadata = {
  name: "config.json",
};
const media = {
  mimeType: "application/json",
  body: fs.createReadStream("files/config.json"),
};
const file = await gapi.client.files.create({
  resource: fileMetadata,
  media: media,
  fields: "id",
});
console.log("File Id:", file.data.id);
This works fine in Node.js, but I want it to run in the browser. However, when I pass the media argument with the body set to a string, an empty Untitled file is created without any extension.
The filename only works when media is not present.
My question is: how can I pass the JSON data from a string, so it can be read later?
I have already tried creating the file and updating it later with its ID.
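One approach that is known to work in the browser is to build a multipart/related request body by hand and send it with gapi.client.request, since the browser gapi client does not accept the Node-style media parameter. A minimal sketch (the boundary string and the sample payload are my own, not from the question):
const boundary = "-------314159265358979323846";
const delimiter = "\r\n--" + boundary + "\r\n";
const closeDelim = "\r\n--" + boundary + "--";

const metadata = {
  name: "config.json",
  mimeType: "application/json",
};
const data = JSON.stringify({ hello: "world" }); // the JSON string to upload

// The body interleaves the file metadata and the file content,
// separated by the multipart boundary.
const multipartRequestBody =
  delimiter +
  "Content-Type: application/json\r\n\r\n" +
  JSON.stringify(metadata) +
  delimiter +
  "Content-Type: application/json\r\n\r\n" +
  data +
  closeDelim;

const response = await gapi.client.request({
  path: "/upload/drive/v3/files",
  method: "POST",
  params: { uploadType: "multipart", fields: "id" },
  headers: {
    "Content-Type": 'multipart/related; boundary="' + boundary + '"',
  },
  body: multipartRequestBody,
});
console.log("File Id:", response.result.id);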

Related

IPFS file extension for GLB

I'm using the ipfs-http-client module to interact with IPFS. My problem is that I need the file extension on the link that I generate, and it seems that I can only get it with the wrapWithDirectory flag (-w on the command line). But this flag makes the result empty so far. The IPFS documentation only covers the command line, and I've only found a few tutorials about how to do it, but with tools other than JS, or by uploading folders manually. I need to do it from a JS script, from a single file. The motivation is that I want to generate metadata for an NFT, and a metadata field requires pointing to a file with a specific extension.
Full detail: I need to add a GLB file on OpenSea. GLB is like GLTF, a standard format for 3D files. OpenSea can detect the animation_url field of an NFT's metadata and render that file, but it needs to end with .glb. In other words, my NFT's metadata needs to look like this:
{
  name: <name>,
  description: <description>,
  image: <image>,
  animation_url: 'https://ipfs.io/ipfs/<hash>.glb' // OpenSea requires the '.glb' ending.
}
The way I do this so far is as follows:
import { create } from 'ipfs-http-client';

const client = create({
  host: 'ipfs.infura.io',
  port: 5001,
  protocol: 'https',
  headers: { authorization },
});
const result = await client.add(file); // {path: '<hash>', cid: CID}
const link = `https://ipfs.io/ipfs/${result.path}`; // I can't add an extension here.
In that code, I can put animation_url: link in the metadata object, but OpenSea won't recognize it.
I have tried adding the option mentioned above as well:
const result = await client.add(file, {wrapWithDirectory: true}); // {path: '', cid: CID}
But then result.path is an empty string.
How can I generate a link ending with .glb?
I found the solution. It indeed involves creating a directory, whose hash is the returned CID, so that we can append the file name with its extension at the end. The result is https://ipfs.io/ipfs/<directory_hash>/<file_name_with_extension>.
Correcting the code above gives the following:
import { create } from 'ipfs-http-client';

const client = create({
  host: 'ipfs.infura.io',
  port: 5001,
  protocol: 'https',
  headers: { authorization },
});
const content = await file.arrayBuffer(); // The file content needs to be a buffer.
const result = await client.add(
  { content, path: file.name },
  { wrapWithDirectory: true }
);
// result.path is empty; use result.cid.toString() for the directory hash,
// and then manually append the file name with its extension.
const link = `https://ipfs.io/ipfs/${result.cid.toString()}/${file.name}`;
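The metadata can then reference that link directly; a quick usage sketch based on the metadata shape from the question:
const metadata = {
  name: '<name>',
  description: '<description>',
  image: '<image>',
  animation_url: link, // ends with '.glb' because file.name keeps the extension
};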

Firebase functions image download URL issues

I am making a thumbnail image using Firebase Functions. However, I tried two approaches to get the download URL of the resulting image, and each has a different problem.
The first approach uses the code below:
const signedUrls = await bucket.file(thumbFilePath).getSignedUrl({
  action: "read",
  expires: "03-09-2491"
});
With this approach, after some days the URL is no longer valid, even though the expiration date is very far in the future. I couldn't find a proper solution, and in the meantime I found another approach, so I tried it.
The second approach uses the code below:
// Upload the thumbnail, then make it public so it can be accessed.
await bucket.upload(tempFilePath_des, {
  destination: thumbFilePath,
  metadata: metadata
});
await storage
  .bucket(fileBucket)
  .file(thumbFilePath)
  .makePublic();
thumbURL =
  "https://storage.cloud.google.com/" + fileBucket + "/" + thumbFilePath;
This approach works well when I use Google auth, but when I use email auth the following error is thrown:
Cross-Origin Read Blocking (CORB) blocked cross-origin response
with MIME type text/html. See for more details.
Any help would be appreciated.
I found another approach that seems to work, described in this post.
When uploading the image, we need to give it a UUID, or we can retrieve the UUID from the file's metadata using the following:
const [metadata] = await storage.bucket(fileBucket).file(filePath).getMetadata();
const uuid = metadata.metadata.firebaseStorageDownloadTokens;
Then we construct the URL. Here is my working code:
const UUID = require('uuid/v4');

let uuid = UUID();
await bucket.upload(tempFilePath_des, {
  destination: thumbFilePath,
  metadata: {
    contentType: metadata.contentType, // content type of the original image
    metadata: {
      firebaseStorageDownloadTokens: uuid
    }
  }
});
thumbURL =
  "https://firebasestorage.googleapis.com/v0/b/" + fileBucket + "/o/" +
  encodeURIComponent(thumbFilePath) + "?alt=media&token=" + uuid;

AWS S3 - Allow public to view HTML files

I am looking to allow public users to view HTML files located in an AWS S3 bucket in their browser. These HTML files are created and uploaded to my S3 bucket via Node.js, and a URL linking to the file is generated.
I am using this method to upload the HTML files:
s3.upload({
  Bucket: bucket,
  Key: "HTMLFiles/file.html",
  Body: fileStream,
  ACL: 'public-read'
}, function (err, data) {
  if (err) {
    console.log("Error: ", err);
  }
  if (data) {
    console.log("Success: ", data.Location);
  }
}).on('httpUploadProgress', event => {
  console.log(`Uploaded ${event.loaded} out of ${event.total}`);
});
When the script is run, the generated URL looks something like this:
https://bucket-name.s3.region.amazonaws.com/HTMLFiles/file.html
(Obviously this is only an example URL and not the actual URL)
When a user goes to this URL, instead of viewing the HTML file, the browser instead downloads the file.
How can I specify that this file is meant to be loaded on the browser and viewed, not downloaded?
This is because the content type is missing, so the browser doesn't know that your file should be interpreted as HTML.
Add ContentType: 'text/html' to the parameters passed to s3.upload.
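For illustration, a minimal sketch of the corrected call (same placeholders as in the question):
s3.upload({
  Bucket: bucket,
  Key: "HTMLFiles/file.html",
  Body: fileStream,
  ACL: 'public-read',
  ContentType: 'text/html' // tells browsers to render the file instead of downloading it
}, function (err, data) {
  if (err) console.log("Error: ", err);
  if (data) console.log("Success: ", data.Location);
});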
See also the explanations and links given in Upload Image into S3 bucket using Api Gateway, Lambda function.

Stackdriver export to .txt or PDF on drive/mail

I've set up a script which reads data from a spreadsheet and sends emails according to this data.
Now I've also set it up to do some simple logging via Stackdriver.
What I'd like to do is to export these logs (after/at the end of every execution of the mail script) to a .txt or .pdf file, which then gets saved to a specific Google Drive folder or sent by mail.
Unfortunately, I can't seem to find out how to do this, or whether it's even possible.
There is no way to edit a Google Docs file, if this is what you were thinking of doing. You're going to have to create your .txt or .pdf file locally, then upload the file to Google Drive or send it as an email. Technically, if you upload the file as a .txt, I think Google Drive will let you export it as a PDF, but I haven't tried with the new version of Google Drive.
// Node.js example: export a Google Doc as a PDF and save it locally.
// Assumes `drive` is an authorized googleapis Drive client.
var fs = require('fs');

var fileId = '1ZdR3L3qP4Bkq8noWLJHSr_iBau0DNT4Kli4SxNc2YEo';
var dest = fs.createWriteStream('/tmp/resume.pdf');
drive.files.export({
  fileId: fileId,
  mimeType: 'application/pdf'
})
  .on('end', function () {
    console.log('Done');
  })
  .on('error', function (err) {
    console.log('Error during download', err);
  })
  .pipe(dest);
Downloading Google Documents
I also don't think you will be able to email a file directly from Google Drive; you will have to download the file locally and then send your email.
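As an illustration of that suggestion, here is a minimal Apps Script sketch (my own example, not from the answer; the folder ID and email address are hypothetical) that writes collected log lines to a .txt file in Drive and emails it:
function exportLogs(logLines) {
  var content = logLines.join('\n');
  // Save the log as a .txt file in a specific Drive folder.
  var folder = DriveApp.getFolderById('YOUR_FOLDER_ID'); // hypothetical folder ID
  var file = folder.createFile('mail-script-log.txt', content, MimeType.PLAIN_TEXT);
  // Send the same log as an email attachment.
  MailApp.sendEmail('you@example.com', 'Mail script logs', 'Log attached.', {
    attachments: [file.getAs(MimeType.PLAIN_TEXT)]
  });
}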
Stackdriver has an Error Reporting API (see the Stackdriver documentation). The API has REST capability, which means that you can call it from Apps Script using UrlFetchApp.fetch(url), where url is the URL needed to get error reporting information. The base URL for the Stackdriver API is https://clouderrorreporting.googleapis.com. The API must be enabled.
There are multiple methods that can be used with the API.
The method that you probably need is the list method, which requires the URL:
https://clouderrorreporting.googleapis.com/v1beta1/{projectName=projects/*}/events
where the projectName parameter must be a Google Cloud Platform project ID.
See documentation on list at: projects.events.list
The return value for that HTTPS Request, if successful, is a "response body" with the following structure and data:
{
  "errorEvents": [
    {
      object (ErrorEvent)
    }
  ],
  "nextPageToken": string,
  "timeRangeBegin": string
}
The ErrorEvent is a JSON object with the following structure and data:
{
  "eventTime": string,
  "serviceContext": {
    object (ServiceContext)
  },
  "message": string,
  "context": {
    object (ErrorContext)
  }
}
So, if you want to send an email with error data from Stackdriver, it won't be sent directly from Stackdriver; you need to make a request to Stackdriver from Apps Script, get the error information, and then send the email from Apps Script.
Of course, you could have your own error handling system that logs error information to some external target (e.g. your spreadsheet, or a database) using UrlFetchApp.fetch(url).
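For example, a hypothetical sketch of that idea (the endpoint URL is made up) that posts caught errors to your own logging service:
function logError(err) {
  UrlFetchApp.fetch('https://example.com/log', { // hypothetical logging endpoint
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({
      message: err.message,
      stack: err.stack,
      time: new Date().toISOString()
    })
  });
}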
To make the request to the Stackdriver API you would need code something like this:
var projectID = "Enter project ID";
// The projectName parameter has the form "projects/<project id>".
var url = 'https://clouderrorreporting.googleapis.com/v1beta1/projects/'
  + projectID + '/events';
var tkn = ScriptApp.getOAuthToken();
var options = {};
options.headers = {Authorization: 'Bearer ' + tkn};
options.muteHttpExceptions = true;
var rtrnObj = UrlFetchApp.fetch(url, options);
Logger.log(rtrnObj.getContentText());
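Building on that, a hypothetical continuation (equally untested) that parses the response and emails a summary of the reported errors:
var result = JSON.parse(rtrnObj.getContentText());
var events = result.errorEvents || [];
// One line per error event, per the ErrorEvent structure shown above.
var summary = events.map(function (e) {
  return e.eventTime + ' - ' + e.message;
}).join('\n');
MailApp.sendEmail('you@example.com', 'Stackdriver error report', summary || 'No errors.');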
I haven't used this API and I haven't tested this code. If anyone uses it and has information or finds an error, please leave a comment.

Import Google app script project from JSON file

In Google Drive, it's possible to download an Apps Script project as a .json file.
When such a file is imported back into Google Drive, it's not properly associated with the Google Script editor app.
Is there any way to do it properly?
Importing and exporting Apps Script files requires the use of the import/export API.
To modify an existing script you will need an OAuth2 token with the scope https://www.googleapis.com/auth/drive.scripts.
For updating a file you will "PUT" the updated JSON to:
https://www.googleapis.com/upload/drive/v2/files/{FileId}
The Apps Script file looks like:
{
  files:
  [
    {
      name: {fileName},
      type: {/* server_js or html */},
      source: {/* source code for this file */},
      id: {/* Autogenerated. Omit this key for a new file, or leave the value unmodified for an updated file */},
    },
    {...}
  ]
}
To add a file:
Add an object to the files array with the keys name, type, source
To modify a file:
Modify the values of name, type, or source of the file object, but do not modify the id.
When you PUT the file back, make sure you include the entire files array with your modifications, not just the new file object.
Making the modification in GAS itself would look like:
var scriptFiles = JSON.parse(downloadedJSONFile);
scriptFiles.files.push({"name": fileName, "type": fileType, "source": source});
var url = "https://www.googleapis.com/upload/drive/v2/files/" + scriptId;
var parameters = {
  method: 'PUT',
  headers: {'Authorization': 'Bearer ' + tokenWithProperScope},
  payload: JSON.stringify(scriptFiles),
  contentType: 'application/vnd.google-apps.script+json',
  muteHttpExceptions: true
};
var response = UrlFetchApp.fetch(url, parameters);
You will get a response code of 200 for a successful change. The response text will include the entire new JSON file, with an id assigned to the file you added.
Find more at:
https://developers.google.com/apps-script/import-export
Set the mimetype to application/vnd.google-apps.script.
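That last suggestion is terse; a hypothetical sketch of how it could be applied from Apps Script (the uploadType=media query parameter is my assumption, not confirmed by the answer) to create a new script project from the exported JSON:
// Hypothetical: POST the exported project JSON to the Drive v2 upload endpoint
// with the Apps Script content type so Drive associates it with the script editor.
var url = 'https://www.googleapis.com/upload/drive/v2/files?uploadType=media'; // assumed query parameter
var parameters = {
  method: 'POST',
  headers: {'Authorization': 'Bearer ' + tokenWithProperScope},
  contentType: 'application/vnd.google-apps.script+json',
  payload: downloadedJSONFile,
  muteHttpExceptions: true
};
var response = UrlFetchApp.fetch(url, parameters);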