How to deal with Access-Control-Allow-Origin for myDrive files - google-apps-script

The question arises from this note on "Questions on extending GAS spreadsheet usefulness":
Someone here suggested using divs. The HTML requirement is very skeletal. The 3D display is basically canvas, but it requires seven three.js files, ten js files of my own making to exchange parameters and other variables with the global variable, and .dae collada files for each of the 3D models you can see. If they could be linked in like jQuery, that might be the solution, but I wonder about conflicts.
It is principally about the "if they can be linked like jQuery" part.
The files to be linked are on myDrive. The thinking is that, since I could copy the files into the GAS editor anyway, it seems just as secure and more flexible to bring them into the html directly.
code.gs
function sendUrls(){
  // Collect [extension, url] pairs for every file in ___Blazer/assembler
  var folder = DriveApp.getFoldersByName("___Blazer").next();
  var sub = folder.getFoldersByName("assembler").next();
  var contents = sub.getFiles();
  var file, type, url;
  var data = [];
  while (contents.hasNext()) {
    file = contents.next();
    type = file.getName().split(".")[1];
    url = file.getUrl();
    data.push([type, url]);
  }
  return data;
}
html
google.script.run
  .withSuccessHandler(function (files) {
    $.each(files, function (i, v) {
      if (v[0] === "js") {
        $.get(v[1]);
      }
    });
  })
  .sendUrls();
The first url opens the proper script file when visited directly, but the origin it is served from is not recognisable to me.

I am not sure that this is a proper answer as it relies on cors-anywhere, viz:
function importFile(){
  // Fetch the bundled script file from my own site via the cors-anywhere proxy
  var myUrl = 'http://glasier.hk/cors/tba.html';
  var proxy = 'https://cors-anywhere.herokuapp.com/';
  var finalURL = proxy + myUrl;
  $.get(finalURL, function(data) {
    $("body").append(data);
    importNset();
  });
}
function importNset(){
  google.script.run
    .withSuccessHandler(function (code) {
      var path = "https://api.myjson.com/bins/" + code;
      $.get(path)
        .done((data, textStatus, jqXHR) => {
          nset = data;
          cfig = nset.cfig;
          start();
        });
    })
    .sendCode();
}
var nset, cfig;
$(document).ready(function(){
  importFile();
});
but it works, albeit only on my machine, using my own website as the resource.
I used the Gas function in gas Shop to combine the eight previously tested js files into the single, script-only tba.html file. I swapped the workshop-specific script files for those needed for google.script.run, but otherwise that was it. If I could find out how to cors-enable my own site, I think I could demonstrate how scripts might be imported to generate different views from the same TBA and spreadsheet interfaces.
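An alternative that sidesteps CORS entirely is to return the file contents, rather than their URLs, from the server and inject them into the page. A minimal, untested sketch under the assumption that the linked files are plain .js sources (sendSources() is a hypothetical variant of sendUrls(), not part of the code above):
code.gs
function sendSources(){
  // Return [extension, source text] pairs instead of URLs
  var folder = DriveApp.getFoldersByName("___Blazer").next();
  var sub = folder.getFoldersByName("assembler").next();
  var contents = sub.getFiles();
  var data = [];
  while (contents.hasNext()) {
    var file = contents.next();
    var type = file.getName().split(".")[1];
    if (type === "js") {
      data.push([type, file.getBlob().getDataAsString()]);
    }
  }
  return data;
}
html
google.script.run.withSuccessHandler(function (files) {
  $.each(files, function (i, v) {
    // Inline each source as a <script>; no cross-origin request is made
    $("<script>").text(v[1]).appendTo("body");
  });
}).sendSources();
Because the source text travels through google.script.run rather than an XHR to Drive, no Access-Control-Allow-Origin header is involved.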

Related

Is this possible to send and get back the value from google app script to html page without rendering html output?

After much discussion and R&D, I concluded that image cropping is not possible with Google Apps Script alone, so I decided to try it with the Canvas API.
I am trying to pass a value from the server-side script (.gs) to the HTML file and get a value back in the server-side script without opening the HTML output in a sidebar or a modal/modeless dialog box. In other words, silently call the HTML, complete the processing, and return the value to the server-side script method.
I tried, but getFromFileArg() is not running when I run callToHtml().
Is this possible with the script below? What would you suggest?
Server side (.gs)
function callToHtml() {
  var ui = SlidesApp.getUi();
  var htmlTemp = HtmlService.createTemplateFromFile('crop_img');
  htmlTemp["data"] = pageElements.asImage().getBlob();
  var htmlOutput = htmlTemp.evaluate();
}
function getFromFileArg(data) {
  Logger.log(data);
}
crop_img.html template:
<script>
  var data = <?= data ?>;
  // call the server-side method
  google.script.run
    .withSuccessHandler(
      function(result, element) {
        element.disabled = false;
      })
    .withFailureHandler(
      function(msg, element) {
        console.log(msg);
        element.disabled = false;
      })
    .withUserObject(this)
    .getFromFileArg(data);
</script>
You cannot "silently" call the HTML this way, no.
The HTML needs to go to the user, and the user is not inside your web app but inside Google's web app (Slides), so you have to play by their rules.
You need to use one of the available UI methods such as showSidebar. You could have the displayed HTML be a spinner or message like "processing..." while the JavaScript runs.
function callToHtml() {
  var ui = SlidesApp.getUi();
  var htmlTemp = HtmlService.createTemplateFromFile('crop_img');
  htmlTemp["data"] = pageElements.asImage().getBlob();
  ui.showSidebar(htmlTemp.evaluate());
}
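On the HTML side, the sidebar can then do the work and hand the result back before closing itself. A minimal sketch, assuming a hypothetical cropImage() helper for the Canvas work (neither it nor the shape of data comes from the original code):
<script>
  var data = <?= data ?>;            // whatever the template actually serializes
  var cropped = cropImage(data);     // hypothetical Canvas-based cropping helper
  google.script.run
    .withSuccessHandler(function () {
      google.script.host.close();    // dismiss the sidebar once the server has the value
    })
    .getFromFileArg(cropped);
</script>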

Microsoft cognitive services face API call

I've built an application on the Azure (Microsoft) Emotion API, but that was just merged into their Cognitive Services Face API. I'm using a webcam to send an image (as binary data) to their server for analysis, and I used to get XML in return. (I've already commented out some old code in this example while trying to get it fixed.)
function saveSnap(data){
  // Convert Webcam IMG to BASE64BINARY to send to EmotionAPI
  var file = data.substring(23).replace(' ', '+');
  var img = Base64Binary.decodeArrayBuffer(file);
  var ajax = new XMLHttpRequest();
  // On return of data call uploadcomplete function.
  ajax.addEventListener("load", function(event) {
    uploadcomplete(event);
  }, false);
  // AJAX POST request
  ajax.open("POST", "https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=emotion", "image/jpg");
  ajax.setRequestHeader("Content-Type", "application/json");
  //ajax.setRequestHeader("Accept","text/html,application/xhtml+xml,application/xml");
  ajax.setRequestHeader("Ocp-Apim-Subscription-Key", "subscription_key");
  ajax.send(img);
}
Now, I understand from their website that the call returns JSON, but I just can't get it to work. I can see there is data coming back, but how do I even get the JSON out of it? I'm probably missing something essential and hope someone can help me out. :) The program was working when I could still use the Emotion API.
function uploadcomplete(event){
  console.log("complete");
  console.log(event);
  //var xmlDoc = event.target.responseXML;
  //var list = xmlDoc.getElementsByTagName("scores");
  console.log(JSON.stringify(event));
}
A few issues to address:
You'll want to wait for the POST response, not just for the upload completion.
You'll want to set the content type to application/octet-stream since you are uploading a binary.
You'll want to set the subscription key to the real value (you probably did before pasting your code here).
function saveSnap(data) {
  // Convert Webcam IMG to BASE64BINARY to send to the Face API
  var file = data.substring(23).replace(' ', '+');
  var img = Base64Binary.decodeArrayBuffer(file);
  var ajax = new XMLHttpRequest();
  ajax.onreadystatechange = function() {
    if (ajax.readyState == XMLHttpRequest.DONE) {
      console.log(JSON.stringify(ajax.response));
    }
  };
  ajax.open('post', 'https://westcentralus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=emotion');
  ajax.setRequestHeader('Content-Type', 'application/octet-stream');
  ajax.setRequestHeader('Ocp-Apim-Subscription-Key', key); // key holds your real subscription key
  ajax.send(img);
}
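Once the request is DONE, ajax.response holds a JSON string describing an array of detected faces, and JSON.parse gives access to the emotion scores. A minimal sketch of the success branch (the property names follow the Face API detect response as documented, but treat the exact shape as an assumption):
if (ajax.readyState == XMLHttpRequest.DONE && ajax.status == 200) {
  var faces = JSON.parse(ajax.response);            // array of detected faces
  if (faces.length > 0) {
    var emotion = faces[0].faceAttributes.emotion;  // e.g. { happiness: 0.98, sadness: 0.01, ... }
    console.log('happiness:', emotion.happiness);
  }
}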

How to Add a file to Google Drive via Docs Add On using App Script? [duplicate]

This question already has answers here:
Uploading Multiple Files to Google Drive with Google App Script
(5 answers)
Closed 6 years ago.
Here is my scenario. I've created an Add-On for Google Docs that acts as a video toolbox.
A feature I'm trying to add is the ability to record a video using the built-in web cam (using videojs-recorder) and then link to that video within the doc. I've got the video part working, but I am not sure how to get the webm JS Blob converted into a Google Blob so I can create a file on the user's Google Drive for sharing and linking.
Just to figure out how this might work, this is what I've done so far, without any luck.
CLIENT SIDE CODE
//event handler for video recording finish
vidrecorder.on('finishRecord', function() {
  // the blob object contains the recorded data that
  // can be downloaded by the user, stored on server etc.
  console.log('finished recording: ', vidrecorder.recordedData);
  google.script.run.withSuccessHandler(function(){
    console.log("winning");
  }).saveBlob(vidrecorder.recordedData);
});
SERVER SIDE CODE
function saveBlob(blob) {
  Logger.log("Uploaded %s of type %s and size %s.",
             blob.name,
             blob.size,
             blob.type);
}
The errors I get seem to be related to serialization of the blob, but the exceptions aren't very useful and just point to some minified code.
EDIT: Note that there is no FORM object involved here, hence no form POST, and no FileUpload objects, as others have indicated that this might be a duplicate, however it's slightly different in that we are getting a Blob object and need to save it to the server.
Thanks to Zig Mandel and Steve Webster, who provided some insight in the G+ discussion regarding this.
I finally pieced together enough bits to get this working.
CLIENT CODE
vidrecorder.on('finishRecord', function() {
  // the blob object contains the recorded data that
  // can be downloaded by the user, stored on server etc.
  console.log('finished recording: ', vidrecorder.recordedData.video);
  var blob = vidrecorder.recordedData.video;
  // Read the webm blob as a base64 data URL so it can pass through google.script.run
  var reader = new window.FileReader();
  reader.readAsDataURL(blob);
  reader.onloadend = function() {
    var b64Blob = reader.result;
    google.script.run.withSuccessHandler(function(state){
      console.log("winning: ", state);
    }).saveB64Blob(b64Blob);
  };
});
SERVER CODE
function saveB64Blob(b64Blob) {
  var success = { success: false, url: null };
  Logger.log("Got blob: %s", b64Blob);
  try {
    var blob = dataURItoBlob(b64Blob);
    Logger.log("GBlob: %s", blob);
    var file = DriveApp.createFile(blob);
    file.setSharing(DriveApp.Access.ANYONE_WITH_LINK, DriveApp.Permission.COMMENT);
    success = { success: true, url: file.getUrl() };
  } catch (error) {
    Logger.log("Error: %s", error);
  }
  return success;
}
function dataURItoBlob(dataURI) {
  // convert base64/URLEncoded data component to raw binary data
  var byteString;
  if (dataURI.split(',')[0].indexOf('base64') >= 0)
    byteString = Utilities.base64Decode(dataURI.split(',')[1]);
  else
    byteString = decodeURI(dataURI.split(',')[1]);
  // separate out the mime component
  var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];
  return Utilities.newBlob(byteString, mimeString, "video.webm");
}
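From there, the client's success handler receives the { success, url } object and can ask the server to place the link in the document. A minimal sketch of such a follow-up call (insertVideoLink() is a hypothetical helper, not part of the original answer):
// Server side: insert the shared video URL at the cursor as a link
function insertVideoLink(url) {
  var cursor = DocumentApp.getActiveDocument().getCursor();
  if (cursor) {
    cursor.insertText(url).setLinkUrl(url);
  }
}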

Gulp-header plus merge-stream results in a glitch?

Here’s some puzzling behavior.
I want to create a gulp task that will 1) build js-files into one file using gulp-requirejs-optimize and place it into the build directory, 2) copy a couple of config js-files into relevant subfolders of the build directory, and 3) add a header to these files.
Here’s how I am attempting to do this:
In a banner.js file, I create a header using gulp-header:
var header = require('gulp-header');
var bannerTemplate = [
'/**',
' * Hello ${name}',
' */'
].join('\n');
var banner = header(bannerTemplate, {name: 'world'});
module.exports = banner;
Then, in the file that is building javascript, I do the following:
var gulp = require('gulp');
var path = require('path');
var merge = require('merge-stream');
var requirejsOptimize = require('gulp-requirejs-optimize');
var banner = require('./banner.js');
gulp.task('js:build:test', function() {
  // this is the entry point for our javascript files;
  // will produce a single main.js file
  var jsEntry = path.join(global.paths.jsDirectory, 'main.js');
  var options = {
    baseUrl: global.paths.jsDirectory,
    mainConfigFile: path.join(global.paths.jsDirectory,
                              'libs/customized/requirejs/require.config.js'),
    preserveLicenseComments: false
  };
  var jsOutput = path.join(global.paths.buildDirectory, 'js');
  // I am also copying the require.js library and its customization; don't ask why
  var jsForCopy = [
    path.join(global.paths.jsDirectory, 'libs/vendors/requirejs/require.js'),
    path.join(global.paths.jsDirectory, 'libs/customized/requirejs/**/*.js')
  ];
  var requirejsOptimized = gulp.src(jsEntry)
    .pipe(requirejsOptimize(options))
    .pipe(banner)
    .pipe(gulp.dest(jsOutput));
  var copiedJS = gulp.src(jsForCopy, {base: global.paths.root})
    .pipe(banner) // having this line will cause a glitch
    .pipe(gulp.dest(global.paths.buildDirectory));
  return merge(requirejsOptimized, copiedJS);
});
So here is where it gets interesting. If I pipe only the stream that builds main.js (var requirejsOptimized in my code sample) through my banner, then everything is fine: I get a build folder with the correct files and the correct structure.
If, however, I also pipe the stream that copies the other js files (var copiedJS in my code sample) through the banner, then the structure of my build directory gets all jumbled up: main.js is duplicated and the libs folder with its subfolders is missing.
So my question is: am I doing something obviously wrong in my gulp task here? Is this expected behavior or a glitch of some kind?
I wouldn't reuse the banner stream in both pipelines. Both pipelines are writing to and reading from the same stream. That is why the output is weird.
Instead, use a new banner stream for each pipeline:
Change banner.js so it exports a function that creates a new banner stream
module.exports = function createBannerStream() {
return header(bannerTemplate, {name: 'world'});
};
In your main file invoke the function when you need a banner stream
var requirejsOptimized = gulp.src(jsEntry)
  .pipe(requirejsOptimize(options))
  .pipe(banner()) // <-- note the function call here
  .pipe(gulp.dest(jsOutput));
var copiedJS = gulp.src(jsForCopy, {base: global.paths.root})
  .pipe(banner()) // <-- note the function call here
  .pipe(gulp.dest(global.paths.buildDirectory));

Google Chrome IndexedDB - redundant code

I am trying to understand some code from an open-source project that handles IndexedDB commands within a Google Chrome application.
The code is as follows:
var db = pm.indexedDB.db;
var trans = db.transaction([pm.indexedDB.TABLE_DRIVE_CHANGES], "readwrite");
var store = trans.objectStore(pm.indexedDB.TABLE_DRIVE_CHANGES);
var boundKeyRange = IDBKeyRange.only(driveChange.id);
var request = store.put(driveChange);
request.onsuccess = function (e) {
  callback(driveChange);
};
request.onerror = function (e) {
  console.log(e.value);
};
Although the app works, it seems to me that the following line is redundant:
var boundKeyRange = IDBKeyRange.only(driveChange.id);
Or am I missing something? The variable 'boundKeyRange' is never referenced anywhere.
Unless boundKeyRange is used later, you're not missing something. IDBKeyRange.only just creates an IDBKeyRange object, and if that object isn't used in some IndexedDB request, it does absolutely nothing.
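For contrast, the key range would only matter if it were handed to a request, for example a cursor query over the same store. A minimal sketch of how boundKeyRange could have been used (purely illustrative, not part of the original project):
// Only has an effect because the range is passed to a request
store.openCursor(boundKeyRange).onsuccess = function (e) {
  var cursor = e.target.result;
  if (cursor) {
    console.log("Existing record for this id:", cursor.value);
    cursor.continue();
  }
};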