Resume file uploading using the tus protocol - Vimeo

I am developing a website using Laravel, and I am using tus-js-client to upload files directly to Vimeo without going through my server. The uploading works perfectly.
But let's say the upload reaches 44% and then the user refreshes the browser... as I understand it, the upload should continue from 44% when the user starts uploading the same file again, but that doesn't happen and it starts from the beginning.
I think this is happening because when I send an API request to Vimeo to get the upload_link (step 1), it gives me a new upload_link every time the user refreshes the page.
// Upload process start
var self = this;

// Send request to server to get (upload.upload_link) from Vimeo API (Step 1)
var uploadEndPoint = self.getUploadEndPoint();

// Start uploading (Step 2)
self.uploader = new tus.Upload(file, {
    uploadUrl: uploadEndPoint,
    retryDelays: [0, 1000, 3000, 5000],
    metadata: {
        filename: file.name,
        filetype: file.type
    },
    resume: true,
    uploadSize: file.size,
    onError: function(error) {
        console.log("Failed because: " + error);
    },
    onProgress: function(bytesUploaded, bytesTotal) {
        var percentage = (bytesUploaded / bytesTotal * 100).toFixed(2);
        console.log(bytesUploaded, bytesTotal, percentage + "%");
    },
    onSuccess: function() {
        console.log(
            "Download %s from %s",
            self.uploader.file.name,
            self.uploader.url
        );
    }
});
What is the best way to handle this, so the user can resume the upload?

What I did:
set up a Laravel backend endpoint that returns the upload link
for the first request, call the Vimeo API from the backend and save the upload_link on the backend
for further requests, check whether the client is uploading the same file (by name and size, or by hash); if yes, return the saved upload_link, if not request a new one (see the sketch below)
By doing that I solve two problems:
the upload link is kept on record until the file is fully uploaded
my permanent Vimeo access token stays uncompromised on the server, since only the upload link is sent to the client
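For illustration, here is a minimal client-side sketch of this flow. The route name /api/vimeo/upload-link and its response shape are assumptions; the Laravel backend is expected to return the cached upload_link when it has already seen a file with the same name and size.
async function getUploadLink(file) {
    // Ask the backend for an upload link; it caches the link per file name + size
    var response = await fetch('/api/vimeo/upload-link', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ name: file.name, size: file.size })
    });
    var json = await response.json();
    return json.upload_link;
}

async function startUpload(file) {
    var uploadLink = await getUploadLink(file);
    var upload = new tus.Upload(file, {
        // the same link comes back after a refresh, so tus asks the server
        // for the current offset and resumes instead of starting over
        uploadUrl: uploadLink,
        uploadSize: file.size,
        retryDelays: [0, 1000, 3000, 5000],
        metadata: { filename: file.name, filetype: file.type },
        onProgress: function (bytesUploaded, bytesTotal) {
            console.log((bytesUploaded / bytesTotal * 100).toFixed(2) + '%');
        },
        onSuccess: function () {
            console.log('Upload finished:', upload.url);
        }
    });
    upload.start();
}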


Problem with Firebase Image Resize extension [duplicate]

I am following a tutorial to resize images via Cloud Functions on upload and am experiencing two major issues which I can't figure out:
1) If a PNG is uploaded, it generates the correctly sized thumbnails, but their preview won't load in the Firebase Storage console (the loading spinner shows indefinitely). It only shows the image after I click on "Generate new access token" (none of the generated thumbnails has an access token initially).
2) If a JPEG or any other format is uploaded, the MIME type shows as "application/octet-stream". I'm not sure how to extract the extension correctly to put into the filename of the newly generated thumbnails.
// Imports implied by the code below
import * as functions from 'firebase-functions';
import { Storage } from '@google-cloud/storage';
import { tmpdir } from 'os';
import { join, dirname } from 'path';
import * as sharp from 'sharp';
import * as fs from 'fs-extra';

const gcs = new Storage();

export const generateThumbs = functions.storage
    .object()
    .onFinalize(async object => {
        const bucket = gcs.bucket(object.bucket);
        const filePath = object.name;
        const fileName = filePath.split('/').pop();
        const bucketDir = dirname(filePath);
        const workingDir = join(tmpdir(), 'thumbs');
        const tmpFilePath = join(workingDir, 'source.png');

        if (fileName.includes('thumb#') || !object.contentType.includes('image')) {
            console.log('exiting function');
            return false;
        }

        // 1. Ensure thumbnail dir exists
        await fs.ensureDir(workingDir);

        // 2. Download Source File
        await bucket.file(filePath).download({
            destination: tmpFilePath
        });

        // 3. Resize the images and define an array of upload promises
        const sizes = [64, 128, 256];
        const uploadPromises = sizes.map(async size => {
            const thumbName = `thumb#${size}_${fileName}`;
            const thumbPath = join(workingDir, thumbName);

            // Resize source image
            await sharp(tmpFilePath)
                .resize(size, size)
                .toFile(thumbPath);

            // Upload to GCS
            return bucket.upload(thumbPath, {
                destination: join(bucketDir, thumbName)
            });
        });

        // 4. Run the upload operations
        await Promise.all(uploadPromises);

        // 5. Cleanup: remove the tmp/thumbs from the filesystem
        return fs.remove(workingDir);
    });
Would greatly appreciate any feedback!
I just had the same problem; for some unknown reason, Firebase's Resize Images extension purposely removes the download token from the resized image.
To disable deleting Download Access Tokens:
go to https://console.cloud.google.com
select Cloud Functions from the left
select ext-storage-resize-images-generateResizedImage
click EDIT
in the Inline Editor, go to the file FUNCTIONS/LIB/INDEX.JS
add // before this line (delete metadata.metadata.firebaseStorageDownloadTokens;)
comment out the same line in FUNCTIONS/SRC/INDEX.TS too
press DEPLOY and wait until it finishes
Note: both the original and the resized image will have the same token.
I just started using the extension myself. I noticed that I can't access the image preview from the Firebase console until I click on "create access token".
I guess that you have to create this token programmatically before the image is available; a sketch of one way to do that is below.
I hope it helps
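If you would rather set the token yourself when the thumbnail is written, here is a hedged sketch of generating one during the upload from a Cloud Function. The uuid package and the bucket / thumbPath / bucketDir / thumbName variables are assumptions carried over from the question's code, not part of the extension itself.
import { v4 as uuidv4 } from 'uuid';

const token = uuidv4();
await bucket.upload(thumbPath, {
    destination: join(bucketDir, thumbName),
    metadata: {
        metadata: {
            // the Firebase console and client SDKs build download URLs from this key
            firebaseStorageDownloadTokens: token
        }
    }
});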
November 2020
In connection to #Somebody's answer, I can't seem to find ext-storage-resize-images-generateResizedImage in GCP Cloud Functions.
A better way to do it is to reuse the original file's firebaseStorageDownloadTokens.
This is how I did mine:
functions
    .storage
    .object()
    .onFinalize((object) => {
        // some image optimization code here

        // get the original file's access token
        const downloadtoken = object.metadata?.firebaseStorageDownloadTokens;

        return bucket.upload(tempLocalFile, {
            destination: file,
            metadata: {
                metadata: {
                    optimized: true, // other custom flags
                    firebaseStorageDownloadTokens: downloadtoken, // access token
                },
            },
        });
    });

Can I send data from one website's console to another?

So I'm trying to automate a task at work, and I'm wondering if there's any way to send data from the console of one webpage to the console of another webpage.
The task I am trying to automate involves a website that has a prefilled form. I need to get elements from this form and then copy them into a totally different website. I've already written a script that pulls the data I need from the form and displays it in the console. Now I need to find a way to send the data (which is simply variables) to the other page's console. Is this possible?
Keep in mind this is on a work computer; I'm not allowed to download anything onto it.
Are you an admin of the webpages, and are these pages from the same site? If the answer is yes, I would recommend you use localStorage for saving and retrieving the data, then display it in the console.
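A small sketch of that localStorage idea, assuming both pages are served from the same origin; the key name formData and the values are placeholders:
// On page A (where the prefilled form lives), after your script has pulled the values:
var data = { customerName: 'example', orderId: 42 }; // placeholder values from the form
localStorage.setItem('formData', JSON.stringify(data));

// On page B (same origin), read it back and print it to the console:
var saved = JSON.parse(localStorage.getItem('formData') || '{}');
console.log(saved);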
If it's not your website and you want it to work anyway, just create a simple browser extension.
Here are some links to help you get started with extensions:
MDN doc
Chrome doc
The idea is for you to target webpage A, collect the data and post it to GitHub,
then target webpage B to read the data from your GitHub gist and display it in the console.
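A rough sketch of that gist round trip, assuming an existing gist id and a GitHub personal access token (both placeholders), using GitHub's REST API for gists:
// Page A: push the collected data into the gist
fetch('https://api.github.com/gists/GIST_ID', {
    method: 'PATCH',
    headers: {
        'Authorization': 'token YOUR_TOKEN',
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        files: { 'data.json': { content: JSON.stringify({ field: 'value' }) } }
    })
});

// Page B: read it back and display it in the console
fetch('https://api.github.com/gists/GIST_ID')
    .then(function (r) { return r.json(); })
    .then(function (gist) {
        console.log(JSON.parse(gist.files['data.json'].content));
    });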
Cheers, I hope it was helpful.
Which server-side language are you using?
Usually for these, you could just have a form which posts data to another website's form.
Look at this PHP example:
https://www.ostraining.com/blog/coding/retrieve-html-form-data-with-php/
Correct me if I did not understand your question correctly.
// Store the logs in the following way
console.stdlog = console.log.bind(console);
console.logs = [];
console.log = function () {
    console.logs.push(Array.from(arguments));
    console.stdlog.apply(console, arguments);
};

// Copy the logs into a JSON file
(function (console) {
    console.save = function (data, filename) {
        if (!data) {
            console.error('Console.save: No data');
            return;
        }
        if (!filename) filename = 'console.json';
        if (typeof data === "object") {
            data = JSON.stringify(data, undefined, 4);
        }
        var blob = new Blob([data], { type: 'text/json' }),
            e = document.createEvent('MouseEvents'),
            a = document.createElement('a');
        a.download = filename;
        a.href = window.URL.createObjectURL(blob);
        a.dataset.downloadurl = ['text/json', a.download, a.href].join(':');
        e.initMouseEvent('click', true, false, window, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
        a.dispatchEvent(e);
    };
})(console);

console.save(console.logs); // saves the logs to a console.json file
// From the console.json file, you can use the log information on another page
// Store the logs in the following way
console.stdlog = console.log.bind(console);
console.logs = [];
console.log = function () {
    console.logs.push(Array.from(arguments));
    console.stdlog.apply(console, arguments);
};

localStorage.setItem('Logs', JSON.stringify(console.logs));
JSON.parse(localStorage.getItem('Logs')); // read back from any page on the same origin

Chrome doesn't use cache after power loss?

I am creating a digital signage player that uses Chrome as its display engine. We need to be able to muddle along without too much interruption if the network goes down.
Chrome works fine caching images, and I've set the "Expires" header to a month after access. I can set the player computer offline and have the app run for days with no problem. If I reboot the machine the right way (Start -> Shut Down), caching still works as expected.
The issue is that when Chrome exits abnormally - either a crash or power loss - on reboot Chrome ignores the cache and refuses to load images. This happens even if I cut power 5 minutes after it loads the page, so content is not expiring.
My guess is that Chrome is set to ignore the cache after an abnormal exit to prevent a corrupted cache from continually crashing the browser. However, this behavior is not what I need.
Does anyone know of a command line arg or flag I can set to keep this from happening?
Thanks for your help.
I tried everything I could think of to make Chrome not invalidate the local cache on system failure, and came up empty. There are a few other people who have had the same question, and I didn't see an answer.
Here's what I did that made this work, and if someone else is having the same problem, it might be the workaround that you need.
I added a service worker that would cache images. The code below isn't perfect yet, but should be a starting place for someone... (FYI, I learned this 5 minutes ago, so if someone wants to give me a pointer or two on how to make this more elegant, I'm all ears.)
We cache anything that has a response type of "cors" so we cache only images coming from the remote server. Note that your images must be loaded via https for this to work.
Taken (mostly) from: https://developers.google.com/web/fundamentals/getting-started/primers/service-workers
var CACHE_NAME = 'shine_cache';
var urlsToCache = [
    '/'
];

self.addEventListener('install', function (event) {
    // Perform install steps
    event.waitUntil(
        caches.open(CACHE_NAME)
            .then(function (cache) {
                console.log('Opened cache');
                return cache.addAll(urlsToCache);
            })
    );
});

self.addEventListener('fetch', function (event) {
    //console.log('Handling fetch event for', event.request);

    if (event.request.method == 'POST') {
        //console.log("Skipping POST");
        event.respondWith(fetch(event.request));
        return;
    }

    if (event.request.headers.get('Accept').indexOf('image') !== -1) {
        event.respondWith(
            caches.match(event.request)
                .then(function (response) {
                    // Cache hit - return response
                    if (response) {
                        console.log("Returning from cache.", event.request);
                        return response;
                    }

                    // IMPORTANT: Clone the request. A request is a stream and
                    // can only be consumed once. Since we are consuming it
                    // once by the cache and once by the browser for fetch, we
                    // need to clone the request.
                    var fetchRequest = event.request.clone();

                    return fetch(fetchRequest).then(
                        function (response) {
                            console.log("Have a response.", response);
                            // Check if we received a valid response
                            if (!response || response.status !== 200 || response.type !== 'cors') {
                                return response;
                            }

                            // IMPORTANT: Clone the response. A response is a stream,
                            // and because we want the browser to consume the response
                            // as well as the cache consuming the response, we need
                            // to clone it so we have two streams.
                            var responseToCache = response.clone();

                            caches.open(CACHE_NAME)
                                .then(function (cache) {
                                    console.log("Caching response", event.request);
                                    cache.put(event.request, responseToCache);
                                });

                            return response;
                        }
                    );
                })
        );
    }
});
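For completeness, the page itself still has to register the worker. A minimal registration sketch, assuming the script above is saved as /sw.js at the site root (the path and scope are assumptions):
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
        .then(function (reg) {
            console.log('Service worker registered with scope:', reg.scope);
        })
        .catch(function (err) {
            console.error('Service worker registration failed:', err);
        });
}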

Raspberry PI server - GPIO ports status JSON response

I've been struggling for a couple of days. The question is simple: is there a way to create a server on a Raspberry Pi that returns the current status of the GPIO ports in JSON format?
Example:
http://192.168.1.109:3000/led
{
    "Name": "green led",
    "Status": "on"
}
I found the Adafruit gpio-stream library useful, but I don't know how to output the data in JSON format.
Thank you
There are a variety of libraries for GPIO interaction in Node.js. One issue is that you might need to run it as root to have access to GPIO, unless you can adjust the read access for those devices. This is supposed to be fixed in the latest version of Raspbian, though.
I recently built a node.js application that was triggered from a motion sensor, in order to activate the screen (and deactivate it after a period of time). I tried various gpio libraries but the one that I ended up using was "onoff" https://www.npmjs.com/package/onoff mainly because it seemed to use an appropriate way to identify changes on the GPIO pins (using interrupts).
Now, you say that you want to send data, but you don't specify how that is supposed to happen. If we use the example that you want to send data using a POST request via HTTP, and send the JSON as body, that would mean that you would initialize the GPIO pins that you have connected, and then attach event handlers for them (to listen for changes).
Upon a change, you would invoke the http request and serialize the JSON from a javascript object (there are libraries that would take care of this as well). You would need to keep a name reference yourself since you only address the GPIO pins by number.
Example:
var GPIO = require('onoff').Gpio;
var request = require('request');

var x = new GPIO(4, 'in', 'both');

function exit() {
    x.unexport();
}

x.watch(function (err, value) {
    if (err) {
        console.error(err);
        return;
    }
    request({
        uri: 'http://example.org/',
        method: 'POST',
        json: true,
        body: { x: value } // This is the actual JSON data that you are sending
    }, function () {
        // this is the callback from when the request is finished
    });
});

process.on('SIGINT', exit);
I'm using the npm modules onoff and request. request is used for simplifying the JSON serialization over a http request.
As you can see, I only set up one GPIO here. If you need to track multiple, you must make sure to initialize them all, distinguish them with some sort of name and also remember to unexport them in the exit callback. Not sure what happens if you don't do it, but you might lock it for other processes.
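For example, a rough sketch of tracking several named pins with onoff; the pin numbers and names are placeholders:
var GPIO = require('onoff').Gpio;

var pins = {
    'green led': new GPIO(23, 'in', 'both'),
    'red led': new GPIO(24, 'in', 'both')
};

Object.keys(pins).forEach(function (name) {
    pins[name].watch(function (err, value) {
        if (err) {
            console.error(err);
            return;
        }
        console.log(JSON.stringify({ Name: name, Status: value ? 'on' : 'off' }));
    });
});

process.on('SIGINT', function () {
    // unexport every pin so other processes are not locked out
    Object.keys(pins).forEach(function (name) {
        pins[name].unexport();
    });
    process.exit();
});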
Thank you, this was very helpful. I did not express myself well, sorry for that. I don't want to send data (for now); I just want to enter a web address like 192.168.1.109/led and receive a JSON response. This is what I have managed to do so far. I don't know if this is the right way. Please can you review this or suggest a better method?
var http = require('http');
var url = require('url');
var Gpio = require('onoff').Gpio;

var led = new Gpio(23, 'out');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    var command = url.parse(req.url).pathname.slice(1);
    switch (command) {
        case "on":
            //led.writeSync(1);
            var x = led.readSync();
            res.write(JSON.stringify({ msgId: x }));
            //res.end("It's ON");
            res.end();
            break;
        case "off":
            led.writeSync(0);
            res.end("It's OFF");
            break;
        default:
            res.end('Hello? yes, this is pi!');
    }
}).listen(8080);

When using HTML5 file system features, where are the files saved?

I'm just trying out the file system API.
As described in http://www.html5rocks.com/en/tutorials/file/filesystem
code:
window.webkitStorageInfo.requestQuota(PERSISTENT, 1024 * 1024, function (grantedBytes) {
    window.requestFileSystem(PERSISTENT, grantedBytes, successCallback, errorHandler);
}, function (e) {
    console.log('Error', e);
});

function successCallback(fs) {
    window.fileSystem = fs;
    fs.root.getFile('kiki.txt', {
        create: false,
        exclusive: true
    },
    function (fileEntry) {
        // Create a FileWriter object for our FileEntry (log.txt).
        fileEntry.createWriter(function (fileWriter) {
            fileWriter.onwriteend = function (e) {
                console.log('Write completed.');
            };
            fileWriter.onerror = function (e) {
                console.log('Write failed: ' + e.toString());
            };
            fileWriter.seek(fileWriter.length);
            // Create a new Blob and write it to log.txt.
            var blob = new Blob(['Lorem Ipsum'], {
                type: 'text/plain'
            });
            fileWriter.write(blob);
        }, errorHandler);
    }, errorHandler);
}
(the create: false is because I already created that file before).
Chrome asks permission to use the file system and I grant it.
When I try to read it back, I can; it's persistent. But where is it saved?
According to the docs, it is saved in the root folder ("/"), but it is not there (I'm using nginx). I searched the entire HD for this file ("kiki.txt") and it was not found.
So where is it saved?
You are using the HTML5 file system APIs but trying to find data files on the server.
Client browsers will save the data most probably on the client's file system. Quote from the link you provided: http://www.html5rocks.com/en/tutorials/file/filesystem/
With the FileSystem API, a web app can create, read, navigate, and write to a sandboxed section of the user's local file system.
As for your question - each browser will have its own implementation of the HTML5 file system APIs, and the data might be saved anywhere, using a custom format.
It may be stored as a key-value pair in a database inside the user profile, which can differ for each person based on operating system, browser, and configuration.
But here is one example, copied from: Where is the html5 local database located on a client machine?
C:\Users\<user>\AppData\Roaming\Mozilla\Firefox\Profiles\<profile-name>\webappsstore.sqlite