Chrome storage API not saving passed data

I have this code:
bookmarkProfile() {
  var username = this.unsername
  var userData = {
    profileInfo: this.userInfo,
    profileMedia: this.igFeed
  }
  browser.storage.local.set({username: userData})
  console.log('Profile saved')
}
// this is what I use to get all the data
getWatchedProfiles() {
  browser.storage.local.get(null, (items) => {
    console.log(items)
  })
}
What I expect is that the passed data is saved via the Chrome storage API, but when I inspect the storage I don't see any data. I'm implementing this inside a Chrome extension to store some user data fetched via AJAX. Is there an error here, and how can I fix it?
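For reference, storage.local.set stores values under the literal property name, so {username: userData} always saves a key named "username" rather than the value of the username variable (note also the likely this.unsername typo above). A minimal sketch of the usual fix with a computed property name, assuming the promise-based browser.* API (e.g. via the WebExtension polyfill):
bookmarkProfile() {
  const username = this.username  // original read `this.unsername`, likely a typo
  const userData = {
    profileInfo: this.userInfo,
    profileMedia: this.igFeed
  }
  // Computed key: stores under the *value* of `username`, not the literal string "username".
  browser.storage.local.set({ [username]: userData })
    .then(() => console.log('Profile saved'))
    .catch((err) => console.error('Save failed', err))
}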

Related

Google Apps Script User Cache Service: Accessing Cache Information in a Different Script but Same User

Looking into a way of sharing data via Google Apps Script's Cache Service from one web app to another.
Users load up the first web page and fill out their information. Once submitted, a function runs on this data and stores it in the cache:
CacheService.getUserCache().put('FirstName','David')
CacheService.getUserCache().put('Surname','Armstrong')
The console log reports that these two elements have been saved to the cache.
However, in the second web app, the console log returns null when the cache is read:
var cache = CacheService.getUserCache().get('Firstname');
var cache2 = CacheService.getUserCache().get('Surname');
console.log(cache)
console.log(cache2)
Any ideas?
A possible solution would be to implement a service that synchronizes the cache between web apps.
This can be achieved by creating a web app that, via POST, stores the UserCache of the individual web apps in the ScriptCache of a "cache synchronizer".
The operation would be very simple:
From the web app that we want to synchronize, we check if we have cache of the user.
If it exists, we send it to the server so that it stores it.
If it does not exist, we check if the server has stored the user's cache.
Here is a sketch of how it could work.
CacheSync.gs
const cacheService = CacheService.getScriptCache()

const CACHE_SAVED_RES = ContentService
  .createTextOutput(JSON.stringify({ "msg": "Cache saved" }))
  .setMimeType(ContentService.MimeType.JSON)

const doPost = (e) => {
  const { user, cache } = JSON.parse(e.postData.contents)
  const localCache = cacheService.get(user)
  if (!localCache) {
    /* If there is no local data, save it */
    cacheService.put(user, JSON.stringify(cache))
    return CACHE_SAVED_RES
  } else {
    /* If there is data, send it back. Note: localCache is already a JSON
       string, so this response is double-encoded; the client parses twice. */
    return ContentService
      .createTextOutput(JSON.stringify(localCache))
      .setMimeType(ContentService.MimeType.JSON)
  }
}
ExampleWebApp.gs
const SYNC_SERVICE = "<SYNC_SERVICE_URL>"
const CACHE_TO_SYNC = ["firstName", "lastName"]
const cacheService = CacheService.getUserCache()

const syncCache = () => {
  const cache = cacheService.getAll(CACHE_TO_SYNC)
  const options = {
    method: "POST",
    payload: JSON.stringify({
      user: Session.getActiveUser().getEmail(),
      cache
    })
  }
  if (Object.keys(cache).length === 0) {
    /* If there is no local cache, try to fetch it from the sync service */
    const res = UrlFetchApp.fetch(SYNC_SERVICE, options)
    /* Double parse: the service double-encodes the stored JSON string */
    const parsedResponse = JSON.parse(JSON.parse(res.toString()))
    Object.keys(parsedResponse).forEach((k) => {
      console.log(k, parsedResponse[k])
      cacheService.put(k, parsedResponse[k])
    })
  } else {
    /* If there is a cache, send it to the sync service */
    const res = UrlFetchApp.fetch(SYNC_SERVICE, options)
    console.log(res.toString())
  }
}

const createCache = () => {
  cacheService.put('firstName', "Super")
  cacheService.put('lastName', "Seagull")
}

const clearCache = () => {
  cacheService.removeAll(CACHE_TO_SYNC)
}
Additional information
The synchronization service must be deployed with ANYONE access. You can control access via an API key.
This is just an example and is not fully functional; you should adapt it to your needs.
The syncCache function of the web app is reusable, and would be the function you use in all web apps.
There is a disadvantage when retrieving the cache: you must provide the necessary keys, which forces you to write them out manually (e.g. CACHE_TO_SYNC).
It could be considered to replace ScriptCache with ScriptProperties.
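As a rough sketch of that swap (untested; it mirrors the put/get calls in CacheSync.gs, but ScriptProperties persist instead of expiring):
// Hypothetical variant of CacheSync.gs using ScriptProperties instead of ScriptCache.
const propService = PropertiesService.getScriptProperties()

const saveUserData = (user, cache) => {
  propService.setProperty(user, JSON.stringify(cache))
}

const loadUserData = (user) => {
  const stored = propService.getProperty(user) // null if absent
  return stored ? JSON.parse(stored) : null
}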
Documentation
Cache
Properties
Session
The doc says:
Gets the cache instance scoped to the current user and script.
As it is scoped to the script, accessing it from another script is not possible. This is also the case with PropertiesService:
Properties cannot be shared between scripts.
To share data, you can use a common file both scripts can access, like a Drive text file or a spreadsheet.
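A minimal sketch of that approach, using a spreadsheet both scripts can open (SHARED_SPREADSHEET_ID is a placeholder you would fill in):
// Hypothetical shared store: both scripts open the same spreadsheet by ID.
const SHARED_SPREADSHEET_ID = "<SHARED_SPREADSHEET_ID>";

function writeSharedValue(key, value) {
  const sheet = SpreadsheetApp.openById(SHARED_SPREADSHEET_ID).getSheets()[0];
  sheet.appendRow([key, value]);
}

function readSharedValue(key) {
  const sheet = SpreadsheetApp.openById(SHARED_SPREADSHEET_ID).getSheets()[0];
  // Scan the key/value rows for the requested key.
  const rows = sheet.getDataRange().getValues();
  for (const row of rows) {
    if (row[0] === key) return row[1];
  }
  return null;
}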

GDrive API v3 files.get download progress?

How can I show the progress of a large-file download from GDrive using the client-side gapi v3 API?
I am using the v3 API, and I've tried a Range request header, which works, but the download is very slow (see below). My ultimate goal is 4K video playback. GDrive limits playback to 1920x1280. My plan was to download chunks to IndexedDB via the v3 API and play back from the locally cached data. I have this working with the code below via Range requests, but it is unusably slow. A normal download of the full 438 MB test file directly (e.g. via the GDrive web page) takes about 30-35 s on my connection, and, coincidentally, each 1 MB Range request takes almost exactly the same 30-35 s. It feels like the GDrive back-end reads and sends the full file for each subrange.
I've also tried using XHR and fetch to download the file, which fails. I've been using the webContentLink (which typically ends in &export=download), but I cannot get the access headers correct; I get either CORS or other odd permission issues. The webContentLink works fine in <img> and <video> src attributes. I expect this is due to special permission handling or some header information the browser supplies specifically for these media tags. My solution must be able to read private (non-public, non-shareable) links, hence the use of the v3 API.
For video files that are smaller than the GDrive limit, I can set up a MediaRecorder and use a <video> element to get the data with progress. Unfortunately, the 1920x1080 limit kills this approach for larger files, where progress feedback is even more important.
This is the client-side gapi Range code, which works, but is unusably slow for large (400 MB - 2 GB) files:
const getRange = (start, end, size, fileId, onProgress) => (
  new Promise((resolve, reject) => gapi.client.drive.files.get(
    { fileId, alt: 'media', Range: `bytes=${start}-${end}` },
    // { responseType: 'stream' }, Perhaps this fails in the browser?
  ).then(res => {
    if (onProgress) {
      const cancel = onProgress({ loaded: end, size, fileId })
      if (cancel) {
        reject(new Error(`Progress canceled download at range ${start} to ${end} in ${fileId}`))
      }
    }
    return resolve(res.body)
  }, err => reject(err)))
)

export const downloadFileId = async (fileId, size, onProgress) => {
  const batch = 1024 * 1024
  try {
    const chunks = []
    for (let start = 0; start < size; start += batch) {
      const end = Math.min(size, start + batch - 1)
      const data = await getRange(start, end, size, fileId, onProgress)
      if (!data) throw new Error(`Unable to get range ${start} to ${end} in ${fileId}`)
      chunks.push(data)
    }
    return chunks.join('')
  } catch (err) {
    return console.error(`Error downloading file: ${err.message}`)
  }
}
Authentication works fine for me, and I use other GDrive commands just fine. I'm currently using the drive.photos.readonly scope, but I have the same issues even if I use a full write-permission scope.
Tangentially, I'm unable to get a stream when running client-side using gapi (it works fine in Node on the server side). This is just weird. If I could get a stream, I think I could use it to report progress. Whenever I add the commented-out line for responseType: 'stream', I get the following error: "The server encountered a temporary error and could not complete your request. Please try again in 30 seconds. That's all we know." Of course, waiting does NOT help, and I get a successful response if I do not request the stream.
I switched to using XMLHttpRequest directly, rather than the gapi wrapper. Google provides these instructions for using CORS that show how to convert any request from using gapi to an XHR. You can then attach to the onprogress event (and onload, onerror, and others) to get progress.
Here's the drop-in replacement code for the downloadFileId method in the question, with a bunch of debugging scaffolding:
const xhrDownloadFileId = (fileId, onProgress) => new Promise((resolve, reject) => {
  const user = gapi.auth2.getAuthInstance().currentUser.get()
  const oauthToken = user.getAuthResponse().access_token
  const xhr = new XMLHttpRequest()
  xhr.open('GET', `https://www.googleapis.com/drive/v3/files/${fileId}?alt=media`)
  xhr.setRequestHeader('Authorization', `Bearer ${oauthToken}`)
  xhr.responseType = 'blob'
  xhr.onloadstart = event => {
    console.log(`xhr ${fileId}: on load start`)
    const { loaded, total } = event
    onProgress({ loaded, size: total })
  }
  xhr.onprogress = event => {
    console.log(`xhr ${fileId}: loaded ${event.loaded} of ${event.total} ${event.lengthComputable ? '' : 'non-'}computable`)
    const { loaded, total } = event
    onProgress({ loaded, size: total })
  }
  xhr.onabort = event => {
    console.warn(`xhr ${fileId}: download aborted at ${event.loaded} of ${event.total}`)
    reject(new Error('Download aborted'))
  }
  xhr.onerror = event => {
    console.error(`xhr ${fileId}: download error at ${event.loaded} of ${event.total}`)
    reject(new Error('Error downloading file'))
  }
  xhr.onload = event => {
    console.log(`xhr ${fileId}: download of ${event.total} succeeded`)
    const { loaded, total } = event
    onProgress({ loaded, size: total })
    resolve(xhr.response)
  }
  xhr.onloadend = event => console.log(`xhr ${fileId}: download of ${event.total} completed`)
  xhr.ontimeout = event => {
    console.warn(`xhr ${fileId}: download timeout after ${event.loaded} of ${event.total}`)
    reject(new Error('Timeout downloading file'))
  }
  xhr.send()
})
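For illustration, a hypothetical call site that reports percentage progress (the onProgress shape matches the handlers above):
const blob = await xhrDownloadFileId(fileId, ({ loaded, size }) => {
  if (size) console.log(`Downloaded ${Math.round((loaded / size) * 100)}%`)
})
// `blob` can then be stored in IndexedDB or handed to URL.createObjectURL().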

How to get metadata from Amazon Kinesis Video Streams via Video.js and http-streaming?

I am working on the client side of Amazon Kinesis Video Streams, using Video.js and http-streaming to display video.
However, the stream server attaches some metadata (text only) to each fragment (see https://aws.amazon.com/about-aws/whats-new/2018/10/kinesis-video-streams-fragment-level-metadata-support/).
I don't know how to get this data using the AWS JavaScript SDK (e.g. https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/KinesisVideoMedia.html).
I've tested the getMedia function, but it doesn't work as expected (it returns the media info only once, not per fragment):
var kinesisvideomedia = new AWS.KinesisVideoMedia({
  //apiVersion: '2017-09-30',
  region: options.region,
  accessKeyId: options.accessKeyId,
  secretAccessKey: options.secretAccessKey,
  endpoint: response.DataEndpoint
});
// 3. Create the parameters for getMedia()
var mopts = {
  StartSelector: {
    StartSelectorType: 'EARLIEST'
  },
  StreamName: streamName
};
kinesisvideomedia.getMedia(mopts, function (error, vmresp) {
  if (error) {
    console.log(error);
  }
  //console.log(vmresp);
});
Many thanks for any support!
Your parameters only tell getMedia to grab the earliest fragment from the stream. To get all the following fragments, pass the ContinuationToken from the previous getMedia response in each additional call to getMedia, as sketched below.
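A minimal sketch of such a follow-up call (previousContinuationToken is a placeholder for the token carried in the prior response; the StartSelector fields follow the KinesisVideoMedia documentation):
var nextOpts = {
  StartSelector: {
    StartSelectorType: 'CONTINUATION_TOKEN',
    ContinuationToken: previousContinuationToken // from the prior response
  },
  StreamName: streamName
};
kinesisvideomedia.getMedia(nextOpts, function (error, vmresp) {
  if (error) console.log(error);
  // vmresp continues from where the previous call stopped
});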
Regarding the fragment-level metadata, you need to parse the response payload, for example as shown in this example, using the Kinesis Video Streams parser library.
getMedia is not well documented in the js aws-sdk, the main trick is to use request.createReadStream() in order to stream the media chunks.
You could do it like
var kinesisvideomedia = new AWS.KinesisVideoMedia();
var kinesisvideo = new AWS.KinesisVideo();
const params = {
  APIName: "GET_MEDIA",
  StreamName: streamName
}
kinesisvideo.getDataEndpoint(params, function (err, data) {
  if (err) {
    throw (err)
  }
  console.log("Changing endpoint to", data.DataEndpoint);
  kinesisvideomedia.endpoint = data.DataEndpoint;
  var mopts = {
    StartSelector: {
      StartSelectorType: 'EARLIEST'
    },
    StreamName: streamName
  };
  const request = kinesisvideomedia.getMedia(mopts);
  const stream = request.createReadStream();
  stream.on('data', function (data) { console.log("data", data) })
});

Login to Chrome extension with a Google user other than the one in use by Chrome

I have a Chrome extension that asks the user to log in via the chrome.identity.getAuthToken route. This works fine, but you can only log in with the accounts you have added to Chrome.
The client would like to be able to log in with a different Google account; so rather than using the.client@gmail.com, which is the account Chrome is signed in to, they want to be able to log in using the.client@company.com, which is also a valid Google account.
It is possible for me to be logged in to Chrome with one account and Gmail with a second account, yet I do not get the option to choose in the extension.
Is this possible?
Instead of authenticating the user using chrome.identity.getAuthToken, just implement the OAuth part yourself.
You can use libraries to help you, but the last time I tried, the most helpful library (the Google API Client) did not work in a Chrome extension.
Check out the Google OpenID Connect documentation for more info. In the end, all you have to do is redirect the user to the OAuth URL, use your extension to get Google's answer (the authorization code), and then convert the authorization code to an access token (it's a simple POST call).
Since a Chrome extension cannot redirect to a web server, you can use the installed-app redirect URI urn:ietf:wg:oauth:2.0:oob. With this, Google will display a page containing the authorization code.
Just use your extension to inject some JavaScript code into this page to get the authorization code, close the HTML page, and perform the POST call to obtain the user's email.
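As a rough sketch of that POST call (the endpoint and field names follow Google's OAuth 2.0 documentation; authorizationCode, CLIENT_ID, and CLIENT_SECRET are placeholders):
// Hypothetical token exchange: trade the authorization code for tokens.
const res = await fetch('https://oauth2.googleapis.com/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({
    code: authorizationCode,                   // scraped from the oob page
    client_id: CLIENT_ID,
    client_secret: CLIENT_SECRET,
    redirect_uri: 'urn:ietf:wg:oauth:2.0:oob',
    grant_type: 'authorization_code'
  })
})
// The response includes an access_token and, with OpenID scopes, an id_token
// whose payload contains the user's email.
const tokens = await res.json()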
Based on David's answer, I found out that the chrome.identity API (as well as the generic browser.identity API) now provides a chrome.identity.launchWebAuthFlow method which can be used to launch an OAuth flow. The following sample class shows how to use it:
class OAuth {
  constructor(clientId) {
    this.tokens = {}; // cache of access tokens, keyed by email
    this.redirectUrl = chrome.identity.getRedirectURL();
    this.clientId = clientId;
    this.scopes = [
      "https://www.googleapis.com/auth/gmail.modify",
      "https://www.googleapis.com/auth/gmail.compose",
      "https://www.googleapis.com/auth/gmail.send"
    ];
    this.validationBaseUrl = "https://www.googleapis.com/oauth2/v3/tokeninfo";
  }

  generateAuthUrl(email) {
    const params = {
      client_id: this.clientId,
      response_type: 'token',
      redirect_uri: encodeURIComponent(this.redirectUrl),
      scope: encodeURIComponent(this.scopes.join(' ')),
      login_hint: email
    };
    let url = 'https://accounts.google.com/o/oauth2/auth?';
    for (const p in params) {
      url += `${p}=${params[p]}&`;
    }
    return url;
  }

  extractAccessToken(redirectUri) {
    let m = redirectUri.match(/[#?](.*)/);
    if (!m || m.length < 1)
      return null;
    let params = new URLSearchParams(m[1].split("#")[0]);
    return params.get("access_token");
  }

  /**
   * Validate the token contained in redirectURL.
   * This follows essentially the process here:
   * https://developers.google.com/identity/protocols/OAuth2UserAgent#tokeninfo-validation
   * - make a GET request to the validation URL, including the access token
   * - if the response is 200, and contains an "aud" property, and that property
   *   matches the clientId, then the response is valid
   * - otherwise it is not valid
   * Note that the Google page talks about an "audience" property, but in fact
   * it seems to be "aud".
   */
  validate(redirectURL) {
    const accessToken = this.extractAccessToken(redirectURL);
    if (!accessToken) {
      throw "Authorization failure";
    }
    const validationURL = `${this.validationBaseUrl}?access_token=${accessToken}`;
    const validationRequest = new Request(validationURL, {
      method: "GET"
    });

    function checkResponse(response) {
      return new Promise((resolve, reject) => {
        if (response.status != 200) {
          reject("Token validation error");
        }
        response.json().then((json) => {
          if (json.aud && (json.aud === this.clientId)) {
            resolve(accessToken);
          } else {
            reject("Token validation error");
          }
        });
      });
    }

    return fetch(validationRequest).then(checkResponse.bind(this));
  }

  /**
   * Authenticate and authorize using chrome.identity.launchWebAuthFlow().
   * If successful, this resolves with a redirectURL string that contains
   * an access token.
   */
  authorize(email) {
    const that = this;
    return new Promise((resolve, reject) => {
      chrome.identity.launchWebAuthFlow({
        interactive: true,
        url: that.generateAuthUrl(email)
      }, function (responseUrl) {
        resolve(responseUrl);
      });
    });
  }

  // Must be async: it awaits authorize()/validate() before caching the token.
  async getAccessToken(email) {
    if (!this.tokens[email]) {
      const token = await this.authorize(email).then(this.validate.bind(this));
      this.tokens[email] = token;
    }
    return this.tokens[email];
  }
}
DISCLAIMER: the class above is based on open-source sample code from the Mozilla Developer Network.
Usage:
const clientId = "YOUR-CLIENT-ID"; // follow the link below to see how to get a client ID
const oauth = new OAuth(clientId);
const token = await oauth.getAccessToken("sample@gmail.com");
Of course, you need to handle token expiration yourself, i.e. when you get a 401 from Google's API, remove the token and try to authorize again, as in the sketch below.
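A hypothetical helper illustrating that retry-on-401 pattern (fetchWithAuthRetry is an assumption, not part of the class above; it relies on the tokens cache being a plain object):
// Hypothetical helper: drop the cached token on a 401 and authorize once more.
async function fetchWithAuthRetry(oauth, email, url) {
  let token = await oauth.getAccessToken(email)
  let res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } })
  if (res.status === 401) {
    delete oauth.tokens[email]                 // evict the expired token
    token = await oauth.getAccessToken(email)  // runs the auth flow again
    res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } })
  }
  return res
}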
A complete sample extension using Google's OAuth can be found here.

Dart HttpServer Cache

I have an HTTP server loading data from a different server (through HttpClient). You can see the code here: Dart HTTP server and Futures
I am able to load the response into a JSON map, and now I want to keep this map in memory until it's invalidated (I can fetch change notifications much faster than the whole set). What would be the best way to save my JSON map to an in-memory object and serve it from memory until it needs to be refreshed?
Solved in the comments above. Summary: if you need to keep a value in a very simple "cache" scenario, you can keep a copy of the object and invalidate it by copying new values over it:
main() {
  Map cachedMap = new Map();
  var lastCached = new DateTime.now();
  final Duration cacheDuration = new Duration(minutes: 10);

  HttpServer.bind(InternetAddress.ANY_IP_V4, 4040).then((HttpServer server) {
    print('listening on localhost, port ${server.port}');
    server.listen((HttpRequest request) {
      var now = new DateTime.now();
      if (lastCached.add(cacheDuration).isAfter(now) && cachedMap.isNotEmpty) {
        // Cache is still fresh: serve the in-memory map.
        handleMap(request, cachedMap);
      } else {
        // Cache expired or empty: refill cachedMap here and reset lastCached.
      }
    });
  }).catchError((e) => print(e.toString()));
}