Override Chrome Storage API - google-chrome

I would like to override chrome.storage.local.set in order to log the value of a storage key when the Chrome Storage API is called:
var storage_local_set = chrome.storage.local.set;
chrome.storage.local.set = function(key, value) {
  console.log(key);
  storage_local_set(key);
}
But I get "Illegal invocation: Function must be called on an object of type StorageArea" when storage_local_set(key) is called. How can I still store the data after logging the key?
For chrome.storage.sync.set this is what I tried based on the solution given:
const api = chrome.storage.sync;
const { set } = api;
api.set = function (data, callback) {
  console.log(data);
  set.apply(api, arguments);
};
But this does not hook the API.

You need to use .call() or .apply() to provide the this value:
storage_local_set.apply(this, arguments);
The parameters in your override don't match the documentation though. If you want to match the documentation then do it like this:
const api = chrome.storage.local;
const { set } = api;
api.set = function (data, callback) {
  console.log(data);
  set.apply(api, arguments);
};
And if you want to change the signature to separate key and value:
const api = chrome.storage.local;
const { set } = api;
api.set = (key, value, callback) => {
  console.log(key, value);
  set.call(api, {[key]: value}, callback);
};
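With the second variant installed, callers pass a separate key and value, and the hook logs both before forwarding them as a single object (the key and value here are only illustrative):
chrome.storage.local.set('theme', 'dark', function () {
  console.log('stored after logging');
});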

Related

Api call to json to interface to Mat-Tree

I'm running into issues trying to convert a JSON response from an API call into an interface that will be accepted by buildFileTree. The call pulls from SQL, it works in Dapper, and I can also see the array of data in my web app's console. The issue is in initialize(): when buildFileTree is given my static JSON file 'SampleJson' (inside the project) the tree shows up fine, but when I switch the data to my new interface 'VehicleCatalogMod', the tree collapses.
dataStoreNew: VehicleCatalogMod[] = [];
constructor(private _servicesService: ServicesService){
this._servicesService.GetVehicleCat()
.subscribe(data => {
this.dataStoreNew = [];
this.dataStoreNew = data;
console.log(data);
})
this.initialize();
}
initialize() {
this.treeData = SampleJson;
// Working as SampleJson this is where the problem happens
const data = this.buildFileTree(VehicleCatalogMod, 0);
console.log(data);
this.dataChange.next(data);
}
buildFileTree(obj: object, level: number): TodoItemNode[] {
return Object.keys(obj).reduce<TodoItemNode[]>((accumulator, key) => {
let value = obj[key];
const node = new TodoItemNode();
node.item = key;
if (value != null) {
if (typeof value === 'object') {
node.children = this.buildFileTree(value, level + 1);
} else {
node.item = value;
}
}
return accumulator.concat(node);
}, []);
}
GetVehicleCat(): Observable<any> {
console.log('Vehicle Catalog got called');
return this.http.get('https://api/right/here',
{ headers: this.options.headers });
}
I've tried a multitude of different things to get this working and I'm pretty much stuck. The same behaviour occurs when I pass this.dataStoreNew instead. There are no errors in the console; it literally just collapses the tree into one indistinguishable line. Also, when I used const vcm = new VehicleCatalogMod(); the tree popped up with the different properties, but not with the API values.
I also attached images of the HTML element that appears, one with VehicleCatalogMod and one with SampleJson.
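One thing that stands out (a sketch of a possible fix, not a verified answer): buildFileTree is called with the VehicleCatalogMod type itself rather than with the fetched data, and initialize() runs before the HTTP subscription has delivered anything. A minimal rearrangement, assuming GetVehicleCat() resolves to a structure buildFileTree can walk, would look roughly like this:
constructor(private _servicesService: ServicesService) {
  this._servicesService.GetVehicleCat()
    .subscribe(data => {
      this.dataStoreNew = data;
      // build the tree only after the data has actually arrived
      const nodes = this.buildFileTree(this.dataStoreNew, 0);
      this.dataChange.next(nodes);
    });
}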

Resize all existing images stored in firebase storage and update the newly resized image url to database via api call [duplicate]

This question already has an answer here:
How can I resize all existing images in firebase storage? (1 answer)
Closed 9 months ago.
I have a requirement to resize new and existing images stored in Firebase Storage. For new images, I enabled Firebase's Resize Images extension. For existing images, how can I resize them and get the newly resized image URLs to update back to the database via the API?
Here is my Firebase function to get the existing image URLs from the database. My question is how to resize those images and get the new image URLs.
const functions = require("firebase-functions");
const axios =require("axios");
async function getAlbums() {
const endpoint = "https://api.mydomain.com/graphql";
const headers = {
"content-type": "application/json",
};
const graphqlQuery = {
"query": `query Albums {
albums {
id
album_cover
}
}`
};
functions.logger.info("Call API");
const response = await axios({
url: endpoint,
method: 'post',
headers: headers,
data: graphqlQuery
});
if(response.errors) {
functions.logger.info("API ERROR : ", response.errors) // errors if any
} else {
return response.data.data.albums;
}
}
exports.manualGenerateResizedImage = functions.https.onRequest(async () => {
const albums = await getAlbums();
functions.logger.info("No. of Album : ", albums.length);
});
I think the below answer from Renaud Tarnec will definitely help you.
If you look at the code of the "Resize Images" extension, you will see that the Cloud Function that underlies the extension is triggered by an onFinalize event, which means:
"When a new object (or a new generation of an existing object) is successfully created in the bucket. This includes copying or rewriting an existing object."
So, without rewriting/regenerating the existing images, the Extension will not be triggered.
However, you could easily write your own Cloud Function that does the same thing but is triggered, for example, by a call to a specific URL (HTTPS cloud Function) or by creating a new document in a temporary Firestore Collection (background triggered CF).
This Cloud Function would execute the following steps (a rough sketch of the first two steps is shown after the list):
1. Get all the files of your bucket; see the getFiles() method of the Google Cloud Storage Node.js Client API. This method returns a GetFilesResponse object, which is an array of File instances.
2. By looping over the array, check for each file whether it has a corresponding resized image in the bucket (depending on the way you configured the Extension, the resized images may be in a specific folder).
3. If a file does not have a corresponding resized image, execute the same business logic as the Extension's Cloud Function for this File.
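A minimal sketch of steps 1 and 2, assuming the extension writes its resized copies into a thumbnails/ folder next to the originals (the bucket name and folder layout are assumptions, not taken from the question):
const { Storage } = require("@google-cloud/storage");
const storage = new Storage();
const bucket = storage.bucket("my-project.appspot.com"); // hypothetical bucket name

async function listImagesWithoutThumbnail() {
  const [files] = await bucket.getFiles({ prefix: "images/" });
  const names = new Set(files.map((f) => f.name));
  // keep only originals that have no counterpart under images/thumbnails/
  return files.filter((f) =>
    !f.name.startsWith("images/thumbnails/") &&
    !names.has("images/thumbnails/" + f.name.split("/").pop())
  );
}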
There is an official Cloud Functions sample which shows how to create a Cloud Storage-triggered Firebase Function that creates resized thumbnails from uploaded images and writes their URLs to the database (see the last lines of the index.js file).
Note: if you have a lot of files to treat, you should most probably work in batches, since there is a limit of 9 minutes on Cloud Function execution. Also, depending on the number of images to treat, you may need to increase the timeout value and/or the allocated memory of your Cloud Function, see https://firebase.google.com/docs/functions/manage-functions#set_timeout_and_memory_allocation
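For instance, with 1st-gen firebase-functions the timeout and memory can be raised on the function itself (the values below are only an example):
exports.manualGenerateResizedImage = functions
  .runWith({ timeoutSeconds: 540, memory: "1GB" }) // 540 s is the 9-minute maximum
  .https.onRequest(async (req, res) => {
    // ... long-running batch work ...
    res.send("done");
  });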
In case someone needs it, this is how I resized the existing images.
const functions = require("firebase-functions");
const axios = require("axios");
const { Storage } = require("#google-cloud/storage");
const storage = new Storage();
// Don't forget to replace with your bucket name
const bucket = storage.bucket("projectid.appspot.com");
async function getAlbums() {
const endpoint = "https://api.mydomain.com/graphql";
const headers = {
"content-type": "application/json",
};
const graphqlQuery = {
query: `query Albums {
albums {
id
album_cover
}
}`,
};
const response = await axios({
url: endpoint,
method: "post",
headers: headers,
data: graphqlQuery,
});
if (response.errors) {
functions.logger.error("API ERROR : ", response.errors); // errors
if any
} else {
return response.data.data.albums;
}
}
function getFileName(url) {
var decodeURI = decodeURIComponent(url);
var index = decodeURI.lastIndexOf("/") + 1;
var filenameWithParam = decodeURI.substr(index);
index = filenameWithParam.lastIndexOf("?");
var filename = filenameWithParam.substr(0, index);
return filename;
}
function getFileNameFromFirestore(url) {
var index = url.lastIndexOf("/") + 1;
var filename = url.substr(index);
return filename;
}
const triggerBucketEvent = async () => {
bucket.getFiles(
{
prefix: "images/albums", // you can add a path prefix
autoPaginate: false,
},
async (err, files) => {
if (err) {
functions.logger.error(err);
return;
}
const albums = await getAlbums();
await Promise.all(
files.map((file) => {
var fileName = getFileNameFromFirestore(file.name);
var result = albums.find((obj) => {
return getFileName(obj.album_cover) === fileName;
});
if (result) {
var file_ext = fileName.substr(
(Math.max(0, fileName.lastIndexOf(".")) || Infinity) + 1
);
var newFileName = result.id + "." + file_ext;
// Copy each file on thumbs directory with the different name
file.copy("images/albums/" + newFileName);
} else {
functions.logger.info(file.name, " not found in album list!");
}
})
);
}
);
};
exports.manualGenerateResizedImage = functions.https.onRequest(async () => {
await triggerBucketEvent();
});

Google Data Studio Community Connector getData() not working as expected

function getData(request) {
  try {
    var options = {
      'method': 'post',
      'contentType': 'application/json',
      'payload': JSON.stringify(request)
    };
    var response = UrlFetchApp.fetch(getDataUrl, options);
    var resData = JSON.parse(response.getContentText());
    return resData;
  } catch (e) {
    e = (typeof e === 'string') ? new Error(e) : e;
    Logger.log("Catch", e);
    throw e;
  }
}
The above is my getData() function.
My isAdminUser() returns true.
When I try to visualize my data, I get the following error
Data Set Configuration Error
Data Studio cannot connect to your data set.
There was an error requesting data from the community connector. Please report the issue to the provider of this community connector if this issue persists.
Error ID: 3d11b88b
(Screenshot of the error: https://i.stack.imgur.com/x3Hki.png)
The error ID changes every time I refresh the data, and I can't find any dictionary that maps the error ID to an actual error.
I tried debugging by logging the request parameter, response.getContentText() and the resData variable to make sure my data is formatted correctly.
Following are the logs printed in Stackdriver logs
request
{configParams={/Personal config data/}, fields=[{name=LASTNAME}]}
response.getContentText()
{"schema":[{"name":"LASTNAME","dataType":"STRING"}],"rows":[{"values":["test"]},{"values":["test"]},{"values":["Dummy"]},{"values":["One"]},{"values":["Nagargoje"]},{"values":[""]},{"values":[""]},{"values":[""]},{"values":[""]},{"values":[""]}],"filtersApplied":false}
resData
{rows=[{values=[test]}, {values=[test]}, {values=[Dummy]},
{values=[One]}, {values=[Nagargoje]}, {values=[]}, {values=[]},
{values=[]}, {values=[]}, {values=[]}], filtersApplied=false,
schema=[{name=LASTNAME, dataType=STRING}]}
I am not sure what is wrong with my getData() function.
The Object that I am returning seems to match the structure given here https://developers.google.com/datastudio/connector/reference#getdata
So there was no issue with my getData() function; the issue was in the manifest file.
While searching for how to pass parameters via the URL, I had stumbled upon a field called dataStudio.useQueryConfig, added it to my manifest file and set its value to true.
Google Data Studio was therefore expecting getData() to return a query config. But what I really wanted was this.
Anyway, I was able to debug it thanks to Matthias suggesting that I take a look at open-source implementations.
I implemented JSON Connect, which worked fine, so I logged what it was returning in getData() and used that format/structure in my code, but my connector still didn't work.
My next assumption was that maybe there was something wrong with my getSchema() return value. So I logged that as well and then copy-pasted the hard-coded return values of both getData() and getSchema() from JSON Connect.
Even that didn't work, so my last bet was that there must be something wrong with the manifest file, maybe the dummy links I had added in it. After a field-by-field comparison I was finally able to get my community connector working.
This would have been easier to debug if the error messages were a bit more helpful and didn't seem so generic.
First: You can always check out the open-source implementations that others did for custom Google Data Studio connectors. They are a great source of information. For more information, check out the documentation on Open Source Community Connectors.
Second: My implementation is for a time tracking system and thus holds confidential, GDPR-relevant data. That's why I can not just give you my response messages. But I assembled this code. It contains authentication, an HTTP GET data fetch and the data conversions. The explanation is below the code. Again, check out the open-source connectors if you need further assistance.
var cc = DataStudioApp.createCommunityConnector();
const URL_DATA = 'https://www.myverysecretdomain.com/api';
const URL_PING = 'https://www.myverysecretdomain.com/ping';
const AUTH_USER = 'auth.user'
const AUTH_KEY = 'auth.key';
const JSON_TAG = 'user';
String.prototype.format = function() {
// https://coderwall.com/p/flonoa/simple-string-format-in-javascript
a = this;
for (k in arguments) {
a = a.replace("{" + k + "}", arguments[k])
}
return a
}
function httpGet(user, token, url, params) {
try {
// this depends on the URL you are connecting to
var headers = {
'ApiUser': user,
'ApiToken': token,
'User-Agent': 'my super freaky Google Data Studio connector'
};
var options = {
headers: headers
};
if (params && Object.keys(params).length > 0) {
var params_ = [];
for (const [key, value] of Object.entries(params)) {
var value_ = value;
if (Array.isArray(value))
value_ = value.join(',');
params_.push('{0}={1}'.format(key, encodeURIComponent(value_)))
}
var query = params_.join('&');
url = '{0}?{1}'.format(url, query);
}
var response = UrlFetchApp.fetch(url, options);
return {
code: response.getResponseCode(),
json: JSON.parse(response.getContentText())
}
} catch (e) {
throwConnectorError(e);
}
}
function getCredentials() {
var userProperties = PropertiesService.getUserProperties();
return {
username: userProperties.getProperty(AUTH_USER),
token: userProperties.getProperty(AUTH_KEY)
}
}
function validateCredentials(user, token) {
if (!user || !token)
return false;
var response = httpGet(user, token, URL_PING);
if (response.code == 200)
console.log('API key for the user %s successfully validated', user);
else
console.error('API key for the user %s is invalid. Code: %s', user, response.code);
return response;
}
function getAuthType() {
var cc = DataStudioApp.createCommunityConnector();
return cc.newAuthTypeResponse()
.setAuthType(cc.AuthType.USER_TOKEN)
.setHelpUrl('https://www.myverysecretdomain.com/index.html#authentication')
.build();
}
function resetAuth() {
var userProperties = PropertiesService.getUserProperties();
userProperties.deleteProperty(AUTH_USER);
userProperties.deleteProperty(AUTH_KEY);
console.info('Credentials have been reset.');
}
function isAuthValid() {
var credentials = getCredentials()
if (credentials == null) {
console.info('No credentials found.');
return false;
}
var response = validateCredentials(credentials.username, credentials.token);
return (response != null && response.code == 200);
}
function setCredentials(request) {
var credentials = request.userToken;
var response = validateCredentials(credentials.username, credentials.token);
if (response == null || response.code != 200) return { errorCode: 'INVALID_CREDENTIALS' };
var userProperties = PropertiesService.getUserProperties();
userProperties.setProperty(AUTH_USER, credentials.username);
userProperties.setProperty(AUTH_KEY, credentials.token);
console.info('Credentials have been stored');
return {
errorCode: 'NONE'
};
}
function throwConnectorError(text) {
DataStudioApp.createCommunityConnector()
.newUserError()
.setDebugText(text)
.setText(text)
.throwException();
}
function getConfig(request) {
// ToDo: handle request.languageCode for different languages being displayed
console.log(request)
var params = request.configParams;
var config = cc.getConfig();
// ToDo: add your config if necessary
config.setDateRangeRequired(true);
return config.build();
}
function getDimensions() {
var types = cc.FieldType;
return [
{
id:'id',
name:'ID',
type:types.NUMBER
},
{
id:'name',
name:'Name',
isDefault:true,
type:types.TEXT
},
{
id:'email',
name:'Email',
type:types.TEXT
}
];
}
function getMetrics() {
return [];
}
function getFields(request) {
Logger.log(request)
var fields = cc.getFields();
var dimensions = this.getDimensions();
var metrics = this.getMetrics();
dimensions.forEach(dimension => fields.newDimension().setId(dimension.id).setName(dimension.name).setType(dimension.type));
metrics.forEach(metric => fields.newMetric().setId(metric.id).setName(metric.name).setType(metric.type).setAggregation(metric.aggregations));
var defaultDimension = dimensions.find(field => field.hasOwnProperty('isDefault') && field.isDefault == true);
var defaultMetric = metrics.find(field => field.hasOwnProperty('isDefault') && field.isDefault == true);
if (defaultDimension)
fields.setDefaultDimension(defaultDimension.id);
if (defaultMetric)
fields.setDefaultMetric(defaultMetric.id);
return fields;
}
function getSchema(request) {
var fields = getFields(request).build();
return { schema: fields };
}
function convertValue(value, id) {
// ToDo: add special conversion if necessary
switch(id) {
default:
// value will be converted automatically
return value[id];
}
}
function entriesToDicts(schema, data, converter, tag) {
return data.map(function(element) {
var entry = element[tag];
var row = {};
schema.forEach(function(field) {
// field has same name in connector and original data source
var id = field.id;
var value = converter(entry, id);
// use UI field ID
row[field.id] = value;
});
return row;
});
}
function dictsToRows(requestedFields, rows) {
return rows.reduce((result, row) => ([...result, {'values': requestedFields.reduce((values, field) => ([...values, row[field]]), [])}]), []);
}
function getParams (request) {
var schema = this.getSchema();
var params;
if (request) {
params = {};
// ToDo: handle pagination={startRow=1.0, rowCount=100.0}
} else {
// preview only
params = {
limit: 20
}
}
return params;
}
function getData(request) {
Logger.log(request)
var credentials = getCredentials()
var schema = getSchema();
var params = getParams(request);
var requestedFields; // fields structured as I want them (see above)
var requestedSchema; // fields structured as Google expects them
if (request) {
// make sure the ordering of the requested fields is kept correct in the resulting data
requestedFields = request.fields.filter(field => !field.forFilterOnly).map(field => field.name);
requestedSchema = getFields(request).forIds(requestedFields);
} else {
// use all fields from schema
requestedFields = schema.map(field => field.id);
requestedSchema = getFields(request);
}
var filterPresent = request && request.dimensionsFilters;
var filter; // initialize your filter here if you implement high-level filtering
if (filterPresent) {
// ToDo: apply request filters on API level (before the API call) to minimize data retrieval from API (number of rows) and increase speed
// see https://developers.google.com/datastudio/connector/filters
// filter = ... // initialize filter
// filter.preFilter(params); // low-level API filtering if possible
}
// get HTTP response; e.g. check the HTTP return code on response.code if necessary
var response = httpGet(credentials.username, credentials.token, URL_DATA, params);
// get JSON data from HTTP response
var data = response.json;
// convert the full dataset including all fields (the full schema). non-requested fields will be filtered later on
var rows = entriesToDicts(schema, data, convertValue, JSON_TAG);
// match rows against filter (high-level filtering)
//if (filter)
// rows = rows.filter(row => filter.match(row) == true);
// remove non-requested fields
var result = dictsToRows(requestedFields, rows);
console.log('{0} rows received'.format(result.length));
//console.log(result);
return {
schema: requestedSchema.build(),
rows: result,
filtersApplied: filter ? true : false
};
}
A sample request that filters for all users with names starting with J.
{
configParams={},
dateRange={
endDate=2020-05-14,
startDate=2020-04-17
},
fields=[
{name=name}
],
scriptParams={
lastRefresh=1589543208040
},
dimensionsFilters=[
[
{
values=[^J.*],
operator=REGEXP_EXACT_MATCH,
type=INCLUDE,
fieldName=name
}
]
]
}
The JSON data returned by the HTTP GET contains all fields (full schema).
[ { user:
{ id: 1,
name: 'Jane Doe',
email: 'jane@doe.com' } },
{ user:
{ id: 2,
name: 'John Doe',
email: 'john@doe.com' } }
]
Once the data is filtered and converted/transformed, you'll get this result, which is perfectly displayed by Google Data Studio:
{
filtersApplied=true,
schema=[
{
isDefault=true,
semantics={
semanticType=TEXT,
conceptType=DIMENSION
},
label=Name,
name=name,
dataType=STRING
}
],
rows=[
{values=[Jane Doe]},
{values=[John Doe]}
]
}
getData should return data for only the requested fields. request.fields will contain the list of all requested fields. Limit your data to those fields only and then send the parsed data back.
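A minimal sketch of that idea, using the same DataStudioApp helpers as the answer above (fetchAllRows is a hypothetical helper that returns row objects keyed by field id):
function getData(request) {
  // keep the ordering of the requested fields
  var requestedIds = request.fields.map(function (f) { return f.name; });
  var requestedFields = getFields(request).forIds(requestedIds);
  var rows = fetchAllRows().map(function (row) {
    return { values: requestedIds.map(function (id) { return row[id]; }) };
  });
  return {
    schema: requestedFields.build(),
    rows: rows
  };
}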

Why is async.map passing only the value of my JSON?

I have a function in node.js that looks like this:
exports.getAllFlights = function(getRequest) {
// this is the package from npm called "async"
async.map(clients, getFlight, function(err, results) {
getRequest(results);
});
}
The variable clients is an object that looks like this:
{ 'A4Q': 'JZA8187', 'B7P': 'DAL2098' }
I expected the map function to pass the individual keys of clients to getFlight. Instead, it passes only the values (e.g. 'DAL2098', 'JZA8187', and so on).
Is this the expected functionality? Is there a function in async that will do what I want?
The signature of getFlight is getFlight(identifier, callback). The identifier is what is currently messed up. getFlight calls callback(null, rtn); null represents the absence of an error, and rtn is the JSON that my function produces.
Yes, that's the expected result. The documentation is not very clear but all iterating functions of async.js pass the values of the iterable, not the keys. There is the eachOf series of functions that pass both key and value. For example:
async.eachOf(clients, function (value, key, callback) {
// process each client here
});
Unfortunately there is no mapOf.
If you don't mind not doing things in parallel you can use eachOfSeries:
var results = [];
async.eachOfSeries(clients, function (value, key, callback) {
// do what getFlight needs to do and append to results array
}, function(err) {
getRequest(results);
});
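For example, the eachOfSeries body could reuse getFlight from the question and collect its results (a sketch only, with minimal error handling):
var results = [];
async.eachOfSeries(clients, function (value, key, callback) {
  // pass the key to getFlight, which is what the question needs
  getFlight(key, function (err, rtn) {
    if (err) return callback(err);
    results.push(rtn);
    callback();
  });
}, function (err) {
  if (err) return console.error(err);
  getRequest(results);
});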
Another (IMHO better) workaround is to use proper arrays:
var clients = [{'A4Q': 'JZA8187'},{'B7P': 'DAL2098'}];
Then use your original logic. However, I'd prefer to use a structure like the following:
var clients = [
{key: 'A4Q', val: 'JZA8187'},
{key: 'B7P', val: 'DAL2098'}
];
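With that structure, async.map passes each whole entry to the iteratee, so getFlight sees both the identifier and the flight code (a sketch; lookupFlight is a hypothetical helper):
function getFlight(client, callback) {
  // client.key is e.g. 'A4Q', client.val is e.g. 'JZA8187'
  lookupFlight(client.key, client.val, function (err, rtn) {
    if (err) return callback(err);
    callback(null, rtn);
  });
}
async.map(clients, getFlight, function (err, results) {
  getRequest(results);
});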
First create a custom event emitter, attach a listener for the returned data, and then process it. Note that the listener must be attached before the event is emitted, otherwise it never fires.
var EventEmitter = require('events');
var myEmitter = new EventEmitter();
myEmitter.on('clients_data', (obj) => {
  if (typeof obj !== 'undefined') {
    if (obj.constructor === Object && Object.keys(obj).length == 0) {
      console.log('empty');
    } else {
      for (var key in obj) {
        var value = obj[key];
        // do what you want here
      }
    }
  }
});
myEmitter.emit('clients_data', { 'A4Q': 'JZA8187' }); // emit your event wherever appropriate
Well, you need to format your clients object properly before you can use it with async.map(). Lodash _.map() can help you:
var client_list = _.map(clients, function(value, key) {
var item = {};
item[key] = value;
return item;
});
After that, you will have an array like:
[ { A4Q: 'JZA8187' }, { B7P: 'DAL2098' } ]
Then, you can use async.map():
exports.getAllFlights = function(getRequest) {
async.map(client_list, getFlight, function(err, results) {
getRequest(results);
});
};

Is there any workaround to make use of html5 localstorage on both http and https?

I need to store some data on the client side, and this data is too large to fit in a cookie. localStorage seemed like the perfect way of doing this, but the website I'll be using it on has some parts that run over https and others over plain http, and since localStorage is scoped per origin, data set over http is not visible over https (and vice versa), so this doesn't seem like a viable solution anymore.
Any idea if there is any solution to this? Any other alternatives?
Store all data on one domain, e.g. https://my.domain.org/.
On https protocols, simply use localStorage.setItem('key', 'value') to save the data.
On http protocols, embed a https frame, and use postMessage to save the data:
Demo: http://jsfiddle.net/gK7ce/4/ (with the helper page being located at http://jsfiddle.net/gK7ce/3/).
// Script at https://my.domain.org/postMessage
window.addEventListener('message', function(event) {
// Domain restriction (to not leak variables to any page..)
if (event.origin == 'http://my.domain.org' ||
event.origin == 'https://my.domain.org') {
var data = JSON.parse(event.data);
if ('setItem' in data) {
localStorage.setItem(data.setItem, data.value);
} else if ('getItem' in data) {
var gotItem = localStorage.getItem(data.getItem);
// See below
event.source.postMessage(
'#localStorage#' + data.identifier +
(gotItem === null ? 'null#' : '#' + gotItem),
event.origin
);
} else if ('removeItem' in data) {
localStorage.removeItem(data.removeItem);
}
}
}, false);
On the http(s) page, the frame can be embedded as follows (replace https://my.domain.org with the actual URL; note that you can simply get a reference to the frame and use the src attribute):
<iframe name="myPostMessage" src="https://my.domain.org/postMessage" style="display:none;"></iframe>
// Example: Set the data
function LSsetItem(key, value) {
  var obj = {
    setItem: key,
    value: value
  };
  frames['myPostMessage'].postMessage(JSON.stringify(obj), 'https://my.domain.org');
}
LSsetItem('key', 'value');
Note that the method is asynchronous, because of postMessage. An implementation of the getItem method has to be implemented differently:
var callbacks = {};
window.addEventListener('message', function(event) {
if (event.source === frames['myPostMessage']) {
var data = /^#localStorage#(\d+)(null)?#([\S\s]*)/.exec(event.data);
if (data) {
if (callbacks[data[1]]) {
// null and "null" are distinguished by our pattern
callbacks[data[1]](data[2] === 'null' ? null : data[3]);
}
delete callbacks[data[1]];
}
}
}, false);
function LSgetItem(key, callback) {
  var identifier = new Date().getTime();
  var obj = {
    identifier: identifier,
    getItem: key
  };
  callbacks[identifier] = callback;
  frames['myPostMessage'].postMessage(JSON.stringify(obj), 'https://my.domain.org');
}
// Usage:
LSgetItem('key', function(value) {
console.log('Value: ' + value);
});
Note that each callback is stored in a hash. Each message also contains an identifier, so that the window which receives the message calls the correct corresponding callback.
For completeness, here's the LSremoveItem method:
function LSremoveItem(key) {
  var obj = {
    removeItem: key
  };
  frames['myPostMessage'].postMessage(JSON.stringify(obj), 'https://my.domain.org');
}