I'm trying to sync changes from Google Drive, so I'm using the changes API. It works like a charm when I use it with restrictToMyDrive=true.
But when I tried to expand it to also track shared files (restrictToMyDrive=false) I ran into a major drawback: I get inconsistent results - the parents field is sporadically missing.
Let's say that user A shares this folder hierarchy with user B:
rootSharedFolder => subFolder => subSubFolder => File
If user B calls the changes API relatively close to the time that user A shares the rootSharedFolder, then roughly 3 times out of 10 some of the inner folders are received without the parents field.
Even calling the files.get API on the received changed item results in an empty parents field. But if I wait a minute or two and call it again, the parents field does exist in the result.
Has anyone else encountered this problem, and maybe found a workaround for it?
Thanks!
This behavior happens only when calling the changes API close to the time that the other user shares the items with you.
The parents field may be missing due to one of two factors:
A propagation delay
Insufficient permissions to see the parents
In both cases you'll notice the same symptom - the parents field for the file is missing - and at first there is no way to tell which case you are in:
A propagation delay:
This may happen when you request details of a shared file not owned by you right after it was shared with you.
You may not be able to find its parents at first glance because the changes are still propagating through the file system. This is called a propagation delay; it should not last long, and you should be able to identify and work around it by retrieving this field again a couple of minutes after the permission changes.
Not having access to the parents:
In this case you may have access to a certain file, but not to its parent folder; thus you cannot know what parent it has, because that parent has not been shared with you.
This is in the documentation:
parents — A parent does not appear in the parents list if the requesting user is not a member of the shared drive and does not have access to the parent. In addition, with the exception of the top level folder, the parents list must contain exactly one item if the file is located within a shared drive.
Side note: you may be interested in using shared drives, where files are owned by an organization rather than by individual users, which simplifies the sharing process and may avoid the problems you are facing here.
https://developers.google.com/drive/api/v3/enable-shareddrives
How do you know which case you are in?
One way to go is to implement an exponential back-off algorithm that retries retrieving the missing parents field; if it still hasn't appeared after a maximum number of attempts, you are probably in the second case:
exponentialBackoff(getParent, 7, 300, function(result) {
  console.log('the result is', result);
});

// A function that keeps trying "toTry" until it returns a truthy value
// or has been tried "max" times. The first retry waits "delay" ms, and
// the wait doubles on each attempt. "callback" is called upon success.
function exponentialBackoff(toTry, max, delay, callback) {
  console.log('max', max, 'next delay', delay);
  var result = toTry();
  if (result) {
    callback(result);
  } else if (max > 0) {
    setTimeout(function() {
      exponentialBackoff(toTry, max - 1, delay * 2, callback);
    }, delay);
  } else {
    console.log('we give up');
  }
}

// Stand-in for the real check; it fails 80% of the time to simulate
// the parents field still being absent.
function getParent() {
  var percentFail = 0.8;
  return Math.random() >= percentFail;
}
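In a real application, getParent would hit the Drive API. Here is a hedged sketch of such a check against the Drive v3 REST endpoint (the fileId and accessToken are assumptions; and since this call is asynchronous, the back-off above would need an async-aware variant whose toTry returns a Promise):

function fetchParents(fileId, accessToken) {
  // files.get with fields=parents returns only the parents array
  return fetch(
    'https://www.googleapis.com/drive/v3/files/' + fileId + '?fields=parents',
    { headers: { 'Authorization': 'Bearer ' + accessToken } }
  )
    .then(function(res) { return res.json(); })
    .then(function(file) {
      // resolves to the parents array, or null while it is still missing
      return file.parents || null;
    });
}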
In the Google Chrome browser on Windows and Android (I haven't tested others yet), the response time from a service worker increases linearly with the number of items stored in that specific cache storage when you use the Cache.match() function with the following option:
ignoreSearch = true
Dividing items among multiple caches helps, but it is not always convenient to do so. Moreover, even a small increase in the number of stored items makes a big difference in response times. According to my measurements, the response time roughly doubles for every tenfold increase in the number of items in the cache.
The official answer to my question in the Chromium issue tracker reveals that this is a known performance issue with the Cache Storage implementation in Chrome, which only occurs when you use Cache.match() with the ignoreSearch parameter set to true.
As you may know, ignoreSearch is used to disregard query parameters in the URL while matching the request against responses in the cache. Quote from MDN:
...whether to ignore the query string in the url. For example, if set to true the ?value=bar part of http://example.com/?value=bar would be ignored when performing a match.
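In other words (a minimal illustration with a made-up URL), with ignoreSearch enabled both of these lookups would match a cached response for /styles.css?v=123:

// both match the same cached entry when the query string is ignored
caches.match('/styles.css?v=123', { ignoreSearch: true });
caches.match('/styles.css', { ignoreSearch: true });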
Since it is not really convenient to stop using query parameter matching, I have come up with the following workaround, and I am posting it here in the hope that it will save someone some time:
// if the request has query parameters, `hasQuery` will be set to `true`
var hasQuery = event.request.url.indexOf('?') != -1;

event.respondWith(
  caches.match(event.request, {
    // ignore the query section of the URL based on our variable
    ignoreSearch: hasQuery,
  })
  .then(function(response) {
    // handle the response
  })
);
This works great because it handles every request with a query parameter correctly while still handling the others at lightning speed. And you do not have to change anything else in your application.
According to the guy in that bug report, the issue was tied to the number of items in a cache. I made a solution and took it to the extreme, giving each resource its own cache:
var cachedUrls = [
  /* CACHE INJECT FROM GULP */
];

// update the cache
// don't worry StackOverflow, I call this only when the site tells the SW to update
function fetchCache() {
  return Promise.all(
    // for all urls
    cachedUrls.map(function(url) {
      // open a dedicated cache for this resource
      return caches.open('resource:' + url).then(function(cache) {
        // add the url to it
        return cache.add(url);
      });
    })
  );
}
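For completeness, here is how lookups could work under this scheme - this part is my assumption rather than code from the bug report. The global caches.match() searches across every cache, so the per-resource caches need no special lookup logic, and with a single item per cache the ignoreSearch cost stays flat:

self.addEventListener('fetch', function(event) {
  event.respondWith(
    // caches.match() without a cache name searches all caches in order
    caches.match(event.request, { ignoreSearch: true }).then(function(response) {
      // fall back to the network on a cache miss
      return response || fetch(event.request);
    })
  );
});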
In the project we have here, static resources are served with long cache expirations, and we use query parameters (repository revision numbers, injected into the HTML) only as a way to manage the [browser] cache.
It didn't really work to use your solution of selectively applying ignoreSearch, since we'd have to use it for all static resources anyway in order to get cache hits!
However, not only did I dislike this hack, it still performed very slowly.
Okay, so, given that it was only a specific set of resources I needed ignoreSearch for, I decided to take a different route:
just remove the parameters from the request URLs manually, instead of relying on ignoreSearch.
self.addEventListener('fetch', function(event) {
  // find urls that only have numbers as parameters
  // yours will obviously differ, my queries to ignore were just repo revisions
  var shaved = event.request.url.match(/^([^?]*)[?]\d+$/);
  // extract the url without the query
  shaved = shaved && shaved[1];
  event.respondWith(
    // try to get the url from the cache.
    // if this is a resource, use the shaved url,
    // otherwise use the original request
    // (I assume it [can] contain post-data and stuff)
    caches.match(shaved || event.request).then(function(response) {
      // respond from the cache, or fall back to the network
      return response || fetch(event.request);
    })
  );
});
I had the same issue, and the previous approaches caused some errors for requests that should have ignoreSearch:false. An easy approach that worked for me was to apply ignoreSearch:true only to certain requests, by testing event.request.url.includes('A') && ... See the example below:
self.addEventListener("fetch", function(event) {
var ignore
if(event.request.url.includes('A') && event.request.url.includes('B') && event.request.url.includes('C')){
ignore = true
}else{
ignore = false
}
event.respondWith(
caches.match(event.request,{
ignoreSearch:ignore,
})
.then(function(cached) {
...
}
I've been trying for hours to debug a Firebase rules problem and was wondering if there is an easier way.
My problem is that I save my firebaseObject with $save (or create it with $add) and get a permission denied because of my rules. However, both the rules and the object are pretty complex, and there are dozens of rules involved. In my simulator I think I got it all right, but I still get permission denied.
The problem is that I am not 100% sure what the JSON data that $save tries to send to Firebase actually looks like. If I use a normal console.log(myObject), I of course get a list of all the values and functions inside the object, but this isn't the same as the raw JSON I would expect (like { "name": "value" }).
Is there any way to display the actual plain JSON data that $save sends, so I can copy it into the rules simulator and debug? Or is there any other way to see exactly which permission is denied?
Otherwise I have to go one by one, switching my permissions off and on, which would be a pretty long night for me. :(
If the value of the $firebaseObject is an object, the only differences (in addition to the prototype-wired methods) should be a number of $-prefixed properties (like $id and $resolved). So you should be able to see the actual JSON of what will be written to the database using something like this:
// copy every non-$-prefixed property into a plain object
var written = {};
Object.keys(myObject).forEach(function (key) {
  if (key.charAt(0) !== "$") { written[key] = myObject[key]; }
});
console.log(JSON.stringify(written));
The $$hashKey entries mentioned in your comment are added by AngularJS. A more general mechanism could be used to remove/ignore all $-prefixed keys throughout the object:
console.log(JSON.stringify(myObject, function (key, val) {
  // the replacer drops every $-prefixed key at any depth
  return key.charAt(0) === "$" ? undefined : val;
}));
I'm trying to translate from a Google Drive link on the web (well, the fileId anyway) to the Windows Google Drive app's path on the hard disk, and back again.
It would be helpful if there were something in the API for this (e.g. produce a path, minus the C:\Users\[User]\Google Drive\ prefix, from a file/folder ID, and vice versa), but there isn't.
So far I do:
1. Windows path to ID: take the first folder of the path and (starting from the root) look for a matching folder, then repeat until finished (possibly ending with a file name). PROBLEM: items can share the same name, whether files or folders or combinations of both, which is tricky in Windows. The app appends a number like ' (1)', which I have to catch, but how can I know which item ID is the correct one? I believe that numbering is based on date, but I'm not sure. So I can potentially end up with multiple results and no way to tell which is which.
2. ID to Windows path: take the name of the file/folder from the ID, then keep prepending the parent folder(s) until I have built up a path. PROBLEM: same as 1 above - if there are multiple matching items, I can't tell which one I should use when translating to Windows. PROBLEM: apparently items in Google Drive can have more than one parent. I'm not sure how that works in the Windows app.
Can anyone help me fine-tune how I do this, or tell me the exact details of how the Google Drive app does it? Code is welcome but not required, and I in turn can provide the code I use if needed.
I'm not sure if I fully understand the question, but I'll take a shot at an answer anyway:
1/ Assuming you have a Windows path,
C:\Users\User\Google Drive\myfile.ext
you create a file with a similar path on GooDrive by iterating over your path's tokens, recursively creating a tree structure on GooDrive. If the tree nodes (folders/files) exist, you return their IDs; otherwise you create the objects. The main difference in GooDrive is that a title query may return multiple objects (a list of folders/files). Bad luck - you either use the first one or quit with an error.
global path = "C:\Users\User\Google Drive\myfile.ext"

createTree(String path) {
  rootFolderId = create your root or use GooDrive root
  fileId = iterate(firstToken(path, "\"), rootFolderId)
}

iterate(title, parentFolderId) {
  ID (or multiple IDs) = search for title in parentFolderId
  if (multiple IDs exist)
    BOOM - report error and quit, or use the first one
  if (token not last) {
    if (single ID for title exists) {
      folderId = found ID
    } else {
      folderId = createFolder with title and parentFolderId metadata
    }
    iterate(nextToken(path, "\"), folderId)
  } else {  // last token represents the file
    if (single ID for title exists) {
      fileId = found ID
    } else {
      fileId = createFile with title and parentFolderId metadata
    }
    return fileId
  }
}
You did not specify the language, but in case it is Java, you can see a similar procedure in the createTree() method here (it is Android code, so there is a lot of Android-specific goo in there, sorry).
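If JavaScript is an option instead, here is a hedged sketch of the same top-down walk against the Drive v3 REST API (the accessToken is an assumption, error handling is minimal, and duplicates are resolved by taking the first match, as discussed above):

function resolvePath(segments, accessToken) {
  // walk the segments left to right, resolving each name inside its parent;
  // 'root' is a valid alias for the Drive root folder in v3 queries
  return segments.reduce(function(parentPromise, name) {
    return parentPromise.then(function(parentId) {
      var q = encodeURIComponent(
        "name = '" + name + "' and '" + parentId + "' in parents and trashed = false"
      );
      return fetch('https://www.googleapis.com/drive/v3/files?q=' + q, {
        headers: { 'Authorization': 'Bearer ' + accessToken }
      })
        .then(function(res) { return res.json(); })
        .then(function(body) {
          if (!body.files || body.files.length === 0) {
            throw new Error('not found: ' + name);
          }
          // multiple matches: report an error or take the first one
          return body.files[0].id;
        });
    });
  }, Promise.resolve('root'));
}

You would call it with the tokens of the path below your local Google Drive folder, e.g. resolvePath(['subFolder', 'myfile.ext'], token).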
2/ Assuming you have a Google Drive fileId, you construct the Windows path with this pseudocode (going from the bottom up to the root). Again, you may have multiple parents to deal with (report an error, or construct multiple paths with links to a single object):
String path = fileId's title
while (true) {
  parentID = get fileId's parent
  if (multiple parentIDs exist)
    BOOM - report error and quit, or construct multiple paths
    (multiple paths would represent file/folder links)
  if (parentID not valid or parentID's title not valid)
    break
  path = parentID's title + "\" + path
  if (parentID's title is your root)
    break
  fileId = parentID  // move one level up
}
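And a similarly hedged JavaScript sketch of this bottom-up walk (again assuming Drive v3 and an accessToken, and taking the first parent when several exist):

function buildPath(fileId, accessToken) {
  // fetch just the fields we need for one node
  function getFile(id) {
    return fetch(
      'https://www.googleapis.com/drive/v3/files/' + id + '?fields=id,name,parents',
      { headers: { 'Authorization': 'Bearer ' + accessToken } }
    ).then(function(res) { return res.json(); });
  }
  function walk(id, suffix) {
    return getFile(id).then(function(f) {
      var path = suffix ? f.name + '\\' + suffix : f.name;
      // no parents means we reached the root of the hierarchy
      if (!f.parents || f.parents.length === 0) return path;
      // multiple parents would mean multiple valid paths; take the first
      return walk(f.parents[0], path);
    });
  }
  return walk(fileId, '');
}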
One more thing: you say "items can be called the same thing, whether files or folders".
In GooDrive, look at the MIME type: there is a specific MIME type, "application/vnd.google-apps.folder", that tells you an item is a folder. Also, any parentId metadata represents a folder, since files can't be parents.
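For example, a tiny helper (mimeType is a standard field on the Drive files resource):

// true when the Drive file metadata describes a folder
function isFolder(file) {
  return file.mimeType === 'application/vnd.google-apps.folder';
}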
Good Luck
I like the user experience of Cubism, and would like to use it on top of a backend we have.
I've read the API docs and some of the code; most of the data handling seems to be abstracted away. How exactly could I begin to use other data sources?
I have a data store of about 6k individual machines with 5-minute precision on around 100 or so stats.
I would like to query a web app with a specific identifier for a machine and then render a Cubism-style dashboard by querying a specific Mongo data store.
Writing the web app or the querying of Mongo isn't the issue.
The issue is more that Cubism seems to require querying whatever data store you use for each individual data point (say you have 100 stats across a window of a week... expensive).
Is there another way I could leverage this tool to look at data that gets loaded using something similar to the code below?
var data = [];
// note: concat returns a new array, so the result must be assigned
d3.json("/initial", function(json) { data = data.concat(json); });
d3.json("/update", function(json) { data.push(json); });
Cubism takes care of initialization and updates for you: the initial request is for the full visible window (start to stop, typically 1,440 data points), while subsequent requests are only for the few most recent values (7 data points).
Take a look at context.metric for how to implement a new data source. The simplest possible implementation is like this:
var foo = context.metric(function(start, stop, step, callback) {
  d3.json("/data", function(data) {
    if (!data) return callback(new Error("unable to load data"));
    callback(null, data);
  });
});
You would extend this to change the "/data" URL as appropriate, passing in the start, stop and step times, and whatever else you want to use to identify a metric. For example, both Cube and Graphite use a metric expression as an additional query parameter.
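As a concrete, hedged example of that extension (the URL shape and parameter names here are my own assumptions, not part of Cubism's API):

function metric(expression) {
  return context.metric(function(start, stop, step, callback) {
    d3.json("/data"
        + "?expression=" + encodeURIComponent(expression)
        + "&start=" + (+start)   // timestamps in ms since the epoch
        + "&stop=" + (+stop)
        + "&step=" + step,
      function(data) {
        if (!data) return callback(new Error("unable to load data"));
        callback(null, data);    // an array of numbers, one per step
      });
  }, expression);  // the second argument names the metric
}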
In WordPress there is a Biographical Info field under Profile. I would like to prevent the user from exceeding a maximum length of 400 characters. Also, the number of hyperlinks they can place in the biographical info should not exceed three. How do I do that? I am very familiar with jQuery, if that helps. I am just a newbie with WordPress.
For the JavaScript side, you should attach the necessary events to the description field. You can load your script with wp_enqueue_script, and you probably want to do that in your handler for the admin_enqueue_scripts hook, where you check the passed $hook_name, which in this case is the page name. It is user-edit.php when an admin edits a user, and profile.php when users edit their own information (in which case IS_PROFILE_PAGE will also be defined as TRUE).
add_action('admin_enqueue_scripts', 'add_description_validation_script');

function add_description_validation_script($pagename) {
  if ($pagename == 'profile.php' || $pagename == 'user-edit.php') {
    wp_enqueue_script('description_validation', '/path/to/description_validation.js');
  }
}
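Since you know jQuery, description_validation.js could look something like this sketch (the #description ID is my assumption for the Biographical Info textarea on the profile screen; the 400-character and three-link limits come from your question, and the feedback mechanism is just a placeholder):

// description_validation.js - a rough sketch, not a drop-in solution
jQuery(function ($) {
  var $bio = $('#description'); // the Biographical Info textarea
  if (!$bio.length) return;
  $bio.on('input', function () {
    var text = $bio.val();
    // hard-truncate at the 400-character limit
    if (text.length > 400) {
      text = text.slice(0, 400);
      $bio.val(text);
    }
    // naive link count: occurrences of http:// or https://
    var links = (text.match(/https?:\/\//g) || []).length;
    // flag too many links; swap this for whatever feedback you prefer
    $bio.toggleClass('form-invalid', links > 3);
  });
});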
For the PHP side, you need the pre_user_description filter. This gives you the current biographical text, and your function can change this and return something else.
add_filter('pre_user_description', 'sanitize_description');

function sanitize_description($description)
{
  // Do stuff with the description
  return $description;
}
If, instead of silently changing the description, you want to show an error, you should look into the user_profile_update_errors hook. There you can validate the given data and return error messages to the user.
You probably want to wrap this all up in a plugin, so you can keep the code together and easily enable or disable it. A plugin is just a file in the /wp-content/plugins/ directory, most likely in a subdirectory named after your plugin.