files.list behavior seems to have changed - google-drive-api

I've been naming Drive files with a very specific convention to facilitate searching for them from an app. The search functionality in the v3 Drive API (files.list) had been working as recently as three weeks ago and has since stopped working.
For example, using the following files,
"ABC-123 template", "ABC-123 gogo", "ABC-123 bobo"
... enables me to search via the API with
name contains 'ABC-123'
This search should return all three files. Instead it returns no results. Note that the same query in the Drive web interface is successful and the convention follows the rules laid out in the documentation.
This was working, and now it has stopped. Did the search API change?! I can find other files with this implementation, just not those that use the naming convention.
Here's the full code snippet of the request in NodeJS.
Google.prototype.findFiles = function(file_prefix, callback) {
  // assumes: var google = require('googleapis');
  var service = google.drive('v3');
  service.files.list({
    q: "name contains '" + file_prefix + "'",
    fields: 'nextPageToken, files(id, name)',
    spaces: 'drive',
    corpus: 'domain',
    auth: this.auth
  }, function(err, response) {
    if (err) {
      console.log('Error: findFiles failed. ' + err);
      callback(err);
    } else {
      callback(null, response.files);
    }
  });
};

The root cause turned out to be the corpus value. For reasons that are not clear from the documentation, setting corpus: 'domain' prevents the search from matching these files.
Removing corpus: 'domain', from the code sample above solves the problem.
The files I'm searching are very much in my domain. I'm not sure whether this behavior changed recently or whether I added that constraint to the code myself and simply don't remember doing so.
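For reference, the working request is identical to the snippet above with the corpus line dropped:
service.files.list({
  q: "name contains '" + file_prefix + "'",
  fields: 'nextPageToken, files(id, name)',
  spaces: 'drive',
  // corpus: 'domain'  <- omitted: with it, the query returned no results
  auth: this.auth
}, callback);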
Onward.

Related

How exactly does ipfs cat method find and display contents of files using a CID by making use of DHT?

I have done a lot of research on the internet to learn how exactly the ipfs cat and get methods find and download files from other peers using a CID. I want to fully understand how this process works: "The cat method first searches your own node for the file requested, and if it can't find it there, it will attempt to find it on the broader IPFS network" (https://proto.school/regular-files-api/04).
This is the ipfs source code for cat:
async function * cat (ipfsPath, options = {}) {
  ipfsPath = normalizeCidPath(ipfsPath)

  if (options.preload !== false) {
    const pathComponents = ipfsPath.split('/')
    preload(CID.parse(pathComponents[0]))
  }

  const file = await exporter(ipfsPath, repo.blocks, options)

  // File may not have unixfs prop if small & imported with rawLeaves true
  if (file.type === 'directory') {
    throw new Error('this dag node is a directory')
  }

  if (!file.content) {
    throw new Error('this dag node has no content')
  }

  yield * file.content(options)
}
I deduce that the two important arguments enabling peer routing and file fetching are repo.blocks and preload. repo.blocks is created during ipfs.create() and then passed as a parameter to ipfs.createCat(), the method that actually creates the cat method. preload is also created by ipfs.create() and passed as an argument to ipfs.createCat() so that it can be used in ipfs.cat(). What confuses me the most is which of preload or repo.blocks is actually responsible for CID querying. I analyzed the underlying methods for this part of cat:
const pathComponents = ipfsPath.split('/')
preload(CID.parse(pathComponents[0]))
and learned that this is the part of ipfs.cat that makes HTTP connections to other peers. However, this part:
const file = await exporter(ipfsPath, repo.blocks, options)
includes sub-methods like
const block = await blockstore.get(cid, options);
const node = dagPb.decode(block);
which also seem to be related to CID querying through the use of distributed hash tables. blockstore.get did not make use of any methods that seemed to connect to other peers or search for peers that have a CID, but I am still very confused about whether these methods have any relation to CID querying. I would highly appreciate help on how the cat method works under the hood from someone who is an expert in ipfs, or at least pointers to resources I can use to learn the material myself.
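For what it's worth, my current mental model is the sketch below; localStore and bitswap are hypothetical placeholder names, not the actual js-ipfs internals:
// Hypothetical sketch of what a blockstore-backed read seems to do:
// check the local repo first, then fall back to the network.
async function getBlock (cid) {
  if (await localStore.has(cid)) {
    return localStore.get(cid) // block already in the local repo
  }
  // otherwise ask peers for the block (bitswap), using the DHT to
  // find providers that have advertised this CID
  return bitswap.want(cid)
}
If that sketch is right, then preload would be a separate optimization (an HTTP hint to preload nodes) rather than the mechanism exporter uses to fetch blocks, but I'd like confirmation.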

chrome native messaging: how to receive > 1MB

What would be a good way to work with Chrome's incoming 1MB limit for native messaging extensions? The data that we would be sending to the extension is json-serialized gpx, if that matters.
When the original message is >1MB, it seems like this question really has two parts:
1. How to partition the data on the sending end (i.e. the client)?
This part should be pretty trivial. Even if we need to split the data into separate self-contained, complete gpx strings, that is pretty straightforward.
2. How to join the <1MB messages back into the original >1MB message?
Is there a standard, known solution for this question? We can call background.js (i.e. the function passed to chrome.runtime.onMessageExternal.addListener) once for each <1MB incoming message, but how would we combine the strings from those repeated calls into one response for the extension?
UPDATE 8-18-16:
What we've been doing is just appending each message 'chunk' to a buffer variable in background.js, and not sending anything back to Chrome until disconnection:
var gpxText = "";
port.onMessage.addListener(function(msg) {
  // msg must be a JSON-serialized simple string;
  // append each incoming msg to the collective gpxText string
  // but do not send it to Chrome until disconnection
  // console.log("received " + msg);
  gpxText += msg;
});
port.onDisconnect.addListener(function(msg) {
  if (gpxText != "") {
    sendResponse(JSON.parse(gpxText));
    gpxText = "";
  } else {
    sendResponse({ status: 'error', message: 'something really bad happened' });
  }
  // build the response object here with msg, status, error tokens, and always send it
  console.log("disconnected");
});
We will have to make that appending a bit smarter to handle and send both status and message keys/values, but that should be easy.
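For part 1 (the sending end), the split really is trivial; here is a sketch of the idea, where CHUNK_SIZE and writeMessage are placeholders for our host's actual framing code, not part of any Chrome API:
// slice the serialized gpx string into pieces safely below Chrome's
// 1MB incoming limit; writeMessage stands in for however the native
// host frames and writes one native-messaging message to stdout
var CHUNK_SIZE = 512 * 1024;
function sendInChunks(gpxString, writeMessage) {
  for (var i = 0; i < gpxString.length; i += CHUNK_SIZE) {
    writeMessage(gpxString.substr(i, CHUNK_SIZE));
  }
}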
I have this same issue and have been scouring the web for the past couple of days to figure out what to do. In my application, I am currently shipping a JSON string over to the background script in chunks, having had to create a subprotocol to handle this special case. For example, my initial question might look like:
{action:"getImage",guid:"123"}
and the response for <1MB might look like:
{action:"getImage",guid:"123",status:"success",return:"ABBA..."}
where ABBA... represents a base64 encoding of the bytes. When >1MB, however, the response will look like:
{action:"getStream",guid:"123",status:"success",return:"{action:\"getImage\",guid:\"123\",return:\"ABBA...",more:true}
and upon receipt of a payload with action === 'getStream', the background page will immediately issue a new request like:
{action:"getStream",guid:"123"}
and the next response might look like:
{action:"getStream",guid:"123",status:"success",return:"...DEAF==",more:false}
so your onMessage handler would look something like:
var streams = {}; // must be initialized to an object, or streams[ guid ] below throws
function onMessage( e ) {
  var guid = e.guid;
  if ( e.action === 'getStream' ) {
    if ( !streams[ guid ] ) streams[ guid ] = '';
    streams[ guid ] += e[ 'return' ];
    if ( e.more ) {
      postMessage( { action: 'getStream', guid: guid } );
      // no more processing to do here, bail
      return;
    }
    e = JSON.parse( streams[ guid ] );
    streams[ guid ] = null;
  }
  // do something with e as if it was never chunked
  ...
}
It works, but I am somewhat convinced that it is slower than it should be (though this could be due to the slow feel of the STDIO signaling and, in my particular app, the additional signaling that has to happen for each new chunk).
Ideally I'd like to stream the file over a more efficient protocol supported natively by Chrome. I looked into WebRTC, but it would mean implementing that API in my native messaging host (as best I can tell), which is not an option I'm willing to take on. I played with 'passing' the contents by file, like this:
if ( e.action === 'getFile' ) { // note: must be ===; the original = assignment made the test always pass
  xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function( e ) {
    if ( e.target.readyState === 4 ) {
      onMessage( e.target.responseText );
    }
  };
  xhr.open( 'GET', chrome.extension.getURL( '/' + e.file ), true );
  xhr.send();
  return;
}
where I have my native messaging host write a .json file out to the extension's install directory, and it seems to work, but there is no way for me to reliably derive that path (without fudging things and hoping for the best), because as best I can tell the extension's install location is determined by your Chrome user profile, and there's no API I could find that gives me that path. Additionally, there's a 'version' folder created under your extension id which includes an _0 suffix that I don't know how to calculate (is the _0 constant, reserved for some future use? does it tick up when an extension is published anew to the web store but the version is not adjusted?).
At this point I'm out of ideas and I'm hoping someone will stumble across this question with some guidance.

Wildcard in Angular http.get?

I have multiple JSON files in one directory, and I am going to build the view contents from those JSON files. The JSON files are identical in structure.
What is the correct syntax for loading multiple JSON files for use with ng-repeat? I tried the code below, but it throws a permission-denied error (the view is loaded via a route, if it matters. Still learning Angular...).
I use these:
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.4.5/angular.min.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.5/angular-route.min.js"></script>
Snippet from the view:
<div ng-controller="releases">
  <article ng-repeat="album in albums">
    {{ album.artist }}
  </article>
</div>
Controller:
myApp.controller('releases', function($scope, $http) {
  $scope.albums = [];
  $http.get('contents/releases/*.json')
    .then(function(releases) {
      $scope.albums = releases.data;
      console.log($scope.albums);
    });
});
The JSON files are like this:
{
  "artist" : "Artist name",
  "album" : "Album title",
  "releaseDate" : "2015-09-16"
}
The error message is:
You don't have permission to access /mypage/angular/contents/releases/*.json on this server.
If I use an exact filename, for example $http.get('contents/releases/album.json'), I can access the data correctly. But naturally only for one JSON, instead of the 11 files I have.
On a previous site I built with PHP, I used an identical method and could access the same files with no problem. In both cases I'm using WAMP server (Apache 2) as the platform.
Could it still have something to do with the Apache config? The reason I don't think so is that it does work in PHP, like this:
// Get release data
$releasesDataLocation = 'contents/releases/*.json';
$releasesDataFiles = glob($releasesDataLocation);
rsort($releasesDataFiles); // rsort = newest release first, comment out to show oldest first

// Show the releases
foreach ($releasesDataFiles as $releaseData) {
    $release = new Release($releaseData);
    $release->display();
}
AFAIK, wildcards are not allowed in such URLs. Your PHP version works because glob() runs on the server, where it can expand the pattern against the filesystem; $http.get issues a plain HTTP request, and the web server will not expand the * for you. You should build a server-side endpoint that reads all the files in the directory, concatenates them, and returns the combined response.
For example, you could expose a GET URL: /api/contents/releases
whose server-side handler reads the directory containing all the release JSONs and returns them to you.
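If adding a server endpoint isn't an option, a client-side workaround is to keep a manifest of the file names and fan out one request per file. A sketch, where contents/releases/releases.json is a hypothetical manifest listing your 11 file names:
myApp.controller('releases', function($scope, $http, $q) {
  $scope.albums = [];
  // releases.json is a hypothetical manifest, e.g. ["a.json", "b.json", ...]
  $http.get('contents/releases/releases.json').then(function(res) {
    // issue one request per listed file and wait for all of them
    return $q.all(res.data.map(function(name) {
      return $http.get('contents/releases/' + name);
    }));
  }).then(function(responses) {
    $scope.albums = responses.map(function(r) { return r.data; });
  });
});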

Grab data from Yahoo Finance using Meteor Package, some work some do not

I am using the following package for Meteor https://atmospherejs.com/ajbarry/yahoo-finance
I can't seem to get a specific field to work. Here is a link that contains a list of all the available fields; however, 'j2' and some others I tested don't work, in the sense that there is no response in the result object, and no JSON key/value pairs.
Here is my client-side code.
Template.stock.rendered = function() {
  if ( _.isEmpty(Session.get('ENW.V')) ) {
    Meteor.call('getQuote', 'ENW.V', function(err, result) {
      Session.set('ENW.V', result['ENW.V']);
      console.log(result);
    });
  }
}
Template.stock.helpers({
  stock: function() {
    return Session.get('ENW.V');
  }
})
Server-side method:
Meteor.methods({
  getQuote: function( stockname ) {
    return YahooFinance.snapshot({ symbols: [stockname], fields: ['n', 'a', 'b', 'j2'] });
  }
});
Thanks in advance for any help. Happy to add any additional info if needed.
Did a test run after commenting out that line in the package source (see the link below) and it seems to work fine. Create an issue with the package owner to see if you can have it fixed for the long run.
The package you are using is deliberately excluding those fields. For what reason, I cannot say. For a full list of the fields it is avoiding, look here:
https://github.com/pilwon/node-yahoo-finance/blob/master/lib/index.js#L122
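If you need 'j2' regardless, one workaround (a sketch, assuming Meteor's http package is installed and that the Yahoo CSV quote endpoint the package wraps is still reachable) is to skip the package and request the fields yourself:
Meteor.methods({
  getQuoteRaw: function( stockname ) {
    // f=nabj2 asks for name, ask, bid and j2 (shares outstanding)
    var res = HTTP.get('http://download.finance.yahoo.com/d/quotes.csv', {
      params: { s: stockname, f: 'nabj2' }
    });
    return res.content; // one CSV line per symbol
  }
});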

Determining the location of a JSON parse error

I am creating a web application that allows a user to load data in JSON format. I am currently using the following function to read JSON files that I have saved on my local disk:
function retrieveJSON(url, callback)
{
  // this is needed because Firefox tries to parse files as XML
  $.ajaxSetup({ mimeType: "text/plain" });

  // send out an AJAX request and return the result
  $.getJSON(url, function(response) {
    console.log("Data acquired successfully");
    callback(response);
  }).error(function(jqXHR, textStatus, errorThrown) {
    console.log("Error...\n" + textStatus + "\n" + errorThrown);
  });
}
This works perfectly for well-formed JSON data. However, for malformed data, the console log displays the following:
Error...
parsererror
SyntaxError: JSON.parse: unexpected character
This is almost entirely unhelpful because it does not tell me what the unexpected character is or what line number it can be found on. I could use a JSON validator to correct the file on my local disk, but this is not an option when the page is loading files from remote URLs on the web.
How can I obtain the location of any error? I would like to obtain the token if possible, but I need to obtain the line number at minimum. There is a project requirement to display an excerpt of the JSON code to the user and highlight the line where any error occurred.
I am currently using jQuery, but jQuery is not a project requirement, so if another API or JSON parser provides this functionality, I could use that instead.
Yeah, life with deadlines is never easy :).
This might help you out: after a couple of hours of googling around, I found jsonlint on GitHub. It looks promising; it includes a shell script that could be used on the server side, and there is a browser JavaScript version of it that seems to be exactly what you were looking for.
Hope this helps.
I agree that life with deadlines is hard.
(I'm incredibly happy that I don't have to live with deadlines; I'm my own boss.)
So, in search of a better solution to this problem, I came up with the following:
...
readConfig : function () {
  jQuery.ajax({
    type : 'GET',
    url : 'config.json',
    success : function (data, ts, xhr) {
      var d = JSON.parse(data);
    },
    error : function (xhr, ajaxOptions, thrownError) {
      if (typeof thrownError.message == 'string') {
        // ./config.json contains invalid data.
        // V8-style SyntaxErrors embed "at position N" in the message;
        // extract the offset and highlight the offending character.
        var
          text = xhr.responseText,
          pos  = parseInt(thrownError.message.match(/position (\d+)/)[1], 10),
          html = text.substr(0, pos)
               + '<span style="color:red;font-weight:bold;">__' + text.substr(pos, 1) + '__</span>'
               + text.substr(pos + 1, text.length - pos - 1);
        cm.install.displayErrorMsg('Could not read ./config.json :(<br/>' + thrownError + '<br/>' + html);
      } else {
        cm.install.displayErrorMsg('Error retrieving ./config.json<br/>HTTP error code : ' + xhr.status);
      }
    }
  });
},
...
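One gap in the snippet above, relative to the question: it highlights the offending character but doesn't report the line number. Since the regex already yields a character offset, counting newlines up to that offset gets you the line (still assuming the engine puts "position N" in its message, which is not standardized):
function lineOfJSONError (text, err) {
  var m = /position (\d+)/.exec(err.message);
  if (!m) return null; // this engine didn't report an offset
  var pos = parseInt(m[1], 10);
  // line number = 1 + count of newlines before the offset
  return text.substr(0, pos).split('\n').length;
}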