I have two Firestore collections: crews/{crew}/clients and crews/{crew}/pros. When a new client registers and a new document is created, I want to search the pros collection for pros who work the matching sector and live within 5 km of the new client, and then send a notification to those filtered pros.
To implement that in Cloud Functions, I installed geofirestore via npm and saved crews/{crew}/pros like this:
https://i.stack.imgur.com/YwAFO.png
After executing the function, I get the following error in the Cloud Functions console:
Error: Registration token(s) provided to sendToDevice() must be a non-empty string or a non-empty array
Is there anything wrong with my firestore data structure? Thank you.
I found that this data structure was correct, because I could get notifications to work with it. I also tried another structure like this:
https://i.stack.imgur.com/vtB5X.jpg
However, it gave me another error.
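For context, here is a stripped-down sketch of the kind of trigger I'm trying to run (not my exact function; it assumes the geofirestore v4 API and that each document stores hypothetical sector, coordinates and fcmToken fields):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const { GeoFirestore } = require('geofirestore');

admin.initializeApp();

exports.notifyNearbyPros = functions.firestore
  .document('crews/{crew}/clients/{clientId}')
  .onCreate(async (snap, context) => {
    const client = snap.data();
    const geofirestore = new GeoFirestore(admin.firestore());
    const pros = geofirestore.collection(`crews/${context.params.crew}/pros`);

    // Pros within 5 km of the new client (client.coordinates is a GeoPoint).
    const nearby = await pros
      .near({ center: client.coordinates, radius: 5 })
      .get();

    // Keep only pros in the matching sector that actually have a token;
    // an empty tokens array is exactly what makes sendToDevice() throw.
    const tokens = nearby.docs
      .map((doc) => doc.data())
      .filter((pro) => pro.sector === client.sector && pro.fcmToken)
      .map((pro) => pro.fcmToken);

    if (tokens.length === 0) return null;

    return admin.messaging().sendToDevice(tokens, {
      notification: { title: 'A new client registered in your sector' },
    });
  });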
There is no proper documentation for AngularFire2 v5, so I don't know where I am supposed to learn from; the docs here only cover Firestore, and I don't use Firestore, I use the Firebase Realtime Database: https://github.com/angular/angularfire2/tree/master/docs
I'm pretty new to it, and I'm looking for how to store data in a collection that would look something like this:
nepal-project-cffb1:
  users:
    userid:
      email: "lajmis@mail.com"
      score: 1
My function now looks like this:
this._db.list("users/" + "id").set({
  email: this.user,
  score: this.score
})
but it doesn't work; I get an "Expected 2 arguments, but got 1." error. There are a bunch of syntaxes, like .ref and .list, and I don't know which one to use.
There is an entire section of the AngularFire guide dedicated to RTDB, so I'm confused by the statement that there is no real documentation. Typically you would use valueChanges() to create a list observer as described here for reading, and set() is covered here.
So as that doc illustrates, you need to pass the key of the record to be modified, and also the data to be changed when you call it.
this._db.list("users").set("id", {...});
But if you aren't monitoring the collection, you can also just use db.object directly:
this._db.object("users/id").set({...});
One aspect that may not be immediately intuitive with AngularFire is that it's mainly an adapter pattern intended to take away some of the boilerplate of ferrying data between your Angular model and your Firebase servers.
If you aren't syncing data (downloading a local copy) then there's no reason to use AngularFire here. You can simply call the Firebase SDK directly and avoid the overhead of subscribing to remote endpoints:
firebase.database().ref("users/id").set({...});
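With the fields from the question, either approach boils down to something like this inside the component (just a sketch; saveUser and the userId argument are hypothetical names):

saveUser(userId) {
  // Writes one record at users/<userId> with the fields from the question.
  // `this._db` is the injected AngularFireDatabase instance.
  return this._db.object("users/" + userId).set({
    email: this.user,
    score: this.score
  });
}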
I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the number of calls I expect to see for getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints within a single getData() call (they should all be the same).
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the request is for semantic type detection, it will feature sampleExtraction: true:
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
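If that's what is happening, one option is to short-circuit those probing requests inside getData so they don't trigger a full API pull. A rough sketch (getSchemaForFields and buildSampleRows are hypothetical helpers you would implement against your own schema):

function getData(request) {
  // The docs say the request carries a sampleExtraction flag for semantic
  // detection; depending on the version it appears directly on the request
  // or under scriptParams.
  var isSampleExtraction =
    request.sampleExtraction ||
    (request.scriptParams && request.scriptParams.sampleExtraction);

  if (isSampleExtraction) {
    // Semantic detection only needs a few representative rows,
    // so skip the real API call entirely.
    return {
      schema: getSchemaForFields(request.fields),
      rows: buildSampleRows(request.fields)
    };
  }

  // ...normal API fetch and row building for real report queries...
}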
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom connector retrieves data via API calls to third-party services, and that data is agnostic to the request.fields property sent by GDS, then those API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using the cache.
The graph's request to getData (typically requesting more fields than the search filters) is the only one allowed to query the API endpoint. Before starting to do so, it stores a key in the cache: "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It retrieves the API responses, paginating in a loop, and buffers the results.
Once it has finished, it deletes the cache key "cache_{hashOfReportParameters}_building" and caches the final merged results it buffered inside "cache_{hashOfReportParameters}_final".
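Roughly, that finalize step looks like this (same cache wrapper as above; bufferedRows stands for the merged API results, and the hash is computed as described below):

if (enableCache) {
  // Publish the merged result, then release the "building" lock so that
  // waiting filter/widget requests can read it.
  cache.putString(
    "cache_" + hashOfReportParameters + "_final",
    JSON.stringify(bufferedRows)
  );
  // Assumes the wrapper exposes remove() like the underlying CacheService.
  cache.remove("cache_" + hashOfReportParameters + "_building");
  Logger.log("Cache built and released.");
}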
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for requests that look like the search filters / widgets going after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log("This seems to be a search filter.");
    Utilities.sleep(1000);
  }
}
After that we compute a hash on all of the moving parts of the report (date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints):
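In Apps Script that can be as simple as hashing a JSON snapshot of those parameters (a sketch; adjust the object to whatever actually influences your data):

function computeReportHash(request) {
  // Only the parts that can change the API response should go into the hash.
  var movingParts = JSON.stringify({
    dateRange: request.dateRange,
    configParams: request.configParams
  });
  var bytes = Utilities.computeDigest(Utilities.DigestAlgorithm.MD5, movingParts);
  // Convert the signed byte array into a hex string usable inside cache keys.
  return bytes
    .map(function (b) {
      var hex = (b < 0 ? b + 256 : b).toString(16);
      return hex.length === 1 ? '0' + hex : hex;
    })
    .join('');
}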
Now the best part: as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop, we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final". In case that fails, it's always a good idea to have a backup plan, which is to allow the request to traverse the API again; we have encountered roughly a 2% error rate retrieving data we cached.
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
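The transformation itself is the easy part once you have the buffered rows; schematically (assuming the API field names match your schema field names):

function buildRows(apiRows, requestedFieldNames) {
  // GDS expects rows as [{ values: [...] }], ordered like the requested fields.
  return apiRows.map(function (apiRow) {
    return {
      values: requestedFieldNames.map(function (name) {
        return apiRow[name];
      })
    };
  });
}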
As you start implementing this, you'll notice yet another problem: the Apps Script cache is limited to roughly 100 KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and came up with a smart solution: splitting the big chunk you need cached across multiple cache keys, and gluing them back together into one object when retrieving it.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
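The core idea of that gist, boiled down (plain CacheService calls; putLarge / getLarge are just illustrative names):

var MAX_CHUNK = 90 * 1024; // stay safely under the ~100 KB per-value limit

function putLarge(cache, key, value, ttlSeconds) {
  // Split one large string across several keys, plus a key holding the count.
  var chunks = Math.ceil(value.length / MAX_CHUNK);
  cache.put(key + "_chunks", String(chunks), ttlSeconds);
  for (var i = 0; i < chunks; i++) {
    cache.put(key + "_" + i, value.substr(i * MAX_CHUNK, MAX_CHUNK), ttlSeconds);
  }
}

function getLarge(cache, key) {
  // Reassemble the chunks; if any piece expired, treat it as a full miss.
  var chunks = Number(cache.get(key + "_chunks"));
  if (!chunks) return null;
  var parts = [];
  for (var i = 0; i < chunks; i++) {
    var part = cache.get(key + "_" + i);
    if (part === null) return null;
    parts.push(part);
  }
  return parts.join("");
}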
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general to avoid round trips and server load for no good reason if near-realtime is good enough for your needs.
I'm building an application that stores files in the FIWARE Object Storage. I don't quite understand the correct way of storing files in the storage.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows 2 ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The 1st version confuses me, because when I first looked at the only Node.js module that works with the Object Storage (repo: fiware-object-storage), it seemed to use the 1st version. Since the module was making calls to the old (v1.1) API version instead of the presumably newest (v2.0) referenced by the Python example, I'm not sure whether that is an outdated way of doing it or not.
As I played more with the module, I realised it didn't work and its code was a total mess, so I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help@lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Should I worry about the auth token expiring? I presume it isn't necessary to authenticate anew every time we interact with the storage; authentication should happen once when the server starts up (we create an instance) and the instance keeps the token internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal, and later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section describes how to get a valid token assuming an identity management system compatible with OpenStack Keystone is being used. If the username, password and tenant details are known, only step 3 is required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1. It is commented out in the example. It should be removed from the documentation.
(1) The token will be valid for some period of time. This could be an hour or a day, depending on the setup. This period of time should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed.
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
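Regarding (1), the refresh logic in a Node library can stay very small. A sketch, assuming a Keystone v2-compatible /v2.0/tokens endpoint as described in the guide (node-fetch is just one possible HTTP client):

const fetch = require('node-fetch');

let cachedToken = null; // { id, expires }

async function getToken(keystoneUrl, username, password, tenantId) {
  // Re-authenticate only when the stored token is about to expire.
  const soon = Date.now() + 60 * 1000;
  if (cachedToken && new Date(cachedToken.expires).getTime() > soon) {
    return cachedToken.id;
  }
  const res = await fetch(keystoneUrl + '/v2.0/tokens', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      auth: { passwordCredentials: { username, password }, tenantId }
    })
  });
  const data = await res.json();
  cachedToken = {
    id: data.access.token.id,
    expires: data.access.token.expires
  };
  return cachedToken.id;
}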
I solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
Hi, how can I get information such as the number of class files that will be executed by a particular test class from the SonarQube database? My SonarQube database resides in MySQL. I'm not finding any answers; can anyone help with this problem?
The short answer is: it is not recommended to access the SonarQube DB to get information, so forget about directly manipulating SQ's database.
A longer answer might be: have a look at SonarQube's web service API, especially these two:
http://nemo.sonarqube.org/api_documentation/api/tests/list
http://nemo.sonarqube.org/api_documentation/api/tests/covered_files
The first one should allow you to retrieve all test IDs; you can then pass the ID you're looking for to the second web service and check the size of the files array. But I don't think this will be easy, as it isn't straightforward to get the testFileId you need to feed to the first web service (you can't pass a file's key, as far as I know).
I've successfully installed and run the Google Drive Quick Start application called DriveCommandLine. I've also adapted it a little to GET file info for one of the files in my Drive account.
What I would like to do now is save the credentials somehow and re-use them without the user having to visit a web page each time to get an authorization code. I have checked out this page with instructions to Retrieve and Use OAuth 2.0 credentials. In order to use the example class (MyClass), I have modified the line in DriveCommandLine where the Credential object is instantiated:
Credential credential = MyClass.getCredentials(code, "");
This results in the following exception being thrown:
java.lang.NullPointerException
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
at com.google.api.client.json.jackson.JacksonFactory.createJsonParser(JacksonFactory.java:84)
at com.google.api.client.json.JsonFactory.fromInputStream(JsonFactory.java:247)
at com.google.api.client.googleapis.auth.oauth2.GoogleClientSecrets.load(GoogleClientSecrets.java:168)
at googledrive.MyClass.getFlow(MyClass.java:145)
at googledrive.MyClass.exchangeCode(MyClass.java:166)
at googledrive.MyClass.getCredentials(MyClass.java:239)
at googledrive.DriveCommandLine.<init>(DriveCommandLine.java:56)
at googledrive.DriveCommandLine.main(DriveCommandLine.java:115)
I've been looking at these APIs (Google Drive and OAuth) for 2 days now and have made very little progress. I'd really appreciate some help with the above error and the problem of getting persistent credentials in general.
This whole structure seems unnecessarily complicated to me. Anybody care to explain why I can't just create a simple Credential object by passing in my Google username and password?
Thanks,
Brian O Carroll, Dublin, Ireland
* Update *
Ok, I've just gotten around the above error and now I have a new one.
The way I got around the first problem was by modifying MyClass.getFlow(). Instead of creating a GoogleClientSecrets object from a JSON file, I used a different version of GoogleAuthorizationCodeFlow.Builder that allows you to enter the client ID and client secret directly as Strings:
flow = new GoogleAuthorizationCodeFlow.Builder(
        httpTransport, jsonFactory, "<MY CLIENT ID>", "<MY CLIENT SECRET>", SCOPES)
    .setAccessType("offline")
    .setApprovalPrompt("force")
    .build();
The problem I have now is that I get the following error when I try to use flow (GoogleAuthorizationCodeFlow object) to exchange the authorization code for the Credentials object:
An error occurred: com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "invalid_scope"
}
googledrive.MyClass$CodeExchangeException
at googledrive.MyClass.exchangeCode(MyClass.java:185)
at googledrive.MyClass.getCredentials(MyClass.java:262)
at googledrive.DriveCommandLine.<init>(DriveCommandLine.java:56)
at googledrive.DriveCommandLine.main(DriveCommandLine.java:115)
Is there some other scope I should be using for this? I am currently using the array of scopes provided with MyClass:
private static final List<String> SCOPES = Arrays.asList(
    "https://www.googleapis.com/auth/drive.file",
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile");
Thanks!
I feel your pain. I'm two months in and still getting surprised.
Some of my learnings...
When you request user permissions, specify access_type=offline (setAccessType("offline") in the Java client). This will ("sometimes" sic) return a refresh token, which is as good as a password with restricted permissions. You can store this and reuse it at any time (until the user revokes it) to fetch an access token.
My feeling is that the Google SDKs are more of a hindrance than a help. One by one, I've stopped using them and now call the REST API directly.
On your last point, you can (just) use the Google ClientLogin protocol to access the previous generation of APIs. However, it is totally deprecated and will shortly be turned off. OAuth is designed to give fine-grained control of authorisation, which is intrinsically complex. So although I agree it's complicated, I don't think it's unnecessarily so. We live in a complicated world :-)
Your experience and mine show that the development community is still in need of a consolidated document and recipes to get this stuff into our rear-view mirrors so we can focus on the task at hand.
Oauth2Scopes is imported as follows:
import com.google.api.services.oauth2.Oauth2Scopes;
You need to have the jar file 'google-api-services-oauth2-v2-rev15-1.8.0-beta.jar' in your class path to access that package. It can be downloaded here.
No, I don't know how to get Credentials without having to visit the authorization URL at least once and copy the code. I've modified MyClass to store and retrieve credentials from a database (in my case, a simple table that contains userid, accesstoken and refreshtoken). This way I only have to get the authorization code once; once I have the access/refresh tokens, I can reuse them to make a GoogleCredential object. Here's how I make the GoogleCredential object:
GoogleCredential credential = new GoogleCredential.Builder().setJsonFactory(jsonFactory)
.setTransport(httpTransport).setClientSecrets(clientid, clientsecret).build();
credential.setAccessToken(accessToken);
credential.setRefreshToken(refreshToken);
Just enter your clientid, clientsecret, accessToken and refreshToken above.
I don't really have a whole lot of time to separate and tidy up my entire code to post it up here but if you're still having problems, let me know and I'll see what I can do. Although, you are effectively asking a blind man for directions. My understanding of this whole system is very sketchy!
Cheers,
Brian
Ok, I've finally solved the second problem above and I'm finally getting a working GoogleCredential object with an access token and a refresh token.
I kept trying to solve the scopes problem by modifying the list of scopes in MyClass (the one that manages credentials). In the end I needed to adjust the scopes in my modified version of DriveCommandLine (the one that's originally used to get an authorization code). I added 2 scopes from Oauth2Scopes:
GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(
httpTransport, jsonFactory, CLIENT_ID, CLIENT_SECRET,
Arrays.asList(DriveScopes.DRIVE, Oauth2Scopes.USERINFO_EMAIL, Oauth2Scopes.USERINFO_PROFILE))
.setAccessType("offline").setApprovalPrompt("force").build();
Adding the scopes for user information allowed me to get the userid later in MyClass. I can now use the userid to store the credentials in a database for re-use (without having to get the user to go to a URL each time). I also set the access type to "offline" as suggested by pinoyyid.