Can I put a map in my HTTP API for extensibility?

My HTTP API uses JSON to pass parameters; it looks like:
{
  param1: xxx
  param2: xxx
  param3: xxx
}
However, my system is a plugin system: each plugin needs its own parameters in the JSON body, and all plugins cooperate with each other to produce the final result.
For example, let's say the API is
CreateACar {
  name: xxx
  description: xxx
  model: xxx
}
The base API has three fields of basic metadata, and the system has plugins like:
CarColorPlugin needs parameters:
{
  doorColor: xxx
  roofColor: xxx
  decoratorColor: xxx
}
TirePlugin needs parameters:
{
  tireSize: xxx
  tireBrand: xxx
}
WindShieldPlugin needs parameters:
{
  brand: xxx
  needRearWindShield: true or false
}
You can imagine many more plugins like these. Now the problem is that all plugins need the CreateACar API to carry their information, and later a new plugin may join the system, so CreateACar must be extensible for future needs.
Now I am considering putting a map in the JSON body and passing the CreateACar request to all plugins so they can fetch their parameters themselves.
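For illustration, such a map-based body might look like this (the field values and plugin key names below are made up; each plugin owns one key in the plugins map):

```json
{
  "name": "my-car",
  "description": "a demo car",
  "model": "2024",
  "plugins": {
    "CarColorPlugin": {
      "doorColor": "red",
      "roofColor": "black",
      "decoratorColor": "white"
    },
    "TirePlugin": {
      "tireSize": "18",
      "tireBrand": "acme"
    }
  }
}
```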
However, this design looks a little ugly to me. From my research, projects with beautiful APIs usually have a limited business domain, while projects with a broad, unanticipated business domain usually use an extensible data structure such as XML in the API body. However, all such APIs I have seen so far are a mess, especially those without good documentation.

Yes - presumably many of your requests will ask for one car, not all of them, though.

Here's my overall design suggestion:
Use the Java SPI system to handle your plugins. Define a plugin interface that includes a method to identify the plugin's key ("color", "wipers", and so on) and one that takes a Map and returns a plugin data object. Collect all of the implementations in a Map<String,Plugin>.
Write a getPlugins() method on your Car class that does not have a backing field but instead collates the information from all of the plugins applied to the Car in a nested Map.
Write a setPlugins() method that takes the nested map, iterates over the keys, looks up the appropriate factory plugin by name, and rehydrates the plugin data object from the JSON object data.
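The answer above proposes Java SPI; as a language-neutral sketch of the same collate/rehydrate pattern, here is a minimal version in JavaScript (all names are invented for illustration):

```javascript
// Sketch of the plugin-map pattern described above; names are hypothetical.
// Each plugin knows its key and how to rehydrate its data from a plain object.
const plugins = new Map([
  ["color", { fromMap: (m) => ({ doorColor: m.doorColor, roofColor: m.roofColor }) }],
  ["tire",  { fromMap: (m) => ({ tireSize: m.tireSize, tireBrand: m.tireBrand }) }],
]);

class Car {
  constructor() {
    this.pluginData = new Map();
  }
  // Collate all applied plugin data into one nested plain object (for JSON).
  getPlugins() {
    return Object.fromEntries(this.pluginData);
  }
  // Rehydrate plugin data objects from the nested map in the request body.
  // Unknown keys are ignored, so old servers tolerate new plugins' data.
  setPlugins(nested) {
    for (const [key, raw] of Object.entries(nested)) {
      const plugin = plugins.get(key);
      if (plugin) this.pluginData.set(key, plugin.fromMap(raw));
    }
  }
}
```

The point of the design is that a new plugin only registers itself in the map; the CreateACar schema itself never changes.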


Best (standard) practice to send multiple parameters to a REST GET resource - Spring
What's the best practice for sending multiple parameters on a REST GET resource call? Normally we can make a GET call with path params and/or query params. What is the best practice for the second and third examples below?
Ex: 1. /user/{username}/{location} - it's a standard way
2. /user/{username}/{location}/{mobile_number} - is it a standard way?
3. /user/{username}/{location}/{mobile_number}/{age} - is it a standard way?
In terms of URLs, params tend to be more like RPC where you're invoking a function, e.g.:
/user?username=123
which is more like a traditional RPC call:
Object user = GetUsername(123);
REST represents the state of a resource, which can be made up of many "states". Start with core information such as personal info, where 123 is the username:
/user/123 -> {"name":"Joe Bloggs"}
some aspects of the state of the user could change over time:
/user/123/location -> {"username":"123",lat:123456,lon:54568}
Aspects of the user that are unlikely to change rapidly could be included in the core info /user/123; or, if they're unlikely to be needed by a client, they can be requested separately:
/user/123/mobile -> {"username":"123",mobile:"345345"}
A User has a location and a User has a mobile and a User has an age. A location doesn't have a mobile or age so those aspects would never come under /user/location. They would come under the URL that represents the object that does have them:
/user/123/age -> {"username":"123", "age":100}
A location can have an accuracy which can either be requested separately:
/user/123/location/accuracy -> {"accuracy":"-1"}
or more likely included in the response to /user/123/location.
So the REST structure in this case mirrors the object hierarchy: the "has-a" relationships.
The REST structure could also mirror the business structure:
/user/account
/user/contactinfo
/user/location
It just depends on how you want to expose the data that represents a User and their states.

How do you cache a route with a wildcard in a PWA service worker?

I'm trying to convert my application to a PWA and I understand that I need to provide a complete list of endpoints and static files to the service worker, so it can manage the caching. In all the examples I'm finding, the pages are static links like /report. But in most real world applications, the pages contain dynamic parts, like /report/{reportId:int}. How do you tell the service worker about such an endpoint?
You can use the JavaScript RegExp test() method to check whether the URL matches a regular expression that fits your route.
You can read about it on this official guide:
https://developers.google.com/web/fundamentals/instant-and-offline/offline-cookbook#putting_it_together
extract:
if (/^\/article\//.test(requestURL.pathname)) {
  event.respondWith(/* some other combination of patterns */);
  return;
}
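The cookbook extract hard-codes one regex. For routes with dynamic parts like /report/{reportId}, one approach (a sketch; the helper name is invented, and the matching logic is shown as a plain function so it can run outside a worker) is to build the RegExp from the route pattern:

```javascript
// Hypothetical helper: convert a route pattern with ":param" segments
// into an anchored RegExp. Inside a service worker you would call
// reportRoute.test(requestURL.pathname) within the 'fetch' event handler.
function routeToRegExp(pattern) {
  // Escape regex metacharacters, then turn ":param" segments into
  // "match anything except a slash" groups.
  const escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const source = "^" + escaped.replace(/:[^/]+/g, "[^/]+") + "$";
  return new RegExp(source);
}

const reportRoute = routeToRegExp("/report/:reportId");
```

In the worker, the handler body then mirrors the cookbook extract: if the pattern matches, respond from the cache; otherwise fall through to the network.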

How to create JSON tree and nested nodes in Firebase using AngularFire2

There is no proper documentation for AngularFire2 v5, so I don't know where I am supposed to learn from; the docs here only cover Firestore. I don't use Firestore, I use the Firebase Realtime Database. https://github.com/angular/angularfire2/tree/master/docs
However, I'm pretty new to it, and I'm looking at how I can store data in a collection that would look something like this:
nepal-project-cffb1:
  users:
    userid:
      email: "lajmis#mail.com"
      score: 1
My function now looks like this:
this._db.list("users/" + "id").set({
  email: this.user,
  score: this.score
})
but it doesn't work; I get an "Expected 2 arguments, but got 1." error. There are a bunch of syntaxes, like .ref and .list, and I don't know which one to use.
There is an entire section of the AngularFire guide dedicated to RTDB, so I'm confused by the statement that there is no real documentation. Typically you would use valueChanges() to create a list observer as described here for reading, and set() is covered here.
So as that doc illustrates, you need to pass the key of the record to be modified, and also the data to be changed when you call it.
this._db.list("users").set("id", {...});
But if you aren't monitoring the collection, you can also just use db.object directly:
this._db.object("users/id").set({...});
One aspect that may not be immediately intuitive with AngularFire is that it's mainly an adapter pattern intended to take away some of the boilerplate of ferrying data between your Angular model and your Firebase servers.
If you aren't syncing data (downloading a local copy) then there's no reason to use AngularFire here. You can simply call the Firebase SDK directly and avoid the overhead of subscribing to remote endpoints:
firebase.database().ref("users/id").set({...});

Data Studio connector making multiple calls to API when it should only be making 1

I'm finalizing a Data Studio connector and noticing some odd behavior with the number of API calls.
Where I'm expecting to see a single API call, I'm seeing multiple calls.
In my Apps Script I'm keeping a simple tally which increments by 1 on every URL fetch, and that gives me the correct number I expect to see from getData().
However, in my API monitoring logs (using Runscope) I'm seeing multiple API requests for the same endpoint, and varying numbers for different endpoints in a single getData() call (they should all be the same). E.g.
I can't post the code here (client project) but it's substantially the same framework as the Data Connector code on Google's docs. I have caching and backoff implemented.
Looking for any ideas or if anyone has experienced something similar?
Thanks
Per this reference, GDS will also perform semantic type detection if you aren't explicitly defining this property for your fields. If the request is for semantic type detection, it will feature sampleExtraction: true.
When Data Studio executes the getData function of a community connector for the purpose of semantic detection, the incoming request will contain a sampleExtraction property which will be set to true.
If the GDS report includes multiple widgets with different dimensions/metrics configuration then GDS might fire multiple getData calls for each of them.
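Based on the reference quoted above, one way to avoid paying for a real API call during semantic type detection is to short-circuit getData when the flag is set. A sketch (function and row shapes are invented; per the GDS docs the flag lives under request.scriptParams):

```javascript
// Sketch: short-circuit getData during semantic type detection.
// Semantic-detection requests carry scriptParams.sampleExtraction === true,
// so a tiny hand-built sample is enough and no remote fetch is needed.
function getRows(request, fetchAllRowsFromApi) {
  const isSample = !!(request.scriptParams && request.scriptParams.sampleExtraction);
  if (isSample) {
    // One representative row, matching your schema's field order.
    return [{ values: ["sample", 0] }];
  }
  // Normal report request: do the (expensive) API traversal.
  return fetchAllRowsFromApi();
}
```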
Kind of a late answer but this might help others who are facing the same problem.
The widgets / search filters attached to a graph issue getData calls of their own. If your custom adapter is built to retrieve data via API calls to third-party services, data that is agnostic of the request.fields property sent by GDS, then these API calls are multiplied by N+1 (where N = the number of widgets / search filters your report implements).
I could not find an official solution for this either, so I invented a workaround using cache.
The graph's request for getData (typically requesting more fields than the Search Filters) will be the only one allowed to query the API Endpoint. Before starting to do so it will store a key in the cache "cache_{hashOfReportParameters}_building" => true.
if (enableCache) {
  cache.putString("cache_{hashOfReportParameters}_building", 'true');
  Logger.log("Cache is being built...");
}
It will retrieve API responses, paginating in a loop, and buffer the results.
Once it finishes, it will delete the cache key "cache_{hashOfReportParameters}_building" and cache the final merged results it buffered so far under "cache_{hashOfReportParameters}_final".
When it comes to filters, they also invoke getData, but typically with only up to 3 requested fields. The first thing we want to do is make sure they cannot start executing before the primary getData call, so we add a small delay for requests that look like search filters / widgets after the same data set:
if (enableCache) {
  var countRequestedFields = requestedFields.asArray().length;
  Logger.log("Total requested fields: " + countRequestedFields);
  if (countRequestedFields <= 3) {
    Logger.log('This seems to be a search filter.');
    Utilities.sleep(1000);
  }
}
After that we compute a hash over all of the moving parts of the report (date range, plus all of the other parameters you have set up that could influence the data retrieved from your API endpoints).
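The hash itself isn't shown in the answer; one way to build it (an assumption on my part: djb2 over a stable JSON serialization, assuming a flat params object) is:

```javascript
// Sketch: build a deterministic cache key from the report's "moving parts".
// Sorting the keys makes the serialization stable regardless of insertion order.
// Assumes a flat params object (the replacer array only keeps listed keys).
function reportCacheKey(params) {
  const stable = JSON.stringify(params, Object.keys(params).sort());
  // djb2 string hash; any stable hash function works here.
  let hash = 5381;
  for (let i = 0; i < stable.length; i++) {
    hash = ((hash * 33) ^ stable.charCodeAt(i)) >>> 0;
  }
  return "cache_" + hash.toString(16);
}
```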
Now the best part, as long as the main graph is still building the cache, we make these getData calls wait:
while (cache.getString('cache_{hashOfReportParameters}_building') === 'true') {
  Logger.log('A similar request is already executing, please wait...');
  Utilities.sleep(2000);
}
After this loop we attempt to retrieve the contents of "cache_{hashOfReportParameters}_final". In case we fail, it's always a good idea to have a backup plan, which is to allow it to traverse the API again. We have encountered a ~2% error rate retrieving data we cached.
With the cached result (or buffered API responses), you just transform your response as per the schema GDS needs (which differs between graphs and filters).
As you start implementing this, you'll notice yet another problem: the Google cache is limited to a maximum of 100KB per key. There is, however, no limit on the number of keys you can cache, and fortunately others have encountered similar needs in the past and have come up with a smart solution: splitting the one big chunk you need cached into multiple cache keys, and gluing them back together into one object on retrieval.
See: https://github.com/lwbuck01/GASs/blob/b5885e34335d531e00f8d45be4205980d91d976a/EnhancedCacheService/EnhancedCache.gs
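The linked EnhancedCache follows that split/glue idea; a minimal standalone sketch of it (function names invented; a plain object stands in for CacheService so the sketch runs anywhere):

```javascript
// Sketch: store a string larger than the per-key cap across multiple keys.
// Apps Script's CacheService caps each value at ~100KB; a plain object
// stands in for the real cache here.
const CHUNK = 100 * 1024;

function putLarge(cache, key, value) {
  const count = Math.ceil(value.length / CHUNK) || 1;
  for (let i = 0; i < count; i++) {
    cache[key + "_" + i] = value.slice(i * CHUNK, (i + 1) * CHUNK);
  }
  // Record how many chunks to reassemble on the way back out.
  cache[key + "_count"] = String(count);
}

function getLarge(cache, key) {
  const count = Number(cache[key + "_count"]);
  if (!count) return null;
  let out = "";
  for (let i = 0; i < count; i++) out += cache[key + "_" + i];
  return out;
}
```

With the real CacheService you would swap the object accesses for cache.putString / cache.getString calls and mind the expiry times.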
I cannot share the final solution we have implemented with you as it is too specific to a client - but I hope that this will at least give you a good idea on how to approach the problem.
Caching the full API result is a good idea in general: it avoids needless round trips and server load, provided near-realtime data is good enough for your needs.

How do I use FIDO U2F to allow users to authenticate with my website?

With all the recent buzz around the FIDO U2F specification, I would like to implement FIDO U2F test-wise on a testbed to be ready for the forthcoming roll out of the final specification.
So far, I have a FIDO U2F security key produced by Yubico and the FIDO U2F (Universal 2nd Factor) extension installed in Chrome. I have also managed to set up the security key to work with my Google log-in.
Now, I'm not sure how to make use of this stuff for my own site. I have looked through Google's Github page for the U2F project and I have checked their web app front-end. It looks really simple (JavaScript only).
So is implementing second factor auth with FIDO as simple as implementing a few JavaScript calls? All that seems to be happening for the registration in the example is this:
var registerRequest = {
  appId: enrollData.appId,
  challenge: enrollData.challenge,
  version: enrollData.version
};
u2f.register([registerRequest], [], function (result) {
  if (result.errorCode) {
    document.getElementById('status')
      .innerHTML = "Failed. Error code: " + result.errorCode;
    return;
  }
  document.location = "/enrollFinish"
    + "?browserData=" + result.clientData
    + "&enrollData=" + result.registrationData
    + "&challenge=" + enrollData.challenge
    + "&sessionId=" + enrollData.sessionId;
});
But how can I use that for an implementation myself? Will I be able to use the callback from this method call for the user registration?
What you are trying to do is implement a so-called "relying party", meaning that your web service will rely on the identity assertion provided by the FIDO U2F token.
You will need to understand the U2F specifications to do that. Especially how the challenge-response paradigm is to be implemented and how app ids and facets work. This is described in the spec in detail.
You are right: the actual code necessary to work with FIDO U2F from the front end of your application is almost trivial (that is, if you use the "high-level" JavaScript API as opposed to the "low-level" MessagePort API). Your application will, however, need to work with the messages generated by the token and validate them. This is not trivial.
To illustrate how you could pursue implementing a relying party site, I will give a few code examples, taken from a Virtual FIDO U2F Token Extension that I have programmed lately for academic reasons. You can see the page for the full example code.
Before your users can use their FIDO U2F tokens to authenticate, they need to register it with you.
In order to allow them to do so, you need to call window.u2f.register in their browser, providing a few parameters (again, read the spec for details).
Among them are a challenge and the ID of your app. For a web app, this ID must be the web origin of the web page triggering the FIDO operation. Let's assume it is example.org:
window.u2f.register([
  {
    version : "U2F_V2",
    challenge : "YXJlIHlvdSBib3JlZD8gOy0p",
    appId : "http://example.org",
    sessionId : "26"
  }
], [], function (data) {
});
Once the user performs a "user presence test" (e.g. by touching the token), you will receive a response, which is a JSON object (see spec for more details)
dictionary RegisterResponse {
  DOMString registrationData;
  DOMString clientData;
};
This data contains several elements that your application needs to work with.
The public key of the generated key pair -- You need to store this for future authentication use.
The key handle of the generated key pair -- You also need to store this for future use.
The certificate -- You need to check whether you trust this certificate and the CA.
The signature -- You need to check whether the signature is valid (i.e. confirms to the key stored with the certificate) and whether the data signed is the data expected.
I have lately prepared a rough implementation draft for the relying party server in Java that shows how to extract and validate this information.
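One of those validation steps, checking that the signed clientData echoes your challenge and origin, can be sketched server-side in a few lines of Node.js (a complement to the Java draft; the cryptographic signature check over the registration data is the hard part and is deliberately not shown):

```javascript
// Sketch: validate the clientData half of a U2F RegisterResponse.
// clientData is websafe-base64-encoded JSON. Checking it does NOT replace
// verifying the signature and attestation certificate.
function checkClientData(clientDataB64, expectedChallenge, expectedOrigin) {
  const json = Buffer.from(clientDataB64, "base64url").toString("utf8");
  const clientData = JSON.parse(json);
  return (
    // "navigator.id.finishEnrollment" marks a registration response.
    clientData.typ === "navigator.id.finishEnrollment" &&
    clientData.challenge === expectedChallenge &&
    clientData.origin === expectedOrigin
  );
}
```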
Once the registration is complete and you have somehow stored the details of the generated key, you can sign requests.
As you said, this can be initiated short and sweet through the high-level JavaScript API:
window.u2f.sign([{
  version : "U2F_V2",
  challenge : "c3RpbGwgYm9yZWQ/IQ",
  appId : "http://example.org",
  sessionId : "42",
  keyHandle : "ZHVtbXlfa2V5X2hhbmRsZQ"
}], function (data) {
});
Here, you need to provide the key handle you have obtained during registration.
Once again, after the user performs a "user presence test" (e.g. by touching the token), you will receive a response, which is a JSON object (again, see spec for more details)
dictionary SignResponse {
  DOMString keyHandle;
  DOMString signatureData;
  DOMString clientData;
};
You then need to validate the signature data contained herein.
You need to make sure that the signature matches the public key you have obtained before.
You also need to validate that the string signed is appropriate.
Once you have performed these validations, you can consider the user authenticated. A brief example implementation of the server side code for that is also contained in my server example.
I have recently written instructions for this, as well as a list of all U2F server libraries (most of which bundle a fully working demo server), at developers.yubico.com/U2F. The goal is to enable developers to implement/integrate U2F without having to read the specifications.
Disclaimer: I work as a developer at Yubico.