U2F with multi-facet App ID - fido-u2f

We have been directly using U2F on our auth web app with the hostname as our app ID (https://auth.company.com) and that's working fine. However, we'd like to be able to authenticate with the auth server from other apps (and hostnames, e.g. https://customer.app.com) that communicate with the auth server via HTTP API.
I can generate the sign requests and what-not through API calls and return them to the client apps, but it fails server-side (auth server) because the app ID doesn't validate (clients are using their own hostnames as app ID). This is understandable, but how should I handle this? I've read about facets but I cannot get it to work at all.
The client app JS is like:
var registerRequests = // ...
var signRequests = // ...

u2f.register('http://localhost:3000/facets', registerRequests, signRequests, function (registerResponse) {
  if (registerResponse.errorCode) {
    return alert("Registration error: " + registerResponse.errorCode);
  }
  // etc.
});
This gives me an error code 5 (timeout) after a while. I don't see any request to /facets. Is there a way around this, or am I barking up the wrong tree (or a different forest)?
————
Okay, so after a few hours of researching this, I'm pretty sure this fiendish bit of the Firefox U2F plugin is the source of some of my woes:
if (u.scheme == "http")
  if (url2str(u, true) == url2str(ou, true))
    return resolve(challenge);
  else
    return reject("Not matching appID");
https://github.com/prefiks/u2f4moz/blob/master/ext/appIdValidator.js#L106-L110
It's essentially saying: if the appID's scheme is http, only allow it if it exactly matches the page's origin (it goes on to fetch the trusted facets JSON, but only for https appIDs).
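For reference, a trusted facet list served from an https appID URL would look roughly like this (per the FIDO AppID and Facet specification; the hostnames are this question's, used for illustration):

{
  "trustedFacets": [{
    "version": { "major": 1, "minor": 0 },
    "ids": [
      "https://auth.company.com",
      "https://customer.app.com"
    ]
  }]
}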
Still not sure if I'm on the right track though in how I'm trying to design this.

I didn't need to worry about facets for my particular situation. In the end I just pass the client app hostname through to the Auth server via the secure API interface and it uses that as the App ID. Seems to work okay so far.
The issue I was having with facets was due to using http in dev; the Firefox U2F plugin only fetches the trusted facets JSON over https.
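A rough sketch of the hostname-passing design described above (Express-style; the endpoint path, the requireTrustedClient middleware, and the buildSignRequests helper are hypothetical illustrations, not the actual implementation):

// The trusted client app calls this over the secure server-to-server API,
// sending its own hostname; the auth server uses it as the U2F App ID.
app.post('/api/u2f/sign-request', requireTrustedClient, function (req, res) {
  var appId = 'https://' + req.body.clientHostname; // e.g. https://customer.app.com
  var signRequests = buildSignRequests(appId, req.body.keyHandles); // hypothetical helper
  res.json(signRequests);
});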

Related

Serve dynamic content with Firebase Hosting/Functions in EU

I would like to serve a Next.js app in Europe using Firebase Hosting & Functions capabilities.
I understand from the docs that:
"If you are using HTTP functions to serve dynamic content for Firebase Hosting, you must use us-central1"
and that
"Firebase Hosting supports Cloud Functions in us-central1 only"
It's pretty clear: you must use us-central1. But my main target is Europe...
I've read the following on the Cloud Functions locations guide:
For HTTP and callable functions, we recommend that you first set your
function to the destination region, or closest to where most expected
customers are located, and then alter your original function to
redirect its HTTP request to the new function (they can have the same
name). [Solution 1] If clients of your HTTP function support
redirects, you can simply change your original function to return an
HTTP redirect status (301) along with the URL of your new function.
[Solution 2] If your clients do not handle redirects well, you can
proxy the request from the original function to the new function by
initiating a new request from the original function to the new
function. The final step is to ensure that all clients are calling the
new function.
I've highlighted what seems to be two solutions to my initial problem:
Solution 1
Have a us-central1 function that sends a 301 redirect to https://europe-west1-[myProject].cloudfunctions.net/[myEuropeanFunction]
Have a europe-west1 function that does the job (in my case, serve the Next.js app)
Happily using Firestore located in europe-west1
This would only work if clients of the HTTP function support redirects. In my case, it's fine: all browsers support redirection.
exports.nextServer = functions
  .https
  .onRequest((req, res) => {
    res.set('location', 'https://europe-west1-<my-project>.cloudfunctions.net/nextServerEurope');
    res.status(301).send();
  });
exports.nextServerEurope = functions
  .region('europe-west1')
  .https
  .onRequest((req, res) => {
    return server.prepare().then(() => nextjsHandle(req, res));
  });
The issue with that solution is that the URL shown in the browser changes to https://europe-west1-<my-project>.cloudfunctions.net/nextServerEurope :-/
Solution 2
Have a us-central1 function that initiates a new/proxy request to the europe-west1 function
Have the same europe-west1 function that does the job (in my case, serve the Next.js app)
Still happily using Firestore located in europe-west1
By proxy request (as suggested in the guide), it would mean using a lib like axios, I suppose (a sketch follows the diagram below). I know there are also proxy-request libraries available for Node.
However, with that solution, the first issue I can think of is the unnecessary latency introduced by routing through the US endpoint:
client -> us endpoint -> eu endpoint -> do stuff -> us endpoint -> client
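A minimal sketch of what that proxy function could look like, assuming axios; the nextServerEurope name matches the example above, everything else is illustrative:

const functions = require('firebase-functions');
const axios = require('axios');

const EU_URL = 'https://europe-west1-<my-project>.cloudfunctions.net/nextServerEurope';

exports.nextServer = functions.https.onRequest(async (req, res) => {
  try {
    // Drop the host header so it doesn't conflict with the EU endpoint.
    const { host, ...headers } = req.headers;
    const proxied = await axios({
      method: req.method,
      url: EU_URL + req.url,
      headers,
      data: req.rawBody,
      responseType: 'arraybuffer',
      validateStatus: () => true // pass error statuses through unchanged
    });
    res.status(proxied.status).set(proxied.headers).send(proxied.data);
  } catch (err) {
    res.status(502).send('Proxy error');
  }
});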
Billing-wise, I'm wondering what the impact would be...
I know that two services from different regions calling each other can increase both latency and billing (egress).
With the first solution, there's no egress traffic, as it's only a redirect to the European endpoint. But the redirect itself is not a valid solution in my case.
It's unclear to me what the additional billing cost of the second solution would be (besides the latency cost): is the traffic for the proxy request from the US to the EU going to be expensive?
To wrap up:
Solution 1 is easy but leads to a non-transparent redirect
Solution 2 seems okay, but it requires an extra HTTP request, which adds extra latency (and potentially extra billing)
In the end, neither solution seems quite okay.
Therefore my question:
How do you serve dynamic content in Europe using Firebase Hosting and Functions?
Firebase Hosting only supports Cloud Functions in us-central1, as you mentioned and as stated in the official Firebase Hosting documentation.
I have created a feature request in the Public Issue Tracker to support other regions when using Firebase Hosting with Cloud Functions. Please note there is no ETA for when this will be implemented.
So, as @Doug Stevenson suggested, you can use Firebase Hosting with Cloud Run instead to serve your dynamic content.
Just to update, as of August 2022: the latency issue can now be solved easily.
"Firebase Hosting rewrites to CF3 are able to be done to any CF3 region, not just us-central1."
Reference: Feature Request Ticket
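Presumably that means a firebase.json rewrite can now point directly at an EU function; a sketch under that assumption (the region key follows the updated Hosting docs, and the function name matches the examples above):

{
  "hosting": {
    "rewrites": [
      {
        "source": "**",
        "function": "nextServerEurope",
        "region": "europe-west1"
      }
    ]
  }
}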

Custom service/route creation using feathersjs

I have been reading the documentation for the last 2 days. I'm new to feathersjs.
First issue: any link related to feathersjs is not accessible, such as this one.
Giving the following error:
This page isn’t working
legacy.docs.feathersjs.com redirected you too many times.
Hence I'm unable to trace back to similar (or any) previously asked threads.
Second issue: it's a great framework for starting real-time applications, but not all real-time applications require only DB access; some might also need access to something like Amazon S3, Microsoft Azure, etc. In my case it's exactly that, and it's mostly a problem of setting up routes.
I have executed the following commands:
feathers generate app
feathers generate service (service name: upload, REST, DB: Mongoose)
feathers generate authentication (username and password)
I have the setup ready, but how do I add another custom service?
The granularity of the service starts in the following way (use case only for upload):
Conventional way of doing it >> router.post('/upload', (req, res, next) => {});
Assume I'm sending a file using form data, plus some extra param like { storage: "s3" } in the request.
Postman --> POST (Only) to /upload ---> Process request (isStorageExistsInRequest?) --> Then perform the actual upload respectively to the specific Storage in Req and log the details in local db as well --> Send Response (Success or Failure)
Another thread on Stack Overflow where you answered with this:
app.use('/Category/ExclusiveContents/:categoryId', {
  create(data, params) {
    // do complex stuff here
    params.categoryId // the id of the category
    data // -> additional data from the POST request
  }
});
The solution could also be viewed this way: since feathersjs supports a microservice approach, it would be great to have sub-routes like:
/upload_s3 -- uploads to s3
/upload_azure -- uploads to azure and so on.
/upload -- main route which is exposed to users. The user sends a request, the request is processed, and the respective sub-route is called. (Authentication and authorization to be included as well)
How can these kinds of problems be solved using the existing feathersjs setup?
1) This is a deployment issue that Netlify is looking into. The current documentation is not on the legacy domain though; what you are looking for can be found at docs.feathersjs.com/api/databases/querying.html.
2) A custom service can be added by running feathers generate service and choosing the custom service option. The functionality can then be implemented in src/services/<service-name>/<service-name>.class.js according to the service interface. For file uploads, an example on how to customize the parameters for feathers-blob (which is used in the file uploading guide) can be found in this issue.
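For illustration, a custom service that routes uploads by the storage param could look roughly like this (a sketch only; uploadToS3 and uploadToAzure are hypothetical helpers standing in for real storage adapters):

// src/services/upload/upload.class.js
class UploadService {
  async create(data, params) {
    // `data` is the POST body, e.g. { storage: 's3', file: ... }
    switch (data.storage) {
      case 's3': return uploadToS3(data);
      case 'azure': return uploadToAzure(data);
      default: throw new Error('Unsupported storage: ' + data.storage);
    }
  }
}

// Registered like any generated service; authentication and logging to the
// local DB can then be added with hooks on the service.
app.use('/upload', new UploadService());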

Web API call not returning

I have a RESTful Web API that is running properly as I can test it with Fiddler. I see calls going through, I see responses coming back.
I am developing a tablet application that needs to use the Web API in order to fetch data or make updates in the repository.
My calls do not return, and there is not a single trace in Fiddler to show that my calls even reach the server.
The first call I need to make is to login. The URI would be this:
http://localhost:53060/api/user
This call would normally return some information about the user (such as group membership, level of authorization and so on). The Web API uses Windows Authentication, so the repository is able to resolve all these fields based on the credentials passed in. As I said, in Fiddler I see the three calls made to the URI as the authentication is negotiated between the caller and the server. The third call returns with a JSON object that contains all information generated from the repository as expected.
Now, moving to my client I have the following:
var webApiClient = new HttpClient(new HttpClientHandler()
{
    UseDefaultCredentials = true
})
{
    BaseAddress = new Uri("http://localhost:53060/")
};

webApiClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = await webApiClient.GetAsync("api/user");
var userLoginInfo = await response.Content.ReadAsAsync<UserLoginInformation>();
My call to "GetAsync" never returns and, like I said, I see no trace of it in Fiddler.
Any idea of what I'm doing wrong?
Changing the URL where the Web API was exposed seems to have fixed the problem. Thanks to @Nkosi for the suggestion.
For anyone stumbling onto this question and wondering how to change the URL of the Web API, there are two ways. If the simulator is running on the same machine as the Web API, the change has to be made in the applicationhost.config file for IIS Express. You can locate this file by right-clicking the IIS Express icon in the notification area (bottom right corner) and choosing the option to show all websites. Highlight the desired Web API and it will show where the application host configuration file is located. In there, locate the following section:
<bindings>
  <binding protocol="http" bindingInformation="*:53060:localhost" />
</bindings>
and replace the "localhost" name with the IP address of the machine where the Web API is running.
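For example, assuming the development machine's address were 192.168.1.42 (a made-up value for illustration), the binding would become:

<binding protocol="http" bindingInformation="*:53060:192.168.1.42" />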
However, this approach will not work once you start testing your tablet app with a real device. IIS Express must be coerced into exposing the Web API to the outside world. I found an excellent node.js package that can help with that. It is called IISExpress-proxy.

VimeoUpload not re-authenticating After Deletion of App Access on Vimeo.com

I was able to connect and upload videos using the library, but when I deleted the app connection on Vimeo.com (as a test) the app didn't authorize again.
The upload looks like it's working, but nothing is uploaded as the app is no longer connected.
I deleted the app on the phone and restarted but it still won't re-authorize the app.
This comes up in the output:
Vimeo upload state : Executing
Vimeo upload state : Finished
Invalid http status code for download task.
And this is in OldVimeoUpload.swift (I didn't include the actual access token):
import Foundation

class OldVimeoUpload: VimeoUpload
{
    static var VIMEO_ACCESS_TOKEN: String! // = "there's a string of numbers here"

    static let sharedInstance = OldVimeoUpload(backgroundSessionIdentifier: "") { () -> String? in
        return VIMEO_ACCESS_TOKEN // See README for details on how to obtain an OAuth token
    }

    // MARK: - Initialization

    override init(backgroundSessionIdentifier: String, authTokenBlock: AuthTokenBlock)
    {
        super.init(backgroundSessionIdentifier: backgroundSessionIdentifier, authTokenBlock: authTokenBlock)
    }
}
It looks like the access token is commented out. I deleted the two forward slashes to see if that would fix it, but it didn't.
I spoke too soon.
It sounds like you went to developer.vimeo.com and created an auth token, used it to upload videos, and then went back to developer.vimeo.com and deleted the auth token.
The app / VimeoUpload will not automatically re-authenticate in this situation. You've killed the token, and the app cannot request a new one for you. You'll need to create a new auth token and plug it into the app.
If this is not accurate and you're describing a different issue let us know.
If you inspect the error that's thrown from the failing request, I'm guessing you'll see it's a 401 Unauthorized related to using an invalid token.
Edit:
Disconnecting your app (as described in your comment below) has the same effect as deleting your auth token from developer.vimeo.com.
Also, VimeoUpload accepts a hardcoded auth token (as you see from the README and your code sample). It will not automatically re-authenticate, probably ever.
If you'd like to handle authentication in your app check out VimeoNetworking or VIMNetworking. Either of those libraries can be used to create a variety of authentication flows / scenarios. Still, if a logged in user disconnects or deletes their token, you will need them to deliberately re-authenticate (i.e. you will need to build that flow yourself). In that case, the user has explicitly stated that they don't want the app to be able to access information on their behalf. It would go against our security contract with them to automatically re-authenticate somehow.
Does that make sense?

How to fix cross-site origin policy for server and web-site

I'm using Dropwizard, which I'm hosting, along with a website, on Google Cloud (GCE). This means that there are two locations currently active:
Some.IP.Address - UI
Some.IP.Address:8080 - Dropwizard server
When the UI tries to call anything on my Dropwizard server, I get cross-site origin errors, which is understandable. However, this poses a problem for me. How do I fix it? It would be great if I could somehow spoof the addresses so that I don't have to fully qualify the resource in the UI.
What I'm looking to do is this:
$.get('/provider/upload/display_information')
Or, if I have to fully qualify
$.get('http://Some.IP.Address:8080/provider/upload/display_information')
I tried setting origin filters in Dropwizard per this Google Groups thread (https://groups.google.com/forum/#!topic/dropwizard-user/ybDOTOxjlLI), but it doesn't seem to work.
In the index.html served at http://Some.IP.Address, you might have a jQuery call that looks as follows:
$.get('http://Some.IP.Address:8080/provider/upload/display_information', data, callback);
Of course your browser will not allow access to http://Some.IP.Address:8080 due to the Same-Origin Policy (SOP): the protocol (http, https), the host, and the port all have to be the same.
To achieve Cross-Origin Resource Sharing (CORS) on Dropwizard, you have to add a CrossOriginFilter to the servlet environment. This filter adds the Access-Control-* headers to every response the server sends. In the run method of your Dropwizard application, write:
import java.util.EnumSet;

import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;

import org.eclipse.jetty.servlets.CrossOriginFilter;

import io.dropwizard.Application;
import io.dropwizard.setup.Environment;

public class SomeApplication extends Application<SomeConfiguration> {

    @Override
    public void run(SomeConfiguration config, Environment environment) throws Exception {
        FilterRegistration.Dynamic filter = environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter("allowedOrigins", "http://Some.IP.Address"); // allowed origins, comma separated
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowedMethods", "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter("preflightMaxAge", "5184000"); // 2 months
        filter.setInitParameter("allowCredentials", "true");
        // ...
    }
    // ...
}
This solution works for Dropwizard 0.7.0 and can be found at https://groups.google.com/d/msg/dropwizard-user/xl5dc_i8V24/gbspHyl4y5QJ.
Have a look at http://www.eclipse.org/jetty/documentation/current/cross-origin-filter.html for a detailed description of the CrossOriginFilter's initialisation parameters.
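On the client side, note that because allowCredentials is set to "true" above, a request that relies on cookies or auth headers has to opt in explicitly; a minimal jQuery sketch:

$.ajax({
  url: 'http://Some.IP.Address:8080/provider/upload/display_information',
  type: 'GET',
  xhrFields: { withCredentials: true }, // send credentials cross-origin
  success: function (data) {
    // handle the response
  }
});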