How to authenticate with Chrome sync XMPP servers? - google-chrome

I need to get the currently opened tabs of a Google Chrome user in my Java application (not on the same machine). Chrome sync is enabled so the current tabs are synced with Google servers.
According to the documentation of Chrome sync it is done via XMPP. So I guess it should be possible to connect to the Google XMPP server (xmpp.google.com), e.g. via Smack (Java library for XMPP), authenticate and listen for protobuf messages that indicate a tab session change.
Of course, the user's login credentials and the "client_id" Chrome uses to identify clients are available.
But I'm having a hard time figuring out the authentication method used to connect to the XMPP server – I can't work out how it's done in the Chromium source code, and there's no documentation available besides the very low-level comments in the code.
The libjingle library Google uses for its XMPP-based services is only available for C++ and is not well maintained/documented.
So is there anyone who has done something like that before and who can give any advice/hints on how the authentication process works?

I'm not sure Chrome sync uses XMPP, at least not at the level where it exchanges data with the client. It uses Google's Protocol Buffers technology. The protocol is defined in .proto description files, which you can compile into objects for your language with the protobuf compiler.
The sync server appears to live at https://clients4.google.com/chrome-sync, and the client sends POST requests whose binary body contains a typed ClientToServerMessage.
Here's the output captured when first connecting to the sync server.
The first Python object is a pprint of the WSGI 'environ' variable, which also contains the HTTP headers. The second object (after '====') is the actual protocol message.
{'CONTENT_LENGTH': '54',
'CONTENT_TYPE': 'application/octet-stream',
'GATEWAY_INTERFACE': 'CGI/1.1',
'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
'HTTP_ACCEPT_ENCODING': 'gzip,deflate,sdch',
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8',
'HTTP_AUTHORIZATION': 'GoogleLogin auth=MKhiqZsdz2RV4WrUJzPltxc2smTMcRnlfPALTOpf-Xdy9vsp6yUpS5cGuND0awqrYVUK4lhOJlh6OMsg093eBRghGGIgvWUTzU8PUvquy_c8Xn4sRiz_3tVJcke5eXi3q4qFDa6iVuEbT_0QhyPOjIQyeDOKRpZzMR3rpHsAs0ptFiTtUeTHsoIeUFT9nZPYzkET4-yHbDAp45_dxWdb-U6DPg24',
'HTTP_CONNECTION': 'keep-alive',
'HTTP_HOST': 'localhost:8080',
'HTTP_USER_AGENT': 'Chrome MAC 0.4.21.6 (130497)-devel',
'PATH_INFO': '/chrome-sync/dev/command/',
'QUERY_STRING': 'client_id=SOME_SPECIAL_STRING',
'REMOTE_ADDR': '127.0.0.1',
'REMOTE_PORT': '59031',
'REQUEST_METHOD': 'POST',
'SCRIPT_NAME': '',
'SERVER_NAME': 'vian-bizon.local',
'SERVER_PORT': '8080',
'SERVER_PROTOCOL': 'HTTP/1.0',
'SERVER_SOFTWARE': 'gevent/1.0 Python/2.6',
'wsgi.errors': <open file '<stderr>', mode 'w' at 0x100416140>,
'wsgi.input': <gevent.pywsgi.Input object at 0x102a04250>,
'wsgi.multiprocess': False,
'wsgi.multithread': False,
'wsgi.run_once': False,
'wsgi.url_scheme': 'https',
'wsgi.version': (1, 0)}
'==================================='
share: "MY_EMAIL_WAS_HERE#gmail.com"
protocol_version: 30
message_contents: GET_UPDATES
get_updates {
  caller_info {
    source: NEW_CLIENT
    notifications_enabled: false
  }
  fetch_folders: true
  from_progress_marker {
    data_type_id: 47745
    token: ""
    notification_hint: ""
  }
}
debug_info {
  events {
    type: INITIALIZATION_COMPLETE
  }
  events_dropped: false
}
This happens with OAuth-based authentication. You can see the OAuth token in the HTTP_AUTHORIZATION field. The token is issued when you go through the 'Google Account Login' HTML dialog. I'm not certain, but it seems the API for obtaining an access token for Google services is publicly available.
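Putting the captured request back together, reproducing it might look roughly like the following minimal Node sketch. This is only an illustration: buildClientToServerMessage is a hypothetical helper that would serialize the message from the compiled sync.proto, the client_id and token values are placeholders taken from the dump above, and the exact path may differ (the dev build above posted to /chrome-sync/dev/command/).
// Minimal Node 18+ sketch (assumptions noted above): POST a serialized
// ClientToServerMessage to the sync endpoint seen in the captured request.
const clientId = 'SOME_SPECIAL_STRING';            // the client_id query parameter
const authToken = 'MKhiqZ...';                     // value from the GoogleLogin/OAuth login flow
const body = buildClientToServerMessage();         // hypothetical helper: bytes from the compiled sync.proto

const response = await fetch(
  'https://clients4.google.com/chrome-sync/command/?client_id=' + encodeURIComponent(clientId),
  {
    method: 'POST',
    headers: {
      'Authorization': 'GoogleLogin auth=' + authToken,
      'Content-Type': 'application/octet-stream',
    },
    body,                                          // Uint8Array/Buffer with the protobuf payload
  }
);

// The reply is another protobuf-encoded message that the compiled .proto classes can decode.
const reply = new Uint8Array(await response.arrayBuffer());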
If you are looking for XMPP auth instead, please see the description of the X-GOOGLE-TOKEN auth mechanism here:
Authenticate to Google Talk (XMPP, Smack) using an authToken
For X-OAUTH2 authorization, you can find the info here: https://developers.google.com/talk/jep_extensions/oauth
And a sample here: http://pits.googlecode.com/svn/trunk/xmpp.c
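For reference, the X-OAUTH2 mechanism described in that document boils down to a PLAIN-style initial response carried inside the SASL auth stanza. A rough Node sketch of constructing it (the account and token values are placeholders):
// Build the base64 initial response: NUL + account + NUL + OAuth2 access token.
const user = 'someone@gmail.com';     // account the token was issued for (placeholder)
const accessToken = 'ya29.A0...';     // OAuth2 access token (placeholder)

const initialResponse = Buffer.from('\0' + user + '\0' + accessToken).toString('base64');

// This goes into the SASL auth stanza on the XMPP stream:
const authStanza =
  '<auth xmlns="urn:ietf:params:xml:ns:xmpp-sasl" mechanism="X-OAUTH2" ' +
  'auth:service="oauth2" xmlns:auth="http://www.google.com/talk/protocol/auth">' +
  initialResponse +
  '</auth>';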
Note that you can have the XMPP stream flow written to the Chrome log file that is populated on each run of the browser (chrome_debug.log). To enable this, run Chrome with the following options: --enable-logging --v=2

Related

AASA - Apple App Site Association - Not working

I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but that is not sufficient for my scenario, as I will explain. It didn't work for me straight away, I have to say, but it eventually started working after a clean build, so I thought that was the issue at the time; now I am not so sure.
I am using Expo with EAS build. We have a multi-tenant application. From a single codebase we deploy to multiple apps in the store. All are on the same team ID but they are separate applications and use separate credentials, nothing is shared.
I am confident my app's textContentType values of username and password on my TextFields are correct, as this has not changed from when I managed to get it working originally, and I have checked it countless times.
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed however is when going to store a password manually using "add password" via iCloudKeychain from the keyboard accessory this does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it setup to support wildcard subdomains with a LetsEncrypt provisioned wildcard SSL certificate. I then have edge functions which change the content of my index.html and apple-app-site-association file dynamically based on the requested subdomain.
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo 47 and EAS build. I have added in the appropriate associated domains configuration and I can see this when introspecting my entitlements under com.apple.developer.associated-domains and it is correct.
I am using TestFlight for testing. I am doing a --clean-build on EAS every time and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process which does this automatically. This must be using the correct provisioning profile otherwise I would get a build failure as the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the content-length header in the Edge function so it is correctly calculated by the network instead and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
I added in an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA verification tool to verify correctness. Branch.io says the file is fine and Apple says it's fine too, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as it uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs, on install I can see the AASA request for the correct domains. I see no errors from swcd; I just see "Beginning data task AASA-XXXX" with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now) I do not see quite what I would expect, but the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles there is one per application install, which is correct. The request is marked as Unknown, and when I look at the request the host is shown, but as you would expect from SSL I see no path. The info says METHOD: CONNECT with Error - Input Error: EOF. This is the only error I see, and I am not sure if it is a red herring and something to do with Charles. Given the error, as you would expect there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose I see the following in the swcutil_show.txt device log, at the timestamp from my console log. This looks correct in comparison to other apps' webcredentials and applinks services I see there, and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 +0000
At end of file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex on another domain which hasn't been seen before by Apple. I have tried using a subdomain with a root domain serving the same content and I have tried the subdomain and root domain on their own. I have also tried not using the Edge functions and having static files but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
AASA content comes back with the correct payload and Content-Type: application/json and Content-Length headers, both from Apple's CDN and the origin. When I somehow had this working in my initial test it was on a root domain, and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different content or duplicated content and I block the www subdomain - I have also tried it with a www subdomain for the record.
{
  "applinks": {
    "details": [
      {
        "appIDs": [
          "MYTEAMID.com.cf.example.b2c.ios"
        ],
        "components": [
          {
            "#": "no_universal_links",
            "exclude": true,
            "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
          }
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
I have also tried this with the older format:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "MYTEAMID.com.cf.example.b2c.ios",
        "paths": [
          "*"
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
associatedDomains (iOS Expo config)
associatedDomains: [
  `webcredentials:${SUBDOMAIN}.example.app`,
  `applinks:${SUBDOMAIN}.example.app`,
],
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions I would really appreciate it. I am very confused how the device's request seems correct and the CDN content is correct, but it is still not working. It's also worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so the keychain-to-domain association must be different for each app.
I am wondering if it's the LetsEncrypt wildcard SSL certificate but I wouldn't expect it to verify and for Apple to cache the file if this was the case. It seems very unlikely to me but it is the only thing I haven't tried at this point.
Many Thanks,
Mark

Custom service/route creation using feathersjs

I have been reading the documentation for the last 2 days. I'm new to feathersjs.
First issue: any link related to feathersjs is not accessible, such as this one.
Giving the following error:
This page isn’t working
legacy.docs.feathersjs.com redirected you too many times.
Hence I'm unable to trace back to similar (or any) previously asked threads.
Second issue: It's a great framework to start with for real-time applications. But not all real-time applications require only DB access; they might also need access to something like Amazon S3, Microsoft Azure, etc. In my case it's the same, and it's more of a problem with setting up routes.
I have executed the following commands:
feathers generate app
feathers generate service (service name: upload, REST, DB: Mongoose)
feathers generate authentication (username and password)
I have the setup ready, but how do I add another custom service?
The granularity of the service is as follows (use case only for upload):
Conventional way of doing it >> router.post('/upload', (req, res, next) =>{});
Assume I'm sending a file using form data, plus some extra param like { storage: "s3" } in the request.
Postman --> POST (Only) to /upload ---> Process request (isStorageExistsInRequest?) --> Then perform the actual upload respectively to the specific Storage in Req and log the details in local db as well --> Send Response (Success or Failure)
Another thread on Stack Overflow where you answered with this:
app.use('/Category/ExclusiveContents/:categoryId', {
  create(data, params) {
    // do complex stuff here
    params.categoryId // the id of the category
    data // -> additional data from the POST request
  }
});
The solution can be viewed this way as well: since feathersjs supports a microservice approach, it would be great to have sub-routes like:
/upload_s3 -- uploads to s3
/upload_azure -- uploads to azure and so on.
/upload -- main route which is exposed to users. The user makes a request, the request is processed, and the respective sub-route is called. (Authentication and authorization to be included as well.)
How to solve these types of problems using existing setup of feathersjs?
1) This is a deployment issue that Netlify is looking into. The current documentation is not on the legacy domain though; what you are looking for can be found at docs.feathersjs.com/api/databases/querying.html.
2) A custom service can be added by running feathers generate service and choosing the custom service option. The functionality can then be implemented in src/services/<service-name>/<service-name>.class.js according to the service interface, as sketched below. For file uploads, an example of how to customize the parameters for feathers-blob (which is used in the file uploading guide) can be found in this issue.
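As a rough sketch (file and class names are illustrative, not the exact generator output), such a custom service and its registration could look like this:
// src/services/upload/upload.class.js -- custom service implementing only `create`
class UploadService {
  async create(data, params) {
    // `data` is the POST body, so the extra param arrives as data.storage, e.g. "s3"
    const storage = data.storage;

    // ...dispatch to the matching storage backend (S3, Azure, ...) here,
    // then log the upload details to the local DB...

    return { status: 'success', storage };
  }
}

// src/services/upload/upload.service.js -- register it like any other service
module.exports = function (app) {
  app.use('/upload', new UploadService());
};
A POST to /upload then maps to create(data, params), which is the natural place for the isStorageExistsInRequest check and the per-storage dispatch (or for internal calls to separate /upload_s3 and /upload_azure services, if you prefer that split).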

U2F with multi-facet App ID

We have been directly using U2F on our auth web app with the hostname as our app ID (https://auth.company.com) and that's working fine. However, we'd like to be able to authenticate with the auth server from other apps (and hostnames, e.g. https://customer.app.com) that communicate with the auth server via HTTP API.
I can generate the sign requests and what-not through API calls and return them to the client apps, but it fails server-side (auth server) because the app ID doesn't validate (clients are using their own hostnames as app ID). This is understandable, but how should I handle this? I've read about facets but I cannot get it to work at all.
The client app JS is like:
var registerRequests = // ...
var signRequests = // ...
u2f.register('http://localhost:3000/facets', registerRequests, signRequests, function (registerResponse) {
  if (registerResponse.errorCode) {
    return alert("Registration error: " + registerResponse.errorCode);
  }
  // etc.
});
This gives me an error code 5 (timeout error) after a while. I don't see any request to /facets. Is there a way around this, or am I barking up the wrong tree (or a different forest)?
————
Okay, so after a few hours of researching this; I'm pretty sure this fiendish bit of the Firefox U2F plugin is the source of some of my woes:
if (u.scheme == "http")
  if (url2str(u, true) == url2str(ou, true))
    return resolve(challenge);
  else
    return reject("Not matching appID");
https://github.com/prefiks/u2f4moz/blob/master/ext/appIdValidator.js#L106-L110
It essentially says: if the appID's scheme is http, only allow it if it is exactly the same as the page's host (it goes on to do the trusted-facets JSON fetching behaviour, but only for https).
Still not sure if I'm on the right track though in how I'm trying to design this.
I didn't need to worry about facets for my particular situation. In the end I just pass the client app hostname through to the Auth server via the secure API interface and it uses that as the App ID. Seems to work okay so far.
The issue I was having with facets was due to using http in dev and the Firefox U2F plugin not permitting that with JSON facets.
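Concretely, that arrangement looks roughly like this on the client side (a sketch; sendToAuthServer stands in for whatever secure API call the client app already makes to the auth server):
// Each client app uses its own https origin as the App ID and tells the auth
// server which hostname it used, so the server-side validation matches.
var appId = window.location.origin; // e.g. "https://customer.app.com"

u2f.register(appId, registerRequests, signRequests, function (registerResponse) {
  if (registerResponse.errorCode) {
    return alert("Registration error: " + registerResponse.errorCode);
  }
  // Hypothetical helper: forward the appId along with the response over the secure API;
  // the auth server verifies the registration against that same App ID.
  sendToAuthServer({ appId: appId, response: registerResponse });
});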

Web API call not returning

I have a RESTful Web API that is running properly as I can test it with Fiddler. I see calls going through, I see responses coming back.
I am developing a tablet application that needs to use the Web API in order to fetch data or make updates in the repository.
My calls do not return and there is not a single trace in the Fiddler to show that my calls even reach the server.
The first call I need to make is to login. The URI would be this:
http://localhost:53060/api/user
This call would normally return some information about the user (such as group membership, level of authorization and so on). The Web API uses Windows Authentication, so the repository is able to resolve all these fields based on the credentials passed in. As I said, in Fiddler I see the three calls made to the URI as the authentication is negotiated between the caller and the server. The third call returns with a JSON object that contains all information generated from the repository as expected.
Now, moving to my client I have the following:
var webApiClient = new HttpClient(new HttpClientHandler()
{
    UseDefaultCredentials = true
})
{
    BaseAddress = new Uri("http://localhost:53060/")
};
webApiClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = await webApiClient.GetAsync("api/user");
var userLoginInfo = await response.Content.ReadAsAsync<UserLoginInformation>();
My call to "GetAsync" never returns and, like I said, I see no trace of it in Fiddler.
Any idea of what I'm doing wrong?
Changing the URL where the Web API was exposed seemed to have fixed the problem. Thanks to @Nkosi for the suggestion.
For anyone stumbling onto this question and asking themselves how to change the URL of the Web API, there are two ways. If the simulator is running on the same machine as the Web API, the change has to be made in the "applicationhost.config" file for IIS Express. You can locate this file by right-clicking on the IIS Express icon in the Notification Area (the bottom right corner) and selecting "Show All Websites". Highlight the desired Web API and it will show where the application host configuration file is located. In there, one needs to locate the following section:
<bindings>
    <binding protocol="http" bindingInformation="*:53060:localhost" />
</bindings>
and replace the "localhost" name with the IP address of the machine where the Web API is running.
However, this approach will not work once you start testing your tablet app with a real device. IIS Express must be coerced into exposing the Web API to the outside world. I found an excellent node.js package that can help with that. It is called IISExpress-proxy.

VimeoUpload not re-authenticating After Deletion of App Access on Vimeo.com

I was able to connect and upload videos using the library, but when I deleted the app connection on Vimeo.com (as a test) the app didn't authorize again.
The upload looks like it's working, but nothing is uploaded as the app is no longer connected.
I deleted the app on the phone and restarted but it still won't re-authorize the app.
This comes up in the output:
Vimeo upload state : Executing
Vimeo upload state : Finished
Invalid http status code for download task.
And this is in OldVimeoUpload.swift (I didn't include the actual access token!):
import Foundation

class OldVimeoUpload: VimeoUpload
{
    static var VIMEO_ACCESS_TOKEN: String! // = "there's a string of numbers here"

    static let sharedInstance = OldVimeoUpload(backgroundSessionIdentifier: "") { () -> String? in
        return VIMEO_ACCESS_TOKEN // See README for details on how to obtain an OAuth token
    }

    // MARK: - Initialization

    override init(backgroundSessionIdentifier: String, authTokenBlock: AuthTokenBlock)
    {
        super.init(backgroundSessionIdentifier: backgroundSessionIdentifier, authTokenBlock: authTokenBlock)
    }
}
It looks like the access token number is commented out. I deleted the 2 forward slashes to see if that would fix it but it didn't.
I spoke too soon.
It sounds like you went to developer.vimeo.com and created an auth token. Used it to upload videos. And then went back to developer.vimeo.com and deleted the auth token.
The app / VimeoUpload will not automatically re-authenticate in this situation. You've killed the token and the app cannot request a new one for you. You'll need to create a new auth token and plug it into the app.
If this is not accurate and you're describing a different issue let us know.
If you inspect the error that's thrown from the failing request I'm guessing you'll see it's a 401 unauthorized related to using an invalid token.
Edit:
Disconnecting your app (as described in your comment below) has the same effect as deleting your auth token from developer.vimeo.com.
Also, VimeoUpload accepts a hardcoded auth token (as you see from the README and your code sample). It will not automatically re-authenticate, probably ever.
If you'd like to handle authentication in your app check out VimeoNetworking or VIMNetworking. Either of those libraries can be used to create a variety of authentication flows / scenarios. Still, if a logged in user disconnects or deletes their token, you will need them to deliberately re-authenticate (i.e. you will need to build that flow yourself). In that case, the user has explicitly stated that they don't want the app to be able to access information on their behalf. It would go against our security contract with them to automatically re-authenticate somehow.
Does that make sense?