AASA - Apple App Site Association - Not working - apple-app-site-association

I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but that is not sufficient for my scenario, as I will explain. It didn't work straight away, I have to say, but it eventually started working after a clean build, so at the time I thought the clean build had been the fix; now I am not so sure.
I am using Expo with EAS Build. We have a multi-tenant application: from a single codebase we deploy multiple apps to the store. All are on the same team ID, but they are separate applications and use separate credentials; nothing is shared.
I am confident the textContentType of username and password on my TextFields is correct, as this has not changed since I originally had it working, and I have checked it countless times.
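For illustration, this is the shape of the fields I mean (a trimmed React Native sketch, not my exact code):

// Minimal login fields wired for iOS password AutoFill.
// Trimmed sketch; the real screen has more state and handlers.
import React, { useState } from 'react';
import { TextInput, View } from 'react-native';

export function LoginFields() {
  const [username, setUsername] = useState('');
  const [password, setPassword] = useState('');

  return (
    <View>
      <TextInput
        textContentType="username" // tells iOS this is the username field
        autoComplete="username"
        value={username}
        onChangeText={setUsername}
      />
      <TextInput
        textContentType="password" // tells iOS this is the password field
        autoComplete="password"
        secureTextEntry
        value={password}
        onChangeText={setPassword}
      />
    </View>
  );
}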
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed however is when going to store a password manually using "add password" via iCloudKeychain from the keyboard accessory this does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it set up to support wildcard subdomains with a Let's Encrypt-provisioned wildcard SSL certificate. I then have Edge Functions that change the content of my index.html and apple-app-site-association files dynamically based on the requested subdomain.
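The edge function is essentially this shape (a trimmed TypeScript sketch with placeholder IDs; the real function has more tenant logic):

// netlify/edge-functions/aasa.ts
// Serves a per-tenant AASA payload based on the requested subdomain.
// Trimmed sketch; assumes Netlify's in-source path config.
export const config = { path: '/.well-known/apple-app-site-association' };

export default async (request: Request) => {
  const host = new URL(request.url).hostname; // e.g. tenant1.example.com
  const subdomain = host.split('.')[0];

  const aasa = {
    webcredentials: {
      // The real function looks the app ID up per tenant; placeholder here.
      apps: [`MYTEAMID.com.cf.example.${subdomain}.ios`],
    },
  };

  // No Content-Length set here; the platform calculates it.
  return new Response(JSON.stringify(aasa), {
    headers: { 'content-type': 'application/json' },
  });
};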
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo (SDK 47) and EAS Build. I have added the appropriate associated domains configuration; I can see it when introspecting my entitlements under com.apple.developer.associated-domains, and it is correct.
I am using TestFlight for testing. I pass --clean-build to EAS every time and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process, which otherwise does this automatically. The build must be using the correct provisioning profile; otherwise I would get a build failure, because the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the Content-Length header in the Edge Function so it is correctly calculated by the network layer instead, and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
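For anyone who wants to reproduce that check, something along these lines works (a quick sketch for Node 18+ ESM or Deno; substitute the real host):

// Compare the AASA payload served by the origin with Apple's CDN copy.
const host = 'site.example.com'; // substitute the real host

const [origin, cdn] = await Promise.all([
  fetch(`https://${host}/.well-known/apple-app-site-association`),
  fetch(`https://app-site-association.cdn-apple.com/a/v1/${host}`),
]);

console.log('origin content-type:', origin.headers.get('content-type'));

const [originBody, cdnBody] = await Promise.all([origin.text(), cdn.text()]);
console.log('payloads match:', originBody.trim() === cdnBody.trim());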
I added an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA validator to verify correctness. Branch.io says the file is fine, and Apple says it's fine too, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as the tool uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs, on install I can see the AASA requests for the correct domains. I see no errors from swcd; I just see "Beginning data task AASA-XXXX" with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now), I do not see quite what I would expect, although the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles there is one per application install, which is correct. The request is marked as Unknown, and when I look at it the host is shown but, as you would expect with SSL, I see no path. The info says METHOD: CONNECT with "Error - Input Error: EOF". This is the only error I see; I am not sure if it is a red herring and something to do with Charles. Given the error, as you would expect, there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose, I see the following in the swcutil_show.txt device log at the timestamp from my console log. It looks correct in comparison to the webcredentials and applinks services of other apps I see there, and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 +0000
At the end of the file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex on another domain that Apple has not seen before. I have tried a subdomain with a root domain serving the same content, and I have tried the subdomain and the root domain on their own. I have also tried dropping the Edge Functions and serving static files instead, but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
The AASA content comes back with the correct payload, Content-Type: application/json, and Content-Length headers, both from Apple's CDN and from the origin. When I somehow had this working in my initial test it was on a root domain and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different or duplicated content, and I block the www subdomain. For the record, I have also tried it with a www subdomain.
{
  "applinks": {
    "details": [
      {
        "appIDs": [
          "MYTEAMID.com.cf.example.b2c.ios"
        ],
        "components": [
          {
            "#": "no_universal_links",
            "exclude": true,
            "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
          }
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
I have also tried this with the older format:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "MYTEAMID.com.cf.example.b2c.ios",
        "paths": [
          "*"
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
associatedDomains in the iOS Expo config
associatedDomains: [
  `webcredentials:${SUBDOMAIN}.example.app`,
  `applinks:${SUBDOMAIN}.example.app`,
],
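For completeness, that sits in the dynamic config roughly like this (a trimmed app.config.ts sketch; the TENANT_SUBDOMAIN variable and other values are placeholders, and the real tenant lookup is more involved):

// app.config.ts - trimmed sketch of the per-tenant dynamic config.
import type { ExpoConfig } from 'expo/config';

// Placeholder: the real build injects the tenant's subdomain.
const SUBDOMAIN = process.env.TENANT_SUBDOMAIN ?? 'tenant1';

const config: ExpoConfig = {
  name: 'ExampleApp',
  slug: 'example-app',
  ios: {
    bundleIdentifier: 'com.cf.example.b2c.ios',
    associatedDomains: [
      `webcredentials:${SUBDOMAIN}.example.app`,
      `applinks:${SUBDOMAIN}.example.app`,
    ],
  },
};

export default config;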
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions I would really appreciate it. I am very confused as to how the device's requests seem correct and the CDN content is correct, yet it still does not work. It's worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so each app's keychain-to-domain association must be distinct.
I am wondering if it's the Let's Encrypt wildcard SSL certificate, but I wouldn't expect the file to verify, or Apple to cache it, if that were the case. It seems very unlikely to me, but it is the only thing I haven't tried at this point.
Many Thanks,
Mark

Related

Catch 22? Blocked by CORS policy: Same server, internal/external IP, no SSL

My apologies if this is a duplicate. I can find a million results about CORS policy issues, but not about this specific one:
I developed a simple "speed test" site for my users (wfh employees of my company) to access. It tests speeds across the public net to different datacenters we utilize, and via the users' VPN connection to one of our DCs.
There are more complicated elements, but for a basic round-trip "ping" I have an extremely simple PHP script on the server that contains:
<?php
header('Access-Control-Allow-Origin: *');
header('Access-Control-Allow-Headers: *');
if ($_GET['simple'] == '1')
    die('{ }');
?>
It is called like this:
$.ajax({
    type: 'GET',
    url: sURL,
    data: { ignore: (pingCounter.start = new Date().getTime()) },
    dataType: 'text',
    timeout: iTimeout
})
.done(function(ret) {
    pingCounter.end = new Date().getTime();
    [...] (additional code omitted for brevity)
(I know this has additional overhead other than the raw round-trip network traffic timing, but I don't need sub-ms accuracy. I just need to be able to tell users "the problem is on your end" or "ah yes, the problem is the latency between your house and this particular DC".)
The same server running that PHP code is addressable at the following URLs at the DC wherein our VPN server lies:
http://speedtest-int.mycompany.com/ping.php
http://speedtest-ext.mycompany.com/ping.php
Public DNS resolves like this:
speedtest-ext.mycompany.com IN A 1.1.1.1 (Actual public IP redacted)
speedtest-int.mycompany.com IN A 10.1.1.1 (Actual internal IP redacted)
If I access either URL from my browser directly, it loads fine (which is to say it responds with { }).
When loading via the JS snippet above, the call to http://speedtest-ext.mycompany.com/ping.php works fine.
The call to http://speedtest-int.mycompany.com/ping.php fails with "The request client is not a secure context and the resource is in more-private address space 'private'".
Fair enough, the solution is to add Access-Control-Allow-Private-Network: *, right?
EXCEPT that header apparently can only be used with SSL:
https://developer.chrome.com/blog/private-network-access-update/
I have a self-signed cert on there, but that obviously fails by policy for that reason.
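If I read the Chrome article right, the header has to come back on the preflight response with the value true (not *); something like this Node sketch is what I'd be aiming for (illustrative only, and it would still need HTTPS):

// Illustrative Node version of the ping endpoint, answering the
// Private Network Access preflight. Hypothetical setup.
import { createServer } from 'node:http';

createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Headers', '*');
  // Chrome's PNA preflight expects exactly "true" here.
  res.setHeader('Access-Control-Allow-Private-Network', 'true');

  if (req.method === 'OPTIONS') {
    res.writeHead(204).end(); // answer the preflight itself
    return;
  }
  res.writeHead(200, { 'content-type': 'application/json' }).end('{ }');
}).listen(8080);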
I could just get a LetsEncrypt cert for multiple subdomains, EXCEPT it will never validate http://speedtest-int.mycompany.com, because the LetsEncrypt servers won't be able to reach it to validate ownership, as it resolves to a private IP.
I have no control over most of my users' machines, so I can't necessarily install trusted internal certs or change browser options. Most users use Chrome.
So is my solution to buy a UCC or wildcard cert?
I feel like I'm in a catch-22, and I don't want to spend however-much on a UCC cert for an internal app that will be very very very occasionally used by one of our 25 home-based employees when I want to prove that their home "internet is bad" and not the corp network.
Thanks in advance; I'm sure there's a stupidly obvious solution I'm not seeing.
(I'm considering pushing a /32 route to my VPN users for another real public IP to be used in place of the internal IP. Then I can have the "internal" test run against an otherwise publicly accessible IP which could be validated by LetsEncrypt, but VPN users would hit it via the VPN. Is that silly?)
Edit: If anyone is curious -- or it helps to clarify my goal here -- this is the output when accessing the speedtest page:
http://s.co.tt/wp-content/uploads/2021/12/Internal_Speedtest_Example-Redacted.png
It repeats for 20 cycles (or until stopped) and runs each element a varying number of times per cycle, collecting the average time for each. It ain't pretty, but it work(ed).

Vorto Dashboard not displaying the device model

While running the Vorto Dashboard I'm getting the following error:
JWT expired, getting new Token Wed Aug 26 2020 07:38:56 GMT+0100 (BST)... StatusCodeError: 401 -
{"status":401,"error":"gateway:authentication.failed","message":"Multiple authentication
mechanisms were applicable but none succeeded.","description":"For a successful authentication
see the following suggestions: { The JSON Web Token is not valid. },
{ Please provide a valid JWT in the authorization header prefixed with 'Bearer ' }."
The contents of config.json are as follows:
{
  "client_id": "xxxxxxxxxxx",
  "client_secret": "xxxxxxxxxxxx",
  "scope": "xxxxxxxxxx",
  "intervalMS": 10000
}
I tried setting the contents of config.json as environment variables, but then I also get the same error. A screenshot of the web front end when accessing localhost:8080 is attached.
I also tried the suggestions from the following link: Error running Vorto Dashboard for Bosch iot suite. But it's still not working. Please help me solve this issue.
I have discussed the matter internally at Bosch (disclaimer: I am an employee).
After discussing with the Bosch Suite Auth team, here is a summary of what happened:
The Suite Auth team recently transitioned from Keycloak to Hydra for their authentication technology.
The relevant bit here is that previously, the scopes passed to the token request were ignored.
The Vorto Dashboard app had been passing the wrong key for the scope parameter all along when requesting a token, but it used to be ignored.
Now that this parameter is relevant, the incorrect notation was not failing to produce a token, but obtained one that was not suitable to authorize with Bosch IoT Things, because it did not contain the appropriate scope.
In turn, fixing this key produces a token that successfully authorizes with Bosch IoT Things.
If you're in a hurry, you can check out this branch with the fix (it's literally an 8-character change set).
Otherwise, you can monitor this GitHub ticket for closure - I will close it when the fix is merged to the master branch of the Vorto Examples project.
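For reference, a token request with the scope under the correct key looks roughly like this (a generic OAuth2 client-credentials sketch in TypeScript; the token endpoint URL is illustrative, not the exact Bosch one):

// Generic OAuth2 client-credentials token request; endpoint is illustrative.
const body = new URLSearchParams({
  grant_type: 'client_credentials',
  client_id: 'xxxxxxxxxxx',
  client_secret: 'xxxxxxxxxxxx',
  scope: 'xxxxxxxxxx', // the key the fix corrects
});

const response = await fetch('https://access.example.com/token', {
  method: 'POST',
  headers: { 'content-type': 'application/x-www-form-urlencoded' },
  body,
});

const { access_token } = await response.json();
// Subsequent requests then send: Authorization: Bearer <access_token>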

Receiving JSON POST requests from HPKP error respondents

I'm experimenting with setting up HPKP (https://scotthelme.co.uk/hpkp-http-public-key-pinning/) on my web server and one of its options is to specify an error reporting URI in the header for clients to send error notices to in the form of a JSON POST request structured as such:
{
  "date-time": date-time,
  "hostname": hostname,
  "port": port,
  "effective-expiration-date": expiration-date,
  "include-subdomains": include-subdomains,
  "noted-hostname": noted-hostname,
  "served-certificate-chain": [
    pem1, ... pemN
  ],
  "validated-certificate-chain": [
    pem1, ... pemN
  ],
  "known-pins": [
    known-pin1, ... known-pinN
  ]
}
My question is how can I set something up within Linux to listen for the JSON POSTs on port 80 (or 443)?
Does anything exist for this already? thanks everyone for your help.
Scott Helme, whose link you included, also runs this service, which takes care of it for you:
https://report-uri.io
Alternatively, if you want to try it yourself, any web scripting language (CGI via Perl, PHP, etc.) can listen for a POST request and dump it out to a log file. Personally I use a NodeJS service, but anything will do. I'm not aware of any scripts people have shared, but that's probably because there's no need, as it's so simple: listen for the POST request, print out the results.
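To give a feel for how little is involved, a listener can be as small as this (an illustrative Node/TypeScript sketch, not a hardened service):

// Tiny listener that accepts HPKP report POSTs and appends them to a log.
import { createServer } from 'node:http';
import { appendFileSync } from 'node:fs';

createServer((req, res) => {
  if (req.method !== 'POST') {
    res.writeHead(405).end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    appendFileSync('hpkp-reports.log', body + '\n'); // dump the JSON report
    res.writeHead(204).end(); // nothing useful to send back
  });
}).listen(80);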
Also, you cannot listen on port 443 on the same domain as the site you are monitoring: the report request is itself subject to HPKP, so it won't be able to connect, and the only time you want a report is precisely when clients can't connect! It would work fine in report-only mode, though.
I know you're only experimenting, but I would caution you to be very careful with HPKP, as it's very easy to brick your site with it, and it adds a lot of extra considerations to certificate renewal. Personally I don't think it's that great: the risk it introduces, to me anyway, far outweighs the risk it mitigates for most sites. More thoughts on that from me here: https://www.tunetheweb.com/security/http-security-headers/hpkp/#downsides

Using Google Chrome remote debugging protocol

I need to get the network events from Chrome. I've found this:
https://developer.chrome.com/devtools/docs/debugger-protocol
https://developer.chrome.com/devtools/docs/protocol/1.1/network#command-enable
It seems that Chrome listens on a port for remote debugging, receiving messages and sending answers and events. It says it uses JSON, so I decided to try it.
So I wrote some simple Java code that opens the port Chrome is listening on (of course, I started Chrome with google-chrome --remote-debugging-port=9222 on my Ubuntu machine). I have a thread that writes to stdout anything coming from this port, and then the code writes to the socket's output stream using this line (a sample method from the protocol):
out.println("{\"id\": 1,\"method\": \"Network.enable\"}");
I would expect some answer (according to the protocol) on the input stream, but nothing happens.
Has anyone ever done something like this? I can't find anything on the net.
Finally I've got it. Credit goes to https://www.igvita.com/2012/04/09/driving-google-chrome-via-websocket-api/.
First I send an HTTP request to http://localhost:9222/json. This returns a JSON list of the open tabs in Chrome, and for each tab I also get a WebSocket URI (webSocketDebuggerUrl):
[
  {
    "description": "",
    "devtoolsFrontendUrl": "/devtools/devtools.html?ws=localhost:9222/devtools/page/C014A09F-BD0A-40BA-B23C-7B18B84942CD",
    "faviconUrl": "http://cdn.sstatic.net/stackoverflow/img/favicon.ico?v=00a326f96f68",
    "id": "C014A09F-BD0A-40BA-B23C-7B18B84942CD",
    "title": "Using Google Chrome remote debugging protocol - Stack Overflow",
    "type": "page",
    "url": "https://stackoverflow.com/questions/28430479/using-google-chrome-remote-debugging-protocol",
    "webSocketDebuggerUrl": "ws://localhost:9222/devtools/page/C014A09F-BD0A-40BA-B23C-7B18B84942CD"
  }
]
Then I can use a WebSocket to send debugging messages to a specific tab, using this URI. I also found this for using the Jetty implementation of WebSocket: javax.websocket client simple example.
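In case a non-Java illustration helps, the same flow fits in a few lines of TypeScript with the ws package (a sketch, assuming Chrome was started with --remote-debugging-port=9222 and Node 18+ ESM):

// npm install ws
import WebSocket from 'ws';

// 1. List the open tabs; each entry includes webSocketDebuggerUrl.
const tabs = await (await fetch('http://localhost:9222/json')).json();

// 2. Attach to the first tab and enable network events.
const ws = new WebSocket(tabs[0].webSocketDebuggerUrl);

ws.on('open', () => {
  ws.send(JSON.stringify({ id: 1, method: 'Network.enable' }));
});

ws.on('message', (data) => {
  // Responses echo the matching id; events arrive with a method
  // such as Network.requestWillBeSent.
  console.log(data.toString());
});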

How to authenticate with Chrome sync XMPP servers?

I need to get the currently opened tabs of a Google Chrome user in my Java application (not on the same machine). Chrome sync is enabled so the current tabs are synced with Google servers.
According to the documentation of Chrome sync it is done via XMPP. So I guess it should be possible to connect to the Google XMPP server (xmpp.google.com), e.g. via Smack (Java library for XMPP), authenticate and listen for protobuf messages that indicate a tab session change.
Of course, the user's login credentials and the "client_id" Chrome uses to identify clients are available.
But I'm having a hard time understanding the authentication method used to connect to the XMPP server – I can't figure out how it's done in the Chromium source code, and there's no documentation available besides the very low-level comments in the code.
The libjingle library Google uses for its XMPP-based services is only available for C++ and is not well maintained or documented.
So is there anyone who has done something like that before and who can give any advice/hints on how the authentication process works?
I'm not sure Chrome sync uses XMPP, at least not at the level where it exchanges info with the client. It uses Google's Protocol Buffers technology: the protocol is defined in .proto protocol description files, and you can convert them into your language's objects using a special compiler.
The sync server seems to live at https://clients4.google.com/chrome-sync, and the client sends POST requests with a binary body containing a typed ClientToServerMessage.
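To illustrate the shape of such a request (a TypeScript sketch reconstructed from the capture below; the token, message bytes, exact path, and client_id are all placeholders you would have to produce yourself):

// Sketch of the HTTP shape of a sync request, based on the capture below.
// messageBytes would be a serialized ClientToServerMessage protobuf.
const messageBytes = new Uint8Array(/* serialized protobuf goes here */);
const authToken = 'MKhiqZ...'; // placeholder

await fetch(
  'https://clients4.google.com/chrome-sync/command/?client_id=SOME_SPECIAL_STRING',
  {
    method: 'POST',
    headers: {
      'content-type': 'application/octet-stream',
      authorization: `GoogleLogin auth=${authToken}`,
    },
    body: messageBytes,
  },
);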
Here's the output from when a client first connects to the sync server.
The first Python object is a pprint of the 'environ' WSGI variable, which also holds the HTTP headers. The second object (after '====') is the actual protocol message.
{'CONTENT_LENGTH': '54',
'CONTENT_TYPE': 'application/octet-stream',
'GATEWAY_INTERFACE': 'CGI/1.1',
'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
'HTTP_ACCEPT_ENCODING': 'gzip,deflate,sdch',
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8',
'HTTP_AUTHORIZATION': 'GoogleLogin auth=MKhiqZsdz2RV4WrUJzPltxc2smTMcRnlfPALTOpf-Xdy9vsp6yUpS5cGuND0awqrYVUK4lhOJlh6OMsg093eBRghGGIgvWUTzU8PUvquy_c8Xn4sRiz_3tVJcke5eXi3q4qFDa6iVuEbT_0QhyPOjIQyeDOKRpZzMR3rpHsAs0ptFiTtUeTHsoIeUFT9nZPYzkET4-yHbDAp45_dxWdb-U6DPg24',
'HTTP_CONNECTION': 'keep-alive',
'HTTP_HOST': 'localhost:8080',
'HTTP_USER_AGENT': 'Chrome MAC 0.4.21.6 (130497)-devel',
'PATH_INFO': '/chrome-sync/dev/command/',
'QUERY_STRING': 'client_id=SOME_SPECIAL_STRING',
'REMOTE_ADDR': '127.0.0.1',
'REMOTE_PORT': '59031',
'REQUEST_METHOD': 'POST',
'SCRIPT_NAME': '',
'SERVER_NAME': 'vian-bizon.local',
'SERVER_PORT': '8080',
'SERVER_PROTOCOL': 'HTTP/1.0',
'SERVER_SOFTWARE': 'gevent/1.0 Python/2.6',
'wsgi.errors': <open file '<stderr>', mode 'w' at 0x100416140>,
'wsgi.input': <gevent.pywsgi.Input object at 0x102a04250>,
'wsgi.multiprocess': False,
'wsgi.multithread': False,
'wsgi.run_once': False,
'wsgi.url_scheme': 'https',
'wsgi.version': (1, 0)}
'==================================='
share: "MY_EMAIL_WAS_HERE#gmail.com"
protocol_version: 30
message_contents: GET_UPDATES
get_updates {
  caller_info {
    source: NEW_CLIENT
    notifications_enabled: false
  }
  fetch_folders: true
  from_progress_marker {
    data_type_id: 47745
    token: ""
    notification_hint: ""
  }
}
debug_info {
  events {
    type: INITIALIZATION_COMPLETE
  }
  events_dropped: false
}
This happens for OAuth-based authentication. You can see the OAuth token in the HTTP_AUTHORIZATION field. The OAuth token is handed to you when you interact with the 'Google Account Login' HTML dialog. I'm not sure, but it seems the API to obtain an access token for Google services is publicly available.
If you are looking for XMPP auth instead, please see the description of X-GOOGLE-TOKEN auth mechanism here:
Authenticate to Google Talk (XMPP, Smack) using an authToken
For the X-OAUTH2 authorization, you can access the info here: https://developers.google.com/talk/jep_extensions/oauth
And a sample here: http://pits.googlecode.com/svn/trunk/xmpp.c
Note that you can add the XMPP stream flow to the Chrome log file populated on each run of the browser (chrome_debug.log). To enable this, run Chrome with the following options: --enable-logging --v=2