Catch 22? Blocked by CORS policy: Same server, internal/external IP, no SSL - google-chrome

My apologies if this is a duplicate. I can find a million results about CORS policy issues, but not about this specific one:
I developed a simple "speed test" site for my users (WFH employees of my company) to access. It tests speeds across the public net to different datacenters we utilize, and via the users' VPN connection to one of our DCs.
There are more complicated elements, but for a basic round-trip "ping" I have an extremely simple PHP script on the server that contains:
<?php
// Wide-open CORS headers -- fine for a throwaway latency-test endpoint.
header('Access-Control-Allow-Origin: *');
header('Access-Control-Allow-Headers: *');
// Respond with an empty JSON object for the simple round-trip test.
if (isset($_GET['simple']) && $_GET['simple'] === '1') {
    die('{ }');
}
?>
It is called like this:
$.ajax({
    type: 'GET',
    url: sURL,
    // Cache-buster parameter that doubles as the ping's start timestamp.
    data: { ignore: (pingCounter.start = new Date().getTime()) },
    dataType: 'text',
    timeout: iTimeout
})
.done(function (ret) {
    pingCounter.end = new Date().getTime();
    [...] (additional code omitted for brevity)
(I know this has additional overhead other than the raw round-trip network traffic timing, but I don't need sub-ms accuracy. I just need to be able to tell users "the problem is on your end" or "ah yes, the problem is the latency between your house and this particular DC".)
The same server running that PHP code is addressable at the following URLs at the DC wherein our VPN server lies:
http://speedtest-int.mycompany.com/ping.php
http://speedtest-ext.mycompany.com/ping.php
Public DNS resolves like this:
speedtest-ext.mycompany.com IN A 1.1.1.1 (Actual public IP redacted)
speedtest-int.mycompany.com IN A 10.1.1.1 (Actual internal IP redacted)
If I access either URL from my browser directly, it loads fine (which is to say it responds with { }).
When loading via the JS snippet above, the call to http://speedtest-ext.mycompany.com/ping.php works fine.
The call to http://speedtest-int.mycompany.com/ping.php fails with "The request client is not a secure context and the resource is in more-private address space 'private'".
Fair enough; the solution is to add an Access-Control-Allow-Private-Network: true response header, right?
EXCEPT that header is apparently only honored over SSL:
https://developer.chrome.com/blog/private-network-access-update/
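For illustration, honoring that preflight would look something like this, sketched as a hypothetical Node/Express stand-in for my PHP endpoint (Express, the port, and the handler shape are illustrative assumptions, not my actual stack):

const express = require('express'); // assumption: Express, not the real stack
const app = express();

// Preflight: Chrome sends OPTIONS with Access-Control-Request-Private-Network: true
// before letting a public page fetch from a private address.
app.options('/ping.php', (req, res) => {
    res.set('Access-Control-Allow-Origin', '*');
    res.set('Access-Control-Allow-Headers', '*');
    if (req.get('Access-Control-Request-Private-Network') === 'true') {
        // Chrome only accepts this when the requesting page is a secure context.
        res.set('Access-Control-Allow-Private-Network', 'true');
    }
    res.sendStatus(204);
});

// The ping endpoint itself: an empty JSON object, mirroring the PHP above.
app.get('/ping.php', (req, res) => {
    res.set('Access-Control-Allow-Origin', '*');
    res.type('text').send('{ }');
});

app.listen(80); // arbitrary port for the sketch

Even with such a handler in place, Chrome only applies the header from a secure context, which is exactly the catch described next.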
I have a self-signed cert on there, but Chrome of course rejects that by policy, since it isn't from a trusted CA.
I could just get a Let's Encrypt cert for multiple subdomains. EXCEPT it will never validate http://speedtest-int.mycompany.com via an HTTP-01 challenge, because the Let's Encrypt servers can't reach that host to verify ownership; it resolves to a private IP.
I have no control over most of my users' machines, so I can't necessarily install trusted internal certs or change browser options. Most users use Chrome.
So is my solution to buy a UCC or wildcard cert?
I feel like I'm in a catch-22, and I don't want to spend however-much on a UCC cert for an internal app that will be very very very occasionally used by one of our 25 home-based employees when I want to prove that their home "internet is bad" and not the corp network.
Thanks in advance; I'm sure there's a stupidly obvious solution I'm not seeing.
(I'm considering pushing a /32 route to my VPN users for another real public IP to be used in place of the internal IP. Then I could have the "internal" test run against an otherwise publicly accessible IP, which could be validated by Let's Encrypt, but VPN users would hit it via the VPN. Is that silly?)
Edit: If anyone is curious -- or it helps to clarify my goal here -- this is the output when accessing the speedtest page:
http://s.co.tt/wp-content/uploads/2021/12/Internal_Speedtest_Example-Redacted.png
It repeats for 20 cycles (or until stopped) and runs each element a varying number of times per cycle, collecting the average time for each. It ain't pretty, but it work(ed).

Related

AASA - Apple App Site Association - Not working

I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but that is not sufficient for my scenario, as I will explain. I have to say it didn't work for me straight away; it eventually started working after a clean build, so at the time I thought that was the issue, but now I am not so sure.
I am using Expo with EAS build. We have a multi-tenant application. From a single codebase we deploy to multiple apps in the store. All are on the same team ID but they are separate applications and use separate credentials, nothing is shared.
I am confident my app's textContentType of username and password on my TextFields is correct, as this has not changed from when I managed to get it working originally, and I have checked it countless times.
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed however is when going to store a password manually using "add password" via iCloudKeychain from the keyboard accessory this does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it setup to support wildcard subdomains with a LetsEncrypt provisioned wildcard SSL certificate. I then have edge functions which change the content of my index.html and apple-app-site-association file dynamically based on the requested subdomain.
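To illustrate, the per-subdomain AASA response from an Edge Function looks roughly like this (a simplified sketch, not my actual function; the file path and the tenant-to-bundle-ID mapping are illustrative assumptions):

// netlify/edge-functions/aasa.js -- hypothetical sketch of a per-tenant AASA.
export default async (request) => {
    const host = new URL(request.url).hostname; // e.g. TENANT_SUBDOMAIN.example.app
    const tenant = host.split('.')[0];          // assume tenant = first DNS label
    const aasa = {
        webcredentials: {
            // Illustrative mapping from tenant to bundle ID.
            apps: [`MYTEAMID.com.cf.example.${tenant}.ios`],
        },
    };
    return new Response(JSON.stringify(aasa), {
        headers: { 'content-type': 'application/json' },
    });
};

export const config = { path: '/.well-known/apple-app-site-association' };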
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo 47 and EAS build. I have added in the appropriate associated domains configuration and I can see this when introspecting my entitlements under com.apple.developer.associated-domains and it is correct.
I am using TestFlight for testing. I am doing a --clean-build on EAS every time and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process, which does this automatically. This must be using the correct provisioning profile; otherwise I would get a build failure, as the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the content-length header in the Edge function so it is correctly calculated by the network instead and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
I added an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA verification tool to verify correctness. Branch.io says the file is fine, and Apple says it's fine too, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as the tool uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs, on install I can see the AASA requests for the correct domains. I see no errors from swcd; I just see Beginning data task AASA-XXXX with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now) I do not see quite what I would expect, but the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles there is one per application install, which is correct. The request is marked as Unknown, and when I look at the request the host is shown but, as you would expect from SSL, I see no path. The info says METHOD: CONNECT with Error - Input Error: EOF. This is the only error I see; I am not sure if it is a red herring and something to do with Charles. Given the error, as you would expect, there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose I see the following at the timestamp in my console log in the swcutil_show.txt device log. This looks correct in comparison to other apps webcredentials and applinks services I see there and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 +0000
At end of file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex on another domain which hasn't been seen before by Apple. I have tried using a subdomain with a root domain serving the same content and I have tried the subdomain and root domain on their own. I have also tried not using the Edge functions and having static files but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
AASA content comes back with the correct payload and Content-Type: application/json and Content-Length headers, both from Apple's CDN and the origin. When I somehow had this working in my initial test it was on a root domain and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different or duplicated content, and I block the www subdomain. I have also tried it with a www subdomain, for the record.
{
    "applinks": {
        "details": [
            {
                "appIDs": [
                    "MYTEAMID.com.cf.example.b2c.ios"
                ],
                "components": [
                    {
                        "#": "no_universal_links",
                        "exclude": true,
                        "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
                    }
                ]
            }
        ]
    },
    "webcredentials": {
        "apps": [
            "MYTEAMID.com.cf.example.b2c.ios"
        ]
    }
}
I have also tried this with the older format:
{
    "applinks": {
        "apps": [],
        "details": [
            {
                "appID": "MYTEAMID.com.cf.example.b2c.ios",
                "paths": [
                    "*"
                ]
            }
        ]
    },
    "webcredentials": {
        "apps": [
            "MYTEAMID.com.cf.example.b2c.ios"
        ]
    }
}
associatedDomains (iOS Expo config)
associatedDomains: [
    `webcredentials:${SUBDOMAIN}.example.app`,
    `applinks:${SUBDOMAIN}.example.app`,
],
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions I would really appreciate it. I am very confused as to how the device's request seems correct and the CDN content is correct, yet it is still not working. It's worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so the keychain-to-domain association store must be different per app.
I am wondering if it's the Let's Encrypt wildcard SSL certificate, but I wouldn't expect the file to verify, and Apple to cache it, if that were the case. It seems very unlikely to me, but it is the only thing I haven't tried at this point.
Many Thanks,
Mark

Serve dynamic content with Firebase Hosting/Functions in EU

I would like to serve a Next.js app in Europe using Firebase Hosting & Functions capabilities.
I do understand from the doc that:
If you are using HTTP functions to serve dynamic content for Firebase
Hosting, you must use us-central1
and that
Firebase Hosting supports Cloud Functions in us-central1 only
It's pretty clear: you must use us-central1. But my main target is Europe.
I've read the following on the Cloud Functions locations guide:
For HTTP and callable functions, we recommend that you first set your
function to the destination region, or closest to where most expected
customers are located, and then alter your original function to
redirect its HTTP request to the new function (they can have the same
name). [Solution 1] If clients of your HTTP function support
redirects, you can simply change your original function to return an
HTTP redirect status (301) along with the URL of your new function.
[Solution 2] If your clients do not handle redirects well, you can
proxy the request from the original function to the new function by
initiating a new request from the original function to the new
function. The final step is to ensure that all clients are calling the
new function.
I've highlighted what seems to be two solutions to my initial problem:
Solution 1
Have a us-central1 function that sends a 301 redirect to https://europe-west1-[myProject].cloudfunctions.net/[myEuropeanFunction]
Have a europe-west1 function that does the job (in my case, serve the Next.js app)
Happily using Firestore located in europe-west1
This would only work if clients of the HTTP function support redirects. In my case, it's fine: all browsers support redirection.
// Default region (us-central1): only redirects to the EU function.
exports.nextServer = functions
    .https
    .onRequest((req, res) => {
        res.set('location', 'https://europe-west1-<my-project>.cloudfunctions.net/nextServerEurope');
        res.status(301).send();
    });

// europe-west1: actually serves the Next.js app.
exports.nextServerEurope = functions
    .region('europe-west1')
    .https
    .onRequest((req, res) => {
        return server.prepare().then(() => nextjsHandle(req, res));
    });
The issue with that solution is that the URL in the browser changes to https://europe-west1-<my-project>.cloudfunctions.net/nextServerEurope :-/
Solution 2
Have a us-central1 function that initiates a new/proxy request to the europe-west1 function
Have the same europe-west1 function that does the job (in my case, serve the Next.js app)
Still happily using Firestore located in europe-west1
By proxy request (as suggested in the guide), it would mean using a lib like axios, I suppose (see the sketch below). I know there are also libraries for Node to perform proxy requests.
However, with that solution, the first issue I can think of is the unnecessary delay introduced by passing through the US endpoint:
client -> us endpoint -> eu endpoint -> do stuff -> us endpoint -> client
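A rough sketch of what I mean (assuming axios, and reusing the function names from Solution 1; a real version would also need care with hop-by-hop headers like content-encoding):

// us-central1 function that proxies every request to the europe-west1 function.
const functions = require('firebase-functions');
const axios = require('axios'); // assumption: axios as the proxy client

const EU_BASE =
    'https://europe-west1-<my-project>.cloudfunctions.net/nextServerEurope';

exports.nextServer = functions.https.onRequest(async (req, res) => {
    const upstream = await axios({
        method: req.method,
        url: EU_BASE + req.url,
        headers: { ...req.headers, host: undefined }, // drop the original Host header
        data: req.rawBody,                            // forward the body untouched
        responseType: 'arraybuffer',
        validateStatus: () => true,                   // pass upstream errors through
    });
    res.status(upstream.status).set(upstream.headers).send(upstream.data);
});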
Billing-wise, I'm wondering what the impact would be.
I know that two services from different regions calling each other can increase both latency and billing (egress).
With the first solution, there's no egress traffic, as it's only a redirect to the European endpoint. But the redirect itself is not a valid solution in my case.
It's unclear to me what the additional billing cost would be with the second solution (besides the latency cost): is the traffic for the proxy request from the US to the EU going to be expensive?
To wrap-up:
Solution 1 is easy but leads to a non-transparent redirect.
Solution 2 seems OK, but it requires an extra HTTP request, which means extra latency (and potentially extra billing).
In the end, neither solution seems quite right.
Therefore my question:
How do you serve dynamic content in Europe using Firebase Hosting and Functions?
Firebase Hosting only supports Cloud Functions in us-central1, as you mentioned and as stated in the official Firebase Hosting documentation.
I have created a Feature Request in the Public Issue Tracker to support other regions when using Firebase Hosting with Cloud Functions. Please note there is no ETA for when this will be implemented.
So, as @Doug Stevenson suggests, you can use Firebase Hosting with Cloud Run instead to serve your dynamic content.
Just to update, as of August 2022: the latency issue can now be solved easily.
Firebase Hosting rewrites to CF3 are able to be done to any CF3
region, not just us-central1.
Reference: Feature Request Ticket
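For example, a Hosting rewrite along these lines should now work (a sketch based on the current firebase.json rewrite syntax, reusing nextServerEurope from the question):

{
    "hosting": {
        "rewrites": [
            {
                "source": "**",
                "function": {
                    "functionId": "nextServerEurope",
                    "region": "europe-west1"
                }
            }
        ]
    }
}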

Receiving JSON POST requests from HPKP error respondents

I'm experimenting with setting up HPKP (https://scotthelme.co.uk/hpkp-http-public-key-pinning/) on my web server and one of its options is to specify an error reporting URI in the header for clients to send error notices to in the form of a JSON POST request structured as such:
{
"date-time": date-time,
"hostname": hostname,
"port": port,
"effective-expiration-date": expiration-date,
"include-subdomains": include-subdomains,
"noted-hostname": noted-hostname,
"served-certificate-chain": [
pem1, ... pemN
],
"validated-certificate-chain": [
pem1, ... pemN
],
"known-pins": [
known-pin1, ... known-pinN
]
}
My question is how can I set something up within Linux to listen for the JSON POSTs on port 80 (or 443)?
Does anything exist for this already? Thanks everyone for your help.
Scott Helme, whose link you included, also runs this service, which takes care of it for you:
https://report-uri.io
Alternatively, if you want to try it out yourself, any web scripting language (CGI via Perl, PHP, etc.) should be able to listen for a POST request and dump it out to a log file. Personally I use a NodeJS service, but anything will do. I'm not aware of any scripts people have shared, but that's probably because there's no need; it's so simple (listen for a POST request, print out the results).
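To show just how simple, a bare-bones Node listener could look like this (a minimal sketch in that spirit, not my actual service; the port and log-file name are arbitrary choices):

// hpkp-listener.js -- append incoming HPKP violation reports to a log file.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
    if (req.method !== 'POST') {
        res.statusCode = 405; // only JSON POST reports are expected
        return res.end();
    }
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
        // One line of JSON per report, for later inspection.
        fs.appendFile('hpkp-reports.log', body + '\n', () => {});
        res.statusCode = 204; // report received, nothing to return
        res.end();
    });
}).listen(80);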
Also, you cannot listen on port 443 on the same domain as the site you are monitoring: the reporting endpoint would be covered by the same HPKP policy, so clients won't be able to connect, and the only time they want to report is precisely when they can't connect! It would work fine in report-only mode, though.
I know you're only experimenting, but I would caution you to be very careful with HPKP, as it's very easy to brick your site with it, and it adds a lot of extra considerations to certificate renewal. Personally I don't think it's that great, as the risk it introduces, to me anyway, far outweighs the risk it mitigates for most sites. More thoughts on that from me here: https://www.tunetheweb.com/security/http-security-headers/hpkp/#downsides

Problems with WebSession when executing a WebService (GeneXus)

Here is the problem: I have a KB called APP1 that executes a WebService of an Identity Provider (which centralizes all the logins/sessions for different applications). The service returns true if there is a logged-in user in the current WebSession who has been granted access to the application, and false otherwise. When I create a web panel in the same KB as the Identity Provider, it works just fine: I get TRUE when there's a logged-in user and FALSE when there's not. But when I call it from APP1 it always returns false. I believe the problem is that the WebSession won't work properly when called through a WS. Any ideas on how to solve it?
My first advice is to try using GAM Single Sign on (X Evolution 3)
WebServices should be stateless. I think that using the database instead of the WebSession could do the job.
Nonetheless, in order to call a RESTful WebService you will have to do something more complex, such as dealing with CookieContainers, as stated in the following link.
Consider this solution:
User tries to access App1
There's no web session (App1 doesn't know who is connecting)
App1 redirects User to an IdentityProvider's special login page
If User is not logged, it provides credentials and logs in
IdentityProvider has a session for the user (it knows who is connecting), then it redirects to the referer, appending an encrypted userid parameter to the URL.
App1 decodes the parameter, now it knows who is connecting.
App1 saves the userid to the web session, now the user is authenticated
App1 and the IdentityProvider must share an encryption key (a concrete sketch of this handoff follows below).
Consider that if the encryption key gets compromised or cracked, anyone can impersonate another user.
Depending on how secure you want your system to be, you should study other security issues:
every time the user connects, their encrypted login is the same and it shows in the URL; this can be easily solved by adding a nonce or salt.
The system could be abused by generating multiple requests until a valid encrypted userid is hit. It can be mitigated by using a large salt and/or blocking multiple attempts from the same source.
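To make the handoff concrete, here is a minimal sketch of the token exchange in Node (illustrative only, not GeneXus code; it uses an HMAC-signed userid-plus-timestamp token rather than raw encryption, which also covers the nonce and replay points above):

const crypto = require('crypto');

// Shared secret known to both App1 and the IdentityProvider.
const SHARED_KEY = process.env.SSO_SHARED_KEY;

function sign(payload) {
    return crypto.createHmac('sha256', SHARED_KEY).update(payload).digest('hex');
}

// IdentityProvider side: build the token appended to the redirect URL.
// Assumes the userid itself contains no '.' characters.
function makeToken(userId) {
    const payload = `${userId}.${Date.now()}`;
    return Buffer.from(`${payload}.${sign(payload)}`).toString('base64url');
}

// App1 side: verify the signature and freshness, then recover the userid.
function readToken(token, maxAgeMs = 60 * 1000) {
    const [userId, ts, sig] = Buffer.from(token, 'base64url').toString().split('.');
    const expected = sign(`${userId}.${ts}`);
    if (!sig || sig.length !== expected.length ||
        !crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected))) {
        return null; // tampered or malformed token
    }
    if (Date.now() - Number(ts) > maxAgeMs) {
        return null; // stale token: limits replay of a captured URL
    }
    return userId;
}

An HMAC keeps the userid readable but tamper-proof; if the userid itself must be hidden, swap the HMAC for authenticated encryption.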
Note that this isn't a tested protocol and I didn't study the security in depth. I got some inspiration from OpenId, but this is a simplified protocol and I could be missing security holes.

How to fix cross-site origin policy for server and web-site

I'm using Dropwizard, which I'm hosting, along with a website, on Google Cloud (GCE). This means that there are 2 locations currently active:
Some.IP.Address - UI
Some.IP.Address:8080 - Dropwizard server
When the UI tries to call anything on my Dropwizard server, I get cross-origin errors, which is understandable. However, this is posing a problem for me. How do I fix this? It would be great if I could somehow spoof the addresses so that I don't have to fully qualify the resource in the UI.
What I'm looking to do is this:
$.get('/provider/upload/display_information')
Or, if I have to fully qualify
$.get('http://Some.IP.Address:8080/provider/upload/display_information')
I tried setting Origin Filters in Dropwizard per this google groups thread (https://groups.google.com/forum/#!topic/dropwizard-user/ybDOTOxjlLI), but it doesn't seem to work.
In the index.html that is served by the server at http://Some.IP.Address you might have a jQuery script that looks as follows.
$.get('http://Some.IP.Address:8080/provider/upload/display_information', data, callback);
Of course your browser will not allow accessing http://Some.IP.Address:8080 due to the Same-Origin Policy (SOP): the protocol (http, https), the host, and the port all have to be the same.
To achieve Cross-Origin Resource Sharing (CORS) on Dropwizard, you have to add a CrossOriginFilter to the servlet environment. This filter will add the Access-Control-* headers to every response the server sends. In the run method of your Dropwizard application write:
import java.util.EnumSet;

import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;

import io.dropwizard.Application;
import io.dropwizard.setup.Environment;
import org.eclipse.jetty.servlets.CrossOriginFilter;

public class SomeApplication extends Application<SomeConfiguration> {
    @Override
    public void run(SomeConfiguration config, Environment environment) throws Exception {
        // Register the CORS filter for every request path.
        FilterRegistration.Dynamic filter =
            environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter("allowedOrigins", "http://Some.IP.Address"); // allowed origins, comma separated
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowedMethods", "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter("preflightMaxAge", "5184000"); // 2 months
        filter.setInitParameter("allowCredentials", "true");
        // ...
    }
    // ...
}
This solution works for Dropwizard 0.7.0 and can be found on https://groups.google.com/d/msg/dropwizard-user/xl5dc_i8V24/gbspHyl4y5QJ.
Have a look at http://www.eclipse.org/jetty/documentation/current/cross-origin-filter.html for a detailed description of the initialisation parameters of the CrossOriginFilter.