Receiving JSON POST requests from HPKP error respondents

I'm experimenting with setting up HPKP (https://scotthelme.co.uk/hpkp-http-public-key-pinning/) on my web server. One of its options is to specify an error-reporting URI in the header; clients send error notices to that URI as a JSON POST request structured like this:
{
    "date-time": date-time,
    "hostname": hostname,
    "port": port,
    "effective-expiration-date": expiration-date,
    "include-subdomains": include-subdomains,
    "noted-hostname": noted-hostname,
    "served-certificate-chain": [
        pem1, ... pemN
    ],
    "validated-certificate-chain": [
        pem1, ... pemN
    ],
    "known-pins": [
        known-pin1, ... known-pinN
    ]
}
My question is: how can I set something up on Linux to listen for these JSON POSTs on port 80 (or 443)?
Does anything exist for this already? Thanks, everyone, for your help.

Scott Helme, whose link you included, also runs this service, which takes care of it for you:
https://report-uri.io
Alternatively, if you want to try it out yourself, any web scripting language (CGI via Perl, PHP, etc.) should be able to listen for a POST request and dump it out to a log file. Personally I use a NodeJS service, but anything will do. I'm not aware of any scripts people have shared, but that's probably because there's no need, as it's so simple (listen for a POST request, print out the results); a sketch follows below.
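For illustration, here's a minimal sketch using only Node's built-in modules; the log file name and the choice of port 80 are my own arbitrary assumptions, and a real deployment would want size caps and rate limiting on the request body:

// Minimal HPKP report collector (sketch). Appends each valid JSON report
// to a log file. Binding port 80 requires root (or CAP_NET_BIND_SERVICE).
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
    if (req.method !== 'POST') {
        res.writeHead(405); // Method Not Allowed
        return res.end();
    }
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
        try {
            // Parse first so malformed payloads never reach the log.
            const report = JSON.parse(body);
            fs.appendFile('hpkp-reports.log', JSON.stringify(report) + '\n', () => {});
            res.writeHead(204); // No Content: nothing useful to send back
        } catch (e) {
            res.writeHead(400); // Bad Request: body was not valid JSON
        }
        res.end();
    });
}).listen(80);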
Also, you cannot listen on port 443 on the same domain as the site you are monitoring: the reporting request is itself subject to the HPKP policy, so the client won't be able to connect, and the only time you want a report is precisely when it can't connect! It would work fine in report-only mode, though.
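For reference, a report-only policy uses the Public-Key-Pins-Report-Only header from RFC 7469; the pin values and report host below are placeholders, not real values:
Public-Key-Pins-Report-Only: pin-sha256="PRIMARY_PIN_BASE64="; pin-sha256="BACKUP_PIN_BASE64="; report-uri="https://report.example.net/hpkp-report"
A browser that fails validation against this policy still loads the site; it just POSTs the JSON report to the report-uri.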
I know you're only experimenting, but I would caution you to be very careful with HPKP, as it's very easy to brick your site with this, and it adds a lot of extra considerations to certificate renewal. Personally I don't think it's that great, as the risk it introduces, to me anyway, far outweighs the risk it mitigates for most sites. More thoughts on that from me here: https://www.tunetheweb.com/security/http-security-headers/hpkp/#downsides

AASA - Apple App Site Association - Not working

I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but it is not sufficient for my scenario, as I will explain. It didn't work for me straight away, I have to say, but it eventually started working after a clean build, so I thought that was the issue; now I am not so sure.
I am using Expo with EAS build. We have a multi-tenant application. From a single codebase we deploy to multiple apps in the store. All are on the same team ID but they are separate applications and use separate credentials, nothing is shared.
I am confident my app's textContentType values of username and password on my TextFields are correct, as this has not changed from when I managed to get it working originally, and I have checked it countless times.
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed however is when going to store a password manually using "add password" via iCloudKeychain from the keyboard accessory this does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it set up to support wildcard subdomains with a Let's Encrypt-provisioned wildcard SSL certificate. I then have edge functions which change the content of my index.html and apple-app-site-association file dynamically based on the requested subdomain.
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo 47 and EAS build. I have added the appropriate associated domains configuration, and I can see it when introspecting my entitlements under com.apple.developer.associated-domains; it is correct.
I am using TestFlight for testing. I am doing a --clean-build on EAS every time, and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process, which does this automatically. It must be using the correct provisioning profile; otherwise I would get a build failure, as the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the content-length header in the Edge function so it is correctly calculated by the network instead and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
I added in an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA verification tool to verify correctness. Branch.io says the file is fine, and Apple says it's fine also, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as it uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs on install, I can see the AASA requests for the correct domains. I see no errors from swcd; I just see Beginning data task AASA-XXXX with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now), I do not see quite what I would expect, but the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles, there is one per application install, which is correct. The request is marked as Unknown, and when I look at the request the host is shown, but, as you would expect from SSL, I see no path. The info says METHOD: CONNECT with Error - Input Error: EOF. This is the only error I see; I am not sure if it is a red herring and something to do with Charles. Given the error, as you would expect, there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose, I see the following in the swcutil_show.txt device log at the timestamp from my console log. This looks correct in comparison to the webcredentials and applinks services of other apps I see there, and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 + 0000
At the end of the file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex domain on another domain which Apple hasn't seen before. I have tried using a subdomain with a root domain serving the same content, and I have tried the subdomain and root domain on their own. I have also tried not using the Edge functions and having static files, but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
AASA content comes back with the correct payload and Content-Type: application/json and Content-Length headers, both from Apple's CDN and the origin. When I somehow had this working in my initial test, it was on a root domain and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different or duplicated content, and I block the www subdomain (I have also tried it with a www subdomain, for the record).
{
    "applinks": {
        "details": [
            {
                "appIDs": [
                    "MYTEAMID.com.cf.example.b2c.ios"
                ],
                "components": [
                    {
                        "#": "no_universal_links",
                        "exclude": true,
                        "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
                    }
                ]
            }
        ]
    },
    "webcredentials": {
        "apps": [
            "MYTEAMID.com.cf.example.b2c.ios"
        ]
    }
}
I have also tried this with the older format:
{
    "applinks": {
        "apps": [],
        "details": [
            {
                "appID": "MYTEAMID.com.cf.example.b2c.ios",
                "paths": [
                    "*"
                ]
            }
        ]
    },
    "webcredentials": {
        "apps": [
            "MYTEAMID.com.cf.example.b2c.ios"
        ]
    }
}
associatedDomains (iOS, Expo config)
associatedDomains: [
    `webcredentials:${SUBDOMAIN}.example.app`,
    `applinks:${SUBDOMAIN}.example.app`,
],
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions I would really appreciate it. I am very confused how the device's request seems correct and the CDN content is correct but it is still not working. It's also worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so the keychain-to-domain association store must be different.
I am wondering if it's the Let's Encrypt wildcard SSL certificate, but I wouldn't expect the file to verify and for Apple to cache it if that were the case. It seems very unlikely to me, but it is the only thing I haven't tried at this point.
Many Thanks,
Mark

understanding cuckoo sandbox json report

I have set up Cuckoo Sandbox and am already analyzing some malware.
The problem is I'm having a difficult time trying to understand the JSON report. Could anyone please help me understand the following: UDP, procmemory, dns_servers, http, icmp, domains, apistats, processtree?
Just a brief description of what they are, please.
Attached is a sample picture of the JSON report.
Thank you in advance.
Well, I think the output is pretty clear if you just run one sample, but anyway, if you want to better understand the output, you can check this paper.
As far as I know, "domains", "DNS", "UDP", "TCP", etc. show the communications of the sample using those protocols. For example, if a malware sample tries to connect to a URL, then you will have a DNS query in the "DNS" section, an HTTP request in the "HTTP" section, a domain name in the "domains" section, and a UDP communication in the "UDP" section (since DNS queries usually go over UDP), all related to that one URL the malware tries to connect to.
"apistats" shows the statistics about the API that are called by the sample file.
"procmemory" shows the details about different region of the memory with their size, protection level, start and end address.
I hope it helps.

Catch 22? Blocked by CORS policy: Same server, internal/external IP, no SSL

My apologies if this is a duplicate. I can find a million results about CORS policy issues, but not about this specific one:
I developed a simple "speed test" site for my users (wfh employees of my company) to access. It tests speeds across the public net to different datacenters we utilize, and via the users' VPN connection to one of our DCs.
There are more complicated elements, but for a basic round-trip "ping" I have an extremely simple PHP script on the server that contains:
<?php
header('Access-Control-Allow-Origin: *');
header('Access-Control-Allow-Headers: *');
// isset() guard avoids an undefined-index notice when the parameter is absent
if (isset($_GET['simple']) && $_GET['simple'] == '1')
    die('{ }');
?>
It is called like this:
$.ajax({
    type: 'GET',
    url: sURL,
    data: { ignore: (pingCounter.start = new Date().getTime()) },
    dataType: 'text',
    timeout: iTimeout
})
.done(function(ret) {
    pingCounter.end = new Date().getTime();
    [...] (additional code omitted for brevity)
(I know this has additional overhead other than the raw round-trip network traffic timing, but I don't need sub-ms accuracy. I just need to be able to tell users "the problem is on your end" or "ah yes, the problem is the latency between your house and this particular DC".)
The same server running that PHP code is addressable at the following URLs at the DC wherein our VPN server lies:
http://speedtest-int.mycompany.com/ping.php
http://speedtest-ext.mycompany.com/ping.php
Public DNS resolves like this:
speedtest-ext.mycompany.com IN A 1.1.1.1 (actual public IP redacted)
speedtest-int.mycompany.com IN A 10.1.1.1 (actual internal IP redacted)
If I access either URL from my browser directly, it loads fine (which is to say it responds with { }).
When loading via the JS snippet above, the call to http://speedtest-ext.mycompany.com/ping.php works fine.
The call to http://speedtest-int.mycompany.com/ping.php fails with "The request client is not a secure context and the resource is in more-private address space 'private'".
Fair enough, the solution is to add Access-Control-Allow-Private-Network: *, right?
EXCEPT that apparently can only be used with SSL:
https://developer.chrome.com/blog/private-network-access-update/
I have a self-signed cert on there, but that obviously fails by policy for that reason.
I could just get a Let's Encrypt cert for multiple subdomains, EXCEPT it will never validate the URL http://speedtest-int.mycompany.com, because the Let's Encrypt servers won't be able to reach that host to validate ownership, as it resolves to a private IP.
I have no control over most of my users' machines, so I can't necessarily install trusted internal certs or change browser options. Most users use Chrome.
So is my solution to buy a UCC or wildcard cert?
I feel like I'm in a catch-22, and I don't want to spend however-much on a UCC cert for an internal app that will be very very very occasionally used by one of our 25 home-based employees when I want to prove that their home "internet is bad" and not the corp network.
Thanks in advance; I'm sure there's a stupidly obvious solution I'm not seeing.
(I'm considering pushing a /32 route to my VPN users for another real public IP to be used in place of the internal IP. Then I can have the "internal" test run against an otherwise publicly accessible IP which could be validated by LetsEncrypt, but VPN users would hit it via the VPN. Is that silly?)
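For what it's worth, if the VPN server happens to be OpenVPN, that idea would be a one-line addition to the server config (the address below is a placeholder for the second public IP):
push "route 203.0.113.5 255.255.255.255"
Clients would then reach that IP through the tunnel, while Let's Encrypt's validation servers would reach it directly over the public internet.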
Edit: If anyone is curious -- or it helps to clarify my goal here -- this is the output when accessing the speedtest page:
http://s.co.tt/wp-content/uploads/2021/12/Internal_Speedtest_Example-Redacted.png
It repeats for 20 cycles (or until stopped) and runs each element a varying number of times per cycle, collecting the average time for each. It ain't pretty, but it work(ed).

Unable to add fine-grained access for the Elasticsearch service via CloudFormation

I am creating a CloudFormation stack with the Elasticsearch service; however, it fails for AdvancedSecurityOptions, which works perfectly fine with aws es create-elasticsearch-domain.
My JSON template snippet is below:
...
"AdvancedOptions": {
    "rest.action.multi.allow_explicit_index": true
},
"AdvancedSecurityOptions": {
    "Enabled": true,
    "InternalUserDatabaseEnabled": false,
    "MasterUserOptions": {
        "MasterUserARN": "arn:aws:iam::1234567890:role/role_name"
    }
},
"DomainName": {
    "Ref": "ESDomainName"
}
...
I am unable to get this code working; any help related to fine-grained access control would be really appreciated.
AdvancedSecurityOptions is the latest addition to the Amazon Elasticsearch service, added recently as part of fine-grained access control. For now, this is available only via the console, CLI, and API.
I am not sure if the thread contains outdated info, but according to the official AWS documentation at this link, it should be possible to use AdvancedSecurityOptions for fine-grained access control. It even states at the top of the page that it is meant to be used for FGAC.
Continuing from DNakevski's answer above: for FGAC we need to ensure the following three settings in the CFN template are set to true, since they serve as prerequisites:
EncryptionAtRestOptions
NodeToNodeEncryptionOptions and
HTTPS (DomainEndpointOptions.EnforceHTTPS).
Further, the important parameter for FGAC in the CFN template is AdvancedSecurityOptions, which needs to be set with Enabled: true.
Amazon ES/Open Distro for ES provides two ways to secure a domain with FGAC: one is using an IAM user as the master user, the other is basic auth.
If you want to take the IAM route, set InternalUserDatabaseEnabled to false and provide only the parameter MasterUserARN: "IAM user ARN" under the MasterUserOptions field.
If you want to take the basic auth (username and password) approach, set InternalUserDatabaseEnabled to true and provide MasterUserName: "any-name" and MasterUserPassword: "xxx". Use at least one lowercase letter, one uppercase letter, one digit, and one special character for the password, or the CFN template will roll back; the failure message is easily seen on the CFN console under Events.
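Putting those pieces together, here is a minimal sketch of the relevant Properties section of an AWS::Elasticsearch::Domain resource; the domain name, version, and ARN are placeholders, and sizing/cluster options are omitted:
"Properties": {
    "DomainName": "my-domain",
    "ElasticsearchVersion": "7.9",
    "EncryptionAtRestOptions": { "Enabled": true },
    "NodeToNodeEncryptionOptions": { "Enabled": true },
    "DomainEndpointOptions": { "EnforceHTTPS": true },
    "AdvancedSecurityOptions": {
        "Enabled": true,
        "InternalUserDatabaseEnabled": false,
        "MasterUserOptions": {
            "MasterUserARN": "arn:aws:iam::123456789012:role/role_name"
        }
    }
}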
I have a simple working CFN yaml here doing the same just in case.

GWT / Comet: any experience?

Is there any way to "subscribe" from GWT to a stream of JSON objects and listen for incoming events on a keep-alive connection, without trying to fetch them all at once? I believe the buzzword-du-jour for this technology is "Comet".
Let's assume that I have an HTTP service which opens a keep-alive connection and puts JSON objects with incoming stock quotes there in real time:
{"symbol": "AAPL", "bid": "88.84", "ask":"88.86"}
{"symbol": "AAPL", "bid": "88.85", "ask":"88.87"}
{"symbol": "IBM", "bid": "87.48", "ask":"87.49"}
{"symbol": "GOOG", "bid": "305.64", "ask":"305.67"}
...
I need to listen to these events and update GWT components (tables, labels) in real time. Any ideas how to do it?
There is a GWT Comet Module for StreamHub:
http://code.google.com/p/gwt-comet-streamhub/
StreamHub is a Comet server with a free community edition. There is an example of it in action here.
You'll need to download the StreamHub Comet server and create a new SubscriptionListener, use the StockDemo example as a starting point, then create a new JsonPayload to stream the data:
Payload payload = new JsonPayload("AAPL");
payload.addField("bid", "88.84");
payload.addField("ask", "88.86");
server.publish("AAPL", payload);
...
Download the JAR from the Google Code site, add it to your GWT project's classpath, and add the includes to your GWT module:
<inherits name="com.google.gwt.json.JSON" />
<inherits name="com.streamhub.StreamHubGWTAdapter" />
Connect and subscribe from your GWT code:
StreamHubGWTAdapter streamhub = new StreamHubGWTAdapter();
streamhub.connect("http://localhost:7979/");
StreamHubGWTUpdateListener listener = new StockListener();
streamhub.subscribe("AAPL", listener);
streamhub.subscribe("IBM", listener);
streamhub.subscribe("GOOG", listener);
...
Then process the updates how you like in the update listener (also in the GWT code):
public class StockListener implements StreamHubGWTUpdateListener {
    public void onUpdate(String topic, JSONObject update) {
        String bid = ((JSONString)update.get("bid")).stringValue();
        String ask = ((JSONString)update.get("ask")).stringValue();
        String symbol = topic;
        ...
    }
}
Don't forget to include streamhub-min.js in your GWT project's main HTML page.
I have used this technique in a couple of projects, though it does have its problems. I should note that I have only done this specifically through GWT-RPC, but the principle is the same for whatever mechanism you are using to handle data. Depending on what exactly you are doing, there might not be much need to overcomplicate things.
First off, on the client side, I do not believe that GWT can properly support any sort of streaming data; the connection has to close before the client can actually process the data. What this means from a server-push standpoint is that your client will connect to the server and block until data is available, at which point it will return. Whatever code executes on the completed connection should immediately re-open a new connection with the server to wait for more data.
From the server side of things, you simply drop into a wait cycle (the java.util.concurrent package is particularly handy for this, with blocking and timeouts) until new data is available. At that point the server can return a package of data down to the client, which will update accordingly; a sketch follows the list below. There are a bunch of considerations depending on what your data flow is like, but here are a few to think about:
Is a client getting every single update important? If so, then the server needs to cache any potential events between the time the client gets some data and then reconnects.
Are there going to be gobs of updates? If this is the case, it might be wiser to package up a number of updates and push down chunks at a time every several seconds rather than having the client get one update at a time.
The server will likely need a way to detect if a client has gone away to avoid piling up huge amounts of cached packages for that client.
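Here is a rough sketch of the server-side wait cycle described above; the class and method names are invented for illustration, and a real implementation would keep one queue (or mailbox) per connected client rather than a single shared one:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class LongPollMailbox {
    // Pending updates for one client, e.g. JSON-encoded stock quotes.
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    // Called by whatever produces events (e.g. a market-data feed).
    public void publish(String quoteJson) {
        updates.offer(quoteJson);
    }

    // Called from the RPC/servlet layer on each client poll. Blocks until
    // data arrives or 30 seconds pass; a null return tells the client to
    // reconnect and wait again.
    public String awaitUpdate() throws InterruptedException {
        return updates.poll(30, TimeUnit.SECONDS);
    }
}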
I found there were two problems with the server-push approach. With lots of clients, it means lots of open connections on the web server; depending on the web server in question, this could mean lots of threads being created and held open. The second has to do with the typical browser's limit of two requests per domain. If you are able to serve your images, CSS, and other static content from second-level domains, this problem can be mitigated.
There is indeed a Comet-like library for GWT: http://code.google.com/p/gwteventservice/
I've not personally used it, so I can't really vouch for whether it's good or not, but the documentation seems quite good. Worth a try.
There are a few other ones I've seen, like gwt-rocket's cometd library.
Some preliminary ideas for a Comet implementation for GWT can be found here... though I wonder whether there is something more mature.
Also, some insight on GWT/Comet integration is available there, using even more cutting-and-bleeding-edge technology: "Jetty Continuations". Worth taking a look.
Here you can find a description (with some source samples) of how to do this for IBM WebSphere Application Server. It shouldn't be too different with Jetty or any other Comet-enabled J2EE server. Briefly, the idea is: encode your Java object to a JSON string via GWT RPC, then send it to the client using cometd, where it is received by Dojo, which triggers your JSNI code, which calls your widget methods, where you deserialize the object again using GWT RPC. Voila! :)
My experience with this setup is positive; there were no problems with it except for the security questions. It is not really clear how to implement security for Comet in this case... It seems that Comet update servlets should have different URLs, and then J2EE security can be applied.
The JBoss Errai project has a message bus that provides bi-directional messaging and a good alternative to cometd.
We are using the Atmosphere Framework (http://async-io.org/) for server push/Comet in a GWT application.
On the client side, the framework has GWT integration that is pretty straightforward. On the server side, it uses a plain servlet.
We are currently using it in production with 1000+ concurrent users in a clustered environment. We had some problems along the way that had to be solved by modifying the Atmosphere source. Also, the documentation is really thin.
The framework is free to use.