Why does Chrome display a "SHA1" message with a SHA2 certificate - google-chrome

I have just re-keyed a SHA1 certificate and installed a new SHA2 certificate in its place.
Everything is working fine. There is no insecure content. DigiCert's diagnostic tool says everything is OK, and "Signature algorithm = SHA256 + RSA". However, Google Chrome says (note my emphasis):
The identity of this website has been verified by DigiCert SHA2 High
Assurance Server CA but does not have public audit records.
Your connection to [www.domain.com] is encrypted with 128-bit
encryption.
The connection uses TLS 1.0.
The connection is encrypted using AES_128_CBC, with SHA1 for message
authentication and DHE_RSA as the key exchange mechanism.
Why does Google Chrome say that the connection is using "SHA1 for message authentication"?
(Note: I have cleared cache and refreshed page)

Message authentication is used to authenticate the data in transit. It is unrelated to the digital signature that secures the certificate.
Many cipher suites still use HMAC with SHA-1, because SHA-1 (and even MD5) remains quite safe within an HMAC scheme (the key is hashed in both at the start and at the end of the data being protected).
The structure of the HMAC algorithm makes it less susceptible to attacks on the properties of the underlying hash function, so HMAC remains quite resilient against the current (successful) attacks on MD5 and SHA-1.
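As a quick illustration, here is a minimal Python sketch of HMAC-SHA1 used as a message authentication code; the key and data are made up (in TLS the MAC key is derived during the handshake and the MAC covers each record), and the point is simply that this keyed hash protects data in transit and has nothing to do with the SHA-256 signature on the certificate.
import hmac
import hashlib

# Hypothetical values: in TLS the MAC key comes from the handshake key
# derivation and the MAC is computed per record, not over a whole message.
mac_key = b"mac-key-derived-from-the-handshake"
record = b"application data protected in transit"

tag = hmac.new(mac_key, record, hashlib.sha1).hexdigest()
print(tag)  # the receiver recomputes this tag with the same key to detect tampering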

Related

How does Web3-Token verify method really work and is it really secure?

I have read how Web3 Auth works and this picture shows it:
It seems that the web3-token verify method is completely synchronous and is just a bunch of base64 decoding and verification. But hypothetically, if I completely re-implemented the front-end library to mimic the Ethereum API and generated the signature with my own public/private key pair and address, would I be able to impersonate any address?
As stated in the web3-token package.json, this package implements the EIP-4361 standard (which is still unfinished as of May 2022).
Its actual signature part uses asymmetric cryptography - signing the message with the signer's private key, and verifying the signature with their public key.
The only way to impersonate an address is to bypass the verification and treat the signer as 0x123 on the application level even though they are in fact 0x456. But it's not possible to bypass the math behind the cryptography and sign a message for them if you don't know their private key.
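For illustration, here is a minimal Python sketch of that sign/recover idea using the eth_account package (web3-token itself is JavaScript, and the key and message below are made up): the signer's address is recovered from the signature itself, so producing a valid signature for someone else's address would require their private key.
from eth_account import Account
from eth_account.messages import encode_defunct

# Hypothetical key pair -- in practice the wallet holds the private key.
acct = Account.create()
message = encode_defunct(text="Sign in to example.app, nonce 12345")

signed = Account.sign_message(message, private_key=acct.key)

# Verification recovers the signer's address from the signature alone.
recovered = Account.recover_message(message, signature=signed.signature)
assert recovered == acct.address  # only the holder of acct.key could have produced this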
Here's a great article describing the signature mechanics in more depth.

WebRTC ICE-Stun message-integrity attribute

What do I use for the HMAC-SHA1 key when verifying the MESSAGE-INTEGRITY attribute of STUN Binding Requests from Chrome? (Chrome is in the ICE-CONTROLLING role, as the SDP offer is from an ICE-LITE peer.)
RFC 5245 states:
To compute the message integrity for the check, the agent uses the
remote username fragment and password learned from the SDP from its
peer. The local username fragment is known directly by the agent for
its own candidate.
But it does not state how these are concatenated by the agent to form the HMAC-SHA1 key.
I have tried different combinations of ice-username:ice-password to form the key, but none seem to generate the same hash as the MESSAGE-INTEGRITY attribute in the Binding Request from Chrome.
Does anyone know how the HMAC key is formed?
Requests sent to you will be signed with your local ice-pwd, and your responses must be signed with the same password (as described here).
See RFC 5389 for how to compute the hash.
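In other words, for ICE short-term credentials the HMAC key is just the password, not a ufrag:password concatenation. A minimal Python sketch (the message bytes and password are placeholders):
import hmac
import hashlib

def stun_message_integrity(message_prefix: bytes, ice_pwd: str) -> bytes:
    # RFC 5389 short-term credentials: the HMAC-SHA1 key is simply the
    # password (the ice-pwd), after SASLprep; the username fragments are
    # not part of the key.
    # message_prefix is the STUN message up to, but not including, the
    # MESSAGE-INTEGRITY attribute, with the header's message-length field
    # already adjusted to include the 24-byte MESSAGE-INTEGRITY attribute.
    return hmac.new(ice_pwd.encode("utf-8"), message_prefix, hashlib.sha1).digest()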

Is the AEM CSRF Authentication/protection-framework stateless?

If not, where are these generated CSRF tokens stored: in the JCR repository, or as objects in the application heap? Also, at a very high level, how does it validate the received token?
If yes, is this not a scalability issue?
I tried to follow these links: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-json-web-token-16#section-7 & https://datatracker.ietf.org/doc/html/draft-ietf-jose-json-web-signature-41#appendix-A.4.2. It seems they use a sort of public/private key, along with the user, user-agent and other info, to build a key pair and a signature, and validate it in a similar fashion: the token is deciphered, in a sense, rather than being matched against another stored token. But I'm not sure, hence the question.
Short answer: yes, the AEM CSRF authentication/protection framework is stateless.
Details
The tokens are not persisted; all the information is carried in the token itself, encrypted with a symmetric algorithm. As long as all your instances share the same crypto key, any instance can decrypt and decode a CSRF token issued anywhere in the cluster. More details can be found in the official CSRF documentation.
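As a conceptual illustration only (this is not AEM's actual token format or algorithm), a stateless token scheme built on a shared symmetric key can be sketched in Python like this; any node holding the key can validate a token issued by any other node without a lookup:
import json
import time
from cryptography.fernet import Fernet  # assumed dependency, purely illustrative

# In a cluster, every instance would be provisioned with this same key.
shared_key = Fernet.generate_key()
f = Fernet(shared_key)

def issue_token(user: str, ttl_seconds: int = 600) -> bytes:
    claims = {"user": user, "exp": time.time() + ttl_seconds}
    return f.encrypt(json.dumps(claims).encode())

def validate_token(token: bytes) -> bool:
    try:
        claims = json.loads(f.decrypt(token))  # fails if the token was tampered with
    except Exception:
        return False
    return claims["exp"] > time.time()        # no server-side state consulted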

What's wrong with this authorization exchange?

I've set up a MediaWiki server on an Azure website with the PluggableAuth and OpenID Connect extensions. The latter uses the PHP OpenID Connect Basic Client library. I am an administrator in the Azure AD domain example.com, wherein I've created an application with App ID URI, sign-on URL and reply URL all set to https://wiki.azurewebsites.net/. When I navigate to the wiki, I observe the following behavior (cookie values omitted for now):
Client Request
GET https://wiki.azurewebsites.net/ HTTP/1.1
RP Request
GET https://login.windows.net/example.com/.well-known/openid-configuration
IP Response
(some response)
RP Response
HTTP/1.1 302 Moved Temporarily
Location: https://login.windows.net/{tenant_id}/oauth2/authorize?response_type=code&redirect_uri=https%3A%2F%2Fwiki.azurewebsites.net%2F&client_id={client_id}&nonce={nonce}&state={state}
Client Request
(follows redirect)
IP Response
HTTP/1.1 302 Found
Location: https://wiki.azurewebsites.net/?code={code}&state={state}&session_state={session_state}
Client Request
(follows redirect)
RP Request (also repeats #2 & #3)
POST https://login.windows.net/{tenant_id}/oauth2/token
grant_type=authorization_code&code={code}&redirect_uri=https%3A%2F%2Fwiki.azurewebsites.net%2F&client_id={client_id}&client_secret={client_secret}
IP Response
(As interpreted by MediaWiki; I don't have the full response logged at this time)
AADSTS50001: Resource identifier is not provided.
Note that if I change the OpenID PHP client to provide the 'resource' parameter in step 8, I get the following error response from AAD instead:
RP Request
POST https://login.windows.net/{tenant_id}/oauth2/token
grant_type=authorization_code&code={code}&redirect_uri=https%3A%2F%2Fwiki.azurewebsites.net%2F&resource=https%3A%2F%2Fwiki.azurewebsites.net%2F&client_id={client_id}&client_secret={client_secret}
IP Response
AADSTS90027: The client '{client_id}' and resource 'https://wiki.azurewebsites.net/' identify the same application.
(This has come up before.)
Update
I've made some progress based on @jricher's suggestions, but after working through several more errors I've hit one that I can't figure out. Once this is all done I'll submit pull requests to the affected libraries.
Here's what I've done:
I've added a second application to the example.com Azure AD domain, with the App ID URI set to mediawiki://wiki.azurewebsites.net/, as a dummy "resource". I also granted the https://wiki.azurewebsites.net/ application delegated access to this new application.
Passing in the dummy application's URI as the resource parameter in step #8, I'm now getting back the access, refresh, and ID tokens in #9!
The OpenID Connect library requires that the ID token be signed, but while Azure AD signs the access token it doesn't sign the ID token; it comes back with the header {"typ":"JWT","alg":"none"}. So I had to modify the library to allow the caller to specify that unsigned ID tokens are considered "verified". Grrr. (A small decoding sketch follows these steps.)
Okay, next it turns out that the claims can't be verified because the OpenID Provider URL I specified and the issuer URL returned in the token are different. (Seriously?!) So, the provider has to be specified as https://sts.windows.net/{tenant_id}/, and then that works.
Next, I found that I hadn't run the MediaWiki DB upgrade script for the OpenID Connect extension yet. Thankfully that was a quick fix.
After that, I am now left with (what I hope is) the final problem of trying to get the user info from AAD's OpenID Connect UserInfo endpoint. I'll give that its own section.
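For reference, a JWT's header can be inspected without any library, which is how the alg: none issue above shows up (the token value here is a hypothetical placeholder):
import base64
import json

id_token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJub25lIn0.eyJzdWIiOiJ1c2VyIn0."  # placeholder

header_b64 = id_token.split(".")[0]
header_b64 += "=" * (-len(header_b64) % 4)   # restore base64url padding
header = json.loads(base64.urlsafe_b64decode(header_b64))
print(header)  # {'typ': 'JWT', 'alg': 'none'} means the ID token is unsigned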
Can't get the user info [Updated]
This is where I am stuck now. After step #9, following one or two intermediate requests to get metadata and keys for verifying the token, the following occurs:
RP Request:
(Updated to use GET with Authorization: Bearer header, per MSDN and the spec.)
GET https://login.windows.net/{tenant_id}/openid/userinfo
Authorization: Bearer {access_token}
IP Response:
400 Bad Request
AADSTS50063: Credential parsing failed. AADSTS90010: JWT tokens cannot be used with the UserInfo endpoint.
(If I change #10 to be either a POST request, with access_token in the body, or a GET request with access_token in the query string, AAD returns the error: AADSTS70000: Authentication failed. UserInfo token is not valid. The same occurs if I use the value of the id_token in place of the access_token value that I received.)
Help?
Update
I'm still hoping someone can shed light on the final issue (the UserInfo endpoint not accepting the bearer token), but I may split that out into a separate question. In the meantime, I'm adding some workarounds to the libraries (PRs coming soon) so that the claims which are already being returned in the bearer token can be used instead of making the call to the UserInfo endpoint. Many thanks to everyone who's helped out with this.
There's also a nagging part of me that wonders if the whole thing would not have been simpler with the OpenID Connect Basic Profile. I assume there's a reason why that was not implemented by the MediaWiki extension.
Update 2
I just came across a new post from Vittorio Bertocci that includes this helpful hint:
...in this request the application is asking for a token for itself! In Azure AD this is possible only if the requested token is an id_token...
This suggests that just changing the token request type in step 8 from authorization_code to id_token could remove the need for the non-standard resource parameter and also make the ugly second AAD application unnecessary. Still a hack, but it feels like much less of one.
Justin is right. For the authorization code grant flow, you must specify the resource parameter in either the authorization request or the token request.
Use &resource=https%3A%2F%2Fgraph.windows.net%2F to get an access token for the Azure AD Graph API.
Use &resource=https%3A%2F%2Fmanagement.core.windows.net%2F to get a token for the Azure Service Management APIs.
...
Hope this helps
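For example, with a generic HTTP client the token request simply carries the Azure AD-specific resource field alongside the standard OAuth2 parameters; a minimal Python sketch using the requests package (all values are placeholders):
import requests

token_endpoint = "https://login.windows.net/{tenant_id}/oauth2/token"
resp = requests.post(token_endpoint, data={
    "grant_type": "authorization_code",
    "code": "{code}",
    "redirect_uri": "https://wiki.azurewebsites.net/",
    "client_id": "{client_id}",
    "client_secret": "{client_secret}",
    # Azure AD-specific parameter: which API the access token should be for,
    # e.g. the Azure AD Graph API.
    "resource": "https://graph.windows.net/",
})
tokens = resp.json()  # on success: access_token, refresh_token, id_token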
Microsoft's implementation of OpenID Connect (and OAuth2) has a known bug where it requires the resource parameter to be sent by the client. This is an MS-specific parameter and requiring it unfortunately breaks compatibility with pretty much every major OAuth2 and OpenID Connect library out there. I know that MS is aware of the issue (I've been attempting to do interoperability testing with their team for quite a while now), but I don't know of any plans to fix the problem.
So in the meantime, your only real path is to hack your client software so that it sends a resource parameter that the AS will accept. It looks like you managed to make it send the parameter, but didn't send a value that it liked.
I had issues getting this running on Azure, even though I got something working locally. Since I was trying to set up a private wiki anyway, I ended up enabling Azure AD protection for the whole site by turning on:
All Settings -> Features -> Authentication / Authorization
From within the website in https://portal.azure.com
This made it so you had to authenticate to Azure AD before you saw any page of the site. Once you are authenticated, a number of HTTP headers are set for the application with your username, including REMOTE_USER. As a result, I used the following extension to automatically log the already-authenticated user into the wiki:
https://www.mediawiki.org/wiki/Extension:Auth_remoteuser

Role-based access control with REST (HTTP)?

I'm creating a system with a JavaScript client that will communicate with the server over REST (HTTP/JSON).
I am using role-based access control to manage the calls.
Example (the explicit URL stays the same):
Anonymous -> request /
Server -> route to login form: /login/
User (now with cookie!) -> request /
if (user->role == "manager") return "/manager-homepage/";
else return "/homepage/";
Since REST is stateless, how would I go about managing this use case?
Do I send the cookie with each request, and do the returned HTTP status codes tell the JS where to route?
[Which would be rather inefficient and open to MITM attacks]
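A minimal server-side sketch of the routing intent above (Flask is used here purely for illustration; the route names are the hypothetical ones from the example). Flask's default session is a signed cookie, so the role travels with each request rather than living in server-side state:
from flask import Flask, redirect, session

app = Flask(__name__)
app.secret_key = "change-me"  # signs the session cookie

@app.route("/")
def home():
    if "user" not in session:
        return redirect("/login/")
    if session.get("role") == "manager":
        return redirect("/manager-homepage/")
    return redirect("/homepage/")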
Can you not use a standard authentication scheme, such as HTTP Digest?
Example: [from Wikipedia page]
The client asks for a page that requires authentication but does not provide a username and password. Typically this is because the user simply entered the address or followed a link to the page.
The server responds with the 401 "Unauthorized" response code, providing the authentication realm and a randomly generated, single-use value called a nonce.
At this point, the browser will present the authentication realm (typically a description of the computer or system being accessed) to the user and prompt for a username and password. The user may decide to cancel at this point.
Once a username and password have been supplied, the client re-sends the same request but adds an Authorization header that includes the computed response value.
In this example, the server accepts the authentication and the page is returned. If the username is invalid and/or the password is incorrect, the server might return the "401" response code and the client would prompt the user again.
Note: A client may already have the required username and password without needing to prompt the user, e.g. if they have previously been stored by a web browser.
See also this answer to a very similar question: REST and authentication variants
Depending on your desired security level, you could serve the whole thing over SSL/TLS. That will prevent MITM attacks.
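On the client side, HTTP Digest is already handled by common HTTP libraries; a minimal Python sketch using the requests package (URL and credentials are placeholders):
import requests
from requests.auth import HTTPDigestAuth

resp = requests.get(
    "https://api.example.com/manager-homepage",
    auth=HTTPDigestAuth("alice", "s3cret"),
)
# requests performs the challenge/response round trip: it reads the nonce
# from the 401 WWW-Authenticate header, computes the digest, and re-sends
# the request with the Authorization header.
print(resp.status_code)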