What do I use for the HMAC-SHA1 key when verifying the MESSAGE-INTEGRITY attribute of STUN Binding Requests from Chrome? (Chrome is in the ICE-CONTROLLING role, as the SDP offer is from an ice-lite peer.)
RFC 5245 states:
To compute the message integrity for the check, the agent uses the
remote username fragment and password learned from the SDP from its
peer. The local username fragment is known directly by the agent for
its own candidate.
But it does not state how these are concatenated by the agent to form the HMAC-SHA1 key.
I have tried different combinations of ice-username:ice-password to form the key, but none of them generates the same hash as the MESSAGE-INTEGRITY attribute in the Binding Request from Chrome.
Does anyone know how the HMAC key is formed?
Binding Requests sent to you will be signed with your local ice-pwd, and your responses must be signed with that same password (as described here).
See RFC 5389 on how to compute the hash.
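To make the point concrete: for ICE short-term credentials the HMAC-SHA1 key is just the password — your own local ice-pwd for requests addressed to you — and not a username:password concatenation at all; the username fragments only appear in the USERNAME attribute. A minimal sketch in Python (the function name and the pre-framed input are my own; real code must first adjust the STUN header's length field as RFC 5389 §15.4 describes):

```python
import hashlib
import hmac

def stun_message_integrity(stun_msg_prefix: bytes, ice_pwd: str) -> bytes:
    """Compute a MESSAGE-INTEGRITY value for a STUN message.

    For ICE short-term credentials (RFC 5389 s15.4) the HMAC-SHA1 key is
    simply the password -- for a Binding Request sent *to* you, that is
    your own local ice-pwd from your SDP.  `stun_msg_prefix` must be the
    message up to (but not including) the MESSAGE-INTEGRITY attribute,
    with the header's length field already adjusted to include it.
    """
    # RFC 5389 says key = SASLprep(password); plain ASCII passwords,
    # which ice-pwd values normally are, pass through SASLprep unchanged.
    key = ice_pwd.encode("utf-8")
    return hmac.new(key, stun_msg_prefix, hashlib.sha1).digest()
```

If this 20-byte digest still doesn't match Chrome's, the usual culprit is the length-field adjustment, not the key.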
We're using puppeteer and sometimes playwright to run some integration tests. We mock some of the target page's script dependencies, which causes subresource integrity hash mismatches.
Failed to find a valid digest in the 'integrity' attribute for resource 'http://localhost:3000/static/third-party/adobe-target/at-js/2.4.0-cname/at.js' with computed SHA-256 integrity '47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='. The resource has been blocked.
Is there a way to disable integrity hash checking via a flag or configuration property?
No. I believe the only way is to fix or remove the integrity attribute from the source that loads the script.
Looking at the Chromium (Blink) source: unless the integrity attribute is empty, FetchManager::Loader instantiates an SRIVerifier, whose constructor calls its OnStateChange method. There, for response types of basic, cors and default (leaving out opaque responses and errors), SubresourceIntegrity::CheckSubresourceIntegrity is called. Unless parsing of the integrity attribute fails, SubresourceIntegrity::CheckSubresourceIntegrityImpl will either successfully verify one of the digests or fail with the error message above. No configuration option is checked anywhere along this path that could override a failed check.
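Since the check can't be disabled, one practical fix is to make the integrity attribute match the mocked response. Incidentally, the digest in your error message is the well-known SHA-256 of zero bytes, which suggests the mock served an empty body. A small sketch (my own helper name) of computing the value a `sha256-…` integrity attribute would need:

```python
import base64
import hashlib

def sri_sha256(content: bytes) -> str:
    """Compute an SRI integrity value ("sha256-<base64 digest>") for a
    resource body, matching what the browser computes for the response."""
    digest = hashlib.sha256(content).digest()
    return "sha256-" + base64.b64encode(digest).decode("ascii")

# An empty body yields exactly the digest from the Chrome error message:
print(sri_sha256(b""))  # sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```

Run this over each mocked script body and rewrite the corresponding integrity attribute in the page under test, or strip the attribute entirely when serving the page.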
On a project we spent considerable effort working around basic authentication (because WebDriver tests depended on it, and WebDriver has no API for basic authentication), and I remember basic authentication in the URL clearly not working, i.e. I could not load http://username:password@url.
Just google "basic authentication in url" and you will find plenty of people complaining: https://medium.com/@lmakarov/say-goodbye-to-urls-with-embedded-credentials-b051f6c7b6a3
https://www.ietf.org/rfc/rfc3986.txt
Use of the format "user:password" in the userinfo field is deprecated.
Today I described this quagmire to a friend, and he said they use http://username:password@url style basic authentication in WebDriver tests without any problem.
I went in my current Chrome v71 to a demo page and, to my surprise, found it working perfectly well: https://guest:guest@jigsaw.w3.org/HTTP/Basic/
How is this possible? Are we living in parallel dimensions? Which one is true: is basic authentication using credentials in the URL supported or deprecated? (Or was this perhaps added back to Chrome after complaints, of which I can't find any reference?)
Essentially, deprecated does not mean unsupported.
Which one is true: is basic authentication using credentials in the URL supported or deprecated?
The answer is yes, both are true. It is deprecated, but for the most part (anecdotally) still supported.
From the medium article:
While you would not usually have those hardcoded in a page, when you open a URL like https://user:pass@host and that page makes subsequent requests to resources linked via relative paths, that's when those resources will also get the user:pass@ part applied to them and banned by Chrome right there.
This means URLs like <img src=./images/foo.png>, but not URLs like <a href=/foobar>zz</a>.
RFC 3986 states:
Use of the format "user:password" in the userinfo field is
deprecated. Applications should not render as clear text any data
after the first colon (":") character found within a userinfo
subcomponent unless the data after the colon is the empty string
(indicating no password). Applications may choose to ignore or
reject such data when it is received as part of a reference and
should reject the storage of such data in unencrypted form. The
passing of authentication information in clear text has proven to be
a security risk in almost every case where it has been used.
Applications that render a URI for the sake of user feedback, such as
in graphical hypertext browsing, should render userinfo in a way that
is distinguished from the rest of a URI, when feasible. Such
rendering will assist the user in cases where the userinfo has been
misleadingly crafted to look like a trusted domain name
(Section 7.6).
So the use of user:pass@url is discouraged, backed up by specific reasons and security recommendations. The RFC also states that applications may choose to ignore or reject the userinfo data, but it does not say they must reject it.
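If you want your tests to keep accepting the deprecated form without relying on browser behaviour, one option is to strip the userinfo yourself and send the credentials as a Basic Authorization header instead. A stdlib-only sketch (helper name is my own):

```python
import base64
from urllib.parse import urlsplit, urlunsplit

def split_credentials(url: str):
    """Split user:pass out of a URL; return (clean_url, auth_header_or_None).

    Lets tooling accept the deprecated http://user:pass@host form while
    actually sending the credentials as an Authorization header."""
    parts = urlsplit(url)
    if parts.username is None:
        return url, None
    host = parts.hostname or ""
    if parts.port:
        host += f":{parts.port}"
    clean = urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))
    userpass = f"{parts.username}:{parts.password or ''}".encode("utf-8")
    token = base64.b64encode(userpass).decode("ascii")
    return clean, {"Authorization": f"Basic {token}"}

url, auth = split_credentials("https://guest:guest@jigsaw.w3.org/HTTP/Basic/")
print(url)   # https://jigsaw.w3.org/HTTP/Basic/
print(auth)  # {'Authorization': 'Basic Z3Vlc3Q6Z3Vlc3Q='}
```

The resulting header can be passed to whatever HTTP client or WebDriver network-interception hook your tests use.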
If not, where are the generated CSRF tokens stored: in the JCR repository, or as objects on the application heap? Also, at a very high level, how is the received token validated?
If yes, is this not a scalability issue?
I tried to follow these links: https://datatracker.ietf.org/doc/html/draft-ietf-oauth-json-web-token-16#section-7 & https://datatracker.ietf.org/doc/html/draft-ietf-jose-json-web-signature-41#appendix-A.4.2. It seems they use a sort of public/private key, along with the user, user-agent and other info, to build a key pair and a signature, and then validate the token in a similar fashion: the token is deciphered, in a sense, rather than being matched against another stored token. But I'm not sure, hence the question.
Short answer: yes, the AEM CSRF protection framework is stateless.
Details
The tokens are not persisted; all the information is contained in the token itself, encrypted with a symmetric algorithm. As long as all your instances share the same crypto key, any instance can decrypt and decode a CSRF token issued anywhere in the cluster. More details can be found in the official CSRF documentation.
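To illustrate the stateless idea only (this is not AEM code — AEM encrypts the payload with its shared crypto key, while this sketch signs it with an HMAC; the names are all hypothetical):

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical cluster-wide secret; in AEM this role is played by the
# shared crypto key every instance is provisioned with.
SHARED_KEY = b"same-key-on-every-instance"

def issue_token(user: str, ttl: int = 600) -> str:
    """Issue a self-contained token: payload plus MAC, no server-side state."""
    payload = json.dumps({"user": user, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + sig).decode("ascii")

def validate_token(token: str) -> bool:
    """Any instance holding SHARED_KEY can validate -- nothing is looked up."""
    raw = base64.urlsafe_b64decode(token.encode("ascii"))
    payload, sig = raw[:-32], raw[-32:]  # HMAC-SHA256 digest is 32 bytes
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["exp"] >= time.time()
```

Because validation only needs the shared key plus the token itself, adding instances doesn't create a shared-session-store bottleneck — which is why statelessness is not the scalability problem the question worries about.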
I have just re-keyed a SHA1 certificate and installed a new SHA2 certificate in its place.
Everything is working fine. There is no insecure content. Digicert's diagnostic tool says everything is ok, and "Signature algorithm = SHA256 + RSA". However, Google Chrome says (note my emphasis):
The identity of this website has been verified by DigiCert SHA2 High
Assurance Server CA but does not have public audit records.
Your connection to [www.domain.com] is encrypted with 128-bit
encryption.
The connection uses TLS 1.0.
The connection is encrypted using AES_128_CBC, with SHA1 for message
authentication and DHE_RSA as the key exchange mechanism.
Why does Google Chrome say that the connection is using "SHA1 for message authentication"?
(Note: I have cleared cache and refreshed page)
Message authentication is used for authenticating the data in transit. It is not used for securing the certificates (which rely on digital signatures).
Many cipher suites still use HMAC with SHA-1, as SHA-1 (and even MD5) is quite safe within an HMAC scheme, due to the fact that a key is hashed both at the start and at the end of the data being protected.
The structure of the HMAC algorithm makes it less susceptible to attacks on properties of the underlying hash algorithm. HMAC is quite resilient against the current (successful) attacks on MD5 and SHA-1.
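That nested, keyed-at-both-ends structure is easy to see in code. A sketch of HMAC-SHA1 built directly from RFC 2104's definition, H((K ⊕ opad) || H((K ⊕ ipad) || m)), checked against Python's hmac module:

```python
import hashlib
import hmac

def hmac_sha1_manual(key: bytes, msg: bytes) -> bytes:
    """HMAC(K, m) = H((K ^ opad) || H((K ^ ipad) || m))  -- RFC 2104.
    The key is mixed in before AND after the data, which is why plain
    collision attacks on the bare hash do not break the MAC."""
    block = 64  # SHA-1 block size in bytes
    if len(key) > block:
        key = hashlib.sha1(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha1(ipad + msg).digest()
    return hashlib.sha1(opad + inner).digest()

assert hmac_sha1_manual(b"k", b"data") == hmac.new(b"k", b"data", hashlib.sha1).digest()
```

An attacker who can produce SHA-1 collisions still cannot forge a tag without knowing the key that wraps both hash invocations.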
I have been testing webhooks from http://context.io/ with Firebase; context.io fires off a POST whenever a valid email is received.
The issue is that a couple of the keys have a '.' in the name, which causes Firebase to send me a 400 error:
"error" : "Invalid data; couldn't parse JSON object, array, or value. Perhaps you're using invalid characters in your key names."
Can I use security rules to manipulate the newData to replace the '.', or do I need to use a proxy server in between?
If so, what is the recommended approach for a thin Node.js proxy server whose only job is this?
Security rules only enforce security and cannot be used as translators or filters. Thus, you'll have to manipulate the keys before sending them to Firebase.
It doesn't look like you are forced to use the email as the key, since you can structure the URL to which context.io sends your requests. Could you save the effort of a proxy by using the context.io unique ids or some other unique id instead of email address?
If you REALLY want to work with the email as the key, you can still do it using a base64 encoded value of the email address.
This has several benefits, including deterministic keys and fast direct lookups if you're constantly searching by email and accessing the data stored under it.
Ref:
Python: https://docs.python.org/3/library/base64.html
Javascript: http://www.w3schools.com/jsref/met_win_atob.asp
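A sketch of the encode/decode pair (my own helper names). One caveat worth noting: standard base64 can emit '/', which Firebase paths also forbid, so the URL-safe alphabet with the '=' padding stripped is the safer choice:

```python
import base64

def email_to_key(email: str) -> str:
    """Encode an email address into a string safe to use as a Firebase key.
    Standard base64 can contain '/', which Firebase paths forbid, so use
    the URL-safe alphabet (A-Z, a-z, 0-9, '-', '_') and drop the padding."""
    encoded = base64.urlsafe_b64encode(email.encode("utf-8")).decode("ascii")
    return encoded.rstrip("=")

def key_to_email(key: str) -> str:
    """Reverse the encoding, restoring the stripped '=' padding first."""
    padded = key + "=" * (-len(key) % 4)
    return base64.urlsafe_b64decode(padded.encode("ascii")).decode("utf-8")

print(email_to_key("user.name@example.com"))
```

The same transformation is trivial on the JavaScript side with btoa/atob (plus the URL-safe character swap), so both ends of the webhook pipeline can agree on the key format.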