I am trying to give users of my website the ability to download files from Amazon S3. The URLs are digitally signed with my AWS private key on my webserver, then sent to the client via AJAX and embedded in the action attribute of an HTML form.
The problem arises when the form is submitted. The action attribute of the form contains a URL that carries a digital signature. This signature often contains + symbols, which get percent-encoded on submission, and that completely invalidates the signature. How can I keep forms from percent-encoding my URLs?
I (respectfully) suggest that you need to more carefully identify the precise nature of the problem, where in the process flow it breaks down, and identify precisely what it is that you actually need to fix. URLEncoding of "+" is the correct thing for the browser to do, because the literal "+" in a query string is correctly interpreted by the server as " " (space).
Your question prompted me to review code I've written that generates signed urls for S3 and my recollection was correct -- I'm changing '+' to %2B, '=' to %3D, and '/' to %2F in the signature... so that is not invalid. This is assuming we are talking about the same thing, such that the "digital signature" you mention in the question is the signature discussed here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationQueryStringAuth
Note the signature in the example has a urlencoded '+' in it: Signature=vjbyPxybdZaNmGa%2ByT272YEAiv4%3D
I will speculate that the problem you are having might not be '+' → '%2B' (which should be not only valid, but required)... but perhaps it's a double-encoding, such that you are, at some point, double-encoding it so that '+' → '%2B' → '%252B' ... with the percent sign being encoded as a literal, which would break the signature.
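If that is what's happening, it is easy to reproduce. A minimal sketch (browser or Node.js JavaScript; the signature value is just the one from the Amazon example above):

const signature = "vjbyPxybdZaNmGa+yT272YEAiv4=";

const encodedOnce = encodeURIComponent(signature);
// "vjbyPxybdZaNmGa%2ByT272YEAiv4%3D"  -- the form S3 expects in the query string

const encodedTwice = encodeURIComponent(encodedOnce);
// "vjbyPxybdZaNmGa%252ByT272YEAiv4%253D" -- the '%' itself got encoded; the signature no longer matches

The fix is to make sure the signature is encoded exactly once: either encode it on the server and leave the URL untouched on the client, or send the raw signature to the client and encode it exactly once there.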
Related
I'm inserting untrusted data into the href attribute of an <a> tag.
Based on the OWASP XSS Prevention Cheat Sheet, I should URI encode the untrusted data before inserting it into the href attribute.
But would HTML encoding also prevent XSS in this case? I know that it's a URI context and therefore I should use URI encoding, but are there any security advantages of URI encoding over using HTML encoding in this case?
The browser will render the link properly in both cases as far as I know.
I'm assuming this is Rule #5:
URL Escape Before Inserting Untrusted Data into HTML URL Parameter Values
(Not rule #35.)
This is referring to individual parameter values:
<a href="http://www.example.com?test=...ESCAPE UNTRUSTED DATA BEFORE PUTTING HERE...">link</a>
URL and HTML encoding protect against different things.
URL encoding prevents a parameter breaking out of a URL parameter context:
e.g. ?firstname=john&lastname=smith&salary=20000
Say this is a back-end request made by an admin user. If john and smith aren't correctly URL encoded then a malicious front-end user might enter their name as john&salary=40000 which would render the URL as
?firstname=john&salary=40000&lastname=smith&salary=20000
and say the back-end application takes the first parameter value in the case of duplicates. The user has successfully doubled their salary. This attack is known as HTTP Parameter Pollution.
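As a rough sketch of the defence (the buildQuery helper and the parameter names are just made up for this example), URL-encoding each value before concatenating keeps the injected & from acting as a parameter separator:

// Encode each key and value so reserved characters stay inside the value.
function buildQuery(params) {
  return Object.entries(params)
    .map(([key, value]) => encodeURIComponent(key) + "=" + encodeURIComponent(value))
    .join("&");
}

const query = buildQuery({ firstname: "john&salary=40000", lastname: "smith", salary: "20000" });
// "firstname=john%26salary%3D40000&lastname=smith&salary=20000"
// The injected "&salary=40000" stays inside the firstname value instead of becoming a new parameter.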
So if you're inserting a parameter into a URL which is then inserted into an HTML document, you technically need to URL encode the parameter, then HTML encode the whole URL. However, if you follow the OWASP recommendation to the letter:
Except for alphanumeric characters, escape all characters with ASCII values less than 256 with the %HH escaping format.
then this will ensure no characters with special meaning to HTML will be output, therefore you can skip the HTML encoding part, making it simpler.
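A minimal sketch of that stricter escaping, assuming you want to implement it yourself rather than rely on a library (the helper name is made up):

// Escape every character except ASCII letters and digits as %HH.
// Characters above 255 fall back to encodeURIComponent here; adjust as needed.
function strictUrlEscape(value) {
  return Array.from(String(value)).map((ch) => {
    const code = ch.codePointAt(0);
    if (/[A-Za-z0-9]/.test(ch)) return ch;
    if (code < 256) return "%" + code.toString(16).toUpperCase().padStart(2, "0");
    return encodeURIComponent(ch);
  }).join("");
}

strictUrlEscape('"><script>');  // "%22%3E%3Cscript%3E" -- nothing special to HTML survives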
Example - user input is allowed to build a relative link (relative to http://server.com/), and the user provides javascript:alert(1).
URL-encoding: <a href="javascript%3Aalert%281%29"> - Link will lead to http://server.com/javascript%3Aalert%281%29
Entity-encoding only: <a href="javascript&#x3a;alert&#x28;1&#x29;"> - The browser decodes the entities when parsing the attribute, so the href is still javascript:alert(1) and clicking executes the script.
As with any user supplied data, the URLs will need to be escaped and filtered appropriately to avoid all sorts of exploits. I want to be able to
Put user supplied URLs in href attributes. (Bonus points if I don't get screwed if I forget to write the quotes)
...
Forbid malicious URLs such as javascript: stuff or links to evil domain names.
Allow some leeway for the users. I don't want to raise an error just because they forgot to add an http:// or something like that.
Unfortunately, I can't find any "canonical" solution to this sort of problem. The only thing I could find as inspiration is the encodeURI function from JavaScript, but that doesn't help with my second point since it only does simple URL parameter encoding and leaves special characters such as : and / alone.
OWASP provides a list of regular expressions for validating user input, one of which is used for validating URLs. This is as close as you're going to get to a language-neutral, canonical solution.
More likely you'll rely on the URL parsing library of the programming language in use. Or, use a URL parsing regex.
The workflow would be something like:
Verify the supplied string is a well-formed URL.
Provide a default protocol such as http: when no protocol is specified.
Maintain a whitelist of acceptable protocols (http:, https:, ftp:, mailto:, etc.)
The whitelist will be application-specific. For an address-book app the mailto: protocol would be indispensable. It's hard to imagine a use case for the javascript: and data: protocols.
Enforce a maximum URL length - this keeps URLs working across browsers and prevents attackers from polluting the page with megabyte-length strings. With any luck your URL-parsing library will do this for you.
Encode a URL string for the usage context. (Escaped for HTML output, escaped for use in an SQL query, etc.).
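A rough sketch of that workflow (the protocol whitelist and length limit below are placeholders you would tune per application, and new URL() stands in for whatever URL-parsing library you use):

const ALLOWED_PROTOCOLS = ["http:", "https:", "ftp:", "mailto:"]; // application-specific whitelist
const MAX_URL_LENGTH = 2000;                                      // arbitrary example limit

function sanitizeUserUrl(input) {
  let raw = input.trim();

  // Provide a default protocol when none is specified.
  if (!/^[a-z][a-z0-9+.-]*:/i.test(raw)) {
    raw = "http://" + raw;
  }

  // Verify the string is a well-formed URL.
  let url;
  try {
    url = new URL(raw);
  } catch (e) {
    return null;
  }

  // Enforce the protocol whitelist and a maximum length.
  if (!ALLOWED_PROTOCOLS.includes(url.protocol)) return null;
  if (url.href.length > MAX_URL_LENGTH) return null;

  // Encoding for the output context (HTML, SQL, ...) still happens separately.
  return url.href;
}

sanitizeUserUrl("example.com/page?q=1"); // "http://example.com/page?q=1"
sanitizeUserUrl("javascript:alert(1)");  // null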
Forbid malicious URLs such as javascript: stuff or links to evil domain names.
You can utilize the Google Safe Browsing API to check a domain for spyware, spam or other "evilness".
For the first point, regular attribute encoding works just fine: escape characters into HTML entities. Escaping quotes, the ampersand and angle brackets is enough if attributes are guaranteed to be quoted; escaping all other non-alphanumeric characters as well will keep the attribute safe even if it's accidentally left unquoted.
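A minimal sketch of that aggressive attribute encoding (the helper is illustrative, not from any particular library):

// Encode every character except ASCII letters and digits as a numeric HTML entity,
// so the value cannot terminate the attribute even if the quotes are forgotten.
function encodeForHtmlAttribute(value) {
  return Array.from(String(value)).map((ch) =>
    /[A-Za-z0-9]/.test(ch) ? ch : "&#" + ch.codePointAt(0) + ";"
  ).join("");
}

const userUrl = 'http://example.com/" onmouseover="alert(1)';
const html = "<a href=" + encodeForHtmlAttribute(userUrl) + ">link</a>"; // unquoted, but no attribute injection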
The second point is vague and depends on what you want to do. Just remember to use a whitelist approach instead of a blacklist one; it's possible to use HTML entity encoding and other tricks to get around most simple blacklists.
I have a problem with how the IE (7 & 8) browser handles security certificate errors.
Our application needs to send out a secure link to the user's email, consisting of a randomly generated token which may have special characters. So before sending out, we encode the token. The sample URL would be like this:
localhost:8080/myapp?t=7f%26DX%243q9a
When the user opens this in IE, it gives the certificate error page ("There is a problem with this website's security certificate."). The continue link on that page re-encodes our token into something else:
localhost:8080/myapp?t=7f%2526DX%25243q9a
(Thus the user would be sent to a slightly different URL than what we're expecting, as you can see.)
Here you can see that the "%" characters I'd sent get turned into "%25". How can I decode the token correctly after this?
Nasty!
If this is a reproducible bug and not funny behaviour caused by some character set issues or something - it doesn't look like it! - then I think your only way to work around it is to use an encoding for the parameter that uses only letters and numbers, like hex or the URL-safe variant of base64 (standard base64 still contains +, / and =, which would need percent-escaping again).
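A minimal sketch of that idea in Node.js (the field name t just mirrors the example URL above):

const crypto = require("crypto");

// Encode the random token with the URL-safe base64 alphabet (A-Z, a-z, 0-9, '-', '_'),
// so the link never needs percent-escapes that IE could mangle.
const urlSafeToken = crypto.randomBytes(24).toString("base64")
  .replace(/\+/g, "-")
  .replace(/\//g, "_")
  .replace(/=+$/, "");

const link = "https://localhost:8080/myapp?t=" + urlSafeToken;

// Server side: reverse the substitution before decoding.
function decodeToken(t) {
  return Buffer.from(t.replace(/-/g, "+").replace(/_/g, "/"), "base64");
}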
How can I pass a URL as a query string parameter without getting "forbidden"? Other sites do it; for example http://twitter.com?status=http://somesite.com works just fine. I've been looking everywhere for an answer. Please can somebody help! Please note my example is automatically encoded (imagine it without the %3A)
You will need to encode the URL. A query string with an unencoded URL is going to be a problem.
If you don't encode URLs inside URLs, then whoever is interpreting it will not see it as a valid URL. This is because in your example
http://twitter.com?status=http%3A//somesite.com
The %3A is a colon. But according to the URI specification, the colon is a scheme delimiter (http, ftp, irc, whatever), and a URI can only contain one. And if I've read enough of these specs, I'm guessing it says the equivalent of "servers receiving a badly formed URL should return an error message" or "..try to interpret it without guaranteeing a positive response".
Technically the // should also be escaped, since they are path delimiters, but only a server serving static content would react to that.
For the URI specification, see http://labs.apache.org/webarch/uri/rfc/rfc3986.html
If you are asking how to do this in JavaScript, you should use the escape/unescape functions and handle the special case of the / character.
Take a look at this reference.
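For illustration, a sketch of both options - escape() as the answer suggests (which leaves / alone, hence the special case), and encodeURIComponent, which covers : and / by itself and is the more common choice today:

const statusUrl = "http://somesite.com";

// escape() does not touch '/', so it has to be handled separately:
const legacy = escape(statusUrl).replace(/\//g, "%2F"); // "http%3A%2F%2Fsomesite.com"

// encodeURIComponent encodes ':' and '/' as well, no special case needed:
const modern = encodeURIComponent(statusUrl);           // "http%3A%2F%2Fsomesite.com"

const shareLink = "http://twitter.com?status=" + modern;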
I was wondering if somebody could shed some light on this browser behaviour:
I have a form with a textarea that is submitted to the server either via XHR (using jQuery; I've also tried plain XMLHttpRequest just to rule jQuery out, and the result is the same) or the "old fashioned" way via form submit. In both cases method="POST" is used.
Both ways submit to the same script on the server.
Now the funny part: if you submit via XHR new line characters are transferred as "%0A" (or \n if I am not mistaken), and if you submit the regular way they are transferred as "%0D%0A" (or \r\n).
This, of course, causes some problems on the server side, but that is not the question here.
I'd just like to know why this difference? Shouldn't new lines be transferred the same no matter what method of submitting you use? What other differences are there (if any)?
XMLHttpRequest will, when sending XML, strip the CR characters from the stream. This is in accord with the XML specification, which indicates that CRLF be normalised to simple LF.
Hence if you package your content as XML and send it via XHR you will lose the CRs.
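One way to sidestep the inconsistency (a sketch, not the only option; the field name and URL are hypothetical) is to normalize the textarea value yourself before the XHR POST, so both submission paths deliver the same line endings:

// Normalize every line break to CRLF so the XHR payload matches what a
// regular form submission would send.
const raw = document.querySelector("textarea[name=comment]").value;
const normalized = raw.replace(/\r\n|\r|\n/g, "\r\n");

$.post("/submit", { comment: normalized });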
Section 3.7.1 of RFC 2616 (HTTP/1.1) allows any of \r\n, \r or \n to represent a line break:
HTTP relaxes this requirement and allows the transport of text media with plain CR or LF alone representing a line break when it is done consistently for an entire entity-body. HTTP applications MUST accept CRLF, bare CR, and bare LF as being representative of a line break in text media received via HTTP.
But this does not apply to control structures:
This flexibility regarding line breaks applies only to text media in the entity-body; a bare CR or LF MUST NOT be substituted for CRLF within any of the HTTP control structures (such as header fields and multipart boundaries).
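So on the receiving side the robust thing is to accept all three forms, as the spec requires for text media; a one-line sketch of that normalization:

// Accept CRLF, bare CR and bare LF, and normalize them all to LF for internal use.
function normalizeLineBreaks(text) {
  return text.replace(/\r\n|\r/g, "\n");
}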