Allowing users to add HTML-formatted notes

We want to allow the users of our web application to leave notes formatted with HTML.
On the client side we provide them with CKEditor (http://ckeditor.com/), a WYSIWYG editor that generates HTML, which is then submitted to the server via a form.
We then want to display the notes created by the users with exactly the same formatting as they submitted them.
My concerns are:
Putting attacks and bad intentions aside, how can I encapsulate a note when it is displayed on the site, so that
a. it doesn't inherit the design from the rest of the page
b. it doesn't influence the rest of the page, for example by accidentally opening a tag and not closing it, or closing one without opening it.
Malicious code injection attacks.
At the moment, the first concern is much more important, as this is an in-house product for our clients and is not open to the wider public. But security comments are very welcome as well.
Possible solutions that I consider are:
Ideally, I am looking for a way to encapsulate these pieces of user HTML, as in: inside this area I show what you submitted (rendered, not source); you cannot influence, and are not influenced by, the code on other parts of the page.
Specifically, we thought of displaying the notes inside iframes (a rough sketch of this idea follows the questions below).
Another natural direction is parsing the inserted contents and stripping things out.
Any inputs are welcome, and mainly:
How can I "encapsulate" the inserted contents, if I can?
Any comments on the iframe direction
Do I have to parse the contents anyway? What do I absolutely have to strip out?
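For reference, the iframe idea mentioned above could look roughly like the sketch below: each note is rendered inside its own sandboxed iframe so that broken tags and stray styles cannot leak into the host page. The function and element names, and the choice of an empty sandbox attribute, are illustrative assumptions rather than a finished solution.
JavaScript:
function renderNote(noteHtml, container) {
  const frame = document.createElement('iframe');
  frame.setAttribute('sandbox', '');   // empty sandbox = all restrictions (no scripts, no forms, no navigation)
  frame.srcdoc = '<!DOCTYPE html><body>' + noteHtml + '</body>';   // the note gets its own document
  frame.style.width = '100%';
  frame.style.border = 'none';
  container.appendChild(frame);
}
// Usage (noteFromServer is a placeholder for the stored note HTML):
// renderNote(noteFromServer, document.getElementById('notes'));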

How can I "encapsulate" the inserted contents, if I can?
The truth is, unless you 'fix' their code (via some kind of check) you will get issues (think broken divs, etc.). I don't see how you can encapsulate HTML from within HTML. I would, however, only let them put in content like bold, italics, centering, etc.
Any comments on the iframe direction
Personally I wouldn't go that route; it opens a new can of worms for security and isn't a 'clean' way of doing this.
Do I have to parse the contents anyway? What do I absolutely have to strip out?
Yes, don't be lazy. Some devs always say "well, I don't need it, it's internal", and then it becomes an external thing, and at that point it's so big that only a full rewrite will set it right, and it keeps chugging along until something breaks, then the shit hits the fan and the big boss cries out asking why this wasn't done. Long story short:
Yes, you have to parse / validate / check all your input, whether internal or external. Anything other than that is just lazy.
In closing, I would do it by using an editor like the one here on SO, which only allows certain types of selective formatting. After all, a broken <b> will not kill your whole layout; a broken <div> will...

Markdown formatting
You could use exactly the same type of intermediary solution that this site (Stack Overflow) uses for its user-generated content (questions, answers, comments).
It's not a complete replacement for WYSIWYG solutions like CKEditor, but it's just what typical user-generated content would require. It even allows you to include images.
For a complete guide:
https://www.markdownguide.org/cheat-sheet
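If you go the Markdown route, the display step is just a render-on-output call. A minimal sketch, assuming the `marked` renderer (any Markdown library works; sanitize the generated HTML just as you would any other user-supplied markup):
JavaScript:
const { marked } = require('marked');   // npm Markdown renderer, used here only as an example

const note = '**Hello** there\n\n- first point\n- second point';
const html = marked.parse(note);        // -> "<p><strong>Hello</strong> there</p>" plus a <ul> for the list
console.log(html);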

Related

Dynamically Obfuscate HTML

I was wondering if there is any way to dynamically obfuscate HTML on a live server (but not offline), so that as soon as my website is visited the source is obfuscated rather than in plain text.
Since the client (browser) will have to parse it into a sensible DOM tree, this is pretty much fruitless. These days it's a lot more common to inspect a site using Firebug/Webkit Inspector, which provides a nicely formatted, navigable tree. Most people won't even notice that the HTML is "obfuscated", much less be stopped by it.
Executable code can be obfuscated by minimizing variable names and such without changing the result. HTML is the result though, if you change anything about it, the result will change. So "obfuscation" would mostly be limited to creative use of spacing anyway.
The real question you should ask yourself is "why do I need to obfuscate HTML?". If you're hiding sensitive information, then you should be either encrypting that data, or never presenting it to the client.
Most sensitive information or transactions should take place on the server, and the client only receives a token, or encrypted information, or a unique transaction identifier that can be passed back and forth.
Let me put it this way: There's no way to dynamically obfuscate the HTML on your site such that any reasonably competent person couldn't get it anyway.
You could use JavaScript to attempt to obfuscate it, but you'd have to do it in a way that didn't actually affect the DOM.
You could generate the contents of the page itself with JavaScript, but that is likely to damage accessibility, and once again the DOM will have to be in a condition the browser can use.
You could insert massive amounts of whitespace into the source, but that is easily overcome as well.
All this, and you make it harder and more annoying to manage your site. Minification has its purpose, but obfuscation here is lose-lose.
You could search for and remove all tabs, newlines, extra spaces, and comments.
If you are using PHP, ionCube has a plugin; it can be found here: http://www.ioncube.com/html_encoder.php. It turns your HTML page into minified JavaScript.
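If all you want is the whitespace/comment stripping suggested above, a crude sketch is below. Note that this is purely cosmetic: it will also mangle content inside <pre> blocks and inline scripts, so treat it as illustration only.
JavaScript:
function squashHtml(html) {
  return html
    .replace(/<!--[\s\S]*?-->/g, '')   // drop HTML comments
    .replace(/[\t\r\n]+/g, ' ')        // turn tabs and newlines into single spaces
    .replace(/ {2,}/g, ' ');           // collapse runs of spaces
}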

What web admin wysiwyg preview options are available?

I have a web admin area with a WYSIWYG editor that is used when a user edits information.
There is also a view-only template. The user views the information before clicking an edit action.
Currently the view template results in one line for the saved field value:
<p><b>Hello</b></p><p>there</p>
What options do I have to at least make this a little more readable when the user is "viewing"?
Options I can think of are:
Leave it as is. Well, that can become a long line of text.
Somehow avoid the encoding of MVC3 and add an actual <br> in place of the </p> or <br> that is in the content. At least the lines will break up.
Have the content actually presented as HTML. That is, you will see bold. But what if there is an unclosed tag?
With any of the above, I may place it in a scrollable div.
(I had trouble tagging this question. Feel free to retag).
Typically when you are working with editors you are eventually going to present the HTML live on the site anyway, so encoding shouldn't be a big concern, as you are already trusting them.
Now, what I've done in the past with editors such as CKEditor is rely on them to clean up the content, which addresses your concern about an unclosed tag.
So I would go with option 3 on your list.
Also ensure that any editor you support encodes data before sending it to the server. Do not turn off request validation.
Use the [AllowHtml] attribute on a model property if necessary.
Also use the Anti-XSS library from Microsoft, specifically the HTML sanitizer, to help remove evil script and protect against cross-site scripting.

What is the best way to handle user generated html content that will be viewed by the public?

In my web application I allow user-generated content to be posted for public consumption, similar to Stack Overflow.
What is the best practice for handling this?
My current steps for handling user-generated content are:
1. I use MarkItUp to allow users an easy way to format their HTML.
2. After a user has submitted their changes, I run it through an HTML sanitizer (scroll to the bottom) that uses a white-list approach.
3. If the sanitization process has removed any user-created content, I do not save the content. I then return their modified content with a warning message: "Some illegal content tags were detected and removed; double-check your work and try again."
4. If the content passes through the sanitization process cleanly, I save the raw HTML content to the database.
5. When rendering to the client, I just pass the raw HTML out of the DB to the page.
That's an entirely reasonable approach. For typical applications it will be entirely sufficient.
The trickiest part of white-listing raw HTML is the style attribute and embed/object. There are legitimate reasons why someone might want to put CSS styles into an otherwise untrusted block of formatted text, or say, an embedded YouTube video. This issue comes up most commonly with feeds. You can't trust the arbitrary block of text contained within a feed entry, but you don't want to strip out, e.g., syntax highlighting CSS or flash video, because that would fundamentally change the content and potentially confuse anyone reading it. Because CSS can contain dangerous things like behaviors in IE, you may have to parse the CSS if you decide to allow the style attribute to stay in. And with embed/object you may need to white-list hostnames.
Addenda:
In worst case scenarios, HTML escaping everything in sight can lead to a very poor user experience. It's much better to use something like one of the HTML5 parsers to go through the DOM with your whitelist. This is much more flexible in terms of how you present the sanitized output to your users. You can even do things like:
<div class="sanitized">
  <div class="notice">
    This was sanitized for security reasons.
  </div>
  <div class="raw"><pre>
&lt;script&gt;alert("XSS!");&lt;/script&gt;
  </pre></div>
</div>
Then hide the .raw stuff with CSS, and use jQuery to bind a click handler to the .sanitized div that toggles between .raw and .notice:
CSS:
.raw {
  display: none;
}
jQuery:
$('.sanitized').click(function() {
  $(this).find('.notice').toggle();
  $(this).find('.raw').toggle();
});
The white list is a good move. Any black-list solution is prone to letting through more than it should, because you just can't think of everything. I've seen some attempts at using black lists (for example The Code Project), and even when they manage to catch everything, they generally still cause additional problems, like replacing characters in code so that it can't be used without manually restoring it first.
The safest method would be:
HTML encode all the text.
Match a set of allowed tags and attributes and decode those.
Using a regular expression you can even require that each opening tag has a closing tag, so that an unclosed tag can't mess up the page.
You should be able to do this in something like ten lines of code, so the code that you linked to seems overly complicated.
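A minimal sketch of that encode-then-selectively-decode idea follows. The allowed-tag list and helper names are illustrative, and the regular expression only decodes attribute-free, properly paired tags, which is what keeps an unclosed tag from breaking the page:
JavaScript:
const ALLOWED = ['b', 'i', 'em', 'strong', 'p'];

function encodeAll(text) {
  // Encode everything first, so nothing is treated as markup.
  return text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function decodeAllowed(encoded) {
  // Decode only simple, attribute-free occurrences of whitelisted tags,
  // and only when the opening tag has a matching closing tag.
  return ALLOWED.reduce((html, tag) => {
    const pair = new RegExp('&lt;' + tag + '&gt;([\\s\\S]*?)&lt;/' + tag + '&gt;', 'gi');
    return html.replace(pair, '<' + tag + '>$1</' + tag + '>');
  }, encoded).replace(/&lt;br\s*\/?&gt;/gi, '<br>');
}

// decodeAllowed(encodeAll('<b>bold</b> <script>alert(1)</script>'))
// -> '<b>bold</b> &lt;script&gt;alert(1)&lt;/script&gt;'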

Programmatically detecting "most important content" on a page

What work, if any, has been done to automatically determine the most important data within an HTML document? As an example, think of your standard news/blog/magazine-style website, containing navigation (possibly with submenus), ads, comments, and the prize - our article/blog/news body.
How would you determine what information on a news/blog/magazine site is the primary data, in an automated fashion?
Note: ideally, the method should work with well-formed markup and with terrible markup, whether somebody uses paragraph tags to make paragraphs or a series of breaks.
Readability does a decent job of exactly this.
It's open source and posted on Google Code.
UPDATE: I see (via HN) that someone has used Readability to mangle RSS feeds into a more useful format, automagically.
think of your standard news/blog/magazine-style website, containing navigation (possibly with submenus), ads, comments, and the prize - our article/blog/news body.
How would you determine what information on a news/blog/magazine site is the primary data, in an automated fashion?
I would probably try something like this:
open URL
read in all links to the same website from that page
follow all links and build a DOM tree for each URL (HTML file)
this should help you identify redundant content (shared templates and such)
compare the DOM trees for all documents on the same site (tree walking)
strip all redundant nodes (i.e. repeated markup, navigation, ads and such things)
try to identify similar nodes and strip them if possible
find the largest unique text blocks that are not found in other DOMs on that website (i.e. unique content)
add as candidate for further processing
This approach seems pretty promising because it would be fairly simple to do, yet would still have good potential to be adaptive, even to complex Web 2.0 pages that make excessive use of templates, because it would identify similar HTML nodes across all the pages on the same website.
This could probably be further improved by simply using a scoring system to keep track of DOM nodes that were previously identified as containing unique content, so that these nodes are prioritized for other pages.
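A rough sketch of that cross-page comparison, in a simplified form: text blocks that repeat on other pages of the same site (navigation, footers, ads) are treated as boilerplate, and the largest block unique to the page is kept as the content candidate. It assumes you already have a parsed DOM per page (e.g. via jsdom); the element selectors and thresholds are arbitrary choices for illustration.
JavaScript:
function textBlocks(doc) {
  return Array.from(doc.querySelectorAll('div, p, td, article, section'))
    .map(el => el.textContent.trim())
    .filter(t => t.length > 40);                 // ignore tiny fragments
}

function mainContentCandidate(pageDoc, otherDocs) {
  const boilerplate = new Set(otherDocs.flatMap(textBlocks));    // blocks seen on other pages of the site
  const unique = textBlocks(pageDoc).filter(t => !boilerplate.has(t));
  return unique.sort((a, b) => b.length - a.length)[0] || null;  // largest unique block wins
}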
Sometimes there's a CSS media section defined as 'print'. Its intended use is for 'Click here to print this page' links. Usually people use it to strip out a lot of the fluff and leave only the meat of the information.
http://www.w3.org/TR/CSS2/media.html
I would try to read this style, and then scrape whatever is left visible.
You can use support vector machines to do text classification. One idea is to break pages into different sections (say, consider each structural element like a div to be a document), gather some properties of it, and convert it to a vector. (As other people suggested, this could be the number of words, number of links, number of images; the more the better.)
First start with a large set of documents (100-1000) for which you have already chosen which part is the main part. Then use this set to train your SVM.
For each new document you just need to convert it to a vector and pass it to the SVM.
This vector model is actually quite useful in text classification, and you do not necessarily need to use an SVM. You can use a simpler Bayesian model as well.
And if you are interested, you can find more details in Introduction to Information Retrieval. (Freely available online)
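A small sketch of the feature-vector step described above, run in the browser: each structural block is turned into a numeric vector (word count, link count, image count, share of words inside links) that an SVM or naive Bayes classifier can consume. The feature choice and element selection are assumptions for illustration; the training itself happens elsewhere.
JavaScript:
function blockFeatures(el) {
  const text = el.textContent.trim();
  const words = text ? text.split(/\s+/).length : 0;
  const links = el.querySelectorAll('a').length;
  const images = el.querySelectorAll('img').length;
  const linkWords = Array.from(el.querySelectorAll('a'))
    .reduce((n, a) => n + a.textContent.trim().split(/\s+/).filter(Boolean).length, 0);
  return [words, links, images, words ? linkWords / words : 0];
}

const vectors = Array.from(document.querySelectorAll('div, section, article'))
  .map(el => ({ el, features: blockFeatures(el) }));
// `vectors` would be labelled by hand for the training set and then fed to the classifier.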
I think the most straightforward way would be to look for the largest block of text without markup. Then, once it's found, figure out the bounds of it and extract it. You'd probably want to exclude certain tags from "not markup" like links and images, depending on what you're targeting. If this will have an interface, maybe include a checkbox list of tags to exclude from the search.
You might also look for the lowest level in the DOM tree and figure out which of those elements is the largest, but that wouldn't work well on poorly written pages, as the DOM tree is often broken on such pages. If you end up using this, I'd come up with some way to see if the browser has entered quirks mode before trying it.
You might also try using several of these checks, then coming up with a metric for deciding which is best. For example, still try my second option above, but give its result a lower "rating" if the browser would enter quirks mode normally. Going with this would obviously impact performance.
I think a very effective algorithm for this might be, "Which DIV has the most text in it that contains few links?"
Seldom do ads have more than two or three sentences of text. Look at the right side of this page, for example.
The content area is almost always the area with the greatest width on the page.
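A quick sketch of that "most text, few links" heuristic; the penalty weight of 3 is an arbitrary assumption, chosen only to push link-heavy blocks (menus, ad units) down the ranking:
JavaScript:
function scoreDiv(div) {
  const text = div.textContent.trim().length;
  const linkText = Array.from(div.querySelectorAll('a'))
    .reduce((n, a) => n + a.textContent.trim().length, 0);
  return text - 3 * linkText;                    // penalize link-heavy blocks
}

const best = Array.from(document.querySelectorAll('div'))
  .sort((a, b) => scoreDiv(b) - scoreDiv(a))[0]; // highest-scoring div is the content candidate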
I would probably start with Title and anything else in a Head tag, then filter down through heading tags in order (ie h1, h2, h3, etc.)... beyond that, I guess I would go in order, from top to bottom. Depending on how it's styled, it may be a safe bet to assume a page title would have an ID or a unique class.
I would look for sentences with punctuation. Menus, headers, footers etc. usually contain separate words, but not sentences containing commas and ending in a period or equivalent punctuation.
You could look for the first and last elements containing sentences with punctuation, and take everything in between. Headers are a special case, since they usually don't have punctuation either, but you can typically recognize them as Hn elements immediately before sentences.
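A sketch of that punctuation test, with an arbitrary length threshold: an element "looks like prose" if its text contains at least two reasonably long runs ending in sentence punctuation.
JavaScript:
function looksLikeProse(el) {
  const sentences = el.textContent.match(/[^.!?]{20,}[.!?]/g) || [];
  return sentences.length >= 2;
}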
While this is obviously not the answer, I would assume that the important content is located near the center of the styled page and usually consists of several blocks interrupted by headlines and such. The structure itself may be a give-away in the markup, too.
A diff between articles / posts / threads would be a good filter to find out what content distinguishes a particular page (obviously this would have to be augmented to filter out random crap like ads, "quote of the day"s or banners). The structure of the content may be very similar for multiple pages, so don't rely on structural differences too much.
Instapaper does a good job with this. You might want to check Marco Arment's blog for hints about how he did it.
Today most news/blog websites are using a blogging platform.
So I would create a set of rules to search for content.
For example, two of the most popular blogging platforms are WordPress and Google's Blogspot.
WordPress posts are marked by:
<div class="entry">
...
</div>
Blogspot posts are marked by:
<div class="post-body">
...
</div>
If the search by CSS classes fails, you could fall back to the other solutions: identifying the biggest chunk of text, and so on.
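A sketch of those platform-specific rules with a fallback hook; the selector list simply mirrors the two examples above and would grow as you add platforms:
JavaScript:
const PLATFORM_SELECTORS = ['div.entry', 'div.post-body'];   // WordPress, Blogspot

function extractPost(doc) {
  for (const selector of PLATFORM_SELECTORS) {
    const node = doc.querySelector(selector);
    if (node) return node.innerHTML;
  }
  return null;   // fall back to the generic heuristics (largest text block, etc.)
}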
As Readability is not available anymore:
If you're only interested in the outcome, you can use Readability's successor, Mercury, a web service.
If you're interested in some code showing how this can be done and prefer JavaScript, there is Mozilla's Readability.js, which is used for Firefox's Reader View.
If you prefer Java, you can take a look at Crux, which also does a pretty good job.
Or if Kotlin is more your language, you can take a look at Readability4J, a port of the above Readability.js.
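For the Readability.js route, a minimal usage sketch run outside the browser with jsdom is below. The package names reflect the current npm modules (@mozilla/readability, jsdom); check the project README for the exact options, and treat the sample HTML as a placeholder.
JavaScript:
const { Readability } = require('@mozilla/readability');
const { JSDOM } = require('jsdom');

const html = '<html><body><div class="post-body"><p>Article text goes here.</p></div></body></html>';
const dom = new JSDOM(html, { url: 'https://example.com/article' });   // the URL helps resolve relative links
const article = new Readability(dom.window.document).parse();
if (article) {
  console.log(article.title);         // extracted headline
  console.log(article.textContent);   // main content as plain text
}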

Apart from <script> tags, what should I strip to make sure user-entered HTML is safe?

I have an app that reprocesses HTML in order to do nice typography. Now, I want to put it up on the web to let users type in their text. So here's the question: I'm pretty sure that I want to remove the SCRIPT tag, plus closing tags like </form>. But what else should I remove to make it totally safe?
Oh good lord you're screwed.
Take a look at this
Basically, there are so many things you would want to strip out. Plus, there's stuff that's valid but could be used in malicious ways. What if the user wants to set their font size smaller on a footnote? Do you care if that gets applied to your entire page? How about setting colors? Now all the words on your page are white on a white background.
I would look into the requirements phase again.
Is a markdown-like alternative possible?
Can you restrict access to the final content, reducing risk of exposure? (meaning, can you set it up so the user only screws themselves, and can't harm other people?)
You should take the white-list rather than the black-list approach: decide which features are desired, rather than trying to block every unwanted feature.
Make a list of desired typographic features that match your application. Note that there is probably no one-size-fits-all list: it depends both on the nature of the site (programming questions? teenagers' blog?) and the nature of the text box (are you leaving a comment or writing an article?). You can take a look at some good and useful text boxes in open-source CMSs.
Now you have to choose between your own markup language and HTML. I would choose a markup language. The pros are better security; the cons are the inability to embed unexpected internet content, like YouTube videos. A good way to prevent users' rage is to add an "HTML to my-site" feature that translates the supported HTML tags into your markup language and deletes all other tags.
The pros for HTML are consistency with standards, extensibility to new content types, and simplicity. The big con is code-injection security issues. Should you pick HTML tags, try to adopt some working system for filtering HTML (I think Drupal does quite a good job in this case).
Instead of blacklisting some tags, it's always safer to whitelist. See what stackoverflow does: What HTML tags are allowed on Stack Overflow?
There are just too many ways to embed scripts in the markup. javascript: URLs (encoded of course)? CSS behaviors? I don't think you want to go there.
There are plenty of ways that code could be sneaked in - especially watch out for situations like <img src="http://nasty/exploit/here.php"> that can feed a <script> tag to your clients. I've seen <script> blocked on sites before, but the <img> tag got right through, which resulted in 30-40 passwords being stolen.
<iframe>
<style>
<form>
<object>
<embed>
<bgsound>
That is what I can think of. But to be sure, use a whitelist instead - allow only things like <a> and <img>† that are (mostly) harmless.
† Just make sure that any javascript:... URLs and on*=... attributes are filtered out too... as you can see, it can get quite complicated.
I disagree with person-b. You're forgetting about javascript attributes, like this:
<img src="xyz.jpg" onload="javascript:alert('evil');"/>
Attackers will always be more creative than you when it comes to this. Definitely go with the whitelist approach.
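A hedged sketch of the whitelist approach these answers recommend: parse the submitted markup, drop every element that is not on the allow-list, and strip event-handler and javascript: attributes from what remains. The tag and attribute lists are examples rather than a vetted policy; for production, a maintained sanitizer library such as DOMPurify is the safer choice.
JavaScript:
const ALLOWED_TAGS = new Set(['A', 'B', 'I', 'EM', 'STRONG', 'P', 'BR', 'IMG']);
const ALLOWED_ATTRS = new Set(['href', 'src', 'alt', 'title']);

function sanitize(dirtyHtml) {
  const doc = new DOMParser().parseFromString(dirtyHtml, 'text/html');   // inert document, scripts do not run
  for (const el of Array.from(doc.body.querySelectorAll('*'))) {
    if (!ALLOWED_TAGS.has(el.tagName)) {
      el.replaceWith(...el.childNodes);   // keep the text, drop the tag itself
      continue;
    }
    for (const attr of Array.from(el.attributes)) {
      const value = attr.value.trim().toLowerCase();
      if (!ALLOWED_ATTRS.has(attr.name) || attr.name.startsWith('on') || value.startsWith('javascript:')) {
        el.removeAttribute(attr.name);
      }
    }
  }
  return doc.body.innerHTML;
}

// sanitize('<img src="x.jpg" onload="alert(1)"><script>alert(2)</script>')
// -> '<img src="x.jpg">alert(2)'   (onload and the script element removed; the script's text survives as plain text)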
MediaWiki is more permissive than this site; yes, it accepts setting colors (even white on white), margins, indents and absolute positioning (including positions that would put the text completely off screen), nulls, clipping and "display:none", font sizes (even if they are ridiculously small or excessively large) and font names (even a legacy non-Unicode Symbol font name that will not render the text successfully), as opposed to this site, which strips out almost everything.
But MediaWiki successfully strips the dangerous active scripting out of CSS (i.e. behaviors, onEvent handlers, active filters and javascript: link targets) without filtering out the style attribute completely, and bans a few other active elements like object, embed and bgsound.
Both sites ban marquees as well (not standard HTML, and needlessly distracting).
But MediaWiki sites are patrolled by lots of users and there are policy rules to ban those users that are abusing repeatedly.
It offers support for animated images, and provides support for active extensions, such as rendering TeX math expressions, other approved active extensions (like timeline), and creating or customizing a few forms.