From the information I have been able to find so far, <noindex> is supposed to achieve this, hiding a single section of a page from search engine spiders. But it also seems this is not obeyed by many crawlers - so if that is the case, what markup should be used instead of, or in addition to, it?
Yahoo uses a built-in class: <span class="robots-nocontent">
Googlebot has no equivalent(?)
Yandex uses <noindex>
Others?
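For concreteness, a rough sketch of how those vendor-specific hints are written (the class and the tag are as the vendors document them; the wrapped text is just a placeholder):
<!-- Yahoo Slurp: class-based hint -->
<span class="robots-nocontent">boilerplate the engine should skip</span>
<!-- Yandex: non-standard wrapper tag (a comment form also exists for valid HTML) -->
<noindex>boilerplate the engine should skip</noindex>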
There is no way to stop crawlers from indexing anything; it's up to their authors to decide what the crawlers will do. The rule-obeying ones, like Yahoo Slurp, Googlebot, etc., each have their own rules, as you've already discovered, but even then it's up to them whether to obey those rules completely or not - say you set robots-nocontent, and that part is still indexed and stored somewhere else, perhaps for checks for spam, illegal material, malware, etc.
And that's just for the "good" ones, there's no telling what the bad ones would do. So think of all the noindex stuff as a set of guidelines, not a set of strict rules.
And the only thing that works for sure: if you have sensitive data, or you simply don't want something indexed - don't make it publicly available.
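If page-level (rather than section-level) exclusion is acceptable, the mechanism the major engines do document is the robots meta tag (or the equivalent X-Robots-Tag HTTP header), for example:
<!-- keeps the whole page out of rule-obeying indexes; still only a request -->
<meta name="robots" content="noindex">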
When users visit my website, they don't care how perfectly or how standards-compliant the page is coded. They only care about whether it works or not.
There are tags that are deprecated but have consistent behavior throughout all major, minor, and very minor browsers. They work now and will work in the future. (I'm not talking about optional tags like <marquee> and <blink> which will probably be removed in the future since their non-existence doesn't break pages.) The tags I'm talking about are for example:
<center> (used by google.com homepage, yes and it's May 2014)
<body bgcolor=, alink=, vlink=, link= (all used by google.com)
<font size= (also used by google.com)
If my HTML generator produces tags like <body bgcolor=black>, it is guaranteed to work for near 100% of users.
If it instead produces CSS like background:black;, it will be supported by fewer users than <body bgcolor=black>. (Start with https://superuser.com/q/732669/78897 and https://superuser.com/q/447269/78897, though I'm sure they are not the only ones in the whole world.)
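To make the comparison concrete, roughly what the two kinds of output look like (the file name is just a placeholder):
<!-- Output A: presentational attributes repeated in every generated page -->
<body bgcolor=black>
<!-- Output B: every page references one shared style sheet -->
<link rel="stylesheet" href="site.css">
<!-- where site.css contains: body { background: black; } -->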
Bear with me, this is a real question based on a true problem. Exactly what are the real disadvantages of having these tags as output?
Potential disadvantages include the following:
1) Your customer might actually care about how standard the code is. Maybe not now, but in the future. Maybe for questionable reasons, but still.
2) Deprecated constructs do not always work consistently. For example, align=center attribute set on a table may have different effects depending on browser mode. This is a relatively weak argument, though, since the browser practices have been described rather well in HTML5 CR and you can manage the potential problems. (Besides, even CSS settings may work inconsistently.)
3) There is no guarantee that deprecated features will be supported by all future browsers. On the other hand, the same applies to standard features. In practice, very few features that have been defined in HTML specifications have actually been removed from browsers. (Regarding tags, I think basefont is the only case.) All the examples mentioned, and also marquee, have been described in HTML5 CR as “obsolete” but still well-defined, and according to HTML5 CR, browsers are expected, and partly required, to support them all.
4) Your colleagues (designers/developers/...) may regard your code (and you) as old-fashioned, non-semantic, and whatever.
5) Code maintenance and development may be more difficult. If you have 1,000 pages with <body bgcolor=black> and the customer says they want a somewhat different background color, you would need to edit each page. This argument is, however, weaker than it seems to be. First, how often do such things actually happen? Second, if the pages have actually been generated using suitable tools, perhaps you just need to change the value of one parameter and regenerate them (or just let servers do that, if the pages are dynamically generated). Third, if you have a link element on all pages, referring to a basic style sheet for the pages, as you normally should, you just need to add one rule to that style sheet. It is easy to override presentational HTML attributes with CSS.
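For instance, a single rule added to that shared style sheet overrides the attribute everywhere, since author CSS takes precedence over presentational attributes (a minimal sketch):
/* restyles every page that still carries <body bgcolor=black> */
body { background: #1a1a2e; }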
To summarize, the practical arguments against your approach are rather weak. The most important arguments relate to coding style and principles.
I've added some more disadvantages:
Another disadvantage of using those tags is site bandwidth. When you put center, bgcolor and similar tags in the HTML, the browser has to download all of that markup every time, even if those tags are the same on every page and even if the user has visited the site many times. But when you place the design in a CSS file, browsers can cache that file (especially when you set the headers properly), so they only load the HTML and the images (if no caching is set for those).
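A sketch of what that looks like in practice (the path and header value are only examples):
<!-- every page references the same external file -->
<link rel="stylesheet" href="/static/site.css">
<!-- and the server marks it as cacheable with a response header such as
     Cache-Control: max-age=31536000
     so repeat visitors download only the HTML and any uncached images -->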
Another thing is that if you decide to redesign the site or restyle some elements, it's much easier to make the changes only in the CSS files. It's possible that in the future you won't be doing those changes yourself - other companies or freelancers may be doing them - and it will be much easier for them to make changes to the site. So the site will be cheaper to maintain.
In addition, if the HTML/PHP code is poor (or the site is very complex) and "visual conditions" appear in many files (for example, on one page you decide to use one colour and put it in the HTML, and on another page a different colour) and something goes wrong, it will be much easier to find the problem, because you can simply cut some CSS out and check where the problem is.
The disadvantage is that one of the major browsers may choose to drop a deprecated tag in a future release.
The advantage of using CSS over tags is that you can change the whole web site look and feel in a simple move.
Consider people who require larger font sizes or who are colour-blind, and keeping presentation in CSS also enables the fullest use of screen readers.
Even those consistently behaving tags may be removed from browsers. What if you would like to create an HTML5 website? Then you will need to learn everything from scratch and change literally everything on your website to make it work, because you never know whether those tags will be supported in future HTML5 documents or only in older HTML documents.
CSS provides easier maintenance, for one: the client decides they want some elements aligned left instead of center? Change your CSS rule and poof, you're done. But if you're using old-school valign and such? Get ready to go change every single instance of that in the file(s).
As far as I understand, the only real advantage of HTML5 semantic markup is for search engines and web crawlers to interpret the document better.
Since intranet applications have nothing to do with search engines or web crawlers, what are the advantages of using semantic markup in HTML5?
There is no straightforward example to point out, but the website (even intranet) can be consumed by different user agents (on different devices).
You are probably familiar with Skype (and the iOS Safari) making phone-number-like words clickable. In the future I can easily imagine mobile browsers being smarter to assist the user in completing tasks on the page, like importing a clearly indicated contact to the address book.
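For example, one way to "clearly indicate" a contact today is schema.org microdata together with a tel: link (the name and number here are made up):
<div itemscope itemtype="https://schema.org/Person">
  <span itemprop="name">Jane Doe</span>
  <span itemprop="telephone">+1 555 0123</span>
  <a href="tel:+15550123">Call Jane</a>
</div>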
Screenreaders for blind people?
While there is not a whole lot of immediate benefit for non-disabled people, it is still good practice. Does your company not have any externally facing sites? If it does, don't the same people look at the internal page code? Good practices spread just like bad ones.
see also: http://en.wikipedia.org/wiki/Semantic_HTML
Simply put, the only rule you have to follow when creating your HTML documents is that they are valid HTML; otherwise you will have the problem that the browser tries to correct your broken syntax, which may result in defects in the visual presentation of your content.
In modern browsers you can use display to give any element - with some limitations, e.g. with the input element - the same visual look and behavior as any other element.
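That point is easy to demonstrate: with CSS both of the following can be made to look identical, yet only the first one means anything to crawlers, screen readers or other tools (the styling values are just the usual heading defaults):
<h1>Quarterly report</h1>
<div style="font-size:2em; font-weight:bold; margin:0.67em 0">Quarterly report</div>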
So if you ask what the advantages of using semantic markup in HTML5 are, you should really ask: why use any semantic markup at all if it is possible to achieve the same result using CSS?
The short answer is: no one will stop you if it is your own project that only you are responsible for - except the client, who probably gives you requirements.
It is similar to asking: language xyz provides comments, and there is a syntax for doc comments, but why should I use them?
Using the semantics wisely increases the readability and thus maintainability. You are not required to use every possibility of semantics at all costs.
Using them will help you get back into the code if you haven't looked at it for a long time, e.g. to distinguish between the elements that encapsulate logical parts and the elements that are only used for styling. This is especially true if you use a template engine to create your code, or when you need to search for certain elements across multiple files.
Even if you are now the only one working on the code, it may happen, as the project grows, that you need other people to work on it. Or if you are for some reason unavailable and someone else needs to maintain your code, good markup is essential.
Using the correct markup and additions like WAI-ARIA is not only essential for people with disabilities, it also allows the browser to recognize the meaning of elements, which can, for example, improve keyboard navigation. Especially in a productive environment where you need to type a lot, it is often faster to navigate with the keyboard than with a trackpad or a mouse.
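A small sketch of what that can look like; a landmark role and label let assistive technology and keyboard shortcuts jump straight to the menu instead of tabbing through every link (the links are placeholders):
<div role="navigation" aria-label="Main menu">
  <ul>
    <li><a href="/orders">Orders</a></li>
    <li><a href="/reports">Reports</a></li>
  </ul>
</div>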
When creating a website why should you care for HTML with no style?
Is there any device which will render HTML only (no CSS or JavaScript)?
Do you usually care how your website will display without CSS?
Why is it important?
There are several cases in which websites may be used without styling. As mentioned in the comments, screen readers (such as those used by visually-impaired people) read only content, not styling.
Perhaps more importantly, many search engine spiders (think: Google) read your site without styling. When you view your site without CSS, you will gain a better understanding of how search engines view your content.
And if you are lucky, or your content is particularly geeky, you may get the occasional guru who browses your site via Lynx.
There seem to be a few misconceptions here. First of all, and most importantly, screen readers do take CSS and JavaScript into account. Why? Simply because, unlike in the past, they do not run on their own; they work as add-ons for existing browsers or embed the rendering engines in their own systems.
Does that mean you don't need to concern yourself with screen readers at all? Sadly, that's not the case either. For example, if you add display:table to an element just because you want to vertically align something, some screen readers will actually treat it like a real table (which makes no practical sense). The good part, though, is that pages are read top to bottom, header and menu first (if found), and that adding display:none to an element through JavaScript will hide the element from the screen reader as well.
Now, the following is going to sound really harsh, but unless you're making a really high-profile website I wouldn't advise you to concern yourself with this too much. On the one hand screen readers are becoming better and better (try, for example, the one included on your Android device if you have a recent version of Android), and on the other hand blind people are used to websites being a 'bit' messy. That doesn't mean you should start using Flash or other crazy stuff, but it does mean that if you just write a proper website - make your menu a list, make your divisions divs and not tables, etc. - you should in general be fine. And if you are making a high-profile website, then you should check out WAI-ARIA.
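To illustrate the difference described above (the class names are just common conventions, nothing standard):
/* hidden from visual rendering AND from screen readers */
.hidden { display: none; }
/* hidden visually but still announced by screen readers
   (a widely used "visually hidden" pattern) */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0 0 0 0);
}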
Now, getting to the search engine part: that's not true either, at least for the big search engines. Google does take styling into account. Not that all styling matters to Google, but it will actually work out which content is hidden and analyse the JavaScript to see whether hidden content will be shown (as part of its anti-spam/SEO work), it will search for links in your JavaScript, and it probably does lots more I am not aware of. Bing does this to a large extent as well, though DuckDuckGo, for example, does not do this much, or at all. Either way, once again, the notion that Google sees your site the way Lynx does was true in the far past, but by now it is invalid.
And if you check your server logs you will see that nobody accesses your site through Lynx. That's just the reality of life nowadays. In the past (again) people would occasionally use Lynx if they only had access to a console, but nowadays it's far easier to pull the phone from your pocket, which runs a full web browser.
First part of the answer: 'text-based browsers'
Text-based browser list
Alynx
ELinks (active version of Links)
Emacs/W3
Line Mode Browser
Links
Lynx
Net-Tamer
w3m
WebbIE
Second part: 'search engines'
List of semantic search engines
List of search engines
Third part: 'web accessibility', where software helps people with disabilities get access to the web.
It's important to note that for the third part, accessibility, it is sometimes a legal requirement. For example, in the UK it is illegal to have a website that is not accessible to blind people. There are similar requirements for US government services. – slebetman
It's also an applicable law in Canada.
See this list of tools from the W3C for a complete list of Web Accessibility Evaluation Tools.
CSS isn’t an on/off thing. Although CSS may be completely disabled, it is much more common that some of your CSS settings get ignored or overridden. Here is an incomplete list of cases (see my CSS Caveats for some additional details):
Speech-based browsers generally ignore most of CSS, largely because most of CSS is directed towards visual rendering.
So do “text-only browsers” (more accurately, character cell browsers, which render in plain text only using a monospace font but may be able to use colors and bolding).
Search engines generally don’t care about CSS at all.
CSS support varies. The more advanced CSS features you use, the more probable it is that many browsers don’t implement them.
CSS support may be disabled by the user, completely or partially.
User style sheets may interfere with your CSS code or even override them.
Browser settings, e.g. on minimum font size, may make some of your CSS settings ineffective.
Browsers have bugs. The more complicated CSS techniques you use, the more probable it is that you trigger a bug in some browsers.
An external CSS file (the recommended way of using CSS) may get lost, e.g. a browser may need to wait for it (perhaps in vain) from a server, or an archiving system may archive an HTML file but fail to archive the CSS file.
Styling may get lost in transfer, e.g. when copying content from a web page to MS Word or Excel or Notepad or email.
I need to implement a simple and efficient XSS filter in C++ for CppCMS. I can't use existing high-quality filters written in PHP because CppCMS is a high-performance framework that uses C++.
The basic idea is to provide a filter that has a whitelist of HTML tags and a whitelist of attributes for those tags. For example, typical HTML input can consist of <b>, <i> tags and an <a> tag with href. But a straightforward implementation is not good enough, because even simple allowed links may include XSS:
<a href="javascript:alert('XSS')">Click On Me</a>
Many other examples can be found there. So I also thought about the possibility of creating a whitelist of prefixes for attributes like href/src -- so I would always check that the value starts with (https?|ftp)://
Questions:
Are these assumptions good enough for most purposes? That is, if I do not allow style attributes and I check src/href against a whitelist of prefixes, does that solve the XSS problems? Are there problems that can't be fixed this way?
Is there a good reference for a formal grammar of HTML/XHTML, so that I can write a simple parser that would clean up all incorrect or forbidden tags like <script>?
You can take a look at the AntiSamy project, which is trying to accomplish the same thing. It's Java and .NET, though.
http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project#.NET_version
http://www.owasp.org/index.php/Category:OWASP_AntiSamy_Project_.NET
Edit 1, a bit extra:
You can potentially come up with very strict whitelisting. It should be well structured, pretty tight and not very flexible. When you combine flexibility, many tags and attributes, and different browsers, you generally end up with an XSS vulnerability.
I don't know what your requirements are, but I'd go with strict and simple tag support (only b, li, h1, etc.) and then strict attribute support based on the tag (for example, src is only valid on the img tag); then you need to whitelist the attribute values as you stated, http|https|ftp for URLs, or style="color|background-color", etc.
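As a sketch of the prefix whitelisting described above, in C++ since that is the target language (the function name and scheme list are only illustrative, not part of CppCMS):
#include <algorithm>
#include <cctype>
#include <string>

// Accept an href/src value only if it starts with a whitelisted scheme.
// Anything else (javascript:, data:, vbscript:, etc.) is rejected.
bool has_allowed_scheme(std::string value)
{
    // Drop leading control characters and spaces that browsers ignore
    // but attackers use to disguise "javascript:".
    value.erase(value.begin(),
                std::find_if(value.begin(), value.end(),
                             [](unsigned char c) { return c > ' '; }));
    // Lower-case the value for a case-insensitive comparison.
    std::transform(value.begin(), value.end(), value.begin(),
                   [](unsigned char c) { return std::tolower(c); });

    for (const std::string& prefix : { std::string("http://"),
                                       std::string("https://"),
                                       std::string("ftp://") }) {
        if (value.compare(0, prefix.size(), prefix) == 0)
            return true;
    }
    return false;
}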
Consider this one:
<x style="express/**/ion:(alert(/bah!/))">
You also need to think about some character whitelisting or UTF-8 normalization, because different encodings can cause awkward issues, such as newlines in attributes or invalid UTF-8 sequences.
All details of HTML parsing are specified in HTML5. However, implementing it is quite a lot of work, and it doesn't really matter whether you parse HTML exactly, with all its corner cases. At worst you'll end up with a different DOM, but you have to sanitize the DOM anyway.
As you mentioned, there are various PHP implementations of this, but I don't know of any in C++, since that's not a language typically applied to web development. Overall, it's going to depend on how complex of an implementation you want to come up with.
A very restrictive whitelist is probably the "simplest" way, but if you want to be really comprehensive I would look into doing a conversion of one of the established versions to C++, as opposed to trying to write your own from scratch. There are so many tricks to worry about, that I think you'd be better off standing on the shoulders of others that have already gone through all that.
I don't know anything about using C++ for web development, but converting PHP to it doesn't seem like it would be a particularly difficult task, PHP doesn't really have any magical capabilities that C++ won't be able to duplicate. I'm sure there will be some small hitches, but overall if you want to go the more-complex route it'd definitely still be faster to do a conversion than a full design from scratch.
HTML Purifier seems like a strong PHP implementation that is still actively maintained; there's a comparison document where the author discusses some differences between his approach and others', which is probably worth reading.
Whatever you come up with, definitely test it with all the examples you link, and make sure it passes all those. Good luck!
I have an app that reprocesses HTML in order to do nice typography. Now, I want to put it up on the web to let users type in their text. So here's the question: I'm pretty sure that I want to remove the SCRIPT tag, plus closing tags like </form>. But what else should I remove to make it totally safe?
Oh good lord you're screwed.
Take a look at this
Basically, there are so many things you want to strip out. Plus, there's stuff that's valid but could be used in malicious ways. What if the user wants to set their font size smaller on a footnote? Do you care if that gets applied to your entire page? How about setting colors? Now all the words on your page are white on a white background.
I would look into the requirements phase again.
Is a markdown-like alternative possible?
Can you restrict access to the final content, reducing risk of exposure? (meaning, can you set it up so the user only screws themselves, and can't harm other people?)
You should take the white-list rather than the black-list approach: Decide which features are desired, rather than try to block any unwanted feature.
Make a list of desired typographic features that match your application. Note that there is probably no one-size-fits-all list: It depends both on the nature of the site (programming questions? teenagers' blog?) and the nature of the text box (are you leaving a comment or writing an article?). You can take a look at some good and useful text boxes in open source CMSs.
Now you have to choose between your own markup language and HTML. I would choose a markup language. The pros are better security; the cons are the inability to embed unexpected internet content, like YouTube videos. A good idea to prevent users' rage is to add an "HTML to my-site" feature that translates the corresponding HTML tags into your markup language and deletes all other tags.
The pros for HTML are consistency with standards, extensibility to new content types, and simplicity. The big con is code-injection security issues. Should you pick HTML tags, try to adopt some working system for filtering HTML (I think Drupal is doing quite a good job in this case).
Instead of blacklisting some tags, it's always safer to whitelist. See what stackoverflow does: What HTML tags are allowed on Stack Overflow?
There are just too many ways to embed scripts in the markup. javascript: URLs (encoded of course)? CSS behaviors? I don't think you want to go there.
There are plenty of ways that code can be sneaked in - especially watch for situations like <img src="http://nasty/exploit/here.php">, which can feed a <script> tag to your clients. I've seen <script> blocked on sites before, but the <img> tag got right through, which resulted in 30-40 passwords being stolen.
<iframe>
<style>
<form>
<object>
<embed>
<bgsound>
Is what I can think of. But to be sure, use a whitelist instead - things like <a>, <img>† that are (mostly) harmless.
† Just make sure that any javascript:... / on*=... are filtered out too... as you can see, it can get quite complicated.
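Two quick examples of why that attribute filtering matters even with a tag whitelist (the payloads are just placeholders):
<!-- both use only "allowed" tags, yet both run script if the attribute
     values are not checked as well -->
<a href="javascript:alert(document.cookie)">harmless-looking link</a>
<img src="x" onerror="alert(document.cookie)">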
I disagree with person-b. You're forgetting about javascript attributes, like this:
<img src="xyz.jpg" onload="javascript:alert('evil');"/>
Attackers will always be more creative than you when it comes to this. Definitely go with the whitelist approach.
MediaWiki is more permissive than this site; yes, it accepts setting colors (even white on white), margins, indents and absolute positioning (including positions that would put the text completely off screen), clipping and "display:none", font sizes (even if they are ridiculously small or excessively large) and font names (even if it is a legacy non-Unicode Symbol font name that will not render the text successfully), as opposed to this site, which strips out almost everything.
But MediaWiki successfully strips out the dangerous active scripting from CSS (i.e. the behaviors, the onEvent handlers, the active filters and javascript: link targets) without filtering out the style attribute completely, and it bans a few other active elements like object, embed and bgsound.
Both sites ban marquees as well (not standard HTML, and needlessly distracting).
But MediaWiki sites are patrolled by lots of users, and there are policy rules to ban users who abuse this repeatedly.
It offers support for animated images, and provides support for active extensions, such as rendering TeX maths expressions, other active extensions that have been approved (like timeline), and creating or customizing a few forms.