It's a well-known fact that browsers will accept invalid HTML and do their best to make sense of it. If you create a web page containing only the following code:
<html>
<head>
<title>This is bad HTML</title>
<body>
<h1>Bad HTML</h2>
<p>This is a paragraph
</body>
then you will get a web page parsed in a way that produces an acceptable view. Whether it is what you meant or not depends on each browser's interpretation of your mistakes.
This, to me, is the same as if JavaScript could be written like this:
if (some_var == 1) {
say_something("some text');
else {
do_something_else();
// END OF CODE
which a JavaScript engine written with the same effort to make sense of invalid code could probably parse as you meant - or could make its own sense of and run anyway.
I've seen several articles and questions regarding the question "Is it even worth it writing valid HTML?", which present several opinions on the pros and cons of writing valid HTML. However, what this really makes me wonder is:
Why are browsers accepting invalid HTML in the first place?
NOTE: The following are not additional questions, but a way to give context for the only question I'm asking here:
Why aren't browsers strict?
Why don't they reject invalid code with errors, just like any other programming language? (Not that I'm calling HTML a programming language, but you get the point.)
Wouldn't that force all developers to write HTML code that will be interpreted exactly the same in any browser?
If browsers refused to parse invalid markup, wouldn't that effectively result in valid markup everywhere, from anyone wanting to publish content on the web?
If this comes from historical reasons and backward compatibility, isn't it time to change, now that we already see sites like adsense.google.com refusing compatibility with IE < v10?
EDIT: Those voting to close this question, please reconsider. This is not a broad question, nor is it an opinion-based one. It's a very specific question on a very specific subject, completely related to the programming world, and one that can definitely be answered with a real answer by those who actually know it. Thanks.
"Why are browsers accepting invalid HTML in the first place?"
For compatibility reasons, and in the case of newer browsers, because HTML5 dictates an algorithm for parsing even invalid documents.
Earlier HTML specifications were ambiguous about many situations, such as what happens when the wrong tag is seen, or when tags are inconsistently nested, as in <b><i></b></i>. Even so, many documents "just work" because some earlier browsers ignore unexpected tags or even "correct" incorrect nesting.
But now the HTML5 specification includes a much less ambiguous algorithm for parsing HTML documents. Note that the algorithm includes points where "parse errors" can occur. But these parse errors usually don't stop a modern browser from displaying an HTML document, although the browser is free to display parse errors in its developer tools if it chooses to:
[U]ser agents, while parsing an HTML document, may abort the parser at the first parse error that they encounter for which they do not wish to apply the rules described in this specification. [Emphasis added.]
But again, no modern browser, to my knowledge, aborts parsing a document this early because of parse errors (barring extraordinary situations, such as running out of memory).
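As a concrete illustration, here is a sketch (not normative output, just the commonly cited result) of how the HTML5 algorithm resolves the mis-nested <b><i></b></i> case from above once there is text inside the tags, via what the spec calls the "adoption agency algorithm":
<b><i>one</b>two</i>
<!-- is parsed into a DOM roughly equivalent to: -->
<b><i>one</i></b><i>two</i>
The stray formatting element is closed early and then reopened, so every browser implementing the algorithm builds the same tree instead of guessing.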
On the adsense.google.com situation: This probably has nothing to do with invalid HTML, but rather, perhaps, because IE9 and earlier's DOM support is not sufficient for adsense.google.com's needs.
I don't know why they allowed it from the start, but here is why they can't switch now: legacy support. If a browser enforced strict HTML, huge parts of the internet would simply break. Yes, some people would update their code, but some pages would just be lost. There is no incentive for browsers to do this: to the consumer it would look as though that browser just doesn't work on some pages, and they would switch to another that still supports less-than-optimal HTML.
Basically, because it was allowed from the beginning, it has to be allowed now.
To avoid opinion-based answers, this type of question requires an answer based on an authoritative reference with credible and/or official sources.
The following excerpts are quotes from the W3C Validator Help & FAQ that address "Why are browsers accepting invalid HTML in the first place?" and some other concerns related to it.
About Markup
Most pages on the World Wide Web are written in computer languages
(such as HTML) that allow Web authors to structure text, add
multimedia content, and specify what appearance, or style, the result
should have.
As for every language, these have their own grammar, vocabulary and
syntax, and every document written with these computer languages are
supposed to follow these rules. The (X)HTML languages, for all
versions up to XHTML 1.1, are using machine-readable grammars called
DTDs, a mechanism inherited from SGML.
However, just as texts in a natural language can include spelling or
grammar errors, documents using Markup languages may (for various
reasons) not be following these rules.
[...]
Concepts
One of the important maxims of computer programming is: "Be
conservative in what you produce; be liberal in what you accept."
Browsers follow the second half of this maxim by accepting Web pages
and trying to display them even if they're not legal HTML. Usually
this means that the browser will try to make educated guesses about
what you probably meant. The problem is that different browsers (or
even different versions of the same browser) will make different
guesses about the same illegal construct; worse, if your HTML is
really pathological, the browser could get hopelessly confused and
produce a mangled mess, or even crash.
That's why you want to follow the first half of the maxim by making
sure your pages are legal HTML.
[...]
Validity might not mean quality, and invalidity might not mean poor quality
A valid Web page is not necessarily a good web page, but an invalid
Web page has little chance of being a good web page.
For that reason, the fact that the W3C Markup Validator says that one
page passes validation does not mean that W3C assesses that it is a
good page. It only means that a tool (not necessarily without flaws)
has found the page to comply with a specific set of rules. No more, no
less. This is also why the "valid ..." icons should never be
considered as a "W3C seal of quality".
Unexpected browser behavior might mean that browsers don't actually accept invalid markup
While contemporary Web browsers do an increasingly good job of parsing
even the worst HTML “tag soup”, some errors are not always caught
gracefully. Very often, different software on different platforms will
not handle errors in a similar fashion, making it extremely difficult
to apply style or layout consistently.
Using standard, interoperable markup and stylesheets, on the other
hand, offers a much greater chance of having one's page handled
consistently across platforms and user-agents.
[...]
Compatibility problems
Checking that a page “displays fine” in several contemporary browsers
may be a reasonable insurance that the page will “work” today, but it
does not guarantee that it will work tomorrow.
In the past, many authors who relied on the quirks of Netscape 1.1
suddenly found their pages appeared totally blank in Netscape 2.0.
Whilst Internet Explorer initially set out to be bug-compatible with
Netscape, it too has moved towards standards compliance in later
releases.
[...]
Relying too much on 3rd party tools
The answer to this one is that markup languages are no more than data
formats. So a website doesn't look like anything at all! It only takes
on a visual appearance when it is presented by your browser.
In practice, different browsers can and do display the same page very
differently. This is deliberate, and doesn't imply any kind of browser
bug. A term sometimes used for this is WYSINWOG - What You See Is Not
What Others Get (unless by coincidence). It is indeed one of the
principal strengths of the web, that (for example) a visually impaired
user can select very large print or text-to-speech without a publisher
having to go to the trouble and expense of preparing a separate
edition.
Granted, this is a very generic question, but I am wondering whether W3C validation is considered a best practice for HTML validation, or whether there are better approaches to ensure contemporary standards-compliant markup.
This question arose when I noticed duplicate IDs on an MDN page (a site I would have assumed would be very strict about its coding practices). It appeared to be an artifact of how they generated the sections of the page.
Curious, I validated the page's code on the w3c validator, and there were various "errors" that suggested that MDN was just ignoring that a certain attribute or value was not valid. Generally, these related to seemingly appropriate uses of rel attributes.
I was left wondering if standards for valid, semantic markup matter less, or if there's a new ideal approach to code validation and standardization than relying on w3c validation.
Maintainer of the current W3C HTML Checker (validator) here. I think it's important to understand the intended purpose of the current HTML checker, which is different from the purpose of the legacy W3C Markup Validator.
The purpose of the checker is documented at https://validator.w3.org/nu/about.html#why-validate:
The core reason to run your HTML documents through a conformance checker is simple: To catch unintended mistakes—mistakes you might have otherwise missed—so that you can fix them.
Beyond that, some document-conformance requirements (validity rules) in the HTML spec are there to help you and the users of your documents avoid certain kinds of potential problems.
There are some markup cases defined as errors because they are potential problems for accessibility, usability, interoperability, security, or maintainability—or because they can result in poor performance, or that might cause your scripts to fail in ways that are hard to troubleshoot.
Along with those, some markup cases are defined as errors because they can cause you to run into potential problems in HTML parsing and error-handling behavior—so that, say, you’d end up with some unintuitive, unexpected result in the DOM
Validating your documents alerts you to those potential problems.
So, as far as your question about whether there "are better approaches to ensure contemporary standards-compliant markup": the answer is that it's not an either-or thing. There are a variety of approaches, and the W3C HTML Checker is just one of them. Its goal isn't to be the single way to determine anything, but simply to help you catch mistakes you might otherwise miss and that might cause unexpected problems for your users.
As far as ways to get alerted to specific device issues or browser-implementation issues, we don't have good automated checking tools for that, but a couple of things which are a huge help there are:
https://caniuse.com/ — detailed information about the level of support for particular web-runtime features in different browsers, in different versions of those browsers, and in releases of the browsers for mobile devices vs. desktop
https://wptdashboard.appspot.com/ — current test results across all major browser engines for dozens of web-runtime features/specs; if https://caniuse.com/ doesn’t have information about a particular feature, you can look through this dashboard and browse to the directory that has tests for that feature, and find whether a browser passes the tests for the feature
But as far as good automated tools we do actually have for checking other things, here are two:
https://validator.w3.org/i18n-checker/ — W3C Internationalization Checker
https://observatory.mozilla.org/ — for doing a security assessment of the content of your site
I recently faced a problem with the W3C HTML Checker mentioned above. I respect the huge amount of work done by the author of this validator, but it would not allow the tag <script type="text/vbscript" src="file.vbs"> in any way. It told me to change the type value to the empty string, a JavaScript MIME type, or module, which would make my page useless.
I know that VBScript is rarely used now, and it was just a test page, but let me share a less tricky alternative, as good as the first one for HTML error checking.
So, as the title suggests, I'm interested in and curious about how much HTML5 outlining helps boost SEO in general.
There are a lot of tutorials and explanations of what HTML5 will do for XYZ, and a lot of how-tos, but for some of the pluses that HTML5 currently brings (and will bring), there isn't a clear mention (that I know of) of how much better it makes, or will make, things.
I work in a fairly normal team, I guess, where we have dedicated graphic designers, coders, programmers, etc., and part of this team, of course, are the SEO guys.
Now, MANY websites show (under the document outline tool) that many designers/developers are using HTML5, but when you look at their document outlines, they're all over the place and even have unnamed sections.
My guess is that they're just using the plain HTML doctype, <!DOCTYPE html>, and rolling with it instead of typing the long/massive doctype and everything therein.
But that said, and as opposed to the regular "old" rules and debates about H1 tags, etc.:
Since many of HTML5's features are already in play, and most current browsers support them (at different levels), does having a semi-messed-up outline with unnamed sections hurt?
Overall, if I convert my page from old HTML to the new HTML5 standards with proper outlining, etc., will it improve my standing?
As a simple question, FOR EXAMPLE: if my current page ranking is, say, 5/100 on Google, will implementing a better, newer document outline with HTML5 bring me higher, say 3/100 or even 1/100?
You are under the mistaken impression that the formatting of the code, or the use of standards, has something to do with how search engines rank your page. This isn't actually the case.
What semantic tags/classes/ids, validated documents, and standards-compliant markup do accomplish is to make it a lot easier for search engines to interpret the page properly.
The content, its level, order, and importance (say, with the header tags) mark how relevant it is; a properly formed document only helps in recognizing the different sections, but has no inherent effect on how well it's ranked.
In the end, though, SEO is partly guessing how smart the search engine algorithms are; one might, for instance, assume that Google gives some consideration to grading an HTML4 page versus an HTML5 one. Since HTML5 is more recent, it's likely to get some leverage over HTML4 markup, HTML4 being outdated.
There is no boost from using HTML5. Semantic markup helps SEO but the version you use, or format (as in microformats) does not. It's the content that gets ranked, not your markup.
I'm not sure what outlining means. Do you intend to verify that your document doesn't have untitled sections, like in this article?
Which one is your question:
Does Google like HTML5 more than other versions of HTML?
Does Google like valid HTML5 more than other valid HTML that doesn't have a valid HTML5 structure (i.e., messy or untitled sections)?
This is just my gut feeling, but I'm guessing it doesn't matter what HTML you use. This article indicates that code validity is not a factor in page ranking. Here it says the doctype isn't even registered by Google. (Just a couple of links; I don't know how reliable they are.) It makes sense, though. The PageRank algorithm can't really (again, just my gut feeling) be so simple that it would put a lot of weight on the validity of the code or the HTML version used. Unique content, locality, links to other quality sites, etc. are far more important than a few missing title tags or the fact that the source code is written using the latest HTML specification. Where would it end? Should PHP pages rank better than "pure" HTML? Is JavaScript a boosting factor or not? Which JS library would be the most important one?
Sorry if I'm splitting hairs, but the technology used shouldn't matter; content should. (I don't really know whether only content should matter, though...)
I need to know why we use or rely on an HTML validator. I guess it doesn't meet our requirements for browser compatibility. Otherwise, what is the use of this HTML validator?
This document
attempts to answer the questions many people have regarding why they should bother with Validating their web sites and tries to dispel a few common myths.
Many browsers will 'cheat' the established standards and render erroneous HTML in a way that looks OK visually. An HTML validator can tell you whether or not your page actually conforms to the standard, giving you peace of mind into the future (of possibly stricter browsers).
HTML interpretation is sloppy as crap. There is a significant amount of garbage that HTML processors will interpret when they shouldn't. This means problems that should destroy an HTML document get processed anyway, resulting in malformed content or features that don't resemble their intended objectives. Some of these behaviors include unquoted attributes, which cause problems if attribute values collide with attribute names or contain spaces and so forth. Missing end tags can cause all sorts of problems. Honestly, a browser should reject processing of pages that are this broken, but they won't. Instead they will render your horrible HTML code with whatever level of deformation it allows.
Validating your code avoids these destructive syntax problems. Validation is necessary because if the browser is willing to process any level of garbage it can, then it is certainly not going to warn you of problems, and so there is no way to know the problems are there unless the code is validated.
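For example, here is a small sketch of the unquoted-attribute problem mentioned above:
<!-- The value contains a space, so the parser stops at "two" and treats
     "words" as a separate, meaningless attribute: -->
<input type="text" value=two words>
<!-- Quoted, the full value survives: -->
<input type="text" value="two words">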
(I'd like this to be the definitive community wiki. I seeded it from my own answer to this question.)
Specify Everything
A lot of cross-browser issues amount to this: you didn't specify something, and different browsers make different assumptions. Therefore:
Declare a valid doctype
Your doctype tells the browser what rules you'll be using in your code. If you don't specify, the browser has to guess, and different browsers will guess differently.
In my experience, a "strict" doctype makes IE behave better (enables things like CSS :hover selectors on divs in IE7).
This article gives good background on doctypes.
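For reference, the strict doctype recommended above looks like this, and the much shorter HTML5 form after it also keeps browsers out of quirks mode:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
    "http://www.w3.org/TR/html4/strict.dtd">
<!-- or, in HTML5, simply: -->
<!DOCTYPE html>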
Use Web standards
Avoid browser-specific markup, or only use it when its failure in other browsers won't be significant to the site experience.
Validate your HTML and CSS
You don't have to get everything perfect, but validation is good feedback. As Jeff said:
Knowing the rules and boundaries helps you define what you're doing, and gives you legitimate ammunition for agreeing or disagreeing. You can make an informed choice, instead of a random "I just do this and it works" one.
Imagine you opened a paragraph tag and never closed it. If you then open a list tag, did you mean it to be inside the paragraph or not? Validating will help you catch that, close the tag, and eliminate ambiguity.
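A small sketch of that ambiguity:
<p>Some introductory text
<ul>
  <li>First item</li>
</ul>
<!-- Closing the tag yourself makes the intent unambiguous: -->
<p>Some introductory text</p>
<ul>
  <li>First item</li>
</ul>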
Consider a CSS Reset
Different browsers assume different baseline CSS rules. You can help them all to act the same by explicitly ironing out the differences up front. Eric Meyer, who wrote CSS: The Definitive Guide, uses this reset. Another popular choice is YUI Reset CSS.
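To give the flavor, here is a minimal sketch of the idea only; the real resets linked above cover far more cases:
/* Strip the default margins, padding, and list styling that browsers
   apply inconsistently, so every browser starts from the same baseline. */
* {
    margin: 0;
    padding: 0;
}
ul, ol {
    list-style: none;
}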
Use a JavaScript library for DOM interactions
Whenever your JavaScript needs to work with elements on your page, it's best to use a library like jQuery, Prototype, or MooTools. These libraries are used by many thousands of developers; they take most of the inconsistencies between browsers' interpretations of JavaScript, deal with those internally, and give you a consistent set of commands that just work. Trying to find and work around all these inconsistencies yourself is a waste of time and likely to create bugs.
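For example, a small sketch using jQuery (the class names here are made up): the same selector and event code runs identically in every browser the library supports, with no attachEvent/addEventListener branching on your part:
// Toggle a highlight on menu items when clicked.
$(document).ready(function () {
    $('.menu-item').click(function () {
        $(this).toggleClass('active');
    });
});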
Test in multiple browsers, deal with IE last
Test in multiple browsers as you go. Generally, you'll find that non-IE browsers behave similarly and IE is a special case - especially if you follow the advice above. When necessary, you can add IE hacks in a separate stylesheet and only load it for IE users.
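For example (the file names here are illustrative), old IE honors conditional comments while every other browser skips them as ordinary comments:
<link rel="stylesheet" href="styles.css">
<!--[if lte IE 8]>
    <link rel="stylesheet" href="ie-fixes.css">
<![endif]-->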
Quirksmode.com is a good place for hunting down random browser differences.
Browsershots.org can help show how your page will be displayed in an assortment of browsers and operating systems.
Fail Gracefully
No site will look perfect in every browser that exists. If a user doesn't have Flash, or JavaScript, or advanced CSS, etc., you want your site to be usable anyway. Design with that in mind:
Check the bare HTML
Try loading your site with bare HTML - no styles, no scripts. Are menu options available? Does primary content precede secondary content? Is the site usable, even if ugly?
Consider test-driven progressive enhancement
Described in this article, this technique uses JavaScript to check whether a browser has a given capability, such as support for a given CSS property, before using it on the page. It is unlike browser sniffing because it tests for features rather than for a specific browser.
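A small sketch of the idea, with an illustrative test: probe for the capability itself, never the browser name:
// Browsers that support the CSS opacity property expose it as a string
// ("" by default) on element style objects; where unsupported, it is undefined.
if (typeof document.body.style.opacity === 'string') {
    // Safe to use opacity-based effects.
} else {
    // Fall back to something simpler.
}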
Use a library like jQuery to abstract away the differences in the DOM, AJAX, and JavaScript.
Make sure you're keeping HTML, CSS, and JavaScript in separate files as much as possible. Mixing structure, presentation, and behavior in your HTML file just makes finding and fixing problems harder.
Use Firebug in Firefox for:
Debugging/stepping through your JS.
Seeing how your stylesheets are being interpreted and hacking them up on the fly to see how to fix your problem.
Seeing how many calls you are making for remote resources and how long they take.
Profiling your code.
Chrome and IE8 have similar tools built-in that can be used for the same thing.
Opera and Safari (and IE) have Firebug Lite.
Use a CSS reset at the start of your stylesheet...
You can get one here...
Validate your code with the W3C validator...
You can validate your code there by page link, or simply copy and paste the page markup
My #1 rule is to use a strict doctype. HTML or XHTML is fine, but using the strict doctype removes pretty much every browser quirk there is, especially in IE7+.
Imagine you opened a paragraph tag and never closed it. If you then open a list tag, did you mean it to be inside the paragraph or not?
Actually you can't put any other block tags inside a <p> tag, that's why the spec allows you to omit the closing tag. If you start a list without closing a paragraph, then the paragraph is implicitly closed. And the validator won't complain.
That's not to say you shouldn't close tags, because it generally makes code easier to skim (you don't need to remember the above rules).
Consider programming your website's UI using Google Web Toolkit (GWT). With GWT you write all code in the Java programming language, which GWT then cross-compiles into optimized JavaScript that automatically works across all major browsers.
I think using best practices is the way to go; progressive enhancement means designing with the user in mind, and all designers should do it. I believe that a lot of testing in browsers is a good way to ensure the proper content is being displayed; many developers overlook this.