I have been assigned the task of validating a website.
I searched the web and found a page on w3schools that offers validation of a site's HTML, CSS, XHTML, XML, and WML.
My question is, is that all the validation we need to do?
We want our website to be cross-browser and to work well on mobile phones.
Are there any better methods to validate our site?
Thank You.
First of all, w3schools is a terrible website; you'd be better off not using it for anything. For a markup validator, try this or this. Keep in mind that validation only ensures that your markup is syntactically, and to a certain extent, semantically correct, but does not ensure that it'll render or function how you intend across different browsers, for example. For that, you'll have to do testing yourself.
If all you've been told is that you need to "validate" the web site, then you've been given a really bad instruction, because it's nowhere near as complete as it needs to be.
Are you supposed to validate the HTML? Everything can be perfect in the HTML and CSS, and the site may still not work the same way in different browsers. A difference in interpretation of HTML/CSS is not necessarily an indication that something is "invalid".
Get better (or more precise) instructions if you can. Otherwise, make them up, deliver your result, and if it's not what was expected, charge normal consulting rates to do things again once you DO have more precise instructions.
If it's just HTML/CSS validation you're after, the validator at w3.org is the one you should be using. There are also validator add-ons for Firefox and Chrome.
If your goal is equivalent rendering in multiple browsers, the only solution is to test your site in the browsers that are important to you. Remember that different versions of a browser also count as different browsers.
Having W3C-valid structure/markup doesn't mean your website will look the same in every browser. There are a lot of tools and plugins that emulate different resolutions or mobile devices, but they often don't do a good job.
The only real way to find out is to ask people to test it on their devices.
I need to render some HTML content (created by the application) and I'm wondering whether I should use QTextBrowser or QWebView. Although they seem quite similar, the doc doesn't discuss the differences between them.
I guess QWebView is almost a full-featured browser, but how about QTextBrowser? Does it also use webkit? Am I likely to run into some limitations if I use it?
QTextBrowser supports only a subset of HTML and CSS; see the documentation here.
It has the advantage that it is lightweight; QWebView uses a lot more code and resources.
Some more information: the answer is accurate, but I feel compelled to complete it with some more info (I'm the OP). As a warning, read through the provided link to see which tags and styles actually work. em sizes don't seem to work at all, so set all your sizes in pixels; hr styling is extremely limited; and border-bottom styling is not available (which could have been a good alternative to hr). So don't do your design in Firefox and expect it to work in Qt. It most likely won't. Check the reference when things don't work as you expect and tweak as you go. Eventually, it's probably possible to do many designs with QTextBrowser, but it's better to check as you go.
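For reference, here is a minimal sketch of the kind of markup that stays within QTextBrowser's supported subset (pixel sizes instead of em units, a plain hr, a simple table); the content and values are purely illustrative:

    <h2 style="font-size: 18px;">Report</h2>
    <p style="font-size: 12px; color: #444444;">
      Sizes are given in pixels because em units are not honoured here.
    </p>
    <!-- hr styling is very limited, so keep it plain -->
    <hr>
    <table border="1" cellpadding="4">
      <tr><td>Item</td><td>Value</td></tr>
    </table>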
Recently, I have been battling with weird table borders/margins, div alignments, and positioning problems, and am having a downright nightmare supporting Internet Explorer 6. I know a lot of you, like me, are forced to support IE6-IE8, WebKit, and Mozilla-based browsers.
My questions to you are:
What are the important rules you follow beforehand, when developing across multiple browsers, to save you time?
How do you prevent yourself from writing incompatible tags?
What is the best way to avoid hacking your code?
Where do you find research on browser compatibility, do you use any tools?
Finally, when do you cross the line/where do you draw it?
I usually code against Firefox (or Safari) first. That usually produces the best results across browsers other than IE. I then hit IE8, IE7, then finally IE6.
Know what tags are going to cause you trouble and avoid using them at all costs. It's all about how familiar you are with each browser's issues.
Don't use hacks. Use IE conditional comments. By using conditional comments, you can load one stylesheet for all other browsers, one for IE8, one for IE7, and yet another for IE6 (if you need that kind of granularity to fix your issues). It will give you nice clean stylesheets with as little hack-i-ness as possible.
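For illustration, a sketch of that conditional-comment pattern (the stylesheet filenames are placeholders, not anything prescribed):

    <link rel="stylesheet" type="text/css" href="base.css">
    <!--[if IE 8]>
      <link rel="stylesheet" type="text/css" href="ie8.css">
    <![endif]-->
    <!--[if IE 7]>
      <link rel="stylesheet" type="text/css" href="ie7.css">
    <![endif]-->
    <!--[if lte IE 6]>
      <link rel="stylesheet" type="text/css" href="ie6.css">
    <![endif]-->

Every browser loads base.css; only the matching IE version also pulls in its override sheet, so the workarounds stay quarantined in the IE-specific files.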
LitmusApp
There really aren't lines to cross. If you need compatibility, you need compatibility. You just whittle down your issues as best you can, one at a time, until you have something usable.
I would say to start with standards-compliant code. Always test in a standards-compliant browser first like Firefox, or Safari/Chrome. I prefer Firefox for the addons (such as Firebug, HTTPFox and the Web Development Bar). Then work your way DOWN (and by down, I mean all versions of Internet Explorer).
Try to stay away from temporary fixes per situation or site and generalize your code as much as possible. For example, as Justin Neesner said in his answer, using conditional comments and a general stylesheet for IE6, 7, and 8 will knock out most of your problems with layout and formatting, without using (too many) hacks. You can reuse the IE stylesheets and just place the site-specific code in them.
Use a testing platform like Browsershots, NetRenderer or LitmusApp so you can see what your site is doing in as many of the browser versions out there as possible. Studying browser compatibility deeply will make you pull your hair out, but a great resource like quirksmode.org can give you information on the little gnats of incompatibility, so you don't go crazy, and bald.
As far as when I cross/draw the line: it's 99% Internet Explorer issues, and if it's close enough to looking like FF or Chrome/Safari, I'm done. Almost like art, it's not when you're finished adding, it's when you're done removing the crap you don't want to see; that's when you know it's done.
I say there is not that much you can do except sticking to the basics:
Code standards compliant HTML
Validate early, validate often
For Javascript, use a framework like JQuery, Prototype or Dojo
Pick one "main browser" you optimize for first.
In every project there will be a number of issues, but if you stick to these points, not too many.
I find it extremely helpful to build 100% W3C valid code. Not because it matters - much of what the W3C validator complains about will not make any difference in real world browsers - but because being able to run a validation, and getting a green light and knowing that all is well on that end is very helpful.
To test multiple IE versions at once, you can use IETester. It's not perfect (conditional comments won't work in it, for example), but it's mostly usable in everyday development work.
Use a doctype like HTML 4.01 Transitional, which makes IE6 render in standards mode. You can also use a reset stylesheet.
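A minimal sketch of what that looks like; the reset rule here is deliberately tiny and just an assumption, since real reset stylesheets go much further:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
      "http://www.w3.org/TR/html4/loose.dtd">
    <!-- Including the system identifier (the URL) is what keeps IE6
         out of quirks mode and in (almost) standards mode. -->
    <html>
    <head>
      <title>Example</title>
      <style type="text/css">
        /* tiny reset: level the default margins and padding across browsers */
        * { margin: 0; padding: 0; }
      </style>
    </head>
    <body>
      <p>Content</p>
    </body>
    </html>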
Use strictly compliant HTML and CSS markup and do not use browser proprietary extensions to CSS.
IE 6 does not implement CSS correctly, beginning with its absolute ignorance of the box model.
IE 8, on the other hand, has a test suite to prove it implements every aspect of CSS 2.1 correctly (which no other browser does).
Develop for Firefox as stated above and make a decision about supporting IE 6 & 7 or not. Frankly, I no longer develop separate style sheets for those browsers. They don't have enough market share (at least on my site.)
Given the interest in HTML 5 and the lack of interest in XHTML 2, develop HTML 4.01 Strict and follow these practices (recommended in HTML 4, and required in HTML 5 and XHTML 1.1); a short example follows the list:
all element and attribute names must appear in lower case,
all attribute values must be quoted,
non-empty elements require a closing tag,
no attribute minimization is allowed,
in Strict mode, all inline elements must be contained in a block element.
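A short sketch of those practices in an HTML 4.01 Strict document (the content is obviously made up):

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
      "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
      <title>Strict example</title>
    </head>
    <body>
      <!-- lower-case names, quoted attribute values, explicit closing tags -->
      <p class="intro">Inline elements such as <em>em</em> sit inside a block element.</p>
      <!-- no attribute minimization: write selected="selected", not selected -->
      <form action="/search" method="get">
        <p>
          <select name="lang">
            <option value="en" selected="selected">English</option>
          </select>
          <input type="submit" value="Search">
        </p>
      </form>
    </body>
    </html>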
Why to learn HTML 4.01 Strict with references.
Also see the CSS 2.1 Test Suite
So I have been running around for a while now; 26 days later, I think I am ready to give some feedback on my findings.
Coding:
First of all, I found that column systems seem to cut down the amount of HTML and CSS written. They are also very cross-browser friendly. Although there are many of them, I found that the 960 Grid System works the best for me: http://960.gs
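To give an idea, a rough sketch using the standard 960.gs class names (a 12-column container split into a 4-column sidebar and an 8-column content area); check the 960.gs documentation for the exact markup it recommends:

    <!-- 960.gs: 12-column container, 4-column sidebar, 8-column content -->
    <div class="container_12">
      <div class="grid_4">Sidebar</div>
      <div class="grid_8">Main content</div>
      <div class="clear"></div>
    </div>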
Next up, I found that Ruby has a cool extension for CSS called LESS. This has been ported over to .NET and can be found at http://www.dotlesscss.com. It is easy to set up, and is very powerful when used right.
JavaScript can be tricky, but I found that avoiding certain selectors and shortcuts in libraries like jQuery will not only speed up the performance of your application, but also result in less awkward behavior.
Testing the browsers:
Here comes the interesting part. Without any software, I found that testing my applications against Internet Explorer 6 and Safari at the same time actually helped increase the speed of my development. Firebug and Firefox actually cause me to develop around the problems, and this is what produces a lot of extra code. When developing simple code against IE6 and Safari, I found that going back to Firefox and IE8 is incredibly easier. Most problems I have had were with border widths, which were easy fixes.
The best software solution I found that was accessible to me was Expression Web with SuperPreview. If you ever wonder why Microsoft releases a bunch of versions of IE, it's probably so they can sell you tools like these.
Anyway, that's my 2 cents for right now.
I have a PSD design file for a fixed-width site which I'm converting to accessible, cross-browser XHTML markup. It is not my design and I cannot write server-side code (PHP, ASP.NET, etc.). My job is to write only XHTML, CSS and JavaScript/jQuery if needed.
What questions should I ask my client before starting the work?
Edit:
The client wants the template to be compatible with all A-grade browsers (http://developer.yahoo.com/yui/articles/gbs/),
and the code should work with screen readers and be WCAG 2.0 compliant.
He also wants that, if I use JavaScript, the site should at least remain functional and accessible without it.
Behaviors such as :hover, :focus, animation, etc. are important to pin down with PSDs, because they're not obvious from a static file.
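Once you have pinned these down with the client, a few CSS rules are usually enough to record the agreement; the selectors and colours below are purely hypothetical:

    <style type="text/css">
      /* hypothetical states agreed with the client, since the PSD can't show them */
      a:link, a:visited { color: #0066cc; text-decoration: none; }
      a:hover, a:focus  { color: #004499; text-decoration: underline; }
      .nav li:hover ul  { display: block; } /* flyout submenu opens on hover */
    </style>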
Are there any concessions the client will be willing to make for certain browsers in order to fulfill for example WCAG requirements? Your work needs to comply, but does the designer's work also comply?
Contingency design: if there are forms, and/or if something goes wrong, what should happen? How should that look? Is there a design for it?
Is it clear what things will be handled server-side and what the client wants handled client-side?
Be careful with charging more for IE6. If you're worth your salt with CSS and use a good reset, it shouldn't really be much of an issue as far as layout/bugs go.
DO discuss what subtle differences are acceptable for IE6 and some other browsers: normal corners instead of rounded corners? Transparency? etc. Some of these things WILL take more time to make work for IE6, and may have performance implications.
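One way to make such a concession explicit is progressive enhancement in the stylesheet itself, something like the hypothetical rule below: capable browsers draw rounded corners, while IE6-8 simply ignore the property and render square ones.

    <style type="text/css">
      .panel {
        background: #f2f2f2;
        border: 1px solid #cccccc;
        border-radius: 8px; /* ignored by IE6-8, which fall back to square corners */
      }
    </style>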
Put it on paper, have them sign it, and have them be explicit.
Agree to this: if it's not EXPLICITLY requested/mentioned in the contract, it's not part of your work, because you haven't had an opportunity to estimate costs for it.
Hope this helps.
What browsers does he want it to work in? Double the price if everything has to work in IE6.
Also, do you know what has to happen when certain parts of content are overflowing?
Ask him everything that couldn't fit into a PSD file: animations, client-side data validation, what to do on events, what to do on mouseovers, what to do on overflow, how much speed is an issue (this will affect asset compression, file formats, etc.), and whether he wants the code commented.
A wireframe should provide all the static information: colors, layout, proportions, etc. What you can't tell from a wireframe is the dynamic stuff: link hover styles, menu flyouts, animations, etc.
I was looking at www.google.com in Firebug and noticed something odd: the Google logo is centered using a center tag.
So I went and checked the page with the W3C validator and it found 48 errors. Now, I know there are times when you can't make a page valid, especially when we're talking about something like www.google.com and you want it to be as small as possible, but can someone please explain why they use the center tag?
I attended a panel at SXSW a few years ago called "F*ck Standards" which was all about breaking from standards when it makes sense. There was a Google engineer on the panel who talked about the Google home page failing validation, using deprecated tags, etc. He said it was all about performance. He specifically mentioned layout rendering with tables beating divs and CSS in this case. As long as the page worked for their users, they favored performance over standards.
This is a very simple page with high traffic, so it makes sense. I imagine that if you're building a complex app, this approach might not scale well.
From the horse's mouth.
Because it's just the easiest, most concise way to get the job done. <center> is deprecated, for sure, but as long as it's still supported, you're likely to still see them using it.
Shorter than margin: 0 auto. Quicker to parse. It is valid HTML 4. No external dependencies, so fewer HTTP requests.
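For comparison, roughly what the two approaches look like; the wrapper markup and the 300px width are illustrative, not Google's actual source:

    <!-- deprecated, but short and universally supported -->
    <center><img src="logo.gif" alt="Logo"></center>

    <!-- CSS equivalents: center inline content, or center a sized block -->
    <div style="text-align: center;"><img src="logo.gif" alt="Logo"></div>
    <div style="width: 300px; margin: 0 auto;"><img src="logo.gif" alt="Logo"></div>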
Usability is NOT validity.
Google Search's biggest achievement has been to build a site which is easy to use, and can be widely used. Now, if Google achieved this with a page which does not validate, well, there's a lesson there to learn.
I think a better question to ask would be "why would Google make it validate if it works fine?" It makes no difference to the user.
There has been speculation and discussion about whether this is intentional; the basic test carried out in the first link does result in a smaller page, and even gzipped, across millions of page views the savings theoretically stack up. I doubt that's the reason, though: it was created, tested on many browsers at the time, it worked, and it continues to work.
Google breaks validation in many ways on their home page. The very likely real reason: they are all about speed and bandwidth costs. Look at the size of the home-page HTML, particularly after gzip is applied at the packet level. They are clearly trying to avoid packet fragmentation (which would mean more bandwidth) and are willing to do whatever it takes to get that (identifier shortening, quote removal, deprecated tags, whitespace removal, etc.).
If you look at this purely as a validity question, fine, but they break the rules on purpose; if you don't assume that, you may of course jump to a negative conclusion. BTW, you could optimize their pages further, in both positive and negative ways, but once the page fits inside the typical packet size it is somewhat pointless.
They also use other deprecated presentational tags like font and u. My guess is it makes the page quicker to load than using an external stylesheet and allows it to work on more platforms.
It's deprecated, sure, but I think simplicity is the answer to your question.
I've been a web developer for quite some time, and what has helped me learn is being able to visually see what is going on.
That's the reason for tools like Aardvark, Web Developer, Firebug and many others.
But when I saw the Gecko Reflow Videos, they just blew my mind.
Then my question is, is it possible to truly debug html (step through each element)? Or come close to it?
What I've been doing a lot is using Aardvark to remove elements, but Aardvark has its issues with "background" and same-size elements and not being able to target those.
UPDATE: I've been trying to write a good update for this question since it has left me thinking about it more. But since English isn't my primary language, it's been tough.
In the past years it's been the browsers that have had the task of being compatible with the standards. As they get closer to that goal, it is us who should be thinking about what we can truly create when browser-compatibility issues are minimal, and whether there are techniques we can use that make rendering a page faster.
We can think of the past decades as the early years of HTML/CSS, when the main goal was just to get the thing to work. Now we should be looking for techniques that speed up the current process. An example of this is in the video above, where the Gecko engine runs through the code twice. Why is that? And are there other instances where it's doing unnecessary things (even though they work and are compatible)?
This is something that clearly needs to be tested to be confirmed, hence my original question of a true debugger.
My $0.02:
"True" HTML debugging, in the sense you're talking about, is not technically possible, because there is no requirement of HTML user agents (web browsers) to render HTML elements in a particular order, nor is there anything like an atomic unit of execution like a "statement".
For instance, when rendering a table, should a user agent reserve space for each <tr> before rendering its child <td>s (breadth-first)? Or should it render each child <td> and each <td>'s children and so forth (depth-first)? In practice, user agents make all kinds of guesses to try to render pages as quickly as possible. In other words, there would be no guarantee that debug order will match actual render order, nor should there be.
HTML can be thought of as a declarative language in this sense, in that it specifies what should be done (the page rendered to spec) but not exactly how to do it (exactly which order to render elements to the screen). In general, it's best to assume that everything happens at once, although the W3C does give some tips on speeding up <table> rendering based on how user agents should render <table> elements.
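The main tip along those lines is fixed table layout: declare the table and column widths up front so the user agent can paint rows as they arrive instead of waiting for the whole table. A rough sketch (the widths are arbitrary):

    <table style="table-layout: fixed; width: 600px;">
      <col width="150">
      <col width="450">
      <!-- with fixed layout, rows can be rendered incrementally as they arrive -->
      <tr><td>Label</td><td>Value</td></tr>
    </table>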
IMO, the webdev toolbar and Firebug are the best we've got, where we can edit/disable specific HTML elements and CSS rules.
OK, serious answer.
Judging by the comments on the sites that I've followed from that link, I think that you and I know that there probably isn't. There are a lot of smart blokes and blokettes on those threads, and they all seem to point towards "no, this is all clever $4!# that won't help us understand rendering".
However, I think that what your question might want to emphasize is that rendering at the browser level is very interesting.
Let me just throw this one out there. Do you think that putting body { overflow: scroll; } as a default might speed us up just a little???
In my professional opinion, there's really only one effective tool for time-factoring / assessing / debugging within the html milieu: The WebDev Iterator
Personally, I feel that as long as your HTML validates to the W3C spec, isn't that all that matters? One should develop their HTML to spec and let browser companies worry about their own bugs (which are pretty rare these days), rather than focus on old browser mistakes of the past.
HTML Validator plugin for Firefox (aka Tidy) is all any web developer needs to see if their markup is correct, what's wrong, and where it's wrong.
Even if you could do true debugging, each browser parses HTML its own way, so even if you could step through Firefox to see how a rendering bug occurs, that won't help you with IE or Safari/Chrome at all because they do their parsing in their own manner. This isn't like PHP, .NET or Java, where the parsing of the code is the same for everybody; debugging makes sense there.
Then my question is, is it possible to truly debug html (step through each element)? Or come close to it?
You could probably step through the page rendering process by running Firefox under gdb, or modify an open-source browser to have a "step" button, but I really doubt this will achieve anything useful.
CSS isn't that complicated; everything is basically a box, with a width/height/padding/margin. The problem with web development (CSS particularly) is that every browser implements rendering slightly differently (some more differently than others).
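As a concrete illustration of that box arithmetic (the numbers are arbitrary): under the standard model the rendered width is content plus padding plus border, which is exactly the part IE6 historically got wrong in quirks mode.

    <style type="text/css">
      .box {
        width: 200px;      /* content width */
        padding: 10px;     /* + 10px on each side */
        border: 2px solid; /* + 2px on each side */
        margin: 15px;      /* outside the border; affects spacing, not the box */
      }
      /* standard model: 200 + 2*10 + 2*2 = 224px of rendered box width;
         IE6 in quirks mode squeezes the padding and border inside the 200px */
    </style>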
If you want to know the render order in order to speed your page load up, I'd say you're going about this the wrong way. The browser rendering the page probably accounts for maybe 5% of the load time; the rest is page-generation time and network latency.
You could possibly shave 2ms off your page load by reordering some tags and using a different CSS positioning method, or you could reduce the page-generation time by 200ms with caching, and halve the network latency by setting up a second web server nearer your users. Compressing your logo better, or minifying your JavaScript, would most likely improve load time (universally, across all browsers!).
Basically, if you're concerned about load time, there are much better places to start. If you're concerned about how the page is being rendered, Firebug(-Lite) and http://browsershots.org (or a virtual machine or two) are all you need!