Browser bugs when building a prototype - HTML

I am learning how to program and my goal is to build a simple functional prototype...I'm at the very beginning.
I am not concerned with the visual design at this stage, other than as it relates to being able to demonstrate the functionality.
My question is: do I need to worry about ironing out cross-browser bugs in the HTML/CSS, or can I do development in a single browser? (Perhaps a better way of asking this is: does the back-end programming have any effect on how different browsers display the result?)

If you are at the very beginning and only want a functional prototype, do not worry about cross-browser HTML/CSS. In fact, forget the CSS altogether and focus on outputting just standard HTML. Since the visual design will change, focus on the content; styles can always be applied and switched later.
If you need JavaScript/AJAX functionality, I would recommend using a library like jQuery that has already solved many cross-browser problems for you (see the sketch below).
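To make that concrete, here is a minimal, hypothetical sketch of the kind of difference such a library hides; the URL and element ID below are made up for illustration.

    // Without a library, you would handle old-IE differences yourself,
    // e.g. the classic XMLHttpRequest vs. ActiveXObject split:
    function createXhr() {
      if (window.XMLHttpRequest) {
        return new XMLHttpRequest();                  // standards-based browsers
      }
      return new ActiveXObject('Microsoft.XMLHTTP');  // old IE fallback
    }

    // With jQuery, the same request is one call, and the library picks the
    // right transport for whichever browser is running it:
    $.get('/prototype/data', function (data) {  // '/prototype/data' is a made-up URL
      $('#result').text(data);                  // '#result' is a hypothetical element
    });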
The back-end stuff (Perl, PHP, Python, etc.) shouldn't care about the browser, as it is simply emitting text for the browser to render as it will.

The back-end programming will not affect the way a given browser displays your page, and there might well be two schools of thought on whether you should be picky about browser compatibility issues.
On the one hand, if you're just finding your feet in web development it might be asking too much to expect to have a standards-perfect, cross-browser site or application every time. It might be better to focus on actually accomplishing a finished result and learning as much syntax and technique as possible.
On the other hand, it might be argued that it's a good idea to get into the habit of adopting good practices now and recognising the sorts of things that are going to give you headaches... probably when you view your page in Internet Explorer. This takes more time to reach a finished product, but it would teach you good habits up-front.
Really it comes down to your own approach and preferences. Do you want to be detail-oriented and turn out a polished result in a longer period of time, or would you prefer to just get to the finish line and identify issues on a case-by-case basis?

Do car prototypes have a working stereo, leather upholstery, chrome rims, dice, and other random stuff which does not demonstrate the functionality of the newly-designed car?
My rule of thumb is that if it takes you more than 10 minutes to make it look acceptable to others (I'm completely fine with a disgusting design when prototyping), you're spending too much time on the aesthetics and not enough on the actual clockwork.
What good does a "pretty-looking" site do if it has no functional layout?

This depends on both your audience and on your tooling. If you are trying to support all users on all browsers, then you will certainly need to do testing on those browsers (although actively developing on those browsers may not be necessary), whereas if you only need to support WebKit-based browsers (Chrome, Safari) or WebKit-based browsers and Firefox, that is less testing that you need to do.
It also depends on your tooling. For example, if you are writing directly in HTML and CSS, then you are much more likely to run into browser compatibility issues. However, if you use a tool such as GWT, which can generate browser-specific output automatically, there are fewer such issues to deal with.
Note that you can use Selenium (aka WebDriver), to automatically test your code on multiple different browsers, even if you only actively develop within a single browser environment. That way, you can know if you've broken something, but not have to constantly manually test in multiple browsers.
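As a rough illustration of that workflow, here is a minimal sketch using the selenium-webdriver package for Node.js; the URL and the element it waits for are placeholders for whatever your prototype actually serves, and it assumes the browsers and their drivers are installed locally.

    // Minimal multi-browser smoke test (Node.js + selenium-webdriver).
    const { Builder, By, until } = require('selenium-webdriver');

    async function smokeTest(browserName) {
      const driver = await new Builder().forBrowser(browserName).build();
      try {
        await driver.get('http://localhost:3000/');                    // placeholder URL
        await driver.wait(until.elementLocated(By.css('#app')), 5000); // hypothetical element
        console.log(browserName + ': OK');
      } finally {
        await driver.quit();
      }
    }

    (async () => {
      for (const name of ['firefox', 'chrome']) {
        await smokeTest(name);
      }
    })();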

Related

Accessibility-wise, what's the oldest browser currently in use? What's the oldest browser I can realistically program for?

Which syntax and features should I use if I wanted to write a webpage that could be correctly parsed by any device realistically still in use?
By "currently in use" I also mean devices whose usage is not statistically significant enough to show up in browser usage statistics, but which could still plausibly be used in the real world without being too much of a stretch: e.g. a screen reader, a point-of-sale/kiosk device, or an Internet appliance like a smart TV or a games console.
I've recently been assigned to develop an HTML landing page requiring support for - hear me out - the ancient Microsoft Internet Explorer 6. Yes, you read that right.
Long story short, it was in the requirements for a public administration project that was in legal limbo for a long time, and reworking these requirements would be more problematic than just developing it and being done with it. At least according to my boss. Besides, it shouldn't be that much different from writing a current-day HTML newsletter.
But it got me thinking - what's the oldest browser (as in, the one with the oldest HTML standard support available) that I could realistically find in the wild?
I'm not talking about relevance or caniuse-level-of-support numbers, just something that, for some more-or-less realistic reason (excluding "here's a Windows 3.1 VM getting to the internet" retro challenges), people would still be using to get online. Also, some security requirements (mostly updated SSL/TLS) cut off quite a few older devices, mostly games consoles like the Dreamcast, the PS2 with the Network Access Disc, the PSP, or the original Nintendo DS, so those are also excluded.
Other than the aforementioned IE 6.0, I can think of the Opera Mini browser (which version?) for some legacy mobile phones, or maybe some kind of internet appliance still using the NetFront runtime (IIRC the PlayStation 3 still uses a pre-WebKit version of it).
Any suggestions?
Foreword - apologies, this became a bit of a rant; I will try to tidy it up later and focus it more. I also understand you may have to listen to your boss, so this has become me ranting on your behalf to your boss. I hope you can use some of it to construct a strong argument with your boss to get the spec changed. It also doesn't answer the question as it is titled. Probably worth a few downvotes, but I enjoyed having a rant!
Given the question the OP asked in the comments, this answer addresses that:
What are the oldest browser standards I need to support for accessibility?
The WebAIM annual survey is a great place to get this information.
Under the section 'Browsers' we can see that 2.1% of respondents still use IE 6, 7 or 8.
Unfortunately they do not indicate the split for IE6 specifically, but I would imagine it is less than 0.1% overall, as very few sites still work correctly in IE6.
I personally develop to IE9 standards, so I use ES5 for JavaScript (or compile to ES5), HTML 5.0 features, and non-experimental CSS3 features.
The reason for this is that those who do use IE8 and below tend to be screen reader users due to compatibility with the software they own. They also tend to use a very specific set of websites due to the fact that most websites just will not work for them.
Many people would argue that you must include everyone, and in an ideal world you should. However, you also have to consider that your goal in accessibility is to provide the best experience possible for the widest number of people. This is not possible using IE8 and below, as you lose a lot of the HTML5 features that are useful for accessibility (particularly form-related attributes and navigating by regions such as <main>, <nav>, etc.).
So what about the people stuck on IE6-8 then?
As stated, they tend to be screen reader users. The beauty of this is that as long as you develop valid HTML, use correct WAI-ARIA attributes, etc. (basically aim for a WCAG 2.1 AAA rating or as close as possible, and at least an AA rating), then in IE6-8 the site will still function reasonably well and be usable, even if not perfect.
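As a rough illustration of the kind of markup being described (the region names, labels and links here are invented), HTML5 landmark elements paired with the equivalent ARIA roles give older browser/screen-reader combinations something to hang on to even though they predate the HTML5 elements:

    <!-- Landmarks doubled up with ARIA roles so older assistive technology
         that does not recognise the HTML5 elements still exposes the regions. -->
    <header role="banner">Site name</header>
    <nav role="navigation" aria-label="Main">
      <a href="/">Home</a>
      <a href="/contact">Contact</a>
    </nav>
    <main role="main">
      <h1>Page title</h1>
      <label for="email">Email address</label>
      <input id="email" type="email" required aria-required="true">
    </main>
    <footer role="contentinfo">Footer text</footer>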
CSS is where your main struggle will be; making a site work in IE6 is just horrible if you still want to follow best practices (try styling a button, for example, so that the font size is large enough and the colour contrast is AAA-rated). Also, how do you then make it responsive for mobile users without hundreds of hacks?
So what if you have to support older browsers?
You can provide a separate version of the site (shock, horror, gasp).
Use user-agent sniffing to redirect people to a separate, stripped-back, text-only version of the site with minimal styling. Be sure to include a link to the main site and indicate that this is a version of the site designed for older browsers.
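Purely to illustrate the redirect idea (the answer goes on to explain why it is rarely worth it), here is one client-side variant that avoids server-side sniffing altogether: an IE conditional comment, which only IE 9 and below even parse. The /basic/ path is a made-up location for the stripped-back version.

    <!-- In the <head> of the main site. Only IE 6-8 act on this; every other
         browser treats it as an ordinary HTML comment. "/basic/" is a
         hypothetical path to the text-only version of the site. -->
    <!--[if lte IE 8]>
      <meta http-equiv="refresh" content="0; url=/basic/">
    <![endif]-->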
This requires careful planning as you have to ensure that the same information is maintained on both sites and that features introduced are replicated across both sites.
This is why nobody does this, and it isn't something I would recommend at all. The number of things to consider, the limitations on the features you can introduce, and the chance of updating one version and not the other (even with a Content Management System) make the risk very high. Making just one of those mistakes is likely to cause you more problems than not implementing this at all.
Conclusion
HTML5 introduces many great features for accessibility.
Regions for navigation are one of the biggest wins.
By designing to older standards you will likely end up with a site that is less accessible.
For example, a lot of WAI-ARIA attributes don't work reliably even in modern browsers, never mind in older browsers, so trying to fall back to WAI-ARIA attributes is not going to work.
What about an accessible and easy-to-use layout? IE6 was the era of table-based layouts, because CSS positioning was so limited. (Want a sticky navigation bar? Good luck implementing that!)
To provide a 'best effort', I would use an HTML5 shim to at least fix the layout for things like IE7 and IE8; IE6 just isn't feasible.
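For reference, the usual shim pattern is a conditional-comment include plus a display rule for the new elements; the script path below is a hypothetical local copy of the html5shiv script.

    <!-- In the <head>, before any CSS that styles the HTML5 elements.
         Only IE < 9 runs this; the script teaches old IE's parser to create
         (and therefore style) elements such as <main> and <nav>. -->
    <!--[if lt IE 9]>
      <script src="js/html5shiv.min.js"></script>
    <![endif]-->
    <style>
      /* Older browsers treat unknown elements as inline, so make the new
         sectioning elements block-level explicitly. */
      main, nav, header, footer, section, article { display: block; }
    </style>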
Another argument is security. How on earth are you meant to write secure JavaScript that is IE6-compatible? Heck, try finding a single piece of information still alive on the internet today on "how do I fix this problem in IE6". Perhaps your solution here is to just not use JavaScript at all. (And yes, someone will argue progressive enhancement allows you to use JavaScript, but then you have JavaScript errors, which would mean you fail WCAG under 'Robust'.)
Oh and did you know that you can't even use <style>*{position:relative}</style><table><input></table> as that will crash the browser?
As a final thought on this rant part, even an old Windows XP machine will run NVDA and a free browser (such as Firefox) that is far more capable than IE8. So we cannot make the argument that people may not have the financial resources to upgrade, as NVDA is free. We could argue they do not have the technical knowledge on how to upgrade, at which point all I could say is write an article on the site on how to upgrade and redirect people there, maybe? It gets a bit silly at that point as to where you draw the line.
How to win the argument with your boss / help them get the spec changed
Your boss is in a bad position but probably doesn't have the 'firepower' they need to persuade anyone to change the spec easily.
Here is a simple way for them to win the argument:
WCAG 2.1 AA is the minimum legal standard that we can build this site to without risking getting sued.
This is impossible while maintaining compatibility with IE6 and without compromising on accessibility features.
The cost of getting accessibility wrong, coupled with the additional development cost associated with trying to develop to such an old standard makes this infeasible.
Couple that with the likelihood of developing an application with massive security flaws that could make the site vulnerable, and I would highly recommend we change the minimum requirement to IE8.
tl;dr
The oldest browser you realistically need to worry about is Internet Explorer 9. Even then you are being generous with your compatibility requirements. Do not try to develop a site that works in every browser ever encountered since 2001 (yes, IE6 was 2001); you will break things for more people while trying to accommodate people who do not want to upgrade (even an old Windows XP machine will run NVDA and a free browser far more capable than IE8).
Do not attempt to support IE8 and below as you will actually end up making decisions that will negatively impact accessibility.
If you can create an HTML5-valid and CSS3-valid website that is WCAG 2.1 AA standards-compliant, then you are already more accessible than 99% of websites in the world.
You're not asking the right question.
Users of what I build may be different than the users of what you build. To one audience, it may not matter if the site only loads on current browsers. To another audience, it may be the only thing that matters.
Additionally, some sites outright require newer browser features to function... no web-based fallbacks are available. Also, there are current browsers that don't support much of anything (e.g. Safari, Lynx, etc.) so it's not necessarily a matter of age... but capability.
The question you should be asking is, how do you make the decision of what to support and what not to support. Short of pointing you at your analytics and logs, this isn't a technical question, but a business question.

Who are the primary users of Internet Explorer <= 8?

tl;dr: I'm making a website for biological researchers. Are these guys apt to use IE8?
I'm developing a website that will act as a reference for biology researchers. I'd like to implement loads of HTML5 and CSS3 if I can, but I can't spend forever developing it, and fixing websites for IE8 takes too long, especially with the heavy reliance on SVG elements I have planned. With such a user base (researchers), would it be safe to drop support for IE8 and below? I've heard that it's mostly banks and airline companies that use IE <= 8, but I've never come across an actual statistic, other than that global usage of IE8 is around 10%, which is a bit high for my taste.
The short answer is, of course, "it depends", but some things you might consider:
-Who benefits from people using your site? If it's you who wants them to use it, then the onus is more on you to make their lives easier by supporting whatever browser they use (most e-commerce sites would fit this category). If there's less value to you and you're genuinely just trying to be helpful by producing a valuable tool for your particular research community, it'd probably be more reasonable to expect them to update if they care enough.
-How much effort would support be? If it's little extra effort, then just do it, but for what you've described it sounds like support would be very difficult. If it's going to take 5 times as long to support 10% more people, I personally wouldn't bother.
-What sort of age are they? Not always true, but generally speaking younger people will tend to use more recent browsers.
In your position, I'd try to ask 10 or so people who you'd expect to use the site what browser they use. If only 1 or 2 people would be affected, I'd ditch IE8 support, and make sure the site falls gracefully back to a page explaining why they aren't able to use it and how to update their browser.
Points to consider:
Many people who use IE8 may have other browsers installed as well (choice of browser isn't an either-or proposition).
How about making your site semi-functional for IE8 rather than entirely non-functional? If parts of it don't work, explain that to the user, but don't stop them using the bits that could work for them.
There are fallback options (polyfills) for IE8 that allow you to use some modern browser features, including SVG. They won't work as well as a modern browser, but they might make it possible to support IE8 a bit more than you think, without too much extra work on your part (a rough sketch of the idea follows this list).
If your resource is useful enough, and the IE8 users can see what they're missing, it might push people to upgrade in order to use it.
Surely a group of intelligent people (scientists, researchers) would be sufficiently technically literate that they'd at least know about alternative browsers, even if they are still using an old PC that originally came with IE8. With luck, your target market may have fewer IE8 users than the general population.
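As mentioned in the polyfill point above, here is a rough sketch of one such fallback: detect SVG support and, where it is missing, swap each .svg image for a .png exported alongside it. The naming convention is assumed, and the feature test is the same one commonly used by libraries like Modernizr.

    // Works in IE8: no ES5 features required.
    var svgSupported = !!(document.createElementNS &&
      document.createElementNS('http://www.w3.org/2000/svg', 'svg').createSVGRect);

    if (!svgSupported) {
      var images = document.getElementsByTagName('img');
      for (var i = 0; i < images.length; i++) {
        var src = images[i].getAttribute('src');
        if (src && /\.svg$/i.test(src)) {
          // Assumes a .png with the same name has been exported next to each .svg.
          images[i].setAttribute('src', src.replace(/\.svg$/i, '.png'));
        }
      }
    }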

Why are HTML/JavaScript/CSS not compiled languages, and will they ever be?

Why are HTML/JavaScript/CSS not becoming compiled languages (or maybe even merging into a single compiled language)? What if browsers ran a "browser virtual machine" and HTML/JavaScript/CSS sources could be compiled to a "browser bytecode"? Wouldn't that help developers and users a lot?
I can see a few challenges:
What to do with the zillions of existing pages? Make this compilation optional, so if you want you can use plain old HTML. If you want to feed a browser a compiled page, just use .chtml, for example.
How would search providers index pages? Make a decompiler that would decompile bytecode into the exact original sources (like Flash can be decompiled, for example). Or search providers could use the same virtual machine and get the data they need from there.
How to make it compatible with all browsers? Have one centralized developer (let's say the W3C) develop this virtual machine, and then each browser would embed it.
But what about benefits:
Speed.
Size.
No more "loose" and "half-correct" HTML. It is either correct or it won't compile.
Looks the same in every (supported) browser.
If not bytecode, then at least have some native compression going on; HTML probably is not the most efficient way of storing data. I know there is gzip, but why compress pages every time on the server and decompress them in the browser if we can compress them once and feed that to the browser?
So what stops us from taking this road (well, besides a huge amount of effort to make it all happen)?
Ah, but JavaScript IS becoming a compiled language. Check out Firefox 3.5 with TraceMonkey. It's insanely fast compared to, um, you-know-who's browser. It's true that JS will never be C, but it's a much more dynamic language than C is, and in many ways that makes it more expressive and powerful.
As far as HTML goes, I don't think that the lack of validity of HTML is a huge detriment to speed. I think the engines that put together the visual representation and manipulate the DOM need to get a lot better (um, IE, I'm looking in your general direction...). CSS compliance needs to get better, and CSS itself needs to get more powerful. (Get on the bus with CSS3, people!)
But I do think that speed is going to get better on Firefox and Chrome to such an extent that people really ARE going to start using it for mainstream application development. It's funny. Adobe seems to be selling Flash as their platform for dynamic web content, MSFT is selling Silverlight for dynamic web content, and Google just wants to really improve HTML and Javascript to display dynamic web content. And Google's doing pretty well at it so far, I must say...
Your ideas have validity when they are applied to JavaScript. As others have noted, to one degree or another several vendors are trying to apply those principles to JS even now. Another big step in this area will likely be the Chrome OS Google has announced. However, when it comes to (X)HTML and CSS I think your ideas may be missing the point.
The world wide web is not a buggy and inconsistent application platform but a massive and unprecedented collection of interconnected documents. The power of the web is in the abstraction of the data from the often rigid (and breakable) visual layouts and increasingly complex in-page functionality largely provided via JavaScript. Encoding these pages in (X)HTML is ideal for making them accessible to the widest possible audience both in terms of browsers and in terms of technical knowledge required to author a page.
More and more the web is being used as an application platform - which is a powerful and exciting use of this technology - but we cannot lose sight of the fact that these Ajax-driven "web 2.0" apps are merely documents with extended functionality. Compilation doesn't make sense for a document and compression is already happening (via gzip and the like).
On a more practical note, the W3C moves at a glacial pace, and browser vendors take turns between jumping the gun by supporting experimental features in unfinished specs and taking their sweet time supporting other specs which have been on the table and in common usage for years. The whole process is like herding cats. I wouldn't hold my breath for them to make the kind of radical changes you're proposing any time soon.
Since HTML and CSS aren't code, they can't be compiled. Google Chrome's V8 engine does actually compile JS down to native code; expect other rendering engines to follow suit!
http://code.google.com/apis/v8/design.html
We recently reworked a PHP template system I helped create to use Minify to compress multiple JS and CSS files into one file each, seeing our file sizes drop to about 20% of the original combined sizes. Minify also does gzip and caching, so it's really amazing for speeding up websites.
http://code.google.com/p/minify/
In short, you can't compile non-code, which HTML and CSS are. JS can be compiled and is starting to be, but it all depends on what browsers feel like doing.
Browsers just need to be on the ball regarding supporting web standards. The more browsers do this, the less headache us web developers have. I was quite happy with YouTube's very public drop of support for IE6. We need more action like that for the web to move forward.
The V8 JavaScript engine (also embedded in Google Chrome, but it's open-source and liberally licensed, so you're welcome to use it in the next browser you write!) does compile JavaScript to native machine code -- of course, it does it "just in time" (like most modern compilers -- Java, C#, etc.!), not "ahead of time" (like Fortran did in 1954, when computers were just too weak to handle compilation in the midst of execution). I'd be surprised if other good JS engines, like those in the very latest Firefox and Safari, didn't do the same.
Looks like you're not advocating "JavaScript as a compiled language" (since it obviously already IS compiled, if you're using a good JS engine), but rather "ahead-of-time" compilation for it (just when most modern languages are essentially abandoning ahead-of-time compilation). Pushing machine code rather than compilable code down the wire sounds like a mostly horrible idea (much larger size, difficulties in supporting one CPU vs another, security nightmares in properly sandboxing it, etc., etc.) with not much in the way of compensating benefits.
That said, if you're really keen on pushing machine code to the client, try out nativeclient (as long as the client is an x86 machine - forget every smart phone on the planet, many netbooks, good old macs, etc) -- at least it promises a fix to the security nightmares. If and when you're happy with nativeclient, transforming a just-in-time compiler into an ahead-of-time one is a far easier technical challenge (if you want to keep using Javascript for the sources rather than other languages, of course).
See here for a previous discussion on the matter
Not all of the reasons given are necessarily valid, but one important one is that, unless you're Google, server-side CPU cycles are a lot more valuable than client-side cycles: so it's easier to have the client compile/optimize what is quite often dynamically generated HTML/JavaScript, rather than the server.
Speed.
You're assuming that it takes significant time to parse HTML. However, it might be that that time is insignificant compared to the time required for something else, e.g. the time required to lay out the text in the end-user's window.
No more "loose" and "half-correct" html. It is either correct or won't compile.
You already get that, using [X]HTML.
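One concrete way to see this: if a page is served with the Content-Type header application/xhtml+xml, conforming browsers use a real XML parser, so any well-formedness error - like the unclosed <p> in this small sketch - stops the page from rendering at all, which is the "correct or won't compile" behaviour being asked for.

    <?xml version="1.0" encoding="UTF-8"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Strict parsing demo</title></head>
      <body>
        <p>First paragraph
        <p>Second paragraph</p>
      </body>
    </html>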
Looks the same in every (supported) browser.
You seem to be saying that there should only be one browser, or that all browsers would support it equally.
Internet standards don't happen by having a single body (the w3c) implementing something and declaring it a standard. Instead, internet standards happen by having multiple independent bodies creating multiple implementations. A consequence is:
Some people have developed something that isn't standard yet (i.e. they're ahead of the standard)
Some people haven't yet developed something that is standard (i.e. they're behind the standard)
I think your idea is sound; however, there's still no way to enforce a standard. Thus, if there were an unsupported feature, there's a good chance the entire page would simply not display anything. In the current setup, critical information can still get through.
Google's V8, which is one of many new-generation JavaScript engines, "compiles" JavaScript on the fly, much like .NET "compiles" C#. Nothing magical here. Expect more of it, especially as web apps get heavier and more demanding.
HTML
HTML is pretty much XML. DTDs exist for the various versions, and developers can validate against them at any time.
CSS
CSS is not a programming language, however I do agree that "compiled" CSS could work seeing as compilation would compress it. However with the support that CSS has and with the number of essential hacks any CSS needs to have, you'd never manage to compile it without errors.
JS
As others have mentioned, JS IS becoming a compiled language, except that the browser compiles it for you rather than you doing it yourself.

Is true HTML debugging possible?

I've been a web developer for quite some time and what has helped me in learning is to visually see what is going on.
That's the reason for tools like Aardvark, Web Developer, Firebug and many others.
But when I saw the Gecko reflow videos, they just blew my mind.
So my question is: is it possible to truly debug HTML (step through each element), or come close to it?
What I've been doing a lot is using Aardvark to remove elements, but Aardvark has its issues with "background" and same-size elements and not being able to target those.
UPDATE: I've been trying to write a good update for this question since it has left me thinking about it more, but since English isn't my primary language it's been tough.
In past years it's been the browsers that have had the task of being compatible with the standards. As they get closer to that goal, it is us who should be thinking about what we can truly create when browser-compatibility issues are minimal, and whether there are techniques we can utilize that make rendering a page faster.
We can think of the past decades as the early years of HTML/CSS, where the main goal was just to get the thing to work. Now we should be looking for techniques that speed up the current process. An example of this is in the video above, where the Gecko engine runs through the code twice. Why is that? And are there other instances where it's doing unnecessary things (even though they work and are compatible)?
This is something that clearly needs to be tested to be confirmed, hence my original question of a true debugger.
My $0.02:
"True" HTML debugging, in the sense you're talking about, is not technically possible, because there is no requirement of HTML user agents (web browsers) to render HTML elements in a particular order, nor is there anything like an atomic unit of execution like a "statement".
For instance, when rendering a table, should a user agent reserve space for each <tr> before rendering their child <td>s (breadth-first)? Or should it render each child <td> and each <td>s child and so forth (depth-first)? In practice, user agents make all kinds of guesses to try to render pages as quickly as possible. In other words, there would be no guarantee that debug-order will match actual render-order, nor should there be.
HTML can be thought of as an declarative language in this sense, in that it specifies what should be done (the page rendered to spec) but not exactly how to do it (exactly which order to render elements to the screen). In general, it's best to assume that everything happens at once, although the W3C does give some tips on speeding up <table> rendering based on how user agents should render <table> elements.
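For instance, one of the few ordering hints that HTML and CSS do give you concerns tables: if the table and column widths are declared up front, a user agent can lay out and paint rows as they arrive instead of buffering the whole table to compute column widths. A small sketch, with arbitrary widths:

    <!-- "table-layout: fixed" plus explicit column widths allows incremental
         rendering: rows can be painted as they arrive. Widths are arbitrary. -->
    <table style="table-layout: fixed; width: 600px">
      <col style="width: 150px">
      <col style="width: 450px">
      <tr><td>Row 1, cell 1</td><td>Row 1, cell 2</td></tr>
      <tr><td>Row 2, cell 1</td><td>Row 2, cell 2</td></tr>
    </table>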
IMO, the webdev toolbar and Firebug are the best we've got, where we can edit/disable specific HTML elements and CSS rules.
OK - serious answer.
Judging by the comments on the sites that I've followed from that link, I think that you and I know that there probably isn't. There are a lot of smart blokes and blokettes on those threads, and they all seem to point towards "no, this is all clever $4!# that won't help us in understanding rendering".
However, I think that what your question might want to emphasise is that rendering at the browser level is very interesting.
Let me just throw this one out there. Do you think that putting body { overflow: scroll; } as a default might speed us up just a little?
In my professional opinion, there's really only one effective tool for time-factoring / assessing / debugging within the HTML milieu: the WebDev Iterator.
Personally, I feel that as long as your HTML validates to the W3C spec, isn't that all that matters? One should develop their HTML to spec and let browser companies worry about their bugs (which are pretty rare these days) rather than focus on old browser mistakes of the past.
The HTML Validator plugin for Firefox (aka Tidy) is all any web developer needs to see if their markup is correct, what's wrong, and where it's wrong.
Even if you could do true debugging, each browser parses HTML its own way, so even if you could step through Firefox to see how a rendering bug occurs, that won't help you with IE or Safari/Chrome at all, because they do their parsing in their own manner. This isn't like PHP, .NET or Java, where the parsing of the code is the same for everybody; debugging makes sense there.
"So my question is: is it possible to truly debug HTML (step through each element), or come close to it?"
You could probably step through the page rendering process by running Firefox under gdb, or modify an open-source browser to have a "step" button, but I really doubt this will achieve anything useful.
CSS isn't that complicated; everything is basically a box with a width/height/padding/margin. The problem with web development (CSS particularly) is that every browser implements rendering slightly differently (some more differently than others).
If you want to know the render order to speed up your page load, I'd say you're going about this the wrong way. The browser rendering the page probably accounts for maybe 5% of the load time; the rest is page-generation time and network latency.
You could possibly shave 2ms off your page load by reordering some tags and using a different CSS positioning method, or you could reduce the page-generation time by 200ms with caching, and halve the network latency by setting up a second web server nearer your users. Compressing your logo better or minifying your JavaScript would most likely improve load time (universally, across all browsers!).
Basically, if you're concerned about load time, there are much better places to start. If you're concerned about how the page is being rendered, Firebug(-Lite) and http://browsershots.org (or a virtual machine or two) are all you need!

Better to develop cross-browser code up front or develop for one browser and go back and make it work in the others later?

I'm looking for feedback on people's experiences with developing sites that work across browsers. It seems to me there are at least two obvious ways to approach the task of making your site/webapp work across browsers:
Constantly test across all supported browsers every step of the way; or
Pick a browser, get everything working in it as a reference implementation and then make all the other browsers match the reference implementation.
Each approach has an obvious drawback -- the problem with #1 is that you end up doing a lot of unnecessary work -- especially if you are developing a webapp that is going through a lot of iterations/prototyping/spikes etc. You will make a bunch of stuff work across browsers that will subsequently be discarded/removed.
The disadvantage of approach #2 is that while it makes the initial development much quicker and less painful, it makes it much harder to figure out where some of the specific errors arose, especially for more complex issues -- whereas if you had been developing for all browsers at once, you should catch them right away and know what change(s) introduced the problems.
A somewhat obvious third option would be a hybrid approach, but it seems to me that you would end up losing more by experiencing both of the problems with #1 and #2 than you would gain from the benefits of doing both.
What have you found to be the most effective way(s) to approach this challenge?
I’ve found that if you get too deep into developing a website without looking at other browsers you’ll quickly get to a place that is too much of a headache to debug. I consistently open my web pages in all the browsers I care about.
I strongly suggest you verify in all browsers each time you make a large change to the site.
Make it work with all browsers up front. This will mean extra testing during development but will cause you less pain later. I find it's usually easier to diagnose problems if I've just developed the thing, rather than coming back later and trying to figure it out...along with a list of other issues.
It partly depends on whether you know it's going to have to work in all browsers up front. If you do, then you really are better off just making it cross-browser to begin with. You don't need to test that everything is 100% compliant every step of the way, but you should code toward that.
And really, it's not that hard, especially what with JS frameworks like jQuery and Dojo around that take care of the scutwork. If you find yourself continually battling one browser or another, you might want to reconsider your design, as you may have chosen to do things in a way that is inherently more difficult to do when cross-browser compatibility is important.
Well, you do #1, but you do it whilst creating a style guide. A little bit like this: http://www.sitefromscratch.com/content/html-xhtml-css-testing.
So when designing a new site, you create a page containing the desired HTML for all the visual elements that will be used on the site. Ignore style for now; just use the HTML that makes the most sense.
Then you style it. Preferably, your style guide would be all on one page so it can be open in each browser you're supporting, and all you need to do is hit refresh between changes (avoid opting for several pages, because it will take you longer to inspect them all).
When you build your site, build from the style guide. If it's not in the style guide, add it and test it there first. If you discover a problem when building the site (perhaps you didn't consider how a particular element wrapped, for example), replicate and fix the problem in the style guide.
Here's the killer advantage: A new version of one of your supported browsers is released, what would you prefer to test, your entire site, or the style guide?
So that's the CSS taken care of; now you need to do the same with your generic JS functions, if any.
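A style guide in this sense can be as simple as one flat HTML file that collects an instance of every element and pattern the site uses, pointed at the real stylesheet; the file name, stylesheet path and contents below are only an illustration.

    <!-- styleguide.html (hypothetical): open it in every supported browser
         and hit refresh after each CSS change. -->
    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <title>Style guide</title>
      <link rel="stylesheet" href="css/site.css"> <!-- the site's real stylesheet -->
    </head>
    <body>
      <h1>Heading level 1</h1>
      <h2>Heading level 2</h2>
      <p>Body copy with a <a href="#">link</a>, <strong>bold</strong> and <em>emphasis</em>.</p>
      <ul>
        <li>Unordered list item</li>
        <li>A longer item that wraps onto a second line to test wrapping behaviour</li>
      </ul>
      <form action="#">
        <label for="q">Text input</label>
        <input id="q" type="text">
        <button type="submit">Button</button>
      </form>
      <table>
        <tr><th>Header</th><th>Header</th></tr>
        <tr><td>Cell</td><td>Cell</td></tr>
      </table>
    </body>
    </html>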
I created an interface compatibility layer between the browser and my code - basically, I wrapped certain functionality and made the wrappers determine what JavaScript/HTML was needed.
As browsers change, you change this compatibility layer and you can leave the rest of your code alone.
If you have this layer in your architecture, the answer to your question becomes "whenever you want".
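A classic (if dated) example of such a layer is an event-attachment wrapper: the rest of the code base calls the wrapper and never touches a browser-specific API directly, so only the wrapper has to change when browsers do. The function names here are hypothetical.

    // Hypothetical compatibility layer for event handling.
    function addEvent(element, type, handler) {
      if (element.addEventListener) {            // standards path
        element.addEventListener(type, handler, false);
      } else if (element.attachEvent) {          // IE8 and below
        element.attachEvent('on' + type, handler);
      }
    }

    function removeEvent(element, type, handler) {
      if (element.removeEventListener) {
        element.removeEventListener(type, handler, false);
      } else if (element.detachEvent) {
        element.detachEvent('on' + type, handler);
      }
    }

    // Elsewhere in the code base (example usage, assumed element id):
    // addEvent(document.getElementById('save'), 'click', onSave);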
If you can get an enterprise lock-in then multiple browser support can become less of an issue, e.g. if your customers are all companies using Internet Explorer then why build the site to look good in Safari or Chrome?
If though you are making something for the general public, then there is a hybrid approach I'd use: I use one browser to get all the functionality there and working, and then test across browsers when I'm in the "prettifying" phase of the project. Initially the key is for it to work; then it has to look good.
I don't see the logic in testing across all browsers as I initially fill out a form or do some other basic functionality, as it can be a big productivity drain to test across even a few browsers - e.g. IE 6 & 7, Firefox 2 and 3, Opera 9.5, Safari, and Chrome if one wants to get all the big boys, and at least a couple of OSes, as Safari on a Mac can be different from Safari on Windows - which is a lot of tests even for just one or two pages. On the other hand, near the end is when I can refactor my CSS and inline styling and make the code better for handing off to someone else to maintain, archive until a service-pack project is planned, or keep some documents just in case something has to be done. Also, by waiting to clean things up, I can have more confidence in the final UI parts, as these can evolve and change considerably over a short period of time.
I usually start out ensuring that all the HTML and controls that I write/use adhere to the specification.
Tools-->Options-->Text Editor-->HTML-->Validation-->Check Show Errors and choose your Target
This starts me out on a solid base. I functionally test new features in a single browser and then about once/twice a day test the full set of them across all browsers.
Using this approach, CSS and JS are the usual suspects when something isn't right; it's rare that it's the actual HTML markup.
If you can do it right the first time, do it then. It will probably never be right later.
It depends mostly on your experience, which could be applied to any programming activity, including this one. If you know in advance what likely pitfalls you're going to have to avoid (e.g. in cross-browser development, don't make it hard on yourself by trying to do something that is going to be a hassle in a different browser), then you can probably safely develop everything in one browser and then go in after it's done and tweak things to get it working everywhere.
I usually advise junior developers to keep all browsers open during development and to refresh each browser when making changes, but I myself tend to write everything with Firebug for support and then go back and see how it's doing in IE7 (etc) afterwards. Since I've been doing this for several years, most of the time I can predict what's going to be causing a headache and often immediately know where to look to fix it.
If you are new to Web Design/Development then getting things right in different browsers can be a chore at times.
However, it's really not that hard to get a website working in every major browser and coded towards the W3C standard. In my opinion EVERY designer/developer should do this out of principle, otherwise they are no better than they were in the IE years.
Develop cross-browser code, make sure it validates and never think about designing for one browser again.