I'm talking about these.
Yeah, I know they're intended to show that the page conforms to the standards, and that they should link to a revalidation service. OK. But why should I, as a regular user, bother with this? As a visitor I'm indifferent to whether the page is strict XHTML or not, whether it contains dirty IE hacks or not. It is important that a page renders correctly, is convenient and works fast. That's all! And in reality, in many cases these requirements don't get along with W3C standards smoothly.
So why this mania for adding something aimed at developers to a product's face? Am I missing the point?
It's not a selling point like the "Be Safe With " type tags.
Including the W3C badges is a way to show that you know there are standards that should be followed for web-page construction. It's a way of showing that you want to be courteous to all users, no matter the browser, and to help promote the idea that browsers should implement, at least, the standards.
It's also a way to educate your readers. Not everyone knows that these standards exist or why they exist. Educating your readers will hopefully empower them to find a browser that fits their browsing expectations and to raise those expectations above "show me some images from 4chan."
Though, at the end of the day, it usually turns out to be another way to put things on a website because you lack the artistic savvy to make things look good without putting 'stickers' all over something.
It's weird, but virtual medals do work. It's no coincidence that SO has rep and badges.
There are plenty of sites with important text that is missed by my browser because I don't use the browser that the author of the page used. If I see one of these badges then I can be confident that all of the page is rendered.
It is important that a page renders correctly, is convenient and works fast. That's all! And in reality, in many cases these requirements don't get along with W3C standards smoothly.
You don't think these are conflicting?
I would take the validated page over the "with hacks for every browser" page any day.
It matters to no one: not to potential employers, not to other developers, and especially not to users. I've seen pages that do NOT validate despite the badges, and valid markup only means the syntax is correct; it does not mean the page is well designed, laid out, formatted, well thought out, flexible, usable, or of any interest to anyone else. I'll look at the markup to see what the author has done, and that is what counts.
It's just a way to show your technically competent users that you are technically competent; they have no other way to tell. I always try to validate, but I never display the badges. If I had a blog, I might put them in the about section.
It's just bragging rights. Same as any badge/award implementation. Sure, it doesn't really matter to 99% of your visitors, but it might matter to you, the developer.
You do not need the badges. When they are present, however, they indicate that all modern browsers will render the page [almost] identically.
An example: they give the user confidence that when he goes back home (at the office he is forced to use Mac OS X, but he has Windows at home), the page will still display properly for him. Nothing critical, but sometimes really important to know.
As a visitor I'm indifferent to whether the page is strict XHTML or not, whether it contains dirty IE hacks or not.
Well, if you are using Safari and the site only works with IE because of "dirty IE hacks" then for you, the site is broken and useless. Likewise, if you are an IE user, and the site is full of "dirty Firefox hacks" then the site will be broken for you as well.
Which syntax and features should I use if I wanted to write a webpage that could be correctly parsed by any device realistically still in use?
By "currently in use" I also mean devices whose usage is not statistically significant enough to appear in browser usage statistics, but which could still plausibly be used in the real world: e.g. a screen reader, a point-of-sale/kiosk device, or an Internet appliance like a smart TV or a games console.
I've recently been assigned to develop an HTML landing page requiring support for - hear me out - the ancient Microsoft Internet Explorer 6. Yes, you read that right.
Long story short, it was in the requirements for a public administration project that was in legal limbo for a long time, and reworking those requirements would be more trouble than just developing it and being done with it. At least according to my boss. Besides, it shouldn't be that much different from writing a current-day HTML newsletter.
But it got me thinking - what's the oldest browser (as in, the one with the oldest HTML standard support available) that I could realistically find in the wild?
I'm not talking about relevance or caniuse-level support numbers, just something that, for some more-or-less realistic reason (excluding "here's a Windows 3.1 VM getting onto the internet" retro challenges), people would still be using to get online. Also, some security requirements (mostly up-to-date SSL/TLS) cut off quite a few older devices, mostly games consoles like the Dreamcast, the PS2 with the Network Access Disc, the PSP, and the original Nintendo DS, so those are excluded too.
Other than the aforementioned IE6, I can think of the Opera Mini browser (which version?) on some legacy mobile phones, or maybe some kind of Internet appliance still using the NetFront runtime (IIRC the PlayStation 3 still uses a pre-WebKit version of it).
Any suggestions?
Preword - apologies, this became a bit of a rant; I will try and tidy it up later and focus it more. I also understand you may have to listen to your boss, so this has become me ranting on your behalf to your boss. I hope you can use some of it to construct a strong argument with your boss to get the spec changed. It also doesn't answer the question as titled. Probably worth a few downvotes, but I enjoyed having a rant!
Given the OP's clarifying comment, this answer is aimed at the question:
What are the oldest browser standards I need to support for accessibility?
The WebAIM annual survey is a great place to get this information.
Under the 'Browsers' section we can see that 2.1% of respondents still use IE 6, 7 or 8.
Unfortunately they do not break down how many of those use IE6 specifically, but I would imagine it is less than 0.1% overall, as very few sites still work correctly in IE6.
I personally develop to IE9 standards, so I use ES5 for JavaScript (or compile to ES5), HTML 5.0 features, and non-experimental CSS3 features.
The reason for this is that those who do use IE8 and below tend to be screen reader users due to compatibility with the software they own. They also tend to use a very specific set of websites due to the fact that most websites just will not work for them.
Many people would argue that you must include everyone, and in an ideal world you should. However, you also have to consider that your goal in accessibility is to provide the best experience possible to the widest number of people. This is not possible in IE8 and below, as you lose a lot of the HTML5 features that are useful for accessibility (particularly form-related attributes and navigation by regions such as <main>, <nav>, etc.).
So what about the people stuck on IE6-8 then?
As stated, they tend to be screen reader users. The beauty of this is that as long as you develop valid HTML, use correct WAI-ARIA attributes, etc. (basically, go for a AAA WCAG 2.1 rating or as close as possible, at least an AA rating), then in IE6-8 the site will still function reasonably well and be usable, even if not perfect.
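To illustrate (a minimal, hypothetical sketch): pair the HTML5 landmark elements with the equivalent WAI-ARIA role attributes. Modern browsers treat the explicit roles as redundant, but older assistive technology that predates the HTML5 elements can still expose the landmarks.

    <!-- HTML5 landmarks doubled with explicit ARIA roles for older AT. -->
    <nav role="navigation" aria-label="Primary">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/contact">Contact</a></li>
      </ul>
    </nav>
    <main role="main">
      <h1>Page title</h1>
      <p>Content stays readable even in a browser that treats the
         landmark elements as unknown inline elements.</p>
    </main>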
CSS is where your main struggle will be; making a site work in IE6 is just horrible if you still want to follow best practices (try styling a button, for example, to make the font size large enough and the colour contrast AAA rated). Also, how do you then make it responsive for mobile users without hundreds of hacks?
So what if you do have to support older browsers?
You can provide a separate version of the site (shock, horror, gasp).
Use user-agent sniffing to redirect people to a separate, stripped-back, text-only version of the site with minimal styling. Be sure to include a link to the main site and to indicate that this is a version of the site designed for older browsers.
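For old IE specifically, a minimal sketch of one way to do the redirect without fragile user-agent string parsing is a conditional comment, which only IE9 and below even read (the /basic path for the stripped-back version is hypothetical):

    <!-- In the <head> of the main site. Only IE 8 and below run this
         script; every other browser sees an ordinary HTML comment. -->
    <!--[if lte IE 8]>
    <script>
      // Hypothetical path for the stripped-back, text-only version.
      window.location.replace('/basic' + window.location.pathname);
    </script>
    <![endif]-->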
This requires careful planning as you have to ensure that the same information is maintained on both sites and that features introduced are replicated across both sites.
This is why almost nobody does this, and it isn't something I would recommend at all. There is a lot to consider, the features you can introduce are limited, and the chance of updating one version and not the other (even with a Content Management System) is very high. Making just one of those mistakes is likely to cause you more problems than not implementing this at all.
Conclusion
HTML5 introduces many great features for accessibility.
Regions for navigation are one of the biggest wins.
By designing to older standards you will likely end up with a site that is less accessible.
For example, a lot of WAI-ARIA attributes aren't fully supported even in modern browsers, never mind older ones, so falling back on WAI-ARIA attributes alone is not going to work.
What about an accessible and easy-to-use layout? IE6 was the era of tables for layout, because CSS positioning was so limited. (Want a sticky navigation bar? Good luck implementing that!)
To provide 'best effort' support I would use an HTML5 shim to at least fix the layout in IE7 and IE8; IE6 just isn't feasible.
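As a sketch (assuming the widely used html5shiv script; the CDN URL below is one common source, not the only one), the usual pattern is a conditional comment so only old IE downloads the script, plus a CSS rule so the new sectioning elements render as blocks:

    <!--[if lt IE 9]>
      <!-- html5shiv lets old IE parse and style unknown HTML5 elements. -->
      <script src="https://cdn.jsdelivr.net/npm/html5shiv@3.7.3/dist/html5shiv.min.js"></script>
    <![endif]-->
    <style>
      /* Unknown elements default to inline; declare the HTML5
         sectioning elements as blocks for every browser. */
      main, nav, header, footer, section, article, aside { display: block; }
    </style>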
Another argument is security. How on earth are you meant to write secure JavaScript that is IE6-compatible? Heck, try finding a single piece of information still alive on the internet today on 'how do I fix this problem in IE6'. Perhaps your solution here is to just not use JavaScript at all. (And yes, someone will argue that progressive enhancement allows you to use JavaScript, but then you have JavaScript errors, which would mean you fail WCAG under 'Robust'.)
Oh and did you know that you can't even use <style>*{position:relative}</style><table><input></table> as that will crash the browser?
As a final thought on this rant part: even an old Windows XP machine will run IE8 and NVDA. So we cannot make the argument that people may not have the financial resources to upgrade, as NVDA is free. We could argue they do not have the technical knowledge to upgrade, at which point all I could say is: write an article on the site on how to upgrade and redirect people there, maybe? It gets a bit silly at that point as to where you draw the line.
How to win the argument with your boss / help them get the spec changed
Your boss is in a bad position but probably doesn't have the 'firepower' they need to persuade the spec to be changed easily.
Here is a simple way for them to win the argument:
WCAG 2.1 AA is the minimum legal standard that we can build this site to without risking getting sued.
This is impossible while maintaining compatibility with IE6 and without compromising on accessibility features.
The cost of getting accessibility wrong, coupled with the additional development cost associated with trying to develop to such an old standard makes this infeasible.
Couple that with the likelihood of producing an application with massive security flaws that could make the site vulnerable, and I would highly recommend we change the minimum requirement to IE8.
tl;dr
The oldest browser you realistically need to worry about is Internet Explorer 9. Even then you are being generous with your compatibility requirements. Do not try to develop a site that works in every browser encountered since 2001 (yes, IE6 was 2001); you will break things for more people while trying to accommodate people who do not want to upgrade (even an old Windows XP machine will run IE8 and NVDA).
Do not attempt to support IE8 and below as you will actually end up making decisions that will negatively impact accessibility.
If you can create an HTML5-valid and CSS3-valid website that is WCAG 2.1 AA compliant, then you are already more accessible than 99% of websites in the world.
You're not asking the right question.
Users of what I build may be different from the users of what you build. To one audience, it may not matter if the site only loads in current browsers. To another audience, it may be the only thing that matters.
Additionally, some sites outright require newer browser features to function; no web-based fallbacks are available. Also, there are current browsers that don't support much of anything (e.g. Safari, Lynx, etc.), so it's not necessarily a matter of age, but of capability.
The question you should be asking is how you decide what to support and what not to support. Short of pointing you at your analytics and logs, this isn't a technical question but a business question.
In an attempt to understand whether W3C validation helps the DOM render better or is just a standard for HTML coding, I tried to validate some major websites, but all of them fail with errors.
Here are typical examples:
google.com: 36 errors, 2 warnings
facebook.com: 42 errors
youtube.com: 91 errors, 3 warnings
yahoo.com: 212 errors, 8 warnings
amazon.com: 510 errors, 138 warnings
When major websites do not seem to spend enough time on W3C validation, is it worth spending that time for small and medium-sized websites?
Validation is a sore issue. In the XHTML days (before the HTML5 doctype became ubiquitous), it was almost impossible to validate a complex layout against the strict DTD published by the W3C. I think you could probably point the finger at IE as the prime culprit, as so many totally non-standard hacks were needed to make it behave in a reasonable cross-browser way, and IE was and is the most-used browser on the planet. It is to be lamented that MS, instead of following the lead of the WebKit and Gecko engines, has decided to add yet more browser extensions and hacks to muddy the waters, instead of going for plain adherence to the 'standards'.
We all know that if time were not an issue, we as developers could create pages that validate, but in practical terms, as others have pointed out, validation ends up being a helpful tool, not a de facto objective. If a client demands validation, then there is a cost involved, and that has to be explained; managing expectations here is very important.
The HTML web advanced in a very short time from a very simplistic semantic text-layout engine to fully dynamic applications running inside the browser, and the validation tools simply have not kept up. I'm not even sure they can, given that browser technology is advancing daily, across a thousand or more different platforms.
So, rounding up: validation is a tool to be used by developers, but your own ability is what will determine whether the project is fit for purpose. Having an icon or a green 'OK' box in a validator is absolutely not going to decide that.
Validation is cheap quality assurance. It will help you spot errors (especially nesting errors and those caused by mistyping something). It will save more time than it costs (especially if adopted at the outset).
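As a made-up illustration of the kind of mistake a validator flags immediately, while browsers silently (and inconsistently) 'fix' it:

    <!-- Invalid: <b> and <i> overlap instead of nesting, and a <li>
         appears outside any list. A validator reports both at once. -->
    <p>Some <b>bold and <i>italic</b> text</i></p>
    <li>stray list item</li>

    <!-- Valid equivalent: -->
    <p>Some <b>bold and <i>italic</i></b> text</p>
    <ul><li>list item</li></ul>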
I've not seen any performance metrics for the error-recovery routines in browsers. It would be hard to produce any that gave useful information, as there are so many different kinds of error.
In my opinion, the best reason to validate your pages is that it gives you the highest probability that your page will look the same in every browser, and it minimises the probability that the layout (or even some JavaScript logic) is broken.
Is there free, open source software for protecting HTML from "select all", "copy" and "save as"?
No. Nor is there any non-free or closed-source software that can do it in a reasonable way.
There are a couple of tricks you could try (generating all the content via obfuscated JS, checking the URI with JS, and so on), but they are easily bypassed and have unpleasant side effects (not least of which is making it impossible for search engines to index the content).
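As an illustration of the sort of trick meant here, and of why it is futile, a minimal sketch that swallows selection and copy events; anyone can defeat it with View Source, by disabling JavaScript, or by fetching the page with any HTTP client:

    <script>
      // Naive "protection": suppress selection, copying and the
      // context menu. Only inconveniences legitimate users.
      document.onselectstart = function () { return false; };
      document.oncopy = function () { return false; };
      document.oncontextmenu = function () { return false; };
    </script>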
As David Dorward said, you effectively cannot do this, but there are a few approaches you can take as long as you're willing to abandon pure HTML. All are ugly and I do not advocate their use for a multitude of reasons. Most of those reasons should be pretty obvious.
I mention them only because I have been in situations where I have been required to do this by the business requirements of a customer. I could either lose the customer or figure out a way. Since most business folks aren't software purists, they simply didn't care how it was done. If you're in this situation, I sympathize.
You could generate an image on the server side and display the information as an image. This approach was once used by the local sheriff's office, but they eventually went back to HTML since images are costly in terms of server resources and bandwidth. There are many open source ways to generate images.
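As a sketch of that idea with one modern open source option (the Node.js "canvas" package, a.k.a. node-canvas; any server-side image library would do), rendering the protected text to a PNG on the server:

    // Hypothetical Node.js sketch using the open source node-canvas package.
    var createCanvas = require('canvas').createCanvas;

    function textToPng(text) {
      var canvas = createCanvas(400, 40);
      var ctx = canvas.getContext('2d');
      ctx.fillStyle = '#fff';
      ctx.fillRect(0, 0, 400, 40);          // white background
      ctx.fillStyle = '#000';
      ctx.font = '16px sans-serif';
      ctx.fillText(text, 10, 25);           // draw the text
      return canvas.toBuffer('image/png');  // PNG bytes to serve as an image
    }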
You could use a technology like Flash or Silverlight to display the information. You would then have much greater programmatic control over disabling the copying of the information. I strongly prefer the HTML/CSS/JS approach to web programming due to its far reach and simplicity, so I don't recommend this approach either. Also, since it isn't open source, it's probably not an option for you.
Good luck.
To put it bluntly, there is nothing that will piss me off faster about a website than a developer trying to take control of what I can do with my browser. I'm not alone in this regard, so do realise that many people will eventually just say to hell with the website and whatever business it is trying to support.
The simple way to deal with this is to avoid showing the non-subscriber the data in question. Instead, feed the non-subscriber dummy info, or real info that has the critical parts obscured. What you don't want to do is screw around with code that hamstrings somebody's browser or, worse, makes that browser unstable.
Basically it comes down to this: do no harm, and do not impact the normal operation of the browser. There are perfectly sane ways to show a non-subscriber what he might get if he were a subscriber, without risking vast amounts of content.
Dave
There is absolutely no way to achieve your underlying aim, which is (apparently) to stop a non-subscriber copying or saving data. If a user can see the data, then they can copy it. End of story.
They can take a screenshot and use OCR. They can always get out a pad of paper and a pencil and copy it down. This might seem like a lot of effort, but you have to ask yourself: is that effort really more than what you're charging for a subscription?
The best advice is to find another way to do business.
If you allow something to be readable by users on the web, it will be possible to copy.
One can always take screenshots or simply write the text again while looking at the original.
I've been a web developer for quite some time, and what has helped me learn is visually seeing what is going on.
That's the reason for tools like Aardvark, Web Developer, Firebug and many others.
But when I saw the Gecko reflow videos, they just blew my mind.
Then my question is: is it possible to truly debug HTML (step through each element)? Or come close to it?
What I've been doing a lot is using Aardvark to remove elements, but Aardvark has its issues with 'backgrounds' and same-size elements, and cannot target those.
UPDATE: I've been trying to write a good update for this question, since it has left me thinking about it more, but since English isn't my primary language it's been tough.
In past years it has been the browsers' task to become compatible with the standards. As they get closer to that goal, it is we who should be thinking about what we can truly create when browser-compatibility issues are minimal, and whether there are techniques that make a page render faster.
We can think of the past decades as the early years of HTML/CSS, when the main goal was just to get the thing to work. Now we should be looking for techniques that speed up the current process. An example is in the video above, where the Gecko engine runs through the code twice. Why is that? And are there other places where it does unnecessary work (even though the result works and is compatible)?
This is something that clearly needs to be tested to be confirmed, hence my original question of a true debugger.
My $0.02:
"True" HTML debugging, in the sense you're talking about, is not technically possible, because there is no requirement of HTML user agents (web browsers) to render HTML elements in a particular order, nor is there anything like an atomic unit of execution like a "statement".
For instance, when rendering a table, should a user agent reserve space for each <tr> before rendering its child <td>s (breadth-first)? Or should it render each child <td>, and each <td>'s children, and so forth (depth-first)? In practice, user agents make all kinds of guesses to try to render pages as quickly as possible. In other words, there would be no guarantee that debug order matches actual render order, nor should there be.
HTML can be thought of as a declarative language in this sense, in that it specifies what should be done (the page rendered to spec) but not exactly how to do it (in which order to render elements to the screen). In general, it's best to assume that everything happens at once, although the W3C does give some tips on speeding up <table> rendering based on how user agents should render <table> elements.
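A concrete sketch of those table-rendering tips: with table-layout: fixed and declared column widths, a user agent can lay out and paint rows as they arrive, whereas the default automatic layout has to see all of the content before it can settle the column widths.

    <!-- Fixed layout + explicit column widths allow incremental rendering. -->
    <table style="table-layout: fixed; width: 600px;">
      <col style="width: 25%;">
      <col style="width: 75%;">
      <tr><td>Label</td><td>A long cell that no longer affects column sizing.</td></tr>
    </table>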
IMO, the webdev toolbar and Firebug are the best we've got, where we can edit/disable specific HTML elements and CSS rules.
OK - serious answer.
Judging by the comments on the sites I've followed from that link, I think that you and I both know there probably isn't. There are a lot of smart blokes and blokettes in those threads, and they all seem to point towards "no, this is all clever $4!# that won't help us understand rendering".
However, I think what your question really highlights is that rendering at the browser level is very interesting.
Let me just throw this one out there: do you think that putting body { overflow: scroll; } as a default might speed us up just a little?
In my professional opinion, there's really only one effective tool for time-factoring / assessing / debugging within the html milieu: The WebDev Iterator
Personally, I feel that as long as your HTML validates to the W3C spec, isn't that all that matters? One should develop their HTML to spec and let the browser vendors worry about their bugs (which are pretty rare these days), rather than focus on old browser mistakes of the past.
The HTML Validator plugin for Firefox (aka Tidy) is all any web developer needs to see whether their markup is correct, what's wrong, and where it's wrong.
Even if you could do true debugging, each browser parses HTML in its own way, so even if you could step through Firefox to see how a rendering bug occurs, that won't help you with IE or Safari/Chrome at all, because they do their parsing in their own manner. This isn't like PHP, .NET or Java, where the parsing of the code is the same for everybody; debugging makes sense there.
Then my question is: is it possible to truly debug HTML (step through each element)? Or come close to it?
You could probably step through the page-rendering process by running Firefox under gdb, or modify an open-source browser to have a "step" button, but I really doubt this would achieve anything useful.
CSS isn't that complicated: everything is basically a box with a width/height/padding/margin. The problem with web development (CSS particularly) is that every browser implements rendering slightly differently (some more differently than others).
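For reference, a minimal sketch of that box arithmetic in the classic CSS 2.1 content-box model, where padding, border and margin are added outside the declared width:

    <style>
      /* Total horizontal space = 10 + 1 + 20 + 200 + 20 + 1 + 10 = 262px:
         margin + border + padding + width + padding + border + margin. */
      .box { width: 200px; padding: 20px; border: 1px solid #000; margin: 10px; }
    </style>
    <div class="box">Everything is a box.</div>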
If you want to know the render order in order to speed up your page load, I'd say you're going about this the wrong way. The browser rendering the page probably accounts for maybe 5% of the load time; the rest is page-generation time and network latency.
You could possibly shave 2ms off your page load by reordering some tags and using a different CSS positioning method, or you could reduce the page-generation time by 200ms with caching, and halve the network latency by setting up a second web server nearer to your users. Compressing your logo better or minifying your JavaScript would most likely improve load time too (universally, across all browsers!).
Basically, if you're concerned about load time, there are much better places to start. If you're concerned about how the page is being rendered, Firebug(-Lite) and http://browsershots.org (or a virtual machine or two) are all you need!
I heard Joel and Jeff talking about sIFR in one of the early podcasts. I've been using it on www.american-data.com and www.chartright.us with some fairly mixed results.
Yesterday I was informed that the first line of text on my website appeared upside down in Internet Explorer 6 without Flash Player. I'm pretty sure that assessment was wrong, since no Flash Player means no sIFR. But I'm getting some odd behavior on my pages, at least in IE 6, 7 and 8. I only really wanted to use sIFR because my fonts looked crummy in Firefox on my computer.
My question is: if you use sIFR, when do you use sIFR? In which cases do you disable sIFR? When is it better to just use the browser font?
Use sIFR in moderation, say for headlines. Try not to use it for links, because links in Flash don't work as well as normal HTML links. It also makes little sense to use sIFR for text that never changes; an image would work a lot better there.
I haven't heard about the upside-down problem in a few years now; in any case, that's an issue with IE6 and (an old?) Flash Player. Either way, it always makes sense to test thoroughly.
Also, have you looked into sIFR 3 lately? It's much improved over v2.
I had plenty of headaches after implementing sIFR on my last website project. Most of the problems were due to browser inconsistencies like the ones you describe. Text would appear in odd places, not wrap properly, or just not display the way I wanted it to. I found that, as per usual, Firefox displayed everything nicely while I had to implement several different CSS hacks to get the same code to display properly in IE7 and IE6.
I say stick to standard browser fonts if you can, but if the project or client requires sIFR, then make sure you test it thoroughly in all browsers and with various Flash blockers, etc.
Try to consider up front what kind of headache you're creating for yourself (if you are, which isn't always the case) by implementing sIFR. It's probably advisable to only use it when your site design is relatively straightforward. As soon as you start having to deal with specific browser rendering exceptions (CSS, for instance) due to a complex design, you're going to run into problems related to sIFR. And if you design sites for clients, it's tough to go back and tell them halfway through that sIFR is going to have to be removed. So try to identify issues up front.
One example we ran into was having sIFR titles and then, directly to the right of the title, say about padding-right: 20px away (so, dependent on the width of the title text), some kind of icon. That led to a lot of hassle and made us wish we hadn't started using sIFR in the first place.