Should I make it a priority to semantically mark up my pages? Or is the Semantic Web a good idea that will never really get off the ground? - html

The Semantic Web is an awesome idea, and a lot of really cool things have been done with the concept. But after all this time I am beginning to wonder if it is all just a pipe dream in the end: will we ever truly succeed in making a fully semantic web? And if we are never going to be able to use the semantic web to give our users a deeper experience, is it worth spending the time and extra effort to ensure that my team and I create FULLY semantic web pages?
I know that semantic pages usually just turn out better (more from attention to detail than anything else, I would think), so I am not questioning attempting semantic page design. What I am currently mulling over is dropping the review-and-revision process of making a partially semantic page fully semantic, in hopes of some return in the future.

On a practical level, some aspects of the semantic web are taking off:
1) Semantic markup helps search engines identify key content and improves keyword results.
2) Online identity is a growing concern, and semantic markup in links, like rel="me", helps to disambiguate it. Autodiscovery of social connections is definitely coming (Twitter, for example, uses XFN markup for your information and your friends).
3) Google (and possibly others) are starting to pay attention to microformats like hCard and hCalendar to gather richer information about people and upcoming events. This feature is still on the "very new" list, but these microformats are useful examples of the semantic web in practice. (See the sketch after this list.)
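To make that concrete, here is a minimal sketch of what this markup looks like; the URLs, names, and values are placeholders, not a prescribed pattern:

    <!-- XFN: rel="me" ties together profiles owned by the same person -->
    <a href="http://twitter.com/janedoe" rel="me">My Twitter</a>
    <a href="http://example.com/blog" rel="me">My blog</a>

    <!-- hCard microformat: class names mark up contact details -->
    <div class="vcard">
      <a class="url fn" href="http://example.com">Jane Doe</a>
      <span class="org">Example Inc.</span>
      <span class="tel">+1-555-0100</span>
    </div>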
It may take some time for it all to get there, but there are definite possible benefits. I wouldn't put a huge amount of effort into it these days, but it's definitely worth keeping in mind when you're developing a site.

Yahoo and Google have both announced support for RDFa annotations in your HTML content. Check out Yahoo SearchMonkey and Google Rich Snippets. If you care about SEO and driving traffic to your site, these are good ways to get better search engine coverage today.
Additionally, the Common Tag vocabulary is an RDFa vocabulary for annotating and organizing your content using semantic tags. Yahoo and Google will make use of these annotations, and existing publishing platforms such as Drupal 7 are investigating adopting the Common Tag format.
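As a rough sketch of what an RDFa annotation can look like (the names and values here are invented; the Rich Snippets documentation has the canonical patterns), you embed properties from a vocabulary such as data-vocabulary.org directly in your HTML:

    <!-- RDFa: a person description that search engines can pick up -->
    <div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Person">
      <span property="v:name">Jane Doe</span>,
      <span property="v:title">Web Developer</span> at
      <span property="v:affiliation">Example Inc.</span>
    </div>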

I would say no.
The reason I would say this is that the current return on creating a fully semantic web page is practically zero. You will have to spend extra time and effort, and there is very little to show for it now.
Unlike investing, this effort doesn't compound, so doing it early has no practical advantage. If the semantic web does start to show potential, you can always revisit it and tap into that potential later.

It should be friendly to search engines, but going further is not going to provide good ROI.
Furthermore, what are you selling? Much of the purpose of being semantic, beyond being indexable, is easier third-party integration and data mining (building those ontologies). Are these desirable traits for your data sets? If you are selling advertising, making it easier for others to pull in your content is probably not going to help you.
It's all about where you want to spend your time.

You shouldn't do anything without a requirement. Otherwise, how do you know if you've succeeded? Do you have a requirement for being semantic? How much? How do you measure success? How do you measure return on investment?
Don't do anything just because of fads, unless keeping up with fads is a requirement.

Let me ask you a question - would you live in a house or buy a car that wasn't built according to a spec?
"So is this 4x4 lumber, upheld with a steel T-Beam?"
"Nope...we managed to rig the foundation on on PVC Piping...pretty cool, huh."

Related

Adapt website for blind people

As my title explains, how do I adapt a website for blind people? From what I've heard, a new Swedish law takes effect at the end of this year saying that websites should be adapted for blind people. It doesn't concern all websites, only those that contain information about authorities such as the police, hospitals, banks, pension/retirement benefits, and so on.
This is the absolute first time I've heard of this. I have no idea how to adapt the homepage for blind people for the company I work for. Any ideas?
Where do I begin, and how do I provide audio that reads the content of the site? Is there any tutorial on this matter?
This is quite a broad topic, but it can be covered relatively easily.
What you're talking about is web accessibility. The best way to achieve it is to make sure that your markup is valid and keeps to a clear structure, so that a screen reader or robot can read it with ease and relay it back to a blind person.
The W3C has a full initiative for Web Accessibility (the WAI), which works out how to verify that a page is accessible to those who cannot access the web in the usual way (mouse, keyboard, and monitor).
They publish a set of preliminary checks that are easy to follow and to act on for your website.
Read them here: http://www.w3.org/WAI/eval/preliminary
Ultimately, the best way to achieve full web accessibility is to ensure that the HTML markup is clean and carries every last piece of metadata required, that all images have alt attributes, and that the structure of your page is easy for robots to understand and follow. HTML5 helps with this massively, as you now have tags such as <article> and <aside> that delimit article areas and supporting detail areas (great for blogs and news stories).
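As a small illustrative sketch (the file name and text are placeholders), this is the kind of structure a screen reader can walk through in a sensible order:

    <article>
      <h1>Story headline</h1>
      <!-- alt text is what a screen reader speaks in place of the image -->
      <img src="entrance.jpg" alt="Entrance of the main police station">
      <p>The article text, in the order it should be read aloud.</p>
      <aside>
        <h2>Related links</h2>
        <ul>
          <li><a href="/contact">Contact the police</a></li>
        </ul>
      </aside>
    </article>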
Hope this helps and if you need more information, then the W3C and WAI are your best bets.
W3C: http://www.w3.org/
WAI: http://www.w3.org/WAI/
There are a number of different standards people use when designing for accessibility. You generally need to include things like alt attributes for images and links on your page, ensure your page can be viewed without JavaScript, offer high-contrast modes, etc.
Try looking at the WAI for some standards and how to get started. If accessibility is being enforced by your country's legislature, it will generally choose or create a standard that you should adhere to.
Generally, people who need to hear the content of a website will use a screen reader. It goes through the content and reads it to the user. There are a number of readers that you can try out for free.

When should I use HTML5 Microdata for SEO?

So I've been looking into HTML5 microdata, but I'm not sure if or when it is appropriate to use. I know that if it is used with ratings, a search for a website will pull up things like video ratings, article ratings, etc. But for microdata like People or Places, is it so useful that I should start implementing it in all my websites, big and small? How big an impact will this really have on my SEO if I start using microdata on everything?
Maybe I should use something like http://schema.org/ as my standard term dictionary; I think that is what Google suggests. Here's a link to the microdata draft spec, http://dev.w3.org/html5/md/, which will be helpful if you are unfamiliar with microdata.
Following the Schema.org - Why You're Behind if You're Not Using It... article on SEOmoz, I must say this question is not just about microdata and Google SERP positions. I think it has to be taken in a much wider context:
Some advantages:
Implementing microdata on a website DOES increase the CHANCE of Rich Snippets being displayed next to your site in Google search results. You can't say 'microdata = rich snippets', but you also can't say 'no microdata = no rich snippets' :)
Having rich snippets draws users' attention to that single search result, and it CAN result in more clicks, and thus more visitors on your page.
Some cons:
Some rich snippets resulting from microdata can let users find the information they're looking for directly on the search results page, without ever reaching your page. E.g., if a user is looking for a phone number and sees it in the rich snippet, he doesn't have to click through and visit your page.
You have to decide on your own if you can take that risk. From my own experience (and that article's comments as well), the risk is quite small, and if you can, you should implement microdata. Of course, 'if you can' should really mean 'if you can and it won't require the whole site to be rebuilt' :) If you have more serious things to do on your site, put them ahead of it in the queue. Today, microdata is only nice-to-have, not must-have.
And finally: I know my answer is not a simple yes or no, but that is because the question is not that kind of question. I hope it helps you make your own decision.
My answer would be "Always."
It's the emerging standard for categorizing all forms of information on the web.
Raven Tools (no affiliation) has a schema.org microdata generator that's a good place to start:
http://schema-creator.org/product.php
They have a couple of stock schema templates on that page (look in the left column).
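For reference, here is a minimal sketch of schema.org Product markup with a rating, of the sort such a generator emits (the product name and numbers are made up):

    <div itemscope itemtype="http://schema.org/Product">
      <span itemprop="name">Example Widget</span>
      <div itemprop="aggregateRating" itemscope
           itemtype="http://schema.org/AggregateRating">
        Rated <span itemprop="ratingValue">4.2</span>/5
        based on <span itemprop="reviewCount">38</span> reviews
      </div>
    </div>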

What is the state of the art in HTML content extraction?

There's a lot of scholarly work on HTML content extraction, e.g., Gupta & Kaiser (2005) Extracting Content from Accessible Web Pages, and some signs of interest here, e.g., one, two, and three, but I'm not really clear about how well the practice of the latter reflects the ideas of the former. What is the best practice?
Pointers to good (in particular, open source) implementations and good scholarly surveys of implementations would be the kind of thing I'm looking for.
Postscript the first: To be precise, the kind of survey I'm after would be a paper (published, unpublished, whatever) that discusses both criteria from the scholarly literature, and a number of existing implementations, and analyses how unsuccessful the implementations are from the viewpoint of the criteria. And, really, a post to a mailing list would work for me too.
Postscript the second: To be clear, after Peter Rowell's answer, which I have accepted, we can see that this question leads to two subquestions: (i) the solved problem of cleaning up non-conformant HTML, for which Beautiful Soup is the most recommended solution, and (ii) the unsolved problem of separating cruft (mostly site-added boilerplate and promotional material) from meat (the content that the kind of people who think the page might be interesting in fact find relevant). To address the state of the art, new answers need to address the cruft-from-meat problem explicitly.
Extraction can mean different things to different people. It's one thing to be able to deal with all of the mangled HTML out there, and Beautiful Soup is a clear winner in this department. But BS won't tell you what is cruft and what is meat.
Things look different (and ugly) when considering content extraction from the point of view of a computational linguist. When analyzing a page I'm interested only in the specific content of the page, minus all of the navigation/advertising/etc. cruft. And you can't begin to do the interesting stuff (co-occurrence analysis, phrase discovery, weighted attribute vector generation, etc.) until you have gotten rid of the cruft.
The first paper referenced by the OP indicates that this was what they were trying to achieve: analyze a site, determine the overall structure, then subtract that out and voila! you have just the meat. But they found it was harder than they thought. They were approaching the problem from an improved-accessibility angle, whereas I was an early search engine guy, but we both came to the same conclusion:
Separating cruft from meat is hard. And (to read between the lines of your question) even once the cruft is removed, without carefully applied semantic markup it is extremely difficult to determine 'author intent' of the article. Getting the meat out of a site like citeseer (cleanly & predictably laid out with a very high Signal-to-Noise Ratio) is 2 or 3 orders of magnitude easier than dealing with random web content.
BTW, if you're dealing with longer documents you might be particularly interested in work done by Marti Hearst (now a professor at UC Berkeley). Her PhD thesis and other papers on subtopic discovery in large documents gave me a lot of insight into doing something similar in smaller documents (which, surprisingly, can be more difficult to deal with). But you can only do this after you get rid of the cruft.
For the few who might be interested, here's some backstory (probably Off Topic, but I'm in that kind of mood tonight):
In the '80s and '90s our customers were mostly government agencies whose eyes were bigger than their budgets and whose dreams made Disneyland look drab. They were collecting everything they could get their hands on and then went looking for a silver bullet technology that would somehow (giant hand wave) extract the 'meaning' of the document. Right. They found us because we were this weird little company doing "content similarity searching" in 1986. We gave them a couple of demos (real, not faked) which freaked them out.
One of the things we already knew (and it took a long time for them to believe us) was that every collection is different and needs its own special scanner to deal with those differences. For example, if all you're doing is munching straight newspaper stories, life is pretty easy. The headline mostly tells you something interesting, and the story is written in pyramid style: the first paragraph or two has the meat of who/what/where/when, and the following paragraphs expand on that. Like I said, this is the easy stuff.
How about magazine articles? Oh God, don't get me started! The titles are almost always meaningless, and the structure varies from one magazine to the next, and even from one section of a magazine to the next. Pick up a copy of Wired and a copy of Atlantic Monthly. Look at a major article and try to write a meaningful one-paragraph summary of what the article is about. Now try to describe how a program would accomplish the same thing. Does the same set of rules apply across all articles? Even articles from the same magazine? No, they don't.
Sorry to sound like a curmudgeon on this, but this problem is genuinely hard.
Strangely enough, a big reason for Google being as successful as it is (from a search engine perspective) is that they place a lot of weight on the words in and surrounding a link from another site. That link text represents a sort of mini-summary, done by a human, of the site/page it links to: exactly what you want when you are searching. And it works across nearly all genres and layout styles of information. It's a positively brilliant insight and I wish I had had it myself. But it wouldn't have done my customers any good, because there were no links from last night's Moscow TV listings to some random teletype message they had captured, or to some badly OCR'd version of an Egyptian newspaper.
/mini-rant-and-trip-down-memory-lane
One word: boilerpipe.
For the news domain, on a representative corpus, we're now at 98% / 99% extraction accuracy (avg/median).
Demo: http://boilerpipe-web.appspot.com/
Code: http://code.google.com/p/boilerpipe/
Presentation: http://videolectures.net/wsdm2010_kohlschutter_bdu/
Dataset and slides: http://www.l3s.de/~kohlschuetter/boilerplate/
PhD thesis: http://www.kohlschutter.com/pdf/Dissertation-Kohlschuetter.pdf
Also quite language independent (today, I've learned it works for Nepali, too).
Disclaimer: I am the author of this work.
Have you seen boilerpipe? Found it mentioned in a similar question.
I have come across http://www.keyvan.net/2010/08/php-readability/, which says:
Last year I ported Arc90's Readability to use in the Five Filters project. It's been over a year now and Readability has improved a lot, thanks to Chris Dary and the rest of the team at Arc90.
As part of an update to the Full-Text RSS service I started porting a more recent version (1.6.2) to PHP, and the code is now online.
For anyone not familiar, Readability was created for use as a browser addon (a bookmarklet). With one click it transforms web pages for easy reading and strips away clutter. Apple recently incorporated it into Safari Reader.
It's also very handy for content extraction, which is why I wanted to port it to PHP in the first place.
There are a few open source tools available that do similar article extraction tasks.
https://github.com/jiminoc/goose, which was open-sourced by Gravity.com
It has info on the wiki as well as the source you can view. There are dozens of unit tests that show the text extracted from various articles.
I've worked with Peter Rowell down through the years on a wide variety of information retrieval projects, many of which involved very difficult text extraction from a diversity of markup sources.
Currently I'm focused on knowledge extraction from "firehose" sources such as Google, including their RSS pipes that vacuum up huge amounts of local, regional, national and international news articles. In many cases titles are rich and meaningful, but are only "hooks" used to draw traffic to a Web site where the actual article is a meaningless paragraph. This appears to be a sort of "spam in reverse" designed to boost traffic ratings.
To rank articles even by the simplest metric of article length, you have to be able to extract content from the markup. The exotic markup and scripting that dominates web content these days breaks most open source parsing packages, such as Beautiful Soup, when applied to the large volumes characteristic of Google and similar sources. I've found that, as a rule of thumb, 30% or more of mined articles break these packages. This has caused us to refocus on developing very low-level, intelligent, character-based parsers to separate the raw text from the markup and scripting. The more fine-grained your parsing (i.e. partitioning of content), the more intelligent (and hand-made) your tools must be. To make things even more interesting, you have a moving target, as web authoring continues to morph and change with the development of new scripting approaches, markup, and language extensions. This tends to favor service-based information delivery as opposed to "shrink-wrapped" applications.
Looking back over the years, there appear to have been very few scholarly papers written about the low-level mechanics (i.e. the "practice of the former" you refer to) of such extraction, probably because it's so domain and content specific.
Beautiful Soup is a robust HTML parser written in Python.
It gracefully handles HTML with bad markup and is also well-engineered as a Python library, supporting generators for iteration and search, dot-notation for child access (e.g., access <foo><bar/></foo> using doc.foo.bar), and seamless Unicode handling.
If you are out to extract content from pages that heavily utilize JavaScript, Selenium Remote Control can do the job. It works for more than just testing. The main downside is that you'll end up using a lot more resources. The upside is that you'll get a much more accurate data feed from rich pages/apps.

An online resource to back an argument for cleaner design

I am working in the web dept of a large legal firm, and among other things am responsible for maintaining a professional look for all our email communications (over 600 pieces per year).
Right now I am in a rut. Using a lot of pressure and manipulation, a person in management got to "art direct" a couple of HTML emails, working directly with a member of my team, and I only caught the design at the last moment.
Her "designs" introduced background images behind the text of the emails along with additional, high-contrast imagery sitting behind the title in the header.
I ended up mandating a design change; however, she is very insistent on "her" design and is questioning all my reasoning for simplifying the look.
Basically she is questioning my expertise and asking for "proof" that her design is not user friendly.
I have the meeting in a couple of hours and was wondering if anyone here could point me to resources that discuss these specific items:
Argument against background images positioned behind the copy of an email. The images are at about 10% opacity, which makes them incomprehensible, and makes the design busy and ugly (my perspective).
Argument against high-contrast images behind titles.
Now, I am aware of the technical implications of including images in HTML emails (Outlook 2007 not loading background images, etc.). This is not necessarily a technical issue, but a serious aesthetic/usability step in the wrong direction.
Thank you!
Facts:
Common sense in communication dictates that anything that distracts the reader from the message (the content of the email) is not a good idea.
Are there images in the background of your letterheads and on all your invoices? Why, or why not?
What do background images contribute to the value and perception of the message and the image of your corporation? Are the impacts they have clearly known?
Go take a look at email newsletter sites. They are covered in guides and tutorials on how to email market effectively.
www.icontact.com
www.constantcontact.com
and so on..
Opinions:
Emails are not meant to be flyers. They are meant to communicate clearly, simply, and concisely while projecting a professional image. Making one look like a cartoon, or a flower shop, or whatever else, probably doesn't add to that.
The issue you may run into is she is taking it personally because she is attached to the design for personal reasons and not designing for the needs of the business. So an attack on the design is an attack on her. She is too involved with her ego of looking good and avoiding looking bad or wanting some kind of glory.
Simply put, she should be the one qualifying to you why it IS good design, not the other way around. If she doesn't know, why is she asking you to prove it to her? How would she understand?
There's a book called Dealing with Difficult People that may be of use to you.
Of course, if common sense was really common we wouldn't have to point it out as being common sense.
Update us on what happens!
http://www.asciiribbon.org/.
They have a lot of points on why not to use HTML, etc., in emails:
Quite a few e-mail clients do not support HTML e-mail.
Other clients have a very poor or broken HTML rendering, causing the messages to be unreadable as well.
Sending HTML e-mails causes great overhead, and is very inefficient.
People that are limited to a text-only terminal, people with disabilities, blind people, basically anyone that cannot use a graphical interface easily or at all, are likely unable to read your mail.
(Extract from link)
In addition to what others have said, consider legal accessibility requirements. I found one example in the US Department of Education's accessibility requirements; I'm sure that by searching for this, one can find more examples.
Although it doesn't really apply, you may be able to reference the Americans With Disabilities Act, assuming you're in America.
Also, since you're sending HTML-formatted mail, maybe the Web Content Accessibility Guidelines 1.0 are of interest. For example, Guideline 2.2 of this document states: "Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen."
A professional email should only be graphic-intensive if those graphics emulate the look of the company stationery, emulate the look of the company site, or are interactive. A common example where this makes reasonable sense is billing emails from Amazon.com. Note, however, that the content itself does not actually have any graphics; only the frame above, below, and to the sides of the content uses graphics. Similar things show up in banking and PayPal emails. This sort of design makes it easier for people to associate the email with the site and makes for nicer printed records that match the online version of the same records.
For standard communication, I'd just go with a header and/or footer graphic.
What I did was escalate to a person who has both the position and understanding of the importance of the issue. Also, presented a version with a cleaner design. Made sure to address all objectives, and did not imply that design decisions are open to discussion.
I HATE email with designs and pictures in it. It is unprofessional, IMO.
The desirability/niceness of designs and art is subjective, so how can you be sure all the people who receive them will appreciate them?
For me they are a big turnoff.
Also consider looking up references in human factors engineering texts that report readability studies. I bet a quick library search in this area would yield plenty of scientific data showing that her way causes reading errors, eye strain, and/or slower reading speed.

Balancing HTML/CSS Between Designers and Engineers [closed]

I have a development process question.
Background: I work for a modest sized website where, historically, the designers created mockups/screenshots of what they wanted pages and components to look like, and the engineering team (myself included) turned them into html/css.
This works relatively well from a code-cleanliness perspective and helps significantly when it comes to writing JavaScript. It fails, however, at maintaining consistency from one page/component to another. On one page a header font might be 12px and on another 11px, largely because it's a complicated site with lots to keep track of (and we've cycled through four designers). We have only a few truly universal styles, and they only get used when an engineer recognizes the style, not when the designer calls for it.
Our most recent designer is a relatively capable HTML/CSS coder. We thought we might have him create mockups in HTML/CSS and hand us off the code for quick integration. Our hope was that the designer would be better at being consistent in his style and that it might save us some development time up front.
What we've discovered is that our designer is not quite as good at CSS as we had hoped, and that his code is often slightly bloated and incompatible with what we need to do. Also, his style of coding is fundamentally different from the rest of the engineering team's and isn't jibing terribly well with our established coding practices.
Question: How do you do the hand off from design to engineering? I know I've heard of companies that let their design team do all of the template coding, but I'm curious how that works. Does the design team actually incorporate members of the engineering team in those scenarios?
As we're structured right now, there's not a chance in hell we'd let our designer write the final templates and check them into SVN, even if he was a proficient HTML wiz. There's too much in the templates that requires knowledge of our codebase and of potential performance issues.
How do we get this process working? Is it a pipe dream?
Speaking personally, since I come from a web-dev, small-shop background, I typically do the CSS work and the slicing of the PSD myself. But then I like to think I'm well rounded like that :)
Generally, the best experience I've had of this was at a largish company with very defined groups of developers: the design team, who produced the graphics; the apps team, who did the vast bulk of server-side coding and app architecture; and the UE (user experience) team, who sewed the two together, producing XSLT/JSP/HTML markup in general, plus the CSS and JS for the client side.
There was a very structured process of:
1. user story ->
2. "wireframe" (documents) ->
3. design (PSD) ->
4. "flat" markup (DHTML only) ->
5. integrated markup (with web app)
Where "wireframe" would be close to a spec for UE, produced with UML or maybe visio. I have heard the term applied to step 4 which I think fits better, but this is what it was referred to as there.
Whilst this works well for the question at hand, I found it had other problems built in. It was very hard to work across teams, and because of the timescales the design team rarely involved UE in decision making (which put UE in some awkward positions), the apps team and design could be working at cross-purposes, and there wasn't a lot of scope to learn in these boxed in teams.
My suspicion (and, I think, ideal scenario) is that the developers on a project would each be capable of working with, say, 80% of the technology involved (be it CSS, SQL, whatever) to spread decision control and risk, but each domain would have one (or more) "czar" to act as authority and oversight within that domain. Actually producing those designs is to my mind a strange and magical skill in its own right, so I see no real overlap with developers there, but I think a pool of artists plus project teams of cross-skilled programmers would be very powerful.
Apols for the long-windedness. I could go on at considerable length on this, I've spent a lot of time thinking about it.
By the way, it seems like you could do with some serious web devs there (no offence). Having problems "maintaining consistency from one page/component to another" screams a failure to grok CSS.
In my experience, unless you limit your design severely, you need real coding skills to build a web page with interaction. Let me elaborate some. If you have built your pages very modularly (think of GUI-toolkit widgets), you can give your designer a handful of them, and he can build the basic structure like playing with toy blocks, adding a nice finishing coat of paint.
Often, modularization alone is not enough for the desired interactivity. Some blocks need their interactions designed carefully as well (animation, fluid layout to accommodate indeterminate content, customized behaviour via extra JavaScript, caching to eliminate redundant requests and speed things up), or the ability to accommodate minor presentation variations. That brings us to the realm of programming, where you calculate dimensions, enable/disable parts, keep track of time, preload stuff, invalidate preloaded stuff, and so on.
Enter HTML/CSS/JS. They are more a product of evolution than of intelligent design. You cannot always declare your intent and be done with it. You need attributes declared in your HTML, stupid hacks in CSS combined with extra markup, ridiculous amounts of JS to smooth rough edges, and duplicate rendering code on the server side. These tools were never meant to build applications.
I don't think one can achieve a complete separation of design and application development with the tools at hand. The effort required is too high to justify the marginal returns.
If you end up heavily modifying the designer's code (which is often the case if he is not also one of the developers), there is no point in making him suffer trying to express his intent with the wrong tools, nor in developers breaking the design while modifying it and then having to fix it. I won't even mention user experience.
In my opinion, no small internet business that wants to ship a product in a reasonable time should spend its scarce resources going against the grain. Let people do what they do best, in collaboration where necessary. If you can't divide the design process at an arbitrary satisfying point, you may as well not bother to separate it at all. Pipelining works well for machines whose goal is determined to the last detail and unchanging; I can't say the same for humans building and designing things, be it software or hardware.
Where I work it's basically the same. Designers create mock-ups and specifications of the UI design, right down to the pixel, and the developer creates HTML/CSS/code out of that.
The reason I say code, is that we use UI frameworks (namely, GWT), and as much as we would want to, code and CSS styles are still very coupled. I do not believe there exists one UI framework in which code can be completely decoupled from the UI design.
So I guess for now it's still entirely the developers job. Though I would like to hear about organization which are able to hand off some of the work to designers.
The problem with handoffs is that the idea and implementation of one group are not going to match the abilities and implementation of the next group. By their nature, handoffs are going to be fraught with problems. So what is an alternative to the ubiquitous handoff scenario? I think that integrating user experience (UX) work into an agile and iterative development process makes sure that what is really important occurs:
The customer's needs are researched then validated.
Early and continuous collaborating between usability experts, designers and programmers.
The actual process works by having everyone collaborate with the customer up front on their needs. Then the design is researched and prototyped in the iteration before coding begins. Thus, while coding is occurring, the next set of designs is being worked on. Programmers should be looking ahead at what the designers are doing, and the designers should look back to make sure the programmers are on target. Once a design is coded, it goes to the customer for acceptance; by that time the programmers are working on the next set of interfaces.
Jeff Patton did a podcast on Agile UX recently that goes into some of the implementation concepts and common problems.
There is a whole group on Yahoo dedicated to agile usability (which mostly involves interface design).
For the CSS inconsistencies, I'd just suggest making a style guide and then trying to stick to it. Have someone be in charge of "design consistency"; that way they can spank anyone inventing yet another way of displaying things to the user.
At my company, my ideal workflow doesn't happen very often, but sometimes it does, and I love it when it does: the engineers write the webapp and output semantic HTML with only minimal CSS; then the designers do the CSS.
I like it when it goes this way, because:
It is easy for me to write semantic HTML.
I am not very good at coming up with a good design for my semantic HTML.
It is entirely possible to do the CSS without asking me questions; the markup just speaks for itself (see the sketch below).
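A minimal sketch of what I mean (the names here are illustrative): markup like this can be styled by a designer without a single question to the author:

    <div id="products">
      <h2>Products</h2>
      <ul class="product-list">
        <li><a href="/widgets">Widgets</a></li>
        <li><a href="/gadgets">Gadgets</a></li>
      </ul>
    </div>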
However, this rarely works. Because:
The CSS has to be modified whenever the HTML changes, and the designers' time is scarce.
Moreover, our designers don't enjoy styling my markup, and fighting for their time is not pleasant.
Our designers often want to change the markup, mostly because they believe some layouts cannot be done without changing it, or because they believe that it's the only way to make IE obey. They are not technically able to change the markup, though.
I have my doubts about many of their claims. When they cite IE incompatibility, I strongly doubt they really know IE that well. There are neat CSS hacks to make IE obey without resorting to
<br clear="all">
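For example (a sketch; the class names are made up), the clearing can live in the stylesheet instead of in presentational markup:

    <style>
      /* the element following the floated columns clears them itself */
      .after-columns { clear: both; }
      /* or: make the container enclose its floated children */
      .columns { overflow: hidden; }
    </style>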
So, sadly, usually this ideal is a little off for me.
A separate designer/developer workflow is the best way to go. Designing a website and coding it are altogether different jobs; there are issues of cross-browser compatibility, CSS, and XHTML, apart from coding standards, to deal with.
You could also opt to outsource your HTML to a specialized PSD-to-HTML conversion shop like us (ButterflyHTML). It may work out cost-effective in the long run.