Achecker Link text may not be meaningful - html

I'm trying to check my website with http://achecker.ca,
and in the potential problems section I have a lot of problems...
Specifically, I mean this problem:
Success Criteria 2.4.4 Link Purpose (In Context) (A)
Check 19: Link text may not be meaningful.
Every link on my main page is listed here as a problem...
What could be wrong with links like these:
PARTY
News title
<img src="images/ico/ico10.gif" alt="games">

The testing tool http://achecker.ca/ shows this potential problem for all links, even for this one (which is as meaningful as it can get):
<a href="http://stackoverflow.com/questions/38933605/achecker-link-text-may-not-be-meaningful" rel="external">
Stack Overflow question: <cite>Achecker Link text may not be meaningful</cite>
</a>
It’s documented at http://achecker.ca/checker/suggestion.php?id=19, where it says under "Short Description":
All a (anchor) elements that contains [sic] any text will generate this error.
This page also gives hints on how to determine or check whether your links pass or fail.
The relevant WCAG 2.0 guideline is 2.4.4 Link Purpose (In Context).

The potential problems listed by AChecker are typically not problems at all, but a human has to make that confirmation.
Regarding the meaningfulness of your links, think about someone giving you the text of a link on its own and asking you to define its meaning. Is the word "Party" meaningful? It can be interpreted in multiple ways.
When navigating with a screen reader (if one is blind), listening to links in the sequence they appear on the page, nearby links can add meaning to single words like this. Do the links before it give the word "Party" the meaning of a festive occasion, or perhaps a political party, or even an interested party? "News title" is probably meaningful enough. "Games," the alt text for the image, could also be interpreted in multiple ways. Is there context (surrounding links) that gives more specific meaning to the word "games"? Olympic games, video games, playing games, and so on. If the context does not add meaning, then the link text itself needs to be adjusted to specify which meaning of the word is being used.
All links will be listed as "potential" problems by AChecker, requiring a human to make a decision on whether the text effectively describes the link's destination or function.
Potential problems are those the checker cannot identify with any certainty. Anywhere meaning is involved, AChecker will report potential problems. Known, Likely, and Potential problems are described on the first page of the Handbook, linked from the top right corner of AChecker.
http://achecker.ca/documentation/index.php?p=checker/index.php
Quote from the handbook:
AChecker identifies 3 types of problems:
Known problems: These are problems that have been identified with certainty as accessibility barriers. You must modify your page to fix these problems;
Likely problems: These are problems that have been identified as probable barriers, but require a human to make a decision. You will likely need to modify your page to fix these problems;
Potential problems: These are problems that AChecker cannot identify, that require a human decision. You may have to modify your page for these problems, but in many cases you will just need to confirm that the problem described is not present.

The key here is that they might not be meaningful.
I think they are being flagged because they only contain one or two words, and so the tool is asking you to verify that the link text is good.
The kind of thing that should actually be considered as a failure of this rule are links that simply read "click here" or "more info", because they aren't actually explaining where the link goes. Other things to watch out for are multiple links with the same text but different destinations, and links with no text at all.

Related

Simple tooltip - Title Attribute?

When I want a message to show when a user mouses over an object, lately I just use the title attribute on my HTML tags, since it's simple and automatically stays on screen.
Question: Is the title attribute a bad thing to rely on for a tooltip?
Ignoring the fact that you can't customize it, I'm curious about its functionality compared to a custom-made tooltip (such as how the standard user interacts with it). A specific web comic I read, for example, uses the title attribute to add a witty comment or factoid when you hover over it. Yet not many people seem to know about it.
As such, it seems a title might be good for a comment, or even for naming the author of a picture, but is it good for a true, simple tooltip?
Considering that for a 'real' tooltip you usually need one or two extra elements, CSS (and, depending on how you set it up, possibly some inline style for placement), and possibly even JavaScript, is the title attribute bad to use, since (again) it cannot be customized, is often a small off-topic detail about the element, and only appears after a set delay?
Note: If it helps (food for thought), the current situation that brought this question on is that I like when a site has something like [?] for you to hover over to find more details without shoving them into the page, thus keeping it simple.
Also, I learned HTML from w3schools, and they never mentioned the title attribute, so I'm not really sure what it is intended for or how it should be used. (And yes, mentioning w3schools was a (bad) attempt at getting sympathy.)
(And I find this question kind of weird to ask considering SO uses them quite a bit, but feel free to assume I know nothing about it, as... well... I really don't.)
The title attribute should not be used.
1. Every browser does its own thing with the title, even though it may look the same.
2. People who only use the keyboard cannot get to the information in the title.
3. People accessing the site from a mobile device cannot get the information.
4. Some, but not all, assistive technology can get the information in the title:
4.1. some allows it to be read after enabling it, which not many people (users) know about;
4.2. other technology simply ignores the link text and reads the title only.
Example of 4.2, a "Delete your account" link whose title is "Are you sure?":
<a href="..." title="Are you sure?">Delete your account</a>
This will read:
Are you sure? Link
Further reading: PG: Title attribute
First of all, hats off to your question. Good thinking. I guess we people (I'm speaking about amateur coders like me) needn't develop big sites, or rather lack that expertise. We simply need to keep getting our things done in an optimized manner. Similarly, I have also encountered almost every script using title for tooltips. I guess it's the simple way to tackle it. Moreover, as long as the tooltip is attractive, isn't slow, and caters to our needs, it's all good.
The title attribute is simple, and simplistic. It is not reliable. No tooltip mechanism really is, but the tooltips generated by title attributes have rather poor usability: tiny font size, problems with line control, timed disappearance, and no way (for normal users) to make them stay put so that you can actually read them, even if you are a slow reader. Besides, there is normally no hint to the user that a tooltip is available.

Is there a way to make search bots ignore certain text? [closed]

I have my blog (you can see it if you want, from my profile), and it's fresh, and so are the Google robots' parsing results.
The results were alarming to me. Apparently the two most common words on my site are "rss" and "feed", because I use text like "Comments RSS" and "Post Feed" for links. These two words will be present in every post, while other words will be rarer.
Is there a way to make these links disappear from Google's parsing? I don't want technical links getting indexed. I only want content, titles, descriptions to get indexed. I am looking for something other than replacing this text with images.
I found some old discussions on Google, back from 2007 (I think in 3 years many things could have changed, hopefully this too)
This question is not about robots.txt and how to make Google ignore pages. It is about making it ignore small parts of the page, or transforming the parts in such a way that it will be seen by humans and invisible to robots.
There is a simple way to tell Google not to index parts of your documents: use googleon and googleoff:
<p>This is normal (X)HTML content that will be indexed by Google.</p>
<!--googleoff: index-->
<p>This (X)HTML content will NOT be indexed by Google.</p>
<!--googleon: index-->
In this example, the second paragraph will not be indexed by Google. Notice the “index” parameter, which may be set to any of the following:
index — content surrounded by “googleoff: index” will not be indexed by Google
anchor — anchor text for any links within a “googleoff: anchor” area will not be associated with the target page
snippet — content surrounded by “googleoff: snippet” will not be used to create snippets for search results
all — content surrounded by “googleoff: all” is treated with all of the above
(source)
Google does not use the content of HTML tags that have data-nosnippet when building search result snippets:
<p>
This text can be included in a snippet
<span data-nosnippet>and this part would not be shown</span>.
</p>
Source: Special tags that Google understands - Inline directives
I work on a site with top-3 google ranking for thousands of school names in the US, and we do a lot of work to protect our SEO. There are 3 main things you could do (which are all probably a waste of time, keep reading):
Move the stuff you want to downplay to the bottom of your HTML and use CSS to place it where you want readers to see it. This won't hide it from crawlers, but they'll value it lower.
Replace those links with images (you say you don't want to do that, but don't explain why not)
Serve a different page to crawlers, with those links stripped. There's nothing black hat about this, as long as the content is fundamentally the same as a browser sees. Search engines will ding you if you serve up a page that's significantly different from what users see, but if you stripped RSS links from the version of the page crawlers index, you would not have a problem.
That said, crawlers are smart, and you're not the only site filled with permalink and rss links. They care about context, and look for terms and phrases in your headings and body text. They know how to determine that your blog is about technology and not RSS. I highly doubt those links have any negative effect on your SEO. What problem are you actually trying to solve?
If you want to build SEO, figure out what value you provide to readers and write about that. Say interesting things that will lead others to link to your blog, and crawlers will understand that you're an information source that people value. Think more about what your readers see and understand, and less about what you think a crawler sees.
Firstly, think about the issue. If Google thinks "RSS" is the main keyword, that may suggest the rest of your content is a bit shallow and needs expanding. Perhaps this should be the focus of your attention. If the rest of your content is rich, I wouldn't worry about the issue, as a search engine should know what the page is about from the title and headings. Just make sure RSS etc. is not in a heading or a bold or strong tag.
Secondly, as you rightly mention, you probably don't want to use images, as they are not accessible to screen readers without alt text, and if they have alt text or supporting text then you add the keyword back in. However, aria-live may help you get around this issue, but I'm not an expert on accessibility.
Options:
Use JavaScript to write that bit of content (maybe Ajax it in after load). Search engines like Google can execute JavaScript, but I would guess they won't value any JS-written content very highly.
Re-word the content or remove duplicates of it; one prominent RSS feed link may be better than several smaller ones dotted around the page.
Use the CSS content property with the :before or :after pseudo-element to add your content. I'm not sure whether bots will index words in CSS content values and relate their value to each page, but it seems unlikely. Putting words like RSS in the CSS basically says it's a style thing, not an HTML thing, so even if engines do index it, they won't add much (if any) value to it. For example, the HTML and CSS could be:
.add-text:after { content:'View my RSS feed'; }
Note that the above will not work in older versions of IE, so you may need IE conditional comments if you care about that.
"googleon" and "googleoff" are only supported by the Google Search Appliance (when you host your own search results, usually for your own internal website).
They are not supported by Google's web search at all. So please refrain from doing that, and I don't think it should be marked as the correct answer, as this might create ambiguity.
Now, to get Google to exclude part of a page, you will need to place that content in a separate file, such as excluded.html, and use an iframe to display that content in the host page.
The iframe tag grabs content from another file and inserts it into the host page. I think there is no other available method so far.
The only control that you have over indexing robots is the robots.txt file. See this documentation, linked by Google on their page explaining the usage of the file.
You can basically prohibit certain links and URLs, but not necessarily keywords.
Other than black-hat server-side methods, there is nothing you can do. You may want to look at why you have those words so often and remove some of them from the site.
It used to be that you could use JS to "hide" things from googlebot, but you can't now that it parses JS. ( http://www.webmasterworld.com/google/4159807.htm )
Google's crawlers are smart, but the people who program them are smarter. Humans always see what makes sense on a page; they will spend time on a blog that has good content that is rare and unique.
It is all about common sense: how people visit your blog and how much time they spend on it. Google measures search results in the same way. Your page ranking also increases as daily visits increase and as the site content gets better and is updated every day.
This page has the word "Answer" repeated multiple times. That doesn't mean it will not get indexed. What matters is how useful it is to everyone.
I hope this gives you some idea.
You would have to manually detect the Google bot from the request's user agent and feed it slightly different content than you normally serve to your users.
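A minimal sketch of that idea in Python with Flask (the route, the template name, and the show_feed_links flag are all made up for illustration); note that, as an earlier answer points out, serving crawlers a page that differs too much from what users see can get you penalized:
# Hypothetical Flask sketch: hide the purely technical "RSS"/"Feed"
# links when the visitor's user agent looks like Googlebot.
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route("/post/<slug>")
def show_post(slug):
    user_agent = request.headers.get("User-Agent", "")
    is_googlebot = "Googlebot" in user_agent
    # Same template for everyone; only the feed links are toggled.
    return render_template("post.html", slug=slug,
                           show_feed_links=not is_googlebot)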

"Click here to read this article" "Read More" Why these are bad for screen readers?

I use "read more" at the end of paragraph just for reminder for user same like P.T.O
Why it's problematic?
You have to understand that many screen reader users don't wait for the whole page to be read to them. They use keyboard shortcuts to navigate around the page. JAWS (arguably the most common of screen readers) has several very useful shortcut key combinations. One in particular pulls up a list of all of the hyperlinks on any given page. This way the user doesn't need to wait for the reader to get to the section of the page they're interested in before finding out what kind of links the page contains. They can just use the shortcut and get a list of links all at once on demand.
It's when using the list-of-links shortcut that your "Read More" links are completely useless. When viewing a huge list of all the links on the page, the user is simply read the text inside the tags. There is no context. The user has no idea what precedes or follows the "Read More" text. All they know is there's a link for them to "read more" about something. This gets especially confusing when there is more than one link like this on the page. The user also does not generally listen to the URL, as that's pretty much worthless given all the insane query strings, and the computerized voice struggles with reading URLs.
Does that help answer your question?
As a screen reader user and an occasional web designer (not to mention a web accessibility consultant), sometimes ambiguous links are unavoidable. While it's not always convenient for a screen reader user to figure out the context of a particular ambiguous link, it's not that much of a burden to figure out one or two. The problems come when pages are loaded with them.
When making this decision, you really need to consider if the extra wording in the link is too high a price to pay for the convenience to a screen reader user. Usually, with a little thought, you can come up with a link text that is better for everyone. However, just keep in mind that if you do have to use ambiguous link text, you won't "break" accessibility, just make it slightly less convenient for some. On the spectrum of "must haves" to "nice to haves", this is well within the latter half, unless ambiguous links become the rule, rather than the exception.
This blog entry discusses the drawbacks of 'click here' links. Another drawback of 'click here' links is that they do not help identify keywords that might be associated with their target... think SEO.

Does the CSS property "text-transform" affect SEO results?

I am building a site with a ton of 1999 style capitalization of navigation and headings. I have been simply adding in the text content as it appears (capitalized), but the other designer on the project insists on using lower case text in his HTML and capitalizing it with an applied style:
.tedious {text-transform:uppercase;}
I understand the argument for separating style from content, but in this case it really doesn't matter, because I personally will not maintain the site, nor do I imagine that the client will ever need to un-capitalize all of this text. The questions are: 1. will search engines pay any attention at all to the capitalization of text in a document, and 2. would a crawler go so far as to read my style sheet and look for such things (methinks not)? I know that BOLD, STRONG, EM, etc. have a (diminishing) effect on SEO, so I can imagine a scenario where CAPS would too, but I have never heard of anyone actually claiming, let alone confirming, this.
Digging this site the last few months. First post.
It will only affect what is shown in the search results; your colleague's work will show as lower case in the results.
You mentioned separation of style from content, but I'm not convinced that text-transform is really a style; it's a change of content. I'm sure some people would argue the other side, though.
If I were a search engine, I wouldn't care about casing. I would care about the content.
From a human readability standpoint, upper case isn't as easy to read.
Well, I was taught at school that all proper nouns (eg names and names of places) should begin with capital letters.
How would Google know whether I was talking about reading (as in a book) or Reading (as in the town of Reading, Berkshire), without taking into account the capitalisation? I would argue that capitalisation is definitely a semantic indicator rather than simply a case of aesthetics, and is therefore one factor that could be used for SEO.
As noted elsewhere, Google clearly does have knowledge of the CSS being used to render a page (eg Google can spot black-hat techniques such as white text on a white background).
So if capitalisation (or lack of) is a relevant SEO factor, can the CSS text-transform (or lack of) value also be an SEO factor?
Yes - because Google considers page speed to be an important factor. Text that doesn't need to be transformed by CSS will display faster.
Answer from Google:
I don't think we'd do anything special with all-caps headings, but it feels like the kind of thing you'd want to do in CSS instead of in the content, since it's more about styling.
https://mobile.twitter.com/JohnMu/status/1438159561391751170?s=19

Programmatically detecting "most important content" on a page

What work, if any, has been done to automatically determine the most important data within an html document? As an example, think of your standard news/blog/magazine-style website, containing navigation (with submenu's possibly), ads, comments, and the prize - our article/blog/news-body.
How would you determine what information on a news/blog/magazine is the primary data in an automated fashion?
Note: Ideally, the method would work with both well-formed markup and terrible markup, whether somebody uses paragraph tags to make paragraphs or a series of breaks.
Readability does a decent job of exactly this.
It's open source and posted on Google Code.
UPDATE: I see (via HN) that someone has used Readability to mangle RSS feeds into a more useful format, automagically.
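If you would rather call this kind of extractor from Python, the readability-lxml package is a port of the same idea; a minimal sketch (the URL is just a placeholder):
# Sketch using the readability-lxml package (pip install readability-lxml).
import requests
from readability import Document

html = requests.get("https://example.com/some-article").text
doc = Document(html)
print(doc.short_title())  # best-guess article title
print(doc.summary())      # HTML fragment containing the main article body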
think of your standard news/blog/magazine-style website, containing navigation (with submenu's possibly), ads, comments, and the prize - our article/blog/news-body.
How would you determine what information on a news/blog/magazine is the primary data in an automated fashion?
I would probably try something like this:
open URL
read in all links to same website from that page
follow all links and build a DOM tree for each URL (HTML file)
this should help you identify redundant content (included templates and such)
compare DOM trees for all documents on same site (tree walking)
strip all redundant nodes (i.e. repeated, navigational markup, ads and such things)
try to identify similar nodes and strip if possible
find largest unique text blocks that are not to be found in other DOMs on that website (i.e. unique content)
add as candidate for further processing
This approach seems pretty promising because it would be fairly simple to do, but it still has good potential to be adaptive, even to complex Web 2.0 pages that make excessive use of templates, because it would identify similar HTML nodes between all pages on the same website.
This could probably be further improved by simply using a scoring system to keep track of DOM nodes that were previously identified to contain unique content, so that these nodes are prioritized for other pages.
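A rough Python sketch of that approach, using requests and BeautifulSoup (the libraries, the URL list, the 40-character threshold, and the block granularity are all assumptions, not part of the answer):
# Fetch several pages from the same site, split each into text blocks,
# and keep the blocks that appear on only one page (the unique content).
from collections import Counter
import requests
from bs4 import BeautifulSoup

def text_blocks(url):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    # One "block" per block-level element with a reasonable amount of text.
    return {el.get_text(" ", strip=True)
            for el in soup.find_all(["p", "div", "td", "li"])
            if len(el.get_text(strip=True)) > 40}

urls = ["https://example.com/post-1",
        "https://example.com/post-2",
        "https://example.com/post-3"]
pages = {url: text_blocks(url) for url in urls}

# Blocks that occur on more than one page are treated as template/navigation.
counts = Counter(block for blocks in pages.values() for block in blocks)
for url, blocks in pages.items():
    unique = [b for b in blocks if counts[b] == 1]
    candidate = max(unique, key=len, default="")
    print(url, "->", candidate[:80])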
Sometimes there's a CSS media type defined as 'print'. Its intended use is for 'Click here to print this page' links. Usually people use it to strip a lot of the fluff and leave only the meat of the information.
http://www.w3.org/TR/CSS2/media.html
I would try to read this style, and then scrape whatever is left visible.
You can use support vector machines to do text classification. One idea is to break pages into different sections (say, treat each structural element like a div as a document), gather some properties of each, and convert them to a vector. (As other people suggested, these could be the number of words, number of links, number of images; the more the better.)
First, start with a large set of documents (100-1000) for which you have already chosen which part is the main part. Then use this set to train your SVM.
For each new document, you just need to convert it to a vector and pass it to the SVM.
This vector model is actually quite useful in text classification, and you do not necessarily need to use an SVM. You can use a simpler Bayesian model as well.
And if you are interested, you can find more details in Introduction to Information Retrieval. (Freely available online)
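A minimal sketch of that setup with scikit-learn, assuming you have already labelled some sections by hand; the tiny training set and the plain TF-IDF features are placeholders for the real properties (word counts, link counts, image counts) you would gather:
# Classify page sections as "main" content vs. "boilerplate".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

sections = [
    "Home | About | Contact | RSS feed",
    "Posted in News. Tags: party, games. 3 comments.",
    "The city council voted on Tuesday to approve the new budget, "
    "which includes funding for two additional schools.",
]
labels = ["boilerplate", "boilerplate", "main"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(sections, labels)  # in practice: hundreds of labelled sections

print(model.predict(["Subscribe to our RSS feed"]))
print(model.predict(["The mayor announced a new plan for public transport."]))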
I think the most straightforward way would be to look for the largest block of text without markup. Then, once it's found, figure out the bounds of it and extract it. You'd probably want to exclude certain tags from "not markup" like links and images, depending on what you're targeting. If this will have an interface, maybe include a checkbox list of tags to exclude from the search.
You might also look for the lowest level in the DOM tree and figure out which of those elements is the largest, but that wouldn't work well on poorly written pages, as the DOM tree is often broken on such pages. If you end up using this, I'd come up with some way to see if the browser has entered quirks mode before trying it.
You might also try using several of these checks, then coming up with a metric for deciding which is best. For example, still try to use my second option above, but give its result a lower "rating" if the browser would enter quirks mode normally. Going with this would obviously impact performance.
I think a very effective algorithm for this might be, "Which DIV has the most text in it that contains few links?"
Seldom do ads have more than two or three sentences of text. Look at the right side of this page, for example.
The content area is almost always the area with the greatest width on the page.
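A rough Python sketch of that heuristic with BeautifulSoup (the scoring weight and the sample HTML are made up):
# Score each div by its text length minus a penalty for its link text.
from bs4 import BeautifulSoup

def main_div(html):
    soup = BeautifulSoup(html, "html.parser")
    best, best_score = None, 0
    for div in soup.find_all("div"):
        text_len = len(div.get_text(" ", strip=True))
        link_len = sum(len(a.get_text(strip=True)) for a in div.find_all("a"))
        score = text_len - 2 * link_len  # link-heavy blocks score poorly
        if score > best_score:
            best, best_score = div, score
    return best

html = ("<div><a href='/'>Home</a> <a href='/rss'>RSS feed</a></div>"
        "<div><p>A long article paragraph with actual sentences in it, "
        "which is the content we want to extract.</p></div>")
print(main_div(html).get_text(" ", strip=True))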
I would probably start with Title and anything else in a Head tag, then filter down through heading tags in order (ie h1, h2, h3, etc.)... beyond that, I guess I would go in order, from top to bottom. Depending on how it's styled, it may be a safe bet to assume a page title would have an ID or a unique class.
I would look for sentences with punctuation. Menus, headers, footers, etc. usually contain separate words, but not sentences containing commas and ending in a period or equivalent punctuation.
You could look for the first and last elements containing sentences with punctuation and take everything in between. Headers are a special case, since they usually don't have punctuation either, but you can typically recognize them as Hn elements immediately before sentences.
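A small Python sketch of that idea (the element list and the punctuation pattern are assumptions):
# Keep only elements whose text reads like sentences, skipping menu-like word lists.
import re
from bs4 import BeautifulSoup

SENTENCE = re.compile(r"[,.!?](\s|$)")

def sentence_blocks(html):
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(" ", strip=True)
            for el in soup.find_all(["p", "li", "td"])
            if SENTENCE.search(el.get_text(" ", strip=True))]

print(sentence_blocks("<ul><li>Home</li><li>About</li></ul>"
                      "<p>This paragraph, unlike the menu, ends in a period.</p>"))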
While this is obviously not the answer, I would assume that the important content is located near the center of the styled page and usually consists of several blocks interrupted by headlines and such. The structure itself may be a give-away in the markup, too.
A diff between articles / posts / threads would be a good filter to find out what content distinguishes a particular page (obviously this would have to be augmented to filter out random crap like ads, "quote of the day"s or banners). The structure of the content may be very similar for multiple pages, so don't rely on structural differences too much.
Instapaper does a good job with this. You might want to check Marco Arment's blog for hints about how he did it.
Today most news/blog websites are using a blogging platform.
So I would create a set of rules by which I would search for content.
For example, two of the most popular blogging platforms are WordPress and Google's Blogspot.
Wordpress posts are marked by:
<div class="entry">
...
</div>
Blogspot posts are marked by:
<div class="post-body">
...
</div>
If the search by CSS classes fails, you could turn to the other solutions, such as identifying the biggest chunk of text, and so on.
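A short Python sketch of that rule-based approach with BeautifulSoup; the two selectors come from the answer above, everything else is an assumption:
# Try known post-body selectors first, then fall back to other heuristics.
from bs4 import BeautifulSoup

CONTENT_SELECTORS = ["div.entry", "div.post-body"]  # WordPress, Blogspot

def extract_post(html):
    soup = BeautifulSoup(html, "html.parser")
    for selector in CONTENT_SELECTORS:
        match = soup.select_one(selector)
        if match:
            return match.get_text(" ", strip=True)
    return None  # fall back to e.g. the biggest-chunk-of-text heuristic

print(extract_post('<div class="post-body"><p>Hello from Blogspot.</p></div>'))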
As Readability is not available anymore:
If you're only interested in the outcome, you can use Readability's successor Mercury, a web service.
If you're interested in some code showing how this can be done and prefer JavaScript, there is Mozilla's Readability.js, which is used for Firefox's Reader View.
If you prefer Java, you can take a look at Crux, which also does a pretty good job.
Or if Kotlin is more your language, you can take a look at Readability4J, a port of the above-mentioned Readability.js.