http://www.google.co.uk/search?q=mark+zuckerberg+crunchbase
Guys, check out that search, in particular the URL line of the first result: Crunchbase.com > People. The "People" part links to the /people section of the site.
How are they achieving it? I know Google's algorithm is intelligent, looks at links, and makes these inferences on its own in some cases, but is there any particular markup they are using to help Google make these connections?
Google is light on details, but here's what they said in their announcement.
The information in these new hierarchies comes from analyzing destination web pages. For example, if you visit the ProductWiki Spidersapien page, you'll see a series of similar links at the top, "Home > Toys & Games > Robots." These are standard navigational tools used throughout the web called "breadcrumbs," which webmasters frequently show on their sites to help users navigate. By analyzing site breadcrumbs, we've been able to improve the search snippet for a small percentage of search results, and we hope to expand in the future.
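So there is no exotic markup required: a breadcrumb trail is just a row of links that mirrors the site's hierarchy, and Google infers the structure by analyzing those links. A minimal sketch of such a trail (the URLs and class name are illustrative, not anything Google mandates):

<div class="breadcrumbs">
  <a href="http://www.example.com/">Home</a> &gt;
  <a href="http://www.example.com/toys-games/">Toys &amp; Games</a> &gt;
  <a href="http://www.example.com/toys-games/robots/">Robots</a>
</div>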
I know that <meta name="Description" content="[description here]" /> can be used, but I wonder how to get a description like the one Facebook has.
Does this description use the <meta> tag as well? Or is there some other secret behind it?
Edit: I code my site myself (no WordPress and stuff) :)
I believe this is how it happens.
Google primarily displays multi-link listings when they feel a query has a strong chance of being navigational in nature. I think they can determine that something is navigational in nature based on linkage data and click streams. If the domain is well aligned with the term, that could be another signal to consider.

If you have 10,000 legit links for a term that nobody else has more than a few dozen external citations for, then odds are pretty good that your site is the official brand source for that term. I think overall relevancy, as primarily determined by link reputation, is the driving factor for whether or not they post mini site map links near your domain.

This site ranks for many terms, but for most of them I don't get the multi-link map love. For the exceptionally navigational type terms (like seobook or seo book) I get multi links.

The mini site maps are query specific. For Aaron Wall I do not get the mini site map. Most people usually refer to the site by its domain name instead of my name.

Google may also include subdomains in their mini site maps. In some cases they will list those subdomains as part of the mini site map and also list them in the regular search results as additional results.

Michael Nguyen put together a post comparing the mini site maps to Alexa traffic patterns. I think that the mini site maps may roughly resemble traffic patterns, but I think the mini links may also be associated with internal link structure.

For instance, I have a sitewide link to my sales letter page for which I use the word testimonials as the anchor text. Google lists a link to the sales letter page using the word testimonials.

When I got sued, the page referencing the lawsuit got tons and tons of links from many sources, which not only built up a ton of linkage data, but also sent tons of traffic to that specific page. That page was never listed in the Google mini site map, which would indicate that if they place heavy emphasis on external traffic or external linkage data, either they try to smooth the data out over a significant period of time and/or they place a heavy emphasis on internal linkage.

My old site also used to list the monthly archives on the right side of each page, and the February 2004 category used to be one of the mini site map links in Google.

You should present the pages you want people to visit the most to search bots the most often as well. If you can get a few extra links to some of your most important internal pages and use smart channeling of internal linkage data, then you should be able to help control which pages Google picks as being the most appropriate matches for your mini site map.

Sometimes exceptionally popular sites will get mini site map navigational links for broad queries. SEO Chat had them for the term SEO, but after they ticked off some of their lead moderators they stopped being as active and stopped getting referenced as much. The navigational links may ebb and flow like that on broad generic queries. For your official brand term it may make sense to try to get them, but for broad generic untargeted terms in competitive markets the amount of effort necessary to get them will likely exceed the payoff for most webmasters.
Source.
Hope this helps.
It depends on the website's popularity.
Google does it, you don't.
Google may do it, but you can persuade them. Also, check this out: sub-sitelinks in a Google search result.
For starters, be sure you have a "sitemap.xml" file. This is a file that tells the search engine about the pages on your site and makes it easier for its spiders to crawl and understand it. Your webmaster, website provider, or Content Management System (like WordPress) should have handled this for you, but it's worth checking. If you're not a master of website technical stuff, whoever is your technical support person will be able to tell you if that file is there and properly set up.
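For reference, a minimal sitemap.xml looks something like this (the domain, paths, and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2012-01-01</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/concierge-services/</loc>
  </url>
</urlset>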
You should register your site with Google Webmaster Tools, if you haven't already. The exact process changes from time to time, but basically, you'll give Google the URL of your Sitemap file, which you'll have from the previous step. You'll have to put a "Site Verification Code" on your site to prove to them that you own the site, and there are a few other simple steps.
Whenever you link one page to another in your site, use anchor text and alt text that is descriptive, as succinct as possible, and consistent. For example, say you've linked to your "concierge services" page from another page using the anchor text "concierge services." That's perfect. Now, don't link from another page using "guest services." You don't want to be confusing the poor Google spider, after all.
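Concretely, every internal link pointing at a given page should carry the same label (the path here is made up for illustration):

<!-- Good: the same anchor text on every page that links here -->
<a href="/concierge-services/">concierge services</a>

<!-- Avoid: a second label for the same destination -->
<a href="/concierge-services/">guest services</a>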
I'm working as a web developer and have lots (hundreds) of links with hacks, tutorials and code snippets that I don't want to memorize. I am currently using Evernote to save the content of my links as snippets so they are searchable and always available (even if the source site is down).
I spend a lot of time tagging, sorting, evaluating and saving stuff to Evernote, and I'm not quite happy with the outcome. I ended up with a multitude of tags and keep reordering and renaming tags while retagging saved articles.
My Requirements
web based
saving web content as snippets with rich styling (code sections, etc.)
interlinked entries possible
chrome plugin for access to content
chrome plugin for content generation
web app or desktop client for faster sorting / tagging / batch processing
good and flexible search mechanism
(bonus) google search integration (search results from KnowledgeBase within google search results)
I had a look at Kippt, but that doesn't seem to be a solution for me. If I don't find a better one, I'm willing to stay with Evernote, as it meets nearly all my needs, but I need a good plan to sort through my links/snippets once and get them in order.
Which solutions do you use and how do you manage your knowledge base?
I'm a big Evernote fan but a stern critic of all my tools. I've stuck with Evernote because I'm happy enough with its fundamental information structures. I am, however, currently working on some apps to provide visualisations and hopefully better ways to navigate complex sets of notes.
A few tips, based on years of using Evernote and wikis for collaboration and software project management:
you can't get away from the need to curate things, regardless of your tool
don't over-think your tags: tags in combination with words are a great way to search (you do know you can say tag:blah in a search query to combine tag and word filters?)
build index pages for different purposes - I'm using a lot more of the internal note links to treat Evernote like a wiki
refactor into smaller notebooks if you use mobile clients a lot, allowing you to choose to have different collections of content with you at different times
I'm thinking of using a tree view for page navigation in my web application, similar to Windows Explorer. There are a lot of things for administrators to configure in the application, so I figured listing all the links on a single page in tree form would keep things organized. Related page links are grouped in a "folder", and all folders show closed initially.
Obviously, this page is for administrators only, so they'd be provided with some training. That being said, is this a good design from the user's point of view? Do you see any usability or potential implementation issues?
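For what it's worth, here is a minimal sketch of the idea using native HTML <details> elements, which render closed by default; the section names and link targets are made up for illustration:

<details>
  <summary>User Management</summary>
  <ul>
    <li><a href="/admin/users">Users</a></li>
    <li><a href="/admin/roles">Roles</a></li>
  </ul>
</details>
<details>
  <summary>System Settings</summary>
  <ul>
    <li><a href="/admin/email">Email</a></li>
    <li><a href="/admin/backups">Backups</a></li>
  </ul>
</details>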
The best answer involves empirical evidence. A yes or no answer could really vary based on the specific task and your intended audience. Try doing a simple 5-minute usability test with your users. Draw out your page layouts on paper and have a couple of users pretend to use the site (see Paper Prototyping). Give them a few simple tasks to complete using your interface and observe what they do.
If they get confused or have trouble with the concept, then it's probably best to find another way to provide navigation.
It totally depends on how your users are using your site. If they're often jumping from one part of the site to a completely different, unrelated place in the site, a tree may be the best way to let them quickly find that "other page" they were looking for.
However, for the vast majority of websites I've ever seen or used, I'd prefer to find what I'm looking for either via Search functionality, or by links on the page I'm looking at that lead me to related data.
I'm looking for ways to prevent parts of a page from being indexed; specifically, the comments on a page, since what users have written there weighs heavily in how the entry is matched. As a result, a Google search can return lots of irrelevant pages.
Here are the options I'm considering so far:
1) Load comments using JavaScript to prevent search engines from seeing them.
2) Use user agent sniffing to simply not output comments for crawlers.
3) Use search engine-specific markup to hide parts of the page. This solution seems quirky at best, though. Allegedly, this can be done to prevent Yahoo! from indexing specific content:
<div class="robots-nocontent">
This content will not be indexed!
</div>
That's a very ugly way to do it, though. I read about a Google solution that looks better, but I believe it only works with the Google Search Appliance (can someone confirm this?):
<!--googleoff: all-->
This content will not be indexed!
<!--googleon: all-->
Does anyone have other methods to recommend? Which of the three above would be the best way to go? Personally, I'm leaning towards #2, since even though it might not work for all search engines, it's easy to target the biggest ones. And it has no side effects for users, unless they're deliberately trying to impersonate a web crawler.
I would go with your JavaScript option. It has two advantages:
1) bots don't see it
2) it would speed up your page load time (load the comments asynchronously and unobtrusively, e.g. via jQuery) ... page load times have a much-underrated positive effect on your search rankings
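A minimal sketch of that approach, assuming a hypothetical /comments endpoint that returns the comments as an HTML fragment (the endpoint, query parameter, and element ID are all made up):

<div id="comments"><!-- filled in after the page loads --></div>
<script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
<script>
// Crawlers that don't execute JavaScript never see this content,
// and the main page renders without waiting for the comments.
$(function () {
  $('#comments').load('/comments?post=123');
});
</script>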
JavaScript is an option, but engines are getting better at reading JavaScript. To be honest, I think you're reading too much into it. Engines love unique content: the more content you have on each page the better, and if the users are providing it... it's the holy grail.
Just because a commenter made a reference to Star Wars on your toaster review doesn't mean you're not going to rank for the toaster model; it just means you might also rank for star wars toaster.
Another idea: you could show comments only to people who are logged in. CollegeHumor does the same, I believe; they show the number of comments a post has, but you have to log in to see them.
googleoff and googleon are for the Google Search Appliance, which is a search engine they sell to companies that need to search through their own internal documents. It's not effective for the live Google site.
I think number 1 is the best solution, actually. The search engines don't like it when you give them different material than you give your users, so number 2 could get you kicked out of the search listings altogether.
This is the first I have heard that search engines provide a method for informing them that part of a page is irrelevant.
Google has a feature that lets webmasters declare which parts of their site a search engine should use to find pages when crawling:
http://www.google.com/webmasters/
http://www.sitemaps.org/protocol.php
You might be able to relatively de-emphasize some things on the page by specifying the most relevant keywords using a META tag in the HEAD section of your HTML pages. I think that is more in line with the engineering philosophy used to architect search engines in the first place.
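For what it's worth, that tag looks like this (the keyword values are illustrative); be aware that the major engines give the keywords META tag little or no ranking weight these days:

<meta name="keywords" content="toaster reviews, two-slice toasters" />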
Look at Google's Search Engine Optimization tips. They spell out clearly what they will and will not let you do to influence how they index your site.
When you go to Google and perform a search, it will return one of two types of results:
just the title of your web page, or
the title of your web page plus a list of subpages it found on that web site
Here is an example of option #2: http://37assets.s3.amazonaws.com/svn/grub-ellis-googlelisting.png
My website on a google.com search only lists my web page title (option #1). How do I get Google to list my subpages in the search results (option #2)?
Is it an HTML issue? How do I get Google to know what my subpages are so that it can also list those in a Google search?
Those are called "sitelinks" and are automated, but you can partially configure them in Google's Webmaster Tools. In Webmaster Tools, click "Sitelinks" in the navigation menu on the left. From the sitelinks page:
Sitelinks are links to a site's interior pages. Not all sites have sitelinks. Google generates these links automatically, but you can remove sitelinks you don't want.
Here is another Google page explaining sitelinks.
You should add a sitemap using the Google Webmaster Tools site, or by maintaining your own. For an explanation, check out the Sitelinks page.
Google has not generated any sitelinks for your site. Sitelinks are completely automated, and we show them only if we think they'll be useful to the user. If your site's structure doesn't allow our algorithms to find good sitelinks, or we don't think that the sitelinks are relevant to the user's query, we won't show them. However, we are always working to improve how we find and display sitelinks.
You can also directly enable sitelinks (you don’t have to get lucky) in Google’s Pay-Per-Click platform (AdWords), and it will have a similar very positive impact on your clickthrough rate.
You need to create an XML sitemap. Here is all you need to know. Check whether your open-source CMS has a plugin/add-on/module to do this automatically; there are standalone generators out there too.
http://www.google.lv/search?q=XML+sitemap
http://en.wikipedia.org/wiki/Sitemaps
http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=156184
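Once the file exists, you can also point crawlers at it without any CMS support by adding a single line to your robots.txt (the URL is a placeholder); the sitemaps.org protocol linked above defines this directive:

Sitemap: http://www.example.com/sitemap.xml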
You are describing "Search Engine Optimization" with your question. If you have a small site, the best thing you can do is ensure every page has a unique title and links back to your home page, that you have a good site map so search engines can easily discover ALL of your pages, and, most important, that your pages are THE definitive place for information about whatever you're selling.
Content is king, and once you become the authority, your page will pop up in the first one or two results.
Contact some local SEO folks in your area and ask for a site evaluation. Many will do it for free with their automated tools. You can use the webmaster tools from Bing or Google if you're on a tight budget.