I wondered what markup would achieve the following on Google: somehow it recognizes a site's menu items and shows them as part of the search result, but I couldn't find an easy way to do it.
Attached screenshot: [Google search result showing the site's menu items as sitelinks beneath the main listing]
Basically, you are asking how to cause "sitelinks" to appear for your website. Unfortunately, as far as SEO is concerned, there isn't any special markup you can use to make these appear. They will be shown if Google's algorithm determines it is appropriate to show them; otherwise, they won't be.
For more information, see the following help article from Google Webmaster Tools:
http://www.google.com/support/webmasters/bin/answer.py?answer=47334&topic=8523
There isn't anything special about the markup. Google needs to be able to crawl the site and be able to determine the site's structure based on how pages link to each other. In addition, you can tell Google how the site is structured by submitting a sitemap to them. This is a simple step you can do to encourage Google to build this structure in their search results. Be patient for the results to occur, however, as it can take a while.
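For reference, a minimal XML sitemap is just a list of URLs in the sitemaps.org format; the URLs and dates below are placeholders. You submit the file through Google's webmaster tools, or reference it from robots.txt.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2012-01-15</lastmod>
      </url>
      <url>
        <loc>http://www.example.com/products/</loc>
        <lastmod>2012-01-10</lastmod>
      </url>
    </urlset>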
A good (logical) site navigation tree and breadcrumbs on internal pages may help Google read your "menu" correctly. HTML5 may also be a good way to tell the search engine, "Hi, I'm the nav."
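As a minimal sketch of both ideas (the URLs and labels are placeholders):

    <!-- Main menu marked up as an HTML5 <nav> element -->
    <nav>
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/products">Products</a></li>
        <li><a href="/contact">Contact</a></li>
      </ul>
    </nav>

    <!-- Breadcrumb trail on an internal page -->
    <nav class="breadcrumb">
      <a href="/">Home</a> &gt;
      <a href="/products">Products</a> &gt;
      <span>Blue Widget</span>
    </nav>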
I have a page with a meta description, but for some reason Google sources the snippet from the content of the page, and not even from the beginning of the text, but from a sentence in the middle of a paragraph. I've checked the HTML, but I don't see any reason for it. Robots directives are not affecting this page either. What else could be the reason? I forgot to mention I'm using Umbraco 4.7.
The snippet Google shows will generally be related to the query the user has entered. In some cases, this will match well with your meta description, but if there is content elsewhere in the page body that better matches the user's search, then Google will show that part of the page instead.
This article on Moz.com goes into a bit more detail on how you can gently steer Google in the right direction towards your meta description, but ultimately it's not something you can control:
So, is there anything you can do to bend Google to your will and always use your META descriptions? Unfortunately, the short answer is "no". Like so much of SEO, though, there are some ways to nudge Google in the right direction.
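For reference, the tag in question is the standard meta description. Keeping it unique per page, accurate, and reasonably short (commonly cited as roughly 155 characters) gives Google less reason to pull a snippet from the body text instead; the content below is a placeholder:

    <head>
      <meta name="description"
            content="A concise, page-specific summary of what this page is about.">
    </head>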
I want to extract an article, say this one:
http://www.bbc.com/news/magazine-32156264
and only display the article content, so no BBC header or footer. How would I do this? I'm thinking of putting it in an iframe.
As you ask specifically about the BBC:
You are allowed to display the RSS feed of BBC headlines - you could use the WordPress RSS Links widget to do this.
You certainly aren't allowed to just copy someone else's story (or start removing branding etc.) – which is quite reasonable.
Note: The BBC doesn't have an API for news, but some providers do - e.g. The Guardian's Open Platform - and again there will usually be strict restrictions on how you can display things, required branding, and what you are/aren't allowed to change.
Correct approach: choose one or two relevant quotes you find interesting, highlight those, and make sure you have a prominent link back to the original article.
First of all, there will be legal issues. Second, your page rank will suffer because of duplicate content.
If you have already considered the above, you could make a PHP curl request, then parse the response using a regular expression to get the target data, and finally display the retrieved data.
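A minimal sketch of that curl-and-parse approach; the regex and the "story-body" wrapper div are placeholders to adapt to the target page's real markup, and a DOM parser (DOMDocument) is usually more robust than a regular expression for this job:

    <?php
    // Fetch the page from the URL given in the question.
    $ch = curl_init('http://www.bbc.com/news/magazine-32156264');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // Placeholder pattern: grab the contents of a hypothetical
    // <div class="story-body"> ... </div> wrapper.
    if ($html !== false && preg_match('#<div class="story-body">(.*?)</div>#s', $html, $m)) {
        echo $m[1]; // the extracted article fragment
    } else {
        echo 'Article body not found - adjust the pattern for this page.';
    }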
Or you can use the APIs of other news providers, as williamt mentioned.
I know there are more posts like this, but none of them seem to answer my question.
Let's say I have two websites: one is a WordPress site, the other a hand-written HTML site.
Both websites are identical in text and content.
The websites just contain a couple of pages with good keyword-rich text; no fancy things.
If I put both websites online, which one will rank higher in Google?
PS: I know WordPress has a lot of fancy plug-ins for SEO; I am not counting those in this equation. People say things like: "Google just likes WordPress's structure. But a couple of HTML documents are much easier and faster to crawl."
Thanks in advance,
It would be difficult to ascertain which would rank higher without seeing the code of both websites side by side. If you do publish both websites together to test, you will probably be penalized for having duplicate content.
- WordPress's HTML structure and semantics have been created with accessibility in mind, which is what Google gives weight to.
- If you use friendly and relative filenames/URLs, as WordPress does, this is also a plus.
- If you use simple HTML files, as @Paul D. Waite mentions above, then indeed these will be faster to crawl than dynamic pages like PHP.
I would conclude that if your website is relatively simple and you don't need to update it regularly, a static website would rank better, as it's just content and none of the fuss.
Don't forget that inbound links will be a big factor in your page rank.
I am planning a redesign of my site (4-5 years old, with PageRank 3-4). No URLs will change, meaning the same content will stay under the same URL. But I am still concerned, because I have heard that changing the HTML structure across a whole site can have some effect, mainly negative. Yet there is no way to change the design and layout of a page without changing its HTML structure.
Could you please sum up the things to take into account when redesigning a website in a search-engine-friendly way?
I could go into some detail, but basically check your site with this tool to get a detailed breakdown: http://nibbler.silktide.com/ - run it on both the current site and the redesigned one (preferably on a test domain, e.g. test.mywebsite.com).
Basic things not to do: do not use HTML tables for anything but displaying data in a grid, and do not use semantic HTML where it isn't needed; it is meant to mark things as important.
Order of importance of tags on a page: H1 > H2 > H3 > B (H1 carries the most weight).
Make sure your HTML is valid and that you have all the appropriate meta tags in place, as per the W3C standard you choose for your design.
Content is key: keyword density and page themes are what are important. Don't dilute a page; if you have content on a different theme, add a new page for it.
Make sure you add a sitemap and submit it to all search engines, and have a robots.txt file pointing to your XML sitemap.
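As a sketch, a robots.txt that points crawlers at your sitemap can be as small as this (the domain is a placeholder):

    User-agent: *
    Disallow:

    Sitemap: http://www.mywebsite.com/sitemap.xml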
For anything I said that you didn't understand, Google the phrases above and you will find more implementation detail.
I maintain an HTML page that contains a list of links to photo galleries. Over the past few years it has gone from a small page to a list that contains HUNDREDS of links. My fear is that this has affected the SEO of the page as a whole, by being interpreted by spiders as a link farm. Of course, I have no real way of knowing for sure, but I have started to suspect it.
Is there an efficient, simple way to deal with a large number of links in a manner that is still easy for the user to browse? While having hundreds of links one on top of the other may not be the best-looking method, it's easy to search since they are all in chronological order. I am trying to figure out a way to keep the page simple without creating more of a maintenance nightmare for myself.
One idea I had was to use XML to store the links and present them with some kind of dropdown, so that when a spider hit the page it would not see a mountain of links, just a reference to the XML.
Use a "pager" script to show, say 10 at a time. They are available in every web framework or you could quickly hack up your own.
How about this: put the links in separate file(s) (or somehow store them outside of the page - a DB, flat file, etc.) and load them via an ajax call as needed. Say, something like a "Category A" button that, when clicked, loads that category's links into a div. That should keep them out of view for spiders.
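Sketching just the server side of that idea in PHP (the links.php endpoint, the links.json store, and its layout are all hypothetical); the page's button would fetch this fragment with an ajax call and drop it into a div:

    <?php
    // Hypothetical endpoint, e.g. links.php?category=2012, returning an
    // HTML fragment of links for one category. Because the main page only
    // pulls this in via ajax, the full link list never sits in the HTML
    // a spider crawls. links.json layout (assumed):
    // {"2012": [{"url": "...", "title": "..."}, ...], ...}
    $categories = json_decode(file_get_contents('links.json'), true);
    $category   = isset($_GET['category']) ? $_GET['category'] : '';

    if (!isset($categories[$category])) {
        header('HTTP/1.0 404 Not Found');
        exit('Unknown category');
    }

    echo "<ul>\n";
    foreach ($categories[$category] as $link) {
        printf("  <li><a href=\"%s\">%s</a></li>\n",
            htmlspecialchars($link['url']), htmlspecialchars($link['title']));
    }
    echo "</ul>\n";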
Then there's this: http://www.robotstxt.org/meta.html and this: http://en.wikipedia.org/wiki/Nofollow
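Both of those boil down to a line or two of markup; a sketch (the link URL is a placeholder):

    <!-- Page level: ask robots not to follow any links on this page -->
    <meta name="robots" content="nofollow">

    <!-- Link level: mark an individual link as nofollow -->
    <a href="/galleries/june-2012" rel="nofollow">June 2012 gallery</a>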