Safe to have page resources without file extensions?

I need to decide on naming conventions for a new website.
I can use mod_rewrite at will.
My favourite solution would be to work with no file extension at all.
www.exampledomain.com/language/pagename
this would lead to "pagename" being treated as a directory. I would have to take that into account when using relative links.
Are there any other pitfalls I need to be aware of when doing this?
Is this legal, or are resources supposed to have a "name.suffix" structure?
Do you know of any clients that can't deal with this and start looking for /index.htm or .html?
Can you think of any SEO problems to be expected?

Unless you have a very good reason to add an extension, drop it.
are resources supposed to have a "name.suffix" structure?
Not that I know of. Normally not. Resources are just a concept. A custom resource format may have an extension requirement; others would not. It depends.
As for SEO, the shorter a link is, the better: it increases the relative weight of keywords. An extension would make links longer by 4 characters or more.
Do you know of any clients that can't deal with this and start looking for /index.htm or .html?
A problem may arise if you decide to support multiple entry points.
www.exampledomain.com
www.exampledomain.com/index.html
www.exampledomain.com/index.htm
www.exampledomain.com/index
These are all different URLs to search engines. Some people will link to you with the shortest name, others will use another version. Ultimately there will be different inbound links pointing to your site's start page, which is essentially the same page in every case. Search engines will detect this and see it as content duplication. Consequently, your page rank will be divided between the several URL versions. Finally, all except one will likely be dropped from their index entirely. To deal with this situation, decide on one "true" URL and have the others perform a 301 redirect (Moved Permanently) to it.
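A minimal mod_rewrite sketch of that canonicalization, assuming Apache with mod_rewrite enabled and the bare path as the chosen "true" URL (adjust the extension list to whatever your server actually serves):

    RewriteEngine On
    # 301-redirect /index, /index.htm, /index.html, /index.php
    # to the canonical bare URL
    RewriteRule ^index(\.(html?|php))?$ / [R=301,L]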

Dropping extensions actually has the significant benefit of not tying you to a specific language. If your URLs are http://example.com/page.php and you switch to another language, you'll either lose the existing URLs (bad!) or have to fake the PHP extension (clunky).
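A sketch of that decoupling with mod_rewrite (assuming Apache; "page" is a hypothetical name): the public URL stays extensionless, and only an internal rule knows what actually serves it.

    RewriteEngine On
    # Today /page happens to be served by PHP...
    RewriteRule ^page$ page.php [L]
    # ...later you can repoint the same public URL at another backend
    # without breaking a single inbound link.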


What are the practical benefits of using microformats for every possible thing?

What practical benefits can my client get if I use microformats on his site for every possible thing?
How can I explain these benefits to a non-technical client?
Sometimes it seems like the practical benefits are hard to quantify.
Search engines already pick up and parse microformats (see e.g. https://support.google.com/webmasters/answer/99170). I believe hCard and hCalendar are fairly well supported; and if not, plenty of sites are using them anyway, including places like MySpace.
It's the idea that adding CSS classes and specified IDs makes your existing content easier to parse in a machine-readable manner.
hReview is starting to make some inroads, and hResume looks like it will take off too.
I heavily use rel="nofollow" on uncontrolled links (3rd-party sources), which is actually a microformat.
Check the microformats wiki for a decent starting point.
It just means your viewers can share a few generic "formats". You can generalize stylesheets and parsing mechanisms. Rather than having a webpage consist of one "html document," you have a webpage that consists of "10 formatted micro-documents".
If you need a real-world analog: think of it like attaching a formatted invoice, a receipt, and a business card, rather than writing it all down on notebook paper with your left hand.
Overall the site becomes easier to digest for the rest of the internet. The data can be reused, combined, cross-referenced, and saved.
A simple example would be to have a latitude and a longitude anywhere on the site (geo). With microformats, anybody who searches for that latitude and longitude can easily be referred to their website, increasing traffic and awareness of that person or company, and allowing users to easily save that information. (I've encountered little of this personally; it's more "the future" of things than the present. But it's always good to stay up to date.)
A second example would be a business card (hCard) that a browser can easily save and transfer to an address book, so that after just one visit to the site the visitor has the information saved locally. Especially useful if you're getting hits from a cell phone.
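For concreteness, a minimal sketch of both examples (the class names are the actual hCard/geo vocabulary; the person and coordinates are made up):

    <div class="vcard">
      <a class="url fn" href="http://example.com">Jane Doe</a>
      <span class="org">Example Widgets Inc.</span>
      <span class="tel">+1-555-0100</span>
      <span class="geo">
        <span class="latitude">52.48</span>,
        <span class="longitude">-1.89</span>
      </span>
    </div>

A microformat-aware browser or crawler can lift the contact card and the coordinates straight out of this markup, while ordinary visitors just see styled text.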
I wouldn't recommend using microformats for "every possible thing". Use them for things where you get some benefit, in exchange for the effort of using them.
The main practical benefit I'm aware of is customised search engine results:
https://support.google.com/webmasters/answer/99170
Technically, Google now prefers this to be implemented using microdata (i.e. itemprop attributes) rather than microformats, but it's the same idea.
Having a microformat can be better than no format, since it lets you save every possible thing in the application.
A microformat for every possible thing can be better than a standard format only because it's quicker to create, so it costs less, and it takes less space than some standard formats, like XML.
But all this depends on the context of the application, so you must explain it to the client in that context.
Microformatting your content extends its reach in every way possible. Using your site's structure as its "API", the possibilities are limited only by where you set your limits.

Generally a Good Idea to Always Hash Unique Identifiers in URL?

Most sites which use an auto-increment primary-key display it openly in the url.
i.e.
example.org/?id=5
This makes it very easy for anyone to spider a site and collect all the information by simply incrementing the value of id. I can understand where in some cases this is a bad thing if permissions/authentication are not setup correctly and anyone could view anything by simply guessing the id, but is it ever a good thing?
example.org/?id=e4da3b7fbbce2345d7772b0674a318d5
Is there ever a situation where hashing the id to prevent crawling is bad-practice (besides losing the time it takes to setup this functionality)? Or is this all a moot topic because by putting something on the web you accept the risk of it being stolen/mined?
Generally with web-sites you're trying to make them easy to crawl and get access to all the information so that you can get good search rankings and drive traffic to your site. Good web developers design their HTML with search engines in mind, and often also provide things like RSS feeds and site maps to make it easier to crawl content. So if you're trying to make crawling more difficult by not using sequential identifiers then (a) you aren't making it more difficult, because crawlers work by following links, not by guessing URLs, and (b) you're trying to make something more difficult that you also spend time trying to make easier, which makes no sense.
If you need security then use actual security. Use checks of the principal to authorize or deny access to resources. Obfuscating URLs is no security at all.
So I don't see any problem with using numeric identifiers, or any value in trying to obfuscate them.
Using a hash like MD5 or SHA on the ID is not a good idea:
there is always the possibility of collisions. That is, two different IDs hash to the same value.
How are you going to unhash it back to the actual ID?
A better approach if you're set on avoiding incrementing IDs would be to use a GUID, or just a random value when you create the ID.
That said, if your application security relies on people not guessing an ID, that shows some flaws elsewhere in the system. My advice: stick to the plain and easy auto-incrementing ID and apply some proper access control.
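A minimal Python sketch of that advice, assuming you store the generated value in its own indexed column next to the internal auto-increment key (uuid4 is random, so there is nothing to "unhash" and nothing to guess):

    import uuid

    # Generated once at insert time and stored alongside the internal ID;
    # the auto-increment key itself never appears in a URL.
    public_id = uuid.uuid4().hex

    print("example.org/orders?id=" + public_id)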
I think hashing publicly accessible IDs is not a bad thing, but showing sequential IDs will in some cases be a bad thing. Even better, use GUIDs/UUIDs for all your IDs. You can even use sequential GUIDs in a lot of technologies, which is faster at insert time (though not as good in a distributed environment).
Hashing or randomizing identifiers or other URL components can be a good practice when you don't want your URLs to be traversable. This is not security, but it will discourage the use (or abuse) of your server resources by crawlers, and can help you to identify when it does happen.
In general, you don't want to expose application state, such as which IDs will be allocated in the future, since it may allow an attacker to use that prediction in ways you didn't foresee. For example, BIND's sequential transaction IDs were a security flaw.
If you do want to encourage crawling or other traversal, a more rigorous way would be to provide links, rather than by providing an implementation detail which may change in the future.
Using sequential integers as IDs can make many things cheaper on your end, and might be a reasonable tradeoff to make.
My opinion is that if something is on the web, and is served without requiring authorization, it was put with the intention that it should be publicly accessible. Actively trying to make it more difficult to access seems counter-intuitive.
Often, spidering a site is a Good Thing. If you want your information available as much as possible, you want sites like Google to gather data on your site, so that others can find it.
If you don't want people to read through your site, use authentication, and deny access to people who don't have access.
Random-looking URLs only give the impression of security, without the reality. If you put (hidden) account information in a URL, anyone who gets hold of that URL, including a web spider, has access to the account.
My general rule is to use a GUID if I'm showing something that has to be displayed in a URL and also requires credentials to access or is unique to a particular user (like an order id). http://site.com/orders?id=e4da3b7fbbce2345d7772b0674a318d5
That way another user won't be able to "peek" at the next order by hacking the url. They may be denied access to someone else's order, but throwing a zillion letters and numbers at them is a pretty clear way to say "don't mess with this".
If I'm showing something that's public and not tied to a particular user, then I may use the integer key. For example, for displaying pictures, you might wish to allow your users to hack the url to see the next picture.
http://example.org/pictures?id=4, http://example.org/pictures?id=5, etc.
(I actually wouldn't do either as a simple GET parameter; I'd use mod_rewrite (or something) to make readable URLs, something like http://example.org/pictures/4 -> /pictures.php?picture_id=4, etc.)
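A sketch of that rule, assuming Apache mod_rewrite (the script name is the hypothetical one from the parentheses above):

    RewriteEngine On
    # Map the readable URL onto the real script
    RewriteRule ^pictures/(\d+)$ /pictures.php?picture_id=$1 [L]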
Hashing an integer is a poor implementation of security by obscurity, so if that's the goal, a true GUID or even a "sequential" GUID (whether via NEWSEQUENTIALID() or COMB algorithm) is much better.
Either way, no one types URLs anymore, so I don't see much sense in worrying about the difference in length.

Screen scraping gotchas

When screen-scraping, what are the "gotcha"s to look out for?
The inspiration for this: my spouse's co-worker asked me to scrape all the pages from a Blogger-hosted blog that her friend with cancer kept in her final months; this lady wanted to keep all of the posts in case the blog was ever deleted. I eventually found a free tool that was barely good enough.
One issue with scraping many Blogger pages is that there's often a navigation menu where you can click on the triangles to expand the post lists by year or month. These little buggers created insane amounts of duplicate content because you'd have the same page over and over again with different combinations of the menus being expanded/collapsed. In Blogger's case I'm not sure this is avoidable since the links are all formatted as real http links and not obvious JavaScript calls. Still, it got me thinking:
If you were to scrape a website, what kinds of potentially non-obvious things would you compensate for?
Do not use regex to scrape
While regular expressions can be good for a large variety of tasks, they usually fall short when parsing the HTML DOM. The problem with HTML is that the structure of a document is so variable that it is hard to accurately (and by accurately I mean a 100% success rate with no false positives) extract a tag.
What I recommend you do is use a DOM parser such as BeautifulSoup or equivalent (SimpleHTMLDom in PHP).
Some may think this is overkill, but in the end, it will be easier to maintain and also allows for more extensibility.
A regular expression could be devised to achieve the same goal but would be limited. For example, a regex to get the src and alt attributes would force alt to appear after src, or the opposite, and overcoming this limitation would add more complexity to the regular expression.
Also, consider the following. To properly match an <img> tag using regular expressions and to get only the src attribute (captured in group 2), you need the following regular expression:
<\s*?img\s+?[^>]*?\s*?src\s*?=\s*?(["'])((\\?+.)*?)\1[^>]*?>
And then again, the above can fail if:
The attribute or tag name is in capitals and the i modifier is not used.
Quotes are not used around the src attribute.
An attribute other than src uses the > character somewhere in its value.
Some other reason I have not foreseen.
So again, simply don't use regular expressions to parse an HTML document.
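For comparison, a minimal BeautifulSoup sketch of the same extraction (Python; the tag case and attribute order that trip up the regex above are non-issues for a real parser):

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    html = '<p><IMG alt="logo" src="/img/logo.png"></p>'
    soup = BeautifulSoup(html, "html.parser")

    # The parser normalizes case, attribute order, and quoting for you.
    for img in soup.find_all("img"):
        print(img.get("src"), img.get("alt"))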
I screen scrape a lot. Some advice:
Emulate a User-Agent string for some browser you want to use. Different websites frequently return very different results depending on what your user agent is. If they don't recognize the User-Agent they will often revert to lowest common denominator, so it's usually best to start with some recent browser. (For example the World of Warcraft Armory returns beautiful, easy to parse XML if it thinks you're a recent Firefox. If it doesn't know what you are it sends terrible HTML).
Be polite to the site you're scraping; don't hit it too hard. Your scraper will go faster if you multi-thread it, making many requests at once, but that will annoy the site owner.
Be smart about error handling. Do not write code like while (1) { makeRequest(); }. If your code or the server throws an error a loop like this will immediately fetch another request, generating another error. It can get ugly quickly. Handle errors well and consider putting in sleeps or exits if you see a lot of errors.
When developing your parsing code, test against a cached version rather than hitting the server every time. This will make your development go faster and is the basis of a simple test suite.
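A minimal Python sketch pulling those four points together (the URL and User-Agent string are placeholders; only the standard library is used):

    import time
    import urllib.error
    import urllib.request

    URL = "http://example.com/page"  # hypothetical target
    HEADERS = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) "
                             "Gecko/20100101 Firefox/115.0"}

    for attempt in range(1, 4):
        try:
            req = urllib.request.Request(URL, headers=HEADERS)
            with urllib.request.urlopen(req, timeout=30) as resp:
                body = resp.read()
            # Cache the page so parser development doesn't re-hit the server.
            with open("page.cache.html", "wb") as f:
                f.write(body)
            break
        except urllib.error.URLError:
            # Back off instead of immediately firing the next request.
            time.sleep(5 * attempt)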
First, I'd check for an RSS feed. On Blogger, you just have to add /rss to the root url, if I remember correctly.
Then I'd check if there isn't already some tool to scrape Blogger.
Then if there's no RSS feed, and no existing tool, I'd give up and do it by hand with copy/paste. Unless we're talking 5000 pages, it's much faster and easier that way. Take it from someone who's tried.
If you have access to the actual account, Blogger has an export function.
Edit: Or of course, you could try Mechanical Turk.
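If the feed route works out, a minimal sketch using the feedparser library (the feed URL is hypothetical; check the blog's HTML head for the real one):

    import feedparser  # pip install feedparser

    feed = feedparser.parse("http://someblog.blogspot.com/rss")
    for entry in feed.entries:
        # Each post's title and permalink, ready to be saved.
        print(entry.title, entry.link)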
As far as gotchas are concerned, it's usually a good idea to limit the number of requests made over a given period of time. Smashing a site with a lot of requests in a short space of time is a good way to have your requests rejected.
Aside from the technical considerations, make sure you're not putting yourself at legal risk. Most large sites have specific language in their terms of use that disallows programmatic access to their services via an automated program, and there are also the obvious copyright concerns.
From a technical standpoint, definitely use a DOM parser library and you'll save loads of time. Many provide the ability to read HTML into an XML structure that can be queried using XPath to find exactly what you need.
If you know someone who has access to the account, they can use Blogger's "Export blog" feature.

Should I default my website to www.foo or not?

Notice how the default domain for stackoverflow is http://stackoverflow.com, and if you try to go to http://www.stackoverflow.com it bounces you to http://stackoverflow.com?
What is the reason for this? Not the tech reason (as in the http code, etc) but why would the site owners want to do this?
I know it's purely aesthetic and I always have host headers for both www and not, but is there a reason to bounce a user to a single domain, with a subdomain or without?
Update 1
Not having a subdomain is called a bare domain. Thanks, peeps! Never knew it had a term :)
Update 2
Thanks for the answers so far. Please note I understand that www.domain.com can point to domain.com. This is not a question about whether I should offer both or either/or; it's asking why some sites default to a bare domain instead of the www subdomain, or vice versa. Cheers.
Jeff Atwood actually HAS explained why he's gone for bare domains here and here. (Nod to Jonas Pegerfalk for the post :) )
Jeff's post (and others in this thread) also talks about the problems of a bare domain with cookies and static images. Basically, if you set cookies on a bare domain, they are forced on all subdomains too. The solution is to purchase another domain, as posted by the Yahoo Perf Team here.
Jeff Atwood has written a great article about The Great Dub-Dub-Dub Debate. There is also an entry on the Stack Overflow blog about why and how Stack Overflow dropped the www prefix.
As far as I can tell, it doesn't really matter, but you should pick one or the other as the default and forward to that.
The reason is that, depending on the browser implementation, www.example.com cookies are not always accessible to example.com (or is it the other way around?).
For more discussion on this, see:
in favor of www
http://faq.nearlyfreespeech.net/section/domainnameservice/baredomain#baredomain - This webhost lists several good reasons to keep the www for anyone doing more than simple webhosting (such as load balancing, subdomains with different content, etc.)
http://yes-www.org - This blog post from 2005 mainly proposed that most internet users needed the www prefix in order to recognize a URL. This is less important now that browsers have built-in searching. Most computer illiterates I know bypass the URL bar entirely.
in opposition to www
http://no-www.org/
And a miscellaneous related rant about why www should not be used as a CNAME, but only as an A record:
http://member.dnsstuff.com/rc/index.php?option=com_myblog&task=view&id=62&Itemid=37
It is worth noting that you can't have a CNAME and an NS record on the same (bare domain) name in DNS. So, if you use a CDN and need to set up a CNAME record for your web server, you can't do it with a bare domain. You must use "www" or some other prefix.
Having said that, I prefer the look of URLs without the "www." prefix so I use a bare domain for all my sites. (I don't need a CDN.)
When I am mentioning URLs for the general public (eg. on a business card), I feel that one has to use either the www. prefix or the http:// prefix. Otherwise, just a bare domain name doesn't tell people they can necessarily type it into their browser. So, since I consider http:// an ugly wart on a business card, I do use the www. prefix there.
What a mess.
In some cases, www might indeed point to a completely separate subdomain in a large corporate environment. Especially on an internal network, having the explicit www can make split DNS easier if the Web site is hosted externally (say, at Rackspace in Texas, but everything else is in your office in Virginia.) In most cases, it doesn't matter.
The important thing is to pick one and add an IHttpModule, rewrite rule, or equivalent for your platform to permanently redirect requests from one to the other.
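For instance, a mod_rewrite sketch of that permanent redirect, assuming Apache and the bare domain as the canonical name (swap the condition and target to prefer www):

    RewriteEngine On
    # 301-redirect www.example.com/* to the bare domain
    RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]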
Having both can lead to scary certificate warnings when switching from http to https if you don't have a wildcard certificate and forget to explicitly redirect based on your site's name (which you probably don't, because you want your code to work in both dev and production, so you're using some variable populated by the server).
Much more importantly, having both accepted results in search engines seeing duplicated content: you get dinged for the duplication, and you get dinged because your hits are split across two different URIs, hurting your rankings.
Actually you can use both of them, so users can find your address either way; I mean, it doesn't really matter, though :)
But putting www as a prefix is more common with the public, so I guess I'd prefer to use www.
It's easier to type google.com than www.google.com, so give the option of both. Remember, www is just a subdomain.
Also, going without www is commonplace these days, so maybe make www.foo.com redirect to foo.com.
I think one reason is to help with search rankings, so that each page's ranking isn't split between two domains.
I'm not sure why the StackOverflow team decided to use only one, but if it were me, I'd do it for simplicity. You'd have to allow for both, since a lot of people type www by default or out of habit (I'm sure less "techy" people have no idea that there's a difference).
Aside from that, there used to be a difference as far as search engines were concerned and so there was concern about having either a duplicate content penalty or having link reputation split. But this has long since been handled and so isn't much of a consideration at this point.
So I'd say it's pretty much personal preference to keep things simple.

Unlinked web pages on a server - security hole?

On my website, I have several html files I do not link off the main portal page. Without other people linking to them, is it possible for Jimmy Evil Hacker to find them?
If anyone accesses the pages with advanced options turned on in their Google toolbar, the address will be sent to Google. That's the only explanation I can come up with for why some of my pages are on Google.
So, the answer is yes. Make sure you have a robots.txt, or even .htaccess rules or something.
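For example, a minimal robots.txt that asks well-behaved crawlers to stay out of a directory (the path is hypothetical):

    User-agent: *
    Disallow: /private/

Bear in mind that robots.txt is itself publicly readable, so anything you list there is also advertised.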
Hidden pages are REALLY hard to find.
First, be absolutely sure that your web server does not return any default index pages ever. Use the following everywhere in your configuration and .htaccess files. There's probably something similar for IIS.
Options -Indexes
Second, make sure the file name isn't a dictionary word; the odds of guessing a non-dictionary word are astronomically small. Non-zero, though: there's a theoretical possibility that someone, somewhere might patiently guess every possible file name until they find yours. [I hate these theoretical attacks. Yes, they exist. No, they'll never happen in your lifetime, unless you've given someone a reason to search for your hidden content.]
You're talking about security through obscurity (google it), and it's never a good idea to rely on it.
Yes, it is.
It's unlikely they will be found, but still a possibility.
The term "security through obscurity" comes to mind