browser request header "Accept-Language" does not send country - google-chrome

I am implementing i18n in my webapp and am in the testing phase at the moment. I am using java.util.Locale on the server side to pass the locale to the relevant APIs (date time etc) that consume the information. Here is my setup:
browser language has been set to "Hindi"
operating system country has been set to "India"
I send a request to the server expecting the "Accept-Language" header to be hi-IN, but the value remains hi regardless of the country setting on my OS ... actual value: Accept-Language:hi;en-US,en;q=0.8,q=0.6
my webapp uses the incoming value in the request header and does i18n or l10n accordingly by loading the appropriate language translation from resource files
I have a test case where I manually pass in new Locale("hi", "IN") to indicate language and country. This test case prints values in the correct language as I expect, but since the value coming in from the request is only hi, I am unable to see the desired result.
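(For reference, a minimal sketch of how an Accept-Language value can be matched against the locales an app actually supports, using java.util.Locale.LanguageRange from Java 8+; the supported-locale list and the en-US fallback below are only assumptions for illustration.)

import java.util.List;
import java.util.Locale;

public class LocaleResolver {

    // Locales the application ships translations for (hypothetical list)
    private static final List<Locale> SUPPORTED = List.of(new Locale("hi", "IN"), Locale.US);

    public static Locale resolve(String acceptLanguage) {
        // Parses the weighted header, e.g. "hi,en-US;q=0.8,en;q=0.6"
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(acceptLanguage);
        // Basic filtering treats the range "hi" as a prefix, so it still matches hi-IN
        List<Locale> matches = Locale.filter(ranges, SUPPORTED);
        return matches.isEmpty() ? Locale.US : matches.get(0);
    }

    public static void main(String[] args) {
        System.out.println(resolve("hi,en-US;q=0.8,en;q=0.6")); // hi_IN
        System.out.println(resolve("fr-CA,fr;q=0.9"));          // en_US (fallback)
    }
}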
Not sure why the browsers (Chrome and Firefox) do not support the language_country format for all entries in their selection. Is there anything I am missing?
Edit: I made a few corrections based on the answer by #pawel-dyda. To quote a part of his response:
Your language tag should be hi-IN, which I believe should explain the odd behaviour.
The crux of the issue (the reason I am raising this question here) is that I am unable to get my browser to send the value hi-IN to the server in the Accept-Language header.

I think you're missing a few things.
Regarding the second point, setting the operating system country usually doesn't affect what a web browser sends in its Accept-Language list. Usually, because I can give you a counterexample: Safari on Mac OS X.
There is a slight chance that it has some effect on mobile web browsers, but I haven't performed any tests myself.
In regard to points 3 and 5... Well, you gave an example of an Accept-Language list. Please take a closer look at it: it contains en-US, that is, English (US). Your language tag should be hi-IN, which I believe should explain the odd behaviour.
I am not sure what you meant in point 4. Not knowing the implementation details, I can only guess that you're trying to load resource files (and judging by the locale format it would be Java properties...) as well as have some defaults for things like formatting.
For properties files, usually (not always!) the language alone is enough. But there is a problem with formatting.
Well, most of the time you will receive merely the language and you have no choice but to accept this fact. There are two ways to mitigate this issue:
You can implement a user profile and let the user choose their preferred UI language and formatting settings (it is best practice to keep those separate).
You can "guess" the most likely country. In the case of Hindi, it's quite obvious what the result of guessing will be. It is a bit more complicated in the case of, for example, German, which is used in Germany ("default"), Austria and Switzerland. There are obviously many more cases; if you want help with "guessing", CLDR is the best source of information.
The best approach is to actually implement locale settings in the user profile, but use smart guessing based on data taken from CLDR; basically you combine points 1 & 2.
And don't forget about fallback! That is, locale fallback (going through the list in the Accept-Language header until you find something that your application supports) and resource fallback (should you have messages_fr.properties but no messages_fr_CA.properties, and the request came in as fr-CA, it makes sense to return French translations from the former file).
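To illustrate the resource-fallback half of that, a minimal sketch (the bundle base name messages and the greeting key are just placeholders): java.util.ResourceBundle already walks messages_fr_CA → messages_fr → messages on its own.

import java.util.Locale;
import java.util.ResourceBundle;

public class Messages {
    public static void main(String[] args) {
        // The request came in as fr-CA, but only messages_fr.properties exists on the classpath
        Locale requested = Locale.forLanguageTag("fr-CA");

        // getBundle tries messages_fr_CA, then messages_fr, then the base messages bundle
        // (it also consults the JVM default locale in between unless you pass a Control that disables that)
        ResourceBundle bundle = ResourceBundle.getBundle("messages", requested);

        System.out.println(bundle.getLocale());           // fr -> the bundle actually loaded
        System.out.println(bundle.getString("greeting")); // value taken from messages_fr.properties
    }
}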
By the way: you can open Firefox's about:config page. It has a setting named intl.accept_languages. I bet that if you change its contents, you'll be able to send what you want. However, as I said, it is useless, because users won't change their settings...

Related

Best practice for email links that will set a DB flag?

Our business wants to email our customers a survey after they work with support. For internal reasons, we want to ask them the first question in the body of the email. We'd like to have a link for each answer. The link will go to a web service, which will store the answer, then present the rest of the survey.
So far so good.
The challenge I'm running into: making a server-side change based on an HTTP GET is bad practice, but you can't do a POST from a link. Options seem to be:
Use an HTTP GET instead, even though that's not correct and could cause problems (https://twitter.com/rombulow/status/990684453734203392)
Embed an HTML form in the email and style some buttons to look like links (likely not compatible with a number of email platforms)
Don't include the first question in the email (not possible for business reasons)
Use HTTP GET, but have some sort of mechanism which prevents a link from altering the server state more than once
Does anybody have any better recommendations? Googling hasn't turned up much about this specific situation.
One thing to keep in mind is that HTTP is specifying semantics, not implementation. If you want to change the state of your server on receipt of a GET request, you can. See RFC 7231
This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access log files at the completion of every response, regardless of the method, and that is considered safe even though the log storage might become full and crash the server. Likewise, a safe request initiated by selecting an advertisement on the Web will often have the side effect of charging an advertising account.
Domain-agnostic clients are going to assume that GET is safe, which means your survey results could get distorted by web spiders crawling the links, browsers pre-loading resources to reduce perceived latency, and so on.
Another possibility that works in some cases is to treat the path through the graph as the resource. Each answer link acts like a breadcrumb trail, encoding into itself the history of the client's answers. So a client that answered A and B to the first two questions is looking at /survey/questions/questionThree?AB, whereas the user that answered C to both is looking at /survey/questions/questionThree?CC. In other words, you aren't changing the state of the server; you are just guiding the client through a pre-generated survey graph.
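A rough sketch of that idea (the /survey/questions/... layout and the single-letter answer codes are purely illustrative): each answer link on the current page appends the chosen answer to the history already carried in the URL, so serving any page never has to write anything on the server.

import java.util.ArrayList;
import java.util.List;

public class SurveyLinks {

    // Answer links shown on one question page; the path layout mirrors the
    // questionThree?AB example above and answers are encoded as single letters.
    static List<String> answerLinks(String nextQuestion, String answersSoFar) {
        List<String> links = new ArrayList<>();
        for (String choice : List.of("A", "B", "C")) {
            links.add("/survey/questions/" + nextQuestion + "?" + answersSoFar + choice);
        }
        return links;
    }

    public static void main(String[] args) {
        // A client on questionThree?AB (answered A, then B) gets these links pointing at question four:
        System.out.println(answerLinks("questionFour", "AB"));
        // [/survey/questions/questionFour?ABA, /survey/questions/questionFour?ABB, /survey/questions/questionFour?ABC]
    }
}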

REST API - file (ie images) processing - best practices

We are developing a server with a REST API which accepts and responds with JSON. The problem arises when you need to upload images from the client to the server.
Note: I am also talking about a use-case where the entity (user) can have multiple files (carPhoto, licensePhoto) and also other properties (name, email, ...), but when you create a new user, you don't send these images; they are added after the registration process.
These are the solutions I am aware of, but each of them has some flaws:
1. Use multipart/form-data instead of JSON
good : POST and PUT requests are as RESTful as possible; they can contain text inputs together with a file.
cons : It is not JSON anymore, which is much easier to test, debug, etc. compared to multipart/form-data
2. Allow to update separate files
The POST request for creating a new user does not allow adding images (which is OK in our use-case, as I said at the beginning); uploading pictures is done by a PUT request as multipart/form-data to, for example, /users/4/carPhoto
good : Everything (except the file uploading itself) remains in JSON, it is easy to test and debug (you can log complete JSON requests without being afraid of their length)
cons : It is not intuitive, you can't POST or PUT all variables of the entity at once, and this address /users/4/carPhoto can be considered more of a collection (a standard use-case for a REST API looks like this: /users/4/shipments). Usually you can't (and don't want to) GET/PUT each variable of an entity, for example users/4/name. You can get the name with GET and change it with PUT at users/4. If there is something after the id, it is usually another collection, like users/4/reviews
3. Use Base64
Send it as JSON but encode files with Base64.
good : Same as the first solution, it is as RESTful as possible.
cons : Once again, testing and debugging are a lot worse (the body can contain megabytes of data), and there is an increase in size and processing time on both the client and the server
I would really like to use solution no. 2, but it has its cons... Can anyone give me better insight into what the "best" solution is?
My goal is to have RESTful services that follow as many standards as possible, while keeping it as simple as possible.
OP here (I am answering this question after two years; the post made by Daniel Cerecedo was not bad at the time, but web services are developing very fast)
After three years of full-time software development (with focus also on software architecture, project management and microservice architecture) I definitely choose the second way (but with one general endpoint) as the best one.
If you have a special endpoint for images, it gives you much more power over handling those images.
We have the same REST API (Node.js) for both mobile apps (iOS/Android) and the frontend (using React). This is 2017, so you don't want to store images locally; you want to upload them to some cloud storage (Google Cloud, S3, Cloudinary, ...), and therefore you want some general handling for them.
Our typical flow is that as soon as you select an image, it starts uploading in the background (usually a POST to the /images endpoint), returning the ID after uploading. This is really user-friendly, because the user chooses an image and then typically proceeds with some other fields (e.g. address, name, ...), so by the time they hit the "send" button, the image is usually already uploaded. They don't sit watching a screen that says "uploading...".
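(For illustration only: a minimal sketch of such a general /images endpoint. The answer's own stack is Node.js; this Java/Spring version, the field name file and the storage call are assumptions.)

import java.util.Map;
import java.util.UUID;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class ImageController {

    @PostMapping("/images")
    public Map<String, String> upload(@RequestParam("file") MultipartFile file) {
        String id = UUID.randomUUID().toString();
        // Hypothetical storage call: push file.getBytes() to S3 / Google Cloud Storage / Cloudinary here
        System.out.println("received " + file.getOriginalFilename() + " (" + file.getSize() + " bytes) as " + id);
        // The client keeps this id and attaches it to the entity once the rest of the form is submitted
        return Map.of("id", id);
    }
}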
The same goes for getting images. Especially with mobile phones and limited mobile data, you don't want to send original images, you want to send resized ones, so they do not take up that much bandwidth (and to make your mobile apps faster, you often don't want to resize them at all, you want the image that fits perfectly into your view). For this reason, good apps use something like Cloudinary (or we have our own image server for resizing).
Also, if the data is not private, then you send just the URL back to the app/frontend and it downloads the image from cloud storage directly, which is a huge saving of bandwidth and processing time for your server. In our bigger apps there are a lot of terabytes downloaded every month; you don't want to handle that directly on each of your REST API servers, which are focused on CRUD operations. You want to handle that in one place (our image server, which has caching etc.) or let cloud services handle all of it.
Small 2023 update: if possible, put a CDN in front of the pictures; it will usually save you a lot of money and make the pictures even more available (i.e. no issues when traffic peaks happen).
Cons : The only "con" you should think about is "unassigned images". The user selects an image and continues filling in other fields, but then says "nah" and closes the app or tab, while meanwhile you have successfully uploaded the image. This means you have an uploaded image that is not assigned anywhere.
There are several ways of handling this. The easiest one is "I don't care", which is a relevant one if this does not happen very often, or if you even want to store every image users send you (for any reason) and you don't want any deletion.
Another one is easy too - you have a CRON job that runs, for example, every week and deletes all unassigned images older than one week.
There are several decisions to make:
The first about resource path:
Model the image as a resource on its own:
Nested in user (/user/:id/image): the relationship between the user and the image is made implicitly
In the root path (/image):
The client is held responsible for establishing the relationship between the image and the user, or;
If a security context is being provided with the POST request used to create an image, the server can implicitly establish a relationship between the authenticated user and the image.
Embed the image as part of the user
The second decision is about how to represent the image resource:
As a Base64-encoded JSON payload
As a multipart payload
This would be my decision track:
I usually favor design over performance unless there is a strong case for it. It makes the system more maintainable and can be more easily understood by integrators.
So my first thought is to go for a Base64 representation of the image resource because it lets you keep everything JSON. If you choose this option you can model the resource path as you like.
If the relationship between user and image is 1 to 1, I'd favor modeling the image as an attribute, especially if both data sets are updated at the same time. In any other case you can freely choose to model the image either as an attribute, updating it via PUT or PATCH, or as a separate resource.
If you choose a multipart payload, I'd feel compelled to model the image as a resource on its own, so that other resources, in our case the user resource, are not impacted by the decision to use a binary representation for the image.
Then comes the question: is there any performance impact in choosing Base64 vs multipart? We might think that exchanging data in multipart format should be more efficient, but this article shows how little the two representations differ in terms of size.
My choice Base64:
Consistent design decision
Negligible performance impact
As browsers understand data URIs (base64 encoded images), there is no need to transform these if the client is a browser
I won't cast a vote on whether to have it as an attribute or standalone resource, it depends on your problem domain (which I don't know) and your personal preference.
Your second solution is probably the most correct. You should use the HTTP spec and mimetypes the way they were intended and upload the file via multipart/form-data. As far as handling the relationships, I'd use this process (keeping in mind I know zero about your assumptions or system design):
POST to /users to create the user entity.
POST the image to /images, making sure to return a Location header to where the image can be retrieved per the HTTP spec.
PATCH to /users/4/carPhoto and assign it the ID of the photo given in the Location header of step 2.
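(A sketch of step 3 from the client side using java.net.http from Java 11+; the /users/4/carPhoto path, the host name and the JSON shape are placeholders, and the photo ID would be whatever came back in the Location header of step 2.)

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AssignPhoto {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Placeholder id extracted from the Location header returned by POST /images
        String json = "{\"id\": \"img_123\"}";

        HttpRequest patch = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/4/carPhoto"))
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = client.send(patch, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}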
There's no easy solution. Each way has its pros and cons. But the canonical way is using the first option: multipart/form-data. As the W3C recommendation says:
The content type "multipart/form-data" should be used for submitting forms that contain files, non-ASCII data, and binary data.
We aren't really sending forms, but the implicit principle still applies. Using Base64 as a binary representation is incorrect because you're using the wrong tool to accomplish your goal; on the other hand, the second option forces your API clients to do more work in order to consume your API service. You should do the hard work on the server side in order to supply an easy-to-consume API. The first option is not easy to debug, but once you get it working, it probably never changes.
Using multipart/form-data, you're sticking with the REST/HTTP philosophy. You can view an answer to a similar question here.
Another option is to mix the alternatives: you can use multipart/form-data, but instead of sending every value separately, you can send a single value named payload with the JSON payload inside it. (I tried this approach using ASP.NET Web API 2 and it works fine.)

Is it possible to let the client choose the right translation of a page without scripting?

I have written a website for a local Go meeting in Berlin. It is translated into German, English and Chinese. Currently, I use the naming scheme index.<lang>.html for the three translations and a navigation bar on top to let the user choose.
Is it possible to use meta tags on the index.html (which currently is just a symlink) to let the user agent automagically redirect to the site with the right language if possible? I am interested in solutions that neither involve reconfiguring the server nor need JavaScript to be enabled, although the first one might be possible.
You can use HTTP content negotiation to select a version that best matches the language preference information that the browser sends. So it is possible without scripting, but you need to set things up in the server for the negotiation.
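For example, with Apache and the index.<lang>.html naming scheme already in use, something along these lines would do it (a sketch, assuming mod_negotiation is enabled and the language suffixes are de, en and zh; the existing index.html symlink would have to be removed so that a request for the bare name actually triggers negotiation):

# Let Apache choose between index.de.html, index.en.html and index.zh.html
Options +MultiViews
AddLanguage de .de
AddLanguage en .en
AddLanguage zh .zh
# Served when nothing in Accept-Language matches
LanguagePriority en de zh
ForceLanguagePriority Fallback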
However, this is not very practical, because the language preference information cannot be relied on. It is mostly based on browser defaults, since few users even know about the relevant settings in the browser, still less set them appropriately.
Is it possible to use meta tags on the index.html (which currently is just a symlink) to let the user agent automagically redirect to the site with the right language if possible?
No.
If you want automatic selection, then you need to pay attention to the Accept header in the request. That needs server configuration or scripting.
Without it, the best you can have is links to the translations of the document which the user can select manually.

How should web sites deal with localization settings? (from “What are common UI misconceptions and annoyances?”)

I’ve chosen to take this as a question in its own right since it was generating so much debate in the comments of the original post.
It’s interesting to see that a lot of people on SO (who are developers) just don't get localization. Here’s my take on how it should work:
In all browsers that I've looked at (and for the .NET developers out there too) when you look at a user's culture preferences it is in the following format:
language-Culture.
So we have:
en-GB - English language - UK culture
en-US - English language - US culture
en - English language - Invariant culture.
fr-FR – French language – French culture
fr-CH – French language – Swiss culture
de-CH – German language – Swiss culture
de-DE – German language – German culture
See MSDN for a complete list that the .NET framework supports.
When I go to a website, it knows that I want the English language from the en part, and it knows I’m interested in it being slanted to the UK (number formatting, date formatting). So when I go to google.com and it takes me to google.de (because of my IP address), that would be completely fine if google.de displayed everything to me in English, but it's completely wrong since google.de is in German. I have little control over my IP address but complete control over my language and culture settings. If you’re interested, Microsoft’s new search engine (bing.com) handles things properly. Let's hope Microsoft can learn how to do search as well as Google, or Google can learn to localize as well as Microsoft ;)
MSDN has another good article here for more information
So what are your recommendations for how sites should deal with localizations?
The solution here is so simple, it's annoying that devs do anything else.
Respect the browser setting. If it says English then by god it's English.
If you absolutely must, then simply add a button at the top to pick something else. Then, and ONLY then, do you override the browser.
If you think your way is better. Stop, have someone slap you. It's not. Repeat as necessary.
Get rid of those web splash pages that ask for someone's country. Just show your normal page, based on the browser defaults, and see item 2 above. I have yet to run into a site where it actually matters. update: a few years later and there is now a reason to do this. In 2013 the UK instituted policies surrounding cookies that website operators need to respect for sites based in that country that are serving pages to visitors from that country. So pay attention to the laws in the countries you are hosted in.
If you happen to have a site that really is served by multiple servers across multiple countries, then you can probably detect which of your servers is closest to serve from. If you can't, just stop the redirecting madness and don't try to make the determination for them.
If localization settings are available - including, but not limited to, the HTTP Accept-Language header - then websites absolutely should respect them.
The common argument against this is that "average users" aren't smart enough to find the language settings and configure them to match their own preferences, so these settings are, more often than not, incorrect (unless the user happens to be within the US).
That is the wrong solution.
If a substantial segment of the user population can't find (or can't be bothered to find) their browser's language settings, then the correct response is to make them easier to find, not for sites to ignore what they've been set to. Perhaps make language settings directly accessible from the program's top-level menu instead of burying them inside an over-complicated "Preferences" dialog. Perhaps ask for language preferences the first time the program is run. Perhaps use the operating system's localization settings. Or maybe something completely different, if that's what it takes to make it near-certain that the browser will be sending correct information about the user's preferences. But don't just throw up your hands, say "it's useless and can't be fixed!", and ignore it.
Other answers have talked about letting the user choose a language or locale in their profile on the site, which is also important and absolutely should be standard, but that's just to provide a site-specific override to the user's normal settings. If the user has not overridden this on the site, though, the correct action is to default to the most-preferred available language/locale as specified in their browser settings, not to base it on geolocation of their IP address.
At one point in my career, I maintained parts of TCP/IP stack. That puts me in the somewhat rare position of knowing very well that IP addresses should not be used as anything other than Network-layer addresses. Any association between an IP address and a location is all but coincidental - it's an artifact of the way addresses are distributed, not any fundamental part of what an IP address means.
(They're also not useful as the unique identifier of a computer, but that's a different story)
I suggest leaving geolocation out of it. The HTTP standard includes a way for a browser or other user agent to include the user's culture preferences with each request (and remember, it's a list of weighted preferences, not necessarily just one culture). Since the browser is closer to the user than you are, you should honor this request, at least as the default.
It's ok to then permit the user to change their preference for your site, either temporarily or permanently. It's even ok to allow the user to choose to view different content with different culture settings. A wild example would be a site that includes both political news and technical information. It's quite reasonable that someone would want the news in their "natural" language, but the technical information in English.
Finally, it's ok to have a fallback pattern. If, for instance, you have a site that services users based on their region (resellers, for instance), then it's possible that Japanese content only exists on your Asian regional sub-site. A Japanese-speaking user visiting your EMEA site might just be stuck seeing English content, which might very well be his last choice.
On the sites I create I usually follow this pattern:
Each page has a unique URL with the language in it somewhere, usually like /en/page or a different (sub)domain
If the user opens a URL with an unspecified language like /page I start to guess:
Is a cookie from a previous session available?
If not, is Accept-Language available and can I map it to a language available on the site?
If not, if it's a possibility, can I guess by IP?
If not, default to the site's default language.
I set a cookie with the guessed language and redirect the user to a site with the appropriate URL
I put a language switch on every page, so /en/page can easily be switched to /xx/page
Cookie gets updated if the user switches to a different page
Ideally I only have to guess once and from then on use the user's cookie, or the user visits the desired page directly.
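(A compact sketch of that guessing order in Java; the supported list, the default, and the parameter names are placeholders, and the caller is still responsible for setting the cookie and issuing the redirect to the language-specific URL.)

import java.util.List;
import java.util.Locale;

public class LanguageGuesser {

    // Languages the site actually has translations for, and its default (placeholders)
    private static final List<Locale> SUPPORTED = List.of(Locale.ENGLISH, Locale.GERMAN);
    private static final Locale DEFAULT = Locale.ENGLISH;

    // cookieValue, acceptLanguage and ipGuess may each be null if that signal is missing
    static Locale guess(String cookieValue, String acceptLanguage, Locale ipGuess) {
        // 1. A cookie from a previous visit wins
        if (cookieValue != null) {
            return Locale.forLanguageTag(cookieValue);
        }
        // 2. Best match from Accept-Language against what the site ships
        if (acceptLanguage != null) {
            List<Locale> matches = Locale.filter(Locale.LanguageRange.parse(acceptLanguage), SUPPORTED);
            if (!matches.isEmpty()) {
                return matches.get(0);
            }
        }
        // 3. GeoIP guess, if available and supported
        if (ipGuess != null && SUPPORTED.contains(ipGuess)) {
            return ipGuess;
        }
        // 4. Site default; the caller then sets the cookie and redirects to /<lang>/page
        return DEFAULT;
    }
}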
I agree, give the user the chance to override them with user preferences in your app. This is especially handy for things like timezone localization issues which you can't derive from browser settings.
I risk being considered impolite, but I think my post on this topic will have more informative answers, mostly because my post is really a question. I am sorry though that I did not find that post before.
There's a difference between smart defaults and the ability of users to override them. In big apps I've worked on, I've assumed the user's locale from browser settings, geolocation, etc. -- but always given users a way to easily switch.
I don't know how else one would do that. Not giving users a chance to correct your assumptions is deeply problematic, because you're going to get it wrong some of the time.
ADDITION:
I think your problem here is that while you can edit your locale settings, if they look basically identical to the default, there's no way for an application developer to tell if you left it as-is intentionally, or because you don't know how or why to change it.
I suggest honoring users' localization settings, except if the setting is the overwhelming default, which users may not change. For example, I believe the great majority (90+%) of users with an en-us setting geolocated in Vietnam would almost always be better served by seeing Vietnamese content, rather than US English content, as long as there's a trivial way to switch locales. On the flip side, if a user geolocated in the US has a Vietnamese setting, by all means give him or her Vietnamese content.
Is this irritating for US-English users in Vietnam? Sure. But it's also the greatest good for the greatest number, and helps ensure that average non-technical users get the best real-world experience. Until we can hold a gun to users' heads and force them to honestly declare their language/culture preferences before turning on a computer, we're going to need heuristics like this.
I have seen enough forceful bug reports from customers that, when investigated, turn out to be caused by one of their users having the browser's culture setting wrong, that we now let the customer override the browser with a config setting. The browser's culture setting is wrong often enough that it is not very useful, and it is also too hard for most end users to find or change.

Detecting a (naughty or nice) URL or link in a text string

How can I detect (with regular expressions or heuristics) a web site link in a string of text such as a comment?
The purpose is to prevent spam. HTML is stripped so I need to detect invitations to copy-and-paste. It should not be economical for a spammer to post links because most users could not successfully get to the page. I would like suggestions, references, or discussion on best-practices.
Some objectives:
The low-hanging fruit like well-formed URLs (http://some-fqdn/some/valid/path.ext)
URLs but without the http:// prefix (i.e. a valid FQDN + valid HTTP path)
Any other funny business
Of course, I am blocking spam, but the same process could be used to auto-link text.
Ideas
Here are some things I'm thinking.
The content is native-language prose so I can be trigger-happy in detection
Should I strip out all whitespace first, to catch "www .example.com"? Would common users know to remove the space themselves, or do any browsers "do-what-I-mean" and strip it for you?
Maybe multiple passes is a better strategy, with scans for:
Well-formed URLs
All non-whitespace followed by '.' followed by any valid TLD
Anything else?
Related Questions
I've read these and they are now documented here, so you can just reference the regexes in those questions if you want.
replace URL with HTML Links javascript
What is the best regular expression to check if a string is a valid URL
Getting parts of a URL (Regex)
Update and Summary
Wow, there are some very good heuristics listed in here! For me, the best bang for the buck is a synthesis of the following:
#Jon Bright's technique of detecting TLDs (a good defensive chokepoint)
For those suspicious strings, replace the dot with a dot-looking character as per #capar
A good dot-looking character is #Sharkey's subscripted &middot; ("·"). The middot is also a word boundary, so it's harder to casually copy & paste.
That should make a spammer's CPM low enough for my needs; the "flag as inappropriate" user feedback should catch anything else. Other solutions listed are also very useful:
Strip out all dotted-quads (#Sharkey's comment to his own answer)
#Sporkmonger's requirement for client-side Javascript which inserts a required hidden field into the form.
Pinging the URL server-side to establish whether it is a web site. (Perhaps I could run the HTML through SpamAssassin or another Bayesian filter as per #Nathan..)
Looking at Chrome's source for its smart address bar to see what clever tricks Google uses
Calling out to OWASP AntiSAMY or other web services for spam/malware detection.
I'm concentrating my answer on trying to avoid spammers. This leads to two sub-assumptions: the people using the system will therefore be actively trying to contravene your check and your goal is only to detect the presence of a URL, not to extract the complete URL. This solution would look different if your goal is something else.
I think your best bet is going to be with the TLD. There are the two-letter ccTLDs and the (currently) comparatively small list of others. These need to be prefixed by a dot and suffixed by either a slash or some word boundary. As others have noted, this isn't going to be perfect. There's no way to get "buyfunkypharmaceuticals . it" without disallowing the legitimate "I tried again. it doesn't work" or similar. All of that said, this would be my suggestion:
[^\b]\.([a-zA-Z]{2}|aero|asia|biz|cat|com|coop|edu|gov|info|int|jobs|mil|mobi|museum|name|net|org|pro|tel|travel)[\b/]
Things this will get:
buyfunkypharmaceuticals.it
google.com
http://stackoverflow.com/questions/700163/
It will of course break as soon as people start obfuscating their URLs, replacing "." with " dot ". But, again assuming spammers are your goal here, if they start doing that sort of thing, their click-through rates are going to drop another couple of orders of magnitude toward zero. The set of people informed enough to deobfuscate a URL and the set of people uninformed enough to visit spam sites have, I think, a minuscule intersection. This solution should let you detect all URLs that are copy-and-pasteable to the address bar, whilst keeping collateral damage to a bare minimum.
I'm not sure if detecting URLs with a regex is the right way to solve this problem. Usually you will miss some sort of obscure edge case that spammers will be able to exploit if they are motivated enough.
If your goal is just to filter spam out of comments then you might want to think about Bayesian filtering. It has proved to be very accurate in flagging email as spam, it might be able to do the same for you as well, depending on the volume of text you need to filter.
I know this doesn't help with auto-link text but what if you search and replaced all full-stop periods with a character that looks like the same thing, such as the unicode character for hebrew point hiriq (U+05B4)?
The following paragraph is an example:
This might workִ The period looks a bit odd but it is still readableִ The benefit of course is that anyone copying and pasting wwwִgoogleִcom won't get too farִ :)
Well, obviously the low hanging fruit are things that start with http:// and www. Trying to filter out things like "www . g mail . com" leads to interesting philosophical questions about how far you want to go. Do you want to take it the next step and filter out "www dot gee mail dot com" also? How about abstract descriptions of a URL, like "The abbreviation for world wide web followed by a dot, followed by the letter g, followed by the word mail followed by a dot, concluded with the TLD abbreviation for commercial".
It's important to draw the line of what sorts of things you're going to try to filter before you continue with trying to design your algorithm. I think that the line should be drawn at the level where "gmail.com" is considered a url, but "gmail. com" is not. Otherwise, you're likely to get false positives every time someone fails to capitalize the first letter in a sentence.
Since you are primarily looking for invitations to copy and paste into a browser address bar, it might be worth taking a look at the code used in open source browsers (such as Chrome or Mozilla) to decide if the text entered into the "address bar equivalent" is a search query or a URL navigation attempt.
Ping the possible URL
If you don't mind a little server side computation, what about something like this?
urls = []
for possible_url in extracted_urls(comment):
    if pingable(possible_url):
        urls.append(possible_url)  # you could do this as a list comprehension, but OP may not know Python
Here:
extracted_urls takes in a comment and uses a conservative regex to pull out possible candidates
pingable actually uses a system call to determine whether the hostname exists on the web. You could have a simple wrapper parse the output of ping.
[ramanujan:~/base]$ping -c 1 www.google.com
PING www.l.google.com (74.125.19.147): 56 data bytes
64 bytes from 74.125.19.147: icmp_seq=0 ttl=246 time=18.317 ms
--- www.l.google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 18.317/18.317/18.317/0.000 ms
[ramanujan:~/base]$ping -c 1 fooalksdflajkd.com
ping: cannot resolve fooalksdflajkd.com: Unknown host
The downside is that if the host gives a 404, you won't detect it, but this is a pretty good first cut -- the ultimate way to verify that an address is a website is to try to navigate to it. You could also try wget'ing that URL, but that's more heavyweight.
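(A rough JVM equivalent of pingable, assuming a successful DNS lookup is a good-enough proxy for "this host exists"; it does not send an actual ICMP ping, which usually needs extra privileges anyway.)

import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {

    // Returns true if the hostname resolves in DNS; it does not prove a web site is actually served there
    static boolean resolvable(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(resolvable("www.google.com"));     // true (on a connected machine)
        System.out.println(resolvable("fooalksdflajkd.com")); // very likely false
    }
}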
Having made several attempts at writing this exact piece of code, I can say unequivocally, you won't be able to do this with absolute reliability, and you certainly won't be able to detect all of the URI forms allowed by the RFC. Fortunately, since you have a very limited set of URLs you're interested in, you can use any of the techniques above.
However, the other thing I can say with a great deal of certainty, is that if you really want to beat spammers, the best way to do that is to use JavaScript. Send a chunk of JavaScript that performs some calculation, and repeat the calculation on the server side. The JavaScript should copy the result of the calculation to a hidden field so that when the comment is submitted, the result of the calculation is submitted as well. Verify on the server side that the calculation is correct. The only way around this technique is for spammers to manually enter comments or for them to start running a JavaScript engine just for you. I used this technique to reduce the spam on my site from 100+/day to one or two per year. Now the only spam I ever get is entered by humans manually. It's weird to get on-topic spam.
Of course you realize that if spammers decide to use TinyURL or similar services to shorten their URLs, your problem just got worse. You might have to write some code to look up the actual URLs in that case, using a service like TinyURL decoder
Consider incorporating the OWASP AntiSAMY API...
I like capar's answer the best so far, but dealing with unicode fonts can be a bit fraught, with older browsers often displaying a funny thing or a little box ... and the location of the U+05B4 is a bit odd ... for me, it appears outside the pipes here |ִ| even though it's between them.
There's a handy &middot; (·) though, which breaks cut and paste in the same way. Its vertical alignment can be corrected by <sub>ing it, eg:
stackoverflow·com
Perverse, but effective in FF3 anyway, it can't be cut-and-pasted as a URL. The <sub> is actually quite nice as it makes it visually obvious why the URL can't be pasted.
Dots which aren't in suspected URLs can be left alone, so for example you could do
s/\b\.\b/<sub>&middot;<\/sub>/g
Another option is to insert some kind of zero-width entity next to suspect dots, but things like &zwj; and &zwnj; and &zwsp; don't seem to work in FF3.
There are already some great answers in here, so I won't post more. I will give a couple of gotchas though. First, make sure to test for known protocols; anything else may be naughty. As someone whose hobby concerns telnet links, you will probably want to include more than http(s) in your search, but may want to prevent, say, aim: or some other URLs. Second, many people will delimit their links in angle brackets (gt/lt) like <http://theroughnecks.net> or in parens "(url)", and there's nothing worse than clicking a link and having the closing > or ) go along with the rest of the URL.
P.S. sorry for the self-referencing plugs ;)
I needed just the detection of simple http URLs with or without protocol, assuming that either the protocol or a 'www' prefix is given. I found the above-mentioned link quite helpful, but in the end I came up with this:
http(s?)://(\S+\.)+\S+|www\d?\.(\S+\.)+\S+
This does, obviously, not test compliance with the DNS standard.
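(If it helps, the same expression dropped into java.util.regex; only the backslashes are doubled for the string literal, the pattern itself is unchanged from above.)

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UrlSniffer {

    private static final Pattern URL = Pattern.compile(
            "http(s?)://(\\S+\\.)+\\S+|www\\d?\\.(\\S+\\.)+\\S+");

    public static void main(String[] args) {
        Matcher m = URL.matcher("check out www.example.com and https://foo.bar/baz please");
        while (m.find()) {
            System.out.println(m.group()); // www.example.com, then https://foo.bar/baz
        }
    }
}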
Given the messes of "other funny business" that I see in Disqus comment spam in the form of look-alike characters, the first thing you'll want to do is deal with that.
Luckily, the Unicode people have you covered. Dig up an implementation of the TR39 Skeleton Algorithm for Unicode Confusables in your programming language of choice and pair it with some Unicode normalization and Unicode-aware upper/lower-casing.
The skeleton algorithm uses a lookup table maintained by the Unicode people to do something conceptually similar to case-folding.
(The output may not use sensible characters, but, if you apply it to both sides of the comparison, you'll get a match if the characters are visually similar enough for a human to get the intent.)
Here's an example from this Java implementation:
// Skeleton representations of unicode strings containing
// confusable characters are equal
skeleton("paypal").equals(skeleton("paypal")); // true
skeleton("paypal").equals(skeleton("𝔭𝒶ỿ𝕡𝕒ℓ")); // true
skeleton("paypal").equals(skeleton("ρ⍺у𝓅𝒂ן")); // true
skeleton("ρ⍺у𝓅𝒂ן").equals(skeleton("𝔭𝒶ỿ𝕡𝕒ℓ")); // true
// The skeleton representation does not transform case
skeleton("payPal").equals(skeleton("paypal")); // false
// The skeleton representation does not remove diacritics
skeleton("paypal").equals(skeleton("pàỳpąl")); // false
(As you can see, you'll want to do some other normalization first.)
Given that you're doing URL detection for the purpose of judging whether something's spam, this is probably one of those uncommon situations where it'd be safe to start by normalizing the Unicode to NFKD and then stripping codepoints declared to be combining characters.
(You'd then want to normalize the case before feeding them to the skeleton algorithm.)
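(A minimal Java sketch of that pre-processing, purely as an assumption about how you might wire it up; the skeleton() call itself would come from whichever confusables implementation you picked, so it is not shown here.)

import java.text.Normalizer;
import java.util.Locale;

public class Preprocess {

    // NFKD-decompose, drop combining marks, then lower-case before handing the string to skeleton()
    static String prepare(String input) {
        String decomposed = Normalizer.normalize(input, Normalizer.Form.NFKD);
        String stripped = decomposed.replaceAll("\\p{M}+", ""); // remove combining (mark) codepoints
        return stripped.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(prepare("pàỳpąl")); // paypal
        System.out.println(prepare("payPal")); // paypal
    }
}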
I'd advise that you do one of the following:
Write your code to run a confusables check both before and after the characters get decomposed, in case things are considered confusables before being decomposed but not after, and check both uppercased and lowercased strings in case the confusables tables aren't symmetrical between the upper and lowercase forms.
Investigate whether #1 is actually a concern (no need to waste CPU time if it isn't) by writing a little script to inspect the Unicode tables and identify any codepoints where decomposing or lowercasing/uppercasing a pair of characters changes whether they're considered confusable with each other.