Loose searching, e.g. such that "htlm" would find "html5"

I have a huge database with keywords such as html, html5, xhtml, and so on.
The user can search for rooms and as of now it is merely just implemented as
[...] WHERE name LIKE '%keyword%' LIMIT 20;
This is a simple solution to start with, but it is not fault-tolerant, and users make a lot of mistakes. To improve this, I would like to introduce a "loose search", meaning that if "htlm" returns no or only a few (fewer than, say, 10) matches, the search adds "html" and similar keywords to the list.
The real question now is: How do I do that?
Does this 'loose searching' have a technical term?

This is definitely part of text retrieval and is also called fuzzy matching or approximate string matching. For instance, go to Google, type "MSYQL" and it will recommend "MYSQL" instead.
Here is a typical approach. Start with a list of all valid keywords. Yes, that is the place to begin. In many text applications, this would be called a lexicon.
Then look for your search term(s) in the list of valid keywords. If you do not find any, use something called "Levenshtein distance" (described here) to find the closest matches, and use those in your search. If you search for "Levenshtein distance mysql", you will find implementations of the algorithm here.
If you have just a few known misspellings, then you can also solve the problem with a thesaurus. This replaces one search term with other terms that might match.
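For illustration, here is a minimal Python sketch of that lexicon-plus-fallback idea, using difflib from the standard library as a stand-in for a proper Levenshtein implementation (the lexicon entries are made up):

import difflib

lexicon = ["html", "html5", "xhtml", "css", "mysql"]  # all valid keywords

def loosen(term, max_suggestions=3):
    """Return the term itself if it is a known keyword,
    otherwise the closest keywords from the lexicon."""
    if term in lexicon:
        return [term]
    return difflib.get_close_matches(term, lexicon, n=max_suggestions)

print(loosen("htlm"))  # e.g. ['html', 'html5', 'xhtml']

Each suggestion can then be fed back into the original ... WHERE name LIKE '%keyword%' ... query.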

How to integrate "Did you mean" functionality in rails?

How can you implement the "Did you mean: " like Google does in some search queries?
PS: I am using Sphinx in my product. Can you suggest how I can implement this? Any guides, or suggestions for other search engines that have this functionality, are most welcome.
I am using Rails 2.3.8, if that helps.
One solution could be:
Make a dictionary of known "keywords" or "phrases", and if the search action finds nothing, run a secondary query against that dictionary. Update the dictionary whenever a searchable entry is created, say a blog post or username.
query = "supreman"
dictionary = ["superman", "batman", "hanuman" ...] (in DB table)
search(query)
if no results, then
search in dictionary (where "keyword" LIKE query or "phrase" LIKE query) => "superman"
Check in sphinx or solr documentation. They might have a better implementation of this "Like" query which returns a % match.
display -> Did you mean "superman"?
But the point is: how do you make it efficient?
Have a look at the Damerau-Levenshtein distance algorithm. It calculates the "distance" between two strings, i.e. how many steps it takes to transform one string into the other. The fewer steps, the closer the two strings are.
This article shows the algorithm implemented as a MySQL stored function.
The algorithm is so much better than LIKE or SOUNDEX.
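To make the idea concrete, here is the restricted ("optimal string alignment") variant of the Damerau-Levenshtein distance as a plain Python function; the MySQL stored function in the linked article computes the same kind of recurrence:

def damerau_levenshtein(a, b):
    """Insertions, deletions, substitutions, and adjacent
    transpositions needed to turn `a` into `b`."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("htlm", "html"))  # 1: a single transposition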
I believe Google uses crowdsourced data rather than an algorithm, i.e. if a user types in abcd, clicks on the back button, and then immediately searches for abd, it establishes a relationship between the two search terms, as the user wasn't happy with the results. Once you have a very large community searching, the pattern appears.
You should take a look at the actual theory of how Google implements something like this: How to Write a Spelling Corrector.
Although that article is written in Python, there are links to implementations in other languages at the bottom of the article. Here is a Ruby implementation.
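For a taste of the article's approach, here is a condensed sketch of its candidate-generation core (`known` is a hypothetical word-frequency dictionary; Norvig's original has the full version):

import string

def edits1(word):
    """All strings one edit away from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in string.ascii_lowercase]
    inserts = [l + c + r for l, r in splits for c in string.ascii_lowercase]
    return set(deletes + transposes + replaces + inserts)

def correct(word, known):
    """Pick the known candidate with the highest frequency."""
    candidates = ({word} & known.keys()) or (edits1(word) & known.keys()) or {word}
    return max(candidates, key=lambda w: known.get(w, 0))

print(correct("msyql", {"mysql": 100, "html": 50}))  # mysql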
I think you're looking for string matching algorithms.
I remember mislav's gist used to raise errors when initialize was slightly misspelled. That might be a good read.
Also, take a look at some of the articles he suggests:
http://www.catalysoft.com/articles/StrikeAMatch.html
http://www.catalysoft.com/articles/MatchingSimilarStrings.html
Nowadays the "did you mean" feature is often implemented with a phonetic spell corrector. When we misspell, we generally write phonetically similar words. Based on this idea, a phonetic spell corrector searches its database for the most similar word. Similarity ties are broken using context (for a multi-word query, the other words help in deciding the correct word) and the popularity of the word. If two words are phonetically very close to the misspelled word, then the word that fits the context and is more frequently used in daily life is chosen.
This works for me:
SELECT * FROM table_name WHERE soundex(field_name) LIKE CONCAT('%', soundex('searching_element'), '%')
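To see what that query compares, here is a small pure-Python version of the classic 4-character American Soundex (MySQL's soundex() can emit longer codes, but the idea is the same):

def soundex(word):
    codes = {c: d for chars, d in
             [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
              ("l", "4"), ("mn", "5"), ("r", "6")] for c in chars}
    word = "".join(c for c in word.lower() if c.isalpha())
    if not word:
        return ""
    digits, prev = [], codes.get(word[0], "")
    for c in word[1:]:
        d = codes.get(c, "")
        if d and d != prev:
            digits.append(d)
        if c not in "hw":  # h and w do not separate consonant codes
            prev = d
    return (word[0].upper() + "".join(digits) + "000")[:4]

print(soundex("searching"), soundex("serching"))  # both S625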

MySQL: Best way to do a backwards full text search?

I am trying to do basically a reverse full text search but have no clue of the best way to go about doing it.
Basically I have a table of key phrases laid out like this:
id - phrase
1 - "hello world"
2 - "goodbye world"
3 - "this is my world"
I then have a set string, such as "Welcome to the hello world group". I want to find the IDs of all rows in my table whose phrase has an exact match in that string. Meaning "o the" would not match because the word is "to the". Also "ello" would not match because the word is "hello".
Using Full Text Search, this can easily be achieved by doing a search of:
MATCH (field_name) AGAINST ('"hello world"' IN BOOLEAN MODE);
The problem is, I don't believe I can use a full text search, since a full text search finds all rows that contain a single phrase. I want all phrases (from a known set of phrases) that match a single string.
I know how to do this using RegEx with the following, however it is way too slow. On a table with 400,000 key phrases it took over 40 seconds:
WHERE "the data I know I want to search goes here" REGEXP CONCAT('[[:<:]]', phrases, '[[:>:]]')
What I need is a more optimized way to do this. How would I go about doing this as a full text search, even if I have to temporarily add the string to a table, without doing a LOOP that individually checks each keyword?
I really appreciate the feedback, as this is really causing my site to lag when adding new data.
If you are willing to consider a solution that reads the phrases out of the database and constructs a separate data structure used for optimized phrase detection, there are two main techniques that solve the problem. Which one is best for you depends on a number of factors, in particular:
How frequently the phrase list is updated
Whether and how you tokenise the text before running the phrase detection
How long the target strings are
Option 1: Hash table of the phrases

This means you simply insert each of the phrases as a key into a hash table (aka dictionary or hash map in many programming languages). The phrase id becomes the value. Updates are fast and easy, but detecting the phrases in a given string can be hard: firstly, you need to tokenise the string and be sure that phrases only occur between token boundaries; secondly, you need to make a lookup in the hash not only for every token, but also for every pair, triple, quadruple, etc. of consecutive tokens. This still works well if the target strings are generally short. You can also maintain a copy of the hash table on disk, e.g. using Berkeley DB. There are ready-to-use modules in the standard library of most programming languages for this.
Option 2: Search trie (or, slightly more advanced, a minimised search trie or a finite state machine)

This can be implemented in very space-efficient ways but is generally larger than a hash table (although 400k entries will not be a problem at all). The big advantage during phrase detection is that you need not cut out tokens (or candidate phrases between token boundaries) before making look-ups. Instead you perform a longest-match look-up at each candidate start position in the text. Storing on disk is possible, although in most programming languages there won't be a standard-library module for this. Updates are quite easy in a trie, but can get difficult (and potentially time-consuming) in a minimised trie or FST.
Both options allow the data structure to be maintained on disk (or a copy of it to be stored on disk, while the actual look-ups happen in memory). But you won't get transaction safety or fault-tolerance (which I understand you are not looking for).
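As a minimal sketch of Option 1, assuming the phrases have already been read out of the database (table contents taken from the question):

import re

phrase_ids = {"hello world": 1, "goodbye world": 2, "this is my world": 3}
MAX_TOKENS = max(len(p.split()) for p in phrase_ids)

def find_phrases(text):
    """Ids of all stored phrases occurring in `text` on whole-word boundaries."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    hits = set()
    for i in range(len(tokens)):
        for n in range(1, MAX_TOKENS + 1):
            candidate = " ".join(tokens[i:i + n])
            if candidate in phrase_ids:
                hits.add(phrase_ids[candidate])
    return hits

print(find_phrases("Welcome to the hello world group"))  # {1}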
You could use a search engine, for example Solr. You can set specific search filters against the text, search for whole words only, and it will be blindingly fast.
Or, as a second idea, you could create your own table that stores every word along with the id of the phrase it belongs to, and search that table matching words only. It will be faster because you can index individual words better than whole phrases.

How would I find common misspellings of a given word using aspell or another tool

For a given word I'd like to find the n closest misspellings. I was wondering if an open source spell checker like aspell would be useful in that context unless you have other suggestions.
For example: 'health'
would give me: ealth, halth, heallth, healf, ...
Spelling correction tools take misspelled words and offer possible correctly spelled alternatives. You seem to want to go in the other direction.
Going from a correctly spelled word to a set of possible misspellings could probably be done by applying a set of mutation heuristics to common words, as sketched after this list. These heuristics might do things like:
randomly adding or removing single characters
randomly applying transpositions of pairs of characters
changing characters to other characters based on keyboard layouts
applying common "point" misspellings, e.g. transposing "ie" to "ei", doubling or undoubling "l"s
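A minimal Python sketch of these heuristics (the keyboard-adjacency table is deliberately tiny and purely illustrative):

ADJACENT = {"e": "wrd", "a": "qsz", "h": "gjn", "t": "ryg", "l": "kp"}

def misspellings(word):
    """Candidate misspellings via deletion, doubling, keyboard slips,
    transposition, and a couple of 'point' misspellings."""
    out = set()
    for i, c in enumerate(word):
        out.add(word[:i] + word[i + 1:])              # drop a character
        out.add(word[:i] + c + word[i:])              # double a character
        for k in ADJACENT.get(c, ""):                 # keyboard slip
            out.add(word[:i] + k + word[i + 1:])
        if i < len(word) - 1:                         # transpose a pair
            out.add(word[:i] + word[i + 1] + c + word[i + 2:])
    out.add(word.replace("ie", "ei", 1))              # point misspellings
    out.add(word.replace("ei", "ie", 1))
    out.discard(word)
    return out

print({"ealth", "halth", "heallth"} <= misspellings("health"))  # True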
Going from a correctly spelled word to a set of common misspellings is really hard. Probably the only reliable way to do this would be to instrument a spelling checker package used by a large community of users, record the actual spelling corrections made using the spelling checker, and aggregate the results. That is probably (!) beyond the scope of your project.
On revisiting my answer, I think I've missed something.
My heuristics above are mostly for typing errors rather than misspellings. A typing error is where the user knows the correct spelling but mistypes the word. A misspelling is where the person doesn't know the correct spelling of a word, and uses either incorrect knowledge or intuition (i.e. a guess). Typical guesses are based on listening to what the word sounds like, and then picking a spelling that (if correct) would most likely be pronounced that way.
So a good heuristic for predicting misspellings would need to be based on what the word actually sounds like when spoken. That requires a phonetic dictionary (to go from the actual word to its pronunciation) and a set of rules for generating plausible spellings of the phonetic word. That's more complicated than simple heuristics for typing errors.

Best practices for searching for alternate forms of a word with Lucene

I have a site which is searchable using Lucene. I've noticed from logs that users sometimes don't find what they're looking for because they enter a singular term, but only the plural version of that term is used on the site. I would like the search to find uses of other forms of a word as well. This is a problem that I'm sure has been solved many times over, so what are the best practices for this?
Please note: this site only has English content.
Some approaches I've thought of:
Look up the word in some kind of thesaurus file to determine alternate forms of a given word.
Some examples:
Searches for "car", also add "cars" to the query.
Searches for "carry", also add "carries" and "carried" to the query.
Searches for "small", also add "smaller" and "smallest" to the query.
Searches for "can", also add "can't", "cannot", "cans", and "canned" to the query.
And it should work in reverse (i.e. search for "carries" should add "carry" and "carried").
Drawbacks:
Doesn't work for many new technical words unless the dictionary/thesaurus is updated frequently.
I'm not sure about the performance of searching the thesaurus file.
Generate the alternate forms algorithmically, based on some heuristics.
Some examples:
If the word ends in "s" or "es" or "ed" or "er" or "est", drop the suffix
If the word ends in "ies" or "ied" or "ier" or "iest", convert to "y"
If the word ends in "y", convert to "ies", "ied", "ier", and "iest"
Try adding "s", "es", "er" and "est" to the word.
Drawbacks:
Generates lots of non-words for most inputs.
Feels like a hack.
Looks like something you'd find on TheDailyWTF.com. :)
Something much more sophisticated?
I'm thinking of doing some kind of combination of the first two approaches, but I'm not sure where to find a thesaurus file (or what it's called, as "thesaurus" isn't quite right, but neither is "dictionary").
Consider including the PorterStemFilter in your analysis pipeline. Be sure to perform the same analysis on queries that is used when building the index.
I've also used the Lancaster stemming algorithm with good results. Using the PorterStemFilter as a guide, it is easy to integrate with Lucene.
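Lucene's PorterStemFilter is Java, but you can preview what Porter stemming will do to your vocabulary with NLTK's implementation in Python (a stand-in here, not Lucene itself):

from nltk.stem import PorterStemmer

stem = PorterStemmer().stem
for w in ["car", "cars", "carry", "carries", "carried"]:
    print(w, "->", stem(w))
# car -> car, cars -> car, carry -> carri, carries -> carri, carried -> carri

Note that "carri" is not a word; that is fine, because the same reduction happens at index time and at query time.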
Word stemming works OK for English; however, for languages where word stemming is nearly impossible (like mine), option #1 is viable. I know of at least one such implementation for my language (Icelandic) for Lucene that seems to work very well.
Some of those look like pretty neat ideas. Personally, I would just add some tags to the query (query transformation) to make it fuzzy, or you can use the built-in FuzzyQuery, which uses Levenshtein edit distances and would help with misspellings.
With fuzzy search "query tags", Levenshtein is also used. Consider a search for "car": if you change the query to "car~", it will find "car" and "cars" and so on. There are other transformations to the query that should handle almost everything you need.
If you're working in a specialised field (I did this with horticulture) or with a language that doesn't play nicely with normal stemming methods, you could use the query logging to create a manual stemming table.
Just create a word -> stem mapping for all the mismatches you can think of or that people are searching for, then, when indexing or searching, replace any word that occurs in the table with the appropriate stem. Thanks to query caching this is a pretty cheap solution.
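A sketch of such a table in Python, with invented horticulture-flavoured entries:

STEM_TABLE = {
    "fuchsias": "fuchsia",
    "fuschia": "fuchsia",   # common misspelling seen in query logs
    "seedlings": "seedling",
}

def apply_stems(tokens):
    """Run at both index time and search time."""
    return [STEM_TABLE.get(t, t) for t in tokens]

print(apply_stems(["hardy", "fuschia", "seedlings"]))
# ['hardy', 'fuchsia', 'seedling']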
Stemming is a pretty standard way to address this issue. I've found that the Porter stemmer is way too aggressive for standard keyword search; it ends up conflating words that have different meanings. Try the KStemmer algorithm.

How do I do a fuzzy match of company names in MySQL with PHP for auto-complete?

My users will import, through cut and paste, a large string that will contain company names.
I have an existing and growing MySQL database of company names, each with a unique company_id.
I want to be able to parse through the string and assign each of the user-inputted company names a fuzzy match.
Right now, just doing a straight-up string match is also slow. Will Soundex indexing be faster? How can I give the user some options as they are typing?
For example, someone writes:
Microsoft -> Microsoft
Bare Essentials -> Bare Escentuals
Polycom, Inc. -> Polycom
I have found the following threads that seem similar to this question, but the posters have not accepted an answer and I'm not sure if their use cases are applicable:
How to find best fuzzy match for a string in a large string database
Matching inexact company names in Java
You can start with SOUNDEX(); this will probably do for what you need (I picture an auto-suggestion box of already-existing alternatives for what the user is typing).
The drawbacks of SOUNDEX() are:
its inability to differentiate longer strings. Only the first few characters are taken into account, longer strings that diverge at the end generate the same SOUNDEX value
the fact that the first letter must be the same or you won't find a match easily. SQL Server has a DIFFERENCE() function to tell you how far apart two SOUNDEX values are, but I think MySQL has nothing of that kind built in.
for MySQL, at least according to the docs, SOUNDEX is broken for Unicode input
Example:
SELECT SOUNDEX('Microsoft')
SELECT SOUNDEX('Microsift')
SELECT SOUNDEX('Microsift Corporation')
SELECT SOUNDEX('Microsift Subsidary')
/* all of these return 'M262' */
For more advanced needs, I think you need to look at the Levenshtein distance (also called "edit distance") of two strings and work with a threshold. This is the more complex (=slower) solution, but it allows for greater flexibility.
The main drawback is that you need both strings to calculate the distance between them. With SOUNDEX you can store a pre-calculated SOUNDEX value in your table and compare/sort/group/filter on that. With the Levenshtein distance, you might find that the difference between "Microsoft" and "Nzcrosoft" is only 2, but it will take a lot more time to come to that result.
In any case, an example Levenshtein distance function for MySQL can be found at codejanitor.com: Levenshtein Distance as a MySQL Stored Function (Feb. 10th, 2007).
SOUNDEX is an OK algorithm for this, but there have been advances on this topic since. Another algorithm called Metaphone was created, and it was later revised into the Double Metaphone algorithm. I have personally used the Java Apache Commons implementation of Double Metaphone, and it is customizable and accurate.
There are implementations in lots of other languages on the Wikipedia page for it, too. This question has been answered, but should you find any of the identified problems with SOUNDEX appearing in your application, it's nice to know there are options. Sometimes SOUNDEX can generate the same code for two really different words; Double Metaphone was created to help take care of that problem.
Stolen from Wikipedia: http://en.wikipedia.org/wiki/Soundex

As a response to deficiencies in the Soundex algorithm, Lawrence Philips developed the Metaphone algorithm for the same purpose. Philips later developed an improvement to Metaphone, which he called Double-Metaphone. Double-Metaphone includes a much larger encoding rule set than its predecessor, handles a subset of non-Latin characters, and returns a primary and a secondary encoding to account for different pronunciations of a single word in English.
At the bottom of the Double Metaphone page, they have implementations of it for all kinds of programming languages: http://en.wikipedia.org/wiki/Double-Metaphone
Python & MySQL implementation: https://github.com/AtomBoy/double-metaphone
Firstly, I would like to add that you should be very careful when using any form of phonetic/fuzzy matching algorithm, as this kind of logic is exactly that: fuzzy, or to put it more simply, potentially inaccurate. That is especially true when it is used for matching company names.
A good approach is to seek corroboration from other data, such as address information, postal codes, telephone numbers, geo-coordinates, etc. This will help confirm the probability of your data being accurately matched.
There are a whole range of issues related to B2B data matching, too many to be addressed here; I have written more about company name matching in my blog (also an updated article), but in summary the key issues are:
Looking at the whole string is unhelpful, as the most important part of a company name is not necessarily at the beginning, e.g. ‘The Proctor and Gamble Company’ or ‘United States Federal Reserve’.
Abbreviations are commonplace in company names, i.e. HP, GM, GE, P&G, D&B, etc.
Some companies deliberately spell their names incorrectly as part of their branding and to differentiate themselves from other companies.
Matching exact data is easy, but matching non-exact data can be much more time-consuming, and I would suggest that you consider how you will validate the non-exact matches to ensure they are of acceptable quality.
Before we built Match2Lists.com, we used to spend an unhealthy amount of time validating fuzzy matches. In Match2Lists we incorporated a powerful visualisation tool enabling us to review non-exact matches; this proved to be a real game changer in terms of match validation, reducing our costs and enabling us to deliver results much more quickly.
Best of Luck!!
Here's a link to the PHP discussion of the Soundex functions in MySQL and PHP. I'd start from there, then expand into your other not-so-well-defined requirements.
Your reference references the Levenshtein methodology for matching. Two problems: 1. it's more appropriate for measuring the difference between two known words, not for searching; 2. it discusses a solution designed more to detect things like proofing errors (using "Levenshtien" for "Levenshtein") than spelling errors (where the user doesn't know how to spell, say, "Levenshtein", and types in "Levinstein"). I usually associate it with looking for a phrase in a book rather than a key value in a database.
EDIT: In response to comment--
1. Can you at least get the users to put the company names into multiple text boxes?
2. Use an unambiguous name delimiter (say a backslash).
3. Leave out articles ("The") and generic abbreviations (or filter for these).
4. Squoosh the spaces out and match on that as well (so Micro Soft => microsoft, Bare Essentials => bareessentials).
5. Filter out punctuation.
6. Do "OR" searches on words ("bare" OR "essentials"); people will inevitably leave one or the other out sometimes.
Points 3-5 are sketched below.
Test like mad and use the feedback loop from users.
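A minimal Python sketch of points 3-5 above (the stop-word list is illustrative):

import re

DROPPED = {"the", "inc", "corp", "co", "ltd", "llc"}

def squoosh(name):
    """Lowercase, strip punctuation, drop articles/abbreviations, remove spaces."""
    words = re.findall(r"[a-z0-9]+", name.lower())
    return "".join(w for w in words if w not in DROPPED)

print(squoosh("Micro Soft"))       # microsoft
print(squoosh("Bare Essentials"))  # bareessentials
print(squoosh("Polycom, Inc."))    # polycom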
The best function for fuzzy matching is levenshtein(). It's traditionally used by spell checkers, so that might be the way to go. There's a UDF for it available here: http://joshdrew.com/
The downside to using levenshtein() is that it won't scale very well. A better idea might be to dump the whole table into a spell checker custom dictionary file and do the suggestion from your application tier instead of the database tier.
This answer gives you indexed lookups of almost any entity, using input of 2 or 3 characters or more.
Basically, create a new table with 2 columns, word and key. Run a process on the original table containing the column to be fuzzy-searched. This process will extract every individual word from the original column and write these words to the word table along with the original key. During this process, commonly occurring words like 'the', 'and', etc. should be discarded.
We then create several indices on the word table, as follows...
A normal, lowercase index on word + key
An index on the 2nd through 5th character + key
An index on the 3rd through 6th character + key
Alternatively, create a SOUNDEX() index on the word column.
Once this is in place, we take any user input and search using normal word = input or LIKE input%. We never do a LIKE %input, as we are always looking for a match starting within the first 3 characters, which are all indexed.
If your original table is massive, you could partition the word table by chunks of the alphabet to ensure the user's input is being narrowed down to candidate rows immediately.
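Here is a rough Python sketch of building and querying such a word table, using the standard-library sqlite3 in place of MySQL (schema and data are illustrative):

import re
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE word (word TEXT, key INTEGER);
    CREATE INDEX word_key ON word (word, key);
""")
db.executemany("INSERT INTO company VALUES (?, ?)",
               [(1, "Bare Escentuals"), (2, "Polycom Inc"), (3, "Microsoft")])

STOPWORDS = {"the", "and", "inc", "corp"}
for key, name in db.execute("SELECT id, name FROM company").fetchall():
    for w in re.findall(r"[a-z0-9]+", name.lower()):
        if w not in STOPWORDS:
            db.execute("INSERT INTO word VALUES (?, ?)", (w, key))

# Prefix search only (LIKE 'esc%'), never LIKE '%esc', so the index is usable.
print(db.execute("SELECT DISTINCT key FROM word WHERE word LIKE ?",
                 ("esc%",)).fetchall())  # [(1,)]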
Though the question asks about how to do fuzzy searches in MySQL, I'd recommend considering using a separate fuzzy search (aka typo tolerant) engine to accomplish this. Here are some search engines to consider:
ElasticSearch (Open source, has a ton of features, and so is also complex to operate)
Algolia (Proprietary, but has great docs and super easy to get up and running)
Typesense (Open source, provides the same fuzzy search-as-you-type feature as Algolia)
Check if it's spelled wrong before querying, using a trusted and well-tested spell checking library on the server side, then do a simple query for the original text AND the first suggested correct spelling (if the spell check determined it was misspelled).
You can create custom dictionaries for any spell check library worth using, which you may need to do for matching more obscure company names.
It's way faster to match against two simple strings than it is to do a Levenshtein distance calculation against an entire table. MySQL is not well suited for this.
I tackled a similar problem recently and wasted a lot of time fiddling around with algorithms, so I really wish there had been more people out there cautioning against doing this in MySQL.
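As a sketch of this approach, here is what it might look like with the third-party pyspellchecker package in Python, loading the company names as a custom dictionary (API as I understand it; verify against the package docs):

from spellchecker import SpellChecker

spell = SpellChecker(language=None)  # start from an empty dictionary
spell.word_frequency.load_words(["microsoft", "polycom", "escentuals"])

query = "microsift"
suggestion = spell.correction(query)  # expected: 'microsoft'
terms = {query} | ({suggestion} if suggestion else set())
# then run one simple query: ... WHERE name IN (...) against these terms
print(terms)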
It has probably been suggested before, but why not dump the data out to Excel and use the Fuzzy Match Excel plugin? This will give a score from 0 to 1 (1 being 100%).
I did this for business partner (company) data that was held in a database.
Download the latest UK Companies House data and score against that.
For ROW data it's more complex, as we had to do a more manual process.