I am making a quiz system, and when quizmakers insert questions into the Question Bank, I need to check the DB for duplicate or very similar questions.
Testing MySQL's MATCH() ... AGAINST(), the highest relevance I get is 30+, even when I test against a 100% identical string.
So what exactly is the relevance? To quote the manual:
Relevance values are non-negative floating-point numbers. Zero relevance means no similarity. Relevance is computed based on the number of words in the row, the number of unique words in that row, the total number of words in the collection, and the number of documents (rows) that contain a particular word.
My problem is how to tell from the relevance value whether a string is a duplicate. If it's a 100% duplicate, prevent it from being inserted into the Question Bank. But if it is only somewhat similar, prompt the quizmaker to verify, then insert or not. So how do I do that? 30+ for a 100% identical string is not a percentage, so I'm stumped.
Thanks in advance.
The basic data structure for a text retrieval system is an Inverted Index. This is essentially a list of words found in the document collection with a list of the documents they occur in. It can also have metadata about the occurrence for each document, such as the number of times the word appears.
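As a toy illustration of an inverted index in Python (plain dicts standing in for the real on-disk structures):

from collections import defaultdict

docs = {1: "hello world", 2: "goodbye world", 3: "hello hello quiz"}

# word -> {doc_id: number of times the word appears in that document}
index = defaultdict(dict)
for doc_id, text in docs.items():
    for word in text.split():
        index[word][doc_id] = index[word].get(doc_id, 0) + 1

print(dict(index["hello"]))  # {1: 1, 3: 2}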
Documents containing the words can be queried by matching on the search terms. To determine relevance, a heuristic known as cosine ranking is calculated on the hits. This works by constructing an n-dimensional vector with one component for each of the n search terms. You can also weight the search terms if desired. This vector gives a point in n-dimensional space that corresponds to your search terms.
A similar vector based on the weighted occurrences in each document can be constructed from the inverted index, with each axis corresponding to the axis for each search term. If you calculate the dot product of these vectors (normalised to unit length) you get the cosine of the angle between them. 1.0 is equivalent to cos(0), which occurs when the vectors lie on a common line through the origin. The closer the vectors are to each other, the smaller the angle and the closer the cosine is to 1.0.
If you sort the search results by the cosine (or bung them into a priority queue as mg does) you get the most relevant ones first. Cleverer relevance algorithms tend to fiddle with the weights of the search terms, skewing the dot product in favour of terms with high relevance.
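As a rough illustration of the cosine calculation in Python (plain term-frequency weights here, which is cruder than the weighting a real ranking function uses):

import math
from collections import Counter

def cosine_similarity(query_terms, doc_terms):
    # Cosine of the angle between two term-frequency vectors;
    # 1.0 means the weighted term distributions are identical.
    q, d = Counter(query_terms), Counter(doc_terms)
    dot = sum(q[t] * d[t] for t in set(q) & set(d))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    norm_d = math.sqrt(sum(v * v for v in d.values()))
    if norm_q == 0 or norm_d == 0:
        return 0.0
    return dot / (norm_q * norm_d)

print(cosine_similarity("what is a cat".split(), "what is a cat".split()))     # 1.0
print(cosine_similarity("what is a cat".split(), "dogs bark loudly".split()))  # 0.0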
If you want to dig a little, Managing Gigabytes by Witten, Moffat, and Bell discusses the internal architecture of text retrieval systems.
andygeers is on the right track: those numbers have no empirical meaning other than their relations to each other and cannot be used on their own to determine what is or is not an "exact match". You need to determine that yourself. Even aside from the limitations of fulltext search ranking, there's also the open question of just what you consider to constitute an "exact match". (Actual text only, or do soundex matches count? Do synonyms (e.g., "couch" vs. "sofa") count as matching or as distinct? Should an attempt be made to compensate for misspellings? Etc.)
If I had the need to perform such a check, I would grab only the highest-ranked entry returned by the fulltext search, remove any designated stopwords, normalize whitespace, convert to lowercase, do the comparison, and leave it at that until I encountered a case that called for it to be refined further. It's not really all that much extra work - if you specify the language you're using for your application, you could probably find someone around here who could write the normalization function within a dozen or so lines of code.
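For example, a rough Python sketch of that normalise-and-compare step; the stopword list is just a placeholder you would tune:

import re

STOPWORDS = {"a", "an", "the", "of", "is"}  # placeholder list, extend to taste

def normalize(text):
    # Lowercase, strip punctuation, drop stopwords, collapse whitespace.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return " ".join(w for w in words if w not in STOPWORDS)

def is_duplicate(new_question, best_fulltext_hit):
    # Compare the incoming question against the top-ranked fulltext hit only.
    return normalize(new_question) == normalize(best_fulltext_hit)

print(is_duplicate("What is the capital of France?",
                   "what is the capital of  FRANCE"))  # True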
I don't know the specifics of the MySQL function you're using, but I imagine it could be that there is no absolute meaning for those numbers - they're just designed to be compared with other values produced by the same function. To check for an absolute match you could select out the text itself and compare manually.
I have a dataset which is a list of prefix ranges, and the prefixes aren't all the same size. Here are a few examples:
low: 54661601 high: 54661679 "bin": a
low: 526219100 high: 526219199 "bin": b
low: 4305870404 high: 4305870404 "bin": c
I want to look up which "bin" corresponds to a particular value with the corresponding prefix. For example, value 5466160179125211 would correspond to "bin" a. In the case of overlaps (of which there are few), we could return either the longest prefix or all prefixes.
The optimal algorithm is clearly some sort of tree into which the bin objects could be inserted, where each successive level of the tree represents more and more of the prefix.
The question is: how do we implement this (in one query) in a database? It is permissible to alter/add to the data set. What would be the best data & query design for this? An answer using mongo or MySQL would be best.
If you make a mild assumption about the number of overlaps in your prefix ranges, it is possible to do what you want optimally using either MongoDB or MySQL. In my answer below, I'll illustrate with MongoDB, but it should be easy enough to port this answer to MySQL.
First, let's rephrase the problem a bit. When you talk about matching a "prefix range", I believe what you're actually talking about is finding the correct range under a lexicographic ordering (intuitively, this is just the natural alphabetic ordering of strings). For instance, the set of numbers whose prefix matches 54661601 to 54661679 is exactly the set of numbers which, when written as strings, are lexicographically greater than or equal to "54661601", but lexicographically less than "54661680". So the first thing you should do is add 1 to all your high bounds, so that you can express your queries this way. In mongo, your documents would look something like
{low: "54661601", high: "54661680", bin: "a"}
{low: "526219100", high: "526219200", bin: "b"}
{low: "4305870404", high: "4305870405", bin: "c"}
Now the problem becomes: given a set of one-dimensional intervals of the form [low, high), how can we quickly find which interval(s) contain a given point? The easiest way to do this is with an index on either the low or high field. Let's use the high field. In the mongo shell:
db.coll.ensureIndex({high : 1})
For now, let's assume that the intervals don't overlap at all. If this is the case, then for a given query point "x", the only possible interval containing "x" is the one with the smallest high value greater than "x". So we can query for that document and check if its low value is also less than or equal to "x". For instance, this will print out the matching interval, if there is one:
db.coll.find({high : {'$gt' : "5466160179125211"}}).sort({high : 1}).limit(1).forEach(
function(doc){ if (doc.low <= "5466160179125211") printjson(doc) }
)
Suppose now that instead of assuming the intervals don't overlap at all, you assume that every interval overlaps with less than k neighboring intervals (I don't know what value of k would make this true for you, but hopefully it's a small one). In that case, you can just replace 1 with k in the "limit" above, i.e.
db.coll.find({high : {'$gt' : "5466160179125211"}}).sort({high : 1}).limit(k).forEach(
function(doc){ if (doc.low <= "5466160179125211") printjson(doc) }
)
What's the running time of this algorithm? The indexes are stored using B-trees, so if there are n intervals in your data set, it takes O(log n) time to look up the first matching document by high value, then O(k) time to iterate over the next k documents, for a total of O(log n + k) time. If k is constant, or in fact anything less than O(log n), then this is asymptotically optimal (this is in the standard model of computation; I'm not counting the number of external memory transfers or anything fancy).
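To make the O(log n + k) shape concrete, here is a small in-memory Python sketch of the same lookup, with bisect standing in for the B-tree index and the example intervals from above (already converted to half-open bounds); it is an illustration, not how you would actually query MongoDB or MySQL:

import bisect

# Intervals as (high, low, bin), sorted by the exclusive high bound --
# this mirrors the index on the `high` field.
intervals = sorted([
    ("54661680",   "54661601",   "a"),
    ("526219200",  "526219100",  "b"),
    ("4305870405", "4305870404", "c"),
])
highs = [h for h, _, _ in intervals]

def find_bins(x, k=1):
    # O(log n) to find the first high bound strictly greater than x,
    # then O(k) to scan at most k candidate intervals.
    i = bisect.bisect_right(highs, x)
    return [b for h, low, b in intervals[i:i + k] if low <= x]

print(find_bins("5466160179125211"))  # ['a']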
The only case where this breaks down is when k is large, for instance if some large interval contains nearly all the other intervals. In this case, the running time is O(n). If your data is structured like this, then you'll probably want to use a different method. One approach is to use mongo's "2d" indexing, with your low and high values encoding the x and y coordinates. Then your queries would correspond to querying for points in a given region of the x-y plane. This might do well in practice, although with the current implementation of 2d indexing, the worst case is still O(n).
There are a number of theoretical results that achieve O(log n) performance for all values of k. They go by names such as Priority Search Trees, Segment trees, Interval Trees, etc. However, these are special-purpose data structures that you would have to implement yourself. As far as I know, no popular database currently implements them.
"Optimal" can mean different things to different people. It seems that you could do something like save your low and high values as varchars. Then all you have to do is
select bin from datatable where '5466160179125211' between low and high
Or if you had some reason to keep the values as integers in the table, you could do the CASTing in the query.
I have no idea whether this would give you terrible performance with a large dataset. And I hope I understand what you want to do.
With MySQL you may have to use a stored procedure, which you call to map a value to its bin. The procedure would query the list of buckets for each row and do arithmetic or string ops to find the matching bucket. You could improve this design by using fixed-length prefixes arranged in a fixed number of layers: assign a fixed depth to your tree and give each layer its own table. You won't get tree-like performance with either of these approaches.
If you want to do something more sophisticated, I suspect you have to use a different platform.
Sql Server has a Hierarchy data type:
http://technet.microsoft.com/en-us/library/bb677173.aspx
PostgreSQL has a cidr data type. I'm not familiar with the level of query support it has, but in theory you could build a routing table inside of your db and use that to assign buckets:
http://www.postgresql.org/docs/7.4/static/datatype-net-types.html#DATATYPE-CIDR
If you need to keep everything as integers, and want it to work with a single query, this should work:
select bin from datatable where 5466160179125211 between
low*pow(10, floor(log10(5466160179125211))-floor(log10(low)))
and ((high+1)*pow(10, floor(log10(5466160179125211))-floor(log10(high)))-1);
In this case, it would search between the numbers 5466160100000000 (the lowest number with the low prefix & the same number of digits as the number to find) and 5466167999999999 (the highest number with the high prefix & the same number of digits as the number to find). This should still work in cases where the high prefix has more digits than the low prefix. It should also work (I think) in cases where the number is shorter than the length of the prefixes, where the varchar code in the previous solution can give incorrect results.
You'll want to experiment to compare the performance of having a lot of inline math in the query (as in this solution) vs. the performance of using varchars.
Edit: Performance seems to be really good either way even on big tables with no indexes; if you can use varchars then you might be able to further boost performance by indexing the low and high columns. Note that you'd definitely want to use varchars if any of the prefixes have initial zeroes. Here's a fix to allow for the case where the number is shorter than the prefix when using varchars:
select * from datatable2 where '5466' between low and high
and length('5466') >= length(high);
I am trying to do basically a reverse full text search but have no clue of the best way to go about doing it.
Basically I have a table of key phrases laid out like this:
id - phrase
1 - "hello world"
2 - "goodbye world"
3 - "this is my world"
I then have a set string, such as "Welcome to the hello world group". I want to find the IDs of all rows in my table whose phrase appears as an exact match in that string. Meaning "o the" would not match, because the actual words are "to the". Also "ello" would not match, because the word is "hello".
Using Full Text Search, this can easily be achieved by doing a search of:
AGAINST ('"hello world"' IN BOOLEAN MODE);
Problem is, I don't believe I can use a full text search, since a full text search would find all rows that contain a single phrase. I want all phrases (from a known set of phrases) that match a single string.
I know how to do this using RegEx with the following, however it is way too slow. On a table with 400,000 key phrases it took over 40 seconds:
WHERE "the data I know I want to search goes here" REGEXP CONCAT('[[:<:]]', phrases, '[[:>:]]')
What I need is a more optimized way to do this. How would I go about doing this as a full text search, even if I have to temporarily add the string to a table, without having to LOOP and individually check each keyword?
I really appreciate the feedback, as this is really causing my site to lag when adding new data.
If you are willing to consider a solution that reads the phrases out of the database and constructs a separate data structure used for optimized phrase detection, there are two main techniques that solve the problem. Which one is best for you depends on a number of factors, in particular:
How frequently the phrase list is updated
Whether and how you tokenise the text before running the phrase detection
How long the target strings are
Option 1: Hash table of the phrases. This means you simply insert each of the phrases as a key into a hash table (aka dictionary or hash map in many programming languages). The phrase id becomes the value. Updates are fast and easy, but detecting the phrases in a given string can be hard: Firstly you need to tokenise the string and be sure that phrases only occur between token boundaries. Secondly, you need to make a lookup in the hash not only for every token, but also for every pair, triple, quadruple etc. of consecutive tokens. This still works well if the target strings are generally short. You can also maintain a copy of the hash table on disk, e.g. using the Berkeley DB. There are ready-to-use modules in the standard library of most programming languages for this.
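For illustration, a minimal Python sketch of Option 1, using a plain dict as the hash table and a naive whitespace split as the tokeniser (both of which you would refine in practice):

# Phrase table: phrase -> id (would be loaded from the database).
phrases = {"hello world": 1, "goodbye world": 2, "this is my world": 3}
max_len = max(len(p.split()) for p in phrases)  # longest phrase, in tokens

def find_phrases(text):
    # Look up every run of 1..max_len consecutive tokens in the hash table.
    tokens = text.lower().split()
    hits = []
    for start in range(len(tokens)):
        for length in range(1, min(max_len, len(tokens) - start) + 1):
            candidate = " ".join(tokens[start:start + length])
            if candidate in phrases:
                hits.append((phrases[candidate], candidate))
    return hits

print(find_phrases("Welcome to the hello world group"))  # [(1, 'hello world')]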
Option 2: Search trie (or, slightly more advanced, a minimised search trie or a finite state machine). This can be implemented in very space-efficient ways but is generally larger than a hash table (although 400k entries will not be a problem at all). The big advantage during phrase detection is that you need not cut out tokens (or candidate phrases between token boundaries) before making look-ups. Instead you perform a longest-match look-up at each candidate start position in the text. Storing on disk is possible, although in most programming languages there won't be a standard-library module for this. Updates are quite easy in a trie, but can get difficult (and potentially time-consuming) in a minimised trie or FST.
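And a compact character-level trie sketch of Option 2; a production version would be a minimised trie or FST as described, but the longest-match loop at each candidate start position is the same idea:

class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, phrase, phrase_id):
        node = self.root
        for ch in phrase:
            node = node.setdefault(ch, {})
        node["$"] = phrase_id  # end-of-phrase marker

    def longest_match(self, text, start):
        # Longest phrase beginning at position `start`, or None.
        node, best = self.root, None
        for i in range(start, len(text)):
            if text[i] not in node:
                break
            node = node[text[i]]
            if "$" in node:
                best = (node["$"], text[start:i + 1])
        return best

trie = Trie()
for pid, phrase in [(1, "hello world"), (2, "goodbye world")]:
    trie.insert(phrase, pid)

text = "welcome to the hello world group"
matches = []
for pos in range(len(text)):
    m = trie.longest_match(text, pos)
    if m:
        matches.append(m)
print(matches)  # [(1, 'hello world')]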
Both options allow the data structure to be maintained on disk (or a copy of it to be stored on disk, while the actual look-ups happen in memory). But you won't get transaction safety or fault-tolerance (which I understand you are not looking for).
You can use a search engine, for example Solr. You can set specific search filters against text, search for words only, and it will be blindingly fast.
Or, as a second idea, you can create your own table that stores each word together with the id of its phrase, and search that table matching words only. It will be faster because you can index individual words more effectively than whole phrases.
A directed acyclic word graph is a great data structure for certain tasks. I can't find any information on the time complexity of performing a lookup though.
I would guess it depends linearly on the average word length, and logarithmically on the number of words in the graph.
So is it O(L * log W), where W is the number of words and L is the average word length?
I think the complexity is just O(L). The number of operations is proportional to the length of the word, and it does not matter how many entries the structure has. (There might be differences based on the implementation of node searching, but even in the worst case with the worst implementation that is just a constant factor, with an upper limit equal to the size of the alphabet.)
I’d say it’s just O(L). For each lookup of a word of n characters, you always follow at most n edges, irrespective of how many other edges there are.
(That’s assuming a standard DAWG in which each node has outgoing edges for every letter of the alphabet, i.e. 26 for English. Even if you have fewer outgoing edges per node and therefore more levels, the number of edges to follow is still at most a constant multiple of n, so we still get O(L).)
How many words you already have in your structure seems to be irrelevant.
Even if, at each step, you perform a linear search for the correct edge to follow from the current node, this is still constant-time because the alphabet is bounded, and therefore so is the number of outgoing edges from each node.
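To illustrate, here is a toy lookup in Python over a nested-dict structure (a real DAWG shares suffix nodes, but the lookup loop is identical); it follows at most one edge per character of the query word:

# Hand-built toy structure for {"cat", "car", "cart"}.
dawg = {"c": {"a": {"t": {"end": True},
                    "r": {"end": True,
                          "t": {"end": True}}}}}

def contains(word):
    node = dawg
    for ch in word:  # at most len(word) iterations -- O(L)
        if ch not in node:
            return False
        node = node[ch]
    return node.get("end", False)

print(contains("cart"), contains("ca"))  # True False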
There is a lot of information in the literature which says that the time to search a trie is O(N) where N is the length of the pattern.
However, building the trie will also take some time. Let's say there are X words with a total of Y characters.
So then O(Y) is the time (because we have to insert each character). Is this assessment correct? (I am usually not correct.)
So then O(Y) is the time (because we have to insert each character)
Sure, you have to process each input character, and either follow an existing branch or insert a new char.
It can't be faster than O(Y), since you have to look at each input char. There's neither sorting nor any other operation that could make it slower.
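A quick Python sketch to make the build cost concrete (nested dicts standing in for trie nodes): the loop does exactly one unit of work per input character, so the total is proportional to Y.

def build_trie(words):
    root, steps = {}, 0
    for word in words:
        node = root
        for ch in word:
            steps += 1  # one step per input character
            node = node.setdefault(ch, {})
        node["end"] = True
    return root, steps

words = ["to", "tea", "ten", "inn"]
_, steps = build_trie(words)
print(steps, sum(len(w) for w in words))  # 11 11 -- build cost is O(Y)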
Wrong. Creating a trie and searching a trie are two different algorithms. One wouldn't build a trie, search it, then throw away the entire data structure.
My users will import through cut and paste a large string that will contain company names.
I have an existing and growing MySQL database of company names, each with a unique company_id.
I want to be able to parse through the string and assign to each of the user-inputed company names a fuzzy match.
Right now, just doing a straight-up string match is also slow. Will Soundex indexing be faster? How can I give the user some options as they are typing?
For example, someone writes:
Microsoft -> Microsoft
Bare Essentials -> Bare Escentuals
Polycom, Inc. -> Polycom
I have found the following threads that seem similar to this question, but the poster has not accepted an answer and I'm not sure if their use-case is applicable:
How to find best fuzzy match for a string in a large string database
Matching inexact company names in Java
You can start with using SOUNDEX(), this will probably do for what you need (I picture an auto-suggestion box of already-existing alternatives for what the user is typing).
The drawbacks of SOUNDEX() are:
its inability to differentiate longer strings. Only the first few characters are taken into account; longer strings that diverge only at the end generate the same SOUNDEX value
the fact that the first letter must be the same or you won't find a match easily. SQL Server has a DIFFERENCE() function to tell you how far apart two SOUNDEX values are, but I think MySQL has nothing of that kind built in.
for MySQL, at least according to the docs, SOUNDEX is broken for unicode input
Example:
SELECT SOUNDEX('Microsoft')
SELECT SOUNDEX('Microsift')
SELECT SOUNDEX('Microsift Corporation')
SELECT SOUNDEX('Microsift Subsidary')
/* all of these return 'M262' */
For more advanced needs, I think you need to look at the Levenshtein distance (also called "edit distance") of two strings and work with a threshold. This is the more complex (=slower) solution, but it allows for greater flexibility.
The main drawback is that you need both strings to calculate the distance between them. With SOUNDEX you can store a pre-calculated SOUNDEX value in your table and compare/sort/group/filter on that. With the Levenshtein distance, you might find that the difference between "Microsoft" and "Nzcrosoft" is only 2, but it will take a lot more time to come to that result.
In any case, an example Levenshtein distance function for MySQL can be found at codejanitor.com: Levenshtein Distance as a MySQL Stored Function (Feb. 10th, 2007).
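For reference, here is the textbook dynamic-programming version in Python; as far as I can tell, the linked stored function computes the same recurrence:

def levenshtein(a, b):
    # Minimum number of single-character insertions, deletions and
    # substitutions needed to turn string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("Microsoft", "Nzcrosoft"))  # 2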
SOUNDEX is an OK algorithm for this, but there have been more recent advances on this topic. Another algorithm called Metaphone was created, and it was later revised into the Double Metaphone algorithm. I have personally used the Java Apache Commons implementation of Double Metaphone and it is customizable and accurate.
They have implementations in lots of other languages on the Wikipedia page for it, too. This question has been answered, but should you find any of the identified problems with SOUNDEX appearing in your application, it's nice to know there are options. Sometimes SOUNDEX can generate the same code for two really different words. Double Metaphone was created to help take care of that problem.
Stolen from wikipedia: http://en.wikipedia.org/wiki/Soundex
As a response to deficiencies in the Soundex algorithm, Lawrence Philips developed the Metaphone algorithm for the same purpose. Philips later developed an improvement to Metaphone, which he called Double-Metaphone. Double-Metaphone includes a much larger encoding rule set than its predecessor, handles a subset of non-Latin characters, and returns a primary and a secondary encoding to account for different pronunciations of a single word in English.
At the bottom of the double metaphone page, they have the implementations of it for all kinds of programming languages: http://en.wikipedia.org/wiki/Double-Metaphone
Python & MySQL implementation: https://github.com/AtomBoy/double-metaphone
Firstly, I would like to add that you should be very careful when using any form of phonetic/fuzzy matching algorithm, as this kind of logic is exactly that: fuzzy, or to put it more simply, potentially inaccurate. That is especially true when used for matching company names.
A good approach is to seek corroboration from other data, such as address information, postal codes, telephone numbers, geo-coordinates, etc. This will help confirm the probability of your data being accurately matched.
There is a whole range of issues related to B2B data matching, too many to be addressed here. I have written more about company name matching in my blog (also an updated article), but in summary the key issues are:
Looking at the whole string is unhelpful, as the most important part of a company name is not necessarily at the beginning of the name, e.g. 'The Proctor and Gamble Company' or 'United States Federal Reserve'.
Abbreviations are commonplace in company names, e.g. HP, GM, GE, P&G, D&B, etc.
Some companies deliberately spell their names incorrectly as part of their branding and to differentiate themselves from other companies.
Matching exact data is easy, but matching non-exact data can be much more time-consuming, and I would suggest that you consider how you will validate the non-exact matches to ensure they are of acceptable quality.
Before we built Match2Lists.com, we used to spend an unhealthy amount of time validating fuzzy matches. In Match2Lists we incorporated a powerful visualisation tool enabling us to review non-exact matches; this proved to be a real game changer in terms of match validation, reducing our costs and enabling us to deliver results much more quickly.
Best of Luck!!
Here's a link to the PHP discussion of the soundex functions in MySQL and PHP. I'd start from there, then expand into your other not-so-well-defined requirements.
Your reference mentions the Levenshtein methodology for matching. Two problems: 1. It's more appropriate for measuring the difference between two known words, not for searching. 2. It discusses a solution designed more to detect things like proofing errors (using "Levenshtien" for "Levenshtein") rather than spelling errors (where the user doesn't know how to spell, say, "Levenshtein" and types in "Levinstein"). I usually associate it with looking for a phrase in a book rather than a key value in a database.
EDIT: In response to comment--
1. Can you at least get the users to put the company names into multiple text boxes?
2. Use an unambiguous name delimiter (say a backslash).
3. Leave out articles ("The") and generic abbreviations (or you can filter for these).
4. Squoosh the spaces out and match on that as well (so Micro Soft => microsoft, Bare Essentials => bareessentials).
5. Filter out punctuation.
6. Do "OR" searches on words ("bare" OR "essentials") - people will inevitably leave one or the other out sometimes.
Test like mad and use the feedback loop from users.
The best function for fuzzy matching is Levenshtein distance. It's traditionally used by spell checkers, so that might be the way to go. There's a UDF for it available here: http://joshdrew.com/
The downside to using Levenshtein is that it won't scale very well. A better idea might be to dump the whole table into a spell checker custom dictionary file and do the suggestion from your application tier instead of the database tier.
This answer results in indexed lookup of almost any entity using input of 2 or 3 characters or more.
Basically, create a new table with 2 columns, word and key. Run a process on the original table containing the column to be fuzzy searched. This process will extract every individual word from the original column and write these words to the word table along with the original key. During this process, commonly occurring words like 'the', 'and', etc. should be discarded.
We then create several indices on the word table, as follows...
A normal, lowercase index on word + key
An index on the 2nd through 5th character + key
An index on the 3rd through 6th character + key
Alternately, create a SOUNDEX() index on the word column.
Once this is in place, we take any user input and search using normal word = input or LIKE input%. We never do a LIKE %input as we are always looking for a match on any of the first 3 characters, which are all indexed.
If your original table is massive, you could partition the word table by chunks of the alphabet to ensure the user's input is being narrowed down to candidate rows immediately.
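Here is a rough end-to-end sketch of the word-table idea in Python; it uses sqlite3 purely so the example runs self-contained, and the table and column names are made up. In MySQL you would add the substring and/or SOUNDEX() indexes described above on top of this:

import re
import sqlite3

STOPWORDS = {"the", "and", "inc", "corp"}  # commonly occurring words to discard

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE word (word TEXT, key INTEGER)")
conn.execute("CREATE INDEX idx_word ON word (word, key)")

companies = [(1, "Bare Escentuals"), (2, "Polycom, Inc."), (3, "Microsoft")]
conn.executemany("INSERT INTO company VALUES (?, ?)", companies)

# Explode each name into individual lowercase words, dropping stopwords.
for key, name in companies:
    for w in re.findall(r"[a-z0-9]+", name.lower()):
        if w not in STOPWORDS:
            conn.execute("INSERT INTO word VALUES (?, ?)", (w, key))

# The user has typed "poly" -- prefix search only, so the index stays usable.
rows = conn.execute(
    "SELECT DISTINCT c.id, c.name FROM word w JOIN company c ON c.id = w.key "
    "WHERE w.word LIKE ?", ("poly%",)).fetchall()
print(rows)  # [(2, 'Polycom, Inc.')]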
Though the question asks about how to do fuzzy searches in MySQL, I'd recommend considering using a separate fuzzy search (aka typo tolerant) engine to accomplish this. Here are some search engines to consider:
ElasticSearch (Open source, has a ton of features, and so is also complex to operate)
Algolia (Proprietary, but has great docs and super easy to get up and running)
Typesense (Open source, provides the same fuzzy search-as-you-type feature as Algolia)
Check if it's spelled wrong before querying, using a trusted and well-tested spell-checking library on the server side; then do a simple query for the original text AND the first suggested correct spelling (if the spell check determined it was misspelled).
You can create custom dictionaries for any spell check library worth using, which you may need to do for matching more obscure company names.
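As a rough sketch of the idea in Python, using difflib from the standard library as a stand-in for a proper spell-checking library, with the custom dictionary built from company names already in the database (the names and the cutoff here are made up):

import difflib

known_names = ["microsoft", "polycom", "bare escentuals"]  # from your companies table

def names_to_query(user_input):
    # Return the original text plus the best suggested correction, if any.
    text = user_input.lower().strip()
    suggestions = difflib.get_close_matches(text, known_names, n=1, cutoff=0.8)
    queries = [text]
    if suggestions and suggestions[0] != text:
        queries.append(suggestions[0])
    return queries

print(names_to_query("Microsift"))        # ['microsift', 'microsoft']
print(names_to_query("Bare Essentials"))  # ['bare essentials', 'bare escentuals']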
It's way faster to match against two simple strings than it is to do a Levenshtein distance calculation against an entire table. MySQL is not well suited for this.
I tackled a similar problem recently and wasted a lot of time fiddling around with algorithms, so I really wish there had been more people out there cautioning against doing this in MySQL.
Probably been suggested before, but why not dump the data out to Excel and use the Fuzzy Match Excel plugin? This will give a score from 0 to 1 (1 being 100%).
I did this for business partner (company) data that was held in a database.
Download the latest UK Companies House data and score against that.
For ROW data it's more complex, as we had to do a more manual process.