How can I go about querying a database for non-similar, but almost matching items - mysql

How can I go about querying a database for items that not only exactly match a sample, but are also close to it? Much as search engines work, but only for a small project, preferably in Java. For example:
String sample = "Sample";
I would like to retrieve all the following whenever I query sample:
String exactMatch = "Sample";
String nonExactMatch = "S amp le";
String nonExactMatch_2 = "ampls";

You need to define what "similar" means in terms that your database can understand.
Levenshtein distance is one possibility.
In your example, sample matches...
..."Sample", if you search without case sensitivity.
..."S amp le", if you remove a set of ignored characters (here space only) from both the query string and the target string. You can store the new value in the database:
ActualValue      SearchFor
John Q. Smith    johnqsmith%
When someone searches for "John Q. Smith, Esq." you can boil it down to johnqsmithesq and run
WHERE 'johnqsmithesq' LIKE SearchFor
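In Java, that boiling-down step might look like the sketch below; the normalize() helper and the people / actual_value / search_for names are illustrative, not taken from any real schema:

import java.sql.*;

public class NormalizedLookup {
    // Lower-case and drop spaces/punctuation, so "John Q. Smith, Esq." becomes "johnqsmithesq".
    static String normalize(String s) {
        return s.toLowerCase().replaceAll("[^\\p{L}\\p{Nd}]", "");
    }

    // Hypothetical table "people" with columns actual_value and search_for (a LIKE pattern).
    static void find(Connection con, String query) throws SQLException {
        String sql = "SELECT actual_value FROM people WHERE ? LIKE search_for";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, normalize(query));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) System.out.println(rs.getString(1));
            }
        }
    }
}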
"ampls" is more tricky. Why is it that 'ampls' is matched by 'sample'? A common substring? A number of shared letters? Does their order count (i.e. are anagrams valid)? Many approaches are possible, but it is you who must decide. You might use Levenshtein distance, or maybe store a string such as "100020010003..." where every digit encodes the number of letters you have, up to 9 (so 3 C's and 2 B's but no A's would give "023...") and then run the Levenshtein distance between this syndrome and the one from each term in the DB:
ActualValue      Search1        Rhymes   abcdefghij_Contains   anagramOf
John Q. Smith    johnqsmith%    ith      0000000211011...      hhijmnoqst
...and so on.
One approach is to ask yourself: how must I transform both the search string and the stored value so that they match? Then implement that transformation in code.
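If you go the Levenshtein route, a plain dynamic-programming implementation is only a few lines of Java; this is the generic textbook version, not tied to any particular database:

// Classic dynamic-programming Levenshtein distance.
static int levenshtein(String a, String b) {
    int[][] d = new int[a.length() + 1][b.length() + 1];
    for (int i = 0; i <= a.length(); i++) d[i][0] = i;
    for (int j = 0; j <= b.length(); j++) d[0][j] = j;
    for (int i = 1; i <= a.length(); i++) {
        for (int j = 1; j <= b.length(); j++) {
            int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
            d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,    // deletion
                                        d[i][j - 1] + 1),   // insertion
                               d[i - 1][j - 1] + cost);     // substitution
        }
    }
    return d[a.length()][b.length()];
}
// levenshtein("sample", "ampls") == 2, for example.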

You can use MATCH ... AGAINST on MyISAM full-text indexed columns.
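A minimal JDBC sketch of that idea, assuming a MyISAM table articles with a FULLTEXT index on its body column (all names here are made up):

import java.sql.*;

public class FullTextSearch {
    // Assumes a MyISAM table "articles" with FULLTEXT(body); table and column names are illustrative.
    static void search(Connection con, String term) throws SQLException {
        String sql = "SELECT id, body FROM articles "
                   + "WHERE MATCH(body) AGAINST(? IN NATURAL LANGUAGE MODE)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, term);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("body"));
                }
            }
        }
    }
}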

How to correctly specify a quantifier for a group of words?

A table has a field containing a list of IDs separated by "-".
Example: 559-3319-3537-4345-29923
I need to check rows that use at least 4 of the specified identifiers using regex
Example: before inserting to the db, I need to check the value 559-3319-3537-29923-30762 for this condition.
I've built a pattern that only works when the IDs appear in the specified order; if they are swapped, it doesn't match.
Template: ^.*\b(-*(559|3319|3537|29923|30762)-*){4,}\b.*$
Initially, I thought that a simple (559|3319|3537|29923|30762){4,} should be enough, but it also doesn't work, although without the quantifier it does find all 4 values.
Please tell me how to write such an expression correctly.
For ease of reading/testing, I've simplified the Ids being searched for to single digit integers 1-5. The following pattern will match strings with at least 4 out of the 5 ids:
(\b(1|2|3|4|5)\b.*){4,}
Or, in MySQL's regex dialect:
([[:<:]](1|2|3|4|5)[[:>:]].*){4,}
Here are some examples:
#   Example      Is Match?   Description
1   1-2-3-4-5    YES         All the Ids
2   1-2-3-9-5    YES         Enough Ids
3   1-1-9-1-1    YES         Enough Ids, but there are repeats
4   9-8-7-6-0    NO          None of the Ids
5   1-2-3-9-9    NO          Some, but not enough of the Ids
If the repeated Ids as shown in example 3 are an issue, then regex is probably not a good fit for this problem.
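The same pattern can be sanity-checked outside MySQL as well; here is a small, throwaway Java test of the simplified pattern against the examples above (plain java.util.regex, so the word boundary is \b rather than [[:<:]]):

import java.util.regex.Pattern;

public class AtLeastFourIds {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("(\\b(1|2|3|4|5)\\b.*){4,}");
        String[] samples = {"1-2-3-4-5", "1-2-3-9-5", "1-1-9-1-1", "9-8-7-6-0", "1-2-3-9-9"};
        for (String s : samples) {
            // find() is enough here because the pattern is not anchored.
            System.out.println(s + " -> " + p.matcher(s).find());
        }
    }
}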
Edit:
^.*\b((559|3319|3537|29923|30762)-?([0-9]*)?-?){4,}\b.*$
The reasoning behind this is that each group is not just one of the 5 numbers, but it can include some extra characters. So the matched groups in your example are:
(559-)
(3319-)
(3537-4345-)
(29923)
Original answer:
This would be one way to do it (not sure if there are other ways to do it):
^.*\b(559|3319|3537|29923|30762)[0-9-]*(559|3319|3537|29923|30762)[0-9-]*(559|3319|3537|29923|30762)[0-9-]*(559|3319|3537|29923|30762)\b.*$
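As a quick sanity check (outside MySQL, in plain Java), the edited pattern from above matches the value from the question and also a reordered variant, while rejecting a value that contains too few of the Ids; the two extra test values below are made up:

import java.util.regex.Pattern;

public class CheckBeforeInsert {
    public static void main(String[] args) {
        Pattern p = Pattern.compile(
            "^.*\\b((559|3319|3537|29923|30762)-?([0-9]*)?-?){4,}\\b.*$");
        String[] values = {
            "559-3319-3537-29923-30762",   // value from the question
            "29923-559-3537-3319-42",      // same Ids in a different order (made up)
            "559-3319-9999-8888-7777"      // only two of the Ids (made up)
        };
        for (String v : values) {
            System.out.println(v + " -> " + p.matcher(v).matches());
        }
    }
}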

NetSuite Saved Search: REGEXP_SUBSTR Pattern troubles

I am trying to break down a string that looks like this:
|5~13~3.750~159.75~66.563~P20~~~~Bundle A~~|
Here is a second example for reference:
|106~10~0~120~1060.000~~~~~~~|
Here is a third example of a static sized item:
|3~~~~~~~~~~~5:12|
Example 4:
|3~23~5~281~70.250~upper r~~~~~~|
|8~22~6~270~180.000~center~~~~~~|
|16~22~1~265~353.333~center~~~~~~|
Sometimes there are multiple lines in the same string.
I am not super familiar with setting up patterns for regexp_substr and would love some assistance with this!
The string will always have '|' at the beginning and end and 11 '~'s used to separate the numeric/text values which I am hoping to obtain. Also some of the numeric characters have decimals while others do not. If it helps the values are separated like so:
|Quantity~ Feet~ Inch~ Unit inches~ Total feet~ Piece mark~ Punch Pattern~ Notch~ Punch~ Bundling~ Radius~ Pitch|
As you can see, if a value isn't specified it shows as blank, but it may be present in another string; it's rare for all of the values to have data.
For this specific case I believe regexp_substr will be my best option but if someone has another suggestion I'd be happy to give it a shot!
This is the Formula (Text) I was able to come up with so far:
REGEXP_SUBSTR({custbody_msm_cut_list},'[[:alnum:]. ]+|$',1,1)
This allows me to pull all the matches held in the strings, but if some fields are excluded it makes presenting the correct data difficult.
TRIM(REGEXP_SUBSTR({custbody_msm_cut_list}, '^\|(([^~]*)~){1}',1,1,'i',2))
From the start of the string, match the pipe character |, then match anything except a tilde ~, then match the tilde. Repeat that N times (the {1}). The final argument (2) returns the inner capture group, which holds the last of these repeats.
You can control how many tildes are processed by the integer in the braces {1}
EG:
TRIM(REGEXP_SUBSTR('|Quantity~ Feet~ Inch~ Unit inches~ Total feet~ Piece mark~ Punch Pattern~ Notch~ Punch~ Bundling~ Radius~ Pitch|', '^\|(([^~]*)~){1}',1,1,'i',2))
returns "Quantity"
TRIM(REGEXP_SUBSTR('|Quantity~ Feet~ Inch~~~ Piece mark~ Punch Pattern~ Notch~ Punch~ Bundling~ Radius~ Pitch|', '^\|(([^~]*)~){7}',1,1,'i',2))
returns "Punch Pattern"
The final value Pitch is a slightly special case as it is not followed by a tilde:
TRIM(REGEXP_SUBSTR('|~~~~~~~~~~ Radius~ Pitch|', '^\|(([^~]*)~){11}([^\|]*)',1,1,'i',3))
Adapted and improved from https://stackoverflow.com/a/70264782/7885772
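Outside of NetSuite, the same trick can be prototyped with plain Java regexes, which may help when experimenting with the field positions; the helper below is purely illustrative:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CutListField {
    // Returns the n-th tilde-separated field (1-based), mirroring the REGEXP_SUBSTR pattern above.
    static String field(String row, int n) {
        Pattern p = Pattern.compile("^\\|(([^~]*)~){" + n + "}");
        Matcher m = p.matcher(row);
        // Group 2 keeps only the last repetition of the inner ([^~]*) group.
        return m.find() ? m.group(2).trim() : null;
    }

    public static void main(String[] args) {
        String row = "|5~13~3.750~159.75~66.563~P20~~~~Bundle A~~|";
        System.out.println(field(row, 1));   // "5" (Quantity)
        System.out.println(field(row, 10));  // "Bundle A" (Bundling)
    }
}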

Synonym dictionary implementation?

How should I approach this problem? I basically need to implement a dictionary of synonyms. It takes as input some "word/synonym" pairs and I have to be able to "query" it for the list of all synonyms of a word.
For example:
Dictionary myDic;
myDic.Add("car", "automobile");
myDic.Add("car", "autovehicle");
myDic.Add("car", "vehicle");
myDic.Add("bike", "vehicle");
myDic.ListOSyns("car") // should return {"automobile","autovehicle","vehicle" ± "car"}
// but "bike" should NOT be among the words returned
I'll code this in C++, but I'm interested in an overall idea of the implementation, so the question is not exactly language-specific.
PS: The main idea is to have some groups of words (synonyms). In the example above there would be two such groups:
{"automobile","autovehicle","vehicle", "car"}
{"bike", "vehicle"}
"vehicle" belongs to both, "bike" just to the second one, the others just to the first
I would implement it as a Graph + hash table / search tree
each keyword would be a Vertex, and each connection between 2 keywords would be an edge.
a hash table or a search tree will connect from each word to its node (and vice versa).
when a query is submitted - you find the node with your hash/tree and do BFS/DFS of the required depth. (meaning you cannot continue after a certain depth)
complexity: O(E(d)+V(d)) for searching graph (d = depth) (E(d) = number of edges in the relevant depth, same for V(d))
O(1) for creating an edge (not including searching for the node, detailed below its search)
O(logn) / O(1) for finding node (for tree/hash table)
O(logn) /O(1) for adding a keyword to the tree/hash table and O(1) to add a Vertex
P.S. As mentioned in the comments to the question, the designer should keep in mind whether a directed or undirected graph is needed.
hope that helps...
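Since the question says it is not language-specific, here is a rough Java sketch of the graph-plus-hash-table idea; the class and method names are made up, and the edges are stored undirected:

import java.util.*;

class SynonymGraph {
    private final Map<String, Set<String>> adj = new HashMap<>();

    // O(1) on average: look up (or create) both vertices and add an undirected edge.
    void add(String word, String synonym) {
        adj.computeIfAbsent(word, k -> new HashSet<>()).add(synonym);
        adj.computeIfAbsent(synonym, k -> new HashSet<>()).add(word);
    }

    // BFS limited to maxDepth hops from the start word.
    Set<String> listSynonyms(String word, int maxDepth) {
        Set<String> seen = new HashSet<>(List.of(word));
        Deque<String> frontier = new ArrayDeque<>(List.of(word));
        for (int depth = 0; depth < maxDepth && !frontier.isEmpty(); depth++) {
            Deque<String> next = new ArrayDeque<>();
            for (String w : frontier) {
                for (String n : adj.getOrDefault(w, Set.of())) {
                    if (seen.add(n)) next.add(n);
                }
            }
            frontier = next;
        }
        seen.remove(word); // drop the query word itself
        return seen;
    }
}

With maxDepth = 1 this returns exactly the synonyms that were added for "car", which matches the example in the question; a larger depth would also pull in "bike" via "vehicle", which is the directed/undirected trade-off mentioned above.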
With the clarification in the comments to the question, it's relatively simple since you're not storing groups of mutual synonyms, but rather separately defining the acceptable synonyms for each word. The obvious container is either:
std::map<std::string, std::set<std::string> >
or:
std::multi_map<std::string, std::string>
if you're not worried about duplicates being inserted, like this:
myDic.Add("car", "automobile");
myDic.Add("car", "auto");
myDic.Add("car", "automobile");
In the case of multimap, use the equal_range member function to extract the synonyms for each word, maybe like this:
struct Dictionary {
    multimap<string, string> innermap;

    void Add(const string &key, const string &value) {
        innermap.insert(make_pair(key, value));
    }

    vector<string> ListOSyns(const string &key) const {
        typedef multimap<string, string>::const_iterator constit;
        pair<constit, constit> x = innermap.equal_range(key);
        vector<string> retval;
        for (constit it = x.first; it != x.second; ++it)
            retval.push_back(it->second);   // copy the mapped synonym, not the pair
        retval.push_back(key);
        return retval;
    }
};
Finally, if you prefer a hashtable-like structure to a tree-like structure, then unordered_multimap might be available in your C++ implementation, and basically the same code works.

Inverted search: Phrases per document

I have a database full of phrases (80-100 characters) and some longish documents (50-100 KB), and I would like a ranked list of phrases for a given document; rather than the usual output of a search engine, a list of documents for a given phrase.
I've used MYSQL fulltext indexing before, and looked into lucene, but never used it.
They both seem geared to comparing the short (a search term) with the long (a document).
How would you get the inverse of this?
I did something similar with a database of Wikipedia titles and managed to get down to a few hundred milliseconds for each ~50KB document. That was still not fast enough for my needs, but maybe it can work for you.
Basically the idea was to work with hashes as much as possible and only do string comparisons on possible matches, which are pretty rare.
First, you take your database and convert it into an array of hashes. If you have billions of phrases, this may not be for you. When you calculate the hash, be sure to pass the phrases through a tokenizer that will remove punctuation and whitespace. This part needs to be done only once.
Then, you go through the document with the same tokenizer, keeping a running list of the last 1, 2, ..., n tokens, hashed. At every iteration, you do a binary search of the hashes you have against the hash database.
When you find a match, you do the actual string comparison to see if you found a match.
Here's some code to give you a taste of what I mean, though this example doesn't actually do the string comparison:
HashSet<Long> foundHashes = new HashSet<Long>();
LinkedList<String> words = new LinkedList<String>();
for(int i=0; i<params.maxPhrase; i++) words.addLast("");
StandardTokenizer st = new StandardTokenizer(new StringReader(docText));
Token t = new Token();
while(st.next(t) != null) {
String token = new String(t.termBuffer(), 0, t.termLength());
words.addLast(token);
words.removeFirst();
for(int len=params.minPhrase; len<params.maxPhrase; len++) {
String term = Utils.join(new ArrayList<String>(words.subList(params.maxPhrase-len,params.maxPhrase)), " ");
long hash = Utils.longHash(term);
if(params.lexicon.isTermHash(hash)) {
foundHashes.add(hash);
}
}
}
for(long hash : foundHashes) {
if(count.containsKey(hash)) {
count.put(hash, count.get(hash) + 1);
} else {
count.put(hash, 1);
}
}
Would it be too slow to turn each phrase into a regex and run each one on the document, counting the number of occurrences?
If that doesn't work, maybe you can combine all the phrases into one huge regex (using |), and compile it. Then, run that huge regex starting from every character in the document. Count the number of matches as you go through the characters.
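A hedged sketch of that "one big alternation" idea in Java (the phrases and the document here are made up); for a very large phrase database the compiled pattern may of course become impractically big:

import java.util.*;
import java.util.regex.*;
import java.util.stream.Collectors;

public class PhraseCounter {
    public static void main(String[] args) {
        List<String> phrases = List.of("customer lifetime value", "churn rate");
        String doc = "Churn rate drives customer lifetime value; churn rate matters.";

        // Quote each phrase so regex metacharacters are treated literally, then join with '|'.
        String alternation = phrases.stream().map(Pattern::quote)
                                     .collect(Collectors.joining("|"));
        Pattern p = Pattern.compile(alternation, Pattern.CASE_INSENSITIVE);

        Map<String, Integer> counts = new HashMap<>();
        Matcher m = p.matcher(doc);
        while (m.find()) {
            counts.merge(m.group().toLowerCase(), 1, Integer::sum);
        }
        System.out.println(counts); // counts: churn rate=2, customer lifetime value=1
    }
}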
How large is the database of phrases? I am assuming that it is very large.
I would do the following:
Index the phrases by one of the words in them; you might choose the least common word in each phrase. You might make the search better by assuming that the word is at least, say, 5 characters long, and padding the word to 5 chars if it is shorter. The padding can be the space after the word, followed by the subsequent word, to reduce matches, or some default character (e.g. "XX") if the word occurs at the end of the phrase.
Go through your document, converting each word (common ones can be discarded) to a key by padding if necessary, retrieving phrases.
Retrieve the relevant phrases by these keywords.
Use an in-memory text search to find the number of occurrences of each of the retrieved phrases.
I am assuming that phrases cannot cross a sentence boundary. In this case, you can read each sentence of the document into a substring in an array, and use the substring function to search through each sentence for each of the phrases and count occurrences, keeping a running sum for each phrase.
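A rough Java sketch of the keyword-index idea; to keep it short it keys each phrase on its longest word as a stand-in for "least common word", and it skips the padding trick, so treat it as an illustration of the shape of the solution rather than the answer's exact scheme:

import java.util.*;

public class PhraseIndex {
    private final Map<String, List<String>> byKeyword = new HashMap<>();

    // Key each phrase on its longest word (a cheap proxy for "least common word").
    void add(String phrase) {
        String key = Arrays.stream(phrase.toLowerCase().split("\\W+"))
                           .max(Comparator.comparingInt(String::length)).orElse("");
        byKeyword.computeIfAbsent(key, k -> new ArrayList<>()).add(phrase.toLowerCase());
    }

    // One pass over the document's words; only phrases whose key word occurs are verified.
    Map<String, Integer> countOccurrences(String document) {
        String doc = document.toLowerCase();
        Map<String, Integer> counts = new HashMap<>();
        for (String word : doc.split("\\W+")) {
            for (String phrase : byKeyword.getOrDefault(word, List.of())) {
                if (!counts.containsKey(phrase)) {
                    int n = 0;
                    for (int i = doc.indexOf(phrase); i >= 0; i = doc.indexOf(phrase, i + 1)) n++;
                    if (n > 0) counts.put(phrase, n);
                }
            }
        }
        return counts;
    }
}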
Maybe reading Peter Turney on keyphrase extraction will give you some ideas. Overall, his approach has some similarity to what itsadok has suggested.

A StringToken Parser which gives Google Search style "Did you mean:" Suggestions

Seeking a method to:
Take whitespace separated tokens in a String; return a suggested Word
ie:
Google Search can take "fonetic wrd nterpreterr",
and at the top of the results page it shows "Did you mean: phonetic word interpreter"
A solution in any of the C* languages or Java would be preferred.
Are there any existing Open Libraries which perform such functionality?
Or is there a way to Utilise a Google API to request a suggested word?
In his article How to Write a Spelling Corrector, Peter Norvig discusses how a Google-like spellchecker could be implemented. The article contains a 20-line implementation in Python, as well as links to several reimplementations in C, C++, C# and Java. Here is an excerpt:
The full details of an industrial-strength spell corrector like Google's would be more confusing than enlightening, but I figured that on the plane flight home, in less than a page of code, I could write a toy spelling corrector that achieves 80 or 90% accuracy at a processing speed of at least 10 words per second.
Using Norvig's code and this text as a training set, I get the following results:
>>> import spellch
>>> [spellch.correct(w) for w in 'fonetic wrd nterpreterr'.split()]
['phonetic', 'word', 'interpreters']
You can use the yahoo web service here:
http://developer.yahoo.com/search/web/V1/spellingSuggestion.html
However, it's only a web service (i.e. there are no client libraries for other languages), but it outputs JSON or XML, so it's pretty easy to adapt to any language.
You can also use the Google APIs to spell check. There is an ASP implementation here (I can't take credit for it, though).
First off:
Java
C++
C#
Use the one of your choice. I suspect it runs the query against a spell-checking engine with a word-suggestion limit of exactly one; it then does nothing if the entire query is valid, and otherwise replaces each word with that word's best match. In other words, the following algorithm (an empty return string means that the query had no problems):
startup()
{
    set the spelling engine's word suggestion limit to 1
}

option 1()
{
    int currentPosition = engine.NextWord(start the search at word 0, querystring);
    if (currentPosition == -1)
        return empty string; // Query is a-ok.
    while (currentPosition != -1)
    {
        queryString = engine.ReplaceWord(engine.CurrentWord, queryString, the suggestion with index 0);
        currentPosition = engine.NextWord(currentPosition, querystring);
    }
    return queryString;
}
Since no one has yet mentioned it, I'll give one more phrase to search for: "edit distance".
That can be used to find the closest matches, assuming the errors are typos where letters are transposed, missing or added.
But usually this is also coupled with some sort of relevancy information; either by simple popularity (to assume the most commonly used close-enough match is the most likely correct word), or by contextual likelihood (words that follow the preceding correct word, or come before one). This gets into information retrieval; one way to start is to look at bigrams and trigrams (sequences of words seen together). Google has very extensive freely available data sets for these.
For a simple initial solution, though, a dictionary coupled with Levenshtein-based matchers works surprisingly well.
You could plug in Lucene, which has a dictionary facility implementing the Levenshtein distance method.
Here's an example from the wiki, where 2 is the number of suggestions to return.
String[] l=spellChecker.suggestSimilar("sevanty", 2);
//l[0] = "seventy"
http://wiki.apache.org/lucene-java/SpellChecker
An older link http://today.java.net/pub/a/today/2005/08/09/didyoumean.html
The Google SOAP Search APIs do that.
If you have a dictionary stored as a trie, there is a fairly straightforward way to find best-matching entries, where characters can be inserted, deleted, or replaced.
void match(trie t, char* w, string s, int budget) {
    if (budget < 0) return;
    if (*w == '\0') print s;
    foreach (char c, subtrie t1 in t) {
        /* try matching or replacing c */
        match(t1, w+1, s+c, (*w == c ? budget : budget-1));
        /* try deleting c */
        match(t1, w, s, budget-1);
    }
    /* try inserting *w */
    match(t, w+1, s + *w, budget-1);
}
The idea is that first you call it with a budget of zero, and see if it prints anything out. Then try a budget of 1, and so on, until it prints out some matches. The bigger the budget the longer it takes. You might want to only go up to a budget of 2.
Added: It's not too hard to extend this to handle common prefixes and suffixes. For example, English prefixes like "un", "anti" and "dis" can be in the dictionary, and can then link back to the top of the dictionary. For suffixes like "ism", "'s", and "ed" there can be a separate trie containing just the suffixes, and most words can link to that suffix trie. Then it can handle strange words like "antinationalizationalization".
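For concreteness, here is one way the budgeted search above could look in Java, using a toy trie (a children map plus an end-of-word flag); it's a sketch of the same idea, not a drop-in library:

import java.util.*;

class FuzzyTrie {
    private final Map<Character, FuzzyTrie> children = new HashMap<>();
    private boolean isWord;

    void insert(String word) {
        FuzzyTrie node = this;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new FuzzyTrie());
        }
        node.isWord = true;
    }

    // Collect dictionary entries reachable from 'word' with at most 'budget' edits.
    void match(String word, int pos, String built, int budget, Set<String> out) {
        if (budget < 0) return;
        if (pos == word.length() && isWord) out.add(built);
        for (Map.Entry<Character, FuzzyTrie> e : children.entrySet()) {
            char c = e.getKey();
            FuzzyTrie child = e.getValue();
            if (pos < word.length()) {
                // Match or replace the next character of the query.
                child.match(word, pos + 1, built + c,
                            word.charAt(pos) == c ? budget : budget - 1, out);
            }
            // Delete: the query is missing this dictionary character.
            child.match(word, pos, built + c, budget - 1, out);
        }
        if (pos < word.length()) {
            // Insert: the dictionary word lacks the query's next character.
            match(word, pos + 1, built, budget - 1, out);
        }
    }

    public static void main(String[] args) {
        FuzzyTrie t = new FuzzyTrie();
        for (String w : List.of("sample", "simple", "maple")) t.insert(w);
        Set<String> out = new TreeSet<>();
        t.match("sampel", 0, "", 2, out);
        System.out.println(out);
    }
}

As described above, you would call it with a budget of 0 first and only increase the budget if nothing is found.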