Given a word jumble (e.g. ofbaor), what would be an approach to unscramble the letters to create a real word (e.g. foobar)? I could see this having a couple of approaches, and I think I know how I'd do it in .NET, but I'm curious to see what some other solutions look like (always happy to see whether my solution is optimal or not).
This isn't homework or anything like that, I just saw a word jumble in the local comics section of the paper (yes, good ol' fashioned newsprint), and the engineer in me started thinking.
edit:
please post some pseudo code or real code if you can; it's always nice to try and expand language knowledge by seeing examples like this.
Have a dictionary that's keyed by the letters of each word in sorted order. Then take your jumble and sort its letters, and look up all the words in the dictionary by that sorted-letter string.
So, as an example, the words 'bear' and 'bare' would be in the dictionary as follows:
key word
----- ------
aber bear
aber bare
And if you're given the jumble, 'earb', you'd sort the letters to 'aber' and be able to look up both possible words in the dictionary.
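A minimal sketch of that lookup in Python (the tiny inline word list is just a stand-in for a real dictionary file):

from collections import defaultdict

# Build the index: sorted letters -> every real word with exactly those letters.
anagram_index = defaultdict(list)
for word in ["bear", "bare", "foobar"]:      # in practice, load a real word list here
    anagram_index[''.join(sorted(word))].append(word)

def unscramble(jumble):
    # One sort plus one hash lookup; no permutations needed.
    return anagram_index.get(''.join(sorted(jumble)), [])

print(unscramble("earb"))    # ['bear', 'bare']
print(unscramble("ofbaor"))  # ['foobar']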
Create all the permutations of the string, and look them up in a dictionary.
You can optimize by looking up shorter strings that begin words, and if there are no words of suitable length in the dictionary that start with those strings, eliminating permutations starting with those letters from further consideration.
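A rough sketch of that pruning in Python, assuming the dictionary is loaded into a set along with a set of all its prefixes (the three-word dictionary is only illustrative):

words = {"bear", "bare", "bean"}    # stand-in for a real word list
prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}

def solve(letters, prefix=""):
    found = set()
    if prefix in words:
        found.add(prefix)
    for i, ch in enumerate(letters):
        candidate = prefix + ch
        if candidate not in prefixes:
            continue    # no dictionary word starts with this, so drop the whole branch
        found |= solve(letters[:i] + letters[i + 1:], candidate)
    return found

print(solve("earb"))    # {'bear', 'bare'}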
CodeProject has a couple of articles here and here. The second uses recursion. Wikipedia also outlines a couple of algorithms here. The Wikipedia article also mentions a program called Jumbo that uses a more heuristic approach that solves the problem like a human would. There seem to be a few approaches to the problem.
Depending on the length of the string, WhirlWind's approach could be faster, but an alternative with roughly O(n) complexity in the number of dictionary words is, instead of creating all the permutations of the string and looking them up, to go through all the words in the dictionary and see whether each can be built from the input string.
A really smart algorithm that knows the number of words in the dictionary in advance could do something like this:
from math import factorial
from itertools import permutations

def solve_jumble(jumble, dictionary):
    acceptable = []
    sorted_jumble = sorted(jumble)
    if factorial(len(jumble)) > len(dictionary):
        # Fewer dictionary words than permutations: scan the dictionary.
        for word in dictionary:
            if sorted(word) == sorted_jumble:
                acceptable.append(word)
    else:
        # Fewer permutations than dictionary words: generate and look them up.
        word_set = set(dictionary)
        for perm in permutations(jumble):
            candidate = ''.join(perm)
            if candidate in word_set:
                acceptable.append(candidate)
    return acceptable
http://www.codeproject.com/KB/game/Anagrams2.aspx
One approach is to split your dictionary into sorted sub-dictionaries with specific lengths, like 1-letter words, 2-letter words,...
When you search for words of a certain jumble, compare the number of possible permutations with the number of words in the corresponding dictionary. If the former is larger, compare the words in the dictionary to the jumble; if the latter is larger, create the permutations and then search for those in your dictionary.
You can also optimize it further by dividing the dictionaries into smaller subsets based on their first letters, and how frequently they appear, and then divide further based on the second letter. More division might significantly complicate the database and slow down searching, however.
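A hedged sketch of that decision in Python (the word list is a placeholder, and the jumble is only matched against words of its own length, which may or may not be what your puzzle needs):

from collections import defaultdict
from itertools import permutations
from math import factorial

by_length = defaultdict(set)
for word in ["bear", "bare", "pole", "pale"]:    # in practice, a real word list
    by_length[len(word)].add(word)

def candidates(jumble):
    bucket = by_length[len(jumble)]
    if factorial(len(jumble)) <= len(bucket):
        # Fewer permutations than words of this length: generate and probe.
        return {''.join(p) for p in permutations(jumble)} & bucket
    # Fewer words than permutations: scan the bucket instead.
    key = sorted(jumble)
    return {w for w in bucket if sorted(w) == key}

print(candidates("earb"))    # {'bear', 'bare'}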
I need a function that compares two strings and outputs an edit distance like Levenshtein, but only if the differing characters are homoglyphs in cursive. I have a list of those homoglyphs, so I could feed a custom list to this function.
Example
homoglyphs = [["o","a"],["rn","m","nn"],...] // In cursive they look alike
compare("Mory", "Mary", homoglyphs) // Levenshtein gives 1
compare("Mory", "Tory", homoglyphs) // Levenshtein gives 1, but I want false, 99 or -1
compare("Morio", "Mario", homoglyphs) // I expect a distance of 1
compare("Morio", "Maria", homoglyphs) // I expect a distance of 2
Tory should give a false result, since there's no way someone would misread an M as a T. An A could be misread as an O, so it can count as 1.
The scoring could be different, I just need to know that Mory is probably Mary not Tory and Morio is a little more likely to be Mario than Maria.
Does something like this exist?
The key to your problem can be thought of like an IQ word association question.
Sound Glyph
--------- = ----------
Homophone Homoglyphs
Now, if you know that there is a way to find similar-sounding words (homophones), then the same idea can be applied, but with glyphs (homoglyphs) in place of sounds.
The way to find similar sounding words is via Soundex (Sound Index).
So just do what Soundex does but instead of having a mapping from similar homophones use similar homoglyphs.
Once you convert each word (glyphs) input into a Glyphdex (Glyph Index) then you can compute the Levenshtein distance for the two Glyphdex.
Make sense?
If you are into cellular biology then codon translation into amino acids (ref) might make more sense. Many amino acids are coded by more than one 3 letter codon.
Note: Since the word glyphdex has been used prior to my writing this, I cannot say I coined that word; however, the usages I currently find via Google (search) for the word are not in the same context as described here. So in the context of converting a sequence of glyphs into an index of a similar sequence of glyphs, I will take credit.
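For what it's worth, a rough Python sketch of the idea; the groups come from the question, while the canonicalization rule (replace every member of a group with its shortest member, longest variants first) is my own assumption rather than an established Soundex-style standard:

homoglyphs = [["o", "a"], ["rn", "m", "nn"]]

def glyphdex(word):
    out = word.lower()
    for group in homoglyphs:
        canonical = min(group, key=len)    # use a shortest member as the canonical glyph
        for variant in sorted(group, key=len, reverse=True):
            if variant != canonical:
                out = out.replace(variant, canonical)
    return out

def levenshtein(a, b):
    # Plain dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def compare(a, b):
    return levenshtein(glyphdex(a), glyphdex(b))

print(compare("Mory", "Mary"))    # 0: the o/a difference collapses away
print(compare("Mory", "Tory"))    # 1: the M/T difference is not explained by any group

Any residual distance above zero is an edit not explained by a look-alike group, which you could map to the "false / 99 / -1" rejection the question asks for.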
Given the sentence "The animal didn't cross the street because it was too tired", how is self-attention able to assign a higher score to the word animal instead of the word street?
I'm wondering if that might be a consequence of the word embedding vectors fed into the network, which somehow already encapsulate some degree of distance among the words.
Word embeddings are first added to positional encodings, which add information about each word's position in the sequence. Then, through each encoder layer in the stack (6 of them, to be precise), the embeddings undergo multiple transformations and are refined into better representations before they are passed on to the decoder.
The modification to the embeddings as they pass through the encoder stack is learned. Sometimes attention heads near the top of the stack appear to be doing something that looks like coreference resolution, which is what you pointed out in your example. Attending more to the word "animal" simply results in a better representation than attending to "street".
How do we know which representations are better? The one that minimizes the loss or produces a better output of course!
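For reference, here is a minimal NumPy sketch of scaled dot-product attention on random toy matrices; the point is that the scores come from learned query/key projections, not from any precomputed distance between the raw word embeddings:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # learned similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over each row
    return weights @ V, weights

rng = np.random.default_rng(0)     # toy stand-ins for projected token embeddings
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights)    # each row sums to 1: how much each token attends to every token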
I have a set of regular expressions stored in a MySQL table that I match text against.
Is there a way, using MySQL or any other language (preferably Perl), to take this list of expressions and determine which of them MAY overlap? This should be independent of whatever text may be supplied to the expressions.
All of the expressions have anchors.
Here is an example of what I am trying to get:
Expressions:
^a$
^b$
^ab
^b.*c
^batch
^catch
Result:
'^b.*c' and '^batch' MAY overlap
Thoughts?
Thanks,
Scott
Further explanation:
I have a list of user-created regexes and an imported list of strings that are to be matched against the regexes. In this case the strings are "clean" data (i.e. they are not user-created but imported from another source; they must not change).
When a user adds to the list of regexes, I do not want any collisions on either the existing list of strings or any future strings (which cannot be guessed ahead of time; the only constraints are that they are ASCII printable characters no longer than 255 characters).
A brute-force method would be to create a "rainbow" table of all of the permutations of strings and each time a regex is added run all of the regexes against the rainbow table. However I'd like to avoid this (I'm not even sure of the cost) and so was wondering aloud as to the possibility of an algorithm that would AT LEAST show which regexes in a list MAY collide.
I will punt on full REs. Even limiting to BREs and/or MySQL-pre-8.0 will be challenging. Here are some thoughts.
If a pattern is end-anchored and has no + or *, then calculate its length. That fixed length can be used as a discriminator. It could also tone back the "brute force" by perhaps an order of magnitude.
Anything followed by + or * gets turned into .* for simplicity. (Re the "may collide" rule.)
Any RE with explicit characters (including those followed by +) becomes a discriminator in some situations. Eg, ^a.*b$ vs ^a.*c$.
For those anchored at the end, reverse the pattern and test it that way. (I don't know how difficult reversing is.)
If you can say that a particular character must appear at a given position, then use it as a discriminator: ^a.b.*c$ -- a in pos 1; b in pos 3; c at end. Perhaps this can be extended to character classes: ^\w may match, but ^\d and ^a.*\d$ can't.
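Building on the "rainbow table" idea from the question, a brute-force Python sketch; the sample alphabet and maximum length are assumptions, so a miss here does not prove that two patterns cannot collide:

import re
from itertools import product

patterns = [r'^a$', r'^b$', r'^ab', r'^b.*c', r'^batch', r'^catch']
compiled = [(p, re.compile(p)) for p in patterns]

# Sample alphabet: the literal characters in the patterns plus one stand-in
# for "any other character".
alphabet = sorted(set(''.join(patterns)) - set('^$.*')) + ['z']

def may_overlap(max_len=5):
    overlaps = set()
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            s = ''.join(chars)
            hits = [p for p, rx in compiled if rx.search(s)]
            for i in range(len(hits)):
                for j in range(i + 1, len(hits)):
                    overlaps.add((hits[i], hits[j]))
    return overlaps

print(may_overlap())    # {('^b.*c', '^batch')}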
I'm trying to create a "scrabble-solver" to run stress-tests on a scrabble-like game I'm developing. I have a database containing ~200,000 words, and I'm now looking for a way to match the given scrabble tiles against the words in the database.
Example:
Given tiles: A, P, E, F, O, L, M
Result: APE, POLE, PALE, MOLE, PAL...
Is this possible by using a simple SELECT-statement with REGEXP? If possible I would also like to add letters on specific positions and be able to determine max/min length.
I hope this question made sense :)
I've been googling my eyes out but I can't seem to find what I'm looking for. Anyone got an idea?
Thanks! :)
It doesn't sound like a regex problem. I think you'll be better off simply creating all possible permutations of the letters from the existing tiles and then running your SELECT statement with the IN clause. For example, with tiles:
A, P, E
your SELECT clause will be
SELECT word FROM words WHERE word IN ('APE', 'AEP', 'PAE' ,'PEA', 'EPA', 'EAP');
You'll get the list of valid words from your table.
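A small Python sketch of building that statement (table and column names as above; the tile set and the DB-API placeholder style are assumptions):

from itertools import permutations

tiles = "APE"
candidates = sorted({''.join(p) for p in permutations(tiles)})
placeholders = ', '.join(['%s'] * len(candidates))
sql = "SELECT word FROM words WHERE word IN ({})".format(placeholders)
print(candidates)    # ['AEP', 'APE', 'EAP', 'EPA', 'PAE', 'PEA']
# with a MySQL DB-API cursor, something like: cursor.execute(sql, candidates)

To honour a minimum/maximum length or a bigger rack, you would generate permutations of every allowed length (permutations(tiles, r) for each r) before building the IN list.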
A regex would not help you much in this case. You need to construct the possible words by yourself.
The problem is that you have a limited number of each possible letter and a regex cannot encode that information. If you had an infinite supply of each letter, then you could use a regex like [APEFOLM]*.
You will have to enumerate all the possible words yourself. The implementation would depend on the language you're using, but your best bet might be a next_permutation function or, better, a function that enumerates all permutations. A simple (and slightly inefficient) implementation (in Python) would be:
from itertools import permutations

words = set()
for permutation in permutations(letters):       # enumerate all orderings of the tiles
    for i in range(1, len(permutation) + 1):    # enumerate all word lengths
        words.add(''.join(permutation[:i]))     # collect each prefix as a candidate
At that point words will contain all the candidate words you would then use in a SELECT ... IN statement.
That isn't the most efficient approach, but should be practical enough to get you started.
Suppose I want to match address records (or person names or whatever) against each other to merge records that are most likely referring to the same address. Basically, I guess I would like to calculate some kind of correlation between the text values and merge the records if this value is over a certain threshold.
Example:
"West Lawnmower Drive 54 A" is probably the same as "W. Lawn Mower Dr. 54A" but different from "East Lawnmower Drive 54 A".
How would you approach this problem? Would it be necessary to have some kind of context-based dictionary that knows, in the address case, that "W", "W." and "West" are the same? What about misspellings ("mover" instead of "mower" etc)?
I think this is a tricky one - perhaps there are some well-known algorithms out there?
A good baseline, probably an impractical one in terms of its relatively high computational cost and, more importantly, its production of many false positives, would be generic string distance algorithms such as:
Edit distance (aka Levenshtein distance)
Ratcliff/Obershelp
Depending on the level of accuracy required (which, BTW, should be specified both in terms of its recall and precision, i.e. generally expressing whether it is more important to miss a correlation than to falsely identify one), a home-grown process based on [some of] the following heuristics and ideas could do the trick:
tokenize the input, i.e. see the input as an array of words rather than a string
tokenization should also keep the line number info
normalize the input with the use of a short dictionary of common substitutions (such as "dr" at the end of a line = "drive", "Jack" = "John", "Bill" = "William"..., "W." at the beginning of a line = "West", etc.)
Identify (a bit like tagging, as in POS tagging) the nature of some entities (for example ZIP Code, Extended ZIP Code, and also city)
Identify (lookup) some of these entities (for example, a relatively short database table can include all the cities/towns in the targeted area)
Identify (lookup) some domain-related entities (if all/many of the addresses deal with, say, folks in the legal profession, a lookup of law firm names or of federal buildings may be of help)
Generally, put more weight on tokens that come from the last line of the address
Put more (or less) weight on tokens with a particular entity type (e.g. "Drive", "Street", "Court" should weigh much less than the tokens which precede them)
Consider a modified SOUNDEX algorithm to help with normalization
With the above in mind, implement a rule-based evaluator. Tentatively, the rules could be implemented as visitors to a tree/array-like structure where the input is parsed initially (Visitor design pattern).
The advantage of the rule-based framework, is that each heuristic is in its own function and rules can be prioritized, i.e. placing some rules early in the chain, allowing to abort the evaluation early, with some strong heuristics (eg: different City => Correlation = 0, level of confidence = 95% etc...).
An important consideration when searching for correlations is the need, a priori, to compare every single item (here, an address) with every other item, hence requiring as many as n^2/2 item-level comparisons. Because of this, it may be useful to store the reference items in a way where they are pre-processed (parsed, normalized...) and also to maybe have a digest/key of sorts that can be used as a [very rough] indicator of a possible correlation (for example, a key made of the 5-digit ZIP Code followed by the SOUNDEX value of the "primary" name).
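A small Python sketch of just the tokenize/normalize steps from the list above (the substitution table and the token regex are illustrative; a real table would be far larger):

import re

SUBSTITUTIONS = {"w": "west", "w.": "west", "dr": "drive", "dr.": "drive",
                 "st": "street", "st.": "street"}

def normalize(address):
    tokens = re.findall(r"[a-z0-9]+\.?", address.lower())
    return [SUBSTITUTIONS.get(t, t.rstrip('.')) for t in tokens]

print(normalize("W. Lawn Mower Dr. 54A"))
# ['west', 'lawn', 'mower', 'drive', '54a'] -- compound words and house numbers
# still need the entity tagging and weighting rules described above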
I would look at producing a similarity comparison metric that, given two objects (strings perhaps), returns "distance" between them.
It helps if your metric fulfils the following criteria:
the distance between an object and itself is zero (identity)
the distance from a to b is the same in both directions (symmetry)
the distance from a to c is not more than the distance from a to b plus the distance from b to c (triangle inequality)
If your metric obeys these, then you can arrange your objects in a metric space, which means you can run queries like:
Which other object is most like this one?
Give me the 5 objects most like this one.
There's a good book about it here. Once you've set up the infrastructure for hosting objects and running the queries you can simply plug in different comparison algorithms, compare their performance and then tune them.
I did this for geographic data at university and it was quite fun trying to tune the comparison algorithms.
I'm sure you could come up with something more advanced but you could start with something simple like reducing the address line to the digits and the first letter of each word and then compare the result of that using a longest common subsequence algorithm.
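For instance, a hedged sketch of that simple starting point (the signature rule and the scoring are my own reading of it):

def signature(address):
    # Keep the digits of each word, or its first letter if it has no digits.
    parts = []
    for word in address.lower().split():
        digits = ''.join(c for c in word if c.isdigit())
        parts.append(digits if digits else word[0])
    return ''.join(parts)

def lcs_length(a, b):
    # Classic dynamic-programming longest common subsequence.
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def similarity(a, b):
    sa, sb = signature(a), signature(b)
    return lcs_length(sa, sb) / max(len(sa), len(sb), 1)

print(signature("West Lawnmower Drive 54 A"))                               # 'wld54a'
print(similarity("West Lawnmower Drive 54 A", "W. Lawn Mower Dr. 54A"))     # ~0.83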
Hope that helps in some way.
You can use Levenshtein edit distance to find strings that differ by only a few characters. BK Trees can help speed up the matching process.
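A minimal sketch of that combination in Python (the word list and tolerance are placeholders, and the Levenshtein routine is the plain dynamic-programming version):

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

class BKTree:
    def __init__(self, words):
        it = iter(words)
        self.root = [next(it), {}]    # each node is [word, {edge distance: child}]
        for w in it:
            self.add(w)

    def add(self, word):
        node = self.root
        while True:
            d = levenshtein(word, node[0])
            if d == 0:
                return                # word already present
            if d in node[1]:
                node = node[1][d]
            else:
                node[1][d] = [word, {}]
                return

    def query(self, word, tol):
        found, stack = [], [self.root]
        while stack:
            node = stack.pop()
            d = levenshtein(word, node[0])
            if d <= tol:
                found.append(node[0])
            # Triangle inequality: only children on edges within tol of d can match.
            stack.extend(child for dist, child in node[1].items()
                         if d - tol <= dist <= d + tol)
        return found

tree = BKTree(["lawnmower", "lawn mower", "lawnmover", "street", "drive"])
print(tree.query("lawnmover", 2))    # ['lawnmower', 'lawn mower', 'lawnmover'], in some order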
Disclaimer: I don't know of any algorithm that does that, but I would really be interested in knowing one if it exists. This answer is a naive attempt at solving the problem, with no previous knowledge whatsoever. Comments welcome, please don't laugh too loud.
If you try doing it by hand, I would suggest applying some kind of "normalization" to your strings: lowercase them, remove punctuation, and maybe replace common abbreviations with the full words (Dr. => drive, St => street, etc.).
Then, you can try different alignments between the two strings you compare, and compute the correlation by averaging the absolute differences between corresponding letters (e.g. a = 1, b = 2, etc., so corr(a, b) = |1 - 2| = 1):
west lawnmover drive
w lawnmower street
Thus, even if some letters are different, the correlation would be high. Then, simply keep the maximal correlation you found, and decide that the strings are the same if that correlation is above a given threshold.
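A rough Python sketch of that procedure; note that it scores alignments as a distance (lower means more alike), which inverts the "correlation" wording above, and that it drops spaces during normalization:

def normalize(s):
    return ''.join(c for c in s.lower() if c.isalpha())

def best_alignment_score(a, b):
    # Slide the shorter normalized string along the longer one and keep the
    # lowest mean absolute difference between aligned letter codes.
    a, b = normalize(a), normalize(b)
    if len(a) > len(b):
        a, b = b, a
    if not a:
        return float('inf')
    best = float('inf')
    for offset in range(len(b) - len(a) + 1):
        diffs = [abs(ord(x) - ord(y)) for x, y in zip(a, b[offset:])]
        best = min(best, sum(diffs) / len(diffs))
    return best

print(best_alignment_score("west lawnmover drive", "w lawnmower street"))
# lower is better; compare the value against a threshold of your choosing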
When I had to modify a proprietary program doing this, back in the early 90s, it took many thousands of lines of code in multiple modules, built up over years of experience. Modern machine-learning techniques ought to make it easier, and perhaps you don't need to perform as well (it was my employer's bread and butter).
So if you're talking about merging lists of actual mailing addresses, I'd do it by outsourcing if I can.
The USPS had some tests to measure quality of address standardization programs. I don't remember anything about how that worked, but you might check if they still do it -- maybe you can get some good training data.