MySQL: what would be the best approach to ranking from highest to lowest possible match?

I have a MySQL database I'm searching through. Let's say this is a database of people. When querying for a specific record, it is possible to find a 100% match on each attribute. But the real strategy is querying the database to find the closest probable match (the closest matches on the table attributes).
In this scenario, does it make sense to create a temporary table (much like a tally sheet) to indicate which attributes match and which attributes are present? What is the typical approach to doing advanced searches on a database like this?
Example (below) of a hypothetical stored procedure.
*The parameters are just to exemplify how I would search. I'm not concerned with how to perform my selects; the question is about approach, strategy, and technique.*
call FindPerson ("Brown Eyes", "Brown hair", "Height:6'1", "white", "Name:Joe", "weight180", "Age 34", "sex m");
RESULT TABLE
NAME    AGE   HEIGHT   WEIGHT   HAIR    SKIN    SEX   RANK_MATCH
Joe     32    6'1      180      Brown   white   m     1
Mike    33    6'1      179      Brown   white   m     2
James   31    6'0      179      Brown   black   m     3

Just off the top of my head: you can create your own score and sort by it. Something like
SELECT `id`,
(IF(`age`=32,1,0)+IF(`height`="6'1",1,0)+...) as `score`
FROM `people`
HAVING `score` > 0
ORDER BY `score` DESC
LIMIT 10;
With this, you can handle every field with its own comparison, and also weight the individual attributes by adding not just 1 but 2 or more.
But I'm not quite sure how performant this is.
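If it helps to see the weighting idea outside SQL, here is a minimal Python sketch of the same per-attribute scoring; the field names, weights, and sample data are made up purely for illustration:

# Hypothetical sketch of weighted per-attribute match scoring.
def score(person, criteria, weights):
    """Add the weight of every attribute that matches the search criteria."""
    return sum(weights.get(attr, 1)
               for attr, wanted in criteria.items()
               if person.get(attr) == wanted)

people = [
    {"name": "Joe",  "age": 32, "height": "6'1", "hair": "Brown"},
    {"name": "Mike", "age": 33, "height": "6'1", "hair": "Brown"},
]
criteria = {"age": 32, "height": "6'1", "hair": "Brown"}
weights = {"age": 2}    # age counts double, everything else counts 1

ranked = sorted(people, key=lambda p: score(p, criteria, weights), reverse=True)
for p in ranked:
    print(p["name"], score(p, criteria, weights))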

The approach I would use would be to create a scoring function (your stored proc) that would evaluate the given input's standard distance from the mean.
In the proc, you would judge each criterion in a fashion similar to:
INPUT AGE: 32
calculate MEAN of AGE WHERE (sex = m): 34.5
calculate STANDARD DEVIATION of AGE WHERE (sex = m): 2.5
calculate how many STDEVs 32 is from the 34.5 (also known as z-score): 1
Repeat this process for all numeric data types, sum the z-scores, and ORDER BY the sum.
In doing so, the following schema change would be required: height changed from foot/inch form to strictly inches.
Depending on your needs, you may also consider coming up with an arbitrary scale for sex and skin color/hair color. Of course, you may think that measures like these should NOT be factored in because of how drastically they would change the scoring function. If you chose to include them, you'd have to find some number to add to the above SUM, but that's hard because nominal variables don't translate easily into these kinds of measures.
If you find that hair color/skin color can usefully be mapped onto, say, a continuous color spectrum, your scoring tidbit would be the same: the color value of the input vs. the color values' means and standard deviations.
The query that would find your matches would be something to the effect of:
SELECT z.*,
       (z.age_z + z.wt_z + ...) AS score
FROM (
    SELECT
        ABS(INPUT_AGE - AVG(AGE)) / STD(AGE) AS age_z,
        ABS(INPUT_WT - AVG(WT)) / STD(WT) AS wt_z,
        ...
    FROM `table`
) AS z
ORDER BY score ASC
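For reference, here is a rough Python sketch of the same z-score idea used to rank candidate rows by how many standard deviations each attribute sits from the searched value; the column names and sample data are hypothetical, and in practice this logic would live in the stored procedure:

# Hypothetical sketch: rank rows by z-score-normalized distance from the input.
from statistics import stdev

rows = [
    {"name": "Joe",   "age": 32, "weight": 180},
    {"name": "Mike",  "age": 33, "weight": 179},
    {"name": "James", "age": 31, "weight": 179},
]
query = {"age": 34, "weight": 180}

# Standard deviation per numeric column, used to normalize the differences.
stats = {col: stdev(r[col] for r in rows) for col in query}

def z_distance(row):
    """Sum of |searched value - row value| / stdev over the numeric attributes."""
    total = 0.0
    for col, wanted in query.items():
        sd = stats[col]
        diff = abs(wanted - row[col])
        total += diff / sd if sd else diff
    return total

for row in sorted(rows, key=z_distance):    # smallest distance = best match
    print(row["name"], round(z_distance(row), 2))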


Find an item with multiple conditions in a table list in Google Sheets

I have a list with agents' results plus rules of priority, which look like this:
What should happen:
In the Agents Results list, the formula should look for the line where all the rules are met.
And then return the agent's name.
Like in the first rule (the Hero's Name is hardcoded): Agent 1 has a Quality Result of more than 99% and a Productivity Result of more than 115%, so he is in the Hero's Name column.
My attempts to solve the task:
To find the agent's name, use the INDEX function.
To find the row argument for INDEX, use the MATCH function.
The obstacles I've met:
The INDEX function returns only one result. What should I do if several rows meet the rule of priority?
I can't use a numeric from/to range in the MATCH function to find the row, because the rules of priority include ranges (e.g. Quality 99+ means all values from 99.01% to 100%).
I can't use multiple conditions in MATCH to find the row that meets both the Quality and Productivity conditions from the rules of priority.
Please, help. I would be grateful for ideas on how this can be solved.
try:
=IFERROR(JOIN(", "; FILTER(C5:C; A5:A >= 98%;
                           A5:A < 99%;
                           B5:B >= 110%;
                           B5:B < 115%)); "no one")
You should use the FILTER function.
To get all agents that meet the conditions quality > 99% and productivity > 115%, you can write:
c - agents
b - productivity
a - quality
=filter(c5:c,a5:a>0.99,b5:b>1.15)
This will produce a column of agents.
For more conditions (ranges of productivity and quality), you just add more conditions to the FILTER function. For productivity between 110% and 115% and quality between 98% and 99% you write:
=filter(c5:c,a5:a>0.98,a5:a<0.99,b5:b<1.15,b5:b>1.1)

Warning appears in mixed effects model using SPSS

I am using SPSS to conduct a mixed effects model for the following project:
The participant is being asked some open ended questions and their answers are recorded.
For example, if the participant's answer is related to equality, the variable "equality" is coded as "1". Otherwise, it is coded as "0". Therefore, dependent variable is the variable "equality".
Fixed effects: 
- participant's country (Asians vs. Westerners)
- gender (Male vs Female)
- age group (younger age group vs. older age group)
- condition (control group vs. intervention group)
Random effect: Subject ID (participants)
Sample size: over 600 participants
My syntax in SPSS:
MIXED  Equality BY Country Gender AgeGroup Condition
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED= Country Gender AgeGroup Condition  | SSTYPE(3)
/METHOD=ML
/PRINT=SOLUTION TESTCOV
/RANDOM=INTERCEPT | SUBJECT(SubID_R) COVTYPE(VC).
When running this analysis in SPSS, the following warning appears:
Iteration was terminated but convergence has not been achieved.
The MIXED procedure continues despite this warning. Subsequent results produced are based on the last iteration. Validity of the model fit is uncertain.    
I tried to increase "MXSTEP" from 10 to 10000 in the syntax, but another warning appears:
The final Hessian matrix is not positive definite although all
convergence criteria are satisfied.
The MIXED procedure continues despite this warning. Validity of subsequent results cannot be ascertained.  
I also tried to increase "MXITER", but the warning remains. May I ask how to deal with this problem and get rid of the warning?
Aside from what you've already tried, in some cases increasing the number of Fisher scoring steps can be helpful, but it may be the case that your random intercept variance is truly redundant and you won't be able to resolve this problem with those data and that model.
Also, typically you would not use a linear model for a binary response variable, but would use something like a logistic model (this can be done in GENLINMIXED, under Analyze>Mixed Models>Generalized Linear in the menus).

Get rows product (multiplication)

SO,
The problem
I have an issue with multiplying rows. In SQL, there is a SUM() function which calculates the sum of some field over a set of rows. I want to get the product instead, i.e. for the table
+------+
| data |
+------+
|    2 |
|   -1 |
|    3 |
+------+
that will be 2*(-1)*3 = -6 as a result. I'm using DOUBLE data type for storing my data values.
My approach
From school math it is known that log(A x B) = log(A) + log(B), so that could be used to create the desired expression, like:
SELECT
    IF(COUNT(IF(SIGN(`col`) = 0, 1, NULL)), 0,
       IF(COUNT(IF(SIGN(`col`) < 0, 1, NULL)) % 2, -1, 1)
       *
       EXP(SUM(LN(ABS(`col`))))) AS product
FROM `test`;
Here you see the weakness of this method: since log(X) is undefined when X <= 0, I need to count negative signs before calculating the whole expression. Sample data and a query for this are given in this fiddle.
Another weakness is that we need to find out whether there is a 0 among the column values. (Since this is a sample, in the real situation I'm going to select the product for some subset of table rows with some condition(s), i.e. I cannot simply remove the 0s from my table, because a zero product is a valid and expected result for some row subsets.)
Specifics
And now, finally, the main part of my question: how to handle the situation when we have an expression like X*Y*Z where X < MAXF and Y < MAXF, but X*Y > MAXF and X*Y*Z < MAXF, so we have a possible data type overflow (here MAXF is the limit of the MySQL DOUBLE data type). The sample is here. The query above works well, but can I always be sure that it will handle this properly? I.e. maybe there is another case with an overflow issue where some sub-product causes overflow, but the entire product is fine (without overflow).
Or maybe there is another way to find the product of rows? Also, the table may contain millions of records (mainly -1.1 < X <= 1.1, but probably with values such as 100 or 1000, i.e. high enough to overflow DOUBLE if multiplied a certain number of times, if we have the issue I've described above). Maybe calculating via logs will be slow?
I guess this would work...
-- SUM(data < 0) counts the negative rows (COUNT would count every non-NULL row),
-- and LOG(ABS(...)) avoids NULLs from taking the log of a negative value.
-- Rows where data = 0 would still need separate handling.
SELECT IF(MOD(SUM(data < 0), 2) = 1
        , EXP(SUM(LOG(ABS(data)))) * -1
        , EXP(SUM(LOG(ABS(data))))) AS x
FROM my_table;
If you need this type of calculations often, I suggest you store the signs and the logarithms in separate columns.
The signs can be stored as 1 (for positives), -1 (for negatives) and 0 (for zero.)
The logarithm can be assigned for zero as 0 (or any other value) but it should not be used in calculations.
Then the calculation would be:
SELECT
    CASE WHEN EXISTS (SELECT 1 FROM test WHERE <condition> AND datasign = 0)
         THEN 0
         ELSE (SELECT 1 - 2 * (SUM(datasign = -1) % 2) FROM test WHERE <condition>)
    END AS resultsign,
    CASE WHEN EXISTS (SELECT 1 FROM test WHERE <condition> AND datasign = 0)
         THEN -1 -- undefined log for result 0
         ELSE (SELECT SUM(datalog) FROM test WHERE <condition> AND datasign <> 0)
    END AS resultlog
;
This way, you have no overflow problems. You can check the resultlog if it exceeds some limits or just try to calculate resultdata = resultsign * EXP(resultlog) and see if an error is thrown.
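To illustrate the reconstruction step, here is a small Python sketch (not part of the answer, just an assumption about how the application code might look) that rebuilds the product from the sign and log-sum the query returns, with an overflow guard:

# Hypothetical illustration: rebuild sign * exp(log_sum), guarding against overflow.
import math
import sys

def rebuild_product(result_sign, result_log):
    if result_sign == 0:
        return 0.0
    if result_log > math.log(sys.float_info.max):   # product would overflow a double
        raise OverflowError("product does not fit in a double")
    return result_sign * math.exp(result_log)

print(rebuild_product(-1, math.log(2) + math.log(1) + math.log(3)))   # about -6.0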
This question is a remarkable one in a sea of low-quality ones. Thank you; even reading it was a pleasure.
Precision
The exp(log(a)+log(b)) idea is a good one in itself. However, after reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic", make sure you use the DECIMAL or NUMERIC data types so you are using precision math, or else your values will be surprisingly inaccurate. For a couple of million rows, errors can add up very quickly! DECIMAL (as per the MySQL doc) has a maximum of 65 digits of precision, while for example 64-bit IEEE 754 floating-point values have only up to 16 digits (log10(2^52) = 15.65) of precision!
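A tiny demonstration of the kind of error accumulation being described (Python rather than SQL, just to show the effect):

# Toy demonstration of accumulated floating-point error versus exact decimal arithmetic.
from decimal import Decimal

float_sum = sum(0.1 for _ in range(1_000_000))
decimal_sum = sum(Decimal("0.1") for _ in range(1_000_000))

print(float_sum)     # slightly off from 100000.0: representation error accumulates
print(decimal_sum)   # exactly 100000.0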
Overflow
As per the relevant part of the MySQL doc:
Integer overflow results in silent wraparound.
DECIMAL overflow results in a truncated result and a warning.
Floating-point overflow produces a NULL result. Overflow for some operations can result in +INF, -INF, or NaN.
So you can detect floating-point overflow if it ever happens.
Sadly, if a series of operations would result in a correct value, fitting into the data type used, but at least one subresult in the process of calculations would not, then you won't get the correct value at the end.
Performance
Premature optimization is the root of all evil. Try it, and if it is slow, take the appropriate actions. Doing this might not be lightning quick, but it still might be quicker than fetching all the rows and doing the calculation on the application server. Only measurement can decide which is quicker...

Function to dampen a value

I have a list of documents each having a relevance score for a search query. I need older documents to have their relevance score dampened, to try to introduce their date in the ranking process. I already tried fiddling with functions such as 1/(1+date_difference), but the reciprocal function is too discriminating for close recent dates.
I was thinking maybe a mathematical function with range (0..1) and domain(0..x) to amplify their score, where the x-axis is the age of a document. It's best to explain what I further need from the function by an image:
Decaying behavior is often modeled well by an exponential function (many decaying processes in nature also follow it). You would use 2 positive parameters A and B and get
y(x) = A exp(-B x)
Since you want a y-range of [0,1], set A = 1. A larger B gives a faster decay; a smaller B makes the score fall off more slowly.
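A small sketch of what this could look like in code; the half-life parameter is just an example way of choosing B:

# Exponential decay dampening: the score falls off smoothly with document age.
import math

def dampen(relevance, age_days, half_life_days=30.0):
    """Multiply the relevance score by exp(-B * age), with B chosen from a half-life."""
    b = math.log(2) / half_life_days    # after one half-life the factor is 0.5
    return relevance * math.exp(-b * age_days)

print(dampen(10.0, 0))     # 10.0 (a brand new document keeps its score)
print(dampen(10.0, 30))    # about 5.0 (one half-life old)
print(dampen(10.0, 90))    # about 1.25 (three half-lives old)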
If a simple 1/(1+x) decreases too quickly too soon, a sigmoid function like 1/(1+e^-x) or the error function might be better suited to your purpose. Let the current date be somewhere in the negative numbers for such a function, and you can get a value that is current for some configurable time and then decreases towards a base value.
log((x+1) - age_of_document)
Where the base of the logarithm is (x+1). Note that x is as per your diagram and is the "threshold". If the age of the document is greater than x, the score goes negative. Multiply by the maximum possible score to introduce scaling.
E.g. with domain (0,10) and a maximum score of 10: 10*log(11 - age_of_document)/log(11)
A bit late, but as thiton says, you might want to use a sigmoid function instead, since it has a "floor" value for your long-tail data points. E.g.:
0.8/(1+5^(x-3)) + 0.2. You can adjust the constants 5 and 3 to control the slope of the curve; the 0.2 is where the floor will be.
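In code, that curve is straightforward to experiment with; the constants below are the ones from the formula above and are only examples:

# Sigmoid dampening with a floor: stays near 1 for recent documents,
# falls off around age ~3, and never drops below 0.2.
def dampen(age):
    return 0.8 / (1 + 5 ** (age - 3)) + 0.2

for age in range(7):
    print(age, round(dampen(age), 3))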

Human name comparison: ways to approach this task

I'm not a Natural Language Processing student, yet I know it's not a trivial strcmp(n1,n2).
Here's what I've learned so far:
comparing personal names can't be solved 100%
there are ways to achieve a certain degree of accuracy.
the answer will be locale-specific, and that's OK.
I'm not looking for spelling alternatives! The assumption is that the input's spelling is correct.
For example, all the names below can refer to the same person:
Berry Tsakala
Bernard Tsakala
Berry J. Tsakala
Tsakala, Berry
I'm trying to:
build (or copy) an algorithm which grades the relationship between 2 input names
find an indexing method (for names in my database, for hash tables, etc.)
note:
My task isn't about finding names in text, but to compare 2 names. e.g.
name_compare( "James Brown", "Brown, James", "en-US" ) ---> 99.0%
I used Tanimoto Coefficient for a quick (but not super) solution, in Python:
"""
Formula:
Na = number of set A elements
Nb = number of set B elements
Nc = number of common items
T = Nc / (Na + Nb - Nc)
"""
def tanimoto(a, b):
    c = [v for v in a if v in b]
    return float(len(c)) / (len(a) + len(b) - len(c))

def name_compare(name1, name2):
    return tanimoto(name1, name2)
>>> name_compare("James Brown", "Brown, James")
0.91666666666666663
>>> name_compare("Berry Tsakala", "Bernard Tsakala")
0.75
>>>
Edit: A link to a good and useful book.
Soundex is sometimes used to compare similar names. It doesn't deal with first name/last name ordering, but you could probably just have your code look for the comma to solve that problem.
We've just been doing this sort of work non-stop lately, and the approach we've taken is to have a look-up table or alias list. If you can discount misspellings/misheard/non-English names, then the difficult part is taken away. In your examples we would assume that the first word and the last word are the forename and the surname. Anything in between would be discarded (middle names, initials). Berry and Bernard would be in the alias list, and when Tsakala did not match Berry we would flip the word order around and then get the match.
One thing you need to understand is the database/people lists you are dealing with. In the English speaking world middle names are inconsistently recorded. So you can't make or deny a match based on the middle name or middle initial. Soundex will not help you with common name aliases such as "Dick" and "Richard", "Berry" and "Bernard" and possibly "Steve" and "Stephen". In some communities it is quite common for people to live at the same address and have 2 or 3 generations living at that address with the same name. The only way you can separate them is by date of birth. Date of birth may or may not be recorded. If you have the clout then you should probably make the recording of date of birth mandatory. A lot of "people databases" either don't record date of birth or won't give them away due to privacy reasons.
Effectively, people-name matching is not that complicated. It's entirely based on the quality of the data supplied. What happens in practice is that a lot of records remain unmatched, and even a human looking at them can't resolve the mismatch. A human may notice name aliases not recorded in the alias list, or may be able to look up details of the person on the internet, but you can't really expect your programme to do that.
Banks, credit rating organisations and the government have a lot of detailed information about us. Previous addresses, date of birth etc. And that helps them join up names. But for us normal programmers there is no magic bullet.
Analyzing name order and the existence of middle names/initials is trivial, of course, so it looks like the real challenge is knowing common name alternatives. I doubt this can be done without using some sort of nickname lookup table. This list is a good starting point. It doesn't map Bernard to Berry, but it would probably catch the most common cases. Perhaps an even more exhaustive list can be found elsewhere, but I definitely think that a locale-specific lookup table is the way to go.
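A rough sketch of what such a lookup-table normalization might look like; the alias table here is tiny and purely illustrative:

# Hypothetical sketch: normalize names via a nickname/alias lookup table before comparing.
# A real table would be much larger and locale-specific.
NICKNAMES = {
    "berry": "bernard",
    "dick": "richard",
    "steve": "stephen",
}

def normalize(name):
    """Handle 'Last, First' order, drop middle names/initials, map nickname aliases."""
    if "," in name:                                   # "Tsakala, Berry" -> "Berry Tsakala"
        last, first = [p.strip() for p in name.split(",", 1)]
        name = f"{first} {last}"
    parts = name.lower().replace(".", "").split()
    first, last = parts[0], parts[-1]                 # ignore middle names and initials
    first = NICKNAMES.get(first, first)
    return f"{first} {last}"

print(normalize("Berry J. Tsakala") == normalize("Bernard Tsakala"))  # True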
I had real problems with the Tanimoto coefficient using UTF-8.
What works for languages that use diacritical signs is difflib.SequenceMatcher().
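For example, a minimal illustration:

# Minimal illustration of difflib.SequenceMatcher for fuzzy name comparison.
from difflib import SequenceMatcher

def name_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(name_similarity("Berry Tsakala", "Bernard Tsakala"))   # about 0.86
print(name_similarity("Renée Tsakala", "Renee Tsakala"))     # compares the strings as-is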