I have a list with agents' results plus Rules of Priority, looking like this:
What should happen:
In the Agents Results list, the formula should look for the row where all the rules are met,
and then return the agent's name.
Like in the first rule (the Hero's Name is hardcoded):
Agent 1 has a Quality result above 99% and a Productivity result above 115%, so he appears in the Hero's Name column.
My attempts to solve the task:
To find the agent's name, use the INDEX function.
To find the row argument for INDEX, use the MATCH function.
The obstacles I've met:
The INDEX function returns only one result. What should I do if several rows meet the Rule of Priority?
I can't use a numeric from/to range in the MATCH function to find the row, because the Rules of Priority include open-ended ranges (e.g. Quality 99+ means all values from 99.01% to 100%).
I can't use multiple conditions in MATCH to find the row that meets both the Quality and Productivity conditions from the Rules of Priority.
Please help. I would be grateful for ideas on how this can be solved.
try:
=IFERROR(JOIN(", "; FILTER(C5:C; A5:A >= 98%;
                                 A5:A < 99%;
                                 B5:B >= 110%;
                                 B5:B < 115%)); "no one")
You should use the FILTER function.
To get all agents that meet the conditions quality > 99% and productivity > 115%, you can write the following formula, given these columns:
c - agents
b - productivity
a - quality
=filter(c5:c,a5:a>0.99,b5:b>1.15)
This will produce a column of agents.
For more conditions (ranges of productivity and quality), you just add more conditions to the FILTER function. For productivity between 110% and 115% and quality between 98% and 99% you write:
=filter(c5:c,a5:a>0.98,a5:a<0.99,b5:b<1.15,b5:b>1.1)
I must calculate a function in EES.
Function: T(t) = ((T_surface - T_infinity) * e^(-b*t)) + T_infinity
t is time, and its limits are between 1 and 40 seconds. I need to calculate the function for every second from 1 to 40.
How can I write this function in EES?
If I understand the question correctly, no differential equation has to be solved, and no integration, summation or the like has to be carried out. Only a calculation with variation of the variable t is needed.
There are two possibilities. But first: EES is not case-sensitive! So you should choose one of T or t, and the other should get another name. I prefer 'tau' for time.
Parametric table
Write the equation in the Equations window and add the parameters you have. Do not define 'tau'. Like:
T=((T_surface-T_infinity)*(exp(-b*tau)))+T_infinity
T_surface = 80[C]
T_infinity = 20[C]
b=3
Then open the parametric table and add the variables you want to see; you should include at least T and tau. Expand the rows of the table to 40 and enter the respective times. Then press the green play button (top left in the table).
Duplicate function:
$varinfo tau[] units='s'
$varinfo T[] units='C'
T_surface = 80[C]
T_infinity = 20[C]
b=0.1[1/s]
Duplicate i=1,40
   tau[i] = i*1[s]
   T[i] = ((T_surface - T_infinity)*(exp(-b*tau[i]))) + T_infinity
End
I am using SPSS to conduct a mixed-effects model for the following project:
Participants are asked some open-ended questions and their answers are recorded.
For example, if a participant's answer is related to equality, the variable "equality" is coded as "1"; otherwise, it is coded as "0". Therefore, the dependent variable is "equality".
Fixed effects:
- participant's country (Asians vs. Westerners)
- gender (Male vs Female)
- age group (younger age group vs. older age group)
- condition (control group vs. intervention group)
Random effect: Subject ID (participants)
Sample size: over 600 participants
My syntax in SPSS:
MIXED Equality BY Country Gender AgeGroup Condition
/CRITERIA=CIN(95) MXITER(100) MXSTEP(10) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
/FIXED= Country Gender AgeGroup Condition | SSTYPE(3)
/METHOD=ML
/PRINT=SOLUTION TESTCOV
/RANDOM=INTERCEPT | SUBJECT(SubID_R) COVTYPE(VC).
When running this analysis in SPSS, the following warning appears:
Iteration was terminated but convergence has not been achieved.
The MIXED procedure continues despite this warning. Subsequent results produced are based on the last iteration. Validity of the model fit is uncertain.
I tried increasing "MXSTEP" from 10 to 10000 in the syntax, but another warning appears:
The final Hessian matrix is not positive definite although all
convergence criteria are satisfied.
The MIXED procedure continues despite this warning. Validity of subsequent results cannot be ascertained.
I also tried increasing "MXITER", but the warning remains. How can I deal with this problem and get rid of the warning?
Aside from what you've already tried, in some cases increasing the number of Fisher scoring steps can be helpful, but it may be the case that your random intercept variance is truly redundant and you won't be able to resolve this problem with those data and that model.
Also, typically you would not use a linear model for a binary response variable, but would use something like a logistic model (this can be done in GENLINMIXED, under Analyze>Mixed Models>Generalized Linear in the menus).
In the context of DO-178B, the number of conditions and inputs might differ: (A && B) || (A && C) has three inputs but four conditions, because each occurrence of A is considered a unique condition.
Multiple condition coverage requires 2^n test cases, where n is the number of inputs.
But what about this:
if(X>100 && X<200 && X!=50)
There are three conditions using the same input, but I am sure that is not what the authors mean; otherwise I would need just two test cases to cover all combinations of those conditions.
So I wonder: what is meant by "input" here, a Boolean value in the decision? That would make sense for the quote I mentioned, as A will have the same value in all of its occurrences. But I would like to know whether my understanding is correct.
I am not familiar with DO-178B, but from the statement that they require
2^n test cases, where n is the number of inputs
I would deduce that the number of inputs in this context is the number of different (or independent) conditions.
This has nothing to do with the fact that in your example all conditions depend on only one integer variable.
However, in your example you will not be able to generate all 2^3 test cases because the 3rd condition is redundant. So in practice you would remove it and end up with two inputs.
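To make that concrete, here is a minimal Python sketch (purely illustrative, not anything prescribed by DO-178B) that brute-forces which of the 2^3 condition combinations for X>100 && X<200 && X!=50 can actually be reached by some integer X:

from itertools import product

# Illustrative sketch: brute-force which of the 2^3 condition combinations
# for the decision  X > 100 && X < 200 && X != 50  are actually reachable
# by some integer X.  Unreachable combinations are exactly why full
# multiple condition coverage cannot be achieved for this decision.
conditions = (
    lambda x: x > 100,
    lambda x: x < 200,
    lambda x: x != 50,
)

reachable = {tuple(c(x) for c in conditions) for x in range(-1000, 1000)}

for combo in product((False, True), repeat=len(conditions)):
    print(combo, "reachable" if combo in reachable else "impossible")

Only four of the eight combinations turn out to be reachable; in particular, no reachable combination has the third condition false while the first is true, reflecting that X != 50 is redundant once X > 100 holds.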
I have a histogram where I count the number of occurrences of a function taking particular values in the range 0.8 to 2.2.
I would like to get the cumulative distribution function for that set of values. Is it correct to just count the total number of occurrences up to each particular value?
For example, would the CDF at 0.9 be the sum of all the occurrences from 0.8 to 0.9?
Is it correct?
Thank you
The sum normalised by the number of entries will give you an estimate of the cdf, yes. It will be as accurate as the histogram is an accurate representation of the pdf. If you want to evaluate the cdf anywhere except the bin endpoints, it makes sense to include a fraction of the counts, so that if you have break points b_i and b_j, then to evaluate the cdf at some point b_i < p < b_j you should add the fraction of counts (p - b_i) / (b_j-b_i) from the relevant cell. Essentially this assumes uniform density within the cells.
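For what it's worth, here is a minimal Python sketch of that idea, assuming you have the bin edges and counts (the names and example data are illustrative):

import numpy as np

# Illustrative bin edges and counts standing in for the histogram
# described above (values observed between 0.8 and 2.2).
edges = np.linspace(0.8, 2.2, 15)        # 15 edges -> 14 bins
counts = np.array([3, 7, 12, 20, 25, 30, 28, 22, 18, 14, 9, 6, 4, 2])

def cdf_from_histogram(p, edges, counts):
    """Estimate the CDF at p from bin counts, adding a linear fraction of
    the count in the bin containing p (uniform density within each bin)."""
    total = counts.sum()
    cum = np.concatenate(([0], np.cumsum(counts)))   # cumulative count at each edge
    if p <= edges[0]:
        return 0.0
    if p >= edges[-1]:
        return 1.0
    i = np.searchsorted(edges, p, side="right") - 1  # index of the bin containing p
    frac = (p - edges[i]) / (edges[i + 1] - edges[i])
    return (cum[i] + frac * counts[i]) / total

print(cdf_from_histogram(0.9, edges, counts))        # CDF estimate at 0.9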
You can get an estimate of the CDF from the underlying values, too (based on your question I'm not quite sure what you have access to, whether it's the bin counts in the histogram or the actual values). Beware that doing so will give your CDF discontinuities (steps) at each data point, so think about whether you have enough data, and what you're using the CDF for, to decide whether this is appropriate.
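A corresponding sketch for the raw-values case (again just illustrative):

import numpy as np

def ecdf(values):
    """Empirical CDF from raw values: a step function that jumps by 1/n
    at each sorted data point."""
    x = np.sort(np.asarray(values))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Usage with some made-up data in the 0.8-2.2 range:
x, y = ecdf(np.random.default_rng(1).uniform(0.8, 2.2, size=200))
print(x[:3], y[:3])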
As a final note of warning, beware that evaluating the cdf outside of the range of observed values will give you an estimated probability of zero or one (zero for x<0.8, one for x>2.2). You should consider whether the function is truly bounded to that interval, and if not, employ some smoothing to ensure small amounts of probability mass outside the range of observed values.
I have a MySQL database I'm searching through. Let's say this is a database of people. When querying for a specific record, it is possible to find a 100% match on each attribute. But the strategy here is more about querying the database to find the closest match across the table attributes.
In this scenario, does it make sense to create a temporary table (much like a tally sheet) to indicate which attributes match and which attributes are present? What is the typical approach to doing advanced searches on a database like this?
Example (below) of a hypothetical stored procedure.
(The parameters are just to exemplify how I would search. I'm not concerned with how to perform my SELECTs; the question is about approach, strategy, and technique.)
call FindPerson ("Brown Eyes", "Brown hair", "Height:6'1", "white", "Name:Joe", "weight180", "Age 34", "sex m");
RESULT TABLE
NAME   AGE  HEIGHT  WEIGHT  HAIR   SKIN   SEX  RANK_MATCH
Joe    32   6'1     180     Brown  white  m    1
Mike   33   6'1     179     Brown  white  m    2
James  31   6'0     179     Brown  black  m    3
Just off the top of my head: you can create your own score and sort by it. Something like
SELECT `id`,
(IF(`age`=32,1,0)+IF(`height`="6'1",1,0)+...) as `score`
FROM `people`
HAVING `score` > 0
ORDER BY `score` DESC
LIMIT 10;
With this, you can handle every field with its own comparison, and also weight the individual attributes by adding not just 1 but 2 or more.
But I'm not quite sure how performant this is.
The approach I would use would be to create a scoring function (your stored proc) that evaluates the given input's standardized distance from the mean.
In the proc, you would judge each criterion in a fashion similar to:
INPUT AGE: 32
calculate MEAN of AGE WHERE (sex = m): 34.5
calculate STANDARD DEVIATION of AGE WHERE (sex = m): 2.5
calculate how many STDEVs 32 is from 34.5 (also known as the z-score): 1
Repeat this process for all numeric datatypes, summing them and ORDER BY the sum.
In doing so, the following schema change would be required: height changed from foot/inch form to strictly inches.
Depending on your needs, you may also consider coming up with an arbitrary scale for sex and skin color/hair color. Of course, you may think that measures like these should NOT be factored in because of how drastically they would change the scoring function. If you chose to include them, you'd have to find some number to add to the above sum, but it's hard because nominal variables don't translate easily into these kinds of things.
If you find that hair color/skin color can be usefully mapped onto, say, a continuous color spectrum, the scoring works the same way: the input's color value versus the mean and standard deviation of the color values.
The query that would find your matches would be something to the effect of:
SELECT t.*,
       ABS(INPUT_AGE - s.avg_age) / s.std_age AS age_z,
       ABS(INPUT_WT  - s.avg_wt)  / s.std_wt  AS wt_z,
       ...
       (ABS(INPUT_AGE - s.avg_age) / s.std_age
        + ABS(INPUT_WT - s.avg_wt) / s.std_wt + ...) AS score
FROM `table` t
-- the means and standard deviations are computed once in a derived table,
-- since mixing aggregates with per-row values in one SELECT won't work
CROSS JOIN (SELECT AVG(AGE) AS avg_age, STD(AGE) AS std_age,
                   AVG(WT)  AS avg_wt,  STD(WT)  AS std_wt
            FROM `table`) AS s
ORDER BY score ASC