Rule conflict in Jess rule engine

I use the Jess rule engine and I have some rules which change some slot values as a conclusion. This is achieved in two different ways:
1. with Jess modify (e.g. => modify fact)
2. with a Jess function (e.g. => change(slotvalue))
The problem is when two rules change the same slot value. I use Jess salience for this, and it works OK only for type 1 (=> modify fact).
When I have a rule with the function, the rule is executed every time. How can I solve this?
Here is the code:
1) if person is student => (modify ?fact (fredo 2))
2) if day is Wednesday => (modify ?fact (fredo (discount (fact-slot-value ?fact fredo) 50)))
3) if time > 5 => (modify ?fact (fredo 5))
If a person is a student, the day is Wednesday and the time is > 5, then all three rules are fired and the fredo slot value will be 5, because that rule fired last.
How can I solve priority issues in this case (e.g. fire only the second rule)? Is that possible?
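To make the intent concrete, here is a rough sketch of the priority behaviour I am after (in TypeScript, not Jess syntax; the rule conditions, salience values and the fredo slot are simplified and hypothetical): fire only the highest-salience rule that matches, instead of letting every matching rule overwrite the slot in turn.

    // Conceptual sketch only: each "rule" has a salience and proposes a new value
    // for the fredo slot; only the highest-salience matching rule should win.
    interface Person { student: boolean; day: string; time: number; fredo: number; }

    interface Rule {
      salience: number;
      matches: (p: Person) => boolean;
      newFredo: (p: Person) => number;
    }

    const discount = (value: number, percent: number): number =>
      value - (value * percent) / 100;

    const rules: Rule[] = [
      { salience: 10, matches: p => p.student,             newFredo: _ => 2 },
      { salience: 20, matches: p => p.day === "Wednesday", newFredo: p => discount(p.fredo, 50) },
      { salience: 5,  matches: p => p.time > 5,            newFredo: _ => 5 },
    ];

    function applyHighestPriority(p: Person): void {
      // Collect all matching rules, then apply only the one with the highest salience,
      // instead of letting every matching rule modify the slot one after another.
      const candidates = rules.filter(r => r.matches(p)).sort((a, b) => b.salience - a.salience);
      if (candidates.length > 0) {
        p.fredo = candidates[0].newFredo(p);
      }
    }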
Thanks for your reply.

Related

Find an item with multiple conditions in a table list in Google Sheets

I have a list with agents' results plus rules of priority, looking like this:
What should happen:
In the Agents Results list, the formula should look for the line where all the rules are met,
and then return the agent's name.
Like in the first rule (the Hero's Name is hardcoded):
Agent 1 has a Quality result of more than 99% and a Productivity result of more than 115%, so he goes in the Hero's Name column.
My attempts to solve the task:
To find the agent's name, use the INDEX function.
To find the row argument for INDEX, use the MATCH function.
The obstacles I've met:
The INDEX function returns only one result. What should I do if several rows meet the rule of priority?
I can't use a numeric from/to range in the MATCH function to find the row, because the rules of priority include ranges (e.g. Quality 99+ means all values from 99.01% to 100%).
I can't use multiple conditions in MATCH to find the row that meets both the Quality and Productivity conditions from the rules of priority.
Please help. I would be grateful for ideas on how this can be solved.
try:
=IFERROR(JOIN(", "; FILTER(C5:C; A5:A >= 98%; A5:A < 99%; B5:B >= 110%; B5:B < 115%)); "no one")
You should use the FILTER function.
To get all agents that meet the conditions quality > 99% and productivity > 115%, you can write a formula like this (where column C holds the agents, column B the productivity and column A the quality):
=FILTER(C5:C, A5:A>0.99, B5:B>1.15)
This will produce a column of agents.
For more conditions (ranges of productivity and quality), you just add more conditions to the FILTER function. For productivity between 110% and 115% and quality between 98% and 99% you write:
=FILTER(C5:C, A5:A>0.98, A5:A<0.99, B5:B<1.15, B5:B>1.1)
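As an aside, the same multi-condition selection can be sketched outside Sheets; here is a hypothetical TypeScript equivalent of the last formula (column letters mapped to fields, thresholds taken from the example above):

    // One row per agent: quality (column A), productivity (column B), name (column C).
    interface Row { quality: number; productivity: number; agent: string; }

    // Rough equivalent of =FILTER(C5:C, A5:A>0.98, A5:A<0.99, B5:B<1.15, B5:B>1.1):
    // keep only the rows where every condition holds, then return the agent column.
    function filterAgents(rows: Row[]): string[] {
      return rows
        .filter(r => r.quality > 0.98 && r.quality < 0.99 &&
                     r.productivity > 1.1 && r.productivity < 1.15)
        .map(r => r.agent);
    }

    // Example: only Agent 2 falls into both ranges.
    const heroes = filterAgents([
      { quality: 0.995, productivity: 1.16, agent: "Agent 1" },
      { quality: 0.985, productivity: 1.12, agent: "Agent 2" },
    ]);  // ["Agent 2"]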

Data model for timeline event synchronisation

I am looking for ideas on the data model for the following problem (and the proper CS terminology):
A (horizontal) "timeline" with several rows (A,B,C) contains "events" (1,2,3) width different durations (width) at different times (absolute x position or by delay "." after previous event):
A 1111....222222
B 33333
------------------
T 0123456789ABCDEF
(The rows are only interesting for graphical representation of overlapping/parallel "events", so they probably are not essential to the data model.)
Event duration may vary, affecting the whole timing:
A 11....222222
B 33333+3
------------------
T 0123456789ABCDEF
But let event 2 require events 1 and 3 to be finished, so the timing should look like this:
A 11.... 222222
B 33333+3
------------------
T 0123456789ABCDEF
(let's ignore that the original delay at T=7 is now missing.)
Originally I thought I'd have to have some "elastic" synchronization elements, one for each row:
A 11....####222222
B 33333+3#
------------------
T 0123456789ABCDEF
Hence the original problem of how to model and synchronize the sync elements in the two different "rows". But, as established above, this is only a matter of graphical/parallel representation.
Rather, the sync is a condition that could be "attached" to event 2, modifying or determining its beginning.
If an event "has" a condition, it will not have an absolute or relative start time. Its start can only be determined at the ends of the "linked" events (1 and 3).
So, given (a list of) some events with variable duration and either an absolute start time or a delay relative to another event's end, how could the condition "events 1 and 3 ended" be modelled to determine the start of "event 2"?
(I will prototype this in JavaScript and eventually implement in C/C++, so any sample code provided should not use high-level data types or libraries.)
What you need is an object that I would call a TimeFrame. The object would have the attributes duration, link and type, where link can be a precise time or a link to another TimeFrame and type accounts for the kind of link. For instance, a given TimeFrame that starts at a known time would have that time as its link attribute and the type would be TIME. A TimeFrame that is linked to the end of another would have that other TimeFrame as its link attribute and START-END as its type and so on.
Using the combination between link and type you could also support other types of links such as START-START, END-START or END-END.
UPDATE
Also, in order to allow some time interval between say, the end of a TimeFrame and the start of the next, one can add the attribute lag, which represents any delay between events. So, for instance if tf1 and tf2 are TimeFrames such that tf2 must start 5 time units after the end of tf1 the attributes of tf2 would be link = tf1, type = START-END, duration = <something> and lag = 5. Note also that the lag could be negative, which would extend the expressiveness of the model to a broad range of relationships.
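For concreteness, here is a minimal TypeScript sketch of this TimeFrame idea (the attribute names duration, link, type and lag come from the answer; the concrete link-type values and the startOf/endOf helpers are my own assumptions), using only plain objects so it can be ported to C/C++ later:

    // "TIME" = the frame starts at an absolute time; "START-END" = its start is
    // linked to the end of another frame, plus an optional lag.
    type LinkKind = "TIME" | "START-END";

    interface TimeFrame {
      duration: number;
      type: LinkKind;
      link: number | TimeFrame;  // absolute start time, or the frame this one depends on
      lag: number;               // delay after the linked frame's end (may be negative)
    }

    function startOf(tf: TimeFrame): number {
      if (tf.type === "TIME") {
        return tf.link as number;        // known absolute start
      }
      const parent = tf.link as TimeFrame;
      return endOf(parent) + tf.lag;     // start when the linked frame ends, plus lag
    }

    function endOf(tf: TimeFrame): number {
      return startOf(tf) + tf.duration;
    }

    // Example: tf2 must start 5 time units after the end of tf1.
    const tf1: TimeFrame = { duration: 4, type: "TIME", link: 0, lag: 0 };
    const tf2: TimeFrame = { duration: 6, type: "START-END", link: tf1, lag: 5 };
    // startOf(tf2) === 9, endOf(tf2) === 15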
While @Leandro Caniglia nicely rephrased my question into an object with attributes, essentially I see two options:
either the whole list of "events" needs to be evaluated at each "condition" (start/end) to check dependent "events",
or adding a "link" to a "parent" also creates a link to the "child" (so there is no need to evaluate all pending events' links).
Also:
The "link" property needs to be a list or array to be able to hold several references (e.g. 2: [1, 3]); see the sketch below.
Analogous to the link property start_me_on_condition, a stop_me_on_condition association may be desirable (see Leandro's suggestion of type; it would need to be extended to support multiple links + types).
An independent delay "event" might be more practical than a lag property.
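A possible shape for the list-of-links idea, continuing the sketch above (the names TimelineEvent, afterEndOf and resolveStart are made up); the condition "events 1 and 3 ended" becomes "start at the latest end among the linked events, plus any lag":

    interface TimelineEvent {
      duration: number;
      start?: number;                  // absolute start, if known
      afterEndOf: TimelineEvent[];     // events that must all have ended first (e.g. [ev1, ev3])
      lag: number;                     // optional extra delay once they have all ended
    }

    function resolveStart(ev: TimelineEvent): number {
      if (ev.start !== undefined) {
        return ev.start;               // absolute start time, no condition attached
      }
      // Condition "all linked events ended": take the latest end among the parents.
      const latestEnd = Math.max(...ev.afterEndOf.map(e => resolveStart(e) + e.duration));
      return latestEnd + ev.lag;
    }

    // Example matching the ASCII timelines: event 2 waits for events 1 and 3.
    const ev1: TimelineEvent = { duration: 2, start: 0, afterEndOf: [], lag: 0 };
    const ev3: TimelineEvent = { duration: 7, start: 0, afterEndOf: [], lag: 0 };
    const ev2: TimelineEvent = { duration: 6, afterEndOf: [ev1, ev3], lag: 0 };
    // resolveStart(ev2) === 7 (event 3 ends last)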

Proton CEP Fiware: delete old received events

I've got this kind of problem with Proton CEP: I currently have a "Sequence" EPA whose inputs are 2 events. But these events have different granularity: let's say I have A and B events; I receive N "A" events and M "B" events, where M << N.
So I'd like to have a rule like "if an event of type A is not consumed within X seconds, remove it"; otherwise I've got a long queue of A events, and I only need the rule to be evaluated for the temporally closest events.
In practice, I've got a fake room temperature sensor that sends its temperature updates every 5 seconds, and another program that checks the external weather and sends it every minute.
Any idea how to solve this situation?
Thank you very much!
I guess that in "consume" you mean arrival, so do you want to evaluate the time the A event took to get to the proton pcoressor? or the time between A events? Do you want to ensure that the A events are indeed continuous in a fix rate? "Removing" an event means to ignore it, since events are not kept anywhere, just processed. At the end, what is that you want to detect here? Like, what is the trend of room temperature compared to the outside temperature? then, emit output events accordingly?
Thanks.
All the relevant event instances are kept within the local state of the corresponding EPA.
For each EPA operand you have policies which dictate how the state is gathered and how the matching set for event derivation is built.
For example, the instance selection policy, which is defined per operand and has the values "Each", "First" and "Last", tells you whether all A instances are examined for a match with a B instance, or only the first (in order of arrival), or only the last.
The consumption policy says what to do with the operand state once a sequence is detected: should the instances of, say, A which participated in the sequence be removed from the EPA's state (the "Consume" value of the policy), or should they remain?
Playing with combinations of these policies should give you the behaviour you require.
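This is not Proton code, just a rough TypeScript sketch of how instance selection and consumption interact for an A/B pair; only "Each"/"First"/"Last" and "Consume" come from the answer above, while the "Reuse" value and all the function names are assumptions:

    type SelectionPolicy = "Each" | "First" | "Last";
    type ConsumptionPolicy = "Consume" | "Reuse";

    interface EventInstance { type: string; value: number; }

    // Local state of the "A" operand: every A instance seen so far.
    const pendingA: EventInstance[] = [];

    function onA(a: EventInstance): void {
      pendingA.push(a);
    }

    function onB(b: EventInstance,
                 selection: SelectionPolicy,
                 consumption: ConsumptionPolicy): EventInstance[][] {
      if (pendingA.length === 0) return [];

      // Instance selection: which A instances are paired with this B.
      let selected: EventInstance[];
      if (selection === "First")     selected = [pendingA[0]];
      else if (selection === "Last") selected = [pendingA[pendingA.length - 1]];
      else                           selected = [...pendingA];          // "Each"

      // Consumption: "Consume" removes the matched A instances from the state,
      // so they cannot take part in future detections; otherwise they stay queued.
      if (consumption === "Consume") {
        for (const a of selected) {
          const i = pendingA.indexOf(a);
          if (i >= 0) pendingA.splice(i, 1);
        }
      }
      return selected.map(a => [a, b]);   // one matching set per selected A
    }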

CEP's sequence detection

In developing for Fiware's Proton CEP, I came across an issue with Sequence event detection. I'll take advantage of the DoSAttack example project, which comes with the software, to explain the issue.
I make two main changes to an original copy of DoSAttack:
- One is to make the ExpectedCrash event have 3 more variables. This way I can log to the DoSAttackTRConsumer file the 3 values that triggered it.
- Then I also change the Cardinality Policy of the agent from Single to Unrestricted. This way the event can be triggered several times in a row, as TrafficReports come in (this may be a source of the issue).
I test this result and I find it works ok. I can see in the log that the values that trigger detection are the sequence of 3 values that arrived just before the event, after the first three events have arrived.
This, taking into account that the test being made on those 3 values is still the original example test: (TR3.volume > 1.50 * TR2.volume AND TR2.volume > 1.50 * TR1.volume).
The issue arises if I make the test just (TR3.volume > 1.50 * TR2.volume), for example; then CEP doesn't hold TR1 correctly. Now TR1 is the same as TR2, so CEP loses "memory" of this value.
Going a step further, I make the test just the condition (3 > 2), which is always true and should trigger a detection on any event that arrives. In this case, as events arrive, TR1, TR2 and TR3 are all the same and CEP has no memory of past values, even though the agent is of type Sequence.
The desired application would be for the CEP to receive 22 readings as a sequence of input events and analyse only the 1st, 8th, 15th and 22nd values of this sequence, each time a new value enters. But I find I can't make CEP remember the values correctly unless I'm testing all of them explicitly in the Condition view-box.
What would be the correct way to analyse the 1st, 8th, 15th and 22nd values that arrived, evaluating each time a new one arrives?
Here is the specification of DoSAttack, after altering it:
{"epn":{"events":[{"name":"TrafficReport","attributes":[{"name":"volume","type":"Integer","dimension":0}]},{"name":"ExpectedCrash","attributes":[{"name":"Cost","type":"Double","dimension":0},{"name":"TR1","type":"Integer","dimension":"0"},{"name":"TR2","type":"Integer","dimension":"0"},{"name":"TR3","type":"Integer","dimension":"0"}]}],"epas":[{"name":"IncreasingTraffic","epaType":"Sequence","context":"3MinAfterStartUp","inputEvents":[{"name":"TrafficReport","alias":"TR1","consumptionPolicy":"Consume","instanceSelectionPolicy":"First"},{"name":"TrafficReport","alias":"TR2","consumptionPolicy":"Consume","instanceSelectionPolicy":"First"},{"name":"TrafficReport","alias":"TR3","consumptionPolicy":"Consume","instanceSelectionPolicy":"First"}],"computedVariables":[],"assertion":"3>2","evaluationPolicy":"Immediate","cardinalityPolicy":"Unrestricted","internalSegmentation":[],"derivedEvents":[{"name":"ExpectedCrash","reportParticipants":false,"expressions":{"Cost":"10","TR1":"TR1.volume","TR2":"TR2.volume","TR3":"TR3.volume"}}],"derivedActions":[]}],"contexts":{"temporal":[{"name":"3MinAfterStartUp","type":"TemporalInterval","atStartup":true,"neverEnding":false,"initiators":[],"terminators":[{"terminatorType":"RelativeTime","terminationType":"Terminate","relativeTime":"180000"}]}],"segmentation":[],"composite":[]},"consumers":[{"name":"SysTemCrashConsumer","type":"File","properties":[{"name":"filename","value":"/opt/tomcat10/sample/DoSAttack_PredictedCrash.txt"},{"name":"formatter","value":"json"},{"name":"delimiter","value":";"},{"name":"tagDataSeparator","value":"="},{"name":"SendingDelay","value":"1000"}],"events":[{"name":"ExpectedCrash"}],"actions":[]},{"name":"DoSAttackTRConsumer","type":"File","properties":[{"name":"filename","value":"/opt/tomcat10/sample/DoSAttack_TrafficReport.txt"},{"name":"formatter","value":"json"},{"name":"delimiter","value":";"},{"name":"tagDataSeparator","value":"="},{"name":"SendingDelay","value":"1000"}],"events":[{"name":"TrafficReport"}],"actions":[]}],"producers":[{"name":"TrafficReportFileProducer","type":"File","properties":[{"name":"filename","value":"/opt/tomcat10/sample/DoSAttackScenarioJSON.txt"},{"name":"pollingInterval","value":"1000"},{"name":"sendingDelay","value":"1500"},{"name":"formatter","value":"json"},{"name":"delimiter","value":";"},{"name":"tagDataSeparator","value":"="}],"events":[]}],"actions":[],"name":"DoSAttack"}}
The producer file, DoSAttackScenarioJSON.txt, is still the original one, unaltered:
{"Name":"TrafficReport", "volume":"1000"}
{"Name":"TrafficReport", "volume":"1600"}
{"Name":"TrafficReport", "volume":"2500"}
If you include more than 3 values, you can see that the issue propagates.
If you need more information let me know.
Thank you
In the Sequence pattern, the engine looks for event instances that occurred in a particular order.
In Sequence (A, B, C), the engine looks for three event instances, the first one of type A, the second of type B and the third of type C, where:
(A's detection time) <= (B's detection time) AND (B's detection time) <= (C's detection time)
Usually in a Sequence pattern, either the event types are different, or there is some other condition over the participant events (as in the DoSAttack example).
When you use the same event type in a sequence (e.g., Sequence(A, A, A)), the same event instance can be used in all three places, since it satisfies the detection-order condition listed above.
In addition, if you use a "consumptionPolicy": "Consume" for a participant event, then after the event was used to detect the pattern, it will not be used for future detections of this pattern.
This is why when you have a Sequence(A, A, A) with no condition, and event instance A1 of type A arrives, it causes a pattern detection, and since it has Consume policy, it will not be kept for future detections. Later when event A2 of type A arrives, it causes another detection based on A2 alone.
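A tiny sketch (TypeScript, not Proton internals) of why a bare Sequence(A, A, A) behaves this way under the detection-time condition quoted above:

    interface Instance { id: string; detectionTime: number; }

    // Sequence(A, A, A) only requires non-decreasing detection times.
    function sequenceMatches(a1: Instance, a2: Instance, a3: Instance): boolean {
      return a1.detectionTime <= a2.detectionTime && a2.detectionTime <= a3.detectionTime;
    }

    const A1: Instance = { id: "A1", detectionTime: 100 };

    // With no extra condition, the same instance can fill all three slots:
    const fires = sequenceMatches(A1, A1, A1);   // true, so each arriving instance
    // triggers a detection on its own, and with the "Consume" policy A1 is then
    // dropped from the EPA state, which is why TR1, TR2 and TR3 all show the
    // newest value and no history of earlier values is kept.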
Also, according to the Sequence built-in condition over the detection time, a sequence of events can be detected although other events arrived in between.
Please describe the pattern you would like to detect. Maybe you can use a Trend or Aggregate EPA instead.

Boolean Implication

I need some help with this Boolean Implication.
Can someone explain how this works in simple terms:
A implies B = B + A' (if A then B). Also equivalent to A >= B
Boolean implication A implies B simply means "if A is true, then B must be true". This implies (pun intended) that if A isn't true, then B can be anything. Thus:
False implies False -> True
False implies True -> True
True implies False -> False
True implies True -> True
This can also be read as (not A) or B - i.e. "either A is false, or B must be true".
Here's how I think about it:
if (A)
    return B;
else
    return true;
If A is true, then B is relevant and should be checked; otherwise, ignore B and return true.
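A quick way to check this on all four combinations is a small TypeScript snippet that just enumerates the truth table for (!A) || B:

    // A => B is (!A) || B; the loop prints the full truth table.
    function implies(a: boolean, b: boolean): boolean {
      return !a || b;
    }

    for (const a of [false, true]) {
      for (const b of [false, true]) {
        console.log(`${a} implies ${b} -> ${implies(a, b)}`);
      }
    }
    // false implies false -> true
    // false implies true  -> true
    // true  implies false -> false
    // true  implies true  -> true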
I think I see where Serge is coming from, and I'll try to explain the difference. This is too long for a comment, so I'll post it as an answer.
Serge seems to be approaching this from the perspective of questioning whether or not the implication applies. This is somewhat like a scientist trying to determine the relationship between two events. Consider the following story:
A scientist visits four different countries on four different days. In each country she wants to determine if rain implies that people will use umbrellas. She generates the following truth table:
Did it rain?   Did people use umbrellas?   Does rain => umbrellas?   Comment
No             No                          ??                        It didn't rain, so I didn't get to observe
No             Yes                         ??                        People were shielding themselves from the hot sun; I don't know what they would do in the rain
Yes            No                          No                        Perhaps the local government banned umbrellas and nobody can use them. There is definitely no implication here.
Yes            Yes                         ??                        Perhaps these people use umbrellas no matter what the weather is
In the above, the scientist doesn't know the relationship between rain and umbrellas and she is trying to determine what it is. Only on one of the days in one of the countries can she definitively say that implies is not the correct relationship.
Similarly, it seems that Serge is trying to test whether A=>B, and is only able to determine it in one case.
However, when we are evaluating boolean logic we know the relationship ahead of time, and want to test whether the relationship was adhered to. Another story:
A mother tells her son, "If you get dirty, take a bath" (dirty=>bath). On four separate days, when the mother comes home from work, she checks to see if the rule was followed. She generates the following truth table:
Get dirty?   Take a bath?   Follow rule?   Comment
No           No             Yes            Son didn't get dirty, so didn't need to take a bath. Give him a cookie.
No           Yes            Yes            Son didn't need to take a bath, but wanted to anyway. Extra clean! Give him a cookie.
Yes          No             No             Son didn't follow the rule. No cookie and no TV tonight.
Yes          Yes            Yes            He took a bath to clean up after getting dirty. Give him a cookie.
The mother has set the rule ahead of time. She knows what the relationship between dirt and baths are, and she wants to make sure that the rule is followed.
When we work with boolean logic, we are like the mother: we know the operators ahead of time, and we want to work with the statement in that form. Perhaps we want to transform the statement into a different form (as in the original question, where the asker wanted to know whether two statements are equivalent). In computer programming we often want to plug a set of variables into the statement and see whether the entire statement evaluates to true or false.
It's not a matter of knowing whether implies applies - it wouldn't have been written there if it shouldn't be. Truth tables are not about determining whether a rule applies, they are about determining whether a rule was adhered to.
I like to use the example: If it is raining, then it is cloudy.
Raining => Cloudy
Contrary to what many beginners might think, this in no way suggests that rain causes cloudiness, or that cloudiness causes rain. (EDIT: It means only that, at the moment, it is not both raining and not cloudy. See my recent blog posting on material implication here. There I develop, among other things, a rationale for the usual "definition" for material implication. The reader will require some familiarity with basic methods of proof, e.g. direct proof and proof by contradiction.)
~[Raining & ~Cloudy]
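For completeness, the connection between this form and the "B + A'" expression from the question is just De Morgan's law followed by double negation:

    \lnot(A \land \lnot B) \;\equiv\; \lnot A \lor \lnot\lnot B \;\equiv\; \lnot A \lor B

With A = Raining and B = Cloudy, that is exactly the "(not A) or B" reading given in the first answer above.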
Judging from the truth tables, it is possible to infer the value of a=>b only for a=1 and b=0. In this case the value of a=>b is 0. For the rest of values (a,b), the value of a=>b is undefined: both (a=>b)=0 ("a doesn't imply b") and (a=>b)=1 ("a implies b") are possible:
a   b   a=>b   comment
0   0   ?      it is not possible to infer whether a implies b, because a=0
0   1   ?      --"--
1   0   0      b is 0 when a is 1, so it is possible to conclude that a does not imply b
1   1   ?      whether a implies b is undefined, because it is not known whether b can be 0 when a=1
For a to imply b it is necessary and sufficient that b=1 whenever a=1, so that there is no counterexample with a=1 and b=0. For rows 1, 2 and 4 in the truth table it is not known whether there is a counterexample: these rows do not contradict (a=>b)=1, but they also do not prove it. In contrast, row 3 immediately disproves (a=>b)=1 because it provides a counterexample with a=1 and b=0.
I guess I may shock some readers with these explanations, but it seems there are severe errors somewhere in the basics of the logic we are taught, and that is one of the reasons why problems such as Boolean Satisfiability have not been solved yet.
The best contribution on this question is given by Serge Rogatch.
Boolean logic applies only where the result of quantifying (or evaluating) is either true or false, and the relationship between boolean logic propositions is based on this fact.
So there must exist a relationship or connection between the propositions.
In higher-order logic, the relationship is not just a case of on/off, 1/0 or +voltage/-voltage; the evaluation of a worded proposition is more complex. If no relationship exists between the worded propositions, then implication for worded propositions is not equivalent to implication for boolean logic propositions.
While the implication truth table always yields correct results for binary propositions, this is not the case with worded propositions which may not be related in any way at all.
~A V B truth table:
A B Result/Evaluation
1 1 1
1 0 0
0 1 1
0 0 1
Worded proposition A: The moon is made of sour cream.
Worded proposition B: Tomorrow I will win the lotto.
A B Result/Evaluation
1 ? ?
As you can see, in this case, you can't even determine the state of B which will decide the result. Does this make sense now?
In this truth table, proposition ~A always evaluates to 1, therefore, the last two rows don't apply. However, the last two rows always apply in boolean logic.
http://thenewcalculus.weebly.com
Here's a compact statement:
Suppose we have two statements, A and B, each of which could either be true or false. Without any further information, there are 2 x 2 = 4 possibilities: "A and not B", "B and not A", "neither A nor B", and "both A and B".
Now impose the additional restriction that "if A, then also B". After imposing this restriction, the expression "x -> y", where -> is the "implication" operator, denotes whether it is still possible for A == x and B == y. The only outcome that is no longer possible after this additional restriction is A == 1 and B == 0, since that contradicts the restriction itself. Hence, we have 1 -> 0 is zero, and every other pair is 1.