I am trying my hand at Jess, and I want to write a rule like the following.
When the order amount is greater than 1000, the customer is preferred, and the customer name matches the order name, then do something.
My Order.java has the following properties:
int amount, Customer cust
And Customer.java is a plain bean class holding the following properties:
String name, String address
I am not able to find a way to get the value of Order.cust.name and compare it with Customer.name in Jess.
Can anyone help me here please?
I tried the following, but it is not working out for me.
(defrule HelloCustomer "When customer is preferred and amount is greater than 1001"
?person1 <- (Customer)
?cust <- (Customer {isPreferred == true})
?o <- (Order{amount > (+ 1000 1)})
?person2 <- (Order(customerA))
?person2Name <- (Customer{name == (Order{customerA.name})})
=>
(modify ?o (totalAmount 1000))
(printout t "Found two different " (call ?person2.customerA getName) crlf))
(printout t "Found two different*** " ?person1.name crlf))
You have many of the details right, but the fundamentals are mostly wrong. First, note that each “Customer” and “Order” pattern matches a new object; your rule might match as many as five different objects. Second, you’ll need to bind variables to slot values so you can test them in other slots. Lastly, you’ll need to make use of the “OBJECT” slot to retrieve the Java object represented by each of these patterns. Roughly, I think you want something like:
(defrule HelloCustomer
  (Customer {isPreferred == true} (name ?name) (OBJECT ?customer))
  (Order {amount > 1001} (name ?name) (OBJECT ?order)) ;; Repeating variable binds these together
  =>
  ;; do something with ?customer and ?order
  )
The Jess manual covers all of this, but you do have to read the whole thing. After all, you’re learning a whole new programming language.
I'm new to Clojure; after trying multiple methods I'm completely stuck. I know how to achieve this in any other imperative language, but not in Clojure.
I have a JSON file, https://data.nasa.gov/resource/y77d-th95.json, containing meteor fall data; each fall includes a mass and a year.
I'm trying to find which year had the greatest collective total mass of falls.
Here's what I have so far:
(def jsondata
  (json/read-str (slurp "https://data.nasa.gov/resource/y77d-th95.json") :key-fn keyword))
;Get the unique years
(def years (distinct (map :year jsondata)))
;Create map of unique years with a number to hold the total mass
(def yearcount (zipmap years (repeat (count years) 0)))
My idea was to use a for form to iterate through the jsondata and update the yearcount map at the corresponding key (the year of the fall object) with the mass of the object (incrementing it, as with += in C).
I tried this, although I knew it probably wouldn't work:
(for [x jsondata]
  (update yearcount (get x :year) (+ (get yearcount (get x :year)) (Integer/parseInt (get x :mass)))))
The idea of course being that the yearcount map would hold the totals for each year, on which I could then use frequencies, sort-by, and last to get the year with the highest mass.
I also defined this function to update values in a map with a function, although I'm not sure if I can actually use it:
(defn map-kv [m f]
  (reduce-kv #(assoc %1 %2 (f %3)) {} m))
I've tried a few different methods, had lots of issues and just can't get anywhere.
Here's an alternate version, just to show an approach with a slightly different style. Especially if you're new to Clojure, it may be easier to see the stepwise thinking that led to the solution.
The tricky part might be the for expression, which is another nice way to build up a new collection by (in this case) applying functions to each key and value in an existing map.
;; assumes (require '[clojure.java.io :as io] '[clojure.data.json :as json])
(defn max-meteor-year [f]
  (let [rdr          (io/reader f)
        all-data     (json/read rdr :key-fn keyword)
        clean-data   (filter #(and (:year %) (:mass %)) all-data)
        grouped-data (group-by #(:year %) clean-data)
        reduced-data (for [[k v] grouped-data]
                       [(subs k 0 4) (reduce + (map #(Double/parseDouble (:mass %)) v))])]
    (apply max-key second reduced-data)))
clj.meteor> (max-meteor-year "meteor.json")
["1947" 2.303023E7]
Here is my solution. I think you'll like it because its parts are decoupled and are not joined into a single threading macro, so you may change and test any part of it when something goes wrong.
Fetch the data:
;; json/parse-string comes from the Cheshire library
(def jsondata
  (json/parse-string
    (slurp "https://data.nasa.gov/resource/y77d-th95.json")
    true))
Note that you may just pass a true flag to indicate that the keys should be keywords rather than strings.
Declare a helper function that takes into account the case when the first argument is missing (is nil):
(defn add [a b]
  (+ (or a 0) b))
Declare a reducing function that takes a result and an item from the collection of meteor data. It updates the result map with the add function we created before. Please note that some items do not have either the mass or the year key; we should check that they exist before operating on them:
(defn process [acc {:keys [year mass]}]
  (if (and year mass)
    (update acc year add (Double/parseDouble mass))
    acc))
The final step is to run the reducing algorithm:
(reduce process {} jsondata)
The result is:
{"1963-01-01T00:00:00.000" 58946.1,
"1871-01-01T00:00:00.000" 21133.0,
"1877-01-01T00:00:00.000" 89810.0,
"1926-01-01T00:00:00.000" 16437.0,
"1866-01-01T00:00:00.000" 559772.0,
"1863-01-01T00:00:00.000" 33710.0,
"1882-01-01T00:00:00.000" 314462.0,
"1949-01-01T00:00:00.000" 215078.0,
I think that such a step-by-step solution is much clearer and more maintainable than a single huge ->> thread.
Update: sorry, I misunderstood the question. I think this will work for you:
(->> (group-by :year jsondata)
(reduce-kv (fn [acc year recs]
(let [sum-mass (->> (keep :mass recs)
(map #(Double/parseDouble %))
(reduce +))]
(assoc acc year sum-mass)))
{})
(sort-by second)
(last))
=> ["1947-01-01T00:00:00.000" 2.303023E7]
The reduce function here is starting out with an initial empty map, and its input will be the output of group-by which is a map from years to their corresponding records.
For each step of reduce, the reducing function is receiving the acc map we're building up, the current year key, and the corresponding collection of recs for that year. Then we get all the :mass values from recs (using keep instead of map because not all recs have a mass value apparently). Then we map over that with Double/parseDouble to parse the mass strings into numbers. Then we reduce over that to sum all the masses for all the recs. Finally we assoc the year key to acc with the sum-mass. This outputs a map from years to their mass sums.
Then we can sort those map key/value pairs by their value (second returns the value), then we take the last item with the highest value.
When attempting to rework a merge sort program, I used a match ... with expression within a function.
let rec merge (array : int[]) (chunkA : int[]) (chunkB : int[]) a b i =
    match a, b with
    | chunkA.Length, _ -> chunkB
    | _, chunkB.Length -> chunkA
    | _ when chunkB.[b] < chunkA.[a]
        -> array.[i] <- chunkB.[b]
           merge array chunkA chunkB a (b+1) (i+1)
    | _ -> array.[i] <- chunkA.[a]
           merge array chunkA chunkB (a+1) b (i+1)
However, Visual Studio threw the error:
The namespace or module 'chunkA' is not defined.
This is confusing, since 'chunkA' had been declared among the function parameters.
In addition, I am rather new to F# and functional programming in general. If the structure or methodology in my code is not up to par, then please feel free to comment on this as well.
Also, if I'm being thick, please feel free to tell me that as well.
Many Thanks, Luke
As John mentioned, you cannot directly pattern match a numerical value against another variable. The language of patterns allows only constants, constructors and a few other things.
You can write the code using when, but then you do not really benefit from the match construct in any way, because you only have conditions in when clauses. In that case, I'd go for a plain old if, because it makes it more obvious what you are doing:
let rec merge (array : int[]) (chunkA : int[]) (chunkB : int[]) a b i =
    if a = chunkA.Length then chunkB
    elif b = chunkB.Length then chunkA
    elif chunkB.[b] < chunkA.[a] then
        array.[i] <- chunkB.[b]
        merge array chunkA chunkB a (b+1) (i+1)
    else
        array.[i] <- chunkA.[a]
        merge array chunkA chunkB (a+1) b (i+1)
The match construct is very useful if you are pattern matching on more functional data structures - for example if you were writing merge on two lists (rather than arrays), then it would be a lot nicer with pattern matching.
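For illustration only (this sketch is mine, not part of the original answer), a list-based merge might look roughly like this, with the termination cases falling out as ordinary patterns rather than when guards:
// hypothetical list-based merge; the names are made up
let rec mergeLists xs ys =
    match xs, ys with
    | [], rest | rest, [] -> rest
    | x :: _, y :: ys' when y < x -> y :: mergeLists xs ys'
    | x :: xs', _ -> x :: mergeLists xs' ys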
When you use match, you need to use compile-time constants.
Something like this is what you want:
| aa, _ when aa = chunkA.Length -> ....
data Task = Task
  { id :: String
  , description :: String
  , dependsOn :: [String]
  , dependentTasks :: [String]
  } deriving (Eq, Show, Generic, ToJSON, FromJSON)

type Storage = Map String Task

s :: Storage
s = empty

addTask :: Task -> Storage -> Storage
addTask (Task id desc dep dept) = insert id (Task id desc dep dept)

removeTask :: String -> Storage -> Storage
removeTask tid = delete tid

changes = [addTask (Task "1" "Description" [] []), removeTask "1"]

main = putStrLn . show $ foldl (\s c -> c s) s changes
Suppose I have the following code. I want to store the changes list in a JSON file, but I don't know how to do that with Aeson, aside probably from writing a custom parser, and there must obviously be a better way. Maybe by using a language extension to derive (Generic, ToJSON, FromJSON) for addTask and removeTask, etc...
EDIT. For all the people who say "You can't serialize a function":
Read the comments to an answer to this question.
Instance Show for function
That said, it's not possible to define Show to actually give you more detail about the function. – Louis Wasserman May 12 '12 at 14:51
Sure it is. It can show the type (given via Typeable); or it can show some of the inputs and outputs (as is done in QuickCheck).
EDIT 2. Okay, I get that I can't have the function name in the serialization. But can this be done via Template Haskell? I see that aeson supports serialization via Template Haskell, but as a newcomer to Haskell I can't figure out how to do that.
Reading between the lines a bit, a recurring question here is, "Why can't I serialize a function (easily)?" The answer -- which several people have mentioned, but not explained clearly -- is that Haskell is dedicated to referential transparency. Referential transparency says that you can replace a definition with its defined value (and vice versa) without changing the meaning of the program.
So now, let's suppose we had a hypothetical serializeFunction, which in the presence of this code:
foo x y = x + y + 3
Would have this behavior:
> serializeFunction (foo 5)
"foo 5"
I guess you wouldn't object too strenuously if I also claimed that in the presence of
bar x y = x + y + 3
we would "want" this behavior:
> serializeFunction (bar 5)
"bar 5"
And now we have a problem, because by referential transparency
serializeFunction (foo 5)
= { definition of foo }
serializeFunction (\y -> 5 + y + 3)
= { definition of bar }
serializeFunction (bar 5)
but "foo 5" does not equal "bar 5".
The obvious followup question is: why do we demand referential transparency? There are at least two good reasons: first, it allows equational reasoning like above, hence eases the burden of refactoring; and second, it reduces the amount of runtime information that's needed, hence improving performance.
Of course, if you can come up with a representation of functions that respects referential transparency, that poses no problems. Here are some ideas in that direction:
printing the type of the function
instance (Typeable a, Typeable b) => Show (a -> b) where
show = show . typeOf
-- can only write a Read instance for trivial functions
printing the input-output behavior of the function (which can also be read back in); a small sketch of this follows the list
creating a data type that combines a function with its name, and then printing that name
data Named a = Named String a
instance Show (Named a) where
show (Named n _) = n
-- perhaps you could write an instance Read (Map String a -> Named a)
(and see also cloud haskell for a more complete working of this idea)
constructing an algebraic data type that can represent all the expressions you care about but contains only basic types that already have a Show instance and serializing that (e.g. as described in the other answer)
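As a rough sketch of the second idea (my own illustration, not part of the original answer), a Show instance like this works for functions whose domain is small enough to enumerate exhaustively:
-- show the function as its graph; only viable for small, enumerable domains
instance (Show a, Enum a, Bounded a, Show b) => Show (a -> b) where
  show f = show [ (x, f x) | x <- [minBound .. maxBound] ]
-- e.g. show not == "[(False,True),(True,False)]"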
But printing a bare function's name is in conflict with referential transparency.
Make a data type for your functions and an evaluation function:
data TaskFunction = AddTask Task | RemoveTask String
deriving (Eq, Show, Generic, ToJSON, FromJSON)
eval :: TaskFunction -> Storage -> Storage
eval (AddTask t) = addTask t
eval (RemoveTask t) = removeTask t
changes = [AddTask (Task "1" "Description" [] []), RemoveTask "1"]
main = putStrLn . show $ foldl (\s c -> c s) s (eval <$> changes)
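To tie this back to the original goal of storing the change list in a JSON file, something along these lines should work (a sketch of mine; saveChanges and loadChanges are made-up names, and the imports are the usual Aeson and lazy ByteString ones):
import Data.Aeson (encode, decode)
import qualified Data.ByteString.Lazy as BL

-- write the change list out as JSON, and read it back in
saveChanges :: FilePath -> [TaskFunction] -> IO ()
saveChanges path = BL.writeFile path . encode

loadChanges :: FilePath -> IO (Maybe [TaskFunction])
loadChanges path = decode <$> BL.readFile path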
I have a problem when I compare a Java object used as an attribute inside a Java class.
This is my .clp file:
(import Model.*)

(deftemplate PizzaBase
  (declare (from-class PizzaBase)
           (include-variables TRUE)))

(deftemplate PizzaTopping
  (declare (from-class PizzaTopping)
           (include-variables TRUE)))

(deftemplate Pizza
  (declare (from-class Pizza)
           (include-variables TRUE)))

(defrule make-pizza
  ?pizzaBase1 <- (PizzaBase {size == 9})
  (Pizza (pizzaBase ?pizzaBase1))
  =>
  (add (new PizzaBase "New DeepPan" 10))
)
According to my rule, I want to create a new PizzaBase when the pizzaBase object inside Pizza equals ?pizzaBase1 (size = 9), but Jess does not create a new fact for me.
My thinking is that Jess cannot compare the Java objects created from the class, and therefore no fact is added to Jess.
So, how do I solve this problem? I looked through the manual on the Jess website, but there is no section that addresses my problem.
Thanks!
You may have overlooked section 5.3.2, "Adding Java objects to working memory".
A Java object isn't the same as a fact, even when you derive a shadow (!) fact from a POJO using from-class and include-variables. The fact contains, in the reserved slot named OBJECT, a reference to the Java object you insert by calling (add ?aNewObject).
Change your rule like this:
(defrule make-pizza
  (PizzaBase {size == 9} (OBJECT ?pizzaBase1))
  (Pizza (pizzaBase ?pizzaBase1))
  =>
  (add (new PizzaBase "New DeepPan" 10))
)
Following up on How to make a record from a sequence of values, how can you write a defrecord constructor call that assigns the fields from a map, leaving unnamed fields nil?
(defrecord MyRecord [f1 f2 f3])
(assign-from-map MyRecord {:f1 "Huey" :f2 "Dewey"}) ; returns a new MyRecord
I imagine a macro could be written to do this.
You can simply merge the map into a record initialised with nils:
(merge (MyRecord. nil nil nil) {:f1 "Huey" :f2 "Dewey"})
Note that records are capable of holding values stored under extra keys in a map-like fashion.
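For example (an illustrative REPL interaction of mine, not from the original answer), an extra key simply ends up in the record's map part:
user> (merge (MyRecord. nil nil nil) {:f1 "Huey" :extra 42})
#user.MyRecord{:f1 "Huey", :f2 nil, :f3 nil, :extra 42}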
The list of a record's fields can be obtained using reflection:
(defn static? [field]
  (java.lang.reflect.Modifier/isStatic
    (.getModifiers field)))

(defn get-record-field-names [record]
  (->> record
       .getDeclaredFields
       (remove static?)
       (map #(.getName %))
       (remove #{"__meta" "__extmap"})))
The latter function returns a seq of strings:
user> (get-record-field-names MyRecord)
("f1" "f2" "f3")
__meta and __extmap are the fields used by Clojure records to hold metadata and to support the map functionality, respectively.
You could write something like
(defmacro empty-record [record]
  (let [klass (Class/forName (name record))
        field-count (count (get-record-field-names klass))]
    `(new ~klass ~@(repeat field-count nil))))
and use it to create empty instances of record classes like so:
user> (empty-record user.MyRecord)
#:user.MyRecord{:f1 nil, :f2 nil, :f3 nil}
The fully qualified name is essential here. It's going to work as long as the record class has been declared by the time any empty-record forms referring to it are compiled.
If empty-record was written as a function instead, one could have it expect an actual class as an argument (avoiding the "fully qualified" problem -- you could name your class in whichever way is convenient in a given context), though at the cost of doing the reflection at runtime.
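A minimal sketch of that function-based variant (my own illustration; empty-record-fn is a made-up name, and it assumes the record's positional constructor takes plain Object parameters, i.e. the fields carry no primitive type hints):
(defn empty-record-fn [klass]
  (let [n    (count (get-record-field-names klass))
        ctor (.getConstructor klass (into-array Class (repeat n Object)))]
    ;; the reflection now happens at runtime, on every call
    (.newInstance ctor (object-array n))))

user> (empty-record-fn MyRecord)
#:user.MyRecord{:f1 nil, :f2 nil, :f3 nil}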
These days Clojure generates a map->RecordType function when a record is defined.
(defrecord Person [first-name last-name])
(def p1 (map->Person {:first-name "Rich" :last-name "Hickey"}))
The map is not required to define all fields in the record definition, in which case missing keys have a nil value in the result. The map is also allowed to contain extra fields that aren't part of the record definition.
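For example (illustrative REPL output of mine, not from the original answer; :twitter is just a hypothetical extra key):
user> (map->Person {:first-name "Rich"})
#user.Person{:first-name "Rich", :last-name nil}
user> (map->Person {:first-name "Rich", :last-name "Hickey", :twitter "@richhickey"})
#user.Person{:first-name "Rich", :last-name "Hickey", :twitter "@richhickey"}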
As mentioned in the linked question responses, the code here shows how to create a defrecord2 macro to generate a constructor function that takes a map, as demonstrated here. Specifically of interest is the make-record-constructor macro.