I have read about SAT and SMT, and I have always wondered how they can be applied in a real programming scenario.
Here is an example:
Given that var a = 20; var b = a; hold, we want to know whether b = 20 is true or false.
How should I turn this into boolean algebra expressions and apply SAT?
Here is the simplest example, using the SMT-LIB 2.0 standard supported by many SMT solvers:
(declare-fun a () Int)   ; declare an integer constant a
(declare-fun b () Int)   ; declare an integer constant b
(assert (= a b))         ; encode var b = a
(assert (= a 20))        ; encode var a = 20
(assert (= b 20))        ; the property we want to check: b = 20
(check-sat)              ; ask whether all assertions can hold at once
You can use http://rise4fun.com/z3 to experiment with it. It will respond "sat", meaning the assertions can be satisfied simultaneously, i.e. b can be 20.
Then you can replace (assert (= b 20)) with (assert (distinct b 20)). Z3 will respond "unsat", meaning it is not possible for b to be anything other than 20.
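For reference, the modified script with that one assertion swapped out looks like this:

(declare-fun a () Int)
(declare-fun b () Int)
(assert (= a b))
(assert (= a 20))
(assert (distinct b 20)) ; can b be anything other than 20?
(check-sat)              ; Z3 answers unsat: it cannot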
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (exists ((a0 Int) (b0 Int) (c0 Int))
  (and (<= a 3)
       (>= a 0)
       (<= b 3)
       (>= b 0)
       (<= c 3)
       (>= c 0)
       (= 3 (+ a b c))
       (> a 1)
       (= a0 a)
       (= b0 b)
       (= c0 c)
       (= a0 2)
       (= b0 0)
       (= c0 1))))
(apply (then qe ctx-solver-simplify propagate-ineqs))
I am trying to generate this code using the provided Java API for the Z3 solver. However, it ends up throwing the following exception:
Z3 Managed Exception: propagate-ineqs does not support unsat core production
Goal g = ctx.mkGoal(true, true, false); // models, unsat cores, no proofs
g.add((BoolExpr) exp);                  // exp holds the formula shown above
Tactic qe = ctx.mkTactic("qe");
Tactic simplify = ctx.mkTactic("simplify");
Tactic ctxSimplify = ctx.mkTactic("ctx-simplify");
Tactic ctxSolverSimplify = ctx.mkTactic("ctx-solver-simplify");
Tactic propagateIneqs = ctx.mkTactic("propagate-ineqs");
Tactic then = ctx.then(qe, simplify, ctxSimplify, ctxSolverSimplify,
        propagateIneqs);
ApplyResult ar = then.apply(g);
BoolExpr result = ctx.mkAnd(ar.getSubgoals()[0].getFormulas());
Kindly let me know where I am going wrong and how I can use the propagate-ineqs tactic via the Java API, so that I get answers in which trivial inequalities are simplified.
The command
Goal g = ctx.mkGoal(true, true, false);
asks for unsat-core extraction to be enabled for the goal g (that is the second true), but the propagate-ineqs tactic does not support that feature, so it throws an exception.
You could try explicitly turning off the unsat-core feature when constructing your Context ctx. You can pass a Map of options and their values; the option for unsat cores is "unsat_core" and can be set to "true" or "false".
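A minimal, untested sketch of both fixes (assuming the usual com.microsoft.z3 Java bindings):

import java.util.HashMap;
import java.util.Map;
import com.microsoft.z3.Context;
import com.microsoft.z3.Goal;

// Variant 1: disable unsat cores when constructing the Context.
Map<String, String> cfg = new HashMap<String, String>();
cfg.put("unsat_core", "false"); // option name as given above
Context ctx = new Context(cfg);

// Variant 2: simply do not request unsat cores for this particular goal;
// the second argument of mkGoal is the unsat-core flag.
Goal g = ctx.mkGoal(true, false, false);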
I'm trying to report the inference steps in JESS. For example, I would like to know which rules/facts caused the inference engine to fire a certain rule. In other words, I want to see the theorem-proving capabilities of JESS.
Here is an example from Wikipedia:
(defrule A ""
  (and (X croaks) (X eats flies))
  =>
  (assert (X is a frog)))

(defrule B ""
  (and (X chirps) (X sings))
  =>
  (assert (X is a canary)))

(defrule C ""
  (X is a frog)
  =>
  (assert (X is green)))

(defrule D ""
  (X is a canary)
  =>
  (assert (X is yellow)))
If I have the following:
(assert (X croaks))
(assert (X eats flies))
Then when I enter (run), rule C is fired. It seems like it's fired because of
(X is a frog)
but it is actually fired because of
(and (X croaks) (X eats flies))
I am not sure if I'm being clear, but I wonder if there is any way to show why a certain rule is fired, together with the complete inference process.
You'll have to write some Java code implementing jess.JessListener. You attach an object of this class to the Rete object using Rete.addJessListener(jess.JessListener). The event you are interested in is JessEvent.DEFRULE_FIRED; it will contain a reference to the activation object, and the rule is available from that.
See the Javadoc for JessListener for some Java code.
You can attach the listener from CLP code, prior to (run).
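For example, a rough sketch of such a listener (method names taken from the JessListener/Activation javadoc mentioned above; double-check them against your Jess version):

import jess.*;

public class FiringTracer implements JessListener {
    public void eventHappened(JessEvent je) throws JessException {
        if (je.getType() == JessEvent.DEFRULE_FIRED) {
            // For DEFRULE_FIRED events, the payload is the Activation.
            Activation a = (Activation) je.getObject();
            System.out.println("Fired: " + a.getRule().getName());
            System.out.println("Matched facts: " + a.getToken());
        }
    }
}

// Attach it to the engine before (run); depending on your Jess version you
// may also need to enable the event in the engine's event mask:
//   engine.addJessListener(new FiringTracer());
//   engine.setEventMask(engine.getEventMask() | JessEvent.DEFRULE_FIRED);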
OK, I am aware that enough questions about monads have already been asked; I am not trying to bother anyone by asking what a monad is yet again.
Actually, I read What is a monad?, and it is very helpful. I feel I am very close to really understanding it.
I am creating this question just to describe some of my thoughts on monads and functions, in the hope that someone can correct me or confirm them.
Some answers in that post make me feel that a monad is a little bit like a function.
A monad takes a type and returns a wrapper type (return); it can also take a type, do some operations on it, and return a wrapper type (bind).
From my point of view, that is a little bit like a function: a function takes something, does some operations, and returns something.
Then why do we even need monads? I think one of the key reasons is that a monad provides a better way, or pattern, for sequencing operations on the initial data/type.
For example, say we have an initial integer i. In our code, we need to apply 10 functions f1, f2, f3, f4, ..., f10 step by step; i.e., we apply f1 to i first, get a result, then apply f2 to that result, get a new result, then apply f3, and so on.
We can do this with raw function application, like f1 i |> f2 |> f3 .... However, the intermediate results along the way are not consistent; also, if we have to handle a possible failure somewhere in the middle, things get ugly, and an Option has to be constructed anyway if we don't want the whole process to fail on exceptions. So, naturally, the monad comes in.
The monad unifies and enforces the return types of all the steps. This greatly simplifies the logic and readability of the code (which is also the purpose of design patterns, isn't it?), and it is more bulletproof against errors and bugs. For example, the Option monad forces every intermediate result to be an Option, and it makes it very easy to implement the fail-fast paradigm.
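For instance, here is a small Haskell sketch of that fail-fast pipeline (safeDiv and pipeline are made-up names, purely for illustration):

-- In the Maybe monad, the first step that produces Nothing
-- short-circuits the rest of the chain automatically.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

pipeline :: Int -> Maybe Int
pipeline i = Just i >>= safeDiv 100 >>= safeDiv 1000

-- pipeline 5 == Just 50   (100 `div` 5 = 20, then 1000 `div` 20 = 50)
-- pipeline 0 == Nothing   (fails at the first step; the second never runs)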
Like many posts about monads have described, a monad is a design pattern and a better way to combine functions/steps to build up a process.
Am I understanding it correctly?
It sounds to me like you're discovering the limits of learning by analogy. Monad is precisely defined, both as a type class in Haskell and as an algebraic structure in category theory; any comparison using "... like ..." is going to be imprecise and therefore wrong.
So no, Haskell's monads aren't like functions: they are 1) implemented as type classes, and 2) intended to be used differently from functions.
This answer is probably unsatisfying; are you looking for intuition? If so, I'd suggest working through lots of examples, and especially reading through LYAH (Learn You a Haskell). It's very difficult to get an intuitive understanding of abstract things like monads without a solid base of examples and experience to fall back on.
Why do we even need monads? This is a good question, and maybe there's more than one question here:
Why do we even need the Monad type class? For the same reason that we need any type class.
Why do we even need the monad concept? Because it's useful. Also, it's not a function, so it can't be replaced by a function. (Your example seems like it does not actually require a Monad; an Applicative would do.)
For example, you can implement context-free parser combinators using just the Applicative type class. But try implementing, without Monad, a parser for the language consisting of the same string of symbols twice (separated by a space), i.e.:
a a -> yes
a b -> no
ab ab -> yes
ab ba -> no
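For reference, here is one way to write that parser in monadic style (a sketch using ReadP from base; the essential step is that string w depends on the earlier result w, which Applicative alone cannot express):

import Text.ParserCombinators.ReadP

twice :: ReadP String
twice = do
  w <- munch1 (/= ' ')   -- parse the first word
  _ <- char ' '
  _ <- string w          -- demand the very same word again, using the result w
  return w

parses :: String -> Bool
parses s = any (null . snd) (readP_to_S twice s)

-- parses "a a"   == True
-- parses "a b"   == False
-- parses "ab ab" == True
-- parses "ab ba" == False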
So that's one thing a monad provides: the ability to use previous results to "decide" what to do. Here's another example:
f :: Monad m => m Int -> m [Char]
f m =
  m >>= \x ->
    if x > 2
      then return (replicate x 'a')
      else return []

f (Just 1)   -->> Just ""
f (Just 3)   -->> Just "aaa"
f [1,2,3,4]  -->> ["", "", "aaa", "aaaa"]
Monads (and Functors, and Applicative Functors) can be seen as being about "generalized function application": they all create functions of type f a ⟶ f b, where not only the "values inside a context", of types a and b, are involved, but also the "context" itself, represented by f (and it is the same type of context throughout).
So "normal" function application involves functions of type a ⟶ b, while "generalized" function application is with functions of type f a ⟶ f b. Such functions can also be composed under normal function composition, thanks to the more uniform type structure: f a ⟶ f b ; f b ⟶ f c ==> f a ⟶ f c.
Each of the three creates such functions in a different way, though, starting from different things:
Functors: fmap :: Functor f => (a ⟶ b) ⟶ (f a ⟶ f b)
Applicative Functors: (<*>) :: Applicative f => f (a ⟶ b) ⟶ (f a ⟶ f b)
Monadic Functors: (=<<) :: Monad f => (a ⟶ f b) ⟶ (f a ⟶ f b)
In practice, the difference is in how we get to use the resulting value-in-context, seen as denoting some type of computation.
Writing in generalized do notation,

Functors:             do { x <- a ;            return (g x)   }    g <$> a          -- fmap
Applicative Functors: do { x <- a ; y <- b   ; return (g x y) }    g <$> a <*> b
                                                          -- or: (\ x -> g x <$> b ) =<< a
Monadic Functors:     do { x <- a ; y <- k x ; return (g x y) }    (\ x -> g x <$> k x) =<< a
And their types,
"liftF1" :: (Functor f) => (a ⟶ b) ⟶ f a ⟶ f b -- fmap
liftA2 :: (Applicative f) => (a ⟶ b ⟶ c) ⟶ f a ⟶ f b ⟶ f c
"liftBind" :: (Monad f) => (a ⟶ b ⟶ c) ⟶ f a ⟶ (a ⟶ f b) ⟶ f c
I want to prove the correctness of a function definition made with the function keyword. Here is the definition of an addition function on the usual inductive definition of natural numbers:
theory FunctionDefinition
imports Main
begin
datatype natural = Zero | Succ natural
function add :: "natural => natural => natural"
where
"add Zero m = m"
| "add (Succ n) m = Succ (add n m)"
Isabelle/jEdit shows me the following subgoals:
goal (4 subgoals):
1. ⋀P x. (⋀m. x = (Zero, m) ⟹ P) ⟹ (⋀n m. x = (Succ n, m) ⟹ P) ⟹ P
2. ⋀m ma. (Zero, m) = (Zero, ma) ⟹ m = ma
3. ⋀m n ma. (Zero, m) = (Succ n, ma) ⟹ m = Succ (add_sumC (n, ma))
4. ⋀n m na ma. (Succ n, m) = (Succ na, ma) ⟹ Succ (add_sumC (n, m)) = Succ (add_sumC (na, ma))
Auto solve_direct: ⋀m ma. (Zero, m) = (Zero, ma) ⟹ m = ma can be solved directly with
Product_Type.Pair_inject: (?a, ?b) = (?a', ?b') ⟹ (?a = ?a' ⟹ ?b = ?b' ⟹ ?R) ⟹ ?R
using
apply (auto simp add: Product_Type.Pair_inject)
I get
goal (1 subgoal):
1. ⋀P a b. (⋀m. a = Zero ∧ b = m ⟹ P) ⟹ (⋀n m. a = Succ n ∧ b = m ⟹ P) ⟹ P
It is not clear to me how to proceed. Is this the right way to tackle the problem at all?
I know that Isabelle would do this automatically if I used a fun definition; I want to learn how to do it manually.
The tutorial on the function package mentions in section 3 that fun f where ... abbreviates
function (sequential) f where ...
by pat_completeness auto
termination by lexicographic_order
Here pat_completeness is a proof method from the function package that automates proof of completeness for patterns of datatype constructors. This is the first subgoal that you have to prove. It is recommended to apply pat_completeness before auto, because auto changes the syntactic structure of the goal and pat_completeness might not work after auto.
If you want to prove pattern completeness without pat_completeness, you should try to do case analysis of all function parameters, i.e., case_tac a in your example.
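For instance, continuing from the goal you reached above, something along the following lines should work (an untested sketch; the exact goal shape may differ):

apply (case_tac a)
apply auto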
Manuel already mentioned it in his comment, but I thought a more detailed example might be helpful anyway. Here is what you can do manually:
First you specify your function as usual
function add :: "natural => natural => natural"
where
"add Zero m = m" |
"add (Succ n) m = Succ (add n m)"
and then you prove that the given patterns cover all cases by
by (pat_completeness) auto
Afterwards you take care of termination. For example, every datatype comes with a size function, and you may note that the first argument of add strictly decreases in size at every recursive call. By default, function bundles all arguments of a function into a tuple for the termination proof; i.e., instead of two arguments n and m, the termination proof works with the single pair (n, m). Thus, if you want to tell the system to use the size of the first argument, you can do so as follows:
termination add
apply (relation "measure (size o fst)")
This will yield the remaining goals:
goal (2 subgoals):
1. wf (measure (size o fst))
2. ⋀n m. ((n, m), Succ n, m) ∈ measure (size o fst)
That is, you have to show that the given relation is well-founded and that the arguments of every recursive call are contained in it. Well-foundedness is trivial for measures: a measure maps the arguments to natural numbers and compares them by less-than on the naturals, which is always well-founded. Both goals are easily dispatched by simp.
apply simp
apply simp
done
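For reference, here is the complete manual version, assembled from the pieces above:

function add :: "natural => natural => natural"
where
  "add Zero m = m"
| "add (Succ n) m = Succ (add n m)"
by pat_completeness auto

termination add
  apply (relation "measure (size o fst)")
  apply simp
  apply simp
  done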
I'm supposed to define the function n! (n factorial). The thing is, I don't know how to do it.
Here is what I have so far; can someone please help me with this? I don't understand conditionals in Racket, so an explanation would be great!
(define fact (lambda (n) (if (> n 0)) (* n < n)))
You'll have to take a good look at the documentation first. This is a very simple example, but you have to understand the basics before attempting a solution, and make sure you know how to write a recursive procedure. Some comments:
(define fact
  (lambda (n)
    (if (> n 0)
        ; a conditional must have two parts:
        ; where is the consequent? here goes the advance of the recursion
        ; where is the alternative? here goes the base case of the recursion
        )
    (* n < n))) ; this line is outside the conditional, and it's just wrong
Notice that the last expression is incorrect; I don't know what it's supposed to do, but as it is, it'll raise an error. Delete it, and concentrate on writing the body of the conditional.
The trick with Scheme (or Lisp) is to understand each little bit between each set of brackets as you build them up into more complex forms.
So let's start with the conditionals. if takes three arguments: it evaluates the first, and if that's true it returns the value of the second; if the first argument is false, it returns the value of the third.
(if #t "some value" "some other value") ; => "some value"
(if #f "some value" "some other value") ; => "some other value"
(if (<= 1 0) "done" "go again") ; => "go again"
cond would work too, as sketched below; you can read the Racket introduction to conditionals here: http://docs.racket-lang.org/guide/syntax-overview.html#%28part._.Conditionals_with_if__and__or__and_cond%29
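For instance, the last if above could be written with cond like this (just a sketch of the syntax):

(cond [(<= 1 0) "done"]      ; first clause: a test and its result
      [else "go again"])     ; fallback clause
; => "go again"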
You can define functions in two different ways. You're using the anonymous function approach, which is fine, but you don't need a lambda in this case, so the simpler syntax is:
(define (function-name arguments) result)
For example:
(define (previous-number n)
  (- n 1))
(previous-number 3) ; => 2
Using a lambda like you have achieves the same thing, using different syntax (don't worry about any other differences for now):
(define previous-number* (lambda (n) (- n 1)))
(previous-number* 3) ; => 2
By the way, that '*' is just another character in that name, nothing special (see http://docs.racket-lang.org/guide/syntax-overview.html#%28part._.Identifiers%29). A '!' at the end of a function name often means that the function has side effects, but n! is a fine name for your function in this case.
So let's go back to your original question and put the function definition and the conditional together. We'll use the "recurrence relation" from the wiki definition, because it makes for a nice recursive function: if n is less than 1, then the factorial is 1; otherwise, the factorial is n times the factorial of one less than n. In code, that looks like:
(define (n! n)
  (if (<= n 1) ; If n is less than 1,
      1        ; then the factorial is 1
      (* n (n! (- n 1))))) ; Otherwise, the factorial is n times the factorial of one less than n.
That last clause is a little denser than I'd like, so let's just work through it for n = 2:
(define n 2)
(* n (n! (- n 1)))
; =>
(* 2 (n! (- 2 1)))
(* 2 (n! 1))
(* 2 1)
2
Also, if you're using Racket, it's really easy to confirm that it's working as we expect:
(check-expect (n! 0) 1)
(check-expect (n! 1) 1)
(check-expect (n! 20) 2432902008176640000)