Boolean Algebra expression won't simplify consistently? - boolean-logic

I got this question on one of my exams recently and it's been annoying me ever since.
¬¬(A.B).¬B + ¬¬¬C
on the exam I put that
A.B.¬B + ¬C = 0 + ¬C = ¬C and left it there. Some of the online calculators I put it through give this answer, but others give a different one, e.g.
¬¬(A.B).¬B + ¬¬¬C
¬¬(A.B).¬B + ¬C
¬¬A.B + ¬B + ¬C
AB + ¬B + ¬C
A + ¬B + ¬C
and some classmates got this. Both methods seem correct, so what am I missing?

If by this:
¬¬(A.B).¬B + ¬¬¬C
You understand this:
(((not not (A and B)) and (not B)) or (not not not C))
Then the simplest possible form we can get at is derived as follows:
(((not not (A and B)) and (not B)) or (not not not C))
<=> (((A and B) and (not B)) or (not C))
<=> ((A and (B and (not B))) or (not C))
<=> ((A and false) or (not C))
<=> (false or (not C))
<=> not C
In the other derivation you give, the mistake is in going from the 2nd line to the 3rd: the double negation ends up applied to A alone and the AND with ¬B becomes an OR, so the equivalence is lost and the expressions are not equivalent. To see this, simply evaluate each expression for the assignment A=true, B=false, C=true and verify that the first expression is false while the second is true:
((not not (A and B)) and (not B)) or (not C)
<=> ((not not (true and false)) and (not false)) or (not true)
<=> ((not not false) and (not false)) or (not true)
<=> (false and true) or (not true)
<=> false or false
<=> false
but
((not not A) and B) or (not B) or (not C)
<=> ((not not true) and false) or (not false) or (not true)
<=> (true and false) or (not false) or (not true)
<=> false or true or false
<=> true
If there is a rule they were trying to use in that step, they misused it.
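A quick way to settle which simplification is right is to enumerate all eight assignments. A minimal Haskell sketch (the names original, notC and faulty are mine, purely for illustration):
-- Exhaustive check over all eight assignments of A, B, C.
original, notC, faulty :: Bool -> Bool -> Bool -> Bool
original a b c = (not (not (a && b)) && not b) || not (not (not c))  -- ¬¬(A.B).¬B + ¬¬¬C
notC     _ _ c = not c                                               -- ¬C
faulty   a b c = a || not b || not c                                 -- A + ¬B + ¬C

main :: IO ()
main = do
  print $ and [ original a b c == notC   a b c | a <- bools, b <- bools, c <- bools ]  -- True
  print $ and [ original a b c == faulty a b c | a <- bools, b <- bools, c <- bools ]  -- False: they differ at A=true, B=false, C=true
  where
    bools = [False, True]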

Expression (!a && b) || (!a && c) || (b && c) with minimum gates

I want to create a logical circuit
(!a && b) || (!a && c) || (b && c)
using as few logical gates (NOT, AND, OR, NAND, NOR, XOR, NXOR) as possible. The gate types can be mixed. I have found some online calculators that can convert the above expression to NANDs only, like
(!a nand b) nand (!a nand c) nand (b nand c)
But I wonder if there is a way to do it using fewer than four gates.
Four gates and one inverter seem to be minimal:
This result was created by Logic Friday 1.
Entered:
f = (!a & b) | (!a & c) | (b & c);
Minimized:
f = a' b + a' c + b c;
It seems like it cannot be minimized more than this. It can also be written as a'b + c(a' + b), or even as a'(b + c) + bc, which is identical to a'b + a'c + bc (it has the same truth table).
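If you want to double-check that claim, a small exhaustive test (a Haskell sketch; f and g are just my names for the two forms) confirms the truth tables match:
-- a'b + a'c + bc versus a'(b + c) + bc, over all eight inputs.
f, g :: Bool -> Bool -> Bool -> Bool
f a b c = (not a && b) || (not a && c) || (b && c)
g a b c = (not a && (b || c)) || (b && c)

main :: IO ()
main = print $ and [ f a b c == g a b c | a <- bs, b <- bs, c <- bs ]  -- True
  where bs = [False, True]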

Finding inverse functions [duplicate]

In pure functional languages like Haskell, is there an algorithm to get the inverse of a function, (edit) when it is bijective? And is there a specific way to program your function so it is?
In some cases, yes! There's a beautiful paper called Bidirectionalization for Free! which discusses a few cases -- when your function is sufficiently polymorphic -- where it is possible, completely automatically, to derive an inverse function. (It also discusses what makes the problem hard when the functions are not polymorphic.)
What you get out in the case your function is invertible is the inverse (with a spurious input); in other cases, you get a function which tries to "merge" an old input value and a new output value.
No, it's not possible in general.
Proof: consider bijective functions of type
type F = [Bit] -> [Bit]
with
data Bit = B0 | B1
Assume we have an inverter inv :: F -> F such that inv f . f ≡ id. Say we have tested it for the function f = id, by confirming that
inv f (repeat B0) -> (B0 : ls)
Since this first B0 in the output must have come after some finite time, we have an upper bound n both on the depth to which inv had actually evaluated our test input to obtain this result and on the number of times it can have called f. Define now a family of functions
g j (B1 : B0 : ... (n+j times) ... B0 : ls)
   = B0 : ... (n+j times) ... B0 : B1 : ls
g j (B0 : ... (n+j times) ... B0 : B1 : ls)
   = B1 : B0 : ... (n+j times) ... B0 : ls
g j l = l
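For concreteness, here is one way the family could be rendered in actual Haskell (my own sketch; n is passed explicitly and Bit is given an Eq instance):
data Bit = B0 | B1 deriving (Eq, Show)

-- g n j swaps the prefix "B1 followed by (n+j) B0s" with the prefix
-- "(n+j) B0s followed by B1", and leaves every other list unchanged.
g :: Int -> Int -> [Bit] -> [Bit]
g n j l
  | pre == B1 : replicate (n + j) B0    = replicate (n + j) B0 ++ B1 : rest
  | pre == replicate (n + j) B0 ++ [B1] = B1 : replicate (n + j) B0 ++ rest
  | otherwise                           = l
  where
    (pre, rest) = splitAt (n + j + 1) l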
Clearly, for all 0<j≤n, g j is a bijection, in fact self-inverse. So we should be able to confirm
inv (g j) (replicate (n+j) B0 ++ B1 : repeat B0) -> (B1 : ls)
but to fulfill this, inv (g j) would have needed to either
evaluate g j (B1 : repeat B0) to a depth of n+j > n, or
evaluate head $ g j l for at least n different lists matching replicate (n+j) B0 ++ B1 : ls
Up to that point, at least one of the g j is indistinguishable from f, and since inv f hadn't done either of these evaluations, inv could not possibly have told it apart – short of doing some runtime-measurements on its own, which is only possible in the IO Monad.
⬜
You can look it up on Wikipedia; it's called Reversible Computing.
In general you can't do it, though, and none of the functional languages have that option. For example:
f :: a -> Int
f _ = 1
This function does not have an inverse.
Not in most functional languages, but in logic programming or relational programming, most functions you define are in fact not functions but "relations", and these can be used in both directions. See for example Prolog or Kanren.
Tasks like this are almost always undecidable. You can have a solution for some specific functions, but not in general.
Here, you cannot even recognize which functions have an inverse. Quoting Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam (1984):
A set of lambda-terms is nontrivial if it is neither the empty nor the full set. If A and B are two nontrivial, disjoint sets of lambda-terms closed under (beta) equality, then A and B are recursively inseparable.
Let's take A to be the set of lambda terms that represent invertible functions and B the rest. Both are non-empty and closed under beta equality. So it's not possible to decide whether a function is invertible or not.
(This applies to the untyped lambda calculus. TBH I don't know if the argument can be directly adapted to a typed lambda calculus when we know the type of a function that we want to invert. But I'm pretty sure it will be similar.)
If you can enumerate the domain of the function and can compare elements of the range for equality, you can - in a rather straightforward way. By enumerate I mean having a list of all the elements available. I'll stick to Haskell, since I don't know Ocaml (or even how to capitalise it properly ;-)
What you want to do is run through the elements of the domain and see if they're equal to the element of the range you're trying to invert, and take the first one that works:
inv :: Eq b => [a] -> (a -> b) -> (b -> a)
inv domain f b = head [ a | a <- domain, f a == b ]
Since you've stated that f is a bijection, there's bound to be one and only one such element. The trick, of course, is to ensure that your enumeration of the domain actually reaches all the elements in a finite time. If you're trying to invert a bijection from Integer to Integer, using [0,1 ..] ++ [-1,-2 ..] won't work as you'll never get to the negative numbers. Concretely, inv ([0,1 ..] ++ [-1,-2 ..]) (+1) (-3) will never yield a value.
However, 0 : concatMap (\x -> [x,-x]) [1..] will work, as this runs through the integers in the following order [0,1,-1,2,-2,3,-3, and so on]. Indeed inv (0 : concatMap (\x -> [x,-x]) [1..]) (+1) (-3) promptly returns -4!
The Control.Monad.Omega package can help you run through lists of tuples etcetera in a good way; I'm sure there's more packages like that - but I don't know them.
Of course, this approach is rather low-brow and brute-force, not to mention ugly and inefficient! So I'll end with a few remarks on the last part of your question, on how to 'write' bijections. The type system of Haskell isn't up to proving that a function is a bijection - you really want something like Agda for that - but it is willing to trust you.
(Warning: untested code follows)
So you can define a datatype of Bijections between types a and b:
data Bi a b = Bi {
    apply  :: a -> b,
    invert :: b -> a
  }
along with as many constants (where you can say 'I know they're bijections!') as you like, such as:
notBi :: Bi Bool Bool
notBi = Bi not not
add1Bi :: Bi Integer Integer
add1Bi = Bi (+1) (subtract 1)
and a couple of smart combinators, such as:
idBi :: Bi a a
idBi = Bi id id
invertBi :: Bi a b -> Bi b a
invertBi (Bi a i) = (Bi i a)
composeBi :: Bi a b -> Bi b c -> Bi a c
composeBi (Bi a1 i1) (Bi a2 i2) = Bi (a2 . a1) (i1 . i2)
mapBi :: Bi a b -> Bi [a] [b]
mapBi (Bi a i) = Bi (map a) (map i)
bruteForceBi :: Eq b => [a] -> (a -> b) -> Bi a b
bruteForceBi domain f = Bi f (inv domain f)
I think you could then do invert (mapBi add1Bi) [1,5,6] and get [0,4,5]. If you pick your combinators in a smart way, I think the number of times you'll have to write a Bi constant by hand could be quite limited.
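To make the intended usage concrete, here is a short (equally untested) sketch, assuming the Bi definitions above are in scope:
-- Map a bijection over lists, then run it both ways.
succEachBi :: Bi [Integer] [Integer]
succEachBi = mapBi add1Bi

demo :: IO ()
demo = do
  print (apply  succEachBi [1, 5, 6])                    -- [2,6,7]
  print (invert succEachBi [1, 5, 6])                    -- [0,4,5]
  print (apply (composeBi add1Bi (invertBi add1Bi)) 42)  -- 42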
After all, if you know a function is a bijection, you'll hopefully have a proof-sketch of that fact in your head, which the Curry-Howard isomorphism should be able to turn into a program :-)
I've recently been dealing with issues like this, and no, I'd say that (a) it's not difficult in many cases, but (b) it's not efficient at all.
Basically, suppose you have f :: a -> b, and that f is indeed a bijection. You can compute the inverse f' :: b -> a in a really dumb way:
import Data.List
-- | Class for types whose values are recursively enumerable.
class Enumerable a where
  -- | Produce the list of all values of type #a#.
  enumerate :: [a]

-- | Note, this is only guaranteed to terminate if #f# is a bijection!
invert :: (Enumerable a, Eq b) => (a -> b) -> b -> Maybe a
invert f b = find (\a -> f a == b) enumerate
If f is a bijection and enumerate truly produces all values of a, then you will eventually hit an a such that f a == b.
Types that have a Bounded and an Enum instance can trivially be made Enumerable. Pairs of Enumerable types can also be made Enumerable:
instance (Enumerable a, Enumerable b) => Enumerable (a, b) where
  enumerate = crossWith (,) enumerate enumerate

-- Enumerate a Cartesian product so that every pair is reached in finite time,
-- even when one or both input lists are infinite.
crossWith :: (a -> b -> c) -> [a] -> [b] -> [c]
crossWith f _ [] = []
crossWith f [] _ = []
crossWith f (x0:xs) (y0:ys) =
  f x0 y0 : interleave (map (f x0) ys)
                       (interleave (map (flip f y0) xs)
                                   (crossWith f xs ys))

-- Fair interleaving of two (possibly infinite) lists.
interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = ys
interleave (x:xs) ys = x : interleave ys xs
Same goes for disjunctions of Enumerable types:
instance (Enumerable a, Enumerable b) => Enumerable (Either a b) where
  enumerate = enumerateEither enumerate enumerate

enumerateEither :: [a] -> [b] -> [Either a b]
enumerateEither [] ys = map Right ys
enumerateEither xs [] = map Left xs
enumerateEither (x:xs) (y:ys) = Left x : Right y : enumerateEither xs ys
The fact that we can do this both for (,) and Either probably means that we can do it for any algebraic data type.
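As a sketch of the "Bounded plus Enum" remark above (these instances and examples are my own, assuming the Enumerable class and invert from earlier):
-- Any type with Bounded and Enum instances can simply enumerate itself.
instance Enumerable Bool where
  enumerate = [minBound .. maxBound]

instance Enumerable Ordering where
  enumerate = [minBound .. maxBound]

-- Example uses in GHCi:
--   invert not True                        ==> Just False
--   invert (\(x, y) -> (y, x)) (True, LT)  ==> Just (LT,True)   -- via the pair instance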
Not every function has an inverse. If you limit the discussion to one-to-one functions, the ability to invert an arbitrary function grants the ability to crack any cryptosystem. We kind of have to hope this isn't feasible, even in theory!
In some cases, it is possible to find the inverse of a bijective function by converting it into a symbolic representation. Based on this example, I wrote this Haskell program to find inverses of some simple polynomial functions:
bijective_function x = x*2+1
main = do
  print $ bijective_function 3
  print $ inverse_function bijective_function (bijective_function 3)
data Expr = X | Const Double |
            Plus Expr Expr | Subtract Expr Expr | Mult Expr Expr | Div Expr Expr |
            Negate Expr | Inverse Expr |
            Exp Expr | Log Expr | Sin Expr | Atanh Expr | Sinh Expr | Acosh Expr |
            Cosh Expr | Tan Expr | Cos Expr | Asinh Expr | Atan Expr | Acos Expr |
            Asin Expr | Abs Expr | Signum Expr | Integer
  deriving (Show, Eq)
instance Num Expr where
  (+) = Plus
  (-) = Subtract
  (*) = Mult
  abs = Abs
  signum = Signum
  negate = Negate
  fromInteger a = Const $ fromIntegral a
instance Fractional Expr where
  recip = Inverse
  fromRational a = Const $ realToFrac a
  (/) = Div
instance Floating Expr where
  pi = Const pi
  exp = Exp
  log = Log
  sin = Sin
  atanh = Atanh
  sinh = Sinh
  cosh = Cosh
  acosh = Acosh
  cos = Cos
  tan = Tan
  asin = Asin
  acos = Acos
  atan = Atan
  asinh = Asinh
fromFunction f = f X
toFunction :: Expr -> (Double -> Double)
toFunction X = \x -> x
toFunction (Negate a) = \x -> negate (toFunction a x)
toFunction (Const a) = const a
toFunction (Plus a b) = \x -> (toFunction a x) + (toFunction b x)
toFunction (Subtract a b) = \x -> (toFunction a x) - (toFunction b x)
toFunction (Mult a b) = \x -> (toFunction a x) * (toFunction b x)
toFunction (Div a b) = \x -> (toFunction a x) / (toFunction b x)
with_function func x = toFunction $ func $ fromFunction x
simplify X = X
simplify (Div (Const a) (Const b)) = Const (a/b)
simplify (Mult (Const a) (Const b)) | a == 0 || b == 0 = 0 | otherwise = Const (a*b)
simplify (Negate (Negate a)) = simplify a
simplify (Subtract a b) = simplify ( Plus (simplify a) (Negate (simplify b)) )
simplify (Div a b) | a == b = Const 1.0 | otherwise = simplify (Div (simplify a) (simplify b))
simplify (Mult a b) = simplify (Mult (simplify a) (simplify b))
simplify (Const a) = Const a
simplify (Plus (Const a) (Const b)) = Const (a+b)
simplify (Plus a (Const b)) = simplify (Plus (Const b) (simplify a))
simplify (Plus (Mult (Const a) X) (Mult (Const b) X)) = (simplify (Mult (Const (a+b)) X))
simplify (Plus (Const a) b) = simplify (Plus (simplify b) (Const a))
simplify (Plus X a) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a X) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a b) = (simplify (Plus (simplify a) (simplify b)))
simplify a = a
inverse X = X
inverse (Const a) = simplify (Const a)
inverse (Mult (Const a) (Const b)) = Const (a * b)
inverse (Mult (Const a) X) = (Div X (Const a))
inverse (Plus X (Const a)) = (Subtract X (Const a))
inverse (Negate x) = Negate (inverse x)
inverse a = inverse (simplify a)
inverse_function x = with_function inverse x
This example only works with arithmetic expressions, but it could probably be generalized to work with lists as well. There are also several implementations of computer algebra systems in Haskell that may be used to find the inverse of a bijective function.
No, not all functions even have inverses. For instance, what would the inverse of this function be?
f x = 1

Simplify "all or nothing" boolean expression

Can an "all or nothing" boolean expression be simplified? Suppose I have three values, A, B, C, and want to determine if all three are true, or all three are false. Like an XOR gate, but with N values.
Can this statement be simplified?
(A && B && C) || !(A || B || C)
All true or all false basically means that all should be the same. So if the equality comparison is acceptable you can do this:
A == B && B == C
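If you want to convince yourself the two forms agree, an exhaustive check over the eight combinations (a Haskell sketch) settles it:
-- (A && B && C) || !(A || B || C)   versus   A == B && B == C
allOrNothing, allSame :: Bool -> Bool -> Bool -> Bool
allOrNothing a b c = (a && b && c) || not (a || b || c)
allSame      a b c = a == b && b == c

main :: IO ()
main = print $ and [ allOrNothing a b c == allSame a b c | a <- bs, b <- bs, c <- bs ]  -- True
  where bs = [False, True]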

Proving if n = m and m = o, then n + m = m + o in Idris?

I am trying to improve my Idris skill by looking at some of the exercises in Software Foundations (originally for Coq, but I am hoping the translation to Idris is not too bad). I am having trouble with the "Exercise: 1 star (plus_id_exercise)" which reads:
Remove "Admitted." and fill in the proof.
Theorem plus_id_exercise : ∀ n m o : nat,
  n = m → m = o → n + m = m + o.
Proof.
  (* FILL IN HERE *) Admitted.
I have translated to the following problem in Idris:
plusIdExercise : (n : Nat) ->
                 (m : Nat) ->
                 (o : Nat) ->
                 (n == m) = True ->
                 (m == o) = True ->
                 (n + m == m + o) = True
I am trying to perform a case by case analysis and I am having a lot of issues. The first case:
plusIdExercise Z Z Z n_eq_m n_eq_o = Refl
seems to work, but then I want to say for instance:
plusIdExercise (S n) Z Z n_eq_m n_eq_o = absurd
But this doesn't work and gives:
When checking right hand side of plusIdExercise with expected type
S n + 0 == 0 + 0 = True
Type mismatch between
t -> a (Type of absurd)
and
False = True (Expected type)
Specifically:
Type mismatch between
\uv => t -> uv
and
(=) FalseUnification failure
I am trying to say this case can never happen because n == m, but Z (= m) is never the successor of any number (n). Is there anything I can do to fix this? Am I approaching this correctly? I am somewhat confused.
I would argue that the translation is not entirely correct. The lemma stated in Coq does not use boolean equality on natural numbers, it uses the so-called propositional equality. In Coq you can ask the system to give you more information about things:
Coq < About "=".
eq : forall A : Type, A -> A -> Prop
The above means that = (it is syntactic sugar for the eq type) takes two arguments of some type A and produces a proposition, not a boolean value.
That means that a direct translation would be the following snippet
plusIdExercise : (n = m) -> (m = o) -> (n + m = m + o)
plusIdExercise Refl Refl = Refl
And when you pattern-match on values of the equality type, Idris essentially rewrites terms according to the corresponding equation (it's roughly equivalent to Coq's rewrite tactic).
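For readers more at home in Haskell, a rough analogue of the same idea (my sketch, using GHC's Data.Type.Equality rather than Idris) shows the mechanism: matching on Refl is what lets the compiler rewrite n to m and m to o in the result type.
{-# LANGUAGE DataKinds, GADTs, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))
import GHC.TypeNats  -- brings the type-level (+) into scope

plusIdExercise :: n :~: m -> m :~: o -> (n + m) :~: (m + o)
plusIdExercise Refl Refl = Refl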
By the way, you might find the Software Foundations in Idris project useful.

Boolean negation

One of my exam questions reads:
! ( ! ( a != b) && ( b > 7 ) )
The choices:
a) (a != b) || (b < 7)
b) (a != b) || (b <= 7)
c) (a == b) || (b <= 7)
d) (a != b) && (b <= 7)
e) (a == b) && (b > 7)
Initially, I thought it would be D. This is incorrect, and I realize why. I don't understand how the logical negation operator reverses && and greater than/less than. I believe I have narrowed it down to the first two. Is there any instance > would change to <= ?
Is there any instance > would change to <= ?
Answer: every time you negate it.
Consider x > 1. The negation of this is clearly x <= 1. If you simply negate it as x < 1 then neither expression covers the x == 1 case.
That being said, the given boolean expression ! ( ! ( a != b) && ( b > 7 ) ) can be decomposed as follows:
Given:
! ( !(a != b) && (b > 7))
Negate a != b:
! ((a == b) && (b > 7))
Distribute the !:
!(a == b) || !(b > 7)
Negate a==b:
(a != b) || !(b > 7)
Negate b>7:
(a != b) || (b <= 7)
The answer is, therefore, B.
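You can also sanity-check the result mechanically by comparing the two sides over a small range of values (a Haskell sketch; the ranges are arbitrary):
-- !(!(a != b) && (b > 7))   versus choice B:   (a != b) || (b <= 7)
lhs, choiceB :: Int -> Int -> Bool
lhs     a b = not (not (a /= b) && (b > 7))
choiceB a b = (a /= b) || (b <= 7)

main :: IO ()
main = print $ and [ lhs a b == choiceB a b | a <- [0 .. 15], b <- [0 .. 15] ]  -- True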
The answer should be B. This is because the negation next to the (a != b) is evaluated first, then you distribute the outside negation to the entire proposition.
Using De Morgan's Laws, the && will switch to ||; similarly, the == switches back to !=, and > becomes <=.
!(!(a != b) && (b > 7))
!((a == b) && (b > 7))
(a != b) || (b <= 7)
! ( ! ( a != b) && ( b > 7 ) )
= ! ( (a == b) && (b > 7))
= (a != b) || (b <= 7)
The answer is B.
To understand this:
! ( ! ( a != b) && ( b > 7 ) )
Let's break it into parts.
Part dummy: (a != b)
Part X: !dummy
Part Y: (b > 7)
Now !X is the double negation of dummy => dummy => (a != b)
!Y = !(b > 7) => b should not be greater than 7 => b should be less than or equal to 7 => (b <= 7)
The remaining question is how && becomes ||.
The original expression is !( X && Y ): it must not be the case that X and Y both hold, so at least one of them must be false, i.e. !X || !Y. If X is replaced by !X, the condition (X and Y) becomes false, hence !(X and Y) becomes true and the original condition is satisfied; similarly for Y.
First apply the inner bracket Logical NOT (!):
!(!(a != b) && (b > 7)) becomes !((a == b) && (b > 7))
With De Morgan's Law we reverse each operator to its counterpart.
We change > to <= because the negation of > must cover 7 itself and everything less, hence <= is the only operator that satisfies that condition.
Now the outer !:
Looking at the truth tables, you'll notice that Logical AND (&&) and Logical OR (||) give opposite results when their two operands differ (i.e. true/false or false/true), hence when we apply the ! we replace the && with ||. Finally we need to switch the == back to != again.
Altogether, this produces
((a != b) || (b <= 7))