In the image above, the circuit is a Sum of Products
(B’+D’) (A+D) (A+C)
The image below is my attempt at using NAND and NOT gates only. However, my gut is telling me that I am doing it wrong. Please help!
The first circuit actually implements AD + AC + B'D', which is this circuit:
Using De Morgan's laws, this is equivalent to ((AD)'(AC)'(B'D')')', which is this circuit:
This Python program can be used to compare the circuits:
import itertools

# Create all the possible input combinations
values = (True, False)
comb = set(itertools.product(values, repeat=4))

# AD + AC + B'D'
def c1(a, b, c, d):
    return (a and d) or (a and c) or ((not b) and (not d))

# ((AD)'(AC)'(B'D')')'
def c2(a, b, c, d):
    return not ((not (a and d)) and (not (a and c)) and (not ((not b) and (not d))))

# For each input combination, verify that both circuits give the same result.
# No output means the two circuits agree on every input.
for x in comb:
    r1 = c1(*x)
    r2 = c2(*x)
    if r1 != r2:
        print("Error: Input %s produced %s != %s" % (x, r1, r2))
You can replace every AND gate with a NAND gate and simply negate its output. I can see that you negated the inputs instead, which is wrong because:
(ab)' != a'b'
As an example, think about the signals (a, b) = (1, 0). If you negate them and then calculate the AND, you get 0. If you first calculate the AND and then negate the output, you get 1.
About the OR gate:
a + b + c -> ((a + b + c)')' -> (a'b'c')'
So an OR gate is a NAND gate with all of its inputs negated.
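To double-check both identities, here is a quick brute-force sketch in Haskell (the helper names are made up for illustration):

-- (a AND b)' is not the same as a' AND b': this check comes out False,
-- because the two sides differ e.g. at (a, b) = (1, 0)
negatedInputsEqualNand :: Bool
negatedInputsEqualNand =
  and [ (not a && not b) == not (a && b) | a <- [False, True], b <- [False, True] ]

-- a + b + c == (a'b'c')': this check comes out True for every input
orIsNandOfNegatedInputs :: Bool
orIsNandOfNegatedInputs =
  and [ (a || b || c) == not (not a && not b && not c)
      | a <- [False, True], b <- [False, True], c <- [False, True] ]

main :: IO ()
main = print (negatedInputsEqualNand, orIsNandOfNegatedInputs)  -- prints (False,True)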
I want to write a function that takes a mathematical function (/, *, +, -), a number to start with, and a list of numbers. It is then supposed to give back a list.
The first element is the starting number, the second element is the starting number plus/minus/times/divided by the first number of the given list. The third element is the previous result plus/minus/times/divided by the second element of the given list, and so on.
I've gotten everything to work if I tell the code which function to use but if I want to let the user input the mathematical function he wants, there are problems with the types. Trying :t (/) for example gives out Fractional a => a -> a -> a, but if you put that at the start of your types, it fails.
Is there a specific type to distinguish these functions (/, *, +, -)? Or is there another way to write this function successfully?
prefix :: (Fractional a, Num a) => a -> a -> a -> a -> [a] -> [a]
prefix (f) a b = [a] ++ prefix' (f) a b
prefix' :: (Fractional a, Num a) => a -> a -> a -> a -> [a] -> [a]
prefix' (z) x [] = []
prefix' (z) x y = [x z (head y)] ++ prefix' (z) (head (prefix' (z) x y)) (tail y)
A correct result would look something like this:
prefix (-) 0 [1..5]
[0,-1,-3,-6,-10,-15]
Is there a specific type to distinguish these functions (/,*,+,-)?
I don't see a reason to do this. Why would \x y -> x + y be considered "better" than \x y -> x + y + 1? Sure, adding two numbers is something that most will consider more "pure", but it is strange to restrict yourself to a specific subset of functions. It is also possible that some function \x y -> f x y - 1 "happens" to be equal to (+), even though the compiler cannot determine that.
The type checker will already make sure that one cannot pass a function that operates on numbers when the list contains strings, etc. Deliberately restricting this further is not very useful. Why would you prevent programmers from using your function for other purposes?
Or is there another way to write this function successfully?
What you describe here is the scanl :: (b -> a -> b) -> b -> [a] -> [b] function. If we call scanl as scanl f z [x1, x2, ..., xn], we obtain the list [z, f z x1, f (f z x1) x2, ...]. scanl can be defined as:
scanl :: (b -> a -> b) -> b -> [a] -> [b]
scanl f = go
  where go z []     = [z]
        go z (x:xs) = z : go (f z x) xs
We thus first emit the accumulator (which starts out as the initial value), then "update" the accumulator to f z x, with z the old accumulator and x the head of the list, and recurse on the tail of the list.
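For the example from the question this gives exactly the desired output (a GHCi session; the prompt is only illustrative):

ghci> scanl (-) 0 [1..5]
[0,-1,-3,-6,-10,-15]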
If you want to restrict to these four operations, just define the type yourself:
data ArithOp = Plus | Minus | Times | Div
as_fun Plus = (+)
as_fun Minus = (-)
as_fun Times = (*)
as_fun Div = (/)
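Combining this with scanl then gives a version of the prefix function from the question (a sketch; the Fractional constraint is forced by (/), and the call now takes Minus instead of (-)):

prefix :: Fractional a => ArithOp -> a -> [a] -> [a]
prefix op = scanl (as_fun op)

-- prefix Minus 0 [1..5]  ==  [0,-1,-3,-6,-10,-15]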
I am trying to override the + symbol in an effort to learn how to define my own types. I am very new to Haskell, and I cannot seem to get past this error.
Here is my simple new type:
newtype Matrix x = Matrix x
(+):: (Num a, Num b, Num c) => Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
x + y = Matrix zipWith (\ a b -> zipWith (+) a b) x y
When I try to load this into ghci, I get the error
linear_algebra.hs:9:42:
Ambiguous occurrence ‘+’
It could refer to either ‘Main.+’, defined at linear_algebra.hs:9:3
or ‘Prelude.+’,
imported from ‘Prelude’ at linear_algebra.hs:1:1
(and originally defined in ‘GHC.Num’)
Failed, modules loaded: none.
Replacing my last line of code with
x + y = Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
gives me the error
Couldn't match expected type ‘([Integer] -> [Integer] -> [Integer])
-> Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]’
with actual type ‘Matrix
((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])’
Relevant bindings include
y :: Matrix [[b]] (bound at linear_algebra.hs:9:5)
x :: Matrix [[a]] (bound at linear_algebra.hs:9:1)
(+) :: Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
(bound at linear_algebra.hs:9:1)
The function ‘Matrix’ is applied to four arguments,
but its type ‘((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])
-> Matrix ((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])’
has only one
In the expression:
Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
In an equation for ‘+’:
x + y = Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
Failed, modules loaded: none.
Can you please help me understand what the error is? I would really appreciate it. Thanks!
First of all, the type
(+) :: (Num a, Num b, Num c) => Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
is too general. It states that you can add any numeric matrix to any other numeric matrix, even if the element types are different, and produce a matrix of a third numeric type (potentially distinct from the first two). In particular, a matrix of Floats could be added to a matrix of Doubles to produce a matrix of Ints.
You want instead
(+) :: Num a => Matrix [[a]] -> Matrix [[a]] -> Matrix [[a]]
I would recommend moving the "list of lists" type inside the newtype
newtype Matrix a = Matrix [[a]]
reflecting that the list of lists implements the Matrix concept. That gives the type signature
(+) :: Num a => Matrix a -> Matrix a -> Matrix a
To "override" (+): there's no overriding/overloading in Haskell. The closest options are:
define a module-local function (+). This will clash with Prelude.(+), so every use of + will need to be qualified to remove the ambiguity (unless you hide the Prelude version with import Prelude hiding ((+))). We cannot write x + y; we have to write x Prelude.+ y or x MyModuleName.+ y.
implement a Num instance for Matrix a. This is not a great idea since a matrix is not exactly a number, but we can try anyway.
instance Num a => Num (Matrix a) where
  Matrix xs + Matrix ys = Matrix (zipWith (zipWith (+)) xs ys)
  -- other Num operators here
  (*) = error "not implemented"   -- we can't match dimensions
  negate (Matrix xs) = Matrix (map (map negate) xs)
  abs = error "not implemented"
  signum = error "not implemented"
  fromInteger = error "not implemented"
This is very similar to your code, which was mostly missing some parentheses (and pattern matches to unwrap the Matrix constructor). Not all of the other methods can be implemented in a completely meaningful way, since Num is for numbers, not matrices.
use a different operator, e.g. (^+) or whatever, as sketched below
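A minimal sketch of that last option, reusing the Matrix newtype from above (the operator name (^+) is just the example from the bullet; Show is derived only so results print in ghci):

newtype Matrix a = Matrix [[a]] deriving Show   -- same newtype as above

(^+) :: Num a => Matrix a -> Matrix a -> Matrix a
Matrix xs ^+ Matrix ys = Matrix (zipWith (zipWith (+)) xs ys)

-- ghci> Matrix [[1,2],[3,4]] ^+ Matrix [[10,20],[30,40]]
-- Matrix [[11,22],[33,44]]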
I am learning Haskell. I am trying to make a function that deletes integers from a list when they meet the condition of a certain function f.
deleteif :: [Int] -> (Int -> Bool) -> [Int]
deleteif x f = if x == []
then []
else if head x == f
then deleteif((tail x) f)
else [head x] ++ deleteif((tail x) f)
I get the following errors:
function tail is applied to two arguments
'deleteif' is applied to too few arguments
The issue is that you don't use parentheses to call a function in Haskell. So you just need to use
if f (head x)
then deleteif (tail x) f
else [head x] ++ deleteif (tail x) f
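Putting it together, the whole definition from the question becomes (a sketch that keeps the original if/then/else style):

deleteif :: [Int] -> (Int -> Bool) -> [Int]
deleteif x f =
  if x == []
    then []
    else if f (head x)
           then deleteif (tail x) f
           else [head x] ++ deleteif (tail x) f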
The problem is in deleteif((tail x) f): it is parsed as deleteif (tail x f), so tail gets 2 arguments and deleteif then gets only 1 argument. You want deleteif (tail x) f.
head x == f is also wrong; you want f (head x).
You can use pattern matching and guards, and make it more generic:
deleteif :: [a] -> (a -> Bool) -> [a]
deleteif [] _ = []
deleteif (x:xs) f
  | f x       = deleteif xs f
  | otherwise = x : deleteif xs f
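For example (an illustrative GHCi call):

ghci> deleteif [1,2,3,4,5] even
[1,3,5]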
As already said, deleteif((tail x) f) is parsed as deleteif (tail x f), which means tail is applied to the two arguments x and f, and the result would then be passed on as the single argument to deleteif. What you want is deleteif (tail x) f, which is equivalent to (deleteif (tail x)) f and what most languages¹ would write deleteif(tail x, f).
This parsing order may seem confusing initially, but it turns out to be really useful in practice. The general name for the technique is Currying.
For one thing, it allows you to write dense statements without needing many parentheses – in fact deleteif (tail x f) could also be written deleteif $ tail x f.
More importantly, because the arguments don't need to be "encased" in a single tuple, you don't need to supply them all at once but automatically get partial application when you apply to only one argument. For instance, you could call this function as deleteif [1,3,7,5,2,9,7] (>4) to yield [1,3,2]. This works by partially applying the operator² > to 4, leaving a single-argument predicate which is then used to filter the list.
¹Indeed, this style is possible in Haskell as well: just write the signatures of such multi-argument functions as deleteif :: ([Int], Int -> Bool) -> [Int]. Or write uncurry deleteif (tail x, f). But it's definitely better to get used to the curried style!
²Actually, > is an infix operator, which behaves a bit differently – you can partially apply it to either side, i.e. you can also write deleteif [1,3,7,5,2,9,7] (4>) to get [7,5,9,7].
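A small sketch of both footnotes, assuming the deleteif defined above (the names deleteifUncurried and dropBig are made up for illustration):

-- footnote 1: the uncurried style, with both arguments packed into a tuple
deleteifUncurried :: ([Int], Int -> Bool) -> [Int]
deleteifUncurried = uncurry deleteif

-- partial application: fix the predicate first, leave the list argument open
dropBig :: [Int] -> [Int]
dropBig = flip deleteif (>4)

-- deleteifUncurried ([1,3,7,5,2,9,7], (>4))  ==  [1,3,2]
-- dropBig [1,3,7,5,2,9,7]                    ==  [1,3,2]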
I found this on Stack Overflow: reversible "binary to number" predicate.
But I don't understand it.
:- use_module(library(clpfd)).

binary_number(Bs0, N) :-
    reverse(Bs0, Bs),
    binary_number(Bs, 0, 0, N).

binary_number([], _, N, N).
binary_number([B|Bs], I0, N0, N) :-
    B in 0..1,
    N1 #= N0 + (2^I0)*B,
    I1 #= I0 + 1,
    binary_number(Bs, I1, N1, N).
Example queries:
?- binary_number([1,0,1], N).
N = 5.
?- binary_number(Bs, 5).
Bs = [1, 0, 1] .
Could somebody explain the code to me?
Especially this: binary_number([], _, N, N). (the _)
Also, what does library(clpfd) do?
And why reverse(Bs0, Bs)? I took it out and it still works fine...
Thanks in advance.
In the original, binary_number([], _, N, N), the _ means you don't care what the value of the variable is. If you used binary_number([], X, N, N) (not caring what X is), Prolog would issue a singleton variable warning. Also, what this predicate clause says is that when the first argument is [] (the empty list), the 3rd and 4th arguments are unified.
As explained in the comments, use_module(library(clpfd)) causes Prolog to use the library for Constraint Logic Programming over Finite Domains. You can also find lots of good info on it via Google search of "prolog clpfd".
Normally, in Prolog, arithmetic expressions of comparison require that the expressions be fully instantiated:
X + Y =:= Z + 2. % Requires X, Y, and Z to be instantiated
Prolog would evaluate and do the comparison and yield true or false. It would throw an error if any of these variables were not instantiated. Likewise, for assignment, the is/2 predicate requires that the right hand side expression be fully evaluable with specific variables all instantiated:
Z is X + Y. % Requires X and Y to be instantiated
Using CLPFD you can have Prolog "explore" solutions for you. And you can further specify what domain you'd like to restrict the variables to. So, you can say X + Y #= Z + 2 and Prolog can enumerate possible solutions in X, Y, and Z.
As an aside, the original implementation could be refactored a little to avoid the exponentiation each time and to eliminate the reverse:
:- use_module(library(clpfd)).

binary_number(Bin, N) :-
    binary_number(Bin, 0, N).

binary_number([], N, N).
binary_number([Bit|Bits], Acc, N) :-
    Bit in 0..1,
    Acc1 #= Acc*2 + Bit,
    binary_number(Bits, Acc1, N).
This works well for queries such as:
| ?- binary_number([1,0,1,0], N).
N = 10 ? ;
no
| ?- binary_number(B, 10).
B = [1,0,1,0] ? ;
B = [0,1,0,1,0] ? ;
B = [0,0,1,0,1,0] ? ;
...
But it has termination issues, as pointed out in the comments, for cases such as Bs = [1|_], N #=< 5, binary_number(Bs, N). A solution was presented by @false which simply modifies the above to solve those termination issues. I'll reiterate that solution here for convenience:
:- use_module(library(clpfd)).

binary_number(Bits, N) :-
    binary_number_min(Bits, 0,N, N).

binary_number_min([], N,N, _M).
binary_number_min([Bit|Bits], N0,N, M) :-
    Bit in 0..1,
    N1 #= N0*2 + Bit,
    M #>= N1,
    binary_number_min(Bits, N1,N, M).
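As a side note, the accumulator step N0*2 + Bit is just Horner's scheme written as a left fold; for comparison, here is a one-directional Haskell sketch of the same idea (it cannot run "backwards" like the clpfd version, and the name binaryNumber is made up):

-- Horner-style accumulation, the same idea as the N0*2 + Bit step above
binaryNumber :: [Integer] -> Integer
binaryNumber = foldl (\acc bit -> acc * 2 + bit) 0

-- binaryNumber [1,0,1,0]  ==  10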
I want to prove the correctness of a function definition made with the function keyword. Here is the definition of an addition function on the usual inductive definition of natural numbers:
theory FunctionDefinition
imports Main
begin
datatype natural = Zero | Succ natural
function add :: "natural => natural => natural"
where
"add Zero m = m"
| "add (Succ n) m = Succ (add n m)"
Isabelle/JEdit shows me the following subgoals:
goal (4 subgoals):
1. ⋀P x. (⋀m. x = (Zero, m) ⟹ P) ⟹ (⋀n m. x = (Succ n, m) ⟹ P) ⟹ P
2. ⋀m ma. (Zero, m) = (Zero, ma) ⟹ m = ma
3. ⋀m n ma. (Zero, m) = (Succ n, ma) ⟹ m = Succ (add_sumC (n, ma))
4. ⋀n m na ma. (Succ n, m) = (Succ na, ma) ⟹ Succ (add_sumC (n, m)) = Succ (add_sumC (na, ma))
Auto solve_direct: ⋀m ma. (Zero, m) = (Zero, ma) ⟹ m = ma can be solved directly with
Product_Type.Pair_inject: (?a, ?b) = (?a', ?b') ⟹ (?a = ?a' ⟹ ?b = ?b' ⟹ ?R) ⟹ ?R
Using
apply (auto simp add: Product_Type.Pair_inject)
I get
goal (1 subgoal):
1. ⋀P a b. (⋀m. a = Zero ∧ b = m ⟹ P) ⟹ (⋀n m. a = Succ n ∧ b = m ⟹ P) ⟹ P
It is not clear to me how to proceed. Is this the right way to tackle the problem at all?
I know that Isabelle would do this automatically if I used a fun definition -- I want to learn how to do it manually.
The tutorial on the function package mentions in section 3 that fun f where ... abbreviates
function (sequential) f where ...
by pat_completeness auto
termination by lexicographic_order
Here pat_completeness is a proof method from the function package that automates proof of completeness for patterns of datatype constructors. This is the first subgoal that you have to prove. It is recommended to apply pat_completeness before auto, because auto changes the syntactic structure of the goal and pat_completeness might not work after auto.
If you want to prove pattern completeness without pat_completeness, you should try to do case analysis of all function parameters, i.e., case_tac a in your example.
Manuel already mentioned it in his comment, but I thought a more detailed example might be helpful anyway. Here is what you can do manually:
First you specify your function as usual
function add :: "natural => natural => natural"
where
"add Zero m = m" |
"add (Succ n) m = Succ (add n m)"
and then you prove that the given patterns cover all cases by
by (pat_completeness) auto
Afterwards you take care of termination. E.g., every datatype comes with a size function and you might note that the first argument of add strictly decreases in size for every recursive call. By default function will bundle all arguments of a function into a tuple for a termination proof, i.e., instead of two arguments n and m, in the termination proof you work with the single pair (n, m). Thus if you want to tell the system that it should use the size of the first argument you can do this as follows:
termination add
apply (relation "measure (size o fst)")
This will yield the remaining goals:
goal (2 subgoals):
1. wf (measure (size o fst))
2. !!n m. ((n, m), Succ n, m) : measure (size o fst)
That is, you have to show that the given relation is well-founded (which is trivial for measures, which are always well-founded, since they are constructed by mapping arguments to natural numbers and then using less-than on the naturals as relation) and that for every recursive call the arguments are actually in the given relation. Both goals are easily dispatched by simp.
apply simp
apply simp
done