Implement primitive recursive factorial in Haskell

I am currently trying to implement primitive recursive factorial in Haskell.
I'm using the function recNat as a recursor. That is:
recNat :: a -> (Nat -> a -> a) -> Nat -> a
recNat a _ Zero = a
recNat a h (Succ n) = h n (recNat a h n)
This is our attempt, but we can't quite figure out what's wrong:
factR :: Nat -> Nat
factR Zero = Succ Zero
factR (Succ m) = recNat (Succ m) (\ _ y -> y) (factR m)
I was also trying to implement the exponential function, but it seems even more confusing.

In order to implement factorial, we can first implement a function for multiplication, and for multiplication we need addition:
data Nat = Zero | Succ Nat

add :: Nat -> Nat -> Nat
add a Zero     = a
add a (Succ b) = Succ (add a b)

mul :: Nat -> Nat -> Nat
mul a Zero     = Zero
mul a (Succ b) = add a (mul a b)
Then the factorial function just comes down to:
fac :: Nat -> Nat
fac Zero = Succ Zero
fac (Succ a) = mul (Succ a) (fac a)
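For completeness, the same factorial can also be phrased through the recNat recursor from the question, and exponentiation follows the same pattern of repeated multiplication. This is only a sketch reusing mul from above; expR is a name made up here for illustration:

-- factorial via the recursor: the step function receives the predecessor m
-- and the already-computed factorial y, and combines them with mul
factR :: Nat -> Nat
factR = recNat (Succ Zero) (\m y -> mul (Succ m) y)

-- exponentiation via the recursor: expR b e computes b^e by repeated multiplication
expR :: Nat -> Nat -> Nat
expR b = recNat (Succ Zero) (\_ y -> mul b y)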


In Haskell how to "apply" functions in nested context to a value in context?

nestedApply :: (Applicative f, Applicative g) => g (f (a -> b)) -> f a -> g (f b)
As the type indicates, how do I get that (a->b) applied to that a in the context f?
Thanks for the help.
This is one of those cases where it's helpful to focus on types. I will try to keep it simple and explain the reasoning.
Let's start with describing the task. We have gfab :: g(f(a->b)) and fa :: f a, and we want to have g(f b).
gfab :: g (f (a -> b))
fa :: f a
??1 :: g (f b)
Since g is a functor, to obtain type g T we can start with a value ??2 of type g U and apply fmap to ??3 :: U -> T. In our case, we have T = f b, so we are looking for:
gfab :: g (f (a -> b))
fa :: f a
??2 :: g U
??3 :: U -> f b
??1 = fmap ??3 ??2 :: g (f b)
Now, it looks like we should pick ??2 = gfab. After all, that's the only value of type g Something we have. We obtain U = f (a -> b).
gfab :: g (f (a -> b))
fa :: f a
??3 :: f (a -> b) -> f b
??1 = fmap ??3 gfab :: g (f b)
Let's make ??3 into a lambda, \ (x :: f (a->b)) -> ??4 with ??4 :: f b. (The type of x can be omitted, but I decided to add it to explain what's going on)
gfab :: g (f (a -> b))
fa :: f a
??4 :: f b
??1 = fmap (\ (x :: f (a->b)) -> ??4) gfab :: g (f b)
How do we craft ??4? Well, we have values of types f (a->b) and f a, so we can <*> those to get f b. We finally obtain:
gfab :: g (f (a -> b))
fa :: f a
??1 = fmap (\ (x :: f (a->b)) -> x <*> fa) gfab :: g (f b)
We can simplify that to:
nestedApply gfab fa = fmap (<*> fa) gfab
Now, this is not the most elegant way to do it, but understanding the process is important.
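For instance, here is a quick (hypothetical) usage with g = Maybe and f = [], just to see the types line up:

ghci> nestedApply (Just [(+1), (*2)]) [10, 20]
Just [11,21,20,40]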
With
nestedApply :: (Applicative f, Applicative g)
            => g (f (a -> b))
            -> f     a
            -> g (f       b )
to get that (a->b) applied to that a in the context f, we need to operate in the context g.
And that's just fmap.
It's clearer with the flipped signature, focusing on its last part
flip nestedApply :: (Applicative f, Applicative g)
                 => f a
                 -> g (f (a -> b))    --- from here
                 -> g (f       b )    --- to here
So what we have here is
nestedApply gffun fx = fmap (bar fx) gffun
with bar fx being applied under the g wraps by fmap for us. Which is
bar fx :: f (a -> b)
       -> f b
i.e.
bar :: f a
    -> f (a -> b)
    -> f b
and this is just <*> isn't it, again flipped. Thus we get the answer,
nestedApply gffun fx = fmap (<*> fx) gffun
As we can see, only the fmap capabilities of g are used, so we only need
nestedApply :: (Applicative f, Functor g) => ...
in the type signature.
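Putting the pieces together, the full version with the weaker constraint is:

nestedApply :: (Applicative f, Functor g) => g (f (a -> b)) -> f a -> g (f b)
nestedApply gffun fx = fmap (<*> fx) gffun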
It's easy when writing it on a sheet of paper, in 2D, which we imitate here with the wild indentation to get that vertical alignment.
Yes, we humans learned to write first, on paper, and to type on a typewriter much later. The last generation or two were forced into linear typing by contemporary devices from a young age, but now scribbling and talking (and gesturing and pointing) will hopefully take over yet again. Inventive input modes will eventually include 3D workflows, and that will be a definite advancement. 1D bad, 2D good, 3D even better. For instance, many category theory diagrams are much easier to follow (and at least imagine) when drawn in 3D. The rule of thumb is: it should be easy, not hard. If it's too busy, it probably needs another dimension.
Just playing connect the wires under the wraps. A few self-evident diagrams, and it's done.
Here's some type mandalas for you (again, flipped):
-- <$>               -- <*>                -- =<<
   f a                  f a                   f a
(a -> b)             f (a -> b)            (a -> f b)
   f b                  f b                 f ( f b)    -- fmapped, and
                                                 f b     -- joined
and of course the mother of all applications,
-- $
   a
a -> b
   b
a.k.a. Modus Ponens (yes, also flipped).

Primitive Recursive Functions Exercise in Haskell

For a functional programming exercise I am required to apply primitive recursive functions in Haskell. However, I do not quite understand yet the definition (and application) of this type of function.
We are presented with the data type Nat to be used; its definition is:
data Nat = Zero | Succ Nat
To my understanding this means that the type "Nat" can be either a Zero or a Natural Successor.
Then we have a recursor:
recNat :: a -> (Nat -> a -> a) -> Nat -> a
recNat a _ Zero = a
recNat a h (Succ n) = h n (recNat a h n)
Which I understand is meant to apply recursion to a function?
And I've also been given an example of an addition function using the recursor:
addR :: Nat -> Nat -> Nat
addR m n = recNat n (\ _ y -> Succ y) m
But I don't get how it works: it uses the recNat function with the two given Nats, and also uses an anonymous function as input for recNat (that is the part I'm not sure about!).
So my main issue is: what exactly does \ _ y -> Succ y do in the function?
I'm supposed to apply this same recursor (recNat) to implement other operations on Nat, but I'm still stuck trying to understand the example!
You’re right that data Nat = Zero | Succ Nat means that a Nat may be Zero or the Successor of another Nat; this represents natural numbers as a linked list, i.e.:
zero, one, two, three, four, five :: Nat
zero = Zero
one = Succ Zero -- or: Succ zero
two = Succ (Succ Zero) -- Succ one
three = Succ (Succ (Succ Zero)) -- Succ two
four = Succ (Succ (Succ (Succ Zero))) -- Succ three
five = Succ (Succ (Succ (Succ (Succ Zero)))) -- Succ four
-- …
The function of recNat is to fold over a Nat: recNat z k takes a Nat and “counts down” by ones to the final Zero, calling k on every intermediate Succ, and replacing the Zero with z:
recNat z k three
recNat z k (Succ (Succ (Succ Zero)))
-- by second equation of ‘recNat’:
k two (recNat z k two)
k (Succ (Succ Zero)) (recNat z k (Succ (Succ Zero)))
-- by second equation of ‘recNat’:
k two (k one (recNat z k one))
k (Succ (Succ Zero)) (k (Succ Zero) (recNat z k (Succ Zero)))
-- by second equation of ‘recNat’:
k two (k one (k zero (recNat z k zero)))
k (Succ (Succ Zero)) (k (Succ Zero) (k Zero (recNat z k Zero)))
-- by first equation of ‘recNat’:
k two (k one (k zero z))
k (Succ (Succ Zero)) (k (Succ Zero) (k Zero z))
The lambda \ _ y -> Succ y has type a -> Nat -> Nat; it just ignores its first argument and returns the successor of its second argument. Here’s an illustration of how addR works to compute the sum of two Nats:
addR two three
addR (Succ (Succ Zero)) (Succ (Succ (Succ Zero)))
-- by definition of ‘addR’:
recNat three (\ _ y -> Succ y) two
recNat (Succ (Succ (Succ Zero))) (\ _ y -> Succ y) (Succ (Succ Zero))
-- by second equation of ‘recNat’:
(\ _ y -> Succ y) one (recNat three (\ _ y -> Succ y) one)
(\ _ y -> Succ y) (Succ Zero) (recNat (Succ (Succ (Succ Zero))) (\ _ y -> Succ y) (Succ Zero))
-- by application of the lambda:
Succ (recNat three (\ _ y -> Succ y) one)
Succ (recNat (Succ (Succ (Succ Zero))) (\ _ y -> Succ y) (Succ Zero))
-- by second equation of ‘recNat’:
Succ ((\ _ y -> Succ y) zero (recNat three (\ _ y -> Succ y) zero))
Succ ((\ _ y -> Succ y) Zero (recNat (Succ (Succ (Succ Zero))) (\ _ y -> Succ y) Zero))
-- by application of the lambda:
Succ (Succ (recNat three (\ _ y -> Succ y) zero))
Succ (Succ (recNat (Succ (Succ (Succ Zero))) (\ _ y -> Succ y) Zero))
-- by first equation of ‘recNat’:
Succ (Succ three)
Succ (Succ (Succ (Succ (Succ Zero))))
-- by definition of ‘five’:
five
Succ (Succ (Succ (Succ (Succ Zero))))
As you can see, what’s happening here is we’re essentially taking off each Succ from one number and putting it on the end of the other, or equivalently, replacing the Zero in one number with the other number, i.e., the steps go like this:
  1+1+0 + 1+1+1+0        2 + 3
1+(1+0 + 1+1+1+0)        1+(1 + 3)
1+1+(0 + 1+1+1+0)        1+1+(0 + 3)
1+1+(1+1+1+0)            1+1+(3)
1+1+1+1+1+0              5
The inner lambda always ignores its first argument with _, so it may be simpler to see how this works with a simpler definition of recNat that literally replaces Zero with a value z and Succ with a function s:
recNat' :: a -> (a -> a) -> Nat -> a
recNat' z _ Zero = z
recNat' z s (Succ n) = s (recNat' z s n)
Then addition is slightly simplified:
addR' m n = recNat' n Succ m
This literally says “to compute the sum of m and n, add one m times to n”.
You may find it easier to play around with these numbers if you make a Num instance and Show instance for them:
{-# LANGUAGE InstanceSigs #-} -- for explicitness

instance Num Nat where

  fromInteger :: Integer -> Nat
  fromInteger n
    | n <= 0 = Zero
    | otherwise = Succ (fromInteger (n - 1))

  (+) :: Nat -> Nat -> Nat
  (+) = addR

  (*) :: Nat -> Nat -> Nat
  (*) = … -- left as an exercise

  (-) :: Nat -> Nat -> Nat
  (-) = … -- left as an exercise

  abs :: Nat -> Nat
  abs n = n

  signum :: Nat -> Nat
  signum Zero = Zero
  signum Succ{} = Succ Zero

  negate :: Nat -> Nat
  negate n = n -- somewhat hackish

instance Show Nat where
  show n = show (recNat' 0 (+ 1) n :: Int)
Then you can write 2 + 3 :: Nat and have it display as 5.
Roughly, recNat x f n computes
f (n-1) (f (n-2) (f (n-3) (... (f 0 x))))
So, it applies f to x n times, each time also passing a "counter" as the first argument of f.
In your case \_ y -> ... ignores the "counter" argument. Hence
addR m n = recNat n (\ _ y -> Succ y) m
can be read as "to compute m+n, apply m times the function Succ to n". This effectively computes ((n+1)+1)+1... where there are m ones in the sum.
You can try to compute the product of two naturals in a similar way. Use \_ y -> ... and express multiplication as repeated addition. You'll need to use the already-defined addR for that.
Additional hint: after multiplication, if you want to compute the predecessor n-1, then the "counter" argument will be very handy, so don't discard that and use \x y -> ... instead. After that, you can derive (truncated) subtraction as repeated predecessor.
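For reference, here is one shape the solutions hinted at above could take. This is only a sketch, and the names mulR, predR and subR are made up here:

mulR :: Nat -> Nat -> Nat
mulR m n = recNat Zero (\_ y -> addR n y) m    -- m*n: add n to the accumulator, m times

predR :: Nat -> Nat
predR = recNat Zero (\x _ -> x)                -- the "counter" x is exactly the predecessor

subR :: Nat -> Nat -> Nat
subR m n = recNat m (\_ y -> predR y) n        -- truncated m-n: take the predecessor n times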

SML: Recursive addition test case syntax

I have written SML code to recursively add two numbers of datatype nat: a function peanify to convert an integer to a nat, and a function decimal to convert a nat back to an integer.
I wanted help with the syntax to test fun plus.
Say I want to test 2 + 1:
datatype nat = Zero | Succ of nat

fun peanify (x:int) : nat =
  if x=0 then Zero
  else Succ (peanify (x-1))

fun decimal (n:nat) : int =
  case n of
    Zero => 0
  | (Succ n) => 1 + (decimal n)

fun plus (x : nat) : (nat -> nat) =
  case x of
    Zero => (fn y => y)
  | Succ x' => (fn y => Succ (plus x' y))
Expected result:
val it = Succ (Succ (Succ Zero)) : nat
You can write:
val it = plus (peanify 2) (peanify 1)

OCaml Adding Natural Numbers

I'm learning OCaml in school and recently came across a program for an assignment that I couldn't understand, and was hoping somebody could explain it to me. Here's the code:
(* Natural numbers can be defined as follows:
     type nat = ZERO | SUCC of nat;;
   For instance, two = SUCC (SUCC ZERO) and three = SUCC (SUCC (SUCC ZERO)).
   Write a function 'natadd' that adds two natural numbers in this fashion. *)
# type nat = ZERO | SUCC of nat;;
# let two = SUCC (SUCC ZERO);;
# let three = SUCC (SUCC (SUCC ZERO));;
# let rec natadd : nat -> nat -> nat =
    fun n1 n2 ->
      match n1 with
      | ZERO -> n2
      | SUCC n1 -> SUCC (natadd n1 n2)
  ;;
Here's a sample output for the code:
# natadd two three;;
- : nat = SUCC (SUCC (SUCC (SUCC (SUCC ZERO))))
What I don't understand is the match-with statement. Does it mean that if n1 is non-zero, then it adds SUCC and uses [SUCC n1] as a new argument in place of n1 for the recursive call?
No, it doesn't use SUCC n1 as the argument of the recursive call; it uses only n1 (the part inside the matched SUCC) as the argument. The SUCC is then applied to the result of the recursive call.
The code might be a bit confusing because there are two variables with the name n1. Better might be
let rec natadd : nat -> nat -> nat = fun a b ->
  match a with
  | ZERO -> b
  | SUCC a_pred -> SUCC (natadd a_pred b)

Composing two error-raising functions in Haskell

The problem I have been given says this:
In a similar way to mapMaybe, define
the function:
composeMaybe :: (a->Maybe b) -> (b -> Maybe c) -> (a-> Maybe c)
which composes two error-raising functions.
The type Maybe a and the function mapMaybe are coded like this:
data Maybe a = Nothing | Just a
mapMaybe g Nothing = Nothing
mapMaybe g (Just x) = Just (g x)
I tried using composition like this:
composeMaybe f g = f.g
But it does not compile.
Could anyone point me in the right direction?
The tool you are looking for already exists. There are two Kleisli composition operators in Control.Monad.
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
When m = Maybe, the implementation of composeMaybe becomes apparent:
composeMaybe = (>=>)
Looking at the definition of (>=>),
f >=> g = \x -> f x >>= g
which you can inline if you want to think about it in your own terms as
composeMaybe f g x = f x >>= g
or which could be written in do-sugar as:
composeMaybe f g x = do
  y <- f x
  g y
In general, I'd just stick to using (>=>), which has nice theoretical reasons for existing, because it provides the cleanest way to state the monad laws.
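For reference, stated with (>=>) the monad laws say that Kleisli composition is associative with return as its identity:

-- return >=> f      =  f                   -- left identity
-- f >=> return      =  f                   -- right identity
-- (f >=> g) >=> h   =  f >=> (g >=> h)     -- associativity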
First of all: if anything, it should be g.f, not f.g, because you want a function which takes the same argument as f and gives the same return value as g. However, that doesn't work because the return type of f does not equal the argument type of g (the return type of f has a Maybe in it and the argument type of g does not).
So what you need to do is: Define a function which takes a Maybe b as an argument. If that argument is Nothing, it should return Nothing. If the argument is Just b, it should return g b. composeMaybe should return the composition of the function with f.
Here is an excellent tutorial about Haskell monads (and especially the Maybe monad, which is used in the first examples).
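A minimal sketch of what this answer describes (the helper name applyG is made up purely for illustration):

composeMaybe :: (a -> Maybe b) -> (b -> Maybe c) -> (a -> Maybe c)
composeMaybe f g = applyG . f
  where
    applyG Nothing  = Nothing   -- an earlier error propagates
    applyG (Just y) = g y       -- otherwise feed the intermediate result to g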
composeMaybe :: (a -> Maybe b)
             -> (b -> Maybe c)
             -> (a -> Maybe c)
composeMaybe f g = \x ->
Since g takes an argument of type b, but f produces a value of type Maybe b, you have to pattern match on the result of f x if you want to pass that result to g.
case f x of
  Nothing -> ...
  Just y  -> ...
A very similar function already exists — the monadic bind operator, >>=. Its type (for the Maybe monad) is Maybe a -> (a -> Maybe b) -> Maybe b, and it's used like this:
Just 100 >>= \n -> Just (show n) -- gives Just "100"
It's not exactly the same as your composeMaybe function, which takes a function returning a Maybe instead of a direct Maybe value for its first argument. But you can write your composeMaybe function very simply with this operator — it's almost as simple as the definition of the normal compose function, (.) f g x = f (g x).
Notice how close the types of composeMaybe's arguments are to what the monadic bind operator wants for its latter argument:
ghci> :t (>>=)
(>>=) :: (Monad m) => m a -> (a -> m b) -> m b
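Spelled out, the one-line definition this is hinting at would presumably be:

composeMaybe :: (a -> Maybe b) -> (b -> Maybe c) -> (a -> Maybe c)
composeMaybe f g x = f x >>= g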
The order of f and g is backward for composition, so how about a better name?
thenMaybe :: (a -> Maybe b) -> (b -> Maybe c) -> (a -> Maybe c)
thenMaybe f g = (>>= g) . (>>= f) . return
Given the following definitions
times3 x = Just $ x * 3
saferecip x
  | x == 0 = Nothing
  | otherwise = Just $ 1 / x
one can, for example,
ghci> saferecip `thenMaybe` times3 $ 4
Just 0.75
ghci> saferecip `thenMaybe` times3 $ 8
Just 0.375
ghci> saferecip `thenMaybe` times3 $ 0
Nothing
ghci> times3 `thenMaybe` saferecip $ 0
Nothing
ghci> times3 `thenMaybe` saferecip $ 1
Just 0.3333333333333333