Can this function be implemented?

Is there any implementation for this function?
foo :: (Monad m, Monad n) => m a -> n a -> (a -> a -> a) -> m (n a)
foo x y f = ...

Yes, and it can be given a more general type (note that the arguments are also reordered here, with the function first).
foo :: (Functor f, Functor g) => (a -> b -> c) -> f a -> g b -> f (g c)
foo f fx gy = fmap (\x -> fmap (f x) gy) fx
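For instance, a quick GHCi check of this definition, picking f ~ Maybe and g ~ [] (just an illustrative choice of standard Functors):
ghci> foo (+) (Just 1) [10, 20]
Just [11,21]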

How to define a function based on its type?

I'm trying to define a simple Haskell function with the type (m -> n -> l) -> (m -> n) -> m -> l.
I thought that it needs to be defined as f g h x = f g (h x), but apparently that's not true. How could I correct this function?
Based on the signature, the only sensible implementation is:
f :: (m -> n -> l) -> (m -> n) -> m -> l
f g h x = g x (h x)
This makes sense: we are given two functions g :: m -> n -> l and h :: m -> n, and a value x :: m, and we have to construct a value of type l. The only way to do that is to use the function g. For its first parameter, of type m, we can pass x; for the second parameter we need a value of type n. We do not have such a value directly, but we can construct one by applying h to x. Since h x :: n, we can use it as the second argument of g.
This function is already defined: it is a special case of (<*>) :: Applicative f => f (n -> l) -> f n -> f l with f ~ (->) m.
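Spelled out, that specialisation can be written as a one-liner ((<*>) at f ~ (->) m is the standard function instance, with (u <*> v) x = u x (v x)):
f :: (m -> n -> l) -> (m -> n) -> m -> l
f = (<*>)
-- e.g. f (+) (*2) 3 == (+) 3 ((*2) 3) == 9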
Djinn is a tool that reasons about types and can generate a function definition from a signature. If we query it with f :: (m -> n -> l) -> (m -> n) -> m -> l, we get:
f :: (m -> n -> l) -> (m -> n) -> m -> l
f a b c = a c (b c)
which is the same function (except that it uses other variable names).
f :: (m -> n -> l) -> (m -> n) -> m -> l
f g h x = _
Now you need to use the arguments in some way.
g (_ :: m) (_ :: n) :: l
h (_ :: m) :: n
x :: m
Both g and h need a value of type m as their first argument. Well, luckily we have exactly one such value, so it's easy to see what to do.
g x (_ :: n) :: l
h x :: n
x :: m
So now g still needs a value of type n. Again we're lucky, because applying h to x has produced such a value.
g x (h x) :: l
h x :: n
x :: m
Ok, and there we now have something of type l, which is what we needed!
f g h x = g x (h x)
f :: (m -> n -> l) -> (m -> n) -> m -> l
f g h x = l
  where
  l =
what can produce an l for us? only g:
      g
which takes two parameters, an m and an n,
        m n
but where can we get those? Well, m we've already got,
        m = x
and n we can get from h,
        n = h
which needs an m
              m
and where do we get an m? We've already got an m!
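Assembled into one piece, the sketch above is the same function once more, with the where-bindings filled in:
f :: (m -> n -> l) -> (m -> n) -> m -> l
f g h x = l
  where
  l = g m n
  m = x
  n = h m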

In Haskell how to "apply" functions in nested context to a value in context?

nestedApply :: (Applicative f, Applicative g) => g (f (a -> b)) -> f a -> g (f b)
As the type indicates, how to get that (a->b) applied to that a in the context f?
Thanks for help.
This is one of those cases where it's helpful to focus on types. I will try to keep it simple and explain the reasoning.
Let's start with describing the task. We have gfab :: g (f (a -> b)) and fa :: f a, and we want to obtain a value of type g (f b).
gfab :: g (f (a -> b))
fa :: f a
??1 :: g (f b)
Since g is a functor, to obtain a value of type g T we can start with a value ??2 of type g U and apply fmap ??3 to it, where ??3 :: U -> T. In our case, we have T = f b, so we are looking for:
gfab :: g (f (a -> b))
fa :: f a
??2 :: g U
??3 :: U -> f b
??1 = fmap ??3 ??2 :: g (f b)
Now, it looks like we should pick ??2 = gfab. After all, that's the only value of type g Something we have. We obtain U = f (a -> b).
gfab :: g (f (a -> b))
fa :: f a
??3 :: f (a -> b) -> f b
??1 = fmap ??3 gfab :: g (f b)
Let's make ??3 into a lambda, \ (x :: f (a->b)) -> ??4 with ??4 :: f b. (The type of x can be omitted, but I decided to add it to explain what's going on)
gfab :: g (f (a -> b))
fa :: f a
??4 :: f b
??1 = fmap (\ (x :: f (a->b)) -> ??4) gfab :: g (f b)
How do we craft ??4? Well, we have values of types f (a -> b) and f a, so we can <*> those to get f b. We finally obtain:
gfab :: g (f (a -> b))
fa :: f a
??1 = fmap (\ (x :: f (a->b)) -> x <*> fa) gfab :: g (f b)
We can simplify that into:
nestedApply gfab fa = fmap (<*> fa) gfab
Now, this is not the most elegant way to do it, but understanding the process is important.
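As a quick sanity check of that definition (an illustrative example, choosing g ~ [] and f ~ Maybe):
ghci> nestedApply [Just (+1), Nothing] (Just 41)
[Just 42,Nothing]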
With
nestedApply :: (Applicative f, Applicative g)
            => g (f (a -> b))
            -> f a
            -> g (f b )
to get that (a->b) applied to that a in the context f, we need to operate in the context g.
And that's just fmap.
It's clearer with the flipped signature, focusing on its last part
flip nestedApply :: (Applicative f, Applicative g)
                 => f a
                 -> g (f (a -> b))   --- from here
                 -> g (f b )         --- to here
So what we have here is
nestedApply gffun fx = fmap (bar fx) gffun
with bar fx being applied under the g wraps by fmap for us. Which is
bar fx :: f (a -> b)
       -> f b
i.e.
bar :: f a
    -> f (a -> b)
    -> f b
and this is just <*> isn't it, again flipped. Thus we get the answer,
nestedApply gffun fx = fmap (<*> fx) gffun
As we can see, only the fmap capability of g is used, so we only need
nestedApply :: (Applicative f, Functor g) => ...
in the type signature.
It's easy when writing it on a sheet of paper, in 2D. Which we imitate here with the wild indentation to get that vertical alignment.
Yes, we humans learned to write first, on paper, and to type on a typewriter much later. The last generation or two have been forced into linear typing by contemporary devices from a young age, but now scribbling and talking (and gesturing and pointing) will hopefully be taking over yet again. Inventive input modes will eventually include 3D workflows, and that will be a definite advancement. 1D bad, 2D good, 3D even better. For instance, many category theory diagrams are much easier to follow (and at least imagine) when drawn in 3D. The rule of thumb is: it should be easy, not hard. If it's too busy, it probably needs another dimension.
Just playing connect the wires under the wraps. A few self-evident diagrams, and it's done.
Here are some type mandalas for you (again, flipped):
--  <$>             --  <*>              --  =<<
    f a                 f a                  f a
   (a -> b)           f (a -> b)            (a -> f b)
    f b                 f b                f (f b)       -- fmapped, and
                                           f  b          -- joined
and of course the mother of all applications,
-- $
a
a -> b
b
a.k.a. Modus Ponens (yes, also flipped).

Haskell function composition - two one-dimensional functions to a two dimensional function

Let's assume I have a function f in Haskell; it takes a Double and returns a Double, and I have a function g that also takes a Double and returns a Double.
Now, I can compose f with g like this: f . g.
Now, let's take a higher-dimensional function f that takes two Doubles and outputs one:
f :: Double -> Double -> Double
or
f :: (Double, Double) -> Double
And I have two g functions as well:
g1 :: Double -> Double, g2 :: Double -> Double
Now, I want to compose the functions to get something like:
composition x = f (g1 x) (g2 x)
Can this be achieved just by using the dot (.) operator?
You can make use of liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c for this:
composition = liftA2 f g1 g2
Since a function is an applicative [src]:
instance Applicative ((->) r) where
    pure = const
    (<*>) f g x = f x (g x)
    liftA2 q f g x = q (f x) (g x)
and liftA2 is implemented as [src]:
liftA2 :: (a -> b -> c) -> f a -> f b -> f c
liftA2 f x = (<*>) (fmap f x)
this will thus be resolved to:
liftA2 f g1 g2 x = (<*>) (fmap f g1) g2 x
                 = (fmap f g1 <*> g2) x
                 = (f . g1 <*> g2) x
                 = (\fa ga xa -> fa xa (ga xa)) (f . g1) g2 x
                 = (f . g1) x (g2 x)
                 = f (g1 x) (g2 x)
You can try this:
import Control.Arrow
composition = f . (g1 &&& g2)
(&&&) turns g1 :: a -> b and g2 :: a -> c into g1 &&& g2 :: a -> (b, c). Then you can apply normal composition. (Note that this uses the uncurried form f :: (Double, Double) -> Double; with the curried form you would write uncurry f . (g1 &&& g2).)
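For concreteness, here is a small sketch putting both approaches side by side (the names hyp, g1 and g2 are made up for illustration):
import Control.Applicative (liftA2)
import Control.Arrow ((&&&))

hyp :: Double -> Double -> Double
hyp a b = sqrt (a * a + b * b)

g1, g2 :: Double -> Double
g1 = (* 3)
g2 = (* 4)

composition1, composition2 :: Double -> Double
composition1 = liftA2 hyp g1 g2           -- composition1 x = hyp (g1 x) (g2 x)
composition2 = uncurry hyp . (g1 &&& g2)  -- same result, via (&&&)

-- ghci> composition1 1
-- 5.0
-- ghci> composition2 1
-- 5.0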

Filter function in nonogram solver

I have to understand the Deductive solver by Ted Yin from https://wiki.haskell.org/Nonogram
I don't know how
elim b w ps = filter (\p -> all (\x -> x `elem` p) b &&
                            all (\x -> x `notElem` p) w) ps
works. I only know that
all (\x -> x `notElem` [1]) [1,2,3,4]
gives False, and
all (\x -> x `elem` [1]) [1,1,1,1]
gives True.
but I don't know how to run the whole elim function or how it works.
First, help yourself to a little whitespace to aid understanding, and name your subexpressions:
elim b w ps = filter (\p -> all (\x -> x `elem` p) b &&
                            all (\x -> x `notElem` p) w
                     ) ps

            = filter foo ps
              where
              foo p = all (\x -> x `elem` p) b &&
                      all (\x -> x `notElem` p) w

            = filter foo ps
              where
              foo p = all tst1 b && all tst2 w
                where
                tst1 = (\x -> x `elem` p)
                tst2 = (\x -> x `notElem` p)

            = filter foo ps
              where
              foo p = (&&) (all tst1 b) (all tst2 w)
                where
                tst1 x = elem x p
                tst2 y = notElem y p
Now what does that do? Or better yet, what is it? Let's go by some types to build up our insight here:
filter :: (a -> Bool) -> [a] -> [a]
foo :: a -> Bool
ps :: [a]
filter foo ps :: [a]
p :: a
foo p :: Bool
(&&) :: Bool -> Bool -> Bool
all tst1 b :: Bool
all tst2 w :: Bool
---------------------------
all :: (t -> Bool) -> [t] -> Bool
tst1 :: t -> Bool
tst2 :: t -> Bool
b :: [t]
w :: [t]
---------------------------
......
---------------------------
elim b w ps :: [a]
elim :: [t] -> [t] -> [a] -> [a]
Complete the picture by working through the types of tst1 and tst2 to find out the relationship between the t and a types.
tst1 :: t -> Bool -- tst1 x = elem x p
tst2 :: t -> Bool -- tst2 y = notElem y p
x :: t
y :: t
elem :: Eq t => t -> [t] -> Bool
notElem :: Eq t => t -> [t] -> Bool
p :: [t] -- it was :: a !
Thus a ~ [t] and [a] ~ [[t]] and finally,
elim b w ps :: [[t]]
elim :: Eq t => [t] -> [t] -> [[t]] -> [[t]]
So then filter foo keeps only those p in ps for which foo p == True.
And that means all tst1 b == True and all tst2 w == True.
And that means every x in b is an element of p, and every y in w is not an element of p. In other words, only those p in ps are kept in the resulting list for which
foo p = (b \\ p) == [] && (p \\ w) == p
holds:
import Data.List ((\\))
elim b w ps = [ p | p <- ps, (b \\ p) == [], (p \\ w) == p ]
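A quick GHCi check with made-up example lists (both formulations give the same result):
ghci> elim [1,3] [2] [[1,2,3],[1,3,4],[1,3],[3,4]]
[[1,3,4],[1,3]]
Only the candidates that contain both 1 and 3, and avoid 2, survive.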

Idris proof by definition

I can write the function
powApply : Nat -> (a -> a) -> a -> a
powApply Z f = id
powApply (S k) f = f . powApply k f
and prove trivially:
powApplyZero : (f : _) -> (x : _) -> powApply Z f x = x
powApplyZero f x = Refl
So far, so good. Now, I try to generalize this function to work with negative exponents. Of course, an inverse must be provided:
import Data.ZZ
-- Two functions, f and g, with a proof that g is an inverse of f
data Invertible : Type -> Type -> Type where
  MkInvertible : (f : a -> b) -> (g : b -> a) ->
                 ((x : _) -> g (f x) = x) -> Invertible a b
powApplyI : ZZ -> Invertible a a -> a -> a
powApplyI (Pos Z) (MkInvertible f g x) = id
powApplyI (Pos (S k)) (MkInvertible f g x) =
  f . powApplyI (Pos k) (MkInvertible f g x)
powApplyI (NegS Z) (MkInvertible f g x) = g
powApplyI (NegS (S k)) (MkInvertible f g x) =
  g . powApplyI (NegS k) (MkInvertible f g x)
I then try to prove a similar statement:
powApplyIZero : (i : _) -> (x : _) -> powApplyI (Pos Z) i x = x
powApplyIZero i x = ?powApplyIZero_rhs
However, Idris refuses to evaluate the application of powApplyI, leaving the type of ?powApplyIZero_rhs as powApplyI (Pos 0) i x = x (yes, Z is changed to 0). I've tried writing powApplyI in a non-pointsfree style, and defining my own ZZ with the %elim modifier (which I don't understand), but neither of these worked. Why isn't the proof handled by inspecting the first case of powApplyI?
Idris version: 0.9.15.1
Here are some things I have tried:
powApplyNI : Nat -> Invertible a a -> a -> a
powApplyNI Z (MkInvertible f g x) = id
powApplyNI (S k) (MkInvertible f g x) = f . powApplyNI k (MkInvertible f g x)
powApplyNIZero : (i : _) -> (x : _) -> powApplyNI 0 i x = x
powApplyNIZero (MkInvertible f g y) x = Refl
powApplyZF : ZZ -> (a -> a) -> a -> a
powApplyZF (Pos Z) f = id
powApplyZF (Pos (S k)) f = f . powApplyZF (Pos k) f
powApplyZF (NegS Z) f = f
powApplyZF (NegS (S k)) f = f . powApplyZF (NegS k) f
powApplyZFZero : (f : _) -> (x : _) -> powApplyZF 0 f x = x
powApplyZFZero f x = ?powApplyZFZero_rhs
The first proof went fine, but ?powApplyZFZero_rhs stubbornly keeps the type powApplyZF (Pos 0) f x = x. Clearly, there's some problem with ZZ (or my use of it).
The problem: powApplyI was not provably total, according to Idris. Idris' totality checker relies on being able to reduce parameters to structurally smaller forms, and with raw ZZs, this doesn't work.
The answer is to delegate the recursion to plain old powApply (which is proven total):
total
powApplyI : ZZ -> Invertible a a -> a -> a
powApplyI (Pos k) (MkInvertible f g x) = powApply k f
powApplyI (NegS k) (MkInvertible f g x) = powApply (S k) g
Then, with a case split on i, powApplyIZero is proven trivially.
Thanks to Melvar from the #idris IRC channel.
powApplyI (Pos Z) i x doesn't reduce further because i is not in weak head normal form.
I don't have an Idris compiler, so I rewrote your code in Agda. It's pretty similar:
open import Function
open import Relation.Binary.PropositionalEquality
open import Data.Nat
open import Data.Integer
data Invertible : Set -> Set -> Set where
  MkInvertible : {a b : Set} (f : a -> b) -> (g : b -> a) ->
                 (∀ x -> g (f x) ≡ x) -> Invertible a b
powApplyI : {a : Set} -> ℤ -> Invertible a a -> a -> a
powApplyI ( + 0 ) (MkInvertible f g x) = id
powApplyI ( + suc k ) (MkInvertible f g x) = f ∘ powApplyI ( + k ) (MkInvertible f g x)
powApplyI -[1+ 0 ] (MkInvertible f g x) = g
powApplyI -[1+ suc k ] (MkInvertible f g x) = g ∘ powApplyI -[1+ k ] (MkInvertible f g x)
Now you can define your powApplyIZero as
powApplyIZero : {a : Set} (i : Invertible a a) -> ∀ x -> powApplyI (+ 0) i x ≡ x
powApplyIZero (MkInvertible _ _ _) _ = refl
Pattern-matching on i induces unification and powApplyI (+ 0) i x becomes replaced with powApplyI (+ 0) i (MkInvertible _ _ _), so powApplyI can proceed further.
Or you could write this explicitly:
powApplyIZero : {a : Set} (f : a -> a) (g : a -> a) (p : ∀ x -> g (f x) ≡ x)
              -> ∀ x -> powApplyI (+ 0) (MkInvertible f g p) x ≡ x
powApplyIZero _ _ _ _ = refl