OK, about monads: I am aware that plenty of questions have already been asked, and I am not trying to bother anyone by asking "what is a monad" yet again.
Actually, I read What is a monad?, and it was very helpful; I feel I am very close to really understanding it.
I am creating this question just to describe some of my thoughts on monads and functions, in the hope that someone can correct me or confirm they are right.
Some answers in that post made me feel that a monad is a little bit like a function.
A monad takes a type and returns a wrapper type (return); it can also take a type, perform some operations on it, and return a wrapper type (bind).
From my point of view, that is a little bit like a function: a function takes something, does some operations, and returns something.
So why do we even need monads? I think one of the key reasons is that a monad provides a better way, or pattern, for sequencing operations on the initial data/type.
For example, say we have an initial integer i. In our code, we need to apply 10 functions f1, f2, f3, ..., f10 step by step: we apply f1 to i first, get a result, then apply f2 to that result, get a new result, then apply f3, and so on.
We could do this with plain functions, like f1 i |> f2 |> f3 .... However, the intermediate results along the way are not consistent, and if we have to handle a possible failure somewhere in the middle, things get ugly; an Option has to be constructed anyway if we don't want the whole process to fail on exceptions. So, naturally, monads come in.
A monad unifies and enforces the return type of every step. This greatly simplifies the logic and improves the readability of the code (which is also the purpose of design patterns, isn't it?). It is also more bulletproof against errors and bugs. For example, the Option monad forces every intermediate result to be an Option, which makes the fail-fast paradigm very easy to implement.
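A minimal Haskell sketch of this fail-fast chaining, using the Maybe monad (Haskell's analogue of Option); the step functions here are invented purely for illustration:

import Text.Read (readMaybe)

-- Each hypothetical step may fail, so each returns a Maybe.
parseStep :: String -> Maybe Int
parseStep = readMaybe

halveStep :: Int -> Maybe Int
halveStep n = if even n then Just (n `div` 2) else Nothing

describeStep :: Int -> Maybe String
describeStep n = Just ("result: " ++ show n)

-- Every intermediate value is a Maybe, and >>= short-circuits on the first Nothing.
process :: String -> Maybe String
process s = parseStep s >>= halveStep >>= describeStep

-- process "10"  == Just "result: 5"
-- process "7"   == Nothing   (halveStep fails)
-- process "abc" == Nothing   (parseStep fails)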
As many posts about monads describe, a monad is a design pattern and a better way to combine functions/steps to build up a process.
Am I understanding it correctly?
It sounds to me like you're discovering the limits of learning by analogy. Monad is precisely defined, both as a type class in Haskell and as an algebraic structure in category theory; any comparison using "... is like ..." is going to be imprecise and therefore wrong.
So no, Haskell's monads aren't like functions: they are 1) implemented as type classes, and 2) intended to be used differently than functions.
This answer is probably unsatisfying; are you looking for intuition? If so, I'd suggest doing lots of examples, and especially reading through LYAH. It's very difficult to get an intuitive understanding of abstract things like monads without having a solid base of examples and experience to fall back on.
Why do we even need monads? This is a good question, and maybe there's more than one question here:
Why do we even need the Monad type class? For the same reason that we need any type class.
Why do we even need the monad concept? Because it's useful. Also, it's not a function, so it can't be replaced by a function. (Your example seems like it does not require a Monad (rather, it needs an Applicative)).
For example, you can implement context-free parser combinators using the Applicative type class. But try implementing a parser for the language consisting of the same string of symbols twice (separated by a space) without Monad, i.e.:
a a -> yes
a b -> no
ab ab -> yes
ab ba -> no
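For concreteness, here is one way to do it with Monad. The Parser type, word, space, and sameTwice below are all hand-rolled for illustration (not a real combinator library), and leftover input is not checked:

newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(x, rest) -> (f x, rest)) (p s)

instance Applicative Parser where
  pure x = Parser $ \s -> Just (x, s)
  Parser pf <*> Parser px = Parser $ \s -> do
    (f, s')  <- pf s
    (x, s'') <- px s'
    Just (f x, s'')

instance Monad Parser where
  Parser p >>= k = Parser $ \s -> do
    (x, s') <- p s
    runParser (k x) s'

word :: Parser String          -- one maximal run of non-space characters
word = Parser $ \s -> case span (/= ' ') s of
  ("", _)   -> Nothing
  (w, rest) -> Just (w, rest)

space :: Parser ()
space = Parser $ \s -> case s of
  (' ':rest) -> Just ((), rest)
  _          -> Nothing

-- The crucial monadic step: what the second parser must match depends on the
-- *result* of the first one -- something Applicative alone cannot express.
sameTwice :: Parser String
sameTwice = do
  w  <- word
  _  <- space
  w' <- word
  if w == w' then return w else Parser (const Nothing)

-- runParser sameTwice "ab ab"  ==  Just ("ab", "")
-- runParser sameTwice "ab ba"  ==  Nothing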
So that's one thing a monad provides: the ability to use previous results to "decide" what to do. Here's another example:
f :: Monad m => m Int -> m [Char]
f m =
  m >>= \x ->
    if x > 2
      then return (replicate x 'a')
      else return []
f (Just 1) -->> Just ""
f (Just 3) -->> Just "aaa"
f [1,2,3,4] -->> ["", "", "aaa", "aaaa"]
Monads (and Functors, and Applicative Functors) can be seen as being about "generalized function application": they all create functions of type f a ⟶ f b where not only the "values inside a context", like types a and b, are involved, but also the "context" -- the same type of context -- represented by f.
So "normal" function application involves functions of type (a ⟶ b), while "generalized" function application involves functions of type (f a ⟶ f b). Such functions, too, can be composed under normal function composition, because their types are uniform: f a ⟶ f b ; f b ⟶ f c ==> f a ⟶ f c.
Each of the three creates them in a different way though, starting from different things:
Functors: fmap :: Functor f => (a ⟶ b) ⟶ (f a ⟶ f b)
Applicative Functors: (<*>) :: Applicative f => f (a ⟶ b) ⟶ (f a ⟶ f b)
Monadic Functors: (=<<) :: Monad f => (a ⟶ f b) ⟶ (f a ⟶ f b)
In practice, the difference is in how we get to use the resulting value-in-context type, seen as denoting some type of computation.
Writing in generalized do notation,
Functors:              do { x <- a ;             return (g x)   }      g <$> a                       -- fmap
Applicative Functors:  do { x <- a ; y <- b ;    return (g x y) }      g <$> a <*> b
                                                                       (\ x -> g x <$> b  ) =<< a
Monadic Functors:      do { x <- a ; y <- k x ;  return (g x y) }      (\ x -> g x <$> k x) =<< a
And their types,
"liftF1" :: (Functor f) => (a ⟶ b) ⟶ f a ⟶ f b -- fmap
liftA2 :: (Applicative f) => (a ⟶ b ⟶ c) ⟶ f a ⟶ f b ⟶ f c
"liftBind" :: (Monad f) => (a ⟶ b ⟶ c) ⟶ f a ⟶ (a ⟶ f b) ⟶ f c
Why does this code compile?
--sequence_mine :: Monad m => [m a] -> m [a]
sequence_mine [] = return []
sequence_mine (elt:l) = do
  e <- elt
  sl <- sequence l
  return (e:sl)
Note I intentionally commented out the type declaration here. But the code still compiles and seems to work as expected, even without the type declaration - and this is what surprises me.
To my understanding, ambiguity should arise on this line:
return (e:sl)
The reason is that Haskell shouldn't know which type of monad we are returning. Why does it have to be the same type that we are accepting?
To clarify more: to my understanding, if I don't explicitly put in a type declaration analogous to the one I commented out, Haskell should deduce that this function has a type like this:
sequence_mine :: (Monad m1, Monad m2) => [m1 a] -> m2 [a]
Unless I explicitly unify m1 and m2 by calling them both m, there is no reason for Haskell to believe they both refer to the same type, or so I would suppose.
Yet that is not the case. What am I missing here?
Well, let's look at what the do block desugars to:
sequence_mine (elt:l) = elt >>= \e -> (sequence l) >>= \sl -> return (e:sl)
Recall that the "bind" operator >>= has the type signature (Monad m) => m a -> (a -> m b) -> m b. Note that the monad m here, although arbitrary, must be the same for both of the arguments and the result type.
So if elt has type m a, it's easy to see that return (e:sl), which determines the output type of the whole expression, must have type m [a] for the same monad m.
To put it another way, each do block only works in the context of a fixed monad.
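For instance (an illustrative check with the sequence_mine above), each call site picks one monad for the whole do block:

ghci> sequence_mine [Just 1, Just 2, Nothing]
Nothing
ghci> sequence_mine [[1,2],[3]]
[[1,3],[2,3]]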
I'm trying to become familiar with Haskell and I was wondering if the following was possible and if so, how?
Say I have a set of functions {f, g, ...} for which I want to define replacement functions {f', g', ...}. Now say I have a function c which uses these functions (and only these functions) inside itself, e.g. c x = g (f x). Is there a way to automatically define c' x = g' (f' x) without writing it explicitly?
EDIT: By a replacement function f' I mean some function that is conceptually related to f but altered in some arbitrary way. For example, if f xs ys = (*) <$> xs <*> ys then f' (x:xs) (y:ys) = (x * y):(f' xs ys) etc.
Many thanks,
Ben
If, as seems to be the case with your example, f and f' have the same type etc., then you can easily pass them in as extra parameters. Like
cGen :: ([a] -> [a] -> [a]) -> (([a] -> [a]) -> b) -> [a] -> b
cGen f g x = g (f x)
...which BTW could also be written cGen f g = g . f, i.e. cGen = flip (.)...
If you want to group together specific “sets of functions”, you can do that with a “configuration type”
data CConfig a b = CConfig {
f :: [a] -> [a] -> [a]
, g :: ([a] -> [a]) -> b
}
cGen :: CConfig a b -> [a] -> b
cGen (CConfig f g) = g . f
The most concise and reliable way to do something like this would be with RecordWildCards
data Replacer ... = R {f :: ..., g :: ...}
c R{..} x = g (f x)
Your set of functions is now pulled from the record in local scope rather than from a global definition, and can be swapped out for a different set of functions at your discretion.
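For completeness, a self-contained sketch of the RecordWildCards idea, with made-up field types purely for illustration:

{-# LANGUAGE RecordWildCards #-}

-- The fields stand in for the "set of functions"; their types are invented here.
data Replacer = R { f :: Int -> Int, g :: Int -> Int }

c :: Replacer -> Int -> Int
c R{..} x = g (f x)   -- f and g come from the record brought into scope by R{..}

original, replacement :: Replacer
original    = R { f = (+ 1),  g = (* 2)  }
replacement = R { f = (+ 10), g = (* 20) }

main :: IO ()
main = do
  print (c original 1)     -- (1 + 1) * 2   == 4
  print (c replacement 1)  -- (1 + 10) * 20 == 220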
The only way to get closer to what you want would to be to use Template Haskell to parse the source and modify it. Regular Haskell code simply cannot inspect a function in any way - that would violate referential transparency.
I am learning Haskell. I am trying to make a function that deletes integers from a list when they satisfy a certain predicate function f.
deleteif :: [Int] -> (Int -> Bool) -> [Int]
deleteif x f = if x == []
                 then []
                 else if head x == f
                        then deleteif((tail x) f)
                        else [head x] ++ deleteif((tail x) f)
I get the following errors :
function tail is applied to two arguments
'deleteif' is applied to too few arguments
The issue is that you don't use parentheses to call a function in Haskell. So you just need to use
if f (head x)
  then deleteif (tail x) f
  else [head x] ++ deleteif (tail x) f
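Putting that together, one possible fixed version of the whole function, keeping the question's if/then/else style:

deleteif :: [Int] -> (Int -> Bool) -> [Int]
deleteif x f = if x == []
                 then []
                 else if f (head x)
                        then deleteif (tail x) f
                        else [head x] ++ deleteif (tail x) f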
The problem is in deleteif((tail x) f): it parses as deleteif (tail x f), so tail gets two arguments, and deleteif is then applied to only one argument (the result of that call). What you want is deleteif (tail x) f.
Also, head x == f is wrong; you want f (head x).
You can use pattern matching and guards, and make it more generic:
deleteif :: [a] -> (a -> Bool) -> [a]
deleteif [] _ = []
deleteif (x:xs) f
  | f x       = deleteif xs f
  | otherwise = x : deleteif xs f
As already said, deleteif((tail x) f) is parsed as deleteif (tail x f), which means tail is applied to the two arguments x and f, and the result would then be passed on as the single argument to deleteif. What you want is deleteif (tail x) f, which is equivalent to (deleteif (tail x)) f and what most languages1 would write deleteif(tail x, f).
This parsing order may seem confusing initially, but it turns out to be really useful in practice. The general name for the technique is Currying.
For one thing, it allows you to write dense statements without needing many parentheses – in fact deleteif (tail x f) could also be written deleteif $ tail x f.
More importantly, because the arguments don't need to be "encased" in a single tuple, you don't need to supply them all at once but automatically get partial application when you apply to only one argument. For instance, you could use this function like this: deleteif [1,3,7,5,2,9,7] (>4) to yield [1,3,2]. This works by partially applying the function2 > to 4, leaving a single-argument predicate that selects which elements to delete from the list.
1Indeed, this style is possible in Haskell as well: just write the signatures of such multi-argument functions as deleteif :: ([Int], Int->Bool) -> [Int]. Or write uncurry deleteif (tail x, f). But it's definitely better to get used to the curried style!
2Actually, > is an infix operator, which behaves a bit differently: you can partially apply it to either side, i.e. you can also write deleteif [1,3,7,5,2,9,7] (4>) to get [7,5,9,7].
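To tie the footnotes together, a small illustrative snippet (assuming the corrected deleteif from above; keepSmall and deleteifUnc are made-up names):

-- Curried (idiomatic) style; (>4) is itself a partial application of >:
keepSmall :: [Int] -> [Int]
keepSmall xs = deleteif xs (>4)   -- deleteif [1,3,7,5,2,9,7] (>4) == [1,3,2]

-- Uncurried style from footnote 1:
deleteifUnc :: ([Int], Int -> Bool) -> [Int]
deleteifUnc = uncurry deleteif    -- deleteifUnc ([1,3,7,5,2,9,7], (4>)) == [7,5,9,7]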
So, I'm trying to implement a polyvariadic ZipWithN as described here. Unfortunately, Paczesiowa's code seems to have been compiled with outdated versions of both ghc and HList, so in the process of trying to understand how it works, I've also been porting it up to the most recent versions of both of those (ghc-7.8.3 and HList-0.3.4.1 at this time). It's been fun.
Anyways, I've run into a bug that Google isn't helping me fix for once, in the definition of an intermediary function curryN'. In concept, curryN' is simple: it takes a type-level natural number N (or, strictly speaking, a value of that type) and a function f whose first argument is an HList of length N, and returns an N-ary function that makes an HList out of its first N arguments and returns f applied to that HList. It's curry, but polyvariadic.
It uses three helper functions/classes:
The first is ResultType/resultType, as I've defined here. resultType takes a single function as an argument, and returns the type of that function after applying it to as many arguments as it will take. (Strictly speaking, again, it returns an undefined value of that type).
For example:
ghci> :t resultType (++)
resultType (++) :: [a]
ghci> :t resultType negate
resultType negate :: (ResultType a result, Num a) => result
(The latter case arises because if a happens to be a function of type x -> y, resultType would have to return y. So it's not perfect when applied to polymorphic functions.)
The second two are Eat/eat and MComp/mcomp, defined together (along with the broken curryN') in a single file, like this.
eat's first argument is a value whose type is a natural number N; it returns a function that takes N arguments and returns them combined into an HList:
ghci> :t eat (hSucc (hSucc hZero))
eat (hSucc (hSucc hZero)) :: x -> x1 -> HList '[x, x1]
ghci> eat (hSucc (hSucc hZero)) 5 "2"
H[5, "2"]
As far as I can tell it works perfectly. mcomp is a polyvariadic compose function. It takes two functions, f and g, where f takes some number of arguments N. It returns a function that takes N arguments, applies f to all of them, and then applies g to the result. (The argument order parallels (>>>) more than (.).)
ghci> :t (,,) `mcomp` show
(,,) `mcomp` show :: (Show c, Show b, Show a) => a -> b -> c -> [Char]
ghci> ((,,) `mcomp` show) 4 "str" 'c'
"(4,\"str\",'c')"
Like resultType, it "breaks" on functions whose return types are type variables, but since I only plan on using it on eat (whose ultimate return type is just an HList), it should work (Paczesiowa seems to think so, at least). And it does, if the first argument to eat is fixed:
\f -> eat (hSucc (hSucc hZero)) `mcomp` f
works fine.
curryN' however, is defined like this:
curryN' n f = eat n `mcomp` f
Trying to load this into ghci, however, gets this error:
Part3.hs:51:1:
    Could not deduce (Eat n '[] f0)
      arising from the ambiguity check for ‘curryN'’
    from the context (Eat n '[] f,
                      MComp f cp d result,
                      ResultType f cp)
      bound by the inferred type for ‘curryN'’:
                 (Eat n '[] f, MComp f cp d result, ResultType f cp) =>
                 Proxy n -> (cp -> d) -> result
      at Part3.hs:51:1-29
    The type variable ‘f0’ is ambiguous
    When checking that ‘curryN'’
      has the inferred type ‘forall f cp d result (n :: HNat).
                             (Eat n '[] f, MComp f cp d result, ResultType f cp) =>
                             Proxy n -> (cp -> d) -> result’
    Probable cause: the inferred type is ambiguous
Failed, modules loaded: Part1.
So clearly eat and mcomp don't play as nicely together as I would hope. Incidentally, this is significantly different from the kind of error that mcomp (+) (+1) gives, which complains about overlapping instances for MComp.
Anyway, trying to find information on this error didn't lead me to anything useful - with the biggest obstacle for my own debugging being that I have no idea what the type variable f0 even is, as it doesn't appear in any of the type signatures or contexts ghci infers.
My best guess is that mcomp is having trouble recursing through eat's polymorphic return type (even though what that is is fixed by a type-level natural number). But if that is the case, I don't know how I'd go about fixing it.
Additionally (and bizarrely to me), if I try to combine Part1.hs and Part2.hs into a single file, I still get an error...but a different one
Part3alt.hs:59:12:
    Overlapping instances for ResultType f0 cp
      arising from the ambiguity check for ‘curryN'’
    Matching givens (or their superclasses):
      (ResultType f cp)
        bound by the type signature for
                   curryN' :: (MComp f cp d result, Eat n '[] f, ResultType f cp) =>
                              Proxy n -> (cp -> d) -> result
          at Part3alt.hs:(59,12)-(60,41)
    Matching instances:
      instance result ~ x => ResultType x result
        -- Defined at Part3alt.hs:19:10
      instance ResultType y result => ResultType (x -> y) result
        -- Defined at Part3alt.hs:22:10
    (The choice depends on the instantiation of ‘cp, f0’)
    In the ambiguity check for:
      forall (n :: HNat) cp d result f.
      (MComp f cp d result, Eat n '[] f, ResultType f cp) =>
      Proxy n -> (cp -> d) -> result
    To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
    In the type signature for ‘curryN'’:
      curryN' :: (MComp f cp d result, Eat n [] f, ResultType f cp) =>
                 Proxy n -> (cp -> d) -> result
Failed, modules loaded: none.
Again with the mysterious f0 type variable. I'll admit that I'm a little bit over my head here with all this typehackery, so if anyone could help me figure out what exactly the problem here is, and, more importantly, how I can fix it (if it is, hopefully, possible), I'd be incredibly grateful.
Final note: the reason the two files here are called Part1 and Part3 is that Part2 contains some auxiliary functions used in zipWithN, but not curryN'. For the most part they work fine, but there are a couple of oddities that I might ask about later.
Instead of fmap, which applies a function to a value-in-a-functor:
fmap :: Functor f => (a -> b) -> f a -> f b
I needed a function where the functor has a function and the value is plain:
thing :: Functor f => f (a -> b) -> a -> f b
but I can't find one.
What is this pattern called, where I apply a function-in-a-functor (or in an applicative, or in a monad) to a plain value?
I've implemented it already, I just don't quite understand what I did and why there wasn't already such a function in the standard libraries.
You don't need Applicative for this; Functor will do just fine:
apply f x = fmap ($ x) f
-- or, expanded:
apply f x = fmap (\f' -> f' x) f
Interestingly, apply is actually a generalisation of flip; lambdabot replaces flip with this definition as one of its generalisations of standard Haskell, so that's a possible name, although a confusing one.
By the way, it's often worth trying Hayoo (which searches the entirety of Hackage, unlike Hoogle) to see what names a function is often given, and whether it's in any generic package. Searching for f (a -> b) -> a -> f b, it finds flip (in Data.Functor.Syntax, from the functors package) and ($#) (from the synthesizer package) as possible names. Still, I'd probably just use fmap ($ arg) f at the use site.
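For instance, a quick GHCi check of the apply defined above (purely illustrative):

ghci> let apply f x = fmap ($ x) f
ghci> apply (Just (+3)) 4
Just 7
ghci> apply [(+1), (*2)] 5
[6,10]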
As Niklas says, this is application in some applicative functor to a lifted value.
\f a -> f <*> pure a
:: Applicative f => f (a -> b) -> a -> f b
or more generally (?), using Category (.)
\f a -> f . pure a
:: (Applicative (cat a), Category cat) => cat b c -> b -> cat a c