Difference in function typing in Haskell

I've been playing around with basic functions in Haskell, and I'm a little confused about the difference between the following type declarations for a function f:
f :: Integer -> Integer
versus
f :: Integral n => n -> n
So far, I've treated both of these as identical, but I'm sure this isn't true. What is the difference?
Edit: In response to the first answer, I want to propose a similar example that is more along the lines of my question.
Consider the following declarations
f :: Num n => n -> n
or
f :: Num -> Num
What functionality does each offer?

Let's rename:
f :: Integer -> Integer
g :: (Integral n) => n -> n
I like to follow a fairly common practice of adding parentheses to the constraint section of the signature. It helps it stand out as different.
f :: Integer -> Integer is simple, it takes an integer and returns another integer.
As for g :: (Integral n) => n -> n: Integral is not a type itself, rather it's more like a predicate. Some types are Integral, others aren't. For example, Int is an Integral type, Double is not.
Here n is a type variable, and it can refer to any type. (Integral n) is a constraint on the type variable, which restricts what types it can refer to. So you could read it like this:
g takes a value of any type n and returns a value of that same type, provided that it is an Integral type.
If we examine the Integral typeclass:
ghci> :info Integral
class (Real a, Enum a) => Integral a where
  quot :: a -> a -> a
  rem :: a -> a -> a
  div :: a -> a -> a
  mod :: a -> a -> a
  quotRem :: a -> a -> (a, a)
  divMod :: a -> a -> (a, a)
  toInteger :: a -> Integer
  {-# MINIMAL quotRem, toInteger #-}
  -- Defined in ‘GHC.Real’
instance Integral Word -- Defined in ‘GHC.Real’
instance Integral Integer -- Defined in ‘GHC.Real’
instance Integral Int -- Defined in ‘GHC.Real’
We can see three built-in types that are Integral, which means that g simultaneously has three different types, depending on how it is used:
g :: Word -> Word
g :: Integer -> Integer
g :: Int -> Int
(And if you define another Integral type in the future, g will automatically work with that as well)
The Word -> Word variant is a good example, since Words cannot be negative. g, when given a non-negative machine-sized number, returns another non-negative machine-sized number, whereas f could return any Integer, including negative or gigantic ones.
Integral is a rather specific class. It's easier to see with Num, which has fewer methods and thus can represent more types:
h :: (Num a) => a -> a
This is also a generalization of f, that is, you could use h where something with f's type is expected. But h can also take a complex number, and it would then return a complex number.
The key with signatures like g's and h's is that they work on multiple types, as long as the return type is the same as the input type.
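To make this concrete, here is a minimal runnable sketch (the body of g is an assumption, chosen only to use Integral operations) of one definition being used at all three types:

import Data.Word (Word)

g :: (Integral n) => n -> n
g n = n `div` 2 + 1  -- any body built from Integral (and superclass) methods works

main :: IO ()
main = do
  print (g (10 :: Int))     -- 6
  print (g (10 :: Integer)) -- 6
  print (g (10 :: Word))    -- 6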

According to this link, the standard instances of the Integral type class are Integer and Int. Integer is a built-in type that acts like an unbounded mathematical integer. So given:
f :: Integral n => n -> n
n could be Int or Integer or any "custom" type that you define that is an instance of Integral. The use of type classes allows for type polymorphism.
f :: Num -> Num WILL NOT COMPILE because Num is not a type. Num has kind * -> Constraint: it takes a type and produces a constraint, whereas each side of a function arrow must be an ordinary type (a monotype) of kind *, such as Int or Integer. See the haskell wiki for a brief reference/jumping-off point about kinds. Haskell has a rich type system with higher-order types; used correctly, it is an extremely powerful tool, e.g. for type polymorphism.
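You can check this yourself in GHCi with the :kind command (newer GHC versions may print Type instead of *):

ghci> :kind Int
Int :: *
ghci> :kind Num
Num :: * -> Constraint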

f :: Integer -> Integer
f is a function from arbitrary-sized integers to arbitrary-sized integers.
f' :: Integral n => n -> n
f' is a polymorphic function from type n to type n. Any type n that is a member of the class Integral is allowed. Examples of types in Integral are Integer and Int (the fixed-precision integer type).
f'' :: Num n => n -> n
f'' is also a polymorphic function, but this time for the class Num. Num is a more generic numeric class with more members than Integral, and all types in Integral are also contained in Num. For f'', n can also be Double.

TL;DR
f :: Integer -> Integer works for one type: Integer.
f :: Integral n => n -> n works for Integer, Int, Word, Int32, Int64, Word8, and any arbitrary user-defined types that implement the Integral type class.
That's the short version. If you only care about one type, it's easier to type-check and more efficient to execute if you specify that one type. If you want multiple types, then the second way is probably the way to go.

Related

Haskell Fold implementation of `elemIndex`

I am looking at Haskell elemIndex function:
elemIndex :: Eq a => a -> [a] -> Maybe Int
What does Maybe mean in this definition? Sometimes when I call it, the output has a Just or a Nothing. What does that mean? How can I interpret this if I were to use folds?
First question:
What does it mean?
This means that the returned value is either an index (Int) or Nothing.
from the docs:
The elemIndex function returns the index of the first element in the given list which is equal (by ==) to the query element, or Nothing if there is no such element.
The second question:
How can I interpret this if I were to use folds?
I'm not sure there is enough context to the "were to use folds" part. But, there are at least 2 ways to use this function:
Case analysis, where you state what to return in each case:
case elemIndex x xs of
  Just i  -> f i       -- apply function f to the index
  Nothing -> undefined -- do something here, e.g. give a default value
Use the function maybe:
maybe defaultValue f (elemIndex x xs)
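For instance, here is a small runnable sketch (the list and the default value are made up) combining both styles:

import Data.List (elemIndex)

main :: IO ()
main = do
  let xs = [10, 20, 30] :: [Int]
  -- case analysis on the result
  case elemIndex 20 xs of
    Just i  -> putStrLn ("found at index " ++ show i)  -- prints "found at index 1"
    Nothing -> putStrLn "not found"
  -- the maybe eliminator, with -1 as the default value
  print (maybe (-1) id (elemIndex 99 xs))  -- prints -1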
Maybe is a sum type.
Sum type is any type that has multiple possible representations.
For example:
data Bool = False | True
Bool can be represented as either True or False. The same goes for Maybe.
data Maybe a = Nothing | Just a
The Maybe type encapsulates an optional value. A value of type Maybe a either contains a value of type a (represented as Just a), or it is empty (represented as Nothing)
elemIndex :: Eq a => a -> [a] -> Maybe Int
The elemIndex function returns the index of the first element in the given list which is equal (by ==) to the query element, or Nothing if there is no such element.
Let's compare it to the indexOf function found in other languages.
What are the possible values of this method?
The index of the element in the array, in case it was found (let's say 2).
-1, in case it was not found.
Another way to represent this:
Return a number in case it was found: Just 2.
Instead of returning a magic number like -1, we can return a value that represents the possibility of failure: Nothing.
Regarding "How can I interpret this if I were to use folds", I do not have enough information to understand the question.
Maybe is a type constructor.
Int is a type. Maybe Int is a type.
String is a type. Maybe String is a type.
For any type a, Maybe a is a type. Its values come in two varieties: either Nothing or Just x where x is a value of type a (we write: x :: a):
      x :: a
-----------------        ------------------
Just x :: Maybe a        Nothing :: Maybe a
In the first rule, the a in both the type of the value x :: a and the type of the value Just x :: Maybe a is the same. Thus if we know the type of x we know the type of Just x; and vice versa.
In the second rule, nothing in the value Nothing itself determines the a in its type. The determination will be made according to how that value is used, i.e. from the context of its usage, from its call site.
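Both points are visible in GHCi: the type of Just 'x' is fixed by 'x', while the a in Nothing's type stays open until the context pins it down:

ghci> :t Just 'x'
Just 'x' :: Maybe Char
ghci> :t Nothing
Nothing :: Maybe a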
As to the fold implementation of elemIndex, it could be for example
elemIndex_asFold :: Eq a => a -> [a] -> Maybe Int
elemIndex_asFold x0 xs = foldr g Nothing (zip [0..] xs)
  where
    g (i, x) r | x == x0   = Just i
               | otherwise = r
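With the corrected definition above, it behaves like the library's elemIndex (a quick check, assuming the definition just given):

ghci> elemIndex_asFold 'c' "abcd"
Just 2
ghci> elemIndex_asFold 'z' "abcd"
Nothing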

Representing Functions as Types

A function can be a highly nested structure:
function a(x) {
  return b(c(x), d(e(f(x), g())))
}
First, I'm wondering whether a function can have an instance; that is, whether the evaluation of the function is the instance of the function. In that sense, the type is the function, and the instance is its evaluation. If so, how would one model a function as a type (in some type-theory-oriented language like Haskell or Coq)?
It's almost like:
type a {
  field: x
  constructor b {
    constructor c {
      parameter: x
    },
    ...
  }
}
But I'm not sure if I'm on the right track. I know you can say a function has a [return] type. But I'm wondering if a function can be considered a type, and if so, how to model it as a type in a type-theory-oriented language, where it models the actual implementation of the function.
I think the problem is that types based directly on the implementation (let's call them "i-types") don't seem very useful, and we already have good ways of modelling them (called "programs" -- ha ha).
In your specific example, the full i-type of your function, namely:
type a {
  field: x
  constructor b {
    constructor c {
      parameter: x
    },
    constructor d {
      constructor e {
        constructor f {
          parameter: x
        }
        constructor g {
        }
      }
    }
  }
}
is just a verbose, alternative syntax for the implementation itself. That is, we could write this i-type (in a Haskell-like syntax) as:
itype a :: a x = b (c x) (d (e (f x) g))
On the other hand, we could convert your function implementation to Haskell term-level syntax directly to write it as:
a x = b (c x) (d (e (f x) g))
and the i-type and the implementation are exactly the same thing.
How would you use these i-types? The compiler might use them by deriving argument and return types to type-check the program. (Fortunately, there are well known algorithms, such as Algorithm W, for simultaneously deriving and type-checking argument and return types from i-types of this sort.) Programmers probably wouldn't use i-types directly -- they're too complicated to use for refactoring or reasoning about program behavior. They'd probably want to look at the types derived by the compiler for the arguments and return type.
In particular, "modelling" these i-types at the type level in Haskell doesn't seem productive. Haskell can already model them at the term level. Just write your i-types as a Haskell program:
import Data.Char (ord)  -- needed for ord below

a x = b (c x) (d (e (f x) g))
b s t = sqrt $ fromIntegral $ length (s ++ t)
c = show
d = reverse
e c ds = show (sum ds + fromIntegral (ord c))
f n = if even n then 'E' else 'O'
g = [1.5..5.5]
and don't run it. Congratulations, you've successfully modelled these i-types! You can even use GHCi to query derived argument and return types:
> :t a
a :: Floating a => Integer -> a -- "a" takes an Integer and returns a float
>
Now, you are perhaps imagining that there are situations where the implementation and i-type would diverge, maybe when you start introducing literal values. For example, maybe you feel like the function f above:
f n = if even n then 'E' else 'O'
should be assigned a type something like the following, that doesn't depend on the specific literal values:
type f {
  field: n
  if_then_else {
    constructor even { -- predicate
      parameter: n
    }
    literal Char -- then-branch
    literal Char -- else-branch
  }
}
Again, though, you'd be better off defining an arbitrary term-level Char, like:
someChar :: Char
someChar = undefined
and modeling this i-type at the term-level:
f n = if even n then someChar else someChar
Again, as long as you don't run the program, you've successfully modelled the i-type of f, can query its argument and return types, type-check it as part of a bigger program, etc.
I'm not clear exactly what you are aiming at, so I'll try to point at some related terms that you might want to read about.
A function has not only a return type, but a type that describes its arguments as well. So the (Haskell) type of f reads "f takes an Int and a Float, and returns a List of Floats."
f :: Int -> Float -> [Float]
f i x = replicate i x
Types can also describe much more of the specification of a function. Here, we might want the type to spell out that the length of the list will be the same as the first argument, or that every element of the list will be the same as the second argument. Length-indexed lists (often called Vectors) are a common first example of Dependent Types.
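To give a flavor of this, here is a minimal Haskell sketch of a length-indexed vector (all names are illustrative, and the DataKinds, GADTs, and KindSignatures extensions are assumed):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type records the length: mapping preserves it, and taking the
-- head of an empty vector is rejected at compile time.
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ VNil         = VNil
vmap f (VCons x xs) = VCons (f x) (vmap f xs)

vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x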
You might also be interested in functions that take types as arguments, and return types. These are sometimes called "type-level functions". In Coq or Idris, they can be defined the same way as more familiar functions. In Haskell, we usually implement them using Type Families, or using Type Classes with Functional Dependencies.
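For a tiny illustration of a type-level function in Haskell, here is a closed type family (the name Element is made up) that maps a container type to its element type; this assumes the TypeFamilies extension:

{-# LANGUAGE TypeFamilies #-}

-- Element maps a container type to its element type.
type family Element c where
  Element [a]       = a
  Element (Maybe a) = a

-- GHCi can reduce it with :kind!
-- ghci> :kind! Element [Int]
-- Element [Int] :: *
-- = Int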
Returning to the first part of your question, Beta Reduction is the process of filling in concrete values for each of the function's arguments. I've heard people describe expressions as "after reduction" or "fully reduced" to emphasize some stage in this process. This is similar to a function Call Site, but emphasizes the expression & arguments, rather than the surrounding context.

passing 2 arguments to a function in Haskell

In Haskell, I know that if I define a function like this add x y = x + y
then I can call it like this: add e1 e2. That call is equivalent to (add e1) e2,
which means that applying add to one argument e1 yields a new function which is then applied to the second argument e2.
That's the part I don't understand about Haskell. In other languages (like Dart), to do the same thing I would write this:
add(x) {
  return (y) => x + y;
}
I have to explicitly return a function. So does the "yields a new function which is then applied to the second argument" part happen automatically under the hood in Haskell? If so, what does that hidden function look like? Or do I just misunderstand Haskell?
In Haskell, everything is a value, so
add x y = x + y
is just syntactic sugar for:
add = \x -> \y -> x + y
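So the hidden function is just the partially applied one; a minimal self-contained sketch (the names are made up):

add :: Int -> Int -> Int
add = \x -> \y -> x + y

addFive :: Int -> Int
addFive = add 5           -- applying add to one argument yields a new function

main :: IO ()
main = print (addFive 7)  -- prints 12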
For more information: https://wiki.haskell.org/Currying :
In Haskell, all functions are considered curried: that is, all functions in Haskell take just single arguments.
This is mostly hidden in notation, and so may not be apparent to a new
Haskeller. Let's take the function
div :: Int -> Int -> Int
which performs integer division. The expression div 11 2
unsurprisingly evaluates to 5. But there's more that's going on than
immediately meets the untrained eye. It's a two-part process. First,
div 11
is evaluated and returns a function of type
Int -> Int
Then that resulting function is applied to the value 2, and yields 5.
You'll notice that the notation for types reflects this: you can read
Int -> Int -> Int
incorrectly as "takes two Ints and returns an Int", but what it's
really saying is "takes an Int and returns something of the type Int
-> Int" -- that is, it returns a function that takes an Int and returns an Int. (One can write the type as Int x Int -> Int if you
really mean the former -- but since all functions in Haskell are
curried, that's not legal Haskell. Alternatively, using tuples, you
can write (Int, Int) -> Int, but keep in mind that the tuple
constructor (,) itself can be curried.)
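The curried and tupled forms mentioned at the end of that quote are interconvertible with the standard curry and uncurry functions; a small sketch:

addCurried :: Int -> Int -> Int
addCurried x y = x + y

addUncurried :: (Int, Int) -> Int
addUncurried = uncurry addCurried  -- uncurry :: (a -> b -> c) -> (a, b) -> c

-- curry addUncurried 3 4 == 7
-- addUncurried (3, 4)    == 7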

non-monadic error handling in Haskell?

I was wondering if there is an elegant way to do non-monadic error handling in Haskell that is syntactically simpler than using plain Maybe or Either. What I wanted to deal with is non-IO exceptions such as in parsing, where you generate the exception yourself to let yourself know at a later point, e.g., something was wrong in the input string.
The reason I ask is that monads seem viral to me. If I want to use an exception-like mechanism to report non-critical errors in pure functions, I can always use Either and do case analysis on the result. Once I use a monad, it's cumbersome/not easy to extract the content of a monadic value and feed it to a function not using monadic values.
A deeper reason is that monads seem to be an overkill for many error-handling. One rationale for using monads as I learned is that monads allow us to thread through a state. But in the case of reporting an error, I don't see any need for threading states (except for the failure state, which I honestly don't know whether it's essential to use monads).
(
EDIT: as I just read, in a monad, each action can take advantage of the results of previous actions. But when reporting an error, it is often unnecessary to know the results of previous actions. So using monads here is potentially overkill. All that is needed in many cases is to abort and report failure on-site without knowing any prior state. Applicative seems like a less restrictive choice here to me.
In the specific example of parsing, are the exceptions/errors we raise ourselves really effectful in nature? If not, is there something even weaker than Applicative to model error handling?
)
So, is there a weaker/more general paradigm than monads that can be used to model error-reporting? I am now reading Applicative and trying to figure out if it's suitable. Just wanted to ask beforehand so that I don't miss the obvious.
A related question is whether there is a mechanism out there which simply wraps every basic type in, e.g., an Either String. The reason I ask is that all monads (or maybe functors) wrap a basic type in a type constructor. So if you want to change your non-exception-aware function to be exception-aware, you go from, e.g.,
f:: a -> a -- non-exception-aware
to
f':: a -> m a -- exception-aware
But then, this change breaks functional compositions that would otherwise work in the non-exception case. While you could do
f (f x)
you can't do
f' (f' x)
because of the wrapping. A probably naive way to solve the composability issue is to change f to:
f'' :: m a -> m a
I wonder if there is an elegant way of making error handling/reporting work along this line?
Thanks.
-- Edit ---
Just to clarify the question, take an example from http://mvanier.livejournal.com/5103.html, to make a simple function like
g' i j k = i / k + j / k
capable of handling division by zero error, the current way is to break down the expression term-wise, and compute each term in a monadic action (somewhat like rewriting in assembly language):
g' :: Int -> Int -> Int -> Either ArithmeticError Int
g' i j k = do
  q1 <- i `safe_divide` k
  q2 <- j `safe_divide` k
  return (q1 + q2)
Three actions would be necessary if (+) could also incur an error. I think two reasons for this complexity in the current approach are:
As the author of the tutorial pointed out, monads enforce a certain order of operations, which wasn't required in the original expression. That's where the non-monadic part of the question comes from (along with the "viral" feature of monads).
After the monadic computation, you don't have Ints; instead, you have Either ArithmeticError Int values, which you cannot add directly. The boilerplate code would multiply rapidly as the expression gets more complex than the addition of two terms. That's where the enclosing-everything-in-an-Either part of the question comes from.
In your first example, you want to compose a function f :: a -> m a with itself. Let's pick a specific a and m for the sake of discussion: Int -> Maybe Int.
Composing functions that can have errors
Okay, so as you point out, you cannot just do f (f x). Well, let's generalize this a little more to g (f x) (let's say we're given a g :: Int -> Maybe String to make things more concrete) and look at what you do need to do case-by-case:
f :: Int -> Maybe Int
f = ...
g :: Int -> Maybe String
g = ...
gComposeF :: Int -> Maybe String
gComposeF x =
  case f x of          -- The f call on the inside
    Nothing -> Nothing
    Just x' -> g x'    -- The g call on the outside
This is a bit verbose and, like you said, we would like to reduce the repetition. We can also notice a pattern: Nothing always goes to Nothing, and the x' gets taken out of Just x' and given to the composition. Also, note that instead of f x, we could take any Maybe Int value to make things even more general. So let's also pull our g out into an argument, so we can give this function any g:
bindMaybe :: Maybe Int -> (Int -> Maybe String) -> Maybe String
bindMaybe Nothing g = Nothing
bindMaybe (Just x') g = g x'
With this helper function, we can rewrite our original gComposeF like this:
gComposeF :: Int -> Maybe String
gComposeF x = bindMaybe (f x) g
This is getting pretty close to g . f, which is how you would compose those two functions if there wasn't the Maybe discrepancy between them.
Next, we can see that our bindMaybe function doesn't specifically need Int or String, so we can make this a little more useful:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing g = Nothing
bindMaybe (Just x') g = g x'
All we had to change, actually, was the type signature.
This already exists!
Now, bindMaybe actually already exists: it is the >>= method from the Monad type class!
(>>=) :: Monad m => m a -> (a -> m b) -> m b
If we substitute Maybe for m (since Maybe is an instance of Monad, we can do this) we get the same type as bindMaybe:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Let's take a look at the Maybe instance of Monad to be sure:
instance Monad Maybe where
  return x = Just x
  Nothing >>= f = Nothing
  Just x  >>= f = f x
Just like bindMaybe, except we also have an additional method that lets us put something into a "monadic context" (in this case, this just means wrapping it in a Just). Our original gComposeF looks like this:
gComposeF x = f x >>= g
There is also =<<, which is a flipped version of >>=, that lets this look a little more like the normal composition version:
gComposeF x = g =<< f x
There is also a built-in function for composing functions with types of the form a -> m b, called <=<:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
-- Specialized to Maybe, we get:
(<=<) :: (b -> Maybe c) -> (a -> Maybe b) -> a -> Maybe c
Now this really looks like function composition!
gComposeF = g <=< f -- This is very similar to g . f, which is how we "normally" compose functions
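Putting the pieces together, here is a small runnable sketch; the bodies of f and g are assumptions, made up only so there is something to run:

import Control.Monad ((<=<))

f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe String
g x = if even x then Just (show x) else Nothing

gComposeF :: Int -> Maybe String
gComposeF = g <=< f

main :: IO ()
main = do
  print (gComposeF 3)  -- Just "6"
  print (gComposeF 0)  -- Nothing: f fails and the failure propagates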
When we can simplify even more
As you mentioned in your question, using do notation to convert a simple division function into one which properly handles errors is a bit harder to read and more verbose.
Let's look at this a little more carefully, but let's start with a simpler problem (this is actually a simpler problem than the one we looked at in the first sections of this answer): We already have a function, say that multiplies by 10, and we want to compose it with a function that gives us a Maybe Int. We can immediately simplify this a little bit by saying that what we really want to do is take a "regular" function (such as our multiplyByTen :: Int -> Int) and we want to give it a Maybe Int (i.e., a value that won't exist in the case of an error). We want a Maybe Int to come back too, because we want the error to propagate.
For concreteness, we will say that we have some function maybeCount :: String -> Maybe Int (maybe it divides 5 by the number of times the word "compose" appears in the String and rounds down; it doesn't really matter what it does specifically) and we want to apply multiplyByTen to the result of that.
We'll start with the same kind of case analysis:
multiplyByTen :: Int -> Int
multiplyByTen x = x * 10
maybeCount :: String -> Maybe Int
maybeCount = ...
countThenMultiply :: String -> Maybe Int
countThenMultiply str =
  case maybeCount str of
    Nothing -> Nothing
    Just x  -> Just (multiplyByTen x)
We can, again, do a similar "pulling out" of multiplyByTen to generalize this further:
overMaybe :: (Int -> Int) -> Maybe Int -> Maybe Int
overMaybe f mstr =
  case mstr of
    Nothing -> Nothing
    Just x  -> Just (f x)
These types also can be more general:
overMaybe :: (a -> b) -> Maybe a -> Maybe b
Note that we just needed to change the type signature, just like last time.
Our countThenMultiply can then be rewritten:
countThenMultiply str = overMaybe multiplyByTen (maybeCount str)
This function also already exists!
This is fmap from Functor!
fmap :: Functor f => (a -> b) -> f a -> f b
-- Specializing f to Maybe:
fmap :: (a -> b) -> Maybe a -> Maybe b
and, in fact, the definition of the Maybe instance is exactly the same as well. This lets us apply any "normal" function to a Maybe value and get a Maybe value back, with any failure automatically propagated.
There is also a handy infix operator synonym for fmap: (<$>) = fmap. This will come in handy later. This is what it would look like if we used this synonym:
countThenMultiply str = multiplyByTen <$> maybeCount str
What if we have more Maybes?
Maybe we have a "normal" function of multiple arguments that we need to apply to multiple Maybe values. As you have in your question, we could do this with Monad and do notation if we were so inclined, but we don't actually need the full power of Monad. We need something in between Functor and Monad.
Let's look at the division example you gave. We want to convert g' to use safeDivide :: Int -> Int -> Either ArithmeticError Int. The "normal" g' looks like this:
g' i j k = i / k + j / k
What we would really like to do is something like this:
g' i j k = (safeDivide i k) + (safeDivide j k)
Well, we can get close with Functor:
fmap (+) (safeDivide i k) :: Either ArithmeticError (Int -> Int)
The type of this, by the way, is analogous to Maybe (Int -> Int). The Either ArithmeticError part just tells us that our errors give us information in the form of ArithmeticError values instead of only being Nothing. It could help to mentally replace Either ArithmeticError with Maybe for now.
Well, this is sort of like what we want, but we need a way to apply the function "inside" the Either ArithmeticError (Int -> Int) to Either ArithmeticError Int.
Our case analysis would look like this:
eitherApply :: Either ArithmeticError (Int -> Int) -> Either ArithmeticError Int -> Either ArithmeticError Int
eitherApply ef ex =
  case ef of
    Left err -> Left err
    Right f ->
      case ex of
        Left err' -> Left err'
        Right x   -> Right (f x)
(As a side note, the second case can be simplified with fmap)
If we have this function, then we can do this:
g' i j k = eitherApply (fmap (+) (safeDivide i k)) (safeDivide j k)
This still doesn't look great, but let's go with it for now.
It turns out eitherApply also already exists: it is (<*>) from Applicative. If we use this, we can arrive at:
g' i j k = (<*>) (fmap (+) (safeDivide i k)) (safeDivide j k)
-- This is the same as
g' i j k = fmap (+) (safeDivide i k) <*> safeDivide j k
You may remember from earlier that there is an infix synonym for fmap called <$>. If we use that, the whole thing looks like:
g' i j k = (+) <$> safeDivide i k <*> safeDivide j k
This looks strange at first, but you get used to it. You can think of <$> and <*> as "context-sensitive whitespace." What I mean is: if we have some regular function f :: String -> String -> Int and we apply it to normal String values, we have:
firstString, secondString :: String
result :: Int
result = f firstString secondString
If we instead have two (for example) Maybe String values, we can apply f :: String -> String -> Int to both of them like this:
firstString', secondString' :: Maybe String
result :: Maybe Int
result = f <$> firstString' <*> secondString'
The difference is that instead of whitespace, we add <$> and <*>. This generalizes to more arguments in this way (given f :: A -> B -> C -> D -> E):
-- When we apply normal values (x :: A, y :: B, z :: C, w :: D):
result :: E
result = f x y z w
-- When we apply values that have an Applicative instance, for example x' :: Maybe A, y' :: Maybe B, z' :: Maybe C, w' :: Maybe D:
result' :: Maybe E
result' = f <$> x' <*> y' <*> z' <*> w'
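To tie this back to the division example, here is the whole thing as a runnable sketch; the ArithmeticError type and safeDivide are assumptions standing in for the tutorial's definitions:

data ArithmeticError = DivideByZero deriving Show

safeDivide :: Int -> Int -> Either ArithmeticError Int
safeDivide _ 0 = Left DivideByZero
safeDivide n d = Right (n `div` d)

g' :: Int -> Int -> Int -> Either ArithmeticError Int
g' i j k = (+) <$> safeDivide i k <*> safeDivide j k

main :: IO ()
main = do
  print (g' 10 20 5)  -- Right 6
  print (g' 10 20 0)  -- Left DivideByZero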
A very important note
Note that none of the above code mentioned Functor, Applicative, or Monad. We just used their methods as though they were any other regular helper functions.
The only difference is that these particular helper functions can work on many different types, but we don't even have to think about that if we don't want to. If we really want to, we can just think of fmap, <*>, >>= etc in terms of their specialized types, if we are using them on a specific type (which we are, in all of this).
The reason I ask is that monads seem to be viral to me.
Such viral character is actually well-suited to exception handling, as it forces you to recognize your functions may fail and to deal with the failure cases.
Once I use a monad, it's cumbersome/not easy to extract the content of
a monadic value and feed it to a function not using monadic values.
You don't have to extract the value. Taking Maybe as a simple example, very often you can just write plain functions to deal with success cases, and then use fmap to apply them to your Maybe values and maybe/fromMaybe to deal with failures and eliminate the Maybe wrapping. Maybe is a monad, but that doesn't oblige you to use the monadic interface or do notation all the time. In general, there is no real opposition between "monadic" and "pure".
One rationale for using monads as I learned is that monads allow us to
thread through a state.
That is just one of many use cases. The Maybe monad allows you to skip any remaining computations in a bind chain after failure. It does not thread any sort of state.
So, is there a weaker/more general paradigm than monads that can be
used to model error-reporting? I am now reading Applicative and trying
to figure out if it's suitable.
You can certainly chain Maybe computations using the Applicative instance. (*>) is equivalent to (>>), and there is no equivalent to (>>=) since Applicative is less powerful than Monad. While it is generally a good thing not to use more power than you actually need, I am not sure if using Applicative is any simpler in the sense you aim at.
While you could do f (f x) you can't do f' (f' x)
You can write f' <=< f' $ x though:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
You may find this answer about (>=>), and possibly the other discussions in that question, interesting.

Function Composition VS Function Application

Can anyone give an example of function composition?
Is this the definition of the function composition operator?
(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)
This shows that it takes two functions and returns a function, but I remember someone expressing the logic in English like
boy is human -> ali is boy -> ali is human
How is this logic related to function composition?
What is meant by the strong binding of function application and composition, and which one binds more strongly than the other?
Please help.
Thanks.
(Edit 1: I missed a couple components of your question the first time around; see the bottom of my answer.)
The way to think about this sort of statement is to look at the types. The form of argument that you have is called a syllogism; however, I think you are misremembering something. There are many different kinds of syllogisms, and yours, as far as I can tell, does not correspond to function composition. Let's consider a kind of syllogism that does:
If it is sunny out, I will get hot.
If I get hot, I will go swimming.
Therefore, if it is sunny out, I will go swimming.
This is called a hypothetical syllogism. In logic terms, we would write it as follows: let S stand for the proposition "it is sunny out", let H stand for the proposition "I will get hot", and let W stand for the proposition "I will go swimming". Writing α → β for "α implies β", and writing ∴ for "therefore", we can translate the above to:
S → H
H → W
∴ S → W
Of course, this works if we replace S, H, and W with any arbitrary α, β, and γ. Now, this should look familiar. If we change the implication arrow → to the function arrow ->, this becomes
a -> b
b -> c
∴ a -> c
And lo and behold, we have the three components of the type of the composition operator! To think about this as a logical syllogism, you might consider the following:
If I have a value of type a, I can produce a value of type b.
If I have a value of type b, I can produce a value of type c.
Therefore, if I have a value of type a, I can produce a value of type c.
This should make sense: in f . g, the existence of a function g :: a -> b tells you that premise 1 is true, and f :: b -> c tells you that premise 2 is true. Thus, you can conclude the final statement, for which the function f . g :: a -> c is a witness.
I'm not entirely sure what your syllogism translates to. It's almost an instance of modus ponens, but not quite. Modus ponens arguments take the following form:
If it is raining, then I will get wet.
It is raining.
Therefore, I will get wet.
Writing R for "it is raining", and W for "I will get wet", this gives us the logical form
R → W
R
∴ W
Replacing the implication arrow with the function arrow gives us the following:
a -> b
a
∴ b
And this is simply function application, as we can see from the type of ($) :: (a -> b) -> a -> b. If you want to think of this as a logical argument, it might be of the form
If I have a value of type a, I can produce a value of type b.
I have a value of type a.
Therefore, I can produce a value of type b.
Here, consider the expression f x. The function f :: a -> b is a witness of the truth of proposition 1; the value x :: a is a witness of the truth of proposition 2; and therefore the result can be of type b, proving the conclusion. It's exactly what we found from the proof.
Now, your original syllogism takes the following form:
All boys are human.
Ali is a boy.
Therefore, Ali is human.
Let's translate this to symbols. Bx will denote that x is a boy; Hx will denote that x is human; a will denote Ali; and ∀x. φ says that φ is true for all values of x. Then we have
∀x. Bx → Hx
Ba
∴ Ha
This is almost modus ponens, but it requires instantiating the forall. While logically valid, I'm not sure how to interpret it at the type-system level; if anybody wants to help out, I'm all ears. One guess would be a rank-2 type like (forall x. B x -> H x) -> B a -> H a, but I'm almost sure that that's wrong. Another guess would be a simpler type like (B x -> H x) -> B Int -> H Int, where Int stands for Ali, but again, I'm almost sure it's wrong. Again: if you know, please let me know too!
And one last note. Looking at things this way—following the connection between proofs and programs—eventually leads to the deep magic of the Curry-Howard isomorphism, but that's a more advanced topic. (It's really cool, though!)
Edit 1: You also asked for an example of function composition. Here's one example. Suppose I have a list of people's middle names. I need to construct a list of all the middle initials, but to do that, I first have to exclude every non-existent middle name. It is easy to exclude everyone whose middle name is null; we just include everyone whose middle name is not null with filter (\mn -> not $ null mn) middleNames. Similarly, we can easily get at someone's middle initial with head, and so we simply need map head filteredMiddleNames in order to get the list. In other words, we have the following code:
allMiddleInitials :: [Char]
allMiddleInitials = map head $ filter (\mn -> not $ null mn) middleNames
But this is irritatingly specific; we really want a middle-initial–generating function. So let's change this into one:
getMiddleInitials :: [String] -> [Char]
getMiddleInitials middleNames = map head $ filter (\mn -> not $ null mn) middleNames
Now, let's look at something interesting. The function map has type (a -> b) -> [a] -> [b], and since head has type [a] -> a, map head has type [[a]] -> [a]. Similarly, filter has type (a -> Bool) -> [a] -> [a], and so filter (\mn -> not $ null mn) has type [a] -> [a]. Thus, we can get rid of the parameter, and instead write
-- The type is also more general
getFirstElements :: [[a]] -> [a]
getFirstElements = map head . filter (not . null)
And you see that we have a bonus instance of composition: not has type Bool -> Bool, and null has type [a] -> Bool, so not . null has type [a] -> Bool: it first checks if the given list is empty, and then returns whether it isn't. This transformation, by the way, changed the function into point-free style; that is, the resulting function has no explicit variables.
You also asked about "strong binding". What I think you're referring to is the precedence of the . and $ operators (and possibly function application as well). In Haskell, just as in arithmetic, certain operators have higher precedence than others, and thus bind more tightly. For instance, in the expression 1 + 2 * 3, this is parsed as 1 + (2 * 3). This is because in Haskell, the following declarations are in force:
infixl 6 +
infixl 7 *
The higher the number (the precedence level), the more tightly the operator binds. Function application effectively has infinite precedence, so it binds the most tightly: the expression f x % g y will parse as (f x) % (g y) for any operator %. The . (composition) and $ (application) operators have the following fixity declarations:
infixr 9 .
infixr 0 $
Precedence levels range from zero to nine, so what this says is that the . operator binds more tightly than any other (except function application), and the $ binds less tightly. Thus, the expression f . g $ h will parse as (f . g) $ h; and in fact, for most operators, f . g % h will be (f . g) % h and f % g $ h will be f % (g $ h). (The only exceptions are the rare few other zero or nine precedence operators.)
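A quick sketch of how those fixities play out in practice (negate and (+ 1) are arbitrary example functions):

-- Because (.) binds at 9 and ($) at 0, these two parse the same way:
example1, example2 :: Int
example1 = negate . (+ 1) $ 10    -- parsed as (negate . (+ 1)) $ 10
example2 = (negate . (+ 1)) 10    -- both evaluate to -11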