This is a bit of a theoretical question. We can define fx but seemingly not fx':
Definition fx {A} (x:A) (f:A->Type) (g:Type->f x): f x := g (f x).
Definition fx' {A} (x:A) (f:A->Type): f x.
In a way, this makes sense, as one cannot prove from f and x alone that f has been (or will be) applied to x. But we can apply f to x to get something of type Type:
assert (h := f x).
This seems puzzling: one can apply f to x but still can't obtain a witness y: f x that one has done so.
The only explanation I can think of is this: as a type, f x is an application; as a term, it's just a type. We cannot infer a past application from a type; similarly, we cannot infer a future application from a function and its potential argument. As for (the instance of) applying itself, it isn't a stage in a proof, so we cannot have a witness of it. But I'm just guessing. The question:
Is it possible to define fx'? If yes, how? If no, why not? (Please give a theoretical explanation.)
First, a direct answer to your question: no, it is not possible to define fx'. According to your snippet, fx' should have type
forall (A : Type) (x : A) (f : A -> Type), f x.
It is not hard to see that the existence of fx' implies a contradiction, as the following script shows.
Section Contra.
Variable fx' : forall (A : Type) (x : A) (f : A -> Type), f x.
Lemma contra : False.
Proof.
exact (fx' unit tt (fun x => False)).
Qed.
End Contra.
What happened here? The type of fx' says that for any family of types f indexed by a type A, we can produce an element of f x, where x is arbitrary. In particular, we can take f to be a constant family of types (fun x => False), in which case f x is the same thing as False. (Note that False, besides being a member of Prop, is also a member of Type.)
Now, given your question, I think you are slightly confused by the meaning of types and propositions in Coq. You said:
This seems puzzling: one can apply f to x but still can't obtain a witness y: f x that one has done so.
The fact that we can apply f to x simply means that the expression f x has a valid type, which, in this case, is Type. In other words, Coq shows that f x : Type. But having a type is a different thing from being inhabited: when f and x are arbitrary, it is not possible to build a term y such that y : f x. In particular, we have False : Type, but there is no way we can build a term p with p : False, because that would mean that Coq's logic is inconsistent.
I'm currently revising the chapter on lambda (\) expressions in Haskell. I was just wondering if someone could please help explain how this:
const :: a -> b -> a
const x _ = x (this part)
gets rewritten into this:
const :: a -> (b -> a)
const x = \_ -> x (how did it become like this?)
The signatures a -> b -> a and a -> (b -> a) are parsed exactly the same, much like the arithmetic expressions 1 - 2 - 3 and (1 - 2) - 3 are the same: the -> operator is right-associative, whereas the - operator is left-associative, i.e. the parser effectively puts the parentheses in the right place if not explicitly specified. In other words, A -> B -> C is defined to be A -> (B -> C).
If we explicitly write a -> (b -> a), then we do this to put focus on the fact that we're dealing with curried functions, i.e. that we can accept the arguments one-by-one instead of all at once, but all multi-parameter functions in Haskell are curried anyway.
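For instance, here is a small runnable sketch of partial application (alwaysFive is just an illustrative name):
alwaysFive :: b -> Int
alwaysFive = const 5   -- partially applied: const is still waiting for its second argument

main :: IO ()
main = print (alwaysFive "ignored")   -- prints 5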
As for why const x _ = x and const x = \_ -> x are equivalent: well first, to be pedantic they're not equivalent, see bottom of this answer. But let's ignore that for now.
Both lambdas and (single) function clauses are just ways to define functions. Like,
sqPl1 :: Int -> Int
sqPl1 x = x^2 + 1
does the same as
sqPl1 = \x -> x^2 + 1
It's just a different syntax. Some would say the f x = ... notation is just syntactic sugar for binding x in a lambda, i.e. f = \x -> ..., because Haskell is based on lambda calculus and in lambda calculus lambdas are the only way to write functions. (That's a bit of an oversimplification though.)
I said they're not quite equivalent. I'm referring to two things here:
You can have local definitions whose scope outlasts a parameter binding. For example if I write
foo x y = ec * y
  where ec = {- expensive computation depending on `x` -}
then ec will always be computed from scratch whenever foo is applied. However, if I write it as
foo x = \y -> ec * y
  where ec = {- expensive computation depending on `x` -}
then I can partially apply foo to one argument, and the resulting single-argument function can be evaluated with many different y values without needing to compute ec again. For example map (foo 3) [0..90] would be faster with the lambda definition. (On the flip side, if the stored value takes up a lot of memory it may be preferable to not keep it around; it depends.)
Haskell has a notion of constant applicative forms. It's a subtle topic that I won't go into here, but that can be affected by whether you write a function as a lambda or with clauses or expand arguments.
A function can be a highly nested structure:
function a(x) {
  return b(c(x), d(e(f(x), g())))
}
First, I'm wondering whether a function has an instance; that is, whether the evaluation of the function is the instance of the function. In that sense, the type would be the function, and the instance would be an evaluation of it. If so, then how does one model a function as a type (in some type-theory-oriented language like Haskell or Coq)?
It's almost like:
type a {
  field: x
  constructor b {
    constructor c {
      parameter: x
    },
    ...
  }
}
But I'm not sure if I'm on the right track. I know you can say a function has a [return] type. But I'm wondering if a function can be considered a type, and if so, how to model it as a type in a type-theory-oriented language, where it models the actual implementation of the function.
I think the problem is that types based directly on the implementation (let's call them "i-types") don't seem very useful, and we already have good ways of modelling them (called "programs" -- ha ha).
In your specific example, the full i-type of your function, namely:
type a {
  field: x
  constructor b {
    constructor c {
      parameter: x
    },
    constructor d {
      constructor e {
        constructor f {
          parameter: x
        }
        constructor g {
        }
      }
    }
  }
}
is just a verbose, alternative syntax for the implementation itself. That is, we could write this i-type (in a Haskell-like syntax) as:
itype a :: a x = b (c x) (d (e (f x) g))
On the other hand, we could convert your function implementation to Haskell term-level syntax directly to write it as:
a x = b (c x) (d (e (f x) g))
and the i-type and the implementation are exactly the same thing.
How would you use these i-types? The compiler might use them by deriving argument and return types to type-check the program. (Fortunately, there are well known algorithms, such as Algorithm W, for simultaneously deriving and type-checking argument and return types from i-types of this sort.) Programmers probably wouldn't use i-types directly -- they're too complicated to use for refactoring or reasoning about program behavior. They'd probably want to look at the types derived by the compiler for the arguments and return type.
In particular, "modelling" these i-types at the type level in Haskell doesn't seem productive. Haskell can already model them at the term level. Just write your i-types as a Haskell program:
import Data.Char (ord)  -- needed by e below

a x = b (c x) (d (e (f x) g))
b s t = sqrt $ fromIntegral $ length (s ++ t)
c = show
d = reverse
e c ds = show (sum ds + fromIntegral (ord c))
f n = if even n then 'E' else 'O'
g = [1.5..5.5]
and don't run it. Congratulations, you've successfully modelled these i-types! You can even use GHCi to query derived argument and return types:
> :t a
a :: Floating a => Integer -> a -- "a" takes an Integer and returns a float
>
Now, you are perhaps imagining that there are situations where the implementation and i-type would diverge, maybe when you start introducing literal values. For example, maybe you feel like the function f above:
f n = if even n then 'E' else 'O'
should be assigned a type something like the following, that doesn't depend on the specific literal values:
type f {
  field: n
  if_then_else {
    constructor even { -- predicate
      parameter: n
    }
    literal Char -- then-branch
    literal Char -- else-branch
  }
}
Again, though, you'd be better off defining an arbitrary term-level Char, like:
someChar :: Char
someChar = undefined
and modeling this i-type at the term-level:
f n = if even n then someChar else someChar
Again, as long as you don't run the program, you've successfully modelled the i-type of f, can query its argument and return types, type-check it as part of a bigger program, etc.
I'm not clear exactly what you are aiming at, so I'll try to point at some related terms that you might want to read about.
A function has not only a return type, but a type that describes its arguments as well. So the (Haskell) type of f reads "f takes an Int and a Float, and returns a List of Floats."
f :: Int -> Float -> [Float]
f i x = replicate i x
Types can also describe much more of the specification of a function. Here, we might want the type to spell out that the length of the list will be the same as the first argument, or that every element of the list will be the same as the second argument. Length-indexed lists (often called Vectors) are a common first example of Dependent Types.
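To make that concrete, here is a minimal sketch of length-indexed vectors in Haskell; the Nat and Vec names are illustrative rather than from any particular library:
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers.
data Nat = Z | S Nat

-- A list whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type alone guarantees exactly three elements.
three :: a -> Vec ('S ('S ('S 'Z))) a
three x = VCons x (VCons x (VCons x VNil))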
You might also be interested in functions that take types as arguments, and return types. These are sometimes called "type-level functions". In Coq or Idris, they can be defined the same way as more familiar functions. In Haskell, we usually implement them using Type Families, or using Type Classes with Functional Dependencies.
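For example, here is a sketch of a type-level function written as a closed type family (the Element name is made up for illustration):
{-# LANGUAGE TypeFamilies #-}

-- A function from types to types: Element [Int] reduces to Int.
type family Element c where
  Element [a]       = a
  Element (Maybe a) = a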
Returning to the first part of your question, Beta Reduction is the process of filling in concrete values for each of the function's arguments. I've heard people describe expressions as "after reduction" or "fully reduced" to emphasize some stage in this process. This is similar to a function Call Site, but emphasizes the expression & arguments, rather than the surrounding context.
I was wondering if there is an elegant way to do non-monadic error handling in Haskell that is syntactically simpler than using plain Maybe or Either. What I want to deal with is non-IO exceptions, such as in parsing, where you generate the exception yourself to signal at a later point that, e.g., something was wrong in the input string.
The reason I ask is that monads seem to be viral to me. If I wanted to use an exception or exception-like mechanism to report a non-critical error in pure functions, I could always use Either and do case analysis on the result. Once I use a monad, it's cumbersome/not easy to extract the content of a monadic value and feed it to a function not using monadic values.
A deeper reason is that monads seem to be overkill for much error handling. One rationale for using monads, as I learned it, is that they allow us to thread a state through a computation. But in the case of reporting an error, I don't see any need for threading states (except for the failure state, which I honestly don't know whether it's essential to use monads for).
(
EDIT: as I just read, in a monad, each action can take advantage of the results of previous actions. But when reporting an error, it is often unnecessary to know the results of previous actions. So there is potential overkill here in using monads: all that is needed in many cases is to abort and report failure on-site without knowing any prior state. Applicative seems to be a less restrictive choice here to me.
In the specific example of parsing, are the exceptions/errors we raise ourselves really effectful in nature? If not, is there something even weaker than Applicative to model error handling?
)
So, is there a weaker/more general paradigm than monads that can be used to model error-reporting? I am now reading Applicative and trying to figure out if it's suitable. Just wanted to ask beforehand so that I don't miss the obvious.
A related question about this is whether there is a mechanism out there which simply encloses every basic type with, e.g., an Either String. The reason I ask here is that all monads (or maybe functors) enclose a basic type with a type constructor. So if you want to change your non-exception-aware function to be exception-aware, you go from, e.g.,
f :: a -> a -- non-exception-aware
to
f' :: a -> m a -- exception-aware
But then, this change breaks functional compositions that would otherwise work in the non-exception case. While you could do
f (f x)
you can't do
f' (f' x)
because of the enclosure. A probably naive way to solve the composability issue is to change f to:
f'' :: m a -> m a
I wonder if there is an elegant way of making error handling/reporting work along this line?
Thanks.
-- Edit ---
Just to clarify the question, take an example from http://mvanier.livejournal.com/5103.html. To make a simple function like
g' i j k = i / k + j / k
capable of handling a division-by-zero error, the current way is to break the expression down term-wise and compute each term in a monadic action (somewhat like rewriting in assembly language):
g' :: Int -> Int -> Int -> Either ArithmeticError Int
g' i j k = do
  q1 <- i `safe_divide` k
  q2 <- j `safe_divide` k
  return (q1 + q2)
Three actions would be necessary if (+) can also incur an error. I think two reasons for this complexity in current approach are:
As the author of the tutorial pointed out, monads enforce a certain order of operations, which wasn't required in the original expression. That's where the non-monadic part of the question comes from (along with the "viral" feature of monads).
After the monadic computation, you don't have Ints; instead, you have Either ArithmeticError Int values, which you cannot add directly. The boilerplate code would multiply rapidly as the expression gets more complex than the addition of two terms. That's where the enclosing-everything-in-an-Either part of the question comes from.
In your first example, you want to compose a function f :: a -> m a with itself. Let's pick a specific a and m for the sake of discussion: Int -> Maybe Int.
Composing functions that can have errors
Okay, so as you point out, you cannot just do f (f x). Well, let's generalize this a little more to g (f x) (let's say we're given a g :: Int -> Maybe String to make things more concrete) and look at what you do need to do case-by-case:
f :: Int -> Maybe Int
f = ...
g :: Int -> Maybe String
g = ...
gComposeF :: Int -> Maybe String
gComposeF x =
  case f x of            -- The f call on the inside
    Nothing -> Nothing
    Just x' -> g x'      -- The g call on the outside
This is a bit verbose and, like you said, we would like to reduce the repetition. We can also notice a pattern: Nothing always goes to Nothing, and the x' gets taken out of Just x' and given to the composition. Also, note that instead of f x, we could take any Maybe Int value to make things even more general. So let's also pull our g out into an argument, so we can give this function any g:
bindMaybe :: Maybe Int -> (Int -> Maybe String) -> Maybe String
bindMaybe Nothing g = Nothing
bindMaybe (Just x') g = g x'
With this helper function, we can rewrite our original gComposeF like this:
gComposeF :: Int -> Maybe String
gComposeF x = bindMaybe (f x) g
This is getting pretty close to g . f, which is how you would compose those two functions if there wasn't the Maybe discrepancy between them.
Next, we can see that our bindMaybe function doesn't specifically need Int or String, so we can make this a little more useful:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing g = Nothing
bindMaybe (Just x') g = g x'
All we had to change, actually, was the type signature.
This already exists!
Now, bindMaybe actually already exists: it is the >>= method from the Monad type class!
(>>=) :: Monad m => m a -> (a -> m b) -> m b
If we substitute Maybe for m (since Maybe is an instance of Monad, we can do this) we get the same type as bindMaybe:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Let's take a look at the Maybe instance of Monad to be sure:
instance Monad Maybe where
  return x = Just x

  Nothing >>= f = Nothing
  Just x  >>= f = f x
Just like bindMaybe, except we also have an additional method that lets us put something into a "monadic context" (in this case, this just means wrapping it in a Just). Our original gComposeF looks like this:
gComposeF x = f x >>= g
There is also =<<, which is a flipped version of >>=, that lets this look a little more like the normal composition version:
gComposeF x = g =<< f x
There is also a builtin function for composing functions with types of the form a -> m b called <=<:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
-- Specialized to Maybe, we get:
(<=<) :: (b -> Maybe c) -> (a -> Maybe b) -> a -> Maybe c
Now this really looks like function composition!
gComposeF = g <=< f -- This is very similar to g . f, which is how we "normally" compose functions
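Here is the whole thing as a runnable sketch; the bodies of f and g are placeholders I made up, since the originals were left abstract:
import Control.Monad ((<=<))

f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing    -- placeholder body

g :: Int -> Maybe String
g x = if even x then Just (show x) else Nothing  -- placeholder body

gComposeF :: Int -> Maybe String
gComposeF = g <=< f

main :: IO ()
main = do
  print (gComposeF 3)     -- Just "6"
  print (gComposeF (-1))  -- Nothing: the failure propagates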
When we can simplify even more
As you mentioned in your question, using do notation to convert a simple division function into one which properly handles errors is a bit harder to read and more verbose.
Let's look at this a little more carefully, but let's start with a simpler problem (this is actually a simpler problem than the one we looked at in the first sections of this answer): we already have a function, say one that multiplies by 10, and we want to compose it with a function that gives us a Maybe Int. We can immediately simplify this a little bit by saying that what we really want to do is take a "regular" function (such as our multiplyByTen :: Int -> Int) and give it a Maybe Int (i.e., a value that won't exist in the case of an error). We want a Maybe Int to come back too, because we want the error to propagate.
For concreteness, we will say that we have some function maybeCount :: String -> Maybe Int (maybe it divides 5 by the number of times the word "compose" appears in the String and rounds down; it doesn't really matter what it does specifically, though) and we want to apply multiplyByTen to the result of that.
We'll start with the same kind of case analysis:
multiplyByTen :: Int -> Int
multiplyByTen x = x * 10
maybeCount :: String -> Maybe Int
maybeCount = ...
countThenMultiply :: String -> Maybe Int
countThenMultiply str =
  case maybeCount str of
    Nothing -> Nothing
    Just x  -> Just (multiplyByTen x)
We can, again, do a similar "pulling out" of multiplyByTen to generalize this further:
overMaybe :: (Int -> Int) -> Maybe Int -> Maybe Int
overMaybe f mstr =
  case mstr of
    Nothing -> Nothing
    Just x  -> Just (f x)
These types also can be more general:
overMaybe :: (a -> b) -> Maybe a -> Maybe b
Note that we just needed to change the type signature, just like last time.
Our countThenMultiply can then be rewritten:
countThenMultiply str = overMaybe multiplyByTen (maybeCount str)
This function also already exists!
This is fmap from Functor!
fmap :: Functor f => (a -> b) -> f a -> f b
-- Specializing f to Maybe:
fmap :: (a -> b) -> Maybe a -> Maybe b
and, in fact, the definition of the Maybe instance is exactly the same as well. This lets us apply any "normal" function to a Maybe value and get a Maybe value back, with any failure automatically propagated.
There is also a handy infix operator synonym for fmap: (<$>) = fmap. This will come in handy later. This is what it would look like if we used this synonym:
countThenMultiply str = multiplyByTen <$> maybeCount str
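Put together as a runnable sketch (the body of maybeCount is an invented placeholder, since it deliberately doesn't matter):
multiplyByTen :: Int -> Int
multiplyByTen x = x * 10

-- Placeholder: fails on an empty String, counts words otherwise.
maybeCount :: String -> Maybe Int
maybeCount s = if null s then Nothing else Just (length (words s))

countThenMultiply :: String -> Maybe Int
countThenMultiply str = multiplyByTen <$> maybeCount str

main :: IO ()
main = do
  print (countThenMultiply "compose all the things")  -- Just 40
  print (countThenMultiply "")                        -- Nothing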
What if we have more Maybes?
Maybe we have a "normal" function of multiple arguments that we need to apply to multiple Maybe values. As you have in your question, we could do this with Monad and do notation if we were so inclined, but we don't actually need the full power of Monad. We need something in between Functor and Monad.
Let's look at the division example you gave. We want to convert g' to use safeDivide :: Int -> Int -> Either ArithmeticError Int. The "normal" g' looks like this:
g' i j k = i / k + j / k
What we would really like to do is something like this:
g' i j k = (safeDivide i k) + (safeDivide j k)
Well, we can get close with Functor:
fmap (+) (safeDivide i k) :: Either ArithmeticError (Int -> Int)
The type of this, by the way, is analogous to Maybe (Int -> Int). The Either ArithmeticError part just tells us that our errors give us information in the form of ArithmeticError values instead of only being Nothing. It could help to mentally replace Either ArithmeticError with Maybe for now.
Well, this is sort of like what we want, but we need a way to apply the function "inside" the Either ArithmeticError (Int -> Int) to Either ArithmeticError Int.
Our case analysis would look like this:
eitherApply :: Either ArithmeticError (Int -> Int) -> Either ArithmeticError Int -> Either ArithmeticError Int
eitherApply ef ex =
  case ef of
    Left err -> Left err
    Right f ->
      case ex of
        Left err' -> Left err'
        Right x   -> Right (f x)
(As a side note, the second case can be simplified with fmap)
If we have this function, then we can do this:
g' i j k = eitherApply (fmap (+) (safeDivide i k)) (safeDivide j k)
This still doesn't look great, but let's go with it for now.
It turns out eitherApply also already exists: it is (<*>) from Applicative. If we use this, we can arrive at:
g' i j k = (<*>) (fmap (+) (safeDivide i k)) (safeDivide j k)
-- This is the same as
g' i j k = fmap (+) (safeDivide i k) <*> safeDivide j k
You may remember from earlier that there is an infix synonym for fmap called <$>. If we use that, the whole thing looks like:
g' i j k = (+) <$> safeDivide i k <*> safeDivide j k
This looks strange at first, but you get used to it. You can think of <$> and <*> as being "context sensitive whitespace." What I mean is, if we have some regular function f :: String -> String -> Int and we apply it to normal String values we have:
firstString, secondString :: String
result :: Int
result = f firstString secondString
If instead we have two (for example) Maybe String values, we can apply f to both of them like this:
firstString', secondString' :: Maybe String
result :: Maybe Int
result = f <$> firstString' <*> secondString'
The difference is that instead of whitespace, we add <$> and <*>. This generalizes to more arguments in this way (given f :: A -> B -> C -> D -> E):
-- When we apply normal values (x :: A, y :: B, z :: C, w :: D):
result :: E
result = f x y z w
-- When we apply values that have an Applicative instance, for example x' :: Maybe A, y' :: Maybe B, z' :: Maybe C, w' :: Maybe D:
result' :: Maybe E
result' = f <$> x' <*> y' <*> z' <*> w'
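Putting the pieces together, here is a runnable sketch of the division example; the safeDivide body is my assumption, modeled on the tutorial's safe_divide:
data ArithmeticError = DivideByZero deriving Show

-- Assumed implementation: fail on division by zero.
safeDivide :: Int -> Int -> Either ArithmeticError Int
safeDivide _ 0 = Left DivideByZero
safeDivide i k = Right (i `div` k)

g' :: Int -> Int -> Int -> Either ArithmeticError Int
g' i j k = (+) <$> safeDivide i k <*> safeDivide j k

main :: IO ()
main = do
  print (g' 10 6 2)  -- Right 8
  print (g' 10 6 0)  -- Left DivideByZero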
A very important note
Note that none of the above code mentioned Functor, Applicative, or Monad. We just used their methods as though they are any other regular helper functions.
The only difference is that these particular helper functions can work on many different types, but we don't even have to think about that if we don't want to. If we really want to, we can just think of fmap, <*>, >>= etc in terms of their specialized types, if we are using them on a specific type (which we are, in all of this).
The reason I ask is that monads seem to be viral to me.
Such viral character is actually well-suited to exception handling, as it forces you to recognize your functions may fail and to deal with the failure cases.
Once I use a monad, it's cumbersome/not easy to extract the content of
a monadic value and feed it to a function not using monadic values.
You don't have to extract the value. Taking Maybe as a simple example, very often you can just write plain functions to deal with success cases, and then use fmap to apply them to your Maybe values and maybe/fromMaybe to deal with failures and eliminate the Maybe wrapping. Maybe is a monad, but that doesn't oblige you to use the monadic interface or do notation all the time. In general, there is no real opposition between "monadic" and "pure".
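A small sketch of that style, with illustrative names:
import Data.Maybe (fromMaybe)

-- A plain function for the success case; nothing monadic about it.
pretty :: Int -> String
pretty n = "got " ++ show n

report :: Maybe Int -> String
report m = fromMaybe "nothing there" (fmap pretty m)

main :: IO ()
main = do
  putStrLn (report (Just 3))  -- got 3
  putStrLn (report Nothing)   -- nothing there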
One rationale for using monads as I learned is that monads allow us to
thread through a state.
That is just one of many use cases. The Maybe monad allows you to skip any remaining computations in a bind chain after failure. It does not thread any sort of state.
So, is there a weaker/more general paradigm than monads that can be
used to model error-reporting? I am now reading Applicative and trying
to figure out if it's suitable.
You can certainly chain Maybe computations using the Applicative instance. (*>) is equivalent to (>>), and there is no equivalent to (>>=) since Applicative is less powerful than Monad. While it is generally a good thing not to use more power than you actually need, I am not sure if using Applicative is any simpler in the sense you aim at.
While you could do f (f x) you can't do f' (f' x)
You can write f' <=< f' $ x though:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
You may find this answer about (>=>), and possibly the other discussions in that question, interesting.
Specifically, inspired by J's conjugation operator (g&.f = (f inverse)(g)(f)):
I need a way to augment functions with additional information. The obvious way is to use an ADT. Something like:
data Isomorphism a b = Iso (a -> b) (b -> a)

(Iso f _) `isoApp` x = f x
But the need for an application function really hurts code readability when most of the time you just want the forward function.
What would be very useful is a class:
class Applyable a b c | a b -> c where
  apply :: a -> b -> c
(I think the b could be made implicit with existential quantifiers but I'm not comfortable enough to be sure I wouldn't get the signature wrong)
Now the apply would be made implicit so you could just write
f x
as you would any other function. Ex:
instance Applyable (a -> b) a b where
  apply f x = f x

instance Applyable (Isomorphism a b) a b where
  apply (Iso f _) x = f x

inverse :: Isomorphism a b -> Isomorphism b a
inverse (Iso f g) = Iso g f
then you could write something like:
conjugate :: (Applyable g b b) => Isomorphism a b -> g -> a -> a
f `conjugate` g = (inverse f) . g . f
Ideally the type signature could be inferred
However, these semantics seem complicated, as you would also need to modify (.) to support Applyable rather than functions. Is there any way to simply trick the type system into treating Applyable datatypes as functions for all normal purposes?
Is there a fundamental reason that this is impossible / a bad idea?
As far as I know, function application is possibly the one thing in the entire Haskell language that you cannot override.
You can, however, devise some sort of operator for this. Admittedly f # x isn't quite as nice as f x, but it's better than f `isoApp` x.
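For instance, here is a sketch of such an operator built on the Applyable class from the question (assuming the usual extensions for multi-parameter classes):
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Applyable f a b | f a -> b where
  apply :: f -> a -> b

data Isomorphism a b = Iso (a -> b) (b -> a)

instance Applyable (a -> b) a b where
  apply f x = f x

instance Applyable (Isomorphism a b) a b where
  apply (Iso f _) x = f x

infixl 9 #
(#) :: Applyable f a b => f -> a -> b
f # x = apply f x

-- usage: Iso (+ 1) (subtract 1) # 5  evaluates to 6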
Can anyone give an example of function composition?
Is this the definition of the function composition operator?
(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)
This shows that it takes two functions and returns a function, but I remember someone expressed the logic in English like
boy is human -> ali is boy -> ali is human
How is this logic related to function composition?
What is meant by the strong binding of function application and composition, and which of them binds more strongly than the other?
Please help.
Thanks.
(Edit 1: I missed a couple components of your question the first time around; see the bottom of my answer.)
The way to think about this sort of statement is to look at the types. The form of argument that you have is called a syllogism; however, I think you are mis-remembering something. There are many different kinds of syllogisms, and yours, as far as I can tell, does not correspond to function composition. Let's consider a kind of syllogism that does:
If it is sunny out, I will get hot.
If I get hot, I will go swimming.
Therefore, if it is sunny out, I will go swimming.
This is called a hypothetical syllogism. In logic terms, we would write it as follows: let S stand for the proposition "it is sunny out", let H stand for the proposition "I will get hot", and let W stand for the proposition "I will go swimming". Writing α → β for "α implies β", and writing ∴ for "therefore", we can translate the above to:
S → H
H → W
∴ S → W
Of course, this works if we replace S, H, and W with any arbitrary α, β, and γ. Now, this should look familiar. If we change the implication arrow → to the function arrow ->, this becomes
a -> b
b -> c
∴ a -> c
And lo and behold, we have the three components of the type of the composition operator! To think about this as a logical syllogism, you might consider the following:
If I have a value of type a, I can produce a value of type b.
If I have a value of type b, I can produce a value of type c.
Therefore, if I have a value of type a, I can produce a value of type c.
This should make sense: in f . g, the existence of a function g :: a -> b tells you that premise 1 is true, and f :: b -> c tells you that premise 2 is true. Thus, you can conclude the final statement, for which the function f . g :: a -> c is a witness.
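As a concrete sketch of this syllogism in code (all names illustrative):
g :: Int -> Bool       -- premise 1: a -> b
g = even

f :: Bool -> String    -- premise 2: b -> c
f b = if b then "even" else "odd"

h :: Int -> String     -- conclusion: a -> c, witnessed by f . g
h = f . g

main :: IO ()
main = putStrLn (h 4)  -- even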
I'm not entirely sure what your syllogism translates to. It's almost an instance of modus ponens, but not quite. Modus ponens arguments take the following form:
If it is raining, then I will get wet.
It is raining.
Therefore, I will get wet.
Writing R for "it is raining", and W for "I will get wet", this gives us the logical form
R → W
R
∴ W
Replacing the implication arrow with the function arrow gives us the following:
a -> b
a
∴ b
And this is simply function application, as we can see from the type of ($) :: (a -> b) -> a -> b. If you want to think of this as a logical argument, it might be of the form
If I have a value of type a, I can produce a value of type b.
I have a value of type a.
Therefore, I can produce a value of type b.
Here, consider the expression f x. The function f :: a -> b is a witness of the truth of proposition 1; the value x :: a is a witness of the truth of proposition 2; and therefore the result can be of type b, proving the conclusion. It's exactly what we found from the proof.
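And the modus ponens version as a sketch (again, illustrative names):
f :: Int -> Bool    -- premise 1: a -> b
f = even

x :: Int            -- premise 2: a
x = 4

conclusion :: Bool  -- the conclusion b, witnessed by f $ x
conclusion = f $ x

main :: IO ()
main = print conclusion  -- True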
Now, your original syllogism takes the following form:
All boys are human.
Ali is a boy.
Therefore, Ali is human.
Let's translate this to symbols. Bx will denote that x is a boy; Hx will denote that x is human; a will denote Ali; and ∀x. φ says that φ is true for all values of x. Then we have
∀x. Bx → Hx
Ba
∴ Ha
This is almost modus ponens, but it requires instantiating the forall. While logically valid, I'm not sure how to interpret it at the type-system level; if anybody wants to help out, I'm all ears. One guess would be a rank-2 type like (forall x. B x -> H x) -> B a -> H a, but I'm almost sure that that's wrong. Another guess would be a simpler type like (B x -> H x) -> B Int -> H Int, where Int stands for Ali, but again, I'm almost sure it's wrong. Again: if you know, please let me know too!
And one last note. Looking at things this way—following the connection between proofs and programs—eventually leads to the deep magic of the Curry-Howard isomorphism, but that's a more advanced topic. (It's really cool, though!)
Edit 1: You also asked for an example of function composition. Here's one example. Suppose I have a list of people's middle names. I need to construct a list of all the middle initials, but to do that, I first have to exclude every non-existent middle name. It is easy to exclude everyone whose middle name is null; we just include everyone whose middle name is not null with filter (\mn -> not $ null mn) middleNames. Similarly, we can easily get at someone's middle initial with head, and so we simply need map head filteredMiddleNames in order to get the list. In other words, we have the following code:
allMiddleInitials :: [Char]
allMiddleInitials = map head $ filter (\mn -> not $ null mn) middleNames
But this is irritatingly specific; we really want a middle-initial–generating function. So let's change this into one:
getMiddleInitials :: [String] -> [Char]
getMiddleInitials middleNames = map head $ filter (\mn -> not $ null mn) middleNames
Now, let's look at something interesting. The function map has type (a -> b) -> [a] -> [b], and since head has type [a] -> a, map head has type [[a]] -> [a]. Similarly, filter has type (a -> Bool) -> [a] -> [a], and so filter (\mn -> not $ null mn) has type [a] -> [a]. Thus, we can get rid of the parameter, and instead write
-- The type is also more general
getFirstElements :: [[a]] -> [a]
getFirstElements = map head . filter (not . null)
And you see that we have a bonus instance of composition: not has type Bool -> Bool, and null has type [a] -> Bool, so not . null has type [a] -> Bool: it first checks if the given list is empty, and then returns whether it isn't. This transformation, by the way, changed the function into point-free style; that is, the resulting function has no explicit variables.
You also asked about "strong binding". What I think you're referring to is the precedence of the . and $ operators (and possibly function application as well). In Haskell, just as in arithmetic, certain operators have higher precedence than others, and thus bind more tightly. For instance, in the expression 1 + 2 * 3, this is parsed as 1 + (2 * 3). This is because in Haskell, the following declarations are in force:
infixl 6 +
infixl 7 *
The higher the number (the precedence level), the more tightly the operator binds; operands attach to higher-precedence operators first. Function application effectively has infinite precedence, so it binds the most tightly: the expression f x % g y will parse as (f x) % (g y) for any operator %. The . (composition) and $ (application) operators have the following fixity declarations:
infixr 9 .
infixr 0 $
Precedence levels range from zero to nine, so what this says is that the . operator binds more tightly than any other (except function application), and the $ binds less tightly. Thus, the expression f . g $ h will parse as (f . g) $ h; and in fact, for most operators, f . g % h will be (f . g) % h and f % g $ h will be f % (g $ h). (The only exceptions are the rare few other zero or nine precedence operators.)
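As a quick illustration of these fixities:
ex1, ex2 :: Int
ex1 = negate . abs $ 5 - 8    -- parses as (negate . abs) $ (5 - 8)
ex2 = (negate . abs) (5 - 8)  -- the fully parenthesized equivalent

main :: IO ()
main = print (ex1 == ex2)     -- True; both are -3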