Function Composition vs Function Application

Can anyone give an example of function composition?
This is the definition of the function composition operator:
(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)
This shows that it takes two functions and returns a function, but I remember someone expressing the logic in English like
boy is human -> ali is boy -> ali is human
How is this logic related to function composition?
Also, what is meant by the "strong binding" of function application and composition, and which one binds more strongly than the other?
Please help.
Thanks.

(Edit 1: I missed a couple components of your question the first time around; see the bottom of my answer.)
The way to think about this sort of statement is to look at the types. The form of argument that you have is called a syllogism; however, I think you are mis-remembering something. There are many different kinds of syllogisms, and yours, as far as I can tell, does not correspond to function composition. Let's consider a kind of syllogism that does:
If it is sunny out, I will get hot.
If I get hot, I will go swimming.
Therefore, if it is sunny out, I will go swimming.
This is called a hypothetical syllogism. In logic terms, we would write it as follows: let S stand for the proposition "it is sunny out", let H stand for the proposition "I will get hot", and let W stand for the proposition "I will go swimming". Writing α → β for "α implies β", and writing ∴ for "therefore", we can translate the above to:
S → H
H → W
∴ S → W
Of course, this works if we replace S, H, and W with any arbitrary α, β, and γ. Now, this should look familiar. If we change the implication arrow → to the function arrow ->, this becomes
a -> b
b -> c
∴ a -> c
And lo and behold, we have the three components of the type of the composition operator! To think about this as a logical syllogism, you might consider the following:
If I have a value of type a, I can produce a value of type b.
If I have a value of type b, I can produce a value of type c.
Therefore, if I have a value of type a, I can produce a value of type c.
This should make sense: in f . g, the existence of a function g :: a -> b tells you that premise 1 is true, and f :: b -> c tells you that premise 2 is true. Thus, you can conclude the final statement, for which the function f . g :: a -> c is a witness.
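To make this concrete, here is a small sketch of the syllogism as code; the names `celsiusToF` and `describe` are invented for illustration:

```haskell
-- `celsiusToF` plays the role of g :: a -> b (premise 1),
-- and `describe` plays the role of f :: b -> c (premise 2).
celsiusToF :: Double -> Double
celsiusToF c = c * 9 / 5 + 32

describe :: Double -> String
describe f = if f > 86 then "hot" else "mild"

-- The composed function is a witness of the conclusion a -> c.
describeCelsius :: Double -> String
describeCelsius = describe . celsiusToF
```

Neither premise alone gets you from `Double` (Celsius) to `String`; composing the two witnesses does.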
I'm not entirely sure what your syllogism translates to. It's almost an instance of modus ponens, but not quite. Modus ponens arguments take the following form:
If it is raining, then I will get wet.
It is raining.
Therefore, I will get wet.
Writing R for "it is raining", and W for "I will get wet", this gives us the logical form
R → W
R
∴ W
Replacing the implication arrow with the function arrow gives us the following:
a -> b
a
∴ b
And this is simply function application, as we can see from the type of ($) :: (a -> b) -> a -> b. If you want to think of this as a logical argument, it might be of the form
If I have a value of type a, I can produce a value of type b.
I have a value of type a.
Therefore, I can produce a value of type b.
Here, consider the expression f x. The function f :: a -> b is a witness of the truth of proposition 1; the value x :: a is a witness of the truth of proposition 2; and therefore the result can be of type b, proving the conclusion. It's exactly what we found from the proof.
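A tiny sketch of this, with an invented function standing in for the first premise:

```haskell
-- `increment` witnesses the premise a -> b; the literal 3 witnesses a.
increment :: Int -> Int
increment n = n + 1

-- Application (here via ($), though plain juxtaposition works the same)
-- produces the witness of the conclusion b.
conclusion :: Int
conclusion = increment $ 3
```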
Now, your original syllogism takes the following form:
All boys are human.
Ali is a boy.
Therefore, Ali is human.
Let's translate this to symbols. Bx will denote that x is a boy; Hx will denote that x is human; a will denote Ali; and ∀x. φ says that φ is true for all values of x. Then we have
∀x. Bx → Hx
Ba
∴ Ha
This is almost modus ponens, but it requires instantiating the forall. While logically valid, I'm not sure how to interpret it at the type-system level; if anybody wants to help out, I'm all ears. One guess would be a rank-2 type like (forall x. B x -> H x) -> B a -> H a, but I'm almost sure that that's wrong. Another guess would be a simpler type like (B x -> H x) -> B Int -> H Int, where Int stands for Ali, but again, I'm almost sure it's wrong. Again: if you know, please let me know too!
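For what it's worth, the rank-2 guess can at least be made to typecheck; whether it is the "right" interpretation is a separate question. The `B` and `H` newtypes below are invented stand-ins for the predicates:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype B x = B x  -- stand-in for "x is a boy"
newtype H x = H x  -- stand-in for "x is human"

-- Given a proof that *all* boys are human, and a proof that Ali is a boy,
-- produce a proof that Ali is human. Instantiating the forall is just
-- using the polymorphic argument at type `a`.
syllogism :: (forall x. B x -> H x) -> B a -> H a
syllogism allBoysHuman ali = allBoysHuman ali
```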
And one last note. Looking at things this way—following the connection between proofs and programs—eventually leads to the deep magic of the Curry-Howard isomorphism, but that's a more advanced topic. (It's really cool, though!)
Edit 1: You also asked for an example of function composition. Here's one example. Suppose I have a list of people's middle names. I need to construct a list of all the middle initials, but to do that, I first have to exclude every non-existent middle name. It is easy to exclude everyone whose middle name is null; we just include everyone whose middle name is not null with filter (\mn -> not $ null mn) middleNames. Similarly, we can easily get at someone's middle initial with head, and so we simply need map head filteredMiddleNames in order to get the list. In other words, we have the following code:
allMiddleInitials :: [Char]
allMiddleInitials = map head $ filter (\mn -> not $ null mn) middleNames
But this is irritatingly specific; we really want a middle-initial–generating function. So let's change this into one:
getMiddleInitials :: [String] -> [Char]
getMiddleInitials middleNames = map head $ filter (\mn -> not $ null mn) middleNames
Now, let's look at something interesting. The function map has type (a -> b) -> [a] -> [b], and since head has type [a] -> a, map head has type [[a]] -> [a]. Similarly, filter has type (a -> Bool) -> [a] -> [a], and so filter (\mn -> not $ null mn) has type [a] -> [a]. Thus, we can get rid of the parameter, and instead write
-- The type is also more general
getFirstElements :: [[a]] -> [a]
getFirstElements = map head . filter (not . null)
And you see that we have a bonus instance of composition: not has type Bool -> Bool, and null has type [a] -> Bool, so not . null has type [a] -> Bool: it first checks if the given list is empty, and then returns whether it isn't. This transformation, by the way, changed the function into point-free style; that is, the resulting function has no explicit variables.
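For instance, with some invented sample data (empty strings standing for missing middle names):

```haskell
getFirstElements :: [[a]] -> [a]
getFirstElements = map head . filter (not . null)

-- Only the non-empty names contribute an initial.
sampleInitials :: String
sampleInitials = getFirstElements ["James", "", "Lee", ""]
```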
You also asked about "strong binding". What I think you're referring to is the precedence of the . and $ operators (and possibly function application as well). In Haskell, just as in arithmetic, certain operators have higher precedence than others, and thus bind more tightly. For instance, in the expression 1 + 2 * 3, this is parsed as 1 + (2 * 3). This is because in Haskell, the following declarations are in force:
infixl 6 +
infixl 7 *
The higher the number (the precedence level), the more tightly the operator binds. Function application effectively has infinite precedence, so it binds the most tightly: the expression f x % g y will parse as (f x) % (g y) for any operator %. The . (composition) and $ (application) operators have the following fixity declarations:
infixr 9 .
infixr 0 $
Precedence levels range from zero to nine, so what this says is that the . operator binds more tightly than any other (except function application), and the $ binds less tightly. Thus, the expression f . g $ h will parse as (f . g) $ h; and in fact, for most operators, f . g % h will be (f . g) % h and f % g $ h will be f % (g $ h). (The only exceptions are the rare few other zero or nine precedence operators.)
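A small sketch of this parsing rule in action; both definitions below are the same expression, once with and once without explicit parentheses:

```haskell
-- Parses as (negate . (*2)) $ (3 + 1), because (.) binds at
-- precedence 9 and ($) at precedence 0.
viaDollar :: Int
viaDollar = negate . (*2) $ 3 + 1

explicit :: Int
explicit = (negate . (*2)) (3 + 1)
```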

Related

How is one implementation of the constant function equal to a version using a lambda?

I'm currently revising the chapter on lambda (\) expressions in Haskell. I was just wondering if someone could please help explain how this:
const :: a → b → a
const x _ = x (this part)
gets defined as this:
const :: a → (b → a)
const x = λ_ → x (how did it become like this?)
The signatures a -> b -> a and a -> (b -> a) are parsed exactly the same, much like the arithmetic expressions 1 - 2 - 3 and (1 - 2) - 3 are the same: the -> operator is right-associative, whereas the - operator is left associative, i.e. the parser effectively puts the parentheses in the right place if not explicitly specified. In other words, A -> B -> C is defined to be A -> (B -> C).
If we explicitly write a -> (b -> a), then we do this to put focus on the fact that we're dealing with curried functions, i.e. that we can accept the arguments one-by-one instead of all at once, but all multi-parameter functions in Haskell are curried anyway.
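A minimal sketch of that curried reading, using a local copy of const to keep it self-contained:

```haskell
-- Written to emphasize the one-argument-at-a-time view.
const' :: a -> (b -> a)
const' x = \_ -> x

-- Partial application: supplying only the first argument yields a function.
always7 :: b -> Int
always7 = const' 7
```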
As for why const x _ = x and const x = \_ -> x are equivalent: well first, to be pedantic they're not equivalent, see bottom of this answer. But let's ignore that for now.
Both lambdas and (single) function clauses are just ways to define functions. Like,
sqPl1 :: Int -> Int
sqPl1 x = x^2 + 1
does the same as
sqPl1 = \x -> x^2 + 1
It's just a different syntax. Some would say the f x = ... notation is just syntactic sugar for binding x in a lambda, i.e. f = \x -> ..., because Haskell is based on lambda calculus and in lambda calculus lambdas are the only way to write functions. (That's a bit of an oversimplification though.)
I said they're not quite equivalent. I'm referring to two things here:
You can have local definitions whose scope outlasts a parameter binding. For example if I write
foo x y = ec * y
where ec = {- expensive computation depending on `x` -}
then ec will always be computed from scratch whenever foo is applied. However, if I write it as
foo x = \y -> ec * y
where ec = {- expensive computation depending on `x` -}
then I can partially apply foo to one argument, and the resulting single-argument function can be evaluated with many different y values without needing to compute ec again. For example map (foo 3) [0..90] would be faster with the lambda definition. (On the flip side, if the stored value takes up a lot of memory it may be preferable not to keep it around; it depends.)
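A sketch of the two shapes side by side; `x * x` is just a cheap stand-in for the expensive computation, so the sharing difference won't be observable here, only the structure:

```haskell
-- Clause form: `ec` is recomputed on every full application.
fooClause :: Int -> Int -> Int
fooClause x y = ec * y
  where ec = x * x  -- stand-in for an expensive computation

-- Lambda form: `ec` is computed once per partial application `fooLambda x`
-- and shared by every subsequent `y`.
fooLambda :: Int -> Int -> Int
fooLambda x = \y -> ec * y
  where ec = x * x
```

Both versions compute the same results; they differ only in when `ec` gets evaluated.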
Haskell has a notion of constant applicative forms. It's a subtle topic that I won't go into here, but that can be affected by whether you write a function as a lambda or with clauses or expand arguments.

Coq: Proving an application

This is a bit theoretical question. We can define fx but seemingly not fx':
Function fx {A} (x:A) (f:A->Type) (g:Type->f x): f x := g (f x).
Definition fx' {A} (x:A) (f:A->Type): f x.
In a way, this makes sense, as one cannot prove from the f and x that f has been (or will be) applied to x. But we can apply f to x to get something of type Type:
assert (h := f x).
This seems puzzling: one can apply f to x but still can't obtain a witness y: f x that he has done so.
The only explanation I can think of is this: as a type, f x is an application, as a term, it's just a type. We cannot infer a past application from a type; similarly, we cannot infer a future application from a function and its potential argument. As for (the instance of) applying itself, it isn't a stage in a proof, so we cannot have a witness of it. But I'm just guessing. The question:
Is it possible to define fx'? If yes, how; if no, why (please give a theoretical explanation)
First, a direct answer to your question: no, it is not possible to define fx'. According to your snippet, fx' should have type
forall (A : Type) (x : A) (f : A -> Type), f x.
It is not hard to see that the existence of fx' implies a contradiction, as the following script shows.
Section Contra.
Variable fx' : forall (A : Type) (x : A) (f : A -> Type), f x.
Lemma contra : False.
Proof.
exact (fx' unit tt (fun x => False)).
Qed.
End Contra.
What happened here? The type of fx' says that for any family of types f indexed by a type A, we can produce an element of f x, where x is arbitrary. In particular, we can take f to be a constant family of types (fun x => False), in which case f x is the same thing as False. (Note that False, besides being a member of Prop, is also a member of Type.)
Now, given your question, I think you are slightly confused by the meaning of types and propositions in Coq. You said:
This seems puzzling: one can apply f to x but still can't obtain a
witness y: f x that he has done so.
The fact that we can apply f to x simply means that the expression f x has a valid type, which, in this case, is Type. In other words, Coq shows that f x : Type. But having a type is a different thing from being inhabited: when f and x are arbitrary, it is not possible to build a term y such that y : f x. In particular, we have False : Type, but there is no way we can build a term p with p : False, because that would mean that Coq's logic is inconsistent.

non-monadic error handling in Haskell?

I was wondering if there is an elegant way to do non-monadic error handling in Haskell that is syntactically simpler than using plain Maybe or Either. What I wanted to deal with is non-IO exceptions such as in parsing, where you generate the exception yourself to let yourself know at a later point, e.g., something was wrong in the input string.
The reason I ask is that monads seem to be viral to me. If I wanted to use an exception or exception-like mechanism to report non-critical errors in pure functions, I can always use Either and do case analysis on the result. Once I use a monad, it's cumbersome/not easy to extract the content of a monadic value and feed it to a function not using monadic values.
A deeper reason is that monads seem to be overkill for much error handling. One rationale for using monads, as I learned, is that monads allow us to thread through a state. But in the case of reporting an error, I don't see any need for threading states (except for the failure state, which I honestly don't know whether it's essential to use monads for).
(
EDIT: as I just read, in a monad, each action can take advantage of results from the previous actions. But in reporting an error, it is often unnecessary to know the results of the previous actions. So there is a potential overkill here in using monads. All that is needed in many cases is to abort and report failure on-site without knowing any prior state. Applicative seems to be a less restrictive choice here to me.
In the specific example of parsing, are the exceptions/errors we raise ourselves really effectual in nature? If not, is there something even weaker than Applicative to model error handling?
)
So, is there a weaker/more general paradigm than monads that can be used to model error-reporting? I am now reading Applicative and trying to figure out if it's suitable. Just wanted to ask beforehand so that I don't miss the obvious.
A related question is whether there is a mechanism out there which simply encloses every basic type with, e.g., an Either String. The reason I ask is that all monads (or maybe functors) enclose a basic type with a type constructor. So if you want to change your non-exception-aware function to be exception aware, you go from, e.g.,
f:: a -> a -- non-exception-aware
to
f':: a -> m a -- exception-aware
But then, this change breaks functional compositions that would otherwise work in the non-exception case. While you could do
f (f x)
you can't do
f' (f' x)
because of the enclosure. A probably naive way to solve the composability issue is to change f to:
f'' :: m a -> m a
I wonder if there is an elegant way of making error handling/reporting work along this line?
Thanks.
-- Edit ---
Just to clarify the question, take an example from http://mvanier.livejournal.com/5103.html, to make a simple function like
g' i j k = i / k + j / k
capable of handling division by zero error, the current way is to break down the expression term-wise, and compute each term in a monadic action (somewhat like rewriting in assembly language):
g' :: Int -> Int -> Int -> Either ArithmeticError Int
g' i j k =
do q1 <- i `safe_divide` k
q2 <- j `safe_divide` k
return (q1 + q2)
Three actions would be necessary if (+) can also incur an error. I think two reasons for this complexity in current approach are:
As the author of the tutorial pointed out, monads enforce a certain order of operations, which wasn't required in the original expression. That's where the non-monadic part of the question comes from (along with the "viral" feature of monads).
After the monadic computation, you don't have Ints; instead, you have Either ArithmeticError Int values, which you cannot add directly. The boilerplate code would multiply rapidly as the expression gets more complex than the addition of two terms. That's where the enclosing-everything-in-an-Either part of the question comes from.
In your first example, you want to compose a function f :: a -> m a with itself. Let's pick a specific a and m for the sake of discussion: Int -> Maybe Int.
Composing functions that can have errors
Okay, so as you point out, you cannot just do f (f x). Well, let's generalize this a little more to g (f x) (let's say we're given a g :: Int -> Maybe String to make things more concrete) and look at what you do need to do case-by-case:
f :: Int -> Maybe Int
f = ...
g :: Int -> Maybe String
g = ...
gComposeF :: Int -> Maybe String
gComposeF x =
case f x of -- The f call on the inside
Nothing -> Nothing
Just x' -> g x' -- The g call on the outside
This is a bit verbose and, like you said, we would like to reduce the repetition. We can also notice a pattern: Nothing always goes to Nothing, and the x' gets taken out of Just x' and given to the composition. Also, note that instead of f x, we could take any Maybe Int value to make things even more general. So let's also pull our g out into an argument, so we can give this function any g:
bindMaybe :: Maybe Int -> (Int -> Maybe String) -> Maybe String
bindMaybe Nothing g = Nothing
bindMaybe (Just x') g = g x'
With this helper function, we can rewrite our original gComposeF like this:
gComposeF :: Int -> Maybe String
gComposeF x = bindMaybe (f x) g
This is getting pretty close to g . f, which is how you would compose those two functions if there wasn't the Maybe discrepancy between them.
Next, we can see that our bindMaybe function doesn't specifically need Int or String, so we can make this a little more useful:
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing g = Nothing
bindMaybe (Just x') g = g x'
All we had to change, actually, was the type signature.
This already exists!
Now, bindMaybe actually already exists: it is the >>= method from the Monad type class!
(>>=) :: Monad m => m a -> (a -> m b) -> m b
If we substitute Maybe for m (since Maybe is an instance of Monad, we can do this) we get the same type as bindMaybe:
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Let's take a look at the Maybe instance of Monad to be sure:
instance Monad Maybe where
return x = Just x
Nothing >>= f = Nothing
Just x >>= f = f x
Just like bindMaybe, except we also have an additional method that lets us put something into a "monadic context" (in this case, this just means wrapping it in a Just). Our original gComposeF looks like this:
gComposeF x = f x >>= g
There is also =<<, which is a flipped version of >>=, that lets this look a little more like the normal composition version:
gComposeF x = g =<< f x
There is also a builtin function for composing functions with types of the form a -> m b called <=<:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
-- Specialized to Maybe, we get:
(<=<) :: (b -> Maybe c) -> (a -> Maybe b) -> a -> Maybe c
Now this really looks like function composition!
gComposeF = g <=< f -- This is very similar to g . f, which is how we "normally" compose functions
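Here is a small self-contained sketch of <=< at work; `half` is a made-up Maybe-producing function:

```haskell
import Control.Monad ((<=<))

-- Succeeds only on even numbers.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- Kleisli composition chains the two calls, short-circuiting on Nothing.
quarter :: Int -> Maybe Int
quarter = half <=< half
```

`quarter 8` succeeds, while `quarter 6` fails at the second step (since `half 6` is `Just 3`, and 3 is odd).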
When we can simplify even more
As you mentioned in your question, using do notation to convert simple division function to one which properly handles errors is a bit harder to read and more verbose.
Let's look at this a little more carefully, but let's start with a simpler problem (this is actually a simpler problem than the one we looked at in the first sections of this answer): we already have a function, say one that multiplies by 10, and we want to compose it with a function that gives us a Maybe Int. We can immediately simplify this a little bit by saying that what we really want to do is take a "regular" function (such as our multiplyByTen :: Int -> Int) and give it a Maybe Int (i.e., a value that won't exist in the case of an error). We want a Maybe Int to come back too, because we want the error to propagate.
For concreteness, we will say that we have some function maybeCount :: String -> Maybe Int (say it divides 5 by the number of times the word "compose" appears in the String and rounds down; what it does specifically doesn't really matter) and we want to apply multiplyByTen to the result of that.
We'll start with the same kind of case analysis:
multiplyByTen :: Int -> Int
multiplyByTen x = x * 10
maybeCount :: String -> Maybe Int
maybeCount = ...
countThenMultiply :: String -> Maybe Int
countThenMultiply str =
case maybeCount str of
Nothing -> Nothing
Just x -> multiplyByTen x
We can, again, do a similar "pulling out" of multiplyByTen to generalize this further:
overMaybe :: (Int -> Int) -> Maybe Int -> Maybe Int
overMaybe f mstr =
case mstr of
Nothing -> Nothing
Just x -> f x
These types also can be more general:
overMaybe :: (a -> b) -> Maybe a -> Maybe b
Note that we just needed to change the type signature, just like last time.
Our countThenMultiply can then be rewritten:
countThenMultiply str = overMaybe multiplyByTen (maybeCount str)
This function also already exists!
This is fmap from Functor!
fmap :: Functor f => (a -> b) -> f a -> f b
-- Specializing f to Maybe:
fmap :: (a -> b) -> Maybe a -> Maybe b
and, in fact, the definition of the Maybe instance is exactly the same as well. This lets us apply any "normal" function to a Maybe value and get a Maybe value back, with any failure automatically propagated.
There is also a handy infix operator synonym for fmap: (<$>) = fmap. This will come in handy later. This is what it would look like if we used this synonym:
countThenMultiply str = multiplyByTen <$> maybeCount str
What if we have more Maybes?
Maybe we have a "normal" function of multiple arguments that we need to apply to multiple Maybe values. As you have in your question, we could do this with Monad and do notation if we were so inclined, but we don't actually need the full power of Monad. We need something in between Functor and Monad.
Let's look at the division example you gave. We want to convert g' to use safeDivide :: Int -> Int -> Either ArithmeticError Int. The "normal" g' looks like this:
g' i j k = i / k + j / k
What we would really like to do is something like this:
g' i j k = (safeDivide i k) + (safeDivide j k)
Well, we can get close with Functor:
fmap (+) (safeDivide i k) :: Either ArithmeticError (Int -> Int)
The type of this, by the way, is analogous to Maybe (Int -> Int). The Either ArithmeticError part just tells us that our errors give us information in the form of ArithmeticError values instead of only being Nothing. It could help to mentally replace Either ArithmeticError with Maybe for now.
Well, this is sort of like what we want, but we need a way to apply the function "inside" the Either ArithmeticError (Int -> Int) to Either ArithmeticError Int.
Our case analysis would look like this:
eitherApply :: Either ArithmeticError (Int -> Int) -> Either ArithmeticError Int -> Either ArithmeticError Int
eitherApply ef ex =
case ef of
Left err -> Left err
Right f ->
case ex of
Left err' -> Left err'
Right x -> Right (f x)
(As a side note, the second case can be simplified with fmap)
If we have this function, then we can do this:
g' i j k = eitherApply (fmap (+) (safeDivide i k)) (safeDivide j k)
This still doesn't look great, but let's go with it for now.
It turns out eitherApply also already exists: it is (<*>) from Applicative. If we use this, we can arrive at:
g' i j k = (<*>) (fmap (+) (safeDivide i k)) (safeDivide j k)
-- This is the same as
g' i j k = fmap (+) (safeDivide i k) <*> safeDivide j k
You may remember from earlier that there is an infix synonym for fmap called <$>. If we use that, the whole thing looks like:
g' i j k = (+) <$> safeDivide i k <*> safeDivide j k
This looks strange at first, but you get used to it. You can think of <$> and <*> as being "context sensitive whitespace." What I mean is, if we have some regular function f :: String -> String -> Int and we apply it to normal String values we have:
firstString, secondString :: String
result :: Int
result = f firstString secondString
If instead we have two Maybe String values, we can apply f to both of them like this:
firstString', secondString' :: Maybe String
result :: Maybe Int
result = f <$> firstString' <*> secondString'
The difference is that instead of whitespace, we add <$> and <*>. This generalizes to more arguments in this way (given f :: A -> B -> C -> D -> E):
-- When we apply normal values (x :: A, y :: B, z :: C, w :: D):
result :: E
result = f x y z w
-- When we apply values that have an Applicative instance, for example x' :: Maybe A, y' :: Maybe B, z' :: Maybe C, w' :: Maybe D:
result' :: Maybe E
result' = f <$> x' <*> y' <*> z' <*> w'
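Putting this back into the division example gives a runnable sketch. Note that `safeDivide` is hypothetical; the answer's `ArithmeticError` is replaced by a plain `String` here just to keep the block self-contained:

```haskell
-- A hypothetical safeDivide, failing on a zero divisor.
safeDivide :: Int -> Int -> Either String Int
safeDivide _ 0 = Left "division by zero"
safeDivide x y = Right (x `div` y)

-- The applicative style: (+) applied "through" two Either values.
g' :: Int -> Int -> Int -> Either String Int
g' i j k = (+) <$> safeDivide i k <*> safeDivide j k
```

The first failure wins: if either division fails, the whole expression is a `Left`.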
A very important note
Note that none of the above code mentioned Functor, Applicative, or Monad. We just used their methods as though they are any other regular helper functions.
The only difference is that these particular helper functions can work on many different types, but we don't even have to think about that if we don't want to. If we really want to, we can just think of fmap, <*>, >>= etc in terms of their specialized types, if we are using them on a specific type (which we are, in all of this).
The reason I ask is that monads seem to be viral to me.
Such viral character is actually well-suited to exception handling, as it forces you to recognize your functions may fail and to deal with the failure cases.
Once I use a monad, it's cumbersome/not easy to extract the content of
a monadic value and feed it to a function not using monadic values.
You don't have to extract the value. Taking Maybe as a simple example, very often you can just write plain functions to deal with success cases, and then use fmap to apply them to your Maybe values and maybe/fromMaybe to deal with failures and eliminate the Maybe wrapping. Maybe is a monad, but that doesn't oblige you to use the monadic interface or do notation all the time. In general, there is no real opposition between "monadic" and "pure".
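A minimal sketch of that style; the names here are invented for illustration:

```haskell
import Data.Maybe (fromMaybe)

-- A plain function that only ever deals with the success case.
double :: Int -> Int
double = (* 2)

-- fmap pushes it inside the Maybe; fromMaybe eliminates the wrapper
-- with a default for the failure case.
doubleOrZero :: Maybe Int -> Int
doubleOrZero m = fromMaybe 0 (fmap double m)
```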
One rationale for using monads as I learned is that monads allow us to
thread through a state.
That is just one of many use cases. The Maybe monad allows you to skip any remaining computations in a bind chain after failure. It does not thread any sort of state.
So, is there a weaker/more general paradigm than monads that can be
used to model error-reporting? I am now reading Applicative and trying
to figure out if it's suitable.
You can certainly chain Maybe computations using the Applicative instance. (*>) is equivalent to (>>), and there is no equivalent to (>>=) since Applicative is less powerful than Monad. While it is generally a good thing not to use more power than you actually need, I am not sure if using Applicative is any simpler in the sense you aim at.
While you could do f (f x) you can't do f' (f' x)
You can write f' <=< f' $ x though:
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> a -> m c
You may find this answer about (>=>), and possibly the other discussions in that question, interesting.

Haskell sugar for "applyable" class ADTs containing functions, e.g. Isomorphisms

Specifically, inspired by J's conjugation operator (g&.f = (f inverse)(g)(f))
I need a way to augment functions with additional information. The obvious way is to use ADT. Something like:
data Isomorphism a b = ISO {FW (a -> b) , BW (b -> a)}
(FW f) `isoApp` x = f x
(BW g) `isoApp` x = g x
But the need for an application function really hurts code readability when most of the time you just want the forward function.
What would be very useful is a class:
class Applyable a b c | a b -> c where
apply :: a -> b -> c
(I think the b could be made implicit with existential quantifiers but I'm not comfortable enough to be sure I wouldn't get the signature wrong)
Now the apply would be made implicit so you could just write
f x
as you would any other function. Ex:
instance Applyable (a -> b) a b where
apply f x = f x
instance Applyable (Isomorphism a b) a b where
apply f x = (FW f) x
inverse (Iso f g) = Iso g f
then you could write something like:
conjugate :: (Applyable g b b) => g -> Iso a b -> b -> a
f `conjugate` g = (inverse f) . g . f
Ideally the type signature could be inferred
However, these semantics seem complicated, as you would also need to modify (.) to support Applyable rather than functions. Is there any way to simply trick the type system into treating Applyable datatypes as functions for all normal purposes?
Is there a fundamental reason that this is impossible / a bad idea?
As far as I know, function application is possibly the one thing in the entire Haskell language that you cannot override.
You can, however, devise some sort of operator for this. Admittedly f # x isn't quite as nice as f x, but it's better than f `isoApp` x.
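One sketch of such an operator, under the assumption that a multi-parameter class with a functional dependency is acceptable (this fixes up the question's pseudocode; the `Iso` type and instance names here are illustrative, not a standard library API):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

data Iso a b = Iso (a -> b) (b -> a)

class Applyable f a b | f -> a b where
  (#) :: f -> a -> b

-- Plain functions are trivially applyable.
instance Applyable (a -> b) a b where
  f # x = f x

-- Applying an isomorphism uses its forward direction.
instance Applyable (Iso a b) a b where
  Iso fw _ # x = fw x
```

With this, `f # x` works uniformly for functions and isomorphisms, though as noted above, bare juxtaposition `f x` itself cannot be overridden.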

Haskell IO Passes to Another Function

This question here is related to
Haskell Input Return Tuple
I wonder how we can pass the input from the IO monad to another function in order to do some computation.
Actually what I want is something like
-- First Example
test = savefile investinput
-- Second Example
maxinvest :: a
maxinvest = liftM maximuminvest maxinvestinput
maxinvestinput :: IO()
maxinvestinput = do
str <- readFile "C:\\Invest.txt"
let cont = words str
let mytuple = converttuple cont
let myint = getint mytuple
putStrLn ""
-- Convert to Tuple
converttuple :: [String] -> [(String, Integer)]
converttuple [] = []
converttuple (x:y:z) = (x, read y):converttuple z
-- Get Integer
getint :: [(String, Integer)] -> [Integer]
getint [] = []
getint (x:xs) = snd (x) : getint xs
-- Search Maximum Invest
maximuminvest :: (Ord a) => [a] -> a
maximuminvest [] = error "Empty Invest Amount List"
maximuminvest [x] = x
maximuminvest (x:xs)
| x > maxTail = x
| otherwise = maxTail
where maxTail = maximuminvest xs
In the second example, maxinvestinput is read from a file, and the data is converted to the type maximuminvest expects.
Please help.
Thanks.
First, I think you're having some basic issues with understanding Haskell, so let's go through building this step by step. Hopefully you'll find this helpful. Some of it will just arrive at the code you have, and some of it will not, but it is a slowed-down version of what I'd be thinking about as I wrote this code. After that, I'll try to answer your one particular question.
I'm not quite sure what you want your program to do. I understand that you want a program which reads as input a file containing a list of people and their investments. However, I'm not sure what you want to do with it. You seem to (a) want a sensible data structure ([(String,Integer)]), but then (b) only use the integers, so I'll suppose that you want to do something with the strings too. Let's go through this. First, you want a function that can, given a list of integers, return the maximum. You call this maximuminvest, but this function is more general than just investments, so why not call it maximum? As it turns out, this function already exists. How could you know this? I recommend Hoogle, a Haskell search engine which lets you search both function names and types. You want a function from lists of integers to a single integer, so let's search for that. As it turns out, the first result is maximum, which is the more general version of what you want. But for learning purposes, let's suppose you want to write it yourself; in that case, your implementation is just fine.
Alright, now we can compute the maximum. But first, we need to construct our list. We're going to need a function of type [String] -> [(String,Integer)] to convert our formattingless list into a sensible one. Well, to get an integer from a string, we'll need to use read. Long story short, your current implementation of this is also fine, though I would (a) add an error case for the one-item list (or, if I were feeling nice, just have it return an empty list to ignore the final item of odd-length lists), and (b) use a name with a capital letter, so I could tell the words apart (and probably a different name):
tupledInvestors :: [String] -> [(String, Integer)]
tupledInvestors [] = []
tupledInvestors [_] = error "tupledInvestors: Odd-length list"
tupledInvestors (name:amt:rest) = (name, read amt) : tupledInvestors rest
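As a quick sanity check of the behavior (repeating the definition so the snippet stands alone; the sample names and amounts are made up):

```haskell
tupledInvestors :: [String] -> [(String, Integer)]
tupledInvestors [] = []
tupledInvestors [_] = error "tupledInvestors: Odd-length list"
tupledInvestors (name:amt:rest) = (name, read amt) : tupledInvestors rest

-- tupledInvestors ["alice","100","bob","250"]
--   == [("alice",100),("bob",250)]
```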
Now that we have these, we can provide ourselves with a convenience function, maxInvestment :: [String] -> Integer. The only thing missing is the ability to go from the tupled list to a list of integers. There are several ways to solve this. One is the one you have, though that would be unusual in Haskell. A second would be to use map :: (a -> b) -> [a] -> [b]. This is a function which applies a function to every element of a list. Thus, your getint is equivalent to the simpler map snd. The nicest way would probably be to use Data.List.maximumBy :: (a -> a -> Ordering) -> [a] -> a. This is like maximum, but it allows you to use a comparison function of your own. And using Data.Ord.comparing :: Ord a => (b -> a) -> b -> b -> Ordering, things become nice. This function allows you to compare two arbitrary objects by converting them to something which can be compared. Note that maximumBy returns the winning element itself (here, the whole (String, Integer) pair), so we finish with snd to extract the amount. Thus, I would write
maxInvestment :: [String] -> Integer
maxInvestment = snd . maximumBy (comparing snd) . tupledInvestors
Though you could also write maxInvestment = maximum . map snd . tupledInvestors.
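To make the comparison concrete, here are both pipelines side by side; biggestInvestor is a name I'm making up to show what maximumBy (comparing snd) buys you, namely that the investor's name stays attached to the amount:

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

tupledInvestors :: [String] -> [(String, Integer)]
tupledInvestors [] = []
tupledInvestors [_] = error "tupledInvestors: Odd-length list"
tupledInvestors (name:amt:rest) = (name, read amt) : tupledInvestors rest

-- Keeps the whole pair, so we know *who* made the largest investment.
biggestInvestor :: [String] -> (String, Integer)
biggestInvestor = maximumBy (comparing snd) . tupledInvestors

-- Throws the names away first, then takes the maximum amount.
maxInvestment :: [String] -> Integer
maxInvestment = maximum . map snd . tupledInvestors
```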
Alright, now on to the IO. Your main function, then, wants to read from a specific file, compute the maximum investment, and print that out. One way to represent that is as a series of three distinct steps:
main :: IO ()
main = do dataStr <- readFile "C:\\Invest.txt"
          let maxInv = maxInvestment $ words dataStr
          print maxInv
(The $ operator, if you haven't seen it, is just function application, but with more convenient precedence; it has type (a -> b) -> a -> b, which should make sense.) But that let maxInv seems pretty pointless, so we can get rid of that:
main :: IO ()
main = do dataStr <- readFile "C:\\Invest.txt"
          print . maxInvestment $ words dataStr
The ., if you haven't seen it yet, is function composition; f . g is the same as \x -> f (g x). (It has type (b -> c) -> (a -> b) -> a -> c, which should, with some thought, make sense.) Thus, f . g $ h x is the same as f (g (h x)), only easier to read.
Now, we were able to get rid of the let. What about the <-? For that, we can use the =<< :: Monad m => (a -> m b) -> m a -> m b operator. Note that it's almost like $, but with an m tainting almost everything. This allows us to take a monadic value (here, the readFile "C:\\Invest.txt" :: IO String), pass it to a function which turns a plain value into a monadic value, and get that monadic value. Thus, we have
main :: IO ()
main = print . maxInvestment . words =<< readFile "C:\\Invest.txt"
That should be clear, I hope, especially if you think of =<< as a monadic $.
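Since =<< works in any monad, a pure example (with Maybe, say) may make the "monadic $" analogy easier to see; half here is just an illustrative helper:

```haskell
-- half succeeds only on even numbers, so its result must be wrapped
-- in the monad (Maybe); =<< feeds it the result of another Maybe.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing
```

Here half =<< Just 8 is Just 4, while half =<< Just 3 is Nothing: just as f $ x applies f to x, except that the monad's behavior (here, possible failure) comes along for the ride.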
I'm not sure what's happening with testfile; if you edit your question to reflect that, I'll try to update my answer.
One more thing. You said
I wonder how we can passes the input from monad IO to another function in order to do some computation.
As with everything in Haskell, this is a question of types. So let's puzzle through the types here. You have some function f :: a -> b and some monadic value m :: IO a. You want to use f to get a value of type b. This is impossible, as I explained in my answer to your other question; however, you can get something of type IO b. Thus, you need a function which takes your f and gives you a monadic version. In other words, something with type Monad m => (a -> b) -> (m a -> m b). If we plug that into Hoogle, the first result is Control.Monad.liftM, which has precisely that type signature. Thus, you can treat liftM as a slightly different "monadic $" than =<<: f `liftM` m applies f to the pure result of m (in accordance with whichever monad you're using) and returns the monadic result. The difference is that liftM takes a pure function on the left, and =<< takes a partially-monadic one.
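A tiny illustration of liftM, again using Maybe rather than IO so the result is easy to inspect:

```haskell
import Control.Monad (liftM)

-- liftM lifts the pure function (* 2) so it acts on the value
-- inside the monad; the monadic structure is untouched.
doubled :: Maybe Int
doubled = liftM (* 2) (Just 21)
```

Here doubled is Just 42; with Nothing on the right, liftM (* 2) Nothing would simply stay Nothing.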
Another way to write the same thing is with do-notation:
do x <- m
   return $ f x
This says "get the x out of m, apply f to it, and lift the result back into the monad." This is the same as the statement return . f =<< m, which is precisely liftM again. First f performs a pure computation; its result is passed into return (via .), which lifts the pure value into the monad; and then this partially-monadic function is applied, via =<<, to m.
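All three spellings really do agree; for instance, in the Maybe monad:

```haskell
import Control.Monad (liftM)

-- Three equivalent ways to apply the pure function (+ 1) to the
-- value inside Just 20.
viaDo, viaBind, viaLiftM :: Maybe Int
viaDo    = do x <- Just 20
              return (x + 1)
viaBind  = return . (+ 1) =<< Just 20
viaLiftM = liftM (+ 1) (Just 20)
```

All three evaluate to Just 21.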
It's late, so I'm not sure how much sense that made. Let me try to sum it up. In short, there is no general way to leave a monad. When you want to perform computation on monadic values, you lift pure values (including functions) into the monad, and not the other way around; that could violate purity, which would be Very Bad™.
I hope that actually answered your question. Let me know if it didn't, so I can try to make it more helpful!
I'm not sure I understand your question, but I'll answer as best I can. I've simplified things a bit to get at the "meat" of the question.
maxInvestInput :: IO [Integer]
maxInvestInput = liftM convertToIntegers (readFile "foo")
maximumInvest :: Ord a => [a] -> a
maximumInvest = blah blah blah
main = do
  values <- maxInvestInput
  print $ maximumInvest values
OR
main = liftM maximumInvest maxInvestInput >>= print