The difference between (+1) and (-1) as functions

> :t (+1)
(+1) :: Num a => a -> a
> :t (-1)
(-1) :: Num a => a
How come the second one is not a function? Do I have to write (+(-1)) or is there a better way?

This is because (-1) is interpreted as negative one, whereas (+1) is interpreted as the curried function (\x -> x + 1).
In Haskell, (a **) is syntactic sugar for (**) a, and (** a) is (\x -> x ** a). However, (-) is a special case, since - is both a unary operator (negation) and a binary operator (subtraction), so this section sugar cannot be applied unambiguously here. When you want (\x -> a - x) you can write (-) a, and, as already answered in Currying subtraction, you can use the functions negate and subtract to disambiguate between the unary and binary uses of -.

Do I have to write (+(-1)) or is there a better way?
I just found a function called subtract, so I can also say subtract 1. I find that quite readable :-)

(-1) is negative one, as others have noted. The subtract-one function is \x -> x - 1, which can be written flip (-) 1 or indeed (+ (-1)).
- is treated as a special case in the expression grammar. + is not, presumably because positive literals don't need the leading plus and allowing it would lead to even more confusion.
Edit: I got it wrong the first time. ((-) 1) is the function "subtract from one", or (\x -> 1-x).
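Putting these together, here is a short illustrative GHCi session (a sketch of the expected results, not captured output):
> (subtract 1) 10    -- \x -> x - 1
9
> ((-) 1) 10         -- \x -> 1 - x
-9
> negate 10
-10
> (+ (-1)) 10
9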

Related

Please could you explain to me how currying works when it comes to higher order functions, particularly the example below

Can someone please help me with the below,
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
I do not understand how the above works. If we had something like (+3) 10, surely it would produce 13? How is that f (f x)? Basically, I do not understand currying when it comes to looking at higher-order functions.
What I'm not understanding is this: if we had a function of the form a -> a -> a, it would take an input a and then produce a function which expects another input a to produce an output. So if we had add 5 3, then add 5 would produce a function which expects the input 3 to produce a final output of 8. My question is how that works here. We take a function in as an input, so does partial function application work here like it did in add x y, or am I completely overcomplicating everything?
That's not currying, that's partial application.
> :t (+)
(+) :: Num a => a -> a -> a
> :t (+) 3
(+) 3 :: Num a => a -> a
The partial application (+) 3 indeed produces a function (+3)(*), which awaits another numerical input to produce its result. And it does so whether it is applied once or twice.
Your example is expanded as
applyTwice (+3) 10 = (+3) ((+3) 10)
= (+3) (10+3)
= (10+3)+3
That's all there is to it.
(*) (Actually, it's (3 +), but that's the same as (+ 3) anyway.)
As chepner clarifies in the comments (quoted with minimal copy editing),
partial application is an illusion created by the fact that functions only take one argument, and the combination of the right associativity of (->) and the left associativity of function application. (+) 3 isn't really a partial application. It's just [a regular] application of (+) to an argument 3.
So seen from the point of view of other, more traditional languages, we refer to this as a distinction between currying and partial application.
But seen from the Haskell perspective it is all indeed about currying, i.e. applying a function to its arguments one at a time, until fully saturated as indicated by its type (i.e. a->a->a value applied to an a value becomes an a->a value, and that then becomes an a value when applied to an a value in its turn).
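The one-argument-at-a-time view can be made explicit by putting the parentheses in yourself; the following GHCi session is a sketch of the expected results, not captured output:
> :t applyTwice (+3)
applyTwice (+3) :: Num a => a -> a
> (applyTwice (+3)) 10
16
> applyTwice (+3) 10    -- the same, since application is left-associative
16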

Partially applied functions [duplicate]

This question already has answers here:
What is the difference between currying and partial application?
While studying functional programming, the concept of partially applied functions comes up a lot. In Haskell, something like the built-in function take is considered to be partially applied.
I am still unclear as to what it exactly means for a function to be partially applied or the use/implication of it.
The classic example is
add :: Int -> Int -> Int
add x y = x + y
The add function takes two arguments and adds them; we can now implement
increment = add 1
by partially applying add, which now waits for the other argument.
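For example (a sketch assuming the add definition above is in scope; expected results, not captured output):
> increment 41
42
> map (add 10) [1, 2, 3]    -- partial application also works inline
[11,12,13]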
A function by itself can't be "partially applied" or not. It's a meaningless concept.
When you say that a function is "partially applied", you refer to how the function is called (aka "applied"). If the function is called with all its parameters, then it is said to be "fully applied". If some of the parameters are missing, then the function is said to be "partially applied".
For example:
-- The function `take` is fully applied here:
oneTwoThree = take 3 [1,2,3,4,5]
-- The function `take` is partially applied here:
take10 = take 10 -- see? it's missing one last argument
-- The function `take10` is fully applied here:
oneToTen = take10 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,42]
The result of a partial function application is another function - a function that still "expects" to get its missing arguments - like the take10 in the example above, which still "expects" to receive the list.
Of course, it all gets a bit more complicated once you get into the higher-order functions - i.e. functions that take other functions as arguments or return other functions as result. Consider this function:
mkTake n = take (n+5)
The function mkTake has only one parameter, but it returns another function as result. Now, consider this:
x = mkTake 10
y = mkTake 10 [1,2,3]
On the first line, the function mkTake is, obviously, "fully applied", because it is given one argument, which is exactly how many arguments it expects. But the second line is also valid: since mkTake 10 returns a function, I can then call this function with another argument. So what does that make mkTake? "Overly applied", I guess?
Then consider the fact that (barring compiler optimizations) all functions are, mathematically speaking, functions of exactly one argument. How could this be? When you declare a function take n l = ..., what you're "conceptually" saying is take = \n -> \l -> ... - that is, take is a function that takes argument n and returns another function that takes argument l and returns some result.
So the bottom line is that the concept of "partial application" isn't really that strictly defined, it's just a handy shorthand to refer to functions that "ought to" (as in common sense) take N arguments, but are instead given M < N arguments.
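To make the "exactly one argument" reading concrete, here is a hypothetical take-like function written explicitly as nested single-argument lambdas (myTake is an illustrative name, not the Prelude definition):
-- myTake is a sketch for illustration only; the real Prelude take is
-- defined differently, but the nested-lambda shape is the point here.
myTake :: Int -> [a] -> [a]
myTake = \n -> \l ->
  case (n, l) of
    (k, _) | k <= 0 -> []                     -- nothing more to take
    (_, [])         -> []                     -- nothing left to take from
    (k, x:xs)       -> x : myTake (k - 1) xs  -- take one, recurse

firstTwo :: [Int]
firstTwo = (myTake 2) [7, 8, 9]    -- myTake 2 is itself a function; result is [7,8]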
Strictly speaking, partial application refers to a situation where you supply fewer arguments than expected to a function, and get back a new function created on the fly that expects the remaining arguments.
Also strictly speaking, this does not apply in Haskell, because every function takes exactly one argument. There is no partial application, because you either apply the function to its argument or you don't.
However, Haskell provides syntactic sugar for defining functions that includes emulating multi-argument functions. In Haskell, then, "partial application" refers to supplying fewer than the full number of arguments needed to obtain a value that cannot be further applied to another argument. Using everyone's favorite add example,
add :: Int -> Int -> Int
add x y = x + y
the type indicates that add takes one argument of type Int and returns a function of type Int -> Int. The -> type constructor is right-associative, so it helps to explicitly parenthesize it to emphasize the one-argument nature of a Haskell function: Int -> (Int -> Int).
When calling such a "multi-argument" function, we take advantage of the fact the function application is left-associative, so we can write add 3 5 instead of (add 3) 5. The call add 3 is thought of as partial application, because we could further apply the result to another argument.
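Spelled out with the parentheses made explicit (a sketch of the expected GHCi responses, not captured output):
> :t add 3
add 3 :: Int -> Int
> (add 3) 5
8
> add 3 5    -- identical, thanks to left-associative application
8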
I mentioned the syntactic sugar Haskell provides to ease defining complex higher-order functions. There is one fundamental way to define a function: using a lambda expression. For example, to define a function that adds 3 to its argument, we write
\x -> x + 3
For ease of reference, we can assign a name to this function:
add = \x -> x + 3
To further ease the definition, Haskell lets us write in its place
add x = x + 3
hiding the explicit lambda expression.
For higher-order, "multiargument" functions, we would write a function to add two values as
\x -> \y -> x + y
To hide the currying, we can write instead
\x y -> x + y
Combined with the syntactic sugar of replacing a lambda expression with a parameterized name, all of the following are equivalent definitions for the explicitly typed function add :: Num a => a -> a -> a:
add = \x -> \y -> x + y -- or \x y -> x + y as noted above
add x = \y -> x + y
add x y = x + y
This is best understood by example. Here's the type of the addition operator:
(+) :: Num a => a -> a -> a
What does it mean? As you know, it takes two numbers and outputs another one. Right? Well, sort of.
You see, a -> a -> a actually means this: a -> (a -> a). Whoa, looks weird! This means that (+) is a function that takes one argument and outputs a function(!!) that also takes one argument and outputs a value.
What this means is you can supply that one value to (+) to get another, partially applied, function:
(+ 5) :: Num a => a -> a
This function takes one argument and adds five to it. Now, call that new function with some argument:
Prelude> (+5) 3
8
Here you go! Partial function application at work!
Note the type of (+ 5) 3:
(+5) 3 :: Num a => a
Look what we end up with:
(+) :: Num a => a -> a -> a
(+ 5) :: Num a => a -> a
(+5) 3 :: Num a => a
Do you see how the number of as in the type signature decreases by one every time you add a new argument?
Function with one argument that outputs a function with one argument that in turn, outputs a value
Function with one argument that outputs a value
The value itself
"...something like the built-in function take is considered to be partially applied" - I don't quite understand what this means. The function take itself is not partially applied. A function is partially applied when it is the result of supplying between 1 and (N - 1) arguments to a function that takes N arguments. So, take 5 is a partially applied function, but take and take 5 [1..10] are not.

What is the nicest way to make a function that takes Float or Double as input?

Say I want to implement the Fermi function (the simplest example of a logistic curve) so that if it's passed a Float it returns a Float and if it's passed a Double it returns a Double. Here's what I've got:
e = 2.7182845904523536
fermiFunc :: (Floating a) => a -> a
fermiFunc x = let one = fromIntegral 1 in one/(one + e^(-x))
The problem is that GHC says e is a Double. Defining the variable one is also kinda gross. The other solution I've thought of is to just define the function for doubles:
e = 2.7182845904523536
fermiFuncDouble :: Double -> Double
fermiFuncDouble x = 1.0/(1.0 + e^(-x))
Then using Either:
fermiFunc :: (Floating a) => Either Float Double -> a
fermiFunc (Right x) = double2Float (fermiFuncDouble (float2Double x))
fermiFunc (Left x) = fermiFuncDouble x
This isn't very exciting though because I might as well have just written a separate function for the Float case that just handles the casting and calls fermiFuncDouble. Is there a nice way to write a function for both types?
Don't write e^x, ever, in any language. That is not the exponential function, it's the power function.
The exponential function is called exp, and its definition actually has little to do with the power operation: it's defined, depending on your taste, as a Taylor series or as the unique solution to the ordinary differential equation d/dx exp x = exp x with boundary condition exp 0 = 1. Now, it so happens that, for any rational n, we have exp n ≡ (exp 1)^n, and that motivates also defining the power operation for numbers in ℝ or ℂ in addition to ℚ, namely as
a^z := exp (z · ln a)
...but e^x should be understood as really just a shortcut for writing exp x itself.
So rather than defining e somewhere and trying to take some power of it, you should use exp just as it is.
fermiFunc :: Floating a => a -> a
fermiFunc x = 1/(1 + exp (-x))
...or indeed
fermiFunc = recip . succ . exp . negate
Assuming that you want a floating-point exponent, that's (**); (^) is for integral exponents. Rewriting your function to use (**) and letting GHC infer the type gives:
fermiFunc x = 1/(1 + e ** (-x))
and
> :t fermiFunc
fermiFunc :: (Floating a) => a -> a
Since Float and Double both have Floating instances, fermiFunc is now sufficiently polymorphic to work with both.
(Note: you may need to declare a polymorphic type for e to get around the monomorphism restriction, i.e., e :: Floating a => a.)
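Putting the two answers together, a minimal self-contained sketch might look like this (it uses exp from the Prelude, so no hand-written constant for e is needed; main is only there to show the function being used at both types):
fermiFunc :: Floating a => a -> a
fermiFunc x = 1 / (1 + exp (-x))

main :: IO ()
main = do
  print (fermiFunc (2.0 :: Float))    -- instantiated at Float
  print (fermiFunc (2.0 :: Double))   -- instantiated at Double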
In general, the answer to "How do I write a function that works with multiple types?" is either "Write it so that it works universally for all types." (parametric polymorphism, like map), "Find (or create) one or more typeclasses that they share that provides the behaviour you need." (ad hoc polymorphism, like show), or "Create a new type that is the sum of those types." (like Either).
The latter two have some tradeoffs. For instance, type classes are open (you can add more at any time) while sum types are closed (you must modify the definition to add more types). Sum types require you to know which type you are dealing with (because it must be matched up with a constructor) while type classes let you write polymorphic functions.
You can use :i in GHCi to list a type's instances or a typeclass's methods and instances, which might help you to locate a suitable typeclass.

Haskell 2014.0.0: Two 'Could not deduce' errors occur when executing simple function

I want to program a function 'kans' (which calculates the chance of a repetition code of length n being correctly decoded) the following way:
fac 0 = 1
fac n = n * fac (n-1)
comb n r = ceiling (fac n / ((fac (n - r))*(fac r)))
kans 0 = 1
kans n = sum [ (comb (n+1) i) * 0.01^i * 0.99^(n+1-i) | i <- [0..(div n 2)]]
Now, calling 'kans 2' in GHCi gives the following error message: http://i.imgur.com/wEZRXgu.jpg
The functions 'fac' and 'comb' work normally. I even get an error message when I call 'kans 0', which I defined separately.
I would greatly appreciate your help.
The error messages both contain telling parts of the form
Could not deduce [...]
from the context (Integral a, Fractional a)
This says that you are trying to use a single type as both a member of the class Integral and as a member of the class Fractional. There are no such types in Haskell: The standard numerical types all belong to exactly one of them. And you are using operations that require one or the other:
/ is "floating point" or "field" division, and only works with Fractional.
div is "integer division", and so works only with Integral.
the second argument of ^ must be Integral.
What you probably should do is to decide exactly which of your variables and results are integers (and thus in the Integral class), and which are floating-point (and thus in the Fractional class.) Then use the fromIntegral function to convert from the former to the latter whenever necessary. You will need to do this conversion in at least one of your functions.
On mathematical principle I recommend you do it in kans, and change comb to be purely integral by using div instead of /.
As @Mephy suggests, you probably should write type signatures for your functions; this helps make error messages clearer. However, Num won't be enough here. For kans you can use either of
kans :: Integer -> Double
kans :: (Integral a, Fractional b) => a -> b
The latter is more general, but the former names the two specific types you're most likely to actually use.
short dirty answer:
Because both fac and comb always return Integers and only accept Integers, use this code:
fac 0 = 1
fac n = n * fac (n-1)
comb n r = fac n `div` ((fac (n - r)) * (fac r))
kans 0 = 1
kans n = sum [ fromIntegral (comb (n+1) i) * 0.01^i * 0.99^(n+1-i) | i <- [0..(div n 2)]]
actual explanation:
Looking at your error messages, GHC complains about ambiguity.
Let's say you try to run this piece of code:
x = show (read "abc")
GHC will complain that it can't compute read "abc" because it doesn't have enough information to determine which definition of read to use (there are multiple definitions; you might want to read about type classes, or maybe you already did. For example, there is one of type String -> Int which parses Integers, and one of type String -> Float, etc.). This is called ambiguity, because the result type of read "abc" is ambiguous.
In Haskell, when using arithmetic operators (which, like read, usually have multiple definitions in the same way), there is ambiguity all the time. So when there is ambiguity, GHC checks whether the types Integer or Double can be plugged in without problems; if one of them can, it changes the types to it, resolving the ambiguity, and otherwise it complains about the ambiguity. This is called defaulting.
The problem with your code is that, because of the use of ceiling and /, the types require instances of Fractional and RealFrac while also requiring instances of Integral for the same type variables, but no standard type is an instance of both at once, so GHC complains about ambiguity.
The way to fix this is to notice that comb should only work on whole numbers, and that (fac n / ((fac (n - r))*(fac r))) is always a whole number, so the ceiling is redundant.
Then we should replace / in comb's definition by `div` to express that only whole numbers will be the output.
The last thing to do is to replace comb (n+1) i with fromIntegral (comb (n+1) i) (fromIntegral is a function which converts whole numbers to arbitrary numeric types), because otherwise the types would mismatch again. Adding this allows us to separate the type of comb (n+1) i (which is a whole number) from the type of 0.01^i, which is of course a floating-point number.
This solves all the problems and results in the code I wrote above in the "short dirty answer" section.
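To tie this back to the suggestion of writing explicit type signatures, here is a self-contained sketch with the signatures added (definitions repeated from the short answer above; the value in the comment is computed by hand, not captured output):
fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n - 1)

comb :: Integer -> Integer -> Integer
comb n r = fac n `div` (fac (n - r) * fac r)

kans :: Integer -> Double
kans 0 = 1
kans n = sum [ fromIntegral (comb (n + 1) i) * 0.01 ^ i * 0.99 ^ (n + 1 - i)
             | i <- [0 .. n `div` 2] ]

main :: IO ()
main = print (kans 2)    -- 0.99^3 + 3 * 0.01 * 0.99^2, roughly 0.9997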

Controlling function's arity

I was reading the answer to this question: Haskell: difference between . (dot) and $ (dollar sign). And the reply struck me as odd... What does he mean, + has no input? And then I tried:
((+) 1)
((+) 1 1)
((+) 1 1 1)
Whoops... sad news. But I'm sure I've seen functions that take a seemingly arbitrary (or at least very large) number of arguments, too many to believe that someone had defined them as a -> b -> c -> ... -> z. There must be some way to handle it! What I'm looking for is something like &rest or &optional in CL.
Sure, you can define a variadic addition function, with some typeclass hackery:1
{-# LANGUAGE TypeFamilies #-}

class Add r where
  add' :: (Integer -> Integer) -> r

instance Add Integer where
  add' k = k 0

instance (n ~ Integer, Add r) => Add (n -> r) where
  add' k m = add' (\n -> k (m + n))

add :: (Add r) => r
add = add' id
And so:
GHCi> add 1 2 :: Integer
3
GHCi> add 1 2 3 :: Integer
6
The same trick is used by the standard Text.Printf module. It's generally avoided for two reasons: one, the types it gives you can be awkward to work with, and you often have to specify an explicit type signature for your use; two, it's really a hack, and should be used rarely, if at all. printf has to take any number of arguments and be polymorphic, so it can't simply take a list, but for addition, you could just use sum.
1 The language extension isn't strictly necessary here, but it makes the usage easier (without it, you'd have to explicitly specify the type of each argument in the examples I gave, e.g. add (1 :: Integer) (2 :: Integer) :: Integer).
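By contrast, the list-based alternative mentioned above needs no extensions at all (illustrative GHCi session, not captured output):
> sum [1, 2, 3]
6
> sum [1, 2, 3, 4, 5]
15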
It's a syntactical trade-off: you can't (in general) have both variable arity functions and nice Haskell-style syntax for function application at the same time. This is because it would make many expressions ambiguous.
Suppose you have a function foo that allows an arity of 1 or 2, and consider the following expression:
foo a b
Should the 1 or 2 argument version of foo be used here? No way for the compiler to know, as it could be the 2 argument version but it could equally be the result of the 1 argument version applied to b.
Hence language designers need to make a choice.
Haskell opts for nice function application syntax (no parentheses required)
Lisp opts for variable arity functions (and adds parentheses to remove the ambiguity)