Higher-Order Function in Scheme: Incrementing via a Returned Function

I'm having issues with some functionality in Scheme.
In the book I'm using to learn Scheme, I've come across a problem that I don't quite grasp yet.
It's asking me to create a higher-order function that does this:
(display ((incrementn 4) 2))
6
I've been stuck on this for a couple of hours and still don't seem to have understood the fundamentals, so I'm turning to you all in hopes of getting a better understanding of this function call.
So the way I understand it so far is that when we define a function like so:
(define (increment n) ______)
The blank space obviously represents my manipulation of the given arguments. What I don't seem to understand is how the higher-order function takes the outside argument (of the increment call) and injects it into the defined function (which is (incrementn 4)).
The way I understand it, 4 is the initial value (an integer) and we increment it by 1 a total of x times (x being the argument passed outside of ((incrementn n) x)).
The question I'm simply asking is: given that x is an unbound variable (right?), how do I get hold of that integer and increment n by 1 that many times? What is the syntax for this kind of behaviour?

The point to understand here is that calling incrementn with 4 as the initial argument returns a function, and that 2 is then passed as an argument to that returned function. This is called currying, and the solution is straightforward once you grasp the concept at play here:
(define (incrementn n)
  (lambda (x)
    (+ n x)))
As you can see, the call to incrementn captures the value of the n parameter in the returned lambda, and when we call it passing x, n is there to be used in the expressions in the body of the lambda. Now, it works as expected:
((incrementn 4) 2)
=> 6
((incrementn -1) 3)
=> 2
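Equivalently, the shorthand define form above desugars into nested lambdas, which makes the returned function explicit (a sketch of the same solution, nothing new added):

```scheme
; incrementn takes n and returns a new one-argument
; function that remembers n (a closure over n).
(define incrementn
  (lambda (n)
    (lambda (x)
      (+ n x))))

(display ((incrementn 4) 2)) ; prints 6
```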

Related

Partially applied functions [duplicate]

While studying functional programming, the concept of partially applied functions comes up a lot. In Haskell, something like the built-in function take is considered to be partially applied.
I am still unclear as to what it exactly means for a function to be partially applied or the use/implication of it.
The classic example is:
add :: Int -> Int -> Int
add x y = x + y
The add function takes two arguments and adds them. We can now implement
increment = add 1
by partially applying add, which now waits for the other argument.
A function by itself can't be "partially applied" or not. It's a meaningless concept.
When you say that a function is "partially applied", you refer to how the function is called (aka "applied"). If the function is called with all its parameters, then it is said to be "fully applied". If some of the parameters are missing, then the function is said to be "partially applied".
For example:
-- The function `take` is fully applied here:
oneTwoThree = take 3 [1,2,3,4,5]
-- The function `take` is partially applied here:
take10 = take 10 -- see? it's missing one last argument
-- The function `take10` is fully applied here:
oneToTen = take10 [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,42]
The result of a partial function application is another function - a function that still "expects" to get its missing arguments - like the take10 in the example above, which still "expects" to receive the list.
Of course, it all gets a bit more complicated once you get into the higher-order functions - i.e. functions that take other functions as arguments or return other functions as result. Consider this function:
mkTake n = take (n+5)
The function mkTake has only one parameter, but it returns another function as result. Now, consider this:
x = mkTake 10
y = mkTake 10 [1,2,3]
On the first line, the function mkTake is, obviously, "fully applied", because it is given one argument, which is exactly how many arguments it expects. But the second line is also valid: since mkTake 10 returns a function, I can then call that function with another argument. So what does that make mkTake? "Overly applied", I guess?
Then consider the fact that (barring compiler optimizations) all functions are, mathematically speaking, functions of exactly one argument. How could this be? When you declare a function take n l = ..., what you're "conceptually" saying is take = \n -> \l -> ... - that is, take is a function that takes argument n and returns another function that takes argument l and returns some result.
So the bottom line is that the concept of "partial application" isn't really that strictly defined, it's just a handy shorthand to refer to functions that "ought to" (as in common sense) take N arguments, but are instead given M < N arguments.
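To make that concrete, here is a small sketch (the names add' and add3 are just for illustration) showing the curried and "multi-argument" views side by side:

```haskell
-- "Two-argument" sugar:
add :: Int -> Int -> Int
add x y = x + y

-- The same function with the currying written out:
add' :: Int -> (Int -> Int)
add' = \x -> \y -> x + y

-- Partially applying either one yields a function
-- still waiting for its second argument:
add3 :: Int -> Int
add3 = add 3
```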
Strictly speaking, partial application refers to a situation where you supply fewer arguments than expected to a function, and get back a new function created on the fly that expects the remaining arguments.
Also strictly speaking, this does not apply in Haskell, because every function takes exactly one argument. There is no partial application, because you either apply the function to its argument or you don't.
However, Haskell provides syntactic sugar for defining functions that includes emulating multi-argument functions. In Haskell, then, "partial application" refers to supplying fewer than the full number of arguments needed to obtain a value that cannot be further applied to another argument. Using everyone's favorite add example,
add :: Int -> Int -> Int
add x y = x + y
the type indicates that add takes one argument of type Int and returns a function of type Int -> Int. The -> type constructor is right-associative, so it helps to explicitly parenthesize it to emphasize the one-argument nature of a Haskell function: Int -> (Int -> Int).
When calling such a "multi-argument" function, we take advantage of the fact the function application is left-associative, so we can write add 3 5 instead of (add 3) 5. The call add 3 is thought of as partial application, because we could further apply the result to another argument.
I mentioned the syntactic sugar Haskell provides to ease defining complex higher-order functions. There is one fundamental way to define a function: using a lambda expression. For example, to define a function that adds 3 to its argument, we write
\x -> x + 3
For ease of reference, we can assign a name to this function:
add = \x -> x + 3
To further ease the definition, Haskell lets us write in its place
add x = x + 3
hiding the explicit lambda expression.
For higher-order, "multiargument" functions, we would write a function to add two values as
\x -> \y -> x + y
To hide the currying, we can write instead
\x y -> x + y
Combined with the syntactic sugar of replacing a lambda expression with a parameterized name, all of the following are equivalent definitions for the explicitly typed function add :: Num a => a -> a -> a:
add = \x -> \y -> x + y -- or \x y -> x + y as noted above
add x = \y -> x + y
add x y = x + y
This is best understood by example. Here's the type of the addition operator:
(+) :: Num a => a -> a -> a
What does it mean? As you know, it takes two numbers and outputs another one. Right? Well, sort of.
You see, a -> a -> a actually means this: a -> (a -> a). Whoa, looks weird! This means that (+) is a function that takes one argument and outputs a function(!!) that also takes one argument and outputs a value.
What this means is you can supply that one value to (+) to get another, partially applied, function:
(+ 5) :: Num a => a -> a
This function takes one argument and adds five to it. Now, call that new function with some argument:
Prelude> (+5) 3
8
Here you go! Partial function application at work!
Note the type of (+ 5) 3:
(+5) 3 :: Num a => a
Look what we end up with:
(+) :: Num a => a -> a -> a
(+ 5) :: Num a => a -> a
(+5) 3 :: Num a => a
Do you see how the number of as in the type signature decreases by one every time you add a new argument?
Function with one argument that outputs a function with one argument that in turn, outputs a value
Function with one argument that outputs a value
The value itself
"...something like the built-in function take is considered to be partially applied" - I don't quite understand what this means. The function take itself is not partially applied. A function is partially applied when it is the result of supplying between 1 and (N - 1) arguments to a function that takes N arguments. So, take 5 is a partially applied function, but take and take 5 [1..10] are not.
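As a small sketch of that distinction (the names firstFive and result are just for illustration):

```haskell
-- `take` by itself: not applied at all, just a function.
-- `take 5`: partially applied, still a function [a] -> [a].
firstFive :: [Int] -> [Int]
firstFive = take 5

-- `take 5 [1..10]`: fully applied, an actual value.
result :: [Int]
result = take 5 [1..10]  -- [1,2,3,4,5]
```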

Scheme nested lambda function

I am a beginner in Scheme. I found this question in MIT exam 1 for SICP lecture.
What's the value and type of the following expression?
((lambda (a) (lambda (b) (+ (sqrt a) (sqrt b)))) 5)
I am having a hard time understanding how this function works. I am really confused about the parameter b. Only 5 is passed as a parameter to the outer lambda function, then what value does b take for the inner lambda function?
I tried running this function in mit-scheme but the resulting value gets incremented each time it's run.
You're correct that only the outer lambda form is applied to the argument 5. Then it returns its body with a replaced with 5, so it would return
(lambda (b) (+ (sqrt 5) (sqrt b)))
which is itself a function. This could later be applied to another argument, to produce an actual numeric value.
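A sketch of what applying that returned function to a second argument looks like (the name g is just for illustration):

```scheme
; Applying the outer lambda to 5 returns a one-argument function:
(define g
  ((lambda (a) (lambda (b) (+ (sqrt a) (sqrt b)))) 5))

; Applying g to a second argument finally produces a number:
(display (g 4)) ; (+ (sqrt 5) (sqrt 4)), roughly 4.236
```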

Haskell - lambda expression

I am trying to understand what's useful and how to actually use lambda expression in Haskell.
I don't really understand the advantage of using a lambda expression over the conventional way of defining functions.
For example, I usually do the following:
let add x y = x+y
and I can simply call
add 5 6
and get the result of 11
I know I can also do the following:
let add = \x->(\y-> x+y)
and get the same result.
But like I mentioned before, I don't understand the purpose of using lambda expression.
Also, I typed the following code (a nameless function?) into GHCi and it gave me an error message.
let \x -> (\y->x+y)
parse error (possibly incorrect indentation or mismatched brackets)
Thank you in advance!
Many Haskell functions are "higher-order functions", i.e., they expect other functions as parameters. Often, the functions we want to pass to such a higher-order function are used only once in the program, at that particular point. It's simply more convenient then to use a lambda expression than to define a new local function for that purpose.
Here's an example that filters all even numbers that are greater than ten from a given list:
ghci> filter (\ x -> even x && x > 10) [1..20]
[12,14,16,18,20]
Here's another example that traverses a list and for every element x computes the term x^2 + x:
ghci> map (\ x -> x^2 + x) [1..10]
[2,6,12,20,30,42,56,72,90,110]

Tail-Recursive Power Function in Scheme

I am having trouble writing a tail-recursive power function in Scheme. I want to write the function using a helper function. I know that I need a parameter to hold an accumulated value, but I am stuck after that. My code is as follows.
(define (pow-tr a b)
  (define (pow-tr-h result)
    (if (= b 0)
        result
        pow-tr a (- b 1))(* result a)) pow-tr-h 1)
I edited my code, and now it works. It is as follows:
(define (pow-tr2 a b)
  (define (pow-tr2-h a b result)
    (if (= 0 b)
        result
        (pow-tr2-h a (- b 1) (* result a))))
  (pow-tr2-h a b 1))
Can someone explain to me why the helper function should have the same parameters as the main function? I am having a hard time seeing why this is necessary.
It's not correct to state that "the helper function should have the same parameters as the main function". You only need to pass the parameters that are going to change in each iteration - in the example, the exponent and the accumulated result. For instance, this will work fine without passing the base as a parameter:
(define (pow-tr2 a b)
  (define (pow-tr2-h b result)
    (if (= b 0)
        result
        (pow-tr2-h (- b 1) (* result a))))
  (pow-tr2-h b 1))
It works because the inner, helper procedure can "see" the a parameter defined in the outer, main procedure. And because the base is never going to change, we don't have to pass it around. To read more about this, take a look at the section titled "Internal definitions and block structure" in the wonderful SICP book.
Now that you're using helper procedures, it'd be a good idea to start using named let, a very handy syntax for writing helpers without explicitly coding an inner procedure. The above code is equivalent to this:
(define (pow-tr2 a b)
  (let pow-tr2-h [(b b) (result 1)]
    (if (= b 0)
        result
        (pow-tr2-h (- b 1) (* result a)))))
Even though it has the same name, it's not the same parameter. If you dig into what the interpreter is doing, you'll see a defined twice: once for the local scope, while the a in the outer scope is still remembered. When the interpreter invokes a function, it binds the values of the arguments to the formal parameters.
The reason you pass the values through, rather than mutating state as you would likely do in an Algol-family language, is that by not mutating state you can use the substitution model to reason about the behaviour of procedures. The same procedure, called at any time with the same arguments, will yield the same result no matter where it is called from.
In a purely functional style, values never change; rather, you keep calling the function with new values. The compiler can then emit a tight loop that updates the values in place on the stack (tail-call elimination). This way you can worry about the correctness of the algorithm rather than acting as a human compiler, which, truth be told, is a very inefficient pairing of machine and task.
(define (power a b)
  (if (zero? b)
      1
      (* a (power a (- b 1)))))

(display (power 3.5 3))
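To see why the accumulator version runs in constant stack space, it helps to trace a small call of the tail-recursive pow-tr2 under the substitution model (a sketch; the definition is repeated here so the block is self-contained):

```scheme
(define (pow-tr2 a b)
  (define (pow-tr2-h a b result)
    (if (= 0 b)
        result
        (pow-tr2-h a (- b 1) (* result a))))
  (pow-tr2-h a b 1))

; Each step is a fresh call with new argument values, not a mutation:
; (pow-tr2 2 3)
; => (pow-tr2-h 2 3 1)
; => (pow-tr2-h 2 2 2)
; => (pow-tr2-h 2 1 4)
; => (pow-tr2-h 2 0 8)
; => 8
(display (pow-tr2 2 3)) ; prints 8
```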

Mechanism of a function in Scheme

Here it is a strange function in Scheme:
(define f
  (call/cc
    (lambda (x) x)))

(((f 'f) f) 1)
When f is called in the command line, the result displayed is f .
What is the explanation of this mechanism?
Thanks!
You've just stumbled upon 'continuations', possibly the hardest thing of Scheme to understand.
call/cc is an abbreviation for call-with-current-continuation, what the procedure does is it takes a single argument function as its own argument, and calls it with the current 'continuation'.
So what's a continuation? That's infamously hard to explain and you should probably google it to get a better explanation than mine. But a continuation is simply a function of one argument, whose body represents a certain 'continuation' of a value.
Like, when we have (+ 2 (* 2 exp)) with exp being a random expression, if we evaluate that expression there is a 'continuation' waiting for that result, a place where evaluation continues, if it evaluates to 3 for instance, it inserts that value into the expression (* 2 3) and goes on from there with the next 'continuation', or the place where evaluation continues, which is (+ 2 ...).
In almost all contexts of programming languages, the place where computation continues with the value is the same place as it started, though the return statement in many languages is a key counterexample, the continuation is at a totally different place than the return statement itself.
In Scheme, you have direct control over your continuations; you can "capture" them, as done here. What f does is nothing more than evaluate to the current continuation: when (lambda (x) x) is called with the current continuation, it simply evaluates to it, and so does the whole call/cc expression. As I said, continuations are functions themselves, whose body can be seen as the continuation they capture; the designers of Scheme famously showed that continuations are simply lambda abstractions.
So in the code, f first evaluates to the continuation it was called in. Then this continuation, as a function, is applied to 'f (a symbol). This means that the symbol is brought back to that continuation, where it is evaluated again as a symbol, revealing the function it is bound to, which is again called with a symbol as its argument, which is finally displayed.
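Before untangling the f example, a smaller sketch of call/cc may help:

```scheme
; The continuation captured here is "add 2 to whatever comes back".
; Invoking k with 3 jumps to that continuation with 3 as the value:
(display (+ 2 (call/cc (lambda (k) (k 3))))) ; prints 5
(newline)

; If k is never invoked, call/cc simply returns the lambda's result:
(display (+ 2 (call/cc (lambda (k) 10)))) ; prints 12
```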
Kind of mind-boggling. If you've seen the film 'Primer', maybe this explains it:
http://thisdomainisirrelevant.net/1047