Anyone got a good explanation of "combinators" (Y-combinators etc. and NOT the company)?
I'm looking for one for the practical programmer who understands recursion and higher-order functions, but doesn't have a strong theory or math background.
(Note: I'm talking about these things.)
Unless you're deeply into theory, you can regard the Y combinator
as a neat trick with functions, like monads.
Monads allow you to chain actions, the Y combinator allows you to
define self-recursive functions.
Python has built-in support for self-recursive functions, so you
can define them without Y:
> def fun():
>     print("bla")
>     fun()
> fun()
bla
bla
bla
...
fun is accessible inside fun itself, so we can easily call it.
But what if Python were different, and fun weren't accessible
inside fun?
> def fun():
>     print("bla")
>     # what to do here? (cannot call fun!)
The solution is to pass fun itself as an argument to fun:
> def fun(arg):      # fun receives itself as argument
>     print("bla")
>     arg(arg)       # to recur, fun calls itself, and passes itself along
And Y makes that possible:
> def Y(f):
>     return f(f)
> Y(fun)
bla
bla
bla
...
All it does is call a function with itself as its argument.
(I don't know if this definition of Y is 100% correct, but I think it's the general idea.)
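For the record, a genuine applicative-order fixed-point combinator (usually called Z rather than Y, because strict languages like Python need the extra wrapping) can be written as a one-liner. This is my own sketch, not part of the original answer; the factorial is just for illustration:
> # Z combinator: Z(f) returns a function that recurses without naming itself
> Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
> # "fact" never refers to itself by name; it only calls the rec argument it is given
> fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
> fact(5)
120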
Reginald Braithwaite (aka Raganwald) has been writing a great series on combinators in Ruby over at his new blog, homoiconic.
While he doesn't (to my knowledge) look at the Y-combinator itself, he does look at other combinators, for instance:
the Kestrel
the Thrush
the Cardinal
the Obdurate Kestrel
other Quirky Birds
and a few posts on how you can use them.
Quoting Wikipedia:
A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.
Now what does this mean? It means a combinator is a function (output is determined solely by its input) whose input includes a function as an argument.
What do such functions look like and what are they used for? Here are some examples:
(f o g)(x) = f(g(x))
Here o is a combinator that takes in two functions, f and g, and returns a function as its result: the composition of f with g, namely f o g.
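To make that concrete, here is a minimal sketch of the composition combinator in Python (my own illustration; the helper names are arbitrary):
compose = lambda f, g: lambda x: f(g(x))   # the "o" combinator

add_one = lambda x: x + 1
double  = lambda x: x * 2
compose(add_one, double)(3)   # add_one(double(3)) == 7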
Combinators can be used to hide logic. Say we have a data type NumberUndefined, which can take on either a numeric value Num x or the value Undefined, where x is a Number. Now we want to construct addition, subtraction, multiplication, and division for this new numeric type. The semantics are the same as those of Number, except that if Undefined is an input the output must also be Undefined, and dividing by the number 0 also yields Undefined.
One could write the tedious code as below:
Undefined +' num = Undefined
num +' Undefined = Undefined
(Num x) +' (Num y) = Num (x + y)
Undefined -' num = Undefined
num -' Undefined = Undefined
(Num x) -' (Num y) = Num (x - y)
Undefined *' num = Undefined
num *' Undefined = Undefined
(Num x) *' (Num y) = Num (x * y)
Undefined /' num = Undefined
num /' Undefined = Undefined
(Num x) /' (Num y) = if y == 0 then Undefined else Num (x / y)
Notice how they all have the same logic for handling Undefined input values. Only division does a bit more. The solution is to extract that logic out by making it a combinator.
comb op Undefined num = Undefined
comb op num Undefined = Undefined
comb op (Num x) (Num y) = Num (x `op` y)
x +' y = comb (+) x y
x -' y = comb (-) x y
x *' y = comb (*) x y
x /' y = if y == Num 0 then Undefined else comb (/) x y
This can be generalized into the so-called Maybe monad that programmers make use of in functional languages like Haskell, but I won't go there.
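As a rough Python analogue of the comb combinator above (my own sketch, with None standing in for Undefined):
def comb(op, x, y):
    # the shared "propagate Undefined" logic lives in one place
    if x is None or y is None:
        return None
    return op(x, y)

add = lambda x, y: comb(lambda a, b: a + b, x, y)
div = lambda x, y: None if y == 0 else comb(lambda a, b: a / b, x, y)

add(2, 3)      # 5
add(None, 3)   # None
div(2, 0)      # None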
A combinator is a function with no free variables. That means, amongst other things, that the combinator does not depend on anything outside of the function, only on the function parameters.
Using F# this is my understanding of combinators:
let sum a b = a + b;; //sum function (lambda)
In the above case sum is a combinator because both a and b are bound to the function parameters.
let sum3 a b c = sum (sum a b) c;;
The above function is not a combinator as it uses sum, which is not a bound variable (i.e. it doesn't come from any of the parameters).
We can make sum3 a combinator by simply passing the sum function as one of the parameters:
let sum3 a b c sumFunc = sumFunc (sumFunc a b) c;;
This way sumFunc is bound and hence the entire function is a combinator.
So, this is my understanding of combinators. Their significance, on the other hand, still escapes me. As others pointed out, fixed-point combinators allow one to express a recursive function without explicit recursion, i.e. instead of calling itself, the recursive function calls a lambda that is passed in as one of its arguments.
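To make that last point concrete, here is a small Python sketch of my own (not F#): the recursive function only ever calls the argument it is given, and a tiny fixed-point helper ties the knot.
def fix(f):
    def fixed(*args):
        return f(fixed, *args)   # hand the function back to itself
    return fixed

# factorial never mentions its own name; it calls "self", the passed-in lambda
fact = fix(lambda self, n: 1 if n == 0 else n * self(n - 1))
fact(5)   # 120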
Here is one of the most understandable combinator derivations that I found:
http://mvanier.livejournal.com/2897.html
This looks like a good one: http://www.catonmat.net/blog/derivation-of-ycombinator/
This is a good article.
The code examples are in Scheme, but they shouldn't be hard to follow.
I'm pretty short on theory, but I can give you an example that sets my imagination aflutter, which may be helpful to you. The simplest interesting combinator is probably "test".
Hope you know Python:
tru = lambda x,y: x
fls = lambda x,y: y
test = lambda l,m,n: l(m,n)
Usage:
>>> test(tru,"goto loop","break")
'goto loop'
>>> test(fls,"goto loop","break")
'break'
test evaluates to its second argument if the first argument is tru, and to the third otherwise.
>>> x = tru
>>> test(x,"goto loop","break")
'goto loop'
Entire systems can be built up from a few basic combinators.
(This example is more or less copied out of Types and Programming Languages by Benjamin C. Pierce)
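In the same spirit (these particular definitions are my own additions, following Pierce's Church-boolean encoding), other boolean combinators can be built on top of tru and fls:
And = lambda p, q: p(q, fls)   # tru only if both arguments are tru
Or  = lambda p, q: p(tru, q)   # tru if either argument is tru
Not = lambda p: p(fls, tru)

>>> test(And(tru, fls), "goto loop", "break")
'break'
>>> test(Or(tru, fls), "goto loop", "break")
'goto loop'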
In short, the Y combinator is a higher-order function that is used to implement recursion on lambda expressions (anonymous functions). Check the article How to Succeed at Recursion Without Really Recursing by Mike Vanier - it's one of the best practical explanations of this subject I've seen.
Also, scan the SO archives:
What is a y-combinator?
Y-Combinator Practical Example
I want to plot a function using surface in Julia. I managed to plot the desired function:
x = 0:0.1:4
y = 0:0.1:4
f(x,y) = x^0.2 * y^0.8
surface(x, y, f, camera=(10,30),linealpha=0.3, fc=:heat)
However, I would like f to be a proper function over which I could also optimize (e.g. utility maximisation in economics). This is my attempt:
function Utility(x1, x2)
    u = x.^0.2 .* y.^0.8
    return u
end
But it unfortunately does not work. Can anybody help me?
Best
Daniel
I think Benoit's comment really should be the answer, but let me expand a little bit.
First of all, an inline function definition is not any different from a multi-line function definition (see the first two examples in the docs here). Therefore, doing
utility(x, y) = x^0.2 * y^0.8
will give you a function that works exactly like
function utility(x, y)
    x^0.2 * y^0.8
end
However, your Utility function is actually different from your f function - you are defining it with the arguments x1 and x2, but in the function body you are using y rather than x2.
This would ordinarily raise an undefined variable error, except that in the code snippet you posted, y is already defined in global scope as the range 0:0.1:4, so the function will use this:
julia> y = 0:0.1:4
0.0:0.1:4.0
julia> u(x1, x2) = x1 .^ 0.2 * y .^ 0.8
u (generic function with 1 method)
julia> u(2.0, 0.0)
41-element Array{Float64,1}:
0.0
0.18205642030260805
0.3169786384922227
...
This is also where your introduction of broadcasting in the Utility function (the second difference between your two examples, as Benoit pointed out) comes back to haunt you: calling the function while relying on it to use the global variable y would error immediately without broadcasting (as you can't exponentiate a range):
julia> u2(x1, x2) = x1^0.2 * y^0.8
u2 (generic function with 1 method)
julia> u2(2.0, 0.0)
ERROR: MethodError: no method matching ^(::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}, ::Float64)
With broadcasting, however, this exponentiation works and returns the full range, with every element of the range exponentiated. Thus, your function returns an array rather than a single number (as you can see above from my call to u(2.0, 0.0)). This is what Plots complains about - it doesn't know how to plot an array when it expects to be plotting a single data point.
I am trying to write a function in Julia that takes in a multi-dimensional array (a data cube) and rescales every entry from 0 to 1. However, whenever I run the code in atom, I get the error
LoadError: MethodError: no method matching -(::Array{Float64,2}, ::Float64)
Closest candidates are:
-(::Float64, ::Float64) at float.jl:397
-(::Complex{Bool}, ::Real) at complex.jl:298
-(::Missing, ::Number) at missing.jl:97
...
Stacktrace:
[1] rescale_zero_one(::Array{Float64,2}) at D:\Julio\Documents\Michigan_v2\CS\EECS_598_Data_Science\codex\Codex_3\svd_video.jl:40
[2] top-level scope at D:\Julio\Documents\Michigan_v2\CS\EECS_598_Data_Science\codex\Codex_3\svd_video.jl:50 [inlined]
[3] top-level scope at .\none:0
in expression starting at D:\Julio\Documents\Michigan_v2\CS\EECS_598_Data_Science\codex\Codex_3\svd_video.jl:48
I have the basics of what my function must do, but I really don't understand some of the notation, what the error is telling me, or how to fix it.
function rescale_zero_one(A::Array)
    B = float(A)
    B -= minimum(B)
    B /= maximum(B)
    return B
end
m,n,j = size(movie_cube)
println(j)
C = Array{Float64}(UndefInitializer(),m,n,j)
for k in 1:j
    println(k)
    C[:,:,j] = rescale_zero_one(movie_cube[:,:,j])
end
the variable movie_cube is a 3 dimensional data array of Float64 entries and I just want to rescale the entries from zero to one. However, the error that I mentioned keeps appearing. I would really appreciate any help with this code!
Try using dot syntax for elementwise operations on an array!
function rescale_zero_one(A::Array)
    B = float.(A)
    B .-= minimum(B)
    B ./= maximum(B)
    return B
end
This code is a bit faster and simpler (it only makes two passes over the input matrix rather than five in the previous answer):
function rescale(A::Matrix)
    (a, b) = extrema(A)
    return (A .- a) ./ (b - a)
end
This can be generalized to three dimensions, so that you don't need the outer loop over the dimensions in C. Warning: this solution is actually a bit slow, since extrema/maximum/minimum are slow when using the dims keyword, which is quite strange:
function rescale(A::Array{T, 3}) where {T}
    mm = extrema(A, dims=(1,2))
    a, b = first.(mm), last.(mm)
    return (A .- a) ./ (b .- a)
end
Now you could just write C = rescale(movie_cube). You can even generalize this further:
function rescale(A::Array{T, N}; dims=ntuple(identity, N)) where {T,N}
    mm = extrema(A, dims=dims)
    a, b = first.(mm), last.(mm)
    return (A .- a) ./ (b .- a)
end
Now you can normalize your multidimensional array along any dimensions you like. Current behaviour becomes
C = rescale(movie_cube, dims=(1,2))
Rescaling each column (taking extrema along the first dimension only) is
C = rescale(movie_cube, dims=(1,))
Default behaviour is to rescale the entire array:
C = rescale(movie_cube)
One more thing, this is a bit odd:
C = Array{Float64}(UndefInitializer(),m,n,j)
It's not wrong, but it is more common to use the shorter and more elegant:
C = Array{Float64}(undef, m, n, j)
You might also consider simply writing: C = similar(movie_cube) or C = similar(movie_cube, Float64).
Edit: Another general solution is not to implement the dimension handling in the rescale function, but rather to leverage mapslices. Then:
function rescale(A::Array)
    (a, b) = extrema(A)
    return (A .- a) ./ (b - a)
end
C = mapslices(rescale, A, dims=(1,2))
This is also not the fastest solution, for reasons I don't understand. I really think this ought to be fast, and might be sped up in a future version of Julia.
Unlike Matlab, Octave Symbolic has no piecewise function. Is there a work around? I would like to do something like this:
syms x
y = piecewise(x < 0, -1, x > 0, 1)
Relatedly, how does one get pieces of a piecewise function? I ran the following:
>> int (exp(-a*x), x, 0, t)
And got the following correct answer displayed and stored in a variable:
        t           for a = 0

      -a*t
 1   e
 - - -----          otherwise
 a     a
But now I would like to access the "otherwise" part of the answer so I can factor it. How do I do that?
(Yes, I can factor it in my head, but I am practicing for when more complicated expressions come along. I am also only really looking for an approach using symbolic expressions -- even though in any single case numerics may work fine, I want to understand the symbolic approach.)
Thanks!
Matlab's piecewise function seems to be fairly new (introduced in 2016b), but it basically just looks like a glorified ternary operator. Unfortunately I don't have 2016b to check whether it performs any checks on the inputs or not, but in general you can recreate a 'ternary' operator in Octave by indexing into a cell array using logical indexing. E.g.
{@() return_A(), @() return_B(), @() return_default()}([test1, test2, true]){1}()
Explanation:
Step 1: You put all the values of interest in a cell array. Wrap them in function handles if you want to prevent them from being evaluated at the time of parsing (e.g. if you want one of the ternary operator's possible outcomes to be raising an error).
Step 2: Index this cell array using logical indexing, where at each index you perform a logical test
Step 3: If you need a 'default' case, use a 'true' test for the last element.
Step 4: From the cell (sub)array that results from above, select the first element and 'run' the resulting function handle. Selecting the first element means that if more than one test succeeds, you only pick the first result; and since the 'default' test always succeeds, this also ensures the default is not picked unless it is the only test that succeeds.
Here are the above steps implemented as a function (appropriate sanity checks omitted here for brevity), following the same syntax as Matlab's piecewise:
function Out = piecewise (varargin)
  Conditions = varargin(1:2:end);   % Select all 'odd' inputs
  Values     = varargin(2:2:end);   % Select all 'even' inputs
  N = length (Conditions);
  if length (Values) ~= N              % 'default' case has been provided
    Values{end+1}   = Conditions{end}; % move default return-value to 'Values'
    Conditions{end} = true;            % replace final (i.e. default) test with true
  end
  % Wrap return-values into function-handles
  ValFuncs = cell (1, N);
  for n = 1 : N; ValFuncs{n} = @() Values{n}; end
  % Grab the function handle for the first successful test and call it to return its value
  Out = ValFuncs([Conditions{:}]){1}();
end
Example use:
>> syms x t;
>> F = @(a) piecewise(a == 0, t, (1/a)*exp(-a*t)/a);
>> F(0)
ans = (sym) t
>> F(3)
ans = (sym)

    -3⋅t
   ℯ
   ─────
     9
I have two functions f and g:
let f (x:float) (y:float) =
    x * y

let g (x:float) =
    x * 2.0
I want to compose (>>) them to get a new function that performs f and then g on the result.
The solution should behave like:
let h x y =
    (f x y) |> g
This does not work:
// Does not compile
let h =
    f >> g
How should >> be used?
I think you want to achieve this:
let fog x = f x >> g
You can't compose them directly as f >> g in that order. It makes sense: f expects two parameters, so f x results in a partially applied function, but g expects a value, not a function.
Composing the other way around works, and in your specific example you even get the same results, because you are using commutative functions. You can do g >> f: since g expects a single value, applying a value to g gives you another value (not a function), and f, which expects two values, then gives you a partially applied function.
Writing in point-free style, i.e. defining functions without explicit arguments, can become ugly when the implicit arguments are more than one.
It can always be done, with the correct combination of operators. But the outcome is going to be a disappointment and you will lose the primary benefit of point-free style - simplicity and legibility.
For fun and learning, we'll give it a try. Let's start with the explicit (a.k.a. "pointful") style and work from there.
(Note: we're going to rearrange our composition operators into their explicit form (>>) (a) (b) rather than the more usual a >> b. This will create a bunch of parentheses, but it will make things easier to grok, without worrying about sometimes-unintuitive operator precedence rules.)
let h x y = f x y |> g
let h x = f x >> g
// everybody puts on their parentheses!
let h x = (>>) (f x) (g)
// swap order
let h x = (<<) (g) (f x)
// let's put another pair of parentheses around the partially applied function
let h x = ((<<) g) (f x)
There we are! See, now h x is expressed in the shape we want - "pass x to f, then pass the result to another function".
That function happens to be ((<<) g), which is the function that takes a float -> float as argument and returns its composition with g.
(A composition where g comes second, which is important, even if in the particular example used it doesn't make a difference.)
Our float -> float argument is, of course, (f x) i.e. the partial application of f.
So, the following compiles:
let h x = x |> f |> ((<<) g)
and that can now be quite clearly simplified to
let h = f >> ((<<) g)
Which isn't all that awful-looking, when you already know what it means. But anybody with any sense will much rather write and read let h x y = f x y |> g.
So, I was just thinking about how cool chaining is and how it makes things easier to read. With a lot of languages, when applying a bunch of functions to a variable, you'd write something like this:
i(h(g(f(x))))
And you have to read it from right-to-left or inner-most to outer-most. You apply f first, then g, and so forth. But if it were chained, it would look more like
x|f|g|h|i
And you could read it like a normal human being. So, my question is: there have to be some languages that do it that way. What are they? Is that what these fancy-pants functional programming languages do?
Because of this, I usually end up creating a whole bunch of temp variables so that I can split it onto separate lines and make it more readable:
a = f(x)
b = g(a)
c = h(b)
what_i_really_wanted_all_along = i(c)
Whereas with my magical language, you could still split it onto different lines, if they're getting too long, without needing intervening variables:
x | f
| g
| h
| i
Yes, with F# you have a pipeline operator |> (also called forward pipe operator, and you have a backward pipe <|).
You write it like: x |> f |> g |> h |> i
Check this blog post that gives a good idea of real life usage.
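For comparison, a little left-to-right pipe helper is easy to sketch even in plain Python (my own illustration, not from that blog post):
from functools import reduce

def pipe(value, *funcs):
    # thread value through funcs left to right: pipe(x, f, g) == g(f(x))
    return reduce(lambda acc, fn: fn(acc), funcs, value)

pipe(6, lambda x: x * 2, lambda x: x + 2, lambda x: x * 3)   # ((6*2)+2)*3 == 42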
It's not exclusive to functional programming, though it is probably best implemented in functional languages, since the whole concept of function composition is squarely in functional programming's domain.
For one thing, any language with object-oriented bent has chaining for methods which return an instance of the class:
obj.method1().method2().method3(); // JavaScript
MyClass->new()->f()->g()->i(); # Perl
Alternately, the most famous yet the least "programming-language" example of this chaining pattern would be something completely non-OO and non-functional ... you guessed it, pipes in Unix. As in, ls | cut -c1-4 | sort -n. Since shell programming is considered a language, I say it's a perfectly valid example.
Well, you can do this in JavaScript and its relatives:
function compose()
{
    var funcs = Array.prototype.slice.call(arguments);
    return function(x)
    {
        var i = 0, len = funcs.length;
        while(i < len)
        {
            x = funcs[i].call(null, x);
            ++i;
        }
        return x;
    }
}
function doubleIt(x) { print('Doubling...'); return x * 2; }
function addTwo(x) { print('Adding 2...'); return x + 2; }
function tripleIt(x) { print('Tripling...'); return x * 3; }
var theAnswer = compose(doubleIt, addTwo, tripleIt)( 6 );
print( 'The answer is: ' + theAnswer );
// Prints:
// Doubling...
// Adding 2...
// Tripling...
// The answer is: 42
As you can see, the functions read left-to-right and neither the object nor the functions need any special implementation. The secret is all in compose.
What you're describing is essentially the Fluent Interface pattern.
Wikipedia has a good example from a number of languages:
http://en.wikipedia.org/wiki/Fluent_interface
And Martin Fowler has his write up here:
http://www.martinfowler.com/bliki/FluentInterface.html
As DVK points out - any OO language where a method can return an instance of the class it belongs to can provide this functionality.
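For instance, a minimal fluent-style class might look like this in Python (my own sketch, not taken from Fowler's write-up): each method returns self, which is what lets the calls chain left to right.
class Pipeline:
    def __init__(self, value):
        self.value = value

    def apply(self, fn):
        self.value = fn(self.value)
        return self            # returning self is what enables chaining

    def result(self):
        return self.value

Pipeline(6).apply(lambda x: x * 2).apply(lambda x: x + 2).result()   # 14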
C# extension methods accomplish something very close to your magical language, if a little less concisely:
x.f()
.g()
.h()
.i();
Where the methods are declared thus:
static class Extensions
{
    static T f<T>(this T x) { return x; }
    static T g<T>(this T x) { return x; }
    ...
}
LINQ uses this very extensively.
Haskell. The following three examples are equivalent:
i(h(g(f(x)))) (Nested function calls)
x & f & g & h & i (Left-to-right chaining as requested, using & from Data.Function)
(i . h . g . f)(x) (Function composition, which is more common in Haskell)
http://www.haskell.org/haskellwiki/Function_composition
http://en.wikipedia.org/wiki/Function_composition_(computer_science)
I am not suggesting you should use Mathematica if you don't usually do math, but it certainly is flexible enough to support postfix notation. In fact, you may define your own notation, but let's stick with Postfix for simplicity.
You may enter:
Postfix[Sin[x]]
To get
x // Sin
Which translates to Postfix notation. Or if you have a deeper expression:
MapAll[Postfix, Cos[Sin[x]]]
To get:
(Postfix[x]//Sin)//Cos
Where you may see Postfix[x] first, as for Mathematica x is an expression to be evaluated later.
Conversely, you may input:
x // Sin // Cos
To get of course
Cos[Sin[x]]
Or you can use a very common idiom: applying Postfix in postfix form:
Cos[x] // Postfix
To get
x // Cos
HTH!
BTW:
As an answer to "Whereas with my magical language...?", see this:
(x//Sin
// Cos
// Exp
// Myfunct)
gives
Myfunct[E^Cos[Sin[x]]]
PS: As an exercise for the readers :) ... How to do this for functions that take n vars?
As has been previously mentioned, Haskell supports function composition, as follows:
(i . h . g . f) x, which is equivalent to: i(h(g(f(x))))
This is the standard order of operations for function composition in mathematics. Some people still consider this to be backward, however. Without getting too much into a debate over which approach is better, I would like to point out that you can easily define the flipped composition operator:
infixr 1 >>>, <<<
(<<<) = (.) -- regular function composition
(>>>) = flip (.) -- flipped function composition
(f >>> g >>> h >>> i) x
-- or --
(i <<< h <<< g <<< f) x
This is the notation used by the standard library Control.Category. (Although the actual type signature is generalized and works on other things besides functions). If you're still bothered by the parameter being at the end, you can also use a variant of the function application operator:
infixr 0 $
infixl 0 %
f $ x = f x -- regular function application
(%) = flip ($) -- flipped function application
i $ h $ g $ f $ x
-- or --
x % f % g % h % i
Which is close to the syntax you want. To my knowledge, % is NOT a built-in operator in Haskell, but $ is. I've glossed over the infix bits. If you're curious, that's a technicality that makes the above code parse as:
(((x % f) % g) % h) % i -- intended
and not:
x % (f % (g % (h % i))) -- wrong grouping (which then leads to a type error)