Update 2: examples removed, because they were misleading. The ones below are more relevant.
My question:
Is there a programming language with such a construct?
Update:
Now when I think about it, Prolog has something similar.
It even allows defining operations at the definition line.
(forget about backtracking and relations - think about syntax)
I asked this question because I believe, it's a nice thing to have symmetry in a language.
Symmetry between "in" parameters and "out" parameters.
If returning values like that would be easy, we could drop explicit returning in designed language.
Returning pairs ... I think this is a hack. We do not need a data structure just to pass multiple values out of a function.
Update 2:
To give an example of syntax I'm looking for:
f (s, d&) = // & indicates 'out' variable
d = s+s.
main =
f("say twice", &twice) // & indicates 'out' variable declaration
print(twice)
main2 =
print (f("say twice", _))
Or in functional + prolog style
f $s (s+s). // use $ to mark that s will get its value in another part of the code
main =
f "say twice" $twice // on call site the second parameter will get it's value from
print twice
main2 =
print (f "Say twice" $_) // anonymous variable
In a proposed language, there are no expressions, because all returns are through parameters. This would be cumbersome in situations where deep hierarchical function calls are natural. Lisp'ish example:
(let x (* (+ 1 2) (+ 3 4))) // equivalent to C x = ((1 + 2) * (3 + 4))
would need in the language names for all temporary variables:
+ 1 2 res1
+ 3 4 res2
* res1 res2 x
So I propose anonymous variables that turn a whole function call into value of this variable:
* (+ 1 2 _) (+ 3 4 _)
This is not very natural, because of all the cultural baggage we have, but I want to throw away all the preconceptions about syntax we currently have.
<?php
function f($param, &$ret) {
$ret = $param . $param;
}
f("say twice", $twice);
echo $twice;
?>
$twice is seen after the call to f(), and it has the expected value. If you remove the ampersand, there are errors. So it looks like PHP will declare the variable at the point of calling. I'm not convinced that buys you much, though, especially in PHP.
"Is there a programming language with such a construct?"
Your question is in fact a little unclear.
In a sense, any language that supports assignment to [the variable state associated with] a function argument, supports "such a construct".
C supports it because "void f (type *address)" allows modification of anything address points to. Java supports it because "void f (Object x)" allows any (state-modifying) invocation of some method of x. COBOL supports it because "PROCEDURE DIVISION USING X" can involve an X that holds a pointer/memory address, ultimately allowing the callee to change the state of the thing pointed to by that address.
From that perspective, I'd say almost every language known to mankind supports "such a construct", with the exception perhaps of languages such as Tutorial D, which claim to be "absolutely pointer-free".
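To make that concrete, here is a minimal sketch of the idea in Python (just an illustration; the names f and out are arbitrary): because the caller and the callee share the same mutable list object, writing into it behaves like an 'out' parameter.
def f(s, out):
    # write the "return value" through the argument instead of returning it
    out[0] = s + s

result = [None]          # the caller allocates the out slot
f("say twice", result)
print(result[0])         # say twicesay twice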
I'm having a hard time understanding what you want. You want to put the return type on call signature? I'm sure someone could hack that together but is it really useful?
// fakelang example - use a ; to separate ins and outs
function f(int in1, int in2; int out1, int out2, int out3) {...}
// C++0x-ish
auto f(int in1, int in2) -> int o1, int o2, int o3 {...}
int a, b, c;
a, b, c = f(1, 2);
I get the feeling this would be implemented internally this way:
LEA EAX, c // push output parameter pointers first, in reverse order
PUSH EAX
LEA EAX, b
PUSH EAX
LEA EAX, a
PUSH EAX
PUSH 1 // push input parameters
PUSH 2
CALL f // Caller treats the outputs as references
ADD ESP,20 // clean the stack
For your first code snippet, I'm not aware of any such languages, and frankly I'm glad it is the case. Declaring a variable in the middle of an expression like that, and then using it outside said expression, looks very wrong to me. If anything, I'd expect the scope of such a variable to be restricted to the function call, but then of course it's quite pointless in the first place.
For the second one - multiple return values - pretty much any language with first-class tuple support has something close to that. E.g. Python:
def foo(x, y):
    return (x + 1), (y + 1)

x, y = foo(1, 2)
Lua doesn't have first-class tuples (i.e. you can't bind a tuple value to a single variable - you always have to expand it, possibly discarding part of it), but it does have multiple return values, with essentially the same syntax:
function foo(x, y)
return (x + 1), (y + 1)
end
local x, y = foo(1, 2)
F# has first-class tuples, and so everything said earlier about Python applies to it as well. But it can also simulate tuple returns for methods that were declared in C# or VB with out or ref arguments, which is probably the closest to what you describe - though it is still implicit (i.e. you don't specify the out-argument at all, even as _). Example:
// C# definition
int Foo(int x, int y, out int z)
{
z = y + 1;
return x + 1;
}
// explicit F# call
let mutable y = 0
let x = Foo(1, 2, &y)
// tupled F# call
let x, y = Foo(1, 2)
Here is how you would do it in Perl:
sub f { $_[1] = $_[0] . $_[0] } #in perl all variables are passed by reference
f("say twice", my $twice);
# or f("...", our $twice) or f("...", $twice)
# the last case is only possible if you are not running with "use strict;"
print $twice;
[edit] Also, since you seem interested in minimal syntax:
sub f { $_[1] = $_[0] x 2 } # x is the repetition operator
f "say twice" => $twice; # => is a quoting comma, used here just for clarity
print $twice;
is perfectly valid perl. Here's an example of normal quoting comma usage:
("abc", 1, "d e f", 2) # is the same as
(abc => 1, "d e f" => 2) # the => only quotes perl /\w+/ strings
Also, on return values: unless exited early with a "return", all perl subroutines automatically return the value of the last expression they evaluate, be it a single value or a list. Lastly, take a look at perl6's feed operators, which you might find interesting.
[/edit]
I am not sure exactly what you are trying to achieve with the second example, but the concept of implicit variables exists in a few languages; in Perl, it is $_.
An example would be some of Perl's builtins, which operate on $_ when they don't get an argument.
$string = "my string\n";
for ($string) { # loads "my string" into $_
chomp; # strips the last newline from $_
s/my/our/; # substitutes my for our in $_
print; # prints $_
}
without using $_, the above code would be:
chomp $string;
$string =~ s/my/our/;
print $string;
$_ is used in many places in Perl to avoid repeatedly passing temporary variables to functions.
Not programming languages, but various process calculi have syntax for binding names at the receiver call sites in the scope of process expressions dependent on them. While Pict has such syntax, it doesn't actually make sense in the derived functional syntax that you're asking about.
You might have a look at Oz. In Oz you only have procedures and you assign values to variables instead of returning them.
It looks like this:
proc {Max X Y Z}
if X >= Y then Z = X else Z = Y end
end
There are functions (that return values) but this is only syntactic sugar.
Also, Concepts, Techniques, and Models of Computer Programming is a great SICP-like book that teaches programming by using Oz and the Mozart Programming System.
I don't think so. Most languages that do something like that use Tuples so that there can be multiple return values. Come to think of it, the C-style reference and output parameters are mostly hacks around not being able to return Tuples...
Somewhat confusing, but C++ is quite happy with declaring variables and passing them as out parameters in the same statement:
void foo ( int &x, int &y, int &z ) ;
int a,b,c = (foo(a,b,c),c);
But don't do that outside of obfuscation contests.
You might also want to look at pass by name semantics in Algol, which your fuller description smells vaguely similar to.
A function can be a highly nested structure:
function a(x) {
return b(c(x), d(e(f(x), g())))
}
First, I'm wondering whether a function has an instance; that is, whether the evaluation of the function is the instance of the function. In that sense, the type is the function, and the instance is the evaluation of it. If it can be, then how would one model a function as a type (in some type-theory oriented language like Haskell or Coq)?
It's almost like:
type a {
field: x
constructor b {
constructor c {
parameter: x
},
...
}
}
But I'm not sure if I'm on the right track. I know you can say a function has a [return] type. But I'm wondering if a function can be considered a type, and if so, how to model it as a type in a type-theory-oriented language, where it models the actual implementation of the function.
I think the problem is that types based directly on the implementation (let's call them "i-types") don't seem very useful, and we already have good ways of modelling them (called "programs" -- ha ha).
In your specific example, the full i-type of your function, namely:
type a {
field: x
constructor b {
constructor c {
parameter: x
},
constructor d {
constructor e {
constructor f {
parameter: x
}
constructor g {
}
}
}
}
}
is just a verbose, alternative syntax for the implementation itself. That is, we could write this i-type (in a Haskell-like syntax) as:
itype a :: a x = b (c x) (d (e (f x) g))
On the other hand, we could convert your function implementation to Haskell term-level syntax directly to write it as:
a x = b (c x) (d (e (f x) g))
and the i-type and the implementation are exactly the same thing.
How would you use these i-types? The compiler might use them by deriving argument and return types to type-check the program. (Fortunately, there are well known algorithms, such as Algorithm W, for simultaneously deriving and type-checking argument and return types from i-types of this sort.) Programmers probably wouldn't use i-types directly -- they're too complicated to use for refactoring or reasoning about program behavior. They'd probably want to look at the types derived by the compiler for the arguments and return type.
In particular, "modelling" these i-types at the type level in Haskell doesn't seem productive. Haskell can already model them at the term level. Just write your i-types as a Haskell program:
import Data.Char (ord)

a x = b (c x) (d (e (f x) g))
b s t = sqrt $ fromIntegral $ length (s ++ t)
c = show
d = reverse
e c ds = show (sum ds + fromIntegral (ord c))
f n = if even n then 'E' else 'O'
g = [1.5..5.5]
and don't run it. Congratulations, you've successfully modelled these i-types! You can even use GHCi to query derived argument and return types:
> :t a
a :: Floating a => Integer -> a -- "a" takes an Integer and returns a float
>
Now, you are perhaps imagining that there are situations where the implementation and i-type would diverge, maybe when you start introducing literal values. For example, maybe you feel like the function f above:
f n = if even n then 'E' else 'O'
should be assigned a type something like the following, that doesn't depend on the specific literal values:
type f {
field: n
if_then_else {
constructor even { -- predicate
parameter: n
}
literal Char -- then-branch
literal Char -- else-branch
}
}
Again, though, you'd be better off defining an arbitrary term-level Char, like:
someChar :: Char
someChar = undefined
and modeling this i-type at the term-level:
f n = if even n then someChar else someChar
Again, as long as you don't run the program, you've successfully modelled the i-type of f, can query its argument and return types, type-check it as part of a bigger program, etc.
I'm not clear exactly what you are aiming at, so I'll try to point at some related terms that you might want to read about.
A function has not only a return type, but a type that describes its arguments as well. So the (Haskell) type of f reads "f takes an Int and a Float, and returns a List of Floats."
f :: Int -> Float -> [Float]
f i x = replicate i x
Types can also describe much more of the specification of a function. Here, we might want the type to spell out that the length of the list will be the same as the first argument, or that every element of the list will be the same as the second argument. Length-indexed lists (often called Vectors) are a common first example of Dependent Types.
You might also be interested in functions that take types as arguments, and return types. These are sometimes called "type-level functions". In Coq or Idris, they can be defined the same way as more familiar functions. In Haskell, we usually implement them using Type Families, or using Type Classes with Functional Dependencies.
Returning to the first part of your question, Beta Reduction is the process of filling in concrete values for each of the function's arguments. I've heard people describe expressions as "after reduction" or "fully reduced" to emphasize some stage in this process. This is similar to a function Call Site, but emphasizes the expression & arguments, rather than the surrounding context.
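As a tiny illustration (written in Python syntax rather than lambda-calculus notation), beta reduction is just the substitution step below:
(lambda x: x + 1)(5)   # beta-reduces to 5 + 1, which evaluates to 6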
Let's say I have 1000 functions defined as follows
void dummy1(int a);
void dummy2(int a, int aa);
void dummy3(int a, int aa, int aaa);
.
.
.
void dummy1000(int a, int aa, int aaa, ...);
I want to write a function that takes an integer n (n < 1000) and calls the nth dummy function (in the case of 10, dummy10) with exactly n arguments (the arguments can be any integer, let's say 0) as required. I know this can be achieved by writing a switch statement with 1000 cases, which is not practical.
In my opinion, this cannot be achieved without recompilation at run time, so languages like Java, C, and C++ will never let such a thing happen.
Hopefully, there is a way to do this. If so I am curious.
Note: This is not something that I will ever use, I asked question just because of my curiosity.
In modern functional languages, you can make a list of functions which take a list as an argument. This will arguably solve your problem, but it is also arguably cheating, as it is not quite the statically-typed implementation your question seems to imply. However, it is pretty much what dynamic languages such as Python, Ruby, or Perl do when using "manual" argument handling...
Anyway, the following is in Haskell: it supplies the nth function (from its first argument fs) a list of n copies of the second argument (x), and returns the result. Of course, you will need to put together the list of functions somehow, but unlike a switch statement this list will be reusable as a first-class argument.
selectApplyFunction :: [ [Int] -> a ] -> Int -> Int -> a
selectApplyFunction fs x n = (fs !! (n-1)) (replicate n x)
dummy1 [a] = 5 * a
dummy2 [a, b] = (a + 3) * b
dummy3 [a, b, c] = (a*b*c) / (a*b + b*c + c*a)
...
myFunctionList = [ dummy1, dummy2, dummy3, ... ]
-- (myFunction n) provides n copies of the number 42 to the n'th function
myFunction = selectApplyFunction myFunctionList 42
-- call the 666'th function with 666 copies of 42
result = myFunction 666
Of course, you will get an exception if n is greater than the number of functions, or if the function can't handle the list it is given. Note, too, that it is poor Haskell style -- mainly because of the way it abuses lists to solve your problem...
No, you are incorrect. Most modern languages support some form of Reflection that will allow you to call a function by name and pass params to it.
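For instance, a rough Python sketch of that idea (dummy1, dummy2, ... are placeholders standing in for the question's functions): the nth function is looked up by name at run time and handed exactly n arguments.
def dummy1(a): return a
def dummy2(a, aa): return a + aa
def dummy3(a, aa, aaa): return a + aa + aaa
# ... and so on up to dummy1000 ...

def call_nth(n):
    func = globals()["dummy%d" % n]   # reflection-style lookup by name
    return func(*([0] * n))           # unpack exactly n zero arguments

print(call_nth(3))                    # calls dummy3(0, 0, 0)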
You can create an array of functions in most of modern languages.
In pseudo code,
var dummy = new Array();
dummy[1] = function(int a);
dummy[2] = function(int a, int aa);
...
var result = dummy[whateveryoucall](1,2,3,...,whateveryoucall);
In functional languages you could do something like this, in strongly typed ones, like Haskell, the functions must have the same type, though:
funs = [reverse, tail, init] -- 3 functions of type [a]->[a]
run fn args = (funs !! fn) $ args -- applies the function at index fn to args
In object oriented languages, you can use function objects and reflection together to achieve exactly what you want. The problem of the variable number of arguments is solved by passing appropriate POJOs (recalling C structs) to the function object.
interface Functor<A,B> {
public B compute(A input);
}
class SumInput {
private int x, y;
// getters and setters
}
class Sum implements Functor<SumInput, Integer> {
@Override
public Integer compute(SumInput input) {
return input.getX() + input.getY();
}
}
Now imagine you have a large number of these "functors". You gather them in a configuration file (maybe an XML file with metadata about each functor, usage scenarios, instructions, etc...) and return the list to the user.
The user picks one of them. By using reflection, you can see what is the required input and the expected output. The user fills in the input, and by using reflection you instantiate the functor class (newInstance()), call the compute() function and get the output.
When you add a new functor, you just have to change the list of the functors in the config file.
All,
When we are passing an expression as a parameter, how does the evaluation occur? Here is a small example. This is just a pseudocode kind of example:
f (x,y)
{
y = y+1;
x = x+y;
}
main()
{
a = 2; b = 2;
f(a+b, a)
print a;
}
When accessing variable x in f, does it access the address of the temp variable which contains the result of a+b, or will it access the individual addresses of a and b and then evaluate a+b?
Please help.
Regards,
darkie15
Somewhat language dependent, but in C++
f(a+b, a)
evaluates a + b, pushes the result of the evaluation onto the stack, and then passes a reference to this value to f(). This will only work if the first parameter of f() is a const reference, as temporary objects like the result of a + b can only be bound to const references.
In C or C++, as long as x and y are not pointers (in which case the expression is not useful anyway), they are both evaluated before the function call and the VALUE of the result is pushed on the stack. There are no references involved, at all.
All parameters in C and C++ are always passed by value. If a reference type (eg int*, int&) is passed to the function, the VALUE of the reference is passed. While the referenced object may be changed by accessing eg *x within the function, the value of the reference still cannot be changed, because C and C++ parameters are always always always passed by value only.
EDIT: an exception in C and C++ is the case in which some overloaded operator is defined like the following:
T* operator+ (L lhs, R rhs) {return new T(lhs, rhs);}
and x is an L, and y is an R. In this case, the value of the T* generated by the function is pushed on the stack as a parameter. Don't write code like that, it confuses other programmers =D.
I quite often see on the Internet various complaints that other people's examples of currying are not currying, but are actually just partial application.
I've not found a decent explanation of what partial application is, or how it differs from currying. There seems to be a general confusion, with equivalent examples being described as currying in some places, and partial application in others.
Could someone provide me with a definition of both terms, and details of how they differ?
Currying is converting a single function of n arguments into n functions with a single argument each. Given the following function:
function f(x,y,z) { z(x(y));}
When curried, becomes:
function f(x) { lambda(y) { lambda(z) { z(x(y)); } } }
In order to get the full application of f(x,y,z), you need to do this:
f(x)(y)(z);
Many functional languages let you write f x y z. If you only call f x y or f(x)(y), then you get a partially-applied function: the return value is a closure of lambda(z){z(x(y))} with the values of x and y already passed in.
One way to use partial application is to define functions as partial applications of generalized functions, like fold:
function fold(combineFunction, accumulator, list) {/* ... */}
function sum = curry(fold)(lambda(accum,e){e+accum})(0);
function length = curry(fold)(lambda(accum,_){1+accum})(0);
function reverse = curry(fold)(lambda(accum,e){concat(e,accum)})(empty-list);
/* ... */
#list = [1, 2, 3, 4]
sum(list) //returns 10
#f = fold(lambda(accum,e){e+accum}) //f = lambda(accumulator,list) {/*...*/}
f(0,list) //returns 10
#g = f(0) //same as sum
g(list) //returns 10
The easiest way to see how they differ is to consider a real example. Let's assume that we have a function Add which takes 2 numbers as input and returns a number as output, e.g. Add(7, 5) returns 12. In this case:
Partially applying the function Add with a value 7 will give us a new function as output. That function itself takes 1 number as input and outputs a number. As such:
Partial(Add, 7); // returns a function f2 as output
// f2 takes 1 number as input and returns a number as output
So we can do this:
f2 = Partial(Add, 7);
f2(5); // returns 12;
// f2(7)(5) is just a syntactic shortcut
Currying the function Add will give us a new function as output. That function itself takes 1 number as input and outputs yet another new function. That third function then takes 1 number as input and returns a number as output. As such:
Curry(Add); // returns a function f2 as output
// f2 takes 1 number as input and returns a function f3 as output
// i.e. f2(number) = f3
// f3 takes 1 number as input and returns a number as output
// i.e. f3(number) = number
So we can do this:
f2 = Curry(Add);
f3 = f2(7);
f3(5); // returns 12
In other words, "currying" and "partial application" are two totally different functions. Currying takes exactly 1 input, whereas partial application takes 2 (or more) inputs.
Even though they both return a function as output, the returned functions are of totally different forms as demonstrated above.
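If it helps to see the two helpers spelled out, here is one possible Python sketch of Partial and Curry for the two-argument Add above (functools.partial stands in for Partial; the hand-rolled curry is only for illustration):
from functools import partial

def add(x, y):
    return x + y

def curry(f):
    # turn a two-argument function into nested one-argument functions
    return lambda x: lambda y: f(x, y)

f2 = partial(add, 7)   # partial application: one ordinary call away from a result
print(f2(5))           # 12

f3 = curry(add)(7)     # currying: each argument gets its own call
print(f3(5))           # 12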
Note: this was taken from F# Basics an excellent introductory article for .NET developers getting into functional programming.
Currying means breaking a function with many arguments into a series
of functions that each take one argument and ultimately produce the
same result as the original function. Currying is probably the most
challenging topic for developers new to functional programming, particularly because it
is often confused with partial application. You can see both at work
in this example:
let multiply x y = x * y
let double = multiply 2
let ten = double 5
Right away, you should see behavior that is different from most
imperative languages. The second statement creates a new function
called double by passing one argument to a function that takes two.
The result is a function that accepts one int argument and yields the
same output as if you had called multiply with x equal to 2 and y
equal to that argument. In terms of behavior, it’s the same as this
code:
let double2 z = multiply 2 z
Often, people mistakenly say that multiply is curried to form double.
But this is only somewhat true. The multiply function is curried, but
that happens when it is defined because functions in F# are curried by
default. When the double function is created, it’s more accurate to
say that the multiply function is partially applied.
The multiply function is really a series of two functions. The first
function takes one int argument and returns another function,
effectively binding x to a specific value. This function also accepts
an int argument that you can think of as the value to bind to y. After
calling this second function, x and y are both bound, so the result is
the product of x and y as defined in the body of multiply.
To create double, the first function in the chain of multiply
functions is evaluated to partially apply multiply. The resulting
function is given the name double. When double is evaluated, it uses
its argument along with the partially applied value to create the
result.
Interesting question. After a bit of searching, "Partial Function Application is not currying" gave the best explanation I found. I can't say that the practical difference is particularly obvious to me, but then I'm not an FP expert...
Another useful-looking page (which I confess I haven't fully read yet) is "Currying and Partial Application with Java Closures".
It does look like this is a widely-confused pair of terms, mind you.
I have answered this in another thread https://stackoverflow.com/a/12846865/1685865 . In short, partial function application is about fixing some arguments of a given multivariable function to yield another function with fewer arguments, while Currying is about turning a function of N arguments into a unary function which returns a unary function...[An example of Currying is shown at the end of this post.]
Currying is mostly of theoretical interest: one can express computations using only unary functions (i.e. every function is unary). In practice and as a byproduct, it is a technique which can make many useful (but not all) partial functional applications trivial, if the language has curried functions. Again, it is not the only means to implement partial applications. So you could encounter scenarios where partial application is done in other way, but people are mistaking it as Currying.
(Example of Currying)
In practice one would not just write
lambda x: lambda y: lambda z: x + y + z
or the equivalent javascript
function (x) { return function (y){ return function (z){ return x + y + z }}}
instead of
lambda x, y, z: x + y + z
for the sake of Currying.
Currying is a function of one argument which takes a function f and returns a new function h. Note that h takes an argument from X and returns a function that maps Y to Z:
curry(f) = h
f: (X x Y) -> Z
h: X -> (Y -> Z)
Partial application is a function of two(or more) arguments which takes a function f and one or more additional arguments to f and returns a new function g:
part(f, 2) = g
f: (X x Y) -> Z
g: Y -> Z
The confusion arises because with a two-argument function the following equality holds:
partial(f, a) = curry(f)(a)
Both sides will yield the same one-argument function.
The equality is not true for higher arity functions because in this case currying will return a one-argument function, whereas partial application will return a multiple-argument function.
The difference is also in the behavior: whereas currying transforms the whole original function recursively (once for each argument), partial application is just a one-step replacement.
Source: Wikipedia Currying.
Simple answer
Curry: lets you call a function, splitting it in multiple calls, providing one argument per-call.
Partial: lets you call a function, splitting it in multiple calls, providing multiple arguments per-call.
Simple hints
Both allow you to call a function providing less arguments (or, better, providing them cumulatively). Actually both of them bind (at each call) a specific value to specific arguments of the function.
The real difference can be seen when the function has more than 2 arguments.
Simple e(c)(sample)
(in Javascript)
We want to run the following process function on different subjects (e.g. let's say our subjects are "subject1" and "foobar" strings):
function process(context, successCallback, errorCallback, subject) {...}
Why pass the context and the callbacks every time, if they will always be the same?
Just bind some values for the function:
processSubject = _.partial(process, my_context, my_success, my_error)
// assign fixed values to the first 3 arguments of the `process` function
and call it on subject1 and foobar, omitting the repetition of the first 3 arguments, with:
processSubject('subject1');
processSubject('foobar');
Comfy, isn't it? 😉
With currying you'd instead need to pass one argument at a time:
curriedProcess = _.curry(process); // make the function curry-able
processWithBoundedContext = curriedProcess(my_context);
processWithCallbacks = processWithBoundedContext(my_success)(my_error); // note: these are two sequential calls
result1 = processWithCallbacks('subject1');
// same as: process(my_context, my_success, my_error, 'subject1');
result2 = processWithCallbacks('foobar');
// same as: process(my_context, my_success, my_error, 'foobar');
Disclaimer
I skipped all the academic/mathematical explanation. Cause I don't know it. Maybe it helped 🙃
EDIT:
As added by @basickarl, a further slight difference in use of the two functions (see Lodash for examples) is that:
partial returns a pre-cooked function that can be called once with the missing argument(s) and return the final result;
while curry is called multiple times (once for each argument), returning a pre-cooked function each time, except when called with the last argument, which returns the actual result from processing all the arguments.
With ES6:
here's a quick example of how concise currying and partial application are in ECMAScript 6.
const partialSum = math => (eng, geo) => math + eng + geo;
const curriedSum = math => eng => geo => math + eng + geo;
The difference between curry and partial application can be best illustrated through this following JavaScript example:
function f(x, y, z) {
return x + y + z;
}
var partial = f.bind(null, 1);
6 === partial(2, 3);
Partial application results in a function of smaller arity; in the example above, f has an arity of 3 while partial only has an arity of 2. More importantly, a partially applied function would return the result right away upon being invoked, not another function down the currying chain. So if you are seeing something like partial(2)(3), it's not partial application in actuality.
Further reading:
Functional Programming in 5 minutes
Currying: Contrast with Partial Function Application
I had this question a lot while learning and have since been asked it many times. The simplest way I can describe the difference is that both are the same :) Let me explain...there are obviously differences.
Both partial application and currying involve supplying arguments to a function, perhaps not all at once. A fairly canonical example is adding two numbers. In pseudocode (actually JS without keywords), the base function may be the following:
add = (x, y) => x + y
If I wanted an "addOne" function, I could partially apply it or curry it:
addOneC = curry(add, 1)
addOneP = partial(add, 1)
Now using them is clear:
addOneC(2) #=> 3
addOneP(2) #=> 3
So what's the difference? Well, it's subtle, but partial application involves supplying some arguments and the returned function will then execute the main function upon its next invocation, whereas currying will keep waiting until it has all the arguments necessary:
curriedAdd = curry(add) # notice, no args are provided
addOne = curriedAdd(1) # returns a function that can be used to provide the last argument
addOne(2) #=> returns 3, as we want
partialAdd = partial(add) # no args provided, but this still returns a function
addOne = partialAdd(1) # oops! can only use a partially applied function once, so now we're trying to add one to an undefined value (no second argument), and we get an error
In short, use partial application to prefill some values, knowing that the next time you call the method, it will execute, leaving undefined all unprovided arguments; use currying when you want to continually return a partially-applied function as many times as necessary to fulfill the function signature. One final contrived example:
curriedAdd = curry(add)
curriedAdd()()()()()(1)(2) # ugly and dumb, but it works
partialAdd = partial(add)
partialAdd()()()()()(1)(2) # second invocation of those 7 calls fires it off with undefined parameters
Hope this helps!
UPDATE: Some languages or lib implementations will allow you to pass an arity (total number of arguments in final evaluation) to the partial application implementation which may conflate my two descriptions into a confusing mess...but at that point, the two techniques are largely interchangeable.
For me partial application must create a new function where the used arguments are completely integrated into the resulting function.
Most functional languages implement currying by returning a closure: they do not evaluate under the lambda when partially applied. So, for partial application to be interesting, we need to draw a distinction between currying and partial application and consider partial application as currying plus evaluation under the lambda.
I could be very wrong here, as I don't have a strong background in theoretical mathematics or functional programming, but from my brief foray into FP, it seems that currying tends to turn a function of N arguments into N functions of one argument, whereas partial application [in practice] works better with variadic functions with an indeterminate number of arguments. I know some of the examples in previous answers defy this explanation, but it has helped me the most to separate the concepts. Consider this example (written in CoffeeScript for succinctness, my apologies if it confuses further, but please ask for clarification, if needed):
# partial application
partial_apply = (func) ->
args = [].slice.call arguments, 1
-> func.apply null, args.concat [].slice.call arguments
sum_variadic = -> [].reduce.call arguments, (acc, num) -> acc + num
add_to_7_and_5 = partial_apply sum_variadic, 7, 5
add_to_7_and_5 10 # returns 22
add_to_7_and_5 10, 11, 12 # returns 45
# currying
curry = (func) ->
num_args = func.length
helper = (prev) ->
->
args = prev.concat [].slice.call arguments
return if args.length < num_args then helper args else func.apply null, args
helper []
sum_of_three = (x, y, z) -> x + y + z
curried_sum_of_three = curry sum_of_three
curried_sum_of_three 4 # returns a function expecting more arguments
curried_sum_of_three(4)(5) # still returns a function expecting more arguments
curried_sum_of_three(4)(5)(6) # returns 15
curried_sum_of_three 4, 5, 6 # returns 15
This is obviously a contrived example, but notice that partially applying a function that accepts any number of arguments allows us to execute a function but with some preliminary data. Currying a function is similar but allows us to execute an N-parameter function in pieces until, but only until, all N parameters are accounted for.
Again, this is my take from things I've read. If anyone disagrees, I would appreciate a comment as to why rather than an immediate downvote. Also, if the CoffeeScript is difficult to read, please visit coffeescript.org, click "try coffeescript" and paste in my code to see the compiled version, which may (hopefully) make more sense. Thanks!
I'm going to assume most people who ask this question are already familiar with the basic concepts, so there is no need to talk about that. It's the overlap that is the confusing part.
You might be able to fully use the concepts, but you understand them together as this pseudo-atomic amorphous conceptual blur. What is missing is knowing where the boundary between them is.
Instead of defining what each one is, it's easier to highlight just their differences—the boundary.
Currying is when you define the function.
Partial Application is when you call the function.
Application is math-speak for calling a function.
Partial application requires calling a curried function and getting a function as the return type.
A lot of people here do not address this properly, and no one has talked about overlaps.
Simple answer
Currying: Lets you call a function, splitting it in multiple calls, providing one argument per-call.
Partial Application: Lets you call a function, splitting it in multiple calls, providing multiple arguments per-call.
One of the significant differences between the two is that a call to a
partially applied function returns the result right away, not another
function down the currying chain; this distinction can be illustrated
clearly for functions whose arity is greater than two.
What does that mean? It means that there are at most two calls for a partially applied function. Currying has as many as the number of arguments. If the curried function only has two arguments, then it is essentially the same as a partially applied one.
Examples
Partial Application and Currying
function bothPartialAndCurry(firstArgument) {
return function(secondArgument) {
return firstArgument + secondArgument;
}
}
const partialAndCurry = bothPartialAndCurry(1);
const result = partialAndCurry(2);
Partial Application
function partialOnly(firstArgument, secondArgument) {
return function(thirdArgument, fourthArgument, fifthArgument) {
return firstArgument + secondArgument + thirdArgument + fourthArgument + fifthArgument;
}
}
const partial = partialOnly(1, 2);
const result = partial(3, 4, 5);
Currying
function curryOnly(firstArgument) {
return function(secondArgument) {
return function(thirdArgument) {
return function(fourthArgument ) {
return function(fifthArgument) {
return firstArgument + secondArgument + thirdArgument + fourthArgument + fifthArgument;
}
}
}
}
}
const curryFirst = curryOnly(1);
const currySecond = curryFirst(2);
const curryThird = currySecond(3);
const curryFourth = curryThird(4);
const result = curryFourth(5);
// or...
const result = curryOnly(1)(2)(3)(4)(5);
Naming conventions
I'll write this when I have time, which is soon.
There are other great answers here but I believe this example (as per my understanding) in Java might be of benefit to some people:
public static <A,B,X> Function< B, X > partiallyApply( BiFunction< A, B, X > aBiFunction, A aValue ){
return b -> aBiFunction.apply( aValue, b );
}
public static <A,X> Supplier< X > partiallyApply( Function< A, X > aFunction, A aValue ){
return () -> aFunction.apply( aValue );
}
public static <A,B,X> Function< A, Function< B, X > > curry( BiFunction< A, B, X > bif ){
return a -> partiallyApply( bif, a );
}
So currying gives you a one-argument function to create functions, where partial-application creates a wrapper function that hard codes one or more arguments.
If you want to copy&paste, the following is noisier but friendlier to work with since the types are more lenient:
public static <A,B,X> Function< ? super B, ? extends X > partiallyApply( final BiFunction< ? super A, ? super B, X > aBiFunction, final A aValue ){
return b -> aBiFunction.apply( aValue, b );
}
public static <A,X> Supplier< ? extends X > partiallyApply( final Function< ? super A, X > aFunction, final A aValue ){
return () -> aFunction.apply( aValue );
}
public static <A,B,X> Function< ? super A, Function< ? super B, ? extends X > > curry( final BiFunction< ? super A, ? super B, ? extends X > bif ){
return a -> partiallyApply( bif, a );
}
In writing this, I confused currying and uncurrying. They are inverse transformations on functions. It really doesn't matter what you call which, as long as you get what the transformation and its inverse represent.
Uncurrying isn't defined very clearly (or rather, there are "conflicting" definitions that all capture the spirit of the idea). Basically, it means turning a function that takes multiple arguments into a function that takes a single argument. For example,
(+) :: Int -> Int -> Int
Now, how do you turn this into a function that takes a single argument? You cheat, of course!
plus :: (Int, Int) -> Int
Notice that plus now takes a single argument (that is composed of two things). Super!
What's the point of this? Well, if you have a function that takes two arguments, and you have a pair of arguments, it is nice to know that you can apply the function to the arguments, and still get what you expect. And, in fact, the plumbing to do it already exists, so that you don't have to do things like explicit pattern matching. All you have to do is:
(uncurry (+)) (1,2)
So what is partial function application? It is a different way to turn a function in two arguments into a function with one argument. It works differently though. Again, let's take (+) as an example. How might we turn it into a function that takes a single Int as an argument? We cheat!
((+) 0) :: Int -> Int
That's the function that adds zero to any Int.
((+) 1) :: Int -> Int
adds 1 to any Int. Etc. In each of these cases, (+) is "partially applied".
Currying
Wikipedia says
Currying is the technique of converting a function that takes multiple arguments into a sequence of functions that each takes a single argument.
Example
const add = (a, b) => a + b
const addC = (a) => (b) => a + b // curried function. Where C means curried
Partial application
Article Just Enough FP: Partial Application
Partial application is the act of applying some, but not all, of the arguments to a function and returning a new function awaiting the rest of the arguments. These applied arguments are stored in closure and remain available to any of the partially applied returned functions in the future.
Example
const add = (a) => (b) => a + b
const add3 = add(3) // add3 is a partially applied function
add3(5) // 8
The difference is
currying is a technique (pattern)
partial application is a function with some predefined arguments (like add3 from the previous example)
When learning a new programming language, one of the possible roadblocks you might encounter is the question whether the language is, by default, pass-by-value or pass-by-reference.
So here is my question to all of you, in your favorite language, how is it actually done? And what are the possible pitfalls?
Your favorite language can, of course, be anything you have ever played with: popular, obscure, esoteric, new, old...
Here is my own contribution for the Java programming language.
first some code:
public void swap(int x, int y)
{
int tmp = x;
x = y;
y = tmp;
}
calling this method will result in this:
int pi = 3;
int everything = 42;
swap(pi, everything);
System.out.println("pi: " + pi);
System.out.println("everything: " + everything);
"Output:
pi: 3
everything: 42"
even using 'real' objects will show a similar result:
public class MyObj {
private String msg;
private int number;
//getters and setters
public String getMsg() {
return this.msg;
}
public void setMsg(String msg) {
this.msg = msg;
}
public int getNumber() {
return this.number;
}
public void setNumber(int number) {
this.number = number;
}
//constructor
public MyObj(String msg, int number) {
setMsg(msg);
setNumber(number);
}
}
public static void swap(MyObj x, MyObj y)
{
MyObj tmp = x;
x = y;
y = tmp;
}
public static void main(String args[]) {
MyObj x = new MyObj("Hello world", 1);
MyObj y = new MyObj("Goodbye Cruel World", -1);
swap(x, y);
System.out.println(x.getMsg() + " -- "+ x.getNumber());
System.out.println(y.getMsg() + " -- "+ y.getNumber());
}
"Output:
Hello world -- 1
Goodbye Cruel World -- -1"
thus it is clear that Java passes its parameters by value, as the value for pi and everything and the MyObj objects aren't swapped.
be aware that "by value" is the only way in java to pass parameters to a method. (for example a language like c++ allows the developer to pass a parameter by reference using '&' after the parameter's type)
now the tricky part, or at least the part that will confuse most of the new java developers: (borrowed from javaworld)
Original author: Tony Sintes
public void tricky(Point arg1, Point arg2)
{
arg1.x = 100;
arg1.y = 100;
Point temp = arg1;
arg1 = arg2;
arg2 = temp;
}
public static void main(String [] args)
{
Point pnt1 = new Point(0,0);
Point pnt2 = new Point(0,0);
System.out.println("X: " + pnt1.x + " Y: " +pnt1.y);
System.out.println("X: " + pnt2.x + " Y: " +pnt2.y);
System.out.println(" ");
tricky(pnt1,pnt2);
System.out.println("X: " + pnt1.x + " Y:" + pnt1.y);
System.out.println("X: " + pnt2.x + " Y: " +pnt2.y);
}
"Output
X: 0 Y: 0
X: 0 Y: 0
X: 100 Y: 100
X: 0 Y: 0"
tricky successfully changes the value of pnt1!
This would imply that Objects are passed by reference; this is not the case!
A correct statement would be: the Object references are passed by value.
more from Tony Sintes:
The method successfully alters the
value of pnt1, even though it is
passed by value; however, a swap of
pnt1 and pnt2 fails! This is the major
source of confusion. In the main()
method, pnt1 and pnt2 are nothing more
than object references. When you pass
pnt1 and pnt2 to the tricky() method,
Java passes the references by value
just like any other parameter. This
means the references passed to the
method are actually copies of the
original references. Figure 1 below
shows two references pointing to the
same object after Java passes an
object to a method.
(source: javaworld.com)
Conclusion or a long story short:
Java passes its parameters by value
"by value" is the only way in java to pass a parameter to a method
using methods from the object given as parameter will alter the object as the references point to the original objects. (if that method itself alters some values)
useful links:
http://www.javaworld.com/javaworld/javaqa/2000-05/03-qa-0526-pass.html
http://www.ibm.com/developerworks/java/library/j-passbyval/
http://www.ibm.com/developerworks/library/j-praxis/pr1.html
http://javadude.com/articles/passbyvalue.htm
Here is another article for the c# programming language
c# passes its arguments by value (by default)
private void swap(string a, string b) {
string tmp = a;
a = b;
b = tmp;
}
calling this version of swap will thus have no result:
string x = "foo";
string y = "bar";
swap(x, y);
"output:
x: foo
y: bar"
however, unlike Java, C# does give the developer the opportunity to pass parameters by reference; this is done by using the 'ref' keyword before the type of the parameter:
private void swap(ref string a, ref string b) {
string tmp = a;
a = b;
b = tmp;
}
this swap will change the value of the referenced parameter:
string x = "foo";
string y = "bar";
swap(ref x, ref y);
"output:
x: bar
y: foo"
c# also has an out keyword, and the difference between ref and out is a subtle one.
from msdn:
The caller of a method which takes an
out parameter is not required to
assign to the variable passed as the
out parameter prior to the call;
however, the callee is required to
assign to the out parameter before
returning.
and
In contrast ref parameters are
considered initially assigned by the
callee. As such, the callee is not
required to assign to the ref
parameter before use. Ref parameters
are passed both into and out of a
method.
a small pitfall is, like in java, that objects passed by value can still be changed using their inner methods
conclusion:
c# passes its parameters, by default, by value
but when needed parameters can also be passed by reference using the ref keyword
inner methods from a parameter passed by value will alter the object (if that method itself alters some values)
useful links:
http://msdn.microsoft.com/en-us/vcsharp/aa336814.aspx
http://www.c-sharpcorner.com/UploadFile/saragana/Willswapwork11162005012542AM/Willswapwork.aspx
http://en.csharp-online.net/Value_vs_Reference
Python uses pass-by-value, but since all such values are object references, the net effect is something akin to pass-by-reference. However, Python programmers think more about whether an object type is mutable or immutable. Mutable objects can be changed in-place (e.g., dictionaries, lists, user-defined objects), whereas immutable objects can't (e.g., integers, strings, tuples).
The following example shows a function that is passed two arguments, an immutable string, and a mutable list.
>>> def do_something(a, b):
... a = "Red"
... b.append("Blue")
...
>>> a = "Yellow"
>>> b = ["Black", "Burgundy"]
>>> do_something(a, b)
>>> print a, b
Yellow ['Black', 'Burgundy', 'Blue']
The line a = "Red" merely creates a local name, a, for the string value "Red" and has no effect on the passed-in argument (which is now hidden, as a must refer to the local name from then on). Assignment is not an in-place operation, regardless of whether the argument is mutable or immutable.
The b parameter is a reference to a mutable list object, and the .append() method performs an in-place extension of the list, tacking on the new "Blue" string value.
(Because string objects are immutable, they don't have any methods that support in-place modifications.)
Once the function returns, the re-assignment of a has had no effect, while the extension of b clearly shows pass-by-reference style call semantics.
As mentioned before, even if the argument for a is a mutable type, the re-assignment within the function is not an in-place operation, and so there would be no change to the passed argument's value:
>>> a = ["Purple", "Violet"]
>>> do_something(a, b)
>>> print a, b
['Purple', 'Violet'] ['Black', 'Burgundy', 'Blue', 'Blue']
If you didn't want your list modified by the called function, you would instead use the immutable tuple type (identified by the parentheses in the literal form, rather than square brackets), which does not support the in-place .append() method:
>>> a = "Yellow"
>>> b = ("Black", "Burgundy")
>>> do_something(a, b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in do_something
AttributeError: 'tuple' object has no attribute 'append'
Since I haven't seen a Perl answer yet, I thought I'd write one.
Under the hood, Perl works effectively as pass-by-reference. Variables as function call arguments are passed referentially, constants are passed as read-only values, and results of expressions are passed as temporaries. The usual idioms to construct argument lists by list assignment from @_, or by shift, tend to hide this from the user, giving the appearance of pass-by-value:
sub incr {
my ( $x ) = @_;
$x++;
}
my $value = 1;
incr($value);
say "Value is now $value";
This will print Value is now 1 because the $x++ has incremented the lexical variable declared within the incr() function, rather than the variable passed in. This pass-by-value style is usually what is wanted most of the time, as functions that modify their arguments are rare in Perl, and the style should be avoided.
However, if for some reason this behaviour is specifically desired, it can be achieved by operating directly on elements of the @_ array, because they will be aliases for the variables passed into the function.
sub incr {
$_[0]++;
}
my $value = 1;
incr($value);
say "Value is now $value";
This time it will print Value is now 2, because the $_[0]++ expression incremented the actual $value variable. The way this works is that under the hood @_ is not a real array like most other arrays (such as would be obtained by my @array), but instead its elements are built directly out of the arguments passed to a function call. This allows you to construct pass-by-reference semantics if that would be required. Function call arguments that are plain variables are inserted as-is into this array, and constants or results of more complex expressions are inserted as read-only temporaries.
It is however exceedingly rare to do this in practice, because Perl supports reference values; that is, values that refer to other variables. Normally it is far clearer to construct a function that has an obvious side-effect on a variable, by passing in a reference to that variable. This is a clear indication to the reader at the callsite, that pass-by-reference semantics are in effect.
sub incr_ref {
my ( $ref ) = @_;
$$ref++;
}
my $value = 1;
incr_ref(\$value);
say "Value is now $value";
Here the \ operator yields a reference in much the same way as the & address-of operator in C.
There's a good explanation here for .NET.
A lot of people are surprised that reference objects are actually passed by value (in both C# and Java). It's a copy of a stack address. This prevents a method from changing where the object actually points to, but still allows a method to change the values of the object. In C# it's possible to pass a reference by reference, which means you can change where an actual object points to.
Don't forget there is also pass by name, and pass by value-result.
Pass by value-result is similar to pass by value, with the added aspect that the value is set in the original variable that was passed as the parameter. It can, to some extent, avoid interference with global variables. It is apparently better in partitioned memory, where a pass by reference could cause a page fault (Reference).
Pass by name means that the values are only calculated when they are actually used, rather than at the start of the procedure. Algol used pass-by-name, but an interesting side effect is that it is very difficult to write a swap procedure (Reference). Also, the expression passed by name is re-evaluated each time it is accessed, which can also have side effects.
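As a rough way to picture pass-by-name, here is a sketch in Python (my illustration, not how Algol actually implemented it): each argument is wrapped in a zero-argument function (a thunk) and re-evaluated on every use, so side effects show up once per access.
counter = {"n": 0}

def expr():
    counter["n"] += 1      # side effect on every evaluation
    return counter["n"]

def add_twice(x):
    # pass-by-name: x is a thunk, evaluated each time it is used
    return x() + x()

print(add_twice(expr))     # prints 3 (1 + 2), not 2 + 2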
Whatever you say as pass-by-value or pass-by-reference must be consistent across languages. The most common and consistent definition used across languages is that with pass-by-reference, you can pass a variable to a function "normally" (i.e. without explicitly taking address or anything like that), and the function can assign to (not mutate the contents of) the parameter inside the function and it will have the same effect as assigning to the variable in the calling scope.
From this view, the languages are grouped as follows; each group having the same passing semantics. If you think that two languages should not be put in the same group, I challenge you to come up with an example that distinguishes them.
The vast majority of languages including C, Java, Python, Ruby, JavaScript, Scheme, OCaml, Standard ML, Go, Objective-C, Smalltalk, etc. are all pass-by-value only. Passing a pointer value (some languages call it a "reference") does not count as pass by reference; we are only concerned about the thing passed, the pointer, not the thing pointed to.
Languages such as C++, C#, PHP are by default pass-by-value like the languages above, but functions can explicitly declare parameters to be pass-by-reference, using & or ref.
Perl is always pass-by-reference; however, in practice people almost always copy the values after getting them, thus using them in a pass-by-value way.
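To apply the assignment test from above concretely, here is a minimal Python sketch: assigning to the parameter inside the function has no effect on the caller's variable, which is why Python ends up in the pass-by-value group.
def reassign(x):
    x = [99]      # rebinds the local name only

a = [1, 2, 3]
reassign(a)
print(a)          # [1, 2, 3] -- the caller's variable is unchanged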
by value
is slower than by reference since the system has to copy the parameter
used for input only
by reference
faster since only a pointer is passed
used for input and output
can be very dangerous if used in conjunction with global variables
Concerning J, while there is only, AFAIK, passing by value, there is a form of passing by reference which enables moving a lot of data. You simply pass something known as a locale to a verb (or function). It can be an instance of a class or just a generic container.
spaceused=: [: 7!:5 <
exectime =: 6!:2
big_chunk_of_data =. i. 1000 1000 100
passbyvalue =: 3 : 0
$ y
''
)
locale =. cocreate''
big_chunk_of_data__locale =. big_chunk_of_data
passbyreference =: 3 : 0
l =. y
$ big_chunk_of_data__l
''
)
exectime 'passbyvalue big_chunk_of_data'
0.00205586720663967
exectime 'passbyreference locale'
8.57957102144893e_6
The obvious disadvantage is that you need to know the name of your variable in some way in the called function. But this technique can move a lot of data painlessly. That's why, while technically not pass by reference, I call it "pretty much that".
PHP is also pass by value.
<?php
class Holder {
private $value;
public function __construct($value) {
$this->value = $value;
}
public function getValue() {
return $this->value;
}
}
function swap($x, $y) {
$tmp = $x;
$x = $y;
$y = $tmp;
}
$a = new Holder('a');
$b = new Holder('b');
swap($a, $b);
echo $a->getValue() . ", " . $b->getValue() . "\n";
Outputs:
a, b
However in PHP4 objects were treated like primitives. Which means:
<?php
$myData = new Holder('this should be replaced');
function replaceWithGreeting($holder) {
$holder->setValue('hello'); // note: assumes Holder also defines a setValue() method
}
replaceWithGreeting($myData);
echo $myData->getValue(); // Prints out "this should be replaced"
By default, ANSI/ISO C uses either--it depends on how you declare your function and its parameters.
If you declare your function parameters as pointers then the function will be pass-by-reference, and if you declare your function parameters as not-pointer variables then the function will be pass-by-value.
void swap(int *x, int *y); //< Declared as pass-by-reference.
void swap(int x, int y); //< Declared as pass-by-value (and probably doesn't do anything useful.)
You can run into problems if you create a function that returns a pointer to a non-static variable that was created within that function. The returned value of the following code would be undefined--there is no way to know if the memory space allocated to the temporary variable created in the function was overwritten or not.
float *FtoC(float temp)
{
float c;
c = (temp-32)*5/9;
return &c;
}
You could, however, return a reference to a static variable or a pointer that was passed in the parameter list.
float *FtoC(float *temp)
{
*temp = (*temp-32)*5/9;
return temp;
}