I've just started learning Standard ML (and functional programming in general) and I've come across two different ways of defining a function.
val double = fn x => x*2;
And
fun double x = x*2;
If I understand correctly, the first one binds an anonymous function to a name. Under what circumstances should I do this instead of using fun?
This is a style question. The fun syntax is syntactic sugar for fn, so anything that you can write with the former can also be written with the latter.
fn directly represents λ-abstraction, which means it is limited to functions of one argument (see this SO question). fun is convenient shorthand that lets you define a curried multi-argument function and bind a name to it in a single piece of syntax, so it's probably better to use fun whenever you want to do either of those things.
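For example, here's a minimal sketch (the name add is just for illustration): the curried definition

fun add x y = x + y;

is sugar for

val add = fn x => fn y => x + y;

One further practical difference: fun definitions can be recursive as written, whereas with the fn form you have to write val rec instead of val.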
Related
Python has such methods as __add__, __mul__, __cmp__ and so on (called magic methods), which are used as class methods and can give a different meaning to adding (+), multiplying (*), comparing (==), ... two instances of a class. My question is: do other languages have a similar mechanism? I'm familiar with Java, C++, Ruby and PHP, but never came across such a thing. I know all four have a constructor method which corresponds to __init__, but what about the other magic methods?
I tried googling "magic methods in other programming languages" but nothing related showed up; they probably go by different names in different languages.
In general, having too much "magic" in a language is a sign of bad language design. Maybe that is why there are not many languages which have magic methods?
Magic like this creates a two-class system: the language designer can add new magic methods to the language, but the programmer is restricted to the methods that the High Priest Of Language Design allows them to use. In general, it should be possible for the programmer to do as much as possible without requiring changes to the language specification.
For example, in Scala, +, -, *, /, ==, !=, <, >, <=, >=, ::, |, &, ||, &&, **, ^, +=, -=, *=, /=, and so on and so forth, are simply legal identifiers. So, if you want to implement your own version of multiplication for your own objects, you just write a method named *. This is just a boring old standard method, there is absolutely nothing "magic" about it.
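For instance, here's a hypothetical sketch of a small vector type with its own multiplication:

case class Vec(x: Double, y: Double) {
  // a plain method named *; nothing magic about it
  def *(k: Double): Vec = Vec(x * k, y * k)
}

Vec(1, 2) * 3.0   // Vec(3.0, 6.0)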
Conversely, any method can be called using operator notation, i.e. without a dot. And any method that takes exactly one argument can be called without parentheses in operator notation.
This does not only apply to methods: any type constructor with exactly two type arguments can also be used in infix notation, so if I have
class ↔[A, B]
I can do
class Foo extends (String ↔ Int)
which is the same as
class Foo extends ↔[String, Int]
Well … I kinda lied: there is some syntactic sugar in Scala:
foo() is translated to foo.apply() if there is no method named foo in scope. This allows you to effectively overload the function call operator.
foo.bar = baz is translated to foo.bar_=(baz). This allows you to effectively overload property assignment. (This is how you write setters in Scala.)
foo(bar) = baz is translated to foo.update(bar, baz). This allows you to effectively overload index assignment. (This is how you write array or dictionary access in Scala, for example).
!foo (and a couple of others) are translated to foo.unary_!.
foo += bar will try to call the += method of foo, i.e. it is equivalent to foo.+=(bar). But if this fails and foo is a valid lvalue, and foo has a method named +, then Scala will also try foo = foo + bar instead.
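To make the apply and update sugar concrete, here's a minimal hypothetical sketch:

class Counter {
  private val counts = scala.collection.mutable.Map.empty[String, Int].withDefaultValue(0)
  def apply(key: String): Int = counts(key)
  def update(key: String, count: Int): Unit = counts(key) = count
}

val c = new Counter
c("clicks") = 5   // desugars to c.update("clicks", 5)
c("clicks")       // desugars to c.apply("clicks"); returns 5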
Also, precedence and associativity are fixed in Scala: precedence is determined by the first character of the method name (i.e. all methods starting with * have the same precedence, all methods starting with - have the same precedence, and so on), and associativity by the last character (methods whose names end in : are right-associative; all others are left-associative).
Haskell goes a step further: there is no fundamental difference between functions and operators. Every function can be used in function call notation and in operator notation. The only difference is lexical: if the function name consists of operator characters, then when I want to use it in function call notation, I have to wrap it in parentheses. OTOH, if the function name consists of alphanumeric characters and I want to use it in operator notation, I need to wrap it in backticks. So, the following are equivalent:
a + b
(+) a b
a `plus` b
plus a b
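Here the alphanumeric name plus is assumed to be defined somewhere, e.g. a hypothetical

plus :: Num a => a -> a -> a
plus = (+)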
For operator usage of functions, you can freely define the fixity, i.e. the associativity and precedence (levels range from 0 to 9), e.g.:
infixr 5 <!==!>
In Ruby, there is a pre-defined set of operators with corresponding methods, e.g.:
def +(other)
  plus(other)
end
In C++, operator overloading is what you are looking for.
Java has no native support for operator overloading (Reference).
C has no operator overloading (Reference). Thus, a lot of add, mult, and similar functions get written. Often these are macros, because then they can be used with different types. IMHO this is why I like C++ better.
@Alex gave a reference to a nice overview of operator overloading.
My lecturer at the moment has a strange habit I've not seen before; I'm wondering if this is a Haskell standard or a quirk of his programming style.
Basically, he'll often do things such as this:
functionEx :: String -> Int
functionEx s = functionExA s 0
functionExA :: String -> Int -> Int
functionExA s n = --function code
He calls these 'auxiliary' functions, and for the most part the only advantage I can see to them is making a function callable with fewer supplied arguments. But most of these are hidden away in the code anyway, and in my view adding the argument to the original call is much more readable.
As I said, I'm not suggesting my view is correct, I've just not seen it done like this before and would like to know if it's common in Haskell.
Yes, this is commonplace, and not only in functional programming. It's good practice to separate the interface to your code (in this case, the function signature: what arguments you have to pass) from the details of the implementation (the need to have a counter or similar in recursive code).
In real-world programming, one manifestation of this is having default arguments or multiple overloads of one function. Another common way of doing this is returning or taking an instance of an interface instead of a particular class that implements that interface. In Java, this might mean returning a List from a method instead of ArrayList, even when you know that the code actually uses an ArrayList (where ArrayList implements the List interface). In Haskell, typeclasses often serve the same function.
The "one argument which always should be zero at the start" pattern happens occasionally in the real world, but it's especially common in functional programming teaching, because you want to show how to write the same function in a recursive style vs. tail-recursive. The wrapper function is also important to demonstrate that both the implementations actually have the same result.
In Haskell, it's more common to use where as follows:
functionEx :: String -> Int
functionEx s = functionExA s 0
  where
    functionExA s n = --function code
This way, even the existence of the "real" function is hidden from the external interface. There's no reason to expose the fact that this function is (say) tail-recursive with a count argument.
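As a concrete (hypothetical) instance of the pattern, here's a character counter whose accumulator never leaks into the interface:

countChars :: String -> Int
countChars s = go s 0
  where
    -- tail-recursive worker; n is the hidden accumulator
    go []     n = n
    go (_:cs) n = go cs (n + 1)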
If the special-case definition is used frequently, it can be an advantage to do this. For example, the sum function is just a special case of the fold function. So why don't we just write foldr (+) 0 [1, 2, 3] each time instead of sum [1, 2, 3]? Because sum is much more readable.
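In that spirit, sum could be written as just such a specialisation (a sketch, named sum' to avoid clashing with the Prelude):

sum' :: Num a => [a] -> a
sum' = foldr (+) 0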
I am a beginner in Haskell.
The convention used in function definitions, as per my school material, is as follows:
function_name arguments_separated_by_spaces = code_to_do
e.g.:
f a b c = a * b +c
As a mathematics student I am used to writing functions as follows:
function_name(arguments_separated_by_commas) = code_to_do
e.g.:
f(a,b,c) = a * b + c
It works in Haskell.
My doubt is whether it works in all cases.
I mean, can I also use the traditional mathematical convention in Haskell function definitions?
If not, in which specific cases does the convention go wrong?
Thanks in advance :)
Let's say you want to define a function that computes the square of the hypotenuse of a right triangle. Either of the following definitions is valid:
hyp1 a b = a * a + b * b
hyp2(a,b) = a * a + b * b
However, they are not the same function! You can tell by looking at their types in GHCi:
>> :type hyp1
hyp1 :: Num a => a -> a -> a
>> :type hyp2
hyp2 :: Num a => (a, a) -> a
Taking hyp2 first (and ignoring the Num a => part for now), the type tells you that the function takes a pair (a, a) and returns another a (e.g. it might take a pair of integers and return another integer, or a pair of real numbers and return another real number). You use it like this:
>> hyp2 (3,4)
25
Notice that the parentheses aren't optional here! They ensure that the argument is of the correct type, a pair of as. If you don't include them, you will get an error (which will probably look really confusing to you now, but rest assured that it will make sense when you've learned about type classes).
Now looking at hyp1, one way to read the type a -> a -> a is that it takes two things of type a and returns something else of type a. You use it like this:
>> hyp1 3 4
25
Now you will get an error if you do include parentheses!
So the first thing to notice is that the way you use the function has to match the way you defined it. If you define the function with parens, you have to use parens every time you call it. If you don't use parens when you define the function, you can't use them when you call it.
So it seems like there's no reason to prefer one over the other - it's just a matter of taste. But actually I think you should prefer the style without parentheses, for three good reasons:
It looks cleaner and makes your code easier to read if you don't have parens cluttering up the page.
You will take a performance hit if you use parens everywhere, because you need to construct and deconstruct a pair every time you use the function (although the compiler may optimize this away - I'm not sure).
You want to get the benefits of currying, aka partially applied functions*.
The last point is a little subtle. Recall that I said one way to understand a function of type a -> a -> a is that it takes two things of type a and returns another a. But there's another way to read that type: as a -> (a -> a). It means exactly the same thing, since the -> operator is right-associative in Haskell. The interpretation is that the function takes a single a and returns a function of type a -> a. This allows you to provide just the first argument to the function and apply the second argument later, for example:
>> let f = hyp1 3
>> f 4
25
This is practically useful in a wide variety of situations. For example, the map function lets you apply some function to every element of a list:
>> :type map
map :: (a -> b) -> [a] -> [b]
Say you have the function (++ "!") which adds a bang to any String. And you have lists of Strings that you'd like to all end with a bang. No problem! You just partially apply the map function:
>> let bang = map (++ "!")
Now bang is a function of type**
>> :type bang
bang :: [String] -> [String]
and you can use it like this
>> bang ["Ready", "Set", "Go"]
["Ready!", "Set!", "Go!"]
Pretty useful!
I hope I've convinced you that the convention used in your school's educational material has some pretty solid reasons for being used. As someone with a math background myself, I can see the appeal of using the more 'traditional' syntax but I hope that as you advance in your programming journey, you'll be able to see the advantages in changing to something that's initially a bit unfamiliar to you.
* Note for pedants - I know that currying and partial application are not exactly the same thing.
** Actually GHCi will tell you the type is bang :: [[Char]] -> [[Char]], but since String is a synonym for [Char], these mean the same thing.
f(a,b,c) = a * b + c
The key difference to understand is that the above function takes a triple and gives the result. What you are actually doing is pattern matching on a triple. The type of the above function is something like this:
(a, a, a) -> a
If you write functions like this:
f a b c = a * b + c
You get automatic curry in the function.
You can write things like let b = f 3 2 and it will typecheck, but the same thing will not work with your initial version. Also, currying helps a lot when composing functions using (.), which again cannot be achieved with the former style unless you are composing functions on triples.
Mathematical notation is not consistent. If all functions were given their arguments using (,), you would have to write (+)((*)(a,b),c) to pass a*b and c to the function +; of course, a*b itself is worked out by passing a and b to the function *.
It is possible to write everything in tupled form, but it is much harder to define composition. Whereas now you can use the type a -> b to cover functions of any arity (and can therefore define composition as a function of type (b -> c) -> (a -> b) -> (a -> c)), it is much trickier with tuples: a -> b would only mean a function of one argument, and you could no longer compose a function of many arguments with another function of many arguments. So, technically possible, but it would need a language feature to make it simple and convenient.
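As a sketch of that point, one definition of composition covers functions of every arity precisely because every Haskell function has a type of the shape a -> b:

compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g = \x -> f (g x)

-- hypothetical example: composing two one-argument views of curried functions
example :: Int -> Int
example = compose (* 2) (+ 1)   -- \x -> (x + 1) * 2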
I'm new to Scala and I'm having a problem understanding this. Why are there two syntaxes for the same concept, with neither being more efficient or shorter than the other? (That's merely from a typing standpoint; maybe they differ in behavior, which is what I'm asking.)
In Go the analogues have a practical difference - you can't forward-reference the lambda assigned to a variable, but you can reference a named function from anywhere. Scala blends these two if I understand it correctly: you can forward-reference any variable (please correct me if I'm wrong).
Please note that this question is not a duplicate of What is the difference between “def” and “val” to define a function.
I know that def evaluates the expression after = each time it is referenced/called, and val only once. But this is different because the expression in the val definition evaluates to a function.
It is also not a duplicate of Functions vs methods in Scala.
This question concerns the syntax of Scala, and is not asking about the difference between functions and methods directly. Even though the answers may be similar in content, it's still valuable to have this exact point cleared up on this site.
There are three main differences (that I know of):
1. Internal Representation
Function expressions (aka anonymous functions or lambdas) are represented in the generated bytecode as instances of one of the FunctionN traits. This means that function expressions are also objects. Method definitions, on the other hand, are first-class citizens on the JVM and have a special bytecode representation. How this impacts performance is hard to tell without profiling.
2. Reference Syntax
References to functions and methods have different syntaxes. You can't just write foo when you want to pass a reference to a method as an argument to some other part of your code; you'll have to write foo _. With functions you can just write foo and things will work as intended. The syntax foo _ effectively wraps the call to foo inside an anonymous function.
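A small hypothetical illustration of the difference:

def incMethod(x: Int): Int = x + 1      // a method
val incFunction = (x: Int) => x + 1     // a function value

val f = incMethod _                 // eta-expansion: wraps the method in a function
List(1, 2, 3).map(f)                // List(2, 3, 4)
List(1, 2, 3).map(incFunction)      // the function value can be passed directly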
3. Generics Support
Methods support type parametrization, functions do not. For example, there's no way to express the following using a function value:
def identity[A](a: A): A = a
The closest would be this, but it loses the type information:
val identity = (a: Any) => a
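To see the loss concretely (a hypothetical REPL-style check):

val anyIdentity = (a: Any) => a
val n = anyIdentity(42)   // n is typed Any; that it was an Int has been lost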
As an extension to Ionut's first point, it may be worth taking a quick look at http://www.scala-lang.org/api/current/#scala.Function1.
From my understanding, an instance of a function as you described (i.e. val f = (x: Int) => x + 1) extends the Function1 trait. The implication is that an instance of a function consumes more memory than a method definition. Methods are innate to the JVM, hence they can be resolved at compile time. The obvious cost of a Function is its memory consumption, but with it come added benefits such as composition with other Function objects.
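For example (a small sketch of that benefit), Function1 provides compose and andThen:

val addOne = (x: Int) => x + 1
val double = (x: Int) => x * 2
val addThenDouble = addOne andThen double   // x => (x + 1) * 2
addThenDouble(3)                            // 8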
If I understand correctly, the reason defs and lambdas can work together is that the Function1 trait has a self-type (T1) ⇒ R, which is implied by its apply() method: https://github.com/scala/scala/blob/v2.11.8/src/library/scala/Function1.scala#L36. (At least I THINK that's what's going on; please correct me if I'm wrong.) This is all just my own speculation, however. There's certain to be some extra compiler magic taking place underneath to allow method and function interoperability.
I am defining a function that takes another function as input, and I want to specify that in the input pattern, i.e. Operat[_?FunctionQ] := ...
But there is no FunctionQ as of yet in Mathematica. How do I get around this, other than not specifying any type at all?
Any ideas?
Oh!
This: Test if an expression is a Function?
may be the answer I am looking for. I am reading further.
Is the solution proposed there robust? I.e.:
FunctionQ[_Function | _InterpolatingFunction | _CompiledFunction] = True;
FunctionQ[f_Symbol] := Or[
  DownValues[f] =!= {},
  MemberQ[Attributes[f], NumericFunction]]
FunctionQ[_] = False;
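Assuming those definitions, a few hypothetical checks:

FunctionQ[Sin]        (* True: Sin carries the NumericFunction attribute *)
FunctionQ[# + 1 &]    (* True: the head is Function *)
FunctionQ[42]         (* False: falls through to the catch-all rule *)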
The exhibited definition has great utility. But the question is: what exactly constitutes a function in Mathematica? Pure functions and the like are easy to classify as functions, but what about definitions that involve pattern matching? Consider:
h[g[x_]] ^:= x + 1
Is h to be considered a function? If so, it will be hard to identify, as making that determination entails examining the up-values of every symbol in the system. Is g a function? It has an up-value, but g[x] is an inert expression.
What about head composition:
f[x_][y_][z_] := x + y + z
Is f a function? How about f[1] or f[1][2]?
And then there are the various capabilities like JLink and NETLink:
Needs["JLink`"]
obj = JavaNew["java.util.Date"]
obj@toString[]
Is obj@toString a function?
I hate to bring up these problems without offering solutions, but I want to emphasize that the question of what constitutes a function in the Mathematica context is a tricky one, from both the theoretical and the practical standpoint.
I think that the answer to whether the exhibited function test is complete really depends upon the types of expressions that you will be feeding it in your specific application.