Magic methods in other programming languages - language-agnostic

Python has methods such as __add__, __mul__, __cmp__ and so on (called magic methods), which are used as class methods and can give a different meaning to adding (+), multiplying (*), comparing (==), ... two instances of a class. My question is: do other languages have similar methods? I'm familiar with Java, C++, Ruby and PHP, but have never come across such a thing. I know all four have a constructor method which corresponds to __init__, but what about the other magic methods?
I tried googling "Magic methods in other programming languages" but nothing related showed up; they probably go by different names in different languages.
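To make the question concrete, here is a minimal Python sketch of the kind of thing meant (Vec is just an illustrative class, not from any library):
# A class that gives its own meaning to +, * and == by defining
# the magic methods __add__, __mul__ and __eq__.
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):                 # v + w
        return Vec(self.x + other.x, self.y + other.y)
    def __mul__(self, k):                     # v * 3
        return Vec(self.x * k, self.y * k)
    def __eq__(self, other):                  # v == w
        return (self.x, self.y) == (other.x, other.y)

print(Vec(1, 2) + Vec(3, 4) == Vec(4, 6))     # True
print((Vec(1, 2) * 3).x)                      # 3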

In general, having too much "magic" in a language is a sign of bad language design. Maybe that is why there are not many languages which have magic methods?
Magic like this creates a two-class system: the language designer can add new magic methods to the language, but the programmer is restricted to using only the methods that the High Priest Of Language Design allows them to. In general, it should be possible for the programmer to do as much as possible without requiring a change to the language specification.
For example, in Scala, +, -, *, /, ==, !=, <, >, <=, >=, ::, |, &, ||, &&, **, ^, +=, -=, *=, /=, and so on and so forth, are simply legal identifiers. So, if you want to implement your own version of multiplication for your own objects, you just write a method named *. This is just a boring old standard method, there is absolutely nothing "magic" about it.
Conversely, any method can be called using operator notation, i.e. without a dot. And any method that takes exactly one argument can be called without parentheses in operator notation.
This does not only apply to methods. Also, any type constructor with exactly two type arguments can be used in infix notation, so if I have
class ↔[A, B]
I can do
class Foo extends (String ↔ Int)
which is the same as
class Foo extends ↔[String, Int]
Well … I kinda lied: there is some syntactic sugar in Scala:
foo() is translated to foo.apply() if there is no method named foo in scope. This allows you to effectively overload the function call operator.
foo.bar = baz is translated to foo.bar_=(baz). This allows you to effectively overload property assignment. (This is how you write setters in Scala.)
foo(bar) = baz is translated to foo.update(bar, baz). This allows you to effectively overload index assignment. (This is how you write array or dictionary access in Scala, for example).
!foo (and a couple of others) are translated to foo.unary_!.
foo += bar will try to call the += method of foo, i.e. it is equivalent to foo.+=(bar). But if this fails and foo is a valid lvalue, and foo has a method named +, then Scala will also try foo = foo + bar instead.
Also, precedence and associativity are fixed in Scala: precedence is determined by the first character of the method name, i.e. all methods starting with * have the same precedence, all methods starting with - have the same precedence, and so on, while methods whose names end in : are right-associative.
Haskell goes a step further: there is no fundamental difference between functions and operators. Every function can be used in function call notation and in operator notation. The only difference is lexical: if the function name consists of operator characters, then when I want to use it in function call notation, I have to wrap it in parentheses. OTOH, if the function name consists of alphanumeric characters and I want to use it in operator notation, I need to wrap it in backticks. So, the following are equivalent:
a + b
(+) a b
a `plus` b
plus a b
For operator usage of functions, you can freely define the fixity, associativity, and precedence (a level from 0 to 9), e.g.:
infixr 5 <!==!>
In Ruby, there is a pre-defined set of operators that have corresponding methods, e.g.:
def +(other)
  plus(other)
end

In C++, operator overloading is what you are looking for.
Java has no native support for operator overloading.
C has no operator overloading either. Thus, a lot of add, mult and similar functions get written. Often these are macros, because then they can be used for different types. IMHO this is one reason I like C++ better.
@Alex gave a reference to a nice overview of operator overloading.

Related

Operators and Functions

Is there any substantial difference between operators and methods?
The only difference I see is the way they are called; do they have other differences?
For example in Python concatenation, slicing, indexing are defined as operators, while (referring to strings) upper(), replace(), strip() and so on are methods.
If I understand the question correctly...
In a nutshell, everything is a method of an object. You can find the "expression operator" methods among Python's magic class methods, in the operator section.
So, why does Python have "sexy" things like [x:y], [x], +, -? Because these notations are common to most developers, even to people unfamiliar with development, so math operators like + and - will catch the eye and the reader will know what happens. Similarly with indexing - it is common syntax in many languages.
But there is no special way to express the upper, replace or strip methods, so there are no "expression operators" for them.
So, as to what is different between "expression operators" and methods, I'd say it is just the way they look.
Your question is rather broad. For your examples, concatenation, slicing, and indexing are defined on strings and lists using special syntax (e.g., []). But other types may do things differently.
In fact, the behavior of most (I think all) of the operators is controlled by magic methods, so really, when you write something like x + y, a method is called under the hood.
From a practical perspective, one of the main differences is that the set of available syntactic operators is fixed and new ones cannot be added by your Python code. You can't write your own code to define a new operator called $ and then have x $ y work. On the other hand, you can define as many methods as you want. This means that you should choose carefully what behavior (if any) you assign to operators; since there are only a limited number of operators, you want to be sure that you don't "waste" them on uncommon operations.
Is there any substantial difference between operators and methods?
Practically speaking, there is no difference because each operator is mapped to a specific Python special method. Moreover, whenever Python encounters the use of an operator, it calls its associated special method implicitly. For example:
1 + 2
implicitly calls int.__add__, which makes the above expression equivalent1 to:
(1).__add__(2)
Below is a demonstration:
>>> class Foo:
...     def __add__(self, other):
...         print("Foo.__add__ was called")
...         return other + 10
...
>>> f = Foo()
>>> f + 1
Foo.__add__ was called
11
>>> f.__add__(1)
Foo.__add__ was called
11
>>>
Of course, actually using (1).__add__(2) in place of 1 + 2 would be inefficient (and ugly!) because it involves an unnecessary name lookup with the . operator.
That said, I do not see a problem with generally regarding the operator symbols (+, -, *, etc.) as simply shorthands for their associated method names (__add__, __sub__, __mul__, etc.). After all, they each end up doing the same thing by calling the same method.
1 Well, roughly equivalent. As documented in the Python data model reference, there is a set of special methods prefixed with the letter r that handle reflected operands. For example, the following expression:
A + B
may actually be equivalent to:
B.__radd__(A)
if A does not implement __add__ but B implements __radd__.
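A short Python illustration of the reflected case (Foo here is just an illustrative class):
# If the left operand doesn't know how to add a Foo, Python falls back
# to the right operand's reflected method __radd__.
class Foo:
    def __radd__(self, other):
        print("Foo.__radd__ was called")
        return other + 10

print(1 + Foo())   # int.__add__ returns NotImplemented for a Foo,
                   # so Foo.__radd__(1) runs and the result is 11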

How to define my function from a string?

This is the normal definition of a function, as far as I know:
real function f(x)
  real x
  f = (sin(x))**2*exp(-x)
end function f
But I want to define a function from a string; for example, the program will ask me to type it in, and then it will define the function f in the program. Is this possible in Fortran?
What you are looking for is possible in reflective programming languages, and is not possible in Fortran.
Quote from the link above:
A language supporting reflection provides a number of features available at runtime that would otherwise be very obscure to accomplish in a lower-level language. Some of these features are the abilities to:
Discover and modify source code constructions (such as code blocks, classes, methods, protocols, etc.) as a first-class object at runtime.
Convert a string matching the symbolic name of a class or function into a reference to or invocation of that class or function.
Evaluate a string as if it were a source code statement at runtime.
Create a new interpreter for the language's bytecode to give a new meaning or purpose for a programming construct.
I worked on a project once that tried to achieve something similar. We read in a string that contained named variables and mathematical operations (a function, if you will). In this string the variables then got replaced by their numerical values and the terms were evaluated.
The basic idea is not too difficult, but it requires a lot of string manipulation - and it is not a function in the context of a programming language.
We did it like this:
Recursively divide the string at +,-,/,*, but remember to honor brackets
If this is not possible (without violating bracketing), evaluate the remaining string:
Does it contain a mathematical expression like cos? Yes => recurse into arguments
No => evaluate the mathematical expression (no variables allowed, but they got replaced)
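Below is a rough Python sketch of that recursive-split idea (illustrative only: it is not the original Fortran code, the helper names are made up, and it only handles +, -, *, /, brackets, variables and one-argument math functions, not **):
import math

def evaluate(expr, variables):
    expr = expr.strip()
    # 1. Strip brackets that enclose the whole string.
    while expr.startswith("(") and matching_paren(expr, 0) == len(expr) - 1:
        expr = expr[1:-1].strip()
    # 2. Try to split at a top-level operator, lowest precedence first.
    for ops in ("+-", "*/"):
        depth = 0
        for i in range(len(expr) - 1, 0, -1):     # right-to-left keeps left associativity
            c = expr[i]
            if c == ")":
                depth += 1
            elif c == "(":
                depth -= 1
            elif depth == 0 and c in ops:
                left = evaluate(expr[:i], variables)
                right = evaluate(expr[i + 1:], variables)
                return {"+": left + right, "-": left - right,
                        "*": left * right, "/": left / right}[c]
    # 3. No split possible: a unary minus, a function call, a variable or a number.
    if expr.startswith("-"):
        return -evaluate(expr[1:], variables)
    if expr.endswith(")"):
        name, _, arg = expr[:-1].partition("(")
        return getattr(math, name)(evaluate(arg, variables))
    return variables[expr] if expr in variables else float(expr)

def matching_paren(s, start):
    depth = 0
    for i in range(start, len(s)):
        if s[i] == "(":
            depth += 1
        elif s[i] == ")":
            depth -= 1
        if depth == 0:
            return i
    return -1

print(evaluate("sin(x)*sin(x)*exp(-x)", {"x": 1.0}))   # approx. 0.2605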
This works quite well, but it requires:
Splitting strings
Matching in strings
Replacing strings with other strings, etc.
This is not trivial to do in Fortran, so if you have other options (like calling an external tool/script that returns the value), I would look into that - especially if you are new to Fortran!

Uses of & and && operator

Same question goes for | and ||.
What are the uses of the & and && operators? The only uses I can think of are
bitwise ANDs for integer base types (but not floats/decimals), using &
logical short-circuiting for bools/functions that return bool, usually using the && operator.
I can't think of any other cases where I have used them.
Does anyone know other uses?
-edit- To clarify, I am asking about any language. I have seen DateTime use '-' to return a TimeSpan, strings use '+' to create new strings, etc. I don't remember any custom datatype using && and &. So I am asking: what might they (reasonably) be used for? I don't know of an example.
In most C-based languages the meanings of these operators are:
&& - boolean AND. Used in boolean expressions such as if statements.
|| - boolean OR. Used in boolean expressions such as if statements.
& - bitwise AND. Used to AND the bits of both operands.
| - bitwise OR. Used to OR the bits of both operands.
However, these are not guaranteed to be such. Since every language defines its own operators, these symbols can be defined as anything in a different language.
From your edit, you seem to be using C#. The above description is right for C#, with | and & also acting as non-short-circuiting logical operators on bools (depending on context).
As for what you are saying about DateTime and the + operator - this is not related to the other operators you mentioned and their meaning.
If you're asking about all languages then I don't think it's reasonable to talk about "the & operator". The token & could have all sorts of meanings in different languages, operator and otherwise.
For example in C alone there are two distinct & operators (unary address-of and binary bitwise-and). Unary & in C and related languages is the only example I can immediately think of, of a use I've encountered that meets your criteria.
However, C++ adds operator overloading so that they can mean anything you like for user-defined classes, and in addition the & character has meaning in type declarations. In C++0x the && token has meaning in type declarations too.
A language along the lines of APL or J could "reasonably" use an & operator to mean pretty much anything, since there is no expectation that code in those languages bears any resemblance at all to C-like languages. Not sure if either of those two does in fact use either & or &&.
What meanings it's "reasonable" for a binary & operator overload to have in C++ is a matter of taste - normally it would be something that's analogous to bitwise & in some way, because the values represented by your class can be considered as a sequence of bits in some way. Doesn't have to be, though, as long as it's something that makes sense in the domain. Normally it's fairly "unreasonable" to use an & overload just because & happens to be unused. But if your class represents something fairly abstruse in mathematics and you need a third binary operator after + and *, I suppose you'd start looking around. If what you want is something with even lower precedence than +, binary & is a candidate. I can't for the moment think of any structures in abstract algebra that want such a thing, but that doesn't mean there aren't any.
Overloading operator&& in C++ is moderately antisocial, since the un-overloaded version of the operator short-circuits and overloaded versions don't. C++ programmers are used to writing expressions like if (p && *p != 0), so by overloading operator&& you're in effect messing with a control structure.
Overloading unary operator& in C++ is extremely antisocial. It stops people taking pointers to your objects. IIRC there are some awkward cases where common implementations of standard templates require of their template parameters that unary operator& results in a pointer (or at least a very pointer-like thing). This is not documented in the requirements for the argument, but is either almost or completely unavoidable when the library-writer comes to implement the template. So the overload would place restrictions on the use of the class that can't be deduced from the standard, and there'd better be a very good reason for that.
[Edit: what I didn't know when I wrote this, but do know now, is that template-writers could work around the need to use unary operator& with template parameters where the standard doesn't specify what & does for that type (i.e. all of them). You can do what boost::addressof does, which is:
reinterpret_cast<Foo*>(&reinterpret_cast<char&>(foo))
The standard doesn't require much of reinterpret_cast, but since we're talking about standard templates they know exactly what it does in the implementation, and anyway it's legal to reinterpret an object as chars. I think this is guaranteed to work - but if not, the implementation can ensure that it does work if necessary to write fully conforming standard templates.
But, if your implementation doesn't go to these lengths to avoid calling an overloaded operator&, the original problem remains.]
As your previous question about these operators was about C#, I assume that this one is too.
Generally you want to use the short-circuit version of the conditional operators to avoid unnecessary operations. If the value of the first operand is enough to determine the result, the second operand needn't be evaluated.
When a condition relies on the previous condition being true, only the short-circuit operators work, for example doing a null check and property comparison:
if (myObj != null && myObj.State == "active")
Using the & operator in that case would not keep the second operand from being evaluated, and it would cause a null reference exception.
The non-shortcircuit operators are useful when you want both operands to always be evaluated, for example when they have a side effect:
if (DoSomeWork() & DoOtherWork())
Using the && operator would prevent the second method from being called if the first returned false.
& and | are also bitwise operators, but as || and && aren't, there is no ambiguity when you use them that way.
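The same short-circuit distinction can be sketched in Python (illustrative, not C#): the boolean and stops early, while the bitwise & always evaluates both operands.
def check(x):
    print("check called")
    return x > 0

print(False and check(5))   # prints only: False         (check is never called)
print(False & check(5))     # prints: check called, then: False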
This is a very general question, and I'm assuming you're talking about Java, C#, or another language with similar syntax. In VB, & is the equivalent of + on strings, but that's another story I assume.
As far as I know, your statement is correct if you're talking in terms of C#.
If it's Javascript then please look at this answer: Using &&'s short-circuiting as an if statement?
There is a short discussion on C# uses there too.
Java has a few more operators, such as |= : What does "|=" mean in Java?
C (and C++) use & as a unary operator on any data type to get the address of the data,
for example:
int i = 5;
cout << &i; // prints the address of i (C++ syntax)
Some languages allow you to override such operators to make them do anything you want!
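For instance, in Python the bitwise operators can be overridden through the __and__ / __or__ magic methods (a minimal sketch; the short-circuiting and/or cannot be overridden):
class BitSet:
    def __init__(self, bits):
        self.bits = bits
    def __and__(self, other):          # called for:  a & b
        return BitSet(self.bits & other.bits)
    def __or__(self, other):           # called for:  a | b
        return BitSet(self.bits | other.bits)
    def __repr__(self):
        return f"BitSet({self.bits:#06b})"

print(BitSet(0b1100) & BitSet(0b1010))   # BitSet(0b1000)
print(BitSet(0b1100) | BitSet(0b1010))   # BitSet(0b1110)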

Too many arguments for function

I'm starting to learn Lisp with a Java background. In SICP's exercises there are many tasks where students should create abstract functions with many parameters, like
(define (filtered-accumulate combiner null-value term a next b filter)...)
in exercise 1.33. In Java (a language with a safe, static typing discipline) a method with more than 4 arguments usually smells, but in Lisp/Scheme it doesn't, does it? I'm wondering how many arguments you use in your functions. If you use it in production, do you make that many layers?
SICP uses a subset of Scheme
SICP is a book used in introductory computer science courses. While it explains some advanced concepts, it uses a very tiny language, a subset of the Scheme language and a sub-subset of what any real-world Scheme or Lisp implementation provides. Students using SICP are supposed to start with a simple and easy-to-learn language. From there they learn to implement more complex language additions.
Only positional parameters are being used in plain educational Scheme
There are, for example, no macros developed in SICP. Add to this that standard Scheme has only positional parameters for functions.
Lisp and Scheme offer also more expressive argument lists
In 'real' Lisp or Scheme one can use one or more of the following:
objects or records/structures (poor man's closures) which group things. An object passed can contain several data items, which otherwise would need to be passed 'spread'.
defaults for optional variables. Thus we need only to pass those that we want to have a certain non-default value
optional and named arguments. This allows flexible argument lists which are much more descriptive.
computed arguments. The value or the default value of arguments can be computed based on other arguments
The above leads to function interfaces that are more complicated to write, but often easier to use.
In Lisp it is good style to have descriptive names for arguments and also to provide online documentation for the interface. The development environment will display information about the interface of a function, so this information is typically only a keystroke away or is even displayed automatically.
It's also good style for any non-trivial interface which is supposed to be used interactively by the user/developer to check its arguments at runtime.
Example for a complex, but readable argument list
When there are more arguments, Common Lisp provides named arguments, which can appear in any order after the normal arguments. Named arguments also provide defaults and can be omitted:
(defun order-product (product
                      &key
                      buyer
                      seller
                      (vat (local-vat seller))
                      (price (best-price product))
                      amount
                      free-delivery-p)
  "The function ORDER-PRODUCT ..."       ; documentation string
  (declare (type ratio vat price)        ; type declarations
           (type (integer 0) amount)
           (type boolean free-delivery-p))
  ...)
We would use it then:
(order-product 'sicp
               :seller 'mit-press
               :buyer 'stan-kurilin
               :amount 1)
The above uses the seller argument before the buyer argument. It also omits various arguments, some of which have their values computed.
Now we can ask whether such extensive arguments are good or bad. The arguments for them:
the function call gets more descriptive
functions have standard mechanisms to attach documentation
functions can be asked for their parameter lists
type declarations are possible -> thus types don't need to be written as comments
many parameters can have sensible default values and don't need to be mentioned
Several Scheme implementations have adopted similar argument lists.
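For comparison, a rough Python analogue of the same style (a sketch, not code from any real library: local_vat and best_price are made-up stubs mirroring the Lisp example, and the computed defaults need a None sentinel because Python default expressions cannot refer to other parameters):
def local_vat(seller):                 # stub mirroring the Lisp LOCAL-VAT
    return 0.19

def best_price(product):               # stub mirroring the Lisp BEST-PRICE
    return 42.0

def order_product(product, *, buyer=None, seller=None,
                  vat=None, price=None, amount=1, free_delivery=False):
    """Named arguments can be given in any order, and most have defaults."""
    vat = local_vat(seller) if vat is None else vat          # computed default
    price = best_price(product) if price is None else price  # computed default
    return (product, buyer, seller, vat, price, amount, free_delivery)

order_product("sicp", seller="mit-press", buyer="stan-kurilin", amount=1)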

Expression Versus Statement

I'm asking with regard to C#, but I assume it's the same in most other languages.
Does anyone have a good definition of expressions and statements and what the differences are?
Expression: Something which evaluates to a value. Example: 1+2/x
Statement: A line of code which does something. Example: GOTO 100
In the earliest general-purpose programming languages, like FORTRAN, the distinction was crystal-clear. In FORTRAN, a statement was one unit of execution, a thing that you did. The only reason it wasn't called a "line" was because sometimes it spanned multiple lines. An expression on its own couldn't do anything... you had to assign it to a variable.
1 + 2 / X
is an error in FORTRAN, because it doesn't do anything. You had to do something with that expression:
X = 1 + 2 / X
FORTRAN didn't have a grammar as we know it today—that idea was invented, along with Backus-Naur Form (BNF), as part of the definition of Algol-60. At that point the semantic distinction ("have a value" versus "do something") was enshrined in syntax: one kind of phrase was an expression, and another was a statement, and the parser could tell them apart.
Designers of later languages blurred the distinction: they allowed syntactic expressions to do things, and they allowed syntactic statements that had values.
The earliest popular language example that still survives is C. The designers of C realized that no harm was done if you were allowed to evaluate an expression and throw away the result. In C, every syntactic expression can be made into a statement just by tacking a semicolon onto the end:
1 + 2 / x;
is a totally legit statement even though absolutely nothing will happen. Similarly, in C, an expression can have side-effects—it can change something.
1 + 2 / callfunc(12);
because callfunc might just do something useful.
Once you allow any expression to be a statement, you might as well allow the assignment operator (=) inside expressions. That's why C lets you do things like
callfunc(x = 2);
This evaluates the expression x = 2 (assigning the value of 2 to x) and then passes that (the 2) to the function callfunc.
This blurring of expressions and statements occurs in all the C-derivatives (C, C++, C#, and Java), which still have some statements (like while) but which allow almost any expression to be used as a statement (in C# only assignment, call, increment, and decrement expressions may be used as statements; see Scott Wisniewski's answer).
Having two "syntactic categories" (which is the technical name for the sort of thing statements and expressions are) can lead to duplication of effort. For example, C has two forms of conditional, the statement form
if (E) S1; else S2;
and the expression form
E ? E1 : E2
And sometimes people want duplication that isn't there: in standard C, for example, only a statement can declare a new local variable—but this ability is useful enough that the
GNU C compiler provides a GNU extension that enables an expression to declare a local variable as well.
Designers of other languages didn't like this kind of duplication, and they saw early on that if expressions can have side effects as well as values, then the syntactic distinction between statements and expressions is not all that useful—so they got rid of it. Haskell, Icon, Lisp, and ML are all languages that don't have syntactic statements—they only have expressions. Even the classic structured looping and conditional forms are considered expressions, and they have values—but not very interesting ones.
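For comparison, the same split is easy to see in Python, which keeps assignment and if as statements but also offers expression forms of both (a minimal sketch):
x = 5                         # assignment statement: does something, is not a value
if x > 0:                     # statement form of the conditional
    sign = 1
else:
    sign = -1

sign = 1 if x > 0 else -1     # expression form of the conditional
y = (z := x * 2) + 1          # assignment expression (the walrus operator, Python 3.8+)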
an expression is anything that yields a value: 2 + 2
a statement is one of the basic "blocks" of program execution.
Note that in C, "=" is actually an operator, which does two things:
returns the value of the right hand subexpression.
copies the value of the right hand subexpression into the variable on the left hand side.
Here's an extract from the ANSI C grammar. You can see that C doesn't have many different kinds of statements... the majority of statements in a program are expression statements, i.e. an expression with a semicolon at the end.
statement
    : labeled_statement
    | compound_statement
    | expression_statement
    | selection_statement
    | iteration_statement
    | jump_statement
    ;

expression_statement
    : ';'
    | expression ';'
    ;
http://www.lysator.liu.se/c/ANSI-C-grammar-y.html
An expression is something that returns a value, whereas a statement does not.
For examples:
1 + 2 * 4 * foo.bar() //Expression
foo.voidFunc(1); //Statement
The Big Deal between the two is that you can chain expressions together, whereas statements cannot be chained.
You can find this on wikipedia, but expressions are evaluated to some value, while statements have no evaluated value.
Thus, expressions can be used in statements, but not the other way around.
Note that some languages (such as Lisp, and I believe Ruby, and many others) do not differentiate statement vs expression... in such languages, everything is an expression and can be chained with other expressions.
For an explanation of important differences in composability (chainability) of expressions vs statements, my favorite reference is John Backus's Turing award paper, Can programming be liberated from the von Neumann style?.
Imperative languages (Fortran, C, Java, ...) emphasize statements for structuring programs, and have expressions as a sort of afterthought. Functional languages emphasize expressions. Purely functional languages have such powerful expressions that statements can be eliminated altogether.
Expressions can be evaluated to get a value, whereas statements don't return a value (they're of type void).
Function call expressions can also be considered statements of course, but unless the execution environment has a special built-in variable to hold the returned value, there is no way to retrieve it.
Statement-oriented languages require all procedures to be a list of statements. Expression-oriented languages, which is probably all functional languages, are lists of expressions, or in the case of LISP, one long S-expression that represents a list of expressions.
Although both types can be composed, most expressions can be composed arbitrarily as long as the types match up. Each type of statement has its own way of composing other statements, if it can do that at all. Foreach and if statements require either a single statement or that all subordinate statements go in a statement block, one after another, unless the substatements allow for their own substatements.
Statements can also include expressions, whereas an expression doesn't really include any statements. One exception, though, would be a lambda expression, which represents a function, and so can include anything a function can include, unless the language only allows for limited lambdas, like Python's single-expression lambdas.
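A small Python illustration of that composability gap (a sketch):
total = sum(x * x for x in range(10)) + max(1, 2)   # expressions nest freely

double = lambda x: x * 2          # fine: a lambda body is a single expression
# bad = lambda x: (y = x * 2)     # SyntaxError: assignment is a statement, not an expression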
In an expression-based language, all you need is a single expression for a function since all control structures return a value (a lot of them return NIL). There's no need for a return statement since the last-evaluated expression in the function is the return value.
Simply: an expression evaluates to a value, a statement doesn't.
Some things about expression based languages:
Most important: everything returns a value
There is no real difference between delimiting a code block and delimiting an expression, since everything is an expression. This doesn't prevent lexical scoping though: a local variable could be defined for the expression in which its definition is contained and all statements contained within that, for example.
In an expression based language, everything returns a value. This can be a bit strange at first -- What does (FOR i = 1 TO 10 DO (print i)) return?
Some simple examples:
(1) returns 1
(1 + 1) returns 2
(1 == 1) returns TRUE
(1 == 2) returns FALSE
(IF 1 == 1 THEN 10 ELSE 5) returns 10
(IF 1 == 2 THEN 10 ELSE 5) returns 5
A couple more complex examples:
Some things, such as some function calls, don't really have a meaningful value to return (Things that only produce side effects?). Calling OpenADoor(), FlushTheToilet() or TwiddleYourThumbs() will return some sort of mundane value, such as OK, Done, or Success.
When multiple unlinked expressions are evaluated within one larger expression, the value of the last thing evaluated in the large expression becomes the value of the large expression. To take the example of (FOR i = 1 TO 10 DO (print i)), the value of the for loop is "10": it causes the (print i) expression to be evaluated 10 times, each time returning i as a string. The final time through returns 10, our final answer.
It often requires a slight change of mindset to get the most out of an expression based language, since the fact that everything is an expression makes it possible to 'inline' a lot of things
As a quick example:
FOR i = 1 to (IF MyString == "Hello, World!" THEN 10 ELSE 5) DO
(
LotsOfCode
)
is a perfectly valid replacement for the non expression-based
IF MyString == "Hello, World!" THEN TempVar = 10 ELSE TempVar = 5
FOR i = 1 TO TempVar DO
(
LotsOfCode
)
In some cases, the layout that expression-based code permits feels much more natural to me
Of course, this can lead to madness. As part of a hobby project in an expression-based scripting language called MaxScript, I managed to come up with this monster line
IF FindSectionStart "rigidifiers" != 0 THEN FOR i = 1 TO (local rigidifier_array = (FOR i = (local NodeStart = FindsectionStart "rigidifiers" + 1) TO (FindSectionEnd(NodeStart) - 1) collect full_array[i])).count DO
(
LotsOfCode
)
I am not really satisfied with any of the answers here. I looked at the grammar for C++ (ISO 2008). However maybe for the sake of didactics and programming the answers might suffice to distinguish the two elements (reality looks more complicated though).
A statement consists of zero or more expressions, but can also be other language concepts. This is the Extended Backus Naur form for the grammar (excerpt for statement):
statement:
    labeled-statement
    expression-statement      <-- can be zero or more expressions
    compound-statement
    selection-statement
    iteration-statement
    jump-statement
    declaration-statement
    try-block
We can see the other concepts that are considered statements in C++.
expression-statements are self-explanatory (a statement can consist of zero or more expressions; read the grammar carefully, it's tricky)
case, for example, is a labeled-statement
selection-statements are if, if/else, switch
iteration-statements are while, do...while, for (...)
jump-statements are break, continue, return (which can return an expression), goto
declaration-statement is the set of declarations
try-block is a statement representing try/catch blocks
and there might be some more down the grammar
This is an excerpt showing the expressions part:
expression:
    assignment-expression
    expression "," assignment-expression

assignment-expression:
    conditional-expression
    logical-or-expression assignment-operator initializer-clause
    throw-expression
expressions often are, or contain, assignments
conditional-expression (sounds misleading) refers to usage of the operators (+, -, *, /, &, |, &&, ||, ...)
throw-expression - uh? the throw clause is an expression too
The de-facto basis of these concepts is:
Expressions: A syntactic category whose instance can be evaluated to a value.
Statement: A syntactic category whose instance may be involved with evaluations of an expression, where the resulting value of the evaluation (if any) is not guaranteed to be available.
Aside from the very initial context of FORTRAN in the early decades, both definitions of expressions and statements in the accepted answer are obviously wrong:
Expressions can be unevaluated operands. Values are never produced from them.
Subexpressions in non-strict evaluations can definitely be unevaluated.
Most C-like languages have so-called short-circuit evaluation rules to conditionally skip some subexpression evaluations without changing the final result, in spite of the side effects.
C and some C-like languages have the notion of an unevaluated operand, which may even be normatively defined in the language specification. Such constructs are used to avoid the evaluations definitively, so the remaining context information (e.g. types or alignment requirements) can be statically distinguished without changing the behavior after the program translation.
For example, an expression used as the operand of the sizeof operator is never evaluated.
Statements have nothing to do with line constructs. They can do something more than expressions, depending on the language specifications.
Modern Fortran, as the direct descendant of the old FORTRAN, has concepts of executable statements and nonexecutable statements.
Similarly, C++ defines declarations as the top-level subcategory of a translation unit. A declaration in C++ is a statement. (This is not true in C.) There are also expression-statements like Fortran's executable statements.
In the interest of the comparison with expressions, only the "executable" statements matter. But you can't ignore the fact that statements are already generalized to be constructs forming the translation units in such imperative languages. So, as you can see, the definitions of the category vary a lot. The (probably) only remaining common property preserved among these languages is that statements are expected to be interpreted in lexical order (for most users, left-to-right and top-to-bottom).
(BTW, I want to add [citation needed] to that answer concerning materials about C because I can't recall whether DMR has such opinions. It seems not, otherwise there should be no reasons to preserve the functionality duplication in the design of C: notably, the comma operator vs. the statements.)
(The following rationale is not the direct response to the original question, but I feel it necessary to clarify something already answered here.)
Nevertheless, it is doubtful that we need a specific category of "statements" in general-purpose programming languages:
Statements are not guaranteed to have more semantic capabilities over expressions in usual designs.
Many languages have already successfully abandoned the notion of statements to get clean, neat and consistent overall designs.
In such languages, expressions can do everything old-style statements can do: just drop the unused results when the expressions are evaluated, either by leaving the results explicitly unspecified (e.g. in RnRS Scheme), or having a special value (as a value of a unit type) not producible from normal expression evaluations.
The lexical order rules of evaluation of expressions can be replaced by explicit sequence control operator (e.g. begin in Scheme) or syntactic sugar of monadic structures.
The lexical order rules of other kinds of "statements" can be derived as syntactic extensions (using hygienic macros, for example) to get the similar syntactic functionality. (And it can actually do more.)
On the contrary, statements cannot have such conventional rules, because they don't compose on evaluation: there is just no such common notion of "substatement evaluation". (Even if there were, I doubt there could be much more than copy and paste from existing rules of evaluation of expressions.)
Typically, languages preserving statements will also have expressions to express computations, and there is a top-level subcategory of the statements preserved to expression evaluations for that subcategory. For example, C++ has the so-called expression-statement as the subcategory, and uses the discarded-value expression evaluation rules to specify the general cases of full-expression evaluations in such context. Some languages like C# chooses to refine the contexts to simplify the use cases, but it bloats the specification more.
For users of programming languages, the significance of statements may confuse them further.
The separation of rules of expressions and statements in the languages requires more effort to learn a language.
The naive lexical order interpretation hides the more important notion: expression evaluation. (This is probably most problematic over all.)
Even though the evaluations of full expressions in statements are constrained by the lexical order, subexpressions are not (necessarily). Users should ultimately learn this besides any rules coupled to the statements. (Consider how to make a newbie get the point that ++i + ++i is meaningless in C.)
Some languages like Java and C# further constrain the order of evaluation of subexpressions, to be permissive of ignorance of the evaluation rules. It can be even more problematic.
This seems overspecified to users who have already learned the idea of expression evaluation. It also encourages the user community to follow the blurred mental model of the language design.
It bloats the language specification even more.
It is harmful to optimization by missing the expressiveness of nondeterminism on evaluations, before more complicated primitives are introduced.
A few languages like C++ (particularly, C++17) specify more subtle contexts of evaluation rules, as a compromise of the problems above.
It bloats the language specification a lot.
This goes totally against simplicity for average users...
So why statements? Anyway, the history is already a mess. It seems most language designers do not take their choice carefully.
Worse, it even gives some type system enthusiasts (who are not familiar enough with the PL history) some misconceptions that type systems must have important things to do with the more essential designs of rules on the operational semantics.
Seriously, reasoning depending on types are not that bad in many cases, but particularly not constructive in this special one. Even experts can screw things up.
For example, someone emphasizes the well-typing nature as the central argument against the traditional treatment of undelimited continuations. Although the conclusion is somewhat reasonable and the insights about composed functions are OK (but still far too naive to get at the essence), this argument is not sound because it totally ignores the "side channel" approach in practice, like _Noreturn any_of_returnable_types (in C11), to encode Falsum. And strictly speaking, an abstract machine with unpredictable state is not identical to "a crashed computer".
A statement is a special case of an expression, one with void type. The tendency of languages to treat statements differently often causes problems, and it would be better if they were properly generalized.
For example, in C# we have the very useful Func<T1, T2, T3, TResult> overloaded set of generic delegates. But we also have to have a corresponding Action<T1, T2, T3> set as well, and general purpose higher-order programming constantly has to be duplicated to deal with this unfortunate bifurcation.
Trivial example - a function that checks whether a reference is null before calling onto another function:
TResult IfNotNull<TValue, TResult>(TValue value, Func<TValue, TResult> func)
    where TValue : class
{
    return (value == null) ? default(TResult) : func(value);
}
Could the compiler deal with the possibility of TResult being void? Yes. All it has to do is require that return is followed by an expression that is of type void. The result of default(void) would be of type void, and the func being passed in would need to be of the form Func<TValue, void> (which would be equivalent to Action<TValue>).
A number of other answers imply that you can't chain statements like you can with expressions, but I'm not sure where this idea comes from. We can think of the ; that appears after statements as a binary infix operator, taking two expressions of type void and combining them into a single expression of type void.
Statements -> Instructions to follow sequentially
Expressions -> Evaluation that returns a value
Statements are basically like steps, or instructions, in an algorithm; the result of the execution of a statement is the updating of the instruction pointer (so called in assembler).
Expressions do not imply an execution order at first sight; their purpose is to evaluate and return a value. In imperative programming languages the evaluation of an expression has an order, but that is just because of the imperative model, and it is not their essence.
Examples of Statements:
for
goto
return
if
(all of them imply the advance of the line (statement) of execution to another line)
Example of expressions:
2+2
(it doesn't imply the idea of execution, but of the evaluation)
Statement,
A statement is a procedural building-block from which all C# programs are constructed. A statement can declare a local variable or constant, call a method, create an object, or assign a value to a variable, property, or field.
A series of statements surrounded by curly braces form a block of code. A method body is one example of a code block.
bool IsPositive(int number)
{
    if (number > 0)
    {
        return true;
    }
    else
    {
        return false;
    }
}
Statements in C# often contain expressions. An expression in C# is a fragment of code containing a literal value, a simple name, or an operator and its operands.
Expression,
An expression is a fragment of code that can be evaluated to a single value, object, method, or namespace. The two simplest types of expressions are literals and simple names. A literal is a constant value that has no name.
int i = 5;
string s = "Hello World";
Both i and s are simple names identifying local variables. When those variables are used in an expression, the value of the variable is retrieved and used for the expression.
I prefer the meaning of statement in the formal logic sense of the word. It is one that changes the state of one or more of the variables in the computation, enabling a true or false statement to be made about their value(s).
I guess there will always be confusion in the computing world, and science in general, when new terminology or words are introduced, existing words are 'repurposed', or users are ignorant of the existing, established or 'proper' terminology for what they are describing.
Here is the summary of one of the simplest answers I found,
originally answered by Anders Kaseorg:
A statement is a complete line of code that performs some action, while an expression is any section of the code that evaluates to a value.
Expressions can be combined “horizontally” into larger expressions using operators, while statements can only be combined “vertically” by writing one after another, or with block constructs.
Every expression can be used as a statement (whose effect is to evaluate the expression and ignore the resulting value), but most statements cannot be used as expressions.
http://www.quora.com/Python-programming-language-1/Whats-the-difference-between-a-statement-and-an-expression-in-Python
Statements are grammatically complete sentences. Expressions are not. For example
x = 5
reads as "x gets 5." This is a complete sentence. The code
(x + 5)/9.0
reads, "x plus 5 all divided by 9.0." This is not a complete sentence. The statement
while k < 10:
    print k
    k += 1
is a complete sentence. Notice that the loop header is not; "while k < 10," is a subordinating clause.
In a statement-oriented programming language, a code block is defined as a list of statements. In other words, a statement is a piece of syntax that you can put inside a code block without causing a syntax error.
Wikipedia defines the word statement similarly
In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements
Notice the latter statement. (Although "a program" in this case is technically wrong, because both C and Java reject a program that consists of nothing but statements.)
Wikipedia defines the word expression as
An expression in a programming language is a syntactic entity that may be evaluated to determine its value
This is, however, false, because in Kotlin, throw Exception("") is an expression, but when evaluated it simply throws an exception, never returning any value.
In a statically typed programming language, every expression has a type. This definition, however, doesn't work in a dynamically typed programming language.
Personally, I define an expression as a piece of syntax that can be composed with an operator or function calls to yield a bigger expression. This is actually similar to the explanation of expression by Wikipedia:
It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value
But the problem is, in the C programming language, given a function executeSomething like this:
void executeSomething(void){
    return;
}
Is executeSomething() an expression or is it a statement? According to my definition, it is a statement because as defined in Microsoft's C reference grammar,
You cannot use the (nonexistent) value of an expression that has type void in any way, nor can you convert a void expression (by implicit or explicit conversion) to any type except void
But the same page clearly indicates that such syntax is an expression.
A statement is a block of code that doesn't return anything and which is just a standalone unit of execution. For example-
if(a>=0)
    printf("Hello Human, I'm a statement");
An expression, on the other hand, returns or evaluates to a new value. For example -
if(a>=0)
    return a+10; // this is an expression because it evaluates to a new value
or
a = 10+y; // this is also an expression because it returns a new value
Expression
A piece of syntax which can be evaluated to some value. In other words, an expression is an accumulation of expression elements like literals, names, attribute access, operators or function calls which all return a value. In contrast to many other languages, not all language constructs are expressions. There are also statements which cannot be used as expressions, such as while. Assignments are also statements, not expressions.
Statement
A statement is part of a suite (a “block” of code). A statement is either an expression or one of several constructs with a keyword, such as if, while or for.
To improve on and validate my prior answer, definitions of programming language terms should be explained from computer science type theory when applicable.
An expression has a type other than the Bottom type, i.e. it has a value. A statement has the Unit or Bottom type.
From this it follows that a statement can only have any effect in a program when it creates a side effect, because it either cannot return a value or it only returns the value of the Unit type, which is either non-assignable (in some languages, such as C's void) or (such as in Scala) can be stored for a delayed evaluation of the statement.
Obviously a #pragma or a /*comment*/ have no type and thus are differentiated from statements. Thus the only type of statement that would have no side-effects would be a non-operation. Non-operation is only useful as a placeholder for future side-effects. Any other action due to a statement would be a side-effect. Again a compiler hint, e.g. #pragma, is not a statement because it has no type.
Most precisely, a statement must have a "side-effect" (i.e. be imperative) and an expression must have a value type (i.e. not the bottom type).
The type of a statement is the unit type, but due to the Halting theorem unit is a fiction, so let's say the bottom type.
Void is not precisely the bottom type (it isn't the subtype of all possible types). It exists in languages that don't have a completely sound type system. That may sound like a snobbish statement, but completeness such as variance annotations are critical to writing extensible software.
Let's see what Wikipedia has to say on this matter.
https://en.wikipedia.org/wiki/Statement_(computer_science)
In computer programming a statement is the smallest standalone element of an imperative programming language that expresses some action to be carried out.
Many languages (e.g. C) make a distinction between statements and definitions, with a statement only containing executable code and a definition declaring an identifier, while an expression evaluates to a value only.