Specifying default rules in the Curry language: Why and how?

In section 3.5.6 of the Curry tutorial (pdf), we are advised to use default rules to "regain control after a failed search". The following example is given. (For clarity I have added a type signature and curried the input.)
lookup :: k -> [(k,v)] -> Maybe v
lookup key (_++[(key,value)]++_ ) = Just value
lookup’default _ _ = Nothing
I can't get that to compile unless I replace the ’ with a '. Once I do, it behaves like this:
test> test.lookup 1 [(2,3)]
*** No value found!
Question 1: What is the default declaration for?
Why would you need to specify that a particular clause is the default one? Won't it be arrived at one way or another, once the others fail?
Question 2: How is it written? Should it be written at all?
If instead I drop the string 'default:
lookup :: k -> [(k,v)] -> Maybe v
lookup key (_++[(key,value)]++_ ) = Just value
lookup _ _ = Nothing
it behaves as intended:
test> test.lookup 1 [(2,3)]
Nothing
test>
Has the 'default syntax changed since the tutorial was written? Has it been removed altogether?

This is the code you are looking for. Your version was missing the preprocessor directive that enables default rules, and it used the wrong quote character.
-- Use default rules
{-# OPTIONS_CYMAKE -F --pgmF=currypp --optF=defaultrules #-}
lookup :: k -> [(k,v)] -> Maybe v
lookup key (_++[(key,value)]++_ ) = Just value
lookup'default _ _ = Nothing
test_positive = lookup 2 [(2,3)] == Just 3
test_negative = lookup 1 [(2,3)] == Nothing
A default rule serves various purposes. Regaining control after a failed search is a particularly useful one, since you cannot check with equality whether an expression is a failure.

If you delete the option "-F", then the preprocessor is not invoked, which explains the behavior you observed.
The permission error is due to the fact that not all possible intermediate representations of a Curry program are precompiled in the Ubuntu package. Unfortunately, the "default rule translator" of CurryPP requires one of these intermediate representations.
The Ubuntu/Debian package is intended only for using the kernel of Curry. For extensions and more advanced tools, I recommend installing PAKCS manually, e.g., the current release from
https://www.informatik.uni-kiel.de/~pakcs/download.html
If you already have Ubuntu, a simple make should be sufficient.


Do you have to declare a function's type?

One thing I do not fully understand about Haskell is declaring functions and their types: is it something you have to do or is it just something you should do for clarity? Or are there certain scenarios where you need to do it, just not all?
You don’t need to declare the type of any function that uses only standard Haskell type system features. Haskell 98 is specified with global type inference, meaning that the types of all top-level bindings are guaranteed to be inferable.
However, it’s good practice to include type annotations for top-level definitions, for a few reasons:
Verifying that the inferred type matches your expectations
Helping the compiler produce better diagnostic messages when there are type mismatches
Most importantly, documenting your intent and making the code more readable for humans!
As for definitions in where clauses, it’s a matter of style. The conventional style is to omit them, partly because in some cases their types could not be written explicitly before the ScopedTypeVariables extension. I consider the lack of scoped type variables a bit of a bug in the 1998 and 2010 standards, and although GHC is the de facto standard compiler today, ScopedTypeVariables is still a nonstandard extension. Regardless, it’s good practice to include annotations where possible for nontrivial code, and helpful for you as a programmer.
In practice, it’s common to use some language extensions that complicate type inference or make it “undecidable”, meaning that, at least for arbitrary programs, it’s impossible to always infer a type, or at least a unique “best” type. But for the sake of usability, extensions are usually designed very carefully to only require annotations at the point where you actually use them.
For example, GHC (and standard Haskell) will only infer polymorphic types with top-level foralls, which are normally left entirely implicit. (They can be written explicitly using ExplicitForAll.) If you need to pass a polymorphic function as an argument to another function like (forall t. …) -> … using RankNTypes, this requires an annotation to override the compiler’s assumption that you meant something like forall t. (… -> …), or that you mistakenly applied the function on different types.
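For instance, here is a minimal sketch (the function applyBoth and its name are made up for illustration). The rank-2 annotation on the first argument is exactly the kind of annotation the extension forces you to write; without it, the compiler would assume the argument is used at a single type and reject the body:
{-# LANGUAGE RankNTypes #-}

-- applyBoth needs its argument to be genuinely polymorphic, because f is
-- applied to an Int and to a Bool inside the same body.
applyBoth :: (forall t. t -> t) -> (Int, Bool) -> (Int, Bool)
applyBoth f (x, y) = (f x, f y)

example :: (Int, Bool)
example = applyBoth id (1, True)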
If an extension requires annotations, the rules for when and where you must include them are typically documented in places like the GHC User’s Guide, and formally specified in the papers specifying the feature.
Short answer: Functions are defined in "bindings" and have their types declared in "type signatures". Type signatures for bindings are always syntactically optional, as the language doesn't require their use in any particular case. (There are some places type signatures are required for things other than bindings, like in class definitions or in declarations of data types, but I don't think there's any case where a binding requires an accompanying type signature according to the syntax of the language, though I might be forgetting some weird situation.) The reason they aren't required is that the compiler can usually, though not always, figure out the types of functions itself as part of its type-checking operation.
However, some programs may not compile unless a type signature is added to a binding, and some programs may not compile unless a type signature is removed, so sometimes you need to use them, and sometimes you can't use them (at least not without a language extension and some changes to the syntax of other, nearby type signatures to use the extension).
It is considered best practice to include type signatures for every top-level binding, and the GHC -Wall flag will warn you if any top-level bindings lack an associated type signature. The rationale for this is that top-level signatures (1) provide documentation for the "interface" of your code, (2) aren't so numerous that they overburden the programmer, and (3) usually provide sufficient guidance to the compiler that you get better error messages than if you omit type signatures entirely.
If you look at almost any real-world Haskell source code (e.g., browse the source of any decent library on Hackage), you'll see this convention being used -- all top-level bindings have type signatures, and type signatures are used sparingly in other contexts (in expressions or where or let clauses). I'd encourage any beginner to use this same convention in the code they write as they're learning Haskell. It's a good habit and will avoid many frustrating error messages.
Long answer:
In Haskell, a binding assigns a name to a chunk of code, like the following binding for the function hypo:
hypo a b = sqrt (a*a + b*b)
When compiling a binding (or collection of related bindings), the compiler performs a type-checking operation on the expressions and subexpressions that are involved.
It is this type-checking operation that allows the compiler to determine that the variable a in the above expression must be of some type t that has a Num t constraint (in order to support the * operation), that the result of a*a will be the same type t, and that this implies that b*b and so b are also of this same type t (since only two values of the same type can be added together with +), and that a*a + b*b is therefore of type t, and so the result of the sqrt must also be of this same type t which must incidentally have a Floating t constraint to support the sqrt operation. The information collected and type relationships deduced during this type checking allow the compiler to infer a general type signature for the hypo function automatically, namely:
hypo :: (Floating t) => t -> t -> t
(The Num t constraint doesn't appear because it's implied by Floating t).
Because the compiler can learn the type signatures of (most) bound names, like hypo, automatically as a side-effect of the type-checking operation, there's no fundamental need for the programmer to explicitly supply this information, and that's the motivation for the language making type signatures optional. The only requirements the language places on type signatures are that if they are supplied, they must appear in the same declaration list as the associated binding (e.g., both must appear in the same module, or in the same where clause or whatever, and you can't have a type signature without a binding), there must be at most one type signature for a binding (no duplicate type signatures, even if they are identical, unlike in C, say), and the type supplied in the type signature must not conflict with the results of type checking.
The language allows the type signature and binding to appear anywhere in the same declaration list, in any order, and with no requirement they be next to each other, so the following is valid Haskell code:
double :: (Num a) => a -> a
half x = x / 2
double x = x + x
half :: (Fractional a) => a -> a
Such silliness is not recommended, however, and the convention is to place the type signature immediately before the corresponding binding, though one exception is to have a type signature shared across multiple bindings of the same type, whose definitions follow:
ex1, ex2, ex3 :: Tree Int
ex1 = Leaf 1
ex2 = Node (Leaf 2) (Leaf 3)
ex3 = Node (Node (Leaf 4) (Leaf 5)) (Leaf 5)
In some situations, the compiler cannot automatically infer the correct type for a binding, and a type signature may be required. The following binding requires a type signature and won't compile without it. (The technical problem is that toList is written using polymorphic recursion.)
data Binary a = Leaf a | Pair (Binary (a,a)) deriving (Show)
-- following type signature is required...
toList :: Binary a -> [a]
toList (Leaf x) = [x]
toList (Pair b) = concatMap (\(x,y) -> [x,y]) (toList b)
In other situations, the compiler can automatically infer the correct type for a binding, but the type can't be expressed in a type signature (at least, not without some GHC extensions to the standard language). This happens most often in where clauses. (The technical problem is that type variables aren't scoped, and go's type involves the type variable a from the type signature of myLookup.)
myLookup :: Eq a => a -> [(a,b)] -> Maybe b
myLookup k = go
  where -- go :: [(a,b)] -> Maybe b
    go ((k',v):rest) | k == k' = Just v
                     | otherwise = go rest
    go [] = Nothing
There's no type signature in standard Haskell for go that would work here. However, if you enable an extension, you can write one if you also modify the type signature for myLookup itself to scope the type variables.
myLookup :: forall a b. Eq a => a -> [(a,b)] -> Maybe b
myLookup k = go
  where go :: [(a,b)] -> Maybe b
        go ((k',v):rest) | k == k' = Just v
                         | otherwise = go rest
        go [] = Nothing
It's considered best practice to put type signatures on all top-level bindings and use them sparingly elsewhere. The -Wall compiler flag turns on the -Wmissing-signatures warning which warns about any missing top-level signatures.
The main motivation, I think, is that top-level bindings are the ones that are most likely to be used in multiple places throughout the code at some distance from where they are defined, and the type signature usually provides concise documentation for what a function does and how it's intended to be used. Consider the following type signatures from a Sudoku solver I wrote many years ago. Is there much doubt what these functions do?
possibleSymbols :: Index -> Board -> [Symbol]
possibleBoards :: Index -> Board -> [Board]
setSymbol :: Index -> Board -> Symbol -> Board
While the type signatures auto-generated by the compiler also serve as decent documentation and can be inspected in GHCi, it's convenient to have the type signatures in the source code, as a form of compiler-checked comment documenting the binding's purpose.
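For example, inspecting the hypo binding above in GHCi might look roughly like this (the exact type-variable name and formatting can vary between GHC versions):
ghci> :type hypo
hypo :: Floating a => a -> a -> a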
Any Haskell programmer who's spent a moment trying to use an unfamiliar library, read someone else's code, or read their own past code knows how helpful top-level signatures are as documentation. (Admittedly, a frequently levelled criticism of Haskell is that sometimes the type signatures are the only documentation for a library.)
A secondary motivation is that in developing and refactoring code, type signatures make it easier to "control" the types and localize errors. Without any signatures, the compiler can infer some really crazy types for code, and the error messages that get generated can be baffling, often identifying parts of the code that have nothing to do with the underlying error.
For example, consider this program:
data Tree a = Leaf a | Node (Tree a) (Tree a)
leaves (Leaf x) = x
leaves (Node l r) = leaves l ++ leaves r
hasLeaf x t = elem x (leaves t)
main = do
  -- some tests
  print $ hasLeaf 1 (Leaf 1)
  print $ hasLeaf 1 (Node (Leaf 2) (Leaf 3))
The functions leaves and hasLeaf compile fine, but main barfs out the following cascade of errors (abbreviated for this posting):
Leaves.hs:12:11-28: error:
    • Ambiguous type variable ‘a0’ arising from a use of ‘hasLeaf’
      prevents the constraint ‘(Eq a0)’ from being solved.
      Probable fix: use a type annotation to specify what ‘a0’ should be.
Leaves.hs:12:19: error:
    • Ambiguous type variable ‘a0’ arising from the literal ‘1’
      prevents the constraint ‘(Num a0)’ from being solved.
      Probable fix: use a type annotation to specify what ‘a0’ should be.
Leaves.hs:12:27: error:
    • No instance for (Num [a0]) arising from the literal ‘1’
Leaves.hs:13:11-44: error:
    • Ambiguous type variable ‘a1’ arising from a use of ‘hasLeaf’
      prevents the constraint ‘(Eq a1)’ from being solved.
      Probable fix: use a type annotation to specify what ‘a1’ should be.
Leaves.hs:13:19: error:
    • Ambiguous type variable ‘a1’ arising from the literal ‘1’
      prevents the constraint ‘(Num a1)’ from being solved.
      Probable fix: use a type annotation to specify what ‘a1’ should be.
Leaves.hs:13:33: error:
    • No instance for (Num [a1]) arising from the literal ‘2’
With programmer-supplied top-level type signatures:
leaves :: Tree a -> [a]
leaves (Leaf x) = x
leaves (Node l r) = leaves l ++ leaves r
hasLeaf :: (Eq a) => a -> Tree a -> Bool
hasLeaf x t = elem x (leaves t)
a single error is immediately localized to the offending line:
leaves (Leaf x) = x
                  ^
Leaves.hs:4:19: error:
    • Occurs check: cannot construct the infinite type: a ~ [a]
Beginners might not understand the "occurs check" but are at least looking at the right place to make the simple fix:
leaves (Leaf x) = [x]
So, why not add type signatures everywhere, not just at top-level? Well, if you literally tried to add type signatures everywhere they were syntactically valid, you'd be writing code like:
{-# LANGUAGE ScopedTypeVariables #-}
hypo :: forall t. (Floating t) => t -> t -> t
hypo (a :: t) (b :: t) = sqrt (((a :: t) * (a :: t) :: t) + ((b :: t) * (b :: t) :: t) :: t) :: t
so you want to draw the line somewhere. The main argument against adding them for all bindings in let and where clauses is that those bindings are often short bindings easily understood at a glance, and they're localized to the code that you're trying to understand "all at once" anyway. The signatures are also potentially less useful as documentation because bindings in these clauses are more likely to refer to and use other nearby bindings of arguments or intermediate results, so they aren't "self-contained" like a top-level binding. The signature only documents a small portion of what the binding is doing. For example, in:
qsort :: (Ord a) => [a] -> [a]
qsort (x:xs) = qsort l ++ [x] ++ qsort r
  where -- l, r :: [a]
    l = filter (<=x) xs
    r = filter (>x) xs
qsort [] = []
having type signatures l, r :: [a] in the where clause wouldn't add very much. There's also the additional complication that you'd need the ScopedTypeVariables extension to write it, as above, so that's maybe another reason to omit it.
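For completeness, here is a sketch of what that would look like with the extension enabled; the explicit forall in qsort's signature is what brings a into scope for the where clause:
{-# LANGUAGE ScopedTypeVariables #-}

qsort :: forall a. Ord a => [a] -> [a]
qsort (x:xs) = qsort l ++ [x] ++ qsort r
  where
    l, r :: [a]
    l = filter (<=x) xs
    r = filter (>x) xs
qsort [] = []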
As I say, I think any Haskell beginner should be encouraged to adopt a similar convention of writing top-level type signatures, ideally writing the top-level signature before starting to write the accompanying bindings. It's one of the easiest ways to leverage the type system to guide the design process and write good Haskell code.

Introductory F# (Fibonacci and function expressions)

I've started a course on introduction to F#, and I've been having some trouble with two assignments. The first one had me creating two functions, where the first function takes an input and adds four to it, and the second one calculates sqrt(x^2+y^2). Then I should write function expressions for them both, but for some reason it gives me the error "Unexpected symbol '|' in implementation file".
let g = fun n -> n + 4;;
let h = fun (x,y) -> System.Math.Sqrt((x*x)+(y*y));;
let f = fun (x,n) -> float
|(n,0) -> g(n)
|(x,n) -> h(x,n);;
The second assignment asks me to create a function which finds the sequence of Fibonacci numbers. I've written the following code, but it seems to forget about the 0 in the beginning, since the output is always n+1 and not n.
let rec fib = function
|0 -> 0
|1 -> 1
|n -> fib(n-1) + fib(n-2)
;;
Keep in mind that this is the first week, so I should be able to create these with those methods.
Your first snippet mostly suffers from two issues:
In F#, there is a difference between float and int. You write integer values as 4 or 0 and you write float values as 4.0 or 0.0. F# does not automatically convert integers to floats, so you need to be consistent.
Your syntax in the f function is a bit odd - I'm not sure what float is supposed to mean there and the fun and function constructs behave differently.
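To illustrate the first point, here is a tiny sketch (the names are made up) showing how mixing the two types fails and how to fix it:
// let bad = 1 + 2.0          // would not compile: int and float cannot be mixed
let ok = 1.0 + 2.0            // fine: both operands are floats
let alsoOk = float 1 + 2.0    // or convert the int explicitly with the float function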
So, starting with your original code:
let g = fun n -> n + 4;;
This works, but I would not write it as an explicit function using fun - you can use let to define functions too and it is simpler. Also, you only need ;; in F# Interactive, but if you're using any decent editor with a command for sending code to F# Interactive (via Alt+Enter), you do not need it.
However, in your f function, you want to return float so you need to modify g to return float too. This means replacing 4 with 4.0:
let g n = n + 4.0
The h function is good, but you can again write it using let:
let h (x,y) = System.Math.Sqrt((x*x)+(y*y));;
In your f function, you can either use function to write a function using pattern matching, or you can use more verbose syntax using match (function is just a shorthand for writing a function and then pattern matching on the input):
let f = function
    | (n,0.0) -> g(n)
    | (x,n) -> h(x,n)

let f (x, y) =
    match (x, y) with
    | (n,0.0) -> g(n)
    | (x,n) -> h(x,n)
Also note that the indentation matters - you need spaces before |.
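With g and h defined as above, a quick sanity check in F# Interactive might look like this (a sketch; the exact printed output can differ):
f (3.0, 0.0)   // first case matches:  g 3.0 = 7.0
f (3.0, 4.0)   // second case matches: h (3.0, 4.0) = 5.0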
I'm going to address your first block of code, and leave the Fibonacci function for later. First I'll repost your code, then I'll talk about it.
let g = fun n -> n + 4;;
let h = fun (x,y) -> System.Math.Sqrt((x*x)+(y*y));;
let f = fun (x,n) -> float
|(n,0) -> g(n)
|(x,n) -> h(x,n);;
First comment: If you're defining a function and assigning it immediately to a name, like in all these examples, you don't need the fun keyword. The usual way to define functions is to write them as let (name) (parameters) = (function body). So your code above would become:
let g n = n + 4;;
let h (x,y) = System.Math.Sqrt((x*x)+(y*y));;
let f (x,n) = float
|(n,0) -> g(n)
|(x,n) -> h(x,n);;
I haven't made any other changes, so your f function still has an error in it. Let's address that error next.
I think the mistake you're making here is to think that fun and function are interchangeable. They're not. fun is standard function definition, but function is something else. It's a very common pattern in F# to write functions like the following:
let someFunc parameter =
    match parameter with
    | "case 1" -> printfn "Do something"
    | "case 2" -> printfn "Do something else"
    | _ -> printfn "Default behavior"
The function keyword is shorthand for one parameter plus a match expression. In other words, this:
let someFunc = function
| "case 1" -> printfn "Do something"
| "case 2" -> printfn "Do something else"
| _ -> printfn "Default behavior"
is exactly the same code as this:
let someFunc parameter =
    match parameter with
    | "case 1" -> printfn "Do something"
    | "case 2" -> printfn "Do something else"
    | _ -> printfn "Default behavior"
with just one difference. In the version with the function keyword, you don't get to pick the name of the parameter. It gets automatically created by the F# compiler, and since you can't know in advance what the name of the parameter will be, you can't refer to it in your code. (Well, there are ways, but I don't want to make you learn to run before you have learned to walk, so to speak).

And one more thing: while you're still learning F#, I strongly recommend that you do NOT use the function keyword. It's really useful once you know what you're doing, but in your early learning stages you should use the more explicit match (parameter) with expressions. That way you'll get used to seeing what it's doing. Once you've been doing F# for a few months, then you can start replacing those let f param = match param with (...) expressions with the shorter let f = function (...). But until match param with (...) has really sunk in and you've understood it, you should continue to type it out explicitly.
So your f function should have looked like:
let f (x,n) =
    match (x,n) with
    | (n,0) -> g(n)
    | (x,n) -> h(x,n);;
I see that while I was typing this, Tomas Petricek posted a response, and it addresses the incorrect usage of float, so I won't duplicate his explanation of why you're going to get an error on the word float in your f function. And he also explained about ;;, so I won't duplicate that either. I'll just say that when he mentions "any decent editor with command for sending code to F# interactive (via Alt+Enter)", there are a lot of editor choices -- but as a beginner, you might just want someone to recommend one to you, so I'll recommend one.

First off, though: if you're on Windows, you might be using Visual Studio already, in which case you should stick to Visual Studio since you know it. It's a good editor for F#. But if you don't use Visual Studio yet, I don't recommend downloading it just to play around with F#. It's a beast of a program, designed for professional software developers to do all sorts of things they need to do in their jobs, and so it can feel a bit overwhelming if you're just getting started. So I would actually recommend something more lightweight: the editor called Visual Studio Code. It's cross-platform, and will run perfectly well on Linux, OS X or on Windows.

Once you've downloaded and installed VS Code, you'll then want to install the Ionide extension. Ionide is a plugin for VS Code (and also for Atom, though the Atom version of Ionide is updated less often since all the Ionide developers use VS Code now) that makes F# editing a real pleasure. There are actually three extensions you'll find: Ionide-fsharp, Ionide-FAKE, and Ionide-Paket. Download and install all three: FAKE and Paket are two tools for F# programming that you might not need yet, but once you do need them, you'll already have them installed.
Okay, that's enough to get you started, I think.

Shortening function definitions using multiline patterns in Haskell

Is it better to do:
charToAction 'q' = Just $ WalkRight False
charToAction 'd' = Just $ WalkRight True
charToAction 'z' = Just Jump
charToAction _ = Nothing
or
charToAction x = case x of
    'q' -> Just $ WalkRight False
    'd' -> Just $ WalkRight True
    'z' -> Just Jump
    _ -> Nothing
?
There's absolutely no functional difference whatsoever. It's a matter of personal preference.
There is no performance difference, because the first definition desugars to the second. You were right to prefer a case + pattern matching solution to one with guards and an equality test; some excellent general remarks are here: http://existentialtype.wordpress.com/2011/03/15/boolean-blindness/ (his examples are in ML but translate immediately to Haskell).
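For comparison, the guards-and-equality version that the linked post argues against would look something like this sketch (the Action type is assumed here, since the question doesn't show it):
-- Assumed data type, not shown in the question:
data Action = WalkRight Bool | Jump

charToAction :: Char -> Maybe Action
charToAction c
  | c == 'q'  = Just $ WalkRight False
  | c == 'd'  = Just $ WalkRight True
  | c == 'z'  = Just Jump
  | otherwise = Nothing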
Note that otherwise must not be used as a case alternative: it isn't the otherwise of guards, just an ordinary variable pattern that matches anything, so you could as well have written fmap -> Nothing and the same thing would have happened. The final alternative of a case expression should simply be _ -> Nothing, as in your second definition.
As others mentioned, there is no difference in the generated code, as the former approach desugars to the latter. However, I can point out some other considerations that may help you choose one or the other:
The former approach looks prettier in some circumstances (although I personally think that once you have lots of cases it is distracting)
The former approach makes it slightly more difficult to refactor the name; with the case expression, the name of the function appears only once.
Case expressions can be used anywhere in your code and can be used anonymously, whereas the former approach requires a let or where block to define a pattern-matching function within a function (see the sketch below).
However, there are no cases where you can't translate one into the other. It's entirely a matter of coding style.
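To illustrate the third point, a case expression can be dropped straight into a larger expression without naming a helper function. A small sketch, reusing the hypothetical Action type from above:
keyDescription :: Char -> String
keyDescription c =
  "key " ++ [c] ++ " is " ++
    (case charToAction c of
       Just _  -> "bound"
       Nothing -> "unbound")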

Emulating namespaces in Fortran 90

One of the most troublesome issues with Fortran 90 is the lack of namespacing. In the previous question "How do you use Fortran 90 module data" from Pete, the main issue discussed was that USE behaves like a "from module import *" in Python: everything that is declared public in the module is imported as-is into the scope of the importing module. No prefixing. This makes it very, very hard to understand, while reading some code, where a given identifier comes from, and whether a given module is still used or not.
A possible solution, discussed in the question I linked above, is to use the ONLY keyword to both limit the imported identifiers and document where they come from, although this is very, very tedious when the module is large. Keeping modules small and always using USE with ONLY is a potentially good strategy to work around the lack of namespacing and qualifying prefixes in Fortran 9X.
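For concreteness, the USE, ONLY style I mean looks like this (the module and entity names here are only illustrative):
module circle_area_mod
  ! Import only what is needed, and give it a qualified-looking local name.
  use constants_mod, only: constants_pi => pi
  implicit none
contains
  function circle_area(r) result(a)
    real, intent(in) :: r
    real :: a
    a = constants_pi * r * r
  end function circle_area
end module circle_area_mod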
Are there other (not necessarily better) workaround strategies? Does the Fortran 2k3 standard say anything regarding namespacing support?
For me this is the most irritating Fortran feature related to modules. The only solution is to add a common prefix to procedures, variables, constants, etc. to avoid name collisions.
One can prefix all entities (prefixing only the public entities seems more appropriate) right inside the module:
module constants
  implicit none
  real, parameter :: constants_pi = 3.14
  real, parameter :: constants_e = 2.71828183
end module constants
The drawback is increased verbosity inside the module. As an alternative, one can use a namespace-prefix wrapper module, as suggested here, for example.
module constants_internal
  implicit none
  real, parameter :: pi = 3.14
  real, parameter :: e = 2.71828183
end module constants_internal

module constants
  use constants_internal, only: &
    constants_pi => pi, &
    constants_e => e
end module constants
The last is a small modification of what you, Stefano, suggested.
Even if we accept the increased verbosity, the fact that Fortran is not a case-sensitive language forces us to use the same separator (_) in entity names, and it becomes really difficult to distinguish the module name (as a prefix) from the entity name unless we follow a strict naming discipline, for example, module names that are one word only.
Having several years of Fortran-only programming experience (I got into Python only a year ago), I was not aware of such a concept as namespaces for a while. So I guess I learned to just keep track of everything imported and, as High Performance Mark said, to use ONLY as much as you have time for (tedious).
Another way I can think of to emulate a namespace would be to declare everything within a module as components of a derived type. Fortran won't let you give the module the same name as the namespace variable, but prefixing the module name with module_ could be intuitive enough:
MODULE module_constants
  IMPLICIT NONE
  TYPE constants_namespace
    REAL :: pi=3.14159
    REAL :: e=2.71828
  ENDTYPE
  TYPE(constants_namespace) :: constants
ENDMODULE module_constants

PROGRAM namespaces
  USE module_constants
  IMPLICIT NONE
  WRITE(*,*)constants%pi
  WRITE(*,*)constants%e
ENDPROGRAM namespaces
Fortran 2003 has the new ASSOCIATE construct, and don't forget the possibility of renaming USE-associated entities. But I don't think either of these comes much closer to providing a good emulation of namespaces than Fortran 90 already does; they are just (slightly) better workarounds.
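A brief sketch of both, using the constants_internal module from an earlier answer as the imported module:
program f2003_workarounds
  ! Renaming on USE gives a prefixed local name for a module entity.
  use constants_internal, only: constants_pi => pi
  implicit none

  ! ASSOCIATE (Fortran 2003) introduces a short-lived alias within a block.
  associate (p => constants_pi)
    print *, p
  end associate
end program f2003_workarounds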
Like some of the respondents to the question you link to, I tend to think that modules with very many identifiers should probably be split into smaller modules (or, wait for Fortran 2008 and use submodules) and these days I almost always specify an ONLY clause (with renames) for USE statements.
I can't say that I miss namespaces much, but then I've never had them really.
No one else submitted this suggestion as an answer (though someone did in the comments of one answer), so I'm going to submit it in the hope that it may help someone else.
You can emulate namespaces in the following way, and I would hope there is no noticeable performance hit for doing so (if there were, all object-oriented Fortran programming would suffer). I use this pattern in my production code. The only real downside to me is that a derived type cannot contain parameter (constant) components, but I offer an option for that as well below.
Module Math_M
  IMPLICIT NONE
  PRIVATE
  public :: Math

  Type Math_T
    real :: pi=3.14159
  contains
    procedure, nopass :: e => math_e
    procedure :: calcAreaOfCircle => math_calcAreaOfCircle
  End Type

  Type(Math_T) :: Math
  real, parameter :: m_e = 2.71828

contains

  function math_e() result(e)
    real :: e
    e = m_e
  end function math_e

  function math_calcAreaOfCircle(this, r) result(a)
    class(Math_T), intent(in) :: this
    real, intent(in) :: r
    real :: a
    a = this%pi * r**2.0
  end function math_calcAreaOfCircle

End Module Math_M
And the usage
Program Main
  use Math_M
  IMPLICIT NONE
  print *, Math%pi
  print *, Math%e()
  print *, Math%calcAreaOfCircle(2.0)
End Program Main
Personally I prefer using $ over the _ for module variables, but not all compilers like that without compiler flags. Hopefully this helps someone in the future.

How to define a function in the eshell (Erlang shell)?

Is there any way to define an Erlang function from within the Erlang shell instead of from an erl file (aka a module)?
Yes, but it is painful. Below is a "lambda function declaration" (a fun, in Erlang terms).
1> F=fun(X) -> X+2 end.
#Fun<erl_eval.6.13229925>
Have a look at this post. You can even enter a module's worth of declarations if you ever need to. In other words, yes, you can declare functions.
One answer is that the shell only evaluates expressions, and function definitions are not expressions; they are forms. In an erl file you define forms, not expressions.
All functions exist within modules, and apart from function definitions a module consists of attributes, the most important being the module's name and which functions are exported from it. Only exported functions can be called from other modules. This means that a module must be defined before you can define the functions.
Modules are the unit of compilation in erlang. They are also the basic unit for code handling, i.e. it is whole modules which are loaded into, updated, or deleted from the system. In this respect defining functions separately one-by-one does not fit into the scheme of things.
Also, from a purely practical point of view, compiling a module is so fast that there is very little gain in being able to define functions in the shell.
This depends on what you actually need to do.
There are functions that one could consider as 'throwaways', that is, are defined once to perform a test with, and then you move on. In such cases, the fun syntax is used. Although a little cumbersome, this can be used to express things quickly and effectively. For instance:
1> Sum = fun(X, Y) -> X + Y end.
#Fun<erl_eval.12.128620087>
2> Sum(1, 2).
3
defines an anonymous fun that is bound to the variable (or label) Sum. Meanwhile, the following defines a named fun, called F, that is used to create a new process whose PID (<0.80.0>) is bound to Pid. Note that F is called in a tail recursive fashion in the second clause of receive, enabling the process to loop until the message stop is sent to it.
3> Pid = spawn(fun F() -> receive stop -> io:format("Stopped~n"); Msg -> io:format("Got: ~p~n", [Msg]), F() end end).
<0.80.0>
4> Pid ! hello.
hello
Got: hello
5> Pid ! stop.
Stopped
stop
6>
However, you might need to define certain utility functions that you intend to use over and over again in the Erlang shell. In this case, I would suggest using the user_default.erl file together with .erlang to automatically load these custom utility functions into the Erlang shell as soon as it is launched. For instance, you could write a function that compiles all the Erlang files living in the current directory.
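A minimal sketch of such a user_default module (the helper name lm/0 and its body are only an illustration; your .erlang file must make the compiled module visible to the shell):
%% user_default.erl
%% Functions exported from this module can be called from the Erlang shell
%% without a module prefix, once the compiled beam file is on the code path.
-module(user_default).
-export([lm/0]).

%% Compile every .erl file found in the current directory.
lm() ->
    [compile:file(F) || F <- filelib:wildcard("*.erl")].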
I have written a small guide on how to do this on this GitHub link. You might find it helpful and instructive.
If you want to define a function in the shell to use as a macro (because it encapsulates some functionality that you need frequently), have a look at
https://erldocs.com/current/stdlib/shell_default.html