This question already has an answer here:
Do Go built-ins use generics?
(1 answer)
Closed 11 months ago.
I am just starting to study Go and some things caught my attention.
Functions like:
delete(map, "Answer") // for maps
append(slice, 0) // for slices
len(slice), cap(slice) // again for slices
and so on. As someone coming from C-like languages, I am wondering:
1) Can these functions be called through the variable itself (as in map.delete("Answer"))?
2) Is this a common practice (defining a generic function and letting it figure out the type and what it should do), or is it just for the built-in types? For example, if I defined my own type, like MyCoolLinkedList, should I define the len and append functions inside the type and have them called like
list := new(MyCoolLinkedList)
list.len()
or should I define a function that receives the list, like:
len(list)
1 - You can't call builtins as if they were methods "attached" to types or values, e.g.
m := map[int]string{1: "one"}
m.delete(1)
is a compile-time error, which you can verify easily.
2 - Go doesn't have generics (this answer predates Go 1.18, which later introduced them). To ease the "pain", it provides several builtin functions which can accept values of different types. They are builtin because, due to the lack of generics, they need the help of the compiler to accept values of different types. Some also accept a type instead of an expression as the first argument (e.g. make([]int, 1)), which is also something an ordinary function can't do. Builtin functions do not have standard Go types; they can only appear in call expressions.
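A quick sketch of what this means in practice: the same builtin works across different element types only because the compiler special-cases it.

```go
package main

import "fmt"

func main() {
	// make takes a type as its first argument — something an
	// ordinary Go function cannot do.
	s := make([]int, 0, 4)
	s = append(s, 1, 2, 3)      // append works for slices of any element type
	fmt.Println(len(s), cap(s)) // 3 4

	m := map[string]int{"Answer": 42}
	delete(m, "Answer") // delete works for any map type
	fmt.Println(len(m)) // 0
}
```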
You can't create such functions yourself that accept values of different types. Having said that, when you create your own type and you write a "function" for it, it is advisable to declare it as a method instead of a helper function; if the "function" operates on your concrete type, you couldn't use it for other types anyway.
So it makes sense to declare it as a method, and then you can call it more "elegantly" like
value.Method()
Being a method also "counts" toward the method set of the type, should you need to implement an interface; e.g. in the case of your MyCoolLinkedList it would make sense to implement sort.Interface to be able to sort the list, which requires a Len() int method, among others.
Choosing a method over a helper function also has the advantage that your method will be available via reflection. You can list and call methods of a type using the reflect package, but you can't do the same with "just" functions.
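To make this concrete, here is a minimal sketch of a Len method on the MyCoolLinkedList from the question (the node layout is just an assumption for illustration, not a full linked-list implementation):

```go
package main

import "fmt"

// node is an assumed internal layout for the sketch.
type node struct {
	value int
	next  *node
}

type MyCoolLinkedList struct {
	head *node
}

// Len is a method, so it belongs to the method set of *MyCoolLinkedList
// and counts toward satisfying interfaces such as sort.Interface,
// which requires a Len() int method.
func (l *MyCoolLinkedList) Len() int {
	n := 0
	for cur := l.head; cur != nil; cur = cur.next {
		n++
	}
	return n
}

// Append adds an element (it prepends internally, for brevity).
func (l *MyCoolLinkedList) Append(v int) {
	l.head = &node{value: v, next: l.head}
}

func main() {
	list := &MyCoolLinkedList{}
	list.Append(1)
	list.Append(2)
	fmt.Println(list.Len()) // 2
}
```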
Related
I would like to create a type (say, my_vector) that behaves exactly like a Vector, so that I can pass it to all the functions that take Vectors. In addition to that, I want to create special functions exclusively for my_vector, that do not work for generic Vectors.
So, for instance, if A is a Matrix and typeof(b) is my_vector, I want to be able to solve a linear system using A\b, to recover its length with length(b), or to access its elements with b[index].
At the same time, I want to define a specific function that can take objects of type my_vector as parameters, but that does not take objects of type Vector.
How can I do this?
This is not possible in general.
What you can do is define your my_vector type as subtype of AbstractVector. Then all functions accepting AbstractVector will also accept your type. What you need to implement for AbstractVector is listed in https://docs.julialang.org/en/v1/manual/interfaces/#man-interface-array.
Then, if you want your my_vector type to also have functionality that Vector has but AbstractVector does not, you need to implement methods yourself for the functions that are specifically defined to accept only Vector. There are 49 such methods in a standard Julia installation, and you can find their list by writing methodswith(Vector). Most likely you will not need them all, but only a small selection, e.g. push! or pop!.
Having said that, this will not ensure that everything that accepts a Vector will accept your my_vector: if some package accepts only Vector, you will have to perform the same process for the functions defined in that package.
In summary: check whether AbstractVector is enough for you; most likely it is, and then the task is simple. If not, doing what you want is difficult.
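A minimal sketch of the AbstractVector approach (using the name MyVector rather than my_vector, per Julia naming conventions; the wrapped field is illustrative):

```julia
# MyVector wraps a Vector and subtypes AbstractVector, implementing
# the required array interface: size, getindex, setindex!.
struct MyVector{T} <: AbstractVector{T}
    data::Vector{T}
end

Base.size(v::MyVector) = size(v.data)
Base.getindex(v::MyVector, i::Int) = v.data[i]
Base.setindex!(v::MyVector, x, i::Int) = (v.data[i] = x)

# A function that accepts only MyVector, not a plain Vector:
only_mine(v::MyVector) = length(v)

b = MyVector([1.0, 2.0, 3.0])
length(b)   # works via the AbstractVector interface
b[2]        # so does indexing, and A \ b for a Matrix A
```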
I am writing a language where functions are not typed, which means I need to infer the return type of a function call in order to do type checking. However, when somebody writes a recursive function, the type checker goes into an infinite recursion trying to infer the type of the function call inside the function body.
The type checker does something like this:
1. Infer the types of the function call's actual arguments.
2. Create a mapping of the actual argument types to the formal arguments.
3. Use the mapping to annotate types on the arguments used inside the function body.
4. Infer and return the return type of the function body.
Step 4 then tries to infer the type of the recursive function call inside the function body, which calls the same type checker function again, causing infinite recursion.
An example of a recursive function that gives me this problem:
function factorial(n) = n<1 ? 1 : n*factorial(n-1); // Function definition.
...
assert 24 == factorial(4); // Function call expression usage example.
How can I solve this problem without going into an infinite recursion loop? Is there a way to infer the type of the recursive function call without having to go into the body again? Or some clean way to infer the type from context?
I know the easy solution might be to add type annotations to functions, which would make the problem trivial, but before doing that I want to know whether there is a way to solve it without resorting to that.
I'd also like for the solution to work for mutual recursion.
Type inference can vary a lot depending on the language's type system and on what properties you want to have in terms of when annotations are needed. But whatever your language looks like, I think there's one seminal case you really should read about, which is ML. ML's type inference holds a nice sweet spot where it all fits together in a relatively simple paradigm. No type annotations are needed, and any expression has a single most general type (this property is called principality of typing).
ML's type system is the Hindley-Milner type system, which has parametric polymorphism. The type of an expression is either a specific type, or “any”. More precisely, the type constructor of an expression is either a specific type constructor or “any”, and type constructors can have arguments which themselves either have a specific type constructor or “any”. For example, the empty list has the type “list of any”. Two expressions that can have “any” type in isolation may be constrained to have the same type, whatever it is, so “any” is expressed with variables. For example, function list_of_two(x, y) = [x, y] (in a notation like your language) constrains x and y to have the same type, because they're inserted in the same list, but that type can be any type, so the type of this function is “take any two parameters of the same type α, and return a value of type list of α”.
The basic type inference algorithm for Hindley-Milner is algorithm W. At its core, it works by giving each subexpression a type that's a variable: α₁, α₂, α₃, … Programming language constructions then impose constraints on those variables. For example, if a list contains two elements of types α₁ and α₂ and the list itself has the type α₃, this constrains α₁ = α₂ and α₃ = list of α₁. Putting all these constraints together is a unification problem.
The constraints are based on a purely syntactic reading of the program. If there's a recursive call, you don't need to know the type of the function: it just means that there's a constraint that the variable for the return type of the function is the same as the type at its point of use. That's just one more equation to add to the set of constraints.
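A minimal sketch of this idea (in Python, purely for illustration): give the function's return type a fresh variable, and let the recursive call merely add an equation instead of re-entering inference.

```python
class TypeVar:
    """A fresh, unbound type variable."""
    _count = 0
    def __init__(self):
        TypeVar._count += 1
        self.id = TypeVar._count

subst = {}  # substitution: TypeVar -> resolved type

def resolve(t):
    """Follow the substitution chain until we hit a concrete type or an unbound variable."""
    while isinstance(t, TypeVar) and t in subst:
        t = subst[t]
    return t

def unify(a, b):
    """Record the constraint a = b, or fail if the types clash."""
    a, b = resolve(a), resolve(b)
    if a is b:
        return
    if isinstance(a, TypeVar):
        subst[a] = b
    elif isinstance(b, TypeVar):
        subst[b] = a
    elif a != b:
        raise TypeError(f"cannot unify {a} with {b}")

# Inferring factorial(n) = n < 1 ? 1 : n * factorial(n - 1):
ret = TypeVar()        # fresh variable for factorial's return type
rec_call = ret         # the recursive call just reuses the variable: no recursion
unify(rec_call, 'int') # n * factorial(n - 1): multiplication forces int
unify('int', ret)      # the literal-1 branch must match the return type too
print(resolve(ret))    # prints: int
```

The key line is `rec_call = ret`: instead of descending into the body again, the recursive call site only contributes the equation "return type = type at point of use" to the constraint set.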
I left out an important aspect of ML which is that an expression's type can be generalized: an expression can be used with different types at different places. This is what allows polymorphism. For example,
let empty_list = [] in
(empty_list # [3]), (empty_list # ["hello"])
is a valid program where empty_list is used once with the type “list of integers” and once with the type “list of strings”. The type of empty_list is “for any α, list of α”: that's parametric polymorphism. Generalization adds some complexity to the algorithm, but it also removes complexity elsewhere, because that's what allows principality. Without it, let empty_list = [] in … would be ambiguous: empty_list would have to have some type, but there's no way to know what type without analyzing …, and then when you do analyze the … above you'd need to make a choice between integer and string.
Depending on your language's type system, ML and algorithm W may be directly reusable or may just provide some vague inspiration. But the principle of using variables during the inference, and progressively constraining these variables, is very general.
Is it possible to remove the last element from a tuple in typesafe manner for arbitrary arity?
I want something like this:
[A,B,C] abc = [a,b,c];
[A,B] ab = removeLast(abc);
No, unfortunately it's not possible, the reason being that a tuple type is represented within the type system as a linked list of instantiations of Tuple, but the type system can't express loops or recursion within the signature of a function. (And having loops/recursion would almost certainly make the type system undecidable.)
One way we could, in principle, solve this in future would be to have a built-in primitive type function that evaluates the last element type of a tuple type.
By "primitive" type function, I mean a type function that can't be written in the language itself, but is instead provided as a built-in by the compiler.
Ceylon doesn't currently have any of these sorts of primitive type functions, but there are a couple of other similar problems which could be solved in this manner.
I am seeking a way to acquire, from Lua, all the functions which have been defined in the current scope. Is there a quick way to implement this directly in Lua, not in C? Ideally, factory functions would be included as well.
You can use a hybrid approach:
(1) To get all local variables, you can use debug.getlocal to obtain the names and values of the variables (see, for example, the logic in this answer). All function values satisfy type(value) == 'function', so you can easily filter on that condition. The name of the variable gives you the name you are looking for (keep in mind that several names may refer to the same function).
(2) To get all global variables, you can iterate over the fields of the _ENV or _G tables and apply the same filtering logic as in (1).
Note that neither of these methods gives you access to functions stored in table fields or available indirectly through metamethods.
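A minimal sketch combining both steps (the function name is illustrative):

```lua
local function functions_in_scope()
  local found = {}
  -- (1) locals of the calling function, via the debug library
  local i = 1
  while true do
    local name, value = debug.getlocal(2, i)  -- level 2 = our caller
    if name == nil then break end
    if type(value) == "function" then found[name] = value end
    i = i + 1
  end
  -- (2) globals, by iterating _G (use _ENV in Lua 5.2+)
  for name, value in pairs(_G) do
    if type(value) == "function" then found[name] = value end
  end
  return found
end
```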
I'm starting to learn Lisp with a Java background. In SICP's exercises there are many tasks where students should create abstract functions with many parameters, like
(define (filtered-accumulate combiner null-value term a next b filter)...)
in exercise 1.33. In Java (a language with a safe, static typing discipline), a method with more than 4 arguments usually smells, but in Lisp/Scheme it doesn't, does it? I'm wondering how many arguments you use in your functions. If you use them in production, do you make as many layers?
SICP uses a subset of Scheme
SICP is a book used in introductory computer science courses. While it explains some advanced concepts, it uses a very tiny language: a subset of the Scheme language, and a sub-subset of any real-world Scheme or Lisp a typical implementation provides. Students using SICP are supposed to start with a simple and easy-to-learn language. From there they learn to implement more complex language additions.
Only positional parameters are being used in plain educational Scheme
There are, for example, no macros developed in SICP. Moreover, standard Scheme has only positional parameters for functions.
Lisp and Scheme offer also more expressive argument lists
In 'real' Lisp or Scheme one can use one or more of the following:
objects or records/structures (poor man's closures) which group things. A passed object can contain several data items which would otherwise need to be passed individually.
defaults for optional variables. Thus we need only to pass those that we want to have a certain non-default value
optional and named arguments. This allows flexible argument lists which are much more descriptive.
computed arguments. The value or the default value of arguments can be computed based on other arguments
The above leads to function interfaces that are more complicated to write, but often easier to use.
In Lisp it is good style to have descriptive names for arguments and also to provide online documentation for the interface. The development environment will display information about the interface of a function, so this information is typically only a keystroke away, or is even displayed automatically.
It's also good style for any non-trivial interface which is supposed to be used interactively by the user/developer to check its arguments at runtime.
Example for a complex, but readable argument list
When there are more arguments, Common Lisp provides named arguments, which can appear in any order after the normal arguments. Named arguments also provide defaults and can be omitted:
(defun order-product (product
&key
buyer
seller
(vat (local-vat seller))
(price (best-price product))
amount
free-delivery-p)
"The function ORDER-PRODUCT ..." ; documentation string
(declare (type ratio vat price) ; type declarations
(type (integer 0) amount)
(type boolean free-delivery-p))
...)
We would use it then:
(order-product 'sicp
:seller 'mit-press
:buyer 'stan-kurilin
:amount 1)
The above uses the seller argument before the buyer argument. It also omits various arguments, some of which have their values computed.
Now we can ask whether such extensive arguments are good or bad. The arguments for them:
the function call gets more descriptive
functions have standard mechanisms to attach documentation
functions can be asked for their parameter lists
type declarations are possible -> thus types don't need to be written as comments
many parameters can have sensible default values and don't need to be mentioned
Several Scheme implementations have adopted similar argument lists.