Is "Lisp-1 vs Lisp-2" relevant in a language with static types? - namespaces

(This is a CS-theory type of question; I hope that's acceptable.)
The "Lisp-1 vs Lisp-2" debate is about whether the namespace of functions should be distinct from the namespace of all other variables, and it's relevant in dynamically typed languages that allow the programmer to pass around functions as values. Lisp-1 languages (such as Scheme) have one namespace, so you can't have both a function named f and also an integer named f (one would shadow the other, just like two integers named f). Lisp-2 languages (such as Common Lisp) have two namespaces, so you can have both f variables, but you have to specify which one you mean with special syntax (#'f is the function and f is the integer).
It seems to me that the main technical problem, the need to disambiguate the function from the integer, is not an issue if the language is also statically typed (unlike most Lisps). For instance, if a sort function's signature explicitly requires a list and a less-than function,
def sort[X](list: List[X], lessThan: (X, X) => Boolean) // Scala syntax; had to pick something
then it wouldn't matter whether functions and everything else share a namespace or not. sort(mylist, myless) would only pass the type check if myless is a function; no special syntax is needed. Some people argue that one namespace is more aesthetically pleasing than two, but I'd like to focus on technical issues.
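For concreteness, here is a minimal sketch of what I mean (hypothetical names; sortWith just stands in for a real implementation):

def sort[X](list: List[X], lessThan: (X, X) => Boolean): List[X] =
  list.sortWith(lessThan)                       // stand-in implementation

val myless: (Int, Int) => Boolean = _ < _       // a "function named myless"
val mylist: List[Int] = List(3, 1, 2)

val sorted = sort(mylist, myless)  // type checks: myless has a function type
// sort(mylist, 42)                // rejected at compile time: Int is not (Int, Int) => Boolean

The type checker, not any special syntax, does the disambiguation.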
Is there anything that two namespaces would make more difficult or more prone to error (or conversely for one namespace), assuming that the language in question is statically typed?
(I'm thinking about this in the context of a domain specific language that I'm working on, and I want to make sure that I don't run into problems down the road. It would be easier to implement with two namespaces (Lisp-2), and since it's statically typed, there's no need for the equivalent of #'f. I asked the question in general because I want to hear general points and perhaps become aware of questions that I don't yet know to ask.)

One very common objection to multiple namespaces is that it complicates the formal semantics to have an arbitrary number of namespaces. One namespace keeps things simple. The next simplest, so I'm told (I don't write these things), is an infinite number of namespaces; that's something I tried to do once and only got halfway through (see here if curious, though I think it's not what you're asking for in this case). It's when you limit it to a finite number of namespaces greater than one that a formal semantics gets messy, or so I'm told. And certainly it makes any kind of compiler or interpreter a bit more complex, too. This objection straddles the aesthetic and the technical: it isn't about technical difficulty per se, since no super-human intelligence is required to do multiple namespaces, just a lot of extra manual labor. Rather, the objection is that a more complex semantics means more code, more special cases, and more chances for error. Personally, I'm not swayed by such arguments; I merely point them out since you ask. I think you'll find they aren't fatal and that it's fine to proceed with what you're doing and see where it leads. I care much more about programmer and end-user experience than implementor difficulty. But I mention this argument because other people I respect seem to think it's a big deal, and I believe it's the correct answer to your question about difficulty and error-proneness.
Note, by the way, that when Gabriel and I wrote Technical Issues of Separation in Function Cells and Value Cells, I needed words to help me avoid saying "Scheme-like" and "Lisp-like" because people had unrelated reasons for preferring Scheme that had nothing to do with namespacing. Using the terms "Lisp1" and "Lisp2" allowed me to keep the paper from becoming a Scheme vs. Lisp debate and to focus readers narrowly on the namespace issue. In the end, ANSI Common Lisp ended up with at least 4 namespaces (functions, values, go tags, block tags), or maybe 5 if you count types (which are sort of like a namespace in some ways but not others), but in any case not 2. So strictly it would be a Lisp4 or Lisp5. But either way it still horrifies those who prefer a single namespace, because any arbitrary finite number bigger than one tends to be unacceptable to them.
My personal recommendation is that you should design the language according to your personal sense of what's right. For some languages, one namespace is fine. For others, it isn't. Just don't do it because someone tells you that either way has to be the right way; it's really legitimately your choice to make.
Some argue that one namespace is conceptually simpler, but I think that depends on your notion of simplicity. Some say smaller is simpler (notationally or implementationally). I claim that what's conceptually simplest is what's most isomorphic to how your brain thinks about things, and your brain may not always be thinking "small" for "simple". As at least one example, every human language in existence has multiple definitions for the same word, almost all resolved by context (your type inferencing being one possible source of such contextual information). That suggests wetware is designed to disambiguate such things, and that your brain is wasting some of its natural capacity if you don't let it care about them. Maybe this contextual inferencing does increase the complexity of your language implementation, but a language is implemented only once and used many times, so there's every reason to optimize the end-user experience, not the implementor experience. It's an investment that pays off later.
Insist on thinking about things the way you want. Worry more about making your language have a good feel, and about whether what you want to do is implementable at all, even if it requires work and extra checking of the code. No language design decision is the right one for every language; languages are ecologies, and what matters is that language elements work well with one another, not that they match other languages. Indeed, if languages were all going to be the same, it would be pointless to have multiple languages.

Related

Does the word "abstraction" have other interpretations in computing?

I am quite new to programming, and what haunts me about it is not really the coding itself (well, at least not up to the present moment!), but certain words/concepts that are really important to understand. My difficulty is with the word "ABSTRACTION". I have already searched dictionaries and watched some videos of people giving very clear explanations of the word. So I know that abstraction is when you take into consideration only the things that are important and leave out everything else (to put it in very simple and direct terms). For instance, if you are going to change a light bulb, you do not need to know the manufacturer of the light bulb or the light socket, and you do not need to know the materials used to manufacture the bulb. The problem is that when you read some texts or listen to people using the word, it does not always seem to fit that meaning, and you start to wonder whether they misused the word (which I think is very unlikely), whether there is another, more obscure meaning that I have not found yet, or whether I am just too dumb to understand it. Below I put excerpts from the articles I was reading, with the part where the word appears bolded and capitalized, so you have context and can see where my problem is. Thank you.
"A paradigm programming provides and determines the view that the programmer has on the structuring and execution of the programme. For example, in object-oriented programming, programmers MAY ABSTRACT A PROGRAMME AS A COLLECTION OF OBJECTS that interact with each other, while in functional programming, programmers ABSTRACT THE PROGRAMME as a sequence of functions executed in a stacked fashion."
"A tuple space has the function of creating a SHARED MEMORY ABSTRACTION over a distributed system, where everyone can read and write to it."
It's easy to understand if you replace abstract/abstraction with one of its synonyms conceptualize/conceptualization. In your first two examples "abstract a programme" means "think of a programme as"... or "conceptualize a programme as"... When we make an abstraction we forget about some details, and think about that thing in other terms.
Side advice from a fellow beginner:
As someone who started learning computer science independently less than a year ago, I can tell you right now there will be lots of tricky terms like this. Try not to get too caught up in them. Often, if you just keep learning, you'll experience firsthand what these terms mean without even realizing it. Bits and pieces will add up. The takeaway: don't let what you don't know slow you down. Sometimes it's OK to keep going and just not know for a while.
These seem to fit the definition you put up earlier. For object-oriented programming, the mindset is to consider "objects" as the essential (important) aspect of a program and to abstract all other considerations away. The same goes for functional programming, where "functions" are the defining aspect and other considerations are abstracted away as secondary.
The tuple space may be a little trickier, but if you consider that variations in memory storage models are abstracted away in favour of a higher-level concept focusing on a collection of values, then you see what the abstraction relates to.
Abstract
adjective
existing in thought or as an idea but not having a physical or concrete existence.
relating to or denoting art that does not attempt to represent external reality, but rather seeks to achieve its effect using shapes, colours, and textures.
verb
consider something theoretically or separately from (something else).
extract or remove (something).
noun
a summary of the contents of a book, article, or speech.
an abstract work of art.
There you have your answer. Ask 100 people what an abstract painting is and you will get at least 100 answers. Why should programmers behave differently?
Let's see what Oracle has to say about abstract classes:
Abstract classes are similar to interfaces. You cannot instantiate them, and they may contain a mix of methods declared with or without an implementation. However, with abstract classes, you can declare fields that are not static and final, and define public, protected, and private concrete methods.
Consider using abstract classes if any of these statements apply to your situation:
You want to share code among several closely related classes.
You expect that classes that extend your abstract class have many common methods or fields, or require access modifiers other than public (such as protected and private).
You want to declare non-static or non-final fields. This enables you to define methods that can access and modify the state of the object to which they belong.
Compare that with the definition of abstract in the above section. I think you get a pretty good idea of abstractness in computer programming.
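For instance, here is a minimal sketch in Scala (the same idea as Java's abstract classes, with made-up names): the shared, concrete parts live in the abstract class, while the abstract member is the detail left for subclasses to fill in.

abstract class Shape {
  protected var label: String = "unnamed"            // non-static, non-final field
  def area: Double                                    // abstract method: no implementation here
  def describe(): String = s"$label: area = $area"    // concrete method shared by all subclasses
}

class Circle(radius: Double) extends Shape {
  label = "circle"
  def area: Double = math.Pi * radius * radius        // the detail the base class abstracted away
}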

What is differentiable programming?

Native support for differentiable programming has been added to Swift for the Swift for TensorFlow project. Julia has something similar with Zygote.
What exactly is differentiable programming?
what does it enable? Wikipedia says
the programs can be differentiated throughout
but what does that mean?
how would one use it (e.g. a simple example)?
and how does it relate to automatic differentiation (the two seem conflated a lot of the time)?
I like to think about this question in terms of user-facing features (differentiable programming) vs implementation details (automatic differentiation).
From a user's perspective:
"Differentiable programming" is APIs for differentiation. An example is a def gradient(f) higher-order function for computing the gradient of f. These APIs may be first-class language features, or implemented in and provided by libraries.
"Automatic differentiation" is an implementation detail for automatically computing derivative functions. There are many techniques (e.g. source code transformation, operator overloading) and multiple modes (e.g. forward-mode, reverse-mode).
Explained in code:
def f(x):
    return x * x * x

df = gradient(f)  # df stands for ∇f; `gradient` is the hypothetical API
print(df(4))      # 48.0

# Using the `gradient` API:
# ▶ differentiable programming.
# How `gradient` works to compute the gradient of `f`:
# ▶ automatic differentiation.
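To make the implementation side concrete, here is a minimal sketch (Scala, with made-up names, not any particular library's API) of how such a gradient API could be backed by forward-mode automatic differentiation using operator overloading and dual numbers:

case class Dual(value: Double, deriv: Double) {
  def +(that: Dual): Dual = Dual(value + that.value, deriv + that.deriv)
  def *(that: Dual): Dual =
    Dual(value * that.value, deriv * that.value + value * that.deriv)  // product rule
}

def gradient(f: Dual => Dual): Double => Double =  // the user-facing API
  x => f(Dual(x, 1.0)).deriv                       // backed by automatic differentiation

val f: Dual => Dual = x => x * x * x
val df = gradient(f)
val slopeAt4 = df(4.0)  // 48.0

Here gradient is the user-facing "differentiable programming" piece, and the Dual arithmetic is the automatic-differentiation machinery hidden behind it.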
I had never heard the term "differentiable programming" before reading your question, but having used the concepts noted in your references, both from the side of writing code to compute a derivative with symbolic differentiation and with automatic differentiation, and having written interpreters and compilers, to me this just means that they have made it easier to calculate the numeric value of the derivative of a function. I don't know if they made it a first-class citizen, but the new way doesn't require the use of a function/method call; it is done with syntax, and the compiler/interpreter hides the translation into calls.
If you look at the Zygote example, it clearly shows the use of prime notation:
julia> f(10), f'(10)
Most seasoned programmers would guess what I just noted even without a research paper explaining it. In other words, it is just that obvious.
Another way to think about it: if you have ever tried to calculate a derivative in a programming language, you know how hard it can be at times, and you may have asked yourself why they (the language designers and implementors) don't just add it to the language. In these cases, they did.
What surprises me is how long it took before derivatives became available via syntax instead of calls, but if you have ever worked with scientific code or coded neural networks at that level, then you will understand why this concept is being touted as something of value.
Also I would not view this as another programming paradigm, but I am sure it will be added to the list.
How does it relate to automatic differentiation (the two seem conflated a lot of the time)?
In both cases that you referenced, they use automatic differentiation to calculate the derivative instead of using symbolic differentiation. I do not view differentiable programming and automatic differentiation as two distinct sets; rather, differentiable programming needs some means of being implemented, and the means they chose was automatic differentiation. They could have chosen symbolic differentiation or some other approach.
It seems you are trying to read more into what differentiable programming is than it really is. It is not a new way of programming, but just a nice feature added for doing derivatives.
Perhaps if they named it differentiable syntax it might have been more clear. The use of the word programming gives it more panache than I think it deserves.
EDIT
After skimming the Swift Differentiable Programming Mega-Proposal and trying to compare it with the Julia example using Zygote, I would have to split the answer into parts that talk about Zygote and then switch gears to talk about Swift. They each took a different path, but the commonality and the bottom line is that the languages know something about differentiation, which makes the job of coding with derivatives easier and hopefully produces fewer errors.
About the Wikipedia quote that
the programs can be differentiated throughout
At first reading it seems like nonsense, or at least lacks enough detail to understand it in context, which I am sure is why you asked.
After many years of digging into what others are trying to communicate, one learns to take a source with a grain of salt unless it has been peer reviewed, and, unless it is absolutely necessary to understand a statement, to just ignore it. In this case, if you ignore that sentence, most of your reference makes sense. However, I take it that you want an answer, so let's try to figure out what it means.
The key word that has me perplexed is throughout. Since you note that the statement came from Wikipedia, and Wikipedia gives three references for it, I searched for the word throughout; it appears in only one of them:
∂P: A Differentiable Programming System to Bridge Machine Learning and Scientific Computing
Thus, since our ∂P system does not require primitives to handle new types, this means that almost all functions and types defined throughout the language are automatically supported by Zygote, and users can easily accelerate specific functions as they deem necessary.
So my take on this is that by going back to the source, e.g. the paper, you can better understand how that percolated up into Wikipedia, but it seems that the meaning was lost along the way.
In this case, if you really want to know the meaning of that statement, you should ask on the Wikipedia talk page, or ask the author of the statement directly.
Also note that the paper referenced is not peer reviewed, so the statements in there may not have any meaning amongst peers at present. As I said, I would just ignore it and get on with writing wonderful code.
You can guess its meaning from the application of differentiability: it is used for optimization, i.e., to calculate a minimum or maximum value.
Many such problems can be solved by finding the appropriate function and then using calculus techniques to find the required maximum or minimum.
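As a small illustration (Scala, made-up function, not tied to any framework): once derivatives are cheap to compute, a minimum can be found numerically by gradient descent:

def g(x: Double): Double = (x - 3) * (x - 3)   // function to minimize
def dg(x: Double): Double = 2 * (x - 3)        // its derivative

def minimize(x0: Double, lr: Double = 0.1, steps: Int = 100): Double =
  (1 to steps).foldLeft(x0)((x, _) => x - lr * dg(x))  // repeated downhill steps

val xMin = minimize(0.0)  // approximately 3.0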

Is there a way to prove a program has no bug?

I was thinking about the fact that we can prove a program has bugs, and we can test it to assess that it is more or less bug-resistant.
But is there a way (even theoretically) to prove that a program has no bugs?
For simple programs, such as a "Hello World", I guess we should be able to do it.
But what about larger programs?
There are nowadays many different formalisms that can be used to prove programs correct (e.g., formalizations in proof assistants, dependently typed programming languages, separation logic, ...). As noted by others, there is no automatic way to prove any given program correct (see the halting problem). However, those mentioned formalisms are often applicable to specific programs. (Such an application can be far from automatic and require a tremendous amount of creativity.)
Another very important point is what we actually mean by proving a program correct or, as you stated, proving that a program has no bugs. Even with formal methods, there is typically no way to say that nothing whatsoever can go wrong with a program. The reason is that formal methods usually show that a program conforms to a specification.
You can think of a specification as a logical formula (that states some property about a program) and of the correctness proof as a formal proof that a program satisfies this formula (i.e., enjoys the corresponding property). Due to this setup, everything outside the specification is not even "considered" by the proof. So to really show that a program has no bugs you would first have to write down a logical formula that states when a program does not have bugs.
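As a tiny illustration (a sketch in Lean 4, with made-up names): the program and its specification are separate artifacts, and the proof covers only the property that the specification states, nothing else.

-- A tiny "program".
def double (n : Nat) : Nat := n + n

-- A specification: the output is never smaller than the input.
-- The proof establishes exactly this property and says nothing about any other.
theorem double_ge (n : Nat) : n ≤ double n :=
  Nat.le_add_right n n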
So it would perhaps be more honest to say that formal methods are often able to prove (without doubt) that a program does not have certain kinds of bugs (depending on the specification used).

Why are "pure" functions called "pure"? [closed]

A pure function is one that has no side effects (it cannot do any kind of I/O and it cannot modify the state of anything) and that is referentially transparent (when called multiple times with the same inputs, it always gives the same outputs).
Why is the word "pure" used to describe functions with those properties? Who first used the word "pure" in that way, and when? Are there other words that mean roughly the same thing?
To answer your first question, mathematical functions have often been described as "pure" in terms of some specified variables, e.g.:
the first term is a pure function of x and the second term is a pure function of y
Because of this, I don't think you'll find a true "first" occurrence.
For programming languages, a little searching shows that Ada 95 (pragma Pure), High Performance Fortran (1993) (PURE) and VHDL-93 (pure) all contain formal notions of 'pure functions'.
Haskell (1990) is fairly obvious, but purity isn't explicit. GCC's C has various function attributes for various differing levels of 'pure'.
A couple of books: Rationale for the C programming language (1990) uses the term, as does Programming Languages and their Definitions (1984). However, both apparently only use it once! Programming the IBM Personal Computer, Pascal (also 1984) uses the term, but it isn't clear from Google's restricted view whether or not the Pascal compiler had support for it. (I suspect not.)
An interesting note is that Green, the predecessor to Ada, actually had a fairly strict 'function' definition - even memory allocation was disallowed. However, this was dropped before it became Ada, where functions can have side-effects (I/O or global variables), but can't modify their arguments.
C28-6571-3 (the first PL/I reference manual, written before the compiler) shows that PL/I had support for pure functions, in the form of the REDUCIBLE (= pure) attribute, as far back as 1966 - when the compiler was first released. (This also answers your third question.)
This last document specifically notes that it includes REDUCIBLE as a new change since document C28-6571-2. So REDUCIBLE, which is possibly the first incarnation of formal pure functions in programming languages, appeared somewhere between January and July 1966.
Update: The earliest instance of "pure function" on Google Groups in this sense is from 1988, which easily postdates the book references.
A couple of myths:
The term "pure functional" doesn't come from mathematics, where all functions are by nature "pure" and, so, there was never any need to call anything a "pure function".
The term doesn't come from imperative programming. The early imperative programming languages, Fortran, Algol 60, Pascal etc., always had two kinds of abstractions: "functions" that produced results based on their inputs and "procedures" which took some inputs and did an action. It was considered good programming practice for "functions" not to have side effects. There was no need for them to have side effects because one could always use procedures instead.
So, where else could the term "pure functional" have come from? The answer is - sort of- obvious. It came from impure functional programming languages, the foremost among them being Lisp. Lisp was designed sometime between 1958 and 1960 (between the first and second reports of Algol 60, whose design McCarthy was involved in, but didn't feel satisfied with). Lisp's design was based fundamentally on functional programming. However, it also allowed side-effects as a pragmatic choice. It did not have a notion of a command or a procedure. So, in Lisp, one mostly wrote "pure functions", but occasionally, one wrote "impure functions," i.e., functions with side-effects, to get something done. The terms "pure Lisp" or "purely functional subset of Lisp" have been in use for a long time. Slowly, by osmosis, this idea of "purity" has come to invade all our space.
The imperative programming languages could have resisted the trend. But, once C decided to abolish the idea of "procedures" and call them "void functions" instead, they didn't have much of a leg to stand on.
It comes from the mathematical definition of "function", where it is not possible for functions to have side effects.
Why is the word "pure" used to describe functions with those properties?
From Wiktionary, under pure (adjective):
free of flaws or imperfections; unsullied
free of foreign material or pollutants
free of immoral behavior or qualities; clean
of a branch of science, done for its own sake instead of serving another branch of science.
It should be obvious that the behavior of interacting functions is easiest to reason about when they are influenced only by their inputs, and they themselves influence only their outputs. Therefore it is inevitable that these kinds of functions will be noticed and classified. Now what word could we use to describe a function with such properties? "free of foreign material or pollutants" and "free of immoral behavior or qualities" seem to describe this rather well.
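For a minimal contrast (Scala, illustrative names only):

// Pure: the result depends only on the inputs, and nothing else is touched.
def add(a: Int, b: Int): Int = a + b

// Impure: reads and mutates external state, so the same argument
// can produce different results on different calls.
var counter = 0
def addAndCount(a: Int): Int = { counter += 1; a + counter }

The first function is untouched by anything outside its inputs; the second is "polluted" by the external counter, which is exactly the sense of "pure" above.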
Who first used the word "pure" in that way, and when?
I am much too young to answer this with any degree of confidence. I argue, however, that it was inevitable that the word pure (or some very close synonym) would be used to describe functions that behave in this way.
Are there other words that mean roughly the same thing?
You said it yourself: "referentially transparent". However, you seem to suggest that "referential transparency" encompasses only part of the meaning of the phrase "pure function". I disagree; I feel it is entirely synonymous. From Wikipedia > Referential Transparency:
An expression is said to be referentially transparent if it can be replaced with its value without changing the behavior of a program. (emphasis mine)
The Haskell community sometimes uses the adjective "safe" in a similar manner. (See the Safe library, made to avoid throwing exceptions. Contrast with unsafePerformIO)
I can't think of any other synonyms right now.
The concept of a function originated in mathematics. The mathematical concept of a function is more-or-less a mapping from one set onto another. In this sense it's impossible for functions to have side effects; not because they're "better" that way or because they're specifically defined as to not have side effects, but because the concept of "having side effects" doesn't make any sense with this definition of a function. Mathematical functions aren't a series of steps that execute, so how could any of those steps somehow "affect" other mathematical objects you're talking about?
When people started studying computation, they became interested in machine-implementable algorithms for computing the values of mathematical functions given their inputs. People started talking about computable functions. But functions as implemented in a computer (in imperative languages at least, which are what programmers first worked with) are a series of executable steps, which obviously can have side effects.
So it became natural for programmers to think about functions as algorithms, not as mathematical functions. So then a pure function is one that is purely a mathematical function, to which all the hundreds of years of theory about functions applies, as opposed to the generalised programmer's function, which can't be reasoned about that way.

Formal Equivalence between programming languages

We have two languages which are (informally) semantically equivalent but syntactically different. One is XML-based and the other is script-based. How can I go about formally proving that the two languages are in fact equivalent? The script approach is just a convenient way to write the same program that would be tedious to write in XML.
Thanks
Ketan
Write a program which takes as its input a program in one language, and gives as its output the equivalent program in the other language.
Then prove that the program you just wrote is correct for all possible inputs.
A good way to do that is by some sort of induction. Programs usually have a tree structure to them; if you can prove that the translation is correct for every possible leaf, and that it is correct for every way of combining correctly translated subtrees, then by induction you've proven the whole thing.
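Here is a minimal sketch of the shape such a translation takes (Scala, with two made-up toy languages): each constructor of the source tree gets one clause, and proving each clause correct is exactly the structural-induction argument described above.

// Hypothetical "script" language AST.
sealed trait ScriptExpr
case class SLit(n: Int) extends ScriptExpr
case class SAdd(a: ScriptExpr, b: ScriptExpr) extends ScriptExpr

// Hypothetical "XML" language AST.
sealed trait XmlExpr
case class XLit(n: Int) extends XmlExpr
case class XAdd(a: XmlExpr, b: XmlExpr) extends XmlExpr

def translate(e: ScriptExpr): XmlExpr = e match {
  case SLit(n)    => XLit(n)                           // leaf case
  case SAdd(a, b) => XAdd(translate(a), translate(b))  // combining case
}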
The first thing you need to define is what you mean by 'equivalent'.
If you mean 'what you can do with one, you can do with the other', then one approach would be to prove that both languages are Turing complete. If you can do this, then you will have shown that both languages can perform the same kinds of computations as each other (and as any other Turing-complete language or device).
If you mean 'they have the same structure - just different ways of specifying elements', then you will need to abstract the languages to show that they do indeed share the same structure. Backus-Naur Form is one way of doing this. If two languages have the same structure in BNF, with just the terminals being different, then really they are just two different representations of the same abstract syntax.
There are obviously other possible meanings of 'equivalent' and thus other things you could do.
You also need to define what you mean by 'proving': do you mean a rigorous mathematical proof, or 'convince my co-workers that the scripting language is good enough'? If you mean the first, it's going to be tough. If you mean the second, you could define a specification for what these languages need to be able to do and demonstrate proofs of concept in each language to show that they can meet the specification.
If they're general-purpose programming languages, write a Turing machine simulator in each. If they're not, you've got more of a problem. How sure are you that the languages are equivalent? Is this an exploratory sort of thing, or are you just trying to prove what you're already sure of?
In general, it's not possible to prove equivalence between languages. If they were written independently, your best bet might be to prove the script-based language does everything you need done, rather than to try to prove its equivalence to the XML-based one.
You may take some inspiration here:
http://compcert.inria.fr/doc/index.html