I have often seen claims that a programming language feature eliminates a whole class of errors.
For example, I have seen claims that:
A strong type system eliminates the class of errors caused by using features that a type does not support.
Automatic memory management eliminates the class of errors relating to allocating the correct amount of memory for an object/structure.
Mandatory variable initialisation eliminates null pointer or null reference errors.
Immutable data structures eliminate the class of errors caused by not understanding the impacts of changing mutable state.
I am not trying to find out whether the claims above are true or not, but rather compile a list of claims of this type that are specific enough for me to research and evaluate myself.
What other specific features are alleged to eliminate a whole class of errors?
Is there a general principle or theory for identifying features that do this, or identifying the absence of such features?
(Note that I do not include obviously vague or subjective claims like these, whether true or not:
Object oriented programming improves reusability.
Dynamic languages are faster to program in.
Meaningful whitespace makes the program cleaner.
)
Immutability
Concurrency
Immutability eliminates shared-data issues in a concurrent system: data that can never change can be read safely from any thread without locks or other synchronization. Concurrency and caching are two of the hardest things to get correct, and most people get them wrong the first few dozen times they try to write code of either kind.
Side Effects
Immutability also makes code deterministic: the inputs to a function can never change during its execution, whether from inside or outside the function, which means there are no side effects.
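To make that concrete, here is a minimal Haskell sketch (the names and data are purely illustrative): because the function is pure, nothing outside it can change the answer between calls.

-- A pure function: it cannot read a clock, a file, or mutable state,
-- so the same arguments always produce the same result.
scale :: Double -> [Double] -> [Double]
scale factor xs = map (* factor) xs

main :: IO ()
main = do
  let prices = [1.0, 2.5, 4.0]
  print (scale 2.0 prices)  -- [2.0,5.0,8.0]
  print (scale 2.0 prices)  -- guaranteed to print the same list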
Non-Null values
Most systems that support immutability as the only kind of variable also have no concept of null: either a reference is assigned or it isn't, and if it isn't, the compiler complains and you have to fix it.
In Java, liberal use of final references and @Nonnull annotations on parameters and return values eliminates almost all non-logic errors, or surfaces them at compile time or very early at run time.
Here are a few off the top of my head:
Class                Feature                           Example
-----                -------                           -------
Type Error           Single Data Type                  awk
Type Mismatch        Union Types                       XQuery
Reference Error      No Variables                      sed
Mismatched Braces    No Braces                         Python
Dangling Semicolon   Significant Whitespace            Python
Buffer Overflow      No Pointer Arithmetic             Ada
Division by Zero     Default to Infinity               Lua
Circular Reference   All Values Are Immutable Strings  Tcl
Circular Import      No Cyclical Dependencies          OCaml
Ambiguous Type       Hindley-Milner Type Inference     OCaml
Not Enough Args      Partial Application               Haxe
Import Error         Implicit Standard Library         ColdFusion
Leaky Abstraction    No Conditional Logic              CSS
Object Expected      Everything Is an Object           Smalltalk
No Such Method       Reification                       Smalltalk
Infinite Loop        No Side Effects                   DSSSL
Deadlock             Software Transactional Memory     Clojure
Namespace Conflict   Stack Save/Restore                PostScript
Invalid Arguments    Stack Machine                     PostScript
Heisenbug            Message-Passing Concurrency       Erlang
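As one concrete instance of the Deadlock row, here is a minimal sketch using Haskell's software transactional memory (the stm package) rather than Clojure's; the account values are made up for illustration. A transfer between two accounts composes atomically, so there is no lock-acquisition order to get wrong.

import Control.Concurrent.STM

-- With explicit locks, two threads locking the accounts in opposite
-- orders can deadlock; an STM transaction has no locks to order.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  (,) <$> readTVarIO a <*> readTVarIO b >>= print  -- (60,40)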
Optional types eliminate null pointer exceptions. The type forces you to check for the possibility of an empty value.
Most general-purpose programming languages have an optional type of some form: Maybe in Haskell, option in OCaml, the Optional generic type in Java.
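A short Haskell sketch of the claim (the phone book is made up for illustration): Data.Map.lookup returns a Maybe, so the compiler forces the caller to handle the empty case before any value can be used.

import qualified Data.Map as Map

phoneBook :: Map.Map String String
phoneBook = Map.fromList [("alice", "555-1234"), ("bob", "555-9876")]

-- The Nothing case must be handled explicitly; there is no way to
-- dereference a missing entry by accident.
describe :: String -> String
describe name = case Map.lookup name phoneBook of
  Just number -> name ++ ": " ++ number
  Nothing     -> name ++ ": no entry"

main :: IO ()
main = mapM_ (putStrLn . describe) ["alice", "carol"]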
Related
I'm finding that when generating Verilog output from the Chisel framework, all of the 'structure' defined in the Chisel framework is lost at the interface.
This is problematic for instantiating this work in larger SystemVerilog designs.
Are there any extensions or features in Chisel to support this better? For example, automatically converting Chisel "Bundle" objects into SystemVerilog 'struct' ports.
Or creating SV enums, when the Chisel code is written using the Enum class.
Currently, no. However, both suggestions sound like very good candidates for discussion for future implementation in Chisel/FIRRTL.
SystemVerilog Struct Generation
Most Chisel code instantiated inside Verilog/SystemVerilog will use some interface wrapper that deals with converting the necessary signal names that the instantiator wants to use into Chisel-friendly names. As one example of doing this see AcceleratorWrapper. That instantiates a specific accelerator and does the connections to the Verilog names the instantiator expects. You can't currently do this with SystemVerilog structs, but you could accomplish the same thing with a SystemVerilog wrapper that maps the SystemVerilog structs to deterministic Chisel names. This is the same type of problem/solution that most people encounter/solve when integrating external IP in their project.
Kludges aside, what you're talking about is possible in the future...
Some explanation is necessary as to why this is complex:
Chisel is converted to FIRRTL. FIRRTL is then lowered to a reduced subset of FIRRTL called "low" FIRRTL, and low FIRRTL is then mapped to Verilog. Part of this lowering process flattens all bundles using uniquely determined names (typically a.b.c will lower to a_b_c, but the name will be uniquified if the lowering would produce a namespace conflict). Verilog has no support for structs, so this has to happen. Additionally, and more critically, some optimizations, such as constant propagation and dead-code elimination, happen at the low-FIRRTL level because they are easier to write and handle there.
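To illustrate just the naming part of that flattening, here is a tiny sketch (written in Haskell purely as illustrative pseudocode; it is not FIRRTL's actual implementation): a hierarchical name is joined with underscores and then uniquified against the names already in use.

import qualified Data.Set as Set
import Data.List (intercalate)

-- Lower a path like ["a","b","c"] to "a_b_c"; append a numeric
-- suffix if the flat name collides with one that already exists.
lowerName :: Set.Set String -> [String] -> (String, Set.Set String)
lowerName used path = go 0
  where
    flat = intercalate "_" path
    go n
      | candidate `Set.member` used = go (n + 1)
      | otherwise = (candidate, Set.insert candidate used)
      where candidate = if n == 0 then flat else flat ++ "_" ++ show n

main :: IO ()
main =
  -- "a_b_c" is already taken, so the lowered name becomes "a_b_c_1".
  print (fst (lowerName (Set.fromList ["a_b_c"]) ["a", "b", "c"]))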
However, SystemVerilog, or any other language targeted by a FIRRTL backend that supports non-flat types, would benefit from using the features of that language to produce more human-readable output. There are two general approaches to rectifying this:
1. Lowered types retain information about how they were originally constructed via annotations, and the SystemVerilog emitter reconstructs them. This seems inelegant due to lowering and then un-lowering.
2. The SystemVerilog emitter uses a different sequence of FIRRTL transforms that does not go all the way to low FIRRTL. This would require some of the optimizing transforms that run on low FIRRTL to be rewritten to work on higher forms. This is tractable, but hard.
If you want some more information on what passes are run during each compiler phase, take a look at LoweringCompilers.scala
Enumerated Types
What you mention for Enum is planned for the Verilog backend. The idea here was to have Enums emit annotations describing what they are; the Verilog emitter would then generate localparams. The preliminary work for annotation generation was added as part of StrongEnum (chisel3#885/chisel3#892), but the annotations portion had to be backed out later. A solution to this is actively being worked on, and a subsequent PR to FIRRTL will augment the Verilog emitter to use these annotations. So, look for this going forward.
On Contributions and Outreach
For questions like this with (currently) negative answers, feel free to file an issue on the respective Chisel3 or FIRRTL repository. And even better than that is an RFC followed by an implementation.
I started learning Ada for its potential use in a safety-critical embedded device. So far, I'm really liking it. However, in my research on embedded programming, I came across the hot topic of whether to use exception handling in embedded systems. I think I understand why some people seem to avoid it:
depending on its implementation it can introduce either run-time overhead or larger code size (mentioned here under "Implementation")
the time it takes to execute exceptions can be non-deterministic (one of several sources I saw)
Now my question is: does the Ada language or the GNAT compiler address these concerns? My understanding of safety-critical code is that non-deterministic code size and execution time are often not acceptable.
Due diligence: I am having a bit of trouble finding out exactly how deterministic Ada exceptions can be, but my understanding is that their original implementation called for more run-time overhead in exchange for reduced code-size impact (the first link above mentions Ada explicitly). Beyond that first link, I have looked into profiles that address determinism of code, like the Ravenscar profile and this paper, but nothing seems to mention determinism of exception handling. To be fair, I may be looking in the wrong places, as this topic seems quite deep.
There are embedded systems that are safety- or mission-critical, embedded systems that are hard real time, and embedded systems that are both.
Embedded systems that are hard real time may be constrained or not. Colleagues worked on a missile guidance system in the 70s that had about 4 instructions' worth of headroom in its main loop! (As you can imagine, it was written in assembler and used a tuned executive, not an RTOS. Exceptions weren't supported.) On the other hand, the last one I worked on, on a 1 GHz PowerPC board, had a 2 millisecond deadline for the response to a particular interrupt, and our measured worst case was 1.3 milliseconds (and it was a soft real-time requirement anyway; you just didn't have to miss too many in a row).
That system also had safety requirements (I know, I know, safe missile systems, huh?) and although we were permitted to use exceptions, an unhandled exception meant that the system had to be shut down, missile in flight or not, resulting in loss of the missile. And we were strictly forbidden to write when others => null; to swallow an exception, so any exception we didn't handle would be 'unhandled' and would bounce up to the top level.
The argument is, if an unhandled exception happens, you can no longer know the state of the system, so you can't justify continuing. Of course, the wider safety engineering has to consider what action the overall system should take (for example, perhaps this processor should restart in a recovery mode).
Sometimes people use exceptions as part of their control flow; indeed, for handling arbitrary text input a commonly used method is, rather than checking for end of file, to just carry on until you get an End_Error:
loop
   begin
      -- read input
      -- process input
   exception
      when End_Error =>
         exit;
   end;
end loop;
Jacob's answer discusses using SPARK. You don't have to use SPARK to not handle exceptions, though of course it would be nice to be able to prove to yourself (and your safety auditor!) that there won't be any. Handling exceptions is very tricky, and some RTSs (e.g. Cortex GNAT RTS) don't; the configuration pragma
pragma Restrictions (No_Exception_Propagation);
means that exceptions can't be propagated out of the scope where they're raised (the program will crash out with a call to a Last_Chance_Handler).
Propagating exceptions only within the scope where they're raised isn't, IMO, that useful:
begin
   -- do something
   if Some_Error_Condition then
      raise Err;
   end if;
   -- do more
exception
   when Err =>
      null;
end;
would be a rather confusing way of avoiding the "do more" code. Better to use a label!
Exceptions are deterministic in Ada. (However, some of the checks that can raise an exception allow the compiler some freedom: if the compiler can deliver a correct answer, it doesn't always have to raise an exception when an intermediate result is out of bounds for the type in question.)
At least one Ada compiler (GNAT) has a "zero-cost" exception implementation. This doesn't make exceptions completely free: you don't pay a run-time cost until you actually raise an exception, but you still pay a cost in terms of code space. How large that cost is depends on the architecture.
I haven't worked on safety critical systems myself, but I know for sure that the run-time used for the software in the Ariane 4 inertial navigation system included exceptions.
If you don't want exceptions, one option is to use SPARK (a language derived from Ada). You can still use any Ada compiler you like, but you use the SPARK tools to prove that the program can't raise any exceptions. You should note that SPARK isn't magic: you have to help the tools by inserting assertions, which they can use as intermediate steps for the proofs.
I have no problem with the IO monad. But I want to understand the following:
In all (or almost all) Haskell tutorials and textbooks, it is said that getChar is not a pure function, because it can give you a different result each time. My question is: who said that this is a function in the first place? Unless you give me the implementation of this function, and I study that implementation, I can't guarantee it is pure. So, where is that implementation?
In all (or almost all) Haskell tutorials and textbooks, it is said that, say, an IO String is an action that (when executed) can give you back a value of type String. This is fine, but who performs this execution, and where does it take place? Of course, the computer does the executing. This is OK too, but since I am only a beginner, I hope you forgive me for asking: where is the recipe for this "execution"? I would guess it is not written in Haskell. Does this latter idea mean that, after all, a Haskell program is converted into a C-like program, which will eventually be converted into assembly and then machine code? If so, where can one find the implementation of the IO stuff in Haskell?
Many thanks
Haskell functions are not the same as computations.
A computation is a piece of imperative code (perhaps written in C or assembler, and then compiled to machine code, directly executable on a processor) that is by nature effectful and even unrestricted in its effects. That is, once it is run, a computation may access and alter any memory and perform any operations, such as interacting with the keyboard and screen, or even launching missiles.
By contrast, a function in a pure language, such as Haskell, is unable to alter arbitrary memory and launch missiles. It can only alter its own personal section of memory and return a result that is specified in its type.
So, in a sense, Haskell is a language that cannot do anything. Haskell is useless. This was a major problem during the 1990s, until IO was integrated into Haskell.
Now, an IO a value is a link to a separately prepared computation that will, eventually, hopefully, produce a value of type a. You will not be able to create an IO a out of pure Haskell functions. All the IO primitives are designed separately and packaged into GHC. You can then compose these simple computations into less trivial ones, and eventually your program may have any effects you wish.
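A small sketch of that composition in plain Haskell: getLine and putStrLn are primitive IO values shipped with the runtime, and do-notation (sugar for >>=) glues them into a larger computation that only runs when the runtime executes main.

-- greet is itself just an IO () value: a description of a computation
-- built from the primitives, not something that runs when defined.
greet :: IO ()
greet = do
  name <- getLine                -- the "read a line" computation
  putStrLn ("Hello, " ++ name)   -- the "write a line" computation

main :: IO ()
main = greet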
One point, though: pure functions are isolated from each other; they can only influence one another when you compose them. Computations, on the other hand, may interact with each other freely (as I said, they can generally do anything), and therefore can (and do) accidentally break each other. That's why there are so many bugs in software written in imperative languages! So, in Haskell, computations are kept in IO.
I hope this dispels at least some of your confusion.
Suppose I want to write a function that tries to find a key in a map and returns None if it cannot: try_find : 'a -> ('a, 'b) Map.t -> 'b option. What is the canonical way to do this? To first check that the key exists with mem and then call find? Or to catch the Not_found exception? Batteries seems to do the latter.
On the other hand, in languages like C# or Java people are usually discouraged from using exceptions in such cases, for performance reasons. Is using exceptions on "normal" execution paths a usual thing in Ocaml or is it also discouraged?
OCaml exceptions are as fast as function calls for the default backend. (For JavaScript backends this is not always true.) The canonical OCaml way to implement a function that doesn't throw an exception is to wrap a throwing function and translate the exception into the nullary variant None, e.g.,
let try_find x xs = try Some (List.find x xs) with Not_found -> None
Calling mem and find is a loss of performance, as you will actually iterate the list twice.
There are tradeoffs between raising an exception and returning an option type. The standard function List.find will not allocate any new values on the heap, so no garbage is created. On the other hand, the try_find function allocates a new value every time something is found (None is a constant, so it is not allocated). This creates extra work for the garbage collector, which will eventually degrade performance. To me, the semantic benefits of total functions outweigh the possible performance degradation. If the latter does matter (in the case of tight loops), I can always optimize locally, either by using an exception in a very tight context, or by continuation-passing style and/or a GADT.
Is using exceptions on "normal" execution paths a usual thing in Ocaml or is it also discouraged?
It wasn't discouraged by the design of the language, and the OCaml standard library uses exceptions a lot. However, the language evolves, and new features are added to it. Moreover, new backends are implemented, such as several JavaScript backends and the Java and .NET backends, and it is not trivial to provide the same performance guarantees for them. So, over time, the popularity of exceptions has declined, and many people have started to favor total functions with explicitly encoded errors, cf. the result type newly added to the standard library. Another example is Jane Street's Core library (and all its other libraries), which disfavors exceptions and uses them only for exceptional cases.
You should decide on an exception policy for yourself (or borrow an existing one). My personal policy is to avoid exceptions in public interfaces and to use them sparingly and very locally. I also use exceptions for logic and programmer errors, basically for errors that shouldn't be caught.
From what I've seen, OCaml exceptions are quite efficient, and I see them being used more often than in other functional languages, I guess.
I try to avoid them myself as they interfere with reasoning about the program. But a self-contained use in a library doesn't seem so bad.
The efficiency of low-level things like exceptions is something that might vary a lot from platform to platform. I suspect that catching the Not_found exception would be faster for very large maps, as it avoids traversing the map twice. Otherwise it might not matter much.
My questions are more of historical nature than practical:
Who invented it?
Which language used it first (and to what extent)?
What was the original idea, the underlying concept (which actual problems had to be solved in those days; papers welcome)?
Is Lisp's condition system the ancestor of current exception handling?
Today's Common Lisp condition system is a relative newcomer. The design was based on previous systems, but it wasn't included as part of the Common Lisp language until the late 1980s, around the time of CLtL2.
I believe the conditions chapter in that book has a fair amount of commentary on the history and background of the design, and references to related research and prior implementations of similar systems.
The VAX CPUs had a stack-based exception handling system. In every call frame, one 32-bit cell was allocated and filled with a zero. If the subroutine being called wanted to handle exceptions, all it had to do was fill in that cell with the address of the exception-handling routine.
When an exception took place, a stack search would occur. This was easy, since the stack frames were all chained together. The first stack frame with a non-zero entry would cause a stack unwind to that point, and the exception handler would be called.
I remember this as being one of the features of the processor that were aimed at higher-level languages, but I don't know of a higher-level language that took advantage of the feature. I believe it was used by library code, which would likely have been written in assembler.
Doesn't it go back to the setjmp and longjmp functions in C? Ritchie, Kernighan, et al.?