Which languages have "documented undefined behavior"? - language-agnostic

The detailed explanation for the undefined-behavior tag starts very reasonably:
In computer programming, undefined behavior (informally "UB") refers
to computer code whose behavior is not specified by the programming
language standard under certain conditions.
but then says (emphasis mine):
Questions regarding the various forms of documented undefined behavior
in the given programming language.
How can "behavior" that is "not specified" be "documented"?
What is "documented undefined behavior" and which languages have that beast?
[Additional comment:
Different people have interpreted the previously quoted text in widely different ways, even with opposite meanings, so this question became essentially a request for clarification and a rewrite of the text, something which would naturally fit on meta. But the question was not primarily a request for a change to a tag explanation; it is a programming-language question, hence it was intentionally not posted on meta.
(Note: I have already brought up the issue of that text on meta multiple times, both in a question and in an answer, but each time my comments were deleted, in a very harsh way.)]
EDIT
please edit to explain in detail the parts of your question that are
unique
The unique parts are, again:
How can "undefined behavior" be at the same time "documented"?
What is "documented undefined behavior"?
Which languages have "documented undefined behavior"?
None of the linked question's answers address these; the words documented undefined behavior don't even appear together.
If I missed something, it would be helpful to point specifically to the answers that explain those.
I would be sad if yet another question, answer, or discussion about UB were deleted because it points out inconsistencies in the tag description.

I was the one who wrote that text in the wiki. What I meant by "documented undefined behavior" is something which is formally undefined behavior in the language standard, but perfectly well-defined in the real world. No language has "documented undefined behavior", but the real world doesn't always care about what the language standard says.
A better term would perhaps be non-standard language extensions, or if you will "undefined as far as the programming language standard is concerned".
There are several reasons why something could be regarded as undefined behavior in the language standard:
Something is simply outside the scope of the standard, such as the behavior of memory, monitors, etc. All behavior that isn't documented in the standard is in theory undefined.
Something is actually well-defined on given specific hardware, but the standard doesn't want to impose restrictions on the system/hardware, so that no technology gets an unfair market advantage. Therefore it labels something undefined behavior even though it isn't undefined in practice.
Something is truly undefined behavior even in the hardware, or doesn't make any sense in any context.
Example of 1)
Where are variables stored in memory? This is outside the scope of the standard yet perfectly well-defined on any computer where programs are executed.
Similarly, if I say "my cat is black", it is undefined behavior, because the color of cats isn't covered by the programming language. This doesn't mean that my cat will suddenly start to shimmer in a kaleidoscope of mysterious colors, but rather that reality takes precedence over theoretical programming standards. We can be perfectly certain that this specific cat will always be a black cat, even though it is undefined behavior.
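Returning to the variable-address example, here is a minimal C sketch (mine, not part of the original answer) of behavior the standard never pins down but which is perfectly observable on any concrete machine:
#include <stdio.h>

int global_counter;   /* where this ends up in memory is outside the standard's scope */

int main(void)
{
    int local = 42;
    /* The standard doesn't say which addresses these objects get, but on any
       concrete implementation the addresses exist and can be printed. */
    printf("global_counter lives at %p\n", (void *)&global_counter);
    printf("local lives at %p\n", (void *)&local);
    return 0;
}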
Example of 2)
Signed integer overflow. What happens in case of integer overflow is perfectly well-defined on the CPU level. In most cases the addition/subtraction is carried out as if the operands were unsigned, and an overflow flag is set in a status register. As far as the C or C++ language is concerned, such overflows might in theory cause dreadful, unexplained events to happen. But in reality, the underlying hardware will produce perfectly well-defined results.
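A minimal C sketch of that gap (mine, not the answer's), assuming a typical two's-complement machine:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;
    /* As far as the C standard is concerned, the next line is undefined
       behavior. On a typical two's-complement CPU the add instruction simply
       wraps around (and sets an overflow flag), but an optimizing compiler
       is allowed to assume the overflow never happens. */
    int y = x + 1;
    printf("%d\n", y);   /* commonly prints INT_MIN, but nothing is guaranteed */
    return 0;
}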
Example of 3)
Division by zero. Accessing invalid addresses. The behavior upon stack overflow. Etc.
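For example, integer division by zero (a sketch of mine, not from the answer): there is no meaningful result even at the hardware level.
int divide(int a, int b)
{
    /* If b == 0 this is undefined behavior in the truest sense: x86 raises a
       divide error and the process typically dies with SIGFPE, some ARM cores
       quietly return 0, and a compiler may simply assume b is never zero. */
    return a / b;
}

int main(void)
{
    return divide(10, 2);   /* fine; divide(10, 0) is the category-3 case */
}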

C and C++ are rather unique, in that the "official" C standard was written long after the language was already in use and had even been described in published books. There were many situations, such as integer overflow, which some implementations would process in a documented, predictable fashion but others could not do so cheaply. The Standard treats such things as "Undefined Behavior", expressly noting that implementations may (but are not required to) process them in a documented fashion characteristic of the environment. Note that this allows for the possibility that in some environments it might be expensive to guarantee any kind of consistent behavior, and that many programs might not benefit from such guarantees despite the cost.
Consider, for example, something like:
extern volatile int someFlag;
void test(int x, int y)
{
    int z;
    someFlag = 1;
    z = x + y;
    someFlag = 0;
    if (f2())
        f3(x, y, z);
}
Should an implementation where overflow raises a signal be allowed to change the code to:
extern volatile sig_atomic_t someFlag;
void test(int x, int y)
{
    someFlag = 1;
    someFlag = 0;
    if (f2())
        f3(x, y, x+y);
}
Doing so would avoid the need to save the value of x+y in memory across the call to f2(), and might avoid the need to compute it altogether - a net win, unless someFlag would affect the behavior of the integer-overflow signal in a fashion that the code was relying upon. If the Standard had characterized integer overflow as "Implementation Defined", it would have been awkward for implementations to document overflow behavior as required by the Standard without forgoing optimizations like the above, even though for many purposes a guarantee that the addition would be performed before the call to f2 would add cost but not value.
Rather than try to worry about whether such optimizations should be allowed or prohibited, the authors of the Standard opted to characterize integer overflow as Undefined Behavior, allowing implementations that documented its behavior to continue doing so, but not requiring that implementations pessimistically assume that any possible side-effects might be observable in ways they don't know about. Before the Standard was written, any behaviors that an implementation documented would be documented behaviors, and the fact that the Standard characterized behaviors as Undefined was not intended to change that.
Since then, there have been a number of defect reports which incorrectly describe as "non-conforming" various constructs which are conforming but not strictly conforming, and this has contributed to a mistaken belief that the term "X is undefined" in the Standard is equivalent to "X is forbidden". Other language specifications are far more explicit in distinguishing between constructs which are forbidden and must be diagnosed, those which are forbidden but may not always be diagnosed, those whose behavior is expected to be partially but not fully consistent, and those whose behavior will behave in different consistent fashions on different implementations, but the authors of the original C and C++ Standards left such things to implementers' judgment.

As I read it, "documented undefined behavior" doesn't mean "behavior that is both (undefined AND documented)". It means "(undefined behavior) which is documented". He even gives an example:
For example, accessing beyond the last element of an array in C might be diagnosed by the compiler if the array index is known during compilation, or might return a garbage value from uninitialized memory, or return an apparently sensible value, or cause the program to crash by accessing memory outside the process' data address space.
The undefined behavior is "accessing beyond the last element of an array in C". The C language says, THIS IS UNDEFINED. And yet, he and others have documented what things actually DO happen in the real world when you enter this "undefined" area of the language.
So there are two levels at which this undefined behavior is documented.
1) It is identified. "C doesn't define what happens when you go past the end of the array". Now you know it's undefined behavior.
2) It is explored. "Here are some things that can happen when you do it."
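As a hedged illustration of level 2 (my sketch, not the answer author's):
#include <stdio.h>

int main(void)
{
    int a[4] = {1, 2, 3, 4};
    /* a[4] is one past the last element: reading it is undefined behavior.
       The outcomes people actually document include a garbage value from
       whatever happens to sit next in memory, an apparently sensible value,
       a compile-time diagnostic when the index is a constant, or a crash if
       the access lands outside the process's address space. */
    printf("%d\n", a[4]);
    return 0;
}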
Maybe the author intended meaning 1 or meaning 2. Or some other meaning. But I think the meaning you're chasing may be an artifact of reading the phrase differently than I did.

Related

Understanding pragmas

I have a few related questions about pragmas. What got me started on this line of questions was trying to determine whether it's possible to disable some warnings without going all the way to no worries (I'd still like to worry, at least a little bit!). And I'm still interested in the answer to that specific question.
But thinking about that issue made me realize that I don't really understand how pragmas work. It's clear that at least some pragmas take arguments (e.g., use isms<Perl5>). But they don't seem to be functions. Where do they fit into the overall MOP? Are they sort of like Traits? Or packages? Is there any way to introspect over them? See what pragmas are currently in effect?
Are pragmas built into the language, or are they something that users can add? When writing a library, I'd love to have some errors/warnings that users can optionally disable with a pragma – is that possible, or are they restricted to use in the compiler? If I can create my pragmas, is there a practical difference between setting something with a pragma versus with a dynamic variable, aside from the cleaner look of a pragma? For that matter, how do we decide what language features should be set with a pragma versus a variable (e.g., why is $*TOLERANCE not a pragma)?
Basically, I'd be interested in any info about pragmas that you could offer or point me towards – though my specific question is still whether I can selectively turn off certain warnings.
Currently, pragmas are hard-coded in the handling of the use statement. They usually either set some flag in a hash that is associated with the lexical scope of the moment, or change the setting of a dynamic variable in the grammar.
Since use is a compile time construct, you can only use compile time constructs to get at them (currently) (so you'd need BEGIN if it is not part of a use).
I have been in favour of decoupling use from pragmas in the past, as I see them as mostly a holdover from the Perl roots of Raku.
All of this will be changed in the RakuAST branch. I'm not sure what Jonathan Worthington has in mind regarding pragmas in the RakuAST context. For one thing, I think we should be able to "export" a pragma to the scope of a use statement.

Programming Concepts That Were "Automated" By Modern Languages [closed]

Weird question, but here it is. What are the programming concepts that were "automated" by modern languages? What I mean are the concepts you had to handle manually before. Here is an example: in C you have to manage memory manually; with "modern" languages, however, the compiler/interpreter/language itself takes care of it. Do you know of any others, or aren't there any more?
Optimizations.
For the longest time, it was necessary to do this by hand. Now most compilers can do it infinitely better than any human ever could.
Note: This is not to say that hand-optimizations aren't still done, as pointed out in the comments. I'm merely saying that a number of optimizations that used to be done by hand are automatic now.
I think writing machine code deserves a mention.
Data collections
Hash tables, linked lists, resizable arrays, etc
All these had to be done by hand before.
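A hedged C sketch of what "done by hand" meant in practice (the names here are mine):
#include <stdlib.h>

/* Even a basic singly linked list was your problem to write and debug. */
struct node {
    int value;
    struct node *next;
};

/* Push a value onto the front of the list and return the new head, or NULL
   if allocation fails (which the caller must also remember to handle). */
struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->value = value;
    n->next = head;
    return n;
}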
Iteration over a collection:
foreach (string item in items)
{
    // Do item
}
Database access, look at the ActiveRecord pattern in Ruby.
The evil goto.
Memory management, anybody? I know it's more efficient to allocate and deallocate your own memory explicitly, but it also leads to Buffer Overruns when it's not done correctly, and it's so time consuming - a number of modern languages will allocate and garbage collect automatically.
Event System
Before, you had to implement the Observer Pattern all by yourself. Today (at least in .NET) you can simply use "delegates" and "events".
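A hedged C sketch of the hand-rolled version (the function names are made up for illustration):
#include <stdio.h>

typedef void (*listener_fn)(int new_value);

#define MAX_LISTENERS 8
static listener_fn listeners[MAX_LISTENERS];
static int listener_count;

/* All of this bookkeeping is what delegates/events give you for free. */
void subscribe(listener_fn fn)
{
    if (listener_count < MAX_LISTENERS)
        listeners[listener_count++] = fn;
}

void notify_all(int new_value)
{
    for (int i = 0; i < listener_count; i++)
        listeners[i](new_value);
}

static void print_listener(int v) { printf("value changed to %d\n", v); }

int main(void)
{
    subscribe(print_listener);
    notify_all(42);
    return 0;
}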
Line Numbers
No more BASIC, no more Card Decks!
First on the list: extension methods. They facilitate fluent programming.
Exceptions: compartmentalizing what the program is trying to do (the try block) and what it will do if something fails (the catch block) makes for saner programming. Before, you had to be constantly alert and intersperse error handling between your program statements.
Properties make the language very component-centric, very modern. But sadly that would make Java not modern.
Lambdas: functions that capture variables, whereas before we only had function pointers. This also removes the need for nested functions (Pascal has nested functions).
Convenient looping over collections, i.e. foreach, whereas before you had to build a design pattern around obj->MoveNext, obj->Eof.
goto-less programming using modern constructs like break, continue, return. I remember that in Turbo Pascal 5 there was no break or continue; VB6 has Exit Do/For (analogous to break), but it doesn't have continue.
C#-wise, I love the differentiation between out and ref, so the compiler can catch programming errors earlier.
Reflection and attributes, making programs/components able to discover each other's functionality and invoke it at runtime. In the C language of old (I don't know about modern C; it's been a long time since I used it), such things were inconceivable.
Remote method invocation, like Web Services, Remoting, WCF, RMI. Gone are the days of low-level TCP/IP plumbing for communication between computers.
Declarations
In single-pass-compiler languages like C and C++, declarations had to precede usage of a function. More modern languages use multi-pass compilers and don't need declarations anymore. Unfortunately, C and C++ were defined in such a poor way that they can't deprecate declarations now that multi-pass compilers are feasible.
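A small C sketch of what that requirement looks like (my example):
#include <stdio.h>

/* Without this prototype, a single-pass C compiler reaches the call in main()
   before it has ever seen area() and has to guess (or reject) the signature. */
double area(double width, double height);

int main(void)
{
    printf("%f\n", area(3.0, 4.0));
    return 0;
}

double area(double width, double height)
{
    return width * height;
}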
Pattern matching and match expressions
In modern languages you can use pattern matching, which is more powerful than a switch statement, nested if statements, or chained ternary operators:
E.g. this F# expression returns a string depending on the value of myList:
match myList with
| [] -> "empty list"
| 2::3::tl -> "list starts with elements 2 and 3"
| _ -> "other kind of list"
In C# you would have to write an equivalent expression that is less readable/maintainable:
(myList.Count == 0) ? "empty list" :
(myList[0] == 2 && myList[1] == 3 ? "list starts with elements 2 and 3" :
"other kind of list")
If you go back to assembly you could find many more; concepts like classes, which you could mimic only to a certain extent, were quite difficult to achieve... even having multiple statements in a single line, like "int c = 5 + 10 / 20;", is actually many different "instructions" put into a single line.
Pretty much anything you can think of beyond simple assembly (concepts such as scope, inheritance & polymorphism, overloading, "operators"...) is something that has been automated by modern languages, compilers and interpreters.
Some languages support Dynamic Typing, like Python! That is one of the best things ever (for certain fields of applications).
Functions.
It used to be that you needed to manually push all the arguments onto the stack, jump to the piece of code where you defined your function, then manually take care of its return values. Now you just write a function!
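A hedged C sketch of what the compiler now does for you behind one line of code:
int add(int a, int b)
{
    return a + b;
}

int main(void)
{
    /* One line of C. The compiler generates everything the assembly programmer
       used to write by hand: passing a and b per the calling convention,
       emitting the call, and fetching the result back out of the return
       register. */
    int sum = add(2, 3);
    return sum;
}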
Programming itself
With some modern IDE (e.g. Xcode/Interface Builder) a text editor or an address book is just a couple of clicks away.
Also, built-in functions for sorting (such as bubble sort, quick sort, ...).
Especially in Python, 'containers' are powerful data structures that, even in other high-level and modern programming languages, take a couple of lines to implement. You can find many examples of this kind in the Python documentation.
Multithreading
Native support for multithreading in the language (like in java) makes it more "natural" than adding it as an external lib (e.g. posix thread layer in C). It helps in understanding the concept because there are many examples, documentation, and tools available.
Good string types mean you have to worry less about messing up your string code.
Most languages other than C, and occasionally C++ (depending on how C-like they are being), have much nicer strings than C-style arrays of char with a '\0' at the end (easier to work with, a lot less worry about buffer overflows, etc.). C strings suck.
I probably have not worked with C strings enough to pass such a harsh judgment (OK, not that harsh, but I'd like to be harsher), but after reading this (especially the parts about the saner-seeming Pascal string arrays, which used the zeroth element to mark the length of the string) and a bunch of flame wars over which strcpy/strcat is better to use (the older strn* first security-enhancement efforts, the OpenBSD strl*, or the Microsoft str*_s), I have just come to really dislike them.
Type inference
In languages like F# or Haskell, the compiler infers types, making programming tasks much easier:
Java: float length = ComputeLength(new Vector3f(x,y,z));
F#: let length = ComputeLength(new Vector3f(x,y,z))
Both programs are equivalent and statically typed. But the F# compiler knows, for instance, that the ComputeLength function returns a float, so it automagically deduces the type of length, etc.
A whole bunch of the Design Patterns. Many of the design patterns, such as Observer (as KroaX mentioned), Command, Strategy, etc. exist to model first-class functions, and so many more languages have begun to support them natively. Some of the patterns related to other concepts, such as Lock and Iterator, have also been built into languages.
dynamic library
Dynamic libraries automatically share common code between processes, saving RAM and speeding up start time.
plugins
a very clean and efficient way to extend functionality.
Data Binding. Sure cuts down on wiring to manually shuffle data in and out of UI elements.
OS shell scripting: bash/sh or, even worse, batch scripts can to a large extent be replaced with Python/Perl/Ruby (especially for long scripts; less so, at least currently, for some of the core OS stuff).
You keep most of the same ability to throw out a few lines of script to glue stuff together, while still working in a "real" programming language.
Context Switching.
Most modern programming languages use the native threading model instead of cooperative threads. Cooperative threads had to actively yield control to let the next thread work; with native threads this is handled by the operating system.
As Example (pseudo code):
volatile bool run = true;

void thread1()
{
    while (run)
    {
        doHeavyWork();
        yield();
    }
}

void thread2()
{
    run = false;
}
On a cooperative system thread2 would never run without the yield() in the while loop.
Variable Typing
Ruby, Python and AS will let you declare variables without a type if that's what you want. Let me worry about whether this particular object's implementation of Print() is the right one, is what I say.
How about built-in debuggers?
How many here remember "the good old days", when you had to add print lines all over the program to figure out what was happening?
Stupidity
That's a thing I've gotten lot of help from modern programming languages. Some programming languages are a mess to start with so you don't need to spend effort to shuffle things around without reason. Good modern libraries enforce stupidity through forcing the programmer inside their framework and writing redundant code. Java especially helps me enormously in stupidity by forcing me inside OOPS box.
Auto Type Conversion.
This is something that I don't even realize the language is doing for me, except when I get errors for a wrong type conversion.
So, in modern languages you can write:
Dim Test As Integer = "12"
and everything should work fine...
Try to do something like that in a C compiler for embedded programming, for example. You have to do all the type conversions manually!!! That is a lot of work.
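A hedged C sketch of the manual equivalent (my example), using strtol from the standard library:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *text = "12";
    char *end;

    /* No automatic conversion here: you call strtol yourself, pick a base,
       and check where parsing stopped to see whether the string really was
       a number. */
    long value = strtol(text, &end, 10);
    if (end == text) {
        fprintf(stderr, "'%s' is not a number\n", text);
        return 1;
    }
    printf("parsed %ld\n", value);
    return 0;
}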

Why is undefined behavior allowed (as opposed to not compiling/crashing)?

I understand the reasons for compiler/interpreter language extensions, but why is behaviour that has no valid definition allowed to fail silently/do weird things rather than throwing a compiler error? Is it because of the extra difficulty (impossible or simply time-consuming) for the compiler to catch them?
P.S. Which languages have undefined behaviour and which don't?
P.P.S. Are there instances of undefined behaviour which are not impossible or too time-consuming to catch at compilation, and if so, are there any good reasons/excuses for those?
The concept of undefined behaviour is required in languages like C and C++, because detecting the conditions that cause it would be impossible or prohibitively expensive. Take for example this code:
int * p = new int(0);
// lots of conditional code, somewhere in which we do
int * q = p;
// lots more conditional code, somewhere in which we do
delete p;
// even more conditional code, somewhere in which we do
delete q;
Here the pointer has been deleted twice, resulting in undefined behaviour. Detecting the error is too hard to do for a language like C or C++.
Largely because, to accomplish certain purposes, it's necessary. Just for example, C and C++ were originally used to write operating systems, including things like device drivers. To do that, they used (among other things) direct access to specific hardware locations that represented I/O devices. Preventing access to those locations would have prevented C from being used for its intended purpose (and C++ was specifically targeted at allowing all the same capabilities as C).
Another factor is a really basic decision between specifying a language and specifying a platform. To use the same examples, C and C++ are both based on a conscious decision to restrict the definition to the language, and leave the platform surrounding that language separate. Quite a few of the alternatives, with Java and .NET as a couple of the most obvious examples, specify entire platforms instead.
Both of these reflect basic differences in attitude about the design. One of the basic precepts of the design of C (largely retained in C++) was "trust the programmer". Though it was never stated quite so directly, the basic "sandbox" concept of Java was/is based on the idea that you should not trust the programmer.
As far as what languages do/don't have undefined behavior, that's the dirty little secret: for all practical purposes, all of them have undefined behavior. Some languages (again, C and C++ are prime examples) go to considerable effort to point out what behavior is undefined, while many others either try to claim it doesn't exist (e.g., Java) or mostly ignore many of the "dark corners" where it arises (e.g., Pascal, most .NET).
The ones that claim it doesn't exist generally produce the biggest problems. Java, for example, includes quite a few rules that try to guarantee consistent floating point results. In the process, they make it impossible to execute Java efficiently on quite a bit of hardware -- but floating point results still aren't really guaranteed to be consistent. Worse, the floating point model they mandate isn't exactly perfect so under some circumstances it prevents getting the best results you could (or at least makes you do a lot of extra work to get around what it mandates).
To their credit, Sun/Oracle has (finally) started to notice the problem, and is now working on a considerably different floating point model that should be an improvement. I'm not sure if this has been incorporated in Java yet, but I suspect that when/if it is, there will be a fairly substantial "rift" between code for the old model and code for the new model.
Because different operating systems operate differently (...), and you can't just say "crash in this case", because it could be something the operating system could do better.

Idiom vs. pattern

In the context of programming, how do idioms differ from patterns?
I use the terms interchangeably and normally follow the most popular way I've heard something called, or the way it was called most recently in the current conversation, e.g. "the copy-swap idiom" and "singleton pattern".
The best difference I can come up with is code which is meant to be copied almost literally is more often called pattern while code meant to be taken less literally is more often called idiom, but such isn't even always true. This doesn't seem to be more than a stylistic or buzzword difference. Does that match your perception of how the terms are used? Is there a semantic difference?
Idioms are language-specific.
Patterns are language-independent design principles, usually written in a "pattern language" (a uniform template) describing things such as the motivating circumstances, pros & cons, related patterns, etc.
When people observing program development from On High (Analysts, consultants, academics, methodology gurus, etc) see developers doing the same thing over and over again in various situations and environments, then the intelligence gained from that observation can be distilled into a Pattern. A pattern is a way of "doing things" with the software tools at hand that represent a common abstraction.
Some examples:
OO programming took global variables away from developers. For those cases where they really still need global variables but need a way to make their use look clean and object oriented, there's the Singleton Pattern.
Sometimes you need to create a new object having one of a variety of possible different types, depending on some circumstances. An ugly way might involve an ever-expanding case statement. The accepted "elegant" way to achieve this in an OO-clean way is via the "Factory" or "Factory Method" pattern.
Sometimes, a lot of developers do things in a certain way, but it's a bad way that should be discouraged. This can be formalized in an antipattern.
Patterns are a high-level way of doing things, and most are language independent. Whether you create your objects with new Object or Object.new is immaterial to the pattern.
Since patterns are something a bit theoretical and formal, there is usually a formal pattern (heh - word overload! let's say "template") for their description. Such a template may include:
Name
Effect achieved
Rationale
Restrictions and Limitations
How to do it
Idioms are something much lower-level, and usually operate at the language level. Example:
*dst++ = *src++
in C copies a data element from src to dst while incrementing the pointers to both; it's usually done in a loop. Obviously, you won't see this idiom in Java or Object Pascal.
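A minimal sketch of how that loop typically looks (my example, not the answerer's): a hand-rolled string copy.
void copy_string(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;   /* everything happens in the loop condition */
}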
while (<INFILE>) { print; }
is (roughly quoted from memory) a Perl idiom for looping over an input file and printing out all lines in the file. There's a lot of implicit variable use in that statement. Again, you won't see this particular syntax anywhere but in Perl; but an old Perl hacker will take a quick look at the statement and immediately recognize what you're doing.
Contrary to the idea that patterns are language agnostic, both Paul Graham and Peter Norvig have suggested that the need to use a pattern is a sign that your language is missing a feature. (Visitor Pattern is often singled out as the most glaring example of this.)
I generally think the main difference between "patterns" and "idioms" to be one of size. An idiom is something small, like "use an interface for the type of a variable that holds a collection" while Patterns tend to be larger. I think the smallness of idioms does mean that they're more often language specific (the example I just gave was a Java idiom), but I don't think of that as their defining characteristic.
Since if you put 5 programmers in a room they will probably not even agree on what things are patterns, there's no real "right answer" to this.
One opinion that I've heard once and really liked (though I can't for the life of me recall the source) is that idioms are things that should probably be in your language, or that some language already has. In other words, they are tricks that we use because our language doesn't offer a direct primitive for them. For instance, there's no singleton in Java, but we can mimic it by hiding the constructor and offering a getInstance method.
Patterns, on the other hand, are more language-agnostic (though they often refer to a specific paradigm). You may have some infrastructure to support them (e.g., Spring for MVC), but they're not, and are not going to be, language constructs, and yet you could need them in any language from that paradigm.

Is there a SHOULD (or other modal verb) constructs in any programming languages?

As far as I know I've never encountered a SHOULD construct in a computer language, but then again I don't know that many languages, compared to the hundreds out there.
Anyways SHOULD and other modal verbs are very common in natural languages, and their meanings are quite clear when writing documentation and legal-binding contracts, so they aren't really gray terms, and theoretically could be expressed in programming terms (I guess).
For example, an ASSERT upholds, in a sense, a MUST construct.
Are there any actual examples of this sort of thing? Any research about it?
I'm guessing some Rule-Based systems, and perhaps fuzzy logic algorithms work like this.
I think of try as "should" and catch and finally as "in case it doesn't"
The exact meaning of should in natural language is also not clear-cut. When you say "the wheel should fit in the row" - what does that mean exactly? It may mean the same as must, but then there is no point in a separate construct for it. Otherwise, how confident do you need to be for this to be satisfied? What is the action in case the wheel does not fit?
In the senses you referred to, there are some equivalents, though I do not know of a language that uses the word should for them:
Testing/assertion
ASSERT is often a language directive, macro, or testing library function. In the sense that ASSERT corresponds to must, some languages and testing frameworks define macros for "warning assertions" which will spit a warning message if the check fails but not bail out or fail the test - that would correspond to should.
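A hedged C sketch of such a "warning assertion" (the SHOULD macro name is hypothetical, not taken from any particular framework):
#include <stdio.h>

/* Complain if the condition fails, but keep going instead of aborting the
   way assert() would. */
#define SHOULD(cond)                                                     \
    do {                                                                 \
        if (!(cond))                                                     \
            fprintf(stderr, "warning: %s:%d: should-check failed: %s\n", \
                    __FILE__, __LINE__, #cond);                          \
    } while (0)

int main(void)
{
    int wheel_width = 12, slot_width = 10;
    SHOULD(wheel_width <= slot_width);   /* warns, but execution continues */
    return 0;
}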
Exception handling
In some terms, you can consider exception thrown as analogues - if an exception is caught, the program can handle the case where something is not as it should be. But sometimes the exception describes the failure of something to be as it must be for the program to work, in which case the exception will not be caught or the handler will make the program fail gracefully. This is however not always the case - sometimes code is executed to test something that may be or perhaps is even unlikely to be, and an exception is caught expecting that it will usually be thrown.
Constraint logic
One common meaning of must and should in various formal natural language documents is in terms of constraints - must specifying a constraint that you always have to satisfy, and if you cannot then your state is incompatible, while should means that you will always satisfy the constraint if it is possible given the state and the constraints implied by must, but if it is not possible that is still valid. In informal constraint logic, this occurs when there are "external constraints" in the context - so verifying that the "solution" is satisfactory with respect to the "should constraints" may be possible only with knowledge of the context, and given the context you may also be able to satisfy different subsets of the "should constraints" but not at the same time. For that reason, some constraint logic specification languages (whether you call them "programming languages" depends on your definition) have the concepts of ordering of constraints - the first level of constraints corresponds to must, the next level corresponds to should, and a constraint has to be satisfied if possible given all constraints external to it (in previous levels), even if that conflicts with some constraints in the next levels which will then not be satisfied.
@Simon Perhaps a try/finally is closest to should: anything in the try should run, but not always. A web service should open the socket, but if it doesn't, we don't care.
This sort of modality is used in RSpec - a DSL for building tests in a behavior-driven style.
Modal verbs like "should", "may", "might" may cause confusion, so RFC 2119 gives a definition to point all noses in the same direction:
SHOULD This word, or the adjective "RECOMMENDED", mean that there
may exist valid reasons in particular circumstances to ignore a
particular item, but the full implications must be understood and
carefully weighed before choosing a different course.
From this definition it should (no pun intended) be clear that it's meant to be used in specifications, rather than program code, where you want things deterministic. At least I do. I can imagine it's usable in AI, though.
Well, SHOULD might be found in a Prolog-type language, as a softer inference? I.e. the result logically should be x but might not be. You could say the result is probably x, but not unequivocally?
What is the behavior you expect from the program if the result is not as it SHOULD be? In the ASSERT case, it is an exception (AssertException or similar). SHOULD the program throw an exception or just ignore the result? To me it seems that there is nothing in between. Either the result is accepted or not.
Otherwise you SHOULD specify what behaviour you expect. :-)
Back to the assertion: If the assertion fails, an exception is thrown. It is up to you what you do with that exception. In java/C#, e.g., you can catch it and then do anything you want, so you define whether the assert has a MUST or a SHOULD semantic.
Well, Java2K has similar concepts. It SHOULD be doing what it's told...
SCNR.