Why don't languages raise errors on integer overflow by default? - language-agnostic

In several modern programming languages (including C++, Java, and C#), the language allows integer overflow to occur at runtime without raising any kind of error condition.
For example, consider this (contrived) C# method, which does not account for the possibility of overflow/underflow. (For brevity, the method also doesn't handle the case where the specified list is a null reference.)
//Returns the sum of the values in the specified list.
private static int sumList(List<int> list)
{
    int sum = 0;
    foreach (int listItem in list)
    {
        sum += listItem;
    }
    return sum;
}
If this method is called as follows:
List<int> list = new List<int>();
list.Add(2000000000);
list.Add(2000000000);
int sum = sumList(list);
An overflow will occur in the sumList() method (because the int type in C# is a 32-bit signed integer, and the sum of the values in the list exceeds the value of the maximum 32-bit signed integer). The sum variable will have a value of -294967296 (not a value of 4000000000); this most likely is not what the (hypothetical) developer of the sumList method intended.
Obviously, there are various techniques that can be used by developers to avoid the possibility of integer overflow, such as using a type like Java's BigInteger, or the checked keyword and /checked compiler switch in C#.
However, the question that I'm interested in is why these languages were designed to by default allow integer overflows to happen in the first place, instead of, for example, raising an exception when an operation is performed at runtime that would result in an overflow. It seems like such behavior would help avoid bugs in cases where a developer neglects to account for the possibility of overflow when writing code that performs an arithmetic operation that could result in overflow. (These languages could have included something like an "unchecked" keyword that could designate a block where integer overflow is permitted to occur without an exception being raised, in those cases where that behavior is explicitly intended by the developer; C# actually does have this.)
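(For illustration, a minimal sketch of what opting in looks like in C#; the method name sumListChecked is just made up for this example.)
    // The same summation as above, but opted in to overflow checking:
    private static int sumListChecked(List<int> list)
    {
        int sum = 0;
        foreach (int listItem in list)
        {
            // Throws System.OverflowException instead of silently wrapping.
            sum = checked(sum + listItem);
        }
        return sum;
    }
    // And the inverse: wrapping an expression in unchecked(...) forces silent
    // wrap-around even when the /checked compiler switch is on.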
Does the answer simply boil down to performance -- the language designers didn't want their respective languages to default to having "slow" arithmetic integer operations where the runtime would need to do extra work to check whether an overflow occurred, on every applicable arithmetic operation -- and this performance consideration outweighed the value of avoiding "silent" failures in the case that an inadvertent overflow occurs?
Are there other reasons for this language design decision as well, other than performance considerations?

In C#, it was a question of performance. Specifically, out-of-box benchmarking.
When C# was new, Microsoft was hoping a lot of C++ developers would switch to it. They knew that many C++ folks thought of C++ as being fast, especially faster than languages that "wasted" time on automatic memory management and the like.
Both potential adopters and magazine reviewers were likely to get a copy of the new C#, install it, build a trivial app that no one would ever write in the real world, run it in a tight loop, and measure how long it took. Then they'd make a decision for their company or publish an article based on that result.
If that test showed C# to be slower than natively compiled C++, that's the kind of thing that would turn people off C# quickly. The fact that your C# app is going to catch overflow/underflow automatically is the kind of thing that they might miss. So, checking is off by default.
I think it's obvious that 99% of the time we want /checked to be on. It's an unfortunate compromise.

I think performance is a pretty good reason. If you consider every instruction in a typical program that increments an integer, and if instead of the simple op to add 1, it had to check every time if adding 1 would overflow the type, then the cost in extra cycles would be pretty severe.
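One rough (and admittedly unscientific) way to get a feel for that cost in C# is to time the same loop with and without checking; a minimal sketch, with the caveat that the exact numbers will vary by runtime and hardware:
    using System;
    using System.Diagnostics;

    class OverflowCheckCost
    {
        static void Main()
        {
            const int iterations = 100000000;

            var sw = Stopwatch.StartNew();
            int a = 0;
            for (int i = 0; i < iterations; i++)
                a = unchecked(a + 1);               // plain wrapping add
            Console.WriteLine("unchecked: " + sw.ElapsedMilliseconds + " ms, a=" + a);

            sw.Restart();
            int b = 0;
            for (int i = 0; i < iterations; i++)
                b = checked(b + 1);                 // add plus an overflow test
            Console.WriteLine("checked:   " + sw.ElapsedMilliseconds + " ms, b=" + b);
        }
    }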

You work under the assumption that integer overflow is always undesired behavior.
Sometimes integer overflow is desired behavior. One example I've seen is representation of an absolute heading value as a fixed point number. Given an unsigned int, 0 is 0 or 360 degrees and the max 32 bit unsigned integer (0xffffffff) is the biggest value just below 360 degrees.
#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t shipsHeadingInDegrees = 0;
    // Rotate by a bunch of degrees
    shipsHeadingInDegrees += 0x80000000; // 180 degrees
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees, overflows (wraps back past 0)
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees
    // Ship's heading now will be 180 degrees
    std::cout << "Ships Heading Is "
              << (double(shipsHeadingInDegrees) / double(0xffffffff)) * 360.0
              << std::endl;
}
There are probably other situations where overflow is acceptable, similar to this example.

C/C++ never mandate trap behaviour. Even the obvious division by 0 is undefined behaviour in C++, not a specified kind of trap.
The C language doesn't have any concept of trapping, unless you count signals.
C++ has a design principle that it doesn't introduce overhead not present in C unless you ask for it. So Stroustrup would not have wanted to mandate that integers behave in a way which requires any explicit checking.
Some early compilers, and lightweight implementations for restricted hardware, don't support exceptions at all, and exceptions can often be disabled with compiler options. Mandating exceptions for language built-ins would be problematic.
Even if C++ had made integers checked, 99% of programmers in the early days would have turned it off for the performance boost...

Because checking for overflow takes time. Each primitive mathematical operation, which normally translates into a single assembly instruction, would have to include a check for overflow, resulting in multiple assembly instructions and potentially in a program that is several times slower.

It is likely 99% performance. On x86, the runtime would have to check the overflow flag after every operation, which would be a huge performance hit.
The other 1% would cover those cases where people are doing fancy bit manipulations or being 'imprecise' in mixing signed and unsigned operations and want the overflow semantics.

Backwards compatibility is a big one. With C, it was assumed that you were paying enough attention to the size of your data types that, if an over/underflow occurred, that was what you wanted. Then with C++, C# and Java, very little changed with how the "built-in" data types worked.

If integer overflow is defined as immediately raising a signal, throwing an exception, or otherwise deflecting program execution, then any computations which might overflow will need to be performed in the specified sequence. Even on platforms where integer overflow checking wouldn't cost anything directly, the requirement that integer overflow be trapped at exactly the right point in a program's execution sequence would severely impede many useful optimizations.
If a language were to specify that integer overflows would instead set a latching error flag, were to limit how actions on that flag within a function could affect its value within calling code, and were to provide that the flag need not be set in circumstances where an overflow could not result in erroneous output or behavior, then compilers could generate more efficient code than any kind of manual overflow-checking programmers could use. As a simple example, if one had a function in C that would multiply two numbers and return a result, setting an error flag in case of overflow, a compiler would be required to perform the multiplication whether or not the caller would ever use the result. In a language with looser rules like I described, however, a compiler that determined that nothing ever uses the result of the multiply could infer that overflow could not affect a program's output, and skip the multiply altogether.
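For instance, a hedged sketch in C# of what such latching semantics could look like if written by hand today (the flag, the class, and the widening-to-long trick are all illustrative assumptions, not a real language feature):
    // Illustrative only: a hand-rolled "latching" overflow flag.
    static class LatchingMath
    {
        // Sticky flag: records whether any overflow has occurred so far.
        public static bool OverflowFlag;

        // Multiply two ints; on overflow, latch the flag and return the wrapped value.
        public static int Mul(int a, int b)
        {
            long wide = (long)a * b;              // do the math in a wider type
            if (wide > int.MaxValue || wide < int.MinValue)
                OverflowFlag = true;              // latch, but keep going
            return unchecked((int)wide);
        }
    }

    // The caller checks the flag once, after a whole computation:
    //   LatchingMath.OverflowFlag = false;
    //   int r = LatchingMath.Mul(x, y);
    //   if (LatchingMath.OverflowFlag) { /* result cannot be trusted */ }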
From a practical standpoint, most programs don't care about precisely when overflows occur, so much as they need to guarantee that they don't produce erroneous results as a consequence of overflow. Unfortunately, programming languages' integer-overflow-detection semantics have not caught up with what would be necessary to let compilers produce efficient code.

My understanding of why errors would not be raised by default at runtime boils down to the legacy of desiring programming languages with ACID-like behavior: specifically, the tenet that whatever you code it to do (or don't code), it will do (or not do). If you didn't code an error handler, then the machine "assumes", by virtue of there being no error handler, that you really want to do the ridiculous, crash-prone thing you're telling it to do.
(ACID reference: http://en.wikipedia.org/wiki/ACID)

Related

Is divide by zero an error or an exception?

Basically I want to know how you differentiate an error from an exception. In some programming languages accessing a non-existent file throws an error and in others it's an exception. How do you know if something is an error or an exception?
Like anything else - you either test it or read the documentation. It can be an "Error" or an "Exception" based on the language.
Eg.
C:
Crashes and gives a divide by zero error.
Ruby:
>> 6 / 0
ZeroDivisionError: divided by 0
from (irb):1:in `/'
from (irb):1
(ZeroDivisionError is actually an exception.)
Java:
Code:
int x = 6 / 0;
Output:
Exception in thread "main" java.lang.ArithmeticException: / by zero
It depends on the language:
some languages don't have exceptions
some languages don't use exceptions for everything.
For example, in PHP:
There are exceptions
But divide by 0 doesn't cause an exception to be thrown: it only raises a warning -- which doesn't stop the execution of the script.
The following portion of code :
echo 10 / 0;
echo "hello, world!";
Would give this result :
Warning: Division by zero in /.../temp.php on line 5
hello, world!
The terms error and exception are commonly used as jargon terms, with meanings that vary depending upon the programming ecosystem in which they are used.
Conditions
This response follows the lead of Common Lisp, and adopts the term condition as a nonjudgmental way of referring to an "interesting situation" in a program.
What makes a program condition "interesting"? Let's consider the division-by-zero case for real numbers. In the overwhelming majority of cases in which one real is divided by another, the result is another plain ordinary well-behaved real number. These are the "routine" or "uninteresting" cases. However, in the case that the divisor is zero then, mathematically speaking, the result is undefined. The program is now in an "interesting" or "exceptional" condition.
It becomes even more complicated once we take the mathematical ideal of a real number and model it, say, as an IEEE-format floating point number. If we divide 1.0 / 0.0, the IEEE standard (mostly) says that the result is in fact another floating point value: positive Infinity (and 0.0 / 0.0 yields the quiet NaN). Since the result no longer behaves in the same way as a plain old real number, the program condition is once again "interesting" or "exceptional".
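One can see this directly in C#, whose double type follows IEEE 754 (a minimal sketch):
    using System;

    class IeeeDivisionDemo
    {
        static void Main()
        {
            double zero = 0.0;
            double a = 1.0 / zero;   // positive infinity, not an exception
            double b = 0.0 / zero;   // NaN ("not a number")

            Console.WriteLine(a);                   // Infinity
            Console.WriteLine(double.IsNaN(b));     // True
            Console.WriteLine(a > double.MaxValue); // True: infinity compares greater
            Console.WriteLine(b == b);              // False: NaN is not equal to itself
        }
    }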
Classifying Conditions
The question is: what should we do when we run into an interesting condition? The answer is dependent upon the context. When classifying program conditions, the following questions are useful:
How likely is it that the condition will occur: certain, probable, unlikely, impossible?
How is the condition detected: program malfunction, distinguished value, signal/handler (aka exception handling), program termination?
How should the condition be handled: ignore it, perform some special action, terminate the program?
The answers to these questions yield 4 x 4 x 3 = 48 distinct cases -- and surely more could be distinguished by further criteria. This brings us to the heart of the matter. We have more than two cases but only two labels, error and exception, to apply to them. Needless to say, there are many possible ways to divide the 48+ cases into two groups.
For example, one could say that anything involving program malfunction is an error, anything else is an exception. Or that anything involving a language's built-in exception handling facilities is an exception, anything else is an error. The possibilities are legion.
Examples
End-Of-File
When reading and processing a stream of characters, hitting the end-of-file is certain. In C, this event is detected by means of a distinguished return value from an I/O function, a so-called error return value. Thus, one speaks of an EOF error.
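The same distinguished-value pattern appears outside C as well; for example, a C# sketch (the method name is made up) where Stream.ReadByte reports end-of-file as the out-of-band value -1 rather than by throwing:
    // Count the bytes in a stream; -1 is the distinguished "no more data" value.
    static long CountBytes(System.IO.Stream s)
    {
        long total = 0;
        while (s.ReadByte() != -1)   // -1 signals end-of-stream, not an error
            total++;
        return total;
    }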
Division-By-Zero
When dividing two user-entered numbers in a simple calculator program, we want to give a meaningful result even if the user enters a divisor of zero. In some C environments, division-by-zero results in a signal (SIGFPE) that must be fielded by a signal handler. Signals are sometimes called exceptions in the C community and, confusingly, sometimes called program error signals. In other C environments, IEEE floating-point rules apply and the division-by-zero would result in a NaN value. The C environment would be blissfully unaware of that value, considering it to be neither an exception nor an error.
Runtime Load Failure
Programs frequently load their program code dynamically at run-time (e.g. classes, DLLs). This might fail due to a missing file. C offers no standard way to detect or recover from this case. The program would be terminated involuntarily, and one often speaks of this situation as a fatal exception. In Java, this would be termed a linkage error.
Java's Throwable Hierarchy
Java's exception-handling system divides the so-called Throwable class hierarchy into two main groups. Subclasses of Error are meant to represent conditions from which recovery is impossible. Subclasses of Exception are meant for recoverable conditions and are further subdivided into checked exceptions (for probable conditions) and unchecked exceptions (for unlikely conditions). Unfortunately, the boundaries between these categories are poorly defined, and you will often find instances of throwables whose semantics suggest that they belong in a different category.
Be Wary Of Jargon
These examples show that the meanings of error and exception are murky at best. One must treat error and exception as jargon, whose meaning is determined by the context of discussion.
Of greater value are distinguishing characteristics of program conditions. What is the likelihood of the condition occurring? How is the condition detected? What action should be taken when the condition is detected? In any discussion that demands clarity, one is better suited to answer these questions directly rather than relying upon jargon terminology.
Exceptions should indicate exceptional activity, so if you reach a point in your code for which you've done your best to avoid divide by zero, then throwing an exception (if you are able to in your language) is the right way.
If it's routine logic to check for divide by zero (like for a calculator app) then you should check for that in your code before it has the chance to raise an exception. In that case, it's an error (in user input) and should be handled as such.
(Stole this idea either from The Pragmatic Programmer or Code Complete; can't remember which.)
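A minimal C# sketch of that distinction (the method names are made up):
    // Routine case: the user can easily type 0, so validate before dividing.
    static bool TryDivide(int numerator, int denominator, out int result)
    {
        if (denominator == 0)       // expected user error: report it, don't throw
        {
            result = 0;
            return false;
        }
        result = numerator / denominator;
        return true;
    }

    // Exceptional case: by this point a zero divisor indicates a bug upstream,
    // so letting the runtime throw System.DivideByZeroException is reasonable.
    static int DivideOrThrow(int numerator, int denominator)
    {
        return numerator / denominator;
    }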

Technical non-terminating condition in a loop

Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.
void CountByTwosStartingAt(byte i) { // If i is even, it never exceeds 254
    for (; i < 255; i += 2) {
        Console.WriteLine(i);
    }
}
Sometimes there are edge cases that are extremely unlikely, but technically constitute non-exiting conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:
int CountZeros(Stream s) {
    int total = 0;
    while (s.ReadByte() == 0) total++;
    return total;
}
Now, suppose you feed it this thing:
class InfiniteEmptyStream : Stream
{
    // ... Other members ...
    public override int Read(byte[] buffer, int offset, int count) {
        Array.Clear(buffer, offset, count); // Output zeros
        return count; // Never returns 0, so the caller never sees end-of-stream
    }
}
Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't.
A completely real-world example, from an app I'm writing: an endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or the GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source).
How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness?" Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API?
Edit:
One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
You either "trust" your data source, or you don't.
If you trust it, then probably you want to make a best effort to process the data, no matter what it is. If it sends you zeros for ever, then it has posed you a problem too big for your resources to solve, and you expend all your resources on it and fail. You say this is "completely unexpected", so the question is whether it's OK for it to merely be "completely unexpected" for your application to fall over because it's out of memory. Or does it need to actually be impossible?
If you don't trust your data source, then you might want to put an artificial limit on the size of problem you will attempt, in order to fail before your system runs out of memory.
In either case it might be possible to write your app in such a way that you recover gracefully from an out-of-memory exception.
Either way it's a robustness issue, but falling over because the problem is too big to solve (your task is impossible) is usually considered more acceptable than falling over because some malicious user is sending you a stream of zeros (you accepted an impossible task from some script-kiddie DoS attacker).
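A minimal C# sketch of that artificial-limit idea, applied to the CountZeros function from the question (the limit value and the exception type are arbitrary choices):
    // Like CountZeros, but refuses to run forever on an untrusted stream.
    static int CountZerosBounded(System.IO.Stream s, int maxZeros = 1000000)
    {
        int total = 0;
        while (s.ReadByte() == 0)
        {
            if (++total > maxZeros)
                throw new System.InvalidOperationException(
                    "Too many leading zeros; refusing the input as implausible.");
        }
        return total;
    }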
Things like that have to be decided on a case-by-case basis. It may make sense to have additional sanity checks, but it is too much work to make every piece of code completely foolproof; and it is not always possible to anticipate what fools come up with.
You either "trust" your data source, or you don't.
I'd say that you either "support" the software being used with that data source, or you don't. For example, I've seen software which doesn't handle an insufficient-memory condition: but insufficient memory isn't "supported" for that software (or, less specifically, it isn't supported for that system); so, for that system, if an insufficient-memory condition occurs, the fix is to reduce the load on the system or to increase the memory (not to fix the software). For that system, handling insufficient memory isn't a requirement: what is a requirement is to manage the load put on the system, and to provide sufficient memory for that given load.
How important is it to have absolutely no non-terminating conditions?
It isn't important at all. That is, it's not a goal by itself. The important thing is that the code correctly implements the spec. For example, an interactive shell may have a bug if the main loop does terminate.
In the scenario you're describing, the problem of infinite zeros is actually a special case of memory exhaustion. It's not a theoretical question but something that can actually happen. You should decide how to handle this.

What exactly is the danger of using magic debug values (such as 0xDEADBEEF) as literals?

It goes without saying that using hard-coded, hex literal pointers is a disaster:
int *i = 0xDEADBEEF;
// god knows if that location is available
However, what exactly is the danger in using hex literals as variable values?
int i = 0xDEADBEEF;
// what can go wrong?
If these values are indeed "dangerous" due to their use in various debugging scenarios, then this means that even if I do not use these literals, any program that during runtime happens to stumble upon one of these values might crash.
Anyone care to explain the real dangers of using hex literals?
Edit: just to clarify, I am not referring to the general use of constants in source code. I am specifically talking about debug-scenario issues that might come up due to the use of hex values, with the specific example of 0xDEADBEEF.
There's no more danger in using a hex literal than any other kind of literal.
If your debugging session ends up executing data as code without you intending it to, you're in a world of pain anyway.
Of course, there's the normal "magic value" vs "well-named constant" code smell/cleanliness issue, but that's not really the sort of danger I think you're talking about.
With few exceptions, nothing is "constant".
We prefer to call them "slow variables" -- their value changes so slowly that we don't mind recompiling to change them.
However, we don't want to have many instances of 0x07 all through an application or a test script, where each instance has a different meaning.
We want to put a label on each constant that makes it totally unambiguous what it means.
if( x == 7 )
What does "7" mean in the above statement? Is it the same thing as
d = y / 7;
Is that the same meaning of "7"?
Test Cases are a slightly different problem. We don't need extensive, careful management of each instance of a numeric literal. Instead, we need documentation.
We can -- to an extent -- explain where "7" comes from by including a tiny bit of a hint in the code.
assertEquals( 7, someFunction(3,4), "Expected 7, see paragraph 7 of use case 7" );
A "constant" should be stated -- and named -- exactly once.
A "result" in a unit test isn't the same thing as a constant, and requires a little care in explaining where it came from.
A hex literal is no different than a decimal literal like 1. Any special significance of a value is due to the context of a particular program.
I believe the concern raised in the IP address formatting question earlier today was not related to the use of hex literals in general, but the specific use of 0xDEADBEEF. At least, that's the way I read it.
There is a concern with using 0xDEADBEEF in particular, though in my opinion it is a small one. The problem is that many debuggers and runtime systems have already co-opted this particular value as a marker value to indicate unallocated heap, bad pointers on the stack, etc.
I don't recall off the top of my head just which debugging and runtime systems use this particular value, but I have seen it used this way several times over the years. If you are debugging in one of these environments, the existence of the 0xDEADBEEF constant in your code will be indistinguishable from the values in unallocated RAM or whatever, so at best you will not have as useful RAM dumps, and at worst you will get warnings from the debugger.
Anyhow, that's what I think the original commenter meant when he told you it was bad for "use in various debugging scenarios."
There's no reason why you shouldn't assign 0xdeadbeef to a variable.
But woe betide the programmer who tries to assign decimal 3735928559, or octal 33653337357, or worst of all: binary 11011110101011011011111011101111.
Big Endian or Little Endian?
One danger is when constants are assigned to an array or structure with different sized members; the endian-ness of the compiler or machine (including JVM vs CLR) will affect the ordering of the bytes.
This issue is true of non-constant values, too, of course.
Here's an admittedly contrived example. What is the value of buffer[0] after the last line?
const int TEST[] = { 0x01BADA55, 0xDEADBEEF };
char buffer[sizeof(TEST)];
memcpy( buffer, (void*)TEST, sizeof(TEST) );
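For comparison, a small C# sketch that makes the machine's byte order visible directly (BitConverter is a standard .NET type):
    using System;

    class EndiannessDemo
    {
        static void Main()
        {
            byte[] bytes = BitConverter.GetBytes(0xDEADBEEFu);
            // On a little-endian machine this prints EF-BE-AD-DE;
            // on a big-endian machine it would print DE-AD-BE-EF.
            Console.WriteLine(BitConverter.ToString(bytes));
            Console.WriteLine(BitConverter.IsLittleEndian);
        }
    }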
I don't see any problem with using it as a value. It's just a number, after all.
There's no danger in using a hard-coded hex value for a pointer (like your first example) in the right context. In particular, when doing very low-level hardware development, this is the way you access memory-mapped registers. (Though it's best to give them names with a #define, for example.) But at the application level you shouldn't ever need to do an assignment like that.
I use CAFEBABE
I haven't seen it used by any debuggers before.
int *i = 0xDEADBEEF;
// god knows if that location is available
int i = 0xDEADBEEF;
// what can go wrong?
The danger that I see is the same in both cases: you've created a flag value that has no immediate context. There's nothing about i in either case that will let me know, 100, 1,000 or 10,000 lines later, that there is a potentially critical flag value associated with it. What you've planted is a landmine bug that, if I don't remember to check for it in every possible use, could leave me facing a terrible debugging problem. Every use of i will now have to look like this:
if (i != 0xDEADBEEF) { // Curse the original designer to oblivion
    // Actual useful work goes here
}
Repeat the above for all of the 7000 instances where you need to use i in your code.
Now, why is the above worse than this?
if (isIProperlyInitialized()) { // Which could just be a boolean
    // Actual useful work goes here
}
At a minimum, I can spot several critical issues:
Spelling: I'm a terrible typist. How easily will you spot 0xDAEDBEEF in a code review? Or 0xDEADBEFF? On the other hand, I know that my compiler will barf immediately on isIProperlyInitialised() (insert the obligatory s vs. z debate here).
Exposure of meaning. Rather than trying to hide your flags in the code, you've intentionally created a method that the rest of the code can see.
Opportunities for coupling. It's entirely possible that a pointer or reference is connected to a loosely defined cache. An initialization check could be overloaded to check first if the value is in cache, then to try to bring it back into cache and, if all that fails, return false.
In short, it's just as easy to write the code you really need as it is to create a mysterious magic value. The code-maintainer of the future (who quite likely will be you) will thank you.

Single most effective practice to prevent arithmetic overflow and underflow

What is the single most effective practice to prevent arithmetic overflow and underflow?
Some examples that come to mind are:
testing based on valid input ranges
validation using formal methods
use of invariants
detection at runtime using language features or libraries (this does not prevent it)
One possibility is to use a language that has arbitrarily sized integers that never overflow / underflow.
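For example, C# is not such a language out of the box, but its arbitrary-precision System.Numerics.BigInteger type shows the idea (a minimal sketch of the question's sumList; the class name is made up):
    using System.Collections.Generic;
    using System.Numerics;   // requires a reference to System.Numerics

    static class BigSum
    {
        // Same shape as the question's sumList, but the total can never overflow.
        public static BigInteger SumList(List<int> list)
        {
            BigInteger sum = 0;
            foreach (int listItem in list)
            {
                sum += listItem;
            }
            return sum;   // 4000000000 for the question's input, not -294967296
        }
    }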
Otherwise, if this is something you're really concerned about, and if your language allows it, write a wrapper class that acts like an integer, but checks every operation for overflow. You could even have it do the check on debug builds, and leave things optimized for release builds. In a language like C++, you could do this, and it would behave almost exactly like an integer for release builds, but for debug builds you'd get full run-time checking.
#include <climits>

// (OverflowException is assumed to be defined elsewhere.)
class CheckedInt
{
private:
    int Value;

public:
    // Constructor
    CheckedInt(int src) : Value(src) {}

    // Conversions back to int
    operator int&() { return Value; }
    operator const int&() const { return Value; }

    // Operators
    CheckedInt operator+(CheckedInt rhs) const
    {
        // Check before adding: signed overflow itself is undefined behaviour in C++.
        if (rhs.Value > 0 && Value > INT_MAX - rhs.Value)
            throw OverflowException();
        if (rhs.Value < 0 && Value < INT_MIN - rhs.Value)
            throw OverflowException();
        return CheckedInt(Value + rhs.Value);
    }

    // Lots more operators...
};
Edit:
Turns out someone is doing this already for C++ - the current implementation is focused on Visual Studio, but it looks like they're getting support for gcc as well.
I write a lot of test code to do range/validity checking on my code. This tends to catch most of these types of situations - and definitely helps me write more bulletproof code.
Use high precision floating point numbers like a long double.
I think you are missing one very important option in your list: choose the right programming language for the job. There are many programming languages which do not have these problems, because they don't have fixed size integers.
There are more important considerations when choosing which language you use than the size of the integer. Simply check your input if you don't know if the value is in bounds, or use exception handling if the case is extremely rare.
A wrapper that checks for inconsistencies will make sense in many cases. If adding two unsigned integers yields a sum smaller than either operand, then you know the addition wrapped, so every such addition can be followed by
if (sum < operand1 || sum < operand2)
    omg_error();
(That test is reliable for unsigned addition; unsigned multiplication needs a different check, such as verifying that the product divided by one operand gives back the other, and for signed types in C and C++ the overflow itself is already undefined behaviour, so any check has to happen before the operation.) Likewise, any operation that should logically result in a smaller value should be checked to see if it was accidentally embiggin'd.
Have you investigated the use of formal methods to check your code to prove that it is free of overflows? A formal-methods technique known as abstract interpretation can check the robustness of your software to prove that it will not suffer from an overflow, underflow, division by zero, or other similar run-time error. It is a mathematical technique that exhaustively analyzes your software. The technique was pioneered by Patrick Cousot in the 1970s. It was successfully used to diagnose the overflow condition in the Ariane 5 rocket failure, where an overflow caused the destruction of the launch vehicle; the overflow occurred while converting a floating point number to an integer. You can find more information about this technique on Wikipedia.

When to use unsigned values over signed ones?

When is it appropriate to use an unsigned variable over a signed one? What about in a for loop?
I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus.
for (unsigned int i = 0; i < someThing.length(); i++) {
    SomeThing var = someThing.at(i);
    // You get the idea.
}
I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part.
I was glad to find a good conversation on this subject, as I hadn't really given it much thought before.
In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case).
unsigned starts to make more sense when:
You're going to do bitwise things like masks, or
You're desperate to take advantage of the sign bit for that extra positive range.
Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against).
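One classic wrap-around slip that motivates that preference, sketched in C# (the variable and method names are made up):
    static void UnsignedWrapDemo()
    {
        uint itemCount = 0;
        // Unsigned arithmetic never goes negative: it wraps around instead.
        uint lastIndex = itemCount - 1;           // 4294967295, not -1
        System.Console.WriteLine(lastIndex);

        int signedCount = 0;
        // The same slip with a signed counter at least yields -1, which is
        // easy to test for and easy to spot in a debugger.
        int signedLast = signedCount - 1;
        System.Console.WriteLine(signedLast);
    }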
In your example above, when 'i' will always be positive and a higher range would be beneficial, unsigned would be useful. Like if you're using 'declare' statements (i.e. #define), such as:
#define BIT1 ((unsigned int) 1)
#define BIT32 ((unsigned int) reallybignumber)
Especially when these values will never change.
However, if you're doing an accounting program where the people are irresponsible with their money and are constantly in the red, you will most definitely want to use 'signed'.
I do agree with saint though that a good rule of thumb is to use signed, which C actually defaults to, so you're covered.
I would think that if your business case dictates that a negative number is invalid, you would want to have an error shown or thrown.
With that in mind, I only just recently found out about unsigned integers while working on a project processing data in a binary file and storing the data into a database. I was purposely "corrupting" the binary data, and ended up getting negative values instead of an expected error. I found that even though the value converted, the value was not valid for my business case.
My program did not error, and I ended up getting wrong data into the database. It would have been better if I had used uint and had the program fail.
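A hedged C# sketch of how that failure could have been made loud: a checked conversion of a negative value to uint throws instead of silently reinterpreting the bits (the raw value and method name are made up):
    static void LoadCorruptValue()
    {
        // Suppose the corrupted binary data produced this raw signed value:
        int rawValue = -42;

        // Silent: reinterpret the bits and carry on with a huge positive number.
        uint silent = unchecked((uint)rawValue);  // 4294967254
        System.Console.WriteLine(silent);

        // Loud: converting a negative value in a checked context throws
        // System.OverflowException, so the bad data is caught before it
        // ever reaches the database.
        uint loud = checked((uint)rawValue);
        System.Console.WriteLine(loud);           // never reached
    }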
C and C++ compilers will generate a warning when you compare signed and unsigned types; in your example code, you couldn't make your loop variable unsigned and have the compiler generate code without warnings (assuming said warnings were turned on).
Naturally, you're compiling with warnings turned all the way up, right?
And, have you considered compiling with "treat warnings as errors" to take it that one step further?
The downside of using signed numbers is that there's a temptation to overload them so that, for example, the values 0..n are the menu selection and -1 means nothing's selected - rather than creating a class that has two variables, one to indicate whether something is selected and another to store what that selection is. Before you know it, you're testing for negative one all over the place, and the compiler is complaining when you compare the menu selection against the number of menu selections you have - which is dangerous because they're different types. So don't do that.
size_t is often a good choice for this, or size_type if you're using an STL class.