I'm currently improving the part of our COM component that logs all external calls into a file. For pointers we write something like (IInterface*)0x12345678 with the value being equal to the actual address.
Currently no difference is made for null pointers - they are displayed as 0x0 which IMO is suboptimal and inelegant. Changing this behaviour is not a problem at all. But first I'd like to know - is there any real advantage in representing null pointers in hex?
In C or C++, you should be able to use the standard %p formatting code, which will then make your pointers look like everybody else's.
I'm not sure how null pointers are formatted by %p on Win32; on Linux (glibc) you get "(nil)" or something similar.
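For example (a minimal C++ sketch; what %p actually prints is implementation-defined, so the comments only show typical output):

#include <cstdio>

int main() {
    int x = 42;
    int *p = &x;
    int *q = nullptr;
    std::printf("p = %p\n", static_cast<void *>(p)); // e.g. 0x7ffee2a1c8c4
    std::printf("q = %p\n", static_cast<void *>(q)); // e.g. (nil) with glibc, 0000000000000000 with MSVC
    return 0;
}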
Using the notation 0x0 (IMO) makes it clearer that it's referring to an address (even if it's not the internal representation of the null pointer). (In actual code, I would prefer using the NULL macro, though, but it sounds like you're talking specifically about debugging spew.)
It gives some context, just like I prefer using '\0' for the NUL-terminator.
It's a stylistic preference, though, so do what appeals to you (and to your colleagues).
Personally, I'd print 0x0 to the log file[*]. Some day when someone comes to parse the file automatically, the more uniform the data is the better. I don't find 0x0 difficult to read, so it seems silly to have a special case in the writer code, and another special case in the reader code, for no benefit that I can think of.
0x0 is preferable to 0 for grepping the log for NULLs, too: saves you having to figure out that you should be grepping for )0 or something funny.
I wouldn't write 0x0 for a null pointer constant in C or C++, though. I write non-null addresses so unbelievably rarely that there's nothing for the nulls to be uniform with. I guess if I was defining a bunch of constants to represent the memory map of some device, and the zero address was significant in that memory map, then I might write it 0x0 in that context.
[*] Or perhaps 0x00000000. I like 32-bit pointers to be printed 8 chars long, because when I read/remember a pointer I start out in pairs from the left. If it turns out to have 7 chars, I get horribly confused at the end ;-). 64-bit pointers it doesn't matter, because I can't remember a number that long anyway...
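If you want that fixed-width style, one sketch (not tied to the asker's logging code, just standard printf formatting) is to print the pointer as an integer with a width derived from the pointer size:

#include <cinttypes>
#include <cstdio>

// Prints a pointer as 0x followed by 2*sizeof(void*) hex digits: 8 on 32-bit builds, 16 on 64-bit.
void log_ptr(const void *p) {
    std::printf("0x%0*" PRIXPTR "\n",
                static_cast<int>(2 * sizeof(void *)),
                reinterpret_cast<std::uintptr_t>(p));
}

With this, a null pointer on a typical 32-bit build comes out as 0x00000000, which also keeps the log uniform for the parsing/grepping reasons above.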
It's all positive zero in the end.
There is: You can always convert them back to a number (0), with no additional effort. And the only disadvantage is readability.
There is no reason to prefer (SomeType*)0x0 to (SomeType*)0.
As an aside: In C, the null pointer constant is a somewhat strange construct; the compiler recognizes (SomeType*)0 as "the null pointer", even if the internal representation on some machine might differ from the numerical value 0. It is more like NULL in SQL -- not a "real" pointer value. In practice, all machines I know of model the null pointer as the "0" address.
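A tiny illustration of that point in C/C++ terms (SomeType is just a stand-in type):

#include <cstdio>

struct SomeType { int x; };

int main() {
    SomeType *p = 0;            // 0 here is the null pointer constant, same as NULL (or nullptr in C++11)
    if (p == 0) {               // compares against the null pointer of that type,
        std::printf("null\n");  // which holds even on a machine where null isn't the all-zero bit pattern
    }
    return 0;
}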
I am pretty sure the hex notation is a result of the layout of memory. Memory is word aligned, where a word is 32 bits if you are on a 32-bit processor. These words are segmented into pages, which are arranged in page tables, etc. etc. Hex notation is the only way to make sense of this arrangement (unless you really like using your calculator).
My opinion is that it's for readability. Think about it: if you were to look at a bare 0, what does that mean - an unsigned integer? If it's written 0x0, you instinctively read it as something low-level and platform-dependent.
Since the tag is language-agnostic: the 'null pointer' is spelled 'nil' in Delphi/Object Pascal, 'null' in C#, and 'NULL' in C/C++.
Look, for example, at the C FAQ, Section 5 on null pointers - specifically 5.4, 5.5, 5.6 and 5.7 - to get some insight into this.
In a nutshell, the usage and notation of null pointers is dependent on
What language is used?
Semantics and syntax of the language specifications.
What type of compiler?
Type of platform, in terms of how memory is accessed, the processor, bits...
Hope this helps,
Best regards,
Tom.
Related
Is there a way to ensure that:
if a==b then devfun(a)==devfun(b);
where devfun() is a device function that involves some floating point math ops (e.g. polynomials) and returns floating point results, and a and b are floating point variables.
I don't care about cross-implementation consistency (e.g. different compilers/OSes/driver versions or different compiler options). I only care whether, within the same build/program, at runtime, it can be ensured that the results returned by devfun() are consistent on each call, in the sense that as long as a==b, devfun(a)==devfun(b).
I am talking about SM2.0+ hardware and CUDA 5.0+, just in case being relevant.
Let's assume that your numbers a and b represent properly normalized IEEE-754 floating point numbers and that neither a nor b is a NaN value. Let's also assume a and b are both 32-bit, or else a and b are both 64-bit (IEEE-754 floating point representations).
In that case, I believe the (ISO C/C++, or CUDA C/C++) floating point test for equality (==) will return TRUE when the two numbers a and b are bitwise identical (and FALSE otherwise).
Under the TRUE case, with one exception, I believe it is safe to assume that devfun(a) == devfun(b) without any additional conditions except the obvious ones: there is no difference in the behavior of devfun on either side of the == operation, that is, it's the same code, compiled in the same way, executed under the same conditions (e.g. other variables that may be taking part in devfun, same GPU type, etc.), just as you've indicated in your question: "same building/program".
The one exception is if the result of devfun(a) is NaN, since (IEEE-754) NaN != NaN.
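To make the NaN exception concrete, here is a plain host-side sketch (devfun here is a hypothetical stand-in, not an actual CUDA device function):

#include <cassert>
#include <cmath>

// Hypothetical stand-in for devfun(): sqrt of a negative number yields NaN.
double devfun(double x) { return std::sqrt(x); }

int main() {
    double a = -1.0, b = -1.0;
    assert(a == b);                        // identical bit patterns, so == is true
    double ra = devfun(a), rb = devfun(b);
    bool equal = (ra == rb);               // false: both results are NaN, and NaN != NaN
    return equal ? 0 : 1;
}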
It would be interesting (to me) if you think you have a piece of code that disproves this assertion.
Perhaps floating point ninjas will come along and correct me.
Perhaps also I would be remiss if I did not say something about the hazards of floating point comparisons. If you're not familiar with this (most folks would never recommend performing a test a==b on two floating point numbers) you can find many questions about it on SO.
For the same reasons that floating point equality comparison (==) in general is unwise, I think relying on the above assertion, even if it's true, is unwise. Let me give you one example.
Suppose you compile code for architecture sm_20. Now you run the code on an sm_21 device. This one simple variation could result in a JIT-compile at runtime. Now you are no longer running the same code, and all bets are off.
So, again, even if the above is true, I think it's unwise for you to rely on such a statement:
if a==b, then devfun(a) == devfun(b)
The following statements represent my understanding of type systems (which suffers from too little hands-on experience outside the Java world); please correct any errors.
The static/dynamic distinction seems pretty clear-cut:
Statically typed languages assign each variable, field and parameter a type, and the compiler prevents assignments between incompatible types. Examples: C, Java, Pascal.
Dynamically typed languages treat variables as generic bins that can hold anything you want - types are checked (if at all) only at runtime when you actually perform operations on the values, not when you assign them. Examples: Smalltalk, Python, JavaScript.
Type inference allows statically typed languages to look like (and have some of the advantages of) dynamically typed ones, by inferring types from the context so that you don't have to declare them most of the time - but unlike in dynamic languages, you cannot e.g. use a variable to hold a string initially and then assign an integer to it. Examples: Haskell, Scala.
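In C++ terms (using auto as the inference mechanism; the Haskell/Scala story is analogous), "inferred but still static" looks like this:

#include <string>

int main() {
    auto n = 42;                 // type inferred as int at compile time
    n = 7;                       // fine: still an int
    auto s = std::string("hi");  // inferred as std::string
    // n = s;                    // would not compile: n's static type is int, and std::string doesn't convert to it
    return 0;
}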
I am much less certain about the strong/weak distinction, and I suspect that it's not very clearly defined:
Strongly typed languages assign each runtime value a type and only allow operations to be performed that are defined for that type, otherwise there is an explicit type error.
Weakly typed languages don't have runtime type checks - if you try to perform an operation on a value that it does not support, the results are unpredictable. It may actually do something useful, but more likely you'll get corrupted data, a crash, or some undecipherable secondary error.
There seem to be at least two different kinds of weakly typed languages (or perhaps a continuum):
In C and assembler, values are basically buckets of bits, so anything is possible and if you get the compiler to dereference the first 4 bytes of a null-terminated string, you better hope it leads somewhere that does not contain legal machine code.
PHP and JavaScript are also generally considered weakly typed, but do not consider values to be opaque bit buckets; they will, however, perform implicit type conversions.
But these implicit conversions seem to apply mainly to string/integer/float variables - does that really warrant the classification as weakly typed? Or are there other issues where these languages' type systems may obfuscate errors?
I am much less certain about the strong/weak distinction, and I suspect that it's not very clearly defined.
You are right: it isn't.
This is what Benjamin C. Pierce, author of Types and Programming Languages and Advanced Topics in Types and Programming Languages, has to say:
I spent a few weeks... trying to sort out the terminology of "strongly typed," "statically typed," "safe," etc., and found it amazingly difficult.... The usage of these terms is so various as to render them almost useless.
Luca Cardelli, in his Typeful Programming article, defines it as the absence of unchecked run-time type errors. Tony Hoare calls that exact same property "security". Other papers call it "type safety" or simply "safety".
Mark-Jason Dominus wrote a classic rant about this a couple of years ago on the comp.lang.perl.moderated newsgroup, in a discussion about whether or not Perl was strongly typed. In this rant he states that within just a few hours of research, he was able to find 8 different, sometimes contradictory definitions, mostly from respected sources like college textbooks or peer-reviewed papers. In particular, those texts contained examples that were meant to help the students distinguish between strongly and weakly typed languages, and according to those examples, C is strongly typed, C is weakly typed, C++ is strongly typed, C++ is weakly typed, Lisp is strongly typed, Lisp is weakly typed, Perl is strongly typed, Perl is weakly typed. (Does that clear up any confusion?)
The only definition that I have seen consistently applied is:
strongly typed: my programming language
weakly typed: your programming language
Regarding static and dynamic typing you are dead on the money. Static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors".
Bob Harper has argued that a dynamically typed language can (and should) be considered to be a statically typed language with a single type, which Bob calls "value". This view is fair, but it's helpful only in limited contexts, such as trying to be precise about the type theory of languages.
Although I think you grasp the concept, your bullets do not make it clear that type inference is simply a special case of static typing. In most languages with type inference, type annotations are optional, but not necessarily in all contexts. (Example: signatures in ML.) Advanced static type systems often give you a tradeoff between annotations and inference; for example, in Haskell you can type polymorphic functions of higher rank (forall to the left of an arrow) but only with an annotation. So, if you are willing to add an annotation, you can get the compiler to accept a program that would be rejected without the annotation. I think this is the wave of the future in type inference.
The ideas of "strong" and "weak" typing I would characterize as not useful, because they don't have a universally agreed on technical meaning. Strong typing generally means that there are no loopholes in the type system, whereas weak typing means the type system can be subverted (invalidating any guarantees). The terms are often used incorrectly to mean static and dynamic typing. To see the difference, think of C: the language is type-checked at compile time (static typing), but there are plenty of loopholes; you can pretty much cast a value of any type to another type of the same sizeāin particular, you can cast pointer types freely. Pascal was a language that was intended to be strongly typed but famously had an unforeseen loophole: a variant record with no tag.
Implementations of strongly typed languages often acquire loopholes over time, usually so that part of the run-time system can be implemented in the high-level language. For example, Objective Caml has a function called Obj.magic which has the run-time effect of simply returning its argument, but at compile time it converts a value of any type to one of any other type. My favorite example is Modula-3, whose designers called their type-casting construct LOOPHOLE.
I encourage you to avoid the terms "strong" and "weak" with regard to type systems, and instead say precisely what you mean, e.g., "the type system guarantees that the following class of errors cannot occur at run time" (strong), "the static type system does not protect against certain run-time errors" (weak), or "the type system has a loophole" (weak). Just calling a type system "strong" or "weak" by itself does not communicate very much.
This is a pretty accurate reflection of my own understanding of the topic of the static/dynamic, strong/weak typing discussion. In addition, you can consider those other languages:
In languages such as Tcl and the Bourne shell, the "main" value type is the string. Numeric operators are available that implicitly coerce their input values from string representation and coerce their results back to strings. They can be considered examples of dynamic, weakly typed languages.
Forth may be an example of a static, weakly typed language. The language performs no type checking of its own, and the main stack may interchangeably contain pointers, integers, strings (conventionally represented as two cells, start and length). Inconsistent use of operators can lead to either interesting, or unspecified behavior. Typical Forth implementations provide a separate stack for floating point numbers.
Maybe this Book can help. Be prepared for some math though. If I remember correctly, a "non-math" statement was: "Strongly typed: A language that I feel safe to program with".
There seem to be at least two different kinds of weakly typed languages (or perhaps a continuum):
In C and assembler, values are basically buckets of bits, so anything is possible and if you get the compiler to dereference the first 4 bytes of a null-terminated string, you better hope it leads somewhere that does not contain legal machine code.
I would disagree with this statement, at least in C. You can manipulate the type system in C in such a way that you can treat any given memory location as a bucket of bits, but a variable most definitely has a type and that type has specific properties. The fact that there are no runtime checks (unless you consider floating point exceptions or segmentation faults to be runtime checks) isn't really relevant. C can be considered "weakly typed" in the sense that the compiler will perform some implicit type conversion for you, but it doesn't go very far with it.
I consider strong/weak to be the concept of implicit conversion, and a good example is addition of a string and a number. In a strongly typed language the conversion won't happen (at least in all languages I can think of) and you'll get an error. Weakly typed languages like VB (with Option Strict Off) and JavaScript will try to cast one of the operands to the other type.
In VB.Net with Option Strict Off:
Dim A As String = "5"
Dim B As Integer = 5
Trace.WriteLine(A + B) 'returns 10
With Option Strict On (turning VB into a strongly typed language) you'll get a compiler error.
In Javascript:
var A = '5';
var B = 5;
alert(A + B);//returns 55
Some people will say that the results are not predictable but they actually do follow a set of rules.
Hmm, I don't know much more either, but I wanted to mention C++ and its implicit conversions (implicit constructors). This might as well be an example of weak typing.
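A quick sketch of what that looks like (hypothetical class, just to show the mechanism):

#include <string>

struct Name {
    std::string value;
    Name(const char *s) : value(s) {}   // not marked 'explicit', so it doubles as an implicit conversion
};

void greet(const Name &) { /* ... */ }

int main() {
    greet("Alice");   // the compiler silently builds a temporary Name from the string literal
    // Declaring the constructor 'explicit' would turn this call into a compile error.
    return 0;
}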
I agree with the others who say "there doesn't seem to be a hard and fast definition here." My answer tends to be based on how much rope the language gives you WRT types. If you can pretty much fake anything you want, then it's weak. If it really doesn't let you get yourself into trouble, even if you want to, it's strong.
I really haven't seen too many languages that skirt this border, so I can't say that I've ever needed a better definition than that...
I was looking at this question. Basically, having a leading zero causes the number to be interpreted as octal. I've run into this problem numerous times in multiple languages.
Why doesn't the language explicitly require you to specify octal with a function call or a type (in strongly typed languages) like:
oct variable = 2;
I can understand why hexadecimal (0x0234) has this format. Hex is pretty useful. An integer from the database will never have an x in it.
But octal numbers 0123 look like ints and are a pain to deal with. I've never used octal for anything.
Can anyone explain the rationale behind this usage? Is it just a bit of historical cruft?
It's largely historic. The best solution I've seen is in the new version of Python, where octal is indicated with a special prefix character "o", much like hexadecimal's "x" prefix:
0o10 == 0x8 == 8
99.9% of the reason it exists is to support chmod() calls, e.g. chmod(path, 0755).
It does rather seem like a format more like hex's would be superior.
It exists since working with 3-bit segments is almost as useful as working with 4-bit segments. This was more true in the past (e.g., seven-segment LEDs, chmod, etc.).
The real question is why haven't more languages adopted octal and binary notations in a more regular fashion:
10 == 0b1010 == 0o12 == 0x0A
I know that Python finally adopted the 0o notation, and it has a 0b binary notation as well. I guess a better question is: why does this still trip people up?
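For comparison, here is that line in C++ terms (binary literals need C++14; C and C++ have no 0o prefix, only the bare leading zero for octal):

#include <cassert>

int main() {
    assert(10 == 012);      // leading zero: octal
    assert(10 == 0xA);      // 0x: hexadecimal
    assert(10 == 0b1010);   // 0b: binary (since C++14)
    return 0;
}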
I hate this too; I don't know why it's been carried forward into so many modern languages. I once knew someone who had a zip code like "09827" when he lived in NYC. Sometimes he had to input his zip code as "9827", because the leading zero would lead to error messages (since 8 and 9 are not valid digits in octal numbers).
Yes, it's historical. C uses this way to specify literals in octal, and possibly it was used somewhere before that.
I've experienced it in JavaScript, where date parsing stops working in August. Up to July it works, as '07' parsed as octal is still seven, but '08' is not a valid octal number... (The solution is to specify the number base in the parseInt call.)
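The same trap exists in C and C++ if you let the base be auto-detected; a small sketch:

#include <cstdio>
#include <cstdlib>

int main() {
    // With base 0, strtol auto-detects the prefix: "08" reads as octal "0" and stops at the '8'.
    long aug_auto = std::strtol("08", nullptr, 0);   // 0
    long jul_auto = std::strtol("07", nullptr, 0);   // 7
    // Passing an explicit base 10 is the equivalent of the parseInt fix:
    long aug_dec  = std::strtol("08", nullptr, 10);  // 8
    std::printf("%ld %ld %ld\n", aug_auto, jul_auto, aug_dec);
    return 0;
}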
In C# there are no binary or octal literals; perhaps the reasoning is that you shouldn't be doing so much bit fiddling that the language needs them...
Personally, I blame the programmer in this case. Why are you formatting an integer by zero padding? Zero padding is for strings, not numeric types.
It goes without saying that using hard-coded, hex literal pointers is a disaster:
int *i = 0xDEADBEEF;
// god knows if that location is available
However, what exactly is the danger in using hex literals as variable values?
int i = 0xDEADBEEF;
// what can go wrong?
If these values are indeed "dangerous" due to their use in various debugging scenarios, then this means that even if I do not use these literals, any program that during runtime happens to stumble upon one of these values might crash.
Anyone care to explain the real dangers of using hex literals?
Edit: just to clarify, I am not referring to the general use of constants in source code. I am specifically talking about debug-scenario issues that might come up due to the use of hex values, with the specific example of 0xDEADBEEF.
There's no more danger in using a hex literal than any other kind of literal.
If your debugging session ends up executing data as code without you intending it to, you're in a world of pain anyway.
Of course, there's the normal "magic value" vs "well-named constant" code smell/cleanliness issue, but that's not really the sort of danger I think you're talking about.
With few exceptions, nothing is "constant".
We prefer to call them "slow variables" -- their value changes so slowly that we don't mind recompiling to change them.
However, we don't want to have many instances of 0x07 all through an application or a test script, where each instance has a different meaning.
We want to put a label on each constant that makes it totally unambiguous what it means.
if( x == 7 )
What does "7" mean in the above statement? Is it the same thing as
d = y / 7;
Is that the same meaning of "7"?
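The usual cure is one well-named constant per meaning (the names here are hypothetical, just to show the idea):

const int RETRY_LIMIT   = 7;   // the "7" in the comparison
const int DAYS_PER_WEEK = 7;   // the "7" in the division

bool hit_limit(int x)     { return x == RETRY_LIMIT; }
double per_week(double y) { return y / DAYS_PER_WEEK; }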
Test Cases are a slightly different problem. We don't need extensive, careful management of each instance of a numeric literal. Instead, we need documentation.
We can -- to an extent -- explain where "7" comes from by including a tiny bit of a hint in the code.
assertEquals( 7, someFunction(3,4), "Expected 7, see paragraph 7 of use case 7" );
A "constant" should be stated -- and named -- exactly once.
A "result" in a unit test isn't the same thing as a constant, and requires a little care in explaining where it came from.
A hex literal is no different than a decimal literal like 1. Any special significance of a value is due to the context of a particular program.
I believe the concern raised in the IP address formatting question earlier today was not related to the use of hex literals in general, but the specific use of 0xDEADBEEF. At least, that's the way I read it.
There is a concern with using 0xDEADBEEF in particular, though in my opinion it is a small one. The problem is that many debuggers and runtime systems have already co-opted this particular value as a marker value to indicate unallocated heap, bad pointers on the stack, etc.
I don't recall off the top of my head just which debugging and runtime systems use this particular value, but I have seen it used this way several times over the years. If you are debugging in one of these environments, the existence of the 0xDEADBEEF constant in your code will be indistinguishable from the values in unallocated RAM or whatever, so at best you will not have as useful RAM dumps, and at worst you will get warnings from the debugger.
Anyhow, that's what I think the original commenter meant when he told you it was bad for "use in various debugging scenarios."
There's no reason why you shouldn't assign 0xdeadbeef to a variable.
But woe betide the programmer who tries to assign decimal 3735928559, or octal 33653337357, or worst of all: binary 11011110101011011011111011101111.
Big Endian or Little Endian?
One danger is when constants are assigned to an array or structure with different sized members; the endian-ness of the compiler or machine (including JVM vs CLR) will affect the ordering of the bytes.
This issue is true of non-constant values, too, of course.
Here's an admittedly contrived example. What is the value of buffer[0] after the last line?
const unsigned int TEST[] = { 0x01BADA55, 0xDEADBEEF };
char buffer[sizeof(TEST)];
memcpy( buffer, TEST, sizeof(TEST) );
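For what it's worth, here is a self-contained version you can run to check; the comments give the usual answers:

#include <cstdio>
#include <cstring>

int main() {
    const unsigned int TEST[] = { 0x01BADA55, 0xDEADBEEF };
    char buffer[sizeof(TEST)];
    std::memcpy(buffer, TEST, sizeof(TEST));
    // Little-endian machines (x86, most ARM setups) store the low byte first, so buffer[0] is 0x55;
    // on a big-endian machine it would be 0x01.
    std::printf("buffer[0] = 0x%02X\n",
                static_cast<unsigned>(static_cast<unsigned char>(buffer[0])));
    return 0;
}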
I don't see any problem with using it as a value. It's just a number, after all.
There's no danger in using a hard-coded hex value for a pointer (like your first example) in the right context. In particular, when doing very low-level hardware development, this is the way you access memory-mapped registers. (Though it's best to give them names with a #define, for example.) But at the application level you shouldn't ever need to do an assignment like that.
I use CAFEBABE
I haven't seen it used by any debuggers before.
int *i = 0xDEADBEEF;
// god knows if that location is available
int i = 0xDEADBEEF;
// what can go wrong?
The danger that I see is the same in both cases: you've created a flag value that has no immediate context. There's nothing about i in either case that will let me know, 100, 1000 or 10000 lines later, that there is a potentially critical flag value associated with it. What you've planted is a landmine bug that, if I don't remember to check for it in every possible use, could leave me facing a terrible debugging problem. Every use of i will now have to look like this:
if (i != 0xDEADBEEF) { // Curse the original designer to oblivion
// Actual useful work goes here
}
Repeat the above for all of the 7000 instances where you need to use i in your code.
Now, why is the above worse than this?
if (isIProperlyInitialized()) { // Which could just be a boolean
// Actual useful work goes here
}
At a minimum, I can spot several critical issues:
Spelling: I'm a terrible typist. How easily will you spot 0xDAEDBEEF in a code review? Or 0xDEADBEFF? On the other hand, I know that my compiler will barf immediately on isIProperlyInitialised() (insert the obligatory s vs. z debate here).
Exposure of meaning. Rather than trying to hide your flags in the code, you've intentionally created a method that the rest of the code can see.
Opportunities for coupling. It's entirely possible that a pointer or reference is connected to a loosely defined cache. An initialization check could be overloaded to check first if the value is in cache, then to try to bring it back into cache and, if all that fails, return false.
In short, it's just as easy to write the code you really need as it is to create a mysterious magic value. The code-maintainer of the future (who quite likely will be you) will thank you.
When is it appropriate to use an unsigned variable over a signed one? What about in a for loop?
I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus.
for (unsigned int i = 0; i < someThing.length(); i++) {
SomeThing var = someThing.at(i);
// You get the idea.
}
I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part.
I was glad to find a good conversation on this subject, as I hadn't really given it much thought before.
In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case).
unsigned starts to make more sense when:
You're going to do bitwise things like masks, or
You're desperate to take advantage of the sign bit for that extra positive range.
Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against).
In your example above, when 'i' will always be positive and a higher range would be beneficial, unsigned would be useful. Likewise if you're defining bit constants with #define, such as:
#define BIT1 ((unsigned int)1)
#define BIT32 ((unsigned int)reallybignumber)
Especially when these values will never change.
However, if you're doing an accounting program where the people are irresponsible with their money and are constantly in the red, you will most definitely want to use 'signed'.
I do agree with saint though that a good rule of thumb is to use signed, which C actually defaults to, so you're covered.
I would think that if your business case dictates that a negative number is invalid, you would want to have an error shown or thrown.
With that in mind, I only just recently found out about unsigned integers while working on a project processing data in a binary file and storing the data into a database. I was purposely "corrupting" the binary data, and ended up getting negative values instead of an expected error. I found that even though the value converted, the value was not valid for my business case.
My program did not error, and I ended up getting wrong data into the database. It would have been better if I had used uint and had the program fail.
C and C++ compilers will generate a warning when you compare signed and unsigned types; in your example code, you couldn't make your loop variable unsigned and have the compiler generate code without warnings (assuming said warnings were turned on).
Naturally, you're compiling with warnings turned all the way up, right?
And, have you considered compiling with "treat warnings as errors" to take it that one step further?
The downside with using signed numbers is that there's a temptation to overload them so that, for example, the values 0->n are the menu selection and -1 means nothing is selected - rather than creating a class that has two variables, one to indicate whether something is selected and another to store what that selection is. Before you know it, you're testing for negative one all over the place, and the compiler complains when you compare the menu selection against the number of menu selections you have - which is dangerous because they're different types. So don't do that.
size_t is often a good choice for this, or size_type if you're using an STL class.
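A minimal sketch of that suggestion, using std::vector as a stand-in for the asker's someThing (vector's size_type is size_t):

#include <cstddef>
#include <vector>

int main() {
    std::vector<int> someThing = {1, 2, 3};
    for (std::size_t i = 0; i < someThing.size(); ++i) {
        int var = someThing.at(i);   // index and size() are the same unsigned type, so no signed/unsigned warning
        (void)var;                   // placeholder for "you get the idea"
    }
    return 0;
}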