In the standard C library:
int fputc(int c, FILE *stream);
And this pattern occurs many times, e.g.:
int putc(int c, FILE *stream);
int putchar(int c);
Why not use char, since that is what is actually being written?
If using int is necessary, when should I use int instead of char?
Most likely (in my opinion, since much of the rationale behind early C is lost in the depths of time), it was simply to mirror the types used by the fgetc family of functions, which must be able to return any real character plus the special EOF marker. The fgetc function gets the next character converted to an int, and uses the special marker value EOF to indicate the end of the stream.
To do that, they needed the wider int type, since a char isn't quite large enough to hold all possible characters plus one more thing.
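To see why the width matters, here is a minimal sketch of the classic pitfall that the int return type avoids (my illustration, not part of the original answer):

#include <stdio.h>

/* Pitfall sketch: storing fgetc's result in a char breaks the EOF
   test. If char is unsigned, (char)EOF never equals EOF and the loop
   never ends; if char is signed, a legitimate 0xFF byte compares
   equal to EOF and the loop stops early. */
void copy_broken(FILE *in)
{
    char c;                         /* too narrow: cannot hold EOF distinctly */
    while ((c = fgetc(in)) != EOF)
        putchar(c);
}

void copy_correct(FILE *in)
{
    int c;                          /* wide enough for every character plus EOF */
    while ((c = fgetc(in)) != EOF)
        putchar(c);
}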
And, since the developers of C seemed to prefer a rather minimalist approach to code, it makes sense that they would use the same type, to allow for code such as:
filecopy(ifp, ofp)
FILE *ifp;
FILE *ofp;
{
    int c;

    while ((c = fgetc(ifp)) != EOF)
        fputc(c, ofp);
}
No char parameters in K&R C
One reason is that in early versions1 of C there were no char parameters.
Yes, you could declare a parameter as char or float, but it was treated as int or double. It would therefore have been somewhat misleading to document an interface as taking a char argument.
I believe this is still true today for functions declared without prototypes, in order for it to be possible to interoperate with older code.
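As a rough sketch of what that promotion means in practice (using an old-style definition; many compilers still accept these, though with warnings, and newer standards have removed them):

#include <stdio.h>

/* Old-style (K&R) definition: no prototype is in effect, so the
   argument undergoes the default promotions. The parameter is
   written as char, but the caller passes an int and the function
   converts it back on entry. */
int bump(c)
char c;
{
    return c + 1;
}

int main(void)
{
    printf("%d\n", bump('A'));   /* 'A' travels to bump() as an int */
    return 0;
}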
1. Early, but still widespread. C was a quick success and became the first (and still, mostly, the only) widely successful systems programming language.
Related
I've noticed this across a few languages, but I'll make my question specific to AS3. Why is an int class lowercase but a String or Number is not?
var myInt:int = 0;
var myString:String = "";
var myNum:Number = 0;
That's simply primitive values vs. Objects. You can do something like String.substring() because String is an Object, but you can't call anything on an int; it's just a number.
==== EDIT ====
According to the comment below, int in AS3 is a class, so you can call some methods on it. However, it is still a primitive type. The difference is explained here: http://www.adobe.com/devnet/actionscript/learning/as3-fundamentals/data-types.html
"Primitive values are usually faster than complex values because ActionScript 3 stores primitive values in a special way that makes memory and speed optimizations possible.
Note: For readers interested in the technical details, ActionScript 3 stores primitive values internally as immutable objects. The fact that they are stored as immutable objects means that passing by reference is effectively the same as passing by value. This cuts down on memory usage and increases execution speed, because references are usually significantly smaller than the values themselves."
This is most likely because they followed the ECMAScript 3 standard, where page 14 of the PDF gives the definition of primitive values of type Number, String, and Boolean.
Page 26 of the PDF lists the future reserved words, among which int appears. Naturally, if they wanted to support unsigned integers, the counterpart would be called uint.
Personally, I think it would make more sense to name them Int and Uint (or do as Java does).
Basically, I wonder if a language exists where this code will be invalid because even though counter and distance are both int under the hood, they represent incompatible types in the real world:
#include <stdio.h>

typedef int counter;
typedef int distance;

int main() {
    counter pies = 1;
    distance lengthOfBiscuit = 4;
    printf("total pies: %d\n", pies + lengthOfBiscuit);
    return 0;
}
That compiles with no warnings under "gcc -pedantic -Wall", and the equivalent is accepted in every other language where I've tried it. It seems like it would be a good idea to disallow accidentally adding a counter and a distance, so where is the language support?
(Incidentally, the real-life example that prompted this question was web dev work in PHP and Python -- I was trying to make "HTML-escaped string", "SQL-escaped string" and "raw dangerous user input" incompatible, but the best I can seem to get is Apps Hungarian notation as suggested here --> http://www.joelonsoftware.com/articles/Wrong.html <-- and that still relies on human checking ("wrong code looks wrong") rather than compiler support ("wrong code is wrong").)
Haskell can do this: with GeneralizedNewtypeDeriving you can treat wrapped values as the underlying type, whilst only exposing what you need:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype Counter = Counter Int deriving Num
newtype Distance = Distance Int deriving Num
main :: IO ()
main = print $ Counter 1 + Distance 2
Now you get the error:
Add.hs:6:28:
Couldn't match expected type ‘Counter’ with actual type ‘Distance’
In the second argument of ‘(+)’, namely ‘Distance 2’
In the second argument of ‘($)’, namely ‘Counter 1 + Distance 2’
You can still "force" the underlying data type with "coerce", or by unwrapping the Ints explicitly.
I should add that any language with "real" types should be able to do this.
In Ada you can have types that use the same representation but are still distinct types: what a "strong typedef" would be in C or C++, if such a thing existed.
In your case, you could do
type counter is new Integer;
type distance is new Integer;
to create two new types that behave like integers, but cannot be mixed.
Derived types and subtypes in Ada
You could create an object wrapping the underlying type in a member variable and define operations (even in the form of functions) that make sense on that type (e.g. Length would define "plus", allowing addition to another Length but not to an Angle).
A drawback of this approach is that you have to create a wrapper for each underlying type you care about and define the appropriate operations for each sensible combination, which might be tedious and possibly error-prone.
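Even plain C gets you part of the way there, since distinct struct types are never compatible; here's a minimal sketch of the wrapper idea (names are just for illustration):

#include <stdio.h>

/* Two single-member structs with the same representation but
   distinct types: the compiler rejects mixing them, at the cost of
   explicit .value access and hand-written operations. */
typedef struct { int value; } Counter;
typedef struct { int value; } Distance;

int main(void)
{
    Counter pies = { 1 };
    Distance lengthOfBiscuit = { 4 };

    /* pies + lengthOfBiscuit;    <-- compile-time error */
    printf("pies: %d, length: %d\n", pies.value, lengthOfBiscuit.value);
    return 0;
}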
In C++, you could check out the Boost support for dimensions and units. The example given is designed primarily for physical dimensions, but I think you could adapt it to many others as well.
I haven't written C in quite some time and am writing an app using the MySQL C API, compiling with g++ on Red Hat.
So I started outputting some fields with printfs... Using the Oracle API with PRO*C, which I used to use (on SUSE, years ago), I could select an int and output it as:
int some_int;
printf("%i",some_int);
I tried to do that with MySQL ints and got an 8-digit random number displayed... I thought this was a MySQL API issue or some config issue with my server, and I wasted a few hours trying to fix it, but couldn't, and found that I could do:
int some_int;
printf("%s",some_int);
and it would print out the integer properly. Because I'm not doing computations on the values I am extracting, I thought this was an okay solution.
UNTIL I TRIED TO COUNT SOME THINGS....
I did a simple:
int rowcount;
for ([stmt]) {
    rowcount++;
}
printf("%i", rowcount);
I am getting an 8-digit random number again... I couldn't figure out what the deal was with ints on this machine.
Then I realized that if I initialize the int to zero, I get a proper number.
Can someone please explain to me under what conditions you need to initialize int variables to zero? I don't recall doing this every time in my old codebase, and I didn't see it in the example that I was modeling my mysql_stmt code from...
Is there something I'm missing? Also, it's entirely possible I've forgotten that this is required each time.
Thanks...
If you don't initialize your variables, there's no guarantee of a default 0/NULL/whatever value. Some compilers MIGHT initialize it to 0 for you (IIRC, MSVC++ 6.0 would be kind enough to do so), and others might not. So don't rely on it. Never use a variable without first giving it some sort of sane value.
Only global and static variables will be initialized to zero. Variables on the stack will contain garbage values if not initialized.
int g_var;  // This is a global variable, so it is initialized to zero

int main()
{
    int s_var = 0;        // This is on the stack, so you need to initialize it explicitly
    static int stat_var;  // This is a static variable, so it is initialized to zero
}
You always need to initialize your variables. To catch this sort of error, you should probably compile with -Wall to get all the warnings that g++ can provide. I also prefer to use -Werror to make all warnings errors, since it's almost always the case that a warning indicates an error or a potential error, and cleaning up the code is better than leaving it as is.
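For example, here's a stripped-down version of the rowcount bug (a sketch only; the exact warning text varies by compiler and version, and some compilers only diagnose it when optimizing):

#include <stdio.h>

int main(void)
{
    int rowcount;                /* the missing "= 0" */
    for (int i = 0; i < 10; i++)
        rowcount++;              /* increments an indeterminate value */
    /* With -Wall, g++/gcc can warn that rowcount may be used
       uninitialized here; -Werror turns that warning into a hard error. */
    printf("%d\n", rowcount);
    return 0;
}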
Also, in your second printf, you used %s which is for printing strings, not integers.
int i = 0;
printf("%d\n", i);
// or
printf("%i\n", i);
Is what you want.
Variables are not automatically initialized in C.
You have indeed forgotten. In C and C++, you don't get any automatic initialization; the contents of c after int c; are whatever happens to be at the address referred to by c at the time.
Best practice: initialize at the definition: int c = 0;.
Oh, PS, and take some care that the MySQL int type matches the C int type; I think it does but I'm not positive. It will be, however, both architecture and compiler sensitive, since sizeof(int) isn't the same in all C environments.
Uninitialized variable.
int some_int = 0;
Why is it bad practice to declare variables on one line?
e.g.
private String var1, var2, var3;
instead of:
private String var1;
private String var2;
private String var3;
In my opinion, the main goal of having each variable on a separate line would be to facilitate the job of Version Control tools.
If several variables are on the same line you risk having conflicts for unrelated modifications by different developers.
In C++ :
int * i, j;
i is of type int *, j is of type int.
The distinction is too easily missed.
Besides, having them on one line each makes it easier to add comments later.
I think there are various reasons, but they all boil down to this: the first form is just less readable and more prone to failure, because a single line is doing more than one thing.
And all that for no real gain; don't tell me you find two lines of saved space a real gain.
It's a similar thing to what happens when you have
if ((foo = some_function()) == 0) {
    // do something
}
Of course this example is much worse than yours.
In C/C++, you also have the problem that the * used to indicate a pointer type only applies to the directly following identifier. So a rather common mistake of inexperienced developers is to write
int* var1, var2, var3;
and expecting all three variables to be of type 'int pointer', whereas for the compiler this reads as
int* var1;
int var2;
int var3;
making only var1 a pointer.
With separate lines, you have the opportunity to add a comment on each line describing the use of the variable (if it isn't clear from its name).
Because in some languages, var2 and var3 in your example would not be strings, they would be variants (untyped).
Why is that bad practice? I don't think it is, as long as your code is still readable.
//not much use
int i, j, k;
//better
int counter,
childCounter,
percentComplete;
To be honest, I am not against it. I think it's perfectly feasible to group similar variables on the same line, e.g.
float fMin, fMax;
However, I steer clear of it when the variables are unrelated, e.g.
int iBalance, iColor;
Relevance.
Just because two variables are of type String does not mean they are closely related to each other.
If the two (or more) variables are closely related by function, rather than by variable type, then maybe they could be declared together; i.e., only if it makes sense for a reader of your program to see the two variables together should they actually be placed together.
Here are my reasons:
Readability: it's easier to spot a particular variable if you know there's only one on each line.
Version control: fewer intra-line changes and more single-line additions, changes, or deletions, which are easier to merge from one branch to another.
What about the case such as:
public static final int NORTH = 0,
EAST = 1,
SOUTH = 2,
WEST = 3;
Is that considered bad practice as well? I would consider that okay as it counters some of the points previously made:
they would all definitely be the same type (in my statically typed Java-world)
comments can be added for each
if you have to change the type for one, you probably have to do it for all, and all four can be done in one change
So, in an (albeit smelly) example like this, are there reasons you wouldn't do that?
I agree with edg, and also because having each variable on a separate line is more readable and easier to maintain. You immediately see the type, scope, and other modifiers, and when you change a modifier it applies only to the variable you want, which avoids errors.
to be more apparent to you when using version control tools (covered by Michel)
to be more readable to you when you have a simple overflow/underflow or compile error and your eyes fail to spot the obvious
to note that the opposite (multi-variable, single-line declarations) has fewer pros, vertical compactness being about the only one
It is bad practice mostly when you can and want to initialize variables at the declaration. An example where this might not be so bad is:
string a, b;

if (Foo())
{
    a = "Something";
    b = "Something else";
}
else
{
    a = "Some other thing";
    b = "Out of examples";
}
Generally it is, for the version control and commenting reasons discussed by others, and I'd apply that in 95% of all cases. However, there are circumstances where it does make sense; for example, if I'm coding graphics and I want a couple of variables to represent texture coordinates (always referred to by convention as s and t), then declaring them as
int s, t; // texture coordinates
IMHO enhances code readability both by shortening the code and by making it explicit that these two variables belong together (of course some would argue for using a single point class variable in this case).
While attempting this question, https://www.interviewbit.com/problems/remove-element-from-array/, I found that the only difference between a solution that fails and one that passes is the declaration style:
Type 1:
int i, j;
Type 2:
int i;
int j;
Type 1 gives Memory Limit Exceeded:
int removeElement(int* A, int n1, int B)
{
    int k = 0, i;
    for (i = 0; i < n1; i++)
        if (A[i] != B)
        {
            A[k] = A[i];
            k++;
        }
    return k;
}
Whereas Type 2 works perfectly fine:
int removeElement(int* A, int n1, int B)
{
    int k = 0;
    int i;
    for (i = 0; i < n1; i++)
        if (A[i] != B)
        {
            A[k] = A[i];
            k++;
        }
    return k;
}
When is it appropriate to use an unsigned variable over a signed one? What about in a for loop?
I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus.
for (unsigned int i = 0; i < someThing.length(); i++) {
    SomeThing var = someThing.at(i);
    // You get the idea.
}
I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part.
I was glad to find a good conversation on this subject, as I hadn't really given it much thought before.
In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case).
unsigned starts to make more sense when:
You're going to do bitwise things like masks (see the sketch below), or
You're desperate to take advantage of the sign bit for that extra positive range.
Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against).
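For the bitwise case in the list above, here's a small sketch of why unsigned is the natural fit (the flag names are made up, and it assumes a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    /* Flag masks: bit twiddling on unsigned values is well-defined. */
    const unsigned int FLAG_READ  = 1u << 0;
    const unsigned int FLAG_WRITE = 1u << 1;
    unsigned int flags = 0u;

    flags |= FLAG_WRITE;                 /* set a bit */
    if (flags & FLAG_WRITE)
        printf("write flag set\n");

    /* With unsigned, the top bit is just another value bit and >> is
       a logical shift; right-shifting a negative signed value is
       implementation-defined. */
    unsigned int top = 1u << 31;
    printf("%u\n", top >> 31);           /* prints 1 */
    return 0;
}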
In your example above, where 'i' will always be positive and the higher range would be beneficial, unsigned would be useful. It is also handy when defining constants with #define, such as:
#define BIT1 ((unsigned int) 1)
#define BIT32 ((unsigned int) reallybignumber)
Especially when these values will never change.
However, if you're doing an accounting program where the people are irresponsible with their money and are constantly in the red, you will most definitely want to use 'signed'.
I do agree with saint though that a good rule of thumb is to use signed, which C actually defaults to, so you're covered.
I would think that if your business case dictates that a negative number is invalid, you would want to have an error shown or thrown.
With that in mind, I only just recently found out about unsigned integers while working on a project processing data in a binary file and storing it in a database. I was purposely "corrupting" the binary data, and ended up getting negative values instead of an expected error. I found that even though the value converted, it was not valid for my business case.
My program did not error, and I ended up getting wrong data into the database. It would have been better if I had used uint and had the program fail.
C and C++ compilers will generate a warning when you compare signed and unsigned types; in your example code, you couldn't make your loop variable unsigned and have the compiler generate code without warnings (assuming said warnings were turned on).
Naturally, you're compiling with warnings turned all the way up, right?
And, have you considered compiling with "treat warnings as errors" to take it that one step further?
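Here's a minimal sketch of the kind of comparison that triggers it (with gcc and clang, -Wall suffices in C++; in C you may need -Wextra or -Wsign-compare):

/* "i < n" compares a signed int against an unsigned int: i is
   converted to unsigned for the comparison, which is exactly what
   the sign-comparison warning flags. */
int sum_below(unsigned int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += i;
    return total;
}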
The downside of using signed numbers is that there's a temptation to overload them so that, for example, the values 0 to n are the menu selection and -1 means nothing is selected, rather than creating a class that has two variables: one to indicate whether something is selected and another to store that selection. Before you know it, you're testing for negative one all over the place, and the compiler is complaining that you want to compare the menu selection against the number of menu selections you have, which is dangerous because they're different types. So don't do that.
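As a sketch of that "two variables" alternative (names are hypothetical):

#include <stdbool.h>

/* Instead of overloading -1 to mean "nothing selected", keep an
   explicit flag next to the value; no magic negative numbers, and
   the index can stay unsigned. */
struct MenuSelection {
    bool selected;        /* is anything selected at all? */
    unsigned int index;   /* meaningful only when selected is true */
};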
size_t is often a good choice for this, or size_type if you're using an STL class.