I am working on a project where a user enters different types of variables. I'm trying to make my project foolproof, so I want to prevent a data mismatch. An example of the type of input I'm asking the user for would be something like this:
#include <iostream>
#include <string>
using namespace std;

int main() {
    int num;
    string name, name2;
    double money;
    cout << "Enter your first name:" << endl;
    cin >> name;
    cout << "Enter your last name:" << endl;
    cin >> name2;
    cout << "Enter a number:" << endl;
    cin >> num;
    cout << "Enter an amount of money you have:" << endl;
    cin >> money;
}
I'm mainly worried about the integer and double variables. So I'm wondering: is there a kind of exception for a data mismatch for both the integer and double variable types?
I want to write a function that accepts either an int or a Number as a parameter and does different things depending on what type it is. My first question is what to use in the signature, Object or Number? I was thinking Number because it covers both int and Number.
function myFunction(p:Number):void { //Should p be of type Object or Number?
if (p is int) trace("Parameter is int.");
else if (p is Number) trace("Parameter is Number.");
}
The problem is I want it to be classified as a Number if the parameter has a decimal point like 1.00. That will still come up as int. Does anyone have any ideas how to do that? I suppose I could convert p to a String and then check to see if it contains a decimal point, but I was hoping there was an easier way.
The main problem I see is with the following sentence:
The problem is I want it to be classified as a Number if the parameter has a decimal point like 1.00.
Let's say we have an int variable called x, and we set it to 1. Does it look like 1, 1.00, or 1x10^0? The answer is: none of them in particular. Even if somebody were to type this:
myFunction(1.00);
the number still wouldn't actually look like anything. It's just the number one. Its only representation basically is however it looks in the actual machine bit representation (either floating point-style or just 000...001).
All it is is a number - whether stored in a Number variable or an int variable. Trying to convert it to a String won't help, as the information isn't there to begin with. The closest you're going to be able to come to this is pretty much going to be to use a String as the parameter type and see if somebody calls myFunction("1") or myFunction("1.00").
Only do something like this if you really have to, due to the wide range of stuff that could actually be stored in a String. Otherwise your is keywords should be the best way to go. But whichever one you choose, do not use an Object or untyped parameter; go with either a String or (much more preferably) a Number.
Class properties with the long data type are properly mapped when adding a new migration (code-first), but ulong data types are skipped by mysql's EF provider. How does one map a property to use mysql's unsigned bigint?
Update Feb 2021
Apparently EF Core now supports ulong -- see @JimbobTheSailor's answer below.
Older Entity Framework versions:
Turns out that Entity Framework does not support unsigned data types. For uint columns, one can just store the value in a signed data type with a larger range (that is, a long). What about ulong columns? That common workaround can't work here, because there is no EF-supported signed data type large enough to hold a ulong without overflowing.
After a bit of thinking, I figured out a simple solution to this problem: just store the data in the supported long type and cast it to ulong when accessed. You might be thinking: "But wait, ulong's max value > long's max value!" You can still store the bytes of a ulong in a long and then cast it back to ulong when you need it, since both have 8 bytes. This will allow you to save a ulong variable to a database through EF.
// Avoid modifying the following directly.
// Used as a database column only.
public long __MyVariable { get; set; }

// Access/modify this variable instead.
// Tell EF not to map this field to a Db table.
[NotMapped]
public ulong MyVariable
{
    get
    {
        unchecked
        {
            return (ulong)__MyVariable;
        }
    }
    set
    {
        unchecked
        {
            __MyVariable = (long)value;
        }
    }
}
The casting is unchecked to prevent overflow exceptions.
Hope this helps someone.
Update Entity Framework Core Feb 2021
EF Core 3.1:
EF Core now supports the long and ulong types. Using code first, the ulong is mapped to EF Core's decimal type:

public ulong MyULong { get; set; }  // ==> decimal(20, 0)
A ulong results in a decimal being defined in the database with 20 digits and 0 digits to the right of the decimal point, which is sufficient to store a 64 bit ulong.
EF 5:
Thank you to @Tomasz for noting that in EF 5 and 6 a ulong is mapped to a BigInt, rather than to the decimal type described in my original answer (now under the heading "EF Core 3.1" above).
The standard C library declares:

int fputc(int c, FILE *stream);

And this pattern occurs many times, e.g.:

int putc(int c, FILE *stream);
int putchar(int c);

Why not use char, since the argument really is a character? And if int is necessary, when should I use int instead of char?
Most likely (in my opinion, since much of the rationale behind early C is lost in the depths of time), it was simply to mirror the types used in the fgetc family of functions, which must be able to return any real character plus the special EOF marker. The fgetc function gets the next character converted to an int, and uses the special marker value EOF to indicate the end of the stream.
To do that, they needed the wider int type since a char isn't quite large enough to hold all possible characters plus one more thing.
And, since the developers of C seemed to prefer a rather minimalist approach to code, it makes sense that they would use the same type, to allow for code such as:
filecopy(ifp, ofp)
FILE *ifp;
FILE *ofp;
{
    int c;

    while ((c = fgetc(ifp)) != EOF)
        fputc(c, ofp);
}
No char parameters in K&R C
One reason is that in early versions1 of C there were no char parameters.
Yes, you could declare a parameter as char or float, but it was treated as int or double. It would therefore have been somewhat misleading to document an interface as taking a char argument.
I believe this is still true today for functions declared without prototypes, in order for it to be possible to interoperate with older code.
1. Early, but still widespread. C was a quick success and became the first (and still, mostly, the only) widely successful systems programming language.
Hi, I'm working on a personal project for a transport parser. I want to be able to represent a received packet as a binary number, and afterwards be able to set specific bits. I've got a pretty good idea how to do the second part, but I'm really stuck at the beginning. I was advised to use unsigned char for this, but can I really represent a full packet in one variable of that type? Thanks.
An unsigned char array is probably what you need: you can store whatever you want in that structure and access it however pleases you. You could place this container inside a bigger container too: the bigger container would hold pointers to each layer's beginning and end, etc.
I'd probably have a simple class (simple to begin with anyway):
class Packet
{
public:
    Packet(unsigned int length);
    Packet(void *data);
    bool getBit(unsigned int bit);
    void setBit(unsigned int bit, bool set);
private:
    std::vector<unsigned char> bytes;
};
That's just to start, no doubt it would get more complex depending what you use it for. You might consider overloading the array operator but that's probably outside "beginner level" and maybe best ignored right now.
I haven't written C in quite some time and am writing an app using the MySQL C API, compiling with g++ on Red Hat.
So I started outputting some fields with printfs. Using the Oracle API with PRO*C, which I used to use (on SUSE, years ago), I could select an int and output it as:
int some_int;
printf("%i",some_int);
I tried to do that with MySQL ints and got an 8-digit random number displayed. I thought this was a MySQL API issue or some config issue with my server, and I wasted a few hours trying to fix it, but couldn't. Then I found that I could do:
int some_int;
printf("%s",some_int);
and it would print out the integer properly. Because I'm not doing computations on the values I'm extracting, I thought this was an okay solution.
UNTIL I TRIED TO COUNT SOME THINGS....
I did a simple:
int rowcount;
for ([stmt]) {
    rowcount++;
}
printf("%i", rowcount);
I am getting an 8-digit random number again; I couldn't figure out what the deal is with ints on this machine.
Then I realized that if I initialize the int to zero, I get a proper number.
Can someone please explain under what conditions you need to initialize int variables to zero? I don't recall doing this every time in my old codebase, and I didn't see it in the example I was modeling my mysql_stmt code on.
Is there something I'm missing? Also, it's entirely possible I've forgotten this is required each time.
Thanks...
If you don't initialize your variables, there's no guarantee of a default 0/NULL/whatever value. Some compilers MIGHT initialize it to 0 for you (IIRC, MSVC++ 6.0 would be kind enough to do so), and others might not. So don't rely on it. Never use a variable without first giving it some sort of sane value.
Only global and static variables are initialized to zero. Variables on the stack contain garbage values if not initialized.
int g_var;  // This is a global variable, so it is initialized to zero.

int main()
{
    int s_var = 0;        // This is on the stack, so you need to initialize it explicitly.
    static int stat_var;  // This is a static variable, so it is initialized to zero.
}
You always need to initialize your variables. To catch this sort of error, you should compile with -Wall to get all the warnings g++ can provide. I also prefer -Werror, which turns all warnings into errors, since a warning almost always indicates an error or a potential error, and cleaning up the code is better than leaving it as is.
Also, in your second printf you used %s, which is for printing strings, not integers; passing an int where %s expects a string is undefined behavior, so the fact that it appeared to work was luck.
int i = 0;
printf("%d\n", i);
// or
printf("%i\n", i);
Is what you want.
Variables are not automatically initialized in C.
You have indeed forgotten. In C and C++, you don't get any automatic initialization; the contents of c after int c; are whatever happens to be at the address referred to by c at the time.
Best practice: initialize at the definition: int c = 0;.
Oh, PS: take some care that the MySQL int type matches the C int type; I think it does, but I'm not positive. It will, however, be both architecture- and compiler-sensitive, since sizeof(int) isn't the same in all C environments.
Uninitialized variable.
int some_int = 0;