AS3: Function that determines if parameter is int or Number

I want to write a function that accepts either an int or a Number as a parameter and does different things depending on what type it is. My first question is what to use in the signature, Object or Number? I was thinking Number because it covers both int and Number.
function myFunction(p:Number):void { // Should p be of type Object or Number?
    if (p is int) trace("Parameter is int.");
    else if (p is Number) trace("Parameter is Number.");
}
The problem is I want it to be classified as a Number if the parameter has a decimal point like 1.00. That will still come up as int. Does anyone have any ideas how to do that? I suppose I could convert p to a String and then check to see if it contains a decimal point, but I was hoping there was an easier way.

The main problem I see is with the following sentence:
The problem is I want it to be classified as a Number if the parameter has a decimal point like 1.00.
Let's say we have an int variable called x, and we set it to 1. Does it look like 1, 1.00, or 1x10^0? The answer is: none of them. Even if somebody were to type this:
myFunction(1.00);
the number still wouldn't actually look like anything. It's just the number one, and its only representation is the machine's bit pattern (an IEEE 754 floating-point encoding for a Number, or a plain binary integer like 000...001 for an int).
It is just a number, whether it's stored in a Number variable or an int variable. Converting it to a String won't help, because the information isn't there to begin with. The closest you can get is to use a String as the parameter type and check whether somebody calls myFunction("1") or myFunction("1.00").
Only do something like this if you really have to, given the wide range of things that could be stored in a String. Otherwise the is checks are the best way to go. But whichever one you choose, do not use an Object or untyped parameter; go with either a String or (much more preferably) a Number.
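If you do take the String route, the check itself is just a search for the decimal point. Here is a minimal sketch, written in C++ purely to illustrate the idea (the function name is made up):
#include <string>

// "1.00" carries the decimal point in its text, so the distinction the
// question asks for survives only as long as the value stays a string.
bool looksLikeDecimal(const std::string& text) {
    return text.find('.') != std::string::npos;
}
// looksLikeDecimal("1") == false; looksLikeDecimal("1.00") == true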

Related

What are the differences between Null, Zero and Blank in SQL?

Can someone please explain the differences between Null, Zero and Blank in SQL to me?
Zero is a number value. It is definite, with precise mathematical properties. (You can do arithmetic on it ...)
NULL means the absence of any value. You can't do anything with it except test for it.
Blank is ill-defined. It means different things in different contexts to different people. For example:
AFAIK, there is no SQL or MySQL specific technical meaning for "blank". (Try searching the online MySQL manual ...)
For some people "blank" could mean a zero length string value: i.e. one with no characters in it ('').
For some people "blank" could mean a non-zero length string value consisting of only non-printing characters (SPACE, TAB, etc). Or maybe consisting of just a single SPACE character.
In some contexts (where character and string are different types), some people could use "blank" to mean a non-printing character value.
Some people could even use "blank" to mean "anything that doesn't show up when you print or display it".
And then there are meanings that are specific to (for example) ORM mappings.
The point is that "blank" does not have a single well-defined meaning. At least not in (native) English IT terminology. It is probably best to avoid it ... if you want other IT professionals to understand what you mean. (And if someone else uses the term and it is not obvious from the context, ask them to say precisely what they mean!)
We cannot say anything generally meaningful about how ZERO / NULL / BLANK are represented, how much memory they occupy, or anything like that. All we can say is that they are represented differently from each other, and that the actual representation is implementation- and context-dependent.
You can relate the NULL / BLANK / ZERO cases to a child-birth scenario (a real-life example):
NULL case: the child is not born yet.
BLANK case: the child is born, but we haven't given them a name.
ZERO case: the child is born and their age is zero.
See how this data will look in a database table:
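Based on the scenario, the rows might look something like this (the name 'Sam' is purely illustrative):

Name  | Age
------+-----
NULL  | NULL   (NULL: not born yet)
''    | 0      (BLANK: born, no name given)
'Sam' | 0      (ZERO: born, age zero)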
Also, NULL is the absence of a value: in many implementations a field holding NULL is not allocated any space for a value, whereas an empty field holds an empty value in allocated space.
Could you be more accurate about blank?
From what I understand of your question:
"Blank" is the lack of a value; it is a human concept. In SQL, a field needs to be filled with a value anyway, so there is a special value that means "no value has been set for this field": that value is NULL.
If Blank is "", then it is a string, an empty one.
Zero: well, Zero is 0 ... It is a number.
To sum up:
NULL --> no value set
Blank ("") --> empty string
Zero --> Number equal to 0
Please, try to be more accurate next time you post an answer on Stack!
If I were you, I would check some resources about it, for example:
https://www.tutorialspoint.com/sql/sql-null-values.htm
NULL means it does not have any value, not even a garbage value.
ZERO is an integer value.
BLANK is simply an empty String value.
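To see all three side by side in code, here is a hedged C++ sketch (purely illustrative; SQL's NULL has richer semantics than std::optional):
#include <optional>
#include <string>

int main() {
    std::optional<std::string> name;        // NULL: no value has been set at all
    std::optional<std::string> blank = "";  // blank: a real value, the empty string
    int zero = 0;                           // zero: a real value, the number 0

    bool isNull  = !name.has_value();                    // all you can do with NULL is test for it
    bool isBlank = blank.has_value() && blank->empty();  // blank is a value you can inspect
    (void)isNull; (void)isBlank; (void)zero;
    return 0;
}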

ANY, NONE and Unit in Nim

I couldn't find any specific information in the manual.
Can anyone clarify how ANY, NONE and the unit type are reflected in Nim?
Short definitions:
a unit type is a type that allows only one value (and thus can hold no information). The carrier (underlying set) associated with a unit type can be any singleton set. There is an isomorphism between any two such sets, so it is customary to talk about the unit type and ignore the details of its value. One may also regard the unit type as the type of 0-tuples, i.e. the product of no types.
ANY: the type ANY, also known as ALL or Top, is the universal set (all possible values).
NONE: the "empty set".
Thank you!
Your question seems to be about sets. Let's have a look:
let emptySet: set[int8] = {}
This is an empty set with element type int8. The {} literal for the empty set is implicitly converted to whatever concrete set type is expected.
let singletonSet = {1'i8}
This is a set containing exactly one value (a unit type if I understand it correctly). The type of the set can now be automatically deduced from the type of the single value in it.
let completeSet = {low(int8) .. high(int8)}
This set holds all possible int8 values.
The built-in set type is implemented as a bitvector and can therefore only be used for value types with a small range of possible values (for int8, the bitvector is already 256 bits long). Besides int8, it is usually used for char and enumeration types.
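For comparison, here is the same bitvector idea sketched in C++ (this says nothing about Nim's actual implementation beyond what is stated above):
#include <bitset>
#include <cstddef>

int main() {
    // One bit per possible value: the whole universe of int8 values
    // (-128..127) fits in 256 bits, just like Nim's set[int8].
    std::bitset<256> s;
    auto index = [](int v) { return static_cast<std::size_t>(v + 128); };  // map -128..127 to 0..255
    s.set(index(5));               // include 5 in the set
    bool has5 = s.test(index(5));  // membership test: true
    bool has6 = s.test(index(6));  // false
    (void)has5; (void)has6;
    return 0;
}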
Then there is HashSet from the module sets which can hold larger types. However, if you construct a HashSet containing all possible values, memory consumption will probably be enormous.
Nim is not a functional language, and never claims to be one. There is no exact equivalent of these types; the solution is closer to the road that C++ takes.
There is void, which is closest to what Unit is. The Any type does not exist, but there is the untyped pointer. That type does not hold any type information, though, so you need to know what you can cast it to. And for NONE (or Nothing, as I know it from Scala) you have to use void too, but there you can add the noreturn pragma.

What exactly is a datatype?

I understand what a datatype is (intuitively). But I need the formal definition. I don't understand whether it is a set, or whether it's the names 'int', 'float', etc. The formal definition found on Wikipedia is confusing.
In computer programming, a data type is a classification identifying one of various types of data, such as floating-point, integer, or Boolean, that determines the possible values for that type; the operations that can be done on values of that type; the meaning of the data; and the way values of that type can be stored.
Can anyone help me with that?
Yep. What that's saying is that a data type has three pieces:
The various possible values. So, for example, an eight-bit signed integer might have -128..127. Think of that as a set of values V.
The operations: an 8-bit signed integer might have +, -, * (multiply), and / (divide). The full definition would define those as functions from V × V into V, or possibly as a function from V × V into float for division.
The way it's stored -- I sort of gave it away when I said "eight bit signed integer". The other detail is that I'm assuming a specific representation by the way I showed the range of values.
You might, if you're into object-oriented programming, notice that this is very much like the definition of a class, which is defined by the storage used by each object and the methods of the class. Providing those parts for some arbitrary thing, but not inheritance rules, gives you what's called an abstract data type.
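To make the class analogy concrete, here is a minimal C++ sketch (the class name and method set are purely illustrative):
#include <cstdint>

// A data type as a class: the storage (one int8_t) fixes the set of
// possible values V = {-128, ..., 127}, and the member functions are
// the operations O defined over that set.
class Int8 {
public:
    explicit Int8(std::int8_t v) : value(v) {}
    Int8 add(Int8 other) const {
        return Int8(static_cast<std::int8_t>(value + other.value));  // +: V x V -> V
    }
    bool eq(Int8 other) const { return value == other.value; }       // eq: V x V -> {0,1}
private:
    std::int8_t value;  // the representation: 8 bits, two's complement
};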
Update
@Appy, there's some room for differences in the formalities. I was a little subtle because it was late and I was suddenly uncertain whether I'd assumed one's complement or two's complement -- of course it's two's complement. So interpretation is included in my description. Abstractly, though, you'd say it is an algebraic structure T = (V, O) where V is a set of values and O is a set of functions from V into some arbitrary type -- remember that '==', for example, will be a function eq: V × V → {0,1}, so you can't expect every operation to map into V.
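Spelled out for the 8-bit example above (a sketch; the operation set is deliberately abbreviated):
T = (V, O), V = {-128, ..., 127}, O = { +: V × V → V, eq: V × V → {0,1} }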
I can define it as a classification of a particular type of information. It is easy for humans to distinguish between different types of data. We can usually tell at a glance whether a number is a percentage, a time, or an amount of money. We do this through special symbols %, :, and $.
Basically it's a concept that I am sure you grok. For computers, however, a data type is defined and has various associated attributes: size, a definition keyword (sometimes), the values it can take (numbers or characters, for example), and the operations that can be done on it, like add/subtract for numbers, append for strings, or compare for characters. These differ from language to language and even from environment to environment (16- vs. 32-bit ints, 32- vs. 64-bit environments, etc.).
If there is anything I am missing or needs refining please ask as this is fairly open ended.

Flash remoting and floating point values

In xxxx.mxml (from Flex) I have called a remote method (in Java); the method's return type is float.
In xxxx.mxml's RemoteObject result handler I need to get the float value as a Number or a String. I tried String and used Alert.show to see the value. Sometimes I get the exact value, but when, for example, 0.5 is the value returned from the Java method, it shows up here as 0.50000454 and so on. How do I get the exact value?
It is because of the way floating-point numbers are stored: most decimal fractions can't be stored precisely. A quick search on SO would reveal a lot of threads about this. Also read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Thus the problem of getting the exact value boils down to what you define "exact" to be. Try rounding to a given number of decimal places at the Java end, converting the rounded number to a string (I'm not sure if this conversion would preserve the precision), and sending that string.
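A minimal sketch of that rounding-and-formatting idea, written in C++ for illustration (on the Java side you would reach for something like String.format or BigDecimal):
#include <cstdio>
#include <string>

// Format a floating-point value with a fixed number of decimal places,
// so the client receives "0.50" rather than a slightly-off binary value.
std::string roundToString(double value, int places) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.*f", places, value);
    return std::string(buf);
}
// Example: roundToString(0.50000454, 2) returns "0.50"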

When to use unsigned values over signed ones?

When is it appropriate to use an unsigned variable over a signed one? What about in a for loop?
I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus.
for (unsigned int i = 0; i < someThing.length(); i++) {
    SomeThing var = someThing.at(i);
    // You get the idea.
}
I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part.
I was glad to find a good conversation on this subject, as I hadn't really given it much thought before.
In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case).
unsigned starts to make more sense when:
You're going to do bitwise things like masks, or
You're desperate to take advantage of the sign bit for that extra positive range.
Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against).
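As a quick sketch of the mixing pitfall just mentioned (plain standard C++, nothing project-specific):
#include <iostream>

int main() {
    int negative = -1;
    unsigned int one = 1;
    // In a signed/unsigned comparison the signed operand is converted
    // to unsigned, so -1 becomes a huge value (UINT_MAX) and the test
    // is false -- the opposite of what the code appears to say.
    if (negative < one)
        std::cout << "what you'd expect\n";
    else
        std::cout << "surprise: -1 is not less than 1u here\n";
    return 0;
}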
In your example above, where 'i' will always be positive and a higher range would be beneficial, unsigned would be useful. The same goes for constants defined with #define, such as:
#define BIT1  (1u)        /* bit 1 */
#define BIT32 (1u << 31)  /* bit 32: a really big number */
Especially when these values will never change.
However, if you're doing an accounting program where the people are irresponsible with their money and are constantly in the red, you will most definitely want to use 'signed'.
I do agree with saint though that a good rule of thumb is to use signed, which C actually defaults to, so you're covered.
I would think that if your business case dictates that a negative number is invalid, you would want to have an error shown or thrown.
With that in mind, I only just recently found out about unsigned integers while working on a project processing data in a binary file and storing the data into a database. I was purposely "corrupting" the binary data, and ended up getting negative values instead of an expected error. I found that even though the value converted, the value was not valid for my business case.
My program did not error, and I ended up getting wrong data into the database. It would have been better if I had used uint and had the program fail.
C and C++ compilers will generate a warning when you compare signed and unsigned types; in your example code, you couldn't make your loop variable unsigned and have the compiler generate code without warnings (assuming said warnings were turned on).
Naturally, you're compiling with warnings turned all the way up, right?
And, have you considered compiling with "treat warnings as errors" to take it that one step further?
The downside of using signed numbers is that there's a temptation to overload them so that, for example, the values 0..n are the menu selection and -1 means nothing is selected, rather than creating a class with two variables: one to indicate whether something is selected and another to store the selection. Before you know it, you're testing for negative one all over the place, and the compiler is complaining because you want to compare the menu selection against the number of menu selections you have, which is dangerous because they're different types. So don't do that.
size_t is often a good choice for this, or size_type if you're using an STL class.
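For instance, a sketch of the earlier loop using the container's own size type (the container and element names are illustrative):
#include <string>
#include <vector>

void processAll(const std::vector<std::string>& items) {
    // size_type is unsigned and matches items.size() exactly, so the
    // loop condition never mixes signed and unsigned types.
    for (std::vector<std::string>::size_type i = 0; i < items.size(); ++i) {
        const std::string& item = items[i];
        (void)item;  // ... process item ...
    }
}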