I don't understand what the first line is defining, or what the purpose of <Py_ssize_t>ceil is (Cython)

cdef Py_ssize_t max_distance, offset
offset = <Py_ssize_t>ceil(sqrt(img.shape[0] * img.shape[0] +
                               img.shape[1] * img.shape[1]))
Can someone help me understand what the purpose of the first line is when creating the variable after it? I don't understand what <Py_ssize_t> is, because it seems like nothing is assigned to it. This is in Cython, which I am brand new to; I only know Python.

<Py_ssize_t> is a cast. Without context it's hard to know if it's necessary (but I suspect not). It casts whatever ceil returns to a C integer of type Py_ssize_t (a signed integer that is big enough to be used for sizes of Python containers). Documentation: https://cython.readthedocs.io/en/stable/src/userguide/language_basics.html#type-casting
The chances are it isn't necessary, and the conversion would happen automatically with a plain offset = ceil(...).
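As a minimal sketch of what the cast does (assuming ceil comes from libc.math, as it typically does in code like this):

from libc.math cimport ceil

cdef double d = 2.3
cdef Py_ssize_t n
n = <Py_ssize_t>ceil(d)  # ceil returns a C double (3.0); the cast converts it to the integer 3

The cast just tells Cython to treat the double result as a Py_ssize_t; no new variable named Py_ssize_t is being created.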

Related

F# Define function input type / prevent type mismatches

I should probably start out by saying I'm new to F# and that this is without doubt a rookie question.
I have been doing F# for about two months now, and a thing that I find frustrating about F# in general is the constant errors saying, "This was expected to have type something but was given type something else."
I remember my teacher saying that, unlike other languages, F# can't figure out on its own what types you are referring to.
So my question to you is: how do you define the type that an input to a function has?
example:
let FoneTwo y x =
    System.Math.Sqrt(x^2 + y^2)

printfn "%A" (FoneTwo 2.2 3.3)
This errors saying that a float was expected but the type here is string. Where on earth does this string come from? In the function call I'm clearly using floats. So I guess I need to specify somehow that x and y are floats, but how?
I'm not only after an answer to this example but a general rule or method for keeping track of this issue, because it happens a lot.
Can someone enlighten me?
Note that the operator ^ is not power, it's string concatenation (for ML compatibility).
You should use the pown function instead:
System.Math.Sqrt(pown x 2 + pown y 2)
And you can shorten it to:
sqrt(pown x 2 + pown y 2)
There is an operator ** available but that's for floats, so you would have to change your code to:
sqrt (x ** 2.0 + y ** 2.0)
But I would use pown instead.
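If you do want to pin down the parameter types explicitly (the general rule the asker was after), F# lets you annotate each parameter in parentheses. A small sketch:

// Annotated parameters: x and y are declared as float up front,
// so a mismatched call fails at the call site with a clear error
let FoneTwo (y: float) (x: float) : float =
    sqrt (pown x 2 + pown y 2)

printfn "%A" (FoneTwo 2.2 3.3)  // prints 3.966106...

That said, with pown the compiler usually infers float on its own here; the annotations are mainly useful for documenting intent and localizing type errors.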

idl making big numbers = 0.0

I'm trying to get the mass of the black hole at the center of this galaxy. I have the mass in solar masses but need it in kg. However, when I try to convert (1 Msolar = 1.989*10^30 kg), IDL just gives me 0.0000. I have no idea what I'm doing wrong; I tried just telling IDL to print both 1.989*10^30 and 1989000000000000000000000000000, and the outputs are 0.00000 and -1 respectively. Can someone please explain why this is happening?
This is a type conversion/overflow issue. When you use large integer numbers, you need to explicitly define them as long or long64 (i.e., a 64-bit long integer). For real numbers, you can use float or double; the easiest way to do this is the following:
msun = 1.989d30
which is equivalent to 1.989 x 10^30 as a double-precision floating-point number. If you want single precision, then just do the following:
msun = 1.989e30
To make a 32- or 64-bit long integer, use the L and LL suffixes, e.g.:
nbig = 1989L * 10L^(6)
or, for 64-bit:
nbig = 1989LL * 10LL^(15)
(Note that even a signed 64-bit integer tops out near 9.2 x 10^18, so a value like 1.989 x 10^30 cannot be stored as an integer at all; it has to be a float or double.)
I agree with @honeste_vivere's answer about overflow and data types, but I would add that I often change units to avoid this. I frequently have densities on the order of 1e19/m^3, so I cast density in units of 1e19/m^3 and then deal with numbers that are of order 1. This prevents math errors during least-squares fits and other operations that might do things like squaring my data.
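A hedged sketch of that rescaling idea in IDL (variable names invented for illustration):

; Work in units of 1d19 m^-3 so intermediate values stay near 1
density = 3.2d        ; really 3.2 x 10^19 m^-3
print, density^2      ; ~10.2, instead of ~1.0e39, which would overflow single precision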

AS3: Function that determines if parameter is int or Number

I want to write a function that accepts either an int or a Number as a parameter and does different things depending on what type it is. My first question is what to use in the signature, Object or Number? I was thinking Number because it covers both int and Number.
function myFunction(p:Number):void { // Should p be of type Object or Number?
    if (p is int) trace("Parameter is int.");
    else if (p is Number) trace("Parameter is Number.");
}
The problem is I want it to be classified as a Number if the parameter has a decimal point like 1.00. That will still come up as int. Does anyone have any ideas how to do that? I suppose I could convert p to a String and then check to see if it contains a decimal point, but I was hoping there was an easier way.
The main problem I see is with the following sentence:
The problem is I want it to be classified as a Number if the parameter has a decimal point like 1.00.
Let's say we have an int variable called x, and we set it to 1. Does it look like 1, 1.00, or 1x10^0? The answer is always no. Even if somebody were to type this:
myFunction(1.00);
the number still wouldn't actually look like anything. It's just the number one. Its only representation is whatever the machine's bit-level representation happens to be (either floating-point style or just 000...001).
All it is is a number - whether stored in a Number variable or an int variable. Trying to convert it to a String won't help, as the information isn't there to begin with. The closest you're going to be able to come to this is pretty much going to be to use a String as the parameter type and see if somebody calls myFunction("1") or myFunction("1.00").
Only do something like this if you really have to, due to the wide range of stuff that could actually be stored in a String. Otherwise your is keywords should be the best way to go. But whichever one you choose, do not use an Object or untyped parameter; go with either a String or (much more preferably) a Number.
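To see why the is approach behaves the way the question describes, here's a small sketch (note that 1.00 really does come up as int, because by the time the function runs, the value is just the number 1):

function myFunction(p:Number):void {
    if (p is int)
        trace("int-valued:", p);        // myFunction(1.00) lands here
    else
        trace("fractional Number:", p);
}

myFunction(1.00); // int-valued: 1
myFunction(1.25); // fractional Number: 1.25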

What exactly is the danger of using magic debug values (such as 0xDEADBEEF) as literals?

It goes without saying that using hard-coded, hex literal pointers is a disaster:
int *i = (int *)0xDEADBEEF;
// god knows if that location is available
However, what exactly is the danger in using hex literals as variable values?
int i = 0xDEADBEEF;
// what can go wrong?
If these values are indeed "dangerous" due to their use in various debugging scenarios, then even if I do not use these literals, any program that happens to stumble upon one of these values at runtime might crash.
Anyone care to explain the real dangers of using hex literals?
Edit: just to clarify, I am not referring to the general use of constants in source code. I am specifically talking about debug-scenario issues that might come up due to the use of hex values, with the specific example of 0xDEADBEEF.
There's no more danger in using a hex literal than any other kind of literal.
If your debugging session ends up executing data as code without you intending it to, you're in a world of pain anyway.
Of course, there's the normal "magic value" vs "well-named constant" code smell/cleanliness issue, but that's not really the sort of danger I think you're talking about.
With few exceptions, nothing is "constant".
We prefer to call them "slow variables" -- their value changes so slowly that we don't mind recompiling to change them.
However, we don't want to have many instances of 0x07 all through an application or a test script, where each instance has a different meaning.
We want to put a label on each constant that makes it totally unambiguous what it means.
if( x == 7 )
What does "7" mean in the above statement? Is it the same thing as
d = y / 7;
Is that the same meaning of "7"?
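A quick sketch of what labeling each constant looks like in practice (the names here are invented for illustration):

#define MAX_RETRIES   7   /* retry policy */
#define DAYS_PER_WEEK 7   /* calendar arithmetic */

if (x == MAX_RETRIES)
    give_up();            /* hypothetical handler */
d = y / DAYS_PER_WEEK;    /* same literal value, clearly different meaning */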
Test Cases are a slightly different problem. We don't need extensive, careful management of each instance of a numeric literal. Instead, we need documentation.
We can -- to an extent -- explain where "7" comes from by including a tiny bit of a hint in the code.
assertEquals( 7, someFunction(3,4), "Expected 7, see paragraph 7 of use case 7" );
A "constant" should be stated -- and named -- exactly once.
A "result" in a unit test isn't the same thing as a constant, and requires a little care in explaining where it came from.
A hex literal is no different than a decimal literal like 1. Any special significance of a value is due to the context of a particular program.
I believe the concern raised in the IP address formatting question earlier today was not related to the use of hex literals in general, but the specific use of 0xDEADBEEF. At least, that's the way I read it.
There is a concern with using 0xDEADBEEF in particular, though in my opinion it is a small one. The problem is that many debuggers and runtime systems have already co-opted this particular value as a marker value to indicate unallocated heap, bad pointers on the stack, etc.
I don't recall off the top of my head just which debugging and runtime systems use this particular value, but I have seen it used this way several times over the years. If you are debugging in one of these environments, the existence of the 0xDEADBEEF constant in your code will be indistinguishable from the values in unallocated RAM or whatever, so at best you will not have as useful RAM dumps, and at worst you will get warnings from the debugger.
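As a purely hypothetical illustration of the kind of thing such environments do (not any particular allocator's actual code), a debug allocator might poison freed memory like this:

#include <stddef.h>

/* Fill a freed block with the 0xDEADBEEF pattern so dangling-pointer
   reads stand out in a memory dump. */
static void poison_freed_block(void *block, size_t len)
{
    const unsigned char marker[4] = { 0xEF, 0xBE, 0xAD, 0xDE }; /* 0xDEADBEEF, little-endian byte order */
    unsigned char *p = block;
    for (size_t i = 0; i < len; i++)
        p[i] = marker[i % 4];
}

If your own code also stores 0xDEADBEEF deliberately, a dump of that region is ambiguous: is it your flag, or a freed block?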
Anyhow, that's what I think the original commenter meant when he told you it was bad for "use in various debugging scenarios."
There's no reason why you shouldn't assign 0xdeadbeef to a variable.
But woe betide the programmer who tries to assign decimal 3735928559, or octal 33653337357, or worst of all: binary 11011110101011011011111011101111.
Big Endian or Little Endian?
One danger is when constants are assigned to an array or structure with different sized members; the endian-ness of the compiler or machine (including JVM vs CLR) will affect the ordering of the bytes.
This issue is true of non-constant values, too, of course.
Here's an, admittedly contrived, example. What is the value of buffer[0] after the last line?
#include <string.h>

const unsigned int TEST[] = { 0x01BADA55, 0xDEADBEEF }; /* unsigned: 0xDEADBEEF doesn't fit in a signed int */
char buffer[sizeof(TEST)];
memcpy( buffer, (const void*)TEST, sizeof(TEST) );
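(For the curious: on a little-endian machine buffer[0] is 0x55, the low byte of 0x01BADA55; on a big-endian machine it is 0x01.)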
I don't see any problem with using it as a value. It's just a number, after all.
There's no danger in using a hard-coded hex value for a pointer (like your first example) in the right context. In particular, when doing very low-level hardware development, this is the way you access memory-mapped registers. (Though it's best to give them names with a #define, for example.) But at the application level you shouldn't ever need to do an assignment like that.
I use CAFEBABE
I haven't seen it used by any debuggers before.
int *i = (int *)0xDEADBEEF;
// god knows if that location is available
int i = 0xDEADBEEF;
// what can go wrong?
The danger that I see is the same in both cases: you've created a flag value that has no immediate context. There's nothing about i in either case that will let me know, 100, 1,000 or 10,000 lines later, that there is a potentially critical flag value associated with it. What you've planted is a landmine bug that, if I don't remember to check for it in every possible use, could leave me facing a terrible debugging problem. Every use of i will now have to look like this:
if (i != 0xDEADBEEF) { // Curse the original designer to oblivion
    // Actual useful work goes here
}
Repeat the above for all of the 7000 instances where you need to use i in your code.
Now, why is the above worse than this?
if (isIProperlyInitialized()) { // Which could just be a boolean
    // Actual useful work goes here
}
At a minimum, I can spot several critical issues:
Spelling: I'm a terrible typist. How easily will you spot 0xDAEDBEEF in a code review? Or 0xDEADBEFF? On the other hand, I know that my compiler will barf immediately on isIProperlyInitialised() (insert the obligatory s vs. z debate here).
Exposure of meaning. Rather than trying to hide your flags in the code, you've intentionally created a method that the rest of the code can see.
Opportunities for coupling. It's entirely possible that a pointer or reference is connected to a loosely defined cache. An initialization check could be overloaded to check first if the value is in cache, then to try to bring it back into cache and, if all that fails, return false.
In short, it's just as easy to write the code you really need as it is to create a mysterious magic value. The code-maintainer of the future (who quite likely will be you) will thank you.

int issue in g++/mysql/redhat

I haven't written C in quite some time and am writing an app using the MySQL C API, compiling with g++ on Red Hat.
So I start outputting some fields with printfs... Using the Oracle API with Pro*C, which I used to use (on SUSE, years ago), I could select an int and output it as:
int some_int;
printf("%i",some_int);
I tried to do that with MySQL ints and I got an 8-digit random number displayed... I thought this was a MySQL API issue or some config issue with my server, and I wasted a few hours trying to fix it, but couldn't, and found that I could do:
int some_int;
printf("%s",some_int);
and it would print out the integer properly. Because I'm not doing computations on the values I am extracting, I thought this was an okay solution.
UNTIL I TRIED TO COUNT SOME THINGS....
I did a simple:
int rowcount;
for ([stmt]) {
    rowcount++;
}
printf("%i", rowcount);
I am getting an 8-digit random number again... I couldn't figure out what the deal was with ints on this machine.
Then I realized that if I initialize the int to zero, I get a proper number.
Can someone please explain to me under what conditions you need to initialize int variables to zero? I don't recall doing this every time in my old codebase, and I didn't see it in the example that I was modeling my mysql_stmt code on...
Is there something I'm missing? Also, it's entirely possible I've forgotten this is required each time.
thanks...
If you don't initialize your variables, there's no guarantee of a default 0/NULL/whatever value. Some compilers MIGHT initialize it to 0 for you (IIRC, MSVC++ 6.0 would be kind enough to do so), and others might not. So don't rely on it. Never use a variable without first giving it some sort of sane value.
Only global and static variables will be initialized to zero. Variables on the stack will contain garbage values if not initialized.
int g_var; // This is a global variable, so it is initialized to zero

int main()
{
    int s_var = 0;       // This is on the stack, so you need to initialize it explicitly
    static int stat_var; // This is a static variable, so it is initialized to zero
}
You always need to initialize your variables. To catch this sort of error, you should compile with -Wall to get all the warnings that g++ can provide. I also prefer to use -Werror to make all warnings errors, since it's almost always the case that a warning indicates an error or a potential error, and cleaning up the code is better than leaving it as is.
Also, in your second printf you used %s, which is for printing strings, not integers; passing an int there is undefined behavior and only seems to work by accident.
int i = 0;
printf("%d\n", i);
// or
printf("%i\n", i);
Is what you want.
Variables are not automatically initialized in C.
You have indeed forgotten. In C and C++, you don't get any automatic initialization; the contents of c after int c; are whatever happens to be at the address referred to by c at the time.
Best practice: initialize at the definition: int c = 0;.
Oh, PS, and take some care that the MySQL int type matches the C int type; I think it does but I'm not positive. It will be, however, both architecture and compiler sensitive, since sizeof(int) isn't the same in all C environments.
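If you want to check that on your platform, a trivial standalone sketch:

#include <stdio.h>

int main(void)
{
    /* sizeof(int) is implementation-defined; a MySQL INT column is 4 bytes,
       so verify the two line up before shuttling raw values around. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    return 0;
}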
Uninitialized variable.
int some_int = 0;