How to write a Numba function used both in CPU mode and in CUDA device mode? - cuda

I want to write a Numba function used both in CPU mode and in CUDA device mode. Of course, I can write two identical functions with and without the cuda.jit decorator. For example:
from numba import cuda, njit
@njit("i4(i4, i4)")
def func_cpu(a, b):
    return a + b

@cuda.jit("i4(i4, i4)", device=True)
def func_gpu(a, b):
    return a + b
But this is ugly from a software-engineering point of view. Is there a more elegant way, i.e., a way to combine the code in one function?

A decorator is essentially a function that takes a function as input and returns an (often modified) function as output. The addition of arguments and keyword arguments, as done with Numba, makes it slightly more complicated internally, but you can think of it as a nested function where the outer one returns a decorator.
So instead of using it as a decorator like you do now (with the @), you can just call it like any other function and capture the output. That output will then be a callable function as well.
This allows writing your function in pure Python and then applying as many "decorators" to it as you'd like. For example:
from numba import cuda, njit
def func_py(a, b):
    return a + b
func_njit = njit("i4(i4, i4)")(func_py)
func_gpu = cuda.jit("i4(i4, i4)", device=True)(func_py)
assert func_py(4, 3) == func_njit(4, 3)
assert func_py(4, 3) == func_gpu(4, 3)
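Note that, depending on the Numba version, a device=True function may not be callable directly from host code, so the last assert above may need to go through a kernel instead. Below is a minimal sketch of calling func_gpu from inside a CUDA kernel; the kernel name, launch configuration, and array setup are illustrative and assume a CUDA-capable GPU is available:
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    # each thread handles one element and calls the device function
    i = cuda.grid(1)
    if i < out.size:
        out[i] = func_gpu(x[i], y[i])

x = np.arange(10, dtype=np.int32)
y = np.arange(10, dtype=np.int32)
out = np.zeros_like(x)
add_kernel[1, 32](x, y, out)   # 1 block of 32 threads
assert out[3] == func_py(x[3], y[3])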

Related

SymPy Wronskian function

I have been trying to compute the Wronskian using SymPy and cannot figure out how to use the function. I did look at the program itself, but I am very new to Python. For the functions, any sinusoids are okay. I just want to see how to use SymPy this way for future reference. Any help would be great!
I listed my imports below:
import sympy as sp
from scipy import linalg
import numpy as np
sp.init_printing()
I don't think that 'var' is the only thing wrong with what I am inputting.
You have to define var first; you have not defined it. Also, the functions should go in a list.
x = sp.Symbol('x')
var = x  # define your variable here
Wronskian_Sol = sp.matrices.dense.wronskian([sp.sin(x), 1 - sp.cos(x)**2], var, method="bareiss")
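As a quick sanity check for this pair (the exact printed form may vary by SymPy version), the Wronskian of sin(x) and 1 - cos(x)**2 simplifies to sin(x)**2*cos(x):
print(sp.simplify(Wronskian_Sol))   # expected: sin(x)**2*cos(x), or an equivalent form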
Here is an example from the book "Applied Differential Equations with Boundary Value Problems" by Vladimir A. Dobrushkin, page 199.
I computed the Wronskian for these three functions using SymPy:
x
x*sin(x)
x*cos(x)
import sympy as sp
x = sp.Symbol('x')
var = x
Wronskian_Sol = sp.matrices.dense.wronskian([x, x*sp.cos(x), x*sp.sin(x)], var, method="bareiss")
print(Wronskian_Sol)
print(Wronskian_Sol.simplify())
This gives the output below. The first line is not simplified, the second one is. You can reduce the first to the simplified version easily by taking the common factor x**3 out, which leaves (sin(x)**2 + cos(x)**2), and this is nothing but 1.
x**3*sin(x)**2 + x**3*cos(x)**2
x**3
You can confirm the solution by manually taking the determinant of the derivative matrix.
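For instance, a manual check along those lines might look like this (a sketch using SymPy's Matrix and det; the variable names are illustrative):
import sympy as sp

x = sp.Symbol('x')
funcs = [x, x*sp.cos(x), x*sp.sin(x)]
# Row k of the Wronskian matrix holds the k-th derivatives of the functions.
W = sp.Matrix([[sp.diff(f, x, k) for f in funcs] for k in range(len(funcs))])
print(sp.simplify(W.det()))   # expected: x**3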

Why don't you get a type error when you pass a float instead of an int in cython

I have a cython function:
def test(int a, int b):
    return a + b
If I call it with:
test(0.5, 1)
I get the value 1.
Why doesn't it give a type error?
This is because float defines the special method __int__, which is called by Cython along the way (or, more precisely, PyNumber_Long; at least this is my guess, because it is not easy to track the call through all these defines and ifdefs).
That is the deal: if your object defines __int__, it can be used as an integer by Cython. Relying on Cython for implicit type checking is not very robust.
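To illustrate the protocol in plain Python (the class name here is made up for the example): any object with an __int__ method can be converted the same way a float is, and this is the hook Cython's argument conversion relies on.
class TwoIsh:
    # hypothetical class: not an int, but convertible to one
    def __int__(self):
        return 2

print(int(TwoIsh()))   # 2
print(int(0.5))        # 0 -- float.__int__ truncates, which is why test(0.5, 1) returns 1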
If you want, you can check whether the input is an int object, as in the following example (for Python 3; for Python 2 it is a little more complex because there are different int classes):
%%cython
from cpython cimport PyLong_Check
def print_me(i):
    if not PyLong_Check(i):
        print("Not an integer!")
    else:
        print(i)
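A short usage sketch, assuming the %%cython cell above has been compiled (for example in a Jupyter notebook):
print_me(5)     # prints 5
print_me(0.5)   # prints "Not an integer!"
print_me(True)  # prints True, because bool is a subclass of int and PyLong_Check accepts subclasses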

How do I read a C char array into a python bytearray with cython?

I have an array with bytes and its size:
cdef char *bp
cdef size_t size
How do I read the array into a Python bytearray (or another appropriate structure that can easily be pickled)?
Three reasonably straightforward ways to do it:
Use the appropriate C API function as I suggested in the comments:
from cpython.bytes cimport PyBytes_FromStringAndSize
output = PyBytes_FromStringAndSize(bp, size)
This makes a copy, which may be an issue with a sufficiently large string. For Python 2 the functions are similarly named but with PyString rather than PyBytes.
View the char pointer with a typed memoryview, get a numpy array from that:
import numpy as np

cdef char[::1] mview = <char[:size:1]>(bp)
output = np.asarray(mview)
This shouldn't make a copy, so it could be more efficient for large data.
Do the copy manually:
output = bytearray(size)
for i in range(size):
    output[i] = bp[i]
(this could be somewhat accelerated with Cython if needed)
The issue I think you're having with ctypes (based on the subsequent question you linked to in the comments) is that you cannot pass a C pointer to the ctypes Python interface. If you try to pass a char* to a Python function, Cython will try to convert it to a string. This fails because it stops at the first 0 element (hence you need size). Therefore you aren't passing ctypes a char*; you're passing it a nonsense Python string.

How to run a function in another process using Cython (and not interacting with Python)? [Included Python code example]

What is the best way to replicate the behavior below in Cython (without having to interact with Python), assuming that the function which will be passed into the new process is a cdef function?
import time
from multiprocessing import Process
def func1(n):
    while True:
        # do some work (different from func2)
        time.sleep(n)

def func2(n):
    while True:
        # do some other work (different from func1)
        time.sleep(n)

p1 = Process(target=func1, args=(1,))
p1.start()
p2 = Process(target=func2, args=(1,))
p2.start()

Practical difference between def f(x: Int) = x+1 and val f = (x: Int) => x+1 in Scala

I'm new to Scala and I'm having a problem understanding this. Why are there two syntaxes for the same concept, with neither being more efficient or shorter (merely from a typing standpoint; maybe they differ in behavior, which is what I'm asking)?
In Go the analogues have a practical difference - you can't forward-reference the lambda assigned to a variable, but you can reference a named function from anywhere. Scala blends these two if I understand it correctly: you can forward-reference any variable (please correct me if I'm wrong).
Please note that this question is not a duplicate of What is the difference between “def” and “val” to define a function.
I know that def evaluates the expression after = each time it is referenced/called, and val only once. But this is different because the expression in the val definition evaluates to a function.
It is also not a duplicate of Functions vs methods in Scala.
This question concerns the syntax of Scala, and is not asking about the difference between functions and methods directly. Even though the answers may be similar in content, it's still valuable to have this exact point cleared up on this site.
There are three main differences (that I know of):
1. Internal Representation
Function expressions (aka anonymous functions or lambdas) are represented in the generated bytecode as instances of any of the Function traits. This means that function expressions are also objects. Method definitions, on the other hand, are first class citizens on the JVM and have a special bytecode representation. How this impacts performance is hard to tell without profiling.
2. Reference Syntax
References to functions and methods have different syntaxes. You can't just say foo when you want to send the reference of a method as an argument to some other part of your code. You'll have to say foo _. With functions you can just say foo and things will work as intended. The syntax foo _ is effectively wrapping the call to foo inside an anonymous function.
3. Generics Support
Methods support type parametrization, functions do not. For example, there's no way to express the following using a function value:
def identity[A](a: A): A = a
The closest would be this, but it loses the type information:
val identity = (a: Any) => a
As an extension to Ionut's first point, it may be worth taking a quick look at http://www.scala-lang.org/api/current/#scala.Function1.
From my understanding, an instance of a function as you described (i.e. val f = (x: Int) => x + 1) extends the Function1 trait. The implication of this is that an instance of a function consumes more memory than a method definition. Methods are innate to the JVM, hence they can be determined at compile time. The obvious cost of a Function is its memory consumption, but with it come added benefits such as composition with other Function objects.
If I understand correctly, the reason defs and lambdas can work together is that the Function1 trait has the self-type (T1) ⇒ R, which is implied by its apply() method: https://github.com/scala/scala/blob/v2.11.8/src/library/scala/Function1.scala#L36. (At least I THINK that's what's going on; please correct me if I'm wrong.) This is all just my own speculation, however. There's certain to be some extra compiler magic taking place underneath to allow method and function interoperability.