Function with same signature - cuda

I would like to have two versions of the same member function of a class, one for the host side and one for the device side. Let's say:
class A {
public:
    double stdinvcdf(float x) {
        static normal boostnormal(0, 1);
        return boost::math::cdf(boostnormal, x);
    }
    __device__ double stdinvcdf(float x) {
        return normcdfinvf(x);
    }
};
But when I compile this code with nvcc, compilation aborts with a function-redefinition error.

CUDA/C++ does not support this kind of function overloading, because in the end the two functions have the same signature. The common approach to providing both a host and a device version is to use __host__ in combination with __device__, together with an #ifdef, e.g.
__host__ __device__ double stdinvcdf(float x)
{
#ifdef __CUDA_ARCH__
/* DEVICE CODE */
#else
/* HOST CODE */
#endif
}
This solution was also discussed in this thread in the NVIDIA developer forum.
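Applied to the class from the question, a minimal sketch could look like this (assuming the Boost.Math headers compile cleanly under nvcc on the host side, and keeping the question's choice of normcdfinvf for the device path):
#include <boost/math/distributions/normal.hpp>

class A {
public:
    __host__ __device__ double stdinvcdf(float x)
    {
#ifdef __CUDA_ARCH__
        // device path: CUDA math function, as in the question
        return normcdfinvf(x);
#else
        // host path: Boost.Math, only seen by the host compiler
        static boost::math::normal boostnormal(0, 1);
        return boost::math::cdf(boostnormal, x);
#endif
    }
};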

Related

How to invoke a constexpr function on both device and host?

From the documentation:
I.4.20.4. Constexpr functions and function templates
By default, a constexpr function cannot be called from a function with incompatible execution space. The experimental nvcc flag --expt-relaxed-constexpr removes this restriction. When this flag is specified, host code can invoke a __device__ constexpr function and device code can invoke a __host__ constexpr function.
I read it, but I don't understand what it means - device code can invoke a host constexpr function? Here is my test:
#include <cstdio>
#include <iostream>
using std::cout;
using std::endl;

// gpuErrchk is the usual CUDA error-checking macro (its definition is omitted here)

constexpr int bar(int i)
{
#ifdef __CUDA_ARCH__
    return i;
#else
    return 555;
#endif
}

__global__ void kernel()
{
    int tid = (blockDim.x * blockIdx.x) + threadIdx.x;
    printf("%i\n", bar(tid));
}

int main(int argc, char *[])
{
    static_assert(bar(5) > 0);
    // static_assert(bar(argc) > 0); // compile error
    cout << bar(argc) << endl;
    kernel<<<2, 2>>>();
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());
    return 0;
}
It prints:
555
0
1
2
3
According to my understanding, the host invokes the host function, while the device invokes the device function. I.e. it behaves the same as if I declare bar with both __host__ and __device__ attributes. Adding a single attribute (__host__ or __device__) doesn't make any difference.
As a comparison, the documentation for std::initializer_list is much clearer:
I.4.20.2. std::initializer_list
By default, the CUDA compiler will implicitly consider the member functions of std::initializer_list to have __host__ __device__ execution space specifiers, and therefore they can be invoked directly from device code.
Here I don't have any questions.
What does the documentation mean exactly?
Consider the following code.
#include <algorithm> //std::max
__global__ void kernel(int *array, int n) {
    array[0] = std::max(array[1], array[2]);
}
This code will not compile by default.
error: calling a constexpr __host__ function("max") from a __global__ function("kernel") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
std::max is a standard host function without a __device__ execution space specifier and thus cannot be called from device code.
However, when the compiler flag --expt-relaxed-constexpr is specified, the code compiles nonetheless. I cannot give you any details about how this is achieved internally.
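For reference, the flag is simply passed on the nvcc command line when compiling the file, e.g. (the file name here is just a placeholder):
nvcc --expt-relaxed-constexpr -c kernel.cu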

Why can't we split __host__ and __device__ implementations?

If we have a __host__ __device__ function in CUDA, we can use macros to choose different code paths for host-side and device-side code in its implementation, like so:
__host__ __device__ int foo(int x)
{
#ifdef __CUDA_ARCH__
    return x * 2;
#else
    return x;
#endif
}
but why is it that we can't write:
__host__ __device__ int foo(int x);
__device__ int foo(int x) { return x * 2; }
__host__ int foo(int x) { return x; }
instead?
The Clang implementation of CUDA C++ actually supports overloading on __host__ and
__device__ because it considers the execution space qualifiers part of the function signature. Note, however, that even there, you'd have to declare the two functions separately:
__device__ int foo(int x);
__host__ int foo(int x);
__device__ int foo(int x) { return x * 2; }
__host__ int foo(int x) { return x; }
test it out here
Personally, I'm not sure how desirable/important that really is to have, though. Consider that you can just define a foo(int x) in the host code outside of your CUDA source.
If someone told me they need different implementations of the same function for host and device, where the host version for some reason needs to be defined as part of the CUDA source, my initial gut feeling would be that something is likely heading in a bit of an odd direction. If the host version does something different, shouldn't it most likely have a different name? If it logically does the same thing, just not using the GPU, then why does it have to be part of the CUDA source?
I'd generally advocate for keeping as clean and strict a separation between host and device code as possible and keeping any host code inside the CUDA source to the bare minimum. Even if you don't care about the cleanliness of your code, doing so will at least minimize the chances of getting hurt by all the compiler magic that goes on under the hood…
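To illustrate that last point, here is a rough sketch of the kind of separation I mean (all file and function names are hypothetical): the host implementation lives in an ordinary C++ translation unit, and only the device-side counterpart lives in the CUDA source.
// foo.h - declaration shared by host code
int foo(int x);

// foo.cpp - host implementation, compiled by the host compiler only
int foo(int x) { return x; }

// foo_kernels.cu - device-side counterpart, under its own name
__device__ int foo_dev(int x) { return x * 2; }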

Load device function from shared library with dlopen

I'm relatively new to CUDA programming and can't find a solution to my problem.
I'm trying to have a shared library, let's call it func.so, that defines a device function
__device__ void hello() { printf("hello"); }
I then want to be able to access that library via dlopen and use that function in my program. I tried something along the following lines:
func.cu
#include <stdio.h>
typedef void (*pFCN)();
__device__ void dhello() {
    printf("hello\n");
}
__device__ pFCN ptest = dhello;
pFCN h_pFCN;
extern "C" pFCN getpointer() {
    cudaMemcpyFromSymbol(&h_pFCN, ptest, sizeof(pFCN));
    return h_pFCN;
}
main.cu
#include <dlfcn.h>
#include <stdio.h>
typedef void (*fcn)();
typedef fcn (*retpt)();
retpt hfcnpt;
fcn hfcn;
__device__ fcn dfcn;
__global__ void foo() {
    (*dfcn)();
}
int main() {
    void *m_handle = dlopen("gputest.so", RTLD_NOW);
    hfcnpt = (retpt) dlsym(m_handle, "getpointer");
    hfcn = (*hfcnpt)();
    cudaMemcpyToSymbol(dfcn, &hfcn, sizeof(fcn), 0, cudaMemcpyHostToDevice);
    foo<<<1,1>>>();
    cudaThreadSynchronize();
    return 0;
}
But this way I get the following error when debugging with cuda-gdb:
CUDA Exception: Warp Illegal Instruction
Program received signal CUDA_EXCEPTION_4, Warp Illegal Instruction.
0x0000000000806b30 in dtest () at func.cu:5
I appreciate any help you all can give me! :)
Calling a __device__ function in one compilation unit from device code in another compilation unit requires separate compilation and device linking with nvcc (relocatable device code).
However, such usage with libraries only works with static libraries.
Therefore if the target __device__ function is in the .so library, and the calling code is outside of the .so library, your approach cannot work, with the current nvcc toolchain.
The only "workarounds" I can suggest would be to put the desired target function in a static library, or else put both caller and target inside the same .so library. There are a number of questions/answers on the cuda tag which give examples of these alternate approaches.

Access to CUDA library functions inside specialized instantiations of __device__ function templates

I have the following template __device__ function in CUDA:
template<typename T>
__device__ void MyatomicAdd(T *address, T val) {
    atomicAdd(address, val);
}
that compiles and runs just fine if instantiated with T as a float, i.e.
__global__ void myKernel(float *a, float b) {
    MyatomicAdd<float>(a, b);
}
will run without a problem.
I wanted to specialize this function, as there is no atomicAdd() for doubles, so I can hand code an implementation in double precision. Ignoring the double precision specialization for now, the single precision specialization and template look like this:
template<typename T>
__device__ void MyatomicAdd(T *address, T val) {
}
template<>
__device__ void MyatomicAdd<float>(float *address, float val) {
    atomicAdd(address, val);
}
Now the compiler complains that atomicAdd() is undefined in my specialization; the same happens when I try to use any CUDA functions like __syncthreads() within the specialization. Any ideas? Thanks.
It ended up being a linking problem with some OpenGL code developed by a colleague. Forcing the specializations to be inline fixed the problem, although obviously not the root cause. Still, it'll do for now until I can be bothered to dig through the other guy's code.
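For reference, the inline fix together with a hand-coded double specialization might look roughly like this; the CAS loop is the classic double-precision atomicAdd from the CUDA programming guide, which is only needed on hardware without a native double atomicAdd:
template<typename T>
__device__ inline void MyatomicAdd(T *address, T val) {}

template<>
__device__ inline void MyatomicAdd<float>(float *address, float val)
{
    atomicAdd(address, val);
}

template<>
__device__ inline void MyatomicAdd<double>(double *address, double val)
{
    // CAS loop for doubles, as given in the CUDA programming guide
    unsigned long long int *address_as_ull = (unsigned long long int *)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
    } while (assumed != old);
}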

CUDA function call from another cu file

I have two CUDA files, say A and B.
I need to call a function defined in A from B, like:
__device__ int add(int a, int b) // this is a function in A
{
    return a + b;
}
__device__ void fun1(int a, int b) // this is a function in B
{
    int c = A.add(a, b);
}
How can I do this?
Can I use the static keyword? Please give me an example.
The short answer is that you can't. CUDA only supports internal linkage, thus everything needed to compile a kernel must be defined within the same translation unit.
What you might be able to do is put the functions into a header file like this:
// Both functions in func.cuh
#pragma once
__device__ inline int add(int a, int b)
{
    return a + b;
}
__device__ inline void fun1(int a, int b)
{
    int c = add(a, b);
}
and include that header file into each .cu file where you need to use the functions. The CUDA build chain seems to honour the inline keyword, and that sort of declaration won't generate duplicate symbols on any of the CUDA platforms I use (which doesn't include Windows). I am not sure whether it is intended to work or not, so caveat emptor.
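For example (hypothetical file names), each translation unit just includes the header and uses the functions directly:
// a.cu
#include "func.cuh"
__global__ void kernelA(int *out) { out[0] = add(out[1], out[2]); }

// b.cu
#include "func.cuh"
__global__ void kernelB(int a, int b) { fun1(a, b); }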
I think meanwhile there is a possibility to solve it:
CUDA external class linkage and unresolved extern function in ptxas file
You can enable "Generate Relocatable Device Code" in VS Project Properties -> CUDA C/C++ -> Common, or use the compiler flag -rdc=true.
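With relocatable device code enabled, the original example can then be split across the two .cu files with just an extern declaration (a sketch, assuming the files are named A.cu and B.cu):
// A.cu
__device__ int add(int a, int b) { return a + b; }

// B.cu
extern __device__ int add(int a, int b);   // declaration only
__device__ void fun1(int a, int b)
{
    int c = add(a, b);   // unused here, mirrors the question
}

// Compile both with relocatable device code, e.g.: nvcc -rdc=true -c A.cu B.cu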