I have many functions in my CUDA C program with both the __host__ and __device__ qualifiers. I'm looking for a shorthand for these two to make my code look neater, by changing
__host__ __device__ void foo() {
}
to
__both__ void foo() {
}
.
Of course I can
#define __both__ __host__ __device__
, but if something like this already existed, I would prefer to use the existing solution.
No, there is no shorthand for this in CUDA C/C++ currently.
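So the #define from the question is about as good as it gets; a minimal sketch of that approach:
#define __both__ __host__ __device__

// callable from host code and device code alike
__both__ float square(float x) { return x * x; }

__global__ void kernel(float* out) { *out = square(2.0f); }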
I know that we can't write CUDA kernels with a variable number of parameters:
Is it possible to have a CUDA kernel with varying number of parameters?
(at least not in the C varargs sense; we can use C++ variadic templates.)
But what about non-kernel device-side code, i.e. __device__ functions? Can these be varargs functions?
Yes, we can write varargs device-side functions.
For example:
#include <stdio.h>
#include <stdarg.h>
__device__ void foo(const char* str, ...)
{
    va_list ap;
    va_start(ap, str);
    int arg = va_arg(ap, int); // making an assumption here
    printf("str is \"%s\", first va_list argument is %d\n", str, arg);
    va_end(ap);
}
This compiles fine with NVCC - and works, provided you actually pass a null-terminated string and an int. I would not be surprised if CUDA's printf() itself were implemented this way.
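For completeness, a minimal launch sketch, assuming it is compiled together with the foo() definition above:
__global__ void demo_kernel()
{
    // a string followed by an int, matching foo's assumption about va_arg
    foo("hello from the device", 42);
}

int main()
{
    demo_kernel<<<1, 1>>>();
    cudaDeviceSynchronize(); // flush device-side printf output
    return 0;
}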
I've been reading about template functions in the CUDA Programming Guide; is something like this supposed to work?
#include <cstdio>
/* host struct */
template <typename T>
struct Test {
    T *val;
    int size;
};
/* struct device */
template <typename T>
__device__ Test<T> *d_test;
/* test function */
template <typename T>
T __device__ testfunc() {
    return *d_test<T>->val;
}
/* test kernel */
__global__ void kernel() {
    printf("funcout = %g \n", testfunc<float>());
}
I get the correct result but a warning:
"warning: a host variable "d_test [with T=T]" cannot be directly read in a device function" ?
Does the struct in the test function have to be instantiated with *d_test<float>->val?
KR,
Iggi
Unfortunately, the CUDA compiler seems to generally have some issues with variable templates. If you look at the assembly, you'll see that everything works just fine. The compiler clearly does instantiate the variable template and allocates a corresponding device object.
.global .align 8 .u64 _Z6d_testIfE;
The generated code uses this object just as it's supposed to:
ld.global.u64 %rd3, [_Z6d_testIfE];
I'd consider this warning a compiler bug. Note that I cannot reproduce the issue with CUDA 10 here, so this issue has most likely been fixed by now. Consider updating your compiler…
@MichaelKenzel is correct.
This is almost certainly an nvcc bug, which I have now filed (you might need an account to access that).
Also note I've been able to reproduce the issue with less code:
template <typename T>
struct foo { int val; };
template <typename T>
__device__ foo<T> *x;
template <typename T>
int __device__ f() { return x<T>->val; }
__global__ void kernel() { int y = f<float>(); }
and have a look at the result on GodBolt as well.
I am trying to add a CUDA backend to a 20k LOC C++ expression template library. So far it is working great, but I am drowning in completely bogus "warning: calling a __host__ function from a __host__ __device__ function is not allowed" warnings.
Most of the code can be summarized like this:
template<class Impl>
struct Wrapper{
    Impl impl;
    // lots and lots of decorator code
    __host__ __device__ void call(){ impl.call();};
};
//Guaranteed to never ever be used on gpu.
struct ImplCPU{
    void call();
};
//Guaranteed to never ever be used on cpu.
struct ImplGPU{
    __host__ __device__ void call();//Actually only __device__, but needed to shut up the compiler as well
};
Wrapper<ImplCPU> wrapCPU;
Wrapper<ImplGPU> wrapGPU;
In all cases, call() in Wrapper is trivial, while the wrapper itself is a rather complicated beast (only host functions containing meta-information).
Conditional compilation is not an option; both paths are intended to be used side by side.
I am one step short of "--disable-warnings", because honestly the cost of copying and maintaining 10k LOC of horrible template magic outweighs the benefits of warnings.
I would be super happy about a way to have call be __device__ or __host__ conditionally, based on whether the implementation is for GPU or CPU (because Impl knows what it is for).
Just to show how bad it is, here is a single warning:
/home/user/Remora/include/remora/detail/matrix_expression_classes.hpp(859): warning: calling a __host__ function from a __host__ __device__ function is not allowed
detected during:
instantiation of "remora::matrix_matrix_prod<MatA, MatB>::size_type remora::matrix_matrix_prod<MatA, MatB>::size1() const [with MatA=remora::dense_triangular_proxy<const float, remora::row_major, remora::lower, remora::hip_tag>, MatB=remora::matrix<float, remora::column_major, remora::hip_tag>]"
/home/user/Remora/include/remora/cpu/../assignment.hpp(258): here
instantiation of "MatA &remora::assign(remora::matrix_expression<MatA, Device> &, const remora::matrix_expression<MatB, Device> &) [with MatA=remora::dense_matrix_adaptor<float, remora::row_major, remora::continuous_dense_tag, remora::hip_tag>, MatB=remora::matrix_matrix_prod<remora::dense_triangular_proxy<const float, remora::row_major, remora::lower, remora::hip_tag>, remora::matrix<float, remora::column_major, remora::hip_tag>>, Device=remora::hip_tag]"
/home/user/Remora/include/remora/cpu/../assignment.hpp(646): here
instantiation of "remora::noalias_proxy<C>::closure_type &remora::noalias_proxy<C>::operator=(const E &) [with C=remora::matrix<float, remora::row_major, remora::hip_tag>, E=remora::matrix_matrix_prod<remora::dense_triangular_proxy<const float, remora::row_major, remora::lower, remora::hip_tag>, remora::matrix<float, remora::column_major, remora::hip_tag>>]"
/home/user/Remora/Test/hip_triangular_prod.cpp(325): here
instantiation of "void Remora_hip_triangular_prod::triangular_prod_matrix_matrix_test(Orientation) [with Orientation=remora::row_major]"
/home/user/Remora/Test/hip_triangular_prod.cpp(527): here
This problem is actually a quite unfortunate deficiency in the CUDA language extensions.
The standard approach to dealing with these warnings (in Thrust and similar templated CUDA libraries) is to disable the warning for the function/method that causes it by using #pragma hd_warning_disable, or, in newer CUDA (9.0 onwards), #pragma nv_exec_check_disable.
So in your case it would be:
template<class Impl>
struct Wrapper{
    Impl impl;
    // lots and lots of decorator code
    #pragma nv_exec_check_disable
    __host__ __device__ void call(){ impl.call();};
};
A similar question has already been asked.
I'm sorry, but you're abusing the language and misleading readers. It is not true that your wrapper class has a __host__ __device__ method; what you mean to say is that it has either a __host__ method or a __device__ method. You should treat the warning as more of an error.
So, you can't just use the same template instantiation for ImplCPU and ImplGPU; but you could do something like this:
template<typename Impl> struct Wrapper;

template<> struct Wrapper<ImplGPU> {
    ImplGPU impl;
    __device__ void call(){ impl.call(); }
};

template<> struct Wrapper<ImplCPU> {
    ImplCPU impl;
    __host__ void call(){ impl.call(); }
};
or if you want to be more pedantic, it could be:
enum implementation_device { CPU, GPU };

template<implementation_device ImplementationDevice> struct Wrapper;

template<> struct Wrapper<CPU> {
    __host__ void call();
};

template<> struct Wrapper<GPU> {
    __device__ void call();
};
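A minimal usage sketch of the first variant (the run_on_gpu/run_on_cpu names are hypothetical):
__global__ void run_on_gpu(Wrapper<ImplGPU> w) { w.call(); } // __device__ call()

void run_on_cpu(Wrapper<ImplCPU> w) { w.call(); }            // __host__ call()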
Having said that - you were expecting to use a single Wrapper class, and here I am telling you that you can't do that. I suspect your question presents an X-Y problem, and you should really reconsider the entire approach of using that wrapper. Perhaps you need the code which uses it to be templated differently for CPU or GPU. Perhaps you need type erasure somewhere. But this won't do.
The solution I came up with in the meantime, with far, far less code duplication, is to replace call with a functor level:
template<class Impl, class Device>
struct WrapperImpl;
template<class Impl>
struct WrapperImpl<Impl, CPU>{
    typename Impl::Functor f;
    __host__ void operator()(){ f(); }
};
//identical to CPU up to __device__
template<class Impl>
struct WrapperImpl<Impl, GPU>{
    typename Impl::Functor f;
    __device__ void operator()(){ f(); }
};
template<class Impl>
struct Wrapper{
    typedef WrapperImpl<Impl, typename Impl::Device> Functor;
    Impl impl;
    // lots and lots of decorator code that I now do not need to duplicate
    Functor call_functor() const {
        return Functor{impl.call_functor()};
    }
};
//repeat for around 20 classes
Wrapper<ImplCPU> wrapCPU;
wrapCPU.call_functor()();
I successfully created an operator+ between two float4 by doing:
__device__ float4 operator+(float4 a, float4 b) {
// ...
}
However, if in addition I want to have an operator+ for uchar4, by doing the same thing with uchar4, I get the following error:
error: more than one instance of overloaded function "operator+" has "C" linkage
I get a similar error message when I declare multiple functions with the same name but different arguments.
So, two questions:
Polymorphism: Is it possible to have multiple functions with the same name and different arguments in CUDA? If so, why do I get this error message?
operator+ for float4: it seems that this feature is already provided by "cutil_math.h", but when I include that (#include <cutil_math.h>) it complains that there is no such file or directory... anything particular I should do? Note: I am using pycuda, which is CUDA for Python.
Thanks!
Note the "has "C" linkage" in the error. You are compiling your code with C linkage (pyCUDA does this by default to circumvent symbol mangling issues). C++ can't support multiple definitions of the same function name using C linkage.
The solution is to compile the code without the automatically generated extern "C", and explicitly specify C linkage only for kernels. So your code would look something like:
__device__ float4 operator+(float4 a, float4 b) { ... };
extern "C"
__global__ void kernel() { };
rather than the standard pyCUDA emitted:
extern "C"
{
__device__ float4 operator+(float4 a, float4 b) { ... };
__global__ void kernel() { };
}
pycuda.compiler.SourceModule has an option no_extern_c which can be used to control whether extern "C" is emitted by the just-in-time compilation system or not.
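Putting it together (compiled with no_extern_c=True so that pyCUDA does not wrap everything in extern "C"), a self-contained sketch; the overload bodies and the add_kernel name are just illustrative:
__device__ float4 operator+(float4 a, float4 b) {
    return make_float4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w);
}

__device__ uchar4 operator+(uchar4 a, uchar4 b) {
    return make_uchar4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w);
}

// only the kernel gets C linkage, so the overloads above can coexist
extern "C" __global__ void add_kernel(const float4* a, const float4* b, float4* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = a[i] + b[i];
}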
I convert my .cu files using CUDA_COMPILE_PTX from findPackageCUDA.cmake. When I try to get the function pointers to my kernels, I am facing the following problem:
My kernel named Kernel1 can only be loaded correctly via cuModuleGetFunction if I use its .entry label from the resulting .ptx file, e.g. _Z7Kernel1Pj.
The problem is that this label may change each time I recompile my .cu files. This can't be a solution if I reference the kernels by name in a const char*.
_Z7Kernel1Pj is a C++ mangled name. If you want a plain symbol you can use extern "C":
extern "C" __global__ void Kernel1(...)
For example, the default CUDA Visual Studio project contains the kernel
__global__ void addKernel(int *c, const int *a, const int *b)
If you run cuobjdump -symbols on this you will see the mangled symbol name
STT_FUNC STB_GLOBAL _Z9addKernelPiPKiS1_
If you use extern "C"
extern "C" __global__ void addKernel(int *c, const int *a, const int *b)
the symbol name will now be
STT_FUNC STB_GLOBAL addKernel
Using extern "C" will result in loss of function overloading and namespaces