Can I use vararg functions in CUDA device-side code?

I know that we can't write CUDA kernels with a variable number of parameters:
Is it possible to have a CUDA kernel with varying number of parameters?
(at least not in the C varargs sense; we can use C++ variadic templates.)
But what about non-kernel device-side code, i.e. __device__ functions? Can these be varargs functions?

Yes, we can write varargs device-side functions.
For example:
#include <stdio.h>
#include <stdarg.h>
__device__ void foo(const char* str, ...)
{
    va_list ap;
    va_start(ap, str);
    int arg = va_arg(ap, int); // making an assumption here
    printf("str is \"%s\", first va_list argument is %d\n", str, arg);
    va_end(ap);
}
This compiles fine with NVCC - and works, provided you actually pass a null-terminated string and an int. I would not be surprised if CUDA's printf() itself were implemented this way.
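For illustration, a minimal sketch of calling it from a kernel (the kernel name bar and the argument values are made up here):

__global__ void bar()
{
    // Passes a string literal and an int, matching foo's assumption about the va_list contents.
    foo("hello", 123);
}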

Related

cuda nvcc make __device__ conditional

I am trying to add a CUDA backend to a 20k LOC C++ expression template library. So far it is working great, but I am drowning in completely bogus "warning: calling a __host__ function from a __host__ __device__ function is not allowed" warnings.
Most of the code can be summarized like this:
template<class Impl>
struct Wrapper{
    Impl impl;
    // lots and lots of decorator code
    __host__ __device__ void call(){ impl.call(); }
};
// Guaranteed to never ever be used on gpu.
struct ImplCPU{
    void call();
};
// Guaranteed to never ever be used on cpu.
struct ImplGPU{
    __host__ __device__ void call(); // Actually only __device__, but needed to shut up the compiler as well
};
Wrapper<ImplCPU> wrapCPU;
Wrapper<ImplGPU> wrapGPU;
In all cases, call() in Wrapper is trivial, while the wrapper itself is a rather complicated beast (only host functions containing meta-information).
Conditional compilation is not an option; both paths are intended to be used side by side.
I am one step short of "--disable-warnings", because honestly the cost of copying and maintaining 10k LOC of horrible template magic outweighs the benefit of the warnings.
I would be super happy about a way to make call() conditionally __device__ or __host__ depending on whether the implementation is for the GPU or the CPU (because Impl knows what it is for).
Just to show how bad it is, here is a single warning:
/home/user/Remora/include/remora/detail/matrix_expression_classes.hpp(859): warning: calling a __host__ function from a __host__ __device__ function is not allowed
detected during:
instantiation of "remora::matrix_matrix_prod<MatA, MatB>::size_type remora::matrix_matrix_prod<MatA, MatB>::size1() const [with MatA=remora::dense_triangular_proxy<const float, remora::row_major, remora::lower, remora::hip_tag>, MatB=remora::matrix<float, remora::column_major, remora::hip_tag>]"
/home/user/Remora/include/remora/cpu/../assignment.hpp(258): here
instantiation of "MatA &remora::assign(remora::matrix_expression<MatA, Device> &, const remora::matrix_expression<MatB, Device> &) [with MatA=remora::dense_matrix_adaptor<float, remora::row_major, remora::continuous_dense_tag, remora::hip_tag>, MatB=remora::matrix_matrix_prod<remora::dense_triangular_proxy<const float, remora::row_major, remora::lower, remora::hip_tag>, remora::matrix<float, remora::column_major, remora::hip_tag>>, Device=remora::hip_tag]"
/home/user/Remora/include/remora/cpu/../assignment.hpp(646): here
instantiation of "remora::noalias_proxy<C>::closure_type &remora::noalias_proxy<C>::operator=(const E &) [with C=remora::matrix<float, remora::row_major, remora::hip_tag>, E=remora::matrix_matrix_prod<remora::dense_triangular_proxy<const float, remora::row_major, remora::lower, remora::hip_tag>, remora::matrix<float, remora::column_major, remora::hip_tag>>]"
/home/user/Remora/Test/hip_triangular_prod.cpp(325): here
instantiation of "void Remora_hip_triangular_prod::triangular_prod_matrix_matrix_test(Orientation) [with Orientation=remora::row_major]"
/home/user/Remora/Test/hip_triangular_prod.cpp(527): here
This problem is actually a quite unfortunate deficiency in CUDA's language extensions.
The standard approach to dealing with these warnings (in Thrust and similar templated CUDA libraries) is to disable the warning for the function/method that causes it by using #pragma hd_warning_disable, or, in newer CUDA (9.0 or newer), #pragma nv_exec_check_disable.
So in your case it would be:
template<class Impl>
struct Wrapper{
    Impl impl;
    // lots and lots of decorator code
    #pragma nv_exec_check_disable
    __host__ __device__ void call(){ impl.call(); }
};
A similar question has already been asked.
I'm sorry, but you're abusing the language and misleading readers. It is not true that your wrapper class has a __host__ __device__ method; what you mean to say is that it has either a __host__ method or a __device__ method. You should treat the warning as more of an error.
So, you can't just use the same template instantiation for ImplCPU and ImplGPU; but you could do something like this:
template<typename Impl> struct Wrapper;

template<> struct Wrapper<ImplGPU> {
    ImplGPU impl;
    __device__ void call(){ impl.call(); }
};

template<> struct Wrapper<ImplCPU> {
    ImplCPU impl;
    __host__ void call(){ impl.call(); }
};
or if you want to be more pedantic, it could be:
enum implementation_device { CPU, GPU };

template<implementation_device ImplementationDevice> struct Wrapper;

template<> struct Wrapper<CPU> {
    __host__ void call();
};

template<> struct Wrapper<GPU> {
    __device__ void call();
};
Having said that - you were expecting to use a single Wrapper class, and here I am telling you that you can't do that. I suspect your question presents an X-Y problem, and you should really reconsider the entire approach of using that wrapper. Perhaps you need the code which uses it to be templated differently for CPU and GPU. Perhaps you need type erasure somewhere. But this won't do.
The solution I came up with in the meantime, with far less code duplication, is to replace call with a functor level:
template<class Impl, class Device>
struct WrapperImpl;

template<class Impl>
struct WrapperImpl<Impl, CPU>{
    typename Impl::Functor f;
    __host__ void operator()(){ f(); }
};

// identical to CPU up to __device__
template<class Impl>
struct WrapperImpl<Impl, GPU>{
    typename Impl::Functor f;
    __device__ void operator()(){ f(); }
};

template<class Impl>
struct Wrapper{
    typedef WrapperImpl<Impl, typename Impl::Device> Functor;
    Impl impl;
    // lots and lots of decorator code that I now do not need to duplicate
    Functor call_functor() const {
        return Functor{impl.call_functor()};
    }
};
//repeat for around 20 classes
Wrapper<ImplCPU> wrapCPU;
wrapCPU.call_functor()();

Declaring a global device array using a pointer with cudaMemcpyFromSymbol

When I use the following code, it shows the correct value 3345.
#include <iostream>
#include <cstdio>

__device__ int d_Array[1];

__global__ void foo(){
    d_Array[0] = 3345;
}

int main()
{
    foo<<<1,1>>>();
    cudaDeviceSynchronize();
    int h_Array[1];
    cudaMemcpyFromSymbol(&h_Array, d_Array, sizeof(int));
    std::cout << "values: " << h_Array[0] << std::endl;
}
But if we replace the line of code __device__ int d_Array[1]; by
__device__ int *d_Array; it shows a wrong value. Why?
The problem is in the memory allocation: the pointer is never made to point at any allocated device memory, so the kernel writes through an invalid pointer. Try the same thing in C++ (on the host) and you will either get an error or an unexpected value.
In addition, you can check for CUDA errors by calling cudaGetLastError() after your kernel. In the first case everything is fine and the result is cudaSuccess. In the second case there is a cudaErrorLaunchFailure error. Here is the explanation of this error (from the CUDA Toolkit documentation):
"An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. The device cannot be used until cudaThreadExit() is called. All existing device memory allocations are invalid and must be reconstructed if the program is to continue using CUDA."
Note that cudaMemcpyToSymbol also supports an offset parameter for array indexing.
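For completeness, here is a sketch of how the __device__ int *d_Array; variant could be made to work: allocate device memory from the host and copy the pointer value into the symbol before launching the kernel (d_buffer is a name introduced here only for illustration).

#include <iostream>

__device__ int *d_Array;    // device-side pointer, not yet pointing anywhere

__global__ void foo(){
    d_Array[0] = 3345;      // valid only once the pointer refers to real device memory
}

int main()
{
    int *d_buffer = nullptr;
    cudaMalloc(&d_buffer, sizeof(int));
    // Copy the pointer value (not the data) into the __device__ symbol.
    cudaMemcpyToSymbol(d_Array, &d_buffer, sizeof(int*));

    foo<<<1,1>>>();
    cudaDeviceSynchronize();

    int h_Array[1];
    // The data lives behind d_buffer, so an ordinary cudaMemcpy brings it back.
    cudaMemcpy(h_Array, d_buffer, sizeof(int), cudaMemcpyDeviceToHost);
    std::cout << "values: " << h_Array[0] << std::endl;
    cudaFree(d_buffer);
}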

How does CUDA's cudaMemcpyFromSymbol work?

I understand the concept of passing a symbol, but was wondering what exactly is going on behind the scenes. If it's not the address of the variable, then what is it?
I believe the details are that for each __device__ variable, cudafe creates a normal global variable as in C and also a CUDA-specific PTX variable. The global C variable is used so that the host program can refer to the variable by its address, and the PTX variable is used for the actual storage of the variable. The presence of the host variable also allows the host compiler to successfully parse the program. When the device program executes, it operates on the PTX variable when it manipulates the variable by name.
If you wrote a program to print the address of a __device__ variable, the address would differ depending on whether you printed it out from the host or device:
#include <cstdio>

__device__ int device_variable = 13;

__global__ void kernel()
{
    printf("device_variable address from device: %p\n", &device_variable);
}

int main()
{
    printf("device_variable address from host: %p\n", &device_variable);
    kernel<<<1,1>>>();
    cudaDeviceSynchronize();
    return 0;
}
$ nvcc test_device.cu -run
device_variable address from host: 0x65f3e8
device_variable address from device: 0x403ee0000
Since neither processor agrees on the address of the variable, that makes copying to it problematic, and indeed __host__ functions are not allowed to access __device__ variables directly:
__device__ int device_variable;

int main()
{
    device_variable = 13;
    return 0;
}
$ nvcc warning.cu
error.cu(5): warning: a __device__ variable "device_variable" cannot be directly written in a host function
cudaMemcpyFromSymbol allows copying data back from a __device__ variable, provided the programmer happens to know the (mangled) name of the variable in the source program.
cudafe facilitates this by creating a mapping from mangled names to the device addresses of variables at program initialization time. The program discovers the device address of each variable by querying the CUDA driver for a driver token given its mangled name.
So the implementation of cudaMemcpyFromSymbol would look something like this in pseudocode:
std::map<const char*, void*> names_to_addresses;

cudaError_t cudaMemcpyFromSymbol(void* dst, const char* symbol, size_t count, size_t offset, cudaMemcpyKind kind)
{
    void* ptr = names_to_addresses[symbol];
    return cudaMemcpy(dst, ptr + offset, count, kind);
}
If you look at the output of nvcc --keep, you can see for yourself the way that the program interacts with special CUDART APIs that are not normally available to create the mapping:
$ nvcc --keep test_device.cu
$ grep device_variable test_device.cudafe1.stub.c
static void __nv_cudaEntityRegisterCallback( void **__T22) { __nv_dummy_param_ref(__T22); __nv_save_fatbinhandle_for_managed_rt(__T22); __cudaRegisterEntry(__T22, ((void ( *)(void))kernel), _Z6kernelv, (-1)); __cudaRegisterVariable(__T22, __shadow_var(device_variable,::device_variable), 0, 4, 0, 0); }
If you inspect the output, you can see that cudafe has inserted a call to __cudaRegisterVariable to create the mapping for device_variable. Users should not attempt to use this API themselves.
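If you want to observe this mapping from your own code, the documented way is cudaGetSymbolAddress, which returns the device address registered for a __device__ variable. A small sketch reusing device_variable from above (note that in recent CUDA versions the symbol argument is the variable itself rather than a string name):

    void *dev_addr = nullptr;
    cudaGetSymbolAddress(&dev_addr, device_variable);
    // dev_addr is the device-side address, usable with ordinary cudaMemcpy:
    int value = 0;
    cudaMemcpy(&value, dev_addr, sizeof(int), cudaMemcpyDeviceToHost);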

Reduce by key on device array

I am using reduce_by_key to find the number of elements in an array of type int2 which have the same first value.
For example:
Array: <1,2> <1,3> <1,4> <2,5> <2,7>
So the number of elements with 1 as the first element is 3, and with 2 as the first element it is 2.
CODE:
#include <cstdio>
#include <cstdlib>
#include <thrust/functional.h>
#include <thrust/reduce.h>

struct compare_int2 : public thrust::binary_function<int2, int2, bool> {
    __host__ __device__ bool operator()(const int2 &a, const int2 &b) const {
        return (a.x == b.x);
    }
};

compare_int2 cmp;

int main()
{
    int n, i;
    scanf("%d", &n);
    int2 *h_list = (int2 *) malloc(sizeof(int2)*n);
    int *h_ones = (int *) malloc(sizeof(int)*n);
    int2 *d_list, *C;
    int *d_ones, *D;
    cudaMalloc((void **)&d_list, sizeof(int2)*n);
    cudaMalloc((void **)&d_ones, sizeof(int)*n);
    cudaMalloc((void **)&C, sizeof(int2)*n);
    cudaMalloc((void **)&D, sizeof(int)*n);
    for(i = 0; i < n; i++)
    {
        int2 p;
        printf("Value ? ");
        scanf("%d %d", &p.x, &p.y);
        h_list[i] = p;
        h_ones[i] = 1;
    }
    cudaMemcpy(d_list, h_list, sizeof(int2)*n, cudaMemcpyHostToDevice);
    cudaMemcpy(d_ones, h_ones, sizeof(int)*n, cudaMemcpyHostToDevice);
    thrust::reduce_by_key(d_list, d_list+n, d_ones, C, D, cmp);
    return 0;
}
The above code produces a segmentation fault. I ran it under gdb, and it reported the segfault at this location:
thrust::system::detail::internal::scalar::reduce_by_key >
(keys_first=0x1304740000,keys_last=0x1304740010,values_first=0x1304740200,keys_output=0x1304740400, values_output=0x1304740600,binary_pred=...,binary_op=...)
at /usr/local/cuda-6.5/bin/../targets/x86_64-linux/include/thrust/system/detail/internal/scalar/reduce_by_key.h:61 61
InputKeyType temp_key = *keys_first
How to use reduce_by_key on device arrays ?
Thrust interprets ordinary pointers as pointing to data on the host:
thrust::reduce_by_key(d_list, d_list+n, d_ones, C, D,cmp);
Therefore Thrust will dispatch the host path for the above algorithm, and it will segfault when it attempts to dereference those pointers in host code. This is covered in the Thrust getting started guide:
You may wonder what happens when a "raw" pointer is used as an argument to a Thrust function. Like the STL, Thrust permits this usage and it will dispatch the host path of the algorithm. If the pointer in question is in fact a pointer to device memory then you'll need to wrap it with thrust::device_ptr before calling the function.
Thrust has a variety of mechanisms (e.g. device_ptr, device_vector, and execution policy) to identify to the algorithm that the data is device-resident and the device path should be used.
The simplest modification for your existing code might be to use device_ptr:
#include <thrust/device_ptr.h>
...
thrust::device_ptr<int2> dlistptr(d_list);
thrust::device_ptr<int> donesptr(d_ones);
thrust::device_ptr<int2> Cptr(C);
thrust::device_ptr<int> Dptr(D);
thrust::reduce_by_key(dlistptr, dlistptr+n, donesptr, Cptr, Dptr, cmp);
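Alternatively, you can keep the raw device pointers and pass an explicit execution policy so the device path is dispatched. A brief sketch (this requires Thrust 1.7 or newer; the Thrust bundled with CUDA 6.5 qualifies):

#include <thrust/execution_policy.h>
...
// thrust::device tells the algorithm the raw pointers refer to device memory.
thrust::reduce_by_key(thrust::device, d_list, d_list + n, d_ones, C, D, cmp);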
The issue described above is similar to another issue you asked about.

Writing a simple thrust functor operating on some zipped arrays

I am trying to perform a thrust::reduce_by_key using zip and permutation iterators.
i.e. doing this on a zipped array of several 'virtual' permuted arrays.
I am having trouble writing the syntax for the functor density_update.
But first, the setup of the problem.
Here is my function call:
thrust::reduce_by_key(dflagt,
                      dflagtend,
                      thrust::make_zip_iterator(
                          thrust::make_tuple(
                              thrust::make_permutation_iterator(dmasst, dmapt),
                              thrust::make_permutation_iterator(dvelt, dmapt),
                              thrust::make_permutation_iterator(dmasst, dflagt),
                              thrust::make_permutation_iterator(dvelt, dflagt)
                          )
                      ),
                      thrust::make_discard_iterator(),
                      danswert,
                      thrust::equal_to<int>(),
                      density_update());
dmapt and dflagt are of type thrust::device_ptr<int>, and dvelt, dmasst and danswert are of type thrust::device_ptr<double>.
(They are Thrust wrappers around my raw CUDA arrays.)
The arrays mapt and flagt are both index vectors from which I need to perform a gather operation from the arrays dmasst and dvelt.
After the reduction step I intend to write my data to the danswert array. Since multiple arrays are being used in the reduction, obviously I am using zip iterators.
My problem lies in writing the functor density_update, which is a binary operation.
struct density_update
{
    typedef thrust::device_ptr<double> ElementIterator;
    typedef thrust::device_ptr<int> IndexIterator;
    typedef thrust::permutation_iterator<ElementIterator,IndexIterator> PIt;
    typedef thrust::tuple<PIt, PIt, PIt, PIt> Tuple;

    __host__ __device__
    double operator()(const Tuple& x, const Tuple& y)
    {
        return thrust::get<0>(*x) * (thrust::get<1>(*x) - thrust::get<3>(*x)) + \
               thrust::get<0>(*y) * (thrust::get<1>(*y) - thrust::get<3>(*y));
    }
};
The value being returned is a double. Why the binary operation looks like the above functor is not important; I just want to know how I would go about correcting the above syntactically.
As shown above, the code is throwing a number of compilation errors, and I am not sure where I have gone wrong.
I am using CUDA 4.0 on a GTX 570 on Ubuntu 10.10.
density_update should not receive tuples of iterators as parameters -- it needs tuples of the iterators' references.
In principle you could write density_update::operator() in terms of the particular reference type of the various iterators, but it's simpler to have the compiler infer the type of the parameters:
struct density_update
{
    template<typename Tuple>
    __host__ __device__
    double operator()(const Tuple& x, const Tuple& y)
    {
        return thrust::get<0>(x) * (thrust::get<1>(x) - thrust::get<3>(x)) + \
               thrust::get<0>(y) * (thrust::get<1>(y) - thrust::get<3>(y));
    }
};
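A tiny host-side sketch (with made-up values) of what the corrected functor receives: the algorithm dereferences the zip-of-permutation iterators and passes tuples of four doubles, not tuples of iterators.

density_update op;
double r = op(thrust::make_tuple(1.0, 2.0, 3.0, 4.0),
              thrust::make_tuple(5.0, 6.0, 7.0, 8.0));
// r == 1.0 * (2.0 - 4.0) + 5.0 * (6.0 - 8.0) == -12.0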