CUDA: How to apply __restrict__ on array of pointers to arrays?

This kernel, which takes two __restrict__-qualified int pointers, compiles fine:
__global__ void kerFoo( int* __restrict__ arr0, int* __restrict__ arr1, int num )
{
    for ( /* Iterate over array */ )
        arr1[i] = arr0[i]; // Copy one to other
}
However, the same two int arrays, composed into an array of pointers, fail to compile:
__global__ void kerFoo( int* __restrict__ arr[2], int num )
{
    for ( /* Iterate over array */ )
        arr[1][i] = arr[0][i]; // Copy one to other
}
The error given by the compiler is:
error: invalid use of `restrict'
I have certain structures that are composed as an array of pointers to arrays. (For example, a struct passed to the kernel that has int* arr[16].) How do I pass them to kernels and be able to apply __restrict__ on them?

The CUDA C manual only refers to the C99 definition of __restrict__, with no CUDA-specific caveats.
Since the indicated parameter is an array containing two pointers, this use of __restrict__ looks perfectly valid to me; I see no reason for the compiler to complain, IMHO. I would ask the compiler authors to verify and possibly/probably correct the issue. I'd be interested in different opinions, though.
One remark to #talonmies:
The whole point of restrict is to tell the compiler that two or more pointer arguments will never overlap in memory.
This is not strictly true. restrict tells the compiler that the pointer in question, for the duration of its lifetime, is the only pointer through which the pointed-to object can be accessed. Be aware that the pointed-to object is only assumed to be an array of int. (In truth it's only one int in this case.) Since the compiler cannot know the size of the array, it is up to the programmer to guard the array's boundaries.
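As a toy illustration of that promise (this kernel is made up for the example, not taken from the question): with both qualifiers present, the compiler may load *a once and keep it in a register across the store through b, since the two pointers are promised never to alias.
__global__ void addTwice( int* __restrict__ a, int* __restrict__ b )
{
    *b = *a;    // store through b
    *b += *a;   // with __restrict__, *a may be assumed unchanged by the store above
}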

Filling in the comment in your code with some arbitrary iteration, we get the following program:
__global__ void kerFoo( int* __restrict__ arr[2], int num )
{
    for ( int i = 0; i < 1024; i++ )
        arr[1][i] = arr[0][i]; // Copy one to other
}
and this compiles fine with CUDA 10.1 (Godbolt.org).
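For the struct case mentioned in the question (int* arr[16] inside a struct passed by value), a minimal sketch along the same lines; the names here are made up, and it assumes the toolchain accepts __restrict__ on pointer members as it does on parameters (whether the optimizer exploits the promise there may vary by toolkit version):
// Hypothetical struct mirroring the question's int* arr[16]
struct PtrArray
{
    int* __restrict__ arr[16]; // each member pointer carries the no-alias promise
};

__global__ void kerStruct( PtrArray p, int num )
{
    for ( int i = 0; i < num; i++ )
        p.arr[1][i] = p.arr[0][i];
}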

Related

How to change the default code generated by SWIG for the allocation of memory for a C structure?

I am using a flexible array member in the structure, so I want to change how memory is allocated for that structure with some code of my own. Basically, I want to change the new_structname() and structname_variable_set() functions.
typedef struct vector {
    int x;
    char y;
    int arr[0];
} vector;
Here, SWIG generates a new_vector() function that allocates memory by calling calloc(1, sizeof(struct vector)); SWIG does not handle this kind of structure in any special way, so the generated new_vector() needs to be modified to also allocate memory for the flexible array. Is there any way to handle this?
There are a few ways you can do this. What you're looking for, though, is %extend. That lets us define new constructors and implement them as we see fit. (It even works with a C compiler; they're only constructors from the perspective of the target language.)
Using your vector as a starting point we can illustrate this:
%module test
%include <stdint.i>
%inline %{
typedef struct vector {
    int x;
    char y;
    int arr[0];
} vector;
%}
%extend vector {
    vector(const size_t len) {
        /* room for the struct plus len ints in the flexible array member */
        vector *v = calloc(1, sizeof *v + len * sizeof(int));
        v->x = len;
        return v;
    }
}
With this SWIG synthesises a new_vector function in the generated module code as you'd hoped.
I also assumed that you want to record the length inside the struct as one of its members. If that's not the case you can simply delete the assignment I made.

Memory allocation in cuda c [duplicate]

For example, cudaMalloc((void**)&device_array, num_bytes);
This question has been asked before, and the reply was "because cudaMalloc returns an error code", but I don't get it - what has a double pointer got to do with returning an error code? Why can't a simple pointer do the job?
If I write
cudaError_t catch_status;
catch_status = cudaMalloc((void**)&device_array, num_bytes);
the error code will be put in catch_status, and returning a simple pointer to the allocated GPU memory should suffice, shouldn't it?
In C, data can be passed to functions by value or via simulated pass-by-reference (i.e. by a pointer to the data). By value is a one-way methodology, by pointer allows for two-way data flow between the function and its calling environment.
When a data item is passed to a function via the function parameter list, and the function is expected to modify the original data item so that the modified value shows up in the calling environment, the correct C method for this is to pass the data item by pointer. In C, when we pass by pointer, we take the address of the item to be modified, creating a pointer (perhaps a pointer to a pointer in this case) and hand the address to the function. This allows the function to modify the original item (via the pointer) in the calling environment.
Normally malloc returns a pointer, and we can use assignment in the calling environment to assign this returned value to the desired pointer. In the case of cudaMalloc, the CUDA designers chose to use the returned value to carry an error status rather than a pointer. Therefore the setting of the pointer in the calling environment must occur via one of the parameters passed to the function, by reference (i.e. by pointer). Since it is a pointer value that we want to set, we must take the address of the pointer (creating a pointer to a pointer) and pass that address to the cudaMalloc function.
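A minimal host-side sketch of that pattern may help; allocInt is a hypothetical helper that mirrors cudaMalloc's shape, not a real API:
#include <cstdlib>

// Hypothetical helper: the return value carries an error code, and the
// allocated pointer travels back through the extra level of indirection.
int allocInt( int **out, size_t n )
{
    *out = (int *)malloc( n * sizeof(int) );
    return (*out == NULL) ? 1 : 0;
}

// Usage: the caller passes &p, so the function can modify p itself:
//   int *p = NULL;
//   int err = allocInt(&p, 100);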
Adding to Robert's answer, but first to reiterate: it is a C API, which means it does not support references, which would allow you to modify the value of a pointer (not just what is pointed to) inside the function. The answer by Robert Crovella explains this. Also note that the parameter is void** because C does not support function overloading either, so a single generic pointer type must serve all allocations.
Further, when using a C API within a C++ program (but you have not stated this), it is common to wrap such a function in a template. For example,
template <typename T>
cudaError_t cudaAlloc( T*& d_p, size_t elements )
{
    return cudaMalloc( (void**)&d_p, elements * sizeof(T) );
}
There are two differences in how you would call the above cudaAlloc function:
First, you pass the device pointer directly, without the address-of operator (&) and without casting to a void type.
Second, the argument elements is now the number of elements rather than the number of bytes; the sizeof operator facilitates this. It is arguably more intuitive to specify elements and not worry about bytes.
For example:
float *d = nullptr;                // pointer to floats (4 bytes per element)
size_t N = 100;                    // number of elements, not bytes
cudaError_t err = cudaAlloc(d, N); // modifies d directly
if (err != cudaSuccess)
    std::cerr << "Unable to allocate device memory" << std::endl;
I guess the signature of the cudaMalloc function is best explained by an example. It basically assigns a buffer through a pointer to that buffer (a pointer to a pointer), like the following method:
int cudaMalloc( void **memory, size_t size )
{
    int errorCode = 0;
    *memory = new char[size];
    return errorCode;
}
As you can see, the method takes a pointer to a pointer to memory, in which it stores the address of the new allocation. It then returns the error code (here an integer, but in the real API it is an enum).
The cudaMalloc function could also have been designed as follows:
void *cudaMalloc( size_t size, int *errorCode = nullptr )
{
    if (errorCode)
        *errorCode = 0; // report status through the optional out-parameter
    char *memory = new char[size];
    return memory;
}
In this second case, the error code is set through an optional pointer that defaults to null (for callers who do not care about the error code at all), and the allocated memory is returned.
The first method can be used just like the actual cudaMalloc is used today:
float *p;
int errorCode;
errorCode = cudaMalloc((void**)&p, sizeof(float));
While the second one can be used as follows:
float *p;
int errorCode;
p = (float *) cudaMalloc(sizeof(float), &errorCode);
These two methods are functionally equivalent; they just have different signatures. The CUDA designers went for the first, returning the error code and assigning the memory through a pointer, though many argue the second would have been the better choice.
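For completeness, a short usage sketch of the real API, turning the returned status into a readable message with cudaGetErrorString (part of the CUDA runtime):
#include <cstdio>

int main()
{
    float *d_p = nullptr;
    cudaError_t err = cudaMalloc( (void**)&d_p, 100 * sizeof(float) );
    if (err != cudaSuccess)
        fprintf( stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err) );
    else
        cudaFree(d_p);
    return 0;
}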

Reduce by key on device array

I am using reduce_by_key to count the elements in an array of type int2 that share the same first value.
For example
Array: <1,2> <1,3> <1,4> <2,5> <2,7>
so the number of elements with 1 as the first value is 3, and with 2 it is 2.
CODE:
#include <cstdio>
#include <cstdlib>
#include <thrust/reduce.h>
#include <thrust/functional.h>

struct compare_int2 : public thrust::binary_function<int2, int2, bool> {
    __host__ __device__ bool operator()(const int2 &a, const int2 &b) const {
        return (a.x == b.x);
    }
};
compare_int2 cmp;

int main()
{
    int n, i;
    scanf("%d", &n);
    int2 *h_list = (int2 *) malloc(sizeof(int2) * n);
    int *h_ones = (int *) malloc(sizeof(int) * n);
    int2 *d_list, *C;
    int *d_ones, *D;
    cudaMalloc((void **)&d_list, sizeof(int2) * n);
    cudaMalloc((void **)&d_ones, sizeof(int) * n);
    cudaMalloc((void **)&C, sizeof(int2) * n);
    cudaMalloc((void **)&D, sizeof(int) * n);
    for (i = 0; i < n; i++)
    {
        int2 p;
        printf("Value ? ");
        scanf("%d %d", &p.x, &p.y);
        h_list[i] = p;
        h_ones[i] = 1;
    }
    cudaMemcpy(d_list, h_list, sizeof(int2) * n, cudaMemcpyHostToDevice);
    cudaMemcpy(d_ones, h_ones, sizeof(int) * n, cudaMemcpyHostToDevice);
    thrust::reduce_by_key(d_list, d_list + n, d_ones, C, D, cmp);
    return 0;
}
The above code produces a segmentation fault. I ran it under gdb, which reported the segfault at this location:
thrust::system::detail::internal::scalar::reduce_by_key
    (keys_first=0x1304740000, keys_last=0x1304740010, values_first=0x1304740200,
     keys_output=0x1304740400, values_output=0x1304740600, binary_pred=..., binary_op=...)
    at /usr/local/cuda-6.5/bin/../targets/x86_64-linux/include/thrust/system/detail/internal/scalar/reduce_by_key.h:61
61        InputKeyType temp_key = *keys_first;
How do I use reduce_by_key on device arrays?
Thrust interprets ordinary pointers as pointing to data on the host:
thrust::reduce_by_key(d_list, d_list+n, d_ones, C, D,cmp);
Therefore thrust will call the host path for the above algorithm, and it will seg fault when it attempts to dereference those pointers in host code. This is covered in the thrust getting started guide:
You may wonder what happens when a "raw" pointer is used as an argument to a Thrust function. Like the STL, Thrust permits this usage and it will dispatch the host path of the algorithm. If the pointer in question is in fact a pointer to device memory then you'll need to wrap it with thrust::device_ptr before calling the function.
Thrust has a variety of mechanisms (e.g. device_ptr, device_vector, and execution policy) to identify to the algorithm that the data is device-resident and the device path should be used.
The simplest modification for your existing code might be to use device_ptr:
#include <thrust/device_ptr.h>
...
thrust::device_ptr<int2> dlistptr(d_list);
thrust::device_ptr<int>  donesptr(d_ones);
thrust::device_ptr<int2> Cptr(C);
thrust::device_ptr<int>  Dptr(D);
thrust::reduce_by_key(dlistptr, dlistptr + n, donesptr, Cptr, Dptr, cmp);
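Alternatively, a sketch of the execution-policy route mentioned above: passing thrust::device as the first argument tells the algorithm the raw pointers are device-resident, with no wrapping needed. (thrust::device has been available since Thrust 1.7, which predates the CUDA 6.5 toolkit shown in the backtrace.)
#include <thrust/execution_policy.h>
...
// The policy dispatches the device path directly on raw device pointers.
thrust::reduce_by_key(thrust::device, d_list, d_list + n, d_ones, C, D, cmp);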
The issue described above is similar to another issue you asked about.

Writing a simple thrust functor operating on some zipped arrays

I am trying to perform a thrust::reduce_by_key using zip and permutation iterators, i.e. doing this on a zipped array of several 'virtual' permuted arrays.
I am having trouble writing the syntax for the functor density_update.
But first, the setup of the problem. Here is my function call:
thrust::reduce_by_key( dflagt,
                       dflagtend,
                       thrust::make_zip_iterator(
                           thrust::make_tuple(
                               thrust::make_permutation_iterator(dmasst, dmapt),
                               thrust::make_permutation_iterator(dvelt,  dmapt),
                               thrust::make_permutation_iterator(dmasst, dflagt),
                               thrust::make_permutation_iterator(dvelt,  dflagt)
                           )
                       ),
                       thrust::make_discard_iterator(),
                       danswert,
                       thrust::equal_to<int>(),
                       density_update() );
dmapt and dflagt are of type thrust::device_ptr<int>, and dvelt, dmasst and danswert are of type thrust::device_ptr<double>.
(They are thrust wrappers around my raw CUDA arrays.)
The arrays mapt and flagt are both index vectors from which I need to perform a gather operation from the arrays dmasst and dvelt.
After the reduction step I intend to write my data to the danswert array. Since multiple arrays are being used in the reduction, obviously I am using zip iterators.
My problem lies in writing the functor density_update, which is a binary operation.
struct density_update
{
    typedef thrust::device_ptr<double> ElementIterator;
    typedef thrust::device_ptr<int>    IndexIterator;
    typedef thrust::permutation_iterator<ElementIterator, IndexIterator> PIt;
    typedef thrust::tuple<PIt, PIt, PIt, PIt> Tuple;

    __host__ __device__
    double operator()(const Tuple& x, const Tuple& y)
    {
        return thrust::get<0>(*x) * (thrust::get<1>(*x) - thrust::get<3>(*x)) +
               thrust::get<0>(*y) * (thrust::get<1>(*y) - thrust::get<3>(*y));
    }
};
The value being returned is a double. Why the binary operation looks like the above is not important; I just want to know how to correct it syntactically.
As shown, the code throws a number of compilation errors, and I am not sure where I have gone wrong.
I am using CUDA 4.0 on GTX 570 on Ubuntu 10.10
density_update should not receive tuples of iterators as parameters -- it needs tuples of the iterators' references.
In principle you could write density_update::operator() in terms of the particular reference type of the various iterators, but it's simpler to have the compiler infer the type of the parameters:
struct density_update
{
    template <typename Tuple>
    __host__ __device__
    double operator()(const Tuple& x, const Tuple& y)
    {
        return thrust::get<0>(x) * (thrust::get<1>(x) - thrust::get<3>(x)) +
               thrust::get<0>(y) * (thrust::get<1>(y) - thrust::get<3>(y));
    }
};
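The templated operator() also sidesteps having to spell out Thrust's internal reference types (tuples of device references, not tuples of iterators), which is what the hand-written typedefs above got wrong; the compiler deduces the correct Tuple type at the call site.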

2D arrays in CUDA

I have read a lot about handling 2D arrays in CUDA, and I gather it is necessary to flatten them before sending them to the GPU. However, can I allocate a 1D array on the GPU and access it as a 2D array in the kernel? I tried, but failed; my code looks like this:
__global__ void kernel( int **d_a )
{
    cuPrintf("%p", d_a[0][0]);
}

int main()
{
    int **A;
    cudaPrintfInit();
    cudaMalloc((void**)&A, 16*sizeof(int));
    kernel<<<1,1>>>(A);
    cudaPrintfDisplay(stdout, true);
    cudaPrintfEnd();
}
In fact it is not necessary to "flatten" your 2D array before using it on the GPU (although this can speed up memory accesses). If you'd like a true 2D allocation, you can use something like cudaMallocPitch, which is documented in the CUDA C programming guide. I believe the reason your code isn't working is that you only malloced a 1D array: A[0][0] doesn't exist. Looking at your code, you made a 1D array of ints, not int*s. If you wanted to malloc a flattened 2D array, you could do something like:
int *A;
cudaMalloc((void**)&A, 16*length*sizeof(int)); // where length is the number of rows/cols you want
And then in your kernel, use (to print the address of any element):
__global__ void kernel( int *d_a, int row, int col, int stride )
{
    printf("%p", (void *)&d_a[col + row*stride]);
}
This is how I fixed the problem:
I cudaMalloc in the usual way, but when passing the pointer to the kernel I cast it to int(*)[col], and this works for me.
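A minimal sketch of that trick, with made-up names and sizes: allocate a flat buffer, then cast the pointer to a pointer-to-row type so the kernel can use ordinary 2D indexing. Note the column count must be a compile-time constant for this cast to work.
const int ROWS = 4, COLS = 4; // illustrative sizes

// The kernel receives a pointer to rows of COLS ints, so plain 2D indexing works.
__global__ void kernel2D( int (*d_a)[COLS] )
{
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            d_a[r][c] = r * COLS + c;
}

int main()
{
    int *A;
    cudaMalloc((void**)&A, ROWS * COLS * sizeof(int)); // one flat allocation
    kernel2D<<<1,1>>>( (int(*)[COLS])A );              // the cast described above
    cudaDeviceSynchronize();
    cudaFree(A);
    return 0;
}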