In CUDA, why can't I allocate 2D shared memory dynamically?

The following works fine:
extern __shared__ float dynamicSh1D[];
But the following does not:
extern __shared__ float dynamicSh2D[][];
I want to understand why this is so.

You can't do it because the compiler needs the width information for the array to generate code that does proper indexing.
If you allocate shared memory in a static fashion like this:
__shared__ float sarr[24][12];
Then you are not only telling the compiler how much memory to allocate/provide, you are also giving it the width of the array (12 in this example). This is important, because a static 2D array of this type is not treated under the hood as an array of pointers; instead it is a flat allocation, with the indexing generated by the compiler at compile time.
So later, when you do something like this:
float val = sarr[y][x];
the compiler will take the sarr pointer and do pointer arithmetic, adding x + (y*12) to it, before dereferencing that pointer to retrieve the value. The 12 in that calculation is known at compile time and is used by the compiler when generating the indexing code.
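To make that concrete, the 2D access is roughly equivalent to this hand-written flat indexing (a sketch for illustration only):
float *flat = (float *)sarr;   // same storage, viewed as a flat 1D array
float val = flat[y * 12 + x];  // the arithmetic the compiler generates, with the width 12 baked in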
Doing something like this:
extern __shared__ float sarr[][];
doesn't supply the array width to the compiler, so the compiler cannot generate the indexing code at compile time, and the declaration is therefore not allowed.
By the way, this works:
extern __shared__ float sarr[][12];
Here is an example:
$ cat t46.cu
#include <cstdio>

__global__ void k(int x, int y){
    extern __shared__ float sarr[][12];   // dynamic extent in the first dimension only
    for (int i = 0; i < 32; i++)
        for (int j = 0; j < 12; j++)
            sarr[i][j] = i * 256 + j;
    float val = sarr[y][x];
    printf("%f\n", val);
}

int main(){
    k<<<1,1,128*12>>>(3,2);   // 128*12 bytes = 32 rows of 12 floats
    cudaDeviceSynchronize();
}
$ nvcc -o t46 t46.cu
$ cuda-memcheck ./t46
========= CUDA-MEMCHECK
515.000000
========= ERROR SUMMARY: 0 errors
$
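If even the width is only known at run time, the usual workaround (a sketch under that assumption, not part of the transcript above) is to declare a flat 1D extern array and do the 2D arithmetic by hand:
__global__ void k2(int width, int x, int y){
    extern __shared__ float sarr[];            // flat 1D dynamic shared memory
    for (int i = 0; i < 32; i++)
        for (int j = 0; j < width; j++)
            sarr[i * width + j] = i * 256 + j; // replaces sarr[i][j]
    printf("%f\n", sarr[y * width + x]);
}
launched with something like k2<<<1,1,32*width*sizeof(float)>>>(width,3,2).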

Related

CUDA/C - Using malloc in kernel functions gives strange results

I'm new to CUDA/C and new to stack overflow. This is my first question.
I'm trying to allocate memory dynamically in a kernel function, but the results are unexpected.
I read that using malloc() in a kernel can hurt performance a lot, but I need it anyway, so I first tried with a simple int ** array just to test the possibility; later I'll need to allocate more complex structs.
In main I used cudaMalloc() to allocate the space for the array of int *, and then in the kernel each thread used malloc() to allocate the array for its index of the outer array. I then used another kernel to check the result, but it doesn't always work.
Here's the main code:
#define N_CELLE 1024*2
#define L_CELLE 512

extern "C" {
int main(int argc, char **argv) {
    int *result = (int *)malloc(sizeof(int));
    int *d_result;
    int size_numbers = N_CELLE * sizeof(int *);
    int **d_numbers;

    cudaMalloc((void **)&d_numbers, size_numbers);
    cudaMalloc((void **)&d_result, sizeof(int *));

    kernel_one<<<2, 1024>>>(d_numbers);
    cudaDeviceSynchronize();
    kernel_two<<<1, 1>>>(d_numbers, d_result);

    cudaMemcpy(result, d_result, sizeof(int), cudaMemcpyDeviceToHost);
    printf("%d\n", *result);

    cudaFree(d_numbers);
    cudaFree(d_result);
    free(result);
}
}
I used extern "C" because I couldn't compile while importing my header, which is not used in this example code. I pasted it since I don't know whether it is relevant.
This is kernel_one code:
__global__ void kernel_one(int **d_numbers) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    d_numbers[i] = (int *)malloc(L_CELLE*sizeof(int));
    for(int j=0; j<L_CELLE; j++)
        d_numbers[i][j] = 1;
}
And this is kernel_two code:
__global__ void kernel_two(int **d_numbers, int *d_result) {
    int temp = 0;
    for(int i=0; i<N_CELLE; i++) {
        for(int j=0; j<L_CELLE; j++)
            temp += d_numbers[i][j];
    }
    *d_result = temp;
}
Everything works fine (i.e. the count is correct) as long as the total allocation is no more than 1024*2*512 ints in device memory. For example, if I #define N_CELLE 1024*4 the program starts giving "random" results, such as negative numbers.
Any idea of what the problem could be?
Thanks anyone!
In-kernel memory allocation draws memory from a statically allocated runtime heap. At larger sizes, you are exceeding the size of that heap and then your two kernels are attempting to read and write from uninitialised memory. This produces a runtime error on the device and renders the results invalid. You would already know this if you either added correct API error checking on the host side, or ran your code with the cuda-memcheck utility.
The solution is to ensure that the heap size is set to something appropriate before trying to run a kernel. Adding something like this:
size_t heapsize = sizeof(int) * size_t(N_CELLE) * size_t(2*L_CELLE);
cudaDeviceSetLimit(cudaLimitMallocHeapSize, heapsize);
to your host code before any other API calls, should solve the problem.
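Independently of the heap size, it is worth guarding against allocation failure inside the kernel; here is a minimal sketch of kernel_one with such a check (the early return is an addition for illustration):
__global__ void kernel_one(int **d_numbers) {
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    d_numbers[i] = (int *)malloc(L_CELLE*sizeof(int));
    if (d_numbers[i] == NULL) return;   // in-kernel malloc returns NULL when the device heap is exhausted
    for(int j=0; j<L_CELLE; j++)
        d_numbers[i][j] = 1;
}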
I don't know anything about CUDA but these are severe bugs:
You cannot convert from int** to void**. They are not compatible types. Casting doesn't solve the problem, but hides it.
&d_numbers gives the address of a pointer to pointer which is wrong. It is of type int***.
Both of the above bugs result in undefined behavior. If your program somehow seems to work under some conditions, that's just pure (bad) luck.

"device-function-maxrregcount" message while compiling cuda code

I am trying to write code that performs multiple vector dot products inside the kernel. I'm using the cublasSdot function from the cuBLAS library to perform the vector dot products. This is my code:
using namespace std;

__global__ void ker(float * a, float * c, long long result_size, int n, int m)
{
    float *sum;
    int id = blockIdx.x*blockDim.x + threadIdx.x;
    float *out1, *out2;
    int k;
    if(id < result_size)
    {
        cublasHandle_t handle;
        cublasCreate(&handle);
        out1 = a + id*m;
        for(k=0; k<n; k++)
        {
            out2 = a + k*m;
            cublasSdot(handle, m, out1, 1, out2, 1, sum);
            c[id*n + k] = *sum;
        }
    }
}
int main()
{
    int n=70000, m=100;
    long result_size=n;
    result_size*=n;
    float *dev_data, *dev_result;
    float *data = new float[n*m];
    float *result = new float[result_size];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
        {
            data[i*m+j] = rand();
        }
    cudaMalloc((void**)&dev_data, sizeof(float)*m*n);
    cudaMalloc((void**)&dev_result, sizeof(float)*result_size);
    cudaMemcpy(dev_data, data, sizeof(float)*m*n, cudaMemcpyHostToDevice);
    int block_size=1024;
    int grid_size=ceil((float)result_size/(float)block_size);
    ker<<<grid_size,block_size>>>(dev_data, dev_result, result_size, n, m);
    cudaDeviceSynchronize();
    cudaMemcpy(result, dev_result, sizeof(float)*result_size, cudaMemcpyDeviceToHost);
    return 0;
}
I have included the cublas_v2 header and used the following command to compile the code:
nvcc -lcublas_device -arch=sm_35 -rdc=true askstack.cu -o askstack
But I got the following message:
ptxas info : 'device-function-maxrregcount' is a BETA feature
Can anyone please let me know what should I do regarding this message?
This message is informational, as talonmies said.
The --maxrregcount option of NVCC is used to specify a limit on the number of registers that can be used by a kernel and all the device functions it calls:
If a kernel is limited to a certain number of registers with the launch_bounds attribute or the --maxrregcount option, then all functions that the kernel calls must not use more than that number of registers; if they exceed the limit, then a link error will be given.
See : NVCC Doc : 6.5.1. Object Compatibility
It seems that device-function-maxrregcount is used to override this value for device functions only. So you can allow a different maximum number of registers for kernels and for device functions.
For device functions, this option overrides the value specified by --maxrregcount.
Source : The CUDA Handbook
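For illustration (the value 32 is arbitrary), the kernel-wide limit that quote refers to is set either on the command line:
$ nvcc --maxrregcount=32 -arch=sm_35 -rdc=true -lcublas_device askstack.cu -o askstack
or per kernel in the source, via the launch bounds attribute:
__global__ void __launch_bounds__(1024) ker(float * a, float * c, long long result_size, int n, int m);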

External call of a class method in a kernel

I have a class FPlan with a number of methods such as perturb and packing.
__host__ __device__ void Perturb_action(FPlan *dfp){
    dfp->perturb();
    dfp->packing();
}

__global__ void Vector_Perturb(FPlan **dfp, int n){
    int i = threadIdx.x;
    if(i < n) Perturb_action(dfp[i]);
}
in main:
FPlan **fp_vec;
fp_vec = (FPlan**)malloc(VEC_SIZE*sizeof(FPlan*));

//initialize the vec
for(int i=0; i<VEC_SIZE; i++)
    fp_vec[i] = &fp;
//fp of type FPlan that is initialized

int v_sz = sizeof(fp_vec);

double test = fp_vec[0]->getCost();
printf("the cost before perturb %f\n", test);

FPlan **value;
cudaMalloc(&value, v_sz);
cudaMemcpy(value, &fp_vec, v_sz, cudaMemcpyHostToDevice);

//call kernel
dim3 threadsPerBlock(VEC_SIZE);
dim3 numBlocks(1);
Vector_Perturb<<<numBlocks,threadsPerBlock>>>(value, VEC_SIZE);

cudaMemcpy(fp_vec, value, v_sz, cudaMemcpyDeviceToHost);
test = fp_vec[0]->getCost();
printf("the cost after perturb %f\n", test);
test = fp_vec[1]->getCost();
printf("the cost after perturb %f\n", test);
Before the perturb, the printf for fp_vec[0] shows a cost of 0.8.
After the perturb, fp_vec[0] shows inf and fp_vec[1] still shows 0.8.
The expected output after the perturbation would be something like fp_vec[0] = 0.7 and fp_vec[1] = 0.9. I want to apply these perturbations to an array of type FPlan.
What am I missing? Is calling an external function supported in CUDA?
This seems to be a common problem these days:
Consider the following code:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int* arr = (int*) malloc(100);
    printf("sizeof(arr) = %zu\n", sizeof(arr));   // prints the size of the pointer, not 100
    return 0;
}
What is the expected output? 100? No: it is 4 (at least on a 32-bit machine). sizeof() returns the size of the type of a variable, not the allocated size of an array.
int v_sz = sizeof(fp_vec);
double test = fp_vec[0]->getCost();
printf("the cost before perturb %f\n", test);
FPlan **value;
cudaMalloc(&value, v_sz);
cudaMemcpy(value, &fp_vec, v_sz, cudaMemcpyHostToDevice);
You are allocating 4 (or 8) bytes on the device and copying 4 (or 8) bytes. The result is undefined (and will likely be garbage every time).
Besides that, you should do proper error checking of your CUDA calls.
Have a look: What is the canonical way to check for errors using the CUDA runtime API?
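A minimal sketch of the corrected sizing, assuming VEC_SIZE and fp_vec as in the question (note fp_vec, not &fp_vec, as the copy source):
size_t v_sz = VEC_SIZE * sizeof(FPlan *);   // the full size of the pointer array, not of one pointer
FPlan **value;
cudaMalloc(&value, v_sz);
cudaMemcpy(value, fp_vec, v_sz, cudaMemcpyHostToDevice);
Even then, only the pointers are copied; the FPlan objects they point to still live in host memory, which a kernel cannot dereference, so those would have to be moved to the device as well.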

Dynamic Shared Memory in CUDA

There are similar questions to what I'm about to ask, but I feel like none of them get at the heart of what I'm really looking for. What I have now is a CUDA method that requires defining two arrays in shared memory. The size of the arrays is given by a variable that is read into the program after the start of execution. Because of this, I cannot use that variable to define the size of the arrays, since defining the size of a shared array requires the value to be known at compile time. I do not want to do something like __shared__ double arr1[1000], because typing in the size by hand is useless to me, as it will change depending on the input. In the same vein, I cannot use #define to create a constant for the size.
Now I can follow an example similar to what is in the manual (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#shared) such as
extern __shared__ float array[];
__device__ void func()   // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[128];
    int*   array2 = (int*)&array1[64];
}
But this still runs into an issue. From what I've read, every shared array defined this way starts at the same base address. That means I need to shift my second array over by the size of the first array, as they appear to do in this example. But the size of the first array depends on user input.
Another question (Cuda Shared Memory array variable) has a similar issue, and there the advice was to create a single array that acts as backing storage for both arrays and simply adjust the indices to match. While this does seem to do what I want, it looks very messy. Is there any way around this so that I can still maintain two independent arrays, each with a size defined by user input?
When using dynamic shared memory with CUDA, there is one and only one pointer passed to the kernel, which defines the start of the requested/allocated area in bytes:
extern __shared__ char array[];
There is no way to handle it differently. However this does not prevent you from having two user-sized arrays. Here's a worked example:
$ cat t501.cu
#include <stdio.h>

__global__ void my_kernel(unsigned arr1_sz, unsigned arr2_sz){
    extern __shared__ char array[];
    double *my_ddata = (double *)array;                  // first array starts at the base
    char   *my_cdata = arr1_sz*sizeof(double) + array;   // second array starts right after the first
    for (int i = 0; i < arr1_sz; i++) my_ddata[i] = (double) i*1.1f;
    for (int i = 0; i < arr2_sz; i++) my_cdata[i] = (char) i;
    printf("at offset %d, arr1: %lf, arr2: %d\n", 10, my_ddata[10], (int)my_cdata[10]);
}

int main(){
    unsigned double_array_size = 256;
    unsigned char_array_size = 128;
    unsigned shared_mem_size = (double_array_size*sizeof(double)) + (char_array_size*sizeof(char));
    my_kernel<<<1, 1, shared_mem_size>>>(256, 128);
    cudaDeviceSynchronize();
    return 0;
}
$ nvcc -arch=sm_20 -o t501 t501.cu
$ cuda-memcheck ./t501
========= CUDA-MEMCHECK
at offset 10, arr1: 11.000000, arr2: 10
========= ERROR SUMMARY: 0 errors
$
If you have an arbitrary arrangement of arrays of mixed data types, you'll want to either manually align your array starting points (and request enough shared memory), use alignment directives (and be sure to request enough shared memory), or use structures to help with alignment.
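As a sketch of the manual alignment just mentioned (assuming, unlike the example above, that the char array is placed first, so the double array needs an aligned starting point):
extern __shared__ char array[];
char *my_cdata = array;
// round the offset up to the next multiple of double's alignment (8 bytes)
size_t offset = (arr2_sz * sizeof(char) + alignof(double) - 1) & ~(size_t)(alignof(double) - 1);
double *my_ddata = (double *)(array + offset);
The launch configuration then has to request enough shared memory to cover the padding the rounding introduces.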

CUDA shared memory issue in outputs depending on extern declaration and size of array

I am experimenting with shared memory in CUDA and I do not understand its behaviour in this bit of code.
I have a pretty basic kernel:
__global__ void sum(int* input, int* output, int size){
    int tid = threadIdx.x + blockDim.x*blockIdx.x +
              blockDim.x*gridDim.x*blockIdx.y;
    extern __shared__ int sdata[];
    sdata[tid] = input[tid];
    __syncthreads();
    output[tid] = input[tid];
}
And the output is 0 for every element of output[]. However, if I comment out sdata[tid] = input[tid];, then the output is fine and equal to input[].
What am I doing wrong here? Am I missing something?
[UPDATE]
Well, if I remove the extern qualifier and give the shared array a fixed size, it seems to work fine. Any ideas why?
[UPDATE]
I am invoking the kernel from C++ code, so I needed to wrap it so it can be invoked from the main code.
kernel.cu contains the kernel itself plus the wrapper function:
void wrapper(int dBlock, int dThread, int* input, int* output, int size){
    sum<<<dBlock, dThread>>>(input, output, size);
}
callerfunction.cpp contains c++ code and the function that invokes the wrapper.
If you use the extern qualifier, you need to pass the size of the shared memory allocation when launching the kernel:
kernel<<<blocks, threads, size>>>(...)
The size parameter is the size of the dynamic shared memory in bytes.
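Applied to the wrapper above, a minimal sketch (assuming one int of dynamic shared memory per thread, as sdata[tid] = input[tid] implies; note the kernel computes a grid-wide tid, so this indexing only stays within the per-block array when a single block is launched):
void wrapper(int dBlock, int dThread, int* input, int* output, int size){
    size_t shmem = dThread * sizeof(int);   // bytes of dynamic shared memory per block
    sum<<<dBlock, dThread, shmem>>>(input, output, size);
}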