Is there any way to dynamically allocate constant memory? CUDA

I'm confused about copying arrays to constant memory.
According to the programming guide, there's at least one way to allocate constant memory and use it to store an array of values. This is called static memory allocation:
__constant__ float constData[256];
float data[256];
cudaMemcpyToSymbol(constData, data, sizeof(data));
cudaMemcpyFromSymbol(data, constData, sizeof(data));
According to the programming guide, we can also use:
__device__ float* devPointer;
float* ptr;
cudaMalloc(&ptr, 256 * sizeof(float));
cudaMemcpyToSymbol(devPointer, &ptr, sizeof(ptr));
It looks like dynamic constant memory allocation is being used here, but I'm not sure about it. Also, no __constant__ qualifier is used.
So here are some questions:
Is this pointer stored in constant memory?
Is the memory it points to stored in constant memory too?
Is this pointer constant, i.e. is it forbidden to change the pointer from device or host code? And is changing the values of the array prohibited or not? If changing the values of the array is allowed, does that mean constant memory is not used to store these values?

The developer can declare up to 64K of constant memory at file scope. In SM 1.0, the constant memory used by the toolchain (e.g. to hold compile-time constants) was separate and distinct from the constant memory available to developers, and I don't think this has changed since. The driver dynamically manages switching between different views of constant memory as it launches kernels that reside in different compilation units. Although you cannot allocate constant memory dynamically, this pattern suffices because the 64K limit is not system-wide; it applies per compilation unit.
Use the first pattern cited in your question: statically declare the constant data and update it with cudaMemcpyToSymbol before launching kernels that reference it. In the second pattern, only reads of the pointer itself will go through constant memory. Reads using the pointer will be serviced by the normal L1/L2 cache hierarchy.
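For illustration, here is a minimal sketch of that recommended pattern; the kernel name scale_kernel and the launch configuration are made up for this example:
#include <cuda_runtime.h>

__constant__ float constData[256];

// Hypothetical kernel that reads the statically declared constant array
__global__ void scale_kernel(float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 256)
        out[i] = 2.0f * constData[i];   // read served by the constant cache
}

int main()
{
    float data[256];
    for (int i = 0; i < 256; ++i) data[i] = (float)i;

    float *d_out;
    cudaMalloc((void **)&d_out, 256 * sizeof(float));

    // Update the constant data before launching kernels that reference it
    cudaMemcpyToSymbol(constData, data, sizeof(data));
    scale_kernel<<<1, 256>>>(d_out);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    return 0;
}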

Related

Can I obtain the amount of allocated dynamic shared memory from within a kernel?

On the host side, I can save the amount of dynamic shared memory I intend to launch a kernel with, and use it. I can even pass that as an argument to the kernel. But - is there a way to get it directly from device code, without help from the host side? That is, have the code for a kernel determine, as it runs, how much dynamic shared memory it has available?
Yes, there's a special register holding that value, named %dynamic_smem_size. You can obtain this register's value in your CUDA C/C++ code by wrapping some inline PTX in a getter function:
__device__ unsigned dynamic_smem_size()
{
    unsigned ret;
    // %% escapes the special-register name inside inline PTX
    asm volatile ("mov.u32 %0, %%dynamic_smem_size;" : "=r"(ret));
    return ret;
}
You can similarly obtain the total size of allocated shared memory (static + dynamic) from the register %total_smem_size.
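A matching getter for that register would look like this (same pattern, with the doubled %% escape):
__device__ unsigned total_smem_size()
{
    unsigned ret;
    asm volatile ("mov.u32 %0, %%total_smem_size;" : "=r"(ret));
    return ret;
}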

2D array use in the kernel

In the CUDA examples I have read, I don't find any direct use of 2D array notation [][] in kernel code when the array is in global memory, unlike when it is in shared memory, e.g. in matrix multiplication. Is there any performance-related reason behind this?
Also, I read in an old thread that the following code is incorrect:
int **d_array;
cudaMalloc( (void**)&d_array , 5 * sizeof(int*) );
for(int i = 0 ; i < 5 ; i++)
{
cudaMalloc((void **)&d_array[i],10 * sizeof(int));
}
According to the author, "once the main thread allocates memory on the device, the main thread loses access to it; that is, it can only be accessed within kernels. So, when you try to call cudaMalloc on the 2nd dimension of the array, it throws an 'Access violation writing location' exception."
I don't understand what the author really means; actually, I find the above code correct.
Thank you for your help,
SS
Is there any performance-related reason behind this?
Yes, a doubly-subscripted array normally requires an extra pointer lookup, i.e. an extra memory read, before the data referenced can be accessed. By using "simulated" 2D access:
int val = d[i*columns+j];
instead of:
int val = d[i][j];
then only a single memory read access is required. The proper index is computed directly, rather than requiring the read of a row pointer. GPUs generally have plenty of compute capability relative to memory bandwidth, so trading an extra index computation for a saved memory read is usually a win.
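For illustration, a minimal sketch of a kernel using the flattened indexing, assuming a row-major layout; the kernel name and launch details are made up:
// d points to one contiguous allocation of rows * columns ints (row-major)
__global__ void add_one(int *d, int rows, int columns)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;   // row index
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // column index
    if (i < rows && j < columns)
        d[i * columns + j] += 1;                     // one computed address, no row-pointer read
}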
I don't understand what the author really means; actually, I find the above code correct.
The code is in fact incorrect.
This operation:
cudaMalloc( (void**)&d_array , 5 * sizeof(int*) );
creates a single contiguous allocation on the device, of length equal to the storage for 5 pointers, takes the starting address of that allocation, and stores it in the host memory location associated with d_array. That is what cudaMalloc does: it creates a device allocation of the requested length and stores the starting device address of that allocation in the provided host memory variable.
So let's deconstruct what is being asked for here:
cudaMalloc((void **)&d_array[i],10 * sizeof(int));
This says: create a device allocation of length 10*sizeof(int) and store its starting address in the location d_array[i]. But the location associated with d_array[i] is on the device, not the host, and the d_array pointer would have to be dereferenced to actually access it and store something there.
cudaMalloc does not do this. You cannot ask for the starting address of the device allocation to be stored in device memory. You can only ask for the starting address of the device allocation to be stored in host memory.
&d_array
is a pointer to host memory.
&d_array[i]
is a pointer to device memory.
The canonical 2D array worked example is now referenced in the cuda tag info link.
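For illustration only (the tag info example is the authoritative version), one common way to build such a pointer-to-pointer structure is to stage the row pointers in host memory first; h_rows is a name made up for this sketch:
int **d_array;     // device array of device row pointers
int  *h_rows[5];   // host staging area for the row pointers

// 1. Allocate space for 5 pointers on the device
cudaMalloc((void **)&d_array, 5 * sizeof(int *));

// 2. Allocate each row; the row's device address is stored in *host* memory
for (int i = 0; i < 5; i++)
    cudaMalloc((void **)&h_rows[i], 10 * sizeof(int));

// 3. Copy the row pointers from host memory into the device pointer array
cudaMemcpy(d_array, h_rows, 5 * sizeof(int *), cudaMemcpyHostToDevice);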

Passing value from device memory as kernel parameter in CUDA

I'm writing a CUDA application that has a step where the variance of some complex-valued input data is computed, and then that variance is used to threshold the data. I've got a reduction kernel that computes the variance for me, but I'm not sure if I have to pull the value back to the host to pass it to the thresholding kernel or not.
Is there a way to pass the value directly from device memory?
You can use a __device__ variable to hold the variance value in-between kernel calls.
Put this before the definition of the kernels that use it:
__device__ float my_variance = 0.0f;
Variables defined this way can be used by any kernel executing on the device (without requiring that they be explicitly passed as a kernel function parameter) and persist for the lifetime of the context, i.e. beyond the lifetime of any single kernel call.
It's not entirely clear from your question, but you can also define an array of data this way.
__device__ float my_variance[32] = {0.0f};
Likewise, allocations created by cudaMalloc live for the duration of the application/context (or until an appropriate cudaFree is encountered), so there is no need to "pull back the data" to the host if you want to use it in a subsequent kernel:
float *d_variance;
cudaMalloc((void **)&d_variance, sizeof(float));
my_reduction_kernel<<<...>>>(..., d_variance, ...);
my_thresholding_kernel<<<...>>>(..., d_variance, ...);
Any value set in *d_variance by the reduction kernel above will be properly observed by the thresholding kernel.
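Putting the __device__-variable approach together, a rough sketch (the kernel bodies are placeholders rather than a real reduction, and the data type is simplified to float):
__device__ float my_variance = 0.0f;

__global__ void my_reduction_kernel(const float *data, int n)
{
    // ... perform the reduction; one thread writes the final result ...
    if (blockIdx.x == 0 && threadIdx.x == 0)
        my_variance = 42.0f;              // placeholder for the computed variance
}

__global__ void my_thresholding_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] < my_variance)   // reads the value left by the previous kernel
        data[i] = 0.0f;
}

// Host side:
// my_reduction_kernel<<<grid, block>>>(d_data, n);
// my_thresholding_kernel<<<grid, block>>>(d_data, n);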

CUDA allocation alignment is 256 bytes - seriously?

In "CUDA C Programming Guide 5.0", p73 (also here) says "Any address of a variable residing in global memory or returned by one of the memory allocation routines from the driver or runtime API is always aligned to at least 256 bytes". I do not know the exact meaning of this sentence. Could anyone show an example for me? Many thanks.
A derivative question:
So, what about allocating a one-dimensional array of basic elements (like int) or self-defined ones? The starting address of the array will be a multiple of 256 bytes, while the address of each element in the array is not necessarily a multiple of 256 bytes?
The pointers which are allocated by using any of the CUDA Runtime's device memory allocation functions, e.g. cudaMalloc or cudaMallocPitch, are guaranteed to be 256-byte aligned, i.e. the address is a multiple of 256.
Consider the following example:
char *ptr1, *ptr2;
int bytes = 1;
cudaMalloc((void**)&ptr1,bytes);
cudaMalloc((void**)&ptr2,bytes);
Suppose the address returned in ptr1 is some multiple of 256; then the address returned in ptr2 will be at least (ptr1 + 256).
This is a restriction imposed by the device on which the memory is being allocated. Pointers are aligned mostly for performance reasons. (Someone from NVIDIA should be able to say whether there is some other reason as well.)
Important:
Pointer alignment is not always 256. On my device (GTX460M), it is 512. You can get the device pointer alignment from the cudaDeviceProp::textureAlignment field.
Alignment of pointers is also a requirement for binding the pointer to textures.
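A quick way to check both points on your own device is a small sketch like the following (it prints the textureAlignment field and verifies that a cudaMalloc'd address is a multiple of 256):
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("textureAlignment: %zu bytes\n", prop.textureAlignment);

    char *ptr;
    cudaMalloc((void **)&ptr, 1);   // even a 1-byte request comes back aligned
    printf("cudaMalloc returned %p, multiple of 256: %s\n",
           (void *)ptr, ((size_t)ptr % 256 == 0) ? "yes" : "no");

    cudaFree(ptr);
    return 0;
}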

Variable Size Arrays in CUDA

Is there any way to declare an array such as:
int arraySize = 10;
int array[arraySize];
inside a CUDA kernel/function? I read in another post that I could declare the size of the shared memory in the kernel call and then I would be able to do:
int array[];
But I cannot do this. I get a compile error: "incomplete type is not allowed". On a side note, I've also read that printf() can be called from within a thread and this also throws an error: "Cannot call host function from inside device/global function".
Is there anything I can do to make a variable sized array or equivalent inside CUDA? I am at compute capability 1.1, does this have anything to do with it? Can I get around the variable size array declarations from within a thread by defining a typedef struct which has a size variable I can set? Solutions for compute capabilities besides 1.1 are welcome. This is for a class team project and if there is at least some way to do it I can at least present that information.
About printf: the problem is that it only works on compute capability 2.x and later. There is an alternative, cuPrintf, that you might try.
For the allocation of variable size arrays in CUDA you do it like this:
Inside the kernel you write extern __shared__ int array[];
In the kernel call you pass the shared memory size in bytes as the third launch parameter, like mykernel<<<gridsize, blocksize, sharedmemsize>>>();
This is explained in the CUDA C programming guide in section B.2.3 about the __shared__ qualifier.
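A minimal sketch of that pattern (the kernel name and contents are made up):
__global__ void mykernel(int n)
{
    extern __shared__ int array[];   // size comes from the third launch parameter
    int tid = threadIdx.x;
    if (tid < n)
        array[tid] = tid;            // use the dynamically sized shared array
    __syncthreads();
    // ... rest of the computation ...
}

// Host side: request n * sizeof(int) bytes of dynamic shared memory per block
// mykernel<<<gridsize, blocksize, n * sizeof(int)>>>(n);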
If your arrays can be large, one solution would be to have one kernel that computes the required array sizes and stores them in an array; after that invocation, the host allocates the necessary arrays and passes an array of pointers to the threads, and then you run your computation as a second kernel.
Whether this helps depends on what you have to do, because these would be arrays allocated in global memory. If the total size (per block) of your arrays is less than the size of the available shared memory, you could declare one sufficiently large shared memory array and let your threads negotiate splitting it amongst themselves, as sketched below.
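A rough sketch of that shared-memory variant, assuming each thread gets an equal slice of one dynamically sized shared array (worker and per_thread are made-up names):
__global__ void worker(int per_thread)
{
    extern __shared__ int scratch[];   // blockDim.x * per_thread ints, sized at launch
    int *my_slice = scratch + threadIdx.x * per_thread;   // this thread's private slice

    for (int k = 0; k < per_thread; ++k)
        my_slice[k] = k;               // placeholder work on the slice
}

// Launch with: worker<<<gridsize, blocksize, blocksize * per_thread * sizeof(int)>>>(per_thread);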