Configuration and the mapping of constant memory in CUDA

I'd like to know whether the configuration of constant memory changes as the underlying architecture evolves from Kepler to Volta. To be specific, I have two questions:
1) Do the sizes of constant memory and the per-SM constant cache change?
2) What is the mapping from the cmem spaces to constant memory?
When compiling CUDA code to PTX with '-v' passed to ptxas via nvcc, we can see the memory usage, e.g.: ptxas info : Used 20 registers, 80 bytes cmem[0], 348 bytes cmem[2]. So does each cmem space map to constant memory? Does access to each cmem space go through the on-chip constant cache?

I have found the answer to the 1st question.
In the CUDA C Programming Guide, Table 14 shows the sizes of constant memory and the constant cache for different compute capabilities.
The constant memory size is always 64 KB from CC 2.x to 6.x. The on-chip constant cache is 8 KB up to CC 3.0 and increases to 10 KB for later architectures.
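As a minimal sketch (file name, array and kernel are made up), the constant memory size can also be queried at runtime, and the cmem lines in question appear when compiling with -Xptxas -v (or --ptxas-options=-v):
// const_query.cu -- compile with: nvcc -Xptxas -v -arch=sm_70 const_query.cu
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float table[256];                     // user data placed in constant memory

__global__ void scale(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = 2.0f * table[i % 256];            // reads are served through the constant cache
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // totalConstMem reports the 64 KB figure mentioned above.
    printf("Constant memory: %zu bytes\n", prop.totalConstMem);
    return 0;
}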

Related

Using maximum shared memory in Cuda

I am unable to use more than 48 KB of shared memory (on a V100, CUDA 10.2).
I call
cudaFuncSetAttribute(my_kernel,
                     cudaFuncAttributePreferredSharedMemoryCarveout,
                     cudaSharedmemCarveoutMaxShared);
before launching my_kernel for the first time.
I use launch bounds and dynamic shared memory inside my_kernel:
__global__
void __launch_bounds__(768, 1)
my_kernel(...)
{
    extern __shared__ float2 sh[];
    ...
}
The kernel is called like this:
dim3 blk(32, 24); // 768 threads, as in launch_bounds.
my_kernel<<<grd, blk, 64 * 1024, my_stream>>>( ... );
cudaGetLastError() after the kernel call returns cudaErrorInvalidValue.
If I use <= 48 K of shared memory (e.g., my_kernel<<<grd, blk, 48 * 1024, my_stream>>>), it works.
Compilation flags are:
nvcc -std=c++14 -gencode arch=compute_70,code=sm_70 -Xptxas -v,-dlcm=cg
What am I missing?
From the documentation:
Compute capability 7.x devices allow a single thread block to address the full capacity of shared memory: 96 KB on Volta, 64 KB on Turing. Kernels relying on shared memory allocations over 48 KB per block are architecture-specific, as such they must use dynamic shared memory (rather than statically sized arrays) and require an explicit opt-in using cudaFuncSetAttribute() as follows:
cudaFuncSetAttribute(my_kernel, cudaFuncAttributeMaxDynamicSharedMemorySize, 98304);
When I add that line to the code you have shown, the invalid value error goes away. For a Turing device, you would want to change that number from 98304 to 65536. And of course 65536 would be sufficient for your example as well, although not sufficient to use the maximum available on Volta, as stated in the question title.
In a similar fashion, kernels on Ampere devices should be able to use up to 160 KB of shared memory (cc 8.0) or 100 KB (cc 8.6), dynamically allocated, using the above opt-in mechanism, with the number 98304 changed to 163840 (for cc 8.0) or 102400 (for cc 8.6).
Note that the above covers the Volta (7.0), Turing (7.5) and Ampere (8.x) cases. GPUs with compute capability prior to 7.x have no ability to address more than 48 KB per threadblock. In some cases, these GPUs may have more shared memory per multiprocessor, but this is provided to allow for greater occupancy in certain threadblock configurations; the programmer has no way to use more than 48 KB per threadblock.
Although it doesn't pertain to the code presented here (which already uses a dynamic shared memory allocation), note from the excerpted documentation that using more than 48 KB of shared memory on devices that support it requires two things:
1) the opt-in mechanism already described above, and
2) a dynamic rather than static shared memory allocation in the kernel code.
example of dynamic:
extern __shared__ int shared_mem[];
example of static:
__shared__ int shared_mem[1024];
Dynamically allocated shared memory also requires a size to be passed in the kernel launch configuration parameters (an example is given in the question).
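Putting it together, a minimal sketch of the opt-in for the code in the question could look like this (my_kernel, grd, blk and my_stream are the names from the question; 98304 is the Volta figure from the quoted documentation):
// Opt in once, before the first launch of my_kernel, to allow more than
// 48 KB of dynamic shared memory per block on a Volta (cc 7.0) device.
cudaFuncSetAttribute(my_kernel,
                     cudaFuncAttributeMaxDynamicSharedMemorySize,
                     98304);                        // 96 KB on Volta; use 65536 on Turing

// Optional: keep the carveout preference from the question.
cudaFuncSetAttribute(my_kernel,
                     cudaFuncAttributePreferredSharedMemoryCarveout,
                     cudaSharedmemCarveoutMaxShared);

dim3 blk(32, 24);                                   // 768 threads, matching __launch_bounds__
my_kernel<<<grd, blk, 64 * 1024, my_stream>>>( ... );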

Understanding CUDA kernel stack usage and register spilling

I am trying to fully understand the ptxas -v output for kernel stack usage and register spilling (for the sm_35 architecture). For one of my kernels it produces:
3536 bytes stack frame, 3612 bytes spill stores, 6148 bytes spill loads
ptxas info : Used 255 registers, 392 bytes cmem[0]
I know that the stack frame is allocated in local memory, which physically lives where global memory does and is private to each thread.
My questions are:
1) Is the memory needed for register spilling also allocated in local memory?
2) Is the total amount of memory needed for register spilling and stack usage equal to [number of threads] x [3536 bytes]? In other words, do register spill loads/stores operate on the stack frame?
3) The number of spill stores/loads doesn't indicate the size of the transfers. Are these always 32-bit registers? Would a 64-bit floating point spill then be counted as 2 spill stores?
4) Are spill stores/loads cached in the L2 cache?
Registers are spilled to local memory. "local" means "thread-local", i.e. storage private to each thread.
The amount of local memory required for the entire launch is at least number_of_threads times local_memory_bytes_per_thread. Due to allocation granularity it can often be more.
The compiler statistics for spill transfers are already normalized to bytes, as individual local memory accesses may have different widths. Inspection of the generated machine code (run cuobjdump --dump-sass on the binary) will show the width of the individual accesses. The relevant instructions have names like LLD, LST, LDL, STL.
I am reasonably sure that local memory accesses are cached in the L1 and L2 caches, but cannot quote the relevant paragraphs from the documentation at this time.

CUDA. Shared Memory vs Constant

I need a large amount of constant data, more than 6-8 KB, up to 16 KB. At the same time I don't use shared memory, and now I want to store this constant data in shared memory instead. Is that a good idea? Any performance approximations? Does broadcasting work for shared memory as well as it does for constant memory?
Performance is critical for the application, and I think I only have an 8 KB constant memory cache on my Tesla C2075 (compute capability 2.0).
In compute capability 2.0, the same memory is used for L1 and shared memory. Partitioning between L1 and shared memory can be controlled with the cudaFuncSetCacheConfig() call. I would suggest setting L1 to the maximum possible (48K) with
cudaFuncSetCacheConfig(MyKernel, cudaFuncCachePreferL1);
Then, pull your constant data from global memory, and let L1 handle the caching. If you have multiple arrays that are const, you can direct the compiler to use the constant cache for some of them by using the const qualifier in the kernel argument list. That way, you can leverage both L1 and the constant cache to cache your constants.
Broadcasting works both for L1 and constant cache accesses.
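A minimal sketch of that suggestion (the kernel, array name and launch parameters are made up; the device pointers are assumed to be allocated elsewhere):
// Read-only coefficient table (16 KB of floats) kept in global memory, as suggested above.
__global__ void my_kernel(const float *coeffs, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] *= coeffs[i % 4096];   // read-only data pulled from global memory, cached in L1
}

void launch(const float *d_coeffs, float *d_out, int n, dim3 grid, dim3 block)
{
    // Give L1 the largest share of the L1/shared split before launching.
    cudaFuncSetCacheConfig(my_kernel, cudaFuncCachePreferL1);
    my_kernel<<<grid, block>>>(d_coeffs, d_out, n);
}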

Why is the constant memory size limited in CUDA?

According to the CUDA C Programming Guide, a constant memory access is only beneficial when the multiprocessor's constant cache is hit (Section 5.3.2.4)¹. Otherwise there can be even more memory requests for a half-warp than in the case of a coalesced global memory read. So why is the constant memory size limited to 64 KB?
One more question, so as not to ask twice: as far as I understand, in the Fermi architecture the texture cache is combined with the L2 cache. Does texture usage still make sense, or are global memory reads cached in the same manner?
¹ Constant Memory (Section 5.3.2.4)
The constant memory space resides in device memory and is cached in the constant cache mentioned in Sections F.3.1 and F.4.1.
For devices of compute capability 1.x, a constant memory request for a warp is first split into two requests, one for each half-warp, that are issued independently.
A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.
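As a short illustration of the request splitting described in the quote (a hypothetical kernel; c is a user-declared __constant__ array):
__constant__ float c[1024];

__global__ void broadcast_vs_serialized(float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float a = c[0];          // same address across the warp: served as a single broadcast from the constant cache
    float b = c[i % 1024];   // different addresses within the warp: split into one request per distinct address
    out[i] = a + b;
}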
The constant memory size is 64 KB for compute capability 1.0-3.0 devices. The cache working set is only 8KB (see the CUDA Programming Guide v4.2 Table F-2).
Constant memory is used by the driver, compiler, and variables declared __device__ __constant__. The driver uses constant memory to communicate parameters, texture bindings, etc. The compiler uses constants in many of the instructions (see disassembly).
Variables placed in constant memory can be read and written using the host runtime functions cudaMemcpyToSymbol() and cudaMemcpyFromSymbol() (see the CUDA Programming Guide v4.2 section B.2.2). Constant memory is in device memory but is accessed through the constant cache.
On Fermi, the texture cache, constant cache, L1 cache, and instruction cache are all level 1 caches in or around each SM. All level 1 caches access device memory through the L2 cache.
The 64 KB constant limit is per CUmodule which is a CUDA compilation unit. The concept of CUmodule is hidden under the CUDA runtime but accessible by the CUDA Driver API.
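A minimal sketch of the mechanism described above (the array name and size are made up):
__device__ __constant__ float coeffs[64];   // placed in the module's 64 KB constant space

void set_and_check_coeffs(const float *host_values)
{
    // Host-side reads and writes of the __constant__ symbol go through the runtime API.
    cudaMemcpyToSymbol(coeffs, host_values, 64 * sizeof(float));

    float readback[64];
    cudaMemcpyFromSymbol(readback, coeffs, sizeof(readback));
}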

Where does CUDA allocate the stack frame for kernels?

My kernel call fails with "out of memory". It makes significant use of the stack frame and I was wondering if this is the reason for the failure.
When invoking nvcc with --ptxas-options=-v it prints the following profile information:
150352 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 59 registers, 40 bytes cmem[0]
Hardware: GTX480, sm20, 1.5 GB device memory, 48 KB shared memory per multiprocessor.
My question is: where is the stack frame allocated? In shared memory, global memory, constant memory, ...?
I tried with 1 thread per block, as well as with 32 threads per block. Same "out of memory".
Another issue: one can only increase the number of threads resident on a multiprocessor if the total number of registers does not exceed the number available on the multiprocessor (32K for my card). Does something similar apply to the stack frame size?
Stack is allocated in local memory. Allocation is per physical thread (GTX480: 15 SM * 1536 threads/SM = 23040 threads). You are requesting 150,352 bytes/thread => ~3.4 GB of stack space. CUDA may reduce the maximum physical threads per launch if the size is that high. The CUDA language is not designed to have a large per thread stack.
In terms of registers GTX480 is limited to 63 registers per thread and 32K registers per SM.
The stack frame is most likely in local memory.
I believe there is some limit on local memory usage, but even without it, I think the CUDA driver might allocate local memory for more than just one thread with your <<<1,1>>> launch configuration.
One way or another, even if you manage to actually run your code, I fear it may be quite slow because of all those stack operations. Try to reduce the number of function calls (e.g. by inlining those functions).
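As a rough sketch of the sizing argument from the first answer (device 0 is assumed; 150352 is the per-thread stack frame reported by ptxas above):
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Maximum physically resident threads = SMs * max threads per SM
    // (15 * 1536 = 23040 on a GTX480, as in the answer above).
    size_t resident = (size_t)prop.multiProcessorCount * prop.maxThreadsPerMultiProcessor;
    size_t stack_bytes_per_thread = 150352;    // "bytes stack frame" from ptxas -v
    printf("Worst-case stack allocation: ~%zu MB\n",
           (resident * stack_bytes_per_thread) >> 20);
    return 0;
}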