CUDA Programming - Shared memory configuration

Could you please explain the differences between using either "16 KB shared memory + 48 KB L1 cache" or "48 KB shared memory + 16 KB L1 cache" in CUDA programming? What should I expect in terms of execution time? When could I expect a smaller GPU time?

On Fermi and Kepler NVIDIA GPUs, each SM has a 64 KB chunk of on-chip memory which can be configured as 16/48 or 48/16 shared memory/L1 cache. Which mode you use depends on how much shared memory your kernel uses. If your kernel uses a lot of shared memory, then you would probably find that configuring it as 48 KB of shared memory allows higher occupancy and hence better performance.
On the other hand, if your kernel does not use shared memory at all, or if it only uses a very small amount per thread, then you would configure it as 48KB L1 cache.
What counts as a "very small amount" is probably best explored with the CUDA Occupancy Calculator, a spreadsheet included with the CUDA Toolkit. It lets you investigate the effect of different amounts of shared memory per block and different block sizes.
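For reference, the split is selected from the host with the runtime API, either device-wide or per kernel (the preference is a hint the driver honors where possible; on Kepler there is also a 32 KB / 32 KB option, cudaFuncCachePreferEqual). A minimal sketch, where both kernels are placeholders invented for illustration:

#include <cuda_runtime.h>

// Placeholder kernels, only here to illustrate the API calls below.
__global__ void sharedHeavyKernel(float *data)
{
    __shared__ float tile[8192];              // 32 KB of static shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = data[i];
    __syncthreads();
    data[i] = tile[threadIdx.x] * 2.0f;
}

__global__ void cacheFriendlyKernel(float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= 2.0f;                          // no shared memory at all
}

int main()
{
    // Device-wide default: prefer 48 KB shared memory / 16 KB L1.
    cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);

    // Per-kernel preferences override the device-wide setting.
    cudaFuncSetCacheConfig(sharedHeavyKernel, cudaFuncCachePreferShared);  // 48 KB shared
    cudaFuncSetCacheConfig(cacheFriendlyKernel, cudaFuncCachePreferL1);    // 48 KB L1

    // ... allocate device memory and launch the kernels as usual ...
    return 0;
}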

Related

Amount of data that can be held in shared memory in CUDA

On my GPU the maximum number of threads per block is 1024. I am working on an image processing project using CUDA. Now, if I want to use shared memory, does that mean I can only work with 1024 pixels per block and need to copy only those 1024 elements into shared memory?
Your question is quite unclear, so I will answer to what is asked in the title.
The amount of data that can be held in shared memory in CUDA depends on the Compute Capability of your GPU.
For instance, on CC 2.x and 3.x:
On devices of compute capability 2.x and 3.x, each multiprocessor has 64 KB of on-chip memory that can be partitioned between L1 cache and shared memory.
See the "Configuring the amount of shared memory" section of the NVIDIA Parallel Forall devblog post Using Shared Memory in CUDA C/C++.
The optimization you have to think about is avoiding bank conflicts by carefully mapping threads' accesses to memory banks. This is introduced in that blog post and you should read about it; a common padding trick is sketched below.
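As an illustration of the bank-conflict point, here is a sketch along the lines of the classic padded-tile transpose from that blog post; the 32x32 tile, square matrix, and launch configuration are assumptions made for this sketch:

#include <cuda_runtime.h>

#define TILE_DIM 32   // matches the warp size; an assumption for this sketch

// Tiled transpose of a square width x width matrix, launched with
// dim3 block(TILE_DIM, TILE_DIM) and dim3 grid(width/TILE_DIM, width/TILE_DIM).
__global__ void transposeTile(float *out, const float *in, int width)
{
    // The +1 padding shifts each row into a different bank, so the
    // column-wise reads below are conflict-free.
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;

    // Coalesced read from global memory, row-wise write into shared memory.
    tile[threadIdx.y][threadIdx.x] = in[y * width + x];
    __syncthreads();

    // Swap the block indices so the global write is also coalesced;
    // the shared-memory read is column-wise, which the padding makes cheap.
    x = blockIdx.y * TILE_DIM + threadIdx.x;
    y = blockIdx.x * TILE_DIM + threadIdx.y;
    out[y * width + x] = tile[threadIdx.x][threadIdx.y];
}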

CUDA shared memory occupancy

If I have 48 KB of shared memory per SM and I write a kernel where each block allocates 32 KB of shared memory, does that mean that only 1 block can be running on one SM at the same time?
Yes, that is correct.
Shared memory must support the "footprint" of all "resident" threadblocks. In order for a threadblock to be launched on a SM, there must be enough shared memory to support it. If not, it will wait until the presently executing threadblock has finished.
There is some nuance to this arriving with Maxwell GPUs (cc 5.0, 5.2). These GPUs support either 64KB (cc 5.0) or 96KB (cc 5.2) of shared memory. In this case, the maximum shared memory available to a single threadblock is still limited to 48KB, but multiple threadblocks may use more than 48KB in aggregate, on a single SM. This means a cc 5.2 SM could support 2 threadblocks, even if both were using 32KB shared memory.
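If you would rather query these limits than memorize them per compute capability, the device properties expose both the per-block and per-SM shared memory sizes (the per-SM field requires CUDA 6.5 or newer). A minimal sketch:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // device 0; adjust as needed

    printf("Compute capability:      %d.%d\n", prop.major, prop.minor);
    printf("Shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
    printf("Shared memory per SM:    %zu bytes\n", prop.sharedMemPerMultiprocessor);
    return 0;
}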

Why is the constant memory size limited in CUDA?

According to "CUDA C Programming Guide", a constant memory access benefits only if a multiprocessor constant cache is hit (Section 5.3.2.4)1. Otherwise there can be even more memory requests for a half-warp than in case of the coalesced global memory read. So why the constant memory size is limited to 64 KB?
One more question, so as not to ask twice: as far as I understand, in the Fermi architecture the texture cache is combined with the L2 cache. Does texture usage still make sense, or are global memory reads cached in the same manner?
¹ Constant Memory (Section 5.3.2.4)
The constant memory space resides in device memory and is cached in the constant cache mentioned in Sections F.3.1 and F.4.1.
For devices of compute capability 1.x, a constant memory request for a warp is first split into two requests, one for each half-warp, that are issued independently.
A request is then split into as many separate requests as there are different memory addresses in the initial request, decreasing throughput by a factor equal to the number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case of a cache hit, or at the throughput of device memory otherwise.
The constant memory size is 64 KB for compute capability 1.0-3.0 devices. The cache working set is only 8KB (see the CUDA Programming Guide v4.2 Table F-2).
Constant memory is used by the driver, compiler, and variables declared __device__ __constant__. The driver uses constant memory to communicate parameters, texture bindings, etc. The compiler uses constants in many of the instructions (see disassembly).
Variables placed in constant memory can be read and written using the host runtime functions cudaMemcpyToSymbol() and cudaMemcpyFromSymbol() (see the CUDA Programming Guide v4.2 section B.2.2). Constant memory is in device memory but is accessed through the constant cache.
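As a concrete illustration of that usage (the coefficient table and kernel below are hypothetical, not from the Programming Guide), a __constant__ variable is declared at file scope and filled from the host with cudaMemcpyToSymbol(); the kernel then reads it through the constant cache, ideally with all threads of a warp reading the same element so the access is a single broadcast:

#include <cuda_runtime.h>

// Hypothetical filter coefficients kept in constant memory; the table size
// and kernel are illustrative only.
__constant__ float d_coeffs[64];

__global__ void scaleByBlockCoeff(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Every thread in the block reads the same constant element, so the
    // access is a single broadcast from the constant cache.
    if (i < n)
        data[i] *= d_coeffs[blockIdx.x % 64];
}

int main()
{
    float h_coeffs[64];
    for (int i = 0; i < 64; ++i)
        h_coeffs[i] = 1.0f / (i + 1);

    // The copy lands in device memory, but kernel reads are served
    // through the constant cache.
    cudaMemcpyToSymbol(d_coeffs, h_coeffs, sizeof(h_coeffs));

    // ... allocate `data`, launch scaleByBlockCoeff, copy results back ...
    return 0;
}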
On Fermi, the texture, constant, L1 and I-Cache are all level-1 caches in or around each SM. All level-1 caches access device memory through the L2 cache.
The 64 KB constant limit is per CUmodule which is a CUDA compilation unit. The concept of CUmodule is hidden under the CUDA runtime but accessible by the CUDA Driver API.

Why doesn't CUDA allow us to use all of the SM memory as L1 cache?

In a CUDA device, each SM has 64KB of on-chip memory that is placed close to it. By default, this is partitioned into 48KB of shared memory and 16KB of L1 cache. For kernels whose memory access pattern is hard to determine, this partitioning can be changed to 16KB of shared memory and 48KB of L1 cache.
Why doesn't CUDA allow all of the 64KB per-SM on-chip memory to be used as L1 cache?
There are many kinds of kernels which have no use for shared memory, but could use that extra 16KB of L1 cache.
I believe the reason for this is Computer Graphics. When running OpenGL or Direct3D code, the SM uses the direct-mapped memory (CUDA shared) for one purpose (e.g. vertex attributes), and the set-associative memory (L1 cache) for another. For the graphics pipeline, the architects are able to tune specifically how much memory they need for things like vertex attributes based on the vertex throughput limits of other units (for example).
When thinking about architectural decisions for GPUs (and processor design in general, for that matter), it's important to remember that many decisions are largely economic. GPU Computing has a day job: games and graphics. If it weren't for this day job, GPU computing would not have become economically viable, and massively parallel computing would not likely have become available to the masses.
Nearly every feature of the GPU for computing is used in some way in the graphics pipeline. If it is not (think ECC memory), then it must be financed with higher product prices for the markets that use it (think HPC).

CUDA: Is coalesced global memory access faster than shared memory? Also, does allocating a large shared memory array slow down the program?

I'm not finding an improvement in speed with shared memory on an NVIDIA Tesla M2050
with about 49K shared memory per block. Actually if I allocate
a large char array in shared memory it slows down my program. For example
__shared__ char database[49000];
gives me slower running times than
__shared__ char database[4900];
The program accesses only the first 100 chars of database so the extra space
is unnecessary. I can't figure out why this is happening. Any help would be appreciated.
Thanks.
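A minimal reconstruction of the scenario described might look like the sketch below; the kernel body and launch details are guesses for illustration, not the asker's actual code:

// A large static shared array of which only the first 100 bytes are ever used.
__global__ void lookupKernel(const char *in, char *out, int n)
{
    __shared__ char database[49000];   // reserves ~48 KB per block, used or not

    if (threadIdx.x < 100)             // only the first 100 entries are touched
        database[threadIdx.x] = in[threadIdx.x];
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = database[i % 100];
}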
The reason for the relatively poor performance of CUDA shared memory when using larger arrays may have to do with the fact that each multiprocessor has a limited amount of available shared memory.
Each multiprocessor hosts several processors: for modern devices, typically 32, the same as the number of threads in a warp. This means that, in the absence of divergence or memory stalls, the average processing rate is 32 instructions per cycle (latency is high due to pipelining).
CUDA schedules several blocks to a multiprocessor. Each block consists of several warps. When a warp stalls on a global memory access (even coalesced accesses have high latency), other warps are processed. This effectively hides the latency, which is why high-latency global memory is acceptable in GPUs. To effectively hide latency, you need enough extra warps to execute until the stalled warp can continue. If all warps stall on memory accesses, you can no longer hide the latency.
Shared memory is allocated to blocks in CUDA, and stored on a single multiprocessor on the GPU device. Each multiprocessor has a relatively small, fixed amount of shared memory space. CUDA cannot schedule more blocks to multiprocessors than the multiprocessors can support in terms of shared memory and register usage. In other words, if the amount of shared memory on a multiprocessor is X and each block requires Y shared memory, CUDA will schedule no more than floor(X/Y) blocks at a time to each multiprocessor (it might be fewer, since there are other constraints, such as register usage).
Ergo, by increasing shared memory usage of a block, you might be reducing the number of active warps - the occupancy - of your kernel, thereby hurting performance. You should look into your kernel code by compiling with the -Xptxas="-v" flag; this should give you register and shared & constant memory usage for each kernel. Use this data and your kernel launch parameters, as well as other required information, in the most recent version of the CUDA Occupancy Calculator to determine whether you might be affected by occupancy.
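Newer toolkits (CUDA 6.5 and later, which postdate the spreadsheet-era advice above) also let you query this at run time with cudaOccupancyMaxActiveBlocksPerMultiprocessor(). The sketch below uses a placeholder kernel that takes its shared memory dynamically so the two sizes from the question can be compared directly; the kernel body and block size are assumptions:

#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel with dynamically sized shared memory (the question's
// kernel uses a static array instead; the body here is an assumption).
__global__ void placeholderKernel(char *out)
{
    extern __shared__ char database[];
    if (threadIdx.x < 100)
        database[threadIdx.x] = (char)threadIdx.x;
    __syncthreads();
    out[blockIdx.x * blockDim.x + threadIdx.x] = database[threadIdx.x % 100];
}

int main()
{
    const int blockSize = 256;
    const size_t sizes[] = {4900, 49000};   // the two sizes from the question

    for (int k = 0; k < 2; ++k) {
        int blocksPerSM = 0;
        // How many blocks of this kernel can be resident on one SM, given the
        // block size and the dynamic shared memory requested per block?
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM,
                                                      placeholderKernel,
                                                      blockSize,
                                                      sizes[k]);
        printf("%zu bytes of shared memory -> %d resident blocks per SM\n",
               sizes[k], blocksPerSM);
    }
    return 0;
}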
EDIT:
To address the other part of your question, assuming no shared memory bank conflicts and perfect coalescing of global memory accesses... there are two dimensions to this answer: latency and bandwidth. The latency of shared memory will be lower than that of global memory, since shared memory is on-chip. The bandwidth will be much the same. Ergo, if you are able to hide global memory access latency through coalescing, there is no penalty (note: the access pattern is important here, in that shared memory allows for potentially more diverse access patterns with little to no performance loss, so there can be benefits to using shared memory even if you can hide all the global memory latency).
Also, if you increase the shared memory per block, CUDA will schedule grids with fewer concurrent blocks so that they all have enough shared memory, which reduces parallelism and increases execution time.
The resources available on the GPU are limited. The number of blocks running concurrently is roughly inversely proportional to the size of shared memory per block.
This explains why the runtime is slower when you launch a kernel that uses a really large amount of shared memory.