Causes of Low Achieved Occupancy - cuda

The NVIDIA website mentions a few causes of low achieved occupancy, among them uneven distribution of workload among blocks, which results in blocks hoarding shared memory resources and not releasing them until the block is finished. The suggestion is to decrease the size of a block, thus increasing the overall number of blocks (given that we keep the total number of threads constant, of course).
A good explanation of that was also given here on Stack Overflow.
Given the aforementioned information, shouldn't the right course of action (in order to maximize performance) simply be to set the block size as small as possible (equal to the size of a warp, say 32 threads)? That is, unless a larger number of threads needs to communicate through shared memory, I assume.

Given the aforementioned information, shouldn't the right course of action (in order to maximize performance) simply be to set the block size as small as possible (equal to the size of a warp, say 32 threads)?
No.
As shown in the documentation here, there is a limit on the number of blocks per multiprocessor, which would leave you with a maximum theoretical occupancy of 25% or 50% when using 32-thread blocks, depending on what hardware you run the kernel on.
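For example (illustrative limits matching common compute capabilities): on a device that allows at most 16 resident blocks per SM and 2048 resident threads per SM, 32-thread blocks give at most 16 x 32 = 512 active threads, i.e. 25% theoretical occupancy; with a 32-blocks-per-SM limit the cap is 32 x 32 = 1024 threads, i.e. 50%.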

Usually it is a good approach to use blocks that are as small as possible but big enough to saturate the device (64 or 128 threads per block, depending on the device). It is not always possible, since you might want to synchronize threads or communicate via shared memory.
Having a large number of small blocks allows the GPU to do a kind of "auto-balancing" and keep all SMs running.
The same applies to the CPU: if you have 5 independent tasks and each takes 4 seconds to finish, but you have only 4 cores, then the whole job will finish after 8 seconds (during the first 4 seconds, 4 cores run the first 4 tasks, and then 1 core runs the last task while 3 cores are idling).
If you are able to divide the whole job into 20 tasks that take 1 second each, then the whole job will be done in 5 seconds. So having a lot of small tasks helps to utilize the hardware.
In the case of a GPU you can have a large number of active blocks (on Titan X it is 24 SMs x 32 active blocks = 768 blocks), and it would be good to use this capacity.
Anyway, it is not always true that you need to fully saturate the device. On many tasks I can see that using 32 threads per block (so having 50% possible occupancy) gives the same performance as using 64 threads per block.
In the end it is all a matter of doing some benchmarks and choosing whatever is best for you in a given case with given hardware.
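As a starting point for such benchmarks, the occupancy API in the CUDA runtime can report the theoretical limit for each candidate block size before you time anything. A minimal sketch (mykernel and the candidate block sizes are placeholders):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void mykernel(float *data) { /* placeholder kernel */ }

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    for (int blockSize = 32; blockSize <= 256; blockSize *= 2) {
        int blocksPerSM = 0;
        // Theoretical number of resident blocks per SM for this kernel and block size.
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, mykernel, blockSize, 0);
        float occupancy = (float)(blocksPerSM * blockSize) / prop.maxThreadsPerMultiProcessor;
        printf("block size %3d: %2d blocks/SM, theoretical occupancy %.0f%%\n",
               blockSize, blocksPerSM, occupancy * 100.0f);
    }
    return 0;
}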

Related

Is there a correlation between the exact meaning of gpu wave and thread block?

computation performed by a GPU kernel is partitioned into groups of threads called thread blocks, which typically execute in concurrent groups, resulting in waves of execution

What exactly does "wave" mean here? Isn't that the same meaning as warp?
A GPU can execute a maximum number of threads, grouped in a maximum number of thread blocks. When the whole grid for a kernel is larger than the maximum of either of those limits, or if there are concurrent kernels occupying the GPU, it will launch as many thread blocks as possible. When the last thread of a block has terminated, a new block will start.
Since blocks typically have equal run times and scheduling has a certain latency, this often results in bursts of activity on the GPU that you can see in the occupancy. I believe this is what is meant by that sentence.
Do not confuse this with the term "wavefront" which is what AMD calls a warp.
Wave: a group of thread blocks running concurrently on the GPU.
Full Wave: (number of SMs on the device) x (max active blocks per SM)
Launching the grid with fewer thread blocks than a full wave results in low achieved occupancy. Most launches are composed of some number of full waves and possibly 1 incomplete wave. It should be mentioned that the maximum size of a wave is based on how many blocks can fit on one SM, which depends on registers per thread, shared memory per block, etc.
If we look at the blog post by Julien Demoth and use those values to understand the issue:
max # of threads per SM: 2048 (NVIDIA Tesla K20)
kernel has 4 blocks of 256 threads per SM
Theoretical Occupancy: 50% (4*256/2048)
Full Wave: (# of SMs) x (max active blocks per SM) = 13x4 = 52 blocks
The kernel is launched with 128 blocks, so there are 2 full waves and 1 incomplete wave of 24 blocks. The full wave value may be increased by using the launch_bounds attribute or by configuring the amount of shared memory per SM (for some devices, see also the related report), etc.
Also, the incomplete wave is called the partial last wave, and it has a negative effect on performance due to its low occupancy. This underutilization of the GPU is called the tail effect, and it is especially dominant when launching only a few thread blocks in a grid.
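If it helps to see the arithmetic in code, the wave size can also be computed at run time from the device properties and the occupancy API. A minimal sketch (mykernel, the block size and the grid size are placeholders):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void mykernel(float *data) { /* placeholder kernel */ }

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    int blockSize = 256;          // threads per block used at launch
    int gridSize  = 128;          // blocks actually launched
    int blocksPerSM = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, mykernel, blockSize, 0);

    int fullWave = prop.multiProcessorCount * blocksPerSM;
    printf("full wave = %d SMs x %d blocks/SM = %d blocks\n",
           prop.multiProcessorCount, blocksPerSM, fullWave);
    printf("%d blocks launched: %d full wave(s) + %d blocks in the partial last wave\n",
           gridSize, gridSize / fullWave, gridSize % fullWave);
    return 0;
}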

CUDA purpose of manually specifying thread blocks

I just started learning CUDA and there is something I can't quite understand yet. I was wondering whether there is a reason for splitting threads into blocks besides optimizing GPU workload. Because if there isn't, I can't understand why you would need to manually specify the number of blocks and their sizes. Wouldn't it be better to simply supply the number of threads needed to solve the task and let the GPU distribute the threads over the SMs?
That is, consider the following dummy task and GPU setup.
number of available SMs: 16
max number of blocks per SM: 8
max number of threads per block: 1024
Let's say we need to process every entry of a 256x256 matrix and we want a thread assigned to every entry, i.e. the overall number of threads is 256x256 = 65536. Then the number of blocks is:
overall number of threads / max number of threads per block = 65536 / 1024 = 64
Finally, 64 blocks will be distributed among the 16 SMs, making it 4 blocks per SM. Now these are trivial calculations that the GPU could handle automatically, right?
The only other reason for manually supplying the number of blocks and their sizes, that I can think of, is separating threads in a specific fashion in order for them to have shared local memory, i.e. somewhat isolating one block of threads from another block of threads.
But surely there must be another reason?
I will try to answer your question from the point of view of what I understand best.
The major factor that decides the number of threads per block is the multiprocessor occupancy. The occupancy of a multiprocessor is calculated as the ratio of the active warps to the maximum number of active warps that is supported. The threads of a warp may be active or dormant for many reasons depending on the application. Hence a fixed structure for the number of threads may not be viable.
Besides, each multiprocessor has a fixed number of registers shared among all the threads of that multiprocessor. If the total number of registers needed exceeds that maximum, the kernel launch is liable to fail.
Further to the above, the fixed shared memory available to a given block may also affect the decision on the number of threads, in case the shared memory is heavily used.
Hence a naive way to decide the number of threads is to use the occupancy calculator spreadsheet, in case you want to stay completely oblivious to the type of application at hand. The better option is to consider occupancy along with the characteristics of the application being run.
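If your CUDA version provides the occupancy API, you can also ask the runtime for an occupancy-maximizing block size at run time instead of using the spreadsheet. A minimal sketch (the process kernel and n are placeholders):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void process(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];   // placeholder per-element work
}

int main() {
    int n = 256 * 256;                  // one thread per matrix entry, as in the question
    int minGridSize = 0, blockSize = 0;
    // Ask the runtime for a block size that maximizes occupancy for this kernel.
    cudaOccupancyMaxPotentialBlockSize(&minGridSize, &blockSize, process, 0, 0);
    int gridSize = (n + blockSize - 1) / blockSize;   // enough blocks to cover all n threads
    printf("suggested block size: %d, grid size: %d\n", blockSize, gridSize);
    // process<<<gridSize, blockSize>>>(d_in, d_out, n);   // launch with the suggested shape
    return 0;
}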

NSIGHT: What are those Red and Black colours in kernel-level experiments?

I am trying to learn NSIGHT.
Can someone tell me what these red marks indicate in the following screenshot taken from the User Guide? There are two red marks in the Occupancy Per SM section and two in the Warps section, as you can see.
Similarly, what do those black lines, which vary in length, indicate?
Another example from the same page:
Here is the basic explanation:
Grey bars represent the available amount of resources your particular device has (due to both its hardware and its compute capability).
Black bars represent the theoretical limit that it is possible to achieve for your kernel under your launch configuration (blocks per grid and threads per block).
The red dots represent the resources that you are using.
For instance, looking at "Active warps" on the first picture:
Grey: The device supports 64 active warps concurrently.
Black: Because of the use of the registers, it is theoretically possible to map 64 warps.
Red: You achieve 63.56 active warps.
In that case, the grey bar is under the black one, so you can't see the grey one.
In some cases, it can happen that the theoretical limit is greater than the device limit. This is OK. You can see examples in the second picture (block limit (shared memory) and block limit (registers)). That makes sense if you consider that your kernel uses only a small fraction of your resources: if one block uses 1 register, it could be possible to launch 65536 blocks (without taking into account other factors), but your device limit is still 16. The number 128 comes from 65536/512. The same applies to the shared memory section: since you use 0 bytes of shared memory per block, you could launch an infinite number of blocks as far as the shared memory limitation is concerned.
About blank spaces
The theoretical and the achieved values are the same for all rows except for "Active warps" and "Occupancy".
You are really executing 1024 threads per block with 32 warps per block on the first picture.
In the case of Occupancy and Active warps, I guess the achieved number is a kind of statistical measure. I think that is because of the nature of the CUDA model. In CUDA each thread within a warp is executed simultaneously on an SM. The way of hiding high-latency operations, such as memory reads, is through "almost-free" warp context switches. I guess it is difficult to take an exact measure of the number of active warps in that situation. Besides hardware concepts, we also have to take into account the kernel implementation; branch divergence, for instance, could make one warp slower than others, etc.
Extended information
As you saw, these numbers are closely related to your device specific hardware and compute capability, so perhaps a concrete example could help here:
A device with CCC 3.0 can handle a maximum of 2048 threads per SM, 16 blocks per SM and 64 warps per SM. You also have a maximum number of registers available to use (65536 in that case).
This Wikipedia entry is a handy reference for the features of each CCC.
You can query these parameters using the deviceQuery utility sample code provided with the CUDA toolkit or, at execution time, using the CUDA API as here.
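For reference, a minimal sketch of such a query through the runtime API (the selection of fields is mine):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("device: %s (compute capability %d.%d)\n", prop.name, prop.major, prop.minor);
    printf("SM count:                %d\n", prop.multiProcessorCount);
    printf("max threads per SM:      %d\n", prop.maxThreadsPerMultiProcessor);
    printf("max threads per block:   %d\n", prop.maxThreadsPerBlock);
    printf("warp size:               %d\n", prop.warpSize);
    printf("registers per block:     %d\n", prop.regsPerBlock);
    printf("shared memory per block: %zu bytes\n", prop.sharedMemPerBlock);
    return 0;
}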
Performance considerations
The thing is that, ideally, 16 blocks of 128 threads could be executed using fewer than 32 registers per thread. That means a high occupancy rate. In most cases your kernel needs more than 32 registers per thread, so it is no longer possible to execute 16 blocks concurrently on the SM; the reduction is then done at block-level granularity, i.e. by decreasing the number of blocks. And this is what the bars capture.
You can play with the number of threads and blocks, or even with the __launch_bounds__ directive, to optimize your kernel, or you can use the --maxrregcount setting to lower the number of registers used by a single kernel to see if it improves overall execution speed.
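For illustration, a minimal sketch of the __launch_bounds__ syntax (the kernel body and the chosen bounds are placeholders):

// The first argument is the maximum block size you promise to launch with;
// the optional second argument is the minimum number of blocks you want
// resident per SM. The compiler limits register usage accordingly.
__global__ void __launch_bounds__(256, 4) mykernel(float *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= 2.0f;                      // placeholder work
}

// Alternatively, cap registers for the whole compilation unit:
//   nvcc --maxrregcount=32 mykernel.cu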

how does CUDA schedule its threads

I've got a few questions regarding CUDA's scheduling system.
A. When I use, for example, the foo<<<255, 255>>>() function, what actually happens inside the card? I know that each SM receives a block to schedule from the upper level, and each SM is responsible for scheduling its incoming block, but which part does that? If, for example, I've got 8 SMs, each of which contains 8 small CPUs, is the upper level responsible for scheduling the remaining 255*255 - (8 * 8) threads?
B. What's the limit on the maximum number of threads that one can define? I mean foo<<<X, Y>>>(); X, Y = ?
C. Regarding the last example, how many threads can be inside one block? Can we say that the more blocks/threads we have, the faster the execution will be?
Thanks for your help
A. The compute work distributor will distribute a block from the grid to an SM. The SM will convert the block into warps (WARP_SIZE = 32 on all NVIDIA GPUs). On Fermi 2.0 GPUs each SM has two warp schedulers which share a set of data paths. Every cycle each warp scheduler picks a warp and issues an instruction to one of the data paths (please don't think in terms of CUDA cores). On Fermi 2.1 GPUs each warp scheduler has independent data paths as well as a set of shared data paths. On 2.1, every cycle each warp scheduler will pick a warp and attempt to dual-issue instructions for that warp.
The warp schedulers attempt to optimize the use of the data paths. This means that a single warp may execute multiple instructions in back-to-back cycles, or the warp scheduler may choose to issue from a different warp every cycle.
The number of warps/threads that each SM can handle is specified in the CUDA Programming Guide v.4.2 Table F-1. This scales from 768 threads to 2048 threads (24-64 warps).
B. The maximum number of threads per launch is defined by the maximum GridDims * the maximum threads per block. See Table F-1 or refer to the documentation for cudaGetDeviceProperties.
C. See the same resources as (B). The optimum distribution of threads/block is defined by your algorithm partitioning and is influenced by the occupancy calculation. There are observable performance impacts based around problem set size of the warps on the SM and the amount of time blocked at instruction barriers (among other things). For starters I recommend at least 2 blocks per SM and ~50% occupancy.
B. It depends on your device. You can use the cuda function cudaGetDeviceProperties to see the specifications for your device. A common maximum number is y=1024 threads per block and x=65535 blocks per Grid dimension.
C. A common practice is to have a power of two (128, 256, 512, etc.) threads per block. Reducing large arrays is very efficient that way (see Reduction). The optimum distribution of blocks and threads actually depends on your application and your hardware. I personally use 512 threads/block for large sparse linear algebra computations on a Tesla M2050, since it is the most efficient for my applications.
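To make (B) and (C) concrete, here is a minimal sketch of how the <<<X, Y>>> numbers are usually derived from the problem size (the kernel, names and the fixed block size are placeholders):

__global__ void scale(float *data, int n)
{
    // Global thread index; threads past the end of the array do nothing.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

void launch_scale(float *d_data, int n)
{
    int threadsPerBlock = 256;                                        // a power of two, as suggested above
    int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;  // ceiling division covers all n elements
    scale<<<blocksPerGrid, threadsPerBlock>>>(d_data, n);
}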

Increasing block size decreases performance

In my CUDA code, if I increase blocksizeX and blocksizeY it actually takes more time (therefore I run it at 1x1). Also, a chunk of my execution time (e.g. 7 out of 9 s) is taken by just the call to the kernel. In fact, I am quite amazed that even if I comment out the entire kernel the time is almost the same. Any suggestions on where and how to optimize?
P.S. I have edited this post with my actual code. I am downsampling an image, so every 4 neighboring pixels (e.g. pixels 1,2 from row 1 and 1,2 from row 2) give one output pixel. I get an effective bandwidth of 0.5 GB/s compared to a theoretical maximum of 86.4 GB/s. The time I use is the difference between calling the kernel with instructions and calling an empty kernel.
It looks pretty bad to me right now, but I can't figure out what I am doing wrong.
__global__ void streamkernel(int *r_d, int *g_d, int *b_d, int height, int width,
                             int *f_r, int *f_g, int *f_b)
{
    // Linear output-pixel index built from the 2D grid/block coordinates.
    int id = blockIdx.x * blockDim.x * blockDim.y
           + threadIdx.y * blockDim.x + threadIdx.x
           + blockIdx.y * gridDim.x * blockDim.x * blockDim.y;
    // Top-left corner of the 2x2 input patch that maps to this output pixel.
    int number = 2 * (id % (width / 2)) + (id / (width / 2)) * width * 2;

    if (id < height * width / 4)
    {
        f_r[id] = (r_d[number] + r_d[number + 1] + r_d[number + width] + r_d[number + width + 1]) / 4;
        f_g[id] = (g_d[number] + g_d[number + 1] + g_d[number + width] + g_d[number + width + 1]) / 4;
        f_b[id] = (b_d[number] + b_d[number + 1] + b_d[number + width] + b_d[number + width + 1]) / 4;
    }
}
Try looking up the matrix multiplication example in CUDA SDK examples for how to use shared memory.
The problem with your current kernel is that it does 4 global memory reads and 1 global memory write for every 3 additions and 1 division. Each global memory access costs roughly 400 cycles. This means you're spending the vast majority of time doing memory access (which GPUs are bad at) rather than compute (which GPUs are good at).
Shared memory in effect allows you to cache this so that, amortized, you get roughly 1 read and 1 write per pixel for 3 additions and 1 division. That is still not doing great on the CGMA ratio (compute to global memory access ratio, the holy grail of GPU computing).
Overall, I think for a simple kernel like this, a CPU implementation is likely going to be faster given the overhead of transferring data across the PCI-E bus.
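To make the shared-memory suggestion concrete, here is a minimal sketch of a tiled version for a single colour channel (tile sizes, names and the single-channel simplification are mine); it would be launched with dim3 block(TILE_W, TILE_H) and enough blocks to cover the output image:

#define TILE_W 16
#define TILE_H 16

// in is width x height, out is (width/2) x (height/2), one colour channel.
__global__ void downsample_smem(const int *in, int *out, int width, int height)
{
    // Each block produces a TILE_W x TILE_H tile of output pixels, so it needs
    // a (2*TILE_W) x (2*TILE_H) tile of input pixels in shared memory.
    __shared__ int tile[2 * TILE_H][2 * TILE_W];

    int in_x0 = 2 * blockIdx.x * TILE_W;    // left edge of the input tile
    int in_y0 = 2 * blockIdx.y * TILE_H;    // top edge of the input tile

    // Cooperative load: consecutive threads read consecutive input elements,
    // so the global memory accesses are coalesced.
    for (int row = threadIdx.y; row < 2 * TILE_H; row += TILE_H)
        for (int col = threadIdx.x; col < 2 * TILE_W; col += TILE_W) {
            int ix = in_x0 + col;
            int iy = in_y0 + row;
            if (ix < width && iy < height)
                tile[row][col] = in[iy * width + ix];
        }
    __syncthreads();

    int ox = blockIdx.x * TILE_W + threadIdx.x;   // output column
    int oy = blockIdx.y * TILE_H + threadIdx.y;   // output row
    if (ox < width / 2 && oy < height / 2) {
        int sum = tile[2 * threadIdx.y][2 * threadIdx.x]
                + tile[2 * threadIdx.y][2 * threadIdx.x + 1]
                + tile[2 * threadIdx.y + 1][2 * threadIdx.x]
                + tile[2 * threadIdx.y + 1][2 * threadIdx.x + 1];
        out[oy * (width / 2) + ox] = sum / 4;
    }
}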
You're forgetting the fact that one multiprocessor can execute up to 8 blocks simultaneously, and maximum performance is reached exactly then. However, there are many factors that limit the number of blocks that can exist in parallel (an incomplete list):
Maximum amount of shared memory per multiprocessor limits the number of blocks if #blocks * shared memory per block would be > total shared memory.
Maximum number of threads per multiprocessor limits the number of blocks if #blocks * #threads / block would be > max total #threads.
...
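To put the first limit in numbers (illustrative values): on a device with 48 KB of shared memory per multiprocessor, a kernel that allocates 8 KB of shared memory per block can have at most 48/8 = 6 blocks resident per multiprocessor from that limit alone, regardless of what the other limits would allow.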
You should try to find a kernel execution configuration that causes exactly 8 blocks to run on one multiprocessor. This will almost always yield the highest performance, even if the occupancy is not 1.0! From this point on you can try to iteratively make changes that reduce the number of executed blocks per MP but increase the occupancy of your kernel, and see if performance increases.
The NVIDIA occupancy calculator (Excel sheet) will be of great help.