Is "relaxed" synchronization of threads in block possible in CUDA? - cuda

I know what __syncthreads() does, and I would like to do something a little bit different:
__global__ void kernel()
{
    __shared__ float array[1024];
    // some other declarations
    // load some data into array
label1:
    // do some other independent calculations
label2:
    // use data from array
    ...
}
So I could place __syncthreads(); at label2. It has the semantics that threads can go beyond label2 only once all threads have reached label2.
What I actually need is to ensure that threads can pass beyond label2 as soon as all other threads have reached label1. Such a barrier would be weaker, and I hope it would block my program less. Is any kind of relaxed barrier like this possible?

You can safely omit __syncthreads() only when the size of a block is equal to the size of a warp (or half a warp). This technique is called "warp-synchronous programming". Because all threads within one warp execute in lockstep, you can be sure that by a given point they have already executed the prior instructions. In other cases you can only assume that all the threads within a block are past some part of the kernel, and that is a very risky assumption.
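For illustration, here is a minimal sketch of the warp-synchronous idea; the kernel name and buffers are made up for the example. Note that the classic form of this technique relied on lockstep execution and also required the shared buffer to be declared volatile; on architectures with independent thread scheduling (Volta and later, CUDA 9+), an explicit __syncwarp() is needed instead:

__global__ void warp_sync_kernel(const float *in, float *out)
{
    __shared__ float buf[32];            // one block == one warp of 32 threads
    int t = threadIdx.x;

    // Stage one element per thread into shared memory.
    buf[t] = in[blockIdx.x * 32 + t];

    // No __syncthreads() needed for a single-warp block; __syncwarp()
    // guarantees ordering on hardware with independent thread scheduling.
    __syncwarp();

    // Use a value loaded by a neighbouring thread of the same warp.
    out[blockIdx.x * 32 + t] = buf[(t + 1) % 32];
}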

Related

How can I make sure the compiler parallelizes my loads from global memory?

I've written a CUDA kernel that looks something like this:
int tIdx = threadIdx.x;    // Assume a 1-D thread block and a 1-D grid
int buffNo = 0;
for (int offset=buffSz*blockIdx.x; offset<totalCount; offset+=buffSz*gridDim.x) {
    // Select which "page" we're using on this iteration
    float *buff = &sharedMem[buffNo*buffSz];
    // Load data from global memory
    if (tIdx < nLoadThreads) {
        for (int ii=tIdx; ii<buffSz; ii+=nLoadThreads)
            buff[ii] = globalMem[ii+offset];
    }
    // Wait for shared memory
    __syncthreads();
    // Perform computation
    if (tIdx >= nLoadThreads) {
        // Perform some computation on the contents of buff[]
    }
    // Switch pages
    buffNo ^= 0x01;
}
Note that there's only one __syncthreads() in the loop, so the first nLoadThreads threads will start loading the data for the 2nd iteration while the rest of the threads are still computing the results for the 1st iteration.
I was thinking about how many threads to allocate for loading vs. computing, and I reasoned that I would only need a single warp for loading, regardless of buffer size, because that inner for loop consists of independent loads from global memory: they can all be in flight at the same time. Is this a valid line of reasoning?
And yet when I try this out, I find that (1) increasing the # of load warps dramatically increases performance, and (2) the disassembly in nvvp shows that buff[ii] = globalMem[ii+offset] was compiled into a load from global memory followed 2 instructions later by a store to shared memory, indicating that the compiler is not applying instruction-level parallelism here.
Would additional qualifiers (const, __restrict__, etc) on buff or globalMem help ensure the compiler does what I want?
I suspect the problem has to do with the fact that buffSz is not known at compile-time (the actual data is 2-D and the appropriate buffer size depends on the matrix dimensions). In order to do what I want, the compiler will need to allocate a separate register for each LD operation in flight, right? If I manually unroll the loop, the compiler re-orders the instructions so that there are a few LD in flight before the corresponding ST needs to access that register. I tried a #pragma unroll but the compiler only unrolled the loop without reordering the instructions, so that didn't help. What else can I do?
The compiler has no chance to reorder the stores to shared memory away from the loads from global memory, because the __syncthreads() barrier immediately follows.
As all of the threads have to wait at the barrier anyway, it is faster to use more threads for loading. This means that more global memory transactions can be in flight at any time, and each load thread incurs global memory latency less often.
All CUDA devices so far do not support out-of-order execution, so the load loop will incur exactly one global memory latency per loop iteration, unless the compiler can unroll it and reorder loads before stores.
To allow full unrolling, the number of loop iterations needs to be known at compile time. You can use talonmies' suggestion of templating the loop trips to achieve this.
You can also use partial unrolling. Annotating the load loop with #pragma unroll 2 will allow the compiler to issue two loads, then two stores for every two loop iterations, thus achieving a similar effect to doubling nLoadThreads. Replacing 2 with a higher number is possible, but you will hit the maximum number of transactions in flight at some point (use float2 or float4 moves to transfer more data with the same number of transactions). Also, it is difficult to predict whether the compiler will prefer reordering instructions over the cost of more complex code for the final, potentially partial, trip through the unrolled loop.
So the suggestions are:
Use as many load threads as possible.
Unroll the load loop by templating the number of loop iterations and instantiating it for all possible numbers of loop trips (or the most common ones, with a generic fallback), or by using partial loop unrolling.
If the data is suitably aligned, move it as float2 or float4 to transfer more data with the same number of transactions (see the sketch below).
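A rough sketch of how the load loop could look with partial unrolling and float4 transfers; the kernel signature and names are illustrative, not the original code. buffSzVec and totalVec are sizes in float4 units, and the pointers are assumed to be 16-byte aligned:

__global__ void loadComputeKernel(const float4 * __restrict__ globalMem,
                                  int buffSzVec, int totalVec, int nLoadThreads)
{
    extern __shared__ float4 sharedBuf[];   // dynamic shared memory, viewed as float4
    int tIdx = threadIdx.x;
    for (int offset = buffSzVec * blockIdx.x; offset < totalVec;
         offset += buffSzVec * gridDim.x) {
        if (tIdx < nLoadThreads) {
            // Partial unrolling lets the compiler issue two loads before the
            // two matching stores, so each load thread keeps two transactions
            // in flight; float4 moves four floats per transaction.
            #pragma unroll 2
            for (int ii = tIdx; ii < buffSzVec; ii += nLoadThreads)
                sharedBuf[ii] = globalMem[ii + offset];
        }
        __syncthreads();
        if (tIdx >= nLoadThreads) {
            // ... computation on sharedBuf[] ...
        }
        __syncthreads();   // needed here because this sketch drops the double buffering
    }
}

Launching it requires the dynamic shared memory size as the third launch parameter, e.g. loadComputeKernel<<< grid, block, buffSzVec * sizeof(float4) >>>(...).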

Dynamic parallelism - launching many small kernels is very slow

I am trying to use dynamic parallelism to improve an algorithm I have in CUDA. In my original CUDA solution, every thread computes a number that is common for each block. What I want to do is to first launch a coarse (or low resolution) kernel, where threads compute the common value just once (as if every thread represented one block). Then each thread creates a small grid of 1 block (16x16 threads) and launches a child kernel for it, passing the common value. In theory it should be faster because one saves many redundant operations. But in practice the solution runs very slowly, and I don't know why.
This is the code, very simplified, just the idea.
__global__ void coarse_kernel( parameters ){
    int common_val = compute_common_val();
    dim3 dimblock(16, 16, 1);
    dim3 dimgrid(1, 1, 1);
    child_kernel <<< dimgrid, dimblock >>> (common_val, parameters);
}
__global__ void child_kernel( int common_val, parameters ){
    // use common value
    do_computations(common_val, parameters);
}
The number of child_kernels is large: one per thread, and there must be around 400x400 threads. From what I understand, the GPU should process all these kernels in parallel, right?
Or are child kernels processed somehow sequentially?
My results show that performance is more than 10 times slower than in the original solution I had.
There is a cost in launching kernels, either parent or child. If your child kernels do not extract much parallelism and there is not much benefit over their non-parallel counterparts, then that faint benefit may be cancelled out by the child kernel launch overheads.
In formulas: let t_o be the overhead of launching a child kernel, t_e its execution time, and t_s the time to execute the same code without the help of dynamic parallelism. The speedup arising from the use of dynamic parallelism is t_s / (t_o + t_e). Perhaps (but this cannot be deduced from your code) t_e < t_s but t_e, t_s << t_o, so that t_s / (t_o + t_e) is about t_s / t_o < 1 and you observe a slowdown instead of a speedup. For example, with t_o = 10 us, t_s = 2 us and t_e = 1 us, the "speedup" is 2 / 11, i.e. a 5.5x slowdown.

cuda, dummy/implicit block synchronization

I am aware that block synchronization is not possible; the only way is to launch a new kernel.
BUT, let's suppose that I launch X blocks, where X corresponds to the number of SMs on my GPU. I should expect that the scheduler will assign a block to each SM... right? And if the GPU is being used as a secondary graphics card (completely dedicated to CUDA), this means that, theoretically, no other process uses it... right?
My idea is the following: implicit synchronization.
Let's suppose that sometimes I need only one block, and sometimes I need all the X blocks. Well, in those cases where I need just one block, I can configure my code so that the first block (or the first SM) works on the "real" data while the other X-1 blocks (or SMs) work on some "dummy" data, executing exactly the same instructions, just with some other offset.
That way, all of them will stay synchronized until I need all of them again.
Is the scheduler reliable under these conditions? Or can you never be sure?
You've got several questions in one, so I'll try to address them separately.
One block per SM
I asked this a while back on nVidia's own forums, as I was getting results that indicated that this is not what happens. Apparently, the block scheduler will not necessarily assign one block per SM even if the number of blocks is equal to the number of SMs.
Implicit synchronization
No. First of all, you cannot guarantee that each block will have its own SM (see above). Secondly, not all blocks can access global memory at the same time. If they run synchronously at all, they will lose this synchronicity as of the first memory read/write.
Block synchronization
Now for the good news: Yes, you can. The atomic instructions described in Section B.11 of the CUDA C Programming Guide can be used to create a barrier. Assume that you have N blocks executing concurrently on your GPU.
__device__ int barrier = N;
__global__ void mykernel ( ) {
    /* Do whatever it is that this block does. */
    ...
    /* Make sure all threads in this block are actually here. */
    __syncthreads();
    /* Once we're done, decrease the value of the barrier. */
    if ( threadIdx.x == 0 )
        atomicSub( &barrier , 1 );
    /* Now wait for the barrier to be zero. */
    if ( threadIdx.x == 0 )
        while ( atomicCAS( &barrier , 0 , 0 ) != 0 );
    /* Make sure everybody has waited for the barrier. */
    __syncthreads();
    /* Carry on with whatever else you wanted to do. */
    ...
}
The instruction atomicSub(p,i) computes *p -= i atomically and is only called by the zeroth thread in the block, i.e. we only want to decrement barrier once. The instruction atomicCAS(p,c,v) sets *p = v iff *p == c and returns the old value of *p. This part just loops until barrier reaches 0, i.e. until all blocks have crossed it.
Note that you have to wrap this part in calls to __syncthreads(), as the threads in a block do not execute in strict lock-step and you have to force them all to wait for the zeroth thread.
Just remember that if you call your kernel more than once, you should set barrier back to N.
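A quick host-side sketch of that reset, assuming the __device__ int barrier from the code above (numberOfBlocks and threadsPerBlock are placeholders):

int N = numberOfBlocks;   // must equal the number of blocks you launch
cudaMemcpyToSymbol( barrier , &N , sizeof(int) );
mykernel<<< numberOfBlocks , threadsPerBlock >>>();
cudaDeviceSynchronize();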
Update
In reply to jHackTheRipper's answer and Cicada's comment, I should have pointed out that you should not try to start more blocks than can be concurrently scheduled on the GPU! This is limited by a number of factors, and you should use the CUDA Occupancy Calculator to find the maximum number of blocks for your kernel and device.
Judging by the original question, though, only as many blocks as there are SMs are being started, so this point is moot.
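If you prefer querying this limit at runtime instead of using the spreadsheet, here is a sketch using the occupancy API available since CUDA 6.5 (the block size and device index are placeholders):

int blocksPerSM = 0, threadsPerBlock = 256;
cudaOccupancyMaxActiveBlocksPerMultiprocessor( &blocksPerSM , mykernel ,
                                               threadsPerBlock , 0 /* dynamic smem */ );
cudaDeviceProp prop;
cudaGetDeviceProperties( &prop , 0 );
int maxResidentBlocks = blocksPerSM * prop.multiProcessorCount;
// Launch at most maxResidentBlocks blocks so that all of them can be resident
// at once and the barrier above cannot deadlock.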
@Pedro is definitely wrong!
Achieving global synchronization has been the subject of several recent research works, at least for non-Kepler architectures (I don't have one yet). The conclusion is always the same (or should be): it is not possible to achieve such a global synchronization across the whole GPU.
The reason is simple: CUDA blocks cannot be preempted, so given that you fully occupy the GPU, threads waiting for the barrier rendezvous will never allow their block to terminate. Thus, it will not be removed from the SM, and it will prevent the remaining blocks from running.
As a consequence, you will just freeze the GPU, which will never be able to escape from this deadlock state.
-- edit to answer Pedro's remarks --
Such shortcomings have been noticed by other authors such as:
http://www.openclblog.com/2011/04/eureka.html
by the author of OpenCL in Action
-- edit to answer Pedro's second remarks --
The same conclusion is reached by @Jared Hoberock in this SO post:
Inter-block barrier on CUDA

How to share a common value between threads in a given block?

I have a kernel that, for each thread in a given block, computes a for loop with a different number of iterations. I use a buffer of size N_BLOCKS to store the number of iterations required for each block. Hence, each thread in a given block must know the number of iterations specific to its block.
However, I'm not sure which way is best (performance-wise) to read the value and distribute it to all the other threads. I see only one good way (please tell me if there is something better): store the value in shared memory and have each thread read it. For example:
__global__ void foo( int* nIterBuf )
{
    __shared__ int nIter;
    if( threadIdx.x == 0 )
        nIter = nIterBuf[blockIdx.x];
    __syncthreads();
    for( int i=0; i < nIter; i++ )
        ...
}
Any other better solutions? My app will use a lot of data, so I want the best performance.
Thanks!
Read-only values that are uniform across all threads in a block are probably best stored in __constant__ arrays. On some CUDA architectures such as Fermi (SM 2.x), if you declare the array or pointer argument using the C++ const keyword AND you access it uniformly within the block (i.e. the index only depends on blockIdx, not threadIdx), then the compiler may automatically promote the reference to constant memory.
The advantage of constant memory is that it goes through a dedicated cache, so it doesn't pollute the L1, and if the amount of data you are accessing per block is relatively small, you should always hit in the cache after the initial compulsory miss in each thread block.
You also won't need to use any shared memory or transfer from global to shared memory.
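For example, a minimal sketch of the __constant__ variant (MAX_BLOCKS, nIterConst and h_nIterBuf are made up for the example; N_BLOCKS must not exceed MAX_BLOCKS):

#define MAX_BLOCKS 1024

__constant__ int nIterConst[MAX_BLOCKS];

__global__ void foo()
{
    // Uniform access: the index depends only on blockIdx, so the value is
    // broadcast through the constant cache to every thread of the block.
    int nIter = nIterConst[blockIdx.x];
    for( int i = 0; i < nIter; i++ )
    { /* ... loop body ... */ }
}

// Host side, before the launch:
// cudaMemcpyToSymbol( nIterConst, h_nIterBuf, N_BLOCKS * sizeof(int) );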
If my info is up-to-date, shared memory is the second-fastest memory, after registers.
If reading this value from shared memory every iteration slows you down and you still have registers available (refer to your GPU's compute capability and specs), you could try to store a copy of it in every thread's register (using a local variable), as sketched below.
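A sketch of that idea on top of the original kernel; whether it actually helps depends on whether the compiler already keeps the shared value in a register:

__global__ void foo( int* nIterBuf )
{
    __shared__ int nIter;
    if( threadIdx.x == 0 )
        nIter = nIterBuf[blockIdx.x];
    __syncthreads();

    int nIterLocal = nIter;          // per-thread copy, normally kept in a register
    for( int i = 0; i < nIterLocal; i++ )
    { /* ... loop body ... */ }
}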

CUDA finding the max value in given array

I tried to develop a small CUDA program for finding the max value in a given array:
int input_data[0...50] = 1,2,3,4,5....,50
max_value is initialized with the first value, input_data[0],
The final answer is stored in result[0].
The kernel is giving 0 as the max value. I don't know what the problem is.
I executed it with 1 block of 50 threads.
__device__ int lock=0;
__global__ void max(float *input_data, float *result)
{
    float max_value = input_data[0];
    int tid = threadIdx.x;
    if( input_data[tid] > max_value)
    {
        do{} while(atomicCAS(&lock,0,1));
        max_value = input_data[tid];
        __threadfence();
        lock = 0;
    }
    __syncthreads();
    result[0] = max_value;   // Final result of max value
}
Even though there are built-in functions, I am just practicing on small problems.
You are trying to set up a "critical section", but this approach on CUDA can lead to a hang of your whole program - try to avoid it whenever possible.
Why does your code hang?
Your kernel (__global__ function) is executed by groups of 32 threads, called warps. All threads inside a single warp execute synchronously. So the warp will stop in your do{} while(atomicCAS(&lock,0,1)) loop until all threads of the warp have obtained the lock. But obviously you want to prevent several threads from executing the critical section at the same time, so the lock can never be obtained by all of them at once. This leads to a hang.
Alternative solution
What you need is a "parallel reduction algorithm". You can start reading here:
Parallel prefix sum @ Wikipedia
Parallel reduction @ CUDA website
NVIDIA's Guide to Reduction
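As a rough sketch of the reduction idea from those links (not a tuned implementation; the names are illustrative, blockDim.x is assumed to be a power of two no larger than 256, e.g. 64 threads for the 50-element example), each block writes one partial maximum:

#include <float.h>

__global__ void block_max(const float *in, float *out, int n)
{
    __shared__ float sdata[256];
    int tid = threadIdx.x;

    // Each thread scans a grid-stride slice; -FLT_MAX pads missing elements.
    float m = -FLT_MAX;
    for (int i = blockIdx.x * blockDim.x + tid; i < n; i += blockDim.x * gridDim.x)
        m = fmaxf(m, in[i]);
    sdata[tid] = m;
    __syncthreads();

    // Tree reduction in shared memory: halve the active threads each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] = fmaxf(sdata[tid], sdata[tid + s]);
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = sdata[0];     // one partial maximum per block
}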
Your code has a potential race. I'm not sure if you defined the 'max_value' variable in shared memory or not, but both ways are wrong.
1) If 'max_value' is just a local variable, then each thread holds a local copy of it, which is not necessarily the actual maximum value (it is just the maximum of input_data[0] and input_data[tid]). In the last line of code, all threads write their own max_value to result[0], which results in undefined behavior.
2) If 'max_value' is a shared variable, 49 threads will enter the if-statement block, and they will try to update 'max_value' one at a time using the lock. But the order of execution among the 49 threads is not defined, so some threads may overwrite the actual maximum value with smaller values. You would need to compare against the maximum value again within the critical section.
Max is a 'reduction' - check out the Reduction sample in the SDK, and do max instead of summation.
The white paper's a little old but still reasonably useful:
http://developer.download.nvidia.com/compute/cuda/1_1/Website/projects/reduction/doc/reduction.pdf
The final optimization step is to use 'warp synchronous' coding to avoid unnecessary __syncthreads() calls.
It requires at least 2 kernel invocations - one to write a bunch of intermediate max() values to global memory, then another to take the max() of that array.
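A host-side sketch of those two passes, reusing a block-wise max kernel such as the block_max sketch shown earlier (sizes and buffer names are illustrative):

// pass 1: each of nBlocks blocks writes one partial maximum to d_partial
block_max<<< nBlocks, 256 >>>( d_input, d_partial, n );
// pass 2: a single block reduces the nBlocks partial maxima into d_result[0]
block_max<<< 1, 256 >>>( d_partial, d_result, nBlocks );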
If you want to do it in a single kernel invocation, check out the threadfenceReduction SDK sample. That uses __threadfence() and atomicAdd() to track progress, then has 1 block do a final reduction when all blocks have finished writing their intermediate results.
Variables have different access scopes depending on how they are declared. When you declare a variable with __device__, it is placed in GPU global memory and is accessible by all threads in the grid. __shared__ places the variable in the block's shared memory, and it is accessible only by the threads of that block. Finally, if you don't use any such keyword, as with float max_value, the variable is placed in thread registers and can be accessed only within that thread. In your code each thread has its own local max_value and does not see the variables of other threads.
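A small sketch of the three cases (the names are made up for the example):

__device__ float g_val;                  // global memory: visible to every thread in the grid

__global__ void scopes_demo()
{
    __shared__ float s_val;              // shared memory: visible to all threads of this block
    float l_val = threadIdx.x;           // register/local: private to this thread

    if (threadIdx.x == 0)
        s_val = 1.0f;                    // one thread writes the per-block copy
    __syncthreads();                     // make s_val visible to the whole block

    l_val += s_val + g_val;              // every thread sees the same s_val and g_val
    (void)l_val;                         // this sketch discards the result
}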