If I use fma(a, b, c) in CUDA, it means that the formula a*b+c is computed by a single fused three-operand instruction. But if I want to compute -a*b+c, does invoking fma(-a, b, c) take one extra multiply operation?
Unfortunately, the shader assembly language (SASS) is not documented at that level.
However we can try it out:
#!/bin/bash
cat <<EOF > fmatest.cu
__global__ void fma_plus(float *res, float a, float b, float c)
{
*res = fma(a, b, c);
}
__global__ void fma_minus(float *res, float a, float b, float c)
{
*res = fma(-a, b, c);
}
EOF
nvcc -arch sm_60 -c fmatest.cu
cuobjdump -sass fmatest.o
gives
code for sm_60
Function : _Z9fma_minusPffff
.headerflags #"EF_CUDA_SM60 EF_CUDA_PTX_SM(EF_CUDA_SM60)"
/* 0x001fc400fe2007f6 */
/*0008*/ MOV R1, c[0x0][0x20]; /* 0x4c98078000870001 */
/*0010*/ MOV R0, c[0x0][0x148]; /* 0x4c98078005270000 */
/*0018*/ MOV R5, c[0x0][0x14c]; /* 0x4c98078005370005 */
/* 0x001fc800fe8007f1 */
/*0028*/ MOV R2, c[0x0][0x140]; /* 0x4c98078005070002 */
/*0030*/ MOV R3, c[0x0][0x144]; /* 0x4c98078005170003 */
/*0038*/ FFMA R0, R0, -R5, c[0x0][0x150]; /* 0x5181028005470000 */
/* 0x001ffc00ffe000f1 */
/*0048*/ STG.E [R2], R0; /* 0xeedc200000070200 */
/*0050*/ EXIT; /* 0xe30000000007000f */
/*0058*/ BRA 0x58; /* 0xe2400fffff87000f */
/* 0x001f8000fc0007e0 */
/*0068*/ NOP; /* 0x50b0000000070f00 */
/*0070*/ NOP; /* 0x50b0000000070f00 */
/*0078*/ NOP; /* 0x50b0000000070f00 */
..................................
Function : _Z8fma_plusPffff
.headerflags #"EF_CUDA_SM60 EF_CUDA_PTX_SM(EF_CUDA_SM60)"
/* 0x001fc400fe2007f6 */
/*0008*/ MOV R1, c[0x0][0x20]; /* 0x4c98078000870001 */
/*0010*/ MOV R0, c[0x0][0x148]; /* 0x4c98078005270000 */
/*0018*/ MOV R5, c[0x0][0x14c]; /* 0x4c98078005370005 */
/* 0x001fc800fe8007f1 */
/*0028*/ MOV R2, c[0x0][0x140]; /* 0x4c98078005070002 */
/*0030*/ MOV R3, c[0x0][0x144]; /* 0x4c98078005170003 */
/*0038*/ FFMA R0, R0, R5, c[0x0][0x150]; /* 0x5180028005470000 */
/* 0x001ffc00ffe000f1 */
/*0048*/ STG.E [R2], R0; /* 0xeedc200000070200 */
/*0050*/ EXIT; /* 0xe30000000007000f */
/*0058*/ BRA 0x58; /* 0xe2400fffff87000f */
/* 0x001f8000fc0007e0 */
/*0068*/ NOP; /* 0x50b0000000070f00 */
/*0070*/ NOP; /* 0x50b0000000070f00 */
/*0078*/ NOP; /* 0x50b0000000070f00 */
.................................
So the FFMA instruction can indeed take an additional sign to apply to the product, at no extra cost. (Note that in the SASS the negation is applied to b rather than a; since (-a)*b = a*(-b), the result is the same.)
You can try the same with double-precision operands, or with compute capabilities other than sm_60, and you will get similar results.
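For instance, a double-precision variant of the test might look like this (a sketch: fma() is overloaded for double in device code, and the dump should show a DFMA instruction with a negated operand):
__global__ void dfma_minus(double *res, double a, double b, double c)
{
    // analogous to fma_minus above, but double precision
    *res = fma(-a, b, c);
}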
When different threads in a warp execute divergent code, the divergent branches are serialized, and inactive threads are "disabled."
If the divergent paths contain a small number of instructions, such that branch predication is used, it's pretty clear what "disabled" means (threads are turned on/off by the predicate), and it's also clearly visible in the SASS dump.
If the divergent execution paths contain larger numbers of instructions (exact number dependent on some compiler heuristics) branch instructions are inserted to potentially skip one execution path or the other. This makes sense: if one long branch is seldom taken, or not taken by any threads in a certain warp, it's advantageous to allow the warp to skip those instructions (rather than being forced to execute both paths in all cases as for predication).
My question is: How are inactive threads "disabled" in the case of divergence with branches? The slide on page 2, lower left of this presentation seems to indicate that branches are taken based on a condition and threads that do not participate are switched off via predicates attached to the instructions at the branch targets. However, this is not the behavior I observe in SASS.
Here's a minimal compilable sample:
#include <stdio.h>
__global__ void nonpredicated( int* a, int iter )
{
    if( a[threadIdx.x] == 0 )
        // Make the number of divergent instructions unknown at
        // compile time so the compiler is forced to create branches
        for( int i = 0; i < iter; i++ )
        {
            a[threadIdx.x] += 5;
            a[threadIdx.x] *= 5;
        }
    else
        for( int i = 0; i < iter; i++ )
        {
            a[threadIdx.x] += 2;
            a[threadIdx.x] *= 2;
        }
}
int main(){}
Here's the SASS dump showing that the branch instructions are predicated, but the code at the branch targets is not predicated. Are the threads that did not take the branch switched off implicitly during execution of those branch targets, in some way that is not directly visible in the SASS? I often see terminology like "active mask" alluded to in various CUDA documents, but I'm wondering how this manifests in SASS, if it is a separate mechanism from predication.
Additionally, for pre-Volta architectures, the program counter is shared per-warp, so the idea of a predicated branch instruction is confusing to me. Why would you attach a per-thread predicate to an instruction that might change something (the program counter) that is shared by all threads in the warp?
code for sm_20
Function : _Z13nonpredicatedPii
.headerflags #"EF_CUDA_SM20 EF_CUDA_PTX_SM(EF_CUDA_SM20)"
/*0000*/ MOV R1, c[0x1][0x100]; /* 0x2800440400005de4 */
/*0008*/ S2R R0, SR_TID.X; /* 0x2c00000084001c04 */
/*0010*/ MOV32I R3, 0x4; /* 0x180000001000dde2 */
/*0018*/ IMAD.U32.U32 R2.CC, R0, R3, c[0x0][0x20]; /* 0x2007800080009c03 */
/*0020*/ IMAD.U32.U32.HI.X R3, R0, R3, c[0x0][0x24]; /* 0x208680009000dc43 */
/*0028*/ LD.E R0, [R2]; /* 0x8400000000201c85 */
/*0030*/ ISETP.EQ.AND P0, PT, R0, RZ, PT; /* 0x190e0000fc01dc23 */
/*0038*/ #P0 BRA 0xd0; /* 0x40000002400001e7 */
/*0040*/ MOV R4, c[0x0][0x28]; /* 0x28004000a0011de4 */
/*0048*/ ISETP.LT.AND P0, PT, R4, 0x1, PT; /* 0x188ec0000441dc23 */
/*0050*/ MOV R4, RZ; /* 0x28000000fc011de4 */
/*0058*/ #P0 EXIT; /* 0x80000000000001e7 */
/*0060*/ NOP; /* 0x4000000000001de4 */
/*0068*/ NOP; /* 0x4000000000001de4 */
/*0070*/ NOP; /* 0x4000000000001de4 */
/*0078*/ NOP; /* 0x4000000000001de4 */
/*0080*/ IADD R4, R4, 0x1; /* 0x4800c00004411c03 */
/*0088*/ IADD R0, R0, 0x2; /* 0x4800c00008001c03 */
/*0090*/ ISETP.LT.AND P0, PT, R4, c[0x0][0x28], PT; /* 0x188e4000a041dc23 */
/*0098*/ SHL R0, R0, 0x1; /* 0x6000c00004001c03 */
/*00a0*/ #P0 BRA 0x80; /* 0x4003ffff600001e7 */
/*00a8*/ ST.E [R2], R0; /* 0x9400000000201c85 */
/*00b0*/ BRA 0x128; /* 0x40000001c0001de7 */
/*00b8*/ NOP; /* 0x4000000000001de4 */
/*00c0*/ NOP; /* 0x4000000000001de4 */
/*00c8*/ NOP; /* 0x4000000000001de4 */
/*00d0*/ MOV R0, c[0x0][0x28]; /* 0x28004000a0001de4 */
/*00d8*/ MOV R4, RZ; /* 0x28000000fc011de4 */
/*00e0*/ ISETP.LT.AND P0, PT, R0, 0x1, PT; /* 0x188ec0000401dc23 */
/*00e8*/ MOV R0, RZ; /* 0x28000000fc001de4 */
/*00f0*/ #P0 EXIT; /* 0x80000000000001e7 */
/*00f8*/ MOV32I R5, 0x19; /* 0x1800000064015de2 */
/*0100*/ IADD R0, R0, 0x1; /* 0x4800c00004001c03 */
/*0108*/ IMAD R4, R4, 0x5, R5; /* 0x200ac00014411ca3 */
/*0110*/ ISETP.LT.AND P0, PT, R0, c[0x0][0x28], PT; /* 0x188e4000a001dc23 */
/*0118*/ #P0 BRA 0x100; /* 0x4003ffff800001e7 */
/*0120*/ ST.E [R2], R4; /* 0x9400000000211c85 */
/*0128*/ EXIT; /* 0x8000000000001de7 */
.....................................
Are the threads that did not take the branch switched off implicitly during execution of those branch targets, in some way that is not directly visible in the SASS?
Yes.
There is a warp execution or "active" mask which is separate from the formal concept of predication as defined in the PTX ISA manual.
Predicated execution may allow instructions to be executed (or not) for a particular thread on an instruction-by-instruction basis. The compiler may also emit predicated instructions to enact a conditional jump or branch.
However, the GPU also maintains a warp "active" mask. When the machine observes that thread execution within a warp has diverged (for example, at the point of a predicated branch, or perhaps at any predicated instruction), it will set the active mask accordingly. This process isn't really "visible" at the SASS level. AFAIK the low-level execution process for a diverged warp (as opposed to predication) isn't well specified, so questions about how long the warp stays diverged and the exact mechanism for re-synchronization aren't well specified either, and AFAIK the answers can be affected by compiler choices on some architectures. This is one recent discussion (note particularly the remarks by @njuffa).
Why would you attach a per-thread predicate to an instruction that might change something (the program counter) that is shared by all threads in the warp?
This is how you perform a conditional jump or branch. Since all execution is lock-step, if any thread is going to execute a particular instruction (regardless of mask or predication status), the PC had better point to that instruction. However, the GPU can perform instruction replay to handle the different cases, as needed at execution time.
A few other notes:
a mention of the "active mask" is here:
The scheduler dispatches all 32 lanes of the warp to the execution units with an active mask. Non-active threads execute through the pipe.
some NVIDIA tools allow for inspection of the active mask.
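For completeness: on CUDA 9 and later the active mask is also exposed directly to device code through the __activemask() intrinsic, and __syncwarp() gives you an explicit warp-level reconvergence point on Volta and newer. A minimal sketch (the buffer names here are just for illustration):
__global__ void dumpActiveMask(const int *data, unsigned *masks)
{
    if (data[threadIdx.x] > 0) {
        // inside the divergent region: one bit per currently active lane
        masks[threadIdx.x] = __activemask();
    }
    __syncwarp(); // explicitly reconverge the warp (Volta and later)
}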
I was under the impression that adding a positive zero to a negative zero should produce a positive zero. To quote IEEE 754-2008:
When the sum of two operands with opposite signs (or the difference of two operands with like signs) is exactly zero, the sign of that sum (or difference) shall be +0 in all rounding-direction attributes except roundTowardNegative; under that attribute, the sign of an exact zero sum (or difference) shall be −0. However, x + x = x − (−x) retains the same sign as x even when x is zero.
However, in the case of CUDA, it looks like the compiler is being too aggressive in optimizing away the addition of a positive zero in Release builds. Plain C/C++ (and C#/.NET) work as expected. I've looked at the PTX code produced by the compiler for the different builds, and the add.f32 instruction is indeed missing in the Release build.
Am I missing anything here?
__global__ void convertToPositiveZero(float* dst, int size)
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    if (index < size)
    {
        dst[index] += 0;
    }
}
// Host code
int size = 100;
float* zzh = (float*)malloc(size * sizeof(float));
zzh[0] = -0.0f;
zzh[1] = 0.0f;
assert(0x80000000 == *((int*)&zzh[0]));
if (0x80000000 != *((int*)&zzh[0]))
{
    printf("Expected negative zero.\n");
    exit(-1);
}
assert(0x00000000 == *((int*)&zzh[1]));
float* zzd;
cudaMalloc(&zzd, size * sizeof(float));
cudaMemcpy(zzd, zzh, size * sizeof(float), cudaMemcpyHostToDevice);
convertToPositiveZero<<<1, 100>>>(zzd, size);
cudaMemcpy(zzh, zzd, size * sizeof(float), cudaMemcpyDeviceToHost);
//zzh[0] += 0.0f;
assert(0x00000000 == *((int*)&zzh[0]));
if (0x00000000 != *((int*)&zzh[0]))
{
    printf("Expected positive zero.\n");
    exit(-1);
}
assert(0x00000000 == *((int*)&zzh[1]));
printf("Done.\n");
Your problem seems to be due to the optimizations carried out by nvcc when fusing FADD and FMUL into FMAD operations.
I was able to reproduce your problem in a Release build. The disassembled code, compiled with CUDA 5.5 for sm_21, is
code for sm_21
Function : _Z21convertToPositiveZeroPfi
.headerflags #"EF_CUDA_SM20 EF_CUDA_PTX_SM(EF_CUDA_SM20)"
/*0000*/ MOV R1, c[0x1][0x100];
/*0008*/ S2R R0, SR_CTAID.X;
/*0010*/ S2R R2, SR_TID.X;
/*0018*/ IMAD R0, R0, c[0x0][0x8], R2;
/*0020*/ ISETP.GE.AND P0, PT, R0, c[0x0][0x28], PT;
/*0028*/ #P0 BRA.U 0x60;
/*0030*/ #!P0 MOV32I R3, 0x4;
/*0038*/ #!P0 IMAD R2.CC, R0, R3, c[0x0][0x20];
/*0040*/ #!P0 IMAD.HI.X R3, R0, R3, c[0x0][0x24];
/*0048*/ #!P0 LD.E R0, [R2];
/*0050*/ #!P0 F2F.F32.F32 R0, R0;
/*0058*/ #!P0 ST.E [R2], R0;
/*0060*/ EXIT ;
As you also noticed in the PTX, there is no floating-point add operation. Now, if you compile with the -fmad=false option, the disassembled code becomes
code for sm_21
Function : _Z21convertToPositiveZeroPfi
.headerflags #"EF_CUDA_SM20 EF_CUDA_PTX_SM(EF_CUDA_SM20)"
/*0000*/ MOV R1, c[0x1][0x100];
/*0008*/ S2R R0, SR_CTAID.X;
/*0010*/ S2R R2, SR_TID.X;
/*0018*/ IMAD R0, R0, c[0x0][0x8], R2;
/*0020*/ ISETP.GE.AND P0, PT, R0, c[0x0][0x28], PT;
/*0028*/ #P0 BRA.U 0x60;
/*0030*/ #!P0 MOV32I R3, 0x4;
/*0038*/ #!P0 IMAD R2.CC, R0, R3, c[0x0][0x20];
/*0040*/ #!P0 IMAD.HI.X R3, R0, R3, c[0x0][0x24];
/*0048*/ #!P0 LD.E R0, [R2];
/*0050*/ #!P0 FADD R0, R0, RZ;
/*0058*/ #!P0 ST.E [R2], R0;
/*0060*/ EXIT ;
As you can see, the FADD operation is restored, and the "correct" sign of 0 is restored as well.
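In other words, whether the sign of zero survives here hinges on the -fmad flag. A minimal way to reproduce both dumps, assuming the kernel lives in convert.cu:
nvcc -arch sm_21 -c convert.cu && cuobjdump -sass convert.o               # default -fmad=true: F2F, no FADD
nvcc -arch sm_21 -fmad=false -c convert.cu && cuobjdump -sass convert.o   # FADD R0, R0, RZ reappears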
I'm trying to understand how to use __threadfence(), as it seems like a powerful synchronization primitive that lets different blocks work together without the huge hassle of ending a kernel and starting a new one. The CUDA C Programming Guide has an example of it (Appendix B.5), which is fleshed out in the "threadFenceReduction" sample in the SDK, so it seems like something we "should" be using.
However, when I have tried using __threadfence(), it is shockingly slow. See the code below for an example. From what I understand, __threadfence() should just make sure that all pending memory transfers from the current thread block are finished before proceeding. Memory latency is somewhat better than a microsecond, I believe, so the total time to deal with the 64KB of memory transfers in the included code, on a GTX 680, should be around a microsecond. Instead, the __threadfence() instruction seems to take around 20 microseconds! Instead of using __threadfence() to synchronize, I can end the kernel and launch an entirely new kernel (in the same, default, stream so that it is synchronized) in less than a third of the time!
What is going on here? Does my code have a bug in it that I'm not noticing? Or is __threadfence() really 20x slower than it should be, and 6x slower than an entire kernel launch+cleanup?
Time for 1000 runs of the threadfence kernel: 27.716831 ms
Answer: 120
Time for 1000 runs of just the first 3 lines, including threadfence: 25.962912 ms
Synchronizing without threadfence, by splitting to two kernels: 7.653344 ms
Answer: 120
#include "cuda.h"
#include <cstdio>
__device__ unsigned int count = 0;
__shared__ bool isLastBlockDone;
__device__ int scratch[16];
__device__ int junk[16000];
__device__ int answer;
__global__ void usethreadfence() // just like the code example in B.5 of the CUDA C Programming Guide
{
    if (threadIdx.x == 0) scratch[blockIdx.x] = blockIdx.x;
    junk[threadIdx.x + blockIdx.x*1000] = 17 + threadIdx.x; // do some more memory writes to make the kernel nontrivial
    __threadfence();
    if (threadIdx.x == 0) {
        unsigned int value = atomicInc(&count, gridDim.x);
        isLastBlockDone = (value == (gridDim.x - 1));
    }
    __syncthreads();
    if (isLastBlockDone && threadIdx.x == 0) {
        // The last block sums the results stored in scratch[0 .. gridDim.x-1]
        int sum = 0;
        for (int i = 0; i < gridDim.x; i++) sum += scratch[i];
        answer = sum;
    }
}
__global__ void justthreadfence() // first three lines of the previous kernel, so we can compare speeds
{
    if (threadIdx.x == 0) scratch[blockIdx.x] = blockIdx.x;
    junk[threadIdx.x + blockIdx.x*1000] = 17 + threadIdx.x;
    __threadfence();
}
__global__ void usetwokernels_1() // this and the next kernel reproduce the functionality of the first kernel, but faster!
{
    if (threadIdx.x == 0) scratch[blockIdx.x] = blockIdx.x;
    junk[threadIdx.x + blockIdx.x*1000] = 17 + threadIdx.x;
}
__global__ void usetwokernels_2()
{
    if (threadIdx.x == 0) {
        int sum = 0;
        for (int i = 0; i < gridDim.x; i++) sum += scratch[i];
        answer = sum;
    }
}
int main() {
    int sum;
    cudaEvent_t start, stop; float time; cudaEventCreate(&start); cudaEventCreate(&stop); cudaEventRecord(start, 0);
    for (int i = 0; i < 1000; i++) usethreadfence<<<16,1000>>>();
    cudaEventRecord(stop, 0); cudaEventSynchronize(stop); cudaEventElapsedTime(&time, start, stop); printf("Time for 1000 runs of the threadfence kernel: %f ms\n", time); cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaMemcpyFromSymbol(&sum, answer, sizeof(int)); printf("Answer: %d\n", sum);
    cudaEventCreate(&start); cudaEventCreate(&stop); cudaEventRecord(start, 0);
    for (int i = 0; i < 1000; i++) justthreadfence<<<16,1000>>>();
    cudaEventRecord(stop, 0); cudaEventSynchronize(stop); cudaEventElapsedTime(&time, start, stop); printf("Time for 1000 runs of just the first 3 lines, including threadfence: %f ms\n", time); cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaEventCreate(&start); cudaEventCreate(&stop); cudaEventRecord(start, 0);
    for (int i = 0; i < 1000; i++) { usetwokernels_1<<<16,1000>>>(); usetwokernels_2<<<16,1000>>>(); }
    cudaEventRecord(stop, 0); cudaEventSynchronize(stop); cudaEventElapsedTime(&time, start, stop); printf("Synchronizing without threadfence, by splitting to two kernels: %f ms\n", time); cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaMemcpyFromSymbol(&sum, answer, sizeof(int)); printf("Answer: %d\n", sum);
}
I have tested your code, compiled with CUDA 6.0, on two different cards: a GT540M (Fermi) and a K20c (Kepler). These are the results:
GT540M
Time for 1000 runs of the threadfence kernel: 303.373688 ms
Answer: 120
Time for 1000 runs of just the first 3 lines, including threadfence: 300.395416 ms
Synchronizing without threadfence, by splitting to two kernels: 597.729919 ms
Answer: 120
Kepler K20c
Time for 1000 runs of the threadfence kernel: 10.164096 ms
Answer: 120
Time for 1000 runs of just the first 3 lines, including threadfence: 8.808896 ms
Synchronizing without threadfence, by splitting to two kernels: 17.330784 ms
Answer: 120
I do not observe any particularly slow behavior of __threadfence() compared to the other two cases.
This can be explained by looking at the disassembled code.
usethreadfence()
c[0xe][0x0] = scratch
c[0xe][0x4] = junk
c[0xe][0x8] = answer
c[0xe][0xc] = count
c[0x0][0x14] = gridDim.x
/*0000*/ MOV R1, c[0x1][0x100];
/*0008*/ S2R R0, SR_TID.X; R0 = threadIdx.x
/*0010*/ ISETP.NE.AND P0, PT, R0, RZ, PT; P0 = (R0 != 0)
/*0018*/ S2R R5, SR_CTAID.X; R5 = blockIdx.x
/*0020*/ IMAD R3, R5, 0x3e8, R0; R3 = R5 * 1000 + R0 = threadIdx.x + blockIdx.x * 1000
if (threadIdx.x == 0)
/*0028*/ #!P0 ISCADD R2, R5, c[0xe][0x0], 0x2; R2 = scratch + blockIdx.x
/*0030*/ IADD R4, R0, 0x11; R4 = R0 + 17 = threadIdx.x + 17
/*0038*/ ISCADD R3, R3, c[0xe][0x4], 0x2; R3 = junk + threadIdx.x + blockIdx.x * 1000
/*0040*/ #!P0 ST [R2], R5; scratch[blockIdx.x] = blockIdx.x
/*0048*/ ST [R3], R4; junk[threadIdx.x + blockIdx.x * 1000] = threadIdx.x + 17
/*0050*/ MEMBAR.GL; __threadfence
/*0058*/ #P0 BRA.U 0x98; if (threadIdx.x != 0) branch to 0x98
if (threadIdx.x == 0)
/*0060*/ #!P0 MOV R2, c[0xe][0xc]; R2 = &count
/*0068*/ #!P0 MOV R3, c[0x0][0x14]; R3 = gridDim.x
/*0070*/ #!P0 ATOM.INC R2, [R2], R3; R2 = value = old count; count incremented (wrapping at gridDim.x)
/*0078*/ #!P0 IADD R3, R3, -0x1; R3 = R3 - 1 = gridDim.x - 1
/*0080*/ #!P0 ISETP.EQ.AND P1, PT, R2, R3, PT; P1 = (R2 == R3) = (value == (gridDim.x - 1))
/*0088*/ #!P0 SEL R2, RZ, 0x1, !P1; if (!P1) R2 = RZ otherwise R2 = 1 (R2 = isLastBlockDone)
/*0090*/ #!P0 STS.U8 [RZ], R2; stores R2 (i.e., isLastBlockDone) to shared memory at [0]
/*0098*/ ISETP.EQ.AND P0, PT, R0, RZ, PT; P0 = (R0 == 0) = (threadIdx.x == 0)
/*00a0*/ BAR.RED.POPC RZ, RZ, RZ, PT; __syncthreads()
/*00a8*/ LDS.U8 R0, [RZ]; R0 = R2 = isLastBlockDone
/*00b0*/ ISETP.NE.AND P0, PT, R0, RZ, P0; P0 = (R0 != 0) && P0 = (isLastBlockDone && (threadIdx.x == 0))
/*00b8*/ #!P0 EXIT; all threads except thread 0 of the last block exit
/*00c0*/ ISETP.NE.AND P0, PT, RZ, c[0x0][0x14], PT; IMPLEMENTING THE FOR LOOP WITH A LOOP UNROLL OF 4
/*00c8*/ MOV R0, RZ;
/*00d0*/ #!P0 BRA 0x1b8;
/*00d8*/ MOV R2, c[0x0][0x14];
/*00e0*/ ISETP.GT.AND P0, PT, R2, 0x3, PT;
/*00e8*/ MOV R2, RZ;
/*00f0*/ #!P0 BRA 0x170;
/*00f8*/ MOV R3, c[0x0][0x14];
/*0100*/ IADD R7, R3, -0x3;
/*0108*/ NOP;
/*0110*/ ISCADD R3, R2, c[0xe][0x0], 0x2;
/*0118*/ IADD R2, R2, 0x4;
/*0120*/ LD R4, [R3];
/*0128*/ ISETP.LT.U32.AND P0, PT, R2, R7, PT;
/*0130*/ LD R5, [R3+0x4];
/*0138*/ LD R6, [R3+0x8];
/*0140*/ LD R3, [R3+0xc];
/*0148*/ IADD R0, R4, R0;
/*0150*/ IADD R0, R5, R0;
/*0158*/ IADD R0, R6, R0;
/*0160*/ IADD R0, R3, R0;
/*0168*/ #P0 BRA 0x110;
/*0170*/ ISETP.LT.U32.AND P0, PT, R2, c[0x0][0x14], PT;
/*0178*/ #!P0 BRA 0x1b8;
/*0180*/ ISCADD R3, R2, c[0xe][0x0], 0x2;
/*0188*/ IADD R2, R2, 0x1;
/*0190*/ LD R3, [R3];
/*0198*/ ISETP.LT.U32.AND P0, PT, R2, c[0x0][0x14], PT;
/*01a0*/ NOP;
/*01a8*/ IADD R0, R3, R0;
/*01b0*/ #P0 BRA 0x180;
/*01b8*/ MOV R2, c[0xe][0x8];
/*01c0*/ ST [R2], R0;
/*01c8*/ EXIT;
justthreadfence()
Function : _Z15justthreadfencev
.headerflags #"EF_CUDA_SM20 EF_CUDA_PTX_SM(EF_CUDA_SM20)"
/*0000*/ MOV R1, c[0x1][0x100]; /* 0x2800440400005de4 */
/*0008*/ S2R R3, SR_TID.X; /* 0x2c0000008400dc04 */
/*0010*/ ISETP.NE.AND P0, PT, R3, RZ, PT; /* 0x1a8e0000fc31dc23 */
/*0018*/ S2R R4, SR_CTAID.X; /* 0x2c00000094011c04 */
/*0020*/ IMAD R2, R4, 0x3e8, R3; /* 0x2006c00fa0409ca3 */
/*0028*/ #!P0 ISCADD R0, R4, c[0xe][0x0], 0x2; /* 0x4000780000402043 */
/*0030*/ IADD R3, R3, 0x11; /* 0x4800c0004430dc03 */
/*0038*/ ISCADD R2, R2, c[0xe][0x4], 0x2; /* 0x4000780010209c43 */
/*0040*/ #!P0 ST [R0], R4; /* 0x9000000000012085 */
/*0048*/ ST [R2], R3; /* 0x900000000020dc85 */
/*0050*/ MEMBAR.GL; /* 0xe000000000001c25 */
/*0058*/ EXIT; /* 0x8000000000001de7 */
usetwokernels_1()
Function : _Z15usetwokernels_1v
.headerflags #"EF_CUDA_SM20 EF_CUDA_PTX_SM(EF_CUDA_SM20)"
/*0000*/ MOV R1, c[0x1][0x100]; /* 0x2800440400005de4 */
/*0008*/ S2R R0, SR_TID.X; /* 0x2c00000084001c04 */
/*0010*/ ISETP.NE.AND P0, PT, R0, RZ, PT; /* 0x1a8e0000fc01dc23 */
/*0018*/ S2R R2, SR_CTAID.X; /* 0x2c00000094009c04 */
/*0020*/ IMAD R4, R2, 0x3e8, R0; /* 0x2000c00fa0211ca3 */
/*0028*/ #!P0 ISCADD R3, R2, c[0xe][0x0], 0x2; /* 0x400078000020e043 */
/*0030*/ IADD R0, R0, 0x11; /* 0x4800c00044001c03 */
/*0038*/ ISCADD R4, R4, c[0xe][0x4], 0x2; /* 0x4000780010411c43 */
/*0040*/ #!P0 ST [R3], R2; /* 0x900000000030a085 */
/*0048*/ ST [R4], R0; /* 0x9000000000401c85 */
/*0050*/ EXIT; /* 0x8000000000001de7 */
.....................................
usetwokernels_2()
Function : _Z15usetwokernels_2v
.headerflags #"EF_CUDA_SM20 EF_CUDA_PTX_SM(EF_CUDA_SM20)"
/*0000*/ MOV R1, c[0x1][0x100]; /* 0x2800440400005de4 */
/*0008*/ S2R R0, SR_TID.X; /* 0x2c00000084001c04 */
/*0010*/ ISETP.NE.AND P0, PT, R0, RZ, PT; /* 0x1a8e0000fc01dc23 */
/*0018*/ #P0 EXIT; /* 0x80000000000001e7 */
/*0020*/ ISETP.NE.AND P0, PT, RZ, c[0x0][0x14], PT; /* 0x1a8e400053f1dc23 */
/*0028*/ MOV R0, RZ; /* 0x28000000fc001de4 */
/*0030*/ #!P0 BRA 0x130; /* 0x40000003e00021e7 */
/*0038*/ MOV R2, c[0x0][0x14]; /* 0x2800400050009de4 */
/*0040*/ ISETP.GT.AND P0, PT, R2, 0x3, PT; /* 0x1a0ec0000c21dc23 */
/*0048*/ MOV R2, RZ; /* 0x28000000fc009de4 */
/*0050*/ #!P0 BRA 0xe0; /* 0x40000002200021e7 */
/*0058*/ MOV R3, c[0x0][0x14]; /* 0x280040005000dde4 */
/*0060*/ IADD R7, R3, -0x3; /* 0x4800fffff431dc03 */
/*0068*/ NOP; /* 0x4000000000001de4 */
/*0070*/ NOP; /* 0x4000000000001de4 */
/*0078*/ NOP; /* 0x4000000000001de4 */
/*0080*/ ISCADD R3, R2, c[0xe][0x0], 0x2; /* 0x400078000020dc43 */
/*0088*/ LD R4, [R3]; /* 0x8000000000311c85 */
/*0090*/ IADD R2, R2, 0x4; /* 0x4800c00010209c03 */
/*0098*/ LD R5, [R3+0x4]; /* 0x8000000010315c85 */
/*00a0*/ ISETP.LT.U32.AND P0, PT, R2, R7, PT; /* 0x188e00001c21dc03 */
/*00a8*/ LD R6, [R3+0x8]; /* 0x8000000020319c85 */
/*00b0*/ LD R3, [R3+0xc]; /* 0x800000003030dc85 */
/*00b8*/ IADD R0, R4, R0; /* 0x4800000000401c03 */
/*00c0*/ IADD R0, R5, R0; /* 0x4800000000501c03 */
/*00c8*/ IADD R0, R6, R0; /* 0x4800000000601c03 */
/*00d0*/ IADD R0, R3, R0; /* 0x4800000000301c03 */
/*00d8*/ #P0 BRA 0x80; /* 0x4003fffe800001e7 */
/*00e0*/ ISETP.LT.U32.AND P0, PT, R2, c[0x0][0x14], PT; /* 0x188e40005021dc03 */
/*00e8*/ #!P0 BRA 0x130; /* 0x40000001000021e7 */
/*00f0*/ NOP; /* 0x4000000000001de4 */
/*00f8*/ NOP; /* 0x4000000000001de4 */
/*0100*/ ISCADD R3, R2, c[0xe][0x0], 0x2; /* 0x400078000020dc43 */
/*0108*/ IADD R2, R2, 0x1; /* 0x4800c00004209c03 */
/*0110*/ LD R3, [R3]; /* 0x800000000030dc85 */
/*0118*/ ISETP.LT.U32.AND P0, PT, R2, c[0x0][0x14], PT; /* 0x188e40005021dc03 */
/*0120*/ IADD R0, R3, R0; /* 0x4800000000301c03 */
/*0128*/ #P0 BRA 0x100; /* 0x4003ffff400001e7 */
/*0130*/ MOV R2, c[0xe][0x8]; /* 0x2800780020009de4 */
/*0138*/ ST [R2], R0; /* 0x9000000000201c85 */
/*0140*/ EXIT; /* 0x8000000000001de7 */
.....................................
As can be seen, the instructions of justthreadfence() are strictly contained within those of usethreadfence(), while those of usetwokernels_1() and usetwokernels_2() are practically a partitioning of those of usethreadfence(). So the difference in timings can be ascribed to the launch overhead of the second kernel.
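That launch overhead is easy to measure in isolation by timing an empty kernel with the same event machinery (a sketch; the absolute numbers vary with driver, OS, and GPU):
#include <cstdio>
__global__ void empty() {}
int main() {
    cudaEvent_t start, stop; float time;
    empty<<<16,1000>>>(); // warm-up launch
    cudaEventCreate(&start); cudaEventCreate(&stop); cudaEventRecord(start, 0);
    for (int i = 0; i < 1000; i++) empty<<<16,1000>>>();
    cudaEventRecord(stop, 0); cudaEventSynchronize(stop); cudaEventElapsedTime(&time, start, stop);
    printf("Time for 1000 empty kernel launches: %f ms\n", time);
    cudaEventDestroy(start); cudaEventDestroy(stop);
}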
Edit: this question is a re-done version of the original, so the first several responses may no longer be relevant.
I'm curious about what impact a device function call with forced no-inlining has on synchronization within a device function. I have a simple test kernel that illustrates the behavior in question.
The kernel takes a buffer and passes it to a device function, along with a shared buffer and an indicator variable which identifies a single thread as the "boss" thread. The device function has divergent code: the boss thread first spends time doing trivial operations on the shared buffer, then writes to the global buffer. After a synchronization call, all threads write to the global buffer. After the kernel call, the host prints the contents of the global buffer. Here is the code:
CUDA CODE:
test_main.cu
#include<cutil_inline.h>
#include "test_kernel.cu"
int main()
{
    int scratchBufferLength = 100;
    int *scratchBuffer;
    int *d_scratchBuffer;
    int b = 1;
    int t = 64;

    // copy scratch buffer to device
    scratchBuffer = (int *)calloc(scratchBufferLength, sizeof(int));
    cutilSafeCall( cudaMalloc(&d_scratchBuffer,
        sizeof(int) * scratchBufferLength) );
    cutilSafeCall( cudaMemcpy(d_scratchBuffer, scratchBuffer,
        sizeof(int) * scratchBufferLength, cudaMemcpyHostToDevice) );

    // kernel call
    testKernel<<<b, t>>>(d_scratchBuffer);
    cudaThreadSynchronize();

    // copy data back to host
    cutilSafeCall( cudaMemcpy(scratchBuffer, d_scratchBuffer,
        sizeof(int) * scratchBufferLength, cudaMemcpyDeviceToHost) );

    // print results
    printf("Scratch buffer contents: \t");
    for (int i = 0; i < scratchBufferLength; ++i)
    {
        if (i % 25 == 0)
            printf("\n");
        printf("%d ", scratchBuffer[i]);
    }
    printf("\n");

    // cleanup
    cudaFree(d_scratchBuffer);
    free(scratchBuffer);
    return 0;
}
test_kernel.cu
#ifndef __TEST_KERNEL_CU
#define __TEST_KERNEL_CU
#define IS_BOSS() (threadIdx.x == blockDim.x - 1)

__device__
__noinline__
void testFunc(int *sA, int *scratchBuffer, bool isBoss) {
    if(isBoss) {      // produces unexpected output -- "broken" code
    //if(IS_BOSS()) { // produces expected output -- "working" code
        for (int c = 0; c < 10000; c++) {
            sA[0] = 1;
        }
    }
    if(isBoss) {
        scratchBuffer[0] = 1;
    }
    __syncthreads();
    scratchBuffer[threadIdx.x] = threadIdx.x;
    return;
}

__global__
void testKernel(int *scratchBuffer)
{
    __shared__ int sA[4];
    bool isBoss = IS_BOSS();
    testFunc(sA, scratchBuffer, isBoss);
    return;
}
#endif
I compiled this code from within the CUDA SDK to take advantage of the cutilSafeCall() function macros in test_main.cu, but of course these could be taken out if you'd like to compile outside the SDK. I compiled with CUDA Driver/Toolkit version 4.0 for compute capability 2.0, and the code was run on a GeForce GTX 480, which has the Fermi architecture.
The expected output is
0 1 2 3 ... blockDim.x-1
However, the output I get is
1 1 2 3 ... blockDim.x-1
This seems to indicate that the boss thread executed the conditional "scratchBuffer[0] = 1;" statement AFTER all threads executed the "scratchBuffer[threadIdx.x] = threadIdx.x;" statement, even though they are separated by a __syncthreads() barrier.
This occurs even if the boss thread is instructed to write a sentinel value into the buffer position of a thread in its own warp; the sentinel is the final value present in the buffer, rather than the appropriate threadIdx.x.
One modification that causes the code to produce expected output is to change the conditional statement
if(isBoss) {
to
if(IS_BOSS()) {
That is, the divergence-controlling variable changes from being stored in a parameter register to being computed in a macro function. (Note the comments on the appropriate lines in the source code.) It's this particular change I've been focusing on to try to track down the problem. Looking at the disassembled .cubins of the kernel with the 'isBoss' conditional (i.e., the broken code) and the 'IS_BOSS()' conditional (i.e., the working code), the most conspicuous difference is the absence of an SSY instruction in the disassembled broken code.
Here are the disassembled kernels, generated by disassembling the .cubin files with
"cuobjdump -sass test_kernel.cubin". Everything up to the first EXIT is the kernel, and everything after that is the device function. The only differences are in the device function.
DISASSEMBLED OBJECT CODE:
"broken" code
code for sm_20
Function : _Z10testKernelPi
/*0000*/ /*0x00005de428004404*/ MOV R1, c [0x1] [0x100];
/*0008*/ /*0x20009de428004000*/ MOV R2, c [0x0] [0x8];
/*0010*/ /*0x84001c042c000000*/ S2R R0, SR_Tid_X;
/*0018*/ /*0xfc015de428000000*/ MOV R5, RZ;
/*0020*/ /*0x00011de428004000*/ MOV R4, c [0x0] [0x0];
/*0028*/ /*0xfc209c034800ffff*/ IADD R2, R2, 0xfffff;
/*0030*/ /*0x9001dde428004000*/ MOV R7, c [0x0] [0x24];
/*0038*/ /*0x80019de428004000*/ MOV R6, c [0x0] [0x20];
/*0040*/ /*0x08001c03110e0000*/ ISET.EQ.U32.AND R0, R0, R2, pt;
/*0048*/ /*0x01221f841c000000*/ I2I.S32.S32 R8, -R0;
/*0050*/ /*0x2001000750000000*/ CAL 0x60;
/*0058*/ /*0x00001de780000000*/ EXIT;
/*0060*/ /*0x20201e841c000000*/ I2I.S32.S8 R0, R8;
/*0068*/ /*0xfc01dc231a8e0000*/ ISETP.NE.AND P0, pt, R0, RZ, pt;
/*0070*/ /*0xc00021e740000000*/ #!P0 BRA 0xa8;
/*0078*/ /*0xfc001de428000000*/ MOV R0, RZ;
/*0080*/ /*0x04001c034800c000*/ IADD R0, R0, 0x1;
/*0088*/ /*0x04009de218000000*/ MOV32I R2, 0x1;
/*0090*/ /*0x4003dc231a8ec09c*/ ISETP.NE.AND P1, pt, R0, 0x2710, pt;
/*0098*/ /*0x00409c8594000000*/ ST.E [R4], R2;
/*00a0*/ /*0x600005e74003ffff*/ #P1 BRA 0x80;
/*00a8*/ /*0x040001e218000000*/ #P0 MOV32I R0, 0x1;
/*00b0*/ /*0x0060008594000000*/ #P0 ST.E [R6], R0;
/*00b8*/ /*0xffffdc0450ee0000*/ BAR.RED.POPC RZ, RZ;
/*00c0*/ /*0x84001c042c000000*/ S2R R0, SR_Tid_X;
/*00c8*/ /*0x10011c03200dc000*/ IMAD.U32.U32 R4.CC, R0, 0x4, R6;
/*00d0*/ /*0x10009c435000c000*/ IMUL.U32.U32.HI R2, R0, 0x4;
/*00d8*/ /*0x08715c4348000000*/ IADD.X R5, R7, R2;
/*00e0*/ /*0x00401c8594000000*/ ST.E [R4], R0;
/*00e8*/ /*0x00001de790000000*/ RET;
.................................
"working" code
code for sm_20
Function : _Z10testKernelPi
/*0000*/ /*0x00005de428004404*/ MOV R1, c [0x1] [0x100];
/*0008*/ /*0x20009de428004000*/ MOV R2, c [0x0] [0x8];
/*0010*/ /*0x84001c042c000000*/ S2R R0, SR_Tid_X;
/*0018*/ /*0xfc015de428000000*/ MOV R5, RZ;
/*0020*/ /*0x00011de428004000*/ MOV R4, c [0x0] [0x0];
/*0028*/ /*0xfc209c034800ffff*/ IADD R2, R2, 0xfffff;
/*0030*/ /*0x9001dde428004000*/ MOV R7, c [0x0] [0x24];
/*0038*/ /*0x80019de428004000*/ MOV R6, c [0x0] [0x20];
/*0040*/ /*0x08001c03110e0000*/ ISET.EQ.U32.AND R0, R0, R2, pt;
/*0048*/ /*0x01221f841c000000*/ I2I.S32.S32 R8, -R0;
/*0050*/ /*0x2001000750000000*/ CAL 0x60;
/*0058*/ /*0x00001de780000000*/ EXIT;
/*0060*/ /*0x20009de428004000*/ MOV R2, c [0x0] [0x8];
/*0068*/ /*0x8400dc042c000000*/ S2R R3, SR_Tid_X;
/*0070*/ /*0x20201e841c000000*/ I2I.S32.S8 R0, R8;
/*0078*/ /*0x4000000760000001*/ SSY 0xd0;
/*0080*/ /*0xfc209c034800ffff*/ IADD R2, R2, 0xfffff;
/*0088*/ /*0x0831dc031a8e0000*/ ISETP.NE.U32.AND P0, pt, R3, R2, pt;
/*0090*/ /*0xc00001e740000000*/ #P0 BRA 0xc8;
/*0098*/ /*0xfc009de428000000*/ MOV R2, RZ;
/*00a0*/ /*0x04209c034800c000*/ IADD R2, R2, 0x1;
/*00a8*/ /*0x04021de218000000*/ MOV32I R8, 0x1;
/*00b0*/ /*0x4021dc231a8ec09c*/ ISETP.NE.AND P0, pt, R2, 0x2710, pt;
/*00b8*/ /*0x00421c8594000000*/ ST.E [R4], R8;
/*00c0*/ /*0x600001e74003ffff*/ #P0 BRA 0xa0;
/*00c8*/ /*0xfc01dc33190e0000*/ ISETP.EQ.AND.S P0, pt, R0, RZ, pt;
/*00d0*/ /*0x040021e218000000*/ #!P0 MOV32I R0, 0x1;
/*00d8*/ /*0x0060208594000000*/ #!P0 ST.E [R6], R0;
/*00e0*/ /*0xffffdc0450ee0000*/ BAR.RED.POPC RZ, RZ;
/*00e8*/ /*0x10311c03200dc000*/ IMAD.U32.U32 R4.CC, R3, 0x4, R6;
/*00f0*/ /*0x10309c435000c000*/ IMUL.U32.U32.HI R2, R3, 0x4;
/*00f8*/ /*0x84001c042c000000*/ S2R R0, SR_Tid_X;
/*0100*/ /*0x08715c4348000000*/ IADD.X R5, R7, R2;
/*0108*/ /*0x00401c8594000000*/ ST.E [R4], R0;
/*0110*/ /*0x00001de790000000*/ RET;
.................................
The "SSY" instruction is present in the working code but not the broken code. The cuobjdump manual describes the instruction with, "Set synchronization point; used before potentially divergent instructions." This makes me think that for some reason the compiler does not recognize the possibility of divergence in the broken code.
I also found that if I comment out the __noinline__ directive, the code produces the expected output, and indeed the assembly produced by the otherwise "broken" and "working" versions is identical. So this makes me think that when a variable is passed via the call stack, it cannot be used to control divergence and a subsequent synchronization call; the compiler does not seem to recognize the possibility of divergence in that case, and therefore doesn't insert an SSY instruction. Does anyone know whether this is indeed a legitimate limitation of CUDA, and if so, whether it is documented anywhere?
Thanks in advance.
This appears to have simply been a compiler bug that was fixed in CUDA 4.1/4.2; it does not reproduce for the asker on CUDA 4.2.
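For anyone stuck on a pre-4.1 toolkit, a workaround consistent with the observations above is to recompute the divergence condition inside the __noinline__ function instead of receiving it through a parameter, i.e., the "working" variant from the question (sketched here; the semantics are identical to the original):
__device__
__noinline__
void testFunc(int *sA, int *scratchBuffer) {
    if (IS_BOSS()) {   // condition recomputed locally; the compiler emits SSY
        for (int c = 0; c < 10000; c++) {
            sA[0] = 1;
        }
        scratchBuffer[0] = 1;
    }
    __syncthreads();
    scratchBuffer[threadIdx.x] = threadIdx.x;
}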