Don't understand why column addition is faster than row addition in CUDA

I started with CUDA and wrote two kernels as an experiment.
They both accept three pointers to an array of n*n elements (emulating a matrix) and n.
__global__
void th_single_row_add(float* a, float* b, float* c, int n) {
    int idx = blockDim.x * blockIdx.x * n + threadIdx.x * n;
    for (int i = 0; i < n; i++) {
        if (idx + i >= n * n) return;
        c[idx + i] = a[idx + i] + b[idx + i];
    }
}
__global__
void th_single_col_add(float* a, float* b, float* c, int n) {
    int idx = blockDim.x * blockIdx.x + threadIdx.x;
    for (int i = 0; i < n; i++) {
        int idx2 = idx + i * n;
        if (idx2 >= n * n) return;
        c[idx2] = a[idx2] + b[idx2];
    }
}
In th_single_row_add each thread sums a row of n elements; in th_single_col_add each thread sums a column.
Here is the profile for n = 1000 (1,000,000 elements):
986.29us th_single_row_add(float*, float*, float*, int)
372.96us th_single_col_add(float*, float*, float*, int)
As you can see, the column sum is three times faster.
I thought that because in the column variant all the indexes in the loop are far apart from each other it should be slower; where am I wrong?

Threads in CUDA don't act individually, they are grouped together in warps of 32 threads. Those 32 threads execute in lockstep (usually). An instruction issued to one thread is issued to all 32 at the same time, in the same clock cycle.
If that instruction is an instruction that reads memory (for example), then up to 32 independent reads may be required/requested. The exact patterns of addresses needed to satisfy these read operations is determined by the code you write. If those addresses are all "adjacent" in memory, that will be an efficient read. If those addresses are somehow "scattered" in memory, that will be an inefficient read, and will be slower.
This basic concept just described is called "coalesced" access in CUDA. Your column-summing case allows for coalesced access across a warp, because the addresses generated by the threads in the warp are in adjacent columns, and those locations are adjacent in memory. Your row-summing case breaks this: the addresses generated by the threads in the warp are not adjacent (they are "columnar", separated from each other by the width of your array) and are therefore not "coalesced".
The difference in performance is due to this difference in memory access efficiency.
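For illustration (addresses assumed from the question's n = 1000, not taken from the profiler output), here is the access pattern of the first loop iteration for threads 0-31 of block 0:

// th_single_row_add: idx = threadIdx.x * n
//   thread 0 reads a[0], thread 1 reads a[1000], thread 2 reads a[2000], ...
//   32 loads whose addresses are 4000 bytes apart -> many separate memory transactions.
// th_single_col_add: idx = threadIdx.x
//   thread 0 reads a[0], thread 1 reads a[1], thread 2 reads a[2], ...
//   32 consecutive floats (128 bytes) -> a single coalesced transaction.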
You can learn more about coalescing behavior in CUDA by studying an introductory treatment of CUDA optimization, such as here, especially slides 44-54.

Related

CUDA dot product gives wrong results

I wrote a dot product code with CUDA to compute the dot product of two double vectors. The kernel was invoked by N threads (N < 1024) in one block, but it doesn't give correct results and I can't figure out why.
__global__ void dotthread(double* a, double* b, double* sum, int N)
{
    int tid = threadIdx.x;
    sum[0] = sum[0] + a[tid] * b[tid]; // every thread writes to sum[0]
    __syncthreads();
}
Let's look at two of your three lines of code:
sum[0] = sum[0] + a[tid] * b[tid]; // every thread writes to sum[0]
__syncthreads();
The first line contains a memory race. Every thread in the block will simultaneously attempt to write to sum[0]. There is nothing in the CUDA execution model which can stop this from happening; there is no automatic serialization or memory protection which can stop this behaviour.
The second line is an instruction barrier. This means that each warp of threads will be blocked until every warp of threads has reached the barrier. It has no effect whatsoever on prior instructions, and no effect whatsoever on memory consistency or on the behaviour of any memory transactions which your code issues.
The code you have written is irreversibly broken. The canonical way to perform this sort of operation would be via a parallel reduction. There are a number of different ways this can be done; it is probably the most described and documented parallel algorithm for GPUs. If you have installed the CUDA toolkit, you already have both a complete working example and a comprehensive paper describing the algorithm as it would be implemented using shared memory. I suggest you study it.
You can see an (almost) working implementation of a dot product using shared memory here, which I recommend you study as well. You can also find optimal implementations of the parallel block reduction in libraries such as CUB.
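For orientation only, here is a minimal sketch of such a shared-memory block reduction applied to the single-block dot product from the question (this is a sketch, not the toolkit sample; THREADS is an assumed power-of-two block size):

#define THREADS 256

__global__ void dotreduce(const double* a, const double* b, double* sum, int N)
{
    __shared__ double partial[THREADS];
    int tid = threadIdx.x;

    // each thread accumulates its own private partial sum
    double t = 0.0;
    for (int i = tid; i < N; i += blockDim.x)
        t += a[i] * b[i];
    partial[tid] = t;
    __syncthreads();

    // tree reduction in shared memory: halve the active threads each step
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            partial[tid] += partial[tid + s];
        __syncthreads();
    }

    // only thread 0 writes the final result
    if (tid == 0)
        sum[0] = partial[0];
}

It would be launched with a single block, e.g. dotreduce<<<1, THREADS>>>(d_a, d_b, d_sum, N); (device pointer names assumed).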
I wrote two versions of the dot product procedure. The first one uses the atomicAdd function, the second one allocates one shared variable for each block.
The computation times are 3.33 ms and 0.19 ms respectively, compared to 0.17 ms for the reduction dot product and 411.43 ms for a single-thread dot product.
GPU Device 0: "GeForce GTX 1070 Ti" with compute capability 6.1
2000000flow array allocated 2147483647
naive elapsedTimeIn Ms 411.437042 Ms
sum is 6.2831853071e+06
thread atomic add elapsedTimeIn Ms 3.3371520042 Ms
sum is 6.2831853071e+06
cache reduction elapsedTimeIn Ms 0.1764480025 Ms
sum is 6.2831853072e+06
cache atomicadd elapsedTimeIn Ms 0.1914239973 Ms
sum is 6.2831853072e+06
__global__ void dotatomicAdd(double* a, double* b, double* sum, int N)
{
    int tid = blockDim.x * blockIdx.x + threadIdx.x;
    while (tid < N) {
        double t = a[tid] * b[tid];
        atomicAdd(sum, t);
        tid += blockDim.x * gridDim.x;
    }
}
__global__ void dotcache(double* a, double* b, double* c, int N)
{
    __shared__ double cache;
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    int cacheIndex = threadIdx.x;
    double temp = 0.0;

    cache = 0.0;
    __syncthreads();

    while (tid < N) {
        temp += a[tid] * b[tid];
        tid += blockDim.x * gridDim.x;
    }
    atomicAdd(&cache, temp);
    __syncthreads();

    if (cacheIndex == 0) c[blockIdx.x] = cache;
}
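A hedged host-side sketch (not part of the original post) of how the per-block partial sums written by dotcache would then be combined; d_a, d_b, d_c are assumed device buffers and h_c an assumed host buffer holding one double per block:

int threads = 256, blocks = 64;   // assumed launch configuration
dotcache<<<blocks, threads>>>(d_a, d_b, d_c, N);
cudaMemcpy(h_c, d_c, blocks * sizeof(double), cudaMemcpyDeviceToHost);
double dot = 0.0;
for (int i = 0; i < blocks; ++i)  // final sum of the per-block results on the host
    dot += h_c[i];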

Only half of the shared memory array is assigned

I see that only half of the shared memory array is assigned when I use Nsight and step past s_f[sidx] = 5;
__global__ void BackProjectPixel(double* val,
                                 double* projection,
                                 double* focalPtPos,
                                 double* pxlPos,
                                 double* pxlGrid,
                                 double* detPos,
                                 double* detGridPos,
                                 unsigned int nN,
                                 unsigned int nS,
                                 double perModDetAngle,
                                 double perModSpaceAngle,
                                 double perModAngle)
{
    const double fx = focalPtPos[0];
    const double fy = focalPtPos[1];

    //extern __shared__ double s_f[64];
    __shared__ double s_f[64];

    unsigned int i = (blockIdx.x * blockDim.x) + threadIdx.x;
    unsigned int j = (blockIdx.y * blockDim.y) + threadIdx.y;
    unsigned int idx = j * nN + i;
    unsigned int sidx = threadIdx.y * blockDim.x + threadIdx.x;
    unsigned int threadsPerSharedMem = 64;

    if (sidx < threadsPerSharedMem)
    {
        s_f[sidx] = 5;
    }
    __syncthreads();

    //double * angle;
    if (sidx < threadsPerSharedMem)
    {
        s_f[idx] = TriPointAngle(detGridPos[0], detGridPos[1], fx, fy, pxlPos[idx*2], pxlPos[idx*2+1], nN);
    }
}
Here is what I observed:
I am wondering why there are only thirty-two 5s. Shouldn't there be sixty-four 5s in s_f? Thanks.
Threads are executed in groups (usually of 32 threads) called warps. Warps group the threads in order: in your case one warp will get threads 0-31, the other threads 32-63. In your debugging context, you are probably seeing the results of only the warp that contains threads 0-31.
I am wondering why there are only thirty-two 5s?
There are 32 fives because, as mete says, kernels are executed simultaneously only by groups of 32 threads, the so-called warps in CUDA terminology.
Shouldn't there be sixty-four 5s in s_f?
There will be 64 fives after the synchronization barrier, i.e. __syncthreads(). So if you place your breakpoint on the first instruction after the __syncthreads() call, you'll see all fives. That's because by that time all the warps from one block will have finished executing all the code prior to __syncthreads().
How can I see all warps with Nsight?
You can see the values for all the threads easily by putting this into the watch field:
s_f[sidx]
However, the value of sidx may become undefined due to optimizations, so I would rather watch the value of:
s_f[((blockIdx.y * blockDim.y) + threadIdx.y) * nN + (blockIdx.x * blockDim.x) + threadIdx.x]
And indeed, if you want to investigate the values for a particular warp, then as Robert Crovella points out, you should use conditional breakpoints. If you want to break within the second warp, then something like this should work in the case of a two-dimensional grid of two-dimensional blocks (which I presume you are using):
((blockIdx.x + blockIdx.y * gridDim.x) * (blockDim.x * blockDim.y) + (threadIdx.y * blockDim.x) + threadIdx.x) == 32
Because 32 is the index of the first thread within the second warp. For other combinations of block and grid dimensions see this useful cheatsheet.
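For reference, the same flattening written as a device helper (a sketch, not from the original answer); the conditional breakpoint above simply tests this value against 32:

__device__ unsigned int linearThreadId()
{
    // linear index of a thread within a 2D grid of 2D blocks;
    // values 0-31 form the first warp, 32-63 the second, and so on
    unsigned int blockId  = blockIdx.x + blockIdx.y * gridDim.x;
    unsigned int threadId = threadIdx.y * blockDim.x + threadIdx.x;
    return blockId * (blockDim.x * blockDim.y) + threadId;
}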

Can multiple blocks and threads write to the same output?

I have the following CUDA kernel code which computes the sum squared error of two arrays.
__global__ void kSquaredError(double* data, double* recon, double* error,
                              unsigned int num_elements)
{
    const unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    for (unsigned int i = idx; i < num_elements; i += blockDim.x * gridDim.x) {
        *error += pow(data[i] - recon[i], 2);
    }
}
I need a single scalar output (error). In this case, it seems like all threads are writing to error simultaneously. Is there some way I need to synchronize it?
Currently I'm getting a bad result so I'm guessing there is some issue.
The implementation you have now is subject to race conditions, because all threads try to update the same global memory address at the same time. You could simply use an atomicAdd function instead of *error += pow..., but that suffers from performance issues because every update is serialized.
Instead you should do a reduction using shared memory, as follows:
__global__ void kSquaredError(double* data, double* recon, double* error, unsigned int num_elements) {
    const unsigned int idx = blockIdx.x * blockDim.x + threadIdx.x;
    const unsigned int tid = threadIdx.x;
    extern __shared__ double serror[]; // one double per thread; pass blockDim.x * sizeof(double) as the third launch parameter

    double my_error = 0.0;
    for (unsigned int i = idx; i < num_elements; i += blockDim.x * gridDim.x) {
        my_error += pow(data[i] - recon[i], 2); // each thread accumulates its own partial sum
    }
    serror[tid] = my_error; // put each thread's value in shared memory
    __syncthreads();

    for (int i = blockDim.x >> 1; i > 0; i >>= 1) { // reduction in shared memory, halving the active threads each step
        if (tid < i) {
            serror[tid] += serror[tid + i];
        }
        __syncthreads(); // make sure all threads have written to shared memory before the next step
    }

    if (tid == 0) { // thread 0 updates the value in global memory
        atomicAdd(error, serror[0]); // same as *error += serror[0]; but atomic (double atomicAdd needs compute capability 6.0+)
    }
}
It works by the following principle: each thread has its own temporary variable in which it accumulates the error for all of its input; when it has finished, all threads converge at the __syncthreads instruction to ensure that all data is complete.
Now half of the threads in the block each take one value from the corresponding thread in the other half and add it to their own; halve the threads again and repeat until you are left with one thread (thread 0), which holds the total sum.
Thread 0 then updates global memory with an atomicAdd function to avoid race conditions with other blocks, if there are any.
If we had just used the first example with atomicAdd on every assignment, there would be roughly num_elements serialized atomic operations (one per processed element); now there are only gridDim.x atomic operations, which is far fewer.
See Optimizing Parallel Reduction in CUDA for further reading on how reduction in CUDA works.
Edit
Added if statement in the reduction for loop, forgot that.
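A hedged host-side sketch (not part of the original answer) of a launch matching the kernel above; d_data, d_recon and d_error are assumed device pointers, and the third launch parameter supplies one double of dynamic shared memory per thread:

int threads = 256;                       // assumed block size (power of two)
int blocks  = 64;                        // assumed grid size
cudaMemset(d_error, 0, sizeof(double));  // *error must start at zero
kSquaredError<<<blocks, threads, threads * sizeof(double)>>>(d_data, d_recon, d_error, num_elements);
cudaDeviceSynchronize();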

2D Finite Difference Time Domain (FDTD) in CUDA

I would like to write an electromagnetic 2D Finite Difference Time Domain (FDTD) code in CUDA language.
The C code for the update of the magnetic field is the following
// --- Update for Hy and Hx
for (int i = n1; i <= n2; i++) {
    for (int j = n11; j <= n21; j++) {
        Hy[i*ydim+j] = A[i*ydim+j]*Hy[i*ydim+j] + B[i*ydim+j]*(Ezx[(i+1)*ydim+j]-Ezx[i*ydim+j]+Ezy[(i+1)*ydim+j]-Ezy[i*ydim+j]);
        Hx[i*ydim+j] = G[i*ydim+j]*Hx[i*ydim+j] - H[i*ydim+j]*(Ezx[i*ydim+j+1]-Ezx[i*ydim+j]+Ezy[i*ydim+j+1]-Ezy[i*ydim+j]);
    }
}
My first parallelization attempt has been the following kernel:
__global__ void H_update_kernel(double* Hx_h, double* Hy_h, double* Ezx_h, double* Ezy_h, double* A_h, double* B_h, double* G_h, double* H_h, int n1, int n2, int n11, int n21)
{
    int idx = blockIdx.x*BLOCK_SIZE_X + threadIdx.x;
    int idy = blockIdx.y*BLOCK_SIZE_Y + threadIdx.y;

    if ((idx <= n2 && idx >= n1) && (idy <= n21 && idy >= n11)) {
        Hy_h[idx*ydim+idy] = A_h[idx*ydim+idy]*Hy_h[idx*ydim+idy] + B_h[idx*ydim+idy]*(Ezx_h[(idx+1)*ydim+idy]-Ezx_h[idx*ydim+idy]+Ezy_h[(idx+1)*ydim+idy]-Ezy_h[idx*ydim+idy]);
        Hx_h[idx*ydim+idy] = G_h[idx*ydim+idy]*Hx_h[idx*ydim+idy] - H_h[idx*ydim+idy]*(Ezx_h[idx*ydim+idy+1]-Ezx_h[idx*ydim+idy]+Ezy_h[idx*ydim+idy+1]-Ezy_h[idx*ydim+idy]);
    }
}
However, after also using the Visual Profiler, I was unsatisfied with this solution for two reasons:
1) The memory accesses are poorly coalesced;
2) The shared memory is not used.
I then decided to use the following solution
__global__ void H_update_kernel(double* Hx_h, double* Hy_h, double* Ezx_h, double* Ezy_h, double* A_h, double* B_h, double* G_h, double* H_h, int n1, int n2, int n11, int n21)
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    int idx = blockIdx.x*BLOCK_SIZE_X + threadIdx.x;
    int idy = blockIdx.y*BLOCK_SIZE_Y + threadIdx.y;

    int index1 = j*BLOCK_SIZE_Y + i;
    int i1 = (index1) % (BLOCK_SIZE_X+1);
    int j1 = (index1) / (BLOCK_SIZE_Y+1);
    int i2 = (BLOCK_SIZE_X*BLOCK_SIZE_Y + index1) % (BLOCK_SIZE_X+1);
    int j2 = (BLOCK_SIZE_X*BLOCK_SIZE_Y + index1) / (BLOCK_SIZE_Y+1);

    __shared__ double Ezx_h_shared[BLOCK_SIZE_X+1][BLOCK_SIZE_Y+1];
    __shared__ double Ezy_h_shared[BLOCK_SIZE_X+1][BLOCK_SIZE_Y+1];

    if (((blockIdx.x*BLOCK_SIZE_X+i1) < xdim) && ((blockIdx.y*BLOCK_SIZE_Y+j1) < ydim))
        Ezx_h_shared[i1][j1] = Ezx_h[(blockIdx.x*BLOCK_SIZE_X+i1)*ydim + (blockIdx.y*BLOCK_SIZE_Y+j1)];

    if (((i2 < (BLOCK_SIZE_X+1)) && (j2 < (BLOCK_SIZE_Y+1))) && (((blockIdx.x*BLOCK_SIZE_X+i2) < xdim) && ((blockIdx.y*BLOCK_SIZE_Y+j2) < ydim)))
        Ezx_h_shared[i2][j2] = Ezx_h[(blockIdx.x*BLOCK_SIZE_X+i2)*xdim + (blockIdx.y*BLOCK_SIZE_Y+j2)];

    __syncthreads();

    if ((idx <= n2 && idx >= n1) && (idy <= n21 && idy >= n11)) {
        Hy_h[idx*ydim+idy] = A_h[idx*ydim+idy]*Hy_h[idx*ydim+idy] + B_h[idx*ydim+idy]*(Ezx_h_shared[i+1][j]-Ezx_h_shared[i][j]+Ezy_h[(idx+1)*ydim+idy]-Ezy_h[idx*ydim+idy]);
        Hx_h[idx*ydim+idy] = G_h[idx*ydim+idy]*Hx_h[idx*ydim+idy] - H_h[idx*ydim+idy]*(Ezx_h_shared[i][j+1]-Ezx_h_shared[i][j]+Ezy_h[idx*ydim+idy+1]-Ezy_h[idx*ydim+idy]);
    }
}
The index trick is needed to make a block of BS_x * BS_y threads read (BS_x+1)*(BS_y+1) global memory locations into shared memory.
I believe that this choice is better than the previous one, due to the use of the shared memory, although not all the accesses are really coalesced, see
Analyzing memory access coalescing of my CUDA kernel
My question is whether anyone can point me to a better solution in terms of coalesced memory access. Thank you.
Thank you for providing the profiling information.
Your second algorithm is better because you are getting a higher IPC. Still, on CC 2.0 the maximum IPC is 2.0, so your average of 1.018 in the second solution means that only half of your available compute power is utilized. Normally, that means your algorithm is memory bound, but I'm not sure in your case, because almost all the code in your kernel is inside if conditionals. A significant amount of warp divergence will affect performance, but I haven't checked whether instructions whose results are not used count towards the IPC.
You may want to look into reading through the texture cache. Textures are optimized for 2D spatial locality and better support semi-random 2D access. It may help your [i][j] type accesses.
In the current solution, make sure that adjacent thread IDs map to adjacent values of the fastest-varying index ([j]), to keep memory accesses as sequential as possible.
It could be that the compiler has optimized this for you, but you recalculate idx*ydim+idy many times. Try calculating it once and reusing the result. That would have more potential for improvement if your algorithm was compute bound though.
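As an illustration of the last two suggestions (mapping threadIdx.x to the fastest-varying index and computing the linear offset once), a global-memory-only version might look like the sketch below. This is a sketch, not the poster's code; ydim is passed as a parameter here so the kernel is self-contained:

__global__ void H_update_coalesced(double* Hx_h, double* Hy_h, const double* Ezx_h, const double* Ezy_h,
                                   const double* A_h, const double* B_h, const double* G_h, const double* H_h,
                                   int n1, int n2, int n11, int n21, int ydim)
{
    int idy = blockIdx.x * blockDim.x + threadIdx.x;  // column index: contiguous in memory, so a warp reads consecutive doubles
    int idx = blockIdx.y * blockDim.y + threadIdx.y;  // row index

    if (idx >= n1 && idx <= n2 && idy >= n11 && idy <= n21) {
        int g = idx * ydim + idy;                     // linear offset computed once and reused
        Hy_h[g] = A_h[g] * Hy_h[g] + B_h[g] * (Ezx_h[g + ydim] - Ezx_h[g] + Ezy_h[g + ydim] - Ezy_h[g]);
        Hx_h[g] = G_h[g] * Hx_h[g] - H_h[g] * (Ezx_h[g + 1]    - Ezx_h[g] + Ezy_h[g + 1]    - Ezy_h[g]);
    }
}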
I believe that in this case your first parallel solution is better, because each thread reads each global memory array element only once. Therefore storing these arrays in shared memory doesn't bring you the expected improvement.
It could speed up your program thanks to better coalesced access to global memory while loading data into shared memory, but IMHO this is balanced by the caching of global memory accesses if you are using Compute Capability 2.x, and the use of shared memory can also be degraded by bank conflicts.

CUDA efficient division?

I would like to know if there is, by any chance, an efficient way of dividing the elements of an array. I am working with 10000x10000 matrices, and it takes a considerable amount of time in comparison with other kernels. Division is an expensive operation, and I can't see how to improve it.
__global__ void division(int N, float* A, int* B){
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    int col = blockIdx.y * blockDim.y + threadIdx.y;

    if ((row < N) && (col <= row)) {
        if (B[row*N+col] > 0)
            A[row*N+col] /= (float)B[row*N+col];
    }
}
The kernel is launched with:
int N = 10000;
int threads = 32;
int blocks = (N + threads - 1) / threads;
dim3 t(threads, threads);
dim3 b(blocks, blocks);
division<<<b, t>>>(N, A, B);
cudaThreadSynchronize();
Option B:
__global__ void division(int N, float* A, int* B){
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    int kmax = N*(N+1)/2;
    int row, col;

    if (k < kmax) {
        row = (int)(sqrt(0.25 + 2.0*k) - 0.5);
        col = k - (row*(row+1))>>1;
        if (B[row*N+col] > 0)
            A[row*N+col] /= (float)B[row*N+col];
    }
}
launched with
int threads = 192;
int totalThreadsNeeded = N*(N+1)/2;
int blocks = (totalThreadsNeeded + threads - 1) / threads;
division<<<blocks, threads>>>(N, A, B);
Why is option B giving a wrong result even though the thread IDs are correct? What is missing here?
Your basic problem is that you are launching an improbably huge grid (over 100 million threads for your 10000x10000 array example), and then, because of the triangular nature of the access pattern in the kernel, fully half of those threads never do anything productive. So an enormous number of GPU cycles are being wasted for no particularly good reason. Further, the access pattern you are using doesn't allow coalesced memory access, which is going to further reduce the performance of the threads which are actually doing useful work.
If I understand your problem correctly, the kernel is only performing element-wise division on a lower-triangle of a square array. If this is the case, it could be equally done using something like this:
__global__
void division(int N, float* A, int* B)
{
    for (int row = blockIdx.x; row < N; row += gridDim.x) {
        for (int col = threadIdx.x; col <= row; col += blockDim.x) {
            int val = max(1, B[row*N+col]);
            A[row*N+col] /= (float)val;
        }
    }
}
[disclaimer: written in browser, never compiled, never tested, use at own risk]
Here, a one-dimensional grid is used, with each block computing a row at a time. Threads in a block move along the row, so memory access is coalesced. In comments you mention your GPU is a Tesla C2050. That device only requires 112 blocks of 192 threads each to completely "fill" each of the 14 SMs with a full complement of 8 blocks each and the maximum number of concurrent threads per SM. So the launch parameters could be something like:
int N = 10000;
int threads = 192;
int blocks = min(8*14, N);
division<<<blocks, threads>>>(N, A, B);
I would expect this to run considerably faster than your current approach. If numerical accuracy isn't that important, you can probably achieve further speed-up by replacing the division with an approximate reciprocal intrinsic and a floating point multiply.
Because threads are executed in groups of 32, called warps, you are paying for the division for all 32 threads in a warp if both if conditions are true for just one of the threads. If the condition is false for many threads, see if you can filter out the values for which the division is not needed in a separate kernel.
The int to float conversion may itself be slow. If so, you might be able to generate floats directly in your earlier step, and pass B in as an array of floats.
You may be able to generate the reciprocals in the earlier step, where you generate the B array. If so, you can use multiplication instead of division in this kernel (a / b == a * (1/b)).
Depending on your algorithm, maybe you can get away with a lower precision division. There's an intrinsic, __fdividef(x, y), that you can try. There is also a compiler flag, -prec-div=false.
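As a sketch of that last suggestion (an illustration, not part of the answer above), the question's kernel with the fast intrinsic swapped in could look like this; whether the reduced precision is acceptable depends on the algorithm:

__global__ void division_fast(int N, float* A, int* B)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    int col = blockIdx.y * blockDim.y + threadIdx.y;

    if ((row < N) && (col <= row)) {
        int b = B[row*N+col];
        if (b > 0)
            // __fdividef is the fast, lower-precision single-precision division
            A[row*N+col] = __fdividef(A[row*N+col], (float)b);
    }
}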
The very first thing to look at should be coalesced memory access. There is no reason for the non-coalesced pattern here; just exchange rows and columns to avoid wasting a lot of memory bandwidth:
int col = blockIdx.x * blockDim.x + threadIdx.x;
int row = blockIdx.y * blockDim.y + threadIdx.y;
...
A[row*N+col] ...
Even if this is run on compute capability 2.0 or higher, the caches are not large enough to remedy this suboptimal pattern.