CUDA: creating logical picture of grid

I've just started learning CUDA and I'm stuck on a fundamental concept: grids. I've read that a grid is just a logical collection of blocks (?), but I'm unable to form a picture of the scene in my mind. I have a clear picture of threads and blocks in my mind and know where they map to on the physical GPU: blocks go to cores and threads go to stream processors. But where does the grid fit into this picture?
Some analogies would be appreciated and would make understanding easier.
P.S. I'm learning from Udacity.
#include "reference_calc.cpp"
#include "utils.h"
#include <stdio.h>
__global__ void rgba_to_greyscale(const uchar4* const rgbaImage,
unsigned char* const greyImage,
int numRows, int numCols)
{
int x,y,i; // i is index for 1D array greyImage. x and y for rgbaImage
i = (blockIdx.y * blockDim.x) + blockIdx.x;
x= (blockIdx.x * blockDim.x) + threadIdx.x;
y= (blockIdx.y * blockDim.y) + threadIdx.y;
if(x < numCols && y < numRows)
{
greyImage[i] = (0.299f * rgbaImage[y].x) + (0.587f * rgbaImage[y].y) + (0.114f * rgbaImage[y].z);
}
}
void your_rgba_to_greyscale(const uchar4 * const h_rgbaImage, uchar4 * const d_rgbaImage,
unsigned char* const d_greyImage, size_t numRows, size_t numCols)
{
//You must fill in the correct sizes for the blockSize and gridSize
//currently only one block with one thread is being launched
const dim3 blockSize(10, 10, 1); //TODO
size_t gridSizeX, gridSizeY;
gridSizeX = numCols + (10 - (numCols % 10) ); //adding some number to make it multiple of 10
gridSizeY = numRows + (10 - (numRows % 10) ); //adding some number to make it multiple of 10
const dim3 gridSize( gridSizeX, gridSizeY, 1); //TODO
rgba_to_greyscale<<<gridSize, blockSize>>>(d_rgbaImage, d_greyImage, numRows, numCols);
cudaDeviceSynchronize(); checkCudaErrors(cudaGetLastError());
}

Actually, threads go to compute cores (at least if we are referring to the marketing term "CUDA cores"), and threadblocks are associated with streaming multiprocessors (SMs, or SMXs in Kepler-speak).
The GRID is all the threads created by a kernel launch. You can call it the collection of blocks if you want to, since a grid is first hierarchically broken down into blocks (then warps), then threads.
For a pictorial representation of this hierarchy, refer to slide 9 of this webinar deck.
You can disregard the statement on that slide that "only one kernel can be launched at a time". That was true in 2009 when that deck was created, but is no longer true on newer devices today.
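To make the hierarchy concrete, here is a minimal sketch; the kernel name and launch dimensions are purely illustrative:

#include <cstdio>

// One launch = one grid; the grid is broken into blocks, blocks into threads.
__global__ void whoAmI()
{
    // Unique block index within the 2D grid
    int blockId = blockIdx.y * gridDim.x + blockIdx.x;
    // Unique thread index within the grid: grid -> block -> thread
    int threadId = blockId * (blockDim.x * blockDim.y)
                 + threadIdx.y * blockDim.x + threadIdx.x;
    if (threadId == 0)
        printf("grid = %d x %d blocks, each block = %d x %d threads\n",
               gridDim.x, gridDim.y, blockDim.x, blockDim.y);
}

int main()
{
    dim3 grid(4, 4);   // the grid: 16 blocks
    dim3 block(8, 8);  // each block: 64 threads
    whoAmI<<<grid, block>>>(); // a single launch creates the entire grid
    cudaDeviceSynchronize();
    return 0;
}

Note that the grid as a whole does not map to any single piece of hardware; its blocks are distributed to SMs as room becomes available.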

Related

CUDA in-situ memory race issue for algorithms such as convolution or morphological dilation

I wrote a dilation kernel in CUDA and it works well when my input and my output images are different buffers, but I am facing what I understand to be a memory race issue when I call my kernel in an in-situ case, i.e. the input and the output buffers point to the same memory location.
I tried:
a. using cooperative groups,
b. using a mutex and an atomic addition, as suggested in this paper and in several sources on the web,
c. using a lock-free inter-block synchronization, the synchronization proposed in that same paper.
All my attempts failed because:
a. did not work because my input buffer is a const pointer and I get a compilation error when I have to cast it into a void* parameter (which makes sense), so I could not go further.
b. did not work because I faced a weird behaviour: I have 16x16 blocks, each with 32x32 threads. Synchronizing the blocks should increase the mutex to 256, but the program blocks after 48 atomic additions.
c. did not work because there seems to be no inter-block synchronization, although the code I used directly from the paper seems good to me. I could reduce the race effect a little by adding some __syncthreads().
This is the dilation function:
template <typename T>
__global__ void GenericDilate2dImg_knl(const ImageSizeInfo imgSizeInfo,
                                       volatile int* syncArrayIn, volatile int* syncArrayOut,
                                       const unsigned long localSizeX, const unsigned long localSizeY,
                                       const int borderPolicyType, const T outOfImageValue,
                                       const struct StructuringElementInfo seInfo,
                                       const T* pInBuf, T* pOutBuf)
{
    // Extract sizeX, sizeY, etc. from imgSizeInfo
    SPLIT_SIZES_FROM_STRUCT(imgSizeInfo)

    // Declare the shared buffer pSharedBuf
    extern __shared__ char pSharedMem[];
    T* pSharedBuf = reinterpret_cast<T*>(pSharedMem);

    const unsigned long x = blockDim.x * blockIdx.x + threadIdx.x;
    const unsigned long y = blockDim.y * blockIdx.y + threadIdx.y;
    const unsigned long planIdx = blockDim.z * blockIdx.z + threadIdx.z;
    const unsigned long nbPlans = sizeZ * sizeC * sizeT;
    const unsigned long idx = x + y * sizeX + planIdx * sizeX * sizeY;

    // Copy the input image data into shared memory
    if (x < blockDim.x * gridDim.x && y < blockDim.y * gridDim.y && planIdx < blockDim.z * gridDim.z) {
        copyDataToSharedMemory2d(pInBuf, sizeX, sizeY, planIdx,
                                 localSizeX, localSizeY,
                                 seInfo._paddingX, seInfo._paddingY,
                                 borderPolicyType, outOfImageValue,
                                 pSharedBuf);
    }

    // Wait to ensure that the copy is terminated
    if (pInBuf == pOutBuf) {
        // Grid synchronization for the in-situ case
        //__gpu_sync(gridDim.x * gridDim.y); // Use a mutex
        __gpu_sync2(1, syncArrayIn, syncArrayOut); // Use a lock-free barrier
    }
    else {
        // The input and output buffers point to different data
        // -> we simply need to synchronize the threads inside the block
        __syncthreads();
    }

    // Compute the dilation for pixels inside the image
    if (x < sizeX && y < sizeY && planIdx < nbPlans) {
        T vMax = 0;
        for (unsigned int curCoefIdx = 0; curCoefIdx < seInfo._nbOffsets; ++curCoefIdx) {
            const unsigned int sx = threadIdx.x + seInfo._paddingX + seInfo._pOffsetsX[curCoefIdx];
            const unsigned int sy = threadIdx.y + seInfo._paddingY + seInfo._pOffsetsY[curCoefIdx];
            const unsigned long sidx = sx + sy * localSizeX;
            const T curVal = pSharedBuf[sidx];
            vMax = (vMax > curVal ? vMax : curVal);
        }
        // Write the result
        pOutBuf[idx] = vMax;
    }
}
My function to copy from global to shared memory is:
template <typename T>
__device__ void copyDataToSharedMemory2d(const T* pInBuf,
                                         const unsigned long sizeX, const unsigned long sizeY, const unsigned long planIdx,
                                         const unsigned long localSizeX, const unsigned long localSizeY,
                                         const int paddingX, const int paddingY,
                                         const int borderPolicyType, const T outOfImageValue,
                                         T* pSharedBuf)
{
    const int x = blockDim.x * blockIdx.x + threadIdx.x;
    const int y = blockDim.y * blockIdx.y + threadIdx.y;
    const int localX = threadIdx.x;
    const int localY = threadIdx.y;

    // Fill the shared buffer tile by tile
    // A tile is related to the group size
    const unsigned int groupSizeX = blockDim.x;
    const unsigned int groupSizeY = blockDim.y;

    // For each tile
    for (int offsetY = 0; offsetY < localSizeY; offsetY += groupSizeY) {
        int curLocalY = localY + offsetY;
        int curGlobalY = y + offsetY - paddingY;
        for (int offsetX = 0; offsetX < localSizeX; offsetX += groupSizeX) {
            int curLocalX = localX + offsetX;
            int curGlobalX = x + offsetX - paddingX;
            // If the current coordinate is inside the shared sub-image
            if (curLocalX < localSizeX && curLocalY < localSizeY) {
                const int idx = curLocalX + curLocalY * localSizeX;
                pSharedBuf[idx] = getPixel2d(pInBuf, sizeX, sizeY, curGlobalX, curGlobalY, planIdx, borderPolicyType, outOfImageValue);
            }
        }
    }
}
Where getPixel2d allows me to manage the data outside the image:
template <typename T>
__device__ T getPixel2d(const T* pInBuf,
                        const unsigned long sizeX, const unsigned long sizeY,
                        const int x, const int y, const int z,
                        const int borderPolicyType, const T outOfImageValue)
{
    // x-coordinate inside the image
    int x_inside = x;
    if (x < 0 || x >= sizeX) {
        switch (borderPolicyType) {
        case 0: // outside the image, there is a constant value
            return outOfImageValue;
        case 1: // outside the image, we propagate the data at the image borders
            if (x < 0)
                x_inside = 0;
            else // x >= sizeX
                x_inside = sizeX - 1;
            break;
        case 2: // Mirror effect
            if (x < 0)
                x_inside = -(x + 1);
            else // x >= sizeX
                x_inside = sizeX - ((x - sizeX) + 1);
            break;
        }
    }
    // y-coordinate inside the image
    int y_inside = y;
    if (y < 0 || y >= sizeY) {
        switch (borderPolicyType) {
        case 0: // outside the image, there is a constant value
            return outOfImageValue;
        case 1: // outside the image, we propagate the data at the image borders
            if (y < 0)
                y_inside = 0;
            else // y >= sizeY
                y_inside = sizeY - 1;
            break;
        case 2: // Mirror effect
            if (y < 0)
                y_inside = -(y + 1);
            else // y >= sizeY
                y_inside = sizeY - ((y - sizeY) + 1);
            break;
        default:
            break;
        }
    }
    return pInBuf[x_inside + y_inside * sizeX + z * sizeX * sizeY];
}
And now, here are my inter-block synchronization functions:
// Using a mutex
__device__ volatile int g_mutex;

__device__ void __gpu_sync(int goalVal) {
    // thread ID in a block
    int tid_in_block = threadIdx.x * blockDim.y + threadIdx.y;
    // only thread 0 is used for synchronization
    if (tid_in_block == 0) {
        atomicAdd((int*)&g_mutex, 1);
        printf("[%d] %d Vs %d\n", blockIdx.x * gridDim.y + blockIdx.y, g_mutex, goalVal);
        // only when all blocks add 1 to g_mutex
        // will g_mutex equal goalVal
        while (g_mutex < /*!=*/ goalVal) {
            ; // Do nothing here
        }
    }
    __syncthreads();
}

// Lock-free barrier
__device__ void __gpu_sync2(int goalVal, volatile int* Arrayin, volatile int* Arrayout) {
    // thread ID in a block
    int tid_in_blk = threadIdx.x * blockDim.y + threadIdx.y;
    int nBlockNum = gridDim.x * gridDim.y;
    int bid = blockIdx.x * gridDim.y + blockIdx.y;
    // only thread 0 is used for synchronization
    if (tid_in_blk == 0) {
        Arrayin[bid] = goalVal;
    }
    if (bid == 1) {
        if (tid_in_blk < nBlockNum) {
            while (Arrayin[tid_in_blk] != goalVal) {
                ; // Do nothing here
            }
        }
        __syncthreads();
        if (tid_in_blk < nBlockNum) {
            Arrayout[tid_in_blk] = goalVal;
        }
    }
    if (tid_in_blk == 0) {
        while (Arrayout[bid] != goalVal) {
            ; // Do nothing here
        }
    }
    __syncthreads();
}
The image I get for the in-situ calculation is:
[image: dilation result, with arrows marking the wrong pixels]
I used an 11x15 structuring element and the size of the shared buffer is (nbThreadsPerBlock + 2*paddingX) * (nbThreadsPerBlock + 2*paddingY). The wrong result (shown by the arrows) appears at the top of some blocks, but always at the same location and with the same values. I'd expect a more random result from a memory race effect...
Is there a better approach to manage in-situ calculation, or is there any reason that would prevent the grid synchronization from working?
EDIT
The size of the image I used is 510x509 and I run my code on a NVidia Quadro RTX 5000.
I would normally suggest a minimal reproducible example for a question like this, as well as an indication of the GPU you are running on, but we can probably proceed without that. In short, what you are trying to do will not work reliably, as you've already discovered.
You have chosen a thread strategy of assigning one thread in your grid per output point:
pOutBuf[idx] = vMax;
which is sensible and fine. I imagine based on this:
I have 16x16 blocks, each with 32x32 threads.
that your input images are 512x512 (16x32 threads in each direction, one thread per output point).
And as you've already stated, you have 256 blocks (each of 1024 threads) in your grid. Furthermore, for the in-situ case, we can simplify your kernel to the following pseudo-code:
__global__ void GenericDilate2dImg_knl(...) {
    read_in_image();
    grid_wide_sync();
    write_out_image();
}
For such a methodology to work, then, the read_in_image() step must be able to read the entire image before any writing occurs. However, your methodology will not work in the general case, and evidently not on your specific GPU either. In order to read in the entire image as per above, every threadblock in the grid must be simultaneously resident on the SMs in your GPU. All 256 blocks need to be deposited on, and running on, an SM. But the GPU provides no inherent guarantee of such a thing. If your GPU has, for example, 24 SMs, each of which can hold a maximum of 2048 threads, then your GPU has a "running" or "instantaneous" capacity of 24*2048 threads, or 48 of your threadblocks. There would not be enough room for all 256 threadblocks to be running. Not only does your algorithm depend on that, but all 3 of your grid sync methods depend on that notion as well.
The fact that your 2nd grid sync method stops after 48 "atomic additions" suggested the example numbers above to me. It's a plausible proximal explanation for why that method failed the way it did: your GPU only allowed 48 of your threadblocks to be resident, and the other 208 threadblocks were waiting in the wings, not yet deposited on any SM, and therefore not allowing any of their threads to run. Those threads in those 208 threadblocks need to run to pick up the relevant input data, as well as to satisfy the requirements of the grid-wide sync. But they are not running, because they are waiting for room to open up on an SM. And room never opens up on an SM, because the full SMs have threadblocks that are waiting at the grid sync point. So you have deadlock.
This problem is not easily solvable in the general case. Any grid sync mechanism, including cooperative groups, has an inherent requirement that all threadblocks be actually simultaneously schedulable on your particular GPU. Therefore in the general case, where we don't know the data set size or the GPU we will be running on, the problem is quite difficult.
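If you want to check at runtime whether a grid-wide sync could even work on the GPU at hand, the occupancy API will tell you how many blocks can be simultaneously resident. A sketch (the kernel stub and the sizes here are illustrative, not your actual kernel):

#include <cstdio>

// Stand-in for the real dilation kernel (signature simplified)
__global__ void dilateKnl(float* p) { if (p) p[0] = 0.f; }

int main()
{
    int dev = 0, numSMs = 0;
    cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, dev);

    // How many 32x32-thread blocks of this kernel fit on one SM at once?
    int blocksPerSM = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &blocksPerSM, dilateKnl, 32 * 32 /*threads per block*/, 0 /*dynamic smem*/);

    int maxResidentBlocks = blocksPerSM * numSMs;
    int blocksNeeded = 16 * 16; // the 256-block grid from the question
    printf("resident capacity: %d blocks, grid needs: %d\n",
           maxResidentBlocks, blocksNeeded);
    // A grid-wide sync can only work if blocksNeeded <= maxResidentBlocks.
    return 0;
}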
One possible approach is to divide your input data set into regions, and have your kernel process a region at a time. This may require multiple grid syncs, one to handle the in/out division in each region, and one to handle the progression of the kernel as it steps through regions. You would also have to handle the region edges carefully.
Another possible approach, if you know the specifics of the data set size and the GPU you are running on, is just to make sure you are running on a GPU "large enough" to handle the data set size. For example, an A100 GPU could probably have as many as 216 blocks simultaneously resident, so for that case you could handle a somewhat smaller image size, perhaps 448 (14x32) in height and 448 in width.
Given that these approaches for in-place or in-situ work on this particular example require considerable complexity, I personally would be strongly motivated to use the methodology where the output is different from the input. That approach will likely run noticeably quicker as well; a grid-wide sync is not a "free" construct from a performance perspective.
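For completeness, a sketch of that out-of-place route (the helper name is hypothetical and the kernel launch is elided): dilate into a scratch buffer, then copy back, which gives in-situ semantics using only __syncthreads():

#include <cuda_runtime.h>

// Hypothetical helper: out-of-place dilation followed by a copy-back.
template <typename T>
cudaError_t dilateInPlace(T* d_img, size_t nbPixels)
{
    T* d_tmp = nullptr;
    size_t bytes = nbPixels * sizeof(T);
    cudaError_t err = cudaMalloc(&d_tmp, bytes);
    if (err != cudaSuccess) return err;

    // dilate d_img -> d_tmp here; with distinct buffers the kernel only
    // needs __syncthreads(), exactly the case that already works:
    // GenericDilate2dImg_knl<<<grid, block, smem>>>(..., d_img, d_tmp);

    err = cudaMemcpy(d_img, d_tmp, bytes, cudaMemcpyDeviceToDevice);
    cudaFree(d_tmp);
    return err;
}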

Udacity parallel programming, unspecified launch failure cudaGetLastError()

I am trying to complete homework #2 for the Udacity parallel programming course. I have run into a CUDA error that I just can't get around. The error is encountered when I launch a kernel that is meant to separate an image in the format "RGBRGBRGB" into three separate arrays: "RRR", "GGG" and "BBB". Seeing as the error "unspecified launch failure" does not give me anything specific to go on, I am not sure how to troubleshoot my issue.
Here is the "main" function called to start the entire process. I left out the rest after the error is encountered so that I don't post the rest of my work for someone to find later.
void your_gaussian_blur(const uchar4* const h_inputImageRGBA, uchar4* const d_inputImageRGBA, uchar4* const d_outputImageRGBA, const size_t numRows, const size_t numCols,
                        unsigned char* d_redBlurred,
                        unsigned char* d_greenBlurred,
                        unsigned char* d_blueBlurred,
                        const int filterWidth)
{
    // Maximum number of threads per block = 512; do this
    // to keep this compatible with CUDA 5 and lower
    // MAX > threadsX * threadsY * threadsZ
    int MAXTHREADSx = 16;
    int MAXTHREADSy = 16; // 16 x 16 x 1 = 256
    // We want to fill the blocks so we don't waste this block's threads
    // I wonder if blocks can intermix in a physical core?
    // Either way this method makes things "clean"; one thread per px
    int nBlockX = numCols / MAXTHREADSx + 1;
    int nBlockY = numRows / MAXTHREADSy + 1;
    const dim3 blockSize(MAXTHREADSx, MAXTHREADSy, 1);
    const dim3 gridSize(nBlockX, nBlockY, 1);
    separateChannels<<<gridSize, blockSize>>>(
        h_inputImageRGBA,
        numRows,
        numCols,
        d_red,
        d_green,
        d_blue);
    // Call cudaDeviceSynchronize(), then call checkCudaErrors() immediately after
    // launching your kernel to make sure that you didn't make any mistakes.
    cudaDeviceSynchronize();
    checkCudaErrors(cudaGetLastError());
And here is the function separateChannels:
//This kernel takes in an image represented as a uchar4 and splits
//it into three images consisting of only one color channel each
__global__
void separateChannels(const uchar4* const inputImageRGBA,
                      int numRows,
                      int numCols,
                      unsigned char* const redChannel,
                      unsigned char* const greenChannel,
                      unsigned char* const blueChannel)
{
    //const int2 thread_2D_pos = make_int2(blockIdx.x * blockDim.x + threadIdx.x, blockIdx.y * blockDim.y + threadIdx.y);
    const int col = blockIdx.x * blockDim.x + threadIdx.x;
    const int row = blockIdx.y * blockDim.y + threadIdx.y;
    //if (thread_2D_pos.x >= numCols || thread_2D_pos.y >= numRows)
    //    return;
    if (col >= numCols || row >= numRows)
        return;
    //const int thread_1D_pos = thread_2D_pos.y * numCols + thread_2D_pos.x;
    int arrayPos = row * numCols + col;
    uchar4 rgba = inputImageRGBA[arrayPos];
    redChannel[arrayPos] = rgba.x;
    greenChannel[arrayPos] = rgba.y;
    blueChannel[arrayPos] = rgba.z;
}
I think I put in everything necessary; please let me know if not.
Without seeing the rest of the code I cannot tell for sure, but I believe you are passing a pointer to host memory as a parameter to a CUDA kernel - not a good thing to do. In the kernel launch you are sending in h_inputImageRGBA, while I believe you want to send in d_inputImageRGBA.
Typically the h_ prefix stands for host memory while d_ represents device memory.
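If that is the case, the launch would presumably become:

separateChannels<<<gridSize, blockSize>>>(
    d_inputImageRGBA, // device copy, was h_inputImageRGBA
    numRows,
    numCols,
    d_red,
    d_green,
    d_blue);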

CUDA: large kernel gives strange behavior

I recently bought a gtx550ti boost card. Programs that used to work on my old gf440 card fail. Here is an example. The following program works fine with smaller kernels, but goes wrong with larger ones.
#include "stdio.h"
__global__ void kernel(float * d_in, float * d_out){
int x = threadIdx.x + blockIdx.x * blockDim.x;
int y = threadIdx.y + blockIdx.y * blockDim.y;
int idx = x + y * blockDim.x * gridDim.x;
d_out[idx] = d_in[idx];
}
int main(){
const dim3 gridSize(10,10);
const dim3 blockSize(80,80);
const int size = 800*800;
float * h_in = new float[size];
float * h_out = new float[size];
float * d_in;
float * d_out;
cudaMalloc((void**)&d_in, sizeof(float)*size);
cudaMalloc((void**)&d_out, sizeof(float)*size);
for(int i = 0; i < size; i++)
h_in[i] = (float)i;
cudaMemcpy(d_in, h_in, sizeof(float)*size, cudaMemcpyHostToDevice);
kernel<<<gridSize,blockSize>>>(d_in, d_out);
cudaMemcpy(h_out, d_out, sizeof(float)*size, cudaMemcpyDeviceToHost);
for(int i = 0; i < size; i++)
printf("%f\n",h_out[i]);
cudaFree(d_in);
cudaFree(d_out);
return 0;
}
I expected it to output the indices as floats. But it outputs some random floats:
0.131061
2.520029
9.304665
0.000189
0.242134
0.525557
0.560013
size 100*100
Instead, when I switch to size 100*100:
const dim3 gridSize(10,10);
const dim3 blockSize(10,10);
const int size = 100*100;
And it works fine (last 5 outputs):
9995.000000
9996.000000
9997.000000
9998.000000
9999.000000
size 500*500
But for larger size 500*500:
const dim3 gridSize(10,10);
const dim3 blockSize(50,50);
const int size = 500*500;
It outputs wrong indices (last 5 outputs):
512139.000000
512140.000000
512141.000000
512142.000000
512143.000000
I installed CUDA 5.5. Thanks!
Whenever you are having trouble with cuda code, you should be doing proper cuda error checking.
This is not valid:
const dim3 blockSize(80,80);
This is asking for a threadblock of 80*80 = 6400 threads. There are no GPUs that support 6400 threads per threadblock.
This is also not valid:
const dim3 blockSize(50,50);
2500 threads is also too many. These configs would not work on either of your cards.
This is acceptable:
const dim3 blockSize(10,10);
In the "not valid" cases, your kernel is not running. If you had done proper cuda error checking, you would have discovered this and even got a clue as to what might be wrong (invalid launch configuration).
You may also want to familiarize yourself with the deviceQuery cuda sample, and study the output for your GPUs.
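For instance, the limit at issue here can also be queried programmatically (a sketch using the runtime API):

#include <cstdio>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: max %d threads per block, max block dims (%d, %d, %d)\n",
           prop.name, prop.maxThreadsPerBlock,
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    return 0;
}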

Confusion over grid and block dimensions

I am trying to solve the problem at the end of lesson 1 of the Udacity course, but I'm not sure if I have just made a typo or if the actual code is wrong.
void your_rgba_to_greyscale(const uchar4* const h_rgbaImage, uchar4* const d_rgbaImage, unsigned char* const d_greyImage, size_t numRows, size_t numCols)
{
    size_t totalPixels = numRows * numCols;
    size_t gridRows = totalPixels / 32;
    size_t gridCols = totalPixels / 32;
    const dim3 blockSize(32, 32, 1);
    const dim3 gridSize(gridCols, gridRows, 1);
    rgba_to_greyscale<<<gridSize, blockSize>>>(d_rgbaImage, d_greyImage, numRows, numCols);
    cudaDeviceSynchronize();
    checkCudaErrors(cudaGetLastError());
}
The other method is:
__global__ void rgba_to_greyscale(const uchar4* const rgbaImage, unsigned char* const greyImage, int numRows, int numCols)
{
    int x = (blockIdx.x * blockDim.x) + threadIdx.x;
    int y = (blockIdx.y * blockDim.y) + threadIdx.y;
    uchar4 rgba = rgbaImage[x * numCols + y];
    float channelSum = 0.299f * rgba.x + 0.587f * rgba.y + 0.114f * rgba.z;
    greyImage[x * numCols + y] = channelSum;
}
The error message says the following:
libdc1394 error: failed to initialize libdc1394
Cuda error at student_func.cu:76
unspecified launch failure cudaGetLastError()
we were unable to execute your code. Did you set the grid and/or block size correctly?
But then, it says that the code has compiled:
Your code compiled!
error output: libdc1394 error: Failed to initialize libdc1394
Cuda error at student_func.cu:76
unspecified launch failure cudaGetLastError()
Line 76 is the last line in the first code block, and as far as I'm aware I haven't changed anything in it. Line 76 is as follows:
rgba_to_greyscale<<<gridSize, blockSize>>>(d_rgbaImage, d_greyImage, numRows, numCols);
I can't actually find the declaration of cudaGetLastError().
I'm mainly concerned with my understanding of setting up the grid/block dimensions, and whether the first method's approach was right with regard to mapping between a 1D array of pixel positions and my threads.
EDIT:
I guess I've misunderstood something. Is numRows the number of pixels in the vertical direction? And is numCols the number of pixels in the horizontal direction?
My block is made up of 8 x 8 threads, where each thread represents 1 pixel? If so, I'm assuming that's why I had to divide by 4 when calculating gridRows, since the image is not square? I'm assuming I could have also made a block that was 2:1 columns : rows?
EDIT 2:
I just tried to change my block so that it was a 2:1 ratio, so I could then divide numRows and numCols by the same number, but it's now showing blank areas at the bottom and side. Why are there blank areas at both the bottom and the side? I haven't changed the y dimensions of my grid or block.
each block processes 32*32 pixels, and there are (totalPixels / 32) * (totalPixels / 32) blocks, so you process totalPixels^2 pixels - that seems wrong
the 1st version was wrong; this should be the correct one:
const dim3 blockSize(32,32,1);
size_t gridCols = (numCols + blockSize.x - 1) / blockSize.x;
size_t gridRows = (numRows + blockSize.y - 1) / blockSize.y;
it is a pretty common pattern for 2D - you can remember it.
in the sample, the image size is not a power of two and you want the blocks to cover all of your image (or even more),
so the following must hold:
gridCols * blockSize.x >= numCols
gridRows * blockSize.y >= numRows
you choose a block size and, based on it, you compute the number of blocks you need to cover the whole image.
after that, in the kernel, you must check that you are not 'out of the image', for cases with bad sizes.
another problem is in the kernel: the index must be (y * numCols + x), not the opposite.
kernel:
int x = (blockIdx.x * blockDim.x) + threadIdx.x;
int y = (blockIdx.y * blockDim.y) + threadIdx.y;
if (x < numCols && y < numRows)
{
    uchar4 rgba = rgbaImage[y * numCols + x];
    float channelSum = 0.299f * rgba.x + 0.587f * rgba.y + 0.114f * rgba.z;
    greyImage[y * numCols + x] = channelSum;
}
calling code:
const dim3 blockSize(4, 32, 1); // may be any
size_t gridCols = (numCols + blockSize.x - 1) / blockSize.x;
size_t gridRows = (numRows + blockSize.y - 1) / blockSize.y;
const dim3 gridSize(gridCols, gridRows, 1);
rgba_to_greyscale<<<gridSize, blockSize>>>(d_rgbaImage, d_greyImage, numRows, numCols);
cudaDeviceSynchronize();
checkCudaErrors(cudaGetLastError());
damn it, i feel like i am making things even harder to understand :(

CUDA multiplication

The serial code snippet looks like this:
int i, j;
for (j = 0; j < ny; j++)
{
    for (i = 0; i < nx; i++)
    {
        x[i + j*nx] *= y[i];
    }
}
I converted this to CUDA using this kernel:
int tid = blockIdx.x * blockDim.x + threadIdx.x;
int i, j;
for (tid = 0; tid < nx*ny; tid++)
{
    j = tid / nx;
    i = tid - j*nx;
    x[tid] *= y[i];
}
However, the GPU kernel does not give any speedup. Any suggestions for a better solution? Thanks in advance.
If this is the serial code:
int i, j;
for (j = 0; j < ny; j++)
{
    for (i = 0; i < nx; i++)
    {
        x[i + j*nx] *= y[i];
    }
}
then you should be doing this:
__global__ void fn(float *x, const float *y, int nx)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int j = tid / nx, i = tid - j * nx;
    x[tid] *= y[i];
}

fn<<<nx*ny/B, B>>>(x, y, nx); // with B = 256, 512, etc.
What you're doing is fairly bizarre: you're instructing each thread of the CUDA kernel to iterate over all values of tid between 0 and nx*ny, and compute the same function as your CPU version! Moreover, instead of just iterating over the indices, you're actually doing the loop less efficiently than you did for the CPU version; in other words, you do the same thing in each thread, just less efficiently, than you are doing in 1 thread on the CPU. It's no wonder that this is slower; it should be much, much slower. Your CUDA kernel is:
int tid = blockIdx.x * blockDim.x + threadIdx.x;
int i, j;
for (tid = 0; tid < nx*ny; tid++)
{
    j = tid / nx;
    i = tid - j*nx;
    x[tid] *= y[i];
}
This does nx*ny iterations, same as your host code, for each thread; you lose all benefit of the parallelism, since each thread is doing the same thing; you would get the same performance using one thread on the GPU, and the same result!
If this is the verbatim code from your CUDA source file, you need to change it and redo the comparison; if this is code you have written to help explain what your code is doing for a lay non-CUDA audience, then you need to present your actual CUDA code so that we can see what's going on... as it is, the performance analysis I have done - the trivial one - is all you can expect.
Given your comment to this answer:
the nx * ny = 2205; so I used no. of blocks =
(nx*ny+(threads-1))/threads and threads = 64.
which implies you are intending to launch one thread per computation, the correct CUDA implementation would just be:
int tid = blockIdx.x * blockDim.x + threadIdx.x;
int j = tid / nx;
int i = tid - j*nx;
if (tid < (nx*ny))
    x[tid] *= y[i];
If you were intending for each thread to perform more than one computation per kernel launch, then you would size the grid to "fill" each of the SMs on the target GPU, not use the same number of threads as the input size, and then do something like:
int tid = blockIdx.x * blockDim.x + threadIdx.x;
int gsize = blockDim.x * gridDim.x;
int i, j;
for (; tid < nx*ny; tid += gsize)
{
    j = tid / nx;
    i = tid - j*nx;
    x[tid] *= y[i];
}
That would get you at least coalesced reads and writes to x, and remove the enormous number of redundant calculations in your posted version. There are a number of further optimizations that could be made, but they would require more information about the problem than has been supplied in the question and subsequent comments. Your indexing scheme contains an integer division and then an integer multiply-add per calculation. That is a lot of overhead for a single FLOP per input value. However, having said all of that, if the problem size I quoted is the actual problem size you are interested in, the GPU will never be faster than even a modest host CPU. You would require many orders of magnitude larger problems to realize a useful speed-up using the GPU for this sort of low-arithmetic-intensity operation.
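As one illustration of removing that overhead (a sketch, not the poster's code): a 2D launch lets i and j come straight from the thread coordinates, so no integer division is needed per element:

// Sketch: 2D indexing removes the per-element integer division.
__global__ void scaleRows(float* x, const float* y, int nx, int ny)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // column
    int j = blockIdx.y * blockDim.y + threadIdx.y; // row
    if (i < nx && j < ny)
        x[i + j * nx] *= y[i];
}

// Launch (block shape illustrative):
// dim3 block(32, 8);
// dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
// scaleRows<<<grid, block>>>(x, y, nx, ny);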
How big is the block? It may be that the time needed to copy a small amount of data to the GPU and set up the environment is much longer than the calculation time.
Remember also that CUDA does a JIT compile on the first run, so to get accurate benchmarking you need to run the kernel many times.
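A common pattern for that (a sketch using CUDA events, with a warm-up launch to absorb the one-time costs):

#include <cstdio>

__global__ void dummy() {} // stand-in for the kernel being timed

int main()
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    dummy<<<1, 1>>>(); // warm-up launch (absorbs module load / JIT costs)
    cudaEventRecord(start);
    dummy<<<1, 1>>>(); // the timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}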
Try this using shared memory. One of the best implementations around:
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride; // In number of elements
    float *elements;
} Matrix;

// Thread block size
#define BLOCK_SIZE 16

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
    return A.elements[row * A.stride + col];
}

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col, float value)
{
    A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZExBLOCK_SIZE sub-matrix Asub of A that is
// located col sub-matrices to the right and row sub-matrices down
// from the upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
    Matrix Asub;
    Asub.width = BLOCK_SIZE;
    Asub.height = BLOCK_SIZE;
    Asub.stride = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row +
                                BLOCK_SIZE * col];
    return Asub;
}

// Forward declaration of the matrix multiplication kernel
__global__ void MatMulKernel(const Matrix, const Matrix, Matrix);

// Matrix multiplication - Host code
// Matrix dimensions are assumed to be multiples of BLOCK_SIZE
void MatMul(const Matrix A, const Matrix B, Matrix C)
{
    // Same as in previous example, except the following:
    // d_A.width = d_A.stride = A.width;
    // d_B.width = d_B.stride = B.width;
    // d_C.width = d_C.stride = C.width;
}

// Matrix multiplication kernel called by MatMul()
__global__ void MatMulKernel(Matrix A, Matrix B, Matrix C)
{
    // Block row and column
    int blockRow = blockIdx.y;
    int blockCol = blockIdx.x;

    // Each thread block computes one sub-matrix Csub of C
    Matrix Csub = GetSubMatrix(C, blockRow, blockCol);

    // Each thread computes one element of Csub
    // by accumulating results into Cvalue
    float Cvalue = 0;

    // Thread row and column within Csub
    int row = threadIdx.y;
    int col = threadIdx.x;

    // Loop over all the sub-matrices of A and B that are
    // required to compute Csub
    // Multiply each pair of sub-matrices together
    // and accumulate the results
    for (int m = 0; m < (A.width / BLOCK_SIZE); ++m)
    {
        // Get sub-matrix Asub of A and Bsub of B
        Matrix Asub = GetSubMatrix(A, blockRow, m);
        Matrix Bsub = GetSubMatrix(B, m, blockCol);

        // Shared memory used to store Asub and Bsub respectively
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load Asub and Bsub from device memory to shared memory
        // Each thread loads one element of each sub-matrix
        As[row][col] = GetElement(Asub, row, col);
        Bs[row][col] = GetElement(Bsub, row, col);

        // Synchronize to make sure the sub-matrices are loaded
        // before starting the computation
        __syncthreads();

        // Multiply Asub and Bsub together
        for (int e = 0; e < BLOCK_SIZE; ++e)
            Cvalue += As[row][e] * Bs[e][col];

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write Csub to device memory
    // Each thread writes one element
    SetElement(Csub, row, col, Cvalue);
}