I'm using CUDA 6.5 + VS2013 + GTX Titan Black. I observe that the following printing code crashes when the total number of threads is larger than 65536. I googled a bit but haven't seen anything useful. Does anyone else observe the same behaviour? Or can anyone provide some explanation? Thank you very much!
#include <cstdio>
#include <cuda_runtime.h>

__global__ void testKernel(int val)
{
int X = blockDim.x * blockIdx.x + threadIdx.x;
int Y = blockDim.y * blockIdx.y + threadIdx.y;
printf("[%d, %d]:\t" "\tValue is:%d\n", X, Y, val);
}
int main(){
dim3 block(16,16);
dim3 grid(16,16);
testKernel<<<grid, block>>>(10);
cudaDeviceSynchronize();
cudaGetLastError();
cudaDeviceReset();
return 0;
}
And I got the following error message when I use block(32,16) and grid(16,16):
Gpu API call (the launch timed out and was terminated)...
Your kernel is taking too long to execute:
the launch timed out and was terminated
This is a limitation of the Windows operating system when running on WDDM devices.
There are a variety of workarounds possible. Some are:
reduce your kernel execution time
switch the GPU to TCC mode, if possible (not possible with GeForce GPUs).
extend the TDR timeout delay (or remove it) via Windows registry modification
Also, the in-kernel printf feature has significant limits. It's really not designed for large-scale output for a variety of reasons. One in particular is that the buffer for this activity is limited, and when overflowed, the previous buffer data will be lost (i.e. not printed out).
Thanks to Robert's answer, I realized that the problem might be due to the size of the buffer. I used the following code to find out that, by default, the size of the printf buffer is 1048576 bytes (1 MB):
size_t sz;
cudaDeviceGetLimit(&sz, cudaLimitPrintfFifoSize);
std::cout << sz << std::endl;
When I increase the buffer size to 100 MB using the following code, the error disappears and I get all the expected output, 131072 lines in total. (I use block(32,16) and grid(16,16).)
sz = 1048576 * 100;
cudaDeviceSetLimit(cudaLimitPrintfFifoSize, sz);
Somehow, the overflow of the printf buffer causes a longer response time than usual and triggers a TDR. When I increase the buffer size accordingly, the code manages to finish before the timeout. More importantly, a sufficient buffer size ensures that no data is lost.
But I think the upper bound on buffer size and execution time depends on the device. Working well on a Titan Black does not necessarily mean it also works on other NVIDIA cards. Again, I agree with Robert that using printf to export large amounts of data from CUDA kernels is unreliable in practice. I just use it to dump some info for debugging the kernel.
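For reference, here is a minimal consolidated sketch of the workaround (runtime-API calls only; the kernel and launch configuration are the ones from the question), with the status of the limit call and of the launch checked explicitly:

#include <cstdio>
#include <iostream>
#include <cuda_runtime.h>

__global__ void testKernel(int val)
{
    int X = blockDim.x * blockIdx.x + threadIdx.x;
    int Y = blockDim.y * blockIdx.y + threadIdx.y;
    printf("[%d, %d]:\t" "\tValue is:%d\n", X, Y, val);
}

int main()
{
    // Query the default printf FIFO size (1048576 bytes in the question).
    size_t sz = 0;
    cudaDeviceGetLimit(&sz, cudaLimitPrintfFifoSize);
    std::cout << "default printf FIFO size: " << sz << " bytes" << std::endl;

    // Raise it by 100x (100 MB with the default 1 MB) so all 131072 lines fit.
    cudaError_t err = cudaDeviceSetLimit(cudaLimitPrintfFifoSize, sz * 100);
    if (err != cudaSuccess) {
        std::cerr << "cudaDeviceSetLimit: " << cudaGetErrorString(err) << std::endl;
        return 1;
    }

    dim3 block(32, 16);
    dim3 grid(16, 16);
    testKernel<<<grid, block>>>(10);

    // The FIFO is flushed on synchronization; check for launch/runtime errors.
    err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        std::cerr << "kernel error: " << cudaGetErrorString(err) << std::endl;

    cudaDeviceReset();
    return 0;
}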
In my application cuMemAlloc/cuMemFree seem awfully slow most of the time. However, I found that they are sometimes 10 times faster than usual. The test program below finishes in about 0.4s on two machines, both with cuda 5.5 but one with a compute capability 2.0 card, the other with a 3.5 one.
If the cublas initialization is removed, it takes about 5s. With the cublas initialization in, but allocating a different number of bytes such as 4000, it slows down by about the same amount. Needless to say, I'm puzzled by this.
What can be causing this? If it's not a bug in my code, what kind of workaround do I have? The only thing I could think of is preallocating an arena and implementing my own allocator.
#include <stdio.h>
#include <cuda.h>
#include <cublas_v2.h>
#define cudaCheck(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(CUresult code, const char *file, int line)
{
if (code != CUDA_SUCCESS) {
fprintf(stderr,"GPUassert: %d %s %d\n", code, file, line);
exit(1);
}
}
int main(int argc, char *argv[])
{
CUcontext context;
CUdevice device;
int devCount;
cudaCheck(cuInit(0));
cudaCheck(cuDeviceGetCount(&devCount));
cudaCheck(cuDeviceGet(&device, 0));
cudaCheck(cuCtxCreate(&context, 0, device));
cublasStatus_t stat;
cublasHandle_t handle;
stat = cublasCreate(&handle);
if (stat != CUBLAS_STATUS_SUCCESS) {
printf ("CUBLAS initialization failed\n");
exit(1);
}
{
int i;
for (i = 0; i < 30000; i++) {
CUdeviceptr devBufferA;
cudaCheck(cuMemAlloc(&devBufferA, 8000));
cudaCheck(cuMemFree(devBufferA));
}
}
}
I took your code and profiled it on a 64-bit Linux system with the 319.21 driver, CUDA 5.5, and a non-display compute capability 3.0 device. My first observation is that the run time is about 0.5s, which seems much faster than you are reporting. If I analyse the nvprof output, I get these histograms:
cuMemFree
Time (us)   Frequency
 3.6519     29667
 4.5938       276
 5.5357        32
 6.4776         1
 7.4195         1
 8.3614         6
 9.3033         0
10.2452         1
11.1871         2
12.1290        14
cuMemAlloc
Time (us)   Frequency
 3.5384     29869
 4.5058        86
 5.4732        20
 6.4406         0
 7.4080         0
 8.3754         6
 9.3428         0
10.3102         0
11.2776        12
12.2450         5
which tells me that 99.6% of cuMemAlloc calls take less than 3.5384 microseconds, and 98.9% of cuMemFree calls take less than 3.6519 microseconds. No free or allocate operation took more than 12.25 microseconds.
So my conclusions based on these results are
Both cuMemFree and cuMemAlloc are extremely fast, with every one of the 60000 total calls to those APIs in your example taking less than 12.25 microseconds
The median call time for both APIs is 2.7 microseconds, with a standard deviation of 0.25 microseconds, suggesting that there is very little variability in the API latency as well
Very occasionally (about 0.01% of the time), both APIs can be around six times slower than this median. This is probably due to operating system level resource contention
Every single one of the above points completely contradicts every assertion you have made in your question.
Given how different your results apparently are, I can only guess that you are running on a known high-latency platform like WDDM Windows, and that driver batching and WDDM subsystem latency are completely dominating the performance of the code. In that case, it would seem that the simplest workaround is to change platforms.
The CUDA memory manager is known to be slow. I've seen mention that it is "two orders of magnitude" slower than host malloc() and free(). This information may be dated, but there are some graphs here:
http://www.cs.virginia.edu/~mwb7w/cuda_support/memory_management_overhead.html
I think this is because the CUDA memory manager is optimized for handling a small number of memory allocations, at the cost of slowing down when there is a large number of allocations. And that, in turn, is because it is generally not efficient to handle many small buffers in a kernel.
There are two main issues with dealing with many buffers in a kernel:
1) It implies passing a table of pointers to the kernel. If there is a pointer for each thread, you incur an initial cost of loading the pointer from a table in global memory before you can start working with the memory. Following a series of pointers is sometimes called "pointer chasing", and it is especially expensive on a GPU because memory access is relatively more expensive.
2) More importantly, a pointer for each thread implies a non-coalesced memory access pattern. On current architectures, if each thread in a warp loads a 32-bit value from global memory that is more than 128 bytes away from the others, 32 memory transactions are required to serve the warp. Each transaction will load 128 bytes and then discard 124 bytes. If all threads in a warp load values from the same naturally aligned 128-byte area, all the loads are served by a single memory transaction. So, in a memory-bound kernel, memory throughput may be only 1/32 of its potential.
The most efficient way to handle memory with CUDA is often to allocate a few large chunks and index into them in the kernel.
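As an illustration of that last point, here is a minimal sketch (the kernel name, slice length, and counts are made up for the example) that replaces thousands of small allocations with one large runtime-API allocation that each thread indexes into:

#include <cuda_runtime.h>

// Each "logical buffer" is a fixed-size slice of one large allocation.
__global__ void useSlices(float *pool, int sliceLen, int numSlices)
{
    int slice = blockIdx.x * blockDim.x + threadIdx.x;
    if (slice >= numSlices) return;

    // Index into the pool instead of chasing a per-thread pointer.
    float *mySlice = pool + (size_t)slice * sliceLen;
    for (int i = 0; i < sliceLen; ++i)
        mySlice[i] = (float)slice;
}

int main()
{
    const int numSlices = 30000;  // instead of 30000 separate cuMemAlloc calls
    const int sliceLen  = 2000;   // 2000 floats = 8000 bytes per slice

    float *pool = NULL;
    // One large allocation up front, freed once at the end.
    cudaMalloc((void **)&pool, (size_t)numSlices * sliceLen * sizeof(float));

    int threads = 256;
    int blocks  = (numSlices + threads - 1) / threads;
    useSlices<<<blocks, threads>>>(pool, sliceLen, numSlices);
    cudaDeviceSynchronize();

    cudaFree(pool);
    return 0;
}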
I'm trying to implement sum of absolute differences in CUDA for a homework assignment, but am having trouble getting correct results.
I am given a Blocksize that represents X and Y size (in pixels) of a square portion of the images I am given to compare. I am also given two images in YUV format. Below are the portions of the program I have to implement: the kernel that calculates the SAD and the setup for the size of the grid/blocks of threads. The rest of the program is provided, and can be assumed to be correct.
Here I'm getting the x and y index of the current thread and using those to get the pixel in the image arrays I'm dealing with in the current thread. Then I calculate the absolute difference and wait for all the threads to finish calculating it; then, if the current thread is within the block of the image we care about, the absolute difference is added to the sum in global memory with an atomicAdd to avoid a collision during the write.
__global__ void gpuCounterKernel(pixel* cuda_curBlock, pixel* cuda_refBlock, uint32* cuda_SAD, uint32 cuda_Blocksize)
{
int idx = blockIdx.x * blockDim.x + threadIdx.x;
int idy = blockIdx.y * blockDim.y + threadIdx.y;
int id = idx * cuda_Blocksize + idy;
int AD = abs( cuda_curBlock[id] - cuda_refBlock[id] );
__syncthreads();
if( idx < cuda_Blocksize && idy < cuda_Blocksize ) {
atomicAdd( cuda_SAD, AD );
}
}
And this is how I'm setting up the grid and blocks for the kernel:
int grid_sizeX = Blocksize/2;
int grid_sizeY = Blocksize/2;
int block_sizeX = Blocksize/4;
int block_sizeY = Blocksize/4;
dim3 blocksInGrid(grid_sizeX, grid_sizeY);
dim3 threadsInBlock(block_sizeX, block_sizeY);
The given program calculates the SAD on the CPU as well and compares our result from the GPU with that one to check for correctness. Valid block sizes within the image are from 1-1000. My solution above is getting correct results from 10-91, but anything above 91 just returns 0 for the sum. What am I doing wrong?
Your grid and block size settings look odd.
Usually we use settings similar to the following for image pixels.
int imageROISize=1000;
dim3 threadInBlock(16,16);
dim3 blocksInGrid((imageROISize+15)/16, (imageROISize+15)/16);
You could refer to the following section in the CUDA programming guide for more information on how to distribute workloads to CUDA threads.
http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#thread-hierarchy
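As a sketch of how a kernel typically pairs with that launch configuration (the kernel name, image pointer, and pitch parameter are placeholders, not taken from the question), each thread maps to one pixel and a bounds check masks off the threads that fall outside the ROI:

__global__ void processROI(const unsigned char *image, int pitch, int imageROISize, unsigned int *result)
{
    // One thread per pixel; the grid may overshoot the ROI, so guard the edges.
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < imageROISize && y < imageROISize) {
        // Per-pixel work goes here, e.g. accumulating into a single result.
        atomicAdd(result, (unsigned int)image[y * pitch + x]);
    }
}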
You really should show all the code and identify the GPU you are running on, at least the portion that calls the kernel and allocates data for GPU use. Are you doing proper CUDA error checking on all CUDA API calls and kernel calls? Probably your kernel is not running at all because your threadsInBlock parameter exceeds 512 threads total. You indicate that at Blocksize = 92 and above, things are not working. Let's do the math:
92/4 = 23 threads in X and Y dimensions
23 * 23 = 529 total threads requested per threadblock
529 exceeds 512 which is the limit for cc 1.x devices, so I'm guessing you're running on a cc 1.x device, and therefore your kernel launch is failing, so your kernel is not running, and so you get no computed results (i.e. 0). Note that at 91/4 = 22 threads in X and Y dimensions, you are requesting 484 total threads which does not exceed the 512 limit for cc 1.x devices.
If you were doing proper cuda error checking, the error report would have focused your attention on the cuda kernel launch failing due to incorrect launch parameters.
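For completeness, here is a minimal sketch of the kind of error checking being suggested (the macro name is arbitrary); checking right after the launch reports an invalid launch configuration immediately:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Minimal runtime-API error check; aborts on the first failure it sees.
#define CUDA_CHECK(call)                                                     \
    do {                                                                     \
        cudaError_t err = (call);                                            \
        if (err != cudaSuccess) {                                            \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                      \
                    cudaGetErrorString(err), __FILE__, __LINE__);            \
            exit(1);                                                         \
        }                                                                    \
    } while (0)

// After a kernel launch, check both the launch and the execution:
//   gpuCounterKernel<<<blocksInGrid, threadsInBlock>>>(cuda_curBlock, cuda_refBlock, cuda_SAD, Blocksize);
//   CUDA_CHECK(cudaGetLastError());        // catches invalid launch parameters
//   CUDA_CHECK(cudaDeviceSynchronize());   // catches errors from the kernel itself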
I need some help understanding the behavior of Ron Farber's code: http://www.drdobbs.com/parallel/cuda-supercomputing-for-the-masses-part/208801731?pgno=2
I'm not understanding how the use of shared memory gives faster performance over the non-shared-memory version. That is, if I add a few more index calculation steps and add another read/write cycle to access the shared memory, how can this be faster than just using global memory alone? The same number of read/write cycles access global memory in either case. The data is still accessed only once per kernel instance. Data still goes in/out through global memory. The number of kernel instances is the same. The register count looks to be the same. How can adding more processing steps make it faster? (We are not subtracting any processing steps.) Essentially we are doing more work, and it is getting done faster.
Shared memory access is much faster than global, but it is not zero (or negative).
What am I missing?
The 'slow' code:
__global__ void reverseArrayBlock(int *d_out, int *d_in) {
int inOffset = blockDim.x * blockIdx.x;
int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
int in = inOffset + threadIdx.x;
int out = outOffset + (blockDim.x - 1 - threadIdx.x);
d_out[out] = d_in[in];
}
The 'fast' code:
__global__ void reverseArrayBlock(int *d_out, int *d_in) {
extern __shared__ int s_data[];
int inOffset = blockDim.x * blockIdx.x;
int in = inOffset + threadIdx.x;
// Load one element per thread from device memory and store it
// *in reversed order* into temporary shared memory
s_data[blockDim.x - 1 - threadIdx.x] = d_in[in];
// Block until all threads in the block have written their data to shared mem
__syncthreads();
// write the data from shared memory in forward order,
// but to the reversed block offset as before
int outOffset = blockDim.x * (gridDim.x - 1 - blockIdx.x);
int out = outOffset + threadIdx.x;
d_out[out] = s_data[threadIdx.x];
}
Early CUDA-enabled devices (compute capability < 1.2) would not treat the d_out[out] write in your "slow" version as a coalesced write. Those devices would only coalesce memory accesses in the "nicest" case, where the i-th thread in a half warp accesses the i-th word. As a result, 16 memory transactions would be issued to service the d_out[out] write for every half warp, instead of just one memory transaction.
Starting with compute capability 1.2, the rules for memory coalescing in CUDA became much more relaxed. As a result, the d_out[out] write in the "slow" version would also get coalesced, and using shared memory as a scratch pad is no longer necessary.
The source of your code sample is the article "CUDA, Supercomputing for the Masses: Part 5", which was written in June 2008. CUDA-enabled devices with compute capability 1.2 only arrived on the market in 2009, so the writer of the article was clearly talking about devices with compute capability < 1.2.
For more details, see section F.3.2.1 in the NVIDIA CUDA C Programming Guide.
This is because the shared memory is closer to the computing units, hence latency and peak bandwidth will not be the bottleneck for this computation (at least in the case of matrix multiplication).
But most importantly, the top reason is that many of the numbers in a tile are reused by many threads. So if you fetch them from global memory, you retrieve those numbers multiple times. Writing them once to shared memory eliminates that wasted bandwidth usage.
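To illustrate that reuse argument with a generic example (not taken from the article), a standard shared-memory tiled matrix multiply loads each tile element from global memory once and then reuses it TILE times from shared memory:

#define TILE 16

// C = A * B for square n x n matrices; n is assumed to be a multiple of TILE.
__global__ void matMulTiled(const float *A, const float *B, float *C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        // Each tile element is read from global memory exactly once...
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();

        // ...and then reused TILE times from shared memory by the block.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}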
Looking at the global memory accesses, the slow code reads forwards and writes backwards. The fast code both reads and writes forwards. I think the fast code is faster because the cache hierarchy is optimized, in some way, for accessing global memory in ascending order (towards higher memory addresses).
CPUs do some speculative fetching, where they will fill cache lines from higher memory addresses before the data has been touched by the program. Maybe something similar happens on the GPU.
I am facing the following problem on a GeForce GTX 580 (Fermi-class) GPU.
Just to give you some background, I am reading single-byte samples packed in the following manner in a file: Real(Signal 1), Imaginary(Signal 1), Real(Signal 2), Imaginary(Signal 2). (Each byte is a signed char, taking values between -128 and 127.) I read these into a char4 array, and use the kernel given below to copy them to two float2 arrays corresponding to each signal. (This is just an isolated part of a larger program.)
When I run the program using cuda-memcheck, I get either an unqualified unspecified launch failure, or the same message along with User Stack Overflow or Breakpoint Hit or Invalid __global__ write of size 8 at random thread and block indices.
The main kernel and launch-related code is reproduced below. The strange thing is that this code works (and cuda-memcheck throws no error) on a non-Fermi-class GPU that I have access to. Another thing that I observed is that the Fermi gives no error for N less than 16384.
#define N 32768
int main(int argc, char *argv[])
{
char4 *pc4Buf_h = NULL;
char4 *pc4Buf_d = NULL;
float2 *pf2InX_d = NULL;
float2 *pf2InY_d = NULL;
dim3 dimBCopy(1, 1, 1);
dim3 dimGCopy(1, 1);
...
/* i do check for errors in the actual code */
pc4Buf_h = (char4 *) malloc(N * sizeof(char4));
(void) cudaMalloc((void **) &pc4Buf_d, N * sizeof(char4));
(void) cudaMalloc((void **) &pf2InX_d, N * sizeof(float2));
(void) cudaMalloc((void **) &pf2InY_d, N * sizeof(float2));
...
dimBCopy.x = 1024; /* number of threads in a block, for my GPU */
dimGCopy.x = N / 1024;
CopyDataForFFT<<<dimGCopy, dimBCopy>>>(pc4Buf_d,
pf2InX_d,
pf2InY_d);
...
}
__global__ void CopyDataForFFT(char4 *pc4Data,
float2 *pf2FFTInX,
float2 *pf2FFTInY)
{
int i = (blockIdx.x * blockDim.x) + threadIdx.x;
pf2FFTInX[i].x = (float) pc4Data[i].x;
pf2FFTInX[i].y = (float) pc4Data[i].y;
pf2FFTInY[i].x = (float) pc4Data[i].z;
pf2FFTInY[i].y = (float) pc4Data[i].w;
return;
}
One thing I noticed in my program is that if I comment out either the first two or the last two char-to-float assignment statements in my kernel, there's no memory error. If I comment out one from the first two (pf2FFTInX) and another from the second two (pf2FFTInY), errors still crop up, but less frequently. The kernel uses 6 registers with all four assignment statements uncommented, and 4 registers with two assignment statements commented out.
I tried the 32-bit toolkit in place of the 64-bit toolkit, 32-bit compilation with the -m32 compiler option, running without X windows, etc. but the program behaviour is the same.
I use CUDA 4.0 driver and runtime (also tried CUDA 3.2) on RHEL 5.6. The GPU compute capability is 2.0.
Please help! I could post the entire code if anybody is interested in running it on their Fermi cards.
UPDATE: Just for the heck of it, I inserted a __syncthreads() between the pf2FFTInX and the pf2FFTInY assignment statements, and memory errors disappeared for N = 32768. But at N = 65536, I still get errors. <-- This didn't last long. Still getting errors.
UPDATE: In continuing with the weird behaviour, when I run the program using cuda-memcheck, I get these 16x16 blocks of multi-coloured pixels distributed randomly all over my screen. This does not happen if I run the program directly.
The problem was a bad GPU card (see the comments). [Adding this answer to remove the question from the unanswered list and make it more useful.]
Hey, I've seen this example kernel on a website:
__global__ void loop1( int N, float alpha, float* x, float* y ) {
int i;
int i0 = blockIdx.x*blockDim.x + threadIdx.x;
for(i=i0;i<N;i+=blockDim.x*gridDim.x) {
y[i] = alpha*x[i] + y[i];
}
}
To compute this function in C
for(i=0;i<N;i++) {
y[i] = alpha*x[i] + y[i];
}
Surely the for loop inside the kernel isn't necessary? You can just do y[i0] = alpha*x[i0] + y[i0] and remove the for loop altogether.
I'm just curious as to why it's there and what its purpose is. This is assuming a kernel call such as loop1<<<64,256>>>, so presumably gridDim.x = 1.
You need the for loop in the kernel if your vector has more entries than you have launched threads. If it's possible, it is of course more efficient to launch enough threads.
Interesting kernel. The loop inside the kernel is necessary because N is greater than the total number of threads, which is 16384 (blockDim.x*gridDim.x), but I think it's not good practice to do it (the whole point of CUDA is to use the SIMT concept). According to the CUDA Programming Guide, you can have at most 65535 thread blocks per grid dimension with one kernel. Furthermore, starting from compute capability 2.x (Fermi) you can have at most 1024 threads per block (512 before Fermi). Also, you can (if possible) separate the code into multiple (sequential) kernels.
Much as we would like to believe that CUDA GPUs have infinite execution resources, they do not, and authors of highly optimized code are finding that unrolled for loops, often with fixed numbers of blocks, give the best performance. Makes for painful coding, but optimized CPU code is also pretty painful.
By the way, a commenter mentioned that this code would have coalescing problems, and I don't see why. If the base addresses are correctly aligned (64B, since those are floats), all of the memory transactions by this code will be coalesced, provided the threads per block is also divisible by 64.
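As a minimal sketch of the two styles being discussed (the host-side names below are illustrative), the grid-stride version covers any N with a fixed grid, while the loop-free version must size its grid from N and mask off out-of-range threads:

// Grid-stride version: a fixed grid, e.g. <<<64,256>>>, covers any N.
__global__ void saxpy_gridstride(int N, float alpha, float *x, float *y)
{
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < N; i += blockDim.x * gridDim.x)
        y[i] = alpha * x[i] + y[i];
}

// Loop-free version: one thread per element, so the grid is sized from N
// and threads past the end are masked off.
__global__ void saxpy_flat(int N, float alpha, float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
        y[i] = alpha * x[i] + y[i];
}

// Host-side launches (illustrative):
//   saxpy_gridstride<<<64, 256>>>(N, alpha, d_x, d_y);
//   int threads = 256;
//   int blocks  = (N + threads - 1) / threads;  // may exceed the 65535 per-dimension limit on old hardware
//   saxpy_flat<<<blocks, threads>>>(N, alpha, d_x, d_y);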