Time to be used in calculating bandwidth - CUDA

I am trying to find the effective bandwidth used by my code against the GeForce 8800 GTX maximum of 86 GB/s. I am not sure what time to use, though. Currently I am using the difference between calling the kernel with my instructions and calling the kernel with no instructions. Is this the correct approach? (The formula I use is: effective bandwidth = (bytes read + bytes written) / time.)
Also, I get a really bad kernel call overhead (close to 1 sec). Is there a way to get rid of it?

You can time your kernel fairly precisely with CUDA events.
//declare the events
cudaEvent_t start;
cudaEvent_t stop;
float kernel_time;
//create events before you use them
cudaEventCreate(&start);
cudaEventCreate(&stop);
//put events and kernel launches in the stream/queue
cudaEventRecord(start,0);
myKernel <<< config >>>( );
cudaEventRecord(stop,0);
//wait until the stop event is recorded
cudaEventSynchronize(stop);
//and get the elapsed time
cudaEventElapsedTime(&kernel_time,start,stop);
//cleanup
cudaEventDestroy(start);
cudaEventDestroy(stop);

Effective bandwidth in GB/s = ( (Br + Bw) / 10^9 ) / Time
Br = number of bytes read by the kernel from DRAM
Bw = number of bytes written by the kernel to DRAM
Time = time taken by the kernel.
For example, say you test the effective bandwidth of copying a 2048x2048 matrix of floats (4 bytes each) from one location to another in the GPU's DRAM. The formula would be:
Bandwidth in GB/s = ( (2048 x 2048 x 4 x 2) / 10^9 ) / time-taken-by-kernel
here:
2048x2048 (matrix elements)
4 (each element has 4 bytes)
2 (one for the read and one for the write)
/10^9 to convert B into GB.
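Tying this to the event-timing code above, a minimal sketch of the final step might look like the following. cudaEventElapsedTime reports milliseconds, and the byte counts below assume the hypothetical 2048x2048 copy example; adjust them for what your kernel actually reads and writes:
// kernel_time comes from cudaEventElapsedTime above and is in milliseconds
size_t bytes_read    = 2048ULL * 2048ULL * sizeof(float);   // one read per element
size_t bytes_written = 2048ULL * 2048ULL * sizeof(float);   // one write per element
double seconds       = kernel_time / 1000.0;                // ms -> s
double effective_bw  = ((bytes_read + bytes_written) / 1e9) / seconds;  // GB/s
printf("Effective bandwidth: %.2f GB/s\n", effective_bw);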

Related

CUDA: Write directly from device to host pinned memory without sacrificing throughput?

In CUDA, is it possible to write directly to host (pinned) memory from a device kernel?
In my current setup, I first write to device DRAM and then copy from DRAM into host pinned memory.
I'm wondering if I can just write directly to host memory (i.e. use one step instead of two) without sacrificing throughput.
From what I understand, unified memory isn't the answer - guides mention that it's slower (perhaps because of its paging semantics?).
But I haven't tried it, so perhaps I'm mistaken - maybe there's an option to force everything to reside in host pinned memory?
There are numerous questions here under the cuda tag on SO about how to use pinned memory for "zero-copy" operations. Here is one example; you can find many more.
If you only have to write to each output point once, and your writes are/would be nicely coalesced, then there should not be a major performance difference between the costs of:
writing to device memory and then cudaMemcpy D->H after the kernel
writing directly to host-pinned memory
You will still need a cudaDeviceSynchronize() after the kernel call, before accessing the data on the host, to ensure consistency.
Differences on the order of ~10 microseconds are still possible due to CUDA operation overheads.
It should be possible to demonstrate that bulk transfer of data using direct read/writes to pinned memory from kernel code will achieve approximately the same bandwidth as what you would get with a cudaMemcpy transfer.
As an aside, the "paging semantics" of unified memory may be worked around, but again, well-optimized code in any of these 3 scenarios is not likely to show marked performance or duration differences.
Responding to comments: my use of "approximately" above is probably a stretch. Here's a kernel that writes 4GB of data in less than half a second on a PCIE Gen2 system:
$ cat t2138.cu
template <typename T>
__global__ void k(T *d, size_t n){
  for (size_t i = blockIdx.x*blockDim.x+threadIdx.x; i < n; i+=gridDim.x*blockDim.x)
    d[i] = 0;
}

int main(){
  int *d;
  size_t n = 1048576*1024;
  cudaHostAlloc(&d, sizeof(d[0])*n, cudaHostAllocDefault);
  k<<<160, 1024>>>(d, n);
  k<<<160, 1024>>>(d, n);
  cudaDeviceSynchronize();
  int *d1;
  cudaMalloc(&d1, sizeof(d[0])*n);
  cudaMemcpy(d, d1, sizeof(d[0])*n, cudaMemcpyDeviceToHost);
}
$ nvcc -o t2138 t2138.cu
$ compute-sanitizer ./t2138
========= COMPUTE-SANITIZER
========= ERROR SUMMARY: 0 errors
$ nvprof ./t2138
==21201== NVPROF is profiling process 21201, command: ./t2138
==21201== Profiling application: ./t2138
==21201== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 72.48% 889.00ms 2 444.50ms 439.93ms 449.07ms void k<int>(int*, unsigned long)
27.52% 337.47ms 1 337.47ms 337.47ms 337.47ms [CUDA memcpy DtoH]
API calls: 60.27% 1.88067s 1 1.88067s 1.88067s 1.88067s cudaHostAlloc
28.49% 889.01ms 1 889.01ms 889.01ms 889.01ms cudaDeviceSynchronize
10.82% 337.55ms 1 337.55ms 337.55ms 337.55ms cudaMemcpy
0.17% 5.1520ms 1 5.1520ms 5.1520ms 5.1520ms cudaMalloc
0.15% 4.6178ms 4 1.1544ms 594.35us 2.8265ms cuDeviceTotalMem
0.09% 2.6876ms 404 6.6520us 327ns 286.07us cuDeviceGetAttribute
0.01% 416.39us 4 104.10us 59.830us 232.21us cuDeviceGetName
0.00% 151.42us 2 75.710us 13.663us 137.76us cudaLaunchKernel
0.00% 21.172us 4 5.2930us 3.0730us 8.5010us cuDeviceGetPCIBusId
0.00% 9.5270us 8 1.1900us 428ns 4.5250us cuDeviceGet
0.00% 3.3090us 4 827ns 650ns 1.2230us cuDeviceGetUuid
0.00% 3.1080us 3 1.0360us 485ns 1.7180us cuDeviceGetCount
$
4GB/0.44s = 9GB/s
4GB/0.34s = 11.75GB/s (typical for PCIE Gen2 to pinned memory)
We can see that, contrary to my previous statement, the transfer of data using in-kernel copying to a pinned allocation does seem to be slower (about 33% slower in my test case) than using a bulk copy (cudaMemcpy DtoH to a pinned allocation). However, this isn't quite an apples-to-apples comparison, because for the comparison to cudaMemcpy to be sensible, the kernel would still have to write the 4GB of data to a device allocation first. The speed of that operation depends on the GPU's device memory bandwidth, which varies by GPU of course. So 33% is probably an overestimate of the difference. But if your GPU has lots of memory bandwidth, this estimate will be pretty close. (On my V100, writing 4GB to device memory only takes ~7ms.)
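For reference, the missing piece of that comparison, timing the same kernel writing to a device allocation, could be sketched like this, appended at the end of main in the program above (the event code is my addition, not part of the original test; add #include <cstdio> at the top for printf):
cudaEvent_t start, stop;
float ms = 0.0f;
cudaEventCreate(&start);
cudaEventCreate(&stop);
// time the same kernel, but writing to the cudaMalloc'ed buffer d1 instead of pinned memory
cudaEventRecord(start);
k<<<160, 1024>>>(d1, n);
cudaEventRecord(stop);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&ms, start, stop);
printf("writing 4GB to device memory: %.3f ms\n", ms);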

Maximum number of resident blocks per SM?

It seems that there is a maximum number of resident blocks allowed per SM. But while other "hard" limits are easily found (via, for example, cudaGetDeviceProperties), the maximum number of resident blocks doesn't seem to be widely documented.
In the following sample code, I configure the kernel with one thread per block. To test the hypothesis that this GPU (a P100) has a maximum of 32 resident blocks per SM, I create a grid of 56*32 blocks (56 = number of SMs on the P100). Each block takes 1 second to process (via a "sleep" routine), so if I have configured the kernel correctly, the code should take 1 second. The timing results confirm this. Configuring with 32*56+1 blocks takes 2 seconds, suggesting that 32 blocks per SM is the maximum allowed.
What I wonder is, why isn't this limit made more widely available? For example, it doesn't show up in cudaGetDeviceProperties. Where can I find this limit for various GPUs? Or maybe this isn't a real limit, but is derived from other hard limits?
I am running CUDA 10.1
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

double cpuSecond() {
    struct timeval tp;
    gettimeofday(&tp,NULL);
    return (double) tp.tv_sec + (double)tp.tv_usec*1e-6;
}

#define CLOCK_RATE 1328500 /* Modify from below */

__device__ void sleep(float t) {
    clock_t t0 = clock64();
    clock_t t1 = t0;
    while ((t1 - t0)/(CLOCK_RATE*1000.0f) < t)
        t1 = clock64();
}

__global__ void mykernel() {
    sleep(1.0);
}

int main(int argc, char* argv[]) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int mp = prop.multiProcessorCount;
    //clock_t clock_rate = prop.clockRate;

    int num_blocks = atoi(argv[1]);

    dim3 block(1);
    dim3 grid(num_blocks); /* N blocks */

    double start = cpuSecond();
    mykernel<<<grid,block>>>();
    cudaDeviceSynchronize();
    double etime = cpuSecond() - start;

    printf("mp %10d\n",mp);
    printf("blocks/SM %10.2f\n",num_blocks/((double)mp));
    printf("time %10.2f\n",etime);

    cudaDeviceReset();
}
Results:
% srun -p gpuq sm_short 1792
mp 56
blocks/SM 32.00
time 1.16
% srun -p gpuq sm_short 1793
mp 56
blocks/SM 32.02
time 2.16
% srun -p gpuq sm_short 3584
mp 56
blocks/SM 64.00
time 2.16
% srun -p gpuq sm_short 3585
mp 56
blocks/SM 64.02
time 3.16
Yes, there is a limit to the number of blocks per SM. The maximum number of blocks that can be contained in an SM refers to the maximum number of blocks that can be active at a given time. Blocks can be organized into one- or two-dimensional grids of up to 65,535 blocks in each dimension, but the SMs of your GPU will only be able to accommodate a certain number of blocks. This limit is linked in two ways to the Compute Capability of your GPU.
Hardware limit stated by CUDA.
Each GPU allows a maximum number of blocks per SM, regardless of the number of threads they contain and the amount of resources they use. For example, a GPU with compute capability 2.0 has a limit of 8 blocks/SM, while one with compute capability 7.0 has a limit of 32 blocks/SM. This is the best number of active blocks per SM that you can achieve: let's call it MAX_BLOCKS.
Limit derived from the amount of resources used by each block.
A block is made up of threads, and each thread uses a certain number of registers: the more registers it uses, the greater the number of resources used by the block that contains it. Similarly, the amount of shared memory assigned to a block increases the amount of resources the block needs to be allocated. Once a certain value is exceeded, the number of resources needed for a block will be so large that the SM will not be able to allocate as many blocks as MAX_BLOCKS allows: the amount of resources needed by each block is then what limits the maximum number of active blocks per SM.
How do I find these boundaries?
NVIDIA thought about that too: the CUDA Occupancy Calculator spreadsheet is available on their site, with which you can discover the hardware limits grouped by compute capability. You can also enter the amount of resources used by your blocks (number of threads, registers per thread, bytes of shared memory) and get graphs and important information about the number of active blocks.
The first tab of the linked file lets you calculate the actual use of the SM based on the resources used. If you want to know how many registers per thread you use, add the -Xptxas -v option so the compiler reports the register count when it generates the PTX.
In the last tab of the file you will find the hardware limits grouped by compute capability.
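Since CUDA 6.5 you can also query this at run time with the occupancy API, which folds the hard per-SM block limit together with the per-block resource limits. A minimal sketch (the empty mykernel and the block size of 1 are placeholders standing in for the question's kernel and launch configuration):
#include <cstdio>

__global__ void mykernel() { /* your real kernel here */ }

int main() {
    int maxBlocksPerSM = 0;
    // For this kernel and block size, report how many blocks can be resident on one SM.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&maxBlocksPerSM, mykernel,
                                                  1 /*blockSize*/, 0 /*dynamic smem*/);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("max resident blocks per SM: %d (device has %d SMs)\n",
           maxBlocksPerSM, prop.multiProcessorCount);
    return 0;
}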

cuBLAS dsyrk slower than dgemm

I am trying to compute C = A*A' on the GPU using cuBLAS and am finding that the rank-k update cublasDsyrk is running about 5x slower than the general matrix-matrix multiplication routine cublasDgemm.
This is surprising to me; I thought syrk would be faster since it is a more specialized piece of code. Is that an unreasonable expectation? Am I doing this wrong?
Timing the code
Ultimately I'm writing CUDA code to be compiled into MEX files for MATLAB, so apologies for not providing a complete working example (there would be a lot of extraneous code for wrangling with the MATLAB objects).
I know this is probably not the best way, but I'm using clock() to time how long the code takes to run:
// Start of main function
clock_t tic = clock();
clock_t toc;
/* ---- snip ---- */
cudaDeviceSynchronize();
toc = clock();
printf("%8d (%7.3f ms) Allocated memory on GPU for output matrix\n",
toc-tic,1000*(double)(toc-tic)/CLOCKS_PER_SEC);
// Compute the upper triangle of C = alpha*A*A' + beta*C
stat = cublasDsyrk(handle, CUBLAS_FILL_MODE_UPPER, CUBLAS_OP_N,
M, N, &alpha, A, M, &beta, C, M);
toc = clock();
printf("%8d (%7.3f ms) cublasDsyrk launched\n",
toc-tic,1000*(double)(toc-tic)/CLOCKS_PER_SEC);
cudaDeviceSynchronize();
toc = clock();
printf("%8d (%7.3f ms) cublasDsyrk completed\n",
toc-tic,1000*(double)(toc-tic)/CLOCKS_PER_SEC);
/* ----- snip ----- */
Runtimes
The output, running on a [12 x 500,000] random matrix (column-major storage):
911 ( 0.911 ms) Loaded inputs, initialized cuBLAS context
1111 ( 1.111 ms) Allocated memory on GPU for output matrix
1352 ( 1.352 ms) cublasDsyrk launched
85269 ( 85.269 ms) cublasDsyrk completed
85374 ( 85.374 ms) Launched fillLowerTriangle kernel
85399 ( 85.399 ms) kernel completed
85721 ( 85.721 ms) Finished and cleaned up
After replacing the syrk call with
stat = cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T, M, M, N,
&alpha, A, M, A, M, &beta, C, M);
the whole thing runs way faster:
664 ( 0.664 ms) Loaded inputs, initialized cuBLAS context
796 ( 0.796 ms) Allocated memory on GPU for output matrix
941 ( 0.941 ms) cublasDgemm launched
16787 ( 16.787 ms) cublasDgemm completed
16837 ( 16.837 ms) Launched fillLowerTriangle kernel
16859 ( 16.859 ms) kernel completed
17263 ( 17.263 ms) Finished and cleaned up
I tried it with a few matrices of other sizes; interestingly it seems that the speed difference is most pronounced when the matrix has few rows. At 100 rows, gemm is only 2x faster, and at 1000 rows it's slightly slower (which is what I would have expected all along).
Other details
I'm using CUDA Toolkit 7.5 and the GPU device is an NVIDIA GRID K520 (Kepler, compute capability 3.0). I'm running on an Amazon EC2 g2.2xlarge instance.
[n x 500,000] for n = 12, 100, 1000 are all very wide matrices. In these corner cases, gemm() and syrk() may not be able to reach their peak performance, where syrk() would be nearly twice as fast as gemm() (since the result matrix is symmetric, you can save half of the computation).
Another consideration is that CUDA gemm()/syrk() usually divides the matrix into fixed-size sub-matrices as the basic computing unit to achieve high performance. The size of the sub-matrix can be up to 32x64 for dgemm(), as shown in the following link.
http://www.netlib.org/lapack/lawnspdf/lawn267.pdf
The performance usually drops a lot if your size (12 or 100) is neither much larger than the sub-matrix nor a multiple of it.
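If you want to check this outside of MATLAB, a minimal stand-alone comparison might look like the sketch below. The matrix shape matches the question, the input values are arbitrary, and timings use CUDA events rather than clock(); compile with something like nvcc bench.cu -lcublas (the file name is just an example). Note that the first cuBLAS call after cublasCreate can include some one-time setup cost, so you may want to run each routine once as a warm-up before timing:
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int M = 12, N = 500000;          // same shape as in the question
    const double alpha = 1.0, beta = 0.0;

    // Host input filled with something simple; the values don't affect timing.
    std::vector<double> hA((size_t)M * N, 1.0);

    double *A, *C;
    cudaMalloc(&A, sizeof(double) * M * N);
    cudaMalloc(&C, sizeof(double) * M * M);
    cudaMemcpy(A, hA.data(), sizeof(double) * M * N, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms = 0.0f;

    // Upper triangle of C = A * A' via dsyrk
    cudaEventRecord(start);
    cublasDsyrk(handle, CUBLAS_FILL_MODE_UPPER, CUBLAS_OP_N,
                M, N, &alpha, A, M, &beta, C, M);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("dsyrk: %.3f ms\n", ms);

    // Full C = A * A' via dgemm
    cudaEventRecord(start);
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_T, M, M, N,
                &alpha, A, M, A, M, &beta, C, M);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("dgemm: %.3f ms\n", ms);

    cublasDestroy(handle);
    cudaFree(A); cudaFree(C);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return 0;
}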

driver.Context.synchronize() - what else to take into consideration -- a clean-up operation failed

I have this code here (modified due to the answer).
Info
32 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 46 registers, 120 bytes cmem[0], 176 bytes cmem[2], 76 bytes cmem[16]
I don't know what else to take into consideration in order to make it work for different combinations of points "numPointsRs" and "numPointsRp".
When, for example, I run the code with Rs=10000 and Rp=100000 with block=(128,1,1), grid=(200,1), it's fine.
My computations:
46 registers * 128 threads = 5888 registers.
My card has a limit of 32768 registers, so 32768/5888 = 5 + some => 5 blocks/SM (my card has a limit of 6).
With the occupancy calculator I found that using 128 threads/block gives me 42%, and I am within the limits of my card.
Also, the number of threads per MP is 640 (the limit is 1536).
Now, if I try to use Rs=100000 and Rp=100000 (for the same threads and blocks) it gives me the message in the title, with:
cuEventDestroy failed: launch timeout
cuModuleUnload failed: launch timeout
1) I don't know/understand what else needs to be computed.
2) I can't understand how we use/find the number of blocks. I can see that mostly, people put (threads-1+points)/threads, but that still doesn't work.
--------------UPDATED----------------------------------------------
After using driver.Context.synchronize(), the code works for many points (1000000)!
But what impact does this addition have on the code? (For many points the screen freezes for 1 minute or more.) Should I use it or not?
--------------UPDATED2----------------------------------------------
Now the code doesn't work again, without my changing anything!
Snapshot of code:
import pycuda.gpuarray as gpuarray
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np
import cmath
import pycuda.driver as drv
import pycuda.tools as t
#---- Initialization and passing(allocate memory and transfer data) to GPU -------------------------
Rs_gpu=gpuarray.to_gpu(Rs)
Rp_gpu=gpuarray.to_gpu(Rp)
J_gpu=gpuarray.to_gpu(np.ones((numPointsRs,3)).astype(np.complex64))
M_gpu=gpuarray.to_gpu(np.ones((numPointsRs,3)).astype(np.complex64))
Evec_gpu=gpuarray.to_gpu(np.zeros((numPointsRp,3)).astype(np.complex64))
Hvec_gpu=gpuarray.to_gpu(np.zeros((numPointsRp,3)).astype(np.complex64))
All_gpu=gpuarray.to_gpu(np.ones(numPointsRp).astype(np.complex64))
#-----------------------------------------------------------------------------------
mod =SourceModule("""
#include <pycuda-complex.hpp>
#include <cmath>
#include <vector>
typedef pycuda::complex<float> cmplx;
typedef float fp3[3];
typedef cmplx cp3[3];
__device__ __constant__ float Pi;
extern "C"{
__device__ void computeEvec(fp3 Rs_mat[], int numPointsRs,
cp3 J[],
cp3 M[],
fp3 Rp,
cmplx kp,
cmplx eta,
cmplx *Evec,
cmplx *Hvec, cmplx *All)
{
while (c<numPointsRs){
...
c++;
}
}
__global__ void computeEHfields(float *Rs_mat_, int numPointsRs,
float *Rp_mat_, int numPointsRp,
cmplx *J_,
cmplx *M_,
cmplx kp,
cmplx eta,
cmplx E[][3],
cmplx H[][3], cmplx *All )
{
fp3 * Rs_mat=(fp3 *)Rs_mat_;
fp3 * Rp_mat=(fp3 *)Rp_mat_;
cp3 * J=(cp3 *)J_;
cp3 * M=(cp3 *)M_;
int k=threadIdx.x+blockIdx.x*blockDim.x;
while (k<numPointsRp)
{
computeEvec( Rs_mat, numPointsRs, J, M, Rp_mat[k], kp, eta, E[k], H[k], All );
k+=blockDim.x*gridDim.x;
}
}
}
""" ,no_extern_c=1,options=['--ptxas-options=-v'])
#call the function(kernel)
func = mod.get_function("computeEHfields")
func(Rs_gpu,np.int32(numPointsRs),Rp_gpu,np.int32(numPointsRp),J_gpu, M_gpu, np.complex64(kp), np.complex64(eta),Evec_gpu,Hvec_gpu, All_gpu, block=(128,1,1),grid=(200,1))
#----- get data back from GPU-----
Rs=Rs_gpu.get()
Rp=Rp_gpu.get()
J=J_gpu.get()
M=M_gpu.get()
Evec=Evec_gpu.get()
Hvec=Hvec_gpu.get()
All=All_gpu.get()
My card:
Device 0: "GeForce GTX 560"
CUDA Driver Version / Runtime Version 4.20 / 4.10
CUDA Capability Major/Minor version number: 2.1
Total amount of global memory: 1024 MBytes (1073283072 bytes)
( 0) Multiprocessors x (48) CUDA Cores/MP: 0 CUDA Cores //CUDA Cores 336 => 7 MP and 48 Cores/MP
There are quite a few issues that you have to deal with. Answer 1 provided by @njuffa is the best general solution. I'll provide more feedback based upon the limited data you have provided.
PTX output of 46 registers is not the number of registers used by your kernel. PTX is an intermediate representation. The offline or JIT compiler will convert this to device code. Device code may use more or less registers. Nsight Visual Studio Edition, the Visual Profiler, and the CUDA command line profiler can all provide you the correct register count.
The occupancy calculation is not simply RegistersPerSM / RegistersPerThread. Registers are allocated based upon a granularity. For CC 2.1 the granularity is 4 registers per thread per warp (128 registers). 2.x devices can actually allocate at a 2 register granularity but this can lead to fragmentation later in the kernel.
In your occupancy calculation you state
My card has limit 32768registers,so 32768/5888=5 +some => 5 block/SM
(my card has limit 6).
I'm not sure what 6 means. Your device has 7 SMs. The maximum number of blocks per SM for 2.x devices is 8.
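Taking that allocation granularity into account, and assuming the final device code really does use 46 registers per thread (which the PTX figure does not guarantee), a rough version of your calculation looks like this:
46 registers/thread rounded up to a multiple of 4 => 48 registers/thread
48 registers/thread * 32 threads/warp = 1536 registers/warp
128 threads/block = 4 warps => 4 * 1536 = 6144 registers/block
floor(32768 / 6144) = 5 blocks/SM limited by registers (below the 8 blocks/SM hard limit for 2.x)
So you still land on about 5 blocks/SM, but only after the granularity is applied.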
You have provided an insufficient amount of code. If you provide pieces of code please provide the size of all inputs, the number of times each loop will be executed, and a description of the operations per function. Looking at the code you may be doing too many loops per thread. Without knowing the order of magnitude of the outer loop we can only guess.
Given that the launch is timing out you should probably approach debugging as follows:
a. Add a line to the beginning of the code
if (blockIdx.x > 0) { return; }
Run the exact code you have in one of the previously mentioned profilers to estimate the duration of a single block. Using the launch information provided by the profiler (registers per thread, shared memory, ...), use the occupancy calculator in the profiler or the xls to determine the maximum number of blocks that you can run concurrently. For example, if the theoretical occupancy is 3 blocks per SM and the number of SMs is 7, then you can run 21 blocks at a time, which for your launch of 200 blocks is about 10 waves. NOTE: this assumes equal work per thread. Change the early-exit code to allow 1 wave (21 blocks). If this launch times out, then you need to reduce the amount of work per thread. If it passes, then calculate how many waves you have and estimate when you will time out (2 sec on Windows, ? on Linux).
b. If you have too many waves then you have to reduce the launch configuration. Given that you index by gridDim.x and blockDim.x, you can do this by passing these dimensions in as parameters to your kernel. This will require you to minimally change your indexing code. You will also have to pass a blockIdx.x offset. Change your host code to launch multiple kernels back to back. Since there should be no conflict, you can launch these in multiple streams to benefit from overlap at the end of each wave.
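To illustrate (b), here is a rough CUDA-side sketch of splitting one big launch into several smaller back-to-back launches using a block offset. The kernel body and all names are placeholders, not the actual computeEHfields kernel:
#include <algorithm>
#include <cuda_runtime.h>

// Placeholder kernel: it indexes as if it were part of one big virtual grid of
// virtualGridDim blocks, but only a chunk of those blocks runs per launch.
__global__ void computeChunk(float *out, int numPoints,
                             int virtualGridDim, int blockOffset)
{
    int k = threadIdx.x + (blockIdx.x + blockOffset) * blockDim.x;
    int stride = blockDim.x * virtualGridDim;   // stride of the full virtual grid
    while (k < numPoints) {
        out[k] = 0.0f;                          // stand-in for the real per-point work
        k += stride;
    }
}

int main()
{
    const int numPoints      = 1000000;
    const int threads        = 128;
    const int virtualGridDim = 200;   // the grid you originally wanted to launch
    const int blocksPerChunk = 50;    // small enough that each launch finishes quickly

    float *out;
    cudaMalloc(&out, numPoints * sizeof(float));

    for (int offset = 0; offset < virtualGridDim; offset += blocksPerChunk) {
        int blocks = std::min(blocksPerChunk, virtualGridDim - offset);
        computeChunk<<<blocks, threads>>>(out, numPoints, virtualGridDim, offset);
        cudaDeviceSynchronize();      // keeps each launch short and surfaces errors early
    }

    cudaFree(out);
    return 0;
}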
"launch timeout" would appear to indicate that the kernel ran too long and was killed by the watchdog timer. This can happen on GPUs that are also used for graphics output (e.g. a graphical desktop), where the task of the watchdog timer is to prevent the desktop from locking up for more than a few seconds. Best I can recall the watchdog time limit is on the order of 5 seconds or thereabouts.
At any given moment, the GPU can either run graphics, or CUDA, so the watchdog timer is needed when running a GUI to prevent the GUI from locking up for an extended period of time, which renders the machine inoperable through the GUI.
If possible, avoid using this GPU for the desktop and/or other graphics (e.g. don't run X if you are on Linux). If running without graphics isn't an option, then to avoid the watchdog terminating your kernel you will have to reduce kernel execution time: do less work per kernel launch, optimize the code so the kernel runs faster for the same amount of work, or deploy a faster GPU.
To provide more input on @njuffa's answer, on Windows systems you can increase the launch timeout, or TDR (Timeout Detection & Recovery), by following these steps:
1: Open the options in Nsight Monitor.
2: Set an appropriate value for WDDM TDR Delay
CAUTION: If this value is small you may get timeout errors, and for higher values your screen will stay frozen until the kernel finishes its job.
source

Timing CUDA kernels

Hi everyone, I'm currently working on timing some of my CUDA code. I was able to time it using events. My kernel ran for 19 ms. Somehow I find this doubtful, because when I ran a sequential implementation of it, the time was around 5000 ms. I know the code should run faster, but should it be this fast?
I'm using wrapper functions to call the CUDA kernels from my cpp program. Am I supposed to be calling them there or in the .cu file? Thanks!
The obvious way to check if your program is working would be to compare the output to that of your CPU based implementation. If you get the same output, it is working by definition, right? :)
If your program is experimental in such a way that it doesn't really produce any verifiable output then there is a good chance that the compiler has optimized out some (or all) of your code. The compiler will remove code that does not contribute to output data. This can cause, for instance, that the entire contents of a kernel is removed if the final statement that stores the calculated value is commented out.
As for your speedup: 5000 ms / 19 ms = 263x, which is an unlikely increase even for algorithms that map perfectly to the GPU architecture.
Well, if you wrote your CUDA code right, yes, it could be that much faster. Think about it. You moved the code from sequential execution on a single processor to parallel execution on hundreds of processors, depending on your GPU model. My $179 mid range card has 480 cores. Some available now have 1500 cores. It is very possible to get 100x perf jumps with CUDA, particularly if your kernel is much more compute-bound than memory bound.
That said, make sure you are measuring what you think you are measuring. Kernel launches are asynchronous with respect to the host thread (whether or not you use an explicit stream), so you need to call cudaDeviceSynchronize(), or have your host code wait on an event recorded after the kernel, before stopping a host-side timer; otherwise time measurements made in the host thread will not correctly reflect the kernel time. You can also use CUDA events to measure elapsed time on the GPU within a given stream. See section 5.1.2 of the CUDA Best Practices Guide in the NVIDIA GPU Computing SDK 4.2.
In my own code, I use the clock() function to get precise timings. For convenience, I have the macros
enum {
tid_this = 0,
tid_that,
tid_count
};
__device__ float cuda_timers[ tid_count ];
#ifdef USETIMERS
#define TIMER_TIC clock_t tic; if ( threadIdx.x == 0 ) tic = clock();
#define TIMER_TOC(tid) clock_t toc = clock(); if ( threadIdx.x == 0 ) atomicAdd( &cuda_timers[tid] , ( toc > tic ) ? (toc - tic) : ( toc + (0xffffffff - tic) ) );
#else
#define TIMER_TIC
#define TIMER_TOC(tid)
#endif
These can then be used to instrument the device code as follows:
__global__ void mykernel ( ... ) {
/* Start the timer. */
TIMER_TIC
/* Do stuff. */
...
/* Stop the timer and store the results to the "timer_this" counter. */
TIMER_TOC( tid_this );
}
You can then read the cuda_timers in the host code.
A few notes:
The timers work on a per-block basis, i.e. if you have 100 blocks executing the same kernel, the sum of all their times will be stored.
The timers count the number of clock ticks. To convert to milliseconds, divide by the device clock rate in kHz (cudaDeviceProp::clockRate is reported in kHz, so ticks / clockRate gives milliseconds); see the readback sketch after these notes.
The timers can slow down your code a bit, which is why I wrapped them in the #ifdef USETIMERS so you can switch them off easily.
Although clock() returns integer values of type clock_t, I store the accumulated values as float, otherwise the values will wrap around for kernels that take longer than a few seconds (accumulated over all blocks).
The selection ( toc > tic ) ? (toc - tic) : ( toc + (0xffffffff - tic) ) is necessary in case the clock counter wraps around.
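For completeness, the host-side readback mentioned above could be sketched as follows (an illustrative sketch; tid_count and cuda_timers are the declarations from the macros above):
// Copy the accumulated tick counts back and convert to milliseconds.
float host_timers[tid_count];
cudaMemcpyFromSymbol(host_timers, cuda_timers, sizeof(host_timers));

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);
for (int i = 0; i < tid_count; i++)
    printf("timer %d: %.3f ms (summed over all blocks)\n",
           i, host_timers[i] / prop.clockRate);   // clockRate is in kHz, so ticks/kHz = ms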