Suppose I have a block of size 1024 and assume my GPU has 192 CUDA cores.
How does CUDA handle __syncthreads() in kernels when the number of CUDA cores is lower than the block size?
__global__ void staticReverse(int *d, int n)
{
    __shared__ int s[1024];
    int t = threadIdx.x;
    int tr = n - t - 1;
    s[t] = d[t];
    __syncthreads();
    d[t] = s[tr];
}
How does 'tr' remain in local memory?
I think you are mixing up a few things.
First of all, a GPU having 192 CUDA cores means 192 cores in total. Each block, however, maps to a single Streaming Multiprocessor (SM), which may have a lower core count (depending on the GPU generation).
Let us assume that you own a Pascal GPU with 64 cores per SM and 3 SMs.
A single block maps to a single SM, so you will have 64 cores handling 1024 threads. Such an SM has enough registers to hold all the necessary data for 1024 threads, but it has only 64 cores, which quickly swap which threads they are handling.
This way all the per-thread local data, e.g. tr, can remain resident in the SM's registers (or local memory) for the lifetime of the thread.
Now, because of this quick swapping and concurrent execution, it may happen -- completely by accident -- that some threads get ahead of others. If you want to ensure that at a certain point all threads of the block have reached the same spot, you use __syncthreads(). All that function does is instruct the scheduler to hold the threads at that barrier until every thread in the block has arrived.
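To see that correctness does not depend on the core count, here is a minimal, self-contained test sketch (the kernel is repeated from the question for completeness; the array size n = 1024 and the single-block launch are my own assumptions for illustration). It should print PASSED regardless of how many CUDA cores or SMs the GPU has:

#include <cstdio>

__global__ void staticReverse(int *d, int n)
{
    __shared__ int s[1024];
    int t = threadIdx.x;
    int tr = n - t - 1;
    s[t] = d[t];
    __syncthreads();   // every thread has written s[t] before anyone reads s[tr]
    d[t] = s[tr];
}

int main()
{
    const int n = 1024;
    int h[n], *d;
    for (int i = 0; i < n; i++) h[i] = i;

    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);
    staticReverse<<<1, n>>>(d, n);          // one block of 1024 threads
    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d);

    bool ok = true;
    for (int i = 0; i < n; i++) ok = ok && (h[i] == n - 1 - i);
    printf("%s\n", ok ? "PASSED" : "FAILED");
    return 0;
}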
I had a CUDA program in which kernel registers were limiting the maximum theoretical achieved occupancy to 50%. So I decided to use shared memory instead of registers for those variables that were constant across the threads of a block and were almost read-only throughout the kernel run. I cannot provide source code here; what I did was conceptually like this:
My initial program:
__global__ void GPU_Kernel (...) {
    __shared__ int sharedData[N]; // N: maximum amount that doesn't limit maximum occupancy
    int r_1 = A; // except for this first initialization, these registers don't change anymore
    int r_2 = B;
    ...
    int r_m = Y;
    ... // rest of kernel
}
I changed the above program to:
__global__ void GPU_Kernel (...) {
    __shared__ int sharedData[N-m];
    __shared__ int r_1, r_2, ..., r_m;
    if ( threadIdx.x == 0 ) {
        r_1 = A;
        r_2 = B;
        ...
        r_m = Y; // last of them
    }
    __syncthreads();
    ... // rest of kernel
}
Now the warps inside a block perform broadcast reads to access the newly created shared memory variables, and at the same time the threads no longer use so many registers that achieved occupancy is limited.
The second program has a maximum theoretical occupancy of 100%. In actual runs, the average achieved occupancy for the first program was ~48% and for the second one it is around ~80%. But the issue is that the net speedup is only around 5% to 10%, much less than what I was anticipating given the gain in occupancy. Why isn't this correlation linear?
Considering the image below from an Nvidia whitepaper, my thinking has been that when achieved occupancy is 50%, for example, half of the SMX cores (in newer architectures) are idle at any given time, because the excessive resources requested by the other threads prevent them from being active. Is my understanding flawed? Or is it incomplete as an explanation of the above phenomenon? Or is it the cost of the added __syncthreads() and the shared memory accesses?
Why isn't this correlation linear?
If you are already memory bandwidth bound or compute bound, and either one of those bounds is near the theoretical performance of the device, improving occupancy may not help much. Improving occupancy usually helps when neither of these is the limiter to performance for your code (i.e. you are not at or near peak memory bandwidth utilization or peak compute). Since you haven't provided any code or any metrics for your program, nobody can tell you why it didn't speed up more. The profiling tools can help you find the limiters to performance.
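As a side note, here is a minimal sketch of how to query the theoretical occupancy of a given launch configuration with the CUDA occupancy API (available in CUDA 6.5 and later), so you can compare it against the achieved occupancy that the profiler reports. The kernel name myKernel and the block size of 128 are placeholders of my own:

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for the real GPU_Kernel above.
__global__ void myKernel(float *data) { /* ... */ }

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    int blockSize = 128;        // assumed launch configuration
    size_t dynamicSmem = 0;     // dynamic shared memory per block

    int maxBlocksPerSM = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &maxBlocksPerSM, myKernel, blockSize, dynamicSmem);

    float occupancy = (float)(maxBlocksPerSM * blockSize)
                    / prop.maxThreadsPerMultiProcessor;
    printf("Theoretical occupancy: %.0f%% (%d blocks/SM)\n",
           occupancy * 100.0f, maxBlocksPerSM);
    return 0;
}

If this number is already high but the profiler shows the kernel at or near peak memory bandwidth, extra occupancy will not translate into speedup.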
You might be interested in a couple webinars:
CUDA Optimization: Identifying Performance Limiters by Dr Paulius Micikevicius
CUDA Warps and Occupancy Considerations + Live with Dr Justin Luitjens, NVIDIA
In particular, review slide 10 from the second webinar.
I want to count the number of thread executions, accumulated gradually over the whole kernel execution. Is there a native counter for this, or is there any other method to do that? I know that keeping a global variable and incrementing it from each thread would not work naively, since a variable in global memory does not guarantee synchronized access by the threads.
There are numerous ways to measure thread-level execution efficiency. This answer provides a list of different collection mechanisms. Robert Crovella's answer provides a manual instrumentation method that allows for accurate collection of the information. A similar technique can be used to collect divergence information in the kernel.
Number of Threads Launched for Execution (static)
gridDim.x * gridDim.y * gridDim.z * blockDim.x * blockDim.y * blockDim.z
Number of Threads Launched
gridDim.x * gridDim.y * gridDim.z * ROUNDUP((blockDim.x * blockDim.y * blockDim.z), WARP_SIZE)
This number includes threads that are inactive for the lifetime of the warp.
This can be collected using the PM counter threads_launched.
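As a hedged illustration of the two formulas above (the helper names and the grid(200), block(100) configuration are my own), both counts can be computed on the host from the launch configuration; for example, a block of 100 threads launches 4 warps, i.e. 128 threads:

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative helpers for the two counts described above.
const int WARP_SIZE = 32;

long long threadsRequested(dim3 grid, dim3 block)
{
    return (long long)grid.x * grid.y * grid.z *
           block.x * block.y * block.z;
}

long long threadsLaunched(dim3 grid, dim3 block)
{
    long long perBlock = (long long)block.x * block.y * block.z;
    long long rounded  = ((perBlock + WARP_SIZE - 1) / WARP_SIZE) * WARP_SIZE;
    return (long long)grid.x * grid.y * grid.z * rounded;
}

int main()
{
    dim3 grid(200), block(100);
    printf("requested: %lld, launched: %lld\n",
           threadsRequested(grid, block),   // 200 * 100 = 20000
           threadsLaunched(grid, block));   // 200 * 128 = 25600
    return 0;
}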
Warp Instructions Executed
The counter inst_executed counts the number of warp instructions executed/retired.
Warp Instructions Issued
The counter inst_issued counts the number of instructions issued. inst_issued >= inst_executed. Some instructions will be issued multiple times per instruction executed in order to handle dispatch to narrow execution units or in order to handle address divergence in shared memory and L1 operations.
Thread Instructions Executed
The counter thread_inst_executed counts the number of thread instructions executed. The metric avg_threads_executed_per_instruction can be derived as thread_inst_executed / inst_executed. The maximum value of this ratio is WARP_SIZE.
Not Predicated Off Threads Instructions Executed
Compute capability 2.0 and above devices use instruction predication to disable write-back for threads in a warp as a performance optimization for short sequences of divergent instructions.
The counter not_predicated_off_thread_inst_executed counts the number of thread instructions that were executed and not predicated off. This counter is only available on compute capability 3.0 and above devices.
not_predicated_off_thread_inst_executed <= thread_inst_executed <= WARP_SIZE * inst_executed
This relationship will be off slightly on some chips due to small bugs in thread_inst_executed and not_predicated_off_thread_inst_executed counters.
Profilers
Nsight Visual Studio Edition 2.x supports collecting the aforementioned counters.
Nsight VSE 3.0 supports a new Instruction Count experiment that can collect per SASS instruction statistics and show the data in table form or next to high level source, PTX, or SASS code. The information is rolled up from SASS to high level source. The quality of the roll up depends on the ability of the compiler to output high quality symbol information. It is recommended that you always look at both source and SASS at the same time. This experiment can collect the following per instruction statistics:
a. inst_executed
b. thread_inst_executed (or active mask)
c. not_predicated_off_thread_inst_executed (active predicate mask)
d. histogram of active_mask
e. histogram of predicate_mask
Visual Profiler 5.0 can accurately collect the aforementioned SM counters. nvprof can collect and show the per-SM details. Visual Profiler 5.x does not support collection of the per-instruction statistics available in Nsight VSE 3.0. Older versions of the Visual Profiler and the CUDA command line profiler can collect many of the aforementioned counters, but the results may not be as accurate as those from version 5.0 and above of the tools.
Maybe something like this:
__global__ void mykernel(int *current_thread_count, ...){
    atomicAdd(current_thread_count, 1);
    // the rest of your kernel code
}

int main() {
    int tally, *dev_tally;
    cudaMalloc((void **)&dev_tally, sizeof(int));
    tally = 0;
    cudaMemcpy(dev_tally, &tally, sizeof(int), cudaMemcpyHostToDevice);
    ....
    // set up block and grid dimensions, etc.
    dim3 grid(...);
    dim3 block(...);
    mykernel<<<grid, block>>>(dev_tally, ...);
    cudaMemcpy(&tally, dev_tally, sizeof(int), cudaMemcpyDeviceToHost);
    printf("total number of threads that executed was: %d\n", tally);
    ....
    return 0;
}
You can read more about atomic functions here
Part of the reason for the confusion expressed by many in the comments is that when mykernel is complete (assuming it ran successfully), everyone expects tally to end up with a value equal to grid.x*grid.y*grid.z*block.x*block.y*block.z.
I don't think there is a way to calculate the number of threads in a specific branch path. For example, for a histogram, it would be nice to have the following:
PS: A histogram is about counting the pixels of each color.
for (i = 0; i < 256; i++)                // 256 colors, 1 pixel = 1 thread
    if (threadIdx.x == i)
        Histogramme[i] = CUDA_NbActiveThreadsInBranch(); // threads having i as color
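Something close to this can be built from warp vote intrinsics. Below is a hedged sketch (the kernel, the color input array, and the histogram buffer are my own illustration, not a CUDA builtin): it counts, per warp, how many threads take the "color == bin" branch and accumulates the counts into a global histogram. It assumes blockDim.x is a multiple of 32 so every warp is full, and CUDA 9+ for __ballot_sync (older code used __ballot without the mask argument):

__global__ void branchCountHistogram(const int *color, int *histogram, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int c = (idx < n) ? color[idx] : -1;          // out-of-range threads match no bin

    for (int bin = 0; bin < 256; ++bin) {
        // Bitmask of the threads in this warp whose pixel color equals 'bin'.
        unsigned votes = __ballot_sync(0xFFFFFFFFu, c == bin);
        // Lane 0 adds this warp's count for the bin to the global histogram.
        if ((threadIdx.x & 31) == 0 && votes != 0)
            atomicAdd(&histogram[bin], __popc(votes));
    }
}

The histogram buffer must be zero-initialized (e.g. with cudaMemset) before the launch.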
I have this code here (modified due to the answer).
Info
32 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info : Used 46 registers, 120 bytes cmem[0], 176 bytes cmem[2], 76 bytes cmem[16]
I don't know what else to take into consideration in order to make it work for different combinations of the numbers of points "numPointsRs" and "numPointsRp".
When, for example, I run the code with Rs=10000 and Rp=100000 with block=(128,1,1), grid=(200,1) it is fine.
My computations:
46 registers * 128 threads = 5888 registers.
My card has a limit of 32768 registers, so 32768 / 5888 = 5 + some => 5 blocks/SM (my card has a limit of 6).
With the occupancy calculator I found that using 128 threads/block gives me 42% and I am within the limits of my card.
Also, the number of threads per MP is 640 (the limit is 1536).
Now, if I try to use Rs=100000 and Rp=100000 (for the same threads and blocks) it gives me the message in the title, with:
cuEventDestroy failed: launch timeout
cuModuleUnload failed: launch timeout
1) I don't know/understand what else needs to be computed.
2) I can't understand how we choose/find the number of blocks. I can see that most of the time someone puts (threads - 1 + points) / threads, but that still doesn't work.
--------------UPDATED----------------------------------------------
After using driver.Context.synchronize(), the code works for many points (1000000)!
But what impact does this addition have on the code? (For many points the screen freezes for 1 minute or more.) Should I use it or not?
--------------UPDATED2----------------------------------------------
Now the code doesn't work again, without my having changed anything!
Snapshot of code:
import pycuda.gpuarray as gpuarray
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np
import cmath
import pycuda.driver as drv
import pycuda.tools as t
#---- Initialization and passing(allocate memory and transfer data) to GPU -------------------------
Rs_gpu=gpuarray.to_gpu(Rs)
Rp_gpu=gpuarray.to_gpu(Rp)
J_gpu=gpuarray.to_gpu(np.ones((numPointsRs,3)).astype(np.complex64))
M_gpu=gpuarray.to_gpu(np.ones((numPointsRs,3)).astype(np.complex64))
Evec_gpu=gpuarray.to_gpu(np.zeros((numPointsRp,3)).astype(np.complex64))
Hvec_gpu=gpuarray.to_gpu(np.zeros((numPointsRp,3)).astype(np.complex64))
All_gpu=gpuarray.to_gpu(np.ones(numPointsRp).astype(np.complex64))
#-----------------------------------------------------------------------------------
mod = SourceModule("""
#include <pycuda-complex.hpp>
#include <cmath>
#include <vector>

typedef pycuda::complex<float> cmplx;
typedef float fp3[3];
typedef cmplx cp3[3];

__device__ __constant__ float Pi;

extern "C"{

__device__ void computeEvec(fp3 Rs_mat[], int numPointsRs,
                            cp3 J[],
                            cp3 M[],
                            fp3 Rp,
                            cmplx kp,
                            cmplx eta,
                            cmplx *Evec,
                            cmplx *Hvec, cmplx *All)
{
    int c = 0;   // loop counter over the source points
    while (c < numPointsRs){
        ...
        c++;
    }
}

__global__ void computeEHfields(float *Rs_mat_, int numPointsRs,
                                float *Rp_mat_, int numPointsRp,
                                cmplx *J_,
                                cmplx *M_,
                                cmplx kp,
                                cmplx eta,
                                cmplx E[][3],
                                cmplx H[][3], cmplx *All )
{
    fp3 * Rs_mat = (fp3 *)Rs_mat_;
    fp3 * Rp_mat = (fp3 *)Rp_mat_;
    cp3 * J = (cp3 *)J_;
    cp3 * M = (cp3 *)M_;

    int k = threadIdx.x + blockIdx.x*blockDim.x;
    while (k < numPointsRp)
    {
        computeEvec( Rs_mat, numPointsRs, J, M, Rp_mat[k], kp, eta, E[k], H[k], All );
        k += blockDim.x*gridDim.x;
    }
}

}
""", no_extern_c=1, options=['--ptxas-options=-v'])
#call the function(kernel)
func = mod.get_function("computeEHfields")
func(Rs_gpu,np.int32(numPointsRs),Rp_gpu,np.int32(numPointsRp),J_gpu, M_gpu, np.complex64(kp), np.complex64(eta),Evec_gpu,Hvec_gpu, All_gpu, block=(128,1,1),grid=(200,1))
#----- get data back from GPU-----
Rs=Rs_gpu.get()
Rp=Rp_gpu.get()
J=J_gpu.get()
M=M_gpu.get()
Evec=Evec_gpu.get()
Hvec=Hvec_gpu.get()
All=All_gpu.get()
My card:
Device 0: "GeForce GTX 560"
CUDA Driver Version / Runtime Version 4.20 / 4.10
CUDA Capability Major/Minor version number: 2.1
Total amount of global memory: 1024 MBytes (1073283072 bytes)
( 0) Multiprocessors x (48) CUDA Cores/MP: 0 CUDA Cores // deviceQuery misreports this; the GTX 560 actually has 7 MPs x 48 cores/MP = 336 CUDA cores
There are quite a few issues that you have to deal with. Answer 1, provided by @njuffa, is the best general solution. I'll provide more feedback based upon the limited data you have provided.
PTX output of 46 registers is not the number of registers used by your kernel. PTX is an intermediate representation. The offline or JIT compiler will convert this to device code. Device code may use more or less registers. Nsight Visual Studio Edition, the Visual Profiler, and the CUDA command line profiler can all provide you the correct register count.
The occupancy calculation is not simply RegistersPerSM / RegistersPerThread. Registers are allocated based upon a granularity. For CC 2.1 the granularity is 4 registers per thread per warp (128 registers). 2.x devices can actually allocate at a 2 register granularity but this can lead to fragmentation later in the kernel.
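As a hedged worked example (assuming the device code really does use 46 registers per thread, which the profilers mentioned above should confirm): with a granularity of 4 registers per thread, 46 rounds up to 48, so each warp is allocated 48 x 32 = 1536 registers and a 128-thread block (4 warps) is allocated 6144 registers. 32768 / 6144 is roughly 5.3, so the limit is still 5 resident blocks per SM, but the 5888 registers per block used in your calculation understates the actual allocation.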
In your occupancy calculation you state
My card has limit 32768registers,so 32768/5888=5 +some => 5 block/SM
(my card has limit 6).
I'm not sure what the 6 means. Your device has 7 SMs. The maximum number of resident blocks per SM for 2.x devices is 8.
You have provided an insufficient amount of code. If you provide pieces of code please provide the size of all inputs, the number of times each loop will be executed, and a description of the operations per function. Looking at the code you may be doing too many loops per thread. Without knowing the order of magnitude of the outer loop we can only guess.
Given that the launch is timing out you should probably approach debugging as follows:
a. Add a line to the beginning of the code
if (blockIdx.x > 0) { return; }
Run the exact code you have in one of the previously mentioned profilers to estimate the duration of a single block. Using the launch information provided by the profiler (registers per thread, shared memory, ...), use the occupancy calculator in the profiler or the xls to determine the maximum number of blocks that you can run concurrently. For example, if the theoretical block occupancy is 3 blocks per SM, and the number of SMs is 7, then you can run 21 blocks at a time, which for your launch of 200 blocks is roughly 10 waves. NOTE: this assumes equal work per thread. Change the early-exit code to allow 1 wave (21 blocks). If this launch times out then you need to reduce the amount of work per thread. If this passes then calculate how many waves you have and estimate when you will time out (2 sec on Windows, ? on Linux).
b. If you have too many waves then you have to reduce the launch configuration. Given that you index by gridDim.x and blockDim.x, you can do this by passing these dimensions in as parameters to your kernel. This will require you to minimally change your indexing code. You will also have to pass a blockIdx.x offset. Change your host code to launch multiple kernels back to back. Since there should be no conflict, you can launch these in multiple streams to benefit from the overlap at the end of each wave (see the sketch below).
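A minimal sketch of that block-offset idea (the kernel body, the chunk size of 21 blocks, and the 4 streams are placeholders of my own, not your computeEHfields code): the host launches the grid in chunks, each in its own stream, and the kernel adds the offset back onto blockIdx.x:

#include <cuda_runtime.h>

// Hedged illustration of splitting one large launch into several smaller ones.
__global__ void processChunk(float *data, int n, int blockOffset)
{
    // Reconstruct the "global" block index from the per-launch index plus the offset.
    int globalBlock = blockIdx.x + blockOffset;
    int k = threadIdx.x + globalBlock * blockDim.x;
    if (k < n)
        data[k] *= 2.0f;                      // placeholder for the real per-point work
}

void launchInChunks(float *d_data, int n, int threads, int totalBlocks)
{
    const int blocksPerChunk = 21;            // e.g. the number of blocks in one "wave"
    const int numStreams = 4;
    cudaStream_t streams[numStreams];
    for (int i = 0; i < numStreams; ++i) cudaStreamCreate(&streams[i]);

    int launched = 0, s = 0;
    while (launched < totalBlocks) {
        int remaining = totalBlocks - launched;
        int blocks = remaining < blocksPerChunk ? remaining : blocksPerChunk;
        processChunk<<<blocks, threads, 0, streams[s % numStreams]>>>(d_data, n, launched);
        launched += blocks;
        ++s;
    }
    for (int i = 0; i < numStreams; ++i) {
        cudaStreamSynchronize(streams[i]);
        cudaStreamDestroy(streams[i]);
    }
}

Because each launch is short, no single kernel runs long enough to trip the watchdog timer.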
"launch timeout" would appear to indicate that the kernel ran too long and was killed by the watchdog timer. This can happen on GPUs that are also used for graphics output (e.g. a graphical desktop), where the task of the watchdog timer is to prevent the desktop from locking up for more than a few seconds. Best I can recall the watchdog time limit is on the order of 5 seconds or thereabouts.
At any given moment, the GPU can either run graphics, or CUDA, so the watchdog timer is needed when running a GUI to prevent the GUI from locking up for an extended period of time, which renders the machine inoperable through the GUI.
If possible, avoid using this GPU for the desktop and/or other graphics (e.g. don't run X if you are on Linux). If running without graphics isn't an option, then to reduce kernel execution time and avoid having the watchdog timer terminate the kernel, you will have to do less work per kernel launch, optimize the code so the kernel runs faster for the same amount of work, or deploy a faster GPU.
To add to @njuffa's answer, on Windows systems you can increase the launch timeout, or TDR (Timeout Detection & Recovery), by following these steps:
1: Open the options in Nsight Monitor.
2: Set an appropriate value for WDDM TDR Delay
CAUTION: If this value is small you may get a timeout error, and for higher values your screen will stay frozen until the kernel finishes its job.
source
When I run the simpleMultiCopy in the SDK (4.0) on the Tesla C2050 I get the following results:
[simpleMultiCopy] starting...
[Tesla C2050] has 14 MP(s) x 32 (Cores/MP) = 448 (Cores)
> Device name: Tesla C2050
> CUDA Capability 2.0 hardware with 14 multi-processors
> scale_factor = 1.00
> array_size = 4194304
Relevant properties of this CUDA device
(X) Can overlap one CPU<>GPU data transfer with GPU kernel execution (device property "deviceOverlap")
(X) Can overlap two CPU<>GPU data transfers with GPU kernel execution
(compute capability >= 2.0 AND (Tesla product OR Quadro 4000/5000)
Measured timings (throughput):
Memcpy host to device : 2.725792 ms (6.154988 GB/s)
Memcpy device to host : 2.723360 ms (6.160484 GB/s)
Kernel : 0.611264 ms (274.467599 GB/s)
Theoretical limits for speedup gained from overlapped data transfers:
No overlap at all (transfer-kernel-transfer): 6.060416 ms
Compute can overlap with one transfer: 5.449152 ms
Compute can overlap with both data transfers: 2.725792 ms
Average measured timings over 10 repetitions:
Avg. time when execution fully serialized : 6.113555 ms
Avg. time when overlapped using 4 streams : 4.308822 ms
Avg. speedup gained (serialized - overlapped) : 1.804733 ms
Measured throughput:
Fully serialized execution : 5.488530 GB/s
Overlapped using 4 streams : 7.787379 GB/s
[simpleMultiCopy] test results...
PASSED
This shows that the expected runtime is 2.7 ms, while it actually takes 4.3 ms. What is it exactly that causes this discrepancy? (I've also posted this question at http://forums.developer.nvidia.com/devforum/discussion/comment/8976.)
The first kernel launch cannot start until the first memcpy is completed, and the last memcpy cannot start until the last kernel launch is completed. So, there is "overhang" that introduces some of the overhead you are observing. You can decrease the size of the "overhang" by increasing the number of streams, but the streams' inter-engine synchronization incurs its own overhead.
It's important to note that overlapping compute and transfer doesn't always benefit a given workload: in addition to the overhead issues described above, the workload itself has to spend equal amounts of time doing compute and data transfer. Due to Amdahl's Law, the potential speedup of 2x or 3x falls off as the workload becomes either transfer-bound or compute-bound.
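For reference, the overlap pattern in question looks roughly like the sketch below (the work kernel, array size, and stream count are placeholders of my own, and pinned host memory is required for asynchronous copies). Note that the first kernel still has to wait for the first host-to-device chunk and the last device-to-host copy still has to wait for the last kernel, which is the "overhang" described above:

#include <cuda_runtime.h>

// Placeholder kernel; the real workload would go here.
__global__ void work(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * 2.0f + 1.0f;
}

int main()
{
    const int N = 1 << 22;                    // assumed array size
    const int STREAMS = 4;
    const int CHUNK = N / STREAMS;

    float *h, *d;
    cudaMallocHost(&h, N * sizeof(float));    // pinned host memory, required for overlap
    cudaMalloc(&d, N * sizeof(float));
    for (int i = 0; i < N; ++i) h[i] = 1.0f;

    cudaStream_t s[STREAMS];
    for (int i = 0; i < STREAMS; ++i) cudaStreamCreate(&s[i]);

    // Each stream copies its chunk in, processes it, and copies it back out.
    for (int i = 0; i < STREAMS; ++i) {
        int off = i * CHUNK;
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, s[i]);
        work<<<(CHUNK + 255) / 256, 256, 0, s[i]>>>(d + off, CHUNK);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, s[i]);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < STREAMS; ++i) cudaStreamDestroy(s[i]);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}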