Is it possible, using streams, to have multiple unique kernels on the same streaming multiprocessor in Kepler 3.5 GPUs? I.e. run 30 kernels of size <<<1,1024>>> at the same time on a Kepler GPU with 15 SMs?
On a compute capability 3.5 device, it might be possible.
Those devices support up to 32 concurrent kernels per GPU and 2048 resident threads per multiprocessor. With 64K registers per multiprocessor, two blocks of 1024 threads could run concurrently if their register footprint is no more than 32 registers per thread and each block uses no more than 24 KB of shared memory.
You can find all of this in the hardware description in the appendices of the CUDA programming guide.
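For what it's worth, here is a minimal sketch of the launch pattern the question describes (the kernel body, the output buffer and the __launch_bounds__ hint are my own illustration, not anything from the question): 30 single-block kernels of 1024 threads each, issued into separate streams so the hardware is free to co-schedule two of them per SM when the register and shared memory budgets above are met.

#include <cuda_runtime.h>

// a trivial kernel capped at 1024 threads per block so the compiler keeps the
// register footprint small enough for two blocks to share one SM
__global__ void __launch_bounds__(1024) worker(float *out)
{
    out[threadIdx.x] += 1.0f;
}

int main()
{
    const int nKernels = 30;
    float *buf;
    cudaMalloc(&buf, nKernels * 1024 * sizeof(float));
    cudaMemset(buf, 0, nKernels * 1024 * sizeof(float));

    cudaStream_t streams[nKernels];
    for (int i = 0; i < nKernels; ++i)
        cudaStreamCreate(&streams[i]);

    // each launch is a single 1024-thread block in its own stream; with no
    // false dependencies between streams, the hardware may run them concurrently
    for (int i = 0; i < nKernels; ++i)
        worker<<<1, 1024, 0, streams[i]>>>(buf + i * 1024);

    cudaDeviceSynchronize();

    for (int i = 0; i < nKernels; ++i)
        cudaStreamDestroy(streams[i]);
    cudaFree(buf);
    return 0;
}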
So I found this Wikipedia resource:
Maximum number of resident grids per device
(Concurrent Kernel Execution)
and for each compute capability it lists a number of concurrent kernels, which I assume is the maximum number of concurrent kernels.
Now I am getting a GTX 1060 delivered, which according to this NVIDIA CUDA resource has a compute capability of 6.1. From what I have learned about CUDA so far, you can specify the virtual compute capability of your code at compile time in NVCC with the flag -arch=compute_XX.
So will my GPU be hardware-constrained to 32 concurrent kernels, or is it capable of 128 with the -arch=compute_60 flag?
According to table 13 in the NVIDIA CUDA programming guide compute capability 6.1 devices have a maximum of 32 resident grids = 32 concurrent kernels.
Even if you use the -arch=compute_60 flag, you will be limited to the hardware limit of 32 concurrent kernels. Choosing particular architectures to compile for does not allow you to exceed the hardware limits of the machine.
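If it helps, here is a small sketch (my own, not part of the original answer) of checking at runtime what the installed card actually reports, independent of whatever -arch flags were used at compile time. Note that the maximum resident-grid count itself is not exposed as a device property, so it still has to be looked up in the programming guide table for the reported compute capability.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: compute capability %d.%d, %d SMs, concurrent kernels %s\n",
           prop.name, prop.major, prop.minor, prop.multiProcessorCount,
           prop.concurrentKernels ? "supported" : "not supported");
    return 0;
}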
As we know, Fermi supports only a single connection to the GPU, and as written here: http://on-demand.gputechconf.com/gtc-express/2011/presentations/StreamsAndConcurrencyWebinar.pdf
Fermi architecture can simultaneously support
Up to 16 CUDA kernels on GPU
And as we know, Hyper-Q allows for up to 32 simultaneous connections from multiple CUDA streams, MPI processes, or threads within a process: http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
But how many kernels can run simultaneously on Kepler CC 3.0/3.5: 16 or 32 (streams)?
From the programming guide:
The maximum number of kernel launches that a device can execute concurrently is 32 on devices of compute capability 3.5 and 16 on devices of lower compute capability.
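As an aside (this is my own sketch, not something from the programming guide quote above): to my understanding the runtime only opens 8 Hyper-Q hardware work queues per device by default, and the CUDA_DEVICE_MAX_CONNECTIONS environment variable, set before the CUDA context is created, raises that towards the 32-connection Hyper-Q limit so that more streams map onto distinct hardware queues.

#include <cstdlib>
#include <cuda_runtime.h>

int main()
{
    // POSIX setenv; must happen before the first CUDA runtime call
    setenv("CUDA_DEVICE_MAX_CONNECTIONS", "32", 1);
    cudaFree(0);   // force context creation with the new setting
    // ... create streams and launch kernels as usual ...
    return 0;
}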
On a compute capability 1.3 CUDA card, we run the following code:
for (int i = 1; i < 20; ++i)
    kernelrun<<<30,320>>>(...);
We know that each SM has 8 SPs and can run 1024 threads,
so the 30 SMs in a Tesla C1060 can run 30*1024 threads concurrently.
As per the given code, how many threads can run concurrently?
If the kernelrun kernel uses 48 registers per thread, what are the limitations on the Tesla C1060,
which has 16384 registers and 16 KB of shared memory per SM?
Since concurrent kernel execution is not supported on the Tesla C1060, how
can we execute the kernels in the loop concurrently? Are streams possible?
Is there only one concurrent copy-and-execute engine on the Tesla C1060?
NVIDIA has been shipping an occupancy calculator, which you can use to answer this question for yourself, since 2007. You should try it.
But to answer your question: each SM in your compute capability 1.3 device has 16384 registers, so if your kernel is register limited, the number of threads per block would be roughly 320 (16384/48 ≈ 341, rounded down to the nearest multiple of 32). There is also a register page allocation granularity to consider.
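For illustration, here is a small sketch that reads the compiled register footprint and redoes that arithmetic (the empty kernelrun body is just a stand-in for the question's kernel; cudaFuncGetAttributes is the runtime call that reports per-kernel resource usage):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void kernelrun() { /* stand-in for the real kernel */ }

int main()
{
    cudaFuncAttributes attr;
    cudaFuncGetAttributes(&attr, kernelrun);
    const int regsPerSM = 16384;                        // compute capability 1.3
    const int regs = attr.numRegs > 0 ? attr.numRegs : 1;
    const int maxThreads = (regsPerSM / regs) & ~31;    // round down to a warp multiple
    printf("kernel uses %d registers -> at most ~%d register-limited threads per SM\n",
           regs, maxThreads);
    return 0;
}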
My GPU has 2 multiprocessors with 48 CUDA cores each. Does this mean that I can execute 96 thread blocks in parallel?
No it doesn't.
From chapter 4 of the CUDA C programming guide:
The number of blocks and warps that can reside and be processed together on the multiprocessor for a given kernel depends on the amount of registers and shared memory used by the kernel and the amount of registers and shared memory available on the multiprocessor. There are also a maximum number of resident blocks and a maximum number of resident warps per multiprocessor. These limits, as well as the amount of registers and shared memory available on the multiprocessor, are a function of the compute capability of the device and are given in Appendix F. If there are not enough registers or shared memory available per multiprocessor to process at least one block, the kernel will fail to launch.
Get the guide at: http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Programming_Guide.pdf
To check the limits for your specific device, compile and execute the deviceQuery example from the SDK.
So far the maximum number of resident blocks per multiprocessor is the same across all compute capabilities and is equal to 8.
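If you would rather ask the runtime than read the tables, later CUDA releases added an occupancy API; a sketch (the kernel and the block size are placeholders of my own) looks like this:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void myKernel(float *data) { data[threadIdx.x] *= 2.0f; }

int main()
{
    int blocksPerSM = 0;
    const int blockSize = 256;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, myKernel,
                                                  blockSize, 0 /* dynamic smem */);
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%d resident blocks of %d threads per SM, %d SMs -> %d blocks in flight\n",
           blocksPerSM, blockSize, prop.multiProcessorCount,
           blocksPerSM * prop.multiProcessorCount);
    return 0;
}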
This comes down to semantics. What does "execute" and "running in parallel" really mean?
At a basic level, having 96 CUDA cores really means that you have a potential throughput of 96 results of calculations per cycle of the core clock.
A core is mainly an Arithmetic Logic Unit (ALU), it performs basic arithmetic and logical operations. Aside from access to an ALU, a thread needs other resources, such as registers, shared memory and global memory to run. The GPU will keep many threads "in flight" to keep all these resources utilized to the fullest. The number of threads "in flight" will typically be much higher than the number of cores. On one hand, these threads can be seen as being "executed in parallel" because they are all consuming resources on the GPU at the same time. But on the other hand, most of them are actually waiting for something, such as data to arrive from global memory or for results of arithmetic to go through the pipelines in the cores. The GPU puts threads that are waiting for something on the "back burner". They are consuming some resources, but are they actually running? :)
The number of concurrently executed threads depends on your code and on the type of your CUDA device. For example, Fermi has two warp schedulers per streaming multiprocessor, so on each clock cycle two half-warps can be scheduled for arithmetic, a memory load, or a transcendental function evaluation. While one warp waits on a load or on a transcendental result, the CUDA cores can execute something else. So you can get 96 threads busy on the cores, if your code allows it. And, of course, you must have enough memory.
The GeForce GTX 560 Ti has 8 SMs, each with 48 CUDA cores (SPs). I'm going to launch a kernel this way: kernel<<<1024,1024>>>. The SM schedules threads in groups of 32 parallel threads called warps. How will blocks and threads be distributed among the 8 SMs and the 48 SPs in each SM? We have 1024 blocks with 1024 threads each, so what is a possible scenario? What is the maximum number of threads executing literally at the same time? What is the difference between the Fermi dual warp scheduler and earlier schedulers?
The NVIDIA-supplied occupancy calculator spreadsheet, which ships in every SDK or is available for download here, can provide the answer to the first three "sub-questions" you have asked.
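That said, a rough back-of-the-envelope for this particular launch (assuming compute capability 2.1 limits of 1536 resident threads and 8 resident blocks per SM, and assuming registers and shared memory are not the limiting factor) goes like this:

// kernel<<<1024,1024>>> on a GTX 560 Ti (8 SMs, 48 cores each)
//   1024 threads per block              -> 32 warps per block
//   1536 resident threads per SM max    -> only 1 such block fits on an SM at a time
//   8 SMs * 1 block                     -> 8 blocks (8192 threads) resident at once;
//                                          the remaining 1016 blocks queue until resident blocks retire
//   48 cores per SM * 8 SMs = 384       -> upper bound on results produced per clock,
//                                          i.e. threads "literally executing at the same time"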
As for the difference between multiprocessor-level scheduling in Fermi compared with earlier architectures, the name ("dual warp scheduler") really says it all. In Fermi, MPs retire instructions from two warps simultaneously, compared to a single warp, as was the case in the first two generations of CUDA-capable architectures. If you want a more detailed answer than that, I recommend reading the Fermi architecture whitepaper, available for download here.