Maximum number of concurrent kernels & virtual code architecture - cuda

So I found this Wikipedia resource,
Maximum number of resident grids per device (Concurrent Kernel Execution)
and for each compute capability it lists a number of concurrent kernels, which I assume to be the maximum number of concurrent kernels.
Now I am getting a GTX 1060 delivered, which according to this NVIDIA CUDA resource has a compute capability of 6.1. From what I have learned about CUDA so far, you can specify the virtual compute capability of your code at compile time with the nvcc flag -arch=compute_XX.
So will my GPU be hardware-constrained to 32 concurrent kernels, or is it capable of 128 with the -arch=compute_60 flag?

According to Table 13 in the NVIDIA CUDA programming guide, compute capability 6.1 devices have a maximum of 32 resident grids, i.e. 32 concurrent kernels.
Even if you use the -arch=compute_60 flag, you will be limited to the hardware limit of 32 concurrent kernels. Choosing a particular architecture to compile for does not allow you to exceed the hardware limits of the machine.
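If you want to confirm at runtime what the hardware reports, here is a minimal sketch using the CUDA runtime API (device 0 is assumed; note that concurrentKernels only tells you whether concurrent execution is supported at all, not the resident-grid limit, which comes from the table in the programming guide):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);  // query device 0

        // The compute capability reported here is a hardware property;
        // it does not change with the -arch flag used at compile time.
        printf("Compute capability: %d.%d\n", prop.major, prop.minor);

        // 1 if the device can run multiple kernels concurrently at all
        printf("Concurrent kernels supported: %d\n", prop.concurrentKernels);
        return 0;
    }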

Related

What is the relationship between NVIDIA GPUs' CUDA cores and OpenCL computing units?

My computer has a GeForce GTX 960M, which NVIDIA claims has 640 CUDA cores. However, when I run clGetDeviceInfo to find out the number of compute units on my machine, it prints out 5. It sounds like CUDA cores are somewhat different from what OpenCL considers compute units? Or maybe a group of CUDA cores forms an OpenCL compute unit? Can you explain this to me?
Your GTX 960M is a Maxwell device with 5 Streaming Multiprocessors, each with 128 CUDA cores, for a total of 640 CUDA cores.
The NVIDIA Streaming Multiprocessor is equivalent to an OpenCL Compute Unit. The previously linked answer will also give you some useful information that may help with your kernel sizing question in the comments.
The CUDA architecture is a close match to the OpenCL architecture.
A CUDA device is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). A multiprocessor corresponds to an OpenCL compute unit.
A multiprocessor executes a CUDA thread for each OpenCL work-item and a thread block for each OpenCL work-group. A kernel is executed over an OpenCL NDRange by a grid of thread blocks. As illustrated in Figure 2-1, each of the thread blocks that execute a kernel is therefore uniquely identified by its work-group ID, and each thread by its global ID or by a combination of its local ID and work-group ID.
Copied from OpenCL Programming Guide for the CUDA Architecture http://www.nvidia.com/content/cudazone/download/OpenCL/NVIDIA_OpenCL_ProgrammingGuide.pdf
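To see the same numbers from the CUDA side, here is a minimal sketch (multiProcessorCount should match the 5 that clGetDeviceInfo reports as CL_DEVICE_MAX_COMPUTE_UNITS; the 128 cores-per-SM figure is an assumption that holds only for Maxwell):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        // Number of SMs, i.e. what OpenCL calls compute units
        printf("SMs (OpenCL compute units): %d\n", prop.multiProcessorCount);

        // CUDA cores = SMs * cores-per-SM; 128 is correct for Maxwell
        // only, other architectures have different counts per SM.
        printf("CUDA cores (assuming Maxwell): %d\n",
               prop.multiProcessorCount * 128);
        return 0;
    }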

Different Kernels sharing SMx [duplicate]

Is it possible, using streams, to have multiple unique kernels on the same streaming multiprocessor in Kepler 3.5 GPUs? I.e. run 30 kernels of size <<<1,1024>>> at the same time on a Kepler GPU with 15 SMs?
On a compute capability 3.5 device, it might be possible.
Those devices support up to 32 concurrent kernels per GPU and 2048 threads per multiprocessor. With 64K registers per multiprocessor, two blocks of 1024 threads could run concurrently if their register footprint was no more than 32 per thread (65536 registers / 2048 threads), and no more than 24KB of shared memory per block (48KB per multiprocessor split between the two blocks).
You can find all of this in the hardware description in the appendices of the CUDA programming guide.
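For reference, here is a minimal sketch of the launch pattern the question describes: 30 single-block kernels, each in its own stream (the kernel body is a placeholder; whether the kernels actually overlap depends on the register and shared-memory footprints discussed above):

    #include <cuda_runtime.h>

    __global__ void work(float *data)  // placeholder kernel body
    {
        int i = threadIdx.x;
        data[i] *= 2.0f;
    }

    int main() {
        const int nKernels = 30;
        cudaStream_t streams[nKernels];
        float *buf;
        cudaMalloc(&buf, nKernels * 1024 * sizeof(float));

        for (int i = 0; i < nKernels; ++i)
            cudaStreamCreate(&streams[i]);

        // One block of 1024 threads per kernel; kernels launched into
        // different streams are allowed to run concurrently.
        for (int i = 0; i < nKernels; ++i)
            work<<<1, 1024, 0, streams[i]>>>(buf + i * 1024);

        cudaDeviceSynchronize();

        for (int i = 0; i < nKernels; ++i)
            cudaStreamDestroy(streams[i]);
        cudaFree(buf);
        return 0;
    }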

What is the relation between compute units, SMXs, CUDA cores, etc.?

I'm quite confused by this terminology... I understand that an NVIDIA GPU has some streaming multiprocessors (SMX), each consisting of a number of CUDA cores (streaming processors, SPs). However I can't seem to figure out how this maps to OpenCL compute units.
For example, my GeForce GTS 250 says it has 16 compute units. The official nVidia site says it has 128 CUDA cores. However, some papers say the compute unit itself is a core.
So which one is which? Also, which one of these executes an OpenCL workgroup? So far I thought a work group gets executed on a CUDA core. But the OpenCL spec says it gets executed on a compute unit (which should be an SMX then).
Honestly, WTF???
I would completely ignore the term 'core' when thinking about OpenCL, because different hardware vendors have different opinions about what it actually means (as you have already found out). Neither an SM nor a 'CUDA core' is directly comparable to a traditional CPU core.
For NVIDIA hardware, an SM is an OpenCL compute unit. Each work-group will therefore be assigned to an SM, although each SM is capable of running multiple work-groups concurrently.
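As a concrete illustration of that mapping, the indexing inside a CUDA kernel corresponds one-to-one to OpenCL's ID queries (the comments give the OpenCL equivalents; the value written to out is an arbitrary demo):

    __global__ void ids(int *out)
    {
        int local  = threadIdx.x;                            // get_local_id(0)
        int group  = blockIdx.x;                             // get_group_id(0)
        int global = blockIdx.x * blockDim.x + threadIdx.x;  // get_global_id(0)
        out[global] = group * 1000 + local;                  // arbitrary demo value
    }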

How many kernels can run simultaneously on Kepler CC 3.0/3.5: 16 or 32 (streams)?

As we know, Fermi supports only a single hardware connection to the GPU, and as written here: http://on-demand.gputechconf.com/gtc-express/2011/presentations/StreamsAndConcurrencyWebinar.pdf
Fermi architecture can simultaneously support
Up to 16 CUDA kernels on GPU
And as we know, Hyper-Q allows up to 32 simultaneous connections from multiple CUDA streams, MPI processes, or threads within a process: http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
But how many kernels can run simultaneously on Kepler CC 3.0/3.5: 16 or 32 (streams)?
From the programming guide:
The maximum number of kernel launches that a device can execute concurrently is 32 on devices of compute capability 3.5 and 16 on devices of lower compute capability.
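Note that the 32 Hyper-Q connections and the 32 concurrent kernels are related but distinct limits. The number of hardware work queues the runtime actually uses is controlled by the CUDA_DEVICE_MAX_CONNECTIONS environment variable, which defaults to fewer than 32 (8, as far as I recall), so a sketch like the following, with the variable set before the runtime initializes, may be needed to reach full concurrency (treat the default and maximum values as assumptions to check against your guide version):

    #include <cstdlib>
    #include <cuda_runtime.h>

    int main() {
        // Must be set before the CUDA runtime is initialized;
        // setenv is POSIX, use _putenv_s on Windows.
        setenv("CUDA_DEVICE_MAX_CONNECTIONS", "32", 1);

        // The first CUDA call initializes the runtime with the
        // setting above.
        cudaFree(0);

        // ... create up to 32 streams and launch kernels as usual ...
        return 0;
    }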

Maximum concurrent kernels for devices of compute capability 3.0

What is the maximum number of concurrent kernels possible for NVIDIA devices of compute capability 3.0? I hope it's not the same as the one for compute capability 2.0...
From the CUDA C programming guide version 4.2:
3.2.5.3 Concurrent Kernel Execution
The maximum number of kernel launches that a device can execute concurrently is sixteen.
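One way to observe the limit empirically is the classic clock-spin test: launch more kernels than the limit, each spinning for a fixed number of SM cycles, and compare the elapsed time against a single-kernel run. A sketch (the cycle count and the roughly 2x expectation for 20 kernels on a 16-kernel device are rough assumptions; clock rates and scheduling details will affect the exact numbers):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Spin for roughly `cycles` SM clock cycles.
    __global__ void spin(long long cycles)
    {
        long long start = clock64();
        while (clock64() - start < cycles) { }
    }

    int main() {
        const int nKernels = 20;  // more than the 16-kernel limit on CC 3.0
        cudaStream_t streams[nKernels];
        for (int i = 0; i < nKernels; ++i)
            cudaStreamCreate(&streams[i]);

        cudaEvent_t t0, t1;
        cudaEventCreate(&t0);
        cudaEventCreate(&t1);

        cudaEventRecord(t0);
        for (int i = 0; i < nKernels; ++i)
            spin<<<1, 1, 0, streams[i]>>>(100000000LL);
        cudaEventRecord(t1);
        cudaEventSynchronize(t1);

        float ms;
        cudaEventElapsedTime(&ms, t0, t1);
        // On a CC 3.0 device the 20 kernels should execute as a batch of
        // 16 followed by a batch of 4, i.e. roughly 2x one spin time.
        printf("Elapsed: %.1f ms\n", ms);
        return 0;
    }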