Maximum concurrent kernels for devices of compute capability 3.0

What is the maximum number of concurrent kernels possible for NVIDIA devices of compute capability 3.0? I hope it's not the same as the one for compute capability 2.0.

From the CUDA C programming guide version 4.2:
3.2.5.3 Concurrent Kernel Execution
The maximum number of kernel launches that a device can execute concurrently is sixteen.
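Note that you only get concurrency at all if the kernels are launched into different non-default streams (and the device has resources free). A minimal sketch of that launch pattern (mine, not from the guide), using a placeholder kernel named busy:

    #include <cuda_runtime.h>

    __global__ void busy() { /* placeholder kernel */ }

    int main() {
        const int n = 16;  // the concurrent-kernel limit quoted above
        cudaStream_t streams[n];
        for (int i = 0; i < n; ++i) {
            cudaStreamCreate(&streams[i]);
            // Each launch goes into its own stream, so the device
            // may overlap their execution, up to its resident-grid limit.
            busy<<<1, 64, 0, streams[i]>>>();
        }
        cudaDeviceSynchronize();
        for (int i = 0; i < n; ++i) cudaStreamDestroy(streams[i]);
        return 0;
    }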


Maximum number of concurrent kernels & virtual code architecture

So I found this Wikipedia resource:
Maximum number of resident grids per device (Concurrent Kernel Execution)
and for each compute capability it lists a number of concurrent kernels, which I assume to be the maximum number of concurrent kernels.
Now I am getting a GTX 1060 delivered, which according to this NVIDIA CUDA resource has a compute capability of 6.1. From what I have learned about CUDA so far, you can specify the virtual compute capability of your code at compile time in NVCC with the flag -arch=compute_XX.
So will my GPU be hardware constrained to 32 concurrent kernels or is it capable of 128 with the -arch=compute_60 flag?
According to Table 13 in the NVIDIA CUDA programming guide, compute capability 6.1 devices have a maximum of 32 resident grids = 32 concurrent kernels.
Even if you use the -arch=compute_60 flag, you will be limited to the hardware limit of 32 concurrent kernels. Choosing a particular architecture to compile for does not allow you to exceed the hardware limits of the machine.
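The runtime cannot report the resident-grid limit directly (that only appears in the table), but it can confirm which hardware compute capability your code actually runs on, regardless of the -arch you compiled with. A short sketch:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        // The hardware CC (6.1 for a GTX 1060) fixes the resident-grid
        // limit; -arch only changes what the compiler targets.
        printf("compute capability %d.%d, concurrentKernels=%d\n",
               prop.major, prop.minor, prop.concurrentKernels);
        return 0;
    }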

What is the relationship between NVIDIA GPUs' CUDA cores and OpenCL computing units?

My computer has a GeForce GTX 960M, which is claimed by NVIDIA to have 640 CUDA cores. However, when I run clGetDeviceInfo to find out the number of computing units on my device, it prints out 5. It sounds like CUDA cores are somewhat different from what OpenCL considers a computing unit? Or maybe a group of CUDA cores forms an OpenCL computing unit? Can you explain this to me?
Your GTX 960M is a Maxwell device with 5 Streaming Multiprocessors, each with 128 CUDA cores, for a total of 640 CUDA cores.
The NVIDIA Streaming Multiprocessor is equivalent to an OpenCL Compute Unit. The previously linked answer will also give you some useful information that may help with your kernel sizing question in the comments.
The CUDA architecture is a close match to the OpenCL architecture.
A CUDA device is built around a scalable array of multithreaded Streaming Multiprocessors (SMs). A multiprocessor corresponds to an OpenCL compute unit.
A multiprocessor executes a CUDA thread for each OpenCL work-item and a thread block for each OpenCL work-group. A kernel is executed over an OpenCL NDRange by a grid of thread blocks. As illustrated in Figure 2-1, each of the thread blocks that execute a kernel is therefore uniquely identified by its work-group ID, and each thread by its global ID or by a combination of its local ID and work-group ID.
Copied from the OpenCL Programming Guide for the CUDA Architecture: http://www.nvidia.com/content/cudazone/download/OpenCL/NVIDIA_OpenCL_ProgrammingGuide.pdf
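You can also cross-check the mapping from the CUDA side: the runtime's multiProcessorCount is the SM count, i.e. the same number OpenCL reports via CL_DEVICE_MAX_COMPUTE_UNITS. A sketch:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        // On a GTX 960M this prints 5 -- the same value clGetDeviceInfo
        // returns for CL_DEVICE_MAX_COMPUTE_UNITS.
        printf("SMs (= OpenCL compute units): %d\n", prop.multiProcessorCount);
        return 0;
    }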

Different Kernels sharing SMx [duplicate]

Is it possible, using streams, to have multiple unique kernels on the same streaming multiprocessor in Kepler 3.5 GPUs? I.e. run 30 kernels of size <<<1,1024>>> at the same time on a Kepler GPU with 15 SMs?
On a compute capability 3.5 device, it might be possible.
Those devices support up to 32 concurrent kernels per GPU and 2048 threads per multiprocessor. With 64k registers per multiprocessor, two blocks of 1024 threads could run concurrently if their register footprint was less than 32 per thread, and less than 24kb of shared memory per block.
You can find all of this in the hardware description in the appendices of the CUDA programming guide.
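Rather than doing the register arithmetic by hand, on CUDA 6.5 or newer you can ask the runtime how many 1024-thread blocks of a given kernel fit on one SM, given its actual footprint; if it reports at least 2, a pair of such blocks (even from different kernels launched in separate streams) could be resident together, resources permitting. A sketch with a hypothetical kernel named worker:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void worker() { /* kernel body elided */ }

    int main() {
        int blocksPerSM = 0;
        // Accounts for the kernel's real register and shared-memory usage.
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, worker,
                                                      1024 /* block size */,
                                                      0 /* dynamic smem */);
        printf("resident 1024-thread blocks per SM: %d\n", blocksPerSM);
        return 0;
    }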

How many kernels simultaneously support on Kepler CC3.0/3.5, 16 or 32 (STREAMs)?

As we know, Fermi supports only a single connection to the GPU, as written here: http://on-demand.gputechconf.com/gtc-express/2011/presentations/StreamsAndConcurrencyWebinar.pdf
Fermi architecture can simultaneously support
Up to 16 CUDA kernels on GPU
And as we know, Hyper-Q allows for up to 32 simultaneous connections from multiple CUDA streams, MPI processes, or threads within a process: http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
But how many kernels can run simultaneously on Kepler CC 3.0/3.5: 16 or 32 (streams)?
From the programming guide:
The maximum number of kernel launches that a device can execute concurrently is 32 on devices of compute capability 3.5 and 16 on devices of lower compute capability.
So on Kepler, CC 3.0 devices support 16 concurrent kernels and CC 3.5 devices support 32.

Multiple kernels in CUDA 4.0

Is it possible to launch multiple kernels on multiple GPUs concurrently from a single thread in CUDA 4.0?
To use multiple GPUs from a single host thread, you can switch between CUDA contexts (each of which is bound to a GPU) and launch kernels asynchronously. In effect you will be running multiple kernels across multiple GPUs this way.
However, if you have cards with compute capability >= 2.0, you can also run kernels concurrently on a single device, as shown in the comments above. You can find the post about concurrent kernel execution over here.
Of course, you can use both approaches if you have multiple cards with compute capability >= 2.0.
Yes.
If there are two devices, you can run kernel1<<<...>>>() on device0 and kernel2<<<...>>>() on device1. There is a call, cudaSetDevice(), with which you choose the device on which the kernel will be executed. It is part of the CUDA 4.0 runtime API.
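A minimal sketch of that pattern, assuming two CUDA devices are present and using two placeholder kernels:

    #include <cuda_runtime.h>

    __global__ void kernel1() { /* ... */ }
    __global__ void kernel2() { /* ... */ }

    int main() {
        // Kernel launches are asynchronous, so a single host thread can
        // start work on both devices before synchronizing either one.
        cudaSetDevice(0);
        kernel1<<<1, 256>>>();

        cudaSetDevice(1);
        kernel2<<<1, 256>>>();

        // Synchronize each device before using its results.
        cudaSetDevice(0); cudaDeviceSynchronize();
        cudaSetDevice(1); cudaDeviceSynchronize();
        return 0;
    }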