With NVIDIA MPS, we launch the MPS server by running sudo nvidia-cuda-mps-control -d. I have two questions:
How do I specify which GPU the MPS server runs on when I have multiple GPUs in the same server?
How do I control the resources (such as compute and memory) allocated to each MPS client when I have multiple concurrent processes?
The CUDA MPS doc will answer many questions like this.
How do I specify which GPU the MPS server runs on when I have multiple GPUs in the same server?
From the CUDA MPS doc, section 2.3.4, the GPUs that are visible (via CUDA_VISIBLE_DEVICES) when the MPS server is started determine which GPUs it will use:
2.3.4. MPS on Multi-GPU Systems
The MPS server supports using multiple GPUs. On systems with more than one GPU,
you can use CUDA_VISIBLE_DEVICES to enumerate the GPUs you would like to use.
See section 4.2 for more details.
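For example, a minimal sketch that pins the MPS server to GPU 0 (the daemon inherits CUDA_VISIBLE_DEVICES from the shell that starts it; sudo -E preserves the exported variable, and the "0" could equally be a comma-separated list of indices or a GPU UUID):

    # Start the MPS control daemon restricted to GPU 0
    export CUDA_VISIBLE_DEVICES=0
    sudo -E nvidia-cuda-mps-control -d

    # ... run MPS client applications ...

    # Shut the daemon down when finished
    echo quit | sudo nvidia-cuda-mps-control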
How do I control the resources (such as compute and memory) allocated to each MPS client when I have multiple concurrent processes?
From the same doc, section 2.3.5.2, the primary method for per-process compute allocation is the environment variable CUDA_MPS_ACTIVE_THREAD_PERCENTAGE. The value of this variable when the process starts and initializes the CUDA runtime or driver API determines its available compute resources (SMs), expressed as a percentage. If you have multiple GPUs, it is the percentage of the SM resources on the GPU selected by your application via cudaSetDevice() or similar.
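For example (a sketch; my_cuda_app stands in for your own client binary and 25 is just an illustrative value):

    # Give this client roughly a quarter of the SMs on its selected GPU.
    # The variable is read when the process initializes the CUDA runtime/driver API.
    export CUDA_MPS_ACTIVE_THREAD_PERCENTAGE=25
    ./my_cuda_app

If I recall the control interface correctly, the control daemon also accepts a set_default_active_thread_percentage command for setting a server-wide default.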
MPS doesn't provide a mechanism for per-process memory allocation/partitioning at this time.
EDIT: CUDA 11.5, made publicly available on October 20th, 2021, adds a new feature allowing per-client memory limits in MPS.
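As an illustration of how that feature is used (a hedged sketch: the variable and control-command names are as I recall them from the 11.5 MPS doc, so verify them against your CUDA version; my_cuda_app is again a placeholder):

    # Cap this client's device memory allocations on GPU 0 at 2 GB
    export CUDA_MPS_PINNED_DEVICE_MEM_LIMIT="0=2G"
    ./my_cuda_app

    # A server-wide default can reportedly also be set through the control daemon:
    # echo "set_default_device_pinned_mem_limit 0 2G" | nvidia-cuda-mps-control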
I followed Robert Crovella's example on how to use Nvidia's Multi-Process Service. According to the docs:
2.1.2. Reduced on-GPU context storage
Without MPS each CUDA process using a GPU allocates separate storage
and scheduling resources on the GPU. In contrast, the MPS server
allocates one copy of GPU storage and scheduling resources shared by
all its clients.
I understood this to mean that each process's context size is reduced because the contexts are shared. This would increase free GPU memory and thus enable running more processes in parallel.
Now, back to the example. Without MPS:
And with MPS:
Unfortunately, each process still takes virtually the same amount of memory (~300 MB). Isn't this in contradiction with the docs? Is there a way to decrease per-process memory consumption?
Oops, I asked too eagerly, before checking the memory usage on the other (pre-Volta) card, and yes, there actually is a difference. Let me post it here for future reference, in case anyone else stumbles on this problem too:
MPS off:
MPS on:
Indeed, as seen here, on the Volta architecture the processes communicate directly with the GPU, without the MPS server in the middle:
Volta MPS clients submit work directly to the GPU without passing through the MPS server.
This can easily be seen in your first screenshot, where the t1034 processes are listed as using the GPU.
In contrast, on pre-Volta architectures the client processes communicate with the GPU through the MPS server, which is why only the MPS server process appears as using the GPU in the latter screenshot.
I have a Tesla K20m GPU card from NVIDIA. From CUDA 5.0 onwards, multiple processes from the same application are allowed on the same GPU. Does CUDA allow execution of different applications on the same GPU at the same time?
It depends on what you mean by 'at the same time'. If you mean 'two applications have CUDA contexts on the same card at the same time', then yes.
Though you may want to use MPS to get the full benefits and reduce context switching. See also this question.
Multiple applications may run at the same time on the same GPU. Namely, multiple applications can have a CUDA context at the same time and launch kernels, copy memory, etc.
But kernels from different CUDA contexts cannot be executed simultaneously on the same GPU. Meaning, in any given slice of time, only kernels from a single CUDA context may execute on a GPU. This may cause GPU underutilization if the kernels do not occupy the entire GPU resources (memory + compute), and some of the resources may be left unused.
MPS enables such concurrency by running a server with a single CUDA context; all client processes communicate with the GPU device through this server and ultimately use its single CUDA context. This enables actual concurrency between kernel launches from different (logical) CUDA contexts.
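A rough sketch of that workflow, where app_a, app_b and app_c stand in for your own CUDA applications:

    # One MPS server, several client processes sharing its CUDA context
    nvidia-cuda-mps-control -d    # start the control daemon

    ./app_a &                     # under MPS, kernels from these processes can
    ./app_b &                     # overlap on the GPU instead of time-slicing
    ./app_c &                     # between separate contexts
    wait

    echo quit | nvidia-cuda-mps-control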
Suppose I have 4 GPUs and would like to run 50 CUDA programs in parallel. My question is: is the NVIDIA driver smart enough to run the 50 CUDA programs on the different GPUs or do I have to set the CUDA device for each program?
thank you
The first point to make is that you cannot run 50 applications in parallel on 4 GPUs on just about any CUDA platform. If you have a Hyper-Q capable GPU, there is the possibility of up to 32 threads or MPI processes queuing work to the GPU. Otherwise there is a single command queue.
For anything other than the latest Kepler Tesla cards, the CUDA driver only supports a single active context at a time. If you run more than one application on a GPU, the processes will each have contexts which simply contend with one another on a "first come, first served" basis. If one application blocks the other with a long-running kernel or similar, there is no pre-emption or anything else which makes the process yield to another process. When the GPU is shared with a display manager, there is a watchdog timer that will impose an upper limit of a few seconds before the application gets its context killed. The result is that only one context ever runs on the hardware at a time. Context switching isn't free, and there is a performance penalty to having multiple processes contending for a single device.
Furthermore, every context present on a GPU requires device memory. On the platform you are asking about, Linux, there is no memory paging, so every context's resources must coexist in GPU memory. I don't believe it would be possible to have 12 non-trivial contexts running on any current GPU simultaneously - you would run out of available memory well before that number. Trying to run more applications would result in a context establishment failure.
As for the behaviour of the driver in distributing multiple applications across multiple GPUs, AFAIK the Linux driver doesn't do any intelligent distribution of processes amongst GPUs, except when one or more of the GPUs are in a non-default compute mode. If no device is specifically requested, the driver will always try to find the first valid, free GPU it can run a process or thread on. If a GPU is busy and marked compute exclusive (either thread or process) or marked prohibited, the driver will skip over it when trying to find a GPU to run on. If all GPUs are exclusive and occupied or prohibited, then the application will fail with a no valid device available error.
So in summary, for everything other than Hyper-Q devices, there is no performance gain in doing what you are asking about (quite the opposite), and I would expect it to break if you tried. A much saner approach would be to use compute exclusivity in combination with a resource-managing task scheduler like Torque or one of the (former) Sun Grid Engine versions, which could schedule your processes to run in an orderly fashion according to the availability of GPUs. This is how most general-purpose HPC clusters deal with scheduling in multi-GPU environments.
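For reference, compute exclusivity can be set per GPU with nvidia-smi (a sketch assuming four GPUs with indices 0-3 and admin rights):

    # Put each GPU into exclusive-process compute mode so the scheduler can
    # hand out whole GPUs, one process at a time
    for i in 0 1 2 3; do
        sudo nvidia-smi -i "$i" -c EXCLUSIVE_PROCESS
    done

    # Restore the default (shared) mode later with:
    # sudo nvidia-smi -c DEFAULT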
I know that NVIDIA GPUs with compute capability 2.x or greater can execute up to 16 kernels concurrently.
However, my application spawns 7 "processes" and each of these 7 processes launches CUDA kernels.
My first question is: what is the expected behavior of these kernels? Will they execute concurrently as well, or, since they are launched by different processes, will they execute sequentially?
I am confused because the CUDA C programming guide says:
"A kernel from one CUDA context cannot execute concurrently with a kernel from another CUDA context."
This brings me to my second question, what are CUDA "contexts"?
Thanks!
A CUDA context is a virtual execution space that holds the code and data owned by a host thread or process. Only one context can ever be active on a GPU with all current hardware.
So to answer your first question, if you have seven separate threads or processes all trying to establish a context and run on the same GPU simultaneously, they will be serialised and any process waiting for access to the GPU will be blocked until the owner of the running context yields. There is, to the best of my knowledge, no time slicing and the scheduling heuristics are not documented and (I would suspect) not uniform from operating system to operating system.
You would be better to launch a single worker thread holding a GPU context and use messaging from the other threads to push work onto the GPU. Alternatively there is a context migration facility available in the CUDA driver API, but that will only work with threads from the same process, and the migration mechanism has latency and host CPU overhead.
To add to the answer of @talonmies:
On newer architectures, multiple processes can launch multiple kernels concurrently by using MPS. So it is now definitely possible, which was not the case some time ago. For a detailed understanding, read this document:
https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf
Additionally, you can see the maximum number of concurrent kernels allowed per CUDA compute capability, as supported by different GPUs. Here is a link to that:
https://en.wikipedia.org/wiki/CUDA#Version_features_and_specifications
For example, a GPU with CUDA compute capability 7.5 can have a maximum of 128 CUDA kernels launched to it concurrently.
Do you really need to have separate threads and contexts?
I believe that the best practice is to use one context per GPU, because multiple contexts on a single GPU bring considerable overhead.
To execute many kernels concurrently, you should create a few CUDA streams in one CUDA context and queue each kernel into its own stream - they will then be executed concurrently, if there are enough resources for it.
If you need to make the context accessible from a few CPU threads, you can use cuCtxPopCurrent() and cuCtxPushCurrent() to pass it around, but only one thread will be able to work with the context at any time.
I have an application in which I would like to share a single GPU between multiple processes. That is, each of these processes would create its own CUDA or OpenCL context, targeting the same GPU. According to the Fermi white paper[1], application-level context switching is less than 25 microseconds, but the launches are effectively serialized as they launch on the GPU -- so Fermi wouldn't work well for this. According to the Kepler white paper[2], there is something called Hyper-Q that allows for up to 32 simultaneous connections from multiple CUDA streams, MPI processes, or threads within a process.
My questions: Has anyone tried this on a Kepler GPU and verified that its kernels are run concurrently when scheduled from distinct processes? Is this just a CUDA feature, or can it also be used with OpenCL on Nvidia GPUs? Do AMD's GPUs support something similar?
[1] http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIA_Fermi_Compute_Architecture_Whitepaper.pdf
[2] http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf
In response to the first question, NVIDIA has published some Hyper-Q results in a blog here. The blog points out that the developers who were porting CP2K were able to get to accelerated results more quickly because Hyper-Q allowed them to use the application's MPI structure more or less as-is and run multiple ranks on a single GPU, getting higher effective GPU utilization that way. As mentioned in the comments, this (Hyper-Q) feature is only available on K20 processors currently, as it is dependent on the GK110 GPU.
I've run simultaneous kernels on the Fermi architecture; it works wonderfully and, in fact, is often the only way to get high occupancy from your hardware. I used OpenCL, and you need to run a separate command queue from a separate CPU thread in order to do this. Hyper-Q is the ability to dispatch new data-parallel kernels from within another kernel. This is only on Kepler.