How to enable/disable a specific graphics card? - cuda

I'm working on a Fujitsu machine. It has 2 GPUs installed: a Quadro 2000 and a Tesla C2075. The Quadro GPU has 1 GB of RAM and the Tesla GPU has 5 GB (I checked using the output of nvidia-smi -q). When I run nvidia-smi, the output shows 2 GPUs, but the Tesla one's display is shown as Off.
I'm running a memory-intensive program and would like to use the 5 GB of RAM available, but whenever I run the program, it seems to be using the Quadro GPU.
Is there some way to use a particular GPU out of the 2 in a program? Does the Tesla GPU being "disabled" mean its drivers are not installed?

You can control access to CUDA GPUs either using the environment or programmatically.
You can use the environment variable CUDA_VISIBLE_DEVICES to specify a list of one or more GPUs that will be visible to any application, as well as their order of visibility. For example, if nvidia-smi reports your Tesla GPU as GPU 1 (and your Quadro as GPU 0), then you can set CUDA_VISIBLE_DEVICES=1 so that only the Tesla can be used by CUDA code.
See my blog post on the subject.
To control which GPU your application uses programmatically, use the CUDA device management API. Query the number of devices with cudaGetDeviceCount, query each device's properties with cudaGetDeviceProperties, select the device that fits your application's criteria, and then call cudaSetDevice on it. You can also use cudaChooseDevice to select the device that most closely matches a set of properties you specify.
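For illustration, here is a minimal sketch of the programmatic approach, assuming you simply want the device with the most global memory (which on the asker's machine would pick the 5 GB Tesla C2075 over the 1 GB Quadro 2000); error handling is kept to a bare minimum:

// Sketch: enumerate CUDA devices and select the one with the most global memory.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found\n");
        return 1;
    }

    int best = 0;
    size_t bestMem = 0;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, %zu MB global memory\n",
               d, prop.name, prop.totalGlobalMem >> 20);
        if (prop.totalGlobalMem > bestMem) {
            bestMem = prop.totalGlobalMem;
            best = d;
        }
    }

    cudaSetDevice(best);  // subsequent CUDA calls in this thread use this device
    printf("Using device %d\n", best);
    return 0;
}

Note that if CUDA_VISIBLE_DEVICES is set, the device numbering seen by this code is already remapped to the visible devices only.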

Related

GTX Titan Z Global Memory

I have a GTX Titan Z graphics card. It has twin GPUs with a total memory of 12 GB (6 GB + 6 GB). When I use the deviceQuery application in the CUDA Samples (v6.5) folder to see the specification, it shows two devices, each with a total memory of 4 GB. Furthermore, in my C++ code I can only access 4 GB of memory. On the other hand, when I run the GPU-Z software, it shows two Titan Z GPUs, each with 6 GB of memory. Could anyone explain what has caused this problem and how it can be resolved?
The problem here was that the program was being compiled as a 32-bit application. With 32 bits, the program can only address 4 GB of memory. The CUDA call to check device specifications (cudaGetDeviceProperties) appears to recognize this and reports only the 4 GB you can actually use.
Compiling as a 64-bit application should resolve this problem.
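As a quick check (a sketch of my own, not part of the original answer; the file name in the compile comment is illustrative), you can print the pointer size of the build alongside what cudaGetDeviceProperties reports:

// Sketch: confirm whether this is a 32-bit or 64-bit build and what the runtime
// reports as total global memory. Build with "nvcc -m64 check.cu" for 64-bit.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    printf("sizeof(void*) = %zu bytes (%s-bit build)\n",
           sizeof(void*), sizeof(void*) == 8 ? "64" : "32");

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("Device 0: %s, totalGlobalMem = %zu MB\n",
           prop.name, prop.totalGlobalMem >> 20);
    return 0;
}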

how to get utilization rates of gpu? (nvml)

I need GPU information for my CUDA project test.
I am using the NVML library, and I successfully get temperature information.
But NVML reports ERROR_NOT_SUPPORTED from nvmlDeviceGetUtilizationRates().
So now, how can I get the utilization rates of the GPU?
Clearly, there must be a way, since NVIDIA GeForce Experience does it.
Thanks.
P.S. Oops! I have insufficient reputation to post images, so if you want to see the NVIDIA GeForce Experience example image, click this link.
Maybe your graphics card, the GTX Titan (based on your image), simply does not support volatile GPU utilization?
I think this functionality is only supported on Quadro and Tesla cards.
Additionally, can you show how you use nvml in your code?
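For reference, here is a minimal NVML sketch of my own (assuming device index 0) that checks the return code of nvmlDeviceGetUtilizationRates and reports why it fails on GPUs that do not support the counter:

// Sketch: query GPU/memory utilization via NVML, handling the not-supported case.
// Link with -lnvidia-ml.
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        printf("Failed to initialize NVML\n");
        return 1;
    }

    nvmlDevice_t device;
    if (nvmlDeviceGetHandleByIndex(0, &device) == NVML_SUCCESS) {
        nvmlUtilization_t util;
        nvmlReturn_t result = nvmlDeviceGetUtilizationRates(device, &util);
        if (result == NVML_SUCCESS) {
            printf("GPU utilization: %u%%, memory utilization: %u%%\n",
                   util.gpu, util.memory);
        } else {
            // On GPUs that do not expose this counter you land here.
            printf("nvmlDeviceGetUtilizationRates: %s\n", nvmlErrorString(result));
        }
    }

    nvmlShutdown();
    return 0;
}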
To see your GPU utilization on Linux for NVIDIA cards, use:
nvidia-smi
Use -l to refresh, for example:
nvidia-smi -l 1
This will refresh every 1 second.

NVML Power readings with nvmlDeviceGetPowerUsage

I'm running an application using the NVML function nvmlDeviceGetPowerUsage().
The problem is that I always get the same number for the different applications I run on a Tesla M2050.
Any suggestions?
If you read the documentation, you'll discover that there are some qualifiers on whether this function is available:
For "GF11x" Tesla ™and Quadro ®products from the Fermi family.
• Requires NVML_INFOROM_POWER version 3.0 or higher.
For Tesla ™and Quadro ®products from the Kepler family.
• Does not require NVML_INFOROM_POWER object.
And:
It is only available if power management mode is supported. See nvmlDeviceGetPowerManagementMode.
I think you'll find that power management mode is not supported on the M2050, and if you run that nvmlDeviceGetPowerManagementMode API call on your M2050 device, you'll get confirmation of that.
The M2050 is neither a Kepler GPU nor a GF11x Fermi GPU. It uses the GF100 Fermi GPU, so it is not covered by this API capability (and the GetPowerManagementMode API call would confirm that).
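A sketch of that confirmation step (assuming device index 0): check nvmlDeviceGetPowerManagementMode first, and read nvmlDeviceGetPowerUsage only if the feature is enabled:

// Sketch: verify power management support before trusting power readings.
// nvmlDeviceGetPowerUsage reports milliwatts. Link with -lnvidia-ml.
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t device;
    if (nvmlDeviceGetHandleByIndex(0, &device) == NVML_SUCCESS) {
        nvmlEnableState_t mode;
        nvmlReturn_t result = nvmlDeviceGetPowerManagementMode(device, &mode);
        if (result != NVML_SUCCESS) {
            // Expected outcome on an M2050: the feature query itself is unsupported.
            printf("nvmlDeviceGetPowerManagementMode: %s\n", nvmlErrorString(result));
        } else if (mode != NVML_FEATURE_ENABLED) {
            printf("Power management is not enabled on this device\n");
        } else {
            unsigned int milliwatts = 0;
            if (nvmlDeviceGetPowerUsage(device, &milliwatts) == NVML_SUCCESS)
                printf("Power draw: %.1f W\n", milliwatts / 1000.0);
        }
    }

    nvmlShutdown();
    return 0;
}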

cublas failed to synchronize stop event?

I'm playing with the matrixMulCUBLAS sample code and tried changing the default matrix sizes to something slightly more fun, rows = 5k x cols = 2.5k, and then the example fails with the error Failed to synchronize on the stop event (error code unknown error)! at line #377, when all the computation is done and it is apparently cleaning up cublas. What does this mean, and how can it be fixed?
I've got CUDA 5.0 installed with an EVGA FTW NVIDIA GeForce GTX 670 with 2 GB of memory. The driver version is 314.22, the latest one as of today.
In general, when using CUDA on Windows, it's necessary to make sure the execution time of a single kernel is not longer than about 2 seconds. If the execution time becomes longer, you may hit a Windows TDR event. This is a Windows watchdog timer that will reset the GPU driver if it does not respond within a certain period of time. Such a reset halts the execution of your kernel and generates bogus results, usually along with a briefly black display and a brief message in the system tray. If your kernel execution is triggering the Windows watchdog timer, you have a few options:
If you have the possibility to use more than one GPU in your system (i.e. usually not talking about a laptop here) and one of your GPUs is a Quadro or Tesla device, the Quadro or Tesla device can usually be placed in TCC mode. This means that GPU can no longer drive a physical display (if it was driving one) and that it is removed from the WDDM subsystem, so it is no longer subject to the watchdog timer. You can use the nvidia-smi.exe tool that ships with the NVIDIA GPU driver to modify the setting from WDDM to TCC for a given GPU. Use the Windows file search function to find nvidia-smi.exe, then run nvidia-smi --help to get command-line help for how to switch from WDDM to TCC mode.
If the above method is not available to you (don't have 2 GPUs, don't have a Quadro or Tesla GPU...) then you may want to investigate changing the watchdog timer setting. Unfortunately this requires modifying the system registry, and the process and specific keys vary by OS. There are a number of resources on the web, such as here from Microsoft, as well as other questions on Stack Overflow, such as here, which may help with this.
A third option is simply to limit the execution time of your kernel(s). Successive operations might be broken into multiple kernel calls. The "gap" between kernel calls allows the display driver to respond to the OS and prevents the watchdog timeout (see the sketch after this answer for this chunking pattern).
The statement about TCC support is a general one. Not all Quadro GPUs are supported. The final determinant of support for TCC (or not) on a particular GPU is the nvidia-smi tool. Nothing here should be construed as a guarantee of support for TCC on your particular GPU.
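As an illustration of the third option (a sketch only; the kernel and sizes here are made up and not taken from the matrixMulCUBLAS sample), a long-running operation can be chunked so that no single launch runs long enough to trip the watchdog:

// Sketch: process a large array in chunks so no single kernel launch runs long
// enough to trigger the Windows TDR watchdog. The kernel is a stand-in.
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t offset, size_t n, float factor) {
    size_t i = offset + blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < offset + n) data[i] *= factor;
}

int main() {
    const size_t total = 1 << 26;   // total elements to process
    const size_t chunk = 1 << 22;   // elements per kernel launch
    float *d_data;
    cudaMalloc(&d_data, total * sizeof(float));
    cudaMemset(d_data, 0, total * sizeof(float));

    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (off + chunk <= total) ? chunk : total - off;
        scale<<<(unsigned)((n + 255) / 256), 256>>>(d_data, off, n, 2.0f);
        // Synchronizing between launches keeps each chunk short and gives the
        // display driver a chance to respond to the OS.
        cudaDeviceSynchronize();
    }

    cudaFree(d_data);
    return 0;
}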

How to decide whether a GPU card is being used?

In CUDA, is there any runtime API that will tell whether a GPU device is being used or not? And whether the usage comes from a video display or a GPGPU application? And what the GPU occupancy is?
On Linux at least, you can use the program nvidia-smi to see the current memory use and whether any compute processes are running. Note, though, that the compute-process status is only supported on a select number of graphics cards, e.g. Tesla.
While it doesn't show exactly what is using it, MSI Afterburner on Windows will show you the core usage, memory usage, fan speed, and temperature of GPUs in a system (NVIDIA or otherwise).
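If you want the same information from code rather than from the nvidia-smi tool, NVML exposes it (a sketch of my own, assuming device index 0; compute-process reporting is subject to the same GPU support limitation mentioned above):

// Sketch: query memory use and running compute processes via NVML, roughly
// what nvidia-smi reports. Link with -lnvidia-ml.
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t device;
    if (nvmlDeviceGetHandleByIndex(0, &device) == NVML_SUCCESS) {
        nvmlMemory_t mem;
        if (nvmlDeviceGetMemoryInfo(device, &mem) == NVML_SUCCESS)
            printf("Memory used: %llu / %llu MB\n",
                   (unsigned long long)(mem.used >> 20),
                   (unsigned long long)(mem.total >> 20));

        unsigned int count = 32;          // size of the buffer below
        nvmlProcessInfo_t procs[32];
        nvmlReturn_t result = nvmlDeviceGetComputeRunningProcesses(device, &count, procs);
        if (result == NVML_SUCCESS)
            printf("%u compute process(es) running\n", count);
        else
            printf("Compute process query: %s\n", nvmlErrorString(result));
    }

    nvmlShutdown();
    return 0;
}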