With a fresh CUDA 5.0 Linux install on CentOS 5.5, I am not able to use cuda-gdb, so I am wondering whether you still need a dedicated GPU for cuda-gdb on Linux. I tried it with the VESA device driver for X11 but got the same result. Profiling works and running the app works, but trying to run cuda-gdb gives:
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x2aaaaaaab000
Any suggestions?
cuda-gdb still needs a GPU that is not used by the graphical environment (e.g. if you are running GNOME/KDE/etc. you need a system with several GPUs - not necessarily all of them have to be NVIDIA GPUs).
This particular message is not about that problem - you can ignore it. cuda-gdb will tell you if it fails because no GPU can be used for debugging.
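If the machine has only one GPU and it is driving X, one way to free it for debugging is to shut down the graphical environment and work from a text console. A minimal sketch on a CentOS 5.x-style system (the runlevel command may differ on other distributions, and my_app is just a placeholder for your executable):

# Switch to runlevel 3 so X releases the GPU
sudo /sbin/init 3
# Log in on the text console and debug as usual
cuda-gdb ./my_app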
Running a simple application in the NVIDIA Visual Profiler shows the error:
Encountered invalid option : --openacc-profiling
======== Use "nvprof --help" to get more information.
Any GPU application I try to profile gets the same error.
I tried unchecking the "Enable OpenACC profiling" option and got the same error.
Versions:
nvprof --version
nvprof: NVIDIA (R) Cuda command line profiler
Copyright (c) 2013 - 2014 NVIDIA Corporation
Release version 6.5.14 (21)
And
NVIDIA Visual Profiler
Version: 6.5
It appears (based on comments above) that the issue here was a mixed configuration - a CUDA 8 version of nvvp (the visual profiler) calling a CUDA 6.5 version of nvprof.
The visual profiler performs some of its work by calling nvprof to do low-level profiling. As a result, it is passing command-line switches to nvprof, and so nvprof is expected to match, version-wise, the version of nvvp that is being used. If that is not the case, problems like this can occur.
The solution is to have a consistent install. It should be possible to have both CUDA 6.5 and CUDA 8 installed on the same machine, but the PATH and LD_LIBRARY_PATH variables must be set so that, for example, the CUDA 8 version of nvvp finds and invokes the CUDA 8 version of nvprof. Generally, the instructions in the Linux install guide for setting these variables should be sufficient, but take care that no previous version of nvprof is picked up first via the PATH setting when using CUDA 8. It's not possible to cover all the ways in which this can happen, so some rudimentary Linux administration skills are needed to keep such a configuration internally consistent.
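As a rough sketch of what a consistent environment can look like, assuming CUDA 8 was installed into the default /usr/local/cuda-8.0 location (adjust the paths to your system):

# Put the CUDA 8 binaries and libraries first on the search paths
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
# Confirm that nvvp will now pick up the matching nvprof
which nvprof
nvprof --version    # should report release 8.0, not 6.5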
Otherwise, if those skills are lacking, the Linux install instructions may offer the best solution - remove all previous versions of CUDA when installing a new version. That is another possible approach which, if done correctly, should reliably prevent a problem such as this from occurring.
I have installed CUDA 7.0.28 on my laptop. I tried to run one of the sample files. I ran the deviceQuery project and got this message:
cudaGetDeviceCount returned 38
-> no CUDA-capable device is detected
Result = FAIL
Then I ran nvidia-smi.exe and got this message:
As you can see, it reports "Not Supported". What should I do?
nvidia-smi returning 'not supported' does not necessarily mean that your GPU does not have the ability to run CUDA code. It means that you don't have the ability to see the active CUDA process name using nvidia-smi.
CUDA-Z might be of help here. Take a look at what it does here: http://cuda-z.sourceforge.net/
Also, I have to say I had quite a few problems getting CUDA running on Windows. If you really need to run it on Windows, make sure you go through this first: http://docs.nvidia.com/cuda/cuda-getting-started-guide-for-microsoft-windows/#axzz3cNkYKZDP
Have you tried running it on Linux on the same machine? It was much easier to get it working there.
NVIDIA now provides a toolkit to install CUDA on Windows (Linux or Mac as well). It does a handy check of your system to see if it meets the necessary requirements for CUDA, which helps if you are unsure about your GPU:
https://developer.nvidia.com/cuda-80-ga2-download-archive
I've noticed that when my NVIDIA driver is updated during the system package update process (on Ubuntu), I'll get this message. It is resolved by a reboot, or likely an X restart, although I haven't tried that.
This was disconcerting the first time it happened since it was one of those "Hey! My code just ran fine. WTF happened?" moments.
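If a full reboot is inconvenient, restarting the display manager (which restarts X) may be enough; this is untested here and assumes an Ubuntu release that uses lightdm (substitute your actual display manager):

# Restart X by restarting the display manager - note this ends the current desktop session
sudo service lightdm restart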
I'm using MPI+CUDA mixed mode to program a GPU cluster for matrix multiplication. When I offload the multiplication operations to the GPUs via MPI and CUDA, it gives an error message at run time:
FATAL: Error inserting nvidia (/lib/modules/3.2.0-23-generic-pae/kernel/drivers/video/nvidia.ko): No such device
MPI is used to transfer the data blocks and then upon receiving the data, a generic C function is called that triggers a CUDA kernel.
The test setup has 3 machines, each with a single GPU.
I tested with a CUDA-only local version. I didn't get any error messages, but the results of the algorithms were wrong (even for the small, simple cases).
What's the reason for this error?
Please note that this only happens when I try to use MPI with CUDA. The CUDA-only version works well. Thanks in advance.
The errors are caused by Nouveau controlling the GPU instead of the NVIDIA driver. So, before installing the NVIDIA driver and CUDA toolkit, nouveau should be blacklisted.
sudo nano /etc/modprobe.d/blacklist.conf
Add the line blacklist nouveau at the end of the file.
If the NVIDIA driver is already installed, then re-install the NVIDIA driver.
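For reference, a sketch of the full sequence on an Ubuntu-style system (file names and commands may differ on other distributions):

# Blacklist nouveau and keep it from setting the display mode
echo "blacklist nouveau" | sudo tee -a /etc/modprobe.d/blacklist.conf
echo "options nouveau modeset=0" | sudo tee -a /etc/modprobe.d/blacklist.conf
# Rebuild the initramfs so nouveau is not loaded at boot, then reboot
sudo update-initramfs -u
sudo reboot
# After the reboot, confirm that nvidia (not nouveau) owns the GPU
lsmod | grep -E "nouveau|nvidia"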
I was planning on starting to use CUDA on a machine with Kubuntu 12.04 LTS and a Quadro card. I installed CUDA 5.5 using the .deb from here, and the installation seems to have gone fine. Then I built the CUDA samples, again everything went fine.
When I run the samples in sequence, however, some of them botch my display, and others simply crash my computer.
What causes the crash? How can I fix it?
I'll mention that my NVidia card is the only display adapter the machine has, but that shouldn't make CUDA crash and burn.
The problem was due to the X server using the FOSS nouveau drivers. These are known to conflict with NVidia's way of accessing the card. When I restarted X (actually, I restarted the machine), the samples did run and work properly.
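A quick way to check which driver X actually loaded, in case the nouveau/NVIDIA conflict comes back (a sketch; the log location can vary by distribution):

# Look for nouveau vs nvidia entries in the X server log
grep -iE "nouveau|nvidia" /var/log/Xorg.0.log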
Not all the samples are runnable if you have just installed CUDA on a clean Ubuntu system. Some of them require additional libraries, and some of them require particular compute capability (CC) versions.
You could read the CUDA samples documentation for the samples that crashed for more information:
http://docs.nvidia.com/cuda/cuda-samples/index.html
I have a two-GPU system, a GeForce 8400 GS and a GeForce GT 520. I am able to run my CUDA programs on both GPUs. But when I use cuda-gdb to debug them, I get an error saying that CUDA driver initialization failed. Also, when I run the program with cuda-gdb, cudaGetDeviceCount says I have only 1 GPU. I am able to run the programs on either of the GPUs if I am not using cuda-gdb. Can somebody help me with this?
I am running Ubuntu 11.04.
It looks like you have a display driver version older than the one required by the CUDA Toolkit. Make sure you installed the display driver downloaded from the same download page you got your toolkit from.
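A quick sanity check for such a mismatch (a sketch; it assumes the driver module is loaded and the toolkit's bin directory is on your PATH):

# Driver version as reported by the loaded kernel module
cat /proc/driver/nvidia/version
# Toolkit version, which the driver must be new enough to support
nvcc --version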
cuda-gdb hides, from the application being debugged, any GPUs used to run your desktop environment. Otherwise the desktop environment might hang while the application is suspended at a breakpoint. To see both GPUs in cuda-gdb you need to run without a desktop environment.
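A minimal sketch of how to do that on Ubuntu 11.04, assuming the GDM display manager is in use (substitute your actual display manager; my_app is a placeholder for your executable):

# Stop the desktop environment so neither GPU is reserved for it
sudo service gdm stop
# From a text console, both GPUs should now be visible to cuda-gdb
cuda-gdb ./my_app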