I've been writing some simple CUDA programs (I'm a student, so I need to practice), and the thing is I can compile them with nvcc from the terminal (using Kubuntu 12.04 LTS) and then execute them with optirun ./a.out (the hardware is a GeForce GT 525M in a Dell Inspiron), and everything works fine. The major problem is that I can't do anything from Nsight. When I try to start the debug version of the code, the message is "Launch failed! Binaries not found!". I think it's about running the command with optirun, but I'm not sure. Any similar experiences? Thanks in advance for helping, folks. :)
As this was the first post I found when searching for "nsight optirun", I just wanted to write down the steps I took to make it work for me.
Go to Run -> Debug Configurations -> Debugger
Find the textbox for CUDA GDB executable (in my case it was set to "${cuda_bin}/cuda-gdb")
Prepend "optirun --no-xorg", in my case it was then "optirun --no-xorg ${cuda_bin}/cuda-gdb"
The "--no-xorg" option might not be required or even counterproductive if you have an OpenGL window as it prevents any of that to appear. For my scientific code however it is required as it prevents me from running into kernel timeouts.
Happy bug hunting.
We tested Nsight on Optimus systems without optirun - see "Install the cuda toolkit" in the CUDA Toolkit Getting Started guide for notes on using the CUDA toolkit on an Optimus system. We have not tried optirun with Nsight EE.
If you still need to use optirun for debugging, you can try making a shell script that uses optirun to start cuda-gdb, and set that shell script as the cuda-gdb executable in the debug configuration properties.
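A minimal wrapper sketch along those lines (the cuda-gdb path and the --no-xorg flag are assumptions; adjust them to your installation):

#!/bin/bash
# Wrapper that starts cuda-gdb on the discrete GPU via optirun.
# /usr/local/cuda/bin/cuda-gdb and --no-xorg are assumed defaults.
exec optirun --no-xorg /usr/local/cuda/bin/cuda-gdb "$@"

Make the script executable (chmod +x) and point the "CUDA GDB executable" field of the debug configuration at it.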
The simplest thing to do is to run Eclipse itself with optirun; that way your app will also run properly.
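For example, from a terminal (assuming the Nsight launcher installed by the toolkit is called nsight; the launcher name may differ on your system):

optirun nsight &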
I've successfully installed the CUDA SDK and tested the compiler with a HelloWorld program.
Then I opened Nsight and tried the same code.
I got this message: "Launch Failed. Binary not found." Is this a problem with the compiler used by Nsight?
Thank you
This is a known bug. We are aware of it and will try to fix it in one of our future releases.
You need to manually build the project at least once before starting the debug session - this is needed for the debugger to be able to detect the executable and set up all the settings. Note that the debugger will automatically trigger the build on subsequent runs - once it already knows the executable and build configuration you are using.
I ran code compiled with the nvcc command and it gave correct output, but when I ran the same code from Nsight Eclipse it gave wrong output. Does anyone have any idea why this happens?
Finally I found there was a problem in one of the array allocations. While the command-line build doesn't expose the problem, the Nsight build does.
Nsight EE builds projects by generating makefiles based on the project settings and invoking the OS make utility to build the project. It uses the nvcc compiler found in the PATH, but it relies on some newer options introduced in the NVCC 5.0 compiler (which is part of the same toolkit distribution).
Please do a clean rebuild in Nsight Eclipse - it will print out the command lines used to build your application. Then you can compare those command lines with the one you use outside. Possible differences are listed below (a rough sketch of example commands follows the list):
Nsight specifies debug flags when building in debug mode and optimization flags when building in release mode.
By default, Nsight sets a new project to build for the GPU hardware detected on your system, whereas the NVCC default target is SM 1.0.
Make sure the compiler used by Nsight and the one used from the command line are one and the same. It is possible that you have different compilers (e.g. 4.x and 5.0) installed on your system that may generate slightly different code.
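For illustration only, the difference can look roughly like this (the flags and SM version are assumptions; compare against what the clean rebuild actually prints, and use which/--version to confirm which nvcc each environment picks up):

# A hand-typed command line often relies on the defaults (SM 1.0, no debug info)
nvcc -o app app.cu
# An Nsight debug build typically adds debug info and the detected architecture
nvcc -g -G -gencode arch=compute_20,code=sm_20 -o app app.cu
# Confirm both environments use the same compiler
which nvcc
nvcc --version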
In any case, it is likely your code has some bug that manifests itself under different compilation settings. I would recommend running cuda-memcheck on your program to ensure there are no hidden bugs.
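For example (cuda-memcheck ships with the toolkit; ./app is a placeholder for your own binary):

cuda-memcheck ./app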
I'm pretty new to CUDA and flying a bit by the seat of my pants here...
I'm trying to debug my CUDA program on a remote machine I don't have admin rights on. I compile my program with nvcc -g -G and then try to debug it with cuda-gdb. However, as soon as gdb hits a call to a kernel (doesn't even have to enter it, and it doesn't happen in host code), I get:
(cuda-gdb) run
Starting program: /path/to/my/binary/cuda_clustered_tree
[Thread debugging using libthread_db enabled]
[1]+ Stopped cuda-gdb cuda_clustered_tree
cuda-gdb then dumps me back to my terminal. If I try to run cuda-gdb again, I get
An instance of cuda-gdb (pid 4065) is already using device 0. If you believe
you are seeing this message in error, try deleting /tmp/cuda-dbg/cuda-gdb.lock.
The only way to recover is to kill -9 cuda-gdb and cuda_clustered_ (I assume the latter is part of my binary).
This machine has two GPUs, is running CUDA 4.1 (I believe - there were a lot of versions installed, but that's the one I set PATH and LD_LIBRARY_PATH to), and compiles and runs deviceQuery and bandwidthTest fine.
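For reference, pointing the environment at one specific toolkit version looks roughly like this (the /usr/local/cuda-4.1 prefix is an assumption; use whatever path the 4.1 install actually lives under):

export PATH=/usr/local/cuda-4.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-4.1/lib64:$LD_LIBRARY_PATH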
I can provide more info if need be. I've searched everywhere I could find online and found no help with this.
Figured it out! Turns out, cuda-gdb hates csh.
If you are running csh, it will cause cuda-gdb to exhibit the above anomalous behavior. Even running bash from within csh, then running cuda-gdb, I still saw the behavior. You need to start your shell as bash, and only bash.
On the machine, the default shell was csh, but I use bash. I wasn't allowed to change it directly, so I added 'exec /bin/bash --login' to my .login script.
So even though I was running bash, because it had been started by csh, cuda-gdb would exhibit the above anomalous behavior. Getting rid of the 'exec' command, so that I was running csh directly with nothing on top, still showed the behavior.
In the end, I had to get IT to change my login shell to bash directly (after much patient troubleshooting on their part). Now it works as intended.
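Where you do have permission to change your own login shell, the usual way is chsh (the target shell has to be listed in /etc/shells); this is a generic sketch rather than what the machine above allowed:

chsh -s /bin/bash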
With a very simple hello-world program, the breakpoint is not working.
I can't quote the exact message since it's not in English, but it's something like 'the symbols of this document are not loaded'.
There is no CUDA code, just a single printf line in the main function.
The working environment is Windows 7 64-bit, VC++ 2008 SP1, and the CUDA Toolkit 3.1 64-bit.
Please give me some explanation on this. :)
So this is just a host application (i.e. nothing to do with CUDA) doing printf that you can't debug? Have you selected "Debug" as the configuration instead of "Release"?
Are you trying to use a Visual Studio breakpoint to stop in your CUDA device code (.cu)? If that is the case, then I'm pretty sure that you can't do that. NVIDIA has released Parallel NSIGHT, which should allow you to do debugging of CUDA device code (.cu), though I don't have much experience with it myself.
Did you compile with -g -G options as noted in the documentation?
NVCC, the NVIDIA CUDA compiler driver, provides a mechanism for generating the debugging information necessary for CUDA-GDB to work properly. The -g -G option pair must be passed to NVCC when an application is compiled for ease of debugging with CUDA-GDB; for example,
nvcc -g -G foo.cu -o foo
here: https://docs.nvidia.com/cuda/cuda-gdb/index.html
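A minimal cuda-gdb session with that binary then looks roughly like this (the breakpoint target is illustrative; break main stops at the host entry point, and you can also break on a kernel name once the code is built with -g -G):

cuda-gdb ./foo
(cuda-gdb) break main
(cuda-gdb) run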