With a very simple hello-world program, breakpoints don't work.
I can't quote the exact message since it isn't in English,
but it's something like "the symbols for this document are not loaded".
There is no CUDA code at all, just a single printf in the main function.
The environment is Windows 7 64-bit, VC++ 2008 SP1, and the 64-bit CUDA Toolkit 3.1.
Could someone explain what's going on? :)
So this is just a host application (i.e. nothing to do with CUDA) doing printf that you can't debug? Have you selected "Debug" as the configuration instead of "Release"?
Are you trying to use a Visual Studio breakpoint to stop in your CUDA device code (.cu)? If that is the case, I'm pretty sure you can't do that with the plain Visual Studio debugger. NVIDIA has released Parallel Nsight, which should allow you to debug CUDA device code (.cu), though I don't have much experience with it myself.
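To make the distinction concrete, here is a minimal sketch (the file and kernel names are just examples) of a mixed host/device program:

// hello.cu - hypothetical example to illustrate the host/device split
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummyKernel()
{
    // A breakpoint on this line is a *device* breakpoint: the plain Visual
    // Studio debugger cannot stop here; a device debugger such as Parallel
    // Nsight (or cuda-gdb on Linux) is needed.
}

int main()
{
    // A breakpoint on the printf below is a *host* breakpoint: building the
    // Debug configuration (so host symbols are generated) should be enough.
    printf("hello world\n");

    dummyKernel<<<1, 1>>>();
    cudaThreadSynchronize();   // CUDA 3.x-era sync call; newer toolkits use cudaDeviceSynchronize()
    return 0;
}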
Did you compile with -g -G options as noted in the documentation?
NVCC, the NVIDIA CUDA compiler driver, provides a mechanism for generating the debugging information necessary for CUDA-GDB to work properly. The -g -G option pair must be passed to NVCC when an application is compiled in order to debug with CUDA-GDB; for example,
nvcc -g -G foo.cu -o foo
here: https://docs.nvidia.com/cuda/cuda-gdb/index.html
I am on Ubuntu 12.04 LTS and have installed CUDA 5.5. I understand that without any CUDA/GPGPU elements in the code, nvcc behaves more or less like a plain C/C++ compiler such as gcc. However, are there any exceptions to this rule? If not, can I use nvcc in place of gcc for non-CUDA C/C++ code?
No, nvcc doesn't behave like a C/C++ compiler for host code. What it does is the following:
separate device from host code into two separate files
compile device code (with nvcc, cudafe, ptxas)
invoke gcc for host code
If no device code exists, nothing is done in steps 1) and 2). So nvcc is not actually a compiler; it is a compiler driver that invokes the right compiler for each part in the right order. To answer your question: if you use nvcc to compile host-only code, it is still gcc that does the compiling.
One exception: it doesn't accept the options to suppress warnings (-W*).
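As a small illustration of the "compiler driver" point (the file name and message are just examples), a source file with no CUDA constructs at all is simply forwarded to the host compiler:

// hello.cpp - plain host code, no CUDA constructs
#include <cstdio>

int main()
{
    printf("hello from the host compiler\n");
    return 0;
}

Compiling it with nvcc hello.cpp -o hello should give the same result as g++ hello.cpp -o hello; adding -v to the nvcc command line lists the sub-commands nvcc runs, where you can see the host compiler being invoked.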
I've been writing a simple CUDA program (I'm a student, so I need the practice). I can compile it with nvcc from the terminal (on Kubuntu 12.04 LTS) and then execute it with optirun ./a.out (the hardware is a GeForce GT 525M in a Dell Inspiron), and everything works fine. The major problem is that I can't do anything from Nsight. When I try to start a debug build, the message is "Launch failed! Binaries not found!". I think it has to do with running the command through optirun, but I'm not sure. Any similar experiences? Thanks in advance for your help, folks. :)
As this was the first post I found when searching for "nsight optirun", I just wanted to write down the steps I took to make it work for me.
Go to Run -> Debug Configurations -> Debugger
Find the textbox for CUDA GDB executable (in my case it was set to "${cuda_bin}/cuda-gdb")
Prepend "optirun --no-xorg", in my case it was then "optirun --no-xorg ${cuda_bin}/cuda-gdb"
The "--no-xorg" option might not be required or even counterproductive if you have an OpenGL window as it prevents any of that to appear. For my scientific code however it is required as it prevents me from running into kernel timeouts.
Happy bug hunting.
We tested Nsight on Optimus systems without optirun - see "Install the cuda toolkit" in the CUDA Toolkit Getting Started guide for using the CUDA toolkit on an Optimus system. We have not tried optirun with Nsight EE.
If you still need to use optirun for debugging, you can try making a shell script that uses optirun to start cuda-gdb, and set that shell script as the cuda-gdb executable in the debug configuration properties.
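For example, such a wrapper could be as small as the following sketch (untested; the cuda-gdb path is a guess and depends on where your toolkit is installed):

#!/bin/sh
# Launch cuda-gdb through optirun, forwarding all arguments Nsight passes in.
exec optirun --no-xorg /usr/local/cuda/bin/cuda-gdb "$@"

Make it executable with chmod +x and point the CUDA GDB executable field of the debug configuration at it.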
The simplest thing to do is to run Eclipse itself with optirun; that will also run your app properly.
I ran my code using the nvcc command and it gave correct output, but when I ran the same code from Nsight Eclipse it gave wrong output. Does anyone have any idea why this happens?
Finally I found that there was a problem in one of the array allocations. While the command-line build didn't expose the problem, the Nsight build did.
Nsight EE builds projects by generating makefiles based on the project settings and invoking the OS make utility. It uses the nvcc compiler found in the PATH, but it relies on some newer options introduced in the NVCC 5.0 compiler (which is part of the same toolkit distribution).
Please do a clean rebuild in Nsight Eclipse - it will print out the command lines used to build your application. Then you can compare those command lines with the one you use outside. Possible differences are:
Nsight specifies debug flags and optimization flags when building in debug and release modes.
By default, Nsight sets the new project to build for the hardware detected on your system. NVCC default is SM 1.0.
Make sure the compiler used by Nsight and the one used from the command line are one and the same. It is possible that you have different compiler versions (e.g. 4.x and 5.0) installed on your system that generate slightly different code.
In any case, it is likely your code has some bug that manifests itself under different compilation settings. I would recommend running CUDA memcheck on your program to ensure there are no hidden bugs.
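For reference, the memory checker can be run directly from the command line against your binary, for example (assuming the binary is ./a.out):

cuda-memcheck ./a.out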
I am running a CUDA project, but somehow I am not able to set the -arch=sm_20 flag in the
SConscript file, which was written by someone else. I need to use printf in a kernel for debugging, and I have little experience with SConscript/Python.
The specifics depend on the way you have SCons set up to work with CUDA. I use these scripts: http://github.com/BryanCatanzaro/cuda-scons
With this setup, all you need to do is invoke SCons with your preferred architecture:
scons arch=sm_20
And nvcc will be invoked with the -arch=sm_20 flag.
Details of your setup may be different, but if you look through your SCons script, you should see how to change this flag.
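If your SConscript is built around a CUDA tool similar to the one above, the flag usually ends up in a construction variable. A rough, hypothetical sketch (the variable name NVCCFLAGS and the source/target names are assumptions; your script may use different ones):

# SConscript fragment (Python) - adapt the names to your setup
Import('env')                                  # the build environment set up in the SConstruct
env.Append(NVCCFLAGS=['-arch=sm_20'])          # add the architecture flag to every nvcc invocation
env.Program('app', ['kernel.cu', 'main.cpp'])  # example target and sources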