CUDA: Why does compute_20 code fail on compute_35 device?

For a computer with Titan GPU (compute_35,sm_35), I compiled some code using this line in CMakeLists.txt:
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_35,code=sm_35)
The code compiles and also runs fine.
I wanted to check what compilation problems this code would cause for a friend who uses a GTS 450 (compute_20,sm_21). So, I changed the above line to:
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_20,code=sm_21)
The code compiles without any errors on my computer with the Titan. But when I run it (again on my Titan computer), it fails after a thrust::copy call with the following error:
$ ./foobar
terminate called after throwing an instance of 'thrust::system::system_error'
what(): invalid device function
"foobar" terminated by signal SIGABRT (Abort)
Google says the above error is caused by a GPU architecture mismatch.
The strangest part is that with the above line (arch=compute_20,code=sm_21), the code compiles and runs without error on my friend's computer with the GTS 450! Except for the GPU, her Ubuntu 12.04, gcc and CUDA SDK 5.5 versions are the same as mine.
Is this the real cause of the error? Why can't the Titan run compute_20 code? Isn't a CUDA GPU supposed to be backward compatible with PTX or SASS code? Even if it isn't, why can't the driver JIT-compile the compute_20 PTX to sm_35 SASS?

If you specify:
-gencode arch=compute_20,code=compute_20
your code should run (via JIT) on either GPU.
With your current flags (arch=compute_20,code=sm_21), the binary contains only sm_21 SASS and no PTX. SASS is not forward compatible across major GPU architectures (a cc3.5 device cannot execute cc2.x machine code), and the driver can only JIT-compile PTX, so on the Titan there is nothing it can either run directly or JIT; "invalid device function" is the result.
According to the nvcc manual, JIT is enabled exactly when you specify a virtual architecture for the code switch. You can make multiple specifications in a single command:
-arch=compute_20 -code=compute_20,sm_21,sm_35
(note this is in lieu of specifying -gencode ...)
which would allow JIT from compute_20 PTX, and non-JIT execution directly on cc2.1 or cc3.5 devices.
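For example, to build a single fat binary that runs natively on both the Titan and the GTS 450 and still carries PTX for JIT on future devices, the CMake line from the question could be extended like this (a sketch following the same pattern):
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-gencode arch=compute_20,code=sm_21;-gencode arch=compute_35,code=sm_35;-gencode arch=compute_35,code=compute_35)
This embeds sm_21 SASS for the GTS 450, sm_35 SASS for the Titan, and compute_35 PTX that newer devices can JIT-compile.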

Related

Undefined reference to 'cublasCreate_v2' in '/tmp/tmpxft_0000120b_0000000-10_my_program'

I have tried to compile some code with the CUDA 9.0 toolkit on an NVIDIA Tesla P100 graphics card (Ubuntu 16.04); the code uses the CUBLAS library. I used the following command to compile "my_program.cu":
nvcc -std=c++11 -L/usr/local/cuda-9.0/lib64 my_program.cu -o my_program.o -lcublas
But I got the following error:
nvlink error: Undefined reference to 'cublasCreate_v2' in '/tmp/tmpxft_0000120b_0000000-10_my_program'
As I have already linked the library path in the compilation command, why do I still get this error? Please help me solve it.
It seems fairly evident that you are trying to use the CUBLAS library in device code. This is different from ordinary host usage and requires special compilation/linking steps. You need to:
compile for the correct device architecture (must be cc3.5 or higher)
use relocatable device code linking
link in the cublas device library (in addition to the cublas host library)
link in the CUDA device runtime library
use a CUDA toolkit prior to CUDA 10.0
The following additions to your compile command line should get you there:
nvcc -std=c++11 my_program.cu -o my_program.o -lcublas -arch=sm_60 -rdc=true -lcublas_device -lcudadevrt
The above assumes you are actually using a proper install of CUDA 9.0. The CUBLAS device library was deprecated and has since been removed from newer CUDA toolkits (see here).
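For context, "using CUBLAS in device code" means calling the CUBLAS API from inside a kernel, roughly like the sketch below (a hypothetical example, not the OP's code). The device library mirrored the host API and launched child kernels via dynamic parallelism, which is why cc3.5+ and relocatable device code are required:

#include <cublas_v2.h>

// Hypothetical sketch of device-side CUBLAS usage (removed in CUDA 10.0).
// BLAS calls here launch child kernels, hence the -rdc=true,
// -lcublas_device and -lcudadevrt requirements.
__global__ void device_dot(int n, const float *x, const float *y, float *result)
{
    cublasHandle_t handle;
    if (cublasCreate(&handle) != CUBLAS_STATUS_SUCCESS)
        return;
    cublasSdot(handle, n, x, 1, y, 1, result);
    cublasDestroy(handle);
}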

CUDA: safeCall() Runtime API error invalid device symbol [duplicate]

I use the CMake GUI tool to configure my CUDA project in VS2013.
CMakeLists.txt is as below:
project(CUDA_PART)
# required cmake version
cmake_minimum_required(VERSION 3.0)
include_directories(${CUDA_PART_SOURCE_DIR}/common)
# packages
find_package(CUDA REQUIRED)
# nvcc flags
set(CUDA_NVCC_FLAGS -gencode arch=compute_20,code=sm_20;-G;-g)
set(CUDA_VERBOSE_BUILD ON)
#FILE(GLOB SOURCES "*.cu" "*.cpp" "*.c" "*.h")
CUDA_ADD_EXECUTABLE(CUDA_PART hist_gpu_shmem_atomics.cu)
The .cu file is from the CUDA by Example source code: hist_gpu_shmem_atomics.cu
There are two problems:
After the line histo_kernel<<<blocks * 2, 256>>>(dev_buffer, SIZE, dev_histo); an "invalid device function" error occurs.
When I use the CUDA debugging tool to debug, it cannot trigger breakpoints in the device code.
But when I create a project with the same code using the CUDA project template in Visual Studio 2013, it works correctly!
So, is there something wrong in the CMakeLists.txt ?
OS: Win7 64-bit; GPU: GTX960; CUDA: 7.5; VS: 2013 (and also 2010)
When I set the "Code Generation" option in VS2013 as follows:
the CUDA_NVCC_FLAGS turns out to be -gencode=arch=compute_20,code=\"sm_20,compute_20\"
This is equivalent to:
-gencode=arch=compute_20,code=sm_20 \
-gencode=arch=compute_20,code=compute_20
So, I guess it will generate two versions of machine code: the first one (SASS), specified with both a virtual and a real architecture, and the second one (PTX), with only a virtual architecture. Since my GTX960 is a cc5.2 device, it chooses the second one (PTX) and converts it to suitable SASS.
This is a problem:
set(CUDA_NVCC_FLAGS -gencode arch=compute_20,code=sm_20;-G;-g)
Those flags will cause nvcc to generate SASS code (only) for a cc 2.0 device (only). Such cc2.0 SASS code will not run on your cc5.2 device (GTX960). "Invalid device function" is exactly the error you would get when trying to launch a kernel in such a scenario. Since the kernel will never launch, trying to hit breakpoints in device code won't work.
I'm not a CMake expert, so there might be other, more sensible approaches, but one possible fix is:
set(CUDA_NVCC_FLAGS -gencode arch=compute_52,code=sm_52;-G;-g)
which should generate code for your cc5.2 device. There are undoubtedly other possible settings here; you may want to read this or the nvcc manual for more background on compile options that target specific devices.
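If you also want the binary to remain usable on GPUs newer than cc5.2, a variant that additionally embeds PTX might look like this (a sketch, same pattern as above):
set(CUDA_NVCC_FLAGS -gencode arch=compute_52,code=sm_52;-gencode arch=compute_52,code=compute_52;-G;-g)
The second -gencode clause stores compute_52 PTX that the driver can JIT-compile for later architectures.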
Also note that -G generates device debug code, which is fine if that is what you want, but it will generally run slower than code compiled without that switch. If you want to debug device code, however, that switch is necessary.

How to force cubin file generation for a higher compute version

In the samples provided with CUDA 6.0, I'm running the following compile command with error output:
foo#foo:/usr/local/cuda-6.0/samples/0_Simple/cdpSimpleQuicksort$ nvcc --cubin -I../../common/inc cdpSimpleQuicksort.cu
nvcc warning : The 'compute_10' and 'sm_10' architectures are deprecated, and may be removed in a future release.
cdpSimpleQuicksort.cu(105): error: calling a __global__ function("cdp_simple_quicksort") from a __global__ function("cdp_simple_quicksort") is only allowed on the compute_35 architecture or above
cdpSimpleQuicksort.cu(114): error: calling a __global__ function("cdp_simple_quicksort") from a __global__ function("cdp_simple_quicksort") is only allowed on the compute_35 architecture or above
2 errors detected in the compilation of "/tmp/tmpxft_0000241a_00000000-6_cdpSimpleQuicksort.cpp1.ii".
I then altered the command to this, with a new failure:
foo#foo:/usr/local/cuda-6.0/samples/0_Simple/cdpSimpleQuicksort$ nvcc --cubin -I../../common/inc -gencode arch=compute_35,code=sm_35 cdpSimpleQuicksort.cu
cdpSimpleQuicksort.cu(105): error: kernel launch from __device__ or __global__ functions requires separate compilation mode
cdpSimpleQuicksort.cu(114): error: kernel launch from __device__ or __global__ functions requires separate compilation mode
2 errors detected in the compilation of "/tmp/tmpxft_000024f3_00000000-6_cdpSimpleQuicksort.cpp1.ii".
Does this have anything to do with the fact that the machine I'm on is only compute capability 2.1, and the build tools are blocking me? What's the resolution? I'm not finding anything in the documentation that clearly addresses this error.
I looked at this question, and the answer there, a link to documentation, is simply not helping. I need to know how to modify the compile command.
Look at the makefile that comes with that cdpSimpleQuicksort project. It shows some additional switches that are needed to compile it, due to CUDA dynamic parallelism (which is essentially what the second set of errors is about). Go back and study that makefile, and see if you can figure out how to combine some of the compile commands there with --cubin.
The Reader's Digest version is that this should compile without error:
nvcc --cubin -rdc=true -I../../common/inc -arch=sm_35 cdpSimpleQuicksort.cu
Having said all that, you should be able to compile for whatever kind of target you want, but you won't be able to run a CDP code on a cc2.1 device.
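For reference, the pattern that triggers those requirements is a kernel launching another kernel, roughly like this sketch (hypothetical kernel names, not the sample's code):

__global__ void child_kernel(int *data)
{
    data[threadIdx.x] += 1;
}

__global__ void parent_kernel(int *data)
{
    // A kernel launch from device code: only legal with -rdc=true
    // and compute capability 3.5 or higher.
    if (threadIdx.x == 0)
        child_kernel<<<1, 32>>>(data);
}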
See the CUDA dynamic parallelism documentation for more details.

Cuda-gdb not stopping at breakpoints inside kernel

cuda-gdb was obeying all the breakpoints I set, before I added the '-arch sm_20' flag while compiling. I had to add it to avoid the error 'atomicAdd is undefined' (as pointed out here). Here is my current command to compile the code:
nvcc -g -G --maxrregcount=32 Main.cu -o SW_exe (..including header files...) -arch sm_20
and when I set a breakpoint inside the kernel, cuda-gdb stops once at the last line of the kernel, and then the program continues.
(cuda-gdb) b SW_kernel_1.cu:49
Breakpoint 1 at 0x4114a0: file ./SW_kernel_1.cu, line 49.
...
[Launch of CUDA Kernel 5 (diagonalComputation<<<(1024,1,1),(128,1,1)>>>) on Device 0]
Breakpoint 1, diagonalComputation (__cuda_0=15386, __cuda_1=128, __cuda_2=0xf00400000, __cuda_3=0xf00200000,
__cuda_4=100, __cuda_5=0xf03fa0000, __cuda_6=0xf04004000, __cuda_7=0xf040a0000, __cuda_8=0xf00200200,
__cuda_9=15258, __cuda_10=5, __cuda_11=-3, __cuda_12=8, __cuda_13=1) at ./SW_kernel_1.cu:183
183 }
(cuda-gdb) c
Continuing.
But as I said, if I remove the atomicAdd() call and the '-arch sm_20' flag (which makes my code incorrect), cuda-gdb now stops at the breakpoint I specify. Please tell me the reason for this behaviour.
I am using CUDA 5.5 on Tesla M2070 (Compute Capability = 2.0).
Thanks!
From the CUDA DEBUGGER User Manual, Section 3.3.1:
NVCC, the NVIDIA CUDA compiler driver, provides a mechanism for generating the
debugging information necessary for CUDA-GDB to work properly. The -g -G option
pair must be passed to NVCC when an application is compiled in order to debug with
CUDA-GDB; for example,
nvcc -g -G foo.cu -o foo
Using this line to compile the CUDA application foo.cu:
forces -O0 compilation, with the exception of very limited dead-code eliminations and register-spilling optimizations
makes the compiler include debug information in the executable
This means that, in principle, breakpoints may fail to be hit in kernel functions even when the code is compiled in debug mode, since the CUDA compiler can still perform some code optimizations, and so the disassembled code may not correspond to the CUDA source instructions.
When breakpoints are not hit, a workaround is to put a printf statement immediately after the assignment of the variable one wants to check, as suggested by Robert Crovella at
CUDA debugging with VS - can't examine restrict pointers (Operation is not valid)
The OP here has chosen a different workaround, i.e., compiling for a different architecture. Indeed, the optimizations the compiler performs can change from architecture to architecture.
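To illustrate the printf workaround, a minimal sketch (hypothetical names; device-side printf requires cc2.0 or higher, which the Tesla M2070 satisfies):

#include <cstdio>

__global__ void diagonal_kernel(const int *scores, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        int val = scores[idx];
        // Print right after the computation instead of relying on a breakpoint.
        printf("thread %d: val = %d\n", idx, val);
    }
}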

Nsight skips (ignores) over break points in VS10 Cuda works fine, nsight consistently skips over several breakpoints

I'm using Nsight 2.2, Toolkit 4.2, the latest NVIDIA driver, and a couple of GPUs in my computer. Build customization is 4.2. I have set "generate GPU output" in the CUDA project properties, and the Nsight monitor is on (everything looks great).
I set several breakpoints in my __global__ kernel function. Nsight stops at the declaration of the function, but skips over several breakpoints; it's as if Nsight decides on its own whether to hit or skip each breakpoint. The funny thing is that Nsight stops at for loops, but doesn't stop on simple assignment operations.
One more problem is that I can't set focus on, or add, variables to the watch list. In this case (see attached screenshot) I can't resolve the value of the variables "posss" or "testDetctoinRate1", which are registers here. On the other hand, shared memory or block memory variables are inserted automatically into the locals list.
Here is a screen shot of the kernel, before debugging
Here is a screen shot during debugging
I evoke my kernel function with following call:
checkCUDA<<<1, 32>>>(sumMat->rows, sumMat->cols, (UINT *)pGPUsumMat);
cudaError = cudaGetLastError();
if(cudaError != cudaSuccess)
{
printf("CUDA error: %s\n", cudaGetErrorString(cudaError));
exit(-1);
}
The kernel call works without an error.
Is there any option to force Nsight to stop at all breakpoints? How can I add a thread's registers to my watch list?
Update
Initially, my debug command line is as follows:
# Runtime API (NVCC Compilation Type is hybrid object or .c file)
set CUDAFE_FLAGS=--sdk_dir "c:\Program Files\Microsoft SDKs\Windows\v7.0A\"
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.2\bin\nvcc.exe" --use-local-env --cl-version 2010 -ccbin "C:\Program Files\Microsoft Visual Studio 10.0\VC\bin" -I"..\..\..\opencv\modules\gpu\src\opencv2\gpu\device" -I"..\..\..\opencv\modules\gpu\include\opencv2\gpu" -I"..\..\..\build\include\\" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -Xcompiler "/EHsc /nologo /Od /Zi /MDd " -o "Debug\%(Filename)%(Extension).obj" "%(FullPath)"
I changed, on the property page, CUDA --> Host --> Generate Host Debug Information to No.
Now my command line doesn't contain the -g and -O flags; my command line is as follows:
# Runtime API (NVCC Compilation Type is hybrid object or .c file)
set CUDAFE_FLAGS=--sdk_dir "c:\Program Files\Microsoft SDKs\Windows\v7.0A\"
"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.2\bin\nvcc.exe" --use-local-env --cl-version 2010 -ccbin "C:\Program Files\Microsoft Visual Studio 10.0\VC\bin" -I"..\..\..\opencv\modules\gpu\src\opencv2\gpu\device" -I"..\..\..\opencv\modules\gpu\include\opencv2\gpu" -I"..\..\..\build\include\\" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -Xcompiler "/EHsc /nologo /Od /Zi /MDd " -o "Debug\%(Filename)%(Extension).obj" "%(FullPath)"
Although I do debug with -o, does it matter? It doesn't make any difference.
Right click the .cu file in the Solution Explorer, then go to CUDA C/C++ | Device and set Generate GPU Debug Information to Yes (-G0).
Check whether "Enable CUDA Memory Checker" under Nsight is turned off. Turning it off may allow Nsight to stop at breakpoints in your CUDA kernel code in Debug mode of VS C++ 2010. At least, it works for me.
In the debug build, are you passing both the -O and the -g options to nvcc? If so, try removing the -O.
Background: This sounds like the kind of problem one gets when trying to debug code that has been optimized by the compiler. During optimization, the compiler changes the code in such a way that some lines of source no longer have any machine code instructions associated with them, making it impossible for the debugger to set breakpoints on those lines.
I have a similar issue: Nsight does not stop at any of the breakpoints, but completes execution.
If I use -G0 as the debug info option, it gives an error.
I am using Nsight 2.2.0.1225 with the CUDA 4.2 and CUDA 5 toolkits, with the 301.42 graphics driver.