I'm currently trying to compile Darknet with the latest CUDA toolkit, which is version 11.1. My GPU is a GeForce 940M, which has compute capability 5.0. However, while rebuilding Darknet with this toolkit, I got:
nvcc fatal : Unsupported GPU architecture 'compute_30'
compute_30 corresponds to compute capability 3.0, so how can the build fail when my GPU supports compute capability 5.0?
Is it possible that the build detected my Intel integrated graphics instead of my NVIDIA GPU? If that's the case, can I change which device it targets?
Support for compute_30 has been removed in CUDA versions after 10.2. So if you are using nvcc, make sure the Darknet build system passes a flag that targets the correct architecture:
-gencode=arch=compute_50,code=sm_50
You may also need the following flag to avoid warnings about deprecated architectures:
-Wno-deprecated-gpu-targets
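To rule out the worry about the Intel card: the CUDA runtime only enumerates NVIDIA GPUs, so a quick query shows exactly which device nvcc-built code will use and which architecture flag it needs. The following is a minimal, generic sketch (not part of the Darknet build); a GeForce 940M should report compute capability 5.0, matching the compute_50/sm_50 flag above.

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable device detected.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // e.g. "Device 0: GeForce 940M, compute capability 5.0"
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}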
I added the following:
# Read the darknet Makefile (indices are 0-based, so [14] is Makefile line 15).
with open('Makefile', 'r') as makefiletemp:
    list_of_lines = makefiletemp.readlines()

# Duplicate Makefile line 15 onto line 16 and replace line 17 with the ARCH setting.
list_of_lines[15] = list_of_lines[14]
list_of_lines[16] = "ARCH= -gencode arch=compute_35,code=sm_35 \\\n"

# Write the patched Makefile back out.
with open('Makefile', 'w') as makefiletemp:
    makefiletemp.writelines(list_of_lines)
right before the
#Compile Darknet
!make
command. That seemed to work!
I have tried to compile some code with the CUDA 9.0 toolkit on an NVIDIA Tesla P100 (Ubuntu 16.04); the code uses the CUBLAS library. I used the following command to compile my_program.cu:
nvcc -std=c++11 -L/usr/local/cuda-9.0/lib64 my_program.cu -o my_program.o -lcublas
But I got the following error:
nvlink error: Undefined reference to 'cublasCreate_v2' in '/tmp/tmpxft_0000120b_0000000-10_my_program'
As I have already passed the library path in the compilation command, why do I still get this error? Please help me solve it.
It seems fairly evident that you are trying to use the CUBLAS library in device code. This is different from ordinary host usage and requires special compilation/linking steps. You need to:
compile for the correct device architecture (must be cc3.5 or higher)
use relocatable device code linking
link in the cublas device library (in addition to the cublas host library)
link in the CUDA device runtime library
use a CUDA toolkit prior to CUDA 10.0
The following additions to your compile command line should get you there:
nvcc -std=c++11 my_program.cu -o my_program.o -lcublas -arch=sm_60 -rdc=true -lcublas_device -lcudadevrt
The above assumes you are actually using a proper install of CUDA 9.0. The CUBLAS device library was deprecated and is now removed from newer CUDA toolkits (see here).
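For reference, the kind of device code that triggers this link requirement looks roughly like the sketch below. This is a minimal, hypothetical example assuming the (now removed) CUBLAS device API, not the asker's actual my_program.cu:

#include <cublas_v2.h>

// Calling CUBLAS from inside a kernel uses the CUBLAS device library,
// which is why -rdc=true, -lcublas_device and -lcudadevrt are needed.
__global__ void gemm_on_device(const float *A, const float *B, float *C, int n)
{
    cublasHandle_t handle;
    if (cublasCreate(&handle) != CUBLAS_STATUS_SUCCESS)   // resolves to cublasCreate_v2
        return;

    const float alpha = 1.0f, beta = 0.0f;
    // n x n GEMM launched from device code via dynamic parallelism.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, A, n, B, n, &beta, C, n);

    cublasDestroy(handle);
}

If the CUBLAS calls were made only from host code, linking with -lcublas alone would be sufficient and none of the extra steps above would apply.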
A CUDA source file can be compiled into PTX using the LLVM compiler with the command clang -Xclang -I$LIBCLC/include/generic -I$LIBCLC/include/ptx -Dcl_clang_storage_class_specifiers -O3 cudaFile.cu -S -o ptxOutputFile.ptx --cuda-gpu-arch=sm_XX
where sm_XX can be replaced with, for example, sm_20 or sm_30. For compute capability 1.0, replacing sm_XX with sm_10 gives the error fatal error: cannot open file '/tmp/shared-25f2f5.s': No such file or directory
1 error generated.
So it seems LLVM has a minimum compute capability of 2.0. Is this assumption correct?
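(For context, the cudaFile.cu in a test like this can be minimal; the following is a hypothetical example, not the file from the original experiment.)

// Any trivial kernel exercises the clang/LLVM PTX back end in the same way.
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}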
It should be correct. As of CUDA 7.0, both toolkit and driver support for sm_1x has been dropped. If sm_20 works, it has to be the minimum.
CUDA Toolkit and CUDA Driver Support for Tesla Architecture
The CUDA Toolkit and CUDA Driver no longer supports the sm_10, sm_11, sm_12, and sm_13 architectures. As a consequence, CU_TARGET_COMPUTE_1x enum values have been removed from the CUDA headers.
http://developer.download.nvidia.com/compute/cuda/7_0/Prod/doc/CUDA_Toolkit_Release_Notes.pdf
I'm having trouble installing CUDA 7.0 (to use with TensorFlow) on a workstation with the Nvidia Quadro FX 3800. I'm wondering if this is because the GPU is no longer supported.
Installation of the driver (340.96) seems to work fine:
$ sh ./NVIDIA-Linux-x86_64-340.96.run
Installation of the NVIDIA Accelerated Graphics Driver for Linux-x86_64
(version: 340.96) is now complete. Please update your XF86Config or
xorg.conf file as appropriate; see the file
/usr/share/doc/NVIDIA_GLX-1.0/README.txt for details.
However, I think I may be having trouble with the following:
$ ./cuda_7.0.28_linux.run --kernel-source-path=/usr/src/linux-headers-3.13.0-76-generic
The driver installation is unable to locate the kernel source. Please make sure
that the kernel source packages are installed and set up correctly. If you know
that the kernel source packages are installed and set up correctly, you may pass
the location of the kernel source with the '--kernel-source-path' flag.
...
Logfile is /tmp/cuda_install_1357.log
$ vi /tmp/cuda_install_1357.log
WARNING: The NVIDIA Quadro FX 3800 GPU installed in this system is
supported through the NVIDIA 340.xx legacy Linux graphics drivers.
Please visit http://www.nvidia.com/object/unix.html for more
information. The 346.46 NVIDIA Linux graphics driver will ignore
this GPU.
WARNING: You do not appear to have an NVIDIA GPU supported by the 346.46
NVIDIA Linux graphics driver installed in this system. For
further details, please see the appendix SUPPORTED NVIDIA GRAPHICS
CHIPS in the README available on the Linux driver download page at
www.nvidia.com.
...
ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most
frequently when this kernel module was built against the wrong or
improperly configured kernel sources, with a version of gcc that
differs from the one used to build the target kernel, or if a driver
such as rivafb, nvidiafb, or nouveau is present and prevents the
NVIDIA kernel module from obtaining ownership of the NVIDIA graphics
device(s), or no NVIDIA GPU installed in this system is supported by
this NVIDIA Linux graphics driver release.
...
Please see the log entries 'Kernel module load error' and 'Kernel
messages' at the end of the file '/var/log/nvidia-installer.log' for
more information.
Is the installation failure due to CUDA dropping support for this graphics card?
I followed the link trail: https://developer.nvidia.com/cuda-gpus > https://developer.nvidia.com/cuda-legacy-gpus > http://www.nvidia.com/object/product_quadro_fx_3800_us.html and I would have thought the Quadro FX 3800 supported CUDA (at least at the beginning).
Yes, the Quadro FX 3800 GPU is no longer supported by CUDA 7.0 and beyond.
The last CUDA version that supported that GPU was CUDA 6.5.
This answer and this answer may be of interest. Your QFX 3800 is a compute capability 1.3 device.
If you review the release notes that come with CUDA 7, you will find a notice of the elimination of support for these earlier GPUs. Likewise, the newer CUDA driver versions also don't support those GPUs.
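As a quick programmatic check of whether the installed driver is new enough for a given toolkit, you can compare the driver and runtime versions. This is a generic sketch (build it with the toolkit you are testing), not something taken from the original installation logs:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVersion);  // CUDA version of the toolkit this was built with

    // Versions are encoded as 1000 * major + 10 * minor, e.g. 7000 for CUDA 7.0.
    printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10,
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    if (driverVersion < runtimeVersion)
        printf("The installed driver is too old for this toolkit.\n");
    return 0;
}

With the legacy 340.xx driver that supports the Quadro FX 3800, the reported driver version will be lower than what CUDA 7.0 requires.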
I tried to debug my CUDA application with cuda-gdb but got some weird error.
I built my application with -g -G -O0. I can run the program without cuda-gdb, but it doesn't produce correct results, so I decided to use cuda-gdb. However, I got the following error message while running the program under cuda-gdb:
Error: Failed to read the valid warps mask (dev=1, sm=0, error=16).
What does it mean? Why sm=0, and what is the meaning of error=16?
Update 1: I tried to use cuda-gdb on the CUDA samples, but it fails with the same problem. I just installed the CUDA 6.0 Toolkit following NVIDIA's instructions. Is it a problem with my system?
Update 2:
OS - CentOS 6.5
GPU - 1x Quadro 400, 2x Tesla C2070
I'm using only one GPU for my program, but I get the same error message from any GPU I select.
CUDA version - 6.0
GPU Driver
NVRM version: NVIDIA UNIX x86_64 Kernel Module 331.62 Wed Mar 19 18:20:03 PDT 2014
GCC version: gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
Update 3:
I tried to get more information in cuda-gdb, but I got the following results:
(cuda-gdb) info cuda devices
Error: Failed to read the valid warps mask (dev=1, sm=0, error=16).
(cuda-gdb) info cuda sms
Focus not set on any active CUDA kernel.
(cuda-gdb) info cuda lanes
Focus not set on any active CUDA kernel.
(cuda-gdb) info cuda kernels
No CUDA kernels.
(cuda-gdb) info cuda contexts
No CUDA contexts.
Actually, this issue is specific to some old NVIDIA GPUs (such as the Quadro 400, GeForce GT 220, or GeForce GT 330M).
On Liam Kim's setup, cuda-gdb should work fine if you set the environment variable CUDA_VISIBLE_DEVICES so that cuda-gdb runs specifically on the Tesla C2070 GPUs, i.e.:
$ export CUDA_VISIBLE_DEVICES=0 (or 2)
The exact CUDA device index can be found by running the CUDA sample "deviceQuery".
This issue has now been fixed; the fix will be available to CUDA developers in the next CUDA release (expected around early July 2014).
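To confirm the remapping took effect before attaching cuda-gdb, a tiny check along these lines can be run after exporting the variable; this is a generic sketch, not part of the official fix:

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // Only the GPUs listed in CUDA_VISIBLE_DEVICES are enumerated here.
    int count = 0;
    cudaDeviceProp prop;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0 ||
        cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No visible CUDA device.\n");
        return 1;
    }
    // With CUDA_VISIBLE_DEVICES set to a C2070 index, this should name the Tesla C2070.
    printf("Visible devices: %d, device 0 is %s\n", count, prop.name);
    return 0;
}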
This is an internal cuda-gdb bug. You should report it.
Can you try installing the CUDA toolkit from the package on the NVIDIA site?
I updated my CUDA toolkit from 5.5 to 6.5. Then the following command
nvcc -arch=sm_52
started giving me an error:
nvcc fatal : Value 'sm_52' is not defined for option 'gpu-architecture'
Is this a bug? Or does nvcc 6.5 not support the Maxwell sm_52 architecture?
CUDA Toolkit 6.5 was released before the sm_52 architecture came into production.
After the arrival of the sm_52 architecture, an update to CUDA 6.5 was released that enabled nvcc to generate code for sm_52.
Make sure you download the newer version of CUDA Toolkit 6.5.
P.S.: I would rather use the latest version of the toolkit (currently 7.0).