Check failed: error == cudaSuccess while training SSD - deep-learning

I am training SSD and I get the following error:
I0116 13:10:31.206343 3447 net.cpp:761] Ignoring source layer drop6
I0116 13:10:31.207219 3447 net.cpp:761] Ignoring source layer drop7
I0116 13:10:31.207229 3447 net.cpp:761] Ignoring source layer fc8
I0116 13:10:31.207233 3447 net.cpp:761] Ignoring source layer prob
F0116 13:10:31.227303 3447 parallel.cpp:130] Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal
*** Check failure stack trace: ***
# 0x7f158382e5cd google::LogMessage::Fail()
# 0x7f1583830433 google::LogMessage::SendToLog()
# 0x7f158382e15b google::LogMessage::Flush()
# 0x7f1583830e1e google::LogMessageFatal::~LogMessageFatal()
# 0x7f158412f7bd caffe::DevicePair::compute()
# 0x7f15841354e0 caffe::P2PSync<>::Prepare()
# 0x7f1584135fee caffe::P2PSync<>::Run()
# 0x40af10 train()
# 0x407608 main
# 0x7f1581fbd830 __libc_start_main
# 0x407ed9 _start
# (nil) (unknown)
Aborted (core dumped)
My graphics card is a Quadro K4200.
./deviceQuery gives me:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Quadro K4200"
CUDA Driver Version / Runtime Version 9.0 / 8.0
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 4034 MBytes (4230479872 bytes)
( 7) Multiprocessors, (192) CUDA Cores/MP: 1344 CUDA Cores
GPU Max Clock rate: 784 MHz (0.78 GHz)
Memory Clock rate: 2700 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 524288 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 4 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = Quadro K4200
Result = PASS
I can test the SSD library successfully; I only get the error during training.
Is the graphics card not powerful enough to train with this library?

I found the error.
Running python examples/ssd/ssd_pascal.py in SSD launches the following training command:
gdb --args ./build/tools/caffe train --solver="models/VGGNet/VOC0712/SSD_300x300/solver.prototxt" --weights="models/VGGNet/VGG_ILSVRC_16_layers_fc_reduced.caffemodel" --gpu 0,1,2,3 2>&1 | tee jobs/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300.log
The --gpu 0,1,2,3 flag is what causes the issue: it requests four GPUs, but my machine has only one. I changed it to --gpu 0 and ran the training command directly:
./build/tools/caffe train --solver="models/VGGNet/VOC0712/SSD_300x300/solver.prototxt" --weights="models/VGGNet/VGG_ILSVRC_16_layers_fc_reduced.caffemodel" --gpu 0 | tee jobs/VGGNet/VOC0712/SSD_300x300/VGG_VOC0712_SSD_300x300.log
That solved it.
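For reference, the failure is straightforward to guard against: Caffe's P2PSync calls cudaSetDevice() for every ID passed to --gpu, and any ID at or beyond the device count fails with "invalid device ordinal". Below is a minimal standalone sketch (plain CUDA runtime API, not Caffe's actual code) that validates the requested IDs before selecting them:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device_count = 0;
    cudaError_t status = cudaGetDeviceCount(&device_count);
    if (status != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(status));
        return 1;
    }
    // The IDs the generated script passes via --gpu; only those below
    // device_count actually exist on this machine.
    int requested[] = {0, 1, 2, 3};
    for (int i = 0; i < 4; ++i) {
        int id = requested[i];
        if (id >= device_count) {
            fprintf(stderr, "GPU %d requested, but only %d device(s) present\n",
                    id, device_count);
            continue;
        }
        // Selecting a valid ordinal succeeds; an invalid one would return
        // cudaErrorInvalidDevice, the "invalid device ordinal" in the log.
        status = cudaSetDevice(id);
        printf("GPU %d: %s\n", id, cudaGetErrorString(status));
    }
    return 0;
}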

Related

CUDA failed to launch kernel : no kernel image available for execution

I am trying to run CUDA on a rather old GPU. I tried the CUDA sample vectorAdd, which gives me the following error:
Failed to launch vectorAdd kernel (error code no kernel image is available for execution on the device)!
These are the outputs from deviceQuery:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 580"
CUDA Driver Version / Runtime Version 9.1 / 9.0
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 1467 MBytes (1538392064 bytes)
MapSMtoCores for SM 2.0 is undefined. Default to use 64 Cores/SM
MapSMtoCores for SM 2.0 is undefined. Default to use 64 Cores/SM
(16) Multiprocessors, ( 64) CUDA Cores/MP: 1024 CUDA Cores
GPU Max Clock rate: 1630 MHz (1.63 GHz)
Memory Clock rate: 2050 Mhz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 3 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.1, CUDA Runtime Version = 9.0, NumDevs = 1
Result = PASS
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.147 Driver Version: 390.147 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 580 Off | 00000000:03:00.0 N/A | N/A |
| 42% 48C P12 N/A / N/A | 257MiB / 1467MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
Now, according to the CUDA Compatibility PDF
https://docs.nvidia.com/pdf/CUDA_Compatibility.pdf
I assume I have binary compatibility from CUDA 9.0.176 to the GPU driver.
For compute capability support, the table does not list the 390 driver.
Is it even possible to program CUDA on this GPU, or should I get a newer one? If it is possible, what combination of driver and CUDA toolkit versions do I need?
The GPU you are using is a Fermi class (compute capability 2.0) device. Support was officially removed from the CUDA toolkit when CUDA 9.0 was released in September 2017. The last release of the CUDA toolkit with Fermi support was CUDA 8.0. You will have to use that (or something even older) if you wish to use that GPU with CUDA.
[Answer assembled from comments and added as a community wiki entry to get this question off the unanswered list for the CUDA tag]
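For anyone hitting the same error in their own code: host-side calls such as cudaMalloc and cudaMemcpy can all succeed first, because the failure only surfaces when a kernel is launched. Below is a minimal sketch (a trivial kernel, named probe here just for illustration, plus the standard runtime API) that reports the condition explicitly:

#include <cstdio>
#include <cuda_runtime.h>

// Empty kernel used only to test whether a binary/PTX image matching this
// device's compute capability is present in the executable.
__global__ void probe() {}

int main() {
    probe<<<1, 1>>>();
    // The launch reports cudaErrorNoKernelImageForDevice when the program
    // was not compiled for this GPU's architecture.
    cudaError_t status = cudaGetLastError();
    if (status != cudaSuccess) {
        fprintf(stderr, "kernel launch failed: %s\n", cudaGetErrorString(status));
        return 1;
    }
    cudaDeviceSynchronize();
    printf("kernel image is compatible with this device\n");
    return 0;
}

With CUDA 8.0 on a Fermi card, building with nvcc -arch=sm_20 should produce a matching image.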

CUDA Peer-to-Peer Memory Access on RTX2080

I have four RTX 2080 GPUs and I want to enable peer access from device 1 to device 0 in the following code.
cudaSetDevice(0);
float* data;
cudaMalloc(&data, 1000 * sizeof(float));
cudaSetDevice(1);
cudaDeviceEnablePeerAccess(0, 0); // This will fail with error: cudaErrorPeerAccessUnsupported
I have checked unifiedAddressing of cudaDeviceProp and the value is 1. Is there anything wrong with my code?
Here is the topology of my GPU connection:
      GPU0  GPU1  GPU2  GPU3
GPU0   X    NODE  SYS   SYS
GPU1  NODE   X    SYS   SYS
GPU2  SYS   SYS    X    NODE
GPU3  SYS   SYS   NODE   X
Driver version: 430.40
CUDA version: 10.1
Turning a comment into an answer:
Peer-to-peer memory access on the RTX 2080 is only supported when an NVLink bridge is installed between the GPUs. That is why you receive the unsupported error in this case.
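The supported way to discover this at runtime, rather than from the failed call, is cudaDeviceCanAccessPeer(). A minimal sketch for the device pair in the question:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int can_access = 0;
    // Ask whether device 1 can address device 0's memory. On RTX 2080 cards
    // connected only via PCIe (no NVLink bridge) this comes back 0.
    cudaError_t status = cudaDeviceCanAccessPeer(&can_access, 1, 0);
    if (status != cudaSuccess) {
        fprintf(stderr, "query failed: %s\n", cudaGetErrorString(status));
        return 1;
    }
    if (can_access) {
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);  // safe now: peer access is supported
        printf("peer access 1 -> 0 enabled\n");
    } else {
        printf("peer access 1 -> 0 not supported\n");
    }
    return 0;
}

Note that cudaMemcpyPeer() still works between the devices even without peer access; the runtime simply stages the copy through host memory.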

Sample deviceQuery cuda program

I have an Intel Xeon machine with an NVIDIA GeForce GTX 1080 and CentOS 7 as the operating system. I have installed NVIDIA driver 410.93 and CUDA toolkit 10.0. After compiling the CUDA samples, I tried to run ./deviceQuery.
But it fails like this:
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 30
-> unknown error
Result = FAIL
Some command outputs:
lspci | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
nvidia-smi
Wed Feb 13 16:08:07 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.93 Driver Version: 410.93 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A |
| 0% 54C P0 46W / 240W | 175MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 6275 G /usr/bin/X 94MiB |
| 0 7268 G /usr/bin/gnome-shell 77MiB |
+-----------------------------------------------------------------------------+
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.13
PATH & LD_LIBRARY_PATH
PATH =/usr/local/cuda-10.0/bin:/usr/local/cuda/bin:/usr/local/bin:/usr/local/sbin:
LD_LIBRARY_PATH = /usr/local/cuda-10.0/lib64:/usr/local/cuda/lib64:
lsmod | grep nvidia
nvidia_drm 39819 3
nvidia_modeset 1036573 6 nvidia_drm
nvidia 16628708 273 nvidia_modeset
drm_kms_helper 179394 1 nvidia_drm
drm 429744 6 drm_kms_helper,nvidia_drm
ipmi_msghandler 56032 2 ipmi_devintf,nvidia
lsmod | grep nvidia-uvm
no output
dmesg | grep NVRM
[ 8.237489] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 410.93 Thu Dec 20 17:01:16 CST 2018 (using threaded interrupts)
Is this problem related to modprobe or nvidia-uvm?
I asked this on the NVIDIA DevTalk forum, but there has been no reply yet.
Please give some suggestions.
Thanks in advance.
I debugged it. The problem was a version mismatch between the NVIDIA driver (410.93) and CUDA (driver 410.48 came with the CUDA run file). I autoremoved all the drivers, deleted all the link files in /var/lib/dkms/nvidia/*, and reinstalled from the beginning.
Now it works fine, and nvidia-uvm is also loaded.
lsmod | grep nvidia
nvidia_uvm 786031 0
nvidia_drm 39819 3
nvidia_modeset 1048491 6 nvidia_drm
nvidia 16805034 274 nvidia_modeset,nvidia_uvm
drm_kms_helper 179394 1 nvidia_drm
drm 429744 6 drm_kms_helper,nvidia_drm
ipmi_msghandler 56032 2 ipmi_devintf,nvidia
nvidia-smi
Fri Feb 15 11:46:24 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.48 Driver Version: 410.48 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A |
| 0% 45C P8 10W / 240W | 242MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 6063 G /usr/bin/X 120MiB |
| 0 7502 G /usr/bin/gnome-shell 119MiB |
+-----------------------------------------------------------------------------+
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1080"
CUDA Driver Version / Runtime Version 10.0 / 10.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8119 MBytes (8513585152 bytes)
(20) Multiprocessors, (128) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1797 MHz (1.80 GHz)
Memory Clock rate: 5005 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS
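As an aside, when cudaGetDeviceCount fails with a bare numeric code like the 30 above, printing the runtime's own error string makes problems like this easier to recognize. A minimal sketch:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaError_t status = cudaGetDeviceCount(&n);
    if (status != cudaSuccess) {
        // In the pre-10.1 runtime numbering, 30 is cudaErrorUnknown and 35 is
        // cudaErrorInsufficientDriver; the latter points directly at a
        // driver/runtime version mismatch.
        fprintf(stderr, "cudaGetDeviceCount returned %d: %s\n",
                (int)status, cudaGetErrorString(status));
        return 1;
    }
    printf("detected %d CUDA capable device(s)\n", n);
    return 0;
}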

What utility/binary can I call to determine an NVIDIA GPU's Compute Capability?

Suppose I have a system with a single GPU installed, and suppose I've also installed a recent version of CUDA.
I want to determine what's the compute capability of my GPU. If I could compile code, that would be easy:
#include <stdio.h>
int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%d", prop.major * 10 + prop.minor);
}
But suppose I want to do that without compiling. Can I? I thought nvidia-smi might help me, since it lets you query all sorts of information about devices, but it seems it doesn't let you obtain the compute capability. Maybe there's something else I can do? Maybe something visible via /proc or system logs?
Edit: This is intended to run before a build, on a system which I don't control, so it must have minimal dependencies, run from the command line, and not require root privileges.
Unfortunately, it looks like the answer at the moment is "No", and that one needs to either compile a program or use a binary compiled elsewhere.
Edit: I have adapted a workaround for this issue: a self-contained bash script which compiles a small built-in C program to determine the compute capability. (It is particularly useful to call from within CMake, but it can also run independently.)
Also, I've filed a feature-request bug report with NVIDIA about this.
Here's the script, in a version assuming that nvcc is on your path:
//usr/bin/env nvcc --run "$0" ${1:+--run-args "${@:1}"} ; exit $?
// The line above is both a valid C++ comment and a shell command, so the
// file can be executed directly: the shell runs nvcc --run on it and exits.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime_api.h>
int main(int argc, char *argv[])
{
    cudaDeviceProp prop;
    cudaError_t status;
    int device_count;
    int device_index = 0;
    if (argc > 1) {
        device_index = atoi(argv[1]);
    }
    status = cudaGetDeviceCount(&device_count);
    if (status != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount() failed: %s\n", cudaGetErrorString(status));
        return -1;
    }
    if (device_index >= device_count) {
        fprintf(stderr, "Specified device index %d exceeds the maximum (the device count on this system is %d)\n", device_index, device_count);
        return -1;
    }
    status = cudaGetDeviceProperties(&prop, device_index);
    if (status != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties() for device %d failed: %s\n", device_index, cudaGetErrorString(status));
        return -1;
    }
    int v = prop.major * 10 + prop.minor;
    printf("%d\n", v);
}
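Saved under any name (say check_compute_cap.sh, just as an example), marked executable, and run directly, the script prints the compute capability of device 0, or of the device whose index is passed as the first argument.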
We can use nvidia-smi --query-gpu=compute_cap --format=csv to get the compute capability.
Sample output:
compute_cap
8.6
It is available from CUDA toolkit 11.6 onward.
You can use the deviceQuery utility included in the CUDA installation:
# change cwd into the utility source directory
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
# build deviceQuery utility with make as root
$ sudo make
# run deviceQuery
$ ./deviceQuery | grep Capability
CUDA Capability Major/Minor version number: 7.5
# optionally copy deviceQuery into ~/bin for future use
$ cp ./deviceQuery ~/bin
Full output from deviceQuery with an RTX 2080 Ti follows:
$ ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce RTX 2080 Ti"
CUDA Driver Version / Runtime Version 11.2 / 10.2
CUDA Capability Major/Minor version number: 7.5
Total amount of global memory: 11016 MBytes (11551440896 bytes)
(68) Multiprocessors, ( 64) CUDA Cores/MP: 4352 CUDA Cores
GPU Max Clock rate: 1770 MHz (1.77 GHz)
Memory Clock rate: 7000 Mhz
Memory Bus Width: 352-bit
L2 Cache Size: 5767168 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1024
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 3 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

deviceQuery program - number of multiprocessors = 0

I have executed the deviceQuery program from the CUDA SDK. The numbers of multiprocessors and cores it reports are 0, which I'm sure is not true.
What could the reasons be?
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
There are 3 devices supporting CUDA
Device 0: "Tesla C2050"
CUDA Driver Version: 4.10
CUDA Runtime Version: 4.10
CUDA Capability Major revision number: 2
CUDA Capability Minor revision number: 0
Total amount of global memory: 2817982464 bytes
Number of multiprocessors: 0
Number of cores: 0
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per block: 1024
Maximum sizes of each dimension of a block: 1024 x 1024 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 65535
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Clock rate: 1.15 GHz
Concurrent copy and execution: Yes
Run time limit on kernels: Yes
Integrated: Yes
Support host page-locked memory mapping: No
Compute mode: Default
(multiple host threads can use this device simultaneously)
Ensure that you:
Uninstall all the old graphics drivers and install the latest NVIDIA graphics driver.
Uninstall all the old CUDA toolkits and install the latest CUDA toolkit.
Make sure you update your NVIDIA drivers to the latest available version and then reboot. If that doesn't fix it, please run the following commands and post the output:
uname -a
nvidia-smi -q
lspci
echo $LD_LIBRARY_PATH
ldd /path/to/deviceQuery
ldconfig -p
deviceQuery (with all output, not just the card in question)
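Once the driver and toolkit versions agree, the multiprocessor count can also be cross-checked directly against the runtime, independent of the SDK's deviceQuery sample. A minimal sketch:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) != cudaSuccess) continue;
        // multiProcessorCount comes straight from the driver; a Tesla C2050
        // should report 14 multiprocessors here, never 0.
        printf("device %d (%s): %d multiprocessors, compute capability %d.%d\n",
               i, prop.name, prop.multiProcessorCount, prop.major, prop.minor);
    }
    return 0;
}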