How to find the CUDA version in Ubuntu? - cuda

I installed CUDA 8.0 on my Ubuntu 16.04 machine and checked the CUDA version using the command "nvcc --version". It reports the version as 7.5! How can I be sure which is accurate? Are there other commands I can use to verify the result?

[edited 2022]
For CUDA 11:
$ cat /usr/local/cuda/version.json
For cuda-8.0 on Ubuntu 16.04, you should be able to read
$ cat /usr/local/cuda/version.txt
CUDA Version 8.0.44
I agree with Robert Crovella: you might need to check your PATH.

Starting from CUDA 8.0, it's possible to have multiple CUDA versions installed side by side. You can then set different values for the $PATH environment variable to select which CUDA version is active.
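For example, a minimal sketch of switching between installed versions via PATH (paths illustrative, assuming versioned installs under /usr/local):
$ export PATH=/usr/local/cuda-8.0/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
$ nvcc --version   # should now report release 8.0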
Command to immediately obtain the CUDA version:
$ nvcc --version | grep "release" | awk '{print $6}' | cut -c2-
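For example, given the nvcc output for the asker's CUDA 8.0.44 install, this pipeline prints just the version number:
$ nvcc --version | grep "release" | awk '{print $6}' | cut -c2-
8.0.44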
You can confirm the result by checking the install status of CUDA libraries:
$ dpkg -l | grep cuda
For installing multiple versions of CUDA, you can refer to this article.

Thank you all...
Previously I tried to install CUDA 8.0 using the run file from https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda_8.0.44_linux-run. After that I ran "nvcc --version", but it showed the following error: "The program 'nvcc' is currently not installed. You can install it by typing: sudo apt-get install nvidia-cuda-toolkit". So I ran that suggested command, and the nvcc it installed reported the CUDA 7.5 version.
Later I installed CUDA using the Debian package, which contains nvcc by default. Now I am getting the correct version.

It may be due to the fact that you have both v7.5 and v8.0 installed. So instead of changing the PATH, try uninstalling v7.5 first.

Related

nvcc not found but cuda runs fine?

I was trying to run nvcc -V to check the CUDA version, but I got the following error message.
Command 'nvcc' not found, but can be installed with:
sudo apt install nvidia-cuda-toolkit
But GPU acceleration is working fine for training models on CUDA. Is there another way to find out the CUDA compiler tools version? I know nvidia-smi doesn't give the right version.
Is there a way to install or configure nvcc so I don't have to install a whole new toolkit?
Most of the time, nvcc and other CUDA SDK binaries are not in the environment variable PATH. Check the installation path of CUDA; if it is installed under /usr/local/cuda, add its bin folder to the PATH variable in your ~/.bashrc:
export CUDA_HOME=/usr/local/cuda
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
You can apply the changes with source ~/.bashrc, or the next time you log in, everything is set automatically.
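You can then sanity-check the result (output illustrative for a default /usr/local/cuda install):
$ which nvcc
/usr/local/cuda/bin/nvcc
$ nvcc --version | grep release
Cuda compilation tools, release 11.0, V11.0.194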
As @pQB and @talonmies mentioned above, you only need to install the GPU drivers (versions 430-470 these days) to use PyTorch. If you are using your GPU for display output, you should be fine.
For the CUDA compilation tools you need to install the whole toolkit, which includes the driver as well. If you install the downloaded file manually from the CLI, the installer will give you the option to choose which components to install or skip.
Generally, it is recommended to install the compilation tools (which are system-wide) and the GPU drivers together, because this avoids compatibility issues.
Append:
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
to
~/.bashrc
Note: your path to CUDA may include a version, so navigate to /usr/local/, check for a cuda-XX.X directory, and modify the exports in ~/.bashrc to point to it.
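If you prefer not to hard-code the version, here is a minimal sketch for ~/.bashrc that falls back to the highest versioned directory (assumes installs under /usr/local; the fallback logic is illustrative):
CUDA_DIR=/usr/local/cuda
# if the unversioned symlink is missing, pick the highest cuda-XX.X directory
[ -d "$CUDA_DIR" ] || CUDA_DIR=$(ls -d /usr/local/cuda-* 2>/dev/null | sort -V | tail -n 1)
if [ -n "$CUDA_DIR" ]; then
    export PATH="$CUDA_DIR/bin:$PATH"
    export LD_LIBRARY_PATH="$CUDA_DIR/lib64:$LD_LIBRARY_PATH"
fi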

Getting gfortran 10 from Fedora 31

I am trying to install gfortran 10 on Fedora 31, following https://fortran-lang.org/learn/os_setup/install_gfortran.
sudo dnf install gcc-gfortran leads to gfortran 9.
I tried to download the package from https://fedora.pkgs.org/33/fedora-x86_64/gcc-gfortran-10.2.1-3.fc33.x86_64.rpm.html,
but the downloaded file leads to "Failed to install file, not supported" from the graphical interface :(
or bash: ./gcc-gfortran-10.2.1-9.fc33.x86_64.rpm: cannot execute binary file: Exec format error from a terminal as root.
Is there any way to install gfortran-10 from Fedora?
Thanks!
You can of course always compile GCC from source; it is not that hard, and the script for getting the prerequisites is included (./contrib/download_prerequisites).
The easiest way is to download one of the snapshots from https://gcc.gnu.org/snapshots.html and follow the instructions. You do not even have to have admin rights; you can do it privately in your home directory.
Check whether there is a repository with additional GCC versions for your distro. For example, on my OpenSuSE, I have packages for GCC 7, 8, 9, 10 and 11. And they can be installed concurrently.
Regarding:
bash: ./gcc-gfortran-10.2.1-9.fc33.x86_64.rpm: cannot execute binary file: Exec format error
You cannot run an .rpm file in bash; you have to install it using rpm -i or your higher-level package manager.
The file you downloaded is an RPM package, not an executable. You would normally install it with dnf install ./gcc-gfortran-10.2.1-9.fc33.x86_64.rpm from the command line. However, that package is for Fedora Linux 33, and you're running 31. Occasionally this works, but generally installing packages from new releases on older releases isn't supported.
If, for some reason you can't upgrade to Fedora Linux 33 for your whole system, one approach is to use the toolbox utility to make a containerized workspace using a F33 container image. Then, you can install the version of gfortran you want into that (with dnf install gcc-gfortran).
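For example, a sketch of the toolbox workflow (flag syntax may vary slightly between toolbox versions):
$ toolbox create --release f33
$ toolbox enter --release f33
$ sudo dnf install gcc-gfortran
$ gfortran --version   # should now report a GCC 10.x gfortran inside the container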
You could also use F34 (out tomorrow) but note that that has gcc 11.

Installing cuda via brew and dmg

After attempting to install the NVIDIA toolkit on macOS by following this guide: http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html#axzz4FPTBCf7X, I received the error "Package manifest parsing error", which led me to this: NVidia CUDA toolkit 7.5.27 failing to install on OS X. I unmounted the dmg, and the upshot was that instead of receiving "Package manifest parsing error", the installer would not launch (it seemed to launch briefly, then quit).
Installing via the command brew install Caskroom/cask/cuda (CUDA 7.5 install on Mac missing nvrtc) seems to have successfully installed CUDA.
The command nvcc --version returns:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Mon_Apr_11_13:23:40_CDT_2016
Cuda compilation tools, release 7.5, V7.5.26
I've built the example in /Developer/NVIDIA/CUDA-7.5/samples/1_Utilities with :
make -C bandwidthTest/
This executed without error.
It appears that installing with brew install Caskroom/cask/cuda is a safe method of installing? What is the difference between this install method and installing via the DMG file from NVIDIA?
Caskroom appears to be an extension for brew for installing GUI applications : https://github.com/caskroom/homebrew-cask
Should an IDE also be installed as part of the cuda install ?
Nowadays you have to do the following to install cuda via brew:
brew tap homebrew/cask-drivers
brew cask install nvidia-cuda
See https://github.com/caskroom/homebrew-cask/issues/38325 .
Then you also need to add the following to your file ~/.bash_profile:
export PATH=/Developer/NVIDIA/CUDA-9.0/bin${PATH:+:${PATH}}
export DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-9.0/lib${DYLD_LIBRARY_PATH:+:${DYLD_LIBRARY_PATH}}
See http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html.
UPDATE: Newer versions of Mac OS X with SIP (System Integrity Protection) enabled will prevent modifying DYLD_LIBRARY_PATH (see https://groups.google.com/forum/#!topic/caffe-users/waugt62RQMU). You can check that via
source ~/.bash_profile
env | grep DYLD_LIBRARY_PATH
If the output of this command is empty, SIP is active and you might want to deactivate it as described at https://www.macworld.com/article/2986118/security/how-to-modify-system-integrity-protection-in-el-capitan.html. After doing this you should see
env | grep DYLD_LIBRARY_PATH
DYLD_LIBRARY_PATH=/Developer/NVIDIA/CUDA-9.0/lib
Both methods download and install from the same .dmg file from NVidia.
The homebrew-cask framework is the preferred method for installing software distributed as binaries in the homebrew paradigm.
This is my understanding.
Using the DMG file, follow the steps below:
wget 'https://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_mac.dmg' && \
hdiutil attach cuda_10.2.89_mac.dmg \
-nobrowse \
-mountpoint \
/Volumes/CUDAMacOSXInstaller
Open installer:
open /Volumes/CUDAMacOSXInstaller/CUDAMacOSXInstaller.app
Uncheck "CUDA Samples" before continue.
Unmount and remove file:
hdiutil detach /Volumes/CUDAMacOSXInstaller && rm ./cuda_10.2.89_mac.dmg
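After installation, you can verify the toolchain from a terminal (a sketch, assuming the default install location for this release):
$ export PATH=/Developer/NVIDIA/CUDA-10.2/bin:$PATH
$ nvcc --version   # should report release 10.2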

Unable to get cuda to work in tensorflow

I'm trying to use cuda to accelerate tensorflow. I'm running tensorflow using the docker image.
Firstly, when I launch the GPU image, it has a mismatch in the LD_LIBRARY_PATH environment variable:
~# echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64:
root@d578acbbc2cd:~# ls /usr/local/
bin cuda cuda-7.0 etc games include lib man sbin share src
There's no nvidia directory there. When I try to run the convolutional.py demo, it can't initialise the cuda support:
# python models/image/mnist/convolutional.py
Succesfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Succesfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Succesfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Succesfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/4.2.0-23-generic/modules.dep.bin'
E tensorflow/stream_executor/cuda/cuda_driver.cc:466] failed call to cuInit: CUDA_ERROR_UNKNOWN
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:98] retrieving CUDA diagnostic information for host: d578acbbc2cd
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:106] hostname: d578acbbc2cd
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:131] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:242] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.68 Tue Dec 1 17:24:11 PST 2015
GCC version: gcc version 5.2.1 20151010 (Ubuntu 5.2.1-22ubuntu2)
"""
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:135] kernel reported version is: 352.68
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA:
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
It then goes on to train using the CPU only.
# find /usr -name libcuda.so
/usr/lib/x86_64-linux-gnu/libcuda.so
So in the docker image, there's only the gnu cpu cuda implementation. No NVIDIA stuff. In the host ubuntu 15.10 session, I have libcuda.so installed:
$ find /usr -name libcuda.so
/usr/lib/x86_64-linux-gnu/libcuda.so
/usr/lib/i386-linux-gnu/libcuda.so
/usr/local/cuda-7.5/targets/x86_64-linux/lib/stubs/libcuda.so
So these seem to be stubs ... not sure why.
Is there some trick to getting this to work?
Try rebuilding the Docker image directly from the Tensorflow repository (i.e. don't rely on the image on the container registry) and use https://github.com/NVIDIA/nvidia-docker to run the container (the Docker command described in the Tensorflow documentation is not portable).
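For example, once nvidia-docker is installed, something along these lines exposes the host driver inside the container (image name illustrative):
$ nvidia-docker run -it tensorflow/tensorflow:latest-gpu bash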
I had a similar problem, though not in docker. The libcuda.so in /usr/local/cuda/lib64/stubs was a broken sym link. When I searched for libcuda.so it only turned up a file in a lib32 folder.
It seems that the problem was how I originally installed the NVIDIA device driver. At some point in the driver install process you're given the option to install the lib32 drivers. I had thought this meant in addition to lib64 drivers so I selected it. Turns out it only installs lib32 and not lib64 drivers.
I reinstalled the NVIDIA device driver, this time not selecting the lib32 'option'. Now tensorflow finds libcuda.so.
I had the same problem with running tensorflow on an Ubuntu machine after I upgraded my driver to 352.63 and 352.93. (I remember it works with 346.*, but when I try to install 346.*, it installs 352.* automatically for some reason.)
I finally figured out that it was caused by a permission issue. (I could run it as root.) So I changed the permissions of the libcuda.so.352-63 file to be executable by anyone, and it works well now.
Hope this will be helpful to those still struggling with this issue.
I didn't try the docker one, but I guess it's also caused by permission setting.
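If you want to check for the same permission problem, a sketch (the exact library file name depends on your driver version):
$ ls -l /usr/lib/x86_64-linux-gnu/libcuda.so.*   # inspect current permissions
$ sudo chmod a+rx /usr/lib/x86_64-linux-gnu/libcuda.so.352.63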
Try this command
sudo apt-get install nvidia-modprobe
As mentioned here:
https://github.com/tensorflow/tensorflow/issues/394
and
http://kkjkok.blogspot.in/2016_08_01_archive.html
After I updated the NVIDIA driver to 378.09 on Ubuntu 14.10 I had the same error, although all the permissions for the lib files were set correctly.
Thanks to @PhoenixQ, I tried to run with sudo and it worked.
After that I tried to run without sudo one more time and the error disappeared. I'm not sure what exactly happened, but maybe something was configured during the call with sudo that was not possible without sudo.
So the solution:
Try running the same thing with sudo.
After this, try running without sudo. That worked for me.

How to get the CUDA version?

Is there any quick command or script to check for the version of CUDA installed?
I found the manual for 4.0 under the installation directory, but I'm not sure whether it corresponds to the actually installed version or not.
As Jared mentions in a comment, from the command line:
nvcc --version
(or /usr/local/cuda/bin/nvcc --version) gives the CUDA compiler version (which matches the toolkit version).
From application code, you can query the runtime API version with
cudaRuntimeGetVersion()
or the driver API version with
cudaDriverGetVersion()
As Daniel points out, deviceQuery is an SDK sample app that queries the above, along with device capabilities.
As others note, you can also check the contents of version.txt (e.g., on Mac or Linux):
cat /usr/local/cuda/version.txt
However, if another version of the CUDA toolkit is installed besides the one symlinked from /usr/local/cuda, this may report an inaccurate version if that other version is earlier in your PATH, so use with caution.
On Ubuntu with CUDA 8:
$ cat /usr/local/cuda/version.txt
You can also get some insights into which CUDA versions are installed with:
$ ls -l /usr/local | grep cuda
which will give you something like this:
lrwxrwxrwx 1 root root 9 Mar 5 2020 cuda -> cuda-10.2
drwxr-xr-x 16 root root 4096 Mar 5 2020 cuda-10.2
drwxr-xr-x 16 root root 4096 Mar 5 2020 cuda-8.0.61
Given a sane PATH, the version cuda points to should be the active one (10.2 in this case).
NOTE: This only works if you are willing to assume CUDA is installed under /usr/local/cuda (which is true for the standalone installer with the default location, but not true, e.g., for distributions that ship CUDA as a package). Ref: comment from @einpoklum.
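You can also resolve the symlink directly to see which installation is active (output illustrative):
$ readlink -f /usr/local/cuda
/usr/local/cuda-10.2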
[Edited answer. Thanks to everyone who corrected it.]
If you run
nvidia-smi
You should find the CUDA version, i.e. the highest CUDA version the installed driver supports, in the top right corner of the command's output. At least that is what I found for CUDA version 10.0.
For CUDA version:
nvcc --version
Or use,
nvidia-smi
For cuDNN version:
For Linux:
Use following to find path for cuDNN:
$ whereis cuda
cuda: /usr/local/cuda
Then use this to get the version from the header file:
$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
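Note: on cuDNN 8 and newer the version macros moved to cudnn_version.h, so if the command above prints nothing, try:
$ cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2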
For Windows,
Use following to find path for cuDNN:
C:\>where cudnn*
C:\Program Files\cuDNN7\cuda\bin\cudnn64_7.dll
Then use this to dump the version from the header file:
type "%PROGRAMFILES%\cuDNN7\cuda\include\cudnn.h" | findstr CUDNN_MAJOR
If you're getting two different versions for CUDA on Windows -
Different CUDA versions shown by nvcc and NVIDIA-smi
Use the following command to check CUDA installation by Conda:
conda list cudatoolkit
And the following command to check CUDNN version installed by conda:
conda list cudnn
If you want to install/update CUDA and CUDNN through CONDA, please use the following commands:
conda install -c anaconda cudatoolkit
conda install -c anaconda cudnn
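If you need a specific version rather than the latest, conda lets you pin it (version number illustrative):
conda install -c anaconda cudatoolkit=10.2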
Alternatively you can use following commands to check CUDA installation:
nvidia-smi
OR
nvcc --version
If you are using tensorflow-gpu through the Anaconda package (you can verify this by opening Python in a console and checking whether the default python shows Anaconda, Inc. when it starts, or by running which python and checking the location), then manually installing CUDA and CUDNN will most probably not work. You will have to update through conda instead.
If you want to install CUDA, CUDNN, or tensorflow-gpu manually, you can check out the instructions here https://www.tensorflow.org/install/gpu
Other respondents have already described which commands can be used to check the CUDA version. Here, I'll describe how to turn the output of those commands into an environment variable of the form "10.2", "11.0", etc.
To recap, you can use
nvcc --version
to find out the CUDA version.
I think this should be your first port of call.
If you have multiple versions of CUDA installed, this command should print out the version for the copy which is highest on your PATH.
The output looks like this:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Thu_Jun_11_22:26:38_PDT_2020
Cuda compilation tools, release 11.0, V11.0.194
Build cuda_11.0_bu.TC445_37.28540450_0
We can pass this output through sed to pick out just the MAJOR.MINOR release version number.
CUDA_VERSION=$(nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
If nvcc isn't on your path, you should be able to run it by specifying the full path to the default location of nvcc instead.
/usr/local/cuda/bin/nvcc --version
The output of which is the same as above, and it can be parsed in the same way.
Alternatively, you can find the CUDA version from the version.txt file.
cat /usr/local/cuda/version.txt
The output of which
CUDA Version 10.1.243
can be parsed using sed to pick out just the MAJOR.MINOR release version number.
CUDA_VERSION=$(cat /usr/local/cuda/version.txt | sed 's/.* \([0-9]\+\.[0-9]\+\).*/\1/')
Note that sometimes the version.txt file refers to a different CUDA installation than the nvcc --version. In this scenario, the nvcc version should be the version you're actually using.
We can combine these three methods together in order to robustly get the CUDA version as follows:
if nvcc --version 2&> /dev/null; then
# Determine CUDA version using default nvcc binary
CUDA_VERSION=$(nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p');
elif /usr/local/cuda/bin/nvcc --version > /dev/null 2>&1; then
# Determine CUDA version using /usr/local/cuda/bin/nvcc binary
CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p');
elif [ -f "/usr/local/cuda/version.txt" ]; then
# Determine CUDA version using /usr/local/cuda/version.txt file
CUDA_VERSION=$(cat /usr/local/cuda/version.txt | sed 's/.* \([0-9]\+\.[0-9]\+\).*/\1/')
else
CUDA_VERSION=""
fi
This environment variable is useful for downstream installations, such as when pip installing a copy of pytorch that was compiled for the correct CUDA version.
python -m pip install \
"torch==1.9.0+cu${CUDA_VERSION/./}" \
"torchvision==0.10.0+cu${CUDA_VERSION/./}" \
-f https://download.pytorch.org/whl/torch_stable.html
Similarly, you could install the CPU version of pytorch when CUDA is not installed.
if [ "$CUDA_VERSION" = "" ]; then
MOD="+cpu";
echo "Warning: Installing CPU-only version of pytorch"
else
MOD="+cu${CUDA_VERSION/./}";
echo "Installing pytorch with $MOD"
fi
python -m pip install \
"torch==1.9.0${MOD}" \
"torchvision==0.10.0${MOD}" \
-f https://download.pytorch.org/whl/torch_stable.html
But be careful with this because you can accidentally install a CPU-only version when you meant to have GPU support.
This can happen, for example, if you run the install script on a server's login node which doesn't have GPUs while your jobs will be deployed onto nodes which do have GPUs. In this case, the login node will typically not have CUDA installed.
On Ubuntu:
Try
$ cat /usr/local/cuda/version.txt
or
$ cat /usr/local/cuda-8.0/version.txt
Sometimes the folder is named "cuda-<version>".
If none of the above works, try going to
$ cd /usr/local/
and find the correct name of your CUDA folder.
Output should be similar to:
CUDA Version 8.0.61
If you have installed the CUDA SDK, you can run "deviceQuery" to see the version of CUDA.
If you have PyTorch installed, you can simply run the following code in your IDE:
import torch
print(torch.version.cuda)
On Windows 10, I found nvidia-smi.exe in 'C:\Program Files\NVIDIA Corporation\NVSMI'; after cd'ing into that folder (it was not in the PATH in my case) and running '.\nvidia-smi.exe', it showed the version information.
You might find CUDA-Z useful; here is a quote from their site:
"This program was born as a parody of another Z-utilities such as CPU-Z and GPU-Z. CUDA-Z shows some basic information about CUDA-enabled GPUs and GPGPUs. It works with nVIDIA Geforce, Quadro and Tesla cards, ION chipsets."
http://cuda-z.sourceforge.net/
On the Support tab there is the URL for the source code: http://sourceforge.net/p/cuda-z/code/, and the download is not actually an installer but the executable itself (no installation, so this is "quick").
This utility provides lots of information, and if you need to know how it was derived, there is the source to look at. There are other similar utilities that you might search for.
One can get the cuda version by typing the following in the terminal:
$ nvcc -V
# below is the result
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
Alternatively, one can manually check for the version by first finding out the installation directory using:
$ whereis -b cuda
cuda: /usr/local/cuda
And then cd into that directory and check for the CUDA version.
We have four ways to check the version; in my case, the output of each is below.
Way 1:
cat /usr/local/cuda/version.txt
Output:
CUDA Version 10.1.243
Way 2:
nvcc --version
Output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
Way 3:
/usr/local/cuda/bin/nvcc --version
Output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Way 4:
nvidia-smi
NVIDIA-SMI 450.36.06    Driver Version: 450.36.06    CUDA Version: 11.0
The outputs are not the same, and I don't know why.
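A likely explanation (a guess based on the outputs above): the nvcc found first on the PATH belongs to a different installation than the /usr/local/cuda symlink, and nvidia-smi reports the highest CUDA version the driver supports rather than any installed toolkit. You can check which nvcc is being picked up (output illustrative):
$ which nvcc
/usr/bin/nvcc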
First you should find where CUDA is installed.
If it's a default installation, the location should be:
for ubuntu:
/usr/local/cuda
In this folder you should have a file
version.txt
open this file with any text editor or run:
cat version.txt
from the folder
OR
cat /usr/local/cuda/version.txt
On Windows 11 with CUDA 11.6.1, this worked for me:
cat "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\version.json"
If nvcc --version is not working for you, then use cat /usr/local/cuda/version.txt.
After installing CUDA one can check the versions by: nvcc -V
I have installed both 5.0 and 5.5 so it gives
Cuda compilation tools, release 5.5, V5.5.0
This command works for both Windows and Ubuntu.
Apart from the ones mentioned above, your CUDA installation path (if not changed during setup) typically contains the version number.
Doing which nvcc should give the path, and that will give you the version.
PS: This is a quick and dirty way; the above answers are more elegant and will get you the right version with considerable effort.
If you are running on Linux:
dpkg -l | grep cuda
If you have multiple CUDA versions installed, the one loaded in your system is the one associated with "nvcc". Therefore, "nvcc --version" shows what you want.
Open a terminal and run these commands:
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
You will get the CUDA driver version, the CUDA runtime version, and detailed information for the GPU(s).
I get "/usr/local: no such file or directory", though nvcc -V gives:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44
Found mine after:
whereis cuda
at
cuda: /usr/lib/cuda /usr/include/cuda.h
with
nvcc --version
CUDA Version 9.1.85
Using TensorFlow:
import tensorflow as tf
from tensorflow.python.platform import build_info as build
print(f"tensorflow version: {tf.__version__}")
print(f"Cuda Version: {build.build_info['cuda_version']}")
print(f"Cudnn version: {build.build_info['cudnn_version']}")
tensorflow version: 2.4.0
Cuda Version: 11.0
Cudnn version: 8
Programmatically with the CUDA Runtime API C++ wrappers (caveat: I'm the author):
auto v1 = cuda::version::maximum_supported_by_driver();
auto v2 = cuda::version::runtime();
This gives you a cuda::version_t structure, which you can compare and also print/stream e.g.:
if (v2 < cuda::version_t{ 8, 0 } ) {
std::cerr << "CUDA version " << v2 << " is insufficient." << std::endl;
}
You can check the version of CUDA using
nvcc -V
or you can use
nvcc --version
Or you can check where CUDA is located using
whereis cuda
and then read the version file at that location:
cat <location/from/above/command>/version.txt
On my cuda-11.6.0 installation, the information can be found in /usr/local/cuda/version.json. It contains the full version number (11.6.0, instead of the 11.6 shown by nvidia-smi).
The information can be retrieved as follows:
python -c 'import json; print(json.load(open("/usr/local/cuda/version.json"))["cuda"]["version"])'
If there is a version mismatch between nvcc and nvidia-smi, then different CUDA versions are being used by the driver and the runtime environment.
To ensure the same version of the CUDA toolkit is used, you need to get the right CUDA installation onto the system path.
First run whereis cuda and find the location of the CUDA installation.
Then edit .bashrc: modify the PATH variable, and set the library search order using the LD_LIBRARY_PATH variable.
for instance
$ whereis cuda
cuda: /usr/lib/cuda /usr/include/cuda.h /usr/local/cuda
CUDA is installed at /usr/local/cuda; now we need to go to ~/.bashrc and add the path variable:
vim ~/.bashrc
export PATH="/usr/local/cuda/bin:${PATH}"
and after this line set the directory search path as:
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
Then save the .bashrc file. And refresh it as:
$ source ~/.bashrc
This will help ensure that nvcc -V and nvidia-smi refer to the same CUDA installation.