TeXLive update error: tlmgr: Remote repository is newer than local (2017 < 2018) - auto-update

I am having a problem configuring automatic updates for TeX Live.
Even though I am pretty sure that I downloaded the 2018 TeX Live version,
the error I get is this:
$ sudo tlmgr update --self --all
[sudo] password:
(running on Debian, switching to user mode!)
tlmgr: Remote repository is newer than local (2017 < 2018)
Cross release updates are only supported with
update-tlmgr-latest(.sh/.exe) --update
Please see https://tug.org/texlive/upgrade.html for details.
Looking at my TeX Live installation directory, it appears to be the 2018 version:
$ ls -l /usr/local/texlive
total 8
drwxr-xr-x 9 root root 4096 Nov 20 17:50 2018
drwxr-xr-x 10 root root 4096 Nov 20 16:13 texmf-local
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
But running the following commands shows a different version:
$ tlmgr version
(running on Debian, switching to user mode!)
tlmgr revision 46207 (2018-01-04 19:34:36 +0100)
tlmgr using installation: /usr/share/texlive
TeX Live (http://tug.org/texlive) version 2017
$ tex --version
TeX 3.14159265 (TeX Live 2017/Debian)
kpathsea version 6.2.3
Copyright 2017 D.E. Knuth.
There is NO warranty. Redistribution of this software is
covered by the terms of both the TeX copyright and
the Lesser GNU General Public License.
For more information about these matters, see the file
named COPYING and the TeX source.
Primary author of TeX: D.E. Knuth.
$ latex --version
pdfTeX 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian)
kpathsea version 6.2.3
Copyright 2017 Han The Thanh (pdfTeX) et al.
There is NO warranty. Redistribution of this software is
covered by the terms of both the pdfTeX copyright and
the Lesser GNU General Public License.
For more information about these matters, see the file
named COPYING and the pdfTeX source.
Primary author of pdfTeX: Han The Thanh (pdfTeX) et al.
Compiled with libpng 1.6.34; using libpng 1.6.34
Compiled with zlib 1.2.11; using zlib 1.2.11
Compiled with poppler version 0.62.0

Changing the repository to an older version fixed this for me.
tlmgr option repository ftp://tug.org/historic/systems/texlive/2017/tlnet-final
After that, I could run the update command.
I found this solution in a comment on stackoverflow.com, but I no longer remember which thread.
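Concretely, the full sequence looks like this (a sketch based on the commands above, run with sudo as in the original failing update):
$ sudo tlmgr option repository ftp://tug.org/historic/systems/texlive/2017/tlnet-final
$ sudo tlmgr update --self --all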


Getting gfortran 10 from Fedora 31

I tried to install gfortran 10 on Fedora 31, following https://fortran-lang.org/learn/os_setup/install_gfortran.
sudo dnf install gcc-gfortran installs gfortran 9.
I then tried downloading the package from https://fedora.pkgs.org/33/fedora-x86_64/gcc-gfortran-10.2.1-3.fc33.x86_64.rpm.html.
Installing the downloaded file from the graphical interface fails with "Failed to install file, not supported" :(
and running it from a root terminal gives
bash: ./gcc-gfortran-10.2.1-9.fc33.x86_64.rpm: cannot execute binary file: Exec format error
Is there any way to install gfortran 10 on Fedora 31?
Thanks!
You can of course always compile GCC from source; it is not that hard, and the script for downloading the prerequisites is included (./contrib/download_prerequisites).
The easiest way is to download one of the snapshots from https://gcc.gnu.org/snapshots.html and follow the instructions. You do not even need admin rights; you can do it privately in your home directory.
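A minimal sketch of such a build, assuming a snapshot tarball from the page above (the snapshot name and install prefix below are placeholders; adjust to what you actually download):
tar xf gcc-10-XXXXXXXX.tar.xz        # snapshot tarball from gcc.gnu.org/snapshots.html
cd gcc-10-XXXXXXXX
./contrib/download_prerequisites     # fetches GMP, MPFR, MPC, ISL
mkdir build && cd build
../configure --prefix=$HOME/gcc-10 --enable-languages=c,c++,fortran --disable-multilib
make -j$(nproc)
make install
# afterwards, put $HOME/gcc-10/bin at the front of your PATH to use this gfortran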
Check whether there is a repository with additional GCC versions for your distro. For example, on my OpenSuSE, I have packages for GCC 7, 8, 9, 10 and 11. And they can be installed concurrently.
Regarding:
bash: ./gcc-gfortran-10.2.1-9.fc33.x86_64.rpm: cannot execute binary file: Exec format error
You cannot run an RPM file in bash; you have to install it using rpm -i or your higher-level package manager.
The file you downloaded is an RPM package, not an executable. You would normally install it with dnf install ./gcc-gfortran-10.2.1-9.fc33.x86_64.rpm from the command line. However, that package is for Fedora Linux 33, and you're running 31. Occasionally this works, but generally installing packages from new releases on older releases isn't supported.
If, for some reason, you can't upgrade your whole system to Fedora Linux 33, one approach is to use the toolbox utility to make a containerized workspace from an F33 container image. Then you can install the version of gfortran you want into that (with dnf install gcc-gfortran).
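A sketch of that containerized route (the container name gfortran10 is arbitrary; flags as documented for the toolbox utility):
toolbox create --release f33 gfortran10   # create a Fedora 33 toolbox container
toolbox enter gfortran10
sudo dnf install gcc-gfortran             # gfortran 10 inside the container
gfortran --version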
You could also use F34 (out tomorrow), but note that it ships GCC 11.

Q: MariaDB Installation Problem on Fedora 30

I need to install MariaDB, and I followed the official Fedora wiki instructions, but I couldn't manage a proper installation. So, first I ran the dnf install mariadb mariadb-server command (as root), and the output was
Package mysql-community-client-8.0.17-1.fc30.x86_64 is already installed.
Package mysql-community-server-8.0.17-1.fc30.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
So I tried to enable/start MariaDB with systemctl start mariadb, and the output was
Failed to start mariadb.service: Unit mariadb.service not found.
I have installed other essential packages like php, httpd, mysql, etc., and I need to set up LAMP too, but I get this MariaDB error. Could you show me the way?
It happens because the mysql-community packages are configured to obsolete the mariadb packages.
The approach below might not be the most optimal; feel free to pitch in and simplify it.
First of all, if you have mysql-community-* packages installed, you probably have the MySQL Community repo configured. Search /etc/yum.repos.d for it and remove it from there:
$ grep -ri community /etc/yum.repos.d/*
/etc/yum.repos.d/mysql.repo:[mysql80-community]
/etc/yum.repos.d/mysql.repo:name=MySQL 8.0 Community Server
/etc/yum.repos.d/mysql.repo:baseurl=http://repo.mysql.com/yum/mysql-8.0-community/fc/$releasever/$basearch/
sudo mv /etc/yum.repos.d/mysql.repo /tmp/
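To confirm the repo is no longer active, a quick check is:
$ dnf repolist | grep -i mysql
which should now print nothing.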
Then check exactly which mysql-community-* packages you have:
$ rpm -qa | grep mysql-community
mysql-community-server-8.0.17-1.fc30.x86_64
mysql-community-libs-8.0.17-1.fc30.x86_64
mysql-community-client-8.0.17-1.fc30.x86_64
mysql-community-common-8.0.17-1.fc30.x86_64
The easiest way to get rid of them is just to remove them with dnf. But this needs to be done very carefully, because if they have been there for a while, you probably have other packages depending on them. So when you run the remove command, make sure you don't use auto-yes (the -y option), and examine the output before agreeing to uninstall. It might look somewhat like this:
$ sudo dnf remove mysql-community*
Dependencies resolved.
===================================================================================================================================================================================
Package Architecture Version Repository Size
===================================================================================================================================================================================
Removing:
mysql-community-client x86_64 8.0.17-1.fc30 @mysql80-community 66 M
mysql-community-common x86_64 8.0.17-1.fc30 @mysql80-community 8.3 M
mysql-community-libs x86_64 8.0.17-1.fc30 @mysql80-community 7.5 M
mysql-community-server x86_64 8.0.17-1.fc30 @mysql80-community 128 M
Removing dependent packages:
perl-DBD-MySQL x86_64 4.050-2.fc30 @fedora 367 k
Removing unused dependencies:
Note the section Removing dependent packages. I added only one package to make an example, but in your case it can be much longer and scarier. If you do have that section and don't know whether you can safely remove all its contents, it may be better not to do it (just yet). Abort the operation:
Is this ok [y/N]: n
Operation aborted.
Instead, you can try to replace the mysql-community packages with MariaDB. There is a dnf option, --allowerasing, which seems to do the trick, but you need to specify package names with versions to work around the mysql obsoletes (replace the version in the command with the actual version available at the time you are doing it):
$ sudo dnf install --allowerasing --setopt=install_weak_deps=False mariadb-server-10.3.17 mariadb-10.3.17
Last metadata expiration check: 0:07:18 ago on Mon 07 Oct 2019 02:25:32 PM UTC.
Dependencies resolved.
===================================================================================================================================================================================
Package Architecture Version Repository Size
===================================================================================================================================================================================
Installing:
mariadb x86_64 3:10.3.17-1.fc30 updates 5.9 M
mariadb-server x86_64 3:10.3.17-1.fc30 updates 17 M
Installing dependencies:
mariadb-common x86_64 3:10.3.17-1.fc30 updates 36 k
mariadb-connector-c-config noarch 3.1.3-1.fc30 updates 12 k
mariadb-errmsg x86_64 3:10.3.17-1.fc30 updates 205 k
mysql-selinux noarch 1.0.0-8.fc30 fedora 35 k
psmisc x86_64 23.1-5.1.fc30 fedora 133 k
Removing dependent packages:
mysql-community-client x86_64 8.0.17-1.fc30 @mysql80-community 66 M
mysql-community-server x86_64 8.0.17-1.fc30 @mysql80-community 128 M
Transaction Summary
Now nothing is removed as a dependency, apart from mysql-community, which was the goal.
The option --setopt=install_weak_deps=False is not strictly necessary, but without it dnf installs many packages which you probably don't need. You can run without the option to see the difference.
After you have replaced the server and client packages, you can check what else you have left from MySQL community server, and try to remove remaining packages if you want:
$ rpm -qa | grep mysql-community
mysql-community-libs-8.0.17-1.fc30.x86_64
mysql-community-common-8.0.17-1.fc30.x86_64
$ sudo dnf remove mysql-community-libs mysql-community-common
Dependencies resolved.
===================================================================================================================================================================================
Package Architecture Version Repository Size
===================================================================================================================================================================================
Removing:
mysql-community-common x86_64 8.0.17-1.fc30 @mysql80-community 8.3 M
mysql-community-libs x86_64 8.0.17-1.fc30 @mysql80-community 7.5 M
Transaction Summary
Now it seems safe: no dependencies anymore.
There is one catch I can think of. If you don't know why mysql-community was installed in the first place, it's possible that you have something which requires exactly it, and won't be satisfied with MariaDB replacing it. Then it probably won't allow you to replace the packages. But I can't guess what it could be, so it's up to you to try and see. I suppose it will show up in dnf output which you examine before confirmation.
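If the replacement succeeds, mariadb.service should now exist, so the original systemctl failure goes away. A typical follow-up (mysql_secure_installation is MariaDB's standard post-install hardening script; optional):
$ sudo systemctl enable --now mariadb
$ sudo mysql_secure_installation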

How to set CUDA parameters with GTX1080 for Tensorflow?

After installing the GTX 1080 driver, TensorFlow shows that it can find the cuDNN library.
However, modprobe fails to load the GPU driver module.
Detailed information follows:
$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
>>> sess = tf.InteractiveSession()
modprobe: ERROR: could not insert 'nvidia_352_uvm': Invalid argument
E tensorflow/stream_executor/cuda/cuda_driver.cc:491] failed call to cuInit: CUDA_ERROR_UNKNOWN
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:153] retrieving CUDA diagnostic information for host: work-data
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:160] hostname: work-data
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:185] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:347] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 367.27 Thu Jun 9 18:53:27 PDT 2016 GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) """
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] kernel reported version is: 367.27.0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:81] No GPU devices available on machine.
The GTX 1080 driver version is 367.27, which is provided by NVIDIA.
I don't know why there is an 'nvidia_352_uvm' module.
The result of nvidia-smi was attached as a screenshot.
Maybe I need to reinstall CUDA, but I have already reinstalled it several times.
Should I remove all the CUDA libraries and the NVIDIA driver, then reinstall them all? Is there a required installation order for the two?
Too long for a comment, but here are some tips I've learned while trying to get NVIDIA drivers to play nice with Ubuntu.
Upgrading a new driver on top of an existing driver gives a partially upgraded installation. You need to remove the previous stuff first:
sudo apt-get remove --purge nvidia-*
sudo rm /etc/X11/xorg.conf # if you ran nvidia-xconfig
Reload the NVIDIA driver as follows (from a virtual terminal, CTRL+ALT+F1):
sudo service lightdm stop # stop your window manager
killall python # kill all running TensorFlow instances to free GPU
sudo modprobe -r nvidia
sudo modprobe nvidia
dmesg | tail -100 # check for error messages
Check logs for any error messages from NVidia
dmesg | grep -i nvidia
lspci | grep -i nvidia
nvidia-smi # make sure this reports version 367.27
Also, there are two ways to install the drivers: using Ubuntu's built-in upgrade with sudo apt-get install nvidia-current, or by getting the tarball from the NVIDIA website. I was not able to get the apt-get route to work for TensorFlow, so I would recommend downloading the drivers from the NVIDIA website.
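A sketch of the website route (the installer filename follows NVIDIA's naming for the 367.27 driver mentioned above; adjust to the file you actually download):
sudo service lightdm stop                  # stop X before installing
chmod +x NVIDIA-Linux-x86_64-367.27.run
sudo ./NVIDIA-Linux-x86_64-367.27.run
sudo service lightdm start
nvidia-smi                                 # should now report driver 367.27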

In Windows and Linux, how to find whether the installed Mercurial is 32- or 64-bit?

On both Windows and Linux, how can I find out from the command line whether the installed Mercurial is 32-bit or 64-bit?
hg --version doesn't show it:
C:\Users\dkanagaraj>hg --version
Mercurial Distributed SCM (version 3.4.2)
(see http://mercurial.selenic.com for more information)
Copyright (C) 2005-2015 Matt Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
On Debian or Ubuntu Linux you can query the "mercurial" package using dpkg -l mercurial.
Here is some sample output:
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-============-============-=================================
ii mercurial 2.8.2-1ubunt amd64 easy-to-use, scalable distributed
Notice the amd64; it shows that it's a 64-bit version.
On RedHat or Fedora and other RPM-based distributions the command is probably
rpm -qi mercurial
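If the package manager can't tell you, a generic check is to inspect the hg executable itself (a sketch; note that on many Linux systems hg is a Python script, so its bitness is really that of the Python interpreter that runs it):
$ file "$(command -v hg)"    # a native binary reports "ELF 32-bit" or "ELF 64-bit"
$ python -c "import struct; print(struct.calcsize('P') * 8)"    # prints 32 or 64 for the interpreter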

How to get the CUDA version?

Is there any quick command or script to check the version of CUDA that is installed?
I found the manual for 4.0 under the installation directory, but I'm not sure whether it reflects the actually installed version.
As Jared mentions in a comment, from the command line:
nvcc --version
(or /usr/local/cuda/bin/nvcc --version) gives the CUDA compiler version (which matches the toolkit version).
From application code, you can query the runtime API version with
cudaRuntimeGetVersion()
or the driver API version with
cudaDriverGetVersion()
As Daniel points out, deviceQuery is an SDK sample app that queries the above, along with device capabilities.
As others note, you can also check the contents of the version.txt file (e.g., on Mac or Linux):
cat /usr/local/cuda/version.txt
However, if another version of the CUDA toolkit is installed besides the one symlinked from /usr/local/cuda, and that other version is earlier in your PATH, this may report an inaccurate version, so use with caution.
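A quick sanity check for that situation (which shows the nvcc your PATH picks up; readlink resolves the /usr/local/cuda symlink):
$ which nvcc
$ readlink -f /usr/local/cuda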
On Ubuntu with CUDA V8:
$ cat /usr/local/cuda/version.txt
You can also get some insights into which CUDA versions are installed with:
$ ls -l /usr/local | grep cuda
which will give you something like this:
lrwxrwxrwx 1 root root 9 Mar 5 2020 cuda -> cuda-10.2
drwxr-xr-x 16 root root 4096 Mar 5 2020 cuda-10.2
drwxr-xr-x 16 root root 4096 Mar 5 2020 cuda-8.0.61
Given a sane PATH, the version cuda points to should be the active one (10.2 in this case).
NOTE: This only works if you are willing to assume CUDA is installed under /usr/local/cuda (which is true for the independent installer with the default location, but not true e.g. for distributions with CUDA integrated as a package). Ref: comment from #einpoklum.
If you run
nvidia-smi
you should find the CUDA Version (the highest CUDA version the installed driver supports) in the top right corner of the command's output. At least that is what I found for CUDA version 10.0.
For CUDA version:
nvcc --version
Or use,
nvidia-smi
For cuDNN version:
For Linux:
Use the following to find the path for cuDNN:
$ whereis cuda
cuda: /usr/local/cuda
Then use this to get the version from the header file:
$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
For Windows:
Use the following to find the path for cuDNN:
C:\>where cudnn*
C:\Program Files\cuDNN7\cuda\bin\cudnn64_7.dll
Then use this to dump the version from the header file:
type "%PROGRAMFILES%\cuDNN7\cuda\include\cudnn.h" | findstr CUDNN_MAJOR
If you're getting two different versions for CUDA on Windows -
Different CUDA versions shown by nvcc and NVIDIA-smi
Use the following command to check the CUDA version installed by conda:
conda list cudatoolkit
And the following command to check the CUDNN version installed by conda:
conda list cudnn
If you want to install or update CUDA and CUDNN through conda, use the following commands:
conda install -c anaconda cudatoolkit
conda install -c anaconda cudnn
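You can also pin a specific version if a particular framework build requires it (the version number below is only an example):
conda install -c anaconda cudatoolkit=11.0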
Alternatively, you can use the following commands to check your CUDA installation:
nvidia-smi
OR
nvcc --version
If you are using tensorflow-gpu through an Anaconda package (you can verify this by opening Python in a console and checking whether it shows Anaconda, Inc. when it starts, or by running which python and checking the location), then manually installing CUDA and CUDNN will most probably not work. You will have to update through conda instead.
If you want to install CUDA, CUDNN, or tensorflow-gpu manually, you can check out the instructions here https://www.tensorflow.org/install/gpu
Other respondents have already described which commands can be used to check the CUDA version. Here, I'll describe how to turn the output of those commands into an environment variable of the form "10.2", "11.0", etc.
To recap, you can use
nvcc --version
to find out the CUDA version.
I think this should be your first port of call.
If you have multiple versions of CUDA installed, this command should print out the version for the copy which is highest on your PATH.
The output looks like this:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Thu_Jun_11_22:26:38_PDT_2020
Cuda compilation tools, release 11.0, V11.0.194
Build cuda_11.0_bu.TC445_37.28540450_0
We can pass this output through sed to pick out just the MAJOR.MINOR release version number.
CUDA_VERSION=$(nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
If nvcc isn't on your path, you should be able to run it by specifying the full path to the default location of nvcc instead.
/usr/local/cuda/bin/nvcc --version
The output of which is the same as above, and it can be parsed in the same way.
Alternatively, you can find the CUDA version from the version.txt file.
cat /usr/local/cuda/version.txt
The output of which
CUDA Version 10.1.243
can be parsed using sed to pick out just the MAJOR.MINOR release version number.
CUDA_VERSION=$(cat /usr/local/cuda/version.txt | sed 's/.* \([0-9]\+\.[0-9]\+\).*/\1/')
Note that sometimes the version.txt file refers to a different CUDA installation than the nvcc --version. In this scenario, the nvcc version should be the version you're actually using.
We can combine these three methods together in order to robustly get the CUDA version as follows:
if nvcc --version &> /dev/null; then
    # Determine CUDA version using the default nvcc binary
    CUDA_VERSION=$(nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p');
elif /usr/local/cuda/bin/nvcc --version &> /dev/null; then
    # Determine CUDA version using the /usr/local/cuda/bin/nvcc binary
    CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p');
elif [ -f "/usr/local/cuda/version.txt" ]; then
    # Determine CUDA version using the /usr/local/cuda/version.txt file
    CUDA_VERSION=$(cat /usr/local/cuda/version.txt | sed 's/.* \([0-9]\+\.[0-9]\+\).*/\1/')
else
    CUDA_VERSION=""
fi
This environment variable is useful for downstream installations, such as when pip installing a copy of pytorch that was compiled for the correct CUDA version.
python -m pip install \
"torch==1.9.0+cu${CUDA_VERSION/./}" \
"torchvision==0.10.0+cu${CUDA_VERSION/./}" \
-f https://download.pytorch.org/whl/torch_stable.html
Similarly, you could install the CPU version of pytorch when CUDA is not installed.
if [ "$CUDA_VERSION" = "" ]; then
MOD="+cpu";
echo "Warning: Installing CPU-only version of pytorch"
else
MOD="+cu${CUDA_VERSION/./}";
echo "Installing pytorch with $MOD"
fi
python -m pip install \
"torch==1.9.0${MOD}" \
"torchvision==0.10.0${MOD}" \
-f https://download.pytorch.org/whl/torch_stable.html
But be careful with this, because you can accidentally install a CPU-only version when you meant to have GPU support: for example, if you run the install script on a server's login node which doesn't have GPUs, while your jobs will be deployed onto nodes which do. In this case, the login node will typically not have CUDA installed.
On Ubuntu:
Try
$ cat /usr/local/cuda/version.txt
or
$ cat /usr/local/cuda-8.0/version.txt
Sometimes the folder is named "cuda-version".
If none of the above works, run
$ ls /usr/local/
and find the correct name of your CUDA folder.
The output should be similar to:
CUDA Version 8.0.61
If you have installed the CUDA SDK, you can run deviceQuery to see the version of CUDA.
If you have PyTorch installed, you can simply run the following code in your IDE:
import torch
print(torch.version.cuda)
On Windows 10, I found nvidia-smi.exe in 'C:\Program Files\NVIDIA Corporation\NVSMI'; after cd-ing into that folder (it was not in the PATH in my case) and running '.\nvidia-smi.exe', it showed the driver and CUDA version.
You might find CUDA-Z useful, here is a quote from their Site:
"This program was born as a parody of another Z-utilities such as CPU-Z and GPU-Z. CUDA-Z shows some basic information about CUDA-enabled GPUs and GPGPUs. It works with nVIDIA Geforce, Quadro and Tesla cards, ION chipsets."
http://cuda-z.sourceforge.net/
On the Support tab there is the URL for the source code, http://sourceforge.net/p/cuda-z/code/, and the download is not actually an installer but the executable itself (no installation, so this is "quick").
This utility provides lots of information, and if you need to know how it was derived, there is the source to look at. There are other similar utilities that you might search for.
One can get the CUDA version by typing the following in the terminal:
$ nvcc -V
# below is the result
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
Alternatively, one can manually check for the version by first finding out the installation directory using:
$ whereis -b cuda
cuda: /usr/local/cuda
And then cd into that directory and check for the CUDA version.
There are several ways to check the version; in my case, the outputs were as follows.
Way 1:
cat /usr/local/cuda/version.txt
Output:
CUDA Version 10.1.243
Way 2:
nvcc --version
Output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
Way 3:
/usr/local/cuda/bin/nvcc --version
Output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Way 4:
nvidia-smi
NVIDIA-SMI 450.36.06 Driver Version: 450.36.06 CUDA Version: 11.0
The outputs are not the same. I don't know why this happens, but as other answers here note, nvidia-smi reports the highest CUDA version the driver supports, and the nvcc found first on the PATH may belong to a different toolkit than the one in /usr/local/cuda.
First you should find where CUDA is installed.
If it's a default installation, like here, the location for Ubuntu should be:
/usr/local/cuda
In this folder you should have the file
version.txt
Open this file with any text editor, or run
cat version.txt
from that folder, or
cat /usr/local/cuda/version.txt
On Windows 11 with CUDA 11.6.1, this worked for me:
cat "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\version.json"
If nvcc --version is not working for you, then use cat /usr/local/cuda/version.txt.
After installing CUDA, one can check the versions with nvcc -V.
I have installed both 5.0 and 5.5, so it gives
Cuda compilation tools, release 5.5, V5.5.0
This command works for both Windows and Ubuntu.
Apart from the ones mentioned above, your CUDA installation path (if not changed during setup) typically contains the version number.
Doing a which nvcc should give the path, and that will give you the version.
PS: This is a quick and dirty way; the above answers are more elegant and will result in the right version with considerable effort.
If you are running on Linux:
dpkg -l | grep cuda
If you have multiple CUDA versions installed, the one loaded in your system is the one associated with nvcc. Therefore, nvcc --version shows what you want.
Open a terminal and run these commands:
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
You will get the CUDA driver version, the CUDA runtime version, and detailed information about the GPU(s).
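Note that recent toolkits no longer ship the samples under /usr/local/cuda/samples; they are distributed via NVIDIA's cuda-samples repository on GitHub. A sketch of the equivalent steps in that case:
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples/Samples/1_Utilities/deviceQuery
make
./deviceQuery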
I get /usr/local: No such file or directory, though nvcc -V gives
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44
I found mine by running
whereis cuda
which gave
cuda: /usr/lib/cuda /usr/include/cuda.h
and then
nvcc --version
which reported:
CUDA Version 9.1.85
Using tensorflow:
import tensorflow as tf
from tensorflow.python.platform import build_info as build
print(f"tensorflow version: {tf.__version__}")
print(f"Cuda Version: {build.build_info['cuda_version']}")
print(f"Cudnn version: {build.build_info['cudnn_version']}")
tensorflow version: 2.4.0
Cuda Version: 11.0
Cudnn version: 8
Programmatically with the CUDA Runtime API C++ wrappers (caveat: I'm the author):
auto v1 = cuda::version::maximum_supported_by_driver();
auto v2 = cuda::version::runtime();
This gives you a cuda::version_t structure, which you can compare and also print/stream e.g.:
if (v2 < cuda::version_t{ 8, 0 } ) {
    std::cerr << "CUDA version " << v2 << " is insufficient." << std::endl;
}
You can check the version of CUDA using
nvcc -V
or you can use
nvcc --version
or you can check where CUDA is installed using
whereis cuda
and then read the version file in the location you got from the above command:
cat <location-from-above>/version.txt
On my cuda-11.6.0 installation, the information can be found in /usr/local/cuda/version.json. It contains the full version number (11.6.0, instead of the 11.6 shown by nvidia-smi).
The information can be retrieved as follows:
python -c 'import json; print(json.load(open("/usr/local/cuda/version.json"))["cuda"]["version"])'
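If jq is installed, an equivalent one-liner is (same file as above):
jq -r '.cuda.version' /usr/local/cuda/version.json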
If there is a version mismatch between nvcc and nvidia-smi, then different versions of CUDA are being used as the driver and the runtime environment.
To ensure the same version of the CUDA drivers is used, you need to get the desired CUDA installation onto the system path.
First run whereis cuda and find the location of the CUDA installation.
Then go to .bashrc, modify the PATH variable, and set the directory search precedence using the LD_LIBRARY_PATH variable.
For instance:
$ whereis cuda
cuda: /usr/lib/cuda /usr/include/cuda.h /usr/local/cuda
CUDA is installed at /usr/local/cuda; now we need to go to .bashrc and add the path variable:
vim ~/.bashrc
export PATH="/usr/local/cuda/bin:${PATH}"
and after this line set the directory search path as:
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
Then save the .bashrc file and refresh it:
$ source ~/.bashrc
This will ensure that nvcc -V and nvidia-smi use the same version of the drivers.
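After reloading, a quick check that the intended binary is found first:
$ which nvcc
/usr/local/cuda/bin/nvcc
$ nvcc -V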