PyTorch Hardware Requirements - deep-learning

What is the minimum Compute Capability required by the latest PyTorch version?
I have an NVIDIA GeForce 820M with compute capability 2.1. How can I run PyTorch models on my GPU (if it isn't supported natively)?

Looking at this page, PyTorch (even its fairly old versions) supports CUDA 7.5 and upwards. And looking at this page, CUDA 7.5 requires a minimum Compute Capability of 2.0. So, on paper, your machine should support some older version of PyTorch built against CUDA 7.5 or, preferably, 8.0 (as of writing this answer, the latest version requires a minimum of CUDA 9.2).
However, PyTorch also requires cuDNN. cuDNN 6.0 works with CUDA 7.5, but cuDNN 6.0 itself requires Compute Capability 3.0. So PyTorch most likely won't work on your machine. (Thanks to Robert Crovella for pointing out the cuDNN part.)
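The reasoning in this answer can be sketched as a small check. The version thresholds below are the ones cited above, not an authoritative support matrix; consult NVIDIA's documentation for the real numbers.

```python
# Minimum compute capability (CC) per version, as cited in the answer above.
# These are illustrative; check NVIDIA's support matrices for real values.
CUDA_MIN_CC = {"7.5": (2, 0), "8.0": (2, 0)}
CUDNN_MIN_CC = {"6.0": (3, 0)}  # cuDNN 6.0 requires CC 3.0


def pytorch_can_run(device_cc, cuda_version, cudnn_version):
    """True if a GPU with compute capability `device_cc` (a (major, minor)
    tuple) meets both the CUDA and cuDNN minimums assumed above."""
    return (device_cc >= CUDA_MIN_CC[cuda_version]
            and device_cc >= CUDNN_MIN_CC[cudnn_version])


# GeForce 820M: CC 2.1 passes the CUDA 7.5 check but fails cuDNN's 3.0 floor.
print(pytorch_can_run((2, 1), "7.5", "6.0"))  # False
print(pytorch_can_run((3, 0), "7.5", "6.0"))  # True
```

On a machine with PyTorch installed, `torch.cuda.get_device_capability(0)` reports the actual (major, minor) tuple for your GPU.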

Related

Does CUDA 11.2 support backward compatibility with an application compiled on CUDA 10.2?

I have the base image for my application built with nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04. I have to run that application on a cluster whose CUDA version is:
NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2
My application is not giving me the right prediction results for the GPU-trained model (it returns the base score as the prediction output). However, it does return accurate prediction results for the CPU-trained model, so I suspect a CUDA version incompatibility between the two. I want to know whether an application compiled with CUDA 10.2 works correctly under CUDA 11.2.
Yes, it is possible for an application compiled with CUDA 10.2 to run in an environment that has CUDA 11.2 installed. This is part of the CUDA compatibility model/system.
Otherwise, there isn't enough information in this question to diagnose why your application is behaving the way you describe. For that, SO expects a minimal reproducible example.
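The compatibility rule in this answer can be illustrated with a minimal sketch: a driver that advertises CUDA version D can run applications built with any toolkit version T <= D. The helper below is a hypothetical simplification that only compares version numbers; it does not model minor-version or forward-compatibility packages.

```python
# Illustrative sketch of the CUDA backward-compatibility rule: an app built
# with an older toolkit runs under a driver advertising a newer CUDA version.
def parse_version(v):
    """Turn a dotted version string like '11.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))


def driver_supports_toolkit(driver_cuda, toolkit_cuda):
    """True if the driver's advertised CUDA version covers the toolkit
    the application was compiled with (toolkit <= driver)."""
    return parse_version(toolkit_cuda) <= parse_version(driver_cuda)


print(driver_supports_toolkit("11.2", "10.2"))  # True: the asker's case
print(driver_supports_toolkit("10.2", "11.2"))  # False: driver too old
```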

Find supported GPU

I want to know if the latest CUDA version, which is 8.0, supports the GPUs in my computer, a GeForce GTX 970 and a Quadro K4200 (a dual-GPU system); I couldn't find the info online.
In general, how to find if a CUDA version, especially the newly released version, supports a specific Nvidia GPU?
Thanks!
In general, how to find if a CUDA version, especially the newly released version, supports a specific Nvidia GPU?
All CUDA versions from CUDA 7.0 to CUDA 8.0 support GPUs that have a compute capability of 2.0 or higher. Both of your GPUs are in this category.
Prior to CUDA 7.0, some older GPUs were supported also. You can find details of that here.
Note that CUDA 8.0 announced that development for compute capability 2.0 and 2.1 is deprecated, meaning that support for these (Fermi) GPUs may be dropped in a future CUDA release.
In general, a list of currently supported CUDA GPUs and their compute capabilities is maintained by NVIDIA here although the list occasionally has omissions for very new GPUs just released.
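The check described in this answer is simple enough to write down. The compute capabilities below are the commonly published values for the asker's two GPUs (GTX 970 is Maxwell, K4200 is Kepler); verify them against NVIDIA's CUDA GPUs list before relying on them.

```python
# For CUDA 7.0 through 8.0, any GPU with compute capability >= 2.0 is
# supported, per the answer above.
MIN_CC_CUDA_7_TO_8 = (2, 0)

# Commonly published compute capabilities for the asker's GPUs
# (check NVIDIA's CUDA GPUs list for authoritative values).
GPUS = {
    "GeForce GTX 970": (5, 2),  # Maxwell
    "Quadro K4200": (3, 0),     # Kepler
}

for name, cc in GPUS.items():
    print(f"{name}: supported by CUDA 7.0-8.0 = {cc >= MIN_CC_CUDA_7_TO_8}")
```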

How can I make tensorflow run on a GPU with capability 2.x?

I've successfully installed tensorflow (GPU) on Linux Ubuntu 16.04 and made some small changes in order to make it work with the new Ubuntu LTS release.
However, I thought (who knows why) that my GPU met the minimum requirement of compute capability 3.5 or greater. That was not the case, since my GeForce 820M has only 2.1. Is there a way to make the GPU version of tensorflow work with my GPU?
I am asking because apparently there was no way to make the GPU version of tensorflow work on Ubuntu 16.04 either, but by searching the internet I found out that was not the case, and indeed I almost got it working, were it not for this unsatisfied requirement. Now I am wondering whether this compute capability issue can be fixed as well.
Recent GPU versions of tensorflow require compute capability 3.5 or higher (and use cuDNN to access the GPU).
cuDNN also requires a GPU of cc3.0 or higher:
cuDNN is supported on Windows, Linux and MacOS systems with Pascal, Kepler, Maxwell, Tegra K1 or Tegra X1 GPUs.
Kepler = cc3.x
Maxwell = cc5.x
Pascal = cc6.x
TK1 = cc3.2
TX1 = cc5.3
Fermi GPUs (cc2.0, cc2.1) are not supported by cuDNN.
Older GPUs (e.g. compute capability 1.x) are also not supported by cuDNN.
Note that there has never been a version of cuDNN, or any version of TF, that officially supported NVIDIA GPUs below cc 3.0. The initial version of cuDNN required cc 3.0 GPUs, and so did the initial version of TF.
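The architecture-to-compute-capability mapping listed above can be collected into a quick lookup. This is illustrative only: CC varies by chip within a family (Kepler, for example, spans 3.0 to 3.7), so the values here are the family minimums cited in the answer.

```python
# Family-minimum compute capabilities from the list above (illustrative).
ARCH_CC = {
    "Kepler": (3, 0),
    "Maxwell": (5, 0),
    "Pascal": (6, 0),
    "Tegra K1": (3, 2),
    "Tegra X1": (5, 3),
    "Fermi": (2, 0),  # not supported by cuDNN
}
CUDNN_MIN_CC = (3, 0)  # cuDNN has always required cc 3.0 or higher

for arch, cc in ARCH_CC.items():
    print(f"{arch} (cc {cc[0]}.{cc[1]}): cuDNN supported = {cc >= CUDNN_MIN_CC}")
```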
Sep. 2017 update: There is no way to do this without problems and pain. I tried every approach, and even applied the trick below to force it to run, but in the end I had to give up. If you are serious about Tensorflow, just go ahead and buy a 3.0 compute capability GPU.
This is a trick to force tensorflow to run on a 2.0 compute capability GPU (not official):
Find the file
Lib/site-packages/tensorflow/python/_pywrap_tensorflow_internal.pyd
(or Lib/site-packages/tensorflow/python/_pywrap_tensorflow.pyd)
Open it with Notepad++ or something similar
Search for the first occurrence of 3\.5.*5\.2 using regex
You will see a 3.0 just before the 3.5 ... 5.2 match; change it to 2.0
I made the change as above and can do simple calculations on the GPU, but I get stuck with strange, unknown issues when trying practical projects (those projects run fine on a 3.0 compute capability GPU).
I found out how to install tensorflow-gpu on a compute capability 2.1 NVIDIA GeForce 525M for Python: the trick is simply to use an archived version of tensorflow. I used 1.9.0.
The pip install command is
pip install tensorflow-gpu==1.9.0
and the cuDNN version is 7.4.1

Caffe using GPU with NVidia Quadro 2200

I'm using the deep learning framework Caffe on an Ubuntu 14.04 machine. I compiled Caffe with the CPU_ONLY option, i.e. I disabled GPU and CUDA usage. I have an NVIDIA Quadro K2200 graphics card and CUDA version 5.5.
I would like to know if it is possible to use Caffe with CUDA enabled on my GPU. On the NVIDIA page, it is written that the Quadro K2200 has a compute capability of 5.0. Does that mean I can only use it with CUDA versions up to release 5.0? If it is possible to use Caffe with the GPU enabled on a Quadro K2200, how can I choose the appropriate CUDA version?
CUDA version is not the same thing as Compute Capability. For one, the current CUDA version is 7.5 (prerelease), while the highest CC is only 5.2. The K2200 supports CC 5.0.
The difference:
CUDA version means the library/toolkit/SDK/etc version. You should always use the highest one available.
Compute Capability is your GPU's ability to perform certain instructions, etc. Every CUDA feature has a minimum CC requirement. When you write a CUDA program, its CC requirement is the maximum of the requirements of all the features you use.
That said, I've no idea what Caffe is, but a quick search shows they require CC of 2.0, so you should be good to go. CC 5.0 is pretty recent, so very few things won't work on it.
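The "maximum over features" rule stated above can be sketched directly. The feature names and thresholds below are illustrative placeholders (double precision and dynamic parallelism are real CUDA features with these commonly cited minimums, but the table is not exhaustive).

```python
# Hypothetical per-feature minimum compute capabilities (illustrative).
FEATURE_MIN_CC = {
    "basic_kernels": (1, 0),
    "double_precision": (1, 3),
    "dynamic_parallelism": (3, 5),
}


def program_min_cc(features_used):
    """A program's CC requirement is the max over the features it uses."""
    return max(FEATURE_MIN_CC[f] for f in features_used)


print(program_min_cc(["basic_kernels", "double_precision"]))      # (1, 3)
print(program_min_cc(["basic_kernels", "dynamic_parallelism"]))   # (3, 5)
```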

Is it possible for lower CUDA toolkit version with higher driver version?

The hardware seems to be newer than the newest hardware supported by the driver that ships with the older toolkit. Is it possible to use this newer hardware with a newer driver, but with the older CUDA toolkit?
For example,
the hardware is an NVIDIA GTS 450,
the CUDA toolkit is CUDA 2.3; because the driver that ships with CUDA 2.3 does not seem to support the GTS 450, I want to install a newer driver while keeping the CUDA 2.3 toolkit.
Does this work?
In general, older CUDA toolkits should be compatible with newer GPU drivers. CUDA toolkit 2.3 is very old however, so I don't know what other issues you may run into. I would suggest updating to a newer CUDA toolkit as well.
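The practical check behind this answer is that a toolkit works as long as the installed driver is at least as new as the minimum driver that toolkit requires. The table entries below are illustrative values for two modern toolkits, not an authoritative list; each toolkit's release notes state the real minimum driver.

```python
# Illustrative minimum-driver table (check each toolkit's release notes
# for the authoritative values).
MIN_DRIVER_FOR_TOOLKIT = {
    "10.2": (440, 33),
    "11.2": (460, 27),
}


def toolkit_runs(toolkit, installed_driver):
    """True if the installed driver version meets the toolkit's minimum.
    Driver versions are (major, minor) tuples, e.g. (460, 32)."""
    return installed_driver >= MIN_DRIVER_FOR_TOOLKIT[toolkit]


print(toolkit_runs("10.2", (460, 32)))  # True: newer driver, older toolkit
print(toolkit_runs("11.2", (440, 33)))  # False: driver predates the toolkit
```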