I'd like to use pylearn2, Theano, and scikit-neuralnetwork to build neural network models. But my friend told me that all these modules can only run on NVIDIA-GPU-based platforms (because they import the pycuda module). I only have an AMD GPU (an R9 270, with an AMD FX-8300 CPU), and I wish to take advantage of the AMD GPU to speed up computing. Can I use any of the modules mentioned above? Or are there any substitutes I can use to build neural network models? Thanks!
Currently, Theano only supports NVIDIA GPUs. There is a partial implementation of an OpenCL backend that would support AMD GPUs, but it is incomplete and unsupported.
scikit-neuralnetwork builds on PyLearn2, and PyLearn2 builds on Theano, so none of those packages can run on AMD GPUs.
Torch appears to already have some OpenCL support. Caffe's OpenCL support appears to be under development.
I'm using a laptop which has an Intel Corporation HD Graphics 520.
Does anyone know how to set it up for deep learning, specifically PyTorch? I have seen that if you have an Nvidia GPU you can install CUDA, but what do you do when you have an Intel GPU?
PyTorch doesn't support anything other than NVIDIA CUDA and, lately, AMD ROCm.
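A quick way to check what your installed build can actually use (a minimal sketch; ROCm builds of PyTorch reuse the torch.cuda namespace, so the same call covers both):

```python
import torch

# torch.cuda.is_available() is True on NVIDIA CUDA builds and also on
# AMD ROCm builds, which expose the accelerator through the cuda namespace.
print("Accelerator available:", torch.cuda.is_available())

# torch.version.hip is a string on ROCm builds and None on CUDA builds.
print("Built with ROCm/HIP:", torch.version.hip is not None)
```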
Intel's support for PyTorch that was given in the other answers is exclusive to the Xeon line of processors, and it's not that scalable either with regard to GPUs.
Intel's oneAPI (whose oneDNN library was formerly known as MKL-DNN), however, has support for a wide range of hardware, including Intel's integrated graphics, but the full support is not yet implemented in PyTorch as of 10/29/2020 (PyTorch 1.7).
But you still have other options. For inference, you have a couple of them.
DirectML is one of them: basically, you convert your model into ONNX and then use the DirectML execution provider to run your model on the GPU (which in our case uses DirectX 12 and works only on Windows for now!).
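A minimal sketch of that flow, assuming the onnxruntime-directml package and a placeholder single-layer model (substitute your own network and input shape):

```python
import torch
import onnxruntime as ort  # pip install onnxruntime-directml

# Placeholder model; replace with your trained network and a matching dummy input.
model = torch.nn.Linear(10, 2).eval()
dummy = torch.randn(1, 10)
torch.onnx.export(model, dummy, "model.onnx")

# "DmlExecutionProvider" routes execution through DirectML / DirectX 12.
session = ort.InferenceSession("model.onnx", providers=["DmlExecutionProvider"])
inputs = {session.get_inputs()[0].name: dummy.numpy()}
print(session.run(None, inputs))
```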
Your other option is to use OpenVINO or TVM, both of which support multiple platforms, including Linux, Windows, and Mac.
OpenVINO and TVM consume ONNX models, so you first need to convert your model to the ONNX format and then use them; a sketch of the OpenVINO route follows.
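A minimal sketch, assuming the openvino package (2022+ Python API) and the model.onnx file exported above:

```python
import numpy as np
from openvino.runtime import Core  # pip install openvino

core = Core()
model = core.read_model("model.onnx")  # OpenVINO reads ONNX files directly

# "CPU" always works; "GPU" targets Intel integrated/discrete graphics
# when the drivers and the OpenVINO GPU plugin are present.
compiled = core.compile_model(model, device_name="CPU")

# Compiled models are callable; the input shape must match the exported model.
result = compiled(np.random.randn(1, 10).astype(np.float32))
print(list(result.values()))
```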
Lately (as of 2023), IREE (the Intermediate Representation Execution Environment, via torch-mlir in this case) can be used as well.
Intel provides optimized libraries for deep and machine learning if you are using one of their later processors. A starting point would be this post, which is about getting started with the Intel optimization of PyTorch. They provide more information about this in their AI workshops.
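The optimization described in that post is packaged as Intel Extension for PyTorch; a minimal CPU-inference sketch, assuming the intel-extension-for-pytorch package is installed and using a placeholder model:

```python
import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch

# Placeholder model; substitute your own trained network.
model = torch.nn.Linear(10, 2).eval()

# ipex.optimize applies Intel-specific kernel and memory-layout optimizations.
model = ipex.optimize(model)

with torch.no_grad():
    print(model(torch.randn(1, 10)))
```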
I see that many torch codebases use:
require 'cudnn'
require 'cunn'
require 'cutorch'
What are these packages used for? What is their relation to CUDA?
All three are used for the CUDA GPU implementation for torch7.
cutorch is the CUDA backend for torch7, offering various kinds of support for CUDA implementations in torch, such as CudaTensor for tensors in GPU memory. It also adds some helpful features for interacting with the GPU.
cunn provides additional modules on top of the nn library, mainly transparent CUDA versions of the nn modules. This makes it easy to switch neural networks to the GPU and back via :cuda()!
cudnn is a wrapper around NVIDIA's cuDNN library, an optimized library for CUDA containing various fast GPU implementations, such as convolutional networks and RNN modules.
Not sure what 'cutorch' is, but from my understanding:
CUDA: library to use GPUs.
cudnn: library to do neural-net stuff on GPUs (probably uses CUDA to talk to the GPUs)
source: https://www.quora.com/What-is-CUDA-and-cuDNN
CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.
And cuDNN is a CUDA deep neural network library, which is GPU-accelerated and built on the underlying CUDA framework.
I have a very simple Toshiba laptop with an i3 processor. Also, I do not have an expensive graphics card. In the display settings, I see Intel(HD) Graphics as the display adapter. I am planning to learn some CUDA programming. But I am not sure if I can do that on my laptop, as it does not have any NVIDIA CUDA-enabled GPU.
In fact, I doubt I even have a GPU o_o
So I would appreciate it if someone could tell me whether I can do CUDA programming with my current configuration, and, if possible, also let me know what Intel(HD) Graphics means.
At the present time, Intel graphics chips do not support CUDA. It is possible that, in the near future, these chips will support OpenCL (which is a standard that is very similar to CUDA), but this is not guaranteed, and their current drivers do not support OpenCL either. (There is an Intel OpenCL SDK available, but, at the present time, it does not give you access to the GPU.)
The newest Intel processors (Sandy Bridge) have a GPU integrated into the CPU core. Your processor may be a previous-generation version, in which case "Intel(HD) Graphics" is an independent chip.
The Portland Group has a commercial product called CUDA-x86. It is a hybrid compiler that takes CUDA C/C++ code and can either run it on the GPU or use SIMD on the CPU, fully automatically, without any intervention from the developer. Hope this helps.
Link: http://www.pgroup.com/products/pgiworkstation.htm
If you're interested in learning a language which supports massive parallelism, better to go for OpenCL, since you don't have an NVIDIA GPU. You can run OpenCL on Intel CPUs, but at best you can learn to program SIMD units. For instance, the sketch below enumerates what OpenCL sees on your machine.
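A minimal sketch, assuming the pyopencl package is installed:

```python
import pyopencl as cl  # pip install pyopencl

# On a machine without an NVIDIA GPU this typically lists the CPU, plus the
# Intel iGPU if the Intel OpenCL runtime is installed.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name,
              "| type:", cl.device_type.to_string(device.type))
```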
Optimization on CPUs and GPUs is different. I really don't think you can use an Intel card for GPGPU.
Intel HD Graphics is usually the on-CPU graphics chip in newer Core i3/i5/i7 processors.
As far as I know, it doesn't support CUDA (which is a proprietary NVIDIA technology), but OpenCL is supported by NVIDIA, ATI, and Intel.
In 2020, ZLUDA was created, which provides a CUDA API on top of Intel GPUs. It is not production-ready yet, though.
I am currently working on an ML project on my personal computer, which has an AMD graphics card. I have an old NVIDIA 8800GT card that I could plug in for CUDA-accelerated convolution, but I haven't found out whether it is compatible with Theano. Googling has, surprisingly, been unsuccessful.
I know the 8800GT supports CUDA, and I've done some CUDA work with it in the past, but is it compatible with Theano? (Or TensorFlow?)
Best,
Joe
Theano has no specific requirements for cards other than "it works with CUDA".
If you want to use the cuDNN layers or other specialized things, then you might need a more recent card; the requirements for those are specified in the documentation for those libraries.
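A quick check, adapted from the test in Theano's own GPU documentation, that tells you whether ops actually land on the card:

```python
# Run with: THEANO_FLAGS='device=gpu,floatX=float32' python check_gpu.py
# (newer Theano releases use device=cuda instead of device=gpu)
import numpy
import theano
import theano.tensor as T

x = theano.shared(numpy.random.rand(1000, 1000).astype("float32"))
f = theano.function([], T.exp(x))
f()

# If the compiled graph contains Gpu* ops, Theano is using the GPU.
print([type(node.op).__name__ for node in f.maker.fgraph.toposort()])
```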
I wish to run Caffe on a 32-core machine.
Does Caffe scale up to the available number of cores to utilize them best?
Although there are 32 cores, can I make Caffe use only a selected number of cores?
Generally, Caffe doesn't support multiple CPUs/cores in its source code; instead it relies on BLAS routines.
Thus the answers to your questions are the following:
Yes, but only through the BLAS configuration, i.e. your BLAS version should be compiled with multithreading support (see related discussions: here or here - at the second link you can also find some modifications for Caffe itself).
Also through BLAS: if it was compiled with OpenMP support, you can set OMP_NUM_THREADS to the desired value, as sketched below.
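A minimal sketch of the second point, assuming pycaffe and a Caffe build linked against an OpenMP-enabled BLAS (the thread counts are illustrative):

```python
import os

# BLAS libraries read these variables when they are loaded, so set them
# before importing caffe (or anything else that pulls in the same BLAS).
os.environ["OMP_NUM_THREADS"] = "8"       # OpenMP-built BLAS
os.environ["OPENBLAS_NUM_THREADS"] = "8"  # plain OpenBLAS builds

import caffe  # pycaffe; now restricted to 8 BLAS threads
```

The same applies to the command-line tools: export the variable in your shell before launching the caffe binary.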
Caffe does not, but you can use Intel Caffe, which is optimized for CPUs and supports multi-node execution:
https://github.com/intel/caffe/wiki/Multinode-guide