GPU Compatibility for Theano [NVidia GeForce 8800GT] - cuda

I am currently working on an ML project on my personal computer, which has an AMD graphics card. I have an old NVidia 8800GT card that I could plug in for CUDA-accelerated convolution, but I haven't been able to find out whether it is compatible with Theano. Googling has surprisingly been unsuccessful.
I know the 8800GT supports CUDA and I've done some CUDA work with it in the past, but is it compatible with Theano? (or TensorFlow?)
Best,
Joe

Theano has no specific requirements for cards other than that they work with CUDA.
If you want to use the cuDNN layers or other specialized things, then you might need a more recent card; the requirements for those are specified in the documentation for those libraries.
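If you want to check what a particular card offers before wiring it in, a small device query tells you its compute capability, which you can then compare against the requirements of whichever library you care about. This is only a minimal sketch using the CUDA runtime API; the cuDNN figure in the comment reflects its historical minimum (compute capability 3.0), and the 8800GT reports 1.1, well short of that.

// Minimal sketch: query each CUDA device's compute capability so you can
// compare it against the requirements of the library you want to use
// (for example, cuDNN has historically required compute capability >= 3.0).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        std::printf("No usable CUDA device: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // A GeForce 8800GT reports 1.1 here.
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}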

Related

Benefit of higher version of CUDA for devices with lower Compute Capability

I'm using CUDA 7.0 on a Tesla K20X (C.C. 3.5). Is there any benefit to updating to a higher version of CUDA, say 8.0? Is there any compatibility or stability risk in using a higher version of CUDA with devices of (much) lower C.C.?
(The various CUDA versions available on the Nvidia website leave me unsure which one is really the right choice.)
Regarding benefits, newer CUDA toolkit versions usually provide feature benefits (new features and/or enhanced performance) over previous CUDA toolkit versions. However, there are also occasional performance regressions. Specifics can't be given - they may vary based on your exact code. However, there are generally summary blog articles for each new CUDA toolkit version; for example, here is the one for CUDA 8 and here is the one for CUDA 9, describing the new features available.
Regarding compatibility, there should be no risk in moving to a higher CUDA version, regardless of the compute capability of your device, as long as your device is supported. All current CUDA versions in the range of 7-9 support your cc3.5 GPU.
Regarding stability, it is possible that a newer CUDA version may have a bug, but it is also possible that a bug in your existing CUDA version may be fixed in a newer version. Guarantees can't be made here; software almost always has bugs in it. However, it is generally recommended to use the latest CUDA version compatible with your GPU (in the absence of other considerations), as this gives you access to the latest features and at least gives you the best possibility that a historically known issue has been addressed.
I doubt this sort of platitude is any different regardless of the software stack (e.g. compiler, tools, framework, etc.) that you are using. I don't think these considerations are specific or unique to CUDA.
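If you want to confirm, after an upgrade, which driver and runtime versions are actually in play, a quick runtime check covers it. A minimal sketch using the CUDA runtime API (cudaDriverGetVersion / cudaRuntimeGetVersion):

// Sketch: report the CUDA driver and runtime versions actually in use, which
// is handy for verifying that an upgrade (e.g. 7.0 -> 8.0) took effect and
// that the installed driver is new enough for the runtime you built against.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // e.g. 8000 for CUDA 8.0
    cudaRuntimeGetVersion(&runtimeVersion);  // version of the toolkit you compiled against
    std::printf("Driver supports CUDA %d.%d, runtime is CUDA %d.%d\n",
                driverVersion / 1000, (driverVersion % 100) / 10,
                runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}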
I'm using CUDA 7.0 on a Tesla K20X (C.C. 3.5). Is there any benefit to update to a higher version of CUDA, say 8.0 ?
Are you kidding me? There are enormous benefits. It's a world of difference! Just have a look at the CUDA 8 feature descriptions (Parallel4All blog entry). Specifically,
CUDA 8.0 lets you compile with GCC 5.x instead of 4.x
Not only does that save you a life full of pain building your own GCC 4.x (modern distros often don't package it at all, and it's not the system's default compiler); GCC 5.x also has lots of improvements, not the least of which is full C++14 support for host-side code.
CUDA 8 lets you use C++11 lambdas in device code
(actually, CUDA 7.5 lets you do that and this is rounded off in CUDA 8)
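For illustration, roughly what that looks like - a minimal sketch assuming nvcc is invoked with the --expt-extended-lambda flag (the device-lambda switch introduced with CUDA 7.5/8); the kernel and helper names here are made up for the example:

// Sketch: a C++11 lambda used as device code; compile with
//   nvcc --expt-extended-lambda lambda_demo.cu
#include <cstdio>
#include <cuda_runtime.h>

// Generic kernel that applies a callable to each index.
template <typename F>
__global__ void apply(F f, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) f(i);
}

// The __device__ lambda is defined in an ordinary host function and passed
// straight to the kernel template above.
void fill_squares(int* d, int n) {
    auto square = [=] __device__ (int i) { d[i] = i * i; };
    apply<<<1, 32>>>(square, n);
}

int main() {
    const int n = 8;
    int* d = nullptr;
    cudaMalloc(&d, n * sizeof(int));
    fill_squares(d, n);

    int h[n];
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) std::printf("%d ", h[i]);  // prints 0 1 4 9 ...
    std::printf("\n");
    cudaFree(d);
    return 0;
}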
NVCC internal improvements
Not that I can list these, but hopefully NVIDIA continues working on its compiler, equipping it with better optimization logic.
Much faster compilation
NVCC is markedly faster with CUDA 8. It might be up to 2x, but even if it's just 1.5x - that really improves your quality of life as a developer...
Shall I go on? ... all of the above applies regardless of your compute capability. And CC 3.5 or 3.7 is nothing to sneeze at anyway.

CUDA driver version is insufficient for runtime version [duplicate]

I have a very simple Toshiba laptop with an i3 processor. Also, I do not have any expensive graphics card. In the display settings, I see Intel(HD) Graphics as the display adapter. I am planning to learn some CUDA programming, but I am not sure if I can do that on my laptop, as it does not have any of Nvidia's CUDA-enabled GPUs.
In fact, I doubt if I even have a GPU o_o
So, I would appreciate it if someone could tell me whether I can do CUDA programming with the current configuration and, if possible, also let me know what Intel(HD) Graphics means.
At the present time, Intel graphics chips do not support CUDA. It is possible that, in the near future, these chips will support OpenCL (which is a standard that is very similar to CUDA), but this is not guaranteed, and their current drivers do not support OpenCL either. (There is an Intel OpenCL SDK available, but, at the present time, it does not give you access to the GPU.)
Newest Intel processors (Sandy Bridge) have a GPU integrated into the CPU core. Your processor may be a previous-generation version, in which case "Intel(HD) graphics" is an independent chip.
The Portland Group has a commercial product called CUDA-x86: a hybrid compiler that takes CUDA C/C++ code and can either run it on a GPU or use SIMD on the CPU, fully automatically and without any intervention from the developer. Hope this helps.
Link: http://www.pgroup.com/products/pgiworkstation.htm
If you're interested in learning a language which supports massive parallelism, you'd better go for OpenCL, since you don't have an NVIDIA GPU. You can run OpenCL on Intel CPUs, but at best you'll learn to program SIMD units.
Optimization on CPUs and GPUs is different. I really don't think you can use the Intel card for GPGPU.
Intel HD Graphics is usually the on-CPU graphics chip in newer Core i3/i5/i7 processors.
As far as I know it doesn't support CUDA (which is a proprietary NVidia technology), but OpenCL is supported by NVidia, ATi and Intel.
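To see which of those vendors' OpenCL implementations are actually installed on a machine, a small enumeration program is enough. This is a rough sketch against the standard OpenCL host API; each installed runtime (NVidia, AMD, Intel) shows up as its own platform:

/* Sketch: list every OpenCL platform and its devices. Build against the
 * OpenCL headers/library of any installed SDK (NVIDIA, AMD or Intel). */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        printf("No OpenCL platforms found\n");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", p, name);

        cl_device_id devices[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev) != CL_SUCCESS)
            ndev = 0;
        for (cl_uint d = 0; d < ndev; ++d) {
            char dname[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("  Device %u: %s (%s)\n", d, dname,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU/other");
        }
    }
    return 0;
}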
In 2020, ZLUDA was created, which provides a CUDA API for Intel GPUs. It is not production-ready yet, though.

When will OpenCL 1.2 for NVIDIA hardware be available?

I would have asked this question on the NVIDIA developer forum but since it's still down maybe someone here can tell me something.
Does anybody know if there is already OpenCL 1.2 support in NVIDIAs driver? If not, is it coming soon?
I don't have a GeForce 600 series card to check myself. According to Wikipedia there are already some cards that could support it though.
It somewhat seems like NVIDIA does not mention OpenCL a whole lot anymore and just focuses on CUDA C/C++ (see StreamComputing.eu). I guess it makes sense to them but I would like to see some more OpenCL love.
Thanks
NVidia's latest SDK (v4.2.9) does not support OpenCL 1.2 with regard to the header files or library it provides. I considered that this might just be the SDK itself: as you point out, the GeForce 600 series appears to support it in hardware. Unfortunately I don't own any 600 series card, but OpenCL64.dll supplied with the latest drivers (v306.23) does not export OpenCL 1.2 symbols. Further, I can find no trace of the new symbols (such as "clLinkProgram") as strings in the driver package. Although this does not rule out the possibility of bootstrapping 1.2 functionality in the driver via an ICD Loader, there is no evidence that there is a 1.2 implementation there, and this would be undocumented and unsupported.
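For a quick check on any given driver, querying the platform version string at runtime shows what the installed NVidia ICD actually advertises. A minimal sketch against the standard OpenCL host API (the version string in the comment is just an example of the format):

/* Sketch: ask each installed OpenCL runtime what version it reports.
 * The CL_PLATFORM_VERSION string is the quickest way to see whether the
 * driver actually exposes OpenCL 1.2 (e.g. "OpenCL 1.1 CUDA 4.2.1"). */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);
    for (cl_uint p = 0; p < nplat; ++p) {
        char vendor[128], version[128];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
        clGetPlatformInfo(platforms[p], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
        printf("%s: %s\n", vendor, version);
    }
    return 0;
}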
As to when OpenCL 1.2 will be officially supported by NVidia, unfortunately I don't know the answer to this, and would be equally keen to find out.
In the meantime you might consider an alternative OpenCL 1.2 implementation for development; for example the Intel SDK 2013 Beta (Intel CPU) or AMD APP SDK v2.7 (AMD CPU or AMD/ATI GPU).
As an aside, personally I am considering switching from NVidia GPUs to ATI for production purposes, partly based on AMD's investment in OpenCL and also on arguments comparing "bang for buck" between NVidia and the latest AMD cards: NVIDIA vs AMD: GPGPU performance
The NVIDIA hotfix driver version 350.05 (April 2015) adds support for OpenCL 1.2.
With the 350.12 (also April 2015) release, NVidia has clarified the situation:
With this driver release NVIDIA has also posted a bit more information on their OpenCL 1.2 driver. The driver has not yet passed OpenCL conformance testing over at Khronos, but it is expected to do so. OpenCL 1.2 functionality will only be available on Kepler and Maxwell GPUs, with Fermi getting left behind.
It looks like the 700 series supports OpenCL 1.2
I'm still looking for which driver I'll need to get that working.

Can't run CUDA nor OpenCL on GeForce 540M

I have a problem running the samples provided by Nvidia in their GPU Computing SDK (there's a library of compiled sample codes).
For CUDA I get the message "No CUDA-capable device is detected"; for OpenCL there's an error from the function that should find OpenCL-capable units.
I have installed all three parts from Nvidia to develop with OpenCL - devdriver for Win7 64-bit v.301.27, CUDA toolkit 4.2.9 and GPU Computing SDK 4.2.9.
I think this might have to do with Optimus technology, which reroutes output from the Nvidia GPU through the Intel one to render things (this notebook also has an Intel HD 3000 accelerator). But in the Nvidia control panel I set it to use the high-performance Nvidia GPU, set the power profile to prefer maximum performance, and for PhysX I changed from automatic selection to the Nvidia processor again. Nothing has changed though; those samples won't run (not even those targeted at GF8000 cards).
I would like to play around with OpenCL and see what it is capable of, but without the ability to test things it's useless. I have found some info about this on forums, but it was mostly about Linux users, where you need Bumblebee to access the Nvidia GPU. There's no such problem on Windows, however; drivers are better and so you can access it without dark magic (or so I thought until I found this problem).
My laptop has a GeForce 540M as well, in an Optimus configuration since my Sandy Bridge CPU also has Intel's integrated graphics. To run CUDA codes, I have to:
Install NVIDIA Driver
Go to NVIDIA Control Panel
Click 3D Settings -> Manage 3D Settings -> Global Settings
In the Preferred Graphics processor drop down, select "High-performance NVIDIA processor"
Apply the settings
Note that the instructions above apply the settings for all applications, so you don't have to worry about CUDA errors any more. But it will drain more battery.
Here is a video recap as well. Good luck!
OK, this has proven to be a totally crazy solution. I was wondering whether something was hooking in between the hardware and the application, and the only thing that came to my mind was AV software. I'm using Comodo with sandbox and Defense+ on, and after turning them off I could run all those samples. What's more, only Defense+ needs to be turned off.
Now I just wonder how many apps could have been blocked from accessing that GPU..
That's most likely because of the architecture of Optimus. So I'd suggest you read the
NVIDIA CUDA Developer Guide for NVIDIA Optimus Platforms, especially the section "Querying for a CUDA Device", which addresses this issue, I believe.
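Roughly what that section boils down to in code - a sketch (not the guide's exact listing) that checks cudaGetDeviceCount() explicitly, so an Optimus machine that is routing everything through the Intel GPU fails with a readable message instead of a cryptic sample error:

// Sketch: explicit device-count check before doing any CUDA work.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err == cudaErrorNoDevice || count == 0) {
        std::printf("No CUDA-capable device is detected (is the NVIDIA GPU enabled?)\n");
        return 1;
    }
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Found device %d: %s\n", i, prop.name);
    }
    return 0;
}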

CUDA, or something similar that is available for an Intel graphics card?

I want to learn GPGPU and CUDA programming, but I know that only Nvidia cards support it. My laptop has an Intel HD graphics card, so I need to find out whether it is possible to do GPGPU or something like that with an Intel graphics card. Thanks for any information.
To develop in CUDA your options are:
Use an NVIDIA GPU - all NVIDIA server, desktop and laptop GPUs have supported CUDA since around 2006; since your laptop does not have one, you could try using one remotely.
Use PGI CUDA x86, not free but does what you want.
Use gpuocelot to execute the PTX on the CPU, that's an open-source project in development so YMMV.
You cannot do GPGPU on Intel HD Graphics cards today, unless you do shader-based programming (which was common practice in the days before CUDA and OpenCL).
In my experience, the PGI x86 stuff seems to have fallen flat and I'm not aware of anyone using it. Ocelot is another attempt at the same, but it is very researchy and not fully robust at this point.
The only OpenCL compliant devices from Intel are the latest CPUs (Sandy Bridge and Ivy Bridge).
What CPU do you have in your system?
CUDA is Nvidia-specific, for starters. Older CUDA toolkits shipped a GPU emulator (device emulation mode), so you could use that without a graphics card, though it was slow; it has been dropped from recent toolkits. A faster option is the x86 implementation mentioned above. Either of these lets you learn the basics of CUDA without using the GPU at all.
If you want to learn GPGPU in general, you still have the option of learning OpenCL, which is more widely supported, including by AMD, Intel, Nvidia, etc. E.g. Intel has an OpenCL SDK (the target is the CPU then, but I guess that is irrelevant for you).
After learning the basics of either CUDA or OpenCL, the other will be easy to learn. Neither the syntax nor the semantics is the same, but it is an easy step forward as the concepts are the same.