Can I run CUDA C code without an NVIDIA GPU? [duplicate] - cuda

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
What do I have to do to be able to do CUDA programming on a MacBook Air with Intel HD 4000 graphics?
Set up a virtual machine? Buy an external NVIDIA card? Is it possible at all?

If you have a new(-ish) MacBook Air you could perhaps use an external NVIDIA graphics device, such as an external Thunderbolt PCIe case.
Otherwise it will not be possible to run CUDA programs on non-NVIDIA hardware, since CUDA is a proprietary framework.
You may also be able to run CUDA code by converting it to OpenCL first (for example with the Swan framework).
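To illustrate why such a conversion is often mechanical for simple kernels, here is a minimal sketch (the kernel, its name, and its signature are illustrative, not from the question) of the same vector-add kernel in CUDA C, with its OpenCL C equivalent shown in a comment:

```cuda
// CUDA C version: one thread per element.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

/* Roughly equivalent OpenCL C kernel -- the body is identical;
   only the qualifiers and the thread-index query change:

   __kernel void vec_add(__global const float *a,
                         __global const float *b,
                         __global float *c,
                         int n)
   {
       int i = get_global_id(0);
       if (i < n)
           c[i] = a[i] + b[i];
   }
*/
```

The host-side code (context and queue setup, buffer management, kernel launch) differs far more between the two APIs than the kernels themselves, and that is where converters such as Swan do most of their work.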

Related

understanding HPC Linpack (CUDA edition) [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I want to know what role the CPUs play when HPC Linpack (CUDA version) is running. They receive data from other cluster nodes and perform CPU-GPU data exchange, don't they? So their work doesn't influence performance, does it?
In typical usage both GPU and CPU are contributing to the numerical calculations. The host code will use MKL or another BLAS implementation for host-generated numerical results, and the device code will use CUBLAS or something related for device numerical results.
A version of HPL is available to registered developers in source code format, so you can inspect all this yourself.
And, as you say, the CPUs are also involved in other administrative activities, such as internode data exchange in a multi-node setting.
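As a hedged sketch (not actual HPL source code) of how such a host/device split of the numerical work can look, a large matrix multiply can be partitioned column-wise, with one slab computed by cuBLAS on the GPU while the host BLAS handles the rest:

```cuda
// Illustrative split of C = A*B: the first ncols_gpu columns of B/C
// go to cuBLAS on the device, the remainder to the host (a plain
// triple loop stands in for a real host BLAS such as MKL dgemm).
// All names and the partitioning scheme here are made up.
#include <cublas_v2.h>
#include <cuda_runtime.h>

void split_dgemm(int n, int ncols_gpu,
                 const double *A, const double *B, double *C,   // host copies
                 const double *dA, const double *dB, double *dC, // device copies
                 cublasHandle_t h)
{
    const double one = 1.0, zero = 0.0;

    // GPU part: C[:, 0:ncols_gpu] = A * B[:, 0:ncols_gpu]
    cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N,
                n, ncols_gpu, n, &one, dA, n, dB, n, &zero, dC, n);

    // CPU part, running concurrently with the asynchronous GPU call:
    // C[:, ncols_gpu:n] = A * B[:, ncols_gpu:n]  (column-major layout)
    for (int j = ncols_gpu; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            double s = 0.0;
            for (int k = 0; k < n; ++k)
                s += A[i + k * n] * B[k + j * n];
            C[i + j * n] = s;
        }

    cudaDeviceSynchronize();  // wait for the GPU slab before using dC
}
```

The point of the sketch is only that both processors contribute floating-point work at the same time; the real HPL code tunes the split ratio and overlaps transfers as well.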

Platform vs Software Framework [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
CUDA advertises itself as a parallel computing platform. However, I'm having trouble seeing how it's any different from a software framework (a collection of libraries used for some functionality). I am using CUDA in class, and all I'm seeing is that it provides C libraries with functions that help with parallel computing on the GPU, which fits my definition of a framework. So tell me, how is a platform like CUDA different from a framework? Thank you.
CUDA, the hardware platform, is the actual GPU and its scheduler ("CUDA architecture"). However, CUDA is also a programming language, very close to C. To work with software written in CUDA you also need an API for calling these functions, allocating memory, etc., from your host language. So CUDA is a platform, a language, and a set of APIs.
If the latter (a set of APIs) matches your definition of a software framework, then the answer is simply yes, as both options are true.

Difference between CUDA level and compute level? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
What is the difference between these two definitions?
If not, does it mean I will never be able to run code built for sm > 21 on a GPU with compute capability 2.1?
That's correct. For a compute capability 2.1 device, the maximum code specification (virtual architecture/target architecture) you can give it is -arch=sm_21. Code compiled for -arch=sm_30, for example, would not run correctly on a cc 2.1 device.
For more information, you can take a look at the nvcc manual section which covers virtual architectures, as well as the manual section which covers the compile switches specifying virtual architecture and compile targets (code architecture).
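As a concrete illustration of those switches (assuming a cc 2.1 device and a CUDA toolkit that still supports it; these are example command lines, not from the question), the relevant nvcc invocations look like this:

```shell
# Builds code a compute capability 2.1 GPU can run:
nvcc -arch=sm_21 kernel.cu -o app

# Equivalent long form: virtual architecture (compute_20) plus
# the real target (sm_21). There is no compute_21 virtual arch:
nvcc -gencode arch=compute_20,code=sm_21 kernel.cu -o app

# Built for sm_30: contains only Kepler machine code (and PTX no
# older than compute_30), so a cc 2.1 device cannot run it:
nvcc -arch=sm_30 kernel.cu -o app
```

Embedding PTX for a low virtual architecture (compute_20) is what lets a binary run, via JIT compilation, on newer devices; the reverse direction never works.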

what is the optimized cufft library for tesla k20m card [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
In our application we have an FFT part that we would like to port to the GPU. We have a Tesla K20m GPU. Which version of cuFFT is optimized for the K20m card?
There is no specific version of the cuFFT library that is optimized for a specific card. Just use the standard cuFFT library that ships with CUDA 5.0 (or the CUDA 5.5 RC, if you like).
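For reference, using cuFFT on a K20m is the same as on any other supported GPU; a minimal sketch of an in-place 1D complex-to-complex transform (the size and variable names are illustrative) looks like this:

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

int main(void)
{
    const int N = 1024;
    cufftComplex *d_data;
    cudaMalloc(&d_data, sizeof(cufftComplex) * N);
    // ... fill d_data from the host with cudaMemcpy ...

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);                // one 1D C2C transform
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);  // in-place forward FFT

    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}
```

Compile with nvcc and link against the library with -lcufft; the library picks code paths for the installed GPU at run time.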

How can I download the latest version of the GPU computing SDK? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I want to download the latest version of the GPU Computing SDK that is compatible with the system I work on. The CUDA driver and runtime version are 4.10, but I cannot find the link. I can only find the CUDA Toolkit, which is not what I want. Can anyone help me with a direct link? Thanks.
CUDA 4.1 is the latest CUDA release. The GPU Computing SDK for this release can be found at the bottom of this page: http://developer.nvidia.com/cuda-toolkit-41
The GPU Computing SDK is supposed to be available at this page: http://developer.nvidia.com/gpu-computing-sdk
But it looks like NVIDIA has messed up the webpages a bit: the CUDA Toolkit and the GPU Computing SDK pages point at each other, with neither offering the SDK.