What is the difference between these two definitions?
If not, does that mean I will never be able to run code built for sm > 21 on a GPU with compute capability 2.1?
That's correct. For a compute capability 2.1 device, the maximum code specification (virtual architecture/target architecture) you can give it is -arch=sm_21. Code compiled for -arch=sm_30, for example, would not run correctly on a cc 2.1 device.
For more information, you can take a look at the nvcc manual section which covers virtual architectures, as well as the manual section which covers the compile switches specifying virtual architecture and compile targets (code architecture).
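As a sketch of what those switches look like in practice (the file names here are hypothetical, and this assumes an nvcc toolchain new enough to still target cc 2.x):

```shell
# Build foo.cu for a compute capability 2.1 device.
# -gencode splits the virtual architecture (compute_20) from the
# real architecture (sm_21); -arch=sm_21 alone is shorthand for
# roughly the same thing.
nvcc -gencode arch=compute_20,code=sm_21 -o foo foo.cu

# By contrast, a binary built like this would NOT run on a cc 2.1 device:
nvcc -arch=sm_30 -o foo_sm30 foo.cu
```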
I tried creating a new environment as shown here:
https://github.com/microsoft/computervision-recipes/blob/master/SETUP.md
I checked that CUDA is working, my GPU is detected, and everything seems fine. But when I fit the model, nvidia-smi shows no utilization on the GPU and the CPU is at 100%.
Torch requires setting the CUDA device for each model and training process; you need to move all weights and other tensors to the CUDA device by hand, like this:
import torch

a = torch.tensor([1, 2, 3])
a = a.to('cuda')  # moves the tensor to the GPU
But in your case, you may need to use the method provided by the library. I recommend checking this one: https://github.com/microsoft/computervision-recipes/blob/1bb489af757fde7c773e16fab87b24305cff4457/utils_cv/tracking/references/fairmot/trains/base_trainer.py#L32
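As a minimal sketch (assuming PyTorch is installed, and falling back to CPU when no GPU is available), moving both the model and its inputs to the same device looks like this; the model and shapes are made up for illustration:

```python
import torch

# Pick the GPU if one is visible, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# .to(device) moves the model's parameters (weights) onto that device.
model = torch.nn.Linear(4, 2).to(device)

# Inputs must live on the same device as the model.
x = torch.randn(8, 4, device=device)

out = model(x)  # runs on the GPU only if both model and input are there
print(out.device.type)
```

If the model stays on the CPU (or the inputs do), training silently runs on the CPU, which matches the symptom of nvidia-smi showing no GPU utilization.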
What do I have to do to be able to do CUDA programming on a MacBook Air with Intel HD 4000 graphics?
Set up a virtual machine? Buy an external NVIDIA card? Is it possible at all?
If you have a new(-ish) MacBook Air, you could perhaps use an external NVIDIA graphics device like this:
external Thunderbolt PCIe case
Otherwise it will not be possible to run CUDA programs on non-NVIDIA hardware (since it is a proprietary framework).
You may also be able to run CUDA code by converting it to OpenCL first (for example with this framework: Swan framework).
There are jobs running on the GPU, and if I run another program on top of them, it stops at the point of cudaDeviceSynchronize(). Why does this happen?
Currently only one process is allowed to use a GPU at a given point in time. There is no fairness, nor a time quantum to preempt a "job" in case it runs for hours on a GPU; the basic policy is first come, first served.
But you may use the CUDA Multi-Process Service (MPS), which basically allows multiple processes to share a single GPU:
https://docs.nvidia.com/deploy/pdf/CUDA_Multi_Process_Service_Overview.pdf
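As a rough sketch (assuming an NVIDIA driver is installed and you have the rights to manage the GPU), starting the MPS control daemon looks like this; the /tmp paths are just example locations:

```shell
# Tell the daemon and its clients where to meet (example paths).
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-mps-log

# Start the MPS control daemon in the background (-d = daemonize).
nvidia-cuda-mps-control -d

# CUDA programs launched with the same CUDA_MPS_PIPE_DIRECTORY now
# share the GPU through MPS instead of serializing behind each other.

# Shut the daemon down when done:
echo quit | nvidia-cuda-mps-control
```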
I want to know what role the CPUs play when HPC Linpack (the CUDA version) is running. They receive data from other cluster nodes and perform CPU-GPU data exchange, don't they? So their work doesn't influence performance, does it?
In typical usage both GPU and CPU are contributing to the numerical calculations. The host code will use MKL or another BLAS implementation for host-generated numerical results, and the device code will use CUBLAS or something related for device numerical results.
A version of HPL is available to registered developers in source code format, so you can inspect all this yourself.
And as you say the CPUs are also involved in various other administration activities such as internode data exchange in a multinode setting.
CUDA advertises itself as a parallel computing platform. However, I'm having trouble seeing how it's any different from a software framework (a collection of libraries used for some functionality). I am using CUDA in class, and all I'm seeing is that it provides C libraries with functions that help with parallel computing on the GPU, which fits my definition of a framework. So tell me, how is a platform like CUDA different from a framework? Thank you.
CUDA, the hardware platform, is the actual GPU and its scheduler (the "CUDA architecture"). However, CUDA is also a programming language, which is very close to C. To work with software written in CUDA you also need an API for calling those functions, allocating memory, etc. from your host language. So CUDA is a platform, a language, and a set of APIs.
If the latter (a set of APIs) matches your definition of a software framework, then the answer is simply yes: both are true.