Is there a way to compile CUDA programs on a machine that does not have an NVIDIA graphics card? [duplicate]

I tried to install the CUDA toolkit without the display driver on CentOS 6. It installed properly. I am able to compile, but the compiled program does not actually perform any operation and I get garbage values from an array addition. cudaGetDeviceCount(&count) returns 0, which means I don't have any card on my machine.

You can install the CUDA toolkit without installing the driver.
You can then compile CUDA codes that use the runtime API.
You will not be able to run those codes unless you have a proper CUDA driver and GPU installed in the machine, however.
On older CUDA toolkits, codes that depend on the driver API will also not be compilable in this configuration without additional work. Newer CUDA toolkits provide stub versions of the driver libraries, which can be linked against.
This answer covers the method to install the CUDA toolkit without the driver.
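Regarding the garbage values in the question: with no driver or GPU present, runtime API calls fail rather than doing any work, so unchecked errors just look like garbage output. A minimal sketch along these lines (illustrative only) makes the failure visible instead:

    // Check CUDA runtime errors explicitly so a missing driver/GPU shows up
    // as an error message instead of silently bogus results.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            // With no GPU/driver this typically reports cudaErrorNoDevice
            // or cudaErrorInsufficientDriver.
            printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("CUDA devices found: %d\n", count);
        return 0;
    }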

If you just want to run the codes and profile their performance and other parameters, it would be helpful to install the GPGPU-Sim simulator. It doesn't need any graphics card on your machine.

Related

cudart_static - when is it necessary?

Since newer drivers ship with the CUDA runtime (I can choose 9.1 or 9.2 in the drivers download page), my question is: should my library (which uses a CUDA kernel internally) be shipped with -lcudart_static?
I had issues launching kernels compiled with 9.2 on systems which used 9.1 CUDA drivers. What's the most 'compatible' way of ensuring my library will run everywhere a recent CUDA driver is installed? (I'm already compiling for a virtual architecture)
Since newer drivers ship with the CUDA runtime (I can choose 9.1 or 9.2 in the drivers download page)
No, that's incorrect. That choice on the drivers download page is related to the fact that each CUDA version has a minimum required driver version associated with it. It does not mean that the driver ships with the CUDA runtime (stated another way, the driver does not install libcudart.so on Linux and never has; with some careful experimentation on a clean install, you can prove this to yourself).
Some additional comments:
-lcudart_static is actually the default for current/recent versions of nvcc. You can discover this by reading the nvcc manual. Therefore, by default, your executable, when compiled/built with nvcc, should already be statically linked to the CUDA runtime library corresponding to the version of nvcc that you are using for compilation. The reason you might need to specify this (or something like it) is if you are building an application with, e.g., the GNU toolchain on Linux rather than with nvcc.
The purpose of static linking to the CUDA runtime library is, as you surmise, so that an application can be built in such a way that it does not need an installation of the CUDA toolkit to run properly. It only needs a machine with a proper GPU driver install.
The most compatible way to ensure that an application will run on a range of machines with a range of GPU driver installs is to compile your application using the oldest CUDA toolkit required to meet the needs of the earliest GPU driver in the range you intend to cover. Again, you can refer to the table here.
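As a diagnostic aid (purely illustrative, not required by anything above), an application can report at startup whether the installed driver is new enough for the runtime it was built against; a 9.1-era driver paired with a 9.2 runtime would show up here:

    // Compare the driver's supported CUDA version with the runtime version
    // this binary was built/linked against.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int driverVersion = 0, runtimeVersion = 0;
        cudaDriverGetVersion(&driverVersion);    // reports 0 if no driver is installed
        cudaRuntimeGetVersion(&runtimeVersion);  // version of the linked CUDA runtime
        // Values are encoded as major*1000 + minor*10, e.g. 9020 for CUDA 9.2.
        printf("driver supports CUDA %d, runtime is CUDA %d\n",
               driverVersion, runtimeVersion);
        if (driverVersion < runtimeVersion) {
            printf("driver is older than the runtime; kernel launches may fail\n");
            return 1;
        }
        return 0;
    }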

Installing CUDA as a non-root user with no GPU

I have a desktop without a GPU, on which I would like to develop code; and a machine on some cluster which has a GPU, and CUDA installed, but where I really can't "touch" anything and on which I won't run an IDE etc. I don't have root on any of the machines, woe is me.
So, essentially, I want to be able to compile and build my CUDA code on my own GPU-less desktop machine, then just copy it and test it on the other machine.
Can this be done despite the two hindering factors? I seem to recall the CUDA installer requiring the presence of a GPU, and playing with the kernel and doing other root-y stuff.
Notes:
I'll be using the standalone installer, not a package.
I'm on Fedora 22 with an x86_64 CPU.
Assuming you want to develop codes that use the CUDA runtime API, you can install the CUDA toolkit on a system that does not have a GPU. Using the runfile installer method, simply answer "no" when prompted to install the driver.
If you want to compile codes (successfully) that use the CUDA driver API, that process will require a libcuda.so on your machine. This file is installed by the driver installer. There are various methods to "force" the driver installer to run on a machine without a GPU. You can get started by extracting the driver runfile installer (or downloading it separately) and passing the --help command line switch to the installer to learn about some of the options.
These methods will not allow you to run those codes on a machine with no GPU of course. Furthermore, the process of moving a compiled binary from one machine to another, and expecting it to run correctly, is troublesome in my opinion. Therefore my suggestion would be to re-compile the code on a target machine. Otherwise getting a compiled binary to run from one machine to the next is a question that is not unique to CUDA, and is outside the scope of my answer.
If you have no intention of running the codes on the non-GPU machine, and are willing to recompile on the target machine, then you can probably develop driver API codes even without libcuda.so (alternatively, the CUDA toolkit installer places a libcuda.so stub in /usr/local/cuda/lib64/stubs that you could link against just for compilation-test purposes). If you don't link your driver API code against -lcuda you'll get a link error, of course, but it should not matter, given the previously stated caveats.
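For a compilation test of the driver API, something like the following minimal sketch (hypothetical file name and build line) only needs libcuda.so, or the stub under /usr/local/cuda/lib64/stubs, at link time. Actually running it still requires the real driver library and a GPU:

    // hypothetical file name: drv_check.cpp
    // possible build line:
    //   nvcc drv_check.cpp -o drv_check -L/usr/local/cuda/lib64/stubs -lcuda
    #include <cstdio>
    #include <cuda.h>

    int main() {
        // The stub only satisfies the linker; running this needs the real
        // driver's libcuda.so. With a driver but no usable GPU, cuInit()
        // returns an error code.
        CUresult res = cuInit(0);
        if (res != CUDA_SUCCESS) {
            printf("cuInit failed with CUresult %d\n", (int)res);
            return 1;
        }
        int count = 0;
        cuDeviceGetCount(&count);
        printf("driver API sees %d device(s)\n", count);
        return 0;
    }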
Fedora 22 is not officially supported by CUDA 7.5 or prior. YMMV.
If you don't run the driver installer, you don't need to be a root user for any of this. Of course the install locations you pass to the installer must be those that your user privilege allows access to.

Does CUDA compilation rely on presence of graphics card? [duplicate]

This question already has an answer here: Is CUDA hardware needed at compile time? (closed 8 years ago)
Suppose, hypothetically, that I want to test compile, but not run, CUDA code on a machine that has no CUDA capable GPU present. Should I be able to do that with only the CUDA Toolkit installed? Or does NVCC rely on the presence of graphics card hardware in any way?
Certainly on Linux, you can install the CUDA toolkit and compile code without a GPU installed. There are nuances to this. For example, if your code depends on a library that only gets installed by the driver (such as libraries required by CUDA code using the driver API), then there are additional bridges to cross. But ordinary CUDA runtime API code can be compiled this way just fine; nvcc does not depend on a GPU.
I haven't actually tried this in Windows, but I think it should be possible to install the CUDA toolkit without a CUDA GPU.
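For example, an ordinary runtime API file along these lines (a sketch; the build line is just one possibility) compiles with nvcc on a GPU-less machine, and only fails at run time when it asks the driver for a device:

    // possible build line (no GPU needed to compile):
    //   nvcc -arch=sm_35 add_one.cu -o add_one
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void add_one(int *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1;
    }

    int main() {
        const int n = 256;
        int *d = NULL;
        cudaError_t err = cudaMalloc((void **)&d, n * sizeof(int));
        if (err != cudaSuccess) {
            // On a machine with no GPU/driver this is where it stops, with an error code.
            printf("cudaMalloc failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        add_one<<<(n + 127) / 128, 128>>>(d, n);
        err = cudaDeviceSynchronize();
        printf("kernel status: %s\n", cudaGetErrorString(err));
        cudaFree(d);
        return 0;
    }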

CUDA 5.0 cuda-gdb on Linux: needs dedicated GPU?

With a fresh CUDA 5.0 Linux install on CentOS 5.5, I am not able to use cuda-gdb. So I am wondering whether you still need a dedicated GPU for the Linux cuda-gdb. I tried it with the VESA device driver for X11, but get the same result. Profiling works, running the app works, but trying to run cuda-gdb gives:
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x2aaaaaaab000
Any suggestions?
cuda-gdb still needs a GPU that is not used by the graphical environment (e.g. if you are running Gnome/KDE/etc. you need a system with several GPUs; not all of them have to be NVIDIA GPUs).
This particular message is not about this problem; you can ignore it. cuda-gdb will tell you if it fails because no GPU can be used for debugging.

CUDA with Optimus just to access GPGPU

I have a Dell XPS L502 with the NVIDIA 525M graphics card. I am only interested in using the GPGPU capabilities of the card for now.
I installed Ubuntu 12.04 as a dual boot with the Windows 7 that came with the machine and followed several installation procedures for installing the CUDA driver and developer kit from NVIDIA (many re-installs of Ubuntu). In all cases the display drops to 640x480 resolution. As best I can determine, this has something to do with Optimus technology and Linux. I tried Bumblebee to no avail.
I really don't care about using the NVIDIA card to drive the display. Is there any way that I can just install the NVIDIA drivers so that a program can use the CUDA capabilities of the graphics card while I still get full resolution on the display?
I had a similar issue with my Alienware M11xR2, and posted the solution on the NVIDIA Forums. Unfortunately the forums are down at the moment but essentially the process is as follows:
Install the Nvidia Drivers, but when prompted to modify your X11 Config, select 'No'. This is because the Nvidia card cannot be used as a display device.
Install the CUDA SDK and run one of the samples as root. I found this to be a necessary step. After this you should be able to execute further CUDA programs as a normal user.
Hope that helps.
With the new release of CUDA 5 comes the installation guide; there you have just one file that installs the driver, toolkit and SDK (even NVIDIA Nsight). One thing that got my attention is that you also have Optimus options in the installation process.
I also have an Alienware M14x, and I understand your problem, but I also wanted the display drivers to work for me, so I didn't try too hard on that.
Maybe you could give that a try and comment with the rest of us.
Here you can look for the CUDA 5 release candidate: CUDA 5
and here is the installation guide (maybe give this a read first): CUDA 5 Starting Guide for Linux.