How to connect an NVIDIA CUDA PCI-E graphics card over USB? [closed]

I would like to run some CUDA calculations, but I only have a simple notebook without an NVIDIA GPU.
Is there any USB adapter that would let me connect an NVIDIA graphics card to my notebook?
It would be great if such a device existed: I could attach my NVIDIA card to it, plug it into my computer, start the calculation, and disconnect it from the laptop until the calculations are finished.

Unfortunately not.
USB is very, very slow compared to the internal bus a graphics card sits on in a PC, so whatever the GPU gained in calculation speed would be wasted on the long time needed to copy the data there and back.
USB is also message-based: it doesn't let your computer see the GPU card's memory (or the other way around), so you would effectively need another computer on the GPU end to unwrap the messages.
There is a new high-speed connector called Thunderbolt, which is (essentially) the PCIe bus inside your computer brought out to a socket. It would allow an external device (like a GPU) to act as if it were directly connected to the bus, but today it is only found on a few expensive models, and not many devices exist for it (yet).
Amazon now offers GPUs on its cloud service, but that might be a bit expensive for just learning / playing around.
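To put a number on the copy cost, you can measure the host-to-device bandwidth of whatever link you have; a minimal sketch using the CUDA runtime (the 256 MB buffer size is arbitrary; build with nvcc):

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        const size_t bytes = (size_t)256 << 20;   /* 256 MB test buffer */
        float *h, *d;
        cudaMallocHost((void **)&h, bytes);       /* pinned host memory */
        cudaMalloc((void **)&d, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("host->device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));
        /* For comparison, USB 2.0 manages roughly 0.03-0.04 GB/s in practice. */

        cudaFreeHost(h);
        cudaFree(d);
        return 0;
    }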

Related

CUDA: performance throttling even though clocks are stable [closed]

I'm benchmarking a kernel on an RTX 3070 Ti by running it a few thousand times. I've tried to set stable clocks using:
    nvidia-smi -lgc 1575
    nvidia-smi -lmc 9251
Despite this, I find that performance varies randomly by up to 28%. I've used Nsight Systems to record what happens, and sometimes I can see a sharp drop after a few thousand iterations (it's fast and stable until a step transition, after which it is slow and stable). However, I can't see any corresponding dip in clock speeds.
I've tried just watching nvidia-smi -q output (updated every 0.05 seconds) to check for either down-clocking or reports of throttling; the temperature stays below 50°C.
I've run nsys with --gpu-metrics-device=0; it shows the graphics clock stable at 1575 MHz.
I've run the same benchmark under Nsight Compute, recording details from every 1000th invocation; it shows that the memory clock is also stable.
I don't have a rigorous test, but it feels like it might be thermal, in the sense that performance is worse if I repeat the test immediately after loading the GPU, whereas if I give it a minute to cool off, performance is better.
Any idea what sort of throttling this might be, and how to prevent or at least measure it?
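For reference, the benchmark loop has roughly this shape, with each iteration timed by CUDA events (a simplified sketch; the kernel below is a stand-in for the real one):

    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void kernel_under_test(float *data, int n)   /* stand-in workload */
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] = data[i] * 1.0001f + 0.5f;
    }

    int main(void)
    {
        const int n = 1 << 22, iters = 5000;
        float *d;
        cudaMalloc((void **)&d, (size_t)n * sizeof(float));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        for (int it = 0; it < iters; ++it) {
            cudaEventRecord(start, 0);
            kernel_under_test<<<(n + 255) / 256, 256>>>(d, n);
            cudaEventRecord(stop, 0);
            cudaEventSynchronize(stop);
            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);
            printf("%d %f\n", it, ms);   /* a step change in these times shows the slowdown */
        }

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d);
        return 0;
    }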

Understanding HPC Linpack (CUDA edition) [closed]

I want to know what role the CPUs play while HPC Linpack (CUDA version) is running. They receive data from other cluster nodes and perform the CPU-GPU data exchange, don't they? So their work doesn't influence performance, does it?
In typical usage, both the GPU and the CPU contribute to the numerical calculations. The host code will use MKL or another BLAS implementation for host-generated numerical results, and the device code will use cuBLAS or something related for device numerical results.
A version of HPL is available to registered developers in source code format, so you can inspect all this yourself.
And, as you say, the CPUs are also involved in various other administrative activities, such as internode data exchange in a multi-node setting.
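To make the division of labor concrete, here is a toy sketch of that hybrid pattern (this is not HPL's actual code; the matrix size, the split point, and the choice of OpenBLAS for the host side are all illustrative):

    /* Toy hybrid DGEMM: the host computes the leading column block of C
     * with a CPU BLAS while the GPU computes the trailing block with
     * cuBLAS, overlapping the two. Build (one possibility):
     *   nvcc hybrid_gemm.cu -lcublas -lopenblas */
    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <cblas.h>
    #include <stdlib.h>

    int main(void)
    {
        const int n = 2048, split = 1536;   /* host does columns 0..split-1 */
        const int rem = n - split;          /* device does the remaining columns */
        double *A = (double *)calloc((size_t)n * n, sizeof(double));
        double *B = (double *)calloc((size_t)n * n, sizeof(double));
        double *C = (double *)calloc((size_t)n * n, sizeof(double));
        double one = 1.0, zero = 0.0;

        double *dA, *dB, *dC;
        cudaMalloc((void **)&dA, (size_t)n * n * sizeof(double));
        cudaMalloc((void **)&dB, (size_t)n * rem * sizeof(double));
        cudaMalloc((void **)&dC, (size_t)n * rem * sizeof(double));
        cudaMemcpy(dA, A, (size_t)n * n * sizeof(double), cudaMemcpyHostToDevice);
        cudaMemcpy(dB, B + (size_t)n * split, (size_t)n * rem * sizeof(double),
                   cudaMemcpyHostToDevice);

        cublasHandle_t handle;
        cublasCreate(&handle);
        /* GPU part: C[:, split:] = A * B[:, split:] (launch is asynchronous) */
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, rem, n,
                    &one, dA, n, dB, n, &zero, dC, n);

        /* CPU part runs concurrently: C[:, :split] = A * B[:, :split] */
        cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans, n, split, n,
                    1.0, A, n, B, n, 0.0, C, n);

        /* Copy-back waits for the GPU, joining the two halves of C. */
        cudaMemcpy(C + (size_t)n * split, dC, (size_t)n * rem * sizeof(double),
                   cudaMemcpyDeviceToHost);

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(A); free(B); free(C);
        return 0;
    }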

Why do I have to manually activate my GPUs? [closed]

I installed a new Intel Xeon Phi in a workstation that already has 3 NVIDIA GPUs installed. To make the Phi card work, I have to load Intel's MIC kernel module into my Linux kernel, and with that the Phi card works fine. However, every time we reboot the system we simply can't use the GPUs; the error message is that the system couldn't find the CUDA driver.
However, the only thing I need to do to fix this is to run one of the CUDA binaries or some NVIDIA command with sudo, e.g. "sudo nvidia-smi". Then everything works fine, both CUDA and Intel's Xeon Phi.
Does anybody know why? Without my sudo command, other people just cannot use the GPUs, which is kind of annoying. How can I fix this?
CUDA requires that certain resource files be established for GPU usage, and this is covered in the Linux getting started guide (step 6 under runfile installation -- note the recommended startup script).
You may also be interested in this article, which focuses on the same subject -- how to automatically establish the resource files at startup.
Once these files are established correctly, an ordinary user (non-root) will be able to use the GPUs without any other intervention.
I have no idea why the Xeon Phi installation might have affected this in your particular setup.
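For completeness, running any program that initializes the CUDA runtime as root has the same effect as the "sudo nvidia-smi" workaround, because the driver creates the missing /dev/nvidia* files during initialization; a minimal sketch (the file name is mine):

    /* cuda_touch.c: a successful CUDA runtime initialization performed
     * as root creates the missing /dev/nvidia* resource files, the same
     * effect as "sudo nvidia-smi". Build: nvcc cuda_touch.c -o cuda_touch
     * and run it from a boot script such as rc.local. */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);   /* forces driver initialization */
        if (err != cudaSuccess) {
            fprintf(stderr, "CUDA init failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("found %d CUDA device(s)\n", n);
        return 0;
    }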

What is the absolutely fastest way to output a signal to external hardware in a modern PC? [closed]

I was wondering: what is the absolutely fastest way (lowest latency) to produce an external signal (for example, a CMOS state change from 0 to 1 on an electrical wire connected to another device) from a PC, counting from the moment the CPU assembler program knows that the signal must be produced?
I know that network devices, USB, and VGA monitor output have large latency compared to other interfaces (SATA, PCI-E). Which interface, or what hardware modification, can provide near-zero output latency from, let's say, an assembler program?
I don't know if it is really the fastest interface available, because that also depends on your definition of "external", but http://en.wikipedia.org/wiki/InfiniBand certainly comes close to what your question aims at. Latency is 200 nanoseconds and below in certain scenarios.

How to shut off a graphics card's output signal but still keep it linked for CUDA? (it is a GeForce) [closed]

When using CUDA on a PC's graphics card (usually the only card), it's known that Windows or Linux will reset the card if it stops responding for 5 or 2 seconds (depending on the OS version); this mechanism is called Timeout Detection and Recovery (TDR).
MSDN says a graphics card that produces an output signal is restricted by TDR so that the video signal is not interrupted for long.
If Windows does that, my CUDA program (which takes much longer than 2 or 5 seconds to run on the graphics card) cannot complete.
To avoid this, I enabled the onboard graphics (Biostar HD 880G mainboard) and attached the monitor to the onboard output.
The system now recognizes both graphics adapters (the NVIDIA GTX 460 and the onboard AMD HD 4250), but the 2-second restriction on the GTX 460 is still there; I tried my monitor on both cards, and both give an output signal.
How can I make the discrete graphics card stop giving a video signal (or stop the OS from sending it one) while it stays connected to the system for CUDA?
http://msdn.microsoft.com/zh-cn/library/windows/hardware/ff569918(v=vs.85).aspx
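One common way to live with the watchdog, separate from cutting the card's video output, is to split the long computation into many short kernel launches so that no single launch exceeds the 2- or 5-second limit; a minimal sketch with a placeholder kernel:

    #include <cuda_runtime.h>

    __global__ void step(float *data, int n, int iter)   /* placeholder work */
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] = data[i] * 0.999f + (float)iter * 1e-6f;
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *d;
        cudaMalloc((void **)&d, (size_t)n * sizeof(float));
        cudaMemset(d, 0, (size_t)n * sizeof(float));

        /* 10000 short launches instead of one multi-minute launch: each
         * launch finishes well inside the watchdog limit, so TDR never fires. */
        for (int iter = 0; iter < 10000; ++iter)
            step<<<(n + 255) / 256, 256>>>(d, n, iter);

        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }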