Ethminer Ubuntu 16 not using NVIDIA GPU - ethereum

I have followed the instructions here and successfully built and set up geth.
Ethminer seems to work, except it doesn't use the Titan X GPU and the mining rate is only 341022 H/s.
Also, when I try to use the -G option, ethminer says it is an invalid argument; the -G flag also doesn't appear in ethminer's help output.

Your GPU must have a minimum amount of memory to perform mining. The current DAG size is above 2 GB, which means you can't mine on a GPU with less than 2 GB of memory. Upgrade to a GPU with more memory (a minimum of 4 GB is preferable).
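For scale: the Ethash DAG starts at about 1 GiB and grows by roughly 8 MiB per epoch, where an epoch is 30,000 blocks. Below is a rough host-side estimate, as a sketch using the spec's constants and ignoring the prime-size adjustment the real algorithm applies; the block height is just an example:
// dag_size.cpp -- rough Ethash DAG-size estimate (illustrative sketch).
#include <cstdio>
#include <cstdint>

int main() {
    const uint64_t DATASET_BYTES_INIT   = 1ULL << 30; // ~1 GiB at epoch 0
    const uint64_t DATASET_BYTES_GROWTH = 1ULL << 23; // ~8 MiB per epoch
    const uint64_t EPOCH_LENGTH         = 30000;      // blocks per epoch

    uint64_t block = 4000000;                         // example block height
    uint64_t epoch = block / EPOCH_LENGTH;
    uint64_t bytes = DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch;
    printf("epoch %llu: DAG is roughly %.2f GiB\n",
           (unsigned long long)epoch, bytes / (1024.0 * 1024 * 1024));
    return 0;
}
At a block height around 4,000,000 this comes out a little above 2 GiB, consistent with the answer above.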

Related

Unsupported gpu architecture compute_30 on a CUDA 5 capable gpu

I'm currently trying to compile Darknet with the latest CUDA toolkit, which is version 11.1. I have a GPU capable of running CUDA version 5, a GeForce 940M. However, while rebuilding darknet using the latest CUDA toolkit, it said:
nvcc fatal : Unsupported GPU architecture 'compute_30'
compute_30 is for version 3, so how can it fail while my GPU can run version 5?
Is it possible that my code detected my Intel graphics card instead of my NVIDIA GPU? If that's the case, is it possible to change its detection?
Support for compute_30 has been removed in versions after CUDA 10.2. So if you are using nvcc, make sure to use this flag to target the correct architecture in the build system for darknet:
-gencode=arch=compute_50,code=sm_50
You may also need this flag to avoid a warning that the architectures are deprecated:
-Wno-deprecated-gpu-targets
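For example, a direct nvcc invocation combining both flags might look like the following (kernel.cu is a placeholder file name):
nvcc -gencode=arch=compute_50,code=sm_50 -Wno-deprecated-gpu-targets -c kernel.cu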
I added the following:
# Patch darknet's Makefile in place: overwrite the ARCH lines
# (0-based indices 15 and 16) so nvcc targets compute capability 3.5.
with open('Makefile', 'r') as makefile:
    list_of_lines = makefile.readlines()
list_of_lines[15] = list_of_lines[14]
list_of_lines[16] = "ARCH= -gencode arch=compute_35,code=sm_35 \\\n"
with open('Makefile', 'w') as makefile:
    makefile.writelines(list_of_lines)
right before the
#Compile Darknet
!make
command. That seemed to work!

Autolykos GPU Miner software no longer recognizing NVIDIA GPU on Ubuntu 18.04

A month or so ago, the Autolykos miner (https://github.com/ergoplatform/Autolykos-GPU-miner) compiled and ran. Now it suddenly doesn't work, because the .cu files don't recognize any installed NVIDIA GPU. I made NO changes to the Autolykos code; it just stopped working. I merely dropped into the source folder (as described by the README) and typed make. But when I install and make all of the CUDA examples, THOSE run just fine. Running on Ubuntu 18.04 with a GeForce TITAN X. For example, the utility "deviceQuery" returns the following:
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX TITAN X"
CUDA Driver Version / Runtime Version 10.1 / 10.1
CUDA Capability Major/Minor version number: 5.2
...
Whereas the output at startup of the mining binary spits out ONE line and quits:
Error Checking GPU: Using 0 GPU devices
Any suggestions would be welcome...
SOLVED: After re-compiling the CUDA code from NVIDIA, the miner is working. I suspect that a system update broke something.
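For anyone hitting the same symptom, a minimal check of what the CUDA runtime itself reports can separate a runtime/driver problem from a miner build problem. This is a sketch, independent of the miner's own device-probing code:
// count.cu -- ask the CUDA runtime how many devices it can see.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        // A driver/runtime mismatch after a system update often surfaces here.
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA runtime sees %d device(s)\n", n);
    return 0;
}
If this reports an error or 0 devices while deviceQuery (built against a different toolkit) works, the two binaries are likely linked against different CUDA runtimes.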

CUDA: How to detect shared memory bank conflicts on a device with compute capability >= 7.2?

On devices with compute capability <= 7.2, I always use
nvprof --events shared_st_bank_conflict
but when I run it on an RTX 2080 Ti with CUDA 10, it returns
Warning: Skipping profiling on device 0 since profiling is not supported on devices with compute capability greater than 7.2
So how can I detect whether there are shared memory bank conflicts on this device?
I've installed NVIDIA Nsight Systems and Nsight Compute, but I find no such profiling report...
Thanks.
You can use --metrics:
Either
nv-nsight-cu-cli --metrics l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_ld.sum
for conflicts when reading (load'ing) from shared memory, or
nv-nsight-cu-cli --metrics l1tex__data_bank_conflicts_pipe_lsu_mem_shared_op_st.sum
for conflicts when writing (store'ing) to shared memory.
It seems this is a known problem, and it is addressed in this post on the NVIDIA forums. Apparently it should be supported using one of the Nsight tools (either the CLI or the UI).
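To sanity-check these metrics, here is a toy kernel (an illustrative sketch): with stride-32 indexing, every thread of a warp touches the same 4-byte bank, producing 32-way conflicts, so both metrics above should report nonzero sums:
// conflict.cu -- deliberately bank-conflicting shared memory accesses.
#include <cuda_runtime.h>

__global__ void conflict_kernel(float *out) {
    __shared__ float buf[32 * 32];
    // Stride-32 store: all 32 threads of the warp hit bank 0.
    buf[threadIdx.x * 32] = (float)threadIdx.x;
    __syncthreads();
    // Stride-32 load: same 32-way conflict on the read side.
    out[threadIdx.x] = buf[threadIdx.x * 32];
}

int main() {
    float *d;
    cudaMalloc(&d, 32 * sizeof(float));
    conflict_kernel<<<1, 32>>>(d);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
Compile with nvcc and run it under nv-nsight-cu-cli with either metric to confirm the counters behave as expected.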

CUDA Installation for NVidia Quadro FX 3800

I'm having trouble installing CUDA 7.0 (to use with TensorFlow) on a workstation with the Nvidia Quadro FX 3800. I'm wondering if this is because the GPU is no longer supported.
Installation of the driver (340.96) seems to work fine:
$ sh ./NVIDIA-Linux-x86_64-340.96.run
Installation of the NVIDIA Accelerated Graphics Driver for Linux-x86_64
(version: 340.96) is now complete. Please update your XF86Config or
xorg.conf file as appropriate; see the file
/usr/share/doc/NVIDIA_GLX-1.0/README.txt for details.
However, I think I may be having trouble with the following:
$ ./cuda_7.0.28_linux.run --kernel-source-path=/usr/src/linux-headers-3.13.0-76-generic
The driver installation is unable to locate the kernel source. Please make sure
that the kernel source packages are installed and set up correctly. If you know
that the kernel source packages are installed and set up correctly, you may pass
the location of the kernel source with the '--kernel-source-path' flag.
...
Logfile is /tmp/cuda_install_1357.log
$ vi /tmp/cuda_install_1357.log
WARNING: The NVIDIA Quadro FX 3800 GPU installed in this system is
supported through the NVIDIA 340.xx legacy Linux graphics drivers.
Please visit http://www.nvidia.com/object/unix.html for more
information. The 346.46 NVIDIA Linux graphics driver will ignore
this GPU.
WARNING: You do not appear to have an NVIDIA GPU supported by the 346.46
NVIDIA Linux graphics driver installed in this system. For
further details, please see the appendix SUPPORTED NVIDIA GRAPHICS
CHIPS in the README available on the Linux driver download page at
www.nvidia.com.
...
ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most
frequently when this kernel module was built against the wrong or
improperly configured kernel sources, with a version of gcc that
differs from the one used to build the target kernel, or if a driver
such as rivafb, nvidiafb, or nouveau is present and prevents the
NVIDIA kernel module from obtaining ownership of the NVIDIA graphics
device(s), or no NVIDIA GPU installed in this system is supported by
this NVIDIA Linux graphics driver release.
...
Please see the log entries 'Kernel module load error' and 'Kernel
messages' at the end of the file '/var/log/nvidia-installer.log' for
more information.
Is the installation failure due to CUDA dropping support for this graphics card?
I followed the link trail: https://developer.nvidia.com/cuda-gpus > https://developer.nvidia.com/cuda-legacy-gpus > http://www.nvidia.com/object/product_quadro_fx_3800_us.html and I would have thought the Quadro FX 3800 supported CUDA (at least at the beginning).
Yes, the Quadro FX 3800 GPU is no longer supported by CUDA 7.0 and beyond.
The last CUDA version that supported that GPU was CUDA 6.5.
This answer and this answer may be of interest. Your QFX 3800 is a compute capability 1.3 device.
If you review the release notes that come with CUDA 7, you will find a notice of the elimination of support for these earlier GPUs. Likewise, the newer CUDA driver versions also don't support those GPUs.
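Before picking a toolkit version, it can help to confirm what compute capability the card actually reports. Here is a minimal query, as a sketch (run it under a toolkit that still supports the device, such as CUDA 6.5):
// cc_query.cu -- print the compute capability of device 0.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No usable CUDA device found\n");
        return 1;
    }
    // CUDA 7.0 dropped all compute capability 1.x devices.
    printf("%s: compute capability %d.%d\n", prop.name, prop.major, prop.minor);
    return 0;
}
A Quadro FX 3800 should print 1.3, below the 2.0 minimum that CUDA 7.0 requires.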

cuda-gdb run error:"Failed to read the valid warps mask(dev=0,sm=0,error=16)" [duplicate]

I tried to debug my CUDA application with cuda-gdb but got a weird error.
I built my application with -g -G -O0. I can run my program without cuda-gdb, but it doesn't produce the correct result. Hence I decided to use cuda-gdb; however, I got the following error message while running the program under cuda-gdb:
Error: Failed to read the valid warps mask (dev=1, sm=0, error=16).
What does it mean? Why sm=0, and what is the meaning of error=16?
Update 1: I tried to use cuda-gdb on the CUDA samples, but it fails with the same problem. I just installed the CUDA 6.0 Toolkit following NVIDIA's instructions. Is it a problem with my system?
Update 2:
OS - CentOS 6.5
GPU
1 Quadro 400
2 Tesla C2070
I'm using only 1 GPU for my program, but I get the same error message from any GPU that I select.
CUDA version - 6.0
GPU Driver
NVRM version: NVIDIA UNIX x86_64 Kernel Module 331.62 Wed Mar 19 18:20:03 PDT 2014
GCC version: gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC)
Update 3:
I tried to get more information in cuda-gdb, but I got following results
(cuda-gdb) info cuda devices
Error: Failed to read the valid warps mask (dev=1, sm=0, error=16).
(cuda-gdb) info cuda sms
Focus not set on any active CUDA kernel.
(cuda-gdb) info cuda lanes
Focus not set on any active CUDA kernel.
(cuda-gdb) info cuda kernels
No CUDA kernels.
(cuda-gdb) info cuda contexts
No CUDA contexts.
Actually, this issue is specific to some old NVIDIA GPUs (like the "Quadro 400", "GeForce GT 220", or "GeForce GT 330M").
On Liam Kim's setup, cuda-gdb should work fine if you set the environment variable "CUDA_VISIBLE_DEVICES" so that cuda-gdb runs on the Tesla C2070 GPUs specifically.
I.e.,
$ export CUDA_VISIBLE_DEVICES=0 (or 2)
The exact CUDA device index can be found by running the CUDA sample "deviceQuery" (or the sketch below).
This issue has now been fixed; the fix will be available to CUDA developers in the next CUDA release (it will be posted around early July 2014).
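For reference, a small program along the lines of deviceQuery (a sketch) prints the index-to-name mapping, so you can see which index to export:
// list_devices.cu -- map CUDA device indices to device names.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Export the index of the Tesla C2070 via CUDA_VISIBLE_DEVICES.
        printf("device %d: %s\n", i, prop.name);
    }
    return 0;
}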
This is an internal cuda-gdb bug; you should report it.
Can you try installing the CUDA toolkit from the package on the NVIDIA site?