I’m trying to compile QEMU from source and am running into the well-known error about glib-2.40 and gthread-2.0 being required:
ERROR: glib-2.40 gthread-2.0 is required to compile QEMU
I’m on Linux Mint and I’ve installed libglib2.0 from the repositories:
AB1#piper11:~$ dpkg -l | grep -i libglib
ii libglib-object-introspection-perl 0.042-1 amd64 Perl bindings for gobject-introspection libraries
ii libglib-perl 3:1.324-1 amd64 interface to the GLib and GObject libraries
ii libglib2.0-0:amd64 2.50.3-2+deb9u1 amd64 GLib library of C routines
ii libglib2.0-0-dbg:amd64 2.50.3-2+deb9u1 amd64 Debugging symbols for the GLib libraries
ii libglib2.0-bin 2.50.3-2+deb9u1 amd64 Programs for the GLib library
ii libglib2.0-cil 2.12.40-2 amd64 CLI binding for the GLib utility library 2.12
ii libglib2.0-cil-dev 2.12.40-2 amd64 CLI binding for the GLib utility library 2.12
ii libglib2.0-data 2.50.3-2+deb9u1 all Common files for GLib library
ii libglib2.0-dev 2.50.3-2+deb9u1 amd64 Development files for the GLib library
ii libglib2.0-doc 2.50.3-2+deb9u1 all Documentation files for the GLib library
ii libglib2.0-tests 2.50.3-2+deb9u1 amd64 GLib library of C routines - installed tests
ii libglibmm-2.4-1v5:amd64 2.50.0-1 amd64 C++ wrapper for the GLib toolkit (shared libraries)
Here is the content of config.log: https://pastebin.com/6zrSXWAG
I am configuring for 'arm-softmmu', as can be seen in the paste above. The file /usr/lib/x86_64-linux-gnu/pkgconfig/gthread-2.0.pc does exist, and its contents are:
prefix=/usr
exec_prefix=${prefix}
libdir=${prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include
Name: GThread
Description: Thread support for GLib
Requires: glib-2.0
Version: 2.50.3
Libs: -L${libdir} -lgthread-2.0 -pthread
Cflags: -pthread
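Since the .pc file is clearly there, pkg-config itself seems worth sanity-checking; something along these lines (a plain check with the system pkg-config, nothing QEMU-specific) should print the installed versions and flags if the files are being picked up:
pkg-config --modversion glib-2.0 gthread-2.0
pkg-config --cflags --libs gthread-2.0
If these commands fail, QEMU's configure check will fail too, e.g. when PKG_CONFIG_PATH is overridden or a cross pkg-config is in use.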
I get a "No devices were found" error after installing a ZOTAC GAMING GeForce RTX 3080 Ti Trinity OC in Ubuntu 20.04.
The cat /proc/driver/nvidia/version command gives:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 525.85.05 Sat Jan 14 00:49:50 UTC 2023
GCC version: gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
The lspci | grep -i nvidia command gives:
06:00.0 VGA compatible controller: NVIDIA Corporation Device 2216 (rev a1)
06:00.1 Audio device: NVIDIA Corporation Device 1aef (rev a1)
The system has NVIDIA driver version 525.85.05 and CUDA 11.6, both installed successfully using run files.
But the nvidia-smi command gives:
No devices were found
What is wrong with the installation?
EDIT:
The dpkg -l | grep -i nvidia command gives the following output:
ii libnvidia-compute-510:i386 510.108.03-0ubuntu0.20.04.1 i386 NVIDIA libcompute package
ii libnvidia-decode-510:i386 510.108.03-0ubuntu0.20.04.1 i386 NVIDIA Video Decoding runtime libraries
ii libnvidia-encode-510:i386 510.108.03-0ubuntu0.20.04.1 i386 NVENC Video Encoding runtime library
ii libnvidia-fbc1-510:i386 510.108.03-0ubuntu0.20.04.1 i386 NVIDIA OpenGL-based Framebuffer Capture runtime library
ii screen-resolution-extra 0.18build1 all Extension for the nvidia-settings control panel
I have installed PyTorch and would like to check whether there is a script to test that the installation is correct, e.g., whether it can use CUDA.
For your first question: in your Python script, just add
import torch
If this raises "ModuleNotFoundError: No module named 'torch'", then your PyTorch installation is not complete.
For your second question, to check whether PyTorch can use CUDA, call
torch.cuda.is_available()
This returns True if PyTorch can use CUDA.
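Putting the two checks together, a minimal sketch (nothing here is specific to a particular install) would be:
import torch                      # raises ModuleNotFoundError if PyTorch is missing
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a usable CUDA device is visible to PyTorch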
You can use the collect_env.py script provided in the PyTorch utils folder.
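In recent PyTorch versions the same script can also be invoked as a module, so you don't have to locate the file yourself (assuming python is the interpreter PyTorch is installed into):
python -m torch.utils.collect_env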
Its output is as follows:
Collecting environment information...
PyTorch version: 1.2.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.14.6
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2080
Nvidia driver version: 410.48
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.4.1
Versions of relevant libraries:
[pip] numpy==1.16.4
[pip] torch==1.2.0
[pip] torchsample==0.1.3
[pip] torchsummary==1.5.1
[pip] torchvision==0.4.0a0+6b959ee
[conda] blas 1.0 mkl
[conda] mkl 2019.4 243
[conda] mkl-service 2.0.2 py37h7b6447c_0
[conda] mkl_fft 1.0.14 py37ha843d7b_0
[conda] mkl_random 1.0.2 py37hd81dba3_0
[conda] pytorch 1.2.0 py3.7_cuda10.0.130_cudnn7.6.2_0 pytorch
[conda] torchsample 0.1.3 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.4.0 py37_cu100 pytorch
If you installed it from here you are doing fine.
Check this:
import torch
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(dev)
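Continuing that snippet, you can also allocate a small tensor on the selected device to confirm it is actually used (the tensor itself is arbitrary, just an illustration):
x = torch.rand(3, device=dev)  # small random tensor on the chosen device
print(x.device)                # prints cuda:0 when CUDA is really in use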
If your GPU is installed correctly you should also have nvidia-smi available.
(On Windows it should be inside C:\Program Files\NVIDIA Corporation\NVSMI)
You can use my code. This repo implements a set of PyTorch environment checks and CUDA-based operators, which help you verify whether your GPU-based PyTorch is installed properly.
https://github.com/BAI-Yeqi/PyTorch-Verification
I'm trying to build TensorFlow from source and run it with GPU support. To install the CUDA toolkit I used the runfile; to install the driver I used the Additional Drivers tool, because I could not get Ubuntu to boot into text mode as specified in the CUDA documentation, and stop lightdm / start lightdm do not work either. They give me (also with sudo):
Name com.ubuntu.Upstart does not exist
So far I have been able to build a release from the TensorFlow repository. However, when I try to run the example as specified in the how-to
bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
the GPU apparently cannot be found:
jonas@jonas-Aspire-V5-591G:~/Documents/repos/tensoflow_fork$ bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
E tensorflow/stream_executor/cuda/cuda_driver.cc:491] failed call to cuInit: CUDA_ERROR_UNKNOWN
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:153] retrieving CUDA diagnostic information for host: jonas-Aspire-V5-591G
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:160] hostname: jonas-Aspire-V5-591G
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:185] libcuda reported version is: 352.63.0
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:356] driver version file contents: """NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.63 Sat Nov 7 21:25:42 PST 2015 GCC version: gcc version
4.9.2 (Ubuntu 4.9.2-10ubuntu13) """
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] kernel reported version is: 352.63.0
I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:293] kernel version seems to match DSO: 352.63.0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:81] No GPU devices available on machine.
F tensorflow/cc/tutorials/example_trainer.cc:125] Check failed: ::tensorflow::Status::OK() == (session->Run({{"x", x}}, {"y:0", "y_normalized:0"}, {}, &outputs)) (OK vs. Invalid argument: Cannot assign a device to node 'y': Could not satisfy explicit device specification '/gpu:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
[[Node: y = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/gpu:0"](Const, x)]])
Aborted
I'm using a clean Ubuntu 15.04 installation on an Acer Notebook with the GTX950M.
Can anybody tell me how to properly install the driver?
Can you run deviceQuery (it comes with the CUDA installation; see the build note after the commands below)? Can you see the NVIDIA device in lspci/lsmod/nvidia-smi?
lsmod |grep nvidia
dmesg | grep -i nvidia
lspci | grep -i nvidia
nvidia-smi
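As for deviceQuery: if you installed the CUDA samples along with the toolkit, it usually has to be built first. Roughly (the path below assumes the default samples location; adjust it to your CUDA version):
cd ~/NVIDIA_CUDA-*_Samples/1_Utilities/deviceQuery
make
./deviceQuery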
You can reload the nvidia kernel module and look for error messages:
modprobe -r nvidia
dmesg | tail
sudo dmesg | grep NVRM
Related issue https://github.com/tensorflow/tensorflow/issues/601
I get an error when installing the general package using the instruction
pkg install -forge general
and I get the following message:
octave:3> pkg install -forge general
In file included from /usr/local/octave/3.8.0/lib/gcc47/gcc/x86_64-apple-darwin13/4.7.3/include/stdint.h:3:0,
from /usr/local/octave/3.8.0/include/octave-3.8.0/octave/oct-conf-post.h:167,
from /usr/local/octave/3.8.0/include/octave-3.8.0/octave/config.h:3351,
from /usr/local/octave/3.8.0/include/octave-3.8.0/octave/../octave/oct.h:31,
from SHA1.cc:19:
/usr/local/octave/3.8.0/lib/gcc47/gcc/x86_64-apple-darwin13/4.7.3/include-fixed/stdint.h:27:32: fatal error: sys/_types/_int8_t.h: No such file or directory
compilation terminated.
make: *** [SHA1.oct] Error 1
/usr/local/octave/3.8.0/bin/mkoctfile-3.8.0 SHA1.cc
pkg: error running `make' for the general package.
error: called from 'configure_make' in file /usr/local/octave/3.8.0/share/octave/3.8.0/m/pkg/private/configure_make.m near line 82, column 9
error: called from:
error: /usr/local/octave/3.8.0/share/octave/3.8.0/m/pkg/private/install.m at line 199, column 5
error: /usr/local/octave/3.8.0/share/octave/3.8.0/m/pkg/pkg.m at line 394, column 9
octave:3>
I have no idea how to solve this problem. My OS is Mac OS X 10.9.3 Mavericks and my Octave version is 3.8.0:
octave:1> ver
----------------------------------------------------------------------
GNU Octave Version 3.8.0
GNU Octave License: GNU General Public License
Operating System: Darwin 13.2.0 Darwin Kernel Version 13.2.0: Thu Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64
----------------------------------------------------------------------
no packages installed.
Does anyone have an idea?
I found the solution! Using this command:
xcode-select --install
and it worked:
octave:1> ver
----------------------------------------------------------------------
GNU Octave Version 3.8.0
GNU Octave License: GNU General Public License
Operating System: Darwin 13.2.0 Darwin Kernel Version 13.2.0: Thu Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64
----------------------------------------------------------------------
no packages installed.
octave:2> pkg install -forge general
For information about changes from previous versions of the general package, run 'news general'.
octave:3> ver
----------------------------------------------------------------------
GNU Octave Version 3.8.0
GNU Octave License: GNU General Public License
Operating System: Darwin 13.2.0 Darwin Kernel Version 13.2.0: Thu Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64
----------------------------------------------------------------------
Package Name | Version | Installation directory
--------------+---------+-----------------------
general | 1.3.4 | /Users/apple/octave/general-1.3.4
I was having the same issue when trying to install the Octave signal package without success. The following finally appears to work:
Run xcode-select --install from the Terminal window to install the command-line tools.
Install MacPorts. This is a standard installer that you can download from the MacPorts site.
sudo port install gcc48 (this provides a Fortran compiler, which is necessary for installing octave-general)
sudo port install octave-general (note: this took a very long time, hours on a MacBook Pro, and I had to disable Spotlight indexing)
sudo port install octave-control
sudo port install octave-signal
While looking at how to install the control package, I found this in the Arch Wiki:
Note: Some Octave packages, like control, need Arch Linux's gcc-fortran package in order to compile and install.
(https://wiki.archlinux.org/index.php/Octave)
So you might have to install gcc-fortran first.
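On Arch Linux that would be something like the following (assuming the package is still named gcc-fortran):
sudo pacman -S gcc-fortran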
I created a default CUDA project in Visual Studio 2008. It works OK with the MS compiler. When I try the Intel C++ Composer, it fails as shown below:
1>------ Rebuild All started: Project: testCUDA, Configuration: Debug Win32 ------
1>Deleting intermediate files and output files for project 'testCUDA', configuration 'Debug|Win32'.
1>Compiling with CUDA Build Rule... (Microsoft VC++ Environment)
1>"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\\bin\nvcc.exe" -G -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "D:\Microsoft Visual Studio 9\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\\include" -maxrregcount=0 --compile -o "Debug/kernel.cu.obj" kernel.cu
1>nvcc : fatal error : A single input file is required for a non-link phase when an outputfile is specified
1>Project testCUDA : error: A tool returned an error code from "Compiling with CUDA Build Rule..."
1>Build log was saved at "file://C:\Users\JSC\Documents\Visual Studio 2008\Projects\testCUDA\testCUDA\Debug\BuildLog.htm"
1>testCUDA - 2 error(s), 0 warning(s), 0 remark(s)
========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========
My platform is Windows 7 (32-bit) with CUDA 5.0. I have tried Intel C++ compiler versions from 11.1 through Composer XE 2011 and even Composer XE 2013, and every version produces this error.
Your help will be highly appreciated!
As explained in the CUDA 5.0 Release Notes, only the Visual C++ 9.0/10.0 compilers are supported on Windows.
On Linux only GCC is supported (see the link above for specific versions).