qemu-system-riscv64 is not found in package qemu-system-misc - qemu

I'm trying to set up xv6 on Ubuntu 18.04.5 but there is an error during make qemu:
# outputs...
qemu-system-riscv64 -machine virt -bios none -kernel kernel/kernel -m 128M -smp 3 -nographic -drive file=fs.img,if=none,format=raw,id=x0 -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0
make: qemu-system-riscv64: Command not found
I found that there is no qemu-system-riscv64 under /usr/bin after installing qemu-system-misc (version 1:2.11+dfsg-1ubuntu7.36):
$ ls /usr/bin | grep qemu
qemu-img
qemu-io
qemu-nbd
qemu-system-alpha
qemu-system-cris
qemu-system-lm32
qemu-system-m68k
qemu-system-microblaze
qemu-system-microblazeel
qemu-system-moxie
qemu-system-nios2
qemu-system-or1k
qemu-system-sh4
qemu-system-sh4eb
qemu-system-tricore
qemu-system-unicore32
qemu-system-xtensa
qemu-system-xtensaeb
I've tried to install the older version of qemu-system-misc that is mentioned in Tools Used in 6.S081:
At this moment in time, it seems that the package qemu-system-misc has received an
update that breaks its compatibility with our kernel. If you run make qemu and the
script appears to hang after
qemu-system-riscv64 -machine virt -bios none -kernel kernel/kernel -m 128M -smp 3 -nographic -drive file=fs.img,if=none,format=raw,id=x0 -device virtio-blk-device,drive=x0,bus=virtio-mmio-bus.0
you'll need to uninstall that package and install an older version:
$ sudo apt-get remove qemu-system-misc
$ sudo apt-get install qemu-system-misc=1:4.2-3ubuntu6
yet this version was not found.
Any solution for installing either qemu-system-riscv64 or an older version of qemu-system-misc?

Refer to https://pdos.csail.mit.edu/6.828/2020/tools.html
You can compile QEMU from source:
$ wget https://download.qemu.org/qemu-5.1.0.tar.xz
$ tar xf qemu-5.1.0.tar.xz
$ cd qemu-5.1.0
$ ./configure --disable-kvm --disable-werror --prefix=/usr/local --target-list="riscv64-softmmu"
$ make
$ sudo make install
$ cd ..
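After the build, a quick sanity check is worthwhile (this assumes /usr/local/bin is on your PATH, which follows from the --prefix above):
$ which qemu-system-riscv64
/usr/local/bin/qemu-system-riscv64
$ qemu-system-riscv64 --version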

Everything went right after I upgraded Ubuntu to version 20.04.2 :)

Related

Installation Requirements for mysql with DBIish on rakudo-star docker image

I was creating my own Docker image based on the latest rakudo-star Docker image. I wanted to use DBIish to connect to a MySQL database. Unfortunately, I am not able to get DBDish::mysql to work.
I've installed default-libmysqlclient-dev, as you can see here:
# find / -name 'libmysqlclient*.so'
/usr/lib/x86_64-linux-gnu/libmysqlclient_r.so
/usr/lib/x86_64-linux-gnu/libmysqlclient.so
The error I am facing is:
# perl6 -Ilib -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot locate native library 'mysqlclient': mysqlclient: cannot open shared object file: No such file or directory
in method setup at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 289
in method CALL-ME at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 539
in method connect at /root/DBIish/lib/DBDish/mysql.pm6 (DBDish::mysql) line 12
in block <unit> at -e line 1
Short answer: you need the package libmysqlclient20 (I added the documentation request to a similar DBIish issue). Debian 9 (stable at the moment) uses an older version than Ubuntu 18.04 (stable at the moment) and Debian Unstable. It also refers to mariadb instead of mysql. Pick libmariadbclient18 on images based on Debian Stable and create a link with the mysql name (see below).
On Debian Testing/Unstable and recent derivatives:
$ sudo apt-get install libmysqlclient20
$ dpkg -L libmysqlclient20
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.9
/usr/share
/usr/share/doc
/usr/share/doc/libmysqlclient20
/usr/share/doc/libmysqlclient20/NEWS.Debian.gz
/usr/share/doc/libmysqlclient20/changelog.Debian.gz
/usr/share/doc/libmysqlclient20/copyright
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
On Debian 9 and derivatives:
$ dpkg -L libmariadbclient18
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18.0.0
/usr/lib/x86_64-linux-gnu/mariadb18
/usr/lib/x86_64-linux-gnu/mariadb18/plugin
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/client_ed25519.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/dialog.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/mysql_clear_password.so
/usr/share
/usr/share/doc
/usr/share/doc/libmariadbclient18
/usr/share/doc/libmariadbclient18/changelog.Debian.gz
/usr/share/doc/libmariadbclient18/copyright
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18
Create the link:
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18
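To confirm that the dynamic linker can now resolve the mysql name, a quick check (the grep pattern is just illustrative):
$ sudo ldconfig
$ ldconfig -p | grep mysqlclient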
In order to illustrate this, I created an Ubuntu 18.04 container for the occasion*:
docker run -ti --rm --entrypoint=bash rakudo/ubuntu-amd64-18.04
And the abbreviated commands and output:
# apt-get install -y libmysqlclient20 build-essential
# zef install DBIish
# perl6 -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot look up attributes in a DBDish::mysql type object
[...]
The error is because I didn't pass the correct parameters for connect as I didn't have a db running. The important thing is that no .so file is missing.
*: I uploaded it to the Docker Hub, a normal run will put you right in the REPL:
$ docker run -ti --rm rakudo/ubuntu-amd64-18.04
To exit type 'exit' or '^D'
>
(I didn't use the Star image when debugging, but it does not matter because this is a more generic problem.)

QEMU built from source doesn't work with the --enable-kvm flag on RHEL 7 but surprisingly works on CentOS 7

I am trying to build and run QEMU from source with the --enable-kvm flag. The surprising fact is that QEMU with --enable-kvm works like a charm on CentOS 7 (server as well as workstation) but hangs terribly on a RHEL 7 server.
I am using the Beyond Linux From Scratch guide to build the system: http://www.linuxfromscratch.org/blfs/view/cvs/postlfs/qemu.html
I have tested for the vmx flag using the following command:
grep -E "(vmx|svm)" /proc/cpuinfo | wc -l
4
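The KVM module and device node can also be checked on both machines with:
$ lsmod | grep kvm
$ ls -l /dev/kvm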
On RHEL as well as CentOS, I installed the dependencies using the following script:
#!/bin/sh
yum install gcc
yum install zlib-devel
yum install gnutls-devel
yum install libgcrypt-devel
yum install glibc-devel
yum install glib2-devel
yum install pixman-devel
Then I used the following script to configure and compile the build:
if [ $(uname -m) = i686 ]; then
QEMU_ARCH=i386-softmmu
else
QEMU_ARCH=x86_64-softmmu
fi
sed -i 's/ memfd_create/ qemu_memfd_create/' util/memfd.c &&
mkdir -vp build &&
cd build &&
../configure --target-list=$QEMU_ARCH \
--enable-gnutls \
--enable-gcrypt &&
unset QEMU_ARCH &&
make &&
make install
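Before booting anything, the freshly built binary and the KVM device node can be sanity-checked (the in-tree binary path below is an assumption and may vary with the QEMU version):
$ ./x86_64-softmmu/qemu-system-x86_64 --version
$ ls -l /dev/kvm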
After this, I am trying to boot an encrypted virtual disk using the following command:
qemu-system-x86_64 --enable-kvm -daemonize -display none \
-net user,hostfwd=tcp::3000-:22,hostfwd=tcp::8080-:8080,hostfwd=tcp::80-:80,hostfwd=tcp::443-:443 -net nic \
-object secret,id=secmaster0,format=base64,file=key.b64 \
-object secret,id=sec0,keyid=secmaster0,format=base64,\
data=$SECRET,iv=$(<iv.b64) \
-drive if=none,driver=luks,key-secret=sec0,\
id=drive0,file.driver=file,\
file.filename=prod.luks \
-device virtio-blk,drive=drive0 -m 5120
Then I just SSH into the daemonized KVM guest. The point is that everything works like a charm on CentOS, but on RHEL 7 I am not able to SSH into the machine.
If I remove the --enable-kvm flag, I am able to SSH in.
I have already spent a lot of time experimenting with it and I simply don't understand what is going wrong. I am no pro on this topic, just trying to find a fix for a niche problem. Any guidance on debugging QEMU, or any reference to documents, mailing-list threads, or forums, is deeply appreciated.
Peace.
Update
As mentioned by @Peter Maydell in the comments, I started QEMU without the -display none flag. It started a VNC server; I connected over VNC, and KVM doesn't seem to boot the hard disk. It is stuck on
Booting from Hard Disk ...
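For reference, a minimal way to get that VNC display is to drop -daemonize and -display none and serve the display explicitly, leaving everything else as in the command above (-vnc :1 listens on port 5901):
qemu-system-x86_64 --enable-kvm -vnc :1 \
    -object secret,id=secmaster0,format=base64,file=key.b64 \
    -object secret,id=sec0,keyid=secmaster0,format=base64,data=$SECRET,iv=$(<iv.b64) \
    -drive if=none,driver=luks,key-secret=sec0,id=drive0,file.driver=file,file.filename=prod.luks \
    -device virtio-blk,drive=drive0 -m 5120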

MySQL service inside docker container not working in macOS Sierra 10.12.6

I was forced to reinstall macOS Sierra because I was in the beta program for High Sierra and got a serious crash, so I downgraded the system.
This Dockerfile was working in High Sierra before the sudden crash of the system.
FROM ubuntu:16.04
MAINTAINER XXX version 0.0.1
# Prepare Debian environment
ENV DEBIAN_FRONTEND noninteractive
# we don't need an apt cache in a container
RUN echo "Acquire::http {No-Cache=True;};" > /etc/apt/apt.conf.d/no-cache
# ----------------------------
# Configure supervisor
# ----------------------------
RUN apt-get update > /dev/null 2>&1 && apt-get install -y supervisor > /dev/null 2>&1
RUN mkdir -p /var/log/supervisor
COPY files/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
## Mysql
RUN apt-get install -y mysql-client > /dev/null 2>&1
#RUN debconf-set-selections <<< 'mysql-server mysql-server/root_password password 1234'
#RUN debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password 1234'
RUN echo 'mysql-server mysql-server/root_password password 1234' | debconf-set-selections
RUN echo 'mysql-server mysql-server/root_password_again password 1234' | debconf-set-selections
RUN apt-get -y install mysql-server > /dev/null 2>&1
RUN sed -i -e 's/127.0.0.1/0.0.0.0/g' /etc/mysql/mysql.conf.d/mysqld.cnf
RUN echo "sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE" >> /etc/mysql/mysql.conf.d/mysqld.cnf
RUN usermod -d /var/lib/mysql/ mysql
ADD files/xxx.dump /tmp/xxx.dump
ADD files/mysql_xxx.sql /tmp/mysql_xxx.sql
RUN service mysql start && \
mysql -uroot -p1234 < /tmp/mysql_xxx.sql
RUN rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN apt-get clean
EXPOSE 3306
CMD ["/usr/bin/supervisord"]
Now, after reinstalling everything, it does not work; it always outputs the same error:
Starting MySQL database server mysqld
...fail!
To add some information: in the step where the Dockerfile stops, I am starting the service to load a database dump.
My system version is macOS Sierra 10.12.6 (16G29) and my Docker version is 17.06.0-ce-mac19 (18663). Any workaround for this problem?
You should be using the official MySQL Dockerfile as your starting point. You could either just use the MySQL image itself (FROM mysql:5.7 in your Dockerfile), or you can copy the Dockerfile and docker-entrypoint.sh from their GitHub repository to "take control" of the MySQL image and not have any dependency on MySQL releases/changes.
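As a sketch of the first option (file names borrowed from the question; the official image executes any .sql file placed in /docker-entrypoint-initdb.d on first start, so no "service mysql start" step is needed):
docker run --name xxx-mysql \
    -e MYSQL_ROOT_PASSWORD=1234 \
    -v $(pwd)/files/mysql_xxx.sql:/docker-entrypoint-initdb.d/mysql_xxx.sql \
    -p 3306:3306 -d mysql:5.7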
I think it is a good idea to use the same OS base for all your Docker containers. You are currently using the Ubuntu distro; if you do not mind, or are just getting started, you may want to have Debian as your OS base. I say this because the Docker images I have looked at so far (CloudBees' Jenkins, .NET Core, ASP.NET Core, and MySQL) all have a Debian base, so I have been thinking Debian is the most popular OS base for Docker images, though I only have anecdotal evidence of this.
My company prefers the CentOS distro, so I took the official MySQL Dockerfile and converted it to work with centos:7; it was a pain.

Using GPU from a docker container?

I'm searching for a way to use the GPU from inside a Docker container.
The container will execute arbitrary code, so I don't want to use privileged mode.
Any tips?
From previous research I understood that run -v and/or LXC cgroup was the way to go, but I'm not sure how to pull that off exactly.
Writing an updated answer, since most of the answers already present are obsolete as of now.
Versions earlier than Docker 19.03 used to require nvidia-docker2 and the --runtime=nvidia flag.
Since Docker 19.03, you need to install the nvidia-container-toolkit package and then use the --gpus all flag.
So, here are the basics,
Package Installation
Install the nvidia-container-toolkit package as per the official documentation on GitHub.
For Red Hat-based OSes, execute the following set of commands:
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
$ sudo yum install -y nvidia-container-toolkit
$ sudo systemctl restart docker
For Debian-based OSes, execute the following set of commands:
# Add the package repositories
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
$ sudo systemctl restart docker
Running the docker with GPU support
docker run --name my_all_gpu_container --gpus all -t nvidia/cuda
Please note, the --gpus all flag is used to assign all available GPUs to the Docker container.
To assign a specific GPU to the Docker container (in case multiple GPUs are available in your machine):
docker run --name my_first_gpu_container --gpus device=0 nvidia/cuda
Or
docker run --name my_first_gpu_container --gpus '"device=0"' nvidia/cuda
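Similarly, multiple specific GPUs can be requested with a comma-separated device list (quoting as in the Docker documentation):
docker run --name my_two_gpu_container --gpus '"device=0,1"' nvidia/cuda nvidia-smi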
Regan's answer is great, but it's a bit out of date, since the correct way to do this is to avoid the LXC execution context: Docker dropped LXC as the default execution context as of Docker 0.9.
Instead, it's better to tell Docker about the NVIDIA devices via the --device flag, and just use the native execution context rather than LXC.
Environment
These instructions were tested on the following environment:
Ubuntu 14.04
CUDA 6.5
AWS GPU instance.
Install nvidia driver and cuda on your host
See CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04 to get your host machine setup.
Install Docker
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update && sudo apt-get install lxc-docker
Find your nvidia devices
ls -la /dev | grep nvidia
crw-rw-rw- 1 root root 195, 0 Oct 25 19:37 nvidia0
crw-rw-rw- 1 root root 195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw- 1 root root 251, 0 Oct 25 19:37 nvidia-uvm
Run Docker container with nvidia driver pre-installed
I've created a Docker image that has the CUDA drivers pre-installed. The Dockerfile is available on Docker Hub if you want to know how this image was built.
You'll want to customize this command to match your nvidia devices. Here's what worked for me:
$ sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm tleyden5iwx/ubuntu-cuda /bin/bash
Verify CUDA is correctly installed
This should be run from inside the docker container you just launched.
Install CUDA samples:
$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/
Build deviceQuery sample:
$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery
If everything worked, you should see the following output:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS
OK, I finally managed to do it without using --privileged mode.
I'm running on Ubuntu Server 14.04 and I'm using the latest CUDA (6.0.37 for Linux 13.04, 64 bits).
Preparation
Install the NVIDIA driver and CUDA on your host (it can be a little tricky, so I suggest you follow this guide: https://askubuntu.com/questions/451672/installing-and-testing-cuda-in-ubuntu-14-04).
ATTENTION: It's really important that you keep the files you used for the host CUDA installation.
Get the Docker Daemon to run using lxc
We need to run the Docker daemon using the lxc driver to be able to modify the configuration and give the container access to the devices.
One-time use:
sudo service docker stop
sudo docker -d -e lxc
Permanent configuration
Modify your Docker configuration file located in /etc/default/docker.
Change the DOCKER_OPTS line by adding '-e lxc'.
Here is my line after the modification:
DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -e lxc"
Then restart the daemon using
sudo service docker restart
How do you check whether the daemon actually uses the lxc driver?
docker info
The Execution Driver line should look like this:
Execution Driver: lxc-1.0.5
Build your image with the NVIDIA and CUDA driver.
Here is a basic Dockerfile to build a CUDA-compatible image.
FROM ubuntu:14.04
MAINTAINER Regan <http://stackoverflow.com/questions/25185405/using-gpu-from-a-docker-container>
RUN apt-get update && apt-get install -y build-essential
RUN apt-get --purge remove -y nvidia*
# Get the install files you used to install CUDA and the NVIDIA drivers on your host
ADD ./Downloads/nvidia_installers /tmp/nvidia
# Install the driver
RUN /tmp/nvidia/NVIDIA-Linux-x86_64-331.62.run -s -N --no-kernel-module
# The driver installer leaves temp files behind during a docker build (I don't have any explanation why), and the CUDA installer will fail if they are still there, so we delete them
RUN rm -rf /tmp/selfgz7
# CUDA driver installer
RUN /tmp/nvidia/cuda-linux64-rel-6.0.37-18176142.run -noprompt
# CUDA samples; comment out if you don't want them
RUN /tmp/nvidia/cuda-samples-linux-6.0.37-18176142.run -noprompt -cudaprefix=/usr/local/cuda-6.0
# Add the CUDA library directory to the loader path (note: export in a RUN step does not persist across layers)
RUN export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
# Update the ld.so.conf.d directory
RUN touch /etc/ld.so.conf.d/cuda.conf
# Delete installer files
RUN rm -rf /tmp/*
Run your image.
First you need to identify the major number associated with your devices.
The easiest way is to run the following command:
ls -la /dev | grep nvidia
If the result is blank, launching one of the samples on the host should do the trick.
The result should look like the ls -la output shown earlier: there is a set of two numbers between the group and the date.
These two numbers are called the major and minor numbers (written in that order) and designate a device.
We will just use the major numbers for convenience.
Why did we activate the lxc driver?
To use the lxc conf option, which allows us to permit our container to access those devices.
The option is (I recommend using * for the minor number because it reduces the length of the run command):
--lxc-conf='lxc.cgroup.devices.allow = c [major number]:[minor number or *] rwm'
So if I want to launch a container (supposing your image name is cuda):
docker run -ti --lxc-conf='lxc.cgroup.devices.allow = c 195:* rwm' --lxc-conf='lxc.cgroup.devices.allow = c 243:* rwm' cuda
We just released an experimental GitHub repository which should ease the process of using NVIDIA GPUs inside Docker containers.
Recent enhancements by NVIDIA have produced a much more robust way to do this.
Essentially they have found a way to avoid the need to install the CUDA/GPU driver inside the containers and have it match the host kernel module.
Instead, drivers are on the host and the containers don't need them.
It requires a modified docker-cli right now.
This is great, because now containers are much more portable.
A quick test on Ubuntu:
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
For more details see:
GPU-Enabled Docker Container
and: https://github.com/NVIDIA/nvidia-docker
Updated for cuda-8.0 on ubuntu 16.04
Install Docker: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04
Build the following image that includes the NVIDIA drivers and the CUDA toolkit.
Dockerfile
FROM ubuntu:16.04
MAINTAINER Jonathan Kosgei <jonathan@saharacluster.com>
# A docker container with the Nvidia kernel module and CUDA drivers installed
ENV CUDA_RUN https://developer.nvidia.com/compute/cuda/8.0/prod/local_installers/cuda_8.0.44_linux-run
RUN apt-get update && apt-get install -q -y \
wget \
module-init-tools \
build-essential
RUN cd /opt && \
wget $CUDA_RUN && \
chmod +x cuda_8.0.44_linux-run && \
mkdir nvidia_installers && \
./cuda_8.0.44_linux-run -extract=`pwd`/nvidia_installers && \
cd nvidia_installers && \
./NVIDIA-Linux-x86_64-367.48.run -s -N --no-kernel-module
RUN cd /opt/nvidia_installers && \
./cuda-linux64-rel-8.0.44-21122537.run -noprompt
# Ensure the CUDA libs and binaries are in the correct environment variables
ENV LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64
ENV PATH=$PATH:/usr/local/cuda-8.0/bin
RUN cd /opt/nvidia_installers &&\
./cuda-samples-linux-8.0.44-21122537.run -noprompt -cudaprefix=/usr/local/cuda-8.0 &&\
cd /usr/local/cuda/samples/1_Utilities/deviceQuery &&\
make
WORKDIR /usr/local/cuda/samples/1_Utilities/deviceQuery
Run your container
sudo docker run -ti --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm <built-image> ./deviceQuery
You should see output similar to:
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GRID K520
Result = PASS
Goal:
My goal was to make a CUDA-enabled Docker image without using nvidia/cuda as the base image, because I have a custom Jupyter image and I want to base it on that.
Prerequisite:
The host machine had the NVIDIA driver, CUDA toolkit, and nvidia-container-toolkit already installed. Please refer to the official docs, and to Rohit's answer.
Test that the NVIDIA driver and CUDA toolkit are installed correctly with nvidia-smi on the host machine, which should display the correct "Driver Version" and "CUDA Version" and show GPU info.
Test that nvidia-container-toolkit is installed correctly with: docker run --rm --gpus all nvidia/cuda:latest nvidia-smi
Dockerfile
I found what I assume to be the official Dockerfile for nvidia/cuda here. I "flattened" it, appended the contents to my Dockerfile, and tested it to be working nicely:
FROM sidazhou/scipy-notebook:latest
# FROM ubuntu:18.04
###########################################################################
# See https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/10.1/ubuntu18.04-x86_64/base/Dockerfile
# See https://sarus.readthedocs.io/en/stable/user/custom-cuda-images.html
###########################################################################
USER root
###########################################################################
# base
RUN apt-get update && apt-get install -y --no-install-recommends \
gnupg2 curl ca-certificates && \
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub | apt-key add - && \
echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list && \
echo "deb https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/nvidia-ml.list && \
apt-get purge --autoremove -y curl \
&& rm -rf /var/lib/apt/lists/*
ENV CUDA_VERSION 10.1.243
ENV CUDA_PKG_VERSION 10-1=$CUDA_VERSION-1
# For libraries in the cuda-compat-* package: https://docs.nvidia.com/cuda/eula/index.html#attachment-a
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-cudart-$CUDA_PKG_VERSION \
cuda-compat-10-1 \
&& ln -s cuda-10.1 /usr/local/cuda && \
rm -rf /var/lib/apt/lists/*
# Required for nvidia-docker v1
RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf && \
echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
###########################################################################
#runtime next
ENV NCCL_VERSION 2.7.8
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-libraries-$CUDA_PKG_VERSION \
cuda-npp-$CUDA_PKG_VERSION \
cuda-nvtx-$CUDA_PKG_VERSION \
libcublas10=10.2.1.243-1 \
libnccl2=$NCCL_VERSION-1+cuda10.1 \
&& apt-mark hold libnccl2 \
&& rm -rf /var/lib/apt/lists/*
# apt from auto upgrading the cublas package. See https://gitlab.com/nvidia/container-images/cuda/-/issues/88
RUN apt-mark hold libcublas10
###########################################################################
#cudnn7 (not cudnn8) next
ENV CUDNN_VERSION 7.6.5.32
RUN apt-get update && apt-get install -y --no-install-recommends \
libcudnn7=$CUDNN_VERSION-1+cuda10.1 \
&& apt-mark hold libcudnn7 && \
rm -rf /var/lib/apt/lists/*
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ENV NVIDIA_REQUIRE_CUDA "cuda>=10.1"
###########################################################################
#docker build -t sidazhou/scipy-notebook-gpu:latest .
#docker run -itd --gpus all \
# -p 8888:8888 \
# -p 6006:6006 \
# --user root \
# -e NB_UID=$(id -u) \
# -e NB_GID=$(id -g) \
# -e GRANT_SUDO=yes \
# -v ~/workspace:/home/jovyan/work \
# --name sidazhou-jupyter-gpu \
# sidazhou/scipy-notebook-gpu:latest
#docker exec sidazhou-jupyter-gpu python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"
To use the GPU from a Docker container, instead of using native Docker, use nvidia-docker. To install nvidia-docker, use the following commands:
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/ubuntu16.04/amd64/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker
sudo pkill -SIGHUP dockerd # Restart Docker Engine
sudo nvidia-docker run --rm nvidia/cuda nvidia-smi # finally run nvidia-smi in the same container
Use x11docker by mviereck:
https://github.com/mviereck/x11docker#hardware-acceleration says
Hardware acceleration
Hardware acceleration for OpenGL is possible with option -g, --gpu.
This will work out of the box in most cases with open source drivers on host. Otherwise have a look at wiki: feature dependencies.
Closed source NVIDIA drivers need some setup and support less x11docker X server options.
This script is really convenient, as it handles all the configuration and setup. Running a Docker image on X with a GPU is as simple as:
x11docker --gpu imagename
I would not recommend installing CUDA/cuDNN on the host if you can use Docker. Since at least CUDA 8 it has been possible to "stand on the shoulders of giants" and use the nvidia/cuda base images maintained by NVIDIA in their Docker Hub repo. Go for the newest and biggest one (with cuDNN if doing deep learning) if unsure which version to choose.
A starter CUDA container:
mkdir ~/cuda11
cd ~/cuda11
echo "FROM nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04" > Dockerfile
echo "CMD [\"/bin/bash\"]" >> Dockerfile
docker build --tag mirekphd/cuda11 .
docker run --rm -it --gpus 1 mirekphd/cuda11 nvidia-smi
Sample output:
(If nvidia-smi is not found in the container, do not try to install it there; it was already installed on the host with the NVIDIA GPU driver and should be made available from the host to the container if Docker has access to the GPU(s).)
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 450.57 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:01:00.0 On | N/A |
| 0% 50C P8 17W / 280W | 409MiB / 11177MiB | 7% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
Prerequisites
An appropriate NVIDIA driver with support for the latest CUDA version must be installed first on the host (download it from NVIDIA Driver Downloads and then mv driver-file.run driver-file.sh && chmod +x driver-file.sh && ./driver-file.sh). Drivers have been forward-compatible since CUDA 10.1.
GPU access enabled in Docker by installing the toolkit: sudo apt-get update && sudo apt-get install nvidia-container-toolkit (and then restarting the Docker daemon using sudo systemctl restart docker).

How to download all dependencies and packages to directory

I'm trying to install a package on a machine with no Internet connection. What I want to do is download all the packages and dependences on a machine WITH an Internet connection and then sneaker-net everything to the offline computer.
I've been playing with apt-get and apt-cache, but I haven't figured out a quick and easy way to download the package and its dependencies in one swoop to a directory of my choosing. How would I do this? Am I going about this problem correctly?
How would you install offline packages that have a lot of dependencies?
The marked answer has the problem that the packages available on the machine doing the downloads might differ from those on the target machine, and thus the package set might be incomplete.
To avoid this and get all dependencies, use the following:
apt-get download $(apt-rdepends <package>|grep -v "^ ")
Some packages returned by apt-rdepends don't exist under that exact name for apt-get download to fetch (for example, libc-dev). In those cases, filter out those exact names (be sure to use ^<NAME>$ so that other related names that do exist, for example libc-dev-bin, are not skipped):
apt-get download $(apt-rdepends <package>|grep -v "^ " |grep -v "^libc-dev$")
Once downloaded, you can move the .deb files to a machine without Internet and install them:
sudo dpkg -i *.deb
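If dpkg complains about installation order on the first pass, rerunning it and finishing configuration usually settles it, since all the .deb files are present (a sketch):
sudo dpkg -i *.deb
sudo dpkg --configure -a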
Same question already answered here:
How to list/download the recursive dependencies of a debian package?
try:
PACKAGES="wget unzip"
apt-get download $(apt-cache depends --recurse --no-recommends --no-suggests \
--no-conflicts --no-breaks --no-replaces --no-enhances \
--no-pre-depends ${PACKAGES} | grep "^\w")
# aptitude clean
# aptitude --download-only install <your_package_here>
# cp /var/cache/apt/archives/*.deb <your_directory_here>
The aptitude --download-only ... approach only works if you have a Debian distro with an Internet connection at hand.
If you don't, I think it is better to run the following script on the disconnected Debian machine:
apt-get --print-uris --yes install <my_package_name> | grep ^\' | cut -d\' -f2 >downloads.list
Move the downloads.list file to a connected Linux (or non-Linux) machine, and run:
wget --input-file downloads.list
This downloads all your files into the current directory. After that you can copy them onto a USB key and install them on your disconnected Debian machine.
Credits: http://www.tuxradar.com/answers/517
This will download all the debs to the current directory, and will NOT fail if it can't find a candidate.
It also does NOT require sudo to run the script!
nano getdebs.sh && chmod +x getdebs.sh && ./getdebs.sh
#!/bin/bash
package=ssmtp
apt-cache depends "$package" | grep Depends: >> deb.list
sed -i -e 's/[<>|:]//g' deb.list
sed -i -e 's/Depends//g' deb.list
sed -i -e 's/ //g' deb.list
filename="deb.list"
while read -r line
do
name="$line"
apt-get download "$name"
done < "$filename"
apt-get download "$package"
Note: I used this as my example because I was actually trying to download the dependencies for ssmtp and it failed on debconf-2.0, but this script got me what I needed!
A somewhat simplified way that worked for me (based on all of the above).
Note that the dependency hierarchy can go deeper than one level.
Get dependencies of your package
$ apt-cache depends mongodb | grep Depends:
Depends: mongodb-dev
Depends: mongodb-server
Get the URLs:
sudo apt-get --print-uris --yes -d --reinstall install mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-tools | grep "http://" | awk '{print$1}' | xargs -I'{}' echo {} | tee files.list
wget --input-file files.list
I used apt-cache depends <package> to get all required packages, whether they are already installed on the system or not, so it will always work correctly.
Because the output of apt-cache differs depending on the system language, you have to try this command on your system and adapt it.
apt-cache depends yourpackage
On an English system you get:
$ apt-cache depends yourpackage
node
Depends: libax25
Depends: libc6
On a German system you get:
node
Hängt ab von: libax25
Hängt ab von: libc6
The English version uses the term "Depends:".
You have to replace "yourpackage" with the package you want twice in this command, so take care of this:
$ sudo apt-get --print-uris --yes -d --reinstall install yourpackage $(apt-cache depends yourpackage | grep " Depends:" | sed 's/ Depends://' | sed ':a;N;$!ba;s/\n//g') | grep ^\' | cut -d\' -f2 >downloads.list
And the German version uses the term "Hängt ab von:".
Again, "yourpackage" appears twice in this command, and the language-specific term is also used twice, so take care of both if you adapt it to your language:
$ sudo apt-get --print-uris --yes -d --reinstall install yourpackage $(apt-cache depends yourpackage | grep "Hängt ab von:" | sed 's/ Hängt ab von://' | sed ':a;N;$!ba;s/\n//g') | grep ^\' | cut -d\' -f2 >downloads.list
You get the list of links in downloads.list.
Check the list, then go to your folder and download everything in it:
$ cd yourpathToYourFolder
$ wget --input-file downloads.list
All your required packages are in:
$ ls yourpathToYourFolder
This will download all packages and dependencies (not already installed) to a directory of your choice:
sudo apt-get install -d -o Dir::Cache=/path-to/directory/apt/cache -o Dir::State::Lists=/path-to/directory/apt/lists packages
Make sure /path-to/directory/apt/cache and /path-to/directory/apt/lists exist.
If you don't set -o Dir::Cache, it points to /var/cache/apt;
Dir::State::Lists points to /var/lib/apt/lists (which keeps the index files of available packages).
Both -o options can be used with update and upgrade instead of install.
On a different machine, run the same command without '-d'.
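For example, a sketch of the full flow (the directory layout and package name are arbitrary):
mkdir -p /tmp/offline/apt/cache /tmp/offline/apt/lists
sudo apt-get update -o Dir::Cache=/tmp/offline/apt/cache -o Dir::State::Lists=/tmp/offline/apt/lists
sudo apt-get install -d -o Dir::Cache=/tmp/offline/apt/cache -o Dir::State::Lists=/tmp/offline/apt/lists wget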
I'm assuming you've got a nice fat USB HD and a good connection to the net. You can use apt-mirror to essentially create your own debian mirror.
http://apt-mirror.sourceforge.net/
On modern Ubuntu systems (for example, 22.04):
apt clean
apt update
apt install --download-only freeipa-client
Afterwards you can find the .deb files in:
ls -l /var/cache/apt/archives/
If you accept the caveat that some dependencies may already be installed on your system, then the easiest way is to run apt-get install --simulate <your_package>, which first lists all the deps it would install; then copy the list of packages and run apt-get download <the_list_of_packages>.
e.g. for qt5-gtk2-platformtheme on a xubuntu-21.04 MINIMAL INSTALL you'll get (after apt-get install --simulate) the following:
libdouble-conversion3 libmd4c0 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5gui5 libqt5network5 libqt5svg5 libqt5widgets5 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-render-util0 libxcb-xinerama0 libxcb-xinput0 libxcb-xkb1 libxkbcommon-x11-0 qt5-gtk-platformtheme qttranslations5-l10n
Then you just cd into a folder of your choice, run apt-get download <the_list_above>, and you have them all downloaded there. You can then run dpkg -i *.deb.
Complementing and automating the exclusion of ALL conflicting dependencies (dependencies not found) in the command given by @onno:
apt-get download $(apt-rdepends <package>|grep -v "^ " |grep -v "^conflicting-dependency$")
At least for Ubuntu, the error message format is as follows:
E: Can't select candidate version from package <package> as it has no candidate
The following script downloads all found dependencies, excluding the ones that were not found:
#!/bin/bash
rm -f error.txt
apt download $(apt-rdepends $1 | grep -v "^ ") 2> error.txt
# IF THERE WERE ERRORS (DEPENDENCIES NOT FOUND)
if [ $(cat error.txt | wc -l) -gt 0 ]
then
    partial_command="\("
    while read -r line
    do
        # field 8 of the error line is the package name
        conflictive_package="$(awk '{split($0,array," "); print array[8]}' <<< $line)"
        partial_command="$partial_command$conflictive_package\|"
    done < error.txt
    # strip the trailing \| and close the group
    partial_command="$(awk '{print substr($0, 1, length($0)-2)}' <<< $partial_command)\)"
    eval "apt download \$(apt-rdepends $1 | grep -v '^ ' | grep -v '^$partial_command$')"
fi
rm error.txt
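Assuming the script is saved as, say, download_deps.sh (the name is arbitrary), usage would be:
chmod +x download_deps.sh
./download_deps.sh <package>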
It works for me:
sudo apt-get reinstall --download-only <your software>
For example:
sudo apt-get reinstall --download-only ubuntu-restricted-extras
To access the downloaded .deb files, look in this path:
/var/cache/apt/archives