run ubuntu img from console on qemu

I created a VDI image first:
$sudo qemu-img create -f vdi ubuntu-server.vdi 10G
and installed Ubuntu Server:
$sudo qemu-system-x86_64 -m 4G -hda ubuntu-server.vdi -cdrom ubuntu-16.04.3-server-amd64.iso -boot d
However, I want to run this Ubuntu image without a GUI (console only).
I searched around, but most examples boot a kernel image directly (-kernel bzImage) with an append option (console=ttyS0); that append option is only used together with the -kernel option.
So when I tried
$qemu -hda ubuntu-server.vdi -m 1G -append "~~~"
QEMU told me that option cannot be used without the -kernel option. I tried the -nographic and -curses options, but then nothing is displayed at all. Is there any command or option to use the console instead of a GUI when running Ubuntu?
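For reference, the pattern I keep finding pairs -kernel with -append and -nographic, roughly like this (the bzImage path and root device are my guesses, not something I have working):
$qemu-system-x86_64 -kernel /path/to/bzImage -hda ubuntu-server.vdi -append "root=/dev/sda1 console=ttyS0" -nographic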

Related

How can I add packages to Alpine netboot?

I have downloaded the Alpine netboot distribution from this URL:
https://dl-cdn.alpinelinux.org/alpine/v3.16/releases/x86/alpine-netboot-3.16.1-x86.tar.gz
I have run a virtual machine with QEMU this way:
qemu-system-i386 -m 256 -kernel boot/vmlinuz-lts -initrd boot/initramfs-lts -append "console=ttyS0 ip=dhcp alpine_repo=http://dl-cdn.alpinelinux.org/alpine/edge/main/"
With this command, a virtual machine is created and 22 apk packages are installed.
I am able to install additional apk packages, but I have to do it by hand (apk add command).
How can I script package installation from the QEMU command line?
Please note: I can replace http://dl-cdn.alpinelinux.org/ with a local mirror. So I can also change files in the repository.
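One direction that might fit, though I have not verified it here: Alpine's initramfs accepts an apkovl= boot parameter pointing at an overlay tarball, and an /etc/apk/world file inside that overlay lists packages to install at boot. A rough sketch, where the overlay name, package list and mirror host are placeholders of mine:
# build a minimal overlay whose /etc/apk/world names the extra packages
mkdir -p overlay/etc/apk
printf 'curl\nhtop\n' > overlay/etc/apk/world
tar -C overlay -czf localhost.apkovl.tar.gz etc
# serve the tarball over HTTP (e.g. on the local mirror), then reference it at boot
qemu-system-i386 -m 256 -kernel boot/vmlinuz-lts -initrd boot/initramfs-lts \
  -append "console=ttyS0 ip=dhcp alpine_repo=http://dl-cdn.alpinelinux.org/alpine/edge/main/ apkovl=http://<local-mirror>/localhost.apkovl.tar.gz"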

What is the right way to increase the hard and soft ulimits for a singularity-container image?

The task I want to complete: I need to run a Python package inside a singularity container, and that package needs to open at least some 9704 files. This is the first I have heard of such a limit, and from searching around it seems to have something to do with the system's ulimit.
What I currently have is the following def file.
I am setting the * hard nofile and * soft nofile entries to 15000. The sed line does edit the conf file, but within the singularity shell my ulimit is still the default 1024.
Bootstrap: docker
From: fedora
%post
dnf -y update
dnf -y install nano pip wget libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
wget -c https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
/bin/bash Anaconda3-2020.02-Linux-x86_64.sh -bfp /usr/local
conda config --file /.condarc --add channels defaults
conda config --file /.condarc --add channels conda-forge
conda update conda
sed -i '2s/#/\n* hard nofile 15000\n* soft nofile 15000\n\n#/g' /etc/security/limits.conf
bash
%runscript
python /Users/lamsal/count_of_monte_cristo/orthofinder_run/OrthoFinder_source/orthofinder.py -f /Users/lamsal/count_of_monte_cristo/orthofinder_run/concatanated_FAs/
I am following the "official" instructions for changing the ulimits on a RHEL-based system from IBM's webpage here: https://www.ibm.com/docs/en/rational-clearcase/9.0.2?topic=servers-increasing-number-file-handles-linux-workstations
Is the sed line not the right way to change ulimits for a singularity image?
Short answer:
Change the value on the host OS.
Long answer:
In this instance, running a singularity container is best thought of as any other binary you're executing in your host OS. It creates its own separate environment, but otherwise it follows the rules and restrictions of the user running it. Here, the ulimit is taken from the host kernel and completely ignores any configs that may exist in the container itself.
Compare the output from the following:
# check the ulimit on the host
ulimit -n
# check the ulimit in the singularity container
singularity exec -e image.sif ulimit -n
# docker only cares about container config settings
docker run --rm fedora:latest ulimit -n
# change your local ulimit
ulimit -n 4096
# verify it has changed
ulimit -n
# singularity has changed
singularity exec -e image.sif ulimit -n
# ... but docker hasn't
docker run --rm fedora:latest ulimit -n
To have a persistent fix, you'll need to modify the setting on your host OS. Assuming you're on macOS, this answer should take care of that.
If you don't have root privileges, or you're only using this intermittently, you can run ulimit before running singularity. Alternatively, you could use a wrapper script to run the image and set it in there (see the sketch below).
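A minimal sketch of such a wrapper, assuming an image called image.sif and a limit of 15000 (both placeholders):
#!/bin/bash
# raise the open-files limit for this shell only, then hand off to the container
ulimit -n 15000
exec singularity run image.sif "$@"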

How to save the QEMU console output from a Windows host to a file?

Background:
Host: Win10
Qemu: Qemu 6.0.0
This is my command: qemu-system-arm.exe -D ./log.txt -M sabrelite -smp 4 -m 1G -nographic -serial null -serial mon:stdio -kernel image -dtb sabrelite.dtb
I'm using this command to start a QEMU VM in order to run some tests on it that produce a lot of log output.
I want to save that output to a file.
Question:
How can I save the console output from QEMU on a Windows host to a file?
It seems that -D ./log.txt just creates an empty file and does not save the output to it.
The -D option names the log file for the debug info enabled with '-d'. If you don't specify any '-d' options, there will be no debug info in the log file.
The output of the serial console is entirely separate. That is controlled by the '-serial' option, which currently you have set up to go to stdio (with a monitor muxed to also use stdio). You can look at the other options for where -serial can be directed; this does include a "send to file", but note that if you just do that then you won't also be able to see it on the console and you won't be able to input anything.
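For example, sending the serial console to a file instead of stdio might look like this (same machine options as the original command; the log file name is arbitrary):
qemu-system-arm.exe -M sabrelite -smp 4 -m 1G -nographic -kernel image -dtb sabrelite.dtb -serial null -serial file:serial.log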
You can use standard Windows output redirection. This command line redirects stdout and stderr to log.txt:
qemu-system-arm.exe -M sabrelite -smp 4 -m 1G -nographic -serial null -serial mon:stdio -kernel image -dtb sabrelite.dtb 1> ./log.txt 2>&1

Installation Requirements for mysql with DBIish on rakudo-star docker image

I was creating my own docker image based on the latest rakudo-star docker image. I wanted to use DBIish to connect to a MySQL database. Unfortunately I am not able to get DBDish::mysql to work.
I've installed default-libmysqlclient-dev, as you can see in:
# find / -name 'libmysqlclient*.so'
/usr/lib/x86_64-linux-gnu/libmysqlclient_r.so
/usr/lib/x86_64-linux-gnu/libmysqlclient.so
The error I am facing is:
# perl6 -Ilib -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot locate native library 'mysqlclient': mysqlclient: cannot open shared object file: No such file or directory
in method setup at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 289
in method CALL-ME at /usr/share/perl6/sources/24DD121B5B4774C04A7084827BFAD92199756E03 (NativeCall) line 539
in method connect at /root/DBIish/lib/DBDish/mysql.pm6 (DBDish::mysql) line 12
in block <unit> at -e line 1
Short answer: you need the package libmysqlclient20 (I added the documentation request to a similar DBIish issue). Debian 9 (stable at the moment) ships an older version than Ubuntu 18.04 (stable at the moment) and Debian Unstable, and it refers to mariadb instead of mysql. Pick libmariadbclient18 on images based on Debian Stable and create a link with the mysql name (see below).
On Debian Testing/Unstable and recent derivatives:
$ sudo apt-get install libmysqlclient20
$ dpkg -L libmysqlclient20
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20.3.9
/usr/share
/usr/share/doc
/usr/share/doc/libmysqlclient20
/usr/share/doc/libmysqlclient20/NEWS.Debian.gz
/usr/share/doc/libmysqlclient20/changelog.Debian.gz
/usr/share/doc/libmysqlclient20/copyright
/usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
On Debian 9 and derivatives:
$ dpkg -L libmariadbclient18
/.
/usr
/usr/lib
/usr/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18.0.0
/usr/lib/x86_64-linux-gnu/mariadb18
/usr/lib/x86_64-linux-gnu/mariadb18/plugin
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/client_ed25519.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/dialog.so
/usr/lib/x86_64-linux-gnu/mariadb18/plugin/mysql_clear_password.so
/usr/share
/usr/share/doc
/usr/share/doc/libmariadbclient18
/usr/share/doc/libmariadbclient18/changelog.Debian.gz
/usr/share/doc/libmariadbclient18/copyright
/usr/lib/x86_64-linux-gnu/libmariadbclient.so.18
Create the link:
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18
In order to illustrate this, I created an Ubuntu 18.04 container for the occasion*:
docker run -ti --rm --entrypoint=bash rakudo/ubuntu-amd64-18.04
And the abbreviated commands and output:
# apt-get install -y libmysqlclient20 build-essential
# zef install DBIish
# perl6 -e 'use DBDish::mysql; DBDish::mysql.connect()'
Cannot look up attributes in a DBDish::mysql type object
[...]
The error is because I didn't pass the correct parameters to connect, as I didn't have a database running. The important thing is that no .so file is missing.
*: I uploaded it to the Docker Hub, a normal run will put you right in the REPL:
$ docker run -ti --rm rakudo/ubuntu-amd64-18.04
To exit type 'exit' or '^D'
>
(I didn't use the Star image when debugging, but it does not matter because this is a more generic problem.)
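Translated back to the original Star/Debian-based image, the build steps would boil down to something like this (a sketch; whether you need the libmariadbclient18 or the libmysqlclient20 variant depends on the underlying Debian release, as listed above):
# inside the image build (e.g. the RUN steps of a Dockerfile based on rakudo-star)
apt-get update
apt-get install -y --no-install-recommends libmariadbclient18 build-essential
ln -s /usr/lib/x86_64-linux-gnu/libmariadbclient.so.18 /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18
zef install DBIish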

QEMU built from source doesn't work with --enable-kvm flag on RHEL 7 but surprisingly works on CentOS 7

I am trying to build and run QEMU from source with the --enable-kvm flag. The surprising fact is that QEMU with --enable-kvm works like a charm on CentOS 7 (server as well as workstation) but hangs terribly on RHEL 7 server.
I am using the Linux From Scratch (BLFS) guide to build it: http://www.linuxfromscratch.org/blfs/view/cvs/postlfs/qemu.html
I have checked for the vmx flag using
grep -E "(vmx|svm)" /proc/cpuinfo | wc -l
4
On RHEL as well as CentOS I have installed the dependencies using the following script.
#!/bin/sh
yum install gcc
yum install zlib-devel
yum install gnutls-devel
yum install libgcrypt-devel
yum install glibc-devel
yum install glib2-devel
yum install pixman-devel
Then I used the following script to configure and compile the build:
if [ $(uname -m) = i686 ]; then
QEMU_ARCH=i386-softmmu
else
QEMU_ARCH=x86_64-softmmu
fi
sed -i 's/ memfd_create/ qemu_memfd_create/' util/memfd.c &&
mkdir -vp build &&
cd build &&
../configure --target-list=$QEMU_ARCH \
--enable-gnutls \
--enable-gcrypt &&
unset QEMU_ARCH &&
make &&
make install
After this I am trying to boot an encrypted virtual disk using this command:
qemu-system-x86_64 --enable-kvm -daemonize -display none \
-net user,hostfwd=tcp::3000-:22,hostfwd=tcp::8080-:8080,hostfwd=tcp::80-:80,hostfwd=tcp::443-:443 -net nic \
-object secret,id=secmaster0,format=base64,file=key.b64 \
-object secret,id=sec0,keyid=secmaster0,format=base64,\
data=$SECRET,iv=$(<iv.b64) \
-drive if=none,driver=luks,key-secret=sec0,\
id=drive0,file.driver=file,\
file.filename=prod.luks \
-device virtio-blk,drive=drive0 -m 5120
Then I just ssh into the daemonized KVM guest. The point is everything works like a charm on CentOS, but on RHEL 7 I am not able to ssh into the machine.
If I remove the --enable-kvm flag, I am able to ssh in.
I have already spent a lot of time experimenting with it and I simply don't understand what is going wrong. I am no pro on this topic, just trying to find a fix for a niche problem. Any guidance on debugging QEMU, or any reference to documents/mailing-list threads/forums, is deeply appreciated.
Peace.
Update
As mentioned by Peter Maydell in the comments, I started QEMU without the -display none flag. It started a VNC server; I connected over VNC and KVM does not seem to boot the hard disk. It is stuck on
Booting from Hard Disk ...
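Not an answer, but a few generic host-side checks that may help narrow this down on the RHEL machine (none of these are specific to this build):
# is the KVM module loaded, and can the invoking user access the device?
lsmod | grep kvm
ls -l /dev/kvm
# look for virtualization being disabled in firmware or other KVM complaints
dmesg | grep -i kvm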