KVMs not running after qemu-kvm upgrade in CentOS 8.1, RHEL 8.1 - qemu

This is the error I encountered after updating my CentOS 8.1/RHEL 8.1 machines; all of the KVMs show the error below:
error: internal error: process exited while connecting to monitor: 2020-06-09T12:41:10.410896Z qemu-kvm: -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,vmport=off,smm=on,dump-guest-core=off: unsupported machine type
Use -machine help to list supported machines

Note: The error means the Q35 machine type recorded in your kernel-based virtual machines' configuration is no longer recognized on RHEL 8 / CentOS 8 after the upgrade.
[Step 1:] cat /etc/libvirt/qemu/*.xml | grep '<name\| machine'
This will list the machine type in all of the KVMs installed.
[Output Snippet]
machine pc-q35-rhel8.1.0
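The error message itself suggests listing the supported machine types. As a quick sanity check (a sketch; on RHEL/CentOS the emulator usually lives at /usr/libexec/qemu-kvm, but the path may differ on your install):
/usr/libexec/qemu-kvm -machine help | grep q35
The q35 alias and the pc-q35-rhelX.Y.0 variants printed there are the values the upgraded qemu-kvm will accept.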
[Step 2:] cd /etc/libvirt/qemu; ll
This will list all the XML files associated with your KVMs.
[Step 3:] At /etc/libvirt/qemu, use virsh edit <KVM file> ###Don't include .xml###
Navigate to the machine attribute in the <os> section.
[Output Snippet]
<os>
<type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
<loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/Loadbalancer_VARS.fd</nvram>
<boot dev='hd'/>
</os>
Change machine='pc-q35-rhel8.1.0' to machine='q35'
Press Shift+ZZ (in vi) to save and quit.
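If you have many guests to fix, a rough batch sketch is below (assumes the default qemu:///system connection and that you back up the XML files under /etc/libvirt/qemu first):
for vm in $(virsh list --all --name); do
  virsh dumpxml "$vm" > /tmp/"$vm".xml                                  # dump the current definition
  sed -i "s/machine='pc-q35-rhel8.1.0'/machine='q35'/" /tmp/"$vm".xml   # swap in the generic alias
  virsh define /tmp/"$vm".xml                                           # re-register the domain
done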
[Step 4:]
systemctl restart libvirtd && systemctl status -l libvirtd
virsh list --all
virsh start --domain <KVM>
Check the status of your running KVMs
virsh list --state-running
Now the issue should be resolved and your KVMs should be humming away.
Note, though, that if you head back in and check the configuration XML with virsh edit, you'll see that q35 is converted to pc-q35-rhel7.6.0 automatically.
This shouldn't be an issue.
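If you're curious which concrete type the q35 alias resolves to on your host, virsh can show the canonical name (output varies with the installed qemu-kvm):
virsh capabilities | grep -i q35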
Cheers :)

Related

Error while running "podman run"; error adding pod to CNI network "podman": unexpected end of JSON input

I'm new to podman, and I'm just trying to run containers with it.
(podman version 3.4.0, installed via brew, Intel Core Mac)
However, when I try to run "podman run {image-name}", the errors below are thrown.
$ podman run -ti -d --name web httpd 125
Error: error configuring network namespace for container b0e70d672cb66005833c0a300c8661b88eab49e942c240d69d17587e0b75c47b: error adding pod web_2_web_2 to CNI network "podman": unexpected end of JSON input
$ podman run centos:7
Error: error preparing container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449 for attach: error configuring network namespace for container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449: error adding pod quirky_snyder_quirky_snyder to CNI network "podman": unexpected end of JSON input
By reading https://issueexplorer.com/issue/containers/podman/11452, I removed ~/.docker/, but the solution doesn't work in my case.
Of course, the error message says there was "unexpected end of JSON input", but I don't know how to fix it. Could anyone guess why podman doesn't work even when running these base images, or how to debug it?
Thanks in advance.
On macOS, the current machine version 3.3.1 has this problem. I had this problem on server version 3.3.1 and do not encounter it on server version 3.4.0. You can check the server version with podman version.
Try removing the current machine and installing a newer one:
podman machine stop
podman machine rm
podman machine init --image-path next
podman machine start
Check server version again with podman version.
Try running your image again.
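For reference, a quick way to verify before re-running the image (the exact machine name and versions reported will depend on your setup):
podman machine list    # the new machine should show as currently running
podman version         # Client and Server should now report matching versions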

Gnome Boxes on Fedora 33 fails to open

I attempt to load gnome-boxes from the terminal (I'm running Fedora 33) and get the following error
$ gnome-boxes
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.343: GtkFlowBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.344: GtkListBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.904: libvirt-machine.vala:83: Failed to disable 3D Acceleration
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.913: libvirt-broker.vala:70: Failed to update domain 'fedora33-wor-2': Failed to set domain configuration: XML error: Invalid PCI address 0000:04:00.0. slot must be >= 1
(gnome-boxes:3194): Boxes-CRITICAL **: 12:34:57.916: boxes_vm_importer_get_source_media: assertion 'self != NULL' failed
Segmentation fault (core dumped)
My system:
$ uname -a
Linux localhost.localdomain 5.9.16-200.fc33.x86_64 #1 SMP Mon Dec 21 14:08:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I don't know whether it's related, but I recently updated from kernel 5.9.11 directly to 5.9.16 (I hadn't used the PC in question for some weeks), and before that gnome-boxes was working as normal.
Please advise how I can restore gnome-boxes - I have some virtual machines that I need to access...
I faced this issue when I force-stopped GNOME Boxes while cloning a VM.
Deleting the conflicting VM will resolve your issue (in your case 'fedora33-wor-2').
To delete the VM in Fedora, install "libvirt-client", which provides "virsh", using the command
dnf install libvirt-client
then double-check the available VMs using
virsh list --all
Delete the VM using command,
virsh undefine VM_Name
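Note that GNOME Boxes normally registers its VMs against the user session rather than the system instance, so if virsh list --all comes back empty, try pointing virsh at the session URI (an assumption about a default Boxes setup):
virsh --connect qemu:///session list --all
virsh --connect qemu:///session undefine fedora33-wor-2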
@channel-fun's answer solved the problem of starting up gnome-boxes.
But the real problem is in the cloning procedure: the XML describing the new machine is malformed.
virt-clone --original fedora33-ser --auto-clone
works properly.
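For completeness, virt-clone can also be given an explicit clone name and disk path; the name and path below are only placeholders:
virt-clone --original fedora33-ser --name fedora33-ser-clone --file /var/lib/libvirt/images/fedora33-ser-clone.qcow2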
I know this is an old thread, but I had the same problem recently.
I shut down GNOME Boxes whilst it was cloning a VM, and shut down the machine.
I then couldn't open boxes, as it would just crash.
I was able to delete the VM itself, and then deleted the XML file associated with it.
To delete the VM itself, go to :
$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images (which in my case is a symbolic link to a data drive)
and delete the VM with the name that you were cloning to (or safer, just move it somewhere).
To delete the XML file associated with it:
$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/
and delete (or safer move) the file that is named VM_NAME.xml.
Then boxes should open ok, at least it worked for me.
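Roughly, the two moves described above (VM_NAME and the backup directory are placeholders; the paths assume the Flatpak install of Boxes, as in this answer):
mv "$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images/VM_NAME" /path/to/backup/
mv "$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/VM_NAME.xml" /path/to/backup/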
Extending Channel Fun's answer: for Ubuntu repos the package is libvirt-clients (note the plural s):
sudo apt install libvirt-clients
Check the available VMs using:
virsh list --all
Delete the VM using:
virsh undefine VM_Name
If you receive the error:
error: Refusing to undefine while domain managed save image exists
Then you can explicitly remove that also using the --managed-save flag:
virsh undefine VM_Name --managed-save
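If the domain also has leftover storage volumes you want removed, virsh undefine accepts a further flag for that as well (destructive, so double-check what it will delete first):
virsh undefine VM_Name --managed-save --remove-all-storage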

Cannot install any ejabberd contrib module

I'm trying to install ejabberd-contrib modules, using this guide.
However, when I run ejabberdctl modules_update_specs, nothing is returned.
And when I try to install any of the individual modules:
ejabberdctl module_install mod_pottymouth
Failed RPC connection to the node ejabberd@localhost: {'EXIT',
{undef,
[{bitarray,new,
[16777216,false],
[]},
{etbloom,
'-bloom/3-lc$^0/1-0-',
2,
[{file,
"/var/lib/ejabberd/.ejabberd-modules/sources/ejabberd-contrib/mod_pottymouth/deps/etbloom/src/etbloom.erl"},
{line,77}]},
{etbloom,bloom,3,
[{file,
"/var/lib/ejabberd/.ejabberd-modules/sources/ejabberd-contrib/mod_pottymouth/deps/etbloom/src/etbloom.erl"},
{line,77}]},
{etbloom,sbf,4,
[{file,
"/var/lib/ejabberd/.ejabberd-modules/sources/ejabberd-contrib/mod_pottymouth/deps/etbloom/src/etbloom.erl"},
{line,98}]},
{bloom_gen_server,
init,1,
[{file,
"/var/lib/ejabberd/.ejabberd-modules/sources/ejabberd-contrib/mod_pottymouth/src/bloom_gen_server.erl"},
{line,28}]},
{gen_server,init_it,
2,
[{file,
"gen_server.erl"},
{line,374}]},
{gen_server,init_it,
6,
[{file,
"gen_server.erl"},
{line,342}]},
{proc_lib,
init_p_do_apply,3,
[{file,
"proc_lib.erl"},
{line,249}]}]}}
Commands to start an ejabberd node:
start Start an ejabberd node in server mode
debug Attach an interactive Erlang shell to a running ejabberd node
iexdebug Attach an interactive Elixir shell to a running ejabberd node
live Start an ejabberd node in live (interactive) mode
iexlive Start an ejabberd node in live (interactive) mode, within an Elixir shell
foreground Start an ejabberd node in server mode (attached)
ejabberdctl status
The node ejabberd@localhost is started with status: started
How can I fix this?
However when I run ejabberdctl modules_update_specs nothing is returned.
Then it probably worked correctly; otherwise it would have returned some error and echo $? would show 1. Example showing that it worked correctly and created the path:
$ ejabberdctl modules_update_specs
$ echo $?
0
$ ls $HOME/.ejabberd-modules/
sources
And when I try to install any of the individual modules:
Installing ANY module produces an error? For example, if you try installing mod_rest, does it work? Notice this reports a warning about documentation, which is not important:
$ ejabberdctl module_install mod_rest
/home/badlop/.ejabberd-modules/sources/ejabberd-contrib/mod_rest/src/mod_rest.erl:27: Warning: undefined callback function mod_doc/0 (behaviour 'gen_mod')
$ ls $HOME/.ejabberd-modules/
mod_rest sources
{'EXIT', {undef, [{bitarray,new,
Yes, this is a known problem when installing mod_pottymouth. The workaround for installing that module is described in the README.txt file of that module. I've followed those instructions now, and the module compiled and installed correctly.
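As a further sanity check of the module repository itself, ext_mod exposes a couple of other ejabberdctl commands (output will depend on your installation):
ejabberdctl modules_available    # modules known from the fetched specs
ejabberdctl modules_installed    # modules currently installed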

libvirt error when trying to 'hot' attach-disk on guest with "Channel qemu-ga"

I have a KVM virtual machine running CentOS 7 as the guest OS. I'm trying to attach an additional disk to it on the fly (without shutting it down) using this command:
$ sudo virsh attach-disk centos --source /var/lib/libvirt/images/newdisk.img --target sdb --persistent
But receive an error:
error: Failed to attach disk
error: internal error: cannot update AppArmor profile 'libvirt-d2e7bbb8-c7b3-44ec-b0ea-27539e0df732'
If I do the same with a Debian guest, everything is OK.
What is difference, how to solve that?
UPDATE:
I have a comment!
I compared the two VMs' XML and saw that the CentOS guest has the QEMU guest agent in its configuration:
<channel type="unix">
<source mode="bind" path="/var/lib/libvirt/qemu/channel/target/centos_auto.org.qemu.guest_agent.0"></source>
<target name="org.qemu.guest_agent.0" type="virtio"></target>
<address bus="0" controller="0" port="1" type="virtio-serial"></address>
</channel>
Then I removed the "channel qemu-ga" section, restarted the VM, and checked the "hot add" feature. It worked.
I tested it on other VMs (CentOS, Fedora, Debian) and saw the same.
As a result:
If I enable the qemu agent, I cannot use hot plug.
If I use "hot plug", I must forget about the agent.
Is it a mistake in my configuration, or can these features not work together?
Host-OS: Ubuntu 15.10
QEMU emulator: now 2.4.92 (tested 2.3 and 2.4.1)
VMM: 1.3.0
This is a clear bug in the apparmor security driver for libvirt. The existence of the QEMU guest agent config in the XML should have no impact on ability to hotplug disks to a guest. This bug should be reported to the libvirt upstream, or Ubuntu bug trackers.
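Until the bug is fixed, one workaround I'd expect to help (at the cost of guest isolation, so only if you accept that trade-off) is disabling the security driver for libvirt guests:
echo 'security_driver = "none"' | sudo tee -a /etc/libvirt/qemu.conf   # overrides the commented-out default
sudo systemctl restart libvirtd                                        # the service may be named libvirt-bin on older Ubuntu releases
Then power-cycle the guest and retry the attach-disk command.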

ejabberd 15.07: no sasl log file anymore

I compiled ejabberd 15.07 (make install, using a prefix for destination).
I am under Linux Mint 16 (based on Ubuntu 13.10) with the distro's erlang package (version R16B01).
I also tested on Ubuntu 12.04 using Erlang OTP 18.0 (erlang compiled myself and ejabberd compiled using --with-erlang, distro's packages are not installed).
ejabberd works perfectly fine in both environments, except that the erlang.log file is not created; basically no Erlang SASL logs at all.
I upgraded from the distro's ejabberd v2.1.10, where that file used to contain a bunch of supervisor logs and component crash/restart entries.
Looking at ejabberdctl script, it actually does set the sasl_error_logger file like it always did in the past (e.g. in versions 2.x).
If I do
gperrot@tristan ~/apps/ejabberd/sbin $ su -c 'erl -sasl sasl_error_logger \{file,\"/home/gperrot/apps/ejabberd/var/log/ejabberd/erlang.log\"\}'
Erlang R16B01 (erts-5.10.2) [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]
Eshell V5.10.2 (abort with ^G)
1> application:start(sasl).
ok
I can get an erlang.log file created with some content.
I used bash -x ejabberdctl start to check the command it uses:
sh -c '/usr/bin/erl -sname ejabberd@localhost -noinput -detached -pa /home/gperrot/apps/ejabberd/lib/ejabberd/ebin -mnesia dir "\"/home/gperrot/apps/ejabberd/var/lib/ejabberd\"" -ejabberd log_rate_limit 100 log_rotate_size 10485760 log_rotate_count 1 log_rotate_date '\''""'\'' -s ejabberd -sasl sasl_error_logger \{file,\"/home/gperrot/apps/ejabberd/var/log/ejabberd/erlang.log\"\} +K true -smp auto +P 250000 start ""'
The file path is set properly, the same user is running it, and the file permissions are the same.
I don't understand why I get no erlang.log file managed by ejabberd.
SASL is correctly started:
gperrot@tristan ~/apps/ejabberd/sbin $ ejabberdctl debug
(ejabberd@localhost)1> application:start(sasl).
{error,{already_started,sasl}}
And in ejabberd.log I also get
2015-09-04 17:29:51.252 [info] <0.7.0> Application sasl started on node ejabberd@localhost
But no erlang.log file.
Any idea why?