libvirt error when trying to 'hot' attach-disk on guest with "Channel qemu-ga" - qemu

I have a KVM virtual machine running CentOS 7 as the guest OS. I'm trying to attach an additional disk to it on the fly (without shutting it down) using this command:
$ sudo virsh attach-disk centos --source /var/lib/libvirt/images/newdisk.img --target sdb --persistent
But receive an error:
error: Failed to attach disk
error: internal error: cannot update AppArmor profile 'libvirt-d2e7bbb8-c7b3-44ec-b0ea-27539e0df732'
If I do the same with a Debian guest, everything is OK.
What is the difference, and how can I solve this?
UPDATE:
I compared the two VMs' XML definitions and saw that the CentOS guest has a QEMU guest agent channel in its configuration:
<channel type="unix">
<source mode="bind" path="/var/lib/libvirt/qemu/channel/target/centos_auto.org.qemu.guest_agent.0"></source>
<target name="org.qemu.guest_agent.0" type="virtio"></target>
<address bus="0" controller="0" port="1" type="virtio-serial"></address>
</channel>
Then I removed the qemu-ga channel, restarted the VM, and checked the hot-add feature. It worked.
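A minimal sketch of that removal (assuming the domain is named centos, as in the attach-disk command above):
$ virsh edit centos      # delete the whole <channel type="unix"> ... </channel> block, then save and quit
$ virsh shutdown centos && virsh start centos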
I tested this on other VMs (CentOS, Fedora, Debian) and saw the same behavior.
As a result:
If I enable the qemu-agent channel, I cannot use hot plug.
If I want to use hot plug, I must forget about the agent.
Is this a mistake in my configuration, or can these features simply not work together?
Host OS: Ubuntu 15.10
QEMU emulator: 2.4.92 (also tested 2.3 and 2.4.1)
VMM: 1.3.0

This is a clear bug in the AppArmor security driver for libvirt. The existence of the QEMU guest agent config in the XML should have no impact on the ability to hot-plug disks into a guest. This bug should be reported to the libvirt upstream or the Ubuntu bug tracker.
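While waiting on a fix, you can at least confirm that AppArmor is the component rejecting the operation. A minimal sketch (the profile name comes from the error message above; the paths are the standard ones libvirt's AppArmor integration uses on Ubuntu):
$ sudo aa-status | grep libvirt                      # per-VM profiles currently loaded
$ ls /etc/apparmor.d/libvirt/                        # profiles generated by virt-aa-helper
$ grep -i 'apparmor.*denied' /var/log/syslog | tail  # recent denials, including the failed hotplug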

Related

Error while running "podman run"; error adding pod to CNI network "podman": unexpected end of JSON input

I'm new to podman, and I'm just trying to run containers with it.
(podman version 3.4.0, installed via brew, Intel Core Mac)
However, when I try to run "podman run {image-name}", the errors below are thrown.
$ podman run -ti -d --name web httpd
125
Error: error configuring network namespace for container b0e70d672cb66005833c0a300c8661b88eab49e942c240d69d17587e0b75c47b: error adding pod web_2_web_2 to CNI network "podman": unexpected end of JSON input
$ podman run centos:7
Error: error preparing container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449 for attach: error configuring network namespace for container a6d0bc1ad217cd8207935561dc8ff7bd33672da3fa513917f9965cb39520c449: error adding pod quirky_snyder_quirky_snyder to CNI network "podman": unexpected end of JSON input
After reading https://issueexplorer.com/issue/containers/podman/11452, I removed ~/.docker/, but that solution didn't work in my case.
The error message says there was "unexpected end of JSON input", but I don't know how to fix it. Could anyone guess why podman doesn't work even when running these base images, or how to debug it?
Thanks in advance.
On macOS, machine version 3.3.1 has this problem. I had this problem with server version 3.3.1 and do not encounter it with server version 3.4.0. You can check the server version with podman version.
Try removing the current machine and installing a newer one:
podman machine stop
podman machine rm
podman machine init --image-path next
podman machine start
Check server version again with podman version.
Try running your image again.
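For reference, podman version reports the client and the server (the machine VM) separately; the output below is illustrative and trimmed, with version numbers matching the scenario above:
$ podman version
Client:
Version:      3.4.0
API Version:  3.4.0
Server:
Version:      3.3.1
API Version:  3.3.1
If the Server version still reads 3.3.1 after recreating the machine, the new image was not picked up.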

Gnome Boxes on Fedora 33 fails to open

I attempt to launch gnome-boxes from the terminal (I'm running Fedora 33) and get the following error:
$ gnome-boxes
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.343: GtkFlowBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.344: GtkListBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.904: libvirt-machine.vala:83: Failed to disable 3D Acceleration
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.913: libvirt-broker.vala:70: Failed to update domain 'fedora33-wor-2': Failed to set domain configuration: XML error: Invalid PCI address 0000:04:00.0. slot must be >= 1
(gnome-boxes:3194): Boxes-CRITICAL **: 12:34:57.916: boxes_vm_importer_get_source_media: assertion 'self != NULL' failed
Segmentation fault (core dumped)
My system:
$ uname -a
Linux localhost.localdomain 5.9.16-200.fc33.x86_64 #1 SMP Mon Dec 21 14:08:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I don't know whether it's related, but I recently updated from kernel 5.9.11 directly to 5.9.16 (I hadn't used the PC in question for some weeks), and gnome-boxes was working normally before.
Please advise how I can restore gnome-boxes - I have some virtual machines that I need to access.
I faced this issue when I force-stopped Gnome Boxes while cloning a VM.
Deleting the conflicting VM will resolve your issue (in your case 'fedora33-wor-2').
To delete the VM on Fedora, install "libvirt-client", which provides "virsh", using the command:
dnf install libvirt-client
then double-check the available VMs using:
virsh list --all
Delete the VM using the command:
virsh undefine VM_Name
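For reference, the conflicting domain should be visible before you undefine it; the listing below is illustrative for this question's VM name:
$ virsh list --all
 Id   Name             State
---------------------------------
 -    fedora33-wor-2   shut off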
Channel Fun's answer solved the problem of starting up gnome-boxes.
But the real problem is in the cloning procedure: the XML describing the new machine is malformed.
virt-clone --original fedora33-ser --auto-clone
works properly.
I know this is an old thread, but I had the same problem recently.
I shut down Gnome Boxes whilst it was cloning a VM, and then shut down the machine.
I then couldn't open Boxes, as it would just crash.
I was able to delete the VM itself, and then deleted the XML file associated with it.
To delete the VM itself, go to:
$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images (which in my case is a symbolic link to a data drive)
and delete the VM with the name that you were cloning to (or, safer, just move it somewhere).
To delete the XML file associated with it:
$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/
and delete (or, safer, move) the file named VM_NAME.xml.
Then Boxes should open OK; at least it worked for me.
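The two moves above can also be done from a terminal; a minimal sketch, assuming the Flatpak install of Boxes (the paths from this answer) and using VM_NAME as a placeholder for the clone's name:
$ mv "$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images/VM_NAME" /tmp/
$ mv "$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/VM_NAME.xml" /tmp/
Moving to /tmp/ keeps a copy around until the next reboot, in case you need to restore it.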
Extending Channel Fun's answer, for Ubuntu repos the package is libvirt-clients (note the plural s):
sudo apt install libvirt-clients
Check the available VMs using:
virsh list --all
Delete the VM using:
virsh undefine VM_Name
If you receive the error:
error: Refusing to undefine while domain managed save image exists
Then you can explicitly remove it as well using the --managed-save flag:
virsh undefine VM_Name --managed-save
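Alternatively, the managed save image can be removed on its own first; virsh has a dedicated subcommand for this:
virsh managedsave-remove VM_Name
virsh undefine VM_Name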

KVMs not running after qemu-kvm upgrade in CentOS 8.1, RHEL 8.1

This is the error I encountered when I updated my CentOS 8.1/RHEL 8.1 machines; all the KVMs show the error below:
error: internal error: process exited while connecting to monitor: 2020-06-09T12:41:10.410896Z qemu-kvm: -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,vmport=off,smm=on,dump-guest-core=off: unsupported machine type
Use -machine help to list supported machines
Note: the problem is that the machine type Q35 is not correctly configured in your Kernel-based Virtual Machines running on RHEL 8/CentOS 8.
[Step 1:] cat /etc/libvirt/qemu/*.xml | grep "<name\| machine"
This will list the machine type in all of the KVMs installed.
[Output Snippet]
machine pc-q35-rhel8.1.0
[Step 2:] cd /etc/libvirt/qemu; ll
This will list all the XML files associated with your KVMs.
[Step 3:] At /etc/libvirt/qemu, use virsh edit <KVM file> ### Don't include .xml ###
Navigate to the machine attribute.
[Output Snippet]
<os>
<type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
<loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/Loadbalancer_VARS.fd</nvram>
<boot dev='hd'/>
</os>
Change machine='pc-q35-rhel8.1.0' to machine='q35'
Shift+ZZ (i.e. type ZZ) to save and quit.
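If you have many guests, editing each definition by hand gets tedious. A minimal sketch of batching the same change (an addition beyond the original steps: it dumps each domain, rewrites the machine attribute, and redefines the domain, which keeps libvirt's bookkeeping consistent):
for dom in $(virsh list --all --name); do
  virsh dumpxml "$dom" > "/tmp/$dom.xml"                                 # dump the current definition
  sed -i "s/machine='pc-q35-rhel8.1.0'/machine='q35'/" "/tmp/$dom.xml"   # fix the machine type
  virsh define "/tmp/$dom.xml"                                           # redefine the domain
done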
[Step 4:]
systemctl restart libvirtd && systemctl status -l libvirtd
virsh list --all
virsh start --domain <KVM>
Check the status of your running KVMs
virsh list --state-running
Now the issue should be resolved and your KVMs should be humming away.
Note, though, that if you head back in and check the configuration XML file with virsh edit, you'll see that q35 has been automatically converted to pc-q35-rhel7.6.0.
But this shouldn't be an issue.
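You can confirm the rewritten value without opening the editor:
virsh dumpxml <KVM> | grep 'machine='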
Cheers :)

Failed to start Zabbix 3.0 on CentOS 7

I met an issue when installing Zabbix 3.0 from packages on CentOS 7.
After I finished setting up MySQL, PHP, Apache, and the configuration in zabbix.conf,
I ran systemctl start zabbix-server.service. It didn't work and showed:
Job for zabbix-server.service failed. See 'systemctl status zabbix-server.service' and 'journalctl -xn' for details.
Then my colleague told me to install trousers and gnutls, and after that zabbix-server worked. What is the use of these two packages? If they are necessary, why aren't they included in the Zabbix package?
Zabbix Server 3.0 won't start on CentOS 7 if you haven't disabled SELinux.
You can disable SELinux in /etc/selinux/config.
After that, you must reboot your server with reboot or shutdown -r now.
After the reboot, confirm that the getenforce command returns Disabled.
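For reference, the change in /etc/selinux/config is a single line; after editing it should read:
SELINUX=disabled
(Setting SELINUX=permissive and running setenforce 0 also stops SELinux from blocking the daemon without a reboot, if you prefer not to disable it entirely.)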
Most likely, you didn't install those packages but upgraded them. They are linked in through the Jabber/XMPP support.
This was a bug in the Red Hat packages that took some time to resolve; see this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1071171
And this is the Zabbix issue tracking the same problem: https://support.zabbix.com/browse/ZBX-7790

Hortonworks Data Platform (Hadoop) installation single node cluster on Ubuntu 12.04 64bit

I am following the manual installation guide (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.0/bk_installing_manually_book/content/rpm-chap1.html) provided on the Hortonworks website. I am facing an issue while configuring the remote repositories (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.0/bk_installing_manually_book/content/rpm-chap1-3.html). When I run the command "sudo wget http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hdp.list -O /etc/apt/sources.list.d/hdp.list" on the Ubuntu 12.04 terminal, it shows the error "404 Not Found".
Below is the error:
--2015-04-13 12:59:10-- http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hdp.list
Resolving public-repo-1.hortonworks.com (public-repo-1.hortonworks.com)... 54.192.174.35, 54.230.174.43, 54.230.174.121, ...
Connecting to public-repo-1.hortonworks.com (public-repo-1.hortonworks.com)|54.192.174.35|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2015-04-13 12:59:11 ERROR 404: Not Found.
Please help me solve this issue.
Can you disable your firewall and check whether you still get this problem?
Some corporate firewalls block Hortonworks (as a source of unauthorized software).
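One quick way to tell whether the 404 comes from the repository itself or from something in between (an extra check, not part of the original answer):
$ curl -I http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hdp.list
curl -I sends a HEAD request; if a proxy or firewall is intercepting traffic, the response headers often name it (e.g. a Server or Via header), whereas a genuine 404 from the repository suggests the URL path has moved.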