Gnome Boxes on Fedora 33 fails to open

I attempt to launch gnome-boxes from the terminal (I'm running Fedora 33) and get the following error:
$ gnome-boxes
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.343: GtkFlowBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Gtk-WARNING **: 12:34:57.344: GtkListBox with a model will ignore sort and filter functions
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.904: libvirt-machine.vala:83: Failed to disable 3D Acceleration
(gnome-boxes:3194): Boxes-WARNING **: 12:34:57.913: libvirt-broker.vala:70: Failed to update domain 'fedora33-wor-2': Failed to set domain configuration: XML error: Invalid PCI address 0000:04:00.0. slot must be >= 1
(gnome-boxes:3194): Boxes-CRITICAL **: 12:34:57.916: boxes_vm_importer_get_source_media: assertion 'self != NULL' failed
Segmentation fault (core dumped)
My system:
$ uname -a
Linux localhost.localdomain 5.9.16-200.fc33.x86_64 #1 SMP Mon Dec 21 14:08:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
I don't know whether it's related, but I recently updated from kernel 5.9.11 directly to 5.9.16 (I hadn't used the PC in question for some weeks), and gnome-boxes was working normally before.
Please advise how I can restore gnome-boxes - I have some virtual machines that I need to access...

I faced this issue when I force-stopped GNOME Boxes while cloning a VM.
Deleting the conflicting VM (in your case 'fedora33-wor-2') will resolve your issue.
To delete the VM on Fedora, install "libvirt-client", which provides "virsh", using the command:
sudo dnf install libvirt-client
then double-check the available VMs using:
virsh list --all
Delete the VM using the command:
virsh undefine VM_Name
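Note that GNOME Boxes normally runs its VMs under the libvirt user session rather than the system one, so if the VM doesn't show up in the listing, point virsh at that session explicitly. A sketch, reusing the VM name from the error above:
# GNOME Boxes keeps its domains in the user session, not qemu:///system
virsh --connect qemu:///session list --all
virsh --connect qemu:///session undefine fedora33-wor-2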

Channel Fun's answer solved the problem of starting up gnome-boxes.
But the real problem is in the cloning procedure: the XML describing the new machine is malformed.
virt-clone --original fedora33-ser --auto-clone
works properly.
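For reference, --auto-clone makes virt-clone generate the new VM's name and disk paths itself; a name can also be given explicitly, leaving only the storage paths to be generated. A hedged variant (the clone name here is illustrative):
# 'fedora33-ser-clone' is an example name; --auto-clone still picks the disk paths
virt-clone --original fedora33-ser --name fedora33-ser-clone --auto-clone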

I know this is an old thread, but I had the same problem recently.
I shut down GNOME Boxes whilst it was cloning a VM, and then shut down the machine.
I then couldn't open boxes, as it would just crash.
I was able to delete the VM itself, and then deleted the XML file associated with it.
To delete the VM itself, go to:
$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images (which in my case is a symbolic link to a data drive)
and delete the VM with the name that you were cloning to (or safer, just move it somewhere).
To delete the XML file associated with it:
$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/
and delete (or safer move) the file that is named VM_NAME.xml.
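In command form, that amounts to something like this (a sketch assuming the Flatpak paths above; 'fedora33-wor-2' stands in for whatever name you were cloning to, and moving is safer than deleting):
# move the half-cloned image and its libvirt XML out of the way
mv "$HOME/.var/app/org.gnome.Boxes/data/gnome-boxes/images/fedora33-wor-2" /tmp/
mv "$HOME/.var/app/org.gnome.Boxes/config/libvirt/qemu/fedora33-wor-2.xml" /tmp/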
Then boxes should open ok, at least it worked for me.

Extending Channel Fun's answer: in the Ubuntu repos the package is libvirt-clients (note the plural s):
sudo apt install libvirt-clients
Check the available VMs using:
virsh list --all
Delete the VM using:
virsh undefine VM_Name
If you receive the error:
error: Refusing to undefine while domain managed save image exists
Then you can remove the managed save image as well by adding the --managed-save flag:
virsh undefine VM_Name --managed-save
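To check for a managed save image beforehand, or to remove only the image while keeping the domain defined, something like this should work:
virsh dominfo VM_Name              # 'Managed save: yes' means an image exists
virsh managedsave-remove VM_Name   # removes only the managed save image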

Getting logs/more information during start-build command execution

A Jenkins pipeline builds Docker images; the OpenShift client plugin is used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issues, almost no information is seen in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbose, detailed error information. I checked the start-build command reference and thought --build-loglevel [0-5] might help here. When I used it, I got a warning that logging isn't supported because the BuildConfig uses the 'Binary' source type (seriously???)
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs, info. while executing the start-build command?
I was facing the same problem, and I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
And so far it seems to work: I got the logs from my OpenShift build in my Jenkins pipeline. Now I'll try to get the logs only if the build does not complete, to reduce the overall log volume.
(for future searchers like me ^^)
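For comparison, the same separation of upload and log streaming with the plain oc CLI looks roughly like this (a sketch; the BuildConfig name and upload directory are placeholders):
# start a binary build without --follow, then stream the logs separately
oc start-build my-buildconfig --from-dir=./build/libs --wait
oc logs -f bc/my-buildconfig    # follows the latest build of this BuildConfig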

SLES crash dump

I would like to test whether my server creates a crash dump upon an OS crash. I can see that the /etc/sysconfig/kdump config file is configured.
So I issued echo c > /proc/sysrq-trigger to trigger a kernel panic; it crashed the server, but it never created a dump file for some reason. This is an HP BL460g7 blade with ASR disabled.
When I trigger the kernel panic, the server crashes and then hangs for about 10 minutes (it looks like it's trying to save a crash dump), but it never does. I checked the message logs but cannot see a reason why it's not dumping. The main problem is finding out why it's not dumping a crash file; are there any logs I can check to see what has really gone wrong?
I'm using SUSE Linux Enterprise Server 11 (x86_64) SP 1.
Did you follow the steps explained here?
SUSE Support - Configure kernel core dump capture
The most important tasks should be:
install kdump, kexec-tools and makedumpfile
add crashkernel=... to the kernel command line (Grub)
chkconfig boot.kdump on
ensure that you have enough free space in /var/crash (default dir)
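On SLES 11 those steps might look roughly like this (a sketch; the crashkernel size is a typical value, adjust for your system):
zypper install kdump kexec-tools makedumpfile
# append e.g. crashkernel=256M to the kernel line in /boot/grub/menu.lst
chkconfig boot.kdump on
df -h /var/crash    # confirm there is enough free space for a dump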
Then please reboot your system and run:
sync; echo c >/proc/sysrq-trigger
After another boot please check for new files in /var/crash. If this doesn't work for you, please show us the content of /etc/sysconfig/kdump and at least the output of
cat /proc/cmdline
chkconfig boot.kdump
Do you have a display connected to the machine?

libvirt error when trying to 'hot' attach-disk on guest with "Channel qemu-ga"

I have a KVM virtual machine running CentOS 7 as the guest OS. I'm trying to attach an additional disk to it on the fly (without shutting it down) using this command:
$ sudo virsh attach-disk centos --source /var/lib/libvirt/images/newdisk.img --target sdb --persistent
But receive an error:
error: Failed to attach disk
error: internal error: cannot update AppArmor profile 'libvirt-d2e7bbb8-c7b3-44ec-b0ea-27539e0df732'
If I do the same with a Debian guest, everything is OK.
What is the difference, and how can I solve this?
UPDATE:
I have a comment!
I compared the two VMs' XML and saw that the CentOS guest has the QEMU agent in its configuration:
<channel type="unix">
<source mode="bind" path="/var/lib/libvirt/qemu/channel/target/centos_auto.org.qemu.guest_agent.0"></source>
<target name="org.qemu.guest_agent.0" type="virtio"></target>
<address bus="0" controller="0" port="1" type="virtio-serial"></address>
</channel>
Then I removed the qemu-ga channel, restarted the VM, and checked the hot-add feature. It worked.
I tested it on other VMs (CentOS, Fedora, Debian) and saw the same.
As a result:
If I enable the qemu-agent, I cannot use hot plug.
If I use hot plug, I must forget about the agent.
Is it a mistake in my configuration, or can these features not work together?
Host-OS: Ubuntu 15.10
QEMU emulator: now 2.4.92 (tested 2.3 and 2.4.1)
VMM: 1.3.0
This is a clear bug in the AppArmor security driver for libvirt. The existence of the QEMU guest agent config in the XML should have no impact on the ability to hotplug disks to a guest. This bug should be reported to the libvirt upstream or the Ubuntu bug trackers.
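Until that is fixed, one possible workaround is to disable libvirt's security driver entirely; note that this weakens guest isolation, so treat it as a test-only measure:
# in /etc/libvirt/qemu.conf set:
#   security_driver = "none"
# then restart the daemon (service name on Ubuntu 15.10):
sudo systemctl restart libvirt-bin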

How to create an external snapshot with virsh snapshot-create-as...?

When I try to create a snapshot with
virsh snapshot-create-as one-217 snap_base "desc" --diskspec vda,file=/var/lib/one/datastores/1/2aae91bd6c04fa2db0849bc0db1342ba --disk-only --atomic
An error occurs:
error: unsupported configuration: external snapshot file for disk vda already exists and is not a block device: /var/lib/one/datastores/1/2aae91bd6c04fa2db0849bc0db1342ba
Then I run
virsh snapshot-list one-217
There is no snapshot displayed for one-217.
I run
virsh domblklist one-217
The result looks like this:
vda /var/lib/one//datastores/0/217/disk.0
hda /var/lib/one//datastores/0/217/disk.1
I am confused. How can I create an external snapshot with the virsh snapshot-create-as command, or should I try another way? And how do I create a multi-disk snapshot?
The virsh version is:
Compiled against library: libvirt 0.10.2
Using library: libvirt 0.10.2
Using API: QEMU 0.10.2
Running hypervisor: QEMU 0.12.1
Could anyone help me please? Thx in advance!
Seems like the file 2aae91bd6c04fa2db0849bc0db1342ba already exists, so the error message you see is valid -- libvirt was rightly refusing to use an existing file, because that can cause data loss. Here's the relevant bug, which is fixed in upstream libvirt.
To resolve that, try providing path to a file that does not exist: /var/lib/libvirt/images/snap1-one-217.qcow2 (or something like that).
And, judging from the error, your libvirt version seems to be old. Please use a relatively newer version (or at least a version above libvirt-0.9.10).
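Putting that together, the command might look like this (the snapshot file paths are illustrative and must not already exist); a multi-disk snapshot just takes one --diskspec per disk:
virsh snapshot-create-as one-217 snap_base "desc" --diskspec vda,file=/var/lib/libvirt/images/snap1-vda.qcow2 --disk-only --atomic
# multi-disk variant:
virsh snapshot-create-as one-217 snap_base "desc" --diskspec vda,file=/var/lib/libvirt/images/snap1-vda.qcow2 --diskspec hda,file=/var/lib/libvirt/images/snap1-hda.qcow2 --disk-only --atomic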

unknown error: failed to write prefs file

I keep getting this error while running functional tests using the runner with the following setup:
-selenium 2.44
-Chrome Driver
-Windows Server 2008 R2 Enterprise
Error Description: Listening on 0.0.0.0:7000
Starting tunnel...
UnknownError: [POST http://test.com/wd/hub/session / {"desiredCapabilities":{"browserName":"chrome","name":"tests/intern","idle-timeout":60,"selenium-version":"2.44.0"}}] unknown error: failed to write prefs file
(Driver info: chromedriver=2.12.301325 (962dea43ddd90e7e4224a03fa3c36a421281abb7),platform=Windows NT 6.1 SP1 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 1.06 seconds
Has anyone ever come across such an issue? How do I fix this? Suggestions please.
I've recently had the same issue. The problem was caused by a full C: drive. Apparently chromedriver needs some space on the C: drive (or the drive where the Chrome binary is located) to create temporary profile files and so on.
One of the solutions could be to move the Chrome installation to another drive. You could use the mklink command in a command-line window.
It can be caused by executing ChromeDriver in parallel. Other errors such as "failed to write first run file" or "cannot create default profile directory" may happen in that case.
My solution was to specify the user-data-dir option. Two concurrent ChromeDriver instances should not use the same user data directory.
chromeOptions.AddArgument("--user-data-dir=C:\\tmp\\chromeprofiles\\profile" + someKindOfIdOrIndex);
You can of course change the path to whatever you want :)
This issue occurs if the C: drive runs out of space. The best solution is to clear temp files; this worked for me.
1. Open the Run dialog.
2. Type %tmp%.
3. Click OK.
4. Select all files and delete them permanently.
You may have different versions of Chrome on the server and on the node.
In my case, it was a console application that needed to run as Administrator to gain access to the HDD.
Follow these steps:
1. Press Windows key + R to open the Run dialog.
2. Type %temp% and click OK.
3. Press Ctrl+A to select all files.
4. Press Shift+Delete to delete them permanently.