QEMU simulating x86_64 can't use "virtio-gpu-gl"

My QEMU version is 7.1.0.
My QEMU configure flags: --enable-sdl --enable-opengl --enable-virglrenderer
When I use qemu-system-x86_64 -machine q35, it runs fine with a virtio-gpu device. But when I use the virtio-gpu-gl device, QEMU reports: qemu-system-x86_64: -device virtio-gpu-gl: opengl is not available.
When I add -display sdl,gl=on, QEMU reports: qemu-system-x86_64: ../ui/console-gl.c:105: surface_gl_update_texture: Assertion `gls' failed.
I expect QEMU to be able to run virtio-gpu-gl.
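For reference, the three invocations described above would look roughly like this (a sketch with the disk and kernel options omitted; the exact failure depends on how the binary was built):
# works: plain 2D virtio-gpu
qemu-system-x86_64 -machine q35 -device virtio-gpu ...
# fails with "opengl is not available"
qemu-system-x86_64 -machine q35 -device virtio-gpu-gl ...
# fails with the surface_gl_update_texture assertion
qemu-system-x86_64 -machine q35 -device virtio-gpu-gl -display sdl,gl=on ...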

Related

QEMU Not booting

Do qemu 5.1.0-dirty and qemu 5.1.0 versions behave differently?
No error message appears, but the image boots with the qemu 5.1.0-dirty version and not with 5.1.0. What could be the problem?
/home/pi/qemu/qemu-5.1.0/build/aarch64-softmmu/qemu-system-aarch64 -drive file=/home/pi/images/boot.qcow2,id=disk0,format=raw,if=none,cache=none -monitor null -object rng-random,filename=/dev/random,id=rng0 -cpu host -machine type=virt -device virtio-keyboard-pci -device virtio-rng-pci,rng=rng0 -device virtio-blk-pci,drive=disk0 -serial mon:stdio -kernel /home/pi/kernel/Image-vdt -usb -nodefaults -device virtio-net-pci,netdev=net0,mac=CA:FE:BA:BE:BE:EF,rombar=0 -netdev type=tap,id=net0,ifname=qemu_tap0,script=no,downscript=no -device virtio-gpu-pci,virgl,xres=1680,yres=560 -display sdl,gl=on -device virtio-tablet-pci -show-cursor -m 5G -smp 3 -device qemu-xhci,id=xhci -enable-kvm -append "root=/dev/vda9 ro loglevel=7 audit=0 enforcing=0 console=tty0 fbcon=map:10 video=1680x560-32 mem=5G"
Both versions used the same command line, but the guest only booted with the qemu 5.1.0-dirty version.
With qemu 5.1.0, which does not boot, the QEMU window is created, but it shows 'guest has not initialized the display (yet)' and does not proceed any further.
The 5.1.0-dirty version exists only as a binary; version 5.1.0 was built from source, using the configure options --enable-sdl --enable-gtk --target-list=aarch64-softmmu.
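(For a QEMU 5.1 source tree, those options go to the configure script before building, roughly:)
$ ./configure --enable-sdl --enable-gtk --target-list=aarch64-softmmu
$ make -j$(nproc)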
It should boot normally, but it doesn't.
What is the difference between qemu 5.1.0-dirty and regular qemu 5.1.0?
-dirty on the end of a QEMU version string means "5.1.0 plus any number of unknown extra changes", i.e. it is not a clean upstream version. It could have absolutely anything in it. You would need to find out exactly where the binary came from and what sources it was built from to be able to find out what the differences are between it and a clean 5.1.0.
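(The suffix itself typically comes from the build system running git describe on the source tree: git appends -dirty when the checkout contains uncommitted local changes. For example, on a tree checked out exactly at the v5.1.0 tag with local modifications:)
$ git describe --tags --dirty
v5.1.0-dirty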

Start YOCTO Intel x86_64 image on QEMU failure

I created a Yocto image for the x86_64 architecture and ran it in a QEMU virtual machine with the qemu command below:
qemu-system-x86_64.exe -m 1024 -hda "rootfs.img" -cpu q35 -kernel "vmlinuz" -initrd "initrd" -append "root=/dev/ram0"
But the boot failed. How do I point QEMU at the correct rootfs.img path? What is the problem with QEMU? The same image tested OK with VirtualBox.
I had to select a correct CPU model (Nehalem, an i7-class CPU) and the correct virtio device to mount rootfs.img. Note that in the original command, q35 is a machine type (selected with -machine), not a CPU model, so -cpu q35 is not valid.
qemu-system-x86_64.exe -m 1024 -drive file=rootfs.img,format=raw,if=virtio -cpu Nehalem -initrd "initrd" -kernel "vmlinuz" -append "root=/dev/vda"
As a result, the image boots successfully.
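(To see which CPU model names your QEMU build accepts, you can list them; Nehalem appears in the x86 list:)
$ qemu-system-x86_64 -cpu help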
(Refer: https://wiki.qemu.org/Documentation/9psetup)

QEMU snapshot without an image?

I'm working with VxWorks, a real-time operating system for embedded systems. They recently added QEMU support, and I've been trying to figure it out. (I'm fairly new to all these technologies.) I would like to checkpoint and restart the virtual machine, i.e. save the RAM and processor state and reload it later from exactly that point.
QEMU has some support for this called "snapshots." However, everything I've seen and tried requires a disk image in qcow2 format. But my simulation has no disk, the program is loaded directly into RAM and run.
Here's my QEMU command:
qemu-system-aarch64 -m 4096M -smp 4 -machine xlnx-zcu102 -device loader,file=~/vxworks_21.03/workspace3/QEMU_helloWorld/default/vxWorks,addr=0x00100000 -nographic -monitor telnet:127.0.0.1:35163,server,nowait -serial telnet:127.0.0.1:39251,server -device loader,file=~/vxworks_21.03/workspace3/vip_xlnx_zynqmp_smp_64/default/xlnx-zcu102-rev-1.1.dtb,addr=0x0f000000 -device loader,addr=0x000ffffc,data=0xd2a1e000,data-len=4 -device loader,addr=0x000ffffc,cpu-num=0 -nic user -nic user -nic user -nic user,id=n0,hostfwd=tcp:127.0.0.1:0-:1534,hostfwd=udp:127.0.0.1:0-:17185
Then I log into the monitor and:
$ telnet 127.0.0.1 35163
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
QEMU 5.2.0 monitor - type 'help' for more information
(qemu) savevm
Error: No block device can accept snapshots
I tried a number of things, like creating an empty disk image, or the snapshot_blkdev command, but no luck so far.
The host is RedHat Linux 8.4 running on an x86 desktop, the guest is ARM64.
It turns out that a disk image is required to do snapshots, but you don't have to hook it up to the guest. To do that, you pass QEMU a -drive argument with if=none, like this:
-drive if=none,format=qcow2,file=dummy.qcow2
So here is the whole sequence that worked:
$ qemu-img create -f qcow2 dummy.qcow2 32M
$ qemu-system-aarch64 -m 4096M -smp 4 -machine xlnx-zcu102 -device loader,file=vxWorks,addr=0x00100000 -nographic -monitor telnet:127.0.0.1:35163,server,nowait -serial telnet:127.0.0.1:39251,server -device loader,file=xlnx-zcu102-rev-1.1.dtb,addr=0x0f000000 -device loader,addr=0x000ffffc,data=0xd2a1e000,data-len=4 -device loader,addr=0x000ffffc,cpu-num=0 -nic user -nic user -nic user -nic user,id=n0,hostfwd=tcp:127.0.0.1:0-:1534,hostfwd=udp:127.0.0.1:0-:17185 -snapshot -drive if=none,format=qcow2,file=dummy.qcow2
Then in the monitor terminal savevm and loadvm work:
$ telnet 127.0.0.1 35163
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
QEMU 5.2.0 monitor - type 'help' for more information
(qemu) savevm save1
(qemu) info snapshots
List of snapshots present on all disks:
ID  TAG    VM SIZE   DATE                 VM CLOCK      ICOUNT
--  save1  44.3 MiB  2021-06-28 10:08:28  00:00:05.952
(qemu) loadvm save1
This information came thanks to Peter Maydell and his blog post: https://translatedcode.wordpress.com/2015/07/06/tricks-for-debugging-qemu-savevm-snapshots/

Can KVM be enabled (-enable-kvm) when running qemu without -cpu host

Can KVM be enabled (-enable-kvm) when running QEMU without -cpu host?
e.g.
qemu-system-x86_64 \
-boot c -m 16G -vnc :0 -enable-kvm \
-cpu qemu64,avx,pdpe1gb,check,enforce \
...
Does QEMU use KVM when running the virtual qemu64 CPU?
I always thought this option could be enabled only when using QEMU with -cpu host...
Yes, running a guest with KVM acceleration (the -enable-kvm option on the QEMU command line) can be done without -cpu host.
In the case of -cpu qemu64,avx,pdpe1gb,check,enforce, QEMU will set the union of the virtual qemu64 CPU's features and the explicitly requested flags as the CPU features for this guest. This is done by calling KVM's KVM_SET_CPUID2 ioctl.
When the guest asks for CPU features (e.g. via the CPUID instruction), it receives these from KVM.
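(A quick way to confirm this from inside a Linux guest is to check which flags the virtual CPU advertises; with the command line above you would expect both extra flags to show up:)
$ grep -owE 'avx|pdpe1gb' /proc/cpuinfo | sort -u
avx
pdpe1gb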

What can QEMU's virtio-blk drive parameter be set to?

I'm trying to start qemu with a virtio disk controller and it says:
qemu-system-x86_64 -S -gdb tcp::9000 --nographic --enable-kvm -cpu host -m 8192 -device virtio-blk-pci,drive=c,scsi=off -drive file=hard.disk,if=virtio,format=raw -fda floppy.img
qemu-system-x86_64: -device virtio-blk-pci,drive=c,scsi=off: Property 'virtio-blk-device.drive' can't find value 'c'
The reason I'm using the -device parameter is that I already tried just if=virtio on -drive, but when I scanned the PCI devices no virtio block device showed up.
I'm writing my own OS from scratch.
How do I get this virtio PCI device to appear?
The drive option of -device should be set to the ID of a drive you created with -drive:
-drive id=mydrive,file=foo.img,... -device virtio-blk-pci,drive=mydrive,...
This is a common pattern with QEMU options.
PS: if you're connecting the drive created with -drive to a device via the "give it an ID and then specify that ID in a -device option" pattern, then you don't want to pass if=virtio. (if=virtio means "try to automatically connect this drive to a virtio interface", and QEMU will complain that you've asked it to connect the drive twice, once automatically and once explicitly.)
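Putting both points together, a corrected version of the command from the question might look like this (an untested sketch; if=none replaces if=virtio so the drive is only connected once, via the -device option):
qemu-system-x86_64 -S -gdb tcp::9000 --nographic --enable-kvm -cpu host -m 8192 -drive id=c,file=hard.disk,if=none,format=raw -device virtio-blk-pci,drive=c,scsi=off -fda floppy.img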