qemu-system-aarch64 - 'virtio-vga-gl' is not a valid device model name - qemu

I configured and built qemu 6.2.0 with the --enable-sdl --enable-opengl --enable-virglrenderer parameters for the qemu-system-aarch64 target on an amd64 Ubuntu host. When I try to enable -device virtio-vga-gl it tells me that it is not a valid device model name.
Did I miss something?
Regards.

I think the virtio-vga device is not compiled in by default for aarch64, because the intention is that it's only for machine types where there is legacy firmware that does not know about virtio-gpu but only about VGA (such as the x86 PC machine types). The recommended graphics type for the aarch64 'virt' board (according to the documentation) is virtio-gpu-pci. Your guest OS will obviously need support for that device type.
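If it helps, you can list which virtio-gpu variants your build actually compiled in, and then use the PCI flavour. A rough sketch (the display options are an assumption, not taken from the question — adjust to your setup):
qemu-system-aarch64 -device help | grep -i virtio-gpu
qemu-system-aarch64 -M virt -cpu cortex-a57 -m 2048 \
    -device virtio-gpu-pci -display sdl,gl=on ...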

Related

CPU_ON on QEMU ARMv8A using PSCI from EL2/EL3

I have a 4-core ARMv8-A (Cortex-A53) system emulated on QEMU 6.2.0. The primary core (CPU#0) is running and I am able to debug it using GDB. I wanted to bring up the other cores; for that I used the following GDB commands. From the different experiments I conducted, my conclusion is that only CPU#0 is running and the other CPUs never started.
(gdb) thread 3
(gdb) info thread
Id Target Id Frame
1 Thread 1.1 (CPU#0 [running]) 0x0000000040000008 in ?? ()
2 Thread 1.2 (CPU#1 [halted ]) 0x0000000040000000 in ?? ()
* 3 Thread 1.3 (CPU#2 [halted ]) 0x0000000040000000 in ?? ()
4 Thread 1.4 (CPU#3 [halted ]) 0x0000000040000000 in ?? ()
(gdb) where
#0 0x0000000040000000 in ?? ()
Exploring further, I came across this thread about turning on a CPU using PSCI: Start a second core with PSCI on QEMU.
I also came across the SMC calling convention related to this. I have gone through the documentation of SMCCC and PSCI.
I am implementing a minimal hypervisor. The guest is Linux, and it boots. The Linux boot log shows
[ 0.072225] psci: failed to boot CPU1 (-22)
...
Further debugging revealed that Linux raises a synchronous exception to the hypervisor using the "HVC" instruction, with the necessary parameters as per the specification.
If my understanding is correct, the PSCI implementation is vendor specific; that is, the code running at EL2/EL3 has to use some vendor-provided mechanism to turn on the CPU (core). Is this correct? And on a system without EL3, how does the code running at EL2 turn on a CPU?
My QEMU command line is given below:
$ qemu-system-aarch64 -machine virt,gic-version=2,virtualization=on -cpu cortex-a53 -nographic -smp 4 -m 4096 -kernel hypvisor.elf -device loader,file=linux-5.10.155/arch/arm64/boot/Image,addr=0x80200000 -device loader,file=1gb_4core.dtb,addr=0x88000000
Any hint is greatly appreciated.
When the guest is not booting at EL3, the QEMU virt machine implements its own internal PSCI emulation. This is described in the DTB file passed to the guest, and will say that PSCI calls should be done via the SMC instruction (if the guest is starting at EL2) or the HVC instruction (if the guest is starting at EL1). Effectively, QEMU is emulating an EL3 firmware for you.
(If the guest does boot at EL3, then QEMU assumes that the EL3 guest code will be implementing PSCI; in that case it provides some simple emulated hardware that does the power on/off operation and which the EL3 guest's PSCI implementation will manipulate as part of its implementation of the CPU_ON and CPU_OFF calls. But that's not the case you're in.)
If you are running a hypervisor in your guest at EL2, then it is your hypervisor's job to implement PSCI for your EL1 guests (it's unlikely that you want to allow an EL1 guest to be able to directly shut down a CPU under your hypervisor's feet, for instance). So you want to pass your EL1 guest a different DTB that describes the view an EL1 guest has of its emulated hardware, and which says "PSCI via HVC". Then your hypervisor's HVC handling should emulate PSCI. Separately, your hypervisor's bootup code should be using the real PSCI-via-SMC to power up the secondary CPUs as part of its bootup sequence.
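If you want to see exactly what QEMU advertises, the virt board can dump the DTB it generates; a quick sketch, assuming dtc (from device-tree-compiler) is installed:
qemu-system-aarch64 -machine virt,gic-version=2,virtualization=on,dumpdtb=virt.dtb \
    -cpu cortex-a53 -smp 4 -m 4096 -nographic
dtc -I dtb -O dts virt.dtb | grep -A 6 psci
# with virtualization=on the psci node should say method = "smc";
# the DTB you hand your EL1 guest would instead say method = "hvc"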

Errors thrown when trying to run basic.sh in sosumi

I was hoping that you could help me. I've been stuck on this problem for quite a while.
When I try to start up the clover boot loader or run the basic.sh file, I get these errors in the terminal:
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.sse4.1 [bit 19]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.sse4.2 [bit 20]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.movbe [bit 22]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.aes [bit 25]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.xsave [bit 26]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.avx [bit 28]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.bmi1 [bit 3]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.07H:EBX.avx2 [bit 5]
etc.
I have no idea what they mean. Could you please tell me a solution? I've tried uninstalling and reinstalling manually. It didn't work and it threw these errors at me again. I followed the instructions in the readme: https://github.com/foxlet/macOS-Simple-KVM
QEMU and everything it needs, all the dependencies, are installed on my computer.
When I run the Clover boot loader, it just shows a bunch of text and then brings me back to the menu. I hit enter again; last time I kept ending up in the shell, and I don't know why.
Why does it keep crashing? Could you please tell me how to fix it?
This is the second time I'm struggling with this, please help.
UPDATE: I tried using this repo: https://github.com/kholia/OSX-KVM and got the same errors. It's still not working.
The shell script you're running starts QEMU asking it to provide a guest CPU with various features (including SSE4, AVX and AVX2). With KVM, the only way we can give the guest a CPU with a feature like AVX is if the host CPU has it, because we run guest code directly on the host CPU. QEMU is warning you that you asked for something it can't do, because the host CPU you're running it on doesn't have those features. QEMU removes the features it can't provide from the set of things it tells the guest about via the CPUID registers.
If the guest OS really needs a CPU with AVX2 and all the rest of it, you need to run on a newer host CPU.
If the guest OS is happy to read the CPUID registers and adjust itself to avoid using features that aren't there, then you could adjust the -cpu options the script is passing to make it request something with fewer features, but all this will do is mean that QEMU won't print the warnings -- it won't change how the guest runs on that kind of CPU.
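If you want to check which of the requested features your host actually has before trimming the -cpu options, a rough check (the feature names follow /proc/cpuinfo conventions):
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse4_1|sse4_2|movbe|aes|xsave|avx|avx2|bmi1)$'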

Why won't DraftSight run on Fedora 26 with Intel graphics?

DraftSight 2017SP1 Linux (beta) worked on Fedora 24. It fails after upgrading to Fedora 26. Run it from the command line so you can see the low-level errors:
/opt/dassault-systemes/DraftSight/Linux/DraftSight
Qt: Session management error: None of the authentication protocols specified are supported
Could not parse stylesheet of object 0x238a050
Could not parse stylesheet of object 0x238a050
In the graphics environment you see the usual start screens, then error pop-ups which offer to report the error and then close the application when clicked. One says that error-reporting is not available.
Similarly with 2017SP3 and 2018SP0. Fedora updates are current as of today.
This system is an Intel Core i3. lspci reports "Intel Corp Xeon E3-1200 v3/4th Gen core processor Integrated Graphics Controller (rev 06)".
2018SP0 does work once an Nvidia GT710 card and the nvidia driver module are installed. It does not work with the nouveau driver module and the same card.
Does anybody have any insight as to the cause? A regression in Fedora, or a latent bug in DraftSight, or anything else?
Knowing whether it works with Fedora 26 and AMD graphics might be very helpful.
Edit March 2018
It doesn't work on a system with an AMD R5 230 either, but differently: no "Could not parse" errors, nor anything else in the terminal window, but DraftSight starts up with the display all wrong and then locks up. Clicking the "X" gets to "the program is not responding".
Also worth noting that this isn't a Wayland issue. Systems are running Cinnamon and lightdm, so it's good old X.
Also, a work-around if performance is unimportant (and with Gen 4 Intel graphics performance is modest anyway): run it as a "remote" application on localhost, on a system with Intel graphics.
$ ssh -X 127.0.0.1
password:
Last login: Wed Mar ...
-bash-4.4$ /opt/dassault-systemes/DraftSight/Linux/DraftSight
(success)
Further update: Fedora 29, DraftSight 2018SP3
New wrinkles for Nvidia, with Cinnamon as above.
It needs the invocation
LD_PRELOAD=/usr/lib64/libfreetype.so.6 /opt/dassault-systemes/DraftSight/Linux/DraftSight
otherwise it fails with a symbol lookup error in /lib64/libfontconfig.so.1: FT_Done_MM_Var.
Also, kernel 4.20 plus NVidia 390.87 fails to build. There's a patched NVidia installer that does work, at the if_not_false_then_true site.
It also does not install a .desktop file into /usr/share/applications.
I had similar problems when I updated Fedora 24 to 25. The parse-stylesheet messages still show up, but I can run DraftSight from an Xorg session (not Wayland) using the nouveau drivers, though only with root privileges using sudo.
You might try the following script:
sudo DISPLAY=$DISPLAY vblank_mode=1 /opt/dassault-systemes/DraftSight/Linux/DraftSight
I can only get DraftSight to run as root under Fedora 27, 4.18.16-100.fc27.x86_64. I have installed a VM with Ubuntu, and it runs fine, without elevated privileges.

How to force Chrome to use the Mesa software driver for WebGL

I want to force chrome to render WebGL using software drivers, not hardware.
I'm using Ubuntu Linux and I understand that the Mesa GL drivers can be forced to use a software implementation by specifying the environment variable, LIBGL_ALWAYS_SOFTWARE=1, when launching a program. I confirmed that the driver changes when specifying the env var.
bash$ glxinfo | grep -i "opengl"
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) 945GM x86/MMX/SSE2
OpenGL version string: 1.4 Mesa 10.1.3
OpenGL extensions:
bash$ LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep -i "opengl"
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 3.4, 128 bits)
OpenGL version string: 2.1 Mesa 10.1.3
OpenGL shading language version string: 1.30
OpenGL extensions:
The default GL driver provides OpenGL 1.4 support, and the software driver provides OpenGL 2.1 support.
I tracked down where the desktop launcher exists (/usr/share/applications/) and edited it to specify the env var, but chrome://gpu still shows GL version 1.4. The Chrome GPU info contains a promising value:
Command Line Args --flag-switches-begin --disable-accelerated-2d-canvas --ignore-gpu-blacklist --flag-switches-end
I wonder if I can customize the --flag-switches-begin.
I also found the '--use-gl' command line switch, but I'm not sure how to leverage it to force the driver into software mode.
As a side note, I have already enabled 'Override software rendering list' in chrome://flags/, which did remove my model from the 'blacklist' making it possible to use WebGL, but the OpenGL feature set is still quite limited.
I have an old laptop with a terrible 'gpu' that I would like to use to develop some shaders and test in WebGL, no matter the performance.
Is it possible to tell Chrome to use the software drivers?
I don't have a Linux box so I can't check, but you can specify a prefix Chrome will use for launching the GPU process with
--gpu-launcher=<prefix>
It's normally used for debugging, for example:
--gpu-launcher="xterm -e gdb --args"
When chrome launches a process it calls spawn. Normally it just launches
path/to/chrome <various flags>
--gpu-launcher lets you add a prefix to that. So for example
--gpu-launcher=/usr/local/yourname/launch.sh
would make it spawn
/usr/local/yourname/launch.sh path/to/chrome <various flags>
You can now make /usr/local/yourname/launch.sh do whatever you want and finally launch chrome. The simplest would be something like
#!/bin/sh
# run whatever command line was passed in (Chrome plus its flags)
exec "$@"
In your case I'd guess you'd want
#!/bin/sh
# force Mesa's software rasterizer, then run the command line passed in
export LIBGL_ALWAYS_SOFTWARE=1
exec "$@"
Be sure to mark launch.sh as executable.
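For example, using the path from above:
chmod +x /usr/local/yourname/launch.sh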
Given the script above, this worked for me:
/opt/google/chrome/chrome --ignore-gpu-blacklist --gpu-launcher=/usr/local/gman/launch.sh
after which about:gpu gives me:
GL_VENDOR VMware, Inc.
GL_RENDERER Gallium 0.4 on llvmpipe (LLVM 0x301)
GL_VERSION 2.1 Mesa 9.0.3

How to prevent syslogging "Error inserting nvidia" on cudaGetDeviceCount()?

I have a tool that can run on both GPU and CPU. In an init step I check cudaGetDeviceCount() for the available GPUs. If the tool is executed on a node without video cards, this results in the following syslog message:
Sep 13 00:21:10 [...] NVRM: No NVIDIA graphics adapter found!
How can I prevent the NVIDIA driver from flooding my syslog server with this message? It's OK if the node doesn't have a video card; it's not that critical, so I just want to get rid of the message.
That message gets inserted into the syslog by the NVIDIA driver. So the most direct solution would be to not install the NVIDIA driver on a node that does not have a GPU.
If you need some NVIDIA driver components on that node, for example to build CUDA driver API codes on a GPU-less login node, then you will need to use some special switches during driver installation.
You can find out more about driver install switches by using the --help switch on the driver installer package.
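For example, with the installer used below (installers of that era also accept --advanced-options for the longer list, if I remember correctly):
sh NVIDIA-Linux-x86_64-319.72.run --help
sh NVIDIA-Linux-x86_64-319.72.run --advanced-options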
A sequence of switches like this may do the trick:
sudo sh NVIDIA-Linux-x86_64-319.72.run --no-nvidia-modprobe --no-kernel-module --no-kernel-module-source -z