How to run QEMU without TCG and without KVM

I'm trying to shut down a virtual core while my QEMU virtual machine is running.
For that purpose, I need to use the function qemu_cpu_kick(), which is found in cpus.c:
void qemu_cpu_kick(CPUState *cpu)
{
    qemu_cond_broadcast(cpu->halt_cond);
    if (!tcg_enabled() && !cpu->thread_kicked) {
        qemu_cpu_kick_thread(cpu);
        cpu->thread_kicked = true;
    }
}
It works well, but only if I enable KVM.
However, I need KVM disabled, and once I disable it, tcg_enabled() returns true and the CPU doesn't shut down.
Is it possible to disable TCG?
I didn't find any knob for TCG; --disable-tcg, as well as other attempts, does not work.
I also tried reconfiguring my build with --disable-tcg-interpreter, but still nothing changes.
So, how can I disable TCG? Or, alternatively, is there a better way to shut down a virtual CPU?
Thanks!

Well, as I understand it, running QEMU without KVM forces QEMU to fall back to the Tiny Code Generator (TCG) to execute guest code instead of KVM. Some accelerator has to run the guest, so running QEMU with neither KVM nor TCG is simply not possible!
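If the actual goal is just to park a vCPU while the machine keeps running, it may be possible to do that under TCG without touching qemu_cpu_kick() at all. Below is a hypothetical sketch against QEMU internals of roughly the same vintage as the cpus.c snippet above; park_vcpu is an invented name, and while cpu_exit() and the halted field do exist, treat this as an idea to experiment with rather than a verified recipe:

#include "qemu/osdep.h"
#include "qom/cpu.h"    /* CPUState, cpu_exit(); "hw/core/cpu.h" in newer trees */

/* Hypothetical helper: ask a vCPU to leave its execution loop and go idle. */
static void park_vcpu(CPUState *cpu)
{
    cpu->halted = 1;     /* mark the vCPU as halted */
    cpu_exit(cpu);       /* request an exit from the current execution loop */
    qemu_cpu_kick(cpu);  /* wake the vCPU thread so it notices the change */
}

Under TCG the vCPU thread should then go idle on halt_cond instead of executing guest code, which is roughly the effect you are after in the KVM case.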

Related

ARM AT91 CPU startup in QEMU

ARM AT91 cannot start up in QEMU; I can't get any output on the console.
I am trying to use QEMU (latest code pulled via git) to simulate an ARM AT91 board, but when QEMU starts up I get no output on the console. In my understanding, there are two steps to achieve this:
1. Properly set up the memory address in QEMU and let QEMU decompress the zImage. In this step, I should see "Uncompressing Linux... done, booting the kernel."
2. Properly set up the output device (e.g. uart0), so that I get the kernel startup messages.
I've succeeded in starting up the ARM versatilePB because QEMU supports versatilePB itself. The difference between versatilePB and AT91 is that they have different SDRAM addresses. I've tried modifying loader_start to 0x20000000, but it still doesn't seem to work.
hwaddr loader_start;  /* 0x20000000, which is the AT91 SDRAM address */
memory_region_add_subregion(sysmem, 0x20000000, ram);
At the very least it should print "Uncompressing Linux... done, booting the kernel.", which would indicate that the zImage was executed and decompressed.
QEMU (at least upstream QEMU) does not have a model of the AT91 SoCs. The differences between these systems and ones like the versatilePB that QEMU does support are greater than just "the RAM is at a different address" -- they will have different devices of all kinds (including the UART) which both behave differently and are found at different locations. It is impossible to run bare metal code intended for an AT91 without implementing in QEMU a model of the correct board and at least some of the AT91 devices. The changes required would be much much more substantial than just changing a few addresses for the RAM base address.
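To give a sense of the scale involved, even the very first step, registering a new machine type that places RAM at the AT91 SDRAM base, already means writing board code inside QEMU rather than tweaking an existing machine. A rough, hypothetical sketch is shown below; the at91-minimal names are invented, and the exact init APIs drift between QEMU versions, so treat it only as an outline of the work:

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/boards.h"
#include "exec/address-spaces.h"

#define AT91_SDRAM_BASE 0x20000000

static void at91_minimal_init(MachineState *machine)
{
    MemoryRegion *ram = g_new(MemoryRegion, 1);

    /* Map guest RAM at the AT91 SDRAM base instead of the versatilePB one. */
    memory_region_init_ram(ram, NULL, "at91.sdram", machine->ram_size,
                           &error_fatal);
    memory_region_add_subregion(get_system_memory(), AT91_SDRAM_BASE, ram);

    /* A real board model would also have to create the CPU, the DBGU/USART
     * (without it no console output ever appears), the interrupt controller,
     * timers, and the kernel loading logic; none of that can be skipped
     * just by changing addresses. */
}

static void at91_minimal_machine_init(MachineClass *mc)
{
    mc->desc = "Hypothetical minimal AT91 board (sketch only)";
    mc->init = at91_minimal_init;
}

DEFINE_MACHINE("at91-minimal", at91_minimal_machine_init)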

How to communicate an exit code from the QEMU guest to the host on arm64 / aarch64 (equivalent to isa-debug-exit)

I've found I can have the guest write to the I/O port configured for the isa-debug-exit device on the command line, and the value written to that port gets used as the exit code of QEMU (after a predictable transformation).
Is there an equivalent mechanism for aarch64 / arm64?
You can use the semihosting API for that: see the question "qemu-system-aarch64 exit from within the guest system" for details (and also the caveats about needing to trust the code in your guest if you use it).
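For reference, here is a minimal sketch of what that looks like from AArch64 guest code running under qemu-system-aarch64 with -semihosting. The call numbers (SYS_EXIT = 0x18, ADP_Stopped_ApplicationExit = 0x20026) come from Arm's semihosting specification; semihost_exit is just an illustrative name:

#include <stdint.h>

/* Ask the host (QEMU started with -semihosting) to terminate with the given
 * exit code. */
static void semihost_exit(uint64_t code)
{
    /* On AArch64, SYS_EXIT takes a pointer to a two-field block:
     * the stop reason and the exit status. */
    uint64_t block[2] = { 0x20026 /* ADP_Stopped_ApplicationExit */, code };

    register uint64_t x0 __asm__("x0") = 0x18;            /* SYS_EXIT */
    register uint64_t x1 __asm__("x1") = (uint64_t)block;

    __asm__ volatile("hlt #0xf000"                         /* semihosting trap */
                     : : "r"(x0), "r"(x1) : "memory");
}

QEMU should then exit with that status, which a host-side script can pick up as usual; the caveat above applies, since semihosting lets the guest do far more than just exit.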

Trying to monitor resource usage of a kvm/qemu virtual machine with mesos

I’m currently deploying a kvm/qemu virtual machine with mesos/marathon. In marathon, I’m using the built-in mesos command executor and running the following script:
virsh start centos7.0; while true; do echo 'centos 7.0 guest is running'; sleep 5; done
Note the while loop is there only to keep the task running. My issue is that I cannot get mesos to monitor the resource usage of the virtual machine.
When marathon deploys this task on a mesos-agent, it creates a container that uses the following memory and cpu cgroups:
/sys/fs/cgroup/cpu/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
/sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
When the virtual machine is kicked off, the virsh start command sends a request to libvirtd. libvirtd then reads the guest.xml file located in /etc/libvirt/qemu/ and asks the qemu/kvm driver to deploy it.
In my guest.xml file I’m using a custom partition (cgroup slice) to monitor my virtual machine's usage.
https://libvirt.org/cgroups.html
This results in the following path for each cgroup controller:
/sys/fs/cgroup/???/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
What I have tried:
I tried deleting the memory/cpu cgroups from this slice by doing
cgdelete -r cpu,memory:vmHolder.slice
and then adding my qemu guest process to the mesos controllers
cgclassify -g cpu,memory:mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895 GUEST-PID
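In other words, the classification step boils down to writing the guest PID into each target group's cgroup.procs file. A minimal C sketch of that same move (cgroup v1 hierarchy, paths as above; move_pid_to_cgroup is just an illustrative name):

#include <stdio.h>
#include <sys/types.h>

/* Move a PID into a cgroup v1 group by writing it to that group's
 * cgroup.procs file (this is what cgclassify does per controller). */
static int move_pid_to_cgroup(const char *cgroup_procs_path, pid_t pid)
{
    FILE *f = fopen(cgroup_procs_path, "w");
    if (!f) {
        return -1;
    }
    fprintf(f, "%d\n", (int)pid);
    return fclose(f);
}

/* e.g. move_pid_to_cgroup(
 *          "/sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895/cgroup.procs",
 *          guest_pid); */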
When I run cat /proc/5531/cgroup, I get:
11:perf_event:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
10:pids:/
9:devices:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
8:cpuset:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope/emulator
7:net_prio,net_cls:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
6:freezer:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
5:blkio:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
4:hugetlb:/
3:cpuacct,cpu:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
2:memory:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
1:name=systemd:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
This shows the process is in those controllers, but when I run systemd-cgtop the memory usage of the VM is not being added to the mesos group. I'm not sure what to do next. Any suggestions?

Stream Management (XEP-0198) in Prosody

I am using Prosody and want stream management (XEP-0198), but I am running into some issues.
How can I ensure that stream management is enabled on Prosody? Is there any command to test it from the terminal?
I also tried adding the mod_smacks.lua module to the modules directory, but I don't know how to enable it on the server.
I am using XMPPFramework as the chat client on iOS. It already has a method to check whether stream management is supported, but so far it always returns false for me.
Please help me enable stream management in Prosody.
After you have added mod_smacks.lua to /usr/lib/prosody/modules/, add
"smacks";
to the
modules_enabled = {
    ...
}
list in your /etc/prosody/prosody.cfg.lua if you want the module to be loaded every time Prosody starts.
Then restart Prosody.
Prosodyctl does not show loaded modules.
You can check if the module is loaded via ad-hoc commands (or telnet if activated). You can even load and unload modules via ad-hoc/telnet.
You can find more information about mod_smacks here.
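As a quick sanity check from the client side: once the module is active, the server advertises stream management in its stream features, which is exactly what client libraries such as XMPPFramework look for. The advertisement looks roughly like this (XEP-0198, version 3 namespace):

<stream:features>
  ...
  <sm xmlns='urn:xmpp:sm:3'/>
</stream:features>

If that <sm/> element is missing from the features the client receives, the support check will keep returning false.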

Permission error while adding new physical device to QEMU under libvirt?

I'm trying to add a USB camera to QEMU so that it can be used by the guest OS. I've added the following entry to /etc/libvirt/qemu.conf:
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    ...
    "/dev/rtc", "/dev/hpet", "/dev/video0",
]
Also, I've mounted the cgroup controller as below.
mkdir /dev/cgroup
mount -t cgroup none /dev/cgroup -o devices
But I'm getting a "Permission denied" error (13) from the following code:
fd = open("/dev/video0", O_RDWR | O_NONBLOCK, 0);
The strange thing is that this error only happens when I use virt-manager (libvirt); the issue disappears when QEMU is run directly from the command line.
Is there any way to give QEMU access to all devices under libvirt? Or is there another step to check for in libvirt/qemu.conf?
Very long shot, but did you have a chance to go through this page in the libvirt docs?
It covers a different issue, but it states that disabling SELinux is one of the required steps.
One simple workaround to grant the access rights is to change the ownership of the device to libvirt-qemu. After running the following command, libvirt can open the device just fine.
sudo chown libvirt-qemu /dev/video0