I have the following network definition in my domain XML (via virsh edit vm):
<controller type='pci' index='0' model='pci-root'/>
<interface type='bridge'>
  <mac address='f2:ff:ff:ff:ff:07'/>
  <source bridge='br0'/>
  <model type='e1000'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
i.e. slot=3, but after running virsh domxml-to-native qemu-argv I'm getting:
qemu-system-x86_64 -name guest=vm07 -machine pc-i440fx-2.12,accel=kvm,usb=off,dump-guest-core=off -cpu SandyBridge-IBRS -m 4096
.... -netdev tap,fd=21,id=hostnet0
-device e1000,netdev=hostnet0,id=net0,mac=f2:ff:ff:ff:ff:07,bus=pci.0,addr=0x2
i.e. slot=2, which renames the previous ens3 interface to ens2, so the guest fails to get an IP over DHCP.
Any idea why this happens and how to keep the slot number?
Thanks!
Verified with the libvirt users mailing list as a bug in version 4.5.0; a fix is on the way.
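Until the fix is released, you can at least confirm which slot the guest actually received: systemd's predictable naming derives the ensN name from the PCI slot number, which is why slot 2 yields ens2. A quick check from inside a Linux guest (a sketch; substitute whatever interface name your guest actually shows):
lspci | grep -i ethernet
udevadm info /sys/class/net/ens2 | grep ID_NET_NAME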
Do qemu 5.1.0-dirty and qemu 5.1.0 versions behave differently?
No error occurs, but the guest boots with the qemu 5.1.0-dirty version and not with 5.1.0. What could be the problem?
/home/pi/qemu/qemu-5.1.0/build/aarch64-softmmu/qemu-system-aarch64 -drive file=/home/pi/images/boot.qcow2,id=disk0,format=raw,if=none,cache=none -monitor null -object rng-random,filename=/dev/random,id=rng0 -cpu host -machine type=virt -device virtio-keyboard-pci -device virtio-rng-pci,rng=rng0 -device virtio-blk-pci,drive=disk0 -serial mon:stdio -kernel /home/pi/kernel/Image-vdt -usb -nodefaults -device virtio-net-pci,netdev=net0,mac=CA:FE:BA:BE:BE:EF,rombar=0 -netdev type=tap,id=net0,ifname=qemu_tap0,script=no,downscript=no -device virtio-gpu-pci,virgl,xres=1680,yres=560 -display sdl,gl=on -device virtio-tablet-pci -show-cursor -m 5G -smp 3 -device qemu-xhci,id=xhci -enable-kvm -append "root=/dev/vda9 ro loglevel=7 audit=0 enforcing=0 console=tty0 fbcon=map:10 video=1680x560-32 mem=5G"
Both versions were given the same command line, but the guest only boots with the qemu 5.1.0-dirty version.
In qemu 5.1.0, which does not boot, the QEMU screen is created, but the phrase 'guest has not initialized the display (yet)' is displayed and it does not proceed any further.
5.1.0-dirty exists only as a prebuilt binary; version 5.1.0 was compiled from source.
The configure options used were --enable-sdl --enable-gtk --target-list=aarch64-softmmu.
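In other words, the 5.1.0 binary under test was built roughly like this (a sketch based on the options above):
./configure --enable-sdl --enable-gtk --target-list=aarch64-softmmu
make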
It should boot normally, but it doesn't.
What is the difference between qemu 5.1.0-dirty and regular qemu 5.1.0?
-dirty on the end of a QEMU version string means "5.1.0 plus any number of unknown extra changes", i.e. it is not a clean upstream version. It could have absolutely anything in it. You would need to find out exactly where the binary came from and what sources it was built from to be able to find out what the differences are between it and a clean 5.1.0.
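If you can get hold of the source tree a binary was built from, you can reproduce the suffix yourself: when QEMU is built from a git checkout, the version string typically embeds the output of git describe, and git appends -dirty when the working tree contains uncommitted changes. A sketch, run inside a QEMU source checkout:
git describe --dirty
# v5.1.0 on a clean tag checkout; v5.1.0-dirty (or v5.1.0-N-g<hash>-dirty) if anything was modified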
I'm working with VxWorks, a Real Time Operating System for embedded systems. They recently added QEMU support, and I've been trying to figure it out. (I'm fairly new to all these technologies.) I would like to checkpoint and restart the virtual machine, i.e. save the RAM and processor state and reload it later from exactly that point.
QEMU has some support for this called "snapshots." However, everything I've seen and tried requires a disk image in qcow2 format. But my simulation has no disk, the program is loaded directly into RAM and run.
Here's my QEMU command:
qemu-system-aarch64 -m 4096M -smp 4 -machine xlnx-zcu102 -device loader,file=~/vxworks_21.03/workspace3/QEMU_helloWorld/default/vxWorks,addr=0x00100000 -nographic -monitor telnet:127.0.0.1:35163,server,nowait -serial telnet:127.0.0.1:39251,server -device loader,file=~/vxworks_21.03/workspace3/vip_xlnx_zynqmp_smp_64/default/xlnx-zcu102-rev-1.1.dtb,addr=0x0f000000 -device loader,addr=0x000ffffc,data=0xd2a1e000,data-len=4 -device loader,addr=0x000ffffc,cpu-num=0 -nic user -nic user -nic user -nic user,id=n0,hostfwd=tcp:127.0.0.1:0-:1534,hostfwd=udp:127.0.0.1:0-:17185
Then I log into the monitor and:
$ telnet 127.0.0.1 35163
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
QEMU 5.2.0 monitor - type 'help' for more information
(qemu) savevm
Error: No block device can accept snapshots
I tried a number of things, like creating an empty disk image or using the snapshot_blkdev command, but no luck so far.
The host is RedHat Linux 8.4 running on an x86 desktop, the guest is ARM64.
It turns out that a disk image is required to do snapshots, but you don't have to hook it up to the guest. To do that, you pass QEMU a -drive argument with if=none, like this:
-drive if=none,format=qcow2,file=dummy.qcow2
So here is the whole sequence that worked:
$ qemu-img create -f qcow2 dummy.qcow2 32M
$ qemu-system-aarch64 -m 4096M -smp 4 -machine xlnx-zcu102 -device loader,file=vxWorks,addr=0x00100000 -nographic -monitor telnet:127.0.0.1:35163,server,nowait -serial telnet:127.0.0.1:39251,server -device loader,file=xlnx-zcu102-rev-1.1.dtb,addr=0x0f000000 -device loader,addr=0x000ffffc,data=0xd2a1e000,data-len=4 -device loader,addr=0x000ffffc,cpu-num=0 -nic user -nic user -nic user -nic user,id=n0,hostfwd=tcp:127.0.0.1:0-:1534,hostfwd=udp:127.0.0.1:0-:17185 -snapshot -drive if=none,format=qcow2,file=dummy.qcow2
Then in the monitor terminal savevm and loadvm work:
$ telnet 127.0.0.1 35163
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
QEMU 5.2.0 monitor - type 'help' for more information
(qemu) savevm save1
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK ICOUNT
-- save1 44.3 MiB 2021-06-28 10:08:28 00:00:05.952
(qemu) loadvm save1
This information came thanks to Peter Maydell and his blog post: https://translatedcode.wordpress.com/2015/07/06/tricks-for-debugging-qemu-savevm-snapshots/
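A follow-up note: once a snapshot tag exists in dummy.qcow2, QEMU can also restore it at startup with the -loadvm flag. A sketch reusing the tag above (this only works if the run that created the snapshot did not use -snapshot, since -snapshot discards all writes, including saved VM state, at exit):
qemu-system-aarch64 ... -drive if=none,format=qcow2,file=dummy.qcow2 -loadvm save1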
Background:
Host: Win10
Qemu: Qemu 6.0.0
This is my command: qemu-system-arm.exe -D ./log.txt -M sabrelite -smp 4 -m 1G -nographic -serial null -serial mon:stdio -kernel image -dtb sabrelite.dtb
I'm using this command to start a QEMU guest in order to run some tests that produce a lot of log output.
I want to save that output to a file.
Question:
How can I save the console output from windows host QEMU to a file?
It seems that -D ./log.txt just creates an empty file and does not save the output to it.
The -D option is for the log file for the debug info enabled with '-d'. If you don't specify any '-d' options there will be no debug info in the log file.
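For example, a sketch enabling two common categories (run with -d help for the full list):
qemu-system-arm.exe -d guest_errors,unimp -D ./log.txt -M sabrelite -smp 4 -m 1G -nographic -serial null -serial mon:stdio -kernel image -dtb sabrelite.dtb
With that, log.txt receives the selected debug output, but still not the serial console.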
The output of the serial console is entirely separate. That is controlled by the '-serial' option, which currently you have set up to go to stdio (with a monitor muxed to also use stdio). You can look at the other options for where -serial can be directed; this does include a "send to file", but note that if you just do that then you won't also be able to see it on the console and you won't be able to input anything.
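Two hedged sketches of that. The simple variant redirects the second serial port (currently mon:stdio) straight to a file:
-serial null -serial file:serial.log
Or, to keep the interactive stdio console and also get a copy on disk, attach the serial port and monitor to a muxed stdio chardev with logfile= (available in QEMU 6.0):
-chardev stdio,mux=on,id=char0,logfile=serial.log -serial null -serial chardev:char0 -mon chardev=char0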
You can use standard windows output redirection. This command line will redirect stdout and stderr to log.txt:
qemu-system-arm.exe -M sabrelite -smp 4 -m 1G -nographic -serial null -serial mon:stdio -kernel image -dtb sabrelite.dtb > ./log.txt 2>&1
Can KVM be enabled (-enable-kvm) when running QEMU without -cpu host?
e.g.
qemu-system-x86_64 \
-boot c -m 16G -vnc :0 -enable-kvm \
-cpu qemu64,avx,pdpe1gb,check,enforce \
...
Does QEMU use KVM when running the virtual qemu64 CPU?
I always thought that this option could be enabled ONLY when using QEMU with -cpu host...
Yes, running a guest with KVM acceleration (-enable-kvm option in qemu command line) can be done without -cpu host.
In the case of -cpu qemu64,avx,pdpe1gb,check,enforce, QEMU takes the virtual qemu64 CPU's feature set plus the extra avx and pdpe1gb flags as the CPU features for this guest (check and enforce just make QEMU verify that the host and KVM can actually provide the requested features). This is done by calling KVM's KVM_SET_CPUID2 ioctl.
When the guest asks for CPU features (via CPUID), it will receive these values from KVM.
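A quick way to see the result from inside a Linux guest (assuming the guest runs Linux):
lscpu | grep 'Model name'    # reports a QEMU virtual CPU rather than the host model
grep -o -E 'avx|pdpe1gb' /proc/cpuinfo | sort -u    # the extra flags requested on the command line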
Using Linux KVM/QEMU, I have a virtual machine with two NICs, presented to the host as tap interfaces:
-net nic,macaddr=AA:AA:AA:AA:00:01,model=virtio \
-net tap,ifname=tap0a,script=ifupbr0.sh \
-net nic,macaddr=AA:AA:AA:AA:00:02,model=virtio \
-net tap,ifname=tap0b,script=ifupbr1.sh \
In the guest (also running linux), these are configured with different subnets:
eth0 Link encap:Ethernet HWaddr aa:aa:aa:aa:00:01
inet addr:10.0.0.10 Bcast:10.0.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth1 Link encap:Ethernet HWaddr aa:aa:aa:aa:00:02
inet addr:192.168.0.10 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Routes only go to the expected places:
ip route list
default via 10.0.0.1 dev eth0 metric 100
10.0.0.0/16 dev eth0 proto kernel scope link src 10.0.0.10
192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.10
But somehow they don't seem to be treated by KVM as connected to distinct networks.
If I trace the individual interfaces, they both see the same traffic.
For example, if I ping on the 10.0.0.0/16 subnet (ping -I eth0 10.0.0.1) and simultaneously trace the two tap interfaces with tcpdump, I see the pings coming through on both tap interfaces:
sudo tcpdump -n -i tap0a
10:51:56.308190 IP 10.0.0.10 > 10.0.0.1: ICMP echo request, id 867, seq 1, length 64
10:51:56.308217 IP 10.0.0.1 > 10.0.0.10: ICMP echo reply, id 867, seq 1, length 64
sudo tcpdump -n -i tap0b
10:51:56.308190 IP 10.0.0.10 > 10.0.0.1: ICMP echo request, id 867, seq 1, length 64
10:51:56.308217 IP 10.0.0.1 > 10.0.0.10: ICMP echo reply, id 867, seq 1, length 64
That seems strange to me since it's pretty clear that the guest OS would have only actually sent this on the tap0a interface.
Is this expected behavior? Is there a way to keep the interfaces separate as I expected?
Is this some misconfiguration issue on my part?
Additional info, here are the two ifupbr0.sh and ifupbr1.sh scripts:
% cat ifupbr0.sh
#!/bin/sh
set -x
switch=br0
echo args = $*
if [ -n "$1" ]; then
    sudo tunctl -u `whoami` -t $1
    sudo ip link set $1 up
    sleep 0.5s
    sudo brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
% cat ifupbr1.sh
#!/bin/sh
set -x
switch=br1
echo args = $*
if [ -n "$1" ]; then
    sudo tunctl -u `whoami` -t $1
    sudo ip link set $1 up
    sleep 0.5s
    sudo brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
I see this problem even if I detach the tap0b interface from br1. It still shows the traffic that I'd expect only on tap0a. That is, even when:
% brctl show
bridge name bridge id STP enabled interfaces
br0 8000.26a2d168234b no tap0a
br1 8000.000000000000 no
br2 8000.000000000000 no
It looks like I answered my own question eventually, but I'll document it for anyone else that hits this.
Evidently this really is the intended behavior of KVM for the options I was using.
At this URL:
http://wiki.qemu.org/Documentation/Networking
I found:
QEMU previously used the -net nic option instead of -device DEVNAME
and -net TYPE instead of -netdev TYPE. This is considered obsolete
since QEMU 0.12, although it continues to work.
The legacy syntax to create virtual network devices is:
-net nic,model=MODEL
And sure enough, I'm using this legacy syntax. I thought the new syntax was merely more flexible, but the legacy syntax apparently has this documented behavior:
The obsolete -net syntax automatically created an emulated hub (called
a QEMU "VLAN", for virtual LAN) that forwards traffic from any device
connected to it to every other device on the "VLAN". It is not an
802.1q VLAN, just an isolated network segment.
Since none of my -net options specified a vlan= parameter, all four endpoints landed on the same default hub ("VLAN" 0), which is why both taps saw identical traffic. The vlans it supports are also just emulated hubs, and don't forward out to the host at all as best I can tell.
Regardless, I reworked the QEMU options to use the "new" netdev syntax and obtained the behavior I wanted here.
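For reference, a sketch of that rework; the netdev IDs are arbitrary, and the model and MAC addresses mirror the legacy lines above. Each -netdev/-device pair forms its own private link, so there is no implicit hub joining them:
-netdev tap,id=hn0,ifname=tap0a,script=ifupbr0.sh \
-device virtio-net-pci,netdev=hn0,mac=AA:AA:AA:AA:00:01 \
-netdev tap,id=hn1,ifname=tap0b,script=ifupbr1.sh \
-device virtio-net-pci,netdev=hn1,mac=AA:AA:AA:AA:00:02 \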
What do you have in the ifupbr0.sh and ifupbr1.sh scripts? What bridging tool are you using? That is the important piece that segregates traffic onto the desired interfaces.
I've used openvswitch to handle my bridging stuff. But before that I used bridge-utils in Debian.
I wrote some information about bridge-utils at http://blog.raymond.burkholder.net/index.php?/archives/31-QEMUKVM-BridgeTap-Network-Configuration.html. I have other posts regarding what I did with bridging on the OpenVSwitch side of things.
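For what it's worth, the brctl calls in the question's scripts map onto openvswitch roughly like this (a sketch, assuming the bridge and tap names from the question):
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 tap0a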