I haven't been able to find any documentation describing an API that can be used to automate things inside a QEMU guest.
For example, I would like to launch a process inside the guest machine from the host machine. Libvirt does not appear to offer such functionality.
[Note: Automation without using any virtualization API. From my blog post.]
Step 1:
By default, QEMU uses SDL to display the VGA output. So, the first step is to make this interaction with QEMU happen through standard I/O. QEMU provides an option for this.
From the QEMU docs:
-nographic Normally, QEMU uses SDL to display the VGA output. With this option, you can totally disable graphical output so that QEMU is a simple command line application. The emulated serial port is redirected on the console. Therefore, you can still use QEMU to debug a Linux kernel with a serial console.
So, all you have to do is invoke QEMU with -nographic.
qemu -nographic -hda guest.disk
Step 2:
Now that you can interact with your guest (or rather the QEMU process) through the command line, you have to automate this interaction. The obvious way to do this in Python is to start the QEMU process (with -nographic) via the subprocess module and then communicate with that process. But to my surprise, this just didn't work out for me. So, I looked for some other way.
Later, I found out that the most awesome tool for this kind of job is Expect, an automation tool for interactive applications written in Tcl.
This guide should help you in getting started with Expect. Here is the script to run a guest with QEMU using Expect.
#!/usr/bin/expect -f
# Start the guest VM, run benchmarks, then power it off
set timeout -1
# First command-line argument: the log file for benchmark output
set log [lindex $argv 0]
# Start the guest VM
spawn qemu -nographic -hda guest.disk
# Log in
expect "login: "
send "user\r"
expect "Password: "
send "user\r"
# Do whatever you want to do within the guest VM here,
# e.g. run a process and write its result to $log
# Power off the guest VM
expect "# "
send "shutdown -h now\r"
# Wait for QEMU itself to exit
expect eof
The QEMU Monitor can interact with guest systems to a limited extent using its own console. This includes reading registers, controlling the mouse/keyboard, and getting screen dumps.
There is also the QEMU Monitor Protocol (QMP), which lets you send JSON commands to QEMU and read back results and events.
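As a minimal sketch of what that looks like from Python (assuming QEMU was started with -qmp tcp:localhost:4444,server,nowait; the port is an arbitrary choice), a client just exchanges JSON lines over a socket:

import json, socket

# Connect to the QMP socket QEMU was told to listen on
sock = socket.create_connection(("localhost", 4444))
chan = sock.makefile("rw")
print(chan.readline())  # the QMP greeting banner
# QMP requires a capabilities negotiation before any other command
chan.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
chan.flush()
print(chan.readline())  # {"return": {}} on success
# Now ordinary commands work, e.g. query the VM's run state
chan.write(json.dumps({"execute": "query-status"}) + "\n")
chan.flush()
print(chan.readline())  # e.g. {"return": {"status": "running", "running": true}}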
As far as I know, the only other way to communicate with the guest is through the network bridge.
I use Python with pexpect to interact with spawned VMs through their serial consoles. I generally automate scenarios with up to 128 VMs this way, and it's reasonably swift. I typically use virt-install to instantiate guests, then use "virsh console (domainname)" via pexpect to get a "handle" to each console, so I can send commands to configure networking, start up tools/utilities/scripts, monitor operation, etc. Pretty sweet in terms of simplicity, and since the scripts are just issuing shell commands, you aren't exposed to APIs that change from version to version; the serial console will always be there.

Sometimes I use QEMU directly (lately I am working with a QEMU that libvirt doesn't support since it's too new); in that case I have the guest console listen on a telnet port, so I can run "telnet localhost portnumber" to make a console connection instead of "virsh console (domainname)". Either way, Python scripts with the pexpect module are a great way of interacting with VMs. A minimal sketch of the pattern is below.
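Here is that pattern in miniature (the domain name, credentials, prompt strings, and the command are placeholders for your own setup):

import pexpect

# Attach to the guest's serial console through libvirt
child = pexpect.spawn("virsh console mydomain")
child.sendline("")                 # nudge the console so it prints a prompt
child.expect("login: ")
child.sendline("user")
child.expect("Password: ")
child.sendline("password")
child.expect(r"\$ ")               # wait for a shell prompt
child.sendline("ip addr show")     # any configuration/monitoring command
child.expect(r"\$ ")
print(child.before.decode())       # everything printed before the next prompt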
PyQemu can theoretically do this. I've used it in the past, although it looks like a stale project now. It provides a Python agent (the equivalent of VMware guest tools) that runs on the guest and communicates with the host via serial port. You can get proxies to Python modules running in the context of the VM, and any communication with them is marshaled over the serial port. In the following example, AutoIt is being used to automate Notepad:
machine = PyQemu.GetProxy("win2k")
# Wrap the machine object in another proxy representing the 'os'
# module running inside the VM.
os = PyQemu.vm.Module(machine,"os")
# NOTE: This is running on the VM!
os.system("notepad")
# Get an IDispatch object representing the autoit ActiveX control
autoit = PyQemu.vm.Dispatch(machine,"AutoItX3.Control")
# See if a window is active on the VM
state = autoit.WinActive("Untitled -")
Caveat: because it uses the serial port, it is far from quick (regardless of serial speed settings), so it's best to transfer any bulk data by other means, e.g. a virtual FAT disk image.
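For instance, QEMU's virtual FAT support can expose a host directory to the guest as a FAT drive (the path is a placeholder):
qemu -hda guest.disk -hdb fat:rw:/path/to/shared-dir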
You can create a reverse SSH tunnel from the guest to the host, which redirects requests arriving at a specific port on the host back to the guest. This lets you control the guest from the host.
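For example, run something like the following inside the guest (ports and usernames are placeholders; 10.0.2.2 is the host's address as seen from QEMU's default user-mode networking), after which "ssh -p 2222 guestuser@localhost" on the host lands in the guest:
ssh -N -R 2222:localhost:22 hostuser@10.0.2.2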
If you're running Linux in the guest, couldn't you just use ssh/screen to launch remote processes on the guest?
Alternatively, I have seen people write Python wrappers that use popen() to grab stdin/stdout and use those to automate some commands (i.e. when you see the login prompt, send the login name to QEMU's stdin).
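A rough sketch of that popen() approach (it is fragile in practice because of buffering, which may be why the subprocess attempt described earlier failed):

import subprocess

# Start QEMU with its console on stdio
qemu = subprocess.Popen(
    ["qemu", "-nographic", "-hda", "guest.disk"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# Read one character at a time until the login prompt appears,
# then answer it on QEMU's stdin
buf = ""
while not buf.endswith("login: "):
    buf += qemu.stdout.read(1)
qemu.stdin.write("user\n")
qemu.stdin.flush()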
Related
I have a bare QEMU or KVM virtual machine and would like to observe, or wait for, the event that the virtual machine has booted, in a reliable way.
This is a generic question that may not have a generic answer. If it helps you may assume a subset of the following:
The VM is running Debian GNU/Linux.
The actual question is whether the contained ssh server is reachable. It is exported via user networking and a hostfwd.
Remarks:
reliable means that it is suitable for continuous integration testing. It should fail in less than 0.1% of cases.
Running ssh -o ConnectionAttempts=30 sometimes produces a failure even though ssh would work afterwards.
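One sketch of a more reliable check (assuming sshd is forwarded to host port 2222 via hostfwd): poll the port until the SSH banner itself appears, rather than counting connection attempts:

import socket, time

deadline = time.time() + 300              # overall timeout: 5 minutes
while time.time() < deadline:
    try:
        with socket.create_connection(("localhost", 2222), timeout=5) as s:
            if s.recv(32).startswith(b"SSH-"):   # sshd really answered
                break
    except OSError:
        pass                               # not up yet; retry
    time.sleep(1)
else:
    raise TimeoutError("guest sshd did not come up in time")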
For example, I'm running a Linux guest, and I want to do something like this in my init script just after boot:
savevm-somehow
run-lengthy-benchmark
I know how to use the monitor from the host, but it is hard to stop at exactly the right point to do the savevm: I could step through with GDB until I get there and then connect to the monitor, but that would be annoying.
In theory you could tell QEMU to put its monitor on a TCP port, and then also tell QEMU's networking to forward that port to the guest, and then from the guest connect to the forwarded port. I would worry about the possibility of deadlocks in this setup, though...
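Untested, but the invocation would look something like this (addresses and ports are arbitrary; guestfwd makes connections from the guest to 10.0.2.100:5555 land on the host-side monitor socket):
qemu -hda guest.disk -monitor tcp:127.0.0.1:5555,server,nowait -netdev user,id=n0,guestfwd=tcp:10.0.2.100:5555-tcp:127.0.0.1:5555 -device e1000,netdev=n0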
An approach I've used in the past is to script the QEMU monitor prompt using expect. There's an example here:
https://translatedcode.wordpress.com/2015/07/06/tricks-for-debugging-qemu-savevm-snapshots/
which uses a hardcoded delay time, but you ought to also be able to get expect to look at the serial port output to decide when to send the commands.
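A pexpect version of the same idea, keyed off the serial output instead of a fixed delay (the marker string, disk image, and snapshot name are placeholders):

import pexpect

vm = pexpect.spawn("qemu-system-x86_64 -nographic -hda guest.disk")
vm.expect("READY_FOR_SNAPSHOT")     # marker the guest prints just before the benchmark
vm.sendcontrol("a"); vm.send("c")   # Ctrl-A c switches -nographic stdio to the monitor
vm.expect(r"\(qemu\) ")
vm.sendline("savevm before-benchmark")
vm.expect(r"\(qemu\) ")
vm.sendcontrol("a"); vm.send("c")   # switch back to the guest's serial console
vm.sendline("")                     # e.g. tell the guest script to proceed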
I am attempting to profile an application standalone (i.e. on the same machine as the developer). I'm not sure I'm configuring it right, but here is what I do:
NSOLID_PROXY=0.0.0.0:0 npm run myserverlauncher
The application fires up and uses a random port for N|Solid.
Now, I want to fire up the nsolid console, and it starts but cannot find my machine. I tried:
npm start
NSOLID_PROXY=0.0.0.0:0 npm start
NSOLID_PROXY=0.0.0.0:47020 npm start (using the port given during launch)
None of these can discover my server.
Any clues on how to troubleshoot the standalone configuration?
To avoid overloading your application while profiling, you don't connect to it directly with N|Solid. We designed a Hub that gathers the profiling information without adding that overhead.
You'll need an etcd server running and the N|Solid Hub. Then you point your application at the Hub using the NSOLID_HUB environment variable (note that NSOLID_PROXY is wrong).
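For example (the address is an assumption; point it at wherever your Hub is actually listening):
NSOLID_HUB=127.0.0.1:4001 npm run myserverlauncher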
We have a complete guide to running N|Solid in a standalone and development environment; take a look, and also check out the scripts used there to make everything work out of the box.
Feel free to reach out to us anytime!
We're using OpenNebula to simulate a simple replicated JBoss application.
We've installed all the OpenNebula packages, plus QEMU, KVM, and libvirt.
We have created a simple ad hoc Ethernet network between my PC (a node) and my friend's (which is both a node and the front-end) by plugging an Ethernet cable between the two machines (10.0.0.1 and 10.0.0.2).
We can ping each other correctly, and we've set everything up so that we can ssh to each other without a password as the "oneadmin" user.
We've configured all the relevant files, such as:
/etc/libvirt/libvirtd.conf
/etc/default/libvirtd-bin
And so on...
kvm and kvm-intel are both enabled.
The daemon
libvirtd -d -l
seems to start correctly.
In fact, from the OpenNebula GUI on the front-end, we can see both hosts being monitored.
Anyway, there's a problem when we try to start a virtual machine on the node that is not the front-end, i.e. when we try to deploy a VM on the other node. The error is something like this:
cannot stat `/var/lib/one/datastores/1/f5394317d377beaa09fc07697df9ff68
but if, from the front-end (which has virtual machine no. 1), we run
cd /var/lib/one/datastores/1
then we can see that file; we've also given it full permissions...
Any idea? :(
This may be related to the datastore configuration. If you left the default values, OpenNebula expects a shared filesystem (i.e. NFS) between the front-end and the virtualization nodes.
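For example, with NFS you would export the datastore directory from the front-end and mount it at the same path on the node (the export options here are typical choices, not requirements):
# on the front-end, in /etc/exports:
/var/lib/one/datastores 10.0.0.0/24(rw,sync,no_subtree_check)
# on the node:
mount -t nfs <front-end-ip>:/var/lib/one/datastores /var/lib/one/datastores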
More context on the error (which I believe can be found in /var/lib/one/oned.log) would help analysing this problem.
I am launching QEMU using qemu-system-x86_64 along with some options. What options should I give to assign an IP address to the guest OS I launched, so that I can ping the guest OS from my host machine?
Can anyone help me with this, and also point out whether there is any other way to assign an IP address to the guest OS besides passing it on the qemu-system-x86_64 command line?
Thanks.
I haven't found a good solution for doing this via the command line.
First and foremost, your best bet is probably Cloud-Init. I've had varying success with it, but I also haven't spent a ton of time perfecting it.
You could utilize DHCP and get the IP address from the guest agent after the VM boots. If you are placing it in a network that doesn't have DHCP, you could consider using dnsmasq on Proxmox.
If you're using multiple VLANs, you could also consider building the VM in a VLAN that has DHCP (either from your router or via the aforementioned Proxmox dnsmasq approach), then SSH/RDP in, set the static address, and move the NIC to the right VLAN.
If you're trying to automate this deployment, I'd recommend using Terraform and Ansible (Terraform to build, Ansible to configure). I've found the best approach is to configure and trigger Terraform from Ansible and then save the IPs as facts. You can then use those facts to delegate the Ansible task to the temporary IP and log in to set the static IP address. If you're changing VLANs, you can use either Terraform or Ansible to adjust the config, but I've found Terraform to be best for this task.