ARM AT91 cannot start up in QEMU: I can't get any output on the console.
I am trying to use QEMU (latest code pulled from git) to emulate an ARM AT91 board, but when I start QEMU I get no output on the console. In my understanding, there are two steps to achieve this:
1. Properly set up the memory map in QEMU so that QEMU can decompress the zImage. At this step I should see "Uncompressing Linux... done, booting the kernel."
2. Properly set up the output device (e.g. uart0) so that I get the kernel startup messages.
I've succeeded in booting the ARM versatilePB, because QEMU supports the versatilePB itself. The difference between the versatilePB and the AT91 is that they have different SDRAM base addresses. I've tried modifying loader_start to 0x20000000, but it still doesn't seem to work.
hwaddr loader_start;  // 0x2000000, which is the AT91 SDRAM address
memory_region_add_subregion(sysmem, 0x2000000, ram);
At the very least it should print "Uncompressing Linux... done, booting the kernel.", which would indicate that the zImage has been executed and decompressed.
QEMU (at least upstream QEMU) does not have a model of the AT91 SoCs. The differences between these systems and ones like the versatilePB that QEMU does support are greater than just "the RAM is at a different address": they will have different devices of all kinds (including the UART), which both behave differently and are found at different locations. It is impossible to run bare-metal code intended for an AT91 without implementing in QEMU a model of the correct board and at least some of the AT91 devices. The changes required would be much, much more substantial than just changing a few addresses for the RAM base.
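For comparison, on a board QEMU does model, getting console output usually needs nothing more than a kernel built for that board and the right console device; something like the following (assuming a zImage built for versatilepb) should show both the decompressor message and the kernel log:

qemu-system-arm -M versatilepb -kernel zImage -append "console=ttyAMA0" -nographic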
Related
I've found that I can have the guest write to the I/O port specified via the isa-debug-exit device on the command line, and the value written to the port gets used as QEMU's exit code (after a predictable transformation).
Is there an equivalent mechanism for aarch64 / arm64?
You can use the semihosting API for that: see "qemu-system-aarch64 exit from within the guest system" for details (and also for the caveats about needing to trust the code in your guest if you use it).
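As a rough illustration (a sketch, not authoritative: the call and reason-code numbers are the standard Arm semihosting values, and QEMU has to be started with -semihosting for any of this to take effect), an AArch64 bare-metal guest can ask QEMU to exit like this:

#include <stdint.h>

#define SYS_EXIT                    0x18      /* semihosting "exit / report exception" operation */
#define ADP_Stopped_ApplicationExit 0x20026   /* reason code: normal application exit */

static void semihosting_exit(uint64_t status)
{
    /* On AArch64 the SYS_EXIT parameter is a pointer to a two-field block:
       { reason code, exit status }. */
    volatile uint64_t block[2] = { ADP_Stopped_ApplicationExit, status };

    register uint64_t op  asm("x0") = SYS_EXIT;
    register uint64_t arg asm("x1") = (uint64_t)block;

    /* HLT #0xF000 is the AArch64 semihosting trap instruction. */
    asm volatile("hlt #0xf000" : : "r"(op), "r"(arg) : "memory");
}

With -semihosting on the QEMU command line, QEMU should terminate and use the status field as its exit code.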
I am using qemu-system-arm to execute a bare-metal Cortex-M3 binary on a custom machine populated with emulations of memory-mapped devices.
To exchange data between the host and the M3 binary running in QEMU, I start QEMU with
-chardev udp,id=ch0,port=x,localport=y -serial chardev:ch0
Then, inside QEMU, I bind a device to serial_hds[0]. Writing to the serial device then results in UDP packets being sent to the host.
My question is: do I have to make the connection via -serial? Can I access the created chardevs in some other way, without going through -serial?
I want to set up QEMU to listen on 10 UDP ports, but as I understand it, the -serial option is limited to a maximum of 4 devices.
QEMU's chardev abstraction has "front ends" and "back ends".
The "back end" is whatever you connect to on the host side (which might be a UDP port, stdin/stdout, a UNIX domain socket, etc). The -chardev option is what creates and configures this back end.
The "front end" is the part on the QEMU side of this. The most common use is a UART (serial port), but you can also use chardevs to specify how to talk to the QEMU monitor, or to a guest parallel port.
In this case your problem is "what are the N things that the guest sees?", i.e. what are the front ends? There has to be something here, which means your board needs to actually create multiple UARTs or something similar. -serial does have a limit of 4 (you can probably raise that with a local hack changing MAX_SERIAL_PORTS), but if your device model is written to take a QEMU chardev rather than to look directly at serial_hds[], it should be possible to configure it other than via -serial (either with -device ... or -global ... to set the chardev as the device property).
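As a sketch of what "take a QEMU chardev as a device property" can look like (my-uart, MyUartState and the field names are made up, and header locations vary between QEMU versions; the property macros are the ones QEMU's qdev property system provides):

typedef struct MyUartState {
    SysBusDevice parent_obj;
    CharBackend chr;     /* front-end handle; the back end is whatever -chardev created */
    /* ... device registers, IRQ line, etc. ... */
} MyUartState;

static Property my_uart_properties[] = {
    DEFINE_PROP_CHR("chardev", MyUartState, chr),   /* exposes a chardev=<id> property */
    DEFINE_PROP_END_OF_LIST(),
};

Each instance can then be pointed at its own chardev, either from your board code (qdev_prop_set_chr()) or, if the device is user-creatable, on the command line with something like -chardev udp,id=ch0,port=5000,localport=6000 -device my-uart,chardev=ch0, repeated for as many ports as you need, independent of the -serial limit.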
I'm currently deploying a KVM/QEMU virtual machine with Mesos/Marathon. In Marathon, I'm using the built-in Mesos command executor and running this script:
virsh start centos7.0; while true; do echo 'centos 7.0 guest is running'; sleep 5; done
Note that the while loop is there only to keep the task running. My issue is that I cannot get Mesos to monitor the resource usage of the virtual machine.
When Marathon deploys this task on a mesos-agent, it creates a container that uses the memory and cpu cgroups:
/sys/fs/cgroup/cpu/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
/sys/fs/cgroup/memory/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
When the virtual machine is kicked off, the virsh start command sends a request to libvirtd. Libvirtd then reads the guest.xml file located in /etc/libvirt/qemu/ and sends a request to the qemu/kvm driver to deploy it.
In my guest.xml file I'm using a custom partition cgroup slice (shown below) to monitor my virtual machine's usage, following
https://libvirt.org/cgroups.html
This places the VM, for each cgroup controller, under:
/sys/fs/cgroup/???/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
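The partition itself is requested from the domain XML with a fragment roughly like the following (only the part relevant here; the /vmHolder name is from my setup, and the libvirt page above describes how partition names map onto .slice units on systemd hosts):

<resource>
  <partition>/vmHolder</partition>
</resource>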
What I have tried:
I tried deleting the memory/cpu cgroups from this slice with
cgdelete -r cpu,memory:vmHolder.slice
and then adding my QEMU guest process to the Mesos cgroups:
cgclassify -g cpu,memory:mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895 GUEST-PID
When I run the command cat /proc/5531/cgroup, I get:
11:perf_event:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
10:pids:/
9:devices:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
8:cpuset:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope/emulator
7:net_prio,net_cls:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
6:freezer:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
5:blkio:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
4:hugetlb:/
3:cpuacct,cpu:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
2:memory:/mesos/31b48dc3-6f09-4b5a-8964-b82d711bb895
1:name=systemd:/vmHolder.slice/machine-qemu\x2d1\x2dcentos7.0\x2dclone.scope
This shows that the process is in those controllers, but when I run systemd-cgtop it does not add in the memory usage of the VM. I'm not sure what to do next. Any suggestions?
I'm trying to shut down a virtual core while my QEMU Virtual Machine is running.
For that purpose, I need to use the function qemu_cpu_kick(), which is found in cpus.c:
void qemu_cpu_kick(CPUState *cpu)
{
    qemu_cond_broadcast(cpu->halt_cond);        /* wake any thread waiting on this CPU's halt condition */
    if (!tcg_enabled() && !cpu->thread_kicked) {
        qemu_cpu_kick_thread(cpu);              /* signal the vCPU thread so it leaves guest execution */
        cpu->thread_kicked = true;
    }
}
It works well, but only if I enable KVM.
However, I need to have KVM disabled, and once I disable KVM, tcg_enabled() returns true and the CPU doesn't shut down.
Is it possible to disable TCG?
I didn't find any knob for TCG; --disable-tcg, as well as other attempts, does not work.
I tried reconfiguring my build with --disable-tcg-interpreter, but still nothing changes.
So, how can I disable TCG? Or, alternatively, is there a better way to shut down a virtual CPU?
Thanks!
Well, as I understand it, running QEMU without KVM forces QEMU to use the Tiny Code Generator (TCG) instead of KVM, so running QEMU with neither KVM nor TCG is simply not possible!
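In other words, the guest CPUs always run under some accelerator, and the accelerator is chosen at startup; for example (option spellings vary a little between QEMU versions):

qemu-system-arm -accel kvm ...   # hardware virtualization; requires KVM on a matching host
qemu-system-arm -accel tcg ...   # software emulation via the Tiny Code Generator

Dropping KVM therefore means falling back to TCG (or another accelerator where one exists); there is no "no accelerator" mode.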
I have a Google Compute Engine instance (CentOS) which I could access using its external IP address until recently.
Now, suddenly, the instance cannot be accessed via its external IP address.
I logged in to the developer console and tried rebooting the instance, but that did not help.
I also noticed that the CPU usage is almost at 100% continuously.
On further analysis of the serial port output, it appears that init is not loading properly.
I am pasting below the last few lines from the serial port output of the virtual machine:
rtc_cmos 00:01: RTC can wake from S4
rtc_cmos 00:01: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos 00:01: setting system clock to 2014-07-04 07:40:53 UTC (1404459653)
Initalizing network drop monitor service
Freeing unused kernel memory: 1280k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 800k freed
Freeing unused kernel memory: 1584k freed
Failed to execute /init
Kernel panic - not syncing: No init found. Try passing init= option to kernel.
Pid: 1, comm: swapper Not tainted 2.6.32-431.17.1.el6.x86_64 #1
Call Trace:
[] ? panic+0xa7/0x16f
[] ? init_post+0xa8/0x100
[] ? kernel_init+0x2e6/0x2f7
[] ? child_rip+0xa/0x20
[] ? kernel_init+0x0/0x2f7
[] ? child_rip+0x0/0x20
Thanks in advance for any tips to resolve this issue.
Mathew
It looks like you might have a script or some other program that is causing you to run out of inodes.
You can delete the instance without deleting the persistent disk (PD) and create a new VM with higher capacity using your PD; however, if a script is causing this, you will end up with the same issue. It's always recommended to back up your PD before making any changes.
Run this command to find more info about your instance:
gcutil --project=<project-id> getserialportoutput <instance-name>
If the issue still continues, you can either:
- make a snapshot of your PD and create a copy from it, or
- delete the instance without deleting the PD.
Then attach and mount the PD to another VM as a second disk so you can access it and find what is causing the issue. Visit https://developers.google.com/compute/docs/disks#attach_disk for more information on how to do this.
Visit http://www.ivankuznetsov.com/2010/02/no-space-left-on-device-running-out-of-inodes.html for more information about inode troubleshooting.
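Once the PD is attached and mounted on the rescue VM (the mount point /mnt/pd below is just an example), a quick way to confirm inode exhaustion and find the culprit is:

df -i /mnt/pd                                                               # inode totals, used and free
sudo find /mnt/pd -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head   # directories holding the most files

If IUse% is at 100%, the second command usually points straight at the runaway directory.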
Make sure the "Allow HTTP traffic" setting on the VM is still enabled.
Then check which network the VM is using and its firewall rules.
If your network is set up to use an ephemeral IP, the address will periodically be released, which will cause your IP to change over time. In that case, set it to a static/reserved address (on the Networks page).
https://developers.google.com/compute/docs/instances-and-network#externaladdresses
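With the current gcloud CLI (the gcutil commands from that era differ slightly), those checks look something like:

gcloud compute firewall-rules list                                  # confirm a rule still allows the traffic you expect (e.g. tcp:80, tcp:22)
gcloud compute addresses create my-static-ip --region us-central1   # reserve a static address; the name and region here are placeholders

and the reserved address can then be assigned to the instance so it no longer changes.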