QEMU virt machine from a flash device image

I have been developing the OS for a prototype device directly on the hardware. Unfortunately, flashing the OS each time and then debugging the issues is a slow, manual, and error-prone process.
I'd like to switch to developing the OS in QEMU, so that I can be sure that the OS is loading correctly before going through the faff of programming the device for real. This will come in handy later for Continuous Integration work.
I have a full copy of the NVM device that is generated from my build process. This is a known working image that I'd like to run in QEMU as a starting point; it is ready to be JTAG'd onto the device. The partition layout is:
P0 - loader - flash of IDBLoader from the Rockchip loader binaries
P1 - uboot - flash of U-Boot
P2 - trust - flash of the Trust image for the Rockchip-specific loader
P3 - / - root partition with a Debian-based image and the packages required for the application
P4 - data - application data partition
I have not changed anything in the Rockchip partitions (P0 - P2) apart from the serial console settings. When I try to boot the image, though, nothing happens: there is no output at all, but the VM shows as still running. I use the following command to run it:
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
-kernel u-boot-nodtb.bin \
-drive format=raw,file=image.img \
-boot c -serial stdio
I have no error information to go on to understand what is happening. Where can I get more information or debug output?

QEMU cannot emulate arbitrary hardware. You will have to compile U-Boot to match the hardware that QEMU emulates, e.g. using make qemu_arm64_defconfig. The OS must also provide drivers for QEMU's emulated hardware.
If you want to emulate the complete hardware to debug drivers, Renode (https://renode.io/) is a good choice.
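An untested sketch of that approach (the cross-toolchain prefix is an assumption; substitute whatever aarch64 toolchain you have installed):

```shell
# Hedged sketch: build U-Boot for the board QEMU actually emulates ('virt'),
# rather than for the Rockchip board. Toolchain prefix is an assumption.
git clone https://source.denx.de/u-boot/u-boot.git
cd u-boot
make CROSS_COMPILE=aarch64-linux-gnu- qemu_arm64_defconfig
make CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)"
# This build finds its UART and prints output because it was configured
# for the 'virt' machine it is running on.
qemu-system-aarch64 -machine virt -cpu cortex-a53 -nographic -kernel u-boot.bin
```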

For anyone else trying to figure this out, I found good resources here:
https://translatedcode.wordpress.com/2017/07/24/installing-debian-on-qemus-64-bit-arm-virt-board/
and
https://azeria-labs.com/emulate-raspberry-pi-with-qemu/
Looking at the information though, you need to extract the kernel from your image and provide that to the qemu command line as an argument. You'll also need to append an argument telling the system which partition to use as a root drive.
My final command line for starting the machine looks like this:
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
-drive format=raw,file=image.img,id=hd \
-boot c -serial stdio \
-kernel <kernelextracted> -append "root=fe04"
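Extracting the kernel means pulling it out of the root partition (e.g. by loop-mounting it) or locating it inside the raw image. As a self-contained illustration of the second approach, here is a sketch that scans for the gzip magic bytes and carves out the stream; the stand-in image and file names below are made up purely for demonstration:

```shell
# Hedged sketch: find a gzip stream inside a raw image by its magic bytes
# (1f 8b 08) and carve it out. The "image" here is a stand-in built on the
# spot so the example is self-contained; on a real flash dump you would
# point IMG at your image file instead.
IMG=demo.img
printf 'PADDING-PADDING-' > "$IMG"                 # 16 bytes of fake partition data
printf 'not really a kernel' | gzip -n >> "$IMG"   # a gzip stream, standing in for a kernel
# grep -abo reports the byte offset of the first match of the gzip magic
OFFSET=$(LC_ALL=C grep -abo "$(printf '\037\213\010')" "$IMG" | head -n1 | cut -d: -f1)
# carve from that offset to the end of the image and decompress
dd if="$IMG" bs=1 skip="$OFFSET" 2>/dev/null | gunzip > carved.bin
cat carved.bin
```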

Different Arm boards can be significantly different from one another in where they put their hardware, including where they put basic hardware required for bootup (UART, RAM, interrupt controller, etc). It is therefore pretty much expected that if you take a piece of low-level software like u-boot or a Linux kernel that was compiled to run on one board and try to run it on a different one that it will fail to boot. Generally it won't be able to output anything because it won't even have been able to find the UART. (Linux kernels can be compiled to be generic and include drivers for a wider variety of hardware, so if you have that sort of kernel it can be booted on a different board type: it will use a device tree blob, provided either by you or autogenerated by QEMU for the 'virt' board, to figure out what hardware it's running on and adapt to it. But kernels compiled for a specific embedded target are often built with only the device drivers they need, and that kind of kernel can't boot on a different system.)
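If you want to see exactly what hardware a generic kernel would have to adapt to, QEMU can dump the device tree it autogenerates for the 'virt' board (untested sketch; output file names are placeholders):

```shell
# Hedged sketch: dump the DTB QEMU generates for the 'virt' board, then
# decompile it to readable source with dtc. QEMU exits after writing the file.
qemu-system-aarch64 -machine virt,dumpdtb=virt.dtb -cpu cortex-a53 -nographic
dtc -I dtb -O dts virt.dtb -o virt.dts
less virt.dts   # shows the UART, RAM, GIC, etc. that the guest must drive
```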
There are broadly speaking two paths you can take:
(1) build the guest for the board you're emulating (here the 'virt' one). u-boot and Linux both have support for QEMU's 'virt' board. This may or may not be useful for what you're trying to do -- you would be able to test any userspace level code that doesn't care what hardware it runs on, but obviously not anything that's specific to the real hardware you're targeting.
(2) In theory you could add emulation support to QEMU for the hardware you're trying to run on. However this is generally a fair amount of work and isn't trivial if you're not already familiar with QEMU internals. I usually ballpark estimate it as "about as much work as it would take to port the kernel to the hardware"; though it depends a bit on how much functionality you/your guest need to have working, and whether QEMU already has a model of the SoC your hardware is using.
To answer the general "what's the best way to debug a guest image that doesn't boot", the best thing is usually to connect an arm-aware gdb to QEMU's gdbstub. This gives you debug access broadly similar to a JTAG debug connection to real hardware and is probably sufficient to let you find out where the guest is crashing. QEMU also has some debug logging options under the '-d' option, although the logging is partially intended to assist in debugging problems within QEMU itself and can be a bit tricky to interpret.
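A gdbstub session along those lines might look like this untested sketch (binary names vary by distro; some ship arm support in plain gdb rather than gdb-multiarch):

```shell
# Hedged sketch: start QEMU halted with its gdbstub listening, then attach.
# '-s' is shorthand for '-gdb tcp::1234'; '-S' freezes the CPU at startup.
qemu-system-aarch64 -machine virt -cpu cortex-a53 -nographic \
    -kernel u-boot.bin -s -S &

# In another terminal: attach an Arm-aware gdb. The ELF file supplies symbols;
# 0x40000000 is where the 'virt' board places images loaded with -kernel.
gdb-multiarch u-boot \
    -ex 'target remote localhost:1234' \
    -ex 'break *0x40000000' \
    -ex 'continue'
```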

Related

Running NIOS2 on QEMU

I found QEMU's NIOS2 support documented at https://wiki.qemu.org/Documentation/Platforms/Nios2
I have downloaded Intel's toolchain from their website: https://www.intel.com/content/www/us/en/programmable/products/boards_and_kits/dev-kits/altera/kit-niosii-2s60.html
I have a few questions:
Does the NIOS2 IP in QEMU match Intel's NIOS IP?
What toolchain do you use to compile and run it in QEMU? Is it the same toolchain as provided on Intel's website?
How do I generate firmware code and run it on the NIOS2 over QEMU? The wiki says:
qemu-system-nios2 -M 10m50-ghrd -kernel <kernel> -dtb <dtb> -nographic
How to generate dtb file for it?
Do we need any artifacts created by Quartus/EDS to run QEMU, other than the compiled binary? (DTB - board specification?)
Do we need to run it with specific QEMU parameters/arguments?
Do you have code examples for NIOS using its peripherals?
Basically, I didn't find any documentation/examples about how to use the NIOS2 in QEMU. Can you help with some additional info?
Even a basic "hello world" (compile and run in QEMU) would be great…
UPDATE: the most up-to-date answer to this question may be to analyse the linux console nios test at https://gitlab.com/qemu-project/qemu/-/blob/master/tests/acceptance/boot_linux_console.py#L1029 (or of course contact a maintainer). The kernel image from advent calendar 2018 day 14 runs great. It looks like it can all be done with buildroot.
My comments started bearing fruit, so I'll try to put a partial answer together. I haven't gotten this to work yet, but maybe this can be helpful to others who get further.
NOTE: If you just want to run a single nios2 binary, you can pass it straight to qemu-nios2; qemu-system-nios2 is for running Linux.
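A hedged sketch of that user-mode route (untested; the toolchain path assumes the bootlin tarball mentioned later in this answer, unpacked in the current directory):

```shell
# Hedged sketch: cross-compile a static "hello world" and run it directly
# under user-mode emulation, no kernel or DTB needed. Toolchain path is an
# assumption based on the bootlin tarball used elsewhere in this answer.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello world\n"); return 0; }
EOF
./nios2--glibc--stable-2020.08-1/bin/nios2-linux-gcc -static -o hello hello.c
qemu-nios2 ./hello    # user-mode qemu runs the single binary directly
```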
I believe what QEMU implements is the functionality rather than the intellectual property itself; it would be a bug if the behavior mismatched, though I do not know whether it does. Since IP has been mentioned: please remember that open source projects are generally run by a handful of vulnerable, caring devs who usually have no legal team if ownership of intellectual property is challenged. If there's an issue, it would be polite to refer the concerned party to https://eff.org/, who often provide legal representation in such matters.
I expect that any nios2 toolchain works. Here's a toolchain from a quick internet search that led me to bootlin.com. It appears to include instructions on how to reproduce it from source.
See 4
Here is what I have so far for firmware generation:
# set up a toolchain (note: this old step is redundant with buildroot, lower down, which also installs a toolchain and even builds a kernel if asked)
wget https://toolchains.bootlin.com/downloads/releases/toolchains/nios2/tarballs/nios2--glibc--stable-2020.08-1.tar.bz2
tar -jxvf nios2--glibc--stable-2020.08-1.tar.bz2
# get kernel sources (pass --depth 1 to speed up)
git clone https://github.com/altera-opensource/linux-socfpga.git
# build kernel and device tree
cd linux-socfpga
make ARCH=nios2 CROSS_COMPILE=$(pwd)/../nios2--glibc--stable-2020.08-1/bin/nios2-linux- 10m50_defconfig 10m50_devboard.dtb vmlinux -j5
cd ..
# kernel is now at linux-socfpga/vmlinux
# device tree is now at linux-socfpga/arch/nios2/boot/dts/10m50_devboard.dtb
# set up buildroot to build a root image
git clone https://github.com/buildroot/buildroot.git
cd buildroot
# configure for qemu nios2
make qemu_nios2_10m50_defconfig
# build root image
PERL_MM_OPT= LDFLAGS= CPPFLAGS= LD_LIBRARY_PATH= make
cd ..
# rootfs images are now in buildroot/output/images/
I'm afraid I'm just a visitor and I don't know what Quartus/EDS is or what compiled binary you are referring to.
The qemu command line appears to be qemu-system-nios2 -M <machine> -kernel <kernel file> -dtb <dtb file> <rootfs image file>. The example machine is 10m50-ghrd which we built the kernel for above, and this may be the only one.
Not yet! I'll try to update this answer if I get further. Feel free to edit it if you get further.

How to run u-boot on QEMU (raspi2)?

I am trying to run u-boot on QEMU, but when I start QEMU it gives no output. Why doesn't this work, and how can I debug it to find out the reason?
This is what I tried:
Install Ubuntu 18.04 WSL2 on Windows.
Compile u-boot for the Raspi2
sudo apt install make gcc bison flex
sudo apt-get install gcc-arm-none-eabi binutils-arm-none-eabi
export CROSS_COMPILE=arm-none-eabi-
export ARCH=arm
make rpi_2_defconfig all
Start QEMU
qemu-system-arm -M raspi2 -nographic -kernel ./u-boot/u-boot.bin
I also tried QEMU on the Windows side, and the result is the same.
PS C:\WINDOWS\system32> qemu-system-arm.exe -M raspi2 --nographic -kernel E:\u-boot\u-boot.bin
QEMU then gave no output, and even Ctrl+C could not stop the process.
Unfortunately this is an incompatibility between the way that u-boot expects to be started on the raspberry pi and the ways of starting binaries that QEMU supports for this board.
QEMU supports two ways of starting guest code on Arm in general:

Linux kernels: these boot with whatever the expected boot protocol for the kernel on this board is. For raspi that will be "start the primary CPU, but put the secondaries in the pen waiting on the mbox". Effectively, QEMU emulates a very minimal bit of the firmware, just enough to boot Linux.

Not Linux kernels: these are booted as if they were the first thing to execute on the raw hardware, which is to say that all CPUs start executing at once, and it is the job of the guest code to provide whatever penning of secondary CPUs it wants to do. That is, your guest code has to do the work of the firmware here, because it effectively is the firmware.

We assume that you're a Linux kernel if you're a raw image or a suitable uImage. If you're an ELF image we assume you're not a Linux kernel. (This is not exactly ideal, but we're to some extent lumbered with it for backwards-compatibility reasons.)
On the raspberry pi boards, the way the u-boot binary expects to be started is likely to be "as if the firmware launched it", which is not exactly the same as either of the two options QEMU supports. This mismatch tends to result in u-boot crashing (usually because it is not expecting the "all CPUs run at once" behaviour).
A fix would require either changes to u-boot so it can handle being launched the way QEMU launches it, or changes to QEMU to support more emulation of the firmware of this board (which QEMU upstream would be reluctant to accept).
An alternative approach if it's not necessary to use the raspi board in particular would be to use some other board like the 'virt' board which u-boot does handle in a way that allows it to boot on QEMU. (The 'virt' board also has better device support; for instance it can do networking and USB devices, which 'raspi' and 'raspi2' cannot at the moment.)
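Following that suggestion, an untested sketch of the 'virt' route with the same toolchain as above might be:

```shell
# Hedged sketch: build U-Boot for QEMU's 32-bit 'virt' board and boot it
# there instead of raspi2; this is a launch path u-boot supports on QEMU.
export CROSS_COMPILE=arm-none-eabi-
make qemu_arm_defconfig
make -j"$(nproc)"
qemu-system-arm -M virt -nographic -kernel u-boot.bin
```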

qemu-system-aarch64 exit from within the guest system

I was wondering if there is a way to exit QEMU from within the guest system in the aarch64 version. For instance, x86 has the isa-debug-exit device, which is used for this purpose.
Any ideas?
Cheers
The general answer to this question is "do whatever you would do on the real hardware to cause a power-off". The details of this depend on which machine QEMU is emulating. For the aarch64 "virt" board, you can use the emulated PSCI firmware interface to request a powerdown using the SYSTEM_OFF function.
The PSCI API documentation is here: http://infocenter.arm.com/help/topic/com.arm.doc.den0022d/Power_State_Coordination_Interface_PDD_v1_1_DEN0022D.pdf
For debug/test purposes you might also be interested in the semihosting API (https://developer.arm.com/docs/dui0003/b) which has a SYS_EXIT function, but some caveats: for QEMU you can only use semihosting if you enable it via the -semihosting commandline argument, and only from kernel mode in the guest, and you must only use it if you absolutely trust the guest code, because it provides access to functions that allow the guest to read and write any host file. But for explicitly trusted small test programs it can be a nice way to do easy debug printing and exit with a given exit status.
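As a hedged illustration (bare-metal guest code, not tested here): 0x84000008 is the SYSTEM_OFF function ID from the PSCI spec, and whether the call goes through hvc or smc depends on the conduit your device tree advertises, so check that before copying this.

```c
/* Hedged sketch of a PSCI SYSTEM_OFF request from an aarch64 guest.
 * 0x84000008 is the SYSTEM_OFF function ID (PSCI spec); the hvc-vs-smc
 * choice depends on the conduit the firmware/device tree advertises. */
static inline void psci_system_off(void)
{
    register unsigned long x0 asm("x0") = 0x84000008UL; /* SYSTEM_OFF */
    asm volatile("hvc #0" : "+r"(x0) : : "memory");     /* or "smc #0" */
    /* On success this call does not return: QEMU powers the VM off. */
}
```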

Booting QEMU with u-boot solely from disk image

QEMU allows booting a VM with u-boot passed to the -kernel option, but this requires an additional file (u-boot itself) to be available on the host system. My goal is to load a u-boot which is stored inside a QEMU disk image. So I am expecting to do something like this:
qemu-system-arm -kernelblocks 1:128 -device sdhci-pci -drive format=raw,file=./build/disk.img,if=none,id=disk,cache=writeback,discard=unmap -device sd-card,drive=disk
where -kernelblocks is an imaginary option telling QEMU to load u-boot from specific blocks of the QEMU disk image instead of file on the host system.
So the question is: how can I get QEMU to load u-boot from the QEMU disk image?
As an alternative, I would also accept an answer showing how to load a file from a file system on the QEMU disk image.
For my task I am at liberty to pass any options to QEMU, but I cannot have any files on the host system other than the QEMU disk image.
Your command line doesn't specify a particular machine model, which isn't valid for qemu-system-arm. The below rules of thumb may or may not apply for a particular QEMU machine model.
For running QEMU guest code you can generally either:
specify a guest kernel directly with -kernel, which then boots and knows how to talk to the emulated disk
specify a guest BIOS using the -bios option (or some -pflash option); the bios then boots and knows how to read the kernel off the emulated disk. The "bios" here could be u-boot or UEFI or something similar.
The first is a convenient shortcut; the second is like how the hardware actually boots. (qemu-system-x86_64 works like the -bios option approach; you just don't notice because the -bios option is enabled by default and loads the bios image from a system library directory.) Some board models don't support -bios.
QEMU doesn't have any way of saying "load the guest image from this block of the disk". That would be getting too much into the details of hand-emulating a guest BIOS, and we'd prefer to provide that functionality by just running an actual guest BIOS blob.
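The -bios route described above might look like this untested sketch (file names are placeholders, and the u-boot binary must be built for the machine in question):

```shell
# Hedged sketch: boot u-boot as the firmware with -bios; u-boot then loads
# the kernel from the emulated disk itself, so no -kernel option is needed.
qemu-system-arm -M virt -nographic -bios u-boot.bin \
    -drive if=none,format=raw,file=disk.img,id=disk \
    -device virtio-blk-device,drive=disk
```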

I have a kernel image for QEMU but I don't know what machine it is configured to emulate (for example vexpress-a9 or virt)

I have a kernel image for QEMU but don't know what machine it is configured to emulate (for example vexpress-a9 or virt).
How do I find the configured machine from the kernel image?
If you have a working QEMU command line that goes with the kernel, you can look at which -M/--machine option it uses. Otherwise, if you have the kernel .config you can see which machines the kernel was built with support for. Otherwise, you need to ask whoever it was that built the kernel what they intended it to be used for, or just throw it away and get or build one that you know does work.
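As a small self-contained illustration of the .config route (the config fragment below is fabricated for demonstration):

```shell
# Hedged sketch: a kernel's .config records which machines it supports.
# Fabricate an example fragment, then filter for the board options.
cat > example.config <<'EOF'
CONFIG_ARCH_VEXPRESS=y
# CONFIG_ARCH_VIRT is not set
CONFIG_SERIAL_AMBA_PL011=y
EOF
grep '^CONFIG_ARCH_' example.config   # -> CONFIG_ARCH_VEXPRESS=y
```

If you only have the built image rather than the source tree, the kernel's scripts/extract-ikconfig can sometimes recover the .config, provided the kernel was built with CONFIG_IKCONFIG.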