Booting Kernel in QEMU - PFLASH: Possible BUG - Write block confirm

Trying to boot a Samsung S7 Edge Kernel 3.18.x using QEMU
qemu-system-aarch64 -kernel s7boot/boot.emmc.win-zImage -machine virt -cpu cortex-a5
Unfortunately, I get this error:
PFLASH: Possible BUG - Write block confirm
And QEMU exits.
What could be the cause and the solution to this?
Thanks,
P.S. Related to this:
Boot Samsung S7 Edge extracted Kernel from Device in Android Emulator

This behavior was recently fixed in QEMU (in release 4.0.0) with the following commit.
The commit message says that aborting the FLASH interface command "write to buffer" is not supported by QEMU's pflash_cfi01 model. QEMU 4.0.0 and newer will still print the message but will not terminate emulation. One possible solution to your issue is to update your QEMU version or to backport the fix; another is to find the place where the kernel tries to abort a write-to-buffer command and see if that can be avoided.
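A quick shell sketch of that version check (assuming a GNU userland, where `sort -V` does version-aware ordering; the awk field used to pull the number out of `--version` output is a guess that may need adjusting for your build):

```shell
# version_ok NEEDED ACTUAL: succeeds when ACTUAL >= NEEDED
version_ok() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# On a real system you would query QEMU itself, e.g.:
#   installed=$(qemu-system-aarch64 --version | awk 'NR==1 {print $4}')
installed=3.1.0   # example value for illustration
if version_ok 4.0.0 "$installed"; then
    echo "PFLASH message is non-fatal in $installed"
else
    echo "$installed is older than 4.0.0: update or backport the fix"
fi
```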

Related

qemu-system-aarch64: ../accel/tcg/cpu-exec.c:681: cpu_loop_exec_tb: Assertion `icount_enabled()' failed

I compiled QEMU 5.2.0 on a mips64el host machine. When I run qemu-system-aarch64 to start a Linux OS, I get the following problem:
qemu-system-aarch64: ../accel/tcg/cpu-exec.c:681: cpu_loop_exec_tb: Assertion `icount_enabled()' failed
Has anyone hit the same problem? Please help me, thanks a lot!
This shouldn't happen -- it is a bug in QEMU. You should confirm that you can reproduce it on a QEMU built from the head of upstream git, and then report it in the QEMU bug tracker at https://gitlab.com/qemu-project/qemu/-/issues with the full instructions on how to reproduce the failure, including the full QEMU command line and any image files that are needed.
It would also be helpful to check whether the problem reproduces on a different host architecture (e.g. x86-64) -- it may be that the bug is specific to running on MIPS hosts (we don't have good MIPS hosts that we can run CI testing on, so this setup gets a lot less testing).
The -icount flag is buggy for a lot of QEMU architectures. I could not find whether it is still supported in QEMU. (As pointed out by Peter Maydell, it is properly supported.) Try removing that flag for now and check whether that works.
Also check if your device is supported here.

Qemu Virt Machine from flash device image

I have been developing the OS for a prototype device directly on the hardware. Unfortunately, flashing the OS each time and then debugging the issues is a very manual and buggy process.
I'd like to switch to developing the OS in QEMU, so that I can be sure that the OS is loading correctly before going through the faff of programming the device for real. This will come in handy later for Continuous Integration work.
I have a full copy of the NVM device that is generated from my build process. This is a known working image I'd like to run in QEMU as a start point. This is ready to then be JTAG'd onto the device. The partition layout is:
P0 - loader - Flash of IDBLoader from rockchip loader binaries
P1 - Uboot - Flash of Uboot
P2 - trust - Flash of Trust image for rockchip specific loader
P3 - / - Root partition with Debian based image and packages required for application
P4 - data partition - Application Data
I have not changed anything with the Rockchip partitions (P0 - P2) apart from the serial console settings. When trying to boot the image though, nothing happens. There is no output at all, but the VM shows as still running. I use the following command to run it:
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
-kernel u-boot-nodtb.bin \
-drive format=raw,file=image.img \
-boot c -serial stdio
I have no error information to go on to understand what is going on. Where can I get more information or debug output?
QEMU cannot emulate arbitrary hardware. You will have to compile U-Boot to match the hardware that QEMU emulates, e.g. using make qemu_arm64_defconfig. The OS must also provide drivers for QEMU's emulated hardware.
If you want to emulate the complete hardware to debug drivers, Renode (https://renode.io/) is a good choice.
For anyone else trying to figure this out, I found good resources here:
https://translatedcode.wordpress.com/2017/07/24/installing-debian-on-qemus-64-bit-arm-virt-board/
and
https://azeria-labs.com/emulate-raspberry-pi-with-qemu/
Looking at the information though, you need to extract the kernel from your image and provide that to the qemu command line as an argument. You'll also need to append an argument telling the system which partition to use as a root drive.
My final command line for starting the machine looks like this:
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
-drive format=raw,file=image.img,id=hd \
-boot c -serial stdio \
-kernel <kernelextracted> -append "root=fe04"
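On the kernel-extraction step, here is a hedged sketch of carving a partition out of a raw image with dd. The sector numbers and the KERNELDATA marker below are invented for the demonstration; on a real image you would take the Start and Sectors values from fdisk -l image.img:

```shell
# Toy stand-in for image.img: a 3 MiB raw image whose "partition"
# starts at sector 2048 (offsets here are examples only --
# read the real ones from your image with: fdisk -l image.img)
dd if=/dev/zero of=image.img bs=512 count=6144 status=none
printf 'KERNELDATA' | dd of=image.img bs=512 seek=2048 conv=notrunc status=none

# Carve the partition out of the image (512-byte sectors assumed);
# on the real image, use the Start/Sectors values fdisk reported:
dd if=image.img of=part.bin bs=512 skip=2048 count=4096 status=none

# The kernel usually lives inside the rootfs (e.g. /boot/vmlinuz-*);
# loop-mount part.bin, or use debugfs, to copy it out for -kernel.
head -c 10 part.bin    # -> KERNELDATA
```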
Different Arm boards can be significantly different from one another in where they put their hardware, including the basic hardware required for bootup (UART, RAM, interrupt controller, etc). It is therefore pretty much expected that if you take a piece of low-level software like u-boot or a Linux kernel that was compiled to run on one board and try to run it on a different one, it will fail to boot. Generally it won't be able to output anything because it won't even have been able to find the UART.

Linux kernels can be compiled to be generic and include drivers for a wider variety of hardware, so if you have that sort of kernel it can be booted on a different board type: it will use a device tree blob, provided either by you or autogenerated by QEMU for the 'virt' board, to figure out what hardware it's running on and adapt to it. But kernels compiled for a specific embedded target are often built with only the device drivers they need, and that kind of kernel can't boot on a different system.
There are broadly speaking two paths you can take:
(1) Build the guest for the board you're emulating (here the 'virt' one). u-boot and Linux both have support for QEMU's 'virt' board. This may or may not be useful for what you're trying to do -- you would be able to test any userspace-level code that doesn't care what hardware it runs on, but obviously not anything that's specific to the real hardware you're targeting.
(2) In theory you could add emulation support to QEMU for the hardware you're trying to run on. However this is generally a fair amount of work and isn't trivial if you're not already familiar with QEMU internals. I usually ballpark estimate it as "about as much work as it would take to port the kernel to the hardware"; though it depends a bit on how much functionality you/your guest need to have working, and whether QEMU already has a model of the SoC your hardware is using.
To answer the general "what's the best way to debug a guest image that doesn't boot", the best thing is usually to connect an arm-aware gdb to QEMU's gdbstub. This gives you debug access broadly similar to a JTAG debug connection to real hardware and is probably sufficient to let you find out where the guest is crashing. QEMU also has some debug logging options under the '-d' option, although the logging is partially intended to assist in debugging problems within QEMU itself and can be a bit tricky to interpret.
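A sketch of the usual gdbstub incantation (the file names and the use of gdb-multiarch are assumptions for your setup; -S freezes the CPUs at startup and -s is shorthand for -gdb tcp::1234):

```shell
# Terminal 1: start QEMU halted, with a gdb stub listening on tcp::1234
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
    -kernel u-boot-nodtb.bin \
    -drive format=raw,file=image.img \
    -serial stdio -S -s

# Terminal 2: attach an Arm-aware gdb (gdb-multiarch on Debian/Ubuntu),
# pointing it at the ELF with symbols if you have one
gdb-multiarch u-boot \
    -ex 'target remote localhost:1234' \
    -ex 'continue'
```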

How to run u-boot on QEMU(raspi2)?

I am trying to run u-boot on QEMU. But when QEMU starts it outputs nothing, so why doesn't this work, and how can I debug it to find out the reason?
This is what I tried:
Install Ubuntu 18.04 WSL2 on Windows.
Compile u-boot for the Raspi2
sudo apt install make gcc bison flex
sudo apt-get install gcc-arm-none-eabi binutils-arm-none-eabi
export CROSS_COMPILE=arm-none-eabi-
export ARCH=arm
make rpi_2_defconfig all
Start QEMU
qemu-system-arm -M raspi2 -nographic -kernel ./u-boot/u-boot.bin
I also tried QEMU on the Windows side, and the result was the same.
PS C:\WINDOWS\system32> qemu-system-arm.exe -M raspi2 --nographic -kernel E:\u-boot\u-boot.bin
Then QEMU gave no output, and even Ctrl+C could not stop the process.
Unfortunately this is an incompatibility between the way that u-boot expects to be started on the raspberry pi and the ways of starting binaries that QEMU supports for this board.
QEMU supports two ways of starting guest code on Arm in general:
Linux kernels: these boot with whatever the expected boot protocol for the kernel on this board is. For raspi that will be "start the primary CPU, but put the secondaries in the pen waiting on the mbox". Effectively, QEMU emulates a very minimal bit of the firmware, just enough to boot Linux.
Not Linux kernels: these are booted as if they were the first thing to execute on the raw hardware, which is to say that all CPUs start executing at once, and it is the job of the guest code to provide whatever penning of secondary CPUs it wants to do. That is, your guest code has to do the work of the firmware here, because it effectively is the firmware.
We assume that you're a Linux kernel if you're a raw image or a suitable uImage. If you're an ELF image we assume you're not a Linux kernel. (This is not exactly ideal, but we're to some extent lumbered with it for backwards-compatibility reasons.)
On the raspberry pi boards, the way the u-boot binary expects to be started is likely to be "as if the firmware launched it", which is not exactly the same as either of the two options QEMU supports. This mismatch tends to result in u-boot crashing (usually because it is not expecting the "all CPUs run at once" behaviour).
A fix would require either changes to u-boot so it can handle being launched the way QEMU launches it, or changes to QEMU to support more emulation of the firmware of this board (which QEMU upstream would be reluctant to accept).
An alternative approach if it's not necessary to use the raspi board in particular would be to use some other board like the 'virt' board which u-boot does handle in a way that allows it to boot on QEMU. (The 'virt' board also has better device support; for instance it can do networking and USB devices, which 'raspi' and 'raspi2' cannot at the moment.)
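A sketch of that alternative path (qemu_arm64_defconfig is a real config in mainline U-Boot; the cross-compiler prefix depends on what your distro provides):

```shell
# Build U-Boot for QEMU's generic 'virt' board instead of raspi2
export CROSS_COMPILE=aarch64-linux-gnu-
make qemu_arm64_defconfig
make -j"$(nproc)"

# Boot the result on the board it was configured for
qemu-system-aarch64 -M virt -cpu cortex-a53 -nographic \
    -kernel u-boot.bin
```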

Qemu translator raises a FATAL: kernel too old

I built QEMU from source with the arm-softmmu and arm-linux-user targets. I have a simple binary compiled for Arm, but when I launch it with the QEMU translator I get "FATAL: kernel too old". I'm running QEMU on an x86_64 host with kernel 2.6.32. What could be the problem?
I got this error because I was running:
qemu-system-x86_64 -kernel vmlinux -initrd rootfs.cpio.gz
while it should be:
qemu-system-x86_64 -kernel bzImage -initrd rootfs.cpio.gz
where bzImage is located at arch/x86/boot/bzImage.
This error happens because glibc has a kernel version check to avoid incompatibilities. The message comes from glibc, not QEMU.
Also in user mode, you might try to work around the problem with the -r option which artificially sets a different uname kernel version:
qemu-x86_64 -r 4.18
This might work but, of course, is not necessarily reliable: QEMU presumably sets its reported version more or less correctly based on the syscalls and system interfaces it implements, so your program could, through glibc, rely on some interface that is not yet implemented.
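To see the expectation baked into a binary (assuming file(1) is installed), the ELF note that glibc checks against shows up in file's output as "for GNU/Linux X.Y.Z":

```shell
# The "for GNU/Linux X.Y.Z" part of file's output is the minimum
# kernel version the binary's C library was built to require;
# glibc aborts with "FATAL: kernel too old" when the running
# (or, under qemu-user, the reported) kernel is older than that.
file ./myprog   # myprog is a placeholder for your Arm binary
# illustrative output: "myprog: ELF 32-bit LSB executable, ARM, ...
#                       for GNU/Linux 3.2.0, dynamically linked ..."
```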
The correct solution is to get a cross-compiler that more closely matches what QEMU userland gives you.
The C library you used to build the binary is much newer than your guest kernel.

cuda-gdb exits with "[1] stopped" when it hits a kernel call

I'm pretty new to CUDA and flying a bit by the seat of my pants here...
I'm trying to debug my CUDA program on a remote machine I don't have admin rights on. I compile my program with nvcc -g -G and then try to debug it with cuda-gdb. However, as soon as gdb hits a call to a kernel (doesn't even have to enter it, and it doesn't happen in host code), I get:
(cuda-gdb) run
Starting program: /path/to/my/binary/cuda_clustered_tree
[Thread debugging using libthread_db enabled]
[1]+ Stopped cuda-gdb cuda_clustered_tree
cuda-gdb then dumps me back to my terminal. If I try to run cuda-gdb again, I get
An instance of cuda-gdb (pid 4065) is already using device 0. If you believe
you are seeing this message in error, try deleting /tmp/cuda-dbg/cuda-gdb.lock.
The only way to recover is to kill -9 cuda-gdb and cuda_clustered_ (I assume the latter is part of my binary).
This machine has two GPUs and is running CUDA 4.1 (I believe -- there were a lot of versions installed, but that's the one I set PATH and LD_LIBRARY_PATH to), and it compiles and runs deviceQuery and bandwidthTest fine.
I can provide more info if need be. I've searched everywhere I could find online and found no help with this.
Figured it out! Turns out, cuda-gdb hates csh.
If you are running csh, it will cause cuda-gdb to exhibit the above anomalous behavior. Even running bash from within csh, then running cuda-gdb, I still saw the behavior. You need to start your shell as bash, and only bash.
On the machine, the default shell was csh, but I use bash. I wasn't allowed to change it directly, so I added 'exec /bin/bash --login' to my .login script.
So even though I was running bash, because it was started by csh, cuda-gdb would exhibit the above anomalous behavior. Getting rid of the 'exec' command, so that I was running csh directly with nothing on top, still showed the behavior.
In the end, I had to get IT to change my shell to bash directly (after much patient troubleshooting by them.) Now it works as intended.