QEMU - support for mcimx6ul-evk machine

Lately, I have been trying to get Linux running on the QEMU mcimx6ul-evk machine using Buildroot. I have generated all the images required for my target machine. These are the steps I followed to build the images with Buildroot (version: buildroot-2016.11.1):
$ make freescale_imx6ulevk_defconfig
$ make nconfig
I selected the required packages and then built the images with:
$ make
In "output/images" I see the following images generated:
boot.vfat, imx6ul-14x14-evk.dtb, rootfs.ext2, rootfs.ext4, rootfs.tar, sdcard.img, u-boot.bin, u-boot.imx, zImage.
I am referring to this article to replicate the same for mcimx6ul-evk. I ran the command below to boot Linux on my target machine:
$ qemu-system-arm -M mcimx6ul-evk -m 512M -kernel output/images/zImage -monitor stdio -drive file=output/images/rootfs.ext2,format=raw
When I run the above command with -d int, I get the following exception logs:
QEMU 4.1.0 monitor - type 'help' for more information
(qemu) Exception return from AArch32 hyp to svc PC 0x80010088
Taking exception 11 [Hypervisor Call]
...from EL1 to EL2
...with ESR 0x12/0x4a000000
Exception return from AArch32 hyp to svc PC 0x800134dc
Taking exception 11 [Hypervisor Call]
...from EL1 to EL2
...with ESR 0x12/0x4a000000
Exception return from AArch32 hyp to svc PC 0x800134dc
Taking exception 11 [Hypervisor Call]
...from EL1 to EL2
...with ESR 0x12/0x4a000000
Exception return from AArch32 hyp to svc PC 0x80be4d1c
Taking exception 11 [Hypervisor Call]
...from EL1 to EL2
...with ESR 0x12/0x4a000000
Exception return from AArch32 hyp to svc PC 0x80008034
AArch32 mode switch from svc to irq PC 0x800118b4
AArch32 mode switch from irq to abt PC 0x800118b8
AArch32 mode switch from abt to und PC 0x800118c4
AArch32 mode switch from und to fiq PC 0x800118d0
AArch32 mode switch from fiq to svc PC 0x800118dc
Taking exception 4 [Data Abort]
...from EL1 to EL1
...with ESR 0x25/0x9600003f
...with DFSR 0x5 DFAR 0x0
Exception return from AArch32 abt to svc PC 0x800130e0
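For reference, the Exception Class (EC) is bits [31:26] of the ESR; this small sketch decodes the values seen in the trace above:

```shell
# Sketch: extract the Exception Class (EC), bits [31:26], from an ESR value.
esr_ec() { printf '0x%x\n' $(( ($1 >> 26) & 0x3f )); }

esr_ec 0x4a000000   # prints 0x12 (HVC instruction executed in AArch32)
esr_ec 0x9600003f   # prints 0x25 (Data Abort without a change in exception level)
```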
I don't see Linux boot logs on the serial0 console. Can someone please help me resolve this? Or am I missing something here?
Note: QEMU (version 4.1.0) was built for the Arm target using the commands below.
$ ./configure --target-list=arm-softmmu --enable-sdl
$ make
$ sudo make install

I encountered this issue as well. Here are the steps I took to investigate it:
Compile the kernel with debugging info enabled:
CONFIG_DEBUG_INFO=y
Start qemu with gdb debugging:
$ qemu-system-arm -M mcimx6ul-evk -cpu cortex-a7 -m 512M -kernel arch/arm/boot/zImage -nographic -dtb arch/arm/boot/dts/imx6ul-14x14-evk.dtb -s -S -d int
Take note of these qemu parameters:
-s : -gdb tcp::1234 for gdb debugging
-S : do not start the CPU at startup; execution is stopped at the startup code until you issue the 'c' (continue) command in gdb.
Debug with gdb-multiarch:
$ gdb-multiarch
(gdb) target remote localhost:1234
(gdb) file vmlinux
(gdb) b start_kernel
Breakpoint 1 at 0x80e008cc: file init/main.c, line 482.
(gdb) c
Continuing.
Breakpoint 1, start_kernel () at init/main.c:482
482 {
(gdb)
So the kernel was in fact starting: it hit the breakpoint.
Add kernel command-line entries to see more logs:
$ qemu-system-arm -M mcimx6ul-evk -cpu cortex-a7 -m 512M -kernel arch/arm/boot/zImage -nographic -dtb arch/arm/boot/dts/imx6ul-14x14-evk.dtb -append "console=ttymxc0 loglevel=8 earlycon earlyprintk"
Based on the logs, I disabled the drivers that were causing the kernel to crash (they are not supported by the QEMU mcimx6ul-evk machine). You can disable them from menuconfig, then recompile the kernel.
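For illustration only (the symbol names depend on which driver your earlycon log shows faulting; CONFIG_FEC, the i.MX Ethernet controller driver, is just one plausible example), a driver disabled from menuconfig shows up in .config like this:

```
CONFIG_DEBUG_INFO=y
# CONFIG_FEC is not set
```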
Note that I did not use buildroot. This was my setup:
Repository: https://source.codeaurora.org/external/imx/linux-imx
Config: imx_v7_defconfig
Toolchain: arm-linux-gnueabihf-
Host: WSL2

Related

CPU_ON on QEMU ARMv8A using PSCI from EL2/EL3

I have a 4-core ARMv8-A (Cortex-A53) system emulated on QEMU 6.2.0. The primary core (CPU#0) is running and I am able to debug it using GDB. I wanted to bring up the other cores, and for that I used the following GDB commands. From the different experiments conducted, my conclusion is that only CPU#0 is running and the other CPUs never start.
(gdb) thread 3
(gdb) info thread
Id Target Id Frame
1 Thread 1.1 (CPU#0 [running]) 0x0000000040000008 in ?? ()
2 Thread 1.2 (CPU#1 [halted ]) 0x0000000040000000 in ?? ()
* 3 Thread 1.3 (CPU#2 [halted ]) 0x0000000040000000 in ?? ()
4 Thread 1.4 (CPU#3 [halted ]) 0x0000000040000000 in ?? ()
(gdb) where
#0 0x0000000040000000 in ?? ()
Exploring further, I came across this thread about turning on a CPU using PSCI: Start a second core with PSCI on QEMU.
I also came across the SMC calling convention related to this. I have gone through the documentation of SMCCC and PSCI.
I am implementing a minimal hypervisor. The guest is Linux, and it boots. The Linux boot log shows
[ 0.072225] psci: failed to boot CPU1 (-22)
...
Debugging the code further revealed that Linux issues a synchronous exception to the hypervisor, with the necessary parameters as per the specification, using the HVC instruction.
If my understanding is correct, the PSCI implementation is vendor specific; that is, the code running at EL2/EL3 has to use some vendor-provided mechanism to turn on the CPU (core). Is this correct? And on a system without EL3, how does the code running at EL2 turn on the CPU?
My QEMU command line is given below:
$ qemu-system-aarch64 -machine virt,gic-version=2,virtualization=on -cpu cortex-a53 -nographic -smp 4 -m 4096 -kernel hypvisor.elf -device loader,file=linux-5.10.155/arch/arm64/boot/Image,addr=0x80200000 -device loader,file=1gb_4core.dtb,addr=0x88000000
Any hint is greatly appreciated.
When the guest is not booting at EL3, the QEMU virt machine implements its own internal PSCI emulation. This is described in the DTB file passed to the guest, and will say that PSCI calls should be done via the SMC instruction (if the guest is starting at EL2) or the HVC instruction (if the guest is starting at EL1). Effectively, QEMU is emulating an EL3 firmware for you.
(If the guest does boot at EL3, then QEMU assumes that the EL3 guest code will be implementing PSCI; in that case it provides some simple emulated hardware that does the power on/off operation and which the EL3 guest's PSCI implementation will manipulate as part of its implementation of the CPU_ON and CPU_OFF calls. But that's not the case you're in.)
If you are running a hypervisor in your guest at EL2, then it is your hypervisor's job to implement PSCI for your EL1 guests (it's unlikely that you want to allow an EL1 guest to be able to directly shut down a CPU under your hypervisor's feet, for instance). So you want to pass your EL1 guest a different DTB that describes the view an EL1 guest has of its emulated hardware, and which says "PSCI via HVC". Then your hypervisor's HVC handling should emulate PSCI. Separately, your hypervisor's bootup code should be using the real PSCI-via-SMC to power up the secondary CPUs as part of its bootup sequence.
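As a sketch of the DTB point above, the PSCI node you would describe to the EL1 guest follows the standard devicetree binding (the values here are illustrative, not taken from a real QEMU-generated DTB):

```
psci {
        compatible = "arm,psci-0.2";
        method = "hvc";  /* EL1 guest issues PSCI calls via HVC, which traps to the hypervisor */
};
```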

KVMs not running after qemu-kvm upgrade in CentOS 8.1, RHEL 8.1

This is the error I encountered when I updated my CentOS 8.1/RHEL 8.1 machines; all the KVM guests show the error below:
error: internal error: process exited while connecting to monitor: 2020-06-09T12:41:10.410896Z qemu-kvm: -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,vmport=off,smm=on,dump-guest-core=off: unsupported machine type
Use -machine help to list supported machines
Note: The problem is that the machine type Q35 is not correctly configured in your Kernel-based Virtual Machines running on RHEL 8 / CentOS 8.
[Step 1:] cat /etc/libvirt/qemu/*.xml | grep \<name'\| machine'
This will list the machine type in all of the KVMs installed.
[Output Snippet]
machine pc-q35-rhel8.1.0
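To illustrate what that grep is matching, here is the same kind of pattern run against a single inlined sample line (hypothetical XML, not your real domain file):

```shell
# Match the machine='...' attribute the way Step 1 does, against a sample line.
xml="<type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>"
echo "$xml" | grep -o "machine='[^']*'"   # prints machine='pc-q35-rhel8.1.0'
```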
[Step 2:] cd /etc/libvirt/qemu; ll
This will list all the XML files associated with your KVMs.
[Step 3:] In /etc/libvirt/qemu, use virsh edit <KVM name> ###Don't include .xml###
Navigate to the machine entry.
[Output Snippet]
<os>
<type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
<loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/Loadbalancer_VARS.fd</nvram>
<boot dev='hd'/>
</os>
Change machine='pc-q35-rhel8.1.0' to machine='q35'
Shift+ZZ to save and quit
[Step 4:]
systemctl restart libvirtd && systemctl status -l libvirtd
virsh list --all
virsh start --domain <KVM>
Check the status of your running KVMs
virsh list --state-running
Now the issue should be resolved and your KVMs should be humming away.
Note, though, that if you head back in and check the configuration XML with virsh edit, you'll see that q35 is converted to pc-q35-rhel7.6.0 automatically.
But this shouldn't be an issue.
Cheers :)

Is LLDB compatible with gdbserver (for debugging cross compiled code?)

I'm a CS student that just learned basic mips for class (Patterson & Hennessy + spim), and I'm attempting to find a mips debugging solution that allows arbitrary instruction execution during the debugging process.
Attempt with gdb (so you know why not to suggest this)
The recommended MIPS cross-compilation toolchain is qemu and gdb; see the mips.com docs and related Q/A.
gdb's compile code command does not support mips-linux-gnu-gcc as far as I can tell, see gdb docs ("Relocating the object file") and related Q/A. I get errors for malloc, mmap, and invalid memory errors (something appears to be going wrong with the ad-hoc linking gdb performs) when attempting to use compile code with mips-linux-gnu-gcc, even after filtering the hard coded compilation arguments that mips-linux-gnu-gcc doesn't recognize.
Actual question
lldb has a similar command called expression, see lldb docs, and I'm interested in using lldb in conjunction with qemu. The expression command also relies on clang as opposed to gcc, but cross compilation in clang is relatively simple (clang -target mips-linux-gnu "just works"). The only issue is that qemu-mips -g launches gdbserver, and I can find no option for launching lldb-server.
I have read lldb docs on remote debugging, and there is an option to select remote-gdb-server as the platform. I can't find much in the way of documentation for remote-gdb-server, but the name seems to imply that lldb can be compatible with gdbserver.
Here is my attempt to make this work:
qemu-mips -g 1234 test
lldb test
(lldb) platform select remote-gdb-server
Platform: remote-gdb-server
Connected: no
(lldb) platform connect connect://localhost:1234
Platform: remote-gdb-server
Hostname: (null)
Connected: yes
(lldb) b main
Breakpoint 1: where = test`main + 16 at test.c:4, address = 0x00400530
(lldb) c
error: invalid process
Is there a way to either
use lldb with gdbserver, or to
launch lldb-server from qemu-mips as opposed to gdbserver
so that I can execute instructions while debugging mips code?
Note: I understand that I could instead use qemu system emulation to be able to just run lldb-server on the remote. I have tried to virtualize debian mips, using this guide, but the netinstaller won't detect my network card. Based on numerous SO Q/A and online forums, it looks like solving this problem is hard. So for now I am trying to avoid whole system emulation.
Yes.
Use LLDB with QEMU
LLDB supports the GDB server protocol that QEMU uses, so you can do the same thing as in the previous section, with some command modifications, since some LLDB commands differ from their GDB counterparts.
You can run QEMU to listen for a "GDB connection" before it starts executing any code to debug it.
qemu -s -S <harddrive.img>
...will set up QEMU to listen on port 1234 and wait for a GDB connection. Then, from a remote or local shell:
lldb kernel.elf
(lldb) target create "kernel.elf"
Current executable set to '/home/user/osdev/kernel.elf' (x86_64).
(lldb) gdb-remote localhost:1234
Process 1 stopped
* thread #1, stop reason = signal SIGTRAP
frame #0: 0x000000000000fff0
-> 0xfff0: addb %al, (%rax)
0xfff2: addb %al, (%rax)
0xfff4: addb %al, (%rax)
0xfff6: addb %al, (%rax)
(Replace localhost with remote IP / URL if necessary.) Then start execution:
(lldb) c
Process 1 resuming
To set a breakpoint:
(lldb) breakpoint set --name kmain
Breakpoint 1: where = kernel.elf`kmain, address = 0xffffffff802025d0
For your situation (note that user-mode qemu-mips uses -g <port> to wait for a debugger, rather than the system-mode -s -S switches):
qemu-mips -g 1234 test
lldb test
(lldb) gdb-remote localhost:1234
Here is mine, which you can use as a reference:
############################################# gdb #############################################
QEMU_GDB_OPT := -S -gdb tcp::10001,ipv4
# Debug configuration: -S -gdb tcp::10001,ipv4
qemudbg:
ifeq ($(BOOT_MODE),$(BOOT_LEGACY_MODE))
$(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT)
else
ifeq ($(BOOT_MODE),$(BOOT_GRUB2_MODE))
ifeq ($(EFI_BOOT_MODE),n)
$(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT) -cdrom $(KERNSRC)/$(OS_NAME).iso
else
$(QEMU) $(QEMU_GDB_OPT) $(QEMU_ARGUMENT) -bios $(BIOS_FW_DIR)/IA32_OVMF.fd -cdrom $(KERNSRC)/$(OS_NAME).iso
endif
endif
endif
# Connect to the gdb server: target remote localhost:10001
gdb:
$(GDB) $(KERNEL_ELF)
############################################# lldb #############################################
QEMU_LLDB_OPT := -s -S
LLDB := lldb
qemulldb:
ifeq ($(BOOT_MODE),$(BOOT_LEGACY_MODE))
$(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT)
else
ifeq ($(BOOT_MODE),$(BOOT_GRUB2_MODE))
ifeq ($(EFI_BOOT_MODE),n)
$(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT) -cdrom $(KERNSRC)/$(OS_NAME).iso
else
$(QEMU) $(QEMU_LLDB_OPT) $(QEMU_ARGUMENT) -bios $(BIOS_FW_DIR)/IA32_OVMF.fd -cdrom $(KERNSRC)/$(OS_NAME).iso
endif
endif
endif
lldb:
$(LLDB) $(KERNEL_ELF)

Compile error during Caffe installation on OS X 10.11

I've set up the Caffe environment on my Mac several times, but this time I encountered a problem I've never seen before.
I use Intel's MKL to accelerate computation instead of ATLAS, and I use Anaconda 2.7 and OpenCV 2.4, with Xcode 7.3.1 on OS X 10.11.6.
When I run
make all -j8
in a terminal under Caffe's root directory, the error info is:
AR -o .build_release/lib/libcaffe.a
LD -o .build_release/lib/libcaffe.so.1.0.0-rc5
clang: warning: argument unused during compilation: '-pthread'
ld: can't map file, errno=22 file '/usr/local/cuda/lib' for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [.build_release/lib/libcaffe.so.1.0.0-rc5] Error 1
make: *** Waiting for unfinished jobs....
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: .build_release/lib/libcaffe.a(parallel.o) has no symbols
I've tried this many times; can anyone help me out?
This looks like you haven't changed Makefile.config from GPU to CPU mode. There shouldn't be anything trying to actively link that library. I think the only CUDA one you should need is libicudata.so
Look for the lines
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
and remove the octothorpe (#) from the front of the second line.
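If you'd rather script the edit than open an editor, sed can do the uncommenting; this sketch runs against an inline two-line sample rather than your real Makefile.config:

```shell
# Recreate the two relevant lines, then uncomment CPU_ONLY with sed.
printf '%s\n' '# CPU-only switch (uncomment to build without GPU support).' '# CPU_ONLY := 1' > sample.config
sed -i.bak 's/^# *CPU_ONLY := 1/CPU_ONLY := 1/' sample.config
grep '^CPU_ONLY' sample.config   # prints CPU_ONLY := 1
```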

qemu beagle sd card error

I am trying to run UEFI code in QEMU for the BeagleBoard using the following command:
qemu-system-arm -M beagle -sd beagle_sd.img -serial stdio -clock unix
The SD card holds UEFI, which starts the shell. When this runs, I get an SD card error saying CMD17 is in a wrong state. My log:
SD: CMD17 in a wrong state
sd_read_data: not in Sending-Data state (state=0)
---- the above line prints multiple times
sd_read_data: not in Sending-Data state (state=0)MmcIoBlocks(MMC_CMD65553): Error Device Error
EhcCreateUsb2Hc: capability length 0
EhcDriverBindingStart: failed to create USB2_HC
Because of this, my UEFI shell only maps blk0 and blk1.
Any ideas on how to move ahead?