I am trying to run u-boot on QEMU, but when I start QEMU it produces no output. Why doesn't this work, and how can I debug it to find out the reason?
This is what I tried:
Install Ubuntu 18.04 WSL2 on Windows.
Compile u-boot for the Raspi2
sudo apt install make gcc bison flex
sudo apt-get install gcc-arm-none-eabi binutils-arm-none-eabi
export CROSS_COMPILE=arm-none-eabi-
export ARCH=arm
make rpi_2_defconfig all
Start QEMU
qemu-system-arm -M raspi2 -nographic -kernel ./u-boot/u-boot.bin
I also tried QEMU on the Windows side, and the result is the same.
PS C:\WINDOWS\system32> qemu-system-arm.exe -M raspi2 --nographic -kernel E:\u-boot\u-boot.bin
QEMU gave no output, and even Ctrl+C could not stop the process.
Unfortunately this is an incompatibility between the way that u-boot expects to be started on the raspberry pi and the ways of starting binaries that QEMU supports for this board.
QEMU supports two ways of starting guest code on Arm in general:
Linux kernels: these boot with whatever the expected boot protocol for the kernel on this board is. For raspi that will be "start the primary CPU, but put the secondaries in the pen waiting on the mbox". Effectively, QEMU emulates a very minimal bit of the firmware, just enough to boot Linux.
Not Linux kernels: these are booted as if they were the first thing to execute on the raw hardware, which is to say that all CPUs start executing at once, and it is the job of the guest code to provide whatever penning of secondary CPUs it wants to do. That is, your guest code has to do the work of the firmware here, because it effectively is the firmware.
We assume that you're a Linux kernel if you're a raw image or a suitable uImage. If you're an ELF image we assume you're not a Linux kernel. (This is not exactly ideal but we're to some extent lumbered with it for backwards-compatibility reasons.)
On the raspberry pi boards, the way the u-boot binary expects to be started is likely to be "as if the firmware launched it", which is not exactly the same as either of the two options QEMU supports. This mismatch tends to result in u-boot crashing (usually because it is not expecting the "all CPUs run at once" behaviour).
A fix would require either changes to u-boot so it can handle being launched the way QEMU launches it, or changes to QEMU to support more emulation of the firmware of this board (which QEMU upstream would be reluctant to accept).
An alternative approach if it's not necessary to use the raspi board in particular would be to use some other board like the 'virt' board which u-boot does handle in a way that allows it to boot on QEMU. (The 'virt' board also has better device support; for instance it can do networking and USB devices, which 'raspi' and 'raspi2' cannot at the moment.)
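For reference, a rough sketch of that route (the defconfig name and the -bios invocation follow U-Boot's own QEMU documentation, but treat the exact commands as a starting point rather than a recipe):
# Build u-boot for QEMU's 32-bit Arm 'virt' machine instead of the raspi2
export CROSS_COMPILE=arm-none-eabi-
make qemu_arm_defconfig
make -j"$(nproc)"
# Run it as the machine's firmware; the serial console appears on stdio
qemu-system-arm -M virt -nographic -bios u-boot.bin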
Related
I have been developing the OS for a prototype device using hardware. Unfortunately, it's a very manual and buggy process to flash the OS each time and then debug the issues.
I'd like to switch to developing the OS in QEMU, so that I can be sure that the OS is loading correctly before going through the faff of programming the device for real. This will come in handy later for Continuous Integration work.
I have a full copy of the NVM device image that is generated from my build process. This is a known-working image I'd like to run in QEMU as a starting point; it is also ready to be JTAG'd onto the device. The partition layout is:
P0 - loader - Flash of IDBLoader from rockchip loader binaries
P1 - Uboot - Flash of Uboot
P2 - trust - Flash of Trust image for rockchip specific loader
P3 - / - Root partition with Debian based image and packages required for application
P4 - data partition - Application Data
I have not changed anything with the Rockchip partitions (P0 - P2) apart from the serial console settings. When trying to boot the image though, nothing happens. There is no output at all, but the VM shows as still running. I use the following command to run it:
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
-kernel u-boot-nodtb.bin \
-drive format=raw,file=image.img \
-boot c -serial stdio
I have no error information to go on to understand what is happening. Where can I get more information, or how can I debug this?
QEMU cannot emulate arbitrary hardware. You will have to compile U-Boot to match the hardware that QEMU emulates, e.g. using make qemu_arm64_defconfig. The OS must also provide drivers for QEMU's emulated hardware.
If you want to emulate the complete hardware to debug drivers, Renode (https://renode.io/) is a good choice.
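As a rough sketch of that suggestion (the cross-compiler prefix is an assumption; use whatever aarch64 toolchain you have):
# Build u-boot for QEMU's arm64 'virt' machine
export CROSS_COMPILE=aarch64-linux-gnu-
make qemu_arm64_defconfig
make -j"$(nproc)"
# Boot it on the machine/CPU from the question; the image is attached as a virtio disk,
# which U-Boot's virtio support should be able to read
qemu-system-aarch64 -machine virt -cpu cortex-a53 -nographic \
    -bios u-boot.bin \
    -drive format=raw,file=image.img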
For anyone else trying to figure this out, I found good resources here:
https://translatedcode.wordpress.com/2017/07/24/installing-debian-on-qemus-64-bit-arm-virt-board/
and
https://azeria-labs.com/emulate-raspberry-pi-with-qemu/
Looking at the information though, you need to extract the kernel from your image and provide that to the qemu command line as an argument. You'll also need to append an argument telling the system which partition to use as a root drive.
My final command line for starting the machine looks like this:
qemu-system-aarch64 -machine virt -cpu cortex-a53 \
-drive format=raw,file=image.img,id=hd \
-boot c -serial stdio \
-kernel <kernelextracted> -append "root=fe04"
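For the kernel-extraction step mentioned above, one possible approach is to loop-mount the image and copy the kernel out (a sketch; the partition number and the kernel path inside the root partition are assumptions about your image layout):
# Map the image's partitions to loop devices; prints e.g. /dev/loop0
sudo losetup -P -f --show image.img
# In the layout above, P3 (the root partition) is the fourth partition
sudo mount /dev/loop0p4 /mnt
cp /mnt/boot/vmlinuz-* .
sudo umount /mnt
sudo losetup -d /dev/loop0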
Different Arm boards can be significantly different from one another in where they put their hardware, including where they put basic hardware required for bootup (UART, RAM, interrupt controller, etc). It is therefore pretty much expected that if you take a piece of low-level software like u-boot or a Linux kernel that was compiled to run on one board and try to run it on a different one that it will fail to boot. Generally it won't be able to output anything because it won't even have been able to find the UART. (Linux kernels can be compiled to be generic and include drivers for a wider variety of hardware, so if you have that sort of kernel it can be booted on a different board type: it will use a device tree blob, provided either by you or autogenerated by QEMU for the 'virt' board, to figure out what hardware it's running on and adapt to it. But kernels compiled for a specific embedded target are often built with only the device drivers they need, and that kind of kernel can't boot on a different system.)
There are broadly speaking two paths you can take:
(1) build the guest for the board you're emulating (here the 'virt' one). u-boot and Linux both have support for QEMU's 'virt' board. This may or may not be useful for what you're trying to do -- you would be able to test any userspace level code that doesn't care what hardware it runs on, but obviously not anything that's specific to the real hardware you're targeting.
(2) In theory you could add emulation support to QEMU for the hardware you're trying to run on. However this is generally a fair amount of work and isn't trivial if you're not already familiar with QEMU internals. I usually ballpark estimate it as "about as much work as it would take to port the kernel to the hardware"; though it depends a bit on how much functionality you/your guest need to have working, and whether QEMU already has a model of the SoC your hardware is using.
To answer the general "what's the best way to debug a guest image that doesn't boot", the best thing is usually to connect an arm-aware gdb to QEMU's gdbstub. This gives you debug access broadly similar to a JTAG debug connection to real hardware and is probably sufficient to let you find out where the guest is crashing. QEMU also has some debug logging options under the '-d' option, although the logging is partially intended to assist in debugging problems within QEMU itself and can be a bit tricky to interpret.
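As a concrete sketch of the gdb approach (file names follow the question; any Arm-aware gdb such as gdb-multiarch will do):
# Start QEMU frozen at reset (-S) with the gdbstub listening on TCP port 1234 (-s)
qemu-system-aarch64 -machine virt -cpu cortex-a53 -kernel u-boot-nodtb.bin \
    -drive format=raw,file=image.img -serial stdio -s -S
# In another terminal, attach gdb (the 'u-boot' ELF from the build gives you symbols)
gdb-multiarch u-boot -ex 'target remote localhost:1234'
# then use stepi, info registers and backtrace to see where the guest gets stuck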
QEMU allows booting a VM with u-boot passed via the -kernel option. But this requires an additional file (u-boot itself) to be available on the host system. My goal is to load u-boot that is stored inside the QEMU disk image. So I am expecting to do something like this:
qemu-system-arm -kernelblocks 1:128 -device sdhci-pci -drive format=raw,file=./build/disk.img,if=none,id=disk,cache=writeback,discard=unmap -device sd-card,drive=disk
where -kernelblocks is an imaginary option telling QEMU to load u-boot from specific blocks of the QEMU disk image instead of file on the host system.
So the question is: how can I get QEMU to load u-boot from the QEMU disk image?
As an alternative, I may accept an answer showing how to load a file from a file system on the QEMU disk image.
For my task I am at liberty to pass any options to QEMU but cannot have any files on the host system except just QEMU disk image.
Your command line doesn't specify a particular machine model, which isn't valid for qemu-system-arm. The below rules of thumb may or may not apply for a particular QEMU machine model.
For running QEMU guest code you can generally either:
specify a guest kernel directly with -kernel, which then boots and knows how to talk to the emulated disk
specify a guest BIOS using the -bios option (or some -pflash option); the bios then boots and knows how to read the kernel off the emulated disk. The "bios" here could be u-boot or UEFI or something similar.
The first is a convenient shortcut; the second is like how the hardware actually boots. (qemu-system-x86_64 works like the -bios option approach; you just don't notice because the -bios option is enabled by default and loads the bios image from a system library directory.) Some board models don't support -bios.
QEMU doesn't have any way of saying "load the guest image from this block of the disk". That would be getting too much into the details of hand-emulating a guest BIOS, and we'd prefer to provide that functionality by just running an actual guest BIOS blob.
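To illustrate the -bios/-pflash route on the 'virt' machine (a sketch; it assumes u-boot was built with qemu_arm_defconfig, and note that it still needs u-boot.bin as a file on the host, which is exactly the point above):
# Pad u-boot to the 64 MiB size of the virt board's first flash bank
cp u-boot.bin pflash.img
truncate -s 64M pflash.img
# Boot with u-boot as the firmware; keep your existing -drive/-device options for the disk,
# and u-boot then reads the kernel off that emulated disk itself
qemu-system-arm -machine virt -nographic \
    -drive if=pflash,format=raw,index=0,file=pflash.img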
I need someone to walk me through setting up a Wayland Desktop Environment with Linux inside a systemd-nspawn container.
How to set up a nested Wayland Desktop Environment in a systemd-nspawn container, like VirtualBox
This tutorial walks you through setting up a Wayland Desktop Environment in a Linux systemd-nspawn container on your computer. This is similar to VMware Workstation or VirtualBox, but Linux-only and with minimal performance overhead.
A Quick Look at the final result
Features and benefits
✓ Hardware-independent containerOS: hardware abstraction with extremely low performance overhead, thanks to systemd-nspawn container technology
✓ 100% portable among systemd-enabled Linux hosts; easy backup and recovery
✓ Direct rendering works, including 3D desktop effects
✓ Video and sound work
✓ Network works out of the box
✓ Less risk of messing up the hostOS, and fewer reboots of the hostOS and the hardware; instead, enjoy instant virtual boot, poweroff and reboot of the containerOS.
Summary of How to
Launch a kwin_wayland window, nested on your current desktop environment.
Boot your container OS with systemd-nspawn
From the containerOS console:
(a) Launch a Desktop Environment such as XFCE or LXQt in the targeted kwin_wayland window.
(b) Or simply prepare your favorite launcher app like synapse or xfce4-panel alone, for a minimal setup.
Walk through
HostOS with minimal applications
The hostOS can be any linuxOS with systemd and the desktop environment can be either Wayland or legacy X11.
Although a Wayland hostOS is obviously preferable, the situation is still immature. As of March 2017, only Fedora 25 ships a Wayland-based GNOME session as the default over the X11-based one; the other distros do not. The latest version of KDE Plasma is stable with X11/Xorg, but unstable with Wayland.
If you use GNOME for the host environment, go for Wayland; but with Plasma or another DE, be conservative and use X11/Xorg for stability.
This method works very well in both cases; personally, I use Arch Linux with KDE Plasma (X11/Xorg).
Install systemd-nspawn and kwin_wayland
Some distros such as Arch already have systemd-nspawn, but others such as Ubuntu do not; the install commands are sketched after the package notes below.
systemd-nspawn
Binary package “systemd-container” in ubuntu xenial
kwin-wayland
Binary package “kwin-wayland” in ubuntu xenial
Arch probably has kwin_wayland in xorg-server-xwayland package.
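A rough install sketch based on the package names above (the Arch package split is an assumption and may differ between releases):
# Ubuntu (xenial)
sudo apt-get install systemd-container kwin-wayland
# Arch Linux
sudo pacman -S --needed kwin xorg-server-xwayland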
Launch kwin_wayland window
KWin is known as one of the most feature complete and most stable window managers.
This is a direct rendering enabled wayland window space managed by KWin, and nested on your current desktop environment.
Starting a nested KWin #KWin/Wayland - KDE Community Wiki
Since 5.3 it is possible to start a nested KWin instance under either X11 or Wayland:
export $(dbus-launch)
kwin_wayland --xwayland &
for fish shell
export (dbus-launch);
Boot your containerOS
sudo systemd-nspawn \
-bD /YOUR_MACHINE_ROOT_DIRECTORY \
--volatile=no \
--bind-ro=/home/YOUR_USERNAME/.Xauthority \
--bind=/run/user/1000 \
--bind=/tmp/.X11-unix \
--bind=/dev/shm \
--bind=/dev/dri \
--bind=/run/dbus/system_bus_socket \
--bind=/YOUR_DATA_DIRECTORY
Bind /YOUR_DATA_DIRECTORY of the hostOS into the containerOS so that you can share the data directory between both; at the same time, your containerOS can stay as small and clean as possible, which is good for portability and backup/restore.
Log in to the containerOS console.
Typically, you build your container distro OS from minimal/server OS images.
Remember, you do not need to install an X11/Xorg display server, or Wayland, for the containerOS, since the kwin_wayland window plays that role.
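As an example of building such a minimal containerOS (a sketch; the target directory and the choice of distro are up to you):
# Debian/Ubuntu base via debootstrap
sudo debootstrap --arch=amd64 stable /YOUR_MACHINE_ROOT_DIRECTORY
# or an Arch base via pacstrap (from the arch-install-scripts package)
sudo pacstrap -c /YOUR_MACHINE_ROOT_DIRECTORY base
# set a root password before the first boot
sudo systemd-nspawn -D /YOUR_MACHINE_ROOT_DIRECTORY passwd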
Launch a Desktop Environment (XFCE) in the targeted kwin_wayland window.
Remember, KWin is already running, and it's a feature-complete and powerful window manager. You can launch and switch tasks with KWin via shortcut keys, or prepare your favorite launcher app like synapse or xfce4-panel for a minimal setup.
However, if you need a more user-friendly Desktop Environment, just install and launch XFCE or LXQt, which can run alongside KWin.
From the containerOS console:
export XAUTHORITY=/home/YOUR_USERNAME/.Xauthority; \
export XDG_RUNTIME_DIR=/run/user/1000; \
export CLUTTER_BACKEND=x11; \
export QT_X11_NO_MITSHM=1; \
xfce4-session --display :1;
Maximize and remove the frame of the kwin_wayland window by default
You probably want to remove the frame of the containerOS window; this is how to do it on Plasma (the DE of the hostOS).
Final result
Confirm that the XFCE environment recognizes it is running on an XWAYLAND display.
XWayland implements a compatibility layer to seamlessly run legacy X11 applications on Wayland.
So far, somewhat exceptionally, if you install the Wayland GUI libraries and set a certain flag, you can see GUI applications run natively on Wayland.
The left is a kate window in Xorg/X11 compatibility mode.
The right is the same window in Wayland native mode.
As you can see, the native Wayland app does not reflect the current window theme and the XFCE panel does not show its task; and you cannot tell the difference in performance as long as you use normal desktop applications.
So there is probably not much reason to pursue Wayland-native mode apps,
but the situation can be different for 3D games, and significantly different on small devices such as the Raspberry Pi.
(Optional) legacy X11/Xorg
Although this tutorial focuses on a nested Wayland window, Xephyr (a nested X server that runs as an X application) has been around for a long time.
Unlike kwin_wayland, Xephyr is not optimized for direct rendering and the KWin window manager is not bundled, so if you run KWin or another direct-rendering compositor on top of Xephyr, things get slow and inefficient; it is therefore not recommended, but here's how:
Xephyr -ac -screen 1200x700 -resizeable -reset :1 &
HostOS and ContainerOS interaction
You cannot Copy&Paste between HostOS and ContainerOS.
You may consider using Google Keep to share content between the HostOS and ContainerOS, and of course, you should have shared directories via the systemd-nspawn bind options.
Portability
You may "backup/recover" or "copy" or "move" the continerOS to anywhere regardless of
Kernel updates
Hardware drivers
Disk partitions (/etc/fstab etc.)
GRUB/UEFI configurations
or any other typical integration glitches!
Just be aware of the host kernel versions.
Backup
your machines directory: ~/machines
your machines backup directory: ~/machines-bak
your machine image directory: arch1
cd ~/machines/
sudo tar -cpf ~/machines-bak/arch1.tar arch1 --totals
Recovery
cd ~/machines/
sudo tar -xpf ~/machines-bak/arch1.tar --totals
Backup Tools
The tar commands above may not be the smartest method; however, it's a proven, robust method that needs no extra tools. Often, simple is best.
However, you may select various backup tools for more efficiency.
Synchronization and backup programs #ArchWIKI
Git-based bup looks good and new.
What you may consider to remove from the container OS
Any hardware-dependent factors such as the following (a removal sketch follows this list):
linux kernels with various drivers
/etc/fstab
NetworkManager.service of systemd
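For example, from the containerOS console as root (a sketch; the package names assume an Arch-based container, so adjust for your distro):
pacman -Rns linux          # the container always runs on the host kernel
rm /etc/fstab              # mounts are provided by the systemd-nspawn bind options
systemctl disable NetworkManager.service   # networking comes from the host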
MIT License
I built qemu from source with the arm-softmmu and arm-linux-user targets. I have a simple binary compiled for ARM, but when I launch it with the qemu user-mode translator I get FATAL: kernel is too old. I'm running qemu on an x86_64 host with kernel 2.6.32. What could be the problem?
I got this error because I was running:
qemu-system-x86_64 -kernel vmlinux -initrd rootfs.cpio.gz
while it should be:
qemu-system-x86_64 -kernel bzImage -initrd rootfs.cpio.gz
where bzImage is located at arch/x86/boot/bzImage.
This error happens because glibc has a kernel version check to avoid incompatibilities. The message comes from glibc, not QEMU.
Also in user mode, you might try to work around the problem with the -r option which artificially sets a different uname kernel version:
qemu-x86_64 -r 4.18
This might work, but of course it is not necessarily reliable, as QEMU presumably sets its version more or less correctly based on the syscalls/system interfaces it implements, so your program could, through glibc, rely on some interface that is not yet implemented.
The correct solution is to get a cross-compiler that more closely matches what QEMU userland gives you.
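To see which minimum kernel version the binary actually requires, and to try the workaround in user mode (a sketch; replace ./myprog with your ARM binary):
# 'file' reports the minimum kernel the C library was built against,
# e.g. "... for GNU/Linux 3.2.0 ..."
file ./myprog
# tell qemu-arm to report at least that version via uname
qemu-arm -r 3.2.0 ./myprog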
The C library you used to build the binary is much newer than your guest kernel.
I have been trying to set up an Ubuntu environment on my laptop for some time now for CUDA programming. I am currently dual booting Windows 8 and Ubuntu 12.04 and want to install CUDA 5 on Ubuntu.
The laptop has a GeForce GT 640M graphics card (See below for full specs). It is an Optimus card.
Originally I was dual booting Ubuntu 11.10 and have tried tutorials on both 11.10 and 12.04.
I have tried many tutorials of all shapes and sizes, including this tutorial. The installation process shows the device driver installing and the Toolkit installing, and the Samples failing, but when I go to test a simple vector-add CUDA program in Nsight, a "No compatible CUDA Device" error is thrown.
Ubuntu Details also still shows "Unknown" for Graphics
Suggestions?
Laptop Specs:
Acer V3-771G
Intel Core i7 2670QM
nVidia GeForce GT 640M 2GB - Optimus
16GB DDR3-1600 RAM
120GB SSD + 500GB HDD + 32GB Cache SSD
Since it is an Optimus device, there are some extra steps to be able to use the NVIDIA GPU. While it is not necessary, I suggest that you use the bumblebee wrapper program because it is the easiest solution.
After you have installed the bumblebee wrapper you can run your programs using optirun programname or start a shell with the nvidia card activated: optirun bash --login
An added bonus is that the bumblebee daemon will disable the GPU when it is not running and will save you some battery.
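For example, to check that CUDA really sees the GPU through bumblebee (assuming the CUDA samples built, and using whatever path they ended up in):
optirun ./deviceQuery    # should report the GeForce GT 640M rather than "no CUDA-capable device"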
If you don't care about battery life and just want CUDA to be always enabled without wrapping commands you can load the nvidia kernel module and then create the necessary device nodes manually:
mknod /dev/nvidia0 c 195 0
mknod /dev/nvidiactl c 195 255
(This advanced method lets you run cuda programs from the console without starting Xorg, for example when SSH-ing to a machine without a running X server.)
See also https://askubuntu.com/questions/131506/how-can-i-get-nvidia-cuda-or-opencl-working-on-a-laptop-with-nvidia-discrete-car for a more detailed discussion.
Try the command sudo apt-get install mesa-utils.
See if the graphics card is recognized (e.g. with glxinfo, as sketched below), and then try to install CUDA.
If it is not recognized with the first command, try:
sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
sudo apt-get update
sudo apt-get install nvidia-current
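To check whether the card is recognized after installing mesa-utils (glxinfo is part of that package):
glxinfo | grep -iE "vendor|renderer"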
First install the following libraries & Tools:
sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev
Next we will blacklist some modules (drivers); in a terminal, enter:
sudo gedit /etc/modprobe.d/blacklist.conf
Add the following to the end of the file (one per line, like so):
blacklist amd76x_edac
blacklist vga16fb
blacklist nouveau
blacklist rivafb
blacklist nvidiafb
blacklist rivatv
Save the file and close the editor.
Now we want to get rid of any NVIDIA residuals; in a terminal:
sudo apt-get remove --purge nvidia*
Next you need to restart your machine (sudo reboot).
0) Press Ctrl+Alt+F1 at the login screen (you don't have to log in to the desktop; we'll have to restart later anyway), then log in at the text console.
1) sudo service lightdm stop
2) cd Downloads
3) chmod +x devdriver*.run (your driver filename)
4) sudo ./devdriver*.run
You might have to run the driver installer once, reboot (it will remove the nouveau drivers), and repeat the steps again. Follow the installer instructions and it will be fine. When it asks you:
yes, you do want the 32-bit libraries, and you DO want it to change the xorg.conf file.
Once the installer completes, restart (sudo reboot). You're done :]
In order to install the SDK and Toolkit, use steps 3 and 4 with the downloaded .run files.
In theory, the drivers included with CUDA 5.5 should natively support Optimus (as well as single GPU debugging for non-Optimus laptops). I haven't tried it yet because I'm waiting for a compute 3.5 Optimus laptop so that it'll support kernel recursion and HyperQ. In theory the HP Envy 15t-j000 has the GK208 version of the GT 740m, but I'd really rather have an ultrabook form factor like the upcoming Acer S3-392 with GT 735m. The NVIDIA guys at GTC assured me that Optimus should be working with the CUDA 5.5 RC. I found this 'CUDA Getting Started Guide for Linux' released this month that provides some flags for getting Optimus drivers installed correctly:
http://www.google.com/url?q=http://developer.download.nvidia.com/compute/cuda/5_5/rc/docs/CUDA_Getting_Started_Linux.pdf
Also, more information about GK208 Chips and Compute 3.5 in laptops:
https://devtalk.nvidia.com/default/topic/546357/sounds-like-gk208-laptops-cards-will-support-most-sm_35-features/
Anyone have luck with CUDA 5.5 and Optimus laptops under linux?