I have firmware for a QorIQ (P2041) processor-based IoT device: U-Boot, a kernel, and an initrd ramdisk. Whatever I do with qemu-system-ppc, I can't get it to work. I suspect that qemu-system-ppc doesn't support QorIQ processors. Is there any way for me to load and boot this firmware in QEMU or any other emulator?
U-Boot has a configuration file qemu-ppce500_defconfig. You should be able to run a U-Boot built with this configuration using the command
qemu-system-ppc -nographic -bios u-boot -M ppce500
The CPU can be specified via the -cpu parameter as e500mc.
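Putting it together, something like this should give you an e500mc machine running the QEMU-targeted U-Boot (the u-boot path and the memory size are placeholders for your own build and setup):
qemu-system-ppc -M ppce500 -cpu e500mc -m 1024 -nographic -bios u-boot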
To run your kernel, it will need drivers for the hardware provided by the emulated machine, such as the e1000 network card and the NS16550 console.
Use U-Boot's fdt command to get an overview of the devices available in the emulated machine, as shown below.
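For example, at the U-Boot prompt you could run something like the following (assuming your build sets the fdtcontroladdr environment variable to the device tree that QEMU passes in; the => is the U-Boot prompt):
=> fdt addr $fdtcontroladdr
=> fdt print /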
Firmware binaries are generally very closely tied to the hardware they're built to run on -- they make assumptions about what hardware is available, what addresses in memory it can be found at, and so on. You need to use a firmware blob that corresponds to the hardware you're asking QEMU to emulate. Since QEMU doesn't emulate whatever your random IoT device is, you need to use a u-boot which matches the hardware QEMU actually has (as for example suggested in Xypron's answer).
Once you have booting firmware, you will likely still find you have exactly the same problem with the kernel -- it is built to run on one bit of hardware, and you're trying to run it on something different, and this simply won't work.
Apparently, QEMU is the only piece of open source code that can emulate an x86 operating system on the new Apple silicon (M1, M2, etc.).
Apple built Rosetta 2, which, in theory, does the exact same thing that QEMU would be doing in these scenarios. It translates x86 (Intel) instructions into the instruction set supported by the new Apple silicon processors.
Rosetta 2 does this with remarkable performance, and some x86 applications even run faster than on native x86 hardware. QEMU, on the other hand, doesn't even come close when running x86 Linux on Apple silicon.
How can Rosetta have such superior performance? Are there any "secrets" that only Apple knows about their architecture that were never shared with the QEMU project? Any forbidden APIs that QEMU is not allowed to access?
Rosetta and QEMU are both emulators. However, they tackle the problem in vastly different ways.
QEMU
In order to emulate a Linux system, QEMU must also emulate storage devices, console output devices, Ethernet devices, keyboards, and the entire CPU. Within this framework it emulates every instruction using just-in-time (JIT) translation, from the Linux kernel down to your /bin/ls command.
There are generally few limitations to QEMU's Intel emulation. You can run almost any Intel operating system and its associated applications.
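As a rough illustration, this is the kind of invocation involved when running an x86_64 Linux guest purely in software on an Apple Silicon host (the disk image name is a placeholder, and the exact device options depend on your guest):
qemu-system-x86_64 -machine q35 -accel tcg -m 4096 -drive file=linux.qcow2,if=virtio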
Rosetta 2
Apple's translation, on the other hand, happens before the application launches. The entire binary is translated from x86 to Apple Silicon code and then launched. Once translated, the application is in effect a native arm64 binary making native macOS system calls.
Apple's documentation explains it thus:
If an executable contains only Intel instructions, macOS automatically launches Rosetta and begins the translation process. When translation finishes, the system launches the translated executable in place of the original. However, the translation process takes time, so users might perceive that translated apps launch or run more slowly at times.
Rosetta 2 has a number of significant limitations. For example, you can't use Intel kernel extensions, virtual machine apps that virtualize x86_64 computer platforms (Parallels, for example), or AVX/AVX2/AVX512 vector instructions.
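You can see the ahead-of-time model from the command line; on an Apple Silicon Mac with Rosetta installed, something like the following runs a binary's x86_64 slice under Rosetta and lets a process check whether it is being translated:
arch -x86_64 zsh                  # start a shell from the x86_64 slice, translated by Rosetta
sysctl sysctl.proc_translated     # prints 1 inside a translated process, 0 in a native one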
Does KVM have an out-of-order execution feature? If not, can we implement one to increase virtual machine performance, or will the underlying processor take care of it?
If the question is whether QEMU-KVM emulates out-of-order execution, then no, they don't. QEMU can emulate an instruction set architecture (so you could run ARM code on x86, for example), but not at the level of instruction re-ordering, and doing this in software would probably only add extra overhead.
On the other hand, if you run native code inside a VM (an x86 binary on x86, but virtualized), then all unprivileged instructions are executed exactly as they would be on bare metal. So if your CPU can execute out of order, it will do so for the code of your VM as well. How the privileged processor instructions are executed depends on whether you are using the KVM module alongside QEMU. You can read more about this here or in more detail here.
If you think your QEMU is too slow, check whether the KVM module is being used: supply the -enable-kvm argument on the command line. Also make sure your processor has virtualization support.
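For example, on a Linux host you could verify this with something like the following (disk.img is a placeholder for your guest image):
egrep -c '(vmx|svm)' /proc/cpuinfo    # non-zero means the CPU advertises VT-x/AMD-V
lsmod | grep kvm                      # kvm plus kvm_intel or kvm_amd should be loaded
qemu-system-x86_64 -enable-kvm -m 2048 -hda disk.img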
Also check this answer
I downloaded and installed CUDA 7.5 and found the instruction saying that I need to check whether I have a CUDA-capable GPU.
I did this:
lin#lin-VirtualBox:/opt/caffe$ sudo update-pciids
Downloaded daily snapshot dated 2015-09-07 03:15:01
Then when I type
lspci | grep -i nvidia
nothing comes out.
lin#lin-VirtualBox:/opt/caffe$ lspci | grep -i nvidia
lin#lin-VirtualBox:/opt/caffe$
I have an NVIDIA graphics card, a GeForce GT 750M.
What could be wrong?
My OS is Ubuntu 14.04.
Thanks
It seems you are running in a VirtualBox VM (virtual machine) instance. With a typical VirtualBox setup, the graphics adapter in the VM is virtualized; there is no physical GPU device present in the VM.
As a result, the GPU does not show up when you run lspci in the VM.
One possible approach to work around this would be to switch to a "baremetal" config; i.e. load Ubuntu directly on your laptop as the primary (or "host") OS, rather than in a VM. The GPU should show up that way.
Another possible approach would be to use VirtualBox PCI Passthrough to make the GPU visible in the VM, as sketched below. Whether this works in a laptop scenario I don't know; there may be side effects of passing the laptop GPU through to a VM, since the hypervisor and any other OS on the laptop would no longer have access to the GPU (or the laptop display). There are also a number of other requirements and restrictions with this approach: your laptop hardware may or may not meet them, and I believe the host OS is expected to be a specific flavor of Linux (kernel), whereas you may have Windows as the host OS on your laptop.
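If your setup does support it, the passthrough is configured with VBoxManage, roughly like this, where the VM name and the host/guest PCI addresses are placeholders you would replace with your own (the feature is experimental, Linux-host-only, and requires IOMMU support):
VBoxManage modifyvm "YourVM" --pciattach 01:00.0@01:05.0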
In any event, how to configure your machine with VirtualBox and/or PCI Passthrough is not a programming question, and I think it is off-topic for SO. You might try askubuntu or another similar forum for related questions.
I am new to KVM; qemu-kvm and KVM both seem very complicated to me right now.
Can anyone recommend some primers on qemu-kvm and KVM?
Thanks very much!
KVM stands for Kernel-based Virtual Machine. It enables you to create as many virtual machines as you like. These machines can be of two types: LVM-based or non-LVM-based.
For LVM-based machines you can take live backups; for non-LVM-based VMs you cannot, i.e. they will be paused while the backup is taken. Please refer to the KVM Home Page. A minimal live-backup flow for the LVM case is sketched below.
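This is just a sketch of the idea, with the volume group, logical volume, and backup path names as placeholders:
lvcreate --size 1G --snapshot --name vm1-snap /dev/vg0/vm1-disk    # point-in-time snapshot while the VM keeps running
dd if=/dev/vg0/vm1-snap of=/backup/vm1-disk.img bs=1M              # copy the frozen snapshot
lvremove -f /dev/vg0/vm1-snap                                      # drop the snapshot when done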
QEMU is a generic and open source machine emulator and virtualizer. When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.
For managing KVM VMs you need to install libvirt, the virtualization library. It provides tools for starting, suspending, resuming, cloning, restarting, and listing virtual machines; a few examples are shown below. Please refer to the libvirt home page for more.
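For instance, with libvirt's virsh tool (vm1 is a placeholder for your VM's name):
virsh list --all        # list defined VMs and their state
virsh start vm1         # boot a VM
virsh suspend vm1       # pause it
virsh resume vm1        # continue running
virsh shutdown vm1      # ask the guest to shut down cleanly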
If you are working on a backup or recovery process, then I also suggest you go through this excellent Perl script; it will give you a fair idea of how backups and snapshots are taken for KVM VMs.
KVM-based virtual machines are not complicated once you go through the theory and start implementing them. I believe that once you start working with them you will find managing them fun.
Putting it in a nutshell:
QEMU: an emulator that translates the guest operating system's instructions into instructions for the host operating system. As you can guess, that translation has a certain cost, so you will not see the guest machine running as fast as the host machine.
For more info see the QEMU wiki
KVM (Kernel-based Virtual Machine): a kernel module in the host operating system that supports virtual machines in hardware. By support I mean that if your guest architecture is the same as your host architecture, there is no need to translate the instructions, since they can be executed directly by the host. For this, modern hardware is equipped with special registers and storage that KVM leverages. Also, KVM is only a module; some userspace program is needed to drive it, and that is again QEMU.
For more info see the KVM section in the same wiki.
QEMU-KVM: as mentioned above, KVM is only a module; QEMU (or another userspace program) is needed to use it. When KVM is used with QEMU, control transfers back and forth between QEMU and KVM during execution.
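The difference shows up directly on the QEMU command line; with a reasonably recent QEMU and disk.img as a placeholder guest image, the same machine can be run with pure translation or with KVM:
ls /dev/kvm                                            # present when the KVM module is loaded
qemu-system-x86_64 -accel tcg -m 2048 -hda disk.img    # pure emulation, QEMU translates everything
qemu-system-x86_64 -accel kvm -m 2048 -hda disk.img    # guest code runs directly on the CPU via KVM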
Talking about KVM means talking either about the virtualization technology or about the kernel modules (kvm.ko, kvm-intel.ko or kvm-amd.ko). Sometimes KVM is referred to as a virtual machine; this is not correct, because KVM does not by itself provide virtualized hardware.
Source
With a fresh CUDA 5.0 Linux install on CentOS 5.5, I am not able to run cuda-gdb. So I am wondering whether you still need a dedicated GPU for the Linux cuda-gdb? I tried it with the VESA device driver for X11, but got the same result. Profiling works, and running the app works, but trying to run cuda-gdb gives:
warning: no loadable sections found in added symbol-file system-supplied DSO at 0x2aaaaaaab000
Any suggestions?
cuda-gdb still needs a GPU that is not used by the graphical environment (e.g. if you are running Gnome/KDE/etc. you need a system with several GPUs, though not all of them have to be NVIDIA GPUs).
This particular message is not about that problem; you can ignore it. cuda-gdb will tell you if it fails because no GPU can be used for debugging.
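Two common workarounds, purely as a sketch (my_cuda_app is a placeholder binary, and the exact way to stop X depends on your distribution): either free the single GPU from X, or, if you have a second GPU, debug on that one:
sudo init 3                                      # drop to a runlevel without X so the GPU is free
cuda-gdb ./my_cuda_app
CUDA_VISIBLE_DEVICES=1 cuda-gdb ./my_cuda_app    # with two GPUs, debug on the one X is not using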