Let's say that I have a device, such as an Android phone (just for example), and I have the firmware for that device. Is there a method to emulate the entire firmware? Just like a virtual machine, but for firmware that is not designed to run on normal x86 processors. I was looking into it and I think QEMU might do what I need, but I wanted to see if anyone had experience with something similar.
Thanks, and sorry if it's a noob question.
PS: the firmware I have is designed to run on ARM processors.
You need to emulate the hardware.
An operating system or firmware interfaces directly with hardware: the display, touch screen, buttons, speakers, wireless chipset, and so on.
To make the operating system work on different hardware, you either need to modify it to accept the available hardware (which is more easily done with an open-source operating system like Android), or provide it with simulated hardware identical to the original device's.
Does the 'softmmu' mean that the virtual machine has a single linear address space available to machine and user mode? Or does it have some virtual memory capabilities that are implemented via software and not the underlying processor? Or maybe it means something different entirely?
-softmmu as a suffix in QEMU target names means "complete system emulation including an emulated MMU, for running entire guest OSes or bare metal programs". It is opposed to QEMU's -linux-user mode, which means "emulates a single Linux binary only, translating syscalls it makes into syscalls on the host". Building the foo-softmmu target will give you a qemu-system-foo executable; building foo-linux-user will give you a qemu-foo executable.
So a CPU emulated by -softmmu should provide all the facilities that the real guest CPU's hardware MMU provides, which usually means multiple address spaces which can be configured via the guest code setting up page tables and enabling the MMU.
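To make the distinction concrete, here is a minimal bare-metal sketch of the kind of program a -softmmu build (qemu-system-arm) can run. It is only a sketch under some assumptions: the emulated versatilepb board, whose PL011 UART0 data register is commonly listed at 0x101f1000, plus an arm-none-eabi toolchain, startup stub and linker script that are not shown here.

    /* Bare-metal sketch for qemu-system-arm -M versatilepb.
     * Assumption: the board's PL011 UART0 data register is at 0x101f1000.
     * A startup stub and linker script are needed but omitted here. */
    volatile unsigned int *const UART0_DR = (unsigned int *)0x101f1000;

    static void uart_puts(const char *s)
    {
        while (*s)
            *UART0_DR = (unsigned int)*s++;   /* QEMU echoes this on -serial stdio */
    }

    void main(void)                           /* freestanding entry point, never returns */
    {
        uart_puts("Hello from emulated ARM hardware\r\n");
        for (;;)
            ;
    }

You would then run the resulting image with something along the lines of qemu-system-arm -M versatilepb -nographic -kernel image.bin, with the exact flags depending on how the image was built.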
I develop on VS2012. I have 3 monitors connected to my PC with one GTX 960 graphics card.
I had read that it's impossible to debug CUDA on the same device that drives the display output. Maybe I'm reading it wrong, but when I go to Nsight -> Windows -> System Info -> Display Devices, I can see that the monitor uses my graphics card. Since I have only one graphics card and I can still debug (as the image shows in CUDA WarpWatch1), I deduce that either I can in fact debug on the same device that drives the display output, or it uses my built-in Intel HD Graphics but doesn't show it in Display Devices.
Despite what you have apparently read somewhere, CUDA (and Nsight) has supported debugging on active display GPUs using the WDDM driver for a number of years. You can see the exact matrix of supported hardware, drivers and debugging modes in the documentation here.
When CUDA was first introduced, debugging was limited to non-display cards. However, that limitation was removed some time ago on Windows and Linux with more recent hardware.
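If you want to check from code which devices the CUDA runtime exposes, and which of them have the display watchdog active (typically the case for a GPU driving a monitor under WDDM), a small sketch using the runtime API might look like the following; the file name and build line are only illustrative.

    /* Sketch: list CUDA devices and whether the kernel execution timeout
     * (display watchdog) is enabled on each. Build e.g. with:
     *   nvcc list_devices.cu -o list_devices */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No usable CUDA devices found\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, compute %d.%d, watchdog %s\n",
                   i, prop.name, prop.major, prop.minor,
                   prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
        }
        return 0;
    }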
I want to use the motherboard's on-board video as the primary display adapter and my NVIDIA graphics card as a dedicated CUDA processor. My first thought was to simply plug the monitor's VGA cable into the motherboard's VGA port and hope the BIOS was smart enough to use the on-board video as the display adapter when it booted. That didn't work. The BIOS must have detected the NVIDIA card and continued to use it as the display adapter. The next thing I looked for was a setting in the BIOS to tell it "don't use the NVIDIA 560 as the display adapter, use the on-board video instead". I searched through the BIOS and the Web, but either this cannot be done or I cannot figure out how to do it. The mobo is a BIOSTAR TH67+ LGA 1155. Windows 7 OS.
RESULTS SUMMARY (from answers provided below)
Enabling the Integrated Graphics Device (IGD) in the BIOS will allow the system to be driven from the on-board graphics even with the graphics card connected to the system bus. However, the graphics card then cannot be used for CUDA processing: Windows will not enable graphics devices unless a monitor is attached to them, so the normal driver stack cannot see them. Solution: use Linux, or attach a display to the graphics card but do not use it. The Tesla cards (GPGPU-only) are not recognized by Windows as graphics devices, so they don't suffer from this.
Also, a newer BIOSTAR motherboard, the TZ68A+, supports the Virtu drivers, which permit sophisticated simultaneous use of the graphics card and on-board video.
Looking at the BIOS manual (.zip), the setting you probably want is Chipset -> North Bridge -> Initiate Graphics Adapter. Try setting it to IGD (Integrated Graphics Device).
I believe this will happen automatically as the native video won't support CUDA. After installing the SDK, if you run DeviceQuery, do you see more than one result?
I believe H67 allows coexistence of both integrated and dedicated GPUs. Check out Lucid Virtu here: http://www.lucidlogix.com/driverdownloads-virtu.html; it allows switching GPUs on the fly. But I don't know if it affects the CUDA device query.
I never tried it on my rig, because it's X58; I just heard about it from Tom's Hardware. Try it out and let us know. Lucid Virtu is definitely worth a try: it's free, and it can cut your electric bill.
Let's say I have a piece of code that runs fine on an OS. Now, if I install that OS on a virtual machine (server virtualization), and run that code on that, is it possible that the code behaves differently?
If so, what are the prerequisites for that? For example, does it have to be compiled machine code (in other words, are interpreted languages safe?)? Does it have to be certain OS instructions? Specific virtualization technology (Xen, KVM, VMware..)?
Also, what are the possible different behaviors?
Yes. Like any machine, the virtual machine is just another computer (implemented in software instead of hardware).
For one, lots of commercial apps will blow up when you run them on a VM due to:
copy protection detecting the VM
copy protection fingerprinting your hardware, using undocumented features of the BIOS/kernel/hardware
Secondly, a VM is just another computer whose hardware is implemented in software instead of circuits/dies/microcode/magic. This means the VM must provide the emulated hardware either through pass-through or emulation. The fact that hardware is very diverse can cause all kinds of different behavior. Also note the possible lack of drivers for, or acceleration of, the emulated hardware.
But of course a typical business application, for example, is far less likely to rely on hardware details, since all it does is call some GUI API.
Interpreted languages are only safe from this to the extent that they stay "interpreted"; if the interpreted code calls out to native code, all of this becomes possible again.
For an example of something detecting that it's running under a VM, check this; it's just one of literally thousands of ways to detect a VM.
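As one illustration (not necessarily the method behind that link), a common check on x86 is the CPUID "hypervisor present" bit together with the hypervisor vendor leaf. A rough sketch for GCC/Clang follows; note that a hypervisor is free not to advertise itself this way.

    /* Sketch: check CPUID leaf 1, ECX bit 31 (hypervisor present), then read
     * the vendor string from leaf 0x40000000 ("KVMKVMKVM", "VMwareVMware",
     * "Microsoft Hv", ...). Uses GCC/Clang's <cpuid.h>; MSVC has __cpuid(). */
    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        __cpuid(1, eax, ebx, ecx, edx);
        if (!(ecx & (1u << 31))) {
            printf("Hypervisor bit not set; probably bare metal\n");
            return 0;
        }

        char vendor[13] = {0};
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &ecx, 4);
        memcpy(vendor + 8, &edx, 4);
        printf("Hypervisor detected: %s\n", vendor);
        return 0;
    }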
In theory the program should run exactly the same as on a physical machine.
In practice, however, there may be differences due to:
machine/OS configuration and drivers
load on the virtual machine host.
Differences in machine configuration are similar to the differences you would see between any two physical machines. Depending on how critical your application is to the end user, you should run the same set of tests you would run on a physical box to determine whether the environment is acceptable for use.
Depending on the virtualisation technology, the host may not be able to guarantee the client resources at specific times. This can lead to weird behavior on the client; potentially you would see more application errors due to I/O timeouts and memory starvation.
To successfully virtualise an application for production use, you need to do a bit of work to understand the resource profile of the application/client and of the virtual host.
Are there any open-source real-time operating systems out there? I've heard of real-time Linux, but most implementations seem to really be a proprietary RTOS (that you have to pay for) that runs Linux as a process, much the same way Ardence's RTX real-time system works for Windows.
EDIT: I should clarify that I'm looking for an RTOS that works with multi-core x86-family CPUs.
FreeRTOS provides the underlying kernel. I've used it in some embedded apps and it seems robust. But it really depends on your application.
http://www.freertos.org/
Check out eCos, a free, open-source, real-time operating system. (Supports x86, not sure about multi-core.)
RTLinux is also available.
eCos is free (but you can get paid support). It supports the Intel x86 architecture and multi-processor systems. As for timing requirements: my experience with real-time Linux systems has not been too good. Although response times may be good on average, I've seen cases where the worst case over a few days was 10 or even 100 times that. I guess this partly depends on the quality of the drivers, partly on the scheduler itself.
But I guess it boils down to whether your system demands hard or soft real-time, what the timing constraints are, what kind of application you need to run, and how streamlined a development system you require.
There are hard real-time extensions to the Linux kernel. You might want to check some of those out.
Good examples are RTAI and LXRT
OpenSolaris has real-time capabilities; however, you should watch out if you decide to use it for real-time development: pretty much all I/O can cause priority inversions in the kernel (low-priority system worker threads can starve and cause high-priority threads to be blocked, e.g. in STREAMS code).
I have also been using the FreeRTOS operating system, which is available for free under a modified GNU licence, as a paid commercial licence version, or as an expensive safety-certified version (SafeRTOS).
From the website, the x86 port is described as follows:
x86
* Supported processor families: Any x86 compatible running in Real mode only, plus a Win32 simulator
* Supported tools: Open Watcom, Borland, Paradigm, plus Visual Studio for the WIN32 simulator
This OS provides pre-emptive or co-operative task scheduling, with queues, semaphores and per-task priorities. It does not provide the sort of I/O or file library functions that come with larger OS implementations like Linux.
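To give a feel for that API, here is a rough sketch of two tasks talking through a queue. It assumes you already have a working FreeRTOS port and FreeRTOSConfig.h for your target; the task names and priorities are made up for the example.

    /* Sketch: a producer task sends a counter to a consumer task via a queue.
     * Assumes a working FreeRTOS port and FreeRTOSConfig.h for the target. */
    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    static QueueHandle_t xNumberQueue;

    static void vProducerTask(void *pvParameters)
    {
        uint32_t ulValue = 0;
        (void)pvParameters;
        for (;;) {
            xQueueSend(xNumberQueue, &ulValue, portMAX_DELAY);
            ulValue++;
            vTaskDelay(100);             /* block for 100 ticks */
        }
    }

    static void vConsumerTask(void *pvParameters)
    {
        uint32_t ulReceived;
        (void)pvParameters;
        for (;;) {
            if (xQueueReceive(xNumberQueue, &ulReceived, portMAX_DELAY) == pdPASS) {
                /* act on ulReceived, e.g. toggle an LED or write to a UART */
            }
        }
    }

    int main(void)
    {
        xNumberQueue = xQueueCreate(8, sizeof(uint32_t));

        /* higher number means higher priority, so the consumer pre-empts the producer */
        xTaskCreate(vProducerTask, "prod", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(vConsumerTask, "cons", configMINIMAL_STACK_SIZE, NULL, 2, NULL);

        vTaskStartScheduler();           /* does not return if the port is set up correctly */
        for (;;)
            ;
    }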
What are your exact requirements? Perhaps you can use vanilla Linux - it doesn't provide real-time guarantees but might be good enough. Some people find that it's not as bad as the real-time vendors try to make out.
Vanilla Linux DOES have different scheduling policies as well, but not a lot of people know that.
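For instance, a stock kernel will already accept the SCHED_FIFO policy for a privileged process. A minimal sketch (this is soft real-time at best, and the priority value here is just an example):

    /* Sketch: request the SCHED_FIFO policy from a vanilla Linux kernel.
     * Needs root or CAP_SYS_NICE; it does not turn Linux into a hard RTOS. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sched.h>

    int main(void)
    {
        struct sched_param param;
        memset(&param, 0, sizeof(param));
        param.sched_priority = 50;       /* valid range for SCHED_FIFO is 1..99 */

        if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {  /* 0 = this process */
            fprintf(stderr, "sched_setscheduler failed: %s\n", strerror(errno));
            return 1;
        }
        printf("Now running under SCHED_FIFO at priority %d\n", param.sched_priority);
        /* time-critical work would go here */
        return 0;
    }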
Prex is under BSD License.
There is the S.Ha.R.K. Project. It works with x86 CPUs but I don't know if it handles all cores of a CPU.
Well, this is not open source, but did you know that Windows CE is a hard real-time operating system and that it has an x86 port? I don't know, however, whether it supports multi-core CPUs. If this is a commercial project, you should definitely consider it.
There is also MicroC/OS-II, which has an x86 port, but as above, I don't know if it supports multiple cores. It is free for non-commercial applications.
There are real-time extensions to Linux, as already mentioned by someone else. Have a look at xenomai.org.
I'm not so sure about the multiprocessor issue. What exactly do you want to do on your multiple processors?
BeRTOS looks quite interesting. But for x86 it supports "emulator only". Not sure why though.