Are There Any Open Source Real Time Operating Systems for x86? [closed]

Are there any open source real-time operating systems out there? I've heard of real-time Linux, but most implementations seem to really be a proprietary RTOS (that you have to pay for) that runs Linux as a process -- much the same way Ardence's RTX real-time system works for Windows.
EDIT: I should clarify that I'm looking for RTOS to work with multi-core x86-family CPUs.

FreeRTOS provides the underlying kernel. I've used it in some embedded apps and it seems robust, but it really depends on your application.
http://www.freertos.org/

Check out eCos, a free, open-source, real-time operating system. (It supports x86; not sure about multi-core.)
RTLinux is also available

eCos is free (but you can get paid support). It supports the Intel x86 architecture and multi-processor systems. Whether real-time Linux fits depends on your timing requirements: my experience with it has not been great. Although response time may be good on average, I've seen cases where the worst case over a few days was 10 or even 100 times as long. I guess this partly depends on the quality of the drivers, partly on the scheduler itself.
But I guess it boils down to whether your system demands hard or soft real-time, what the timing constraints are, what kind of application you need to run, and how streamlined a development system you require.

There are hard real-time extensions to the Linux kernel. You might want to check some of those out.
Good examples are RTAI and LXRT

OpenSolaris has real-time capabilities; however, you should watch out if you decide to use it for real-time development: pretty much any I/O can cause priority inversions in the kernel (low-priority system worker threads can starve and leave high-priority threads blocked, e.g. in STREAMS code).

I have also been using the FreeRTOS operating system, which is available for free under a modified GNU licence, as a paid commercial licence version, or as an expensive safety-certified version (SafeRTOS).
From the website, the x86 port is described as follows:
x86
* Supported processor families: any x86 compatible running in real mode only, plus a Win32 simulator
* Supported tools: Open Watcom, Borland, Paradigm, plus Visual Studio for the Win32 simulator
This OS provides pre-emptive or co-operative task scheduling, with queues, semaphores and per-task priorities. It does not provide the sort of I/O or file-system libraries that come with larger OS implementations like Linux.
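To give a feel for what that looks like in practice, here is a minimal sketch using the standard FreeRTOS task and queue API (xTaskCreate, xQueueCreate and friends); the task names, priorities and queue size are purely illustrative, and the macro names follow current FreeRTOS releases rather than the older x86 port described above:

/* Minimal FreeRTOS sketch: a producer task sends values to a queue,
 * a higher-priority consumer task blocks on it. Sizes/priorities are arbitrary. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t xQueue;

static void vProducerTask(void *pvParameters)
{
    uint32_t ulValue = 0;
    (void)pvParameters;
    for (;;) {
        xQueueSend(xQueue, &ulValue, portMAX_DELAY);  /* block until space is free */
        ulValue++;
        vTaskDelay(pdMS_TO_TICKS(10));                /* run roughly every 10 ms */
    }
}

static void vConsumerTask(void *pvParameters)
{
    uint32_t ulReceived;
    (void)pvParameters;
    for (;;) {
        if (xQueueReceive(xQueue, &ulReceived, portMAX_DELAY) == pdPASS) {
            /* act on ulReceived here */
        }
    }
}

int main(void)
{
    xQueue = xQueueCreate(8, sizeof(uint32_t));

    /* Higher number = higher priority; the consumer pre-empts the producer. */
    xTaskCreate(vProducerTask, "prod", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    xTaskCreate(vConsumerTask, "cons", configMINIMAL_STACK_SIZE, NULL, 2, NULL);

    vTaskStartScheduler();  /* hands control to the kernel; never returns if it starts */
    for (;;);
}

That is essentially all the kernel gives you: tasks, queues, semaphores and the scheduler. Drivers, file systems and networking have to come from elsewhere.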

What are your exact requirements? Perhaps you can use vanilla Linux - it doesn't provide real-time guarantees but might be good enough. Some people find that it's not as bad as the real-time vendors try to make out.
Vanilla Linux DOES have different scheduling policies as well (SCHED_FIFO and SCHED_RR in addition to the default SCHED_OTHER), but not a lot of people know that.
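As a minimal illustration (assuming a POSIX/Linux system and sufficient privileges; the priority value 50 is arbitrary), switching a process to the SCHED_FIFO policy looks like this:

/* Sketch: request the SCHED_FIFO real-time policy for the calling process.
 * Requires appropriate privileges (root or CAP_SYS_NICE). */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };  /* valid range is 1..99 for SCHED_FIFO */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) { /* pid 0 = this process */
        perror("sched_setscheduler");
        return 1;
    }
    /* From here on, the process runs ahead of all SCHED_OTHER tasks until it
     * blocks or yields; it can still be pre-empted by higher real-time
     * priorities and is not immune to page faults or long kernel paths. */
    return 0;
}

This buys much better average latency than the default policy, but on a vanilla kernel it still gives no hard worst-case guarantee.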

Prex is under BSD License.

There is the S.Ha.R.K. project. It works with x86 CPUs, but I don't know whether it can use all the cores of a CPU.

Well, this is not open source, but did you know that Windows CE is a hard real-time operating system and that it has an x86 port? I don't know, however, whether it can support multi-core CPUs. If this is a commercial project, you should definitely consider it.
There is also MicroC/OS-II, which has an x86 port, but as above, I don't know if it supports multiple cores. It is free for non-commercial applications.

There are real-time extensions to Linux, as already mentioned by someone else. Have a look at xenomai.org.
I'm not so sure about the multiprocessor issue. What exactly do you want to do on your multiple processors?

BeRTOS looks quite interesting. But for x86 it supports "emulator only". Not sure why though.

Related

What is the real difference between firmware and embedded software?

I am searching for the real difference between firmware and embedded software.
On the internet it is written that firmware is a type of embedded software, but not vice versa; beyond that, the classic BIOS example is very old.
They both reside in non-volatile memory. One suggested difference is that embedded software is more like application programming, with an RTOS and a file system, and can run from RAM.
If I don't use an RTOS or RAM and only use flash memory, does that mean my embedded software is firmware?
Is it the memory layout that actually makes the real difference?
The answers on the internet lack technical explanation and are not satisfying.
Thank you very much.
They are not distinctly separate things, or even well defined. Firmware is a subset of software; the term typically implies that it is in read-only memory:
Software refers to any machine executable code - including "firmware".
Firmware refers to software in read-only memory
Read-only memory in this context includes re-writable memory such as flash or EPROM that requires a specific erase/write operation and is not simply random-access writable.
The distinction between RAM and ROM execution is not really a distinction between firmware and software. Many embedded systems load executable code from ROM and execute it from RAM for performance reasons, while others execute directly from ROM. Rather, if the end-user cannot easily modify or replace the software without special tools or a bootloader, then it might be regarded as "firm". If, on the other hand, a normal end-user can modify, update or replace the software using facilities on the system itself (by copying a file from removable media or a network, for example), then it is not firmware. Consider, for example, the difference between updating your PC's BIOS and updating Microsoft Office: the former requires a special procedure distinct from the normal operating-system services for loading and running software.
For example, the operating system, bootloader and BIOS of a smart phone might be considered firmware. The apps a user loads from an app-store are certainly not firmware.
In other contexts, "firmware" might refer to the configuration of a programmable logic device such as an FPGA, as opposed to sequentially executed processor instructions. That is rather a niche distinction, but a useful one in systems employing both programmable logic and software execution.
Ultimately you would use the term "firmware" to imply some level of "permanence" of software in a system, but there is a spectrum, so you would use the term in whatever manner is useful in the context of your particular system. For example, I am working on a system where all the code runs from flash, so I only ever use the term software to refer to it, because there is no need to distinguish it from any other kind of software in the system.

KVM as hypervisor choice in GCE

As per Wikipedia, Google Compute Engine uses KVM as its hypervisor. I can see a mention of vCPUs while creating an instance.
Why KVM? Why not VMware or Xen?
I mean, what is the specific reason for choosing KVM as the hypervisor?
PS:
Even Xen is an open source product.
There were a number of factors in the decision, you might not be surprised to learn. :-)
One important factor was compatibility between KVM and the existing isolation/scaling mechanisms at Google (cgroups, a.k.a. "containers"). This allows Google to reuse the same mechanisms it uses to ensure the performance of applications like web search and Gmail to provide consistent performance between VMs scheduled on the same machine, which helps GCE avoid noisy-neighbor problems.
As you're probably aware, Google has had a long history of Linux kernel development; using KVM allows Google to leverage that talent for GCE. In addition, the hypervisor/hardware-emulation split in KVM (where the hypervisor implemented by KVM only emulates a few low-level devices/features and defers the remaining emulation to the process that opens /dev/kvm) allows for the development of virtual devices that have access to the full range of user-space software, including infrastructure like Colossus and BigTable where needed.
Xen, VMware, and HyperV are also great hypervisors and machine emulators, but hopefully that gives you a glimpse into some of the reasons that KVM was a good fit for Google.

Emulating device firmware on ubuntu

Let's say that I have a device, such as an Android phone (just for example), and I have the firmware for that device. Is there a method to emulate the entire firmware? Just like a virtual machine, but for firmware that is not designed to run on normal x86 processors. I was looking into it and I think QEMU might do what I need, but I wanted to see if anyone had any experience with something similar.
Thanks, and sorry if it's a noob question.
PS: the firmware I have is designed to run on ARM processors.
You need to emulate the hardware.
An operating system or firmware directly interfaces with hardware: the display, touch screen, buttons, speakers, wireless chipset, etc.
To make the operating system work on different hardware, you either need to port it to the available hardware (which is more easily possible in the case of an open-source operating system like Android), or provide it with simulated hardware identical to the original device.

Writing x64-based apps. Is there really any point?

Obviously, it probably has some (or many) advantages over 32-bit that I'm clearly not aware of. So, what are they?
I just don't get it; so many things still aren't supported on x64 PCs. For example, the 64-bit versions of Internet Explorer 8 and 9 don't support Flash, and when I manage to get it working, it then un-works and brings up a message telling me that 64-bit IE doesn't currently support Flash, or that Flash isn't available on 64-bit browsers.
I have a 64-bit PC now with Windows 7, and am still writing 32-bit apps, and they all work perfectly (minus a few bugs here and there, which would appear whether you're using 32- or 64-bit). Why should/would one want to develop for 64-bit systems? I don't see how they are any different and, if I were to learn more about developing for 64-bit, where would you recommend I start?
The most commonly cited reason for 64-bit applications is access to more memory. Database servers, for one obvious example, can benefit tremendously when most (or all) the data they're working with is available in memory instead of being stored on disk.
You can also gain extra speed, especially for floating point-intensive applications (I've seen a 3x speed-up fairly routinely, though it also depends somewhat on the CPU).
Some other applications, however, gain little or even lose some by moving to 64 bits. The CPU still has the same bandwidth to memory, but all your pointers double in size, so if you're using pointers a lot it can end up a net loss.
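To make the pointer-size cost concrete, here is a small sketch; the printed sizes are typical for common 32-bit and LP64/LLP64 64-bit compilers, not guaranteed by the language:

#include <stdio.h>

struct node {
    struct node *next;   /* 4 bytes on a 32-bit build, 8 bytes on 64-bit */
    struct node *prev;
    int          value;  /* 4 bytes either way (plus padding on 64-bit) */
};

int main(void)
{
    /* Typically prints 4 and 12 when compiled 32-bit, 8 and 24 when compiled 64-bit. */
    printf("sizeof(void *)      = %lu\n", (unsigned long)sizeof(void *));
    printf("sizeof(struct node) = %lu\n", (unsigned long)sizeof(struct node));
    return 0;
}

For pointer-heavy data structures that doubling (plus extra padding) directly shrinks how much of your working set fits in cache.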
It depends what you are doing.
If you're writing a standalone app that doesn't talk to anything else, isn't going to need a huge amount of memory and wouldn't benefit from the extra registers x64 provides then you won't get much (except bloated structure sizes :)) from making an x64 version.
OTOH, for code that runs in-process, x64 is kinda viral. The shell itself is 64-bit now so if you want to plug into it you have to be 64-bit as well. (Or at least provide an adapter which can talk to the 64-bit world.) As a result, it's often easier to compile everything as 64-bit so you don't have the hassle of marshalling calls between the two worlds.
(While still having a 32-bit build for 32-bit OS, of course.)
Edit: Forgot to say, it's also useful to target x64 if you want to present the "real" view of a machine. 64-bit Windows "lies" to 32-bit processes about various things for compatibility reasons. You can disable/bypass the lies but doing so without breaking things (e.g. 3rd party DLLs) can be tricky and it's best avoided.
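As a small illustration of that, a 32-bit build can at least detect that it is running under the WOW64 layer (and hence being "lied to" about the file system and registry). IsWow64Process is the real Win32 call; error handling is kept minimal here, and on very old toolchains you may need to resolve the function via GetProcAddress:

/* Sketch: let a 32-bit build find out whether it is running under WOW64,
 * i.e. on 64-bit Windows where file-system and registry redirection apply. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BOOL isWow64 = FALSE;

    if (IsWow64Process(GetCurrentProcess(), &isWow64)) {
        printf(isWow64 ? "32-bit process on 64-bit Windows (WOW64)\n"
                       : "Running natively\n");
    }
    return 0;
}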
64-bit software can directly address far more than 4 GB of memory (a 32-bit process is in practice limited to roughly 2-3 GB), and it uses additional hardware (extra general-purpose registers, etc.) available on modern CPUs, thus improving performance. These are the two major reasons for migrating to 64 bit.
Normally you would develop cross-platform software and your compiler would take care of using all the 64-bit features.
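Where you do need to special-case something yourself, the usual approach is a compile-time check on the toolchain's predefined macros; this sketch assumes MSVC- or GCC/Clang-style macros:

#include <stdio.h>

int main(void)
{
#if defined(_WIN64) || defined(__x86_64__) || defined(__LP64__)
    puts("Built as a 64-bit target");   /* _WIN64 (MSVC), __x86_64__/__LP64__ (GCC/Clang) */
#else
    puts("Built as a 32-bit target");
#endif
    return 0;
}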

GPU Emulator for CUDA programming without the hardware [closed]

Question: Is there an emulator for a Geforce card that would allow me to program and test CUDA without having the actual hardware?
Info:
I'm looking to speed up a few simulations of mine in CUDA, but my problem is that I'm not always around my desktop for doing this development. I would like to do some work on my netbook instead, but my netbook doesn't have a GPU. Now, as far as I know, you need a CUDA-capable GPU to run CUDA. Is there a way to get around this? It would seem like the only way is a GPU emulator (which obviously would be painfully slow, but would work). Whatever way there is to do this, I would like to hear it.
I'm programming on Ubuntu 10.04 LTS.
For those who are seeking the answer in 2016 (and even 2017) ...
Disclaimer
I failed to emulate the GPU after all.
It might be possible to use gpuocelot if you satisfy its list of dependencies.
I tried to get an emulator working on BunsenLabs (Linux 3.16.0-4-686-pae #1 SMP Debian 3.16.7-ckt20-1+deb8u4 (2016-02-29) i686 GNU/Linux).
I'll tell you what I've learnt.
nvcc used to have a -deviceemu option back in CUDA Toolkit 3.0
I downloaded CUDA Toolkit 3.0, installed it and tried to run a simple program:
#include <stdio.h>

__global__ void helloWorld() {
    printf("Hello world! I am %d (Warp %d) from %d.\n",
           threadIdx.x, threadIdx.x / warpSize, blockIdx.x);
}

int main() {
    int blocks, threads;
    scanf("%d%d", &blocks, &threads);
    helloWorld<<<blocks, threads>>>();
    cudaDeviceSynchronize();
    return 0;
}
Note that in CUDA Toolkit 3.0, nvcc was in /usr/local/cuda/bin/.
It turned out that I had difficulties with compiling it:
NOTE: device emulation mode is deprecated in this release
and will be removed in a future release.
/usr/include/i386-linux-gnu/bits/byteswap.h(47): error: identifier "__builtin_bswap32" is undefined
/usr/include/i386-linux-gnu/bits/byteswap.h(111): error: identifier "__builtin_bswap64" is undefined
/home/user/Downloads/helloworld.cu(12): error: identifier "cudaDeviceSynchronize" is undefined
3 errors detected in the compilation of "/tmp/tmpxft_000011c2_00000000-4_helloworld.cpp1.ii".
I've found on the Internet that if I used gcc-4.2 or similarly ancient instead of gcc-4.9.2 the errors might disappear. I gave up.
gpuocelot
The answer by Stringer has a link to a very old gpuocelot project website, so at first I thought that the project was abandoned in 2012 or so. Actually, it was abandoned a few years later.
Here are some up to date websites:
GitHub;
Project's website;
Installation guide.
I tried to install gpuocelot following the guide. I had several errors during installation though and I gave up again. gpuocelot is no longer supported and depends on a set of very specific versions of libraries and software.
You might try to follow this tutorial from July 2015, but I don't guarantee it'll work. I haven't tested it.
MCUDA
The MCUDA translation framework is a linux-based tool designed to
effectively compile the CUDA programming model to a CPU architecture.
It might be useful. Here is a link to the website.
CUDA Waste
It is an emulator for use on Windows 7 and 8, though I haven't tried it. It doesn't seem to be developed anymore (the last commit is dated Jul 4, 2013).
Here's the link to the project's website: https://code.google.com/archive/p/cuda-waste/
CU2CL
Last update: 12.03.2017
As dashesy pointed out in the comments, CU2CL seems to be an interesting project. It seems to be able to translate CUDA code to OpenCL code. So if your GPU is capable of running OpenCL code then the CU2CL project might be of your interest.
Links:
CU2CL homepage
CU2CL GitHub repository
This response may be too late, but it's worth noting anyway. GPU Ocelot (of which I am one of the core contributors) can be compiled without CUDA device drivers (libcuda.so) installed if you wish to use the Emulator or LLVM backends. I've demonstrated the emulator on systems without NVIDIA GPUs.
The emulator attempts to faithfully implement the PTX 1.4 and PTX 2.1 specifications which may include features older GPUs do not support. The LLVM translator strives for correct and efficient translation from PTX to x86 that will hopefully make CUDA an effective way of programming multicore CPUs as well as GPUs. -deviceemu has been a deprecated feature of CUDA for quite some time, but the LLVM translator has always been faster.
Additionally, several correctness checkers are built into the emulator to verify that memory accesses are aligned, that accesses to shared memory are properly synchronized, and that global memory dereferences hit allocated regions of memory. We have also implemented a command-line interactive debugger, inspired largely by gdb, to single-step through CUDA kernels, set breakpoints and watchpoints, etc. These tools were specifically developed to expedite the debugging of CUDA programs; you may find them useful.
Sorry about the Linux-only aspect. We've started a Windows branch (as well as a Mac OS X port) but the engineering burden is already large enough to stress our research pursuits. If anyone has any time and interest, they may wish to help us provide support for Windows!
Hope this helps.
[1]: GPU Ocelot - https://code.google.com/archive/p/gpuocelot/
[2]: Ocelot Interactive Debugger - http://forums.nvidia.com/index.php?showtopic=174820
You can also check out the gpuocelot project, which is a true emulator in the sense that PTX (the bytecode into which CUDA code is compiled) is emulated.
There's also an LLVM translator; it would be interesting to test whether it's faster than -deviceemu.
The CUDA toolkit had an emulator built into it until the CUDA 3.0 release cycle. If you use one of these very old versions of CUDA, make sure to use -deviceemu when compiling with nvcc.
https://github.com/hughperkins/cuda-on-cl lets you run NVIDIA® CUDA™ programs on OpenCL 1.2 GPUs (full disclosure: I'm the author)
Be careful when you're programming using -deviceemu as there are operations that nvcc will accept while in emulation mode but not when actually running on a GPU. This is mostly found with device-host interaction.
And as you mentioned, prepare for some slow execution.
GPGPU-Sim is a GPU simulator that can run CUDA programs without using a GPU.
I created a Docker image with GPGPU-Sim installed, in case that is helpful.