Open source segmented interrupt architecture RTOS? [closed]

A segmented interrupt architecture RTOS can boast "zero interrupt latency" using clever partitioning of work between the interrupt handler and the scheduler. There are at least a couple of proprietary, closed-source implementations of this approach, e.g., AVIX and Quasarsoft's Q-Kernel.
A related SO question asked about open source RTOS links, but all of the suggested operating systems used unified interrupt architectures.
Is there any open source segmented interrupt architecture RTOS?

I believe this is also sometimes referred to as "deferred interrupt" servicing or handling, so it may be worth using that term to find candidates.
It is perhaps possible to 'fake' it by reserving the highest-priority task levels for ISR servicing: say you have 32 interrupt vectors, you would reserve priority levels 0 to 31 (assuming zero is highest) for the ISR2 levels. Each real interrupt then simply sets an event flag signalling its ISR2 task. It remains your responsibility in this case not to call blocking functions in the ISR2 tasks, but non-blocking kernel services can be used freely.
I am not sure whether this gives you exactly the same effect (I'd have to study it more fully than I have - or care to right now), but it does mean that you can do minimal work in the true ISR, and a true ISR will always preempt any ISR2.
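To make the scheme concrete, here is a minimal sketch of the ISR/ISR2 split using FreeRTOS-style task notifications. FreeRTOS itself is a unified-architecture kernel, so this only approximates segmented behaviour, and the interrupt handler name and task setup here are assumptions, not a prescribed design:

```cpp
#include "FreeRTOS.h"
#include "task.h"

// Handle of the high-priority "ISR2" task that does the real work.
static TaskHandle_t isr2Task = nullptr;

// True ISR: do the bare minimum, then signal the ISR2 task and return.
extern "C" void UART_IRQHandler(void)  // vector name is an assumption
{
    BaseType_t higherPriorityWoken = pdFALSE;
    vTaskNotifyGiveFromISR(isr2Task, &higherPriorityWoken);
    portYIELD_FROM_ISR(higherPriorityWoken);  // switch straight to ISR2 if needed
}

// ISR2 task: created (with xTaskCreate) at a priority above all ordinary
// tasks. It may use any non-blocking kernel service freely; the only
// blocking call is the wait for the ISR's signal.
static void uartIsr2Task(void *params)
{
    (void)params;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  // wait for the true ISR
        // ... drain the UART FIFO, post to queues, etc. ...
    }
}
```

Note the ordering this buys you: because the ISR2 tasks sit above every application task, a true ISR always preempts everything, and ISR2 work always preempts ordinary work, which is the layering a segmented kernel enforces natively.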


Dynamic Function Analysis [closed]

I found this program that appears to assist with locating when a function is called in a program. It seems quite handy and I am wondering if there is more out there like it.
http://split-code.com/cda.html
https://www.youtube.com/watch?v=P0UXR861WYM
What exactly would this program be classified as? Are there other similar programs? Is this widely used and I'm just a fool?
As the link you provided states, this tool is a
dynamic code analysis process instrumentation tool
Dynamic: it is used to inspect programs at runtime.
Code analysis: it provides information about the code that is executing (?)
Process: it analyzes code running in a process (specifically, a 32-bit x86 process under Windows).
Instrumentation: this tool uses debugging techniques to allow automatic tracing (into every inter-modular function call) and profiling. It also allows for PIN-like (although probably not as neatly implemented) callbacks.
I must mention that the author's use of the word analysis is somewhat inaccurate. The software (as far as I understand it) does not analyze code; it only reports inter-modular and intra-modular call information at runtime. IDA, on the other hand, is a real analysis tool, because it provides information like x-refs and a string view, which can only be produced by in-depth analysis.
There is no 'short name' for this specific type of program. It would be classified as some sort of instrumentation software.
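Not the linked tool itself, but if you want a feel for how this class of instrumentation works, GCC and Clang can inject enter/exit hooks around every function they compile via -finstrument-functions. A minimal sketch (the hook names are fixed by the compiler; everything else here is illustrative):

```cpp
#include <cstdio>

// Build with: g++ -finstrument-functions trace.cpp
// The compiler inserts calls to these two hooks around every function it
// compiles. The hooks themselves must be excluded, or they would recurse.
extern "C" {

__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *fn, void *call_site)
{
    std::fprintf(stderr, "enter %p (called from %p)\n", fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *fn, void *call_site)
{
    std::fprintf(stderr, "exit  %p (called from %p)\n", fn, call_site);
}

} // extern "C"

static int square(int x) { return x * x; }

int main()
{
    return square(7) == 49 ? 0 : 1;  // enter/exit lines appear on stderr
}
```

Mapping the raw addresses back to symbol names (with dladdr or addr2line, say) is left out for brevity. Tools like the one linked above do essentially this, but by hooking the binary at runtime rather than at compile time.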

What is meant by the term "inplace" [closed]

I have a vague sense of the meaning of this term, usually in the context of data structures and algorithms that rely on swap variables to shuttle data around within a container. But I'd like to hear some richer definitions and the nuances people attach to the term. (An interesting sub-question: what verbs can come before "in place"? Moving in place? Transferring in place? Copying in place?) Taking a shot at it myself, I'd say doing something in place means transferring elements of a container from one memory location to another without recourse to a second copy of the whole container.
"inplace" usually means "with O(1) additional space".
This term is often used to describe an alternative to an operation that would normally involve copying of some kind: the alternative achieves the same result but avoids the copy.
One example comes from C++. Before the C++11 revision of the language, adding an element to a container could not avoid a copy operation of some kind, which could get expensive when the container holds non-trivial objects.
If a completely new class instance was to be added to the container, it was pretty much a foregone conclusion that what ended up happening was: 1) construction of the class instance, 2) a copy construction into the container, and 3) destruction of the first instance.
C++11 added language features that make it possible to avoid the copy, with the new class instance getting constructed "in place", or "emplaced", inside the container.
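A minimal sketch of the difference (the Widget type is just for illustration):

```cpp
#include <string>
#include <utility>
#include <vector>

struct Widget {
    std::string name;
    int id;
    Widget(std::string n, int i) : name(std::move(n)), id(i) {}
};

int main()
{
    std::vector<Widget> v;

    // Constructs a temporary Widget, then moves it into the vector
    // (pre-C++11, this was a copy), then destroys the temporary.
    v.push_back(Widget("a", 1));

    // Forwards the arguments and constructs the Widget directly in the
    // vector's storage: no temporary at all.
    v.emplace_back("b", 2);
}
```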

Was the first operating system programmed in binary? And how did those programmers know what to do? [closed]

Was the first operating system programmed in binary?
I'm just curious, because it looks like it must have been programmed in binary. But if it really was binary, how would those first programmers know what to write? Weren't they working in total blindness?
Also, can you give me an example of a command in binary? (A function from the first operating system, if possible.)
The very first computers (like e.g. ENIAC) did not have any operating system. People programmed them in, e.g., binary! Then someone decided to develop a monitor (which was later called, and evolved into, an operating system).
Even in the 1960s some computers (like the IBM 1620) were able to start without any code (at that time they had no firmware and no ROM). I am old enough to have played, as a teenager, on one (in a museum): by setting special switches, you were able to key in the few machine instructions (in BCD) needed to load the rest of the system (from punched tape).
It is a classic bootstrapping problem. J. Pitrat's blog on bootstrapping artificial intelligence should have useful references.
Read about the history of computing software and the history of operating systems.

What is the absolutely fastest way to output a signal to external hardware from a modern PC? [closed]

I was wondering: what is the absolutely fastest way (lowest latency) to produce an external signal (for example, a CMOS state change from 0 to 1 on an electrical wire connected to another device) from a PC, counting from the moment the CPU's assembler program knows that the signal must be produced?
I know that network devices, USB, and VGA monitor output have large latency compared to other interfaces (SATA, PCI-E). Which interface, or what hardware modification, can provide near-zero output latency from, let's suppose, an assembler program?
I don't know if it is really the fastest interface available, because that also depends on your definition of "external", but http://en.wikipedia.org/wiki/InfiniBand certainly comes close to what your question aims at. Latency is 200 nanoseconds and below in certain scenarios.
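For the "wire connected to another device" case specifically, one classic low-latency trick on a PC is raw port I/O to a legacy parallel port, which bypasses the whole driver stack. A minimal sketch, assuming Linux on x86, root privileges, and a parallel port at the conventional base address 0x378:

```cpp
#include <sys/io.h>  // ioperm, outb (glibc, x86 Linux only)
#include <cstdlib>

int main()
{
    const unsigned short LPT1_DATA = 0x378;  // assumed port base address

    // Request access to the one-byte data register; requires root.
    if (ioperm(LPT1_DATA, 1, 1) != 0)
        return EXIT_FAILURE;

    outb(0x01, LPT1_DATA);  // drive data pin D0 high
    outb(0x00, LPT1_DATA);  // ... and low again

    return EXIT_SUCCESS;
}
```

Each outb takes on the order of a microsecond (legacy port I/O is slow by modern bus standards), but there is no buffering, scheduling, or protocol handshake between the instruction and the pin, which is what matters for the "counting from the moment the program knows" part of the question.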

Where is EXC_BAD_ACCESS documented? [closed]

One of the more commonplace debugging errors in my own development (Mac, iOS) is EXC_BAD_ACCESS. Despite its commonness, its origin and precise meaning remain mysterious. Google lists many occurrences of the error, but the only explanation I could find is informal and incomplete.
I know that this exception (if that's the proper term for it) means that the code has attempted to access an address to which it does not have read and/or write privileges—the null address, for example, or an address outside of the process's address space. But this is an intuitive interpretation based on my prior experience with virtual memory and protected memory systems. I have never seen EXC_BAD_ACCESS documented anywhere, and indeed I'm not sure "who" is sending me this exception—the CPU, Mac OS, UNIX, the runtime, the debugger?—so I don't know who to ask (that is, what class of documentation to consult). I would like to know, for example, what the "code" that is listed with the exception means. Or another example: what other classes of similar exceptions (presumably also tagged with "EXC_") might also come from the same source?
Where can I find an explanation of EXC_BAD_ACCESS, its codes and general semantics, from an authoritative source? What is the authoritative source—who is actually detecting and throwing the exception?
The only official documentation I've been able to find for EXC_BAD_ACCESS is a Technical Q&A called Finding EXC_BAD_ACCESS bugs in a Cocoa project. It's dated and only confirms what you already know:
This kind of problem is usually the result of over-releasing an object. It can be very confusing, since the failure tends to occur well after the mistake is made. The crash can also occur while the program is deep in framework code, often with none of your own code visible in the stack.
Indeed, it can be very confusing. At least Apple acknowledges that much. :)
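For what it's worth, the exception is easy to provoke on purpose, which is a quick way to see exactly what the debugger reports. As far as I know it originates in the Mach layer of the kernel (EXC_BAD_ACCESS is declared in <mach/exception_types.h>), and a minimal trigger looks like this:

```cpp
int main()
{
    int *p = nullptr;  // address 0 is unmapped in the process
    *p = 42;           // the CPU faults here, and the kernel reports
                       // EXC_BAD_ACCESS to the debugger
    return 0;
}
```

Running this under Xcode stops on the marked line with EXC_BAD_ACCESS and an accompanying code such as KERN_INVALID_ADDRESS.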