Are there any CPU-state bits indicating being in an exception/interrupt handler in x86 and x86-64?

Are there any CPU-state bits indicating being in an exception/interrupt handler in x86 and x86-64? In other words, can we tell whether the main thread or an exception handler is currently executing based only on the CPU registers' state?

No, there's no bit in the CPU itself (e.g. a control register) that means "we're in an exception or interrupt handler".
But there is hidden state indicating that you're in an NMI (Non-Maskable Interrupt) handler. Since you can't block NMIs by disabling interrupts, and unblockable arbitrary nesting of NMIs would be inconvenient, another NMI won't get delivered until you run an iret. Even an exception (like #DE divide-by-zero) that happens during an NMI handler will end that blocking when its handler returns with iret, even if you're not done handling the NMI. See The x86 NMI iret problem on LWN.
For normal interrupts, you can disable interrupts (cli) if you don't want another interrupt to be delivered while this one is being handled.
However, the interrupt controller (logically outside the CPU core, but actually part of modern CPUs) may need to be told when you're done handling an external interrupt (not a software interrupt or exception). https://wiki.osdev.org/IDT_problems#I_can_only_receive_one_IRQ shows the outb instructions needed to keep the legacy PIC happy. (I don't know if this applies to more modern ways of doing interrupts, like MSI-X message-signalled interrupts; that part of the OSDev wiki page might be specific to toy OSes that let the BIOS emulate legacy IBM-PC stuff.) But either way, that's only for external interrupts like a PS/2 keyboard controller, hard-drive DMA completion, or whatever (not exceptions), so it's unrelated to your Are Linux system calls executed inside an exception handler? question.
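For reference, here is a minimal sketch of the end-of-interrupt sequence for the legacy 8259 PIC that the wiki page describes, assuming a freestanding x86 kernel with GCC-style inline asm; the helper names are my own:

#include <stdint.h>

/* Write one byte to an I/O port (x86 "out" instruction). */
static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Tell the legacy 8259 PIC(s) we're done with IRQ number `irq` (0-15).
 * 0x20 is the EOI command; IRQs 8-15 arrive via the slave PIC, so it needs
 * an EOI too. Without this, the PIC won't deliver further IRQs on that line. */
static inline void pic_send_eoi(uint8_t irq) {
    if (irq >= 8)
        outb(0xA0, 0x20);   /* slave PIC command port  */
    outb(0x20, 0x20);       /* master PIC command port */
}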
The lack of exception-state means there's no special instruction you have to run to "acknowledge" an exception before calling schedule() from what was an interrupt handler. All you have to do is make sure interrupts are enabled or not when they should or shouldn't be. (sti / cli, or pushf / popf to save/restore the old interrupt state.) And of course that your software data structures remain consistent and appropriate for what you're doing. But there isn't anything you have to do specifically to keep the CPU happy.
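As a concrete illustration of that last point, here is a minimal sketch, assuming x86-64 and GCC-style inline asm (the helper names are my own), of saving and restoring the interrupt-enable state; note that there is no extra "acknowledge the exception" step anywhere:

/* Save RFLAGS (including IF) and disable interrupts. */
static inline unsigned long irq_save(void) {
    unsigned long flags;
    __asm__ volatile("pushfq; popq %0; cli" : "=r"(flags) : : "memory");
    return flags;
}

/* Restore whatever interrupt state (IF set or clear) the caller had. */
static inline void irq_restore(unsigned long flags) {
    __asm__ volatile("pushq %0; popfq" : : "r"(flags) : "memory", "cc");
}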
It's not like with user-space where a signal handler should tell the OS it's done instead of just jumping somewhere and running indefinitely. (In Linux, a signal handler can modify the main-thread program-counter so sigreturn(2) resumes execution somewhere other than where you were when it was delivered.) If POSIX or Linux signals were the (mental) model you were wondering about for interrupts/exceptions, no, it's not like that.
There is an interrupt-priority mechanism (CR8 in x86-64, or the LAPIC TPR (Task Priority Register)), but it does not automatically get set when the CPU delivers an interrupt. You can set it once (e.g. if you have a lot of high-priority interrupts to process on this core) and it persists across interrupts. (How is CR8 register used to prioritize interrupts in an x86-64 CPU?).
It's just a filter on what interrupt-numbers can get delivered to this core when interrupts are enabled (sti, IF=1 bit in RFLAGS). Apparently Windows makes some use of it, or did back in 2007, but Linux doesn't (or didn't).
It's not like you have to tell the CPU / LAPIC that you're done with this interrupt so it's ok for it to deliver another interrupt of this or lower priority.
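For completeness, a hedged sketch of reading and writing CR8 (x86-64 only; the helper names are my own). Nothing here happens automatically on interrupt delivery; the OS sets it explicitly if and when it wants to:

/* Set the task priority. Only bits 3:0 of CR8 are used; pending interrupts
 * whose priority class (vector >> 4) is at or below this value are held back
 * by the local APIC until the priority is lowered again. */
static inline void write_cr8(unsigned long prio) {
    __asm__ volatile("movq %0, %%cr8" : : "r"(prio & 0xFUL) : "memory");
}

static inline unsigned long read_cr8(void) {
    unsigned long prio;
    __asm__ volatile("movq %%cr8, %0" : "=r"(prio));
    return prio;
}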

Related

RISC-V return from exception handler with compressed instructions

I see that the standard way of exiting a RISC-V exception handler is to update mepc to mepc+4 before mret.
But won't this cause a problem if the next instruction is only 2 bytes long when compressed instructions are in use?
With compressed instructions enabled there is a mix of 4-byte and 2-byte instructions. If you don't update mepc and just mret, you keep getting the same exception. But always adding 4 to the trapped mepc seems like a bug when compressed instructions are mixed in.
Am I missing something?
I see the standard way of exiting risc-v exception handler is update mepc to mepc+4 before mret.
These are not serious exception handlers; they are illustrative only — to show catching of exceptions at all, and returning back to the interrupted code without having done the actual exception processing needed for the given situation. Thus, the easiest thing to do to prevent an infinite loop is to simply skip past the offending instruction.
One of the few places where we advance the pc in order to return to the code that caused the exception is in handling an ecall. As far as I know there is no compressed (16-bit) ecall instruction.
Many resumable exceptions need to rerun the instruction that caused the exception — loads & stores that cause page faults (available in both 32-bit and 16-bit form), for example, need to re-execute once the page tables have been fixed (the page read in from disc and mapped into the user's address space).
Many other exceptions are not generally resumable.
However, emulation of an instruction requires knowing its size, as is the case with ecall. If you choose to emulate, for example, misaligned memory accesses, you will indeed have to make a decision as to the size of the instruction, since emulating it means resuming past it. Also note that RISC-V supports 16-bit, 32-bit, 48-bit, 64-bit and longer instructions, so an exception handler that is going to emulate instructions will need to be able to decode their length (only for the instructions chosen for emulation, though).
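For what it's worth, the length can be read off the low bits of the first 16-bit parcel. A minimal C sketch (the function name is my own) following the standard RISC-V length encoding:

#include <stdint.h>

/* Return the length in bytes of the instruction whose first 16-bit parcel
 * is `parcel` (e.g. read from the address in mepc), or 0 for the longer
 * reserved encodings.  bits[1:0] != 11 means compressed; otherwise wider. */
static unsigned insn_length(uint16_t parcel) {
    if ((parcel & 0x03) != 0x03) return 2;   /* RVC (compressed)   */
    if ((parcel & 0x1C) != 0x1C) return 4;   /* standard 32-bit    */
    if ((parcel & 0x3F) == 0x1F) return 6;   /* 48-bit             */
    if ((parcel & 0x7F) == 0x3F) return 8;   /* 64-bit             */
    return 0;                                /* >64-bit / reserved */
}

A handler that chooses to emulate and skip an instruction would then add insn_length() of the parcel at mepc to mepc before mret, rather than a fixed 4.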
The other thing to add is that the sample exception handlers you may be looking at are designed to work without the compressed instruction set, and since RVC is optional that is a reasonable design choice (though ideally, of course, it would be clearly stated).

Where is hardware exception handling entry / exit code stored

I know this question seems very generic as it can depend on the platform,
but I understand that with procedure/function calls, the assembler code to push the return address onto the stack, set up local variables etc. can be part of either the caller or the callee.
When a hardware exception or interrupt occurs though, the Program Counter gets the address of the exception handler via the exception table, but where is the actual code that stores the state, return address etc.? Or is this done automatically at the hardware level for interrupts and exceptions?
Thanks in advance
Since you are asking about ARM and you tagged microcontroller, you might be talking about the ARM7TDMI but are probably talking about one of the Cortex-Ms. These work differently than the full-sized ARM architecture. As documented in the architecture reference manual associated with these cores (the ARMv6-M or ARMv7-M, depending on the core), the hardware conforms to the ABI, plus some extra work for an interrupt: the return address, the xPSR and registers r0-r3 (plus r12 and lr) are all pushed onto the stack by hardware, which is unusual for an architecture to do. R14 (lr), instead of getting the return address, gets an invalid address with a specific pattern, which is all part of the architecture. Unlike other processor IP, the address spaces on the Cortex-Ms are encouraged or dictated by ARM; that is why you usually see RAM start at 0x20000000 on these parts and flash below that, with some exceptions where vendors place RAM in the "executable" range, pretending to be Harvard when it's really modified Harvard. This helps with the 0xFFFxxxxx link-register return value; depending on the manual, they either gloss over the return address or go into detail about what the patterns you find mean.
Likewise, the layout of the vector table is spelled out: roughly, the first 16 entries are system/ARM exceptions, and interrupts follow after that, where there can be up to 128 or 256 possible interrupts; but you have to look at the chip vendor's documentation (not ARM's) to see how many they exposed and what is tied to what. If you are not using those interrupts you don't have to leave a huge hole in your flash for vectors; just use that flash for your program (so long as you ensure you are never going to fire that exception or interrupt).
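To make that layout concrete, here is a minimal sketch of a Cortex-M vector table in C; the section name and symbol names are illustrative assumptions, not anything mandated by ARM or a particular vendor:

#include <stdint.h>

void Reset_Handler(void);                     /* defined elsewhere (startup code) */
void Default_Handler(void) { for (;;) {} }    /* catch-all for unused entries     */
extern uint32_t _stack_top;                   /* symbol from the linker script    */

typedef void (*vector_t)(void);

/* The hardware fetches the initial stack pointer from offset 0 and the reset
 * handler from offset 4; entries 2..15 are core exceptions, and vendor-defined
 * device interrupts (IRQ0, IRQ1, ...) follow from entry 16 onward. */
__attribute__((section(".vectors"), used))
const vector_t vector_table[] = {
    (vector_t)&_stack_top,   /* 0: initial main stack pointer */
    Reset_Handler,           /* 1: reset                      */
    Default_Handler,         /* 2: NMI                        */
    Default_Handler,         /* 3: HardFault                  */
    /* 4..15: remaining core exceptions; 16 onward: device IRQs */
};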
For function calls, which occur at well defined (synchronous) locations in the program, the compiler generates executable instructions to manage the return address, registers and local variables. These instructions are integrated with your function code. The details are hardware and compiler specific.
For a hardware exception or interrupt, which can occur at any location (asynchronous) in the program, managing the return address and registers is all done in hardware. The details are hardware specific.
Think about how a hardware exception/interrupt can occur at any point during the execution of a program. And then consider that if a hardware exception/interrupt required special instructions integrated into the executable code then those special instructions would have to be repeated everywhere throughout the program. That doesn't make sense. Hardware exception/interrupt management is handled in hardware.
The "code" isn't software at all; by definition the CPU has to do it itself internally because interrupts happen asynchronously. (Or for synchronous exceptions caused by instructions being executed, then the internal handling of that instruction is what effectively triggers it).
So it's microcode or hardwired logic inside the CPU that generates the stores of a return address on an exception, and does any other stuff that the architecture defines as happening as part of taking an exception / interrupt.
You might as well ask where the code is that pushes a return address when the call instruction executes, on x86 for example, where the call instruction pushes return info onto the stack instead of overwriting a link register (the way most RISCs do).

Servicing an interrupt vs servicing an exception

My understanding: an interrupt (hardware interrupt) occurs asynchronously, generally caused by an external event directly interrupting the CPU. The CPU will then vector to the particular ISR to handle the interrupt.
Obviously an ISR cannot have a return value or take parameters, because the event can happen at any time, at any point of execution in our code.
With exceptions however, my understanding is that this is a software interrupt which is caused by a special instruction placed within the software.
I've heard that exceptions are handled in a similar fashion to an ISR. Can an exception handler in that case behave differently to an ISR, by taking arguments from the code and returning a value, because we know where we were in our code when it was executed?
Thanks in advance
A hardware exception is not a software interrupt; you do not explicitly call it - it occurs on some hardware-detectable error such as:
invalid address
invalid instruction
invalid alignment
divide-by-zero
You can of course write code to deliberately cause any of these and therefore use them as software interrupts, but then you may lose their utility as genuine error traps. Exceptions are in some cases used for this purpose - for example, in a processor without an FPU on an architecture where an FPU is an option, the invalid-instruction handler can be used to implement software emulation of the FPU so that the compiler does not need to generate different code for FPU and non-FPU variants. Similarly an invalid-address exception can invoke a memory manager to implement a virtual memory swap file (on devices with an MMU).
A software interrupt is explicitly called by an SWI instruction. Its benefit over a straightforward function call is that the application does not need to know the location of the handler - that is determined by the vector table. This is often used to make operating-system or BIOS calls in simple operating systems that can dynamically load code but that do not support dynamic linking (MS-DOS for example worked this way).
What hardware interrupts, software interrupts and exceptions all have in common is that they execute in a different processor context than normal code - typically switching to an independent stack and automatically pushing registers (or using an alternate register bank). They all operate via a vector table, and you cannot pass or return parameters via formal function parameter passing and return. With an SWI and forced exceptions, it is possible to load values into specific registers or memory locations known to the handler.
The above are general principles - the precise details will vary between different architectures. Consult technical reference for the specific device used.
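As one concrete illustration of the register-passing point above, a hedged sketch using the classic 32-bit Linux int 0x80 convention (eax = call number, ebx/ecx/edx = arguments); the wrapper name is my own and this is purely to show the idea, not a recommendation:

/* "Call" the kernel's software-interrupt handler for write(fd, buf, len).
 * The handler is located via the IDT entry for vector 0x80, not via a linked
 * function address, and the arguments travel in registers by convention
 * (4 = __NR_write on 32-bit Linux).  Build as 32-bit x86 code. */
static long sys_write_i386(int fd, const void *buf, unsigned long len) {
    long ret;
    __asm__ volatile("int $0x80"
                     : "=a"(ret)
                     : "a"(4), "b"(fd), "c"(buf), "d"(len)
                     : "memory");
    return ret;
}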
The term "exception" can mean completely different things.
There are "software exceptions" in the form of exception handling, as a high-level language feature in languages like C++. An "exception handler" in this context would be something like a try { } catch block.
And there are "hardware exceptions" which is a term used by some CPU cores like PowerPC. These are a form of critical interrupts corresponding to an error state. An "exception handler" in this context would be similar to an interrupt vector table, although when a hardware exception occurs, there is usually nothing the software can do about it.
Hardware exceptions take no arguments and return no data, just like interrupts. Architectures like PowerPC separate hardware exceptions from hardware interrupts, where the former are various error states and the latter are interrupts from the application-specific hardware.
It isn't all that meaningful for a hardware exception to communicate with the software, as they are generated by critical failures like an invalid opcode being executed, the CPU clock going bad, runaway code, etc. That is, the execution environment has been compromised, so there is nothing meaningful that the software executing in that environment can possibly do.

Understanding hardware interrupts and exceptions at processor and hardware level

After a lot of reading about interrupt handling etcetera, I still can't figure out the full process of interrupt handling from the very beginning.
For example:
A division by zero.
The CPU fetches the instruction to divide a number by zero and sends it to the ALU.
Assume the ALU started the process of the division, or ran some checks before starting it.
How is the exception signaled to the CPU?
How does the CPU know which exception has occurred from only a one-bit signal? Is there a register that it reads after it gets interrupted to find out?
2. How does my application catch the exception?
Do I need to write some function to catch a specific SIGNAL, or something else? And when I write an exception handling routine like
Try {}
Catch {}
and an exception occurs, how can I know what exception is thrown and handle it correctly?
The most important part that bugs me: for example, when an interrupt is signaled from the keyboard to the PIC, the PIC in turn signals to the CPU that an interrupt occurred by asserting the INT wire.
But how does the CPU know which device needs to be serviced?
What process does the CPU go through when its INTR pin turns on?
Does it have a routine that checks some register holding the value of the interrupt (set by the PIC when it turns on the INT wire)?
Please don't close the post; it's really important for me to understand this topic. I've read and researched for a couple of weeks but cannot connect the dots in my head.
Thanks.
There are typically several things associated with interrupts other than just a pin. Normally, for more recent microcontrollers, there is an interrupt vector table placed in memory that holds the address of each interrupt handler, and a register that signals the interrupt event/flag.
When an event that is handled by an interrupt occurs, a specific flag is set. Depending on priorities and the current state of the CPU, the context-switch time may vary; for example, a low-priority interrupt flagged during a higher-priority interrupt will have to wait till the high-priority interrupt is finished. Where nesting is possible, higher-priority interrupts may interrupt lower-priority ones.
In the particular case of exceptions like dividing by 0, which indeed would be detected by the ALU, the CPU may or may not offer a dedicated interrupt that is raised in events like this. For other types of exceptions an interrupt might not be available and the CPU would just act accordingly, for example by rebooting.
In conclusion, interrupt events occur in the following manner (sketched in code below for a hypothetical peripheral):
The interrupt event is flagged and the corresponding flag in the register is set.
When the time comes, the CPU switches context to the interrupt handler function.
At the end of the handler the interrupt flag is cleared and the CPU is ready to re-flag the interrupt when the next event comes.
How interrupts arriving at the same time, or interrupts of different priorities, are arbitrated varies with different hardware.
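A minimal sketch of that flag-driven flow, assuming a purely hypothetical memory-mapped peripheral; the register addresses and bit names are invented for illustration:

#include <stdint.h>

#define PERIPH_STATUS (*(volatile uint32_t *)0x40001000u)  /* hypothetical */
#define PERIPH_CLEAR  (*(volatile uint32_t *)0x40001004u)  /* hypothetical */
#define RX_READY_FLAG (1u << 0)

void periph_irq_handler(void) {
    if (PERIPH_STATUS & RX_READY_FLAG) {    /* 1. the event was flagged       */
        /* 2. handle the event here (read the data, wake a task, ...)         */
        PERIPH_CLEAR = RX_READY_FLAG;       /* 3. clear the flag so the next  */
                                            /*    event can be flagged again  */
    }
}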
It may be simplest to understand interrupts if one starts with the way they work on the Z80 in its simplest interrupt mode. That processor checks the state of a pin called /IRQ at a certain point during each instruction; if the pin is asserted and an "interrupt enabled" flag is set, then when it is time to fetch the next instruction the processor won't advance the program counter or read a byte from memory, but will instead disable the "interrupt enabled" flag and "pretend" that it read an "RST 38h" instruction. That instruction behaves like a single-byte "CALL 0038h" instruction, pushing the program counter and transferring control to that address.
Code at 0038h can then poll various peripherals if they need any service, use an "ei" instruction to turn the "interrupt enabled" flag back on, and perform a "ret". If no peripheral still has an immediate need for service at that point, code can then resume with whatever it was doing before the interrupt occurred. To prevent problems if the interrupt line is still asserted when the "ret" is executed, some special logic will ensure that the interrupt line will be ignored during that instruction (or any other instruction which immediately follows "ei"). If another peripheral has developed a need for service while the interrupt handler was running, the system will return to the original code, notice the state of /IRQ while it processes the first instruction after returning, and then restart the sequence with the RST 38h.
In the simple Z80 approach, there is only one kind of interrupt; any peripheral can assert /IRQ, and if any peripheral does so the Z80 will need to ask every peripheral if it wants attention. In more advanced systems, it's possible to have many different interrupts, so that when a peripheral needs service control can be dispatched to a routine which is designed to handle just that peripheral. The same general principles still apply, however: an interrupt effectively inserts a "call" instruction into whatever the processor was doing, does something to ensure that the processor will be able to service whatever needed attention without continuously interrupting that process [on the Z80, it simply disables interrupts, but systems with multiple interrupt sources can leave higher-priority sources enabled while servicing lower ones], and then returns to whatever the processor had been doing while re-enabling interrupts.

OS development: How to avoid an infinite loop after an exception routine

For some months I've been working on a "home-made" operating system.
Currently, it boots and goes into 32-bit protected mode.
I've loaded the interrupt table, but haven't set up paging (yet).
Now while writing my exception routines I've noticed that when an instruction throws an exception, the exception routine is executed, but then the CPU jumps back to the instruction which threw the exception! This does not apply to every exception (for example, a div by zero exception will jump back to the instruction AFTER the division instruction), but let's consider the following general protection exception:
MOV EAX, 0x8
MOV CS, EAX
My routine is simple: it calls a function that displays a red error message.
The result: MOV CS, EAX fails -> My error message is displayed -> CPU jumps back to MOV CS -> infinite loop spamming the error message.
I've talked about this issue with a teacher in operating systems and unix security.
He told me he knows Linux has a way around it, but he doesn't know which one.
The naive solution would be to parse the throwing instruction from within the routine, in order to get the length of that instruction.
That solution is pretty complex, and I feel a bit uncomfortable adding a call to a relatively heavy function in every affected exception routine...
Therefore, I was wondering if there is another way around the problem. Maybe there's a "magic" register that contains a bit that can change this behaviour?
--
Thank you very much in advance for any suggestion/information.
--
EDIT: It seems many people wonder why I want to skip over the problematic instruction and resume normal execution.
I have two reasons for this:
First of all, killing a process would be a possible solution, but not a clean one. That's not how it's done in Linux, for example, where (AFAIK) the kernel sends a signal (I think SIGSEGV) but does not immediately break execution. It makes sense, since the application can block or ignore the signal and resume its own execution. It's a very elegant way to tell the application it did something wrong IMO.
Another reason: what if the kernel itself performs an illegal operation? Could be due to a bug, but could also be due to a kernel extension. As I've stated in a comment: what should I do in that case? Shall I just kill the kernel and display a nice blue screen with a smiley?
That's why I would like to be able to jump over the instruction. "Guessing" the instruction size is obviously not an option, and parsing the instruction seems fairly complex (not that I mind implementing such a routine, but I need to be sure there is no better way).
Different exceptions have different causes. Some exceptions are normal, and the exception only tells the kernel what it needs to do before allowing the software to continue running. Examples of this include a page fault telling the kernel it needs to load data from swap space, an undefined instruction exception telling the kernel it needs to emulate an instruction that the CPU doesn't support, or a debug/breakpoint exception telling the kernel it needs to notify a debugger. For these it's normal for the kernel to fix things up and silently continue.
Some exceptions indicate abnormal conditions (e.g. that the software crashed). The only sane way of handling these types of exceptions is to stop running the software. You may save information (e.g. core dump) or display information (e.g. "blue screen of death") to help with debugging, but in the end the software stops (either the process is terminated, or the kernel goes into a "do nothing until user resets computer" state).
Ignoring abnormal conditions just makes it harder for people to figure out what went wrong. For example, imagine instructions to go to the toilet:
enter bathroom
remove pants
sit
start generating output
Now imagine that step 2 fails because you're wearing shorts (a "can't find pants" exception). Do you want to stop at that point (with a nice easy to understand error message or something), or ignore that step and attempt to figure out what went wrong later on, after all the useful diagnostic information has gone?
If I understand correctly, you want to skip the instruction that caused the exception (e.g. mov cs, eax) and continue executing the program at the next instruction.
Why would you want to do this? Normally, shouldn't the rest of the program depend on the effects of that instruction being successfully executed?
Generally speaking, there are three approaches to exception handling (the last two are sketched in code after this list):
Treat the exception as an unrepairable condition and kill the process. For example, division by zero is usually handled this way.
Repair the environment and then execute the instruction again. For example, page faults are sometimes handled this way.
Emulate the instruction using software and skip over it in the instruction stream. For example, complicated arithmetic instructions are sometimes handled this way.
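A hedged sketch contrasting the last two approaches, assuming an interrupt stub that hands the handler a pointer to the saved register frame; the trap_frame layout shown is simplified for illustration:

#include <stdint.h>

struct trap_frame {
    uint32_t eip;       /* saved instruction pointer pushed by the CPU */
    uint32_t cs;
    uint32_t eflags;
    /* ... error code and general registers saved by the asm stub ... */
};

/* Approach 2: repair the environment (e.g. fix the page tables), then return
 * with the saved eip unchanged so the faulting instruction re-executes. */
void page_fault_handler(struct trap_frame *tf) {
    (void)tf;           /* nothing to adjust: retry the same instruction */
}

/* Approach 3: emulate the instruction in software, then skip it so the iret
 * resumes at the following instruction (insn_len must be decoded somehow). */
void emulation_handler(struct trap_frame *tf, unsigned insn_len) {
    tf->eip += insn_len;
}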
What you're seeing is the characteristic of the General Protection Exception. The Intel System Programming Guide clearly states that (6.15 Exception and Interrupt Reference / Interrupt 13 - General Protection Exception (#GP)) :
Saved Instruction Pointer
The saved contents of CS and EIP registers point to the instruction that generated the exception.
Therefore, you need to write an exception handler that will skip over that instruction (which would be kind of weird), or just simply kill the offending process with "General Protection Exception at $SAVED_EIP" or a similar message.
I can imagine a few situations in which one would want to respond to a GPF by parsing the failed instruction, emulating its operation, and then returning to the instruction after. The normal pattern would be to set things up so that the instruction, if retried, would succeed, but one might e.g. have some code that expects to access some hardware at addresses 0x000A0000-0x000AFFFF and wish to run it on a machine that lacks such hardware. In such a situation, one might not want to ever bank in "real" memory in that space, since every single access must be trapped and dealt with separately. I'm not sure whether there's any way to handle that without having to decode whatever instruction was trying to access that memory, although I do know that some virtual-PC programs seem to manage it pretty well.
Otherwise, I would suggest that you should have for each thread a jump vector which should be used when the system encounters a GPF. Normally that vector should point to a thread-exit routine, but code which was about to do something "suspicious" with pointers could set it to an error handler suitable for that code (the code should unset the vector when leaving the region where the error handler would have been appropriate).
I can imagine situations where one might want to emulate an instruction without executing it, and cases where one might want to transfer control to an error-handler routine, but I can't imagine any where one would want to simply skip over an instruction that would have caused a GPF.