I am trying to understand how interrupts work in the ARM architecture (ARM7TDMI to be specific). I know that there are seven exceptions (Reset, Data Abort, FIQ, IRQ, Prefetch Abort, SWI and Undefined Instruction) and that they execute in particular modes (Supervisor, Abort, FIQ, IRQ, Abort, Supervisor and Undefined, respectively). I have the following questions.
1. When the I and F bits in the CPSR (status register) are set to 1 to disable external and fast interrupts, are the other five exceptions also disabled?
2. If SWI is not disabled when the I and F bits are set, is it possible to intentionally trigger an SWI exception within the ISR of an external interrupt?
3. When any interrupt is triggered, saving the CPSR to the SPSR and changing the mode are done by the processor itself. So, is it enough to write the ISR handler function and update the vector table with the handler addresses (I don't want to save the r0-r12 general-purpose registers)?
4. Whenever the mode of execution is changed, does context saving happen internally in the processor (even when we change the mode manually)?
5. How do I mask/disable an SWI exception?
Thank you.
When the I and F bits in the CPSR (status register) are set to 1 to disable external and fast interrupts, are the other five exceptions also disabled?
No; these all depend on your code being correct. For instance, a compiler will not normally generate an swi instruction.
If SWI is not disabled when the I and F bits are set, is it possible to intentionally trigger an SWI exception within the ISR of an external interrupt?
Yes, it is possible. You may check the mode in the SPSR in your swi handler and abort (or do whatever is appropriate) if you want, as sketched below.
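A minimal sketch of that check, assuming a GCC-style ARM toolchain; it runs inside the swi handler (Supervisor mode), where the SPSR holds the caller's CPSR, and the helper name is mine:

#define PSR_MODE_MASK 0x1Fu
#define PSR_MODE_IRQ  0x12u   /* mode number for IRQ, per the ARM ARM */

/* Returns non-zero if the swi was executed from an IRQ handler. */
static int swi_raised_from_irq(void)
{
    unsigned long spsr;
    __asm__ volatile("mrs %0, spsr" : "=r"(spsr));  /* SPSR_svc = caller's CPSR */
    return (spsr & PSR_MODE_MASK) == PSR_MODE_IRQ;
}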
When any interrupt is triggered, saving the CPSR to the SPSR and changing the mode are done by the processor itself. So, is it enough to write the ISR handler function and update the vector table with the handler addresses (I don't want to save the r0-r12 general-purpose registers)?
No one wants to save registers. However, if your handler uses r0-r12 without saving them, the interrupted code will become corrupt; the banked sp exists so that you have somewhere to store them. Also, a vector table entry is not a handler address but an instruction/code (see the sketch below).
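For illustration, a vector-table sketch assuming a GNU toolchain and a linker script that places the .vectors section at address 0; the handler symbols are hypothetical. Note that every entry is an instruction, not an address:

/* ARM7TDMI vector table: eight instructions, one per exception. */
__asm__(
    ".section .vectors, \"ax\"  \n"
    "    b   reset_handler      \n"   /* 0x00: Reset */
    "    b   undef_handler      \n"   /* 0x04: Undefined instruction */
    "    b   swi_handler        \n"   /* 0x08: SWI */
    "    b   pabt_handler       \n"   /* 0x0C: Prefetch abort */
    "    b   dabt_handler       \n"   /* 0x10: Data abort */
    "    nop                    \n"   /* 0x14: Reserved */
    "    b   irq_handler        \n"   /* 0x18: IRQ */
    "    b   fiq_handler        \n"   /* 0x1C: FIQ */
);

If a handler lies outside the 32 MB branch range, the usual alternative is an ldr pc, [pc, #imm] entry that loads the handler address from a literal word placed just after the table.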
Whenever the mode of execution is changed, does context saving happen internally in the processor (even when we change the mode manually)?
No, the instruction/code in the vector page is responsible for saving the context. If you have a pre-emptible OS, then you need to save the context for the process and restore it later. You may have thousands of processes, so a CPU could not do this automatically. Your context save area may be rooted in the supervisor-mode stack; you can use the IRQ/FIQ sp as a temporary register in this case. For instance, the __switch_to function in ARM Linux may be helpful. thread_info is rooted in the supervisor stack for the kernel's management of the user-space process/thread. The minimum code (with features removed) is:
__switch_to:
    add   ip, r1, #TI_CPU_SAVE        @ get the save area into `ip`
    stmia ip!, {r4 - sl, fp, sp, lr}  @ store most regs on the stack
    add   r4, r2, #TI_CPU_SAVE        @ get the restore area into `r4`
    ldmia r4, {r4 - sl, fp, sp, pc}   @ load all regs saved previously
                                      @ note: the last instruction returns to a previous
                                      @ __switch_to call made by the destination thread/process
How can I mask/disable an SWI exception?
You cannot; SWI is always taken. You could write an swi handler that does nothing but return to the instruction after the swi (as sketched below), and/or you could just jump to the undefined-instruction handler, depending on what that does.
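A sketch of such a do-nothing handler, assuming a GCC-style ARM toolchain. On SWI entry, lr_svc already points at the instruction after the swi, so a single MOVS both skips the swi and restores the CPSR from the SPSR:

void swi_ignore(void) __attribute__((naked));
void swi_ignore(void)
{
    __asm__ volatile("movs pc, lr");  /* return past the swi, restoring CPSR */
}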
Related
Are there any CPU-state bits indicating being in an exception/interrupt handler in ARM Cortex-A processors (like, e.g., the IPSR register in ARM Cortex-M CPUs)? In other words, can we tell whether the main thread or an exception handler is currently executing based only on the CPU registers' state?
The CPSR mode field indicates which mode the processor is currently executing in. You cannot act on it directly; you have to move it into a GPR to examine it.
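For example (a sketch, assuming a GCC-style ARM toolchain):

/* Copy CPSR into a GPR and mask off the five mode bits. */
static unsigned long current_mode(void)
{
    unsigned long cpsr;
    __asm__ volatile("mrs %0, cpsr" : "=r"(cpsr));
    return cpsr & 0x1F;  /* 0x10 = User; 0x12 = IRQ; 0x13 = Supervisor; ... */
}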
Are there any CPU-state bits indicating being in an exception/interrupt handler in x86 and x86-64? In other words, can we tell whether the main thread or an exception handler is currently executing based only on the CPU registers' state?
No, there's no bit in the CPU itself (e.g. a control register) that means "we're in an exception or interrupt handler".
But there is hidden state indicating that you're in an NMI (Non-Maskable Interrupt) handler. Since you can't block NMIs by disabling interrupts, and unblockable arbitrary nesting of NMIs would be inconvenient, another NMI won't get delivered until you run an iret. Even if an exception (like #DE, divide by zero) happens during an NMI handler, that exception handler's own iret ends the NMI-blocking period, even if you're not done handling the NMI. See "The x86 NMI iret problem" on LWN.
For normal interrupts, you can disable interrupts (cli) if you don't want another interrupt to be delivered while this one is being handled.
However, the interrupt controller (logically outside the CPU core, but actually part of modern CPUs) may need to be told when you're done handling an external interrupt (not a software interrupt or exception). https://wiki.osdev.org/IDT_problems#I_can_only_receive_one_IRQ shows the outb instructions needed to keep the legacy PIC happy, sketched below. (I don't know if this applies to more modern ways of doing interrupts, like MSI-X message-signalled interrupts; that part of the OSDev wiki page might be specific to toy OSes that let the BIOS emulate legacy IBM-PC stuff.) Either way, that's only for external interrupts like the PS/2 keyboard controller or a hard-drive DMA-complete (not exceptions), so it's unrelated to your "Are Linux system calls executed inside an exception handler?" question.
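For reference, the acknowledgement for the legacy 8259 PIC looks roughly like this (a sketch using GCC inline asm; the helper names are mine):

/* Write one byte to an x86 I/O port. */
static inline void outb(unsigned short port, unsigned char val)
{
    __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* 0x20/0xA0 are the master/slave command ports; 0x20 is the EOI command. */
static void pic_send_eoi(unsigned char irq)
{
    if (irq >= 8)
        outb(0xA0, 0x20);  /* IRQ 8-15 came through the slave PIC */
    outb(0x20, 0x20);      /* the master PIC always gets an EOI */
}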
The lack of exception state means there's no special instruction you have to run to "acknowledge" an exception before calling schedule() from what was an interrupt handler. All you have to do is make sure interrupts are enabled or disabled when they should be (sti/cli, or pushf/popf to save/restore the old interrupt state, as sketched below), and of course that your software data structures remain consistent and appropriate for what you're doing. But there isn't anything you have to do specifically to keep the CPU happy.
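A sketch of that save/restore pattern in GCC inline asm, mirroring what Linux's local_irq_save()/local_irq_restore() do; the names are mine:

static inline unsigned long irq_save(void)
{
    unsigned long flags;
    __asm__ volatile("pushf; pop %0; cli" : "=r"(flags) : : "memory");
    return flags;  /* the old flags, including the IF bit */
}

static inline void irq_restore(unsigned long flags)
{
    __asm__ volatile("push %0; popf" : : "r"(flags) : "memory", "cc");
}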
It's not like with user-space where a signal handler should tell the OS it's done instead of just jumping somewhere and running indefinitely. (In Linux, a signal handler can modify the main-thread program-counter so sigreturn(2) resumes execution somewhere other than where you were when it was delivered.) If POSIX or Linux signals were the (mental) model you were wondering about for interrupts/exceptions, no, it's not like that.
There is an interrupt-priority mechanism (CR8 in x86-64, or the LAPIC TPR (Task Priority Register)), but it does not automatically get set when the CPU delivers an interrupt. You can set it once (e.g. if you have a lot of high-priority interrupts to process on this core) and it persists across interrupts. (How is CR8 register used to prioritize interrupts in an x86-64 CPU?).
It's just a filter on what interrupt-numbers can get delivered to this core when interrupts are enabled (sti, IF=1 bit in RFLAGS). Apparently Windows makes some use of it, or did back in 2007, but Linux doesn't (or didn't).
It's not like you have to tell the CPU / LAPIC that you're done with this interrupt so it's ok for it to deliver another interrupt of this or lower priority.
After a lot of reading about interrupt handling et cetera, I still can't figure out the full process of interrupt handling from the very beginning.
For example:
A division by zero.
The CPU fetches the instruction to divide a number by zero and sends it to the ALU.
Assume the ALU has started the division, or runs some checks before starting it.
1. How is the exception signaled to the CPU?
How does the CPU know which exception has occurred from only a one-bit signal? Is there a register that it reads after it gets interrupted in order to know this?
2. How does my application catch the exception?
Do I need to write some function to catch a specific SIGNAL, or something else? And when I write an exception-handling routine like
try {}
catch {}
and an exception occurs, how can I know which exception was thrown and handle it properly?
The most important part that bugs me is this: for example, when an interrupt is signaled from the keyboard to the PIC, the PIC in turn signals to the CPU that an interrupt occurred by asserting the INT wire.
But how does the CPU know which device needs to be served?
What is the process the CPU goes through when its INTR pin is asserted?
Does it have a routine that checks some register holding the number of the interrupt (set by the PIC when it asserts the INT wire)?
Please don't ban the post; it's really important for me to understand this topic. I have read and researched for a couple of weeks but cannot connect the dots in my head.
Thanks.
There are typically several things associated with interrupts other than just a pin. Normally, on more recent micro-controllers, there is an interrupt vector table placed in memory that holds the address of each interrupt handler, and a register that signals the interrupt event/flag.
When an event that is handled by an interrupt occurs, a specific flag is set. Depending on priorities and the current state of the CPU, the context-switch time may vary; for example, a low-priority interrupt flagged during a higher-priority interrupt will have to wait until the high-priority interrupt has finished. Where nesting is possible, higher-priority interrupts may interrupt lower-priority ones.
In the particular case of exceptions like dividing by 0, which would indeed be detected by the ALU, the CPU may or may not offer a derived interrupt that is raised on such events. For other types of exceptions an interrupt might not be available, and the CPU will just act accordingly, for example by rebooting.
In conclusion, an interrupt event proceeds in the following manner (a sketch follows the list):
The interrupt event occurs and the corresponding flag in the register is set.
When the time comes, the CPU switches context to the interrupt handler function.
At the end of the handler, the interrupt flag is cleared and the CPU is ready to re-flag the interrupt when the next event comes.
How simultaneous interrupts and different interrupt priorities are arbitrated varies between hardware.
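A generic sketch of that flag-driven flow in C, for a hypothetical micro-controller; the register address and flag bit are invented for illustration:

#define UART_STATUS  (*(volatile unsigned int *)0x40001000u)  /* hypothetical */
#define UART_RX_FLAG 0x1u                                     /* hypothetical */

void uart_isr(void)
{
    if (UART_STATUS & UART_RX_FLAG) {   /* did this event raise the interrupt? */
        /* ... service the peripheral here ... */
        UART_STATUS &= ~UART_RX_FLAG;   /* clear the flag so it can re-assert */
    }
}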
It may be simplest to understand interrupts if one starts with the way they work on the Z80 in its simplest interrupt mode. That processor checks the state of a pin called /IRQ at a certain point during each instruction; if the pin is asserted and an "interrupt enabled" flag is set, then when it is time to fetch the next instruction the processor won't advance the program counter or read a byte from memory, but will instead disable the "interrupt enabled" flag and "pretend" that it read an "RST 38h" instruction. That instruction behaves like a single-byte "CALL 0038h" instruction, pushing the program counter and transferring control to that address.
Code at 0038h can then poll the various peripherals to see if any of them need service, use an "ei" instruction to turn the "interrupt enabled" flag back on, and perform a "ret". If no peripheral still has an immediate need for service at that point, code will then resume with whatever it was doing before the interrupt occurred. To prevent problems if the interrupt line is still asserted when the "ret" is executed, some special logic ensures that the interrupt line is ignored during that instruction (or any other instruction which immediately follows "ei"). If another peripheral has developed a need for service while the interrupt handler was running, the system will return to the original code, notice the state of /IRQ while it processes the first instruction after returning, and then restart the sequence with the RST 38h.
In the simple Z80 approach, there is only one kind of interrupt; any peripheral can assert /IRQ, and if any peripheral does so the Z80 will need to ask every peripheral if it wants attention. In more advanced systems, it's possible to have many different interrupts, so that when a peripheral needs service control can be dispatched to a routine which is designed to handle just that peripheral. The same general principles still apply, however: an interrupt effectively inserts a "call" instruction into whatever the processor was doing, does something to ensure that the processor will be able to service whatever needed attention without continuously interrupting that process [on the Z80, it simply disables interrupts, but systems with multiple interrupt sources can leave higher-priority sources enabled while servicing lower ones], and then returns to whatever the processor had been doing while re-enabling interrupts.
I have an ARMv5-powered non-XScale device (the SHARP Brain™ electronic dictionary) with Windows Embedded CE 6.0 installed in NAND flash, and I use TCPMP to play my favorite AAC tunes and MPEG-4 movies.
But sometimes TCPMP freezes when I start it. So I looked into TCPMP and managed to find that the freeze happens when this code is executed:
CheckARMXScale PROC
mov r0,#0x1000000
mov r1,#0x1000000
mar acc0,r0,r1 ; <--- here
mov r0,#32
mov r1,#32
mra r0,r1,acc0
cmp r0,#0x1000000
moveq r0,#1
movne r0,#0
cmp r1,#0x1000000 ;64bit or just 40bit?
moveq r0,#2
mov pc,lr
This code determines whether XScale is present by trying to execute an XScale instruction and catching the "Undefined Instruction" exception if one is thrown.
The problem is that somehow the system fails to pass this exception to TCPMP properly, causing TCPMP to freeze. It seems to be caused not by Windows CE itself but by buggy drivers in this device. No driver updates are expected, since running TCPMP on this device is not officially supported.
I posted this issue to 2channel, and some people claimed that this way of determining whether XScale is present is not good, but no one even tried to find a better way. So I googled and read through the ARMv5 Architecture Reference Manual and so on, but I could find nothing. It appears that almost every program that utilizes the XScale instruction set detects XScale in the same way.
The question is: is it possible to determine whether the XScale instruction set is present without making use of any exception, or of any CPU mode except user mode?
Try IOCTL_PROCESSOR_INFORMATION, or (this needs switching into kernel mode) read the CP15 register c0, the Main ID register, a.k.a. the ID Code Register, a.k.a. the ARM CPUID. The top byte is the implementer, which will be 0x69 ('i', for Intel) on XScale.
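A sketch of the CP15 read, assuming a privileged (kernel-mode) context and a GCC-style ARM toolchain:

/* Read the CP15 Main ID register (c0). */
static unsigned long read_main_id(void)
{
    unsigned long midr;
    __asm__ volatile("mrc p15, 0, %0, c0, c0, 0" : "=r"(midr));
    return midr;
}

/* The implementer code is the top byte: 0x69 ('i') means Intel, i.e. XScale. */
static int cpu_is_xscale(void)
{
    return (read_main_id() >> 24) == 0x69;
}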
Check also this thread.
When executing this instruction, I got an exception:
LFS ESI,PWORD PTR [EBP+12]
From this page (http://wiki.osdev.org/Double_Fault#Double_Fault):
Any PUSH or POP instruction or any instruction using ESP or EBP as a base register is executed, while the stack address is not in canonical form.
So I think it should be a stack-segment fault (#SS) here.
But the system raises a general-protection exception (0x0D).
Could anyone tell me why?
A general-protection fault for an LFS occurs when:
1. the segment selector index you are trying to load is not within the descriptor table limits;
2. the segment is in the descriptor table, but it is not a readable data segment;
3. your privilege level is higher (meaning less privileged) than the privilege level of the descriptor.
So, the problem is not the instruction itself, but the segment descriptor table.
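For illustration, here is the 6-byte (PWORD) operand that LFS ESI, PWORD PTR [EBP+12] reads from memory, sketched as a C struct; it is the selector half, loaded into FS, that is subjected to the descriptor checks above:

#pragma pack(push, 1)
struct far_pointer {
    unsigned int   offset;    /* loaded into ESI */
    unsigned short selector;  /* loaded into FS; must pass the descriptor checks */
};
#pragma pack(pop)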
See chapter 3 in the Intel Software Developer’s Manual Volume 3A:
http://www.intel.com/products/processor/manuals/?wapkw=(Intel+64+and+IA-32+Architectures)