I'm trying to write some code for the stm32f103c8t6 microcontroller. It is constantly communicating with a device, which requires interrupts to be disabled... however, this also needs to be interrupted immediately by the falling-edge of a certain GPIO pin.
Without disabling interrupts, the communication fails occasionally, with sporadic delays of about 45 clock cycles. Disabling all interrupts by setting the I bit of the CPSR register fixes this problem entirely, making me think it's an interrupt problem... however, then my GPIO interrupt doesn't work, so this isn't a solution.
I've tried clearing all enable bits in the NVIC, except the one used for my GPIO interrupt, but the problem still occurs.
Are there any interrupts which aren't handled by the NVIC which might be causing the problem? Or does anyone have any other ideas? Any help or ideas would be much appreciated! Thanks.
Use priority grouping - you will disable interrupts with priority lower than the one you set.
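A minimal sketch of that approach, assuming CMSIS on a Cortex-M3 part like the F103 (EXTI0_IRQn and the priority values are placeholders for whatever pin and priority layout you actually use; this is not compilable as-is outside a device project):

```c
/* Sketch, assuming CMSIS (core_cm3.h / the STM32F10x device header).
   EXTI0_IRQn stands in for whichever EXTI line your GPIO pin uses. */

void comm_critical_enter(void)
{
    /* Make the GPIO edge interrupt the highest (numerically lowest)
       priority, so it is never masked below. */
    NVIC_SetPriority(EXTI0_IRQn, 0);

    /* Mask every interrupt whose priority value is 1 or lower
       (BASEPRI compares against the shifted 8-bit value; with the
       F1's 4 priority bits, priority 1 is 0x10). PRIMASK is left
       alone, so the EXTI interrupt still fires mid-communication. */
    __set_BASEPRI(0x10);
}

void comm_critical_exit(void)
{
    __set_BASEPRI(0);   /* unmask everything again */
}
```

Unlike setting PRIMASK, this keeps exactly one interrupt live while shutting out the rest, which is what the question needs.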
I hope this question isn't too stupid, since the answer may seem obvious.
As I was doing a little research on buffer overflows, I stumbled over a simple question:
After going to a new Instruction Address after a call/return/jump:
Will the CPU execute the OP Code at that address and then move one byte to the next address and execute the next OP Code and so on until the next call/return/jump is reached? Or is there something more tricky involved?
A bit of a long-winded explanation (saying the same as those comments):
The CPU has a special-purpose register, the instruction pointer eip, which points to the next instruction to execute.
A jmp, call, ret, etc. ends internally with something similar to:
mov eip,<next_instruction_address>.
While the CPU is processing instructions, it increments eip by the size of the last executed instruction automatically (unless overridden by one of those jmp/j[condition]/call/ret/int/... instructions).
Wherever you point eip (by whatever means), the CPU will try its best to execute the content of that memory as the next instruction opcode(s), not aware of any context (where/why it came to this new eip). Actually, this amnesia sort of happens ahead of each instruction executed. (I'm silently ignoring the modern internal x86 architecture with its various pre-execution queues, branch prediction, translation into micro-instructions, etc... :) ... all of that is implementation detail quite hidden from the programmer, usually visible only through poor performance if you disturb that machinery much by jumping all around mindlessly.) So it's the CPU, eip, and here&now, not much else.
note: some context on x86 can be provided by supervising code (like an OS) defining the memory layout, i.e. marking some areas of memory as non-executable. A CPU detecting that its eip points to such an area will signal a failure and fall into a "trap" handler (usually also managed by the OS, which kills the offending process).
The call instruction saves onto the stack the address of the instruction after it. After that, it simply jumps. It doesn't explicitly tell the CPU to look for a return instruction; the return is handled by popping (from the stack) the return address that call saved in the first place. This allows for multiple calls and returns, or to put it simply, nested calls.
While the CPU is processing instructions, it increments eip by the size of the last executed instruction automatically (unless overridden by one of those jmp/j[condition]/call/ret/int/... instructions).
That's what I wanted to know.
I'm well aware that there's more stuff around (NX bit, pipelining, etc.).
Thanks everybody for your replies.
Can anybody explain what is the difference between software interrupt and software exception?
Interrupt: means time slice expires, call instruction strikes
Exception: means access violation
Am I right, or can anybody explain in depth?
A software interrupt occurs when the processor executes an INT instruction. Written in the program, typically used to invoke a system service.
A processor interrupt is caused by an electrical signal on a processor pin. Typically used by devices to tell a driver that they require attention. The clock tick interrupt is very common, it wakes up the processor from a halt state and allows the scheduler to pick other work to perform.
A processor fault like access violation is triggered by the processor itself when it encounters a condition that prevents it from executing code. Typically when it tries to read or write from unmapped memory or encounters an invalid instruction.
I have been thinking about the dormant fault and cannot figure out an example. By definition, a dormant fault is a fault (defect in the code) that does not cause an error and thus does not cause a failure. Can anyone give me an example? The only thing that crossed my mind was unused buggy code..
Thanks
Dormant faults are much more common than one might think. Most programmers have experienced moments of thinking "What was I thinking? How could that ever run correctly?", even though the code didn't show erroneous behaviour. A classic case is faulty corner-case handling, e.g. on failed memory allocation:
char *foo = malloc(42);
strcpy(foo, "BarBaz");
The above code will work fine in most situations and pass tests just fine; however, when malloc fails due to memory exhaustion, it will fail miserably. The fault is there, but dormant.
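For contrast, here is the same logic with the corner case handled, so the fault never exists (the wrapper function name is just for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Same logic with the corner case handled: the caller now learns
   about allocation failure instead of crashing inside strcpy. */
char *make_label(void)
{
    char *foo = malloc(42);
    if (foo == NULL)
        return NULL;              /* malloc failed: report it upward */
    strcpy(foo, "BarBaz");
    return foo;
}
```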
Dormant faults are simply ones that don't get revealed until you send the right input [edit: or circumstances] to the system.
A classic example is from Therac-25. The race condition caused by an unlikely set of keys on input didn't occur until technicians became "fluent" with using the system. They memorized the key strokes for common treatments, which means they could enter them very quickly.
Some other ones that come to my mind:
Y2K bugs were all dormant faults, until the year 2000 came around...
Photoshop 7 still runs OK on my Windows 7 machine, yet it thinks my 1TB disks are full. An explanation is that the datatype used to hold free space was not designed to account for such high amounts of free space, and there's an overflow causing the free space to appear insufficient.
Transfering a file greater than 32MB with TFTP (the block counter can only go to 65535 in 16 bits) can reveal a dormant bug in a lot of old implementations.
In this last set of examples, one could argue that there was no specification requiring these systems to support such instances, and so they're not really faults. But that gets into completeness of specifications.
So I'm getting a "prefetch abort" exception on our ARM9 system. This system does not have an MMU, so is there any way this could be a software problem? All the registers seem correct to me, and the code looks right (not corrupted) from the JTAG point of view.
Right now I'm thinking this is some kind of hardware issue (although I hate to say it - the hardware has been fine until now).
What exactly is the exception you're getting?
Last time this happened to me, I went up the wrong creek for a while because I didn't realize an ARM "prefetch abort" meant the instruction prefetch, not data prefetch, and I'd just been playing with data prefetch instructions. It simply means that the program has attempted to jump to a memory location that doesn't exist. (The actual problem was that I'd mistyped "go 81000000" as "go 81000" in the bootloader.)
See also:
http://www.keil.com/support/docs/3080.htm (KB entry on debugging data aborts)
http://www.ethernut.de/en/documents/arm-exceptions.html (list of ARM exceptions)
What's the address that the prefetch abort is triggering on? It can occur because the program counter (PC or R15) is being set to an address that isn't valid on your microcontroller (this can happen even if you're not using an MMU - the microcontroller's address space likely has 'holes' in it that will trigger the prefetch abort). It could also occur if you try to fetch from an address that is improperly aligned, but I think this depends on the microcontroller implementation (the ARM ARM lists the behavior as 'UNPREDICTABLE').
Is the CPU actually in Abort mode? If it's executing the Prefetch handler but isn't in abort mode that would mean that some code is branching through the prefetch abort vector, generally through address 0x0000000c but controllers often allow the vector addresses to be remapped.
There is a function that calls itself recursively infinitely.
This function has some arguments too.
For every function call, the arguments and return address are pushed on the stack.
For each process there is fixed size of stack space that cannot grow dynamically like heap.
And I guess each thread also has its own stack.
Now if a function is called recursively infinitely and the process runs out of stack space, what will happen?
Will program crash?
Will OS handle the situation?
There are 4GB of address space, so why can't the OS do something to increase the stack size?
Stack overflow.
In UNIX and compatibles, the process will be terminated by a SIGSEGV or SIGSTKFLT signal.
On Windows, the process will be terminated with a STATUS_STACK_OVERFLOW exception.
For C++ at least, you will be in the realms of "undefined behaviour" - a bit like the Twilight Zone, anything could happen.
And if the recursion is infinite, what good will increasing the stack size do? Better to fail early than later.
Depending on the language you will either get an exception (e.g. in Java) or the program will crash (in C, C++).
Typically the stack is relatively small, because that suffices, and a stack overflow signals an error. In Java you can increase the stack space with a command line option if you must.
Also note that functional languages typically compile tail recursion into a loop, and no stack space is used in that case.
This isn't language-agnostic - it very much depends on the language/platform.
In C# (or any .NET language) you'll get a StackOverflowException, which as of .NET 2.0 cannot be caught and will bring down the process.
In Java (or any JVM language) you'll get a StackOverflowError (as specified here).
I'll leave it to other answers to deal with other languages and platforms :)
A typical Unix result will be a segmentation fault. Don't know about Windows.
Yes, your program will crash. There is no way for the OS to "handle the situation", other than preventing your buggy code from damaging other processes (which it is already doing). The OS has no way of knowing what it was you really wanted your program to do, rather than what you told it to do.
There are plenty of answers about what will happen, but I want to mention the extremely easy solution:
Just buy a Turing machine and let your code run on that. It will have plenty of space.
The program will crash. Usually the stack is limited by the operating system in order to trap bugs like this before they consume ALL available memory. On Linux at least, the stack size can be changed by the user by issuing the ulimit command in the shell (limit in csh).