I ran into this problem while finishing a lab for my OS course. We are trying to implement a kernel that supports system calls (platform: QEMU/i386).
When testing the kernel, a problem occurred: after the kernel loads the user program into memory and switches the CPU from kernel mode to user mode using the 'iret' instruction, the CPU behaves strangely, as follows.
The %EIP register increases by 2 each time, no matter how long the current instruction is.
No instruction seems to be executed, since no other registers change in the meantime.
Your guest has probably ended up executing a block of zeroed-out memory. On i386, zeroed memory disassembles to a succession of "add BYTE PTR [eax],al" instructions, each of which is two bytes long (0x00 0x00), and if eax happens to point to memory which reads as zeroes, this is effectively a 2-byte no-op instruction, which corresponds to what you are seeing. This might happen because you set up the iret incorrectly and it isn't returning to the address you expected, or because you've got the MMU setup wrong and the userspace program isn't in the memory where you expect it to be, for instance.
You could confirm this theory using QEMU's debug options (e.g. -d in_asm,cpu,exec,int,unimp,guest_errors -D qemu.log will log a lot of execution information to a file), which should (among a lot of other data) show you which instructions are actually being executed.
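For reference, a common culprit is the frame that iret pops. Here is a minimal sketch, as a C struct, of what must sit on top of the kernel stack just before iret when returning from ring 0 to ring 3 on i386 (the selector values are illustrative assumptions, not taken from the question):

    #include <stdint.h>

    /* Popped by 'iret' from the top of the stack, lowest address first,
     * when the return crosses privilege levels on i386. */
    struct iret_frame {
        uint32_t eip;     /* entry point of the loaded user program */
        uint32_t cs;      /* user code selector with RPL 3, e.g. 0x1B */
        uint32_t eflags;  /* e.g. 0x202: reserved bit 1 plus IF */
        uint32_t esp;     /* initial user-mode stack pointer */
        uint32_t ss;      /* user data selector with RPL 3, e.g. 0x23 */
    };

If eip here (or the paging that backs it) is wrong, iret lands in arbitrary bytes, e.g. a zeroed page, producing exactly the 2-byte-step behaviour described above.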
I hope this question isn't too stupid, because the answer may seem obvious.
While doing a little research on buffer overflows, I stumbled over a simple question:
After going to a new instruction address after a call/return/jump:
Will the CPU execute the opcode at that address, then move one byte to the next address and execute the next opcode, and so on, until the next call/return/jump is reached? Or is there something more tricky involved?
A bit of a boringly extended explanation (saying the same as the comments):
The CPU has a special-purpose register, the instruction pointer eip, which points to the next instruction to execute.
A jmp, call, ret, etc. ends internally with something similar to:
mov eip,<next_instruction_address>.
While the CPU is processing instructions, it increments eip by the size of the last executed instruction automatically (unless overridden by one of those jmp/j[condition]/call/ret/int/... instructions).
Wherever you point eip (by whatever means), the CPU will try its best to execute the content of that memory as the next instruction opcode(s), not aware of any context (where or why it came to this new eip). Actually, this amnesia sort of happens ahead of each instruction executed (I'm silently ignoring the modern internal x86 architecture, with its various pre-execution queues, branch prediction, translation into micro-instructions, etc. ... all of that is implementation detail quite hidden from the programmer, usually visible only through poor performance if you disturb that machinery much by jumping all around mindlessly). So it's the CPU, eip, and the here and now, not much else.
note: some context on x86 can be provided by supervising code (like the OS) defining the memory layout, i.e. marking some areas of memory as non-executable. A CPU detecting that its eip points into such an area will signal a failure and fall into a "trap" handler (usually also managed by the OS, typically killing the offending process).
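A minimal sketch of that trap in action, assuming a Linux/POSIX userspace (not part of the question): the bytes are valid code, but the page lacks execute permission, so the jump faults on NX-capable hardware:

    #include <sys/mman.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char code[] = { 0xC3 };   /* x86 'ret', a valid instruction */
        /* map one page readable and writable, but NOT executable */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return 1;
        memcpy(page, code, sizeof code);
        ((void (*)(void))page)();          /* eip enters a no-exec page */
        puts("not reached where NX is enforced (SIGSEGV instead)");
        return 0;
    }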
The call instruction saves the address of the instruction after it onto the stack. After that, it simply jumps. It doesn't explicitly tell the CPU to look for a return instruction, since that is handled by popping (from the stack) the return address that call saved in the first place. This allows for multiple calls and returns, or to put it simply, nested calls.
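Conceptually (a toy sketch, not a real CPU definition; the flat memory and helper names are made up for illustration):

    #include <stdint.h>

    uint32_t eip;                      /* next instruction to execute */
    uint8_t  memory[1 << 20];          /* toy flat memory */
    uint32_t esp = sizeof memory;      /* top of stack, grows downward */

    static void push32(uint32_t v) { esp -= 4; *(uint32_t *)&memory[esp] = v; }
    static uint32_t pop32(void) { uint32_t v = *(uint32_t *)&memory[esp]; esp += 4; return v; }

    /* 'call target': save the address of the following instruction, then jump */
    void do_call(uint32_t target, uint32_t return_address)
    {
        push32(return_address);
        eip = target;
    }

    /* 'ret': pop whatever is on top of the stack and jump there; nothing
     * links it to a particular call except the stack discipline itself */
    void do_ret(void)
    {
        eip = pop32();
    }

Nested calls work because each call pushes one more return address and each ret pops the most recent one; it is also why a buffer overflow that overwrites the saved return address redirects execution.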
While the CPU is processing instructions, it increments eip by the size of the last executed instruction automatically (unless overridden by one of those jmp/j[condition]/call/ret/int/... instructions).
That's what I wanted to know.
I'm well aware that there's more stuff around (NX bit, pipelining etc.).
Thanks everybody for your replies.
I have an LPC2220 device (an ARM7TDMI-S core). From time to time when executing my code, Data Abort exceptions occur, each time at a different memory address, and I want to make sure there's nothing wrong with the physical memory or a chip failure, as my program is using most of the memory space. Is there any way to test the memory from code?
N.B. I'm using Keil uVision for debugging the code.
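For what it's worth, the classic approach (in the spirit of Michael Barr's well-known memtest routines) is a destructive three-part test run from code that itself lives outside the region under test, e.g. from internal RAM; the functions below are a sketch, and the base address and word count must be adapted to the LPC2220's external memory map:

    #include <stddef.h>
    #include <stdint.h>

    /* Walk a single 1 bit across one location to find stuck/shorted
     * data lines; returns the failing pattern, or 0 on success. */
    uint32_t test_data_bus(volatile uint32_t *addr)
    {
        for (uint32_t pattern = 1; pattern != 0; pattern <<= 1) {
            *addr = pattern;
            if (*addr != pattern)
                return pattern;
        }
        return 0;
    }

    /* Check power-of-two offsets for stuck or shorted address lines;
     * returns the first failing address, or NULL on success. */
    volatile uint32_t *test_address_bus(volatile uint32_t *base, size_t n_words)
    {
        const uint32_t pat = 0xAAAAAAAA, anti = 0x55555555;
        size_t off, test;

        for (off = 1; off < n_words; off <<= 1)
            base[off] = pat;

        base[0] = anti;                     /* address bits stuck high? */
        for (off = 1; off < n_words; off <<= 1)
            if (base[off] != pat)
                return &base[off];
        base[0] = pat;

        for (test = 1; test < n_words; test <<= 1) {   /* stuck low/shorted? */
            base[test] = anti;
            if (base[0] != pat)
                return &base[test];
            for (off = 1; off < n_words; off <<= 1)
                if (off != test && base[off] != pat)
                    return &base[test];
            base[test] = pat;
        }
        return NULL;
    }

    /* Write each cell with its own index, then verify, to find bad cells. */
    volatile uint32_t *test_cells(volatile uint32_t *base, size_t n_words)
    {
        size_t i;
        for (i = 0; i < n_words; i++)
            base[i] = (uint32_t)i;
        for (i = 0; i < n_words; i++)
            if (base[i] != (uint32_t)i)
                return &base[i];
        return NULL;
    }

Note that the test overwrites everything it touches, so it belongs at early boot, before the heap and stacks are placed in external RAM. If it passes repeatedly, the sporadic Data Aborts are more likely a software bug (a wild pointer) than a chip failure.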
I am giving very limited information, I know, but it is probably enough.
System specs: a 64-bit MIPS processor with 8 cores, each running 4 virtual CPUs.
OS: a proprietary Linux based on the 2.6.32.9 kernel.
Process: a simple user-land process running 7 POSIX threads. This specific application is running on core 0, which no process has any CPU affinity for.
The crash is almost impossible to reproduce. There is no specific scenario. We know that if we perform some minor activity with the application, it might crash once a day.
The specific thread that's crashing wakes up every 5 milliseconds, reads information from one shared memory area, and updates another. That's it.
There is not much activity; the process is not working very hard.
Now: when I open the core and load a symbol-less image of the application, gdb points to instruction 100661e0. Instruction 100661e0 looks like this (in an objdump view of the non-stripped image):
void class::foo(uint8_t factor)
{
100661d8: ffbf0018 sd ra,24(sp)
100661dc: 0080802d move s0,a0
bar(factor, shared_memory_a->profiles[PROFILE_1]);
100661e0: 0c019852 jal 10066148 <_ZN28class35barEhR30profile_t>
100661e4: 64c673c8 daddiu a2,a2,29640
bar(factor, shared_memory_a->profiles[PROFILE_2]);
100661e8: de060010 ld a2,16(s0)
The line that shows up as the exception line is:
100661e0: 0c019852 jal 10066148 <_ZN28class35barEhR30profile_t>
Note that 10066148 is a valid instruction address.
The Bad register contains the following address, which is aligned, but does not look valid as far as the instruction address space is concerned: c0000003ddc5dd90
The cause register contains this value: 0000000000800014
I don't understand why the Bad register shows the value it does, when the instruction specifically targets a valid address. I am a little concerned about branch delay slot issues, but I shouldn't have to worry about those when running a user-land application written in simple C++.
I'd appreciate any thoughts.
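One hedged observation, assuming the standard MIPS64 Cause register layout (ExcCode in bits 6:2): the quoted Cause value decodes to ExcCode 5, AdES, i.e. an address error on a store. In that case the Bad register holds the offending data address, not an instruction address, and a stray store from user mode into the kernel address range (which c0000003... resembles) would raise exactly this exception even for an aligned address. Worth checking against your CPU manual:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cause = 0x0000000000800014ULL;   /* value from the core */
        unsigned exc_code = (unsigned)(cause >> 2) & 0x1F;

        /* ExcCode 4 = AdEL (address error on load/fetch),
         * ExcCode 5 = AdES (address error on store) */
        printf("ExcCode = %u\n", exc_code);       /* prints 5 */
        return 0;
    }

Also, bit 31 (BD) is clear in that Cause value, which, if this layout applies, means the exception was not taken in a branch delay slot, so the delay-slot worry seems unfounded here.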
Is the kernel stack a different structure to the user-mode stack that is used by applications we (programmers) write?
Can you explain the differences?
Conceptually, both are the same data structure: a stack.
The reason why there are two different stacks per thread is that in user mode, code must not be allowed to mess up kernel memory. When switching to kernel mode, a different stack, in memory only accessible in kernel mode, is used for return addresses and so on.
If user mode had access to the kernel stack, it could modify a jump address (for instance), then do a system call; when the kernel jumps to the previously modified address, your code gets executed in kernel mode!
Also, security-related information and information about other processes (for synchronisation) might be on the kernel stack, so user mode should not have read access to it either.
The stack of a typical modern operating system is just a region of memory used for storing return addresses and local data. It has the same structure in both kernel and user mode, but each thread gets its own memory area for its stack. Context switches restore the stack pointer, so no thread sees another thread's stack, even though they may be able to share other memory (if the threads are in the same process).
A thread doesn't have to use the stack by the way. The operating system makes assumptions about how it will be used, but the thread doesn't have to follow them.
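To illustrate the one-kernel-stack-per-thread point on i386, here is a sketch with made-up names (not any particular kernel's API): the CPU fetches the kernel stack pointer from the TSS whenever user mode traps into the kernel, so the scheduler repoints it at each context switch:

    #include <stdint.h>

    struct tss { uint32_t esp0; uint32_t ss0; /* ...other fields... */ };

    struct thread {
        uint32_t *user_stack_top;    /* in user-accessible memory */
        uint32_t *kernel_stack_top;  /* mapped kernel-only */
        /* saved registers, etc. */
    };

    /* Called by the scheduler when switching to 'next': any interrupt or
     * system call taken while 'next' runs in user mode will push its
     * trap frame onto next's private kernel stack, never the user one. */
    void set_kernel_stack(struct tss *tss, struct thread *next)
    {
        tss->esp0 = (uint32_t)next->kernel_stack_top;
    }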
I have a VxWorks application running on an ARM microcontroller.
First, let me summarize the application.
The application consists of a 3rd-party stack and a gateway application.
We have implemented an operating system abstraction (OSA) layer to support OS independence.
The underlying stack has its own memory management and control facility, which holds memory blocks in a doubly linked list.
For instance, we don't directly call malloc/new and free/delete. Instead we call the OSA layer's routines; they get the memory from the OS, put it in a list, and then return this memory to the application (routines: XXAlloc, XXFree, XXReAlloc).
And when freeing the memory, we again use XXFree.
In fact this block is a struct which has:
- magic numbers marking the beginning and end of the memory block
- the size the user requested
- the actual size allocated (due to alignment)
- previous and next pointers
- a pointer to the piece of memory given back to the application
- the link register value showing where in the application xxAlloc was called
With this block structure, the stack can check whether a block is corrupted or not.
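From that description, the header presumably looks something like the following sketch (field names and magic values are guesses for illustration, not the stack's actual definitions; the real tail magic likely sits after the user data rather than in the header):

    #include <stddef.h>
    #include <stdint.h>

    #define MAGIC_HEAD 0xA55A5AA5u
    #define MAGIC_TAIL 0x5AA5A55Au

    typedef struct mem_block {
        uint32_t          magic_head;     /* marks the start of the block */
        size_t            requested_size; /* size the caller asked for */
        size_t            actual_size;    /* real size after alignment */
        struct mem_block *prev, *next;    /* doubly linked block list */
        void             *user_ptr;       /* memory handed to the app */
        void             *caller_lr;      /* LR captured at the xxAlloc site */
        uint32_t          magic_tail;     /* marks the end of the block */
    } mem_block_t;

    /* The corruption check is then just verifying both magic numbers. */
    int block_intact(const mem_block_t *b)
    {
        return b->magic_head == MAGIC_HEAD && b->magic_tail == MAGIC_TAIL;
    }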
Also, we have a pthread library ported from Linux that we use to:
- create/terminate threads (currently there are 22 threads)
- provide synchronization objects (events, mutexes, ...)
There is a main task started via taskSpawn, and later this task created the other threads.
That was a description of the application and its VxWorks interface.
The problem is:
One of the tasks suddenly gets destroyed by VxWorks, with no information about what went wrong.
I also have a JTAG debugger, and it hits the VxWorks taskDestroy() routine, but the call stack doesn't give any information, neither the PC nor r14.
I'm suspicious of a specific routine in the code where a huge xxAlloc is done, but the problem occurs so sporadically that there is no clue letting me map it to the source code.
I think the OS detects an exception and performs its handling silently.
Any help would be great.
Regards
It's resolved.
I did an isolated test: I allocated 20 MB with malloc, memset it with 0x55, and stopped that thread of my application.
Then I wrote another thread which checks whether anything other than 0x55 has been written to my 20 MB.
And guess what!! Some other thread, belonging to other components on the CPU (developed by someone else), was writing into my allocated space.
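For the record, that isolation test boils down to something like this sketch (plain POSIX; the size, pattern, and scan interval are illustrative):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define CANARY_SIZE (20u * 1024u * 1024u)
    #define PATTERN     0x55

    static unsigned char *canary;

    /* Periodically scan the buffer for bytes some other thread changed. */
    static void *scan_canary(void *arg)
    {
        (void)arg;
        for (;;) {
            for (size_t i = 0; i < CANARY_SIZE; i++)
                if (canary[i] != PATTERN)
                    fprintf(stderr, "foreign write at offset %zu: 0x%02x\n",
                            i, canary[i]);
            sleep(1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        canary = malloc(CANARY_SIZE);
        if (!canary)
            return 1;
        memset(canary, PATTERN, CANARY_SIZE);   /* known fill pattern */

        pthread_create(&tid, NULL, scan_canary, NULL);
        pthread_join(tid, NULL);                /* scans forever */
        return 0;
    }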
Thanks for your help.
If your task exits, taskDestroy() is called. If you are suspicious of the huge xxAlloc, verify that the allocation code is not calling exit() when memory is exhausted. I've been bitten by this behavior in a third-party OSAL before.
Sounds like you are debugging after integration; this can be a hell of a job.
I suggest breaking the problem into smaller pieces.
Process
1) You can get more insight by instrumenting the code and/or using VxWorks instrumentation (depending on which version). This allows you to get more visibility into what happens. Be sure to log everything to a file, so you can move back in time from the point where the task ends. Instrumentation is a worthwhile investment, as it will be handy on more occasions. An interesting hook facility in VxWorks is taskHookLib (see the sketch after this list).
2) Memory allocation/deallocation is very fundamental functionality. It would be my first candidate for thorough (unit) testing in a well-defined multi-threaded environment. If you have done this and no errors are found, I'd first start to look at why the task has ended.
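For point 1, a minimal sketch of a delete hook via taskHookLib (this follows the classic WIND_TCB-pointer convention; check the exact hook signature in your VxWorks version's headers):

    #include <vxWorks.h>
    #include <taskLib.h>
    #include <taskHookLib.h>
    #include <logLib.h>

    /* Runs just before a task is deleted; in classic VxWorks the task
     * ID is the TCB address, so taskName() can resolve it. */
    static void myDeleteHook(WIND_TCB *pTcb)
    {
        logMsg("task %s (id 0x%x) is being deleted\n",
               (int)taskName((int)pTcb), (int)pTcb, 0, 0, 0, 0);
    }

    STATUS installDeleteHook(void)
    {
        return taskDeleteHookAdd((FUNCPTR)myDeleteHook);
    }

Logging from the delete hook at least tells you which task died and when, which narrows the search even before full instrumentation is in place.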
Other possible causes
A task will also end when its work is done, so the exit may be a return from a not-so-endless loop. Especially if it is always the same task, this would be my guess.
Also, some versions of VxWorks have MMU support, which must be considered.