Why would you choose a vectored interrupt versus a non-vectored interrupt?
I know the differences between them, but I am not sure when you would use one over the other, or which devices use either one.
Thank you so much.
If the hardware supports vectored interrupts, there is no reason not to use them. This is more a question of implementation cost (vector tables and prioritisation logic) vs software cost (reading status registers and looking up the correct vector).
As hardware has become cheaper over time, it makes sense to have dedicated logic provide the correct vector address - this improves interrupt latency in typical real-world implementations, since the processor can start executing the actual handler code sooner.
Where hardware supports both, the non-vectored mode may be for legacy compatibility, or for the unusual case where only one interrupt is required (possibly saving one or two cycles of latency).
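As an illustration of the software cost mentioned above, a non-vectored controller leaves the dispatch to code like the following. This is only a minimal sketch; the register and table names are made up, and a real controller would define its own status layout and priority rules.

```c
#include <stdint.h>

/* Hypothetical register and handler table, for illustration only. */
#define NUM_SOURCES 8
extern volatile uint32_t IRQ_STATUS;                     /* one pending bit per source */
extern void (*const handler_table[NUM_SOURCES])(void);

/* Non-vectored: one shared entry point; software finds the source. */
void irq_entry_nonvectored(void)
{
    uint32_t pending = IRQ_STATUS;
    for (int i = 0; i < NUM_SOURCES; i++) {              /* priority = bit position */
        if (pending & (1u << i)) {
            handler_table[i]();                          /* dispatch in software */
            break;
        }
    }
}

/* Vectored: the controller supplies the handler address (or index) itself,
 * so the scan above happens in hardware and the core jumps straight to
 * handler_table[i]. */
```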
Hypervisors isolate the different OSes running on the same physical machine from each other. Within this definition, separation of non-volatile memory (like hard drives or flash) exists as well.
When thinking about Type-2 hypervisors, it is easy to understand how they separate non-volatile memory, because they just use the file system of the underlying OS to allocate a different "hard-drive file" to each VM.
But then, when I come to think about Type-1 hypervisors, the problem becomes harder. They can use the IOMMU to isolate different hardware interfaces, but when there is just one non-volatile memory interface in the system, I don't see how that helps.
So one way to implement it would be to split the one device into two "partitions" and make the hypervisor interpret calls from the VMs and decide whether each call is legitimate or not. I am not well versed in the communication protocols of non-volatile interfaces, but the hypervisor would have to be familiar with those protocols in order to make the verdict, which sounds (maybe) like overkill.
Are there other ways to implement this kind of isolation?
Yes, you are right: the hypervisor will have to be familiar with those protocols in order to make the isolation possible.
The overhead depends mostly on the protocol. For example, NVMe-based SSDs work over PCIe, and some NVMe devices support SR-IOV, which greatly reduces the effort, but some don't, leaving the burden on the hypervisor.
Mostly this support is configured at build time (how much storage is given to each guest, the command privileges of each guest, etc.), and when a guest sends a command, the hypervisor verifies its bounds and forwards it accordingly.
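As a very simplified sketch of that bounds check, assuming hypothetical structures rather than any real NVMe or AHCI command layout:

```c
#include <stdbool.h>
#include <stdint.h>

/* All names here are hypothetical; real commands carry much more state. */
struct guest_limits {
    uint64_t lba_base;     /* first physical block this guest may touch */
    uint64_t lba_count;    /* number of blocks assigned to it           */
    bool     may_write;
};

struct block_cmd {
    uint64_t lba;          /* block address as seen by the guest */
    uint32_t nblocks;
    bool     is_write;
};

/* Hypervisor-side check before forwarding a guest command to the device:
 * the guest addresses its own 0..lba_count-1 range and the hypervisor
 * remaps it into the physical range that guest owns. */
static bool validate_and_remap(const struct guest_limits *g, struct block_cmd *cmd)
{
    if (cmd->is_write && !g->may_write)
        return false;
    if (cmd->nblocks == 0 || cmd->lba + cmd->nblocks > g->lba_count)
        return false;                       /* outside the guest's partition */
    cmd->lba += g->lba_base;                /* remap into the physical LBA space */
    return true;                            /* safe to forward to the device */
}
```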
So why is there not any support like the MMU or IOMMU in this case?
There are hundreds of types of such devices with different protocols (NVMe, AHCI, etc.), and if a vendor tried to support all of them in hardware to allow better virtualization, they would end up with a huge chip that would not fit.
I am currently studying the specifications for RISC-V, with specification version 2.2 and Privileged Architecture version 1.10. In Chapter 2 of the RISC-V specification, it is mentioned that "[...] though a simple implementation might cover the eight SCALL/SBREAK/CSRR* instructions with a single SYSTEM hardware instruction that always traps [...]"
However, when I look at the privileged specification, the instruction MRET is also a SYSTEM instruction, which is required to return from a trap. Right now I am confused about how much of the Machine-level ISA is required: is it possible to omit all M-level CSRs and use a software handler for any SYSTEM instruction, as stated in the specification? If so, how does one pass in information such as the return address and trap cause? Is this done through the regular registers x1-x31?
Alternatively, is it enough to implement only the following M-level CSRs, if I am aiming for a simple embedded core with only M-level privilege?
mvendorid
marchid
mimpid
mhartid
misa
mscratch
mepc
mcause
Finally, how many of these CSRs can be omitted?
ECALL/EBREAK instructions trap anyway. CSR instructions need to be carefully decoded to make sure they specify existing registers being accessed in allowed modes, which sounds like a job for your favorite sparse matrix, whether a PLA or if/then logic.
You could emulate all SYSTEM instructions, but, as you see, you need to be able to access information inside the hardware that is not part of the normal ISA. This implies that you need to add "instruction extensions."
I would also recommend making the SYSTEM instructions atomic, meaning that exceptions should be masked or avoided within each emulated instruction.
Since I am not a very trusting person, I would create a new mode that would enable the instruction extensions that would allow you to read the exception address directly from the hardware, for example, and fetch instructions from a protected area of memory. Interrupts would be disabled automatically. The mode would be exited by branching to epc+4 or the illegal instruction handler. I would not want to have anything outside the RISC-V spec available even in M-mode, just to be safe.
In my experience, it is better to say "I do everything," than it is to explain to each customer, or worse, have a competitor explain to your customers, what it is that you do not do. But perhaps someone who knows the CSRs better could help; it is not something I do.
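For reference, when the handful of M-level CSRs listed in the question are implemented in hardware, the trap path itself needs nothing outside the standard ISA. Below is a minimal sketch of an M-mode handler, assuming RV32 and GCC-style inline assembly; register save/restore and the final MRET would live in a small assembly stub, which is where mscratch typically comes in.

```c
#include <stdint.h>

/* Minimal M-mode trap handler sketch (RV32, GCC inline asm).
 * Only mepc and mcause are touched here. */
static inline uint32_t csr_read_mcause(void)
{
    uint32_t x;
    __asm__ volatile("csrr %0, mcause" : "=r"(x));
    return x;
}

static inline uint32_t csr_read_mepc(void)
{
    uint32_t x;
    __asm__ volatile("csrr %0, mepc" : "=r"(x));
    return x;
}

static inline void csr_write_mepc(uint32_t x)
{
    __asm__ volatile("csrw mepc, %0" : : "r"(x));
}

void trap_handler(void)
{
    uint32_t cause = csr_read_mcause();
    uint32_t epc   = csr_read_mepc();

    if (cause == 11) {                 /* environment call from M-mode (ECALL) */
        /* handle the call here, then skip over the ECALL instruction */
        csr_write_mepc(epc + 4);
    }
    /* other exception causes: illegal instruction (2), breakpoint (3), ... */
    /* the return to the interrupted code is performed by MRET in the stub  */
}
```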
I learnt in my computer architecture course that a data hazard can be prevented by inserting independent nop instructions between two mutually dependent instructions. This can be done at the assembly level by the compiler.
The alternative way to avoid data hazard is to use data forwarding.
I am a bit confused about how these two alternatives differ as far as performance, speed and hardware are concerned, because as far as I know data forwarding has to be implemented in hardware, whereas nops can be inserted at the assembly level.
Can anybody please explain which approach is better if we consider factors such as performance, speed, hardware, etc.?
Thanks.
Obviously, having the compiler insert nops into the code stream to fill pipeline slots allows hardware to be simplified which can reduce the duration of a pipeline stage or the depth of the pipeline, reduce design effort (time to market, project risk, design cost), or allow a full processor core to fit on a single chip (which helps performance). However, this benefit is tiny compared to the loss of performance from not using forwarding. Higher latency for dependent instructions is very bad for typical programs.
The MIPS R2000, which had both delayed branches and delayed loads, provided result forwarding. (MIPS is an acronym for "Microprocessor without Interlocked Pipeline Stages"). Delayed loads were soon removed from MIPS (which was possible because such did not affect binary compatibility of correct code). The use of delayed instructions was partially from a belief that most delay slots could be filled by the compiler with useful instructions and partially from believing that the increase in code size was not important relative to the simplification of hardware.
Reducing the latency of a load operation was not practical, so the pipeline would need to be stalled for a cycle anyway. The cost of a nop is in cache and memory capacity effects (i.e., the effect of lower code density), and in some cases a single load delay slot could be filled.
Exposing the pipeline organization also has implications for binary compatibility. Later binary compatible implementations must accommodate the ISA designed for the original pipeline organization. A single delayed branch slot works reasonably well for a simple 5-stage scalar implementation (it can be filled with a useful instruction most of the time and allows zero-effective-delay branches [i.e., no stall to resolve the branch or prediction and flushing the pipeline on misprediction]), but when the pipeline is deepened (or made wider) prediction or stalling becomes necessary anyway.
If sufficient parallelism exists in the targeted workloads, hardware simplicity is sufficiently important, and binary compatibility is not a problem, then exposing a pipeline with minimal support for dynamically detecting and handling stall conditions may be sensible. (There are also ways of encoding nops that avoid most of the code size expansion issues.) Having reliably sufficient parallelism (whether instruction-level or thread-level) allows nops to be avoided: by compiler scheduling in the instruction-level case, or by hardware thread interleaving in the thread-level case.
Hardware simplicity tends to reduce energy per unit of work (as well as chip area), and many modern designs are limited by power use. It also makes sense to perform optimizations at compile time (when they are less latency critical and can be done once rather than each time the code is executed) if the storage and communication cost of additional information is not too expensive (assuming information necessary to perform the optimization is available at compile time [dynamic branch prediction is a classic example of where dynamic information is helpful]).
Well, basically, since the hardware is optimised with forwarding, there should be no need for explicitly inserted software NOPs. But that's not the case.
Though forwarding helps in resolving data hazards, some hazards cannot be dealt with by forwarding at all. It just isn't possible.
E.g.:
beq R1,R5,label
<second instruction>
Here the second instruction cannot proceed until the beq has completed its execute stage and decided whether or not to branch. Until then the second instruction has to be stalled (for 2 clock cycles in a classic 5-stage pipeline). On a machine without hardware interlocks, this is done in software by inserting NOPs.
With hardware optimizations, the beq can be resolved in the register fetch/decode stage by adding a comparator there instead of waiting for the execute stage. Even so, the second instruction is stalled for one clock cycle, and again a NOP is needed.
What advantages are there to programming for a non-cache-coherent multi-core machine? Cache coherence has many benefits, but how would one take advantage of the opposite of this feature - an independent cache for each individual core? With what programming paradigm, and for what particular practical problems, would such an architecture be beneficial over a cache-coherent one?
You don't as such take advantage of cache non-coherence. You can't write code which relies on different cores having different views of memory, because a non-coherent cache doesn't guarantee to show different memory to different cores. It just reserves the right to do that.
Cache coherence costs circuits and time. Non-coherent caches are therefore cheaper (and cooler, perhaps?) and faster. Memory access might be faster in cycles, or might be the same best-case speed but with fewer stalls due to cache synchronisation and especially false sharing.
So it's not so much extra things you do to take advantage of non-coherence, it's the things that you don't have to do because you've dropped the disadvantages of coherence - you don't have to redesign your parallel code because it's spending all its time sitting around waiting for the result of a memory store from another core.
The downside of a non-coherent cache architecture at first appears to be that you find yourself adding synchronisation that coherent caches provide automatically. No double-checked locking for you. Then you realise that, in effect, the coherent-cache architectures do this synchronisation (albeit in a super-fast hardware-implemented form) for every single memory access, and block if the cache line is dirty, whether you need it to or not. That cheers me right up :-)
What programming paradigm
Message passing.
and to what particular practical problems would such an architecture be beneficial over a cache-coherent one?
Pattern matching - the input block of memory could very well be "read-only": the "output" result can very well be placed in separate blocks waiting for a "reducer" of some sort.
Of course, this is just an example amongst many I am sure.
Just to make things clear: the principal reasons for going with "non-cache-coherent" architecture are cost & speed (assuming the problems at hand are more efficiently tackled using this architecture).
You can get a bit of extra performance, but you should never rely on each processor having different cache values, as you can never know when the cache is flushed.
I'm not an expert, but I don't think it has any advantage over a cache-coherent architecture, besides being simpler to implement. Of course, such simplicity can allow other optimizations that could be prohibitive in a more complex coherent system, making the non-coherent machine faster when carefully programmed.
That said, I concur with jldupont: message passing doesn't need coherency, so it's (almost) the mandatory way to do IPC.
You could think of the Cell SPE local memory as a sort of cache. It isn't really a cache, since it isn't automatic at all, but the speed is the same and it isn't coherent.
It has big speed advantages because the hardware does not need to spend any time synchronizing the cache line states between cores.
In a Cell, the programmer must do the synchronization manually by writing code to copy SPE local memory back and forth. So a disadvantage is much greater program complexity.
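To make the "manual synchronization" concrete, a message-passing mailbox on a non-coherent machine looks roughly like the sketch below. The cache_flush/cache_invalidate calls stand in for whatever platform-specific primitives (or explicit DMA/local-store copies, as on the Cell) the hardware actually provides; they are assumptions, not a real API.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical platform primitives for explicit cache maintenance. */
extern void cache_flush(const void *addr, size_t len);       /* write back to memory */
extern void cache_invalidate(const void *addr, size_t len);  /* discard stale copy   */

struct mailbox {
    volatile uint32_t ready;     /* 0 = empty, 1 = message present */
    uint8_t payload[60];
};

/* Producer core: write the message, push it out to memory, then set the flag. */
void send(struct mailbox *mb, const void *msg, size_t len)
{
    memcpy(mb->payload, msg, len);
    cache_flush(mb->payload, len);
    mb->ready = 1;
    cache_flush((const void *)&mb->ready, sizeof mb->ready);
}

/* Consumer core: poll the flag, invalidating first so we see the other
 * core's write rather than our own stale cached copy. */
void receive(struct mailbox *mb, void *msg, size_t len)
{
    do {
        cache_invalidate((const void *)&mb->ready, sizeof mb->ready);
    } while (mb->ready == 0);
    cache_invalidate(mb->payload, len);
    memcpy(msg, mb->payload, len);
    mb->ready = 0;
    cache_flush((const void *)&mb->ready, sizeof mb->ready);
}
```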
As always, after some research I was unable to find anything of real value. My question is: how does one go about handling exceptions in a real-time system, given that program failure is generally not acceptable (e.g. in a nuclear reactor or a heart monitor)?
OK, since everyone got lost on the second piece of this: it had NOTHING to do with the main question. I had it in there to show how I normally escape code blocks.
Exception handling in real-time/embedded systems has several layers: not just the language-supported options, but also the MMU, CPU exceptions, and one of my favorites, watchdogs.
Language exceptions (C/C++)
- Not often used, because it is hard to prove that all exceptions are handled at the right level. It is also pretty hard to determine which thread/process should be responsible. Instead, programming by contract is preferred.
Programming style:
- I.e. programming by contract, with additional constraints such as MISRA C / MISRA C++. This can be checked to ensure that all possible cases are somehow handled (e.g. no if without else); see the sketch after this answer.
Hardware support:
- MMU: use of multiple processes which are protected against each other. This allows a misbehaving process to be contained and restarted without taking down the rest of the system.
- watchdog
- CPU exceptions
- Multi-core: use of multiple cores to separate critical processes from the rest. This also allows voting mechanisms (you want this and more for your nuclear reactor).
- multi-system
Most important is to define a strategy. Depending on the other non-functional requirements (safety, reliability, security), a strategy needs to be chosen; it can range from graceful degradation to a partial system reboot.
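As a rough illustration of the contract-plus-watchdog style referred to in the list above (every hardware hook named here is hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware hooks for this sketch. */
extern void watchdog_kick(void);            /* reset the watchdog timer        */
extern void enter_safe_state(void);         /* e.g. drive outputs to safe state */
extern bool read_temperature(int32_t *out); /* false = sensor did not answer   */

#define TEMP_MIN  (-20)
#define TEMP_MAX  (400)

/* Programming-by-contract style: every input is range-checked, every if has
 * an else, and every failure path leads to a defined action rather than an
 * unhandled exception. The watchdog is fed only after a full, healthy cycle,
 * so a hang or persistent fault resets the system into a known state. */
void control_loop(void)
{
    for (;;) {
        int32_t temp;
        if (!read_temperature(&temp)) {
            enter_safe_state();              /* sensor failure: degrade gracefully */
        } else if (temp < TEMP_MIN || temp > TEMP_MAX) {
            enter_safe_state();              /* value outside the contract */
        } else {
            /* normal control action would go here */
            watchdog_kick();                 /* only a healthy cycle feeds the dog */
        }
    }
}
```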
In a 'real-time', 'nuclear reactor' type system, chances are the exception handling allows the system to do the next best thing instead of failing.
Let's say that we have a heart monitor. If it isn't receiving a signal, that might trigger an exception. In that case, the heart monitor might handle the exception by waiting a few seconds and trying again.
In a nuclear reactor, getting to a certain temperature might trigger an exception. In that case, the handling might shut off various parts of the reactor to start to cool it down, and then start them back up when it gets to a reasonable temperature.
Exceptions are meant to have a lower-level system say that it doesn't know what to do, and to have a higher level system handle it. Like in the nuclear reactor, the system that measures temperature probably doesn't know how to turn on parts of the reactor, so it triggers an exception so that some higher-level system can handle it.
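In code, that division of responsibility might look something like this (a sketch with made-up function names):

```c
#include <stdbool.h>

/* Hypothetical functions for the heart-monitor example. */
extern bool read_heart_signal(int *bpm_out);   /* false = no signal this attempt */
extern void sleep_seconds(int s);
extern void raise_alarm(void);                 /* the "higher level" handler     */

/* Lower level: retry a few times; if the condition persists, escalate to a
 * level that knows what to do (here: sound an alarm) rather than failing. */
bool sample_heart_rate(int *bpm_out)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        if (read_heart_signal(bpm_out))
            return true;                       /* recovered locally */
        sleep_seconds(2);                      /* wait and try again */
    }
    raise_alarm();                             /* escalate: we don't know what to do */
    return false;
}
```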
A critical system is like any other system, except that it is specified more clearly, passes through more testing phases, and is generally designed to fail safe.
Regarding your formatting: yes, it's pretty bad. I do mind the lack of {} very much; it has been said often that this is just plain bad style, and it leads to confusion when adding new code.