Pipeline hazards questions - MIPS

I'm currently studying for an exam tomorrow and need some help understanding the following:
The following program is given:
ADDF R12, R13, R14
ADD  R1, R8, R9
MUL  R4, R2, R3
MUL  R5, R6, R7
ADD  R10, R5, R7
ADD  R11, R2, R3
Find the potential conflicts that can arise if the architecture has:
a) No pipeline
b) A Pipeline
c) Multiple pipelines
So for (b) I would say the instruction on line 5 is a data hazard: it reads R5, which is produced on the previous line as the result of a multiplication, so that instruction may not yet be finished.
But what happens if an architecture doesn't have a pipeline? My best guess is that no hazards exist, but I'm not sure.
Also, what happens if it has 2 or more pipelines?
Cheers.

You are correct to suggest that for a) there are no hazards as each instruction must complete before the next starts.
For b):
There is a "Read After Write" dependency between lines 4 and 5.
There are "Read After Read" dependencies between lines 4 and 5 and also between lines 2 and 6.
I suspect that the difference between parts b) and c) is that the question assumes you know ahead of time that the pipe-line has a well defined number of stages. For example we know that if the pipe-line has 3 stages then the RAR dependency between lines 2 and 6 is irrelevant.
In a system with multiple pipelines however the system could fetch say 4 instructions per cycle making dependencies that were formally too far apart now potential hazards.
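To make the RAW hazard concrete: assuming a classic 5-stage pipeline (IF ID EX MEM WB) with no forwarding, and a register file that writes in the first half of a cycle and reads in the second, line 5 has to stall in decode until line 4 has written R5 back:

Cycle:            1   2   3   4   5   6   7
MUL R5, R6, R7    IF  ID  EX  MEM WB
ADD R10, R5, R7       IF  ID  ID  ID  EX  MEM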

How many instructions need to be killed on a mispredict in a 6-stage scalar or superscalar MIPS?

I am working on a pipeline with 6 stages: F D I X0 X1 W. I am asked how many instructions need to be killed when a branch mispredict happens.
I have come up with 4. I think this is because the branch resolution happens in X1 and we will need to kill all the instructions that came after the branch. In the pipeline diagram, it looks like it would require killing 4 instructions that are in the process of flowing through the pipeline. Is that correct?
I am also asked how many need to be killed if the pipeline is a three-wide superscalar. This one I am not sure on. I think that it would be 12 because you can fetch 3 instructions at a time. Is that correct?
kill all the instructions that came after the branch
Not if this is a real MIPS. MIPS has one branch-delay slot: the instruction after a branch always executes, whether the branch is taken or not. (jal's return address points past the delay slot, so the delay-slot instruction doesn't execute twice on return.)
This was enough to fully hide the 1 cycle of branch latency on classic MIPS I (R2000), which used a scalar classic-RISC 5-stage pipeline. It managed that 1-cycle branch latency by forwarding from the first half of an EX clock cycle to an IF starting in the second half of a clock cycle. This is why MIPS branch conditions are all "simple" (they don't need carry propagation through the whole word), like beq between two registers, but only one-operand bgez / bltz against an implicit 0 for signed 2's complement comparisons; that only has to check the sign bit.
If your pipeline was well-designed, you'd expect it to resolve branches after X0 because the MIPS ISA is already limited to make low-latency branch decision easy for the ALU. But apparently your pipeline is not optimized and branch decisions aren't ready until the end of X1, defeating the purpose of making it run MIPS code instead of RISC-V or whatever other RISC instruction set.
I have come up with 4. I think this because the branch resolution happens in X1 and we will need to kill all the instructions that came after the branch.
I think 4 looks right for a generic scalar pipeline without a branch delay slot.
At the end of that X1 cycle, there's an instruction in each of the previous 4 pipeline stages, waiting to move to the next stage on that clock edge. (Assuming no other pipeline bubbles). The delay-slot instruction is one of those and doesn't need to be killed.
(Unless there was an I-cache miss fetching the delay slot instruction, in which case the delay slot instruction might not even be in the pipeline yet. So it's not as simple as killing the 3 stages before X0, or even killing all but the oldest previous instruction in the pipeline. Delay slots are not free to implement, also complicating exception handling.)
So 0..3 instructions need to be killed in pipeline stages from F to I. (If it's possible for the delay-slot instruction to be in one of those stages, you have to detect that special case. If it isn't, e.g. I-cache miss latency long enough that it's either in X0 or still waiting to be fetched, then the pipeline can just kill those first 3 stages and do something based on X0 being a bubble or not.)
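Concretely, at the end of the branch's X1 cycle (scalar, no bubbles, calling the branch i and later instructions i+1, i+2, ...):

stage:  F     D     I     X0           X1
holds:  i+4   i+3   i+2   i+1 (delay)  i (branch)
        kill  kill  kill  keep         resolves here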
I think that it would be 12 because you can fetch 3 instructions at a time
No. Remember the branch itself is one of a group of 3 instructions that can go through the pipeline. In the predict-not-taken case, presumably the decode stage would have sent all 3 instructions in that fetch/decode group down the pipe.
The worst case is, I think, when the branch is the first (oldest in program order) instruction in a group. Then 1 (or 2 with no branch delay slot) instructions from that group in X1 have to be killed, as well as all instructions in previous stages. So (assuming no bubbles) you're cancelling 13 (or 14) instructions: 3 in each of the 4 previous stages, plus the younger instructions in the branch's own group.
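Worst-case tally with a delay slot, branch first in its group of 3:

X1: branch and delay slot survive, 1 younger killed  ->  1
X0, I, D, F: 3 killed in each stage                  -> 12
total                                                -> 13

(Without a delay slot, 2 die in the X1 group, so the total is 14.)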
The best case is when the branch is last (youngest in program order) in a group of 3. Then you're discarding 11 (or 12 with no delay slot).
So for a 3-wide version of this pipeline with no delay slot, depending on bubbles in previous pipeline stages, you're killing 0..14 instructions that are in the pipeline already.
Implementing a delay slot sucks; there's a reason newer ISAs don't expose that pipeline detail. Long-term pain for short-term gain.

MIPS datapath procedure for executing an AND instruction?

Based on this figure, executing the AND instruction would cause these values to be assigned to the signals labeled in blue:
RegWrite = 1
ALUSrc = 0
ALU operation = 0000
MemRead = 0
MemWrite = 0
MemtoReg = 0
PCSrc = 0
However, I am a little confused about which inputs will be used in the Registers block. Can anyone describe the overall AND procedure in the MIPS datapath?
Starting from after the instruction is read from instruction memory, you need to know that AND is an R-type instruction and thus uses three registers. Which registers are actually used is based on the encoded instruction (an R-type has three 5-bit register fields, one for each register): rs and rt go to Read register 1 and 2, while rd goes to Write register. From there, Read data 1 and 2 (the contents of registers rs and rt) go to the ALU, where a bitwise AND is performed on them, and the result is written back to the write register. I traced the path in your picture (omitting the PC incrementing part).

I'm taking a class that uses that exact book this semester. If you look a little ahead, it goes deeper into what is going on; the explanation of the control (blue) lines helps a lot.

The mux blocks are multiplexers, i.e. they select which of two inputs to pass through to the output. In this case, the ALUSrc mux will use Read data 2 because AND is an R-type; if it were I-type, it would switch to the data coming from the sign extend, because that would be the immediate. The other mux allows either data from memory or the result of an ALU operation to be written to the write register; in this case, it will be the result of an ALU operation.
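If it helps to see those bit fields concretely, here is a minimal decode sketch (the encoding is the standard MIPS R-type layout; AND is opcode 0 with funct 0x24):

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t inst = 0x012A4024;        // and $t0, $t1, $t2 (rs=9, rt=10, rd=8)
    unsigned opcode = inst >> 26;      // bits 31..26: 0 for all R-types
    unsigned rs = (inst >> 21) & 0x1F; // bits 25..21: goes to Read register 1
    unsigned rt = (inst >> 16) & 0x1F; // bits 20..16: goes to Read register 2
    unsigned rd = (inst >> 11) & 0x1F; // bits 15..11: goes to Write register
    unsigned funct = inst & 0x3F;      // bits 5..0: 0x24 selects AND in the ALU
    std::printf("op=%u rs=%u rt=%u rd=%u funct=0x%x\n", opcode, rs, rt, rd, funct);
    return 0;
}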
To simply answer your question about the register block: keep in mind that the inputs to the register block are the addresses of the registers your instruction will be using. The register block then either fetches the data in the registers whose addresses were given, or writes data to the destination register at the end.
However, one note: you have an inconsistency in your mux design. MemtoReg and ALUSrc should have opposite values, so unless one of the two muxes is drawn upside down (which is not advisable), there is a mistake in your controller logic.

Designing A Simplified MIPS Processor

So I'm going over some old quizzes for my Computer Organization final and I must have missed this lecture or something. I'm decently proficient in programming MIPS, but this problem has me completely stumped. Could someone help me understand this?
The diagram is missing lines connecting the various parts of the processor as well as a multiplexor for determining if the next instruction address is coming from the PC+4 or from a register as in a jr ra instruction.
There needs to be a line from the ALU to the write data portion of the register file. This is for R-Type instructions, as their result will need to be written back to the destination register. Going into the ALU needs to be lines from Read data 1 and Read data 2, this is how the values from the registers make their way into the ALU for R-type instructions.
A couple lines have been added from the instruction memory to the registers, you're missing the one to the Write Register though (this specifies the destination register for R-type instructions).
For the PC, the line going into the adder goes into the other input (the above one). The 4 for the adder is constant, as each instruction address is 4 bytes after the previous one, so unless we're jumping to an address we will be executing the instruction immediately after the current instruction. The line from the PC to the read address is also necessary as the PC specifies which instruction address to find the current instruction at. The line going into the PC comes from either the result of the PC+4 addition or from the register specified in a jr ra instruction.
To handle this decision a multiplexor is needed. Multiplexors have two inputs and one output, so this one will have one input from the PC+4 adder (for regular R-type instructions) and another from Read Data 1 (for jr ra instructions). The input from Read Data 1 should be visualized as a split from the line between Read Data 1 and the ALU. The output will go right back to the PC, as it determines the next instruction to execute.
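In code terms, that mux is just a two-way select; a sketch with hypothetical signal names (the select would come from the already-generated control):

#include <cstdint>

// 2-to-1 multiplexor choosing the next instruction address.
uint32_t next_pc(uint32_t pc, uint32_t read_data1, bool is_jr) {
    uint32_t pc_plus_4 = pc + 4;           // the adder: sequential execution
    return is_jr ? read_data1 : pc_plus_4; // jr ra takes the register value
}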
I think that's everything that's wanted for that question, as the prof specifies that control signals are already generated (the multiplexor is driven by a control signal rather than generating one, but I think it's necessary nonetheless). Hope that helps!

Why does replacing if-else by bit-operation turn out to be slower in CUDA?

I replace
if((nMark >> tempOffset) & 1){nDuplicate++;}
else{nMark = (nMark | (1 << tempOffset));}
with
nDuplicate += ((nMark >> tempOffset) & 1);
nMark = (nMark | (1 << tempOffset));
This replacement turns out to be 5 ms slower on a GT 520 graphics card.
Could you tell me why? Or do you have any ideas to help me improve it?
The native instruction set for the GPU deals with small conditions very efficiently via predication. Additionally, the ISET instruction converts a condition code register into an integer with the value 0 or 1, which naturally fits with your conditional increment.
My guess is that the key difference between the first and second formulations is that you've effectively hidden the fact that it's an if/else.
To tell for sure, you can use cuobjdump to look at the microcode generated for the two cases: specify --keep to nvcc and use cuobjdump on the .cubin file to see the disassembled microcode.
Shot in the dark, but you're always incrementing/re-assigning the nDuplicate variable in the latter implementation, whereas you weren't incrementing/assigning it if the test in the if statement was false previously. I'm guessing the overhead comes from that, but you don't describe your test data set, so I don't know if that was already the case.
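For what it's worth, the two forms compute the same final values (OR-ing in an already-set bit is a no-op), so any difference is in how much work is done unconditionally, not in the results; a quick host-side sanity check:

#include <cassert>
#include <cstdio>

int main() {
    for (unsigned start = 0; start < 16; ++start) {
        for (int tempOffset = 0; tempOffset < 4; ++tempOffset) {
            // version 1: branchy
            unsigned nMark1 = start; int nDup1 = 0;
            if ((nMark1 >> tempOffset) & 1) { nDup1++; }
            else { nMark1 = nMark1 | (1u << tempOffset); }
            // version 2: branchless, always writes both variables
            unsigned nMark2 = start; int nDup2 = 0;
            nDup2 += (nMark2 >> tempOffset) & 1;
            nMark2 = nMark2 | (1u << tempOffset);
            assert(nMark1 == nMark2 && nDup1 == nDup2);
        }
    }
    std::puts("identical results");
    return 0;
}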
Does your program exhibit significant branch divergence? If you're running e.g. 100 warps across 5 SMs (20 warps each) and only 5 warps, one per SM, diverge, each divergent warp costs one extra pass, so you would see 21 time cycles where you expect 20... a 5% increase that could easily be defeated by doing 2x the work in each thread to avoid rare divergence.
Barring that, the 520 is a fairly modern graphics card, and might incorporate modern SIMT scheduling techniques, e.g. Dynamic Warp Formation and Thread Block Compaction, to hide SIMT stalls. Maybe look into architectural features (specs) or write a simple benchmark to generate n-way branch divergence and measure slowdown?
Barring that, check where your variables live. Does making them shared affect performance/results? Since the second version always accesses all variables while the first can avoid touching nDuplicate, slow (uncoalesced global?) memory accesses could explain it.
Just some things to think about.
For low-level optimization, it is often helpful to look at the low-level assembly (SASS) of the kernel directly. You can do this with the cuobjdump tool distributed as part of the CUDA Toolkit. Basic usage is to compile with -keep in nvcc then do:
cuobjdump -sass mykernel.cubin
Then you can see the exact sequence of instructions and compare them. I'm not sure why version 1 would be faster than version 2 of the code, but the SASS listings might give you a clue.

Creating a logic gate simulator

I need to make an application for creating logic circuits and seeing the results. This is primarily for use in A-Level (UK, 16-18 year olds generally) computing courses.
I've never made any applications like this, so I am not sure of the best design for storing the circuit and evaluating the results (at a reasonable speed, say 100 Hz on a 1.6 GHz single-core computer).
Rather than have the circuit built from the basic gates (and, or, nand, etc.) I want to allow these gates to be used to make "chips" which can then be used within other circuits (e.g. you might want to make an 8-bit register chip, or a 16-bit adder).
The problem is that the number of gates increases massively with such circuits, such that if the simulation worked on each individual gate it would have thousands of gates to simulate, so I need some way to simplify the components placed in a circuit so they can be simulated quickly.
I thought about generating a truth table for each component, so the simulation could use a lookup table to find the outputs for a given input. The problem that occurred to me, though, is that the size of such tables increases massively with the number of inputs. If a chip had 32 inputs, the truth table would need 2^32 rows (at even one byte per row, that's already 4 GB). In many cases that uses more memory than there is to use, so it isn't practical for non-trivial components. It also won't work with chips that can store their state (e.g. registers), since they can't be represented as a simple table of inputs and outputs.
I know I could just hardcode things like register chips; however, since this is for educational purposes I want people to be able to make their own components, as well as view and edit the implementations of standard ones. I considered allowing such components to be created and edited using code (e.g. DLLs or a scripting language), so that an adder, for example, could be represented as "output = inputA + inputB". However, that assumes the students have done enough programming in the given language to be able to understand and write such plugins to mimic the results of their circuit, which is likely not to be the case...
Is there some other way to take a boolean logic circuit and simplify it automatically so that the simulation can determine the outputs of a component quickly?
As for storing the components I was thinking of storing some kind of tree structure, such that each component is evaluated once all components that link to its inputs are evaluated.
e.g. consider: A.B + C
The simulator would first evaluate the AND gate, and then evaluate the OR gate using the output of the AND gate and C.
However, it just occurred to me that cases where the outputs link back round to the inputs will cause a deadlock, because their inputs will never all be evaluated... How can I overcome this, since the program can only evaluate one gate at a time?
Have you looked at Richard Bowles's simulator?
You're not the first person to want to build their own circuit simulator ;-).
My suggestion is to settle on a minimal set of primitives. When I began mine (which I plan to resume one of these days...) I had two primitives:
Source: zero inputs, one output that's always 1.
Transistor: two inputs A and B, one output that's A and not B.
Obviously I'm misusing the terminology a bit, not to mention neglecting the niceties of electronics. On the second point I recommend abstracting to wires that carry 1s and 0s like I did. I had a lot of fun drawing diagrams of gates and adders from these. When you can assemble them into circuits and draw a box round the set (with inputs and outputs) you can start building bigger things like multipliers.
If you want anything with loops you need to incorporate some kind of delay -- so each component needs to store the state of its outputs. On every cycle you update all the new states from the current states of the upstream components.
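A minimal sketch of that two-phase update (made-up structure; every component latches its output, so loops simply take effect one cycle later instead of deadlocking):

#include <cstddef>
#include <vector>

// Each component stores its current output; each cycle, compute all next
// outputs from the current ones, then commit them together.
struct Component {
    std::vector<int> inputs;  // indices of upstream components
    int state = 0;            // latched output (0 or 1)
    virtual int eval(const std::vector<Component*>& all) const = 0;
    virtual ~Component() = default;
};

struct Nand : Component {
    int eval(const std::vector<Component*>& all) const override {
        return !(all[inputs[0]]->state & all[inputs[1]]->state);
    }
};

void step(std::vector<Component*>& all) {
    std::vector<int> next(all.size());
    for (std::size_t i = 0; i < all.size(); ++i) // phase 1: read old states only
        next[i] = all[i]->eval(all);
    for (std::size_t i = 0; i < all.size(); ++i) // phase 2: commit new states
        all[i]->state = next[i];
}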
Edit: Regarding your concerns about scalability, how about defaulting to the first-principles method of simulating each component in terms of its state and upstream neighbours, but providing ways of optimising subcircuits:
If you have a subcircuit S with inputs A[m] with m < 8 (say, giving a maximum of 256 rows) and outputs B[n] and no loops, generate the truth table for S and use that. This could be done automatically for identified subcircuits (and reused if the subcircuit appears more than once) or by choice.
If you have a subcircuit with loops, you may still be able to generate a truth table. There are fixed-point finding methods which can help here.
If your subcircuit has delays (and they are significant to the enclosing circuit) the truth table can incorporate state columns. E.g. if the subcircuit has input A, inner state B, and output C, where C <- A and B, B <- A (writing B' for B's next value), the truth table could be:
A B | B' C
0 0 | 0 0
0 1 | 0 0
1 0 | 1 0
1 1 | 1 1
If you have a subcircuit that the user asserts implements a particular known pattern such as "adder", provide an option for using a hard-coded implementation for updating that subcircuit instead of by simulating its inner parts.
When I made a circuit emulator (sadly, also incomplete and also unreleased), here's how I handled loops:
Each circuit element stores its boolean value
When an element "E0" changes its value, it notifies (via the observer pattern) all who depend on it
Each observing element evaluates its new value and does likewise
When the E0 change occurs, a level-1 list is kept of all elements affected. If an element already appears on this list, it gets remembered in a new level-2 list but doesn't continue to notify its observers. When the sequence which E0 began has stopped notifying new elements, the next queue level is handled. I.e. the sequence is followed and completed for the first element added to level-2, then for the next added to level-2, etc., until all of level-x is complete; then you move to level-(x+1).
This is in no way complete. If you ever have multiple oscillators doing infinite loops, then no matter what order you take them in, one could prevent the other from ever getting its turn. My next goal was to alleviate this by limiting steps with clock-based syncing instead of cascading combinatorials, but I never got that far in my project.
You might want to take a look at the From Nand To Tetris in 12 steps course software. There is a video talking about it on YouTube.
The course page is at: http://www1.idc.ac.il/tecs/
If you can disallow loops (outputs linking back to inputs), then you can significantly simplify the problem. In that case, for every input there will be exactly one definite output. Cycles, however, can make the output undecidable (or rather, constantly changing).
Evaluating a circuit without loops should be easy - just use the BFS algorithm with "junctions" (connections between logic gates) as the items in the list. Start off with all the inputs to all the gates in an "undefined" state. As soon as a gate has all inputs "defined" (either 1 or 0), calculate its output and add its output junctions to the BFS list. This way you only have to evaluate each gate and each junction once.
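A sketch of that BFS in code (made-up representation; consumers[j] has one entry per gate input pin that reads junction j):

#include <cstddef>
#include <queue>
#include <vector>

enum GateType { AND_GATE, OR_GATE, NOT_GATE };

struct Gate {
    GateType type;
    std::vector<int> inputs; // junction indices feeding this gate
    int output;              // junction index the gate drives
};

// junction[j] is -1 (undefined) or 0/1; the caller presets the circuit inputs.
void evaluate(const std::vector<Gate>& gates,
              const std::vector<std::vector<int>>& consumers,
              std::vector<int>& junction)
{
    std::vector<int> missing(gates.size());
    std::queue<int> ready;                  // junctions that just became defined
    for (std::size_t g = 0; g < gates.size(); ++g)
        missing[g] = (int)gates[g].inputs.size();
    for (int j = 0; j < (int)junction.size(); ++j)
        if (junction[j] != -1) ready.push(j);

    while (!ready.empty()) {
        int j = ready.front(); ready.pop();
        for (int g : consumers[j]) {
            if (--missing[g] == 0) {        // all inputs defined: fire the gate once
                const Gate& gt = gates[g];
                int a = junction[gt.inputs[0]];
                int out = (gt.type == NOT_GATE) ? !a
                        : (gt.type == AND_GATE) ? (a & junction[gt.inputs[1]])
                                                : (a | junction[gt.inputs[1]]);
                junction[gt.output] = out;
                ready.push(gt.output);      // wake downstream gates
            }
        }
    }
}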
If there are loops, the same algorithm can be used, but the circuit can be built in such a way that it never comes to a "rest" and some junctions are always changing between 1 and 0.
Oops, actually this algorithm can't be used in that case, because the looped gates (and gates depending on them) would forever stay "undefined".
You could introduce them to the concept of Karnaugh maps, which would help them simplify truth values for themselves.
You could hard code all the common ones. Then allow them to build their own out of the hard coded ones (which would include low level gates), which would be evaluated by evaluating each sub-component. Finally, if one of their "chips" has less than X inputs/outputs, you could "optimize" it into a lookup table. Maybe detect how common it is and only do this for the most used Y chips? This way you have a good speed/space tradeoff.
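A sketch of that lookup-table optimisation (slow_eval stands in for whatever gate-by-gate evaluator you already have; inputs and outputs are packed into the bits of an integer):

#include <cstdint>
#include <functional>
#include <vector>

// Evaluate a small loop-free chip once per possible input pattern, then
// answer every later query straight from the table.
std::vector<uint32_t> build_lut(unsigned num_inputs,
                                const std::function<uint32_t(uint32_t)>& slow_eval) {
    std::vector<uint32_t> lut(1u << num_inputs); // 2^n rows, so keep n small
    for (uint32_t in = 0; in < lut.size(); ++in)
        lut[in] = slow_eval(in);                 // one-off gate-level simulation
    return lut;
}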
You could always JIT compile the circuits...
As I haven't really thought about it, I'm not really sure what approach I'd take.. but it would possibly be a hybrid method and I'd definitely hard code popular "chips" in too.
When I was playing around making a "digital circuit" simulation environment, I had each defined circuit (a basic gate, a mux, a demux and a couple of other primitives) associated with a transfer function (that is, a function that computes all outputs based on the present inputs), an "agenda" structure (basically a linked list of "when to activate a specific transfer function" entries), virtual wires and a global clock.
I arbitrarily set the wires to hard-modify the inputs whenever an output changed, and made the act of changing an input on any circuit schedule its transfer function to be called after the gate delay. With this at hand, I could accommodate both clocked and unclocked circuit elements (a clocked element is set to have its transfer function run at "next clock transition, plus gate delay"; any unclocked element just depends on the gate delay).
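A minimal sketch of that agenda idea (names are made up; real code would schedule per-circuit transfer functions rather than ad hoc lambdas):

#include <cstdio>
#include <functional>
#include <queue>

// An "agenda": (time, action) pairs popped in time order.
struct Event {
    long time;
    std::function<void()> action;
    bool operator<(const Event& other) const { return time > other.time; } // min-heap
};

struct Agenda {
    std::priority_queue<Event> events;
    long now = 0;
    void schedule(long delay, std::function<void()> action) {
        events.push({now + delay, std::move(action)});
    }
    void run() {
        while (!events.empty()) {
            Event e = events.top();
            events.pop();
            now = e.time;
            e.action(); // may schedule further events (e.g. after a gate delay)
        }
    }
};

// Example: an inverter with a 2-tick gate delay reacting to an input change.
int main() {
    Agenda agenda;
    int in = 0, out = 1;
    agenda.schedule(0, [&] {
        in = 1;                  // the input wire changes at t=0...
        agenda.schedule(2, [&] { // ...so the output follows after the gate delay
            out = !in;
            std::printf("t=%ld out=%d\n", agenda.now, out);
        });
    });
    agenda.run();
    return 0;
}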
Never really got around to building a GUI for it, so I've never released the code.