I've been trying to learn RISC-V coming from MIPS, and initially they don't look too dissimilar, especially the instruction sets. Are there any significant differences between the two? Are most of the differences in the backend?
According to Section 2.16 of Patterson, D. A., & Hennessy, J. L. (2018). Computer Organization and Design: The Hardware/Software Interface, RISC-V edition. Cambridge, MA: Morgan Kaufmann Publishers:
One of the main differences between RISC-V and MIPS is for conditional branches other than equal or not equal. Whereas RISC-V simply provides branch instructions to compare two registers, MIPS relies on a comparison instruction that sets a register to 0 or 1 depending on whether the comparison is true. Programmers then follow that comparison instruction with a branch on equal to or not equal to zero depending on the desired outcome of the comparison. Keeping with its minimalist philosophy, MIPS only performs less than comparisons, leaving it up to the programmer to switch order of operands or to switch the condition being tested by the branch to get all the desired outcomes. MIPS has both signed and unsigned versions of the set on less than instructions: slt and sltu.

When we look beyond the core instructions that are most commonly used, the other main difference is that the full MIPS is a much larger instruction set than RISC-V [...]
Figure 2.29 from the book shows the slight differences in instruction formats for MIPS and RISC-V.
One thing I want to add that is a bit more specific: the upper-immediate instruction in RISC-V loads a 20-bit immediate into the upper 20 bits of the register, compared to a 16-bit immediate loaded into the upper 16 bits in MIPS.
For example, in MIPS:

lui $s0, 0x1234
$s0 = 0x12340000

And in RISC-V the result is s0 = 0x01234000.
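To make the placement concrete, here is a small C sketch (my own illustration, not from the book) of where each instruction puts its immediate:

#include <stdint.h>

// MIPS lui: a 16-bit immediate lands in bits 31:16
uint32_t mips_lui(uint16_t imm16)  { return (uint32_t)imm16 << 16; }

// RISC-V lui: a 20-bit immediate lands in bits 31:12
uint32_t riscv_lui(uint32_t imm20) { return (imm20 & 0xFFFFF) << 12; }

// mips_lui(0x1234)  == 0x12340000
// riscv_lui(0x1234) == 0x01234000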
I'm reading about the Instruction Decode (ID) phase in the MIPS datapath, and I've got the following quote: "Once operands are known, read the actual data (from registers) or extend the data to 32 bits (immediates)."
Can someone explain what the "extend the data to 32 bits (immediates)" part means? I know that registers all contain 32 bits, and I know what an immediate is. I just don't understand why you need to extend the immediate from 26 to 32 bits.
Thanks!
26-bit immediates appear only in jump instructions, and they aren't sign- or zero-extended to 32 bits, because they're not displacements to be added or subtracted.
I-type instructions with 16-bit immediates are different.
addi / addiu immediates are sign-extended (by duplicating the top/sign bit of the immediate to all higher bits).
https://en.wikipedia.org/wiki/Two%27s_complement#Sign_extension
This allows 2's complement numbers from -2^15 .. +2^15-1 to be encoded.
(0xFFFF8000 to 0x00007FFF)
ori/andi/xori boolean immediates are zero-extended (by setting all higher bits to zero).
This allows unsigned / 2's complement numbers from 0 .. 2^16-1 to be encoded.
(0x00000000 to 0x0000FFFF)
For other instructions, see this instruction-set reference, which breaks down each instruction, showing 0^16 || I[15:0] for zero-extension or I[15]^16 || I[15:0] for sign-extension.
This makes it possible to use 16-bit immediates as inputs to a 32-bit binary operation that only makes sense with 2 equal-width inputs. (In a simple classic MIPS pipeline, the decode stage fetches operands from registers and/or immediates. Register inputs are always going to be 32-bit, so the ALU is wired up for 32-bit inputs. Extending immediates to 32-bit means the rest of the CPU doesn't have to care whether the data came from an immediate or a register.)
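In C terms, the two extension rules of the decode stage look like this (a minimal sketch, not an actual hardware description):

#include <stdint.h>

// Sign extension: copy bit 15 into bits 31:16
uint32_t sign_extend16(uint16_t imm) { return (uint32_t)(int32_t)(int16_t)imm; }

// Zero extension: fill bits 31:16 with zeros
uint32_t zero_extend16(uint16_t imm) { return (uint32_t)imm; }

// sign_extend16(0x8000) == 0xFFFF8000, zero_extend16(0x8000) == 0x00008000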
Also sign-extended:
offsets in the reg+imm16 addressing mode used by lw/sw and other load/store instructions
relative branches (PC = PC + 4 + (imm16 << 2), with imm16 sign-extended)
maybe others; check the manual for instructions I didn't mention to see whether they sign- or zero-extend.
You might be wondering "why does addiu sign-extend its immediate even though it's unsigned?"
Remember that there's no subiu, only addiu with a negative immediate. Being able to add or subtract numbers in the range -2^15 .. +2^15-1 is more useful than only being able to add 0 .. 2^16-1.
And usually you don't want to raise an exception on signed overflow, so compilers normally use addu / addiu even on signed integers. addu is badly named: it's not "for unsigned integers", it's just a wrapping, never-faulting version of add/addi. This sort of makes sense if you think of C, where signed overflow is undefined behaviour (and thus a compiler could use add and raise an exception in that case if it wanted to implement it that way), but unsigned integers have well-defined overflow behaviour: base-2 wraparound.
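As a small illustration (the values are made up): addiu with a negative immediate is exactly how subtraction is done.

#include <stdint.h>

// addiu $sp, $sp, -8 encodes the 16-bit immediate 0xFFF8; sign extension
// plus a wrapping 32-bit add makes it a subtract-by-8
uint32_t addiu(uint32_t reg, uint16_t imm16) {
    return reg + (uint32_t)(int32_t)(int16_t)imm16;  // wraps, never faults
}

// addiu(0x7FFFEFF0, 0xFFF8) == 0x7FFFEFE8, i.e. the old value minus 8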
On a 32-bit CPU, most of the operations you do (like adding, subtracting, dereferencing a pointer) are done with 32-bit numbers. When you have a number with fewer bits, you need to somehow decide what those other bits are going to be when you want to use that number in one of those operations. The act of deciding what those new high bits are is called "extending".
Assuming you are just doing a standard zero extension or sign extension, extending is very cheap. However, it does require some circuitry, so it makes sense that a description of the MIPS datapath would mention it.
I am trying to understand how the Verilog branch statement works in an immediate instruction format for the MIPS processor. I am having trouble understanding what the following Verilog code does:

IR is the instruction, so IR[31:26] would give the opcode.
reg [31:0] BT = PC + 4 + { {14{IR[15]}}, IR[15:0], 2'b0 };
I see bits and pieces, such as that we are updating the program counter and that we are taking the last 16 bits of the instruction to get the immediate offset. Then, since we need a 32-bit word, we extend it with 16 more zeros.
Why is it PC + 4 instead of just PC?
What is 2'b0?
I have read something about sign extension but don't quite understand what is going on here.
Thanks for all the help!
1: Branch offsets in MIPS are calculated relative to the next instruction (since the instruction after the branch is also executed, as the branch delay slot). Thus, we have to use PC + 4 as the base for the address calculation.

2: MIPS uses byte addressing (every byte in memory has a unique address) but 32-bit (4-byte) words, and the specification requires that each instruction be word-aligned; so the low two bits of any instruction address are always 00. The branch offset is therefore stored as a word count, and the 2'b0 in the concatenation appends those two zero bits, which multiplies the offset by 4.

In full, the code calculates the branch target address by taking the program counter, adding 4 to account for the branch delay slot, and then adding the sign-extended, word-aligned offset. The sign extension (replicating the top bit, which is what {14{IR[15]}} does) is needed because the branch offset can be either positive or negative.
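Expressed in C, the same computation looks like this (a sketch with hypothetical names, not code from any particular core):

#include <stdint.h>

// Branch target: sign-extend the 16-bit immediate, turn the word offset
// into a byte offset (the 2'b0 / << 2), and add it to PC + 4
uint32_t branch_target(uint32_t pc, uint32_t ir) {
    int32_t imm = (int32_t)(int16_t)(ir & 0xFFFF);  // {{14{IR[15]}}, IR[15:0]}
    return pc + 4 + ((uint32_t)imm << 2);
}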
Numbers in Verilog can be written as sized literals: the number of bits, a tick, the base, and then the value.
2'b11; // 2-bit binary
3'd3 ; // 3-bit decimal
4'ha ; // 4-bit hex
The base only describes how the following number is written; the bit pattern it denotes is not changed by it, i.e. 2'b11 is identical to 2'd3. So 2'b0 in the code above is simply a 2-bit zero value.
I am writing a program for an embedded hardware that only supports 32-bit single-precision floating-point arithmetic. The algorithm I am implementing, however, requires a 64-bit double-precision addition and comparison. I am trying to emulate double datatype using a tuple of two floats. So a double d will be emulated as a struct containing the tuple: (float d.hi, float d.low).
The comparison should be straightforward using a lexicographic ordering. The addition, however, is a bit tricky because I am not sure which base I should use. Should it be FLT_MAX? And how can I detect a carry?
How can this be done?
Edit (Clarity): I need the extra significant digits rather than the extra range.
double-float is a technique that uses pairs of single-precision numbers to achieve almost twice the precision of single-precision arithmetic, accompanied by a slight reduction of the single-precision exponent range (due to intermediate underflow and overflow at the far ends of the range). The basic algorithms were developed by T. J. Dekker and William Kahan in the 1970s. Below I list two fairly recent papers that show how these techniques can be adapted to GPUs; however, much of the material covered in these papers is applicable independent of platform, so it should be useful for the task at hand.
Guillaume Da Graça and David Defour, Implementation of float-float operators on graphics hardware, 7th Conference on Real Numbers and Computers, RNC7. https://hal.archives-ouvertes.fr/hal-00021443

Andrew Thall, Extended-Precision Floating-Point Numbers for GPU Computation. http://andrewthall.org/papers/df64_qf128.pdf
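To give a flavour of the technique, here is a minimal C sketch of double-float addition in the Dekker/Kahan style (the type and function names are my own, and it assumes strict round-to-nearest IEEE 754 float arithmetic with no reassociation or fused operations, so don't compile it with -ffast-math):

typedef struct { float hi, lo; } dfloat;  // value represented is hi + lo

// Knuth's TwoSum: s + e == a + b exactly; e is the rounding error of s
static void two_sum(float a, float b, float *s, float *e) {
    float sum = a + b;
    float bb  = sum - a;               // the part of b that made it into sum
    *e = (a - (sum - bb)) + (b - bb);  // what was rounded away
    *s = sum;
}

dfloat df_add(dfloat x, dfloat y) {
    float s, e;
    two_sum(x.hi, y.hi, &s, &e);       // add high parts, capture the error
    e += x.lo + y.lo;                  // fold in the low parts
    float hi = s + e;                  // renormalize (Fast2Sum)
    float lo = e - (hi - s);
    return (dfloat){ hi, lo };
}

Note that the carry asked about in the question never has to be detected explicitly: TwoSum recovers the exact rounding error of the high-part addition, and the renormalization step pushes it into the low word.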
This is not going to be simple.
A float (IEEE 754 single-precision) has 1 sign bit, 8 exponent bits, and 23 bits of mantissa (well, effectively 24).
A double (IEEE 754 double-precision) has 1 sign bit, 11 exponent bits, and 52 bits of mantissa (effectively 53).
You can use the sign bit and 8 exponent bits from one of your floats, but how are you going to get 3 more exponent bits and 29 bits of mantissa out of the other?
Maybe somebody else can come up with something clever, but my answer is "this is impossible". (Or at least, "no easier than using a 64-bit struct and implementing your own operations")
It depends a bit on what types of operations you want to perform. If you only care about additions and subtractions, Kahan Summation can be a great solution.
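For reference, Kahan summation looks like this in C (a minimal sketch; it relies on the compiler not reassociating the float arithmetic, so avoid -ffast-math):

float kahan_sum(const float *a, int n) {
    float sum = 0.0f;
    float c   = 0.0f;           // running compensation for lost low-order bits
    for (int i = 0; i < n; i++) {
        float y = a[i] - c;     // correct the next addend by the previous error
        float t = sum + y;      // big + small: low-order bits of y are lost...
        c = (t - sum) - y;      // ...but can be recovered algebraically
        sum = t;
    }
    return sum;
}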
If you need both the precision and a wide range, you'll be needing a software implementation of double precision floating point, such as SoftFloat.
(For addition, the basic principle is to break the representation (e.g. 64 bits) of each value into its three constituent parts - sign, exponent and mantissa; then shift the mantissa of one part based on the difference in the exponents, add to or subtract from the mantissa of the other part based on the sign bits, and possibly renormalise the result by shifting the mantissa and adjusting the exponent correspondingly. Along the way, there are a lot of fiddly details to account for, in order to avoid unnecessary loss of accuracy, and to deal with special values such as infinities, NaNs, and denormalised numbers.)
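A deliberately oversimplified C sketch of that principle, handling only finite, normal, same-sign inputs and truncating where a real implementation (such as SoftFloat) would round:

#include <stdint.h>

uint32_t soft_fadd_same_sign(uint32_t a, uint32_t b) {
    uint32_t ea = (a >> 23) & 0xFF;            // biased exponents
    uint32_t eb = (b >> 23) & 0xFF;
    uint32_t ma = (a & 0x7FFFFF) | 0x800000;   // mantissas, implicit 1 restored
    uint32_t mb = (b & 0x7FFFFF) | 0x800000;
    if (ea < eb) {                             // make a the larger-exponent operand
        uint32_t t;
        t = ea; ea = eb; eb = t;
        t = ma; ma = mb; mb = t;
    }
    uint32_t d = ea - eb;
    ma += (d < 24) ? (mb >> d) : 0;            // align the smaller mantissa, add
    if (ma & 0x1000000) {                      // carry out of bit 23: renormalize
        ma >>= 1;
        ea++;
    }
    return (a & 0x80000000) | (ea << 23) | (ma & 0x7FFFFF);
}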
Given all the constraints for high precision over 23 magnitudes, I think the most fruitful method would be to implement a custom arithmetic package.
A quick survey shows that Briggs' doubledouble C++ library should address your needs and then some. See this.[*] The default implementation is based on double to achieve 30-significant-figure computation, but it is readily rewritten to use float to achieve 13 or 14 significant figures. That may be enough for your requirements if care is taken to segregate addition operations on values of similar magnitude, only adding extremes together in the last operations.
Beware though, the comments mention messing around with the x87 control register. I didn't check into the details, but that might make the code too non-portable for your use.
[*] The C++ source is linked from that article; of the download links, only the gzipped tar was not dead.
This is similar to the double-double arithmetic used by many compilers for long double on some machines that have only hardware double calculation support. It's also used as float-float on older NVIDIA GPUs where there's no double support. See Emulating FP64 with 2 FP32 on a GPU. This way the calculation will be much faster than with a software floating-point library.

However, most microcontrollers have no hardware support for floats at all, so floats are implemented purely in software there. Because of that, using float-float may not increase performance, and it introduces some memory overhead to store the extra exponent and sign bits.

If you really need the longer mantissa, try using a custom floating-point library instead. You can choose whatever is enough for you; for example, adapt the library to a new 48-bit float type of your own if only 40 bits of mantissa and 7 bits of exponent are needed, and spend no time calculating or storing the unnecessary 16 bits. Such a library has to be written carefully to be competitive, though, because compilers' own float libraries often have assembly-level optimization for their native float types.
Another software-based solution that might be of use: GNU MPFR
It takes care of many other special cases and allows arbitrary precision (better than 64-bit double) that you would otherwise have to handle yourself.
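A small usage sketch (this assumes MPFR and its dependency GMP can be built for the target, which may be unrealistic on a small embedded part; 64 bits of precision is chosen to exceed double's 53):

#include <stdio.h>
#include <mpfr.h>

int main(void) {
    mpfr_t a, b, s;
    mpfr_inits2(64, a, b, s, (mpfr_ptr) 0);  // 64-bit significands
    mpfr_set_d(a, 1.0, MPFR_RNDN);
    mpfr_set_d(b, 1e-18, MPFR_RNDN);
    mpfr_add(s, a, b, MPFR_RNDN);            // a plain double would round this to 1.0
    mpfr_printf("%.25Rg\n", s);
    mpfr_clears(a, b, s, (mpfr_ptr) 0);
    return 0;
}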
That's not practical. If it were, every embedded 32-bit processor (or compiler) would emulate double precision that way. As it stands, none that I am aware of do; most of them just substitute float for double.

If you need the precision and not the dynamic range, your best bet is fixed point. If the compiler supports 64-bit integers, this will be easier too.
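For instance, a minimal Q32.32 fixed-point sketch in C (the names and the format split are my own; it assumes a 64-bit integer type is available):

#include <stdint.h>

typedef int64_t q32_32;  // 32 integer bits, 32 fractional bits

q32_32 q_from_float(float f)     { return (q32_32)(f * 4294967296.0f); }  // f * 2^32
float  q_to_float(q32_32 q)      { return (float)q / 4294967296.0f; }
q32_32 q_add(q32_32 a, q32_32 b) { return a + b; }              // a plain integer add
int    q_cmp(q32_32 a, q32_32 b) { return (a > b) - (a < b); }  // -1, 0, or 1

Addition and comparison (the two operations the question needs) become exact integer operations, at the cost of a fixed range of roughly ±2^31.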
If a thirty-two bit word can represent a MIPS instruction, how can we tell whether that instruction is of type R, J, or I?

I'm having a hard time understanding these concepts; I think the opcodes might be different?
Basically, MIPS instructions have an opcode stored in the most significant 6 bits, which specifies the format of the following bits. In particular, R-type instructions always have an opcode of 000000 (with the instruction's functionality then further specified by the 6 least significant bits, the funct field), the jumps j and jal have the J-type opcodes 000010 and 000011, and essentially all other opcodes are I-type.
MIPS Instruction Coding
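As a sketch, the classification can be written in a few lines of C (covering the base integer opcodes only; coprocessor and extension formats are ignored):

#include <stdint.h>

const char *mips_format(uint32_t insn) {
    uint32_t opcode = insn >> 26;                            // most significant 6 bits
    if (opcode == 0x00) return "R-type";                     // funct (bits 5:0) picks the op
    if (opcode == 0x02 || opcode == 0x03) return "J-type";   // j, jal
    return "I-type";                                         // the rest of the base set
}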
I am learning 32-bit MIPS. I wanted to ask why we sign-extend the 16-bit offset (in the single-cycle datapath) before sending it to the ALU in the case of store word.
I am not sure if it's helpful for you now, but I am posting it anyway.
Let us consider, in a very general sense, an array of instructions in C++, i.e. A[0], A[1], A[2], ...
The "figurative" distance between any two instructions is 1 UNIT.
Let's take this analogy to MIPS. In MIPS, figuratively, every instruction is separated by "1 UNIT"; however, 1 UNIT = 4 bytes in MIPS. Every instruction is 4 bytes long, and this is why, when moving from instruction to instruction, the PC is incremented by 4, i.e. PC + 4. So the gap between instruction i and instruction i+2 is "figuratively" 2 but actually 2*4 = 8, i.e. PC + 4 + 4.

Coming back to the offsets specified in branch instructions: the offset represents the "figurative" distance from the next instruction (the instruction following the branch). So, to get the "real" distance, the offset has to be multiplied by 4. This is why the offset is shifted left by 2 bits (left-shifting any binary value by n bits multiplies that value by 2^n; in our case 2^2 = 4), and it is sign-extended because it can be negative as well as positive.
So the actual target address of a branch instruction is PC+4+4*Offset.
Hope this helps.
Sounds like the 16-bit offset is a signed 2's complement number, i.e. it can be either positive or negative.
When converting it to 32 bits, the most significant bit needs to be copied to the upper 16 bits in order to keep the sign information.
To the best of my knowledge, in load and store instructions the offset value is added to the value in the base register. As that register is 32 bits wide and adding a 16-bit value to a 32-bit value directly is not possible, the offset is sign-extended first.
I think you are getting your concepts a little wrong here.
The 5 bits that you think go into the ALU actually go into the register file, to select one of the 32 (2^5) registers.
Each register itself is of 32 bits. Hence, to add the offset to the register value, you need to sign extend it to 32 bits.
An ALU operation in the MIPS single-cycle datapath is always between two operands of the same size, 32 bits.
In the hardware of a 32-bit machine, most ALUs take 32-bit inputs, and all registers are 32-bit registers.

To work with your data it must be 32 bits wide; this is why we SIGN-extend. Another approach would be to ZERO-extend, but SIGN-extend is used when dealing with immediates and offsets, to preserve the sign in 2's complement.
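Concretely, the lw/sw address calculation works like this (a C sketch with illustrative names):

#include <stdint.h>

// e.g. sw $t0, -4($sp): effective address = base register + sign-extended offset
uint32_t effective_address(uint32_t base, uint16_t offset16) {
    int32_t offset = (int32_t)(int16_t)offset16;  // widen 16 -> 32 bits, keeping the sign
    return base + (uint32_t)offset;               // now a 32-bit + 32-bit ALU add
}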
Sign extension also happens, for example, on M68xxx machines, but only when loading address registers; not so for data registers.
For example,

movea.w addr,a0
move.w  addr,d0

addr:
dc.w    $FFFF

leads, in the case of the data register load, to $0000FFFF in d0 (move.w writes only the low word, assuming the upper word was clear), but in the case of the address register load to $FFFFFFFF in a0.
To understand this, take the two's complement of the signed negative representation $FFFF (giving 1), extend the number to 32 bits, and apply the two's complement again, which yields the corresponding 32-bit representation, $FFFFFFFF.
Cheers and kind regards,
Stephan S.