MIPS – 5 bits to represent registers?

In MIPS, the register fields in an instruction are 5 bits long.
I read that each register is 32 bits long, so why are only 5 bits used to identify a register in the instruction format?

http://www.cs.uwm.edu/classes/cs315/Bacon/Lecture/HTML/ch05s03.html
"For example, the MIPS processor has 32 general-purpose registers, so it takes 5 bits to specify which one to use. In contrast, the MIPS has a 4 gibibyte memory capacity, so it takes 32 bits to specify which memory cell to use. An instruction with 3 operands will require 15 bits if they are all registers, and 96 bits if they are all memory addresses."
Work everything out in base 2.

You can address up to 4 GiB of RAM with only 32 bits, right? This is because 2^32 = 4,294,967,296, which is the number of independent "cells" you can access. Each of those "cells" is 8 bits (a byte).
The same thing happens with registers, except that each "cell" is 32 bits rather than 8. With 5 bits for addressing registers, you get 2^5 = 32 possible cells - i.e. 32 possible registers, each 32 bits wide.
The width of a register is not related to the number of bits you need to address a given number of registers.
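To make the 5-bit fields concrete, here is a small C sketch (assuming the standard MIPS R-type layout: op(6) rs(5) rt(5) rd(5) shamt(5) funct(6)) that pulls the three register numbers out of an instruction word; the 0x1F mask is exactly 5 bits, so each field can only name registers 0 through 31. The word 0x012A4020 should encode add $t0, $t1, $t2.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* R-type: op(6) rs(5) rt(5) rd(5) shamt(5) funct(6).
       0x012A4020 = add $t0, $t1, $t2 (rs=$t1=9, rt=$t2=10, rd=$t0=8). */
    uint32_t instr = 0x012A4020;

    uint32_t rs = (instr >> 21) & 0x1F;  /* each field is 5 bits: 0..31 */
    uint32_t rt = (instr >> 16) & 0x1F;
    uint32_t rd = (instr >> 11) & 0x1F;

    printf("rs=%u rt=%u rd=%u\n", rs, rt, rd);  /* rs=9 rt=10 rd=8 */
    return 0;
}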

Related

MIPS sra Instruction on SPIM

I am currently using SPIM (QTSpim) to learn about MIPS. I had a few questions regarding the SPIM commands and how they work.
1) As far as I know, MIPS usually uses 16 bits to display values, but why do the registers in QTSpim only have 8 bits?
2) For register $11(t3), the original value was 10. After the machine performs a [sra $11, $11, 2] instruction, the value changes from 10 to 4. How does this happen? How are 2 positions shifted right when 10 is only 2 bits?
Thank you.
1) Not sure where you got that idea. QtSpim simulates a MIPS32-based machine, so the general-purpose registers are 32-bit. You may be confusing the eight hex digits in the register display with eight bits: each hex digit covers four bits, so eight of them show all 32.
2) 10 hexadecimal is 10000 binary. Shift that right by two and you get 100 binary, which is 4 decimal. You can also think of it as 16 decimal divided by 4, since sra by N bits is a (signed) division by 2^N.
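A quick C sketch of the same arithmetic, with the caveat that >> on a negative signed int is implementation-defined in C (it is an arithmetic shift on mainstream compilers), which is exactly the behaviour sra guarantees in hardware:

#include <stdio.h>

int main(void) {
    int x = 0x10;             /* the register held 10 hex = 16 decimal */
    printf("%d\n", x >> 2);   /* 16 / 4 = 4, matching sra by 2 */

    /* sra is an *arithmetic* shift: the sign bit is copied in from
       the left, so negative values divide correctly too. */
    int y = -16;
    printf("%d\n", y >> 2);   /* -4 */
    return 0;
}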

Number of bits required to store unsigned Int

I need to know the correct solution for the minimum number of bits required to store an unsigned int. Say I have 403; its binary representation as an unsigned int is 00000000000000000000000110010011, which adds up to 32 bits. Now, I know that an unsigned integer takes 32 bits to store. But why do we have all those zeros in front when the number can be expressed in only 9 bits, 110010011? Moreover, how come an unsigned int takes 32 bits to store while a decimal digit takes only 8 bits?
Please explain in detail. Thanks
This has nothing to do with how many bits are needed, and everything to do with how many bits your computer is wired for (32). Though 9 bits are enough, your computer has data channels that are 32 bits wide - it's physically wired to be efficient for 32, 64, 128, etc. And your compiler presumably chose 32 bits for you.
The decimal representation of "403" is three digits, and to represent each digit in binary requires at least four bits (2^4 is 16, so you have 6 spare codes); so the minimum "decimal" representation of "403" requires 12 bits, not eight.
However, to represent a normal character (including the decimal digits as well as alpha, punctuation, etc) it's common to use 8 bits, which allows up to 2^8 or 256 possible characters. Represented this way, it takes 3x8 or 24 binary bits to represent 403.
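A minimal C sketch of the counting argument above: the loop finds the width of 403 itself (one bit per position up to the highest set bit), and the prints spell out the 12-bit digit-per-nibble (binary-coded decimal) and 24-bit ASCII costs.

#include <stdio.h>

int main(void) {
    unsigned n = 403;

    /* Minimum bits = position of the highest set bit, plus one. */
    int bits = 0;
    for (unsigned v = n; v != 0; v >>= 1)
        bits++;
    printf("%u needs %d bits\n", n, bits);  /* 403 needs 9 bits */

    /* Digit-per-nibble (BCD-style): 3 digits x 4 bits = 12 bits.
       One ASCII character per digit:  3 digits x 8 bits = 24 bits. */
    printf("BCD: %d bits, ASCII: %d bits\n", 3 * 4, 3 * 8);
    return 0;
}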

MIPS Multiplication with overflow followed by subtraction

The result of a multiplication in MIPS is stored in two registers, HI and LO.
For example (here I take HI and LO to be 4-bit registers for the sake of convenience):
li $t0,12
mult $t0,$t0
12 * 12 = 144
1100 * 1100 = 1001 0000
so HI has 1001 and LO has 0000. Now if I want to subtract 12 from the result, how do I do that?
I can't use
mflo $t1
subi $t2,$t1,12
because LO has all zeros and the result would be wrong. How do I perform the subtraction in this case, when the two numbers are 32-bit integers and the multiplication overflows, say something like
2^30 * 2^4 - 14
where the high register is used?
MIPS registers are 32-bit (or 64-bit, but that does not change results in this case), so after multiplication you will have hi=0x00000000 and lo=0x00000090. I.e. the 8 bits of product fit into 32 bits of lo just fine.
After subtracting 12 from 0x90 you should expect to see t2 = 0x84.
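For the case where the product really does spill into HI, the subtraction has to borrow across the register pair. Here is a minimal C sketch of that borrow logic (in MIPS you would read the halves with mfhi/mflo and detect the borrow with sltu):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 1u << 30, b = 1u << 4;       /* 2^30 * 2^4 = 2^34 overflows 32 bits */
    uint64_t product = (uint64_t)a * b;       /* what mult leaves in HI:LO */
    uint32_t hi = (uint32_t)(product >> 32);  /* mfhi */
    uint32_t lo = (uint32_t)product;          /* mflo */

    /* Subtract 14 across the 64-bit HI:LO pair: when LO is smaller than
       the subtrahend, the subtraction wraps and HI must give up a borrow. */
    uint32_t k = 14;
    uint32_t new_lo = lo - k;
    uint32_t new_hi = hi - (lo < k ? 1u : 0u);

    printf("hi:lo = %08x:%08x -> %08x:%08x\n", hi, lo, new_hi, new_lo);
    return 0;
}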

The names of 16 and 32 bits

8 bits are called a "byte". What are 16 bits called? "Short"? "Word"?
And what about 32 bits? I know "int" is CPU-dependent; I'm interested in universally applicable names.
A byte is the smallest unit of data that a computer can work with. The C language defines char to be one "byte", which has CHAR_BIT bits. On most systems this is 8 bits.
A word, on the other hand, is usually the size of the values typically handled by the CPU. Most of the time, this is the size of the general-purpose registers. The problem with this definition is that it doesn't age well.
For example, the MS Windows WORD datatype was defined back in the early days, when 16-bit CPUs were the norm. When 32-bit CPUs came around, the definition stayed, and a 32-bit integer became a DWORD. And now we have 64-bit QWORDs.
Far from "universal", but here are several different takes on the matter:
Windows:
BYTE - 8 bits, unsigned
WORD - 16 bits, unsigned
DWORD - 32 bits, unsigned
QWORD - 64 bits, unsigned
GDB:
Byte.
Halfword (two bytes).
Word (four bytes).
Giant word (eight bytes).
<stdint.h>:
uint8_t - 8 bits, unsigned
uint16_t - 16 bits, unsigned
uint32_t - 32 bits, unsigned
uint64_t - 64 bits, unsigned
uintptr_t - pointer-sized integer, unsigned
(Signed types exist as well.)
If you're trying to write portable code that relies upon the size of a particular data type (e.g. you're implementing a network protocol), always use <stdint.h>.
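For example, here is a minimal sketch of that advice with a hypothetical two-field network header (the struct and function names are made up for illustration). Fixed-width types pin the field sizes down on every platform, which short/int would not:

#include <stdint.h>

/* Hypothetical wire format: a 2-byte message type followed by a
   4-byte length, both big-endian. */
struct header {
    uint16_t type;
    uint32_t length;
};

/* Serialize byte by byte so neither struct padding nor host
   endianness leaks into the wire format. */
void header_to_wire(const struct header *h, uint8_t out[6]) {
    out[0] = (uint8_t)(h->type >> 8);
    out[1] = (uint8_t)(h->type);
    out[2] = (uint8_t)(h->length >> 24);
    out[3] = (uint8_t)(h->length >> 16);
    out[4] = (uint8_t)(h->length >> 8);
    out[5] = (uint8_t)(h->length);
}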
The correct name for a group of exactly 8 bits is really an octet. A byte may have more than or fewer than 8 bits (although this is relatively rare).
Beyond this there are no rigorously well-defined terms for 16 bits, 32 bits, etc, as far as I know.
Dr. Werner Buchholz coined the word byte to mean "a unit of digital information to describe an ordered group of bits, as the smallest amount of data that a computer could process." The word's actual meaning therefore depends on the architecture of the machine in question. The number of bits in a byte is arbitrary, and could be 8, 16, or even 32.
For a thorough dissertation on the subject, refer to Wikipedia.
There's no universal name for 16-bit or 32-bit units of measurement.
The term 'word' is used to describe the number of bits processed at a time by a program or operating system. So in a 16-bit CPU, the word length is 16 bits; in a 32-bit CPU, it is 32 bits. I also believe the term is a little flexible, so if I write a program that does all its processing in chunks of, say, 10 bits, I could refer to those 10-bit chunks as 'words'.
And just to be clear: 'int' is not a unit of measurement for computer memory. It is just the data type used to store whole numbers (i.e. numbers with no fractional component). So if you find a way to implement integers using only 2 bits (or whatever) in your programming language, that would still be an int.
short, word and int are all dependent on the compiler and/or architecture.
int is a datatype and is usually 32-bit on desktop 32-bit or 64-bit systems. I don't think it's ever larger than the register size of the underlying hardware, so it should always be a fast (and usually large enough) datatype for common uses.
short may be of smaller size than int; that's all you know. In practice, shorts are usually 16-bit, but you cannot depend on it.
word is not a datatype, it rather denotes the natural register size of the underlying hardware.
And regarding the names of 16 or 32 bits, there aren't any. There is no reason to label them.
I have heard them referred to as byte, word, and long word. But as others mention, it is dependent on the native architecture you are working on.
They are called 2 bytes and 4 bytes.
There aren't any universal terms for 16 and 32 bits. The size of a word is machine dependent.

Minimum register length required to store values between -64 (hex) and 128 (hex)?

What is the minimum register length in a processor required to store values between -64 (hex) and 128 (hex), assuming 2's complement format?
I was thinking an 8-bit register, since an 8-bit 2's complement register goes from 0 to 255.
Am I correct?
Probably you've used the wrong term. 0x64 and 0x128 are very rarely used as hex values, and if you do mean those values then obviously you can't store that big a range with 8 bits: 0x128 - (-0x64) = 0x18C, which needs at least 9 bits to store.
OTOH 64 and 128 are extremely common values, because they're powers of 2. Using the common 2's complement encoding would also cost you 9 bits (because 128 is outside the 8-bit two's complement range) and waste a lot of unused values. But in fact there are almost no 9-bit systems, so you'd have to use a 16-bit short. Hence if you want to save memory, the only way is to use your own encoding.
If you want the value only for storage, almost any encoding is appropriate. For example, use int8_t with -64 to 127 as normal and a special case for 128 (-128, -65... any number you prefer), or a uint8_t from 0 to 192 with the values mapped linearly. Just convert to and from the real value on load/store. Operations still need to be done in a type wider than 8 bits, but the on-disk size is only 8 bits.
If you need the value for calculation, more care should be taken. For example you could use excess-64 encoding, in which binary 0 represents -64 and 192 represents 128; generally, a value A is stored as A + 64. After each calculation you'll have to readjust the result into this representation: if A and B are stored as a = A + 64 and b = B + 64, then A + B must be stored as a + b - 64 (since adding the raw codes adds the 64 bias twice).
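A minimal C sketch of the excess-64 scheme and its readjusted addition (the helper names are invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* Excess-64: store v + 64, so the range -64..128 maps to codes 0..192. */
static uint8_t enc(int v)     { return (uint8_t)(v + 64); }
static int     dec(uint8_t c) { return (int)c - 64; }

/* Adding two encoded values piles up one extra bias of 64, so take it
   back out to keep the result encoded. */
static uint8_t add_enc(uint8_t a, uint8_t b) { return (uint8_t)(a + b - 64); }

int main(void) {
    uint8_t a = enc(-64), b = enc(100);
    uint8_t s = add_enc(a, b);  /* encodes -64 + 100 = 36 */
    printf("%d + %d = %d\n", dec(a), dec(b), dec(s));
    return 0;
}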
You would not be correct even if the maximum were 128 decimal. Since you're using 2's complement, the range is actually -2^(N-1) to 2^(N-1) - 1, where N is the number of bits. So 8 bits would have a range of -128 to 127 (decimal).
Since you present it as -64 (hex) to 128 (hex), you are actually looking at -100 (decimal) to 296 (decimal). Adding a bit increases the range to -256 to 255, and one more addition gets you to -512 to 511, making the necessary amount 10 bits.
Now make sure you are not dealing with -64 to 128 (decimal). As I pointed out earlier, the 8-bit range only goes up to 127, which would make it a very tricky question if you were not on your toes. In that case it would be 9 bits.
In two's complement, an 8-bit register will range from -128 to +127. To get the upper bound, you fill the lower 7 bits with 1s: 01111111 is 127 in decimal. To get the lower bound, you set the highest bit to 1 and the rest to 0: 10000000 is -128 in two's complement.
Those hex values seem a bit odd (they're powers of two in decimal), but in any case: 0x128 (the 0x is a standard prefix for hex numbers) is the larger of the numbers in magnitude, and its binary representation is 100101000. You need to be able to represent those nine bits after the sign bit. So to be able to use two's complement, you'd need at least ten bits.
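A small C sketch that checks both readings of the question by searching for the smallest two's complement width covering the range:

#include <stdio.h>

/* Smallest two's complement width N with
   -(2^(N-1)) <= lo  and  hi <= 2^(N-1) - 1. */
static int min_width(long lo, long hi) {
    int n = 1;
    while (-(1L << (n - 1)) > lo || hi > (1L << (n - 1)) - 1)
        n++;
    return n;
}

int main(void) {
    printf("%d\n", min_width(-0x64, 0x128));  /* hex reading: -100..296 -> 10 bits */
    printf("%d\n", min_width(-64, 128));      /* decimal reading: -> 9 bits */
    return 0;
}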