My question is: What is the size of integer constants in MIPS?
Here I found how they are used.
If I have such a constant defined in my data segment and I want to
calculate the size of the data segment, what size do I take for this
constant: size of word, byte, half,..?
Here's a data segment example:
.data
array: .word 1, 2, 3
LEN = 2 ; Here's the constant
The size of data segment is: 3 * 32(bit) + ?(bit)
Thank you in advance!
I assume that by "constants" you mean equates.
Constants do not occupy space in the data segment. Whenever one is used as an operand it is replaced by its expression, and its size matches that of the operand. So in your example the data segment would be using 4 * 3 bytes = 12 bytes (96 bits).
For example if you write in MARS simulator
.data
array: .word 1,2,3
.eqv LEN 2
.eqv LARGE_VALUE 20000
buffer: .space LARGE_VALUE
then you can use identifier LEN as a substitute for 2, e.g.
li $a1, LEN
li $a2, LARGE_VALUE
In this case, LEN becomes a 16-bit immediate when the first instruction is assembled, and for the second pseudo-instruction the assembler emits code to build the 32-bit value. The buffer defined in the data segment will be 20000 bytes, as defined by .eqv LARGE_VALUE.
I have a file that defines a set of tiles (used in an online game). The format for each tile is as follows:
x: 12 bits
y: 12 bits
tile: 8 bits
32 bits in total, so each tile can be expressed as a 32 bit integer.
More info about the file format can be found here:
http://wiki.minegoboom.com/index.php/LVL_Format
http://www.rarefied.org/subspace/lvlformat.html
The 4-byte structures are not broken along byte boundaries. As you can see, x and y are both defined as 12 bits, i.e. x is stored in 1.5 bytes, y is stored in 1.5 bytes, and tile is stored in 1 byte.
Even though x and y use 12 bits their max value is 1023, so they could be expressed in 10 bits. This was down to the creator of the format. I guess they were just padding things out so they could use a 32-bit integer for each tile? Either way, for x and y we can ignore the final 2 bits.
I'm using a nodejs Buffer to read the file and I'm using the following code to read the values.
var n = tileBuffer.readUInt32LE(0);
var x = n & 0x03FF;
var y = (n >> 12) & 0x03FF;
var tile = (n >> 24) & 0x00ff;
This code works fine but when I read the bits themselves, in an attempt to understand binary better, I see something that confuses me.
Take, for example, an int that expresses the following:
x: 1023
y: 1023
tile: 1
Creating the tiles in a map editor and reading the resulting file into a buffer returns <Buffer ff f3 3f 01>
When I convert each byte into a string of bits I get the following:
ff = 11111111
f3 = 11110011
3f = 00111111
01 = 00000001
11111111 11110011 00111111 00000001
I assume I should just take the first 12 bits as x but chop off the last 2 bits. Use the next 12 bits as y, chopping off 2 bits again, and the remaining 8 bits would be the tile.
x: 1111111111
y: 0011001111
tile: 00000001
The x is correct (1111111111 = 1023), the y is wrong (0011001111 = 207, not 1023), and tile is correct (00000001 = 1)
I'm confused and obviously missing something.
It makes more sense to look at it in this order (this is the binary representation of n):
00000001 00111111 11110011 11111111
In that order, you can easily do the masking and shifting visually.
The problem with your approach is that in, for example, 11111111 11110011, the bits of the second byte that belong to the first field sit at the right (the low end of that byte), so read left to right the field is discontinuous.
Also, masking with 0x03FF keeps only 10 bits of those first two fields, so two bits simply disappear. You can keep all 12 bits by masking with 0x0FFF. As it stands, you effectively have two padding bits.
I have this code :
#include <iostream>
using std::cout;

void test(int x)
{
    cout << x;
    double y = x + 4.0;
    cout << y;
}

int main()
{
    test(7); // call test from main
    return 0;
}
In MIPS :
after I put the value of parameter x in 0($fp) in stack and jump to test :
lw $a0,0($fp) // load value of x and print it
li $v0,1
syscall
lw $t1,0($fp)
sw $t1,0($sp) // put the value of x in stack and point it by $sp
li.d $f0,4.0
s.d $f0,4($sp) // put the value 4.0 in stack and point it by $sp
l.d $f0,0($sp)
l.d $f2,4($sp)
add.d $f4,$f0,$f2
s.d $f4,8($sp) // put the result of add
l.d $f12,8($sp) // print the value of y
li $v0,3
syscall
My problem is that the result for y in QtSPIM is 4. The cause is that I load an integer value into a double register. How can I solve this problem?
You need to load the integer value into an FP register and then convert it to floating-point format. You have an 8-byte load (l.d), which loads the 4 bytes of your value plus the following 4 bytes, whatever they happen to be, so you probably want to change that to a 4-byte load and then do a cvt:
lwc1 $f0,0($fp)     # load the raw integer word into $f0
cvt.d.w $f0,$f0     # convert that word to a double
Normally I see 'lw' used with 2 args, the destination register and the memory location to load from:
lw $2, 3($1)
In a CPU implementation the $2 part is the write address sent to the register file, and 3($1) is calculated by the ALU and then passed to memory. How does it work when 'lw' has 3 args, as follows?
array:
.word 1, 2, 3, 4, 5, 6, 7, 8, 9
lw $r2, $r24, array
I believe array here is a label, which will be translated into the immediate field of the lw instruction by the loader, so at load time it might look like this:
lw $r2, 121($r24)
where 121 will be related to the location in memory where the loader placed the initialized array.
If I have the following binary:
<<32,16,10,9,108,111,99,97,108,104,111,115,116,16,170,31>>
How can I know what length it has?
For byte size:
1> byte_size(<<32,16,10,9,108,111,99,97,108,104,111,115,116,16,170,31>>).
16
For bit size:
2> bit_size(<<32,16,10,9,108,111,99,97,108,104,111,115,116,16,170,31>>).
128
When you have a bit string (a binary whose length in bits is not divisible by 8), byte_size/1 rounds up to the nearest whole byte, i.e. the number of bytes the bit string would fit in:
3> bit_size(<<0:19>>).
19
4> byte_size(<<0:19>>). % 19 bits fits inside 3 bytes
3
5> bit_size(<<0:24>>).
24
6> byte_size(<<0:24>>). % 24 bits is exactly 3 bytes
3
7> byte_size(<<0:25>>). % 25 bits only fits inside 4 bytes
4
Here's an example illustrating the difference in sizes going from 8 bits (fits in 1 byte) to 17 bits (needs 3 bytes to fit):
8> [{bit_size(<<0:N>>), byte_size(<<0:N>>)} || N <- lists:seq(8,17)].
[{8,1},
{9,2},
{10,2},
{11,2},
{12,2},
{13,2},
{14,2},
{15,2},
{16,2},
{17,3}]
Let's say I have $t0, and I'd like to divide its integer contents by two, and store it in $t1.
My gut says: srl $t1, $t0, 2
... but wouldn't that be a problem if... say... the right-most bit was 1? Or does it all come out in the wash because the right-most bit (if positive) makes $t0 an odd number, which becomes even when divided?
Teach me, O wise ones...
Use the instruction sra: shift right arithmetic!
sra $t1, $t0, 1
It divides the contents of $t0 by 2 (the first power of 2).
Description: Shifts a register value
right by the shift amount (shamt) and
places the value in the destination
register. The sign bit is shifted in.
Operation: $d = $t >> h;
advance_pc (4);
Syntax: sra $d, $t, h
Encoding:
0000 00-- ---t tttt dddd dhhh hh00 0011
Why is this important? Check this simple program that divides an integer number (program's input) by 2.
#include <stdio.h>

int atoi(const char *);  /* declared by hand: <stdlib.h>'s div() prototype would clash with ours */

/*
 * div divides by 2 using sra
 * udiv divides by 2 using srl
 */
int div(int n);   /* implemented in MIPS assembly */
int udiv(int n);

int main(int argc, char **argv) {
    if (argc == 1) return 1;
    int a = atoi(argv[1]);
    printf("div:%d udiv:%d\n", div(a), udiv(a));
    return 0;
}
//file div.S
#include <mips/regdef.h>
//int div(int n)
.globl div
.text
.align 2
.ent div
div:
sra v0,a0,1
jr ra //Returns value in v0 register.
.end div
//int udiv(int n)
.globl udiv
.text
.align 2
.ent udiv
udiv:
srl v0,a0,1
jr ra //Returns value in v0 register.
.end udiv
Compile:
root#:/tmp# gcc -c div.S
root#:/tmp# gcc -c main.c
root#:/tmp# gcc div.o main.o -o test
Test drives:
root#:~# ./test 2
div:1 udiv:1
root#:~# ./test 4
div:2 udiv:2
root#:~# ./test 8
div:4 udiv:4
root#:~# ./test 16
div:8 udiv:8
root#:~# ./test -2
div:-1 udiv:2147483647
root#:~# ./test -4
div:-2 udiv:2147483646
root#:~# ./test -8
div:-4 udiv:2147483644
root#:~# ./test -16
div:-8 udiv:2147483640
root#:~#
See what happens? The srl instruction is shifting the sign bit
-2 = 0xfffffffe
if we shift one bit to the right, we get 0x7fffffff
0x7fffffff = 2147483647
Of course this is not a problem when the number is a positive integer, because the sign bit is 0.
For unsigned integer division, that's right. But it only works for unsigned integers, and only if you don't care about the fractional part.
You will want to use a shift amount of 1, not 2:
srl $t1, $t0, 1
If you use 2, you will end up dividing by 4. In general, shifting right by x divides by 2^x.
If you are concerned about "rounding" and you want to round up, you can just increment by 1 before doing the logical (unsigned) shift.
And as others have stated previously, you only shift by 1 to divide by 2. A right shift by N bits divides by 2^N.
To use rounding (rounding up at 0.5 or greater) with shift values of N other than 1, just add 1<<(N-1) prior to the shift.