I need some help understanding the format of a hex view of a binary file, so I can get the correct numbers out of a table when parsing it with Python.
Example:
0000b50: 0400 0000 ffff 0900 0000 ffff 0900 0000 ................
0000b60: ffff 0900 0000 ffff 0900 0000 ffff 0900 ............0..#
When I need to find the start of an object at index 0x0b54, where would that be? Would it be [here]? 0000b50: 0400 [0]000 ffff 0900 0000 ffff 0900 0000
The object is 96 bytes long. Is one set of four hex digits one byte, i.e. ffff? Or, since it is base 16, does each individual position hold 2 bytes, so that ffff is 8 bytes? And I need to find 6 bytes for each entry in the table, which would be fff?
What does the part at the end represent, i.e. ............0..#?
One hex digit covers 4 bits: f = 15 = 1111, so ff = 1111 1111 covers 16 x 16 = 256 values (0-255), i.e. one 8-bit byte.
ffff = 2 bytes.
Translate the digits into binary if you want to see the individual bits, but for counting bytes it is enough to know that every pair of hex digits is one byte. The column on the right is the same 16 bytes rendered as ASCII, with unprintable bytes shown as dots.
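Counting pairs from the line offset also answers your index question: 0x0b50 plus 4 bytes puts 0x0b54 at the first ff on that line, not at the 0000 you marked. A minimal Python sketch of pulling the object out (the file name and the struct layout are placeholders for illustration, not taken from your actual table; your question mentions 6-byte entries, and "<IH" reads exactly 6 bytes, 4 + 2):

import struct

OFFSET = 0x0B54      # start of the object: 4 bytes past the 0x0b50 line start
OBJECT_SIZE = 96     # the object is 96 bytes long

with open("data.bin", "rb") as f:   # "data.bin" is a placeholder name
    f.seek(OFFSET)                  # jump straight to the object
    raw = f.read(OBJECT_SIZE)       # grab all 96 bytes at once

# Hypothetical entry layout: a little-endian u32 followed by a u16,
# loosely matching the repeating "ffff 0900 0000" pattern in the dump.
value, flags = struct.unpack_from("<IH", raw, 0)
print(hex(value), hex(flags))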
Related
I need to initialize a register with a specific decimal value in order to extract the bits of another register (by ANDing them). The only problem is that, since I need to do an AND operation with a very large value, it turns out to be very difficult to convert the binary number into a decimal one without a calculator.
So I am interested to know whether there is any way of initializing a register with a binary value directly.
If your assembler/simulator does not allow you to write binary constants but does allow hexadecimal ones, then you can easily encode a binary number in hexadecimal.
Starting from the least significant bit, encode each group of 4 bits as its hexadecimal digit using the following table:
0000 - 0
0001 - 1
0010 - 2
0011 - 3
0100 - 4
0101 - 5
0110 - 6
0111 - 7
1000 - 8
1001 - 9
1010 - A
1011 - B
1100 - C
1101 - D
1110 - E
1111 - F
If the original binary number does not have a multiple of 4 bits, pad the most significant bits with 0 until the last group is full.
For example, you can encode these words in hexadecimal by converting them from binary:
myvar: .word 0x1234 # binary 1 0010 0011 0100
other: .word 0xCAFEBABE # binary 1100 1010 1111 1110 1011 1010 1011 1110
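If you want to double-check such a conversion without a calculator, here is a small Python sketch that applies the same nibble table (the function name is just for illustration):

# Group a binary string into nibbles from the right and map each one
# to a hex digit -- exactly what the table above does by hand.
def bin_to_hex(bits):
    padded = bits.zfill((len(bits) + 3) // 4 * 4)  # pad the high end to a multiple of 4
    nibbles = [padded[i:i + 4] for i in range(0, len(padded), 4)]
    return "0x" + "".join(format(int(n, 2), "X") for n in nibbles)

print(bin_to_hex("1001000110100"))                     # 0x1234
print(bin_to_hex("11001010111111101011101010111110"))  # 0xCAFEBABE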
In a class I am currently taking (Operating Systems), we needed to convert hexadecimal to binary and then split the binary into its respective parts to get the opcodes and the like.
Here is an example of the process
0x4B010000
01001011000000010000000000000000
0000000000000000 001011 0000 0001
0000 MOVI 1 0
0000 MOVI R1 0
0x4C060001
01001100000001100000000000000001
0000000000000001 001100 0000 0110
0001 ADDI 6 0
0001 ADDI R6 0
0x10658000
00010000011001011000000000000000
000000000000 010000 0110 0101 1000
0000 SLT 8 6 5
0000 SLT R8 R6 R5
0x56810018
01010110100000010000000000011000
0000000000011000 010110 1000 0001
0018 BNE 1 8
0018 BNE R1 8
I have provided a GitHub repository that contains the files stating the instruction set and the instruction format, plus all the data values.
https://github.com/DavidClifford-Pro/InstructionsFormat
The instructions say to decode it like I have, and then to execute it. I just don't understand the instructions and how it's supposed to be executed. I believe it's all been decoded correctly.
EDIT:
The instruction list is part of the project (the first line of each example is the hex code); we were given these codes to decode and then execute.
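Not a full answer on the execute step, but here is a minimal Python sketch of the decode, assuming the field widths shown in the worked examples above (2-bit format, 6-bit opcode, 4-bit register fields, 16-bit address); these widths are read off the examples, not verified against the linked repo:

def decode(word):
    fmt    = (word >> 30) & 0x3     # top 2 bits: instruction format
    opcode = (word >> 24) & 0x3F    # next 6 bits: opcode
    r1     = (word >> 20) & 0xF     # first 4-bit register field
    r2     = (word >> 16) & 0xF     # second 4-bit register field
    r3     = (word >> 12) & 0xF     # third register field (R-format only)
    addr   = word & 0xFFFF          # low 16 bits: address/immediate
    return fmt, opcode, r1, r2, r3, addr

print(decode(0x4B010000))  # (1, 11, 0, 1, 0, 0)      -> opcode 0b001011, MOVI R1 0
print(decode(0x10658000))  # (0, 16, 6, 5, 8, 32768)  -> opcode 0b010000, SLT R8 R6 R5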
Please refer to example packet:
2010-08-22 21:35:26.571793 00:50:56:9c:69:38 (oui Unknown) > Broadcast,
ethertype Unknown (0xcafe), length 74
0x0000: 0200 000a ffff 0000 ffff 0c00 3c00 0000 ............<...
0x0010: 0000 0000 0100 0080 3e9e 2900 0000 0000 ........>.).....
0x0020: 0000 0000 ffff ffff ad00 996b 0600 0050 ...........k...P
0x0030: 569c 6938 0000 0000 8e07 0000 V.i8........
An organizationally unique identifier (OUI) is a 24-bit number that uniquely identifies a vendor, manufacturer, or other organization.
These are purchased from the Institute of Electrical and Electronics Engineers (IEEE) Registration Authority by the "assignee" (the IEEE term for the vendor, manufacturer, or other organization). They are used as the first portion of derivative identifiers to uniquely identify a particular piece of equipment, such as MAC addresses, Subnetwork Access Protocol (SNAP) protocol identifiers, and World Wide Names for Fibre Channel devices.
In MAC addresses, the OUI is combined with a 24-bit number (assigned by the owner or 'assignee' of the OUI) to form the address. The first three octets of the address are the OUI.
OUI List: http://standards-oui.ieee.org/oui.txt
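To make the "first three octets" point concrete, a quick Python sketch that extracts the OUI from the source MAC in the packet above:

def oui_of(mac):
    # The OUI is the first three octets (24 bits) of the MAC address.
    octets = mac.lower().replace("-", ":").split(":")
    return ":".join(octets[:3])

print(oui_of("00:50:56:9c:69:38"))  # 00:50:56 -- look this prefix up in the OUI list above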
I had a file with this content:
00 00 00 00 00
I changed 1 bit. The changed file:
00 60 00 00 00
My teacher said that I don't know what a bit means. What did I do wrong? Please clarify this for me: the file has 5 blocks (10 digits). Is a bit 00? Or is a bit 0, a single digit of a pair? Thank you.
If this is in hexadecimal notation, then you have some terminology confusion.
00 00 00 00 00
|__|         ^
  |          |
 byte      nibble
A byte is two nibbles, and a nibble is 4 bits.
Decimal  Hex  Binary
      0    0    0000   <- You went from here...
      1    1    0001
      2    2    0010
      3    3    0011
      4    4    0100
      5    5    0101
      6    6    0110   <- ...to here, a change in two bits of one nibble.
      7    7    0111
      8    8    1000
      9    9    1001
     10    a    1010
     11    b    1011
     12    c    1100
     13    d    1101
     14    e    1110
     15    f    1111
That depends on what that notation means, but I'm assuming it's showing 5 bytes in hexadecimal notation.
These are bytes (8 bits each) in binary notation:
00000000
00000001
00000010
...
These are the same bytes in hexadecimal notation:
00
01
02
...
Hexadecimal notation goes from 00 to FF, binary notation for the same values from 00000000 to 11111111. If you changed 00 to 60, you changed 00000000 to 01100000. So you changed 2 bits.
You are viewing the file in a hex editor/viewer. Each digit is a hexadecimal digit consisting of four bits in binary. The fact that you went from 00 to 60 means that you changed two bits in one of the hex digits. If you were viewing in binary mode, you wouldn't see anything other than 0s and 1s.
hex 0 == binary 0000
hex 6 == binary 0110
I would recommend reviewing binary and hexadecimal notation.
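To see exactly how many bits a change flipped, XOR the old and new values and count the 1 bits, for example in Python:

old, new = 0x00, 0x60
changed = bin(old ^ new).count("1")  # XOR leaves a 1 wherever the two values differ
print(changed)  # 2 -- going from 00 to 60 flips two bits, not one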
Can someone explain this in simpler terms?
The binary representation of 170 is 0000 0000 1010 1010. The binary representation of 75 is 0000 0000 0100 1011. Performing the bitwise AND operation on these two values produces the binary result 0000 0000 0000 1010, which is decimal 10.
0000 0000 1010 1010
0000 0000 0100 1011
-------------------
0000 0000 0000 1010
This will click for me once I know what is being done. I have a basic understanding of binary and know a few values off the top of my head: 1 in binary is 00000001, 2 is 00000010, 3 is 00000011, 4 is 00000100, 5 is 00000101, and 6 is 00000110. So I understand what is going on when you go up by one each time.
I also understand what is going on when this SQL developer is subtracting, but something is missing for me when she uses T-SQL code to find her answers, with regard to what is stated in this link.
http://sqlfool.com/2009/02/bitwise-operations/
Look at the individual binary digits in your example as columns. If there is a 1 in both input rows of a particular column, the output is 1 for that column. Otherwise it is 0.
The AND operator can be used to "mask" values. So if you just want the four low-order bits of a number, you can AND it with 15, like this:
0010 1101 1110 1100
0000 0000 0000 1111
-------------------
0000 0000 0000 1100 <-- the value of the first four bits in the top number
That's what is happening in the SQL example you linked.
freq_interval is one or more of the following:
1 = Sunday
2 = Monday
4 = Tuesday
8 = Wednesday
16 = Thursday
32 = Friday
64 = Saturday
Corresponds to the bit masks:
0000 0001 = Sunday
0000 0010 = Monday
0000 0100 = Tuesday
0000 1000 = Wednesday
0001 0000 = Thursday
0010 0000 = Friday
0100 0000 = Saturday
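And a small Python sketch of how those masks get used, mirroring the T-SQL logic in the linked post: AND the stored freq_interval with each day's mask, and a nonzero result means that day is scheduled (the value 42 is just an example):

DAYS = [("Sunday", 1), ("Monday", 2), ("Tuesday", 4), ("Wednesday", 8),
        ("Thursday", 16), ("Friday", 32), ("Saturday", 64)]

freq_interval = 42  # example value: 2 + 8 + 32

# A day is set when its bit survives the AND with the mask.
scheduled = [name for name, mask in DAYS if freq_interval & mask]
print(scheduled)  # ['Monday', 'Wednesday', 'Friday']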