What does "oui Unknown" mean in tcpdump?

Please refer to example packet:
2010-08-22 21:35:26.571793 00:50:56:9c:69:38 (oui Unknown) > Broadcast,
ethertype Unknown (0xcafe), length 74
0x0000: 0200 000a ffff 0000 ffff 0c00 3c00 0000 ............<...
0x0010: 0000 0000 0100 0080 3e9e 2900 0000 0000 ........>.).....
0x0020: 0000 0000 ffff ffff ad00 996b 0600 0050 ...........k...P
0x0030: 569c 6938 0000 0000 8e07 0000 V.i8........

tcpdump prints "(oui Unknown)" when the first three octets of the address are not in the (fairly small) vendor table compiled into tcpdump, even if the OUI is registered with the IEEE. Those three octets are an organizationally unique identifier (OUI): a 24-bit number that uniquely identifies a vendor, manufacturer, or other organization.
These are purchased from the Institute of Electrical and Electronics Engineers, Incorporated (IEEE) Registration Authority by the "assignee" (the IEEE term for the vendor, manufacturer, or other organization). They are used as the first portion of derivative identifiers that uniquely identify a particular piece of equipment, such as MAC addresses, Subnetwork Access Protocol (SNAP) protocol identifiers, and World Wide Names for Fibre Channel devices.
In MAC addresses, the OUI is combined with a 24-bit number (assigned by the owner or 'assignee' of the OUI) to form the address. The first three octets of the address are the OUI.
OUI List: http://standards-oui.ieee.org/oui.txt
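As an illustration, here is a minimal Python sketch that extracts the OUI (the first three octets) from a MAC address and looks it up in a stub vendor table; tcpdump does essentially the same against its own compiled-in table, and a real tool would load the full IEEE list linked above instead of this one-entry dict:

VENDORS = {"00:50:56": "VMware, Inc."}  # stub entry; 00:50:56 is registered to VMware

def oui_of(mac):
    # the OUI is the first three octets of the MAC address
    return ":".join(mac.lower().split(":")[:3])

mac = "00:50:56:9c:69:38"  # source address from the packet above
print(VENDORS.get(oui_of(mac), "Unknown"))  # "VMware, Inc." here; "Unknown" for any OUI not in the table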

Related

What type of instruction set is this?

In a class I am currently taking (Operating Systems), we needed to convert hexadecimal to binary and then split the binary into its respective fields to get the opcodes, registers, and so on.
Here is an example of the process:
0x4B010000
01001011000000010000000000000000
0000000000000000 001011 0000 0001
0000 MOVI 1 0
0000 MOVI R1 0
0x4C060001
01001100000001100000000000000001
0000000000000001 001100 0000 0110
0001 ADDI 6 0
0001 ADDI R6 0
0x10658000
00010000011001011000000000000000
000000000000 010000 0110 0101 1000
0000 SLT 8 6 5
0000 SLT R8 R6 R5
0x56810018
01010110100000010000000000011000
0000000000011000 010110 1000 0001
0018 BNE 1 8
0018 BNE R1 8
I have provided a link to the GitHub repository that contains the files describing the instruction set and the instruction format, plus all the data values.
https://github.com/DavidClifford-Pro/InstructionsFormat
The instructions say to decode the codes as I have, and then to execute them. I just don't understand the instructions or how they are supposed to be executed. I believe everything has been decoded correctly.
EDIT:
The instruction list is part of the project (the first line of each example above is the hex code); we were given these codes to decode and then execute.
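For what it's worth, here is a small Python sketch of the decoding step, assuming the field layout implied by the worked examples above (bits 31-30 select the instruction format, bits 29-24 are the 6-bit opcode, followed by 4-bit register fields and, for the non-register formats, a 16-bit address/value); the linked repository is the authoritative source for the real layout:

def decode(word):
    fmt    = (word >> 30) & 0x3   # instruction format (assumed from the examples)
    opcode = (word >> 24) & 0x3f  # 6-bit opcode
    if fmt == 0b00:               # register format: s1, s2, dest, 12 unused bits
        s1, s2, dest = (word >> 20) & 0xf, (word >> 16) & 0xf, (word >> 12) & 0xf
        return fmt, opcode, s1, s2, dest
    # immediate/branch formats: two 4-bit registers and a 16-bit address or value
    r1, r2, imm = (word >> 20) & 0xf, (word >> 16) & 0xf, word & 0xffff
    return fmt, opcode, r1, r2, imm

for w in (0x4B010000, 0x4C060001, 0x10658000, 0x56810018):
    print(f"{w:08X} -> {decode(w)}")

Execution would then presumably be a loop over the word list: decode each word as above and perform the named operation against a register file and memory (e.g. MOVI writes the immediate into a register, ADDI adds it, SLT compares two registers into a third, BNE branches to the 16-bit address when two registers differ).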

Understanding hex representation of binary file

I need some help understanding the format of a hex dump of a binary file so I can get the correct numbers out of a table when parsing it with Python.
Example:
0000b50: 0400 0000 ffff 0900 0000 ffff 0900 0000 ................
0000b60: ffff 0900 0000 ffff 0900 0000 ffff 0900 ............0..#
When I need to find the start of an object at offset 0x0b54, where would that be? Would it be [here]? 0000b50: 0400 [0]000 ffff 0900 0000 ffff 0900 0000
The object is 96 bytes long. Is one set of four hex digits one byte, i.e. ffff? Or, since it is base 16, does each individual position contain 2 bytes, so ffff is 8 bytes? And I need to find 6 bytes for each entry in the table, which would be fff?
What does the part at the end represent? i.e. ............0..#
One hex digit represents 4 bits (f = 1111 = 15), so a pair of hex digits is one 8-bit byte (ff = 1111 1111 = 255, the largest single-byte value).
ffff is therefore 2 bytes. Each line of the dump shows 16 bytes, so the line starting at 0x0b50 covers offsets 0x0b50 through 0x0b5f, and the byte at offset 0x0b54 is the first ff on that line, not the position you bracketed.
The column on the right is the ASCII rendering of those same 16 bytes, with a dot printed for every byte that is not a printable character.
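For the Python side, a minimal sketch of pulling bytes out at a file offset (the file name and the 6-byte entry layout here are assumptions for illustration; adjust the struct format string to the real table layout):

import struct

with open("data.bin", "rb") as f:   # hypothetical file name
    f.seek(0x0B54)                  # seek to the byte offset itself, not a dump column
    obj = f.read(96)                # the object is 96 bytes long

# if each table entry were 6 bytes, e.g. a 2-byte and a 4-byte little-endian field:
for i in range(0, len(obj), 6):
    a, b = struct.unpack_from("<HI", obj, i)
    print(f"entry {i // 6}: {a:#06x} {b:#010x}")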

Store the top half and low half of a 32 bit number

Is there a way to store the 16 leftmost bits and the 16 rightmost bits of a binary number in MIPS?
For example, let's say I have the binary number: 1111 1111 1111 0000 0011 1111 0011 1100. Now, I want to store 1111 1111 1111 0000 in $t0 and the other half in $t1. Is there a way to do that? Thanks!
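The usual approach is a logical shift right by 16 for the upper half and a bitwise AND with 0xFFFF for the lower half (in MIPS, the srl and andi instructions); here is a quick Python sketch of the arithmetic, using the example value:

x = 0b1111_1111_1111_0000_0011_1111_0011_1100  # the example 32-bit value

t0 = (x >> 16) & 0xFFFF  # upper half; in MIPS roughly: srl $t0, $s0, 16
t1 = x & 0xFFFF          # lower half; in MIPS roughly: andi $t1, $s0, 0xFFFF

print(f"{t0:016b}")  # 1111111111110000
print(f"{t1:016b}")  # 0011111100111100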

Bitwise And Operator

Can someone explain this in simpler terms?
The binary representation of 170 is 0000 0000 1010 1010. The binary representation of 75 is 0000 0000 0100 1011. Performing the bitwise AND operation on these two values produces the binary result 0000 0000 0000 1010, which is decimal 10.
0000 0000 1010 1010
0000 0000 0100 1011
-------------------
0000 0000 0000 1010
This will all click for me once I know what is being done. I have a basic understanding of binary and know a few values off the top of my head... like 1 represented in binary would be 00000001, 2 would be 00000010, 3 would be 00000011, 4 would be 00000100, 5 would be 00000101, and 6 would be 00000110. So I understand what is going on when you go up a digit each time.
I also understand what is going on when this SQL developer is subtracting, but something is missing for me when she uses T-SQL code to find her answers, in regard to what is stated in this link.
http://sqlfool.com/2009/02/bitwise-operations/
Look at the individual binary digits in your example as columns. If there is a 1 in both input rows of a particular column, the output is 1 for that column. Otherwise it is 0.
The AND operator can be used to "mask" values. So if you just want the first four low-order bits of a number, you can AND it with 15, like this:
0010 1101 1110 1100
0000 0000 0000 1111
-------------------
0000 0000 0000 1100 <-- the value of the first four bits in the top number
That's what is happening in the SQL example you linked.
freq_interval is one or more of the following:
1 = Sunday
2 = Monday
4 = Tuesday
8 = Wednesday
16 = Thursday
32 = Friday
64 = Saturday
These values correspond to the bit masks:
0000 0001 = Sunday
0000 0010 = Monday
0000 0100 = Tuesday
0000 1000 = Wednesday
0001 0000 = Thursday
0010 0000 = Friday
0100 0000 = Saturday
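A short Python sketch of both ideas, using the values from this question and the day masks above:

SUNDAY, MONDAY, TUESDAY, WEDNESDAY = 1, 2, 4, 8
THURSDAY, FRIDAY, SATURDAY = 16, 32, 64

print(170 & 75)  # 10: only the columns where both inputs have a 1 survive
print(0b0010110111101100 & 0b1111)  # 12: AND with 15 keeps the four low-order bits

freq_interval = TUESDAY | FRIDAY | SATURDAY  # 100: several days packed into one int
print(bool(freq_interval & FRIDAY))          # True: the Friday bit is set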

Converting decimal to signed binary

Let's say I want to convert "-128" into binary.
From what I understand, I get the binary representation of "128", invert the bits and then add 1.
So 128 = 10000000
So the "inverse" is 01111111
And "01111111" + "1" = "10000000", which is "-0", isn't it?
My textbook makes this seem so easy but I can't figure out what I'm doing wrong. Thanks for the help.
No, that's definitely -128 (in two's complement anyway, which is what you're talking about given your description of negating numbers). It's only -0 for the sign/magnitude representation of negative numbers.
See this answer for details on the two representations plus the third one that C allows, one's complement, but I'll copy a snippet from there to keep this answer as self-contained as possible.
To get the negative representation for a positive number, you:
invert all bits then add one for two's complement.
invert all bits for one's complement.
invert just the sign bit for sign/magnitude.
You can see this in the table below:
 number | twos complement     | ones complement     | sign/magnitude
========|=====================|=====================|====================
      5 | 0000 0000 0000 0101 | 0000 0000 0000 0101 | 0000 0000 0000 0101
     -5 | 1111 1111 1111 1011 | 1111 1111 1111 1010 | 1000 0000 0000 0101
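A small Python sketch that reproduces the rows of this table for a 16-bit width:

BITS = 16
MASK = (1 << BITS) - 1

def twos(x):     return x & MASK                        # wrap-around yields invert-and-add-one
def ones(x):     return x if x >= 0 else ~(-x) & MASK   # invert all bits of the magnitude
def sign_mag(x): return x if x >= 0 else (1 << (BITS - 1)) | -x  # set just the sign bit

for n in (5, -5):
    print(f"{n:3} | {twos(n):016b} | {ones(n):016b} | {sign_mag(n):016b}")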
You should be aware that there is no 128 in 8-bit two's complement numbers; the largest value is 127.
Where the numbers pass the midpoint is where the "clever" stuff happens:
00000000 -> 0
00000001 -> 1
: :
01111110 -> 126
01111111 -> 127
10000000 -> -128
10000001 -> -127
: :
11111110 -> -2
11111111 -> -1
because adding the bit pattern of (for example) 100 and -1 with an 8-bit wrap-around will auto-magically give you 99:
100+    0 0110 0100
  1-    0 1111 1111
        ===========
      1 0 0110 0011   99+ (without that leading 1)
It depends on what your binary representation is -- ones complement, twos complement, sign-magnitude, or something else. The "invert bits and add 1" is correct for twos complement, which is what most computers these days use internally for signed numbers. In your example, "10000000" is the 8-bit twos-complement representation of -128, which is what you want. There is no such thing as -0 in twos complement.
For sign-magnitude, you negate by flipping the sign bit. For ones complement, you negate by inverting all bits.
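To tie this back to the original question, a quick Python sketch of 8-bit two's complement, emulating the wrap-around with a mask:

BITS = 8
MASK = (1 << BITS) - 1  # 0xFF

def neg(x):
    return (~x + 1) & MASK  # invert the bits and add one, keeping 8 bits

print(f"{neg(128):08b}")      # 10000000: negating 128 gives the pattern for -128
print((100 + neg(1)) & MASK)  # 99: adding the pattern for -1 wraps around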