My instructor stated that "an octal byte consists of 6 bits". I am having difficulty understanding why this is, as an octal digit consists of 3 binary bits. I also do not understand the significance of an octal byte being defined as '6 bits' as opposed to some other number.
Can anyone explain why this is, if it is in fact true, or point me to a useful explanation?
This is all speculation and guesswork, since none of this is in any way standard terminology.
An 8-bit byte can be written as two hexadecimal digits, because each digit expresses 4 bits. The largest such byte value is 0xFF.
By analogy, two octal digits can express 2 × 3 = 6 bits. The largest such value is 077. So if you like you can call a pair of octal digits an "octal byte", but only if you will also call an 8-bit byte a "hexadecimal byte".
In my personal opinion neither notion is helpful or useful, and you'd be best off just saying how many bits your byte has.
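For illustration, here is a minimal C sketch of my own (not part of the original answer) that prints the same 8-bit value in both notations; it shows why two hex digits cover a byte exactly while two octal digits stop at 077:

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 0xFF;                     /* largest 8-bit value */
        printf("hex:   %02X\n", byte);                 /* FF  : 2 hex digits = 8 bits */
        printf("octal: %03o\n", byte);                 /* 377 : needs 3 octal digits  */
        printf("two octal digits at most: %d\n", 077); /* 63  : only 6 bits           */
        return 0;
    }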
Like @Kerrek SB, I'd have to guess the same.
Tell him an octal byte is two octal nibbles, that should sort him out.
Two hexadecimal digits make an 8-bit byte, so each group of four bits is called a nibble.
As Wikipedia puts it: the byte is a unit of digital information that most commonly consists of 8 bits, and "bit" is a contraction of "binary digit". A bit represents a logical state with one of two possible values, most commonly written as "1" or "0".
So a byte is a bunch of bits: 8 of them, to be specific.
Notice that this definition says nothing about decimal, octal, hexadecimal or any other notation for integer numbers. Indeed, a byte and an integer number are not the same thing. So where does the confusion start?
If we write out the bits of a byte, say 01001010, we can map this object one-to-one onto the binary notation of an integer number, 01001010:
(byte) 01001010 --> (binary integer) 01001010
The two objects look the same, but this is really just a mapping. From here on we work with an integer number instead of the abstract "byte" object, and an integer number can be written in different notations: decimal, binary, hexadecimal, octal and so on.
A notation like octal (or hex, or decimal) is just a method of writing down an integer number; it does not affect the nature of the original byte object.
A byte has 8 bits whatever notation is used.
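As a short C sketch of my own, take the byte 01001010 from above: the stored bits never change, only the notation used to print them does:

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 0x4A;          /* the bit pattern 01001010 */
        printf("decimal: %d\n", byte);      /* 74  */
        printf("octal:   %o\n", byte);      /* 112 */
        printf("hex:     %X\n", byte);      /* 4A  */
        /* one object, one set of 8 bits, three ways of writing it down */
        return 0;
    }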
ISO/IEC 2382-1:1993 says:
1. The number of bits in a byte is usually 8.
2. An octet is a byte that consists of eight bits.
Related
During an EMV transaction, all information exchanged between the terminal and the card is encoded in byte strings. In order to understand the content of the messages and give meaning to the bits, you should first get familiar with hexadecimal notation. One byte can be represented by two hexadecimal digits, or eight binary digits (0 or 1).
What is the binary (i.e., in bits) representation of the byte ‘E3’?
The EMV tags follow the packed BCD format, which means a byte can hold two hexadecimal digits. In your case the byte becomes [ [ 1110 ] [ 0011 ] ]: the first nibble holds the binary representation of E and the second one holds that of 3.
You can also use the Programmer mode of the calculator that ships with Windows (and with most Linux desktops) to convert hexadecimal values into binary.
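If you prefer code to a calculator, here is a small C sketch of my own (not from the original answer) that prints the bits of the byte 0xE3:

    #include <stdio.h>

    int main(void) {
        unsigned char byte = 0xE3;                  /* the byte 'E3' */
        for (int i = 7; i >= 0; i--)                /* most significant bit first */
            putchar(((byte >> i) & 1) ? '1' : '0');
        putchar('\n');                              /* prints 11100011 */
        return 0;
    }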
What is the difference between 65 and the letter A in binary, given that both represent the same bit-level information?
Basically, a computer only understands numbers, and not just any numbers: it only understands numbers represented in binary, i.e. numbers that can be expressed using only two different states (for example, 0 and 1, 0V and 5V, open and closed, true or false, etc.).
Unfortunately, we poor humans don't really like reading zeros and ones... So we have created codes that let us use numbers as if they were characters: one of them is called ASCII (American Standard Code for Information Interchange), but there are others as well, such as Unicode. The principle is simple: all the program has to do is manipulate numbers, which any CPU does very well; but when it comes to displaying this data, the display represents them as real characters, such as 'A', '4', '#', or even a space or a newline.
Now, as soon as you are using ASCII, the number 65 will represent the letter 'A'. It is all a question of representation: for example, the binary number 0b00001111, the hexadecimal one 0x0F, the octal one 017 and the decimal number 15 all represent the same number. It's the same for the letter 'A': think of ASCII as a base, but instead of being used like base 2 (binary), 8 (octal), 10 (decimal) or 16 (hexadecimal) to display numbers, it's used in a completely different manner.
To answer your question: ASCII 'A' is hexadecimal 0x41 is decimal 65 is octal 0101 is binary 0b01000001.
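As a quick demonstration, here is a tiny C sketch of my own (assuming an ASCII-compatible character set) showing that 'A' and 65 are the same bit pattern and only the way they are printed differs:

    #include <stdio.h>

    int main(void) {
        char c = 'A';
        printf("%c %d\n", c, c);     /* prints: A 65  (same bits, two views)     */
        printf("%c %d\n", 65, 65);   /* prints: A 65  (65 shown as a character)  */
        printf("%d\n", 'A' == 65);   /* prints: 1     (they compare equal)       */
        return 0;
    }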
Every character is represented by a number. The mapping between numbers and characters is called an encoding. Many encodings use the number 65 for the letter A. Since memory has no special cells for characters as opposed to numbers, they are represented the same way, but the interpretation in any given program can be very different.
I may be misunderstanding the question, and if so I apologise for getting it wrong.
But if I'm right, I believe you're asking what the difference is between a char and an int in the binary representation of the value 65, which is the ASCII decimal value for the capital letter A.
First off, we need to appreciate data types, which reserve blocks of memory in RAM.
An integer is usually 16 bits, or more for a float or long (in C# this declaration is made by writing UInt16, Int16, Int32, UInt32, and so on).
A character is an 8-bit memory block.
Therefore the binary would appear as follows:
A byte (8 bits) - char
Bit place values: 128, 64, 32, 16, 8, 4, 2, 1
Binary:           01000001
2 bytes (16 bits) - int16
Binary:           0000000001000001
It's all down to the size of the memory block reserved, based on the data type in the variable declaration.
I'd have done the decimal place values for the 2-byte case as well, but I'm on the bus at the moment.
First of all, the difference can be in the size of the memory (8 bits, 16 bits or 32 bits). See this question: bytes of a string in java
Secondly, to store the letter 'A' you can have different encodings and different interpretations of memory. The ASCII character 'A' in C can occupy exactly one byte (7 bits plus an unused sign bit), and it has exactly the same binary value as the integer 65 stored in a char. But the bitwise interpretation of numbers and characters is not always the same. Just consider that you can store signed values in 8 bits. See this question: what is an unsigned char
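To illustrate that last point, here is a small C sketch of my own (assuming 8-bit chars on a two's complement machine): the same 8-bit pattern reads as different numbers depending on signedness:

    #include <stdio.h>

    int main(void) {
        unsigned char u = 200;               /* bit pattern 11001000 */
        signed char   s = (signed char)u;    /* same bits, reinterpreted as signed */
        printf("unsigned view: %d\n", u);    /* 200 */
        printf("signed view:   %d\n", s);    /* typically -56 on two's complement */
        return 0;
    }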
If I wanted to represent -2455.1152 as 32 bits, I know the first bit is 1 (negative sign), and I can convert 2455 to binary as 10010010111, but I'm not too sure about the fractional part. .1152 could have an infinite number of fractional digits. Would that mean that only up to 23 bits are used to represent the fractional part? So since 2455 uses 11 bits, bits 11 to 0 are for the fractional part?
For the binary representation I have 10010010111.00011101001. The exponent is 10, and 10 + 127 = 137. 137 in binary is 10001001.
full representation would be:
1 10001001 1001001011100011101001
is that right?
It looks like you are trying to devise your own floating-point representation, but you used a fixed-point tag, so I will explain how to convert your real number to a traditional fixed-point representation. First, you need to decide how many bits will be used to represent the fractional part of the number. Just for the sake of discussion, let's say that 16 bits will be used for the fractional part, 15 bits for the integer part, and one bit is reserved for the sign. Now, multiply the absolute value of the real number by 2^16: 2455.1152 * 65536 = 160898429.747. You can either round to the nearest integer or just truncate. Suppose we just truncate to 160898429. Converting this to hexadecimal we get 0x09971D7D. To make this negative, invert the bits and add 1 to the LSB; the final result is 0xF668E283.
To convert back to a real number, just reverse the process: take the absolute value of the fixed-point representation and divide by 2^16. In this case we find that the fixed-point representation is equal to the real number -2455.1151886. The accuracy can be improved by rounding instead of truncating when converting from real to fixed-point, or by allowing more bits for the fractional part.
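Here is a short C sketch of my own of the procedure described above, using 16 fractional bits as in the example (truncation toward zero gives the same result here as truncating the magnitude and then negating):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void) {
        double real = -2455.1152;

        /* real -> fixed point with 16 fractional bits: scale by 2^16 and truncate */
        int32_t fixed = (int32_t)(real * 65536.0);
        printf("fixed-point bits: 0x%08" PRIX32 "\n", (uint32_t)fixed);  /* 0xF668E283 */

        /* fixed point -> real: divide by 2^16 */
        double back = (double)fixed / 65536.0;
        printf("back to real:     %.7f\n", back);    /* about -2455.1151886 */
        return 0;
    }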
I have faced an interview question related to embedded systems and C/C++. The question is:
If we multiply 2 signed (2's complement) 16-bit data, what should be the size of resultant data?
I started by attempting it with an example of multiplying two signed 4-bit numbers: if we multiply +7 and -7, we end up with -49, which requires 7 bits. But I could not formulate a general relation.
I think I need to understand binary multiplication more deeply to solve this question.
First, an n-bit signed integer contains a value in the range -(2^(n-1)) .. +(2^(n-1))-1.
For example, for n=4, the range is -(2^3)..(2^3)-1 = -8..+7
The range of the multiplication result is (-8)*(+7) .. (-8)*(-8) = -56..+64.
+64 is more than 2^6 - 1; it is 2^6 = 2^(2n-2)! You'll need 2n-1 bits to store such a positive integer.
Unless you're doing a proprietary encoding (see the next paragraph), you'll need 2n bits:
one bit for the sign, and 2n-1 bits for the absolute value of the multiplication result.
If M is the result of the multiplication, you can store -M or M-1 instead. This can save you 1 bit.
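A tiny C check of my own for the n = 4 case worked through above:

    #include <stdio.h>

    int main(void) {
        /* n = 4: signed 4-bit operands range from -8 to +7 */
        printf("most negative product: %d\n", -8 *  7);   /* -56 */
        printf("most positive product: %d\n", -8 * -8);   /*  64 = 2^6 = 2^(2n-2) */
        /* +64 needs 2n-1 = 7 bits of magnitude, so with a sign bit the
           general case needs 2n = 8 bits */
        return 0;
    }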
This will depend on context. In C/C++, all intermediates smaller than int are promoted to int. So if int is larger than 16 bits, the result will be a signed 32-bit integer.
However, if you assign it back to a 16-bit integer, it will be truncated, leaving only the bottom 16 bits of the two's complement representation of the new number.
So if your definition of "result" is the intermediate value immediately following the multiply, then the answer is the size of an int. If you define the result as the value after it has been stored back into a 16-bit variable, then the answer is the size of the 16-bit integer type.
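A short C sketch of my own (assuming a platform where int is at least 32 bits, as described above) showing both the promoted intermediate and the truncated 16-bit result:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int16_t a = 30000, b = 3;

        /* both operands are promoted to int, so the multiply itself happens in int */
        int32_t full = a * b;              /* 90000, kept intact                  */
        int16_t back = (int16_t)(a * b);   /* low 16 bits only: 90000 mod 65536   */

        printf("%d %d\n", full, back);     /* typically prints: 90000 24464       */
        return 0;
    }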
One of my old classmates just asked me this and I'm at a total loss. Google gave me a lot of definitions of endianness, but not the term for this special case. Is there even a term for this?
See palindrome. Consider, for example, a 32-bit integer as a sequence of four byte values when stored in memory. If the sequence of four bytes is a palindrome, then it has the same integer value in both big- and little-endian byte order. So,
all 8-bit integers are palindromes,
all 16-bit integers of the form AA (where A is a byte) are palindromes,
all 32-bit integers of the form AAAA or ABBA (where A and B are bytes) are palindromes,
and so on. Historically, there have been architectures with mixed endianness (notably the VAX), but here I'm limiting myself to pure big- or little-endian representations.
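Here is a small C sketch of my own (the function name is_byte_palindrome is just my invention) that tests the property for a 32-bit value; since the check is symmetric, it gives the same answer regardless of the host's own byte order:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* returns 1 if the 32-bit value has the same integer value in both byte orders */
    static int is_byte_palindrome(uint32_t v) {
        unsigned char b[4];
        memcpy(b, &v, sizeof b);
        return b[0] == b[3] && b[1] == b[2];
    }

    int main(void) {
        printf("%d\n", is_byte_palindrome(0x11223311u));  /* 0: bytes 11 22 33 11, middle pair differs */
        printf("%d\n", is_byte_palindrome(0xAB1212ABu));  /* 1: ABBA form */
        printf("%d\n", is_byte_palindrome(0u));           /* 1: zero is trivially a palindrome */
        return 0;
    }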
how about zero :-)
The answer is, "A byte".
Palindromic, even though that term is usually used for numbers that have digits that are the same both ways.