Word to represent 5 bits of data?

A "byte" is 8 bits.
A "nibble" (sometimes "nybble") represents 4 bits.
Is there a term to represent a group of 5 bits?
I'm currently working on a Base32 encoder and need a term for a single Base32 character, which is represented by 5 bits.

According to sources on Wikipedia, a group of 5 bits has historically been referred to by a variety of names, such as:
pentad
pentade
nickel
nyckle
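If it helps with the Base32 encoder, here is a rough C sketch of how those 5-bit groups ("pentads"/"nickels") can be peeled off a 40-bit block; the alphabet string and variable names are just my illustration, not any particular library's API:

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch only: split 40 bits (5 bytes) into eight 5-bit groups,
   the way a Base32 encoder consumes its input. Not a full RFC 4648 encoder. */
static const char *ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

int main(void)
{
    const uint8_t in[5] = { 'H', 'e', 'l', 'l', 'o' };   /* exactly 40 bits */
    uint64_t bits = 0;

    /* pack the 5 bytes into one 64-bit value, most significant byte first */
    for (int i = 0; i < 5; i++)
        bits = (bits << 8) | in[i];

    /* peel off eight 5-bit groups from the top and map each to a character */
    for (int i = 7; i >= 0; i--) {
        unsigned group = (unsigned)((bits >> (i * 5)) & 0x1F);   /* one 5-bit group */
        putchar(ALPHABET[group]);
    }
    putchar('\n');   /* prints JBSWY3DP for "Hello" */
    return 0;
}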

Related

How do computers convert decimal to binary integers?

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal integers to binary?
The above method would require the computer to perform arithmetic, and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back to the same issue.
I suppose that any other algorithmic method would suffer the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when the program runs, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
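For instance, a minimal C sketch of that display step (the function name is my own, not a standard routine) just repeats the divide-by-2 steps from above and prints the remainders as characters:

#include <stdio.h>

/* Rough sketch: turn the (already binary) stored value into '0'/'1'
   characters for display - the same divide-by-2 steps done by hand. */
void print_binary(unsigned n)
{
    char digits[32];
    int i = 0;
    do {
        digits[i++] = '0' + (char)(n % 2);   /* remainder becomes a character */
        n /= 2;
    } while (n > 0);
    while (i > 0)                            /* remainders come out in reverse order */
        putchar(digits[--i]);
    putchar('\n');
}

int main(void)
{
    print_binary(14);   /* prints 1110 */
    return 0;
}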
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary values, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain the binary values of the single decimal digits and of decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary value of 1 by the binary value of ten (1010) and adding the binary value of 4 (100).
Decimal 149 would be the binary value of 1 multiplied by the binary value of one hundred (1100100), plus the binary value of 4 multiplied by the binary value of ten (1010), plus the binary value of 9 (1001) at the end.
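A rough C illustration of that look-up-and-multiply idea (the tables and names are mine; no real compiler is written exactly like this):

#include <stdio.h>

/* Illustrative sketch: the binary values of the decimal places (1, 10, 100, ...)
   are known in advance, and the number is assembled with binary
   multiplications and additions, e.g. 149 = 1*100 + 4*10 + 9*1. */
int main(void)
{
    unsigned place[] = { 1, 10, 100, 1000 };  /* held by the machine in binary */
    unsigned digits[] = { 1, 4, 9 };          /* decimal digits of 149, most significant first */
    int n = 3;                                /* number of digits */

    unsigned value = 0;
    for (int i = 0; i < n; i++)
        value += digits[i] * place[n - 1 - i];   /* binary multiply and add */

    printf("%u\n", value);   /* prints 149; in memory the value is binary 10010101 */
    return 0;
}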
Decimal numbers are misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal number; it is two characters, 1 and 4, which are written together as 14.
We know that characters are just representations of some binary value:
1 is 00110001
4 is 00110100
The full ASCII table for characters can be seen here.
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to be
14 = 00001110
For this to happen the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110
and we are all set.
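A small C sketch of that character-to-binary step, assuming ASCII input (purely an illustration, not the actual routine a compiler or C library uses):

#include <stdio.h>

/* The two ASCII bytes 00110001 ('1', 0x31) and 00110100 ('4', 0x34)
   become the single value 00001110 (decimal 14). */
int main(void)
{
    char text[] = { 0x31, 0x34, 0 };          /* "14" as its raw ASCII bytes */
    int value = 0;
    for (char *p = text; *p != 0; p++)
        value = value * 10 + (*p - 0x30);     /* 0x30 is ASCII '0' (00110000) */
    printf("%d\n", value);                    /* prints 14, stored as 00001110 */
    return 0;
}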

Representing a tic-tac-toe board in computer memory

I am trying to solve this problem: Design a method for representing the state of a tic-tac-toe board in computer memory. Can you fit your representation into three bytes?
This is from a textbook that has no solutions.
Any help is appreciated, thank you!
The state of a Tic-Tac-Toe board can be encoded using 3 bytes as follows.
To represent the state of each cell, 3 states are necessary, namely X, O and undefined. 3 states can be represented by 2 bits (2 bits can in fact represent 4 states, but only 3 are needed here - on the other hand, 1 bit is insufficient).
There are 9 cells in total, so in total
2 * 9 = 18
bits are necessary to represent the board. 18 bits fit in 3 bytes (which have 24 bits in total, so 6 bits are left unused).
A Tic-Tac-Toe board consists of 9 fields. Each field can take 3 states: empty, circle, cross. To represent each state you need 2 bits: 00, 01, 10.
With two bits for each field, you can easily represent the whole board in 3 bytes, using two bits per field and one byte per row of the board.
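Here is a minimal C sketch of that byte-per-row layout (the cell codes 00/01/10 and the helper names are my own choices, not from the textbook):

#include <stdio.h>
#include <stdint.h>

/* One byte per row, two bits per cell: 00 = empty, 01 = X, 10 = O. */
enum { EMPTY = 0, X = 1, O = 2 };

/* Store `value` in cell `col` (0..2) of the row byte pointed to by `row`. */
void set_cell(uint8_t *row, int col, int value)
{
    int shift = col * 2;
    *row = (uint8_t)((*row & ~(0x3 << shift)) | (value << shift));
}

int get_cell(uint8_t row, int col)
{
    return (row >> (col * 2)) & 0x3;
}

int main(void)
{
    uint8_t board[3] = { 0, 0, 0 };   /* three bytes hold the whole board */

    set_cell(&board[0], 0, X);        /* X in the top-left corner */
    set_cell(&board[1], 1, O);        /* O in the centre */

    printf("%d %d\n", get_cell(board[0], 0), get_cell(board[1], 1));   /* prints 1 2 */
    return 0;
}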

Do we have to memorize the binary representations for all kinds of numbers, or is there a method to work out how to represent each?

For example: octal 3 is 011 in binary digits, and decimal 3 is 0011.
So, is there any way to figure them out, or must this table be copied and pasted into my memory?

Is this definition of an octal byte correct?

My instructor stated that "an octal byte consists of 6 bits". I am having difficulty understanding why this is, as an octal digit consists of 3 binary bits. I also do not understand the significance of an octal byte being defined as '6 bits' as opposed to some other number.
Can anyone explain why this is, if it is in fact true, or point me to a useful explanation?
This is all speculation and guesswork, since none of this is in any way standard terminology.
An 8-bit byte can be written as two hexadecimal digits, because each digit expresses 4 bits. The largest such byte value is 0xFF.
By analogy, two octal digits can express 2 × 3 = 6 bits. The largest such value is 077. So if you like, you can call a pair of octal digits an "octal byte", but only if you will also call an 8-bit byte a "hexadecimal byte".
In my personal opinion neither notion is helpful or useful, and you'd be best off just saying how many bits your byte has.
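If you want to see those maxima concretely, a trivial C check:

#include <stdio.h>

int main(void)
{
    /* two hex digits cover 8 bits, two octal digits cover 6 bits */
    printf("0xFF = %d\n", 0xFF);   /* 255, the largest 8-bit value */
    printf("077  = %d\n", 077);    /* 63, the largest 6-bit value  */
    return 0;
}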
Like @Kerrek SB, I'd have to guess the same.
Tell him an octal byte is two octal nibbles; that should sort him out.
Two hexadecimal digits make an 8-bit byte, which is why each group of four bits is called a nibble.
As Wikipedia puts it: the byte is a unit of digital information that most commonly consists of 8 bits. The bit is a contraction of "binary digit"; it represents a logical state with one of two possible values, most commonly written as "1" or "0".
So a byte is a bunch of bits - to be specific, 8 bits.
Notice that the definition doesn't say anything about dec, oct, hex or any other notation for integer numbers.
Indeed, a byte and an integer number are not the same thing. So where does the notation come in?
If we write the bits of a byte like so, 01001010, we find that we can map this object 1-to-1 to the binary notation of an integer number, 01001010.
(byte)01001010 --> (binary integer) 01001010
The two objects look the same, but this is really just a mapping.
Now we work with the integer number instead of the abstract object "byte". An integer number can be written in different notations: dec, binary, hex, oct, etc.
A notation like octal (or hex, dec, etc.) is just a way of writing down an integer number. It does not change the nature of the original byte object.
A byte has 8 bits, whatever notation is used.
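A small C demonstration of that point, printing one stored byte in several notations (the value 0x4A is just the example bit pattern 01001010 from above):

#include <stdio.h>

int main(void)
{
    unsigned char b = 0x4A;   /* the bit pattern 01001010 */

    /* the stored byte never changes; only the printed notation does */
    printf("decimal: %u\n", b);    /* 74  */
    printf("hex:     %X\n", b);    /* 4A  */
    printf("octal:   %o\n", b);    /* 112 */
    return 0;
}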
ISO/IEC 2382-1:1993 says:
1. The number of bits in a byte is usually 8.
2. An octet is a byte that consists of eight bits.

How to use binary symbols for the alphabet

I was reading an article on binary numbers and it had some practice problems at the end, but it didn't give the solutions. The last one is "How many bits are required to represent the alphabet?". Can anyone tell me the answer to that question and briefly explain why?
Thanks.
You would only need 5 bits because you are counting to 26 (if we take only upper or lowercase letters). 5 bits will count up to 31, so you've actually got more space than you need. You can't use 4 because that only counts to 15.
If you want both upper and lowercase then 6 bits is your answer - 6 bits will happily count to 63, while your double alphabet has 2 * 26 = 52 characters, again leaving plenty of headroom.
It depends on your definition of alphabet. If you want to represent one character from the 26-letter Roman alphabet (A-Z), then you need log2(26) = 4.7 bits. Obviously, in practice, you'll need 5 bits.
However, given an infinite stream of characters, you could theoretically come up with an encoding scheme that got close to 4.7 bits (there just won't be a one-to-one mapping between individual characters and bit vectors any more).
If you're talking about representing actual human language, then you can get away with a far lower number than this (in the region of 1.5 bits/character), due to redundancy. But that's too complicated to get into in a single post here... (Google keywords are "entropy", and "information content").
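A quick C sketch of the counting argument, finding the smallest number of bits b with 2^b >= 26 (purely illustrative):

#include <stdio.h>

int main(void)
{
    int letters = 26;

    /* find the smallest number of bits b such that 2^b >= letters */
    int bits = 0;
    while ((1 << bits) < letters)
        bits++;

    printf("%d letters need %d bits (2^%d = %d codes)\n",
           letters, bits, bits, 1 << bits);   /* 26 letters need 5 bits (2^5 = 32 codes) */
    return 0;
}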
There are 26 letters in the alphabet, so a 5-bit word (2^5 = 32) is the minimum word length that can represent all the letters.
How direct does the representation need to be? If you need 1:1 with no translation layer, then 5 bits will do. But if a translation layer is an option, then you can get away with less. Morse code, for example, can do it in 3 bits. :)