What is the binary (i.e., in bits) representation of the byte ‘E3’?

During an EMV transaction, all information exchanged between the terminal and the card is encoded in byte strings. In order to understand the content of the messages and give meaning to the bits, you should first get familiar with hexadecimal notation. One byte can be represented by two hexadecimal digits, or by eight binary digits (bits).
What is the binary (i.e., in bits) representation of the byte ‘E3’?

EMV tags follow the packed BCD format, which means one byte holds two hexadecimal digits, one per nibble. In your case ‘E3’ becomes [ [ 1110 ] [ 0011 ] ]: the first nibble holds the binary representation of E (decimal 14), and the second holds the binary representation of 3.

You can also simply use the Programmer mode of the calculator shipped with Windows (and with most Linux desktops) to convert hexadecimal values into binary.
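If you prefer to do it in code, here is a minimal C# sketch (the hex string "E3" is the byte from the question; any other byte works the same way):

using System;

// Parse the hex byte "E3" and print its 8-bit binary representation.
byte value = Convert.ToByte("E3", 16); // 0xE3 = 227 in decimal
Console.WriteLine(Convert.ToString(value, 2).PadLeft(8, '0')); // prints 11100011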

Related

How do computers convert decimal to binary integers?

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal to binary integers?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer from the same problem. How do computers convert decimal to binary integers?
Update: Following a discussion with Code-Apprentice (see comments under his answer), here is a reformulation of the question in two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when running the program, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary, since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
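For instance, here is a minimal C# sketch of exactly that display algorithm; the int is already binary internally, and the loop only produces characters for output:

using System;

// Repeatedly divide by 2 and collect the remainders, building the string right to left.
static string ToBinaryString(int n)
{
    if (n == 0) return "0";
    string s = "";
    while (n > 0)
    {
        s = (char)('0' + n % 2) + s; // the remainder becomes the next digit to the left
        n /= 2;
    }
    return s;
}

Console.WriteLine(ToBinaryString(14)); // prints 1110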
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary integers, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain the binary values for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying the binary value of 1 by the binary value of ten and adding the binary value of 4.
Decimal 149 would be the binary value of 1 multiplied by the binary value of one hundred, added to the binary value of 4 multiplied by the binary value of ten, with the binary value of 9 added at the end.
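As a rough illustration, this is how a parser might turn the typed characters "149" into a binary integer; only a digit look-up and binary multiply/add are involved (a minimal C# sketch, not the actual code of any particular compiler):

using System;

static int ParseDecimal(string text)
{
    int result = 0;                   // all arithmetic below happens in binary internally
    foreach (char c in text)
    {
        int digit = c - '0';          // look up the digit's value from its character code
        result = result * 10 + digit; // shift the previous digits up one decimal place
    }
    return result;
}

Console.WriteLine(ParseDecimal("149")); // prints 149 (stored internally as binary 10010101)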
Decimal is misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal number; it is the two characters 1 and 4 written together as 14.
We know that characters are just representations of some binary value:
1 for 00110001
4 for 00110100
A full ASCII table for characters can be seen here.
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to be
14 = 00001110.
For this to happen the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110,
and we are all set.
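To see those two steps concretely, here is a short C# sketch printing the character bytes of "14" and the binary value they become (the literal "14" is just the example above):

using System;

string text = "14";
foreach (char c in text) // print each character's 8-bit ASCII code
    Console.WriteLine("'" + c + "' = " + Convert.ToString((int)c, 2).PadLeft(8, '0'));
// '1' = 00110001
// '4' = 00110100

int value = int.Parse(text); // the binary-to-binary conversion described above
Console.WriteLine(Convert.ToString(value, 2).PadLeft(8, '0')); // prints 00001110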

What is the difference between 65 and the letter A in binary?

What is the difference between 65 and the letter A in binary, as both represent the same bit-level information?
Basically, a computer only understands numbers, and not every number: it only understands binary-represented numbers, i.e. those which can be represented using only two different states (for example, 0 and 1, 0 V and 5 V, open and closed, true and false, etc.).
Unfortunately, we poor humans don't really like reading zeros and ones... So we have created some codes to use numbers as if they were characters: one of them is called ASCII (American Standard Code for Information Interchange), but there are also others, such as Unicode. The principle is simple: all the program has to do is manipulate numbers, which any CPU does very well; but when it comes to displaying the data, the display represents them as real characters, such as 'A', '4', '#', or even a space or a newline.
Now, as soon as you are using ASCII, the number 65 will represent the letter 'A'. It is all a question of representation: for example, the binary number 0b00001111, the hexadecimal 0x0F, the octal 017 and the decimal number 15 all represent the same number. It's the same for the letter 'A': think of ASCII as a base, but instead of using base 2 (binary), 8 (octal), 10 (decimal) or 16 (hexadecimal) to display numbers, it's used in a completely different manner.
To answer your question: ASCII 'A' is hexadecimal 0x41, is decimal 65, is octal 0101, is binary 0b01000001.
Every character is represented by a number. The mapping between numbers and characters is called an encoding. Many encodings use the number 65 for the letter A. Since in memory there are no special cells for characters or numbers, they are represented the same way, but the interpretation in any given program can be very different.
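A small C# demonstration of this point: the same bit pattern prints as 'A' or as 65 depending purely on how the program chooses to interpret it.

using System;

byte b = 65;
Console.WriteLine((char)b); // prints A  (interpreted as a character code)
Console.WriteLine(b);       // prints 65 (interpreted as a number)
Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0')); // 01000001 either way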
I may be misunderstanding the question, and if so I apologise for getting it wrong.
But if I'm right, I believe you're asking what the difference is between a char and an int in the binary representation of the value 65, which is the ASCII decimal value for the letter A (in capital form).
First off, we need to appreciate data types, which reserve blocks of memory in the RAM modules.
An integer is usually 16 bits, or more if it is a float or long (in C# this declaration is made by stating UInt16, Int16, or Int32, UInt32, and so on).
A character is an 8-bit memory block.
Therefore the binary would appear as follows:
A byte (8 bits) - char
Decimal place values: 128, 64, 32, 16, 8, 4, 2, 1
Binary: 01000001
2 bytes (16 bits) - Int16
Binary: 0000000001000001
It all comes down to the size of the memory block reserved, based on the data type in the variable declaration.
I'd have done the decimal calculations for the 2-byte case, but I'm on the bus at the moment.
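For what it's worth, here is a C# sketch of the same value in an 8-bit and a 16-bit slot (using byte and short to match the widths in this answer):

using System;

byte  c = 65; // one byte, as an ASCII character would be stored
short i = 65; // two bytes (Int16)
Console.WriteLine(Convert.ToString(c, 2).PadLeft(8, '0'));  // 01000001
Console.WriteLine(Convert.ToString(i, 2).PadLeft(16, '0')); // 0000000001000001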
First of all, the difference can be in the size of the memory (8 bits, 16 bits or 32 bits). See this question: bytes of a string in java.
Secondly, to store the letter 'A' you can have different encodings and different interpretations of the memory. The ASCII character 'A' in C can occupy exactly one byte (7 bits + an unused sign bit), and it has exactly the same binary value as 65 as a char integer. But the bitwise interpretations of numbers and characters are not always the same. Just consider that you can store signed values in 8 bits. See this question: what is an unsigned char.

How to convert alphabet to binary?

How do you convert the alphabet to binary? I searched on Google and it says to first convert a letter to its ASCII numeric value and then convert the numeric value to binary. Is there any other way to convert?
And if that's the only way, then are the binary values of "A" and 65 the same?
BECAUSE the ASCII value of 'A' = 65, and when converted to binary it is 01000001,
AND 65 = 01000001.
That is indeed the way in which text is converted to binary.
And to answer your second question: yes, it is true that the binary values of A and 65 are the same. If you are wondering how the CPU distinguishes between "A" and "65" in that case, you should know that it doesn't. It is up to your operating system and program to decide how to treat the data at hand. For instance, say your memory looked like the following, starting at 0 on the left and incrementing to the right:
00000001 00001111 00000001 01100110
This binary data could mean anything, and only has a meaning in the context of whatever program it is in. In a given program, you could have it be read as:
1. An integer, in which case you'll get one number.
2. Character data, in which case you'll output 4 ASCII characters.
In short, binary is read by CPUs, which do not understand the context of anything and simply execute whatever they are given. It is up to your program/OS to specify instructions in order for the data to be handled properly.
Thus, converting the alphabet to binary is dependent on the program in which you are doing so, and outside the context of a program/OS, converting the alphabet to binary is really the exact same thing as converting a sequence of numbers to binary, as far as a CPU is concerned.
The number 65 in decimal is 0100 0001 in binary, and it refers to the letter A in the binary alphabet table (ASCII): https://www.bin-dec-hex.com/binary-alphabet-the-alphabet-letters-in-binary. The easiest way to convert the alphabet to binary is to use an online converter, or you can do it manually with a binary alphabet table.
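A few lines of code do the same two-step lookup (a minimal C# sketch; the string "ABC" is just an example):

using System;

string word = "ABC";
foreach (char c in word) // each letter: character -> ASCII code -> binary
    Console.Write(Convert.ToString((int)c, 2).PadLeft(8, '0') + " ");
// prints: 01000001 01000010 01000011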

Is this definition on an octal byte correct?

My instructor stated that "an octal byte consists of 6 bits". I am having difficulty understanding why this is, as an octal digit consists of 3 binary bits. I also do not understand the significance of an octal byte being defined as '6 bits' as opposed to some other number.
Can anyone explain why this is, if it is in fact true, or point me to a useful explanation?
This is all speculation and guesswork, since none of this is in any way standard terminology.
An 8-bit byte can be written as two hexadecimal digits, because each digit expresses 4 bits. The largest such byte value is 0xFF.
By analogy, two octal digits can express 2 × 3 = 6 bits. The largest such value is 077. So if you like, you can call a pair of two octal digits an "octal byte", but only if you will also call an 8-bit byte a "hexadecimal byte".
In my personal opinion, neither notion is helpful or useful, and you'd do best just to say how many bits your byte has.
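You can check the arithmetic with a couple of C# conversions (an illustrative snippet, nothing more):

using System;

// Two octal digits cover exactly 6 bits: 77 octal is 111111 in binary.
Console.WriteLine(Convert.ToString(Convert.ToInt32("77", 8), 2));  // 111111
// Two hex digits cover exactly 8 bits: FF hex is 11111111 in binary.
Console.WriteLine(Convert.ToString(Convert.ToInt32("FF", 16), 2)); // 11111111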
Like @Kerrek SB, I'd have to guess the same.
Tell him an octal byte is two octal nibbles; that should sort him out.
Two hexadecimal digits make an 8-bit byte, which is why each group of four bits was called a nibble.
As per the wiki: the byte is a unit of digital information that most commonly consists of 8 bits. "Bit" is a contraction of "binary digit". The bit represents a logical state with one of two possible values, most commonly represented as either "1" or "0".
So a byte is a bunch of bits; to be specific, 8 bits.
We see that the definition doesn't say anything about dec, oct, hex or other notations for integer numbers.
Indeed, a byte and an integer number are not the same thing. So where does it start?
If we write the bits of a byte like so, 01001010, we find that we can map this object 1-to-1 to the binary notation of integer numbers, 01001010:
(byte) 01001010 --> (binary integer) 01001010
The two objects look the same, but it is actually just a mapping.
Now we work with an integer number instead of the abstract object "byte". An integer number can be designated via different notations, such as dec, binary, hex, oct, etc.
A notation like octal (hex, dec, etc.) is just a method of designating an integer number. It does not influence the nature of the initial byte object.
A byte has 8 bits, whatever notation is used.
ISO/IEC 2382-1:1993 says:
1. The number of bits in a byte is usually 8.
2. An octet is a byte that consists of eight bits.

Converting binary to hexadecimal?

Just wondering how I would go about converting binary to hexadecimal?
Would I first have to convert the binary to decimal and then to hexadecimal?
For example, 101101001.101110101010011
How would I go about converting a complex binary number such as the one above to hexadecimal?
Thanks in advance
Each 4 bits of a binary number represents a hexadecimal digit. So the best way to convert from binary to hexadecimal is to pad the binary number with leading zeroes so that the number of bits is divisible by four.
Then you process four bits at a time and convert them to a single hexadecimal digit:
0000 -> 0
0001 -> 1
0010 -> 2
....
1110 -> E
1111 -> F
No, you don't convert to decimal and then to hexadecimal, you convert to a numeric value, and then to hexadecimal.
(Decimal is also a textual representation of a number, just like binary and hexadecimal. Although decimal representation is used by default, a number doesn't have a textual representation in itself.)
As a hexadecimal digit corresponds to four binary digits you don't have to convert the entire string to a number, you can do it four binary digits at a time.
First fill up the binary number so that it has full groups of four digits:
000101101001.1011101010100110
Then you can convert each group to a number, and then to hexadecimal:
0001 0110 1001.1011 1010 1010 0110
169.BAA6
Alternatively, you can split the number into the two parts before and after the period and convert each from binary. The part before the period can be converted straight off, but the part after has to be padded to be correct.
Example in C#:

using System;

string binary = "101101001.101110101010011";
string[] parts = binary.Split('.');

// Right-pad the fractional part to a multiple of four bits;
// 0.1011 must become 0.10110000, not 0.00001011.
while (parts[1].Length % 4 != 0) {
    parts[1] += '0';
}

string result =
    Convert.ToInt32(parts[0], 2).ToString("X") + // integer part: leading zeroes are implicit
    "." +
    Convert.ToInt32(parts[1], 2).ToString("X");

Console.WriteLine(result); // prints 169.BAA6
You could simply have a small hash table, or other mapping, converting each quadruplet of binary digits (as a string, assuming that's your input) into the corresponding hex digit (0 to 9, A to F) for the output string. You'll have to bunch the input bits up by 4, left-padding before the '.' and right-padding after it, with 0 in both cases, as needed.
So...:
locate the '.'
left of the '.', bunch by 4, left-padding the last bunch, going leftwards: in your example, 1001 leftmost, then 0110, finally 0001 (left-padded), and that's it;
ditto to the right -- in your example 1011, then 1010, then 1010, finally 0110 (right-padded);
each bunch of 4 binary digits, via the hash table or other mapping, turns into the hex digit to put in that place in the output string.
Want some pseudo-code for it, e.g., Python?
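Here is that recipe spelled out, in C# to stay consistent with the earlier example (the table and the padding are exactly as described above):

using System;
using System.Collections.Generic;

// Build the 16-entry lookup table: "0000" -> '0', ..., "1111" -> 'F'.
var table = new Dictionary<string, char>();
for (int i = 0; i < 16; i++)
    table[Convert.ToString(i, 2).PadLeft(4, '0')] = "0123456789ABCDEF"[i];

string[] parts = "101101001.101110101010011".Split('.');

// Left-pad the integer part and right-pad the fractional part to multiples of 4.
string ip = parts[0].PadLeft((parts[0].Length + 3) / 4 * 4, '0');
string fp = parts[1].PadRight((parts[1].Length + 3) / 4 * 4, '0');

string result = "";
for (int i = 0; i < ip.Length; i += 4) result += table[ip.Substring(i, 4)];
result += ".";
for (int i = 0; i < fp.Length; i += 4) result += table[fp.Substring(i, 4)];

Console.WriteLine(result); // prints 169.BAA6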
The simplest approach, especially if you already can convert from binary digits to the internal numeric representation and from the internal numeric representation to hexadecimal digits, is to go binary->internal->hex. I say internal and not decimal because, even though it may print as decimal, it is actually stored internally in binary format. That said, it is possible to go straight from one to the other. This does not apply to your specific example, but in many cases, when converting from binary to hex, you can go four digits at a time and simply look up the corresponding hex values in a table. There are all sorts of ways to convert.
BIN to HEX
Binary and hex are natively compatible. Just group 4 binary digits (bits) and substitute the corresponding hex digit.
More reference here:
http://en.wikipedia.org/wiki/Hexadecimal#Binary_conversion