How to convert alphabet to binary?

How do I convert the alphabet to binary? I searched on Google and it says to first convert a letter to its ASCII numeric value and then convert that numeric value to binary. Is there any other way to convert?
And if that's the only way, then are the binary values of "A" and 65 the same?
Because the ASCII value of 'A' is 65, and when converted to binary it is 01000001,
and 65 = 01000001.

That is indeed the way in which text is converted to binary.
And to answer your second question, yes, it is true that the binary value of A and 65 are the same. If you are wondering how the CPU distinguishes between "A" and "65" in that case, you should know that it doesn't. It is up to your operating system and program to decide how to treat the data at hand. For instance, say your memory looked like the following, starting at 0 on the left and incrementing to the right:
00000001 00001111 00000001 01100110
This binary data could mean anything, and only has a meaning in the context of whatever program it is in. In a given program, you could have it be read as:
1. An integer, in which case you'll get one number.
2. Character data, in which case you'll output 4 ASCII characters.
In short, binary is read by CPUs, which do not understand the context of anything and simply execute whatever they are given. It is up to your program/OS to specify instructions in order for data to be handled properly.
Thus, converting the alphabet to binary depends on the program in which you are doing so; outside the context of a program/OS, converting the alphabet to binary is exactly the same thing as converting a sequence of numbers to binary, as far as the CPU is concerned.
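To make the point concrete, here is a small Python sketch (Python is only used for illustration here, and the variable name is made up) that reads the same four bytes from the memory example above either as one integer or as four characters:

    # The four example bytes from above, read two different ways.
    data = bytes([0b00000001, 0b00001111, 0b00000001, 0b01100110])

    print(int.from_bytes(data, byteorder="big"))  # 17760614 -> one 32-bit integer
    print([chr(b) for b in data])                 # four characters (the first three are control codes)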

The number 65 in decimal is 0100 0001 in binary, and it refers to the letter A in the binary alphabet table (ASCII): https://www.bin-dec-hex.com/binary-alphabet-the-alphabet-letters-in-binary. The easiest way to convert the alphabet to binary is to use an online converter, or you can do it manually with a binary alphabet table.
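If you have a programming language at hand, the same table lookup takes one line. A minimal Python sketch (purely illustrative): take each letter's ASCII code with ord() and format it as an 8-bit binary string.

    for letter in "ABC":
        print(letter, ord(letter), format(ord(letter), "08b"))
    # A 65 01000001
    # B 66 01000010
    # C 67 01000011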

Related

Reading binary as decimal or ASCII confusion

I was looking over some hex data the other day and I got a bit confused about something.
If I see the hex code #41, that's 65 in decimal or 0100 0001 in binary.
Fine!
But what confuses me is that #41 is the code for the letter A in ASCII.
So when I was looking at the stream of hex bytes in Sublime, it actually picked it up as "A" and not the number 65.
So the confusion is: how did it know to represent this hex or binary as the letter A instead of the integer 65? Is there some kind of flag in the binary that Sublime used to determine whether it should show the character or the integer?
In other words, if someone gave me a byte of binary, how do I then determine whether they wanted me to see it as ASCII or an integer without them actually telling me?
I believe the answer to this question (albeit very late) is that the ASCII code for the letter A sits at index 65 of the character set.
The integer 6 would be 0000 0110, whereas the character "6" has its own ASCII code (54). "65" in a text string is just a sequence of individual digit characters, not something like an int data type.
Admittedly, I don't know how you'd tell whether someone wants the ASCII character for a hex value or its denary (decimal) value. I'm still too new to this concept.
Here's where I derived my answer: https://www.bbc.co.uk/bitesize/guides/zp73wmn/revision/5
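A small Python illustration of that point (illustrative only, not how Sublime works internally): the text "65" is two digit characters, while the single byte 0x41 can be shown either as the character "A" or as the integer 65, depending on how the viewer chooses to interpret it.

    print(list(b"65"))                  # [54, 53] -> two digit characters, not the integer 65
    print(list(b"A"))                   # [65]     -> the single byte 0x41
    print(int.from_bytes(b"A", "big"))  # 65, the same byte read as an integer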

What's the exact meaning of the statement "Since ASCII used 7 bits for the character, it could only represent 128 different characters"?

I came across the statement below while studying HTML character sets and character encoding:
Since ASCII used 7 bits for the character, it could only represent 128
different characters.
When we convert any decimal value from the ASCII character set to its binary equivalent, it comes down to a 7-bit-long binary number.
E.g., for the capital English letter 'E', the decimal value 69 exists in the ASCII table. If we convert 69 to its binary equivalent, it comes down to the 7-bit-long binary number 1000101.
Then why is it listed in the ASCII table as the 8-bit-long binary number 01000101 instead of the 7-bit-long binary number 1000101?
This contradicts the statement
Since ASCII used 7 bits for the character, it could only represent 128
different characters.
The above statement is saying that ASCII used 7 bits for the character.
Please clear up my confusion about the binary equivalent of a decimal value. Should I consider a 7-bit-long or an 8-bit-long binary equivalent of any decimal value from the ASCII table? Please explain it to me in easy-to-understand language.
Again, consider the below statement :
Since ASCII used 7 bits for the character, it could only represent 128
different characters.
According to the above statement, how does the number of characters (128) that ASCII supports relate to the fact that ASCII uses 7 bits to represent any character?
Please clear the confusion.
Thank You.
In most processors, memory is byte-addressable and not bit-addressable. That is, a memory address gives the location of an 8-bit value. So, almost all data is manipulated in multiples of 8 bits at a time.
If we were to store a value that has by its nature only 7 bits, we would very often use one byte per value. If the data is a sequence of such values, as text might be, we would still use one byte per value to make counting, sizing, indexing and iterating easier.
When we describe the value of a byte, we often show all of its bits, either in binary or hexadecimal. If a value is some sort of integer (say of 1, 2, 4, or 8 bytes) and its decimal representation would be more understandable, we would write the decimal digits for the whole integer. But in those cases, we might lose the concept of how many bytes it is.
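A short Python check of both halves of the question (illustrative): 7 bits give 2**7 = 128 possible codes, which is why ASCII can represent exactly 128 characters, and when a 7-bit code like 69 ('E') is stored in a byte, the extra leading bit is simply 0.

    print(2 ** 7)                    # 128 possible 7-bit values, codes 0..127
    print(format(ord("E"), "07b"))   # 1000101   (the 7-bit view of code 69)
    print(format(ord("E"), "08b"))   # 01000101  (the same value padded to a full byte)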
BTW—HTML doesn't have anything to do with ASCII. And, Extended ASCII isn't one encoding. The fundamental rule of character encodings is to read (decode) with the encoding the text was written (encoded) with. So, a communication consists of the transferring of bytes and a shared understanding of the character encoding. (That makes saying "Extended ASCII" so inadequate as to be nearly useless.)
An HTML document represents a sequence of Unicode characters. So, one of the Unicode character encodings (UTF-8) is the most common encoding for an HTML document. Regardless, after it is read, the result is Unicode. An HTML document could be encoded in ASCII but, why do that? If you did know it was ASCII, you could just as easily know that it's UTF-8.
Outside of HTML, ASCII is used billions, if not trillions, of times per second. But unless you know exactly how it pertains to your work, forget about it; you probably aren't using ASCII.

Converting from lowercase to uppercase using decimal/binary representation of alphabets

I'm using RISC-V and I am limited to using just and, or, xori, and integer addition, subtraction, multiplication, and division.
So, for instance, the letter "a" will be represented as 97 and "aa" will be represented as 24929, and so on. The UI converts the binary sequence into a decimal representation, and I cannot directly modify the n-th bit.
Is there any way I can find a simple, general equation for converting the decimal representation of a sequence of up to 8 letters from lowercase to uppercase?
Also, I forgot to add, I can't partition the string into individual letters either. Maybe it's possible, but I don't know how to do it.
Letters or characters are usually represented as byte values, which are easier to read in hexadecimal. This can be seen if you convert 97 and 24929 to hex.
You did not mention the system which was used to encode the characters; mentioning the value for one character is not definitive. Assuming your letters are encoded as ASCII, find an ASCII table and figure out the DIFFERENCE between upper- and lowercase character codes.
Use this knowledge to design an algorithm to transform lowercase character codes to uppercase.
A good uppercase conversion algorithm will not modify characters that are not lowercase letters.
This can be extended to SIMD-style processing if you are careful to avoid carries between bytes when you need to add or subtract.
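As a hint at what that algorithm can look like, here is a minimal Python sketch (illustrative only; the function name is made up, and it assumes every byte of the packed value is a lowercase ASCII letter, so it skips the range check a robust version would need). It relies on the fact that lowercase and uppercase ASCII letters differ only in bit 5 ('a' = 0x61, 'A' = 0x41), so ANDing each byte position with 0xDF clears that bit; building the mask needs only multiplication and addition.

    def to_uppercase_packed(value, num_letters):
        # Build a mask with one 0xDF per byte position, e.g. 0xDFDF for 2 letters.
        mask = 0
        for _ in range(num_letters):
            mask = mask * 256 + 0xDF
        # Clearing bit 5 of every byte turns lowercase letters into uppercase.
        return value & mask

    # "aa" packed as two bytes is 24929 (0x6161); the result 16705 is 0x4141, i.e. "AA".
    print(to_uppercase_packed(24929, 2))  # 16705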

What is the difference between binary and ASCII based file comparison?

If you use a file comparison tool like fc in Windows, you can choose between an ASCII and a binary comparison.
What is the actual difference between these two comparisons? If I compare two ASCII files, don't I want the binary data of the files to be identical?
WARNING: this is a 5-year-old loose remembrance of knowledge from uni.
Binary comparison means you compare the binary data exactly, while ASCII comparison treats the data as text. To put it in a simple case, the char 'A' is a representation of 01000001, but that is also an 8-bit integer equal to 65, so 'A' = 65 in binary. If you wrote "A+A" as a string, the underlying bytes would be 65 43 65 (43 is '+' in decimal): at the binary level those are exactly the same data, they are just presented differently depending on the interpretation. This is a very loose explanation and I'm sure I missed a lot, but that should sum it up loosely.
In a text file you want an ASCII comparison because you write in ASCII characters. For, say, a program state saved to a file, you want a binary comparison to get a direct byte-for-byte check.
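Here is a self-contained Python sketch (illustrative, not what fc actually does; the file names are made up) of the practical difference: a byte-for-byte comparison versus a line-oriented text comparison, using two files that differ only in their line endings.

    # Write two files with the same text but different line endings.
    with open("a.txt", "wb") as f:
        f.write(b"hello\nworld\n")       # Unix line endings
    with open("b.txt", "wb") as f:
        f.write(b"hello\r\nworld\r\n")   # Windows line endings

    # "Binary" comparison: the raw bytes differ, so this prints False.
    with open("a.txt", "rb") as f1, open("b.txt", "rb") as f2:
        print(f1.read() == f2.read())

    # "ASCII"/text comparison: read as text, both files contain the same
    # two lines, so this prints True.
    with open("a.txt") as f1, open("b.txt") as f2:
        print(f1.read().splitlines() == f2.read().splitlines())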

What is the difference between 65 and the letter A in binary?

What is the difference between 65 and the letter A in binary, as both represent the same bit-level information?
Basically, a computer only understands numbers, and not every number: it only understands binary-represented numbers, i.e. numbers that can be represented using only two different states (for example, 0 and 1, 0V and 5V, open and closed, true or false, etc.).
Unfortunately, we poor humans don't really like reading zeros and ones... So we have created some codes to use numbers as if they were characters: one of them is called ASCII (American Standard Code for Information Interchange), but there are also others, such as Unicode. The principle is simple: all the program has to do is manipulate numbers, which any CPU does very well, but when it comes to displaying these data, the display represents them as real characters, such as 'A', '4', '#', or even a space or a newline.
Now, as soon as you are using ASCII, the number 65 will represent the letter 'A'. It is all a question of representation: for example, the binary number 0b00001111, the hexadecimal one 0x0F, the octal one 017 and the decimal number 15 all represent the same number. It's the same for the letter 'A': think of ASCII as a base, but instead of being used like base 2 (binary), 8 (octal), 10 (decimal) or 16 (hexadecimal) to display numbers, it's used in a completely different manner.
To answer your question: ASCII 'A' is hexadecimal 0x41 is decimal 65 is octal 0101 is binary 0b01000001.
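The same chain of representations, written out in Python for anyone who wants to verify it (illustrative only):

    code = ord("A")
    print(code)                    # 65
    print(hex(code))               # 0x41
    print(oct(code))               # 0o101
    print(bin(code))               # 0b1000001
    print(format(code, "#010b"))   # 0b01000001, padded to a full byte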
Every character is represented by a number. The mapping between numbers and characters is called an encoding. Many encodings use the number 65 for the letter A. Since in memory there are no special cells for characters or numbers, they are represented the same way, but the interpretation in any program could be very different.
I may be misunderstanding the question, and if so I apologise for getting it wrong.
But if I'm right, I believe you're asking what the difference is between a char and an int in the binary representation of the value 65, which is the ASCII decimal value for the letter A (in capital form).
First off, we need to appreciate data types, which reserve blocks of memory in the RAM modules.
An integer is usually 16 bits, or more if it is a float or long (in C# this declaration is made by stating UInt16, Int16, Int32, UInt32, and so on).
A character is an 8-bit memory block.
Therefore the binary would appear as follows
A byte (8 bits) - char
Bit place values: 128, 64, 32, 16, 8, 4, 2, 1
Binary: 01000001
2 bytes (16 bits) - int16
Binary: 00000000 01000001
It's all down to the size of the memory block reserved, based on the data type in the variable declaration.
I'd have done the decimal calculations for the 2-byte case, but I'm on the bus at the moment.
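A minimal Python sketch of the size point (illustrative; Python's struct module stands in for the variable declarations discussed above): the same value 65 packed into a 1-byte char-sized field and a 2-byte int16-sized field.

    import struct

    one_byte = struct.pack(">B", 65)   # 1 byte:  b'A'
    two_bytes = struct.pack(">h", 65)  # 2 bytes: b'\x00A'

    print(" ".join(format(b, "08b") for b in one_byte))   # 01000001
    print(" ".join(format(b, "08b") for b in two_bytes))  # 00000000 01000001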
First of all, the difference can be in the size of the memory (8 bits, 16 bits or 32 bits). See this question: bytes of a string in java
Secondly, to store the letter 'A' you can have different encodings and different interpretations of memory. The ASCII character 'A' in C can occupy exactly one byte (7 bits plus an unused sign bit), and it has exactly the same binary value as 65 as a char integer. But the bitwise interpretation of numbers and characters is not always the same. Just consider that you can store signed values in 8 bits. See this question: what is an unsigned char
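A tiny Python illustration of that last point (illustrative only): the same 8 bits give different values depending on whether they are interpreted as signed or unsigned.

    byte = bytes([0b11000001])                        # 0xC1
    print(int.from_bytes(byte, "big", signed=False))  # 193
    print(int.from_bytes(byte, "big", signed=True))   # -63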