This question already has answers here:
Why are hexadecimal numbers prefixed with 0x?
(5 answers)
Closed 8 years ago.
Well, for any kind of address, x is used when writing it.
What does the x actually signify?
It's not just for addresses. The 0x prefix is used for hexadecimal literals in, as far as I know, all C-style languages (C, Java, C++, Objective-C, C#...) and probably others as well.
0x10 is, for instance, 10 hexadecimal, or 16 decimal.
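For instance, a minimal C# sketch (any C-style language behaves the same way; the class and variable names are just illustrative):

using System;

class HexLiteralDemo
{
    static void Main()
    {
        int fromHex = 0x10;  // hexadecimal literal: 1*16 + 0 = 16
        int fromDec = 16;    // the same value written in decimal

        Console.WriteLine(fromHex == fromDec);     // True
        Console.WriteLine(fromHex);                // 16 (printed in decimal by default)
        Console.WriteLine(fromHex.ToString("X"));  // 10 (printed back as hex)
    }
}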
More information is available in the answers to this question.
It means it is in hexadecimal format, i.e. a number with a base of 16 instead of the usual base 10.
0x is the prefix used to represent numbers in hexadecimal notation.
In this case, 0x00000000 is the value used to represent null memory addresses (equivalent to the keyword null in high-level languages like Java, C# and many others).
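As a rough illustration (a C# sketch, where IntPtr.Zero stands in for the null address), padding the zero pointer to 8 hex digits gives exactly that 0x00000000 spelling:

using System;

class NullAddressDemo
{
    static void Main()
    {
        IntPtr nullAddress = IntPtr.Zero;  // the "no address" value, numerically 0

        // Padded to 8 hex digits, the way debuggers often show 32-bit addresses
        Console.WriteLine("0x" + nullAddress.ToInt64().ToString("X8"));  // 0x00000000
    }
}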
Instead of writing ffff, why is the syntax for writing hexadecimal numbers like 0*ffff? What is the meaning of "0*"? Does it specify something?
Anyhow, the A, B, C, D, E, F notation exists only in the hexadecimal number system, so what is the need for "0*"?
Sorry, "*" was not the character I meant; it is "x".
Is it a nomenclature or notation for hexadecimal number systems?
I don't know what language you are talking about, but if you for example in C# write
var ffffff = "Some unrelated string";
...
var nowYouveDoneIt = ffffff;
what do you expect to happen? How does the compiler know if ffffff refers to the hexadecimal representation of the decimal number 16777215 or to the string variable defined earlier?
Since identifiers (in C#) can't begin with a number, prefixing with a 0 and some other character (in C# it's 0xffffff for hex and 0b111111111111111111111111 for binary, IIRC) is a handy way of communicating what base the number literal is in.
EDIT: Another issue, if you were to write var myCoolNumber = 10, how would you have ANY way of knowing if this means 2, 10 or 16? Or something else entirely.
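To make that EDIT concrete, here is a small C# sketch (assuming C# 7 or later for the 0b literal; the names are just illustrative):

using System;

class LiteralBaseDemo
{
    static void Main()
    {
        var plainTen  = 10;    // no prefix: decimal ten
        var hexTen    = 0x10;  // 0x prefix: hexadecimal, i.e. sixteen
        var binaryTen = 0b10;  // 0b prefix: binary, i.e. two

        Console.WriteLine(plainTen);   // 10
        Console.WriteLine(hexTen);     // 16
        Console.WriteLine(binaryTen);  // 2
    }
}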
It's typically 0xFFFF: the letter, not the multiplication symbol.
As for why, 0x is just the most common convention, like how some programming languages allow binary to be prefixed by 0b. Prefixing a number with just 0 is typically reserved for octal, or base 8; they wanted a way to tell the machine that the following number is in hexadecimal, or base 16 (10 != 0b10 [2] != 010 [8] != 0x10 [16]). The small 'o' that could have marked octal was typically omitted, for human readability purposes.
Interestingly enough, most Assembly-based implementations I've come across use (or at least allow the use of) 0h instead or as well.
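C# has no leading-zero octal literal, but Convert.ToInt32 with an explicit base makes the same point as the (10 != 0b10 != 010 != 0x10) comparison above; treat this as a sketch of the idea, not of the C syntax itself:

using System;

class SameDigitsDifferentBase
{
    static void Main()
    {
        // The digits "10" mean a different value in every base.
        foreach (var numberBase in new[] { 2, 8, 10, 16 })
        {
            int value = Convert.ToInt32("10", numberBase);
            Console.WriteLine($"\"10\" in base {numberBase} = {value}");  // 2, 8, 10, 16
        }
    }
}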
It's there to indicate that the number is in heX. It's not '*', it's actually 'x'.
See:
http://www.tutorialspoint.com/cprogramming/c_constants.htm
What is the difference between 65 and the letter A in binary, as both represent the same bit-level information?
Basically, a computer only understands numbers, and not every number: it only understands binary-represented numbers, i.e. numbers that can be represented using only two different states (for example, 1 and 0, 0V and 5V, open and closed, true or false, etc.).
Unfortunately, we poor humans don't really like reading zeros and ones... So we have created codes to use numbers as if they were characters: one of them is called ASCII (American Standard Code for Information Interchange), but there are also others, such as Unicode. The principle is simple: all the program has to do is manipulate numbers, which any CPU does very well; but when it comes to displaying the data, the display represents it as real characters, such as 'A', '4', '#', or even a space or a newline.
Now, as soon as you are using ASCII, the number 65 will represent the letter 'A'. It is all a question of representation: for example, the binary number 0b00001111, the hexadecimal one 0x0F, the octal one 017 and the decimal number 15 all represent the same number. It's the same for the letter 'A': think of ASCII as a base, but instead of being used like base 2 (binary), 8 (octal), 10 (decimal) or 16 (hexadecimal) to display numbers, it's used in a completely different manner.
To answer your question: ASCII 'A' is hexadecimal 0x41 is decimal 65 is octal 0101 is binary 0b01000001.
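That last line can be checked with a short C# sketch (C# chars are Unicode, but for 'A' the number is the same as in ASCII):

using System;

class LetterADemo
{
    static void Main()
    {
        char letter = 'A';
        int code = letter;  // implicit char -> int conversion

        Console.WriteLine(code);                       // 65 (decimal)
        Console.WriteLine("0x" + code.ToString("X2")); // 0x41 (hexadecimal)
        Console.WriteLine(Convert.ToString(code, 8));  // 101 (octal)
        Console.WriteLine("0b" + Convert.ToString(code, 2).PadLeft(8, '0'));  // 0b01000001 (binary)
    }
}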
Every character is represented by a number. The mapping between numbers and characters is called an encoding. Many encodings use the number 65 for the letter A. Since in memory there are no special cells for characters or numbers, they are represented the same way, but the interpretation in any program could be very different.
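For example (a C# sketch; Encoding.ASCII and Encoding.Unicode are just two of the encodings available by default), asking two different encodings for the bytes of "A":

using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        // The encoding decides which number(s) represent the character 'A' in memory.
        byte[] ascii = Encoding.ASCII.GetBytes("A");    // { 65 }
        byte[] utf16 = Encoding.Unicode.GetBytes("A");  // { 65, 0 } (UTF-16, little-endian)

        Console.WriteLine(string.Join(",", ascii));  // 65
        Console.WriteLine(string.Join(",", utf16));  // 65,0
    }
}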
I may be misunderstanding the question, and if so I apologise for getting it wrong.
But if I'm right, I believe you're asking what the difference is between a char and an int in the binary representation of the value 65, which is the ASCII decimal value for the letter A (in capital form).
First off we need to appreciate data types, which reserve blocks of memory in the RAM modules.
An integer is usually 16 bits, or more for a float or a long (in C# the declaration states the size: UInt16, Int16, Int32, UInt32, and so forth).
A character (an ASCII char, as in C) is an 8-bit memory block.
Therefore the binary would appear as follows
A byte (8 bits) - char
Bit values (decimal): 128, 64, 32, 16, 8, 4, 2, 1
Binary: 01000001
2 bytes (16 bit) - int16
Binary: 0000000001000001
It's all down to the size of the memory block reserved based on the data type in the variable declaration.
I'd have done the decimal calculations for the 2-byte case, but I'm on the bus at the moment.
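If you want to check those widths yourself, here is a rough C# sketch (note that in C# itself a char is 16 bits, since .NET strings are UTF-16, so the 8-bit slot below uses a byte):

using System;

class WidthDemo
{
    static void Main()
    {
        byte  asByte  = 65;  // 8-bit storage
        short asInt16 = 65;  // 16-bit storage

        Console.WriteLine(Convert.ToString(asByte, 2).PadLeft(8, '0'));    // 01000001
        Console.WriteLine(Convert.ToString(asInt16, 2).PadLeft(16, '0'));  // 0000000001000001

        Console.WriteLine(sizeof(byte));   // 1 byte
        Console.WriteLine(sizeof(short));  // 2 bytes
        Console.WriteLine(sizeof(char));   // 2 bytes: C# chars are UTF-16 code units
    }
}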
First of all, the difference can be in the size of the memory (8 bits, 16 bits or 32 bits). See this question: bytes of a string in java
Secondly, to store the letter 'A' you can have different encodings and different interpretations of the memory. The ASCII character 'A' in C can occupy exactly one byte (7 bits + an unused sign bit), and it has exactly the same binary value as the integer 65 stored in a char. But the bitwise interpretation of numbers and characters is not always the same. Just consider that you can store signed values in 8 bits. See this question: what is an unsigned char
I have been Googling for quite a while now for an answer to why HTML entities can be written either in HTML decimal or HTML hex. So my questions are:
What is the difference between HTML decimal and HTML hex?
Why are there two systems to do the same thing?
Originally, HTML was nominally based on SGML, which has decimal character references only. Later, the hexadecimal alternative was added in HTML 4.01 (and soon implemented in browsers), then retrofitted into SGML in the Web Adaptations Annex.
The apparent main reason for adding the hexadecimal alternative was that all modern character code and encoding standards, such as Unicode, use hexadecimal notation for the code numbers of characters. The ability to refer to a character by its Unicode number, written in the conventional hexadecimal notation, just prefixed with &#x and suffixed with ;, helps to avoid errors that may arise if people convert from hexadecimal to decimal notation.
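A quick C# sketch (WebUtility.HtmlDecode is used here only to resolve the references) showing that the decimal and hexadecimal forms name the same character:

using System;
using System.Net;

class EntityDemo
{
    static void Main()
    {
        // Decimal and hexadecimal character references for U+0041, LATIN CAPITAL LETTER A
        Console.WriteLine(WebUtility.HtmlDecode("&#65;"));   // A
        Console.WriteLine(WebUtility.HtmlDecode("&#x41;"));  // A
        Console.WriteLine(0x41 == 65);                       // True: same code point, two spellings
    }
}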
There are four radixes commonly used in computer technologies:
Binary, radix 2, because ultimately integers are arrays of switches, each which may be on (1) or off (0).
Octal, radix 8, because each digit represents exactly 3 bits, so it's easy to convert to binary.
Decimal, radix 10, because humans have 10 fingers and because we grew up using this radix.
Hexadecimal, radix 16, because like octal it's easy to convert to bits, but even better because exactly 2 hex digits express 1 byte. If, for example, you see an rgba value given in hex as 0x00ff00ff, you can see instantly that it represents opaque green.
So, to answer the question posed, for some of us hex is the natural way to express integers as it gives more insight into the storage. For others it's decimal. To each his or her own!
Finishing with an HTML example: could &#65536; be a character of UTF-16? In hex it's easy to see that the answer is no, because it's the same as &#x10000;, which needs more than 16 bits.
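That last check can also be done in code; a C# sketch (char.ConvertFromUtf32 is used only to count UTF-16 code units):

using System;

class Utf16Demo
{
    static void Main()
    {
        int codePoint = 65536;  // 0x10000, the value from &#65536;

        Console.WriteLine(codePoint > 0xFFFF);                       // True: too big for one 16-bit code unit
        Console.WriteLine(char.ConvertFromUtf32(codePoint).Length);  // 2: a surrogate pair in UTF-16
    }
}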
I've always wondered why leading zeroes (0) are used to represent octal numbers, instead of — for example — 0o. The use of 0o would be just as helpful, but would not cause as many problems as leading 0es (e.g. parseInt('08'); in JavaScript). What are the reason(s) behind this design choice?
All modern languages import this convention from C, which imported it from B, which imported it from BCPL.
Except BCPL used #1234 for octal and #x1234 for hexadecimal. B departed from this convention because # was a unary operator in B (integer to floating point conversion), so #1234 could not be used, and # as a base indicator was replaced with 0.
The designers of B tried to make the syntax very compact. I guess this is the reason they did not use a two-character prefix.
Worth noting that in Python 3.0, they decided that octal literals must be prefixed with '0o' and the old '0' prefix became a SyntaxError, for the exact reasons you mention in your question.
https://www.python.org/dev/peps/pep-3127/#removal-of-old-octal-syntax
"0b" is often used for binary rather than for octal. The leading "0" is, I suspect for "O -ctal".
If you know you are going to be parsing octal then use parseInt('08', 10); to make it treat the number as base ten.
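The same pitfall can be reproduced in C# (shown in C# only because the earlier examples are; Convert.ToInt32 takes an explicit base just as parseInt takes a radix):

using System;

class RadixPitfall
{
    static void Main()
    {
        Console.WriteLine(Convert.ToInt32("08", 10));  // 8: parsed as base ten, as intended

        try
        {
            Convert.ToInt32("08", 8);  // '8' is not a valid octal digit...
        }
        catch (FormatException e)
        {
            Console.WriteLine(e.Message);  // ...so this throws instead
        }
    }
}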
My instructor stated that "an octal byte consists of 6 bits". I am having difficulty understanding why this is, as an octal digit consists of 3 binary bits. I also do not understand the significance of an octal byte being defined as '6 bits' as opposed to some other number.
Can anyone explain why this is, if it is in fact true, or point me to a useful explanation?
This is all speculation and guesswork, since none of this is in any way standard terminology.
An 8-bit byte can be written as two hexadecimal digits, because each digit expresses 4 bits. The largest such byte value is 0xFF.
By analogy, two octal digits can express 2 × 3 = 6 bits. The largest such value is 077. So if you like you can call a pair of two octals an "octal byte", but only if you will also call an 8-bit byte a "hexadecimal byte".
In my personal opinion neither notion is helpful or useful, and you'd be best off just saying how many bits your byte has.
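A small C# check of the arithmetic ("77" is parsed from a string because C# has no octal literal):

using System;

class OctalByteDemo
{
    static void Main()
    {
        int maxTwoOctalDigits = Convert.ToInt32("77", 8);  // octal 77
        int maxTwoHexDigits   = 0xFF;

        Console.WriteLine(maxTwoOctalDigits);  // 63  = 2^6 - 1, i.e. 6 bits
        Console.WriteLine(maxTwoHexDigits);    // 255 = 2^8 - 1, i.e. 8 bits
    }
}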
Like @Kerrek SB I'd have to guess the same.
Tell him an octal byte is two octal nibbles, that should sort him out.
Two hexadecimal digits make an 8-bit byte, so each group of four bits is called a nibble.
As per the wiki: the byte is a unit of digital information that most commonly consists of 8 bits. The bit is a contraction of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0".
So a byte is a set of bits; to be specific, of 8 bits.
We see that the definition doesn't say anything about dec, oct, hex or other notations of integer numbers.
Indeed, a byte and an integer number are not the same. So where does it start?
If we write the bits of a byte like so, 01001010, we find that we can map this object 1-to-1 to the binary notation of an integer number, 01001010.
(byte)01001010 --> (binary integer) 01001010
The two objects look the same, but it is actually just a mapping.
Now we work with an integer number instead of the abstract object "byte". An integer number can be designated via different notations, such as dec, binary, hex, oct, etc.
A notation like octal (or hex, dec, etc.) is just a method of designating an integer number. It does not influence the nature of the initial byte object.
A byte has 8 bits whatever notation is used.
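To see the same byte under different notations (a C# sketch of the 01001010 example above; the 0b literal needs C# 7 or later):

using System;

class OneByteManyNotations
{
    static void Main()
    {
        byte b = 0b01001010;  // the byte from the example above, decimal 74

        Console.WriteLine(Convert.ToString(b, 2).PadLeft(8, '0'));  // 01001010 (binary)
        Console.WriteLine(Convert.ToString(b, 8));                  // 112 (octal)
        Console.WriteLine(b);                                       // 74 (decimal)
        Console.WriteLine(b.ToString("X2"));                        // 4A (hexadecimal)
    }
}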
ISO/IEC 2382-1:1993 says:
1. The number of bits in a byte is usually 8.
2. An octet is a byte that consists of eight bits.