Binary, hexadecimal and octal values in MySQL

I am very interested in working with binary, hexadecimal and octal systems in a MySQL database. First of all, please advise why we need them when storing information: is it because of the sheer amount of information, or for some other reason?
Also, which types of values should be stored in these systems?
In addition, there are bit operators like "<<".
Here is an example: SELECT 50<<2 AS example;
This gives the result 200. Can anyone explain how it is calculated?
Thanks for answering :))

First of all, please advise why we need them when storing information
Computers store data in binary. Sometimes it's useful for us to think in terms of the actual bits that are stored, in which case our familiar decimal system can be a little awkward (as conversions are not straightforward); we could write the bits out in full, but that's often too cumbersome since even quite small numbers take up a lot of space to write (e.g. decimal 24521 is binary 101111111001001).
Instead, we tend to use bases which are some power of 2, since they're more compact than binary whilst having the property that each 'digit' represents an exact number of bits in the binary representation. For example, a hexadecimal (base-16) digit represents four bits (a "nibble") with the digits 0 through to F (decimal 15 / binary 1111); an octal (base-8) digit represents three bits with the digits 0 through to 7 (binary 111).
Our earlier example of decimal 24521 would be 5FC9 in hex or 57711 in octal: starting from the right you can see that each digit respectively represents 4 and 3 bits in the above binary representation. Therefore it is (relatively) easy for us humans to visualise the binary representation whilst looking at these compact representations in other bases.
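A quick way to check these conversions (a Python snippet here, used purely as a calculator; the values are the same in MySQL):

```python
# Decimal 24521 rendered in binary, hexadecimal and octal.
n = 24521
print(bin(n))  # 0b101111111001001 (15 bits)
print(hex(n))  # 0x5fc9 (each hex digit covers 4 bits)
print(oct(n))  # 0o57711 (each octal digit covers 3 bits)

# All three spellings denote the same value:
assert int("101111111001001", 2) == int("5FC9", 16) == int("57711", 8) == n
```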
Also, which types of values should be stored in these systems?
I'm not sure what you mean by this. As indicated above, the same values can be represented in all of these systems. In MySQL, we can indicate a binary literal by prepending it with 0b and a hexadecimal literal by prepending it with 0x. MySQL does not support octal literals.
Can anyone explain how it is calculated?
The << operator performs a bitwise left-shift. That is, it shifts the bits of the left-hand operand left by the number of places given by the right-hand operand.
For each position the bits of an integer are shifted left, the value represented by those bits increases two-fold. It's similar to the effect of shifting digits left in our decimal system, whereby values increase ten-fold (for example, 50 shifted one place to the left gives 500, which is a ten-fold increase; in binary 110 (decimal 6) shifted one place left gives 1100 (decimal 12), which is a two-fold increase).
In your case, shifting the bits of the number 50 (i.e. 110010) two places to the left yields 2 two-fold increases (i.e. a four-fold increase overall): 11001000 is decimal 200.
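A quick sketch of that doubling-per-shift behaviour (Python shown here; the << operator works the same way in MySQL for these values):

```python
# Each left shift by one position doubles the value.
x = 50               # binary 110010
print(x << 1)        # 100 (one shift: one doubling)
print(x << 2)        # 200 (two shifts: a four-fold increase, as in SELECT 50<<2)
print(bin(x << 2))   # 0b11001000
```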

Your first two questions are too vague to answer, but the third one is concrete enough that I'll respond.
The <<2 shifts the bits two places to the left, as documented here. This is equivalent to multiplying 50 by 2^2:
mysql> SELECT 50<<2 AS example;
+---------+
| example |
+---------+
|     200 |
+---------+
1 row in set (0.00 sec)
mysql>
mysql> SELECT 50 * POW(2,2) AS example;
+---------+
| example |
+---------+
|     200 |
+---------+
1 row in set (0.00 sec)

Related

Binary numbers addition

I have just started doing some binary number exercises to prepare for a class that I will start next month, and I have got the hang of all the conversions from decimal to binary and vice versa. But now, with the two letters 'a' and 'b', I am not sure how to apply that knowledge to add the bits in the following exercise:
Given two binary numbers a = (a7a6 ... a0) and b = (b7b6 ... b0). There is a calculator that can add 4-bit binary numbers. How many bits will be used to represent the result of a 4-bit addition? Why?
We would like to use our calculator to calculate a + b. For this we can put as many as eight bits (4 bits of the first and 4 bits of the second number) of our choice into the calculator and continue to use the result bit by bit.
How many additions does our calculator have to carry out for the addition of a and b at most? How many bits long is the result at most?
How many additions does the calculator have to perform at least for the result to be correct for all possible inputs a and b?
The number of bits needed to represent a 4-bit binary addition is 5. This is because there could be a carry-over bit that pushes the result to 5 bits.
For example 1111 + 0010 = 10010.
This can be done the same way as adding decimal numbers. From right to left just add the numbers of the same significance. If the two bits are 1+1, the result is 10 so that place becomes a zero and the 1 carries over to the next pair of bits, just like decimal addition.
With regard to the min/max number of steps, this seems more like an algorithm-specific question. Look up some different binary addition algorithms, ripple-carry for instance, and it should give you a better idea of what is meant by the question.
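To make the idea concrete, here is a rough sketch (the function names are my own, not part of the exercise) of adding two 8-bit numbers with nothing but a 4-bit adder: low nibbles first, then the high nibbles plus the carry:

```python
def add4(x, y, carry_in=0):
    # A 4-bit adder: returns (4-bit sum, carry-out bit).
    total = x + y + carry_in           # at most 15 + 15 + 1 = 31, which is 5 bits
    return total & 0b1111, total >> 4

def add8(a, b):
    # Add two 8-bit numbers by using the 4-bit adder twice.
    lo, carry = add4(a & 0b1111, b & 0b1111)   # low nibbles
    hi, carry = add4(a >> 4, b >> 4, carry)    # high nibbles plus the carry
    return (carry << 8) | (hi << 4) | lo       # result can be up to 9 bits long

print(bin(add8(0b11111111, 0b00000001)))  # 0b100000000 (9 bits)
```

Two 4-bit additions always suffice here, and the result is at most 9 bits long, which matches the 5-bit answer for a single 4-bit addition.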

How computers convert decimal to binary integers

This is surely a duplicate, but I was not able to find an answer to the following question.
Let's consider the decimal integer 14. We can obtain its binary representation, 1110, using e.g. the divide-by-2 method (% represents the modulus operator):
14 % 2 = 0
7 % 2 = 1
3 % 2 = 1
1 % 2 = 1
but how do computers convert decimal integers to binary?
The above method would require the computer to perform arithmetic and, as far as I understand, because arithmetic is performed on binary numbers, it seems we would be back dealing with the same issue.
I suppose that any other algorithmic method would suffer from the same problem. How do computers convert decimal integers to binary?
Update: following a discussion with Code-Apprentice (see the comments under his answer), here is a reformulation of the question for two cases of interest:
a) How is the conversion to binary performed when the user types integers on a keyboard?
b) Given a mathematical operation in a programming language, say 12 / 3, how is the conversion from decimal to binary done when running the program, so that the computer can do the arithmetic?
There is only binary
The computer stores all data as binary. It does not convert from decimal to binary since binary is its native language. When the computer displays a number it will convert from the binary representation to any base, which by default is decimal.
A key concept to understand here is the difference between the computer's internal storage and the representation as characters on your monitor. If you want to display a number as binary, you can write an algorithm in code to do the exact steps that you performed by hand. You then print out the characters 1 and 0 as calculated by the algorithm.
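A sketch of such an algorithm: the divide-by-2 arithmetic happens in the machine's native binary; only the output characters are "binary" in the human-readable sense:

```python
def to_binary_string(n):
    # Render a non-negative integer as a string of '0'/'1' characters.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append("1" if n % 2 else "0")  # least significant bit first
        n //= 2
    return "".join(reversed(digits))          # most significant bit first

print(to_binary_string(14))  # prints 1110
```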
Indeed, as you mention in one of your comments, if the compiler has a small look-up table associating decimal digits with binary integers, then the conversion can be done with simple binary multiplications and additions.
The look-up table has to contain binary associations for the single decimal digits and for decimal ten, hundred, thousand, etc.
Decimal 14 can be transformed to binary by multiplying binary 1 by binary 1010 (decimal ten) and adding binary 100 (decimal 4).
Decimal 149 would be binary 1 multiplied by binary 1100100 (decimal hundred), plus binary 100 (decimal 4) multiplied by binary 1010 (decimal ten), plus binary 1001 (decimal 9) at the end.
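That look-up idea can be sketched as follows (a hypothetical table; the per-digit multiply-and-add is written in the equivalent form ((1*10)+4)*10+9 rather than spelling out the powers of ten):

```python
# Hypothetical look-up table: decimal digit characters -> binary values.
DIGIT = {"0": 0b0, "1": 0b1, "2": 0b10, "3": 0b11, "4": 0b100,
         "5": 0b101, "6": 0b110, "7": 0b111, "8": 0b1000, "9": 0b1001}
TEN = 0b1010  # the binary value of decimal ten

def decimal_string_to_int(s):
    # Only binary multiplications and additions are used here.
    value = 0b0
    for ch in s:
        value = value * TEN + DIGIT[ch]
    return value

print(bin(decimal_string_to_int("14")))   # 0b1110
print(bin(decimal_string_to_int("149")))  # 0b10010101
```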
Decimals are misunderstood in a program
Let's take an example from the C language:
int x = 14;
Here 14 is not a decimal value; it is the two characters 1 and 4 written together as 14.
We know that characters are just representations of some binary value:
1 is 00110001
4 is 00110100
The full ASCII table for characters can be seen here.
So 14 in character form is actually written as the binary 00110001 00110100.
00110001 00110100 => this binary is made to look like 14 on the computer screen (so we think of it as decimal).
We know the number 14 eventually should become 14 = 1110,
or we can pad it with zeros to get
14 = 00001110.
For this to happen, the computer/processor only needs to do a binary-to-binary conversion, i.e.
00110001 00110100 to 00001110,
and we are all set.
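That binary-to-binary step can be sketched directly: subtract the ASCII code of '0' (00110000) from each character byte and accumulate:

```python
# The bytes behind the characters "14" are the ASCII codes of '1' and '4'.
raw = b"14"                      # 00110001 00110100
value = 0
for byte in raw:
    digit = byte - 0b00110000    # subtract ASCII '0' to recover the digit value
    value = value * 0b1010 + digit

print(bin(value))  # 0b1110, i.e. 00001110 when padded to 8 bits
```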

How to do bit insertion

How do I insert bits at a certain position?
For example, if I have an integer 115 (i.e. b1110011).
And I want to insert the integer 3 (i.e. b11) at position 3 from the right. In this case, let's assume that I know the size of the binary number that I want to insert; for example, I know that 3 is 2 binary digits long.
Therefore the resulting binary will be 475 (i.e. b111011011)
I have a solution in mind, but it seems complicated. It involves first shifting the bits 2 places to the left to make room for the new bits, then setting the rightmost 2+position bits to zero. That gives me 111000000.
Then I OR the result with the rightmost 2+position bits of the original number, with the top two positions of that part changed to the bits that I want to insert (i.e. 11011).
Therefore in the end I have 111000000 | 11011 == 111011011.
But I think my solution is too complicated. Thus, is there any better way to do this operation?
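Your approach is essentially the standard one; here it is packaged as a general helper (the name and signature are my own invention):

```python
def insert_bits(n, bits, width, pos):
    # Insert `width` bits of `bits` into `n`, `pos` places from the right.
    low = n & ((1 << pos) - 1)          # the bits below the insertion point (011)
    high = (n >> pos) << (pos + width)  # the upper bits, shifted out of the way
    return high | (bits << pos) | low

result = insert_bits(0b1110011, 0b11, 2, 3)
print(result, bin(result))  # 475 0b111011011
```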

How to know the number of digits to the right of the decimal point in a float?

I'm preparing some mapping sheets for migrating an existing MySQL database to a new Oracle one. Some of the data are defined as FLOAT, but I would like to know exactly the length of the value in the column having the most decimals after the point. This would help me restrict the data type instead of declaring it as a plain NUMBER.
Is there an easy way to do this in MySQL? I've tried with a regular expression, but it does not match all values (I've found a value like 7.34397493274, yet the following regex does not retrieve it):
SELECT column
from `db`.`table`
where column REGEXP '^-?[0-9]+\.[0-9]{7,}$' =1;
Thanks
You are going down the wrong track. There is no convenient answer to "how many digits are to the right of the decimal point in a floating-point number". There is an answer to the "precision" of a floating-point number: that is 23 bits of mantissa for a FLOAT. The relationship between the precision and the number of digits to the right of the decimal point depends on the scale factor.
You might want to review the documentation entitled Problems with Floating Point Numbers.
More concretely, the problem is that a particular number might be represented as:
1.200000000001
or
1.199999999997
(I'm not saying these are actual representations, just examples.) What value would you give for the numbers to the right of the decimal point? By representing the values as floats, the database has lost this information.
Instead, you have several options:
Just use NUMBER, which is generally a reasonable type.
Use BINARY_FLOAT, which is Oracle's equivalent of MySQL's FLOAT.
Understand the application to figure out how many decimal points are actually needed.
Play games with the representation, looking for strings of zeros and nines (say four in a row) and assume they are not significant.
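The "lost information" point is easy to demonstrate in any IEEE-754 setting (Python here; MySQL FLOAT and DOUBLE behave the same way):

```python
# 1.2 has no exact binary representation, so once a value is stored as a
# float there is no well-defined count of digits after the decimal point.
print(format(1.2, ".20f"))  # 1.19999999999999995559
```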
If you are looking to find the length after the decimal point, then in MySQL you can use the SUBSTRING_INDEX and LENGTH functions together:
mysql> select length(substring_index('7.34397493274','.',-1)) as len;
+-----+
| len |
+-----+
|  11 |
+-----+
1 row in set (0.00 sec)
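The same measurement sketched in Python, with the same caveat as above: it measures the string form the database happens to produce, not a property of the stored float:

```python
def decimals_after_point(s):
    # Equivalent of LENGTH(SUBSTRING_INDEX(s, '.', -1)) in MySQL.
    return len(s.rsplit(".", 1)[-1])

print(decimals_after_point("7.34397493274"))  # prints 11
```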

32 bit mantissa representation of 2455.1152

If I wanted to represent -2455.1152 as 32 bits, I know the first bit is 1 (the negative sign), and I can get 2455 to binary as 10010010111, but I'm not too sure about the fractional part. .1152 could have an infinite number of fractional bits. Would that mean that only up to 23 bits are used to represent the fractional part? So since 2455 uses 11 bits, bits 11 to 0 are for the fractional part?
For the binary representation I have 10010010111.00011101001. The exponent is 10. 10+127=137. 137 in binary is 10001001.
full representation would be:
1 10001001 1001001011100011101001
is that right?
It looks like you are trying to devise your own floating-point representation, but you used a fixed-point tag, so I will explain how to convert your real number to a traditional fixed-point representation. First, you need to decide how many bits will be used to represent the fractional part of the number. Just for the sake of discussion, let's say that 16 bits will be used for the fractional part, 15 bits for the integer part, and one bit is reserved for the sign. Now, multiply the absolute value of the real number by 2^16: 2455.1152 * 65536 = 160898429.747. You can either round to the nearest integer or just truncate. Suppose we just truncate, to 160898429. Converting this to hexadecimal we get 0x09971D7D. To make this negative, invert the bits and add 1 to the LSB; the final result is 0xF668E283.
To convert back to a real number, just reverse the process. Take the absolute value of the fixed-point representation and divide by 2^16. In this case we would find that the fixed-point representation is equal to the real number -2455.1151886. The accuracy can be improved by rounding instead of truncating when converting from real to fixed-point, or by allowing more bits for the fractional part.
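The whole procedure sketched in Python (the function names are mine; a 16-fraction-bit layout in a 32-bit two's-complement word, exactly as described above):

```python
FRAC_BITS = 16   # bits for the fractional part
WORD = 32        # total word size, including the sign bit

def to_fixed(x):
    # Real number -> 32-bit two's-complement fixed-point word (truncating).
    raw = int(abs(x) * (1 << FRAC_BITS))   # scale by 2^16 and truncate
    if x < 0:
        raw = (1 << WORD) - raw            # two's-complement negation
    return raw

def from_fixed(raw):
    # Inverse: 32-bit fixed-point word -> real number.
    if raw >= 1 << (WORD - 1):             # sign bit set
        return -((1 << WORD) - raw) / (1 << FRAC_BITS)
    return raw / (1 << FRAC_BITS)

word = to_fixed(-2455.1152)
print(hex(word))         # 0xf668e283
print(from_fixed(word))  # close to -2455.1152, short by the truncation error
```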