If I want to quickly see the result of code that does shifting (left/right), I usually write down the binary representation and do the shifting.
But for shifts of e.g. 4, isn't it actually faster to write down the hex representation and move the characters/digits one place to the left/right?
Are there any other tricks for this?
Essentially, shifting by 4 bits moves the number by one whole hex digit (a right shift drops the last digit, a left shift appends a zero), because each hex digit is 4 bits in binary. So shifting by 8 bits moves it by 2 hex digits, and so on.
If you wanted, you could also do the same type of shift with octal, although each digit would cover 3 bits instead of 4.
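For example, here's a quick sanity check of the digit trick in JavaScript (the values are arbitrary, just for illustration):

var x = 0x1A3;
console.log((x << 4).toString(16)); // "1a30" - one hex digit appended
console.log((x >> 4).toString(16)); // "1a" - one hex digit dropped
console.log((0o153 << 3).toString(8)); // "1530" - one octal digit appended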
Alternatively, if you wish to see the result in decimal rather than octal or hex, you can view shifting as multiplication and division.
With shifting left, you can use x1 << x2 as a form of multiplication by 2^x2.
With shifting right, you can use x1 >> x2 as a form of division by 2^x2. Keep in mind that this matches ordinary integer division only for non-negative numbers: an arithmetic right shift of a negative number rounds toward negative infinity rather than toward zero.
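A quick sketch of both points in JavaScript (arbitrary values, just to illustrate the equivalence and the caveat about negatives):

var x = 37;
console.log(x << 3); // 296, i.e. 37 * 8
console.log(x >> 2, Math.floor(x / 4)); // 9 9
console.log(-7 >> 1); // -4: shift rounds toward negative infinity...
console.log(Math.trunc(-7 / 2)); // ...while division truncated toward zero gives -3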
How can we simplify (a&b)*(c&b)?
where '&' is bitwise AND and '*' represents the product.
Or find b in [L,R] such that (a&b)*(c&b) is maximum?
Assume unsigned. Look at a & mask: a bit will be set in the result only if it is set in both a and the mask. A zero in the mask will never make the result larger, but it can make it smaller if the corresponding bit in a was set.
so:
(a&b)*(c&b) will never be larger than a*c, which is achieved when all bits in b are set.
If b should also be as small as possible, you can clear all the bits whose removal does not decrease either a&b or c&b, i.e. keep exactly the bits set in at least one of a and c:
b = a | c
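A brute-force check of both claims in JavaScript (the 4-bit values here are arbitrary, chosen so that a | c is not all ones):

var a = 0b1010, c = 0b0110;
var best = -1, bestB = 0;
for (var b = 0; b < 16; b++) {
    var p = (a & b) * (c & b);
    if (p > best) { best = p; bestB = b; } // strict > keeps the smallest maximizer
}
console.log(best === a * c); // true: the maximum product is a*c
console.log(bestB === (a | c)); // true: the smallest b achieving it is a|c (0b1110)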
Can anyone explain this to me?
I don't get the first line. How does the first line make the content of $s3 four times bigger?
I know it shifts $s3 to the left, but how does shifting $s3 left by 2 make it 4 times bigger?
Consider decimals. E.g., shift 123₁₀ two positions left. You get two zeroes on the right: 12300₁₀. One position is equivalent to multiplying by 10₁₀; two positions are equivalent to 100₁₀.
Same with binary numbers. Shift 101₂ two positions left. You get two zeroes on the right: 10100₂. One position is equivalent to multiplying by 10₂, which is 2₁₀; two positions are equivalent to 100₂, which is 4₁₀.
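The same example as a quick check in JavaScript (0b101 is binary 5):

console.log((0b101 << 2).toString(2)); // "10100"
console.log(0b101 << 2); // 20, i.e. 5 * 4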
I use the following CSS rule to set the background color of a div:
div {
    background-color: rgba(96, 96, 96, .1);
}
In Google Chrome v42, in the 'Computed' tab of Developer Tools, I see this result: rgba(96, 96, 96, 0.0980392). I think it looks like some WebKit optimization...
In Firefox v36 the computed background color equals rgba(96, 96, 96, 0.1).
I've made a simple fiddle, http://jsfiddle.net/sergfry/c7Lzf5v2/ , that shows it in action.
So, can I prevent the opacity from changing in Google Chrome?
Thanks!
As stated by Quentin, this is an IEEE floating point issue.
Technically, 0.1 doesn't actually exist in binary floating point, simply due to the way binary works.
0.1 is one-tenth, or 1/10. To show it in binary, divide binary 1 by binary 1010 using binary long division.
If you carry out that division, you'll see that 0.1 in binary is 0.000110011001100... and it will keep repeating 0011 on the end to infinity.
Browsers will pick the closest representable value to 0.1 and use that as the opacity instead. Some will go over and some will go under.
Firefox, I would guess, is just showing the human-readable version, but in reality it's using a machine-level floating point value underneath.
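You can inspect the value JavaScript (and, presumably, the CSS engine) actually stores for the literal 0.1 by asking for more digits:

console.log((0.1).toPrecision(21)); // "0.100000000000000005551"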
As an example:
body {
    color: rgba(0,0,0,0.1); /* actually 0.0980392 */
    opacity: 0.1;           /* actually 0.100000001490116 */
}
Two completely different stored values for exactly the same decimal literal.
This floating point issue can be replicated elsewhere within browsers using other languages such as JavaScript. JavaScript numbers are always 64-bit floating point (which I believe CSS values are as well). This is more commonly known as double-precision floating point. PHP also uses double-precision floating points.
64-bit floating point numbers are, as you could guess, stored in 64 bits, where the number (the fraction) is stored in bits 0 to 51, the exponent in bits 52 to 62, and the sign in bit 63.
This causes problems down the line, as it means integers are only guaranteed exact up to 15 significant decimal digits, and at most 17 significant digits can be stored.
This means that numbers can round up very easily or may just not be stored correctly.
var x = 999999999999999; // x = 999999999999999
var y = 9999999999999999; // y = 10000000000000000
Floating point arithmetic can also be out of alignment by quite a lot in places. As shown above, 0.1 in decimal isn't actually 0.1 in binary but 0.000110011... and so on. This means some basic maths can come out completely wrong.
var x = 0.2 + 0.1; // x = 0.30000000000000004
You end up having to work around the system to get the number you actually want. This can be done by multiplying each number by 10 and then dividing the sum by 10 to get the intended result.
var x = (0.2 * 10 + 0.1 * 10) / 10; // x = 0.3
Precision in floating point is very difficult to get right, and it is even more difficult when there are multiple different implementations (or browsers) each trying to balance speed against displaying the information they're given correctly.
There is quite a lot of material on floating point and on what the CSS processor (or JS, as I expect the calculations will be much the same) may be trying to achieve:
Exploring Binary - Why 0.1 does not exist
Javascript Numbers
Wikipedia - IEEE floating point
Wikipedia - Double-precision floating point
I've implemented some functions according to the HSL->RGB and HSV->RGB algorithms.
They mostly work fine, but I'm not sure what the right thing to do is when a color component overflows as a result of the conversion.
E.g., the red component ends up being 1.2 whereas the allowed range is [0..1]. If I multiply that by 255 I will obviously get a value that is invalid in the RGB world.
What is the correct way of handling this -- truncating (if > 1 then set to 1) or wrapping around (if > 1 then subtract 1)?
It is not possible for the values R, G and B to come out of their range if you have properly implemented the standard algorithms and the inputs are in their ranges.
Which algorithms have you implemented?
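For reference, here is a minimal HSL->RGB sketch in JavaScript following the standard chroma-based formulation (h in [0, 360), s and l in [0, 1]); with in-range inputs every component stays within [0, 1] before scaling, so no clamping or wrapping is ever needed:

function hslToRgb(h, s, l) {
    var c = (1 - Math.abs(2 * l - 1)) * s; // chroma
    var x = c * (1 - Math.abs(((h / 60) % 2) - 1)); // second-largest component
    var m = l - c / 2; // lightness offset added to all channels
    var r, g, b;
    if (h < 60)       { r = c; g = x; b = 0; }
    else if (h < 120) { r = x; g = c; b = 0; }
    else if (h < 180) { r = 0; g = c; b = x; }
    else if (h < 240) { r = 0; g = x; b = c; }
    else if (h < 300) { r = x; g = 0; b = c; }
    else              { r = c; g = 0; b = x; }
    return [Math.round((r + m) * 255),
            Math.round((g + m) * 255),
            Math.round((b + m) * 255)];
}
console.log(hslToRgb(120, 1, 0.5)); // [0, 255, 0] - pure green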
overflow = cₙ ⊕ cₙ₋₁
I tried it with all four possible cases
                           cₙ   cₙ₋₁
-7 + 2       1001 + 0010    0    0
 7 + 2       0111 + 0010    0    1
-7 + (-2)    1001 + 1110    1    0
 7 + (-2)    0111 + 1110    1    1
and it seems to work, but can someone explain or prove why?
When adding two numbers with n bits, the result can have n+1 bits, which means that by using n bits for the result we cannot represent all of them.
Now, what is overflow? For a signed number, sign extension lets us extend the number by replicating the sign bit (MSB) above the MSB, and the two bits will be equal. So when this does not hold (the bit above the MSB of the full result is not equal to the MSB, something that can be detected before truncating the result to n bits), we say it's an overflow.
As in your example, we say that overflow occurs when adding two positive numbers gives a negative one (or two negative numbers give a positive result).
Also, check this answer by Thomas Pornin; I think he explained it very well.
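To see the rule in action, here is a small JavaScript sketch of 4-bit two's complement addition that computes the carry into and out of the sign bit and checks overflow = cₙ ⊕ cₙ₋₁ against the four cases from the question:

function addWithFlags(a, b) { // a, b are 4-bit patterns
    var cIn = (((a & 0x7) + (b & 0x7)) >> 3) & 1; // carry into bit 3 (the sign bit)
    var cOut = (((a & 0xF) + (b & 0xF)) >> 4) & 1; // carry out of bit 3
    return { sum: (a + b) & 0xF, overflow: cIn ^ cOut };
}
console.log(addWithFlags(0b1001, 0b0010).overflow); // -7 + 2    -> 0
console.log(addWithFlags(0b0111, 0b0010).overflow); //  7 + 2    -> 1 (result 1001 is -7)
console.log(addWithFlags(0b1001, 0b1110).overflow); // -7 + (-2) -> 1 (result 0111 is 7)
console.log(addWithFlags(0b0111, 0b1110).overflow); //  7 + (-2) -> 0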
It's been so long, I have only a vague understanding of what the notation is saying. I'd assume that 'c' is a carry flag, and 'n' is a negative flag. But then what is 'n-1'?
Anyway, I'm guessing your answer pertains to overflow occurring in either direction: from a negative number wrapping over into a positive, and a positive wrapping over into a negative.