I'm trying to find an explanation for the left operand operation.
How much is 100 * 2 << 20 in decimal?
It works like this: a << b means a * 2^b. Note that * binds tighter than <<, so the expression is evaluated as (100 * 2) << 20, i.e. 200 * 2^20 = 209715200.
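A quick way to confirm this in C (a minimal sketch; it just prints the evaluated expression):

#include <stdio.h>

int main(void) {
    /* '*' binds tighter than '<<', so this is (100 * 2) << 20 = 200 * 2^20 */
    printf("%d\n", 100 * 2 << 20);   /* prints 209715200 */
    return 0;
}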
Can anyone explain how to calculate the data offset?
Here's the example from the website (link below):
(1 <<((0xE6 & 0x07) + 1)) * 3 = 384 bytes.
0xE6 equals 230 in decimal, but how should I read the rest?
Example
& is bitwise AND. 0xE6 & 0x07 is 6.
<< is the left-shift operator. x << y is equivalent to x * 2^y. In this case, 1 << (6 + 1) is 128.
* is the multiplication operator. 3 * 128 is 384.
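The whole expression can be checked directly in C (a minimal sketch; 0xE6 is just the example byte from the question):

#include <stdio.h>

int main(void) {
    unsigned char reg = 0xE6;                         /* example byte from the question */
    unsigned offset = (1u << ((reg & 0x07) + 1)) * 3; /* 0xE6 & 0x07 = 6, 1 << 7 = 128, 128 * 3 = 384 */
    printf("%u\n", offset);                           /* prints 384 */
    return 0;
}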
I need to write a function that checks whether a number's binary representation contains no duplicated adjacent bits. For example, the function must return true if N equals 42, because bin(42) is 101010, but it must return false if N equals 45, because the binary representation of 45 is 101101, which contains the duplicate 11.
This allows only bits that alternate between 0 and 1, plus possibly leading zeros.
One way to check this is to look at (N << 2) | N. If N is of the correct form, then this is equal to N << 2, except for bit 0 or 1. We can compensate for that as follows:
int alternates(unsigned N) {   /* nonzero iff N's bits strictly alternate */
    unsigned N2 = N << 2;
    return (N | N2) <= (N2 | 2);
}
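A minimal self-contained test against the two numbers from the question (the function name alternates is my choice, not from the original answer):

#include <stdio.h>

int alternates(unsigned N) {   /* nonzero iff N's bits strictly alternate */
    unsigned N2 = N << 2;
    return (N | N2) <= (N2 | 2);
}

int main(void) {
    printf("%d\n", alternates(42));   /* 42 = 101010 -> prints 1 */
    printf("%d\n", alternates(45));   /* 45 = 101101 -> prints 0 */
    return 0;
}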
Suppose I have the following script, which constructs a symbolic array, A_known, and a symbolic vector x, and performs a matrix multiplication.
clc; clearvars
try
pkg load symbolic
catch
error('Symbolic package not available!');
end
syms V_l k s0 s_mean
N = 3;
% Generate left-hand-side square matrix
A_known = sym(zeros(N));
for hI = 1:N
A_known(hI, 1:hI) = exp(-(hI:-1:1)*k);
end
A_known = A_known./V_l;
% Generate x vector
x = sym('x', [N 1]);
x(1) = x(1) + s0*V_l;
% Matrix multiplication to give b vector
b = A_known*x
Suppose A_known was actually unknown. Is there a way to deduce it from b and x? If so, how?
Until now I only had the case where x was unknown, which can normally be solved via x = A \ b.
Mathematically it is possible to get a solution, but the system actually has infinitely many solutions.
Example
A = magic(5);        % the "unknown" matrix we want to recover
x = (1:5)';
b = A*x;
A_sol = b*pinv(x);   % one possible solution, via the pseudoinverse of x
The original matrix is
>> A
A =
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
but the recovered A_sol is a different (rank-one) matrix that still satisfies A_sol*x == b:
>> A_sol
A_sol =
3.1818 6.3636 9.5455 12.7273 15.9091
3.4545 6.9091 10.3636 13.8182 17.2727
4.4545 8.9091 13.3636 17.8182 22.2727
3.4545 6.9091 10.3636 13.8182 17.2727
3.1818 6.3636 9.5455 12.7273 15.9091
HEX Article
By this I mean,
In a program if I write this:
1111
I mean 15. Likewise, if I write this:
0xF
I also mean 15. I am not entirely sure how the process is carried out in the chip, and I only vaguely recall something about flagging, but in essence the compiler will calculate
2^3 + 2^2 + 2^1 + 2^0 = 15
Will 0xF be converted to "1111" before this calculation, or is it simply defined somewhere that hex F represents 15?
Or is it simply 16^0? -- which obviously is not 15.
I cannot find any helpful article that explains conversion from hex directly to decimal rather than first converting to binary.
Binary (base 2) is converted the way I did above (2^n, etc.). How is hex (base 16) converted? Is there an analogous system, like base 2, for base 16?
I found in an article:
0x1234 = 1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4660
But what if the number were just 0xF; does that represent F in the 0th position? I cannot find a straightforward example.
There are sixteen hexadecimal digits, 0 through 9 and A through F.
Digits 0 through 9 are the same in hex and in decimal.
0xA is 10 in decimal.
0xB is 11.
0xC is 12.
0xD is 13.
0xE is 14.
0xF is 15.
For example:
0xA34F = 10 * 16^3 + 3 * 16^2 + 4 * 16^1 + 15 * 16^0 = 41807
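To the compiler the base of the literal only affects the source text; the same integer value is stored either way. A minimal C sketch:

#include <stdio.h>

int main(void) {
    /* hex and decimal literals denote the same values */
    printf("%d\n", 0xF);       /* prints 15 */
    printf("%d\n", 0xA34F);    /* prints 41807 = 10*16^3 + 3*16^2 + 4*16 + 15 */
    return 0;
}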
I have a number with binary representation 0000abcd.
How do I convert it to 0a0b0c0d with the smallest number of operations?
How do I convert 0a0b0c0d back to 0000abcd?
I was searching for a solution here:
http://graphics.stanford.edu/~seander/bithacks.html and others.
Actually, the problem is a bit more general than described.
Given a first number a₁b₁c₁d₁a₂b₂c₂d₂ and a second number a₃a₄b₃b₄c₃c₄d₃d₄:
If a₁ and a₂ are both 0, then clear both a₃ and a₄; if a₃ and a₄ are both 0, then clear both a₁ and a₂; etc.
My solution:
a₁b₁c₁d₁a₂b₂c₂d₂
OR 0 0 0 0 a₁b₁c₁d₁ ( a₁b₁c₁d₁a₂b₂c₂d₂ >> 4)
----------------
0 0 0 0 a b c d
? (magic transformation)
? ? ? ? ? ? ? ?
----------------
0 a 0 b 0 c 0 d
OR a 0 b 0 c 0 d 0 (0 a 0 b 0 c 0 d << 1)
----------------
a a b b c c d d
AND a₃a₄b₃b₄c₃c₄d₃d₄
----------------
A₃A₄B₃B₄C₃C₄D₃D₄ (clear bits)
UPDATED (thanks to @AShelly):
x = a₁b₁c₁d₁a₂b₂c₂d₂
x = (x | x >> 4) & 0x0F   // 0000abcd, where a = a₁|a₂, etc.
x = (x | x << 2) & 0x33   // 00ab00cd
x = (x | x << 1) & 0x55   // 0a0b0c0d
x = (x | x << 1)          // aabbccdd

y = a₃a₄b₃b₄c₃c₄d₃d₄
y = (y | y >> 1) & 0x55   // 0A0B0C0D, where A = a₃|a₄, etc.
y = (y | y >> 1) & 0x33   // 00AB00CD
y = (y | y >> 2) & 0x0F   // 0000ABCD
y = (y | y << 4)          // ABCDABCD
The same steps work for 32-bit values with the constants 0x0F0F0F0F, 0x33333333 and 0x55555555 (and with constants twice as long for 64-bit).
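For reference, here is a runnable C sketch of the 8-bit version of these steps, including the final masking; the function names and test values are mine, not from the original post:

#include <stdio.h>

/* OR the two nibbles of a1b1c1d1a2b2c2d2 together and duplicate each bit: aabbccdd */
unsigned expand_x(unsigned x) {
    x = (x | x >> 4) & 0x0F;   /* 0000abcd, a = a1|a2, etc. */
    x = (x | x << 2) & 0x33;   /* 00ab00cd */
    x = (x | x << 1) & 0x55;   /* 0a0b0c0d */
    return x | x << 1;         /* aabbccdd */
}

/* OR each bit pair of a3a4b3b4c3c4d3d4 and repeat the nibble: ABCDABCD, A = a3|a4, etc. */
unsigned expand_y(unsigned y) {
    y = (y | y >> 1) & 0x55;   /* 0A0B0C0D */
    y = (y | y >> 1) & 0x33;   /* 00AB00CD */
    y = (y | y >> 2) & 0x0F;   /* 0000ABCD */
    return y | y << 4;         /* ABCDABCD */
}

int main(void) {
    unsigned first  = 0xD2;    /* a1b1c1d1 a2b2c2d2 = 1101 0010 */
    unsigned second = 0x3C;    /* a3a4b3b4 c3c4d3d4 = 0011 1100 */
    /* clear the bits of each number where the corresponding pair in the other is all zero */
    printf("%02X %02X\n", first & expand_y(second), second & expand_x(first));   /* prints 42 3C */
    return 0;
}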
If you're looking for the smallest number of operations, use a look-up table.
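For the 4-bit case that table is tiny: a 16-entry array indexed by the nibble. A minimal sketch (the table name and values are my own illustration):

#include <stdio.h>

/* spread4[n] maps 0000abcd to 0a0b0c0d for every 4-bit value n */
static const unsigned char spread4[16] = {
    0x00, 0x01, 0x04, 0x05, 0x10, 0x11, 0x14, 0x15,
    0x40, 0x41, 0x44, 0x45, 0x50, 0x51, 0x54, 0x55
};

int main(void) {
    printf("%02X\n", spread4[0x0B]);   /* 1011 -> 01000101, prints 45 */
    return 0;
}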
I have a number with binary representation 0000abcd. How do I convert it to 0a0b0c0d with the smallest number of operations?
Isn't this exactly "Interleave bits of X and Y" where Y is 0? Bit Twiddling Hacks has Multiple Solutions that don't use a lookup table.
How do I convert 0a0b0c0d back to 0000abcd?
See "How to de-interleave bits (UnMortonizing?)"
You can't do it in one go; you have to shift the bits on a per-bit basis.
Pseudo code:
X1 = a₁b₁c₁d₁
X2 = a₂b₂c₂d₂
Bm = 1 0 0 0                  // bit mask, starts at the highest bit
Result = 0
while (Bm != 0)
{
    Result = Result shl 2                // make room for the next pair of bits
    if (X1 and Bm) Result = Result or 2  // bit from the first number
    if (X2 and Bm) Result = Result or 1  // bit from the second number
    Bm = Bm shr 1
}
As a result you will get a₁a₂b₁b₂c₁c₂d₁d₂.
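A runnable C version of the same loop (a sketch under the same assumptions; the two 4-bit inputs sit in the low nibbles of X1 and X2):

#include <stdio.h>

/* interleave the low 4 bits of X1 and X2 into a1a2b1b2c1c2d1d2 */
unsigned interleave4(unsigned X1, unsigned X2) {
    unsigned result = 0;
    for (unsigned bm = 0x8; bm != 0; bm >>= 1) {
        result <<= 2;                 /* make room for the next pair */
        if (X1 & bm) result |= 2;     /* bit from the first number */
        if (X2 & bm) result |= 1;     /* bit from the second number */
    }
    return result;
}

int main(void) {
    printf("%02X\n", interleave4(0xD, 0x2));   /* 1101 and 0010 -> 10100110, prints A6 */
    return 0;
}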
I think it is not possible (without a lookup table) to do it in fewer operations using binary arithmetic on the x86 or x64 processor architecture. Correct me if I'm mistaken, but your problem is about moving bits: given the abcd bits, you want to get the 0a0b0c0d bits in one operation. The problem starts when you look at how far the 'a', 'b', 'c' and 'd' bits have to travel:
'a' was 4-th, became 7-th, distance travelled 3 bits
'b' was 3-rd, became 5-th, distance travelled 2 bits
'c' was 2-nd, became 3-rd, distance travelled 1 bit
'd' was 1-st, became 1-st, distance travelled 0 bits
There is no single processor instruction that will move each of these bits dynamically by a different distance. However, if you can get different representations of the same input for free, for example because you have precomputed several values that you reuse in a loop, then it may be possible to gain some optimization; this is the effect you get when using additional knowledge about the topology. You just have to choose whether it will be:
[4 cycles, n^0 memory]
[2 cycles, n^1 memory]
[1 cycle , n^2 memory]