I'm stumped on this one. I have some functioning VBA code in Access that looks like this.
If (intFrontLoaded And 2) > 0 Then boolFrontLoad(1) = True Else boolFrontLoad(1) = False
If (intFrontLoaded And 4) > 0 Then boolFrontLoad(2) = True Else boolFrontLoad(2) = False
If (intFrontLoaded And 8) > 0 Then boolFrontLoad(3) = True Else boolFrontLoad(3) = False
If (intFrontLoaded And 16) > 0 Then boolFrontLoad(4) = True Else boolFrontLoad(4) = False
If (intFrontLoaded And 32) > 0 Then boolFrontLoad(5) = True Else boolFrontLoad(5) = False
If (intFrontLoaded And 64) > 0 Then boolFrontLoad(6) = True Else boolFrontLoad(6) = False
I'm trying to figure out how the (intFrontLoaded And X) > 0 test works.
I know what it does; I'm trying to figure out how. For example:
If intFrontLoaded = 14 then boolFrontLoad(1), (2) and (3) will be true.
If intFrontLoaded = 28 then boolFrontLoad(2), (3) and (4) will be true.
I understand that 2+4+8 = 14 and 4+8+16 = 28, but how does (intFrontLoaded And X) > 0 do the calculation?
The And keyword in this context is a bitwise AND operator. The test is checking for a single flag bit. Let's use your example of intFrontLoaded = 14 with If (intFrontLoaded And 4) > 0 Then.
14 as bit flags is this: 0000 0000 0000 1110
4 is this: 0000 0000 0000 0100
The result of And is every bit that is set in both operands. In the example above, that is only the "flag" bit, 4. So the result of the And operation is 4.
Now plug that back into the expression:
If 4 > 0 Then
So, it executes the "true" condition. If you'll notice, all of the tests are powers of 2. This is because, when represented in binary, each of them has only a single bit set.
Basically, intFrontLoaded is storing a single boolean value for each of the bits being tested. This was much more common in early computing when memory was at a premium and using all 16 bits to store a boolean was considered wasteful.
Note that you can simplify this (VBA coerces any nonzero value to True when assigning to a Boolean):
boolFrontLoad(1) = intFrontLoaded And 2
boolFrontLoad(2) = intFrontLoaded And 4
boolFrontLoad(3) = intFrontLoaded And 8
boolFrontLoad(4) = intFrontLoaded And 16
boolFrontLoad(5) = intFrontLoaded And 32
boolFrontLoad(6) = intFrontLoaded And 64
The And operator is a bitwise AND operation: it compares the bits of its two operands and returns a value with a bit set wherever both operands have that bit set.
That said, your code would be much clearer written as:
boolFrontLoad(1) = (intFrontLoaded And 2) > 0
boolFrontLoad(2) = (intFrontLoaded And 4) > 0
boolFrontLoad(3) = (intFrontLoaded And 8) > 0
boolFrontLoad(4) = (intFrontLoaded And 16) > 0
boolFrontLoad(5) = (intFrontLoaded And 32) > 0
boolFrontLoad(6) = (intFrontLoaded And 64) > 0
This is called a bitwise operation. The logical And is performed bit by bit between intFrontLoaded and X. When X is a power of 2, say 2^a, its binary representation is all zeros except for a 1 in the (a+1)-th position (numbering the bits from right to left).
Therefore, intFrontLoaded And 4 checks whether the third bit in intFrontLoaded is set. If the result is non-zero, the If test succeeds.
In your code intFrontLoaded is used as a bit-set, that is, a set of flags where each bit represents a flag for some boolean condition.
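To make the flag test concrete, here is a small Python sketch of the same unpacking logic (Python and the function name are mine, purely for illustration):

# Each mask is a power of two, so (value & mask) is nonzero exactly
# when that single bit is set in value -- the same test the VBA runs.
def unpack_flags(value, num_flags=6):
    # masks 2, 4, 8, 16, 32, 64, matching boolFrontLoad(1)..(6)
    return [(value & (1 << (i + 1))) > 0 for i in range(num_flags)]

print(unpack_flags(14))  # [True, True, True, False, False, False]
print(unpack_flags(28))  # [False, True, True, True, False, False]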
Related
I have a series of values that are each being stored as UInt16. Each of these numbers represents a bitmask - these numbers are commands that have been sent to a microprocessor telling it which pins to set high or low. I would like to parse this array of commands to find out which pins were being set high each time, in a way that is easier to analyse later.
Consider the example value 0x3c00, which in decimal is 15360 and in binary is 0011110000000000. Currently I have the following function
function read_message(hex_rep)
    return findall.(x -> x .== '1', bitstring(hex_rep))
end
Which gets called on every element of the array of UInt16. Is there a better/more efficient way of doing this?
The best approach probably depends on how you want to handle vectors of hex-values. But here's an approach for processing a single hex which is much faster than the one in the OP:
function readmsg(x::UInt16)
    N = count_ones(x)
    inds = Vector{Int}(undef, N)
    if N == 0
        return inds
    end
    k = trailing_zeros(x)
    x >>= k + 1
    i = N - 1
    inds[N] = n = 16 - k
    while i >= 1
        (x, r) = divrem(x, 0x2)
        n -= 1
        if r == 1
            inds[i] = n
            i -= 1
        end
    end
    return inds
end
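For reference, the indices readmsg produces are the 1-based positions of the set bits counted from the most significant of the 16 bits, which is exactly what findall over bitstring returns. A rough Python restatement of that specification (illustration only, not the author's code):

def set_bit_positions(x, width=16):
    # 1-based positions of set bits, counted from the most significant
    # bit, matching findall(x -> x == '1', bitstring(x)) in Julia
    return [i + 1 for i, c in enumerate(format(x, f"0{width}b")) if c == '1']

print(set_bit_positions(0x3C00))  # [3, 4, 5, 6]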
I can suggest padding your vector into a Vector{UInt64} and using that to manually construct a BitVector. The following should mostly work (even for input element types other than UInt16), but I haven't taken into account any specific endianness you might want to respect:
julia> function read_messages(msgs)
           bytes = reinterpret(UInt8, msgs)
           N = length(bytes)
           padded_bytes = zeros(UInt8, sizeof(UInt64) * cld(N, sizeof(UInt64)))
           copyto!(padded_bytes, bytes)
           b = BitVector(undef, N * 8)
           b.chunks = reinterpret(UInt64, padded_bytes)
           return b
       end
read_messages (generic function with 1 method)
julia> msgs
2-element Vector{UInt16}:
0x3c00
0x8000
julia> read_messages(msgs)
32-element BitVector:
0
0
0
0
0
0
0
0
0
⋮
0
0
0
0
0
0
0
1
julia> read_messages(msgs) |> findall
5-element Vector{Int64}:
11
12
13
14
32
julia> bitstring.(msgs)
2-element Vector{String}:
"0011110000000000"
"1000000000000000"
(Getting rid of the unnecessary allocation of the undef bit vector would require some black magic, I believe.)
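For comparison, the same reinterpret-the-bytes idea can be sketched in Python with numpy (an illustrative analogue, not the author's code; like the answer above, it assumes a little-endian host):

import numpy as np

msgs = np.array([0x3C00, 0x8000], dtype=np.uint16)

# View the 16-bit words as bytes, then unpack each byte least significant
# bit first; this reproduces the bit layout of the BitVector above.
bits = np.unpackbits(msgs.view(np.uint8), bitorder="little")
print(np.flatnonzero(bits) + 1)  # [11 12 13 14 32]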
There is a simple trick to convert a number to 1 or -1.
Just raise it to the power of 0.
So:
4^0 = 1
-4^0 = -1
However, in AS3:
Math.pow( 4, 0); // = 1
Math.pow(-4, 0); // = 1
Is there a way to get the right answer without an if else?
This could be done bitwise.
Given the number n (avg time: 0.0065ms):
1 + 2 * (n >> 31);
Or slightly slower (avg time: 0.0095ms):
(n < 0 && -1) || 1;
However, Marty's solution is the fastest (avg time: 0.0055ms):
n < 0 ? -1 : 1;
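The bitwise version works because an arithmetic right shift by 31 replicates the sign bit of a 32-bit integer: n >> 31 is 0 for non-negative n and -1 for negative n, so 1 + 2 * (n >> 31) yields 1 or -1. A quick check in Python, which also shifts negative integers arithmetically (illustration only):

def sign_no_branch(n):
    # n >> 31 is 0 when n >= 0 and -1 when n < 0 (for 32-bit-range n),
    # so the expression evaluates to 1 or -1 without a branch.
    return 1 + 2 * (n >> 31)

print([sign_no_branch(n) for n in (-4, -1, 0, 5)])  # [-1, -1, 1, 1]

Note that 0 maps to 1, the same behavior as the ternary version.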
Not sure if without an if/else includes the ternary operator in your eyes, but if not:
// Where x is your input.
var r:int = x < 0 ? -1 : 1;
Will be more efficient than Math.pow() anyway.
There are 2 cases given in the question, and we have to answer on that basis.
Cases:
if (NOT(value>=1) OR NOT(value<=10))
if (NOT(value>=1) AND NOT(value<=10))
Now the questions are:
Which case would you use if the given value may be either 1 or 10?
Which case would you use if the given value must be 1 or 10?
The problem is that whether I take 1 or 10, I get the same answer in both cases: the condition evaluates to if(0), so the if statement is false either way.
(NOT(value>=1) OR NOT(value<=10)) = (value < 1) OR (value > 10)
This case is true for [-Infinity ... 0] or [11 ... +Infinity].
It is false for 1 or 10.
(NOT(value>=1) AND NOT(value<=10)) = (value < 1) AND (value > 10)
This case is always false, as no number can be smaller than 1 and bigger than 10 at the same time.
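The rewrites above just negate each comparison (NOT(value >= 1) is value < 1); by De Morgan's laws, the OR version is equivalent to NOT(value >= 1 AND value <= 10), i.e. "value is outside [1, 10]". A brute-force check in Python (illustration only):

# Verify both simplifications over a range of values.
for value in range(-5, 16):
    case1 = (not (value >= 1)) or (not (value <= 10))
    case2 = (not (value >= 1)) and (not (value <= 10))
    assert case1 == (value < 1 or value > 10)
    assert case2 == (value < 1 and value > 10)
    assert not case2  # never true
print("both identities hold")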
How can I represent an integer as binary in Lua,
so that I can print 7 as 111?
You write a function to do this.
num = 7

function toBits(num)
    -- returns a table of bits, least significant first.
    local t = {} -- will contain the bits
    while num > 0 do
        local rest = math.fmod(num, 2)
        t[#t+1] = rest
        num = (num - rest) / 2
    end
    return t
end

bits = toBits(num)
print(table.concat(bits))
In Lua 5.2 you already have bitwise functions that can help you (bit32).
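For comparison, the same repeated divide-by-two idea looks like this in Python (an illustrative sketch mirroring the Lua function above):

def to_bits(num):
    # Collect remainders of repeated division by 2,
    # least significant bit first, same as toBits above.
    bits = []
    while num > 0:
        num, rest = divmod(num, 2)
        bits.append(rest)
    return bits

print(to_bits(7))  # [1, 1, 1]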
Here is the most-significant-first version, with optional leading 0 padding to a specified number of bits:
function toBits(num, bits)
    -- returns a table of bits, most significant first.
    bits = bits or math.max(1, select(2, math.frexp(num)))
    local t = {} -- will contain the bits
    for b = bits, 1, -1 do
        t[b] = math.fmod(num, 2)
        num = math.floor((num - t[b]) / 2)
    end
    return t
end
There's a faster way to do this that takes advantage of string.format, which can convert numbers to base 8 (octal). It's trivial to then convert base 8 to binary.
-- create lookup table for octal to binary
oct2bin = {
    ['0'] = '000',
    ['1'] = '001',
    ['2'] = '010',
    ['3'] = '011',
    ['4'] = '100',
    ['5'] = '101',
    ['6'] = '110',
    ['7'] = '111'
}

function getOct2bin(a) return oct2bin[a] end

function convertBin(n)
    local s = string.format('%o', n)
    s = s:gsub('.', getOct2bin)
    return s
end
If you want to keep them all the same size, then do
s = string.format('%.22o', n)
That gets you 66 bits, which is two extra bits, since octal works in groups of 3 bits and 64 isn't divisible by 3; the extra bits show up as leading zeros. If you want 33 bits (for 32-bit numbers), change the 22 to 11.
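The octal trick carries over to other languages too. A small Python check of the idea (illustration only; in Python, format(n, 'b') makes it unnecessary):

# Map each octal digit to its 3-bit expansion, as oct2bin does above.
OCT2BIN = {str(d): format(d, '03b') for d in range(8)}

def convert_bin(n):
    # Format as octal, then expand each octal digit to 3 bits.
    return ''.join(OCT2BIN[c] for c in format(n, 'o'))

print(convert_bin(45))  # 101101
assert int(convert_bin(45), 2) == 45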
If you have the BitOp library, which is available by default in LuaJIT, then you can do this:
function convertBin(n)
    local t = {}
    for i = 1, 32 do
        n = bit.rol(n, 1)
        table.insert(t, bit.band(n, 1))
    end
    return table.concat(t)
end
But note this only does the first 32 bits! If your number is larger than 2^32, the result won't be correct.
function bits(num)
    local t = {}
    while num > 0 do
        local rest = num % 2
        table.insert(t, 1, rest)
        num = (num - rest) / 2
    end
    return table.concat(t)
end
Posted since nobody else used table.insert, though it's useful here: inserting at position 1 builds the result most significant bit first, so no reversal is needed.
Here is a function inspired by the accepted answer, with corrected syntax, which returns a table of bits written from right to left.
num = 255
bits = 8

function toBits(num, bits)
    -- returns a table of bits
    local t = {} -- will contain the bits
    for b = bits, 1, -1 do
        local rest = math.fmod(num, 2)
        t[b] = rest
        num = (num - rest) / 2
    end
    if num == 0 then
        return t
    else
        return {'Not enough bits to represent this number'}
    end
end

bits = toBits(num, bits)
print(table.concat(bits))
>>11111111
function reverse(t)
    local nt = {} -- new table
    local size = #t + 1
    for k, v in ipairs(t) do
        nt[size - k] = v
    end
    return nt
end

function tobits(num)
    local t = {}
    while num > 0 do
        local rest = num % 2
        t[#t+1] = rest
        num = (num - rest) / 2
    end
    t = reverse(t)
    return table.concat(t)
end
print(tobits(7))
# 111
print(tobits(33))
# 100001
print(tobits(20))
# 10100
local function tobinary( number )
    local str = ""
    if number == 0 then
        return "0" -- return a string, like every other path
    elseif number < 0 then
        number = - number
        str = "-"
    end
    local power = 0
    while true do
        if 2^power > number then break end
        power = power + 1
    end
    local dot = true
    while true do
        power = power - 1
        if dot and power < 0 then
            str = str .. "."
            dot = false
        end
        if 2^power <= number then
            number = number - 2^power
            str = str .. "1"
        else
            str = str .. "0"
        end
        if number == 0 and power < 1 then break end
    end
    return str
end
It may seem more verbose, but it is actually faster than the other functions that rely on the math library. It works with any number: positive, negative, or fractional.
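Here is the same descending-powers-of-two approach sketched in Python for comparison (an illustration, not part of the original answer). It terminates for any float because floats are themselves finite binary fractions:

def to_binary(number):
    # Subtract descending powers of two, emitting one digit per power,
    # mirroring the Lua function above.
    if number == 0:
        return "0"
    out = "-" if number < 0 else ""
    number = abs(number)
    power = 0
    while 2.0 ** power <= number:  # find one past the highest set bit
        power += 1
    dot = True
    while True:
        power -= 1
        if dot and power < 0:
            out += "."
            dot = False
        if 2.0 ** power <= number:
            number -= 2.0 ** power
            out += "1"
        else:
            out += "0"
        if number == 0 and power < 1:
            break
    return out

print(to_binary(7))      # 111
print(to_binary(-2.25))  # -10.01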
local function tobits(num, str) -- tail call; needs Lua 5.3+ bitwise operators
    str = str or "B"
    if num == 0 then return str end
    return tobits(
        num >> 1, -- right shift
        ((num & 1) == 1 and "1" or "0") .. str)
end
This function uses a lookup table to print a binary number extracted from a hex representation, essentially all through string manipulation. Tested in Lua 5.1.
local bin_lookup = {
    ["0"] = "0000",
    ["1"] = "0001",
    ["2"] = "0010",
    ["3"] = "0011",
    ["4"] = "0100",
    ["5"] = "0101",
    ["6"] = "0110",
    ["7"] = "0111",
    ["8"] = "1000",
    ["9"] = "1001",
    ["A"] = "1010",
    ["B"] = "1011",
    ["C"] = "1100",
    ["D"] = "1101",
    ["E"] = "1110",
    ["F"] = "1111"
}

local print_binary = function(value)
    local hs = string.format("%.2X", value) -- convert number to hex
    local ln, str = hs:len(), ""            -- get length of string
    for i = 1, ln do                        -- loop through each hex character
        local index = hs:sub(i, i)          -- each character in order
        str = str .. bin_lookup[index]      -- look it up in the table
        str = str .. " "                    -- add a space
    end
    return str
end
print(print_binary(45))
#0010 1101
print(print_binary(65000))
#1111 1101 1110 1000
This may not work in Lua versions that lack the bit32 library.
function toBinary(number, bits)
    local bin = {}
    bits = bits - 1
    -- bit32.extract(number, pos) reads a single bit; extract in reverse
    -- order so the most significant bit comes first, as binary is written
    while bits >= 0 do
        table.insert(bin, bit32.extract(number, bits))
        bits = bits - 1
    end
    return bin
end
--Expected result 00000011
print(table.concat(toBinary(3, 8)))
This needs at least Lua 5.2 (because the code uses the bit32 library).
Same as Dave's answer, but with the empty bits filled in:
local function toBits(num, bits)
    -- returns a table of bits, least significant first.
    local t = {} -- will contain the bits
    bits = bits or 8
    while num > 0 do
        local rest = math.fmod(num, 2)
        t[#t+1] = rest
        num = math.floor((num - rest) / 2)
    end
    for i = #t + 1, bits do -- fill empty bits with 0
        t[i] = 0
    end
    return t
end

for i = 0, 255 do
    local bits = toBits(i)
    print(table.concat(bits, ' '))
end
Result:
0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
1 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
1 0 1 0 0 0 0 0
...
0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve a brute-force loop.
I expect a function/program that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
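The lazy "enumerate all names in order" idea ports nicely to other languages. A Python sketch of the same approach (illustration only, not part of the golf):

from itertools import count, islice, product
from string import ascii_uppercase

def names():
    # Lazily yield A..Z, then AA..ZZ, then AAA..., mirroring the
    # Haskell infinite list o >>= sequence.
    for k in count(1):
        for letters in product(ascii_uppercase, repeat=k):
            yield ''.join(letters)

def excel_name(n):
    return next(islice(names(), n, None))

print([excel_name(i) for i in (0, 25, 26, 701, 702)])
# ['A', 'Z', 'AA', 'ZZ', 'AAA']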
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In the Excel conversion Z + 1 = AA, while in base 26 Z + 1 = BA.
The algorithm is almost the same as decimal to base-26 conversion, with just one change.
In base 26, we make the recursive call with the quotient, but here we pass it quotient - 1:
function decimalToExcel(num)
    // base condition of recursion.
    if num < 26
        print 'A' + num
    else
        quotient = num / 26
        remainder = num % 26
        // recursive calls.
        decimalToExcel(quotient - 1)
        decimalToExcel(remainder)
    end-if
end-function
Python, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing (note this is Python 2: integer division with /, and print as a statement):
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, read each "digit" and convert it to a decimal number. Multiply this by 26^a and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Take num % 26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide num by 26. Repeat until num is zero. After this, reverse the converted string so the most significant digit comes first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant digits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ->701 case, don't increment the base exponent. Do, however, keep in mind that A is no longer 0 (but 1), and so forth.
Explanation of why this is not a base 26 conversion
Let's start by looking at the real base-26 positional system. (Rather, look at base 4, since it involves fewer numbers.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0 rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
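Putting both directions together, here is a compact Python sketch of the "bijective base 26" conversion (the function names are mine):

def excel_to_num(col):
    # 'A' -> 0, 'Z' -> 25, 'AA' -> 26, 'ZZ' -> 701, 'AAA' -> 702.
    # Each letter is worth 1..26; subtract 1 at the end to be 0-based.
    n = 0
    for ch in col:
        n = n * 26 + (ord(ch) - ord('A') + 1)
    return n - 1

def num_to_excel(n):
    # Inverse: emit n % 26 as a letter, then continue with n // 26 - 1.
    s = ''
    while n >= 0:
        s = chr(ord('A') + n % 26) + s
        n = n // 26 - 1
    return s

for i in (0, 25, 26, 701, 702):
    assert excel_to_num(num_to_excel(i)) == i
print(num_to_excel(701), excel_to_num('AAA'))  # ZZ 702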
In Excel VBA ... the obvious choice :)
Sub a()
    For Each O In Range("A1:AA1")
        k = O.Address()
        Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
    Next
End Sub
Or for getting the column number in the first row of the WorkSheet (which makes more sense, since we are in Excel ...)
Sub a()
    For Each O In Range("A1:AA1")
        O.Value = O.Column - 1
    Next
End Sub
Or better yet: 56 chars
Sub a()
    Set O = Range("A1:AA1")
    O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version (relying on PHP's alphabetic string increment, where incrementing 'Z' yields 'AA'):
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following formula, you can figure out the last character in the string:

transform(int num)
    return (char)('A' + num); // maps 0..25 to 'A'..'Z'

char lastChar(int num)
{
    return transform(num % 26);
}

Using this, we can make a recursive function (I don't think it's brute force).

string getExcelHeader(int decimal)
{
    if (decimal >= 26)
        return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
    else
        return transform(decimal);
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P