Lua: print integer as binary

How can I represent an integer in binary,
so that I can print 7 as 111?

You can write a function to do this:
num = 7
function toBits(num)
    -- returns a table of bits, least significant first.
    local t = {}  -- will contain the bits
    while num > 0 do
        local rest = math.fmod(num, 2)
        t[#t+1] = rest
        num = (num - rest) / 2
    end
    return t
end
bits=toBits(num)
print(table.concat(bits))
In Lua 5.2 you already have bitwise functions that can help you (the bit32 library).
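For example, a minimal Lua 5.2 sketch (the helper name toBits32 and its explicit width parameter are my own, not part of bit32):
-- bit32.extract(n, pos) returns the single bit of n at position pos (0-based)
function toBits32(num, width)
    local t = {}
    for b = 1, width do
        t[b] = bit32.extract(num, width - b)  -- most significant first
    end
    return t
end
print(table.concat(toBits32(7, 8)))  --> 00000111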
Here is the most-significant-first version, with optional leading 0 padding to a specified number of bits:
function toBits(num, bits)
    -- returns a table of bits, most significant first.
    bits = bits or math.max(1, select(2, math.frexp(num)))
    local t = {}  -- will contain the bits
    for b = bits, 1, -1 do
        t[b] = math.fmod(num, 2)
        num = math.floor((num - t[b]) / 2)
    end
    return t
end
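A quick check of this version, with and without the optional padding (note that it relies on math.frexp, which is gone from stock Lua 5.4):
print(table.concat(toBits(7)))     --> 111
print(table.concat(toBits(7, 8)))  --> 00000111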

There's a faster way to do this that takes advantage of string.format, which converts numbers to base 8. It's trivial to then convert base 8 to binary.
--create lookup table for octal to binary
oct2bin = {
['0'] = '000',
['1'] = '001',
['2'] = '010',
['3'] = '011',
['4'] = '100',
['5'] = '101',
['6'] = '110',
['7'] = '111'
}
function getOct2bin(a) return oct2bin[a] end
function convertBin(n)
    local s = string.format('%o', n)
    s = s:gsub('.', getOct2bin)
    return s
end
If you want to keep them all the same size, then do
s = string.format('%.22o', n)
Which gets you 66 bits: that's two extra bits at the front, since octal digits come in groups of 3 bits and 64 isn't divisible by 3. If you want 33 bits, change the 22 to 11.
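If you need exactly 32 bits as a string, here is one hedged variation on the same trick (convertBin32 is my own name; it assumes the oct2bin/getOct2bin definitions above and n < 2^32):
function convertBin32(n)
    local s = string.format('%.11o', n)  -- 11 octal digits = 33 bits
    s = s:gsub('.', getOct2bin)
    return s:sub(2)                      -- drop the single extra leading bit
end
print(convertBin32(7))  --> 00000000000000000000000000000111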
If you have the BitOp library, which is available by default in LuaJIT, then you can do this:
function convertBin(n)
    local t = {}
    for i = 1, 32 do
        n = bit.rol(n, 1)
        table.insert(t, bit.band(n, 1))
    end
    return table.concat(t)
end
But note this only covers the first 32 bits! If your number is larger than 2^32, the result won't be correct.
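For example, in LuaJIT:
print(convertBin(7))  --> 00000000000000000000000000000111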

function bits(num)
    local t = {}
    while num > 0 do
        local rest = num % 2
        table.insert(t, 1, rest)  -- insert at the front
        num = (num - rest) / 2
    end
    return table.concat(t)
end
Nobody else used table.insert, but it is useful here: inserting at position 1 builds the string most significant bit first, with no reversal step.

Here is a function inspired by the accepted answer, with corrected syntax, which returns a table of bits written most significant first.
num = 255
bits = 8
function toBits(num, bits)
    -- returns a table of bits
    local t = {}  -- will contain the bits
    for b = bits, 1, -1 do
        local rest = math.fmod(num, 2)
        t[b] = rest
        num = (num - rest) / 2
    end
    if num == 0 then
        return t
    else
        return {'Not enough bits to represent this number'}
    end
end
bits = toBits(num, bits)
print(table.concat(bits))
>>11111111

function reverse(t)
    local nt = {}  -- new table
    local size = #t + 1
    for k, v in ipairs(t) do
        nt[size - k] = v
    end
    return nt
end
function tobits(num)
    local t = {}
    while num > 0 do
        local rest = num % 2
        t[#t+1] = rest
        num = (num - rest) / 2
    end
    t = reverse(t)
    return table.concat(t)
end
print(tobits(7))
# 111
print(tobits(33))
# 100001
print(tobits(20))
# 10100

local function tobinary( number )
    local str = ""
    if number == 0 then
        return "0"  -- return a string, like every other path
    elseif number < 0 then
        number = -number
        str = "-"
    end
    local power = 0
    while true do
        if 2^power > number then break end
        power = power + 1
    end
    local dot = true
    while true do
        power = power - 1
        if dot and power < 0 then
            str = str .. "."
            dot = false
        end
        if 2^power <= number then
            number = number - 2^power
            str = str .. "1"
        else
            str = str .. "0"
        end
        if number == 0 and power < 1 then break end
    end
    return str
end
It may seem more verbose, but it is actually faster than the other functions that rely on the math library. It works with any number: positive, negative, or fractional.
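For example, a fractional and a negative input:
print(tobinary(5.25))  --> 101.01
print(tobinary(-10))   --> -1010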

local function tobits(num, str)  -- tail call; needs Lua 5.3+ for the native bitwise operators
    str = str or "B"             -- "B" marks the end of the string
    if num == 0 then return str end
    return tobits(
        num >> 1,                                -- right shift
        ((num & 1) == 1 and "1" or "0") .. str )
end
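For example (note the trailing "B" end marker; tobits(0) returns just "B"):
print(tobits(7))   --> 111B
print(tobits(10))  --> 1010B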

This function uses a lookup table to print a binary number extracted from a hex representation, essentially all through string manipulation. Tested in Lua 5.1.
local bin_lookup = {
["0"] = "0000",
["1"] = "0001",
["2"] = "0010",
["3"] = "0011",
["4"] = "0100",
["5"] = "0101",
["6"] = "0110",
["7"] = "0111",
["8"] = "1000",
["9"] = "1001",
["A"] = "1010",
["B"] = "1011",
["C"] = "1100",
["D"] = "1101",
["E"] = "1110",
["F"] = "1111"
}
local print_binary = function(value)
    local hs = string.format("%.2X", value)  -- convert number to hex
    local ln, str = hs:len(), ""             -- get length of string
    for i = 1, ln do                         -- loop through each hex character
        local index = hs:sub(i, i)           -- each character in order
        str = str .. bin_lookup[index]       -- look it up in the table
        str = str .. " "                     -- add a space
    end
    return str
end
print(print_binary(45))
#0010 1101
print(print_binary(65000))
#1111 1101 1110 1000

This may not work in a Lua build that has no bit32 library.
function toBinary(number, bits)
    local bin = {}
    bits = bits - 1
    while bits >= 0 do
        -- bit32.extract(number, pos) returns the single bit at position pos,
        -- so counting pos down from the top emits the bits most significant first
        table.insert(bin, bit32.extract(number, bits))
        bits = bits - 1
    end
    return bin
end
--Expected result 00000011
print(table.concat(toBinary(3, 8)))
This needs at least Lua 5.2 (the code uses the bit32 library, which was later deprecated in 5.3 and removed in 5.4).

Like Dave's version, but with the remaining empty bits zero-filled:
local function toBits(num, bits)
    -- returns a table of bits, least significant first.
    local t = {}  -- will contain the bits
    bits = bits or 8
    while num > 0 do
        local rest = math.fmod(num, 2)
        t[#t+1] = rest
        num = math.floor((num - rest) / 2)
    end
    for i = #t + 1, bits do  -- fill empty bits with 0
        t[i] = 0
    end
    return t
end
for i = 0, 255 do
    local bits = toBits(i)
    print(table.concat(bits, ' '))
end
Result:
0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
1 1 0 0 0 0 0 0
0 0 1 0 0 0 0 0
1 0 1 0 0 0 0 0
...
0 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1

Related

Finding the location of ones in a bit mask - Julia

I have a series of values that are each being stored as UInt16. Each of these numbers represents a bitmask - these numbers are commands that have been sent to a microprocessor telling it which pins to set high or low. I would like to parse this array of commands to find out which pins were being set high each time, in such a way that is easier to analyse later.
Consider the example value 0x3c00, which in decimal is 15360 and in binary is 0011110000000000. Currently I have the following function
function read_message(hex_rep)
    return findall.(x -> x .== '1', bitstring(hex_rep))
end
Which gets called on every element of the array of UInt16. Is there a better/more efficient way of doing this?
The best approach probably depends on how you want to handle vectors of hex-values. But here's an approach for processing a single hex which is much faster than the one in the OP:
function readmsg(x::UInt16)
    N = count_ones(x)
    inds = Vector{Int}(undef, N)
    if N == 0
        return inds
    end
    k = trailing_zeros(x)
    x >>= k + 1
    i = N - 1
    inds[N] = n = 16 - k
    while i >= 1
        (x, r) = divrem(x, 0x2)
        n -= 1
        if r == 1
            inds[i] = n
            i -= 1
        end
    end
    return inds
end
I can suggest padding your vector into a Vector{UInt64} and using that to manually construct a BitVector. The following should mostly work (even for input element types other than UInt16), but I haven't taken into account any specific endianness you might want to respect:
julia> function read_messages(msgs)
           bytes = reinterpret(UInt8, msgs)
           N = length(bytes)
           nchunks, remaining = divrem(N, sizeof(UInt64))
           padded_bytes = zeros(UInt8, sizeof(UInt64) * cld(N, sizeof(UInt64)))
           copyto!(padded_bytes, bytes)
           b = BitVector(undef, N * 8)
           b.chunks = reinterpret(UInt64, padded_bytes)
           return b
       end
read_messages (generic function with 1 method)
julia> msgs
2-element Vector{UInt16}:
0x3c00
0x8000
julia> read_messages(msgs)
32-element BitVector:
0
0
0
0
0
0
0
0
0
⋮
0
0
0
0
0
0
0
1
julia> read_messages(msgs) |> findall
5-element Vector{Int64}:
11
12
13
14
32
julia> bitstring.(msgs)
2-element Vector{String}:
"0011110000000000"
"1000000000000000"
(Getting rid of the unnecessary allocation of the undef bit vector would require some black magic, I believe.)

Access VBA IF (X and Y) > 0 then

I'm stumped on this one. I have some functioning VBA code in Access that looks like this.
If (intFrontLoaded And 2) > 0 Then boolFrontLoad(1) = True Else boolFrontLoad(1) = False
If (intFrontLoaded And 4) > 0 Then boolFrontLoad(2) = True Else boolFrontLoad(2) = False
If (intFrontLoaded And 8) > 0 Then boolFrontLoad(3) = True Else boolFrontLoad(3) = False
If (intFrontLoaded And 16) > 0 Then boolFrontLoad(4) = True Else boolFrontLoad(4) = False
If (intFrontLoaded And 32) > 0 Then boolFrontLoad(5) = True Else boolFrontLoad(5) = False
If (intFrontLoaded And 64) > 0 Then boolFrontLoad(6) = True Else boolFrontLoad(6) = False
I'm trying to figure out how (intFrontLoaded And X) > 0 works.
I know what it does; I'm trying to figure out how. Example:
If intFrontLoaded = 14 then boolFrontLoad(1), (2) and (3) will be true.
If intFrontLoaded = 28 then boolFrontLoad(2), (3) and (4) will be true.
I understand that 2+4+8 = 14 and 4+8+16 = 28, but how does (intFrontLoaded And X) > 0 do the calculation?
And in this context is a bitwise AND operator. The test is checking for a single flag bit. Let's use your example of intFrontLoaded = 14 with If (intFrontLoaded And 4) > 0 Then.
14 as bit flags is this: 0000 0000 0000 1110
4 is this: 0000 0000 0000 0100
The result of And is every bit that is set in both operands. In the example above, that is only the "flag" bit, 4. So the result of the And operation is 4.
Now plug that back into the expression:
If 4 > 0 Then
So it executes the "true" condition. If you'll notice, all of the tests are powers of 2. This is because, represented in binary, each one sets only a single bit.
Basically, intFrontLoaded is storing a single boolean value in each of the bits being tested. This was much more common in early computing, when memory was at a premium and using all 16 bits to store one boolean was considered wasteful.
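Since the rest of this page is Lua-centric, here is the same flag test sketched in Lua 5.3 (an illustration only, with mask values mirroring the VBA code):
local intFrontLoaded = 14     -- binary 1110
for i = 1, 6 do
    local mask = 1 << i       -- 2, 4, 8, 16, 32, 64
    print(i, (intFrontLoaded & mask) > 0)
end
-- masks 2, 4 and 8 print true; the rest print false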
Note that you can simplify this to:
boolFrontLoad(1) = intFrontLoaded And 2
boolFrontLoad(2) = intFrontLoaded And 4
boolFrontLoad(3) = intFrontLoaded And 8
boolFrontLoad(4) = intFrontLoaded And 16
boolFrontLoad(5) = intFrontLoaded And 32
boolFrontLoad(6) = intFrontLoaded And 64
The And operator is a bitwise AND operation - it compares the bits of its two operands and returns a value whose set bits are the ones both operands have in common.
That said, your code would be much clearer written as:
boolFrontLoad(1) = (intFrontLoaded And 2) > 0
boolFrontLoad(2) = (intFrontLoaded And 4) > 0
boolFrontLoad(3) = (intFrontLoaded And 8) > 0
boolFrontLoad(4) = (intFrontLoaded And 16) > 0
boolFrontLoad(5) = (intFrontLoaded And 32) > 0
boolFrontLoad(6) = (intFrontLoaded And 64) > 0
This is called a bitwise operation. The logical And is performed bit by bit between intFrontLoaded and X. When X is a power of 2, say 2^a, its binary representation is all zeros except for a 1 in the (a+1)-th position (numbering the bits from right to left).
Therefore, intFrontLoaded And 4 checks whether the third bit of intFrontLoaded is set. If the result is non-zero, the If test will succeed.
In your code intFrontLoaded is used as a bit-set, that is, a set of flags where each bit represents a flag for some boolean condition.

Find the combinations of 2 1's in a binary number

We have a binary number, and we need to generate a combination of two 1's from the given number. Given such a combination of two 1's, we should be able to produce the next combination.
Example:
Given vector : 10101111 Given combination : 10100000 output : 10001000
Given vector : 10101111 Given combination : 10001000 output : 10000100
Given vector : 10101111 Given combination : 10000010 output : 10000001
Given vector : 10101111 Given combination : 10000001 output : 00101000
Given vector : 10101111 Given combination : 00101000 output : 00100100
Edit:
Once the 2nd '1' reaches the last '1' in the given binary number, the 1st '1' is advanced (set to the next '1' in the binary number), and the 2nd '1' becomes the '1' that comes after the 1st '1' (as in example 4).
This is to be done in hardware, so it should not be computationally complex. How can we design this module in VHDL?
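In software terms, the rule being asked for is the standard "next combination of 2 out of k" step over the indices of the vector's 1-bits. A Lua sketch of just that rule, for reference (the names are mine; the hardware answer below implements the same idea directly on the bit vectors):
-- ones = how many 1-bits the vector has; (i, j) = current pair of 1-bit indices, i < j
local function next_pair(ones, i, j)
    if j < ones then
        return i, j + 1        -- slide the 2nd '1' to the next 1-bit
    elseif i + 1 < ones then
        return i + 1, i + 2    -- advance the 1st '1', put the 2nd right after it
    else
        return nil             -- no further combination exists
    end
end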
Here is some asynchronous code that will do the job:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity nex2ones is
    Port ( vector : in  STD_LOGIC_VECTOR (1 to 8);
           combo1 : in  STD_LOGIC_VECTOR (1 to 8);
           combo2 : out STD_LOGIC_VECTOR (1 to 8);
           error  : out STD_LOGIC);
end nex2ones;

architecture Behavioral of nex2ones is
    type int_array_8 is array (1 to 8) of integer range 0 to 8;
begin
    process (vector, combo1)
        variable ones_ixs : int_array_8;
        variable first_combo1_ix  : integer range 0 to 8 := 0;
        variable second_combo1_ix : integer range 0 to 8 := 0;
        variable first_combo1_k   : integer range 0 to 9 := 0;
        variable second_combo1_k  : integer range 0 to 9 := 0;
        variable k : integer range 1 to 9;
    begin
        ones_ixs := (others => 0); -- indices of 1s in vector
        combo2 <= (others => '0');
        k := 1;
        first_combo1_ix := 0;
        second_combo1_ix := 0;
        first_combo1_k := 0; -- corresponding ptr to ones_ixs
        second_combo1_k := 0;
        error <= '0';
        for j in 1 to 8 loop
            if combo1(j) = '1' then
                if first_combo1_ix = 0 then
                    first_combo1_ix := j;
                    first_combo1_k := k;
                else
                    second_combo1_ix := j;
                    second_combo1_k := k;
                end if;
            end if;
            if vector(j) = '1' then
                ones_ixs(k) := j;
                k := k + 1;
            end if;
        end loop;
        if k > 1 then k := k - 1; end if; -- point to last nonzero index
        if (first_combo1_ix = 0 or second_combo1_ix = 0)
            --or (first_combo1_ix = ones_ixs(k-1) and second_combo1_ix = ones_ixs(k))
            or (k < 2) then
            error <= '1';
        else -- no error, proceed
            if second_combo1_ix = ones_ixs(k) then -- can't slide 2nd anymore
                if (second_combo1_k - first_combo1_k) > 1 then -- is 1st movable
                    combo2(ones_ixs(first_combo1_k + 1)) <= '1'; -- move 1st
                    if (second_combo1_k - first_combo1_k) > 2 then -- is 2nd movable
                        combo2(ones_ixs(first_combo1_k + 2)) <= '1'; -- move 2nd
                    else
                        combo2(ones_ixs(second_combo1_k)) <= '1'; -- leave 2nd be
                    end if;
                else
                    error <= '1'; -- no mas
                end if;
            else
                combo2(ones_ixs(first_combo1_k)) <= '1'; -- leave 1st be
                combo2(ones_ixs(second_combo1_k + 1)) <= '1'; -- next
            end if;
        end if;
    end process;
end Behavioral;
Testbench output:
     ps    vector    combo1    combo2  error
      0  00000000  00000000  00000000      1
 100000  10101111  10100000  10001000      0
 200000  10101111  10001000  10000100      0
 300000  10101111  10000010  10000001      0
 400000  10101111  10000001  00101000      0
 500000  10101111  00101000  00100100      0
 600000  10101111  00100100  00100010      0
 700000  10101111  00000011  00000000      1
 800000  11001110  00000110  00000000      1
 900000  10001110  00001010  00000110      0
1000000  11001110  00001010  00000110      0

mysql: why comparing a 'string' to 0 gives true?

I was doing some MySQL test queries, and realized that comparing a string column with 0 (as a number) gives TRUE!
select 'string' = 0 as res; -- res = 1 (true), UNexpected! why!??!?!
however, comparing it to any other number, positive or negative, integer or decimal, gives false as expected
(unless, of course, the string is the string representation of that number)
select 'string' = -12 as res; -- res = 0 (false), expected
select 'string' = 3131.7 as res; -- res = 0 (false), expected
select '-12' = -12 as res; -- res = 1 (true), expected
Of course comparing the string with '0' as string, gives false, as expected.
select 'string' = '0' as res; -- res = 0 (false), expected
But why does it give true for 'string' = 0?
MySQL automatically casts a string to a number:
SELECT '1string' = 0 AS res; -- res = 0 (false)
SELECT '1string' = 1 AS res; -- res = 1 (true)
SELECT '0string' = 0 AS res; -- res = 1 (true)
and a string that does not begin with a number is evaluated as 0:
SELECT 'string' = 0 AS res; -- res = 1 (true)
Of course, when we try to compare a string with another string there's no conversion:
SELECT '0string' = 'string' AS res; -- res = 0 (false)
but we can force a conversion using, for example, a + operator:
SELECT '0string' + 0 = 'string' AS res; -- res = 1 (true)
The last query returns TRUE because we are summing the string '0string' and the number 0, so the string has to be converted to a number. The query becomes SELECT 0 + 0 = 'string'; then the string 'string' is in turn converted to a number before being compared to 0, so it becomes SELECT 0 = 0, which is TRUE.
This will also work:
SELECT '1abc' + '2ef' AS total; -- total = 1+2 = 3
and will return the sum of the strings converted to numbers (1 + 2 in this case).
"Strings are automatically converted to numbers and numbers to strings as necessary." This means that in order to compare a string to a number, MySQL tries to parse a number from the start of the string. In this case there is no number there, so the string converts to 0, and 0 = 0 is true.
If you want to fix it, you can compare two strings:
select 'string' = convert(0, char) as res; -- res = 0

Convert decimal number to excel-header-like number

0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve a brute-force loop :-(
I expect a function/program, that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In the Excel conversion, Z + 1 = AA, while in base 26, Z + 1 = BA.
The algorithm is almost the same as decimal-to-base-26 conversion, with just one change.
In base 26 we would recurse on the quotient, but here we pass quotient - 1:
function decimalToExcel(num)
    // base condition of the recursion.
    if num < 26
        print 'A' + num
    else
        quotient = num / 26;
        remainder = num % 26;
        // recursive calls.
        decimalToExcel(quotient - 1);
        decimalToExcel(remainder);
    end-if
end-function
Java Implementation
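And since this page started out in Lua, a direct Lua transcription of the pseudocode above (my own sketch, 0-based so that 0 = A):
local function decimalToExcel(num)
    if num < 26 then
        return string.char(65 + num)
    end
    return decimalToExcel(math.floor(num / 26) - 1)
        .. string.char(65 + num % 26)
end
print(decimalToExcel(0))    --> A
print(decimalToExcel(701))  --> ZZ
print(decimalToExcel(702))  --> AAA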
Python 2, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing:
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, read each digit and convert it to a decimal number. Multiply this by 26^a and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Simply take num % 26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide the number by 26. Repeat until num is zero. After this, reverse the converted number string so the most significant digit comes first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant digits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ -> 701 case, don't increment the base exponent. Do, however, keep in mind that A is no longer 0 (but 1), and so forth.
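For the ZZ -> 701 direction with that correction applied, a small Lua sketch (my own: each digit A..Z counts as 1..26, and the total is shifted back to the question's 0-based scheme at the end):
local function excelToDecimal(s)
    local n = 0
    for c in s:gmatch('%u') do
        n = n * 26 + (c:byte() - ('A'):byte() + 1)
    end
    return n - 1  -- back to 0-based (0 = A)
end
print(excelToDecimal('ZZ'))   --> 701
print(excelToDecimal('AAA'))  --> 702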
Explanation of why this is not a base 26 conversion
Let's start by looking at the real base-26 positional system. (Rather, look at base 4, since it has fewer digits.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0, rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
In Excel VBA ... the obvious choice :)
Sub a()
    For Each O In Range("A1:AA1")
        k = O.Address()
        Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
    Next
End Sub
Or for getting the column number in the first row of the WorkSheet (which makes more sense, since we are in Excel ...)
Sub a()
    For Each O In Range("A1:AA1")
        O.Value = O.Column - 1
    Next
End Sub
Or better yet: 56 chars
Sub a()
    Set O = Range("A1:AA1")
    O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version:
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following formula, you can figure out the last character in the string:
transform(int num)
    return (char)(num + 65); // transform 0..25 into an ASCII letter: 'A' is 65

char lastChar(int num)
{
    return transform(num % 26);
}
Using this, we can make a recursive function (I don't think it's brute force).
string getExcelHeader(int decimal)
{
    if (decimal >= 26)
        // note the - 1: this is the Excel-style carry, not plain base 26
        return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
    else
        return transform(decimal);
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P