Summing binary numbers representing fractions in SageMath

I'm just starting to learn how to code in SageMath. I know it's similar to Python, but I don't have much experience with that either.
I'm trying to add two binary numbers representing fractions. That is, something like
a = '110'
b = '011'
bin(int(a,2) + int(b,2))
but with values representing fractions, such as '1.1'.
Thanks in advance!

If you want to do this in vanilla Python, parsing the binary fractions by hand isn't too bad (the first part is adapted from this answer):
def binstr_to_float(s):
    # Split at the binary point and combine the integer and fractional parts.
    t = s.split('.')
    return int(t[0], 2) + int(t[1], 2) / 2.**len(t[1])

def float_to_binstr(f):
    # Repeatedly double f until it is an integer, tracking the shift count i.
    i = 0
    while int(f) != f:
        f *= 2
        i += 1
    as_str = bin(int(f))
    if i == 0:
        return as_str[2:]
    return as_str[2:-i] + '.' + as_str[-i:]

float_to_binstr(binstr_to_float('11.1') + binstr_to_float('0.111'))  # is '100.011'
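If you prefer exact arithmetic over floats, the same hand parse can be done with fractions.Fraction, and since Sage is built on Python this also runs unchanged in a Sage session. A minimal sketch; the function name is my own:

from fractions import Fraction

def binstr_to_fraction(s):
    # Split on the binary point; either part may be empty.
    int_part, _, frac_part = s.partition('.')
    value = Fraction(int(int_part, 2)) if int_part else Fraction(0)
    if frac_part:
        value += Fraction(int(frac_part, 2), 2 ** len(frac_part))
    return value

binstr_to_fraction('11.1') + binstr_to_fraction('0.111')  # Fraction(35, 8), i.e. 100.011 in binary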

In Python you can use the Binary fractions package. With this package you can convert binary-fraction strings into floats and vice versa, and then perform operations on them.
Example:
>>> from binary_fractions import Binary
>>> sum = Binary("1.1") + Binary("10.01")
>>> str(sum)
'0b11.11'
>>> float(sum)
3.75
>>>
It has many more helper functions to manipulate binary strings such as: shift, add, fill, to_exponential, invert...
PS: Shameless plug, I'm the author of this package.

Related

How can I set the numbering of the x-axis of an Octave plot to engineering notation?

I made a very simple Octave script
a = [10e6, 11e6, 12e6];
b = [10, 11, 12];
plot(a, b, 'rd-')
which outputs the following graph.
[plot of b against a, red diamonds with a line]
Is it possible to set the numbering on the x-axis to engineering notation, rather than scientific, and have it display "10.5e+6, 11e+6, 11.5e+6" instead of "1.05e+7, 1.1e+7, 1.15e+7"?
While Octave provides a 'short eng' formatting option, which does what you're asking for in terms of printing to the terminal, it does not appear to provide this functionality in plots or when formatting strings via sprintf.
Therefore you'll have to find a way to do this by yourself, with some creative string processing of the initial xticks, and substituting the plot's ticklabels accordingly. Thankfully it's not that hard :)
Using your example:
a = [10e6, 11e6, 12e6];
b = [10, 11, 12];
plot(a, b, 'rd-')
format short eng % display stdout in engineering format
TickLabels = disp( xticks ) % collect string as it would be displayed on the stdout
TickLabels = strsplit( TickLabels ) % tokenize at spaces
TickLabels = TickLabels( 2 : end - 1 ) % discard start and end empty tokens
TickLabels = regexprep( TickLabels, '\.0+e', 'e' ) % remove purely zero decimals using a regular expression
TickLabels = regexprep( TickLabels, '(\.[1-9]*)0+e', '$1e' ) % remove non-significant zeros in non-zero decimals using a regular expression
xticklabels( TickLabels ) % set the new ticklabels to the plot
format % reset short eng format back to default, if necessary
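For reference, engineering notation simply forces the exponent to be a multiple of three. Here is a small Python sketch of that conversion, not an Octave solution, just to illustrate the target format:

def to_eng(x):
    # Engineering notation: mantissa scaled so the exponent is a multiple of 3.
    if x == 0:
        return '0e+0'
    exp = 0
    while abs(x) >= 1000:
        x /= 1000.0
        exp += 3
    while abs(x) < 1:
        x *= 1000.0
        exp -= 3
    return '%ge%+d' % (x, exp)

[to_eng(v) for v in (10.5e6, 11e6, 11.5e6)]  # ['10.5e+6', '11e+6', '11.5e+6']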

Elixir: How to get bit_size of an Integer variable?

I need to get the number of bits used by an Integer variable, like this:
bit_number = 1
bit_number = bit_number <<< 1
bit_size(bit_number) # must return 3 here
The bit_size/1 function works on bitstrings, not on integers, but in this exercise we need to get the number of bits of the integer.
I'm doing a compression exercise from a book (Classic Computer Science Problems in Python, by David Kopec) and I'm trying to do it in Elixir for study.
This works:
(iex) import Bitwise
(iex) Integer.digits(1 <<< 1, 2) |> length
2
but I'm sure there are better solutions.
(as @Hauleth mentions, the answer here should be 2, not 3)
You can count how many times you can divide it by two:
defmodule Example do
  def bits_required(0), do: 1
  def bits_required(int), do: bits_required(int, 1)

  defp bits_required(1, acc), do: acc
  defp bits_required(int, acc), do: bits_required(div(int, 2), acc + 1)
end
Output:
iex> Example.bits_required(4)
3
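Since the exercise comes from a Python book, for comparison: the same divide-by-two count in Python, next to Python's built-in int.bit_length() (note that the built-in gives 0 for 0, unlike the bits_required(0) clause above). A minimal sketch:

def bits_required(n):
    # Count how many times n can be halved, mirroring the Elixir version above.
    bits = 1
    while n > 1:
        n //= 2
        bits += 1
    return bits

bits_required(4)   # 3
(4).bit_length()   # 3, Python's built-in; note (0).bit_length() == 0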

Faster or better way to transpose a bytearray into separate values

I have a bytearray filled with "C-type" data in reversed (little-endian) byte order, such as signed 32-bit but also signed 24-bit values. A 24-bit signed value needs to be converted into an integer. Python represents negative values with a minus sign, whereas C uses the sign bit to indicate a negative value.
So I came up with changing it to 32 bits first:
raw = bytearray('\x89\x00\x23')
val = ord(raw[0:1]) | (ord(raw[1:2]) << 8) | (ord(raw[2:3]) << 16)
if (val & 0x00800000L) > 0:
    val |= 0xFF000000L
This works; however, now I have the 32-bit two's-complement pattern, which Python still treats as a positive number. I still need it to become a negative value in Python. So I came up with:
import ctypes
p_val = ctypes.c_int32(val).value
This will convert it in the correct way.
I would like this to be a bit more efficient and faster. Is there any way to rewrite this in something that is much faster? I need to create 7 values like this per iteration. I read something about "memoryview"?
Anyone?
Various options:
import struct

MAX_INT24 = (1 << 23) - 1
BIAS_INT24 = 1 << 24

def int24(raw):
    val = raw[0] | (raw[1] << 8) | (raw[2] << 16)
    # val = struct.unpack('<i', raw + b'\x00')[0]  # Another option
    return val if val <= MAX_INT24 else val - BIAS_INT24

raws = [b'\xff\xff\x7f',
        b'\xff\xff\xff',
        b'\xfe\xff\xff']

for raw in raws:
    print(int24(raw))
    print(int.from_bytes(raw, 'little', signed=True))  # Python 3 only
Output:
8388607
8388607
-1
-1
-2
-2
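Since the question mentions needing seven such values per iteration and memoryview, here is a hedged sketch (Python 3) of unpacking a buffer of consecutive little-endian signed 24-bit values in one pass; the packed layout and the names are my own assumption:

def unpack_s24_le(buf):
    # Interpret buf as consecutive little-endian signed 24-bit integers.
    mv = memoryview(buf)
    return [int.from_bytes(mv[i:i + 3], 'little', signed=True)
            for i in range(0, len(mv), 3)]

packet = b'\xff\xff\x7f' + b'\xff\xff\xff' + b'\xfe\xff\xff'
unpack_s24_le(packet)  # [8388607, -1, -2]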

Extracting greek characters from technical PDF documents when using Python 3

I'm currently trying to construct a database of chemicals used in a university department, and their hazard classes. I then wish to output to a csv file. One step is to pull all the synonyms for the various chemicals from standard PDFs, such as this for gamma hexalactone:
sample PDF
At the moment, the code I'm using to extract the text just loses the Greek characters which I need to transfer. It looks like this:
pdfReader = PyPDF2.PdfFileReader(inpathf)
txtObj = ''
for pageNum in range(0, pdfReader.numPages):
    pageObj = pdfReader.getPage(pageNum)
    txtObj += str(pageObj.extractText())
inpathf.close()
outputf.write(txtObj)
outputf.close()
return txtObj
Parameters are extracted from ~2000 PDFs and stored in a dictionary before being transferred to a csv file:
def Outfile_csv(outfile, dict1, length):
    outputfile = open(outfile + '.csv', 'w', newline='')
    output_list = []
    outputWriter = csv.writer(outputfile)
    outputWriter.writerow(['PDF file', 'Name', 'Synonyms', 'CAS No.', 'H statements',
                           'TWA limits /ppm', 'STEL limits /ppm'])
    for r in range(0, length):
        output_list = []
        for s in range(0, 7):
            if s == 0 or s == 3:
                output_list.append(str((dict1[s][r])).encode('utf-8'))
            else:
                output_list.append(str(dict1[s][r]))
        outputWriter.writerow(output_list)
    outputfile.close()
I also can't write out to the CSV in cases where there are Greek characters; those data are simply not placed in the CSV file. Many thanks for any help; a day playing with codecs and the contents of Stack Exchange has not helped yet. I'm using Python 3.4 and Windows 8.
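One thing worth checking (an educated guess from the snippet above, not a tested fix): on Windows, open() without an encoding falls back to a legacy code page that cannot hold Greek letters, and str(x.encode('utf-8')) puts literal b'...' representations into the CSV rather than the characters. A minimal sketch with an explicit encoding instead (file and column names shortened from the question's):

import csv

def write_rows(outfile, rows):
    # utf-8-sig lets Excel on Windows detect the encoding; Greek characters
    # can then be written directly, with no .encode() calls.
    with open(outfile + '.csv', 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerow(['Name', 'Synonyms'])
        for row in rows:
            writer.writerow(row)

write_rows('chemicals', [['gamma hexalactone', 'γ-hexalactone']])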

What's the correct way to expand a [0,1] interval to [a,b]?

Many random-number generators return floating-point numbers between 0 and 1.
What's the best and correct way to get integers between a and b?
Divide the interval [0,1] into B-A+1 bins.
Example: A=2, B=5

[----+----+----+----]
0   1/4  1/2  3/4   1
  2    3    4    5      <- each bin maps to one integer
The problem with the formula
Int (Rnd() * (B-A+1)) + A
is that your Rnd() generation interval is closed on both sides, so 0 and 1 are both possible outputs, and the formula gives 6 when Rnd() is exactly 1.
In a real random distribution (not pseudo), the 1 has probability zero. I think it is safe enough to program something like:
r = Rnd()
if r equals 1
    MyInt = B
else
    MyInt = Int(r * (B-A+1)) + A
endif
Edit
Just a quick test in Mathematica:
Define our function:
f[a_, b_] := If[(r = RandomReal[]) == 1, b, IntegerPart[r (b - a + 1)] + a]
Build a table with 3·10^5 numbers in [1,100]:
table = SortBy[Tally[Table[f[1, 100], {300000}]], First]
Check minimum and maximum:
In[137]:= {Max[First /@ table], Min[First /@ table]}
Out[137]= {100, 1}
Let's see the distribution:
BarChart[Last /@ SortBy[Tally[Table[f[1, 100], {300000}]], First],
ChartStyle -> "DarkRainbow"]
X = (Rand() * (B - A)) + A
Another way to look at it, where r is your random number in the range 0 to 1:
(1-r)a + rb
As for your additional requirement of the result being an integer, maybe (apart from using built-in casting) the modulus operator can help you out. Check out this question and the answer:
Expand a random range from 1–5 to 1–7
Well, why not just look at how Python does it itself? Read random.py in your installation's lib directory.
After gutting it to only support the behavior of random.randint() (which is what you want) and removing all error checks for non-integer or out-of-bounds arguments, you get:
import random

def randint(start, stop):
    width = stop + 1 - start
    return start + int(random.random() * width)
Testing:
>>> l = []
>>> for i in range(2000000):
... l.append(randint(3,6))
...
>>> l.count(3)
499593
>>> l.count(4)
499359
>>> l.count(5)
501432
>>> l.count(6)
499616
>>>
Assuming r_a_b is the desired random number between a and b, and r_0_1 is a random number between 0 and 1, the following should work just fine:
r_a_b = (r_0_1 * (b-a)) + a
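For completeness, here is a small Python sketch of both mappings discussed above: the continuous map onto [a, b] and the guarded integer map onto {a, ..., b}. The names are my own:

import random

def scale_to_interval(r, a, b):
    # Continuous map of r in [0, 1] onto [a, b].
    return a + r * (b - a)

def scale_to_int(r, a, b):
    # Integer map: b-a+1 equal bins, with the r == 1 edge case sent to b.
    if r == 1:
        return b
    return a + int(r * (b - a + 1))

r = random.random()
print(scale_to_interval(r, 2.0, 5.0))  # a float in [2, 5)
print(scale_to_int(r, 2, 5))           # an int in {2, 3, 4, 5}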