How to stop Mathematica rounding the last digit of a number when converting it to string using ToString?

In Mathematica 8, numbers with no more than 16 significant digits are converted correctly, e.g.
ToString[
  NumberForm[0.000001234567891234567, Infinity,
    ExponentFunction -> (Null &)]
]
gives "0.000001234567891234567".
However numbers with more than 16 significant digits are rounded, e.g.
ToString[
  NumberForm[0.0000012345678912345678, Infinity,
    ExponentFunction -> (Null &)]
]
gives "0.000001234567891234568". How can I avoid this behavior?

You should explicitly specify the precision of your number using a number mark (the backtick syntax):
ToString[NumberForm[0.0000012345678912345678`17, Infinity, ExponentFunction -> (Null &)]]
"0.0000012345678912345678"
The reason for the problem is that your number is interpreted as a MachinePrecision number. If you simply append one zero to the end of the number, the problem disappears, because this (new!) number is interpreted as an arbitrary-precision number:
0.00000123456789123456780 // InputForm
1.2345678912345678`17.091514977603566*^-6
0.00000123456789123456780 // MachineNumberQ
False
while your number is interpreted as a MachinePrecision number:
0.0000012345678912345678 // InputForm
1.2345678912345679*^-6
0.0000012345678912345678 // MachineNumberQ
True
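Mathematica's MachinePrecision numbers are IEEE-754 doubles, which carry only about 16 significant decimal digits, so the rounding above is a property of the underlying format rather than of ToString itself. The same limit can be illustrated from Python (shown here only as a language-neutral sketch of double precision):

```python
import sys

x = 0.0000012345678912345678  # typed with 17 significant digits

# A double guarantees only 15 decimal digits survive a text round trip...
print(sys.float_info.dig)  # 15

# ...but printing 17 significant digits always identifies the stored double exactly.
print(float('%.17g' % x) == x)  # True
```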

Why tilde on negative number(eg ~~-1) return 184467440737... in MySql query

I just found what I think is a simpler and faster way to remove the decimal part of a number, using the double tilde ~~ in some programming languages.
I was curious about the meaning of the tilde, and found this answer:
The operator ~ is a binary negation operator (as opposed to boolean
negation), and being that, it inverses all the bits of its operand.
The result is a negative number in two's complement arithmetic.
That answer is for PHP, and I assumed it would be the same for MySQL. I thought I could use ~ to negate a number (and also remove the decimal part) and ~~ to just remove the decimal part.
I tried in PHP and JS:
single tilde:
~-1 // = 1
~1 // = -1
~-1.55 // = 1
~1.55 // = -1
double tilde:
~~-1 // = -1
~~1 // = 1
~~1.55 // = 1
~~-1.55 // = -1
but when I tried it in MySQL, I got different results:
select ~1; // 18446744073709551614
select ~-1; // 0
select ~-111; // 110
select ~1.55; // 18446744073709551613
select ~-1.55; // 1
select ~~1; // 1
select ~~-1; // 18446744073709551615
select ~~1.55; // 2
select ~~-1.55; // 18446744073709551614
From the queries above, I conclude that ~~ can be used to remove the decimal part (with round half up) of a positive number, but it doesn't work for negative numbers (it returns 18446744073...). And I don't know what ~ does in MySQL. Can anyone explain it to me?
"... faster to remove decimal ..." -- Don't bother optimizing at this level. Stick to the overall structure of SQL.
For converting floating point values to integers, use a function:
mysql> SELECT FLOOR(12.7), CEIL(12.7), ROUND(12.7), ROUND(12.777, 2), FORMAT(1234.7, 0)\G
*************************** 1. row ***************************
FLOOR(12.7): 12
CEIL(12.7): 13
ROUND(12.7): 13
ROUND(12.777, 2): 12.78
FORMAT(1234.7, 0): 1,235
As for what ~ does with floating-point numbers, we need to get into the IEEE-754 standard. But your eyes may glaze over.
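The huge numbers are not random: MySQL evaluates ~ on unsigned 64-bit (BIGINT UNSIGNED) values, first rounding any fractional argument to an integer (half away from zero). A sketch of that behavior in Python (mysql_not is an illustrative helper, not a real MySQL function):

```python
import math

MASK = 2**64 - 1  # MySQL bitwise operators work on unsigned 64-bit integers

def mysql_not(x):
    """Sketch (not a MySQL API): round the argument to an integer half away
    from zero, as MySQL does, then bitwise-NOT it in unsigned 64-bit space."""
    n = int(math.floor(x + 0.5)) if x >= 0 else int(math.ceil(x - 0.5))
    return ~n & MASK

print(mysql_not(1))      # 18446744073709551614
print(mysql_not(-1))     # 0
print(mysql_not(1.55))   # 18446744073709551613  (1.55 rounds to 2 first)
print(mysql_not(-1.55))  # 1
```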

Value calculated at the beginning of a function isn't remembered later in the same function

At the beginning of the function I calculate the total weight of a protein sequence and store it as seq_weight.
After that I calculate the weight of several fragments and make combinations of those weights that sum to the total weight of the first protein sequence.
The first print statement prints the total weight correctly, but near the end of the function the value seems to be forgotten when I compare the sums against it.
When I type the value in manually I get the result I want:
from itertools import combinations
from Bio import SeqUtils

def fragmentcombinations(sequence, fragments):
    for seq_rec in sequence:
        seq_weight = 0.0
        for i in seq_rec.seq:
            seq_weight += SeqUtils.molecular_weight(i, "protein")
        print("The protein sequence: " + seq_rec.seq)
        print("The molecular weight: " + str(round(seq_weight, 2)) + " Da.")
        nums = []
        for a in fragments:
            fragment_weights = 0.0
            for aa in a.seq:
                fragment_weights += SeqUtils.molecular_weight(aa, 'protein')
            nums.append(round(fragment_weights, 2))
        print(nums)
        weights_array = []
        combs = []
        if len(nums) > 0:
            for r in range(0, len(nums) + 1):
                weights_array += list(combinations(nums, r))
        for item in weights_array:
            if sum(item) == 4364.85:  # sequence weight needs to be inserted manually -> not ideal
                combs.append(item)
        print(" ")
        print("The possible combinations of fragment weights that may cover the protein sequence without overlap are: ")
        for row in combs:
            print(*row, sep=", ")

fragmentcombinations(seq_list3, seq_list4)
This is the result:
The protein sequence: IEEATHMTPCYELHGLRWVQIQDYAINVMQCL
The molecular weight: 4364.85 Da.
[3611.86, 2269.63, 469.53, 556.56, 1198.41, 2609.88, 547.69, 1976.23, 2306.48, 938.01, 1613.87, 789.87, 737.75, 2498.71, 2064.25, 1184.39, 1671.87]
The possible combinations of fragment weights that may cover the protein sequence without overlap are:
556.56, 1198.41, 2609.88
469.53, 2609.88, 547.69, 737.75
556.56, 1198.41, 938.01, 1671.87
469.53, 547.69, 938.01, 737.75, 1671.87
If I write
if sum(item) == seq_weight:
the result doesn't print the combination of weights like I intended.
Sorry if the code is kind of messy, I'm still a beginner.
Thanks in advance!
The problem is not that your variable is no longer remembered. The problem is that you perform an exact comparison between floating-point numbers. In programming, floating-point numbers are what represent "decimal" numbers, but they are not an exact representation of your numbers; they are only accurate up to a limited precision.
Let's do some basic maths in Python.
>>> a = 0.2 + 0.1
>>> a
0.30000000000000004
>>> a == 0.3
False
As you can see, something weird is clearly happening here. But this is just how floating-point arithmetic works.
Now that we have explained that, what should you do to make your program work? There are multiple solutions.
One way to deal with it is to compare your numbers against a small fixed tolerance, i.e.
if abs(sum(item) - seq_weight) < 0.00001:
Another way to deal with this is using fixed precision decimal objects, but that can be more difficult than you think it is. https://docs.python.org/3/library/decimal.html
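Python's standard library also provides math.isclose for exactly this kind of tolerance comparison, which avoids picking an epsilon by hand (its default is a relative tolerance of 1e-09). A small illustration:

```python
import math

total = sum([0.1] * 10)  # ten additions accumulate rounding error

print(total == 1.0)              # False: exact comparison fails
print(math.isclose(total, 1.0))  # True: tolerance-based comparison succeeds
```

In the question's code, `if math.isclose(sum(item), seq_weight):` should work as a drop-in replacement for the exact comparison.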

Implementing a non-infinite generator

I am borrowing some code from here but I have no idea how to get the code to not run infinitely.
Specifically, what I don't know how to do is reference previously yielded digits and check whether the current digit has already been returned. I want the function to stop once it starts looping. Is there a way to reference previously yielded values?
Here's a non generator solution
# When doing long division on a pair of numbers (a, b), as you proceed through the algorithm
# you get new pairs (rem, b). You stop the algorithm when rem == 0 or when the remainder is
# already present in the existing list of digits.
def decimals(number):
    dividend = 1
    digit = []
    while dividend:
        if dividend // 10 in digit:
            return digit
        digit += [dividend // number]
        dividend = dividend % number * 10
A more streamlined version, still not a generator, would be
def decimals(number):
    dividend = 1
    digit = []
    while dividend and dividend // 10 not in digit:
        digit += [dividend // number]
        dividend = dividend % number * 10
    return digit
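A true generator falls out naturally if you track the remainders rather than the yielded digits: the expansion starts to cycle exactly when a remainder repeats, so there is no need to re-scan past output. A sketch:

```python
def decimals(number):
    """Yield the decimal digits of 1/number, stopping once a remainder
    repeats (the digits would cycle forever from that point)."""
    seen = set()
    dividend = 1
    while dividend and dividend not in seen:
        seen.add(dividend)
        yield dividend * 10 // number
        dividend = dividend * 10 % number

print(list(decimals(4)))  # [2, 5]             -> 1/4 = 0.25
print(list(decimals(7)))  # [1, 4, 2, 8, 5, 7] -> 1/7 = 0.142857142857...
```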

Format number with variable amount of significant figures depending on size

I've got a little function that displays a formatted amount of some number value. The intention is to show a "commonsense" amount of significant figures depending on the size of the number. So for instance, 1,234 comes out as 1.2k while 12,345 comes out as 12k and 123,456 comes out as 123k.
So in other words, I want to show a single decimal when on the lower end of a given order of magnitude, but not for larger values where it would just be useless noise.
I need this function to scale all the way from 1 to a few billion. The current solution is just to branch it:
-- given `current`
local text = (
    current > 9999999999 and ('%dB') :format(current/1000000000) or
    current > 999999999  and ('%.1fB'):format(current/1000000000) or
    current > 9999999    and ('%dM') :format(current/1000000) or
    current > 999999     and ('%.1fM'):format(current/1000000) or
    current > 9999       and ('%dk') :format(current/1000) or
    current > 999        and ('%.1fk'):format(current/1000) or
    ('%d'):format(current) -- show values < 1000 floored
)
textobject:SetText(text)
-- aligned for readability; note Lua numeric literals cannot contain thousands separators
Which I feel is very ugly. Is there some elegant formula for rounding numbers in this fashion without just adding another (two) clauses for every factor of 1000 larger I need to support?
I didn't realize how simple this actually was until a friend gave me a solution (which checked the magnitude of the number based on its length). I converted that to use log to find the magnitude, and now have an elegant working answer:
local suf = {'k','M','B','T'}
local function clean_format(val)
    if val == 0 then return '0' end -- fix an error caused by attempting to take log10(0)
    local m = math.min(#suf, math.floor(math.log10(val)/3)) -- find the magnitude, capped at the largest suffix we know
    local n = val / 1000 ^ m -- calculate the displayed value
    local fmt = (m == 0 or n >= 10) and '%d%s' or '%.1f%s' -- choose whether to show a decimal place based on size and magnitude
    return fmt:format(n, suf[m] or '')
end
Scaling it up to support a greater factor of 1000 is as easy as putting the next entry in the suf array.
Note: for language-agnostic purposes, Lua arrays are 1-based, not zero based. The above solution would present an off-by-one error in many other languages.
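The log-based approach translates almost line for line to other languages; here is a hypothetical Python port (0-based indexing, so the empty suffix occupies slot 0):

```python
import math

SUFFIXES = ['', 'k', 'M', 'B', 'T']

def clean_format(val):
    if val == 0:
        return '0'  # log10(0) is undefined, so handle zero up front
    # Magnitude in steps of 1000, capped at the largest suffix we know.
    m = min(len(SUFFIXES) - 1, int(math.log10(val)) // 3)
    n = val / 1000 ** m  # the displayed value
    fmt = '%d%s' if (m == 0 or n >= 10) else '%.1f%s'
    return fmt % (n, SUFFIXES[m])

print(clean_format(1234), clean_format(12345), clean_format(123456))  # 1.2k 12k 123k
```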
Put your ranges and their suffixes inside a table.
local multipliers = {
    {10^10, 'B', 10^9},
    {10^9,  'B', 10^9, true},
    {10^7,  'M', 10^6},
    {10^6,  'M', 10^6, true},
    {10^4,  'k', 10^3},
    {10^3,  'k', 10^3, true},
    {1,     '',  1},
}
The optional true value at the 4th position of alternate entries selects the %.1f placeholder. The third index is the divisor.
Now, iterate over this table (using ipairs) and format accordingly:
function MyFormatter( current )
    for i, t in ipairs( multipliers ) do
        if current >= t[1] then
            local sHold = (t[4] and "%.1f" or "%d")..t[2]
            return sHold:format( current/t[3] )
        end
    end
end

How to convert a decimal (xx.xx) to binary

This isn't necessarily a programming question, but I'm sure you folks know how to do it. How would I convert a floating-point number into binary?
The number I am looking at is 27.625.
27 would be 11011, but what do I do with the .625?
On paper, a good algorithm to convert the fractional part of a decimal number is the "repeated multiplication by 2" algorithm (see details at http://www.exploringbinary.com/base-conversion-in-php-using-bcmath/, under the heading "dec2bin_f()"). For example, 0.8125 converts to binary as follows:
1. 0.8125 * 2 = 1.625
2. 0.625 * 2 = 1.25
3. 0.25 * 2 = 0.5
4. 0.5 * 2 = 1.0
The integer parts are stripped off and saved at each step, forming the binary result: 0.1101.
If you want a tool to do these kinds of conversions automatically, see my decimal/binary converter.
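The repeated-multiplication algorithm is only a few lines in code; a sketch in Python (max_bits is a guard for fractions like 0.1 whose binary expansion never terminates):

```python
def frac_to_binary(frac, max_bits=24):
    """Binary digits of 0 <= frac < 1 by repeated doubling: the integer
    part peeled off at each step is the next bit."""
    bits = []
    while frac and len(bits) < max_bits:
        frac *= 2
        bit = int(frac)     # integer part is the next binary digit
        bits.append(str(bit))
        frac -= bit         # keep only the fractional part
    return ''.join(bits)

print(frac_to_binary(0.8125))  # 1101, matching the worked example above
print(frac_to_binary(0.625))   # 101, so 27.625 is 11011.101 in binary
```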
Assuming you are not thinking about inside a PC, just thinking about binary vs decimal as physically represented on a piece of paper:
You know .1 in binary should be .5 in decimal, so the .1's place is worth .5 (1/2)
the .01 is worth .25 (1/4) (half of the previous one)
the .001 is worth (1/8) (Half of 1/4)
Notice how the denominators progress just like the place values to the left of the decimal point do--the standard powers-of-two pattern? The next would be 1/16...
So you start with your .625, is it higher than .5? Yes, so set the first bit and subtract the .5
.1 binary with a decimal remainder of .125
Now you have the next spot, it's worth .25dec, is that less than your current remainder of .125? No, so you don't have enough decimal "Money" to buy that second spot, it has to be a 0
.10 binary, still .125 remainder.
Now go to the third position, etc. (Hint: I don't think there will be too much etc.)
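The "binary money" walk above can be sketched directly as well: step down through the place values 1/2, 1/4, 1/8, ... and buy each bit you can afford (the function name is illustrative):

```python
def frac_to_binary_greedy(frac, max_bits=16):
    """Build the binary fraction by subtracting place values 1/2, 1/4, ..."""
    bits, place = [], 0.5
    for _ in range(max_bits):
        if frac >= place:       # can we afford this bit?
            bits.append('1')
            frac -= place
        else:
            bits.append('0')
        place /= 2
        if frac == 0:           # nothing left to spend
            break
    return ''.join(bits)

print(frac_to_binary_greedy(0.625))  # 101
```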
There are several different ways to encode a non-integral number in binary. By far the most common type are floating point representations, especially the one codified in IEEE 754.
The code that works for me is below; you can use it to convert any double value:
private static String doubleToBinaryString( double n ) {
    String val = Integer.toBinaryString((int)n) + "."; // Set up the result string with the integer part
    String newN = "0" + (""+n).substring((""+n).indexOf(".")); // Isolate the fractional part
    n = Double.parseDouble(newN);
    while ( n > 0 ) {      // While the fraction is greater than zero
        double r = n * 2;  // Multiply the current fraction (n) by 2
        if ( r >= 1 ) {    // If the ones-place digit >= 1
            val += "1";    // Concat a "1" to the end of the result string (val)
            n = r - 1;     // Remove the 1 from the current fraction (n)
        } else {           // If the ones-place digit == 0
            val += "0";    // Concat a "0" to the end of the result string (val)
            n = r;         // Set the current fraction (n) to the new fraction
        }
    }
    return val; // Return the string result with all appended binary digits
}