I need to show the user the value of e in both of the forms above, but I'm new to computer programming/Octave.
binary: not sure if there is a function to convert e to binary, but my guess would be e = 10.1011011111100001010...
decimal: e = 2*10^0 + ?? <- not sure how to express the 7 after the decimal point.
The decimal expansion of e is:
e = 2*10^0 + 7*10^-1 + 1*10^-2 + 8*10^-3 + 2*10^-4 + 8*10^-5 + ... = 2.71828...
In Octave you can get the decimal representation like this:
>> e
ans = 2.7183
and the binary representation like this:
>> p=20; dec2bin(floor(e*2^p))
ans = 1010110111111000010101
p is the number of digits required after the point. When displaying the final result, the point needs to be inserted in front of the p-th digit from the right: e = 10.10110111111000010101 (base-2).
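For comparison, here is the same scale-floor-and-convert trick sketched in Python rather than Octave:

import math

p = 20                              # number of binary digits wanted after the point
bits = bin(math.floor(math.e * 2**p))[2:]
# insert the point p digits from the right
print(bits[:-p] + "." + bits[-p:])  # -> 10.10110111111000010101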
When saving a double to a string there is some loss of precision: even if you use a very large number of digits, the conversion may not be reversible. That is, if you convert a double x to a string sx and then convert back, you get a number x' which may not be bitwise equal to x. This can cause problems, for instance when checking for differences in a battery of tests. One possibility is to use a binary format (for instance Octave's native binary format, or HDF5), but I want to store the number in a text file, so I need a conversion to a string. I have a working solution, but I ask if there is some standard for this, or a better solution.
In C/C++ you could cast the double to some integer type like char* and then convert each byte to a two-character hex string with printf("%02x",c[j]). Then, for instance, PI would be converted to a string of length 16: 54442d18400921fb. The problem with this is that if you read the hex you don't get any idea of which number it is. So I would be interested in some mix, for instance pi -> 3.14{54442d18400921fb}. The first part is a (probably low-precision) decimal representation of the number (typically I would use a "%g" output conversion) and the string in braces is the lossless hexadecimal representation.
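For illustration, the same byte-level idea can be sketched in Python (the helper names are mine; note that struct's big-endian byte order gives 400921fb54442d18, whereas the little-endian 32-bit words above give 54442d18400921fb):

import struct

def dbl_to_hex(x):
    # pack the IEEE-754 double big-endian and show its 16 hex digits
    return struct.pack(">d", x).hex()

def hex_to_dbl(s):
    # exact inverse: parse the 16 hex digits back into the same double
    return struct.unpack(">d", bytes.fromhex(s))[0]

x = 3.141592653589793
assert hex_to_dbl(dbl_to_hex(x)) == x  # lossless round trip
print(dbl_to_hex(x))                   # -> 400921fb54442d18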
EDIT: I posted the code as an answer.
Following the ideas already suggested in the post, I wrote the following functions, which seem to work.
function s = dbl2str(d)
  ## Low-precision decimal, then the lossless hex of the two 32-bit words
  z = typecast(d,"uint32");
  s = sprintf("%.3g{%08x%08x}\n",d,z);
endfunction

function d = str2dbl(s)
  k1 = index(s,"{");
  k2 = index(s,"}");
  ## Check that there is a balanced {} pair or none at all
  assert((k1==0) == (k2==0));
  if k1>0; assert(k2>k1); endif
  if (k1==0)
    ## If there is no {hexa} part, convert with loss
    d = str2double(s);
  else
    ## Lossless: parse the two 32-bit words back and reassemble the double
    ss = substr(s,k1+1,k2-k1-1);
    z = uint32(sscanf(ss,"%8x",2));
    d = typecast(z,"double");
  endif
endfunction
Then I have
>> spi=dbl2str(pi)
spi = 3.14{54442d18400921fb}
>> pi2 = str2dbl(spi)
pi2 = 3.1416
>> pi2-pi
ans = 0
>> snan = dbl2str(NaN)
snan = NaN{000000007ff80000}
>> nan1 = str2dbl(snan)
nan1 = NaN
A further improvement would be to use another type of encoding, for instance Base64 (as suggested by @CrisLuengo in a comment), which would reduce the length of the binary part from 16 to 11 characters.
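For what it's worth, Python has this kind of lossless text round trip built in via float.hex, which can serve the same purpose (a sketch for comparison, not the Octave code above):

x = 3.141592653589793
s = "%.3g{%s}" % (x, x.hex())   # -> '3.14{0x1.921fb54442d18p+1}'
y = float.fromhex(s[s.index("{") + 1:s.index("}")])
assert y == x                   # bit-exact round trip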
I want to print a variable val to d decimal places, where d is input by the user. The value of d is not known while writing the program.
I tried the following:
printf ('%.df',val);
printf ('%.{%i}f',d,val);
It doesn't seem to work, though.
You can use the special format specifier * to take the precision from an argument. E.g.:
octave:1> val = exp(1)
val = 2.7183
octave:2> decimalPlaces = 3;
octave:3> fprintf('The value to 3 decimal places is: %.*f \n', decimalPlaces, val)
The value to 3 decimal places is: 2.718
The MATLAB documentation has a good section on this.
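Python's printf-style formatting accepts the same * specifier, shown here only for comparison:

val = 2.718281828459045
decimalPlaces = 3
print("%.*f" % (decimalPlaces, val))  # -> 2.718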
You'll need to generate the format string with the proper value first; it won't be replaced within fprintf:
d = 5;
val = 7.123456789
fmt_str = ['%.' num2str(d) 'f']; % fmt_str = '%.df', d replaced with number
fprintf(fmt_str, val);
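The same build-the-format-string approach works in Python too, for comparison:

d = 5
val = 7.123456789
fmt_str = "%." + str(d) + "f"  # '%.5f' once d is substituted
print(fmt_str % val)           # -> 7.12346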
In MS Access VBA, I convert a hex string to decimal by prefixing the string with "&h":
?CLng("&h1234")
4660
?CLng("&h80000000")
-2147483648
What should I do to convert it to an unsigned integer?
Using CDbl doesn't work either:
?CDbl("&h80000000")
-2147483648
Your version seems like the best answer, but can be shortened a bit:
Function Hex2Dbl(h As String) As Double
    Hex2Dbl = CDbl("&h0" & h) ' Overflow error if more than 2 ^ 64
    If Hex2Dbl < 0 Then Hex2Dbl = Hex2Dbl + 4294967296# ' 16 ^ 8 = 4294967296
End Function
Double will have rounding precision error for most values above 2 ^ 53 - 1 (about 16 decimal digits), but Decimal can be used for values up to 2 ^ 96 - 1 (Decimal uses 16 bytes, but only 12 of them for the number).
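A quick illustration of that 2 ^ 53 limit, using Python for the check:

assert 2.0**53 == 2**53        # still exactly representable as a double
assert 2.0**53 + 1 == 2.0**53  # 2**53 + 1 rounds back down: precision is lost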
Function Hex2Dec(h)
    Dim L As Long: L = Len(h)
    If L < 16 Then ' CDec results in an Overflow error for hex numbers above 16 ^ 8
        Hex2Dec = CDec("&h0" & h)
        If Hex2Dec < 0 Then Hex2Dec = Hex2Dec + 4294967296# ' 2 ^ 32
    ElseIf L < 25 Then
        ' split off the low 9 hex digits and recurse on the high part
        Hex2Dec = Hex2Dec(Left$(h, L - 9)) * 68719476736# + CDec("&h" & Right$(h, 9)) ' 16 ^ 9 = 68719476736
    End If
End Function
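The sign correction these functions apply can be sketched in Python terms (signed32 is a hypothetical helper mimicking how 8 hex digits get read as a signed 32-bit value):

def signed32(h):
    # interpret 8 hex digits the way a signed 32-bit Long would
    u = int(h, 16)
    return u - 2**32 if u >= 2**31 else u

v = signed32("80000000")  # -> -2147483648, the value CLng produces
if v < 0:
    v += 2**32            # same correction as adding 4294967296#
print(v)                  # -> 2147483648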
If you want to go higher than 2^31 you could use Decimal or LongLong. LongLong and CLngLng only work on 64-bit platforms though. Since I only have 32-bit Office at the moment, this is for Decimal and CDec.
There seems to be an issue when converting 8-digit hex numbers, because apparently a signed 32-bit type is used somewhere in the process, which results in a sign mistake even though Decimal could handle the number.
' only for positive numbers
Function myHex2Dec(hexString As String) As Variant
    ' cut off "&h" if present
    If Left(hexString, 2) = "&h" Or Left(hexString, 2) = "&H" Then hexString = Mid(hexString, 3)
    ' cut off leading zeros
    While Left(hexString, 1) = "0"
        hexString = Mid(hexString, 2)
    Wend
    myHex2Dec = CDec("&h" & hexString)
    ' correct the value for exactly 8 digits
    If myHex2Dec < 0 And Len(hexString) = 8 Then
        myHex2Dec = CDec("&h1" & hexString) - 4294967296#
    ' raise an overflow for 16 digits
    ElseIf myHex2Dec < 0 Then
        Error (6) ' overflow
    End If
End Function
Test:
Sub test()
Dim v As Variant
v = CDec("&H80000000") '-2147483648
v = myHex2Dec("&H80000000") '2147483648
v = CDec("&H7FFFFFFFFFFFFFFF") '9223372036854775807
v = myHex2Dec("&H7FFFFFFFFFFFFFFF") '9223372036854775807
v = CDec("&H8000000000000000") '-9223372036854775808
v = myHex2Dec("&H8000000000000000") 'overflow
End Sub
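The "&h1" prefix trick is plain arithmetic: prepending a 1 adds 16^8 to an 8-digit value, and the subtraction removes it again. A quick check in Python:

h = "80000000"
assert int("1" + h, 16) - 16**len(h) == int(h, 16)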
With the remark from @arcadeprecinct I was able to create a function for it:
Function Hex2UInt(h As String) As Double
    Dim dbl As Double: dbl = CDbl("&h" & h)
    If dbl < 0 Then
        dbl = CDbl("&h1" & h) - 4294967296#
    End If
    Hex2UInt = dbl
End Function
Some example output:
?Hex2UInt("1234")
4660
?Hex2UInt("80000000")
2147483648
?Hex2UInt("FFFFFFFFFFFF")
281474976710655
The maximum value displayed as an integer is 0x38D7EA4C67FFF (10^15 - 1); above that, VBA switches to scientific notation:
?Hex2UInt("38D7EA4C67FFF")
999999999999999
?Hex2UInt("38D7EA4C68000")
1E+15
A proposal, with the result in h:
sh = "&H80000000"
h = CDbl(sh)
If h < 0 Then
    fd = Hex$(CDbl(Left(sh, 3)) - 8)
    sh = "&h" & fd & Mid(sh, 4)
    h = CDbl(sh) + 2 ^ 31
End If
I found my way here looking for a Word VBA solution, but what I've discovered might also apply to other Office apps. I realise that this is a very old question and that there are some ingenious solutions to it, but I'm surprised that nobody has explained the root cause of the problem, and hence what may be a one-line solution in many cases. When I was an assembly language programmer in the 1970s, working more in binary and octal than anything else, this was a very common issue, known as "two's complement".
I'll explain it in its simplest form, from first principles, by the way it works on a byte, so that it's understandable even by absolute beginners.
Normally, the most significant bit is bit-7 at the left, which has a value of 128, and the least significant bit is bit-0 at the right, which has a value of 1, so the highest possible value if all bits are set is 255. In two's complement, however, bit-7 is the "sign bit". This leaves only the seven bits from 0 to 6 to hold the actual value, giving them a maximum value of 127. The sign bit has a value of -128. If all 8 bits are set, the byte value becomes (-128 + 127), which gives the negative decimal value of -1. The two's-complement range of values for 8 bits is from -128 (with only bit-7 set) to +127 (with only bits 0 to 6 set). If the sign bit is set, the value of the byte is -128 plus the positive value of whatever is stored in bits 0 to 6. E.g. binary 11111101 = hex FD = decimal (-128 + 125) = -3, and 10110100 = hex B4 = decimal (-128 + 52) = -76.
Two's complement applies the same effect at each increasing 8-bit boundary: for 16 bits, the sign bit is bit-15 (with a value of -32768) and the positive value is held in bits 0 to 14, giving a 16-bit range from -32768 to 32767. Similarly, the 24-bit range is from -8388608 to 8388607, and so on.
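The same reading can be sketched in Python for a single byte:

b = 0xFD                            # binary 11111101
value = b - 256 if b & 0x80 else b  # sign bit set -> subtract 2**8
print(value)                        # -> -3, matching the worked example above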
I recently encountered this conversion problem in some code that was converting hexadecimal RGB colour values which originated as a 6-character text string in a Word document. Having successfully processed tens of thousands of these, I was suddenly presented with an "out of range" error pop-up. The string that had caused the problem was "008080". The command ... = Val("&H" + variable) had converted this to -32896, an invalid value to pass as a colour property. The Val() function had removed the leading zeros and treated 8080 as a signed two's-complement 16-bit value.
In my case the solution was simple: because I know that I'll always be dealing with 24-bit, 6-character hex values, I just added an extra "1" text character to the front of the hex code (thus making it longer than 16 bits) and then, in effect, subtracted the same value. So, with the original 6-character hex RGB code held in the variable HexCode, I get the right decimal result using the command
DecCode = Val("&H" + "1" + HexCode) - Val("&H" + "1000000")
Problem solved, by just adding a little extra code to an existing line. I hope that my explanation of the cause of the problem helps others to devise their own solutions where it's appropriate.
I'm using wxMaxima 15.08.1 (Windows 10), and when I input this equation
a*x+b*y+c*z=0;
I get this:
c z + b y + a x = 0
Why does it change the position of the terms in the expression? It seems to be in some kind of descending order.
Then, if I type another equation giving all coefficients the same unknown, Maxima outputs it just right:
a*x^2+b*x+c=0;
a x^2 + b x + c = 0
Maxima has its own idea of the canonical ordering of terms in "+" and "*" expressions. The canonical ordering is expressed by the function ordergreatp (equivalently orderlessp) which tells if one term comes after (respectively, before) another term. If you apply sort to a list of terms, they are sorted, by default, according to the canonical order.
By default, "+" terms are displayed in reverse order (reverse of the canonical order). When the global variable powerdisp is true, "+" terms are displayed in the canonical order. You can decide whether one order or the other works better for you.
(%i2) powerdisp;
(%o2) false
(%i3) a*x + b*y + c*z;
(%o3) c z + b y + a x
(%i4) a*x^2 + b*x + c;
(%o4) a x^2 + b x + c
(%i7) powerdisp : true $
(%i8) a*x + b*y + c*z;
(%o8) a x + b y + c z
(%i9) a*x^2 + b*x + c;
(%o9) c + b x + a x^2
Question
Let's say I have a string or array which represents a number in base N, N>1, where N is a power of 2. Assume the number being represented is larger than the system can handle as an actual number (an int or a double etc).
How can I convert that to a decimal string?
I'm open to a solution for any base N which satisfies the above criteria (binary, hex, ...). That is, if you have a solution which works for at least one such base N, I'm interested :)
Example:
Input: "10101010110101"
Output: "10933"
It depends on the particular language. Some have native support for arbitrary-length integers, and others can use libraries such as GMP. After that it's just a matter of doing the lookup in a table for the digit value, then multiplying as appropriate.
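In Python, for instance, int is already arbitrary-precision, so the whole task is a single call for any base up to 36:

print(int("10101010110101", 2))  # -> 10933
print(int("38D7EA4C67FFF", 16))  # -> 999999999999999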
This is from a Python-based computer science course I took last semester that's designed to handle up to base-16.
def baseNTodecimal():
    # get the number as a string
    number = input("Please type a number: ")
    # convert it to all uppercase to match hexDigits (below)
    number = number.upper()
    # get the base as an integer
    base = int(input("Please give me the base: "))
    # the number of digits that we have to convert to base 10
    digits = len(number)
    base10 = 0
    # the first (rightmost) position of any base-N number is the 1's
    position = 1
    # set up a string so that the position of
    # each character matches the decimal
    # value of that character
    hexDigits = "0123456789ABCDEF"
    # for each 'digit' in the string, starting from the right
    for i in range(1, digits + 1):
        # find where it occurs in the string hexDigits
        digit = hexDigits.find(number[-i])
        # multiply the value by the base position
        # and add it to the base10 total
        base10 = base10 + (position * digit)
        print(number[-i], "is in the " + str(position) + "'s position")
        # increase the position by the base (e.g., 8's position * 2 = 16's position)
        position = position * base
    print("And in base10 it is", base10)
Basically, it takes the input as a string and then goes through it, adding up each "digit" multiplied by its positional weight. Each digit is checked for its index position in the string hexDigits, which is used as its numerical value.
Assuming the number that it returns is actually larger than the programming language supports, you could build up an array of Ints that represent the entire number:
[214748364, 8]
would represent 2147483648 (a number that a Java int couldn't handle).
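If the language really has no big-integer support, the classic trick is to keep the running total as an array of decimal digits and do the multiply-and-add by hand. A minimal sketch in Python (whose int doesn't actually need this; it only shows the idea):

def base_n_to_decimal_string(number, base):
    digit_values = "0123456789ABCDEF"
    total = [0]  # decimal digits of the total, least significant first
    for ch in number.upper():
        carry = digit_values.index(ch)
        # total = total * base + digit, one decimal digit at a time
        for i in range(len(total)):
            carry += total[i] * base
            total[i] = carry % 10
            carry //= 10
        while carry:
            total.append(carry % 10)
            carry //= 10
    return "".join(str(d) for d in reversed(total))

print(base_n_to_decimal_string("10101010110101", 2))  # -> '10933'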
Here's some PHP code I've just written:
function to_base10($input, $base)
{
    $result = 0;
    $length = strlen($input);
    // accumulate digit * base^position, scanning from the rightmost char
    for ($x = $length - 1; $x >= 0; $x--)
        $result += (int)$input[$x] * pow($base, ($length - 1) - $x);
    return $result;
}
It's dead simple: just a loop through every char of the input string.
This works with any base < 10, but it can easily be extended to support higher bases (A -> 10, B -> 11, etc.)
edit: oh didn't see the python code :)
yeah, that's cooler
I would choose a language which more or less natively supports math representation, like Lisp. I know it seems fewer and fewer people use it, but it still has its value.
I don't know if this is large enough for your usage, but the largest integer I could represent in my Common Lisp environment (CLISP) was 2^(2^20):
>> (expt 2 (expt 2 20))
In Lisp you can easily represent hex, dec, oct and bin as follows:
>> #b1010
10
>> #o12
10
>> 10
10
>> #x0A
10
You can write rationals in other bases from 2 to 36 with #nR
>> #36rABCDEFGHIJKLMNOPQRSTUVWXYZ
8337503854730415241050377135811259267835
For more information on numbers in Lisp, see the book Practical Common Lisp.