Gnuplot factorial calculation output - function

I wanted to test a recursive function in gnuplot.
The following is the function:
factorial(n)= ((n==1?sprintf("1"):sprintf("%f",n*factorial(n-1))))
When I tested factorial(100), it looked fine:
93326215443944102188325606108575267240944254854960571509166910400407995064242937148632694030450512898042989296944474898258737204311236641477561877016501813248.000000
To make the number an integer, I changed the sprintf format like this:
factorial(n)= ((n==1?sprintf("1"):sprintf("%d",n*factorial(n-1))))
But the result is strange; it seems to be out of range for the integer size:
-2147483648
So I changed the function to a real-number format with no digits after the decimal point:
factorial(n)= ((n==1?sprintf("1"):sprintf("%.0f",n*factorial(n-1))))
But the result is even stranger:
-75703234367175440481992733883343393025021587121454605713387911182978138051561570016048371488788270318477285688499861254195149884149214115360733197762560
Could you explain what is happening?
Thanks,

The largest integer that can be represented in 32 bits is 2147483647.
So a 32-bit integer (gnuplot through version 5.2) runs out of bits between 12! and 13!
A 64-bit integer (gnuplot 5.4) runs out of bits between 20! and 21!
If you use double precision floating point arithmetic it can hold larger numbers, but the number of bits still limits the precision. A 64-bit float has only 53 bits of mantissa, so it can represent every integer exactly only up to 2^53 (about 9.0e15), which is still smaller than the range of a 64-bit integer.
Note that all of your example gnuplot code uses integer arithmetic if n is an integer. Changing to a floating point format for printing doesn't change that.
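To see where each representation gives out, here is a quick cross-check. This is my own illustration in Python (chosen because its integers are arbitrary precision), not gnuplot code:

import math

# Compare exact factorials against the limits of each representation.
for n in range(12, 23):
    f = math.factorial(n)  # exact, arbitrary precision
    print(f"{n:2}!  fits int32: {f <= 2**31 - 1}  "
          f"fits int64: {f <= 2**63 - 1}  "
          f"below 2**53 (double's exact-integer range): {f < 2**53}")

The int32 column flips between 12! and 13!, the int64 column between 20! and 21!, and the 2**53 column between 18! and 19!.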

Not sure what your final goal is... But just for fun and feasibility, you can calculate Factorial(100) with gnuplot.
However, you will get the result as a string. I haven't tested what the limits for the arguments a and d would be; for a it would be the maximum string length, and for d probably the integer limit (32-bit or 64-bit).
Code:
### factorial for large numbers (output as string!)
reset session
# a: integer number string
# d: integer number
Multiply(a,d) = (_u=0, _tmp='', _n=strlen(a), sum [_i=1:_n] (_j=_n-_i+1, \
_m1=a[_j:_j]*d, _d1=(_m1+_u)%10, _u=(_m1+_u)/10, \
_tmp=sprintf("%d%s",_d1,_tmp),0), _u ? sprintf("%d%s",_u,_tmp) : _tmp)
Factorial(n) = (_tmp="1", sum [_j=1:n] ( _tmp=Multiply(_tmp,_j), 0), _tmp)
print Factorial(100)
### end of code
Result: 100! (actually, 24 zeros at the end)
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
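As a sanity check of the digit string outside gnuplot, Python's arbitrary-precision integers reproduce it (my own cross-check, not part of the gnuplot answer):

import math

s = str(math.factorial(100))
print(s)                            # same 158-digit string as above
print(len(s))                       # 158 digits
print(len(s) - len(s.rstrip('0')))  # 24 trailing zeros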

Related

Why is the byte_size of a huge number in an Erlang binary 1?

In Erlang, how come the byte size of a huge number represented as a binary is one? I'd have thought it should be more.
byte_size(<<9999999999994345345645252525254524352425252525245425422222222222222222524524352>>).
1
You're not specifying an integer size, so the value narrows to just a single byte, as you can see using the Erlang shell:
1> <<9999999999994345345645252525254524352425252525245425422222222222222222524524352>>.
<<"#">>
If you specify the proper size, which appears to be 263 bits, you get the right answer:
2> byte_size(<<9999999999994345345645252525254524352425252525245425422222222222222222524524352:263>>).
33
If you want to convert an arbitrary integer to its binary representation, you should use binary:encode_unsigned:
7> byte_size(binary:encode_unsigned(9999999999994345345645252525254524352425252525245425422222222222222222524524352)).
33
encode_unsigned(Unsigned) -> binary()
Types:
    Unsigned = integer() >= 0
Same as encode_unsigned(Unsigned, big).

encode_unsigned(Unsigned, Endianness) -> binary()
Types:
    Unsigned = integer() >= 0
    Endianness = big | little
Converts a positive integer to the smallest possible representation in a binary digit representation, either big endian or little endian.
In Erlang this data type is a binary. A binary is a sequence of 8-bit elements (bytes).
You entered only one value into this binary, so the resulting value is actually this value modulo 256. If you enter just the binary in the shell you'll get:
1> <<9999999999994345345645252525254524352425252525245425422222222222222222524524352>>.
<<"#">>
The ASCII value of @ is 64, which means that this_long_num modulo 256 = 64.
As you might have already understood, this means the binary holds only 1 byte - which is why byte_size/1 of this binary is 1.
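Both effects are easy to reproduce in any language with big integers. A sketch in Python (my own illustration of the same arithmetic, not Erlang itself):

n = 9999999999994345345645252525254524352425252525245425422222222222222222524524352

# <<N>> with the default 8-bit segment keeps only N mod 256:
print(n % 256, chr(n % 256))     # 64 '@'

# binary:encode_unsigned/1 picks the smallest byte width that fits:
nbytes = (n.bit_length() + 7) // 8
print(n.bit_length(), nbytes)    # 263 bits -> 33 bytes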

setting exact bit number for binary output in java

Hello, I was wondering how to set the number of bits for a number in Java. E.g., for the integer 2, I want an 8-bit binary representation, so 00000010. I have used Integer.toBinaryString, but that only prints it as 10. Thanks in advance.
You can use System.out.format() instead of System.out.println() for this purpose. It lets you specify the format in which you want to print your integer, float, or any other data type. Try the following code:
int x = 2;
String binary = Integer.toBinaryString(x);
System.out.format("%08d",Integer.parseInt(binary));
In the code above, note the last line. We want to print an integer, so we used %d, and we wanted the value zero-padded to 8 digits, so we used 08. Thus the format becomes %08d. See the java.util.Formatter documentation for more information about formatting.
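The %08d trick works here only because a binary string of a non-negative byte-sized value happens to parse as a decimal int. Zero-padding the string directly avoids the detour; a sketch of the same idea in Python for comparison (my illustration, not the Java API):

x = 2
print(format(x, '08b'))          # 00000010 -- pad to 8 binary digits directly
print(format(-2 & 0xFF, '08b'))  # 11111110 -- mask first to handle negatives

In Java, the usual equivalent is to left-pad the result of Integer.toBinaryString with zeros (and mask with & 0xFF first if the value may be negative).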

Verilog floating point to binary conversion

I am trying to convert a signed floating-point number in Verilog to a signed 24-bit binary value. For example:
-0.0065 would become: 24'b100000000110101001111110
0.0901 would become: 24'b000001011100010000110010
Is this correct?
Thank you
Taking the fractional decimal 0.0901 and converting to a fixed-point value with 2 integer bits and 22 fractional bits (24 bit total):
Ruby syntax is used for the maths (you can cut and paste into irb (interactive ruby), a command-line tool):
i = (0.0901 * 2**22) # 377906.7904
# truncate to integer value
i = i.to_i # 377906
# Convert to string using binary (base 2)
i.to_s(2) # "1011100010000110010"
To add the leading zeros (24-bit length), right-justify and pad with 0's:
i.to_s(2).rjust(24, '0') # "000001011100010000110010"
# Convert to Hex (base 16)
i.to_s(16) # "5c432"
Signed numbers are a bit more problematic; the easiest way is to calculate the positive value and then take the two's complement:
(0.0065 * 2**22).to_i.to_s(2).rjust(24, '0')
=> "000000000110101001111110"
Twos complement
"000000000110101001111110"
"111111111001010110000001" # Ones complement (bit invert)
"111111111001010110000001" + 1
"111111111001010110000010" #Twos complement
You had 24'b100000000110101001111110, which is just the positive number with the MSB set to 1; that is not how signed numbers normally work. The format you have used is sign-magnitude, and you cannot just feed that into a multiplier (as per your previous question).
NB: I have also skipped over the quantisation effect of converting to fixed point. Your coefficient scaled by 2**22 was 377906.7904, but we just take the integer part, giving an error of 0.7904, which may affect your filter performance.
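Putting the recipe together (scale, truncate, then mask to two's complement) gives a compact encoder. A sketch in Python rather than Ruby, assuming the same 24-bit format with 22 fractional bits as above:

def to_fixed(x, frac_bits=22, width=24):
    # Encode x as a zero-padded two's-complement fixed-point bit string.
    i = int(x * 2**frac_bits)  # scale and truncate (quantisation error happens here)
    i &= (1 << width) - 1      # wrap negatives into two's complement
    return format(i, '0%db' % width)

print(to_fixed(0.0901))   # 000001011100010000110010
print(to_fixed(-0.0065))  # 111111111001010110000010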

Dealing with 128-bit numbers in MySQL

My table has a column which I know is a 128-bit unsigned number, stored in base-10 as a varchar e.g.
"25495123833603613494099723681886"
I know the bit-fields in this number, and I want to use the top 64 bits in GROUP BY.
In my example, the top 64-bits would be 1382093432409
I have not found a way, but I have eliminated some leads:
cannot convert to NUMERIC/DECIMAL because these are 64-bit too
cannot use LEFT() because 1<<64 is not base-10 aligned
CONV(N,10,16) would allow LEFT() but CONV() works at 64-bit precision only too :(
How can I get the BIGINT that is the top 64-bits of this number, so I can use that in the GROUP BY?
You can get at least part of the way there using floating-point math.
MySQL uses double-precision floating-point for non-integer math. This gives you about 50 bits of reliable precision for integral values - while this isn't quite 64, it's pretty close. You can use floating-point math to extract the top bits here using the expression:
FLOOR(n / POW(2,64))
where n is the name of the column.
This approach runs out of steam if more than 50 bits are needed, though, as even double-precision floats don't have enough significant bits to represent the whole thing, and trying to get any more using subtraction fails due to cancellation. (The extra bits are lost as soon as the string is converted to a number; they can't be brought back without doing something entirely different.)
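If you can do the split outside MySQL, arbitrary-precision integers make it exact. A sketch in Python using the value from the question (my own illustration):

n = int("25495123833603613494099723681886")  # the 128-bit value from the varchar

print(n >> 64)                # 1382093432409 -- exact top 64 bits
print(n & ((1 << 64) - 1))    # exact bottom 64 bits
print(int(float(n) / 2**64))  # the FLOOR(n / POW(2,64)) float route; reliable
                              # only while the top field needs ~50 bits or fewer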

When two 16-bit signed values are multiplied, what should be the size of the result?

I have faced an interview question related to embedded systems and C/C++. The question is:
If we multiply 2 signed (2's complement) 16-bit values, what should be the size of the resulting data?
I started by attempting an example of multiplying two signed 4-bit numbers: if we multiply +7 and -7, we end up with -49, which requires 7 bits. But I could not formulate a general relation.
I think I need to understand binary multiplication more deeply to solve this question.
First, an n-bit signed integer holds a value in the range -(2^(n-1))..+(2^(n-1))-1.
For example, for n=4, the range is -(2^3)..(2^3)-1 = -8..+7.
The range of the multiplication result is -8*+7 .. -8*-8 = -56..+64.
+64 is more than 2^6-1; it is 2^6 = 2^(2n-2)! You'll need 2n-1 bits to store such a positive integer as an unsigned value.
Unless you're doing a proprietary encoding (see the next paragraph), you'll need 2n bits:
one bit for the sign, and 2n-1 bits for the absolute value of the multiplication result.
If M is the result of the multiplication, you can store -M or M-1 instead. This can save you 1 bit.
That depends on context. In C/C++, all operands smaller than int are promoted to int before the multiply, so if int is 32 bits, the result will be a signed 32-bit integer.
However, if you assign it back to a 16-bit integer, it will truncate, leaving only the bottom 16 bits of the two's-complement result.
So if your definition of "result" is the intermediate immediately following the multiply, the answer is the size of int. If you define it as the value after storing back to a 16-bit variable, the answer is the size of the 16-bit integer type.
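To check the 2n claim concretely, the extreme products can be computed directly. A sketch in Python (my own verification, not from either answer):

def min_signed_width(v):
    # Smallest two's-complement width that can hold v.
    w = 1
    while not -(1 << (w - 1)) <= v <= (1 << (w - 1)) - 1:
        w += 1
    return w

for n in (4, 8, 16):
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    # lo*lo is the largest positive product, lo*hi the most negative.
    need = max(min_signed_width(lo * lo), min_signed_width(lo * hi))
    print(f"n={n:2}: worst-case product {lo*lo} needs {need} bits (2n = {2*n})")

The positive extreme (-2^(n-1))^2 = 2^(2n-2) is what forces the full 2n bits, exactly as derived above.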