Setting an exact number of bits for binary output in Java

Hello, I was wondering how to set the number of bits when printing a number in binary in Java. E.g. for the integer 2, I want an 8-bit binary number, so 00000010. I have used Integer.toBinaryString, but that only prints it as 10. Thanks in advance.

You can use System.out.format() instead of System.out.println() for this purpose. It lets you specify the format in which you want to print your int, float, or other data type. Try the following code:
int x = 2;
String binary = Integer.toBinaryString(x);
System.out.format("%08d",Integer.parseInt(binary));
In the code above, note the last line. We want to print an integer, so we use %d, and the 08 flag zero-pads the value to a width of 8 digits; thus the format becomes %08d. This works here because the binary string contains only the digits 0 and 1. See the documentation of java.util.Formatter for more about format strings.
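One caveat with this decimal round-trip: Integer.parseInt overflows once the binary string is longer than ten digits (any value of 1024 or more already needs eleven). A sketch of an alternative that pads the binary string directly and avoids the round-trip:

```java
public class PadBinary {
    public static void main(String[] args) {
        int x = 2;
        // Right-align the binary string in a field of width 8, then
        // turn the padding spaces into zeros ('%s' has no zero-pad flag).
        String padded = String.format("%8s", Integer.toBinaryString(x)).replace(' ', '0');
        System.out.println(padded);  // 00000010
    }
}
```

The same line works unchanged for large values, e.g. with a width of 16 it prints 2000 as 0000011111010000.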

Related

Gnuplot factorial calculation output

I wanted to test a recursive function in gnuplot.
The function is the following:
factorial(n)= ((n==1?sprintf("1"):sprintf("%f",n*factorial(n-1))))
When factorial(100) is tested, the result looks fine:
93326215443944102188325606108575267240944254854960571509166910400407995064242937148632694030450512898042989296944474898258737204311236641477561877016501813248.000000
To make the number an integer, I changed the sprintf format:
factorial(n)= ((n==1?sprintf("1"):sprintf("%d",n*factorial(n-1))))
But the result is strange; it seems to be out of range for the integer size:
-2147483648
So I changed the function to a real-number format with no digits after the decimal point:
factorial(n)= ((n==1?sprintf("1"):sprintf("%.0f",n*factorial(n-1))))
But the result is even stranger:
-75703234367175440481992733883343393025021587121454605713387911182978138051561570016048371488788270318477285688499861254195149884149214115360733197762560
Could you explain what is happening?
Thanks.
The largest integer that can be represented in 32 bits is 2147483647.
So a 32-bit integer (gnuplot through version 5.2) runs out of bits between 12! and 13!
A 64-bit integer (gnuplot 5.4) runs out of bits between 20! and 21!
If you use double precision floating point arithmetic it can hold larger numbers, but the number of bits still limits the precision. 64-bit floating point has only ~53 bits of precision, so the largest exact factorial it can represent is actually smaller than can be handled by a 64-bit integer.
Note that all of your example gnuplot code uses integer arithmetic if n is an integer. Changing to a floating point format for printing doesn't change that.
Not sure what your final goal is... but just for fun and feasibility: you can calculate Factorial(100) with gnuplot, although you will get the result as a string. I haven't tested what the limits for the arguments a and d are; for a it would be the maximum string length, and for d probably the integer limit (32-bit or 64-bit).
Code:
### factorial for large numbers (output as string!)
reset session
# a: integer number string
# d: integer number
Multiply(a,d) = (_u=0, _tmp='', _n=strlen(a), sum [_i=1:_n] (_j=_n-_i+1, \
_m1=a[_j:_j]*d, _d1=(_m1+_u)%10, _u=(_m1+_u)/10, \
_tmp=sprintf("%d%s",_d1,_tmp),0), _u ? sprintf("%d%s",_u,_tmp) : _tmp)
Factorial(n) = (_tmp="1", sum [_j=1:n] ( _tmp=Multiply(_tmp,_j), 0), _tmp)
print Factorial(100)
### end of code
Result: 100! (note the 24 trailing zeros)
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000

How to convert exponential number to decimal numbers in action script 3?

I am facing a problem multiplying two decimal numbers in Flex.
When I multiply two decimal numbers, the result comes out in exponential notation, and I don't know how to get a plain decimal number instead.
I am using the following code for the multiplication:
var num1:Number = 0.00005;
var num2:Number = 0.000007;
var result:Number = num1 * num2;
In the result variable I get the value 3.5000000000000003E-10.
I don't know how to get a decimal number as the result instead of the exponential form above.
If anybody knows how to resolve this problem, please help me solve it.
You need to use the .toPrecision(precision:uint) method available in the Number class. This method takes one parameter:
precision:uint — An integer between 1 and 21, inclusive, that represents the desired number of digits to represent in the resulting string.
So simply do:
trace(result.toPrecision(2));
And you should get an output of 0.00000000035
Official documentation here :
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/Number.html#toPrecision()
Hope this helps. Cheers.

SQL code reading nvarchar variable length

I am having trouble with a piece of code that is not reading the NVARCHAR length of my variables (they are barcode strings). We have two different barcodes, and the inventory system I have set up only handles the format of the original one (7 characters); the new barcode has 9 characters. I need to run a loop value through each barcode input, hence how I have set up this line of script.
I originally thought that a DATALENGTH or LEN function would suffice, but it seems to be measuring the variable as an integer, not the 7 characters in the string. If anybody has any input on how to fix my code, or a function that will measure a variable's nvarchar length, it would be more than appreciated!
CASE WHEN #BarcodeID = LEN(7)
     THEN UPPER(LEFT(#BarcodeID,2))+CONVERT(nvarchar,RIGHT(#BarcodeID,5)+#LoopValue-1)
     ELSE UPPER(LEFT(#BarcodeID,3))+CONVERT(nvarchar,RIGHT(#BarcodeID,6)+#LoopValue-1) END
Once again, the LEN(7) function in the beginning seems to be my issue.
Perhaps what you're trying to do is actually
CASE WHEN LEN(#BarcodeID) = 7
By using #BarcodeID = LEN(7) you are basically testing to see if the #BarcodeID variable is equal to 1 because the LEN() function, "Returns the number of characters of the specified string expression." It is implicitly converting 7 to a one-character string.
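Applied to the full expression from the question, the corrected CASE would look like the sketch below (written here with @ variables, assuming they are ordinary T-SQL variables; an explicit length on nvarchar is also safer than the bare nvarchar, which defaults to 30 characters in CONVERT):

```sql
CASE WHEN LEN(@BarcodeID) = 7
     THEN UPPER(LEFT(@BarcodeID, 2)) + CONVERT(nvarchar(20), RIGHT(@BarcodeID, 5) + @LoopValue - 1)
     ELSE UPPER(LEFT(@BarcodeID, 3)) + CONVERT(nvarchar(20), RIGHT(@BarcodeID, 6) + @LoopValue - 1)
END
```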

How would I convert a number into binary bits, then truncate or enlarge their size, and then insert into a bit container?

As the title of the question says, I want to take a number (preferably declared as int, char, or std::uint8_t), convert it into its binary representation, truncate or pad it to a given variable number of bits, and then insert it into a bit container (preferably std::vector<bool>, because I need the container size to vary with the number of bits). For example, say I have int a = 2, b = 3, and I have to write these as three bits and six bits respectively into the container: so I have to put 010 and 000011 into the bit container. How would I go from 2 to 010, or from 3 to 000011, using normal STL methods? I tried every possible thing that came to mind, but got nothing. Please help. Thank you.
You can use a combination of 'shifting' (>>) and 'bit-wise and' (&).
First, let's look at the bitwise &: for instance, if you take int a = 7 and & it with 13, you get 5. Why?
Because & gives a 1 at position i iff both operands have a 1 at position i. So we get:
00...000111 // binary 7
& 00...001101 // binary 13
-------------
00...000101 // binary 5
Next, by using the shift operation >> you can shift the binary representation of your ints. For instance 5 >> 1 is 2. Why?
Because each position gets displaced by 1 to the right. The rightmost bit "falls out". Hence we have:
00...00101 //binary for 5
shift by 1 to the right gives:
00...00010 // binary for 2
Another example: 13 (01101) shifted by 2 is 3 (00011). I hope you get the idea.
Hence, by repeatedly shifting and doing & with 1 (00..0001), you can read out the binary representation of a number.
Finally, you can use this 1 to set the corresponding position in your vector<bool>. Assuming you want the representation you show in your post, you will have to fill your vector in from the back. So you could, for instance, do something along these lines:
unsigned int f = 13;  // the number we want to convert
std::vector<bool> binRepr(size, false);  // size is the container size you want to use
for (int currBit = 0; currBit < size; currBit++) {
    binRepr[size - 1 - currBit] = (f >> currBit) & 1;
}
If the container is smaller than the binary representation of your int, the container will contain the truncated number. If it is larger, it will fill in with 0s.
I'm using an unsigned int since for an int you would still have to take care of negative numbers (for positive numbers it should work the same) and we would have to dive into the two's complement representation, which is not difficult, but requires a bit more bit-fiddling.

Parsing base-2^32 numbers to decimal (for theoretically unlimited numbers)

I am working on a C++ problem where I have to print my class.
My class stores, and does arithmetic and logic operations on, theoretically unlimited long numbers. It has an array of unsigned ints to hold the number. For example:
If the number is {a*(2^32) + b}, the class stores it as {array[0]=b, array[1]=a}.
So it is like a number in base 2^32. The problem is: how do I convert this number to decimal so I can print it? Simply computing {a*(2^32) + b} will not do, because it doesn't fit into an unsigned int. I do not have to store the decimal number, just print it.
What I have got so far
I have thought of first converting the number to binary (which is an easy task) and then printing that. But the same problem arises, because there is still no variable big enough to hold the multiplication.
Wild thought
I wonder if I can use my own class to hold the intermediate results and do the printing with some iterative method?
I also wonder if this can be solved with some use of logarithms?
Note: I am not allowed to use other libraries or longer types like double and beyond.
Although I say this is for theoretically unlimited numbers, it would help if I could just find a way to print an array of size 2. Then I can think about longer numbers.