Converting decimal to binary

I am taking an image file and converting it into binary format (pure 0s and 1s). Then I convert that binary into decimal format. According to my algorithm I want to take 50,000 bits at a time. The algorithm is as follows:
1. Read an image file (in any programming language).
2. Convert it into binary format (pure 0s and 1s).
3. Take 50,000 bits at a time and convert them into decimal format (for now I am taking only 1,000 bits).
4. Convert that decimal back into binary format.
The problems are:
1. How can I take 50,000 bits at a time and convert them into decimal format?
2. How do I convert that decimal number back to binary?
Here are two demos:
Converting Binary to decimal https://repl.it/IHMY/1
Converting decimal to binary https://repl.it/IHMY
Thanks

I have finally done it; please see the following link:
https://repl.it/IHMY/4
binary = 0b11111111110110001111111111100001  # a binary integer literal
decimal = int(binary)                        # already an int; int() is a no-op here
print(decimal)
print("{0:#b}".format(decimal))              # format back as binary, with the 0b prefix
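Building on that, the whole pipeline from the question can be sketched in Python. This is a minimal sketch, not the asker's exact code: the function names are mine, and passing `total_bits` back in is my way of restoring leading zeros that the integer conversion drops.

```python
CHUNK = 50000  # bits per group; the question starts with 1,000 for testing

def file_to_bits(path):
    """Steps 1-2: read any file (e.g. an image) and return its bytes as a string of 0s and 1s."""
    with open(path, "rb") as f:
        return "".join(format(byte, "08b") for byte in f.read())

def bits_to_decimals(bits, chunk=CHUNK):
    """Step 3: split the bit string into chunks and convert each chunk to a decimal int."""
    return [int(bits[i:i + chunk], 2) for i in range(0, len(bits), chunk)]

def decimals_to_bits(decimals, total_bits, chunk=CHUNK):
    """Step 4: convert each decimal back to a zero-padded bit string and rejoin.
    total_bits is needed because leading zeros are lost in the int conversion."""
    pieces, remaining = [], total_bits
    for d in decimals:
        width = min(chunk, remaining)           # the last chunk may be shorter
        pieces.append(format(d, "0{}b".format(width)))
        remaining -= width
    return "".join(pieces)

# Round-trip demo on a small bit string with a small chunk size:
bits = format(0b11111111110110001111111111100001, "032b")
decimals = bits_to_decimals(bits, chunk=10)
assert decimals_to_bits(decimals, len(bits), chunk=10) == bits
```

Because Python integers are arbitrary precision, `int(bits, 2)` works just as well on a 50,000-bit chunk as on a 10-bit one.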

Related

Data type for huge binary numbers

I have to handle huge binary numbers (<=4096 digits) - what is the best way to handle such big numbers? I have to multiply them afterward and apply the %-operation on these numbers.
Do I have to use structs or how am I supposed to handle such data?
If you have it as a string of 4096 digits, you can convert it into a list of smaller chunks (e.g. bytes, each consisting of 8 bits). If you then need to multiply or apply the %-operation to these numbers, you will probably need a function that converts those "chunks" from binary to denary, so you can multiply them and so on.
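To make the idea concrete — in Python, purely as an illustration, since the question does not name a language — arbitrary-precision integers handle 4096-bit values directly, with no structs or manual chunking needed:

```python
# Two big binary numbers given as strings (shortened here for the demo;
# a real 4096-bit string works exactly the same way).
a = int("1011" * 16, 2)   # parse a binary string into an arbitrary-precision int
b = int("1101" * 16, 2)

product = a * b           # multiplication just works at any size
remainder = product % 97  # the %-operation likewise
print(product, remainder)
```

In languages without built-in big integers (C, C++, ...), the chunking approach above is essentially what big-number libraries such as GMP implement internally.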

How to Convert (0.ABBA)16 hexadecimal into octal?

How do I convert this? I am not getting it.
I tried converting it into decimal (10*16^-1 and so on) and got (0.6708068848)10.
Now it has become a really complex task; is there any short method to do so?
I think you're over-complicating it; I find it easiest to first convert it to binary (base-2) and then to octal (base-8).
Binary (bits partitioned into groups of 3, because each octal digit corresponds to 3 bits):
0.101_010_111_011_101_000
Octal:
0.527350
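The regrouping above can be checked with a short Python sketch (the helper name is mine): each hex digit expands to 4 bits, and each group of 3 bits reads off as one octal digit.

```python
def hex_fraction_to_octal(hex_digits, octal_places=6):
    """Convert the fractional hex digits after the point (e.g. "ABBA") to octal."""
    # Each hex digit is exactly 4 bits; concatenate them after the binary point.
    bits = "".join(format(int(d, 16), "04b") for d in hex_digits)
    # Pad with trailing zeros to a multiple of 3, then read one octal digit per 3-bit group.
    bits = bits.ljust(octal_places * 3, "0")
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, octal_places * 3, 3))

print(hex_fraction_to_octal("ABBA"))  # → 527350
```

This works because 16 and 8 are both powers of 2, so the conversion never needs decimal arithmetic at all.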

How to convert alphabet to binary?

How do I convert the alphabet to binary? I searched on Google and it says to first convert a letter to its ASCII numeric value and then convert that numeric value to binary. Is there any other way to convert?
And if that is the only way, are the binary values of "A" and 65 the same?
Because the ASCII value of 'A' = 65, and converted to binary that is 01000001,
and 65 = 01000001.
That is indeed the way in which text is converted to binary.
And to answer your second question, yes it is true that the binary value of A and 65 are the same. If you are wondering how CPU distinguishes between "A" and "65" in that case, you should know that it doesn't. It is up to your operating system and program to distinguish how to treat the data at hand. For instance, say your memory looked like the following starting at 0 on the left and incrementing right:
00000001 00001111 00000001 01100110
This binary data could mean anything, and only has a meaning in the context of whatever program it is in. In a given program, you could have it be read as:
1. An integer, in which case you'll get one number.
2. Character data, in which case you'll output 4 ASCII characters.
In short, binary is read by CPUs, which do not understand the context of anything and simply execute whatever they are given. It is up to your program/OS to specify instructions in order for data to be handled properly.
Thus, converting the alphabet to binary is dependent on the program in which you are doing so, and outside the context of a program/OS converting the alphabet to binary is really the exact same thing as converting a sequence of numbers to binary, as far as a CPU is concerned.
The number 65 in decimal is 0100 0001 in binary, and it refers to the letter A in the binary alphabet table (ASCII): https://www.bin-dec-hex.com/binary-alphabet-the-alphabet-letters-in-binary. The easiest way to convert the alphabet to binary is to use an online converter, or you can do it manually with a binary alphabet table.
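In Python, for example, the ASCII route described above is two built-in calls: `ord()` gives a character's code point, and `format()` renders it in binary.

```python
# ord() gives the ASCII/Unicode code point; "08b" formats it as 8 binary digits.
for letter in "ABC":
    print(letter, ord(letter), format(ord(letter), "08b"))
# → A 65 01000001
#   B 66 01000010
#   C 67 01000011
```

Note that the 8-bit pattern printed for "A" is identical to the one for the integer 65 — exactly the point made in the answer above.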

Large number conversion decimal to binary in mysql

Below are the converted values of one number in different bases, i.e. hexadecimal, decimal, and binary.
HexaDecimal - 33161fa59009c58000006198
Decimal - 15810481316372905437683540376
Binary - 1100110001011000011111101001011001000000001001110001011000000000000000000000000110000110011000
I have achieved this correctly in Java, but for the project I need to do this kind of conversion in MySQL. I found the CONV() function (http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_conv), which seems to work for small numbers but not for big ones such as the one given above.
Kindly help me if there is any workaround to get the desired results.
Regards,
Amit
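MySQL's CONV() is documented to work with 64-bit precision, which is why a 96-bit value like the one above overflows. One workaround is to process the hex string a few digits at a time, accumulating into the result — here is that loop sketched in Python for clarity (function name mine); porting it to a MySQL stored procedure using DECIMAL(65,0) arithmetic and CONV() per chunk is an assumption, not something tested here:

```python
def hex_to_decimal_string(h, chunk=8):
    """Convert a hex string of any length to a decimal string,
    processing `chunk` hex digits (here 32 bits) at a time.
    Each chunk on its own fits comfortably in 64-bit precision."""
    value = 0
    for i in range(0, len(h), chunk):
        piece = h[i:i + chunk]
        # Shift what we have so far left by len(piece) hex digits, then add the chunk.
        value = value * (16 ** len(piece)) + int(piece, 16)
    return str(value)

h = "33161fa59009c58000006198"
d = hex_to_decimal_string(h)
print(d)
# Round-trip back to hex to check consistency:
assert format(int(d), "x") == h
```

The same accumulate-and-shift loop works for any base pair, which is all CONV() does internally for values that fit in 64 bits.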

Mathematica import large integers from .csv?

I'm importing some data from a CSV into Mathematica. The first few lines of the CSV look like this:
"a_use","tstart","tend"
"bind items on truck to prevent from flying off",1328661514469,1328661531032
"hang laundry on",1328661531035,1328661541700
"tie firewood with",1328661541702,1328661554940
"anchor tent",1328661554942,1328661559797
Mathematica handles this almost perfectly:
data = Import["mystuff.csv"]
The problem is that those big timestamps get converted into scientific notation, and the precision is lost:
In[283]:= data[[2,2]]
Out[283]= 1.32866*10^12
As you can see, even though 1328661531035 is not the same as 1328661541700, the imported data is no longer precise enough to tell the two apart, since both get imported as 1.32866*10^12. I know Mathematica can handle integers of arbitrary length, so how can I get it to import these numbers as (large) integers instead of converting them into this lossy scientific notation?
What version are you using? There is no problem on Mma 8.0.1.
If you are creating the CSV file in Excel set the format of the timestamps to Number with zero decimal places (via More Number Formats...)
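For comparison — not the Mathematica answer, just an illustration of the underlying pitfall — Python's csv module reads every field as a string, so converting the timestamps with int() keeps full precision instead of collapsing them into floats:

```python
import csv
import io

# A couple of rows from the CSV in the question, inlined for the demo.
raw = '''"a_use","tstart","tend"
"hang laundry on",1328661531035,1328661541700'''

rows = list(csv.reader(io.StringIO(raw)))
tstart = int(rows[1][1])  # parse as an exact integer, not a float
tend = int(rows[1][2])
print(tstart, tend, tstart != tend)  # → 1328661531035 1328661541700 True
```

The fix in any environment is the same idea: keep the field as text (or an exact integer type) until after import, rather than letting the reader coerce it to a machine float.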