Large number conversion decimal to binary in MySQL

Below are the converted values of the same number in different bases, i.e. hexadecimal, decimal, and binary.
Hexadecimal - 33161fa59009c58000006198
Decimal - 15810481316372905437683540376
Binary - 1100110001011000011111101001011001000000001001110001011000000000000000000000000110000110011000
I have achieved this correctly in Java, but for my project I need to do this kind of conversion in MySQL. I found the CONV() function (http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_conv), which seems to work for small numbers but not for big ones such as the one given above.
Kindly help me if there is any workaround to get these desired results.
Regards,
Amit
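
For reference, the Java side of this is only a few lines with BigInteger, which handles arbitrary-size values (this is likely close to what the question already did in Java):

    import java.math.BigInteger;

    public class BigBaseConvert {
        public static void main(String[] args) {
            // The 96-bit value from the question, far beyond CONV()'s 64-bit limit.
            BigInteger n = new BigInteger("33161fa59009c58000006198", 16);
            System.out.println(n.toString(10)); // decimal
            System.out.println(n.toString(2));  // binary
        }
    }

As a possible MySQL workaround (an untested sketch of the idea, not a confirmed recipe): CONV() is limited to 64-bit values, but hex-to-binary is a digit-local conversion, since each hex digit maps to exactly four binary digits. So you could run CONV() over the hex string in chunks of at most 16 digits, LPAD() each converted chunk to its full binary width, and CONCAT() the pieces back together.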

Related

How does a computer tell the difference between a float binary value and an integer binary value?

I was working on some binary practice problems when I noticed something interesting: how does a computer differentiate between binary values? For instance, 13 in binary is 1101, and 0.8125 in binary is also 1101 (as 0.1101). Since their binary digits are the same, how does a computer know which is which? Or, if I were converting it back to base 10, how would I know if the number was originally 13 or 0.8125?
The computer doesn't care about the "meaning" of the binary values until the point you instruct it to use them. When you do that, you explicitly "tell" the computer what the meaning is.
The binary value in any location in memory could be anything (a number, a program instruction, a floating point number, etc.); the program has to know what type to expect at that location.
The data type is the answer: the computer looks at the data type.
If the binary value is 1101 and the data type mentioned when declaring the variable was integer, then it will be 13. If the data type mentioned was float, then it will be 0.8125. If the data type mentioned was char, then 13 would be the ASCII value of the character.
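A small Java illustration of the same point (using a full 32-bit pattern rather than the 4-bit 1101, since Java's types have fixed widths): the bits are identical, and only the declared type changes the value you get back.

    public class SameBits {
        public static void main(String[] args) {
            int bits = 0x3F500000;                          // one particular 32-bit pattern
            System.out.println(bits);                       // 1062207488 when read as an int
            System.out.println(Float.intBitsToFloat(bits)); // 0.8125 when read as a float
        }
    }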
I hope you understood what I explained.

Converting base-8 to base-32 without using any other base?

I have the base-8 number 64276. How would I convert this number directly into base-32 without converting it into binary, decimal, or any other base?
Edit: I am trying to solve the problem with pencil and paper.
This sounds like a fun homework challenge, so here's a hint: how could you convert a 5,000-digit base-64 number into base-32?
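If you want to check your pencil-and-paper result afterward, here is a minimal Java sketch of the underlying idea (the code goes through an int internally, so it is only a sanity check, not the hand method itself): because 8 = 2^3 and 32 = 2^5, a block of five octal digits carries exactly 15 bits, which is exactly three base-32 digits, so blocks convert independently.

    public class OctalToBase32 {
        public static void main(String[] args) {
            // Five octal digits = 15 bits = three base-32 digits.
            int block = Integer.parseInt("64276", 8);
            System.out.println(Integer.toString(block, 32)); // prints "q5u"
        }
    }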

Emulating IBM floating point multiplication/addition in VBA

I am attempting to emulate a (no longer existing) mainframe report generator in an Access 2003 or Access 2010 environment. The data it generates must match exactly with the paper reports from the early 70s. Unfortunately, the earliest years' data were run on hardware that used IBM floating point representation instead of IEEE. With the help of Google, I've found a library of VBA functions that will convert a float from decimal to the IEEE 754 32-bit binary format. I had to modify the library to accept either 32-bit or 64-bit floats, so I have a modest working knowledge of floating point formats; however, I'm having trouble making the conversion from IEEE to IBM binary format, as well as trouble multiplying and adding either the IBM or the IEEE numbers.
I haven't turned up any other libraries for performing this conversion and arithmetic operations in VBA - is there an easier way to go about this, or an existing library that I'm not finding? Failing that, a clear and straightforward explanation of the relevant algorithms?
Thanks in advance.
To be honest you'd probably do better to start by looking at the Hercules emulator:
http://www.hercules-390.org/. Other than that, in theory with VBA you can use the Decimal type to get good results (note you have to use CDec to create these); it uses 12 bytes with a variable power-of-ten scale factor.
A quick Google shows this post from the Hercules group, which confirms Albert's point about needing to know the hardware:
---snip---
In theory, but rather less so in practice. S/360 and S/370 had a
choice of Scientific or Commercial instruction sets. The former added
the FP instructions and registers to the base; the latter the decimal
instructions, including Edit and Edit & Mark. But larger 360 (iirc /65
and up) and 370 (/155 and up) models had the union of the two, called
the Universal instruction set, and at some point the S/370 dropped the
option.
---snip---
I have to say that, having looked at the Hercules source code, you'll probably need to figure out exactly which floating point operation codes (in terms of precision: single, long, extended) are being performed.
The problem here is that you're confusing the Decimal type in Access with the Single and Double floating point types available in Access.
If you use the Currency data type in Access, this is a scaled integer, and it will not produce rounding (that is what most of us use for financial calculations and reports). You can also use Decimal values in Access, and again they don't round at all, as they are packed decimals.
However, both the Single and Double values available inside of Access are in fact the same format and conform to the IEEE floating point standard.
For an Access Single variable, this is a 32-bit number, and the range is:
-3.402823E38 to -1.401298E-45 for negative values
1.401298E-45 to 3.402823E38 for positive values
That looks to be the same to me as the IEEE 754 standard.
So, if you add up values in Access as a Single, you should get the same rounding results.
So, on Intel-based hardware, Access Singles and Doubles are, I believe, the same as this IEEE standard.
The only real issue here is: what is the format of the original data you're pulling into Access, and what kind of text, string, or conversion process occurs when that data is pulled in and stored?
Access can convert numbers. Try typing these values at the Access command line prompt (debug window):
? hex(255)
The above will show FF.
? csng(&HFF)
The above will show 255.
Edit:
Ah, ok, I see now I have this reversed; my bad here. The problem is that, assuming you convert a number to the older IBM format (excess-64?), you will THEN have to get your hands on the code they used for adding those numbers. In fact, even back then, different IBM models, depending on what you purchased, actually produced different results (more money = more precision).
So, not only do you need conversion routines to convert to the internal representation, you THEN need the routines that add/subtract/multiply those numbers. So, just having conversion routines is not going to get you very far, since you also have to duplicate their exact routines that do the math. Those types of routines are likely not all created equal in terms of how they round numbers, etc.
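On the representation half of the problem, here is a rough Java sketch (Java rather than VBA, and only a sketch under the assumption that the target is the classic S/360 single-precision hex float: 1 sign bit, a 7-bit excess-64 exponent that is a power of 16, and a 24-bit fraction). The arithmetic half you would still have to reproduce separately, as described above.

    public class IbmFloat {
        // Pack a double into IBM S/360 single-precision hex-float bits.
        static int toIbmSingle(double v) {
            if (v == 0.0) return 0;
            int sign = 0;
            if (v < 0) { sign = 1; v = -v; }
            // Normalize so that 1/16 <= v < 1, tracking the base-16 exponent.
            int exp = 0;
            while (v >= 1.0)   { v /= 16.0; exp++; }
            while (v < 0.0625) { v *= 16.0; exp--; }
            int fraction = (int) Math.round(v * 0x1000000);       // 24-bit fraction
            if (fraction == 0x1000000) { fraction >>= 4; exp++; } // rounding overflowed
            return (sign << 31) | ((exp + 64) << 24) | fraction;
        }

        public static void main(String[] args) {
            System.out.printf("%08X%n", toIbmSingle(1.0));      // 41100000
            System.out.printf("%08X%n", toIbmSingle(-118.625)); // C276A000
        }
    }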

MySQL: converting a float to decimal produces more decimal digits than were stored in the backup .sql file

I want to understand this:
I have a dump of a table (a SQL script file) from a database that uses float(9,2) as the default type for numbers.
In the backup file I have a value like '4172.08'.
I restore this file into a new database and convert the float to decimal(20,5).
Now the value in the field is 4172.08008.
...where does the 008 come from??
Thanks, all.
Where does the 008 come from?
Short answer:
In order to avoid the float inherent precision error, cast first to decimal(9,2), then to decimal(20,5).
Long answer:
Floating point numbers are prone to rounding errors in digital computers. It is a little hard to explain without throwing up a lot of math, but let's try: the same way 1/3 represented in decimal requires an infinite number of digits (it is 0.3333333...), some numbers that are "round" in decimal notation have an infinite number of digits in binary. Because this format is stored in binary and has finite precision, there is an implicit rounding error, and you may experience funny things like getting 0.30000000000000004 as the result of 0.1 + 0.2.
This is the difference between float and decimal. Float is a binary type and can't represent that value exactly, so when you convert to decimal (a decimal type, as expected), it's not exactly the original value.
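You can reproduce the exact 008 outside MySQL. In Java, for instance (MySQL's 4-byte FLOAT and Java's float are both IEEE 754 single precision), the nearest single-precision value to 4172.08 is exactly 4172.080078125, which rounds to 4172.08008 at five decimal places:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class FloatArtifact {
        public static void main(String[] args) {
            float f = 4172.08f;  // what the column actually stores
            BigDecimal exact = new BigDecimal(f);
            System.out.println(exact);                                   // 4172.080078125
            System.out.println(exact.setScale(5, RoundingMode.HALF_UP)); // 4172.08008
            // Rounding to 2 places first recovers the intended value:
            System.out.println(exact.setScale(2, RoundingMode.HALF_UP)); // 4172.08
        }
    }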
See http://floating-point-gui.de/ for some more information.

floating point hex octal binary

I am working on a calculator that allows you to perform calculations past the decimal point in octal, hexadecimal, binary, and of course decimal. I am having trouble, though, finding a way to convert floating point decimal numbers to floating point hexadecimal, octal, and binary, and vice versa.
The plan is to do all the math in decimal and then convert the result into the appropriate number system. Any help, ideas, or examples would be appreciated.
Thanks!
Hmm... this was a homework assignment in my university's CS "weed-out" course.
The operations for binary are described in Schaum's Outline Series: Essential Computer Mathematics by Seymour Lipschutz. For some reason it is still on my bookshelf 23 years later.
As a hint, convert octal and hex to binary, perform the operations, then convert back.
Or you can perform the operations in decimal and do the conversions to octal/hex/binary afterward. The arithmetic process is essentially the same for all positional number systems.
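The piece that tends to trip people up is the fractional part: where the integer part converts by repeated division, the fraction converts by repeated multiplication. A small Java sketch (fractionToBase is just an illustrative helper name, and the digit count is capped because some fractions never terminate):

    public class FractionConvert {
        static String fractionToBase(double frac, int base, int maxDigits) {
            StringBuilder sb = new StringBuilder(".");
            for (int i = 0; i < maxDigits && frac > 0; i++) {
                frac *= base;
                int d = (int) frac;                    // integer part is the next digit
                sb.append(Character.forDigit(d, base));
                frac -= d;                             // carry the remaining fraction
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(fractionToBase(0.8125, 2, 12)); // .1101
            System.out.println(fractionToBase(0.8125, 16, 6)); // .d
        }
    }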
What programming language are you using?
It is definitely a good idea to change everything into binary, do the math on the binary, then convert back. If you multiply an (unsigned) binary number by 2, it is the same as a bit shift left (<< 1 in C); a division by 2 is the same as a bit shift right (>> 1 in C).
Addition and subtraction work the same way you learned them in elementary school.
Also, remember that if you cast a float to an int it will be truncated: int(10.5) = 10.
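In Java, for example:

    int x = 13;                     // 1101 in binary
    System.out.println(x << 1);     // 26: shifting left once multiplies by 2
    System.out.println(x >> 1);     // 6: shifting right once divides by 2, truncating
    System.out.println((int) 10.5); // 10: casting a double to int also truncates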
I had this same problem a few days ago. I found http://www.binaryconvert.com, which allows you to convert between floating point decimal, binary, octal, and hexadecimal in any order you want.