I am new to the Solidity language and have looked through its documentation. Is there any floating point data type?
There is no floating point type in Solidity. You should keep numbers in whole-number format.
You can place the decimal point in your front-end code. Take a look at how the ERC20 contract was designed.
Good read: https://medium.com/@jgm.orinoco/understanding-erc-20-token-contracts-a809a7310aa5
There is no native support for floating-point numbers in the core language, but they are available via libraries, such as ABDKMathQuad.
You could try Paul Razvan Berg's PRBMath.
You can have floating point constants in Solidity, but only with a unit suffix of gwei (which multiplies by 10^9) or ether (which multiplies by 10^18). For example, if you write 0.1 ether in your Solidity code, it is turned into 100000000000000000 (= 10^17 wei). However, after multiplication by the fixed point multiplier in the unit suffix, there can't be any fractional decimals left. For example, you can't specify 1.00000000000000000000000001 ether, because that would leave you with a fractional number of wei.
More generally, what you are probably asking about is how to do fixed point mathematical calculations in Solidity, since floating point is not supported. There are many tutorials online explaining how to calculate fractional values using fixed point (just search for "fixed point arithmetic tutorial") for a wide range of languages; the principles apply directly to Solidity too. You can generally implement fixed point with very little code: addition and subtraction work without any special care, as long as both operands have already been scaled by the same fixed point multiplier, whereas for multiplication you multiply the two scaled numbers and then divide the product by that same multiplier. Or you can use one of the math libraries linked in other answers.
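For illustration, here is a minimal sketch of that idea in C++ (the scale factor and function names are my own choices; in Solidity the same pattern works with uint256 and a scale such as 10^18):

#include <cstdint>
#include <cstdio>

// Fixed point with a scale of 10^6: the integer 1250000 represents 1.25.
// The scale is an arbitrary choice for this sketch; watch for overflow in
// the intermediate product of fpMul with large values.
const int64_t SCALE = 1000000;

int64_t fpAdd(int64_t a, int64_t b) { return a + b; }           // no adjustment needed
int64_t fpMul(int64_t a, int64_t b) { return (a * b) / SCALE; }  // divide once after multiplying
int64_t fpDiv(int64_t a, int64_t b) { return (a * SCALE) / b; }  // scale up before dividing

int main() {
    int64_t x = 1250000;  // represents 1.25
    int64_t y = 500000;   // represents 0.50
    printf("%lld\n", (long long)fpMul(x, y));  // prints 625000, i.e. 0.625
}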
I have been designing a Delta-Sigma DAC and have run into confusion and despair over the handling of signed numbers in my (sigma) counters and (delta and Vref) comparators.
I have tried to employ signed 2's complement, but the EDA compiler doesn't seem to notice when I do it; it's most likely my own mistake!
So basically my question is, how (in Verilog) do I represent negative numbers in a way that they can be used in counters (which can therefore count up and down)? I am aware that a counter register that will hold signed numbers must be declared reg signed [:0]
Thanks!
Gavin
Well, I am not that clear on your question. Some compilers may not be able to handle signed registers directly since, if I recall correctly, it is a Verilog 2001 feature. But generally, if you use digital logic that works with signed numbers, you shouldn't have an issue. For example, if you use an adder IP, just specify that the inputs are signed numbers. As for the simulator, you can select the type of data you need, generally by selecting the register/value, right-clicking on the waveform window, and changing the type.
Finally, if you have to create the logic yourself, just use sign extension. So let's say you are working with 4-bit values: sign-extended to 8 bits, -5 would be 11111011 and 5 would be 00000101. So you can see that for negative numbers the MSB is 1 and for positive numbers it is 0. Using this you can interpret the numbers in your code, but just make sure that the register size is bigger than what you want to use so that no overflow occurs.
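As a rough illustration in C++ rather than Verilog (the function name is mine), sign extension just copies the sign bit of the narrow value into the upper bits of the wider one:

#include <cstdio>

// Sign-extend a 4-bit two's-complement value (held in the low nibble)
// to 8 bits by copying the sign bit into the upper four bits.
unsigned char signExtend4to8(unsigned char nibble) {
    if (nibble & 0x8)            // sign bit of the 4-bit value is set
        return nibble | 0xF0;    // negative: fill the upper bits with ones
    return nibble & 0x0F;        // positive: upper bits stay zero
}

int main() {
    printf("%02X\n", signExtend4to8(0xB));  // 1011 (-5) -> FB = 11111011
    printf("%02X\n", signExtend4to8(0x5));  // 0101 (+5) -> 05 = 00000101
}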
I am writing a program for an embedded hardware that only supports 32-bit single-precision floating-point arithmetic. The algorithm I am implementing, however, requires a 64-bit double-precision addition and comparison. I am trying to emulate double datatype using a tuple of two floats. So a double d will be emulated as a struct containing the tuple: (float d.hi, float d.low).
The comparison should be straightforward using a lexicographic ordering. The addition, however, is a bit tricky because I am not sure which base I should use. Should it be FLT_MAX? And how can I detect a carry?
How can this be done?
Edit (Clarity): I need the extra significant digits rather than the extra range.
double-float is a technique that uses pairs of single-precision numbers to achieve almost twice the precision of single precision arithmetic, accompanied by a slight reduction of the single precision exponent range (due to intermediate underflow and overflow at the far ends of the range). The basic algorithms were developed by T.J. Dekker and William Kahan in the 1970s. Below I list two fairly recent papers that show how these techniques can be adapted to GPUs; however, much of the material covered in these papers is applicable independent of platform, so it should be useful for the task at hand.
https://hal.archives-ouvertes.fr/hal-00021443
Guillaume Da Graça, David Defour
Implementation of float-float operators on graphics hardware,
7th conference on Real Numbers and Computers, RNC7.
http://andrewthall.org/papers/df64_qf128.pdf
Andrew Thall
Extended-Precision Floating-Point Numbers for GPU Computation.
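To give a flavor of what those papers describe, here is a minimal sketch of double-float addition in C++ built from the classic two-sum primitives (the struct and function names are mine; it assumes round-to-nearest arithmetic, no FMA contraction or fast-math reordering, and ignores overflow and NaN handling):

#include <cstdio>

// A "double-float" value: hi carries the leading bits, lo the trailing error term.
struct dfloat { float hi; float lo; };

// Knuth's two-sum: computes s and err such that a + b == s + err exactly.
void two_sum(float a, float b, float &s, float &err) {
    s = a + b;
    float bb = s - a;
    err = (a - (s - bb)) + (b - bb);
}

// Dekker's fast two-sum: same result, but requires |a| >= |b|.
void fast_two_sum(float a, float b, float &s, float &err) {
    s = a + b;
    err = b - (s - a);
}

// Add two double-float numbers and renormalize the result.
dfloat df_add(dfloat x, dfloat y) {
    float s, e;
    two_sum(x.hi, y.hi, s, e);        // exact sum of the high parts
    e += x.lo + y.lo;                 // fold in the low parts (rounded)
    dfloat r;
    fast_two_sum(s, e, r.hi, r.lo);   // keep |r.lo| small relative to r.hi
    return r;
}

int main() {
    dfloat a{1.0f, 0.0f};
    dfloat b{1e-9f, 0.0f};            // far below the single-precision resolution of 1.0
    dfloat c = df_add(a, b);
    printf("hi=%.9g lo=%.9g\n", c.hi, c.lo);  // lo preserves the 1e-9 a lone float would drop
}

Comparison can then be done lexicographically on (hi, lo), as you suggested, provided the pairs are kept normalized this way.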
This is not going to be simple.
A float (IEEE 754 single-precision) has 1 sign bit, 8 exponent bits, and 23 bits of mantissa (well, effectively 24).
A double (IEEE 754 double-precision) has 1 sign bit, 11 exponent bits, and 52 bits of mantissa (effectively 53).
You can use the sign bit and 8 exponent bits from one of your floats, but how are you going to get 3 more exponent bits and 29 bits of mantissa out of the other?
Maybe somebody else can come up with something clever, but my answer is "this is impossible". (Or at least, "no easier than using a 64-bit struct and implementing your own operations")
It depends a bit on what types of operations you want to perform. If you only care about additions and subtractions, Kahan Summation can be a great solution.
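A minimal sketch of compensated (Kahan) summation in C++, assuming the values arrive as single-precision floats (the function name is mine):

#include <vector>

// Kahan (compensated) summation: a running correction term recovers most of
// the rounding error lost in each addition, greatly improving accuracy for
// long sums of single-precision values.
float kahanSum(const std::vector<float> &values) {
    float sum = 0.0f;
    float c = 0.0f;            // running compensation for lost low-order bits
    for (float v : values) {
        float y = v - c;       // apply the correction carried from the last step
        float t = sum + y;     // low-order bits of y may be lost here...
        c = (t - sum) - y;     // ...but are recovered into c for the next step
        sum = t;
    }
    return sum;
}

Note that aggressive optimizations such as -ffast-math can optimize the compensation term away, so this needs to be compiled with strict floating-point semantics.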
If you need both the precision and a wide range, you'll be needing a software implementation of double precision floating point, such as SoftFloat.
(For addition, the basic principle is to break the representation (e.g. 64 bits) of each value into its three constituent parts - sign, exponent and mantissa; then shift the mantissa of one part based on the difference in the exponents, add to or subtract from the mantissa of the other part based on the sign bits, and possibly renormalise the result by shifting the mantissa and adjusting the exponent correspondingly. Along the way, there are a lot of fiddly details to account for, in order to avoid unnecessary loss of accuracy, and deal with special values such as infinities, NaNs, and denormalised numbers.)
Given all the constraints for high precision over 23 magnitudes, I think the most fruitful method would be to implement a custom arithmetic package.
A quick survey shows Briggs' doubledouble C++ library should address your needs and then some. See this.[*] The default implementation is based on double to achieve 30 significant figure computation, but it is readily rewritten to use float to achieve 13 or 14 significant figures. That may be enough for your requirements if care is taken to segregate addition operations with similar magnitude values, only adding extremes together in the last operations.
Beware though, the comments mention messing around with the x87 control register. I didn't check into the details, but that might make the code too non-portable for your use.
[*] The C++ source is linked from that article; the gzipped tar was the only link that was not dead.
This is similar to the double-double arithmetic used by many compilers for long double on some machines that have only hardware double calculation support. It's also used as float-float on older NVIDIA GPUs where there's no double support. See Emulating FP64 with 2 FP32 on a GPU. This way the calculation will be much faster than a software floating-point library.
However, most microcontrollers have no hardware support for floats, so floats are implemented purely in software. Because of that, using float-float may not increase performance, and it introduces some memory overhead to store the extra bytes of exponent.
If you really need the longer mantissa, try using a custom floating-point library. You can choose whatever is enough for you, for example adapting the library to a new 48-bit float type of your own if only 40 bits of mantissa and 7 bits of exponent are needed. There's no need to spend time calculating or storing the unnecessary 16 bits anymore. But such a library needs to be very efficient, because compilers' own float libraries often have assembly-level optimizations for their own float types.
Another software-based solution that might be of use: GNU MPFR
It takes care of many other special cases and allows arbitrary precision (better than 64-bit double) that you would have to otherwise take care of yourself.
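As a rough usage sketch on a host machine (compile and link with -lmpfr -lgmp; whether MPFR is practical on your embedded target is another matter), computing 1/3 to 113 bits might look like this:

#include <cstdio>
#include <mpfr.h>

int main() {
    mpfr_t x, y, z;
    mpfr_init2(x, 113);  // 113-bit mantissas, roughly quad precision (arbitrary choice)
    mpfr_init2(y, 113);
    mpfr_init2(z, 113);

    mpfr_set_d(x, 1.0, MPFR_RNDN);
    mpfr_set_d(y, 3.0, MPFR_RNDN);
    mpfr_div(z, x, y, MPFR_RNDN);        // z = 1/3, correctly rounded to 113 bits
    mpfr_printf("1/3 = %.30Rf\n", z);

    mpfr_clears(x, y, z, (mpfr_ptr) 0);
    return 0;
}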
That's not practical. If it were, every embedded 32-bit processor (or compiler) would emulate double precision by doing that. As it stands, none that I am aware of do. Most of them just substitute float for double.
If you need the precision and not the dynamic range, your best bet would be to use fixed point. If the compiler supports 64-bit integers, this will be easier too.
As (hopefully) most of you know, floating point arithmetic is different from real number arithmetic. For starters, it's imprecise. Many numbers, especially decimal fractions (0.1, 0.3), cannot be represented exactly, leading to problems like this. A more thorough list can be found here.
Are there any general purpose languages that have built-in support for something closer to real number arithmetic? If not, what are good libraries that support this?
EDIT: Arbitrary precision decimal datatypes are not what I am looking for. I want to be able to represent numbers like 1/3, sqrt(3), or 1 + 2i as well.
Though I hate to say it, Fortran. It has extensive support for arbitrary-precision arithmetic and tons of support for big-number calculations. It's ancient and gross, but it gets the job done.
All the numbers used in your examples are algebraic numbers, and can be represented finitely as roots of polynomials with integer coefficients. The same cannot be said of real numbers in general, which is easily seen when one considers that the reals are uncountable, but the set of computer programs is countable. Therefore most reals will not have a finite representation in code.
What you are looking for is symbolic calculation (MATLAB and other tools used in math and engineering are good at it).
If you want a general purpose language, I think expression trees in C# are a good point to start with. In essence, the ability to store an expression (instead of evaluating it into real values) is the key to being able to perform symbolic calculation. Note that an expression tree does not provide symbolic calculation by itself; it just provides the data structure that supports symbolic calculation.
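As a rough sketch of the data-structure idea in C++ (all type and function names here are mine, and evaluation and simplification are omitted entirely):

#include <memory>

// A tiny "symbolic number" node: instead of evaluating to a float, we store
// the structure of the expression.
struct Expr {
    enum Kind { Rational, Sqrt, Add, Mul } kind;
    long num = 0, den = 1;              // exact payload when kind == Rational
    std::shared_ptr<Expr> lhs, rhs;     // children for Sqrt (lhs only), Add, Mul
};

std::shared_ptr<Expr> rational(long n, long d) {
    auto e = std::make_shared<Expr>(); e->kind = Expr::Rational; e->num = n; e->den = d; return e;
}
std::shared_ptr<Expr> sqrtOf(std::shared_ptr<Expr> a) {
    auto e = std::make_shared<Expr>(); e->kind = Expr::Sqrt; e->lhs = a; return e;
}
std::shared_ptr<Expr> add(std::shared_ptr<Expr> a, std::shared_ptr<Expr> b) {
    auto e = std::make_shared<Expr>(); e->kind = Expr::Add; e->lhs = a; e->rhs = b; return e;
}

int main() {
    // 1/3 + sqrt(3) stays exact because nothing forces it into a float.
    auto x = add(rational(1, 3), sqrtOf(rational(3, 1)));
    (void)x;
}

Symbolic calculation would then be implemented as transformations over these trees, which is exactly the hard part the other answers warn about.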
This question is interesting, but raises some issues. First, you will never be able to represent all the real numbers using a (even theoretically infinite) computer, for cardinality reasons.
What you are looking for is a "symbolic numbers" datatype. You can imagine some sort of expression tree, with predefined constants, arithmetical operations, and perhaps algebraic (roots of polynomials) and transcendental (exp, sin, cos, log, etc.) functions.
Now the fun part of the story: you cannot find an algorithm which tells whether two such trees represent the same number (or equivalently, which tests whether such a tree is zero). I won't state anything precise, but as a hint, this is similar to the Halting Problem (for computer scientists) or the Gödel Incompleteness Theorem (for mathematicians).
This renders such a class pretty useless.
For some subfields of the reals, you have canonical forms, like a/b for the rationals, or finite algebraic extensions of the rationals (a/b + ic/d for complex rationals, a/b + sqrt(2) * c/d for Q[sqrt(2)], etc.). These can be used to represent some particular sets of algebraic numbers.
In practice, this is the most complicated thing you will need. If you have a particular necessity, like ranges of floating point numbers (to prove some result is within a specified interval; this is probably the closest you can get to real numbers), or arbitrary precision numbers, you have freely available classes everywhere. Google boost::range for the former, and gmp for the latter.
There are several languages with support for rational and complex numbers. Scheme, for instance, has support built in for arbitrarily precise rational numbers, and complex numbers with either rational, floating point, or integral coefficients:
> (+ 1/2 1/3)
5/6
> (* 3 1+1/2i)
3+3/2i
> (+ 1/2 .5)
1.0
If you want to go beyond rational numbers or complex numbers with rational coefficients, to algebraic numbers such as sqrt(2) or closed-form numbers like e, you will probably have to look beyond general purpose programming languages, and use a special purpose mathematical language like Mathematica or Maxima.
To cover the real numbers with any flair you'll need a symbolic package.
Boost, the C++ project, has a Rational library, but that's only part of the story.
You have irrational numbers in all sorts of forms (pi, base of the natural logarithm, square and cube roots, the Champernowne constant, to name only a few). The only way I know of to handle arithmetic operations is a symbolic package with smarts as to the relationship amongst all of these numbers. Assuming you could express e^pi, how would you add one to it? Or take the square root of it?
Mathematica might handle these cases.
Java: java.math.BigDecimal
C#: decimal
A lot of languages have support for that: Java has BigDecimal, Perl has Math::BigFloat and Math::BigRat, Haskell has Integer, and a lot of libraries and languages are listed on Wikipedia.
Ada natively supports fixed-point math as well as floating-point. Fixed-point can be much more exact than floating-point, as long as the number's exponents remain in range.
If you need floating-points, but more precision than IEEE gives, there are bignum packages around for just about every language.
I think that's about the best you can do. Neither scheme can exactly represent repeating decimals (like 1/3). It would probably be possible to come up with a scheme that does, but I know of no language that supports such a thing with a built-in type. Even that won't help you with irrational numbers (like pi and e). I believe there's even a theorem that says there will always be unrepresentable numbers, no matter what scheme you come up with.
EDIT: Arbitrary precision decimal datatypes are not what I am looking for. I want to be able to represent numbers like 1/3, sqrt(3), or 1 + 2i as well.
Ruby has a Rational class, so 1/3 can be expressed exactly as Rational(1,3). It also has a Complex class.
Scheme defines rationals, bignums, floating point and complex numbers. An implementation is not required to support them all, but if they are present, you can mix them and they will do "the right thing".
While it's not "built-in", I think C++ (maybe C#) is your best bet. There are classes out there that have been written for this purpose.
http://www.oonumerics.org/oon/
I'm in a basic Engineering class and we're going through binary conversions. I can figure out the base 10 to binary or hex conversions really well; however, the 8-bit floating point conversions are kicking my ass, and I can't find anything online that breaks it down at a n00b level and shows the steps. Wondering if any gurus have found anything online that would be helpful for this situation.
I have questions like 00101010(8bfp) = what number in base 10
Whenever I want to remember how floating point works, I refer back to the wikipedia page on 32 bit floats. I think it lays out the concepts pretty well.
http://en.wikipedia.org/wiki/Single_precision_floating-point_format
Note that wikipedia doesn't know what 8 bit floats are, I think your professor may have invented them ;)
Binary floating point formats are usually broken down into 3 fields: Sign bit, exponent and mantissa. The sign bit is simply set to 1 if the entire number should be negative, and 0 if the number is positive. The exponent is usually an unsigned int with an offset, where 2 to the 0'th power (1) is in the middle of the range. It's simpler in hardware and software to compare sizes this way. The mantissa works similarly to the mantissa in regular scientific notation, with the following caveat: The most significant bit is hidden. This is due to the requirement of normalizing scientific notation to have one significant digit above the decimal point. Remember when your math teacher in elementary school would whack your knuckles with a ruler for writing 35.648 x 10^6 or 0.35648 x 10^8 instead of the correct 3.5648 x 10^7? Since binary only has two states, this required digit above the decimal point is always one, and eliminating it allows another bit of accuracy at the low end of the mantissa.
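Since the exact 8-bit layout is whatever your course defines, here is a hedged decoding sketch in C++ that assumes a 1-4-3 split (1 sign bit, 4 exponent bits with a bias of 7, 3 mantissa bits with a hidden leading 1); adjust the field widths and bias to match your class notes:

#include <cmath>
#include <cstdio>

// Decode an 8-bit float ASSUMING 1 sign bit, 4 exponent bits (bias 7), and
// 3 mantissa bits with an implicit leading 1. Zeros, subnormals and
// infinities are ignored to keep the sketch short.
double decode8bfp(unsigned char bits) {
    int sign     = (bits >> 7) & 0x1;
    int exponent = (bits >> 3) & 0xF;
    int mantissa =  bits       & 0x7;
    double frac  = 1.0 + mantissa / 8.0;            // 1.mmm in binary
    double value = std::ldexp(frac, exponent - 7);  // times 2^(exponent - bias)
    return sign ? -value : value;
}

int main() {
    // 00101010: sign 0, exponent 0101 (5), mantissa 010
    // = 1.010b * 2^(5 - 7) = 1.25 * 0.25 = 0.3125 under this assumed format
    printf("%g\n", decode8bfp(0b00101010));
}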
So far I've seen many posts dealing with equality of floating point numbers. The standard answer to a question like "how should we decide if x and y are equal?" is
abs(x - y) < epsilon
where epsilon is a fixed, small constant. This is because the "operands" x and y are often the results of some computation where a rounding error is involved, hence the standard equality operator == is not what we mean, and what we should really ask is whether x and y are close, not equal.
Now, I feel that if x is "almost equal" to y, then also x*10^20 should be "almost equal" to y*10^20, in the sense that the relative error should be the same (but "relative" to what?). But with these big numbers, the above test would fail, i.e. that solution does not "scale".
How would you deal with this issue? Should we rescale the numbers or rescale epsilon? How?
(Or is my intuition wrong?)
Here is a related question, but I don't like its accepted answer, because the reinterpret_cast thing seems a bit tricky to me; I don't understand what's going on. Please try to provide a simple test.
It all depends on the specific problem domain. Yes, using relative error will be more correct in the general case, but it can be significantly less efficient since it involves an extra floating-point division. If you know the approximate scale of the numbers in your problem, using an absolute error is acceptable.
This page outlines a number of techniques for comparing floats. It also goes over a number of important issues, such as those with subnormals, infinities, and NaNs. It's a great read, I highly recommend reading it all the way through.
As an alternative solution, why not just round or truncate the numbers and then make a straight comparison? By setting the number of significant digits in advance, you can be certain of the accuracy within that bound.
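A rough sketch of that idea in C++ (the helper names are mine; it ignores zero, infinities and NaN, and note that two very close values can still straddle a rounding boundary and compare unequal):

#include <cmath>

// Round x to `digits` significant decimal digits (very rough; no handling
// of x == 0, infinities, or NaN).
double roundToSignificant(double x, int digits) {
    int magnitude = (int)std::floor(std::log10(std::fabs(x)));
    double scale  = std::pow(10.0, digits - 1 - magnitude);
    return std::round(x * scale) / scale;
}

bool roughlyEqual(double a, double b, int digits) {
    // e.g. roughlyEqual(3.14159, 3.14162, 4): both round to 3.142, so true
    return roundToSignificant(a, digits) == roundToSignificant(b, digits);
}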
The problem is that with very big numbers, comparing the difference to a fixed epsilon will fail.
Perhaps a better (but slower) solution would be to use division, for example:
div(max(a, b), min(a, b)) < eps + 1
Now the 'error' will be relative.
Using relative error is at least not as bad as using absolute errors, but it has subtle problems for values near zero due to rounding issues. A far from perfect, but somewhat robust algorithm combines absolute and relative error approaches:
#include <algorithm>
#include <cmath>

bool approxEqual(float a, float b, float absEps, float relEps) {
    // Absolute error check, needed when comparing numbers near zero.
    float diff = std::fabs(a - b);
    if (diff <= absEps) {
        return true;
    }
    // Symmetric relative error check, without a division.
    return diff <= relEps * std::max(std::fabs(a), std::fabs(b));
}
I adapted this code from Bruce Dawson's excellent article Comparing Floating Point Numbers, 2012 Edition, a required read for anyone doing floating-point comparisons -- an amazingly complex topic with many pitfalls.
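For illustration, a few hypothetical calls (the tolerance values are arbitrary examples, not recommendations):

// Near zero the absolute tolerance does the work; at large magnitudes the
// relative tolerance takes over.
bool nearZero = approxEqual(1.0e-8f, -1.0e-8f, 1.0e-6f, 1.0e-5f);      // true (absolute)
bool large    = approxEqual(1.0e20f, 1.000001e20f, 1.0e-6f, 1.0e-5f);  // true (relative)
bool distinct = approxEqual(1.0f, 1.1f, 1.0e-6f, 1.0e-5f);             // false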
Most of the time when code compares values, it is doing so to answer some sort of question. For example:
If I know what a function returned when given a value of X, can I assume it will return the same thing if given Y?
If I have a method of computing a function which is slow but accurate, and I am willing to accept some inaccuracy in exchange for speed, and I want to test a candidate function which seems to fit the bill, are the outputs from that function close enough to the known-accurate one to be considered "correct"?
To answer the first question, code should ideally do a bit-wise comparison on the value, though unless a language supports the new operators added to IEEE-754 in 2008, that may be less efficient than ideal. To answer the second question, one should define what degree of accuracy is required and test against that.
I don't think there's much merit in a general-purpose method which regards as equal things which are close, since different applications will have differing requirements for both absolute and relative tolerance, based upon what exact questions the tests are supposed to answer.