How best to represent negative binary numbers for use in counters and comparators in Verilog HDL?

I have been designing a delta-sigma DAC and have run into confusion and despair over the handling of signed numbers in my (sigma) counters and (delta and Vref) comparators.
I have tried to employ signed two's complement, but the EDA compiler doesn't seem to notice when I do it; it's most likely my own mistake!
So basically my question is: how (in Verilog) do I represent negative numbers in a way that they can be used in counters (which can therefore count up and down)? I am aware that a counter register that will hold signed numbers must be declared reg signed [N-1:0] (for a width of N bits).
Thanks!
Gavin

Well, I am not entirely clear on your question. Some compilers may not be able to handle signed registers directly, since, if I recall correctly, it is a Verilog-2001 feature. But generally, if you use digital logic that works with signed numbers, you shouldn't have an issue. For example, if you use an adder IP, just specify that the inputs are signed numbers. As for the simulator, you can select the type of data you need, generally by selecting the register/value, right-clicking on the waveform window, and changing the type.
Finally, if you have to create the logic yourself, just use sign extension (a sketch follows below). Let's say you are working with 4-bit values: -5 is 1011, which sign-extends to 11111011 in 8 bits, while 5 is 0101, which extends to 00000101. So you can see that for negative numbers the MSB (and all the extension bits) is 1, and for positive numbers it is 0. Using this you can interpret the numbers in your code; just make sure that the size is bigger than what you want to use so that no overflow occurs.
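As a rough illustration of the same idea, here is a minimal sketch in C++ rather than Verilog (in Verilog, declaring the register signed and sizing it generously achieves the same thing; the function name here is just illustrative):

#include <cstdint>
#include <cstdio>

// Sign-extend a 4-bit two's-complement value to 8 bits: copy the MSB
// (the sign bit) into all of the new high-order bits.
int8_t sign_extend_4_to_8(uint8_t nibble) {
    nibble &= 0x0F;                                   // keep only 4 bits
    return (nibble & 0x08) ? (int8_t)(nibble | 0xF0)  // negative: fill with 1s
                           : (int8_t)nibble;          // positive: fill with 0s
}

int main() {
    std::printf("%d\n", sign_extend_4_to_8(0xB));     // 1011 -> 11111011 -> -5
    std::printf("%d\n", sign_extend_4_to_8(0x5));     // 0101 -> 00000101 -> 5
}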

Can we interpret negative binary as positive too? (read the question, please)

I already know the concept of negative binary numbers: a 0 in the most significant bit indicates that the binary number is positive, and a 1 in the most significant bit indicates that it is negative.
But the problem that drove me to ask a question on Stack Overflow is this:
What about the times when we want to represent a huge number whose representation happens to have a 1 in the MSB?
Let me explain it this way: considering the above rule for making negative counterparts of our binary numbers, we could say that, in an 8-bit system, a value of positive 12 (decimal) would be written as 00001100 in binary, and negative 12 (decimal) would be written as 10001100. But what confuses me a bit is that 10001100 could also be interpreted as 140 in decimal, while we know it's the negative form of 12 under this method of conversion.
I just want to know how to deal with these tricky, two-faced ways of interpreting a binary number, just like the example I gave above (it seems to be negative - oh, but wait, it might also not be!).
It depends on the type you use. If you're using an 8-bit representation which is signed, then the largest number you can store is 01111111 (i.e. the first bit is set aside as the sign).
In our example, that would convert to an integer value of 127.
If you use an unsigned type, then you have the extra bit available, allowing you to store 11111111, or 255 expressed as an integer.
A strongly typed language with a good compiler should catch you trying to assign, say, 134 to a signed 8-bit integer and vomit errors all over you.
If you're doing something strange, fiddling around with bits yourself, you're on your own! There's no way of reconstructing, post hoc, whether a pattern was intended to be a negative number or a large positive one, so you'll have to choose a system and stick with it.
The general convention nowadays is to stick with signed representations - although I have seen code (usually for extreme compute tasks like astrophysical calculations) use unsigned values simply to save memory. And images, of course, conventionally use unsigned values.
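To make the two-faced reading concrete, here is a small C++ sketch that decodes the question's bit pattern 10001100 three ways (the signed cast assumes two's complement, which is what mainstream hardware actually uses, rather than the question's sign-magnitude rule):

#include <cstdint>
#include <cstdio>

int main() {
    // The same 8 bits, 10001100, read three different ways:
    uint8_t bits = 0b10001100;
    std::printf("unsigned:          %u\n", bits);           // 140
    std::printf("two's complement:  %d\n", (int8_t)bits);   // -116
    // Under the question's sign-magnitude rule, the MSB is only a sign:
    int sign_magnitude = (bits & 0x80) ? -(bits & 0x7F) : bits;
    std::printf("sign-magnitude:    %d\n", sign_magnitude); // -12
}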

Binary -> Signed Mantissa

How do I write 0xFA in signed mantissa? I converted it to binary: 1111_1010. Not sure where to go from here.
The question is "If the register file has 8 bits width total, write the following in signed mantissa."
Also, an explanation of signed mantissa would be great!
So what you have to work with is a byte of data with an unknown type, apparently.
In order to write a number in signed mantissa (see Significand) one would expect that you're dealing with a floating-point type such as single or double. However, you've only got a single byte.
A single is 4 bytes, so surely it can't be that, and a double (8 bytes) is double trouble. Even a half requires 16 bits. The only logical alternative type would be SByte, but in that case you will never get any digits of the mantissa after the decimal point. In fact, there is no decimal point. So perhaps this is a trick question?
If you go on the assumption of SByte, you get -6 x 10^0.
Just in case you want proof, or if you're curious how this looks during debug:
private void SByte2Dec()
{
    // Convert.ToSByte with base 16 accepts the "0x" prefix and interprets
    // the byte as two's complement, so 0xFA parses to -6.
    sbyte convertsHexToSByte = Convert.ToSByte("0xFA", 16);
    Single yourAnswer = Convert.ToSingle(convertsHexToSByte);
    label1.Text = Convert.ToString(yourAnswer);   // shows "-6"
}
In this example I had a windows form with nothing but label1 on it.
Then I put SByte2Dec(); right under InitializeComponent();
The solution is -122. Not sure how to get there...any ideas?
Working backwards from the answer, it's simple to see what your professor has done. He is assuming the MSB is the sign bit and the rest is treated like a 7-bit unsigned integer: 0xFA is 1111 1010, so the sign is negative and the magnitude is 111 1010 = 122, giving -122. There is a precedent for this, called "signed magnitude representation", but it's not used in modern computing. These days pretty much everyone is using two's complement.
I take it this is a beginners' course, and rather than go through all the trouble of explaining two's complement and data types, your professor is mainly trying to drive home the point of the MSB being a sign bit. If you got the whole sign-bit thing and don't know anything else about the way modern computer hardware performs calculations, then you would probably arrive at the same answer.
My guess is that your professor also took to wording the question in a strange way so as to throw you off the path if you tried to Google the answer. If you want to get him back, ask him what the difference between "1000 0000" and "0000 0000" is (in signed magnitude they are negative and positive zero). Also, if you or anyone else in the class answered -6 and he counted it wrong, he should be fired. Those students should be awarded bonus points for teaching themselves about two's complement.
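For anyone who wants to check the arithmetic, a small C++ sketch of both decodings of 0xFA (the sign-magnitude branch reproduces the professor's assumed convention; it is not what real hardware does):

#include <cstdint>
#include <cstdio>

int main() {
    uint8_t bits = 0xFA;  // 1111 1010
    // Signed magnitude: sign bit 1, magnitude 111 1010 = 122 -> -122
    int sign_magnitude = (bits & 0x80) ? -(bits & 0x7F) : bits;
    // Two's complement: 0xFA = 250 -> 250 - 256 = -6
    int twos_complement = (int8_t)bits;
    std::printf("signed magnitude: %d, two's complement: %d\n",
                sign_magnitude, twos_complement);
}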
Why would the signed mantissa be -6? I see that the 2's complement is -6 but signed mantissa is different?
Have you read the wiki article I linked to on "Significand"?
The important thing to realize is that "signed mantissa" is not a data type. However, there are (were?) many machine-specific data types that implemented their own ways of storing floating-point numbers before the IEEE standard became widely adopted. These early data types were often referred to as DFP, or decimal floating-point numbers, as opposed to binary floating point. Read this paper for a more in-depth understanding. Also, this paper covers the topic quite well.
As I stated earlier, your professor most likely used the terminology "signed mantissa" to throw you off if you went searching the internet for the answer. Apparently you were expected to read between the lines and know that what he was really asking for was a form of decimal floating point, or signed magnitude representation.
"Signed mantissa" ≠ two's complement.
"Signed mantissa" is to be interpreted as some form of decimal floating point, whereas two's complement is a binary integer representation.

Handling Double values on CUDA (Compute Capability 1.1) [duplicate]

I am writing a program for embedded hardware that only supports 32-bit single-precision floating-point arithmetic. The algorithm I am implementing, however, requires 64-bit double-precision addition and comparison. I am trying to emulate the double datatype using a tuple of two floats, so a double d will be emulated as a struct containing the tuple (float d.hi, float d.low).
The comparison should be straightforward using a lexicographic ordering. The addition, however, is a bit tricky because I am not sure which base I should use. Should it be FLT_MAX? And how can I detect a carry?
How can this be done?
Edit (Clarity): I need the extra significant digits rather than the extra range.
Double-float is a technique that uses pairs of single-precision numbers to achieve almost twice the precision of single-precision arithmetic, accompanied by a slight reduction of the single-precision exponent range (due to intermediate underflow and overflow at the far ends of the range). The basic algorithms were developed by T. J. Dekker and William Kahan in the 1970s. Below I list two fairly recent papers that show how these techniques can be adapted to GPUs; however, much of the material covered in these papers is applicable independent of platform, so it should be useful for the task at hand.
https://hal.archives-ouvertes.fr/hal-00021443
Guillaume Da Graça, David Defour
Implementation of float-float operators on graphics hardware,
7th conference on Real Numbers and Computers, RNC7.
http://andrewthall.org/papers/df64_qf128.pdf
Andrew Thall
Extended-Precision Floating-Point Numbers for GPU Computation.
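To make the technique concrete, here is a minimal C++ sketch of the core building block (Knuth's two_sum) and a simplified double-float addition in the style of the Dekker/Thall algorithms described in those papers. It ignores NaNs, infinities, and extreme cancellation, and it must be compiled with strict IEEE float semantics (no FMA contraction, no -ffast-math):

#include <cstdio>

struct dfloat { float hi, lo; };  // represents hi + lo, with |lo| small vs hi

// Knuth's two_sum: s + e == a + b exactly, where s = fl(a + b).
static void two_sum(float a, float b, float &s, float &e) {
    s = a + b;
    float bb = s - a;
    e = (a - (s - bb)) + (b - bb);
}

// Double-float addition (sketch): add the high parts exactly, fold the
// rounding error and the low parts in, then renormalize.
dfloat df_add(dfloat x, dfloat y) {
    float s, e;
    two_sum(x.hi, y.hi, s, e);
    e += x.lo + y.lo;
    float hi = s + e;            // fast_two_sum renormalization,
    float lo = e - (hi - s);     // valid because |s| >= |e| here
    return {hi, lo};
}

int main() {
    dfloat one  = {1.0f, 0.0f};
    dfloat tiny = {1e-10f, 0.0f};             // lost entirely in plain float
    dfloat r = df_add(one, tiny);
    std::printf("hi=%g lo=%g\n", r.hi, r.lo); // lo recovers ~1e-10
}

Note that once values are kept normalized this way, the comparison the question asks about really is just lexicographic on (hi, lo).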
This is not going to be simple.
A float (IEEE 754 single-precision) has 1 sign bit, 8 exponent bits, and 23 bits of mantissa (well, effectively 24).
A double (IEEE 754 double-precision) has 1 sign bit, 11 exponent bits, and 52 bits of mantissa (effectively 53).
You can use the sign bit and 8 exponent bits from one of your floats, but how are you going to get 3 more exponent bits and 29 bits of mantissa out of the other?
Maybe somebody else can come up with something clever, but my answer is "this is impossible". (Or at least, "no easier than using a 64-bit struct and implementing your own operations")
It depends a bit on what types of operations you want to perform. If you only care about additions and subtractions, Kahan summation can be a great solution (a sketch follows below).
If you need both the precision and a wide range, you'll be needing a software implementation of double-precision floating point, such as SoftFloat.
(For addition, the basic principle is to break the representation (e.g. 64 bits) of each value into its three constituent parts - sign, exponent and mantissa; then shift the mantissa of one part based on the difference in the exponents, add to or subtract from the mantissa of the other part based on the sign bits, and possibly renormalise the result by shifting the mantissa and adjusting the exponent correspondingly. Along the way, there are a lot of fiddly details to account for, in order to avoid unnecessary loss of accuracy, and to deal with special values such as infinities, NaNs, and denormalised numbers.)
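Here is the Kahan (compensated) summation sketch mentioned above, in C++. It assumes strict IEEE float semantics; an aggressive optimizer allowed to reassociate (e.g. -ffast-math) would simplify the compensation away:

#include <cstdio>

// Kahan compensated summation: carries the rounding error of each
// addition in 'c' and feeds it back into the next addition.
float kahan_sum(const float *x, int n) {
    float sum = 0.0f, c = 0.0f;
    for (int i = 0; i < n; ++i) {
        float y = x[i] - c;      // subtract the carried-over error
        float t = sum + y;       // big + small: low bits of y are lost...
        c = (t - sum) - y;       // ...but recovered here
        sum = t;
    }
    return sum;
}

int main() {
    static float xs[1000001];
    xs[0] = 1.0f;
    for (int i = 1; i <= 1000000; ++i) xs[i] = 1e-8f;
    // Naive left-to-right summation gives 1.0 (every 1e-8 is rounded away);
    // Kahan summation recovers approximately 1.01.
    std::printf("%f\n", kahan_sum(xs, 1000001));
}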
Given all the constraints for high precision over 23 magnitudes, I think the most fruitful method would be to implement a custom arithmetic package.
A quick survey shows that Briggs' doubledouble C++ library should address your needs and then some. See this.[*] The default implementation is based on double to achieve 30 significant figures of computation, but it is readily rewritten to use float to achieve 13 or 14 significant figures. That may be enough for your requirements if care is taken to segregate addition operations on values of similar magnitude, only adding extremes together in the last operations.
Beware, though: the comments mention messing around with the x87 control register. I didn't check into the details, but that might make the code too non-portable for your use.
[*] The C++ source is linked from that article; only the gzipped tar link was not dead.
This is similar to the double-double arithmetic used by many compilers for long double on machines that have only hardware double calculation support. It's also used as float-float on older NVIDIA GPUs where there's no double support. See Emulating FP64 with 2 FP32 on a GPU. This way the calculation will be much faster than a software floating-point library.
However, on most microcontrollers there's no hardware support for floats, so they're implemented purely in software. Because of that, using float-float may not increase performance, and it may introduce some memory overhead to store the extra bytes of exponent.
If you really need the longer mantissa, try using a custom floating-point library. You can choose whatever is enough for you; for example, change the library to support a new 48-bit float type of your own if only 40 bits of mantissa and 7 bits of exponent are needed. No need to spend time calculating or storing the unnecessary 16 bits anymore. But such a library needs to be very efficient, because compilers' own libraries often have assembly-level optimization for their own float types.
Another software-based solution that might be of use: GNU MPFR
It takes care of many other special cases and allows arbitrary precision (better than 64-bit double) that you would have to otherwise take care of yourself.
That's not practical. If it were, every embedded 32-bit processor (or compiler) would emulate double precision by doing that. As it stands, none that I am aware of do. Most of them just substitute float for double.
If you need the precision and not the dynamic range, your best bet would be to use fixed point (a sketch follows). If the compiler supports 64-bit integers, this will be easier too.
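As an illustration of the fixed-point suggestion, here is a minimal Q32.32 sketch in C++; the Q32.32 split is an arbitrary assumption, and in practice you would pick the split to match your value range. Addition and comparison become single, exact integer operations:

#include <cstdint>
#include <cstdio>

// Q32.32 fixed point in a 64-bit integer: value = raw / 2^32.
typedef int64_t q32_32;

// Conversion via double is for demo clarity only; on a float-only target
// you would construct values directly from integer numerators/denominators.
q32_32 to_fixed(double x)          { return (q32_32)(x * 4294967296.0); }
double to_double(q32_32 a)         { return (double)a / 4294967296.0; }

q32_32 fx_add(q32_32 a, q32_32 b)  { return a + b; }  // exact, no rounding
bool   fx_less(q32_32 a, q32_32 b) { return a < b; }  // exact comparison

int main() {
    q32_32 a = to_fixed(1.5), b = to_fixed(0.25);
    std::printf("%f\n", to_double(fx_add(a, b)));      // prints 1.750000
}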

Emulating IBM floating point multiplication/addition in VBA

I am attempting to emulate a (no longer existing) mainframe report generator in an Access 2003 or Access 2010 environment. The data it generates must match exactly with paper reports from the early 70s. Unfortunately, the earliest years' data were run on hardware that used IBM floating-point representation instead of IEEE. With the help of Google, I've found a library of VBA functions that will convert a float from decimal to the IEEE 754 32-bit binary format. I had to modify the library to accept either 32-bit or 64-bit floats, so I have a modest working knowledge of floating-point formats. However, I'm having trouble making the conversion from IEEE to IBM binary format, as well as trouble multiplying and adding either the IBM or the IEEE numbers.
I haven't turned up any other libraries for performing this conversion and arithmetic operations in VBA - is there an easier way to go about this, or an existing library that I'm not finding? Failing that, a clear and straightforward explanation of the relevant algorithms?
Thanks in advance.
To be honest, you'd probably do better to start by looking at the Hercules emulator.
http://www.hercules-390.org/ Other than that, in theory, with VBA you can use the Decimal type to get good results (note you have to use CDec to create these); it uses a 96-bit (12-byte) integer with a variable power-of-ten scalar.
A quick Google shows this post from the Hercules group, which confirms Albert's point about needing to know the hardware:
---snip---
In theory, but rather less so in practice. S/360 and S/370 had a choice of Scientific or Commercial instruction sets. The former added the FP instructions and registers to the base; the latter the decimal instructions, including Edit and Edit & Mark. But larger 360 (iirc /65 and up) and 370 (/155 and up) models had the union of the two, called the Universal instruction set, and at some point the S/370 dropped the option.
---snip---
I have to say that, having looked at the Hercules source code, you'll probably need to figure out exactly which floating-point operation codes (in terms of precision: single, long, extended) are being performed.
The problem here is that you're confusing the Decimal type in Access with the Single and Double floating-point types available in Access.
If you use the Currency data type in Access, this is a scaled integer and will not produce rounding (that is what most of us use for financial calculations and reports). You can also use Decimal values in Access, and again they don't round at all, as they are packed decimals.
However, both the Single and Double values available inside of Access are in fact in the same format and conform to the IEEE floating-point standard.
For an Access Single variable, this is a 32-bit number, and the range is:
-3.402823E38 to -1.401298E-45 for negative values
and
1.401298E-45 to 3.402823E38 for positive values
That looks to be the same to me as the IEEE 754 standard.
So, if you add up values in Access as Singles, you should get the same rounding results.
So, on Intel-based machines, Access Singles and Doubles are, I believe, the same as this IEEE standard.
The only real issue here is: what is the format of the original data you're pulling into Access, and what kind of text or string conversion process is occurring when that data is pulled in and stored?
Access can convert numbers. Try typing these values at the Access command line prompt (debug window):
? hex(255)
The above will show FF.
? csng(&hFF)
The above will show 255.
Edit:
Ah, OK, I see now I had this reversed; my mistake here. The problem is that, assuming you convert a number to the older IBM format (excess-64 exponent), you will THEN have to get your hands on the code IBM used for adding those numbers. In fact, even back then, different IBM models, depending on what you purchased, actually produced different results (more money = more precision).
So, not only do you need conversion routines to convert to the internal representation, you THEN need the routines that add/subtract/multiply those numbers. Just having conversion routines is not going to get you very far, since you also have to duplicate their exact math routines. Those routines were likely not all created equal in terms of how they round numbers, etc.
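For reference, here is a minimal decoder for the 32-bit IBM System/360 hexadecimal float format, sketched in C++ for compactness (the same arithmetic translates directly to VBA). It assumes a normalized input and handles no edge cases; the add/subtract/multiply routines discussed above would still have to be written separately:

#include <cstdint>
#include <cstdio>
#include <cmath>

// IBM S/360 single: 1 sign bit, 7-bit excess-64 exponent (base 16),
// 24-bit fraction with the radix point on the left.
// value = (-1)^sign * (fraction / 2^24) * 16^(exponent - 64)
double ibm32_to_double(uint32_t bits) {
    int sign         = (bits >> 31) ? -1 : 1;
    int exponent     = (bits >> 24) & 0x7F;
    uint32_t fraction = bits & 0x00FFFFFF;
    if (fraction == 0) return 0.0;
    return sign * (fraction / 16777216.0) * std::pow(16.0, exponent - 64);
}

int main() {
    // 0x41100000 encodes 1.0: 16^(65-64) * (0x100000 / 0x1000000)
    std::printf("%g\n", ibm32_to_double(0x41100000));
}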

Can coordinates of constructible points be represented exactly?

I'd like to write a program that lets users draw points, lines, and circles as though with a straightedge and compass. Then I want to be able to answer the question, "are these three points collinear?" To answer correctly, I need to avoid rounding error when calculating the points.
Is this possible? How can I represent the points in memory?
(I looked into some unusual numeric libraries, but I didn't find anything that claimed to offer both exact arithmetic and exact comparisons that are guaranteed to terminate.)
Yes.
I highly recommend Introduction to constructions, which is a good basic guide.
Basically, you need to be able to compute with constructible numbers - numbers that are either rational, or of the form a + b sqrt(c) where a, b, c were previously created (see page 6 of that PDF). This could be done with an algebraic data type (e.g. data C = Rational Integer Integer | Root C C C in Haskell, where Root a b c = a + b sqrt(c)). However, I don't know how to perform equality tests with that representation.
Two possible approaches are:
Constructible numbers are a subset of algebraic numbers, so you can use algebraic numbers. All algebraic numbers can be represented by the polynomials of which they are roots. The operations are computable, so if you represent a number a with polynomial p and b with polynomial q (p(a) = q(b) = 0), then it is possible to find a polynomial r such that r(a+b) = 0. This is done in some CASs such as Mathematica; see this example. See also: Computational algebraic number theory - chapter 4.
Use Tarski's decision procedure for real closed fields, representing numbers by the formulas they satisfy. It is slow (doubly exponential or so), but it works :) Example: to represent sqrt(2), use the formula x^2 - 2 = 0 && x > 0. You can write equations for lines there, check if points are collinear, etc. See A suite of logic programs, including Tarski's test.
If you turn to computable numbers, then equality, collinearity, etc. become undecidable.
I think the only way this would be possible is if you used a symbolic representation, as opposed to trying to represent coordinate values directly - so you would have to avoid trying to coerce values like sqrt(2) into some numerical format. You will be dealing with irrational numbers that are not finitely representable in binary, decimal, or any other positional notation.
To expand on Jim Lewis's answer slightly, if you want to operate on points that are constructible from the integers with exact arithmetic, you will need to be able to operate on representations of the form:
a + b sqrt(c)
where a, b, and c are either rational numbers, or representations in the form given above. Wikipedia has a pretty decent article on the subject of which points are constructible.
Answering the question of exact equality (as necessary to establish collinearity) with such representations is a rather tricky problem; a sketch of the simplest case follows.
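To make that simplest case concrete, here is a rough C++ sketch of exact arithmetic in Q(sqrt(2)) - numbers a + b*sqrt(2) with rational a and b - where equality really is exact, component-wise comparison, precisely because sqrt(2) is irrational. All names are illustrative; a real constructible-number type would also need division, arbitrary and nested radicands, and big-integer rationals, which this sketch deliberately omits:

#include <cstdio>
#include <numeric>  // std::gcd (C++17)

// Exact rational p/q, kept normalized (q > 0, gcd(|p|, q) == 1).
// No overflow guards; a real implementation would use big integers.
struct Rat {
    long long p, q;
    Rat(long long n = 0, long long d = 1) : p(n), q(d) {
        if (q < 0) { p = -p; q = -q; }
        long long g = std::gcd(p < 0 ? -p : p, q);
        if (g) { p /= g; q /= g; }
    }
};
Rat operator+(Rat a, Rat b) { return Rat(a.p * b.q + b.p * a.q, a.q * b.q); }
Rat operator-(Rat a, Rat b) { return Rat(a.p * b.q - b.p * a.q, a.q * b.q); }
Rat operator*(Rat a, Rat b) { return Rat(a.p * b.p, a.q * b.q); }
bool operator==(Rat a, Rat b) { return a.p == b.p && a.q == b.q; }

// Elements of Q(sqrt(2)): a + b*sqrt(2). Closed under +, -, *;
// equality is component-wise because sqrt(2) is irrational.
struct QSqrt2 { Rat a, b; };
QSqrt2 operator+(QSqrt2 x, QSqrt2 y) { return {x.a + y.a, x.b + y.b}; }
QSqrt2 operator-(QSqrt2 x, QSqrt2 y) { return {x.a - y.a, x.b - y.b}; }
QSqrt2 operator*(QSqrt2 x, QSqrt2 y) {
    // (a1 + b1 r)(a2 + b2 r) = a1 a2 + 2 b1 b2 + (a1 b2 + b1 a2) r, r = sqrt(2)
    return {x.a * y.a + Rat(2) * (x.b * y.b), x.a * y.b + x.b * y.a};
}
bool operator==(QSqrt2 x, QSqrt2 y) { return x.a == y.a && x.b == y.b; }

int main() {
    QSqrt2 r{{0}, {1}};             // sqrt(2)
    QSqrt2 half_r{{0}, Rat(1, 2)};  // sqrt(2)/2
    QSqrt2 product = half_r * r;    // (sqrt(2)/2) * sqrt(2) = 1, exactly
    std::printf("%s\n", product == QSqrt2{{1}, {0}} ? "equal" : "not equal");
}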
If you try to compare co-ordinates for your points, then you have a problem. Leaving aside collinearity for a moment, how about just working out whether two points are the same or not?
Suppose that one has given co-ordinates, and the other is a compass-and-straightedge construction starting from certain other co-ordinates. You want to determine with certainty whether they're the same point or not. Either way, the answer is a theorem of Euclidean geometry; it's not something you can just measure. You can prove they aren't the same by spotting some difference in their co-ordinates (for example by computing decimal places of each until you encounter a difference). But in general, proving they are the same cannot be done by approximate methods. Compute as many decimal places as you like of some expansions of 1/sqrt(2) and sqrt(2)/2, and you can prove they're very close together, but you won't ever prove they're equal. That takes algebra (or geometry).
Similarly, to show that three points are collinear, you will need theorem-proving software. Represent the points A, B, C by their constructions, and attempt to prove the theorem "A, B and C are collinear". This is very hard - your program will prove some theorems but not others. Much easier is to ask the user for a proof that they are collinear, and then verify (or refute) that proof, but that's probably not what you want.
In general, constructible points may have an arbitrarily complex symbolic form, so you must use a symbolic representation to work with them exactly. As Stephen Canon noted above, you often need numbers of the form a + b*sqrt(c), where a and b are rational and c is an integer. All numbers of this form form a set closed under the arithmetic operations. I have written some C++ classes (see rational_radical1.h) to work with these numbers, if that is all you need.
It is also possible to construct numbers that are sums of any number of terms of rational multiples of radicals. When dealing with more than a single radicand, the numbers are no longer closed under multiplication and division, so you will need to store them as variable-length arrays of rational coefficients. The time complexity of operations will then be quadratic in the number of terms.
To go even further, you can construct the square root of any given number, so you could potentially have nested square roots. Here, the representations must be tree-like structures to deal with the root hierarchy. While difficult to implement, there is nothing in principle preventing you from working with these representations. I'm not sure just what additional numbers can be constructed, but beyond a certain point your symbolic representation will be expressive enough to handle very large classes of numbers.
Addendum
Found this Google Books link.
If the grid axes are integer-valued, then the answer is fairly straightforward: the points are either exactly collinear or they are not, and an exact integer test decides it (see the sketch below).
Typically, however, one works with real numbers (well, floating point) and then draws the rounded values on the screen, which does exist in integer space. In this case you have no choice but to pick a tolerance and use it to determine collinearity. Keep it small and the users will never know the difference.
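For integer inputs, the exact test is a one-liner on 64-bit integers. A sketch, assuming coordinates bounded by roughly ±2^30 so the products cannot overflow:

#include <cstdint>

// Points A, B, C are collinear iff the cross product of AB and AC is zero.
// With coordinates bounded by ~±2^30, differences fit in 31 bits and the
// products in 62 bits, so the int64_t arithmetic is exact - no rounding.
bool collinear(int64_t ax, int64_t ay, int64_t bx, int64_t by,
               int64_t cx, int64_t cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) == 0;
}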
You seem to be asking, in effect, "Can the normal mathematics (integer or floating point) used by computers be made to represent real numbers perfectly, with no rounding errors?" And, of course, the answer to that is "No." If you want theoretical correctness, then you will be stuck with the much harder problem of symbolic manipulation and coding up the equivalent of the inferences that are done in geometry. (In short, I'm agreeing with Steve Jessop, above.)
Some thoughts in the hope that they might help.
The sort of constructions you're talking about will require multiplication and division, which means that to preserve exactness you'll have to use rational numbers, which are generally easy to implement on top of a suitable sort of big integer (i.e., of unbounded magnitude). (Common Lisp has these built in, and there are bound to be other languages.)
Now, you need to represent square roots of arbitrary numbers, and these have to be mixed in.
Therefore, a number is one of: a rational number, a rational number multiplied by the square root of a rational number (or, alternatively, just the square root of a rational), or a sum of such numbers. In order to prove anything, you're going to have to get these numbers into some sort of canonical form, which for all I can figure offhand may be annoying and computationally expensive.
This of course means that the users will be restricted to rational points and cannot use arbitrary rotations, but that's probably not important.
I would recommend not trying to make it perfectly exact.
The first reason is what you are asking about here: the rounding error and all the other issues that come with floating-point calculations.
The second is that you have to round your input anyway, as the mouse and screen work with integers. So initially all user input would be integers, and your output would be integers.
Besides, from a usability point of view, it's easier to click in the neighborhood of another point (on a line, for example) and have the interface consider that you are clicking on the point itself.