What is the smallest (most negative) signed number that can be represented using signed magnitude (binary)?

What is the formula for finding the smallest signed number that can be represented using signed magnitude?
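For reference, the standard reasoning (not part of the original post): in an n-bit signed-magnitude format, one bit stores the sign and the remaining n-1 bits store the magnitude, so the most negative representable value is

    -(2^(n-1) - 1)

e.g. for n = 8 bits: -(2^7 - 1) = -127. (Compare two's complement, where the minimum is -2^(n-1).)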

Related

Number of binary additions for an m*n binary multiplication

I want to determine the maximum number of binary additions that may be needed to carry out an m*n binary multiplication. For example, if I multiply a 3-bit number (101) by a 2-bit number (10), the number of additions I get is 2, whereas if I multiply a 3-bit number (111) by a 2-bit number (11), the number of additions is 4. I have shown this in the attached image. How can I determine the number of additions if I consider the possibility of a carry from a previous addition? Thank you.
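The question doesn't define its counting convention precisely, but one reading that reproduces both examples is: writing down the first partial product is free, and each later partial product costs one single-bit addition for every column in which a 1-bit or an incoming carry has to be combined into the running sum. A small C sketch of that convention (count_additions is a hypothetical helper written for this illustration; it assumes inputs small enough that the shifts stay in range):

    #include <stdio.h>

    /* Count single-bit additions in grade-school binary multiplication,
     * under one convention that matches the question's examples: the
     * first partial product just initializes the running sum; for each
     * later partial product, every column where a 1-bit or an incoming
     * carry must be combined into the sum counts as one addition. */
    static unsigned count_additions(unsigned a, unsigned b)
    {
        unsigned acc = 0, additions = 0;
        /* walk the multiplier's bits, least significant first */
        for (unsigned bits = b, i = 0; bits != 0; bits >>= 1, i++) {
            unsigned pp = (bits & 1u) ? (a << i) : 0u;  /* partial product row */
            if (i == 0) {            /* first row just initializes the sum */
                acc = pp;
                continue;
            }
            unsigned carry = 0;
            for (unsigned j = 0; (pp >> j) != 0 || (acc >> j) != 0 || carry; j++) {
                unsigned pbit = (pp >> j) & 1u;
                unsigned abit = (acc >> j) & 1u;
                if (pbit | carry)    /* a column where something is added in */
                    additions++;
                unsigned sum = pbit + abit + carry;
                acc = (acc & ~(1u << j)) | ((sum & 1u) << j);
                carry = sum >> 1;
            }
        }
        return additions;
    }

    int main(void)
    {
        printf("101 x 10 -> %u additions\n", count_additions(5u, 2u)); /* prints 2 */
        printf("111 x 11 -> %u additions\n", count_additions(7u, 3u)); /* prints 4 */
        return 0;
    }

Under this convention the worst case involves at most n-1 row additions, each touching at most m+n columns, so a loose upper bound is (n-1)(m+n).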

Floating point number accuracy distribution

To start out I just want to state that I have read this discussion.
Are floating point numbers uniformly inaccurate across all possible values, or does the inaccuracy increase as the values get farther and farther away from 0?
To understand this, you need to be clear about what kind of accuracy you are talking about. It is usually a measure of the error introduced by calculation, and I suspect you are not thinking only about calculations performed within the relevant floating point format.
These are all answers to your question:
The precision of floating point numbers - expressed in number of significant bits - is constant over most of the range. (Only for denormal numbers does the precision decrease as the number gets smaller.)
The accuracy of floating point operations is typically limited by the precision, so mostly constant over the range. See the previous point.
The accuracy with which you can convert decimal numbers to binary floating point will be higher for integers than for numbers with a fractional component. This is because an integer can be represented exactly as a sum of powers of two, while most decimal fractions cannot be represented as a finite sum of negative powers of two. (The typical example is 0.1, which becomes a repeating fraction in binary floating point.)
The consequence of the last point is that when you start out with moderately large decimal numbers in scientific notation, e.g. 1.123*10^4, these have the same value as an integer (here 11230) and can therefore be converted exactly to binary floating point.
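A quick C illustration of those last two points (the values shown assume IEEE 754 double precision):

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 has no finite binary expansion, so only the nearest
         * representable double is stored. */
        printf("%.20f\n", 0.1);       /* prints 0.10000000000000000555 */

        /* 1.123e4 is the integer 11230; it fits in the 53-bit
         * significand, so the conversion from decimal is exact. */
        printf("%.20f\n", 1.123e4);   /* prints 11230.00000000000000000000 */
        return 0;
    }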

Which arithmetic operations are the same on unsigned and two's complement signed numbers?

I'm designing a simple toy instruction set and accompanying emulator, and I'm trying to figure out what instructions to support. In the way of arithmetic, I currently have unsigned add, subtract, multiply, and divide. However, I can't seem to find a definitive answer to the following question: Which of the arithmetic operators need signed versions, and for which are the unsigned and two's complement signed versions equivalent?
So, for example, 1111 in two's complement is equal to -1. If you add 1 to it and pretend it's an unsigned number, you get 0000, which is also correct when thinking of it as -1 + 1 = 0. However, does that hold for all numbers? And what about the other three operations (subtraction, multiplication, division)?
Addition, subtraction and multiplication are the same provided:
Your inputs and outputs are the same size
Your behaviour on overflow is wraparound modulo 2^n
Division is different.
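A small C demonstration of these rules (my own sketch, not from the answer), using 8-bit values where the bit pattern 0xF8 is 248 unsigned but -8 signed:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint8_t ua = 0xF8, ub = 0x02;             /* unsigned: 248 and 2 */
        int8_t  sa = (int8_t)ua, sb = (int8_t)ub; /* signed:    -8 and 2 */

        /* Add, subtract, multiply produce identical bit patterns: */
        printf("add: %02X vs %02X\n", (uint8_t)(ua + ub), (uint8_t)(sa + sb)); /* FA FA */
        printf("sub: %02X vs %02X\n", (uint8_t)(ua - ub), (uint8_t)(sa - sb)); /* F6 F6 */
        printf("mul: %02X vs %02X\n", (uint8_t)(ua * ub), (uint8_t)(sa * sb)); /* F0 F0 */

        /* Division differs: 248/2 = 124 (0x7C), but -8/2 = -4 (0xFC). */
        printf("div: %02X vs %02X\n", (uint8_t)(ua / ub), (uint8_t)(sa / sb)); /* 7C FC */
        return 0;
    }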
Many instruction sets offer multiplication operations where the output is larger than the input, again these are different for signed and unsigned.
Furthermore if you are writing your emulator in C there are some misfeatures of the language that you need to be aware of.
Overflow of signed arithmetic in C is undefined behaviour. To get reliable modulo-2^n behaviour, arithmetic must be performed using unsigned types.
C will promote types smaller than int to int. Great care is needed to avoid such promotions (adding 0u or multiplying by 1u at the start of your calculation is one way).
Conversion from unsigned types to signed types is implementation-defined; the implementations I've seen do the sensible thing, but there may be some that don't.
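Putting those three points together, a defined-behaviour 16-bit add for the emulator might look like this sketch (emu_add16 is a hypothetical name):

    #include <stdint.h>

    static int16_t emu_add16(int16_t a, int16_t b)
    {
        /* Do the math on unsigned values: wraparound there is fully
         * defined, and the leading 0u defeats promotion to signed int. */
        uint16_t r = (uint16_t)(0u + (uint16_t)a + (uint16_t)b);
        /* Converting back is implementation-defined, but behaves as
         * expected on every common two's complement implementation. */
        return (int16_t)r;
    }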
Add and subtract are the same for signed and unsigned two's complement numbers, assuming you handle overflow/underflow the way most CPUs do, i.e. just wrap around. Multiply and divide are different. So you only need one addition routine and one subtraction routine regardless of signedness, but you need separate signed and unsigned multiply and divide routines.
All your operations need overflow checks, or they will return incorrect values in some cases. The unsigned versions of these checks are different from the signed ones, so you'll need to implement each routine separately.
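For example, detecting overflow of an addition needs different tests for the two interpretations; a sketch in C (the function names are mine):

    #include <stdbool.h>
    #include <stdint.h>

    /* Unsigned overflow: the wrapped result is smaller than an operand. */
    static bool uadd32_overflows(uint32_t a, uint32_t b)
    {
        return (uint32_t)(a + b) < a;
    }

    /* Signed overflow: both operands have the sign that the result lacks.
     * Computed in unsigned arithmetic to avoid undefined behaviour. */
    static bool sadd32_overflows(int32_t a, int32_t b)
    {
        uint32_t ua = (uint32_t)a, ub = (uint32_t)b, r = ua + ub;
        return ((ua ^ r) & (ub ^ r)) >> 31;
    }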

MySQL field type for weight and height data

What is an appropriate MySQL field type for weight (in kilograms) and height (in meters) data?
That all depends. There are advantages and disadvantages to the different numeric types:
If you absolutely need the height/weight to be exact to a fixed number of decimal places and not subject to the issues that floating point numbers can cause, use DECIMAL. (DECIMAL takes two parameters: the total number of digits, and how many of them fall after the decimal point; so DECIMAL(5,2) can store values from -999.99 up to 999.99, in steps of 0.01.) Fixed-precision math is a bit slower, and such values are more costly to store, because of the algorithms involved. For currency values, or any value you intend to do currency math with, DECIMAL (or its MySQL alias NUMERIC) is absolutely mandatory; if you were, for instance, paying people per kilogram or per meter, the column would need to be DECIMAL.
If you're willing to accept that some stored values will be slightly less precise than others, use either FLOAT or DOUBLE, depending on the precision you need and the size of the numbers you're storing. Floating point values are approximations that stem from the way computers do math: we write fractions in powers of 10, computers work in base 2, and the two don't map onto each other exactly. Floating point formats were developed for cases where 100% exactness is not required but more speed is needed than can be achieved by modelling values with collections of integers.
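To make the floating point caveat concrete, here is a small C sketch (C rather than SQL, for a self-contained illustration) of the kind of drift that DECIMAL exists to avoid:

    #include <stdio.h>

    int main(void)
    {
        /* Summing 0.10 one hundred times in binary floating point
         * does not give exactly 10.00. */
        double total = 0.0;
        for (int i = 0; i < 100; i++)
            total += 0.10;
        printf("%.17f\n", total);                      /* 9.99999999999999822 */
        printf("total == 10.0 ? %d\n", total == 10.0); /* 0 (false) */
        return 0;
    }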
For more info:
http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
http://dev.mysql.com/doc/refman/5.0/en/fixed-point-types.html
http://dev.mysql.com/doc/refman/5.0/en/floating-point-types.html

How does a computer work out if one value is greater than another?

I understand basic binary logic and how to do basic addition, subtraction, etc. I get that each of the characters in this text is just a binary number representing an entry in a charset; the numbers don't really mean anything to the computer. I'm confused, however, as to how a computer works out that one number is greater than another. What does it do at the bit level?
If you have two numbers, you can compare each bit, from most significant to least significant, using a 1-bit comparator gate.
Of course n-bit comparator gates exist and are described further here.
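A software model of that cascade (a rough C sketch standing in for the gate diagram that accompanied the original answer): at each bit, from the most significant down, "a is greater" can only be decided while all higher bits have compared equal, which is how cascaded comparator chips such as the 74HC85 chain their equal-so-far signal.

    #include <stdio.h>
    #include <stdint.h>

    /* 8-bit magnitude comparator in the style of cascaded 1-bit
     * comparator stages: gt latches at the first (most significant)
     * position where a has a 1 and b a 0, provided every higher bit
     * was equal. Only AND/OR/XOR on single bits, as hardware would. */
    static int greater_u8(uint8_t a, uint8_t b)
    {
        int gt = 0, eq = 1;            /* eq = "all bits so far equal" */
        for (int i = 7; i >= 0; i--) {
            int ai = (a >> i) & 1;
            int bi = (b >> i) & 1;
            gt |= eq & ai & (bi ^ 1);  /* first difference is a=1, b=0 */
            eq &= (ai ^ bi) ^ 1;       /* stays 1 only while bits match */
        }
        return gt;
    }

    int main(void)
    {
        printf("%d\n", greater_u8(0x2D, 0x2A)); /* 0010 1101 > 0010 1010 -> 1 */
        return 0;
    }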
It subtracts one from the other and sees if the result is less than 0 (by checking the highest-order bit, which is 1 on a number less than 0 since computers use 2's complement notation).
http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_comp.htm
It subtracts one number from the other and checks whether the result is positive, negative (the highest bit, aka "the minus bit", is set), or zero.
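In C, that subtract-and-test approach looks roughly like this sketch (valid as long as a - b does not overflow; real CPUs consult their overflow and carry flags to cover the remaining cases):

    #include <stdint.h>

    /* a > b via subtraction: form a - b (wrapping, mod 2^32) and test
     * the sign bit of the two's complement result. */
    static int greater_s32(int32_t a, int32_t b)
    {
        uint32_t diff = (uint32_t)a - (uint32_t)b;
        return diff != 0 && !(diff >> 31); /* nonzero with sign bit clear */
    }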
Within the processor, there is often microcode to carry out operations, built on hardwired capabilities, such as add/subtract, that are already there.
So, to compare two integers, the microcode can just do a subtraction and, based on the result, determine whether one is greater than the other.
Microcode is basically just a layer of low-level programs called by assembly instructions, to make it look like there are more commands than are actually hardwired on the processor.
You may find this useful:
http://www.osdata.com/topic/language/asm/intarith.htm
I guess it does a bitwise comparison of the two numbers, from the most significant bit to the least significant bit; at the first bit where they differ, the number with that bit set to "1" is the greater.
In a big-endian representation, comparing the following bytes:
A: 0010 1101
B: 0010 1010
would find A greater than B, because A's 6th bit (from the left) is set to one while all the preceding bits are equal.
But this is just a quick theoretical answer, with no concern for floating point or negative numbers.