Ternary computers: what would the third (unknown) part of a trit be used for?

I'm very interested in the idea of creating/designing (but most likely only imagining) a ternary computer rather than a binary computer.
If I were to do this, I would use a balanced base-3 system, so a trit (trit is to base 3 as bit is to base 2) could be -1, 0, or +1. Storing data using trits would be approximately 36% more compact than storing data using bits like we do on today's computers; however, ternary arithmetic would be much more complicated, so there's no telling whether a ternary ALU would be faster or slower than a binary one.
But I digress, that's just a little background stuff that doesn't entirely pertain to the question, but it is related. :)
So, possible values for a trit:
-1 is off/false, same as 0 in binary.
0 is unknown. No equivalent in binary.
+1 is on/true, same as 1 in binary.
My question is...what is the point of that 0 in terms of computing? For example, I've been reading up a lot on logic gates and I understand both how they work and how they can work together to create an ALU. A binary AND gate is very simple, and combined with other binary logic gates it can be used to perform arithmetic, for example to build an adder, a unit that performs addition.
I can't even comprehend how this would be done using ternary-computing. How would the unknown (0) factor into the logic gates and be used to perform arithmetic? Hell, I can't even comprehend what outputs a ternary AND gate would put out and how they'd be used.
For example, I would assume for a ternary computer an AND gate would accept 3 inputs instead of 2. Let's call the inputs A, B, and C.
In a binary AND gate, A and B can each be 0 or 1, so there are four possible input combinations and four corresponding outputs. If A and B are both 1, then the AND gate outputs a 1; for any of the other three combinations it outputs 0. (Possible results from the AND gate over all A/B combinations: 0, 0, 0, 1.)
A ternary AND gate would take in 3 inputs, right? So in a ternary AND gate, A, B, and C could be -1, 0, or 1. This means that there are 27 possible combinations for A/B/C. Rather than listing out the possible outcomes for each combination, I'll just add them up for you guys. :)
Anyway, there is only one combination in which 1 will be the output, there are 7 combinations in which 0 will be the output (assuming if A, B, and C are all 0 the AND gate will output 0), and there are 19 combinations in which -1 will be the output. In a binary Adder if the gate throws a 1 it would be sent off to another gate to be evaluated and so on until the addition is complete. In ternary...what would a gate do if it received a 0?
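A quick throwaway sketch (assuming the ternary AND is simply the minimum of its inputs, which is one common way to generalize it; none of this is from the original post) confirms those counts:

    from itertools import product

    # Hypothetical ternary AND: take the minimum of the inputs,
    # which reduces to the ordinary AND on {-1 (false), +1 (true)}.
    def tern_and(*trits):
        return min(trits)

    counts = {-1: 0, 0: 0, +1: 0}
    for a, b, c in product((-1, 0, +1), repeat=3):
        counts[tern_and(a, b, c)] += 1

    print(counts)   # {-1: 19, 0: 7, 1: 1}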
I know that's a lot of reading, so I'll try to sum it up and list the main questions below:
How would the 0 of a trit in a balanced ternary system be used/handled in logic gates?
If a logic gate outputs a 0, and the gate is being used in an ALU to perform arithmetic (let's say addition for example), how would the gate that receives the 0 be expected to handle it? Basically, how would one go about creating an Adder using ternary logic?
And lastly, am I correct assuming that in a ternary computer logic gates would accept 3 inputs instead of 2 like binary computers, or would logic gates still be dyadic?

An essential goal of ALU design is to perform arithmetic on integers: first of all addition (and subtraction), then multiplication and division.
When written in base 3, these operations are well defined. For instance
 +  |  0   1   2
----+------------
 0  |  0   1   2
 1  |  1   2  10
 2  |  2  10  11
As with binary arithmetic, one needs to compute a sum trit and a carry. When a carry is propagated, the following table applies
 +c |  0   1   2
----+------------
 0  |  1   2  10
 1  |  2  10  11
 2  | 10  11  12
So you indeed need two three-input functions (two trits in and a carry in), giving the sum trit and the carry out bit. (Notice that binary ALUs add the same way: two bits in and a carry in giving a sum and a carry out bit.)
Whether this can be implemented from elementary dyadic or triadic gates would be technology dependent.
The logical predicates AND/OR have no reason to be modified and should remain binary. Boolean arithmetic remains Boolean.
Besides, if you enumerate all ternary functions of two ternary arguments (i.e. 9 input combinations), you find 19683 of them. Contrast this to 16 binary functions. This mess is unmanageable. (Don't even think of all triadic ternary functions, 7625597484987 of them.)
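To make the sum-trit/carry decomposition above concrete, here is a minimal Python sketch using the unbalanced digits 0-2 from the tables (a balanced version would work the same way with digits -1, 0, +1 and a signed carry); the function names are just illustrative:

    def trit_full_add(a, b, carry_in):
        """Add two base-3 digits (0..2) plus a carry-in (0 or 1).
        Returns (sum_trit, carry_out), mirroring a binary full adder."""
        total = a + b + carry_in          # at most 2 + 2 + 1 = 5
        return total % 3, total // 3      # carry_out is 0 or 1

    def add_ternary(x, y):
        """Ripple-carry addition of two equal-length lists of trits,
        least significant trit first."""
        carry = 0
        result = []
        for a, b in zip(x, y):
            s, carry = trit_full_add(a, b, carry)
            result.append(s)
        if carry:
            result.append(carry)
        return result

    # 5 = 12 in base 3 (trits [2, 1]), 7 = 21 in base 3 (trits [1, 2])
    print(add_ternary([2, 1], [1, 2]))    # [0, 1, 1], i.e. 110 in base 3 = 12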

Okay, so I believe I may have found the answer to my questions.
So, there's no reason to reinvent the wheel; standard binary logic gates would work just fine in a ternary computer.
In binary, 0 is false and 1 is true, and a binary number is read from right to left, each digit corresponding to a power of two. For example, 10010 would be 18 (2 + 16). This number system can only count upward: the more 1's you have toward the left, the higher the number is in decimal, but no digit can ever subtract from the value. All of this is done using transistors that only care about whether voltage is present or not: if there is voltage, the transistor is on and the bit is a 1; if not, it is off and the bit is a 0.
In ternary, there wouldn't be a simple on/off. Unlike the standard transistor in binary computers, which tests whether there is or isn't voltage to determine the bit's value, a transistor for a ternary computer would test whether the voltage is negative, positive, or ground (-1 being negative voltage, 0 being ground, +1 being positive voltage).
Using this system, decimal numbers would be written in ternary much like they are written in binary, with one exception: ternary also has a digit that subtracts. Take a number in binary, for example: starting with the first digit on the right, the digits correspond to 2^0, then 2^1, and so on, and all the digits that are 1 have their corresponding power of 2 added up to give you your number.
Now imagine ternary. From right to left it would follow 3^0, 3^1, 3^2, and so on; however, a +1 trit would mean the power of 3 for that digit is added, a 0 trit would mean that digit is ignored, just as a 0 is in binary, and a -1 trit would mean the power of 3 for that digit is subtracted. This allows each digit to either add to or subtract from the value.
Take this ternary number for example (I'm going to use '-' instead of '-1', and '+' to represent '+1'): +-0+-
Reading from right to left, this would be (-1) + (3) + (0) + (-27) + (81) = 56.
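A tiny sketch (the helper name is just illustrative) that evaluates strings in this '+', '0', '-' notation and confirms the example:

    def balanced_ternary_to_int(s):
        """Interpret a string of '+', '0', '-' (most significant trit first)
        as a balanced-ternary number."""
        digit = {'+': 1, '0': 0, '-': -1}
        value = 0
        for ch in s:
            value = value * 3 + digit[ch]
        return value

    print(balanced_ternary_to_int("+-0+-"))   # 56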
While it's true that ternary and dyadic logic gates are compatible, an ALU would have to be designed very differently. Basically that's it. :)

Related

Why is Binary (machine code) based off Boolean Algebra? What would happen if numbers went from 0-9 instead of 0 or 1?

I'm curious to know if it would be possible to create a computer that uses a code that can go from 0000 up to 9999, keeping true and false as 1 and 0 but adding the digits 2-9 to get more possibilities for numbers. Does binary code consist only of 0's and 1's for simplicity? Is it because for some reason computers can only understand True and False?
Binary code starts with 0 (0000) and increases to 1 (0001), to 2 (0010), and to 10 (1010). Could it be possible for a computer to recognize 0's and 1's but then go to 2's and other digits? For example, 0000 = 0, 0001 = 1, 0002 = 2, 0009 = 9, then 0010 = 10, and so on.
If this isn't possible somehow, please explain why, and give a general explanation of how computers work, because I'm interested and want to learn more. If this isn't used because it's inefficient, please explain what makes it inefficient and what makes 0's and 1's more efficient.
Thank you.
I expect that it would be possible to create a computer like this but I searched online and couldn't find out why binary code can't have numbers other than 0's and 1's.
Answer to myself for future reference:
Binary is based on Boolean algebra because it's a base-2 system, while decimal is a base-10 system that goes from 0-9 instead of just 0 or 1. Computers easily understand binary because it's based on on/off states (1 or 0), with 0 being off and 1 being on. Computers use logic gates, which are composed of a multitude of transistors that use Boolean logic to process and store data for the computer. Binary makes the hardware convenient to build. Other number systems are used for other purposes. For example, hexadecimal is used to represent large numbers more compactly than decimal can: take the number one million; in decimal it is 1000000, in binary it is 11110100001001000000, and in hexadecimal it is F4240. This is why the binary number system is based on Boolean algebra, and why computers use binary and not other number systems.
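Those base conversions are easy to double-check in a couple of lines, for example in Python:

    n = 1_000_000
    print(bin(n))            # 0b11110100001001000000
    print(hex(n))            # 0xf4240
    print(len(bin(n)) - 2)   # 20 binary digits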
It is based on how the data is stored. Each piece of data stored in your memory can have only two values: think of your memory as a number of glasses which can either be empty or full, which means the data is stored as a bunch of 1s and 0s. This is the result of moving from analog systems to digital: an analog value can fall anywhere between 0 and 1, for example 0.25 or 0.7, but since computers became digital, the logic became binary.
It will be really beneficial to research the history of computers and learn how they evolved over time, if you are interested in this topic.

LC-3 algorithm for converting ASCII strings to Binary Values

Figure 10.4 provides an algorithm for converting ASCII strings to binary values. Suppose the decimal number is arbitrarily long. Rather than store a table of 10 values for the thousands-place digit, another table of 10 values for the ten-thousands-place digit, and so on, design an algorithm to do the conversion without resorting to any tables whatsoever.
I have attached pictures of figure 10.4. I am not looking for an answer to the problem, but rather can someone please explain this problem and perhaps give some direction on how to go about creating the algorithm?
[Figure 10.4: two images attached]
I am unsure as to what it means by tables and do not know where to start really.
The tables are those global, initialized arrays: one called Lookup10 holding 10, 20, 30, 40, ..., and another called Lookup100 holding 100, 200, 300, 400...
You can ignore the tables: as per the assignment instructions, you're supposed to find a different way to accomplish this anyway. Or, you can run that code in the simulator, or mentally, to understand how it works.
The bottom line is that LC-3, while it can do anything (it is Turing complete), can't do much in any one instruction. For arithmetic & logic, it can do add, not, and. That's pretty much it! But that's enough: note that all the usual logic can be built from a single gate, namely NAND, which is a binary operator (NAND is directly available; NOT by giving NAND the same operand for both inputs; AND by doing NOT after NAND; OR by using NOT on both inputs first and then NAND; etc.).
For example, LC-3 cannot multiply, divide, take a modulus, or right shift directly: each of those operations takes many instructions and, in the general case, some looping construct. Multiplication can be done by repetitive addition, and division/modulus by repetitive subtraction. These are super inefficient for larger operands, and there are much more efficient algorithms that are also substantially more complex, so those greatly increase program complexity beyond what the repetitive approach already requires.
That subroutine goes backwards through the user's input string. It takes a string length count in R1 as a parameter supplied by the caller (not shown). It looks at the last character in the input and converts it from an ASCII character to a binary number.
(We would commonly do that conversion from ascii character to numeric value using subtraction: moving the character values from the ascii character range of 0x30..0x39 to numeric values in the range 0..9, but they do it with masking, which also works.  The subtraction approach integrates better with error detection (checking if not a valid digit character, which is not done here), whereas the masking approach is simpler for LC-3.)
The subroutine then obtains the 2nd-to-last digit (moving backwards through the user's input string), converting it to binary using the mask approach. That yields a number between 0 and 9, which is used as an index into the first table, Lookup10. The value obtained from the table at that index position is basically the index × 10, so this table is a × 10 table. The same approach is used for the third digit (the first one in the string, i.e. the last one reached going backwards), except it uses the 2nd table, which is a × 100 table.
The standard approach for string-to-binary conversion is called atoi (search for it), standing for ASCII to integer. It moves forwards through the string, and for every new digit it multiplies the value computed so far by 10 before adding in the new digit's numeric value.
So, if the string is 456, first it obtains 4; then, because there is another digit, it does 4 × 10 = 40, then + 5 for 45, then × 10 for 450, then + 6 for 456, and so on.
The advantage of this approach is that it can handle any number of digits (up to overflow).  The disadvantage, of course, is that it requires multiplication, which is a complication for LC-3.
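Here is a minimal Python sketch of that forward atoi loop (the actual exercise, of course, has to express the same idea in LC-3 instructions):

    def ascii_to_int(s):
        """Forward atoi: multiply the running value by 10, then add each digit."""
        value = 0
        for ch in s:
            value = value * 10 + (ord(ch) - ord('0'))   # subtraction approach
        return value

    print(ascii_to_int("456"))   # 456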
Multiplication where one operand is the constant 10 is fairly easy even in LC-3's limited capabilities, and can be done with simple addition without looping.  Basically:
n × 10 = n + n + n + n + n + n + n + n + n + n
and LC-3 can do those 9 additions in just 9 instructions.  Still, we can also observe that:
n × 10 = n × 8 + n × 2
and also that:
n × 10 = (n × 4 + n) × 2     (which is n × 5 × 2)
which can be done in just 4 instructions on LC-3 (and none of these needs looping)!
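In Python-as-pseudocode, that last add-only decomposition looks like this, with each line corresponding to one LC-3 ADD:

    def times10_with_adds(n):
        """n * 10 using only additions, mirroring (n*4 + n) * 2 in 4 ADDs."""
        n2 = n + n        # n * 2
        n4 = n2 + n2      # n * 4
        n5 = n4 + n       # n * 5
        n10 = n5 + n5     # n * 10
        return n10

    print(times10_with_adds(45))   # 450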
So, if you want to do this approach, you'll have to figure out how to go forwards through the string instead of backwards as the given table version does, and, how to multiply by 10 (use any one of the above suggestions).
There are other approaches as well if you study atoi. You could keep the backwards approach, but then you would have to multiply by 10, by 100, by 1000: a different factor for each successive digit. That might be done by repetitive addition, or by keeping a count of how many times to multiply by 10, e.g. n × 1000 = n × 10 × 10 × 10.

Binary subtraction: 2's complement & carry

I want to subtract 1 from the number whose binary representation is 1010 1101. I write the two's complement of 1, which is 1111 1111, and I add it to the first number:
Bitwise addition, with carry, gives 1 1010 1100: because of the carry, I end up with 1 bit more. How is this dealt with in binary addition?
Also, am I right to use two's complement and addition to do subtraction?
Thanks.
That is an entirely valid and common way to do subtraction, but the 'carry' flag doesn't mean the same thing that it does for normal addition. Since instead of subtracting n you're adding a large number, the carry flag needs to be handled differently. That extra 1 would usually signify a carry in bitwise addition, whereas here it signifies that everything worked out right. If there wasn't a carry there, it would actually mean that the result should have been negative: a - b was converted to a + 2^n - b, which came out less than 2^n, meaning that b > a and so a - b < 0. Either way, it doesn't matter, as your result will show up correctly within the 8 bits of your result.
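A small sketch of that exact example in 8 bits (Python just to show the arithmetic; the mask models the fixed register width):

    BITS = 8
    MASK = (1 << BITS) - 1          # 0xFF

    a = 0b10101101                  # 0xAD
    b = 1                           # we want a - 1

    neg_b = (~b + 1) & MASK         # two's complement of 1 -> 0b11111111
    raw = a + neg_b                 # 0x1AC: 9 bits wide
    carry = raw >> BITS             # 1 here: for subtraction this means "no borrow"
    result = raw & MASK             # 0b10101100, i.e. a - 1

    print(bin(raw), carry, bin(result))   # 0b110101100 1 0b10101100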

Wrapping my head around hardware representations of numbers: a hypothetical two's complement question

This is a super naive question (I know), but I think that it will make for a good jumping off point into considering how the basic instruction set of a CPU actually gets carried out:
In a two's complement system, you cannot invert the sign of the most negative number that your implementation can represent. The theoretical reason for this is obvious in that the negation of the most negative number would be out of the range of the implementation (the range is always something like -128 to 127).
However, what actually happens when you try to carry out the negation operation on the most negative number is pretty strange. For example, in an 8 bit representation, the most negative number is -128, or 1000 0000 in binary. Normally, to negate a number you would flip all the bits and then add one. However, if you try to do this with -128 you end up with:
1000 0000 ->
0111 1111 ->
1000 0000
the same number that you started out with. For this reason, wikipedia calls it "the weird number".
In that same wikipedia article, it says that the above negation
is detected as an overflow condition since there was a carry into but not out of the most-significant bit.
So my question is this:
A) What the heck does that mean? and
B) It seems like the CPU would need to perform an extra error checking step each and every time it carried out a basic arithmetic operation in order to avoid accidents relating to this negation, creating significant overhead. If that is the case, why not just truncate the range of numbers that can be represented to leave the weird number out (i.e. -127 to 127 for 8 bits)? If that isn't the case, how can you implement such error checking without creating extra overhead?
The carry-out bit from the MSB is used as a flag to indicate that we need more bits. Without it, we would have a system of modular arithmetic [1] without any way of detecting when we wrap around.
In modular arithmetic, you don’t deal with numbers but with equivalence classes of numbers that have the same remainder. In such a system, after adding 1 to 127, you would get −128, and you would conclude that +128 and −128 belong to the same equivalence class.
If you restricted yourself to numbers in the range −127 to +127, you would have to redefine addition, since 127 + 1 = −127 is nonsense.
Two’s-complement arithmetic, when presented to you by a computer, is essentially modular arithmetic with the ability to detect an overflow.
This is what a 4-bit adder would look like when adding 0001 to 0111. You can see that in the MSB the carry-in and carry-out are different:
A:       0        0        0        1
B:       0        1        1        1
         |        |        |        |
         v        v        v        v
  0 <-- ADD <-1- ADD <-1- ADD <-1- ADD <-- 0
         |        |        |        |
         v        v        v        v
sum:     1        0        0        0
It is this flag that the ALU uses to signal that an overflow occurred, without any extra steps.
[1] Modular arithmetic goes from 0 to 255 instead of −128 to 127, but the basic idea is the same.
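To make the diagram and the flag concrete, here is a rough Python sketch of a 4-bit ripple-carry adder that reports whether the carry into the MSB differs from the carry out of it (the function name is just illustrative):

    def add4(a_bits, b_bits):
        """a_bits/b_bits: lists of 4 bits, least significant first.
        Returns (sum_bits, overflow), where overflow means the carry into
        the MSB differs from the carry out of the MSB."""
        carry = 0
        sum_bits = []
        carry_into_msb = 0
        for i, (a, b) in enumerate(zip(a_bits, b_bits)):
            if i == len(a_bits) - 1:
                carry_into_msb = carry
            total = a + b + carry
            sum_bits.append(total & 1)
            carry = total >> 1
        carry_out_of_msb = carry
        return sum_bits, carry_into_msb != carry_out_of_msb

    # 0111 (+7) + 0001 (+1), least significant bit first:
    print(add4([1, 1, 1, 0], [1, 0, 0, 0]))   # ([0, 0, 0, 1], True) -> 1000, overflow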
It's not that the CPU does another check; it's that the transistors are arranged to notice when this happens. And they are built that way because the engineers picked two's complement before they started designing the thing.
The result is that it happens during the same clock cycle as a non-overflowing result would be returned.
How does it work?
The "add 1" stage implements a cascade logic: starting with the LSB each bit is subjected in turn to the truth table
old-bit carry-in new-bit carry-out
-------------------------------------
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
(that is new-bit = old-bit xor carry-in and carry-out = old-bit and carry-in). The "carry-in" for the LSB is the 1 that we're adding, and for the rest of the bits it is the "carry-out" of the previous one (which is why this has to be done in a cascade).
The last of these circuits just adds a circuit for signed-overflow = (carry-in and not carry-out).
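A Python sketch of that cascade, applying exactly the truth table above bit by bit and computing the signed-overflow flag at the MSB (names are illustrative, 8 bits assumed):

    def negate(value, bits=8):
        """Two's complement negation as invert-then-add-one, returning the
        result and the signed-overflow flag (carry into MSB and not carry out)."""
        inverted = [1 - ((value >> i) & 1) for i in range(bits)]   # flip every bit
        carry = 1                                                  # the "+1"
        out = 0
        for i, old_bit in enumerate(inverted):
            new_bit = old_bit ^ carry            # new-bit = old-bit xor carry-in
            carry_out = old_bit & carry          # carry-out = old-bit and carry-in
            if i == bits - 1:
                overflow = bool(carry and not carry_out)   # carry-in and not carry-out
            out |= new_bit << i
            carry = carry_out
        return out, overflow

    print(negate(0b10000000))   # (128, True): 0x80 negates to itself, overflow set
    print(negate(0b00000011))   # (253, False): -3 as an unsigned 8-bit pattern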
First off, the Wikipedia article states that it cannot be negated from a negative signed number to a positive signed number. What they mean is that it takes 9 bits to represent positive 128, which you cannot do with an 8-bit register. If you are going from negative signed to positive unsigned as a conversion, then you have enough bits. And the hardware should give you 0x80 when you negate 0x80, because that is the right answer.
For add, subtract, multiply, etc., addition in two's complement is no different from the decimal math from elementary school. You line up your binary numbers and add the columns; the result for that column is its least significant digit, and the rest is carried over to the next column. So, adding 0b001 to 0b001, for example:
1
001
001
===
010
Add the two ones in the rightmost column; the result is 0b10 (2 decimal), so write zero and carry the one. One plus zero plus zero is one, nothing to carry; zero plus zero is zero; the result is 0b010.
In the rightmost column, where 1 plus 1 is 0b10 and we write 0 and carry the one, that carried one is at the same time the carry out of the rightmost column and the carry in of the second column. Also, with pencil-and-paper math we normally only talk about a carry when it is non-zero, but if you think about it you are always carrying something: our second column's one plus zero is one, carry the zero.
You can think of a two's complement negate as invert and add one, or as walking the bits from the LSB, keeping them up to and including the first 1 and inverting the rest, or as taking the result of zero minus the number.
You can work subtraction in binary using pencil and paper, for what it's worth; it makes your head hurt when borrowing compared to decimal, but it works. For what you are asking, though, think of invert and add one.
It is easier to wrap your head around this if you take it down to even fewer bits than 8; three is a manageable number, and it all scales from there.
So the first column below is the input, the second column is the inverted version and the third column is the second column plus one. The fourth column is the carry in to the msbit, the fifth column is the carry out of the msbit.
input   inverted   inverted+1   carry in (msb)   carry out (msb)
 000       111         000             1                1
 001       110         111             0                0
 010       101         110             0                0
 011       100         101             0                0
 100       011         100             1                0
 101       010         011             0                0
 110       001         010             0                0
 111       000         001             0                0
Real quick look at adding a one to two bits:
00+1 = 001
01+1 = 010
10+1 = 011
11+1 = 100
For the case of adding one to a number, the only case where you carry out from the second bit into the third bit is when your bits are all ones; a single zero in there stops the cascading carry bits. So in the three-bit inversion table above, the only two cases where you have a carry into the msbit are 111 and 011, because those are the only two cases where the lower bits are all set. For the 111 case the msbit has a carry in and a carry out; for the 011 case the msbit has a carry in but not a carry out.
So, as stated by someone else, there are transistors wired up in the chip: if the msbit carry in is set and the msbit carry out is not set, then set some flag somewhere; otherwise clear the flag.
So note that the three-bit examples above scale. If, after you invert and before you add one, you have 0b01111111, then you are going to get a carry in without the carry out. If you have 0b11111111, then you get a carry in and a carry out. Note that zero is also a number where you get the same pattern back when you negate it; the difference is that, when the bits are considered as signed, the negation of zero can be represented, while the negation of a 1 followed by all zeros cannot.
The bottom line, though, is that this is not a crisis or end-of-the-world thing. There is a whole lot of math and other operations in the processor where carry bits and significant bits are falling off one side or the other, and overflows and underflows are firing off, etc. Most of the time programmers never check for such conditions and those bits just fall on the floor, sometimes causing the program to crash; sometimes the programmer uses 16-bit numbers for 8-bit math just to make sure nothing bad happens, or 8-bit numbers for 5-bit math for the same reason.
Note that the hardware doesn't know signed from unsigned for addition and subtraction, and the hardware doesn't know how to subtract. Hardware adders are full adders: three inputs (two operand bits and a carry in) with a sum bit and a carry out. Wire 8 of these up and you have an 8-bit adder or subtractor. Add without carry is the two operands wired in directly with a zero on the lsbit carry in. Add with carry is the two operands wired in directly with the carry flag on the lsbit carry in. Subtract is an add with the second operand inverted and a one on the lsbit carry in. At least from a high-level perspective; that logic can all get optimized and implemented in ways often too hard to understand on casual inspection.
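Here is a toy Python sketch of that wiring: a chain of full adders, with subtraction done as "invert the second operand and put a one on the carry in" (8 bits assumed, names illustrative):

    def full_adder(a, b, carry_in):
        """One-bit full adder: two operand bits plus carry in -> (sum, carry out)."""
        total = a + b + carry_in
        return total & 1, total >> 1

    def ripple(a, b, carry_in, bits=8):
        """Chain `bits` full adders together, least significant bit first."""
        result, carry = 0, carry_in
        for i in range(bits):
            s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
            result |= s << i
        return result, carry

    def add(a, b):
        return ripple(a, b, carry_in=0)

    def sub(a, b):
        # subtract = add the inverted second operand with a 1 on the carry in
        return ripple(a, (~b) & 0xFF, carry_in=1)

    print(add(5, 3))    # (8, 0)
    print(sub(5, 3))    # (2, 1)  -- carry out set means "no borrow"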
The really fun exercise is multiply. Think about doing binary multiplication with pencil and paper, then realize it is much easier than decimal, because it is just a series of shifts and adds. Given enough gates you can represent each result bit as an equation whose inputs are the operand bits, meaning you can do a single-clock multiply if you wish. In the early days that was too many gates, so multi-clock shift-and-add units were used; today we burn the gates and get single-clock multiplies. Also note that if you do, say, a 16 bit = 8 bit times 8 bit multiply, the lower 8 bits of the result are the same whether it is a signed or an unsigned multiply. Since most folks do things like int = int * int; you really don't need separate signed and unsigned multiplies if all you care about is the result bits (no checking of flags, etc). Fun stuff.
In the ARM Architecture Manual (DDI100E):
OverflowFrom
Returns 1 if the addition or subtraction specified as its parameter
caused a 32-bit signed overflow. [...]
Subtraction causes an overflow if the operands have different signs,
and the first operand and the result have different signs.
NEG
[...]
V Flag = OverflowFrom(0 - Rm)
NEG is the instruction for computing the negation of a number, i.e. the twos complement.
The V flag signals signed overflow and can be used for conditional branching. It's fairly standard across different processor architectures, together with the three other flags Z (zero), C (carry) and N (negative).
For 0 - (-128) = 0 + 128 = -128 the first operand is 0 and the second operand as well as the result is -128, so the condition for overflow is satisfied, and the V flag is set.

2's complement example, why not carry?

I'm watching some great lectures from David Malan (here) that go over binary. He talked about signed/unsigned, 1's complement, and 2's complement representations. There was an addition of 4 + (-3) which lined up like this:
0100
1101 (flip 0011 to 1100, then add "1" to the end)
----
0001
But he waved his magical hands and threw away the last carry. I did some Wikipedia research but didn't quite get it; can someone explain to me why that particular carry (in the 8's -> 16's column) was dropped, but he kept the one just prior to it?
Thanks!
The last carry was dropped because it does not fit in the target space. It would be the fifth bit.
If he had carried out the same addition, but with for example 8 bit storage, it would have looked like this:
00000100
11111101
--------
00000001
In this situation we would also be stuck with an "unused" carry.
We have to treat carries this way to make addition with two's complement work properly, but that's all good, because this is the easiest way of treating carries when you have limited storage. Anyway, we get the correct result, right? :)
x86-processors store such an additional carry in the carry flag (CF), which is possible to test with certain instructions.
A carry is not the same as an overflow
In the example you do have a carry out of the MSB. By definition, this carry ends up on the floor. (If there was someplace for it to go, then it would not have been out of the MSB.)
But adding two numbers with different signs cannot overflow. An overflow can only happen when two numbers with the same sign produce a result with a different sign.
If you extend the left-hand side by adding more digit positions, you'll see that the carry rolls over into an infinite number of bit positions towards the left, so you never really get a final carry of 1. So the answer is positive.
 ...000100
+...111101
----------
 ...000001
At some point you have to set the number of bits to represent the numbers. He chose 4 bits. Any carry into the 5th bit is lost. But that's OK because he decided to represent the number in just 4 bits.
If he decided to use 5 bits to represent the numbers he would have gotten the same result.
That's the beauty of it... Your result will be the same size as the terms you are adding. So the fifth bit is thrown out
In 2's complement you use the carry bit to signal if there was an overflow in the last operation.
You must look at the LAST two carry bits to see if there was overflow. In your example, the last two carry bits were 11 meaning that there was no overflow.
If the last two carry bits are 11 or 00 then no overflow occurred. If the last two carry bits are 10 or 01 then there was overflow. That is why he sometimes cared about the carry bit and other times he ignored it.
The first row below is the carry row. The left-most bits in this row are used to determine if there was overflow.
1100
0100
1101
----
0001
Looks like you're only using 4 bits, so there is no 16's column.
If you were using more than 4 bits then the -3 representation would be different, and the carry of the math would still be thrown out the end. For example, with 6 bits you'd have:
000100
111101
------
1000001
and since the carry is outside the bit range of your representation it's gone, and you only have 000001
Consider 25 + 15:
5+5 = 10, we keep the 0 and let the 1 go to the tens-column. Then it's 2 + 1 (+ 1) = 4. Hence the result is 40 :)
It's the same thing with binary: 0 + 1 = 1, 0 + 0 = 0, 1 + 1 = 10 => send the 1 to the 8's column, 0 + 1 (+ 1) = 10 => send the 1 to the next column. Here's the overflow, and why we just throw the 1 away.
This is why 2's complement is so great. It allows you to add/subtract just like you do with base 10, because you (ab)use the fact that the sign bit is the MSB, which lets carries cascade all the way into an overflow when necessary.
Hope I made myself understood. It's quite hard to explain this when English is not your native tongue. :)
When performing 2's complement addition, the only time that a carry indicates a problem is when there's an overflow condition - that can't happen if the 2 operands have a different sign.
If they have the same sign, then the overflow condition is when the sign bit of the result differs from that of the 2 operands, i.e., there's a carry into the most significant bit.
If I remember my computer architecture learnin' this is often detected at the hardware level by a flag that's set when the carry into the most significant bit is different than the carry out of the most significant bit. Which is not the case in your example (there's a carry into the msb as well as out of the msb).
One simple way to think of it is as "the sign not changing". If the carry into the msb is different than the carry out, then the sign has improperly changed.
The carry was dropped because there wasn't anything that could be done with it. If it's important to the result, it means that the operation overflowed the range of values that could be stored in the result. In assembler, there's usually an instruction that can test for the carry beyond the end of the result, and you can explicitly deal with it there - for example, carrying it into the next higher part of a multiple precision value.
Because you are talking about 4-bit representations. It's unusual compared to an actual machine, but if we were to take for granted for a moment that a computer has 4 bits in each byte, then we have the following properties: a 4-bit two's complement value wraps from +7 to -8, and anything outside that range cannot be stored. Besides, what would you do with an extra 5th bit beyond the sign bit anyway?
Now, given that, we can see from everyday math that 4 + (-3) = 1, which is exactly what you got.