I made a long post earlier, but I'll keep this simpler.
Can someone show me, step by step, how to compute -10 + -10 in hexadecimal as signed 16-bit numbers?
The hexadecimal numbers would be 0xFFF6 + 0xFFF6.
I've heard it should equal 0xFFEC, which is -20. Anyone? Pretty please?
Addition
When adding the two numbers, use the usual method of adding the digits by place value.
  0xFFF6 (-10)            0xFFF 6  ( 6)
+ 0xFFF6 (-10)    >>    + 0xFFF 6  ( 6)
--------------          -------------
                                C  (12)
Carry when needed.
                                  1   <-- carried
  0x F F F 6  (15)          0x F F F 6
+ 0x F F F 6  (15)   >>   + 0x F F F 6
------------              ------------
      1E  C   (30)                E  C
      ^
      |
      need to carry 16 to the next place value
Continue until all digits are accounted for, then discard the final carry out. Overflow is checked using the sign (see below).
     1 1                       1 1 1
  0x F F F 6                0x F F F 6               0xFFF6  (-10)
+ 0x F F F 6     >>       + 0x F F F 6     >>      + 0xFFF6  (-10)
------------              ------------             --------
    1F  E  C               1F  F  E  C               0xFFEC  (-20)
                           ^
                           |
                           Discard
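As a quick sanity check, here is a minimal Python sketch (my addition, not part of the original walkthrough) that performs the same 16-bit addition by masking off everything above bit 15:

MASK = 0xFFFF  # 16 bits

def add16(a, b):
    # Add two 16-bit patterns, discarding any carry out of bit 15.
    return (a + b) & MASK

def to_signed(v):
    # Interpret a 16-bit pattern as a two's complement value.
    return v - 0x10000 if v & 0x8000 else v

s = add16(0xFFF6, 0xFFF6)
print(hex(s), to_signed(s))  # 0xffec -20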
Subtraction
Adding a negative is the same as subtracting the positive. Turn -10 + -10 into -10 - 10 by taking the 2's complement of the subtrahend.
  0xFFF6 (-10)                            0xFFF6 (-10)
+ 0xFFF6 (-10)   >>  2's complement  >> - 0x000A (+10)
--------------                          --------------
Next, subtract place value by place value, borrowing as needed.
    Borrow                    Borrowed
      |                          |
      v                          v
  0xFFF  6  (  6)           0xFFE 16  ( 22)
- 0x000  A  (-10)    >>   - 0x000  A  (-10)
-----------                -----------
Continue until all digits are accounted for.
  0xFFE 16  ( 22)           0xFFE 16            0xFFF6
- 0x000  A  (-10)    >>   - 0x000  A    >>    - 0x000A
-----------                -----------         -------
         C  ( 12)           0xFFE  C            0xFFEC
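The two's complement negation itself ("invert all bits, then add one") is easy to check in code. A minimal Python sketch (my addition) of the -10 - 10 route:

MASK = 0xFFFF

def neg16(v):
    # Two's complement negation of a 16-bit pattern: invert all bits, add one.
    return (~v + 1) & MASK

print(hex(neg16(0xFFF6)))             # 0xa, i.e. +10
print(hex((0xFFF6 - 0x000A) & MASK))  # 0xffec, i.e. -20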
Overflow
Once finished, check for overflow. The sign will be incorrect if overflow occurred (here, subtracting a positive from a negative must give a negative result).
0xFFEC -> negative (no overflow)
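That sign rule translates directly into code. A small Python sketch (my addition; the helper name is mine): overflow occurs exactly when two operands of the same sign produce a result of the opposite sign.

def add16_checked(a, b):
    # 16-bit two's complement add, plus a sign-based overflow flag.
    s = (a + b) & 0xFFFF
    sa, sb, ss = a >> 15, b >> 15, s >> 15
    return s, (sa == sb and ss != sa)

s, ovf = add16_checked(0xFFF6, 0xFFF6)
print(hex(s), ovf)  # 0xffec False  (-10 + -10 fits)
s, ovf = add16_checked(0x8000, 0x8000)
print(hex(s), ovf)  # 0x0 True     (-32768 + -32768 overflows)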
Write a function generateExponents k l which, for given k and l, generates a stream of all unique numbers of the form x^k*y^l in increasing order. For example, generateExponents 2 3 = [1,4,8,9,16,25,27...]
For obvious reasons this doesn't work:
generateExponents k l = sort [x^k*y^l | x <- [1..], y <- [1..]]
Then I tried this, which doesn't work either:
generateExponents k l = [n | n <- [1 ..], n `elem` products n]
  where
    xs n = takeWhile (\x -> x ^ k <= n) [1 ..]
    ys n = takeWhile (\y -> y ^ l <= n) [1 ..]
    products n = liftA2 (*) (xs n) (ys n)
What am I doing wrong?
Your algorithm is pretty slow -- it checks every number, and for every number it searches for an appropriate factorization! You can do better by producing an infinite table of answers, and then collapsing the table appropriately. For example, for x^2*y^3, the table looks like:
 x    1    2    3    4    5
y
1     1    4    9   16   25
2     8   32   72  128  200
3    27  108  243  432  675
4    64  256  576 1024 1600
5   125  500 1125 2000 3125
Note two nice features of this table: each row is sorted, and the rows themselves are sorted. This means we can merge them efficiently by simply taking the top-left value, then re-inserting the tail of the first row in its new sorted position. For example, the table above, after emitting 1, would look like:
  4   9   16   25   36
  8  32   72  128  200
 27 108  243  432  675
 64 256  576 1024 1600
125 500 1125 2000 3125
Then, after emitting the top-left value 4:
  8  32   72  128  200
  9  16   25   36   49
 27 108  243  432  675
 64 256  576 1024 1600
125 500 1125 2000 3125
Note how the top row has now become the second row to keep the doubly-sorted property.
This is an efficient way to construct all the right numbers in the right order. Then, the only remaining trick needed is to deduplicate, and for that you can deploy the standard trick map head . group, since duplicates are guaranteed to be next to each other. Here's the full code:
import Data.List

generateExponents' k l = map head . group . go $ [[x^k*y^l | x <- [1..]] | y <- [1..]]
  where
    -- Emit the top-left value, then re-insert the rest of that row at its
    -- sorted position among the remaining rows (rows compare head-first).
    go ((x:xs):xss) = x : go (insert xs xss)
It's much, much faster. Compare:
> sum . take 400 $ generateExponents 2 3
5994260
(8.26 secs, 23,596,249,112 bytes)
> sum . take 400 $ generateExponents' 2 3
5994260
(0.01 secs, 1,172,864 bytes)
> sum . take 1000000 {- a million -} $ generateExponents' 2 3
72001360441854395
(6.99 secs, 13,460,753,616 bytes)
I think you just forgot to map the actual function over the xs and ys:
import Control.Applicative (liftA2)  -- already in the Prelude on GHC >= 9.6

generateExponents k l = [n | n <- [1 ..], n `elem` products n]
  where
    xs n = takeWhile (<= n) $ map (^ k) [1 ..]
    ys n = takeWhile (<= n) $ map (^ l) [1 ..]
    products n = liftA2 (*) (xs n) (ys n)
I have a database dump with approx. 6.0000 lines.
They all look like this:
{"student”:”12345”,”achieved_date":1576018800,"expiration_date":1648677600,"course_code”:”SOMECODE,”certificate”:”STRING WITH A LOT OF CHARACTERS”,”certificate_code”:”ABCDE,”certificate_date":1546297200}
"STRING WITH A LOT OF CHARACTERS" is a string with around 600.000 characters (!)
I need those characters on each line removed... I tried with:
sed 's/certificate\":\"*","certificate_code//'
But it seems it did not do the trick.
I also couldn't find an existing answer that works here, so I'm reaching out to you; hopefully you can help me. Is this best done with sed, or is there some other method?
For now I don't care if all the characters in "STRING WITH A LOT OF CHARACTERS" are removed or replaced by e.g. a 0; even that would make it workable for me ;)
The output for od -xc filename | head is:
0000000 2d2d 4d20 5379 4c51 6420 6d75 2070 3031
- - M y S Q L d u m p 1 0
0000020 312e 2033 4420 7369 7274 6269 3520 372e
. 1 3 D i s t r i b 5 . 7
0000040 322e 2c39 6620 726f 4c20 6e69 7875 2820
. 2 9 , f o r L i n u x (
0000060 3878 5f36 3436 0a29 2d2d 2d0a 202d 6f48
x 8 6 _ 6 4 ) \n - - \n - - H o
0000100 7473 203a 3231 2e37 2e30 2e30 2031 2020
s t : 1 2 7 . 0 . 0 . 1
Hope you can help me!
When I run the od command on the sample text you've supplied, the output includes:
0000520 454d 4f43 4544 e22c 9d80 6563 7472 6669
M E C O D E , ” ** ** c e r t i f
0000540 6369 7461 e265 9d80 e23a 9d80 5453 4952
i c a t e ” ** ** : ” ** ** S T R I
0000560 474e 5720 5449 2048 2041 4f4c 2054 464f
N G W I T H A L O T O F
0000600 4320 4148 4152 5443 5245 e253 9d80 e22c
C H A R A C T E R S ” ** ** , ”
0000620 9d80 6563 7472 6669 6369 7461 5f65 6f63
** ** c e r t i f i c a t e _ c o
0000640 6564 80e2 3a9d 80e2 419d 4342 4544 e22c
d e ” ** ** : ” ** ** A B C D E , ”
So you can see the "quotes" are the byte sequence e2 80 9d, which is Unicode U+201D (see https://www.utf8-chartable.de/unicode-utf8-table.pl?start=8192&number=128 ).
Probably the simplest approach is to skip over these Unicode characters with the single-character wildcard "." :
sed "s/certificate.:.*.certificate_code/certificate_code/"
Unfortunately, sed doesn't appear to accept the Unicode \u201d syntax, so some other answers suggest using the hex byte sequence (\xe2\x80\x9d) instead - e.g. Escaping double quotation marks in sed (but unfortunately I haven't got that to work just yet, and I have to sign off now).
This answer explains why it could have happened, with some remedial action if that's possible in your situation : Unknown UTF-8 code units closing double quotes
If you are working with bash, would you please try the following:
q=$'\xe2\x80\x9d'
sed "s/certificate${q}:${q}.*${q},${q}certificate_code//" file
Result:
{"student”:”12345”,”achieved_date":1576018800,"expiration_date":1648677600,"course_code”:”SOMECODE,””:”ABCDE,”certificate_date":1546297200}
So the exercise says: "Consider binary encoding of real numbers on 16 bits. Fill in the blanks in the binary encoding of the number -0.625, knowing that "1110" is the exponent and stands for minus one ("-1").
_ 1110 _ _ _ _ _ _ _ _ _ _ _ "
I can't find the answer, and I know this is not a hard exercise (at least it doesn't look like one).
Let's ignore the sign for now, and decompose the value 0.625 into (negative) powers of 2:
0.625(dec) = 5 * 0.125 = 5 * 1/8 = 0.101(bin) * 2^0
This should be normalized (value shifted left until there is a one before the decimal point, and exponent adjusted accordingly), so it becomes
0.625(dec) = 1.01(bin) * 2^-1 (or 1.25 * 0.5)
With hidden bit
Assuming you have a hidden bit scenario (meaning that, for normalized values, the top bit is always 1, so it is not stored), this becomes .01 filled up on the right with zero bits, so you get
sign = 1 -- 1 bit
exponent = 1110 -- 4 bits
significand = 0100 0000 000 -- 11 bits
So the bits are:
1 1110 01000000000
Grouped differently:
1111 0010 0000 0000(bin) or F200(hex)
Without hidden bit (i.e. top bit stored)
If there is no hidden bit scenario, it becomes
1 1110 10100000000
or
1111 0101 0000 0000(bin) = F500(hex)
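Both variants are easy to reproduce with a small Python sketch (my addition; the helper name is mine, and the 1/4/11 sign/exponent/significand layout follows the answer above):

def pack_1_4_11(sign, exponent, significand):
    # Pack a 1-bit sign, 4-bit exponent and 11-bit significand into 16 bits.
    return (sign << 15) | (exponent << 11) | significand

print(hex(pack_1_4_11(1, 0b1110, 0b01000000000)))  # 0xf200 (hidden bit)
print(hex(pack_1_4_11(1, 0b1110, 0b10100000000)))  # 0xf500 (no hidden bit)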
First of all, you need to understand that every number z can be represented as
z = m * b^e
where m = mantissa, b = base, e = exponent.
So -0.625 could be represented as:
-0.625  * 10^0
-6.25   * 10^-1
-62.5   * 10^-2
-0.0625 * 10^1
With the IEEE conversion we aim for the normalized floating point number, which means there is exactly one digit before the point (as in -6.25 * 10^-1).
In binary, the single digit before the point of a normalized number is always a 1, so this digit is not stored.
You're converting into a 16 bit float, so you have:
1 bit sign + 5 bits exponent + 10 bits mantissa == 16 bits
Since the exponent can be negative or positive (as you've seen above, this depends only on how the point shifts), they came up with the so-called bias. For 5 bits the bias value is 01111 == 15 (dec), with a stored 14 meaning an exponent of -1, 16 meaning +1, and so on.
OK, enough small talk; let's convert your number as an example to show the process of conversion:
1. Convert the integer part to binary as usual.
2. Multiply the fractional part by 2; if the result is greater than 1, subtract 1 and write down a 1; if it is smaller than 1, write down a 0.
3. Repeat this step until the result is exactly 0, or you have written as many digits as the mantissa holds.
4. Shift the point until only one digit remains before it, and count the shifts; if you shifted to the left, add the count to the bias, and if you had to shift to the right, subtract the count from the bias. This is your exponent.
5. Determine your sign and put all the parts together.
-0.625
1. Integer part: 0 in binary == 0
2. 0.625 * 2 = 1.25  ==> write 1, keep 0.25
   0.25  * 2 = 0.5   ==> write 0
   0.5   * 2 = 1.0   ==> write 1, keep 0
   Stop (the result is 0).
3. The intermediate result is therefore -0.101.
   Shift the point once to the right for a normalized floating point number: -1.01
   exponent = bias + (-1) == 15 - 1 == 14 (dec) == 01110 (bin)
4. Put the parts together: sign = 1 (negative), and remember we do not store the leading 1 of the number:
   1 01110 01
   Since we stopped early in the mantissa calculation, fill the rest of the bits with 0:
   1 01110 0100000000
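To double-check the result (my addition): Python's struct module supports the IEEE 754 binary16 format directly via the 'e' format code, so you can ask it for the bit pattern of -0.625:

import struct

# Pack -0.625 as a binary16, then read the same two bytes back as an integer.
bits, = struct.unpack('<H', struct.pack('<e', -0.625))
print(f"{bits:016b} {bits:#06x}")  # 1011100100000000 0xb900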
The IEEE 754 standard specifies a binary16 as having the following format:
Sign bit: 1 bit
Exponent width: 5 bits
Significand precision: 11 bits (10 explicitly stored)
value = (-1)^signbit * 2^(exponent - 15) * 1.significandbits
The solution is as follows:
-0.625 = -1 x 0.5 x 1.25
significand = 1.25 = 1.01 (bin), so the stored bits are 0100000000
exponent = 14 = 01110
signbit = 1
ans = (1)(01110)(0100000000)
I'm taking a beginner Computer Science course at my local college, and one part of this assignment asks me to work through a hex calculation. We use an online basic computer simulator to do this that takes specific inputs.
So according to my Appendix, when I type in a certain code it is supposed to "add the bit patterns [ED] and [09] as though they were two's complement representations." When I type the code into the system, it gives an output of F6... but I have no idea how it got there.
I understand how adding in two's complement works and I understand how to add two normal hex numbers, but when I add 09 (which is supposed to be the hex version of two's complement 9) and ED (which is supposed to be the hex version of two's complement -19), I get 10 if adding in two's complement or 162 if adding in hex.
Okay, you're just confusing yourself. Stop converting. This is all in hexadecimal:
  ED
+ 09
----

D + 9 = 16   // keep the 6 and carry the 1

   1
  ED
+ 09
----
   6

1 + E = F

  ED
+ 09
----
  F6
Regarding the first step, using 0x to denote hex numbers so we don't get lost:
0xD = 13,
0x9 = 9,
13 + 9 = 22,
22 = 0x16
therefore
0xD + 0x9 = 0x16
Gotta run, but just one more quick edit before I go.
D + 1 = E
D + 2 = F
D + 3 = 10 (remember, this is hex, so this is not "ten")
D + 4 = 11
...
D + 9 = 16
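If you want to check yourself, here is a minimal Python sketch (my addition) of the same 8-bit two's complement addition:

def add8(a, b):
    # Add two 8-bit patterns, keeping only the low 8 bits.
    return (a + b) & 0xFF

def to_signed8(v):
    # Interpret an 8-bit pattern as a two's complement value.
    return v - 0x100 if v & 0x80 else v

s = add8(0xED, 0x09)
print(hex(s), to_signed8(0xED), to_signed8(s))  # 0xf6 -19 -10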
I am currently reading a book about "bit fiddling" and the following formula appears:
x-y = x+¬y+1
But this doesn't seem to work. Example:
x = 0100
y = 0010
x-y = 0010
¬y = 1101
¬y+1 = 1110
x+1110 = 10010
But 10010 != 0010...
Where did I make a mistake (if any)?
(The book is "Hacker's Delight" by Henry S. Warren.)
You only have a four bit system! That extra 1 on the left of your final result can't exist. It should be:
x = 0100
y = 0010
~y = 1101
~y + 1 = 1110
x + 1110 = 0010
The other bit overflows, and isn't part of your result. You may want to read up on two's complement arithmetic.
You are keeping the extra bit. In real computers, if you overflow the word, the bit disappears (actually, it gets saved in a carry flag).
Assuming the numbers are constrained to 4 bits, then the fifth 1 would be truncated, leaving you with 0010.
It's all about overflow. You only have four bits, so it's not 10010, but 0010.
Just to add to the answers, in a 2's complement system:
~x + 1 = -x
Say x = 2. In 4 bits, that's 0010.
~x = 1101
~x + 1 = 1110
And 1110 is -2
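To tie the answers together, a short Python sketch (my addition) that checks the identity x - y == x + ~y + 1 for every pair of 4-bit values, with a mask playing the role of the 4-bit word:

MASK = 0xF  # a four-bit world

def sub_via_complement(x, y):
    # Compute x - y as x + ~y + 1, truncated to four bits.
    return (x + (~y & MASK) + 1) & MASK

# The identity holds for every pair of 4-bit values:
assert all(sub_via_complement(x, y) == (x - y) & MASK
           for x in range(16) for y in range(16))

print(format(sub_via_complement(0b0100, 0b0010), '04b'))  # 0010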