How to add hex numbers using two's complement? - binary

I'm taking a beginner Computer Science course at my local college, and one part of this assignment asks me to add two hex numbers as two's complement values. We use an online basic computer simulator to do this that takes specific inputs.
So according to my Appendix, when I type in a certain code it is supposed to "add the bit patterns [ED] and [09] as though they were two's complement representations." When I type the code into the system, it gives an output of F6... but I have no idea how it got there.
I understand how adding in two's complement works and I understand how to add two normal hex numbers, but when I add 09 (which is supposed to be the hex version of two's complement 9) and ED (which is supposed to be the hex version of two's complement -19), I get 10 if adding in two's complement or 162 if adding in hex.

Okay, you're just confusing yourself. Stop converting. This is all in hexadecimal:
ED
+ 09
----
D + 9 = 16 // keep the 6 and carry the 1
1
ED
+ 09
----
6
1 + E = F
ED
+ 09
----
F6
Regarding the first step, using 0x to denote hex numbers so we don't get lost:
0xD = 13,
0x9 = 9,
13 + 9 = 22,
22 = 0x16
therefore
0xD + 0x9 = 0x16
Gotta run, but just one more quick edit before I go.
D + 1 = E
D + 2 = F
D + 3 = 10 (remember, this is hex, so this is not "ten")
D + 4 = 11
...
D + 9 = 16
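If you want to sanity-check additions like this, here is a minimal Python sketch (my own illustration, not the assignment's simulator): adding the unsigned byte values and keeping only the low 8 bits is exactly two's-complement addition.
def add_hex_bytes(a_hex, b_hex):
    # Add two 8-bit values given as hex strings; masking with 0xFF
    # keeps only the low byte, which is two's-complement addition.
    total = (int(a_hex, 16) + int(b_hex, 16)) & 0xFF
    return format(total, '02X')

print(add_hex_bytes("ED", "09"))  # F6, i.e. -19 + 9 = -10 (0xF6 is -10 as a signed byte)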

Related

A negative floating number to binary

So the exercise says: "Consider binary encoding of real numbers on 16 bits. Fill the empty points of the binary encoding of the number -0.625, knowing that "1110" stands for the exponent and equals minus one ("-1"):
_ 1110_ _ _ _ _ _ _ _ _ _ _ "
I can't find the answer and I know this is not a hard exercise (at least it doesn't look like a hard one).
Let's ignore the sign for now, and decompose the value 0.625 into (negative) powers of 2:
0.625(dec) = 5 * 0.125 = 5 * 1/8 = 0.101(bin) * 2^0
This should be normalized (value shifted left until there is a one before the decimal point, and exponent adjusted accordingly), so it becomes
0.625(dec) = 1.01(bin) * 2^-1 (or 1.25 * 0.5)
With hidden bit
Assuming you have a hidden bit scenario (meaning that, for normalized values, the top bit is always 1, so it is not stored), this becomes .01 filled up on the right with zero bits, so you get
sign = 1 -- 1 bit
exponent = 1110 -- 4 bits
significand = 0100 0000 000 -- 11 bits
So the bits are:
1 1110 01000000000
Grouped differently:
1111 0010 0000 0000(bin) or F200(hex)
Without hidden bit (i.e. top bit stored)
If there is no hidden bit scenario, it becomes
1 1110 10100000000
or
1111 0101 0000 0000(bin) = F500(hex)
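As a quick cross-check of the pattern above, here is a minimal Python sketch assuming the exercise's layout (1 sign bit, 4 exponent bits, 11 significand bits with a hidden leading 1) that assembles the fields and prints the result:
# Assumed layout from the exercise, not IEEE 754:
# 1 sign bit, 4 exponent bits, 11 significand bits (hidden bit not stored).
sign, exponent, significand = 0b1, 0b1110, 0b01000000000
word = (sign << 15) | (exponent << 11) | significand
print(format(word, '016b'), hex(word))  # 1111001000000000 0xf200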
First of all, you need to understand that every number z can be represented as
z = m * b^e
m = mantissa (significand), b = base, e = exponent
So -0.625 could be represented as:
-0.625 * 10^0
-6.25 * 10^-1
-62.5 * 10^-2
-0.0625 * 10^1
With the IEEE conversion we aim for the normalized floating-point number, which means there is only one digit before the point (-6.25 * 10^-1).
In binary the single digit before the point will always be a 1, so this digit is not stored.
You're converting into a 16-bit float, so you have:
1 sign bit + 5 exponent bits + 10 mantissa bits == 16 bits
Since the exponent can be negative or positive (as you've seen above, this depends only on how the point is shifted), the format uses a so-called bias. For 5 bits the bias value is 01111 == 15 (dec), with 14 encoding exponent -1 and 16 encoding exponent +1, and so on.
OK, enough small talk. Let's convert your number as an example to show the process (a short code sketch follows the worked example below):
Convert the integer part to binary as usual.
Multiply the fractional part by 2; if the result is greater than or equal to 1, subtract 1 and note a 1, otherwise note a 0.
Repeat this step until the result is 0 or you have noted as many digits as the mantissa holds.
Shift the point until only one digit remains before it and count the shifts; if you shifted left, add the count to the bias, if you shifted right, subtract it. This is your exponent.
Determine your sign and put all the parts together.
-0.625
1. 0 to binary == 0
2. 0.625 * 2 = 1.25 ==> note 1
0.25 * 2 = 0.5 ==> note 0
0.5 * 2 = 1 ==> note 1
Stop (the result is 0).
3. The intermediate result is therefore 0.101 (the minus sign is handled separately by the sign bit).
Shift the point once to the right for a normalized floating-point number:
1.01
exponent = bias + (-1) == 15 - 1 == 14 (dec) == 01110 (bin)
4. Put the parts together: sign = 1 (negative), and remember we do not store the leading 1 of the number:
1 01110 01
since we aborted during our mantissa calculation fill the rest of the bits with 0:
1 01110 01 000 000 00
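Here is a small Python sketch of the steps above for IEEE binary16 (normalized values only; zero, subnormals, infinities and NaN are ignored, and the function name is mine):
def encode_half(x):
    # Sketch of the procedure described above for IEEE binary16
    # (1 sign bit, 5 exponent bits, 10 stored mantissa bits, bias 15).
    sign = 1 if x < 0 else 0
    x = abs(x)
    exponent = 0
    while x >= 2:          # normalize large values down
        x /= 2
        exponent += 1
    while x < 1:           # normalize small values up
        x *= 2
        exponent -= 1
    mantissa = 0
    frac = x - 1           # drop the hidden leading 1
    for _ in range(10):    # repeatedly multiply the fraction by 2
        frac *= 2
        mantissa = (mantissa << 1) | int(frac >= 1)
        if frac >= 1:
            frac -= 1
    return (sign << 15) | ((exponent + 15) << 10) | mantissa

print(format(encode_half(-0.625), '016b'))  # 1011100100000000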
The IEEE 754 standard specifies a binary16 as having the following format:
Sign bit: 1 bit
Exponent width: 5 bits
Significand precision: 11 bits (10 explicitly stored)
Equation: value = (-1)^signbit * 2^(exponent - 15) * 1.significandbits
Solution is as follows:
-0.625 = -1 x 0.5 x 1.25
1.25 in binary is 1.01, so the stored significand bits are 0100000000
exponent = 14 = 01110
signbit = 1
ans = (1)(01110)(0100000000)
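For completeness, a quick cross-check against Python's own IEEE 754 binary16 packing (a sketch; the '>e' half-precision format code requires Python 3.6 or newer):
import struct

bits, = struct.unpack('>H', struct.pack('>e', -0.625))
print(format(bits, '016b'))                 # 1011100100000000
print(format(bits >> 15, 'b'),              # sign bit    -> 1
      format((bits >> 10) & 0x1F, '05b'),   # exponent    -> 01110
      format(bits & 0x3FF, '010b'))         # significand -> 0100000000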

Trouble understanding an exercise given the two's complement in hex format to convert into decimal format

I am trying to convert the two's complement of the following hex values to their decimal values:
23, 57, 94 and 87.
a) 23
Procedure: (3 x 16^0) + (2 x 16^1) -> (3) + (32) = 35 (Correct)
b) 57
Procedure: (7 x 16^0) + (5 x 16^1) -> (7) + (80) = 87 (Correct)
For 94 and 87, the correct values are -108 & -121 respectively.
If I follow the procedure I used for numbers a) and b) I get 148 & 128 for 94 & 87.
Can someone explain how I can get to the correct results, since mine are wrong? Do I need to convert the byte to binary first and then proceed from there?
Thanks a lot in advance!
0x94 = 0b10010100
now you can convert it to a decimal number as if it were a normal binary number, except that the MSB counts negative:
1 * -2^7 + 0 * 2^6 + 0 * 2^5 + 1 * 2^4 + 0 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0 =
-2^7 + 2^4 + 2^2 =
-128 + 16 + 4 =
-108
The other number works similarly.
First write down the binary representation of the hex value:
94h = 10010100b
To take the two's complement, you flip all bits and add 00000001b, so the two's complement of this binary string is
01101011b + 00000001b = 01101100b
Since the most significant bit of the original pattern is 1, the value is negative, and the complemented pattern you just computed is its magnitude:
01101100b = 108d, so 94h represents -108d
The other works similarly.
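Both answers boil down to the same rule, which is easy to script. A minimal Python sketch (the helper name is mine) that reads a two-digit hex byte as a signed two's-complement value:
def signed_byte(hex_str):
    # Interpret a two-digit hex string as a two's-complement 8-bit value.
    value = int(hex_str, 16)              # unsigned 0..255
    return value - 256 if value >= 0x80 else value

for h in ("23", "57", "94", "87"):
    print(h, "->", signed_byte(h))
# 23 -> 35, 57 -> 87, 94 -> -108, 87 -> -121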

How are Hex and Binary parsed?

HEX Article
By this I mean,
In a program if I write this:
1111
I mean 15. Likewise, if I write this:
0xF
I also mean 15. I am not entirely sure how the process is carried out in the chip (I vaguely recall something about flags), but in essence the compiler will calculate
2^3 + 2^2 + 2^1 + 2^0 = 15
Will 0xF be converted to "1111" before this calculation or is it written that somehow HEX F represents 15?
Or simply, 16^0 ? -- which obviously is not 15.
I cannot find any helpful article that states a conversion from HEX to decimal rather than first converting to binary.
Binary(base2) is converted how I did above (2^n .. etc). How is HEX(base16) converted? Is there an associated system, like base 2, for base 16?
I found in an article:
0x1234 = 1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4660
But, what if the number was 0xF, does this represent F in the 0th bit? I cannot find a straightforward example.
There are sixteen hexadecimal digits, 0 through 9 and A through F.
Digits 0 through 9 are the same in hex and in decimal.
0xA is 10 decimal.
0xB is 11 decimal.
0xC is 12 decimal.
0xD is 13 decimal.
0xE is 14 decimal.
0xF is 15 decimal.
For example:
0xA34F = 10 * 16^3 + 3 * 16^2 + 4 * 16^1 + 15 * 16^0 = 41807
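In other words, every base uses the same positional scheme; only the weights differ. A small Python sketch (my own illustration) of parsing hex digit by digit, alongside the built-in parser for binary:
def parse_hex(s):
    # Positional base-16 parsing: each digit is weighted by a power of 16,
    # exactly as binary digits are weighted by powers of 2.
    digits = "0123456789ABCDEF"
    value = 0
    for ch in s.upper():
        value = value * 16 + digits.index(ch)
    return value

print(parse_hex("F"))     # 15  (a single digit is just digit * 16^0)
print(parse_hex("A34F"))  # 41807
print(int("1111", 2))     # 15  (the binary case for comparison)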

Convert decimal number to excel-header-like number

0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve loop-bruteforce :-(
I expect a function/program, that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In the Excel conversion Z + 1 = AA, while in base 26 Z + 1 = BA.
The algorithm is almost the same as a decimal to base-26 conversion, with just one change.
In base 26 we make the recursive call with the quotient, but here we pass quotient - 1:
function decimalToExcel(num)
// base condition of recursion.
if num < 26
print 'A' + num
else
quotient = num / 26;
remainder = num % 26;
// recursive calls.
decimalToExcel(quotient - 1);
decimalToExcel(remainder);
end-if
end-function
Java Implementation
Python, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing:
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, convert each "digit" to a decimal number, multiply it by 26^a, and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Take num % 26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide num by 26. Repeat until num is zero. After this, reverse the converted string so the most significant digit comes first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant bits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ->701 case, don't increment the base exponent. Do however keep in mind that A no longer is 0 (but 1) and so forth.
Explanation of why this is not a base 26 conversion
Let's start by looking at a real base-26 positional system. (Rather, look at base 4, since it involves fewer digits.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0 rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
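To make the two directions described above concrete, here is an ungolfed Python sketch (the function names are mine, and numbering starts at 0 = A as in the question):
def num_to_excel(n):
    # 0 -> A, 25 -> Z, 26 -> AA, ...; note the "quotient - 1" step
    # that distinguishes this from a plain base-26 conversion.
    s = ""
    while n >= 0:
        s = chr(ord('A') + n % 26) + s
        n = n // 26 - 1
    return s

def excel_to_num(s):
    # Inverse: treat A..Z as 1..26 (bijective base 26), then shift to 0-based.
    n = 0
    for ch in s:
        n = n * 26 + ord(ch) - ord('A') + 1
    return n - 1

for i in (0, 25, 26, 27, 701, 702):
    print(i, num_to_excel(i), excel_to_num(num_to_excel(i)))
# 0 A 0, 25 Z 25, 26 AA 26, 27 AB 27, 701 ZZ 701, 702 AAA 702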
In Excel VBA ... the obvious choice :)
Sub a()
For Each O In Range("A1:AA1")
k = O.Address()
Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
Next
End Sub
Or for getting the column number into the first row of the worksheet (which makes more sense, since we are in Excel ...)
Sub a()
For Each O In Range("A1:AA1")
O.Value = O.Column - 1
Next
End Sub
Or better yet: 56 chars
Sub a()
Set O = Range("A1:AA1")
O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version:
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following formula, you can figure out the last character in the string:
char transform(int num)
{
    return (char)(num + 65); // Transform int to an ASCII alphabetic char ('A' is 65).
}
char lastChar(int num)
{
    return transform(num % 26);
}
Using this, we can make a recursive function (I don't think it's brute force).
string getExcelHeader(int decimal)
{
    if (decimal >= 26)
        return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
    else
        return string(1, transform(decimal));
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P

Code Golf: Calculate Orthodox Easter date

The Challenge
Calculate the Date of the Greek Orthodox Easter (http://www.timeanddate.com/holidays/us/orthodox-easter-day) Sunday in a given Year (1900-2100) using the least amount of characters.
Input is just a year in the form '2010'. It's not relevant where you get it (Input, CommandLineArgs etc.) but it must be dynamic!
Output should be in the form day-month-year (say dd/mm/yyyy or d/m/yyyy)
Restrictions: no built-in functions, such as Mathematica's EasterSundayGreekOrthodox or PHP's easter_date() (which returns the not-applicable Gregorian date automatically), may be used!
Examples
2005 returns 1/5/2005
2006 returns 23/4/2006
2007 returns 8/4/2007
2008 returns 27/4/2008
2009 returns 19/4/2009
2010 returns 4/4/2010
2011 returns 24/4/2011
2012 returns 15/4/2012
2013 returns 5/5/2013
2014 returns 20/4/2014
2015 returns 12/4/2015
Code count includes input/output (i.e full program).
Edit:
I mean the Eastern Easter Date.
Reference: http://en.wikipedia.org/wiki/Computus
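Most of the answers below rely on Meeus' Julian algorithm plus a 13-day Julian-to-Gregorian shift. For reference, here is an ungolfed Python sketch of that approach, valid for 1900-2099 (my own illustration, not one of the entries):
def orthodox_easter(year):
    # Meeus' Julian algorithm: Easter Sunday on the Julian calendar.
    a, b, c = year % 4, year % 7, year % 19
    d = (19 * c + 15) % 30
    e = (2 * a + 4 * b - d + 34) % 7
    month = (d + e + 114) // 31
    day = (d + e + 114) % 31 + 1
    # The Julian calendar lags the Gregorian by 13 days in 1900-2099.
    day += 13
    if month == 3 and day > 31:
        day, month = day - 31, 4
    if month == 4 and day > 30:
        day, month = day - 30, 5
    return "%d/%d/%d" % (day, month, year)

print(orthodox_easter(2010))  # 4/4/2010
print(orthodox_easter(2005))  # 1/5/2005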
Python (101 140 132 115 chars)
y=input()
d=(y%19*19+15)%30
e=(y%4*2+y%7*4-d+34)%7+d+127
m=e/31
a=e%31+1+(m>4)
if a>30:a,m=1,5
print a,'/',m,'/',y
This one uses the Meeus Julian algorithm but since this one only works between 1900 and 2099, an implementation using Anonymous Gregorian algorithm is coming right up.
Edit: Now 2005 is properly handled. Thanks to Mark for pointing it out.
Edit 2: Better handling of some years, thanks for all the input!
Edit 3: Should work for all years in range. (Sorry for hijacking it Juan.)
PHP CLI, no easter_date(), 125 characters
Valid for dates from 13 March 1900 to 13 March 2100, now works for Easters that fall in May
Code:
<?=date("d/m/Y",mktime(0,0,0,floor(($b=($a=(19*(($y=$argv[1])%19)+15)%30)+(2*($y%4)+4*$y%7-$a+34)%7+114)/31),($b%31)+14,$y));
Invocation:
$ php codegolf.php 2010
$ php codegolf.php 2005
Output:
04/04/2010
01/05/2005
With whitespace:
<?=date("d/m/Y", mktime(0, 0, 0, floor(($b = ($a = (19 * (($y = $argv[1]) % 19) + 15) % 30) + (2 * ($y % 4) + 4 * $y % 7 - $a + 34) % 7 + 114) / 31), ($b % 31) + 14, $y));
This iteration is no longer readable thanks to PHP's handling of assignments. It's almost a functional language!
For completeness, here's the previous, 127 character solution that does not rely on short tags:
Code:
echo date("d/m/Y",mktime(0,0,0,floor(($b=($a=(19*(($y=$argv[1])%19)+15)%30)+(2*($y%4)+4*$y%7-$a+34)%7+114)/31),($b%31)+14,$y));
Invocation:
$ php -r 'echo date("d/m/Y",mktime(0,0,0,floor(($b=($a=(19*(($y=$argv[1])%19)+15)%30)+(2*($y%4)+4*$y%7-$a+34)%7+114)/31),($b%31)+14,$y));' 2010
$ php -r 'echo date("d/m/Y",mktime(0,0,0,floor(($b=($a=(19*(($y=$argv[1])%19)+15)%30)+(2*($y%4)+4*$y%7-$a+34)%7+114)/31),($b%31)+14,$y));' 2005
C#, 155 157 182 209 212 characters
class P{static void Main(string[]i){int y=int.Parse(i[0]),c=(y%19*19+15)%30,d=c+(y%4*2+y%7*4-c+34)%7+128;System.Console.Write(d%31+d/155+"/"+d/31+"/"+y);}}
Python 2.3, 97 characters
y=int(input())
c=(y%19*19+15)%30
d=c+(y%4*2+y%7*4-c+34)%7+128
print"%d/%d/%d"%(d%31+d/155,d/31,y)
This also uses the Meeus Julian algorithm (and should work for dates in May).
removed no longer necessary check for modern years and zero-padding in output
don't expect Easters in March anymore because there are none between 1800-2100
included Python 2.3 version (shortest so far)
Mathematica
<<Calendar`;a=Print[#3,"/",#2,"/",#]&##EasterSundayGreekOrthodox##&
Invoke with
a[2010]
Output
4/4/2010
Me too: I don't see the point in not using built-in functions.
Java - 252 196 190 chars
Update 1: The first algo was for Western Gregorian Easter. Fixed to Eastern Julian Easter now. Saved 56 chars :)
Update 2: Zero padding seem to not be required. Saved 4 chars.
class E{public static void main(String[]a){long y=new Long(a[0]),b=(y%19*19+15)%30,c=b+(y%4*2+y%7*4-b+34)%7+(y>1899&y<2100?128:115),m=c/31;System.out.printf("%d/%d/%d",c%31+(m<5?0:1),m,y);}}
With newlines
class E{
public static void main(String[]a){
long y=new Long(a[0]),
b=(y%19*19+15)%30,
c=b+(y%4*2+y%7*4-b+34)%7+(y>1899&y<2100?128:115),
m=c/31;
System.out.printf("%d/%d/%d",c%31+(m<5?0:1),m,y);
}
}
JavaScript (196 characters)
Using the Meeus Julian algorithm. This implementation assumes that a valid four-digit year was given.
y=~~prompt();d=(19*(y%19)+15)%30;x=d+(2*(y%4)+4*(y%7)-d+34)%7+114;m=~~(x/31);d=x%31+1;if(y>1899&&y<2100){d+=13;if(m==3&&d>31){d-=31;m++}if(m==4&&d>30){d-=30;m++}}alert((d<10?"0"+d:d)+"/0"+m+"/"+y)
Delphi 377 335 317 characters
Single line:
var y,c,n,i,j,m:integer;begin Val(ParamStr(1),y,n);c:=y div 100;n:=y-19*(y div 19);i:=c-c div 4-(c-((c-17)div 25))div 3+19*n+15;i:=i-30*(i div 30);i:=i-(i div 28 )*(1-(i div 28)*(29 div(i+1))*((21 -n)div 11));j:=y+y div 4 +i+2-c+c div 4;j:=j-7*(j div 7);m:=3+(i-j+40 )div 44;Write(i-j+28-31*(m div 4),'/',m,'/',y)end.
Formatted:
var
y,c,n,i,j,m:integer;
begin
Val(ParamStr(1),y,n);
c:=y div 100;
n:=y-19*(y div 19);
i:=c-c div 4-(c-((c-17)div 25))div 3+19*n+15;
i:=i-30*(i div 30);
i:=i-(i div 28 )*(1-(i div 28)*(29 div(i+1))*((21 -n)div 11));
j:=y+y div 4 +i+2-c+c div 4;j:=j-7*(j div 7);
m:=3+(i-j+40 )div 44;
Write(i-j+28-31*(m div 4),'/',m,'/',y)
end.
Tcl
Eastern Easter
(116 chars)
puts [expr 1+[incr d [expr ([set y $argv]%4*2+$y%7*4-[
set d [expr ($y%19*19+15)%30]]+34)%7+123]]%30]/[expr $d/30]/$y
Uses the Meeus algorithm. Takes the year as a command line argument, produces Eastern easter. Could be a one-liner, but it's slightly more readable when split...
Western Easter
(220 chars before splitting over lines)
interp alias {} tcl::mathfunc::s {} set;puts [expr [incr 3 [expr {
s(2,(s(4,$argv)%100/4*2-s(3,(19*s(0,$4%19)+s(1,$4/100)-$1/4-($1-($1+8)/25+46)
/3)%30)+$1%4*2-$4%4+4)%7)-($0+11*$3+22*$2)/451*7+114}]]%31+1]/[expr $3/31]/$4
Uses the Anonymous algorithm.
COBOL, 1262 chars
WORKING-STORAGE SECTION.
01 V-YEAR PIC S9(04) VALUE 2010.
01 V-DAY PIC S9(02) VALUE ZERO.
01 V-EASTERDAY PIC S9(04) VALUE ZERO.
01 V-CENTURY PIC S9(02) VALUE ZERO.
01 V-GOLDEN PIC S9(04) VALUE ZERO.
01 V-GREGORIAN PIC S9(04) VALUE ZERO.
01 V-CLAVIAN PIC S9(04) VALUE ZERO.
01 V-FACTOR PIC S9(06) VALUE ZERO.
01 V-EPACT PIC S9(06) VALUE ZERO.
PROCEDURE DIVISION
XX-CALCULATE EASTERDAY.
COMPUTE V-CENTURY = (V-YEAR / 100) + 1
COMPUTE V-GOLDEN= FUNCTION MOD(V-YEAR, 19) + 1
COMPUTE V-GREGORIAN = (V-CENTURY * 3) / 4 - 12
COMPUTE V-CLAVIAN
= (V-CENTURY * 8 + 5) / 25 - 5 - V-GREGORIAN
COMPUTE V-FACTOR
= (V-YEAR * 5) / 4 - V-GREGORIAN - 10
COMPUTE V-EPACT
= FUNCTION MOD((V-GOLDEN * 11 + 20 + V-CLAVIAN), 30)
IF V-EPACT = 24
ADD 1 TO V-EPACT
ELSE
IF V-EPACT = 25
IF V-GOLDEN > 11
ADD 1 TO V-EPACT
END-IF
END-IF
END-IF
COMPUTE V-DAY = 44 - V-EPACT
IF V-DAY < 21
ADD 30 TO V-DAY
END-IF
COMPUTE V-DAY
= V-DAY + 7 - (FUNCTION MOD((V-DAY + V-FACTOR), 7))
IF V-DAY <= 31
ADD 300 TO V-DAY GIVING V-EASTERDAY
ELSE
SUBTRACT 31 FROM V-DAY
ADD 400 TO V-DAY GIVING V-EASTERDAY
END-IF
.
XX-EXIT.
EXIT.
Note: Not mine, but I like it
EDIT: I added a char count with spaces but I don't know how spacing works in COBOL so I didn't change anything from original. ~vlad003
UPDATE: I've found where the OP got this code: http://www.tek-tips.com/viewthread.cfm?qid=31746&page=112. I'm just putting this here because the author deserves it. ~vlad003
C, 128 121 98 characters
Back to Meeus' algorithm. Computing the day in Julian, but adjusting for Gregorian (this still seems naive to me, but I cannot find a shorter alternative).
main(y,v){int d=(y%19*19+15)%30;d+=(y%4*2+y%7*4-d+34)%7+128;printf("%d/%d/%d",d%31+d/155,d/31,y);}
I have not found a case where floor(d/31) would actually be needed. Also, to account for dates in May, the m in Meeus' algorithm must be at least 5, therefore the DoM is greater than 154, hence the division.
The year is supplied as the number of program invocation arguments plus one, ie. for 1996 you must provide 1995 arguments. The range of ARG_MAX on modern systems is more than enough for this.
PS. I see Gabe has come to the same implementation in Python 2.3, surpassing me by one character. Aw. :(
PPS. Anybody looking at a tabular method for 1800-2099?
Edit - Shortened Gabe's answer to 88 characters:
y=input()
d=(y%19*19+15)%30
d+=(y%4*2+y%7*4-d+34)%7+128
print"%d/%d/%d"%(d%31+d/155,d/31,y)
BASIC, 973 chars
Sub EasterDate (d, m, y)
Dim FirstDig, Remain19, temp 'intermediate results
Dim tA, tB, tC, tD, tE 'table A to E results
FirstDig = y \ 100 'first 2 digits of year
Remain19 = y Mod 19 'remainder of year / 19
' calculate PFM date
temp = (FirstDig - 15) \ 2 + 202 - 11 * Remain19
Select Case FirstDig
Case 21, 24, 25, 27 To 32, 34, 35, 38
temp = temp - 1
Case 33, 36, 37, 39, 40
temp = temp - 2
End Select
temp = temp Mod 30
tA = temp + 21
If temp = 29 Then tA = tA - 1
If (temp = 28 And Remain19 > 10) Then tA = tA - 1
'find the next Sunday
tB = (tA - 19) Mod 7
tC = (40 - FirstDig) Mod 4
If tC = 3 Then tC = tC + 1
If tC > 1 Then tC = tC + 1
temp = y Mod 100
tD = (temp + temp \ 4) Mod 7
tE = ((20 - tB - tC - tD) Mod 7) + 1
d = tA + tE
'return the date
If d > 31 Then
d = d - 31
m = 4
Else
m = 3
End If
End Sub
Credit: Astronomical Society of South Australia
EDIT: I added a char count but I think many spaces could be removed; I don't know BASIC so I didn't make any changes to the code. ~vlad003
I'm not going to implement it, but I'd like to see one where the code e-mails the Pope, scans any answer that comes back for a date, and returns that.
Admittedly, the calling process may be blocked for a while.
Javascript 125 characters
This will handle years 1900 - 2199. Some of the other implementations cannot handle the year 2100 correctly.
y=prompt();k=(y%19*19+15)%30;e=(y%4*2+y%7*4-k+34)%7+k+127;m=~~(e/31);d=e%31+m-4+(y>2099);alert((d+=d<30||++m-34)+"/"+m+"/"+y)
Ungolfed..ish
// get the year to check.
y=prompt();
// do something crazy.
k=(y%19*19+15)%30;
// do some more crazy...
e=(y%4*2+y%7*4-k+34)%7+k+127;
// estimate the month. p.s. The "~~" is like Math.floor
m=~~(e/31);
// e % 31 => get the day
d=e%31;
if(m>4){
d += 1;
}
if(y > 2099){
d += 1;
}
// if d is less than 30 days add 1
if(d<30){
d += 1;
}
// otherwise, change month to May
// and adjusts the days to match up with May.
// e.g., 32nd of April is 2nd of May
else{
m += 1;
d = m - 34 + d;
}
// alert the result!
alert(d + "/" + m + "/" + y);
A fix for dates up to 2399.
I'm sure there is a way to algorithmically calculate dates beyond this but I don't want to figure it out.
y=prompt();k=(y%19*19+15)%30;e=(y%4*2+y%7*4-k+34)%7+k+127;m=~~(e/31);d=e%31+m-4+(y<2200?0:~~((y-2000)/100));alert((d+=d<30||++m-34)+"/"+m+"/"+y)
'VB .Net implementation of:
'http://aa.usno.navy.mil/faq/docs/easter.php
Dim y As Integer = 2010
Dim c, d, i, j, k, l, m, n As Integer
c = y \ 100
n = y - 19 * (y \ 19)
k = (c - 17) \ 25
i = c - c \ 4 - (c - k) \ 3 + 19 * n + 15
i = i - 30 * (i \ 30)
i = i - (i \ 28) * (1 - (i \ 28) * (29 \ (i + 1)) * ((21 - n) \ 11))
j = y + y \ 4 + i + 2 - c + c \ 4
j = j - 7 * (j \ 7)
l = i - j
m = 3 + (l + 40) \ 44
d = l + 28 - 31 * (m \ 4)
Easter = DateSerial(y, m, d)