OCaml function: what is the nature of the problem?

Here is what I'm currently trying. I have a function to calculate the inverse sum of a number:
let inverseSum n =
  let rec sI n acc =
    match n with
    | 1 -> acc
    | _ -> sI (n - 1) ((1.0 /. float n) +. acc)
  in sI n 1.0;;
For example, inverseSum 2 -> 1/2 + 1 = 3/2 = 1.5
I test the function with 2 and 5, and it works fine:
inverseSum 2;;
- : float = 1.5
inverseSum 5;;
- : float = 2.28333333333333321
For the moment, no problem.
After that, I initialize a list which contains all numbers between 1 and 10000 ([1;…;10000])
let initList = List.init 10000 (fun n -> n + 1);;
No problem there.
Then I write a function that replaces each element of the list with its inverse sum
(e.g. [1;2;3] -> [inverseSum 1; inverseSum 2; inverseSum 3]):
let rec invSumLst lst =
  match lst with
  | [] -> []
  | h :: t -> (inverseSum h) :: invSumLst t;;
and I use it on the list initList:
let invInit = invSumLst initList;;
So far so good, but I start to have doubts from this stage:
I select the elements of invInit strictly less than 5.0:
let listLess5 = List.filter (fun n -> n < 5.0) invInit;;
And I compute the sum of these elements using fold_left:
let foldLess5 = List.fold_left (+.) 0.0 listLess5;;
I redo the last two steps with floats greater than or equal to 5.0
let moreEg5 = List.filter (fun n -> n >= 5.0) invInit;;
let foldMore5 = List.fold_left (+.) 0.0 moreEg5;;
Finally, I sum all the numbers of the list:
let foldInvInit = List.fold_left (+.) 0.0 invInit;;
But at the end, when I try to compute the absolute error between the sum of the elements less than 5 plus the sum of those greater than or equal to 5, and the sum of all the elements of the list, the result is surprising:
Float.abs ((foldLess5 +. foldMore5) -. foldInvInit);;
Printf.printf "%f\n" (Float.abs ((foldLess5 +. foldMore5) -. foldInvInit));;
Printf.printf "%b\n" ((foldLess5+.foldMore5) = foldInvInit);;
returns:
let foldMore5 = List.fold_left (+.) 0.0 moreEg5;;
val foldMore5 : float = 87553.6762998474733
let foldInvInit = List.fold_left (+.) 0.0 invInit;;
val foldInvInit : float = 87885.8479664799379
Float.abs ((foldLess5 +. foldMore5) -. foldInvInit);;
- : float = 1.45519152283668518e-11
Printf.printf "%f\n" (Float.abs ((foldLess5 +. foldMore5) -. foldInvInit));;
0.000000
- : unit = ()
Printf.printf "%b\n" ((foldLess5+.foldMore5) = foldInvInit);;
false
- : unit = ()
It's probably a rounding problem, but I would like to know exactly where the error comes from.
Here I am using the interpreter, so I can see the error "1.45519152283668518e-11".
But if I used a compiler like ocamlpro, I would just get 0.000000 and false on the terminal, and I wouldn't understand anything.
So I would just like to know whether the problem comes from one of the functions in the code, or from rounding done by the Printf.printf function, which printed the result in "non-scientific" notation.

OCaml is showing you the actual results of the operations you performed. The difference between the two sums is caused by the finite precision of floating-point values.
When adding up a list of large-ish numbers, by the time you reach the end of the list the running total is large enough that the lowest-order bits of the newly added values simply can't be represented. But when adding a list of small-ish numbers, fewer bits are lost.
A system that shows foldLess5 +. foldMore5 as equal to foldInvInit is most likely lying to you for your convenience.
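As an aside, here is a minimal toplevel sketch (illustrative numbers, not the data from the question) showing both effects at once: float addition is not associative, and the "%f" format rounds to six decimal places, hiding a tiny difference that "%.17g" (or the toplevel's own printer) still reveals.
(* Illustration only: the same three floats summed with a different grouping. *)
let sum1 = (0.1 +. 0.2) +. 0.3   (* prints as 0.60000000000000009 *)
let sum2 = 0.1 +. (0.2 +. 0.3)   (* prints as 0.59999999999999998 *)
let () =
  Printf.printf "%b\n" (sum1 = sum2);                 (* false *)
  Printf.printf "%f\n" (Float.abs (sum1 -. sum2));    (* 0.000000 *)
  Printf.printf "%.17g\n" (Float.abs (sum1 -. sum2))  (* about 1.1e-16 *)
So the error does not come from any particular function in your code: each fold is computed correctly in float arithmetic, the two results genuinely differ by a tiny amount, and "%f" simply rounds that difference away when printing.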

Related

Type casting within a recursive function in Haskell

I am learning Haskell, and recursion and the different types in Haskell are making my brain hurt. I am trying to create a recursive function that takes a 32-bit binary number string and converts it to a decimal number. I think my idea of how the recursion should work is fine, but implementing it in Haskell is giving me headaches. This is what I have so far:
bin2dec :: String -> Int
bin2dec xs = ""
bin2dec (x:xs) = bin2dec xs + 2^(length xs) * x
The function is supposed to take a 32-bit number string and then take off the first character of the string. For example, "0100101010100101" becomes "0" and "100101010100101". It should then turn the first character into an integer, multiply it by 2^(length of the rest of the string), and add it to the recursive call. So if the first character in the 32-bit string is "1", it becomes 1 * 2^(31) + the recursive call.
But whenever I try to compile it, I get:
traceProcP1.hs:47:14: error:
* Couldn't match type `[Char]' with `Int'
Expected: Int
Actual: String
* In the expression: ""
In an equation for `bin2dec': bin2dec xs = ""
|
47 | bin2dec xs = ""
| ^^
traceProcP1.hs:48:31: error:
* Couldn't match expected type `Int' with actual type `Char'
* In the second argument of `(+)', namely `2 ^ (length xs) * x'
In the expression: bin2dec xs + 2 ^ (length xs) * x
In an equation for `bin2dec':
bin2dec (x : xs) = bin2dec xs + 2 ^ (length xs) * x
|
48 | bin2dec (x:xs) = bin2dec xs + 2^(length xs) * x
| ^^^^^^^^^^^^^^^^^^
I know this has to do with converting between the types, but I am having trouble doing that in Haskell. I have tried converting x with read, and I have tried writing guards that turn '0' into 0 and '1' into 1, but I am having trouble getting these to work. Any help would be greatly appreciated.
There is no casting. If you want to convert from one type to another, there needs to be a function with the right type signature to do so. When looking for any function in Haskell, Hoogle is often a good start. In this case, you're looking for Char -> Int, which has several promising options. The first one I see is digitToInt, which sounds about right for you.
But if you'd rather do it yourself, it's quite easy to write a function with the desired behavior, using pattern matching:
bit :: Char -> Int
bit '0' = 0
bit '1' = 1
bit c = error $ "Invalid digit '" ++ [c] ++ "'"

Recursive Haskell function but only does part of it once

I've got a function that needs to work out the minimum change required to break down a certain amount of money.
A user would enter an amount of money and the possible denominations, and it would output how many of each denomination would be needed to make up the money, e.g.:
> coinChange 34 [1, 5, 10, 25, 50, 100]
[4,1,0,1,0,0]
Here is what I have so far:
coinChange :: Integer -> [Integer] -> [Integer]
coinChange _ [] = []
coinChange 0 xs = []
coinChange v xs = do
  dropWhile (>v) (reverse xs)

createNewList :: Integer -> [Integer]
createNewList 0 = []
createNewList x = 0 : createNewList (x - 1)
I have a separate function called createNewList which I want to call to set up a new list of 0s at the beginning.
However, because I plan on making coinChange recursive when working out how much money is left, if I just call createNewList inside coinChange, it would reset the list, because it would be called on every recursive call.
So my question is: is there a way to make a function call another function only once, before proceeding with the recursion?
Hope I've made it clear, thanks
In general you are allowed to create local helper functions in Haskell. Maybe this example will inspire you:
prepareData x = if x < 0 then -x else x

mainFunction n =
  let recursiveLocalFunction x =
        if x < 2 then 1 else x * recursiveLocalFunction (x - 1)
  in recursiveLocalFunction (prepareData n)

OCaml and creating a list of lists

I am currently trying to use functions to create:
0 V12 V13 V14
V21 0 V23 V24
V31 V32 0 V34
V41 V42 V43 0
A way that I found to do this was to use these equations:
(2*V1 - 1)*(2*V2 - 1) for spot V(1,2) in the matrix
(2*V1 - 1)*(2*V3 - 1) for spot V(1,3) in the matrix
etc.
Thus far I have:
let singleState state =
  if state = 0.0 then 0.0
  else ((2.0 *. state) -. 1.0);;

let rec matrixState v =
  match v with
  | [] -> []
  | hd :: [] -> v
  | hd :: (nx :: _ as tl) ->
      singleState hd *. singleState nx :: matrixState tl;;
My results come out to be:
float list = [-3.; -3.; -3.; -1.]
When they should be a list of lists that look as follows:
0 -1 1 -1
-1 0 -1 1
1 -1 0 -1
-1 1 -1 0
So instead of making a list of lists, it is making just one list. I am also having trouble figuring out how to make the diagonal 0.
The signatures should look like:
val singleState : float list -> float list list = <fun>
val matrixState : float list list -> float list list = <fun>
and I am getting
val singleState : float -> float = <fun>
val matrixState : float list -> float list = <fun>
Any ideas?
With some fixing up, your function would make one row of the result. Then you could call it once for each row you need. A good way to do the repeated calling might be with List.map.
Assuming this is mostly a learning exercise, it might be good to first make a matrix like this:
V11 V12 V13 V14
V21 V22 V23 V24
V31 V32 V33 V34
V41 V42 V43 V44
I think this will be a lot easier to calculate.
Then you can replace the diagonal with zeroes. Here's some code that would replace the diagonal:
let replnth r n l =
  List.mapi (fun i x -> if i = n then r else x) l

let zerorow row (n, res) =
  (n - 1, replnth 0.0 n row :: res)

let zerodiag m =
  let (_, res) = List.fold_right zerorow m (List.length m - 1, []) in
  res
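For completeness, here is a sketch of how the two steps could fit together. buildMatrix is a made-up helper for illustration, and I'm assuming the input vector holds 0.0/1.0 values so that the question's formula (2*Vi - 1)*(2*Vj - 1) yields the ±1 entries:
(* Sketch only: build the full Vi*Vj matrix with nested List.map calls,
   then zero its diagonal with zerodiag from above. *)
let buildMatrix v =
  List.map
    (fun vi -> List.map (fun vj -> (2.0 *. vi -. 1.0) *. (2.0 *. vj -. 1.0)) v)
    v

let example = zerodiag (buildMatrix [1.0; 0.0; 1.0; 0.0])
(* [[0.; -1.; 1.; -1.]; [-1.; 0.; -1.; 1.];
   [1.; -1.; 0.; -1.]; [-1.; 1.; -1.; 0.]] -- the matrix the question asks for *)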
I would prefer to go with an array for your work.
A nice function to use here is Array.init; it works like so:
# Array.init 5 (fun x -> x);;
- : int array = [|0; 1; 2; 3; 4|]
Note that 5 plays the role of the size of our array.
But since you want a matrix, we need to build an array of arrays, which is achieved with two calls to Array.init, the second one nested inside the first:
# Array.init 3 (fun row -> Array.init 3 (fun col -> row+col));;
- : int array array = [|[|0; 1; 2|]; [|1; 2; 3|]; [|2; 3; 4|]|]
Note that I've named my variables row and col to indicate that they correspond to the row index and column index of our matrix.
Last, since your formula uses a reference vector V holding the values [|V1;V2;V3;V4|], we need to create one and access it inside our matrix builder. (The value held in cell n of an array tab is accessed as tab.(n-1).)
This finally leads us to the working example:
let vect = [|1;2;3;4|]

let built_matrix =
  Array.init 4 (fun row ->
    Array.init 4 (fun col ->
      if col=row then 0
      else vect.(row)+vect.(col)))
Of course you'll have to adapt this piece of code to your own requirements.
A side note about syntax: repeating Array each time can be avoided using a nice feature of OCaml.
We can locally open a module like so:
let built_matrix =
  let open Array in
  init 4 (fun row ->
    init 4 (fun col ->
      if col=row then 0
      else vect.(row)+vect.(col)))
Even shorter, let open Array in ... can be written as Array.(...). Below is a chunk of code run under the excellent utop to illustrate it (and I'll take this opportunity to also convert our matrix to a list of lists):
utop #
Array.(
  to_list
  @@ map to_list
  @@ init 4 (fun r ->
       init 4 (fun c ->
         if r = c then 0
         else vect.(r) + vect.(c))))
;;
- : int list list = [[0; 3; 4; 5]; [3; 0; 5; 6]; [4; 5; 0; 7]; [5; 6; 7; 0]]
I hope it helps

Code-Golf: Modulus Divide

Challenge:
Without using the modulus operator already provided by your language, write a program that will take two integer inputs from a user and then display the result of the first number modulus-divided by the second number. Assume all input is positive.
Example:
Input of first number:2
Input of second number:2
Result:0
Who wins:
In case you don't know how Code Golf works, the winner is the person who writes this program in the fewest characters.
CSS: 107 chars :)
CSS (ungolfed):
li {
counter-increment: a;
}
li:after {
content: counter(a);
}
li:nth-child(3n) { /* replace 3 with 2nd input ("b" in "a % b") */
counter-reset: a;
counter-increment: none;
}
Accompanying HTML:
<ol> <li></li> <li></li> <li></li> <!-- etc. --> </ol>
This doesn't work in IE (surprise surprise!).
J, 10 characters
([-]*<.#%)
Usage:
10 ([-]*<.#%) 3
1
J, 17 characters (with input as a list)
({.-{:*[:<.{.%{:)
Usage:
({.-{:*[:<.{.%{:) 10 3
1
({.-{:*[:<.{.%{:) 225 13
4
Explanation:
I took a totem pole and turned it into a smiley, and it worked.
Golfscript, 6 chars:
2*~/*-
Usage (only way to input into golfscript):
echo 14 3 | ruby golfscript.rb modulo.gs
2
Explanation:
2*~ #double the input string and eval (so now 14 3 14 3 are on the stack)
/ #int divide 14 / 3, gives quotient
*- #multiply that result by 3, subtract from 14, gives remainder
RePeNt, 5 chars
2?/*-
Run using:
RePeNt mod.rpn 17 3
RePeNt "2?/*-" 17 3
RePeNt is a stack-based toy language I made myself where every operator/command/loop is entered in Reverse Polish Notation (RPN). I will release the interpreter when I have tidied it up a bit.
Command  Explanation                                               Stack
-------  -----------                                               -----
n/a      The program takes 2 parameters ( 17 3 ) and pushes them   17 3
         onto the stack
2        Pushes a 2 onto the stack                                 17 3 2
?        Pops a number (x) off the stack + copies the last x       17 3 17 3
         stack items onto the stack
/        Divides on stack                                          17 3 5
*        Multiplies on stack                                       17 15
-        Subtracts on stack                                        2
Ruby (32):
p(a=gets.to_i)-a/(b=gets.to_i)*b
Sure I won't win, but here goes nothing:
<?php
$a=readline("#1:");
$b=readline("#2:");
while($b<=$a)$a-=$b;
echo "Result: $a";
I know there are already two Ruby answers, but why not; getting the input this way is a different enough approach to knock off a few characters.
Ruby 1.8.7+, 29 chars
a,n=*$*.map(&:to_i);p a-a/n*n
$ ruby a.rb 10 3
1
C: 52
main(a,b){scanf("%d%d",&a,&b);printf("%d",a-a/b*b);}
Python: 25 chars
Behaves with negative numbers, identically to modulus operator. Takes two comma-separated numbers.
x,y=input()
print x-x/y*y
Clojure: 30 characters
#(if(>%2%1)%1(recur(-%1%2)%2))
Unefunge-98: 14 chars
&:7p&:' \/*-.@
Unefunge is the 1-dimensional instance of Funge-98: http://quadium.net/funge/spec98.html
Explanation (Command <- Explanation [Stack]):
& <- Get integer input of value A and store on stack.
[A]
: <- Duplicate top of stack.
[A A]
7 <- Push 7 on stack. Used for the `p` command.
[A A 7]
p <- Pop top two values (7 then A). Place the character whose ASCII value
is A at position 7 in the code (where the space is).
[A]
& <- Get integer input of value B and store on stack.
[A B]
: <- Duplicate top of stack.
[A B B]
' <- Jump over the next character and grab the ASCII value of the jumped character.
[A B B A]
<- Because of the `p` command, this is actually the character whose ASCII
value is A at this point in the code. This was jumped over by the
previous instruction.
\ <- Swap top two values of stack.
[A B A B]
/ <- Pop top two values (B then A). Push (A/B) (integer division) onto stack.
[A B (A/B)]
* <- Pop top two values ((A/B) then B). Push (B*(A/B)) onto stack.
[A (B*(A/B))]
- <- Pop top two values ((B*(A/B)) then A). Push (A-(B*(A/B))) onto stack.
[(A-(B*(A/B)))]
. <- Pop top value and print it as an integer.
[]
@ <- Exit program.
The code was tested with this incomplete (but complete enough) Unefunge-98 interpreter I wrote:
module Unefunge where
import Prelude hiding (subtract)
import qualified Data.Map as Map
import Control.Exception (handle)
import Control.Monad
import Data.Char (chr, ord)
import Data.Map (Map)
import System.Environment (getArgs)
import System.Exit (exitSuccess, exitFailure, ExitCode (..))
import System.IO (hSetBuffering, BufferMode (..), stdin, stdout)
-----------------------------------------------------------
iterateM :: (Monad m) => (a -> m a) -> m a -> m b
iterateM f m = m >>= iterateM f . f
-----------------------------------------------------------
data Cell = Integer Integer | Char Char
-----------------------------------------------------------
newtype Stack = Stack [Integer]
mkStack = Stack []
push :: Integer -> Stack -> Stack
push x (Stack xs) = Stack (x : xs)
pop :: Stack -> Stack
pop (Stack xs) = case xs of
[] -> Stack []
_:ys -> Stack ys
top :: Stack -> Integer
top (Stack xs) = case xs of
[] -> 0
y:_ -> y
-----------------------------------------------------------
data Env = Env {
cells :: Map Integer Cell
, position :: Integer
, stack :: Stack
}
withStack :: (Stack -> Stack) -> Env -> Env
withStack f env = env { stack = f $ stack env }
pushStack :: Integer -> Env -> Env
pushStack x = withStack $ push x
popStack :: Env -> Env
popStack = withStack pop
topStack :: Env -> Integer
topStack = top . stack
-----------------------------------------------------------
type Instruction = Env -> IO Env
cellAt :: Integer -> Env -> Cell
cellAt n = Map.findWithDefault (Char ' ') n . cells
currentCell :: Env -> Cell
currentCell env = cellAt (position env) env
lookupInstruction :: Cell -> Instruction
lookupInstruction cell = case cell of
Integer n -> pushInteger n
Char c -> case c of
'\''-> fetch
'\\'-> swap
'0' -> pushInteger 0
'1' -> pushInteger 1
'2' -> pushInteger 2
'3' -> pushInteger 3
'4' -> pushInteger 4
'5' -> pushInteger 5
'6' -> pushInteger 6
'7' -> pushInteger 7
'8' -> pushInteger 8
'9' -> pushInteger 9
' ' -> nop
'+' -> add
'-' -> subtract
'*' -> multiply
'/' -> divide
'#' -> trampoline
'&' -> inputDecimal
'.' -> outputDecimal
':' -> duplicate
'p' -> put
'@' -> stop
instructionAt :: Integer -> Env -> Instruction
instructionAt n = lookupInstruction . cellAt n
currentInstruction :: Env -> Instruction
currentInstruction = lookupInstruction . currentCell
runCurrentInstruction :: Instruction
runCurrentInstruction env = currentInstruction env env
nop :: Instruction
nop = return
swap :: Instruction
swap env = return $ pushStack a $ pushStack b $ popStack $ popStack env
where
b = topStack env
a = topStack $ popStack env
inputDecimal :: Instruction
inputDecimal env = readLn >>= return . flip pushStack env
outputDecimal :: Instruction
outputDecimal env = putStr (show n ++ " ") >> return (popStack env)
where
n = topStack env
duplicate :: Instruction
duplicate env = return $ pushStack (topStack env) env
pushInteger :: Integer -> Instruction
pushInteger n = return . pushStack n
put :: Instruction
put env = return env' { cells = Map.insert loc c $ cells env'}
where
loc = topStack env
n = topStack $ popStack env
env' = popStack $ popStack env
c = Char . chr . fromIntegral $ n
trampoline :: Instruction
trampoline env = return env { position = position env + 1 }
fetch :: Instruction
fetch = trampoline >=> \env -> let
cell = currentCell env
val = case cell of
Char c -> fromIntegral $ ord c
Integer n -> n
in pushInteger val env
binOp :: (Integer -> Integer -> Integer) -> Instruction
binOp op env = return $ pushStack (a `op` b) $ popStack $ popStack env
where
b = topStack env
a = topStack $ popStack env
add :: Instruction
add = binOp (+)
subtract :: Instruction
subtract = binOp (-)
multiply :: Instruction
multiply = binOp (*)
divide :: Instruction
divide = binOp div
stop :: Instruction
stop = const exitSuccess
tick :: Instruction
tick = trampoline
-----------------------------------------------------------
buildCells :: String -> Map Integer Cell
buildCells = Map.fromList . zip [0..] . map Char . concat . eols
eols :: String -> [String]
eols "" = []
eols str = left : case right of
"" -> []
'\r':'\n':rest -> eols rest
_:rest -> eols rest
where
(left, right) = break (`elem` "\r\n") str
data Args = Args { sourceFileName :: String }
processArgs :: IO Args
processArgs = do
args <- getArgs
case args of
[] -> do
putStrLn "No source file! Exiting."
exitFailure
fileName:_ -> return $ Args { sourceFileName = fileName }
runUnefunge :: Env -> IO ExitCode
runUnefunge = iterateM round . return
where
round = runCurrentInstruction >=> tick
main :: IO ()
main = do
args <- processArgs
contents <- readFile $ sourceFileName args
let env = Env {
cells = buildCells contents
, position = 0
, stack = mkStack
}
mapM_ (`hSetBuffering` NoBuffering) [stdin, stdout]
handle return $ runUnefunge env
return ()
Ruby: 36 chars
a,b=gets.split.map(&:to_i);p a-a/b*b
Scheme: 38
(define(m a b)(- a(*(quotient a b)b)))
JavaScript, 11 chars
a-b*(0|a/b)
Assumes the input integers are contained in the variables a and b:
a = 2;
b = 2;
alert(a-b*(0|a/b)); // => 0
PHP, 49 chars
Assuming query string input in the form of script.php?a=27&b=7 and short tags turned on:
<?echo($a=$_GET['a'])-(int)($a/$b=$_GET['b'])*$b;
(That could be shortened by four by taking out the single-quotes, but that would throw notices.)
With the vile register_globals turned on you can get it down to 25 chars:
<?echo $a-(int)($a/$b)*b;
Perl, 33 chars
Reading the inputs could probably be shortened further.
($a,$b)=@ARGV;print$a-$b*int$a/$b
Usage
$ perl -e "($a,$b)=@ARGV;print$a-$b*int$a/$b" 2457 766
159
Java. Just for fun
Assuming that s[0] and s[1] hold the two integer inputs. Not sure this is worth anything, but it was a bit of fun.
Note that this won't suffer from the loop effect (large numbers) but will only work on whole numbers. Also, this solution is equally fast no matter how large the numbers are. A large percentage of the answers provided will generate a huge recursive stack or take practically forever if given, say, a large number and a small divisor.
public class M
{
    public static void main(String [] s)
    {
        int a = Integer.parseInt(s[0]);
        int b = Integer.parseInt(s[1]);
        System.out.println(a-a/b*b);
    }
}
Bash, 21 chars
echo $(($1-$1/$2*$2))
C, 226 chars
Late entry: I decided to go for the least number of characters while avoiding arithmetic operations altogether. Instead, I use the file system to compute the result:
#include <stdio.h>
#define z "%d"
#define y(x)x=fopen(#x,"w");
#define g(x)ftell(x)
#define i(x)fputs(" ",x);
main(a,b){FILE*c,*d;scanf(z z,&a,&b);y(c)y(d)while(g(c)!=a){i(c)i(d)if(g(d)==b)fseek(d,0,0);}printf(z,g(d));}
Java: 127 Chars
import java.util.*;enum M{M;M(){Scanner s=new Scanner(System.in);int a=s.nextInt(),b=s.nextInt();System.out.println(a-a/b*b);}}
Note the program does work, but it also throws
Exception in thread "main" java.lang.NoSuchMethodError: main
after the inputs are entered and after the output is outputted.
Common Lisp, 170 chars (including indentation):
(defun mod-divide()
  (flet((g(p)(format t"Input of ~a number:"p)(read)))
    (let*((a(g"first"))(b(g"second")))
      (format t "Result:~d~%"(- a(* b(truncate a b)))))))
Old version (187 characters):
(defun mod-divide()
  (flet((g(p)(format t"Input of ~a number:"p)(read)))
    (let*((a(g"first"))(b(g"second")))
      (multiple-value-bind(a b)(truncate a b)(format t "Result:~d~%"b)))))
DC: 8 chars
odO/O*-p
$ echo '17 3 odO/O*-p' | dc
2
Java, 110 chars
class C{public static void main(String[]a){Long x=new Long(a[0]),y=x.decode(a[1]);System.out.print(x-x/y*y);}}
Rebmu: 10 chars (no I/O) and 15 chars (with I/O)
If I/O is not required as part of the program source and you're willing to pass in named arguments then we can get 10 characters:
>> rebmu/args [sbJmpDVjKk] [j: 20 k: 7]
== 6
If I/O is required then that takes it to 15:
>> rebmu [rJrKwSBjMPdvJkK]
Input Integer: 42
Input Integer: 13
3
But using multiplication and division isn't as interesting (or inefficient) as this 17-character solution:
rJrKwWGEjK[JsbJk]
Which under the hood is turned into the equivalent:
r j r k w wge j k [j: sb j k]
Documented:
r j ; read j from user
r k ; read k from user
; write out the result of...
w (
; while j is greater than or equal to k
wge j k [
; assign to j the result of subtracting k from j
j: sb j k
]
; when a while loop exits the expression of the while will have the
; value of the last calculation inside the loop body. In this case,
; that last calculation was an assignment to j, and will have the
; value of j
)
Perl 25 characters
<>=~/ /;say$`-$'*int$`/$'
usage:
echo 15 6 | perl modulo.pl
3
Haskell, 30 chars
m a b=a-last((a-b):[b,2*b..a])
This is my first code golf; feel free to comment on the code and post improvements. ;-)
I know I won't win, but I just wanted to share my solution using lists.
In ruby with 38 chars
p (a=gets.to_i)-((b=gets.to_i)*(a/b))
Not a winner :(
DC: 7 Chars (maybe 5 ;)
??37axp
Used as follows:
echo "X\nY" | dc -e "??37axp"
[And, referencing some other examples above, if input is allowed to be inserted into the code, it can be 5 chars:
37axp
as in:
dc -e "17 3 37axp"
Just thought it worth a mention]
F#, 268 chars
Did I win?
printf "Input of first number:"
let x = stdin.ReadLine() |> int
printf "Input of second number:"
let y = stdin.ReadLine() |> int
let mutable z = x
while z >= 0 do
    z <- z - y
// whoops, overshot
z <- z + y
// I'm not drunk, really
printfn "Result:%d" z

Has anyone seen a programming puzzle similar to this?

"Suppose you want to build a solid panel out of rows of 4×1 and 6×1 Lego blocks. For structural strength, the spaces between the blocks must never line up in adjacent rows. As an example, the 18×3 panel shown below is not acceptable, because the spaces between the blocks in the top two rows line up.
There are 2 ways to build a 10×1 panel, 2 ways to build a 10×2 panel, 8 ways to build an 18×3 panel, and 7958 ways to build a 36×5 panel.
How many different ways are there to build a 64×10 panel? The answer will fit in a 64-bit signed integer. Write a program to calculate the answer. Your program should run very quickly – certainly, it should not take longer than one minute, even on an older machine. Let us know the value your program computes, how long it took your program to calculate that value, and on what kind of machine you ran it. Include the program’s source code as an attachment.
"
I was recently given a programming puzzle and have been racking my brains trying to solve it. I wrote some code in C++, and I know the number is huge... my program ran for a few hours before I decided to stop it, because the requirement was one minute of run time even on a slow computer. Has anyone seen a puzzle similar to this? It has been a few weeks and I can't hand this in anymore, but it has really been bugging me that I couldn't solve it correctly. Any suggestions on algorithms to use? Or maybe possible ways to solve it that are "outside the box"?
What I resorted to was writing a program that built each possible "layer" of 4x1 and 6x1 blocks to make a 64x1 layer. That turned out to be about 3300 different layers. Then I had my program run through and stack them into all possible 10-layer-high walls that have no cracks lining up... as you can see, this solution would take a long, long time. So obviously brute force does not seem to be effective in solving this within the time constraint. Any suggestions/insight would be greatly appreciated.
The main insight is this: when determining what's in row 3, you don't care about what's in row 1, just what's in row 2.
So let's call how to build a 64x1 layer a "row scenario". You say that there are about 3300 row scenarios. That's not so bad.
Let's compute a function:
f(s, r) = the number of ways to put row scenario number "s" into row "r", and legally fill all the rows above "r".
(I'm counting with row "1" at the top, and row "10" at the bottom)
STOP READING NOW IF YOU WANT TO AVOID SPOILERS.
Now clearly (numbering our rows from 1 to 10):
f(s, 1) = 1
for all values of "s".
Also, and this is where the insight comes in, (Using Mathematica-ish notation)
f(s, r) = Sum[ f(i, r-1) * fits(s, i) , {i, 1, 3328} ]
where "fits" is a function that takes two scenario numbers and returns "1" if you can legally stack those two rows on top of each other and "0" if you can't. This uses the insight because the number of legal ways to place scenario depends only on the number of ways to place scenarios above it that are compatible according to "fits".
Now, fits can be precomputed and stored in a 3328 by 3328 array of bytes. That's only about 10 Meg of memory. (Less if you get fancy and store it as a bit array)
The answer then is obviously just
Sum[ f(i, 10) , {i, 1, 3328} ]
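Here is a minimal OCaml sketch of that recurrence, just to make it concrete (my own illustration: count_walls is a made-up name, and the enumeration of row scenarios and the fits compatibility matrix are assumed to be precomputed):
(* Sketch: count wall tilings from a precomputed compatibility matrix.
   fits.(i).(j) is true when row scenario i can sit directly on top of
   row scenario j without any cracks lining up. *)
let count_walls (fits : bool array array) (height : int) : int64 =
  let n = Array.length fits in
  (* cur.(s) = number of legal ways to fill the rows built so far,
     given that the lowest of those rows uses scenario s *)
  let cur = ref (Array.make n 1L) in            (* one row: f(s, 1) = 1 *)
  for _row = 2 to height do
    let prev = !cur in
    cur := Array.init n (fun s ->
      let total = ref 0L in
      for i = 0 to n - 1 do
        if fits.(i).(s) then total := Int64.add !total prev.(i)
      done;
      !total)
  done;
  (* the answer is the sum over s of f(s, height) *)
  Array.fold_left Int64.add 0L !cur
With roughly 3300 scenarios and a height of 10, that is on the order of 10^8 integer additions, comfortably within the one-minute limit.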
Here is my answer. It's Haskell; among other things, you get bignums for free.
EDIT: It now actually solves the problem in a reasonable amount of time.
MORE EDITS: With a sparse matrix it takes a half a second on my computer.
You compute each possible way to tile a row. Let's say there are N ways to tile a row. Make an NxN matrix. Element i,j is 1 if row i can appear next to row j, 0 otherwise. Start with a vector containing N 1s. Multiply the matrix by the vector a number of times equal to the height of the wall minus 1, then sum the resulting vector.
module Main where
import Data.Array.Unboxed
import Data.List
import System.Environment
import Text.Printf
import qualified Data.Foldable as F
import Data.Word
import Data.Bits
-- This records the index of the holes in a bit field
type Row = Word64
-- This generates the possible rows for given block sizes and row length
genRows :: [Int] -> Int -> [Row]
genRows xs n = map (permToRow 0 1) $ concatMap comboPerms $ combos xs n
where
combos [] 0 = return []
combos [] _ = [] -- failure
combos (x:xs) n =
do c <- [0..(n `div` x)]
rest <- combos xs (n - x*c)
return (if c > 0 then (x, c):rest else rest)
comboPerms [] = return []
comboPerms bs =
do (b, brest) <- choose bs
rest <- comboPerms brest
return (b:rest)
choose bs = map (\(x, _) -> (x, remove x bs)) bs
remove x (bc@(y, c):bs) =
if x == y
then if c > 1
then (x, c - 1):bs
else bs
else bc:(remove x bs)
remove _ [] = error "no item to remove"
permToRow a _ [] = a
permToRow a _ [_] = a
permToRow a n (c:cs) =
permToRow (a .|. m) m cs where m = n `shiftL` c
-- Test if two rows of blocks are compatible
-- i.e. they do not have a hole in common
rowCompat :: Row -> Row -> Bool
rowCompat x y = x .&. y == 0
-- It's a sparse matrix with boolean entries
type Matrix = Array Int [Int]
type Vector = UArray Int Word64
-- Creates a matrix of row compatibilities
compatMatrix :: [Row] -> Matrix
compatMatrix rows = listArray (1, n) $ map elts [1..n] where
elts :: Int -> [Int]
elts i = [j | j <- [1..n], rowCompat (arows ! i) (arows ! j)]
arows = listArray (1, n) rows :: UArray Int Row
n = length rows
-- Multiply matrix by vector, O(N^2)
mulMatVec :: Matrix -> Vector -> Vector
mulMatVec m v = array (bounds v)
[(i, sum [v ! j | j <- m ! i]) | i <- [1..n]]
where n = snd $ bounds v
initVec :: Int -> Vector
initVec n = array (1, n) $ zip [1..n] (repeat 1)
main = do
args <- getArgs
if length args < 3
then putStrLn "usage: blocks WIDTH HEIGHT [BLOCKSIZE...]"
else do
let (width:height:sizes) = map read args :: [Int]
printf "Width: %i\nHeight %i\nBlock lengths: %s\n" width height
$ intercalate ", " $ map show sizes
let rows = genRows sizes width
let rowc = length rows
printf "Row tilings: %i\n" rowc
if null rows
then return ()
else do
let m = compatMatrix rows
printf "Matrix density: %i/%i\n"
(sum (map length (elems m))) (rowc^2)
printf "Wall tilings: %i\n" $ sum $ elems
$ iterate (mulMatVec m) (initVec (length rows))
!! (height - 1)
And the results...
$ time ./a.out 64 10 4 6
Width: 64
Height 10
Block lengths: 4, 6
Row tilings: 3329
Matrix density: 37120/11082241
Wall tilings: 806844323190414
real 0m0.451s
user 0m0.423s
sys 0m0.012s
Okay, 500 ms, I can live with that.
I solved a similar problem for a programming contest tiling a long hallway with tiles of various shapes. I used dynamic programming: given any panel, there is a way to construct it by laying down one row at a time. Each row can have finitely many shapes at its end. So for each number of rows, for each shape, I compute how many ways there are to make that row. (For the bottom row, there is exactly one way to make each shape.) Then the shape of each row determines the number of shapes that the next row can take (i.e. never line up the spaces). This number is finite for each row and in fact because you have only two sizes of bricks, it is going to be small. So you wind up spending constant time per row and the program finishes quickly.
To represent a shape I would just make a list of 4's and 6's, then use this list as a key in a table to store the number of ways to make that shape in row i, for each i.
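To make that representation concrete, here is a small OCaml sketch (my own illustration; the names are made up) that enumerates the row shapes as lists of block lengths and derives the crack positions used to check that adjacent rows are compatible:
(* Sketch: a row shape is a list of block lengths, e.g. [4; 6] or [6; 4]. *)
let rec row_shapes width =
  if width = 0 then [ [] ]
  else
    List.concat_map
      (fun block ->
        if block <= width then
          List.map (fun rest -> block :: rest) (row_shapes (width - block))
        else [])
      [4; 6]

(* Internal crack positions of a shape, e.g. [4; 6; 4] -> [4; 10].
   Two rows may be adjacent only if their crack lists share no position. *)
let cracks shape =
  let rec go pos = function
    | [] | [ _ ] -> []
    | b :: rest -> (pos + b) :: go (pos + b) rest
  in
  go 0 shape

let compatible r1 r2 =
  let c2 = cracks r2 in
  List.for_all (fun p -> not (List.mem p c2)) (cracks r1)
As a sanity check, row_shapes 10 returns the two shapes [4; 6] and [6; 4], matching the "2 ways to build a 10x1 panel" in the problem statement; the shapes (or their crack lists) can then serve as the table keys described above.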