Using function args in Haskell

I need to write a Haskell function that resolves the Newton-Raphson algorithm by passing a function, its derivative and the initial point (x0) as arguments, and returns an approximation of the function root.
Example:
f(x) = x^3 − 2x + 1
f′(x) = 3x^2 − 2
x0 = −1.5
...
x3 = x2 − f(x2)/f′(x2) ≈ −1.618
I would appreciate a lot all of your help and suggestions.
What I have tried before is this:
newtonR f g x0 =
    if (x0 - (f x0 / g x0)) /= 0 then
        newtonR f g (x0 - f x0 / g x0)
    else
        x0
... and it returns the following error message:
No instance for (Show (Double -> Double))
  arising from a use of `print'
  (maybe you haven't applied a function to enough arguments?)

One can start by writing a function that performs a single iteration of Newton-Raphson. Note that it is good and common practice to provide an explicit type signature before the body of the function:
nrIter :: (Double -> Double) -> (Double -> Double) -> Double -> Double
nrIter f f' x0 = x0 - (f x0 / f' x0)
It is one of the idiosyncrasies of Haskell that f' is a valid identifier, unlike in classic imperative languages.
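As a quick check of nrIter, we can apply it once to the f, f′ and x0 from the question (a hedged aside; firstStep is just an example name, not part of the final solution):
firstStep :: Double
firstStep = nrIter (\x -> x^3 - 2*x + 1) (\x -> 3*x^2 - 2) (-1.5)
-- one Newton step from -1.5: -1.5 - 0.625 / 4.75 ≈ -1.6316, already close to -1.618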
As mentioned in the comments, testing floating-point numbers for equality is error prone, because of inevitable rounding errors. Instead, we can provide an explicit tolerance level, and write the upper level logic for Newton-Raphson like this:
newtonR :: (Double -> Double) -> (Double -> Double) -> Double -> Double -> Double
newtonR f f' tol x0 =
    let
        x1 = nrIter f f' x0
        dx = abs (x1 - x0)
    in
        if dx < tol then x1
        else newtonR f f' tol x1
Sample test code:
We can compute an approximation of the cube root of 2 like this:
main :: IO ()
main = do
    let cr2 = newtonR (\x -> x^3 - 2.0) (\x -> 3.0*x^2) 1.0e-9 1.0
    putStrLn $ "Cubic root of 2 is close to: " ++ (show cr2)
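As a further sanity check, the same newtonR can be applied to the function from the question (a small sketch reusing the question's f, f′ and x0; questionRoot is just an example name):
questionRoot :: Double
questionRoot = newtonR (\x -> x^3 - 2*x + 1) (\x -> 3*x^2 - 2) 1.0e-9 (-1.5)
-- converges to roughly -1.618, i.e. the root (-1 - sqrt 5) / 2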

Overriding + in Haskell [duplicate]

I am trying to override the + symbol in an effort to learn how to define my own types. I am very new to Haskell, and I cannot seem to get past this error.
Here is my simple new type:
newtype Matrix x = Matrix x
(+):: (Num a, Num b, Num c) => Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
x + y = Matrix zipWith (\ a b -> zipWith (+) a b) x y
When I try to load this into ghci, I get the error
linear_algebra.hs:9:42:
Ambiguous occurrence ‘+’
It could refer to either ‘Main.+’, defined at linear_algebra.hs:9:3
or ‘Prelude.+’,
imported from ‘Prelude’ at linear_algebra.hs:1:1
(and originally defined in ‘GHC.Num’)
Failed, modules loaded: none.
Replacing my last line of code with
x + y = Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
gives me the error
Couldn't match expected type ‘([Integer] -> [Integer] -> [Integer])
-> Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]’
with actual type ‘Matrix
((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])’
Relevant bindings include
y :: Matrix [[b]] (bound at linear_algebra.hs:9:5)
x :: Matrix [[a]] (bound at linear_algebra.hs:9:1)
(+) :: Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
(bound at linear_algebra.hs:9:1)
The function ‘Matrix’ is applied to four arguments,
but its type ‘((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])
-> Matrix ((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])’
has only one
In the expression:
Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
In an equation for ‘+’:
x + y = Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
Failed, modules loaded: none.
Can you please help me understand what the error is? I would really appreciate it. Thanks!
First of all, the type
(+) :: (Num a, Num b, Num c) => Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
is too general. It states that you can sum any numeric matrix with any other numeric matrix, even if the element types are different, producing a matrix of a third numeric type (potentially distinct from the first two). In particular, a matrix of floats could be summed with a matrix of doubles to produce a matrix of ints.
You want instead
(+) :: Num a => Matrix [[a]] -> Matrix [[a]] -> Matrix [[a]]
I would recommend moving the "list of lists" type inside the newtype
newtype Matrix a = Matrix [[a]]
reflecting that the list of lists implements the Matrix concept. That gives the type signature
(+) :: Num a => Matrix a -> Matrix a -> Matrix a
To "override" (+): there's no overriding/overloading in Haskell. The closest options are:
define a module-local function (+). This will clash with Prelude.(+), so every use of + will need to be qualified to remove the ambiguity: we cannot write x + y, but must write x Prelude.+ y or x MyModuleName.+ y.
implement a Num instance for Matrix a. This is not a great idea since a matrix is not exactly a number, but we can try anyway.
instance Num a => Num (Matrix a) where
    Matrix xs + Matrix ys = Matrix (zipWith (zipWith (+)) xs ys)
    -- other Num operators here
    (*) = error "not implemented" -- we can't match dimensions
    negate (Matrix xs) = Matrix (map (map negate) xs)
    abs = error "not implemented"
    signum = error "not implemented"
    fromInteger = error "not implemented"
This is very similar to your code, which lacks some parentheses. Not all the other methods can be implemented in a completely meaningful way, since Num is for numbers, not matrices.
use a different operator, e.g. (^+) or similar (see the sketch below)
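For instance, here is a minimal sketch of that last option (the operator name (^+) is only an example):
newtype Matrix a = Matrix [[a]] deriving Show  -- the newtype from above, with Show derived so results can be printed

(^+) :: Num a => Matrix a -> Matrix a -> Matrix a
Matrix xs ^+ Matrix ys = Matrix (zipWith (zipWith (+)) xs ys)

-- e.g. Matrix [[1,2],[3,4]] ^+ Matrix [[10,20],[30,40]]
--      gives  Matrix [[11,22],[33,44]]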

Appending nil to a dependently typed length indexed vector in Lean

Assume the following definition:
def app {α : Type} : Π {m n : ℕ}, vector α m → vector α n → vector α (n + m)
| 0 _ [] v := by simp [add_zero]; assumption
| (nat.succ _) _ (h :: t) v' := begin
    apply vector.cons,
    exact h,
    apply app t v'
  end
Do note that the summands in (n + m) are flipped in the definition, so as to avoid plugging add_symm into the definition. Also, remember that + / add is defined by recursion on its right-hand argument in Lean. vector is a hand-rolled nil / cons length-indexed list.
So anyway, first we have a lemma that follows from definition:
theorem nil_app_v {α : Type} : ∀ {n : ℕ} (v : vector α n),
  v = [] ++ v := assume n v, rfl
Now we have a lemma that doesn't follow from definition, as such I use eq.rec to formulate it.
theorem app_nil_v {α : Type} : ∀ {n : ℕ} (v : vector α n),
  v = eq.rec (v ++ []) (zero_add n)
Note that eq.rec is just C y → Π {a : X}, y = a → C a.
The idea of the proof is a straightforward induction on v. The base case follows immediately from the definition; the inductive case should follow immediately from the inductive hypothesis and the definition, but I can't convince Lean of this.
begin
  intros n v,
  induction v,
  -- base case
  refl,
  -- inductive case
end
The inductive hypothesis I get from Lean is a_1 = eq.rec (a_1 ++ vector.nil) (zero_add n_1).
How do I use it with conclusion a :: a_1 = eq.rec (a :: a_1 ++ vector.nil) (zero_add (nat.succ n_1))? I can unfold app to reduce the term a :: a_1 ++ vector.nil to a :: (a_1 ++ vector.nil), and now I am stuck.

Haskell - redundant part in function

I have an intricate Haskell function:
j :: [Int]
j = ((\f x -> map x) (\y -> 3 + y) (\z -> 2*z)) [1,2,3]
My guess was that (\y -> 3 + y) (\z -> 2*z) would turn into (\w -> 3 + 2*w) and that, together with f, this should print out the list [5,7,9].
When I checked with ghci I got [2,4,6] as a result.
Question: So is (\y -> 3 + y) a redundant expression here? If so, why?
I think you've gone wrong somewhere in the design of your function, but I'll explain the mechanics of what is going on here. Read on for the detailed explanation if the quick answer leaves you stuck.
Quick answer: Evaluating the expression
j = ((\f x -> map x) (\y -> 3 + y) (\z -> 2*z)) [1,2,3]
  = ((\x -> map x) (\z -> 2*z)) [1,2,3]   -- f := (\y -> 3 + y)
  = (map (\z -> 2*z)) [1,2,3]             -- x := (\z -> 2*z)
  = [2,4,6]
You may see that when Haskell evaluates (\f x -> map x) (\y -> 3 + y), which it will because function applications are evaluated left-to-right, the substitution f := (\y -> 3 + y) is made, but f doesn't appear anywhere in the function, so it just becomes (\x -> map x).
Detailed answer: Left (and Right) associative operators
In Haskell we say that function application is left associative. This is to say, when we write a function application, it is evaluated like so:
function arg1 arg2 arg3 = ((function arg1) arg2) arg3
No matter what the types of the arguments are. This is called left associative because the brackets are always on the left.
You seem to expect your expression to behave like this:
(\f x -> map x) (\y -> 3 + y) (\z -> 2*z)
= (\f x -> map x) ((\y -> 3 + y) (\z -> 2*z)) -- Incorrect
However, as we've seen, functions associate left, not right as you assumed, and so your expression looks to Haskell like this:
(\f x -> map x) (\y -> 3 + y) (\z -> 2*z)
= ((\f x -> map x) (\y -> 3 + y)) (\z -> 2*z)
= (\x -> map x) (\z -> 2*z) -- Correct!
Notice that the brackets I put in to order the evaluation are on the left, not the right.
However, Haskell has defined a very useful right-associative function application operator, namely ($) :: (a -> b) -> a -> b. You could re-write your expression like so to obtain the result you seemed to expect.
(\f x -> map x) $ (\y -> 3 + y) (\z -> 2*z)
= (\f x -> map x) $ ((\y -> 3 + y) (\z -> 2*z)) -- Correct, though nonsensical.
However, as you'll notice, f still isn't referred to in (\f x -> map x) and so is ignored entirely. I'm not sure exactly what your goal was in making this function.
Further issue: Lambda expressions & composition
I realise that another issue may be your understanding of lambda expressions. Consider this function:
f x = x + 2
We could re-write this as a lambda expression, like so:
f = \x -> x+2
However, what if we have two arguments? This is what we do:
g x y = x + y
g = \x -> (\y -> x+y)
The way Haskell models multiple arguments is called currying. You can see that the function actually takes one argument and returns another function, which takes the next argument and returns the final result. However, this notation is long and cumbersome, so Haskell provides an alternative:
g = \x y -> x + y
This may seem to be different, but in fact it is syntactic sugar for exactly the same expression as before. Now, looking at your first lambda expression:
\f x -> map x = \f -> (\x -> map x)
You can see that the argument f isn't actually referred to at all in the function, so if I apply something to it:
(\f x -> map x) foo
= (\f -> (\x -> map x)) foo
= \x -> map x
That's why your (\y -> 3 + y) is ignored; you haven't actually used it in your functions.
Furthermore, you expect the expression (\y -> 3 + y) (\z -> 2*z) to evaluate to \w -> 3 + 2*w. This is not true. The lambda on the left replaces each occurrence of y with (\z -> 2*z), so the result is the completely nonsensical 3 + (\z -> 2*z). How do you add a function to a number?!
What you're looking for is called composition. We have an operator in Haskell, namely (.) :: (b -> c) -> (a -> b) -> (a -> c) which can help you with this. It takes a function on the left and the right and creates a new function that 'pipes' the functions into each other. That is to say:
(\y -> 3 + y) . (\z -> 2*z) = \w -> 3 + 2*w
Which is what you're looking for.
Conclusion: Corrected expression
I think the expression you're looking for is:
j = ( (\x -> map x) $ (\y -> 3 + y) . (\z -> 2*z) ) [1,2,3]
Which is equivalent to saying:
j = map (\w -> 3 + 2*w) [1,2,3]
Since you seem to be having a lot of trouble with the more basic parts of Haskell, I'd recommend the quintessential beginner's book, Learn You a Haskell for Great Good.
It is because application goes the other way round; that is, you apply (\f x -> map x) to (\y -> 3 + y) first. But
(\f x -> map x) something g
becomes
let f = something; x = g in map x
And this finally is
map g
So something doesn't appear in the resulting expression, since f is not mentioned anywhere to the right of the arrow.
If I understand you correctly, you want
(\f x -> map (f . x))
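As a quick sanity check (a sketch reusing the names from the question), substituting that version into the original expression gives the expected list:
j :: [Int]
j = ((\f x -> map (f . x)) (\y -> 3 + y) (\z -> 2*z)) [1,2,3]
-- evaluates to [5,7,9]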
Additional note: from your reasoning, it looks like you have not yet grasped how beta reduction works.
For example, even if the expressions would be applied like you think:
(\y -> 3 + y) (\z -> 2*z)
the result would be
3 + (\z -> 2*z)
It is doubtful that this makes any sense; however, that is how beta reduction works: it yields the part to the right of the arrow, with every occurrence of the parameter replaced by the actual argument.

Function for defining vector doesn't work in Haskell

I'm a beginner in Haskell (currently learning pattern matching) and I tried to write a simple function for defining a vector:
vector :: (Num a) => a -> a -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show(sqrt(x^2 + y^2)) ++ "."
But I get bunch of errors which I don't understand at all.
helloworld.hs:9:8: error:
• Could not deduce (Eq a) arising from the literal ‘0’
from the context: Num a
bound by the type signature for:
vector :: Num a => a -> a -> String
at helloworld.hs:8:1-37
Possible fix:
add (Eq a) to the context of
the type signature for:
vector :: Num a => a -> a -> String
• In the pattern: 0
In an equation for ‘vector’:
vector 0 0 = "This vector has 0 magnitude."
helloworld.hs:10:51: error:
• Could not deduce (Show a) arising from a use of ‘show’
from the context: Num a
bound by the type signature for:
vector :: Num a => a -> a -> String
at helloworld.hs:8:1-37
Possible fix:
add (Show a) to the context of
the type signature for:
vector :: Num a => a -> a -> String
• In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
In the expression:
"This vector has a magnitude of "
++ show (sqrt (x ^ 2 + y ^ 2)) ++ "."
helloworld.hs:10:56: error:
• Could not deduce (Floating a) arising from a use of ‘sqrt’
from the context: Num a
bound by the type signature for:
vector :: Num a => a -> a -> String
at helloworld.hs:8:1-37
Possible fix:
add (Floating a) to the context of
the type signature for:
vector :: Num a => a -> a -> String
• In the first argument of ‘show’, namely ‘(sqrt (x ^ 2 + y ^ 2))’
In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
Failed, modules loaded: none.
Prelude> :load helloworld
[1 of 1] Compiling Main ( helloworld.hs, interpreted )
helloworld.hs:10:51: error:
• Could not deduce (Show a) arising from a use of ‘show’
from the context: Integral a
bound by the type signature for:
vector :: Integral a => a -> a -> String
at helloworld.hs:8:1-42
Possible fix:
add (Show a) to the context of
the type signature for:
vector :: Integral a => a -> a -> String
• In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
In the expression:
"This vector has a magnitude of "
++ show (sqrt (x ^ 2 + y ^ 2)) ++ "."
helloworld.hs:10:56: error:
• Could not deduce (Floating a) arising from a use of ‘sqrt’
from the context: Integral a
bound by the type signature for:
vector :: Integral a => a -> a -> String
at helloworld.hs:8:1-42
Possible fix:
add (Floating a) to the context of
the type signature for:
vector :: Integral a => a -> a -> String
• In the first argument of ‘show’, namely ‘(sqrt (x ^ 2 + y ^ 2))’
In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
Failed, modules loaded: none.
Can somebody explain me how to write this function properly, at least what's wrong with vector 0 0?
The first type error is because you're pattern matching on the literal 0. You've only required Num a where you needed (Num a, Eq a) in order for this to be possible.
The second type error is because you've tried to use show on a computation involving your a. So now, you need (Num a, Eq a, Show a).
The third, because you've used sqrt, which does not reside in Num but in Floating, so now it's (Num a, Eq a, Show a, Floating a).
Alternatively, you could have just removed the type signature entirely and asked ghci for the type:
λ> :t vector
vector :: (Show a, Floating a, Eq a) => a -> a -> [Char]
Note that Floating implies Num.
The error messages tell you everything you need to know.
Could not deduce (Eq a)
When you use a pattern-match on a numeric literal, it needs the constraints (Eq a, Num a) because Haskell uses == internally to do matching on literals.
Could not deduce (Show a)
When you use show, you need the constraint (Show a), because not all types are showable by default.
Could not deduce (Floating a)
When you use fractional operations such as sqrt, you need the constraint (Floating a), because not all numeric types support these operations.
Putting it all together:
vector :: (Eq a, Num a, Show a, Floating a) => a -> a -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show(sqrt(x^2 + y^2)) ++ "."
You could also leave off the type signature, then ask ghci:
> let vector 0 0 = "This vector has 0 magnitude."; vector x y = "This vector has a magnitude of " ++ show(sqrt(x^2 + y^2)) ++ "."
> :t vector
vector :: (Eq a, Floating a, Show a) => a -> a -> [Char]
And here you can see that you don’t need the extra Num a constraint, because Floating is a subclass of Num, by way of Fractional. You can get information on these classes in ghci as well:
> :info Floating
class Fractional a => Floating a where
  pi :: a
  exp :: a -> a
  log :: a -> a
  sqrt :: a -> a
  ...
  -- Defined in ‘GHC.Float’
instance Floating Float -- Defined in ‘GHC.Float’
instance Floating Double -- Defined in ‘GHC.Float’
When writing very generic code like this, you generally have to specify all the constraints you use. But you could also give this function a simpler non-generic type such as Double -> Double -> String.
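For instance, a non-generic version might look like this (a minimal sketch that simply fixes the type to Double and so needs no class constraints):
vector :: Double -> Double -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show (sqrt (x^2 + y^2)) ++ "."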

Haskell basic function definition problem

I'm learning Haskell and I'm trying to write a function to return a list of factors for a number. Here's what I have:
factors :: Int -> [Int]
factors n = [x | x <- [2..s], n `mod` x == 0]
    where s = floor (sqrt n)
When I try to load the module in ghci, I get two errors,
p003.hs:3:14:
No instance for (RealFrac Int)
arising from a use of `floor' at p003.hs:3:14-27
Possible fix: add an instance declaration for (RealFrac Int)
In the expression: floor (sqrt n)
In the definition of `s': s = floor (sqrt n)
In the definition of `factors':
factors n = [x | x <- [2 .. s], n `mod` x == 0]
where
s = floor (sqrt n)
p003.hs:3:21:
No instance for (Floating Int)
arising from a use of `sqrt' at p003.hs:3:21-26
Possible fix: add an instance declaration for (Floating Int)
In the first argument of `floor', namely `(sqrt n)'
In the expression: floor (sqrt n)
In the definition of `s': s = floor (sqrt n)
Failed, modules loaded: none.
Any suggestions?
The parameter has type Int, so you cannot calculate a square root of it directly. You need to convert it to a floating-point type first, which you can do with fromIntegral. Unlike some other languages, Haskell does not automatically promote integers to floating-point numbers (nor does it perform any other automatic type conversions).
So change sqrt n to sqrt (fromIntegral n).
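A minimal sketch of that change applied to the original definition:
factors :: Int -> [Int]
factors n = [x | x <- [2..s], n `mod` x == 0]
    where s = floor (sqrt (fromIntegral n))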
The cause of the problem
The type of the sqrt function is
sqrt :: (Floating a) => a -> a
You can check this by typing :t sqrt in ghci.
Int is not an instance of Floating, which is why you're seeing the second error.
The cause of the first error is the same; checking :t floor reveals that the type is:
floor :: (RealFrac a, Integral b) => a -> b
The function is expecting an instance of RealFrac, and you're supplying an Int.
Typing :info RealFrac or :info Floating reveals that neither has an instance for Int, which is why the body of the error says
No instance for ... Int
The solution
The solution to this problem is to make sure that the types are correct; they must be members of the proper type classes.
A simple way to do this is to use the fromIntegral function, which :t reveals is of type:
fromIntegral :: (Integral a, Num b) => a -> b
Using fromIntegral is necessary because the incoming type is Int, but the functions floor and sqrt operate on types RealFrac and Floating, respectively.
It's allowed because, as you can see from the type signature, fromIntegral can return a value of any Num instance, and the RealFrac and Floating classes required by floor and sqrt both have Num as a superclass, so a type such as Double satisfies all of the required constraints. You can convince yourself of this by typing :info Num and :info Float into ghci and viewing the output.
Making this change to your program gives the final result below, which should work as you want:
factors :: Int -> [Int]
factors n = [x | x <- [2..s], n `mod` x == 0]
    where s = floor (sqrt $ fromIntegral n)
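For a quick check, here is a hypothetical GHCi session (the input value is just an example):
> factors 36
[2,3,4,6]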
Further reading
Two good resources for understanding exactly what's going on are the Haskell tutorial's sections on Type Classes and Numbers.