I'm a beginner in Haskell (currently learning pattern matching) and I tried to write a simple function for defining a vector:
vector :: (Num a) => a -> a -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show(sqrt(x^2 + y^2)) ++ "."
But I get a bunch of errors which I don't understand at all.
helloworld.hs:9:8: error:
• Could not deduce (Eq a) arising from the literal ‘0’
from the context: Num a
bound by the type signature for:
vector :: Num a => a -> a -> String
at helloworld.hs:8:1-37
Possible fix:
add (Eq a) to the context of
the type signature for:
vector :: Num a => a -> a -> String
• In the pattern: 0
In an equation for ‘vector’:
vector 0 0 = "This vector has 0 magnitude."
helloworld.hs:10:51: error:
• Could not deduce (Show a) arising from a use of ‘show’
from the context: Num a
bound by the type signature for:
vector :: Num a => a -> a -> String
at helloworld.hs:8:1-37
Possible fix:
add (Show a) to the context of
the type signature for:
vector :: Num a => a -> a -> String
• In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
In the expression:
"This vector has a magnitude of "
++ show (sqrt (x ^ 2 + y ^ 2)) ++ "."
helloworld.hs:10:56: error:
• Could not deduce (Floating a) arising from a use of ‘sqrt’
from the context: Num a
bound by the type signature for:
vector :: Num a => a -> a -> String
at helloworld.hs:8:1-37
Possible fix:
add (Floating a) to the context of
the type signature for:
vector :: Num a => a -> a -> String
• In the first argument of ‘show’, namely ‘(sqrt (x ^ 2 + y ^ 2))’
In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
Failed, modules loaded: none.
After changing the signature to vector :: Integral a => a -> a -> String, I get a similar set of errors:
Prelude> :load helloworld
[1 of 1] Compiling Main ( helloworld.hs, interpreted )
helloworld.hs:10:51: error:
• Could not deduce (Show a) arising from a use of ‘show’
from the context: Integral a
bound by the type signature for:
vector :: Integral a => a -> a -> String
at helloworld.hs:8:1-42
Possible fix:
add (Show a) to the context of
the type signature for:
vector :: Integral a => a -> a -> String
• In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
In the expression:
"This vector has a magnitude of "
++ show (sqrt (x ^ 2 + y ^ 2)) ++ "."
helloworld.hs:10:56: error:
• Could not deduce (Floating a) arising from a use of ‘sqrt’
from the context: Integral a
bound by the type signature for:
vector :: Integral a => a -> a -> String
at helloworld.hs:8:1-42
Possible fix:
add (Floating a) to the context of
the type signature for:
vector :: Integral a => a -> a -> String
• In the first argument of ‘show’, namely ‘(sqrt (x ^ 2 + y ^ 2))’
In the first argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2))’
In the second argument of ‘(++)’, namely
‘show (sqrt (x ^ 2 + y ^ 2)) ++ "."’
Failed, modules loaded: none.
Can somebody explain to me how to write this function properly, and at least what's wrong with vector 0 0?
The first type error is because you're pattern matching on the literal 0. You've only required Num a where you needed (Num a, Eq a) in order for this to be possible.
The second type error is because you've tried to use show on a computation involving your a. So now, you need (Num a, Eq a, Show a).
The third, because you've used sqrt, which does not reside in Num but in Floating, so now it's (Num a, Eq a, Show a, Floating a).
Alternatively, you could have just removed the type signature entirely and prompted ghci for the type:
λ> :t vector
vector :: (Show a, Floating a, Eq a) => a -> a -> [Char]
Note that Floating implies Num.
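Putting those inferred constraints into an explicit signature, a minimal sketch of the fixed function (the body is unchanged):
-- Eq a is needed to pattern match on the literal 0, Show a for show,
-- and Floating a for sqrt (Floating already implies Num, so Num can be dropped).
vector :: (Eq a, Show a, Floating a) => a -> a -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show (sqrt (x ^ 2 + y ^ 2)) ++ "."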
The error messages tell you everything you need to know.
Could not deduce (Eq a)
When you use a pattern-match on a numeric literal, it needs the constraints (Eq a, Num a) because Haskell uses == internally to do matching on literals.
Could not deduce (Show a)
When you use show, you need the constraint (Show a), because not all types are showable by default.
Could not deduce (Floating a)
When you use fractional operations such as sqrt, you need the constraint (Floating a), because not all numeric types support these operations.
Putting it all together:
vector :: (Eq a, Num a, Show a, Floating a) => a -> a -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show(sqrt(x^2 + y^2)) ++ "."
You could also leave off the type signature, then ask ghci:
> let vector 0 0 = "This vector has 0 magnitude."; vector x y = "This vector has a magnitude of " ++ show(sqrt(x^2 + y^2)) ++ "."
> :t vector
vector :: (Eq a, Floating a, Show a) => a -> a -> [Char]
And here you can see that you don’t need the extra Num a constraint, because Floating is a subclass of Num, by way of Fractional. You can get information on these classes in ghci as well:
> :info Floating
class Fractional a => Floating a where
  pi :: a
  exp :: a -> a
  log :: a -> a
  sqrt :: a -> a
  ...
  -- Defined in ‘GHC.Float’
instance Floating Float -- Defined in ‘GHC.Float’
instance Floating Double -- Defined in ‘GHC.Float’
When writing very generic code like this, you generally have to specify all the constraints you use. But you could also give this function a simpler non-generic type such as Double -> Double -> String.
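For instance, a non-generic sketch pinned to Double needs no constraints at all, because Double already has Eq, Show, and Floating instances:
vector :: Double -> Double -> String
vector 0 0 = "This vector has 0 magnitude."
vector x y = "This vector has a magnitude of " ++ show (sqrt (x ^ 2 + y ^ 2)) ++ "."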
Related
I'm learning Haskell and have some problems with list comprehensions.
If I define a function to get a list of the divisors of a given number, I get an error.
check n = [x | x <- [1..(floor (n/2))], mod n x == 0]
I don't get why it's causing an error. If I want to generate a list from 1 to n/2 I can do it with [1..(floor (n/2))], but not if I do it in the list comprehension.
I tried another way, but I also get an error (in this code I want to get all so-called "perfect numbers"):
f n = [1..(floor (n/2))]
main = print $ filter (\t -> foldr (+) 0 (f t) == t) [2..100]
Usually it is better to start by writing a signature. While signatures are often not required, they make it easier to debug a single function.
The signature of your check function is:
check :: (RealFrac a, Integral a) => a -> [a]
The type of the input (and output) a thus needs to be both a RealFrac and an Integral. While technically speaking we can make such a type, it does not make much sense.
The reason this happens is the use of mod :: Integral a => a -> a -> a: it requires x and n to be of the same type, and that type needs to be a member of the Integral typeclass.
Another problem is the use of n/2: since (/) :: Fractional a => a -> a -> a requires that n, 2, and n / 2 all have the same type, n also needs a type that is a member of Fractional. To make matters even worse, we use floor :: (RealFrac a, Integral b) => a -> b, which forces n (and thus x as well, through mod) to have a type that is a member of the RealFrac typeclass.
We can avoid the Fractional and RealFrac constraints by using div :: Integral a => a -> a -> a instead. Since mod already requires n to have a type that is a member of the Integral typeclass, this does not restrict the types any further:
check n = [x | x <- [1 .. div n 2], mod n x == 0]
For example, this prints:
Prelude> print (check 5)
[1]
Prelude> print (check 17)
[1]
Prelude> print (check 18)
[1,2,3,6,9]
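With the fixed check, the "perfect numbers" part of the question also works. A minimal sketch, reusing check as the proper-divisor helper (a number is perfect when the sum of its proper divisors equals the number itself):
check :: Integral a => a -> [a]
check n = [x | x <- [1 .. div n 2], mod n x == 0]

main :: IO ()
main = print (filter (\t -> sum (check t) == t) [2 .. 100])  -- prints [6,28]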
I want to write a function that takes a mathematical function (/, *, +, -), a number to start with, and a list of numbers. It's supposed to give back a list.
The first element is the starting number; the second element is the starting number plus/minus/times/divided by the first number of the given list; the third element is that result plus/minus/times/divided by the second number of the given list, and so on.
I've gotten everything to work if I hard-code which function to use, but if I want to let the user pass in the mathematical function, there are problems with the types. Trying :t (/), for example, gives Fractional a => a -> a -> a, but if you put that at the start of your type signature, it fails.
Is there a specific type to distinguish these functions (/, *, +, -)? Or is there another way to write this function successfully?
prefix :: (Fractional a, Num a) => a -> a -> a -> a -> [a] -> [a]
prefix (f) a b = [a] ++ prefix' (f) a b
prefix' :: (Fractional a, Num a) => a -> a -> a -> a -> [a] -> [a]
prefix' (z) x [] = []
prefix' (z) x y = [x z (head y)] ++ prefix' (z) (head (prefix' (z) x y)) (tail y)
A correct solution should behave something like this:
prefix (-) 0 [1..5]
[0,-1,-3,-6,-10,-15]
Is there a specific type to distinguish these functions (/,*,+,-)?
I don't see a reason to do this. Why would \x y -> x + y be considered "better" than \x y -> x + y + 1? Sure, adding two numbers is something most will consider more "pure", but it is strange to restrict yourself to a specific subset of functions. It is also possible that some function \x y -> f x y - 1 just "happens" to be equal to (+), even though the compiler can not determine that.
The type checker will already make sure that one can not pass functions that operate on numbers when the list contains strings, etc. Deliberately restricting this further is not very useful. Why would you prevent programmers from using your function for other purposes?
Or is there another way to write this function successfully?
What you describe here is the scanl :: (b -> a -> b) -> b -> [a] -> [b] function. If we call scanl f z [x1, x2, ..., xn], we obtain the list [z, f z x1, f (f z x1) x2, ...]. scanl can be defined as:
scanl :: (b -> a -> b) -> b -> [a] -> [b]
scanl f = go
  where go z []     = [z]
        go z (x:xs) = z : go (f z x) xs
We thus first emit the accumulator (that starts with the initial value), and then "update" the accumulator to f z x with z the old accumulator, and x the head of the list, and recurse on the tail of the list.
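For example, calling scanl with (-), a starting value of 0 and the list [1..5] in ghci gives exactly the kind of list described:
> scanl (-) 0 [1..5]
[0,-1,-3,-6,-10,-15]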
If you want to restrict to these four operations, just define the type yourself:
data ArithOp = Plus | Minus | Times | Div

as_fun :: Fractional a => ArithOp -> a -> a -> a
as_fun Plus  = (+)
as_fun Minus = (-)
as_fun Times = (*)
as_fun Div   = (/)
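Tying the two pieces together, a minimal sketch of the prefix function from the question (the name prefix is the question's; ArithOp and as_fun are defined above) can simply delegate to scanl:
prefix :: Fractional a => ArithOp -> a -> [a] -> [a]
prefix op = scanl (as_fun op)

-- for example: prefix Minus 0 [1..5]  ==  [0.0,-1.0,-3.0,-6.0,-10.0,-15.0]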
I am trying to override the + symbol in an effort to learn how to define my own types. I am very new to Haskell, and I cannot seem to get past this error.
Here is my simple new type:
newtype Matrix x = Matrix x
(+):: (Num a, Num b, Num c) => Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
x + y = Matrix zipWith (\ a b -> zipWith (+) a b) x y
When I try to load this into ghci, I get the error
linear_algebra.hs:9:42:
Ambiguous occurrence ‘+’
It could refer to either ‘Main.+’, defined at linear_algebra.hs:9:3
or ‘Prelude.+’,
imported from ‘Prelude’ at linear_algebra.hs:1:1
(and originally defined in ‘GHC.Num’)
Failed, modules loaded: none.
Replacing my last line of code with
x + y = Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
gives me the error
Couldn't match expected type ‘([Integer] -> [Integer] -> [Integer])
-> Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]’
with actual type ‘Matrix
((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])’
Relevant bindings include
y :: Matrix [[b]] (bound at linear_algebra.hs:9:5)
x :: Matrix [[a]] (bound at linear_algebra.hs:9:1)
(+) :: Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
(bound at linear_algebra.hs:9:1)
The function ‘Matrix’ is applied to four arguments,
but its type ‘((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])
-> Matrix ((a0 -> b0 -> c0) -> [a0] -> [b0] -> [c0])’
has only one
In the expression:
Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
In an equation for ‘+’:
x + y = Matrix zipWith (\ a b -> zipWith (Prelude.+) a b) x y
Failed, modules loaded: none.
Can you please help me understand what the error is? I would really appreciate it. Thanks!
First of all, the type
(+) :: (Num a, Num b, Num c) => Matrix [[a]] -> Matrix [[b]] -> Matrix [[c]]
is too general. It states that you can add any numeric matrix to any other numeric matrix, even if the element types are different, and produce a matrix of a third numeric type (potentially distinct from the first two). In particular, a matrix of Floats could be added to a matrix of Doubles to produce a matrix of Ints.
You want instead
(+) :: Num a => Matrix [[a]] -> Matrix [[a]] -> Matrix [[a]]
I would recommend moving the "list of lists" type inside the newtype:
newtype Matrix a = Matrix [[a]]
reflecting that the list of lists implements the Matrix concept. That gives the type signature
(+) :: Num a => Matrix a -> Matrix a -> Matrix a
To "override" (+): there's no overriding/overloading in Haskell. The closest options are:
define a module-local function (+). This will clash with Prelude.(+), so that every + will now need to be qualified to remove the ambiguity. We can not write x + y, but we need x Prelude.+ y or x MyModuleName.+ y.
implement a Num instance for Matrix a. This is not a great idea since a matrix is not exactly a number, but we can try anyway.
instance Num a => Num (Matrix a) where
  Matrix xs + Matrix ys = Matrix (zipWith (zipWith (+)) xs ys)
  -- other Num operators here
  (*) = error "not implemented" -- We can't match dimension
  negate (Matrix xs) = Matrix (map (map negate) xs)
  abs = error "not implemented"
  signum = error "not implemented"
  fromInteger = error "not implemented"
This is very similar to your code, which lacks some parentheses. Not all the other methods can be implemented in a completely meaningful way, since Num is for numbers, not matrices.
use a different operator, e.g. (^+) or whatever
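With the recommended newtype Matrix a = Matrix [[a]] and the Num instance above in scope (plus a deriving Show on the newtype so the result can be printed), a quick ghci check might look like this:
> Matrix [[1,2],[3,4]] + Matrix [[5,6],[7,8]]
Matrix [[6,8],[10,12]]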
Beginning to learn Haskell:
*Main> map double [1,2,3]
[2,4,6]
*Main> sum (map double [1,2,3])
12
*Main> (sum . map) (double) ([1,2,3])
<interactive>:71:8:
Couldn't match type ‘[b0] -> [b0]’ with ‘[[t0] -> t]’
Expected type: (b0 -> b0) -> [[t0] -> t]
Actual type: (b0 -> b0) -> [b0] -> [b0]
Relevant bindings include it :: t (bound at <interactive>:71:1)
Probable cause: ‘map’ is applied to too few arguments
In the second argument of ‘(.)’, namely ‘map’
In the expression: sum . map
According to this answer: Haskell: difference between . (dot) and $ (dollar sign) "The primary purpose of the . operator is not to avoid parenthesis, but to chain functions. It lets you tie the output of whatever appears on the right to the input of whatever appears on the left.".
OK, so why doesn't my example work? The actual and expected types are different, but why? After all, according to this description, shouldn't map take (double) ([1,2,3]) as input and pass its output to sum's input?
The reason is that . only lets the function on the right take one argument before its result is passed to the function on the left. So:
(sum . map) double [1,2,3]
Will become
(sum (map double)) [1,2,3]
...and we can't sum a function, can we? Type error galore!
What you'll want to do is this:
(sum . map double) [1,2,3]
which reduces to:
sum (map double [1,2,3])
If you want to see, here's how . is defined:
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(.) f g arg = f (g arg)
If you're being really smart, you can doubly compose something, so that it takes two arguments before it's passed along:
((sum .) . map) double [1,2,3]
which reduces to:
(sum . map double) [1,2,3]
and finally:
sum (map double [1,2,3])
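As a runnable sketch of the whole thing (the question never shows double, so a plain numeric doubling function is assumed here):
double :: Num a => a -> a
double x = 2 * x        -- assumed definition; not shown in the question

main :: IO ()
main = do
  print ((sum . map double) [1,2,3])      -- 12: compose sum with (map double)
  print (((sum .) . map) double [1,2,3])  -- 12: the doubly composed version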
I'm learning Haskell and I'm trying to write a function to return a list of factors for a number. Here's what I have:
factors :: Int -> [Int]
factors n = [x | x <- [2..s], n `mod` x == 0]
  where s = floor (sqrt n)
When I try to load the module in ghci, I get two errors,
p003.hs:3:14:
No instance for (RealFrac Int)
arising from a use of `floor' at p003.hs:3:14-27
Possible fix: add an instance declaration for (RealFrac Int)
In the expression: floor (sqrt n)
In the definition of `s': s = floor (sqrt n)
In the definition of `factors':
factors n = [x | x <- [2 .. s], n `mod` x == 0]
where
s = floor (sqrt n)
p003.hs:3:21:
No instance for (Floating Int)
arising from a use of `sqrt' at p003.hs:3:21-26
Possible fix: add an instance declaration for (Floating Int)
In the first argument of `floor', namely `(sqrt n)'
In the expression: floor (sqrt n)
In the definition of `s': s = floor (sqrt n)
Failed, modules loaded: none.
Any suggestions?
The parameter has type Int, so you cannot calculate a square root of it directly. You need to convert it to a floating point type first, which you can do with fromIntegral. Unlike some other languages, Haskell does not automatically promote integers to floating point numbers (nor does it perform any other implicit type conversions).
So change sqrt n to sqrt (fromIntegral n).
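Applying that single change, the function might look like this sketch (everything else is unchanged from the question):
factors :: Int -> [Int]
factors n = [x | x <- [2..s], n `mod` x == 0]
  where s = floor (sqrt (fromIntegral n))  -- convert the Int before taking the square root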
The cause of the problem
The type of the sqrt function is
sqrt :: (Floating a) => a -> a
You can check this by typing :t sqrt in ghci.
Int is not an instance of Floating, which is why you're seeing the second error.
The cause of the first error is the same; checking :t floor reveals that the type is:
floor :: (RealFrac a, Integral b) => a -> b
The function is expecting an instance of RealFrac, and you're supplying an Int.
Typing :info RealFrac or :info Floating reveals that neither has an instance for Int, which is why the body of the error says
No instance for ... Int
The solution
The solution to this problem is to make sure that the types are correct: they must be members of the proper type classes.
A simple way to do this is to use the fromIntegral function, which :t reveals is of type:
fromIntegral :: (Integral a, Num b) => a -> b
Using fromIntegral is necessary because the incoming type is Int, but the functions floor and sqrt operate on types RealFrac and Floating, respectively.
It works because, as you can see from the type signature, fromIntegral can produce a value of any type in Num, and the RealFrac and Floating types (such as Float and Double) are all members of Num, so the result can be used where floor and sqrt need it. You can convince yourself of this by typing :info Num and :info Float into ghci and viewing the output.
Making this change to your program gives the final result below, which should work as you want:
factors :: Int -> [Int]
factors n = [x | x <- [2..s], n `mod` x == 0]
  where s = floor (sqrt $ fromIntegral n)
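Once it loads, a quick ghci check of the fixed function might look like this:
> factors 12
[2,3]
> factors 100
[2,4,5,10]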
Further reading
Two good resources for understanding exactly what's going on are the Haskell tutorial's sections on Type Classes and Numbers.