Following this approach, I'm trying to model functional programs using effect handlers in Coq, based on an implementation in Haskell. There are two approaches presented in the paper:
Effect syntax is represented as a functor and combined with the free monad.
data Prog sig a = Return a | Op (sig (Prog sig a))
Because Coq's positivity checker rejects non-strictly-positive definitions, this data type can't be defined directly. However, containers can be used to represent strictly-positive functors, as described in this paper. This approach works, but the scoped effect I need to model requires explicit scoping syntax, so mismatched begin/end tags are possible. For reasoning about programs, this is not ideal.
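For concreteness, here is the direct translation and the rejection it runs into (the Fail vernacular just records that Coq refuses the command):
(* Prog occurs as an argument of the abstract functor sig, which is a
   non-strictly-positive occurrence, so the definition is rejected. *)
Fail Inductive Prog (sig : Type -> Type) (A : Type) : Type :=
| Return : A -> Prog sig A
| Op : sig (Prog sig A) -> Prog sig A.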
The second approach uses higher-order functors, i.e. the following data type.
data Prog sig a = Return a | Op (sig (Prog sig) a)
Now sig has kind (* -> *) -> * -> *. The data type can't be defined in Coq for the same reasons as before. I'm looking for ways to model this data type so that I can implement scoped effects without explicit scoping tags.
My attempts at defining a container representation for higher-order functors have not been fruitful, and I can't find anything about this topic. I would be thankful for pointers in the right direction and helpful comments.
Edit: One example of scoped syntax from the paper that I would like to represent is the following data type for exceptions.
data HExc e m a = Throw′ e | forall x. Catch′ (m x) (e -> m x) (x -> m a)
Edit2: I have merged the suggested idea with my approach.
Inductive Ext Shape (Pos : Shape -> Type -> Type -> Type) (F : Type -> Type) A :=
ext : forall s, (forall X, Pos s A X -> F X) -> Ext Shape Pos F A.
Class HContainer (H : (Type -> Type) -> Type -> Type):=
{
Shape : Type;
Pos : Shape -> Type -> Type -> Type;
to : forall M A, Ext Shape Pos M A -> H M A;
from : forall M A, H M A -> Ext Shape Pos M A;
to_from : forall M A (fx : H M A), @to M A (@from M A fx) = fx;
from_to : forall M A (e : Ext Shape Pos M A), @from M A (@to M A e) = e
}.
Section Free.
Variable H : (Type -> Type) -> Type -> Type.
Inductive Free (HC__F : HContainer H) A :=
| pure : A -> Free HC__F A
| impure : Ext Shape Pos (Free HC__F) A -> Free HC__F A.
End Free.
The code can be found here. The Lambda Calculus example works and I can prove that the container representation is isomorphic to the data type.
I have tried to do the same for a simplified version of the exception handler data type, but it does not fit the container representation.
Defining a smart constructor does not work either. In Haskell, the constructor works by applying Catch' to a program in which an exception may occur and to a continuation, which is initially empty (just return).
catch :: (HExc <: sig) => Prog sig a -> Prog sig a
catch p = inject (Catch' p return)
The main issue I see in the Coq implementation is that the shape needs to be parameterized over a functor, which leads to all sorts of problems.
This answer gives more intuition about how to derive containers from functors than my previous one. I'm taking quite a different angle, so I'm making a new answer instead of revising the old one.
Simple recursive types
Let's consider a simple recursive type first to understand non-parametric containers, and for comparison with the parameterized generalization. Lambda calculus, without caring about scopes, is given by the following functor:
Inductive LC_F (t : Type) : Type :=
| App : t -> t -> LC_F t
| Lam : t -> LC_F t
.
There are two pieces of information we can learn from this type:
The shape tells us about the constructors (App, Lam), and potentially also auxiliary data not relevant to the recursive nature of the syntax (none here). There are two constructors, so the shape has two values: Shape := App_S | Lam_S. (bool also works, but declaring shapes as standalone inductive types is cheap, and named constructors double as documentation.)
For every shape (i.e., constructor), the position tells us about recursive occurrences of syntax in that constructor. App contains two subterms, hence we can define their two positions as booleans; Lam contains one subterm, hence its position is a unit. One could also make Pos (s : Shape) an indexed inductive type, but that is a pain to program with (just try).
(* Lambda calculus *)
Inductive ShapeLC :=
| App_S (* The shape App _ _ *)
| Lam_S (* The shape Lam _ *)
.
Definition PosLC s :=
match s with
| App_S => bool
| Lam_S => unit
end.
Parameterized recursive types
Now, properly scoped lambda calculus:
Inductive LC_F (f : Type -> Type) (a : Type) : Type :=
| App : f a -> f a -> LC_F f a
| Lam : f (unit + a) -> LC_F f a
.
In this case, we can still reuse the Shape and Pos data from before. But this functor encodes one more piece of information: how each position changes the type parameter a. I call this parameter the context (Ctx).
Definition CtxLC (s : ShapeLC) : PosLC s -> Type -> Type :=
match s with
| App_S => fun _ a => a (* subterms of App reuse the same context *)
| Lam_S => fun _ a => unit + a (* Lam introduces one variable in the context of its subterm *)
end.
This container (ShapeLC, PosLC, CtxLC) is related to the functor LC_F by an isomorphism: between the sigma { s : ShapeLC & forall p : PosLC s, f (CtxLC s p a) } and LC_F f a. In particular, note how the function y : forall p, f (CtxLC s p a) tells you exactly how to fill the shape s = App_S or s = Lam_S to construct a value App (y true) (y false) : LC_F f a or Lam (y tt) : LC_F f a.
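To make the isomorphism concrete, here is a sketch of its two directions, assuming the definitions of LC_F, ShapeLC, PosLC and CtxLC above (with LC_F's parameters left explicit, Coq's default); the names toLC/fromLC are mine:
(* From a shape and a filling of its positions to a syntax node. *)
Definition toLC {f : Type -> Type} {a : Type}
    (s : ShapeLC) (y : forall p : PosLC s, f (CtxLC s p a)) : LC_F f a :=
  match s return (forall p : PosLC s, f (CtxLC s p a)) -> LC_F f a with
  | App_S => fun y => App f a (y true) (y false)  (* both positions of App *)
  | Lam_S => fun y => Lam f a (y tt)              (* the single position of Lam *)
  end y.
(* From a syntax node back to a shape together with its filled positions. *)
Definition fromLC {f : Type -> Type} {a : Type} (t : LC_F f a)
  : { s : ShapeLC & forall p : PosLC s, f (CtxLC s p a) } :=
  match t with
  | App _ _ l r => existT _ App_S (fun p : bool => if p then l else r)
  | Lam _ _ b => existT _ Lam_S (fun _ => b)
  end.
toLC takes the sigma's two components as separate arguments only to keep the dependent match simple; packaging them back into the sigma gives exactly the isomorphism stated above.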
My previous representation encoded Ctx in some additional type indices of Pos. The representations are equivalent, but this one here looks tidier.
Exception handler calculus
We'll consider just the Catch constructor. It has four fields: a type X, the main computation (which returns an X), an exception handler (which also recovers an X), and a continuation (consuming the X).
Inductive Exc_F (E : Type) (F : Type -> Type) (A : Type) :=
| ccatch : forall X, F X -> (E -> F X) -> (X -> F A) -> Exc_F E F A.
The shape has a single constructor, but it must include X. Essentially, look at all the fields (possibly unfolding nested inductive types) and keep all the data that doesn't mention F; that's your shape.
Inductive ShapeExc :=
| ccatch_S (X : Type) (* The shape ccatch X _ (fun e => _) (fun x => _) *)
.
(* equivalently, Definition ShapeExc := Type. *)
The position type lists all the ways to get an F out of an Exc_F of the corresponding shape. In particular, a position contains the arguments to apply functions with, and possibly any data needed to resolve branching of any other sort. Here, you need to know the exception type to store exceptions for the handler.
Inductive PosExc (E : Type) (s : ShapeExc) : Type :=
| main_pos (* F X *)
| handle_pos (e : E) (* E -> F X *)
| continue_pos (x : getX s) (* X -> F A *)
.
(* The function getX takes the type X contained in a ShapeExc value, by pattern-matching: getX (ccatch_S X) := X. *)
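For completeness, one possible definition matching that comment (it has to appear before PosExc so that continue_pos can refer to it; this is my sketch, the linked gist may differ):
Definition getX (s : ShapeExc) : Type :=
  match s with
  | ccatch_S X => X
  end.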
Finally, for each position, you need to decide how the context changes, i.e., whether you're now computing an X or an A:
Definition Ctx (E : Type) (s : ShapeExc) (p : PosExc E s) : Type -> Type :=
match p with
| main_pos | handle_pos _ => fun _ => getX s
| continue_pos _ => fun A => A
end.
Using the conventions from your code, you can then encode the Catch constructor as follows:
Definition Catch' {E X A}
(m : Free (C__Exc E) X)
(h : E -> Free (C__Exc E) X)
(k : X -> Free (C__Exc E) A) : Free (C__Exc E) A :=
impure (@ext (C__Exc E) (Free (C__Exc E)) A (ccatch_S X) (fun p =>
match p with
| main_pos => m
| handle_pos e => h e
| continue_pos x => k x
end)).
(* I had problems with type inference for some reason, hence @ext is explicitly applied *)
Full gist: https://gist.github.com/Lysxia/6e7fb880c14207eda5fc6a5c06ef3522
The main trick in the "first-order" free monad encoding is to encode a functor F : Type -> Type as a container, which is essentially a dependent pair { Shape : Type ; Pos : Shape -> Type }, so that, for all a, the type F a is isomorphic to the sigma type { s : Shape & Pos s -> a }.
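As an aside, here is a minimal sketch of that first-order encoding in Coq; the names Container, Ext, FreeC are mine:
(* A container: a type of shapes and, for each shape, a type of positions. *)
Record Container := {
  Shape : Type;
  Pos : Shape -> Type
}.
(* Its extension: for every a, Ext C a is the sigma type described above. *)
Definition Ext (C : Container) (a : Type) : Type :=
  { s : Shape C & Pos C s -> a }.
(* The free monad over the container, with the sigma unfolded so that the
   positivity checker accepts it. *)
Inductive FreeC (C : Container) (a : Type) : Type :=
| RetC : a -> FreeC C a
| OpC : forall s : Shape C, (Pos C s -> FreeC C a) -> FreeC C a.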
Taking this idea further, we can encode a higher-order functor F : (Type -> Type) -> (Type -> Type) as a container { Shape : Type & Pos : Shape -> Type -> (Type -> Type) }, so that, for all f and a, F f a is isomorphic to { s : Shape & forall x : Type, Pos s a x -> f x }.
I don't quite understand what the extra Type parameter in Pos is doing there, but It Works™, at least to the point that you can construct some lambda calculus terms in the resulting type.
For example, the lambda calculus syntax functor:
Inductive LC_F (f : Type -> Type) (a : Type) : Type :=
| App : f a -> f a -> LC_F f a
| Lam : f (unit + a) -> LC_F f a
.
is represented by the container (Shape, Pos) defined as:
(* LC container *)
Shape : Type := bool; (* Two values in bool = two constructors in LC_F *)
Pos (b : bool) : Type -> (Type -> Type) :=
match b with
| true => App_F
| false => Lam_F
end;
where App_F and Lam_F are given by:
Inductive App_F (a : Type) : TyCon :=
| App_ (b : bool) : App_F a a
.
Inductive Lam_F (a : Type) : TyCon :=
| Lam_ : Lam_F a (unit + a)
.
Then the free-like monad Prog (implicitly parameterized by (Shape, Pos)) is given by:
Inductive Prog (a : Type) : Type :=
| Ret : a -> Prog a
| Op (s : Shape) : (forall b, Pos s a b -> Prog b) -> Prog a
.
Having defined some boilerplate, you can write the following example:
(* \f x -> f x x *)
Definition omega {a} : LC a :=
Lam (* f *) (Lam (* x *)
(let f := Ret (inr (inl tt)) in
let x := Ret (inl tt) in
App (App f x) x)).
Full gist: https://gist.github.com/Lysxia/5485709c4594b836113736626070f488
Related
I'm just starting to learn Haskell, and I'm trying to implement some common monads as exercises for myself. As I was fiddling with the ((->) r) monad, I implemented this (wrong) definition:
instance Monad ((->) r) where
return x = \_ -> x
m >>= f = \c -> (f . m) c
GHCi complained to me with
• Couldn't match expected type ‘b’ with actual type ‘t -> b’
‘b’ is a rigid type variable bound by
the type signature for:
(>>=) :: forall a b. (t -> a) -> (a -> t -> b) -> t -> b
whereas the type should be:
(>>=) :: (t -> a ) -> (a -> t -> b) -> t -> b
Why does my implementation break it? And seemingly, the forall version should give the exact same type, but GHCi thinks otherwise. What's the difference?
My fatal misunderstanding was that I thought the error message meant that my implementation had an incorrect type signature. (I thought that because there was a forall in the type signature.) It turns out that's not the actual error; my mistake was that I needed to apply another r term to both f and m in my implementation. Thanks to @luqui and @melpomene for pointing it out to me.
Assume the following definition:
def app {α : Type} : Π{m n : ℕ}, vector α m → vector α n → vector α (n + m)
| 0 _ [] v := by simp [add_zero]; assumption
| (nat.succ _) _ (h :: t) v' := begin apply vector.cons,
exact h,
apply app t v'
end
Do note that n and m are flipped in (n + m) in the definition, so as to avoid plugging add_symm into the definition. Also, remember that add / + recurses on its right-hand argument in Lean. vector is a hand-rolled, length-indexed list with nil / cons constructors.
So anyway, first we have a lemma that follows from definition:
theorem nil_app_v {α : Type} : ∀{n : ℕ} (v : vector α n),
v = [] ++ v := assume n v, rfl
Now we have a lemma that doesn't follow from the definition, so I use eq.rec to formulate it.
theorem app_nil_v {α : Type} : ∀{n : ℕ} (v : vector α n),
v = eq.rec (v ++ []) (zero_add n)
Note that eq.rec is just C y → Π {a : X}, y = a → C a.
The idea of the proof is trivial: induction on v. The base case follows immediately from the definition, and the recursive case should follow immediately from the inductive hypothesis and the definition, but I can't convince Lean of this.
begin
intros n v,
induction v,
-- base case
refl,
-- inductive case
end
The inductive hypothesis I get from Lean is a_1 = eq.rec (a_1 ++ vector.nil) (zero_add n_1).
How do I use it with conclusion a :: a_1 = eq.rec (a :: a_1 ++ vector.nil) (zero_add (nat.succ n_1))? I can unfold app to reduce the term a :: a_1 ++ vector.nil to a :: (a_1 ++ vector.nil), and now I am stuck.
I want to define a function whose behavior depends on whether its argument is (at least) an n-place function. A rudimentary (failed) attempt is
Definition rT {y:Type}(x:y) := ltac: (match y with
| _ -> _ -> _ => exact True
| _ => exact False end).
Check prod: Type -> Type -> Type.
Compute rT prod. (*= False: Prop*)
Print rT. (*rT = fun (y : Type) (_ : y) => False: forall y : Type, y -> Prop*)
As you see, rT maps everything to False. Why? The result remains the same if I replace y in the match clause with the type of x.
The function you want cannot exist within Gallina at the type you expect.
Your function is accepted, but if you print it, you can see its body is:
rT = fun (y : Type) (_ : y) => False
The ltac:(...) block is evaluated once, when rT is defined; at that point y is just an opaque type variable, so the arrow pattern cannot match and the catch-all branch fills in False. More fundamentally, Gallina has no way of matching on a Type. There are ways to deal with n-ary functions so that you can inspect their arity, but it involves dependent types to statically capture the arity. For instance, for uniform n-ary functions:
https://coq.inria.fr/library/Coq.Numbers.NaryFunctions.html
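For illustration, here is a small self-contained sketch (my own definitions, not the library's API) of how dependent types capture the arity statically, so that a function in the spirit of rT can genuinely branch on it:
(* The arity n is part of the type of a uniform n-ary function. *)
Fixpoint nary (A : Type) (n : nat) (B : Type) : Type :=
  match n with
  | O => B
  | S m => A -> nary A m B
  end.
(* True exactly when the stated arity is at least 2; only the index n is
   inspected, never the function itself. *)
Definition rT' (A B : Type) (n : nat) (_ : nary A n B) : Prop :=
  match n with
  | S (S _) => True
  | _ => False
  end.
Compute rT' nat nat 2 Nat.add. (* = True : Prop *)
The price is that the argument type and the arity now appear in the type of rT', which is what statically capturing the arity amounts to.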
In my simple Haskell DSL, I have the following functions to call other functions:
callF :: forall a. (Typeable a)
=> (V a) -> (V a)
callF fp@(V (FunP name)) =
pack $ FunAppl (prettyV fp) []
callF1 :: forall a b. (Typeable a, Typeable b)
=> (V (V a -> V b)) -> V a -> (V b)
callF1 fp@(V (FunP name)) arg =
pack $ FunAppl (prettyV fp) [prettyV arg]
callF2 :: forall a b c. (Typeable a, Typeable b, Typeable c)
=> (V (V a -> V b -> V c)) -> V a -> V b -> (V c)
callF2 fp@(V (FunP name)) arg1 arg2 =
pack $ FunAppl (prettyV fp) [prettyV arg1, prettyV arg2]
I would like to generalize this for any number of arguments using typeclasses.
This is what I've tried, but it only works for 0 or 1 arguments, and it doesn't enforce calling functions with the correct number of arguments.
class Callable a b | a -> b where
call :: V a -> [String] -> b
instance Callable a (V b) where
call fp@(V (FunP name)) x = pack $ FunAppl (prettyV fp) x
instance (Typeable c, Typeable d) => Callable (V a -> V b) (V c -> V d) where
call fun@(V (FunP name)) list arg = call fun (list ++ [prettyV arg])
The normal technique for functions with multiple arguments--like printf--is to use a recursive typeclass. For printf, this is done with a class called PrintfType. The important insight is the recursive instance:
(PrintfArg a, PrintfType r) => PrintfType (a -> r)
This basically says that if you can return a PrintfType, your function is also an instance. The "base case" is then a type like String. So, if you want to call printf with one argument, it fires two instances: PrintfType String and PrintfType (a -> r) where r is String. If you want two arguments, it goes: String, (a -> r) where r is String and (a -> r) where r is the previous (a -> r).
However, your problem is actually a bit more complex. You want to have an instance that handles two different tasks. You want your instance to apply to functions of different types (e.g. V (V a -> V b), V (V a -> V b -> V c) and so on) as well as ensuring that the right number of arguments is presented.
The first step to do this is to stop using [String] to pass in arguments. The [String] type loses information about how many values it has, so you can't check that there is an appropriate number of arguments. Instead, you should use a type for the argument lists that reflects how many arguments it has.
This type could look something like this:
data a :. b = a :. b
it is just a type for combining two other types, which can be used like this:
"foo" :. "bar" :: String :. String
"foo" :. "bar" :. "baz" :: String :. String :. String
Now you just need to write a typeclass with a recursive instance that traverses both the type-level list of arguments and the function itself. Here's a very rough standalone sketch of what I mean; you'll have to adapt it to your particular problem yourself.
infixr 8 :.
data a :. b = a :. b
class Callable f a b | f -> a b where
call :: V f -> a -> b
instance Callable rf ra (V rb) => Callable (String -> rf) (String :. ra) (V rb) where
call (V f) (a :. rest) = call (V (f a)) rest
instance Callable String () (V String) where
call (V f) () = V f
You will also have to enable a few extensions: FlexibleInstances, FunctionalDependencies and UndecidableInstances.
You could then use it like this:
*Main> call (V "foo") ()
V "foo"
*Main> call (V (\ x -> "foo " ++ x)) ("bar" :. ())
V "foo bar"
*Main> call (V (\ x y -> "foo " ++ x ++ y)) ("bar" :. " baz" :. ())
V "foo bar baz"
If you pass in the wrong number of arguments, you will get a type error. Admittedly, it's not the prettiest error message in the world! That said, the important part of the error (Couldn't match type `()' with `[Char] :. ()') does point out the core problem (argument lists that don't match), which should be easy enough to follow.
*Main> call (V (\ x -> "foo " ++ x)) ("bar" :. "baz" :. ())
<interactive>:101:1:
Couldn't match type `()' with `[Char] :. ()'
When using functional dependencies to combine
Callable String () (V String),
arising from the dependency `f -> a b'
in the instance declaration at /home/tikhon/Documents/so/call.hs:16:14
Callable [Char] ([Char] :. ()) (V [Char]),
arising from a use of `call' at <interactive>:101:1-4
In the expression:
call (V (\ x -> "foo " ++ x)) ("bar" :. "baz" :. ())
In an equation for `it':
it = call (V (\ x -> "foo " ++ x)) ("bar" :. "baz" :. ())
Note that this might be a bit over-complicated for your particular task--I'm not convinced it's the best solution to the problem. But it's a very good exercise in enforcing more complicated type-level invariants using some more advanced typeclass features.
I have below code to take the args to set some offset time.
setOffsetTime :: (Ord a, Num b)=>[a] -> b
setOffsetTime [] = 200
setOffsetTime (x:xs) = read x::Int
But the compiler says "Could not deduce (b ~ Int) from the context (Ord a, Num b) bound by the type signature for setOffsetTime :: (Ord a, Num b) => [a] -> b".
Also I found I could not use 200.0 if I want a float as the default value. The compiler says "Could not deduce (Fractional b) arising from the literal `200.0'".
Could anyone show me some code as a function (not in the prelude) that takes an arg to store some variable, so I can use it in another function? I can do this in the main = do, but I hope to use an elegant function to achieve this.
Is there any global constant stuff in Haskell? I googled it, but it seems not.
I want to use Haskell to replace some of my Python scripts, although it is not easy.
I think this type signature doesn't quite mean what you think it does:
setOffsetTime :: (Ord a, Num b)=>[a] -> b
What that says is "if you give me a value of type [a], for any type a you choose that is a member of the Ord type class, I will give you a value of type b, for any type b that you choose that is a member of the Num type class". The caller gets to pick the particular types a and b that are used each time setOffsetTime is called.
So trying to return a value of type Int (or Float, or any particular type) doesn't make sense. Int is indeed a member of the type class Num, but it's not any member of the type class Num. According to that type signature, I should be able to make a brand new instance of Num that you've never seen before, import setOffsetTime from your module, and call it to get a value of my new type.
To come up with an acceptable return value, you can only use functions that likewise return an arbitrary Num. You can't use any functions of particular concrete types.
Existential types are essentially a mechanism for allowing the callee to choose the value for a type variable (and then the caller has to be written to work regardless of what that type is), but that's not really something you want to be getting into while you're still learning.
If you are convinced that the implementation of your function is correct, i.e., that it should interpret the first element in its input list as the number to return and return 200 if there is no such argument, then you only need to make sure that the type signature matches that implementation (which it does not do, right now).
To do so, you could, for example, remove the type signature and ask ghci to infer the type:
$ ghci
GHCi, version 7.6.2: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> :{
Prelude| let setOffsetTime [] = 200
Prelude| setOffsetTime (x : xs) = read x :: Int
Prelude| :}
Prelude> :t setOffsetTime
setOffsetTime :: [String] -> Int
Prelude> :q
Leaving GHCi.
$
And indeed,
setOffsetTime :: [String] -> Int
setOffsetTime [] = 200
setOffsetTime (x : xs) = read x :: Int
compiles fine.
If you want a slightly more general type, you can drop the ascription :: Int from the second case. The above method then tells you that you can write
setOffsetTime :: (Num a, Read a) => [String] -> a
setOffsetTime [] = 200
setOffsetTime (x : xs) = read x
From the comment that you added to your question, I understand that you want your function to return a floating-point number. In that case, you can write
setOffsetTime :: [String] -> Float
setOffsetTime [] = 200.0
setOffsetTime (x : xs) = read x
or, more general:
setOffsetTime :: (Fractional a, Read a) => [String] -> a
setOffsetTime [] = 200.0
setOffsetTime (x : xs) = read x