Imagine the following code:
open System.Collections.Generic

let d = dict [ 1, "one"; 2, "two" ]
let CollectionHasValidItems keys =
    try
        let values = keys |> List.map (fun k -> d.Item k)
        true
    with
    | :? KeyNotFoundException -> false
Now let us test it:
let keys1 = [ 1 ; 2 ]
let keys2 = [ 1 ; 2; 3 ]
let result1 = CollectionHasValidItems keys1 // true
let result2 = CollectionHasValidItems keys2 // false
This works as I would expect. But if we change List to Seq in the function, we get different behavior:
let keys1 = seq { 1 .. 2 }
let keys2 = seq { 1 .. 3 }
let result1 = CollectionHasValidItems keys1 // true
let result2 = CollectionHasValidItems keys2 // true
Here with keys2 I can see the exception message within the values object in the debugger, but no exception is thrown...
Why is it like this? I need some similar logic in my app and would prefer to work with sequences.
This is a classic example of a problem with side effects and lazy evaluation. Seq functions such as Seq.map are lazily evaluated, that means that the result of Seq.map will not be computed until the returned sequence is enumerated. In your example, this never occurs because you never do anything with values.
If you force the evaluation of the sequence by generating a concrete collection, like a list, you will get your exception and the function will return false:
let CollectionHasValidItems keys =
    try
        let values = keys |> Seq.map (fun k -> d.Item k) |> Seq.toList
        true
    with
    | :? System.Collections.Generic.KeyNotFoundException -> false
As you've noticed, using List.map instead of Seq.map also resolves your issue because it will be eagerly evaluated when called, returning a new concrete list.
The key takeaway is that you have to be really careful about combining side effects with lazy evaluation. You can't rely on effects happening in the order you initially expect.
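The same trap exists in any language with lazy sequences. For comparison, here is a minimal OCaml sketch of it using the standard Seq module, with a Hashtbl standing in for the F# dict (the names here are illustrative, not from the question):

```ocaml
(* A dictionary with keys 1 and 2 only. *)
let d = Hashtbl.create 2
let () = Hashtbl.add d 1 "one"; Hashtbl.add d 2 "two"

(* Lazy version: Seq.map defers the lookups, so Not_found is never
   raised inside the try block and the function always returns true. *)
let has_valid_items_lazy keys =
  try
    let _values = Seq.map (fun k -> Hashtbl.find d k) keys in
    true  (* _values is never forced *)
  with Not_found -> false

(* Strict version: List.of_seq forces every lookup immediately,
   so a missing key raises inside the try block as intended. *)
let has_valid_items_strict keys =
  try
    let _values = List.of_seq (Seq.map (fun k -> Hashtbl.find d k) keys) in
    true
  with Not_found -> false

let lazy_result = has_valid_items_lazy (List.to_seq [1; 2; 3])     (* true *)
let strict_result = has_valid_items_strict (List.to_seq [1; 2; 3]) (* false *)
```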
I have this function to add logs in a file :
let log_datas f file (datas : 'a list) =
  let oc = open_out file.file_name in
  List.iter (fun x -> Printf.fprintf oc "%s" @@ f x) datas;
  close_out oc
let () = let f = string_of_int in log_datas f {file_name="log"} [1;2]
Which works.
I tried to make it accept a string list as argument by default:
let log_datas ?(f : 'a -> string = fun x -> x ^ "\n") file (datas : 'a list) =
  let oc = open_out file.file_name in
  List.iter (fun x -> Printf.fprintf oc "%s" @@ f x) datas;
  close_out oc
but when I try
let () = let f = string_of_int in log_datas ~f {file_name="log"} [1;2]
I get a type error
23 | let () = let f = string_of_int in log_datas ~f {file_name="log"} [1;2]
^
Error: This expression has type int -> string
but an expression was expected of type string -> string
Type int is not compatible with type string
An obvious solution would be to make two functions, one without an f argument and one with an f argument. But I was wondering, is there any other workaround possible?
No, it is not possible; the default value pins down the type of f, so you have to make both parameters required to keep the function polymorphic. Basically, your example could be distilled to,
let log ?(to_string = string_of_int) data =
  print_endline (to_string data)
If OCaml kept it polymorphic, then the following would be allowed,
log "hello"
and string_of_int "hello" is not well-typed.
So you have to keep both parameters required, e.g.,
let log to_string data =
  print_endline (to_string data)
I would also suggest looking into the Format module and defining your own polymorphic function that uses format specification to define how data of different types are written, e.g.,
let log fmt =
  Format.kasprintf print_endline fmt
Substitute print_endline with your own logging facility. The log function can then be used like printf, e.g.,
log "%s %d" "hello" 42
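To make both variants concrete, here is a small self-contained sketch that logs into a Buffer instead of a real file (the buffer sink and the helper names are assumptions for illustration only):

```ocaml
(* An in-memory sink standing in for a log file. *)
let buf = Buffer.create 64

(* Required to_string parameter keeps log polymorphic in the data type. *)
let log to_string data =
  Buffer.add_string buf (to_string data ^ "\n")

(* printf-style variant via Format.kasprintf. *)
let logf fmt = Format.kasprintf (fun s -> Buffer.add_string buf (s ^ "\n")) fmt

let () =
  log string_of_int 42;        (* int data *)
  log (fun s -> s) "hello";    (* string data *)
  logf "%s %d" "hello" 42      (* printf-style *)
```

The point of the printf-style version is that the format string itself carries the type information, so no to_string argument is needed at all.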
Is there an easy way to make Json.Decode case insensitive in elm (0.18)?
decodeDepartmentDate : Json.Decode.Decoder DepartmentDate
decodeDepartmentDate =
    Json.Decode.map6 DepartmentDate
        (field "nameOfDay" Json.Decode.string)
        (field "orderDate" Convert.datePart)
        (field "mealTimeID" Json.Decode.string)
        (field "mealTime" Json.Decode.string)
        (field "departmentID" Json.Decode.string)
        (field "department" Json.Decode.string)
I want to be able to use the same elm SPA against multiple back ends and avoid issues like this by default:
BadPayload "Expecting an object with a field named `nameOfDay` at _[11]
but instead got: {\"NameOfDay\":\"Wednesday\",\"OrderDate\":\"2018-09-05T00:00:00\",
\"MealTimeID\":\"546ccee0-e070-403e-a15b-63f4e1366054\",\"MealTime\":\"All Day\",
\"StartTime\":\"2018/06/05 05:04:38\",\"DepartmentID\":\"066a1c9f-97da-487e-b82f-f933b159c042\",
\"Department\":\"Side walk\"}"
Thanks
As far as I'm aware, there's no ready-made solution for doing so. But you can make your own!
The easiest way is probably to just generate the different casings and make your own field decoder using oneOf:
myField name decoder =
    Decode.oneOf
        [ Decode.field name decoder
        , Decode.field (String.toLower name) decoder
        ]
Another approach would be to decode the object as key/value pairs without decoding the values, transforming the keys and then re-encoding it to be able to use the existing JSON decoders on it:
lowerCaseKeys =
    Decode.keyValuePairs Decode.value
        |> Decode.map (List.map (\( key, value ) -> ( String.toLower key, value )))
        |> Decode.map Encode.object
But since the value is now wrapped in a Decoder you'd have to use decodeValue on that and ultimately end up with a double-wrapped Result, which isn't very nice. I might be missing some elegant way of making this work though.
Instead it seems better to not re-encode it, but just make your own field decoder to work on the dict. This will also allow you to ignore casing on the keys you specify.
lowerCaseKeys : Decode.Decoder (Dict.Dict String Decode.Value)
lowerCaseKeys =
    Decode.keyValuePairs Decode.value
        |> Decode.map (List.map (\( key, value ) -> ( String.toLower key, value )))
        |> Decode.map Dict.fromList

myField : String -> Decode.Decoder a -> Dict.Dict String Decode.Value -> Decode.Decoder a
myField name decode dict =
    case Dict.get (String.toLower name) dict of
        Just value ->
            case Decode.decodeValue decode value of
                Ok v ->
                    Decode.succeed v

                Err e ->
                    e |> Decode.errorToString |> Decode.fail

        Nothing ->
            Decode.fail "missing key"

result =
    Decode.decodeString (lowerCaseKeys |> Decode.andThen (myField "fOO" Decode.int)) """{ "Foo": 42 }"""
You can define a variant of field that disregards case.
fieldInsensitive : String -> Decode.Decoder a -> Decode.Decoder a
fieldInsensitive f d =
    let
        flow =
            String.toLower f
    in
    Decode.keyValuePairs Decode.value
        |> Decode.andThen
            (\l ->
                l
                    |> List.filter (\( k, v ) -> String.toLower k == flow)
                    |> List.map (\( k, v ) -> v)
                    |> List.head
                    |> Maybe.map Decode.succeed
                    |> Maybe.withDefault (Decode.fail "field not found")
            )
        |> Decode.andThen
            (\v ->
                case Decode.decodeValue d v of
                    Ok w ->
                        Decode.succeed w

                    Err e ->
                        Decode.fail (Decode.errorToString e)
            )
This is more or less the same code as @glennsl's answer, but wrapped up in a self-contained function. The advantage is a simpler interface, the disadvantage is that if you look up multiple fields in the same object you will be repeating work.
Note that this code makes a rather arbitrary decision if there are multiple fields with the same key up to case! For more reliable code, it might be a better idea to fail if a key exists more than once.
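As a language-agnostic sketch of the same first-match caveat, here is the case-insensitive lookup idea written in OCaml over plain key/value pairs (the function and data names are made up for illustration):

```ocaml
(* Case-insensitive lookup over key/value pairs: lowercases both the
   requested name and each key, and returns the FIRST match, mirroring
   the arbitrary-choice caveat noted above. *)
let field_insensitive name pairs =
  let lname = String.lowercase_ascii name in
  List.find_opt (fun (k, _) -> String.lowercase_ascii k = lname) pairs
  |> Option.map snd

let pairs = [ ("Foo", 42); ("Bar", 7) ]
let found = field_insensitive "fOO" pairs    (* Some 42 *)
let missing = field_insensitive "baz" pairs  (* None *)
```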
I am trying to create a lazy list with list elements which together represent all the combinations of zeros and ones.
Example: [[], [0], [1], [0,0], [0,1], [1,0]...]
Is this even possible in ML? I can't seem to find a way to change the pattern of the list elements once I have defined it. It also seems that I would need to define how the binary pattern changes from one element to the next, which I'm not sure is possible in a functional language (I've never encountered binary representations in a functional language).
There seem to be two different issues at hand here:
How do we generate this particular infinite data structure?
In ML, how do we implement call-by-need?
Let's begin by considering the first point. I would generate this particular data structure in steps where the input to the nth step is a list of all bit patterns of length n. We can generate all bit patterns of length n+1 by prepending 0s and 1s onto each pattern of length n. In code:
fun generate patterns =
  let
    val withZeros = List.map (fn pat => 0 :: pat) patterns
    val withOnes = List.map (fn pat => 1 :: pat) patterns
    val nextPatterns = withZeros @ withOnes
  in
    patterns @ generate nextPatterns
  end

val allPatterns = generate [[]]
If you were to implement this approach in a call-by-need language such as Haskell, it will perform well out of the box. However, if you run this code in ML it will not terminate. That brings us to the second problem: how do we do call-by-need in ML?
To do call-by-need in ML, we'll need to work with suspensions. Intuitively, a suspension is a piece of computation which may or may not have been run yet. A suitable interface and implementation are shown below. We can suspend a computation with delay, preventing it from running immediately. Later, when we need the result of a suspended computation, we can force it. This implementation uses references to remember the result of a previously forced suspension, guaranteeing that any particular suspension will be evaluated at most once.
structure Susp :>
sig
  type 'a susp
  val delay : (unit -> 'a) -> 'a susp
  val force : 'a susp -> 'a
end =
struct
  type 'a susp = 'a option ref * (unit -> 'a)

  fun delay f = (ref NONE, f)

  fun force (r, f) =
    case !r of
      SOME x => x
    | NONE => let val x = f ()
              in (r := SOME x; x)
              end
end
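As an aside, OCaml gets this machinery for free: the built-in Lazy module provides exactly these memoizing suspensions, with the lazy keyword playing the role of delay and Lazy.force the role of force. A minimal sketch (the counter is just there to observe memoization):

```ocaml
(* A suspended computation; the counter records how often it runs. *)
let runs = ref 0
let delayed = lazy (incr runs; 21 * 2)

let a = Lazy.force delayed  (* forces the computation: runs = 1 *)
let b = Lazy.force delayed  (* memoized: runs is still 1 *)
```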
Next, we can define a lazy list type in terms of suspensions, where the tail of the list is delayed. This allows us to create seemingly infinite data structures; for example, fun zeros () = delay (fn _ => Cons (0, zeros ())) defines an infinite list of zeros.
structure LazyList :>
sig
  datatype 'a t = Nil | Cons of 'a * 'a t susp
  val singleton : 'a -> 'a t susp
  val append : 'a t susp * 'a t susp -> 'a t susp
  val map : ('a -> 'b) -> 'a t susp -> 'b t susp
  val take : 'a t susp * int -> 'a list
end =
struct
  datatype 'a t = Nil | Cons of 'a * 'a t susp

  fun singleton x =
    delay (fn _ => Cons (x, delay (fn _ => Nil)))

  fun append (xs, ys) =
    delay (fn _ =>
      case force xs of
        Nil => force ys
      | Cons (x, xs') => Cons (x, append (xs', ys)))

  fun map f xs =
    delay (fn _ =>
      case force xs of
        Nil => Nil
      | Cons (x, xs') => Cons (f x, map f xs'))

  fun take (xs, n) =
    case force xs of
      Nil => []
    | Cons (x, xs') =>
        if n = 0 then []
        else x :: take (xs', n-1)
end
With this machinery in hand, we can adapt the original code to use lazy lists and suspensions in the right places:
fun generate patterns =
  delay (fn _ =>
    let
      val withZeros = LazyList.map (fn pat => 0 :: pat) patterns
      val withOnes = LazyList.map (fn pat => 1 :: pat) patterns
      val nextPatterns = LazyList.append (withZeros, withOnes)
    in
      force (LazyList.append (patterns, generate nextPatterns))
    end)

val allPatterns = generate (LazyList.singleton [])
We can force a piece of this list with LazyList.take:
- LazyList.take (allPatterns, 10);
val it = [[],[0],[1],[0,0],[0,1],[1,0],[1,1],[0,0,0],[0,0,1],[0,1,0]]
: int list list
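For comparison, the same generator can be written against OCaml's standard lazy Seq type, where a sequence is just a thunk (unit -> 'a Seq.node), so laziness comes built in; the take helper is hand-rolled here only to keep the sketch self-contained:

```ocaml
(* generate: each step emits all patterns of the current length, then
   recurses with 0- and 1-prefixed copies. Because generate itself is a
   thunk, the recursion is only unfolded on demand. *)
let rec generate patterns () =
  let next =
    List.map (fun p -> 0 :: p) patterns
    @ List.map (fun p -> 1 :: p) patterns
  in
  Seq.append (List.to_seq patterns) (generate next) ()

let all_patterns = generate [[]]

(* Force the first n elements of a lazy sequence into a list. *)
let rec take n s =
  if n = 0 then []
  else match s () with
    | Seq.Nil -> []
    | Seq.Cons (x, tl) -> x :: take (n - 1) tl

let first_ten = take 10 all_patterns
(* [[]; [0]; [1]; [0;0]; [0;1]; [1;0]; [1;1]; [0;0;0]; [0;0;1]; [0;1;0]] *)
```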
I'm trying to read a bunch of csv files into SQL Server using SQL Bulk Insert and DataContext.ExecuteCommand. (Maybe this isn't the best way to do it, but it does allow me to stay in the Type Provider context, as opposed to SqlBulkCopy, I think.) Now the upload is glitchy, with intermittent success. Some files read in; some fail with "Data conversion error (truncation)". I think this has to do with the row terminators not always working.
When the upload works, it seems to be with the '0x0A' terminator. But when that fails, I want to try again with other row terminators. So I want to go into a Try statement, and on failure go into another Try statement, and another if that one fails, and so on. This may not be the best way to upload, but I am still curious about the Try logic for its own sake.
Here's what I've come up with so far and it's not too pretty (but it works). Cutting out a few nested layers:
let FileRead path =
    try
        db.DataContext.ExecuteCommand(@"BULK INSERT...ROWTERMINATOR='0x0A')") |> ignore
        true
    with
    | exn ->
        try
            db.DataContext.ExecuteCommand(@"BULK INSERT...ROWTERMINATOR='\r')") |> ignore
            true
        with
        | exn ->
            try
                db.DataContext.ExecuteCommand(@"BULK INSERT...ROWTERMINATOR='\n')") |> ignore
                true
            with
            | exn ->
                false
This doesn't feel right, but I haven't figured out any other syntax.
EDIT: What I ended up doing, just for the record. Appreciate being put on a productive path. There's plenty to improve here; one of the more significant things would be to use Asyncs and run it in parallel (which I have gotten experience with in other sections).
type dbSchema = SqlDataConnection<dbConnection>
let db = dbSchema.GetDataContext()

let TryUpLd table pathFile rowTerm =
    try
        db.DataContext.ExecuteCommand( @"BULK INSERT " + table + " FROM '" + pathFile +
                                       @"' WITH (FIELDTERMINATOR=',', FIRSTROW = 2, ROWTERMINATOR='"
                                       + rowTerm + "')" ) |> ignore
        File.Delete (pathFile) |> Some
    with
    | exn -> None

let NxtUpLd UL intOpt =
    match intOpt with
    | None -> UL
    | _ -> intOpt

let MoveTable ID table1 table2 =
    //...
    ()

let NxtMoveTable MT intOpt =
    match intOpt with
    | Some i -> MT
    | _ -> ()

let UpLdFile path (file:string) =
    let (table1, table2) =
        match path with
        | p when p = dlXPath -> ("Data.dbo.ImportXs", "Data.dbo.Xs")
        | p when p = dlYPath -> ("Data.dbo.ImportYs", "Data.dbo.Ys")
        | _ -> ("ERROR path to tables", "")
    let ID = file.Replace(fileExt, "")
    let TryRowTerm = TryUpLd table1 (path + file)
    TryRowTerm "0x0A"
    |> NxtUpLd (TryRowTerm "\r")
    |> NxtUpLd (TryRowTerm "\n")
    |> NxtUpLd (TryRowTerm "\r\n")
    |> NxtUpLd (TryRowTerm "\n\r")
    |> NxtUpLd (TryRowTerm "\0")
    |> NxtMoveTable (MoveTable ID table1 table2)

let UpLdData path =
    let dir = new DirectoryInfo(path)
    let fileList = dir.GetFiles()
    fileList |> Array.iter (fun file -> UpLdFile path file.Name) |> ignore
Here's one way to do it, using monadic composition.
First, define a function that takes another function as input, but converts any exception to a None value:
let attempt f =
    try f () |> Some
    with | _ -> None
This function has the type (unit -> 'a) -> 'a option; that is: f is inferred to be any function that takes unit as input, and returns a value. As you can see, if no exception happens, the return value from invoking f is wrapped in a Some case. The attempt function suppresses all exceptions, which you shouldn't normally do.
Next, define this attemptNext function:
let attemptNext f = function
    | Some x -> Some x
    | None -> attempt f
This function has the type (unit -> 'a) -> 'a option -> 'a option. If the input 'a option is Some then it's simply returned. In other words, the value is interpreted as already successful, so there's no reason to try the next function.
Otherwise, if the input 'a option is None, this is interpreted as though the previous step resulted in a failure. In that case, the input function f is attempted, using the attempt function.
This means that you can now compose functions together, and get the first successful result.
Here are some functions to test with:
let throwyFunction () = raise (new System.InvalidOperationException("Boo"))
let throwyFunction' x y = raise (new System.InvalidOperationException("Hiss"))
let goodFunction () = "Hooray"
let goodFunction' x y = "Yeah"
Try them out in F# Interactive:
> let res1 =
      attempt throwyFunction
      |> attemptNext (fun () -> throwyFunction' 42 "foo")
      |> attemptNext goodFunction
      |> attemptNext (fun () -> goodFunction' true 13.37);;
val res1 : string option = Some "Hooray"

> let res2 =
      attempt goodFunction
      |> attemptNext throwyFunction
      |> attemptNext (fun () -> throwyFunction' 42 "foo")
      |> attemptNext (fun () -> goodFunction' true 13.37);;
val res2 : string option = Some "Hooray"

> let res3 =
      attempt (fun () -> throwyFunction' 42 "foo")
      |> attemptNext throwyFunction
      |> attemptNext (fun () -> goodFunction' true 13.37)
      |> attemptNext goodFunction;;
val res3 : string option = Some "Yeah"

> let res4 =
      attempt (fun () -> throwyFunction' 42 "foo")
      |> attemptNext (fun () -> goodFunction' true 13.37)
      |> attemptNext throwyFunction
      |> attemptNext goodFunction;;
val res4 : string option = Some "Yeah"
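The same attempt/attemptNext pattern translates directly to OCaml; a minimal sketch (the function names and example thunks are illustrative only):

```ocaml
(* attempt: run a thunk, converting any exception to None. *)
let attempt f = try Some (f ()) with _ -> None

(* attempt_next: only run the thunk if everything so far has failed. *)
let attempt_next f = function
  | Some x -> Some x
  | None -> attempt f

(* First success wins; later thunks are never even run. *)
let result =
  attempt (fun () -> failwith "boo")
  |> attempt_next (fun () -> "hooray")
  |> attempt_next (fun () -> failwith "never evaluated")
```

Note that because each step receives a thunk (unit -> 'a), the later attempts are genuinely deferred, rather than being evaluated eagerly as plain function arguments would be.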
I am currently trying to use functions to create:
0 V12 V13 V14
V21 0 V23 V24
V31 V32 0 V34
V41 V42 V43 0
A way that I found to do this was to use these equations:
(2*V1 - 1)*(2*V2 - 1) for entry V(1,2) of the matrix
(2*V1 - 1)*(2*V3 - 1) for entry V(1,3) of the matrix
etc.
Thus far I have:
let singleState state =
  if state = 0.0 then 0.0
  else (2.0 *. state) -. 1.0;;

let rec matrixState v =
  match v with
  | [] -> []
  | hd :: [] -> v
  | hd :: (nx :: _ as tl) ->
      singleState hd *. singleState nx :: matrixState tl;;
My results come out to be:
float list = [-3.; -3.; -3.; -1.]
When they should be a list of lists that look as follows:
0 -1 1 -1
-1 0 -1 1
1 -1 0 -1
-1 1 -1 0
So instead of making a list of lists, it is making just one list. I also have trouble figuring out how to make the diagonal entries 0.
The signatures should look like:
val singleState : float list -> float list list = <fun>
val matrixState : float list list -> float list list = <fun>
and I am getting
val singleState : float -> float = <fun>
val matrixState : float list -> float list = <fun>
Any ideas?
With some fixing up, your function would make one row of the result. Then you could call it once for each row you need. A good way to do the repeated calling might be with List.map.
Assuming this is mostly a learning exercise, it might be good to first make a matrix like this:
V11 V12 V13 V14
V21 V22 V23 V24
V31 V32 V33 V34
V41 V42 V43 V44
I think this will be a lot easier to calculate.
Then you can replace the diagonal with zeroes. Here's some code that would replace the diagonal:
let replnth r n l =
  List.mapi (fun i x -> if i = n then r else x) l

let zerorow row (n, res) =
  (n - 1, replnth 0.0 n row :: res)

let zerodiag m =
  let (_, res) = List.fold_right zerorow m (List.length m - 1, []) in
  res
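For what it's worth, here is a quick self-contained check of the diagonal-zeroing helpers above on a small 2x2 matrix (the sample values are made up):

```ocaml
(* Replace the nth element of a list. *)
let replnth r n l = List.mapi (fun i x -> if i = n then r else x) l

(* fold_right walks rows from the bottom up, so the diagonal index n
   counts down from (length - 1) to 0 as rows are prepended. *)
let zerorow row (n, res) = (n - 1, replnth 0.0 n row :: res)

let zerodiag m =
  let (_, res) = List.fold_right zerorow m (List.length m - 1, []) in
  res

let zeroed = zerodiag [[1.; 2.]; [3.; 4.]]  (* [[0.; 2.]; [3.; 0.]] *)
```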
I would prefer to go with an array for your work.
A nice function to use here is Array.init; it works like so:
# Array.init 5 (fun x -> x);;
- : int array = [|0; 1; 2; 3; 4|]
Note that 5 plays the role of the size of our array.
But as you want a matrix, we need to build an array of arrays, which is achieved with two calls of Array.init, one nested inside the other:
# Array.init 3 (fun row -> Array.init 3 (fun col -> row+col));;
- : int array array = [|[|0; 1; 2|]; [|1; 2; 3|]; [|2; 3; 4|]|]
Note that I've called my variables row and col to denote the fact that they correspond to the row and column indices of our matrix.
Last, as your formula uses a reference vector V holding the values [|V1;V2;V3;V4|], we need to create one and incorporate calls to it into our matrix builder. (The nth value of an array tab is accessed as tab.(n-1), since array indexing starts at 0.)
This finally leads us to the working example:
let vect = [|1;2;3;4|]

let built_matrix =
  Array.init 4 (fun row ->
    Array.init 4 (fun col ->
      if col = row then 0
      else vect.(row) + vect.(col)))
Of course you'll have to adapt it to your convenience in order to match this piece of code according to your requirement.
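For instance, a hypothetical adaptation to the asker's formula (2*Vi - 1)*(2*Vj - 1) with zeros on the diagonal might look like this (the 0/1 state vector here is made-up example data):

```ocaml
(* Example 0/1 state vector; 1.0 maps to +1 and 0.0 maps to -1. *)
let vect = [| 1.; 0.; 1.; 0. |]

let state_matrix =
  Array.init 4 (fun row ->
    Array.init 4 (fun col ->
      if col = row then 0.0
      else (2.0 *. vect.(row) -. 1.0) *. (2.0 *. vect.(col) -. 1.0)))
```

With this vector the result matches the alternating-sign matrix from the question, with off-diagonal entries of +1 and -1 and a zero diagonal.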
A side note about syntax: repeating Array each time can be avoided using some nice features of OCaml. We can locally open a module like so:
let built_matrix =
  let open Array in
  init 4 (fun row ->
    init 4 (fun col ->
      if col = row then 0
      else vect.(row) + vect.(col)))
Even shorter, let open Array in ... can be written as Array.(...). Below is a chunk of code run under the excellent utop to illustrate it (taking the opportunity to also convert our matrix to a list of lists):
utop #
Array.(
  to_list
  @@ map to_list
  @@ init 4 (fun r ->
       init 4 (fun c ->
         if r = c then 0
         else vect.(r) + vect.(c))))
;;
- : int list list = [[0; 3; 4; 5]; [3; 0; 5; 6]; [4; 5; 0; 7]; [5; 6; 7; 0]]
I hope it helps.