Haskell: Dealing With Types And Exceptions

I'd like to know the "Haskell way" to catch and handle exceptions. As shown below, I understand the basic syntax, but I'm not sure how to deal with the type system in this situation.
The below code attempts to return the value of the requested environment variable. Obviously if that variable isn't there I want to catch the exception and return Nothing.
getEnvVar x = do {
    var <- getEnv x;
    Just var;
} `catch` \ex -> do {
    Nothing
}
Here is the error:
Couldn't match expected type `IO a'
against inferred type `Maybe String'
In the expression: Just var
In the first argument of `catch', namely
`do { var <- getEnv x;
Just var }'
In the expression:
do { var <- getEnv x;
Just var }
`catch`
\ ex -> do { Nothing }
I could return string values:
getEnvVar x = do {
    var <- getEnv x;
    return var;
} `catch` \ex -> do {
    return ""
}
however, this doesn't feel like the Haskell way. What is the Haskell way?
Edit: Updated code to properly reflect description.

You cannot strip away the IO and return Maybe String within a do-block. You need to return an IO (Maybe String).
getEnvVar x = do {
    var <- getEnv x;
    return (Just var);
} `catch` \ex -> do {
    return Nothing
}
Why not use
import qualified System.IO.Error as E
getEnvVar :: String -> IO (Either IOError String)
getEnvVar = E.try . getEnv
Instead of Nothing and Just var, you get Left error and Right var.
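For example, a caller could pattern match on the Either (a minimal usage sketch; the "PATH" variable and the messages are just illustrative):
main :: IO ()
main = do
    result <- getEnvVar "PATH"
    case result of
        Right value -> putStrLn ("PATH = " ++ value)
        Left err    -> putStrLn ("could not read PATH: " ++ show err)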

Once you accept that anything involving getEnv is going to return a result in the IO monad, there is nothing wrong with your basic approach. And while you could use System.IO.Error (and I would), it is just as valid, and instructive, to write it the way you did. However, you did use a bit more punctuation than idiomatic Haskell would:
getEnvVar x = (Just `fmap` getEnv x) `catch` const (return Nothing)
or
getEnvVar x = getEnv x `catch` const (return "")

You could also try
import System.Environment
getEnvVar :: String -> IO (Maybe String)
getEnvVar x = getEnvironment >>= return . lookup x
or a bit longer, but maybe easier to follow:
getEnvVar x = do
    fullEnvironment <- getEnvironment
    return (lookup x fullEnvironment)
if you don't mind going through the whole environment every time.
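A quick usage sketch of this Maybe-returning version (the "HOME" variable is just an example):
main :: IO ()
main = do
    home <- getEnvVar "HOME"
    case home of
        Just path -> putStrLn ("HOME = " ++ path)
        Nothing   -> putStrLn "HOME is not set"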


How to convert scala.Some to scala.mutable.Map?

I am trying to write code to mask nested JSON fields.
def maskRecursively(map: mutable.Map[String, Object]): mutable.Map[String, Object] = {
  val maskColumns = PII_Data.getPIIData()
  for ((k, v) <- map) {
    if (v.isInstanceOf[Map[String, Object]]) {
      maskRecursively(map.get(k).asInstanceOf[mutable.Map[String, Object]])
    } else if (v.isInstanceOf[List[Object]]) {
    } else {
      if (maskColumns.contains(k)) { map += (k -> "*****") }
    }
  }
  map
}
I am calling this method from:
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)
val result = mapper.readValue(jsonStr, classOf[ java.util.Map[String,Object] ])
import scala.collection.JavaConverters._
val myScalaMap = result.asScala
maskRecursively(result.asScala)
I am getting an error while trying to iterate over a nested JSON object:
Cannot cast value of type 'scala.Some' to type 'scala.collection.mutable.Map'
How do I recurse over a complex nested JSON object this way?
Your mistake was
if(v.isInstanceOf[Map[String,Object]]){
maskRecursively(map.get(k).asInstanceOf[mutable.Map[String,Object]])
There are a few issues:
You check if v is an instance of Map, but then attempt to cast it to mutable.Map. They are technically different types (mutable vs immutable).
You check the type of v, but then apply the cast to map.get(k), which is going to be a different value and type from v. A map's get method returns an Option, hence the error message.
Thanks to type erasure on the JVM, the runtime won't be able to tell the difference between e.g. a Map[String, Object] and a Map[SomethingElse, Whatever] - both will just look like Map at runtime. The compiler should have given you a warning about the isInstanceOf call for this reason.
If you do an isInstanceOf / asInstanceOf combo, make sure the operand is the same each time. You already have v, so you don't need to look it up a second time from the map. And make sure you use the same type on both instanceOf calls.
Fix this by changing it to
if(v.isInstanceOf[mutable.Map[_, _]]){
maskRecursively(v.asInstanceOf[mutable.Map[String,Object]])
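As a side note, a pattern match avoids the separate isInstanceOf / asInstanceOf pair altogether. A rough sketch of the same loop written that way (PII_Data.getPIIData() is reused from the question's code; the @unchecked annotation acknowledges that the type arguments cannot be verified at runtime because of erasure):
import scala.collection.mutable

def maskRecursively(map: mutable.Map[String, Object]): mutable.Map[String, Object] = {
  val maskColumns = PII_Data.getPIIData()
  for ((k, v) <- map) {
    v match {
      case nested: mutable.Map[String, Object] @unchecked =>
        maskRecursively(nested)            // recurse into nested objects
      case _: List[_] =>                   // lists handled elsewhere
      case _ =>
        if (maskColumns.contains(k)) map += (k -> "*****")
    }
  }
  map
}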
After some digging, I was able to solve this:
def maskJson(jsonStr: String): String = {
  implicit val formats = org.json4s.DefaultFormats
  val mapper = new ObjectMapper()
  mapper.registerModule(DefaultScalaModule)
  val result = mapper.readValue(jsonStr, classOf[Map[String, Object]])
  val maskedJson = maskRecursively(result)
  mapper.writeValueAsString(maskedJson)
}

def maskRecursively(map: Map[String, Object]): collection.mutable.Map[String, Object] = {
  val mutable = collection.mutable.Map[String, Object]()
  val maskColumns = PII_Data.getJsonPIIFields()
  for ((k, v) <- map) {
    if (v.isInstanceOf[Map[String, Object]]) {
      mutable += k -> maskRecursively(v.asInstanceOf[Map[String, Object]])
    } else if (v.isInstanceOf[List[Object]]) {
      val list = v.asInstanceOf[List[Map[String, Object]]].map(i => maskRecursively(i)).toList
      mutable += k -> list
    } else {
      if (maskColumns.contains(k)) {
        mutable += (k -> "*****")
      } else {
        mutable += k -> v
      }
    }
  }
  mutable
}

How to pass an operator as a parameter

I'm trying to pass an operator to a module so the module can be built generically. I pass a two-input operator parameter and then use it in a reduction operation. If I replace the passed parameter with a concrete operator this works OK.
What's the correct way to pass a Chisel/UInt/Data operator as a module parameter?
val io = IO(new Bundle {
  val a = Vec(n, Flipped(Decoupled(UInt(width.W))))
  val z = Decoupled(UInt(width.W))
})

val a_int = for (n <- 0 until n) yield DCInput(io.a(n))
val z_int = Wire(Decoupled(UInt(width.W)))

val all_valid = a_int.map(_.valid).reduce(_ & _)
z_int.bits := a_int.map(_.bits).reduce(_ op _)
...
Here's a fancy Scala way of doing it
import chisel3._
import chisel3.tester._
import chiseltest.ChiselScalatestTester
import org.scalatest.{FreeSpec, Matchers}

class ChiselFuncParam(mathFunc: UInt => UInt => UInt) extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(8.W))
    val b = Input(UInt(8.W))
    val out = Output(UInt(8.W))
  })

  io.out := mathFunc(io.a)(io.b)
}

class CFPTest extends FreeSpec with ChiselScalatestTester with Matchers {
  def add(a: UInt)(b: UInt): UInt = a + b
  def sub(a: UInt)(b: UInt): UInt = a - b

  "add works" in {
    test(new ChiselFuncParam(add)) { c =>
      c.io.a.poke(9.U)
      c.io.b.poke(5.U)
      c.io.out.expect(14.U)
    }
  }

  "sub works" in {
    test(new ChiselFuncParam(sub)) { c =>
      c.io.a.poke(9.U)
      c.io.b.poke(2.U)
      c.io.out.expect(7.U)
    }
  }
}
It might be clearer, though, to just pass in a string form of the operator and use a simple Scala match to select the appropriate code generation. Something like:
class MathOp(code: String) extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(8.W))
    val b = Input(UInt(8.W))
    val out = Output(UInt(8.W))
  })

  io.out := (code match {
    case "+" => io.a + io.b
    case "-" => io.a - io.b
    // ...
  })
}
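A hypothetical instantiation of that string-parameterized module might look like this (TwoOps, addUnit and subUnit are names made up for the sketch):
import chisel3._

class TwoOps extends Module {
  val io = IO(new Bundle {
    val a    = Input(UInt(8.W))
    val b    = Input(UInt(8.W))
    val sum  = Output(UInt(8.W))
    val diff = Output(UInt(8.W))
  })

  // Each instance is elaborated with a different operator string.
  val addUnit = Module(new MathOp("+"))
  val subUnit = Module(new MathOp("-"))

  addUnit.io.a := io.a
  addUnit.io.b := io.b
  subUnit.io.a := io.a
  subUnit.io.b := io.b

  io.sum  := addUnit.io.out
  io.diff := subUnit.io.out
}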
Chick has already provided a good answer, but I want to provide another example to illustrate and explain some of the really powerful features of Chisel and Scala for hardware design. I know you (Guy) probably know most of this but I wanted to provide a detailed answer for anyone else coming across this question.
I'll start with the complete example and then highlight some of the features being used.
class MyModule[T <: Data](n: Int, gen: T)(op: (T, T) => T) extends Module {
  require(n > 0, "reduce only works on non-empty Vecs")

  val io = IO(new Bundle {
    val in = Input(Vec(n, gen))
    val out = Output(gen)
  })

  io.out := io.in.reduce(op)
}
[T <: Data] This is called a Type Parameter (T) with an Upper Type Bound (<: Data). This allows us to make the Module generic to the hardware type with which we parameterize it. We give T an upper bound of Data (which is a type from Chisel) to tell Scala that this is a hardware type we can use to generate hardware with Chisel. The upper bound means it must be a subtype of Data, which includes all of the Chisel hardware types (e.g. UInt, SInt, Vec, Bundle and user classes that extend Bundle). This is the exact same way that the Chisel constructors like Reg(...) are parameterized.
You will notice that there are multiple parameter lists, (n: Int, gen: T) and (op: (T, T) => T). The first argument, n: Int, is a simple integer parameter. The second argument, gen: T, is our generic type T, and thus a subtype of Data that serves as a template for the hardware we will generate inside the Module.
The second parameter list (op: (T, T) => T) is a function. As a functional programming language, functions are values in Scala, and thus can be used as arguments just like our Int argument. (T, T) => T reads as a function of two arguments, both of type T, that returns a T. Remember that T is our hardware type that is a subclass of Data. Because op is in a second parameter list, this is telling Scala that it should infer T from gen, and then use the same T for op. For example, if gen is UInt(8.W), Scala infers T as UInt. This then constrains op to be a function of type (UInt, UInt) => UInt. Bitwise AND is such a function, so we can pass an anonymous function to AND two UInts: (_ & _).
Now that we have our abstract, type parameterized MyModule class, how do we actually use it? Above I gave snippets of how to use it with UInts, but let's see how to get some actual Verilog:
object MyMain extends App {
  println(chisel3.Driver.emitVerilog(new MyModule(4, UInt(8.W))(_ & _)))
}
Alternatively, we can parameterize MyModule with a more complex type:
class MyBundle extends Bundle {
  val bar = Bool()
  val baz = Bool()
}

object MyMain extends App {
  def combineMyBundle(a: MyBundle, b: MyBundle): MyBundle = {
    val w = Wire(new MyBundle)
    w.bar := a.bar && b.bar
    w.baz := a.baz && b.baz
    w
  }

  println(chisel3.Driver.emitVerilog(new MyModule(4, new MyBundle)(combineMyBundle)))
}
We also had to define a function of type (MyBundle, MyBundle) => MyBundle which we did with combineMyBundle.
You can see a complete, runnable version of the code I presented above on Scastie.
I hope someone finds this example useful!

How to dynamically build function calls with different numbers of arguments in Rust?

How do I take a vector of function argument AST variants, extract the values, and use them to instantiate a function call?
I am writing an interpreter that evaluates certain expressions. Some of the expressions are function calls. I am having a hard time figuring out how to translate the function calls AST to the actual call. The AST gives me the function name and a vector of arguments. I can lookup the function pointer to call from the name using a map, but passing the arguments to the function pointer is problem.
Rust does not have a splat operator (argument expansion). I could pass them as a tuple and use destructuring of the arguments, but I can't figure out how to convert the vector of AST argument enum variants to a tuple of the concrete types.
I can't simply map or loop over the AST arguments to extract the values and produce a tuple.
I can use nested tuples to build a heterogeneous list incrementally:
fn prepend<I, T>(i: I, t: T) -> (I, T) { (i, t) }

fn foo() {
    let x = ();
    let x = prepend(1, x);
    let x = prepend(2.0, x);
    let x = prepend(true, x);
}
But that only works because x gets shadowed and the new binding has a different type. This won't work:
fn foo() {
    let mut x = ();
    x = prepend(1, x);
    x = prepend(2.0, x);
    x = prepend(true, x);
}
Any ideas?
You don't. Rust is a statically typed language and you are attempting to do non-statically-determinable actions.
Instead, all of your functions need to take in a collection of arguments, verify that there is the right number of arguments (and type, if appropriate to your interpreter), then call the appropriate Rust function with a fixed number of arguments:
// All of the panicking can be replaced by proper error handling.
enum Arg {
    Bool(bool),
    Int(i32),
}

impl Arg {
    fn into_bool(self) -> bool {
        match self {
            Arg::Bool(b) => b,
            _ => panic!("Not a bool"),
        }
    }

    fn into_int(self) -> i32 {
        match self {
            Arg::Int(i) => i,
            _ => panic!("Not an int"),
        }
    }
}

fn some_fn_wrapper(mut args: Vec<Arg>) {
    assert_eq!(args.len(), 3);
    let c = args.pop().unwrap();
    let b = args.pop().unwrap();
    let a = args.pop().unwrap();
    some_fn(a.into_bool(), b.into_int(), c.into_bool())
}

fn some_fn(_a: bool, _b: i32, _c: bool) {}
All of this will happen at runtime, as you want to create a highly dynamic language.
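Since the question mentions looking up the function by name, one way to wire this up is a map from names to wrapper functions that all share the fn(Vec<Arg>) signature. A rough sketch, reusing Arg and some_fn_wrapper from above (build_registry and call_by_name are names made up for this illustration):
use std::collections::HashMap;

fn build_registry() -> HashMap<&'static str, fn(Vec<Arg>)> {
    let mut registry: HashMap<&'static str, fn(Vec<Arg>)> = HashMap::new();
    // Every wrapper has the same uniform signature, so they can live in one map.
    registry.insert("some_fn", some_fn_wrapper as fn(Vec<Arg>));
    registry
}

fn call_by_name(name: &str, args: Vec<Arg>) {
    let registry = build_registry();
    match registry.get(name) {
        Some(f) => f(args),
        None => panic!("Unknown function: {}", name),
    }
}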
See also:
How do I pass each element of a slice as a separate argument to a variadic C function?
How to pass a dynamic amount of typed arguments to a function?
Calling a function only known at runtime
How can I create a function with a variable number of arguments?
Is Reflection possible in Rust, and if so how can I invoke an unknown function with some arguments?

Why does the compiler treat closures and local functions differently?

I thought closures and functions were the same thing, but when referencing a property inside a local function the compiler doesn't require self, while inside a closure it requires me to write self. Why are these two things different?
The sample code for clarity:
class Foo {
    let bar = "bar"
    func baz() {
        func localBaz() {
            println(bar) // No complaint from the compiler.
        }
        let bazClosure = {
            println(self.bar) // Here if I write just println(bar), the compiler complains.
        }
    }
}
Your expectation is wrong - functions and closures in Swift are not the same thing. A func essentially sets up a lazy var binding with an [unowned self] declaration. Thus, if you want to get rid of func you could transform the following:
class Foo {
    let bar = "bar"
    // this is not your 'baz'; just an example
    func baz () { println (bar) }
}
as
class Foo {
    let bar = "bar"
    lazy var baz = { [unowned self] in println (self.bar) }
}
You can see that func is doing more than just a closure.
Furthermore, and importantly, func sets up a recursive binding environment, which allows the body of a func to refer to the func itself. Thus you can write:
1> class Foo {
2. func fact (x:Int) -> Int {
3. if 1 == x { return x }
4. else { return x * fact (x - 1) }}
5. }
6> Foo().fact(5)
$R0: (Int) = 120
but not
7> class Foo {
8. lazy var fact = { (x:Int) -> Int in
9. if 1 == x { return x }
10. else { return x * fact (x - 1) }}}
repl.swift:10:27: error: variable used within its own initial value
else { return x * fact (x - 1) }}}
^
Indeed, I do not know why closures need self in Swift to access instance properties, but let's think about it.
Your baz() is a method belonging to the class Foo, while the closure is more like an external function. In Objective-C, every method call actually passes a self argument to invoke that method.
Therefore a closure needs a self pointer (or something named self referencing the instance of Foo) to access its properties.

How to pass a function as argument in Rust

Given the following Rust program:
fn call_twice<A>(val: A, f: fn(A) -> A) -> A {
    f(f(val))
}

fn main() {
    fn double(x: int) -> int {x + x};
    println!("Res is {}", call_twice(10i, double));
    // println!("Res is {}", call_twice(10i, (x: int) -> int {x + x}));
    // ^ this line will fail
}
Why can I pass double as the function, but not inlined? What is a good way to achieve the same behaviour without defining the function somewhere?
2016-04-01 Update:
As of Rust 1.0, the code should look like this:
fn call_twice<A, F>(val: A, mut f: F) -> A
    where F: FnMut(A) -> A {
    let tmp = f(val);
    f(tmp)
}

fn main() {
    fn double(x: i32) -> i32 {x + x};
    println!("Res is {}", call_twice(10, double));
    println!("Res is {}", call_twice(10, |x| x + x));
}
The change to the closure parameter is because closures are now unboxed.
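Note that the FnMut bound also accepts closures that capture their environment, which a plain fn pointer parameter cannot express. A small sketch on top of the Rust 1.0 version above:
fn main() {
    let step = 3;
    // `step` is captured from the enclosing scope.
    println!("Res is {}", call_twice(10, |x| x + step)); // prints 16
}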
Original:
Insofar as I know, you can't define functions inline like that.
What you want is a closure. The following works:
fn call_twice<A>(val: A, f: |A| -> A) -> A {
    let tmp = f(val);
    f(tmp)
}

fn main() {
    fn double(x: int) -> int {x + x};
    println!("Res is {}", call_twice(10i, double));
    println!("Res is {}", call_twice(10i, |x| x + x));
}
There are a few things to note:
Functions coerce to closures, but the opposite isn't true.
You need to store the result of f(val) in a temporary due to borrowing rules. Short version: you need unique access to a closure to call it, and the borrow checker isn't quite clever enough to realise the two calls are independent in their original positions.
Closures are in the process of being replaced by unboxed closures, so this will change in the future, but we're not quite there yet.