def or val for defining a function in Scala

I'm learning Programming Paradigms at my university and reading course material provided by the lecturer that defines a function this way:
val double = (x: Int) => 2 * x
double: Int => Int = <function1>
But from my own studies I found and got used to defining the same function like this:
def d (x: Int) = 2 * x
d: (x: Int)Int
I'm new to Scala, and both definitions give a result of:
res21: Int = 8
Upon passing 4 as the parameter.
Now my main question is: why would the lecturer prefer to use val to define a function? I see it as longer and not really necessary, unless using val gives some added advantage that I don't know of. Besides, I understand that using val binds a name to a value, so later in the program I could mistakenly write val double = 5 and the function would be gone!
At this stage I'm quite convinced I learned a better way of defining a function, unless someone tells me otherwise.

Strictly speaking, def d(x: Int) = 2 * x is a method, not a Function. However, Scala can transparently convert (lift) methods into Functions for us, so you can use the d method anywhere that requires an Int => Int Function.
There is a small overhead in performing this conversion, as a new Function instance is created every time. We can see this happening here:
val double = (x: Int) => 2 * x
def d (x: Int) = 2 * x
def printFunc(f: Int => Int) = println(f.hashCode())
printFunc(double)
printFunc(double)
printFunc(d)
printFunc(d)
Which results in output like so:
1477986427
1477986427
574533740
1102091268
You can see that when explicitly defining a Function using a val, our program creates only a single Function and reuses it when we pass it as an argument to printFunc (we see the same hash code). When we use a def, the conversion to a Function happens every time we pass it to printFunc, and we create several instances of the Function with different hash codes.
That said, the performance overhead is small and often doesn't make any real difference to our program, so defs are often used to define Functions as many people find them more concise and easier to read.
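If the repeated conversion ever mattered, one option (a small sketch, not part of the original answer) is to perform the method-to-function conversion once and reuse the resulting value:
def d(x: Int) = 2 * x
// Lift the method into a function value a single time and reuse it;
// in Scala 2 you can also force the conversion explicitly with `d _`.
val dAsFunction: Int => Int = d
printFunc(dAsFunction) // prints the same hash code on every call
printFunc(dAsFunction)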

In Scala, function values are monomorphic (i.e. they cannot have type parameters, aka "generics"). If you want a polymorphic function, you have to work around this, for example by defining it using a method:
def headOption[A]: List[A] => Option[A] = {
  case Nil => None
  case x :: xs => Some(x)
}
It would not be valid syntax to write val headOption[A]. Note that this doesn't create a polymorphic function value; it is just a polymorphic method returning a monomorphic function value of the appropriate type.
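As a quick usage sketch (assuming the headOption definition above), each call to the method fixes a concrete type and hands back an ordinary monomorphic function value:
val headOfInts: List[Int] => Option[Int] = headOption[Int]
val headOfStrings: List[String] => Option[String] = headOption[String]
headOfInts(List(1, 2, 3)) // Some(1)
headOfStrings(Nil)        // None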

Because you might have something like the following:
abstract class BaseClass {
  val intToIntFunc: Int => Int
}

class A extends BaseClass {
  override val intToIntFunc = (i: Int) => i * 2
}
So its purpose might not be obvious with a very simple example, but that Function value could itself be passed to higher-order functions: functions that take functions as parameters. If you look in the Scala collections documentation you will see numerous methods that take functions as parameters. It's a very powerful and versatile tool, but you need to get to a certain complexity and familiarity with algorithms before the cost/benefit becomes obvious.
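For instance (a minimal sketch built on the classes above), the stored function value can be handed straight to a higher-order method such as map:
val a = new A
List(1, 2, 3).map(a.intToIntFunc) // List(2, 4, 6)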
I would also suggest not using "double" as an identifier name. Although legal Scala, it is easy to confuse it with the type Double.


How to get the index of the max element in a UInt Vec in Chisel

I'm trying to get the index of the Max element in a UInt vector.
My code looks like this
val pwr = Vec.tabulate(N) {i => energyMeters(i).io.pwr}
val maxPwr = pwr.indexOf(pwr.max)
However, this code generates a compilation error:
No implicit Ordering Defined for Chisel.UInt.
val maxPwr = pwr.indexOf(pwr.max)
^
I understand that I probably need to implement the max function. Can someone give an example of how this should be done?
Edit:
I also tried this:
val pwr = Vec.tabulate(N) {i => energyMeters(i).io.pwr}
val maxPwr = pwr reduceLeft {(x,y) => Mux(x > y,x,y)}
val maxPwridx = pwr.indexOf(maxPwr)
But it fails on elaboration when I tried to cast maxPwridx to UInt.
I've ended up with this workaround:
val pwr = Vec.tabulate(N) {i => energyMeters(i).io.pwr}
val maxPwr = pwr reduceLeft {(x,y) => Mux(x > y,x,y)}
val maxPwridx = pwr.indexWhere((x: UInt) => x === maxPwr)
Chisel's Vec extends Scala's Seq. This means that a Vec has both dynamic access hardware methods that will allow you to generate hardware to search for something in a Vec (e.g., indexWhere, onlyIndexWhere, lastIndexWhere) as well as all the methods available to normal Scala sequences (e.g., indexOf).
For the purposes of doing hardware operations, you want to use the former (as you found in your last edit, which looks great!) as opposed to the latter.
To get some handle on this, have a look at the Chisel 3.3.0-RC1 API documentation for VecLike, filtered to exclude inherited methods. Notable there are indexWhere, onlyIndexWhere, lastIndexWhere, exists, forall, and contains.
In the documentation for Vec itself, the only additional method of interest would be reduceTree.
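Putting this together, here is a small sketch (hypothetical module and port names, not from the original question) of hardware that finds the index of the maximum element of a Vec:
import chisel3._
import chisel3.util._

class MaxIndex(n: Int, w: Int) extends Module {
  val io = IO(new Bundle {
    val in  = Input(Vec(n, UInt(w.W)))
    val idx = Output(UInt(log2Ceil(n).W))
  })

  // Reduce to the maximum value, then generate a hardware search for its index.
  val maxVal = io.in.reduceLeft((x, y) => Mux(x > y, x, y))
  io.idx := io.in.indexWhere((x: UInt) => x === maxVal)
}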

Error while passing values using peekpoketester

I am trying to pass some random integers (which I have stored in an array) to my hardware as an input through the poke method in PeekPokeTester, but I am getting this error:
chisel3.internal.ChiselException: Error: Not in a UserModule. Likely cause: Missed Module() wrap, bare chisel API call, or attempting to construct hardware inside a BlackBox.
What could be the reason? I don't think I need a module wrap here as this is not hardware.
class TesterSimple(dut: DeviceUnderTest)(parameter1: Int)(parameter2: Int)
    extends PeekPokeTester(dut) {
  var x = Array[Int](parameter1)
  var y = Array[Int](parameter2)
  var z = 1
  poke(dut.io.IP1, z.asUInt)
  for (i <- 0 until parameter1) { poke(dut.io.IP2(i), x(i).asUInt) }
  for (j <- 0 until parameter2) { poke(dut.io.IP3(j), y(j).asUInt) }
}

object TesterSimple extends App {
  implicit val parameter1 = 2
  implicit val parameter2 = 2
  chisel3.iotesters.Driver(() => DeviceUnderTest(parameter1: Int, parameter2: Int)) { c =>
    new TesterSimple(c)(parameter1, parameter2)
  }
}
I'd suggest a couple of things.
Main problem: I think you are not initializing your arrays properly.
Try using Array.fill or Array.tabulate to create and initialize the arrays:
val rand = scala.util.Random
var x = Array.fill(parameter1)(rand.nextInt(100))
var y = Array.fill(parameter2)(rand.nextInt(100))
You don't need the .asUInt in the poke; it accepts Ints or BigInts.
When defining hardware constants, use .U instead of .asUInt; the latter is a way of casting other Chisel types. It does work, but it's a backwards-compatibility thing.
It's better not to start variable or method names with capital letters.
I suggest using class DutName(val parameter1: Int, val parameter2: Int), or class DutName(val parameter1: Int)(val parameter2: Int) if you prefer.
This will allow you to use the DUT's parameters when you are writing your test.
E.g. for(i <- 0 until dut.parameter1){poke(dut.io.IP2(i), x(i))}
This will save you having to duplicate parameter objects on your DUT and your Tester; a fuller sketch is below.
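Putting those suggestions together, a rough sketch of the tester (assuming the DUT exposes parameter1 and parameter2 as vals and has the same io names as in the question) could look like:
class TesterSimple(dut: DeviceUnderTest) extends PeekPokeTester(dut) {
  val rand = scala.util.Random
  // Fill the stimulus arrays with random values instead of a single length element.
  val x = Array.fill(dut.parameter1)(rand.nextInt(100))
  val y = Array.fill(dut.parameter2)(rand.nextInt(100))

  poke(dut.io.IP1, 1)
  for (i <- 0 until dut.parameter1) { poke(dut.io.IP2(i), x(i)) }
  for (j <- 0 until dut.parameter2) { poke(dut.io.IP3(j), y(j)) }
}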
Good luck!
Could you also share your DUT?
I believe the most likely case is your DUT does not extend Module

Examples in Chisel that make use of MuxCase

How do I implement a 4:1 mux in Chisel without using 2:1 muxes? Is there a way to select one of N inputs by having something like Mux(sel, A, B, C, D, ..., N), where N can be taken in as a parameter? I am aware of MuxCase in Chisel, but I have yet to find an example that makes use of it; any documentation or example regarding this is greatly appreciated. Thank you.
There are a couple of usages in rocket-chip in MultiWidthFifo.scala
It's pretty straightforward. It takes a default value for what happens if none of the supplied conditions is true; otherwise it looks through a sequence of tuples, where each tuple is of the form (bool condition, result), often written condition -> result. The return value is the result from the first boolean condition that is true.
Here is a toy example: the number of input bools is passed as a parameter into a module, which then uses that value to construct a sequence of mux cases.
class UsesMuxCase(numCases: Int) extends Module {
  val io = IO(new Bundle {
    val output = Output(UInt(10.W))
    val inputs = Input(Vec(numCases, Bool()))
  })

  val cases = io.inputs.zipWithIndex.map { case (bool, index) =>
    bool -> index.U(10.W)
  }
  io.output := MuxCase(0.U(10.W), cases)
}
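If what you actually want is an N:1 mux driven by a select signal rather than by per-input booleans, a rough sketch (hypothetical module, not from the answer above) can still use MuxCase by turning each select value into a condition:
import chisel3._
import chisel3.util._

class Mux4(w: Int) extends Module {
  val io = IO(new Bundle {
    val sel = Input(UInt(2.W))
    val in  = Input(Vec(4, UInt(w.W)))
    val out = Output(UInt(w.W))
  })

  // in(0) doubles as the default when no other condition matches.
  io.out := MuxCase(io.in(0), (1 until 4).map(i => (io.sel === i.U) -> io.in(i)))
}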

Scala: Passing Vs applying function

Let's say we have the following code snippet:
List(1, 2, 3)
.map(doubleIt) // passing function
.map(x => doubleIt(x)) // applying function
def doubleIt(i: Int): Int = 2 * i
As you can see, we can either pass doubleIt directly or apply it inside another anonymous lambda. I have always wondered which approach is better. I personally prefer passing the function directly, as it seems like the second approach would end up creating an extra wrapper lambda for no good reason, but I am not 100% positive my reasoning is correct.
I am curious to know what the pro/cons of each style are and whether one is definitely better than the other.
This might change in Scala 2.12+, but at the moment both approaches are identical. As a test, I created the following:
class Test {
  def testPassingFunction: List[Int] = List(1, 2, 3).map(doubleIt)
  def testApplyingFunction: List[Int] = List(1, 2, 3).map(x => doubleIt(x))
  def doubleIt(i: Int): Int = 2 * i
}
I then compiled it and used javap to disassemble the bytecode. Both functions are identical (except for different Strings). In all cases a new class that extends Function1 is created and calls the appropriate method. As @Mike says in the comments, the Scala compiler converts everything to the second form.
It turns out that it depends somewhat on what your "function" is. If it is actually a function (that is, a function value, defined as val doubleIt = (x: Int) => 2 * x), then your hunch is correct. The version in which you pass a function literal that simply applies doubleIt (i.e., l map { x => doubleIt(x) }) is compiled just as written, resulting in an anonymous function that delegates to doubleIt. Passing doubleIt as a function value takes out the middle man. If doubleIt is a method, on the other hand, then both forms are compiled identically.
You can easily verify this yourself at the REPL. Define the following class:
class A {
  val l = List(1, 2, 3)
  val f = (x: Int) => 2 * x
  def g(x: Int) = 2 * x
  def m1 = l map f
  def m2 = l map { x => f(x) }
  def m3 = l map g
  def m4 = l map { x => g(x) }
}
Then run :power and :javap -v A.
That said, the distinction is unlikely to make a practical difference in any but the most performance-critical code. In ordinary circumstances, code clarity is the more important consideration and depends somewhat on who will be reading your code in the future. Personally, I tend to prefer the concise lst map doubleIt form; this form eliminates a bunch of syntactic noise that adds nothing semantically. I suppose the longer form may be considered more explicit, especially for developers that aren't very familiar with the map method. The literal reading matches the intent quite well: "(Given) list, map (each) x to doubleIt(x)". Your team will have to decide what's best for you and your organization.

Passing functions and operating on their results within Scala's Actors

I'm implementing an actor-based app in Scala, and I'm trying to pass functions between the actors so that they are processed only when some message is received by the actor.
import actors.Actor
import java.util.Random
import scala.Numeric._
import Implicits._

class Constant(val n: Number) extends Actor {
  def act() {
    loop {
      receive {
        case "value" => reply( { n } )
      }
    }
  }
}

class Arithmetic[T: Numeric](A: () => T, B: () => T) extends Actor {
  def act() {
    receive {
      case "sum" => reply(A() + B())
      /* case "mul" => reply(A * B)
       */
    }
  }
}

object Main extends App {
  val c5 = new Constant(5)
  c5.start
  val a = new Arithmetic({ c5 !! "value" }, { c5 !! "value" })
  a.start
  println(a !? "sum")
  println(a !? "mul")
}
In the example code above I would expect the output to be both 5+5 and 5*5. The issue is that reply is not a typed function, and as such I'm unable to have the operators (+, *) operate on the results of A and B.
Can you provide any help on how to better design/implement such system?
Edit: Code updated to better reflect the problem. Error in:
error: could not find implicit value for evidence parameter of type Numeric[Any]
val a = new Arithmetic({c5 !! "value"}, {c5!!"value"} )
I need to be able to pass the function to be evaluated in the actor whenever I call it. This example uses static values, but I'll be using dynamic values in the future, so passing the value won't solve the problem. Also, I would like to receive different value types (Int/Long/Double) and still be able to use the same code.
The error is: error: could not find implicit value for evidence parameter of type Numeric[Any]. The definition of !! is:
def !! (msg: Any): Future[Any]
So the T that Arithmetic is getting is Any. There truly isn't a Numeric[Any].
I'm pretty sure that is not your problem. First, A and B are functions, and functions don't have + or *. If you called A() and B(), then you might stand a chance... except for the fact that they are java.lang.Number, which also does not have + or * (or any other method you'd expect it to have).
Basically, there's no "Number" type that is a superclass or interface of all numbers, for the simple reason that Java doesn't have one. There are a lot of questions touching this subject on Stack Overflow, including some of my own very first questions about Scala. Investigate scala.math.Numeric, which is the best approximation for the moment.
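For illustration (a minimal sketch, not part of the original answer), scala.math.Numeric lets you write arithmetic that works uniformly across Int, Long, Double, and so on:
import scala.math.Numeric.Implicits._

// Works for any T with a Numeric instance in scope.
def sum[T: Numeric](a: T, b: T): T = a + b
def product[T: Numeric](a: T, b: T): T = a * b

sum(2, 3)         // 5
product(2.5, 4.0) // 10.0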
Method vs Function and lack of parentheses
Methods and functions are different things; see the many related questions here, and the rules for dropping parentheses are different as well. I'll let the REPL speak for me:
scala> def f: () => Int = () => 5
f: () => Int
scala> def g(): Int = 5
g: ()Int
scala> f
res2: () => Int = <function0>
scala> f()
res3: Int = 5
scala> g
res4: Int = 5
scala> g()
res5: Int = 5