What is the purpose of the makeSink method in making IOs for a periphery - chisel

I was following some examples of adding peripherals to rocket-chip, using sifive-blocks as a reference.
Below is an excerpt from their I2C example (I hope it's ok to post it here):
case object PeripheryI2CKey extends Field[Seq[I2CParams]]

trait HasPeripheryI2C { this: BaseSubsystem =>
  val i2cNodes = p(PeripheryI2CKey).map { ps =>
    I2C.attach(I2CAttachParams(ps, pbus, ibus.fromAsync)).ioNode.makeSink()
  }
}

trait HasPeripheryI2CBundle {
  val i2c: Seq[I2CPort]
}

trait HasPeripheryI2CModuleImp extends LazyModuleImp with HasPeripheryI2CBundle {
  val outer: HasPeripheryI2C
  val i2c = outer.i2cNodes.zipWithIndex.map { case (n, i) => n.makeIO()(ValName(s"i2c_$i")) }
}
I understand the makeIO step, which takes a bundle and applies IO() to it, but I don't understand the makeSink step.
Why do they do this step? Isn't makeIO enough?

I'm not an expert in rocket-chip's Diplomacy, but from glancing at the code, I think makeIO and makeSink do fundamentally different things.
makeIO takes the BundleBridgeSource and materializes ports in the Chisel module implementation for driving that source. The same method exists on BundleBridgeSink. I believe this method is the way you take either side of a bundle bridge and interface with it in the actual Chisel part of the generator (as opposed to the diplomatic part).
makeSink turns a BundleBridgeSource into a BundleBridgeSink. It doesn't materialize Chisel ports; it stays in the Diplomacy world rather than the Chisel world.
In the I2C example you included, note how the part with makeSink is in a trait that mixes into something extending BaseSubsystem; it is diplomatic. On the other hand, HasPeripheryI2CModuleImp, which has the makeIO, extends LazyModuleImp, which is the Chisel part. One way to think about this is as two different "views" of the same thing. Chisel and Diplomacy use different objects, hence i2cNodes (diplomatic) vs. i2c (Chisel).
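To make the two views concrete, here is a rough sketch (not taken from sifive-blocks; MyDeviceIO and the surrounding LazyModules are hypothetical, and exact imports/signatures vary across rocket-chip versions) of a BundleBridgeSource being turned into a sink in the diplomatic view and then into real ports in the module implementation:

import chisel3._
import freechips.rocketchip.config.Parameters
import freechips.rocketchip.diplomacy._

class MyDeviceIO extends Bundle { val irq = Output(Bool()) }

class MyDevice(implicit p: Parameters) extends LazyModule {
  val ioNode = BundleBridgeSource(() => new MyDeviceIO)  // diplomatic source node
  lazy val module = new LazyModuleImp(this) {
    ioNode.bundle.irq := false.B                         // drive the source on the Chisel side
  }
}

class MyTop(implicit p: Parameters) extends LazyModule {
  val device = LazyModule(new MyDevice)
  val ioSink = device.ioNode.makeSink()                  // Diplomacy world: still just a node
  lazy val module = new LazyModuleImp(this) {
    val io = ioSink.makeIO()                             // Chisel world: materializes a port
  }
}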

How to specify a Blackbox Bundle that maps to `input [127:0]` but acts like a Vec(2, UInt(64)) in the Chisel source?

I want to interface with a full AXI crossbar whose ports
look roughly like the following:
input [(NM * AXI_DATA_WIDTH) - 1:0] S_AXI_XYZ0,
...
output [(NS * AXI_SOMETHING_WIDTH) - 1:0] M_AXI_XYZ0
...
where NM denotes the number of masters and NS denotes the number of slaves. As you can imagine, each of the ports of the crossbar will only be partially connected to each slave/master.
Now, ideally I'd like to define a Chisel Bundle like
class AxiXbar(val AXI_DATA_WIDTH: Int, val NS: Int, ...) extends Bundle {
val S_AXI_XYZ0 = Input(Vec(NS, UInt(AXI_DATA_WIDTH.W)))
...
}
but as soon as I use that in order to give a BlackBox its interface
class MyXbar extends BlackBox {
  val io = IO(new AxiXbar)
}
Chisel will try to verilogify MyXbar to something like
MyXbar #(...) inst (
  ...
  .S_AXI_XYZ0_0(foo_wire_0),
  .S_AXI_XYZ0_1(foo_wire_1),
  ...
);
This is fine when Chisel generates the complete design, but when interfacing with a BlackBox module, I need Chisel to flatten the Chisel Vec into only one S_AXI_XYZ0 port instead. How can I achieve that?
I am aware that Chisel3 supports DataView as a method to change the "shape" of an interface bundle, but in their DataView Cookbook they state that subword viewing (split/concat) is not yet supported.
I tried to use Chisel's casts asTypeOf(...) and asUInt, without success though, as I do not fully grasp what they do under the hood, which makes it hard to apply them correctly.
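For reference, one common workaround (a hedged sketch; the module and port names below are hypothetical) is to keep the BlackBox io flat, so that the emitted Verilog has a single wide port, and to convert between the flat UInt and the Vec view in a thin Chisel wrapper using asUInt / asTypeOf:

import chisel3._

// The BlackBox keeps the flat port, matching `input [(NM*AXI_DATA_WIDTH)-1:0] S_AXI_XYZ0`.
class FlatXbarBlackBox(nm: Int, dataWidth: Int) extends BlackBox {
  val io = IO(new Bundle {
    val S_AXI_XYZ0 = Input(UInt((nm * dataWidth).W))
  })
}

// The wrapper exposes the Vec-shaped view to the rest of the Chisel design.
class XbarWrapper(nm: Int, dataWidth: Int) extends Module {
  val io = IO(new Bundle {
    val s_axi_xyz0 = Input(Vec(nm, UInt(dataWidth.W)))
  })
  val xbar = Module(new FlatXbarBlackBox(nm, dataWidth))
  xbar.io.S_AXI_XYZ0 := io.s_axi_xyz0.asUInt  // Vec -> flat UInt (element 0 ends up in the LSBs)
  // For a port in the other direction, the reverse conversion would be
  // someVec := flatUInt.asTypeOf(Vec(nm, UInt(dataWidth.W)))
}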

Scala function to Json

Can I map Scala functions to JSON, or perhaps via a different way than JSON?
I know I can map data types, which is fine. But I'd like to create a function, map it to JSON, send it via a REST method to another server, then add that function to a list of functions in another application and apply it.
For instance:
def apply(f: Int => String, v: Int) = f(v)
I want to make a list of functions that can be applied within an application, over different physical locations, and I want to add and remove functions from that list by means of REST calls.
Let's assume I understand the security problems...
P.S. If you downvote, you might as well have the decency to explain why.
If I understand correctly, you want to be able to send Scala code to be executed on different physical machines. I can think of a few different ways of achieving that:
Using tools for distributed computing, e.g. Spark. You can set up Spark clusters on different machines and then select to which cluster you want to submit Spark jobs. There are a lot of other tools for distributed computing that might also be worth looking into.
Pass Scala code as a string and compile it either within your server-side code (here's an example, and see the sketch at the end of this answer) or by invoking scalac as an external process.
Send the functions as byte code and execute the byte code on the remote machine.
If it fits with what you want to do, I'd recommend option #1.
Make sure that you can really trust the code that you want to execute, so as not to expose yourself to malicious code.
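For option #2, here is a minimal sketch of in-process compilation using Scala 2's runtime ToolBox (RemoteEval, the snippet string, and the Int => String signature are assumptions for illustration; scala-compiler and scala-reflect must be on the classpath):

import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox

object RemoteEval {
  // One toolbox instance; parse + eval turns source text into a live value.
  private val toolbox = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()

  def evalFunction(src: String): Int => String =
    toolbox.eval(toolbox.parse(src)).asInstanceOf[Int => String]
}

// Usage (the string would arrive in the REST payload):
// val f = RemoteEval.evalFunction("(i: Int) => \"value is \" + i")
// f(42)  // "value is 42"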
The answer is you can't do this, and even if you could you shouldn't!
You should never, never, never write a REST API that allows the client to execute arbitrary code in your application.
What you can do is create a number of named operations that can be executed. The client can then pass the name of the operation which the server can look up in a Map[String, <function>] and execute the result.
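A minimal sketch of that registry idea (the operation names and the Int => String signature are made up for illustration):

object OperationRegistry {
  // Server-side registry: clients refer to operations by name only.
  private val operations: Map[String, Int => String] = Map(
    "describe" -> ((i: Int) => s"the value is $i"),
    "double"   -> ((i: Int) => (i * 2).toString)
  )

  // Returns None for unknown operation names.
  def run(name: String, arg: Int): Option[String] =
    operations.get(name).map(f => f(arg))
}

// OperationRegistry.run("double", 21)  // Some("42")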
As mentioned in my comment, here is an example of how to turn a case class into JSON. Things to note: don't question the implicit val format line (it's magic); each case class requires a companion object in order to work; if you have Optional fields in your case class and define them as None when turning it into JSON, those fields will be ignored (if you define them as Some(whatever), they will look like any other field). If you don't know much about Scala Play, ignore the extra stuff for now - this is just inside the default Controller you're given when you make a new Project in IntelliJ.
package controllers

import javax.inject._
import play.api.libs.json.{Json, OFormat}
import play.api.mvc._
import scala.concurrent.Future

@Singleton
class HomeController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  case class Attributes(heightInCM: Int, weightInKG: Int, eyeColour: String)
  object Attributes {
    implicit val format: OFormat[Attributes] = Json.format[Attributes]
  }

  case class Person(name: String, age: Int, attributes: Attributes)
  object Person {
    implicit val format: OFormat[Person] = Json.format[Person]
  }

  def index: Action[AnyContent] = Action.async {
    val newPerson = Person("James", 24, Attributes(192, 83, "green"))
    Future.successful(Ok(Json.toJson(newPerson)))
  }
}
When you run this app with sbt run and hit localhost:9000 through a browser, the output you see on-screen is below (formatted for better reading). This is also an example of how you might send JSON as a response to a GET request. It isn't the cleanest example but it works.
{
  "name": "James",
  "age": 24,
  "attributes": {
    "heightInCM": 192,
    "weightInKG": 83,
    "eyeColour": "green"
  }
}
Once more though, I would never recommend passing actual functions between services. If you insist, maybe store them as a String in a case class and turn it into JSON like this. Even if you are okay with passing functions around, it might be a good exercise to practice security by validating the functions you receive to make sure they're not malicious.
I also have no idea how you'll convert them back into a function - maybe write the String you receive to a *.scala file and try to run it with a Bash script? Idk. I'll let you figure that one out.

Kotlin: how to simplify passing parameters to a base class constructor?

We have a package that we are looking to convert from Python to Kotlin, in order to then be able to migrate the systems using that package.
Within the package there is a set of classes that are all variants, or 'flavours', of a common base class.
Most of the code is in the base class, which has a significant number of optional parameters. So consider:
open class BaseTree(val height: Int = 10, val roots: Boolean = true, // ...... lots more!!
class FruitTree(val fruitSize: Int, height: Int = 10, roots: Boolean = true,
    // now need all possible parameters for any possible instance
): BaseTree(height = height, roots = roots // ... yet another variation of the same list
The code is not actually about trees; I just thought this was a simple way to convey the idea. There are about 20 parameters to the base class, and around 10 subclasses, and each subclass effectively needs to repeat the same two variations of the parameter list from the base class. A real nightmare if the parameter list ever changes!
Those from a Java background may comment that "20 parameters is too many" and miss that these are optional parameters, a language feature which impacts this aspect of the design. 20 required parameters would be crazy, but 10 or even 20 optional parameters is not so uncommon; check sqlalchemy's Table for example.
In Python, to call a base class constructor you can have:
def __init__(self, special, *args, **kwargs):
    super().__init__(*args, **kwargs)  # pass all parameters except special to the base constructor
Does anyone know a technique, using a different method (perhaps using interfaces or something?) to avoid repeating this parameter list over and over for each subclass?
There is no design pattern to simplify this use case.
Best solution: refactor the code to use a more Java-like approach, using properties in place of optional parameters.
Use case explained: a widely used class or method having numerous optional parameters is simply not practical in Java, and Kotlin evolved largely as a way of making Java code better. A Python class with 5 optional parameters, translated to Java with no optional parameters, could have 5! (and 5 factorial is 120) different Java signatures... in other words, a mess.
Obviously no object should routinely be instantiated with a huge parameter list, so normally such Python classes only evolve when the majority of calls do not need to specify these optional parameters, and the optional parameters are for the exceptional cases. The actual use case here is the implementation of a large number of optional parameters, where it should be very rare for any individual object to be instantiated using more than 3 of them. So a class with 10 optional parameters that is used 500 times in an application would still expect at most 3 of the optional parameters to be used in any one instantiation. But this is simply a design approach that is not workable in Java, no matter how often the class is reused.
In Java, methods do not have optional parameters, which means this case, where an object is instantiated in this way from a Java library, simply could never happen.
Consider an object with one mandatory instance parameter and five possible options. In Java these options would each be properties that can be set by setters; objects would then be instantiated, and the setter(s) called to change any relevant option from its default value, which is only infrequently required.
The downside is that these options cannot be set from the constructor and remain immutable, but the resulting code avoids the long optional parameter lists.
Another approach is to have a group of less 'swiss army knife' objects, with a set of specialised tools replacing the one do-it-all tool, even when the code could be seen as just slightly different nuances of the same theme.
Despite the support for optional parameters in Kotlin, the inheritance structure in Kotlin is not yet optimised for heavy use of this feature.
You can skip the names, as in BaseTree(height, roots), by passing the variables in positional order, but you cannot do what Python does, because Python is a dynamic language.
It is normal in Java to have to pass the variables up to the super class too:
FruitTree(int fruitSize, int height, boolean root) {
    super(height, root);
}
There are about 20 parameters to the base class, and around 10 subclasses
This is most likely a problem with your class design.
Reading your question I started to experiment myself and this is what I came up with:
interface TreeProperties {
    val height: Int
    val roots: Boolean
}

interface FruitTreeProperties : TreeProperties {
    val fruitSize: Int
}

fun treeProps(height: Int = 10, roots: Boolean = true) = object : TreeProperties {
    override val height = height
    override val roots = roots
}

fun TreeProperties.toFruitProperty(fruitSize: Int): FruitTreeProperties = object : FruitTreeProperties, TreeProperties by this {
    override val fruitSize = fruitSize
}

open class BaseTree(val props: TreeProperties)
open class FruitTree(props: FruitTreeProperties) : BaseTree(props)

fun main(args: Array<String>) {
    val largeTree = FruitTree(treeProps(height = 15).toFruitProperty(fruitSize = 5))
    val rootlessTree = BaseTree(treeProps(roots = false))
}
Basically I define the parameters in an interface and extend the interface for sub-classes using the delegation pattern. For convenience I added functions to generate instances of those interfaces, which also use default parameters.
I think this achieves the goal of not repeating parameter lists quite nicely, but it also has its own overhead. Not sure if it is worth it.
If your subclass really has that many parameters in the constructor, there is no way around it: you need to pass them all.
But (mostly) it's not a good sign when a constructor/function has that many parameters...
You are not alone in this. It has already been discussed on the Gradle Slack channel. Maybe in the future we will get compiler help for this, but for now you need to pass the arguments yourself.

Why does (self: Runnable =>) in Scala compile?

/** A marker indicating that a `java.lang.Runnable` provided to `scala.concurrent.ExecutionContext`
* wraps a callback provided to `Future.onComplete`.
* All callbacks provided to a `Future` end up going through `onComplete`, so this allows an
* `ExecutionContext` to special-case callbacks that were executed by `Future` if desired.
*/
trait OnCompleteRunnable {
  self: Runnable =>
}
When I checked the source code of Future in Scala, I couldn't understand why the self: Runnable => above compiles.
I know the symbol => can be used in a method parameter for call-by-name, and it can also be used to define a function. The code above looks like it is defining a function, which puzzles me.
This syntax denotes an explicitly typed self reference. It essentially means that whatever extends OnCompleteRunnable must also extend Runnable.
You may wonder how that differs from plain inheritance:
trait OnCompleteRunnable extends Runnable
In short, the difference is that a self-type annotation is just a type constraint. It does not establish a subtyping relation, so OnCompleteRunnable is not a subtype of Runnable and cannot implement or override anything from Runnable. It's something like a weaker form of inheritance.
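A minimal sketch of the constraint in action, using a local trait of the same shape (MyTask is a made-up name):

// Mixing in OnCompleteRunnable only compiles if the class is also a Runnable.
trait OnCompleteRunnable { self: Runnable => }

class MyTask extends Runnable with OnCompleteRunnable {
  def run(): Unit = println("running")
}

// class Broken extends OnCompleteRunnable {}  // does not compile: the self-type is not satisfied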

Does Swift have dynamic dispatch and virtual methods?

Coming from a C++/Java/C# background, I was expecting to see virtual methods in Swift; however, reading the Swift documentation I see no mention of virtual methods.
What am I missing?
Due to the large number of views, I have decided to offer a reward for an up-to-date and very clear/detailed answer.
Unlike C++, it is not necessary to designate that a method is virtual in Swift. The compiler will work out which of the following to use:
(the performance metrics of course depend on hardware)
Inline the method: 0 ns
Static dispatch: < 1.1 ns
Virtual dispatch: 1.1 ns (like Java, C#, or C++ when designated)
Dynamic dispatch: 4.9 ns (like Objective-C)
Objective-C of course always uses the latter. The 4.9 ns overhead is not usually a problem, as it represents a small fraction of the overall method execution time; where necessary, developers could seamlessly fall back to C or C++. In Swift, however, the compiler will analyze which is the fastest option that can be used and try to decide on your behalf, favoring inline, static, and virtual dispatch but retaining messaging for Objective-C interoperability. It's possible to mark a method with dynamic to encourage messaging.
One side-effect of this is that some of the powerful features afforded by dynamic dispatch may not be available, whereas their availability could previously have been assumed for any Objective-C method. Dynamic dispatch is used for method interception, which is in turn used by:
Cocoa-style property observers.
CoreData model object instrumentation.
Aspect Oriented Programming
The kinds of features above are those afforded by a late-binding language. Note that while Java uses vtable dispatch for method invocation, it is still considered a late-binding language, and therefore capable of the above features, by virtue of having a virtual machine and class-loader system, which is another approach to providing run-time instrumentation. "Pure" Swift (without Objective-C interop) is like C++ in that it is a direct-to-executable compiled language with static dispatch, so these dynamic features are not possible at runtime. In the tradition of ARC, we might see more of these kinds of features moving to compile time, which gives an edge with regard to "performance per watt" - an important consideration in mobile computing.
All methods are virtual; however, you need to declare that you are overriding a method from a base class using the override keyword:
From the Swift Programming Guide:
Overriding
A subclass can provide its own custom implementation of an instance
method, class method, instance property, or subscript that it would
otherwise inherit from a superclass. This is known as overriding.
To override a characteristic that would otherwise be inherited, you
prefix your overriding definition with the override keyword. Doing so
clarifies that you intend to provide an override and have not provided
a matching definition by mistake. Overriding by accident can cause
unexpected behavior, and any overrides without the override keyword
are diagnosed as an error when your code is compiled.
The override keyword also prompts the Swift compiler to check that
your overriding class’s superclass (or one of its parents) has a
declaration that matches the one you provided for the override. This
check ensures that your overriding definition is correct.
class A {
    func visit(target: Target) {
        target.method(self);
    }
}

class B: A {}

class C: A {
    override func visit(target: Target) {
        target.method(self);
    }
}

class Target {
    func method(argument: A) {
        println("A");
    }
    func method(argument: B) {
        println("B");
    }
    func method(argument: C) {
        println("C");
    }
}
let t = Target();
let a: A = A();
let ab: A = B();
let b: B = B();
let ac: A = C();
let c: C = C();
a.visit(t);
ab.visit(t);
b.visit(t);
ac.visit(t);
c.visit(t);
Note the self reference in the visit() of A and C. Just as in Java, the inherited method is not copied into the subclass; instead, self keeps the type of the class in which visit() is defined until the method is overridden again.
The result is A, A, A, C, C, so there is no dynamic dispatch on the argument type. Unfortunately.
As of Xcode 8.x.x and 9 Beta, virtual methods in C++ might be translated in Swift 3 and 4 like this:
protocol Animal: AnyObject { // as a base class in C++; class-only protocol in Swift
    func hello()
}

extension Animal { // implementations of the base class
    func hello() {
        print("Zzz..")
    }
}

class Dog: Animal { // derived class with a virtual function in C++
    func hello() {
        print("Bark!")
    }
}

class Cat: Animal { // another derived class with a virtual function in C++
    func hello() {
        print("Meow!")
    }
}

class Snoopy: Animal { // another derived class with no such function
    //
}
Give it a try.
func test_A() {
    let array = [Dog(), Cat(), Snoopy()] as [Animal]
    array.forEach() {
        $0.hello()
    }
    // Bark!
    // Meow!
    // Zzz..
}

func sayHello<T: Animal>(_ x: T) {
    x.hello()
}

func test_B() {
    sayHello(Dog())
    sayHello(Cat())
    sayHello(Snoopy())
    // Bark!
    // Meow!
    // Zzz..
}
In sum, similar things to what we do in C++ can be achieved with protocols and generics in Swift, I think.
I also came from the C++ world and faced the same question. The above seems to work, but it looks like a C++ way of doing things rather than a Swifty way, though.
Any further suggestions will be welcome!
Let’s begin by defining dynamic dispatch.
Dynamic dispatch is considered a prime characteristic of object-oriented languages.
It is the process of selecting which implementation of a polymorphic operation (method/function) to call at run time, according to Wikipedia.
I emphasized run time for a reason, since this is what differentiates it from static dispatch. With static dispatch, a call to a method is resolved at compile time. In C++, this is the default form of dispatch. For dynamic dispatch, the method must be declared as virtual.
Now let's examine what a virtual function is and how it behaves in the context of C++.
In C++, a virtual function is a member function which is declared within a base class and is overriden by a derived class.
Its main feature is that if we have a function declared as virtual in the base class, and the same function defined in the derived class, the function in the derived class is invoked for objects of the derived class, even if it is called using a reference to the base class.
Consider this example, taken from here:
#include <iostream>

class Animal
{
public:
    virtual void eat() { std::cout << "I'm eating generic food."; }
};

class Cat : public Animal
{
public:
    void eat() { std::cout << "I'm eating a rat."; }
};
If we call eat() on a Cat object, but we use a pointer to Animal, the output will be “I’m eating a rat.”
Now we can study how this all plays out in Swift.
We have four dispatch types, which are the following (from fastest to slowest):
Inline dispatch
Static dispatch
Virtual dispatch
Dynamic dispatch
Let's take a closer look at dynamic dispatch. As a preliminary, you have to know about the difference between value and reference types. To keep this answer at a reasonable length, let’s just say that if an instance is a value type, it keeps a unique copy of its data. If it’s a reference type, it shares a single copy of the data with all other instances.
Static dispatch is supported by both value and reference types.
For dynamic dispatch, however, you need a reference type. The reason is that dynamic dispatch requires inheritance, and since value types do not support inheritance, you need reference types.
How to achieve dynamic dispatch in Swift? There are two ways to do it.
The first is to use inheritance: subclass a base class and then override a method of the base class. Our previous C++ example looks something like this in Swift:
class Animal {
    init() {
        print("Animal created.")
    }
    func eat() {
        print("I'm eating generic food.")
    }
}

class Cat: Animal {
    override init() {
        print("Cat created.")
    }
    override func eat() {
        print("I'm eating a rat.")
    }
}
If you now run the following code:
let cat = Cat()
cat.eat()
The console output will be:
Cat created.
Animal created.
I'm eating a rat.
As you can see, there is no need to mark the base class method as virtual; the compiler will automatically decide which dispatch option to use.
The second way to achieve dynamic dispatch is to use the dynamic keyword along with the @objc attribute. We need @objc to expose our method to the Objective-C runtime, which relies solely on dynamic dispatch. Swift, however, only uses it if it has no other choice. If the compiler can decide at compile time which implementation to use, it opts out of dynamic dispatch. We might use @objc dynamic for Key-Value Observing or method swizzling, both of which are outside the scope of this answer.
Swift was made to be easy to learn for Objective-C programmers, and in Objective-C there are no virtual methods, at least not in the way you might think of them. If you look for instructions on how to create an abstract class or virtual method in Objective-C here on SO, usually the answer is a normal method that just throws an exception and crashes the app. (Which kinda makes sense, because you're not supposed to call an abstract method directly.)
Therefore if Swift documentation says nothing about virtual methods, my guess is that, just as in Objective-C, there are none.