How to specify a Blackbox Bundle that maps to `input [127:0]` but acts like a Vec(2, UInt(64)) in the Chisel source? - chisel

I want to interface with a full AXI crossbar whose ports
look roughly like the following:
input [(NM * AXI_DATA_WIDTH) - 1:0] S_AXI_XYZ0,
...
output [(NS * AXI_SOMETHING_WIDTH) - 1:0] M_AXI_XYZ0,
...
where NM denotes the number of masters and NS denotes
the number of slaves. As you can imagine, each of the ports of
the crossbar will only be partially connected to each slave/master.
Now, ideally I'd like to define a Chisel Bundle like
class AxiXbar(val AXI_DATA_WIDTH: Int, val NS: Int, ...) extends Bundle {
  val S_AXI_XYZ0 = Vec(NS, UInt(AXI_DATA_WIDTH.W))
  ...
}
but as soon as I use that in order to give a BlackBox
its interface
class MyXbar extends BlackBox {
  val io = IO(new AxiXbar)
}
Chisel will try to verilogify MyXbar to something like
MyXbar #(...) inst (
  ...
  .S_AXI_XYZ0_0(foo_wire_0),
  .S_AXI_XYZ0_1(foo_wire_1),
  ...
);
This is fine when Chisel generates the complete design, but
when interfacing with a BlackBox module,
I need Chisel to flatten the Chisel Vec into
only one S_AXI_XYZ0 port instead. How can I achieve that?
I am aware that Chisel3 supports DataView as a method to change the "shape" of an interface bundle, but their DataView Cookbook states that subword viewing (split/concat) is not yet supported.
I also tried Chisel's cast methods asTypeOf(...) and asUInt, without success though, as I do not fully grasp what they do under the hood. As such it is hard to apply them correctly.
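One common workaround is to give the BlackBox the flat ports the Verilog expects and add a thin Chisel wrapper that converts between the Vec view and the flat UInt with asUInt / asTypeOf. Below is a minimal sketch of that idea; MyXbarBlackBox and MyXbarWrapper are illustrative names, and in practice the BlackBox (or its desiredName) must match the Verilog module name.

import chisel3._

// Flat interface matching the Verilog port list (one wide port per field)
class MyXbarBlackBox(dataWidth: Int, ns: Int) extends BlackBox {
  val io = IO(new Bundle {
    val S_AXI_XYZ0 = Input(UInt((ns * dataWidth).W))
    val M_AXI_XYZ0 = Output(UInt((ns * dataWidth).W))
  })
}

// Wrapper exposing the Vec view to the rest of the Chisel design
class MyXbarWrapper(dataWidth: Int, ns: Int) extends Module {
  val io = IO(new Bundle {
    val S_AXI_XYZ0 = Input(Vec(ns, UInt(dataWidth.W)))
    val M_AXI_XYZ0 = Output(Vec(ns, UInt(dataWidth.W)))
  })
  val bbox = Module(new MyXbarBlackBox(dataWidth, ns))
  // asUInt concatenates the Vec elements, element 0 in the low-order bits
  bbox.io.S_AXI_XYZ0 := io.S_AXI_XYZ0.asUInt
  // asTypeOf splits the flat UInt back into a Vec of the same shape
  io.M_AXI_XYZ0 := bbox.io.M_AXI_XYZ0.asTypeOf(io.M_AXI_XYZ0)
}

Note that element 0 lands in the low-order bits, so double-check this against the crossbar's packing convention.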

Related

Chisel: getting signal name in final Verilog

I'd like to automate as much as possible the instantiation of an ILA directly from the Chisel code. This means instantiating a module that looks like this:
i_ila my_ila (
  .clk(clock),
  .probe0(a_signal_to_monitor),
  .probe1(another_signal_to_monitor),
  // and so on
);
I'm planning to store the signals that I want to monitor in a list of UInt so that at the end of module elaboration I can generate the instantiation code above, which I will copy/paste in the final Verilog code (or write a Python script that does that automatically).
First, is there a better way of doing this, perhaps at the level of FIRRTL?
Even if I go with this semi-manual approach, I need to know the names of the signals in the final Verilog, which are not necessarily the names of the UInt vals in the code (and which, besides, I don't know how to get automatically without retyping the name of the variable as a string somewhere). How can I get them?
I'd like to provide a more complete example, but I wanted to make sure to at least write something up. This also needs to be fleshed out as a proper example/tutorial on the website.
FIRRTL has robust support for tracking names of signals across built-in and custom transformations. This is a case where the infrastructure is all there, but it's very much a power user API. In short, you can create FIRRTL Annotations that will track Targets. You can then emit custom metadata files or use the normal FIRRTL annotation file (try the CLI option -foaf / --output-annotation-file).
An example FIRRTL Annotation that will emit a custom metadata file at the end of compilation:
import firrtl.AnnotationSeq
import firrtl.annotations.{ReferenceTarget, SingleTargetAnnotation}
import firrtl.options.CustomFileEmission

// Example FIRRTL annotation with custom serialization
// FIRRTL will track the name of this signal through compilation
case class MyMetadataAnno(target: ReferenceTarget)
    extends SingleTargetAnnotation[ReferenceTarget]
    with CustomFileEmission {
  def duplicate(n: ReferenceTarget) = this.copy(n)

  // API for serializing a custom metadata file
  // Note that multiple instances of this annotation will collide, which is an
  // error; that case is not handled in this example
  protected def baseFileName(annotations: AnnotationSeq): String = "my_metadata"
  protected def suffix: Option[String] = Some(".txt")
  def getBytes: Iterable[Byte] =
    s"Annotated signal: ${target.serialize}".getBytes
}
The case class declaration and the duplicate method are enough to track a single signal through compilation. The CustomFileEmission and related baseFileName, suffix, and getBytes methods define how to serialize my custom metadata file. As mentioned in the comment, as implemented in this example we can only have one instance of MyMetadataAnno, or they will try to write the same file, which is an error. This can be handled by customizing the filename based on the Target, or by writing a FIRRTL transform to aggregate multiple instances of this annotation into a single annotation.
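For instance, a minimal sketch of the first option, deriving the file name from the Target (MyMetadataAnnoPerTarget is an illustrative name; imports as in the example above):

// Variation where each annotated signal gets its own file, so multiple
// instances of the annotation no longer collide
case class MyMetadataAnnoPerTarget(target: ReferenceTarget)
    extends SingleTargetAnnotation[ReferenceTarget]
    with CustomFileEmission {
  def duplicate(n: ReferenceTarget) = this.copy(n)
  protected def baseFileName(annotations: AnnotationSeq): String =
    s"my_metadata_${target.module}_${target.ref}"
  protected def suffix: Option[String] = Some(".txt")
  def getBytes: Iterable[Byte] =
    s"Annotated signal: ${target.serialize}".getBytes
}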
We then need a way to create this annotation in Chisel:
import chisel3._
import chisel3.experimental.{annotate, ChiselAnnotation}

def markSignal[T <: Data](x: T): T = {
  annotate(new ChiselAnnotation {
    // We can't call .toTarget until the end of Chisel elaboration;
    // .toFirrtl is called by Chisel at the end of elaboration
    def toFirrtl = MyMetadataAnno(x.toTarget)
  })
  x
}
Now all we need to do is use this simple API in our Chisel:

import chisel3._
import chisel3.util.Decoupled

// Simple example with a marked signal
class Foo extends MultiIOModule {
  val in = IO(Flipped(Decoupled(UInt(8.W))))
  val out = IO(Decoupled(UInt(8.W)))

  markSignal(out.valid)
  out <> in
}
This will result in writing the file my_metadata.txt to the target directory with the contents:
Annotated signal: ~Foo|Foo>out_valid
Note that this is special FIRRTL target syntax saying that out_valid is the annotated signal that lives in module Foo.
Complete code in an executable example:
https://scastie.scala-lang.org/moEiIqZPTRCR5mLQNrV3zA

What is the purpose of the makeSink method in making IOs for a periphery

I was following some examples of adding peripheries to the rocketchip.
I used the sifive-blocks as reference.
Below is an example from their I2C block (I hope it's ok to post it here):
case object PeripheryI2CKey extends Field[Seq[I2CParams]]

trait HasPeripheryI2C { this: BaseSubsystem =>
  val i2cNodes = p(PeripheryI2CKey).map { ps =>
    I2C.attach(I2CAttachParams(ps, pbus, ibus.fromAsync)).ioNode.makeSink()
  }
}

trait HasPeripheryI2CBundle {
  val i2c: Seq[I2CPort]
}

trait HasPeripheryI2CModuleImp extends LazyModuleImp with HasPeripheryI2CBundle {
  val outer: HasPeripheryI2C
  val i2c = outer.i2cNodes.zipWithIndex.map { case (n, i) => n.makeIO()(ValName(s"i2c_$i")) }
}
I understand the makeIO step, which takes a bundle and applies IO() to it, but I don't understand the makeSink step.
Why do they do this step; isn't makeIO enough?
I'm not an expert in rocket-chip's Diplomacy, but from glancing at the code, I think makeIO and makeSink do fundamentally different things.
makeIO takes the BundleBridgeSource and materializes ports in the Chisel Module implementation for driving that source. There is the same method on BundleBridgeSink. I believe this method is the way you take either side of a bundle bridge and interface with it in the actual Chisel part of the generator (as opposed to the Diplomatic part).
makeSink turns a BundleBridgeSource into a BundleBridgeSink. It doesn't materialize Chisel ports and it stays in Diplomacy world rather than in the Chisel world.
In the example from I2C you included, note how the part with makeSink is a trait to mix into something that extends BaseSubsystem; it's diplomatic. On the other hand, HasPeripheryI2CModuleImp, which has the makeIO, extends LazyModuleImp, which is the Chisel part. One way to think about this is as two different "views" of the same thing: Chisel and Diplomacy use different objects, thus i2cNodes (diplomatic) vs. i2c (Chisel).

Scala function to Json

Can I map Scala functions to JSON, or perhaps via a different way than JSON?
I know I can map data types, which is fine. But I'd like to create a function, map it to JSON, send it via a REST method to another server, then add that function to a list of functions in another application and apply it.
For instance:
def apply(f: Int => String, v: Int) = f(v)
I want to make a list of functions that can be applied within an application, over different physical locations. Now I want to add and remove functions from the list by means of REST calls.
Let's assume I understand the security problems...
P.S. If you downvote, you might as well have the decency to explain why.
If I understand correctly, you want to be able to send Scala code to be executed on different physical machines. I can think of a few different ways of achieving that:

1. Using tools for distributed computing, e.g. Spark. You can set up Spark clusters on different machines and then select to which cluster you want to submit Spark jobs. There are a lot of other tools for distributed computing that might also be worth looking into.
2. Pass Scala code as a string and compile it either within your server-side code (here's an example, and see the ToolBox sketch after this answer) or by invoking scalac as an external process.
3. Send the functions as byte code and execute the byte code on the remote machine.
If it fits with what you want to do, I'd recommend option #1.
Make sure that you can really trust the code that you want to execute, so that you do not expose yourself to malicious code.
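To make option 2 concrete, here is a rough sketch using the Scala 2 reflection ToolBox (it assumes scala-compiler is on the classpath; RemoteEval and compileIntToString are my own names, not an established API):

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object RemoteEval {
  private val tb = currentMirror.mkToolBox()

  // Compiles a source string such as "(x: Int) => (x * 2).toString" into a
  // live function. This executes arbitrary code, hence the security caveats.
  def compileIntToString(src: String): Int => String =
    tb.eval(tb.parse(src)).asInstanceOf[Int => String]
}

For example, RemoteEval.compileIntToString("(x: Int) => (x * 2).toString")(21) yields "42".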
The answer is you can't do this, and even if you could you shouldn't!
You should never, never, never write a REST API that allows the client to execute arbitrary code in your application.
What you can do is create a number of named operations that can be executed. The client can then pass the name of the operation, which the server can look up in a Map[String, <function>] and execute.
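A minimal sketch of that idea (the operation names and the run helper are illustrative):

// Registry of named operations the server is willing to execute
object Operations {
  private val ops: Map[String, Int => String] = Map(
    "describe" -> (n => s"the number $n"),
    "double"   -> (n => (n * 2).toString)
  )

  // The client only ever sends the *name*; unknown names are rejected
  def run(name: String, arg: Int): Option[String] =
    ops.get(name).map(_(arg))
}

// Operations.run("double", 21) == Some("42")
// Operations.run("rm -rf /", 0) == None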
As mentioned in my comment, here is an example of how to turn a case class into JSON. Things to note:

- don't question the implicit val format line (it's magic);
- each case class requires a companion object in order for this to work;
- if you have Option fields in your case class and set them to None when turning it into JSON, those fields will be omitted (if you define them as Some(whatever), they will look like any other field).

If you don't know much about Scala Play, ignore the extra stuff for now; this is just inside the default Controller you're given when you make a new project in IntelliJ.
package controllers

import javax.inject._
import play.api.libs.json.{Json, OFormat}
import play.api.mvc._
import scala.concurrent.Future

@Singleton
class HomeController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {

  case class Attributes(heightInCM: Int, weightInKG: Int, eyeColour: String)
  object Attributes {
    implicit val format: OFormat[Attributes] = Json.format[Attributes]
  }

  case class Person(name: String, age: Int, attributes: Attributes)
  object Person {
    implicit val format: OFormat[Person] = Json.format[Person]
  }

  def index: Action[AnyContent] = Action.async {
    val newPerson = Person("James", 24, Attributes(192, 83, "green"))
    Future.successful(Ok(Json.toJson(newPerson)))
  }
}
When you run this app with sbt run and hit localhost:9000 through a browser, the output you see on-screen is below (formatted for better reading). This is also an example of how you might send JSON as a response to a GET request. It isn't the cleanest example but it works.
{
  "name": "James",
  "age": 24,
  "attributes": {
    "heightInCM": 192,
    "weightInKG": 83,
    "eyeColour": "green"
  }
}
Once more though, I would never recommend passing actual functions between services. If you insist though, maybe store them as a String in a Case Class and turn it into JSON like this. Even if you are okay with passing functions around, it might even be a good exercise to practice security by validating the functions you receive to make sure they're not malicious.
I also have no idea how you'll convert them back into a function either - maybe write the String you receive to a *.scala file and try to run them with a Bash script? Idk. I'll let you figure that one out.
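For completeness, a minimal sketch of the function-as-a-String idea, using the same Play JSON formatter pattern as above (FunctionPayload is an illustrative name):

import play.api.libs.json.{Json, OFormat}

// Hypothetical carrier: the "function" travels as plain source text
case class FunctionPayload(name: String, scalaSource: String)
object FunctionPayload {
  implicit val format: OFormat[FunctionPayload] = Json.format[FunctionPayload]
}

// Json.toJson(FunctionPayload("double", "(x: Int) => (x * 2).toString"))
// produces {"name":"double","scalaSource":"(x: Int) => (x * 2).toString"}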

kotlin, how to simplify passing parameters to base class constructor?

We have a package that we are looking to convert from Python to Kotlin, in order to then be able to migrate systems using that package.
Within the package there are a set of classes that are all variants, or 'flavours' of a common base class.
Most of the code is in the base class which has a significant number of optional parameters. So consider:
open class BaseTree(val height: Int = 10, val roots: Boolean = true /* ...lots more!! */)

class FruitTree(
    val fruitSize: Int,
    height: Int = 10, roots: Boolean = true
    // now needs all possible parameters for any possible instance
) : BaseTree(height = height, roots = roots /* ...yet another variation of the same list */)
The code is not actually trees, I just thought this was a simple way to convey the idea. There are about 20 parameters to the base class, and around 10 subclasses, and each subclass effectively needs to repeat the same two variations of the parameter list from the base class. A real nightmare if the parameter list ever changes!
Those from a Java background may comment that "20 parameters is too many", but may miss that these are optional parameters, a language feature which impacts this aspect of design. 20 required parameters would be crazy, but 10 or even 20 optional parameters is not so uncommon; check sqlalchemy Table for example.
In Python, to call a base class constructor you can have:

def __init__(self, special, *args, **kwargs):
    super().__init__(*args, **kwargs)  # pass all parameters except special to the base constructor
Does anyone know a technique, using a different method (perhaps using interfaces or something?) to avoid repeating this parameter list over and over for each subclass?
There is no design pattern to simplify this use case.
Best solution: Refactor the code to use a more Java like approach: using properties in place of optional parameters.
Use case explained: a widely used class or method having numerous optional parameters is simply not practical in Java, and Kotlin largely evolved as a way of making Java code better. A Python class with 5 optional parameters, translated to Java with no optional parameters, could need up to 5! (120) different Java signatures... in other words, a mess.
Obviously no object should routinely be instanced with a huge parameter list, so normally such Python classes only evolve when the majority of calls do not need to specify these optional parameters, and the optional parameters are for the exceptional cases. The actual use case here is the implementation of a large number of optional parameters, where it should be very rare for any individual object to be instanced using more than 3 of them. So a class with 10 optional parameters that is used 500 times in an application would still expect 3 of the optional parameters to be the maximum ever used in one instance. But this is simply a design approach that is not workable in Java, no matter how often the class is reused.
In Java, functions do not have optional parameters, which means this case, where an object is instanced in this way in a Java library, simply could never happen.
Consider an object with one mandatory instance parameter and five possible options. In Java these options would each be properties able to be set by setters; objects would be instanced, and the setter(s) called to set any relevant option that needs a value other than the default.
The downside is that these options cannot be set from the constructor and remain immutable, but the resulting code does away with the optional parameters.
Another approach is to have a group of less 'swiss army knife' objects, with a set of specialised tools replacing the one do-it-all tool, even when the code could be seen as just slightly different nuances of the same theme.
Despite the support for optional parameters in Kotlin, the inheritance structure in Kotlin is not yet optimised for heavy use of this feature.
You can skip the names, as in BaseTree(height, roots), by putting the variables in order, but you cannot do things the Python way because Python is a dynamic language.
In Java you likewise have to pass the variables to the super class:
FruitTree(int fruitSize, int height, boolean root) {
    super(height, root);
}
There are about 20 parameters to the base class, and around 10 subclasses
This is most likely a problem with your class design.
Reading your question I started to experiment myself and this is what I came up with:

interface TreeProperties {
    val height: Int
    val roots: Boolean
}

interface FruitTreeProperties : TreeProperties {
    val fruitSize: Int
}

fun treeProps(height: Int = 10, roots: Boolean = true) = object : TreeProperties {
    override val height = height
    override val roots = roots
}

fun TreeProperties.toFruitProperty(fruitSize: Int): FruitTreeProperties =
    object : FruitTreeProperties, TreeProperties by this {
        override val fruitSize = fruitSize
    }

open class BaseTree(val props: TreeProperties)

open class FruitTree(props: FruitTreeProperties) : BaseTree(props)

fun main(args: Array<String>) {
    val largeTree = FruitTree(treeProps(height = 15).toFruitProperty(fruitSize = 5))
    val rootlessTree = BaseTree(treeProps(roots = false))
}
Basically I define the parameters in an interface and extend the interface for sub-classes using the delegation pattern. For convenience I added functions to generate instances of those interfaces, which also use default parameters.
I think this achieves the goal of avoiding repeated parameter lists quite nicely, but it also has its own overhead. Not sure if it is worth it.
If your subclass really has that many parameters in the constructor, there is no way around it: you need to pass them all.
But (mostly) it's not a good sign when a constructor/function has that many parameters...
You are not alone on this. That is already discussed on the gradle-slack channel. Maybe in the future, we will get compiler-help on this, but for now, you need to pass the arguments yourself.

Scala Wrapper class by extending Component and with the SequentialContainer.Wrapper trait, do I have the correct understanding of traits?

The following code was taken from this post: How to create Scala swing wrapper classes with SuperMixin?
import scala.swing._
import javax.swing.JPopupMenu

class PopupMenu extends Component with SequentialContainer.Wrapper {
  override lazy val peer: JPopupMenu = new JPopupMenu with SuperMixin
  def show(invoker: Component, x: Int, y: Int): Unit = peer.show(invoker.peer, x, y)
}
I've been trying to make custom wrappers, so I need to understand this. It is simple enough, but since I'm only starting to get acquainted with Scala, I'm a little unsure about traits. What I've been hearing is that traits are like multiple inheritance and you can mix and match them?
I've drawn a diagram representing where PopupMenu sits within the whole inheritance structure. Just to clarify a few things:
1) It seems to override lazy val peer: JComponent from Component and also gets the contents property from SequentialContainer.Wrapper (purple text)? Is that right?
2) SequentialContainer.Wrapper also has an abstract def peer: JComponent, but this isn't the one being overridden, so it isn't used at all here?
3) What's confusing is that Component and SequentialContainer.Wrapper have some identical properties: both of them have def publish and def subscribe (red text), but the ones that the PopupMenu will use are subscribe/publish from the Component class?
4) Why can't we write PopupMenu extends SequentialContainer.Wrapper with Component instead?
Hopefully that isn't too many questions at once. Help would be much appreciated; I'm a beginner to Scala.
I'll answer using the numbers of your questions:

1. Correct.
2. Correct. The top trait is UIElement, which defines an abstract member def peer: java.awt.Component. Then you have Container, which merely adds the abstract member def contents: Seq[Component] to be able to read the child components. Container.Wrapper is a concrete implementation of Container which assumes (abstractly) that the Java peer is a javax.swing.JComponent. Note that in Java's own hierarchy, javax.swing.JComponent is a sub-type of java.awt.Component, so there is no conflict; sub-types can refine the types of their members ("covariance"). SequentialContainer refines Container by saying that contents is a mutable buffer (instead of the read-only sequence). Consequently, its implementation SequentialContainer.Wrapper mixes in Container.Wrapper but replaces the contents with a standard Scala buffer. At no point has a concrete peer been given yet. For convenience, Component does implement that member, but then, as you have seen, the final class PopupMenu overrides the peer. Because of the way the type system works, all the participating traits can access peer, but only PopupMenu "knows" that the type has been refined to javax.swing.JPopupMenu. For example, SequentialContainer.Wrapper only knows there is a javax.swing.JComponent, and so it can use that part of the peer's API.
3. The Publisher trait is introduced by UIElement, so you will find it in all types deriving from UIElement. There is nothing wrong with having the same trait appear multiple times in the hierarchy. In the final class, there is only one instance of Publisher; there do not exist multiple "versions" of it. Even if Publisher had not been defined at the root, but independently in, for example, Component and SequentialContainer.Wrapper, you would only get one instance in the final class (a tiny illustration follows this list).
4. This is an easy one. In Scala you can only extend one class, but mix in any number of traits. Component is a class while all the other things are traits. It's class A extends <trait-or-class> with <trait> with <trait> ....
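To illustrate point 3 with a tiny self-contained sketch (these are simplified stand-ins, not the real scala.swing traits):

trait Publisher { def publish(msg: String): Unit = println(msg) }
trait ComponentLike extends Publisher
trait WrapperLike extends Publisher

// Trait linearization gives the final class exactly one Publisher
// implementation, even though it is reachable via two paths
class PopupLike extends ComponentLike with WrapperLike

object Demo extends App {
  (new PopupLike).publish("only one Publisher in the linearization")
}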
To sum up, all GUI elements inherit from trait UIElement, which is backed by a java.awt.Component. Elements which have child elements use trait Container, and all the normal panel-type elements which allow you to add and remove elements in a specific order use SequentialContainer. (Not all panels have a sequential order; for example, BorderPanel does not.) These are abstract interfaces; to get the necessary implementations, you have the .Wrapper types. Finally, to get a usable class, you have Component, which extends UIElement and requires that the peer is a javax.swing.JComponent, so it can implement all the standard functionality.
When you implement a new wrapper, you usually use Component and refine the peer type so that you can access the specific functionality of that peer (e.g. the show method of JPopupMenu).
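As a minimal sketch of that pattern (ToolBar is my own illustrative wrapper; as far as I know scala.swing does not ship one for javax.swing.JToolBar):

import scala.swing._
import javax.swing.JToolBar

// Same recipe as PopupMenu above: refine the peer type and mix in
// SequentialContainer.Wrapper for child management
class ToolBar extends Component with SequentialContainer.Wrapper {
  override lazy val peer: JToolBar = new JToolBar with SuperMixin
  // Expose a piece of the refined peer's API, like PopupMenu.show above
  def addSeparator(): Unit = peer.addSeparator()
}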