How do you test modules with an IO input port of type Vec, Bundle, or a composition of these?
In other words, using PeekPokeTester, how do you properly poke() a port that is of type Vec, Bundle, or a more complex composition of these two types? Thanks!
The PeekPokeTester has poke methods for Bundle and Vec, but I don't think they handle nested versions.
From the ScalaDoc (all Chisel-related ScalaDoc can be found at https://www.chisel-lang.org/):
def poke(signal: Aggregate, value: IndexedSeq[BigInt]): Unit
def poke(signal: Bundle, map: Map[String, BigInt]): Unit
These accept values analogous to Vec and Bundle, but unfortunately they don't appear to handle nesting, which isn't ideal.
You can always write special helper functions that just poke each individual field, but that is not very general.
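For example, with iotesters you can write a helper per aggregate type. This is only a minimal sketch; the module, bundle, and field names below are hypothetical:
import chisel3._
import chisel3.iotesters.PeekPokeTester

// Hypothetical DUT with Bundle/Vec ports
class MyBundle extends Bundle {
  val a = UInt(8.W)
  val b = Vec(2, UInt(8.W))
}

class MyModule extends Module {
  val io = IO(new Bundle {
    val in  = Input(new MyBundle)
    val out = Output(new MyBundle)
  })
  io.out := io.in
}

class MyTester(c: MyModule) extends PeekPokeTester(c) {
  // Helper that pokes each leaf field of the aggregate explicitly
  def pokeMyBundle(port: MyBundle, a: BigInt, b: Seq[BigInt]): Unit = {
    poke(port.a, a)
    b.zipWithIndex.foreach { case (value, i) => poke(port.b(i), value) }
  }

  pokeMyBundle(c.io.in, 3, Seq(1, 2))
  step(1)
  expect(c.io.out.a, 3)
}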
A better solution is to use the newer Chisel unit-test library ChiselTest. It has support for poking, peeking, and expecting Bundle literals. Specifically, check out the examples in the ChiselTest unit test BundleLiteralsSpec.
Vec literals are still a work in progress in Chisel, but Vecs are easier than Bundles to write more generic helper functions for.
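As a rough sketch of how that looks with ChiselTest (the class and signal names here are made up for illustration, and the exact imports can vary between versions):
import chisel3._
import chisel3.experimental.BundleLiterals._
import chiseltest._
import org.scalatest.flatspec.AnyFlatSpec

// Hypothetical bundle and passthrough module for illustration
class Pair extends Bundle {
  val a = UInt(8.W)
  val b = UInt(8.W)
}

class PairPassthrough extends Module {
  val io = IO(new Bundle {
    val in  = Input(new Pair)
    val out = Output(new Pair)
  })
  io.out := io.in
}

class PairSpec extends AnyFlatSpec with ChiselScalatestTester {
  "PairPassthrough" should "pass a Bundle literal through" in {
    test(new PairPassthrough) { dut =>
      // Construct a Bundle literal and poke/expect the whole aggregate at once
      val lit = (new Pair).Lit(_.a -> 3.U, _.b -> 5.U)
      dut.io.in.poke(lit)
      dut.clock.step()
      dut.io.out.expect(lit)
    }
  }
}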
Related
Note that I'm new to the C language. According to the Basic Tutorial of Cython, I believe there are two ways of using Cython: building an extension from pure Python code, and using C-typed variables (cdef).
What I don't understand is the difference between them. Which one of them is the more efficient or proper way to use Cython?
It's mostly historical.
Originally Cython only supported cdef declarations.
Pure Python mode was added as a way of adding declarations to a file to help speed it up while not requiring Cython.
Python added type annotations. Cython can increasingly use these (with the annotation_typing directive, which defaults to true). If you like the syntax of these better than cdef then use them. Or not.
The cdef version is slightly better tested, and there are still gaps in what you can do in "pure Python" mode, especially with respect to interfacing with native C/C++. But mostly they are different ways to achieve the same thing and they should generate largely the same code, so you should use whichever you prefer. You can also use a mixture.
Most Python code can be directly "cythonized" with no changes. Nevertheless, to get the best out of Cython, you need to adapt your Python code by providing cdef declarations and the types of your variables. This is not mandatory, but it is essential to get the decent speed-up that you expect from Cython.
It appears that most/all of the Data types in Chisel are sealed classes, which do not allow a user to extend them. Is it possible to attach some user-defined information to them, or could support for this be added in the future?
I think there are a few cases where it could be helpful to have additional information:
Port descriptions possibly for documentation
Voltage levels/biases
If you are doing some chip top-level connections you may have to make certain connections
Also, many times signals will have a set_dont_touch (an SDC constraint, not to be confused with Chisel's dontTouch) placed on them, so it may be possible to add these for automatic SDC constraint generation.
Modeling purposes
Chisel obviously doesn't deal with behavioral modeling, but there are times when a Verilog/SystemVerilog real is used for modeling. This information could be used to print out where these signals are for any post-processing.
I don't expect Chisel to handle all of the actual use cases (such as generating the documentation or making the connections), but if these members could be added or extended, a user could check them during construction and/or after elaboration to drive additional flows.
Thanks
Chisel and FIRRTL have a fairly robust annotation system for handling such metadata. It is an area of active development; the handling of annotating instances (rather than modules) is improved in the soon-to-be-released Chisel 3.4.0 / FIRRTL 1.4.0. That being said, I can provide a simple example to give a flavor of how it works.
Basically, FIRRTL has the notion of an Annotation, which can be associated with zero, one, or many Targets. A Target is the name of a hardware component (like a register or wire) or a module. This is exactly how Chisel's dontTouch is implemented:
import chisel3._
import chisel3.stage._
import firrtl.annotations.JsonProtocol
import firrtl.transforms.DontTouchAnnotation
class Foo extends Module {
  val io = IO(new Bundle {
    val in = Input(Bool())
    val out = Output(Bool())
  })
  dontTouch(io)
  io.out := ~io.in
}
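// Elaborate the design and collect the DontTouchAnnotations generated by dontTouch(io)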
val resultAnnos = (new ChiselStage).run(ChiselGeneratorAnnotation(() => new Foo) :: Nil)
val dontTouches = resultAnnos.collect { case dt: DontTouchAnnotation => dt }
println(JsonProtocol.serialize(dontTouches))
/* Prints:
[
{
"class":"firrtl.transforms.DontTouchAnnotation",
"target":"~Foo|Foo>io_in"
},
{
"class":"firrtl.transforms.DontTouchAnnotation",
"target":"~Foo|Foo>io_out"
}
]
*/
Note that this is fully extensible; it is fairly straightforward (though not well documented) to define your own "dontTouch-like" API. Unfortunately, this flow does not have as much documentation as the Chisel APIs, but the overall structure is there and in heavy use in projects like FireSim (https://fires.im/).
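As a rough, hypothetical sketch of what such a "dontTouch-like" API could look like (the annotation name, the metadata it carries, and the usage comment are assumptions for illustration, not an existing Chisel/FIRRTL API):
import chisel3._
import chisel3.experimental.{annotate, ChiselAnnotation}
import firrtl.annotations.{ReferenceTarget, SingleTargetAnnotation}

// Hypothetical custom annotation carrying user metadata for a single component
case class VoltageDomainAnnotation(target: ReferenceTarget, domain: String)
    extends SingleTargetAnnotation[ReferenceTarget] {
  def duplicate(n: ReferenceTarget) = this.copy(target = n)
}

// Hypothetical user-facing API, analogous in spirit to dontTouch
object voltageDomain {
  def apply[T <: Data](data: T, domain: String): T = {
    annotate(new ChiselAnnotation {
      def toFirrtl = VoltageDomainAnnotation(data.toTarget, domain)
    })
    data
  }
}

// Usage inside a Module might look like: voltageDomain(io.someSignal, "0.9V")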
A common use of annotations is to attach certain metadata (like physical design information) to parts of the design, propagate it through compilation, and then emit a file in whatever format is needed to hook into follow-on flows.
An exciting feature also coming in Chisel 3.4 that helps with this is the new "CustomFileEmission" API. When writing custom annotations, it will be possible to tell FIRRTL how to emit the annotation, so that you could, for example, have an annotation with physical design information and emit a TCL file.
I discovered that a lot of "special forms" are just macros that use their asterisk versions in the background (fn*, let*, and all the others).
In the case of fn, for example, it adds destructuring into the mix, which fn* alone does not provide. I tried to find some detailed documentation of what fn* can and can't do on its own, but I had no luck.
It definitely supports:
the &/catchall indicator
(fn* [x & rest] (do-smth-here...))
and curiously also the overloading on arity, as in:
(fn* ([x] (smth-with-one-arg ...))
     ([x y] (smth-with-two-args ...)))
So my question, finally, is: why not only define
(fn& [& all-args] ...)
which would be absolutely minimal and could provide all the arity selection via macros (checking the size of the parameter list, an if/case statement to direct the code path, binding the first few parameters to the desired symbols, etc.)?
Is this for performance reasons? Maybe someone even has a link handy to the actual standard definition of the asterisk special forms.
Arity selection leverages the JVM's virtual method dispatch: each arity (from 0 to 20 arguments) has its own method, and there is a single method for arities of 21+ arguments.
You may notice the applyTo method, which is the generic method akin to what you propose with fn&. Its implementation is just a giant switch that selects the correct specialized method.
Yes, you could do all that as macros on top of the ultra-primitive fn& that you propose, and it would certainly simplify the compiler implementation. The reason this is not done is partly for performance reasons (it would be rather slow, and the JVM already has a fast facility for dispatching based on arity), and partly "cosmetic": it means that each arity of a function is a different method of the JVM class that a function is compiled down to, which makes stacktraces nicer. This also helps the JIT "understand" our functions better, so that it can optimize accordingly.
My guess would be that this is convenience/extensibility driven. The compiler (where fn* is actually "defined"/processed) is written in Java and handles the minimum needed functionality in order to bootstrap the language, while fn is a macro that builds on top of it. The same goes for some of the other forms. Somewhere there was a statement from Rich that he could rewrite the compiler from Java to Clojure but does not see the benefit (correct me if I'm wrong).
I'm following this guide http://javaeenotes.blogspot.com/2011/06/short-introduction-to-jmock.html
I've received the error
java.lang.SecurityException: class "org.hamcrest.TypeSafeMatcher"'s signer information does not match signer information of other classes in the same package.
In the guide the author says:
The solution is make sure the jMock libraries are included before the
standard jUnit libraries in the build path.
What makes up the "standard jmock libraries" and the "junit libraries"?
JUnit only has one jar, so that's easy, but jMock comes with over 10 different jars.
I've been using: junit-4.10, jmock-2.5, hamcrest-core, and hamcrest-library.
What are the hamcrest core and library classes for?
I'm a committer on both libraries. JMock depends on Hamcrest to help it decide whether a call to an object is expected. I suggest just using the hamcrest-all jar. The split between hamcrest-core and hamcrest-library was to separate the fundamental behaviour of matching and reporting differences from convenient implementations of the most common cases.
Finally, if you're using Hamcrest, I suggest you use the junit-dep jar to avoid clashes with some features of Hamcrest that are included in junit.jar.
JUnit is used to write unit tests for your methods. JMock is used to test your program within a context: you have to know what you expect to send to the context (the environment) and what the context will answer.
JMock uses JUnit; that is why, in order to avoid dependency conflicts, you need to include it before JUnit.
The 10 libraries of JMock are add-ons for when you need JMock script or other functionality not available in the JMock core.
You don't need to know about the hamcrest-core library to use JMock. Just follow the guide on the website (don't use version 1 of JMock) and organize your libraries in the correct order (JUnit should be last in order to avoid your error).
Mock frameworks like jMock do some black magic behind the scenes (including, but not limited to, runtime bytecode manipulation) to provide mock methods, classes, and so on. To be able to do this, some tweaks to basic JUnit classes are necessary, and the only way to do that is for the framework to register itself as a Java agent before the JUnit classes are loaded.
Also, put your mock framework before JUnit in the classpath.
What is the purpose of namespaces?
And, more importantly, should they be used as objects in Java (things that have data and functions and that try to achieve encapsulation)? Is this idea too far-fetched? :)
Or should they be used as packages in Java?
Or should they be used more generally as a module system or something?
Given that you use the Clojure tag, I suppose that you'll be interested in a Clojure-specific answer:
What is the purpose of namespaces?
Clojure namespaces, Java packages, Haskell / Python / whatever modules... At a very high level, they're all different names for the same basic mechanism whose primary purpose is to prevent name clashes in non-trivial codebases. Of course, each solution has its own little twists and quirks which make sense in the context of a given language and would not make sense outside of it. The rest of this answer will deal with the twists and quirks specific to Clojure.
A Clojure namespace groups Vars, which are containers holding functions (most often), macro functions (functions used by the compiler to generate macroexpansions of appropriate forms, normally defined with defmacro; actually they are just regular Clojure functions, although there is some magic to the way in which they are registered with the compiler) and occasionally various "global parameters" (say, clojure.core/*in* for standard input), Atoms / Refs etc. The protocol facility introduced in Clojure 1.2 has the nice property that protocols are backed by Vars, as are the individual protocol functions; this is key to the way in which protocols present a solution to the expression problem (which is however probably out of the scope of this answer!).
It stands to reason that namespaces should group Vars which are somehow related. In general, creating a namespace is a quick & cheap operation, so it is perfectly fine (and indeed usual) to use a single namespace in early stages of development, then as independent chunks of functionality emerge, factor those out into their own namespaces, rinse & repeat... Only the things which are part of the public API need to be distributed between namespaces up front (or rather: prior to a stable release), since the fact that function such-and-such resides in namespace so-and-so is of course a part of the API.
And, more importantly, should they be used as objects in Java (things that have data and functions and that try to achieve encapsulation)? Is this idea too far-fetched? :)
Normally, the answer is no. You might get a picture not too far from the truth if you approach them as classes with lots of static methods, no instance methods, no public constructors and often no state (though occasionally there may be some "class data members" in the form of Vars holding Atoms / Refs); but arguably it may be more useful not to try to apply Java-ish metaphors to Clojure idioms and to approach a namespace as a group of functions etc. and not "a class holding a group of functions" or some such thing.
There is an important exception to this general rule: namespaces which include :gen-class in their ns form. These are meant precisely to implement a Java class which may later be instantiated, which might have instance methods and per-instance state etc. Note that :gen-class is an interop feature -- pure Clojure code should generally avoid it.
Or should they be used as packages in Java?
They serve some of the same purposes packages were designed to serve (as already mentioned above); the analogy, although it's certainly there, is not that useful, however, just because the things which packages group together (Java classes) are not at all like the things which Clojure namespaces group together (Clojure Vars), the various "access levels" (private / package / public in Java, {:private true} or not in Clojure) work very differently etc.
That being said, one has to remember that there is a certain correspondence between namespaces and packages / classes residing in particular packages. A namespace called foo.bar, when compiled, produces a class called bar in the package foo; this means, in particular, that namespace names should contain at least one dot, as so-called single-segment names apparently lead to classes being put in the "default package", leading to all sorts of weirdness. (E.g. I find it impossible to have VisualVM's profiler notice any functions defined in single-segment namespaces.)
Also, deftype / defrecord-created types do not reside in namespaces. A (defrecord Foo [...] ...) form in the file where namespace foo.bar is defined creates a class called Foo in the package foo.bar. To use the type Foo from another namespace, one would have to :import the class Foo from the foo.bar package -- :use / :require would not work, since they pull in Vars from namespaces, which records / types are not.
So, in this particular case, there is a certain correspondence between namespaces and packages which Clojure programmers who wish to take advantage of some of the newer language features need to be aware of. Some find that this gives an "interop flavour" to features which are not otherwise considered to belong in the realm of interop (defrecord / deftype / defprotocol are a good abstraction mechanism even if we forget about their role in achieving platform speed on the JVM) and it is certainly possible that in some future version of Clojure this flavour might be done away with, so that the namespace name / package name correspondence for deftype & Co. can be treated as an implementation detail.
Or should they be used more generally as a module system or something?
They are a module system and this is indeed how they should be used.
A package in Java has its own namespace, which provides a logical grouping of classes. It also helps prevent naming collisions. For example, in Java you will find java.util.Date and java.sql.Date: two different classes with the same name, differentiated by their namespace. If you try to import both into a Java file, you will see that it won't compile. At least one of them will need to be referred to by its explicit namespace.
From a language-independent view, namespaces are a way to isolate things (i.e. to encapsulate, in a sense). It's a more general concept (see XML namespaces, for example). You can "create" namespaces in several ways, depending on the language you use: packages, static classes, modules, and so on. All of these provide namespaces for the objects/data/functions they contain. This allows you to organize code better, to isolate features, and it tends toward better code reuse and adaptability (as encapsulation does).
As stated in the "Zen of Python": "Namespaces are one honking great idea -- let's do more of those!"
Think of them as containers for your classes. For example, if you had a helper class for building strings and you wanted it in your business layer, you would use a namespace such as MyApp.Business.Helpers. This allows your classes to be contained in sensible locations, so that when you or someone else referencing your code wants to consume them, they can be located easily. For another example, if you wanted to consume a SQL connection helper class you would probably use something like:
MyApp.Data.SqlConnectionHelper sqlHelper = new MyApp.Data.SqlConnectionHelper();
In reality you would use a "using" statement so you wouldn't need to fully qualify the namespace just to declare the variable.
Paul