Using Multiple Clocks in Testers - chisel

I have a module with multiple clocks, using withClockAndReset. When writing the testbench, how do I provide a clock stimulus on the named clock port?

chisel-testers has very limited support for what you want. I think your best bet within the standard Chisel stack is to try the new chisel-testers2. There is an example in the unit tests in that repo, ClockDividerTest.scala, along with a couple of other clock-related tests. This is a very active area of Chisel development right now; if you can try it, the team is very interested in making it work.

I ended up following the spirit of that example, but didn't need chisel-testers2.
class MyModuleTestFixture(<params>) extends Module {
  val dut = Module(new MyModule(<params>))

  // Generate a divide-by-two clock in hardware from the implicit clock...
  val divClock = RegInit(true.B)
  divClock := ~divClock

  // ...and drive the DUT's named clock port with it.
  dut.io.explicit_clk := divClock.asClock()

  // The fixture's own IO (declaration elided) mirrors the DUT's remaining ports.
  dut.io.all_other_ios <> io.all_other_ios
}
It's a restrictive way of generating the explicit clock, but it served my purpose.
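For reference, here is a sketch of driving that fixture with the classic chisel-testers PeekPokeTester; the tester class name and the step counts are illustrative only, assuming chisel3.iotesters is on the classpath:

import chisel3.iotesters.{Driver, PeekPokeTester}

// divClock flips once per implicit-clock cycle, so the DUT's explicit_clk
// runs at half rate: two step()s here equal one divided-clock cycle.
class MyFixtureTester(c: MyModuleTestFixture) extends PeekPokeTester(c) {
  // poke/expect c.io.all_other_ios as needed, then advance time:
  step(4) // four implicit cycles = two explicit_clk cycles
}

Driver(() => new MyModuleTestFixture(/* params */))(c => new MyFixtureTester(c))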


Extending Data Types or way to add information

It appears that most/all of the Data types in Chisel are sealed classes which do not allow a user to extend from them. Is it possible to add information regarding some user defined fields or to add support in the future?
I think there are a few cases where it could be helpful to have additional information:
Port descriptions, possibly for documentation
Voltage levels/biases
If you are doing some chip top-level connections, you may have to make certain connections
Also, many signals will have a set_dont_touch (an SDC constraint, not to be confused with Chisel's dontTouch) placed on them, so it may be possible to add these fields for automatic SDC constraint generation.
Modeling purposes
Chisel obviously doesn't deal with behavioral modeling, but there are times when a Verilog/SV real is used for modeling. This could be used to print out where these signals are for any post-processing.
I don't expect Chisel to handle all of the actual cases (such as generating the documentation or dealing with connections), but if these members can be added or extended, a user can check them during construction and/or after elaboration for additional flows.
Thanks
Chisel and FIRRTL have a fairly robust annotation system for handling such metadata. It is an area of active development; the handling of annotating instances (rather than modules) is improved in the soon-to-be-released Chisel 3.4.0 / FIRRTL 1.4.0. That said, I can provide a simple example to give a flavor of how it works.
Basically, FIRRTL has this notion of an Annotation which can be associated with zero, one, or many Targets. A Target is the name of a hardware component (like a register or wire) or a module. This is exactly how Chisel's dontTouch is implemented.
import chisel3._
import chisel3.stage._
import firrtl.annotations.JsonProtocol
import firrtl.transforms.DontTouchAnnotation

class Foo extends Module {
  val io = IO(new Bundle {
    val in = Input(Bool())
    val out = Output(Bool())
  })
  dontTouch(io)
  io.out := ~io.in
}

val resultAnnos = (new ChiselStage).run(ChiselGeneratorAnnotation(() => new Foo) :: Nil)
val dontTouches = resultAnnos.collect { case dt: DontTouchAnnotation => dt }
println(JsonProtocol.serialize(dontTouches))
/* Prints:
[
  {
    "class":"firrtl.transforms.DontTouchAnnotation",
    "target":"~Foo|Foo>io_in"
  },
  {
    "class":"firrtl.transforms.DontTouchAnnotation",
    "target":"~Foo|Foo>io_out"
  }
]
*/
Note that this is fully extensible: it is fairly straightforward (though not well-documented) to define your own "dontTouch-like" API. Unfortunately, this flow does not have as much documentation as the Chisel APIs, but the overall structure is there and is in heavy use in projects like FireSim (https://fires.im/).
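To give a flavor, here is a minimal sketch of a user-defined "dontTouch-like" API using Chisel's experimental annotation hooks. VoltageDomainAnnotation and voltageDomain are made-up names, echoing the voltage-level idea from the question:

import chisel3._
import chisel3.experimental.{annotate, ChiselAnnotation}
import firrtl.annotations.{ReferenceTarget, SingleTargetAnnotation}

// A custom annotation carrying user metadata; duplicate keeps it attached
// to the right target as the compiler renames components.
case class VoltageDomainAnnotation(target: ReferenceTarget, domain: String)
    extends SingleTargetAnnotation[ReferenceTarget] {
  def duplicate(n: ReferenceTarget) = this.copy(target = n)
}

// A dontTouch-like user API: voltageDomain(io.out, "VDD_0V9")
object voltageDomain {
  def apply(signal: Data, domain: String): Unit = annotate(new ChiselAnnotation {
    def toFirrtl = VoltageDomainAnnotation(signal.toTarget, domain)
  })
}

After elaboration, these annotations come out in the same annotation collection as the DontTouchAnnotations above, ready for a custom transform or an emitted metadata file.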
A common use of annotations is to associate certain metadata (like physical design information) with hardware objects, propagate it through compilation, and then emit a file in whatever format hooks into follow-on flows.
An exciting feature also coming in Chisel 3.4 that helps with this is the new "CustomFileEmission" API. When writing custom annotations, it will be possible to tell FIRRTL how to emit the annotation such that you could, for example, take an annotation carrying physical design information and emit a TCL file.

Which invalidate method to use

I am a bit confused about which invalidate method to use and when to use it. I need to change the x and y of a component, and in that case one should call an invalidation method for optimization, but I don't know which one, or when exactly:
target.addElement(node);
node.x = 100 + target.horizontalScrollPosition;
node.y = 100 + target.verticalScrollPosition;
node and target are both Groups
It depends on the component, and perhaps you don't have to call any at all. From the given piece of code I'd say it's invalidateSize(). But containers usually do a good job of measuring their dimensions properly. invalidateDisplayList might be the right call if you need to change the way the component is displayed.
So, generally speaking, it depends on the component (super type etc) you are implementing.
Edit:
As both instances are Groups, you shouldn't call any invalidation methods at all. You would only call those methods when implementing a custom component with additional properties. In the case of Groups, everything has been done for you in advance: the component life cycle is implemented, and the various layouts provide a comfortable level of indirection.
When you extend Group (or any other component), you should be familiar with the component life cycle.
Rules of thumb:
Ignore the invalidation calls in pure MXML, as this is handled by the MXML compiler and the components themselves.
Use the invalidation calls in overridden setters that mutate the state of the component (even in MXML). This usually leads to a clean yet simple component design if the setters are used everywhere, even inside the component's private methods (see the sketch after this list).
Use validateSize, validateNow, etc. carefully, as these are synchronous shortcuts that bypass the component life cycle.
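To show the shape of the second rule, here is a schematic sketch (in Scala for brevity, not the actual Flex API): setters only record what became stale, and the framework performs the expensive work once per frame.

// Schematic invalidate-then-validate: setters flag stale state; the
// framework's validation pass does the actual work once per frame.
class LabelComponent {
  private var _label = ""
  private var displayListInvalid = false

  def label: String = _label
  def label_=(value: String): Unit =
    if (value != _label) {      // mutate the state ...
      _label = value
      invalidateDisplayList()   // ... and defer the redraw
    }

  def invalidateDisplayList(): Unit = displayListInvalid = true

  // Called once per frame by the framework's validation pass.
  def validateNow(): Unit =
    if (displayListInvalid) {
      displayListInvalid = false
      updateDisplayList()
    }

  protected def updateDisplayList(): Unit = println(s"redraw: ${_label}")
}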
The invalidation life cycle is based on the Flash Player's elastic racetrack, which divides each frame between executing code and rendering.
Further reading on the idea behind the invalidation calls:
Updated elastic racetrack[1] and The Elastic racetrack[2]
[1]: http://www.craftymind.com/updated-elastic-racetrack-for-flash-9-and-avm2/
[2]: http://tedpatrick.com/2005/07/19/flash-player-mental-model-the-elastic-racetrack/

Groovy performance

Hi,
We are going to start a CRUD project. I have some experience using Groovy, and I think it is the right tool. My concern is about performance: how good is Groovy compared to a Java solution? It is estimated that we can have up to 100 simultaneous users. We are going to use a MySQL DB and a Tomcat server.
Any comment or suggestion?
Thanks
I've recently gathered five negative votes (!) on an answer about Groovy performance; still, I think there is, indeed, a need for objective facts. Personally, I find it productive and fun to work with Groovy and Grails; nevertheless, there is a performance issue that needs to be addressed.
There are a number of benchmark comparisons on the web, including this one. You can never trust single benchmarks (and the cited one isn't even close to being scientific), but you'll get the idea.
Groovy strongly relies on runtime metaprogramming. Every object in Groovy (well, except Groovy scripts) extends GroovyObject with its invokeMethod(..) method, for example. Every time you call a method on your Groovy classes, the method is not called directly, as in Java, but by invoking the aforementioned invokeMethod(..) (which does a whole bunch of reflection and lookups).
Additionally, every GroovyObject has an associated MetaClass. The concepts of method invocation, etc., are similar.
There are other factors that decrease Groovy performance compared to Java, including the boxing of primitive data types and (optional) weak typing, but the aforementioned runtime metaprogramming is crucial. It also means you cannot count on the JIT compiler, which compiles hot Java bytecode to native code, to speed up execution much.
To address these issues, there's the Groovy++ project. You simply annotate your Groovy classes with @Typed, and they'll be statically compiled to (real) Java bytecode. Unfortunately, I found Groovy++ to be not quite mature and not well integrated with the main Groovy line and the IDEs. Groovy++ also contradicts basic Groovy programming paradigms. Moreover, Groovy++'s @Typed annotation does not work recursively; that is, it does not affect underlying libraries like GORM or the Grails controller infrastructure.
I guess you're evaluating a Grails project as well.
Grails' GORM makes heavy use of runtime metaprogramming; using Hibernate directly should perform much better. At the controller or (especially) service level, extensive computations can be externalized to Java classes. However, GORM's share of the work in a typical CRUD application is high.
Potential performance issues in Grails are typically addressed by caching layers at the database level or by avoiding calls to service or controller methods (see the SpringCache plugin or the Cache Filter plugin). These are typically implemented on top of the Ehcache infrastructure.
Caching, obviously, suits static data well, in contrast to (database) data that changes frequently or web output that is rather variable.
And, finally, you can "throw hardware at it". :-)
In conclusion, the most decisive factor for or against using Groovy/Grails on a large-scale website ought to be whether caching fits the specific website's nature.
EDIT:
As for the question whether Java's JIT compiler had a chance to step in ...
A simple Groovy class
class Hello {
    def getGreeting(name) {
        "Hello " + name
    }
}
gets compiled to
public class Hello implements GroovyObject {
    public Hello() {
        Hello this;
        CallSite[] arrayOfCallSite = $getCallSiteArray();
    }

    public Object getGreeting(Object name) {
        CallSite[] arrayOfCallSite = $getCallSiteArray();
        return arrayOfCallSite[0].call("Hello ", name);
    }

    static {
        Long tmp6_3 = Long.valueOf(0L);
        __timeStamp__239_neverHappen1288962446391 = (Long)tmp6_3;
        tmp6_3;
        Long tmp20_17 = Long.valueOf(1288962446391L);
        __timeStamp = (Long)tmp20_17;
        tmp20_17;
        return;
    }
}
This is just the tip of the iceberg. Jochen Theodorou, an active Groovy developer, put it this way:
A method invocation in Groovy usually consists of several normal method calls: the arguments are stored in an array, the classes of the arguments must be retrieved, a key is generated out of them, a hashmap is used to look up the method, and if that fails we have to test the available methods for compatible ones, select one of them based on the runtime type, create a key for the hashmap, and then, in the end, do a reflection-like call on the method.
I really don't think that the JIT inlines such dynamic, highly complex invocations.
As for a "solution" to your question, there is no "do it that way and you're fine". Instead, the task is to identify the factors that are more crucial than others and possible alternatives and mitigation strategies, to evaluate their impact on your current use cases ("can I live with it?"), and, finally, to identify the mix of technologies that meets the requirements best (not completely).
Performance (in the context of web applications) is an aspect of your application and not of the framework/language you are using. Any discussion and comparison about method invocation speed, reflection speed and the amount of framework layers a call goes through is completely irrelevant. You are not implementing photoshop filters, fractals or a raytracer. You are implementing web based CRUD.
Your showstopper will most probably be inefficient database design, N+1 queries (in case you use ORM), full table scans etc.
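To make the N+1 point concrete, a schematic sketch (in Scala; the Db.query helper is a made-up stand-in for whatever ORM or JDBC layer you use):

// Schematic N+1 problem: one query for the list, then one more per row.
object Db {
  def query(sql: String, args: Any*): List[Map[String, Any]] = Nil // stub
}

val posts = Db.query("SELECT id FROM post LIMIT 20")
// N additional round trips, one per post:
val comments = posts.map(p => Db.query("SELECT * FROM comment WHERE post_id = ?", p("id")))
// A single JOIN (or the ORM's eager fetching) collapses this into one query.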
To answer your question: use any modern language/web framework you feel more confident with and focus on correct architecture/design to solve the business problem at hand.
Thanks for the answers and advice. I like Groovy, but there might be performance problems under some circumstances. Groovy++ might be a better choice. At this point I would prefer to give a chance to Spring Roo, which overlaps heavily with Groovy, but you remain in Java and NO roo.jar is added to your project. Therefore you are not paying any extra cost for using it.
Moreover, Roo allows reverse engineering and round-trip engineering.
Unfortunately, the plug-in library is pretty small so far.
Luis
50 to 100 active users is not much traffic. As long as your pages are cached correctly and your MySQL queries are properly indexed, you should be OK.
Here is a site I am running in my basement on a $1000 server. It's written in Grails.
Check out the performance yourself: http://www.ewebhostguide.com
Caution: sometimes Comcast connections are down and the site may appear down, but that happens only for a few minutes. Cons of running a site in your basement.

Using functional language concepts with OO - is there a language?

I was recently thinking how I'm not always using the beautiful concepts of OO when writing Pythonic programs. In particular, I thought I'd be interested in seeing a language where I could write the typical web script as
# Fictional language
# This script's combined effect is to transform (Template, URI, Database) -> HTTPOutput

HTTPOutput:
    HTTPHeaders + Maintext

Flags: # This is a transform URI -> Flags
    value = URI.split('?').after
    refresh = 'r' in value
    sort = /sort=([a-z])/.search(value)

HTTPHeaders: # This is a transform Flags -> HTTPHeaders
    'Content-type:...' + Flags.refresh ? 'Refresh: ...' : ''

Maintext:
    Template.replace('$questions', PresentedQuestions[:20])

Questions:
    (Flags.sort = 'r') ? RecentQuestions : TopQuestions

PresentedQuestions:
    Questions % '<h4>{title}</h4><p>{body}</p>'

RecentQuestions:
    Database.Questions.sort('date')

TopQuestions:
    Database.Questions.sort('votes')
See what happens? I am trying to make as many objects as possible; each paragraph declares something I call a transform. For example, there is a transform HTTPHeaders. In an imperative language that would be a class, an object, and a function declaration combined:
class HTTPHeaders_class
{
    public char* value;
    HTTPHeaders_class()
    {
        value = ... + Flags.refresh ? + ... // [1]
    }
}

class Flags_class
{
    public char* flagstring;
    public bool refresh;
    ...
    Flags_class()
    {
        value = ... /* [3] */
        refresh = ...
    }
}

Flags = new Flags_class(URI);
HTTPHeaders = new HTTPHeaders_class(Flags); // [2]
However, I want there to be no way to specify that an object should change unless the inputs from which the object is made change, and no way to have side effects. This makes for a drastic simplification of the language. I believe this means we're doing functional programming ("a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data").
I certainly try to use things like Python classes, MVC frameworks, and Django (thanks for that answer), but I don't think they have the concepts above and below.
Each object has a value field that can be referred to just by writing the class name.
If HTTPHeaders is referred to somewhere, a static, unchangeable HTTPHeaders object is created as soon as possible. All references to HTTPHeaders then refer to this object.
Suppose I want to repeat the program with the same URI object while the interpreter is still in memory. Since Flags depends only on URI, and HTTPHeaders only on Flags, those are not recalculated. However, if Database is modified, then Questions needs to be recalculated, and thus HTTPOutput may change too (see the sketch after these points).
The interpreter automatically deduces the correct sequence for initializing the classes. Their dependencies must form a tree for that to happen, of course.
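That recalculation behaviour is essentially a dependency graph with memoization. Purely as an illustration (in Scala, not the fictional language above), the idea could look like:

// A value that caches its result and recomputes only after an input
// it depends on has been invalidated.
class Cell[A](compute: () => A) {
  private var cached: Option[A] = None
  private var dependents: List[Cell[_]] = Nil

  def dependsOn(inputs: Cell[_]*): this.type = {
    inputs.foreach(i => i.dependents = this :: i.dependents)
    this
  }

  def get: A = cached.getOrElse { val v = compute(); cached = Some(v); v }

  def invalidate(): Unit = {
    cached = None
    dependents.foreach(_.invalidate()) // ripple to everything downstream
  }
}

A change to Database would call invalidate() on its cell, and only the cells downstream of it recompute on their next get.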
I believe this would be a useful model for programs like web scripts where there are no side effects. Is there a language where one already writes programs similar to this?
If you really want to delve into web application development with Python, look at Django. You are better off using an MVC architecture in this case, and Django does a very nice job of supporting MVC applications.
What you are probably interested in is more of a Declarative programming approach than a functional one. Functional programming is more concerned with mapping an input to an output as a pure (mathematical) function. The declarative approach is all about stating what should happen instead of how to do it.
In any case, dig into Model-View-Controller and Django. You will probably find that it fits the bill in a completely different manner.
Take a look at F#. It is specifically designed as a functional language (based on OCaml) with OO support utilizing the .NET stack.
I don't think it's exactly what you are looking for but Scala tries to integrate OO and functional features under a common language.
Your code looks like a DSL for web applications and Sinatra is such a DSL. Sinatra does not do exactly what you do there but it's in the same ballpark. http://www.sinatrarb.com/ - it's written in Ruby but hey, let's all be friends here in dynamic languages land.
This actually feels very much like Haskell, except that you're not using pure functions here. For example, Flags doesn't have the URI passed into it; URI is a separate definition that is presumably not producing the same URI every time it's called, and so on.
For URI to be a pure function, it would have to have a parameter that would give it the current request, so that it can always return the same value for the same inputs. (Without any parameters to work on, a pure function can only return the same result over the life of a closure.) However, if you want to avoid explicitly giving URI a parameter every time, this can be done with various techniques; we do this with monads in Haskell.
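As an illustration of that idea (in Scala rather than Haskell), each transform can be a pure function of the request, composed Reader-style; the Request type and the field names here are made up:

// Each "transform" is a pure function of the request: equal inputs always
// give equal outputs, so results can safely be shared and memoized.
case class Request(uri: String)

case class Transform[A](run: Request => A) {
  def map[B](f: A => B): Transform[B] = Transform(r => f(run(r)))
  def flatMap[B](f: A => Transform[B]): Transform[B] =
    Transform(r => f(run(r)).run(r))
}

val uri: Transform[String] = Transform(_.uri)
val refresh: Transform[Boolean] = uri.map(_.contains("r")) // like Flags.refresh
val headers: Transform[String] =
  refresh.map(r => "Content-type: ..." + (if (r) "Refresh: ..." else ""))

// headers.run(Request("/questions?r")) supplies the request exactly once, at the edge.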
It seems to me that the style of programming you're thinking of might be based on "combinators," having small functions that are glued together inside a framework to produce a large, complex function that does the overall processing.
I see my favourite language has not been mentioned yet, so I'd like to jump in and suggest Dyalog APL as a language for 100% functional programming. APL has a looong history and was developed when there was no Internet, but Dyalog is the most active provider of APL implementations, and they also have a fully functional webserver that is available free of charge. (The interpreter is also available free of charge for non-commercial use.)

Why would you want Dependency Injection without configuration?

After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When looking at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason of existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there's always going to be exceptions, but then you just handle the exceptions instead.
I'm on the same path as krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it makes sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
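A sketch of that convention (illustrative Scala, not a real container): the requested class doubles as its own default binding, and the configuration only holds the handful of exceptions:

// Minimal convention-over-configuration lookup: if no override was
// registered for the key, instantiate the key class itself.
class Container {
  private var overrides = Map.empty[Class[_], () => Any]

  def register[T](key: Class[T])(make: () => T): Unit =
    overrides += (key -> make)

  def resolve[T](key: Class[T]): T = overrides.get(key) match {
    case Some(make) => make().asInstanceOf[T]
    case None       => key.getDeclaredConstructor().newInstance() // the default
  }
}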
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit-test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject! Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config. Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
You don't even need to use a DI framework to apply the dependency injection pattern; you can simply use static factory methods for creating your objects if you don't need configurability beyond recompiling the code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without recompilation, you'll want something you can configure via text or XML files.
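A sketch of that framework-free approach (in Scala, with hypothetical service names): all the wiring lives in one factory object, so swapping an implementation is a one-file change plus a recompile:

// Dependency injection by hand: consumers depend on a trait, and a single
// factory decides which implementation gets wired in.
trait EmailService { def send(to: String, body: String): Unit }

class SmtpEmailService extends EmailService {
  def send(to: String, body: String): Unit = { /* real SMTP call */ }
}

class ReportJob(email: EmailService) { // knows only the interface
  def run(): Unit = email.send("ops@example.com", "report finished")
}

object Factory {
  // Change this one method (and recompile) to swap the implementation.
  def reportJob(): ReportJob = new ReportJob(new SmtpEmailService)
}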
I'll second the use of DI for testing. At the moment I only really consider using DI for testing, as our application doesn't require any configuration-based flexibility; it's also far too large to consider converting right now.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling the injection of components into your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (the set of best practices for building composite applications in .NET). In these best practices, each module "exposes" its implementation types inside a shared container. This way each module has a clear responsibility over who provides the implementation for a given interface. I think it will be clear enough once you understand how PRISM works.
When you use inversion of control, you are helping to make your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to ZIP it and then email it.
public class ZipProcessor : IFileProcessor
{
    readonly IZipService ZipService;
    readonly IEmailService EmailService;

    // The dependencies arrive through the constructor; the processor
    // never creates its own services.
    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        ZipService = zipService;
        EmailService = emailService;
    }

    public void Process(string fileName)
    {
        ZipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        EmailService.SendEmailTo(................);
    }
}
Why would this class need to actually do the zipping and the emailing itself when you could have dedicated classes to do this for you? Obviously you wouldn't, but that's only the lead-up to my point :-)
In addition to not implementing the zipping and emailing itself, why should the class know which class implements each service? If you pass interfaces to the constructor of this processor, then it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a D.I.C. (dependency injection container) you can configure which classes implement certain interfaces, then ask it to create an instance for you, and it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now, not only have you cleanly separated the class's functionality from the shared functionality, you have also prevented the consumer and provider from having any explicit knowledge of each other. This makes the code easier to understand, because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor, your mocked services will merely assert that the class attempted to send an email, rather than really sending one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));

//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(.................));

//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");

//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So, in short, you would want to do it to:
1. Prevent consumers from knowing explicitly which provider implements the services they need, which means there is less to understand at once when you read the code.
2. Make unit testing easier.
Pete