Accessing regmap RegFields - chisel

I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. TLRegisterNode has methods for generating JSON through some annotations; these are emitted in the regmap method by adding them to the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" register fields during or after elaboration?
I cannot just access the regmap as it's not really a val/var since it's a method. I can't quite figure out where this information is being stored. I don't really believe it's actually "storing" any information as much as it is simply creating the hardware to attach the specified logic to the RegisterNode based logic.
The JSON output is actually fine for me as I could just write a post processing script to convert JSON to my required formats, but I'm wondering if I can access this information OR if I could add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure if it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
//in *RegisterRouter.scala
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}

def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function and pass it to the regmap call or to the RegisterRouter:
def myFunc(mapping: RegField.Map*): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields
)
This is just a quick example I threw together. I believe a better approach, if something like this is possible, would be to have a Seq of functions attached to the RegisterNode that are run at the end of the regmap method, similar to how TLRegisterNode currently works. That way a user could add an arbitrary number of them while still using the plain regmap call.
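A rough sketch of that idea (none of this is existing rocket-chip API; the hook buffer and the addRegmapHook name are made up):

// Hypothetical: the node keeps a list of callbacks and runs each one after
// the normal regmap logic, so any number of consumers can see the mapping.
private val regmapHooks =
  scala.collection.mutable.ArrayBuffer.empty[Seq[RegField.Map] => Unit]

def addRegmapHook(hook: Seq[RegField.Map] => Unit): Unit =
  regmapHooks += hook

def regmap(mapping: RegField.Map*) = {
  // ... normal register generation ...
  regmapHooks.foreach(_(mapping))
}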
Background (not directly part of question):
I have a unified register script that I have built over the years in which I describe the registers for a particular IP. It works very similarly to the RegField/node.regmap flow, except it obviously doesn't know about diplomacy and the like. It will generate the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations and more complex uvm_reg_block definitions, with the ability to describe multiple instances of the IP for a subsystem, all the way up to the SoC level). It will also print out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to push most of my newer designs to Chisel/diplomacy.

I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different ElaborationArtefacts entry to grab the info and store it for later.
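For reference, the core of that approach looks roughly like this. This is only a sketch: ElaborationArtefacts.add is the real rocket-chip hook, but the artefact name, the text format, and pulling names out of RegFieldDesc are my own choices.

import freechips.rocketchip.util.ElaborationArtefacts

def regmap(mapping: RegField.Map*) = {
  // ... same register generation as the stock node ...
  // Then emit a custom artefact describing the map for later post-processing.
  val desc = mapping.map { case (offset, fields) =>
    val names = fields.map(_.desc.map(_.name).getOrElse("reserved")).mkString(", ")
    s"0x${offset.toHexString}: $names"
  }.mkString("\n")
  ElaborationArtefacts.add("myblock.regmap.txt", desc)
}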

What is the difference between LSP and OCP?

I have been trying to break down the differences between the Open/Closed Principle and the Liskov Substitution Principle, and the best and most common examples of both use the exact same problem: finding the area of a shape class.
They use slightly different means, but effectively solve the same problem with the same solution.
As these are both parts of SOLID, I'm really trying to find a reason to support why both are called out.
I'm looking for an explanation that doesn't work for both.
Thanks.
LSP:
Consumers target an abstraction (e.g. an interface) and should not need to know which concrete implementation stands behind the interface. For example, a client (e.g. a DocumentProcessor class) holds a dependency on IDocumentStore. If in V1 you gave it a SqlServerDocumentStore instance, and then in V2 you gave it a FileSystemDocumentStore, the client (DocumentProcessor) should work without modification. This can be achieved by making sure the contract of IDocumentStore is well defined and that DocumentProcessor, SqlServerDocumentStore and FileSystemDocumentStore abide by this contract.
The contract means much more than an interface. Having two classes implement the same interface does not mean they abide by the same contract (although they should).
For example, do both implementations support saving documents that are 20 MB or smaller? Or does one of them only support documents that are at most 10 MB? If it is part of the contract that an implementation should support 20 MB documents, then all implementations should support this.
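To make the contract idea concrete, here is a minimal C# sketch (the Document type, its SizeInBytes property, and the 10/20 MB numbers are illustrative only):

using System;

public class Document { public long SizeInBytes { get; set; } }

public interface IDocumentStore
{
    // Contract (documented, not expressible in the signature alone):
    // implementations must accept documents up to 20 MB.
    void Save(Document document);
}

public class SqlServerDocumentStore : IDocumentStore
{
    public void Save(Document document) { /* honors the 20 MB contract */ }
}

// Substitutable in type only: it compiles, but it breaks callers that
// rely on the documented 20 MB guarantee, so it violates LSP.
public class FileSystemDocumentStore : IDocumentStore
{
    public void Save(Document document)
    {
        if (document.SizeInBytes > 10 * 1024 * 1024)
            throw new NotSupportedException("Documents over 10 MB are not supported.");
    }
}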
OCP:
We should avoid modifying a unit of composition (e.g. a class or a function) after we release it. One way to achieve this is to make the units parameterized. For example, if you have a function (say, ProcessImages) that reads images from the file system, compresses them and then sends them to some web service, you can parameterize this function to accept other functions that are responsible for (1) reading images, (2) compressing them, (3) sending them.
E.g. (in C#):
public static void ProcessImages(
    Func<Image[]> getImages,
    Func<Image, CompressedImage> compressImage,
    Action<CompressedImage> sendImage)
{
    // ... Orchestrate the operation here
}
And, in the Composition Root:
Action processImages = () => ProcessImages(ReadImages, CompressImage, SendImage);
Where ReadImages, CompressImage, SendImage are themselves functions.
This way, if you want to change how the images are compressed, you will not modify the ProcessImages function. Instead you will create a new compression function (say CompressImageInADifferentWay) and then compose the ProcessImages function in the Composition Root like this:
Action processImages =
() => ProcessImages(ReadImages, CompressImageInADifferentWay, SendImage);
If you apply the OCP in a perfect way, only the Composition Root itself will change.
LSP allows us to achieve OCP. For example, because CompressImage and CompressImageInADifferentWay abide by some contract that ProcessImages knows about, we can replace CompressImage with CompressImageInADifferentWay without modifying ProcessImages.

Switch between 2 or more templates with an action in controllers?

I have a default Phoenix application. This app will have a page_controller
which will load an index.html.eex file.
The app will know to use the view to access templates/page/index.html.eex.
Now say you have created another HTML page which is identical to index.html.eex in every way, except it is in French.
As we do not want to create a whole new Phoenix application with all the same code, the only exception being the French translation of the current page/index.html.eex, is there a way to tell the view or the controller which file needs to be loaded?
Is there a plug which can be placed in the router to alter where render will look for its templates?
First of all, I would suggest using Gettext to handle the labels for French pages.
For example, you could keep all French templates in the very same folders (so you don't change the view logic), but name them with a suffix, e.g. "index_fr.html.eex", and then write a quite simple helper (not necessarily a plug) that appends this suffix to all of your template names.
Still, I would recommend using Gettext - the template's source code stays in one place and Gettext handles almost all of the logic for you.
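If you do go the suffix route, the helper could be as small as this sketch (the :lang assign and the template names are assumptions; set :lang however you like, e.g. from the params or a plug):

# In the controller (or a shared helper module):
defp template_for(conn, base) do
  case conn.assigns[:lang] do
    "fr" -> base <> "_fr.html"
    _ -> base <> ".html"
  end
end

def index(conn, _params) do
  render(conn, template_for(conn, "index"), [])
end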
I suggest you pick #patnowak's answer. Use Gettext; that's the tool made for translation and it is powerful enough.
If you still want to do it, remember that render/3 in the controller calls the render/2 functions defined in views, if they are defined. If not, it runs the default rendering function and looks up the template. Read the docs for more information.
So for example, this is the controller:
def index(conn, params) do
  # define assigns as you wish
  render(conn, "index.html", assigns)
end
Now, define this in the view:
def render("index.html, assigns) do
case assigns[:lang] do
"fr" -> render("index_fr.html", assigns)
_others -> render("index_en.html", assigns)
end
end
You may also write a plug to automatically put :lang into assigns:
def lang_plug(conn, _opts) do
  conn
  |> fetch_query_params()
  |> (fn cn -> assign(cn, :lang, cn.query_params["lang"] || "en") end).()
end
Look at Plug.Conn for the docs of fetch_query_params/1 and assign/3, and also for other functions to fetch the language from other places like headers or the body.
You get the idea. In the plug, fill assigns with :lang, fetch them inside your defined render function and act appropriately.
Still, don't do this. Using Gettext is the proper way.

Automatically generate R source code to build a package

I wrote bindings to an API and put everything into an R package, including tests, vignettes, etc., but the API keeps constantly changing. This brings up some issues:
updating my package is error-prone; maybe I miss a new function or forget to mark an old one as deprecated
submitting the package to CRAN is not a good idea, since it changes frequently and packages are reviewed by hand
I have a hard time keeping this software up to date, since the API changes irregularly and therefore I may miss changes
I came up with the idea to generate the bindings automatically. The API itself provides everything required for that via an online JSON documentation. These docs reflect constantly the current definition of the API.
Writing some code which converts the JSON docs to R functions is not the problem. But if I do so, I still need to update the package on CRAN. The best solution would be to create a package that (on load) looks up the API definition and creates the required functions. Ideally these functions should be unit tested.
I am thankful for any hint on that.
Best
Edit: The API is the firebrowse API with an example of what the input would be.
This is really challenging and thus there's no obvious way to do it. The whole idea behind WSDL was to be able to do this easily using a standardized XML description. That was never really implemented in R and it never really took off more broadly (because of the emergence of RESTful services and JSON).
You can definitely generate functions dynamically by creating so-called "function factories" (Hadley discussed these a bit here). In short, you write a function that takes JSON as input and returns a function that does whatever is described in the JSON. (Creating such a factory that dynamically does this whenever the package is loaded seems risky, but I suppose it's possible; see the sketch at the end of this answer. I'd probably just keep the factory to myself and use it to create and update the package.)
I'm not going to attempt to deal with your API specifically, but to see how this would work:
# create factory with arguments to control returned function
factory <- function(action, endpoint, content = TRUE, parsed = FALSE) {
  if (content) {
    if (parsed) {
      out <- function() httr::content(httr::VERB(action, endpoint))
    } else {
      out <- function() httr::content(httr::VERB(action, endpoint), "text")
    }
  } else {
    out <- function() httr::VERB(action, endpoint)
  }
  return(out)
}
# use factory to create different functions
(a <- factory("GET", "http://example.com", content = TRUE, parsed = FALSE))
## function() httr::content(httr::VERB(action, endpoint), "text")
(b <- factory("GET", "http://example.com", content = TRUE, parsed = TRUE))
## function() httr::content(httr::VERB(action, endpoint))
(c <- factory("GET", "http://example.com", content = FALSE))
## function() httr::VERB(action, endpoint)
# evaluate each function
a() # returns a character string
b() # returns parsed HTML
c() # returns an httr response object
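If you really do want the load-time variant mentioned in the question, the same factory could be driven from .onLoad. This is only a sketch: the JSON shape (spec$endpoints with name/url fields) and the URL are made up, and the generated functions would still need documentation and exports handled somehow.

.onLoad <- function(libname, pkgname) {
  # Hypothetical: fetch the API description and build one binding per endpoint.
  spec <- jsonlite::fromJSON("https://example.com/api/docs.json",
                             simplifyVector = FALSE)
  for (ep in spec$endpoints) {
    assign(ep$name,
           factory("GET", ep$url, content = TRUE, parsed = TRUE),
           envir = asNamespace(pkgname))
  }
}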
The best solution would be to create a package that (on load) looks up the API definition and creates the required functions. Ideally these functions should be unit tested.
This is a very well known problem. Reacting to server changes without breaking the clients is a pain, not just in your situation, but also for mobile applications (which need to be resubmitted every time the API changes).
While your approach may work (generate the client on the fly), the best result can be reached if the server collaborates as well.
You have to decouple the client from the API implementation. How? By using REST (for real), thus introducing the concept of state and transitions.
This is not the right place to explain how it works, but a great introduction can be found in this presentation by Glenn Block; continue reading from there.
This won't solve your particular problem, but it is, in my opinion, the right way to approach the problem.
You may want to have a look at this video as well, starting at 15:24.

Finding draw_if_interactive() in pyplot.py

There are multiple draw_if_interactive() expressions in the pyplot module but I can't find this function's definition anywhere in the module.
From intuition and readings, it's an easy guess that the function enables on-demand plotting but where can I read its definition? Thanks.
The function is actually in the backend code; the actual implementation depends on your backend. For example, with the TkAgg backend the function is in backend_tkagg.py:
def draw_if_interactive():
    if matplotlib.is_interactive():
        figManager = Gcf.get_active()
        if figManager is not None:
            figManager.show()
Similar functions exist for the other backends; they use matplotlib.is_interactive() to determine whether this is an interactive session and then use the backend-specific drawing commands to draw the image.
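If you just want to jump straight to the definition that is actually in effect, you can ask Python directly (depending on your matplotlib version this points either at a thin wrapper in pyplot or straight at the backend module):

import inspect
import matplotlib
import matplotlib.pyplot as plt

print(matplotlib.get_backend())                        # e.g. "TkAgg"
print(inspect.getsourcefile(plt.draw_if_interactive))  # file to open
print(inspect.getsource(plt.draw_if_interactive))      # the definition itself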

Logging different project libraries, with a single logging library

I have a project in Apps Script that uses several libraries. The project needed a more complex logger (logging levels, color coding), so I wrote one that outputs to Google Docs. All is fine and dandy if I immediately print the output to the Google Doc, importing the logger in each of the libraries separately. However, I noticed that when doing a lot of logging it takes much longer than without, so I am looking for a way to write all of the output in a single go at the end, when the main script finishes.
This would require either:
Being able to define the logging library once (in the main file) and somehow accessing this in the attached libs. I can't seem to find a way to get the main project's closure from within the libraries, though.
Some sort of singleton logger object. Not sure if this is possible from within a library; I have trouble figuring it out either way.
Extending the built-in Logger to suit my needs, not sure though...
My project looks as follows:
Main Project
Library 1
Library 2
Library 3
Library 4
This is how I use my current logger:
var logger = new BetterLogger(/* logging level */);
logger.warn('this is a warning');
Thanks!
Instead of writing to the file at each logged message (which is the source of your slowdown), you could write your log messages to the logger library's ScriptDb instance and add a .write() method to your logger that outputs the messages in one go. Your logger constructor can take a messageGroup parameter which serves as a unique identifier for the lines you would like to write. This would also allow you to use different files for logging output.
As you build your messages into proper output to write to the file (don't write each line individually, batch operations are your friend), you might want to remove the messages from ScriptDb. However, it might also be a nice place to pull back old logs.
Your message object might look something like this:
{
  message: "My message",
  color: "red",
  messageGroup: "groupName",
  level: 25,
  timeStamp: new Date().getTime(), // ScriptDb won't take Date objects natively
  loggingFile: "Document Key"
}
The query would look like:
var db = ScriptDb.getMyDb();
var results = db.query({messageGroup: "groupName"}).sortBy("timeStamp", db.NUMERIC);
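The batched write() could then look roughly like this (a sketch only; the BetterLogger internals and the way the document key is stored on the logger are assumptions):

BetterLogger.prototype.write = function(messageGroup) {
  var db = ScriptDb.getMyDb();
  var results = db.query({messageGroup: messageGroup})
                  .sortBy('timeStamp', db.NUMERIC);
  var lines = [];
  var items = [];
  while (results.hasNext()) {
    var item = results.next();
    lines.push(item.message);
    items.push(item);
  }
  // One append for the whole run instead of one write per message.
  DocumentApp.openById(this.loggingFile).getBody()
      .appendParagraph(lines.join('\n'));
  // Optionally clear the flushed messages (or keep them around as history).
  items.forEach(function(it) { db.remove(it); });
};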