Is there a way to disable the CheckCombLoops FIRRTL pass? (The loops it reports are false positives.)
If possible, I want to do this in the tester driver. I saw the option for the FIRRTL interpreter, but I still get an exception during the FIRRTL run. I also want to be able to use VCS as a backend.
class LazyStackNWait2Test extends FlatSpec with Matchers {
  behavior of "LazyStackNWait2"

  it should "work" in {
    chisel3.iotesters.Driver.execute(
        Array("--fr-allow-cycles", "--backend-name", "firrtl"),
        () => new LazyStackN(10, () => new LazyStackWait2)) { c =>
      new LazyStackNTester(c)
    } should be (true)
  }
}
Here is part of the log:
[info] [1.057] Done elaborating.
[info] - should work *** FAILED ***
[info] firrtl.passes.PassExceptions:
firrtl.passes.CheckCombLoops$CombLoopException: #[:#5437.2]: [module LazyStackN] Combinational loop detected:
The build is from the latest GitHub HEAD.
EDIT: This is now supported via the --no-check-comb-loops option. Relevant PR
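With a build that includes that PR, the driver invocation above just swaps in the new flag, e.g.:
chisel3.iotesters.Driver.execute(
    Array("--no-check-comb-loops", "--backend-name", "firrtl"),
    () => new LazyStackN(10, () => new LazyStackWait2)) { c =>
  new LazyStackNTester(c)
} should be (true)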
There currently is not a way, but I've created an issue to add this feature, and it shouldn't be that difficult to do. https://github.com/ucb-bar/firrtl/issues/600
Just out of curiosity, what kind of false combinational loops are you seeing? We find that they are pretty rare* and usually easy to work around, so if you have any examples you can share I would greatly appreciate it.
* For example, they usually happen with aggregate types that have a dependence between subelements if the aggregate gets cast to bits and back (through a chisel3.util.Mux1H for example). Just trying to see what other common use patterns can cause false loops.
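For instance, here is a contrived chisel3 sketch of that cast pattern (illustrative module, not from the question), where each field's dependencies are acyclic but the cast merges them:
import chisel3._

class Pair extends Bundle {
  val a = UInt(8.W)
  val b = UInt(8.W)
}

class FalseLoop extends Module {
  val io = IO(new Bundle {
    val in  = Input(UInt(8.W))
    val out = Output(UInt(8.W))
  })
  val w = Wire(new Pair)
  w.a := io.in
  // The cast concatenates both fields, so the cast node depends on all of w;
  // w.b then depends on the cast, which CheckCombLoops flags as a loop even
  // though w.b only ever reads the bits that came from w.a.
  w.b := w.asUInt()(15, 8)
  io.out := w.b
}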
I am using the Checker Framework Gradle plugin to statically analyze nullness and tainting in my code. When I run the checkers via Gradle, only one of my classes is properly checked. All the other classes fail with an ambiguous error saying the checker did not run:
error: [type.checking.not.run] NullnessChecker did not run because of a previous error issued by javac
public class Main {
^
The linked manual does not mention what might cause this. I had some @Nullable annotations on some static fields of the primary class I am using, but undoing those did not fix the issue.
My build.gradle is set up like so:
plugins {
    // Checker Framework pluggable type-checking
    id 'org.checkerframework' version '0.6.3'
}
checkerFramework {
    checkers = [
        'org.checkerframework.checker.nullness.NullnessChecker',
        'org.checkerframework.checker.tainting.TaintingChecker'
    ]
}
apply plugin: 'org.checkerframework'
Where do I find more detail on this error?
You didn't show the full javac output. The relevant errors should be just above the error: [type.checking.not.run] line that you did show.
The Checker Framework runs as a plugin to javac. When javac issues an error in one class (including any Checker Framework error), javac may or may not process other classes. Unfortunately, there is no good way for a user to predict how far javac will get. Your best bet is to focus on the code that matters most to you, and resolve each error in turn before proceeding to other classes.
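For illustration, here is a hypothetical class (not from your project) whose one genuine error would produce exactly this cascade:
import org.checkerframework.checker.nullness.qual.Nullable;

public class Config {
    static @Nullable String token;

    static int tokenLength() {
        // The Nullness Checker reports [dereference.of.nullable] here; once
        // javac has issued that error, classes compiled afterwards may get
        // only the [type.checking.not.run] message instead of real checking.
        return token.length();
    }
}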
I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. TLRegisterNode has methods for generating the JSON through some annotations; these are emitted in the regmap method by adding them to the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the regmap's register fields during or after elaboration?
I cannot just access the regmap, as it's a method rather than a val/var. I can't quite figure out where this information is being stored; I suspect it isn't actually "storing" any information so much as simply creating the hardware that attaches the specified logic to the RegisterNode-based logic.
The JSON output is actually fine for me, as I could just write a post-processing script to convert JSON to my required formats, but I'm wondering if I can access this information OR add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure whether it's possible to add custom functions that run at the end of the regmap method.
Here is something I threw together quickly:
// in *RegisterRouter.scala
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}

def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function and pass it to regmap or to the RegisterRouter:
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields
)
This is just a quick example. I believe what would be better, if something like this were possible, would be to have a Seq of functions on the RegisterNode that are run at the end of the regmap method, similar to how TLRegisterNode currently works. A user could then add an arbitrary number of them while still using the plain regmap call; a sketch of that idea follows.
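A hedged sketch of that Seq-of-functions idea (the trait and method names here are hypothetical, not rocket-chip API):
// hypothetical mix-in for a RegisterNode-like class; names are illustrative
trait RegmapHooks {
  import scala.collection.mutable.ArrayBuffer
  private val hooks = ArrayBuffer.empty[Seq[RegField.Map] => Unit]

  // users may register any number of callbacks before regmap is called
  def addRegmapHook(f: Seq[RegField.Map] => Unit): Unit = hooks += f

  // the regmap implementation would invoke this once the mapping is known
  protected def runRegmapHooks(mapping: Seq[RegField.Map]): Unit =
    hooks.foreach(_(mapping))
}
node.addRegmapHook(myFunc) would then compose with the ordinary regmap call instead of replacing it.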
Background (not directly part of question):
I have a unified register script that I have built over the years, in which I describe the registers for a particular IP. It works very similarly to RegField/node.regmap, except it obviously doesn't know about diplomacy and the like. It will generate the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations, and more complex uvm_reg_block defines, with the ability to describe multiple instances of the IP for a subsystem all the way up to the SoC level). It will also print out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm trying to push most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode, which is the same as the rocket-chip RegisterNodes except that I use a different ElaborationArtefact to grab the info and store it for later.
I'm trying to deploy an app to production and getting a little confused by environment and application variables and what is happening at compile time vs runtime.
In my app, I have a GenServer process that requires a token to operate, so I use config/releases.exs to set the token variable at runtime:
# config/releases.exs
import Config
config :my_app, :my_token, System.fetch_env!("MY_TOKEN")
Then I have a bit of code that looks a bit like this:
defmodule MyApp.SomeService do
  use SomeBehaviour,
    token: Application.get_env(:my_app, :my_token),
    other_config: :stuff

  ...
end
In production, the GenServer process (which does some HTTP stuff) gives me 403 errors suggesting the token isn't there. So, to clarify: is the use keyword evaluated at compile time (in which case the application environment doesn't exist yet)?
If so, what is the correct way of getting runtime environment variables into a service like this? Is it more correct to define the config in application.ex when starting the process? E.g.:
children = [
  {MyApp.SomeService, [
    token: Application.get_env(:my_app, :my_token),
    other_config: :stuff
  ]}
  ...
]
Supervisor.start_link(children, opts)
I may have answered my own questions here, but it would be helpful to have someone who knows what they're doing confirm this and point me in the right direction. Thanks.
Elixir has two stages: compilation and runtime, both handled by Elixir itself. To clearly understand what happens when, one should realize that nearly everything is a macro and that, during the compilation stage, Elixir expands these macros until everything is expanded. The resulting AST is what reaches runtime.
In your example, use SomeBehaviour, foo: :bar implicitly calls the SomeBehaviour.__using__/1 macro. To expand the AST, its argument (a keyword list) must be expanded as well. Hence the Application.get_env(:my_app, :my_token) call happens at compile time.
There are many ways to move it to runtime. If you are the owner of SomeBehaviour, make it accept the pair {:my_app, :my_token} and call Application.get_env/2 somewhere inside it.
Or, as you suggested, pass it as a parameter to children; that code belongs to a function body, meaning it won't be expanded during the compilation stage but will instead be compiled into the resulting BEAM and executed at runtime.
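For instance, a minimal sketch of the second option (assuming MyApp.SomeService is a GenServer you control), where the lookup happens in init/1 at runtime:
defmodule MyApp.SomeService do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(opts) do
    # Runs at runtime, after config/releases.exs has been evaluated,
    # so the value fetched from MY_TOKEN is available here.
    token = Application.get_env(:my_app, :my_token)
    {:ok, opts |> Map.new() |> Map.put(:token, token)}
  end
end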
As indicated by Tom Browder in this issue, the $*ARGFILES dynamic variable might contain invalid filehandles if any of the files mentioned in the command line is not present.
for $*ARGFILES.handles -> $fh {
    say $fh;
}
will fail with an X::AdHoc exception (this should probably be improved too):
Failed to open file /home/jmerelo/Code/perl6/my-perl6-examples/args/no-file: No such file or directory
The problem will occur as soon as the invalid filehandle is used for anything. Would there be a way of checking whether a filehandle is valid before incurring an exception?
You can check if something is a Failure by checking for truthiness or definedness without the Failure throwing:
for $*ARGFILES.handles -> $fh {
    say $fh if $fh;   # check truthiness
    .say with $fh;    # check definedness + topicalization
}
If you still want to throw the Exception that the Failure encompasses, then you can just .throw it.
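For example, a sketch that keeps good handles and rethrows bad ones:
for $*ARGFILES.handles -> $fh {
    if $fh { say $fh }     # a valid handle
    else   { $fh.throw }   # rethrow the exception the Failure encompasses
}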
TL;DR I thought Liz had it nailed, but it seems like there's a bug (see A bug? below), or perhaps worse (see Ugh).
A bug?
It looks like whenever the IO::CatHandle class's .handles method reaches a handle that ought by rights to produce a Failure (delaying any exception throw), it instead immediately throws an exception (perhaps the very one that would work if it were just delayed, or perhaps something broken).
This seems either wrong or very wrong.
Ugh
See the exchange between Zoffix and Brad Gilbert and Zoffix's answer to the question How should I handle Perl 6 $*ARGFILES that can't be read by lines()?
Also:
https://github.com/rakudo/rakudo/issues/1313
https://github.com/rakudo/rakudo/search?q=argfiles&type=Issues
https://github.com/rakudo/rakudo/search?q=cathandle&type=Issues
A potential workaround is currently another bug?
In discussing "Implement handler for failed open on IO::CatHandle" Zoffix++ closed it with this code as a solution:
.say for ($*ARGFILES but role {
    method next-handle {
        loop { try return self.IO::CatHandle::next-handle }
    }
})
I see that tbrowder has reopened this issue as part of the related issue this SO question is about, saying:
If this works, it would at least be a usable example for the $*ARGFILES var in the docs.
But when I run it in 6.d (and I see similar results for 6.c), with or without valid input, I get:
say not yet implemented
(similar if I .put or whatever).
This is nuts and suggests something gutsy is getting messed up.
I've searched rt and gh/rakudo issues for "not yet implemented" and see no relevant matches.
Another workaround?
Zoffix clearly intended their code as a permanent solution, not merely a workaround. But it unfortunately doesn't seem to work at all for now.
The best I've come up with so far:
try {$*ARGFILES} andthen say $_ # $_ is a defined ArgFiles instance
orelse say $!; # $! is an error encountered inside the `try`
Perhaps this works as a black-and-white, either-it-all-works-or-none-of-it-does solution. (Though I'm not convinced it's even that.)
What the doc has to say about $*ARGFILES
The doc for $*ARGFILES says it is an instance of IO::ArgFiles, which is doc'd as a class which "exists for backwards compatibility reasons and provides no methods".
And all the functionality is inherited from IO::CatHandle, which is subtitled "Use multiple IO handles as if they were one" and doc'd as a class that is an IO::Handle, which is subtitled "Opened file or stream" and doc'd as a class that doesn't inherit from any other class (so defaults to inheriting from Any) or do any role.
So, $*ARGFILES is (exactly functionally the same as) an IO::CatHandle object, which is (a superset of the functionality of) an IO::Handle object; specifically:
The IO::CatHandle class provides a means to create an IO::Handle that seamlessly gathers input from multiple IO::Handle and IO::Pipe sources. All of IO::Handle's methods are implemented, and while attempts to use write methods will (currently) throw an exception, an IO::CatHandle is usable anywhere a read-only IO::Handle can be used.
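For example, a minimal illustration of that seamless gathering (assuming foo.txt and bar.txt exist):
my $cat = IO::CatHandle.new('foo.txt'.IO, 'bar.txt'.IO);
.say for $cat.lines;   # lines of foo.txt, then lines of bar.txt, as one stream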
Exploring the code for IO::CatHandle
(To be filled in later?)
I wrote bindings to an API and put everything into an R package, including tests, vignettes, etc., but the API keeps changing constantly. This brings up some issues:
updating my package is error-prone: I may miss a new function or forget to mark an old one as deprecated
submitting the package to CRAN is not a good idea, since it changes frequently and packages are reviewed by hand
I have a hard time keeping this software up to date, since the API changes irregularly, so I may miss changes
I came up with the idea of generating the bindings automatically. The API itself provides everything required for that via its online JSON documentation, which constantly reflects the current definition of the API.
Writing some code which converts the JSON docs to R functions is not the problem, but if I do so, I still need to update the package on CRAN. The best solution would be to create a package that (on load) looks up the API definition and creates the required functions. Ideally these functions should be unit tested.
I am thankful for any hint on that.
Best
Edit: The API is the firebrowse API with an example of what the input would be.
This is really challenging, and thus there's no obvious way to do it. The whole idea behind WSDL was to be able to do this easily using a standardized XML description. That was never really implemented in R, and it never really took off more broadly (because of the emergence of RESTful services and JSON).
You can definitely generate functions dynamically by creating so-called "function factories" (Hadley discussed these a bit here). In short, you write a function that takes JSON as input and returns a function that does whatever is described in the JSON. (Creating such a factory that dynamically does this whenever the package is loaded seems risky, but I suppose it's possible. I'd probably just keep the factory to myself and use it to create and update the package.)
I'm not going to attempt to deal with your API specifically, but to see how this would work:
# create factory with arguments to control returned function
factory <- function(action, endpoint, content = TRUE, parsed = FALSE) {
if (content) {
if (parsed) {
out <- function() httr::content(httr::VERB(action, endpoint))
} else {
out <- function() httr::content(httr::VERB(action, endpoint), "text")
}
} else {
out <- function() httr::VERB(action, endpoint)
}
return(out)
}
# use factory to create different functions
(a <- factory("GET", "http://example.com", content = TRUE, parsed = FALSE))
## function() httr::content(httr::VERB(action, endpoint), "text")
(b <- factory("GET", "http://example.com", content = TRUE, parsed = TRUE))
## function() httr::content(httr::VERB(action, endpoint))
(c <- factory("GET", "http://example.com", content = FALSE))
## function() httr::VERB(action, endpoint)
# evaluate each function
a() # returns a character string
b() # returns parsed HTML
c() # returns an httr response object
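And, to connect this to your on-load idea, a hedged sketch of driving the factory from a JSON description (the field names here are made up, not the actual firebrowse schema):
# hypothetical JSON spec -> named list of API functions
library(jsonlite)

make_client <- function(json_text) {
  spec <- jsonlite::fromJSON(json_text, simplifyDataFrame = FALSE)
  fns <- lapply(spec, function(ep) factory(ep$method, ep$url))
  names(fns) <- vapply(spec, function(ep) ep$name, character(1))
  fns
}

api <- make_client('[{"name": "ping", "method": "GET", "url": "http://example.com"}]')
api$ping()  # GET http://example.com, returning the response body as text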
The best solution would be, to create a package that (on load) looks up the API definition and creates the required functions. Ideally these functions should be unit tested.
This is a very well-known problem. Reacting to server changes without breaking the clients is a pain, not just in your situation but also for mobile applications (which need to be resubmitted every time the API changes).
While your approach may work (generating the client on the fly), the best result can be reached if the server collaborates in reaching it.
You have to decouple the client from the API implementation. How? By using REST (for real), thus introducing the concept of state and transitions.
This is not the right place to explain how that works, but a great introduction can be found in this presentation by Glenn Block; continue reading from there.
This won't solve your particular problem, but it is, in my opinion, the right way to approach the problem.
You may want to have a look at this video as well, starting at 15:24.