Here is some generated Verilog from the PassTrough module found in:
https://github.com/freechipsproject/chisel-bootcamp/blob/master/2.1_first_module.ipynb
module PassTrough( // @[:@3.2]
  input        clock, // @[:@4.4]
  input        reset, // @[:@5.4]
  input  [9:0] io_in, // @[:@6.4]
  output [9:0] io_out // @[:@6.4]
);
  assign io_out = io_in; // @[buffer.scala 10:10:@8.4]
endmodule
Are there any resources for understanding what is in these comments? I can see that they relate to the code location in the original Scala file, but I would like to know more details.
// @[buffer.scala 10:10:@8.4]
A more detailed explanation of this line would be useful.
These are source locators and will show up in generated FIRRTL or Verilog. These tell you what line in a source file (Chisel or FIRRTL) was used to generate a specific line in the downstream FIRRTL or Verilog.
The format is generally: @[<file> <line>:<column> ...]
More than one source locator may be present.
Example
Consider the following example pulled from the BoringUtilsSpec. The line numbers (which do not start at zero, as this was extracted from a larger file) are shown along with the column numbers. You can see how things line up between them. For example, the declaration of notA happens on line 27, column 20, and the assignment notA := ~a happens on line 30, column 10. You see 27:20 and 30:10 show up in the FIRRTL. In the Verilog, these get merged somewhat, and you wind up with source locators indicating both 27:20 and 30:10:
// -------------------------------------------+----+
// File: BoringUtilsSpec.scala                |    |
// -------------------------------------------+----+
// Column Number                              |    |
// -------------------------------------------+----+
//          1         2         3         4   |    |
// 0123456789012345678901234567890123456789   |    |
// -------------------------------------------+----+
class BoringInverter extends Module {      // | 24 | Line Number
  val io = IO(new Bundle{})                // | 25 |
  val a = Wire(UInt(1.W))                  // | 26 |
  val notA = Wire(UInt(1.W))               // | 27 |
  val b = Wire(UInt(1.W))                  // | 28 |
  a := 0.U                                 // | 29 |
  notA := ~a                               // | 30 |
  b := a                                   // | 31 |
  chisel3.assert(b === 1.U)                // | 32 |
  BoringUtils.addSource(notA, "x")         // | 33 |
  BoringUtils.addSink(b, "x")              // | 34 |
}                                          // | 35 |
// -------------------------------------------+----+
This produces the following FIRRTL:
module BoringUtilsSpecBoringInverter :
  input clock : Clock
  input reset : UInt<1>
  output io : {}

  wire a : UInt<1> @[BoringUtilsSpec.scala 26:17]
  wire notA : UInt<1> @[BoringUtilsSpec.scala 27:20]
  wire b : UInt<1> @[BoringUtilsSpec.scala 28:17]
  a <= UInt<1>("h00") @[BoringUtilsSpec.scala 29:7]
  node _T = not(a) @[BoringUtilsSpec.scala 30:13]
  notA <= _T @[BoringUtilsSpec.scala 30:10]
  b <= a @[BoringUtilsSpec.scala 31:7]
  node _T_1 = eq(b, UInt<1>("h01")) @[BoringUtilsSpec.scala 32:22]
  node _T_2 = bits(reset, 0, 0) @[BoringUtilsSpec.scala 32:19]
  node _T_3 = or(_T_1, _T_2) @[BoringUtilsSpec.scala 32:19]
  node _T_4 = eq(_T_3, UInt<1>("h00")) @[BoringUtilsSpec.scala 32:19]
  ; assert not shown
And the following Verilog:
module BoringUtilsSpecBoringInverter(
  input clock,
  input reset
);
  wire _T; // @[BoringUtilsSpec.scala 30:13]
  wire notA; // @[BoringUtilsSpec.scala 27:20 BoringUtilsSpec.scala 30:10]
  wire _T_3; // @[BoringUtilsSpec.scala 32:19]
  wire _T_4; // @[BoringUtilsSpec.scala 32:19]
  assign _T = 1'h1; // @[BoringUtilsSpec.scala 30:13]
  assign notA = 1'h1; // @[BoringUtilsSpec.scala 27:20 BoringUtilsSpec.scala 30:10]
  assign _T_3 = _T | reset; // @[BoringUtilsSpec.scala 32:19]
  assign _T_4 = _T_3 == 1'h0; // @[BoringUtilsSpec.scala 32:19]
  // assert not shown
endmodule
Caveats
Generator Bootcamp
If you are running this in the Chisel Bootcamp Jupyter Notebook or through an sbt console/REPL, the source locators may not make as much sense, as there isn't really a file with lines backing what you type there.
Difference from Annotations
These source locators are not Annotations, in case anyone has come across that name.
Annotations are metadata associated with circuit components. Source locators (which map to Info in the FIRRTL IR) are associated with specific statements in some source file. Under the hood, they are just strings that get generated and then copied around. There is no guarantee that source locators will be preserved: they may be changed or deleted arbitrarily. Conversely, Annotations are preserved and renamed across transformations and have strong guarantees on how they behave.
Consequently, do not rely on source locators for anything other than an aid if you need to debug the Chisel or FIRRTL compiler stages.
I've got the functions fi(ϕ) = γi + sin(2⋅sin ϕ) for i = 1, 2, where γ1 = 0.01 and γ2 = 0.02, with initial conditions
ϕ1(0) = 0.1 and ϕ2(0) = 0.2,
and the coupled system
dϕ1/dt = f1(ϕ) + d⋅sin(ϕ2 − ϕ1)
dϕ2/dt = f2(ϕ) + d⋅sin(ϕ1 − ϕ2)
where d = 0.1.
So there should be something like, for example, this table:
t      | ϕ1  | ϕ2
0.00   | 0.1 | 0.2
0.01   | ... | ...
0.02   | ... | ...
...    | ... | ...
100.00 | ... | ...
Using the computed values, a graph then needs to be plotted from these coordinates.
So the question is: how do I plot the function ϕ2(ϕ1) as such a graph using MATLAB?
So the story of the system might be that you start with two uncoupled and slightly different equations,
dϕ1/dt = f1(ϕ1)
dϕ2/dt = f2(ϕ2)
and connect them with a coupling or exchange term sin(ϕ2 − ϕ1):
dϕ1/dt = f1(ϕ1) + d⋅sin(ϕ2 − ϕ1)
dϕ2/dt = f2(ϕ2) + d⋅sin(ϕ1 − ϕ2)
In a MATLAB script you would implement this as
y0 = [ 0.1; 0.2 ];                   % initial values phi1(0), phi2(0)
[T,Y] = ode45(@eqn, [0, 100], y0);   % integrate over t = 0..100
plot(Y(:,1), Y(:,2));                % phase plot of phi2 over phi1

function dy_dt = eqn(t,y)
    d = 0.1;                         % coupling strength
    g = [ 0.01; 0.02 ];              % gamma_1, gamma_2
    f = g + sin(2*sin(y));           % uncoupled right-hand sides
    exch = d*sin(y(2)-y(1));         % exchange term
    dy_dt = f + [ exch; -exch ];
end % function
which gives almost a diagonal line ending at [pi; pi]. With a stronger coupling constant d this becomes slightly more interesting.
You can also pass the parameters as extra arguments to the solver (after an options object created with odeset), or use an anonymous function to bind the parameters in the solver call.
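For instance, a minimal sketch of the anonymous-function variant (this just restates the right-hand side of eqn inline):

d = 0.1; g = [ 0.01; 0.02 ];
% bind the parameters d and g into the right-hand side
rhs = @(t,y) g + sin(2*sin(y)) + d*[ sin(y(2)-y(1)); sin(y(1)-y(2)) ];
[T,Y] = ode45(rhs, [0, 100], [ 0.1; 0.2 ]);
plot(Y(:,1), Y(:,2));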
I have working Python code to analyze logs. Logs are at least 10 MB in size and can sometimes reach 250-300 MB depending on failures and retries.
I use a generator that yields the big file in chunks; the chunk size is configurable, and I normally yield 1 or 2 MB of log at a time. So I analyze logs in 1 MB chunks for the verification of some tests.
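(For reference, the chunking generator is shaped roughly like this; a sketch, since the actual read_in_chunks is not shown:)

def read_in_chunks(file_obj, chunksize=1024 * 1024):
    # yield the file in pieces of chunksize bytes; 1 MB by default
    while True:
        data = file_obj.read(chunksize)
        if not data:
            break
        yield data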
My problem is that using a generator can bring up some edge cases. In the log analysis I check for the subsequent appearance of some patterns, as follows: only if all 4 of the lists below are seen do I keep them for the next verification part of the code. These 4 patterns can be seen in the logs once or twice, not more.
listA
listB
listC
listD
If all of these occur subsequently, then I keep them all to evaluate in the next step; otherwise I ignore them.
However, there is now a small change: some patterns (lists, since I use the regex findall method to find the patterns) can fall into the next chunk, which is then needed to complete the check. So in the following I have 3 matching cases: chunks 3-4, 5-6, and 7-8 each create the condition to take into account.
---- chunk 1 -----
listA
listB
----- chunk 2 -----
nothing
----- chunk 3 -----
listA
listB
----- chunk 4 -----
listC
listD
----- chunk 5 -----
listA
----- chunk 6 -----
listB
listC
listD
---- chunk 7 ------
listA
listB
listC
----- chunk 8 ------
listD
---------------------
Usually it does not happen like this; some patterns (B, C, D) are mostly seen together in the logs, but listA can be seen 20, at most 30, rows earlier than the rest. Still, any scenario like the above can happen.
Please advise a good approach; I'm not sure what to use. I know the next() function can be used to check the next chunk.
In that case, should I use any([listA, listB, listC, listD]), and if any of the patterns occurs, check the remaining ones one by one in the next chunk, like the following? That is, check which of the patterns were not seen, keep them in a notSeen list, and then check them one by one in the next chunk:
if any([listA, listB, listC, listD]):
    # keep the patterns not yet seen in a notSeen list,
    # then look for them one by one in the next chunk
    next_chunk = next(gen_func(chunksize))
    isListA = re.findall(pattern, next_chunk)
Or maybe I am completely missing an easier approach for this little project; please let me know your thoughts, as you may have experienced such a situation before.
I have used next_chunk = next(gen_func(chunksize)) and added the necessary if statements underneath to check only the next log piece, because I arrange the log chunks suitably with the generator.
I am sharing only part of the code, as the rest is confidential:
import re, os

class LogVerifier:  # hypothetical name; the real enclosing class is confidential
    def __init__(self, logfile):
        self.read = self.ReadLog(logfile)
        self.search = self.SearchData(logfile)
        self.file = os.path.abspath(logfile)
        self.data = self.read.read_in_chunks
        r_search_combined, scan_result, r_complete, r_failed = [], [], [], []  # note: these locals are discarded

    def test123(self, r_reason: str, cyc: int, b_r):
        ''' Usage : verify_log.py
            --log ${LOGS_FOLDER}/log --reason r_low 1 <True | False>'''
        ret = False
        r_count = 2*int(cyc) if b_r.lower() == "true" else int(cyc)
        r_search_combined, scan_result, r_complete, r_failed = [], [], [], []
        result_pattern = self.search.r_scan_pattern()

        def check_patterns(chunk):
            # run all five regexes over one chunk and return the match lists
            search_cached = re.findall(self.search.r_search_cached, chunk)
            search_full = re.findall(self.search.r_search_full, chunk)
            scan_complete = re.findall(self.search.r_scan_complete, chunk)
            scan_result = re.findall(result_pattern, chunk)
            r_complete = re.findall(self.search.r_auth_complete, chunk)
            return search_cached, search_full, scan_complete, scan_result, r_complete

        with open(self.file) as rf:
            # create the generator once, so next() advances the same iterator
            # that the for loop consumes (calling self.data(rf) again would
            # create a second generator over the same file handle)
            chunks = self.data(rf)
            for idx, piece in enumerate(chunks, start=1):
                is_failed = re.findall(self.search.r_failure, piece)
                if is_failed:
                    print(f'general failure received : {is_failed}')
                    r_failed.extend(is_failed)
                is_r_search_cached, is_r_search_full, is_scan_complete, is_scan, is_r_complete = check_patterns(piece)
                if (is_r_search_cached or is_r_search_full) and all([is_scan_complete, is_scan, is_r_complete]):
                    # everything matched within this chunk
                    if is_r_search_cached:
                        r_search_combined.extend(is_r_search_cached)
                    if is_r_search_full:
                        r_search_combined.extend(is_r_search_full)
                    scan_result.extend(is_scan)
                    r_complete.extend(is_r_complete)
                elif (is_r_search_cached or is_r_search_full) and not any([is_scan, is_r_complete]):
                    # search patterns matched but the rest may be in the next
                    # chunk; the '' default avoids StopIteration at EOF
                    next_piece = next(chunks, '')
                    _, _, _, is_scan_next, is_r_complete_next = check_patterns(next_piece)
                    if all([is_scan_next, is_r_complete_next]):
                        r_search_combined.extend(is_r_search_cached)
                        r_search_combined.extend(is_r_search_full)
                        scan_result.extend(is_scan_next)
                        r_complete.extend(is_r_complete_next)
                elif (is_r_search_cached or is_r_search_full) and is_scan and not is_r_complete:
                    # only the completion pattern is missing; look one chunk ahead
                    next_piece = next(chunks, '')
                    _, _, _, _, is_r_complete_next = check_patterns(next_piece)
                    if is_r_complete_next:
                        r_search_combined.extend(is_r_search_cached)
                        r_search_combined.extend(is_r_search_full)
                        scan_result.extend(is_scan)
                        r_complete.extend(is_r_complete_next)
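As an alternative to peeking with next(), you could make boundary splits impossible to miss by searching over a window that carries the tail of the previous chunk. A minimal sketch, assuming chunks are yielded as strings and that the overlap is large enough to cover how far listA can run ahead (windowed_chunks and find_sequence are made-up names):

import re

def windowed_chunks(chunks, overlap=65536):
    """Yield each chunk prefixed with the tail of the previous one, so a
    pattern sequence split by a chunk boundary still appears whole in
    one window."""
    tail = ''
    for chunk in chunks:
        yield tail + chunk
        tail = chunk[-overlap:]

def find_sequence(window, patterns):
    """Return the matches if all patterns occur in order, else None."""
    pos, hits = 0, []
    for pat in patterns:
        m = re.search(pat, window[pos:])
        if m is None:
            return None
        hits.append(m.group(0))
        pos += m.end()
    return hits

The trade-off is that a match falling entirely inside the overlap region is seen twice, so you would need to deduplicate, for example by tracking the absolute offset of each match.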
I want to make a to_string() fn in Rust that takes &self as a parameter and uses references to the elements of &self inside the function:
//! # Messages
//!
//! Module that builds and returns messages with user and time stamps.

use time::Tm;

/// Represents a simple text message.
pub struct SimpleMessage<'a, 'b> {
    pub moment: Tm,
    pub content: &'b str,
}

impl<'a, 'b> SimpleMessage<'a, 'b> {
    /// Gets the elements of a Message and transforms them into a String.
    pub fn to_str(&self) -> String {
        let mut message_string =
            String::from("{}/{}/{}-{}:{} => {}",
                         &self.moment.tm_mday,
                         &self.moment.tm_mon,
                         &self.moment.tm_year,
                         &self.moment.tm_min,
                         &self.moment.tm_hour,
                         &self.content);
        return message_string;
    }
}
But $ cargo run returns:
error[E0061]: this function takes 1 parameter but 8 parameters were supplied
  --> src/messages.rs:70:13
   |
70 | /             String::from("{}/{}/{}-{}:{}, {}: {}",
71 | |                          s.moment.tm_mday,
72 | |                          s.moment.tm_mon,
73 | |                          s.moment.tm_year,
...  |
76 | |                          s.user.get_nick(),
77 | |                          s.content);
   | |___________________________________^ expected 1 parameter
I really don't understand the problem with this syntax; what am I missing?
You probably meant to use the format! macro (note that the unused 'a lifetime parameter is also dropped here):
impl<'b> SimpleMessage<'b> {
    /// Gets the elements of a Message and transforms them into a String.
    pub fn to_str(&self) -> String {
        let message_string =
            format!("{}/{}/{}-{}:{} => {}",
                    &self.moment.tm_mday,
                    &self.moment.tm_mon,
                    &self.moment.tm_year,
                    &self.moment.tm_min,
                    &self.moment.tm_hour,
                    &self.content);
        return message_string;
    }
}
String::from comes from the From trait, which defines a from method that takes a single parameter (hence "this function takes 1 parameter" in the error message).
format! already produces a String, so no conversion is necessary.
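As a side note, a common alternative is to implement std::fmt::Display instead; the standard library's blanket ToString impl then gives you to_string() for free. A sketch, using the same single-lifetime struct:

use std::fmt;

impl<'b> fmt::Display for SimpleMessage<'b> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        // same formatting as to_str(), but written into the formatter
        write!(f, "{}/{}/{}-{}:{} => {}",
               self.moment.tm_mday,
               self.moment.tm_mon,
               self.moment.tm_year,
               self.moment.tm_min,
               self.moment.tm_hour,
               self.content)
    }
}

With that in place, message.to_string() works without a hand-written to_str().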
When I try to run predict() on the dataset, it keeps giving me an error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Here is part of the dataset:
       LoanRange    Loan.Type    N   WAFICO    WALTV WAOrigRev WAPTValue
1        0-99999 Conventional  109 722.5216 63.55385  6068.239 0.6031879
2        0-99999          FHA   30 696.6348 80.00100  7129.650 0.5623650
3        0-99999           VA   13 698.6986 74.40525  7838.894 0.4892977
4  100000-149999 Conventional  860 731.2333 68.25817  6438.330 0.5962638
5  100000-149999          FHA  285 673.2256 82.42225  8145.068 0.5211495
6  100000-149999           VA  125 704.1686 87.71306  8911.461 0.5020074
7  150000-199999 Conventional 1291 738.7164 70.08944  8125.979 0.6045117
8  150000-199999          FHA  403 672.0891 84.65318 10112.192 0.5199632
9  150000-199999           VA  195 694.1885 90.77495 10909.393 0.5250807
10 200000-249999 Conventional 1162 740.8614 70.65027  8832.563 0.6111419
11 200000-249999          FHA  348 667.6291 85.13457 11013.856 0.5374226
12 200000-249999           VA  221 702.9796 91.76759 11753.642 0.5078298
13 250000-299999 Conventional  948 742.0405 72.22742  9903.160 0.6106858
The following is the code used for predicting the count data N, after determining overdispersion:
model2=glm(N~Loan.Type+WAFICO+WALTV+WAOrigRev+WAPTValue, family=quasipoisson(link = "log"), data = DF)
summary(model2)
This is what I have done to create a sequence of counts and use the predict function:
countaxis <- seq(0, 1500, 150)
Y <- predict(model2, list(N = countaxis, type = "response"))
At this step, I get the error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Can someone please point out where the problem is?
Think about what exactly you are trying to predict. You are providing the predict function values of N (via countaxis), but in fact the way you set up your model, N is your response variable and the remaining variables are the predictors. That's why R is asking for LoanRange. It actually needs values for LoanRange, Loan.Type, ..., WAPTValue in order to predict N. So you need to feed predict inputs that let the model try to predict N.
For example, you could do something like this:
# create some fake data to predict N
newdata1 = data.frame(rbind(c("0-99999", "Conventional", 722.5216, 63.55385, 6068.239, 0.6031879),
                            c("150000-199999", "VA", 12.5216, 3.55385, 60.239, 0.0031879)))
colnames(newdata1) = c("LoanRange", "Loan.Type", "WAFICO", "WALTV", "WAOrigRev", "WAPTValue")
# ensure that numeric variables are indeed numeric and not factors
newdata1$WAFICO = as.numeric(as.character(newdata1$WAFICO))
newdata1$WALTV = as.numeric(as.character(newdata1$WALTV))
newdata1$WAPTValue = as.numeric(as.character(newdata1$WAPTValue))
newdata1$WAOrigRev = as.numeric(as.character(newdata1$WAOrigRev))
# make predictions - this will output values of N
predict(model2, newdata = newdata1, type = "response")
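If you only want fitted counts for rows you already have, the same call also accepts the original data frame (DF, as used in the model fit above):

# fitted values of N for the first three rows of the original data
predict(model2, newdata = DF[1:3, ], type = "response")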
Is there a way to get the full path of the currently executing Tcl script?
In PHP it would be: __FILE__
Depending on what you mean by "currently executing TCL script", you might actually seek info script, or possibly even info nameofexecutable or something more esoteric.
The correct way to retrieve the name of the file that the current statement resides in, is this (a true equivalent to PHP/C++'s __FILE__):
set thisFile [ dict get [ info frame 0 ] file ]
Pseudocode (how it works):
set thisFile <value> : sets the variable thisFile to the value
dict get <dict> file : returns the file value from a dict
info frame <#> : returns a dict with information about the frame at the specified stack level (#); 0 returns the most recent stack frame
NOTICE: See end of post for more information on info frame.
In this case, the file value returned from info frame is already normalized, so file normalize <path> is not needed.
The difference between info script and info frame mainly matters with Tcl packages. If info script is used in a Tcl file that was loaded via a package require (package require <name>), then info script returns the path to the currently executing Tcl script, not the name of the Tcl file that actually contains the info script command. The info frame example given here, however, correctly returns the file name of the file that contains the command.
If you want the name of the script currently being evaluated, then:
set sourcedScript [ info script ]
If you want the name of the script (or interpreter) that was initially invoked, then:
set scriptAtInvocation $::argv0
If you want the name of the executable that was initially invoked, then:
set exeAtInvocation [ info nameofexecutable ]
UPDATE - Details about: info frame
Here is what a stack trace looks like within Tcl. The frame_index column shows what info frame $frame_index returns for values from 0 through [ info frame ].
Calling info frame [ info frame ] is functionally equivalent to info frame 0, but using 0 is of course faster.
There are actually only [ info frame ] stack frames, numbered starting at 1, and 0 behaves like [ info frame ]. In this example you can see that 0 and 5 (which is [ info frame ]) are the same:
frame_index: 0 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
frame_index: 1 | type = source | line = 6 | level = 4 | file = /tcltest/main.tcl | cmd = a
frame_index: 2 | type = source | proc = ::a | line = 2 | level = 3 | file = /tcltest/a.tcl | cmd = b
frame_index: 3 | type = source | proc = ::b | line = 2 | level = 2 | file = /tcltest/b.tcl | cmd = c
frame_index: 4 | type = source | proc = ::c | line = 5 | level = 1 | file = /tcltest/c.tcl | cmd = stacktrace
frame_index: 5 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
See:
https://github.com/Xilinx/XilinxTclStore/blob/master/tclapp/xilinx/profiler/app.tcl#L273
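The linked app.tcl contains a full profiler; a minimal sketch of a stacktrace proc that prints rows like the ones above (output formatting simplified) could be:

proc stacktrace {} {
    # [info frame] evaluated here is the index of the current frame
    set max [info frame]
    for {set i 0} {$i <= $max} {incr i} {
        puts "frame_index: $i | [info frame $i]"
    }
}

Each info frame $i returns the dict whose file entry the __FILE__ recipe above extracts.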
You want $argv0
You can use [file normalize] to get the fully normalized name, too.
file normalize $argv0
file normalize [info nameofexecutable]
Seconds after I posted my question ... lindex $argv 0 is a good starting point ;-)