Chisel: randomly initializing register values when simulating with Verilator

I'm using Chisel and a BlackBox to run my Chisel logic against a Verilog register file.
The register file does not have a reset signal, so I expect the registers to be randomly initialized.
I passed --x-initial unique to Verilator.
Basically, this is how I launch the test:
private val backendName = "verilator"
"NOCDMA" should s" do blkwrite and blkread correctly (with $backendName)" in {
  Driver.execute(Array("--fint-write-vcd", "--backend-name", s"$backendName",
    "--more-vcs-flags", "--trace-depth 1 --x-initial unique"),
    () => new DMANetworkWithMem(memAddrWidth, memDataWidth)(nocDataWidth)(nNodesX, nNodesY)) {
    c => new DMANetworkRWTest(c)
  }
}
But the data I read from the register file is all zeros before I write anything to it.
The read data is correct after I write to it.
So, is there anything inside Chisel that I need to tune, or did I miss something in my setup?
Any suggestions?

I'm not certain, but I found the following Verilator issue describing a similar problem: https://github.com/verilator/verilator/issues/1399.
From skimming that issue, I think you also need to pass +verilator+seed+<value> and +verilator+rand+reset+<value> at runtime. I am not an expert in the iotesters, but I believe you can add these runtime values through the iotesters argument --more-vcs-c-flags.
Side note: I would also set --x-assign unique in Verilator if there are cases in the Verilog where the runtime would otherwise inject an X (e.g. an out-of-bounds index).
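Putting those suggestions together, the launch code from the question might become something like the sketch below; note that forwarding the runtime plusargs via --more-vcs-c-flags is an assumption based on the comment above, and the seed values are arbitrary:

private val backendName = "verilator"
"NOCDMA" should s" do blkwrite and blkread correctly (with $backendName)" in {
  Driver.execute(Array(
    "--fint-write-vcd", "--backend-name", s"$backendName",
    // compile-time Verilator flags
    "--more-vcs-flags", "--trace-depth 1 --x-initial unique --x-assign unique",
    // runtime plusargs; passing them here is an untested assumption (see above)
    "--more-vcs-c-flags", "+verilator+seed+12345 +verilator+rand+reset+2"),
    () => new DMANetworkWithMem(memAddrWidth, memDataWidth)(nocDataWidth)(nNodesX, nNodesY)) {
    c => new DMANetworkRWTest(c)
  }
}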
I hope this helps!

Related

How to add to / amend / consolidate JRuby Profiler data?

Say I have inside my JRuby program the following loop:
loop do
  x = foo()
  break if x
  bar()
end
and I want to collect profiling information just for the invocations of bar. How can I do this? This is what I have so far:
pd = []
loop do
  x = foo()
  break if x
  pd << JRuby::Profiler.profile { bar() }
end
This leaves me with an array pd of profile data objects, one for each invocation of bar. Is there a way to create a "summary" data object, by combining all the pd elements? Or even better, have a single object, where profile would just add to the existing profiling information?
I googled for documentation of the JRuby::Profiler API, but couldn't find anything except a few simple examples, none of which covers my case.
UPDATE: Here is another attempt I tried, which does not work either.
Since the profile method initially clears the profile data inside the Profiler, I tried to separate the profiling steps from the data initializing steps, like this:
JRuby::Profiler.clear
loop do
  x = foo()
  break if x
  JRuby::Profiler.send(:current_thread_context).start_profiling
  bar()
  JRuby::Profiler.send(:current_thread_context).stop_profiling
end
profile_data = JRuby::Profiler.send(:profile_data)
This seems to work at first, but on closer investigation I found that profile_data then contains the profiling information from only the last (most recent) execution of bar, not from all executions combined.
I figured out a solution, though I have the feeling that I'm using a ton of undocumented features to get it working. I must also add that I am using JRuby 1.7.27, so later JRuby versions might or might not need a different approach.
The problem with profiling is that start_profiling (corresponding to the Java method startProfiling in the class Java::OrgJrubyRuntime::ThreadContext) not only turns on the profiling flag, but also allocates a fresh ProfileData object. What we want is to reuse the old object. stop_profiling, on the other hand, only toggles the profiling switch and is harmless.
Unfortunately, ThreadContext does not provide a method to manipulate the isProfiling toggle, so as a first step, we have to add one:
class Java::OrgJrubyRuntime::ThreadContext
  field_writer :isProfiling
end
With this, we can set/reset the internal isProfiling switch. Now my loop becomes:
context = JRuby::Profiler.send(:current_thread_context)
JRuby::Profiler.clear
profile_data_is_allocated = nil
loop do
  x = foo()
  break if x
  # The first time, we allocate the profile data
  profile_data_is_allocated ||= context.start_profiling
  context.isProfiling = true
  bar()
  context.isProfiling = false
end
profile_data = JRuby::Profiler.send(:profile_data)
In this solution, I tried to stay as close as possible to the capabilities of the JRuby::Profiler class, but as you can see, the only public method still used is clear. Basically, I have reimplemented profiling in terms of the ThreadContext class; if someone comes up with a better way to solve it, I would highly appreciate it.
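For completeness, the accumulated profile_data can then be rendered with one of the profiler's printer classes; the snippet below assumes the GraphProfilePrinter class shown in the JRuby profiler examples:

# Print the combined profile collected in the loop above.
printer = JRuby::Profiler::GraphProfilePrinter.new(profile_data)
printer.printProfile(STDOUT)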

What mechanism shows the component ID in chisel3 elaboration errors?

Chisel throws an exception with an elaboration error message. The following is an example produced by my code.
chisel3.core.Binding$ExpectedHardwareException: data to be connected 'chisel3.core.Bool#81' must be hardware, not a bare Chisel type. Perhaps you forgot to wrap it in Wire(_) or IO(_)?
This exception message is interesting because the 81 after chisel3.core.Bool# looks like an ID, not a hash code.
Indeed, the Data type extends the HasId trait, which has an _id field, and
_id seems to hold a unique ID for each component.
I thought the Data type overrode toString to build a string of the form type#ID, but it does not. That is why $node in the code below should not be able to show the ID.
throw Binding.ExpectedHardwareException(s"$prefix'$node' must be hardware, " +
"not a bare Chisel type. Perhaps you forgot to wrap it in Wire(_) or IO(_)?")
Instead of toString, a toNamed method exists in Data. However, that method seems to be called to generate FIRRTL code, not to convert a component into a string.
Why can the Data type show its ID?
If it is not the ID but really the hash code, then this question comes from my misunderstanding.
I think you should take a look at Chisel PR #985. It changes the way Data's toString method is implemented. I'm not sure whether it answers your question directly, but it may make the meaning and location of the error clearer. If not, you should comment on it.
Scala classes come with a default toString method that is of the form className#hashCode.
As you noted, the chisel3.core.Bool#81 sure looks like it's using the _id rather than the hashCode. That's because in the most recently published version of Chisel (3.1.6), the hashcode was the id! You can see this if you inspect the source files at the tag for that version: https://github.com/freechipsproject/chisel3/blob/dc4200f8b622e637ec170dc0728c7887a7dbc566/chiselFrontend/src/main/scala/chisel3/internal/Builder.scala#L81
This is no longer the case on master, which is probably the source of any confusion! As Chick noted, we have just changed the .toString method to be more informative than the default; expect more informative representations in 3.2.0!
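For intuition only, here is a rough sketch, not the actual chisel3 source (IdGen, HasIdSketch and BoolSketch are made-up names), of why an id can surface through a default className#hashCode-style toString once hashCode is defined to be the id:

// Hypothetical, simplified stand-ins for Chisel's Builder/HasId machinery
object IdGen { private var i = 0L; def next(): Long = { i += 1; i } }

trait HasIdSketch {
  private val _id: Long = IdGen.next()
  // In Chisel 3.1.6 the hash code of a component was its _id, so the default
  // toString (which embeds the hash code) effectively printed the id.
  override def hashCode: Int = _id.toInt
}

class BoolSketch extends HasIdSketch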

How to define a function in eshell (the Erlang shell)?

Is there any way to define an Erlang function from within the Erlang shell instead of from an erl file (aka a module)?
Yes, but it is painful. Below is a "lambda function" declaration (a fun, in Erlang terms).
1> F = fun(X) -> X + 2 end.
#Fun<erl_eval.6.13229925>
Have a look at this post. You can even enter a module's worth of declarations if you ever need to. In other words, yes, you can declare functions.
One answer is that the shell only evaluates expressions, and function definitions are not expressions; they are forms. In an .erl file you define forms, not expressions.
All functions exist within modules, and apart from function definitions a module consists of attributes, the most important being the module's name and which functions it exports. Only exported functions can be called from other modules. This means that a module must be defined before you can define its functions.
Modules are the unit of compilation in Erlang. They are also the basic unit for code handling, i.e. it is whole modules which are loaded into, updated in, or deleted from the system. In this respect, defining functions separately one-by-one does not fit into the scheme of things.
Also, from a purely practical point of view, compiling a module is so fast that there is very little gain in being able to define functions in the shell.
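For comparison, the module route the answer describes is genuinely lightweight; a throwaway module and its use from the shell might look like this (mymod is just a hypothetical example):

%% mymod.erl
-module(mymod).
-export([add2/1]).

add2(X) -> X + 2.

%% In the shell:
%% 1> c(mymod).
%% {ok,mymod}
%% 2> mymod:add2(40).
%% 42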
This depends on what you actually need to do.
There are functions that one could consider 'throwaways', that is, functions that are defined once to perform a test, after which you move on. In such cases, the fun syntax is used. Although a little cumbersome, it can be used to express things quickly and effectively. For instance:
1> Sum = fun(X, Y) -> X + Y end.
#Fun<erl_eval.12.128620087>
2> Sum(1, 2).
3
defines an anonymous fun that is bound to the variable (or label) Sum. Meanwhile, the following defines a named fun, F, that is used to create a new process whose PID (<0.80.0>) is bound to Pid. Note that F is called in a tail-recursive fashion in the second clause of the receive, enabling the process to loop until the message stop is sent to it.
3> Pid = spawn(fun F() -> receive stop -> io:format("Stopped~n"); Msg -> io:format("Got: ~p~n", [Msg]), F() end end).
<0.80.0>
4> Pid ! hello.
hello
Got: hello
5> Pid ! stop.
Stopped
stop
6>
However, you might need to define certain utility functions that you intend to use over and over again in the Erlang shell. In this case, I would suggest using the user_default.erl file together with .erlang to automatically load these custom utility functions into the Erlang shell as soon as it is launched. For instance, you could write a function that compiles all the Erlang files living in the current directory.
I have written a small guide on how to do this on this GitHub link. You might find it helpful and instructive.
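For illustration, a minimal user_default.erl might look like the sketch below (c_all/0 is a hypothetical helper name). Once it is compiled and its .beam is on the code path (e.g. loaded via your .erlang file), c_all() can be called directly from the shell:

-module(user_default).
-export([c_all/0]).

%% Compile every .erl file living in the current directory.
c_all() ->
    [compile:file(F) || F <- filelib:wildcard("*.erl")].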
If you want to define a function in the shell to use as a kind of macro (because it encapsulates some functionality that you need frequently), have a look at
https://erldocs.com/current/stdlib/shell_default.html

How to resolve naming conflicts when instantiating a program multiple times in VxWorks

I need to run multiple instances of a C program in VxWorks (VxWorks has a global namespace). The problem is that the C program defines global variables (which are intended for use by a specific instance of that program) which conflict in the global namespace. I would like to make minimal changes to the program in order to make this work. All ideas welcomed!
By the way ... This isn't a good time to mention that global variables are not best practice!
The easiest thing to do would be to use task variables (see the taskVarLib documentation).
When using task variables, the variable is specific to the task now in context. On a context switch, the current variable is stored and the variable for the new task is loaded.
The caveat is that a task variable can only be a 32-bit number.
Each global variable must also be added independently (via its own call to taskVarAdd?) and it also adds time to the context switch.
Also, you would NOT be able to share the global variable with other tasks.
You can't use task variables with ISRs.
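For illustration, a minimal sketch of the task-variable approach, assuming the standard taskVarLib API (myGlobal and progEntry are placeholder names):

#include <vxWorks.h>
#include <taskLib.h>
#include <taskVarLib.h>

int myGlobal;  /* single symbol in the global namespace, per-task value */

void progEntry(void)
{
    /* Register myGlobal as a task variable for the calling task (tid 0).
       Each task that does this gets its own private copy, swapped in and
       out on every context switch. */
    if (taskVarAdd(0, &myGlobal) == ERROR)
        return;

    myGlobal = 42;  /* from here on, this value is private to this task */
    /* ... rest of the program ... */
}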
Another Possibility:
If you are using VxWorks 6.x, you can make a Real-Time Process (RTP) application.
This follows a process model (similar to Unix/Windows) where each instance of your program has its own global memory space, independent of any other instance.
I had to solve this when integrating two third-party libraries from the same vendor. Both libraries used some of the same symbol names, but they were not compatible with each other. Because these were coming from a vendor, we couldn't afford to search & replace. And task variables were not applicable either since (a) the two libs might be called from the same task and (b) some of the dupe symbols were functions.
Assume we have app1 and app2, linked, respectively, to lib1 and lib2. Both libs define the same symbols so must be hidden from each other.
Fortunately (if you're using GNU tools) objcopy allows you to change the binding of symbols (for example, making them local) after linking.
Here's a sketch of the solution, you'll have to modify it for your needs.
First, perform a partial link for app1 to bind it to lib1. Here, I'm assuming that you've already partially linked *.o in app1 into app1_tmp1.o.
$(LD_PARTIAL) $(LDFLAGS) -Wl,-i -o app1_tmp2.o app1_tmp1.o $(APP1_LIBS)
Then, hide all of the symbols from lib1 in the tmp2 object you just created to generate the "real" object for app1.
objcopymips `nmmips $(APP1_LIBS) | grep ' [DRT] ' | sed -e's/^[0-9A-Fa-f]* [DRT] /-L /'` app1_tmp2.o app1.o
Repeat this for app2. Now you have app1.o and app2.o ready to link into your final application without any conflicts.
The drawback of this solution is that you don't have access to any of these symbols from the host shell. To get around this, you can temporarily turn off the symbol hiding for one or the other of the libraries for debugging.
Another possible solution would be to put your application's global variables in a static structure. For example:
From:
int global1;
int global2;

int someApp()
{
    global2 = global1 + 3;
    ...
}
To:
static struct appGlobStruct {
    int global1;
    int global2;
} appGlob;

int someApp()
{
    appGlob.global2 = appGlob.global1 + 3;
}
This simply turns into a search & replace in your application code. No change to the structure of the code.

Accessing the Body of a Function with Lua

I'm going back to the basics here but in Lua, you can define a table like so:
myTable = {}
myTable[1] = 12
Printing the table reference itself brings back a pointer to it. To access its elements you need to specify an index (i.e. exactly like you would an array):
print(myTable)    -- prints pointer
print(myTable[1]) -- prints 12
Now functions are a different story. You can define and print a function like so:
myFunc = function() local x = 14 end --Defined function
print(myFunc) --Printed pointer to function
Is there a way to access the body of a defined function? I am trying to put together a small code visualizer and would like to 'seed' a given function with special functions/variables to allow the visualizer to 'hook' itself into the code. I would need to be able to redefine the function either from a variable or from a string.
There is no way to get access to the source code of the body of a given function in plain Lua. The source code is thrown away after compilation to bytecode.
Note, by the way, that a function may be defined at run time with a loadstring-like facility.
Partial solutions are possible, depending on what you actually want to achieve.
You may get the source code position from the debug library, provided the debug library is enabled and debug symbols are not stripped from the bytecode. After that you may load the actual source file and extract the code from there (see the sketch below).
You may decorate the functions you're interested in manually with the required metadata. Note that functions in Lua are valid table keys, so you may create a function-to-metadata table. You would want to make this table weak-keyed, so it does not prevent functions from being collected by the GC.
If you need a solution for analyzing Lua code, take a look at Metalua.
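A sketch of the first partial solution, recovering the text from the original source file via the debug library (this assumes the function was loaded from a file and that debug information is present):

-- Return the source text of a function, using debug.getinfo's "S" fields.
local function function_source(f)
  local info = debug.getinfo(f, "S")
  if not info or info.source:sub(1, 1) ~= "@" then
    return nil -- not defined in a file (e.g. loaded from a string)
  end
  local lines, n = {}, 0
  for line in io.lines(info.source:sub(2)) do
    n = n + 1
    if n >= info.linedefined and n <= info.lastlinedefined then
      lines[#lines + 1] = line
    end
  end
  return table.concat(lines, "\n")
end

print(function_source(function_source)) -- prints this function's own text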
Check out Lua Introspective Facilities in the debugging library.
The main introspective function in the debug library is the debug.getinfo function. Its first parameter may be a function or a stack level. When you call debug.getinfo(foo) for some function foo, you get a table with some data about that function. The table may have the following fields:
The field you would want is func, I think.
Using the debug library is your only bet. Using that, you can get either the string (if the function was defined in a chunk loaded with loadstring) or the name of the file in which the function was defined, together with the line numbers at which the function definition starts and ends. See the documentation.
Here at my current job we have patched Lua so that it even gives you the column numbers for the start and end of the function, so you can get the function source using that. The patch is not very difficult to reproduce, but I don't think I'll be allowed to post it here :-(
You could accomplish this by creating an environment for each function (see setfenv) and using global (versus local) variables. Variables created in the function would then appear in the environment table after the function is executed.
env = {}
myFunc = function() x = 14 end
setfenv(myFunc, env)
myFunc()
print(myFunc) -- prints pointer
print(env.x) -- prints 14
Alternatively, you could make use of the Debug Library:
> myFunc = function() local x = 14 ; debug.debug() end
> myFunc()
lua_debug> _, x = debug.getlocal(3, 1)
lua_debug> print(x) -- prints 14
It would probably be more useful to you to retrieve the local variables with a hook function instead of explicitly entering debug mode (i.e. adding the debug.debug() call).
There is also a Debug Interface in the Lua C API.