ScriptingContainer, Ruby Runtime and Variable Map - jruby

(Crossposting note: this question has also been posted to the JRuby mailing list (jruby#ruby-lang.org) on December 20 and to the JRuby Forum on January 2nd, but hasn't received any response yet.)
This question is about understanding the impact of the LocalContextScope
parameter in the presence of multi-threading.
The JRuby Wiki provides a recipe that helps with choosing the best value for the
LocalContextScope parameter. That page explains that this parameter
controls whether the ScriptingContainer, the Ruby runtime, and/or the variable
map are shared among threads. However, I would like to get a somewhat
deeper understanding of this issue, in particular which part of the
"system" is implemented in which of those three components.
As a concrete example: when I create global variables in Ruby, or new classes, or
functions and variables in the top-level context, do they belong to the
ScriptingContainer, to the runtime, or to the variable map? Without knowing this, I don't know which LocalContextScope I have to use.
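For reference, here is a minimal sketch (in Java) of the embedding setup the question is about, assuming the standard org.jruby.embed API; the class name and variables are made up, and THREADSAFE is only one of the possible scope values:

import org.jruby.embed.LocalContextScope;
import org.jruby.embed.ScriptingContainer;

public class EmbedSketch {
    public static void main(String[] args) {
        // The constructor argument is the setting the question is about;
        // the possible values are SINGLETON, SINGLETHREAD, THREADSAFE and CONCURRENT.
        ScriptingContainer container =
                new ScriptingContainer(LocalContextScope.THREADSAFE);

        container.put("$greeting", "hello");                // stored via the variable map
        container.runScriptlet("def hi; $greeting; end");   // defines a top-level method in the runtime
        Object result = container.runScriptlet("hi");       // evaluated in the container's runtime
        System.out.println(result);                         // prints "hello"
    }
}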

Related

How to represent standalone functions calling other standalone functions

I'm documenting the current state of a JavaScript package that consists of several modules made up predominantly of standalone functions. As a result of using callbacks extensively, the package includes nested calls between standalone functions from multiple modules.
With this in mind, does anyone know the best way to represent calls between standalone functions in a sequence diagram?
Are the details of the standalone functions worth it?
Common wisdom recommends avoiding the trap of using UML as a graphical programming language. Things that are more easily expressed in code, and easy for readers to understand there, better stay as code. Prefer to use UML to give the big picture and to explain complex relationships that are less obvious to spot in the code.
Automate obvious documentation?
Manually modelling a very precise sequence diagram is time consuming. Moreover, such a diagram quickly becomes outdated with the next version of the code.
Therefore, if your interest is to give an overview of how the functions relate to each other, you may prefer to provide a visual overview using a simpler call graph instead. The reader can grasp the overall structure easily and look for more details in the code.
The advantage is that this task can be automated, using one of the many call graph generators available on the market (just google for "javascript call graph generator" to find some). There is, by the way, an excellent book on further automating documentation that I can only recommend with enthusiasm: "Living Documentation: Continuous Knowledge Sharing by Design, First Edition".
If you have to focus on the detailed chronological sequencing of the calls, however, a call graph would not be sufficient. In that case, sequence diagrams may indeed be more relevant.
UML sequence diagrams with standalone functions?
A sequence diagram shows interactions between lifelines within an enclosing classifier. Usually a lifeline is used to represent an object, i.e. an instance of a class, but its definition is flexible enough to accommodate any participant in an interaction.
Individual standalone functions can moreover be considered as individual objects that instantiate a more general class of functions (that's the concept behind a functor, like C++'s std::function). This is particularly relevant in JavaScript, where functions can be assigned to variables or used as parameters. So you may just use a lifeline that clarifies this. It is up to you to decide how you name the call message (e.g. operator()(a,b,c), or its real name for readability).
You can also group a bunch of related standalone functions into a pseudo-class that would represent, in your model, a module, compilation unit or namespace. Although a module is not stricto sensu an object, you may in your modelling treat it as if it were a class with only one (anonymous) instance: its state would be the global variables defined in the module scope, and the related standalone functions could be seen as operations of this imaginary class. The lifeline would then correspond to a module, and function calls would be represented as synchronous messages, either to another module or to the module itself with nested activation to visualise the "nested" calls.

Are there any configuration libraries that provide a native implementation for merging modifications found in split files (as seen in .d folders)?

I've been having trouble sorting through the noise while trying to find an answer to this. There are certainly a lot of libraries available for handling configuration files, but what I'm looking for is whether a solution is available for this specific kind of split configuration.
On Linux systems I've found that it's not uncommon to find a program which has split its default configuration away from user modifications by instructing the user to place a subset of the default configuration into a separate folder (commonly found with a .d suffix). These changes override what is found in the default configuration and provide a very easy way to track at a later date what has been modified.
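To make the mechanism concrete, here is a rough sketch of that merge behaviour in Java, using java.util.Properties and made-up file names purely for illustration; the real configuration format may of course be something else entirely:

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SplitConfig {
    // Loads the defaults first, then every file in the ".d" directory in
    // sorted order; keys loaded later silently override keys loaded earlier.
    public static Properties load(Path defaults, Path dropInDir) throws IOException {
        Properties merged = new Properties();
        try (FileInputStream in = new FileInputStream(defaults.toFile())) {
            merged.load(in);
        }
        if (Files.isDirectory(dropInDir)) {
            try (Stream<Path> entries = Files.list(dropInDir)) {
                for (Path p : entries.sorted().collect(Collectors.toList())) {
                    try (FileInputStream in = new FileInputStream(p.toFile())) {
                        merged.load(in);   // later files win
                    }
                }
            }
        }
        return merged;
    }
}

Called with, say, load(Path.of("app.properties"), Path.of("app.properties.d")), the drop-in files override both the defaults and any earlier drop-ins.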
There is a wide variety of incompatible syntaxes used across different configuration files, and I am not aware of a library that can parse them all. But if you are willing to restrict yourself to just one configuration syntax, then two answers to your question come to mind.
The first is to use a scripting language as your configuration file syntax. Let's assume an application reads both default.cfg and override.cfg (in that order). If default.cfg contains name="john", then override.cfg might contain name="mary", which will have the effect of overriding the value of name. This works because the interpreter provides a common place to store the global variables assigned within the script files. The following pseudocode shows how your application could interact with a scripting-language interpreter:
interp = new Interpreter();
interp.executeFile("default.cfg");
interp.executeFile("override.cfg");
name = interp.getValueOfVariable("name");
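As one concrete (hypothetical) rendering of that pseudocode in Java, you could use the javax.script API, assuming the configuration files are written in a language for which a script engine is available (JavaScript here; the file names are the ones from the example above):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import java.io.FileReader;

public class ScriptConfigExample {
    public static void main(String[] args) throws Exception {
        // Engine availability depends on the JDK and classpath;
        // "javascript" is only an example.
        ScriptEngine interp = new ScriptEngineManager().getEngineByName("javascript");
        interp.eval(new FileReader("default.cfg"));   // defines name="john"
        interp.eval(new FileReader("override.cfg"));  // redefines name="mary"
        Object name = interp.get("name");             // the last assignment wins
        System.out.println(name);                     // prints "mary"
    }
}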
The second is to use Config4*, which is a configuration-file parser I wrote; it is available in C++ and Java. I recommend you read Chapter 2 (Overview of Config4* Syntax) in the Config4* Getting Started Guide. The "adaptive configuration" abilities of Config4* seem to fit the question I think you are asking. If, after reading Chapter 2, you agree, then read Chapter 3, which provides an overview of the C++ and Java APIs. That will probably be enough to get you started.

What do the (site, here, up) arguments mean when creating rocket-chip configurations?

When creating a new "Config" we define a function that takes three "View"s (site, here, up) as arguments. What is the meaning of these three Views?
As a purely historical reference, take a look at the Chisel2 Advanced Parameterization Manual (with the huge caveat not to take it too literally, as it's old). However, I believe that the motivation and discussion of site, here, and up still hold in sections 2.6, 2.7, 2.8, and 3.6.
Roughly, site, here, and up help with handling and resolving dependencies on other parameters.
site allows you to disambiguate different parameters that may have the same name, e.g. Width, based on a defined location. here allows parameters to query other parameters defined in the same group. up allows you to access a parent configuration's parameter object, the intended purpose being to let you copy it while modifying some of its parameters.
In
class Blah extends Config((site, here, up) => { .. })
(site, here, up) is the parameter tuple of the partial function passed to Config, which allows partial function application. This allows partial configuration of the Rocket core and setting of default parameters while preserving elasticity and type correctness.
You may check its implementation here.

tcl - when to use package+namespace vs interp?

I'm just starting with Tcl and trying to get my head around how best to define and integrate modules. There seems to be much effort put into the package+namespace concept, but from what I can tell interp is more powerful and leaner for every conceivable scenario, in particular when it comes to hiding and renaming procedures, but also because it avoids creep in the global namespace. The only reason to use package+namespace seems to be because "once upon a time Sun said so".
When should I ever use package+namespace instead of interp?
Namespaces and packages work together. Interpreters are something else.
A namespace is a small-scale naming context in Tcl. It can contain commands, variables and other namespaces. You can refer to entities in a namespace via either local names (foo) or qualified names (bar::foo); if a qualified name starts with ::, it is relative to the (interpreter-)global namespace and can be used to refer to the command or variable from anywhere in the interpreter. (FWIW, the TclOO object system builds extensively on top of namespaces; there is one namespace per object.)
A package is a high-level concept for a bunch of code supplied by some sort of library. Packages have abstract names (the name does not have to correspond to how the library's implementation is stored on disk) and a distinct version; you can ask for a particular version if necessary, though most of the time you don't bother. Packages can be implemented by multiple mechanisms, but they almost all come down to sourcing some number of Tcl scripts and loading some number of DLLs. Almost all packages declare commands, and they are conventionally encouraged to put those commands in a namespace with the same general name as the package. However, quite a few older packages do not do this for various reasons, mostly to do with compatibility with existing code.
An interpreter is a security context in Tcl. By default, Tcl creates one interpreter (plus another if it sets up the console window in wish). Named entities in one interpreter are completely distinct from named entities in another interpreter with a few key exceptions:
Channels have common names across all interpreters. This means that an interpreter can talk about channels owned by another interpreter, but merely being able to mention its name doesn't give permission to access the channel. (The stdin, stdout and stderr channels are shared by default.)
The interp alias command can be used to make alias commands, which are such that invoking a command (the alias) in one interpreter can cause a command (the implementation) in another interpreter to be called, with all arguments safely passed over. This allows one interpreter to expose whatever special calls it wants another interpreter to access without losing control, but it is up to the implementation of those commands to act safely on those arguments.
A safe interpreter is one with the unsafe commands of Tcl hidden by default. (That's things like open, socket, source, load, cd, etc.) The parent interpreter that created the safe child interpreter can use the alias mechanism to add in exactly the functionality desired; it's very much analogous to an OS system call, except that you can easily make your own application-specific ones.
Tcl's threading package is designed to create one interpreter per thread (and the aliasing mechanism does not work across threads). That means that there's very little in the way of shared resources by default, and inter-thread communication is done via queued message passing.
In general, packages are required at most once per interpreter and are how you are recommended to access most third-party functionality. Namespaces are fairly lightweight and are used for all sorts of things, and interpreters are considered to be expensive; lots of quite thoroughly production-grade Tcl scripts only ever work with a single interpreter. (Threads are even more expensive than interpreters; it's good practice to match the number of threads you create to the hardware load that you wish to impose, probably through the use of suitable thread pools.)
The purpose of a module is to provide modular code, i.e. code that can easily be used by applications beyond the module writer's knowledge and control, and that encapsulates their own internals.
Package/namespace-based and interpreter-based modules are probably equally good at encapsulation, but it's not as easy to make interpreter-based modules that play well with arbitrary applications (though it is of course possible).
My own opinion is that interpreters are application level (I mostly use them for user input and for controlled evaluation), not module level. Both namespaces and packages have their warts, but in most cases they do what is expected of them with a minimum of fuss.
My recommendation is that if you are writing modules for your own benefit and interpreters serve you well, by all means use them. If you write modules that other people are to use, possibly including yourself in 18 months, you should stick with namespaces and packages.

Besides Logo and Emacs Lisp, what are other purely dynamically scoped languages?

What are some examples of a dynamically scoped language? And what are the reasons for choosing that design? Is it because it is easy to implement?
Mathematica is another language that is dynamically scoped, via the Block construct. This is actually quite useful when working with formulas. It allows you to write things like
In[1]:= expr = a*t^2 + b*t+ c;
In[2]:= Block[{a = 1, b = -1, c = 2}, Table[expr, {t, 5}]]
Out[2]= {2, 4, 8, 14, 22}
which wouldn't work at all if variables like a and t were scoped lexically. It works particularly nicely with Mathematica's rule-rewriting system, which will, among other things, leave variables unevaluated (as symbolic expressions) if it doesn't have an existing definition for them.
Mathematica can fake lexical scoping with the Module construct, but what this really does is rewrite the expression in terms of a new, allegedly unique symbol (you can cause clashes if you predict what the next unique symbol will be, which is easy in most cases). This means
Module[{x = 4},
Table[x * t, {t, 5}]]
will be turned into something like this:
Block[{x$134 = 4},
Table[x$134 * t, {t, 5}]]
Emacs Lisp, in one of its libraries, has a construct (really a Lisp macro) called lexical-let that pulls exactly the same trick to fake lexical scoping.
There are performance advantages to real lexical scoping when you're compiling your language, which you don't get with the fake lexicals of ELisp or Mathematica, since with dynamic variables you need some mapping between the variable and its current value, which means doing lookups (through a hash table or property list or something) and additional layers of indirection.
EDIT: If you have only lexical variables, you can fake dynamic scoping by storing the original value of a global, lexical variable on entering the scope and guaranteeing that the old value is restored upon exiting the scope. In order to ensure that, you'll need something like Lisp's UNWIND-PROTECT or a finally block. I've seen this done using C++ destructors as well, mostly as an exercise.
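As a small illustration of that save-and-restore idiom (a sketch in Java, with made-up names), using a static field to play the role of the global variable and finally to guarantee restoration:

public class DynamicScopeDemo {
    // A "global" (static) variable playing the role of a dynamic variable.
    static String output = "console";

    static void withOutput(String temporaryValue, Runnable body) {
        String saved = output;        // remember the outer binding
        output = temporaryValue;      // establish the "dynamic" binding
        try {
            body.run();               // downstream code sees the new value
        } finally {
            output = saved;           // always restored, even on exceptions
        }
    }

    public static void main(String[] args) {
        withOutput("logfile", () -> System.out.println(output)); // prints "logfile"
        System.out.println(output);                               // back to "console"
    }
}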
Dynamically scoped languages are much easier to implement. To access variables that are not in the current activation record / stack frame, one just follows the control links. Static/lexical access links are then not needed, making stack frames smaller.
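To illustrate that lookup rule (a toy sketch in Java, not how any particular interpreter is implemented): each call pushes a frame of local bindings, and a variable reference simply walks back through the callers' frames until it finds one.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

class DynamicEnv {
    // The deque models the call stack; each activation record carries its bindings.
    private final Deque<Map<String, Object>> frames = new ArrayDeque<>();

    void pushFrame()                        { frames.push(new HashMap<>()); }
    void popFrame()                         { frames.pop(); }
    void define(String name, Object value)  { frames.peek().put(name, value); }

    // Dynamic-scope lookup: walk from the current frame back through the callers.
    Object lookup(String name) {
        for (Map<String, Object> frame : frames) {
            if (frame.containsKey(name)) return frame.get(name);
        }
        throw new RuntimeException("unbound variable: " + name);
    }
}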
Dynamic variables can be "unpredictable" at runtime, because one needs to know the order of the actual stack frames to know which binding will be used. This information is not available by just looking at the static structure of the code. One could quite easily get caught out if the actual call graph of the program is not easy to predict at implementation time. That's why most languages today have static scoping (most exception systems, however, are dynamic, as this is the most practical).
However, in some cases dynamically scoped variables are very useful. For example, when redirecting output, you can use a dynamic variable to set the standard output for local code and for all code called from there on.
(let ((*standard-output* *some-other-stream*))
(stuff))
In this Common Lisp example (from Seibel), standard output is bound to another stream for the duration of the let form (inside its enclosing parens). When execution leaves the let, it goes back to whatever it was beforehand. See Peter Seibel's free and excellent book, Practical Common Lisp (http://gigamonkeys.com/book/variables.html), for a good discussion. In Seibel's own words:
Dynamic bindings make global variables much more manageable, but it's important to notice they still allow action at a distance. Binding a global variable has two at a distance effects--it can change the behavior of downstream code, and it also opens the possibility that downstream code will assign a new value to a binding established higher up on the stack. You should use dynamic variables only when you need to take advantage of one or both of these characteristics.
Well, there are a bunch of websites that discuss the pros and cons, so I'm not going there.
One interesting language that has some features that faintly resemble dynamic scope is XSLT; although XSLT's templates and variables and the like are lexically scoped, XSLT is of course all about XML, and the current position in the XML tree is "dynamically scoped" in the sense that the context node is global, and thus XPath expressions are evaluated not according to XSLT's lexical scope but according to its dynamic evaluation context.
Dynamic scope is/was easier to implement with interpreters. Most early Lisp interpreters used dynamic scope. After several years lexical scope was found to have advantages, but it was at first mostly implemented in Lisp compilers. Several implementations appeared that used dynamic scope in interpreted code and lexical scope in compiled code. Some provided a special language construct to provide closures. Lisp dialects like Scheme and Common Lisp then required that there be no difference between interpreted and compiled code, and thus interpreter-based implementations had to implement lexical scope, too.
Early Smalltalk implementations implemented dynamic scope. All kinds of Lisp dialect implementations implemented dynamic scope (Interlisp, UCI Lisp, Lisp Machine Lisp, MacLisp, ...).
Almost all new Lisp dialects from the last 20 years use lexical scope by default or even exclusively. Several publications have described in detail how to implement Lisp with lexical scope - so there is no excuse not to use lexical scope.
All the shell languages (bash, ksh, etc) use dynamic scoping.