I have been looking at the source code of the IronPython project and the Orchard CMS project. IronPython uses a namespace called Microsoft.Scripting.Hosting.Shell (part of the DLR). The Orchard project also works with the concept of a 'shell' indirectly, via various interfaces (IShellContainerFactory, IShellSettings).
None of the projects mentioned above have elaborate documentation, so picking up the meaning of a type (class etc.) from its name is pretty valuable if you are trying to figure out the overall application structure/architecture by reading the source code.
Now I am wondering: what do the authors of this source code have in mind when they refer to a 'shell'? When I hear the word 'shell', I think of something like a command line interpreter. This makes sense for IronPython, since it has an interactive interpreter. But to me, it doesn't make much sense with respect to a Web CMS.
What should I think of, when I encounter something called a 'shell'? What is, in general terms, the role and responsibility of a 'shell'? Can that question even be answered? Is the meaning of 'shell' subjective (making the term useless)?
In Orchard the term "shell" is really more of a metaphor for a scope. There are three nested scopes: host, shell, and work.
The host is a single container that lives for the duration of the web app domain.
The shell is a child container created by the host that is built according to the current configuration. If the configuration is changed a new shell is built up and the existing one is let go.
The work is another container, created by the shell, that holds the components that live for the duration of a single request.
One nice thing about using a shell container is that it helps avoid static variables and the need to cycle the app domain when configuration changes. Another nice thing is that it enables an Orchard app domain to serve more than one "site" at the same time: the host holds a number of shells and uses the appropriate one for each request.
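A minimal sketch of that nested-scope idea, using Autofac-style lifetime scopes purely as an illustration (this is not Orchard's actual code, and the ISiteService/SiteService names are invented):

using Autofac;

public interface ISiteService { }
public class SiteService : ISiteService { }

public static class ScopesDemo
{
    public static void Main()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<SiteService>().As<ISiteService>();

        // "Host": one container for the lifetime of the app domain.
        using (var host = builder.Build())
        // "Shell": a child scope, rebuilt whenever the configuration changes.
        using (var shell = host.BeginLifetimeScope("shell"))
        // "Work": a child of the shell, one per request.
        using (var work = shell.BeginLifetimeScope("work"))
        {
            var service = work.Resolve<ISiteService>();
        }
    }
}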
I think that a general meaning for shell would be 'user process that interprets and executes commands'.
'User process': as distinct from a process built into the operating system kernel. JCL in the IBM mainframe world would be hard-pressed to count as a shell.
'interprets and executes': in some shape or form, a shell reads commands from a file or a terminal, and reacts to what is presented, rather than being rigidly programmed to do a certain sequence of commands.
'commands': what the commands are depends on the context. In the standard Unix shells, the commands executed are mainly other programs, with the shell linking them together appropriately. Obviously, there are built-in commands, and also there is usually flow-control syntax to allow for appropriate reactions to the results of executing commands.
In other contexts, it is reasonable to think of other sorts of commands being executed. For example, one could envision an 'SQL Shell' which allowed the user to execute SQL statements while connected to a database.
A Python shell would support Pythonic notations and would execute Python-like statements, with a syntax closely related to the syntax of Python. A Perl Shell would support Perl-like notations and would execute Perl-like statements, ... And so the list goes on. (For example, Tcl has tclsh - the Tcl Shell.)
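To make the 'interprets and executes commands' part concrete, here is a minimal sketch of a shell loop in Tcl (illustrative only; a real shell adds history, completion and error recovery):

while {true} {
    puts -nonewline "myshell> "
    flush stdout
    if {[gets stdin line] < 0} break             ;# EOF ends the session
    if {[catch {uplevel #0 $line} result]} {
        puts "error: $result"
    } elseif {$result ne ""} {
        puts $result
    }
}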
I think this is more of a Tcl configuration question rather than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In my scripts, I'm finding numerous instances where there are function calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include function that they named INCLUDE. Tclsh obviously balks at it when I try to run the scripts directly, but within my simulation software they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
Within your software (you have checked for this).
Within some other package included by the software. Check whether the environment variable TCLLIBPATH is set, and also whether the simulation software sets TCLLIBPATH itself. It holds a list of directories to search for Tcl packages, and you will need to search any packages located outside the main source tree. Another possibility is that the locations are specified in pkgIndex.tcl files; check those for locations outside the main source tree.
Within an unknown command handler. This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement; see the sketch below for what such a handler might look like.
Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code, used to build the shared library, that can be searched.
Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories that packages are loaded from. But an unknown command handler is also a possibility.
Edit: One more possibility I forgot. Check whether your software sets the auto_path variable, and check any directories added to auto_path for other packages.
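For illustration, a hypothetical unknown handler that supplies an INCLUDE command might look roughly like this (the search strategy here is invented; the real simulation tool may do something quite different):

rename unknown _original_unknown
proc unknown {cmd args} {
    if {$cmd eq "INCLUDE"} {
        # Treat INCLUDE as a wrapper around source, resolving against auto_path.
        set name [lindex $args 0]
        foreach dir $::auto_path {
            set candidate [file join $dir $name]
            if {[file exists $candidate]} {
                return [uplevel #0 [list source $candidate]]
            }
        }
        error "INCLUDE: cannot find $name"
    }
    # Fall back to Tcl's normal unknown handling (auto-loading, abbreviations, ...)
    uplevel 1 [list _original_unknown $cmd {*}$args]
}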
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But that still might not help! It might not be a procedure but instead a general command, which could be implemented in C rather than as a procedure, or it could be defined in a packaged application archive (which are usually awkward to look inside). There are also other types of script-implemented command, which could make things awkward. Generally, searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.
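For completeness, the introspection calls mentioned above look like this (they only help if INCLUDE turns out to be a Tcl proc in the running interpreter):

puts [info commands INCLUDE]          ;# is a command with that name known at all?
if {[llength [info procs INCLUDE]]} {
    puts [info args INCLUDE]          ;# its formal argument list
    puts [info body INCLUDE]          ;# the script that implements it
}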
I'm just starting with Tcl and trying to get my head around how best to define and integrate modules. There seems to be a lot of effort put into the package+namespace concept, but from what I can tell interp is more powerful and leaner for every conceivable scenario, in particular when it comes to hiding and renaming procedures, but also for avoiding creep in the global namespace. The only reason to use package+namespace seems to be because "once upon a time Sun said so".
When should I ever use package+namespace instead of interp?
Namespaces and packages work together. Interpreters are something else.
A namespace is a small scale naming context in Tcl. It can contain commands, variables and other namespaces. You can refer to entities in a namespace via either local names (foo) or via qualified names (bar::foo); if a qualified name starts with ::, it is relative to the (interpreter-)global namespace, and can be used to refer to its command or variable from anywhere in the interpreter. (FWIW, the TclOO object system builds extensively on top of namespaces; there is one namespace per object.)
A package is a high-level concept for a bunch of code supplied by some sort of library. Packages have abstract names (the name does not have to correspond to how the library's implementation is stored on disk) and a distinct version; you can ask for a particular version if necessary, though most of the time you don't bother. Packages can be implemented by multiple mechanisms, but they almost all come down to sourcing some number of Tcl scripts and loading some number of DLLs. Almost all packages declare commands, and they are conventionally encouraged to put those commands in a namespace with the same general name as the package. However, quite a few older packages do not do this for various reasons, mostly to do with compatibility with existing code.
An interpreter is a security context in Tcl. By default, Tcl creates one interpreter (plus another if it sets up the console window in wish). Named entities in one interpreter are completely distinct from named entities in another interpreter with a few key exceptions:
Channels have common names across all interpreters. This means that an interpreter can talk about channels owned by another interpreter, but merely being able to mention its name doesn't give permission to access the channel. (The stdin, stdout and stderr channels are shared by default.)
The interp alias command can be used to make alias commands, which are such that invoking a command (the alias) in one interpreter can cause a command (the implementation) in another interpreter to be called, with all arguments safely passed over. This allows one interpreter to expose whatever special calls it wants another interpreter to access without losing control, but it is up to the implementation of those commands to act safely on those arguments.
A safe interpreter is one with the unsafe commands of Tcl profiled out by default. (That's things like open, socket, source, load, cd, etc.) The parent interpreter that created the safe child interpreter can use the alias mechanism to add in exactly the functionality desired; it's very much analogous to an OS system call except you can easily make your own application-specific ones.
Tcl's threading package is designed to create one interpreter per thread (and the aliasing mechanism does not work across threads). That means that there's very little in the way of shared resources by default, and inter-thread communication is done via queued message passing.
In general, packages are required at most once per interpreter and are how you are recommended to access most third-party functionality. Namespaces are fairly lightweight and are used for all sorts of things, and interpreters are considered to be expensive; lots of quite thoroughly production-grade Tcl scripts only ever work with a single interpreter. (Threads are even more expensive than interpreters; it's good practice to match the number of threads you create to the hardware load that you wish to impose, probably through the use of suitable thread pools.)
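A small sketch contrasting the three concepts (the mylib name and greet command are just illustrations):

# A namespace: a naming context inside one interpreter.
namespace eval mylib {
    proc greet {who} { return "hello, $who" }
}

# A package: a named, versioned bundle of code (normally in its own file,
# found via pkgIndex.tcl; shown inline here to keep the sketch short).
package provide mylib 1.0
package require mylib 1.0
puts [mylib::greet world]

# An interpreter: a separate execution/security context.
set child [interp create -safe]
interp alias $child greet {} mylib::greet    ;# expose exactly one command to it
puts [$child eval {greet sandbox}]
interp delete $child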
The purpose of a module is to provide modular code, i.e. code that can easily be used by applications beyond the module writer's knowledge and control, and that encapsulates their own internals.
Package/namespace-based and interpreter-based modules are probably equally good at encapsulation, but it's not as easy to make interpreter-based modules that play well with arbitrary applications (it is of course possible).
My own opinion is that interpreters are application level (I mostly use them for user input and for controlled evaluation), not module level. Both namespaces and packages have their warts, but in most cases they do what is expected of them with a minimum of fuss.
My recommendation is that if you are writing modules for your own benefit and interpreters serve you well, by all means use them. If you write modules that other people are to use, possibly including yourself in 18 months, you should stick with namespaces and packages.
Is it bad practice to execute external commands from Perl code, for example using backticks, exec() or system()?
Yes, this is bad practice in almost all cases! This is especially true if you have a choice.
Executing external commands is:
Very hard to do correctly (but very easy to get sort-of-working). You can't just escape the shell args and call it a day; you have to account for potentially multiple levels of escaping for potentially different languages.
Slow. A fork+exec to e.g. rm is easily a thousand times slower than the corresponding syscall.
A rigid, error-prone and inexpressive integration point. You typically have to convert data to flat lists of strings and back. You can't use the language's features like exception handling, nested data structures or callbacks.
Due to this, the following are BAD reasons to call external commands:
Not knowing how to do X in your language, but knowing a shell command for it. A typical example is cp -R foo bar.
Not knowing how something works, but knowing a shell oneliner that does it. A typical example is foo *.mp4 > >(tee file).
Not wanting to learn a new API for e.g. json or http, and instead using shell tools like jq or curl.
However, if you are calling a program that does non-trivial things, that doesn't have a native library or bindings, AND that you know how to invoke with execve semantics (NOT system nor perl exec semantics that invoke shells), this is a valuable tool.
Examples of good uses of executing external commands that follow all the above are invoking make to build a project from an installer, or running java -jar ... to start a Minecraft server.
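To illustrate the execve-semantics point in Perl: the list form of system() bypasses the shell entirely, so arguments are passed through without any shell parsing (a minimal sketch):

use strict;
use warnings;

my @files = ('weird name.mp4', 'another;file.mp4');

# Risky: a single string means a trip through the shell, where spaces and
# metacharacters in the filenames get reinterpreted.
# system("rm @files");

# Safer: the list form is passed straight to the program, no shell involved.
system('rm', '--', @files) == 0
    or die "rm failed: exit status $?";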
It's a great* practice. Perl and PHP are great for lots of things, but they're not great at everything, and there's a use case for using external programs and other tools in your project. But one of the things that Perl is definitely great at is gluing together input and output formats and letting you mash together several different tools into a single project, letting each part of the project do what it does best.
* by which I mean, often a great practice. Things like @files=qx(ls $dir) and @txt=qx(cat $textfile) make all right-thinking Perl programmers cringe.
Some operating system commands might have built-in functionality not available in the language (Perl's mkdir() lacks *ix mkdir's -p). Then again, something might be easier to do using language constructs instead of parsing output (readdir() vs ls).
And it is important to remember that something written in Perl might be more portable to non-Unix systems than calling OS-specific external programs.
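A minimal sketch of staying in Perl instead of shelling out, using only core modules (File::Path covers the mkdir -p case):

use strict;
use warnings;
use File::Path qw(make_path);    # core module: the moral equivalent of `mkdir -p`

make_path('deep/nested/dir');

# readdir() instead of parsing `ls` output:
opendir(my $dh, '.') or die "cannot open directory: $!";
my @entries = grep { $_ ne '.' && $_ ne '..' } readdir $dh;
closedir $dh;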
Why not?
But be careful and escape all strings inserted into the command:
In PHP: escapeshellarg()
In Perl: Perl equivalent of PHP's escapeshellarg
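For example, in Perl one option is the String::ShellQuote module from CPAN (a sketch; the list form of system() avoids the need for quoting altogether and is usually preferable):

use strict;
use warnings;
use String::ShellQuote qw(shell_quote);    # CPAN module

my $filename = "a file; with 'awkward' characters.txt";

my $quoted = shell_quote($filename);       # now safe to interpolate into a shell command
my $count  = `wc -l < $quoted`;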
I have some configuration files in which I store complex object values as serialized JSON. Currently there is a configuration file for each environment (localhost, dev, prod etc.) and for each installation by client. Most of the values are identical between environments, but not all. So for three environments and four clients I currently have 12 total files to manage.
If this were a web.config file, there would be web.config transforms to solve the problem. If this were C#, I'd have compiler preprocessor directives that could be used to substitute the different values based on the current build configuration.
Does anyone know of anything that works basically this way, or have good suggestions on tried-and-true ways to proceed? What I would like is to reduce the number of files down to a single instance per installation that can suffice for every environment.
Configuration of configuration always seems a bit overdone to me, but you could use a properties file for the parts that change and Apache Ant's <replace> task to do the substitutions. Something like this:
<replace
    file="configure.json"
    propertyFile="config-of-config.properties">
  <replacefilter
      token="#token1#"
      property="property.key"/>
</replace>
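For illustration, the pieces might look like this (the token and property names are made up to match the snippet above):

configure.json (fragment):

    { "apiUrl": "#token1#" }

config-of-config.properties:

    property.key=https://prod.example.com/api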
Jsonnet from Google is a language with a superset syntax based on JSON, adding high-level language features that help to model data in JSON format. The compilation step produces plain JSON. I have used it in a project to describe complex deployment environments that at times inherit from one another and that share domain attributes, albeit utilizing them differently from one instance to another.
As an example, an instance contains applications, tenant subscriptions for those applications, contracts, destinations and so forth. The values for all of these attributes are objects that recur throughout environments.
Their docs are very thorough; don't miss the std library functions, because they make for some very powerful data-rendering capabilities.
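A minimal sketch of the inheritance idea (the field names are invented); compiling it with jsonnet produces plain JSON:

// One base object that each environment extends and overrides.
local base = {
  logLevel: 'info',
  db: { host: 'localhost', port: 5432 },
};

{
  'dev.json': base { db+: { host: 'dev-db.internal' } },
  'prod.json': base { logLevel: 'warn', db+: { host: 'prod-db.internal' } },
}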
I wrote a Squirrelistic JSON Preprocessor, which uses Go's text/template syntax to generate JSON files based on the parameters provided.
A JSON template can include references to other templates and use conditional logic, comments, variables and everything else the Go text/template package provides.
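For example, a parameterised template in Go text/template syntax might look roughly like this (the .Env and .DbHost parameter names are just placeholders):

{{/* Placeholder parameters; anything the Go text/template package supports works here. */}}
{
  "environment": "{{ .Env }}",
  "database": {
    "host": "{{ .DbHost }}"
  }{{ if eq .Env "localhost" }},
  "debug": true{{ end }}
}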
This really comes down to your full stack.
If you're talking about an application that runs solely client-side, with no server-side processing whatsoever, then there's really no such thing as pre-processing.
You can process the data further before actually using it, but that won't mean it is processed before the page is served -- it means people have to sit around waiting for that to happen before the apps which need that data can be initialized.
The benefit of using JSON to begin with is that it's just a data store, language-agnostic and very widely supported. So if it's not 100% client-side, there's nothing stopping you from pre-processing in whatever language you're using on the server, and caching those versions of those files to serve (and cache) to users, based on their need.
If you really do need live processing of config files on the client side, and you've gone through the work of creating app views which load early but show the user that they're deferring initialization (i.e. "loading..."/spinners), then download a second JSON file which holds all of the needed implementation-specific data (you'll have 12 of these tiny little files, which should be simple to manage), parse both JSON files into JS objects, and extend the large config object with the additional data in the secondary file.
Please note: use localStorage or some other storage facility to cache this, so that for HTML5 browsers this longer load only happens once.
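A rough sketch of that overlay approach (the file names and keys are invented):

// Fetch the shared base config plus the small per-installation overlay,
// merge them, and cache the result so the extra round-trip only happens once.
async function loadConfig() {
  const cached = localStorage.getItem('appConfig');
  if (cached) return JSON.parse(cached);

  const base    = await (await fetch('/config/base.json')).json();
  const overlay = await (await fetch('/config/client-a.dev.json')).json();
  const config  = Object.assign({}, base, overlay);   // shallow merge

  localStorage.setItem('appConfig', JSON.stringify(config));
  return config;
}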
There is one, https://www.npmjs.com/package/json-variables
Conceptually, it is a function which takes a string of JSON contents sprinkled with specially marked variables, and produces a string with those variables resolved - much as Sass or Less do for CSS. It's used to DRY up the source code.
Here's an example.
You'd put something like this in JSON file:
{
"firstName": "customer.firstName",
"message": "Hi %%_firstName_%%",
"preheader": "%%_firstName_%%, look what's inside"
}
Notice how it's DRY — single source of truth for the firstName value.
json-variables would process it into:
{
"firstName": "customer.firstName",
"message": "Hi customer.firstName",
"preheader": "customer.firstName, look what's inside"
}
that is, Hi %%_firstName_%% would look for firstName at the root level (but equally, it could be a deeper path, for example data1.data2.firstName). Resolving also "bubbles up" to the root level, and you can use custom data structures and more.
Missing pieces of a JSON-processing task puzzle are:
Means to merge multiple JSON files, various ways (object-merge-advanced)
Means to orchestrate actions — Gulp is good if your preferred programming language is JS
Means to get/set values by path (object-path - its notation uses dots only, no brackets key1.key2.array.2 instead of key1.key2.array[2])
Means to maintain the same set of keys across set of JSON files - you add a key in one, it's added on all others (object-fill-missing-keys)
In the described case, we can take at least two approaches: one-to-many, or many-to-many.
The former: Gulp could be "baking" many JSON files from one or more JSON-like source files, with json-variables DRY-ing up the references.
The latter: alternatively, it could be a "managed" set of JSON files rendered into a set of distribution files — Gulp watches the src folder, runs object-fill-missing-keys to normalise the schemas, and maybe even sorts the objects (yes, it's possible, with sorted-object).
It all depends on how similar the desired set of JSON files is, how the values are customised, and whether that is done manually or programmatically.
I am in the process of building interactive front-ends to a distributed application which to date has been used to run workloads with batch-job-like structures and needed no UI at all. The application is mostly written in Perl and C and runs on a mix of Unix and Windows machines, but I think this isn't relevant to the UI.
The first such frontend is going to have a command-line user interface -- currently, I envision something similar to the CLIs of the ProCurve switches and Cisco routers that I have worked with.
Like modern network gear CLIs, commands are going to resemble simple sentences (e.g. show vlans ports 1-4) and the CLI will have some implicit state, much in the way that Unix shells and cmd.exe on Windows have environment variables and current working directories. Moreover, I'd like to implement great tab completion that is aware of the application's state as much as possible, and I want to be able to do that with as little application-specific code as possible.
The low-level functionality (terminal I/O) seems easy to implement on top of GNU Readline or similar libraries, but that's only where the real fun starts. So far I have looked at the Perl modules Term::Shell and Term::ShellUI, but I'm not convinced that I want to use either of them. I am still considering rolling my own solution, and at the moment I am primarily looking for inspiration.
Can you recommend any application or library, regardless of implementation language, that implements a good CLI from which I can borrow ideas?
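To make the kind of interface I have in mind concrete, here is a rough sketch using Term::ShellUI, based on its documented synopsis (the show vlans subcommand and its body are placeholders):

use strict;
use warnings;
use Term::ShellUI;

my $term = Term::ShellUI->new(
    commands => {
        "show" => {
            desc => "Show status information",
            cmds => {
                "vlans" => {
                    desc => "Show VLAN configuration",
                    proc => sub { print "vlans: @_\n" },    # placeholder body
                },
            },
        },
        "quit" => {
            desc    => "Quit this program",
            maxargs => 0,
            method  => sub { shift->exit_requested(1) },
        },
    },
);
$term->run();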
I suggest you take a look at the philosophy underlying Microsoft PowerShell. From the idea of piping typed objects between commands to the consistency of its commands and argument syntax, I think it can be a source of inspiration.
You could try having a look at libcli:
"Libcli provides a shared library for including a Cisco-like command-line interface into other software."
http://code.google.com/p/libcli/
BTW - I forgot to mention that it is GNU Lesser GPL and actually used by Cisco in some products.
As for your last sentence/question, I'm particularly fond of zsh completion and line editing (zle).