I can't find an answer about the difference between these two ways of defining a namespace, with and without the leading :: (I see both forms used when reading source files):
namespace eval somenamespace {
}
and
namespace eval ::somenamespace {
}
sample without ::
https://github.com/tcltk/tcllib/blob/master/modules/generator/generator.tcl#L16
sample with ::
https://github.com/tcltk/tcllib/blob/master/modules/ftp/ftp.tcl#L56
It's a bit like path names. If you are in the root directory (the unnamed / path) it makes no difference if you use bar or /bar: both refer to the /bar directory. If you are in /foo, it matters very much if you use bar or /bar: the first refers to the /foo/bar directory, and the second still refers to the /bar directory.
:: is like / for namespace names. In the root namespace (the empty :: name) it makes no difference if you use bar or ::bar: both refer to the ::bar namespace. If you are in ::foo, it matters very much if you use bar or ::bar: the first refers to the ::foo::bar namespace, and the second still refers to the ::bar namespace.
Documentation: namespace
In general, it depends on the context in which the code is run. If it is run in the global namespace, there is no difference between the two. If it is run inside another namespace (e.g., in ::foo for the sake of argument) there's a difference (since one creates ::foo::somenamespace).
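For example, a minimal sketch (the names foo and somenamespace are just placeholders):
namespace eval ::foo {
    namespace eval somenamespace {}      ;# creates ::foo::somenamespace
    namespace eval ::somenamespace {}    ;# creates ::somenamespace
}
puts [namespace children ::foo]          ;# -> ::foo::somenamespace
puts [namespace exists ::somenamespace]  ;# -> 1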
For packages it makes little difference: the scripts provided by package ifneeded — and hence run by package require — are actually run by this line (inside tclPkg.c, in the function PkgRequireCore):
code = Tcl_EvalEx(interp, script, -1, TCL_EVAL_GLOBAL);
That is, they're always in the global context, the :: namespace.
To try to understand, I looked for some code on the internet and found the following declarations of what I suppose to be functions, which I don't understand at all.
sext #(.inwidth(1), .outwidth(32)) scc_sext_i0(
    .i0(paw_0_i0_outport0[32]),
    .o0(scc_sext_i0_o0));
combine2_wn #(.inwidth0(32), .inwidth1(32)) scc_combine2_wn_i0(
    .i0(paw_0_i0_outport0[31 : 0]),
    .i1(scc_sext_i0_o0),
    .o0(scc_combine2_wn_i0_o0));
combine2_wn #(.inwidth0(32), .inwidth1(32)) scc_combine2_wn_i1(
    .i0(scc_combine2_wn_i2_o0[31 : 0]),
    .i1(scc_combine2_wn_i2_o0[63 : 32]),
    .o0(scc_combine2_wn_i1_o0));
My questions are the following:
Are these really function mappings?
If yes, they are not defined in any other lower-level .v file (and no library is included in the top-level file either). So what is their use?
What does the # symbol mean?
What does .inwidth(32) mean? An input of 32 bits? (impossible to find on the internet...)
If yes, the combine2_wn blocks should have only 2 inputs, so why is there an output mapped each time?
More generally, are these some kind of concatenation functions?
These are most likely module instantiations, not function calls.
You should have a module named sext and another named combine2_wn declared in files somewhere in your Verilog search path.
#() means you are assigning values to parameters inside the named modules.
There is a parameter named inwidth in the sext module. You are assigning it a value of 1.
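To illustrate, here is a sketch of what the declaration of the sext module might look like (only the parameter and port names come from the snippet above; the body is an assumption about what a sign-extension block typically does):
module sext #(parameter inwidth = 1, parameter outwidth = 32)
             (input  [inwidth-1:0]  i0,
              output [outwidth-1:0] o0);
  // sign-extend i0 to outwidth bits (assumed behaviour)
  assign o0 = {{(outwidth-inwidth){i0[inwidth-1]}}, i0};
endmodule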
There are plenty of references on the web. Look at the Verilog wiki site.
I have several functions defined in namespace "b" which I export. I then import these functions into namespace ::x::y, like this:
namespace eval ::x::y "namespace import ::b::fun"
Some time later I do:
namespace eval ::x::y fun
Where fun does:
proc fun {} {
    puts "[namespace current]"
    uplevel {puts "[namespace current]"}
}
What is printed is:
::b
::x::y
What I want and need is for 'fun' to happen in ::x::y and not in ::b. What am I doing wrong?
That's not how Tcl's namespaces work. Each procedure is associated with exactly one namespace, which is the one in which its name is located. When you use namespace import, an alias to the procedure is placed in the importing namespace that allows the procedure to be invoked from that other namespace, but the procedure itself remains in its original namespace and executes in that one.
If you want to know the caller's namespace, use uplevel namespace current (or uplevel 1 {namespace current} for a slightly windier but more efficient version). This doesn't actually tell you what namespace contained the command that was used to invoke the procedure though; for that, you need this monstrosity (in the invoked command):
namespace qualifiers [uplevel 1 [list namespace which [lindex [info level 0] 0]]]
Of course, if you're needing that a lot then you're probably doing something wrong. (That's obvious, given the length and complexity of code required to get the information.)
In particular, if you're pretending to do object orientation with this, please stop and use a real object system that gets all the tricky details right. Tcl 8.6.0 includes one (two, if you've got the contributed extensions), and there are many for older versions available as extension packages.
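To make the uplevel suggestion concrete, here is a small sketch reusing the question's names, in which fun reports its caller's namespace:
namespace eval ::b {
    proc fun {} {
        puts [uplevel 1 {namespace current}]
    }
    namespace export fun
}
namespace eval ::x::y {namespace import ::b::fun}
namespace eval ::x::y fun     ;# prints ::x::y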
Is it possible to split a Clojure namespace over multiple source files when doing ahead-of-time compilation with :gen-class? How do (:main true) and (defn- ...) come into play?
Overview
Certainly you can; in fact, the clojure.core namespace itself is split up this way and provides a good model, which you can follow by looking in src/clj/clojure:
core.clj
core_deftype.clj
core_print.clj
core_proxy.clj
..etc..
All these files participate in building up the single clojure.core namespace.
Primary File
One of these is the primary file, named to match the namespace name so that it will be found when someone mentions it in a :use or :require. In this case the main file is clojure/core.clj, and it starts with an ns form. This is where you should put all your namespace configuration, regardless of which of your other files may need it. This normally includes :gen-class as well, so something like:
(ns my.lib.of.excellence
  (:use [clojure.java.io :as io :only [reader]])
  (:gen-class :main true))
Then at appropriate places in your primary file (most commonly all at the end) use load to bring in your helper files. In clojure.core it looks like this:
(load "core_proxy")
(load "core_print")
(load "genclass")
(load "core_deftype")
(load "core/protocols")
(load "gvec")
Note that you don't need the current directory as a prefix, nor do you need the .clj suffix.
Helper files
Each of the helper files should start by declaring which namespace they're helping, but should do so using the in-ns function. So for the example namespace above, the helper files would all start with:
(in-ns 'my.lib.of.excellence)
That's all it takes.
gen-class
Because all these files are building a single namespace, each function you define can be in any of the primary or helper files. This of course means you can define your gen-class functions in any file you'd like:
(defn -main [& args]
...)
Note that Clojure's normal order-of-definition rules still apply for all functions, so you need to make sure that whatever file defines a function is loaded before you try to use that function.
Private Vars
You also asked about the (defn- foo ...) form which defines a namespace-private function. Functions defined like this as well as other :private vars are visible from within the namespace where they're defined, so the primary and all helper files will have access to private vars defined in any of the files loaded so far.
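Putting it all together, a minimal sketch (the file layout below is an assumption, following the my.lib.of.excellence example and the helper/private-var rules above):
;; src/my/lib/of/excellence.clj  -- the primary file
(ns my.lib.of.excellence
  (:gen-class :main true))

(load "helpers")             ; loads src/my/lib/of/helpers.clj

(defn -main [& args]
  (greet "world"))           ; greet is private, defined in the helper

;; src/my/lib/of/helpers.clj  -- a helper file
(in-ns 'my.lib.of.excellence)

(defn- greet [who]
  (println "Hello," who))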
Is there any way to define an Erlang function from within the Erlang shell instead of from an erl file (aka a module)?
Yes, but it is painful. Below is a "lambda function" declaration (a fun, in Erlang terms).
1> F=fun(X) -> X+2 end.
#Fun<erl_eval.6.13229925>
Have a look at this post. You can even enter a module's worth of declarations if you ever need to. In other words, yes, you can declare functions.
One answer is that the shell only evaluates expressions, and function definitions are not expressions; they are forms. In an .erl file you define forms, not expressions.
All functions exist within modules, and apart from function definitions a module consists of attributes, the most important being the module's name and which functions are exported from it. Only exported functions can be called from other modules. This means that a module must be defined before you can define the functions.
Modules are the unit of compilation in Erlang. They are also the basic unit for code handling, i.e. it is whole modules which are loaded into, updated in, or deleted from the system. In this respect, defining functions separately one-by-one does not fit into the scheme of things.
Also, from a purely practical point of view, compiling a module is so fast that there is very little gain in being able to define functions in the shell.
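For example, from the shell (mymod is a hypothetical module sitting as mymod.erl in the current directory):
1> c(mymod).
{ok,mymod}
2> mymod:double(21).
42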
This depends on what you actually need to do.
There are functions that one could consider 'throwaways', that is, functions that are defined once to perform a test and then discarded. In such cases, the fun syntax is used. Although a little cumbersome, this can be used to express things quickly and effectively. For instance:
1> Sum = fun(X, Y) -> X + Y end.
#Fun<erl_eval.12.128620087>
2> Sum(1, 2).
3
defines an anonymous fun that is bound to the variable (or label) Sum. Meanwhile, the following defines a named fun, called F, that is used to create a new process whose PID (<0.80.0>) is bound to Pid. Note that F is called in a tail recursive fashion in the second clause of receive, enabling the process to loop until the message stop is sent to it.
3> Pid = spawn(fun F() -> receive stop -> io:format("Stopped~n"); Msg -> io:format("Got: ~p~n", [Msg]), F() end end).
<0.80.0>
4> Pid ! hello.
hello
Got: hello
5> Pid ! stop.
Stopped
stop
6>
However, you might need to define certain utility functions that you intend to use over and over again in the Erlang shell. In this case, I would suggest using the user_default.erl file together with .erlang to automatically load these custom utility functions into the Erlang shell as soon as it is launched. For instance, you could write a function that compiles all the Erlang files living in the current directory.
I have written a small guide on how to do this on this GitHub link. You might find it helpful and instructive.
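As a sketch of that idea (the function name ca/0 is made up; see the guide above for the complete setup):
%% user_default.erl
-module(user_default).
-export([ca/0]).

%% Compile every .erl file in the current directory.
ca() ->
    [compile:file(F) || F <- filelib:wildcard("*.erl")].
Compile this module and make sure the resulting user_default.beam is on the code path (for instance by adding its directory in your .erlang file); a call such as ca() in the shell will then fall through to it automatically.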
If you want to define a function in the shell to use it as a macro (because it encapsulates some functionality that you need frequently), have a look at
https://erldocs.com/current/stdlib/shell_default.html
I need to run multiple instances of a C program in VxWorks (VxWorks has a global namespace). The problem is that the C program defines global variables (which are intended for use by a specific instance of that program) which conflict in the global namespace. I would like to make minimal changes to the program in order to make this work. All ideas welcomed!
By the way ... This isn't a good time to mention that global variables are not best practice!
The easiest thing to do would be to use task variables (see the taskVarLib documentation).
When using task variables, the variable is specific to the task now in context. On a context switch, the current variable is stored and the variable for the new task is loaded.
The caveat is that a task variable can only be a 32-bit number.
Each global variable must also be added independently (via its own call to taskVarAdd), and it also adds time to the context switch.
Also, you would NOT be able to share the global variable with other tasks.
You can't use task variables with ISRs.
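For illustration, a sketch of the task-variable approach (the variable and function names are made up, and it assumes taskVarLib is configured into the kernel):
#include <vxWorks.h>
#include <taskLib.h>
#include <taskVarLib.h>

int instanceId;                        /* one of the conflicting globals */

void programEntry(int myId)
{
    /* give the calling task its own private copy of instanceId */
    if (taskVarAdd(taskIdSelf(), &instanceId) == ERROR)
        return;

    instanceId = myId;                 /* each task now sees its own value */
    /* ... rest of the original program ... */
}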
Another Possibility:
If you are using VxWorks 6.x, you can make a Real-Time Process (RTP) application.
This follows a process model (similar to Unix/Windows) where each instance of your program has its own global memory space, independent of any other instance.
I had to solve this when integrating two third-party libraries from the same vendor. Both libraries used some of the same symbol names, but they were not compatible with each other. Because these were coming from a vendor, we couldn't afford to search & replace. And task variables were not applicable either since (a) the two libs might be called from the same task and (b) some of the dupe symbols were functions.
Assume we have app1 and app2, linked, respectively, to lib1 and lib2. Both libs define the same symbols so must be hidden from each other.
Fortunately (if you're using GNU tools) objcopy allows you to change the binding of symbols after linking, for example turning global symbols into local ones.
Here's a sketch of the solution; you'll have to modify it for your needs.
First, perform a partial link for app1 to bind it to lib1. Here, I'm assuming that you've already partially linked *.o in app1 into app1_tmp1.o.
$(LD_PARTIAL) $(LDFLAGS) -Wl,-i -o app1_tmp2.o app1_tmp1.o $(APP1_LIBS)
Then, hide all of the symbols from lib1 in the tmp2 object you just created to generate the "real" object for app1.
objcopymips `nmmips $(APP1_LIBS) | grep ' [DRT] ' | sed -e's/^[0-9A-Fa-f]* [DRT] /-L /'` app1_tmp2.o app1.o
Repeat this for app2. Now you have app1.o and app2.o ready to link into your final application without any conflicts.
The drawback of this solution is that you don't have access to any of these symbols from the host shell. To get around this, you can temporarily turn off the symbol hiding for one or the other of the libraries for debugging.
Another possible solution would be to put your application's global variables in a static structure. For example:
From:
int global1;
int global2;

int someApp()
{
    global2 = global1 + 3;
    ...
}
To:
static struct appGlobStruct {
    int global1;
    int global2;
} appGlob;

int someApp()
{
    appGlob.global2 = appGlob.global1 + 3;
}
This simply turns into a search & replace in your application code. No change to the structure of the code.