How to define a function in the eshell (Erlang shell)?

Is there any way to define an Erlang function from within the Erlang shell instead of from an erl file (aka a module)?

Yes, but it is painful. Below is a "lambda function declaration" (a fun, in Erlang terms).
1> F=fun(X) -> X+2 end.
%%⇒ #Fun<erl_eval.6.13229925>
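The fun can then be called directly through the variable it is bound to:
2> F(40).
42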
Have a look at this post. You can even enter a module's worth of declarations if you ever need to. In other words, yes, you can declare functions.

One answer is that the shell only evaluates expressions, and function definitions are not expressions; they are forms. In an erl file you define forms, not expressions.
All functions exist within modules, and apart from function definitions a module consists of attributes, the most important being the module's name and which functions are exported from it. Only exported functions can be called from other modules. This means that a module must be defined before you can define the functions.
Modules are the unit of compilation in Erlang. They are also the basic unit for code handling, i.e. it is whole modules that are loaded into, updated in, or deleted from the system. In this respect, defining functions separately one-by-one does not fit into the scheme of things.
Also, from a purely practical point of view, compiling a module is so fast that there is very little gain in being able to define functions in the shell.

This depends on what you actually need to do.
There are functions that one could consider 'throwaways', that is, functions that are defined once, used to perform a quick test, and then discarded. In such cases, the fun syntax is used. Although a little cumbersome, it can be used to express things quickly and effectively. For instance:
1> Sum = fun(X, Y) -> X + Y end.
#Fun<erl_eval.12.128620087>
2> Sum(1, 2).
3
defines an anonymous fun that is bound to the variable (or label) Sum. Meanwhile, the following defines a named fun, called F, that is used to create a new process whose PID (<0.80.0>) is bound to Pid. Note that F is called in a tail recursive fashion in the second clause of receive, enabling the process to loop until the message stop is sent to it.
3> Pid = spawn(fun F() -> receive stop -> io:format("Stopped~n"); Msg -> io:format("Got: ~p~n", [Msg]), F() end end).
<0.80.0>
4> Pid ! hello.
hello
Got: hello
5> Pid ! stop.
Stopped
stop
6>
However, you might need to define certain utility functions that you intend to use over and over again in the Erlang shell. In this case, I would suggest using the user_default.erl file together with .erlang to automatically load these custom utility functions into the Erlang shell as soon as it is launched. For instance, you could write a function that compiles all the Erlang files living in the current directory.
I have written a small guide on how to do this on this GitHub link. You might find it helpful and instructive.
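As a rough sketch (the function name c_all/0 is only an example; the module must be named user_default and its compiled .beam must be on the code path, for instance loaded via your .erlang file), such a utility module could look like this:
-module(user_default).
-export([c_all/0]).

%% Compile every .erl file in the current directory.
c_all() ->
    [compile:file(F) || F <- filelib:wildcard("*.erl")].
Once it is loaded, you should be able to call c_all() directly at the shell prompt, as if it were a built-in shell command.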

If you want to define a function in the shell to use as a macro (because it encapsulates some functionality that you need frequently), have a look at
https://erldocs.com/current/stdlib/shell_default.html

How do I know when a function body ends in assembly

I want to know when a function body ends in assembly. For example, in C you have braces {} that tell you where the function body starts and where it ends, but how do I know this in assembly?
Is there a parser that can extract all the functions from assembly, together with the start line and end line of each body?
There's no foolproof way, and there might not even be a well-defined correct answer in hand-written asm.
Usually (e.g. in compiler-generated code) you know a function ends when you see the next global symbol, like objdump does to decide when to print a new "banner". But without all function-start symbols being visible, there's no unambiguous way. That's why some object file formats have room for size metadata associated with a symbol. Like .size foo, . - foo in GAS syntax.
It's not as easy as looking for a ret; some functions end with a jmp tail-call to another function. And some call a noreturn function like abort or __stack_chk_fail (not a tail call, because they want to push a return address for a backtrace). Or they just fall off into whatever's next because that path had undefined behaviour in the source, so the compiler assumed it wasn't reachable and stopped generating instructions for it, e.g. a C++ non-void function where execution can/does fall off the end without a return.
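For illustration, here is a small hand-written x86-64 GAS sketch (the function names are made up) in which one function ends with a normal ret and another ends only with a tail-call jmp, with .size metadata marking where each body ends:
    .globl add_one
add_one:                        # ends with a normal ret
    lea 1(%rdi), %eax
    ret
    .size add_one, . - add_one

    .globl add_two
add_two:                        # ends with a tail-call jmp, not a ret
    add $1, %edi
    jmp add_one                 # add_one will return to our caller
    .size add_two, . - add_two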
In general, assembly can blur the lines of what a function is.
Asm has features you can use to implement the high-level concept of a function, but you're not restricted to that.
e.g. multiple asm functions could all return by jumping to a common block of code that pops some registers before a ret. Is that shared tail a separate function that's tail-called with a special calling convention?
Compilers don't usually do that, but humans could.
As for function entry points, usually some other code somewhere in the program will contain a call to it. But not necessarily; it might only be reachable via a table of function pointers, and you don't know that a block of .rodata holds function pointers until you find some code loading from it and calling or jumping.
But that doesn't work if the lowest-address instruction of the function isn't its entry point. See Does a function with instructions before the entry-point label cause problems for anything (linking)? for an example.
Compilers don't generate code like that, but humans can. (It's a handy trick sometimes for https://codegolf.stackexchange.com/ questions.)
Or in the general case, a function might have multiple entry points. Or you could describe that as multiple functions with overlapping implementations. Sometimes it's as simple as one tailcalling another by falling into it without needing a jmp, i.e. it starts a few instructions before another.
I want to know when a function body ends in assembly, [...]
There are mainly four ways that the execution of a stream of (userspace) instructions can "end":
An unconditional jump like jmp, or a conditional one like Jcc (je, jnz, jg, ...)
A ret instruction (meaning the end of a subroutine) which probably comes closest to the intent of your question (including the ExitProcess "ret" command)
The call of another "function"
An exception. Not a C style exception, but rather an exception like "Invalid instruction" or "Division by 0" which terminates the user space program
[...] for example in C you have braces {} that tell you where the function body starts and where it ends, but how do I know this in assembly?
Simple answer: you don't. On the machine level every address can (theoretically) be an entry point to a "function". So there is no unique entry point to a "function" other than what you define as one, and you can define anything.
On a tangent, this relates to self-modifying code and viruses, but it need not. The exit/end is as described in the first part above.
Is there a parser that can extract all the functions from assembly, together with the start line and end line of each body?
Disassemblers create some kind of "functions" with entry and exit points, but these are merely assumptions, and there is no way to know whether they are correct. This may cause problems.
The usual approach is to use a disassembler; the work of recombining the stream of instructions into separate "functions" is left to the person who was given the task (vulgo: you). Some tools exist that claim to simplify this, but I cannot judge their efficacy.
From the perspective of a high-level language, there are decompilers that try to reverse the transformation from (for example) C to assembly/machine code and automate that task; they work more or less well, and only in some cases.

functions in Module (Fortran) [duplicate]

I use Intel Visual Fortran. According to Chapman's book, declaring a function's type in the routine that calls it is NECESSARY. But look at this piece of code:
module mod
  implicit none
contains
  function fcn ( i )
    implicit none
    integer :: fcn
    integer, intent (in) :: i
    fcn = i + 1
  end function
end module

program prog
  use mod
  implicit none
  print *, fcn ( 3 )
end program
It runs without that declaration in the calling routine (here prog), and in fact when I do declare its type (I mean the function's type) in program prog or in any other unit, it produces this error:
error #6401: The attributes of this name conflict with those made accessible by a USE statement. [FCN] Source1.f90 15
What am I doing wrong? Or, if I am right, how can this be justified?
You must be working with a very old copy of Chapman's book, or possibly misinterpreting what it says. Certainly a calling routine must know the type of a called function, and in Fortran-before-90 it was the programmer's responsibility to ensure that the calling function had that information.
However, since the 90 standard and the introduction of modules there are other, and better, ways to provide information about the called function to the calling routine. One of those ways is to put the called functions into a module and to use-associate the module. When your program follows this approach the compiler takes care of matters. This is precisely what your code has done and it is not only correct, it is a good approach, in line with modern Fortran practice.
association is Fortran-standard-speak for the way(s) in which names (such as fcn) become associated with entities, such as the function called fcn. use-association is the way implemented by writing use module in a program unit, thereby making all the names in module available to the unit which uses module. A simple use statement makes all the entities in the module known under their module-defined names. The use statement can be modified by an only clause, which means that only some module entities are made available. Individual module entities can be renamed in a use statement, thereby associating a different name with the module entity.
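As a small sketch of these forms (reusing mod and fcn from the question; prog2 and the local name next are only illustrative):
program prog2
  use mod, only: fcn        ! only fcn is made accessible from mod
  ! use mod, next => fcn    ! alternative: make fcn accessible under the local name next
  implicit none
  print *, fcn ( 3 )
end program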
The error message you get if you include a (re-)declaration of the called function's type in the calling routine arises because the compiler will only permit one declaration of the called function's type.

Warning for variables with function names in Matlab

Sometimes I accidentally declare variables that have the name of a function.
Here is a constructed example:
max(4:5) % 5
max(1:10)=10*ones(10,1); % oops, should be == instead of =
max(4:5) % [10 10]
At the moment I always find this out the hard way and it especially happens with function names that I don't use frequently.
Is there any way to let MATLAB give a warning about this? It would be ideal to see this on the right-hand side of the screen with the other warnings, but I am open to other suggestions.
Since Matlab allows you to overload built-in functionality, you will not receive any warnings when using existing names.
There are, however, a few tricks to minimize the risk of overloading existing functions:
Use explicit variable names. It is much less likely that there is already a function called maxIndex than one called max.
Use the "Tab"-key often. Matlab will auto-complete functions on the path (as well as variables that you've declared previously). Thus, if the variable auto-completes, it already exists. In case you don't remember whether it's also a function, hit "F1" to see whether there exists a help page for it.
Use functions rather than scripts, so that "mis-"assigned variables in the workspace won't mess up your code.
I'm pretty sure mlint can also check for that.
Generally I would wrap code into functions as much as possible. That way the scope of such an accidental override is limited to the function, so there are no lasting problems, besides the accidental assumption of course.
When in doubt, check:
exist max
ans =
5
Looking at help exist, you can see that "max" is a function, and shouldn't be assigned as a variable.
>> help exist
exist Check if variables or functions are defined.
exist('A') returns:
0 if A does not exist
1 if A is a variable in the workspace
2 if A is an M-file on MATLAB's search path. It also returns 2 when
A is the full pathname to a file or when A is the name of an
ordinary file on MATLAB's search path
3 if A is a MEX-file on MATLAB's search path
4 if A is a MDL-file on MATLAB's search path
5 if A is a built-in MATLAB function
6 if A is a P-file on MATLAB's search path
7 if A is a directory
8 if A is a class (exist returns 0 for Java classes if you
start MATLAB with the -nojvm option.)
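If you do find that you have accidentally shadowed a function, clearing the variable makes the original function visible again:
clear max     % removes the variable, so max refers to the built-in function again
max(4:5)      % 5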

Organizing Notebooks & Saving Results in Mathematica

As of now I use 3 Notebooks:
Functions
Where I have all the functions I created and call in the other Notebooks.
Transformation
Based on the original data, I compute transformations and add columns/Lists.
When data is my raw data, I then define:
t1data : the result of the first transformation
t2data : the result of the second transformation
and so on; I am already at t20.
Display & Analysis
Using both of the above, I create Manipulate objects that enable me to analyze the data.
Questions
Is there a way to save the results of the Transformation Notebook such that t13data, for example, can be used in the Display & Analysis Notebooks without running all the previous computations (t1, t2, t3, ... t12) it is based on?
Is there a way to use my Functions or transformed data without opening the corresponding Notebook?
Does my separation strategy make sense at all?
As of now I systematically open all 3 and have to run them all before being able to do anything, and it takes a while given my poor computing power and still-inefficient code.
Saving variable states can be done using DumpSave, Save or Put, and read back using Get or << (a small sketch follows below).
You could make a package from your functions and read those back using Needs or <<
It's not something I usually do. I opt for a monolithic notebook containing everything (nicely layered with sections and subsections so that you can fold open or close) or for a package + slightly leaner analysis notebook depending on the weather and some other hidden variables.
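For the specific case of t13data, a minimal sketch (the file name is arbitrary) would be to dump it once from the Transformation notebook and read it back in the Display & Analysis notebook:
DumpSave["t13data.mx", t13data]  (* in the Transformation notebook, after t13data is computed *)
<< "t13data.mx"                  (* in the Display & Analysis notebook; restores t13data *)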
Saving intermediate results
The native file format for Mathematica expressions is the .m file. This is a human-readable text format, and you can view the file in a text editor if you ever doubt what is, or is not, being saved. You can load these files using Get. The shorthand form for Get is:
<< "filename.m"
Using Get will replace or refresh any existing assignments that are explicitly made in the .m file.
Saving intermediate results that are simple assignments (dat = ...) may be done with Put. The shorthand form for Put is:
dat >> "dat.m"
This saves only the assigned expression itself; to restore the definition you must use:
dat = << "dat.m"
See also PutAppend for appending data to a .m file as new results are created.
Saving results and function definitions that are complex assignments is done with Save. Examples of such assignments include:
f[x_] := subfunc[x, 2]
g[1] = "cat"
g[2] = "dog"
nCr = #!/(#2! (# - #2)!) &;
nPr = nCr[##] #2! &;
For the last example, the complexity is that nPr depends on nCr. Using Save it is sufficient to save only nPr to get a fully working definition of nPr: the definition of nCr will automatically be saved as well. The syntax is:
Save["nPr.m", nPr]
Using Save the assignments themselves are saved; to restore the definitions use:
<< "nPr.m" ;
Moving functions to a Package
In addition to Put and Save, or manual creation in a text editor, .m files may be generated automatically. This is done by creating a Notebook and setting Cell > Cell Properties > Initialization Cell on the cells that contain your function definitions. When you save the Notebook for the first time, Mathematica will ask if you want to create an Auto Save Package. Do so, and Mathematica will generate a .m file in parallel to the .nb file, containing the contents of all Initialization Cells in the Notebook. Further, it will update this .m file every time you save the Notebook, so you never need to manually update it.
Since all Initialization Cells will be saved to the parallel .m file, I recommend using the Notebook only for the generation of this Package, and not also for the rest of your computations.
When managing functions, one must consider context. Not all functions should be global at all times. A series of related functions should often be kept in its own context which can then be easily exposed to or removed from $ContextPath. Further, a series of functions often rely on subfunctions that do not need to be called outside of the primary functions, therefore these subfunctions should not be global. All of this relates to Package creation. Incidentally, it also relates to the formatting of code, because knowing that not all subfunctions must be exposed as global gives one the freedom to move many subfunctions to the "top level" of the code, that is, outside of Module or other scoping constructs, without conflicting with global symbols.
Package creation is a complex topic. You should familiarize yourself with Begin, BeginPackage, End and EndPackage to better understand it, but here is a simple framework to get you started. You can follow it as a template for the time being.
This is an old definition I used before DeleteDuplicates existed:
BeginPackage["UU`"]
UnsortedUnion::usage = "UnsortedUnion works like Union, but doesn't \
return a sorted list. \nThis function is considerably slower than \
Union though."
Begin["`Private`"]
UnsortedUnion =
Module[{f}, f[y_] := (f[y] = Sequence[]; y); f /@ Join@##] &
End[]
EndPackage[]
Everything above goes in Initialization Cells. You can insert Text cells, Sections, or even other input cells without harming the generated Package: only the contents of the Initialization Cells will be exported.
BeginPackage defines the Context that your functions will belong to, and disables all non-System` definitions, preventing collisions. (There are ways to call other functions from your package, but that is better for another question).
By convention, a ::usage message is defined for each function that is to be accessible outside the package itself. This is not superfluous! While there are other methods, without this, you will not expose your function in the visible Context.
Next, you Begin a context that is for the package alone, conventionally "`Private`". After this point any symbols you define (that are not used outside of this Begin/End block) will not be exposed globally after the Package is loaded, and will therefore not collide with Global` symbols.
After your function definition(s), you close the block with End[]. You may use as many Begin/End blocks as you like, and I typically use a separate one for each function, though it is not required.
Finally, close with EndPackage[] to restore the environment to what it was before using BeginPackage.
After you save the Notebook and generate the .m package (let's say "mypackage.m"), you can load it with Get:
<< "mypackage.m"
Now, there will be a function UnsortedUnion in the Context UU` and it will be accessible globally.
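For example, after loading the package, the function keeps the first occurrence of each element in its original order:
UnsortedUnion[{3, 1, 2, 1, 3}]
(* {3, 1, 2} *)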
You should also look into the functionality of Needs, but that is a little more advanced in my opinion, so I shall stop here.

How to resolve naming conflicts when multiply instantiating a program in VxWorks

I need to run multiple instances of a C program in VxWorks (VxWorks has a global namespace). The problem is that the C program defines global variables (which are intended for use by a specific instance of that program) which conflict in the global namespace. I would like to make minimal changes to the program in order to make this work. All ideas welcomed!
Regards
By the way ... This isn't a good time to mention that global variables are not best practice!
The easiest thing to do would be to use task Variables (see taskVarLib documentation).
When using task variables, the variable is specific to the task now in context. On a context switch, the current variable is stored and the variable for the new task is loaded.
The caveat is that a task variable can only be a 32-bit number.
Each global variable must also be added independently (via its own call to taskVarAdd?) and it also adds time to the context switch.
Also, you would NOT be able to share the global variable with other tasks.
You can't use task variables with ISRs.
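A rough sketch of that approach (myGlobal and instanceInit are illustrative names; check the taskVarLib documentation for your VxWorks version):
#include <vxWorks.h>
#include <taskVarLib.h>

int myGlobal;                     /* one 32-bit cell, but each task sees its own copy */

void instanceInit(void)
{
    /* register myGlobal as a task variable for the calling task (tid 0 = self) */
    taskVarAdd(0, &myGlobal);
    myGlobal = 42;                /* from now on, this value is private to this task */
}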
Another Possibility:
If you are using VxWorks 6.x, you can make a Real Time Process application.
This follows a process model (similar to Unix/Windows) where each instance of your program has its own global memory space, independent of any other instance.
I had to solve this when integrating two third-party libraries from the same vendor. Both libraries used some of the same symbol names, but they were not compatible with each other. Because these were coming from a vendor, we couldn't afford to search & replace. And task variables were not applicable either since (a) the two libs might be called from the same task and (b) some of the dupe symbols were functions.
Assume we have app1 and app2, linked, respectively, to lib1 and lib2. Both libs define the same symbols so must be hidden from each other.
Fortunately (if you're using GNU tools) objcopy allows you to change the binding of a symbol after linking, for example localizing it with -L so it is no longer globally visible.
Here's a sketch of the solution, you'll have to modify it for your needs.
First, perform a partial link for app1 to bind it to lib1. Here, I'm assuming that you've already partially linked *.o in app1 into app1_tmp1.o.
$(LD_PARTIAL) $(LDFLAGS) -Wl,-i -o app1_tmp2.o app1_tmp1.o $(APP1_LIBS)
Then, hide all of the symbols from lib1 in the tmp2 object you just created to generate the "real" object for app1.
objcopymips `nmmips $(APP1_LIBS) | grep ' [DRT] ' | sed -e's/^[0-9A-Fa-f]* [DRT] /-L /'` app1_tmp2.o app1.o
Repeat this for app2. Now you have app1.o and app2.o ready to link into your final application without any conflicts.
The drawback of this solution is that you don't have access to any of these symbols from the host shell. To get around this, you can temporarily turn off the symbol hiding for one or the other of the libraries for debugging.
Another possible solution would be to put your application's global variables in a static structure. For example:
From:
int global1;
int global2;
int someApp()
{
global2 = global1 + 3;
...
}
To:
typedef struct appGlobStruct {
int global1;
int global2;
} appGlob;
int someApp()
{
appGlob.global2 = appGlob.global1 + 3;
}
This simply turns into a search & replace in your application code. No change to the structure of the code.