As of now I use 3 Notebooks:
Functions
Where I have all the functions I created and call in the other Notebooks.
Transformation
Based on the original data, I compute transformations and add columns/lists.
Where data is my raw data, I then define:
t1data : the result of the first transformation
t2data : the result of the second transformation
and so on; I am currently at t20.
Display & Analysis
Using both of the above I create Manipulate objects that enable me to analyze the data.
Questions
Is there a way to save the results of the Transformation Notebook so that t13data, for example, can be used in the Display & Analysis Notebook without running all the previous computations (t1, t2, t3, ..., t12) it is based on?
Is there a way to use my Functions or transformed data without opening the corresponding Notebook?
Does my separation strategy make sense at all?
As of now I systematically open all 3 and have to run them all before being able to do anything, and it takes a while given my poor computing power and my as-yet inefficient code.
Saving variable states: this can be done using DumpSave, Save, or Put. Read them back using Get or <<.
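For example, a minimal sketch for the t13data case (the file name is a placeholder; note that the .mx files written by DumpSave are fast binary files, but they are not portable across Mathematica versions or platforms):
DumpSave["t13data.mx", t13data]; (* in the Transformation notebook, once t13data exists *)
Get["t13data.mx"] (* in the Display & Analysis notebook; restores t13data without rerunning t1...t12 *)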
You could make a package from your functions and read it back using Needs or <<.
It's not something I usually do. I opt for a monolithic notebook containing everything (nicely layered with sections and subsections so that you can fold open or close) or for a package + slightly leaner analysis notebook depending on the weather and some other hidden variables.
Saving intermediate results
The native file format for Mathematica expressions is the .m file. This is a human-readable text format, and you can view the file in a text editor if you ever doubt what is, or is not, being saved. You can load these files using Get. The shorthand form for Get is:
<< "filename.m"
Using Get will replace or refresh any existing assignments that are explicitly made in the .m file.
Saving intermediate results that are simple assignments (dat = ...) may be done with Put. The shorthand form for Put is:
dat >> "dat.m"
This saves only the assigned expression itself; to restore the definition you must use:
dat = << "dat.m"
See also PutAppend for appending data to a .m file as new results are created.
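For instance, a hedged one-liner using the >>> shorthand for PutAppend (the file name is a placeholder):
t1data >>> "results.m" (* appends the current value of t1data to results.m *)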
Saving results and function definitions that are complex assignments is done with Save. Examples of such assignments include:
f[x_] := subfunc[x, 2]
g[1] = "cat"
g[2] = "dog"
nCr = #!/(#2! (# - #2)!) &;
nPr = nCr[##] #2! &;
For the last example, the complexity is that nPr depends on nCr. Using Save it is sufficient to save only nPr to get a fully working definition of nPr: the definition of nCr will automatically be saved as well. The syntax is:
Save["nPr.m", nPr]
Using Save the assignments themselves are saved; to restore the definitions use:
<< "nPr.m" ;
Moving functions to a Package
In addition to Put and Save, or manual creation in a text editor, .m files may be generated automatically. This is done by creating a Notebook and setting Cell > Cell Properties > Initialization Cell on the cells that contain your function definitions. When you save the Notebook for the first time, Mathematica will ask if you want to create an Auto Save Package. Do so, and Mathematica will generate a .m file in parallel to the .nb file, containing the contents of all Initialization Cells in the Notebook. Further, it will update this .m file every time you save the Notebook, so you never need to manually update it.
Since all Initialization Cells will be saved to the parallel .m file, I recommend using the Notebook only for the generation of this Package, and not also for the rest of your computations.
When managing functions, one must consider context. Not all functions should be global at all times. A series of related functions should often be kept in its own context which can then be easily exposed to or removed from $ContextPath. Further, a series of functions often rely on subfunctions that do not need to be called outside of the primary functions, therefore these subfunctions should not be global. All of this relates to Package creation. Incidentally, it also relates to the formatting of code, because knowing that not all subfunctions must be exposed as global gives one the freedom to move many subfunctions to the "top level" of the code, that is, outside of Module or other scoping constructs, without conflicting with global symbols.
Package creation is a complex topic. You should familiarize yourself with Begin, BeginPackage, End and EndPackage to better understand it, but here is a simple framework to get you started. You can follow it as a template for the time being.
This is an old definition I used before DeleteDuplicates existed:
BeginPackage["UU`"]
UnsortedUnion::usage = "UnsortedUnion works like Union, but doesn't \
return a sorted list. \nThis function is considerably slower than \
Union though."
Begin["`Private`"]
UnsortedUnion =
 Module[{f}, f[y_] := (f[y] = Sequence[]; y); f /@ Join@##] &
End[]
EndPackage[]
Everything above goes in Initialization Cells. You can insert Text cells, Sections, or even other input cells without harming the generated Package: only the contents of the Initialization Cells will be exported.
BeginPackage defines the Context that your functions will belong to, and disables all non-System` definitions, preventing collisions. (There are ways to call other functions from your package, but that is better left for another question.)
By convention, a ::usage message is defined for each function that is to be accessible outside the package itself. This is not superfluous! While there are other methods, without this, you will not expose your function in the visible Context.
Next, you Begin a context that is for the package alone, conventionally "`Private`". After this point any symbols you define (that are not used outside of this Begin/End block) will not be exposed globally after the Package is loaded, and will therefore not collide with Global` symbols.
After your function definition(s), you close the block with End[]. You may use as many Begin/End blocks as you like, and I typically use a separate one for each function, though it is not required.
Finally, close with EndPackage[] to restore the environment to what it was before using BeginPackage.
After you save the Notebook and generate the .m package (let's say "mypackage.m"), you can load it with Get:
<< "mypackage.m"
Now, there will be a function UnsortedUnion in the Context UU` and it will be accessible globally.
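For example, a quick check (output shown under the assumption that the package loaded cleanly):
UnsortedUnion[{3, 1, 2}, {5, 3}]
(* {3, 1, 2, 5} *)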
You should also look into the functionality of Needs, but that is a little more advanced in my opinion, so I shall stop here.
Related
Can we change the output dataset path dynamically in my_compute_function, as shown below?
from transforms.api import transform, Input, Output

@transform(
    my_output=Output("/path/to/my/dataset"),
    my_input=Input("/path/to/input"),
)
def my_compute_function(my_output, my_input):
    my_output.path = "new path"  # <-- the line in question
    my_output.write_dataframe(
        my_input.dataframe()
    )
No, this is not possible. The reason is that the inputs/outputs/transforms are fixed at "CI-time" or "build-time". When you press "commit" in Authoring or you merge a PR, a CI job is kicked off.
In this CI job, all the relations between inputs and outputs are determined. Output datasets that don't exist yet are created, and a "jobspec" is added to them. A "jobspec" is a snippet of JSON that describes to Foundry how a particular dataset is generated.
Anytime you press the "build" button on a dataset (or build the dataset through a schedule or similar), the jobspec is consulted. It contains a reference to the repository, revision, source file and entry point of the function that builds this dataset. From there the build is orchestrated and kicks off, invoking your function to produce the final output.
This mechanism allows you to get a "static view" of the entire pipeline, which you can then visualize with Monocle, as you might have seen.
Depending on what your needs are, here are some solutions you might be able to use instead:
Tag the rows you're producing in your transform in some way, so that even though you put them into a single dataset, you can later select them by this tag/category
If your set of categories does not change often, you can instead create the output datasets ahead of time and then filter the rows into the appropriate dataset they should go into (see the sketch after this list).
The main drawback with the latter approach is that it's not very dynamic: if a new category shows up, you'll manually have to change the code to "triage" it into a new dataset, and until then that category's data won't be available.
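To illustrate the second approach, here is a hedged sketch (the paths and the category column are made up; a single transform may declare several outputs):

from transforms.api import transform, Input, Output

@transform(
    cats_out=Output("/path/to/cats"),
    dogs_out=Output("/path/to/dogs"),
    my_input=Input("/path/to/input"),
)
def my_compute_function(cats_out, dogs_out, my_input):
    df = my_input.dataframe()
    # The set of outputs is still fixed at CI-time; only the row routing happens at build-time.
    cats_out.write_dataframe(df.filter(df["category"] == "cat"))
    dogs_out.write_dataframe(df.filter(df["category"] == "dog"))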
There are other solutions (ultimately, it is possible to make API calls and manually adjust inputs/outputs, for instance), but they are more complex and undesirable from a maintenance perspective.
When I run my PsychoPy experiment, PsychoPy saves a CSV file that contains my trials and the values of my variables.
Among these, there are some variables I would like NOT to be included. There are some variables which I decided to include in the CSV, but many others fell into it automatically.
Is there a way to manually force (from the code block) the exclusion of some variables from the CSV?
Is there a way to decide the order of the saved columns/variables in the CSV?
It is not really important, and I know I could just create an output file myself without using the one from PsychoPy, or easily clean it afterwards, but I was just curious.
PsychoPy spits out all the variables it thinks you could need. If you want to drop some of them, that is a task for the analysis stage, and is easily done in any processing pipeline. Unless you are analysing data in a spreadsheet (which you really shouldn't), the number of columns in the output file shouldn't really be an issue. The philosophy is that you shouldn't back yourself into a corner by discarding data at the recording stage - what about the reviewer who asks about the influence of a variable that you didn't think was important?
If you are using the Builder interface, the saving of onset & offset times for each component is optional, and is controlled in the "data" tab of each component dialog.
The order of variables is also not under direct control of the user, but again, can be easily manipulated at the analysis stage.
As you note, you can of course write code to save custom output files of your own design.
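For instance, a minimal pandas sketch of that post-processing step (file and column names are placeholders):

import pandas as pd

df = pd.read_csv("participant_01.csv")
# Drop the variables you don't want (ignore any that are absent).
df = df.drop(columns=["frameRate", "expName"], errors="ignore")
# Keep only the columns you care about, in the order you want them.
df = df[["participant", "trials.thisN", "key_resp.keys", "key_resp.rt"]]
df.to_csv("participant_01_clean.csv", index=False)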
There is a special block called session_variable_order: [var1, var2, var3] in the experiment_config.yaml file, which you probably should be using; also, you should consider these methods:
from psychopy import data
data.ExperimentHandler.saveAsWideText(fileName = 'exp_handler.csv', delim='\t', sortColumns = False, encoding = 'utf-8')
data.TrialHandler.saveAsText(fileName = 'trial_handler.txt', delim=',', encoding = 'utf-8', dataOut = ('n', 'all_mean', 'all_raw'), summarised = False)
Notice the sortColumns and dataOut params.
First, here is the way I'm calling the function:
eval([functionName '(''stringArg'')']); % functionName = 'someStringForTheFunctionName'
Now, I have two functionName functions on my path: one that takes the stringArg and another that takes something else. I'm getting errors because the first one MATLAB finds is the one that doesn't take the stringArg. Given the way I'm calling the functionName function, how is it possible to call the correct one?
Edit:
I tried the function which:
which -all someStringForTheFunctionName
The result:
C:\........\x\someStringForTheFunctionName
C:\........\y\someStringForTheFunctionName % Shadowed
The shadowed function is the one I want to call.
Function names must be unique in MATLAB. If they are not (so there are duplicate names), then MATLAB uses the first one it finds on your search path.
Having said that, there are a few options open to you.
Option 1. Use @ directories, putting each version in a separate directory. Essentially you are using the ability of MATLAB to apply a function to specific classes. So, you might set up a pair of directories:
@char
@double
Put your copies of myfun.m in the respective directories. Now when MATLAB sees a double input to myfun, it will direct the call to the double version. When MATLAB gets char input, it goes to the char version.
BE CAREFUL. Do not put these @ directories explicitly on your search path. DO put them INSIDE a directory that is on your search path.
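For instance, a hedged sketch of the setup (someDirOnYourPath and myfun are placeholders):

% someDirOnYourPath is on the search path; the @ directories sit inside it:
%   someDirOnYourPath/@char/myfun.m
%   someDirOnYourPath/@double/myfun.m
myfun('abc') % dispatches to the @char version
myfun(1.5)   % dispatches to the @double version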
A problem with this scheme is if you call the function with a SINGLE precision input, MATLAB will probably have a fit, so you would need separate versions for single, uint8, int8, int32, etc. You cannot just have one version for all numeric types.
Option 2. Have only one version of the function, that tests the first argument to see if it is numeric or char, then branches to perform either task as appropriate. Both pieces of code will most simply be in one file then. The simple scheme will have subfunctions or nested functions to do the work.
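A minimal sketch of option 2 (all names are hypothetical; the two branches just stand in for your real tasks):

function out = myfun(x)
% MYFUN dispatches on the class of its first argument.
if ischar(x)
    out = charVersion(x);
elseif isnumeric(x)
    out = numericVersion(x);
else
    error('myfun:badInput', 'Unsupported input class: %s', class(x));
end
end

function out = charVersion(s)
out = upper(s); % placeholder for the char behaviour
end

function out = numericVersion(n)
out = n + 1; % placeholder for the numeric behaviour
end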
Option 3. Name the functions differently. Hey, it's not the end of the world.
Option 4. As Shaun points out, one can simply change the current directory. MATLAB always looks first in your current directory, so it will find the function in that directory as needed. One problem is that this is time consuming. Any time you touch a directory, things slow down, because disk access is now needed.
The worst part of changing directories is in how you use MATLAB. It is (IMHO) a poor programming style to force the user to always be in a specific directory based on what code inputs they wish to run. Better is a data driven scheme. If you will be reading in or writing out data, then be in THAT directory. Use the MATLAB search path to categorize all of your functions, as functions tend not to change much. This is a far cleaner way to work than requiring the user to migrate to specific directories based on how they will be calling a given function.
Personally, I'd tend to suggest option 2 as the best. It is clean. It has only ONE main function that you need to work with. If you want to keep the functions distinct, put them as separate nested functions or subfunctions inside the main function body. Inside, of course, they will have distinct names, based on how they are driven.
OK, so a messy answer, but it should do it. My test function was 'echo':
funcstr = 'echo'; % string representation of the function to call
Fs = which('-all', funcstr); % full paths of every copy on the path
% First, rename every .m copy out of the way.
for v = 1:length(Fs)
    if strcmp(Fs{v}(end-1:end), '.m') % don't move built-ins, they will be shadowed anyway
        movefile(Fs{v}, [Fs{v} '_BK']);
    end
end
% Restore one copy at a time and try the call.
for v = 1:length(Fs)
    if strcmp(Fs{v}(end-1:end), '.m')
        movefile([Fs{v} '_BK'], Fs{v});
    end
    try
        eval([funcstr '(''stringArg'')']);
        break; % this copy accepted the argument; stop searching
    catch
        if strcmp(Fs{v}(end-1:end), '.m')
            movefile(Fs{v}, [Fs{v} '_BK']); % hide it again and try the next one
        end
    end
end
% Finally, restore every copy that is still renamed.
for w = 1:length(Fs)
    if strcmp(Fs{w}(end-1:end), '.m') && exist([Fs{w} '_BK'], 'file')
        movefile([Fs{w} '_BK'], Fs{w});
    end
end
You can also create a function handle for the shadowed function. The problem is that the first function is higher on the MATLAB path, but you can circumvent that by (temporarily) changing the current directory.
Although it is not nice IMO to change the current directory (actually I'd rather never change it while executing code), it will solve the problem quite easily, especially if you use it in the configuration part of your function with a persistent function handle:
function outputpars = myMainExecFunction(inputpars)
% configuration
persistent shadowfun;
if isempty(shadowfun)
    funpath1 = 'C:\........\x\fun';
    funpath2 = 'C:\........\y\fun'; % shadowed
    curcd = cd;
    cd(funpath2);
    shadowfun = @fun; % the handle now points at the shadowed copy
    cd(curcd); % and go back to the original cd
end
outputpars{1} = shadowfun(inputpars); % will use the shadowed function
outputpars{2} = fun(inputpars); % will use the function highest on the MATLAB path
end
This approach was also discussed here as a possible solution.
I believe it is actually the only way to overload a built-in function outside the source directory of the overloading function (e.g., when you want to run your own sum.m from a directory other than the one where your sum.m is located).
EDIT: Old answer no longer good
The run command won't work because it's a function, not a script.
Instead, your best approach would honestly be to just figure out which of the functions needs to be run, get the current directory, change to the directory your function is in, run it, and then change back to your starting directory.
This approach, while not perfect, seems MUCH easier to code, to read, and less prone to breaking. And it requires no changing of names or creating extra files or function handles.
I just wrote an m-file with some defined inputs in which a Simulink file is called.
It worked correctly,
but when I define a function based on the same m-file (so I can give multiple inputs to it), it gives me this error:
""
Invalid matrix-format variable specified as workspace input in 'blocks/From Workspace'. The matrix
must have two dimensions and at least two columns. Complex signals of any data type and non-double
real signals must be in structure format. The first column must contain time values and the
remaining columns the data values.
""
But I'm pretty sure that variable has two dimensions and two columns.
I don't have any idea what to do here.
What can I do?
Are you saying the m-file that runs your Simulink simulation works when it is a script, but not when it is a function? If so, this answer may provide some insight. Despite a preference for functions, I use scripts to run Simulink parameter studies; it was just easier to set up.
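For what it's worth, a common explanation is that the From Workspace block looks for its variable in the base workspace by default, and a function's local variables are not in the base workspace. A hedged sketch of one workaround (the model and variable names are made up):

function runSim(t, u)
% Build the matrix the From Workspace block expects:
% time in the first column, data in the remaining columns.
simin = [t(:), u(:)];
% Push it into the base workspace so the block can find it.
assignin('base', 'simin', simin);
sim('myModel'); % the model's From Workspace block reads 'simin'
end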
Is there any way to define an Erlang function from within the Erlang shell instead of from an .erl file (i.e., a module)?
Yes but it is painful. Below is a "lambda function declaration" (aka fun in Erlang terms).
1> F = fun(X) -> X+2 end.
#Fun<erl_eval.6.13229925>
Have a look at this post. You can even enter a module's worth of declarations if you ever need to. In other words, yes, you can declare functions.
One answer is that the shell only evaluates expressions, and function definitions are not expressions; they are forms. In an .erl file you define forms, not expressions.
All functions exist within modules, and apart from function definitions a module consists of attributes, the most important being the module's name and which functions are exported from it. Only exported functions can be called from other modules. This means that a module must be defined before you can define the functions.
Modules are the unit of compilation in Erlang. They are also the basic unit for code handling, i.e. it is whole modules which are loaded into, updated in, or deleted from the system. In this respect, defining functions separately one by one does not fit into the scheme of things.
Also, from a purely practical point of view, compiling a module is so fast that there is very little gain in being able to define functions in the shell.
This depends on what you actually need to do.
There are functions that one could consider as 'throwaways', that is, are defined once to perform a test with, and then you move on. In such cases, the fun syntax is used. Although a little cumbersome, this can be used to express things quickly and effectively. For instance:
1> Sum = fun(X, Y) -> X + Y end.
#Fun<erl_eval.12.128620087>
2> Sum(1, 2).
3
defines an anonymous fun that is bound to the variable (or label) Sum. Meanwhile, the following defines a named fun, called F, that is used to create a new process whose PID (<0.80.0>) is bound to Pid. Note that F is called in a tail recursive fashion in the second clause of receive, enabling the process to loop until the message stop is sent to it.
3> Pid = spawn(fun F() -> receive stop -> io:format("Stopped~n"); Msg -> io:format("Got: ~p~n", [Msg]), F() end end).
<0.80.0>
4> Pid ! hello.
hello
Got: hello
5> Pid ! stop.
Stopped
stop
6>
However, you might need to define certain utility functions that you intend to use over and over again in the Erlang shell. In this case, I would suggest using the user_default.erl file together with .erlang to automatically load these custom utility functions into the Erlang shell as soon as it is launched. For instance, you could write a function that compiles all the Erlang files living in the current directory.
I have written a small guide on how to do this on this GitHub link. You might find it helpful and instructive.
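As a minimal hedged sketch of that route (the helper function is a made-up example; the module must be named exactly user_default, and its compiled .beam must be visible to the shell, e.g. via a code:add_path line in your .erlang file):

%% user_default.erl
-module(user_default).
-export([compile_all/0]).

%% Compile every .erl file living in the current directory.
compile_all() ->
    [compile:file(F) || F <- filelib:wildcard("*.erl")].

After that, typing compile_all(). at the shell prompt works with no module prefix.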
If you want to define a function in the shell to use as a macro (because it encapsulates some functionality that you need frequently), have a look at
https://erldocs.com/current/stdlib/shell_default.html