Octave: load many functions from a single file

How can I put multiple functions in one file and later access all of them in the Octave interpreter? I don't want to have a thousand files; I want to group related functions together. I'd like something like Python's import.

Once you have saved all your function definitions in a single script file, use source("filename") to gain access to those functions within the interpreter.

What you're looking for is called a “script file”, documented here: http://www.gnu.org/software/octave/doc/interpreter/Script-Files.html
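For illustration, such a script file might look like this (file and function names are hypothetical). Note the leading 1; statement: a file whose first statement is a function definition would be treated as a function file, so a script file that only holds function definitions conventionally starts with a harmless non-function statement.

```octave
% mytools.m -- one script file holding several function definitions
1;  % any statement other than a function definition, so Octave treats this as a script file

function r = double_it (x)
  r = 2 * x;
endfunction

function r = triple_it (x)
  r = 3 * x;
endfunction
```

Then, in the interpreter, source ("mytools.m") makes both double_it and triple_it callable.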

Related

gtkwave tcl script for adding specific signals

I have a huge VCD file that I use in combination with gtkwave to observe certain signal behaviors. I have a list of signals, stored in a .txt file, that I wish to probe. The thing is that inserting the signals manually by hand is a painstakingly slow process. So my question here is,
Is there a way, given the .txt file to compose a .tcl script that filters and adds the designated signals from the list to the waveform editor?
Well, after scouring the manuals and some gists I found here and there, it seems there is a load of gtkwave commands one can use, most of which are listed in the gtkwave manual (Appendix E). So in a nutshell, all one has to do is write a .tcl script in the following format:
# add_waves.tcl
set sig_list [list sig_name_a register_name\[32:0\] ... ] ;# Tcl lists are whitespace-separated (no commas); note the escaping of the [,] brackets
gtkwave::addSignalsFromList $sig_list
and then invoke gtkwave as:
gtkwave VCD_file.vcd --script=add_waves.tcl
Furthermore, the GUI menu options are accessible as well, via the following Tcl syntax:
gtkwave::/Edit/<Option> <value>
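If the signal names live in a plain .txt file (one per line), a small helper can generate the .tcl script above instead of writing it by hand. The sketch below is in Python, and the file names are assumptions, not anything gtkwave itself prescribes:

```python
# generate_tcl.py -- build add_waves.tcl from a plain-text signal list
# (signals.txt / add_waves.tcl are hypothetical names; one signal per line)

def make_tcl(signal_names):
    """Return the text of a gtkwave Tcl script for the given signals."""
    # Escape literal brackets so Tcl does not treat [7:0] as a command substitution
    escaped = [s.replace("[", "\\[").replace("]", "\\]") for s in signal_names]
    lines = [
        "set sig_list [list " + " ".join(escaped) + "]",
        "gtkwave::addSignalsFromList $sig_list",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    with open("signals.txt") as src:
        names = [line.strip() for line in src if line.strip()]
    with open("add_waves.tcl", "w") as dst:
        dst.write(make_tcl(names))
```

The resulting add_waves.tcl is then passed to gtkwave with --script= as shown above.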

Octave: class and function file names

When I create a class or function file in Octave, is there a way to give the file a name that is different from the class/function name? (Class test is in the file test.m, but I would prefer to name the file, for example, test.class.m.)
# test.m
classdef test
% ...
endclassdef
No. The way Octave is designed, classes and functions must be defined in files whose names exactly match the class or function name. This is so that, given a function or class name, Octave knows where to locate its definition on disk and can lazily load definitions as they are needed by other source code, instead of having to load the entire source tree up front.
The exception is "command-line" functions, where you can define global functions inside a script, and their definition appears when the script is executed. This isn't a good way to organize code for larger projects, though, because you'd need to arrange for execution of all the scripts at the appropriate time, before those function definitions are needed.
If you want to distinguish classes from functions, you have the option of sticking them in subdirectories named @<class>, for example, @test/test.m.
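For example, an @-directory for the test class from the question might be laid out like this (the display method is illustrative):

```octave
% Directory layout:
%   @test/test.m       -- the constructor, still named after the class
%   @test/display.m    -- a method of the class
%
% @test/test.m (old-style class constructor):
function obj = test ()
  obj = class (struct (), "test");
endfunction
```

Note that the @-folder only groups the class's files; the constructor file name itself must still match the class name.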

In fish, what are the pros and cons of a function call versus sourcing a "partial" file?

Say for example you have a few different fish functions for scaffolding different types of projects, but within each you'd like to have a reusable block for running some git commands.
So you could create a separate function and call it from the other functions, or you could take the separate function file, strip the function name -d "description" and end lines out of it, and then from the other functions just invoke it with source /path/to/partial.
So I'm wondering when a person should use one method instead of the other.
Using functions gives you a few advantages over sourcing a file:
You can tab-complete function names, and add custom completions to tab-complete their arguments
They can be modified using funced and funcsave
There's a standard place for them in ~/.config/fish/functions
They can be autoloaded.
The one advantage of sourcing a file is that it can introduce variables into the caller's scope. For example, set var value inside a sourced file will be scoped to the caller's function. (Of course you can make the variable explicitly global from a function or a sourced file: set -g var value).
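A minimal sketch of that scoping difference (paths and names are hypothetical):

```fish
# ~/scaffold/partial.fish -- meant to be sourced, not defined as a function:
#     set repo_url git@example.com:proj.git

function scaffold_app
    source ~/scaffold/partial.fish
    echo $repo_url    # works: the sourced set ran in this function's scope
end

function git_defaults
    set repo_url git@example.com:proj.git    # function-local by default
end
# After calling git_defaults, $repo_url is not visible to the caller
# unless it was set with -g.
```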

In Octave, how do you source a file if the file name is stored in a variable?

I feel like this must be very easy, but I can't find the answer anywhere. In Octave (and probably MATLAB, but I haven't verified), you can source the contents of a file by doing
source /path/to/filename
However, let's say I have the filename stored in a variable called file. If you do source file, Octave treats file as the literal path rather than the value stored in file. I have tried inserting eval in various places, but if that is the answer, I haven't found the correct invocation. I don't know much Octave; there is surely a trivial answer to this that I am overlooking.
Not sure about Octave, but try using function-call syntax:
source(fname)
That's what you do in MATLAB, at least.
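The underlying distinction is that command syntax treats its arguments as literal strings, while function-call syntax evaluates them. A quick sketch (the path is hypothetical):

```octave
fname = "/path/to/myscript.m";

source (fname)    % function syntax: fname is evaluated, then the file is sourced
% source fname   % command syntax: looks for a file literally named "fname"
```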

SSIS For Each File Enumerator multiple file filters

Is it possible in SQL 2008 (SSIS) to specify multiple file filters in the for each loop control?
Something like HH*.* and U*.*.
That or a cool workaround would be great.
I don't think that it is possible to do multiple file types. The only way I know of is to do *.* and conditional logic.
What about a foreach loop with regex support?
http://microsoft-ssis.blogspot.com/2012/04/custom-ssis-component-foreach-file.html
It is possible to specify multiple file extensions. All you need to do is enter SampleFileSpec*.* in the "Files:" section of the Foreach Loop Editor, and that will retrieve any file that starts with SampleFileSpec, regardless of the file type or other trailing characters. You can also build the FileSpec from an expression in the Foreach Loop Editor.
If you need to process multiple known file schemas then you can add multiple data flows in the Foreach Loop Container and set the enabled flag for the data flow based on a conditional statement.
The only advantage I can see to doing this is that you only have to iterate through a folder once with a For Each Loop Container. I would recommend having multiple Foreach Loop Containers with their own dedicated data flows. This would make it easier to maintain the code in the future.
Does this resolve your issue? What use-case are you trying to solve that wouldn't be handled better by separate Foreach Loop Containers?
Another option would be something that I prefer to call a "subpackage". In your case, the package would contain just one ForEach loop, configured to loop over the FileSpec as configured in a package variable. The package itself would receive the value for the FileSpec variable through Parent Package Configuration.
That way you have a generic package that does what you expect it to do, available to be called from any other package. To process files with two different filters, all that you need to do is call the package twice, each time with a different value for the variable.
If you're not in favor of using Parent Package configs, that can be avoided by calling the package using an Execute Process task that calls dtexec.exe, while the value for the parameter is passed through the Arguments property.
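As a sketch of that last variant, the Execute Process task would run something along these lines (the package path and variable name are assumptions):

```
dtexec.exe /File "C:\Packages\LoopOverFiles.dtsx" ^
    /Set \Package.Variables[User::FileSpec].Properties[Value];"HH*.*"
```

A second invocation with "U*.*" in place of "HH*.*" then covers the other filter.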