I am implementing a Tcl Filesystem object. Can someone explain what mount points are, why they are needed, and what will happen if my matchInDirectoryProc does not return any mount points (as the native filesystem implementation does not)?
Let's say there is foo/bar/vfs.myzip, where vfs.myzip is a container file for which I am implementing the filesystem. I am assuming that vfs.myzip is a mount point. Should my implementation return foo/bar/vfs.myzip if the type is TCL_GLOB_TYPE_MOUNT, the path is foo/bar/, and the pattern is "*"? What if the pattern is "*/*"?
A mount point is a prefix of a path that is the root of a particular virtual filesystem (the native filesystem is special-cased, IIRC). Everything in a VFS will appear below that mount point.
So, suppose /foo/bar/vfs.myzip is the mount point, and inside the VFS is a file abc.txt, a directory def, and another file def/ghi.html. In that case, once correctly mounted the following would exist:
/foo/bar/vfs.myzip/abc.txt
/foo/bar/vfs.myzip/def
/foo/bar/vfs.myzip/def/ghi.html
Now, the matchInDirectoryProc is used inside the globbing code. Its purpose is to return the list of directory entries that match a particular set of constraints in a particular (virtual) directory. It's wrapped inside the Tcl API function Tcl_FSMatchInDirectory, whose documentation notes that:
Note that the glob code implements recursive patterns internally, so this function will only ever be passed simple patterns, which can be matched using the logic of string match. To handle recursion, Tcl will call this function frequently asking only for directories to be returned. A special case of being called with a NULL pattern indicates that the path needs to be checked only for the correct type.
That is, don't worry about that */* pattern; you'll never see it.
I'm not entirely sure how the search for mounts works, but I think it is determining whether there is a mount handled by the particular VFS that matches a path. The main example of doing this that I can find online is the TclVFS package, which is rather odd in a few ways, and its relevant code is not easy to follow. For all that, one thing is relatively clear: it's asking about mounts within a particular directory, not recursively.
Thus, if your mount point is /foo/bar/vfs.myzip then when your code is called asking about mount points in /foo/bar it ought to return an entry for vfs.myzip. If that's the only mount point you maintain, that's the only thing you need to handle in that case.
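To make that concrete, here is a rough sketch in C of how the mount branch of a matchInDirectoryProc might look. This is only an illustration: MOUNT_POINT, MOUNT_TAIL and MountIsInDirectory are hypothetical stand-ins for however your filesystem records its single mount; only the Tcl API calls (Tcl_GetString, Tcl_StringMatch, Tcl_ListObjAppendElement, Tcl_NewStringObj) are real.
#include <tcl.h>

/* Sketch only: assumes one mount, e.g. MOUNT_POINT = "/foo/bar/vfs.myzip"
 * and MOUNT_TAIL = "vfs.myzip"; MountIsInDirectory() is a hypothetical
 * helper that checks whether the mount sits directly inside 'path'. */
static int
MyMatchInDirectory(Tcl_Interp *interp, Tcl_Obj *resultPtr, Tcl_Obj *pathPtr,
                   const char *pattern, Tcl_GlobTypeData *types)
{
    if (types != NULL && (types->type & TCL_GLOB_TYPE_MOUNT)) {
        /* The glob code asks for mount points directly inside pathPtr,
         * never recursively, so a simple parent-directory check suffices.
         * A NULL pattern means "just check the type". */
        const char *path = Tcl_GetString(pathPtr);
        if (MountIsInDirectory(MOUNT_POINT, path)
                && (pattern == NULL || Tcl_StringMatch(MOUNT_TAIL, pattern))) {
            Tcl_ListObjAppendElement(interp, resultPtr,
                    Tcl_NewStringObj(MOUNT_POINT, -1));
        }
        return TCL_OK;
    }
    /* ... ordinary matching of files/directories inside the VFS here ... */
    return TCL_OK;
}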
Assuming that I'm correct anyway. I don't know the virtual filesystem layer well, so this is based on reading code and documentation, not real experience…
I am trying to exclude as many lines of code (LOC) as possible from analysis, to avoid having to extend my plan.
I set up the exclusions as follows:
I tried multiple ways of using wildcards (single and double, with no file extensions specified), and tested those patterns on https://toools.cloud/miscellaneous/glob-tester to be sure.
But when I run the analysis on my main branch, files like the following still get scanned:
jest.config.ts
XXX.stories.tsx
tailwind.preset.js
The folder exclusions all work, but my filename-specific exclusions don't.
What is wrong with my setup?
I know that you can make a function file in Octave, in which the file name is the same as the function it defines, containing one function. But I would like to define multiple functions in one file. Is there any way to do this, or do I need a separate file for each function?
In this answer I will assume that your main objective is a tidy workspace rather than explicitly a one-file requirement.
Let's get the one-file approach out of the way first. You can create a script m-file (not a function m-file) and define a number of command-line functions there. The Octave manual has a section on this. Here's an example:
% in file loadfunctionDefinitions.m
1; % statement with side-effect, to mark this file as a script. See docs.
function Out = Return1(); Out = 1; end
function Out = Return2(); Out = 2; end
% ... etc
% in your main octave session / main script:
X = Return1() + Return2();
However, this is generally not recommended, especially if you need MATLAB-compatible code: MATLAB introduced 'script-local functions' much later than Octave, and did so in a manner incompatible with the existing Octave implementation. MATLAB expects script-local functions to be defined at the end of the script; Octave expects them to be defined before first use. If you use normal function files, however, everything is fine.
While I appreciate the "I don't like a folder full of functions" sentiment, the one-function-per-file approach actually has a lot of benefits (especially if you program from the terminal, where it makes a wealth of tools twice as useful). E.g. you can easily use grep to find which functions make use of a particular variable, or compare changes to individual functions between commits, etc.
Typically the problem is more one of having such function files littering the directory when other important files (e.g. data) are present: so many files in one place makes what you want hard to spot, and feels untidy. But rather than a single file with command-line definitions, there are a number of other approaches you can take, which are probably also better from a programmatic point of view, e.g.:
Simply create a 'helper functions' folder, and add it to your path.
Use subfunctions in your main functions whenever this is appropriate, to minimize the number of unnecessary files
Use a private functions folder
Use a 'package directory', i.e. a folder starting with the character '+', which creates a namespace for the functions contained inside. E.g. ~/+MyFunctions/myfun.m would be accessed from ~/ via MyFunctions.myfun(), without having to add +MyFunctions to the path (in fact you're not supposed to).
Create a proper class directory, and make your functions methods of that class
The last option may also achieve a one-file solution, if you use a newer-style classdef based class, which allows you to define methods in the same file as the class definition. Note however that octave-support for classdef-defined classes is still somewhat limited.
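For illustration, a minimal sketch of that last, one-file approach (all names hypothetical, reusing Return1/Return2 from above) might look like this:
% in file MyFunctions.m -- one file containing the class and its methods
classdef MyFunctions
    methods (Static)
        function Out = Return1()
            Out = 1;
        end
        function Out = Return2()
            Out = 2;
        end
    end
end

% in your main session, with MyFunctions.m on the path:
X = MyFunctions.Return1() + MyFunctions.Return2();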
I have two libraries libA and libB.
libA contains a file Action.h
libB contains a file action.h
I want to generate doxygen documentation in the same output directory for both libraries. This directory is to be used in Windows, for which action.html and Action.html is unfortunately considered to be the same file. To prevent this clash, I wish to render the generated files unique by prepending their path names to them.
Therefore, I set FULL_PATH_NAMES to YES.
I expect to see something like libA_Action.html and libB_action.html when I generate the documentation, but I don't! I still see Action.html and action.html. It's as if the FULL_PATH_NAMES parameter does nothing at all. Do I also need to set some other parameter in the Doxyfile to make FULL_PATH_NAMES work correctly?
You're probably running doxygen twice, once for each library. If that is the case, doxygen isn't aware that it might clash with the output of another run, so when it finds an existing file, it assumes it is left over from a previous run and overwrites it.
Setting FULL_PATH_NAMES doesn't help: doxygen has no idea that multiple libraries exist, so as far as doxygen is concerned, the prefix is identical for all files, and even when you force it on, it adds nothing (that's arguably a bug).
The solution to your problem is setting both libraries as inputs to the same doxygen project.
You can do it by setting INPUT to multiple folders in the configuration file:
INPUT = ...bla\Lib1 \
        ...bla\Lib2
My function is definitely working; it's tested and was at one point being recognized.
Here's the function prototype:
function [X Y] = calculateEllipse(x, y, a, b, angle)
%# Code here
end
Here's the call I'm making from the Matlab terminal:
calculateEllipse (612, 391, 107, 60, 331)
Here's the error popping out at me:
??? Undefined function or method 'calculateEllipse' for input arguments of
type 'double'.
Now, I am 100% positive I am in the same directory as the function. I even used
addpath('C:\path-to-function')
to make sure. It's just not working, and I'm baffled.
Any help is appreciated.
To summarise other posts, here is a workflow for determining the cause of the problem.
You mistyped the name of the function. Check the function definition and make sure it really is called calculateEllipse.
You saved the function to a file named something other than the function name. Check the filename of the function and make sure it matches the function name.
The folder containing the function isn't on the MATLAB path. There are several ways to check this. Type path to see the current path, or which calculateEllipse to find the location that MATLAB is using for that file. (If there is a problem, that last command will display 'calculateEllipse' not found.) Note that addpath does not permanently update the path, so when you close MATLAB, the path will be reset; use savepath to make the change permanent.
The folder containing the function is a subdirectory of matlabroot. These folders are reserved for fully fledged toolboxes; bad things happen when you store your code here. See Bob's answer for more information.
Other useful things to check are:
Can you call other functions that are stored in the same folder?
If you save the function in a different folder, will it run then?
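A quick sanity-check sequence at the prompt, using the path from the question, might be:
path                          % show the current search path
which calculateEllipse        % where (if anywhere) MATLAB finds the function
addpath('C:\path-to-function')
savepath                      % make the addpath change permanent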
Adding to what Jeff said, another possibility is that you placed the function somewhere inside your MATLAB installation. By default MATLAB doesn't re-search its own file structure for new files; it assumes that its internal file structure remains unchanged. Make sure that you're saving the file (which, as Jeff pointed out, must be named calculateEllipse.m) somewhere outside of your MATLAB installation.
See https://www.mathworks.com/help/matlab/matlab_env/toolbox-path-caching-in-the-matlab-program.html, or go to the MathWorks web site and search for "path cache", for more information.
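If the file really has to live under matlabroot, that page describes the toolbox path cache; you can force MATLAB to rebuild it from the prompt:
rehash toolboxcache    % re-scan files under matlabroot and rebuild the cache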
The key to this problem is the message %Has no license available. It implies that a function in the same directory as the function you are trying to use has the same name as a function in a toolbox you do not own. In that situation MATLAB by default disables the whole directory, not just the function whose name clashes with the unlicensed toolbox. Here is an example:
files in directory:
myfunction.m
scoobydoo.m
blackman.m
If I do not own the "Signal processing toolbox," then blackman.m will disable the whole directory.
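A quick way to confirm this kind of clash is to ask MATLAB for every definition of the suspect name:
which blackman -all    % lists every blackman on the path, including the unlicensed toolbox copy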
I can think of a couple of reasons this could happen.
First, as Jeff said, you could have named the file 'calcEllipse.m' instead of 'calculateEllipse.m'. In that case you need to rename the file (or the function) so that the two match.
Second, you have not added the correct path. To my knowledge there is no other reason for this error. Double-check that you have added the path to the folder containing the saved m-file. An easy way to check: type 'calculateEll' and press Tab; if autocomplete doesn't offer the function, it is not on the path.
Hope it is one of those things you can quickly fix!
I need to distribute some sort of static configuration through my application. What is the best practice for doing that?
I see three options:
Call application:get_env directly whenever a module needs a configuration value.
Plus: simpler than the other options.
Minus: how do you test such modules without bringing the whole application thing up?
Minus: how do you start a certain module with a different configuration (if required)?
Pass the configuration (retrieved from application:get_env) to application modules during start-up.
Plus: modules are easier to test, you can start them with different configuration.
Minus: a lot of boilerplate code. Changing the configuration format requires fixing several places.
Hold the configuration inside a separate configuration process.
Plus: a more-or-less type-safe approach. It is easier to track where a certain parameter is used and to change those places.
Minus: you need to bring up the configuration process before running the modules.
Minus: how do you start a certain module with a different configuration (if required)?
Another approach is to transform your configuration data into an Erlang source module that makes the configuration data available through exports. Then you can change the configuration at any time in a running system by simply loading a new version of the configuration module.
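As a hypothetical sketch (module and parameter names invented), the generated module might be no more than:
%% app_config.erl -- regenerated from the configuration data
-module(app_config).
-export([max_widgets/0, db_host/0]).

max_widgets() -> 1000.
db_host() -> "localhost".
Regenerate the file with new values, recompile, and load it (e.g. with code:load_file/1); callers using fully qualified calls like app_config:max_widgets() pick up the new values immediately.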
For static configuration in my own projects, I like option (1). I'll show you the steps I take to access a configuration parameter called max_widgets in an application called factory.
First, we'll create a module called factory_env which contains the following:
-module(factory_env).
-export([get_env/2, set_env/2]).

-define(APPLICATION, factory).

get_env(Key, Default) ->
    case application:get_env(?APPLICATION, Key) of
        {ok, Value} -> Value;
        undefined -> Default
    end.

set_env(Key, Value) ->
    application:set_env(?APPLICATION, Key, Value).
Next, in a module that needs to read max_widgets we'll define a macro like the following:
-define(MAX_WIDGETS, factory_env:get_env(max_widgets, 1000)).
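A hypothetical caller then reads the limit through the macro, e.g.:
can_add_widget(Count) -> Count < ?MAX_WIDGETS.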
There are a few nice things about this approach:
Because we used application:set_env/3 and application:get_env/2, we don't actually need to start the factory application in order to have our tests pass.
max_widgets gets a default value, so our code will still work even if the parameter isn't defined.
A second module could use a different default value for max_widgets.
Finally, when we are ready to deploy, we'll put a sys.config file in our priv directory and load it with -config priv/sys.config during startup. This allows us to change configuration parameters on a per-node basis if desired. This cleanly separates configuration from code - e.g. we don't need to make another commit in order to change max_widgets to 500.
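For this example, priv/sys.config might contain just:
%% priv/sys.config
[{factory, [{max_widgets, 500}]}].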
You could use a process (a gen_server maybe?) to store your configuration parameters in its state. It should expose a get/set interface. If a value hasn't been explicitly set, it should retrieve a default value.
-export([get/1, set/2]).
...
get(Param) ->
    gen_server:call(?MODULE, {get, Param}).
...
%% handle_call/3 must return {reply, Reply, State}
handle_call({get, Param}, _From, State) ->
    Reply = case lookup(Param, State#state.params) of
                undefined -> application:get_env(...);
                Value -> {ok, Value}
            end,
    {reply, Reply, State}.
...
You could then easily mock up this module in your tests. It will also be easy to update the process with new configuration at run-time.
You could use pattern matching and tuples to associate different configuration parameters to different modules:
set({ModuleName, ParamName}, Value) ->
...
get({ModuleName, ParamName}) ->
...
Put the process under a supervision tree, so it's started before all the other processes which are going to need the configuration.
Oh, I'm glad nobody suggested parametrized modules so far :)
I'd do option 1 for static configuration. You can always test by setting options via application:set_env/3,4. The reason you want to do this is that your tests of the application will need to run the whole application anyway at some time. And the ability to set test-specific configuration at that point is really neat.
The application controller runs by default, so it is not a problem that you need to go the application way (you have to do that anyway!)
Finally, if a process needs specific configuration, say so in the configuration data! You can store any Erlang term; in particular, you can store a term that lets you override configuration parameters for a specific node.
For dynamic configuration, you are probably better off using a gen_server, or the newer gproc features that let you store such dynamic configuration.
I've also seen people use a .hrl (Erlang header file) in which all the configuration is defined, and include it at the start of any file that needs configuration.
It makes for very concise configuration lookups, and you get configuration of arbitrary complexity.
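As a hypothetical sketch (all names invented, including the db module):
%% in config.hrl
-define(MAX_CONNECTIONS, 10).
-define(DB_OPTS, [{host, "localhost"}, {pool_size, 5}]).

%% in any module that needs the configuration
-include("config.hrl").

connect() -> db:connect(?DB_OPTS).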
I believe you can also reload the configuration at runtime by performing hot code reloading of the module. The disadvantage is that if you use the configuration in several modules and reload only one of them, only that one module will get its configuration updated.
However, I haven't actually checked if it works like that, and I couldn't find definitive documentation on how .hrl and hot code reloading interact, so make sure to double-check this before you actually use it.