I have a function that takes optional arguments as name/value pairs.
function example(varargin)
% Lots of set up stuff
vargs = varargin;
nargs = length(vargs);
names = vargs(1:2:nargs);
values = vargs(2:2:nargs);
validnames = {'foo', 'bar', 'baz'};
for name = names
validatestring(name{:}, validnames);
end
% Do something ...
foo = strmatch('foo', names);
disp(values(foo))
end
example('foo', 1:10, 'bar', 'qwerty')
It seems that there is a lot of effort involved in extracting the appropriate values (and it still isn't particularly robust against badly specified inputs). Is there a better way of handling these name/value pairs? Are there any helper functions that come with MATLAB to assist?
I prefer using structures for my options. This gives you an easy way to store the options and an easy way to define them. Also, the whole thing becomes rather compact.
function example(varargin)
%# define defaults at the beginning of the code so that you do not need to
%# scroll way down in case you want to change something or if the help is
%# incomplete
options = struct('firstparameter',1,'secondparameter',magic(3));
%# read the acceptable names
optionNames = fieldnames(options);
%# count arguments
nArgs = length(varargin);
if round(nArgs/2)~=nArgs/2
error('EXAMPLE needs propertyName/propertyValue pairs')
end
for pair = reshape(varargin,2,[]) %# pair is {propName;propValue}
inpName = lower(pair{1}); %# make case insensitive
if any(strcmp(inpName,optionNames))
%# overwrite options. If you want you can test for the right class here
%# Also, if you find out that there is an option you keep getting wrong,
%# you can use "if strcmp(inpName,'problemOption'),testMore,end"-statements
options.(inpName) = pair{2};
else
error('%s is not a recognized parameter name',inpName)
end
end
inputParser helps with this. See Parse Function Inputs in the documentation for more information.
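For example, a minimal sketch for the question's example function could look like this (addParameter is used here; very old releases spell it addParamValue):
function example(varargin)
    p = inputParser;
    p.addParameter('foo', []);   % parameter name and default value
    p.addParameter('bar', []);
    p.addParameter('baz', []);
    p.parse(varargin{:});        % unrecognized names raise an error by default
    opts = p.Results;            % struct with fields foo, bar, baz
    disp(opts.foo)
end
Calling example('foo', 1:10, 'bar', 'qwerty') then displays the vector 1:10.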
I could yack for hours about this, but I still don't have a good gestalt view of general Matlab signature handling. But here are a couple of pieces of advice.
First, take a laissez-faire approach to validating input types. Trust the caller. If you really want strong type testing, you want a static language like Java. Try to enforce type safety everywhere in Matlab, and you'll end up with a good part of your LOC and execution time devoted to run-time type tests and coercion in userland, which trades away a lot of the power and development speed of Matlab. I learned this the hard way.
For API signatures (functions intended to be called from other functions instead of from the command line), consider using a single Args argument instead of varargin. Then it can be passed around between multiple functions without having to convert it to and from a comma-separated list for varargin signatures. Structs, like Jonas says, are very convenient. There's also a nice isomorphism between structs and n-by-2 {name,value;...} cells, and you could set up a couple of functions to convert between them inside your functions, using whichever form each wants internally.
function example(args)
%EXAMPLE
%
% Where args is a struct or {name,val;...} cell array
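The struct/cell conversion mentioned above can be sketched in two lines (s and c here are just placeholder names):
c = [fieldnames(s), struct2cell(s)];   % scalar struct -> n-by-2 {name,value;...} cell
s2 = cell2struct(c(:,2), c(:,1), 1);   % n-by-2 cell -> scalar struct
That's essentially what the structify helper further down does for the cell case.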
Whether you use inputParser or roll your own name/val parser like these other fine examples, package it up in a separate standard function that you'll call from the top of your functions that have name/val signatures. Have it accept the default value list in a data structure that's convenient to write out, and your arg-parsing calls will look sort of like function signature declarations, which helps readability and avoids copy-and-paste boilerplate code.
Here's what the parsing calls could look like.
function out = my_example_function(varargin)
%MY_EXAMPLE_FUNCTION Example function
% No type handling
args = parsemyargs(varargin, {
'Stations' {'ORD','SFO','LGA'}
'Reading' 'Min Temp'
'FromDate' '1/1/2000'
'ToDate' today
'Units' 'deg. C'
});
fprintf('\nArgs:\n');
disp(args);
% With type handling
typed_args = parsemyargs(varargin, {
'Stations' {'ORD','SFO','LGA'} 'cellstr'
'Reading' 'Min Temp' []
'FromDate' '1/1/2000' 'datenum'
'ToDate' today 'datenum'
'Units' 'deg. C' []
});
fprintf('\nWith type handling:\n');
disp(typed_args);
% And now in your function body, you just reference stuff like
% args.Stations
% args.FromDate
And here's a function to implement the name/val parsing that way. You could hollow it out and replace it with inputParser, your own type conventions, etc. I think the n-by-2 cell convention makes for nicely readable source code; consider keeping that. Structs are typically more convenient to deal with in the receiving code, but the n-by-2 cells are more convenient to construct using expressions and literals. (Structs require the ",..." continuation at each line, and guarding cell values from expanding to nonscalar structs.)
function out = parsemyargs(args, defaults)
%PARSEMYARGS Arg parser helper
%
% out = parsemyargs(Args, Defaults)
%
% Parses name/value argument pairs.
%
% Args is what you pass your varargin in to. It may be a struct or a
% name/value cell ({name,val;...} rows or a {name,val,...} list).
%
% Defaults is a list of argument names, default values, and optionally
% argument types for the inputs. It is an n-by-1, n-by-2 or n-by-3 cell in one
% of these forms:
% { Name; ... }
% { Name, DefaultValue; ... }
% { Name, DefaultValue, Type; ... }
% You may also pass a struct, which is converted to the first form, or a
% cell row vector containing name/value pairs as
% { Name,DefaultValue, Name,DefaultValue,... }
% Row vectors are only supported because they're unambiguous when the 2-d form
% has at most 3 columns. If more columns were possible, I think you'd have to
% require the 2-d form, because a 4-element vector would be ambiguous as to
% whether it was one record, or two records with two columns omitted.
%
% Returns struct.
%
% This is slow - don't use name/value signatures for functions that will be
% called in tight loops.
args = structify(args);
defaults = parse_defaults(defaults);
% You could normalize case if you want to. I recommend you don't; it's a runtime cost
% and just one more potential source of inconsistency.
%[args,defaults] = normalize_case_somehow(args, defaults);
out = merge_args(args, defaults);
%%
function out = parse_defaults(x)
%PARSE_DEFAULTS Parse the default arg spec structure
%
% Returns n-by-3 cellrec in form {Name,DefaultValue,Type;...}.
if isstruct(x)
if ~isscalar(x)
error('struct defaults must be scalar');
end
x = [fieldnames(x) struct2cell(x)];
end
if ~iscell(x)
error('invalid defaults');
end
% Allow {name,val, name,val,...} row vectors
% Does not work for the general case of >3 columns in the 2-d form!
if size(x,1) == 1 && size(x,2) > 3
x = reshape(x, 2, [])'; % transpose into n-by-2 {name,val;...}
end
% Fill in omitted columns
if size(x,2) < 2
x(:,2) = {[]}; % Make everything default to value []
end
if size(x,2) < 3
x(:,3) = {[]}; % No default type conversion
end
out = x;
%%
function out = structify(x)
%STRUCTIFY Convert a struct or name/value list or record list to struct
if isempty(x)
out = struct;
elseif iscell(x)
% Cells can be {name,val;...} or {name,val,...}
if (size(x,1) == 1) && size(x,2) > 2
% Reshape {name,val, name,val, ... } list to {name,val; ... }
x = reshape(x, 2, [])';
end
if size(x,2) ~= 2
error('Invalid args: cells must be n-by-2 {name,val;...} or vector {name,val,...} list');
end
% Convert {name,val, name,val, ...} list to struct
if ~iscellstr(x(:,1))
error('Invalid names in name/val argument list');
end
% Little trick for building structs from name/vals
% This protects cellstr arguments from expanding into nonscalar structs
x(:,2) = num2cell(x(:,2));
x = x';
x = x(:);
out = struct(x{:});
elseif isstruct(x)
if ~isscalar(x)
error('struct args must be scalar');
end
out = x;
end
%%
function out = merge_args(args, defaults)
out = structify(defaults(:,[1 2]));
% Apply user arguments
% You could normalize case if you wanted, but I avoid it because it's a
% runtime cost and one more chance for inconsistency.
names = fieldnames(args);
for i = 1:numel(names)
out.(names{i}) = args.(names{i});
end
% Check and convert types
for i = 1:size(defaults,1)
[name,defaultVal,type] = defaults{i,:};
if ~isempty(type)
out.(name) = needa(type, out.(name), name);
end
end
%%
function out = needa(type, value, name)
%NEEDA Check that a value is of a given type, and convert if needed
%
% out = needa(type, value)
% HACK to support common 'pseudotypes' that aren't real Matlab types
switch type
case 'cellstr'
isThatType = iscellstr(value);
case 'datenum'
isThatType = isnumeric(value);
otherwise
isThatType = isa(value, type);
end
if isThatType
out = value;
else
% Here you can auto-convert if you're feeling brave. Assumes that the
% conversion constructor form of all type names works.
% Unfortunately this ends up with bad results if you try converting
% between string and number (you get Unicode encoding/decoding). Use
% at your discretion.
% If you don't want to try autoconverting, just throw an error instead,
% with:
% error('Argument %s must be a %s; got a %s', name, type, class(value));
try
out = feval(type, value);
catch err
error('Failed converting argument %s from %s to %s: %s',...
name, class(value), type, err.message);
end
end
It is so unfortunate that strings and datenums are not first-class types in Matlab.
MathWorks has revived this beaten horse, but with very useful functionality that answers this need directly. It's called Function Argument Validation (a phrase one can and should search for in the documentation) and it ships with release R2019b and later. MathWorks also created a video about it. Validation works much like the "tricks" people have come up with over the years. Here is an example:
function ret = example( inputDir, proj, options )
%EXAMPLE An example.
% Do it like this.
% See THEOTHEREXAMPLE.
arguments
inputDir (1, :) char
proj (1, 1) projector
options.foo char {mustBeMember(options.foo, {'bar' 'baz'})} = 'bar'
options.Angle (1, 1) {mustBeNumeric} = 45
options.Plot (1, 1) logical = false
end
% Code always follows 'arguments' block.
ret = [];
switch options.foo
case 'bar'
ret = sind(options.Angle);
case 'baz'
ret = cosd(options.Angle);
end
if options.Plot
plot(proj.x, proj.y)
end
end
Here's the unpacking:
The arguments block must come before any code (but after the help block) and must follow the positional order defined in the function definition; I believe every argument requires a mention. Required arguments go first, followed by optional arguments, followed by name-value pairs. MathWorks also recommends no longer using the varargin keyword, but nargin and nargout are still useful.
Class requirements can be custom classes, such as projector, in this case.
Required arguments may not have a default value (i.e. they are identified as required precisely because they lack a default value).
Optional arguments must have a default value (i.e. they are identified as optional precisely because they have one).
Default values must be able to pass the same argument validation. In other words, a default value of zeros(3) won't work as a default value for an argument that's supposed to be a character vector.
Name-value pairs are stored in a single argument that is internally converted to a struct, which I'm calling options here (hinting to us that we can use structs to pass keyword arguments, like kwargs in Python).
Very nicely, name-value arguments will now show up as argument hints when you hit tab in a function call. (If completion hints interest you, I encourage you to also look up MATLAB's functionSignatures.json functionality).
So in the example, inputDir is a required argument because it's given no default value. It also must be a 1xN character vector. As if to contradict that statement, note that MATLAB will try to convert the supplied argument to see if the converted argument passes. If you pass 97:122 as inputDir, for example, it will pass and inputDir == char(97:122) (i.e. inputDir == 'abcdefghijklmnopqrstuvwxyz'). Conversely, zeros(3) won't work on account of its not being a vector. And forget about making strings fail when you specify characters, making doubles fail when you demand uint8, etc. Those will be converted. You'd need to dig deeper to circumvent this "flexibility."
Moving on, 'foo' specifies a name-value pair whose value may be only 'bar' or 'baz'.
MATLAB has a number of mustBe... validation functions (start typing mustBe and hit tab to see what's available), and it's easy enough to create your own. If you create your own, the validation function must give an error if the input doesn't match, unlike, say, uigetdir, which returns 0 if the user cancels the dialog. Personally, I follow MATLAB's convention and call my validation functions mustBe..., so I have functions like mustBeNatural for natural numbers, and mustBeFile to ensure I passed a file that actually exists.
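As a sketch, such custom validators could look something like this (these bodies are my guess; the real mustBeNatural and mustBeFile are the author's own functions):
function mustBeNatural(x)
    if ~isnumeric(x) || ~isreal(x) || any(x(:) < 1) || any(mod(x(:), 1) ~= 0)
        error('Value must contain only natural numbers.');
    end
end
function mustBeFile(f)
    if ~isfile(f)   % isfile needs R2017b+; exist(f, 'file') == 2 works on older releases
        error('"%s" is not an existing file.', f);
    end
end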
'Angle' specifies a name-value pair whose value must be a numeric scalar, so, for example, example(pwd, proj, 'foo', 'baz', 'Angle', [30 70]) won't work since you passed a vector for the Angle argument.
You get the idea. There is a lot of flexibility with the arguments block -- too much and too little, I think -- but for simple functions, it's fast and easy. You still might rely on one or more of inputParser, validateattributes, assert, and so on for addressing greater validation complexity, but I always try to stuff things into an arguments block, first. If it's becoming unsightly, maybe I'll do an arguments block and some assertions, etc.
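For reference, a standalone validateattributes check of the kind mentioned above is a one-liner (angle is just a stand-in variable here):
% errors with a descriptive message unless angle is a positive, finite, numeric scalar
validateattributes(angle, {'numeric'}, {'scalar', 'finite', 'positive'});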
Personally I use a custom function derived from a private method used by many Statistics Toolbox functions (like kmeans, pca, svmtrain, ttest2, ...)
Being an internal utility function, it changed and was renamed many times over the releases. Depending on your MATLAB version, try looking for one of the following files:
%# old versions
which -all statgetargs
which -all internal.stats.getargs
which -all internal.stats.parseArgs
%# current one, as of R2014a
which -all statslib.internal.parseArgs
As with any undocumented function, there are no guarantees and it could be removed from MATLAB in subsequent releases without any notice... Anyway, I believe someone posted an old version of it as getargs on the File Exchange.
The function processes parameters as name/value pairs, using a set of valid parameter names along with their default values. It returns the parsed parameters as separate output variables. By default, unrecognized name/value pairs raise an error, but we could also silently capture them in an extra output. Here is the function description:
$MATLABROOT\toolbox\stats\stats\+internal\+stats\parseArgs.m
function varargout = parseArgs(pnames, dflts, varargin)
%
% [A,B,...] = parseArgs(PNAMES, DFLTS, 'NAME1',VAL1, 'NAME2',VAL2, ...)
% PNAMES : cell array of N valid parameter names.
% DFLTS : cell array of N default values for these parameters.
% varargin : Remaining arguments as name/value pairs to be parsed.
% [A,B,...]: N outputs assigned in the same order as the names in PNAMES.
%
% [A,B,...,SETFLAG] = parseArgs(...)
% SETFLAG : structure of N fields for each parameter, indicates whether
% the value was parsed from input, or taken from the defaults.
%
% [A,B,...,SETFLAG,EXTRA] = parseArgs(...)
% EXTRA : cell array containing name/value parameter pairs not
% specified in PNAMES.
Example:
function my_plot(x, varargin)
%# valid parameters, and their default values
pnames = {'Color', 'LineWidth', 'LineStyle', 'Title'};
dflts = { 'r', 2, '--', []};
%# parse function arguments
[clr,lw,ls,txt] = internal.stats.parseArgs(pnames, dflts, varargin{:});
%# use the processed values: clr, lw, ls, txt
%# corresponding to the specified parameters
%# ...
end
Now this example function could be called in any of the following ways:
>> my_plot(data) %# use the defaults
>> my_plot(data, 'linestyle','-', 'Color','b') %# any order, case insensitive
>> my_plot(data, 'Col',[0.5 0.5 0.5]) %# partial name match
Here are some invalid calls and the errors thrown:
%# unrecognized parameter
>> my_plot(x, 'width',0)
Error using [...]
Invalid parameter name: width.
%# bad parameter
>> my_plot(x, 1,2)
Error using [...]
Parameter name must be text.
%# wrong number of arguments
>> my_plot(x, 'invalid')
Error using [...]
Wrong number of arguments.
%# ambiguous partial match
>> my_plot(x, 'line','-')
Error using [...]
Ambiguous parameter name: line.
inputParser:
As others have mentioned, the officially recommended approach to parsing function inputs is to use the inputParser class. It supports various schemes such as specifying required inputs, optional positional arguments, and name/value parameters. It also allows you to perform validation on the inputs (such as checking the class/type and the size/shape of the arguments).
Read Loren's informative post on this issue. Don't forget to read the comments section... You will see that there are quite a few different approaches to this topic. They all work, so selecting a preferred method is really a matter of personal taste and maintainability.
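As a rough sketch of those schemes (the function and parameter names below are made up for illustration):
function my_func(x, varargin)
    p = inputParser;
    p.addRequired('x', @isnumeric);                           % required input
    p.addOptional('n', 1, @(v) isnumeric(v) && isscalar(v));  % optional positional argument
    p.addParameter('Color', 'r', @ischar);                    % name/value parameter
    p.parse(x, varargin{:});
    r = p.Results;   % r.x, r.n and r.Color now hold the validated values
end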
I'm a bigger fan of home-grown boiler plate code like this:
function TestExample(req1, req2, varargin)
rm_inds = []; %indices of the name/value pairs to strip out later
for i = 1:2:length(varargin)
if strcmpi(varargin{i}, 'alphabet')
ALPHA = varargin{i+1};
elseif strcmpi(varargin{i}, 'cutoff')
CUTOFF = varargin{i+1};
%we need to remove these so seqlogo doesn't get confused
rm_inds = [rm_inds i, i+1]; %#ok<*AGROW>
elseif strcmpi(varargin{i}, 'colors')
colors = varargin{i+1};
rm_inds = [rm_inds i, i+1];
elseif strcmpi(varargin{i}, 'axes_handle')
handle = varargin{i+1};
rm_inds = [rm_inds i, i+1];
elseif strcmpi(varargin{i}, 'top-n')
TOPN = varargin{i+1};
rm_inds = [rm_inds i, i+1];
elseif strcmpi(varargin{i}, 'inds')
npos = varargin{i+1};
rm_inds = [rm_inds i, i+1];
elseif strcmpi(varargin{i}, 'letterfile')
LETTERFILE = varargin{i+1};
rm_inds = [rm_inds i, i+1];
elseif strcmpi(varargin{i}, 'letterstruct')
lo = varargin{i+1};
rm_inds = [rm_inds i, i+1];
end
end
This way I can simulate the 'option', value pair that's nearly identical to how most Matlab functions take their arguments.
Hope that helps,
Will
Here's the solution I'm trialling, based upon Jonas' idea.
function argStruct = NameValuePairToStruct(defaults, varargin)
%NAMEVALUEPAIRTOSTRUCT Converts name/value pairs to a struct.
%
% ARGSTRUCT = NAMEVALUEPAIRTOSTRUCT(DEFAULTS, VARARGIN) converts
% name/value pairs to a struct, with defaults. The function expects an
% even number of arguments to VARARGIN, alternating NAME then VALUE.
% (Each NAME should be a valid variable name.)
%
% Examples:
%
% No defaults
% NameValuePairToStruct(struct, ...
% 'foo', 123, ...
% 'bar', 'qwerty', ...
% 'baz', magic(3))
%
% With defaults
% NameValuePairToStruct( ...
% struct('bar', 'dvorak', 'quux', eye(3)), ...
% 'foo', 123, ...
% 'bar', 'qwerty', ...
% 'baz', magic(3))
%
% See also: inputParser
nArgs = length(varargin);
if rem(nArgs, 2) ~= 0
error('NameValuePairToStruct:NotNameValuePairs', ...
'Inputs were not name/value pairs');
end
argStruct = defaults;
for i = 1:2:nArgs
name = varargin{i};
if ~isvarname(name)
error('NameValuePairToStruct:InvalidName', ...
'A variable name was not valid');
end
argStruct = setfield(argStruct, name, varargin{i + 1}); %#ok<SFLD>
end
end
Inspired by Jonas' answer, but more compact:
function example(varargin)
defaults = struct('A',1, 'B',magic(3)); %define default values
params = struct(varargin{:});
for f = fieldnames(defaults)',
if ~isfield(params, f{1}),
params.(f{1}) = defaults.(f{1});
end
end
%now just access them as params.A, params.B
There is a nifty function called parsepvpairs that takes care of this nicely, provided you have access to MATLAB's finance toolbox. It takes three arguments, expected field names, default field values, and the actual arguments received.
For example, here's a function that creates an HTML figure in MATLAB and can take the optional field value pairs named 'url', 'html', and 'title'.
function htmldlg(varargin)
names = {'url','html','title'};
defaults = {[],[],'Padaco Help'};
[url, html,titleStr] = parsepvpairs(names,defaults,varargin{:});
%... code to create figure using the parsed input values
end
I have been using process_options.m for ages. It is stable, easy to use, and has been included in various MATLAB frameworks. I don't know anything about performance, though - there might be faster implementations.
The feature I like most about process_options is the unused_args return value, which can be used to split the input args into groups of args for, e.g., subprocesses.
And you can easily define default values.
Most importantly: using process_options.m usually results in readable and maintainable option definitions.
Example code:
function y = func(x, y, varargin)
[u, v] = process_options(varargin, ...
'u', 0, ...
'v', 1);
If you are using MATLAB 2019b or later, the best way to deal with name-value pairs in your function is to use "Declare function argument validation".
function result = myFunction(NameValueArgs)
arguments
NameValueArgs.Name1
NameValueArgs.Name2
end
% Function code
result = NameValueArgs.Name1 * NameValueArgs.Name2;
end
see: https://www.mathworks.com/help/matlab/ref/arguments.html
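For illustration, a caller then passes the names at the call site:
result = myFunction('Name1', 3, 'Name2', 6);   % result is 18
From R2021a onward, the equivalent myFunction(Name1=3, Name2=6) syntax is also accepted.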
function argtest(varargin)
a = 1;
for ii=1:length(varargin)/2
[~] = evalc([varargin{2*ii-1} '=''' num2str(varargin{2*ii}) '''']);
end;
disp(a);
who
Of course, this does not check for correct assignments, but it's simple and any useless variable will be ignored anyway. It also only works for numerics, strings and arrays, but not for matrices, cells or structures.
I ended up writing this today, and then found these mentions.
Mine uses structs and struct 'overlays' for options. It essentially mirrors the functionality of setstructfields(), except that new parameters cannot be added. It also has an option for recursing, whereas setstructfields() recurses automatically.
It can take in a cell array of paired values by calling struct(args{:}).
% Overlay default fields with input fields
% Good for option management
% Arguments
% $opts - Default options
% $optsIn - Input options
% Can be struct(), cell of {name, value, ...}, or empty []
% $recurseStructs - Applies optOverlay to any existing structs, given new
% value is a struct too and both are 1x1 structs
% Output
% $opts - Outputs with optsIn values overlayed
function [opts] = optOverlay(opts, optsIn, recurseStructs)
if nargin < 3
recurseStructs = false;
end
isValid = @(o) isstruct(o) && length(o) == 1;
assert(isValid(opts), 'Existing options must be a scalar struct');
assert(isempty(optsIn) || iscell(optsIn) || isValid(optsIn), ...
'Input options must be a scalar struct, a {name, value, ...} cell, or empty');
if ~isempty(optsIn)
if iscell(optsIn)
optsIn = struct(optsIn{:});
end
assert(isstruct(optsIn));
fields = fieldnames(optsIn);
for i = 1:length(fields)
field = fields{i};
assert(isfield(opts, field), 'Field does not exist: %s', field);
newValue = optsIn.(field);
% Apply recursion
if recurseStructs
curValue = opts.(field);
% Both values must be proper option structs
if isValid(curValue) && isValid(newValue)
newValue = optOverlay(curValue, newValue, true);
end
end
opts.(field) = newValue;
end
end
end
I'd say that using the naming convention 'defaults' and 'new' would probably be better :P
I have made a function based on Jonas' and Richie Cotton's answers. It implements both functionalities (flexible arguments, or restricted, meaning that only variables existing in the defaults are allowed), plus a few other things like syntactic sugar and sanity checks.
function argStruct = getnargs(varargin, defaults, restrict_flag)
%GETNARGS Converts name/value pairs to a struct (this allows to process named optional arguments).
%
% ARGSTRUCT = GETNARGS(VARARGIN, DEFAULTS, restrict_flag) converts
% name/value pairs to a struct, with defaults. The function expects an
% even number of arguments in VARARGIN, alternating NAME then VALUE.
% (Each NAME should be a valid variable name and is case sensitive.)
% Also VARARGIN should be a cell, and defaults should be a struct().
% Optionally: you can set restrict_flag to true if you want only argument names specified in defaults to be allowed. Also, if restrict_flag = 2, arguments that aren't in the defaults will just be ignored.
% After calling this function, you can access your arguments using: argstruct.your_argument_name
%
% Examples:
%
% No defaults
% getnargs( {'foo', 123, 'bar', 'qwerty'} )
%
% With defaults
% getnargs( {'foo', 123, 'bar', 'qwerty'} , ...
% struct('foo', 987, 'bar', magic(3)) )
%
% See also: inputParser
%
% Authors: Jonas, Richie Cotton and LRQ3000
%
% Extract the arguments if they are nested inside a cell (happens on Octave); in any case the number of arguments can never legitimately be 1 (name/value pairs come in couples, thus at least two)
if (numel(varargin) == 1)
varargin = varargin{:};
end
% Sanity check: we need a multiple of couples, if we get an odd number of arguments then that's wrong (probably missing a value somewhere)
nArgs = length(varargin);
if rem(nArgs, 2) ~= 0
error('NameValuePairToStruct:NotNameValuePairs', ...
'Inputs were not name/value pairs');
end
% Sanity check: if defaults is not supplied, it's by default an empty struct
if ~exist('defaults', 'var')
defaults = struct;
end
if ~exist('restrict_flag', 'var')
restrict_flag = false;
end
% Syntactic sugar: if defaults is also a cell instead of a struct, we convert it on-the-fly
if iscell(defaults)
defaults = struct(defaults{:});
end
optionNames = fieldnames(defaults); % extract all default arguments names (useful for restrict_flag)
argStruct = defaults; % copy over the defaults: by default, all arguments will have the default value. Afterwards we simply overwrite the defaults with the user-supplied values.
for i = 1:2:nArgs % iterate over couples of argument/value
varname = varargin{i}; % extract the argument's name (names are case sensitive)
% check that the supplied name is a valid variable identifier (it does not check if the variable is allowed/declared in defaults, just that it's a possible variable name!)
if ~isvarname(varname)
error('NameValuePairToStruct:InvalidName', ...
'A variable name was not valid: %s position %i', varname, i);
% if options are restricted, check that the argument's name exists in the supplied defaults, else we throw an error. With this we can allow only a restricted range of arguments by specifying in the defaults.
elseif restrict_flag && ~isempty(defaults) && ~any(strmatch(varname, optionNames))
if restrict_flag ~= 2 % restrict_flag = 2 means that we just ignore this argument, else we show an error
error('%s is not a recognized argument name', varname);
end
% else alright, we replace the default value for this argument with the user supplied one (or we create the variable if it wasn't in the defaults and there's no restrict_flag)
else
argStruct = setfield(argStruct, varname, varargin{i + 1}); %#ok<SFLD>
end
end
end
Also available as a Gist.
And for those interested in having real named arguments (with a syntax similar to Python, e.g. myfunction(a=1, b='qwerty')), use inputParser (MATLAB only; Octave users will have to wait until at least v4.2, or you can try a wrapper called InputParser2).
Also as a bonus, if you don't want to always have to type argstruct.yourvar but directly use yourvar, you can use the following snippet by Jason S:
function varspull(s)
% Import variables in a structures into the local namespace/workspace
% eg: s = struct('foo', 1, 'bar', 'qwerty'); varspull(s); disp(foo); disp(bar);
% Will print: 1 and qwerty
%
%
% Author: Jason S
%
for n = fieldnames(s)'
name = n{1};
value = s.(name);
assignin('caller',name,value);
end
end
R = [cos(pi/3) sin(pi/3); -sin(pi/3) cos(pi/3)]
[i,j]=round([1 1] * R)
returns
i =
-0 1
error: element number 2 undefined in return list
While I want i=0 and j=1
Is there a way to work around that? Or just Octave being stupid?
Octave is not being stupid; it's just that you expect the syntax [a,b] = [c,d] to result in 'destructuring', but that's not how octave/matlab works. Instead, you are assigning a 'single' output (a matrix) to two variables. Since you are not generating multiple outputs, there is no output to assign to the second variable you specify (i.e. j) so this is ignored.
Long story short, if you're after a 'destructuring' effect, you can convert your matrix to a cell, and then perform cell expansion to generate two outputs:
[i,j] = num2cell( round( [1 1] * R ) ){:}
Or, obviously, you can collect the output into a single object, and then assign to i, and j separately via that object:
[IJ] = round( [1 1] * R )
i = IJ(1)
j = IJ(2)
but presumably that's what you're trying to avoid.
Explanation:
The reason [a,b] = bla bla doesn't work, is because syntactically speaking, the [a,b] here isn't a normal matrix; it represents a list of variables you expect to assign return values to. If you have a function or operation that returns multiple outputs, then each output will be assigned to each of those variables in turn.
However, if you only pass a single output, and you specified multiple return variables, Octave will assign that single output to the first return variable, and ignore the rest. And since a matrix is a single object, it assigns this to i, and ignores j.
Converting the whole thing to a cell allows you to then index it via {:}, which returns all cells as a comma separated list (this can be used to pass multiple arguments into functions, for instance). You can see this if you just index without capturing - this results in 'two' answers, printed one after another:
num2cell( round( [1 1] * R ) ){:}
% ans = 0
% ans = 1
Note that many functions in MATLAB/Octave behave differently based on whether you call them with 1 or 2 output arguments. In other words, think of the number of output arguments with which you call a function as part of its signature! E.g., have a look at the ind2sub function:
[r] = ind2sub([3, 3], [2,8]) % returns 1D indices
% r = 2 8
[r, ~] = ind2sub([3, 3], [2,8]) % returns 2D indices
% r = 2 2
If destructuring worked the way you assumed on normal matrices, it would be impossible to know if one is attempting to call a function in "two-outputs" mode, or simply trying to call it in "one-output" mode and then destructure the output.
I'd like to use an indexed variable in a function definition (this is a MWE, in the real world I have many a[i], used as coefficients of a polynomial)
f(x):=a[0]*x $
But when I evaluate this function assigning a value to a[0], the assignment is ignored:
ev(f(z),[a[0]=99]);
> a[0]*z
In order to get the desired result I need to make an additional assignment
expr:f(z) $
ev(expr,[a[0]=99]);
> 99*z
What's happening here? Is there a way to avoid the additional step?
Thanks in advance for any clue.
I see that if you write the function as f(x) := a0*x and then ev(f(z), a0=99), you'll get the expected result 99*z. The different behavior for f(x) := a[0]*x is therefore a bug; I'll file a bug report.
In general a more predictable strategy for replacing placeholders with values is to use the subst function which substitutes values into an expression. In this case you could write:
subst (a[0] = 99, f(z));
If you have several values to substitute, you can write:
subst ([a[0] = 99, a[1] = 42, a[2] = 2*foo], myexpr);
where myexpr is an expression containing a[0], a[1], and a[2].
subst is serial (one value at a time) substitution. See also psubst which is parallel substitution (all values at once).
I have a string function (and I am sure it is reversible, so no need to test this), could I call it in reverse to perform the opposite operation?
For example:
def sample(s):
return s[1:]+s[:1]
would put the first letter of a string on the end and return it.
'Output' would become 'utputO'.
When I want to get the opposite operation, could I use this same function?
'utputO' would return 'Output'.
Short answer: no.
Longer answer: I can think of 3, maybe 4 ways to approach what you want -- all of which depend on how you are allowed to change your functions (possibly restricting them to a subset of Python or a mini language), train them, or run them normally with the operands you are expecting to invert later.
So, method (1) - it would probably not reach 100% determinism, and would require training with a lot of random examples for each function: use a machine learning approach. That is cool, because it is a hot topic; this would be almost a "machine learning hello world" to implement using one of the various frameworks existing for Python, or even rolling your own - just set up a neural network for string transformation, train it with a couple thousand (maybe just a few hundred) string transformations for each function you want to invert, and you should have the reverse function. I think this could be the best - at least the "least incorrect" - approach; at least it will be the most generic one.
Method(2): Create a mini language for string transformation with reversible operands. Write your functions using this mini language. Introspect your functions and generate the reversed ones.
May look weird, but imagine a minimal stack language that could remove an item from a position in a string, and push it on the stack, pop an item to a position on the string, and maybe perform a couple more reversible primitives you might need (say upper/lower) -
OPSTACK = []
language = {
    "push_op": (lambda s, pos: (OPSTACK.append(s[pos]), s[:pos] + s[pos + 1:])[1]),
    "pop_op": (lambda s, pos: s[:pos] + OPSTACK.pop() + s[pos:]),
    "push_end": (lambda s: (OPSTACK.append(s[-1]), s[:-1])[1]),
    "pop_end": lambda s: s + OPSTACK.pop(),
    "lower": lambda s: s.lower(),
    "upper": lambda s: s.upper(),
    # ...
}
# (or pip install extradict and use extradict.BijectiveDict to avoid having to write double entries)
reverse_mapping = {
    "push_op": "pop_op",
    "pop_op": "push_op",
    "push_end": "pop_end",
    "pop_end": "push_end",
    "lower": "upper",
    "upper": "lower"
}

def engine(text, function):
    tokens = function.split()
    while tokens:
        operator = tokens.pop(0)
        if operator.endswith("_op"):
            operand = int(tokens.pop(0))
            text = language[operator](text, operand)
        else:
            text = language[operator](text)
    return text

def inverter(function):
    inverted = []
    tokens = function.split()
    while tokens:
        operator = tokens.pop(0)
        inverted.insert(0, reverse_mapping[operator])
        if operator.endswith("_op"):
            operand = tokens.pop(0)
            inverted.insert(1, operand)
    return " ".join(inverted)
Example:
In [36]: sample = "push_op 0 pop_end"
In [37]: engine("Output", sample)
Out[37]: 'utputO'
In [38]: elpmas = inverter(sample)
In [39]: elpmas
Out[39]: 'push_end pop_op 0'
In [40]: engine("utputO", elpmas)
Out[40]: 'Output'
Method 3: If possible, it is easy to cache the input and output of each call, and just use that to operate in reverse - it could be done as a decorator in Python
from functools import wraps

def reverse_cache(func):
    reverse_cache = {}
    @wraps(func)
    def wrapper(input_text):
        result = func(input_text)
        reverse_cache[result] = input_text
        return result
    wrapper.reverse_cache = reverse_cache
    return wrapper
Example:
In [3]: @reverse_cache
... def sample(s):
... return s[1:]+s[:1]
In [4]:
In [5]: sample("Output")
Out[5]: 'utputO'
In [6]: sample.reverse_cache["utputO"]
Out[6]: 'Output'
Method 4: If the string operations are limited to shuffling the string contents around in a deterministic way, like in your example (and maybe offsetting the character code values by a constant - but no other operations at all), it is possible to write a learner function without any neural-network programming: it would construct a string with a distinct character at each position (with code points in ascending order), pass it through the function, and note down the numeric order of the output string -
so, in your example, the reconstructed output order would be (1, 2, 3, 4, 5, 0) - given that sequence, one just has to reorder the input to the inverted function according to those indexes, which is trivial in Python:
def order_map(func, length):
    sample_text = "".join(chr(i) for i in range(32, 32 + length))
    result = func(sample_text)
    return [ord(char) - 32 for char in result]

def invert(func, text):
    map_ = order_map(func, len(text))
    reordered = sorted(zip(map_, text))
    return "".join(item[1] for item in reordered)
Example:
In [47]: def sample(s):
....: return s[1:] + s[0]
....:
In [48]: sample("Output")
Out[48]: 'utputO'
In [49]: invert(sample, "utputO")
Out[49]: 'Output'
In [50]:
In the Lua wiki I found a way to define default values for missing arguments:
function myfunction(a,b,c)
b = b or 7
c = c or 5
print (a,b,c)
end
Is that the only way? The PHP style myfunction (a,b=7,c=5) does not seem to work. Not that the Lua way doesn't work, I am just wondering if this is the only way to do it.
If you want named arguments and default values like PHP or Python, you can call your function with a table constructor:
myfunction{a,b=3,c=2}
(This is seen in many places in Lua, such as the advanced forms of LuaSocket's protocol modules and constructors in IUPLua.)
The function itself could have a signature like this:
function myfunction(t)
setmetatable(t,{__index={b=7, c=5}})
local a, b, c =
t[1] or t.a,
t[2] or t.b,
t[3] or t.c
-- function continues down here...
end
Any values missing from the table of parameters will be taken from the __index table in its metatable (see the documentation on metatables).
Of course, more advanced parameter styles are possible using table constructors and functions- you can write whatever you need. For example, here is a function that constructs a function that takes named-or-positional argument tables from a table defining the parameter names and default values and a function taking a regular argument list.
As a non-language-level feature, such calls can be changed to provide new behaviors and semantics:
Variables could be made to accept more than one name
Positional variables and keyword variables can be interspersed - and defining both can give precedence to either (or cause an error)
Keyword-only positionless variables can be made, as well as nameless position-only ones
The fairly-verbose table construction could be done by parsing a string
The argument list could be used verbatim if the function is called with something other than 1 table
Some useful functions for writing argument translators are unpack (moving to table.unpack in 5.2), setfenv (deprecated in 5.2 with the new _ENV construction), and select (which returns all values of a given argument list from a given position onwards, or the length of the list when passed '#').
In my opinion there isn't another way. That's just the Lua mentality: no frills, and except for some syntactic sugar, no redundant ways of doing simple things.
Technically, there's b = b == nil and 7 or b (which should be used in the case where false is a valid value as false or 7 evaluates to 7), but that's probably not what you're looking for.
The only way I've found so far that makes any sense is to do something like this:
function new(params)
params = params or {}
options = {
name = "Object name"
}
for k,v in pairs(params) do options[k] = v end
some_var = options.name
end
new({ name = "test" })
new()
If your function expects neither Boolean false nor nil to be passed as parameter values, your suggested approach is fine:
function test1(param)
local default = 10
param = param or default
return param
end
--[[
test1(): [10]
test1(nil): [10]
test1(true): [true]
test1(false): [10]
]]
If your function allows Boolean false, but not nil, to be passed as the parameter value, you can check for the presence of nil, as suggested by Stuart P. Bentley, as long as the default value is not Boolean false:
function test2(param)
local default = 10
param = (param == nil and default) or param
return param
end
--[[
test2(): [10]
test2(nil): [10]
test2(true): [true]
test2(false): [false]
]]
The above approach breaks when the default value is Boolean false:
function test3(param)
local default = false
param = (param == nil and default) or param
return param
end
--[[
test3(): [nil]
test3(nil): [nil]
test3(true): [true]
test3(false): [false]
]]
Interestingly, reversing the order of the conditional checks does allow Boolean false to be the default value, and is nominally more performant:
function test4(param)
local default = false
param = param or (param == nil and default)
return param
end
--[[
test4(): [false]
test4(nil): [false]
test4(true): [true]
test4(false): [false]
]]
This approach works for reasons that seem counter-intuitive until further examination, upon which they are discovered to be kind of clever.
If you want default parameters for functions that do allow nil values to be passed, you'll need to do something even uglier, like using variadic parameters:
function test5(...)
local argN = select('#', ...)
local default = false
local param = default
if argN > 0 then
local args = {...}
param = args[1]
end
return param
end
--[[
test5(): [false]
test5(nil): [nil]
test5(true): [true]
test5(false): [false]
]]
Of course, variadic parameters completely thwart auto-completion and linting of function parameters in functions that use them.
Short answer: it is the simplest and best way. In Lua, variables are nil by default; this means that if you don't pass an argument to a Lua function, the argument exists but is nil, and Lua programmers use this property of the language to set default values.
It is not a way of setting default values, but you can also use a function like the following, which raises an error if you don't pass values for the arguments:
function myFn(arg1, arg2)
err = arg1 and arg2
if not err then error("missing argument") end
-- or, equivalently
if not (arg1 and arg2) then error("msg") end
end
However, this is not a good approach, and it's better not to use such a function.
Also, in the documentation diagrams, an optional argument is shown as [,arg]:
function args(a1 [,a2])
-- some
end
function args ( a1 [,a2[,a3]])
-- some
end
As always, "Lua gives you the power, you build the mechanisms". The first distinction to make here is that between named parameters and the commonly used parameter list.
The parameter list
Assuming all your args are given in the parameter list as follows, they will all be initialized. At this point, you can't distinguish between "wasn't passed" and "was passed as nil" - both will simply be nil. Your options for setting defaults are:
Using the or operator if you expect a truthy value (not nil or false). Defaulting to something even if false is given might be a feature in this case.
Using an explicit nil check param == nil, used either as if param == nil then param = default end or the typical Lua ternary construct param == nil and default or param.
If you find yourself frequently repeating the patterns from point (2), you might want to declare a function:
function default(value, default_value)
if value == nil then return default_value end
return value
end
(whether to use global or local scope for this function is another issue I won't get into here).
I've included all three ways in the following example:
function f(x, y, z, w)
x = x or 1
y = y == nil and 2 or y
if z == nil then z = 3 end
w = default(w, 4)
print(x, y, z, w)
end
f()
f(1)
f(1, 2)
f(1, 2, 3)
f(1, 2, 3, 4)
note that this also allows omitting arguments in between; trailing nil arguments will also be treated as absent:
f(nil)
f(nil, 2, 3)
f(nil, 2, nil, 4)
f(1, 2, 3, nil)
Varargs
A lesser known feature of Lua is the ability to actually determine how many arguments were passed, including the ability to distinguish between explicitly passed nil arguments and "no argument" through the select function. Let's rewrite our function using this:
function f(...)
local n_args = select("#", ...) -- number of arguments passed
local x, y, z, w = ...
if n_args < 4 then w = 4 end
if n_args < 3 then z = 3 end
if n_args < 2 then y = 2 end
if n_args < 1 then x = 1 end
print(x, y, z, w)
end
f() -- prints "1 2 3 4"
f(nil) -- prints "nil 2 3 4"
f(1, nil) -- prints "1 nil 3 4"
f(1, nil, 3) -- prints "1 nil 3 4"
f(nil, nil, nil, nil) -- prints 4x nil
Caveat: (1) the argument list got dragged into the function body, hurting readability; (2) this is rather cumbersome to write manually, and should probably be abstracted away, perhaps using a wrapper function wrap_defaults({1, 2, 3, 4}, f) that supplies the defaults as appropriate. Implementation of this is left up to the reader as an exercise (hint: the straightforward way would first collect the args into a garbage table, then unpack that after setting the defaults).
Table calls
Lua provides syntactic sugar for calling functions with a single table as the only argument: f{...} is equivalent to f({...}). Furthermore, {f(...)} can be used to capture a vararg returned by f (caveat: if f returns nils, the table will have holes in its list part).
Tables also allow implementing named "arguments" as table fields: a table can mix a list part and a hash part, making f{1, named_arg = 2} perfectly valid Lua.
In terms of limitations, the advantage of the table call is that it only leaves a single argument - the table - on the stack rather than multiple arguments. For recursive functions, this lets you hit the stack overflow later. Since PUC Lua drastically increased the stack limit to ~1M, this isn't much of an issue anymore; LuaJIT still has a stack limit of ~65k however, and PUC Lua 5.1 is even lower at around 15k.
In terms of performance & memory consumption, the table call is obviously worse: It requires Lua to build a garbage table, which will then waste memory until the GC gets rid of it. Garbage parameter tables should therefore probably not be used in hotspots where plenty of calls happen. Indexing a hashmap is also obviously slower than getting values straight off the stack.
That said, let's examine the ways to implement defaults for tables:
Unpacking / Destructuring
unpack (table.unpack in later versions (5.2+)) can be used to convert a table into a vararg, which can be treated like a parameter list; note however that in Lua the list part can't have trailing nil values, not allowing you to distinguish "no value" and nil. Unpacking / destructuring to locals also helps performance since it gets rid of repeated table indexing.
function f(params)
local x, y, z, w = unpack(params)
-- use same code as if x, y, z, w were regular params
end
f{1, 2, nil}
if you use named fields, you'll have to explicitly destructure those:
function f(params)
local x, y, z, w = params.x, params.y, params.z, params.w
-- use same code as if x, y, z, w were regular params
end
f{x = 1, w = 4}
mix & match is possible:
function f(params)
local x, y, z = unpack(params)
local w = params.w
-- use same code as if x, y, z, w were regular params
end
f{1, 2, w = 4}
Metatables
The __index metatable field can be used to set a table which is indexed with name if params.name is nil, providing defaults for nil values. One major drawback of setting a metatable on a passed table is that the passed table's metatable will be lost, perhaps leading to unexpected behavior on the caller's end. You could use getmetatable and setmetatable to restore the metatable after you're done operating with the params, but that would be rather dirty, hence I would recommend against it.
Bad
function f(params)
setmetatable(params, {__index = {x = 1, y = 2, z = 3, w = 4}})
-- use params.[xyzw], possibly unpacking / destructuring
end
f{x = 1}
in addition to the presumably garbage params table, this will create (1) a garbage metatable and (2) a garbage default table every time the function is called. This is pretty bad. Since the metatable is constant, simply drag it out of the function, making it an upvalue:
Okay
local defaults_metatable = {__index = {x = 1, y = 2, z = 3, w = 4}}
function f(params)
setmetatable(params, defaults_metatable)
-- use params.[xyzw], possibly unpacking / destructuring
end
Avoiding metatables
If you want a default table without the hackyness of metatables, consider once again writing yourself a helper function to complete a table with default values:
local function complete(params, defaults)
for param, default in pairs(defaults) do
if params[param] == nil then
params[param] = default
end
end
return params
end
this will change the params table, properly setting the defaults; use as params = complete(params, defaults). Again, remember to drag the defaults table out of the function.
Update 2: examples removed, because they were misleading. The ones below are more relevant.
My question:
Is there a programming language with such a construct?
Update:
Now when I think about it, Prolog has something similar.
It even allows defining operations at the definition line.
(Forget about backtracking and relations - think about the syntax.)
I asked this question because I believe it's a nice thing to have symmetry in a language.
Symmetry between "in" parameters and "out" parameters.
If returning values like that were easy, we could drop explicit returning in the designed language.
Returning pairs... I think this is a hack; we do not need a data structure to pass multiple parameters to a function.
Update 2:
To give an example of syntax I'm looking for:
f (s, d&) = // & indicates 'out' variable
d = s+s.
main =
f("say twice", &twice) // & indicates 'out' variable declaration
print(twice)
main2 =
print (f("say twice", _))
Or in functional + prolog style
f $s (s+s). // use $ to mark that s will get it's value in other part of the code
main =
f "say twice" $twice // on call site the second parameter will get it's value from
print twice
main2 =
print (f "Say twice" $_) // anonymous variable
In the proposed language there are no expressions, because all returns are through parameters. This would be cumbersome in situations where deep hierarchical function calls are natural. Lisp'ish example:
(let x (* (+ 1 2) (+ 3 4))) // equivalent to C x = ((1 + 2) * (3 + 4))
would need names in the language for all the temporary variables:
+ 1 2 res1
+ 3 4 res2
* res1 res2 x
So I propose anonymous variables that turn a whole function call into value of this variable:
* (+ 1 2 _) (+ 3 4 _)
This is not very natural, because of all the cultural baggage we have, but I want to throw away all the preconceptions about syntax we currently have.
<?php
function f($param, &$ret) {
$ret = $param . $param;
}
f("say twice", $twice);
echo $twice;
?>
$twice is seen after the call to f(), and it has the expected value. If you remove the ampersand, there are errors. So it looks like PHP will declare the variable at the point of calling. I'm not convinced that buys you much, though, especially in PHP.
"Is there a programming language with such a construct?"
Your question is in fact a little unclear.
In a sense, any language that supports assignment to [the variable state associated with] a function argument, supports "such a construct".
C supports it because "void f (type *address)" allows modification of anything address points to. Java supports it because "void f (Object x)" allows any (state-modifying) invocation of some method of x. COBOL supports it because "PROCEDURE DIVISION USING X" can involve an X that holds a pointer/memory address, ultimately allowing to go change the state of the thing pointed to by that address.
From that perspective, I'd say almost every language known to mankind supports "such a construct", with the exception perhaps of languages such as Tutorial D, which claim to be "absolutely pointer-free".
I'm having a hard time understanding what you want. You want to put the return type on call signature? I'm sure someone could hack that together but is it really useful?
// fakelang example - use a ; to separate ins and outs
function f(int in1, int in2; int out1, int out2, int out3) {...}
// C++0x-ish
auto f(int in1, int in2) -> int o1, int o2, int o3 {...}
int a, b, c;
a, b, c = f(1, 2);
I get the feeling this would be implemented internally this way:
LEA EAX, c // push output parameter pointers first, in reverse order
PUSH EAX
LEA EAX, b
PUSH EAX
LEA EAX, a
PUSH EAX
PUSH 1 // push input parameters
PUSH 2
CALL f // Caller treat the outputs as references
ADD ESP,20 // clean the stack
For your first code snippet, I'm not aware of any such languages, and frankly I'm glad that's the case. Declaring a variable in the middle of an expression like that, and then using it outside said expression, looks very wrong to me. If anything, I'd expect the scope of such a variable to be restricted to the function call, but then of course it's quite pointless in the first place.
For the second one - multiple return values - pretty much any language with first-class tuple support has something close to that. E.g. Python:
def foo(x, y):
    return (x + 1), (y + 1)

x, y = foo(1, 2)
Lua doesn't have first-class tuples (i.e. you can't bind a tuple value to a single variable - you always have to expand it, possibly discarding part of it), but it does have multiple return values, with essentially the same syntax:
function foo(x, y)
return (x + 1), (y + 1)
end
local x, y = foo(x, y)
F# has first-class tuples, and so everything said earlier about Python applies to it as well. But it can also simulate tuple returns for methods that were declared in C# or VB with out or ref arguments, which is probably the closest to what you describe - though it is still implicit (i.e. you don't specify the out-argument at all, even as _). Example:
// C# definition
int Foo(int x, int y, out int z)
{
z = y + 1;
return x + 1;
}
// explicit F# call
let mutable y = 0
let x = Foo(1, 2, &y)
// tupled F# call
let x, y = Foo(1, 2)
Here is how you would do it in Perl:
sub f { $_[1] = $_[0] . $_[0] } #in perl all variables are passed by reference
f("say twice", my $twice);
# or f("...", our $twice) or f("...", $twice)
# the last case is only possible if you are not running with "use strict;"
print $twice;
[edit] Also, since you seem interested in minimal syntax:
sub f { $_[1] = $_[0] x 2 } # x is the repetition operator
f "say twice" => $twice; # => is a quoting comma, used here just for clarity
print $twice;
is perfectly valid perl. Here's an example of normal quoting comma usage:
("abc", 1, "d e f", 2) # is the same as
(abc => 1, "d e f" => 2) # the => only quotes perl /\w+/ strings
Also, on return values, unless exited with a "return" early, all perl subroutines automatically return the last line they execute, be it a single value, or a list. Lastly, take a look at perl6's feed operators, which you might find interesting.
[/edit]
I am not sure exactly what you are trying to achieve with the second example, but the concept of implicit variables exists in a few languages; in Perl, it is $_.
An example would be some of Perl's builtins, which look at $_ when they don't have an argument.
$string = "my string\n";
for ($string) { # loads "my string" into $_
chomp; # strips the last newline from $_
s/my/our/; # substitutes my for our in $_
print; # prints $_
}
without using $_, the above code would be:
chomp $string;
$string =~ s/my/our/;
print $string;
$_ is used in many cases in perl to avoid repeatedly passing temporary variables to functions
Not programming languages, but various process calculi have syntax for binding names at the receiver call sites in the scope of process expressions dependent on them. While Pict has such syntax, it doesn't actually make sense in the derived functional syntax that you're asking about.
You might have a look at Oz. In Oz you only have procedures and you assign values to variables instead of returning them.
It looks like this:
proc {Max X Y Z}
if X >= Y then Z = X else Z = Y end
end
There are functions (that return values) but this is only syntactic sugar.
Also, Concepts, Techniques, and Models of Computer Programming is a great SICP-like book that teaches programming by using Oz and the Mozart Programming System.
I don't think so. Most languages that do something like that use Tuples so that there can be multiple return values. Come to think of it, the C-style reference and output parameters are mostly hacks around not being able to return Tuples...
Somewhat confusing, but C++ is quite happy with declaring variables and passing them as out parameters in the same statement:
void foo ( int &x, int &y, int &z ) ;
int a,b,c = (foo(a,b,c),c);
But don't do that outside of obfuscation contests.
You might also want to look at pass by name semantics in Algol, which your fuller description smells vaguely similar to.