I am a novice with PowerShell.
In MSYS2 (or Linux), I have defined a function npp:
npp ()
{
    ${NPP_PATH} "$@"
}
such that if I call npp from the command prompt, it launches Notepad++ (as defined in ${NPP_PATH}).
If I call npp "mydir/stage 1/a.txt" it opens that file for editing.
Generally speaking, it allows:
Any number of parameters.
Parameters containing spaces, if suitably escaped.
What would be the equivalent in PowerShell?
I guess in PS I should also go for a function to obtain a similar behavior.
So far, I could receive an undefined number of parameters, and use them in a foreach loop, see code below.
But I could not find the equivalent of the simple "$@" to pass all parameters as they are received.
Moreover, if I use quotes in one of the arguments, they are removed, so it will probably have problems with file paths including blanks.
function multi_params {
    param(
        [Parameter(ValueFromRemainingArguments=$true)]
        [string[]] $listArgs
    )
    $count = 0
    foreach ($listArg in $listArgs) {
        '$listArgs[{0}]: {1}' -f $count, $listArg
        $count++
    }
}
Assuming that NPP_PATH is an environment variable, the equivalent PowerShell function is:
function npp {
    & $env:NPP_PATH $args
}
If NPP_PATH is the name of a regular PowerShell variable, use & $NPP_PATH $args.
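For example, with either definition (this assumes, as in the question, that NPP_PATH holds the path to the Notepad++ executable; the second file name is just a placeholder):
npp "mydir/stage 1/a.txt" b.txt
Both arguments - including the path with the embedded space - are passed through to Notepad++ unchanged.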
& is the call operator, which is needed for syntactic reasons whenever you want to invoke a command whose name/path is specified in quotes and/or via a variable.
In simple functions (as opposed to advanced functions) such as the above (use of neither [CmdletBinding()] nor [Parameter()] attributes), you can use the automatic $args variable to pass any arguments through to another command.
If the target command is a PowerShell command rather than an external program (as it is here), use the form @args to ensure that all arguments - including those preceded by their parameter names - are properly passed through - see about_Splatting.
Note that the form @args works with external programs too, where it is generally equivalent to $args (the only difference is that only @args recognizes and removes --%, the stop-parsing token).
Note that passing arguments with embedded " chars. and empty arguments to external programs is still broken as of PowerShell v7.0 - see this answer.
Passing arguments through in simple vs. advanced functions (scripts):
In simple functions only, $args contains all arguments that did not bind to declared parameters, if any, on invocation.
If your simple function doesn't declare any parameters, as in the example above, $args contains all arguments passed on invocation.
If your simple function does declare parameters (typically via param(...)), $args contains only those arguments that didn't bind to declared parameters; in short: it collects any arguments your function did not declare parameters for.
Therefore, $args is a simple mechanism for collecting arguments not declared or known in advance, either to be used in the function itself - notably if declaring parameters isn't worth the effort - or to pass those arguments through to another command.
To pass arguments that comprise named arguments (e.g., -Path foo instead of just foo) through to another PowerShell command, splatting is needed, i.e. the form @args.
Note that while $args is technically a regular PowerShell array ([object[]]), it also has built-in magic to support passing named arguments through; a custom array cannot be used for this, and the hash-table form of splatting is then required - see about_Splatting.
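As a hedged sketch of both points (the wrapper and its target, Get-ChildItem, are chosen purely for illustration): a simple function that declares one parameter of its own, collects everything else in $args, and splats it - parameter names included - onto another PowerShell command with @args:
function Show-Items {
    param([switch] $Quiet)    # declared parameter; all other arguments land in $args
    if (-not $Quiet) { "Forwarding $($args.Count) argument(s)..." }
    Get-ChildItem @args       # @args preserves named arguments such as -Filter
}
Show-Items C:\Windows -Filter *.exe -File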
In advanced functions, $args is not available, because advanced functions by definition only accept arguments for which parameters have been declared.
To accept extra, positional-only arguments, you must define a catch-all ValueFromRemainingArguments parameter, as shown in the question, which collects such arguments in an array-like[1] data structure by default.
To also support named pass-through arguments, you have two basic options:
If you know the set of potential pass-through parameters, declare them as part of your own function.
You can then use splatting with the $PSBoundParameters dictionary (hash table) - see below - to pass named arguments through, possibly after removing arguments meant for your function itself from the dictionary.
This technique is used when writing proxy (wrapper) functions for existing commands; the PowerShell SDK makes duplicating the pass-through parameters easier by allowing you to scaffold a proxy function based on an existing command - see this answer.
Otherwise, there is only a suboptimal solution where you emulate PowerShell's own parameter parsing to parse the positional arguments into parameter-name/value pairs - see this answer.
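For the first of these options, a minimal sketch (the wrapper and the wrapped command, Get-ChildItem, are purely illustrative): the function declares the pass-through parameters it knows about and forwards only those the caller actually bound:
function Get-ItemWrapper {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string] $Path,
        [switch] $Recurse,
        [string] $Filter
    )
    # $PSBoundParameters contains only the parameters actually passed on invocation,
    # so -Recurse and -Filter are forwarded only if the caller used them.
    Get-ChildItem @PSBoundParameters
}
Get-ItemWrapper -Path C:\Windows -Filter *.dll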
The automatic $PSBoundParameters variable is a dictionary that is available in both simple and advanced functions:
$PSBoundParameters applies only if your function declares parameters, and contains entries only for those among the declared parameters to which arguments were actually bound (passed) on invocation; the dictionary keys are the parameter names, albeit without the initial -.
Note that parameters bound by a default value are not included - see this GitHub issue for a discussion.
Again, note that in advanced functions you can only pass a given argument if a parameter was declared for it, so any argument passed in a given invocation is by definition reflected in $PSBoundParameters.
Because it is a dictionary (hash table), it can be used with hash-table based splatting - @PSBoundParameters - to pass named arguments through to other PowerShell commands and, since it is mutable, you have the option of adding or removing named arguments (such as the ones intended for your function itself).
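A sketch of that last point (the names are made up for illustration): a wrapper that removes the argument meant for itself before splatting the rest through:
function Invoke-WithNote {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string] $Path,
        [switch] $Recurse,
        [string] $Note    # intended for this function only, not for Get-ChildItem
    )
    if ($Note) { Write-Verbose $Note }
    # Drop the wrapper-only entry so that only Get-ChildItem parameters remain.
    $null = $PSBoundParameters.Remove('Note')
    Get-ChildItem @PSBoundParameters
}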
[1] That type is [System.Collections.Generic.List[object]]; however, you can specify a collection type explicitly, such as [object[]] to get a regular PowerShell array.
Related
I'm trying to pass a function to another function and then pass parameters to it when calling it, but if I pass more than one parameter the call fails with an error:
function debugMeStuffs($someBlah, $somePoo) {
    Write-Host $someBlah
    Write-Host $somePoo
}
function invokeOnHosts ([scriptblock]$funcToCall, $param1, $param2, $startRange, $endRange) {
    #Param($funcToCall)
    $i = $startRange
    for ($i = [int]$startRange; $i -le $endRange; $i++) {
        # HOW DO I MAKE THIS WORK WITH MULTIPLE PARAMETERS?!?!?!?
        $funcToCall.Invoke('blah' 'poo')
    }
}
invokeOnHosts $function:debugMeStuffs "param1" "param2" 4 7
Things I've tried:
$funcToCall("blah" "poo")
$funcToCall('blah' 'poo')
$funcToCall.Invoke("blah" "poo")
$funcToCall.Invoke('blah' 'poo')
$funcToCall 'blah' 'poo'
$funcToCall.Invoke 'blah' 'poo'
$funcToCall "blah" "poo"
$funcToCall.Invoke "blah" "poo"
None of the above seem to work. Is there something else I need to do to make this work?
.Invoke() is a .NET method, so the usual method-call syntax applies: you need
parentheses - (...) - around the list of arguments
you must separate the arguments with ,
$funcToCall.Invoke('blah', 'poo')
This contrasts with PowerShell's own syntax for calling cmdlets and functions, which is shell-like[1]:
no (...) around the argument list
arguments must be separated with spaces.
& $funcToCall blah poo # equivalent of the method call above.
A command such as the above is parsed in argument mode, which is why quoting the arguments in this simple case is optional.
Note the need for &, PowerShell's call operator, which is needed to execute the script block stored in $funcToCall; this is generally necessary for invoking a command stored in a variable, and also for command names / paths that are quoted.
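Applied to the code in the question, the loop could look like this (a sketch that simply forwards $param1 and $param2 in place of the hard-coded 'blah' and 'poo'):
function invokeOnHosts ([scriptblock]$funcToCall, $param1, $param2, $startRange, $endRange) {
    for ($i = [int]$startRange; $i -le [int]$endRange; $i++) {
        & $funcToCall $param1 $param2               # PowerShell syntax: space-separated arguments
        # or, method syntax: $funcToCall.Invoke($param1, $param2)
    }
}
invokeOnHosts $function:debugMeStuffs "param1" "param2" 4 7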
Given that it's easy to get confused between PowerShell's command syntax and .NET's method syntax, it's best to stick with PowerShell-native features[2], if possible.
That said, being able to call methods on .NET types directly is a wonderful extensibility option.
To help avoid accidental use of method syntax when calling PowerShell commands, you can use Set-StrictMode -Version 2 or higher, but note that that entails additional strictness checks.
[1] PowerShell is, after all, a shell - but it is also a full-featured scripting language that offers near-unlimited access to the .NET framework.
Reconciling these two personalities is a difficult balancing act, and the shell-like command-invocation syntax is a frequent problem for newcomers with a programming background, given that the rest of the language looks like a traditional programming language and that calling methods on .NET types does use the traditional syntax.
[2] This means preferring PowerShell's cmdlets, functions, and operators to use of the underlying .NET types' methods; doing so also usually rewards you with operating at a higher level of abstraction.
I have a function that looks something like this:
function global:Test-Multi {
    Param([string]$Suite)
    & perl -S "$Suite\runall.pl" -procs:$env:NUMBER_OF_PROCESSORS
}
I would like to allow the user to specify more parameters to Test-Multi and pass them directly to the underlying legacy perl script.
Does PowerShell provide a mechanism to allow additional variadic behavior for this purpose?
After seeing your comment, option 3 sounds like exactly what you want.
You have a few options:
1. Use $args (credit to hjpotter92's answer).
2. Explicitly define your additional parameters, then parse them all in your function to add them to your perl call.
3. Use a single parameter with the ValueFromRemainingArguments argument, e.g.:
function global:Test-Multi {
    Param(
        [string]$Suite,
        [parameter(ValueFromRemainingArguments = $true)]
        [string[]]$Passthrough
    )
    & perl -S "$Suite\runall.pl" -procs:$env:NUMBER_OF_PROCESSORS @Passthrough
}
$args is not going to pass the arguments through correctly. If you want the arguments to remain separate arguments, you should use @args instead.
I'm not sure what you wish to achieve, but the arguments passed to a function are accessible in the $args variable available inside the function.
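For completeness, a sketch of option 1 using $args (note that this only works while Test-Multi remains a simple function, i.e. without [Parameter()] attributes; the extra arguments in the sample call are made up for illustration):
function global:Test-Multi {
    Param([string]$Suite)
    # $args holds every argument not bound to -Suite; @args forwards them individually.
    & perl -S "$Suite\runall.pl" -procs:$env:NUMBER_OF_PROCESSORS @args
}
Test-Multi -Suite C:\tests --trace 2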
I wonder whether it is valid semantics to have a variable as an argument, something like this:
proc p1 {$aa} {}
I tried it on tclsh, there is no complaint, but the following experiment fails:
% set aa bb
bb
% set bb 200
200
% proc p1 {$aa} {puts $bb}
% p1 bb
can't read "bb": no such variable
Do you see what is wrong?
[UPDATE - after seeing Peter's answer]
I know the upvar semantic, thanks.
My main curiosity is still about using a variable as a proc argument. I know it is not common, but I just cannot help musing about what it could really do if the language syntax allows it.
Yes, your upvar example is exactly what I want to explore by using a variable as a proc argument, but my exploration so far tells me that there is really no way to do this, because "$" is interpreted as a plain character.
Do debunk me please if I am wrong.
Tcl does not support reference arguments as such: the usual pass-by-reference semantics is too static for Tcl. Instead, the logic of the command can, by use of upvar, dynamically create reference parameters including indirect reference parameters and calculated reference parameters, and also retarget the local name to another external variable. The upvar mechanism may look ungainly, but is very powerful indeed.
(The (edited) remains of my original answer follows:)
The usual idiom for doing this is
proc p varName {
    upvar 1 $varName var
    puts $var
}
The upvar command looks into another stack frame (in this case 1, which is the caller's stack frame) and makes a variable named $varName (i.e. the variable's name is the value of varName) in that stack frame and a variable named "var" in the command's stack frame refer to the same data object.
I won't explain this further since this is not useful to the asker.
Documentation: proc, puts, upvar
There are cases where it makes sense for the arguments in a procedure creation call to be supplied from a variable. The main example is where you are creating procedures dynamically, calculating the arguments you want to use as you go along.
That's actually a use case that doesn't come up very frequently! It's not particularly easy to use well. But I have done it. (OK, that was a method call, but the syntax of formal arguments is shared.)
The main reason that the capability is there is that it's part of Tcl's general syntax. Tcl tries very hard to not have special cases in how it parses things (other than in how a command parses the strings passed into it) and this includes in things that would be very special cases in the enormous majority of other programming languages, such as formal parameter lists to procedures. In Tcl, these are just ordinary values and can be produced using any technique that gives ordinary values.
The usual practice of putting them in braces is just how you do it reliably, and it is easy to teach. It's also overwhelmingly what people want to do.
All this is independent of the facts that Tcl's commands (including its procedures) can handle variable numbers of arguments (check out the special args parameter and how to specify default values) and that $aa is a legal (but strange) name for a local variable.
I'm trying to have a script with both executable code and a function, like the following:
function CopyFiles {
Param( ... )
...
}
# Parameter for the script
param ( ... )
# Executable code
However, I run into the following error: "The assignment expression is not valid. The input to an assignment operator must be an object that is able to accept assignments, such as a variable or a property"
When I list my function at the end of the file, it says that the function name is undefined. How do I call a PowerShell function from executable code within the same script?
The correct order is:
1. Script parameters
# Parameter for the script
param([string]$foo)
2. Function definitions
function CopyFiles {
    Param([string]$bar)
    ...
}
3. Script code
# Executable code
CopyFiles $foo $bar
Why would you want it any other way?
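Putting the three pieces together, a minimal self-contained script might look like this (the parameter names and paths are made up for illustration):
# 1. Script parameters - must come first
param(
    [string]$Source = ".",
    [string]$Destination = "C:\backup"
)
# 2. Function definitions
function CopyFiles {
    Param([string]$From, [string]$To)
    Copy-Item -Path $From -Destination $To -Recurse
}
# 3. Executable script code
CopyFiles $Source $Destination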
Parameters always go first. I had a similar issue at one point in time with providing parameter input to a script. Your script should go:
param ( . . . )
# functions
# script body
For some reason, the PowerShell parsing engine doesn't appreciate the param keyword being anywhere other than the first statement of a script, not counting comment lines. You can also do this:
param (
# params must be at the top of the file
)
You can also check whether your parameters have been declared, or whether they have the input you want, using Get-Variable. One other thing: if you want to cast data to a certain type, such as System.Boolean, do it AFTER the param block and BEFORE the functions. If you type-cast something to System.Boolean in the parameter declaration, you'll get errors whenever people running your script don't submit the input argument as a Boolean value; it is much easier to use the .NET System.Convert static methods to convert the value afterwards and check what it evaluated to.
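A minimal sketch of that convert-afterwards approach (the parameter name is made up for illustration):
param(
    $EnableLogging    # deliberately untyped, so callers can pass $true, "true", or 1
)
# Convert to a real Boolean before any functions or script code rely on it.
$EnableLogging = [System.Convert]::ToBoolean($EnableLogging)
if ($EnableLogging) { "Logging is on" }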
Since named parameters are those parameters that are identified by their explicit name, instead of their ordering, what's the name of their cousins without names, the ones that are identified merely by the order?
Anonymous parameters? Unnamed parameters? Do they have a name to begin with?
Positional parameters.
If you google "positional parameters", you'll usually find it referring to the $1, $2, $3 variables you get in shell scripting, but it works for "normal" parameters as well.
Parameters.
In Python, there are keyword arguments and positional arguments.
It's in the tutorial, kinda.
http://docs.python.org/tutorial/controlflow.html#keyword-arguments
In general, an argument list must have any positional arguments followed by any keyword arguments, where the keywords must be chosen from the formal parameter names.
And the glossary:
http://docs.python.org/glossary.html#glossary
argument
A value passed to a function or method, assigned to a named local variable in the function body. A function or method may have both positional arguments and keyword arguments in its definition.
positional argument
The arguments assigned to local names inside a function or method, determined by the order in which they were given in the call. * is used to either accept multiple positional arguments (when in the definition), or pass several arguments as a list to a function. See argument.