Is there a way to make some of the parameters of a PowerShell function mandatory based on some condition (for example, if one of the other parameters is absent or false)?
My idea is to be able to call a function in two ways. A concrete example is a function that gets a list from SharePoint - I should be able to call it with the relative URL of the list (one and only parameter) OR with a web URL and a list display name (two parameters, both mandatory, but only when the list's relative URL is not used).
As Christian indicated, this can be accomplished via ParameterSetNames. Take a look at this example:
function Get-MySPWeb {
[CmdletBinding(DefaultParameterSetName="set1")]
param (
[parameter(ParameterSetName="set1")] $RelativeUrl,
[parameter(ParameterSetName="set2")] $WebUrl,
[parameter(ParameterSetName="set2", Mandatory=$true)] $DisplayName
)
Write-Host ("Parameter set in action: " + $PSCmdlet.ParameterSetName)
Write-Host ("RelativeUrl: " + $RelativeUrl)
Write-Host ("WebUrl: " + $WebUrl)
Write-Host ("DisplayName: " + $DisplayName)
}
If you run it with -RelativeUrl Foo it will bind to "set1". If you call this function without any parameters it will also bind to "set1".
(Note: when no parameters are provided, PowerShell v3 (tested with the Windows 8 Consumer Preview) will bind to "set1"; PowerShell v2, however, will fail parameter binding unless you add [CmdletBinding(DefaultParameterSetName="set1")] above the param block. Thanks @x0n for the DefaultParameterSetName tip!)
If you try to run it with a parameter value from both sets you will get an error.
If you run it with -WebUrl Bar it will prompt you for a parameter value for DisplayName, because it's a mandatory parameter.
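For illustration, assuming the function above is defined in the current session (the URLs are made up), the behaviors described above look like this:
# Binds to "set1" (the default set):
Get-MySPWeb -RelativeUrl '/lists/tasks'
# Binds to "set2"; -DisplayName is mandatory in that set, so omitting it triggers a prompt:
Get-MySPWeb -WebUrl 'https://contoso.example.com' -DisplayName 'Tasks'
# Mixing parameters from both sets fails with a parameter-set resolution error:
# Get-MySPWeb -RelativeUrl '/lists/tasks' -WebUrl 'https://contoso.example.com'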
There is a much more powerful option, called dynamic parameters, which lets you add parameters dynamically depending on the value of other parameters or on any other condition.
You must structure your script in a different way: declare the regular parameters as usual, then include a DynamicParam block to create the dynamic parameters, a Begin block to initialize variables from the dynamic parameters, and a Process block with the code run by the script, which can use both the regular parameters and the variables initialized in Begin. It looks like this:
param(
# Regular parameters here
)
DynamicParam {
# Create a parameter dictionary
$runtimeParams = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
# Populate it with parameters, with optional attributes
# For example a parameter with mandatory and pattern validation
$attribs = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
$mandatoryAttrib = New-Object System.Management.Automation.ParameterAttribute
$mandatoryAttrib.Mandatory = $true
$attribs.Add($mandatoryAttrib)
$patternAttrib = New-Object System.Management.Automation.ValidatePatternAttribute('your pattern here')
$attribs.Add($patternAttrib)
# Create the parameter itself with desired name and type and attribs
$param = New-Object System.Management.Automation.RuntimeDefinedParameter('ParameterName', [string], $attribs)
# Add it to the dictionary
$runtimeParams.Add('ParameterName', $param)
# Return the dictionary
$runtimeParams
}
Begin {
# If desired, move dynamic parameter values to variables
$ParameterName = $PSBoundParameters['ParameterName']
}
Process {
# Implement the script itself, which can use both regular and dynamic parameters
}
Of course, the interesting part is that you can add conditions in the DynamicParam section and the Begin section to create different parameters depending on anything, for example the values of other parameters. The dynamic parameters can have any name, type (string, int, bool, object...) and attributes (mandatory, position, validate set...), and they are created before the script executes, so you get parameter tab completion (IntelliSense) in any environment that supports it, like the PowerShell console, the PowerShell ISE or the Visual Studio Code editor.
A typical example would be to create a different set of dynamic parameters depending on the value of a regular parameter, by using a simple if in the DynamicParam section.
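For illustration, a minimal sketch of that pattern (the function, parameter names, and values here are made up, not from the question): a -ListName parameter that only exists when -Source is 'SharePoint'.
function Get-MyItem {
    [CmdletBinding()]
    param (
        [ValidateSet('SharePoint', 'FileSystem')]
        [string] $Source
    )
    DynamicParam {
        $runtimeParams = New-Object System.Management.Automation.RuntimeDefinedParameterDictionary
        if ($Source -eq 'SharePoint') {
            $attribs = New-Object System.Collections.ObjectModel.Collection[System.Attribute]
            $mandatoryAttrib = New-Object System.Management.Automation.ParameterAttribute
            $mandatoryAttrib.Mandatory = $true
            $attribs.Add($mandatoryAttrib)
            $param = New-Object System.Management.Automation.RuntimeDefinedParameter('ListName', [string], $attribs)
            $runtimeParams.Add('ListName', $param)
        }
        $runtimeParams
    }
    Process {
        # The dynamic parameter is only present in $PSBoundParameters when it was actually created
        $ListName = $PSBoundParameters['ListName']
        "Source: $Source, ListName: $ListName"
    }
}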
Google "PowerShell dynamic parameters" for extra information, like showing help for dynamic parameters. For example:
PowerShell Magazine Dynamic Parameters in PowerShell
You need to use parameter set names.
You can assign an exclusive parameter to a different parameter set name.
There is a module that has an "initialize" function that sets a variable that gets used in other scripts/functions in the module to validate that the initialize function was run. Something like
function Start-InitializeThing {
    # connect to the API ...
    $Script:SNOWinit = $true
}
Then in another script/function it will check:
if ($Script:SNOWinit -eq $true) { <# do the thing #> }
Is there a way to grab that $Script:SNOWinit in the same PowerShell window, but not the same module?
I want to run the same check but for a different function that is not in the module.
Can I do this? Can I "dig" into the module's runspace and check that variable? I don't have the means to edit the functions in the module, so I can't change what kind of variable is set once the initialize function has run.
Assuming that the module of interest is named foo and that it has already been imported (loaded):
. (Get-Module foo) { $SNOWinit }
If you want to import the module on demand:
. (Import-Module -PassThru foo) { $SNOWinit }
The above returns the value of the $SNOWinit variable defined in the root scope of module foo.
See this blog post for background information.
Note that it is generally not advisable to use this technique, because it violates the intended encapsulation that modules provide. In the case at hand, $SNOWinit, as a non-public module variable, should be considered an implementation detail, which is why you shouldn't rely on its presence in production code.
From the bible, Windows PowerShell in Action (WPiA): more mysterious uses for the call operator.
# get a variable in module scope
$m = get-module counter
& $m Get-Variable count
& $m Set-Variable count 33
I am a novice with PowerShell.
In MSYS2 (or Linux), I have defined a function npp
npp ()
{
${NPP_PATH} "$@"
}
such that if I call npp from the command prompt, it launches Notepad++ (as defined in ${NPP_PATH}).
If I call npp "mydir/stage 1/a.txt" it opens that file for editing.
Generally speaking, it allows:
Any number of parameters.
Parameters containing spaces, if suitably escaped.
What would be the equivalent in PowerShell?
I guess in PS I should also go for a function to obtain a similar behavior.
So far, I could receive an undefined number of parameters, and use them in a foreach loop, see code below.
But I could not find the equivalent of the simple "$@" to pass all parameters on as they are received.
Moreover, if I use quotes in one of the arguments, they are removed, so it will probably have problems with file paths containing spaces.
function multi_params {
param(
[Parameter(
ValueFromRemainingArguments=$true
)][string[]]
$listArgs
)
$count = 0
foreach($listArg in $listArgs) {
'$listArgs[{0}]: {1}' -f $count, $listArg
$count++
}
}
Assuming that NPP_PATH is an environment variable, the equivalent PowerShell function is:
function npp {
& $env:NPP_PATH $args
}
If NPP_PATH is the name of a regular PowerShell variable, use & $NPP_PATH $args.
& is the call operator, which is needed for syntactic reasons whenever you want to invoke a command whose name/path is specified in quotes and/or via a variable.
In simple functions (as opposed to advanced functions) such as the above (use of neither [CmdletBinding()] nor [Parameter()] attributes), you can use the automatic $args variable to pass any arguments through to another command.
If the target command is not an external program, as it is here, but another PowerShell command, use the form @args to ensure that all arguments - including those preceded by their parameter names - are properly passed through - see about_Splatting.
Note that the form @args works with external programs too, where it is generally equivalent to $args (the only difference is that only @args recognizes and removes --%, the stop-parsing token).
Note that passing arguments with embedded " chars. and empty arguments to external programs is still broken as of PowerShell v7.0 - see this answer.
Passing arguments through in simple vs. advanced functions (scripts):
In simple functions only, $args contains all arguments that did not bind to declared parameters, if any, on invocation.
If your simple function doesn't declare any parameters, as in the example above, $args contains all arguments passed on invocation.
If your simple function does declare parameters (typically via param(...)), $args contains only those arguments that didn't bind to declared parameters; in short: it collects any arguments your function did not declare parameters for.
Therefore, $args is a simple mechanism for collecting arguments not declared or known in advance, either to be used in the function itself - notably if declaring parameters isn't worth the effort - or to pass those arguments through to another command.
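For example, a minimal simple function that declares one parameter and collects everything else in $args:
function Show-Args {
    param([string] $Name)
    "Name: $Name"
    "Remaining arguments ($($args.Count)): $args"
}
Show-Args -Name foo bar baz   # -> Name: foo / Remaining arguments (2): bar baz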
To pass arguments that include named arguments (e.g., -Path foo instead of just foo) through to another PowerShell command, splatting is needed, i.e. the form @args.
Note that while $args is technically a regular PowerShell array ([object[]]), it also has built-in magic to support passing named arguments through; a custom array cannot be used for this, and the hash-table form of splatting is then required - see about_Splatting.
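As a sketch of such pass-through, a simple wrapper (the function name is made up) that forwards everything, including named arguments, to Get-ChildItem via @args:
function Get-MyChildItem {
    # @args splats both positional and named arguments onto the wrapped command
    Get-ChildItem @args
}
Get-MyChildItem -Path C:\Windows -Filter *.exe   # -Path and -Filter arrive as named arguments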
In advanced functions, $args is not available, because advanced functions by definition only accept arguments for which parameters have been declared.
To accept extra, positional-only arguments, you must define a catch-all ValueFromRemainingArguments parameter, as shown in the question, which collects such arguments in an array-like[1] data structure by default.
To also support named pass-through arguments, you have two basic options:
If you know the set of potential pass-through parameters, declare them as part of your own function.
You can then use splatting with the $PSBoundParameters dictionary (hash table) - see below - to pass named arguments through, possibly after removing arguments meant for your function itself from the dictionary.
This technique is used when writing proxy (wrapper) functions for existing commands; the PowerShell SDK makes duplicating the pass-through parameters easier by allowing you to scaffold a proxy function based on an existing command - see this answer.
Otherwise, there is only a suboptimal solution where you emulate PowerShell's own parameter parsing to parse the positional arguments into parameter-name/value pairs - see this answer.
The automatic $PSBoundParameters variable is a dictionary that is available in both simple and advanced functions:
$PSBoundParameters applies only if your function declares parameters, and contains entries only for those among the declared parameters to which arguments were actually bound (passed) on invocation; the dictionary keys are the parameter names, albeit without the initial -.
Note that parameters bound by a default value are not included - see this GitHub issue for a discussion.
Again, note that in advanced functions you can only pass a given argument if a parameter was declared for it, so any argument passed in a given invocation is by definition reflected in $PSBoundParameters.
Because it is a dictionary (hash table), it can be used with hash-table based splatting - @PSBoundParameters - to pass named arguments through to other PowerShell commands, and, since it is mutable, you have the option of adding or removing named arguments (such as the ones intended for your function itself).
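A minimal sketch of that technique (the wrapper and its -NewestFirst switch are made-up examples): declare the known pass-through parameters plus one of your own, remove yours from $PSBoundParameters, then splat the rest:
function Get-MyFiles {
    [CmdletBinding()]
    param(
        [string] $Path,
        [string] $Filter,
        [switch] $NewestFirst   # belongs to the wrapper only, not to Get-ChildItem
    )
    $passThru = $PSBoundParameters
    $null = $passThru.Remove('NewestFirst')   # strip the wrapper-only argument
    $items = Get-ChildItem @passThru -File
    if ($NewestFirst) { $items | Sort-Object LastWriteTime -Descending } else { $items }
}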
[1] That type is [System.Collections.Generic.List[object]]; however, you can specify a collection type explicitly, such as [object[]] to get a regular PowerShell array.
I'm currently working on a PowerShell module, and I've come across something rather unusual that I cannot figure out how to duplicate. I'm using a module from Az PowerShell 3.2.0 as a reference.
I have the following example from Microsoft's Az.Dns Module:
https://learn.microsoft.com/en-us/powershell/module/az.dns/Add-AzDnsRecordConfig
$RecordSet = Get-AzDnsRecordSet -Name www -RecordType A -ResourceGroupName MyResourceGroup -ZoneName myzone.com
Add-AzDnsRecordConfig -RecordSet $RecordSet -Ipv4Address 1.2.3.4
Set-AzDnsRecordSet -RecordSet $RecordSet
The $RecordSet variable is being set locally, passed as a parameter to the Add-AzDnsRecordConfig command of this module, and the value of the $RecordSet local variable is then being automatically updated. When this variable is passed to the Set-AzDnsRecordSet command as a parameter, it contains the updated value it was assigned and not its initial value. Note that there is no additional assignment statement of the return value of Add-AzDnsRecordConfig.
How is this possible?
I know that I can define a function parameter as type [ref] or System.Management.Automation.PSReference and then pass by reference when it is called as function -param ([ref]$myVariable). I can then update the value using $myVariable.Value, but that is not what is happening here. Somehow, this variable is being passed by value, and the value is being updated back in the local scope as if it were passed by reference.
Changing the name of the local variable also does not break this functionality. I've also done a Show-Command -Name Add-AzDnsRecordConfig and I can confirm that the type is not System.Management.Automation.PSReference.
I have a need to duplicate this functionality as closely as possible, as I am building a wrapper of sorts around this, but I am not sure how Microsoft is making this magic happen within this command.
@zett42 Thank you for the concise answer. I definitely over-complicated this, and I did not realize that objects were automatically passed by reference without the need to specify it. As it turns out, I can simply set a property on the parameter within the function, e.g. $RecordSet.Property = "New Value".
Back in the local scope, that does update the initially defined variable.
https://johnfabry.azurewebsites.net/2015/06/26/powershell-reference-types-and-value-types/
This article also helped me to understand how this works.
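To see the same behavior in isolation, here is a throwaway sketch (not the Az code): a function mutates a property of an object it receives, and the caller sees the change without [ref] and without assigning any return value:
function Set-Description {
    param($Item)                     # no [ref] needed for reference types
    $Item.Description = 'updated'    # mutates the very object the caller passed in
}
$thing = [pscustomobject]@{ Description = 'initial' }
Set-Description -Item $thing
$thing.Description   # -> 'updated'; the caller's object was modified in place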
Motivation
Reduce the maintenance of an Azure DevOps task that invokes a PowerShell script with a lot of parameters ("a lot" could be 5).
The idea relies on the fact that Azure DevOps generates environment variables to reflect the build variables. So, I devised the following scheme:
Prefix all non-secret Azure DevOps variables with MyBuild.
The task's PowerShell script calls a function that checks the script parameters against the MyBuild_ environment variables and automatically assigns the value of the MyBuild_xyz environment variable to the script parameter xyz if the latter has no value.
This way the task command line only contains secret parameters (which are not reflected in the environment). Often there are no secret parameters, and so the command line remains empty. We find that this scheme reduces the maintenance of tasks driven by a PowerShell script.
Example
param(
$DBUser,
[ValidateNotNullOrEmpty()]$DBPassword,
$DBServer,
$Configuration,
$Solutions,
$ClientDB = $env:Build_DefinitionName,
$RawBuildVersion = $env:Build_BuildNumber,
$BuildDefinition = $env:Build_DefinitionName,
$Changeset = $env:Build_SourceVersion,
$OutDir = $env:Build_BinariesDirectory,
$TempDir,
[Switch]$EnforceNoMetadataStoreChanges
)
$ErrorActionPreference = "Stop"
. $PSScriptRoot\AutomationBootstrap.ps1
$AutomationScripts = GetToolPackage DevOpsAutomation
. "$AutomationScripts\vNext\DefaultParameterValueBinding.ps1" $PSCommandPath -Required 'ClientDB' -Props #{
OutDir = #{ DefaultValue = [io.path]::GetFullPath("$PSScriptRoot\..\..\bin") }
TempDir = #{ DefaultValue = 'D:\_gctemp' }
DBUser = #{ DefaultValue = 'SomeUser' }
}
The described parameter binding logic is implemented in the script DefaultParameterValueBinding.ps1 which is published in a NuGet package. The code installs the package and thus gets access to the script.
In the example above, some parameters default to predefined Azure DevOps variables, like $RawBuildVersion = $env:Build_BuildNumber. Some are left uninitialized, like $DBServer, which means it would default to $env:MyBuild_DBServer.
We can get away without the special function to do the binding, but then the script author would have to write something like this:
$DBServer = $env:MyBuild_DBServer,
$Configuration = $env:MyBuild_Configuration,
$Solutions = $env:MyBuild_Solutions,
I wanted to avoid this, because of the possibility of an accidental name mismatch.
The Problem
The approach does not work when I package the logic of DefaultParameterValueBinding.ps1 into a module function. This is because of the module scope isolation - I just cannot modify the parameters of the caller script.
Is it still possible to do? Is it possible to achieve my goal in a more elegant way? Remember, I want to reduce the cost associated with maintaining the task command line in Azure DevOps.
Right now I am inclined to retreat back to this scheme:
$xyz = $(Resolve-ParameterValue 'xyz' x y z ...)
Where Resolve-ParameterValue would first check $env:MyBuild_xyz and if not found select the first not null value out of x,y,z,...
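A rough sketch of what such a Resolve-ParameterValue function could look like, based purely on the description above (not the actual implementation):
function Resolve-ParameterValue {
    param(
        [Parameter(Mandatory)] [string] $Name,
        [Parameter(ValueFromRemainingArguments)] $Candidates
    )
    # Prefer the reflected Azure DevOps variable, e.g. $env:MyBuild_DBServer for 'DBServer'
    $envValue = [Environment]::GetEnvironmentVariable("MyBuild_$Name")
    if ($envValue) { return $envValue }
    # Otherwise fall back to the first non-null candidate
    foreach ($candidate in $Candidates) {
        if ($null -ne $candidate) { return $candidate }
    }
}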
But if the Resolve-ParameterValue method comes from a module, then the script must assume the module has already been installed, because it has no way to install it before the parameters are evaluated. Or has it?
EDIT 1
Notice that the command line used to invoke the DefaultParameterValueBinding.ps1 script does not contain the caller script's parameters! It does include $PSCommandPath, which is used to obtain the PSBoundParameters collection.
Yes, but it will require modifications to the calling script and the function: pass the parameters by reference. Adam B. has a nice piece on passing parameters by reference here:
https://mcpmag.com/articles/2015/06/04/reference-variables-in-powershell.aspx
In short, the following is an example:
$age = 12;
function birthday {
param([ref]$age)
$age.value += 1
}
birthday -age ([ref]$age)
Write-Output $age
I've got an age of 12. I pass it into a function as a parameter. The function increments the value of $age by 1. You can do the same thing with a function in a module. You get my drift.
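For instance, a hedged sketch of how this could look across a module boundary (module, function, and variable names are made up):
# MyBinding.psm1 (hypothetical module)
function Set-FromEnvironment {
    param([ref] $Variable, [string] $EnvName)
    if (-not $Variable.Value) {
        $Variable.Value = [Environment]::GetEnvironmentVariable($EnvName)
    }
}
# calling script
Import-Module .\MyBinding.psm1
$DBServer = $null
Set-FromEnvironment -Variable ([ref] $DBServer) -EnvName 'MyBuild_DBServer'
$DBServer   # updated by the module function, despite module scope isolation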
I'm trying to have a script with both executable code and a function, like the following:
function CopyFiles {
Param( ... )
...
}
# Parameter for the script
param ( ... )
# Executable code
However, I run into the following error: "The assignment expression is not valid. The input to an assignment operator must be an object that is able to accept assignments, such as a variable or a property"
When I list my function at the end of the file, it says that the function name is undefined. How do I call a PowerShell function from executable code within the same script?
The correct order is:
1. Script parameters
# Parameter for the script
param([string]$foo)
2. Function definitions
function CopyFiles {
Param([string]$bar)
...
}
3. Script code
# Executable code
CopyFiles $foo $bar
Why would you want it any other way?
Parameters always go first. I had a similar issue at one point with providing parameter input to a script. Your script should go:
param ( . . . )
# functions
# script body
For some reason, the PowerShell parsing engine doesn't appreciate the param keyword not being on the first line of a script, not counting comment lines. You can also do this:
param (
# params must be at the top of the file
)
You can also check whether your parameters have been declared, or whether they have the input you want, using Get-Variable. One other thing: if you want to cast data to a certain type, such as System.Boolean, do it AFTER the param block and BEFORE the functions. If you type-cast something to System.Boolean in the parameter declaration, you'll get errors whenever people running your script don't supply the input argument as a Boolean value, which is much harder on them than using the .NET System.Convert static method to convert the value afterwards and checking what it evaluated to.
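A small sketch of that conversion-after-param approach (the parameter name is just an example):
param(
    $Force = 'false'   # deliberately untyped; callers may pass 'True', 'false', $true, etc.
)
# function definitions would go here
# script body: coerce the value once, up front
$Force = [System.Convert]::ToBoolean($Force)
if ($Force) { "Running in forced mode" }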