I am new to Zabbix. I have a requirement to monitor only one scheduled task from a big list of scheduled tasks.
I tried the setup below, but it's not working.
Entry in zabbix_agentd.win.conf
UserParameter=TaskSchedulerMon[*],powershell -NoProfile -ExecutionPolicy Bypass -File C:\zabbix\WindowScheduleTasks.ps1 WSTask
Item property file:
$TaskInfo = Get-ScheduledTask -TaskName "$name1"
Write-Output ($TaskInfo.State)
Trigger Property file:
{TaskSchedulerMon[WSTask].prev()}<>0
I'm getting an error: "Unsupported Item Key".
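For reference, a minimal sketch of how the pieces would typically fit together, assuming the task name should be passed through as a key parameter (the param declaration and the $1 placeholder are not in the original config):

# zabbix_agentd.win.conf - "$1" forwards the key's first parameter to the script
UserParameter=TaskSchedulerMon[*],powershell -NoProfile -ExecutionPolicy Bypass -File C:\zabbix\WindowScheduleTasks.ps1 "$1"

# WindowScheduleTasks.ps1 - declare the parameter the script expects
param($name1)
$TaskInfo = Get-ScheduledTask -TaskName "$name1"
Write-Output ($TaskInfo.State)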
I'm writing a script to back up existing BitLocker keys to the associated device in Azure AD. I've created a function which goes through the BitLocker-enabled volumes and backs up the key to Azure; however, I'd like to know how I can check that the function has completed successfully without any errors. Here is my code. I've added a try/catch into the function to catch any errors in the function itself, but how can I check that the function as a whole completed successfully? Currently I have an if statement checking that the last command ran ($?). Is this correct, or how can I verify this, please?
function Invoke-BackupBDEKeys {
    ##Get all current BitLocker volumes - this will ensure keys are backed up for devices which may have additional data drives
    $BitLockerVolumes = Get-BitLockerVolume | Select-Object MountPoint
    foreach ($BDEMountPoint in $BitLockerVolumes.MountPoint) {
        try {
            #Get key protectors for each of the BDE mount points on the device
            $BDEKeyProtector = Get-BitLockerVolume -MountPoint $BDEMountPoint | Select-Object -ExpandProperty KeyProtector
            #Get the Recovery Password protector - this will be what is backed up to AAD and used to recover access to the drive if needed
            $KeyId = $BDEKeyProtector | Where-Object {$_.KeyProtectorType -eq 'RecoveryPassword'}
            #Backup the recovery password to the device in AAD
            BackupToAAD-BitLockerKeyProtector -MountPoint $BDEMountPoint -KeyProtectorId $KeyId.KeyProtectorId
        }
        catch {
            Write-Host "An error has occurred" $Error[0]
        }
    }
}
#Run function
Invoke-BackupBDEKeys
if ($? -eq $true) {
    $ErrorActionPreference = "Continue"
    #No errors occurred running the last command - reg key can be set as keys have been backed up successfully
    $RegKeyPath = 'custom path'
    $Name = 'custom name'
    New-ItemProperty -Path $RegKeyPath -Name $Name -Value 1 -Force
    Exit
}
else {
    Write-Host "The backup of BDE keys was not successful"
    #Exit
}
Unfortunately, as of PowerShell 7.2.1, the automatic $? variable has no meaningful value after calling a written-in-PowerShell function (as opposed to a binary cmdlet). (More immediately, even inside the function, $? only reflects $false at the very start of the catch block, as Mathias notes.)
If PowerShell functions had feature parity with binary cmdlets, then emitting at least one (non-script-terminating) error, such as with Write-Error, would set $? in the caller's scope to $false, but that is currently not the case.
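A minimal repro of the limitation (the function name is illustrative):

function Test-ErrorSignaling {
    [CmdletBinding()]
    param()
    Write-Error 'something failed'  # non-terminating error
    'some output'                   # execution continues afterwards
}
Test-ErrorSignaling -ErrorAction SilentlyContinue
$?  # -> $true, despite the error - not a reliable success indicator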
You can work around this limitation by using $PSCmdlet.WriteError() from an advanced function or script, but that is quite cumbersome. The same applies to $PSCmdlet.ThrowTerminatingError(), which is the only way to create a statement-terminating error from PowerShell code. (By contrast, the throw statement generates a script-terminating error, i.e. terminates the entire script and its callers - unless a try / catch or trap statement catches the error somewhere up the call stack).
See this answer for more information and links to relevant GitHub issues.
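For completeness, a sketch of the $PSCmdlet.WriteError() workaround mentioned above (the exception message and error ID are illustrative):

function Test-WriteError {
    [CmdletBinding()]
    param()
    # Manually constructing the ErrorRecord is what makes this cumbersome.
    $record = [System.Management.Automation.ErrorRecord]::new(
        [System.Exception]::new('something failed'),
        'SampleErrorId',
        [System.Management.Automation.ErrorCategory]::NotSpecified,
        $null
    )
    $PSCmdlet.WriteError($record)  # unlike Write-Error, this also sets $? in the caller
}
Test-WriteError 2>$null
$?  # -> $false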
As a workaround, I suggest:
Make your function an advanced one, so as to enable support for the common -ErrorVariable parameter - it allows you to collect all non-terminating errors emitted by the function in a self-chosen variable.
Note: The self-chosen variable name must be passed without the $; e.g., to collect the errors in variable $errs, use -ErrorVariable errs; do NOT use Error / $Error, because $Error is the automatic variable that collects all errors that occur in the entire session.
You can combine this with the common -ErrorAction parameter to initially silence the errors (-ErrorAction SilentlyContinue), so you can emit them later on demand. Do NOT use -ErrorAction Stop, because it will render -ErrorVariable useless and instead abort your script as a whole.
You can let the errors simply occur - no need for a try / catch statement: since there is no throw statement in your code, your loop will continue to run even if errors occur in a given iteration.
Note: While it is possible to trap terminating errors inside the loop with try / catch and then relay them as non-terminating ones with $_ | Write-Error in the catch block, you'll end up with each such error twice in the variable passed to -ErrorVariable. (If you didn't relay, the errors would still be collected, but not printed.)
After invocation, check if any errors were collected, to determine whether at least one key wasn't backed up successfully.
As an aside: Of course, you could alternatively make your function output (return) a Boolean ($true or $false) to indicate whether errors occurred, but that wouldn't be an option for functions designed to output data.
Here's the outline of this approach:
function Invoke-BackupBDEKeys {
    # Make the function an *advanced* function, to enable
    # support for -ErrorVariable (and -ErrorAction)
    [CmdletBinding()]
    param()
    # ...
    foreach ($BDEMountPoint in $BitLockerVolumes.mountpoint) {
        # ... Statements that may cause errors.
        # If you need to short-circuit a loop iteration immediately
        # after an error occurred, check each statement's return value; e.g.:
        # if (-not $BDEKeyProtector) { continue }
    }
}
# Call the function and collect any
# non-terminating errors in variable $errs.
# IMPORTANT: Pass the variable name *without the $*.
Invoke-BackupBDEKeys -ErrorAction SilentlyContinue -ErrorVariable errs

# If $errs is an empty collection, no errors occurred.
if (-not $errs) {
    "No errors occurred"
    # ...
}
else {
    "At least one error occurred during the backup of BDE keys:`n$errs"
    # ...
}
Here's a minimal example, which uses a script block in lieu of a function:
& {
    [CmdletBinding()] param() Get-Item NoSuchFile
} -ErrorVariable errs -ErrorAction SilentlyContinue
"Errors collected:`n$errs"
Output:
Errors collected:
Cannot find path 'C:\Users\jdoe\NoSuchFile' because it does not exist.
As stated elsewhere, the try/catch you're using is what is preventing the relay of the error condition. That is by design and the very intentional reason for using try/catch.
What I would do in your case is either create a variable or a file to capture the error info. My apologies to anyone named 'Bob'; it's the variable name that I always use for quick stuff.
Here is a basic sample that works:
$bob = (1,2,"blue",4,"notit",7)
$bobout = @{} #create a hashtable for errors
foreach ($tempbob in $bob) {
    $tempbob
    try {
        $tempbob - 2 #this will fail for a string
    } catch {
        $bobout.Add($tempbob,"not a number") #store a key/value pair (current,msg)
    }
}
$bobout #output the errors
Here we created an array just to use a foreach. Think of it like your $BDEMountPoint variable.
Go through each one and do what you want. In the catch block, you just want to say "not a number" when it fails. Here's the output of that:
-1
0
2
5
Name Value
---- -----
notit not a number
blue not a number
All the numbers worked (you can obviously suppress the output; this is just for the demo).
More importantly, we stored custom text on failure.
Now, you might want a more informative error. You can grab the actual error that happened like this:
$bob = (1,2,"blue",4,"notit",7)
$bobout = @{} #create a hashtable for errors
foreach ($tempbob in $bob) {
    $tempbob
    try {
        $tempbob - 2 #this will fail for a string
    } catch {
        $bobout.Add($tempbob,$PSItem) #store a key/value pair (current,error)
    }
}
$bobout
Here we used the current error record, $PSItem, also commonly referenced as $_.
-1
0
2
5
Name Value
---- -----
notit Cannot convert value "notit" to type "System.Int32". Error: "Input string was not in ...
blue Cannot convert value "blue" to type "System.Int32". Error: "Input string was not in a...
You can also parse the actual error and take action based on it or store custom messages. But that's outside the scope of this answer. :)
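If you do want to branch on the actual error, here's a small sketch of what such parsing could look like (the message pattern matched here is illustrative):

try {
    "blue" - 2
} catch {
    # $PSItem is the ErrorRecord; inspect its message (or exception type) to decide what to do
    if ($PSItem.Exception.Message -match 'Cannot convert value') {
        Write-Host "Conversion failure - storing a friendlier message"
    } else {
        Write-Host "Unexpected error: $PSItem"
    }
}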
I'm running the Rundeck community edition (3.4.3 2021-08-23).
I have a (winRM) Powershell command step that creates a simple JSON output.
@{Hello="World";Simple="Test"} | ConvertTo-Json
In this step #1, I have the "JSON jq key/value mapper" added.
The configuration of the LogFilter is very simple. Just a . (dot) direct passthrough of JSON data. The prefix is set to data.
The following step #2 is a simple output. Running as (winRM) Powershell command. Just writing the variable output.
Write-Output "$result.Simple"
Once I run it, I can see the JSON is produced in step #1 and correctly parsed by the log filter. But if I try to access the variable value in step #2, it's empty and produces "No output". Both steps 1 and 2 run OK, but the variable is empty.
I had success using the "Key Value Data" and "Multiline Regex Data Capture" log filters, but the "JSON jq key/value mapper" seems to work differently.
I have also tried upper- and lowercase variable names, with "result", "data", and without a prefix in the LogFilter configuration, but I can't figure out how to access the data in the variables.
Log Filter: https://resources.rundeck.com/plugins/jq-json-log-filter/
Use ${data.name} for a command step or @data.name@ for a script step. I made a basic example that gets the value from a JSON file.
Rundeck job definition:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: fb6042da-5e4c-46d2-a2c1-d3a486ce019f
  loglevel: INFO
  name: JSONHelloWorld
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: cat /Users/myuser/Desktop/example.json
      plugins:
        LogFilter:
        - config:
            filter: .
            logData: 'true'
            prefix: result
          type: json-mapper
    - exec: echo ${data.firstName}
    keepgoing: false
    strategy: node-first
  uuid: fb6042da-5e4c-46d2-a2c1-d3a486ce019f
Check the result.
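For instance, assuming a hypothetical /Users/myuser/Desktop/example.json like the one below, step #2 (echo ${data.firstName}) would print John:

{"firstName": "John", "lastName": "Doe"}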
I'm trying to set up an IIS application pool via PowerShell 7.1.1.
I read configuration from a JSON file into the variable $configuration, which is handed over to Windows PowerShell because the WebAdministration module isn't natively supported in PowerShell 7.1.1.
A script block is defined in the top-level function; the configuration is injected as a PSCustomObject into the script block and executed in Windows PowerShell.
function Set-AxisAppPool
{
    Write-Message 'Setting up a resource pool for Axis...'
    $executeInWindowsPowerShellForCompatibilityReasons = {
        param (
            [Parameter(Mandatory)]
            [ValidateNotNullOrEmpty()]
            [PSCustomObject]
            $Configuration
        )

        Import-Module WebAdministration

        Remove-WebAppPool -Name $Configuration.AppPool.Name -Confirm:$false -ErrorAction:SilentlyContinue
        New-WebAppPool -Name $Configuration.AppPool.Name -Force | Write-Verbose

        $params = @{
            Path  = "IIS:\AppPools\$($Configuration.AppPool.Name)"
            Name  = 'processModel'
            Value = @{
                userName     = $Configuration.AxisUser.Name
                password     = $Configuration.AxisUser.Password
                identitytype = 'SpecificUser'
            }
        }
        Set-ItemProperty @params
    }
    powershell -NoLogo -NoProfile $executeInWindowsPowerShellForCompatibilityReasons -Args $configuration # This is line 546
}
When the configuration JSON exceeds a certain nesting level, PowerShell can't pass this deserialized JSON (the PSCustomObject) into Windows PowerShell.
Program 'powershell.exe' failed to run: The Process object must have the UseShellExecute property set to false in order to use environment variables.
At C:\Users\JohnDoe\Desktop\Localhost automatization\Set-AxisEnvironment.ps1:546 char:5
+ powershell -NoLogo -NoProfile $executeInWindowsPowerShellForCompa ...
It literally works with n levels of objects in the JSON and doesn't with n+1 levels of objects in the configuration JSON. The JSON schema is validated, and deserialization works as expected.
When I use Start-Process to invoke Windows PowerShell, I get a different problem. Does anybody have a hint on this one?
Update
This seems to be a bug in PowerShell.
I suspect it is the size of the argument list overflowing into other fields, thus giving you weird error messages. From the Start-Process documentation:

The length of the string assigned to the Arguments property must be less than 32,699.

If you are passing a configuration that is larger than 32,699 characters (including spaces), then that is likely your problem. It would likely take those first 32,699 characters and then continue into the next field, UseShellExecute, which would receive a character that is not zero or false, and thus true. This would trip the wrong, misleading error message.
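If that is the cause, one possible workaround (a sketch under the assumption that a temp-file hand-off is acceptable; the file name and -Depth value are illustrative) is to persist the configuration and pass only its path:

# Serialize the configuration to a temp file instead of the command line
$tempFile = Join-Path ([System.IO.Path]::GetTempPath()) 'axis-configuration.json'
$configuration | ConvertTo-Json -Depth 100 | Set-Content -Path $tempFile

$executeInWindowsPowerShell = {
    param([string]$Path)
    # Re-read the configuration inside Windows PowerShell
    $Configuration = Get-Content -Path $Path -Raw | ConvertFrom-Json
    # ... same WebAdministration logic as in the question ...
}
powershell -NoLogo -NoProfile $executeInWindowsPowerShell -Args $tempFile
Remove-Item $tempFile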
Say I have a release pipeline in Azure DevOps, written in YAML, which has two tasks: one for reading JSON from a file, and a second for setting a key in a different JSON file using the JSON read in the first task. I have the following pipeline.yml:
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: PowerShell@2
  name: ReadMetadataJson
  inputs:
    filePath: 'MetadataReader.ps1'
    arguments: 'Run MetadataReader.ps1 -pathToMetadata metadata.json'
- task: PowerShell@2
  name: SetAppSetting
  inputs:
    filePath: 'AppSettingSetter.ps1'
    arguments: 'Run AppSettingSetter.ps1 -pathToAppSetting SomeApp/Data.json -appSettingKey testkey -appSettingValue $($(ReadMetadataJson)).testkey'
- script: echo $(ReadMetadataJson.metadata)
Below are the PowerShell scripts called from each task.
Powershell 1
# Read From the Metadata.json
param ($pathToMetadata)
echo $pathToMetadata
$metadata = Get-content $pathToMetadata | out-string | ConvertFrom-Json
Write-Output "Metadata Json from metadata reader ps script - $metadata"
echo "##vso[task.setvariable variable=metadata;]$metadata"
Powershell 2
# For now just accept the values in parameter and log them
param ($pathToAppSetting, $appSettingKey, $appSettingValue)
echo "pathToAppSetting : $pathToAppSetting"
echo "appSettingKey : $appSettingKey"
echo "appSettingValue : $appSettingValue"
# Code to set in another file. I have this working, so omitting for brevity
And these are the JSON files:
Metadata.json
{
  "testkey": "TestValueFromMetadata",
  "testkey1": "TestValueFromMetadata1"
}
appSetting.json
{
  "testkey": "TestValueInAppSetting",
  "testkey1": "TestValueInAppSetting1"
}
The problem is when I want to return the JSON data as output from the first task and use it in the second task to pass the parameter to the second PowerShell script. Below is a screenshot of the pipeline result after I run it.
As can be seen, it says ReadMetadataJson.metadata: command not found. I have been following the Microsoft documentation as a reference and have searched for other articles, but all I could find was handling values like strings or integers, not a JSON object. What is it that I am missing or doing wrong?
You can convert your JSON object to a string (ConvertTo-Json) and pass it as a variable to the second script.
Then in the second script, you just parse the string back into a JSON object, using the ConvertFrom-Json cmdlet.
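A sketch of that approach, following the scripts from the question (the -Compress switch and the isOutput=true flag are assumptions, used to keep the string on one line and make it referenceable across tasks):

# MetadataReader.ps1 - emit the JSON as a single-line string
param ($pathToMetadata)
$metadata = Get-Content $pathToMetadata -Raw | ConvertFrom-Json
$metadataJson = $metadata | ConvertTo-Json -Compress
Write-Host "##vso[task.setvariable variable=metadata;isOutput=true]$metadataJson"

# AppSettingSetter.ps1 - turn the string back into an object
param ($appSettingValue)   # would receive $(ReadMetadataJson.metadata)
$metadata = $appSettingValue | ConvertFrom-Json
echo $metadata.testkey     # -> TestValueFromMetadata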
Besides the method that Hugo mentioned above, there is another solution that can achieve what you want without any additional steps.
Just add one line into your MetadataReader.ps1:
param ($pathToMetadata)
echo $pathToMetadata
$metadata = Get-content $pathToMetadata | out-string | ConvertFrom-Json
$metadata | Get-Member -MemberType NoteProperty | % { $o = $metadata.($_.Name); Write-Host "##vso[task.setvariable variable=$($_.Name);isOutput=true]$o" }
Then it will parse those JSON objects into corresponding variables once the JSON file content has been read.
(I make use of the working logic of Terraform outputs here.)
Then you can directly use {reference name}.{object name} to access the corresponding JSON value.
- task: PowerShell@2
  name: ReadMetadataJson
  inputs:
    filePath: 'MetadataReader.ps1'
    arguments: 'Run MetadataReader.ps1 -pathToMetadata metadata.json'
- task: PowerShell@2
  name: SetAppSetting
  inputs:
    filePath: 'AppSettingSetter.ps1'
    arguments: 'Run AppSettingSetter.ps1 -pathToAppSetting Data.json -appSettingKey testkey -appSettingValue $(ReadMetadataJson.testkey)'
- script: echo $(ReadMetadataJson.testkey)
Note: I made changes here: -appSettingValue $(ReadMetadataJson.testkey)
I have a log file called log.json that's formatted like this:
{"msg": "Service starting up!"}
{"msg": "Running a job!"}
{"msg": "Error detected!"}
And another file called messages.json, which looks like this:
{"msg": "Service starting up!", "out": "The service has started"}
{"msg": "Error detected!", "out": "Uh oh, there was an error!"}
{"msg": "Service stopped", "out": "The service has stopped"}
I'm trying to write a function using jq that reads in both files, and whenever it finds a msg in log.json that matches a msg in messages.json, print the value of out in the corresponding line in messages.json. So, in this case, I'm hoping to get this as output:
"The service has started"
"Uh oh, there was an error!"
The closest that I've been able to get so far is the following:
jq --argfile a log.json --argfile b messages.json -n 'if ($a[].msg == $b[].msg) then $b[].out else empty end'
This successfully performs all of the comparisons that I'm hoping to make. However, rather than printing the specific out that I'm looking for, it instead prints every out whenever the if statement returns true (which makes sense: $b[].out is not tied to the matching index, so it yields each of them). So, this statement outputs:
"The service has started"
"Uh oh, there was an error!"
"The service has stopped"
"The service has started"
"Uh oh, there was an error!"
"The service has stopped"
So at this point, I need some way to ask for $b[current_index].out, and just print that. Is there a way for me to do this (or an entirely separate approach that I can use)?
messages.json effectively defines a dictionary, so let's begin by creating a JSON dictionary which we can look up easily. This can be done conveniently using INDEX/2, which (in case your jq does not have it) is defined as follows:
def INDEX(stream; idx_expr):
  reduce stream as $row ({};
    .[$row|idx_expr|
      if type != "string" then tojson
      else .
      end] |= $row);
A first-cut solution is now straightforward:
INDEX($messages[]; .msg) as $dict
| inputs
| $dict[.msg]
| .out
Assuming this is in program.jq, an appropriate invocation would be as follows (note especially the -n option):
jq -n --slurpfile messages messages.json -f program.jq log.json
The above will print null if the .msg in the log file is not in the dictionary. To filter out these nulls, you could (for example) add select(.) to the pipeline.
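That is, the null-filtering variant of the program would be:

INDEX($messages[]; .msg) as $dict
| inputs
| $dict[.msg]
| select(.)
| .out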
Another possibility would be to use the original .msg, as in this variation:
INDEX($messages[]; .msg) as $dict
| inputs
| . as $in
| $dict[.msg]
| .out // $in.msg