The best way to run several functions

I need help with running different functions at the same time with the same arguments.
I have a PowerShell script that is built like this:
$ObjectsArray = @($Object1, $Object2, $Object3)

function function1($arg) {
    # do something...
}

function function2($arg) {
    # do something...
}

function function3($arg) {
    # do something...
}

foreach ($Object in $ObjectsArray) {
    function1 -arg $Object.Name
    function2 -arg $Object.Name
    function3 -arg $Object.Name
}
In my script I have many functions and I want to optimize the code.
Is there any way to run all of these functions in one go, maybe with a wildcard or regex?
All of the functions use the same arguments.
Thanks!!

Short answer: yes, it's possible.
Longer answer: you will need to separate the various executions into PowerShell jobs. This is a sort of multi-threading, but I don't know enough to tell you whether it's actually peeling threads from the (virtual) core(s).
Here is how you call an individual job:
PS C:\temp\StackWork\csvimport> start-job -ScriptBlock {Get-ChildItem c:\ | select Mode,Name | ft -w -auto}
Id  Name  PSJobTypeName  State    HasMoreData  Location   Command
--  ----  -------------  -----    -----------  --------   -------
9   Job9  BackgroundJob  Running  True         localhost  Get-ChildItem c:\ | se...
There you see that the output is not the result of the command, but the properties of the job itself. That 'State' field is how you check whether the job is still running or completed.
Then this is how you get the resulting output of the job:
PS C:\temp\StackWork\csvimport> receive-job 9
Mode Name
---- ----
d----- inetpub
d----- PerfLogs
d-r--- Program Files
d-r--- Program Files (x86)
d----- Python27
d----- Quarantine
d----- Tech
d----- Temp
d-r--- Users
d----- Windows
Here is how you get the info on a running job:
PS C:\temp\StackWork\csvimport> get-job -Id 9
Id  Name  PSJobTypeName  State      HasMoreData  Location   Command
--  ----  -------------  -----      -----------  --------   -------
9   Job9  BackgroundJob  Completed  False        localhost
Expanding this really depends on what you need to see in the output, and what you need to trigger as a next action. In your example it's only really 3 parallel runs, but as you scale up you may need to track running jobs and set limits so that some complete before new ones start. A good rule of thumb I've always heard is two running threads x (number of cores - 1).
All of that is very specific to your needs, but hopefully this helps with the basis of implementation.
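Applied to the original example, a minimal sketch might look like the following (the function bodies are placeholders, not from the original post). Note that background jobs run in a separate session, so the functions have to be carried over, e.g. via -InitializationScript:

# Functions are not visible inside a job's session by default;
# -InitializationScript re-declares them there. Bodies are placeholders.
$funcDefs = {
    function function1($arg) { "function1 got $arg" }
    function function2($arg) { "function2 got $arg" }
    function function3($arg) { "function3 got $arg" }
}

$jobs = foreach ($object in $ObjectsArray) {
    foreach ($name in 'function1', 'function2', 'function3') {
        Start-Job -InitializationScript $funcDefs -ScriptBlock {
            param($funcName, $arg)
            & $funcName -arg $arg
        } -ArgumentList $name, $object.Name
    }
}

# Wait for all jobs, then collect their output.
$jobs | Wait-Job | Receive-Job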

In case you want to avoid the repetition of explicitly enumerating individual function calls that take the same arguments:
# Get function-info objects for the target functions by name pattern
# (wildcard expression), as an array.
# Note: Alternatively, you can store the names explicitly in an array:
#   $funcsToCall = 'function1', 'function2', 'function3'
$funcsToCall = Get-Item function:function?

foreach ($object in $ObjectsArray) {
    # Loop over all functions in the array and call each.
    foreach ($func in $funcsToCall) {
        # Use & (the call operator) to call a function
        # by function-info object or name.
        & $func -arg $object.Name
    }
}
This will still execute the functions sequentially, however.
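As an aside, since the question mentions regex: Get-Item on the function: drive only supports wildcards, but you can filter with a true regular expression via Where-Object. A small sketch, using the same placeholder names:

# Select functions whose names match a regular expression
# (here: 'function' followed by one or more digits).
$funcsToCall = Get-ChildItem function: | Where-Object Name -Match '^function\d+$'

foreach ($object in $ObjectsArray) {
    foreach ($func in $funcsToCall) {
        & $func -arg $object.Name
    }
}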

How do I turn this into a function?

I found this script on this site (thanks Nick!):
$files = gci $srcpath

foreach ($srcfile in $files) {
    # Build destination file path
    $dstfile = [string]($dstpath, '\', $srcfile.name -join '')

    # Copy the file
    cp $srcfile.FullName $dstfile.FullName -whatif

    # Make sure file was copied and exists before copying over properties/attributes
    if ($dstfile.Exists) {
        # $dstfile.CreationTime = $srcfile.CreationTime
        # $dstfile.LastAccessTime = $srcfile.LastAccessTime
        $dstfile.LastWriteTime = $srcfile.LastWriteTime
        # $dstfile.Attributes = $srcfile.Attributes
        # $dstfile.SetAccessControl($srcfile.GetAccessControl())
    }
}
I want to turn this into a function so that it recurses through directories, copying the timestamps from the source (a cloud folder) to its equivalent at the destination.
Now, I have tried calling the function using multiple variables, trying different methods such as:
$src = $args[0]
$dst = $args[1]
Get-Timestamp $src, $dst
It either uses the default folder that the script is running from, or fails when it tries to list the contents after combining the two variables together.
Even setting up the function like so:
[CmdletBinding(DefaultParameterSetName = 'Srcpath')]
param (
    [Parameter(Mandatory = $true,
        ParameterSetName = 'Srcpath',
        Position = 0)]
    [string[]]$Srcpath,

    # [Parameter(Mandatory = $true,
    #     ParameterSetName = 'DSTpath',
    #     Position = 1)]
    [string]$DSTpath
)
$PSCmdlet.ParameterSetName
is not producing the expected result.
This works on its own, doing one folder at a time, but not when doing subfolders.
Suggestions would be greatly appreciated.
From the PowerShell ISE or VSCode, select a simple or advanced function snippet.
PowerShell ISE - use Ctrl+J, type function, hit Enter, and you get this:
function MyFunction ($param1, $param2)
{
}
VSCode - use Ctrl+Alt+J, type function, hit Enter, and you get this:
function FunctionName {
    param (
        OptionalParameters
    )
}
Put your code below the param block. Now, both of those function names are not best practice; they should be verb-noun, as documented here:
About Functions | MSDocs
Function Names
You can assign any name to a function, but functions
that you share with others should follow the naming rules that have
been established for all PowerShell commands.
Function names should consist of a verb-noun pair in which the verb
identifies the action that the function performs and the noun
identifies the item on which the cmdlet performs its action.
Functions should use the standard verbs that have been approved for
all PowerShell commands. These verbs help us to keep our command names
simple, consistent, and easy for users to understand.
For more information about the standard PowerShell verbs, see Approved
Verbs in the Microsoft Docs.
You are not getting a recursive search because you are not telling it to do one. That is what the -Recurse parameter of the Get-ChildItem cmdlet is for.
# Get specifics for a module, cmdlet, or function
(Get-Command -Name Get-ChildItem).Parameters
(Get-Command -Name Get-ChildItem).Parameters.Keys
# Results
<#
Path
LiteralPath
Filter
Include
Exclude
Recurse
Depth
Force
Name
Verbose
Debug
ErrorAction
WarningAction
InformationAction
ErrorVariable
WarningVariable
InformationVariable
OutVariable
OutBuffer
PipelineVariable
UseTransaction
Attributes
Directory
File
Hidden
ReadOnly
System
#>
Get-Help -Name Get-ChildItem -Examples
# Results
<#
Example 3: Get child items in the current directory and subdirectories
Get-ChildItem -Path C:\Test\*.txt -Recurse -Force
Directory: C:\Test\Logs\Adirectory
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/12/2019 16:16 20 Afile4.txt
-a-h-- 2/12/2019 15:52 22 hiddenfile.txt
-a---- 2/13/2019 13:26 20 LogFile4.txt
Directory: C:\Test\Logs\Backup
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2/12/2019 16:16 20 ATextFile.txt
-a---- 2/12/2019 15:50 20 LogFile3.txt
#>
Get-Help -Name Get-ChildItem -Full
Get-Help -Name Get-ChildItem -Online
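Putting those pieces together, here is a minimal sketch of a recursive version (the name Copy-Timestamp and the assumption that the destination tree mirrors the source tree are mine, not from the question; -File requires PowerShell 3+):

function Copy-Timestamp {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, Position = 0)]
        [string]$SrcPath,

        [Parameter(Mandatory = $true, Position = 1)]
        [string]$DstPath
    )

    # -Recurse walks the whole source tree; -File limits it to files.
    foreach ($srcFile in Get-ChildItem -Path $SrcPath -Recurse -File) {
        # Rebuild the equivalent path under the destination root.
        $relative = $srcFile.FullName.Substring($SrcPath.Length).TrimStart('\')
        $dstFile  = Join-Path -Path $DstPath -ChildPath $relative

        if (Test-Path -LiteralPath $dstFile) {
            # Get-Item returns a FileInfo object, whose LastWriteTime is settable.
            (Get-Item -LiteralPath $dstFile).LastWriteTime = $srcFile.LastWriteTime
        }
    }
}

# Example call:
# Copy-Timestamp -SrcPath 'C:\CloudFolder' -DstPath 'D:\Backup\CloudFolder'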
Lastly, aliases/shorthand names are for interactive throw-away code, not for production/shared scripts. As discussed here:
• Best Practices for aliases
https://devblogs.microsoft.com/scripting/best-practice-for-using-aliases-in-powershell-scripts
https://devblogs.microsoft.com/scripting/using-powershell-aliases-best-practices
Why worry about aliases in the first place? ... There are two things
at work when it comes to a script. The first is that no alias is
guaranteed to exist - even aliases that are created by Windows
PowerShell. ...
... and if you create custom ones, you can step on one already configured on a host.
Finally, PSScriptAnalyzer, VSCode, etc., flag all known aliases as errors until you expand them to their full names. Custom aliases you need to expand yourself; never assume folks know them or care to. PowerShell is verbose for a reason (good/bad/indifferent is all opinion): it's easy to read for even the most inexperienced, self-documenting, and easier to maintain. Make sure to read the available PowerShell best-practice references, and remember: you write code for those who will use it, follow you, or maintain it.

Octave parallel package: User-defined function call issues

I have trouble calling a user-defined function with pararrayfun (likewise for parcellfun). When I execute the following code:
pkg load parallel

function retval = mul(x,y)
  retval = x*y;
endfunction

vector_x = 1:2^3;
vector_y = 1:2^3;

vector_z = pararrayfun(nproc, @(x,y) mul(x,y), vector_x, vector_y)
vector_z = pararrayfun(nproc, @(x,y) x*y, vector_x, vector_y)
I get the following output:
vector_z =

  -1  -1  -1  -1  -1  -1  -1  -1

vector_z =

   1   4   9  16  25  36  49  64
That is, the call to the user-defined function does not seem to work, whereas the same thing as an anonymous function works.
The machine is x86_64 with Debian bullseye and 5.10.0-1-amd64 kernel. Octave's version is 6.1.1~hg.2020.12.27-1. The pkg list command gives me:
Package Name  | Version | Installation directory
--------------+---------+-----------------------
dataframe     |  1.2.0  | /usr/share/octave/packages/dataframe-1.2.0
parallel     *|  4.0.0  | /usr/share/octave/packages/parallel-4.0.0
struct       *|  1.0.16 | /usr/share/octave/packages/struct-1.0.16
Funny thing is that the same code works flawlessly on armv7l with Debian buster and a 4.14.150-odroidxu4 kernel. That is, the call to the user-defined function and the anonymous function both produce the output:
parcellfun: 8/8 jobs done
vector_z =

   1   4   9  16  25  36  49  64

parcellfun: 8/8 jobs done
vector_z =

   1   4   9  16  25  36  49  64
On that machine Octave's version is 4.4.1 and pkg list gives:
Package Name  | Version | Installation directory
--------------+---------+-----------------------
dataframe     |  1.2.0  | /usr/share/octave/packages/dataframe-1.2.0
parallel     *|  3.1.3  | /usr/share/octave/packages/parallel-3.1.3
struct       *|  1.0.15 | /usr/share/octave/packages/struct-1.0.15
What is wrong and how can I fix this behavior?
This is probably a bug, but do note that the new version of parallel has introduced a few limitations as per its documentation (see also the latest release news), which may relate to what's happening here.
Having said that, I want to clarify this sentence:
the call to the user-defined function does not seem to work, whereas the same thing as an anonymous function works.
That's not what's happening. You're passing an anonymous function in both cases. It's just that the first calls mul inside, and the second calls mtimes.
As for your error (bug?), this may have something to do with mul being a 'command-line' function. It's not clear from the documentation whether command-line functions are a limitation and this is simply an oversight in the docs, or whether ill-treatment of command-line functions is a genuine bug. I think if you put it in its own file it should work fine. (And equally, if you do, it's worth passing it as a handle directly rather than wrapping it inside another anonymous function.)
Having said that, I think the -1's you see are basically "error returns" from inside pararrayfun's guts. The reason for this is the following: if, instead of creating mul as a command-line function, you make it an anonymous function:

mul = @(x,y) x * y
Observe what the three calls below return:
x = pararrayfun( nproc, @(x,y) mul(x,y), vector_x, vector_y )  # now it works as expected
x = pararrayfun( nproc, mul, vector_x, vector_y )              # same: mul is a valid handle expecting two inputs
x = pararrayfun( nproc, @mul, vector_x, vector_y )             # x = -1 -1 -1 -1 -1 -1 -1 -1
If you had tried the last command with the normal arrayfun, you would have seen an error relating to the fact that you accidentally passed @mul instead of mul, when mul is already a proper handle. In pararrayfun, it just does the calculation, and presumably -1 was the return value from an internal error.
I don't know exactly why a command-line function fails, but presumably it has something to do with the fact that pararrayfun creates separate Octave instances under the hood, which need access to all function definitions, and perhaps command-line functions cannot be transferred to / compiled in the new instance as easily as in the parent instance, because of the way they are created in the current session.
In any case, I think you'll solve your problem if, instead of a command-line function definition, you create an external function or (for simple enough functions) a handle to an anonymous function.
However, I would still submit a bug report to the Octave bug tracker to help the project :)

PowerShell - My first attempt at a function can't find the function

I am trying to include a function in a PowerShell script, but I get the message that the function does not exist.
I have the function just below where I am creating the parameters, so I assume I'm missing something. I have a number of folders that I want to back up and want to call the function for each folder.
CODE STARTS HERE (code above and param creation left off for brevity).
$ImagesSaveTo = "s3://crisma-backup/HERTRICH/Milford/$dow/images"

#
# Call Backup
BackupCrismaByShop -$LogFile $CrismaSource $CrismaSaveTo $ImagesSource

# Begin Backup Function
# ------------------------------------------------------------------------
function BackupCrismaByShop {
    param(
        [string]$LogFile,
        [string]$CrismaSource,
        [string]$CrismaSaveTo,
        [string]$ImagesSource
    )
    # Backup script....
}
PowerShell is an interpreted language, which means files are read top to bottom and interpreted as they go, so to speak.
So, if a function is called before you have defined it, the PowerShell interpreter does not know what you are talking about.
You can try to reorder your code, and this should do the trick:
# DEFINE FUNCTION
function BackupCrismaByShop {
    param(
        [string]$LogFile,
        [string]$CrismaSource,
        [string]$CrismaSaveTo,
        [string]$ImagesSource
    )
    # Backup script....
}

# YOUR VARIABLES AND OTHER STUFF
$ImagesSaveTo = "s3://crisma-backup/HERTRICH/Milford/$dow/images"

# CALLING THE FUNCTION
BackupCrismaByShop -LogFile $LogFile -CrismaSource $CrismaSource -CrismaSaveTo $CrismaSaveTo -ImagesSource $ImagesSource
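Another common pattern (a sketch; the file name BackupFunctions.ps1 is a hypothetical example) is to keep the function definitions in their own file and dot-source it at the top of the script, which guarantees they exist before any call:

# BackupFunctions.ps1 -- holds the function definition(s)
function BackupCrismaByShop {
    param(
        [string]$LogFile,
        [string]$CrismaSource,
        [string]$CrismaSaveTo,
        [string]$ImagesSource
    )
    # Backup script....
}

# Main script: dot-source the definitions first, then call as usual.
. "$PSScriptRoot\BackupFunctions.ps1"

BackupCrismaByShop -LogFile $LogFile -CrismaSource $CrismaSource -CrismaSaveTo $CrismaSaveTo -ImagesSource $ImagesSource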
I imagine you are using PowerShell ISE to code. Let me suggest you try Visual Studio Code: it provides recommendations and warnings as you type, such as variables you are not using, functions called but not yet defined, etc.
Thanks.

Run Simultaneous Functions for Patch Automation with PowerShell

I've seen different posts about running functions simultaneously or in parallel, but the code in the answers has not quite worked for me. I'm doing patching automation, and the functions I have all work and do their thing separately, but since we work with 200+ computers, waiting for each function to finish with its batch of computers kind of defeats the purpose. I have the code in one script; in summary it's structured like this:
define global variables
    $global:varN
define sub-functions
    function sub-function1()
    function sub-functionN()
define main functions
    function InstallGoogleFunction($global:varN)
    {
        $var1
        $result1 = sub-function1
        $resultN = sub-functionN
    }
    function InstallVLCFunction($global:varN)
    {
        "similar code as above"
    }
    function InstallAppFunction($global:varN)
    {
        "similar code as above"
    }
The functions will all install a different app/software and will write output to a file. The only thing is, I cannot seem to run all the installation functions without waiting for the first one to finish. I tried Start-Job code, and it executed and displayed a table-like output, but when verifying the computers, none of them had anything running in Task Manager. Is there a way PowerShell can run these installation functions at the same time? If I have to resort to one-by-one, I will call the functions in order of the least time taken or the fewest computers they need to install on, but I just wanted someone to better explain whether this can be done.
You can use multithreading to achieve this. The example below shows how you can trigger tasks on multiple machines using multithreading in PowerShell.
Replace the list of machines in $Computers with your machines and give it a try. The example below gets the disk details of the given machines.
# This is your script you want to execute
$wmidiskblock =
{
    Param($ComputerName = "LocalHost")
    Get-WmiObject -ComputerName $ComputerName -Class win32_logicaldisk |
        Where-Object {$_.drivetype -eq 3}
}

# List of servers
$Computers = @("Machine1", "Machine2", "Machine3")

# Start all jobs in parallel
foreach ($Computer in $Computers)
{
    Write-Host $Computer
    Start-Job -ScriptBlock $wmidiskblock -ArgumentList $Computer
}

Get-Job | Wait-Job
$out = Get-Job | Receive-Job
$out | Export-Csv 'c:\temp\wmi.csv'
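With 200+ computers you may not want all the jobs running at once. Here is a minimal throttling sketch on top of the example above (the cap of 10 is an arbitrary assumption; tune it for your environment):

$throttleLimit = 10   # arbitrary cap, not from the original post

foreach ($Computer in $Computers)
{
    # Block until the number of running jobs drops below the cap.
    while ((Get-Job -State Running).Count -ge $throttleLimit) {
        Start-Sleep -Seconds 1
    }
    Start-Job -ScriptBlock $wmidiskblock -ArgumentList $Computer
}

Get-Job | Wait-Job
Get-Job | Receive-Job | Export-Csv 'c:\temp\wmi.csv'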

Korn Shell 93: Function Sends Non-null Value, But Calling Function Gets Null Value

Readers:
I've spent a few days investigating the following incidents without successfully identifying the cause. I'm writing in regard to ksh scripts I wrote to the ksh88 standard, which have run for years on many HP-UX/PA-RISC and Solaris/SPARC platforms, and even a few Linux/x86_64 platforms ... until this week. Upon running the scripts on CentOS 6.4/x86-64 with Korn shell "Version AJM 93u+ 2012-08-01", non-null values being returned to the caller by some functions are retrieved by the caller as null values.
Specifically, in the edited excerpts below, the variable ToDo always contains a value in fSendReqToSvr just before fSendReqToSvr returns. When fSendReqToSvr returns in fGetFileStatusFromSvr, ToDo is assigned a null value. This script runs as a child invoked by another ksh script run from cron. I've included the code reassigning stdout and stderr on the chance this is somehow significant.
What don’t I understand?
OS:
CentOS-6.4 (x86-64) Development Installation
Korn Shell:
Version: AJM 93u+ 2012-08-01
Package: Ksh.x86_64 20120801-10.el6
...
function fLogOpen
{
    ...
    exec 3>$1   #C# Assigned Fd 3 to a log file

    #C# stdout and stderr are redirected to log file as insurance that
    #C# no "errant" output from script (1700 lines) "escapes" from script.
    #C# stdout and stderr restored in fLogClose.
    exec 4>&1
    exec 1>&3
    exec 5>&2
    exec 2>&3
    ...
}
...
#C# Invokes curl on behalf of caller and evaluates
function fSendReqToSvr
{
    typeset Err=0 ... \
        ToDo=CONTINUE ... \
        CL="$2" ...
    ...
    curl $CL > $CurlOutFFS 2>&1 &
    gCurlPId=$!
    while (( iSecsLeft > 0 )) ; do
        ...
        #C# Sleep N secs, check status of curl with "kill -0 $gCurlPId"
        #C# and if curl exited, get return code from "wait $gCurlPId".
        ...
    done
    ...
    #C# Evaluate curl return code and contents of CurlOutFFS file to
    #C# determine what to set ToDo to.
    ...
    print -n -- "$ToDo"   #C# ToDo confirmed to always have a value here
    return $Err
}
...
function fGetFileStatusFromSvr
{
    typeset Err=0 ... \
        ToDo=CONTINUE ... \
    ...
    ...
    ToDo=$( fSendReqToSvr "$iSessMaxSecs" "$CurlCmdLine" )
    Err=$?
    #C# ToDo contains null here
    ...
    return $Err
}
One problem here is that we don't see the code responsible for the ToDo result.
If this worked properly with ksh88 before, you may have a problem if you don't have good tests for the individual functions, as ksh88 and ksh93 have many subtle and not-so-subtle differences.
Paradoxically, ksh93 is easier to drop in as a replacement for /bin/sh (the mythical Bourne shell :-) than for ksh88.
The reason is that ksh88 introduced extensions to the shell that were further enhanced and changed in ksh93.
One example that may touch on your question is arithmetic, which is limited to integer arithmetic in ksh88 and was extended to floating point in ksh93.
Any utility expecting integer values can be fed results from arithmetic expressions in ksh88; it may choke on the floating-point results returned in ksh93.
Please supply a proper code sample that shows how the ToDo value is determined.