Undefine all shared variables on the current node

I have defined multiple shared tables on the current node. Is there a way, or a built-in function, to undefine them all at once?

Try the following user-defined functions to see whether they meet your requirements:
def existsShareVariable(varName){
    return objs(true).name.find(varName) >= 0
}

def ClearAllSharedTables(){
    sharedTables = exec name from objs(true) where form="TABLE", shared=true
    for(sharedTable in sharedTables){
        print("Undef Shared Table: " + sharedTable)
        try{
            undef(sharedTable, SHARED)
        }
        catch(ex){
            print(ex)
        }
    }
    print("All shared tables have been cleared!")
}
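For example (a minimal sketch; the table and variable names are hypothetical), you could share a couple of tables and then clear them all in one call:

t1 = table(1..3 as id)
share t1 as sharedT1
t2 = table(100..102 as id)
share t2 as sharedT2
ClearAllSharedTables()    // prints one line per undef, then the final confirmation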


How to organize functions into script files to allow call from command line?

I have an advanced function with a number of helper functions in a .ps1 script file.
How do I organize my functions so that I can call AdvFunc from the command line without breaking AdvFunc's ability to use the helper functions?
Abbreviated contents of script.ps1:
Function AdvFunc {
    [cmdletbinding(DefaultParameterSetName='Scheduled')]
    Param (
        [Parameter(ValueFromPipeline=$true, ParameterSetName='Scheduled')]$SomeValue
    )
    Begin {
        $value = Helper1 $stuff
    }
    Process {
        # do stuff
    }
    End {
        # call Helper2
    }
}

Function Helper1 {
    Param ($stuff)
    # do stuff
    Return $valueForAdvFunc
}

Function Helper2 {
    # do other stuff
}

# Entry Point
$collection | AdvFunc
script.ps1 is currently launched by a scheduler and processes pre-defined $collection as expected.
The problem is that I need to call AdvFunc from the command line with a different parameter set. I've added the 'AdHoc' parameter set below. This will be used to send a different collection to AdvFunc. As I understand things, this means the first lines of script.ps1 will now need to be:
Param (
    [Parameter(ValueFromPipeline=$true, ParameterSetName='Scheduled')]$SomeValue,
    [Parameter(ParameterSetName='AdHoc')][string]$OtherValue1,
    [Parameter(ParameterSetName='AdHoc')][string]$OtherValue2
)
Obviously this means the helper functions can no longer be in the same .ps1 file.
Does this mean I will now need 3 script files, each dot-sourcing the other as needed?
Should I use: script.ps1 (containing only AdvFunc), helpers.ps1 (containing the several helper functions), and collection.ps1 (with only $collection being piped to script.ps1)?
Are there any alternatives?
Proposed solution: Use a launcher script that sources script.ps1. All functions (AdvFunc and all helper functions) reside in script.ps1.
# launcher.ps1
[cmdletbinding(DefaultParameterSetName='Scheduled')]
Param (
    [Parameter(ParameterSetName='AdHoc', Mandatory=$true)][ValidateNotNullOrEmpty()][string]$Param1,
    [Parameter(ParameterSetName='AdHoc', Mandatory=$true)][ValidateNotNullOrEmpty()][string]$Param2,
    [Parameter(ParameterSetName='AdHoc', Mandatory=$true)][ValidateNotNullOrEmpty()][string]$Param3
)

. .\script.ps1

if ($PSBoundParameters.ContainsKey('Param1')) {
    AdvFunc -Param1 $Param1 -Param2 $Param2 -Param3 $Param3
}
else {
    $collection | AdvFunc
}
The idea is to accommodate either no parameters (sending the pre-defined $collection to AdvFunc) or a full 'AdHoc' set of parameters (sending the command-line-defined collection to AdvFunc). The empty 'Scheduled' parameter set may not be necessary to accommodate the no-parameter option.
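For illustration, a hedged sketch of the two ways the launcher would then be invoked (the parameter values are hypothetical):

# Scheduled run: no parameters, so the pre-defined $collection is piped to AdvFunc
.\launcher.ps1

# Ad-hoc run: the 'AdHoc' parameter set is passed through to AdvFunc
.\launcher.ps1 -Param1 'value1' -Param2 'value2' -Param3 'value3'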

PowerShell: Cannot find an overload for IndexOf and the argument count: 2

I am attempting to use a portion of a script I wrote to return a list of local groups a specified user may be part of on a remote server, so they can be quickly removed from said local group. Everything seems to work fine until the number of groups the person belongs to drops below two. When there is only one group, I get the following error:
Cannot find an overload for "IndexOf" and the argument count: "2".
At line:177 char:30
+ [string]([array]::IndexOf <<<< ($localGroups, $_)+1) + ". " + $_
+ CategoryInfo : NotSpecified: (:) [], MethodException
+ FullyQualifiedErrorId : MethodCountCouldNotFindBest
Here are the Functions I wrote for this particular script...
This function will return a list of groups the given user is part of:
function Get-LocalGroupAccess
{
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory=$true)]
        [string]$fqdn,
        [Parameter(Mandatory=$true)]
        [string]$userName
    )
    Process
    {
        $serverPath = [ADSI]"WinNT://$fqdn,computer"
        $serverPath.PSBase.Children | where {$_.PSBase.SchemaClassName -eq 'group'} | foreach {
            $lGroup = [ADSI]$_.PSBase.Path
            $lGroup.PSBase.Invoke("Members") | foreach {
                $lMember = $_.GetType().InvokeMember("Name", 'GetProperty', $null, $_, $null).Replace("WinNT://","")
                if ($lMember -like "$userName")
                {
                    $localAccess += @($lGroup.Name)
                }
            }
        }
        return($localAccess)
    }
}
This function sets the User Object (I am not sure this is the technical term):
function Set-UserObj($userDomain, $userName)
{
    $userObj = [ADSI]"WinNT://$userDomain/$userName"
    return ($userObj)
}
This function sets the FQDN (checking that it is pingable):
function Set-FQDN($fqdn)
{
    do{
        $fqdn = Read-Host "Enter the FQDN"
    } while (-not(Test-Connection $fqdn -quiet))
    return($fqdn)
}
This function takes your selection of the group you want to remove the given user from, converts it to the proper index in the array, and returns the group:
function Set-LocalGroup($localGroups, $selectGroup)
{
    $selectGroup = Read-Host "What group would you like to add $userDomain/$userName to?"
    $selectGroup = [int]$selectGroup - 1
    $setGroup = $localGroups[$selectGroup]
    return($setGroup)
}
This function sets the Group object (not sure if this is the technical term):
function Set-GroupObj($fqdn, $group)
{
    $groupObj = [ADSI]"WinNT://$fqdn/$group"
    return($groupObj)
}
This function removes the given user from the group selected:
function Remove-UserAccess ($gObj, $uObj)
{
    $gObj.PSBase.Invoke("Remove", $uObj.PSBase.Path)
}
In the script, the user name, domain, and FQDN are requested. After these are provided, the script returns a list of groups the given user is part of. Everything works fine until the user is part of only one group; once that happens, it throws the error I pasted above.
Please note, this is my first time posting and I am not sure what information is needed here. I hope I provided the proper and correct information. If not, please let me know if there is something else you require.
Thanks!
I went back and looked at the differences, if there were any, in the variable $localGroups I was creating (I used Get-Member -InputObject $localGroups). I noticed that when $localGroups had only one item it was of type System.String, but when it had more than one item it was of type System.Object[]. I did the following and it addressed the issue I was having:
$localGroups = @(Get-LocalGroupAccess $fqdn $userName)
previous code:
$localGroups = Get-LocalGroupAccess $fqdn $userName
Everything is working as it should, because the array subexpression operator @() forces the variable to always be an array instead of letting PowerShell unwrap a single result into a plain string.
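To illustrate the difference (a minimal sketch with a hypothetical one-group helper, not the original code):

function Get-OneGroup { return 'Administrators' }   # hypothetical: returns a single result

$g = Get-OneGroup
$g.GetType().Name                          # String
# [array]::IndexOf($g, 'Administrators')   # fails: no overload takes (String, Object)

$g = @(Get-OneGroup)
$g.GetType().Name                          # Object[]
[array]::IndexOf($g, 'Administrators')     # 0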

Lua - Execute a Function Stored in a Table

I was able to store functions in a table, but now I have no idea how to invoke them. The final table will have about 100 entries, so if possible I'd like to invoke them in a foreach-style loop. Thanks!
Here is how the table was defined:
game_level_hints = game_level_hints or {}
game_level_hints.levels = {}

game_level_hints.levels["level0"] = function()
    return
    {
        [on_scene("scene0")] =
        {
            talk("hint0"),
            talk("hint1"),
            talk("hint2")
        },
        [on_scene("scene1")] =
        {
            talk("hint0"),
            talk("hint1"),
            talk("hint2")
        }
    }
end
Aaand the function definitions:
function on_scene(sceneId)
    -- some code
    return sceneId
end

function talk(areaId)
    -- some code
    return areaId
end
EDIT:
I modified the functions so they have a little more context. Basically, they return strings now. What I was hoping for is that, at the end of invoking the functions, I'd have a table (ideally the levels table) containing all these strings.
Short answer: to call a function (reference) stored in a table, you just append (parameters), as you'd normally do:
local function func(a,b,c) return a,b,c end
local a = {myfunc = func}
print(a.myfunc(3,4,5)) -- prints 3,4,5
In fact, you can simplify this to
local a = {myfunc = function(a,b,c) return a,b,c end}
print(a.myfunc(3,4,5)) -- prints 3,4,5
Long answer: You don't describe what your expected results are, but what you wrote is likely not to do what you expect it to do. Take this fragment:
game_level_hints.levels["level0"] = function()
    return
    {
        [on_scene("scene0")] =
        {
            talk("hint0"),
        }
    }
end
[This paragraph no longer applies after the question has been updated] You reference the on_scene and talk functions, but you don't "store" those functions in the table (since you explicitly referenced them in your question, I presume the question is about these functions). You actually call these functions and store the values they return (both return nil), so when this fragment is executed you get a "table index is nil" error, because you are trying to store nil using nil as the index.
If you want to call the function you stored in game_level_hints.levels["level0"], you just do game_level_hints.levels["level0"]()
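If the goal is the foreach-style invocation from the question, a minimal sketch would be (the results table is an assumption, not part of the original code):

-- invoke every stored level function and collect its return value
local results = {}
for name, fn in pairs(game_level_hints.levels) do
    results[name] = fn()  -- e.g. results["level0"] is the hints table for level0
end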
Using what you guys answered and commented, I was able to come up with the following code as a solution:
asd = game_level_hints.levels["level0"]()
Now, asd contains the area strings I need. Although ideally, I intended to be able to access the data like:
asd[1][1]
accessing it like:
asd["scene0"][1]
to retrieve the area data would suffice. I'll just have to work around the keys.
Thanks, guys.
It's not really clear what you're trying to do. Inside your anonymous function, you're returning a table that uses on_scene's return value as keys. But your on_scene doesn't return anything. Same thing for talk.
I'm going to assume that you wanted on_scene and talk to get called when invoking each levels in your game_level_hints table.
If so, this is how you can do it:
local maxlevel = 99

for i = 0, maxlevel do
    game_level_hints.levels["level" .. i] = function()
        on_scene("scene" .. i)
        talk("hint" .. i)
    end
end

-- ...

for levelname, levelfunc in pairs(game_level_hints.levels) do
    levelfunc()
end

Excluding Content From SQL Bulk Insert

I want to import my IIS logs into SQL Server for reporting using BULK INSERT, but the comment lines (the ones that start with a #) cause a problem because those lines do not have the same number of fields as the data lines.
If I manually delete the comments, I can perform a bulk insert.
Is there a way to perform a bulk insert while excluding lines based on a match, such as any line that begins with a "#"?
Thanks.
The approach I generally use with BULK INSERT and irregular data is to push the incoming data into a temporary staging table with a single VARCHAR(MAX) column.
Once it's in there, I can use more flexible decision-making tools like SQL queries and string functions to decide which rows I want to select out of the staging table and bring into my main tables. This is also helpful because BULK INSERT can be maddeningly cryptic about why it fails on a specific file.
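As a minimal sketch of that staging approach (the table name and file path are hypothetical, and it assumes the log lines contain no tab characters, which holds for space-delimited IIS logs):

-- Stage every raw line in a single-column temp table
CREATE TABLE #RawLog (Line VARCHAR(MAX));

BULK INSERT #RawLog
FROM 'C:\logs\u_ex140101.log'
WITH (ROWTERMINATOR = '\n');

-- Only non-comment lines move on to real parsing and inserting
SELECT Line
FROM #RawLog
WHERE Line NOT LIKE '#%';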
The only other option I can think of is using pre-upload scripting to trim comments and other lines that don't fit your tabular criteria before you do your bulk insert.
I recommend using LogParser.exe instead. LogParser has some pretty neat capabilities on its own, and it can also be used to format the IIS log so it imports properly into SQL Server.
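For example (a hedged sketch; the server, database, and path are hypothetical), LogParser 2.2 can load W3C logs directly into a table, and its IISW3C input format already understands the #-prefixed header lines, so nothing needs to be stripped:

logparser "SELECT * INTO IISLogs FROM C:\logs\*.log" -i:IISW3C -o:SQL -server:MyServer -database:MyDb -createTable:ON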
Microsoft has a tool called "PrepWebLog" (http://support.microsoft.com/kb/296093) which strips out these hash/pound characters; however, I'm running it now (via a PowerShell script across multiple files) and am finding its performance intolerably slow.
I think it'd be faster if I wrote a C# program (or maybe even a macro).
Update: PrepWebLog just crashed on me. I'd avoid it.
Update #2: I looked at PowerShell's Get-Content and Set-Content commands but didn't like the syntax and the likely performance. So I wrote this little C# console app:
using System;
using System.IO;
using System.Text.RegularExpressions;

class Program
{
    static void Main(string[] args)
    {
        if (args.Length == 2)
        {
            string path = args[0];
            string outPath = args[1];
            // Matches whole lines that start with '#', including the line break
            Regex hashString = new Regex("^#.+\r\n", RegexOptions.Multiline | RegexOptions.Compiled);
            foreach (string file in Directory.GetFiles(path, "*.log"))
            {
                string data;
                using (StreamReader sr = new StreamReader(file))
                {
                    data = sr.ReadToEnd();
                }
                string output = hashString.Replace(data, string.Empty);
                using (StreamWriter sw = new StreamWriter(Path.Combine(outPath, new FileInfo(file).Name), false))
                {
                    sw.Write(output);
                }
            }
        }
        else
        {
            Console.WriteLine("Source and Destination Log Path required or too many arguments");
        }
    }
}
It's pretty quick.
Following up on what PeterX wrote, I modified the application to handle large log files, since anything sufficiently large would create an out-of-memory exception. Also, since we're only interested in whether the first character of a line is a hash, we can just use the StartsWith() method on each line as we read it.
using System;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        if (args.Length == 2)
        {
            string path = args[0];
            string outPath = args[1];
            string line;
            foreach (string file in Directory.GetFiles(path, "*.log"))
            {
                using (StreamReader sr = new StreamReader(file))
                using (StreamWriter sw = new StreamWriter(Path.Combine(outPath, new FileInfo(file).Name), false))
                {
                    // Stream line by line so large files never load fully into memory
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (!line.StartsWith("#"))
                        {
                            sw.WriteLine(line);
                        }
                    }
                }
            }
        }
        else
        {
            Console.WriteLine("Source and Destination Log Path required or too many arguments");
        }
    }
}

Jenkins parameterized job that only queues one build

Imagine a Jenkins job A which takes 1 minute to run, and job B which takes 5 minutes.
If we configure job A to trigger job B, job A may run 5 times while a single run of job B is still in progress. However, Jenkins doesn't add 5 builds to job B's queue, which is great, because otherwise speedy job A would be creating an ever-growing backlog of builds for poor slow job B.
However, now we want to have job A trigger B as a parameterized job, using the parameterized trigger plugin. Parameterized jobs do queue up a backlog, which means job A is happily creating a huge pile of builds for job B, which can't possibly keep up.
It does make sense to add a new parameterized build to the queue each time it's triggered, since the parameters may be different. Jenkins should not always assume that a new parameterized build renders previously queued ones unnecessary.
However, in our case we actually would like this. Job A builds and packages our application, then Job B deploys it to a production-like environment and runs a heavier set of integration tests. We also have a build C which deploys to another environment and does even more testing, so this is an escalating pattern for us.
We would like the queue for our parameterized job B to only keep the last build added to it; each new build would replace any job currently in the queue.
Is there any nice way to achieve this?
Add a "System Groovy Script" pre-build step to job B that checks for (newer) queued jobs of the same name, and bails out if found:
def name = build.properties.environment.JOB_NAME
def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
if (queue.any{ it.task.getName() == name }) {
    println "Newer " + name + " job(s) in queue, aborting"
    build.doStop()
} else {
    println "No newer " + name + " job(s) in queue, proceeding"
}
You could get rid of the Parameterized Trigger Plugin and use traditional triggering instead. As you said, this would prevent job B's queue from piling up.
How do you pass the parameters from A to B then? Make job A print the parameters in its console output. In job B, to get these build parameters, examine the console output of the latest A build (with a Python script, perhaps?).
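A minimal sketch of that idea (the Jenkins URL, job name, and the PARAM_ line convention are all assumptions):

# Fetch job A's latest console output via Jenkins' REST API
import urllib.request

url = "http://jenkins.example.com/job/JobA/lastSuccessfulBuild/consoleText"
text = urllib.request.urlopen(url).read().decode()

# Collect lines job A printed in the form PARAM_NAME=value
params = dict(line.split("=", 1) for line in text.splitlines()
              if line.startswith("PARAM_"))
print(params)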
Ron's solution worked for me. If you don't like having a bunch of cancelled builds in the build history, you can add the following system Groovy script to job A before it triggers job B:
import hudson.model.*

def q = jenkins.model.Jenkins.getInstance().getQueue()
def items = q.getItems()
for (i = 0; i < items.length; i++) {
    if (items[i].task.getName() == "JobB") {
        items[i].doCancelQueue()
    }
}
Here's one workaround:
Create a job A2B between jobs A and B
Add a build step in job A2B that determines whether B is running. To achieve that, check:
Determine if given job is currently running using Hudson/Jenkins API
Python API's is_queued_or_running()
Finally, trigger job B from A2B only if there are no B builds queued or running (carrying the parameters through)
In case you're using Git, this is now supported by the "Combine Queued git hashes" option under Triggering / Parameters / Pass-through.
The first Git plugin version that should actually work with this is 1.1.27 (see JENKINS-15160).
Here's a more flexible option if you only care about a few parameters matching. This is especially helpful when a job is triggered externally (i.e. from GitHub or Stash) and some parameters don't warrant a separate build.
If the checked parameters match in both value and existence in both the current build and a queued build, the current build will be aborted and the description will show that a future build contains the same checked parameters (along with what they were).
It could be modified to cancel all other queued jobs except the last one if you don't want to have build history showing the aborted jobs.
checkedParams = [
    "PARAM1",
    "PARAM2",
    "PARAM3",
    "PARAM4",
]

def buildParams = null
def name = build.project.name
def queuedItems = jenkins.model.Jenkins.getInstance().getQueue().getItems()

yieldToQueuedItem = false
for (hudson.model.Queue.Item item : queuedItems.findAll { it.task.getName() == name }) {
    if (buildParams == null) {
        buildParams = [:]
        paramAction = build.getAction(hudson.model.ParametersAction.class)
        if (paramAction) {
            buildParams = paramAction.getParameters().collectEntries {
                [(it.getName()) : it.getValue()]
            }
        }
    }
    itemParams = [:]
    paramAction = item.getAction(hudson.model.ParametersAction.class)
    if (paramAction) {
        itemParams = paramAction.getParameters().collectEntries {
            [(it.getName()) : it.getValue()]
        }
    }
    equalParams = true
    for (String compareParam : checkedParams) {
        itemHasKey = itemParams.containsKey(compareParam)
        buildHasKey = buildParams.containsKey(compareParam)
        if (itemHasKey != buildHasKey || (itemHasKey && itemParams[compareParam] != buildParams[compareParam])) {
            equalParams = false
            break;
        }
    }
    if (equalParams) {
        yieldToQueuedItem = true
        break
    }
}

if (yieldToQueuedItem) {
    out.println "Newer " + name + " job(s) in queue with matching checked parameters, aborting"
    build.description = "Yielded to future build with:"
    checkedParams.each {
        build.description += "<br>" + it + " = " + build.buildVariables[it]
    }
    build.doStop()
    return
} else {
    out.println "No newer " + name + " job(s) in queue with matching checked parameters, proceeding"
}
The following is based on Ron's solution, but with some fixes to work on my Jenkins 2 installation, including removing a java.io.NotSerializableException and handling the fact that the format of getName() sometimes differs from that of JOB_NAME:
// Exception to distinguish abort due to newer jobs in queue
class NewerJobsException extends hudson.AbortException {
    public NewerJobsException(String message) { super(message); }
}

// Find the Jenkins job name from the URL (which is the most consistently named
// field in the task object)
// Known forms:
//   job/NAME/
//   job/NAME/98/
@NonCPS
def name_from_url(url)
{
    url = url.substring(url.indexOf("/") + 1);
    url = url.substring(0, url.indexOf("/"));
    return url
}

// Depending on installed plugins, multiple jobs may be queued. If that is the
// case, skip this one.
// http://stackoverflow.com/questions/26845003/how-to-execute-only-the-most-recent-queued-job-in-jenkins
// http://stackoverflow.com/questions/8974170/jenkins-parameterized-job-that-only-queues-one-build
@NonCPS
def check_queue()
{
    def name = env.JOB_NAME
    def queue = jenkins.model.Jenkins.getInstance().getQueue().getItems()
    if (queue.any{ name_from_url(it.task.getUrl()) == name }) {
        print "Newer ${name} job(s) in queue, aborting"
        throw new NewerJobsException("Newer ${name} job(s) in queue, aborting")
    } else {
        print "No newer ${name} job(s) in queue, proceeding"
    }
}
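A minimal usage sketch for a scripted pipeline (the stage names are hypothetical) would call check_queue() as an early gate:

node {
    stage('Gate') {
        check_queue()  // throws NewerJobsException if a newer build is already queued
    }
    stage('Heavy work') {
        // deployment / integration tests go here
    }
}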