Powershell refuses to sort hashtable initially or keep it sorted [duplicate]

I have a hash table that eventually gets written to an Excel spreadsheet, but the problem appears to be the order in which PowerShell stores hash table entries by default. I want the machines returned in the same order they were entered: the way it currently works, a box pops up and you paste in all your machine names, so they are all in memory before the foreach loop runs. I was previously sorting by the longest uptime, but the output now needs to match the input order. My initial thought was to create another hash table and capture the names in the same order as the $machineList variable, but that might leave me in the same position. I tried to search but couldn't find information on how hash tables order their entries by default.
Any ideas?
$machineUptime = @{}
foreach($machine in $machineList){
    if(Test-Connection $machine -Count 1 -Quiet){
        try{
            $logonUser = #gets the logged on user
            $systemUptime = #gets the wmi property for uptime
            if($logonUser -eq $null){
                $logonUser = "No Users Logged on"
            }
            $machineUptime[$machine] = "$systemUptime - $logonUser"
        }
        catch{
            Write-Error $_
            $machineUptime[$machine] = "Error retrieving uptime"
        }
    }
    else{
        $machineUptime[$machine] = "Offline"
    }
}

Create $machineUptime as an ordered hashtable (provided you have PowerShell v3 or newer):
$machineUptime = [ordered]@{}
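As a quick illustration of the difference (a minimal sketch; the machine names and values below are placeholders, not from the original script):

$plain   = @{}           # regular hashtable: enumeration order is not guaranteed
$ordered = [ordered]@{}  # ordered dictionary: enumerates keys in insertion order

foreach ($name in 'PC03','PC01','PC02') {
    $plain[$name]   = "uptime for $name"
    $ordered[$name] = "uptime for $name"
}

$ordered.Keys   # always PC03, PC01, PC02 - the order they were pasted in

The rest of the script can stay as it is, since the ordered dictionary supports the same $machineUptime[$machine] = ... indexing syntax.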


Problem using Powershell to extract a value from a Json structure

Using Powershell, I get a json snippet returned into my First variable (this always works fine);
# Initialise variables...
$nMessage_id = "";
$whatStatusJsonContent = "";
# Function https://abc.googleapis.com/abc/load call here and returns...
$whatStatusJsonContent = '{"message_id":9093813071099257562}'
Then I call the convert function into a temp variable like this;
$ResponseBody = ConvertFrom-Json $whatStatusJsonContent;
which puts the Json into a nice little data structure like this;
message_id
----------
9093813071099257562
From which I can select the value I want by calling this;
$nMessage_id = $ResponseBody.message_id;
Usually, this works fine and I get the value into my second variable;
$nMessage_id = 9093813071099257562
The problem is: Sometimes I get nothing in $nMessage_id, even though $whatStatusJsonContent is definitely logged as having the Json returned correctly from the function.
My question is: Do I have to ConvertFrom-Json, or can I read it raw from the First variable..?
COMBINED SOLUTION: Thanks to @mklement0 and @Bernard-Moeskops
# Initialise variables...
$nMessage_id = "";
$whatStatusJsonContent = "";
# Function https://abc.googleapis.com/abc/load call here and returns...
$whatStatusJsonContent = '{"message_id":9093813071099257562}'
$ResponseBody = ConvertFrom-Json $whatStatusJsonContent;
if($ResponseBody.message_id){
    # ConvertFrom-Json got the value!
    $nMessage_id = $ResponseBody.message_id
}else{
    # ConvertFrom-Json didn't work!
    $nMessage_id = ($whatStatusJsonContent -split '[:}]')[1]
}
There's nothing overtly wrong with your code.
ConvertFrom-Json should work as expected and return a [pscustomobject] instance with a .message_id property.
In your example, the message_id JSON property value is an integer, for which ConvertFrom-Json automatically chooses a suitable integer data type as follows: the smallest signed type >= [int] (System.Int32)[1] that can accommodate the value ([int] -> [long] (System.Int64) -> [decimal] (System.Decimal)); the caveat is that if the value can't even fit into a [decimal], an - inexact - [double] is used.[2]
With the sample JSON in your question, [long] is chosen.
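A quick way to see which type was chosen (a minimal sketch using the sample JSON from the question):

$ResponseBody = '{"message_id":9093813071099257562}' | ConvertFrom-Json

# In Windows PowerShell this reports System.Int64: the value is too large
# for [int] but still fits into a [long].
$ResponseBody.message_id.GetType().FullName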
In a follow-up comment you state:
The routine makes over 1000 calls/hour and for most of them the Json comes back and the $nMessage_id is yielded perfectly. Then, suddenly, the $nMessage_id is empty, even though the Json is logged as coming back fine. So, somewhere in the ConvertFrom-Json or $ResponseBody.message_id the value is going missing...
I have no explanation, but if - for whatever reason - ConvertFrom-Json is the culprit, you can try string manipulation as a workaround to extract the message ID and see if that helps:
$whatStatusJsonContent = '{"message_id":9093813071099257562}'
# Extract the message_id property value as a *string*
# (which you can cast to a numeric type if/as needed).
$message_id = ($whatStatusJsonContent -split '[:}]')[1]
The above stores a string with content 9093813071099257562 in $message_id; note that, as written, the input string must have the exact format as above with respect to whitespace; while it is possible to make the text parsing more robust, not having to worry about format variations is one good reason to use a dedicated parser such as ConvertFrom-Json.
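For instance, a slightly more tolerant variant could use a regular expression instead of -split (a sketch, assuming message_id is always a bare JSON number):

$whatStatusJsonContent = '{ "message_id" : 9093813071099257562 }'

# Capture the digits following the "message_id" key, allowing optional whitespace.
if ($whatStatusJsonContent -match '"message_id"\s*:\s*(\d+)') {
    $message_id = $Matches[1]   # still a string; cast to a numeric type if needed
}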
Another option is to try a different JSON parser to see if that helps.
Json.NET is the preeminent JSON parser in the .NET world (which now underlies the JSON cmdlets in PowerShell Core):
$whatStatusJsonContent = '{"message_id":9093813071099257562}'
$message_id = [Newtonsoft.Json.Linq.JObject]::Parse($whatStatusJsonContent).message_id.Value
Note: Json.NET - like ConvertFrom-Json in PowerShell Core - commendably uses the arbitrarily large [bigint] type as well once a number is too large to fit into a [long].
Use of the Json.NET assembly has the added advantage of better performance than the ConvertFrom-Json cmdlet.
In PowerShell Core, you can run the above code as-is (the assembly is preloaded); in Windows PowerShell you'll have to download the package via the link above and add the assembly (Newtonsoft.Json.dll) to your session with Add-Type -LiteralPath.
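In Windows PowerShell that could look something like this (a sketch; the path to Newtonsoft.Json.dll is a placeholder for wherever you extracted the package):

# Load the Json.NET assembly downloaded as part of the NuGet package (path is hypothetical).
Add-Type -LiteralPath 'C:\Tools\Newtonsoft.Json.dll'

$whatStatusJsonContent = '{"message_id":9093813071099257562}'
$message_id = [Newtonsoft.Json.Linq.JObject]::Parse($whatStatusJsonContent).message_id.Value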
[1] Curiously, in PowerShell Core, as of (at least) v6.2.0, the smallest type chosen is [long] (System.Int64).
[2] More helpfully, PowerShell Core, as of (at least) v6.2.0, creates an arbitrarily large [bigint] (System.Numerics.BigInteger) instance once a value doesn't fit into a [long] anymore; that is, the [decimal] type is skipped altogether.
You are going to have to convert it, so that PowerShell can understand it. It will convert from a string to a PSCustomObject. Just check by asking the type of the variable before and after.
$ResponseBody.message_id.GetType()
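For example, a quick before/after check might look like this (a sketch reusing the sample JSON from the question):

$whatStatusJsonContent.GetType().FullName   # System.String - the raw JSON text
$ResponseBody = ConvertFrom-Json $whatStatusJsonContent
$ResponseBody.GetType().FullName            # System.Management.Automation.PSCustomObject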
If sometimes the output is nothing, you could do something like:
if($ResponseBody.message_id){
    $nMessage_id = $ResponseBody.message_id
}else{
    throw "No message id found"
}
Hope this helps.

Function calling itself to enumerate nested group memberships

So I've been working on a new logon script for our users. It's basically finished and it works just fine. However, I've coded a function to enumerate nested group memberships and I don't see how it can work the way I wrote it (although it does!).
Code:
$Groups = New-Object System.Collections.ArrayList #Array for all groups incl. nested
$MaxIndex = 0
#---Adds every group to the array
Function AddGroup($P1)
{
    do
    {
        #---Param
        [String]$ADObject = $P1
        #---Gets all "main" groups of the current user
        $AllGroups = ([ADSISEARCHER]"samaccountname=$ADObject").Findone().Properties.memberof
        #---Loop over every group
        ForEach($Group in $AllGroups)
        {
            #---Convert
            [String]$GroupSTR = $Group
            if($GroupSTR -ne "")
            {
                #---Get actual group name
                $GroupSTR = $GroupSTR.SubString(3)
                $IndexOfChar = $GroupSTR.IndexOf(",")
                $GroupSTR = $GroupSTR.SubString(0,$IndexOfChar)
                $Groups.Add($GroupSTR) | Out-Null
                if($MaxIndex -le $AllGroups.Count) { AddGroup $GroupSTR }
                $MaxIndex = $MaxIndex + 1
            }
        }
        $GroupSTR = ""
    }
    while($GroupSTR -ne "")
}
AddGroup $LoggedOnUser
So what I do is call the function AddGroup with the username of the current user. The script then gets all the main groups the user is currently in and goes into a foreach loop for every group found.
It removes the unwanted gibberish from the output of the ADSISEARCHER and then adds the clean group name to the array.
Now, to also get the nested group memberships, I get the count of all groups of the current AD object. If my index is below this count, it means the current object is also part of other groups, and the function then calls itself again to get those as well.
Now what I don't understand is: how can my function ever count up my index? The index will always be lower than or equal to the total number of found groups, even if it finds zero groups. Therefore it should call the function infinitely many times, but it doesn't.
What am I missing here?
If you want to see how your code is working, you can use the ISE or PowerGUI in debug mode.
To solve your problem, why don't you use LDAP_MATCHING_RULE_IN_CHAIN? Have a look at Search Filter Syntax.
To find all the groups that "user1" is a member of:
Set the base to the groups container DN; for example the root DN (dc=dom,dc=fr)
Set the scope to subtree
Use the following filter: (member:1.2.840.113556.1.4.1941:=cn=user1,cn=users,DC=x)
Here is a sample in PowerShell:
$ADObjectDN = ([ADSISEARCHER]"samaccountname=$ADObject").Findone().Properties.distinguishedname
$AllGroups = ([ADSISEARCHER]"member:1.2.840.113556.1.4.1941:=$ADObjectDN").FindAll()
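Put together, a minimal sketch of the whole lookup might look like this (assuming $LoggedOnUser holds the sAMAccountName, as in your script; the property names are the lower-case ones the searcher returns):

# Resolve the user's distinguished name first.
$ADObjectDN = ([ADSISEARCHER]"samaccountname=$LoggedOnUser").FindOne().Properties.distinguishedname

# A single query returns the user's direct AND nested group memberships.
$AllGroups = ([ADSISEARCHER]"(member:1.2.840.113556.1.4.1941:=$ADObjectDN)").FindAll()

# Extract just the group names (the cn attribute) instead of substring-parsing each DN.
$Groups = foreach ($result in $AllGroups) {
    $result.Properties['cn'][0]
}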

Loop through query results without loading them all in array in Codeigniter [duplicate]

The normal result() method described in the documentation appears to load all records immediately. My application needs to load about 30,000 rows and submit them, one at a time, to a third-party search index API. Obviously, loading everything into memory at once doesn't work well (it errors out because of too much memory).
So my question is, how can I achieve the effect of the conventional MySQLi API method, in which you load one row at a time in a loop?
Here is something you can do.
while ($row = $result->_fetch_object()) {
    $data = array(
        'id' => $row->id,
        'some_value' => $row->some_field_name
    );
    // send row data to whatever api
    $this->send_data_to_api($data);
}
This will get one row at a time. Check the CodeIgniter source code, and you will see that this is what they do when you execute the result() method.
For those who want to save memory on large result sets:
Since CodeIgniter 3.0.0, there is an unbuffered_row() function.
All the methods above will load the whole result into memory (prefetching). Use unbuffered_row() for processing large result sets.
This method returns a single result row without prefetching the whole result in memory as row() does. If your query has more than one row, it returns the current row and moves the internal data pointer ahead.
$query = $this->db->query("YOUR QUERY");

while ($row = $query->unbuffered_row())
{
    echo $row->title;
    echo $row->name;
    echo $row->body;
}
You can optionally pass ‘object’ (default) or ‘array’ in order to specify the returned value’s type:
$query->unbuffered_row(); // object
$query->unbuffered_row('object'); // object
$query->unbuffered_row('array'); // associative array
Official Document: https://www.codeigniter.com/userguide3/database/results.html#id2
Well, the thing is that result() returns the entire reply of the query, while row() simply fetches the first row and dumps the rest. However, the query still fetches all 30,000 rows regardless of which function you use.
One design that would fit your case would be:
$offset = (int)@$_GET['offset'];

$query = $this->db->query("SELECT * FROM table LIMIT ?, 1", array($offset));
$row = $query->row();

if ($row) {
    /* Run api with values */
    redirect(current_url().'?offset='.($offset + 1));
}
This would take one row, send it to the API, reload the page, and move on to the next row. It also prevents the page from timing out. However, it would most likely take a while with 30,000 records and that many refreshes, so you may want to raise LIMIT ?, 1 to a higher number than 1 and use result() with a foreach() to make multiple API calls per page load.
Well, there's the row() method, which returns just one row as an object, or the row_array() method, which does the same but returns an array (of course).
So you could do something like
$sql = "SELECT * FROM yourtable";
$resultSet = $this->db->query($sql);
$total = $resultSet->num_rows();
for($i=0;$i<$total;$i++) {
$row = $resultSet->row_array($i);
}
This fetches in a loop each row from the whole result set.
Which is about the same as fetching everything and looping over the result of the $this->db->query($sql)->result() call, I believe.
If you want one row at a time, either you make 30,000 calls, or you select all the results and fetch them one at a time, or you fetch everything and walk over the array. I can't see any other way out.

Changing variable name in foreach function in PowerShell

I have a script where a portion of it needs to run three different times, so I thought I'd try to expand my limited PowerShell knowledge by calling the same code using a function rather than copying and pasting again and again, and making a longer-than-necessary script.
The code I want to re-use in a function:
$users = Get-Content users.txt
foreach ($user in $users){
    # Get some information from Exchange about the user
    $dn = (Get-MailboxStatistics -id $user).displayname
    $ic = (Get-MailboxStatistics -id $user).itemcount
    # Make a hash table where user=itemcount
    $firstrun += @{"$dn"="$ic"} # Each time the script runs, we
                                # need a different hash table
    # Kick off some Exchange maintenance on the user. (Removed
    # to keep the post shorter)
    # Item count should lower after a few seconds.
}
When the code repeats the second and third time, I want a new hash table to be created ("secondrun" and "thirdrun"). My first problem is changing the name of the hash table in the function each time - can this be done?
I also started to wonder if a hash table is even the right tool for the job or if there is something better? For a little more background, after I have the second hash table, I want to do a comparison:
foreach ($user in $users){
    $c1 = $firstrun.get_item($user)
    $c2 = $secondrun.get_item($user)
    # If the item count hasn't substantially dropped
    if ($c2 -ge $c1){
        # Do some different Exchange tasks on the user (removed
        # to keep the post shorter)
    }
}
And finally there'll be a third run which will simply create a third hash table (again, user=itemcount). I'll then output some kind of report to a text file using values in each of the hash tables.
I guess at this stage I have two main problems: having a changing variable name for the hash table in the function, and also difficulty keeping the hash tables around after the functions run - trying to declare them as global variables doesn't seem to work. I'm open to ideas on how any of this could be done better.
If I understand what you're saying, you are doing the following:
1. Populating a hash table that maps the set of users to their item count.
2. Doing something that prunes the items.
3. Regenerating the hash table.
4. Comparing the hash tables generated in steps 1 and 3; acting on the list again.
5. Regenerating the hash table.
6. Producing a report based on all three tables.
As you can see from the above list, you really want to generate a function that produces the hash table and returns it:
function Get-UsersItemCount
{
    $ht = @{}
    $users = Get-Content users.txt
    foreach ($user in $users){
        # Get some information from Exchange about the user
        $dn = (Get-MailboxStatistics -id $user).displayname
        $ic = (Get-MailboxStatistics -id $user).itemcount
        # Make a hash table where user=itemcount
        $ht += @{"$dn"="$ic"}
    }
    $ht # Returns the hashtable
}
Now you can call this function three times:
$firstrun = Get-UsersItemCount
# Do first run stuff
$secondrun = Get-UsersItemCount
# Do second run stuff
$thirdrun = Get-UsersItemCount
# Generate your report
You could just use one hash table, making the values an array, with one element for each pass:
$ht = @{}
$users = Get-Content users.txt

foreach ($user in $users){
    # Get some information from Exchange about the user
    $stats = Get-MailboxStatistics $user |
        select -expand itemcount
    $ht[$user] += @($stats)
}

# Kick off some Exchange maintenance on the user. (Removed to
# keep post shorter)
# Item count should lower after a few seconds.

foreach ($user in $users){
    # Get some information from Exchange about the user
    $stats = Get-MailboxStatistics $user |
        select -expand itemcount
    $ht[$user] += @($stats)
    # If the item count hasn't substantially dropped
    if ($ht[$user][1] -ge $ht[$user][0]){
        # Do some different Exchange tasks on the user (removed
        # to keep the post shorter)
    }
}
What would I do if I were you? I'll assume it doesn't matter what the name of the hash table is. If that's the case, you can grab the current date and time and use that to name your hash table, like:
$HashTblName = "HashTbl_$($(get-date).ToString("yyyyMMddhhmmssff"))"
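If you go that route, you would then create and read the dynamically named variable indirectly (a sketch using New-Variable and Get-Variable; the entry added at the end is just a placeholder):

$HashTblName = "HashTbl_$($(get-date).ToString("yyyyMMddhhmmssff"))"

# Create a variable whose name is only known at runtime...
New-Variable -Name $HashTblName -Value @{}

# ...and retrieve it later by name to add entries to it.
$ht = Get-Variable -Name $HashTblName -ValueOnly
$ht['someuser'] = 1234

That said, returning the hash table from a function and capturing it in $firstrun/$secondrun/$thirdrun, as shown above, is usually simpler than juggling dynamic variable names.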

Function containing a process is returning a garbled value

Spent my whole morning trying to find where my return value was getting garbled. Now that I've finally found where, I still have no idea why. Function looks like this:
function Run-RemoteCommand {
    param(...) # params are $Remote (host) $Command $Credentials $Quiet (optional switch)
    if($Quiet) {
        $Process = New-Object System.Diagnostics.Process
        $Process.StartInfo.UseShellExecute=$false
        $Process.StartInfo.Domain=$Credentials.GetNetworkCredential().Domain
        $Process.StartInfo.UserName=$Credentials.GetNetworkCredential().UserName
        $Process.StartInfo.Password=$Credentials.Password
        $Process.StartInfo.WindowStyle="Hidden"
        $Process.StartInfo.FileName=$PSExec
        $Process.StartInfo.Arguments=@("/acceptEULA",$Remote,"-s",$Command)
        $Process.Start()
        $Process.WaitForExit()
        $result = $Process.ExitCode
        return $result
    } else {
        ...
    }
}
What's odd is that I can step through this in a debugger and watch everything work fine. The command runs, $result is filled with the return code, but the calling function receives True appended to the return code (e.g. True0 on success). I even tried overriding the return value and just saying
return "false"
The calling function receives "Truefalse." All I can tell is that it's tied to $Process running. If I comment out $Process.Start(), the return code functions normally. Someone please save my sanity.
$Process.Start() returns a boolean value which is True if it succeeds. Remember that functions in PowerShell behave differently from those in standard programming languages: PowerShell functions "return" - or, more technically correct, "output" - any command output that isn't captured in a variable or redirected to a file or Out-Null. In this case, change the Start line to:
[void]$Process.Start()
or
$Process.Start() | Out-Null
Check out this blog post for a deeper explanation.
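To see the behavior in isolation, here is a minimal sketch (Test-Output is a made-up function, not part of the original script):

function Test-Output {
    $list = New-Object System.Collections.ArrayList
    $list.Add('x')        # Add() returns the new index (0), which leaks into the output
    return "false"
}

Test-Output               # outputs 0 and then "false" - two objects, not just the string

function Test-OutputFixed {
    $list = New-Object System.Collections.ArrayList
    [void]$list.Add('x')  # the unwanted output is suppressed
    return "false"
}

Test-OutputFixed          # outputs only "false"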