Import CSV and update specific lines - csv

So I have a script that runs at logon to search for PSTs on a user's machine, then copies them to a holding area to await migration.
When the search/copy is complete it outputs to a CSV that looks something like this:
Hostname,User,Path,Size_in_MB,Creation,Last_Access,Copied
COMP1,user1,\\comp1\c$\Test PST.pst,20.58752,08/12/2015,08/12/2015,Client copied
COMP1,user1,\\comp1\c$\outlook\outlook.pst,100,08/12/2015,15/12/2015,In Use
The same logon script has an IF that imports the CSV when the copied status is "In Use" and makes further attempts at copying the PST into the holding area. If it's successful, it exports the results to the CSV file.
My question is: is there any way of getting it to amend the existing CSV, changing the copy status? I can get it to add the new line to the end, but not update the existing one.
This is my 'try again' script:
# Import the rows of the CSV where the PST file was found to be in use.
$PST_IN_USE = Import-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" |
    Where-Object { $_.Copied -eq "In Use" }

ForEach ($PST_USE in $PST_IN_USE) {
    $NAME = Get-ItemProperty $PST_IN_USE.Path | Select-Object -ExpandProperty Name
    $NEW_NAME = $USER + "_" + $PST_IN_USE.Size_in_MB + "_" + $NAME

    # Attempts to copy the file to the PST staging area, then rename it.
    TRY {
        Copy-Item $PST_IN_USE.Path "\\comp4\TEMPPST\PST\$USER" -ErrorAction SilentlyContinue
        Rename-Item "\\comp4\TEMPPST\PST\$USER\$NAME" -NewName $NEW_NAME
        # Edits the existing CSV file, replacing "In Use" with "Client Copied".
        $PST_IN_USE.Copied -replace "In Use", "Client Copied"
    } # CLOSES TRY
    # Silences any errors.
    CATCH { }

    $PST_IN_USE | Export-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" -NoClobber -NoTypeInformation -Append
} # CLOSES ForEach ( $PST_USE in $PST_IN_USE )
This is the resulting CSV:
Hostname,User,Path,Size_in_MB,Creation,Last_Access,Copied
COMP1,user1,\\comp1\c$\Test PST.pst,20.58752,08/12/2015,08/12/2015,Client copied
COMP1,user1,\\comp1\c$\outlook\outlook.pst,100,08/12/2015,15/12/2015,In Use
COMP1,user1,\\comp1\c$\outlook\outlook.pst,100,08/12/2015,15/12/2015,Client copied
It's almost certainly something really simple, but if it is, it's something I've yet to come across in my scripting. I'm mostly working in IF / ELSE land at the moment!

If you want to change the CSV file, you have to write it out completely again, not just append new lines. In your case this means:
# Get the data
$data = Import-Csv ...
# Get the 'In Use' entries
$inUse = $data | where Copied -eq 'In Use'
foreach ($x in $inUse) {
    ...
    $x.Copied = 'Client Copied'
}
# Write the file again
$data | Export-Csv ...
The point here is, you grab all the lines from the CSV, modify those that you process and then write the complete collection back to the file again.
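Put together with the copy/rename logic from the question, a fuller sketch might look like this (a sketch only, with paths taken from the question; note -ErrorAction Stop so the catch block actually fires, and the plain assignment instead of -replace, since -replace returns a new string rather than updating the property):
$csvPath = "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv"

# Read the whole file once; $data keeps every row, including the ones we modify.
$data = Import-Csv $csvPath

foreach ($row in ($data | Where-Object { $_.Copied -eq 'In Use' })) {
    try {
        $name = (Get-Item $row.Path).Name
        $newName = "{0}_{1}_{2}" -f $USER, $row.Size_in_MB, $name

        Copy-Item $row.Path "\\comp4\TEMPPST\PST\$USER" -ErrorAction Stop
        Rename-Item "\\comp4\TEMPPST\PST\$USER\$name" -NewName $newName -ErrorAction Stop

        # Assign the new status back to the row object.
        $row.Copied = 'Client Copied'
    }
    catch { } # on failure the row simply stays marked 'In Use'
}

# Overwrite the file with the complete, updated collection (no -Append).
$data | Export-Csv $csvPath -NoTypeInformation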

I've cracked it. It's almost certainly a long-winded way of doing it, but it works and is relatively clean too.
# Import the rows of the CSV where the PST file was found to be in use.
$PST_IN_USE = Import-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" |
    Where-Object { $_.Copied -eq "In Use" }

$PST_IN_USE | Select-Object -ExpandProperty Path | ForEach-Object {
    # name of the pst
    $NAME = Get-ItemProperty $_ | Select-Object -ExpandProperty Name
    # size of the pst in MB
    $SIZE = Get-ItemProperty $_ | Select-Object -ExpandProperty Length | ForEach-Object { $_ / 1000000 }
    # path of the pst
    $PATH = $_
    # new name of the pst when copied to the destination
    $NEW_NAME = $USER + "_" + $SIZE + "_" + $NAME

    TRY {
        Copy-Item $_ "\\comp4\TEMPPST\PST\$USER" -ErrorAction SilentlyContinue
        TRY {
            Rename-Item "\\comp4\TEMPPST\PST\$USER\$NAME" -NewName $NEW_NAME -ErrorAction SilentlyContinue | Out-Null
        }
        CATCH { $NEW_NAME = "Duplicate exists" }
        $COPIED = "Client copied"
    }
    CATCH { $COPIED = "In use"; $NEW_NAME = " " }

    $NEW_FILE = Test-Path "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
    IF ($NEW_FILE -eq $FALSE) {
        "Hostname,User,Path,Size_in_MB,Creation,Last_Access,Copied,New_Name" |
            Set-Content "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
    }
    "$HOSTNAME,$USER,$PATH,$SIZE,$CREATION,$LASTACCESS,$COPIED,$NEW_NAME" |
        Add-Content "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
} # CLOSES ForEach-Object

$a = Import-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" | Where-Object { $_.Copied -ne "In Use" }
$b = Import-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
$a + $b | Export-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 8.csv" -NoClobber -NoTypeInformation
Thanks for the help. Sometimes it takes a moment's break and a large cup of coffee to see things a different way.

Related

Powershell and filenames with non-ASCII characters (e.g. Æ)

I am attempting to index my movie collection and in doing so have run across an issue where at least one title is skipped in the import phase due to special characters. The code skips over "Æon Flux" due to it starting with Æ. Would anyone know how to correct this, please?
Clear-Host
# Variables:
$movie_dir = "K:\Movies"
# Because reasons...
$PSDefaultParameterValues['*:Encoding'] = 'utf8'
# Connect to the library MySQL.Data.dll
Add-Type -Path 'C:\Program Files (x86)\MySQL\Connector NET 8.0\Assemblies\v4.8\MySql.Data.dll'
# Create a MySQL Database connection variable that qualifies:
$Connection = [MySql.Data.MySqlClient.MySqlConnection]@{ConnectionString='server=127.0.0.1;uid=username;pwd=password;database=media'}
$Connection.Open()
# Drop the table to clear all entries.
$sql_drop_table = New-Object MySql.Data.MySqlClient.MySqlCommand
$sql_drop_table.Connection = $Connection
$sql_drop_table.CommandText = 'DROP TABLE Movies'
$sql_drop_table.ExecuteNonQuery() | Out-Null
# (Re)create the table.
$sql_create_table = New-Object MySql.Data.MySqlClient.MySqlCommand
$sql_create_table.Connection = $Connection
$sql_create_table.CommandText = 'create table Movies(movie_id INT NOT NULL AUTO_INCREMENT, movie_title VARCHAR(255) NOT NULL, movie_file_date INT, movie_IMDB_id INT, PRIMARY KEY (movie_id))'
$sql_create_table.ExecuteNonQuery() | Out-Null
$movies = Get-ChildItem $movie_dir -File -Include *.mp4 -Recurse -Depth 1 |
    Select-Object -ExpandProperty FullName |
    Sort-Object |
    Get-Unique |
    Where-Object { $_ -ne "" }
foreach ($movie in $movies)
{
    # .NET function to get just the filename (movie title).
    $title = [System.IO.Path]::GetFileNameWithoutExtension($movie)
    # Get the creation date of the movie and reformat it to yearmonthday.
    $add_date = (Get-ChildItem $movie).CreationTime.ToString("yyyyMMdd")
    $query = "INSERT INTO Movies(movie_id, movie_title, movie_file_date) VALUES(NULL, @title, $add_date)"
    $command = $connection.CreateCommand()
    $command.CommandText = $query
    # Sanitize single quotes in filenames for input.
    $command.Parameters.AddWithValue("@title", $title) | Out-Null
    $command.ExecuteNonQuery() | Out-Null
}
# Close the MySQL connection.
$Connection.Close()
Write-Host
Write-Host("Added") $movies.Count ("movies.")
I don't think it is Get-ChildItem that skips the file with that special character. More likely, you need to tell your MySQL to use UTF-8.
For that, have a look at How to make MySQL handle UTF-8 properly
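For example, a hedged sketch (assuming MySQL Connector/NET, where CharSet is a supported connection-string option, and utf8mb4 is MySQL's full-Unicode character set):
# Ask the connector to exchange data with the server in UTF-8.
$Connection = [MySql.Data.MySqlClient.MySqlConnection]@{
    ConnectionString = 'server=127.0.0.1;uid=username;pwd=password;database=media;CharSet=utf8mb4'
}

# The table's character set matters too; declare it when (re)creating the table.
$sql_create_table.CommandText = 'create table Movies(movie_id INT NOT NULL AUTO_INCREMENT, movie_title VARCHAR(255) NOT NULL, movie_file_date INT, movie_IMDB_id INT, PRIMARY KEY (movie_id)) DEFAULT CHARACTER SET utf8mb4'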
As for your code, I would change this:
$movies = Get-ChildItem $movie_dir -File -Include *.mp4 -Recurse -Depth 1 |
    Select-Object -ExpandProperty FullName |
    Sort-Object |
    Get-Unique |
    Where-Object { $_ -ne "" }
into
$movies = Get-ChildItem -Path $movie_dir -File -Filter '*.mp4' -Recurse -Depth 1 | Sort-Object -Property FullName
and work with the FileInfo objects from there on:
foreach ($movie in $movies) {
    $title = $movie.BaseName
    # Get the creation date of the movie and reformat it to yearmonthday.
    $add_date = '{0:yyyyMMdd}' -f $movie.CreationTime
    . . .
}
Though Æ is not an ASCII character, it is not otherwise "special", so I edited the question title and tags to reflect that.
ExecuteNonQuery() returns the number of rows affected by the command; in the case of $command, it's the number of rows inserted. You are discarding this value, however...
$command.ExecuteNonQuery() | Out-Null
...which masks the problem in the event the INSERT fails. Instead, test the result and respond appropriately...
if ($command.ExecuteNonQuery() -eq 1)
{
    Write-Host "Successfully inserted movie ""$title""."
}
else
{
    Write-Warning -Message "Failed to insert movie ""$title""."
}
This will make it clear if the issue lies in interacting with the filesystem or the database.
Some other notes:
MySqlCommand implements the IDisposable interface and so each instance should be disposed when you're done using it...
$query = "INSERT INTO Movies(movie_id, movie_title, movie_file_date) VALUES(NULL, #title, $add_date)"
$command = $connection.CreateCommand()
try
{
$command.CommandText = $query
# Sanatize single quotes in filenames for input.
$command.Parameters.AddWithValue("#title", $title) | Out-Null
if ($command.ExecuteNonQuery() -eq 1)
{
Write-Host -Message "Successfully inserted movie ""$title""."
}
else
{
Write-Warning -Message "Failed to insert movie ""$title""."
}
}
finally
{
$command.Dispose()
}
...and the same for $sql_drop_table and $sql_create_table. The code in the finally block will run even if an error is thrown from within the try block.
See Difference with Parameters.Add and Parameters.AddWithValue and its links for why AddWithValue() can be problematic.
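If you do want to avoid AddWithValue()'s type inference, a hedged alternative (assuming the movie_title column is VARCHAR) is to declare the parameter type explicitly:
# Add the parameter with an explicit MySqlDbType instead of letting
# AddWithValue() infer the type from the .NET value.
$param = $command.Parameters.Add("@title", [MySql.Data.MySqlClient.MySqlDbType]::VarChar)
$param.Value = $title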
Instead of...
Write-Host("Added") $movies.Count ("movies.")
...a more typical way to build this message would be with string interpolation...
Write-Host "Added $($movies.Count) movies."
...or the format operator...
Write-Host ('Added {0} movies.' -f $movies.Count)
You can also incorporate numeric format strings, so if $movies.Count is 1234 and $PSCulture is 'en-US' then...
Write-Host "Added $($movies.Count.ToString('N0')) movies."
...and...
Write-Host ('Added {0:N0} movies.' -f $movies.Count)
...will both write...
Added 1,234 movies.

log file variables in function eat up all memory

I have the following simple function, which is used several times in a script that iterates through a directory and checks the age of the files in it.
function log($Message) {
    $logFilePath = 'C:\logPath\myLog.txt'
    $date = Get-Date -Format 'yyyyMMddHHmmss'
    $logMessage = "{0}_{1}" -f $date, $Message
    if (Test-Path -Path $logFilePath) {
        $logFileContent = Get-Content -Path $logFilePath
    } else {
        $logFileContent = ''
    }
    $logMessage, $logFileContent | Out-File -FilePath $logFilePath
}
I figured out that this eats up all the RAM, and I don't understand why. I thought the variables scoped to a function are destroyed once the function has run. I fixed the RAM issue by adding Remove-Variable logMessage,logFileContent,logFilePath,date at the very end of the function, but I would like to know how this RAM issue could be solved otherwise, and why the variables within the function are not destroyed automatically.
PowerShell (or rather .NET) has a garbage collector, so freeing the memory isn't instant. See Garbage Collection in Powershell to Speed Scripts. Memory management is also probably better in PowerShell 7. I tried repeating your function many times, but the memory usage didn't go above a few hundred megs.
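If you want to rule the garbage collector in or out while testing, you can force a collection by hand (a diagnostic sketch only, not something to leave in a production script):
# Force a full garbage collection and wait for finalizers to run.
[System.GC]::Collect()
[System.GC]::WaitForPendingFinalizers()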
There's probably some more efficient .NET way to prepend a line: Prepend "!" to the beginning of the first line of a file
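As a rough sketch of such a .NET approach (my assumption of what the linked answer is getting at; still O(file size) per call, but it avoids building per-line string arrays):
# Read the whole file as one string and write it back with the new entry in front.
$logFilePath = 'C:\logPath\myLog.txt'
$logMessage = '{0:yyyyMMddHHmmss}_{1}' -f (Get-Date), 'some message'
$existing = if (Test-Path $logFilePath) { [System.IO.File]::ReadAllText($logFilePath) } else { '' }
[System.IO.File]::WriteAllText($logFilePath, "$logMessage`r`n$existing")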
I have a weird way to prepend a line. I'm not sure how well this would work with a large file. With a 700 meg file, the Working Set memory stayed at 76 megs.
$c:file
one
two
three
$c:file = 'pre',$c:file
$c:file
pre
one
two
three
ps powershell
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
607 27 67448 76824 1.14 3652 3 powershell
Although, as commented, I can hardly believe that this function gobbles up your memory, you could optimize it:
function log ([string]$Message) {
    $logFilePath = 'C:\logPath\myLog.txt'
    # prefix the message with the current date
    $Message = "{0:yyyyMMddHHmmss}_{1}" -f (Get-Date), $Message
    if (Test-Path -Path $logFilePath -PathType Leaf) {
        # newest log entry on top: append current content
        $Message = "{0}`r`n{1}" -f $Message, (Get-Content -Path $logFilePath -Raw)
    }
    Set-Content -Path $logFilePath -Value $Message
}
I just want to rule out that the RAM usage is caused by prepending to the file. Have you tried not storing the log contents in a variable? I.e.
$logMessage,(Get-Content -Path $logFilePath) | Out-File -FilePath $logFilePath
Edit 5/8/20 - Turns out that prepending (when done efficiently) isn't as slow as I thought - it is on the same order as using -Append. However the code (that js2010 pointed to) is long and ugly, but if you really need to prepend to the file, this is the way to do it.
I modified the OP's code a bit to automatically insert a new line.
function log-prepend {
    param(
        $content,
        $filePath = 'C:\temp\myLogP.txt'
    )
    $file = Get-Item $filePath
    if (!$file.Exists) {
        Write-Error "$file does not exist"
        return
    }
    $filePath = $file.FullName
    $tmptoken = (Get-Location).Path + "\_tmpfile" + $file.Name
    Write-Verbose "$tmptoken created to act as buffer"
    $tfs = [System.IO.File]::Create($tmptoken)
    $fs = [System.IO.File]::Open($file.FullName, [System.IO.FileMode]::Open, [System.IO.FileAccess]::ReadWrite)
    try {
        $date = Get-Date -Format 'yyyyMMddHHmmss'
        $logMessage = "{0}_{1}`r`n" -f $date, $content
        # Write the new entry into the buffer first, then stream the old file content after it.
        $msg = $logMessage.ToCharArray()
        $tfs.Write($msg, 0, $msg.Length)
        $fs.Position = 0
        $fs.CopyTo($tfs)
    }
    catch {
        Write-Verbose $_.Exception.Message
    }
    finally {
        $tfs.Close()
        $fs.Close()
        if ($error.Count -eq 0) {
            Write-Verbose "updating $filePath"
            # Replace the original file with the buffered copy.
            [System.IO.File]::Delete($filePath)
            [System.IO.File]::Move($tmptoken, $filePath)
        }
        else {
            $error.Clear()
            [System.IO.File]::Delete($tmptoken)
        }
    }
}
Here was my original answer that shows how to test the timing using a stopwatch.
When you prepend to a log file, you're reading the entire log file into memory, then writing it back.
You really should be using append - that would make the script run a lot faster.
function log($Message) {
    $logFilePath = 'C:\logPath\myLog.txt'
    $date = Get-Date -Format 'yyyyMMddHHmmss'
    $logMessage = "{0}_{1}" -f $date, $Message
    $logMessage | Out-File -FilePath $logFilePath -Append
}
Edit: To convince you that prepending to a log file is a bad idea, here's a test you can do on your own system:
function logAppend($Message) {
    $logFilePath = 'C:\temp\myLogA.txt'
    $date = Get-Date -Format 'yyyyMMddHHmmss'
    $logMessage = "{0}_{1}" -f $date, $Message
    $logMessage | Out-File -FilePath $logFilePath -Append
}
function logPrepend($Message) {
    $logFilePath = 'C:\temp\myLogP.txt'
    $date = Get-Date -Format 'yyyyMMddHHmmss'
    $logMessage = "{0}_{1}" -f $date, $Message
    if (Test-Path -Path $logFilePath) {
        $logFileContent = Get-Content -Path $logFilePath
    } else {
        $logFileContent = ''
    }
    $logMessage, $logFileContent | Out-File -FilePath $logFilePath
}
$processes = Get-Process
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
foreach ($p in $processes)
{
    logAppend($p.ProcessName)
}
$stopwatch.Stop()
$stopwatch.Elapsed
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
foreach ($p in $processes)
{
    logPrepend($p.ProcessName)
}
$stopwatch.Stop()
$stopwatch.Elapsed
I've run this several times until I got a few thousand lines in the log file.
Going from 1603 to 1925 lines, my results were:
Append: 7.0167008 s
Prepend: 21.7046793 s

RunSpacePool output CSV contains blank rows

I am using this amazing answer and got RunSpacePools to output a CSV file, but my CSV file has blank rows and I just cannot figure out where the blank rows are coming from.
The blank lines are shown in Notepad as ,,,
IF (Get-Command Get-SCOMAlert -ErrorAction SilentlyContinue) {} ELSE { Import-Module OperationsManager }
"Get Pend reboot servers from prod"
New-SCOMManagementGroupConnection -ComputerName ProdServer1
$AlertData = Get-SCOMAlert -Criteria "Severity = 1 AND ResolutionState < 254 AND Name = 'Pending Reboot'" | Select NetbiosComputerName
"Get Pend reboot servers from test"
# For test information
New-SCOMManagementGroupConnection -ComputerName TestServer1
$AlertData += Get-SCOMAlert -Criteria "Severity = 1 AND ResolutionState < 254 AND Name = 'Pending Reboot'" | Select NetbiosComputerName
"Remove duplicates"
$AlertDataNoDupe = $AlertData | Sort NetbiosComputerName -Unique

$scriptblock = {
    Param([string]$server)
    $csv = Import-Csv D:\Scripts\MaintenanceWindow2.csv
    $window = $csv | where { $_.Computername -eq "$server" } | % CollectionName
    $SCCMWindow = IF ($window) { $window } ELSE { "NoDeadline" }
    $PingCheck = Test-Connection -Count 1 $server -Quiet -ErrorAction SilentlyContinue
    IF ($PingCheck) { $PingResults = "Alive" }
    ELSE { $PingResults = "Dead" }
    Try {
        $operatingSystem = Get-WmiObject Win32_OperatingSystem -ComputerName $server -ErrorAction Stop
        $LastReboot = [Management.ManagementDateTimeConverter]::ToDateTime($operatingSystem.LastBootUpTime)
        $LastReboot.DateTime
    }
    Catch { $LastReboot = "Access Denied!" }
    # create custom object as output for CSV.
    [PSCustomObject]@{
        Server            = $server
        MaintenanceWindow = $SCCMWindow
        Ping              = $PingResults
        LastReboot        = $LastReboot
    } # end custom object
} # script block end

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(100, 100)
$RunspacePool.Open()
$Jobs =
    foreach ($item in $AlertDataNoDupe)
    {
        $Job = [powershell]::Create().
            AddScript($ScriptBlock).
            AddArgument($item.NetbiosComputerName)
        $Job.RunspacePool = $RunspacePool
        [PSCustomObject]@{
            Pipe   = $Job
            Result = $Job.BeginInvoke()
        }
    }
Write-Host 'Working..' -NoNewline
Do {
    Write-Host '.' -NoNewline
    Start-Sleep -Milliseconds 500
} While ($Jobs.Result.IsCompleted -contains $false)
Write-Host ' Done! Writing output file.'
Write-Host "Output file is d:\scripts\runspacetest4.csv"
$(ForEach ($Job in $Jobs) { $Job.Pipe.EndInvoke($Job.Result) }) |
    Export-Csv d:\scripts\runspacetest4.csv -NoTypeInformation
$RunspacePool.Close()
$RunspacePool.Dispose()
After trial and error, I ended up working with this method of runspace pools to get close. Looking closer, I found the output was polluted by WMI's extra whitespace.
To solve this, I ended up using the following within the ScriptBlock's Try statement.
$LastReboot = [Management.ManagementDateTimeConverter]::ToDateTime($operatingSystem.LastBootUpTime).ToString().Trim()
Now the data returned is all on a single line, as desired.
Edit: regarding WMI's extra whitespace in the output, see this question for more details.
Consider the following method to return a computer's last reboot timestamp. Note you can format the string as needed, see this library page for more info.
$os = (gwmi -Class win32_operatingsystem).LastBootUpTime
[Management.ManagementDateTimeConverter]::ToDateTime($os)
Observe the whitespace, which can be removed by converting the output to a string and then calling Trim().
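For what it's worth, on systems with the CIM cmdlets a hedged alternative is Get-CimInstance, which sidesteps the DMTF date string (and therefore the trimming) entirely:
# Get-CimInstance returns LastBootUpTime as a [datetime] already,
# so no ManagementDateTimeConverter call (or Trim) is needed.
(Get-CimInstance -ClassName Win32_OperatingSystem).LastBootUpTime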

Replace blank characters from a file line by line

I would like to be able to find all blanks in a CSV file; if a blank character is found on a line, that line should appear on the screen and I should be asked whether I want to keep the entire line containing that whitespace or remove it.
Let's say the directory is C:\Cr\Powershell\test. In there there is one CSV file abc.csv.
I tried doing it like this, but in PowerShell ISE the $_.PSObject.Properties isn't recognized.
$csv = Import-Csv C:\Cr\Powershell\test\*.csv | ForEach-Object {
    $_.PSObject.Properties | ForEach-Object { $_.Value = $_.Value.Trim() }
}
I apologize for not including more code and more of what I have tried so far, but they were silly attempts, since I have only just begun.
This looks helpful, but I don't know exactly how to adapt it to my problem.
Ok man here you go:
$yes = New-Object System.Management.Automation.Host.ChoiceDescription "&Yes", "Retain line."
$no = New-Object System.Management.Automation.Host.ChoiceDescription "&No", "Delete line."
$n = @()
$f = Get-Content .\test.csv
foreach ($item in $f) {
    if ($item -like "* *") {
        $res = $host.UI.PromptForChoice("Title", "want to keep this line? `n $item", [System.Management.Automation.Host.ChoiceDescription[]]($yes, $no), 0)
        switch ($res)
        {
            0 { $n += $item }
            1 { }
        }
    } else {
        $n += $item
    }
}
$n | Set-Content .\test.csv
If you have questions, please post in the comments and I will explain.
Get-Content is probably a better approach than Import-Csv, because that'll allow you to check an entire line for spaces instead of having to check each individual field. For fully automated processing you'd just use a Where-Object filter to remove non-matching lines from the output:
Get-Content 'C:\CrPowershell\test\input.csv' |
Where-Object { $_ -notlike '* *' } |
Set-Content 'C:\CrPowershell\test\output.csv'
However, since you want to prompt for each individual line that contains spaces, you need a ForEach-Object (or a similar construct) and a nested conditional, like this:
Get-Content 'C:\CrPowershell\test\input.csv' | ForEach-Object {
    if ($_ -notlike '* *') { $_ }
} | Set-Content 'C:\CrPowershell\test\output.csv'
The simplest way to prompt a user for input is Read-Host:
$answer = Read-Host -Prompt 'Message'
if ($answer -eq 'y') {
    # do one thing
} else {
    # do another
}
In your particular case you'd probably do something like this for any matching line:
$answer = Read-Host "$_`nKeep the line? [y/n] "
if ($answer -ne 'n') { $_ }
The above checks if the answer is not n to make removal of the line a conscious decision.
Other ways to prompt for user input are choice.exe (which has the additional advantage of allowing a timeout and a default answer):
choice.exe /c YN /d N /t 10 /m "$_`nKeep the line"
if ($LastExitCode -ne 2) { $_ }
or the host UI:
$title = $_
$message = 'Keep the line?'
$yes = New-Object Management.Automation.Host.ChoiceDescription '&Yes'
$no = New-Object Management.Automation.Host.ChoiceDescription '&No'
$options = [Management.Automation.Host.ChoiceDescription[]]($yes, $no)
$answer = $Host.UI.PromptForChoice($title, $message, $options, 1)
if ($answer -ne 1) { $_ }
I'm leaving it as an exercise for you to integrate whichever prompting routine you chose with the rest of the code.
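For completeness, here is a minimal sketch wiring the Read-Host variant into the filtering loop (same input/output paths as above; 'y'/'n' answers are assumed):
Get-Content 'C:\CrPowershell\test\input.csv' | ForEach-Object {
    if ($_ -notlike '* *') {
        $_   # no space in the line: keep it unconditionally
    } else {
        # Prompt for each line that contains a space; keep it unless the user answers 'n'.
        $answer = Read-Host "$_`nKeep the line? [y/n]"
        if ($answer -ne 'n') { $_ }
    }
} | Set-Content 'C:\CrPowershell\test\output.csv'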

How to deal with automated duplicate user removal

I have the following:
@(Import-Csv C:\Users\Administrator\Desktop\dbs\Monday.csv) +
@(Import-Csv C:\Users\Administrator\Desktop\dbs\Tuesday.csv) +
@(Import-Csv C:\Users\Administrator\Desktop\dbs\Wednesday.csv) +
@(Import-Csv C:\Users\Administrator\Desktop\dbs\Thursday.csv) +
@(Import-Csv C:\Users\Administrator\Desktop\dbs\Friday.csv) |
    sort first_name, last_name, phone1 -Unique |
    Export-Csv C:\Users\Administrator\Desktop\dbs\joined.csv
Import-Module ActiveDirectory
# EDIT PATH SO IT POINTS TO DB FILE \/
$newUserList = Import-Csv C:\Users\Administrator\Desktop\dbs\joined.csv
ForEach ($item in $newUserList) {
    $fname = $($item.first_name)
    $lname = $($item.last_name)
    $phone = $($item.phone1)
    $username = $fname + $lname.Substring(0,1)
    # Puts Domain name into a Placeholder.
    $domain = '@csilab.local'
    # Build the User Principal Name Username with Domain added to it
    $UPN = $username + $domain
    # Create the Displayname
    $Name = $fname + " " + $lname
    $newusers1 = (New-ADUser -GivenName $fname -Surname $lname -HomePhone $phone -Name $Name -DisplayName $Name -SamAccountName $username -AccountPassword (ConvertTo-SecureString "1NewPassword" -AsPlainText -Force) -ChangePasswordAtLogon $true -UserPrincipalName $UPN -Path "ou=test,dc=csi,dc=lab" -Enabled $true -PassThru) |
    # I need this block to check for duplicates missed by the csv sort & merge
    # as well as any in the destination OU itself as the script will run daily
    # with inevitable possibility that user is unique to the csv but not the AD.
    $newusers1 | Get-ADUser -Filter * -SearchBase "OU=Active Users,DC=csilab,DC=local" |
        Sort-Object -Unique |
        Remove-ADUser -Confirm:$false
However, when I run it I get:
Get-ADUser : The input object cannot be bound to any parameters for the command
either because the command does not take pipeline input or the input and its
properties do not match any of the parameters that take pipeline input.
At C:\Users\Administrator\Desktop\Team2.ps1:40 char:14
+ $newusers1 | Get-ADUser -Filter * -SearchBase "OU=Active Users,DC=csilab,DC=loca ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (CN=Bethanie Cut...csilab,dc=local:PSObject) [Get-ADUser], ParameterBindingException
+ FullyQualifiedErrorId : InputObjectNotBound,Microsoft.ActiveDirectory.Management.Commands.GetADUser
I also worry that, even if it did work, it would delete the unique users instead of the duplicates.
    Get-ADUser $username | Move-ADObject -TargetPath 'OU=Active Users,dc=csilab,dc=local'
}
What can I do to ensure all users are there without any originals getting deleted, just the duplicates?
You still have an empty pipe at the end of your New-ADUser statement, which would cause your script to fail with an "empty pipe element is not allowed" error, but oh well ...
To avoid collisions just check if an account already exists before you try to create it, and create it only if it doesn't:
$username = $fname + $lname.Substring(0,1)
...
if (Get-ADUser -Filter "SamAccountName -eq '$username'") {
    Write-Host "Account $username already exists."
} else {
    New-ADUser -SamAccountName $username -Name $Name ... -PassThru
}
Also, you're overcomplicating the CSV handling. Simply process a list of the files via ForEach-Object:
$domain = '@csilab.local'
Set-Location 'C:\Users\Administrator\Desktop\dbs'
'Monday.csv', 'Tuesday.csv', 'Wednesday.csv', 'Thursday.csv', 'Friday.csv' |
    ForEach-Object { Import-Csv $_ } |
    Sort-Object first_name, last_name, phone1 -Unique |
    ForEach-Object {
        $fname = $_.first_name
        $lname = $_.last_name
        $phone = $_.phone1
        ...
    }
You might want to use Try/Catch:
Try {
    $newusers1 = (New-ADUser -GivenName $fname -Surname $lname -HomePhone $phone -Name $Name -DisplayName $Name -SamAccountName $username -AccountPassword (ConvertTo-SecureString "1NewPassword" -AsPlainText -Force) -ChangePasswordAtLogon $true -UserPrincipalName $UPN -Path "dc=csi,dc=lab" -Enabled $true)
}
Catch {
    If ($_.Exception.Message -match "account already exists")
    {
        # do whatever here, eg $NewUsers1 = Get-ADUser $Name
    }
}
Also, if you can't see the user when browsing via ADUC, it could be that you are connected to a different DC.
As mentioned above, the $newusers1 variable will be null if the command failed. It will not be loaded with the other user automatically, and it would be scary bad if it did. You need to decide what to do if the account already exists; that may be as simple as loading the variable with the existing account, or doing something like appending a "1" to $name and retrying the command.
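A hedged sketch of that retry idea, reusing the message check from the Try/Catch above (the "1" suffix is just an example convention, and the remaining New-ADUser parameters from the question would be carried over):
Try {
    $newusers1 = New-ADUser -GivenName $fname -Surname $lname -Name $Name -SamAccountName $username -AccountPassword (ConvertTo-SecureString "1NewPassword" -AsPlainText -Force) -UserPrincipalName $UPN -Path "ou=test,dc=csi,dc=lab" -Enabled $true -PassThru
}
Catch {
    If ($_.Exception.Message -match "account already exists") {
        # Retry once with a "1" appended to the display name and account name.
        $newusers1 = New-ADUser -GivenName $fname -Surname $lname -Name ($Name + "1") -SamAccountName ($username + "1") -AccountPassword (ConvertTo-SecureString "1NewPassword" -AsPlainText -Force) -UserPrincipalName ($username + "1" + $domain) -Path "ou=test,dc=csi,dc=lab" -Enabled $true -PassThru
    }
}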