Data inserted into MySQL through PowerShell script unexpectedly inserts two rows

I have a working script that inserts data such as the computer name, date, IP address and two other values that the script reads from a txt file.
I don't know why it performs a double insert on the MySQL server, where the first insert is good and the second is missing the data from the txt file.
The script I've posted below successfully inserts data into my database; however, it inserts two separate rows: one with the appropriate data in the End1 and End2 columns and one with incorrect data.
Code to capture and insert data:
$txt = "app_log.txt"
function find_nr {
get-content $txt -ReadCount 1000 |
foreach { $_ -match "P3#X" }
}
$string = find_nr
$separator = "\;"
function separate {
$string.Split($separator,6)
}
function nr_lines {
$i = 99901; separate | % {$i++;"$($i-1) `t $_"}
}
function find {
$line = $args[0] | Select-String -Pattern "99905" -CaseSensitive
($line.line.split(' ') |Where-Object {$_.Trim() -ne ''})[1]
}
function liczba {
$result = nr_lines
find $result
}
liczba
function find_sn {
get-content $txt -ReadCount 1000 |
foreach { $_ -match "P2#X" }
}
$string_sn = find_sn
$separator_sn = "\/"
function separate_sn {
$string_sn.Split($separator_sn,20)
}
function sn_lines {
$i = 55501; separate_sn | % {$i++;"$($i-1) `t $_"}
}
# note: redefines the earlier find_sn; the "grep" version above has already been called by this point
function find_sn {
$line_sn = $args[0] | Select-String -Pattern "55518" -CaseSensitive
($line_sn.line.split(' ') |Where-Object {$_.Trim() -ne ''})[1]
}
function sn {
$result2 = sn_lines
find_sn $result2
}
sn
[System.Reflection.Assembly]::LoadWithPartialName("MySql.Data")
$name = $env:COMPUTERNAME
$ipv4 = Test-Connection -ComputerName (hostname) -Count 1 | foreach { $_.ipv4address }
$time = (Get-Date).ToString('yyyy-MM-dd HH:mm:ss')
$end1 = liczba
$end2 = sn
Start-Sleep -s 5
[string]$sMySQLUserName = 'user'
[string]$sMySQLPW = 'pass'
[string]$sMySQLDB = 'db'
[string]$sMySQLHost = '1.0.0.0'
[string]$sConnectionString = "server="+$sMySQLHost+";port=3306;uid=" + $sMySQLUserName + ";pwd=" + $sMySQLPW + ";database="+$sMySQLDB+";SslMode=none"
$oConnection = New-Object MySql.Data.MySqlClient.MySqlConnection($sConnectionString)
$Error.Clear()
try
{
$oConnection.Open()
}
catch
{
write-warning ("DB error: $sMySQLDB at IP: $sMySQLHost. Error: "+$Error[0].ToString())
}
$oMYSQLCommand = New-Object MySql.Data.MySqlClient.MySqlCommand
$oMYSQLCommand.CommandText="
INSERT into `db.scanner` (name,ipv4,date,raports,serialnumber) VALUES('$name','$ipv4','$time','$end1','$end2')"
$oMYSQLCommand.Connection=$oConnection
$oMYSQLCommand.ExecuteNonQuery()
$oConnection.Close()
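As an aside, a parameterized version of the insert (a sketch only, reusing the same table and column names as above) avoids building SQL by string interpolation and sidesteps the fact that PowerShell treats backticks inside double-quoted strings as escape characters:
$oMYSQLCommand = New-Object MySql.Data.MySqlClient.MySqlCommand
$oMYSQLCommand.Connection = $oConnection
$oMYSQLCommand.CommandText = "INSERT INTO db.scanner (name,ipv4,date,raports,serialnumber) VALUES (@name,@ipv4,@date,@end1,@end2)"
# bind the values instead of interpolating them into the SQL text
[void]$oMYSQLCommand.Parameters.AddWithValue("@name", $name)
[void]$oMYSQLCommand.Parameters.AddWithValue("@ipv4", "$ipv4")
[void]$oMYSQLCommand.Parameters.AddWithValue("@date", $time)
[void]$oMYSQLCommand.Parameters.AddWithValue("@end1", $end1)
[void]$oMYSQLCommand.Parameters.AddWithValue("@end2", $end2)
[void]$oMYSQLCommand.ExecuteNonQuery()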
The script runs at user logon, every third day of the week.
This is how it looks in mysql db:
ID NAME IP DATE END1 END2
239 S02C-KASA1 1.0.0.1 2021-09-02 08:15:37 0
238 S02C-KASA1 1.0.0.1 2021-09-02 08:15:37 1476 CAO1802176616FC
Judging by the timestamps, both inserts happen at the same time. Does anyone have an idea as to what is causing this?
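One way to check whether the script itself is simply being launched twice at logon (a purely diagnostic sketch; the log path is hypothetical):
# Append one trace line per run; two lines with different PIDs and the same
# timestamp at logon would mean two instances of the script are started.
Add-Content -Path "$env:TEMP\scanner_trace.log" -Value ("{0} PID={1}" -f (Get-Date -Format o), $PID)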
Edit:
This is the app_log.txt data:
Send : P23#sae\
Resp.: ...P2#X0;1;0;1;1;2;20;9;29/23.00/08.00/05.00/00.00/100.00/101.00/101.00/0/0./0./0./0./0./0./0./29930676.49/BDS13287171EF\
Resp.: .
Send : P24#sa9\
Resp.: ..P3#X2015;2;10;1771;59;0;249.00/0.00/0.00/0.00/0.00/0.00/0.00/80\
Resp.: .
find_nr takes this line: P3#X2015;2;10;1771;59;0;249.00/0.00/0.00/0.00/0.00/0.00/0.00/80\
find_sn takes this line:
P2#X0;1;0;1;1;2;20;9;29/23.00/08.00/05.00/00.00/100.00/101.00/101.00/0/0./0./0./0./0./0./0./29930676.49/BDS13287171EF\
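For what it's worth, the numbering helpers above can be collapsed into two direct lookups (a sketch; the indices 4 and 17 are inferred from the script's own labels, 99905 - 99901 and 55518 - 55501):
# Sketch: grab the first P3#X / P2#X line and index straight into its fields.
$log  = Get-Content app_log.txt
$end1 = (($log -match 'P3#X')[0] -split ';')[4].Trim()
$end2 = (($log -match 'P2#X')[0] -split '/')[17].TrimEnd('\').Trim()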

Related

Script to Enforce MFA failing to grab dataset from servers

function Enforce-MFA($exclude){
Connect-MsolService
$excludedUsers = 'admin','admin2','admin3','admin4' + $exclude
$excluded = ($excludedUsers | ForEach-Object { [regex]::Escape($_) }) -join '|'
$st = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$st.RelyingParty = "*"
$st.State = "Enforced"
$sta = @($st)
$array = (Get-MsolUser | Where-Object { $_.DisplayName -notmatch $excluded }).UserPrincipalName
ForEach ($user in $array)
{
Set-MsolUser -UserPrincipalName $user -StrongAuthenticationRequirements $sta
Write-Host "Complete"
}
}
The general function is to grab a list of objects, exclude certain objects, and Enforce MFA for the remaining objects. This script seemed to work without any issue last week, but this week, I'm getting no data from the Array variable. I was working on a lot of different changes and I'm thinking I may have messed something up in the process, but I'm just not seeing it. What did I mess up or what am I not seeing?
You forgot to add -Credential $cred to the Connect-MsolService.
You should create that connection first and take it out of the function.
function Enforce-MFA {
$excludeTheseUsers = 'admin', 'user1', 'user2' # etc.
# for using the regex `-notmatch` operator later, you need to combine the entries with the regex OR sign ('|'),
# but you need to make sure to escape special characters some names may contain
$excludes = ($excludeTheseUsers | ForEach-Object { [regex]::Escape($_) }) -join '|'
# create the StrongAuthenticationRequirement object just once, to use on all users
$st = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$st.RelyingParty = "*"
$st.State = "Enabled"
$sta = @($st)
# get an array of UserPrincipalNames
$array = (Get-MsolUser | Where-Object { $_.DisplayName -notmatch $excludes }).UserPrincipalName
foreach ($user in $array) {
Set-MsolUser -UserPrincipalName $user -StrongAuthenticationRequirements $sta
}
Write-Host "Enforcing MFA Complete"
}
# ask for credentials to make the connection
$cred = Get-Credential -Message 'Please enter your credentials to connect to Azure Active Directory'
Connect-MsolService -Credential $cred
As for your loop, try something like this:
# enter an endless loop
while($true) {
$var = Read-Host -Prompt "Enter the corresponding number: 1: Enforce 2: Enable 3: Disable 4: Exit"
switch($var){
1 { Enforce-MFA }
2 { Enable-MFA }
3 { Disable-MFA }
4 { exit }
default{ "Please choose either 1, 2, 3 or 4" }
}
}

Powershell scripts work when run directly but not when called by another

I have a long list of simple jobs I would like to somewhat automate. It's simple stuff, grab or post info via API and build some reports, nothing fancy.
I decided to build a master script which directs out to a variety of other scripts, each handling its own job. Each one of those little scripts, reference functions from a Utility script which I built that has functions which are common to all the other simple job scripts.
Each of the scripts work perfectly when I run them directly, however, when I try to run them via the master script, which routes to them, they all fail.
One example is that in many cases I need to fetch data from an API but get capped at 1000 objects returned when I need 10k+. To solve this, I built a function which recursively calls itself until there is no more data left to collect. Again, this works when called by itself but not from the master script; for some reason it bails out after the first run (it should run 10+ times in this case) and then returns nothing.
I am thinking maybe this has something to do with how I am scoping the functions/variables?? Not sure. I have tried scoping to Global, Local & Script but none seem to work. Here's some of the code...
*Master Director Script runs script based on user input*
...
&$choice_hash[$action].script_path
$ScriptDirectory = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
. "$ScriptDirectory\Utilities.psm1"
$user_data = $null
$env_choice = $null
$csv_output_path = $null
$collated_user_data = [System.Collections.ArrayList]@()
function selectEnv {
$global:env_choice = Read-Host #"
> Select an Environment: [Prod] or [Dev]
Your Choice
"#
if ($env_choice -ne 'Prod' -and $env_choice -ne 'Dev') {
consoleCmt $env_choice
consoleCmt 'Invalid Choice. Try again...'
selectEnv
} else {
if ($env_choice -eq 'Prod') {
$global:csv_output_path = '\\etoprod\******\Exports\Report_Users_Prod.csv'
} else {
$global:csv_output_path = '\\etoprod\******\Exports\Report_Users_Dev.csv'
}
$global:user_data = process_data $env_choice 'api/xm/1/people?embed=roles&limit=1000'
}
}
function processUsersData {
foreach($user in $user_data) {
$user_roles = ''
$role_divider = ','
for($i = 0; $i -lt $user.roles.data.length; $i++) {
# Only append a comma if there are more, otherwise leave blank for CSV delineation
if ($i -eq $user.roles.data.length - 1) {
$role_divider = ''
}
$user_roles += $user.roles.data[$i].name + $role_divider
}
# Build ordered hash table with above data
$sanitized_user = [pscustomobject][ordered]@{id = $user.targetName; firstName = $user.firstName; lastName = $user.lastName; siteName = $user.site.name; roles = $user_roles }
# Shovel into storage array used for building the CSV
$global:collated_user_data += $sanitized_user
}
}
notice 'Initiating Groups Report Script'
selectEnv
processUsersData
exportCsv $collated_user_data $csv_output_path
Utility Script (relevant functions being called)
$res = $null
$content = @()
...
function process_data($env, $url) {
fetch_data $env $url
foreach($i in $res.data) {
$global:content += $i
}
if($res.links.next) {
fetch_more $env $res.links.next
}
return $content # <-- should return the full collection of data, but fails after one pass
}
function fetch_data($env, $url) {
$base = generateEnvBase $env
$path = "$base/$url"
$req = Invoke-WebRequest -Credential $cred -Uri $path -Method GET
$global:res = ConvertFrom-Json $req
}
function fetch_more($env, $url) {
$base = generateEnvBase $env
$path = "$base$url"
$req = Invoke-WebRequest -Credential $cred -Uri $path -Method GET
$res = ConvertFrom-Json $req
foreach($i in $res.data) {
$global:content += $i
}
if($res.links.next) {
fetch_more $env $res.links.next
}
}
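(Aside: a pagination variant that returns its results instead of writing to globals — a sketch reusing generateEnvBase and $cred from the code above — behaves the same no matter which script dot-sources it:)
function Get-AllPages($env, $url) {
    $base = generateEnvBase $env
    $collected = @()
    $path = "$base/$url"
    while ($path) {
        # same request pattern as fetch_data above, parsing the response body
        $res = (Invoke-WebRequest -Credential $cred -Uri $path -Method GET).Content | ConvertFrom-Json
        $collected += $res.data
        # follow the API's 'next' link until there is none
        $path = if ($res.links.next) { "$base$($res.links.next)" } else { $null }
    }
    return $collected
}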
Sorry in advance if I have not followed procedure or etiquette, I'm new here.
This should work if you declare all variables in Main.ps1 that are needed by the functions. You could also use the "Script" scope when creating a new variable inside a function that you want to use outside the function. For example, $Script:Var = "Stuff" created inside a function will be available to the whole script.
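A minimal illustration of that script-scope behavior (hypothetical names):
function Get-Stuff {
    # assigning into the script scope makes the value survive the function call
    $Script:Var = "Stuff"
}
Get-Stuff
Write-Output $Var   # prints 'Stuff'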
Directory Structure
C:\Script\Root
| Main.ps1
\---Utilities
fetch_data.ps1
fetch_more.ps1
processUsersData.ps1
process_data.ps1
selectEnv.ps1
Main.ps1
#---[ Initialization ]---#
# Strings
[String]$RootPath = $PSScriptRoot
[String]$UtilPath = "$($RootPath)\Utilities"
[String]$env_choice = $null
[String]$csv_output_path = $null
# Arrays
[Array]$user_data = @()
[Array]$content = @()
[Array]$collated_user_data = @()
[Array]$res = @()
#---[ Source in Utilities ]---#
# Get the scripts
$Utilities = Get-ChildItem -Path "$UtilPath" -File | Where-Object {$_.Extension -eq ".ps1"}
# Source in each one
foreach ($Item in $Utilities) {
. $Item.FullName
}
#---[ Select an Environment ]---#
# Get the User's choice
$env_choice = selectEnv
# Process the choice
switch ($env_choice) {
Prod {
$csv_output_path = '\\etoprod\******\Exports\Report_Users_Prod.csv'
$user_data = process_data 'Prod' 'api/xm/1/people?embed=roles&limit=1000'
}
Dev {
$csv_output_path = '\\etoprod\******\Exports\Report_Users_Dev.csv'
$user_data = process_data 'Dev' 'api/xm/1/people?embed=roles&limit=1000'
}
Test {
Write-Output "Test is not an option. Choose wisely."
exit 1
}
Default {
Write-Output "Unknown Environment Choice."
exit 1
}
}
#---[ Process Users and Export ]---#
processUsersData
exportCsv $collated_user_data $csv_output_path
selectEnv.ps1
function selectEnv {
$Title = "Environment:"
$Info = "Please choose an environment"
# Options
$Prod = New-Object System.Management.Automation.Host.ChoiceDescription '&Prod', 'Production environment'
$Dev = New-Object System.Management.Automation.Host.ChoiceDescription '&Dev', 'Development environment'
$Test = New-Object System.Management.Automation.Host.ChoiceDescription '&Test', 'Testing environment'
$Options = [System.Management.Automation.Host.ChoiceDescription[]]($Prod, $Dev, $Test)
$Default = 0
# Prompt the User
$Choice = $host.UI.PromptForChoice($Title , $Info , $Options, $Default)
$Result = $Options[$Choice].Label -Replace '&',''
return $Result
}

Leading zero is dropped when using the Export-Csv cmdlet

Below is the script I am running.
The script is working fine and gives the proper output; however, it strips the leading zeros from a couple of columns.
I am using
$res.data.Tables[0] | ConvertTo-Csv -NoType | ForEach-Object {$_.Replace('"','')} | Out-File $fileName -Force
to export the data to the CSV file. Please suggest how to retain the leading zeros (at least 2 digits).
param([int]$accountingDay = 1, [string]$outputFolder)
$server = "ADMSQL01"
$db = "cc111db"
function exec-storedprocedure($storedProcName,
[hashtable] $parameters=@{},
[hashtable] $outparams=@{},
$conn) {
function put-outputparameters($cmd, $outparams) {
foreach ($outp in $outparams.Keys) {
$p = $cmd.Parameters.Add("@$outp", (get-paramtype $outparams[$outp]))
$p.Direction=[System.Data.ParameterDirection]::Output
$p.Size=4
}
}
function get-outputparameters($cmd,$outparams){
foreach ($p in $cmd.Parameters) {
if ($p.Direction -eq [System.Data.ParameterDirection]::Output) {
$outparams[$p.ParameterName.Replace("@","")]=$p.Value
}
}
}
function get-paramtype($typename) {
switch ($typename) {
'uniqueidentifier' {[System.Data.SqlDbType]::UniqueIdentifier}
'int' {[System.Data.SqlDbType]::Int}
'xml' {[System.Data.SqlDbType]::Xml}
'nvarchar' {[System.Data.SqlDbType]::NVarchar}
default {[System.Data.SqlDbType]::Varchar}
}
}
$close = ($conn.State -eq [System.Data.ConnectionState]'Closed')
if ($close) {
$conn.Open()
}
$cmd = New-Object System.Data.SqlClient.SqlCommand($storedProcName,$conn)
$cmd.CommandType = [System.Data.CommandType]'StoredProcedure'
$cmd.CommandText = $storedProcName
foreach ($p in $parameters.Keys) {
$cmd.Parameters.AddWithValue("@$p",[string]$parameters[$p]).Direction = [System.Data.ParameterDirection]::Input
}
put-outputparameters $cmd $outparams
$ds = New-Object System.Data.DataSet
$da = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
[Void]$da.fill($ds)
if ($close) {
$conn.Close()
}
get-outputparameters $cmd $outparams
return @{data=$ds;outputparams=$outparams}
}
# setup the 'framework' to use PowerShell with SQL
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection
# SQL Server connection string
$sqlConnection.ConnectionString = 'server=' + $server + ';integrated security=TRUE;database=' + $db
# execute stored procedure
$res=exec-storedprocedure -storedProcName 'sp_OPTUS_summary' -parameters @{inAccountFieldValues=$null;inAccountViewID=1;inAccountLevel=4;inAccountingDay=$accountingDay;inAccountPeriod=3;inUserGroupID=2} -outparams @{} $sqlConnection
if ($res.data.Tables.Count) {
# store results in file
$curYear = Get-Date -Format yyyy
$curMonth = Get-Date -Format MMM
$curTime = (Get-Date -Format s).Replace(':', ' ')
$fileName = $outputFolder + '\scheduled-' + $curYear + '-' + $curMonth + '-' + $accountingDay + '-' + $curTime + '.csv'
$res.data.Tables[0] | ConvertTo-Csv -NoType | ForEach-Object {$_.Replace('"','')} | Out-File $fileName -Force
}
Integers don't have "leading zeroes". If you want to export formatted output you need to convert the respective fields to formatted strings, e.g. like this:
$res.data.Tables[0] |
Select-Object FieldA, FieldB, @{n='FieldC';e={'{0:d2}' -f $_.FieldC}},
@{n='FieldD';e={'{0:d2}' -f $_.FieldD}}, FieldE, ... |
ConvertTo-Csv -NoType |
...
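For reference, the -f format operator with the d (decimal digits) specifier is what does the padding:
'{0:d2}' -f 7    # 07
'{0:d2}' -f 42   # 42
'{0:d4}' -f 7    # 0007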

RunSpacePool hash table lookup

I'm putting together a powershell script that will use RunSpacePools to output a CSV file containing 1)ServerName, 2)SCCM Maintenance Window, 3)PingCheck, 4)LastRebootTimestamp.
I've got something working by using this amazing answer but my CSV file has blank lines and I'm stuck on getting the SCCM Maintenance Window into the CSV.
I'm unsure how to complete the SCCM Maintenance Window lookup and then add it to the output of $Job.Result, or whether I could just add it to the $ScriptBlock and let the RunspacePool complete the lookup quickly.
The blank CSV lines contain just ,, and some lines don't have the extra blank line.
Edit: my thinking now is to perform the SCCM window lookup first and then simply pass the result into the RunspacePool as another param/argument.
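That idea might look like this (a sketch based on the job-creation code below; runspaces in a pool do not inherit the caller's $Global: variables, so the table has to be handed to each job explicitly):
$Job = [powershell]::Create().
    AddScript($ScriptBlock).
    AddArgument($item.NetbiosComputerName).
    AddArgument($table)          # pass the maintenance-window hashtable in
# ...and receive it in the script block:
# Param([string]$server, [hashtable]$table)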
IF(Get-Command Get-SCOMAlert -ErrorAction SilentlyContinue){}ELSE{Import-Module OperationsManager}
"Get Pend reboot servers from prod"
New-SCOMManagementGroupConnection -ComputerName ProdSCOMServer
$AlertData = get-SCOMAlert -Criteria "Severity = 1 AND ResolutionState < 254 AND Name = 'Pending Reboot detected on the ConfigMgr 2012 Client'" | Select NetbiosComputerName
"Get Pend reboot servers from test"
#For test information
New-SCOMManagementGroupConnection -ComputerName TestSCOMServer
$AlertData += Get-SCOMAlert -Criteria "Severity = 1 AND ResolutionState < 254 AND Name = 'Pending Reboot detected on the ConfigMgr 2012 Client'" | Select NetbiosComputerName
"Remove duplicates"
$AlertDataNoDupe = $AlertData | Sort NetbiosComputerName -Unique
$Global:table = @{}
"Populate hash table"
$MaintenanceWindow = Import-Csv D:\Scripts\MaintenanceWindow2.csv
$MaintenanceWindow | ForEach-Object {$Global:table[$_.Computername] = $_.CollectionName}
$scriptblock = {
Param([string]$server)
#Try getting SCCM Maintenance Window
$SCCMWindow = IF($Global:table.ContainsKey($server)){
$SCCMWindow = $table[$server]
} Else { $SCCMWindow = "Not Found!"}
$PingCheck = Test-Connection -Count 1 $server -Quiet -ErrorAction SilentlyContinue
IF($PingCheck){$PingResults = "Alive"}
ELSE{$PingResults = "Dead"}
Try{$operatingSystem = Get-WmiObject Win32_OperatingSystem -ComputerName $server -ErrorAction Stop
$LastReboot = [Management.ManagementDateTimeConverter]::ToDateTime($operatingSystem.LastBootUpTime)
$LastReboot.DateTime}
Catch{$LastReboot = "Access Denied!"}
[PSCustomObject]@{
Server=$server
Ping=$PingResults
LastReboot=$LastReboot
}#end custom object
}#script block end
$RunspacePool = [RunspaceFactory]::CreateRunspacePool(100,100)
$RunspacePool.Open()
$Jobs =
foreach ( $item in $AlertDataNoDupe )
{
$Job = [powershell]::Create().
AddScript($ScriptBlock).
AddArgument($item.NetbiosComputerName)
$Job.RunspacePool = $RunspacePool
[PSCustomObject]@{
Pipe = $Job
Result = $Job.BeginInvoke()
}
}
Write-Host 'Working..' -NoNewline
Do {
Write-Host '.' -NoNewline
Start-Sleep -Seconds 1
} While ( $Jobs.Result.IsCompleted -contains $false)
Write-Host ' Done! Writing output file.'
Write-host "Output file is d:\scripts\runspacetest5.csv"
$(ForEach ($Job in $Jobs)
{ $Job.Pipe.EndInvoke($Job.Result) }) |
Export-Csv d:\scripts\runspacetest5.csv -NoTypeInformation
$RunspacePool.Close()
$RunspacePool.Dispose()
Not sure if this is the best way, but I ended up using the following. It presents a problem when there are two entries in MaintenanceWindow2.csv for the same computer, because it then returns System.Object[].
$scriptblock = {
Param([string]$server)
$csv = Import-Csv D:\Scripts\MaintenanceWindow2.csv
$window = $csv | where {$_.Computername -eq "$server"} | % CollectionName
$SCCMWindow = IF ($window){$window}ELSE{"NoDeadline"}
}
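If several rows per computer are legitimate, joining the matches into one string avoids the System.Object[] artifact (a sketch):
$window = ($csv | Where-Object { $_.Computername -eq $server } |
           ForEach-Object CollectionName) -join '; '
$SCCMWindow = if ($window) { $window } else { 'NoDeadline' }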

Import CSV and updating specific lines

So I have a script that runs at logon to search for PSTs on a user's machine, then copies them to a holding area waiting for migration.
When the search/copy is complete it outputs to a CSV that looks something like this:
Hostname,User,Path,Size_in_MB,Creation,Last_Access,Copied
COMP1,user1,\\comp1\c$\Test PST.pst,20.58752,08/12/2015,08/12/2015,Client copied
COMP1,user1,\\comp1\c$\outlook\outlook.pst,100,08/12/2015,15,12,2015,In Use
The same logon script has an IF to import the CSV if the copied status is in use and makes further attempts at copying the PST into the holding area. If it's successful it exports the results to the CSV file.
My question is, is there any way of getting it to amend the existing CSV, changing the copy status? I can get it to add a new line at the end, but not to update the existing one.
This is my 'try again' script:
# imports line of csv where PST file is found to be in use
$PST_IN_USE = Import-CSV "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" | where { $_.copied -eq "In Use" }
ForEach ( $PST_USE in $PST_IN_USE )
{ $NAME = Get-ItemProperty $PST_IN_USE.Path | select -ExpandProperty Name
$NEW_NAME = $USER + "_" + $PST_IN_USE.Size_in_MB + "_" + $NAME
# attempts to copy the file to the pst staging area then rename it.
TRY { Copy-Item $PST_IN_USE.Path "\\comp4\TEMPPST\PST\$USER" -ErrorAction SilentlyContinue
Rename-Item "\\comp4\TEMPPST\PST\$USER\$NAME" -NewName $NEW_NAME
# edits the existing csv file replacing "In Use" with "Client Copied"
$PST_IN_USE.Copied -replace "In Use","Client Copied"
} # CLOSES TRY
# silences any errors.
CATCH { }
$PST_IN_USE | Export-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" -NoClobber -NoTypeInformation -Append
} # CLOSES ForEach ( $PST_USE in $PST_IN_USE )
This is the resulting CSV
Hostname,User,Path,Size_in_MB,Creation,Last_Access,Copied
COMP1,user1,\\comp1\c$\Test PST.pst,20.58752,08/12/2015,08/12/2015,Client copied
COMP1,user1,\\comp1\c$\outlook\outlook.pst,100,08/12/2015,15,12,2015,In Use
COMP1,user1,\\comp1\c$\outlook\outlook.pst,100,08/12/2015,15,12,2015,Client copied
It's almost certainly something really simple, but if it is, it's something I've yet to come across in my scripting. I'm mostly working in IF / ELSE land at the moment!
If you want to change the CSV file, you have to write it out completely again, not just append new lines. In your case this means:
# Get the data
$data = Import-Csv ...
# Get the 'In Use' entries
$inUse = $data | where Copied -eq 'In Use'
foreach ($x in $inUse) {
...
$x.Copied = 'Client Copied'
}
# Write the file again
$data | Export-Csv ...
The point here is, you grab all the lines from the CSV, modify those that you process and then write the complete collection back to the file again.
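Filled in with the paths and copy logic from the question (taken as given; the rename step is omitted for brevity), that could look like:
$csvPath = "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv"
$data = Import-Csv $csvPath
foreach ($x in ($data | Where-Object { $_.Copied -eq 'In Use' })) {
    try {
        Copy-Item $x.Path "\\comp4\TEMPPST\PST\$USER" -ErrorAction Stop
        $x.Copied = 'Client copied'   # update the in-memory row
    }
    catch { }                         # still locked: leave it as 'In Use'
}
$data | Export-Csv $csvPath -NoTypeInformation   # rewrite the whole file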
I've cracked it. It's almost certainly a long-winded way of doing it, but it works and is relatively clean too.
#imports line of csv where PST file is found to be in use
$PST_IN_USE = Import-CSV "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" | where { $_.copied -eq "In Use" }
$PST_IN_USE | select -ExpandProperty path | foreach {
# name of pst
$NAME = Get-ItemProperty $_ | select -ExpandProperty Name
# size of pst in MB without decimals
$SIZE = Get-ItemProperty $_ | select -ExpandProperty length | foreach { $_ / 1000000 }
# path of pst
$PATH = $_
# new name of pst when copied to the destination
$NEW_NAME = $USER + "_" + $SIZE + "_" + $NAME
TRY { Copy-Item $_ "\\comp4\TEMPPST\PST\$USER" -ErrorAction SilentlyContinue
TRY { Rename-Item "\\comp4\TEMPPST\PST\$USER\$NAME" -NewName $NEW_NAME -ErrorAction SilentlyContinue | Out-Null }
CATCH { $NEW_NAME = "Duplicate exists" }
$COPIED = "Client copied" }
CATCH { $COPIED = "In use" ; $NEW_NAME = " " }
$NEW_FILE = Test-Path "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
IF ( $NEW_FILE -eq $FALSE )
{ "Hostname,User,Path,Size_in_MB,Creation,Last_Access,Copied,New_Name" |
Set-Content "\\lccfp1\TEMPPST\PST\$HOSTNAME - $USER 4.csv" }
"$HOSTNAME,$USER,$PATH,$SIZE,$CREATION,$LASTACCESS,$COPIED,$NEW_NAME" |
Add-Content "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
} # CLOSES FOREACH #
$a = Import-CSV "\\comp4\TEMPPST\PST\$HOSTNAME - $USER.csv" | where { $_.copied -ne "in use" }
$b = Import-Csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 4.csv"
$a + $b | export-csv "\\comp4\TEMPPST\PST\$HOSTNAME - $USER 8.csv" -NoClobber -NoTypeInformation
Thanks for the help. Sometimes it takes a moments break and a large cup of coffee to see things a different way.