When dealing with csv files using the cmdlet Import-Csv in PowerShell, is there an easy / elegant way to query a column by its position instead of the header name?
Example with this data:
row1, abc
row2, def
row3, xyz
The below command works fine if the file has "header_column2" as the header of the 2nd column:
import-csv data.csv | where {$_.header_column2 -eq "xyz"}
I am looking for something like this (which obviously doesn't work!):
import-csv data.csv | where {$_.[2] -eq "xyz"}
I know that if I don't have headers in the file I could use the -Header parameter to specify those, but I am looking for something quick and handy to write on the fly for various files, where headers are not relevant.
Many thanks!
@PetSerAl posted this approach as a comment rather than an answer, so I'll provide a similar answer here.
You can list the properties of the object using .PSObject.Properties. This returns an IEnumerable that doesn't support indexing, but if you convert it to an array, it does. Note that the array is zero-based, so the second column is index [1]:
@($_.PSObject.Properties)[1].Value
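Put together, a quick on-the-fly filter on the second column might look like this (a sketch mirroring the command in the question; it assumes the first line of the file is treated as the header row):
Import-Csv data.csv | Where-Object { @($_.PSObject.Properties)[1].Value -eq "xyz" }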
Fairly new to PowerShell, and I have given myself a bit of a challenge which I think should be possible; I'm just not sure of the best way to go about it.
We have a user with a CSV containing a large number of columns (anywhere from 20 to 50) and between 1 and 10,000 rows. The data is, say, ClientName,Address1,Address2,Postcode etc. (although these can vary wildly depending on the source of the data - external companies). This needs importing into a system using a pre-built routine which looks at the file and expects the database column headers as the CSV headers, so say ClientDisplay,Ad_Line1,Ad_Line2,PCode etc.
I was thinking along the lines of a generic PowerShell 'mapping' form which could read the headers from ExternalSource.csv and from either a DatabaseHeaders.csv or a direct SQL query lookup, display them as columns in a form, and let you highlight one entry from each column and click a 'link' button. Once you have been through all the columns in ExternalSource.csv, a 'generate csv' button would take the mapped headers and append the correct data columns from ExternalSource.csv.
Am I barking up the wrong tree completely in trying to use PowerShell for this? At the moment it's a very time-consuming process, so I'm just trying to make life easier for the users.
Any advice appreciated..
Thanks,
Jon
You can use the Select-Object cmdlet with dynamic columns to shape the data into the form you need.
Something like:
Import-Csv -Path 'source.csv' |
    Select-Object Id, Name, @{ Name='Ad_Line1'; Expression={ $_.Address1 } } |
    Export-Csv -Path 'target.csv'
In this example, @{ Name='Ad_Line1'; Expression={ $_.Address1 } } is a dynamic (calculated) column that creates a column named Ad_Line1 with the value of Address1.
It is possible to read the column mappings from a file; you will have to write some code to read the file, select the properties, and build the calculated columns.
A very simple solution could be to read the Select-Object part from another script file, so you can differentiate that part for each import.
A (simple, naive, low-performance) solution could look like this (untested code):
# read the input file ($input is a reserved automatic variable in PowerShell, so use another name)
$data = Import-Csv -Path $inputFile
# read the source and target column names from the mapping file
$mappings = Import-Csv -Path $mappingFile | Select-Object Source, Target
# apply transformations
$transformed = $data
foreach ($mapping in $mappings) {
    # add an extra calculated column for each mapping
    $transformed = $transformed | Select-Object *, @{ Name = $mapping.Target; Expression = { $_.$($mapping.Source) } }
}
# export the transformed data
$transformed | Export-Csv -Path $outputFile
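For illustration, a mapping file for the example headers above might look like this (the Source/Target column names are what the snippet expects; the pairings themselves are made up):
Source,Target
ClientName,ClientDisplay
Address1,Ad_Line1
Address2,Ad_Line2
Postcode,PCode
With $inputFile, $mappingFile and $outputFile pointed at the right paths, the snippet writes the original columns plus the mapped ones; add a final Select-Object listing only the target names if the output should contain just the mapped columns.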
Alternatively, it is possible to convert the data into XML with Import-Csv | Export-CliXml, apply an XSLT template to the XML to perform the transformation, and save the XML objects back to CSV with Import-CliXml | Export-Csv.
See this blog by Scott Hanselman on how you can use XSLT with PowerShell.
I have a CSV file that contains lines like this:
"Title","Path"
"News","/somepath/news.json/$variable1$variable2"
"Boards","/somepath/$year/$variable3/boardfile.json"
I set all these variables in the script and then import the data using the Import-Csv cmdlet. As a result, I get objects with Title and Path properties, which is what I need. But the problem is that the variables in the CSV file are not replaced with their actual values during import.
I know there is a solution with Get-Content and ExpandVariable, but it produces an array of strings, and I would rather have an object (or maybe a hashtable?) with properties similar to what Import-Csv produces.
Is there a way to replace variables with values while still using Import-Csv and get a full-featured object as a result?
Import-Csv doesn't expand variables in imported data, so the values in your CSV behave as if they were in single-quoted strings. You need something like this to get the variables expanded after import:
Import-Csv 'C:\path\to\your.csv' | Select-Object Title,
    @{n='Path';e={$ExecutionContext.InvokeCommand.ExpandString($_.Path)}}
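A usage sketch, assuming the variables referenced in the CSV are already defined in the session (the values below are placeholders):
$variable1 = 'abc'
$variable2 = 'def'
$year      = '2024'
$variable3 = 'archive'
Import-Csv 'C:\path\to\your.csv' |
    Select-Object Title, @{n='Path';e={$ExecutionContext.InvokeCommand.ExpandString($_.Path)}} |
    Export-Csv 'C:\path\to\expanded.csv' -NoTypeInformation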
So I'm just starting out with this whole PowerShell thing and so far so good - until now. I just can't figure out how to do this!
I'm looking at manipulating CSV files which are output from one system (whose output I can't change): renaming some column headers and merging a couple of the columns into one so that the file matches the input requirements for uploading into another system (again, I can't change those parameters).
So, as an example.
The first file is created:
File1.csv
"A","B","C""1","2","3"
I want a PowerShell script that will output:
File2.csv
"X","Y""1","23"
So I can import it into another system.
I hope that all makes sense, and thanks in advance for any assistance.
I'm going to assume that your actual/desired formats of your files look like this:
"A","B","C"
"1","2","3"
"X","Y"
"1","23"
rather than having everything on one line. If that's correct, you can import File1.csv with Import-Csv and rename and merge columns with calculated properties:
... | Select-Object @{n='X';e={$_.A}}, @{n='Y';e={$_.B + $_.C}} | ...
and write the result to File2.csv with Export-Csv.
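Put together, the whole conversion might look something like this (a sketch; -NoTypeInformation keeps the "#TYPE" line out of the output file):
Import-Csv File1.csv |
    Select-Object @{n='X';e={$_.A}}, @{n='Y';e={$_.B + $_.C}} |
    Export-Csv File2.csv -NoTypeInformation
Because A, B and C are imported as strings, $_.B + $_.C concatenates them, which is what turns "2" and "3" into "23".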
I have multiple CSV files, all with the same header, all in the same folder, which I have to combine into one large CSV file. Since every .csv file has the same header, I guess including it once should be enough.
All files look like this (also delimited by ','):
Header1,Header2,Header3
Data1, Data2, Data3
Data4, Data5, Data6
Can you help out? I'm not very comfortable with PowerShell yet; I've tried different bits of code but nothing has really worked for me.
Thanks
If all CSVs have the same columns, it's as simple as:
# Original CSV
$csv = Import-Csv c:\temp.csv
# 2nd CSV
$csv2 = Import-Csv c:\temp2.csv
# As simple as:
$csv += $csv2
If you want to import all CSVs in the current folder, you can do something like the following:
Get-ChildItem *.csv | % { Import-Csv $_ }
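To write the combined data back out with a single header row, something along these lines should work (the output goes outside the folder so the wildcard doesn't pick it up on a re-run; the file name is a placeholder):
Get-ChildItem *.csv |
    ForEach-Object { Import-Csv $_.FullName } |
    Export-Csv ..\combined.csv -NoTypeInformation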
Start by copying the first file to an output file.
Then see this (question, answer):
https://stackoverflow.com/a/2076557/1331076
It shows you how to remove the first line from a file.
You would have to modify it so that, instead of replacing your existing file, you Add-Content to the output file.
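A rough sketch of that approach (file names are placeholders; the second Select-Object -Skip 1 drops the header row from every file after the first):
$files = Get-ChildItem *.csv
Copy-Item $files[0].FullName ..\combined.csv
$files | Select-Object -Skip 1 | ForEach-Object {
    Get-Content $_.FullName | Select-Object -Skip 1 | Add-Content ..\combined.csv
}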
I wrote a script to read a list of folder objects into an XML file showing the properties for each folder. It also creates another XML file at a later point in time. After the delta XML file is created, I import both XML files and compare them based on the folder name to show which folders have been added or removed, and save the results in HTML format to a file for viewing.
Everything works well, but I want to replace some of the values in the results. The Compare-Object cmdlet lets me display some attributes, but it indicates which side the change was on by putting => for a folder added in the delta file or <= for a folder removed in the delta file. I would really like to rename the SideIndicator column and replace the => and <= values with something more intuitive. I played around with a -replace {$_ $original, $newvalue} style approach, and I found guidance on the Hey Scripting Guy blog and some other examples, but none seemed to do what I want.
What is the best way to approach this? Currently I'm not storing the compared results, just formatting and converting to HTML. Any advice is appreciated. I can post the code if needed, but it is about 60 lines long, and I'm really looking for the best way to accomplish this, not necessarily for someone to write the code.
Thanks!
If you just want to replace column names in the output display, you can create a custom table:
http://gallery.technet.microsoft.com/scriptcenter/ed188912-1a20-4be9-ae4f-8ac46cf2aae4
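Another option along the same lines is to map the values with a calculated property before converting to HTML, so no post-processing of the report file is needed. A rough sketch (the $originalFolders/$deltaFolders variables and the Name property are assumptions about your script):
Compare-Object $originalFolders $deltaFolders -Property Name |
    Select-Object Name, @{ n = 'Change Status'; e = { if ($_.SideIndicator -eq '=>') { 'New' } else { 'Removed' } } } |
    ConvertTo-Html |
    Set-Content "$WorkingDirectory\FolderAudit_$UseDateTime.htm"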
That is one approach, but I ended up modifying the html report by doing the following:
(Get-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm) |
Foreach-Object {$_ -replace "SideIndicator", "Change Status"} |
Set-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm
(Get-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm) |
Foreach-Object {$_ -replace "=>", "New"} |
Set-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm
(Get-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm) |
Foreach-Object {$_ -replace "<=", "Removed"} |
Set-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm
It didn't seem to work well as one statement that replaced the three different pieces I was looking for. Eventually I'll play around with it and streamline it some, but for now it does what I want.
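For reference, the three passes can probably be collapsed into a single pipeline by chaining -replace, since each replacement returns the modified string (untested against your report, so treat it as a sketch):
(Get-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm) |
    Foreach-Object {$_ -replace "SideIndicator", "Change Status" -replace "=>", "New" -replace "<=", "Removed"} |
    Set-Content $WorkingDirectory\FolderAudit_$UseDateTime.htm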