Renaming files with the same name using sequential numbers? - google-apps-script

I feel like this is pretty simple, but I'm missing something. I have 130 folders, all containing the same file, "Document.pdf". Of course, the contents vary from file to file, but they all have the same name and extension. What I'm trying to do is have a script take all those 130 files, and give them names from "1.pdf" to "130.pdf", in order. The folders are in order as well (1-130). I have these folders on both local storage and Google Drive, so any solution involving either bash or GScripts will be good with me. Thanks.

This should do the trick:
Code:
function renameFiles() {
  var iter = 1;
  while (iter <= 130) {
    Logger.log(iter);
    // Find every folder whose name matches the current number
    var folders = DriveApp.getFoldersByName(iter);
    while (folders.hasNext()) {
      var folder = folders.next();
      Logger.log("Folder: " + folder.getName());
      var files = folder.getFilesByName("Document.pdf");
      while (files.hasNext()) {
        var file = files.next();
        // "application/pdf".substr(-3) is "pdf", so this renames the file to e.g. "1.pdf"
        file.setName(iter + "." + file.getMimeType().substr(-3));
        Logger.log("File: " + file.getName());
      }
    }
    iter++;
  }
}
Assumptions:
Folders are named 1, 2, ..., 130

Assuming the directories are actually named 1, 2, and so on up to and including 130, here is a bash solution:
# Edit this to the desired path
parent_dir='.'
# -mindepth 1 skips the parent directory itself
find "$parent_dir" -mindepth 1 -maxdepth 1 -type d | while read -r directory; do
    dir_name="$(basename "$directory")"
    if [ "$dir_name" -ge 1 ] && [ "$dir_name" -le 130 ]; then
        mv "$directory/Document.pdf" "$directory/$dir_name.pdf"
    fi
done
This uses basename to get the name of each directory (i.e., the value between 1 and 130), and uses that to rename the Document.pdf files.
Note that -maxdepth and -mindepth are not POSIX. To replace them with POSIX options, see this answer.
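For the local copies, the same idea can also be expressed in PowerShell; a minimal sketch, assuming the numbered folders sit under $parentDir (a placeholder to edit):
$parentDir = '.'
1..130 | ForEach-Object {
    $pdf = Join-Path $parentDir "$_\Document.pdf"
    if (Test-Path $pdf) {
        # renames e.g. 1\Document.pdf to 1\1.pdf
        Rename-Item -Path $pdf -NewName "$_.pdf"
    }
}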

Related

Reading JSON objects in Powershell

I need to integrate a JSON file, which contains the paths of different objects, into a PS script that generates and compares the hashes of source and destination files. The paths in the JSON file are written in the format I have stated below. I want to use the paths in that manner and pipe them into Get-FileHash in PowerShell. I can't figure out how to integrate my current PowerShell script with the JSON file that contains the information (file name, full path, etc.).
I have two scripts that I have tested and they work fine. One generates the MD5 hashes of two directories (source and destination) and stores them in a CSV file. The other compares the MD5 hashes from the two CSV files and generates a new one, showing the result (whether a file is absent from source or destination).
Now, I need to integrate these scripts into another one, which is basically a PowerShell installer. The installer saves the configuration (paths, ports, new files to be made, etc.) in a JSON format. In my original scripts, the user would type the paths of the source and destination that needed to be compared. However, I now need to take the paths from the JSON configuration files. For example, the JSON file below is of a similar nature to the one I have.
{
    "destinationpath": "C:\\Destination\\Mobile Phones\\",
    "sourcepath": "C:\\Source\\Mobile Phones\\",
    "OnePlus": {
        "files": [
            {
                "source": "6T",
                "destination": "Model\\6T"
            }
        ]
    },
    "Samsung": {
        "files": [
            {
                "source": "S20",
                "destination": "Galaxy\\S20"
            }
        ]
    }
}
This is just a snippet of the JSON code; it holds the destination and source paths. So, for instance, if the destination path is C:\\Destination\\Mobile Phones\\ and the source path is C:\\Source\\Mobile Phones\\, and OnePlus has 6T as its source and Model\\6T as its destination, the PowerShell installer will use C:\\Destination\\Mobile Phones\\Model\\6T as the full destination path and C:\\Source\\Mobile Phones\\6T as the source. The same goes for Samsung and the others.
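The composition itself is one Join-Path per side; a minimal sketch of what is being described, using the key names from the snippet above:
$sourcepath = 'C:\Source\Mobile Phones\'
$destinationpath = 'C:\Destination\Mobile Phones\'
# For the OnePlus entry: source "6T", destination "Model\6T"
Join-Path $sourcepath '6T'              # -> C:\Source\Mobile Phones\6T
Join-Path $destinationpath 'Model\6T'   # -> C:\Destination\Mobile Phones\Model\6T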
For now, the MD5 hash comparison script just generates the CSV files in the two desired directories and compares them. However, in this case I need to check the source and destination of each object, and I can't figure out how to integrate that. I'm pasting my MD5 hash generation code below.
Generating hash
#$p is the path. In this case, I'm running the script twice in order to get the hashes of both source and destination.
#$csv is the path where the csv will be exported.
Get-ChildItem $p -Recurse | ForEach-Object {
    Get-FileHash $_.FullName -Algorithm MD5 -ErrorAction SilentlyContinue
} | Select-Object Hash, @{
    Name = "FileName";
    Expression = { [string]::Join("\", ($_.Path -split "\\" | Select-Object -Skip ($number))) }
} | Export-Csv -Path $csv
I want to use the paths in that manner and pipe them into Get-FileHash in PowerShell.
As a first step, I would reorganize the JSON to be easier to handle. This will make a big difference for the rest of the script.
{
    "source": "C:\\Source\\Mobile Phones",
    "destination": "C:\\Destination\\Mobile Phones",
    "phones": [
        {
            "name": "OnePlus",
            "source": "6T",
            "destination": "Model\\6T"
        },
        {
            "name": "Samsung",
            "source": "S20",
            "destination": "Galaxy\\S20"
        }
    ]
}
Now it's very easy to get all the paths no matter how many "phone" entries there are. You don't even really need an intermediary CSV file.
$config = Get-Content config.json -Encoding UTF8 -Raw | ConvertFrom-Json

$config.phones | ForEach-Object {
    $source_path = Join-Path $config.source $_.source
    $destination_path = Join-Path $config.destination $_.destination

    $source_hashes = Get-ChildItem $source_path -File -Recurse | Get-FileHash -Algorithm MD5
    $destination_hashes = Get-ChildItem $destination_path -File -Recurse | Get-FileHash -Algorithm MD5

    # the combination of relative path and file hash needs to be unique, so let's combine them
    $source_relative = $source_hashes | ForEach-Object {
        [pscustomobject]@{
            Path = $_.Path
            PathHash = $_.Path.Replace($source_path, "") + '|' + $_.Hash
        }
    }
    $destination_relative = $destination_hashes | ForEach-Object {
        [pscustomobject]@{
            Path = $_.Path
            PathHash = $_.Path.Replace($destination_path, "") + '|' + $_.Hash
        }
    }

    # Compare-Object finds the difference between both lists
    $diff = Compare-Object $source_relative $destination_relative -Property PathHash, Path
    Write-Host $diff

    $diff | ForEach-Object {
        # work with $_.Path and $_.SideIndicator
    }
}
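In the Compare-Object output, SideIndicator tells you which side each entry came from: "<=" means it only exists in the first (source) list, "=>" only in the second (destination) list. A minimal sketch of how the final loop could report that (the message wording is just an example):
$diff | ForEach-Object {
    if ($_.SideIndicator -eq '<=') {
        Write-Host "Only in source: $($_.Path)"
    } else {
        Write-Host "Only in destination: $($_.Path)"
    }
}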

Powershell Make a function for directories and add a prefix at the end

Make a function that makes 3 directories with the name John_S as the prefix, appended with the numbers 1, 2, 3.
Example
1. John_S1
2. John_S2
3. John_S3
Use a loop (ForEach)
Use a variable for the number of iterations
What I have so far...
$DirName = "John_S"
function mulcheck {New-item "$DirName"}
$i = 1
foreach($DirName in $DirNames)
{$newname = $DirName Rename-Item $($DirName) $newname $i++}
The easiest way to generate the numbers 1 through 3 is with the .. range operator:
foreach ($suffix in 1..3) {
    mkdir "John_S${suffix}"
}
To make the function re-usable with something other than John_S, declare a [string] parameter for the prefix:
function New-Directories([string]$Prefix) {
    foreach ($suffix in 1..3) {
        mkdir "${Prefix}${suffix}"
    }
}
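And since the exercise also asks for a variable iteration count, here is a sketch of the same function with the count as a parameter ($Count is an addition, not part of the original answer):
function New-Directories([string]$Prefix, [int]$Count = 3) {
    foreach ($suffix in 1..$Count) {
        mkdir "${Prefix}${suffix}"
    }
}
New-Directories -Prefix "John_S" -Count 3   # creates John_S1, John_S2, John_S3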
If I understand your latest comment correctly, you want a function that takes the name of a new folder and checks if a folder with that name already exists in the root path. When that is the case, it should create a new folder with the given name, but with an index number appended to it, so it has a unique name.
For that you can use something like this:
function New-Folder {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $false)]
        [string]$RootPath = $pwd,    # use the current working directory as default
        [Parameter(Mandatory = $true)]
        [string]$FolderName
    )
    # get an array of all directory names (name only) of the folders with a similar name already present
    $folders = @((Get-ChildItem -Path $RootPath -Filter "$FolderName*" -Directory).Name)
    $NewName = $FolderName
    if ($folders.Count) {
        $count = 1
        while ($folders -contains $NewName) {
            # append a number to the FolderName
            $NewName = "{0}{1}" -f $FolderName, $count++
        }
    }
    # we now have a unique foldername, so create the new folder
    $null = New-Item -Path (Join-Path -Path $RootPath -ChildPath $NewName) -ItemType Directory
}
New-Folder -FolderName "John_S"
If you run this several times, you will have created several folders like John_S, John_S1, John_S2, and so on.

CSV that contains blobs, to the actual files

I have extracted a load of files from a webapp, and all it gives me is a CSV file containing a load of blobs of the file, and the filename.
What is the best way to convert these to the actual files? I was thinking of using a PowerShell script.
I created a PowerShell script that extracts the blobs and filenames from the CSV and saves them in a given directory as raw files.
$path = 'D:\TEMP\Attachments\extract.csv'
$exportPath = 'D:\TEMP\Files'
Import-Csv $path | ForEach-Object {
    # BODY holds the base64-encoded blob, NAME the original filename
    $b64 = $_.BODY
    $bytes = [Convert]::FromBase64String($b64)
    $filename = $exportPath + "\" + $_.NAME
    [IO.File]::WriteAllBytes($filename, $bytes)
}
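This assumes the CSV has NAME and BODY columns and that BODY is base64-encoded; adjust the property names to whatever your export actually uses. It can also be worth making sure the target directory exists before the loop runs, e.g.:
if (-not (Test-Path $exportPath)) {
    $null = New-Item -Path $exportPath -ItemType Directory
}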

How to have Powershell read multiple values from a single CSV cell?

I am trying to apply permissions to multiple folders at once. I have a .csv file with the folder paths in one column and the security groups in a second column. Most folders will have multiple groups with access. The .csv is in the following format:
folderpath groupname
---------- ---------
C:\Folder1 Group1
C:\Folder2 Group2, Group3
My code will not currently apply the permissions because there are multiple values in some groupname cells. It will apply if only one group is listed.
#Specify CSV location
$csv = Import-Csv G:\testcsv.csv
#Start loop
foreach ($masterlist in $csv) {
    $folderpath = $masterlist.folderpath
    $groupname = $masterlist.groupname
    #Apply permission to folderpath
    Add-NTFSAccess -Path $folderpath -Account $groupname -AccessRights Modify, Synchronize
}
For example, if I run the above code, Group1 will successfully be given permission to C:\Folder1. But C:\Folder2 will have nothing applied because Group2 and Group3 are both in the same cell. Can I adjust my code without having to manually separate hundreds of these cells in the .csv?
Thank you for your help!
Just add an inner foreach loop over the groups :) simplest method:
#Start loop
foreach ($masterlist in $csv) {
    $folderpath = $masterlist.folderpath
    # split the cell on commas, tolerating any surrounding whitespace
    $groupnames = $masterlist.groupname -split '\s*,\s*'
    foreach ($group in $groupnames) {
        #Apply permission to folderpath
        Add-NTFSAccess -Path $folderpath -Account $group -AccessRights Modify, Synchronize
    }
}
This is an ad hoc response, but I think you get the idea.
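One detail worth calling out: a plain -split "," would keep the space in " Group3" and pass a padded name to Add-NTFSAccess, which is why the code above splits on a regex with optional whitespace:
"Group2, Group3" -split ","          # -> "Group2", " Group3"
"Group2, Group3" -split '\s*,\s*'    # -> "Group2", "Group3"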

Linux- Breaking up a large JSON file into chunks to minify

I am working with a 6.0 MB JSON file that is used by about 100 other scripts on a server that will soon be set up. I wish to compress the file by deleting all of the extra spaces, tabs, returns, etc., but all of the tools I've found for compressing it can't handle the file's size (it's around 108,000 lines). I need to break the file up in a way that makes it easy to reassemble once each chunk has been compressed. Does anyone know how to break it up in an efficient way? Help would be much appreciated!
Because Python could already handle the large file, I ended up using IPython and writing a .py script that dumps the JSON without spaces. To use this script, one would type:
$ ipython -i compression_script.py
This is the code within compression_script.py:
import json

# note: raw_input is Python 2; on Python 3 use input() instead
filename = raw_input('Enter the file you wish to compress: ')  # file we want to compress
newname = 'compressed_' + filename  # by default the new filename is 'compressed_' + filename

fp = open(filename)
jload = json.load(fp)

# indent=None plus these separators removes all insignificant whitespace
newfile = json.dumps(jload, indent=None, separators=(',', ':'))

f = open(newname, 'wb')
f.write(newfile)
f.close()

print('Compression complete! Type quit to exit IPython')
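For comparison, the same minification fits in one PowerShell pipeline (pwsh runs on Linux too; the file names are placeholders, and -Depth is raised because ConvertTo-Json truncates nested structures at a depth of 2 by default):
Get-Content input.json -Raw | ConvertFrom-Json |
    ConvertTo-Json -Depth 100 -Compress |
    Set-Content compressed_input.json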
This can be done in PHP as well, like so:
$myfile = fopen("newfile.txt", "w") or die("Unable to open file!");
$handle = fopen("somehugefile.json", "r");
if ($handle) {
    $i = 0;
    while (!feof($handle)) {
        $buffer = fgets($handle, 5096);
        // strip line endings (both Windows and Unix) and tabs
        $buffer = str_replace("\r\n", "", $buffer);
        $buffer = str_replace("\n", "", $buffer);
        $buffer = str_replace("\t", "", $buffer);
        fwrite($myfile, $buffer);
        $i++;
        //var_dump($buffer);
        /*
        if ($i == 1000) {
            die('stop');
        }
        */
    }
    fclose($handle);
    fclose($myfile);
}