How to rename all files in subdirectories using SSH - html

I am trying to rename all html files to "index.html" in my directories, and I can find the .html files using the "find" command, but I can't rename the files using the "rename" command at the same time.
This is my directory structure for example:
Test -> Example1 -> Example1.html
Test -> Example2 -> Example2.html
Test -> Example3.html
And this is the expected result after renaming the html files:
Test -> Example1 -> index.html
Test -> Example2 -> index.html
Test -> index.html
After I run the following command the syntax seems to be correct (no errors), but nothing happens:
find . -type f -name "*.html" | rename *.html index.html *html

Your find command is right.
find . -type f -name \*.html
You can run the command and save the output, or just pipe it with |.
Then you can use the rename command, perl directly, or a simple shell script.
Usually I use Perl itself.
First you need this pattern, which matches the file name after the last slash: (?<=\/)[^\/]+$
Then you can run perl like so:
# file for test
# list of file which should be renamed (saved by find)
$ cat > file
/one.one/one.one/one.file.html
/two.two/two.two/one.file.html
/three.three/three.three/one.file.html
# dry run
$ perl -lne '($old=$_) && s/(?<=\/)[^\/]+$/index.html/g && print' < file
/one.one/one.one/index.html
/two.two/two.two/index.html
/three.three/three.three/index.html
# actual rename
# perl -lne '($old=$_) && s/(?<=\/)[^\/]+$/index.html/g && rename($old,$_)'
That is, replace whatever matches (?<=\/)[^\/]+$ (the last path component) with index.html.
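Putting the two together, the find output can be piped straight into the same perl substitution in one go (a sketch; run the dry-run version above first, and note that if a directory contains more than one .html file, the later rename will overwrite the earlier index.html):
find . -type f -name '*.html' | perl -lne '($old=$_) && s/(?<=\/)[^\/]+$/index.html/g && rename($old,$_)'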

When one pipes find output or uses -exec, there is a race condition where a malicious actor can substitute the files being acted on - https://unix.stackexchange.com/q/273965/333919
If you are using GNU find, it provides -execdir, allowing:
find -type f -name '*.html' -execdir mv -iv \{} index.html \;
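To preview what would happen before renaming anything, the same command can be dry-run by prefixing mv with echo (a sketch; the printed lines are the commands that would have been executed):
find -type f -name '*.html' -execdir echo mv -iv \{} index.html \;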

Related

wget command to download web-page & rename file with html title?

I would like to download an html web-page and have the filename be the title of the html page.
I have found a command to get the html title:
wget -qO- 'https://www.linuxinsider.com/story/Austrumi-Linux-Has-Great-Potential-if-You-Speak-Its-Language-86285.html/' | gawk -v IGNORECASE=1 -v RS='</title' 'RT{gsub(/.*<title[^>]*>/,"");print;exit}'
And it prints this: Austrumi Linux Has Great Potential if You Speak Its Language | Reviews | LinuxInsider
Found on: https://unix.stackexchange.com/questions/103252/how-do-i-get-a-websites-title-using-command-line
How could I pipe the title back into wget to use it as the filename when downloading that web-page?
EDIT: In case there is no way to do this directly in wget, I found a way to simply rename the html files once downloaded:
Renaming HTML files using <title> tags
You can't wget a file, analyze its contents, and then make the same wget execution that downloaded the file magically go back in time and output it to a new file named after the contents you analyzed in step 2. Just do this:
wget '...' > tmp &&
name=$(gawk '...' tmp) &&
mv tmp "$name"
Add protection against / in name as necessary.
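For example, that protection could be a single parameter expansion inserted just before the mv, based on the name variable above (a minimal sketch):
name=${name//\//_}   # turn every / into _ so the title cannot be read as a path separator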

Add column containing filename to a batch of .csv files in mac OSX

I've got 300 .csv files. I need to add a column that contains the filename in each row. I'm new to using the command line in Terminal. I've looked around but haven't found code I understand for doing this on a Mac. I would be very grateful for help.
You could write a bash script:
vi csv-script.sh
press i (for inserting text)
copy/paste this code as an example (it is fairly self-explanatory):
#!/bin/bash
for file in ./*.csv
do
  fileCol="${file#./}"   # the file name without the leading ./
  : > tmp                # start with an empty temp file
  # read line by line; $(cat $file) would split lines containing spaces
  while IFS= read -r line
  do
    echo "${fileCol};${line}" >> tmp
  done < "$file"
  mv -f tmp "$file"
done
Change the semicolon in the echo command to whatever separator you need.
Press Esc, then type :wq to save and quit.
Then call it from the CLI with: . csv-script.sh (pay attention to the dot, space, csv-script.sh).
Have a try.
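As an alternative sketch (assuming the default awk on macOS and the same ; separator), the per-line shell loop can be replaced with one awk call per file:
for f in ./*.csv
do
  # prepend the file name as a new first column on every row
  awk -v name="${f#./}" '{ print name ";" $0 }' "$f" > tmp && mv -f tmp "$f"
done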

Getting error while converting JPG to PDF via ghostscript using TCL script

I tried to execute the command with the open command as well:
set command "C:\Program Files(x86)\gs\bin\gswin32c.exe -sDEVICE=pdfwrite -o $createpdfpath D:/test/1/ghostscript/gs9.19/lib/viewjpeg.ps -c \"($Modifiedjpgpath) <</PageSize 2 index viewJPEGgetsize 2 array astore >> setpagedevice viewJPEG\""
set f [open "$command" "r"]
After execution I am getting the error below:
couldn't open "C:\Program Files(x86)\gs\bin\gswin32c.exe -sDEVICE=pdfwrite -o C:/sample/Et/Alpha_10H00000001.0.00000102.00000001/23.pdf D:/test/1/ghostscript/gs9.19/lib/viewjpeg.ps -c "(\\\\Test-PC\\TRAIL-P\\Ds\\PS\\0\\17\\Color_00000001.jpg) > setpagedevice viewJPEG"": no such file or directory
But if I execute the same command via the command prompt, it converts the JPG to a PDF file without any error.
Unless your Windows setup is different from the run of the mill, "C:\Program Files(x86)" is incorrect and should be "C:\Program Files (x86)"; note the missing space in your definition.
So something like :
set command "C:\Program Files (x86)\gs\bin\gswin32c.exe........"
FWIW Ghostscript doesn't normally install into that directory either, I would expect the directory to be of the form "c:\Program Files (x86)\gs\gsX.YY\bin\gswin32c" where X.YY is the Ghostscript version number.
The main problem you've got is that you're not running that command as a pipeline.
You need to change:
set f [open "$command" "r"]
to:
set f [open |$command "r"]
You may also have typos in your pipeline descriptor, which I recommend you build as a list, and file nativename is probably important as well, not so much for the name of the ghostscript interpreter itself, but rather for any filenames that are given to it:
# Easiest to use / instead of \ in filenames inside Tcl, really
set gs "C:/Program Files (x86)/gs/bin/gswin32c.exe"
set psscript "D:/test/1/ghostscript/gs9.19/lib/viewjpeg.ps"
# The next bit is building some postscript to run
set thejpgfile [file nativename $Modifiedjpgpath]
set pscmd "($thejpgfile) <</PageSize 2 index viewJPEGgetsize 2 array astore >> setpagedevice viewJPEG"
# Compose everything into a subprocess invocation
set command [list $gs -sDEVICE=pdfwrite -o $createpdfpath [file nativename $psscript] -c $pscmd]
# Actually run it
set f [open |$command "r"]
I find it is usually simpler to try to keep lines of code shorter and use variables to give individual bits a helpful name. It's also a lot easier to debug; you can just print out anything that looks too mysterious.

Convert Multiple Files To UTF-8

I have a number of files that get dumped every morning on a server. There are 9 CSV comma delimited files, varying in size from 59kb to as large as 127mb. I need to import these into a MySQL database, however the files are currently in western iso-8859-16 format and I need to convert them to UTF-8 for my import. The manual process of converting these is quite tedious, as you can imagine.
Is there a script / batch file I can create to run every morning via task scheduler? Or what is the best way to try automating this process on a Windows Server 2012?
Current script, which reads the files but doesn't seem to convert them:
#!/bin/bash
# Recursive file conversion CP1252 --> utf-8
# Place this file in the root of your site, add execute permission and run
# Converts *.php, *.html, *.css, *.js files.
# To add file type by extension, e.g. *.cgi, add '-o -name "*.cgi"' to the find command
find ./ -name "*.csv" -o -name "*.html" -o -name "*.css" -o -name "*.js" -type f |
while read file
do
echo " $file"
mv $file $file.icv
iconv -f WINDOWS-1252 -t UTF-8 $file.icv > $file
rm -f $file.icv
done
Any help is greatly appreciated.
Thanks.
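For reference, a minimal sketch of such a conversion loop, using the ISO-8859-16 encoding mentioned above and grouping the find tests explicitly (this assumes GNU find and iconv are available on the server, e.g. through Cygwin, and the exact encoding name may need checking against iconv -l):
#!/bin/bash
find ./ \( -name "*.csv" -o -name "*.html" -o -name "*.css" -o -name "*.js" \) -type f |
while IFS= read -r file
do
  echo " $file"
  # convert into a temporary copy, then replace the original only if iconv succeeded
  iconv -f ISO-8859-16 -t UTF-8 "$file" > "$file.utf8" && mv -f "$file.utf8" "$file"
done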

Rename Files and Directories (Add Prefix)

I would like to add a prefix to all files and directories.
Example:
I have
Hi.jpg
1.txt
folder/
this.file_is.here.png
another_folder.ok/
I would like to add the prefix "PRE_":
PRE_Hi.jpg
PRE_1.txt
PRE_folder/
PRE_this.file_is.here.png
PRE_another_folder.ok/
Thanks to Peter van der Heijden, here's one that'll work for filenames with spaces in them:
for f in * ; do mv -- "$f" "PRE_$f" ; done
("--" is needed to succeed with files that begin with dashes, whose names would otherwise be interpreted as switches for the mv command)
Use the rename script this way:
$ rename 's/^/PRE_/' *
There are no problems with metacharacters or whitespace in filenames.
To add a prefix or suffix to files (or directories), you could use the simple and powerful xargs approach:
ls | xargs -I {} mv {} PRE_{}
ls | xargs -I {} mv {} {}_SUF
It uses the parameter-replacing option of xargs, -I. You can get more detail from the man page.
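If some of the names contain spaces or start with a dash, a null-delimited variant of the same idea avoids parsing ls output (a sketch, assuming GNU xargs):
printf '%s\0' * | xargs -0 -I {} mv -- {} PRE_{}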
This could be done running a simple find command:
find * -maxdepth 0 -exec mv {} PRE_{} \;
The above command will prefix all files and folders in the current directory with PRE_.
To add a prefix to all files and folders in the current directory using util-linux's rename (as opposed to prename, the perl variant from Debian and certain other systems), you can do:
rename '' <prefix> *
This finds the first occurrence of the empty string (which is found immediately) and then replaces that occurrence with your prefix, then glues on the rest of the file name to the end of that. Done.
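A concrete invocation for the PRE_ prefix from the question would then be:
rename '' PRE_ *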
For suffixes, you need to use the perl version or use find.
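For example, with the perl rename a suffix can be appended by anchoring the substitution at the end of the name (a sketch; check first which rename your system provides):
rename 's/$/_SUF/' *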
If you have Ruby (1.9+):
ruby -e 'Dir["*"].each{|x| File.rename(x,"PRE_"+x) }'
with Perl:
perl -e 'rename $_, "PRE_$_" for <*>'
On my system, I don't have the rename command. Here is a simple one-liner. It finds all the HTML files recursively and adds prefix_ in front of their names:
for f in $(find . -name '*.html'); do mv "$f" "$(dirname "$f")/prefix_$(basename "$f")"; done
Here is a simple script that you can use. I like using the non-standard module File::chdir to manage cd operations, so to use this script as-is you will need to install it (sudo cpan File::chdir).
#!/usr/bin/perl
use strict;
use warnings;
use File::Copy;
use File::chdir; # allows cd-ing by use of $CWD, much easier but needs CPAN module
die "Usage: $0 dir prefix" unless (#ARGV >= 2);
my ($dir, $pre) = #ARGV;
opendir(my $dir_handle, $dir) or die "Cannot open directory $dir";
my #files = readdir($dir_handle);
close($dir_handle);
$CWD = $dir; # cd to the directory, needs File::chdir
foreach my $file (#files) {
next if ($file =~ /^\.+$/); # avoid folders . and ..
next if ($0 =~ /$file/); # avoid moving this script if it is in the directory
move($file, $pre . $file) or warn "Cannot rename file $file: $!";
}
This will prefix your files in their directory.
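Usage might look like this, assuming the script has been saved as add_prefix.pl (a hypothetical name) and File::chdir is installed:
perl add_prefix.pl /path/to/dir PRE_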
${f%/*} is the path up to the last slash / -> the directory
${f##*/} is everything after the last slash / -> the filename without the path
So that's how it goes:
find /directory/ -type f | while IFS= read -r f; do
mv -v "$f" "${f%/*}/$(date +%Y%m%d)_Prefix_${f##*/}"
done
Open cmd, set the directory to the folder, and run the following command:
for /f "tokens=*" %a in ('dir /b') do ren "%a" "00_%a"
00_ is prefix in "00_%a", so you can change it according to your requirements.
It will rename all of the files in the selected folder.