I have several HTML files within subfolders which contain a redundant link like:
<link rel="stylesheet" href="../demos.css">
I am trying to remove this line from all the HTML files using the following command on Linux:
find -name '*.html' -exec sed --in-place=.bak 'demos.css' "{}" ;
But this gives me the following error:
find: missing argument to `-exec'
Yes, of course I have checked the solutions on Stack Overflow related to this, but most of them deal with a single file and the rest don't help. What am I doing wrong with this command?
find is missing its starting path, sed is missing the d command, and you need to escape the semicolon at the end of the find command:
find . -name '*.html' -exec sed -i.bak '/demos\.css/d' '{}' \;
Or better:
find . -name '*.html' -exec sed -i.bak '/demos\.css/d' '{}' +
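If you want to preview which lines will be removed before editing anything, a quick check along these lines should do the job:
find . -name '*.html' -exec grep -Hn 'demos\.css' '{}' +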
for i in $(find /www/htmls/ -name "*.html" 2>/dev/null)
do
    # blank out the matching line (leaves an empty line behind)
    sed -i 's/.*demos\.css.*//' "$i"
done
Or, to delete the matching line entirely, try this:
for i in $(find /www/htmls/ -name "*.html" 2>/dev/null)
do
    sed -i '/demos\.css/d' "$i"
done
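Note that looping over the output of find breaks on paths containing spaces; a NUL-delimited read loop, roughly like this sketch, avoids that:
find /www/htmls/ -name '*.html' -print0 2>/dev/null |
while IFS= read -r -d '' f
do
    sed -i '/demos\.css/d' "$f"
done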
On my Synology NAS I've written a bash script that is supposed to delete the junk files that Windows creates:
#/bin/ash
find /volume2 -type f -name "Thumbs.db" -exec rm -f {} \;
find /volume2 -type f -name "desktop.ini" -exec rm -f {} \;
#find /volume2 -type d -empty -delete \;
This was, to the best of my knowledge, working fine until I added the now commented-out line to remove the empty folders after the files are deleted. Now, despite not intentionally changing anything else, the script fails with
find: missing argument to `-exec'
I'm sure I'm missing something incredibly obvious, but please help. I also don't know why that last (commented-out) line didn't work. I've read lots of threads on Stack Overflow with no joy.
Please help me understand why ncu causes a find operation to stop after the first file. I have 25 project folders, each with its own package.json and bower.json file (not all have bower.json).
Issuing this command with an echo works perfectly:
find ../ -name "package.json" -type f -exec echo '{}' +
... all files are printed to the screen.
However, this syntax stops after the first file when I use ncu:
find ../ -name "package.json" -type f -exec ncu -u -a --packageFile '{}' +
Here's the only output of the command:
$ find ../ -name "package.json" -type f -exec ncu -u -a --silent --packageFile '{}' +
Using /home/joeblow/projects/app01/package.json
[..................] - :
All dependencies match the latest package versions :)
The versions I'm using are:
bash version: 4.3.42(1)-release
find version: 4.7.0-git
node version: 6.9.4
npm version: 4.1.2
npm-check-updates version: 2.8.9
ncu (aka npm-check-updates) silently ignores all but the first file.
Use -exec ncu -u -a --packageFile '{}' \; instead, so that ncu is run with a single file at a time.
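Put together with the original command, that would look something like:
find ../ -name "package.json" -type f -exec ncu -u -a --packageFile '{}' \;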
I am trying to uncomment a line in an HTML file using a shell script, but I am not able to write a sed command for this.
I have the line:
<!--<url="/">-->
I need to uncomment this line using a shell script, so that it becomes:
<url="/"/>
My attempt was:
sed -i -e "s|'<!--<url="/"/>-->'|<url="/">|g" myFile.html
Any idea how to replace this comment?
Use:
sed -re 's/(<!--)|(-->)//g'
e.g.:
echo '<HTML><!--<url="/">--> <BODY>Test</BODY></HTML>' | sed -re 's/(<!--)|(-->)//g'
Like this?
sed -i 's|<!--<url="/">-->|<url="/">|g' myFile.html
It's better to use single quotes because they prevent the shell from interpreting anything inside them, including the double quotes.
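To see why, echo the double-quoted version; the shell consumes the inner double quotes, so sed never receives them:
$ echo "s|<!--<url="/">-->|<url="/">|g"
s|<!--<url=/>-->|<url=/>|g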
You need to escape the / character (add a backslash before it) when / is used as the delimiter. Here the two parts of the substitution are separated with /, not with |. Use the following line:
sed -i 's/<!--<url="\/">-->/<url="\/">/g' myFile.html
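A quick test on a single line confirms the substitution:
$ echo '<!--<url="/">-->' | sed 's/<!--<url="\/">-->/<url="\/">/g'
<url="/">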
Given a plain text document with several lines like:
c48 7.587 7.39
c49 7.508 7.345983
c50 5.8 7.543
c51 8.37454546 7.34
I need to add some info two spaces after the end of each line, so that for each line I would get:
c48 7.587 7.39 def
c49 7.508 7.345983 def
c50 5.8 7.543 def
c51 8.37454546 7.34 def
I need to do this for thousands of files. I guess this is possible with sed, but I don't know how. Any hint? Could you also give me a link to a tutorial or reference table for cases like this?
Thanks
If all your files are in one directory:
sed -i.bak 's/$/  def/' *.txt
To do it recursively (GNU find):
find /path -type f -iname '*.txt' -exec sed -i.bak 's/$/  def/' "{}" +
You can see here for an introduction to sed.
Other ways you can use:
awk
for file in *
do
    awk '{print $0"  def"}' "$file" > temp
    mv temp "$file"
done
Bash shell
for file in *
do
    while read -r line
    do
        echo "$line  def"
    done < "$file" > temp
    mv temp "$file"
done
for file in ${thousands_of_files} ; do
    sed -i.bak -e 's/$/  def/' "$file"
done
The key here is the search-and-replace command, s///. Here we replace the end of the line ($) with two spaces and your string.
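For example, on a single input line:
$ echo 'c48 7.587 7.39' | sed 's/$/  def/'
c48 7.587 7.39  def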
Find the sed documentation at http://sed.sourceforge.net/#docs
I am trying to write a script that searches .htm and .html files recursively and removes the tags from them. The starting point is given as input on the command line when running the script. The resulting files should be saved in the same place as new files ending with _changed, e.g., start.html > start.html_changed.
Here is the script I wrote so far. It works fine, but the output is printed to the terminal, and I want it saved to the corresponding files instead.
#!/bin/bash
sudo find $1 -name '*.html' -type f -print0 | xargs -0 sed -n '/<div/,/<\/div>/p'
sudo find $1 -name '*.htm' -type f -print0 | xargs -0 sed -n '/<div/,/<\/div>/p'
Any help is much appreciated.
The following script works just fine, but it is not recursive. How can I make it recursive?
#!/bin/bash
for l in /$1/*.html
do
    sed -n '/<div/,/<\/div>/p' "$l" > "${l}_nobody"
done
for m in /$1/*.htm
do
    sed -n '/<div/,/<\/div>/p' "$m" > "${m}_nobody"
done
Just edit the xargs part as follows:
xargs -0 -I {} sh -c "sed -n '/<div/,/<\/div>/p' {} > {}_changed"
Explanation:
-I {}: sets a placeholder
> {}_changed: redirects the output to a file with the _changed suffix
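Applied to the script from the question, the whole thing would look roughly like this (a sketch; quoting "$1" is my addition, to be safe with paths containing spaces):
#!/bin/bash
sudo find "$1" -name '*.html' -type f -print0 | xargs -0 -I {} sh -c "sed -n '/<div/,/<\/div>/p' {} > {}_changed"
sudo find "$1" -name '*.htm' -type f -print0 | xargs -0 -I {} sh -c "sed -n '/<div/,/<\/div>/p' {} > {}_changed"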