Troubleshooting Bash script to delete files - ash

On my Synology NAS I've written a bash script that is supposed to delete the junk files that Windows creates:
#/bin/ash
find /volume2 -type f -name "Thumbs.db" -exec rm -f {} \;
find /volume2 -type f -name "desktop.ini" -exec rm -f {} \;
#find /volume2 -type d -empty -delete \;
To the best of my knowledge this was working fine until I added the now commented-out line to remove the empty folders after the files are deleted. Now, despite not intentionally changing anything else, the script fails with
find: missing argument to `-exec'
I'm sure I'm missing something incredibly obvious, but please help. I also don't know why the commented-out last line never worked. I've read lots of threads on Stack Overflow with no joy.
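Two things stand out, assuming the script is otherwise exactly as shown. First, the shebang needs an exclamation mark: #!/bin/ash, not #/bin/ash. Second, -delete is an action in its own right and takes no \; terminator (that belongs only to -exec), so the last line should read as in this cleaned-up sketch:
#!/bin/ash
# delete Windows junk files, then prune any directories left empty
find /volume2 -type f -name "Thumbs.db" -exec rm -f {} \;
find /volume2 -type f -name "desktop.ini" -exec rm -f {} \;
find /volume2 -type d -empty -delete
As for the "missing argument to `-exec'" error appearing out of nowhere: if the script was last edited on Windows, it may have picked up CRLF line endings, in which case find sees \; followed by an invisible carriage return and reports exactly this error. Re-saving the file with Unix (LF) line endings is worth trying before anything else.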

Related

Getting an error code when deleting directory after mysqldump

I am currently running a cron job that executes a bash script for a mysqldump.
#!/bin/bash
#
# define variables
TIMESTAMP=$(date +%Y-%m-%d_%H:%M)
USER=mysqluser
PASSWORD='myPassword'
BACKUP_DIR=/backup
# create directory with timestamp
mkdir -p "$BACKUP_DIR/$TIMESTAMP"
# dump database
# compress file with GZip
mysqldump --opt --user=$USER --password=$PASSWORD --host='host3.mydbserver.com' db1 | gzip -9 > ${BACKUP_DIR}/$TIMESTAMP/backup-db1-$TIMESTAMP.sql.gz
mysqldump --opt --user=$USER --password=$PASSWORD --host='host7.mydbserver.com' db2 | gzip -9 > ${BACKUP_DIR}/$TIMESTAMP/backup-db2-$TIMESTAMP.sql.gz
# find directories older than 120 minutes and delete recursively
find $BACKUP_DIR/* -type d -mmin +120 -exec rm -r {} \;
The script seems to be working fine, dumping the data into SQL files in a timestamped directory. After the dump, the script is supposed to delete all folders older than 120 minutes, but I am getting exit code 1 from the cron service, complaining that a directory is nonexistent. The output of the find command is
find: '/backup/2020-05-22_13:30': No such file or directory
Any idea how I can avoid this error?
EDIT 1: Every couple of runs the error doesn't appear, without any change to the code (it is currently running every 5 minutes for test purposes). I'm getting the feeling this may be a timing thing?
EDIT 2: I'm a noob regarding all things Linux, scripting, bash, etc., and I spent days getting as far as I did. So, any downvoters, please leave a comment to let me know what obvious thing I overlooked.
Try adding -depth to the find options:
find $BACKUP_DIR -depth -type d -mmin +120 -exec rm -r {} \;
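This works because -depth makes find process a directory's contents before the directory itself, so find never tries to descend into a directory that rm -r has already removed, which is exactly what produces the "No such file or directory" message. A slightly more defensive variant (a sketch, with quoting added) also keeps /backup itself out of the candidate list:
find "$BACKUP_DIR" -mindepth 1 -depth -type d -mmin +120 -exec rm -r {} \;
-mindepth 1 excludes the starting point from the results, and quoting $BACKUP_DIR avoids word splitting should the path ever contain spaces.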

Find all files with the .md extension, execute a command on each file, and generate a new file whose name is derived from the .md file name

I'm trying to write a shell script that recursively finds all files under a directory with the extension .md and executes a command on each .md file, generating a new file with the same name but a different extension.
Below is the command I have, but it appends .html to the full file name instead of replacing .md with .html:
find . -name '*.md' -exec markdown-html {} -s resources/styles/common-custom.css -o {}.html \;
The above command generates a new file "home.md.html" from "home.md", but I want the .md removed. I've tried different solutions but they didn't work.
You have to write a small script here; the comments in the code below describe how it works. First create a shell script file like convertTohtml.sh and add the following:
#!/bin/bash
# list all the .md files in a temp file
find . -name '*.md' > filelist.dat
# read the temp file line by line; each line is the location of one .md file
while IFS= read -r file
do
    # strip the trailing .md, so 'home.md' is stored as 'home' in html_file;
    # hence "$html_file.html" equals 'home.html'
    html_file=$(echo "$file" | sed -e 's/\.md$//')
    markdown-html "$file" -s resources/styles/common-custom.css -o "$html_file.html"
done < filelist.dat
# finally, delete the temporary file
rm filelist.dat
Give your script file execute permission:
chmod +x convertTohtml.sh
Now execute the file:
./convertTohtml.sh
The script below solves the extension problem with shell parameter expansion instead of sed:
#!/bin/bash
# list all the .md files in a temp file
find . -name '*.md' > filelist.dat
# read the temp file line by line; each line is the location of one .md file
while IFS= read -r file
do
    # get the file name without its extension
    base="${file%.md}"
    markdown-html "$file" -s resources/styles/common-custom.css -o "$base.html"
done < filelist.dat
# finally, delete the temporary file
rm filelist.dat
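For reference, ${file%.md} is plain POSIX parameter expansion: it removes the shortest match of the pattern .md from the end of the value, with no sed or subshell needed:
file=docs/home.md
echo "${file%.md}.html"
# prints docs/home.html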
If you want to use the output of find multiple times, you can hand each result to a small inline script:
find . -name "*.md" -exec sh -c 'echo "$1" && touch "$1.foo"' _ {} \;
Notice the:
sh -c 'echo "$1" && touch "$1.foo"' _ {}
sh -c runs the commands read from the string; find substitutes each result for {}, which the inline script receives as $1 (the _ fills the $0 slot). In this example it first runs echo "$1", and if that succeeds (&&) it touches "$1.foo". Passing the file name as a positional parameter, rather than splicing {} directly into the string, also keeps names containing spaces or quotes safe. In your case this becomes:
find . -name '*.md' -exec sh -c 'markdown-html "$1" -s resources/styles/common-custom.css -o "${1%.md}.html"' _ {} \;
where ${1%.md} strips the extension, so home.md produces home.html.

Find stops after the first file

Please help me understand why ncu causes a find operation to stop after the first file. I have 25 project folders, each with its own package.json and bower.json file (though not all have bower.json).
Issuing this command with an echo works perfectly:
find ../ -name "package.json" -type f -exec echo '{}' +
... all files are printed to the screen.
However, this syntax stops after the first file when I use ncu:
find ../ -name "package.json" -type f -exec ncu -u -a --packageFile '{}' +
Here's the only output of the command:
$ find ../ -name "package.json" -type f -exec ncu -u -a --silent --packageFile '{}' +
Using /home/joeblow/projects/app01/package.json
[..................] - :
All dependencies match the latest package versions :)
The versions I'm using are:
bash version: 4.3.42(1)-release
find version: 4.7.0-git
node version: 6.9.4
npm version: 4.1.2
npm-check-updates version: 2.8.9
With -exec ... '{}' +, find appends all the matching files to a single invocation, and ncu (aka npm-check-updates) silently ignores all but the first file.
Use -exec ncu -u -a --packageFile '{}' \; instead to run it on a single file at a time.
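Putting that together with your original command:
find ../ -name "package.json" -type f -exec ncu -u -a --packageFile '{}' \;
The trade-off is one ncu process per package.json, which is slower but actually touches every file.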

How to delete a particular line from several files in linux?

I have several HTML files within subfolders which contain a redundant link like:
<link rel="stylesheet" href="../demos.css">
I am trying to remove this line from all the HTML files using the following command in Linux:
find -name '*.html' -exec sed --in-place=.bak 'demos.css' "{}" ;
But this gives me the following error:
find: missing argument to `-exec'
Yes, of course I have checked all the related solutions on Stack Overflow, but most of them deal with a single file and the rest don't help. What am I doing wrong with this command?
find is missing its starting path, sed is missing the d (delete) command, and you need to escape the semicolon in the find command:
find . -name '*.html' -exec sed -i.bak '/demos\.css/d' '{}' \;
Or better:
find . -name '*.html' -exec sed -i.bak '/demos\.css/d' '{}' +
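If you want to check what will be deleted before touching any files, you can grep for the pattern first (a quick sanity check, not strictly required):
find . -name '*.html' -exec grep -n 'demos\.css' '{}' +
With several files on one command line, grep prefixes each match with the file name, and -n adds the line number, so you can confirm that only the redundant <link> lines will be removed.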
try this please:
for i in `find /www/htmls/ -name "*.html" 2>/dev/null`
do
    sed -i '/demos\.css/d' "$i"
done
(Note that looping over the output of find like this breaks on file names containing spaces; the find -exec forms above don't have that problem.)

Argument list too long - Apache

I'm following this tutorial on deploying a WordPress application on an AWS instance http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hosting-wordpress.html and I get an error when I run
[ec2-user@ip-10-10-1-73 ]$ find /var/www -type f -exec sudo chmod 0664 {} +
sudo: unable to execute /bin/chmod: Argument list too long
sudo: unable to execute /bin/chmod: Argument list too long
What is the root problem of this error?
You are trying to pass too many arguments to chmod and running into the kernel's argument-length limit (which is tied to the stack size limit you can raise on Linux with ulimit -s). Personally, I would just modify the command:
find /var/www -type f -exec sudo chmod 0664 {} \;
The difference is that with + find appends all the file names to a single chmod invocation, changing the permissions of all the files at once, while with \; it runs chmod once per file.
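If one chmod process per file turns out to be too slow, another common approach (a sketch, not taken from the tutorial) is to pipe the file list to xargs, which splits it into batches that fit within the argument-length limit:
find /var/www -type f -print0 | sudo xargs -0 chmod 0664
-print0 and -0 delimit the names with NUL bytes, so paths containing spaces or newlines are handled safely.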