My goal is to create a logrotate configuration which specifies a different rotation count for each set of log files: 10 for the first set, 14 for the second set, and 5 for all the rest. This is what I have so far:
~/log/myLog1*.log {
rotate 10
...
}
~/log/myLog2*.log {
rotate 14
...
}
~/log/[!{myLog1,myLog2}]*.log {
rotate 5
...
}
The following shows the files I want to rotate:
ls ~/log/[!{myLog1,myLog2}]*.log
But because logrotate doesn't support extended globbing, the third pattern doesn't work and I get this error message:
rotating pattern: ~/log/[! forced from command line (5 rotations)
empty log files are not rotated, old logs are removed
considering log ~/log/[!
log ~/log/[! does not exist -- skipping
Can anyone correct my final pattern so that it works?
Have you tried excluding files with prerotate? The following example is untested.
~/log/*.log {
rotate 5
nosharedscripts
prerotate
bash -c "[[ ! $1 =~ myLog1.log ]] && [[ ! $1 =~ myLog2.log ]]"
endscript
}
~/log/myLog1.log {
rotate 10
}
~/log/myLog2.log {
rotate 14
}
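(As I understand logrotate's behavior: with nosharedscripts the prerotate script runs once per matched log, with the log's path as its first argument, and a non-zero exit from prerotate makes logrotate skip that log, so the bash test above acts as a filter.)
Whatever pattern you settle on, a dry run with logrotate's debug flag is a handy check, since it reports which files each stanza would match (the config path below is just a placeholder):
logrotate -d /etc/logrotate.d/mylogs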
Context: I'm making my own i3-Bar script to read output from other (asynchronous) scripts running in the background, concatenate them, and then echo them to i3-Bar itself.
The way I'm passing the outputs is in plain files, and I guess (logically) the problem is that the files are sometimes read and written at the same time. The best way to reproduce this behavior is by suspending the computer and then waking it back up; I don't know the exact cause, I can only go on what I see in my debug log files.
Main Code: Added comments for clarity
#!/usr/bin/env bash
cd "${0%/*}";
trap "kill -- -$$" EXIT; #The bg. scripts are on a while [ 1 ] loop, have to kill them.
rm -r ../input/*;
mkdir -p ../input/; #Just in case it doesn't exist yet.
for tFile in ./*; do
#Run all of the available scripts in the current directory in the background.
if [ $(basename $tFile) != "main.sh" ]; then ("$tFile" &); fi;
done;
echo -e '{ "version": 1 }\n['; #I3-Bar can use infinite array of JSON input.
while [ 1 ]; do
input=../input/*; #All of the scripts put their output in this folder as separate text files
input=$(sort -nr <(printf "%s\n" $input));
output="";
for tFile in $input; do
#Read and add all of the files to one output string.
if [ $tFile == "../input/*" ]; then break; fi;
output+="$(cat $tFile),";
done;
if [ "$output" == "" ]; then
echo -e "[{\"full_text\":\"ERR: No input files found\",\"color\":\"#ff0000\"}],\n";
else
echo -e "[${output::-1}],\n";
fi;
sleep 0.2s;
done;
Example Input Script:
#!/usr/bin/env bash
cd "${0%/*}";
while [ 1 ]; do
echo -e "{" \
"\"name\":\"clock\"," \
"\"separator_block_width\":12," \
"\"full_text\":\"$(date +"%H:%M:%S")\"}" > ../input/0_clock;
sleep 1;
done;
The Problem
The problem isn't the script itself, but the fact that i3-Bar receives malformed JSON input (-> parse error) and terminates. I'll show such a log below.
Another problem is that the background scripts should run asynchronously, because some need to update every second and some only every minute, etc. So the use of a FIFO isn't really an option, unless I create some ugly, inefficient, hacky stuff.
I know there is a need for IPC here, but I have no idea how to do this efficiently.
Script output from a random crash (the error after waking up looks the same):
[{ "separator_block_width":12, "color":"#BAF2F8", "full_text":"192.168.1.104 "},{ "separator_block_width":12, "color":"#BAF2F8", "full_text":"100%"}],
[{ "separator_block_width":12, "color":"#BAF2F8", "full_text":"192.168.1.104 "},,],
(The error is caused by the second line.)
As you can see, the main script tries to read the file, doesn't get any output, but the comma is still there -> malformed JSON.
The immediate error is easy to fix: don't append an entry to output if the corresponding file is empty:
for tFile in $input; do
[[ $tFile != "../input/*" ]] &&
[[ -s $tFile ]] &&
output+="$(<$tFile),"
done
There is a potential race condition here, though. Just because a particular input file exists doesn't mean that the data is fully written to it yet. I would change your input scripts to look something like
#!/usr/bin/env bash
cd "${0%/*}";
while true; do
  o=$(mktemp ../input/.0_clock.XXXXXX) # temp file on the same filesystem, so the mv below stays atomic; the leading dot keeps it out of the main loop's glob
  printf '{"name": "clock", "separator_block_width": 12, "full_text": "%(%H:%M:%S)T"}\n' -1 > "$o"
mv "$o" ../input/0_clock
sleep 1
done
Also, ${output%,} is a safer way to trim a trailing comma when necessary.
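A minimal illustration of the difference:
output='{"a":1},{"b":2},'
echo "[${output%,}]"   # [{"a":1},{"b":2}]
output=''
echo "[${output%,}]"   # [] -- whereas ${output::-1} errors out on an empty string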
Calling a function in another script to delete old files - I need to pass $1 as a string, and not the expansion of that argument (a file list from a directory).
Have tried:
- single and double quotes around echo $1 ("$1", '$1')
- single and double quotes around arg ("/tmp/AB*", '/tmp/AB*')
I have read 3 similar questions here, but have been unsuccessful at understanding the issue...
AIX 6
#!/bin/ksh
#### common load function ######
. /tmp/functions.sh
deletefiles /usr/tmp/AB* 1
#!/bin/sh
# Deletes files from a filelist that are older than X days
deletefiles() {
echo $1
echo $2
#filelist=$1
#days=$2
#execute
#`find ${filelist} -type f -mtime +${days} -exec rm {} + 2>&1`
}
It looks like you want to pass /usr/tmp/AB* as is, without expanding it. This can be done with '/usr/tmp/AB*', "/usr/tmp/AB*", or /usr/tmp/AB\*.
Then, to confirm that you got the right value, you need to use "$1" to prevent wildcard expansion in echo:
deletefiles() {
echo "$1"
echo "$2"
}
deletefiles '/usr/tmp/AB*' 1
I guess your main problem is that you want AB* expanded in deletefiles().
When you don't do anything special, the glob expands at the call site, so the file names and the day count arrive as one long parameter list - how do you find the last parameter?
You can expand the wildcard within deletefiles() with eval, but eval can do more than you wanted. Another method is switching the order of your parameters (days first) and using shift to delete days from the parameter list once you have assigned it to a variable.
I'll show both solutions.
deletefiles_notsecure() {
filelist="$(eval echo $1)"
days=$2
echo "Filelist: $filelist"
echo "Days: $days"
}
deletefiles_secure() {
days=$1
shift
filelist="$*"
echo "Filelist: $filelist"
echo "Days: $days"
}
# deletefiles /usr/tmp/AB* 1
deletefiles_notsecure "/tmp/*" 1
echo ===========
deletefiles_secure 1 /tmp/*
As you can see, the second form can be used without quotes from the caller, so that will be easier to use.
Note: the wildcards are expanded during the call, relative to the directory you are standing in. When deletefiles_secure() starts with cd "${logdir}" but you are standing in your $HOME when you call deletefiles_secure 1 access*.log*, an access.log in your home directory will be found. Use full paths!
Don't use eval if you can avoid it: find has a -name option to specify a file mask, e.g.:
deletefiles () {
find "$1" -name "$2" ...
}
deletefiles /somedir 'AB*'
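For completeness, the age test from the original function can be folded back in; the parameter order (directory, mask, days) is my own choice here:
deletefiles() {
    # delete files under $1 matching mask $2 that are older than $3 days
    find "$1" -type f -name "$2" -mtime +"$3" -exec rm {} +
}
deletefiles /usr/tmp 'AB*' 1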
In shell I have a requirement wherein I have to read the JSON response which is in the following format:
{ "Messages": [ { "Body": "172.16.1.42|/home/480/1234/5-12-2013/1234.toSort", "ReceiptHandle": "uUk89DYFzt1VAHtMW2iz0VSiDcGHY+H6WtTgcTSgBiFbpFUg5lythf+wQdWluzCoBziie8BiS2GFQVoRjQQfOx3R5jUASxDz7SmoCI5bNPJkWqU8ola+OYBIYNuCP1fYweKl1BOFUF+o2g7xLSIEkrdvLDAhYvHzfPb4QNgOSuN1JGG1GcZehvW3Q/9jq3vjYVIFz3Ho7blCUuWYhGFrpsBn5HWoRYE5VF5Bxc/zO6dPT0n4wRAd3hUEqF3WWeTMlWyTJp1KoMyX7Z8IXH4hKURGjdBQ0PwlSDF2cBYkBUA=", "MD5OfBody": "53e90dc3fa8afa3452c671080569642e", "MessageId": "e93e9238-f9f8-4bf4-bf5b-9a0cae8a0ebc" } ] }
Here I am only concerned with the "Body" property value. I made some unsuccessful attempts like:
jsawk -a 'return this.Body'
or
awk -v k="Body" '{n=split($0,a,","); for (i=1; i<=n; i++) print a[i]}'
But that did not suffice. Can anyone help me with this?
There is jq for parsing JSON on the command line:
jq -r '.Messages[].Body'
(.Body on its own won't match here, because the object sits inside the Messages array; -r prints the raw string without quotes.)
Visit this for jq: https://stedolan.github.io/jq/
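Applied to the response above (assuming it is saved as response.json):
$ jq -r '.Messages[].Body' response.json
172.16.1.42|/home/480/1234/5-12-2013/1234.toSort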
tl;dr
$ cat /tmp/so.json | underscore select '.Messages .Body'
["172.16.1.42|/home/480/1234/5-12-2013/1234.toSort"]
Javascript CLI tools
You can use Javascript CLI tools like
underscore-cli:
json:select(): CSS-like selectors for JSON.
Example
Select all name children of addons:
underscore select ".addons > .name"
The underscore-cli page provides other real-world examples as well as the json:select() documentation.
Similarly, you can use a Bash regexp; this should be able to snatch any key/value pair.
key="Body"
re="\"($key)\": \"([^\"]*)\""
while read -r l; do
if [[ $l =~ $re ]]; then
name="${BASH_REMATCH[1]}"
value="${BASH_REMATCH[2]}"
echo "$name=$value"
else
echo "No match"
fi
done
The regular expression can be tuned to match multiple spaces/tabs or newline(s). It won't work if the value has an embedded ". This is an illustration; better to use some "industrial" parser :)
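To run the loop against the sample response, redirect the file into it (the path is assumed), i.e. end the loop with:
done < /tmp/so.json
For the one-line response above, this prints Body=172.16.1.42|/home/480/1234/5-12-2013/1234.toSort.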
Here is a crude way to do it: Transform JSON into bash variables to eval them.
This only works for:
JSON which does not contain nested arrays, and
JSON from trustworthy sources (else it may confuse your shell script, and perhaps even be able to harm your system. You have been warned!)
Well, yes, it uses Perl to do this job, thanks to CPAN, but it is small enough for inclusion directly into a script and hence is quick and easy to debug:
json2bash() {
perl -MJSON -0777 -n -E 'sub J {
my ($p,$v) = @_; my $r = ref $v;
if ($r eq "HASH") { J("${p}_$_", $v->{$_}) for keys %$v; }
elsif ($r eq "ARRAY") { $n = 0; J("$p"."[".$n++."]", $_) foreach #$v; }
else { $v =~ '"s/'/'\\\\''/g"'; $p =~ s/^([^[]*)\[([0-9]*)\](.+)$/$1$3\[$2\]/;
$p =~ tr/-/_/; $p =~ tr/A-Za-z0-9_[]//cd; say "$p='\''$v'\'';"; }
}; J("json", decode_json($_));'
}
use it like eval "$(json2bash <<<'{"a":["b","c"]}')"
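If I read the snippet right, that example should leave you with json[0]='b' and json[1]='c' set in the shell.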
Not heavily tested, though. For updates, warnings and more examples, see my gist.
Update
(Unfortunately, the following is a link-only solution, as the C code is far
too long to duplicate here.)
For all those who do not like the above solution,
there is now a C program json2sh
which (hopefully safely) converts JSON into shell variables.
In contrast to the perl snippet, it is able to process any JSON,
as long as it is well formed.
Caveats:
json2sh was not tested much.
json2sh may create variables which start with the shellshock pattern () {
I wrote json2sh to be able to post-process .bson with Shell:
bson2json()
{
printf '[';
{ bsondump "$1"; echo "\"END$?\""; } | sed '/^{/s/$/,/';
echo ']';
};
bsons2json()
{
printf '{';
c='';
for a;
do
printf '%s"%q":' "$c" "$a";
c=',';
bson2json "$a";
done;
echo '}';
};
bsons2json */*.bson | json2sh | ..
Explained:
bson2json dumps a .bson file such that the records become a JSON array.
If everything works OK, an END0 marker is appended; otherwise you will see something like END1.
The END marker is needed, or else empty .bson files would not show up.
bsons2json dumps a bunch of .bson files as an object, where the output of bson2json is indexed by the filename.
This is then post-processed by json2sh, such that you can use grep/source/eval/etc. to bring the values you need into the shell.
This way you can quickly process the contents of a MongoDB dump at shell level, without needing to import it into MongoDB first.
I wrote a quick shell script to emulate the situation of xkcd #981 (without hard links, just symlinks to parent dirs) and used a recursive function to create all the directories. Unfortunately this script does not provide the desired result, so I think my understanding of the scope of variable $count is wrong.
How can I properly make the function use recursion to create twenty levels of folders, each containing 3 folders (3^20 folders, ending in soft links back to the top)?
#!/bin/bash
echo "Generating folders:"
toplevel=$PWD
count=1
GEN_DIRS() {
for i in 1 2 3
do
dirname=$RANDOM
mkdir $dirname
cd $dirname
count=$(expr $count + 1)
if [ $count < 20 ] ; then
GEN_DIRS
else
ln -s $toplevel "./$dirname"
fi
done
}
GEN_DIRS
exit
Try this (amended version of the script) — it seems to work for me. I decline to test to 20 levels deep, though; at 8 levels deep, each of the three top-level directories occupies some 50 MB on a Mac file system.
#!/bin/bash
echo "Generating folders:"
toplevel=$PWD
GEN_DIRS()
{
cur=${1:?}
max=${2:?}
for i in 1 2 3
do
dirname=$RANDOM
if [ $cur -le $max ]
then
(
echo "Directory: $PWD/$dirname"
mkdir $dirname
cd $dirname
GEN_DIRS $((cur+1)) $max
)
else
echo "Symlink: $PWD/$dirname"
ln -s $toplevel "./$dirname"
fi
done
}
GEN_DIRS 1 ${1:-4}
Lines 6 and 7 are giving names to the positional parameters ($1 and $2) passed to the function — the ${1:?} notation simply means that if you omit to pass a parameter $1, you get an error message from the shell (or sub-shell) and it exits.
The parentheses on their own (lines 13 and 18 above) mean that the commands in between are run in a sub-shell, so changes in directory inside the sub-shell do not affect the parent shell.
The condition on line 11 now uses an arithmetic comparison (-le) instead of the string <; this works better for deep nesting (< is a lexicographic comparison, so level 9 is not less than level 10; worse, inside the original [ ... ] an unquoted < is actually treated as an I/O redirection). It also means that the [ command is OK to use instead of the [[ command (although [[ would also work, I prefer the old-fashioned notation).
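A quick standalone demo of the sub-shell behavior (not part of the script):
pwd                                  # e.g. /home/you
( mkdir -p demo && cd demo && pwd )  # /home/you/demo
pwd                                  # still /home/you: the cd died with the sub-shell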
I ended up creating a script like this:
#!/bin/bash
echo "Generating folders:"
toplevel=$PWD
level=0
maxlevel=4
function generate_dirs {
pushd "$1" >/dev/null || return
(( ++level ))
for i in 1 2 3; do
dirname=$RANDOM
if (( level < maxlevel )); then
echo "$PWD/$dirname"
mkdir "$dirname" && generate_dirs "$dirname"
else
echo "$PWD/$dirname (link to top)"
ln -sf "$toplevel" "$dirname"
fi
done
popd >/dev/null
(( --level ))
}
generate_dirs .
exit
I've written a function in zsh to find and replace a specific number with a keyword that I'll use later on in a larger script. Here's what I've got:
function replace_metal() {
for file in "$#"; do
[ -f "$file" ] && mv $file $file.old
# replace metal
awk '/^28\s/ { gsub(/28\s/, "METAL") }; { print }' $file.old > $file
# remove temporary files
rm -f $file.old
done
}
The awk portion works fine when I run it on the command line but while in the script, it fails to parse the file and replace the number with the keyword. I'm not sure why it fails. I've written a function that is similar that works without any trouble:
function fix_filename() {
for file in "$#"; do
[ -f "$file" ] && mv $file $file.old
# fix filename
awk '{ gsub(/myFileName/,FILENAME); print }' $file.old > $file.tmp
# clean up filename
awk '{ gsub(/.gjf.old/,""); print }' $file.tmp > $file
# remove temporary files
rm -f $file.old $file.tmp
done
}
I'm especially confused as to why awk won't work in the replace_metal function but will on the command line. If anyone can explain that, I'd really appreciate it.
Here's an example portion of a file that I'd run this script on. They are cartesian coordinates for a molecular geometry program I use.
6 4.387152 -0.132561 1.145384
6 4.435130 0.035315 -0.261758
6 3.241800 0.069735 -1.002575
7 2.023205 -0.053248 -0.382329
6 1.948032 -0.217668 0.977856
6 3.120408 -0.260395 1.759133
8 0.936529 -0.001059 -1.144164
28 -0.810634 -0.374713 -0.376819
7 -1.066408 1.593331 -0.221421
6 -2.101594 2.162030 0.386527
6 -3.220999 1.475281 0.925467
7 -2.581803 -0.796964 0.180331
6 -3.412540 0.082878 0.747753
6 -0.299269 -2.264241 -0.449077
1 5.304344 -0.163663 1.737743
1 5.382399 0.136858 -0.794636
1 3.185977 0.187888 -2.085134
1 0.932373 -0.311671 1.366224
1 3.017555 -0.393258 2.837678
1 -2.114644 3.263364 0.463786
1 -4.007715 2.050042 1.415626
1 -4.379471 -0.313239 1.099097
1 -0.572811 -2.828718 0.461055
1 0.789786 -2.379489 -0.603095
1 -0.795666 -2.747919 -1.311858
6 -3.146815 -2.155894 0.046938
1 -2.990568 -2.540510 -0.972499
1 -2.672661 -2.865421 0.746200
1 -4.233217 -2.149944 0.247135
6 -0.086130 2.536630 -0.792152
1 0.886270 2.480474 -0.265799
1 0.102603 2.306402 -1.853394
1 -0.445050 3.580750 -0.720938
Items in the first column are the only things that can be changed. Items in the other three columns should not ever change.
Thanks for your help!
The problem is the escaping of the "\" character. Experiment with "\\s" or even "\\\\s". If you don't run the script directly, the "\" character is evaluated twice: first by the shell and then by awk. Anyway, your solution is way too complicated.
Try:
sed -i "s/^28 /METAL /" file
sed -i means substitute in place, so you don't have to copy the file "file" to "file.old" and then back again to "file". (Note the trailing space in the replacement, which keeps the first two columns separated.)
Zsh has a built-in function to escape strings:
f="to be escaped"
print ${(q)f}
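For the example above, print ${(q)f} emits to\ be\ escaped, which is then safe to reuse in an eval or a generated command line.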
If you can't win and quoting hell drives you mad (and you know there's a space and not a tab), just cheat:
awk '/^28 / { gsub(/^28 /, "METAL ") }; { print }' $file
... or else use [[:space:]] instead of \s; it appears this GNU awk doesn't understand \s. For me, even plain
[0 1047 19:39:10] ~/temp/stack % gawk '/^28\s/ { gsub(/28\s/, "METAL") }; { print }' data
fails to replace. (Also, don't replace your space away if it's the only thing separating columns 1 and 2: replace with "METAL " or replace just /^28/.)