I've been looking for a way to uglify some JSON while in my bash console. This helps when using it afterwards in another command (for example, to pass JSON inline to httpie).
Given:
{
"foo": "lorem",
"bar": "ipsum"
}
I want to obtain:
{"foo":"lorem","bar":"ipsum"}
NOTE: this question is intentionally greatly inspired by its pretty-print counterpart. However, googling for bash minify json didn't give me a proper result, hence this question for the minify/uglify case.
You can use the jq -c (compact output) option:
jq -c . < input.json
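With the sample from the question saved as input.json, that produces:
{"foo":"lorem","bar":"ipsum"}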
TL;DR
no install
python -c 'import json, sys;json.dump(json.load(sys.stdin), sys.stdout)' < my.json
very fast (with jj)
jj -u < my.json
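Both read the pretty-printed document on stdin and emit the compact form. For the question's sample (jj here is tidwall's JSON stream editor; its -u flag stands for "ugly", i.e. compact, output):
$ jj -u < my.json
{"foo":"lorem","bar":"ipsum"}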
Perf benchmark
Here's the script, using hyperfine:
#!/usr/bin/env bash
tmp=$(mktemp json.XXX)
tmp_md=$(mktemp md.XXX)
trap "rm $tmp $tmp_md" EXIT
cat <<JSON > $tmp
{
"foo": "lorem",
"bar": "ipsum"
}
JSON
hyperfine \
--export-markdown $tmp_md \
--warmup 100 \
"jj -u < $tmp" \
"yq eval -j -I=0 < $tmp" \
"xidel -s - -e '\$json' --printed-json-format=compact < $tmp" \
"jq --compact-output < $tmp" \
"python3 -c 'import json, sys;json.dump(json.load(sys.stdin), sys.stdout)' < $tmp" \
"ruby -r json -e 'j JSON.parse \$stdin.read' < $tmp"
pbcopy < $tmp_md
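Note that pbcopy (which puts the exported markdown table on the clipboard) is macOS-only; on Linux the last line could be swapped for xclip, for example (my own substitution, not part of the original script):
xclip -selection clipboard < $tmp_md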
The results on my Mac, a MacBook Air (M1, 2020) with 8 GB of RAM:
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `jj -u < json.p72` | 1.3 ± 0.2 | 0.9 | 2.7 | 1.00 |
| `yq eval -j -I=0 < json.p72` | 4.4 ± 0.4 | 3.8 | 7.8 | 3.37 ± 0.65 |
| `xidel -s - -e '$json' --printed-json-format=compact < json.p72` | 5.5 ± 0.3 | 5.0 | 6.5 | 4.19 ± 0.77 |
| `python3 -c 'import json, sys;json.dump(json.load(sys.stdin), sys.stdout)' < json.p72` | 14.0 ± 0.4 | 13.4 | 15.0 | 10.71 ± 1.89 |
| `jq --compact-output < json.p72` | 14.4 ± 2.0 | 13.2 | 33.6 | 11.02 ± 2.45 |
| `ruby -r json -e 'j JSON.parse $stdin.read' < json.p72` | 47.3 ± 0.6 | 46.1 | 48.5 | 36.10 ± 6.32 |
Results for a large JSON file (14k lines):
http https://france-geojson.gregoiredavid.fr/repo/regions.geojson | jj -p > $tmp
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `jj -u < json.wFY` | 3.4 ± 0.7 | 2.7 | 12.2 | 1.00 |
| `jq --compact-output < json.wFY` | 35.1 ± 0.4 | 34.5 | 36.1 | 10.24 ± 2.23 |
| `python3 -c 'import json, sys;json.dump(json.load(sys.stdin), sys.stdout)' < json.wFY` | 47.4 ± 0.5 | 46.3 | 48.7 | 13.82 ± 3.01 |
| `xidel -s - -e '$json' --printed-json-format=compact < json.wFY` | 55.5 ± 1.2 | 54.7 | 63.5 | 16.17 ± 3.53 |
| `ruby -r json -e 'j JSON.parse $stdin.read' < json.wFY` | 94.9 ± 0.7 | 93.8 | 96.8 | 27.65 ± 6.02 |
| `yq eval -j -I=0 < json.wFY` | 3087.0 ± 26.6 | 3049.3 | 3126.8 | 899.63 ± 195.81 |
And here is the pretty-print counterpart benchmark.
yq worked for me, using an input file containing the prettified JSON:
yq eval -j -I=0 uglify-test.txt
Docs link: https://mikefarah.gitbook.io/yq/usage/convert
With xidel:
xidel -s input.json -e '$json' --printed-json-format=compact
#or
xidel -s input.json -e 'serialize-json($json)'
{"foo": "lorem", "bar": "ipsum"}
Interesting "benchmark", Ulysse BN.
I couldn't test jj, but on my old CPU these are my results:
var='{
"foo": "lorem",
"bar": "ipsum"
}'
time (for i in {1..100}; do python -c 'import json, sys;json.dump(json.load(sys.stdin), sys.stdout)' <<< "$var" >& /dev/null; done)
real 0m10.813s
user 0m7.532s
sys 0m5.798s
time (for i in {1..100}; do jq --compact-output <<< "$var" >& /dev/null; done)
real 0m10.500s
user 0m1.835s
sys 0m0.769s
time (for i in {1..100}; do xidel -se '$json' --printed-json-format=compact <<< "$var" >& /dev/null; done)
real 0m2.250s
user 0m1.692s
sys 0m0.889s
jq-minify
Here is a bash script that writes the minified JSON back to the file.
It works with bash v3.2+ and jq v1.6+.
#!/usr/bin/env bash
set -eu
path=
options=()
# change -c to -r to get pretty-print
set -- "$@" -c .
for arg; do
if [ -f "$arg" ]; then
if [ -n "$path" ]; then
echo "Cannot specify multiple paths to jq-minify" >&2
exit 1
fi
path="$arg"
else
options+=("$arg")
fi
done
tmp=$(mktemp)
jq "${options[@]}" "$path" >"$tmp"
cat "$tmp" >"$path"
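A usage sketch, assuming the script is saved on your PATH as jq-minify (my.json is a made-up filename; any argument that names an existing file is taken as the target, everything else is passed through to jq):
jq-minify my.json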
Related
I have some problems with my script, which collects data from a JSON file and stores it in variables that are then used for a curl request. I need to build a curl request for EACH JSON entry.
My problem is that I would like to pass parameters to the curl request one by one.
I was thinking about a for loop, but this won't actually be the right workaround, because
ruleId=$($whitelist | jq -r '.[].ruleId')
gives:
10055
10098
This cannot be interpreted correctly by curl.
So the question is: how can I pass variables in a proper manner, in a sort of iteration, using jq? Again, I need to do a single curl call for each entry in the JSON file.
Code:
#!/bin/sh
#set -e
# Info to test this script
# 1. Start docker
# 2. Run this command from terminal: docker run -u zap -p 8080:8080 -i owasp/zap2docker-stable zap.sh -daemon -host 0.0.0.0 -port 8080 -config api.disablekey=true -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true
# 2. Get the JSON from api (test for now)
# 3. bash filter.sh
# 4. Check for filter to be set with browser http://localhost:8080/UI/alertFilter/ => view global filter
curl -s 'https://api.npoint.io/c29e3a68be632f73fc22' > whitelist_tmp.json
whitelist="cat whitelist_tmp.json"
listlength=$(jq '. | length' $whitelist)
ruleId=$($whitelist | jq -r '.[].ruleId')
alert=$($whitelist | jq -r '.[].alertName')
newLevel=$($whitelist | jq -r '.[].newLevel')
url=$($whitelist | jq -r '.[].url | .[]')
urlIsRegex=$($whitelist | jq -r '.[].urlIsRegex')
enabled=$($whitelist | jq -r '.[].enabled')
parameter=$($whitelist | jq -r '.[].parameter')
evidence=$($whitelist | jq -r '.[].evidence')
echo "Setting Rule for: $ruleId"
echo "$(curl --data-urlencode "ruleId=$ruleId" --data-urlencode "newLevel=$newLevel" --data-urlencode "url=$url" --data-urlencode "urlIsRegex=$urlIsRegex" --data-urlencode "enabled=$enabled" --data-urlencode "parameter=$parameter" --data-urlencode "evidence=$evidence" "http://localhost:8090/JSON/alertFilter/action/addGlobalAlertFilter")"
You can use bash arrays to store your values:
ruleId=($($whitelist | jq -r '.[].ruleId'))
alert=($($whitelist | jq -r '.[].alertName'))
...
and then iterate over them. Example:
for (( i = 0; i < "${#ruleId[@]}"; i++ )); do
id="${ruleId[i]}"
al="${alert[i]}"
...
echo "$(curl --data-urlencode "ruleId=$id" ...
done
This works if and only if the values returned by your commands are single words (no spaces in them) or are properly quoted. If you have more complex values, you cannot simply assign them to an array with array=($(command)): you would get more cells than values in your array.
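A quick illustration of that pitfall (the values are made up):
arr=($(printf '%s\n' 'two words' 'three more'))
echo "${#arr[@]}"   # prints 4, not 2: every word became its own cell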
Pulling the comments from your previous question:
whitelist="whitelist_tmp.json"
listlength=$(jq '. | length' "${whitelist}")
mapfile -t rule < <(jq -r '.[].ruleId' "${whitelist}")
mapfile -t alert < <(jq -r '.[].alertName' "${whitelist}")
mapfile -t level < <(jq -r '.[].newLevel' "${whitelist}")
mapfile -t url < <(jq -r '.[].url | .[]' "${whitelist}")
mapfile -t regex < <(jq -r '.[].urlIsRegex' "${whitelist}")
mapfile -t parameter < <(jq -r '.[].parameter' "${whitelist}")
mapfile -t evidence < <(jq -r '.[].evidence' "${whitelist}")
for ((i=0; i<${listlength}; i++))
do
curl ... "${rule[$i]}" ... "${alert[$i]}" ...
done
The mapfile should maintain embedded white space in values returned by jq.
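A quick check of that claim (again with made-up values):
mapfile -t a < <(printf '%s\n' 'two words' 'three more')
echo "${#a[@]}"   # prints 2: one cell per line, embedded spaces preserved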
I want to use a bash script to capture the output of the top command and write it to a JSON file. But I'm having difficulty writing the slashes/encodings/line breaks into a file with a valid JSON object.
Here's what I tried:
#!/bin/bash
message1=$(top -n 1 -o %CPU)
message2=$(top -n 1 -o %CPU | jq -aRs .)
message3=$(top -n 1 -o %CPU | jq -Rs .)
message4=${message1//\\/\\\\/}
echo "{\"message\":\"${message2}\"}" > file.json
But when I look at file.json, it looks something like this:
{"message":""\u001b[?1h\u001b=\u001b[?25l\u001b[H\u001b[2J\u001b(B\u001b[mtop - 21:34:53 up 55 days, 5:14, 2 users, load average: 0.17, 0.09, 0.03\u001b(B\u001b[m\u001b[39;49m\u001b(B\u001b[m\u001b[39;49m\u001b[K\nTasks:\u001b(B\u001b[m\u001b[39;49m\u001b[1m 129 \u001b(B\u001b[m\u001b[39;49mtotal,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 1 \u001b(B\u001b[m\u001b[39;49mrunning,\u001b(B\u001b[m\u001b[39;49m\u001b[1m 128 \u001b(B\u001b[m\u001b[39;49msleeping,\u001b(B\u001b[m
Each of the other attempts with message1 to message4 all result in various json syntax issues.
Can anyone suggest what I should try next?
You don't need all the bells and whistles of echo and multiple jq invocations:
top -b -n 1 -o %CPU | jq -aRs '{"message": .}' >file.json
Or pass the output of the top command to jq as an argument variable, using --arg:
jq -an --arg msg "$(top -b -n 1 -o %CPU)" '{"message": $msg}' >file.json
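Either way, you can sanity-check that the result parses as JSON afterwards (a quick verification, not part of the answer itself):
jq . file.json >/dev/null && echo valid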
I have a file which contains a JSON object as a string:
{"STATUS":[{"STATUS":"S","When":1530779438,"Code":70,"Msg":"CGMiner stats","Description":"cgminer 4.9.0"}],"STATS":[{"CGMiner":"4.9.0","Miner":"9.0.0.5","CompileTime":"Sat May 26 20:42:30 CST 2018","Type":"Antminer Z9-Mini"},{"STATS":0,"ID":"ZCASH0","Elapsed":179818,"Calls":0,"Wait":0.000000,"Max":0.000000,"Min":99999999.000000,"GHS 5s":"16.39","GHS av":16.27,"miner_count":3,"frequency":"750","fan_num":1,"fan1":5760,"fan2":0,"fan3":0,"fan4":0,"fan5":0,"fan6":0,"temp_num":3,"temp1":41,"temp2":40,"temp3":43,"temp2_1":56,"temp2_2":53,"temp2_3":56,"temp_max":43,"Device Hardware%":0.0000,"no_matching_work":0,"chain_acn1":4,"chain_acn2":4,"chain_acn3":4,"chain_acs1":" oooo","chain_acs2":" oooo","chain_acs3":" oooo","chain_hw1":0,"chain_hw2":0,"chain_hw3":0,"chain_rate1":"5.18","chain_rate2":"5.34","chain_rate3":"5.87"}],"id":1}
Now I want to get the values of some keys in this object within an sh script.
The following command works, but oddly not for all keys!?
This works (I get "750"):
grep -o '"frequency": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
but this doesn't (empty output):
grep -o '"fan_num": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
same with this:
grep -o '"fan1": *"[^"]*"' LXstats.txt | grep -o '"[^"]*"$'
I'm working on a Xilinx OS which has no Python, so "jq" will not work, and grep has no "-P" option. Does anyone have an idea how to work with that? :)
Thanks and best regards,
dave
When you want to do more than just g/re/p you should be using awk, not combinations of greps+pipes, etc.
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
750
$ awk -v tag='fan_num' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
1
$ awk -v tag='fan1' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' file
5760
The above will work with any awk in any shell on any UNIX box. If you have GNU awk for the 3rd arg to match() and gensub() you can write it a bit briefer:
$ awk -v tag='frequency' 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)",a) { print gensub(/^"/,"",1,a[1]) }' file
750
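If you need this lookup repeatedly, the portable version can be wrapped in a small shell function (a sketch; jsonval is a made-up name, and the awk program is the one above, unchanged):
jsonval() {
  # $1 = key to extract, $2 = file containing the JSON string
  awk -v tag="$1" 'match($0,"\""tag"\": *(\"[^\"]*|[0-9]+)") { val=substr($0,RSTART,RLENGTH); sub(/^"[^"]+": *"?/,"",val); print val }' "$2"
}
jsonval frequency LXstats.txt   # 750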
What I'm trying to do:
Use jq to pass along parameters to ffmpeg in a bash script.
Have a JSON in this external file that I generate regularly.
{
"streams":[
{
"track":"/var/www/html/stream1.m3u8",
"link":"http://playertest.longtailvideo.com/adaptive/bipbop/bipbop.m3u8"
},
{
"track":"/var/www/html/stream2.m3u8",
"link":"https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8"
},
{
"track":"/var/www/html/stream3.m3u8",
"link":"http://www.streambox.fr/playlists/test_001/stream.m3u8"
}
]
}
This is the command I've tried based on the response found here https://github.com/stedolan/jq/issues/503
jq -r '.streams[] | ffmpeg -v verbose -i \(.link | #sh) -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 \(.Track | #sh)"' streams.json | sh
However I get this error message:
jq: error: syntax error, unexpected IDENT, expecting $end (Unix shell quoting issues?) at <top-level>, line 1:
.streams[] | ffmpeg -v verbose -i \(.link | #sh) -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 \(.Track | #sh)"
jq: 1 compile error
The shortest possible change to your original code is just to add the quotes that were missing:
jq -r '.streams[] | "ffmpeg -v verbose -i \(.link | #sh) -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 \(.Track | #sh)"' streams.json | sh
# ^-- this was missing
Note that "ffmpeg ..." is a string, and is contained in quotes. That said, you're relying on jq to generate safe code for your shell; since it has features explicitly built for the purpose, this isn't as bad an idea as it might otherwise be, but it's still better practice to avoid code generation wherever possible.
As an alternate approach that avoids code generation and is safe with all possible filenames, use jq to generate a NUL-delimited stream of track / link pairs, and a BashFAQ #1 loop to iterate over them:
#!/usr/bin/env bash
while IFS= read -r -d '' track && IFS= read -r -d '' link; do
ffmpeg -v verbose -i "$link" -c copy -flags -global_header -hls_time 10 \
-hls_list_size 6 -hls_wrap 10 -start_number 1 "$track"
done < <(jq -j '.streams[] | ( .track + "\u0000" + .link + "\u0000" )' streams.json)
Using bash and jq :
#!/bin/bash
file="$1"
c=0
while true; do
track=$(jq -r ".streams[$c].track" "$file" 2>/dev/null)
link=$(jq -r ".streams[$c].link" "$file" 2>/dev/null)
  [[ $track == null || $link == null ]] && break  # jq -r prints "null" once the index is past the end of the array
ffmpeg -v verbose -i "$link" -c copy -flags -global_header -hls_time 10 \
-hls_list_size 6 -hls_wrap 10 -start_number 1 "$track"
((c++))
done
Usage :
./script.bash file.json
Using nodejs to generate the shell commands :
(replace file.json with your own path/file)
#!/bin/bash
node<<EOF
var j=$(<file.json);
for (var i = 0; i<j.streams.length; i++) {
console.log("ffmpeg -v verbose -i '" + j.streams[i].link + "' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '" + j.streams[i].track + "'");
}
EOF
Output :
ffmpeg -v verbose -i 'http://playertest.longtailvideo.com/adaptive/bipbop/bipbop.m3u8' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '/var/www/html/stream1.m3u8'
ffmpeg -v verbose -i 'https://mnmedias.api.telequebec.tv/m3u8/29880.m3u8' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '/var/www/html/stream2.m3u8'
ffmpeg -v verbose -i 'http://www.streambox.fr/playlists/test_001/stream.m3u8' -c copy -flags -global_header -hls_time 10 -hls_list_size 6 -hls_wrap 10 -start_number 1 '/var/www/html/stream3.m3u8'
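To actually execute these commands rather than just print them, the generator's output can be piped to a shell (assuming the wrapper above is saved as gen.sh, a made-up name, and that you trust the JSON content):
bash gen.sh | sh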
How can I list the filenames of normal text (.txt) files that don't end with a newline?
e.g.: list (output) this filename:
$ cat a.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf$
and don't list (output) this filename:
$ cat b.txt
asdfasdlsad4randomcharsf
asdfasdfaasdf43randomcharssdf
$
Use pcregrep, a Perl Compatible Regular Expressions version of grep which supports a multiline mode using -M flag that can be used to match (or not match) if the last line had a newline:
pcregrep -LMr '\n\Z' .
In the above example we are saying to search recursively (-r) in the current directory (.), listing files that don't match (-L) our multiline (-M) regex, which looks for a newline at the end of the file ('\n\Z').
Changing -L to -l would list the files that do have newlines in them.
pcregrep can be installed on macOS with the Homebrew pcre package: brew install pcre
OK, it's my turn, I'll give it a try:
find . -type f -print0 | xargs -0 -L1 bash -c 'test "$(tail -c 1 "$0")" && echo "No new line at end of $0"'
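This works because command substitution strips trailing newlines, so "$(tail -c 1 "$f")" is empty exactly when the file's last byte is a newline. A quick demo (with made-up files):
printf 'with newline\n' > ok.txt
printf 'no newline' > bad.txt
test "$(tail -c 1 ok.txt)" && echo ok.txt    # prints nothing
test "$(tail -c 1 bad.txt)" && echo bad.txt  # prints bad.txt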
If you have ripgrep installed:
rg -Ul '[^\n]\z'
That regular expression matches any character which is not a newline, followed by the end of the file; the -U (multiline) flag is required because ripgrep rejects patterns containing \n without it.
Give this a try:
find . -type f -exec sh -c '[ -z "$(sed -n "\$p" "$1")" ]' _ {} \; -print
It will print filenames of files that end with a blank line. To print files that don't end in a blank line change the -z to -n.
If you are using ack (http://beyondgrep.com) as an alternative to grep, you just run this:
ack -v '\n$'
It searches for all lines that don't match (-v) a newline at the end of the line.
The best oneliner I could come up with is this:
git grep --cached -Il '' | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
This uses git grep, because in my use case I want to ensure files committed to a git branch have ending newlines.
If this is required outside of a git repo, you can of course just use grep instead.
grep -RIl '' . | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'
Why do I use grep? Because you can easily filter out binary files with -I.
Then it's the usual xargs/tail combination found in other answers, with the addition of exiting with 1 if a file has no newline, so this can be used in a pre-commit git hook or in CI.
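For instance, a minimal pre-commit hook might look like this (saved as .git/hooks/pre-commit and made executable; it just reuses the git grep command above):
#!/usr/bin/env bash
# Fail the commit if any tracked file lacks a final newline.
git grep --cached -Il '' | xargs -L1 bash -c 'if test "$(tail -c 1 "$0")"; then echo "No new line at end of $0"; exit 1; fi'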
This should do the trick:
#!/bin/bash
for file in `find $1 -type f -name "*.txt"`;
do
nlines=`tail -n 1 $file | grep '^$' | wc -l`
if [ $nlines -eq 1 ]
then echo $file
fi
done;
Call it this way: ./script dir
E.g. ./script /home/user/Documents/ -> lists all text files in /home/user/Documents ending with \n.
This is kludgy; someone surely can do better:
for f in `find . -name '*.txt' -type f`; do
if test `tail -c 1 "$f" | od -c | head -n 1 | tail -c 3` != \\n; then
echo $f;
fi
done
N.B. this answers the question in the title, which is different from the question in the body (which, I think, is looking for files that end with \n\n).
Most solutions on this page do not work for me (FreeBSD 10.3 amd64). Ian Will's OSX solution does almost always work, but is pretty difficult to follow :-(
There is an easy solution that almost always works too (if $f is the file):
sed -i '' -e '$a\' "$f"
There is a major problem with the sed solution: it never gives you the opportunity to just check (and not append a newline).
Both of the above solutions fail for DOS files. I think the most portable/scriptable solution is probably the easiest one, which I developed myself :-)
Here is that elementary sh script, which combines file/unix2dos/tail. In production, you will likely need to keep "$f" in quotes and fetch the tail output (embedded into the shell variable named last) as \"$f\":
if file $f | grep 'ASCII text' > /dev/null; then
if file $f | grep 'CRLF' > /dev/null; then
type unix2dos > /dev/null || exit 1
dos2unix $f
last="`tail -c1 $f`"
[ -n "$last" ] && echo >> $f
unix2dos $f
else
last="`tail -c1 $f`"
[ -n "$last" ] && echo >> $f
fi
fi
Hope this helps someone.
This example:
- Works on macOS (BSD) and GNU/Linux
- Uses standard tools: find, grep, sh, file, tail, od, tr
- Supports paths with spaces
Oneliner:
find . -type f -exec sh -c 'file -b "{}" | grep -q text' \; -exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; -print
More readable version:
- Find under the current directory
- Regular files
- That file (brief mode) considers text
- Whose last byte (tail -c 1) is not represented by od's named character "nl"
- And print their paths
#!/bin/sh
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print
Finally, a version with a -f flag to fix the offending files (requires bash).
#!/bin/bash
# Finds files without final newlines
# Pass "-f" to also fix those files
fix_flag="$([ "$1" == "-f" ] && echo -true || echo -false)"
find . \
-type f \
-exec sh -c 'file -b "{}" | grep -q text' \; \
-exec sh -c '[ "$(tail -c 1 "{}" | od -An -a | tr -d "[:space:]")" != "nl" ]' \; \
-print \
$fix_flag \
-exec sh -c 'echo >> "{}"' \;
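Usage, assuming the script is saved as fix-newlines.sh (a made-up name) in the directory to scan:
./fix-newlines.sh       # report text files missing a final newline
./fix-newlines.sh -f    # report and fix them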
Another option:
$ find . -name "*.txt" -print0 | xargs -0I {} bash -c '[ -z "$(tail -n 1 {})" ] && echo {}'
Since your question has the perl tag, I'll post an answer which uses it:
find . -type f -name '*.txt' -exec perl check.pl {} +
where check.pl is the following:
#!/bin/perl
use strict;
use warnings;
foreach (@ARGV) {
open(FILE, $_);
seek(FILE, -2, 2);
my $c;
read(FILE,$c,1);
if ( $c ne "\n" ) {
print "$_\n";
}
close(FILE);
}
This Perl script just opens the files passed as parameters one at a time and reads only the next-to-last character; if it is not a newline character, it prints out the filename, otherwise it does nothing.
This example works for me on OSX (many of the above solutions did not)
for file in `find . -name "*.java"`
do
result=`od -An -tc -j $(( $(ls -l $file | awk '{print $5}') - 1 )) $file`
last_char=`echo $result | sed 's/ *//'`
if [ "$last_char" != "\n" ]
then
#echo "Last char is .$last_char."
echo $file
fi
done
Here is another example using a few bash built-in commands, which:
- allows you to filter by extension (e.g. | grep '\.md$' filters only the md files)
- lets you pipe more grep commands to extend the filter (like exclusions: | grep -v '\.git' to exclude the files under .git)
- gives you the full power of grep parameters for more filters or inclusions
The code basically iterates (for) over all the files matching your chosen criteria (grep) and, if the last character of a file (-n "$(tail -c -1 "$file")") is not a newline, prints the file name (echo "$file").
The verbose code:
for file in $(find . | grep '\.md$')
do
if [ -n "$(tail -c -1 "$file")" ]
then
echo "$file"
fi
done
A bit more compact:
for file in $(find . | grep '\.md$')
do
[ -n "$(tail -c -1 "$file")" ] && echo "$file"
done
and, of course, the 1-liner for it:
for file in $(find . | grep '\.md$'); do [ -n "$(tail -c -1 "$file")" ] && echo "$file"; done