tmux send-keys swallows spaces - function

Synopsis: "tmux send-keys" strips the spaces from a bash command and I don't understand why (or how, really.)
test ()
{
    tmux new -s testsession -d
    tmux send-keys -t testsession "time tar -I \"zstd -19 -T0\" -cvf ${1}.tar.zst "${@:2}""
    tmux attach -t testsession
}
with an input of
input1 input2 input3 i\ n\ p\ u\ t\ 4
Expected (and desired) output is
time tar -I "zstd -19 -T0" -cvf input1.tar.zst "input2" "input3" "i n
p u t 4"
Instead I get
time tar -I "zstd -19 -T0" -cvf input1.tar.zst "input2input3input4"
Note that I have omitted the ; C-m or ; ENTER at the end of the send-keys, and I've also simplified the original function, since the other parts are more straightforward and work. I did that to get a more precise picture of what actually ends up on the terminal, after spending several hours last night trying to brute-force the 'correct' syntax, to no avail.

I stumbled over the same problem and found the not-so-nice workaround of inserting the key name "Space" wherever a space should be typed.
So, in your case, I would expect the following command to work:
tmux send-keys -t testsession "time Space tar Space -I Space \"zstd Space -19 Space -T0\" Space -cvf Space ${1}.tar.zst Space "${@:2}""
I got this idea from the send-keys description's "list of Keys" section.
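An alternative I haven't tested against this exact setup, but which is based on send-keys' documented -l (literal) flag: send the whole command as one literal string, so tmux never tries to interpret each space-separated word as a key name. Note that ${*:2} joins the remaining arguments with single spaces (without the per-argument double quotes shown in the desired output), and that Enter has to be sent in a separate send-keys call without -l:
test ()
{
    tmux new -s testsession -d
    # -l sends the string literally instead of looking up key names
    tmux send-keys -t testsession -l "time tar -I \"zstd -19 -T0\" -cvf ${1}.tar.zst ${*:2}"
    # uncomment to actually run the command:
    # tmux send-keys -t testsession Enter
    tmux attach -t testsession
}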

How to strip control characters when saving output to variable?

Trying to strip control characters such as ^[[1m and ^[(B^[[m from ^[[1mfoo^[(B^[[m.
$ cat test.sh
#! /bin/bash
bold=$(tput bold)
normal=$(tput sgr0)
printf "%s\n" "Secret:"
printf "$bold%s$normal\n" "foo"
printf "%s\n" "Done"
$ cat test.exp
#!/usr/bin/expect
log_file -noappend ~/Desktop/test.log
spawn ~/Desktop/test.sh
expect {
    -re {Secret:\r\n(.+?)\r\nDone} {
        set secret $expect_out(1,string)
    }
}
$ expect ~/Desktop/test.exp
spawn ~/Desktop/test.sh
Secret:
foo
Done
$ cat -e ~/Desktop/test.log
spawn ~/Desktop/test.sh^M$
Secret:^M$
^[[1mfoo^[(B^[[m^M$
Done^M$
The escape sequences depend on the TERM variable. You can avoid getting them in the first place by pretending to have a dumb terminal:
set env(TERM) dumb
spawn ~/Desktop/test.sh
This works for the provided example. Whether it will work in the real case is impossible to tell from the provided information; that depends on whether the program actually uses termcap to generate the escape sequences.
I don't see any way in expect to add hooks to manipulate the data being read before it's matched/logged/etc. However, you can add another layer into your pipeline to strip ANSI escapes from what the real program being run outputs before expect sees it by adjusting your test.exp:
set csi_re [subst -nocommands {\x1B\\[[\x30-\x3F]*[\x20-\x2F]*[\x40-\x7E]}]
spawn sh -c "~/Desktop/test.sh | sed 's/$csi_re//g'"
This uses sed to strip out all strings that match ANSI terminal CSI escape sequences from test.sh's output.
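If the escape sequences have already ended up in a captured log, a similar substitution can be applied after the fact. Here is a rough sketch, assuming GNU sed (for the \x1B escape); note that sequences such as ^[(B are charset-selection escapes rather than CSI sequences, so they need a second pattern:
# first expression strips CSI sequences (ESC [ ... final byte),
# second strips two-character charset escapes such as ESC ( B
sed -e 's,\x1B\[[0-?]*[ -/]*[@-~],,g' -e 's,\x1B([A-Za-z0-9],,g' test.log > test.clean.log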

Split large directory into subdirectories

I have a directory with about 2.5 million files that is over 70 GB in total.
I want to split this into subdirectories, each with 1000 files in them.
Here's the command I've tried using:
i=0; for f in *; do d=dir_$(printf %03d $((i/1000+1))); mkdir -p $d; mv "$f" $d; let i++; done
That command works for me on a small scale, but I can leave it running for hours on this directory and it doesn't seem to do anything.
I'm open to doing this any way via the command line: perl, python, etc. Just whatever would be the fastest way to get this done...
I suspect that if you checked, you'd notice your program was actually moving the files, albeit really slowly. Launching a program is rather expensive (at least compared to making a system call), and you do so three or four times per file! As such, the following should be much faster:
perl -e'
    my $base_dir_qfn = ".";
    my $i = 0;
    my $dir_qfn;

    opendir(my $dh, $base_dir_qfn)
        or die("Can'\''t open dir \"$base_dir_qfn\": $!\n");

    while (defined( my $fn = readdir($dh) )) {
        next if $fn =~ /^(?:\.\.?|dir_\d+)\z/;
        my $qfn = "$base_dir_qfn/$fn";
        if ($i % 1000 == 0) {
            $dir_qfn = sprintf("%s/dir_%03d", $base_dir_qfn, int($i/1000)+1);
            mkdir($dir_qfn)
                or die("Can'\''t make directory \"$dir_qfn\": $!\n");
        }
        rename($qfn, "$dir_qfn/$fn")
            or do {
                warn("Can'\''t move \"$qfn\" into \"$dir_qfn\": $!\n");
                next;
            };
        ++$i;
    }
'
Note: ikegami's helpful Perl-based answer is the way to go - it performs the entire operation in a single process and is therefore much faster than the Bash + standard utilities solution below.
A bash-based solution needs to avoid loops in which external utilities are called in order to perform reasonably.
Your own solution calls two external utilities and creates a subshell in each loop iteration, which means that you'll end up creating about 7.5 million processes(!) in total.
The following solution avoids loops, but, given the sheer number of input files, will still take quite a while to complete (you'll end up creating 4 processes for every 1000 input files, i.e., ca. 10,000 processes in total):
printf '%s\0' * | xargs -0 -n 1000 bash -O nullglob -c '
  dirs=( dir_*/ )
  dir=dir_$(printf %04d $(( 1 + ${#dirs[@]} )))
  mkdir "$dir"; mv "$@" "$dir"' -
printf '%s\0' * prints a NUL-separated list of all files in the dir.
Note that since printf is a Bash builtin rather than an external utility, the max. command-line length as reported by getconf ARG_MAX does not apply.
xargs -0 -n 1000 invokes the specified command with chunks of 1000 input filenames.
Note that xargs -0 is nonstandard, but supported on both Linux and BSD/OSX.
Using NUL-separated input robustly passes filenames without fear of inadvertently splitting them into multiple parts, and even works with filenames with embedded newlines (though such filenames are very rare).
bash -O nullglob -c executes the specified command string with option nullglob turned on, which means that a globbing pattern that matches nothing will expand to the empty string.
The command string counts the output directories created so far to determine the index of the next one, creates that directory, and moves the current batch of (up to) 1000 files into it.
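To sanity-check the result afterwards, here is a small (hedged) verification loop using only standard tools; it prints every output directory together with the number of files it received:
# count the regular files directly inside each dir_* directory
for d in dir_*/; do
  printf '%s %s\n' "$d" "$(find "$d" -maxdepth 1 -type f | wc -l)"
done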
If the directory is not in use, I suggest the following:
find . -maxdepth 1 -type f | split -l 1000 -d -a 5
This will create list files named x00000 - x02500 (the -a 5 just makes sure there are 5 digits, although 4 would work too). You can then move the 1000 files listed in each list file into a corresponding directory.
Perhaps set -o noclobber to eliminate the risk of overwrites in case of a name clash.
To move the files, it's easiest to use brace expansion to iterate over the list-file names (a slightly more defensive variant is sketched after the loop):
for c in x{00000..02500};
do d="d$c";
mkdir $d;
cat $c | xargs -I f mv f $d;
done
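A slightly more defensive variant of the same loop (untested against the original data): it quotes the expansions, skips chunk names that split never created, and uses mv -n (supported by GNU and BSD mv, though not strictly POSIX) so a name clash never silently overwrites anything:
for c in x{00000..02500}; do
  [ -e "$c" ] || continue        # split may have produced fewer chunks
  d="d$c"
  mkdir -p "$d"
  # -I{} makes xargs treat each input line as a single filename
  xargs -I{} mv -n {} "$d" < "$c"
done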
Moving files around is always a challenge. IMHO all the solutions presented so far carry some risk of destroying your files, perhaps because the task sounds simple but there is a lot to consider and to test when implementing it.
We must also not neglect the efficiency of the solution, as we are potentially handling a (very) large number of files.
Here is a script that I carefully and intensively tested with my own files. But of course, use it at your own risk!
This solution:
is safe with filenames that contain spaces.
does not use xargs -L because this will easily result in "Argument list too long" errors
is based on Bash 4 and does not depend on awk, sed, tr etc.
scales well with the number of files to move.
Here is the code:
if [[ "${BASH_VERSINFO[0]}" -lt 4 ]]; then
    echo "$(basename "$0") requires Bash 4+"
    exit -1
fi >&2

opt_dir=${1:-.}
opt_max=1000

readarray files <<< "$(find "$opt_dir" -maxdepth 1 -mindepth 1 -type f)"
moved=0 dirnum=0 dirname=''
for ((i=0; i < ${#files[@]}; ++i))
do
    if [[ $((i % opt_max)) == 0 ]]; then
        ((dirnum++))
        dirname="$opt_dir/$(printf "%02d" $dirnum)"
    fi
    # chops the LF printed by "find"
    file=${files[$i]::-1}
    if [[ -n $file ]]; then
        [[ -d $dirname ]] || mkdir -v "$dirname" || exit
        mv "$file" "$dirname" || exit
        ((moved++))
    fi
done
echo "moved $moved file(s)"
For example, save this as split_directory.sh. Now let's assume you have 2001 files in some/dir:
$ split_directory.sh some/dir
mkdir: created directory some/dir/01
mkdir: created directory some/dir/02
mkdir: created directory some/dir/03
moved 2001 file(s)
Now the new reality looks like this:
some/dir contains 3 directories and 0 files
some/dir/01 contains 1000 files
some/dir/02 contains 1000 files
some/dir/03 contains 1 file
Calling the script again on the same directory is safe and returns almost immediately:
$ split_directory.sh some/dir
moved 0 file(s)
Finally, let's take a look at the special case where we call the script on one of the generated directories:
$ time split_directory.sh some/dir/01
mkdir: created directory 'some/dir/01/01'
moved 1000 file(s)
real 0m19.265s
user 0m4.462s
sys 0m11.184s
$ time split_directory.sh some/dir/01
moved 0 file(s)
real 0m0.140s
user 0m0.015s
sys 0m0.123s
Note that this test ran on a fairly slow, veteran computer.
Good luck :-)
This is probably slower than a Perl program (1 minute for 10,000 files), but it should work with any POSIX-compliant shell.
#! /bin/sh
nd=0
nf=0
/bin/ls | \
while read file
do
    case $(expr $nf % 10) in
    0)
        nd=$(/usr/bin/expr $nd + 1)
        dir=$(printf "dir_%04d" $nd)
        mkdir $dir
        ;;
    esac
    mv "$file" "$dir/$file"
    nf=$(/usr/bin/expr $nf + 1)
done
With bash, you can use arithmetic expansion $((...)).
And of course, this idea can be improved by using xargs; it should not take longer than about 45 seconds for 2.5 million files.
nd=0
ls | xargs -L 1000 echo | \
while read cmd
do
    nd=$((nd+1))
    dir=$(printf "dir_%04d" $nd)
    mkdir $dir
    mv $cmd $dir
done
I would use the following from the command line:
find . -maxdepth 1 -type f | split -l 1000
for i in `ls x*`
do
    mkdir dir$i
    mv `cat $i` dir$i 2>/dev/null &
done
The key is the "&", which runs each mv in the background.
Thanks to karakfa for the split idea.

How to extract data from html table in shell script?

I am trying to create a Bash script that would extract the data from an HTML table.
Below is an example of the table from which I need to extract the data:
<table border=1>
<tr>
<td><b>Component</b></td>
<td><b>Status</b></td>
<td><b>Time / Error</b></td>
</tr>
<tr><td>SAVE_DOCUMENT</td><td>OK</td><td>0.406 s</td></tr>
<tr><td>GET_DOCUMENT</td><td>OK</td><td>0.332 s</td></tr>
<tr><td>DVK_SEND</td><td>OK</td><td>0.001 s</td></tr>
<tr><td>DVK_RECEIVE</td><td>OK</td><td>0.001 s</td></tr>
<tr><td>GET_USER_INFO</td><td>OK</td><td>0.143 s</td></tr>
<tr><td>NOTIFICATIONS</td><td>OK</td><td>0.001 s</td></tr>
<tr><td>ERROR_LOG</td><td>OK</td><td>0.001 s</td></tr>
<tr><td>SUMMARY_STATUS</td><td>OK</td><td>0.888 s</td></tr>
</table>
And I want the BASH script to output it like so:
SAVE_DOCUMENT OK 0.475 s
GET_DOCUMENT OK 0.345 s
DVK_SEND OK 0.002 s
DVK_RECEIVE OK 0.001 s
GET_USER_INFO OK 4.465 s
NOTIFICATIONS OK 0.001 s
ERROR_LOG OK 0.002 s
SUMMARY_STATUS OK 5.294 s
How to do it?
So far I have tried using sed, but I don't know how to use it all that well. I excluded the table header (Component, Status, Time / Error) with grep "<tr><td>", so that only lines starting with <tr><td> are selected for the next parsing step (sed).
This is what I used: sed 's#<\([^<>][^<>]*\)>\([^<>]*\)</\1>#\2#g'
But then the <tr> tags still remain, and it won't separate the strings either. In other words, the result of this script is:
<tr>SAVE_DOCUMENTOK0.406 s</tr>
The full command of the script I'm working on is:
cat $FILENAME | grep "<tr><td>" | sed 's#<\([^<>][^<>]*\)>\([^<>]*\)</\1>#\2#g'
Go with (g)awk, it's capable of this :-). Here is a solution, but please note: it only works with the exact HTML table format you posted.
awk -F "</*td>|</*tr>" '/<\/*t[rd]>.*[A-Z][A-Z]/ {print $3, $5, $7 }' FILE
Here you can see it in action: https://ideone.com/zGfLe
Some explanation:
-F sets the input field separator to a regexp (any of tr's or td's opening or closing tags)
it then works only on lines that match those tags AND contain at least two consecutive uppercase letters
then prints the needed fields.
HTH
You can use the xpath command-line tool (from the XML::XPath Perl module) to accomplish that task very easily:
xpath -e '//tr[position()>1]' test_input1.xml 2> /dev/null | sed -e 's/<\/*tr>//g' -e 's/<td>//g' -e 's/<\/td>/ /g'
You may use the html2text command and format the columns via column, e.g.:
$ html2text table.html | column -ts'|'
Component Status Time / Error
SAVE_DOCUMENT OK 0.406 s
GET_DOCUMENT OK 0.332 s
DVK_SEND OK 0.001 s
DVK_RECEIVE OK 0.001 s
GET_USER_INFO OK 0.143 s
NOTIFICATIONS OK 0.001 s
ERROR_LOG OK 0.001 s
SUMMARY_STATUS OK 0.888 s
Then parse it further from there (e.g. with cut, awk or ex); a rough sketch follows below.
In case you'd like to sort it first, you can use ex, see the example here or here.
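As a rough sketch of that follow-up parsing (hedged: depending on the html2text version, the hidden backspace sequences mentioned in a later answer may need stripping first), this splits html2text's raw output on the | cell separator it emits here and prints just the component and status columns:
html2text table.html | awk -F'|' 'NR > 1 && NF >= 3 { gsub(/^ +| +$/, "", $1); gsub(/^ +| +$/, "", $2); print $1, $2 }'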
There are a lot of ways of doing this but here's one:
grep '^<tr><td>' < $FILENAME \
| sed \
-e 's:<tr>::g' \
-e 's:</tr>::g' \
-e 's:</td>::g' \
-e 's:<td>: :g' \
| cut -c2-
You could use more sed(1) (-e 's:^ ::') instead of the cut -c2- to remove the leading space but cut(1) doesn't get as much love as it deserves. And the backslashes are just there for formatting, you can remove them to get a one liner or leave them in and make sure that they're immediately followed by a newline.
The basic strategy is to slowly pull the HTML apart piece by piece rather than trying to do it all at once with a single incomprehensible pile of regex syntax.
Parsing HTML with a shell pipeline isn't the best idea ever, but you can do it if the HTML is known to come in a very specific format. If there is going to be variation, then you'd be better off with a real HTML parser in Perl, Ruby, Python, or even C.
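Along those lines, here is a hedged sketch that stays in the shell but drives a real parser, assuming xmllint from libxml2 is installed (GNU sed is assumed for the \n in the replacement, and exact whitespace may differ slightly between xmllint versions):
xmllint --html --xpath '//tr[position()>1]' table.html 2>/dev/null |
  sed -e 's#</tr>#\n#g' -e 's/<[^>]*>/ /g' -e 's/  */ /g' -e 's/^ *//' -e '/^$/d'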
A solution based on multi-platform web-scraping CLI xidel and XPath:
Tip of the hat to Reino for providing the simpler XPath equivalent to the original XQuery solution.[1]
xidel -s -e '//tr[position() > 1]/join(td)' file
With the sample input, this yields:
SAVE_DOCUMENT OK 0.406 s
GET_DOCUMENT OK 0.332 s
DVK_SEND OK 0.001 s
DVK_RECEIVE OK 0.001 s
GET_USER_INFO OK 0.143 s
NOTIFICATIONS OK 0.001 s
ERROR_LOG OK 0.001 s
SUMMARY_STATUS OK 0.888 s
Explanation:
//tr[position() > 1] matches the tr elements starting with the 2nd one (so as to skip the header row), and join(td) joins the values of the matching elements' child td elements with an implied single space as the separator.
-s makes xidel silent (suppresses output of status information).
While html2text is convenient for display of the extracted data, providing machine-parseable output is non-trivial, unfortunately:
html2text file | awk -F' *\\|' 'NR>2 {gsub(/^\||.\b/, ""); $1=$1; print}'
The Awk command removes the hidden \b-based (backspace-based) sequences that html2text outputs by default, and parses the lines into fields by |, and then outputs them with a space as the separator (a space is Awk's default output field separator; to change it to a tab, for instance, use -v OFS='\t').
Note: Use of -nobs to suppress backspace sequences at the source is not an option, because you then won't be able to distinguish between the hidden-by-default _ instances used for padding and actual _ characters in the data.
Note: Given that html2text seemingly invariably uses | as the column separator, the above will only work robustly if there are no | instances in the data being extracted.
[1] xidel -s --xquery 'for $tr in //tr[position()>1] return join($tr/td, " ")' file
You can parse the file using the ex editor (part of Vim) by removing the HTML tags, e.g.:
$ ex -s +'%s/<[^>]\+>/ /g' +'v/0/d' +'wq! /dev/stdout' table.html
SAVE_DOCUMENT OK 0.406 s
GET_DOCUMENT OK 0.332 s
DVK_SEND OK 0.001 s
DVK_RECEIVE OK 0.001 s
GET_USER_INFO OK 0.143 s
NOTIFICATIONS OK 0.001 s
ERROR_LOG OK 0.001 s
SUMMARY_STATUS OK 0.888 s
Here is a shorter version that prints the whole file without HTML tags:
$ ex +'%s/<[^>]\+>/ /g|%p' -scq! table.html
Explanation:
%s/<[^>]\+>/ /g - Substitutes every HTML tag with a space.
v/0/d - Deletes all lines that do not contain a 0.
wq! /dev/stdout - Writes the buffer to standard output and quits the editor.
For the sake of completeness, pandoc does a good job when you have extracted the HTML table. For example,
pandoc --from html --to plain table.txt
---------------- -------- --------------
Component Status Time / Error
SAVE_DOCUMENT OK 0.406 s
GET_DOCUMENT OK 0.332 s
DVK_SEND OK 0.001 s
DVK_RECEIVE OK 0.001 s
GET_USER_INFO OK 0.143 s
NOTIFICATIONS OK 0.001 s
ERROR_LOG OK 0.001 s
SUMMARY_STATUS OK 0.888 s
---------------- -------- --------------

Can aspell output line number and not offset in pipe mode?

Can aspell output the line number and not the offset in pipe mode for HTML and XML files? I can't read the file line by line, because in that case aspell can't identify a closing tag if the tag is situated on the next line.
This will output all occurrences of misspelt words with line numbers:
# Get aspell output...
<my_document.txt aspell pipe list -d en_GB --personal=./aspell.ignore.txt |
# Process the aspell output...
grep '[a-zA-Z]\+ [0-9]\+ [0-9]\+' -oh | \
grep '[a-zA-Z]\+' -o | \
while read word; do grep -on "\<$word\>" my_document.txt; done
Where:
my_document.txt is your original document
en_GB is your primary dictionary choice (e.g. try en_US)
aspell.ignore.txt is an aspell personal dictionary (example below)
aspell_output.txt is the output of aspell in pipe mode (ispell style)
result.txt is a final results file
aspell.ignore.txt example:
personal_ws-1.1 en 500
foo
bar
example results.txt output (for an en_GB dictionary):
238:color
302:writeable
355:backends
433:dataonly
You can also print the whole line by changing the last grep -on into grep -n.
This is just an idea, I haven't really tried it yet (I'm on a Windows machine :( ). But maybe you could pipe the HTML file through head (with a byte limit) and count the newlines using grep to find your line number. It's neither efficient nor pretty, but it might just work.
cat icantspell.html | head -c <offset from aspell> | egrep -Uc "$"
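Fleshing that idea out a little (still untested, and assuming the number reported by aspell really is a byte offset into the file): counting the lines in the first offset bytes gives the line number directly, because grep -c '^' also counts a final partial line.
offset=1234   # hypothetical value taken from aspell's output
line=$(head -c "$offset" icantspell.html | grep -c '^')
echo "the misspelt word is on line $line"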
I use the following script to perform spell-checking and to work around the awkward output of aspell -a / ispell. At the same time, the script also works around the problem that ordinals like 2nd aren't recognized by aspell, by simply ignoring everything that aspell reports which is not a word of its own.
#!/bin/bash
set +o pipefail
if [ -t 1 ] ; then
    color="--color=always"
fi
! for file in "$@" ; do
    <"$file" aspell pipe list -p ./dict --mode=html |
        grep '[[:alpha:]]\+ [0-9]\+ [0-9]\+' -oh |
        grep '[[:alpha:]]\+' -o |
        while read word ; do
            grep $color -n "\<$word\>" "$file"
        done
done | grep .
You even get colored output if the stdout of the script is a terminal, and you get an exit status of 1 if the script found spelling mistakes; otherwise the exit status is 0.
Also, the script protects itself against pipefail, which is a somewhat popular option to set (e.g. in a Makefile) but doesn't work for this script. Last but not least, the script explicitly uses [[:alpha:]] instead of [a-zA-Z], which is less confusing when it also matches non-ASCII characters like the German äöüÄÖÜß. [a-zA-Z] can match those too, but that comes as something of a surprise.
aspell pipe / aspell -a / ispell output one empty line for each input line (after reporting the errors of the line).
Demonstration printing the line number with awk:
$ aspell pipe < testFile.txt |
awk '/^$/ { countedLine=countedLine+1; print "#L=" countedLine; next; } //'
produces this output:
#(#) International Ispell Version 3.1.20 (but really Aspell 0.60.7-20110707)
& iinternational 7 0: international, Internationale, internationally, internationals, intentional, international's, Internationale's
#L=1
*
*
*
& reelly 22 11: Reilly, really, reel, rely, rally, relay, resell, retell, Riley, rel, regally, Riel, freely, real, rill, roll, reels, reply, Greeley, cruelly, reel's, Reilly's
#L=2
*
#L=3
*
*
& sometypo 18 8: some typo, some-typo, setup, sometime, someday, smote, meetup, smarty, stupor, Smetana, somatic, symmetry, mistype, smutty, smite, Sumter, smut, steppe
#L=4
with testFile.txt
iinternational
I say this reelly.
hello
here is sometypo.
(Still not as nice as hunspell -u (https://stackoverflow.com/a/10778071/4124767), but hunspell lacks some command-line options I like.)
For others using aspell with one of the filter modes (tex, html, etc), here's a way to only print line numbers for misspelled words in the filtered text. So for example, it won't print misspellings in the comments.
ASPELL_ARGS="--mode=html --personal=./.aspell.en.pws"
for file in "$@"; do
    for word in $(aspell $ASPELL_ARGS list < "$file" | sort -u); do
        grep -no "\<$word\>" <(aspell $ASPELL_ARGS filter < "$file")
    done | sort -n
done
This works because aspell filter does not delete empty lines. I realize this isn't using aspell pipe as requested by OP, but it's in the same spirit of making aspell print line numbers.
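As a usage sketch (with hypothetical file names, and assuming the loop above has been saved as a script called aspell-lines.sh):
./aspell-lines.sh chapter1.html chapter2.html
# prints lines such as
# 12:recieve
# i.e. the line number in the filtered text, then the misspelt word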

Split function not working in UNIX

I'm trying to run a split on a file where the filename has spaces in it.
I can't seem to get it to work. So I have the following
SOURCE_FILE="test file.txt"
split -l 100 $SOURCE_FILE
Now I've tried enclosing the $SOURCE_FILE in " with no luck:
split -l 100 "\""$SOURCE_FILE"\""
or even
split -l 100 '"'$SOURCE_FILE'"'
I'm still getting:
usage: split [-l line_count] [-a suffix_length] [file [name]]
or: split -b number[k|m] [-a suffix_length] [file [name]]
You're trying too hard! A single set of double quotes will suffice:
split -l 100 "$SOURCE_FILE"
You want the arguments to split to look like this:
-l
100
test file.txt
The commands you were trying both yield these arguments:
-l
100
"test
file.txt"
As in, they are equivalent to this incorrect command:
split -l 100 '"test' 'file.txt"'
Or you could insert a backslash to escape the embedded space:
SOURCE_FILE=test\ file.txt
split -l 100 "$SOURCE_FILE"
I assume you tried just "$SOURCE_FILE" without the fancy escaping tricks?
I think I would try cat-ing the file into split; maybe split just has issues with files with spaces in their name, or maybe it is really pissed off about something other than the space.
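For what it's worth, a minimal sketch of that cat idea: split reads standard input when the file operand is - (or omitted entirely), so the awkward filename never reaches it as an operand.
cat "test file.txt" | split -l 100 -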