How does this shell pipe magic (... | tee >(tail -c1 >$PULSE) | bzip2 | ...) work? - mysql

Here is the original source code (the relevant 30 lines of bash are highlighted).
Here it is simplified (s3 is a binary which streams to object storage). The dots (...) are options not posted here.
PULSE=$(mktemp -t shield-pipe.XXXXX)
trap "rm -f ${PULSE}" QUIT TERM INT
set -o pipefail
mysqldump ... | tee >(tail -c1 >$PULSE) | bzip2 | s3 stream ...
How does that work exactly? Can you explain how these redirections and pipes work? How do I debug the error mysqldump: Got errno 32 on write? When invoked manually, mysqldump never fails with an error (it only fails inside this pipeline).

The tricky part is that:
tee writes to standard output as well as a file
>( cmd ) creates a writeable process substitution (a command that mimics the behaviour of a writeable file)
This is used to effectively pipe the output of mysqldump into two other commands: tail -c1 to print the last byte to a file and bzip2 to compress the stream.
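A minimal stand-in you can run to see the mechanism, with wc -c playing the role of bzip2 | s3 stream and /tmp/last_byte playing the role of $PULSE:
printf 'hello world\n' | tee >(tail -c1 >/tmp/last_byte) | wc -c
# tee sent a full copy into the process substitution, so the file now holds
# the last byte of the stream (the trailing newline here). Note that the >( )
# part runs asynchronously, so give it a moment before inspecting the file.
od -c /tmp/last_byte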
As Inian pointed out in the comments, the error 32 comes from a broken pipe. I guess that this comes from s3 stream terminating (maybe a timeout?) which in turn causes the preceding commands in the pipeline to fail.
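To pin down which stage of the real pipeline fails, check PIPESTATUS right after it runs. Here is a toy pipeline that reproduces the broken-pipe situation (the middle stage exits early, so the first stage gets SIGPIPE, just as mysqldump does when s3 stream goes away):
set -o pipefail
yes | head -n 1 | cat >/dev/null
echo "${PIPESTATUS[@]}"    # typically prints "141 0 0": the producer died of SIGPIPE (128+13)
Applied to the backup pipeline, a non-zero status in the last slot (the s3 stream stage) while mysqldump reports errno 32 would support the timeout theory.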

Related

Why is JSON from aws rds run in Docker "malformed" according to other tools?

To my eyes the following JSON looks valid.
{
"DescribeDBLogFiles": [
{
"LogFileName": "error/postgresql.log.2022-09-14-00",
"LastWritten": 1663199972348,
"Size": 3032193
}
]
}
A) But jq, json_pp, and Python's json.tool module deem it invalid:
# jq 1.6
> echo "$logfiles" | jq
parse error: Invalid numeric literal at line 1, column 2
# json_pp 4.02
> echo "$logfiles" | json_pp
malformed JSON string, neither array, object, number, string or atom,
at character offset 0 (before "\x{1b}[?1h\x{1b}=\r{...") at /usr/bin/json_pp line 51
> python3 -m json.tool <<< "$logfiles"
Expecting value: line 1 column 1 (char 0)
B) On the other hand, if the above JSON is copied and pasted into an online validator, both validators deem it valid.
As hinted by json_pp's error above, hexdump <<< "$logfiles" indeed shows additional, surrounding characters. Here's the prefix: 5b1b 313f 1b68 0d3d 1b7b ...., where 7b is {.
The JSON is output to a logfiles variable by this command:
logfiles=$(aws rds describe-db-log-files \
--db-instance-identifier somedb \
--filename-contains 2022-09-14)
# where `aws` is
alias aws='docker run --rm -it -v ~/.aws:/root/.aws amazon/aws-cli:2.7.31'
> bash --version
GNU bash, version 5.0.17(1)-release (x86_64-pc-linux-gnu)
Have perused this GitHub issue, yet can't figure out the cause. I suspect that double quotes get mangled somehow when using echo - some reported that printf "worked" for them.
The docker run --rm -it -v command used to produce the JSON added some additional unprintable characters to the start of the JSON data. That makes the resulting $logfiles content invalid.
The -t option allocates a tty and the -i option creates an interactive session. In this case the -t allows the shell to read login scripts (e.g. .bashrc). Something in your startup scripts is outputting ANSI escape codes. Often this is done to clear the screen, set other things up for the interactive shell, or make the output more visually appealing by colorizing portions of the data.
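A possible fix is simply to drop -t from the alias (keeping everything else from the question unchanged), so no pseudo-TTY is allocated and no escape sequences can reach the captured output; cat -v makes the stray bytes visible if you want to confirm that this is what is happening:
alias aws='docker run --rm -i -v ~/.aws:/root/.aws amazon/aws-cli:2.7.31'
# to inspect what a TTY-enabled run injects:
echo "$logfiles" | cat -v | head -c 80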

Storing aws ssm parameter as variable in bash script [duplicate]

I have a pretty simple script that is something like the following:
#!/bin/bash
VAR1="$1"
MOREF='sudo run command against $VAR1 | grep name | cut -c7-'
echo $MOREF
When I run this script from the command line and pass it the arguments, I am not getting any output. However, when I run the commands contained within the $MOREF variable, I am able to get output.
How can one take the results of a command that needs to be run within a script, save it to a variable, and then output that variable on the screen?
In addition to backticks `command`, command substitution can be done with $(command) or "$(command)", which I find easier to read, and allows for nesting.
OUTPUT=$(ls -1)
echo "${OUTPUT}"
MULTILINE=$(ls \
-1)
echo "${MULTILINE}"
Quoting (") does matter to preserve multi-line variable values; it is optional on the right-hand side of an assignment, as word splitting is not performed, so OUTPUT=$(ls -1) would work fine.
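A small illustration of that quoting rule:
OUTPUT=$(printf 'first\nsecond\n')
echo $OUTPUT      # unquoted: word splitting collapses the newline: first second
echo "$OUTPUT"    # quoted: both lines are printed as captured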
$(sudo run command)
If you're going to use that kind of quote, you need `, not '. This character is called a backtick (or grave accent):
#!/bin/bash
VAR1="$1"
VAR2="$2"
MOREF=`sudo run command against "$VAR1" | grep name | cut -c7-`
echo "$MOREF"
Some Bash tricks I use to set variables from commands
Sorry, this is a long answer, but as bash is a shell, its main goal is to run other unix commands and react to their result codes and/or output (commands are often piped, filtered, and so on).
Storing command output in variables is something basic and fundamental.
Therefore, depending on
compatibility (posix)
kind of output (filter(s))
number of variables to set (split or interpret)
execution time (monitoring)
error trapping
repeatability of request (see long running background process, further)
interactivity (considering user input while reading from another input file descriptor)
did I miss something?
First simple, old (obsolete), and compatible way
myPi=`echo '4*a(1)' | bc -l`
echo $myPi
3.14159265358979323844
Compatible, second way
As nesting can become heavy with backticks, the $( ) parenthesis syntax was introduced for this:
myPi=$(bc -l <<<'4*a(1)')
Using backticks in scripts is to be avoided today.
Nested sample:
SysStarted=$(date -d "$(ps ho lstart 1)" +%s)
echo $SysStarted
1480656334
bash features
Reading more than one variable (with Bashisms)
df -k /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/dm-0 999320 529020 401488 57% /
If I just want the used value:
array=($(df -k /))
you get an array variable:
declare -p array
declare -a array='([0]="Filesystem" [1]="1K-blocks" [2]="Used" [3]="Available" [4]="Use%" [5]="Mounted" [6]="on" [7]="/dev/dm-0" [8]="999320" [9]="529020" [10]="401488" [11]="57%" [12]="/")'
Then:
echo ${array[9]}
529020
But I often use this:
{ read -r _;read -r filesystem size using avail prct mountpoint ; } < <(df -k /)
echo $using
529020
(The first read _ just drops the header line.) Here, in only one command, you populate 6 different variables (shown in alphabetical order):
declare -p avail filesystem mountpoint prct size using
declare -- avail="401488"
declare -- filesystem="/dev/dm-0"
declare -- mountpoint="/"
declare -- prct="57%"
declare -- size="999320"
declare -- using="529020"
Or
{ read -a head;varnames=(${head[@]//[K1% -]});
read ${varnames[@],,} ; } < <(LANG=C df -k /)
Then:
declare -p varnames ${varnames[@],,}
declare -a varnames=([0]="Filesystem" [1]="blocks" [2]="Used" [3]="Available" [4]="Use" [5]="Mounted" [6]="on")
declare -- filesystem="/dev/dm-0"
declare -- blocks="999320"
declare -- used="529020"
declare -- available="401488"
declare -- use="57%"
declare -- mounted="/"
declare -- on=""
Or even:
{ read _ ; read filesystem dsk[{6,2,9}] prct mountpoint ; } < <(df -k /)
declare -p mountpoint dsk
declare -- mountpoint="/"
declare -a dsk=([2]="529020" [6]="999320" [9]="401488")
(Note that Used and Blocks are switched there: read ... dsk[6] dsk[2] dsk[9] ...)
... will work with associative arrays too: read _ disk[total] disk[used] ...
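A sketch of that associative-array variant (the key names total, used and avail are my own choice); with the df output shown earlier you would get something like:
declare -A disk                  # the associative array must exist before read fills it
{ read -r _ ; read -r _ disk[total] disk[used] disk[avail] _ ; } < <(df -k /)
declare -p disk
declare -A disk=([avail]="401488" [used]="529020" [total]="999320" )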
Other related samples: parsing xrandr output (see the end of Firefox tab by bash in a size of x% of display size?, or Parsing xrandr output at AskUbuntu.com).
Dedicated fd, using an unnamed fifo:
There is an elegant way! In this sample, I will read the /etc/passwd file:
users=()
while IFS=: read -u $list user pass uid gid name home bin ;do
((uid>=500)) &&
printf -v users[uid] "%11d %7d %-20s %s\n" $uid $gid $user $home
done {list}</etc/passwd
Used this way (... read -u $list; ... {list}<inputfile), this leaves STDIN free for other purposes, like user interaction (see the sketch after the sample output below).
Then
echo -n "${users[@]}"
1000 1000 user /home/user
...
65534 65534 nobody /nonexistent
and
echo ${!users[@]}
1000 ... 65534
echo -n "${users[1000]}"
1000 1000 user /home/user
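To illustrate the point about STDIN staying free: the file is read on its own descriptor, so a plain read -p can still prompt the user inside the loop (the y/n prompt is only for this demo):
while IFS=: read -u $list user _ uid _ ;do
    ((uid>=1000)) || continue
    read -rp "Show entry for $user (y/n)? " answer   # reads from the terminal, not from $list
    [ "$answer" = "y" ] && grep "^$user:" /etc/passwd
done {list}</etc/passwd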
This could be used with static files, or even with /dev/tcp/xx.xx.xx.xx/yyy (where x is an IP address or hostname and y a port number), or with the output of a command:
{
read -u $list -a head # read header in array `head`
varnames=(${head[@]//[K1% -]}) # drop illegal chars for variable names
while read -u $list ${varnames[@],,} ;do
((pct=available*100/(available+used),pct<10)) &&
printf "WARN: FS: %-20s on %-14s %3d <10 (Total: %11u, Use: %7s)\n" \
"${filesystem#*/mapper/}" "$mounted" $pct $blocks "$use"
done
} {list}< <(LANG=C df -k)
And of course with inline documents:
while IFS=\; read -u $list -a myvar ;do
echo ${myvar[2]}
done {list}<<"eof"
foo;bar;baz
alice;bob;charlie
$cherry;$strawberry;$memberberries
eof
Practical sample parsing CSV files:
As this answer is already long enough, for this paragraph I will just refer you to this answer to How to parse a CSV file in Bash?, where I read a file by using an unnamed fifo, with syntax like:
exec {FD}<"$file" # open unnamed fifo for read
IFS=';' read -ru $FD -a headline
while IFS=';' read -ru $FD -a row ;do ...
... but now using the bash loadable CSV module.
On my website, you may find the same script, reading the CSV as an inline document.
Sample function for populating some variables:
#!/bin/bash
declare free=0 total=0 used=0 mpnt='??'
getDiskStat() {
{
read _
read _ total used free _ mpnt
} < <(
df -k ${1:-/}
)
}
getDiskStat $1
echo "$mpnt: Tot:$total, used: $used, free: $free."
Note: the declare line is not required; it is just there for readability.
About sudo cmd | grep ... | cut ...
shell=$(cat /etc/passwd | grep $USER | cut -d : -f 7)
echo $shell
/bin/bash
(Please avoid the useless use of cat!) This is just one fork less:
shell=$(grep $USER </etc/passwd | cut -d : -f 7)
Every pipe (|) implies a fork: another process has to be run, with disk access, library calls, and so on.
So using sed, for example, limits the subprocesses to only one fork:
shell=$(sed </etc/passwd "s/^$USER:.*://p;d")
echo $shell
And with Bashisms:
But for many actions, mostly on small files, Bash could do the job itself:
while IFS=: read -a line ; do
[ "$line" = "$USER" ] && shell=${line[6]}
done </etc/passwd
echo $shell
/bin/bash
or
while IFS=: read loginname encpass uid gid fullname home shell;do
[ "$loginname" = "$USER" ] && break
done </etc/passwd
echo $shell $loginname ...
Going further about variable splitting...
Have a look at my answer to How do I split a string on a delimiter in Bash?
Alternative: reducing forks by using backgrounded long-running tasks
In order to prevent multiple forks like
myPi=$(bc -l <<<'4*a(1)')
myRay=12
myCirc=$(bc -l <<<" 2 * $myPi * $myRay ")
or
myStarted=$(date -d "$(ps ho lstart 1)" +%s)
mySessStart=$(date -d "$(ps ho lstart $$)" +%s)
This works fine, but running many forks is heavy and slow.
And commands like date and bc can perform many operations, line by line!
See:
bc -l <<<$'3*4\n5*6'
12
30
date -f - +%s < <(ps ho lstart 1 $$)
1516030449
1517853288
So we could use a long running background process to make many jobs, without having to initiate a new fork for each request.
You could have a look at how reducing forks makes Mandelbrot bash improve from more than eight hours to less than 5 seconds.
Under bash, there is a built-in feature for this: coproc:
coproc bc -l
echo 4*3 >&${COPROC[1]}
read -u $COPROC answer
echo $answer
12
echo >&${COPROC[1]} 'pi=4*a(1)'
ray=42.0
printf >&${COPROC[1]} '2*pi*%s\n' $ray
read -u $COPROC answer
echo $answer
263.89378290154263202896
printf >&${COPROC[1]} 'pi*%s^2\n' $ray
read -u $COPROC answer
echo $answer
5541.76944093239527260816
As bc is already running in the background and its I/O descriptors are ready too, there is no delay: nothing to load, open, or close before or after an operation, only the operation itself! This becomes a lot quicker than having to fork bc for each operation!
Side effect: while bc stays running, it holds all its registers, so some variables or functions can be defined at an initialisation step, as the first writes to ${COPROC[1]}, just after starting the task (via coproc).
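A sketch of that initialisation idea (the circ function is my own example): definitions written to ${COPROC[1]} right after starting bc remain available for every later request.
coproc bc -l
printf '%s\n' 'pi=4*a(1)' 'define circ(r) { return (2*pi*r); }' >&${COPROC[1]}
echo 'circ(42.0)' >&${COPROC[1]}
read -u $COPROC answer
echo $answer
263.89378290154263202896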
Into a function newConnector
You may find my newConnector function on GitHub.com or on my own site (note: on GitHub there are two files; on my site, function and demo are bundled into one file which can be sourced for use or just run as a demo).
Sample:
source shell_connector.sh
tty
/dev/pts/20
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
29019 pts/20 Ss 0:00 bash
30745 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
newConnector /usr/bin/bc "-l" '3*4' 12
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
29019 pts/20 Ss 0:00 bash
30944 pts/20 S 0:00 \_ /usr/bin/bc -l
30952 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
declare -p PI
bash: declare: PI: not found
myBc '4*a(1)' PI
declare -p PI
declare -- PI="3.14159265358979323844"
The function myBc lets you use the background task with simple syntax.
Then for date:
newConnector /bin/date '-f - +%s' @0 0
myDate '2000-01-01'
946681200
myDate "$(ps ho lstart 1)" boottime
myDate now now
read utm idl </proc/uptime
myBc "$now-$boottime" uptime
printf "%s\n" ${utm%%.*} $uptime
42134906
42134906
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
29019 pts/20 Ss 0:00 bash
30944 pts/20 S 0:00 \_ /usr/bin/bc -l
32615 pts/20 S 0:00 \_ /bin/date -f - +%s
3162 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
From there, if you want to end one of the background processes, you just have to close its fd:
eval "exec $DATEOUT>&-"
eval "exec $DATEIN>&-"
ps --tty pts/20 fw
PID TTY STAT TIME COMMAND
4936 pts/20 Ss 0:00 bash
5256 pts/20 S 0:00 \_ /usr/bin/bc -l
6358 pts/20 R+ 0:00 \_ ps --tty pts/20 fw
which is not needed, because all fds are closed when the main process finishes.
As they have already indicated to you, you should use `backticks`.
The proposed alternative $(command) works as well, and it is also easier to read, but note that it is valid only with Bash or KornShell (and shells derived from those),
so if your scripts have to be really portable on various Unix systems, you should prefer the old backticks notation.
I know three ways to do it:
Functions are suitable for such tasks:
func (){
ls -l
}
Invoke it by saying func.
Also another suitable solution could be eval:
var="ls -l"
eval $var
The third one is using variables directly:
var=$(ls -l)
OR
var=`ls -l`
You can get the output of the third solution in a good way:
echo "$var"
And also in a nasty way:
echo $var
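To see why the unquoted form is nasty, note that word splitting and globbing rewrite the value:
var=$(printf 'one   two\n*\n')
echo $var     # runs of spaces collapse, and the * may glob-expand to filenames
echo "$var"   # the captured text is printed exactly, newline included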
Just to be different:
MOREF=$(sudo run command against $VAR1 | grep name | cut -c7-)
When setting a variable make sure you have no spaces before and/or after the = sign. I literally spent an hour trying to figure this out, trying all kinds of solutions! This is not cool.
Correct:
WTFF=`echo "stuff"`
echo "Example: $WTFF"
This will fail with the error "stuff: not found" or similar:
WTFF= `echo "stuff"`
echo "Example: $WTFF"
If you want to do it with a multiline command or multiple commands, then you can do this:
output=$( bash <<EOF
# Multiline/multiple command/s
EOF
)
Or:
output=$(
# Multiline/multiple command/s
)
Example:
#!/bin/bash
output="$( bash <<EOF
echo first
echo second
echo third
EOF
)"
echo "$output"
Output:
first
second
third
Using a heredoc, you can simplify things pretty easily by breaking your long single-line command down into a multiline one. Another example:
output="$( ssh -p $port $user@$domain <<EOF
# Breakdown your long ssh command into multiline here.
EOF
)"
You need to use either
$(command-here)
or
`command-here`
Example
#!/bin/bash
VAR1="$1"
VAR2="$2"
MOREF="$(sudo run command against "$VAR1" | grep name | cut -c7-)"
echo "$MOREF"
If the command that you are trying to execute fails, it will write its output to the error stream, which is then printed to the console.
To avoid it, you must redirect the error stream:
result=$(ls -l something_that_does_not_exist 2>&1)
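The exit status of the substituted command is also still available right afterwards, which helps distinguish real output from a captured error message:
result=$(ls -l something_that_does_not_exist 2>&1)
status=$?
echo "status=$status"   # non-zero (2 with GNU ls) when the file is missing
echo "$result"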
This is another way and is good to use with some text editors that are unable to correctly highlight every intricate code you create:
read -r -d '' str < <(cat somefile.txt)
echo "${#str}"
echo "$str"
You can use backticks (also known as grave accents) or $().
Like:
OUTPUT=$(x+2);
OUTPUT=`x+2`;
Both have the same effect, but OUTPUT=$(x+2) is more readable and is the more modern form.
Here are two more ways:
Please keep in mind that spacing is very important in Bash. So, if you want your command to run, use it as is, without introducing any more spaces.
The following assigns harshil to L and then prints it
L=$"harshil"
echo "$L"
The following assigns the output of the command tr to L2. tr operates on another variable, L1.
L2=$(echo "$L1" | tr '[:upper:]' '[:lower:]')
Mac/OSX nowadays comes with an old Bash version, i.e. GNU bash, version 3.2.57(1)-release (arm64-apple-darwin21). In this case, one can use:
new_variable="$(some_command)"
A concrete example:
newvar="$(echo $var | tr -d '123')"
Note the (), instead of the usual {} in Bash 4.
Some may find this useful.
Integer values in variable substitution, where the trick is using the $(( )) double parentheses:
N=3
M=3
COUNT=$N-1
ARR[0]=3
ARR[1]=2
ARR[2]=4
ARR[3]=1
while (( COUNT < ${#ARR[@]} ))
do
ARR[$COUNT]=$((ARR[COUNT]*M))
(( COUNT=$COUNT+$N ))
done
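For reference, with the values above COUNT holds the string 3-1, which (( )) evaluates to 2, so only ARR[2] is multiplied by M before COUNT jumps to 5 and the loop ends:
declare -p ARR
declare -a ARR=([0]="3" [1]="2" [2]="12" [3]="1")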

Debug a large json file that is in one line

I have a 2 MB JSON file that is all in one line, and now I get an error using jq:
$ jq .<nodes.json
parse error: Invalid literal at line 1, column 377140
How do I debug this on the console? To look at the mentioned column, I tried this:
head -c 377139 nodes.json|tail -c 1000
But I cannot find any error with a wrong t there, so it seems it is not the correct way to reach the position in the file.
How can I debug such a one-liner?
Cut the file into more lines with
cat nodes.json|cut -f 1- -d} --output-delimiter=$'}\n'>/tmp/a.json
Then analyse /tmp/a.json with jq again, and you get an error with a line number:
parse error: Invalid literal at line 5995, column 47
Use less -N /tmp/a.json to find that line.
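Once you have the line and column, you can zoom in on the offending spot directly (5995 and 47 are just the numbers from the example above):
sed -n '5990,6000p' /tmp/a.json                           # a few lines of context
awk 'NR==5995 { print substr($0, 37, 20) }' /tmp/a.json   # about 10 characters either side of column 47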
I see you are on a shell prompt. So you could try perl, because your operating system has it pre-installed, presumably.
cat nodes.json | json_xs -f json -t json-pretty
This tells the json_xs command line program to parse the file and prettify it.
If you don't have json_xs installed, you could try json_pp (pp is for pure-perl).
If you have neither, you must install the JSON::XS perl module with this command:
sudo cpanm JSON::XS
[sudo] password for knb:
--> Working on JSON::XS
Fetching http://www.cpan.org/authors/id/M/ML/MLEHMANN/JSON-XS-3.01.tar.gz ... OK
Configuring JSON-XS-3.01 ... OK
Building and testing JSON-XS-3.01 ... OK
Successfully installed JSON-XS-3.01 (upgraded from 2.34)
1 distribution installed
This installs JSON::XS and a few helper scripts, among them json_xs and json_pp.
Then you can run this simple one-liner:
cat dat.json | json_xs -f json -t json-pretty
After misplacing a parenthesis to force a nesting error somewhere in the valid JSON file dat.json, I got this:
cat dat.json | json_xs -f json -t json-pretty
'"' expected, at character offset 1331 (before "{"A_DESC":"density i...") at /usr/local/bin/json_xs line 181, <STDIN> line 1.
Maybe this is more informative than the jq output.

Root access required for CUDA?

I am using a GeForce 8400M GS on Ubuntu 10.04 and I am learning CUDA programming. I am writing and running a few basic programs. I was using cudaMalloc, and it kept giving me an error until I ran the code as root. However, I had to run the code as root only once. After that, even if I run the code as a normal user, I do not get an error on cudaMalloc. What's going on?
This is probably due to your GPU not being properly initialized at boot. I've come across this problem when using Ubuntu Server and other installations where an X server isn't being started automatically. Try the following to fix it:
Create a directory for a script to initialize your GPUs. I usually use /root/bin. In this directory, create a file called cudainit.sh with the following code in it (this script came from the Nvidia forums).
#!/bin/bash
/sbin/modprobe nvidia
if [ "$?" -eq 0 ]; then
# Count the number of NVIDIA controllers found.
N3D=`/usr/bin/lspci | grep -i NVIDIA | grep "3D controller" | wc -l`
NVGA=`/usr/bin/lspci | grep -i NVIDIA | grep "VGA compatible controller" | wc -l`
N=`expr $N3D + $NVGA - 1`
for i in `seq 0 $N`; do
mknod -m 666 /dev/nvidia$i c 195 $i;
done
mknod -m 666 /dev/nvidiactl c 195 255
else
exit 1
fi
Now we need to make this script run automatically at boot. Edit /etc/rc.local to look like the following.
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
#
# Init CUDA for all users
#
/root/bin/cudainit.sh
exit 0
Reboot your computer and try to run your CUDA program as a regular user. If I'm right about what the problem is, then it should be fixed.
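A quick way to verify that the script did its job after the reboot is to check that the device nodes exist with the permissions and major number used in the mknod calls above; the output should look roughly like this:
ls -l /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 ... /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 ... /dev/nvidiactl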
To work with Ubuntu 14.04, I followed https://devtalk.nvidia.com/default/topic/699610/linux/334-21-driver-returns-999-on-cuinit-cuda-/ to add nvidia-uvm to /etc/modules, and to add a line to a custom udev rule. Create /etc/udev/rules.d/70-nvidia-uvm.rules with this line:
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/bin/mknod -m 666 /dev/nvidia-uvm c $(grep nvidia-uvm /proc/devices | cut -d \ -f 1) 0;'"
I don't understand why sudo modprobe nvidia-uvm works to create a proper /dev/nvidia-uvm (as does sudo cuda_program) but the /etc/modules listing requires the udev rule.

How To Capture network packets to MySQL

I'm going to design a network analyzer for WiFi (802.11).
Currently I use tshark to capture and parse the WiFi frames and then pipe the output to a Perl script that stores the parsed information in a MySQL database.
I just found out that I miss a lot of frames in this process. I checked, and the frames seem to be lost during the pipe (when the output is delivered to Perl to get stored in MySQL).
Here is how it goes
(Tshark) -------frames are lost----> (Perl) --------> (MySQL)
This is how I pipe the output of tshark to the script:
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length | perl tshark-sql-capture.pl
This is a simple template of the Perl script I use (tshark-sql-capture.pl):
# preparing the MySQL
use DBI;                       # module needed for DBI->connect below
my $dns = "DBI:mysql:capture;localhost";
my $dbh = DBI->connect($dns,user,pass);
my $db = "captured";
while (<STDIN>) {
chomp($data = <STDIN>);
($time, $frame_len, $cap_len, $radiotap_len) = split " ", $data;
my $sth = $dbh-> prepare("INSERT INTO $db VALUES (str_to_date('$time','%M %d, %Y %H:%i:%s.%f'), '$frame_len', '$cap_len', '$radiotap_len'\n)" );
$sth->execute;
}
#Terminate MySQL
$dbh->disconnect;
Any idea which can help to make the performance better is appreciated, or maybe there is an alternative mechanism which can do better.
Right now my performance is about 50%, meaning I can store in MySQL around half of the packets I've captured.
Things written to a pipe don't get lost. What's probably really going on is that tshark tries to write to the pipe, but Perl+MySQL is too slow to process the input, so the pipe fills up; the write would block, so tshark just drops the packets.
The bottleneck could be either MySQL or Perl itself, but it is probably the DB. Check CPU usage and measure the insert rate. Then pick a faster DB or write to multiple DBs. You can also try batch inserts and increasing the size of the pipe buffer.
Update
while (<STDIN>)
This reads a line into $_, which you then ignore; the chomp($data = <STDIN>) on the next line reads another line, so every other record is skipped, which matches the roughly 50% you are seeing.
For pipe problems, you can improve packet capture with GULP http://staff.washington.edu/corey/gulp/
From the Man pages:
1) reduce packet loss of a tcpdump packet capture:
(gulp -c works in any pipeline as it does no data interpretation)
tcpdump -i eth1 -w - ... | gulp -c > pcapfile
or if you have more than 2, run tcpdump and gulp on different CPUs
taskset -c 2 tcpdump -i eth1 -w - ... | gulp -c > pcapfile
(gulp uses CPUs #0,1 so use #2 for tcpdump to reduce interference)
You can use a FIFO file, then read the packets and insert them into MySQL using INSERT DELAYED.
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len -e frame.cap_len -e radiotap.length > MYFIFO
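A minimal sketch of that FIFO setup (MYFIFO is just a placeholder path; start the reader first so tshark has somewhere to write):
mkfifo MYFIFO
perl tshark-sql-capture.pl < MYFIFO &    # the consumer blocks until data arrives
sudo tshark -i mon0 -t ad -T fields -e frame.time -e frame.len \
     -e frame.cap_len -e radiotap.length > MYFIFO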