run innobackupex with gzip and pipe display output to file - mysql

How is it possible to run this and output the innobackupex output to a file (but still send output to the display)?
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz
I need to output the innobackupex log, with "... completed OK!" as the last line, to a file. How can I do that?

I've also noticed that it is a bit challenging to save the "OK" output from xtrabackup to the log file, as the Perl script plays with the tty. Here is what worked for me.
If you need to execute innobackupex from the command line, you can do the following (note the 2> goes on innobackupex itself, since that is what writes the log):
nohup innobackupex --user=root --password=pass --databases="db" --stream=tar ./ 2>/path/mybkp.log | gzip -c -1 > /var/backup/backup.tar.gz
If you need to script it and get the OK message, you can do:
/bin/bash -c "innobackupex --user=root --password=pass --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz" 2>/path/mybkp.log
Please note that in the second command, the double quote closes before the 2>
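To see why the placement matters, compare (cmd1, cmd2, out and log are placeholders):
# quoted: bash -c runs the whole pipeline as one unit, so 2>log catches
# the stderr of every command inside it
/bin/bash -c "cmd1 | cmd2 > out" 2>log
# unquoted: 2>log applies only to cmd2, the last command in the pipeline
cmd1 | cmd2 > out 2>log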

Add
2> >(tee file)
to your command, right after the command whose stderr you want to capture.
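For example, to keep the innobackupex log on screen while also saving it (a sketch built from the question's command; /var/backup/innobackupex.log is an assumed path, and process substitution needs bash, not plain sh):
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ 2> >(tee /var/backup/innobackupex.log >&2) | gzip -c -1 > /var/backup/backup.tar.gz
Here tee writes the stderr stream to the log file and passes it back to stderr, so the "completed OK!" line still reaches the display.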

Related

Visually update size of file during mysql dump via tunnel

I have a bash script called copydata which does a MySQL dump of specific tables from our production MySQL server to a local file, and then pushes it into my local MySQL database.
#!/bin/sh
#set up tunnel
ssh -f -i ~/.ssh/ec2-eu-keypair.pem -o CompressionLevel=9 -o ExitOnForwardFailure=yes -L 3307:elr2.our-id.eu-west-1.rds.amazonaws.com:3306 username@example.com
echo "Dumping tables \"$@\" to /tmp/data.sql"
#dump tables to local file
mysqldump -u root -h 127.0.0.1 -pmypass -P 3307 live_db_name --extended-insert --single-transaction --default-character-set=utf8 --skip-set-charset $@ > /tmp/data.sql
pv /tmp/data.sql | mysql -u root local_db_name --default-character-set=utf8 --binary-mode --force
So, it is called like copydata table1 table2
It works, but the mysqldump part can take a very long time, and it would be nice to have some visual feedback on progress. One thing which occurred to me is that I could show the size of /tmp/data.sql while the dump is in progress - if I just keep running the following in a separate tab, for example, I can see it going up at roughly 2 MB per second:
ls -lh /tmp/data.sql
Can I add the above command, or something similar, to the above script so that I can see the file size updating while I'm waiting for the mysqldump line to complete?
Thanks to @YuriLachin in the comments, I did the following:
added a & to the end of the mysqldump line, so it becomes asynchronous, i.e. the script carries on to the next line while the mysqldump continues in the background
added this line, to repeatedly call ls -lh on the local file:
pid=$!; while [ -d "/proc/$pid" ] ; do echo -ne "$(ls -lh /tmp/data.sql)\r"; sleep 1; done
Let's break that down, to aid my own learning as much as anything else:
#get the process id of that last backgrounded task (the mysqldump) so we
#can tell when it's finished running
pid=$!
#while it *is* still running
while [ -d "/proc/$pid" ] ; do
#get the size of the file, with ls, but do it inside an echo command.
#Wrapping it like this lets us use echo's -n option, which means "omit the
#trailing newline", i.e. don't go onto the next line, and -e, which makes
#echo interpret escapes. Then, at the end, print \r, a carriage return,
#meaning 'go back to the start of the current line', so the next line will
#overwrite the first one.
#Now it updates in place rather than spewing out loads of lines.
echo -ne "$(ls -lh /tmp/data.sql)\r"
#then do nothing for 1 second, to avoid wasting cpu time.
sleep 1
done
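Putting it together, the tail end of copydata becomes roughly this (a sketch assembled from the pieces above, using the original script's paths and credentials):
#dump tables to a local file, in the background this time
mysqldump -u root -h 127.0.0.1 -pmypass -P 3307 live_db_name --extended-insert --single-transaction --default-character-set=utf8 --skip-set-charset $@ > /tmp/data.sql &
#watch the file grow, updating in place, until the dump finishes
pid=$!
while [ -d "/proc/$pid" ] ; do
echo -ne "$(ls -lh /tmp/data.sql)\r"
sleep 1
done
#then load it locally, with pv providing a real progress bar
pv /tmp/data.sql | mysql -u root local_db_name --default-character-set=utf8 --binary-mode --force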

xtrabackup can not use tar

I use
innobackupex --user=root --password=root --stream=tar ./ | gzip - > backup.tar.gz
to back up MySQL, but the backup.tar.gz contains only one file, "./backup-my.cnf". What's wrong?
--stream=xbstream is OK. "backup.xbstream" will contain all of the files.
MySQL: 5.5, xtrabackup: 2.2.6
I made a mistake.
The xtrabackup manual says that when extracting the tar stream you must give tar the -i option. I missed the -i, so I got only one file.
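For example (a sketch using the archive name from the question):
# -i tells tar to ignore the zeroed blocks between members of the stream;
# without it, extraction silently stops after the first file
tar -izxf backup.tar.gz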

Linux Bash file use Directory name

What I have is a few script files that are used as crons for different buildings in my company, but I keep having to go into each file and change OAK3 to a different building id, as well as oak3 (lowercase). The files are all located in their respective warehouse's folder, e.g. Desktop/CRON/OAK3. What I would like is for each script to pick up OAK3 and oak3 (lowercase) on its own, instead of my having to edit each file every time we create a new db for a warehouse.
I am new to the Linux world, so I'm not sure if there is a way, and I haven't found anything on Google.
Example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
Desired effect, if possible:
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/${warehouse_id}_count_portal.txt --ignore-lines=1
If I get what you want, which I'm not sure I do, this will help to handle all new databases:
databases=`mysql -B -r -u ${user} --skip-column-names -p${pass} --execute='show databases'`
for db in $databases; do
    ## now loop through the above list of databases
    echo $db # current DB
    mysqldump -u $user --password=$pass $db > "$db.sql" # dump db to file
done
Using a combination of dirname and basename with the Bash special variable $0, you can get all of what you need.
The running script's filename is $0. Meanwhile dirname $0 will give you the directory path of the executing file. But you don't want the full path, just the last part, which basename will provide. realpath is used to expand the directory so that . is not returned.
Getting just the last directory name:
$ ls
tmp.sh # Ok, there's our file
$ dirname tmp.sh
. # The . is current directory
$ dirname $(realpath tmp.sh)
/home/mjb/OAK3 # so we expand it with realpath
$ basename $(dirname $(realpath tmp.sh))
OAK3 # then take only the last one with basename
So here's how it will work for you:
# Get the directory name
warehouse=$(basename "$(dirname "$(realpath "$0")")")
# And lowercase it with `tr` into a new variable
warehouse_lcase=$(echo "$warehouse" | tr '[:upper:]' '[:lower:]')
# Substitute the variables
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${warehouse}/${warehouse_lcase}_count_portal.txt --ignore-lines=1
See also: Can a Bash script tell which directory it's stored in?
There is a lot easier way to figure out the basename of the current working directory: pwd -PL | sed sg.*/ggg
[san@alarmp OAK3]$ pwd; pwd -PL | sed sg.*/ggg
/opt/local/OAK3
OAK3
So, if I understand your requirement correctly, if you don't want to change the script(s) by hand, you can do this while inside that particular directory:
$ cat example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
#
$ this_dir=$(pwd -PL | sed sg.*/ggg)
#
$ sed -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/oak3_count_portal.txt --ignore-lines=1
#
$ sed -e "s/$(echo $this_dir | tr '[:upper:]' '[:lower:]')/\${warehouse_id}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/${warehouse_id}_count_portal.txt --ignore-lines=1
Use the -i option to make the change permanent in the file (without creating a new one), like this:
sed -i -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
Keep -i and -e as separate flags here: glued together as -ie, GNU sed would treat the e as a backup suffix and create a file named example.she.
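If you would rather keep a backup of the original while editing in place, GNU sed accepts a suffix on -i (same substitution as above):
# writes the change into example.sh and keeps the original as example.sh.bak
sed -i.bak -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh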

Tshark - Export packet info from pcap to csv

I am trying to programmatically capture a stream of packets by using Tshark. The simplified terminal command I am using is:
tshark -i 2 -w output.pcap
This is pretty straightforward, but I then need to get a .csv file in order to easily analyze the information captured.
By opening the .pcap file in Wireshark and exporting it in .csv what I get is a file structured as follows:
"No.","Time","Source","Destination","Protocol","Length","Info"
but, again, I need to do this in an automatic way. So I tried using the command:
tshark -r output.pcap -T fields -e frame.number -e ip.src -e ip.dst -e frame.len -e frame.time -e frame.time_relative -E header=y -E separator=, > output.csv
but I cannot find anywhere the name of the "Info" field I get when manually exporting the .csv.
Any ideas? Thanks!
Yes, you can if you use the latest Development Release.
See Wireshark Bug 2892.
Download the Development Release Version 1.9.0.
Use the following command:
$ tshark -i 2 -T fields -e frame.time -e col.Info
Output
Feb 28, 2013 20:58:24.604635000 Who has 10.10.128.203? Tell 10.10.128.1
Feb 28, 2013 20:58:24.678963000 Who has 10.10.128.163? Tell 10.10.128.1
Note: in -e col.Info, use a capital I.
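On recent tshark releases the column fields are exposed as _ws.col.* instead of col.* (check tshark -G fields on your version), so a fuller export matching the manual Wireshark CSV columns might look like this sketch:
tshark -r output.pcap -T fields -e frame.number -e frame.time_relative -e ip.src -e ip.dst -e _ws.col.Protocol -e frame.len -e _ws.col.Info -E header=y -E separator=, -E quote=d > output.csv
The -E quote=d option double-quotes each field, which keeps the commas inside the Info column from breaking the CSV.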
How about directly redirecting the packet summaries to a file:
sudo tshark > fileName.csv
Strictly speaking this is tshark's space-aligned column output rather than comma-separated values, but it may be enough for quick analysis.

Putting the data passed to xargs twice in one line

tmp-file contains:
database_1
database_2
database_3
I want to run a command like "mysqldump DATABASE > database.sql && gzip database.sql" for each line in the above file.
I've got as far as cat /tmp/database-list | xargs -L 1 mysqldump -u root -p
I guess I want to know how to use the data passed to xargs more than once (and not just at the end).
EDIT: the following command will dump each database into its own .sql file, then gzip them.
mysql -u root -pPASSWORD -B -e 'show databases' | sed -e '$!N; s/Database\n//' | xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.backup.sql db; gzip *.sql
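Incidentally, the sed stage above only strips the "Database" header line from mysql's output; the client's --skip-column-names option (used in another answer on this page) achieves the same thing, e.g. this variant of the same command:
mysql -u root -pPASSWORD -B --skip-column-names -e 'show databases' | xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.backup.sql db; gzip *.sql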
In your own example you use && to run two commands on one line - so why not do the same here. One catch: a redirection written after xargs is processed by the shell before xargs ever runs, so it would send everything to a single literal file named db.sql; wrapping the pipeline in sh -c keeps the redirection inside each invocation:
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql && gzip db.sql'
if you really want to do it all in one line using xargs only. Though I believe that
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql'; gzip *.sql
would make more sense.
If you have a multicore CPU (most of us do these days) then GNU Parallel http://www.gnu.org/software/parallel/ may improve the running time:
mysql -u root -pPASSWORD -B -e 'show databases' \
| sed -e '$!N; s/Database\n//' \
| parallel -j+0 "mysqldump -u root -pPASSWORD {} | gzip > {}.backup.sql"
-j+0 will run as many jobs in parallel as you have CPU cores.
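To restore one of the resulting dumps later (database_1 is an illustrative name from the question's tmp-file; note that the recipe above writes gzip-compressed data into the .backup.sql files):
gunzip < database_1.backup.sql | mysql -u root -pPASSWORD database_1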