Putting the data passed to xargs twice in one line - mysql

/tmp/database-list contains:
database_1
database_2
database_3
I want to run a command like "mysqldump DATABASE > database.sql && gzip database.sql" for each line in the above file.
I've got as far as:
cat /tmp/database-list | xargs -L 1 mysqldump -u root -p
I guess I want to know how to insert the data passed to xargs more than once (and not just at the end).
EDIT: the following command will dump each database into its own .sql file, then gzip them.
mysql -u root -pPASSWORD -B -e 'show databases' | sed -e '$!N; s/Database\n//' | xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.backup.sql db; gzip *.sql
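For reference, the dump and the gzip can also be combined into a single xargs pass by running a small shell per database; a minimal sketch, assuming the list file from the question and that credentials are supplied via ~/.my.cnf:
cat /tmp/database-list | xargs -L1 -I{} sh -c 'mysqldump -u root "$1" > "$1.sql" && gzip "$1.sql"' _ {}
Passing the name to sh as $1 (the _ fills in $0) keeps the redirection inside the per-database shell and avoids quoting problems if a name contains characters special to the shell.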

In your own example you use && to run two commands on one line - so why not do
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql' && cat file | xargs -L1 -I db gzip db.sql
if you really want to do it all in one line using xargs only. (The redirection has to happen inside the per-line shell: a bare > db.sql would be taken by the outer shell and send every dump into a single literal file named db.sql.) Though I believe that
cat file | xargs -L1 -I db sh -c 'mysqldump db > db.sql'; gzip *.sql
would make more sense.

If you have a multicore CPU (as most of us do these days), then GNU Parallel http://www.gnu.org/software/parallel/ may improve the running time:
mysql -u root -pPASSWORD -B -e 'show databases' \
| sed -e '$!N; s/Database\n//' \
| parallel -j+0 "mysqldump -u root -pPASSWORD {} | gzip > {}.backup.sql"
-j+0 will run as many jobs in parallel as you have CPU cores.

How to log mysql queries of specific database - Linux

I have been looking at this post:
How can I log "show processlist" when there are more than n queries?
It works fine when I run this command:
mysql -uroot -e "show full processlist" | tee plist-$date.log | wc -l
The problem is that it overwrites the file.
I also want to run it as a cron job, so I have added this command to /var/spool/cron/root:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +%F-%H-%M`.log | wc -l) -lt 51 ] && rm plist-`date +%F-%H-%M`.log
but it is not working. Or maybe it is saving the log file somewhere outside the root folder.
So my question is: how do I temporarily log all queries against a specific database and table, and save them all to one file?
Note: it is not the slow query log I am looking for, just a temporary way to see which queries are running against a database.
The solution is:
watch -n 1 "mysqladmin -u root -pXXXXX processlist | grep tablename" | tee -a /root/plist.log
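As an aside, if the goal is to temporarily capture every statement the server runs, MySQL's general query log can be switched on at runtime (MySQL 5.1+); a minimal sketch, where the log path is an assumption and the output still has to be filtered per database/table afterwards:
mysql -uroot -e "SET GLOBAL general_log_file='/tmp/all-queries.log'; SET GLOBAL general_log='ON';"
# ... reproduce the activity, then switch it off and filter:
mysql -uroot -e "SET GLOBAL general_log='OFF';"
grep tablename /tmp/all-queries.log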
The % character has special meaning in crontab commands, so you need to escape it:
* * * * * [ $(mysql -uroot -e "show full processlist" | tee plist-`date +\%F-\%H-\%M`.log | wc -l) -lt 51 ] && rm plist-`date +\%F-\%H-\%M`.log
If you want to use your original command, but not overwrite the file each time, you can use the -a option of tee to append:
mysql -uroot -e "show full processlist" | tee -a plist-$date.log | wc -l
To run the command every second for a minute, write a shell script:
#!/bin/bash
for i in {1..60}; do
  [ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
  sleep 1
done
You can then run this script from cron every minute:
* * * * * /path/to/script
Although if you want to run something continuously like this, cron may not be the best way. You could use /etc/inittab to run the script when the system boots, and it will automatically restart it if it dies for some reason. Then you would just use an infinite loop:
#!/bin/bash
while :; do
  [ $(mysql -uroot -e "show full processlist" | tee -a plist.log | wc -l) -lt 51 ] && rm plist.log
  sleep 1
done
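For reference, a minimal sketch of the matching /etc/inittab entry, assuming a SysV init system; the id field (pl) and the script path are placeholders:
# respawn restarts the process whenever it dies
pl:2345:respawn:/path/to/script
After editing /etc/inittab, run telinit q so init re-reads the file.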

Linux Bash file use Directory name

What I have is a few script files that are used as crons for different buildings in my company. What I'm running into is having to go into each file and change the OAK3 to a different building id, as well as the lowercase oak3. The files are all located in their respective warehouse's folder, e.g. Desktop/CRON/OAK3. What I would like is for each script to use OAK3 and oak3 (lowercase) automatically, instead of my having to edit every file each time we create a new db for a warehouse.
I am new to the Linux world so I'm not sure if there is a way, and I haven't found anything on Google.
Example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
Desired effect (if possible):
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/${warehouse_id}_count_portal.txt --ignore-lines=1
If I understand what you want (which I'm not sure I do), this will help you do it for all new databases:
databases=`mysql -B -r -u ${user} --skip-column-names -p${pass} --execute='show databases'`
for db in $databases; do
  ## loop over each database name
  echo $db # current DB
  mysqldump -u $user --password=$pass $db > "$db.sql" # dump DB to file
done
Using a combination of dirname and basename with the Bash special variable $0, you can get everything you need.
The running script's filename is $0, and dirname $0 gives you the directory path of the executing file. But you don't want the full path, just the last component, which basename provides. realpath is used to expand the directory so . is not returned.
Getting just the last directory name:
$ ls
tmp.sh # Ok, there's our file
$ dirname tmp.sh
. # The . is current directory
$ dirname $(realpath tmp.sh)
/home/mjb/OAK3 # so we expand it with realpath
$ basename $(dirname $(realpath tmp.sh))
OAK3 # then take only the last one with basename
So here's how it will work for you:
# Get the directory name (quoted, in case the path contains spaces)
warehouse=$(basename "$(dirname "$(realpath "$0")")")
# And lowercase it with `tr` into a new variable
warehouse_lcase=$(echo "$warehouse" | tr '[:upper:]' '[:lower:]')
# Substitute the variables
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${warehouse}/${warehouse_lcase}_count_portal.txt --ignore-lines=1
See also: Can a Bash script tell which directory it's stored in?
There is a much easier way to figure out the basename of the current working directory: pwd -PL | sed sg.\*/ggg
[san@alarmp OAK3]$ pwd; pwd -PL | sed sg.\*/ggg
/opt/local/OAK3
OAK3
So, if I understand your requirement correctly: if you don't want to change the script(s) manually by hand, you can do this while inside that particular directory:
$ cat example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
#
$ this_dir=$(pwd -PL | sed sg.\*/ggg)
#
$ sed -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/oak3_count_portal.txt --ignore-lines=1
#
$ sed -e "s/$(echo $this_dir | tr '[:upper:]' '[:lower:]')/\${warehouse_id}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/${warehouse_id}_count_portal.txt --ignore-lines=1
Use the -i option to make the change permanent in-file (without creating a new file), like this:
sed -i -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
(Note: written as -ie, GNU sed would treat the e as a backup suffix and leave a copy named example.she.)
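If you would rather keep a backup, GNU sed accepts a suffix attached to -i; a small sketch:
sed -i.bak -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh   # original kept as example.sh.bak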

run innobackupex with gzip and pipe display output to file

How is it possible to run this and output the innobackupex output to a file (but still send output to the display)?
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz
I need to capture the innobackupex log, which has ... completed OK! in its last line, to a file. How can I do that?
I've also noticed that it is a bit challenging to save the "OK" output from xtrabackup to a log file, as the Perl script plays with the tty. Here is what worked for me.
If you need to execute innobackupex from the command line, you can do:
nohup innobackupex --user=root --password=pass --databases="db" --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz 2>/path/mybkp.log
if you need to script it and get an OK message you can do:
/bin/bash -c "innobackupex --user=root --password=pass --stream=tar ./ | gzip -c -1 > /var/backup/backup.tar.gz" 2>/path/mybkp.log
Please note that in the second command, the double quote closes before the 2>
Prepend
2> >(tee file)
to your command; a redirection can precede the command name, and here it applies to innobackupex, whose log goes to stderr.
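Put together with the question's pipeline (writing the redirection after the arguments, which is equivalent), a sketch where the log filename is an assumption; the >&2 keeps tee's copy on the display instead of leaking it into the gzip pipe:
innobackupex --user=root --password=pass --databases="db" --stream=tar ./ \
  2> >(tee /var/backup/innobackupex.log >&2) \
  | gzip -c -1 > /var/backup/backup.tar.gz
# afterwards, confirm success from the captured log:
tail -n 1 /var/backup/innobackupex.log | grep -q 'completed OK!' && echo "backup OK"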

Save tail output to mysql

I'm having a problem: I need to save tail output into MySQL. I can already save the output to a file.
Here is the tail command:
tail -f file_ | egrep --line-buffered param_ > path_destinty
For my application it is necessary to save the information at the time it is written.
Any tips?
Example:
tail -f file_ | \
grep -E --line-buffered param_ | \
while read -r line; do
  mysql -u root -proot -h 127.0.0.1 -e "INSERT INTO \`test\`.\`test\` (\`text\`, \`updated\`) VALUES ('${line}', NOW());"
done
Pipes:
tail your file
because egrep is deprecated, use grep -E
a while loop reads each matching line and sends it to MySQL
MySQL parameters:
-e Execute the given query
-u Username
-p Password for this user (no space between -p and the password)
-h Host/IP
`test`.`test` names the database and table
${line} is our variable holding the text
Note that a line containing a single quote will break the INSERT as written; see the sketch below for one way to escape it.
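A hedged sketch of that workaround, doubling single quotes as SQL requires (backslashes may still need care depending on the server's sql_mode):
tail -f file_ | \
grep -E --line-buffered param_ | \
while read -r line; do
  esc=$(printf '%s' "$line" | sed "s/'/''/g")   # double single quotes for SQL
  mysql -u root -proot -h 127.0.0.1 -e "INSERT INTO \`test\`.\`test\` (\`text\`, \`updated\`) VALUES ('${esc}', NOW());"
done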

Load a series of sql files through capistrano

I'm having an issue trying to load a series of SQL files through our Capistrano recipe for our testing environment.
Here's what I came up with:
desc "Empty database and play sql scripts for fresh db structure"
task :mysqlrestore, :roles => :app do
  run "find #{current_release}/migration/ -name '*.sql' -print0 | xargs -0 -I file mysql -hlocalhost -u#{db_username} -p#{db_password} #{db_database} < file"
end
My Capistrano console outputs:
failed: "sh -c 'find /home/toolbox/www/staging/releases/20120119111819/migration/ -name '\''*.sql'\'' -print0 | xargs -0 -I file mysql -hlocalhost -uuser -ppassword DBNAME < file'" on staging.env.com
Where could I be wrong?
I was able to execute your command from bash just by removing the single quotes from your run command, i.e.:
run "find #{current_release}/migration/ -name *.sql -print0 | xargs -0 -I file mysql -hlocalhost -u#{db_username} -p#{db_password} #{db_database} < file"