Save tail output to mysql

I'm having a problem: I need to save the output of tail to MySQL. I can already save the output to a file.
Here is the tail command:
tail -f file_ | egrep --line-buffered param_ > path_destinty
For my application it is necessary to save the information at the time it is written.
Any tips?

Example:
tail -f file_ | \
grep -E --line-buffered param_ | \
while IFS= read -r line; do
  mysql -u root -proot -h 127.0.0.1 -e 'INSERT INTO `test`.`test` (`text`, `updated`) VALUES ("'"${line}"'", NOW());'
done
Pipes:
tail -f follows your file
grep -E filters the matching lines (egrep is deprecated, so use grep -E)
a while loop reads each line and sends it to MySQL
Params of MySQL:
-e Execute the given query
-u Username
-p Password for this user (note: no space between -p and the password)
-h Host/IP
`test` is the name of the database and table
${line} is our variable with the text
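Note that ${line} is spliced straight into the SQL, so a line containing a double quote or backslash will break the INSERT. A minimal sketch of a safer variant (same test.test table as above) escapes those characters first:
tail -f file_ | grep -E --line-buffered param_ | while IFS= read -r line; do
  esc=${line//\\/\\\\}   # escape backslashes first
  esc=${esc//\"/\\\"}    # then escape double quotes
  mysql -u root -proot -h 127.0.0.1 -e \
    "INSERT INTO \`test\`.\`test\` (\`text\`, \`updated\`) VALUES (\"${esc}\", NOW());"
done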


mysqldump in Jenkinsfile can't connect to server

This is the stage in the Jenkinsfile where the problem comes from:
stage ('Build & Run container') {
    imageMysql = docker.build('backend-server-mysql-dev', '--no-cache -f build/docker/mysql/Dockerfile .')
    containerMysql = imageMysql.run("--name backend-server-mysql-dev -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_ROOT_USER=root -e MYSQL_PASSWORD=mahmoud -e MYSQL_DATABASE=soextremedb")
    sh 'docker ps | docker exec -it backend-server-mysql-dev /bin/bash | ls -l | mysqldump -u root -proot soextremedb < soextremedb.sql'
}
This is the error message:
Shell Script -- docker ps | docker exec -it backend-server-mysql-dev /bin/bash | ls -l | mysqldump -u root -proot soextremedb < soextremedb.sql -- (self time 566ms)
[soextremeBackEnd_Dev-MBC6SQWYSNVE6ADN2QOAOGZ4YYVT5E6K7Y2FUP6ROOROWRMCPFOA] Running shell script
+ docker ps
+ docker exec -it backend-server-mysql-dev /bin/bash
+ ls -l
+ mysqldump -u root -proot soextremedb
mysqldump: Got error: 2002: "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")" when trying to connect the input device is not a TTY
I think there are a couple of issues with the sh command.
First, | sends the output of one command to the next command, but it looks like you're just trying to execute a sequence of commands. For that, you can use ; or &&. You might take a look at this answer for a great summary of shell operators.
Then, for your docker exec command, I think you actually want to run a series of commands non-interactively: leave off the -it and use /bin/bash -c to pass a command string to the shell.
This will give you something like:
sh 'docker ps ; docker exec backend-server-mysql-dev /bin/bash -c "ls -l ; mysqldump -u root -proot soextremedb < soextremedb.sql"'
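As a side note, mysqldump writes a dump out; loading soextremedb.sql back in is the job of the mysql client. If the socket error persists inside the container, forcing a TCP connection avoids the /var/run/mysqld/mysqld.sock lookup entirely (a sketch, assuming the server in the container listens on 3306):
sh 'docker ps ; docker exec backend-server-mysql-dev /bin/bash -c "ls -l ; mysql -h 127.0.0.1 -u root -proot soextremedb < soextremedb.sql"'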

How to escape mysqldump quotes in bash?

I need to take a backup of my database using mysqldump tool.
My test table has a TIMESTAMP column that I have to use as filter.
I'm using a bash script with the following code:
#!/bin/bash
cmd="mysqldump --verbose --opt --extended-insert --result-file /home/backups/mysql/test/test_table.20161205.bak test --where=\"timestamp_c>='2016-12-03 00:00:00'\" --tables \"test_table\""
echo $cmd
$cmd
I'm printing the command that I assume should work. The above script produces the output:
mysqldump --verbose --opt --extended-insert --result-file /home/backups/mysql/test/test_table.20161205.bak test --where="timestamp_c>='2016-12-03 00:00:00'" --tables "test_table"
If I copy the printed command into the terminal, it works;
however, the same command run from the script prints the error:
mysqldump: Couldn't find table: "00:00:00'""
Is there something that I'm not understanding about quotes escape?
Variables hold data, not code: when $cmd expands, the shell word-splits the result but does not re-parse the embedded quotes, so --where gets split at its spaces and the trailing fragment is taken as a table name. Define a function instead.
cmd () {
  mysqldump --verbose --opt --extended-insert \
    --result-file /home/backups/mysql/test/test_table.20161205.bak test \
    --where="timestamp_c>='2016-12-03 00:00:00'" \
    --tables "test_table"
}
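Calling the function then runs mysqldump with the quoting parsed exactly once, at definition time:
cmd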
Try executing your command using 'eval'. Example:
#!/bin/bash
cmd="mysqldump --verbose --opt --extended-insert --result-file /home/backups/mysql/test/test_table.20161205.bak test --where=\"timestamp_c>='2016-12-03 00:00:00'\" --tables \"test_table\""
eval "$cmd"
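A third option, keeping the pieces in a variable without eval, is a Bash array (a sketch reusing the paths from the question):
args=(--verbose --opt --extended-insert
  --result-file /home/backups/mysql/test/test_table.20161205.bak
  test
  --where="timestamp_c>='2016-12-03 00:00:00'"
  --tables test_table)
mysqldump "${args[@]}"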

How can I purge up to the completed slave binary log, minus 50?

I'm using bash to purge binary logs on Master, only if the Slave has completed them. Instead I'd like to leave a bit more logs on the server. Can you help me turn mysql-bin.000345 into mysql-bin.000295 (subtracting 50 from 345)?
Here's my script:
# Fetch the current `Relay_Master_Log_File` from Slave
relay_master_log_file=$(ssh user@slave "mysql -hlocalhost -uroot -e 'SHOW SLAVE STATUS\G' | grep Relay_Master_Log_File | cut -f2- -d':' | sed -e 's/^[ \t]*//'")
# Purge binary logs on Master up to the current `Relay_Master_Log_File`
purge_cmd="PURGE BINARY LOGS TO '$relay_master_log_file';"
ssh user@master <<EOF
mysql -e "$purge_cmd"
EOF
But I'd like to keep 50 (or n) binary logs on Master instead.
Assuming you have mysql-bin.000345 in a variable, you could perform these steps:
Strip the beginning part with the trailing zeros, leaving only 345
Use Bash arithmetics $((...)) to subtract n from 345
Use printf to format the result padded with zeros
For example:
oldname=mysql-bin.000345
n=50
num=$(shopt -s extglob; echo "${oldname##mysql-bin.+(0)}")
newname=mysql-bin.$(printf '%06d' $((num - n)))
You can get it in your first command itself using awk, and as a bonus skip the use of grep, cut and sed.
# Fetch the current `Relay_Master_Log_File` from Slave
relay_master_log_file=$(ssh user@slave "mysql -hlocalhost -uroot -e 'SHOW SLAVE STATUS\G' |
awk -F':[[:blank:]]*' '$1~/Relay_Master_Log_File/{split($2, a, /\./); printf "%s.%06d", a[1], a[2]-50}'
)
Based on comments below (inside the double-quoted ssh string, the local shell expands the awk script's $1 and $2, and the embedded quotes clash), use:
ssh user@slave <<-'EOF'
mysql -hlocalhost -uroot -e 'SHOW SLAVE STATUS\G' |
awk -F':[[:blank:]]*' '$1~/Relay_Master_Log_File/{split($2, a, /\./);
printf "%s.%06d\n", a[1], a[2]-50}'
EOF
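Putting it together with the purge step from the question might look like this sketch (user@slave, user@master and n=50 come from the question; running awk locally sidesteps the nested-quoting problem):
n=50
relay_master_log_file=$(ssh user@slave "mysql -hlocalhost -uroot -e 'SHOW SLAVE STATUS\G'" |
  awk -F':[[:blank:]]*' -v n="$n" '$1~/Relay_Master_Log_File/{split($2, a, /\./);
    printf "%s.%06d", a[1], a[2]-n}')
ssh user@master "mysql -e \"PURGE BINARY LOGS TO '$relay_master_log_file';\""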

Linux Bash file use Directory name

What I have is a few script files that are used as crons for different buildings in my company. What I'm running into is having to go into each file and change OAK3 to a different building id, as well as the lowercase oak3. The files are all located in their respective warehouse's folder, e.g. Desktop/CRON/OAK3. What I would like is for each script to use OAK3 and oak3 (lowercase) based on its own folder, instead of my having to edit every file each time we create a new db for a warehouse.
I am new to the Linux world, so I'm not sure if there is a way, and I haven't found anything on Google.
Example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
Desired effect (if possible):
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/$WAREHOUSE_ID/$warehouse_id_count_portal.txt --ignore-lines=1
If I get what you want, which I'm not sure about, this will help to do it for all new databases:
databases=$(mysql -B -r -u "${user}" --skip-column-names -p"${pass}" --execute='show databases')
for db in $databases; do
  ## loop through the database names
  echo "$db" # current DB
  mysqldump -u "$user" --password="$pass" "$db" > "$db.sql" # dump db to file
done
Using a combination of dirname and basename with the Bash special variable $0, you can get all of what you need.
The running script's filename is $0. Meanwhile, dirname $0 will give you the directory path of the executing file. But you don't want the full path, just the last component, which basename will provide. realpath is used to expand the directory so . is not returned.
Getting just the last directory name:
$ ls
tmp.sh # Ok, there's our file
$ dirname tmp.sh
. # The . is current directory
$ dirname $(realpath tmp.sh)
/home/mjb/OAK3 # so we expand it with realpath
$ basename $(dirname $(realpath tmp.sh))
OAK3 # then take only the last one with basename
So here's how it will work for you:
# Get the directory name
warehouse=$(basename "$(dirname "$(realpath "$0")")")
# And lowercase it with `tr` into a new variable
warehouse_lcase=$(echo "$warehouse" | tr '[:upper:]' '[:lower:]')
# Substitute the variables
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${warehouse}/${warehouse_lcase}_count_portal.txt --ignore-lines=1
See also: Can a Bash script tell which directory it's stored in?
There is a lot easier way to figure out the basename of the current working directory: pwd -PL | sed sg.\*/ggg (using g as the sed delimiter, this is equivalent to s|.*/||g)
[san#alarmp OAK3]$ pwd; pwd -PL | sed sg.\*/ggg
/opt/local/OAK3
OAK3
So, if I understand your requirement correctly, and you don't want to change the script(s) manually by hand, you can do this whilst inside that particular directory:
$ cat example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/oak3_count_portal.txt --ignore-lines=1
#
$ this_dir=$(pwd -PL | sed sg.\*/ggg)
#
$ sed -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/${WAREHOUSE_ID}/oak3_count_portal.txt --ignore-lines=1
#
$ sed -e "s/$(echo $this_dir | tr '[:upper:]' '[:lower:]')/\${warehouse_id}/g" example.sh
/usr/bin/mysqlimport --host=localhost -u root -ppassword --local --verbose -C --delete test \
/workplace/gwwallen/ETLdump/OAK3/${warehouse_id}_count_portal.txt --ignore-lines=1
Use the -i option to make the change permanent in the file (without creating a new one) like this:
sed -i -e "s/${this_dir}/\${WAREHOUSE_ID}/g" example.sh

Putting the data passed to xargs twice in one line

tmp-file contains:
database_1
database_2
database_3
I want to run a command like "mysqldump DATABASE > database.sql && gzip database.sql" for each line in the above file.
I've got as far as: cat /tmp/database-list | xargs -L 1 mysqldump -u root -p
I guess I want to know how to put the data passed to xargs in more than once (and not just at the end).
EDIT: the following command will dump each database into its own .sql file, then gzip them.
mysql -u root -pPASSWORD -B -e 'show databases' | sed -e '$!N; s/Database\n//' | xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.backup.sql db; gzip *.sql
In your own example you use && to run two commands on one line - so why not do
xargs -L1 -I db sh -c 'mysqldump -u root -pPASSWORD db > db.sql && gzip db.sql' < file
if you really want to do it all in one line using xargs only. (The redirection has to live inside the sh -c string; a bare > db.sql after xargs would be opened once by the outer shell, with db taken literally.) Though I believe that
xargs -L1 -I db mysqldump -u root -pPASSWORD -r db.sql db < file; gzip *.sql
would make more sense.
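A plain while loop avoids the xargs substitution quirks entirely (a sketch assuming tmp-file holds one database name per line, with the same hypothetical credentials as above):
while IFS= read -r db; do
  mysqldump -u root -pPASSWORD "$db" > "$db.sql" && gzip "$db.sql"
done < tmp-file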
If you have a multicore CPU (most of us have these days) then GNU Parallel http://www.gnu.org/software/parallel/ may improve the running time:
mysql -u root -pPASSWORD -B -e 'show databases' \
| sed -e '$!N; s/Database\n//' \
| parallel -j+0 "mysqldump -u root -pPASSWORD {} | gzip > {}.backup.sql"
-j+0 will run as many jobs in parallel as you have CPU cores.