What is this bash script trying to do with mysql? - mysql

The person who wrote the following bash script has left, and I need to figure out what it does. The script is executed in a Docker container before running some test cases that require a running MySQL instance.
I guess it is about starting a MySQL server, but I am not sure what each statement in the script means exactly.
echo -n "Loading [+"
( echo "SHOW TABLES;" | mysql mysql 2>/dev/null 1>/dev/null ) || \
run-mysqld 2>/dev/null 1>/dev/null &
while ! ( echo "SHOW TABLES;" | mysql mysql 2>/dev/null 1>/dev/null ) ;
do
echo -n +
sleep 1
done
echo "] Done."
I had to figure this out because our Bitbucket pipeline recently gets stuck and times out when running this script (previously it was fine). Thanks in advance.

This sequence attempts to run SHOW TABLES through mysql, discarding all output. If mysql fails (because mysqld isn't running), it starts mysqld in the background. Note that the trailing & below backgrounds the entire probe-or-start list, so the script immediately falls through to the polling loop.
( echo "SHOW TABLES;" | mysql mysql 2>/dev/null 1>/dev/null ) || \
run-mysqld 2>/dev/null 1>/dev/null &
The second part of the code just waits for mysqld to start up, which is signaled by the following code exiting 0:
( echo "SHOW TABLES;" | mysql mysql 2>/dev/null 1>/dev/null )
If mysqld never comes up after that single start attempt, the second part of the code runs forever. It looks like that's what happened.
The simplest way to keep this code from hanging is to put a limit on how many times we sleep:
max_sleep=15
sleep=0
while ! ( echo "SHOW TABLES;" | mysql mysql 2>/dev/null 1>/dev/null ) ;
do
echo -n +
sleep 1
((sleep++ > max_sleep)) && { echo "Failed to start mysqld] Error."; exit 1; }
done
echo "] Done."
exit 0
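An alternative probe that avoids piping SQL through the client is mysqladmin ping, which exits 0 once the server answers and, with --silent, suppresses the connection error while it is still down. A minimal sketch along the same lines, assuming default credentials and socket just as the original mysql mysql invocation does:
#!/bin/bash
# Wait up to max_wait seconds for mysqld to accept connections.
max_wait=15
waited=0
echo -n "Loading [+"
until mysqladmin ping --silent 2>/dev/null; do
echo -n +
sleep 1
(( ++waited >= max_wait )) && { echo "] mysqld did not come up."; exit 1; }
done
echo "] Done."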

Related

Mysql cli not returning data in bash script run by crontab

I have a bash script that is executed via a cron job
#!/bin/bash
# abort on errors
set -e
ABS_DIR=/path/
# extract the creds for the mysql db
DB_USER="USERNAME"
DB_PASS="PASSWORD"
function extract_data() {
file=$2
sql_query=`cat $ABS_DIR/$1`
data=`mysql -u $DB_USER --password="$DB_PASS" -D "database" -e "$sql_query" | tail -n +2`
echo -e "Data:"
echo -e "$data"
}
extract_data "sql_query.sql" "log.csv"
When running it manually with bash extract.sh, the mysql command fetches the data correctly and I see the echo -e "$data" output on the console.
When running the script via a cron job
* 12 * * * /.../extract.sh > /.../cron_log.txt
then I get an empty line saved to the cron_log.txt file!?
This is a common problem: a script behaves differently when run from the user's shell and when run from crontab. The cause is typically a difference in environment variables between the user shell and the crontab shell; by default, they are not the same.
To begin debugging this issue, you could redirect stderr as well as stdout from crontab, hopefully capturing an error message:
extract.sh &> /.../cron_log.txt
(notice the &)
Also: you have three dots (/.../) -- that is likely a typo, could also be the cause.
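A frequent specific cause is PATH: cron typically runs with a minimal one such as /usr/bin:/bin, so a mysql binary installed anywhere else is simply not found. Two common remedies, sketched here with placeholder paths rather than the redacted ones above (check the real location with: which mysql):
# Option 1: set PATH explicitly at the top of the crontab
PATH=/usr/local/bin:/usr/bin:/bin
* 12 * * * /path/to/extract.sh > /path/to/cron_log.txt 2>&1
# Option 2: call mysql by absolute path inside the script
data=$(/usr/bin/mysql -u "$DB_USER" --password="$DB_PASS" -D "database" -e "$sql_query" | tail -n +2)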

What metric to use for failed slave AWS server

I have a master server running on a server in an independent data center, and a slave in AWS.
The replication failed with this error: "The incident LOST_EVENTS occured on the master. Message: error writing to the binary log".
Last time it went offline, it jumped from 4k bytes/second write throughput to 40k, and steadily increased to 252 k over a couple of weeks.
1) I'm wondering why write throughput would increase steadily after the failure?
2) I'm wondering what metric can be used within CloudWatch to create an SNS email notification when it does fail? Right now, I'm thinking the best thing to do is to run a simple bash script on the master that compares Master_Log_File to Relay_Master_Log_File from 'show slave status;' and to forgo CloudWatch altogether.
Edit: updated script
Here's the script I run every 10 minutes to check on the slave state (until I find an alternative metric in CloudWatch):
#!/bin/bash
a=$(mysql --host=*amazonaws.com --port=3306 -u whatever -ppass -N -B -e "show slave status;")
e=$(echo "$a" | awk -F\\t '{print $12}') #Slave_SQL_Running
d=$(echo "$a" | awk -F\\t '{print $26}') #Seconds_Behind_Master
if [ "$e" != 'Yes' ]; then
echo -e "slave mysql server down \n slave SQL running: $e \n seconds behind master: $d" | mail -s 'slave mysql server down' admin@email.com
fi
I didn't find a good metric from CloudWatch, so I made this script, which checks the slave status every 10 minutes through cron - it sends an email if it finds Slave_SQL_Running or Slave_IO_Running != 'Yes':
#!/bin/bash
a=$(mysql --host=host --port=3306 -u master -ppword -N -B -e "show slave status;")
b=$(echo "$a" | awk -F\\t '{print $6}') #Master_Log_File
c=$(echo "$a" | awk -F\\t '{print $10}') #Relay_Master_Log_File
e=$(echo "$a" | awk -F\\t '{print $12}') #Slave_SQL_Running
d=$(echo "$a" | awk -F\\t '{print $26}') #Seconds_Behind_Master
f=$(echo "$a" | awk -F\\t '{print $11}') #Slave_IO_Running (field 11, just before Slave_SQL_Running)
if [ "$e" != 'Yes' ] || [ "$f" != 'Yes' ]; then
echo -e "server id - slave mysql server down \n master log file: $b \n relay master log file: $c \n seconds behind master: $d \n Slave IO Running: $f \n Slave SQL Running: $e " | mail -s 'slave mysql server down' email@email.com
fi
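For reference, a cron line to drive a check like this every 10 minutes would look something like the following (the script path and log file are placeholders, not taken from the original setup):
*/10 * * * * /path/to/check_slave.sh >> /var/log/check_slave.log 2>&1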

Bash script does not execute in CRON (might be to do with Sudo commands?)

I have a bash script that uses ssh tunneling to get a PSQL dump remotely then puts the dump into a local MySQL database.
I can't seem to get CRON to run it.
I have performed the following steps:
1) Ran:
sudo crontab -e
2) Input the following to get the script to execute every 10 minutes. Script also takes about 2-3 minutes to execute fully.
*/10 * * * * /home/ubuntu/psqlimport 2>&1 /home/ubuntu/cronlog.log
3) Ran:
sudo chmod +x /home/ubuntu/psqlimport
4) Restarted CRON:
sudo /etc/init.d/cron restart
I check my database to see if there were any updates and there are none, as if the script never executed. There is also no /home/ubuntu/cronlog.log file created. When I run:
bash psqlimport
The script executes as expected.
Why doesn't the CRON job execute the script?
Filename: psqlimport
#!bin/bash
###
#
# PostgreSQL to MySQL Dump program
#
# Filename: psqlimport
# Description: This program dumps the 3 tables from a
# remote PostgreSQL database, converts the dump file
# into MySQL translatable INSERT INTO queries,
# and inputs the data into the local
# MySQL Inbevdash6 database.
#
# Script can be copied to the
# /etc/cron.daily folder for daily database updates.
#
#
# Script written by ramabrahma@stackoverflow
# Date: December 15, 2015
# Version: 01
#
#Set the dump file and temp file
DUMPFILE='psql.dump.sql' || (echo "assign var failed" 1>&2; exit 1)
TMPFILE=`mktemp` || (echo "mktemp failed" 1>&2; exit 1)
#Tables to dump: table1, table2, table3
#Dump as INSERT queries statements
sudo -E sshpass -p "<remote host password>" \
ssh username@remote_host \
pg_dump -U database_username -t table1 \
-t table2 -t table3 \
--column-inserts --data-only psqldatabase \
| awk '/^INSERT/ {i=1} /^SELECT/ {i=0} i' \
> "$TMPFILE" \
|| (echo "pg_dump failed" 1>&2; exit 1)
#Adding transaction blocks to the dump file
(echo "start transaction; truncate table table1; "; \
echo "truncate table table2; "; \
echo "truncate table table3; "; \
cat "$TMPFILE"; echo 'commit;' ) \
> "$DUMPFILE" \
|| (echo "parsing dump file failed" 1>&2; exit 1)
#Inputting the dump file to the MySQL database
mysql --defaults-file="/home/ubuntu/.mysql.db.cfg" < "$DUMPFILE" \
|| (echo "mysql failed" 1>&2; exit 1)
#Remove the temp file created
rm "$TMPFILE"
rm "$DUMPFILE"
You have an error in your shebang:
#!bin/bash
It should be
#!/bin/bash
bin/bash probably isn't a path to anything unless you're executing out of /. This wasn't a problem when you ran the script manually, because bash psqlimport explicitly invokes bash to evaluate the script. You'll see the problem in action if you try to run ./psqlimport instead.
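To see the failure the way cron sees it, invoke the script by path rather than through bash. With the relative shebang, the kernel cannot find the interpreter and bash reports a "bad interpreter" error (exact wording varies by system); after correcting the first line, direct execution works:
$ ./psqlimport
bash: ./psqlimport: bin/bash: bad interpreter: No such file or directory
$ sed -i '1s|^#!bin/bash|#!/bin/bash|' psqlimport    # or just edit the first line
$ ./psqlimport
Separately, note that the crontab entry as shown (2>&1 /home/ubuntu/cronlog.log) passes the log path to the script as an argument rather than redirecting into it; > /home/ubuntu/cronlog.log 2>&1 was presumably intended.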

Trying to populate video file data to mysql database

I'm trying, on a Mac, to loop through all the videos in a directory and add each file's details (duration, size and time) to a MySQL db. But for some reason it fails on the mysql part every time.
If I take the mysql query generated by the script and run it on the mysql db it works fine. Can anyone help at all?
#!/bin/bash
OrDir="/Volumes/Misc/video"
find "$OrDir" -type f -exec /bin/bash -c \
'name=$(basename "$1")
name=${name%.*}
duration=$( ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 -sexagesimal "$1")
hash=$( md5 "$1" | cut -f 4 -d " ")
size=$( stat -f%z "$1")
QUERY="UPDATE Video SET Duration=\"$duration\", Hash=\"$hash\", Bytes=$size WHERE Name=\"$name\" "
echo "$QUERY \n"
mysql --host=**.**.**.** --user=**** --password=****** **** << EOF
$QUERY;
EOF
' _ {} \;
The asterisks mask sensitive data and the masked values are correct (they're used in another shell script with the same method that runs fine on the same server).
This is just part of a larger script which will then be combined once this works properly
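Since the generated query runs fine when pasted into mysql directly, one variable worth eliminating is the here-document inside the find -exec bash -c '...' body, where quoting and terminator placement are fragile. A sketch of the same call using -e instead; this is just a simpler equivalent to try, not a confirmed fix:
# inside the bash -c body, in place of the heredoc:
mysql --host=**.**.**.** --user=**** --password=****** **** -e "$QUERY" \
|| echo "mysql failed for: $name" >&2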

Script to check if an entry present on crontab, if not create new one

I want to add an entry to crontab that will optimize a MySQL database, so I am writing a script that checks whether the entry is already present and creates it if not.
The problem I'm facing is that when I check for the entry, I get many unwanted entries back: specifically, all the files and directories in the directory where I'm executing the script show up in the output.
My overall script looks like:
check=`crontab -l | grep -i "ABC"`
if [ -z "$check" ];
then
#echo " Adding opt to crontab "
crontab -l > ./tmpfile
echo "0 * * * * /bin/bash /usr/bin/mysqlcheck --all-databases -o --user=<user> --password=<pwd> 2>&1 >> ./tmpfile
crontab ./tmpfile
rm tmpfile
echo "ABC has been successfully added to crontab"
exit 1
else
echo "ABC already exists in crontab "
exit 1
fi
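The stray listing is characteristic of filename expansion: any unquoted * in a string like 0 * * * * is replaced by the contents of the current directory, and the echo line above is missing its closing double quote, which makes it easy for part of the entry to end up unquoted. The script also exits 1 even on success. A minimal reworking under those assumptions, keeping the <user>/<pwd> placeholders from the original and grepping for a string that actually occurs in the entry:
#!/bin/bash
# Add the mysqlcheck job to crontab only if it is not already present.
entry='0 * * * * /usr/bin/mysqlcheck --all-databases -o --user=<user> --password=<pwd>'
if crontab -l 2>/dev/null | grep -qi 'mysqlcheck'; then
echo "ABC already exists in crontab"
else
# Append to the current crontab; the single quotes above keep each * literal.
{ crontab -l 2>/dev/null; echo "$entry"; } | crontab -
echo "ABC has been successfully added to crontab"
fi
exit 0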