MySQL connection terminates when I log off from the remote server - mysql

I wrote a script that connects to the local MySQL server every 6 seconds and checks whether there is any data in a table. If there is, it runs some PHP commands and then deletes that data from the table. I logged into my remote server (shared hosting) through SSH, copied the script over, and executed it with "nohup ./script.sh 0<&- &>alert.log &" so that it runs in the background and writes all output to alert.log. My problem is that when I log in to the server through SSH and execute the script, it runs perfectly, but after I log out it stops working. When I check alert.log afterwards, it shows the error "cannot connect to local mysql server". Any solutions?
This is the code:
while true
do
    # Fetch all ids from temptab into a bash array
    res=($(mysql -u root -p123456 --skip-column-names -Dtest -e "select id from temptab"))
    if (( ${#res[@]} > 0 )); then
        # Clear the table before processing the ids we just read
        del=$(mysql -u root -p123456 -Dtest -e "delete from temptab;")
        now="$(date +'%d/%m/%Y:%H.%M.%S')"
        for ((i = 0; i < ${#res[@]}; i++))
        do
            php -n /var/lib/mysql/trigger.php "${res[$i]}"
            echo "[$now]:Trigger called with videoid ${res[$i]}"
        done
    fi
    sleep 6
done
And this is the sample output:
cat nohup.out
X-Powered-By: PHP/5.4.20
Content-type: text/html
{"multicast_id":8864856209398719411,"success":2,"failure":1,"canonical_ids":0,"results":[{"message_id":"0:1385797766832904%4f0c6467f9fd7ecd"},{"error":"InvalidRegistration"},{"message_id":"0:1385797766832901%4f0c6467f9fd7ecd"}]}81Inserted police info
[30/11/2013:00.49.26]:Trigger called with videoid 65
/etc/bashrc: line 14: whoami: command not found
/etc/bashrc: line 20: grep: command not found
/etc/bashrc: line 59: dircolors: command not found
./alert.sh: line 15: php: command not found
[30/11/2013:07.50.27]:Trigger called with videoid 70
./alert.sh: line 15: /ramdisk/php/54/bin/php54: No such file or directory
[30/11/2013:09.09.52]:Trigger called with videoid 71

screen is what you need. There are plenty of tutorials on Google about screen usage.
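For example, a minimal session might look like this (the session name is just an illustration):
screen -S alertjob       # start a detachable session named "alertjob"
./script.sh              # run the polling script inside it
# press Ctrl-a then d to detach; the script keeps running after you log out
screen -r alertjob       # reattach from a later SSH session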

I suggest moving your code into a crontab entry that runs every X minutes (5 minutes, or anything else you like) rather than having your user run it during a live session.
Just place the PHP script in a cron job: log in, run crontab -e, then add:
*/5 * * * * /home/username/phpscript.php
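For that entry to work, the script must be executable (chmod +x) and start with a PHP shebang. If it does not, invoking the interpreter explicitly also works; the path to php below is an assumption, so check it with "which php":
*/5 * * * * /usr/bin/php /home/username/phpscript.php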

You could try running your script like this:
/path/to/script.sh </dev/null &>/home/yourname/alert.log &
disown
disown removes the job from the shell's job table, so it is not sent SIGHUP when you log out.

Finally I got the solution: I was using /usr/bin/php to call my PHP files, but when I changed it to /usr/bin/php.orig it started working. But what is that php.orig? Thanks, all.

Related

What is wrong with this bash script (cron + mysql)

I'm using a bash script (sync.sh), run by cron, that is supposed to sync a file to a MySQL database. It works by copying a file from an automatic upload location, parsing it by calling an SQL script which calls other internally stored MySQL routines, and at the end emailing a report text file as an attachment.
But it seems something is not working, as nothing happens to the MySQL database. All the other commands are executed (first line and last line: copying the initial file and sending the e-mail).
The MySQL command works perfectly when run separately.
The server is Ubuntu 16.04.
The cron job runs as root and the script is part of root's crontab.
Here is the script:
#!/bin/bash
cp -u /home/admin/web/mydomain.com/public_html/dailyxchng/warehouse.txt /var/lib/mysql-files
mysql_pwd=syncit4321
cd /home/admin/web/mydomain.com/sync
mysql -u sync -p$mysql_pwd --database=database_name -e "call sp_sync_report();" > results.txt
echo "<h2>Report date $(date '+%d/%m/%Y %H:%M:%S')</h2><br/><br/> <strong>results.txt</strong> is an attached file which contains sync report." | mutt -e "set content_type=text/html" -s "Report date $(date '+%d/%m/%Y %H:%M:%S')" -a results.txt -- recipient#mydomain.com
cron executes the script with a very stripped-down environment. You probably want to use the full path to the mysql command in the script.
You can find the full path by running
which mysql
at the prompt, or you can add an expanded PATH to the cron invocation:
1 2 * * * PATH=/usr/local/bin:$PATH scriptname
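Alternatively, set PATH (or use the absolute path to the client) at the top of the script itself; the paths below are assumptions, so substitute whatever "which mysql" reports on your system:
#!/bin/bash
# give the cron environment the same search path an interactive shell has
PATH=/usr/local/bin:/usr/bin:/bin
# ...rest of sync.sh unchanged, or call the client by absolute path:
/usr/local/bin/mysql -u sync -p"$mysql_pwd" --database=database_name -e "call sp_sync_report();" > results.txt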

mysqldump sometimes returns empty file

I am running a cron job that calls an sh script with the code below. I have noticed that sometimes it works, but sometimes I get a 0 KB file. I have no idea what could be causing this, or what could be done to fix it.
DATE=`date +%Y-%m-%d-%H-%M`
NAME=bkp-server-207-$DATE.sql.gz
mysqldump -u root -pXXXX#2016 xxxx | gzip > /media/backup_folder/$NAME
You need to find out why the command is failing. The default output of a cron job is discarded unless you redirect it to a file and check it later.
You can log at the cron level (see http://www.thegeekstuff.com/2012/07/crontab-log/):
59 23 * * * /home/john/bin/backup.sh >> /home/john/logs/backup.log 2>&1
The 2>&1 folds stderr into the stdout, so both are saved.
Or else you can log specific commands within your script:
echo "Creating $NAME" >>/home/john/logs/backup.log
mysqldump -u root -pXXXX#2016 xxxx 2>>/home/john/logs/backup.log | gzip > /media/backup_folder/$NAME
Once you have the error output, you should have important clues as to the cause of the failure.
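Once the failure is understood, a defensive variant of the backup script can also prevent a truncated dump from overwriting a good one. This is only a sketch of that idea, reusing the question's names; set -o pipefail makes the pipeline report mysqldump's failure instead of gzip's success:
#!/bin/bash
set -o pipefail
DATE=$(date +%Y-%m-%d-%H-%M)
NAME=bkp-server-207-$DATE.sql.gz
TMP=/media/backup_folder/$NAME.part
# write to a temporary file first; keep it only if the dump succeeded
if mysqldump -u root -p'XXXX#2016' xxxx | gzip > "$TMP"; then
    mv "$TMP" "/media/backup_folder/$NAME"
else
    rm -f "$TMP"
    echo "mysqldump failed; no backup written" >&2
fi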

crontab behaviour difference for mysql

I did try to search, but nothing came up that really works for me.
So I am starting this thread to see if anyone can help. I hope this is not a stupid question and I am not overlooking something simple.
I have a Mac mini running a MySQL server.
There are some day-end jobs, so I put them into a script triggered by crontab (I also tried launchd, as this is macOS, but saw the same behavior).
The crontab looks like this:
15 00 * * * /Users/fgs/Documents/database/process_db.sh > /Users/fgs/Documents/database/output.txt 2>&1
The script looks like this:
#!/bin/bash
#some data patching task before everything start
#This sql takes 3 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/loadrawdata.sql
#This sql takes 90 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/LongLongsql.sql
#This sql takes 1 sec
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/anothersql.sql
Behavior:
A. When I execute the shell script directly in a terminal, all three SQL scripts work.
B. When I execute it via crontab, the 90-second SQL does not work (it is an INSERT INTO with a very big join, so there is no output printed; I also tried redirecting to an output file and adding 2>&1, still no output), but the SQL before and after it works as expected.
C. To simulate the crontab environment, I tried
env - /bin/sh
and then started the shell script manually.
It appears that the 90-second LongLongsql.sql ran for only 5 seconds and skipped to the next line. No error message was displayed.
I am wondering whether there is any kind of timeout for crontab (I searched but found nothing).
I checked that ulimit is unlimited (both within "env - /bin/sh" and by putting the check into the script).
I believe it is not related to the mysql command itself, since the same scripts work fine when run directly (I searched this topic too and found nothing interesting).
Just wondering if anyone can shed some light; a direction or anything else would help.
Thanks everyone in advance.
Don't forget that cron starts an isolated shell, where it may not be able to read the file.
I would recommend putting your mysql invocation inside its own script. If you are able to execute the script, cron should also be able to do so.
#!/bin/bash
/usr/local/bin/mysql dbname -u root "-ppassword" < /Users/fgs/Documents/database/LongLongsql.sql
Or:
#!/bin/bash
/usr/local/bin/mysql dbname -u root "-ppassword" -e "source /Users/fgs/Documents/database/LongLongsql.sql"
Then call the script from crontab...
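For example, the crontab entry would then just invoke the wrapper script (the script name and log path here are assumptions):
15 00 * * * /Users/fgs/Documents/database/runlongsql.sh >> /Users/fgs/Documents/database/longsql.log 2>&1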

Way to connect to MySQL so it executes only once instead of inside the for loop in a bash script?

I have a script which executes an INSERT query n times; for that I have used a for loop. The problem is that the command which connects to the remote MySQL server also executes n times. Here is the script, to give a better idea of my problem.
#!/bin/bash -x
# fields: id|alias|booking_time|contact_no|deleted|grace|number_in_queue|pax|seated_time|status|walk_in_time|queue_id|user_id
echo "Bash version ${BASH_VERSION}..."
for i in {1..5..1}
do
    _alias="Name$i"
    _contact_no=$(tr -dc '1-9' < /dev/urandom | fold -w 10 | head -n 1)
    _deleted="FALSE"
    _number_in_queue=$i
    _pax=$(( RANDOM % 10 + 20 ))
    _status="waiting"
    _queue_id=424
    _user_id=550
    mysql -u root -p restbucks << EOF # want this to execute only one time
INSERT INTO queue_item VALUES ('','$_alias',now(),'$_contact_no','$_deleted',NULL,'$_number_in_queue','$_pax',now(),'$_status',now(),'$_queue_id','$_user_id');
EOF
done
Every time I try to run the script, it asks me for the password. What I want is for the connection to be made only once.
You can move the mysql connection before the for loop. Note that bash does not execute a for loop that sits inside a here-document, so the loop has to be wrapped in a command substitution whose output becomes the SQL:
mysql -u root -p restbucks << EOF # this executes only one time
$(for i in {1..5..1}
do
.....
echo "INSERT INTO queue_item VALUES (...);"
done)
EOF
It is also recommended to write your queries into a file and then execute the file over a single connection.
You can refer to bulk-mysql-query.
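A minimal self-contained sketch of that file-based approach, reusing the question's table and values (the temporary file name is arbitrary):
#!/bin/bash
# Build every INSERT statement first, then run them all over one connection.
sqlfile=$(mktemp)
for i in {1..5}
do
    _alias="Name$i"
    _contact_no=$(tr -dc '1-9' < /dev/urandom | fold -w 10 | head -n 1)
    _pax=$(( RANDOM % 10 + 20 ))
    echo "INSERT INTO queue_item VALUES ('','$_alias',now(),'$_contact_no','FALSE',NULL,'$i','$_pax',now(),'waiting',now(),'424','550');" >> "$sqlfile"
done
mysql -u root -p restbucks < "$sqlfile"   # asks for the password once
rm -f "$sqlfile"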
That is because it is placed inside the for loop: every time control passes through the loop, it gets executed. You don't need to reconnect to your server once you are already connected. Try placing the statement before the for statement.

Crontab is not running mysql command

I have set up a cron job using crontab -e as follows:
12 22 * * * /usr/bin/mysql >> < FILE PATH >
This does not run the mysql command; it only creates a blank file.
A mysqldump command, on the other hand, runs fine via cron.
What could the problem be?
Surely mysql is the interactive interface into MySQL.
Assuming that you're just running mysql and appending its output to your file with >>, the first time it tries to read from standard input it will probably get an end-of-file and exit.
Perhaps you might want to think about providing a command for it to process, something like:
12 22 * * * /usr/bin/mysql
-u me
-pnever_you_mind
-e "select * from my_table"
-D my_database
>>/home/me/output_file
(split across multiple lines for readability, but it should all be on one line).
As an aside, that's not overly secure, since your password may be visible from ps while the process is running. Since it's only an example, I'm not too worried, but you should consider storing the password in a properly secured my.cnf file if you go down this path.
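A sketch of that my.cnf approach (file location and permissions follow the usual client convention; substitute your own credentials):
# create a client options file readable only by you
cat > ~/.my.cnf <<'EOF'
[client]
user=me
password=never_you_mind
EOF
chmod 600 ~/.my.cnf
# the cron entry then needs no -u or -p flags:
# 12 22 * * * /usr/bin/mysql -e "select * from my_table" -D my_database >>/home/me/output_file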
In terms of running a shell script from cron which in turn executes MySQL commands, that should work as well. One choice is a here-doc:
/usr/bin/mysql -u me -pnever_you_mind -D my_database <<EOF
select * from my_table;
select * from my_other_table where id = 74;
EOF
12 22 * * * /usr/bin/mysql >> < FILE PATH > 2>&1
Redirect your error messages to the same file so you can debug it.
There is also a good article about how to debug cron jobs:
How to debug a broken cron job