Cron to run every 4 hours with a different name, then overwrite (MySQL db)

I want to run a cron job every 4 hours to back up my MySQL database.
After 24 hours I want it to cycle back to the first file name and overwrite each file in turn.
The best I have come up with is:
15 0 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup1.sql
15 4 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup2.sql
15 8 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup3.sql
15 12 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup4.sql
15 16 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup5.sql
15 20 * * * /usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup6.sql
Is there a more efficient way to do this? And does cron automatically overwrite the file, or is there a switch I need to add?
New to server stuff but gotta learn!

I would go for something like:
15 */4 * * * /bin/bash /path/to/your/script.sh
This executes /bin/bash /path/to/your/script.sh every 4 hours at minute 15.
And then let script.sh be:
num=$(( (10#$(date "+%H") + 4) / 4 ))   # 10# forces base-10: date emits 08/09, which bash would reject as octal
/usr/bin/mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup${num}.sql
To get:
hour  num
  0    1
  4    2
  8    3
 12    4
 16    5
 20    6
I use:
$(date "+%H"), which returns the current hour (00-23).
Adding 4 shifts that to the range 4-24.
Dividing by 4 (integer division) then yields 1-6, one slot per 4-hour window.
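The hour-to-number mapping can be sanity-checked with a quick bash loop (same arithmetic; the 10# prefix forces base-10, since date +%H emits the leading-zero hours 08 and 09, which bash arithmetic would otherwise reject as invalid octal):

```shell
# show which backup file each cron slot writes
for h in 00 04 08 12 16 20; do
    num=$(( (10#$h + 4) / 4 ))    # 10# forces base-10; bare 08/09 are invalid octal in bash
    echo "hour $h -> backup${num}.sql"
done
```

This prints backup1.sql through backup6.sql for hours 00 through 20.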

You could simply rotate your backups before the actual backup:
#!/bin/sh
# shift backup1..backup5 up one slot, oldest first, so nothing is overwritten early
for I in $(jot 5 5 1); do      # jot 5 5 1 prints 5 4 3 2 1 (jot is a BSD tool)
    NN=$((I + 1))
    mv "backup$I.sql" "backup$NN.sql"
done
mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > backup1.sql
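Since jot is generally absent on Linux, the same rotation can be sketched there with seq, again counting down so nothing is clobbered before it is moved (paths and credentials assumed from the question):

```shell
#!/bin/sh
# shift backup1..backup5 up one slot; the oldest copy (backup6) is overwritten
for i in $(seq 5 -1 1); do
    # the [ -f ... ] guard skips slots that don't exist yet (e.g. on the first runs)
    [ -f "/PATH/backup$i.sql" ] && mv -f "/PATH/backup$i.sql" "/PATH/backup$((i + 1)).sql"
done
mysqldump -u DBUSERNAME -pDBPASSWORD DBNAME > /PATH/backup1.sql
```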


Subtract fixed number of days from date column using awk and add it to new column

Let's assume that we have a file with the values seen below:
% head test.csv
20220601,A,B,1
20220530,A,B,1
And we want to add two new columns, one with the date minus 1 day and one with the date minus 7 days, resulting in the following:
% head new_test.csv
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
The awk that was used to produce the above is:
% awk 'BEGIN{FS=OFS=","} { a="date -d \"$(date -d \""$1"\") -7 days\" +'%Y%m%d'"; a | getline st ; close(a) ;b="date -d \"$(date -d \""$1"\") -1 days\" +'%Y%m%d'"; b | getline cb ; close(b) ;print $1","$2","$3","st","cb","$4}' test.csv > new_test.csv
But applying the above to a large file with more than 100K lines takes 20 minutes; is there any way to optimize the awk?
One GNU awk approach:
awk '
BEGIN { FS = OFS = ","
        secs_in_day = 60 * 60 * 24
}
{
    dt  = mktime( substr($1,1,4) " " substr($1,5,2) " " substr($1,7,2) " 12 0 0" )
    dt1 = strftime("%Y%m%d", dt - secs_in_day)
    dt7 = strftime("%Y%m%d", dt - (secs_in_day * 7))
    print $1, $2, $3, dt7, dt1, $4
}
' test.csv
This generates:
20220601,A,B,20220525,20220531,1
20220530,A,B,20220523,20220529,1
NOTES:
requires GNU awk for the mktime() and strftime() functions; see GNU awk time functions for more details
other flavors of awk may have similar functions, ymmv
You can try using function calls; it is faster than rebuilding the command pipeline inline for every line.
awk -F, '
function cmd1(date,    a, st) {
    a = "date -d \"$(date -d \"" date "\") -1 days\" +%Y%m%d"
    a | getline st
    close(a)                   # close before returning, or open pipes pile up
    return st
}
function cmd2(date,    b, cm) {
    b = "date -d \"$(date -d \"" date "\") -7 days\" +%Y%m%d"
    b | getline cm
    close(b)
    return cm
}
{
    $5 = cmd1($1)
    $6 = cmd2($1)
    print $1, $2, $3, $5, $6, $4
}' OFS=, test > newFileTest
I executed this against a file with 20,000 records and it finished in seconds, compared to around 5 minutes for the original awk.

How to write a for loop to perform an operation N times in the ash shell?

I'm looking to run a command a given number of times in an Alpine Linux docker container which features the /bin/ash shell.
In Bash, this would be
bash-3.2$ for i in {1..3}
> do
> echo "number $i"
> done
number 1
number 2
number 3
However, the same syntax doesn't seem to work in ash:
> docker run -it --rm alpine /bin/ash
/ # for i in 1 .. 3
> do echo "number $i"
> done
number 1
number ..
number 3
/ # for i in {1..3}
> do echo "number $i"
> done
number {1..3}
/ #
I had a look at https://linux.die.net/man/1/ash but wasn't able to easily find out how to do this; does anyone know the correct syntax?
I ended up using seq with command substitution:
/ # for i in $(seq 10)
> do echo "number $i"
> done
number 1
number 2
number 3
number 4
number 5
number 6
number 7
number 8
number 9
number 10
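If the count needs to come from outside the container (say, docker run -e N=5 ...), the seq form takes a variable just as easily. A sketch, where N is an assumed environment variable:

```shell
# run the loop body $N times; default to 3 when N is unset (assumption)
N="${N:-3}"
for i in $(seq "$N"); do
    echo "number $i"
done
```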
Simply list the items, as you would in bash or any other shell:
$ ash -c "for i in a b c 1 2 3; do echo i = \$i; done"
output:
i = a
i = b
i = c
i = 1
i = 2
i = 3
Another POSIX-compatible alternative, which avoids the external seq call altogether, is a while loop with shell arithmetic:
i=1; while [ ${i} -le 3 ]; do
echo ${i}
i=$(( i + 1 ))
done

Website loading issue in all browsers on DSL Connection

I am running Ubuntu 14.04 and I am using a DSL connection. For the last few days I have had problems loading some websites, like ubuntuforums, hackerrank, and MDN. I have tried almost all solutions available on the internet. I only have this problem on the DSL connection; over Wi-Fi everything works fine.
1) I have changed the default DNS to Google Public DNS.
2) This is the outcome of my traceroute ubuntuforums.org:
SinScary@avenger:~:$ traceroute ubuntuforums.org
traceroute to ubuntuforums.org (91.189.94.12), 30 hops max, 60 byte packets
1 100.65.128.1 (100.65.128.1) 0.943 ms 0.916 ms 0.578 ms
2 * * *
3 172.31.210.158 (172.31.210.158) 1.657 ms 1.649 ms 1.634 ms
4 172.31.10.73 (172.31.10.73) 1.950 ms 1.901 ms 1.881 ms
5 10.0.248.2 (10.0.248.2) 2.134 ms 2.126 ms 2.225 ms
6 ws86-230-252-122.rcil.gov.in (122.252.230.86) 1.834 ms 1.372 ms 1.891 ms
7 ws85-230-252-122.rcil.gov.in (122.252.230.85) 1.716 ms 1.830 ms 1.900 ms
8 172.31.210.42 (172.31.210.42) 2.193 ms 2.184 ms 2.231 ms
9 172.31.10.66 (172.31.10.66) 39.689 ms 39.668 ms 39.665 ms
10 172.31.10.198 (172.31.10.198) 2.109 ms 2.163 ms 2.093 ms
11 aes-static-113.195.22.125.airtel.in (125.22.195.113) 1.898 ms 1.863 ms 2.436 ms
12 182.79.245.141 (182.79.245.141) 214.911 ms 182.79.222.109 (182.79.222.109) 220.000 ms *
13 * * *
14 ae-126-3512.edge5.london1.Level3.net (4.69.166.45) 151.005 ms ae-123-3509.edge5.London1.Level3.net (4.69.166.33) 149.374 ms 141.512 ms
15 SOURCE-MANA.edge5.London1.Level3.net (212.187.138.82) 195.375 ms 159.963 ms 160.023 ms
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * *
3) I have disabled IPv6, as suggested in this answer by Mitch.
4) Pinging ubuntuforums.org and hackerrank.com succeeds.
5) Appended nameserver 8.8.8.8 to /etc/resolv.conf.
6) Reinstalled Chrome.
7) Also tried the following commands:
sudo apt-get install resolvconf
sudo dpkg-reconfigure resolvconf
8) Tried disabling dnsmasq.
9) This is the output of route -n:
SinScary@avenger:~:$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 100.65.128.1 0.0.0.0 UG 0 0 0 ppp0
100.65.128.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
SinScary@avenger:~:$
10) This is the output of cat /etc/hosts:
SinScary@avenger:~:$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 avenger
127.0.0.2 sinscary.com sinscary
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
SinScary@avenger:~:$
This is the error I am getting while loading the websites:
ERR Timed Out
Any help with this?

mysql query does not execute

I am trying to run some queries from bash. First, how can I connect once and perform SELECT queries on different databases? Second, the following code does not work:
> $LOG_FILE
> $SQL_FILE
for sam in $db
do
    echo "USE ${sam}; SELECT login, FORMAT(SUM(PROFIT), 2) AS PROFIT FROM MT4_TRADES WHERE CLOSE_TIME >= '2016-12-01' AND CLOSE_TIME < '2016-02-29' AND CMD IN (0 , 1) GROUP BY LOGIN LIMIT 10;" >> ${SQL_FILE}
done
while read line
do
    echo "beginning: `date "+%F %T"`" | tee -a ${LOG_FILE}
    out=`echo "$line" | mysql -N --host=${Host} --user=${User} --password=${Passwd} 2>&1`
    echo "$out" >> ${LOG_FILE}
    if [[ ${?} -eq 0 ]]; then
        echo "RESULTS FETCHED: `date "+%F %T"`" | tee -a ${LOG_FILE}
    else
        echo "FETCHING RESULT failed" | tee -a ${LOG_FILE}
        exit 1
    fi
done < ${SQL_FILE}
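One reason the status check misfires: by the time $? is tested, the echo "$out" >> ${LOG_FILE} line has already run, so $? holds echo's exit status, not mysql's. A minimal sketch of saving the status immediately (run_query is a hypothetical stand-in for the mysql pipeline):

```shell
run_query() {
    # hypothetical stand-in for: echo "$line" | mysql ... 2>&1
    echo "some output"
    return 1          # simulate a failed query
}

out=$(run_query)
status=$?             # save the exit status before any other command overwrites it
echo "$out" >> query.log
if [ "$status" -eq 0 ]; then
    echo "RESULTS FETCHED"
else
    echo "FETCHING RESULT failed"   # this branch runs: status is 1
fi
```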

Column width of mysql output

The output of a mysql command doesn't correctly align the titles (run from a script).
Here is the SQL command in script.sh:
#!/usr/bin/bash
echo "select * from hw_inventory "| mysql --host=localhost --user=root --database=monitor > /tmp/inventory
This gives me following output in /tmp/inventory
ip_address entity_index entity_physname entity_physdesc entity_serial
10.212.0.1 1000 Switch 1 WS-C3850-12S FOC1842U117
10.212.0.1 1009 Switch 1 - Power Supply A Switch 1 - Power Supply A LIT18300URD
10.212.0.1 1010 Switch 1 - Power Supply B Switch 1 - Power Supply B LIT183506NH
10.212.0.1 1034 Switch 1 FRU Uplink Module 1 2x1G 2x10G Uplink Module FOC18363NJX
As you can see, the alignment (tabs) is off; the text "Switch 1" should start under entity_physname.
It needs to look like the following output:
ip_address  entity_index  entity_physname               entity_physdesc            entity_serial
10.212.0.1  1000          Switch 1                      WS-C3850-12S               FOC1842U117
10.212.0.1  1009          Switch 1 - Power Supply A     Switch 1 - Power Supply A  LIT18300URD
10.212.0.1  1010          Switch 1 - Power Supply B     Switch 1 - Power Supply B  LIT183506NH
10.212.0.1  1034          Switch 1 FRU Uplink Module 1  2x1G 2x10G Uplink Module   FOC18363NJX
Any ideas?
Thanks in advance
For formatting the output from mysql, use the -t option (note the echo, which was missing in the question's pipeline):
echo "select * from hw_inventory" | mysql -t --host=localhost --user=root --database=monitor > /tmp/inventory
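If you'd rather keep mysql's plain tab-separated output and align it after the fact, an alternative sketch (file name assumed from the question) is column, splitting on tabs only so multi-word fields stay intact:

```shell
# align the tab-separated dump into fixed-width columns
column -t -s "$(printf '\t')" /tmp/inventory
```

Unlike mysql -t, this produces no ASCII borders, just padded columns.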