keepalived + MySQL with periodic MISC_CHECK

I have a Keepalived + MySQL (master-master) setup.
I have kept the priority the same for MASTER and BACKUP because I don't want them to start flapping frequently (a one-time switch of the VIP is good enough).
This setup works fine if I use a simple vrrp_script to check whether the mysql daemon is down, e.g. a script to check the mysql daemon:
vrrp_script chk_mysql {
    script "killall -0 mysqld"   # check whether a mysqld process exists
    interval 2                   # check every 2 seconds
    weight 2
}
I want to make it work with a deeper health check using a Python script, and I want to use MISC_CHECK for that, e.g.:
MISC_CHECK {
    misc_path "script_to_call_python_script.sh xxxx xxxx xxxx xxxx"
    misc_timeout 5
}
My questions are:
How can I make MISC_CHECK run at specified intervals?
Otherwise, what is the required output of the script in vrrp_script, so that I could run a shell script there (one which runs at a periodic interval)?

Place the Python code in a folder and call it from your vrrp_script like this:
vrrp_script chk_mysql {
    script "location of your python script"
    interval <the specified interval, in seconds>
    weight 2
}
Have the script exit with status 0 (check passed) or 1 (check failed); keepalived looks at the exit status, not at what the script prints.

As @nimesh said above, vrrp_script supports Python scripts directly. Just put your shell/Python/Ruby script's location in the script "location of your script" setting.
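For reference, here is a minimal sketch of what such a deeper check could look like, assuming a hypothetical wrapper /usr/local/bin/chk_mysql_deep.sh and a credentials file /etc/keepalived/.my.cnf (both names are placeholders, not from the original post). keepalived treats an exit status of 0 as success and any non-zero status as failure:
#!/bin/bash
# /usr/local/bin/chk_mysql_deep.sh (hypothetical path)
# Deeper health check: succeed only if MySQL answers a real query.
# Credentials come from an option file so they stay off the command
# line and out of the process list.
if mysql --defaults-extra-file=/etc/keepalived/.my.cnf \
         --connect-timeout=3 -e 'SELECT 1;' >/dev/null 2>&1; then
    exit 0   # healthy: daemon is up AND accepting queries
else
    exit 1   # unhealthy: keepalived marks the check as failed
fi
Referenced from keepalived.conf, it then runs every interval seconds, just like the killall-based check:
vrrp_script chk_mysql_deep {
    script "/usr/local/bin/chk_mysql_deep.sh"
    interval 5
}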


Can you set an artificial starting point in your code in Octave?

I'm relatively new to using Octave. I'm working on a project that requires me to collect the RGB values of all the pixels in a particular image and compare them to a list of other values. This is a time-consuming process that takes about half a minute to run. As I make edits to my code and test it, I find it annoying that I need to wait for 30 seconds to see if my updates work or not. Is there a way where I can run the code once at first to load the data I need and then set up an artificial starting point so that when I rerun the code (or input something into the command window) it only runs a desired section (the section after the time-consuming part) leaving the untouched data intact?
You may store the variables you want to keep as global variables, and then use clear -v instead of clear all.
clear all is a kind of atomic bomb, loved by many users; I have never understood why. Fortunately, it does not close the session: that job is left to quit() ;-)
To illustrate the proposed solution:
>> a = rand(1,3)
a =
0.776777 0.042049 0.221082
>> global a
>> clear -v
>> a
error: 'a' undefined near line 1, column 1
>> global a
>> a
a =
0.776777 0.042049 0.221082
Octave works in an interactive session. If you run your script in a new Octave session each time, you will have to re-compute all your values each time. But you can also start Octave and then run your script at the interactive terminal. At the end of the script, the workspace will contain all the variables your script used. You can type individual statements at the interactive terminal prompt, which use and modify these variables, just like running a script one line at the time.
You can also set breakpoints. You can set a breakpoint at any point in your script, then run your script. The script will run until the breakpoint, then the interactive terminal will become active and you can work with the variables as they are at that point.
If you don't like the interactive stuff, you can also write a script this way:
clear
if 1
% Section 1
% ... do some computations here
save my_data
else
load my_data
end
% Section 2
% ... do some more computations here
When you run the script, Section 1 will be run, and the results saved to file. Now change the 1 to 0, and then run the script again. This time, Section 1 will be skipped, and the previously saved variables will be loaded.

How can I invoke a shell or Perl script from iptables?

We're using CentOS and would like to ban several Asian countries from accessing the entire server. Almost every IP we check which has tried to hack into our server is allocated to an Asian country (Russia, China, Pakistan, etc.)
We have an IP to country MySQL database we can efficiently query and would like to try something like:
-A INPUT -p tcp -m tcp --dport 80 -j /path/to/perlscript.pl
The script would need the IP passed in as an argument, then it would return either an ACCEPT or DROP target?
Thanks for the answers, here's my follow up.
Do you know if it is possible, though? Having a rule point to a script which returns a target (ACCEPT/DROP)?
Not entirely sure how ipset works, so I will have to experiment I guess, but it looks like it creates a single rule. How would it handle Russia, for example, which has over 6,000 ranges assigned to it? And we want to add probably 20-40 countries in total, so we could end up needing to add in excess of 100,000 ranges. Wouldn't the overhead of a single MySQL query be less taxing?
SELECT country FROM ip_countries WHERE $VAR{ip} >= range1 && $VAR{ip} <= range2
The database we use is freely available here: http://software77.net/geo-ip/
It represents IPs in the database by converting each IP to a number using this formula:
$VAR{numberedIP} = $octs[3] + ($octs[2] * 256) + ($octs[1] * 256 * 256) + ($octs[0] * 256 * 256 * 256);
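For example, under this formula 1.2.3.4 becomes 4 + 3*256 + 2*65536 + 1*16777216 = 16909060.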
It will store the start of the range in the "range1" column, and the end of the range in the "range2" column.
So you can see how we'd look up an IP using the above query. Literally takes less than a hundredth of a second to get a result and it's quite accurate. We have one website on a dedicated server, quite low traffic. But as with all servers I have ever checked, this one is hit daily by hackers' robots, checking email accounts, FTP accounts etc. And just about every web server I've ever worked on is compromised sooner or later. In our case, 99.99% of traffic from Asian countries has criminal intent attached to it.
We'd like this to run via iptables so that all ports are covered, not just HTTP as it would be with directives in, say, .htaccess.
Do you think ipset would still be faster and more efficient?
It would be far too slow to launch perl for every matching packet. The right tool for this sort of thing is ipset, and there is much more information and documentation available on the ipset man page.
In CentOS you can install it with yum. Naturally, all of these commands and the script need to run as root:
# yum install ipset
Next install the kernel modules (you'll want this to happen at boot as well):
# modprobe -v ip_set
# modprobe -v ip_set_hash_netport
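If you want the modules loaded at boot on CentOS 6, one way (a sketch; the file name ipset.modules is arbitrary) is an executable script under /etc/sysconfig/modules/, which rc.sysinit runs at startup:
# cat > /etc/sysconfig/modules/ipset.modules <<'EOF'
#!/bin/sh
# Loaded automatically at boot by rc.sysinit
modprobe ip_set
modprobe ip_set_hash_netport
EOF
# chmod +x /etc/sysconfig/modules/ipset.modules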
And then use a script like the following to populate an ipset and block IP's from its ranges using iptables:
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;
my $dbh = DBI->connect('... your DSN ...',...);
# I have no knowledge of your schema, but if you can pull the
# address range in the form: AA.BB.CC.DD/NN
my $ranges = $dbh->selectcol_arrayref(
q{SELECT cidr FROM your_table WHERE country_code IN ('CN',...)});
`ipset create geoblock hash:net,port`;
for (@$ranges) {
    # to match on port 80 (the protocol defaults to tcp):
    `ipset add geoblock $_,80`;
}
`iptables -I INPUT -m set --match-set geoblock src -j DROP`;
If you would like to block all ports rather than just 80, use the ip_set_hash_net module instead of ip_set_hash_netport, change hash:net,port to hash:net, and remove ,80 from the ipset add command.
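Concretely, the all-ports variant would look something like this (a sketch reusing the geoblock set name from the script above):
# modprobe -v ip_set_hash_net
# ipset create geoblock hash:net
# ipset add geoblock 1.2.3.0/24     # repeat for each CIDR range from the database
# iptables -I INPUT -m set --match-set geoblock src -j DROP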

How to format output from a program spawned from an Expect script

I am writing a load-testing script for a RADIUS server using Tcl and Expect.
I am invoking radclient, which comes bundled with the RADIUS server, from my script on a remote server.
The script does the following:
- take the remote server IP
- spawn ssh to the remote server
- invoke radclient
- perform the load test using radclient commands
- collect the result from the output (as shown in the sample output below) into a variable
- extract authentications/sec as transactions per second (TPS) from the output or from the variable of the previous step
I need help with the last two steps.
Sample output from radclient:
*--> timetest 20 10 2 1 1
Cycles: 10, Repetitions: 2, Requests per Cycle: 10
Starting User Number: 1, Increment: 1
Current Repetition Number=1
Skipping Accounting On Request
Total Requests=100, Total Responses=100, Total Accepts=0 Total Not Accepts=100
1: Sending 100 requests and getting 100 responses took 449ms, or 0.00 authentications/sec
Current Repetition Number=2
Skipping Accounting On Request
Total Requests=100, Total Responses=100, Total Accepts=0 Total Not Accepts=100
2: Sending 100 requests and getting 100 responses took 471ms, or 0.00 authentications/sec
Expected Output:
TPS achieved = 0
You might use something like this:
expect -re {([\d.]+) authentications/sec}
set authPerSec $expect_out(1,string)
puts "TPS achieved = $authPerSec"
However, that's not to say that the information extracted is the right information. For example, when run against your test data it is likely to come unstuck, as authentications/sec appears in two places due to the repetitions; we don't account for that at all! More complex patterns can extract more, for example a loop that collects every occurrence:
expect {
-re {([\d.]+) authentications/sec} {
set authPerSec $expect_out(1,string)
puts "TPS achieved #[incr count] = $authPerSec"
exp_continue
}
"bash$" {
# System prompt means stop expecting; tune for what you've got...
}
}
Doing the right thing can be complex sometimes…

How to trigger an OpenNMS event with thresholds

It seems that it is not possible for me to trigger an event in OpenNMS using a threshold...
First the facts (in as much detail as I can give):
I want to monitor an HTML file, or rather its content.
If a value is not what I expect, OpenNMS should notify me.
My HTML file contains:
Document Count: 5
In /var/lib/opennms/rrd/snmp/NODE there are two files named "documentCount" (.jrb & .meta)
--> because of the http-datacollection-config.xml
In my logfiles is written:
INFO [LegacyScheduler-Thread-2-of-50] RrdUtils: updateRRD: updating RRD file /var/lib/opennms/rrd/snmp/21/documentCount.jrb with values '1385031023:5'
So the "5" is collected correctly.
Now I created a threshold for this case:
<threshold type="high" ds-type="node"
value="4.0" rearm="2.0" trigger="1" triggeredUEI="uei.opennms.org/threshold/highThresholdExceeded"
filterOperator="or" ds-name="documentCount"
/>
The threshold is also enabled in my collectd-configuration.xml.
In my opinion the threshold of 4 is exceeded, because the value is 5, so the highThresholdExceeded event should be fired. BUT IT DOESN'T.
So I'm here to ask if someone has an idea.
Regards, dawn
Check collectd.log with the following:
tail -f collectd.log | grep -i thresholding
A while back, threshold checking was moved so that it is evaluated while the data is being retrieved, as opposed to being a post-process over the RRD files.
Even with the log level at INFO you should find some clues as to why the threshold rule is not matching any data.

How can I log "show processlist" when there are more than n queries?

Our MySQL server can sometimes get backlogged, with processes queuing up. I'd like to debug when and why this occurs by logging the process list during slow times.
I'd like to run show full processlist; via a cron job and save the output to a text file if there are more than 50 rows returned.
Can you point me in the right direction?
For example:
echo "show full processlist;" | mysql -uroot > processlist-`date +%F-%H-%M`.log
I'd like to run that only when the result contains the text 50 rows in set (or greater than 50 rows).
pt-stalk is designed for this exact purpose. It samples the process list every second (or whatever time you specify), then when a threshold is reached (Threads_running is the default and is what you want in this case), collects a whole bunch of data, including disk activity, tcpdumps, multiple samples of the process list, server status variables, mutex/innodb status, and a bunch more.
Here's how to start it:
pt-stalk --daemonize --dest /var/lib/pt-stalk --collect-tcpdump --threshold 50 --cycles 1 --disk-pct-free 20 --retention-time 3 -- --defaults-file=/etc/percona-toolkit/pt-stalk_my.cnf
The command above will sample Threads_running (--threshold; set this to your value for n) every second (the default --interval) and fire a data collection if Threads_running is greater than 50 for 1 consecutive sample (--cycles). 3 days (--retention-time) of samples will be kept, and collection will not fire if less than 20% of your disk is free (--disk-pct-free). At each collection, a pcap-format tcpdump will be executed (--collect-tcpdump), which can be analyzed with either conventional tcpdump tools or a number of other Percona Toolkit tools, including pt-query-digest and pt-tcp-model. There will be a 5-minute rest between samples (the default --sleep) in order to prevent DoS'ing yourself. The process will be daemonized (--daemonize). The parameters after -- are passed to all mysql/mysqladmin commands, so that is a good place to set options like --defaults-file, where you can store your login credentials away from prying eyes.
First of all, make sure MySQL's slow query log isn't what you actually need. Also, MySQL's -e parameter allows you to specify a query on the command line.
Turning the logic around, this saves the process list and removes it when the process list isn't long enough:
date=$(date +...) # set the desired date format here
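# note: mysql's tabular output includes a header line, so 51 lines of output correspond to 50 processes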
[ $(mysql -uroot -e "show full processlist" | tee plist-$date.log | wc -l) -lt 51 ] && rm plist-$date.log
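To tie it back to cron, you could drop that snippet into a script and run it every minute; a minimal sketch, assuming a hypothetical /usr/local/bin/log-processlist.sh holding the two lines above:
# /etc/cron.d/processlist -- check the process list once a minute as root
* * * * * root /usr/local/bin/log-processlist.sh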