How to format output from a program spawned from an expect script - tcl

I am writing a load-testing script for a RADIUS server using Tcl and Expect.
I am invoking radclient, which comes bundled with the RADIUS server, from my script on a remote server.
The script does the following:
- take the remote server IP
- spawn ssh to the remote server
- invoke radclient
- perform the load test using radclient commands
- collect the result from the output (as shown in the sample output below) into a variable
- extract authentications/sec, as transactions per second (TPS), from the output or from the variable of the previous step
I need help with the last two steps.
Sample output from radclient:
    *--> timetest 20 10 2 1 1
    Cycles: 10, Repetitions: 2, Requests per Cycle: 10
    Starting User Number: 1, Increment: 1
    Current Repetition Number=1
    Skipping Accounting On Request
    Total Requests=100, Total Responses=100, Total Accepts=0 Total Not Accepts=100
    1: Sending 100 requests and getting 100 responses took 449ms, or 0.00 authentications/sec
    Current Repetition Number=2
    Skipping Accounting On Request
    Total Requests=100, Total Responses=100, Total Accepts=0 Total Not Accepts=100
    2: Sending 100 requests and getting 100 responses took 471ms, or 0.00 authentications/sec
Expected Output:
    TPS achieved = 0

You might use something like this:
    expect -re {([\d.]+) authentications/sec}
    set authPerSec $expect_out(1,string)
    puts "TPS achieved = $authPerSec"
However, that's not to say that the information extracted is the right information. For example, when run against your test data it is likely to come unstuck: authentications/sec appears twice (once per repetition), and we don't account for that at all! More complex patterns can extract more information, and so on.
    expect {
        -re {([\d.]+) authentications/sec} {
            set authPerSec $expect_out(1,string)
            puts "TPS achieved #[incr count] = $authPerSec"
            exp_continue
        }
        "bash$" {
            # System prompt means stop expecting; tune for what you've got...
        }
    }
Doing the right thing can be complex sometimes…
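Putting the pieces together, here is a minimal end-to-end sketch. The user name, the prompt patterns, and the way radclient is launched are assumptions you will need to adapt to your environment:

    #!/usr/bin/expect -f
    # Hedged sketch: the ssh user, the shell prompt pattern and the radclient
    # invocation below are placeholders, not taken from the question.
    set serverIp [lindex $argv 0]
    set timeout 60
    set count 0
    set total 0.0

    spawn ssh user@$serverIp
    expect "\\$ "                 ;# assumes key-based auth and a $-terminated prompt
    send "radclient\r"            ;# adjust to however radclient is launched
    expect "*-->"                 ;# radclient prompt, as in the sample output
    send "timetest 20 10 2 1 1\r"

    expect {
        -re {([\d.]+) authentications/sec} {
            set total [expr {$total + $expect_out(1,string)}]
            incr count
            exp_continue
        }
        "*-->" {}                 ;# prompt is back: the test run has finished
    }
    if {$count > 0} {
        puts "TPS achieved = [expr {$total / $count}]"
    }

Averaging over the repetitions is just one choice; you could equally report each per-repetition figure, as the earlier loop does.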

Related

Snakemake fails to produce output

I'm trying to run fastqc on two paired files (1.fq.gz and 2.fq.gz). Running:
    snakemake --use-conda -np newenv/1_fastqc.html
...produces what looks like a sensible DAG:
    Building DAG of jobs...
    Job stats:
    job           count    min threads    max threads
    ----------  -------  -------------  -------------
    curlewtest        1              1              1
    total             1              1              1

    [Sat May 21 11:27:40 2022]
    rule curlewtest:
        input: 1.fq.gz, 2.fq.gz
        output: newenv/1_fastqc.html, newenv/2_fastqc.html
        jobid: 0
        resources: tmpdir=/tmp

    fastqc 1.fq.gz 2.fq.gz
    Job stats:
    job           count    min threads    max threads
    ----------  -------  -------------  -------------
    curlewtest        1              1              1
    total             1              1              1
    This was a dry-run (flag -n). The order of jobs does not reflect the order of execution.
When I run the job with snakemake --use-conda --cores all newenv/1_fastqc.html, the analysis runs, but the output files fail to appear. Snakemake also throws the following error:
    Waiting at most 5 seconds for missing files.
    MissingOutputException in line 2 of /mnt/data/kcollier/snakemake-workspace/snakefile:
    Job Missing files after 5 seconds:
    newenv/1_fastqc.html
    newenv/2_fastqc.html
    This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
    Job 0 completed successfully, but some output files are missing.
    Shutting down, this might take some time.
    Exiting because a job execution failed. Look above for error message
Increasing --latency-wait does not help. The output directory I created beforehand (newenv) also disappears. Does anyone know why this is happening?
Without your code it is difficult to say precisely what causes the error, but this can happen if your shell command (or script) does not produce the output exactly as stated in the output directive of the rule - it may be something as simple as an error in the file paths.
Try running the shell command that Snakemake runs and see whether the output files you expect actually get created. You can easily see the commands that Snakemake runs by adding the -p/--printshellcmds flag or the --verbose flag to your snakemake command.
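With fastqc specifically, a common culprit is that it writes its reports next to the input files rather than into newenv. A hypothetical rule for comparison (the original Snakefile isn't shown) that puts the reports where the output directive expects them:

    rule curlewtest:
        input:
            "1.fq.gz",
            "2.fq.gz",
        output:
            "newenv/1_fastqc.html",
            "newenv/2_fastqc.html",
        shell:
            # -o/--outdir tells fastqc which directory to write its reports to
            "fastqc -o newenv {input}"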

zabbix: fping failed: simplejson.scanner.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I want to use ICMPPING in Zabbix as a simple check. I know it uses fping, but I want to override the fping program to do my desired work. The fping utility gives the result of an ICMP request from the single system that runs it, whereas I want to use an API (called via curl) that returns the availability of an IP address as seen from multiple servers. I wrote the program in Python and it is working well, but I don't know how to send the result to Zabbix! At the moment it simply produces 1 if the IP is online and 0 if it is offline. I think I should submit the result in a JSON-like format, but I do not know the right syntax! I previously wrote a script for discovering LVM partitions and submitted the result with the following JSON syntax:

    {"data":[{"{#MDNAME}":"md1"},{"{#MDNAME}":"md127"},{"{#MDNAME}":"md2"}]}

But I don't know the correct JSON syntax for icmpping! Any help is appreciated.
It does not use JSON; Zabbix just parses the fping output, so you would have to emulate that.
For example, fping output with the default settings and 3 packets sent looks like this:
    > fping -C 3 127.0.0.1
    127.0.0.1 : [0], 96 bytes, 0.07 ms (0.07 avg, 0% loss)
    127.0.0.1 : [1], 96 bytes, 0.06 ms (0.06 avg, 0% loss)
    127.0.0.1 : [2], 96 bytes, 0.07 ms (0.06 avg, 0% loss)
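If you go that route, a rough Python sketch of a wrapper that prints your API result in that shape might look like this. check_via_api is a placeholder for your existing curl/API logic, and taking the target as the last argument is an assumption about how Zabbix invokes fping:

    #!/usr/bin/env python3
    # Hedged sketch: emulate the fping lines above from a curl/API check.
    import sys

    def check_via_api(ip):
        # Placeholder: call your API here; return latency in ms, or None if down.
        return 0.07

    ip = sys.argv[-1]          # assumes the target IP is the last argument
    packets = 3
    times = []
    for seq in range(packets):
        latency = check_via_api(ip)
        if latency is None:
            print(f"{ip} : [{seq}], timed out (NaN avg, 100% loss)")
            continue
        times.append(latency)
        avg = sum(times) / len(times)
        loss = round(100 * (seq + 1 - len(times)) / (seq + 1))
        print(f"{ip} : [{seq}], 96 bytes, {latency:.2f} ms ({avg:.2f} avg, {loss}% loss)")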

How to nicely format my console/terminal text as HTML

I have a Java program which sends results (as seen in a console) via HTTP to a browser. The real results are nicely formatted with tabs and newlines, as seen below:
    ./src/yse4 : The YSE emulator in use
            [-w] - [w]rap around at tracefile end and begin anew at time zero.
            [-f filename] - [-f tracefile to use from]
    /home/Downloads/yse.wzt/tracefiles/capacity.3Mbps_400RTT_PER_0.0001.txt
    File name: tracefiles/capacity.3Mbps_400RTT_PER_0.0001.txt
    200         Forwarding Delay (ms)
    200         Reversed Delay (ms)
    3000000     Download Capacity (Mbps)
    3000000     Upload Capacity (Mbps)
    0.0001      Packet Error Rate (PER)
    at=eth1
    an=eth0
But when I send it as HTML, the browser of course does not render the tabs and newlines. I manually add <br> at the end of each line, but the tabs are still missing, and the browser shows it as below:
    ./src/yse4 : The YSE emulator in use
    [-w] - [w]rap around at tracefile end and begin anew at time zero.
    [-f filename] - [-f tracefile to use from]
    /home/Downloads/yse.wzt/tracefiles/capacity.3Mbps_400RTT_PER_0.0001.txt
    File name: tracefiles/capacity.3Mbps_400RTT_PER_0.0001.txt
    200 Forwarding Delay (ms)
    200 Reversed Delay (ms)
    3000000 Download Capacity (Mbps)
    3000000 Upload Capacity (Mbps)
    0.0001 Packet Error Rate (PER)
    at=eth1
    an=eth0
How can I format it as HTML so it displays nicely? Is there perhaps a library for that?
You can return an HTML table containing two columns (value and description) and format the table as you want.
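For instance, a minimal sketch using values from the sample output (styling is left to you):

    <table>
      <tr><td>200</td><td>Forwarding Delay (ms)</td></tr>
      <tr><td>200</td><td>Reversed Delay (ms)</td></tr>
      <tr><td>3000000</td><td>Download Capacity (Mbps)</td></tr>
      <tr><td>0.0001</td><td>Packet Error Rate (PER)</td></tr>
    </table>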

keepalived + MySQL with periodic MISC_CHECK

I have a Keepalived + MySQL (master-master) setup.
I have kept the priority the same for MASTER and BACKUP because I don't want them to start flapping frequently (a one-time switch of the VIP is good enough).
This setup works fine if I use a simple 'vrrp_script' to check whether the mysql daemon is down, e.g.:
    # script to check the mysql daemon
    vrrp_script chk_mysql {
        script "killall -0 mysqld"   # check whether the mysqld process exists
        interval 2                   # check every 2 seconds
        weight 2
    }
I want to make it work with a deeper health check implemented in a Python script. I want to use MISC_CHECK for that, e.g.:

    MISC_CHECK {
        misc_path "script_to_call_python_script.sh xxxx xxxx xxxx xxxx"
        misc_timeout 5
    }
My questions are:
- How can I make the MISC_CHECK run at specified intervals?
- Alternatively, what is the required output of a script used in 'vrrp_script', so that I could run my shell script there (at a periodic interval)?
Place the Python code in a folder and call it from your vrrp_script like this:

    vrrp_script chk_mysql {
        script "location of your python script"
        interval "the specified interval"
        weight 2
    }

Have the script exit with status 0 (healthy) or 1 (failed) depending on the check.
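A minimal sketch of such a deeper check (the connection details and the use of the PyMySQL library are assumptions; the part keepalived looks at is the exit status):

    #!/usr/bin/env python3
    # Hedged sketch of a deeper MySQL health check for vrrp_script.
    # Exit status 0 tells keepalived the check passed; non-zero that it failed.
    import sys
    import pymysql  # assumes the PyMySQL client library is installed

    try:
        conn = pymysql.connect(host="127.0.0.1", user="monitor",
                               password="secret", connect_timeout=3)
        with conn.cursor() as cur:
            cur.execute("SELECT 1")   # any query that proves the server answers
            cur.fetchone()
        conn.close()
        sys.exit(0)   # healthy
    except Exception:
        sys.exit(1)   # failed; keepalived counts this against the node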
As @nimesh said above, vrrp_script supports Python scripts directly: just put your shell/Python/Ruby script location in the script "location of your script" setting.

How can I log "show processlist" when there are more than n queries?

Our mysql processes can sometimes get backlogged and processes begin queuing up. I'd like to debug when and why this occurs by logging the processlist during slow times.
I'd like to run show full processlist; via a cron job and save the output to a text file if there are more than 50 rows returned.
Can you point me in the right direction?
For example:
    echo "show full processlist;" | mysql -uroot > processlist-`date +%F-%H-%M`.log
I'd like to run that only when the result contains the text "50 rows in set" (or more than 50 rows).
pt-stalk is designed for this exact purpose. It samples the process list every second (or whatever time you specify), then when a threshold is reached (Threads_running is the default and is what you want in this case), collects a whole bunch of data, including disk activity, tcpdumps, multiple samples of the process list, server status variables, mutex/innodb status, and a bunch more.
Here's how to start it:
    pt-stalk --daemonize --dest /var/lib/pt-stalk --collect-tcpdump --threshold 50 --cycles 1 --disk-pct-free 20 --retention-time 3 -- --defaults-file=/etc/percona-toolkit/pt-stalk_my.cnf
The command above will sample Threads_running (--threshold; set this to your value for n) every second (the default --interval) and fire a data collection if Threads_running is greater than 50 for 1 consecutive sample (--cycles). 3 days (--retention-time) of samples will be kept, and collection will not fire if less than 20% of your disk is free (--disk-pct-free). At each collection, a pcap-format tcpdump will be executed (--collect-tcpdump), which can be analyzed with either conventional tcpdump tools or a number of other Percona Toolkit tools, including pt-query-digest and pt-tcp-model. There will be a 5-minute rest between samples (the default --sleep) in order to prevent you from DoS'ing yourself. The process will be daemonized (--daemonize). The parameters after -- are passed to all mysql/mysqladmin commands, so that is a good place to set things like --defaults-file, where you can store your login credentials away from prying eyes.
First of all, make sure MySQL's slow query log isn't what you actually need. Also note that MySQL's -e parameter allows you to specify a query on the command line.
Turning the logic around, this saves the process list and then removes it when the list isn't long enough:
    date=$(date +...)   # set the desired date format here
    # 51 lines = 50 process rows plus the column-header line that mysql prints
    [ $(mysql -uroot -e "show full processlist" | tee plist-$date.log | wc -l) -lt 51 ] && rm plist-$date.log
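To run it from cron, save those lines as a script and add a crontab entry along these lines (the path and the once-a-minute schedule are assumptions, not from the question):

    # sample the process list every minute
    * * * * * /usr/local/bin/processlist-snapshot.sh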