MySQL login-path issues with clustercheck script used in xinetd

# default: on
# description: mysqlchk
service mysqlchk
{
        # this is a config for xinetd, place it in /etc/xinetd.d/
        disable         = no
        flags           = REUSE
        socket_type     = stream
        type            = UNLISTED
        port            = 9200
        wait            = no
        user            = root
        server          = /usr/bin/mysqlclustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        #
        # Passing arguments to clustercheck
        # <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
        # Recommended:      server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
        # Compatibility:    server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
        # 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra
        #
        # recommended to put the IPs that need
        # to connect exclusively (security purposes)
        per_source      = UNLIMITED
}
It is kind of strange: the script works fine when run manually, but when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the status for xinetd shows signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
No output is generated when it is run via xinetd (mysqlclustercheck).
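For reference, login paths are read from ~/.mylogin.cnf, which is located via $HOME (or $MYSQL_TEST_LOGIN_FILE if set), and xinetd may hand the script a different or empty environment. A quick way to compare the two situations (a sketch; the paths are assumptions):
# Emulate the bare environment xinetd may provide:
env -i /usr/bin/mysql_config_editor print --all
# Point HOME at the account that owns the login path:
env -i HOME=/root /usr/bin/mysql_config_editor print --all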

Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could verify your binary's location with the Linux which command.

It has been a long time since this question was asked, but it just came to my attention.
First of all, as mentioned, the Percona cluster check script is called clustercheck, so make sure you are using the correct name and correct path.
Secondly, since the script runs fine from the command line, it seems to me that the path of the mysql client command is not known to xinetd when it runs the cluster check script.
Since the mysqlclustercheck script, as shipped by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud@gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
then edit the script /usr/bin/mysqlclustercheck and, in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
replace the bare mysql with the exact path of the mysql client command you found in the previous step.
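For example, after the edit the line might read as follows (a sketch using the path found above; the rest of the line is unchanged from the stock script):
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \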
I also see that you are not using MySQL connection credentials when connecting to the MySQL server. The mysqlclustercheck script, as shipped by Percona, uses a user/password pair to connect to the MySQL server.
So normally, you should execute the script in the command line like:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Where haproxy/haproxyMySQLpass are the MySQL connection user/pass for the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting is because the script writes to a connection that the client has already closed; the stdout of a script run by xinetd is the network socket. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken-pipe signal (SIGPIPE, 13 in POSIX).
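If you do need debug output from a script run under xinetd, send it somewhere other than stdout, for example (a sketch; the log path is an arbitrary choice):
echo "debug message" >> /tmp/mysqlchk-debug.log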
Finally, I had issues with this script on SLES 12.3, and I finally managed to run it not as 'nobody' but as 'root'.
Hope it helps

Related

expect garbage before prompt

I am trying to connect to my router using ssh in order to automatically extract some logs from it.
I developed the code below:
#!/usr/bin/expect -f
spawn ssh root@192.168.1.1
expect "Are you sure you want to"
send -- "yes\r"
expect "password"
send -- "root\r"
expect "\#"
send -- "ls\r"
expect "\#"
The problem is that I get garbage before the prompt in the output log.
spawn ssh root@192.168.1.1
The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established.
RSA key fingerprint is SHA256:6aeE74qXMeQzg0SGJBZMIa0HFQ5HJrNqE5f3XZ6Irds.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/amin/.ssh/known_hosts).
root@192.168.1.1's password:
BusyBox v1.30.1 () built-in shell (ash)
OpenWrt Version: ALPHA
OpenWrt base: 19.07
------------------------------------
]0;root@openwrt: ~root@openwrt:~# ls
[0;0mnetwork[m
]0;root@openwrt: ~root@openwrt:~#
What's the main cause of this issue? How can I fix it?
The problem is that there are terminal escape sequences being issued, probably to control what colour the terminal uses. The easiest fix is to set the terminal type (an environment variable) to something that doesn't support colour before doing the spawn. Perhaps this will do the trick:
set env(TERM) "dumb"
If that doesn't work (it depends on exactly what is in someone's .bashrc) then you can just override the PS1 environment variable on the remote side with your first command after logging in.
# etc for logging in
expect "# "
send "PS1='# '\r"
expect "# "
# Everything should be right from here on
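Alternatively, you can force a dumb terminal type from the invoking shell, so the spawned session never advertises colour support (a sketch; login.exp is a hypothetical filename for the script above):
TERM=dumb ./login.exp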

smbclient --authentication-file "session setup failed: NT_STATUS_INVALID_PARAMETER" and "SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY"

(I have CentOS 7 with samba-client.x86_64 4.6.2-8.el7 against a Windows Server 2008 machine that is in an AD domain controlled by a separate Windows Server 2008 AD domain controller.)
Started with this:
smbclient -W my.domain -U myuser //svr.my.domain/fred mypassword -c list
... which worked great. Then I decided to move the domain, user, and password into a file and use -A as described in the smbclient manpage. File windows-credentials, content:
username=myuser
domain=my.domain
password=mypassword
... with command line:
smbclient -A windows-credentials //svr.my.domain/fred -c list
... did not work, and gave this error:
SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
session setup failed: NT_STATUS_NO_MEMORY
... an hour on the internet suggested lots of people have had this trouble, and just about each had a different accepted answer, none of which worked for me. I tried various combinations of their answers - in particular, https://askubuntu.com/questions/1008992/ubuntu-17-10-to-access-windows-files-shares-within-workplace-it - and ended up with the following.
Created a separate my.smb.conf with just:
[global]
# seems to get rid of
# SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
client use spnego = no
# seems to get rid of
# session setup failed: NT_STATUS_NO_MEMORY
client ntlmv2 auth = no
... and used:
smbclient -s my.smb.conf -A windows-credentials //svr.my.domain/fred -c list
... and it looks like it works, but I'm not really sure, as there seems to be credential caching and a complete lack of information on how this stuff works or is supposed to work.
Can anyone actually explain any of this? Even if not, perhaps yet another answer to this problem will help someone somewhere.
This appears to be specific to Windows 2008. Attaching to Windows Server 2016 works without the modified smb.conf file. I have been unable to locate any real details.
In case of problems with smbclient,
you can mount the SMB share and use it like a local folder:
mount -t cifs //<ip>/<share folder>$ /mnt -o user=<user>,pass=<password>,domain=<workdomain>
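To keep the password off the command line, mount.cifs also accepts a credentials file in the same key=value format as smbclient's -A file (a sketch; the file path is an assumption):
# /root/.smbcred contains username=, password= and domain= lines
chmod 600 /root/.smbcred
mount -t cifs //svr.my.domain/fred /mnt -o credentials=/root/.smbcred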

Debian Exim4 SMTP-AUTH stopped working

I have a strange problem that recently popped on my Debian Squeeze server.
I've had Exim4 configured to use SMTP-AUTH with encryption, set up and running on this box for a long time, but now it doesn't work.
At first I thought it was maybe my certificates expired, but that wasn't the case, they're good for several more years.
It appears that the server isn't listening on port 25 any longer.
If I try to telnet to port 25 it times out.
If I run netstat -tulpen on the server nothing is listening on port 25.
I'm using the splitconf for Exim4.
In conf.d/main I'm enabling MAIN_TLS_ENABLE=true
In conf.d/auth/30_exim4-config_examples I have the following
# Authenticate against local passwords using sasl2-bin
# Requires exim_uid to be a member of sasl group, see README.Debian.gz
plain_saslauthd_server:
  driver = plaintext
  public_name = PLAIN
  server_condition = ${if saslauthd{{$auth2}{$auth3}}{1}{0}}
  server_set_id = $auth2
  server_prompts = :
  .ifndef AUTH_SERVER_ALLOW_NOTLS_PASSWORDS
  server_advertise_condition = ${if eq{$tls_cipher}{}{}{*}}
  .endif
#
login_saslauthd_server:
  driver = plaintext
  public_name = LOGIN
  server_prompts = "Username:: : Password::"
  # don't send system passwords over unencrypted connections
  server_condition = ${if saslauthd{{$auth1}{$auth2}}{1}{0}}
  server_set_id = $auth1
  .ifndef AUTH_SERVER_ALLOW_NOTLS_PASSWORDS
  server_advertise_condition = ${if eq{$tls_cipher}{}{}{*}}
  .endif
On the server if I run this command:
swaks -a -tls -q HELO -s localhost -au A_USER_NAME -ap '<>'
I get this ...
=== Trying localhost:25...
* Error connecting 0.0.0.0 to localhost:25:
* IO::Socket::INET: connect: Connection refused
Can someone point me to some more advanced debugging techniques?
OK. I figured it out.
Comcast blocks port 25. I don't know why this is coming up now, unless they've recently started blocking it.
I had to change a line in /etc/default/exim4
From this
SMTPLISTENEROPTIONS='-oX 25 -oP /var/run/exim4/exim.pid'
To this
SMTPLISTENEROPTIONS='-oX 465:25 -oP /var/run/exim4/exim.pid'
I also added this to /etc/exim4/conf.d/main/03_exim4-config_tlsoptions
tls_on_connect_ports=465
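After restarting Exim you can confirm the listener change took effect and test the new port (a sketch mirroring the commands above; -tlsc is swaks' TLS-on-connect mode):
sudo /etc/init.d/exim4 restart
netstat -tulpen | grep exim
swaks -a -tlsc -q HELO -s localhost:465 -au A_USER_NAME -ap '<>'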
It's odd that this just popped up, unless a Debian package updated the /etc/default/exim4 file. It's confusing, but it's working. Hopefully this will be helpful to someone in the future.
Cheers.

User in passdb, but getpwnam() fails!

Attempting to set up Samba + OpenLDAP using nss_ldap.
After joining Windows 7 to the Samba standalone PDC, I cannot log in with a domain account unless that account is also added to the /etc/passwd file.
I get: user in passdb, but getpwnam() fails!
Everything I've read points to an nss_ldap issue, yet getent passwd shows users perfectly fine, and I am able to ssh into the same Linux host using a user account that exists only in the LDAP database.
Additionally, if I crack open the /etc/passwd file and add a line for the user in question, I can then log in.
I'm not using PAM. I added the two Windows 7 registry updates required per the Samba.org site.
Software stack is as follows:
Samba 3.5.3
OpenLDAP 2.4.21
nss_ldap 264
Thoughts/suggestions?
--------------------------------- UPDATE ---------------------------------
Getting closer! My nsswitch.conf did have files ldap, so I reversed the order (now ldap files) and something odd happened. Notice that before, I said I could log in with SSH and getent passwd dumped users in both ldap and files. After making the nsswitch.conf change, ldap before files, simple commands like ls took a long time. Additionally, I observed nss_ldap errors as follows:
ls: nss_ldap: could not search LDAP server - Server is unavailable
and
ls: nss_ldap: failed to bind to LDAP server ldap://tsrvr.example.corp: Invalid credentials
I commented out the rootbinddn line in ldap.conf, and these errors went away; getent passwd immediately began working again. The order of the output also changed: ldap entries are now listed before files entries.
Still, though, my Windows7 client will not login to the domain and I continue to get the same Samba error message
User test in passdb, but getpwnam() fails!
In my smb.conf, I tried removing the ldapsam:trusted = yes line, and when I do, I get domain authentication errors.
I'm not using SSL/TLS with OpenLDAP, and I have the SSL = no setting. I also have the ldap.secret file set. I'm running slapd under the root account. My rootbinddn, before commenting it out, referenced an LDAP root user of uid=root,ou=Users,dc=example,dc=corp. root's userPassword (using CRYPT) matches the bindpw as well as the one in /etc/shadow.
Looking at LDAP log activity for when I get the Samba error, it appears as if LDAP is returning the correct result against a Samba query:
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SRCH base="dc=example,dc=corp" scope=2 deref=0 filter="(&(uid=test)(objectClass=sambaSamAccount))"
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SRCH attr=uid uidNumber gidNumber homeDirectory sambaPwdLastSet sambaPwdCanChange sambaPwdMustChange sambaLogonTime sambaLogoffTime sambaKickoffTime cn sn displayName sambaHomeDrive sambaHomePath sambaLogonScript sambaProfilePath description sambaUserWorkstations sambaSID sambaPrimaryGroupSID sambaLMPassword sambaNTPassword sambaDomainName objectClass sambaAcctFlags sambaMungedDial sambaBadPasswordCount sambaBadPasswordTime sambaPasswordHistory modifyTimestamp sambaLogonHours modifyTimestamp uidNumber gidNumber homeDirectory loginShell gecos
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SEARCH RESULT tag=101 err=0 nentries=1 text=
Any other suggestions?
Much appreciated
Sounds like a problem with /etc/nsswitch.conf. Specifically, the passwd and group lines should refer to ldap before compat or files. Have you looked at this Samba wiki entry?
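For example, the relevant lines might look like this (a sketch; adjust to your distribution's defaults):
passwd: ldap files
group: ldap files
shadow: ldap files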
SOLVED!!!!!!!!!!!
I have a script that starts Samba (nmbd, smbd) as well as OpenLDAP (slapd). It's an RC script that reads configuration data from a file to determine, among other things, which processes are already running, whether a dependent process failed to start, and so on. Here is a snippet of the relevant part of the script. The last line copies a version of nsswitch.conf into place that specifies using LDAP lookups.
while [ $i -lt $MAXPROCS ];
do
    PID=${PROC[$i]}
    StartProc $PID
    if test $? != 0; then
        echo "!!! Aborting Any Remaining Start-up Processes !!!"
        exit 1
    fi
    i=$(($i+1))
done
cp /etc/rc.d/pozix/nsswitch.conf.ldap /etc/nsswitch.conf
And upon shutdown I was doing the following; notice I copy an nsswitch.conf file that has "noldap" entries in it.
while [ $i -lt $MAXPROCS ];
do
    PID=${PROC[$i]}
    StopProc $PID
    i=$(($i+1))
done
cp /etc/rc.d/pozix/nsswitch.conf.noldap /etc/nsswitch.conf
It turns out that in the start-up scenario, Samba wants the nsswitch.conf content to have the ldap entries there prior to invocation. Here is what I did to fix my issues:
cp /etc/rc.d/pozix/nsswitch.conf.ldap /etc/nsswitch.conf
while [ $i -lt $MAXPROCS ];
do
    PID=${PROC[$i]}
    StartProc $PID
    if test $? != 0; then
        cp /etc/rc.d/pozix/nsswitch.conf.noldap /etc/nsswitch.conf
        echo "!!! Aborting Any Remaining Start-up Processes !!!"
        exit 1
    fi
    i=$(($i+1))
done
In summary, it appears that how you start smbd is just as important as when you start it. If you start smbd when nsswitch.conf has no ldap entries, the running smbd (linked against nss_ldap.so) thinks it should rely only upon /etc/passwd (if that is all that is in the nsswitch.conf file), and changing the nsswitch.conf contents after smbd is running has no effect.
Hope this helps other system builders....

How can I view live MySQL queries?

How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | OFF                        |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any db), then grep or otherwise examine /var/run/mysqld/mysqld.log.
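For example, to follow new entries as they arrive (the path is whatever general_log_file reports on your system):
tail -f /var/run/mysqld/mysqld.log
# or filter for a statement type:
tail -f /var/run/mysqld/mysqld.log | grep --line-buffered -i select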
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably considerably slow down everything on the server though, with adding an extra INSERT on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file removes a lot of flexibility in how it can be displayed, especially in real time. If you just want a simple, easy-to-implement way to see what's going on though, enabling the GQL and then running tail -f on the logfile will do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u The MySQL user you want to execute the command as
-p Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i The interval in seconds
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
There is a possible downside: fast queries might not show up if they run between the intervals that you set up. For example, with the interval set at one second, a query that takes 0.02 seconds to run and executes between two samples won't be seen.
Use this option preferably when you quickly want to check on running queries without having to set up a listener or anything else.
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overhead. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just like any SQL query.
e.g. Show all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
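If you want this query to refresh continuously, you can wrap it in watch from the shell (a sketch; it assumes credentials in ~/.my.cnf, since watch cannot prompt for a password, and trims the column list for readability):
watch -n 1 "mysql -e \"SELECT ID, USER, TIME, INFO FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep'\""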
strace
The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you may format strace's output by piping it (just add | between these two one-liners) into the following command:
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching configuration files.
Obviously this won't replace the standard way of enabling logs, which is described below (and involves reloading the SQL server).
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when the execution enters mysql_parse */
{
    printf("Query: %s\n", copyinstr(arg1));
}
Save the above script to a file (such as watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are steps useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.
then restart your MySQL/MariaDB by (prefix with sudo if necessary):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
When finished, change general_log to 0 (so you can use it again in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.
I'm in a particular situation where I do not have permission to turn logging on, and wouldn't have permission to see the logs if they were turned on. I could not add a trigger, but I did have permission to call show processlist. So, I gave it my best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
while [ 1 -le 1 ]
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope this helps someone.
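If you do want to throttle it, a short sleep inside the loop, after the mysql invocation, keeps the sampling rate and CPU cost down, for example:
sleep 0.5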
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup I have come across on a Linux Ubuntu machine. Crazy to see all the queries live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”:
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
From http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySQL Workbench to open a command line and enter the password.
Run the following to turn on general logging, which will record all queries run:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location:
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;

namespace LiveLogs.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Console sizing can cause exceptions if you are using a
            // small monitor. Change as required.
            Console.SetWindowSize(152, 58);
            Console.BufferHeight = 1500;

            string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
            Console.Title = string.Format("Live Logs {0}", filePath);

            var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);

            // Move to the end of the stream so we do not read in existing
            // log text, only watch for new text.
            fileStream.Position = fileStream.Length;

            StreamReader streamReader;

            // Commented lines are for duplicating the log output as it's written to
            // allow verification via a diff that the contents are the same and all
            // is being output.
            // var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
            // var sw = new StreamWriter(fsWrite);

            int rowNum = 0;

            while (true)
            {
                streamReader = new StreamReader(fileStream);
                string line;
                string rowStr;
                while (streamReader.Peek() != -1)
                {
                    rowNum++;
                    line = streamReader.ReadLine();
                    rowStr = rowNum.ToString();
                    string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
                    Console.WriteLine(output);
                    // sw.WriteLine(output);
                }
                // sw.Flush();
                Thread.Sleep(500);
            }
        }
    }
}
In addition to previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
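You can check the current value before changing it (standard MySQL syntax, here run through the command-line client):
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_output'"
mysql -u root -p -e "SET GLOBAL log_output = 'FILE'"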
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool) which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed (for either the client or the app).
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, there is a good open-source tool: Percona Monitoring and Management.
But it is a server-based system, and it is not entirely trivial to launch.
It also has a live demo system for testing.