How to run a cypher script file from Terminal with the cypher-shell neo4j command?

I have a cypher script file and I would like to run it directly.
All the answers I could find on SO use the command neo4j-shell, which in my version (Neo4j server 3.5.5) seems to be deprecated and replaced by the command cypher-shell.
Using the command sudo ./neo4j-community-3.5.5/bin/cypher-shell --help I got the following instructions.
usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD]
[--encryption {true,false}]
[--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS]
[--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher]
A command line shell where you can execute Cypher against an
instance of Neo4j. By default the shell is interactive but you can
use it for scripting by passing cypher directly on the command
line or by piping a file with cypher statements (requires Powershell
on Windows).
My file, which comes from the book "Graph Algorithms" and tries to create a graph from CSV files, is the following:
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id:row.id})
SET place.latitude = toFloat(row.latitude),
place.longitude = toFloat(row.latitude),
place.population = toInteger(row.population)
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination)
When I try to pass the file directly with the command:
sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher
first it asks for username and password, but after typing the correct password (a wrong password results in the error The client is unauthorized due to authentication failure.) I get the error:
Invalid input 'n': expected <init> (line 1, column 1 (offset: 0))
"neo_4.cypher"
^
When I try piping with the command:
sudo cat neo_4.cypher | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd'
no output is generated and no graph either.
How to run a cypher script file with the neo4j command cypher-shell?

Use cypher-shell -f yourscriptname. Check with --help for more description.
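For the setup in the question, that would look something like this (assuming the bundled cypher-shell supports the -f option, and with placeholder credentials):
sudo ./neo4j-community-3.5.5/bin/cypher-shell -u neo4j -p 'pwd' -f neo_4.cypher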

I think the key is here:
cypher-shell --help
... Stuff deleted
positional arguments:
cypher an optional string of cypher to execute and then exit
This means that the parameter is actual Cypher code, not a file name. Thus, this works:
GMc@linux-ihon:~> cypher-shell "match(n) return n;"
username: neo4j
password: ****
+-----------------------------+
| n |
+-----------------------------+
| (:Job {jobName: "Job01"}) |
| (:Job {jobName: "Job02"}) |
But this doesn't, because the text "neo_4.cypher" isn't a valid Cypher query:
cypher-shell neo_4.cypher
The help also says:
example of piping a file:
cat some-cypher.txt | cypher-shell
So:
cat neo_4.cypher | cypher-shell
should work. Possibly your problem is all of the sudos, specifically the cat ... | sudo cypher-shell. It is possible that sudo is shielding cypher-shell from the piped input (although it doesn't seem to do so on my system).
If you really need to use sudo to run cypher, try using the following:
sudo cypher-shell arguments_as_needed < neo_4.cypher
Oh, also, your script doesn't have a RETURN clause, so it probably won't display any data, but you should still see the summary reports of records loaded.
Perhaps try something simpler first such as a simple match ... return ... query in your script.
Oh, and don't forget to terminate the cypher query with a semi-colon!
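For example, a minimal sanity check that authentication and piping both work (credentials are placeholders):
echo 'MATCH (n) RETURN count(n);' | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u neo4j -p 'pwd'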

The problem is in the cypher file: each statement should end with a semicolon (;). I still need sudo to run the program.
Actually, the file taken from the book seems to contain other errors as well.
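For reference, here is a sketch of the script with terminating semicolons added and the two apparent typos fixed (the missing slash after the first base, and longitude being read from row.latitude), fed to cypher-shell through a heredoc; the credentials are placeholders:
cat <<'EOF' | sudo ./neo4j-community-3.5.5/bin/cypher-shell -u neo4j -p 'pwd'
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MERGE (place:Place {id: row.id})
SET place.latitude = toFloat(row.latitude),
    place.longitude = toFloat(row.longitude),
    place.population = toInteger(row.population);
WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base
WITH base + "transport-relationships.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
MATCH (origin:Place {id: row.src})
MATCH (destination:Place {id: row.dst})
MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination);
EOF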

Related

How to import/load/run mysql file using golang?

I’m trying to run/load an SQL file into a MySQL database using this Go statement, but it is not working:
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}", "<", file abs path )
But when I use the following command in the Windows command prompt, it works perfectly.
mysql -u {username} -p{db password} {db name} < {file abs path}
So what is the problem?
As others have answered, you can't use the < redirection operator because exec doesn't use the shell.
But you don't have to redirect input to read an SQL file. You can pass arguments to the MySQL client to use its source command.
exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}",
"-e", "source {file abs path}" )
The source command is a builtin of the MySQL client. See https://dev.mysql.com/doc/refman/5.7/en/mysql-commands.html
Go's exec.Command runs the first argument as a program with the rest of the arguments as parameters. The '<' is interpreted as a literal argument.
e.g. exec.Command("cat", "<", "abc") is the following command in bash: cat \< abc.
To do what you want you have got two options.
Run (ba)sh and the command as argument: exec.Command("bash", "-c", "mysql ... < full/path")
Pipe the content of the file in manually. See https://stackoverflow.com/a/36383984/8751302 for details.
The problem with the bash version is that it is not portable between different operating systems. It won't work on Windows.
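For the second, portable option, here is a minimal Go sketch that feeds the file to mysql through stdin (the path and credential placeholders are taken from the question):
package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    // Open the SQL file; the path is a placeholder.
    f, err := os.Open("{file abs path}")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Attach the file to mysql's stdin -- the in-process equivalent
    // of the shell redirection "mysql ... < file".
    cmd := exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}")
    cmd.Stdin = f

    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("mysql failed: %v\n%s", err, out)
    }
}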
Go's os.exec package does not use the shell and does not support redirection:
Unlike the "system" library call from C and other languages, the os/exec package intentionally does not invoke the system shell and does not expand any glob patterns or handle other expansions, pipelines, or redirections typically done by shells.
You can call the shell explicitly to pass arguments to it:
cmd := exec.Command("/bin/sh", "-c", yourBashCommand)
Depending on what you're doing, it may be helpful to write a short bash script and call it from Go.

neo4j clean install doesn't recognize 'using', 'load' at import

I just installed neo4j 2.1.3 (the current latest release) on a mac. I started up the server via bin/neo4j start and I can verify via localhost:7474 that neo4j is running. I then continued to the neo4j shell with the following code:
➜ bin neo4j-shell
Welcome to the Neo4j Shell! Enter 'help' for a list of commands
NOTE: Remote Neo4j graph database service 'shell' at port 1337
neo4j-sh (?)$ MATCH (n)
> OPTIONAL MATCH (n)-[r]-()
> DELETE n,r;
+--------------------------------------------+
| No data returned, and nothing was changed. |
+--------------------------------------------+
8946 ms
neo4j-sh (?)$ CREATE CONSTRAINT ON (w:Word) ASSERT w.value IS UNIQUE;
+--------------------------------------------+
| No data returned, and nothing was changed. |
+--------------------------------------------+
131 ms
neo4j-sh (?)$ USING PERIODIC COMMIT
Unknown command 'using'
neo4j-sh (?)$ LOAD CSV FROM
Unknown command 'load'
neo4j-sh (?)$ "file:/Users/code/Downloads/w2.txt" AS line FIELDTERMINATOR '\t'
Unknown command '"file:/users/code/downloads/w2.txt"'
neo4j-sh (?)$ MERGE (w1:Word { value: line[1] })
> MERGE (w2:Word { value: line[2] })
> CREATE (w1)-[:LINK { value : toInt(line[0])} ]->(w2);
SyntaxException: line not defined (line 1, column 25)
"MERGE (w1:Word { value: line[1] })"
I am trying to create a graph of bigrams, the csv file contains a list of words. I first delete the current data such that I can create a clean database from a .csv file. This works. I then create a constraint such that all the words are unique. So far, so good.
But then for the import it doesn't seem to recognise commands that are clearly defined in the docs.
What am I doing wrong?
---Edit 1---
I think there might be something wrong internally. Even when I copy code from the docs (http://docs.neo4j.org/chunked/stable/cypherdoc-importing-csv-files-with-cypher.html), it gives me errors.
Code
LOAD CSV WITH HEADERS FROM "http://docs.neo4j.org/chunked/2.1.3/csv/import/persons.csv" AS csvLine
CREATE (p:Person { id: toInt(csvLine.id), name: csvLine.name })
Error
➜ neo4j-v2.1.3 ./bin/neo4j-shell
Welcome to the Neo4j Shell! Enter 'help' for a list of commands
NOTE: Remote Neo4j graph database service 'shell' at port 1337
neo4j-sh (?)$ LOAD CSV WITH HEADERS FROM "http://docs.neo4j.org/chunked/2.1.3/csv/import/persons.csv" AS csvLine
Unknown command 'load'
neo4j-sh (?)$ CREATE (p:Person { id: toInt(csvLine.id), name: csvLine.name })
> ;
SyntaxException: Unknown function 'toInt' (line 1, column 24)
"CREATE (p:Person { id: toInt(csvLine.id), name: csvLine.name })"
---Edit 2---
The previous error prompted me to do a fresh install, which revealed something even more strange. This script now runs properly in neo4j-shell:
MATCH (n)
OPTIONAL MATCH (n)-[r]-()
DELETE n,r;
CREATE CONSTRAINT ON (w:Word) ASSERT w.value IS UNIQUE;
USING PERIODIC COMMIT
LOAD CSV FROM
"file:/Users/code/Downloads/w2.txt"
AS line
FIELDTERMINATOR '\t'
MERGE (w1:Word { value: toString(line[1]) })
MERGE (w2:Word { value: toString(line[2]) })
CREATE (w1)-[:LINK { value : toInt(line[0])} ]->(w2);
But when it has finished running, I find that the neo4j server has shut down/crashed. When I neo4j start or neo4j restart the server, I get the same errors again as before.
final edit
Turns out that $ neo4j starts up a neo4j version that I had previously deleted. A hint for others: always make sure you use $ bin/neo4j start and $ bin/neo4j-shell from the install folder. Don't assume $ neo4j start or $ neo4j-shell will always work. What went wrong here is that I started the wrong server version but the right shell version, and the two could not communicate nicely with each other.
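If you suspect the same mix-up, a quick way to see every neo4j binary your shell can resolve (and which one wins) is:
type -a neo4j
type -a neo4j-shell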
Perhaps it is kind of a version issue? I.e. do you have an older version of Neo4j installed?
Perhaps it messes up the paths? Try to clean out all your neo4j installations (search for filenames with neo4j on your disk) and try to start over?
Also check that there are no launch-daemon scripts for Neo4j anymore.
How do you determine that your server has crashed for the second script? Perhaps also check the logfiles in data/log/* and data/graph.db/messages.log
You should either begin your transaction (see e.g. the dump-script example here: http://docs.neo4j.org/chunked/stable/shell-commands.html#_example_dump_scripts),
or create an import .cql file with your statements and specify that file when running the shell.
For example, an import.cql file:
CREATE CONSTRAINT ON (n:Node) ASSERT n.id IS UNIQUE;
USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS FROM "file:///path-to-your-file" AS csv
CREATE (n:Node {id: toInt(csv.id)});
Then execute the import.cql statements with the shell:
// This assumes that the import.cql file is located in your Neo4j install dir
./bin/neo4j-shell -file import.cql
Here is a result I just executed on a running test DB:
absoluttly:graphdb ikwattro$ ./bin/neo4j-shell -file import.cql
+-------------------+
| No data returned. |
+-------------------+
Constraints added: 1
200 ms
+-------------------+
| No data returned. |
+-------------------+
Nodes created: 1
Properties set: 1
Labels added: 1
41 ms
Here are some useful links:
http://www.neo4j.org/graphgist?d788e117129c3730a042
http://jexp.de/blog/2014/06/using-load-csv-to-import-git-history-into-neo4j/

capture all proxytunnel output to a file

When I use an ssh command with proxytunnel I always get an output like
Via xxx.yyy.zzz.aaa:pppp -> bbb.ccc.ddd.eee:443 -> remotesvr:22
before I get the output of my command
I want to capture all output to a file including the "Via ..." stuff.
As an example, using the ls command I tried
ssh remotesvr ls 2>&1 > $TMPfile
but the Via xxx... line still gets to my terminal instead of $TMPfile.
How can I get all output captured to $TMPfile?
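A detail worth checking, assuming proxytunnel writes the Via line to stderr: shell redirections are applied left to right, so in ssh remotesvr ls 2>&1 > $TMPfile stderr is duplicated onto the terminal's stdout before stdout is moved to the file. Reversing the order should capture both streams:
ssh remotesvr ls > "$TMPfile" 2>&1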

How to solve %GTM-E-GDINVALID, Unrecognized Global Directory file format: mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006?

I am getting this error with GT.M:
%GTM-E-GDINVALID, Unrecognized Global Directory file format: /home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006
Here is what I did so far:
get the version http://sourceforge.net/projects/fis-gtm/
tar -xzf gtm_V55000_linux_i686_pro.tar.gz
chmod +x semstat2 mupip mumps lke gtmsecshr gtcm_shmclean gtcm_server gtcm_play gtcm_pkdisp gtcm_gnp_server geteuid ftok dse
Now we start like this in Bash:
mkdir example; cd example
...and invoke the mumps from the parent dir:
../mumps -r GDE
The output is this:
%GDE-I-GDUSEDEFS, Using defaults for Global Directory
/home/blah/gt.m/example/mumps.gld
Now we set the working dir to create the gld file.
GDE> change -s DEFAULT -f=/home/blah/gt.m/gt.m/example/
GDE> exit
The output from the command is this :
>%GDE-I-VERIFY, Verification OK
>%GDE-I-GDCREATE, Creating Global Directory file
> /home/blah/gt.m/example/mumps.gld
Now this creates a v6 version of gld, which mupip does not like:
strings mumps.gld | head -1
Which contains this string:
GTCGBDUNX006H
But mupip expects a 7 not a 6!
../mupip create
>%GTM-E-GDINVALID, Unrecognized Global Directory file format: >/home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX007, found: GTCGBDUNX006
If I just edit the file and replace the 6 with a 7, then run
../mupip create
it works!
Now I have a dat file, and go to GTM to save something:
GTM>s ^foo("blah")=1
%GTM-E-GDINVALID, Unrecognized Global Directory file format: >/home/blah/gt.m/example/mumps.gld, expected label: GTCGBDUNX006, found: GTCGBDUNX007
Oh, so that wants a v6. Good thing I backed up the old one; I replace it.
GTM>s ^foo("blah")=1
that works
GTM>zwr ^foo(*)
>^foo("blah")=1
So the data is stored.
Can anyone please explain this? In detail? Why does mupip operate with a different version number?
Note, I did not run any other commands; I am just learning and don't want to execute any huge install routines as root that I don't understand.
In your steps you don't show whether you installed GT.M or not.
What you have is only the unzipped version; first:
chmod 777 configure
./configure
The installation will produce new files in the gtm_dist directory.
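After running configure, the shell environment also has to point at that directory before invoking mumps; a minimal sketch (the install path below is hypothetical, adjust to wherever configure placed gtm_dist):
export gtm_dist=/usr/lib/fis-gtm/V5.5-000_x86   # hypothetical install path
export gtmgbldir=/home/blah/gt.m/example/mumps.gld
$gtm_dist/mumps -r GDE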
You either have GT.M already installed (and I would guess it is an older version) on your system somewhere else and have some environment variable defined for it in your bash/tcsh/*sh environment, or you didn't provide all the steps you did to get to that error.
My guess is that you already have GT.M installed somewhere and your above commands use part of that installation. You can easily verify this with: env | grep gtm.
If I follow your steps mentioned above, I get this result:
laurent@laurent /tmp/test $ tar -zxf ~/Projects/gtm_V55000_linux_i686_pro.tar.gz
laurent@laurent /tmp/test $ chmod +x semstat2 mupip mumps lke gtmsecshr gtcm_shmclean gtcm_server gtcm_play gtcm_pkdisp gtcm_gnp_server geteuid ftok dse
laurent@laurent /tmp/test $ mkdir example; cd example
laurent@laurent /tmp/test/example $ ../mumps -r GDE
%GTM-E-GTMDISTUNDEF, Environment variable $gtm_dist is not defined
So, as I said, you either did something else or have a different GT.M version already installed, which is why some commands expect different GLD versions.
As Bhaskar has noted in your cross post on Hardhats, make sure you follow the installation instructions for GT.M. Instructions can be found in Chapter 2 of the UNIX Administration and Operations Guide.

How can I view live MySQL queries?

How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | OFF                        |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any db). Grep or otherwise examine /var/run/mysqld/mysqld.log
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
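For example, once the log is on you can follow it and filter for the statements you care about (the path is the general_log_file value reported above):
tail -f /var/run/mysqld/mysqld.log | grep -i --line-buffered 'select\|insert\|update\|delete'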
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably considerably slow down everything on the server though, with adding an extra INSERT on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file would remove a lot of possibilities for flexibility of displaying, especially in real time. If you just want a simple, easy-to-implement way to see what's going on though, enabling the GQL and then running tail -f on the logfile would do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u  The MySQL user you want to execute the command as
-p  Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i  The interval in seconds.
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
There is a possible downside: fast queries might not show up if they run between the intervals that you set. E.g. my interval is set at one second, and if a query takes .02 seconds to run and is run between intervals, you won't see it.
Use this option preferably when you quickly want to check on running queries without having to set up a listener or anything else.
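For example, combining the flags above with --verbose (user and interval as in the earlier example):
mysqladmin -u bob -p -i 1 --verbose processlist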
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overheads. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just like any SQL query.
e.g. Shows all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
strace
The quickest way to see live MySQL/MariaDB queries is to use a debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you may format strace's output by piping it (just add | between these two one-liners) into the following command:
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching configuration files.
Obviously this won't replace the standard way of enabling logs, which is described below (which involves reloading the SQL server).
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
/* This probe is fired when the execution enters mysql_parse */
pid$target::*mysql_parse*:entry
{
    printf("Query: %s\n", copyinstr(arg1));
}
Save the above script to a file (like watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are steps useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
Paths: /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work depending on your file permissions.
then restart your MySQL/MariaDB by (prefix with sudo if necessary):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
After finishing, change general_log back to 0 (so you can use it in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.
I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it a best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
while true
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope maybe this helps someone.
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup on a Linux Ubuntu machine I have come across. Crazy to see all the queries live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
From http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySQL Workbench to open a command line, and enter the password.
Run the following to turn on general logging, which will record all queries run:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location.
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;

namespace LiveLogs.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Console sizing can cause exceptions if you are using a
            // small monitor. Change as required.
            Console.SetWindowSize(152, 58);
            Console.BufferHeight = 1500;

            string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
            Console.Title = string.Format("Live Logs {0}", filePath);

            var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);

            // Move to the end of the stream so we do not read in existing
            // log text, only watch for new text.
            fileStream.Position = fileStream.Length;
            StreamReader streamReader;

            // Commented lines are for duplicating the log output as it's written to
            // allow verification via a diff that the contents are the same and all
            // is being output.
            // var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
            // var sw = new StreamWriter(fsWrite);

            int rowNum = 0;

            while (true)
            {
                streamReader = new StreamReader(fileStream);
                string line;
                string rowStr;

                while (streamReader.Peek() != -1)
                {
                    rowNum++;
                    line = streamReader.ReadLine();
                    rowStr = rowNum.ToString();
                    string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
                    Console.WriteLine(output);
                    // sw.WriteLine(output);
                }

                // sw.Flush();
                Thread.Sleep(500);
            }
        }
    }
}
In addition to previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySql 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
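You can check the current value before changing anything:
SHOW VARIABLES LIKE 'log_output';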
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool), which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL Servers and client applications. No reconfiguration or restart of the MySQL database server is needed (either client or app).
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, there is a good open-source tool, Percona Monitoring and Management.
But it is a server-based system, and it is not trivial to launch.
It also has a live demo system for testing.