# default: on
# description: mysqlchk
service mysqlchk
{
# this is a config for xinetd, place it in /etc/xinetd.d/
disable = no
flags = REUSE
socket_type = stream
type = UNLISTED
port = 9200
wait = no
user = root
server = /usr/bin/mysqlclustercheck
log_on_failure += USERID
only_from = 0.0.0.0/0
#
# Passing arguments to clustercheck
# <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>
# Recommended: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.local
# Compatibility: server_args = user pass 1 /var/log/log-file 1 /etc/my.cnf.local
# 55-to-56 upgrade: server_args = user pass 1 /var/log/log-file 0 /etc/my.cnf.extra
#
# it is recommended to list here only the IPs that need
# to connect (for security purposes)
per_source = UNLIMITED
}
It is strange that the script works fine when run manually, yet when it runs via /etc/xinetd.d/ it does not work as expected.
In the mysqlclustercheck script, instead of the --user= and --password= syntax, I am using the --login-path= syntax.
The script runs fine when I run it from the command line, but the xinetd status was showing signal 13. After debugging, I found that even a simple command like this is not working:
mysql_config_editor print --all >>/tmp/test.txt
We don't see any output generated when it is run via xinetd (mysqlclustercheck).
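For reference, a sketch of how one might reproduce this from a shell: xinetd runs the server with an almost empty environment, and --login-path credentials are read from ~/.mylogin.cnf, which is located via $HOME, so a stripped environment is a plausible culprit (env - and nc are standard tools; the port matches the config above):
# Run the check with an empty environment, roughly what xinetd provides;
# --login-path credentials come from ~/.mylogin.cnf, which needs $HOME set.
env - /usr/bin/mysqlclustercheck

# Confirm the xinetd service itself answers on the configured port 9200
# (requires nc/netcat).
echo | nc 127.0.0.1 9200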
Have you tried the following instead of /usr/bin/mysqlclustercheck?
server = /usr/bin/clustercheck
I am wondering if you could test your binary location with the Linux which command.
It has been a long time since this question was asked, but it has just come to my attention.
First of all as mentioned, Percona Cluster Control script is called clustercheck, so make sure you are using the correct name and correct path.
Secondly, since the server script runs fine from command line, it seems to me that the path of mysql client command is not known by the xinetd when it runs the Cluster Control script.
Since the mysqlclustercheck script, as offered by Percona, uses only the binary name mysql without specifying the absolute path, I suggest you do the following:
Find where mysql client command is located on your system:
ccloud@gal1:~> sudo -i
gal1:~ # which mysql
/usr/local/mysql/bin/mysql
gal1:~ #
then edit the script /usr/bin/mysqlclustercheck, and in the following line:
MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
place the exact path of the mysql client command you found in the previous step.
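For example, with the location found above, the edited line would read:
MYSQL_CMDLINE="/usr/local/mysql/bin/mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \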
I also see that you are not using MySQL connection credentials to connect to the MySQL server. The mysqlclustercheck script, as offered by Percona, uses a user/password pair to connect to the MySQL server.
So normally, you should execute the script from the command line like this:
gal1:~ # /usr/sbin/clustercheck haproxy haproxyMySQLpass
HTTP/1.1 200 OK
Content-Type: text/plain
Where haproxy/haproxyMySQLpass is the MySQL connection user/pass for the HAProxy monitoring user.
Additionally, you should specify them in your script's xinetd settings, like:
server = /usr/bin/mysqlclustercheck
server_args = haproxy haproxyMySQLpass
Last but not least, the signal 13 you are getting arises because the script, when run by xinetd, tries to write to a pipe whose reading end has already been closed. If, for example, in your mysqlclustercheck you add a statement like
echo "debug message"
you are probably going to see the broken pipe signal (13, SIGPIPE, in POSIX).
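A quick shell illustration of the same mechanism: when the reading end of a pipe exits early, the writer is killed by SIGPIPE and the shell reports exit status 128 + 13:
# 'head' exits after one line, so 'yes' gets SIGPIPE on its next write
yes | head -n 1
echo "${PIPESTATUS[0]}"   # prints 141, i.e. 128 + 13 (SIGPIPE), in bash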
Finally, I had issues with this script on SLES 12.3, and I finally managed to run it not as 'nobody' but as 'root'.
Hope it helps
I am using Orabbix to monitor my DB. The data from the queries executed on this DB using Orabbix are sent to the Zabbix server. However, I am not able to see the data reaching Zabbix.
On my zabbix web console, I see this message on the triggers added - "Trigger expression updated. No status update so far."
Any ideas?
My update interval for the trigger is set to 30 sec.
Based on the screenshots you posted, your host is named "wfc1dev1" and you have items with keys "WFC_WFS_SYS_001" and "WFC_WFS_SYS_002". However, based on the Orabbix XML that it sends to Zabbix, the hostname and item keys are different. Here is the XML:
<req><host>V0ZDMURFVg==</host><key>V0ZDX0xFQUZfU1lTXzAwMg==</key><data>MA==</data></req>
From this, we can deduce the host:
$ echo V0ZDMURFVg== | base64 -d
WFC1DEV
The key:
$ echo V0ZDX0xFQUZfU1lTXzAwMg== | base64 -d
WFC_LEAF_SYS_002
The data:
$ echo MA== | base64 -d
0
It can be seen that neither the host name nor the item key match those configured on the Zabbix server. Once you fix that, it should work.
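If you want to double-check the server side once the names match, here is a hedged sketch using zabbix_sender (zabbix.example.com is a hypothetical server name, and the item must be a trapper-type item for this test to apply):
# Push one test value for the decoded host/key pair; "processed: 1"
# in the output means the server accepted it.
zabbix_sender -z zabbix.example.com -s "WFC1DEV" -k WFC_LEAF_SYS_002 -o 0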
I have two instances of Zabbix running on two different RHEL servers. Data presented in the web user interface are off by 2 hours on one server and by 3 hours on the other (ahead). For example, instead of 15:00 it says 17:00. The timezone in /etc/php.ini is set properly to America/Vancouver (the same as the settings on my desktop). I use MySQL, and the database settings are OK as well: a test call of 'select now() from dual' returns the correct time.
Edit file: vim /etc/apache2/conf-available/zabbix.conf
Update line: php_value date.timezone America/Sao_Paulo
Edit file: vim /etc/php5/apache2/php.ini
Update line:
[Date]
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = America/Sao_Paulo
Now restart Apache: /etc/init.d/apache2 restart
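To confirm PHP actually picked up the new value (bearing in mind that the CLI can read a different php.ini than Apache does), something like this should print the configured zone:
php -r 'echo date_default_timezone_get(), PHP_EOL;'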
What does typing
date
return?
Do you have your system's ZONE parameter correctly configured?
My Zabbix installations have both /etc/php.ini and the system's ZONE consistent.
vi /etc/sysconfig/clock
and guarantee it's set to:
ZONE="America/Vancouver"
For the host time to reflect the changed timezone, link the intended zoneinfo file to /etc/localtime:
ln -sf /usr/share/zoneinfo/America/Vancouver /etc/localtime
All problems with time in Zabbix are caused by a problem with the client browser, the Zabbix server (web or server process), or a combination of the two. The time of the database server, however, does not matter.
Check to see what the host thinks the timezone is.
This link:
How do I find the current system timezone? has more detailed information on how to determine the timezone, but the gist on your RHEL box will be:
ls -l /etc/localtime
If that is a symlink, job done: you can now see your timezone.
If not, then use the following to determine the timezone:
[root@admin ~]# md5sum /etc/localtime
685e6cae6f7d63e690bf35b955ff4afb  /etc/localtime
[root@admin ~]# find /usr/share/zoneinfo -type f | xargs md5sum | grep 685e6cae6f7d63e690bf35b955ff4afb
685e6cae6f7d63e690bf35b955ff4afb /usr/share/zoneinfo/posix/America/Los_Angeles
685e6cae6f7d63e690bf35b955ff4afb /usr/share/zoneinfo/posix/US/Pacific
685e6cae6f7d63e690bf35b955ff4afb /usr/share/zoneinfo/America/Los_Angeles
685e6cae6f7d63e690bf35b955ff4afb /usr/share/zoneinfo/US/Pacific
PHP (the web frontend) then puts its own timezone information on top of everything else.
Attempting to set up Samba + OpenLDAP using nss_ldap.
After joining Windows 7 to the Samba standalone PDC, I cannot log in with a domain account unless that account is also added to the /etc/passwd file.
I get: user in passdb, but getpwnam() fails!
Everything I've read points to an nss_ldap issue, yet getent passwd shows users perfectly fine, and I am able to ssh into the same Linux host using a user account that is only in the LDAP database.
Additionally, if I crack open the /etc/passwd file and add a line for the user in question, I can then login.
I'm not using PAM. I added the two Windows 7 registry updates required per the Samba.org site.
Software stack is as follows:
Samba 3.5.3
OpenLDAP 2.4.21
nss_ldap 264
Thoughts/suggestions?
--------------------------------- UPDATE ---------------------------------
Getting closer! My nsswitch.conf did have files ldap, so I reversed the order (now ldap files) and something odd happened. Notice that before, I said I could log in with SSH and getent passwd dumped users from both LDAP and files. After making the nsswitch.conf change (ldap before files), simple commands like ls took a long time. Additionally, I observed nss_ldap errors as follows:
ls: nss_ldap: could not search LDAP server - Server is unavailable
and
ls: nss_ldap: failed to bind to LDAP server ldap://tsrvr.example.corp: Invalid credentials
I commented out the rootbinddn line in ldap.conf, these errors went away, and getent passwd immediately began working again. The order of the output changed also: LDAP entries now list before files entries.
Still, though, my Windows 7 client will not log in to the domain, and I continue to get the same Samba error message:
User test in passdb, but getpwnam() fails!
In my smb.conf, I tried removing the ldapsam:trusted = yes line, and when I do, I get domain authentication errors.
I'm not using SSL/TLS with OpenLDAP, and I have the SSL = no setting. I also have the ldap.secret file set up. I'm running slapd under the root account. My rootbinddn, before commenting it out, referenced an LDAP root user of uid=root,ou=Users,dc=example,dc=corp. root's userPassword (using CRYPT) matches the bindpw as well as the one in /etc/shadow.
Looking at LDAP log activity for when I get the Samba error, it appears as if LDAP is returning the correct result against a Samba query:
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SRCH base="dc=example,dc=corp" scope=2 deref=0 filter="(&(uid=test)(objectClass=sambaSamAccount))"
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SRCH attr=uid uidNumber gidNumber homeDirectory sambaPwdLastSet sambaPwdCanChange sambaPwdMustChange sambaLogonTime sambaLogoffTime sambaKickoffTime cn sn displayName sambaHomeDrive sambaHomePath sambaLogonScript sambaProfilePath description sambaUserWorkstations sambaSID sambaPrimaryGroupSID sambaLMPassword sambaNTPassword sambaDomainName objectClass sambaAcctFlags sambaMungedDial sambaBadPasswordCount sambaBadPasswordTime sambaPasswordHistory modifyTimestamp sambaLogonHours modifyTimestamp uidNumber gidNumber homeDirectory loginShell gecos
Jun 19 14:20:14 tsrvr slapd[3803]: conn=1025 op=15 SEARCH RESULT tag=101 err=0 nentries=1 text=
Any other suggestions?
Much appreciated
Sounds like a problem with /etc/nsswitch.conf. Specifically, the passwd and group lines should refer to ldap before compat or files. Have you looked at this Samba wiki entry?
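For reference, a minimal sketch of the relevant nsswitch.conf lines (whether the second source is files or compat depends on the distribution):
# /etc/nsswitch.conf — list ldap ahead of the local files
passwd: ldap files
group:  ldap files
shadow: ldap files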
SOLVED!!!!!!!!!!!
I have a script that was starting Samba (nmbd, smbd) as well as OpenLDAP (slapd). It's an RC script that reads configuration data from a file to determine, among other things, which processes are already running, whether a dependent process failed to start, and so on. Here is a snippet of the relevant part of the script. The last line copies a version of nsswitch.conf into place that specifies the use of LDAP lookups.
while [ $i -lt $MAXPROCS ];
do
PID=${PROC[$i]}
StartProc $PID
if test $? != 0; then
echo "!!! Aborting Any Remaining Start-up Processes !!!"
exit 1
fi
i=$(($i+1))
done
cp /etc/rc.d/pozix/nsswitch.conf.ldap /etc/nsswitch.conf
And upon shutdown I was doing the following; notice I copy an nsswitch.conf file that has "noldap" entries in it.
while [ $i -lt $MAXPROCS ];
do
PID=${PROC[$i]}
StopProc $PID
i=$(($i+1))
done
cp /etc/rc.d/pozix/nsswitch.conf.noldap /etc/nsswitch.conf
It turns out that in the start-up scenario, Samba wants the nsswitch.conf content to have the ldap entries in place prior to invocation. Here is what I did to fix my issues:
cp /etc/rc.d/pozix/nsswitch.conf.ldap /etc/nsswitch.conf
while [ $i -lt $MAXPROCS ];
do
PID=${PROC[$i]}
StartProc $PID
if test $? != 0; then
cp /etc/rc.d/pozix/nsswitch.conf.noldap /etc/nsswitch.conf
echo "!!! Aborting Any Remaining Start-up Processes !!!"
exit 1
fi
i=$(($i+1))
done
In summary, it appears that how you start smbd is just as important as when you start it. If you start smbd when nsswitch.conf has no LDAP entries, you get a running smbd (linked against nss_ldap.so) that thinks it should rely only upon /etc/passwd (if that is all that is in the nsswitch.conf file), and changing the nsswitch.conf contents after smbd is running has no effect.
Hope this helps other system builders....
How can I trace MySQL queries on my Linux server as they happen?
For example I'd love to set up some sort of listener, then request a web page and view all of the queries the engine executed, or just view all of the queries being run on a production server. How can I do this?
You can log every query to a log file really easily:
mysql> SHOW VARIABLES LIKE "general_log%";
+------------------+----------------------------+
| Variable_name | Value |
+------------------+----------------------------+
| general_log | OFF |
| general_log_file | /var/run/mysqld/mysqld.log |
+------------------+----------------------------+
mysql> SET GLOBAL general_log = 'ON';
Do your queries (on any DB), then grep or otherwise examine /var/run/mysqld/mysqld.log.
Then don't forget to
mysql> SET GLOBAL general_log = 'OFF';
or the performance will plummet and your disk will fill!
You can run the MySQL command SHOW FULL PROCESSLIST; to see what queries are being processed at any given time, but that probably won't achieve what you're hoping for.
The best method to get a history without having to modify every application using the server is probably through triggers. You could set up triggers so that every query run results in the query being inserted into some sort of history table, and then create a separate page to access this information.
Do be aware that this will probably slow everything on the server down considerably, though, since it adds an extra INSERT on top of every single query.
Edit: another alternative is the General Query Log, but having it written to a flat file removes a lot of flexibility in how it can be displayed, especially in real time. If you just want a simple, easy-to-implement way to see what's going on, though, enabling the GQL and then running tail -f on the logfile will do the trick.
Even though an answer has already been accepted, I would like to present what might even be the simplest option:
$ mysqladmin -u bob -p -i 1 processlist
This will print the current queries on your screen every second.
-u The MySQL user you want to execute the command as
-p Prompt for your password (so you don't have to save it in a file or have the command appear in your command history)
-i The interval in seconds
Use the --verbose flag to show the full process list, displaying the entire query for each process. (Thanks, nmat)
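Putting those flags together with the example user from above:
# full process list every second, including the complete query text
mysqladmin -u bob -p -i 1 --verbose processlist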
There is a possible downside: fast queries might not show up if they run between the intervals that you set. E.g., with the interval set at one second, a query that takes 0.02 seconds and executes between two polls won't be seen.
Use this option preferably when you quickly want to check on running queries without having to set up a listener or anything else.
Run this convenient SQL query to see running MySQL queries. It can be run from any environment you like, whenever you like, without any code changes or overheads. It may require some MySQL permissions configuration, but for me it just runs without any special setup.
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';
The only catch is that you often miss queries which execute very quickly, so it is most useful for longer-running queries or when the MySQL server has queries which are backing up - in my experience this is exactly the time when I want to view "live" queries.
You can also add conditions to make it more specific, just like any SQL query.
e.g. Shows all queries running for 5 seconds or more:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND TIME >= 5;
e.g. Show all running UPDATEs:
SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep' AND INFO LIKE '%UPDATE %';
For full details see: http://dev.mysql.com/doc/refman/5.1/en/processlist-table.html
strace
The quickest way to see live MySQL/MariaDB queries is to use debugger. On Linux you can use strace, for example:
sudo strace -e trace=read,write -s 2000 -fp $(pgrep -nf mysql) 2>&1
Since there are a lot of escaped characters, you may format strace's output by piping it (just add | between these two one-liners) into the following command:
grep --line-buffered -o '".\+[^"]"' | grep --line-buffered -o '[^"]*[^"]' | while read -r line; do printf "%b" $line; done | tr "\r\n" "\275\276" | tr -d "[:cntrl:]" | tr "\275\276" "\r\n"
So you should see fairly clean SQL queries in no time, without touching configuration files.
Obviously this won't replace the standard way of enabling logs, described below, which involves reloading the SQL server.
dtrace
Use MySQL probes to view the live MySQL queries without touching the server. Example script:
#!/usr/sbin/dtrace -q
pid$target::*mysql_parse*:entry /* This probe is fired when the execution enters mysql_parse */
{
printf("Query: %s\n", copyinstr(arg1));
}
Save the above script to a file (like watch.d), and run:
pfexec dtrace -s watch.d -p $(pgrep -x mysqld)
Learn more: Getting started with DTracing MySQL
Gibbs MySQL Spyglass
See this answer.
Logs
Here are the steps, useful for development purposes.
Add these lines into your ~/.my.cnf or global my.cnf:
[mysqld]
general_log=1
general_log_file=/tmp/mysqld.log
The paths /var/log/mysqld.log or /usr/local/var/log/mysqld.log may also work, depending on your file permissions.
then restart your MySQL/MariaDB (prefix with sudo if necessary):
killall -HUP mysqld
Then check your logs:
tail -f /tmp/mysqld.log
After you finish, change general_log to 0 (so you can use it in the future), then remove the file and restart the SQL server again: killall -HUP mysqld.
I'm in a particular situation where I do not have permissions to turn logging on, and wouldn't have permissions to see the logs if they were turned on. I could not add a trigger, but I did have permissions to call show processlist. So, I gave it a best effort and came up with this:
Create a bash script called "showsqlprocesslist":
#!/bin/bash
# Poll the server in a tight (unthrottled) loop and print the Info column
# of every non-idle connection.
while true
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL";
done
Execute the script:
./showsqlprocesslist > showsqlprocesslist.out &
Tail the output:
tail -f showsqlprocesslist.out
Bingo bango. Even though it's not throttled, it only took up 2-4% CPU on the boxes I ran it on. I hope maybe this helps someone.
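If the unthrottled loop is a concern, a small variation adds a pause between polls (0.5 seconds here, purely a suggested value):
#!/bin/bash
# Same idea, but sleep briefly between polls to reduce load.
while true
do
    mysql --port=**** --protocol=tcp --password=**** --user=**** --host=**** \
        -e "show processlist\G" | grep Info | grep -v processlist | grep -v "Info: NULL"
    sleep 0.5
done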
From a command line you could run:
watch --interval=[your-interval-in-seconds] "mysqladmin -u root -p[your-root-pw] processlist | grep [your-db-name]"
Replace the values [x] with your values.
Or even better:
mysqladmin -u root -p -i 1 processlist;
This is the easiest setup on a Linux Ubuntu machine I have come across. Crazy to see all the queries live.
Find and open your MySQL configuration file, usually /etc/mysql/my.cnf on Ubuntu. Look for the section that says “Logging and Replication”
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
log = /var/log/mysql/mysql.log
Just uncomment the “log” variable to turn on logging. Restart MySQL with this command:
sudo /etc/init.d/mysql restart
Now we’re ready to start monitoring the queries as they come in. Open up a new terminal and run this command to scroll the log file, adjusting the path if necessary.
tail -f /var/log/mysql/mysql.log
Now run your application. You’ll see the database queries start flying by in your terminal window. (make sure you have scrolling and history enabled on the terminal)
FROM http://www.howtogeek.com/howto/database/monitor-all-sql-queries-in-mysql/
Check out mtop.
I've been looking to do the same, and have cobbled together a solution from various posts, plus created a small console app to output the live query text as it's written to the log file. This was important in my case as I'm using Entity Framework with MySQL and I need to be able to inspect the generated SQL.
Steps to create the log file (some duplication of other posts, all here for simplicity):
Edit the file located at:
C:\Program Files (x86)\MySQL\MySQL Server 5.5\my.ini
Add "log=development.log" to the bottom of the file. (Note saving this file required me to run my text editor as an admin).
Use MySQL Workbench to open a command line and enter the password.
Run the following to turn on general logging, which will record all queries run:
SET GLOBAL general_log = 'ON';
To turn off:
SET GLOBAL general_log = 'OFF';
This will cause running queries to be written to a text file at the following location.
C:\ProgramData\MySQL\MySQL Server 5.5\data\development.log
Create / Run a console app that will output the log information in real time:
Source available to download here
Source:
using System;
using System.Configuration;
using System.IO;
using System.Threading;

namespace LiveLogs.ConsoleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Console sizing can cause exceptions if you are using a
            // small monitor. Change as required.
            Console.SetWindowSize(152, 58);
            Console.BufferHeight = 1500;

            string filePath = ConfigurationManager.AppSettings["MonitoredTextFilePath"];
            Console.Title = string.Format("Live Logs {0}", filePath);

            var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);

            // Move to the end of the stream so we do not read in existing
            // log text, only watch for new text.
            fileStream.Position = fileStream.Length;

            StreamReader streamReader;

            // Commented lines are for duplicating the log output as it's written, to
            // allow verification via a diff that the contents are the same and all
            // is being output.
            // var fsWrite = new FileStream(@"C:\DuplicateFile.txt", FileMode.Create);
            // var sw = new StreamWriter(fsWrite);

            int rowNum = 0;

            while (true)
            {
                streamReader = new StreamReader(fileStream);
                string line;
                string rowStr;

                while (streamReader.Peek() != -1)
                {
                    rowNum++;
                    line = streamReader.ReadLine();
                    rowStr = rowNum.ToString();
                    string output = String.Format("{0} {1}:\t{2}", rowStr.PadLeft(6, '0'), DateTime.Now.ToLongTimeString(), line);
                    Console.WriteLine(output);
                    // sw.WriteLine(output);
                }

                // sw.Flush();
                Thread.Sleep(500);
            }
        }
    }
}
In addition to previous answers describing how to enable general logging, I had to modify one additional variable in my vanilla MySQL 5.6 installation before any SQL was written to the log:
SET GLOBAL log_output = 'FILE';
The default setting was 'NONE'.
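You can check what is currently active with a quick query (FILE, TABLE, NONE, or a combination):
# shows where the general and slow logs are being written
mysql -e "SHOW VARIABLES LIKE 'log_output'"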
Gibbs MySQL Spyglass
AgilData recently launched the Gibbs MySQL Scalability Advisor (a free self-service tool), which allows users to capture a live stream of queries to be uploaded to Gibbs. Spyglass (which is open source) will watch interactions between your MySQL servers and client applications. No reconfiguration or restart of the MySQL database server is needed (either client or app).
GitHub: AgilData/gibbs-mysql-spyglass
Learn more: Packet Capturing MySQL with Rust
Install command:
curl -s https://raw.githubusercontent.com/AgilData/gibbs-mysql-spyglass/master/install.sh | bash
If you want monitoring and statistics, there is a good open-source tool: Percona Monitoring and Management.
It is a server-based system, however, and not entirely trivial to launch.
It also has a live demo system you can try.
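For what it's worth, the usual quick start for PMM is a Docker container; a minimal sketch, assuming the percona/pmm-server image and port mapping from the PMM documentation (verify both against the current docs):
# run the PMM server in Docker, then browse to https://localhost/
docker run -d -p 443:443 --name pmm-server percona/pmm-server:2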