MySQL log reader - mysql

So, I'm trying to analyse some of my program's MySQL queries. I have MySQL general query logging turned on and can view the log file in a text editor (e.g. Notepad++), but the program writes thousands of lines of queries a minute, so I could do with a slightly better program for reading the logs. Things that would be nice:
Better syntax highlighting.
Real-time updating.
Doesn't get too slow when looking at long files.
Handles random binary sequences in the log without breaking.
Any suggestions?
Edit: Windows 7-compatible programs only.

You can try using tail -f <file_path>. That will follow the log as it's appended to.
Additionally, you could give multitail a try. It supports syntax highlighting (through regex).
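For example (the log path is just an illustration; on Windows 7 these would need Cygwin or the native tail port mentioned further down):
# follow the general log as MySQL appends to it
tail -f /var/log/mysql/general.log
# multitail with colouring switched on (-c)
multitail -c /var/log/mysql/general.log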

pt-query-digest from the Percona Toolkit (the successor to Maatkit; Maatkit will not be developed any further, so switch to the Percona Toolkit). Don't use it as a 'live' inspector, though, just as a bulk tool.
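A minimal example of that bulk usage, assuming the slow log is at the path shown (for a general log you would add --type genlog):
# summarise the log into a report of the heaviest queries, grouped by fingerprint
pt-query-digest /var/log/mysql/mysql-slow.log > digest-report.txt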

Use the MySQL log tables, i.e. the general log and the slow query log.
Update your MySQL config file with:
general_log = 1
slow_query_log = 1
slow_launch_time = 2
log_output = TABLE
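With log_output = TABLE the entries go to the mysql.general_log and mysql.slow_log tables, so you can query them directly, for example:
mysql -e "SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20;"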
OR
You can use MySQL Administrator to view the logs (general log, slow query log, error log).
OR
You can also view the log file using TextPad, which can handle reading and writing files larger than a GB.

So far, from testing a bunch of programs, the best option I've found is BareTail, which has good real-time updating and handles large files reasonably well. It could do with better MySQL-specific syntax highlighting, but it's not bad.
Alternatively, it turns out there are options in Notepad++ (Preferences: MISC) to turn on real-time updating, but this doesn't work well unless the Notepad++ window has focus.
There's also a Windows implementation of tail.

Related

Apache: log storage into MySQL

Method 1: Pipe Log
Recently I read an article about how to save Apache logs in a MySQL database. Briefly, the idea is to pipe each log entry to MySQL:
# Format log as a MySQL query
LogFormat "INSERT INTO apache_logs \
set ip='%h',\
datetime='%{%Y-%m-%d %H:%M:%S}t',\
status='%>s',\
bytes_sent='%B',\
content_type='%{Content-Type}o',\
url_requested='%r',\
user_agent='%{User-Agent}i',\
referer='%{Referer}i';" \
mysql_custom_log
# execute queries
CustomLog "|/usr/bin/mysql -h 127.0.0.1 -u log_user -plog_pass apache_logs" mysql_custom_log
# save queries to log file
CustomLog logs/mysql_custom_log mysql_custom_log
Question
It seems that untreated user inputs (i.e. user_agent and referer) would be passed directly to MySQL.
Therefore, is this method vulnerable to SQL injection? If so, is it possible to harden it?
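For illustration only: because each formatted log line is piped straight into the mysql client, a header value containing a quote could, in principle, terminate the quoted string and smuggle in a second statement (hypothetical request):
# the single quote closes user_agent='...', the semicolon ends the INSERT,
# and "-- " comments out the rest of the log line
curl -A "x'; DROP TABLE apache_logs; -- " http://www.example.com/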
Method 2: Apache module
mod_log_sql is an Apache module that seems to do something similar, i.e. it "logs all requests to a database". According to the documentation, the module has several advantages:
power of data extraction with SQL-based log
more configurable and flexible than the standard module [mod_log_config]
links are kept alive in between queries to save speed and overhead
any failed INSERT commands are preserved to a local file
no more tasks like log rotation
no need to collate/interleave the many separate logfiles
However, despite all these advantages, mod_log_sql doesn't seem to be popular:
the documentation doesn't mention a single production-level user
few discussions through the web
several periods without a maintainer
Which sounds like a warning to me (although I might be wrong).
Questions
Any known reason why this module doesn't seem to be popular?
Is it vulnerable to SQL injection? If so, is it possible to harden it?
Which method should have better performance?
The Pipe Log method is better because it creates a stream between your log and your database, which directly improves insert and search performance. Another point in favour of the pipe log is the possibility of using a NoSQL back end optimised for searching or insertion via specific queries; one example is the ELK stack: Elasticsearch + Logstash (log parser + stream) and Kibana.
Recommended reading on that: https://www.guru99.com/elk-stack-tutorial.html
As for your question about SQL injection, it depends on how you communicate with your database, regardless of the type of database or the method used to store the logs. You need to secure it, for example by using parameterised queries (bind placeholders) rather than interpolating raw values.
As for the Apache module, the intention was to do much the same thing as a pipe log, but the most recent activity appears to be from 2006 and the documentation is not user friendly.

History of queries in MySQL

Is there any way to see the queries that are run against my MySQL database?
For example:
I have an application (OTRS) that allows me to generate reports according to the frames that I choose. I would like to know which queries the application makes against the database, because I will use them to integrate with other reporting software.
Is this possible?
Yes, you can enable logging in your MySQL server. There are several types of logs you can use, depending on what you want to log, ranging from errors only or slow queries up to logs that record everything done on your server.
See the full doc here
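For example, the general log can be switched on at runtime without restarting the server (requires a suitably privileged account; the file path is illustrative):
mysql -u root -p -e "SET GLOBAL general_log_file = '/var/log/mysql/general.log'; SET GLOBAL general_log = 'ON';"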
Although, as Nir says, MySQL can log all queries (you should be looking at the general log, or at the slow log configured with a threshold of 0 seconds), this will show all the queries being run; on a production system it may prove difficult to match what you are doing in your browser with specific entries in the log.
The reason I suggest the slow query log is that there are tools available which strip the parameters from the queries, allowing you to see which SQL statements are run most frequently.
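A sketch of that approach, assuming pt-query-digest is installed and the slow log is written to the path shown; the digest groups queries by fingerprint, i.e. with the literal parameters stripped out:
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0;"
pt-query-digest /var/log/mysql/mysql-slow.log | less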
If you have some proficiency in Perl, it should be straightforward to log the queries from within OTRS itself, since all queries are processed via an abstraction layer.
(Presumably you are aware that the schema is published.)

Detecting hanging processes in Perl/MySQL (FreeBSD)

I have a Perl script running on a FreeBSD/Apache system, which makes some simple queries to a MySQL database via DBI. The server is fairly active (150k pages a day) and every once in a while (as much as once a minute) something is causing a process to hang. I've suspected a file lock might be holding up a read, or maybe it's a SQL call, but I have not been able to figure out how to get information on the hanging process.
Per Practical mod_perl, it sounds like the way to identify the operation giving me the headache is either a system trace, a Perl trace, or the interactive debugger. I gather the system trace is ktrace on FreeBSD, but when I attach to one of the hanging processes in top, the only output after the process is killed is:
50904 perl5.8.9 PSIG SIGTERM SIG_DFL
That isn't very helpful to me. Can anyone suggest a more meaningful approach on this? I am not terribly advanced in Unix admin, so your patience if I sound stupid is greatly appreciated.... :o)
If I understood correctly, your Perl process is hanging while querying MySQL, which is itself still operational. The MySQL server has a built-in troubleshooting feature for this, the log_slow_queries option. Putting the following lines in your my.cnf enables it:
[mysqld]
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 10
After that, restart or reload the MySQL daemon. Let the server run for a while to collect the stats and analyse what's going on:
mysqldumpslow -s at /var/log/mysql/mysql-slow.log | less
On one server of mine, the top record (-s at orders by average query time, BTW) is:
Count: 286 Time=101.26s (28960s) Lock=14.74s (4214s) Rows=0.0 (0), iwatcher[iwatcher]@localhost
INSERT INTO `wp_posts` (`post_author`,`post_date`,`post_date_gmt`,`post_content`,`post_content_filtered`,`post_title`,`post_excerpt`,`post_status`,`post_type`,`comment_status`,`ping_status`,`post_password`,`post_name`,`to_ping`,`pinged`,`post_modified`,`post_modified_gmt`,`post_parent`,`menu_order`,`guid`) VALUES ('S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S','S')
FWIW, it is a WordPress with over 30K posts.
Ktracing only gives you system calls, signals, I/O and namei processing. And it generates a lot of data very quickly, so it might not be ideal for fishing out trouble spots.
If you can see the standard output of your script, put some strategically placed print statements in your code around the suspected trouble spots. Running the program should then show you where the hang occurs:
print "Before query X"
$dbh->do($statement)
print "After query X".
If you cannot see the standard output, either use e.g. the Sys::Syslog Perl module, or call FreeBSD's logger(1) program to write the debugging info to a log file. It is probably easiest to encapsulate that in a debug() function and use it instead of print statements.
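A quick way to get those markers without managing your own log file, assuming a default syslogd setup (on FreeBSD user-level messages normally land in /var/log/messages); the tag is arbitrary:
# send a tagged marker to syslog (the same idea works from Perl via Sys::Syslog or system())
logger -t myscript "before query X"
# watch the markers arrive
tail -f /var/log/messages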
Edit: If you don't want a lot of logging on disk, write the logging info to a socket (Sys::Syslog supports that with setlogsock()), and write another script to read from that socket and dump the debug text to a terminal, prefixed with the time the data was received. Once the program hangs, you can see what it was doing.

Can I restore a mysql database faster?

I'm quite new to MySQL and I was wondering: dumping a MySQL database takes only a few seconds, but loading it sometimes takes a few minutes! Is there a reverse of mysqldump that loads the database in a few seconds?
Some easy tuning here might help.
Also, there are techniques that can help in specific situations, such as using --disable-keys.
In addition, there is an older post on the subject. Be careful with the chosen answer there, though: as a comment correctly points out, the tool it recommends is dangerous, and it is now officially deprecated.
In MySQL, for storage engines that use file-based storage, you can back up and restore using the files themselves. See this relevant page:
http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
It's slower when you load it because it has to recreate the indexes, so the short answer is "no". However, you can improve it by using the --opt option when you dump. This adds some SQL to the dump file that does various things, such as disabling the keys until all the data is loaded, so the indexes are rebuilt all at once.
This offers a nice improvement.
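For reference, this is roughly what --opt (through its --disable-keys part) wraps around each table's data in the dump, so that non-unique MyISAM indexes are only rebuilt once after all rows are in (table name is just an example):
/*!40000 ALTER TABLE `my_table` DISABLE KEYS */;
-- ... the bulk INSERT statements for my_table ...
/*!40000 ALTER TABLE `my_table` ENABLE KEYS */;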
No: writes always take longer than reads, and with a relational database the indexes have to be rebuilt too. There are some things you can do to make it go faster, though (e.g. use extended inserts, defer index rebuilds).
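A sketch of the load-side tricks, with placeholder database and file names; wrapping the import like this defers the per-row constraint checking and the commit until the end:
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat mydb.sql
  echo "COMMIT;" ) | mysql mydb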
There is an interesting technique for restoring from MySQL dump files (created using mysqldump); it is explained in detail at
http://www.geeksww.com/tutorials/database_management_systems/mysql/tips_and_tricks/fast_parallel_restore_from_sql_dumps_mysqldump_for_mysql.php
It uses different user accounts to run the restores in parallel.

Logging mysql queries

I am about to begin developing a logging system, for future implementation in a current PHP application, to get load and usage statistics from a MySQL database.
The statistics will later be used to get info about database calls per second, query times, etc.
Of course, this will only be used while the app is in the testing stage, since it will most certainly cause a bit of additional load itself.
However, my biggest question mark right now is whether I should use MySQL to log the queries or go for a file-based system. I guess it would be a bit of a headache to create something that allows writes from multiple locations when using a file-based system to handle the logs?
How would you do it?
Use the general log, which will show client activity, including all the queries:
http://dev.mysql.com/doc/refman/5.1/en/query-log.html
If you need very detailed statistics on how long each query is taking, use the slow log with a long_query_time of 0 (or some other sufficiently short time):
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
Then use http://www.maatkit.org/ to analyze the logs as needed.
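Putting that together, a possible my.cnf fragment (paths illustrative; note that mk-query-digest has since been superseded by pt-query-digest):
[mysqld]
general_log = 1
general_log_file = /var/log/mysql/general.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 0
and then, once the logs have accumulated:
mk-query-digest /var/log/mysql/slow.log > query-report.txt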
MySQL already has logging built in; Chapter 5.2 of the manual describes the available logs. You'll probably be interested in the general query log (all queries), the binary log (queries that change data) and the slow query log (queries that take too long, or don't use indexes).
If you insist on using your own solution, you will want to write a database middle layer that all your DB calls go through, which can handle the timing aspects. As to where you write the results, if you're in development it doesn't matter too much, but the idea of using a second database isn't bad. You don't need an entirely separate DBMS, just a different instance of MySQL (on a different machine, or on the same machine using a different port). I'd go for a second MySQL instance rather than the filesystem: you'll get all the useful SQL functions like SUM and AVG for analysing your data.
If all you are interested in is longer-term, non-real time analysis, turn on MySQL's regular query logging. There are tons of tools for doing analysis on the query-logs (both regular and slow-query), giving you information about the run-times, average rows returned, etc. Seems to be what you are looking for.
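For example, the mysqldumpslow script that ships with MySQL reports count, run times and average rows per normalised query (log path is illustrative):
# top 10 entries sorted by total query time
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log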
If you are doing tests on MySQL, you should store the results in a different database such as Postgres; that way your logging operations won't add load to the server you are measuring.
I agree with macabail but would only add that you could couple this with a cron job and a simple script to extract and generate any statistics you might want.