I have my log file working, but I get extraneous info on each line like "29 Query", and I can't tell for sure, but the logged queries look like MySQL's internal interpretation of each query. Is there a way to automatically log each query as the application executed it, without any additional information added to the log by MySQL? Thanks!
EDIT:
As part of offering the bounty, let me explain my situation. We're using Magento Commerce, which has an EAV database architecture. Tracking anything down, and figuring out where it is stored, is an absolute nightmare. My thought was to insert a product through the application and then log every query executed during that process. This worked well, but the logs have a ton of other cruft around the queries. I really just want something like this:
1.) SELECT * FROM <TABLE>;
2.) UPDATE <TABLE> SET <VALUE> = <VALUE>;
3.) ...
4.) ...
Something simple that tells me what was executed so that I don't have to go sifting through controllers and models to try and get all this. I don't need dates, times, line numbers or anything extra.
To enable the full query log, add the following to your my.cnf:
log=/var/log/mysqldquery.log
The above will log all queries to that log file.
Don't forget to restart the mysql service after making changes to the my.cnf file.
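On MySQL 5.1 and later the same log can also be switched on at runtime, without editing my.cnf or restarting; a sketch, reusing the file path from above:
SET GLOBAL general_log_file = '/var/log/mysqldquery.log';
SET GLOBAL general_log = 'ON';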
Example output from actions via SequelPro (mac client):
090721 11:06:45 51 Query ALTER TABLE `test` ADD `name` varchar(10) DEFAULT NULL
51 Query SHOW COLUMNS FROM `test`
51 Query SHOW INDEX FROM `test`
090721 11:06:57 51 Query SHOW COLUMNS FROM `test`
51 Query UPDATE `test` SET `id`='1', `name`='test' WHERE `id` = '1' AND `name` IS NULL LIMIT 1
51 Query SELECT * FROM `test` LIMIT 0,100
51 Query SELECT COUNT(1) FROM `test`
090721 11:07:00 51 Query UPDATE `test` SET `id`='2', `name`='test' WHERE `id` = '2' AND `name` IS NULL LIMIT 1
51 Query SELECT * FROM `test` LIMIT 0,100
51 Query SELECT COUNT(1) FROM `test`
On *NIX-based systems you can use grep to start filtering:
grep 'SELECT\|INSERT\|UPDATE' querylog.log
Or get more tricky and start doing things like:
grep 'SELECT\|INSERT\|UPDATE' querylog.log | awk '{$1="";$2="";print}'
This would give you something like the following; not perfect, but closer:
51 Query UPDATE `test` SET `id`='2', `name`='test' WHERE `id` = '2' AND `name` IS NULL LIMIT 1
SELECT * FROM `test` LIMIT 0,100
SELECT COUNT(1) FROM `test`
51 Query INSERT INTO `test` (`id`,`name`) VALUES ('3','testing')
SELECT * FROM `test` LIMIT 0,100
SELECT COUNT(1) FROM `test`
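On MySQL 5.1 and later another option is to send the general log to a table instead of a file, which makes it easy to pull out just the statement text with no timestamps or connection ids. A sketch, assuming you can change global settings and read the mysql schema:
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- each logged statement's text is in the `argument` column
SELECT argument FROM mysql.general_log WHERE command_type = 'Query';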
You could use the MySQL query log file. Add this parameter when you start the MySQL server:
--log=/var/log/mysqld.log
If you are referring to the binary log, you need to run it through mysqlbinlog to get meaningful output:
mysqlbinlog log100.log
Is this any help to you?
http://www.bigdbahead.com/?p=99
There are two solutions there; one is easier but requires MySQL 5.1+.
I run one script in several threads; each thread takes a row from the database
SELECT * FROM `base` WHERE `used` = 0 LIMIT 1
and then updates that row:
UPDATE `base` SET `used` = 1 WHERE id ...
The problem is that parallel threads often pick up the same record, because the UPDATE does not run soon enough to prevent it.
What should I do?
Thank you, Gordon Linoff!
I should use LOCK TABLES `base` WRITE and then UNLOCK TABLES;
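For reference, a minimal sketch of that locking approach with the table and column names from the question; passing the selected id along through a user variable is my own assumption about how the row is handed back to the application:
LOCK TABLES `base` WRITE;
-- pick one unused row and remember its id
SELECT @id := id FROM `base` WHERE `used` = 0 LIMIT 1;
-- mark it as taken before any other thread can read the table again
UPDATE `base` SET `used` = 1 WHERE id = @id;
UNLOCK TABLES;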
I'm trying to enable the mysql.general_log option in the mysql.ini file, but my log file prints lines like the ones below:
1 Query SHOW CREATE TABLE `crm_user`
1 Query SELECT * FROM `crm_user` `t` WHERE g_user_id =1021271;
Can anyone explain how to avoid the "SHOW CREATE TABLE crm_user" line that accompanies each SELECT query in the log file?
I have a problem with MySQL; here are the details:
I created a new schema/database and executed (only) these queries:
create table mytable (
id varchar(50) not null,
name varchar(50) null default '',
primary key (id));
create view myview as
select id,name from mytable;
insert into mytable values ('1','aaa');
insert into mytable values ('2','bbb');
insert into mytable values ('3','ccc');
and then, if I run these queries:
select * from mytable;
select * from myview;
prepare cmd from 'select id,name from mytable where id=?';
set @param1 = '2';
execute cmd using @param1;
the queries give the correct results (3 rows, 3 rows, 1 row).
But the problem appears if I run this query:
prepare cmd from 'select id,name from myview where id=?';
set @param1 = '2';
execute cmd using @param1;
ERROR: #1615 - Prepared statement needs to be re-prepared
I've done some research and found that increasing the configuration values below "may" solve the problem:
increase table_open_cache_instances value
increase table_open_cache value
increase table_definition_cache value
As far as I know, the queries above are common, standard MySQL queries, so I don't think there is a problem with the syntax.
I'm on shared web hosting, and the MySQL version is 5.6.22.
What confuses me is that the database contains only 1 schema, with 1 table holding 3 short records and 1 view,
and I executed a common, standard MySQL SELECT query,
so is increasing the values above really needed?
Has anyone with the same problem increased those values and actually solved it?
Or do you perhaps have any other solution that you think may work?
PS: it does not happen just once or twice a day (which could be blamed on a backup or something similar), but all day (24 hours).
Thank you.
Do you do this after each execute?
deallocate prepare cmd;
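For reference, the full cycle with the statements from the question would then look like this (whether the missing DEALLOCATE is actually the cause of the error here is only a guess):
prepare cmd from 'select id,name from myview where id=?';
set @param1 = '2';
execute cmd using @param1;
-- release the prepared statement once it is no longer needed
deallocate prepare cmd;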
The closest guess so far is that some other users on this server don't write their code very well (it is shared web hosting): either they run a large ALTER while the large SELECT is running, or they don't deallocate their prepared statements after using them, as Rick James said. (I want to mark the post as useful, but I don't have the reputation; sorry, Rick.)
I cannot confirm whether increasing "table_definition_cache" works, because the system administrator still won't change the value, but in case you have the same problem and can modify it, it is worth a try.
My current workaround is to replace all the views in my query strings with subqueries; it works for me, but the underlying problem is still unresolved.
e.g. from
select myview.id, myview.name
from myview
inner join other_table on ...
where myview.id=?
into
select x.id, x.name
from (select id,name from mytable) x
inner join other_table on ...
where x.id=?
The query listed below runs fine on localhost, but it somehow hangs when executed remotely against my service provider's database (both the PHP script and the SQL query in the phpMyAdmin console hang), although each chunk returns the expected rows when run individually.
What's wrong? Any suggestions on how to make it shorter or simpler and thus help the remote MySQL?
SELECT * FROM `Table1`
WHERE `Tag1` IN (
SELECT DISTINCT `Tag1` FROM `Table2`
WHERE `Tag1` NOT IN (
SELECT `Tag1` FROM `Table2` WHERE `Tag2` = '$keyWord'
)
)
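One common way to express the same filter with a single derived table instead of the nested IN/NOT IN subqueries; this is untested against the real schema and assumes `Tag1` and `Tag2` contain no NULLs, so treat it as a sketch:
SELECT t1.*
FROM `Table1` t1
JOIN (
    -- keep only Tag1 values that never occur together with the keyword
    SELECT `Tag1`
    FROM `Table2`
    GROUP BY `Tag1`
    HAVING SUM(`Tag2` = '$keyWord') = 0
) t2 ON t2.`Tag1` = t1.`Tag1`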
I'm having a strange problem with a SELECT on an InnoDB table: it never returns. I waited more than two hours to see if I would get the results, but no, it is still running.
CREATE TABLE `example` (
`id` int(11) NOT NULL,
`done` tinyint(2) NOT NULL DEFAULT '0',
`agent` tinyint(4) NOT NULL DEFAULT '0',
`text` varchar(256) NOT NULL,
PRIMARY KEY (`id`),
KEY `da_idx` (`done`,`agent`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
The query for which I can't obtain results is:
SELECT id, text FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
At first I thought it was an index optimization or locking problem, and I spent some time researching that, but then I found this:
SELECT id FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
...
...
...
120 rows in set (0.27 sec)
SELECT text FROM example WHERE done = 0 AND agent = 0 LIMIT 120;
...
...
...
120 rows in set (0.83 sec)
Now I'm lost: how can selecting the id or text column separately, with exactly the same query (same WHERE and LIMIT), work perfectly, while selecting both of them does not?
Executing the "SELECT id, text..." query again after those two queries has the same effect: it never returns.
Any help is appreciated; an InnoDB guru could help ;)
Added information:
It doesn't look like a transaction lock problem; look at the dramatic increase in response times for the following queries:
SELECT id, text FROM example WHERE done = 0 AND agent = 0 limit 109;
...
109 rows in set (0.31 sec)
SELECT id, text FROM example WHERE done = 0 AND agent = 0 limit 110;
...
110 rows in set (3.98 sec)
SELECT id, text FROM example WHERE done = 0 AND agent = 0 limit 111;
...
111 rows in set (4 min 5.00 sec)
I found the solution to my own question and I want to share it here because it was quite strange, at least for me.
I always thought that the time the mysql client reports after each query (for example, "120 rows in set (0.27 sec)") was the time the MySQL server took to generate the result, but it is not.
The problem was a network problem, with no relation at all to the MySQL server!
So I found, contrary to what I always thought, that the time shown by the mysql client after each query includes the network delay. The same requests issued at the same time from a server in the same datacenter as the MySQL server and from a server in another country return considerably different times.