Is there a table/place that stores all the historical queries being run on MySQL?
I want to do an analysis of the historical queries in order to determine what INDEX to create in each table.
You can do that by creating slow_log or general_log tables.
MySQL Server provides a way to show the general query log and the slow query log, if those logs are enabled.
First, check whether the two tables slow_log and general_log already exist in the mysql database.
If they don't, you have to create them.
Make sure that you create them in the mysql database.
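A quick way to check is to list the log tables in the mysql system database:

```sql
-- shows general_log and slow_log if they already exist
SHOW TABLES FROM mysql LIKE '%_log';
```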
Create the general_log table:
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='General log';
The general query log is a general record of what mysqld is doing.
There you will find information such as:
when clients connect or disconnect
each SQL statement received from clients
For the slow_log table:
CREATE TABLE `slow_log` (
  `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `query_time` time NOT NULL,
  `lock_time` time NOT NULL,
  `rows_sent` int(11) NOT NULL,
  `rows_examined` int(11) NOT NULL,
  `db` varchar(512) NOT NULL,
  `last_insert_id` int(11) NOT NULL,
  `insert_id` int(11) NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `sql_text` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log';
The slow query log consists of SQL statements that take more than long_query_time seconds to execute and require at least min_examined_row_limit rows to be examined.
The slow query log can be used to find queries that take a long time to execute and are therefore candidates for optimization (indexation in your case).
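If you want slow queries captured to the table as well, settings along these lines should enable it (the 2-second threshold is just an example):

```sql
SET GLOBAL slow_query_log = 1;    -- enable the slow query log
SET GLOBAL long_query_time = 2;   -- log statements slower than 2 seconds
SET GLOBAL log_output = 'TABLE';  -- write to mysql.slow_log instead of a file
```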
Then you need to enable it (if you don't already have it enabled):
SET global general_log = 1;
SET global log_output = 'table';
Now you can view the log by running this query:
SELECT * FROM mysql.general_log;
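Since the goal is to decide which indexes to create, you can aggregate the captured statements to see which ones run most often; an illustrative query:

```sql
-- most frequently executed statements, as index candidates
SELECT argument, COUNT(*) AS times_run
FROM mysql.general_log
WHERE command_type = 'Query'
GROUP BY argument
ORDER BY times_run DESC
LIMIT 20;
```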
If you want to disable query logging on the database, run this query:
SET global general_log = 0;
Please note that keeping these logs enabled has caveats: they consume disk space and add performance overhead, so it is better to turn them ON only when needed rather than leaving them always ON.
Read more about these here:
https://dev.mysql.com/doc/refman/8.0/en/query-log.html
https://dev.mysql.com/doc/refman/8.0/en/slow-query-log.html
Related
I have a MySQL table with 2 million rows. When I run any select query on the table, it takes a long time to execute and ultimately returns no result.
I have tried running the select query from both MySQL Workbench and the terminal; the same issue happens in both.
Below is the table:
CREATE TABLE `object_master` (
  `key` varchar(300) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `bucket` varchar(255) DEFAULT NULL,
  `object_name` varchar(300) DEFAULT NULL,
  `object_type` varchar(50) DEFAULT NULL,
  `last_modified_date` datetime DEFAULT NULL,
  `last_accessed_date` datetime DEFAULT NULL,
  `is_deleted` tinyint(1) DEFAULT '0',
  `p_object` varchar(300) DEFAULT NULL,
  `record_insert_time` datetime DEFAULT CURRENT_TIMESTAMP,
  `record_update_time` datetime DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`key`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
And below is the select query I'm running:
select `key` from object_master;
Even with LIMIT 1 it takes a long time and times out without returning a result:
select `key` from object_master limit 1;
Could anyone tell me what the real reason could be here?
I would also like to mention: before I ran these select queries, an ALTER TABLE statement was executed on this table; it timed out after 10 minutes and the table remained unaltered.
Following is the alter statement:
alter table object_master
modify column object_name varchar(1096) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL;
Note: using MySQL version 5.7.24, with MySQL running in a Linux Docker container.
So I got this resolved:
A Java batch program had been executing a query on the same table for a long time and was holding a lock on it. I found this through the processlist table of information_schema.
Had to kill the long running query through terminal:
mysql> KILL <processlist_id>;
Then it released the lock on that table and everything was resolved.
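For reference, the lock holder can be spotted with a query along these lines before issuing the KILL:

```sql
-- long-running statements, longest first
SELECT id, user, host, db, time, state, info
FROM information_schema.processlist
WHERE command <> 'Sleep'
ORDER BY time DESC;
```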
Got help from below SO answers:
Unlocking tables if thread is lost
How do I find which transaction is causing a "Waiting for table metadata lock" state?
We have a table that stores data every minute per user, for about 1,000 users. The table has only 11 columns. As mentioned, a new record is created for each user every minute, so 1,440 records per user per day. The table is heavily indexed.
Once created, the data is read and processed via cron jobs every hour.
After 14 days the data is deleted. This is a rolling process.
Generic MySQL wisdom seems to be to use InnoDB for everything; however, we have had problems deleting large amounts of data using InnoDB. A MEMORY table is no good, as the data must survive a reboot.
Does anyone understand the other MySQL storage engines well enough to know whether a different engine would be better in this scenario?
Here is the table definition:
CREATE TABLE geoc1clo_where.map_data (
MapID bigint(20) NOT NULL AUTO_INCREMENT,
Date date DEFAULT NULL,
DeviceID varchar(128) DEFAULT NULL,
Alarm varchar(255) DEFAULT NULL,
FixTime datetime DEFAULT NULL,
Valid int(1) DEFAULT NULL,
Lat double DEFAULT NULL,
Lon double DEFAULT NULL,
Speed float DEFAULT NULL,
Course float DEFAULT NULL,
Address varchar(512) DEFAULT NULL,
PRIMARY KEY (MapID),
INDEX IDX_map_data (MapID, FixTime, DeviceID),
INDEX IDX_map_data_FixTime (FixTime),
INDEX IDX_map_data2 (DeviceID, FixTime),
INDEX IDX_map_data3 (DeviceID, Date)
)
ENGINE = MYISAM
AUTO_INCREMENT = 98169276
AVG_ROW_LENGTH = 69
CHARACTER SET latin1
COLLATE latin1_swedish_ci;
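One approach often suggested for a rolling 14-day window like this, instead of large DELETEs, is range partitioning by day, so that expiring a day becomes a cheap DROP PARTITION rather than row-by-row deletion. A sketch only (partition names are illustrative, and note that MySQL requires the partitioning column to be part of every unique key, so the primary key would have to be extended to include Date):

```sql
-- sketch: partition the table by day on the Date column
ALTER TABLE map_data
  PARTITION BY RANGE (TO_DAYS(Date)) (
    PARTITION p20240101 VALUES LESS THAN (TO_DAYS('2024-01-02')),
    PARTITION p20240102 VALUES LESS THAN (TO_DAYS('2024-01-03')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
  );

-- expiring a day is then a near-instant metadata operation:
ALTER TABLE map_data DROP PARTITION p20240101;
```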
I have two tables with exactly the same schema. I can insert into one table but not another. The one that fails complains about no default value. Here's my create statement for the table
CREATE TABLE `t_product` (
`product_id` varchar(10) NOT NULL,
`prod_name` varchar(150) DEFAULT NULL,
`price` decimal(6,2) NOT NULL,
`prod_date` date NOT NULL,
`prod_meta` varchar(250) DEFAULT NULL,
`prod_key` varchar(250) DEFAULT NULL,
`prod_desc` varchar(150) DEFAULT NULL,
`prod_code` varchar(12) DEFAULT NULL,
`prod_price` decimal(6,2) NOT NULL,
`prod_on_promo` tinyint(1) unsigned NOT NULL,
`prod_promo_sdate` date DEFAULT NULL,
`prod_promo_edate` date DEFAULT NULL,
`prod_promo_price` decimal(6,2) NOT NULL,
`prod_discountable` tinyint(1) unsigned NOT NULL,
`prod_on_hold` tinyint(1) unsigned NOT NULL,
`prod_note` varchar(150) DEFAULT NULL,
`prod_alter` varchar(150) DEFAULT NULL,
`prod_extdesc` text,
`prod_img` varchar(5) NOT NULL,
`prod_min_qty` smallint(6) unsigned NOT NULL,
`prod_recent` tinyint(1) unsigned NOT NULL,
`prod_name_url` varchar(150) NOT NULL,
`upc_code` varchar(50) DEFAULT NULL,
PRIMARY KEY (`product_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
When I run this statement in database1, it successfully inserts:
insert into t_product (product_id) values ('jlaihello');
When I run this exact statement in database2, I get the error:
ERROR 1364 (HY000): Field 'price' doesn't have a default value
Why is this error happening only in database2? As far as I can tell, the difference between database1 and database2 are:
database1 uses mysql Ver 14.14 Distrib 5.5.53, for debian-linux-gnu (i686) using readline 6.3
and
database2 uses mysql Ver 14.14 Distrib 5.7.16, for Linux (x86_64) using EditLine wrapper
How do I make database2 behave like database1?
EDIT
There are hundreds of tables affected by this. Basically, we're moving a database over to a new server: I did a mysqldump from db1 and imported it into db2. t_product is just ONE of the tables affected. I'd like to avoid manually modifying the schema for hundreds of tables; I'd prefer a "simple switch" that makes db2 behave like db1.
ERROR 1364 (HY000): Field 'price' doesn't have a default value
price decimal(6,2) NOT NULL,
Set price to NULL or assign a default value.
EDIT:
This is caused by the STRICT_TRANS_TABLES SQL mode.
Open phpMyAdmin, go to the More tab, and select the Variables submenu. Scroll down to find sql_mode, edit it to remove STRICT_TRANS_TABLES, and save.
OR
You can run an SQL query within your database management tool, such as phpMyAdmin:
-- verify the mode that is currently set:
SELECT @@GLOBAL.sql_mode;
-- update the mode:
SET @@GLOBAL.sql_mode = 'YOUR_VALUE';
OR
Find the line that looks like this in the MySQL conf file:
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
Remove STRICT_TRANS_TABLES from that line and restart the MySQL server. (Simply commenting the whole line out is not enough on MySQL 5.7, since the server's built-in default sql_mode also includes STRICT_TRANS_TABLES.)
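For example, the edited line might look like this (keeping any other modes that were already set):

```ini
[mysqld]
sql_mode=NO_ENGINE_SUBSTITUTION
```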
Most probably, the default for the column price is missing in the second database. To check this, output your table structure:
describe database2.t_product;
OR
show create table database2.t_product;
and check if the default is defined.
You can alter your table and add the missing default constraint like this:
ALTER TABLE database2.t_product MODIFY COLUMN `price` decimal(6,2) NOT NULL DEFAULT 0;
EDIT
Based on the comments and the specification (data type default values), I think there is a difference in the sql_mode of the two MySQL servers:
For data entry into a NOT NULL column that has no explicit DEFAULT
clause, if an INSERT or REPLACE statement includes no value for the
column, or an UPDATE statement sets the column to NULL, MySQL handles
the column according to the SQL mode in effect at the time:
If strict SQL mode is enabled, an error occurs for transactional
tables and the statement is rolled back. For nontransactional tables,
an error occurs, but if this happens for the second or subsequent row
of a multiple-row statement, the preceding rows will have been
inserted.
If strict mode is not enabled, MySQL sets the column to the implicit
default value for the column data type.
So, if strict mode is not enabled for the first database, the INSERT/UPDATE is allowed and the implicit default value of that type (a DECIMAL 0) is stored.
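The difference can be reproduced directly; with strict mode disabled for the session, NOT NULL columns without an explicit default fall back to their implicit type defaults (0 for DECIMAL, an empty string for VARCHAR):

```sql
SET SESSION sql_mode = '';  -- non-strict, like database1
INSERT INTO t_product (product_id) VALUES ('demo-id');
-- price should come back as 0.00, prod_name_url as ''
SELECT price, prod_name_url FROM t_product WHERE product_id = 'demo-id';
```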
I have this table in one server:
CREATE TABLE `mh` (
`M` char(13) NOT NULL DEFAULT '',
`F` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
`D` char(6) DEFAULT NULL,
`A` int(11) DEFAULT NULL,
`DC` char(13) DEFAULT NULL,
`S` char(22) DEFAULT NULL,
`S0` int(11) DEFAULT NULL,
PRIMARY KEY (`F`,`M`),
KEY `IDX_S` (`S`),
KEY `IDX_M` (`M`),
KEY `IDX_A` (`M`,`A`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1;
And the same table but using MyISAM engine in another similar server.
When I execute this query:
CREATE TEMPORARY TABLE temp
(S VARCHAR(22) PRIMARY KEY)
AS
(
SELECT S, COUNT(S) AS HowManyS
FROM mh
WHERE A = 1 AND S IS NOT NULL
GROUP BY S
);
The table has 120 million rows. The server using TokuDB executes the query in 3 hours; the server using MyISAM, in 22 minutes.
The query on TokuDB shows a "Queried about 38230000 rows, Fetched about 303929 rows, loading data still remains" status.
Why does the TokuDB query take so long? TokuDB is a really good engine, but I don't know what I'm doing wrong with this query.
Both servers run MariaDB 5.5.38.
TokuDB is not currently using its bulk-fetch algorithm on this statement, as noted in https://github.com/Tokutek/tokudb-engine/issues/143. I've added a link to this page so it is considered as part of the upcoming effort.
After a MySQL upgrade, I'm getting this error on my CentOS box when I try to enable general_log. Any idea?
SET GLOBAL general_log = 'ON';
ERROR 1146 (42S02): Table 'mysql.general_log' doesn't exist
I created the missing table and it worked for me.
Log in to the MySQL console:
use mysql;
CREATE TABLE general_log(
event_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
user_host mediumtext NOT NULL,
thread_id int(11) NOT NULL,
server_id int(10) unsigned NOT NULL,
command_type varchar(64) NOT NULL,
argument mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='General log'
When you find yourself in this situation, you have probably done a MySQL upgrade and incorrectly carried over the datadir (e.g. /usr/local/var/mysql) to the new installation.
So the accepted solution above will solve your immediate problem, but it also indicates that you might have other problems with your MySQL install as well.
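In that case, rather than hand-creating each missing system table, running mysql_upgrade (applicable up to MySQL 5.7; 8.0 performs this step automatically at startup) checks and repairs all the tables in the mysql schema:

```shell
# check and repair the mysql system schema after an upgrade
mysql_upgrade -u root -p
```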
Just an addition to Harikrishnan's answer!
I had to alter the field types to make it work for me, as MySQL could not write to the table otherwise:
If general_log is enabled, turn it off: SET GLOBAL general_log = 0;
Create table
USE mysql;
CREATE TABLE mysql.general_log(
event_time TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
user_host MEDIUMTEXT NOT NULL,
thread_id BIGINT(21) UNSIGNED NOT NULL,
server_id INT(10) UNSIGNED NOT NULL,
command_type VARCHAR(64) NOT NULL,
argument MEDIUMBLOB NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='General log';
Re-enable logging: SET GLOBAL general_log = 1;
View the log: SELECT * FROM mysql.general_log;